Introduction: (how I got the idea and how the dataset was developed)
“Naruto, an anime that teaches many profound things, like no matter how badly life puts you down, you've got to get back up and move forward,” as said by one of my stoned friends. He was high, but I couldn't refute him, not because I was stoned too, but because I agreed with him completely.
A little context about how I got the idea: I am currently a visiting researcher at UTS in Australia, working on research in the ML, XR, and quantum domains.
So, being the geek I am, I marveled at the possible implications of making a Naruto game in AR where you make the signs and a Jutsu follows. Think: when you do the Dog, Hare, Dragon, Boar, Tiger signs, a fire dragon comes into existence and crashes into the enemy... killing him if he doesn't counter-attack with a water or earth Jutsu. No dodging allowed lol.
It was then that the “Ohhhhhh myyyy goddddd!!!” moment hit me like a bull on steroids charging at a person covered in red paint.
I immediately got to work... sadly, it had never been done before, so I had to start with recognizing the hand signs and then work my way up to the AR game. I knew I couldn't do this alone, as I would need hundreds of images of each hand sign, which would be a huge time commitment and a lot of work. So, being the lazy... smart ass I am, I decided to enlist the social power of otakus. Back at VIT we had a group for anime lovers, literally called “VIT ANIME LOVERS”. I posted there about what I wanted to do, and many were willing to help out, but in the end it wasn't enough. So, after collecting a certain number of images, I made a video of myself doing all the signs in order and used OpenCV to extract the hand signs frame by frame, removing unclear ones. This is how the dataset was developed, and this is the link for the Dataset.
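The frame-extraction step above can be sketched roughly like this. It's a minimal sketch, not my exact script: the video filename, output folder, and sampling step are all placeholders, and the "remove unclear frames" part was done by hand afterwards.

```python
import os


def sample_indices(total_frames, step):
    """Which frame indices to keep when sampling every `step`-th frame.

    Mirrors the `idx % step == 0` check used below.
    """
    return list(range(0, total_frames, step))


def extract_frames(video_path, out_dir, step=5):
    """Save every `step`-th frame of the video as a JPEG.

    `video_path`, `out_dir`, and `step` are illustrative defaults,
    not the values from the original project.
    """
    import cv2  # opencv-python; imported here so the helper above works without it

    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or unreadable file)
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

After running something like `extract_frames("hand_signs.mp4", "frames")`, the blurry or mid-transition frames still have to be deleted manually before sorting the rest into one folder per sign.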
Nitty-Gritty Technical Part:
You can find the trained models in the GitHub repo, so thank me by clapping here and starring the repo. Thanks mate!!
Tools used: FastAI
It gets shit done faster with less code, and it comes pre-configured with the latest methods to improve performance and reduce overfitting, like dropout, data augmentation, and other such techniques.
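A training loop in fastai really is just a handful of lines. This is a hedged sketch of the kind of transfer-learning setup the library makes easy, not my exact code: the `hand_signs/` directory (one sub-folder per sign), the backbone choice, and the epoch count are all assumptions.

```python
from pathlib import Path


def label_from_folder(path):
    """Class label = name of the image's parent directory.

    This mirrors what fastai's `parent_label` does,
    e.g. "hand_signs/tiger/img_001.jpg" -> "tiger".
    """
    return Path(path).parent.name


def train(data_dir="hand_signs"):
    """Fine-tune a pretrained CNN on the hand-sign folders.

    `data_dir` is a hypothetical layout: one sub-folder per hand sign.
    """
    # Imported here so the pure helper above works without fastai installed.
    from fastai.vision.all import (
        ImageDataLoaders, Resize, aug_transforms,
        vision_learner, resnet34, accuracy,
    )

    dls = ImageDataLoaders.from_folder(
        data_dir,
        valid_pct=0.2,                 # hold out 20% for validation
        item_tfms=Resize(224),         # resize for the pretrained backbone
        batch_tfms=aug_transforms(),   # flips/rotations etc. against overfitting
    )
    learn = vision_learner(dls, resnet34, metrics=accuracy)
    learn.fine_tune(4)                 # transfer learning from ImageNet weights
    return learn
```

The point is how much fastai does by default here: sensible augmentations, a pretrained backbone, and a one-call fine-tuning schedule.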
Environment setup: Kaggle for storing the dataset and Colab for training the model.
If you are asking why not use Kaggle for everything... to be honest, I don't remember now, but it was causing a lot of problems during training and some tools were missing. Also, Kaggle's GPU time is not unlimited like Colab's. Some of you might ask, why not only Colab then? Because I would have had to store the data in my Drive, which also has limited space. So I decided to combine unlimited dataset storage on Kaggle with unlimited GPU time on Colab... I know, I know, I am awesome.
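The Kaggle-storage-plus-Colab-GPU combo boils down to pulling the dataset into the Colab runtime with the Kaggle API. A sketch of that setup, assuming you've downloaded a `kaggle.json` API token from your Kaggle account; the dataset slug and zip name are placeholders, not the real ones (in a Colab cell, prefix each line with `!`):

```shell
pip install -q kaggle
mkdir -p ~/.kaggle
cp kaggle.json ~/.kaggle/            # API token from your Kaggle account page
chmod 600 ~/.kaggle/kaggle.json      # the CLI refuses world-readable tokens
kaggle datasets download -d <user>/<naruto-hand-signs>   # placeholder slug
unzip -q naruto-hand-signs.zip -d data/                  # placeholder zip name
```

Since the Colab runtime is wiped between sessions, this cell just gets re-run at the start of each session, which is still faster than re-uploading the dataset to Drive.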
If you have any questions regarding the project, let me know in the comments or talk to me on LinkedIn. Cheers!