Hello, all!
I have just started out in the wondrous world of machine learning, and I can already tell that I have a lot to learn. My first project will try to detect gestures and hand positions. For example, if a person holds up a peace sign, then a fist, then waves at the camera, it should output the correct label for each. Now, some of these gestures are stationary, like pointing or a peace sign, whereas others involve motion, like a wave. I am considering a two-level approach that uses both an image classifier (for the stationary poses) and an action classifier (for the dynamic ones) so that both kinds of gesture are captured. Is there a way to combine those two Create ML models to accomplish this goal? If it is possible, how would I go about combining them? Or would it simply be easier to use only the action classifier and record videos of people pointing and doing peace signs to feed to the model?
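To show what I mean by "combining", here is roughly the fusion step I have in mind, sketched in plain Python. The two classifiers are just hypothetical stubs here (made-up `(label, confidence)` pairs, not real Create ML calls), and the labels and threshold are my own invention:

```python
# Hypothetical fusion of a per-frame pose classifier and a windowed
# action classifier. Both inputs are (label, confidence) pairs that
# would come from the two models' predictions.

DYNAMIC_GESTURES = {"wave"}   # labels only the action classifier can detect
CONFIDENCE_THRESHOLD = 0.7    # arbitrary cutoff, just for illustration

def fuse(pose_pred, action_pred):
    """Pick one final label from the two classifiers' outputs."""
    pose_label, pose_conf = pose_pred
    action_label, action_conf = action_pred
    # Prefer the action classifier when it is confident about a
    # dynamic gesture; otherwise fall back to the per-frame pose label.
    if action_label in DYNAMIC_GESTURES and action_conf >= CONFIDENCE_THRESHOLD:
        return action_label
    if pose_conf >= CONFIDENCE_THRESHOLD:
        return pose_label
    return "none"
```

So a confident static pose wins when the action model is unsure, e.g. `fuse(("peace_sign", 0.9), ("wave", 0.3))` gives `"peace_sign"`, while a confident wave overrides the per-frame label, e.g. `fuse(("fist", 0.4), ("wave", 0.85))` gives `"wave"`. Is something like this feasible with two Create ML models running side by side, or is there a better pattern?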
Thanks for the help!