I'd like my app to be able to detect when the user carrying the phone falls, using only accelerometer data (because it's the only sensor available on all smartphones).
I first tried to implement an algorithm that detects free fall (total acceleration nearing zero, followed by a high-acceleration spike when hitting the ground, then a short period of motionlessness to rule out false positives such as the user quickly walking downstairs). But there are many ways to fall, and for any implementation of my algorithm I can always find a case where a fall is missed, or where a fall is wrongly detected. A sketch of that approach is shown below.
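For reference, here is a minimal sketch of the kind of threshold-based state machine I tried. The class name, thresholds and durations are made up for illustration, not my actual values:

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// All thresholds and durations are illustrative values, not tuned numbers.
private const val FREE_FALL_THRESHOLD = 2.0f   // m/s^2, total acceleration near zero
private const val IMPACT_THRESHOLD = 25.0f     // m/s^2, spike when hitting the ground
private const val STILLNESS_TOLERANCE = 1.0f   // m/s^2, allowed deviation from gravity
private const val STILLNESS_DURATION_NS = 1_500_000_000L

private enum class State { IDLE, FREE_FALL, IMPACT }

class ThresholdFallDetector(private val onFallDetected: () -> Unit) {
    private var state = State.IDLE
    private var stillSinceNs = 0L

    // Called for every accelerometer sample (timestamp in nanoseconds, axes in m/s^2).
    fun onSample(timestampNs: Long, x: Float, y: Float, z: Float) {
        val magnitude = sqrt(x * x + y * y + z * z)
        when (state) {
            State.IDLE ->
                if (magnitude < FREE_FALL_THRESHOLD) state = State.FREE_FALL
            State.FREE_FALL ->
                // A real implementation would also time out of this state.
                if (magnitude > IMPACT_THRESHOLD) state = State.IMPACT
            State.IMPACT -> {
                // Require a period of motionlessness to rule out walking downstairs quickly.
                if (abs(magnitude - 9.81f) < STILLNESS_TOLERANCE) {
                    if (stillSinceNs == 0L) stillSinceNs = timestampNs
                    if (timestampNs - stillSinceNs > STILLNESS_DURATION_NS) {
                        onFallDetected()
                        reset()
                    }
                } else {
                    stillSinceNs = 0L
                }
            }
        }
    }

    private fun reset() {
        state = State.IDLE
        stillSinceNs = 0L
    }
}
```

Whatever thresholds I pick for this kind of state machine, some falls slip through and some normal movements trigger it.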
I think machine learning can help me solve this issue by learning, from a lot of sensor values coming from different devices with different sampling rates, what is a fall and what is not.
TensorFlow seems to be what I need for this, as it can apparently run on Android, but while I could find tutorials on using it for offline image classification (here, for example), I couldn't find any help on building a model that learns patterns from motion sensor values.
I tried to learn how to use TensorFlow from the Getting Started page, but failed, probably because I'm not fluent in Python and have no machine learning background. (I'm fluent in Java and Kotlin, and familiar with the Android APIs.)
I'm looking for help from the community on using TensorFlow (or another machine learning approach) to train my app to recognize falls and other motion sensor patterns.
As a reminder, Android reports motion sensor values at an irregular rate, but provides a nanosecond timestamp for each sensor event, which can be used to infer the time elapsed since the previous event; the readings are provided as a 32-bit float for each axis (x, y, z).
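For example, this is roughly how I collect those values today (the class name, the SENSOR_DELAY_GAME rate, and the idea of grouping samples into fixed-length windows for a model are just my assumptions about what the preprocessing would look like):

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Minimal sketch of receiving the accelerometer data described above.
class AccelerometerRecorder(context: Context) : SensorEventListener {

    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
    private var lastTimestampNs = 0L

    fun start() {
        // Requests a fast rate, but the actual delivery rate is not guaranteed.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME)
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // event.timestamp is in nanoseconds; the delta gives the (irregular) sampling interval.
        val deltaNs = if (lastTimestampNs == 0L) 0L else event.timestamp - lastTimestampNs
        lastTimestampNs = event.timestamp

        // One 32-bit float per axis, in m/s^2.
        val x = event.values[0]
        val y = event.values[1]
        val z = event.values[2]

        // A model would presumably consume fixed-length windows of (deltaNs, x, y, z) samples,
        // but that is exactly the part I don't know how to set up.
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```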