I have various security cameras set to trigger when motion is detected. A program then grabs a still image from the camera, runs it through some deep learning stuff, and attempts to classify various objects in the image. I then log the results of the object classification into InfluxDB so I can use Grafana to browse how frequently various objects are detected, the classification confidence of the different objects, etc. The cameras process and classify a couple thousand motion events per day.
Here is a snippet of Python code that demonstrates my schema:
# 'data' comes from the motion event and 'prediction' from the object-detection result.
event_name = "object_detection"
camera_name = "front"
label = "person"

json_body = [{
    'measurement': event_name,
    'tags': {
        'camera': camera_name,
        'label': label,
    },
    'time': data['timestamp'],
    'fields': {
        'confidence': prediction['confidence'],
        'min_confidence': prediction['min_confidence'],
        'alert': prediction['alert'],
        'y_min': prediction['y_min'],
        'x_min': prediction['x_min'],
        'y_max': prediction['y_max'],
        'x_max': prediction['x_max'],
    }
}]
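The point then gets written with the influxdb-python client, roughly like this (the connection details here are placeholders, not my real setup):

from influxdb import InfluxDBClient

# Placeholder connection settings; the real host/database names differ.
client = InfluxDBClient(host='localhost', port=8086, database='cameras')

# Write the single-point batch built above.
client.write_points(json_body)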
The problem I'm having is that I can't (easily) query this data in Grafana: I just get an empty dataset. But if I drill down in the query inspector, I do see the data; it's just heavily nested.
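If it helps, this is roughly what I'm trying to express in the Grafana query editor, written out against the Python client (the query text is just an illustration, not my exact editor settings):

# Illustrative query: pull the confidence field for one camera/label over the last day.
query = (
    'SELECT "confidence" FROM "object_detection" '
    'WHERE "camera" = \'front\' AND "label" = \'person\' '
    'AND time > now() - 1d'
)
result = client.query(query)
print(list(result.get_points()))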
I don't know whether I'm doing something wrong in InfluxDB or in Grafana. Interestingly, the InfluxDB Python library uses this example schema in its docs:
>>> json_body = [
    {
        "measurement": "cpu_load_short",
        "tags": {
            "host": "server01",
            "region": "us-west"
        },
        "time": "2009-11-10T23:00:00Z",
        "fields": {
            "value": 0.64
        }
    }
]
So now I'm confused. Should I break out each of my field values into its own data point (roughly what I sketch below)? It would make querying in Grafana easier, but it seems like an inefficient solution. What's the best option?
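For reference, this is what I mean by breaking the fields out; each value would be written as its own point, with the original field name moved into a tag (the 'field_name' tag and generic 'value' field are just illustrative):

# One point per field value; the field name becomes a tag and the value goes
# into a generic 'value' field.
points = []
for name, value in prediction.items():
    points.append({
        'measurement': event_name,
        'tags': {
            'camera': camera_name,
            'label': label,
            'field_name': name,  # e.g. 'confidence', 'x_min', ...
        },
        'time': data['timestamp'],
        'fields': {
            'value': value,
        },
    })
client.write_points(points)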