I have a jsonb document in a table. The document contains an array of cameraIds. I am trying to join this data with the cameras table, a normal table where cameraId is a column, and return unique rows from the table with the jsonb column (which is why I am using a GROUP BY in my query).
Any advice on how to optimize this query for performance would be greatly appreciated.
JSONB Col Example:
{
  "date": {
    "end": "2018-11-02T22:00:00.000Z",
    "start": "2018-11-02T14:30:00.000Z"
  },
  "cameraIds": [100, 101],
  "networkId": 5,
  "filters": [],
  "includeUnprocessed": true,
  "reason": "some reason",
  "vehicleFilter": {
    "bodyInfo": "something",
    "lpInfo": "something"
  }
}
Query:
select ssr.id,
       a.name as user_name,
       ssr.start_date,
       ssr.end_date,
       ssr.created_at,
       ssr.payload -> 'filters' as pretty_filters,
       ssr.payload -> 'reason' as reason,
       ssr.payload -> 'includePlates' as include_plates,
       ssr.payload -> 'vehicleFilter' -> 'bodyInfo' as vbf,
       ssr.payload -> 'vehicleFilter' -> 'lpInfo' as lp_info,
       array_agg(n.name) filter (where n.organization_id = ${orgId}) as network_names,
       array_agg(c.name) filter (where n.organization_id = ${orgId}) as camera_names
from ssr
cross join jsonb_array_elements(ssr.payload -> 'cameraIds') as camera_id
inner join cameras as c on c.id = camera_id::int
inner join networks as n on n.id = c.network_id
inner join accounts as a on ssr.account_id = a.id
where n.organization_id = ${orgId}
  and ssr.created_at between ${startDate} and ${endDate}
group by 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
order by ssr.created_at desc
offset 0
limit 25;
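For reference, the actual plan can be inspected with EXPLAIN (ANALYZE, BUFFERS), PostgreSQL's standard profiling facility, which the first comment below also asks for. A sketch with a trimmed select list, reusing the tables and placeholders above:

explain (analyze, buffers)
select ssr.id, ssr.created_at
from ssr
cross join jsonb_array_elements(ssr.payload -> 'cameraIds') as camera_id
inner join cameras as c on c.id = camera_id::int
inner join networks as n on n.id = c.network_id
where n.organization_id = ${orgId}
  and ssr.created_at between ${startDate} and ${endDate};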
Comments:

… EXPLAIN (ANALYZE). – Bergi

… the ssr.payload column to work. Since ssr has a created_at column, I suggest adding a table, ssr_camera, with ssr_id, camera_id, and created_at as columns, and maintain it when you insert into ssr. Join through the new ssr_camera table in your query. – Mike Organek
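A minimal sketch of that suggestion, assuming integer keys and a timestamptz created_at (the column names ssr_id, camera_id, and created_at come from the comment; the types, the indexes, and the ${newSsrId} placeholder for the id of the row just inserted into ssr are assumptions):

create table ssr_camera (
    ssr_id     bigint      not null references ssr (id),
    camera_id  int         not null references cameras (id),
    created_at timestamptz not null,  -- copied from ssr.created_at
    primary key (ssr_id, camera_id)
);
create index on ssr_camera (camera_id);

-- Maintain it on every insert into ssr:
insert into ssr_camera (ssr_id, camera_id, created_at)
select ssr.id, camera_id::int, ssr.created_at
from ssr
cross join jsonb_array_elements_text(ssr.payload -> 'cameraIds') as camera_id
where ssr.id = ${newSsrId};

-- The report query can then drop the jsonb expansion and join through plain columns:
--   inner join ssr_camera as sc on sc.ssr_id = ssr.id
--   inner join cameras as c on c.id = sc.camera_id

The composite primary key backs lookups by ssr_id, and the extra index backs joins that start from a camera; both avoid expanding the jsonb array at query time.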