Let's say we need to check whether a jsonb column contains a particular value by matching a substring against any of its values (non-nested, first level only).
How does one effectively optimize a query that searches an entire JSONB
column (that is, every key) for a value?
Is there a good alternative to doing ILIKE '%val%'
on the jsonb datatype cast to text, along the lines of:
jsonb_each_text(jsonb_column) ILIKE '%val%'
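For reference, a minimal sketch of that per-key extraction approach (the table and column names tbl / jsonb_column are placeholders):

SELECT t.*
FROM tbl t
WHERE EXISTS (
    -- expand the first-level key/value pairs and match the values as text
    SELECT 1
    FROM jsonb_each_text(t.jsonb_column) AS kv
    WHERE kv.value ILIKE '%val%'
);

This matches any first-level value, but it still has to expand and scan every row.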
As an example, consider this data:
SELECT
'{
  "col1": "somevalue",
  "col2": 5.5,
  "col3": "2016-01-01",
  "col4": "othervalue",
  "col5": "yet_another_value"
}'::JSONB
How would you go about optimizing a query like that when you need to search for the pattern %val%
in a jsonb column where each row can have a different set of keys?
I'm aware that searching with a leading and trailing %
sign is inefficient, so I'm looking for a better way but having a hard time finding one. Also, explicitly indexing all the fields within the json column is not an option, since they vary for each type of record and would create a huge set of indexes (not every row has the same set of keys).
Question
Is there a better alternative to extracting each key-value pair to text and performing an ILIKE/POSIX search?
pg_trgm may be the best option (ILIKE/POSIX type) for that, as you would still be using a pattern-matching criterion on the jsonb column – Dmitry Savinkov
The gin_trgm_ops operator class can be applied; you can also check this answer – Dmitry Savinkov
Something LIKE '%<somevalue>%' is inefficient by default because it always causes a full scan of the data. So @DmitrySavinkov's suggestion is almost the best solution. IMO it should be the answer, with a brief explanation. – Abelisto
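Following up on the comments, here is a minimal sketch of the pg_trgm suggestion, assuming the extension can be installed and reusing the placeholder names tbl / jsonb_column: a trigram GIN index over the column cast to text lets ILIKE '%val%' use the index rather than a sequential scan.

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Expression index: trigrams over the whole jsonb value serialized to text
CREATE INDEX tbl_jsonb_trgm_idx
    ON tbl
    USING gin ((jsonb_column::text) gin_trgm_ops);

-- The query must use the same expression for the planner to consider the index
SELECT *
FROM tbl
WHERE jsonb_column::text ILIKE '%val%';

Note that the text serialization includes the keys as well as the values, so a pattern can also match a key name; that may or may not matter for this use case.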