8 votes

Let's say we need to check whether a jsonb column contains a particular value, matching by substring against any of the values (non-nested, first level only).

How does one effectively optimize a query that searches an entire JSONB column (that is, every key) for a value?

Is there a good alternative to doing ILIKE '%val%' on the jsonb value cast to text? Something along these lines (table and column names are placeholders):

SELECT * FROM tbl t, jsonb_each_text(t.jsonb_column) AS e(key, value)
WHERE e.value ILIKE '%val%';

As an example consider this data:

SELECT 
  '{
   "col1": "somevalue", 
   "col2": 5.5, 
   "col3": 2016-01-01, 
   "col4": "othervalue", 
   "col5": "yet_another_value"
  }'::JSONB

How would you go about optimizing a query like that, when you need to search for the pattern %val% and different rows have different key configurations in the jsonb column?

I'm aware that searching with both a leading and a trailing % sign is inefficient, hence I'm looking for a better way but having a hard time finding one. Also, explicitly indexing all the fields within the json column is not an option, since they vary between record types and that would create a huge set of indexes (not every row has the same set of keys).

Question

Is there a better alternative to extracting each key-value pair to text and performing an ILIKE/POSIX search?

This may be a better fit for dba.stackexchange.com; I just wanted to reach a wide audience with this matter. – Kamil Gosciminski
pg_trgm may be the best option (ilike/posix type) for that, as you will still be using pattern-matching criteria on the jsonb column – Dmitry Savinkov
@DmitrySavinkov could you please elaborate? I believe that I would still need to unpack the json data into separate rows. – Kamil Gosciminski
yes, you need to unpack the value so the gin_trgm_ops operator class can be applied; you can also check this answer – Dmitry Savinkov
Filters like something LIKE '%<somevalue>%' are inefficient by default because they always cause a full scan of the data. So @DmitrySavinkov's suggestion is almost the best solution. IMO it should be the answer, with a brief explanation. – Abelisto

1 Answer

1 vote

If you know you will need to query only a few known keys, then you can simply index those expressions.

Here is an oversimplified but self-explanatory example:

create table foo as SELECT
  '{"col1": "somevalue", "col2": 5.5, "col3": "2016-01-01",
    "col4": "othervalue", "col5": "yet_another_value"}'::JSONB as bar;

create index pickfoo1 on foo ((bar #>> '{col1}'));
create index pickfoo2 on foo ((bar #>> '{col2}'));
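
With those indexes in place, an exact lookup on one of the indexed expressions can use them, e.g.:

-- A query that the expression index can serve:
select * from foo where bar #>> '{col1}' = 'somevalue';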

This is the basic idea. Even though it isn't useful for ILIKE queries by itself, you can build on it (depending on your needs).

For example, if you only need case-insensitive matching, it would be sufficient to do:

-- Create indexes over the lowered values (named so they don't
-- collide with the indexes created above):
create index pickfoo1_lower on foo (lower(bar #>> '{col1}'));
create index pickfoo2_lower on foo (lower(bar #>> '{col2}'));

-- Check that it matches:
select * from foo where lower(bar #>> '{col1}') = lower('soMEvaLUe');

NOTE: This is only an example. If you run EXPLAIN on the previous SELECT, you will see that Postgres actually performs a sequential scan instead of using the index, but that is because we are testing on a table with a single row, which is not the usual case. I'm sure you could verify it with a bigger table ;-)

With huge tables, even LIKE queries should benefit from the index if the first wildcard doesn't appear at the beginning of the string (this isn't a matter of jsonb, but of how btree indexes work).
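
For instance (a sketch; the index name is arbitrary, and note that unless the database uses the C locale, a btree index needs the text_pattern_ops operator class to support left-anchored LIKE patterns):

-- Btree index tailored for left-anchored LIKE patterns:
create index pickfoo1_pattern on foo ((bar #>> '{col1}') text_pattern_ops);
-- The pattern is anchored at the start, so the index can be used:
select * from foo where bar #>> '{col1}' like 'some%';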

If you need to optimize queries like:

select * from foo where bar #>> '{col1}' ilike '%MEvaL%';

...then you should consider using GIN or GiST indexes instead.
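
For instance, here is a minimal sketch using the pg_trgm extension mentioned in the comments (the index name is arbitrary): its gin_trgm_ops operator class lets a GIN index serve ILIKE patterns even with a leading wildcard.

-- Enable trigram support and index the extracted text value:
create extension if not exists pg_trgm;
create index pickfoo1_trgm on foo using gin ((bar #>> '{col1}') gin_trgm_ops);
-- On a sufficiently large table this can use a bitmap index scan:
select * from foo where bar #>> '{col1}' ilike '%MEvaL%';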