3 votes

I'm evaluating the possibility of using PostgreSQL's Row Level Security (RLS) feature for soft deletion of customers. Unfortunately, I'm running into poor performance. Here is a simple test setup on PostgreSQL 9.5.10:

A table containing 10,000,000 customers:

CREATE TABLE customers (
    customer_id integer PRIMARY KEY,
    name text,
    hidden boolean DEFAULT FALSE
);

INSERT INTO customers (customer_id, name) SELECT generate_series(0, 9999999), 'John Doe';
ANALYZE customers;

A table containing one order for each customer:

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    customer_id integer REFERENCES customers (customer_id)
);

INSERT INTO orders (order_id, customer_id) SELECT generate_series(0, 9999999), generate_series(0, 9999999);
ANALYZE orders;

An untrusted user that will be doing SELECTs only:

CREATE ROLE untrusted;
GRANT SELECT ON customers TO untrusted;
GRANT SELECT ON orders TO untrusted;

A policy that makes hidden customers invisible to the untrusted user:

CREATE POLICY no_hidden_customers ON customers FOR SELECT TO untrusted USING (hidden IS FALSE);
ALTER TABLE customers ENABLE ROW LEVEL SECURITY;
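
As a sanity check (the customer_id here is arbitrary, the session is assumed to be allowed to SET ROLE to untrusted, and the row is un-hidden again afterwards so the test data stays uniform), the policy can be verified like this:

-- Hide one customer, then look at the table as the untrusted role,
-- which is subject to the policy: the hidden row should disappear.
UPDATE customers SET hidden = TRUE WHERE customer_id = 42;
SET ROLE untrusted;
SELECT count(*) FROM customers;                  -- 9999999 visible rows
SELECT * FROM customers WHERE customer_id = 42;  -- no rows
RESET ROLE;
-- Undo the change so the benchmark data stays uniform.
UPDATE customers SET hidden = FALSE WHERE customer_id = 42;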

A simple test query: what is the name of the customer that made the order with order_id = 4711?

Without RLS:

EXPLAIN ANALYZE SELECT name FROM orders JOIN customers USING (customer_id) WHERE order_id = 4711;
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.87..16.92 rows=1 width=9) (actual time=0.121..0.123 rows=1 loops=1)
   ->  Index Scan using orders_pkey on orders  (cost=0.43..8.45 rows=1 width=4) (actual time=0.078..0.078 rows=1 loops=1)
         Index Cond: (order_id = 4711)
   ->  Index Scan using customers_pkey on customers  (cost=0.43..8.45 rows=1 width=13) (actual time=0.039..0.040 rows=1 loops=1)
         Index Cond: (customer_id = orders.customer_id)
 Planning time: 0.476 ms
 Execution time: 0.153 ms
(7 rows)

With RLS:

EXPLAIN ANALYZE SELECT name FROM orders JOIN customers USING (customer_id) WHERE order_id = 4711;
                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=8.46..291563.48 rows=1 width=9) (actual time=1.494..2565.121 rows=1 loops=1)
   Hash Cond: (customers.customer_id = orders.customer_id)
   ->  Seq Scan on customers  (cost=0.00..154055.00 rows=10000000 width=13) (actual time=0.010..1784.086 rows=10000000 loops=1)
         Filter: (hidden IS FALSE)
   ->  Hash  (cost=8.45..8.45 rows=1 width=4) (actual time=0.015..0.015 rows=1 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 9kB
         ->  Index Scan using orders_pkey on orders  (cost=0.43..8.45 rows=1 width=4) (actual time=0.012..0.013 rows=1 loops=1)
               Index Cond: (order_id = 4711)
 Planning time: 0.358 ms
 Execution time: 2565.170 ms
(10 rows)

How can I avoid the sequential scan when joining the tables? I've tried every index I can think of, to no avail.
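
For example, one of the indexes I tried (shown purely as an illustration; the index name is made up and the exact definitions varied) was a partial index matching the policy predicate, but the planner still chose the sequential scan:

CREATE INDEX customers_visible_idx ON customers (customer_id) WHERE hidden IS FALSE;
-- Did not help: the join above still produces a Seq Scan on customers.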


1 Answer

3 votes

I recommend upgrading to the latest Postgres version, 10.3. Significant performance improvements have been made to Row-Level Security since version 9.5. For example, check out this improvement, which is only available since Postgres 10.0: https://github.com/postgres/postgres/commit/215b43cdc8d6b4a1700886a39df1ee735cb0274d

I don't think it makes sense to try to optimize RLS queries on Postgres 9.5: RLS was a brand-new feature in that release and had not yet been optimized for performance. Just upgrade.
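
If you do upgrade, the easiest way to confirm the difference is to re-run the test query from the question as the untrusted role (a sketch; the exact plan and timings will depend on your server):

SET ROLE untrusted;
-- On 10.x the planner should be able to use customers_pkey again
-- instead of the sequential scan shown in the question.
EXPLAIN ANALYZE SELECT name FROM orders JOIN customers USING (customer_id) WHERE order_id = 4711;
RESET ROLE;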