I have Postgres 9.4.7 and a big table of ~100M rows and 20 columns. The table sees about 1.5k selects, 150 inserts, and 300 updates per minute, and no deletes. Here is my autovacuum config:
autovacuum_analyze_scale_factor 0
autovacuum_analyze_threshold 5000
autovacuum_vacuum_scale_factor 0
autovacuum_vacuum_threshold 5000
autovacuum_max_workers 6
autovacuum_naptime 5s
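For reference, the same limits can also be set per table as storage parameters instead of globally; a minimal sketch, assuming a hypothetical table name big_table:

-- per-table autovacuum settings mirroring the global config above
ALTER TABLE big_table SET (
    autovacuum_vacuum_threshold = 5000,
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_analyze_threshold = 5000,
    autovacuum_analyze_scale_factor = 0
);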
In my case the database is almost always in a constant state of vacuuming: as soon as one vacuuming session ends, another one begins.
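To confirm this, a query like the following against pg_stat_activity shows the busy autovacuum workers (a sketch; on 9.4 worker queries are labelled with an "autovacuum:" prefix):

-- list currently running autovacuum workers and what they are processing
SELECT pid, query, xact_start
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';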
So the main question: Is there a common way to vacuum big tables?
Here are some other questions.
Standard vacuum does not scan the entire table, and 'analyze' only scans 30k rows (300 times the default_statistics_target of 100). So under the same load I should get a roughly constant execution time, is that true? Do I really need to analyze the table at all? Can frequent 'analyze' make any useful changes to query plans for a large table?
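This is roughly how I judge whether the statistics are stale; a sketch using pg_stat_user_tables (n_mod_since_analyze exists from 9.4 on, and big_table is a hypothetical name):

-- rows changed since the last analyze, plus when autovacuum/autoanalyze last ran
SELECT relname, n_mod_since_analyze, n_dead_tup,
       last_autoanalyze, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'big_table';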
"Standard vacuum does not scan the entire table" is false. – Vao Tsun
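To see how much of the table a given run actually touches, a manual verbose vacuum can be used; a minimal sketch, again assuming the hypothetical name big_table:

-- run a manual vacuum and print per-step page and tuple statistics
VACUUM VERBOSE big_table;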