In high-concurrency environments SQLite locks the database and data gets lost. I'm looking for a way to improve SQLite's performance under high concurrency without losing data on insert queries. My goal is to find the limit of concurrent users at which a site doing inserts (in this case, one insert per request) still works under “high concurrency”. To keep the test case simple, users just send data to be saved in the database.
After reading up on how to improve performance and following other users' advice:
- sqlite.org
- sqlite.org/faq.html#q19
- stackoverflow.com/questions/1711631/how-do-i-improve-the-performance-of-sqlite
- stackoverflow.com/questions/54998/how-scalable-is-sqlite
I decided to run a little test on an Amazon EC2 small instance.
Platform
- Linux version 2.6.35.14-106.53.amzn1.i686 ([email protected]) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Fri Jan 6 16:20:23 UTC 2012
- SQLite 3.6.20
- Lighttpd 1.4.29
- PHP 5.3.10
test.php with the insert SQL:
    PRAGMA synchronous = OFF;
    BEGIN TRANSACTION;
    INSERT INTO test (data1, date) VALUES ('".$_POST['data1']."', date());
    END TRANSACTION;
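Side note: I know the query concatenates $_POST['data1'] directly into the SQL (injectable), and that BEGIN/END TRANSACTION around a single INSERT gains nothing, since SQLite already wraps each statement in an implicit transaction. A parameterized version with PDO would look something like this (just a sketch, not what I benchmarked; the database path is a placeholder):

    <?php
    // Sketch: the same insert with a bound parameter instead of string
    // concatenation. The database path below is a placeholder.
    $db = new PDO('sqlite:/path/to/test.db');
    $db->exec('PRAGMA synchronous = OFF');
    $stmt = $db->prepare(
        "INSERT INTO test (data1, date) VALUES (:data1, date('now'))"
    );
    $stmt->execute(array(
        ':data1' => isset($_POST['data1']) ? $_POST['data1'] : ''
    ));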
I used the Apache HTTP server benchmarking tool (ab).
The database is a file on the filesystem, not in RAM.
RESULTS:
Test 1: ab -n 10000 -c 50 -k xxxx.xxx/test.php?data1=fc82bf71a0d5f8a3c2dc868ebe0d6eaa
- Time taken for tests: 63.637 seconds
- Complete requests: 10000
- Failed requests: 0
- SQLite inserted rows: 10050
- Average: 159.52 inserts/sec
Test 2: ab -n 10000 -c 100 -k xxxx.xxx/test.php?data1=fc82bf71a0d5f8a3c2dc868ebe0d6eaa
- Time taken for tests: 64.221 seconds
- Complete requests: 10000
- Failed requests: 0
- SQLite inserted rows: 10100
- Average: 157.26 inserts/sec
Test 3: ab -n 10000 -c 150 -k xxxx.xxx/test.php?data1=fc82bf71a0d5f8a3c2dc868ebe0d6eaa
- Time taken for tests: 33.338 seconds
- Complete requests: 10000
- Failed requests: 7095
- (Connect: 0, Receive: 0, Length: 7095, Exceptions: 0)
- SQLite inserted rows: 2905
- Average: LOST DATA!!
Test 4: ab -n 10000 -c 200 -k xxxx.xxx/test.php?data1=fc82bf71a0d5f8a3c2dc868ebe0d6eaa
- Time taken for tests: 33.705 seconds
- Complete requests: 10000
- Failed requests: 7049
- (Connect: 0, Receive: 0, Length: 7049, Exceptions: 0)
- SQLite inserted rows: 2918
- Average: LOST DATA!!
CONCLUSION: In this specific environment, SQLite handled up to 100 concurrent users with an average of 157.26 inserts/sec; at 150 concurrent users and above, roughly 70% of the requests failed and those rows were never inserted. Keep in mind that these results cover inserts only.
Pardon my ignorance: is it possible to prevent this data loss? And is it possible to improve the performance?
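My guess is that the failed requests are SQLITE_BUSY / SQLITE_LOCKED errors that the script silently ignores, so the rows are simply never written. The next thing I plan to try is a busy timeout plus a retry loop, something like this (only a sketch, assuming the pdo_sqlite driver; the path, timeout, and retry limit are made-up values):

    <?php
    // Sketch: retry the insert instead of silently dropping it when the
    // database file is locked. Path, timeout, and retry limit are
    // illustrative values only.
    $db = new PDO('sqlite:/path/to/test.db');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $db->setAttribute(PDO::ATTR_TIMEOUT, 5); // maps to sqlite3_busy_timeout (seconds)
    $db->exec('PRAGMA synchronous = OFF');

    $stmt = $db->prepare(
        "INSERT INTO test (data1, date) VALUES (:data1, date('now'))"
    );

    $attempts = 0;
    do {
        try {
            $stmt->execute(array(
                ':data1' => isset($_POST['data1']) ? $_POST['data1'] : ''
            ));
            break; // insert committed
        } catch (PDOException $e) {
            // errorInfo[1] carries the SQLite result code:
            // 5 = SQLITE_BUSY, 6 = SQLITE_LOCKED
            $code = isset($e->errorInfo[1]) ? $e->errorInfo[1] : 0;
            if (($code === 5 || $code === 6) && ++$attempts < 10) {
                usleep(50000); // back off 50 ms and retry
            } else {
                throw $e; // real error, or out of retries
            }
        }
    } while (true);

With PDO::ATTR_TIMEOUT set, most lock waits should be absorbed inside SQLite itself, so the retry loop would rarely fire; the idea is that under contention requests get slower instead of rows getting lost. Does that sound like the right direction?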