
InnoDB compression woes


InnoDB compression is getting some traction, and I see quite contradictory opinions: some report successful deployments in production, while others say that compression in its current implementation is useless.
To get an initial impression of its performance, I decided to run some multi-table sysbench benchmarks.
I was actually preparing to do a more thorough study, but even the first results are quite discouraging.

My setup: a Dell PowerEdge R900 running Percona-Server-5.1.57-rel12.8 (a public release is coming soon); storage is a FusionIO 320GB MLC card, which does not matter much in this CPU-bound benchmark.

First stage – loading the data. The multi-table sysbench scripts allow loading data in parallel, so let’s load 16 tables in 16 parallel threads, 25,000,000 rows in each table. That gives about 6GB of data per table (uncompressed) and 96GB of data in total.

./sysbench --test=tests/db/parallel_prepare.lua --oltp-tables-count=16 --num-threads=16 --oltp-table-size=25000000 run

Results: load time for regular tables is 19,693 sec; for compressed tables, 38,278 sec.
The compressed tables are created with ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8,
and the final size is 3.1GB per table. So we get a 2x win in space in exchange for a 2x longer load time. Maybe a fair deal.
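For reference, the full definition of one compressed table looks roughly like this (a sketch assuming the standard sbtest schema used by the multi-table sysbench scripts; ROW_FORMAT=COMPRESSED also requires innodb_file_per_table=1 and innodb_file_format=Barracuda):

CREATE TABLE sbtest1 (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  k INT UNSIGNED NOT NULL DEFAULT '0',
  c CHAR(120) NOT NULL DEFAULT '',
  pad CHAR(60) NOT NULL DEFAULT '',
  PRIMARY KEY (id),
  KEY k_1 (k)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

With innodb_file_per_table enabled, the per-table size can be checked simply by looking at the .ibd file sizes on disk.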

Now let’s run the OLTP read-only workload in 16 parallel threads, limiting the dataset to 3,000,000 rows per table, which gives a working set of about 11GB in total. With 24GB of memory for the buffer pool, this is a fully in-memory, CPU-bound workload.

The command to run:

./sysbench --test=tests/db/oltp.lua --oltp-tables-count=16 --oltp-table-size=5000000 --oltp-read-only=on --rand-init=on --num-threads=16 --max-requests=0 --rand-type=uniform --max-time=1800 --mysql-user=root --report-interval=10 run

This reports results every 10 sec.

After the initial warm-up, throughput for the regular tables stabilizes at 4,650 transactions per sec. I expected some overhead for compressed tables, but not this much: throughput with compressed tables is 30 transactions per sec. That is a 150x difference.

As the workload is purely read-only and CPU-bound, let’s check the CPU stats:
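The per-second breakdown below shows only the CPU columns of vmstat-style output; something like the following collects it (an assumption, not necessarily the exact command used):

vmstat 10    # the last five columns are us sy id wa st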

Regular tables:

-----cpu------
 us sy id wa st
  1  0 98  0  0
 85 13  2  0  0
 85 13  2  0  0
 85 13  1  0  0
 85 13  2  0  0

Compressed tables:

-----cpu------
us sy id wa st
  2  0 97  1  0
  7  0 93  0  0
  7  0 93  0  0
  7  0 93  0  0
  7  0 93  0  0
  7  0 93  0  0
  7  0 93  0  0

With regular tables the CPU is 85% utilized, which is quite a decent number. With compressed tables
CPU utilization is only 7%. Obviously we have some mutex serialization problem.

Analyzing SHOW INNODB STATUS (the SEMAPHORES section) for the workload with compressed tables, we can see:

      1 Mutex at 0xe13880 '&buf_pool_zip_mutex'
     14 Mutex at 0xe13780 '&LRU_list_mutex'

Apparently, with compressed tables we have very strong contention on LRU_list_mutex.
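The mutex-wait summary above can be reproduced with a quick one-liner over the SEMAPHORES section, something along these lines (a sketch; the quoted mutex names come from the Percona Server/XtraDB status output, and addresses and counts will differ per run):

mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -o "Mutex at 0x[0-9a-f]* '[^']*'" | sort | uniq -c | sort -rn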

I still need to check how compressed tables perform in an IO-bound workload (this is where they should give the main benefit),
but for an in-memory workload they show a significant scalability problem.


