MySQL / InnoDB performance optimization | ProDevTips - Web Development Tutorials

Update: Do not forget to erase the two ib_logfile* files (ib_logfile0 and ib_logfile1) in /var/lib/mysql before restarting with the new configuration; MySQL will refuse to start if the existing log files no longer match innodb_log_file_size!

Update: I have had to set innodb_stats_on_metadata=0 to avoid ridiculously slow load times when browsing with phpMyAdmin, since it runs a SHOW TABLE STATUS FROM database_name query every time it shows a database overview, and with the setting on each such query re-samples the InnoDB index statistics. These queries slowed down the overall performance of the whole database too! Apparently there is virtually no downside to making this change, so it's a win-win! More info here.
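Since innodb_stats_on_metadata is a dynamic variable, the change can also be applied at runtime without a restart; a sketch, assuming a session with sufficient privileges:

```sql
-- Stop InnoDB from re-sampling index statistics on
-- SHOW TABLE STATUS and similar metadata queries:
SET GLOBAL innodb_stats_on_metadata = 0;

-- Verify the new value:
SHOW VARIABLES LIKE 'innodb_stats_on_metadata';
```

Remember to also add the line to my.cnf so it survives a restart.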

Update: I’ve set innodb_flush_log_at_trx_commit to 0, as it supposedly gives a small boost compared to having it at 2, without much of a drawback.

Update: I’ve increased the number of allowed connections to match the number of allowed Apache processes. At the moment the config line in question looks like this: max_connections = 2000

I’ve recently gotten some help and done some research into optimizing a MySQL database. The following applies if you have a powerful box at your disposal; I have 32GB of RAM and 32 cores. If you are very limited hardware-wise, not much of the following will apply.

As it turned out, some of my most write-intensive tables were MyISAM, so the first step was converting them to InnoDB; that is of course a must if you want to reap the benefits of the InnoDB optimizations below.

If you have many MyISAM tables that you need to convert the following PHP snippet might interest you:

// Convert every table in database $db to InnoDB.
// $sql is my own DB wrapper: loadArray() runs a query and returns
// the result as an array of associative arrays, query() just executes.
function toInno($sql, $db){
	// SHOW TABLES returns a single column named Tables_in_<database>
	$res = $sql->loadArray('SHOW TABLES');
	foreach($res as $arr){
		$tbl = $arr['Tables_in_'.$db];
		$sql->query("ALTER TABLE `$tbl` ENGINE = InnoDB");
	}
}

The function takes a database connection ($sql) and the name of the database you want to work with; replace the connection wrapper with the one you’re currently using and change as needed. I’m sure you get the gist of it.
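If you’d rather not go through PHP at all, a similar list of conversion statements can be generated straight from information_schema; a sketch, where 'your_db' is a placeholder for your schema name:

```sql
-- Emit one ALTER statement per remaining MyISAM table;
-- run the generated statements manually or pipe them back into mysql.
SELECT CONCAT('ALTER TABLE `', table_name, '` ENGINE = InnoDB;')
FROM information_schema.tables
WHERE table_schema = 'your_db'
  AND engine = 'MyISAM';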

Let’s get on with the optimization itself; the lines below are to be added to /etc/mysql/my.cnf.

innodb_buffer_pool_size = 12G

The buffer pool should ideally be big enough to hold your entire data set (note that my.cnf only accepts the K/M/G suffixes, not "GB"). I’ve set it to 12GB, which is more than enough to hold my current data. When I get more data I will crank it up to 16GB, which is 50% of my available RAM.
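To check whether the pool can actually hold your data, you can estimate the data plus index footprint per schema from information_schema (a rough estimate; the sizes InnoDB reports there are approximations):

```sql
-- Approximate data + index footprint per schema, in GB:
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
GROUP BY table_schema;
```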

innodb_log_buffer_size = 64M
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2

I’ve set innodb_log_buffer_size to 25% of innodb_log_file_size. Apparently 256MB/64MB is quite large, but I can afford it on my monster box, and it saves disk I/O. I quote Oracle’s Jenny Chen:

Set the amount of memory allocated to the buffer storing InnoDB write-ahead log entries. For large transactions, the log can be loaded into the log buffer instead of writing log to the log files on disk until the log buffer is flushed on each transaction commit. If you see large log I/Os in the show innodb status output at runtime, you probably need to set a larger value for the innodb_log_buffer_size parameter to save disk I/O. For workloads which don’t have long transactions like sysbench, it is not necessary to waste memory resources by setting a higher value for the log buffer; it is fine to set it to 8Mbytes.

As far as the trx commit value goes I quote Jenny again:

InnoDB flushes the transaction log to disk approximately once per second in the background. As a default, innodb_flush_log_at_trx_commit is set to 1, meaning the log is flushed to the disk at a transaction commit, and modifications made by the transaction won’t be lost during a MySQL, OS, or HW crash. For workloads running with many small transactions, you can reduce disk I/O to the logs to improve performance by setting the innodb_flush_log_at_trx_commit parameter to 0, meaning no log flushing on each transaction commit. However, the transaction might be lost if MySQL crashes. In the sysbench OLTP I/O bound workload test on a T2000 server, setting innodb_flush_log_at_trx_commit =0 in the read-only test can improve performance by 4%. You can set this value to 2 to flush the log to the OS cache to save disk I/O on each transaction commit.

innodb_flush_method = O_DIRECT

I have a lot of writes and a RAID setup which seems to suggest that O_DIRECT is the right way to go for me, from the MySQL reference:

Depending on hardware configuration, setting innodb_flush_method to O_DIRECT can have either a positive or negative effect on performance. Benchmark your particular configuration to decide which setting to use. The mix of read and write operations in your workload can also affect which setting performs better for you. For example, on a system with a hardware RAID controller and battery-backed write cache, O_DIRECT can help to avoid double buffering between the InnoDB buffer pool and the operating system’s filesystem cache. On some systems where InnoDB data and log files are located on a SAN, the default value or O_DSYNC might be faster for a read-heavy workload with mostly SELECT statements. Always test this parameter with the same type of hardware and workload that reflects your production environment.

innodb_file_per_table

This one controls whether each table gets its own data file on disk; it seemed like a good idea to have one file per table (at least for tables created from now on).
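Since the setting only applies to tables created from now on, an existing table can be moved out of the shared ibdata tablespace into its own .ibd file by rebuilding it (`my_table` is a placeholder):

```sql
-- Rebuilding the table recreates it under the current settings,
-- which moves it into its own tablespace file (my_table.ibd):
ALTER TABLE `my_table` ENGINE = InnoDB;
```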

transaction-isolation = READ-COMMITTED

With regards to transaction isolation Ovais Tariq explains:

With the READ-COMMITTED isolation level, the phenomenon of dirty read is avoided, because any uncommitted changes are not visible to any other transaction until the change is committed. Within this isolation level each SELECT uses its own snapshot of the data that was committed before the execution of the SELECT. Because each SELECT has its own snapshot, the same SELECT run multiple times during the same transaction could return different result sets. This phenomenon is called non-repeatable read.

To me this seems like a good compromise between the dirty reads of the READ-UNCOMMITTED option and the locking of the REPEATABLE-READ option. I avoid dirty reads, and my application is not advanced enough to need REPEATABLE-READ, which would probably result in more locking and thus degraded performance.

innodb_thread_concurrency = 64

From the manual:

The correct value for this variable is dependent on environment and workload. You will need to try a range of different values to determine what value works for your applications. A recommended value is 2 times the number of CPUs plus the number of disks.

I’ve got 32 cores and so I set this one to 64.

innodb_additional_mem_pool_size = 20M

From the manual again:

The size in bytes of a memory pool InnoDB uses to store data dictionary information and other internal data structures. The more tables you have in your application, the more memory you need to allocate here. If InnoDB runs out of memory in this pool, it starts to allocate memory from the operating system and writes warning messages to the MySQL error log. The default value is 1MB for the built-in InnoDB, 8MB for InnoDB Plugin.

So if the default is 8MB, then 20MB should be enough to avoid warnings cluttering the error log; on the whole this one seems pretty unimportant.
