The fundamental battle in a database server is usually between the CPU(s) and available disk I/O performance;
we’ll discuss memory momentarily. The CPU in an average server is orders of magnitude faster than the hard disks.
If you can’t get data to the CPU fast enough, it must sit idle while the disks locate the data and transfer
it to main memory.
To bridge the gap between blazingly fast CPUs and comparatively slow disks, we have memory. With respect to
performance, it’s in the middle – significantly faster than disks but still much slower than the CPU. The
underlying operating system generally uses free memory to cache data read from and written to disk. That
means if you query the same small MyISAM table over and over, there’s a very good chance you’ll
never touch the disk. Even though MySQL itself caches only the index blocks for MyISAM tables (not row data),
the entire table is likely sitting in the operating system’s disk cache.
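To see this in practice, you can compare MySQL’s own key-buffer activity against reads that had to go past it; the variable and status names below are MySQL’s standard ones, though whether a “miss” actually reaches the physical disk depends on what the operating system has cached:

```sql
-- Size of the MyISAM index cache (a global buffer)
SHOW VARIABLES LIKE 'key_buffer_size';

-- Key_read_requests: index block reads requested from the key buffer
-- Key_reads: requests that missed the buffer and were read from the file
SHOW STATUS LIKE 'Key_read%';
```

A high ratio of Key_reads to Key_read_requests suggests the key buffer is too small; even then, a miss may still be served from the operating system’s disk cache rather than the platters.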
3. MySQL’s buffers and caches
By adjusting how much memory MySQL uses, you can often realize significant performance improvements.
To do that effectively, you first need to understand how MySQL uses memory. Most of the memory MySQL
allocates is used for various internal buffers and caches. These buffers fall into two major groups:
global buffers and per-connection buffers. As their name implies, global buffers are shared among
all the connections (or threads) in MySQL, while per-connection buffers are allocated privately by each
connection as it needs them.
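Because each connection can allocate its own copy of the per-connection buffers, their effective memory cost scales with the number of simultaneous connections. A minimal my.cnf sketch of the two groups (the variable names are MySQL’s standard ones; the sizes are purely illustrative, not recommendations):

```ini
[mysqld]
# Global buffers: allocated once, shared by all threads
key_buffer_size   = 64M   # MyISAM index cache
query_cache_size  = 32M   # cached SELECT results

# Per-connection buffers: each connection may allocate its own copy
sort_buffer_size  = 2M    # for ORDER BY / GROUP BY sorts
read_buffer_size  = 1M    # for sequential table scans
```

A rough worst-case footprint is the sum of the global buffers plus max_connections times the sum of the per-connection buffers, which is why oversizing the latter is a common mistake.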
The performance of your network usually doesn’t have much bearing on MySQL. In most deployments,
clients are very near the servers – often connected to the same switch – so latency is low, and
available bandwidth is quite high. But there are less common circumstances in which the network
can get in the way.
As a multithreaded server, MySQL is most efficient on an operating system with a well-implemented threading
system. Windows and Solaris are excellent in this respect. Linux, as usual, is a bit different. Traditionally,
Linux has had a slightly unusual threading implementation, using cloned processes as threads. It performs
well under most circumstances, but in situations with thousands of active client connections it imposes a
bit of overhead.
If your queries are already optimized and using the most efficient indexes, it’s still possible to run into
I/O bottlenecks at some point. Simply running too many queries, no matter how efficient they are, can become
too much for the disk(s) to keep up with. If so, it’s time to consider caching. The easiest thing to do is
make sure you’re using the MySQL query cache. Available since MySQL 4.0, the query cache keeps the results
of frequently executed SELECTs in memory so that MySQL doesn’t need to perform any disk I/O at all. Taking
things a step further, you might consider application-level caching. If there’s data that rarely changes,
query for it once in a while and store the result in memory or on local disk until you requery it.
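For the query-cache route, the relevant server variables and status counters are shown below; the names are MySQL’s standard ones, but the size is illustrative rather than a recommendation:

```sql
-- Turn the query cache on and give it 32MB of memory
SET GLOBAL query_cache_type = 1;
SET GLOBAL query_cache_size = 33554432;

-- Watch the hit rate: results served from the cache vs. inserted into it
SHOW STATUS LIKE 'Qcache_hits';
SHOW STATUS LIKE 'Qcache_inserts';
```

Keep in mind that any write to a table invalidates every cached result that refers to it, so the query cache pays off mainly on read-heavy workloads.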