MySQL Database Design for Scalability and Performance

Resource Usage and Size Limits

Operating System                        File-size Limit
Win32 w/ FAT/FAT32                      2GB/4GB
Win32 w/ NTFS                           2TB (possibly larger)
Linux 2.2, Intel 32-bit                 2GB (LFS: 4GB)
Linux 2.4+ (using ext3 file system)     4TB
Solaris 9/10                            16TB (who does file systems better than the guys at Sun?)
MacOS X w/ HFS+                         2TB
NetWare w/ NSS file system              8TB

In this section I will be discussing database design for large data sets.  First, it is good to be aware of the actual limits imposed on your database by the hardware and operating system it runs on.  On modern machines, MySQL is in most cases capable of managing very large data sets, often much larger than most applications ever call for.  In general, the hard limits to be aware of are imposed by the file system.  If you are using a 32-bit Linux OS or a FAT file system, table sizes will be limited to between 2GB and 4GB.  A newer 64-bit Linux OS with an ext3 or ext4 file system expands this to at least 4TB, and on Solaris table files can reach as much as 16TB before the file system becomes the limiting factor.  That is quite a lot of data for a single table.  Be sure to start new projects on a newer 64-bit OS and a modern file system with higher file-size limits if at all possible, to minimize the impact of this type of limit.  Even if you do not see a current need, database applications are usually better than most at taking advantage of newer technologies such as 64-bit processors and operating systems, multi-core processors and large RAM allocations.  In cases where this hard limit is reached, data sets can be partitioned into separate tables by a logical attribute such as date, though this will rarely be necessary on a single-node system.

Having enough disk space is another important limitation to be aware of.  The amount of disk space needed for your particular implementation will depend upon the amount of data you have, your indexing schemes, logging needs, backup methods and data population requirements.  In today's environments, disk space is inexpensive enough that it is rarely lacking because of its cost; more often it is lacking because the administrator failed to specify enough.
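As an aside, the date-based table split mentioned above can be as simple as the following minimal sketch, assuming a hypothetical events table with an event_date column:

CREATE TABLE events_2023 LIKE events;  -- copies the structure and indexes of the base table
INSERT INTO events_2023
  SELECT * FROM events
  WHERE event_date >= '2023-01-01' AND event_date < '2024-01-01';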

First, the database engine and tools themselves require space on your node.  This will vary based upon your choice of tools but is minimal relative to the space used for data, indexes, backups and so on.  To understand these requirements, check disk usage before installing MySQL and its tools (df -h works on Linux) and then again directly after installation.

The space requirements for your data and indexes are related to the sum of data length and index length.  To look at a particular table, try:

SELECT data_length+index_length FROM information_schema.tables WHERE table_schema='mydb' AND table_name='mytable';

To look at usage across your entire database, you can use something like this, which I ripped off the web somewhere:

SELECT IFNULL(B.engine,'Total') "Storage Engine",
       CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',SUBSTR(' KMGTP',pw+1,1),'B') "Data Size",
       CONCAT(LPAD(REPLACE(FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',SUBSTR(' KMGTP',pw+1,1),'B') "Index Size",
       CONCAT(LPAD(REPLACE(FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',SUBSTR(' KMGTP',pw+1,1),'B') "Table Size"
FROM (SELECT engine,
             SUM(data_length) DSize,
             SUM(index_length) ISize,
             SUM(data_length+index_length) TSize
      FROM information_schema.tables
      WHERE table_schema NOT IN ('mysql','information_schema','performance_schema')
        AND engine IS NOT NULL
      GROUP BY engine WITH ROLLUP) B,
     (SELECT 3 pw) A
ORDER BY TSize;

Or you could just check the disk usage in your data partition with df -h again, or with the file manager in Windows.  You can also estimate size by counting bytes per row from the known or specified column lengths and your expected row counts.
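For a rough back-of-the-envelope version of that last approach, assuming hypothetical column sizes (a BIGINT of 8 bytes, an INT of 4 bytes and a VARCHAR averaging around 50 bytes) and 100 million rows:

SELECT (8 + 4 + 50) * 100000000 / POWER(1024,3) AS approx_data_gb;  -- roughly 5.8GB of row data, before row overhead and indexes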

Indexes can take a bunch of space too.  Some ways to alleviate this include:

Avoid creating secondary indexes on your really big tables if you can.  The way they are stored (in InnoDB, each secondary index entry carries a copy of the primary key), a secondary index usually means one entry per row even if the indexed column has only a few distinct values.

Fragmentation in indexes is bad, and worse when using secondary indexes.  Dump and reload tables periodically when index fragmentation is suspected, or rebuild them in place (see the sketch below).  I do believe more recent versions of MySQL have done a bit to address this, though.
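One alternative to a full dump and reload is to rebuild the table in place; for InnoDB tables, a null ALTER or OPTIMIZE TABLE recreates the table and its indexes.  A minimal sketch against the mytable example from earlier:

ALTER TABLE mytable ENGINE=InnoDB;  -- rebuilds the table and all of its indexes
-- or, equivalently for InnoDB:
OPTIMIZE TABLE mytable;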

In addition, you must have space available for backing up your entire database and for the raw files used in load procedures.  Your particular implementation will dictate the size of these files.  Logical backups take a lot of room from the start, as will be discussed later; these are full backups of your data in text format, either as insert statements or as delimited rows.  Raw backups or snapshots just keep track of changes, so they are small at first but grow rapidly on a busy system.

Informational and operational logs can also take up a lot of space, and their size varies with your implementation.  Managing this space will be discussed later in the course and will depend greatly upon your specific requirements.  As noted earlier, informational logs should usually be turned off when not being used to diagnose a problem, but there should still be space available to use them when needed.  Operational log size dictates the level of information safety you have, meaning your ability to roll back to an earlier state and avoid data loss in the event of a failure.  In most cases it is better to be safe than sorry here: leave plenty of space for your binlogs.  This will also be discussed later.

MySQL's CREATE DATABASE command is fairly simple and doesn't offer a lot of options; the majority of specification options are at the table level, since tables using different engines such as MyISAM and InnoDB can coexist in the same database.  It is also a good idea to specify innodb_file_per_table in your config file, as separating your tables into individual files allows for future flexibility, especially during disaster recovery and when reclaiming disk space after a table shrinks (scaling down).
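A minimal sketch of the innodb_file_per_table setting just mentioned, as it would appear in my.cnf (it is on by default in recent MySQL releases):

[mysqld]
innodb_file_per_table = 1

You can confirm it on a running server with:

SHOW VARIABLES LIKE 'innodb_file_per_table';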

MySQL Table Design for Scalability and Performance

Table Specification
Indexing
Foreign Keys
Views
Star Schema (pre-join)
Temp Tables

Table specification is done, as you are probably aware, through the CREATE TABLE command.  Usually you will want your scripts to contain the specification of fields, options, indexes and foreign keys all in one place.  Check online for syntax guides.  Personally, I usually handle this process through an IDE.  Many people prefer to start with a database design tool such as DBVisualizer: you create a picture of your database design, with tables connected at their foreign keys by lines, and when satisfied you reverse engineer a create script through the software.  I am not a big fan of this method, as it seems slow to me, but for others who are more visual it seems to work well.  My MySQL client tool of choice is Navicat.  It is fairly easy to use relative to the other options out there.  To create a table, I fill out a form with Navicat's create table option, which in turn generates the CREATE SQL for me.  There are many free options out there as well, but Navicat Essentials offers SSH tunneling (the ability to reach a database that is not directly accessible from the internet), and that is important enough to me to warrant the $40 a Navicat Essentials license costs.
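Whatever tool generates it, the end result is a script along these lines; this is a minimal sketch using hypothetical orders and customers tables, just to show fields, options, an index and a foreign key specified in one place:

CREATE TABLE orders (
  order_id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  customer_id  BIGINT UNSIGNED NOT NULL,
  order_date   DATE NOT NULL,
  total        DECIMAL(10,2) NOT NULL DEFAULT 0.00,
  PRIMARY KEY (order_id),
  KEY idx_orders_order_date (order_date),
  CONSTRAINT fk_orders_customer FOREIGN KEY (customer_id)
    REFERENCES customers (customer_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;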

In the past, specification of foreign keys has been a requirement of good database design, and many DBAs still feel that way, especially those who have been around for a while.  For most MySQL implementations, I agree that this is a helpful way of ensuring data integrity and wise to use.  In the case of really large data sets, however, it is important to understand the cost of using foreign keys; in many cases it may be better to rely upon the application for this integrity.  For example, when an insert happens on a table with a foreign key, MySQL has to check the key, recursively, to make sure all required rows exist in the database according to the key.  While this is usually more efficient than handling the check yourself (what a mess), it can be costly performance-wise.  A happy medium can be achieved by creating foreign keys to control small inserts coming from the application but then disabling them with foreign_key_checks=0 when doing bulk loads.  Of course, you as the administrator are much less likely to make mistakes than the application programmers (haha).  The performance gain from disabling these checks in a rigorously keyed database can be huge.  We will discuss this more when we address database loading and ETL.
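A minimal sketch of that pattern around a bulk load (the file path and the orders table are hypothetical):

SET foreign_key_checks = 0;  -- skip foreign key validation for this session's load
LOAD DATA INFILE '/tmp/orders.csv' INTO TABLE orders
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
SET foreign_key_checks = 1;  -- re-enable checks once the load is complete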

Another feature available in newer versions of MySQL is views.  Views are selections from the data that can be used to offer apparently pre-joined tables to application programmers.  While there is some sense in this, since it may help with consistent and efficient caching of queries, the benefit is probably minimal.  MySQL creates views on the fly for the most part, either using the MERGE algorithm or a temporary table depending upon the terms of the query.  When tuning queries for performance, it is usually better to manage these joins per query at the application level, as the path taken by the view may not be optimal.  Basically, like foreign keys, views are a luxury that should be avoided when optimal performance is desired.
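For reference, a view over the hypothetical orders and customers tables from earlier looks like this; the ALGORITHM clause is only a hint, and MySQL decides at query time whether it can merge the view into the outer query or must materialize a temporary table:

CREATE ALGORITHM=MERGE VIEW order_summary AS
  SELECT o.order_id, o.order_date, c.customer_name, o.total
  FROM orders o
  JOIN customers c ON c.customer_id = o.customer_id;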
A more effective way of providing a true pre-join to application developers is through the use of star schema tables.  These are tables that contain the required data from two or more standard tables, usually populated directly after the base tables are loaded.  A star schema table can be separately indexed and optimized for a particular need, and can prevent costly joins from being performed over and over at the application level by performing them one time during your ETL process.  Your particular application will dictate how effective this technique is at increasing performance.  In the past, space requirements would often limit its use, but given the low cost of disk space now, this is usually a better choice than a view for speeding up frequently used joins.
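A minimal sketch, continuing with the hypothetical orders and customers tables: build the pre-joined table once during ETL, index it for the access pattern the application needs, and let the application query it directly:

CREATE TABLE order_facts (
  order_id      BIGINT UNSIGNED NOT NULL,
  order_date    DATE NOT NULL,
  customer_id   BIGINT UNSIGNED NOT NULL,
  customer_name VARCHAR(100) NOT NULL,
  total         DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (order_id),
  KEY idx_order_facts_date (order_date)
) ENGINE=InnoDB;

INSERT INTO order_facts
  SELECT o.order_id, o.order_date, c.customer_id, c.customer_name, o.total
  FROM orders o
  JOIN customers c ON c.customer_id = o.customer_id;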
Use of temporary tables is another technique that may or may not be effective in your situation, but it can have its uses in large MySQL databases.  Temporary tables are only available for the life of a connection.  More information about their use is available online, and more recent releases of MySQL handle them very effectively.  In some cases, especially those in which a costly join produces a very small set of data needed for part of a query, memory usage can be optimized through the use of a temporary table.  When building data transforms, consider temporary tables when this type of situation arises, as ETL performance, like all query performance, quickly becomes unacceptable once disk paging is required.
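A minimal sketch of that pattern, with hypothetical names: materialize the small result of a costly aggregate once for the connection, then reuse it:

CREATE TEMPORARY TABLE big_spenders AS
  SELECT customer_id, SUM(total) AS lifetime_total
  FROM orders
  GROUP BY customer_id
  HAVING SUM(total) > 10000;

SELECT c.*
FROM customers c
JOIN big_spenders b ON b.customer_id = c.customer_id;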