My Impending VR Headset Purchase

The wait is almost over. Most of the VR headset companies out there plan to release a commercial product in the last few months of 2015 or the first quarter of 2016. I have been so tempted to grab a dev kit in the past year but have held off, as I don’t believe I will be doing anything with VR commercially and the consumer editions are likely to be different. That being said, I am still not sure which consumer edition(s) I will end up with when they do come out.

Samsung Gear VR:

I tried this just a few days after the innovator edition became available.  A new $99 consumer edition is on its way this November 2015.  It is definitely amazing (as I expect all the VR products to be) but likely not nearly as capable as dedicated hardware. I was using it with the Galaxy Note. At the time there was not a lot of content available, but the demos that were available really showed the potential of the product. The main weakness that I could see was the phone. With prolonged use, the Galaxy Note was just not up to the task and would overheat. It made for a fun demonstration but probably not sufficient for anyone looking to view lots of VR content (like an entire movie). It definitely worked, though. Google has since put out Cardboard, a design for a phone holder made of cardboard that does much the same thing, and no-name Chinese factories are already churning out thousands of headsets based on it. At the end of the day, this route will never be anything but sub-par because of the phone. If producing VR content cooked a Note 2, I can only imagine what it would do to my LG Volt. I don’t think I will be using my phone for VR any more than I use it as a gaming device. However, some people will.

Oculus Rift:

This is the one that got the ball rolling. It looks to offer both proprietary content as well as compatibility with open-source options; how much compatibility is the question. The company has recently partnered with Microsoft, as the Xbox needed a VR option to compete with the one coming from the PS4 folks. While it may be nice to have for the PC, I have a PlayStation, not an Xbox. I am not counting on the open-source capability being any more than a hack, as it is now, since they might find themselves in a market position where they can stay proprietary. The Xbox partnership makes this more likely.

Here are the recently released Oculus Rift system requirements.  Looks like even with my GTX 770 I am a bit behind.  I’m sure this is not just the Oculus; basically, this is saying that if you want to do VR right, you probably need a nasty graphics card.  I guess I will SLI in another 770 and hope that does the trick.

NVIDIA GTX 970 / AMD 290 equivalent or greater
Intel i5-4590 equivalent or greater
Compatible HDMI 1.3 video output
2x USB 3.0 ports
Windows 7 SP1 or newer

SteamVR, HTC Vive:

This one looks interesting. I expect it to be released before the Oculus and Sony options. The Steam market should make it a solid choice. Controllers are available, and I can imagine there will be a lot of game support early on. Their API is open source, but I am not sure how this relates to the OSVR project. Technology-wise, it includes positional tracking, which I am not sure is the case with the other products. This is accomplished with “base stations” that allow tracking to the millimeter. It looks like it will have some very impressive technical specs overall.

Sony Morpheus, Playstation VR:

Project Morpheus, now going by PlayStation VR. This is likely to be just a solution for the gaming console at first. I would expect hacks allowing use with the PC to come out, but Sony has historically worked hard to thwart hacking efforts. This is likely to be a great product, like most stuff Sony does. I just don’t know if it makes sense to spend $300-500 on something that only works with the PlayStation.

Razer OSVR:

My favorite mouse and keyboard company is working hard on OSVR, which should develop into the VR open-source standard. You can now order something they call a Hacker Dev Kit, and it looks great. The company insists that they are just interested in making VR a reality and that their ultimate focus will be controllers.  Consistent with this statement, the headsets will be sold as kits, and no consumer version has been announced or is planned. Different features, such as infrared tracking, are modularized and optional. Ultimately, the design should be produced by a lot of different companies, which will be great from a pricing standpoint. If the model works, it should offer quite an amazing alternative to closed-source efforts like the Oculus.

Microsoft HoloLens:

This is a different type of product altogether, kind of a mix between VR and Google Glass. It looks like a neat idea; however, the winner here will be determined by the availability of content. Microsoft is big enough to force the issue, but as seen with the Windows Phone, that’s not always enough.  Sometime by the end of this year you should be able to get a developer edition for around 3k… really, $3,000.  I’m out, haha, but I guess some companies will spring for it.  Don’t expect to see a consumer-level product before July of 2016, though.


Apple:

So far they seem to be staying out of this. Is the whole idea of VR beneath those that frequent the Apple ecosystem? Who knows? What is certain is that if they do come up with a product, it will possess a feature, or re-brand a feature, in a way that makes it seem unique. Consider the Apple Time Capsule, which is a 2-3 TB hard drive costing $300-400. Need I say more? Whatever they build will be completely proprietary, no matter how late to market they are. Oh, and it will probably be white, cost over $600, and become annoyingly prevalent at coffee shops in no time.

While I can’t say I will never toss my phone into a $20 holder and see how it does, I am definitely going to grab a dedicated headset some time soon. The only real options at the moment are the Oculus, the HTC/Steam Vive, Morpheus (PlayStation VR), and Razer’s OSVR.

Whether or not I eventually get the Sony product depends totally on the content that is released; it will most definitely not be my first headset. Even if it is eventually capable of working with my PC and the OSVR or Vive stuff, that is likely months away, and I won’t be able to wait. For Xbox One owners, the Oculus is probably a no-brainer. If I did not have a game system already, I might even be persuaded to go Xbox, given that the Oculus should work with both it and the PC. The Xbox is, without much argument, technically inferior to the PlayStation, and I am kind of glad I didn’t know that when I was making that decision. Whether or not I consider the Oculus now depends on how proprietary it ends up being. I am fully expecting them to feel that their Microsoft partnership is going to allow them to corner the market. It’s only fair; after all, wasn’t this whole thing their idea in the first place? I am already preparing myself to be irritated by this company, and that’s not good 🙂

So, it looks like my first headset is coming from Razer or HTC. With these two, I will just have to wait and see. The Steam marketplace is quite an advantage; however, both vendors are pursuing an open-source route, and there will likely be a lot of interoperability. Another consideration for me is that I primarily use Linux. It will be interesting to see which of these products has the best initial Linux support. This will not be important to most people, of course, but could really narrow things down for me. Price is not likely to be an issue, as I expect both of these products to come in at around $300. Actually, the OSVR stuff could end up much cheaper than this when produced by no-name factories later in the year. It could even come down to a comparison of hardware specs, or just which one becomes available first.


Finally, after going to bed really early all week, I woke up this morning at a decent hour without the alarm.  For the past 6 days I’ve lain down by 9 each night.  For the first few, I woke up a lot, only getting back to sleep with the help of a few programming books.  Regardless of how early I crashed, I could not get myself up until around 9 each day. Last night, however, I slept through solidly until 7am and awoke feeling ready to go.  While not all of it was sleep, I seem to have wasted at least 12 hours a day all week trying.  I am sure that I need more sleep than I used to, but I am hoping it will be no more than 6-7 hours a day; who wants to spend their life in bed?  I shudder to think that I may have become one of those people who simply don’t function right without 10 hours unconscious a day.  What a waste of time, right?

Regardless of what my personal optimal sleep amount is, I plan to shorten it with the help of naps.  Tim Ferriss discussed this in his book “The 4-Hour Body”, where he managed to reduce his required sleep to just 2 hours a day, broken up into six 20-minute naps spaced evenly 4 hours apart.  Everything about this was critical: the amount and the spacing.  While I do not have his flexibility or discipline, I believe I can still benefit from his research.  I plan to take 2 naps a day ongoing.  I estimate Tim was able to reduce his total required sleep by approximately 75% with his new schedule.  As you cannot go without sleep, I am assuming some sort of diminishing-returns curve that crosses the total-sleep-time line at some minimally optimal point, in his case 2 hours.  Making a lot of assumptions about the shape of these curves, and assuming the worst about my required sleep (10 hours a night), I am conservatively planning to reduce my required sleep by 4 hours total with 2 evenly spaced 1-hour naps a day.  My schedule will be roughly as follows.

  • Wake at 6am
  • Nap from 1pm to 2pm
  • Nap from 6pm to 7pm
  • Sleep at 2am
  • Repeat

I figure that lunch time will probably be my best shot at a nap during the work week, and the evening nap is early enough not to interfere with band practice.  As afternoons are my least productive time of day, the 1pm nap should get me back on track.  I normally waste this time anyway, so I might as well be asleep.  The 6pm nap should allow me to stay up till 2am so as to get in a productive push at the end of every day.  As a sleep aid, I will continue to use programming textbooks.  I am leaving an hour for each nap rather than the 20 minutes Tim slept, as I just do not see myself falling asleep as quickly.  If I do, I will try to shorten these as well.  I will stick to the schedule and deprive myself of sleep if necessary until I adapt.

In the end, I expect to be much more productive in the afternoon and evening than I am currently, and I will gain 4-6 hours of total time awake each day.  With the increased productivity, I expect a net gain of at least 10 hours of productivity a day compared with my current, horribly inefficient schedule.  I will have 3 “mornings” a day, and morning has clearly become my best time for getting things done.  Detailing it out roughly, this 10 hours will come from having an additional 3 hours of morning time, 4 hours regained in the afternoon that are normally wasted, and 4 hours in the evening programming before I go to bed, which I’ve not been good about in the last year.  I will still be spending 6 hours in bed total, which for many people is standard.  Realistically, this would be impossible for me if lumped into, say, the 12-6am range every day.  Broken into 3 sections, however, it just may be enough.  I am cautiously optimistic regarding both the sleep time and the productivity gains.  The whole plan is open to adjustment, of course.  I may even find it possible to try 3 naps a day depending upon my schedule.  Regardless, if this works even in part, it will be nothing short of life changing.
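The arithmetic behind the plan is easy to sanity-check with a quick script (the hour values are simply the ones assumed above, including the worst-case 10-hour baseline):

```python
# Sanity check of the planned sleep schedule: total time in bed vs. awake.
schedule = {
    "night sleep (2am-6am)": 4,  # hours
    "1pm nap": 1,
    "6pm nap": 1,
}

total_sleep = sum(schedule.values())
hours_awake = 24 - total_sleep
baseline = 10  # assumed worst-case required sleep per night

print(f"Total time in bed: {total_sleep} hours")            # 6
print(f"Time awake: {hours_awake} hours")                   # 18
print(f"Reduction vs. baseline: {baseline - total_sleep}")  # 4
```

Six hours in bed and eighteen awake, a four-hour reduction against the assumed baseline, which matches the plan.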
For more information on Tim’s self experimentation and findings check these out:

Relax Like A Pro: 5 Steps to Hacking Your Sleep
The Secrets of a Fantastic Night’s Rest

Books by self-help author Hung Pham


I just finished reading these 3 books by self-help author Hung Pham.  I found each to be well worth the time and money ($3-4 on Kindle).  His bio states, “Life is short, do stuff that matters.”  This is something most people agree with, and yet they still find themselves at a loss as to how to move forward in their own lives.  Pham’s advice is short, to the point, and easy to follow.  His focus is on “actionable steps” rather than broad philosophical change.  I would imagine most anyone could find something useful in each of these easy reads, and I really encourage you to check them out.  I left a review on Amazon, and rather than write it all over, I will post a copy of it here.

“A search for self help in books on Amazon yields over 400k results. There are millions of people searching for answers and thousands upon thousands responding to and profiting from this demand. I myself, among those millions, have read more than my share of approaches to self help. Through this, over time, I have graduated from envy of those so blessed as to be so in touch with the answers to life’s biggest questions as to be able to coach others, to an almost bitter skepticism of the whole concept of self improvement. I believe my disdain for the whole industry peaked a few weeks ago when I was finally talked into reading and watching “The Secret”. These people had helped themselves to millions from the pockets of lost, despairing individuals who had reached such a point of hopelessness as to buy in to a solution that could perhaps best be described as magic. Rather than simply give up at this point, I doubled down, deciding to once and for all figure things out with or without the help of the self helpers. It was at this time I stumbled across Hung’s books. I immediately identified with his journey, not because it was crafted so as to be easily identified with, but because, in its simple transparency and honest personal approach, it truly mirrors the life experience of so many of us in today’s age. Just pages into this book I realized this was not going to be another collection of life stories in which belief in magic, or some type of quantum theory that could only have been developed in complete isolation from actual science and math, had transformed others into shiny happy folk. His refreshingly pragmatic approach is hopeful and inspiring while being simultaneously logically undeniable. You just know what he is saying will work because it makes so much sense. In fact, much of what he addresses I already knew and believed. My problem, as with most people still searching, is, for whatever reason, a lack of action.
Faced with such truths, it’s no wonder so many people turn to quackery and figure the lottery into their retirement. Hung recognizes all of this. He has been through it himself, solving his own problems in a logical, proactive manner. Rather than the hundreds of pages of repetitive fluff that mark the style of most self-help writers, every line seems to have a purpose, to be necessary. In direct, efficient prose he details actionable steps which, if followed, could only lead to the promised land. Where I had a question, he was there with a clear, specific answer. Where I felt doubt, he responded with reassurance couched in logic. His approach is not exhaustive, but it is definitely sufficient and complete. One cannot help but be convinced that the author sincerely desires to help others achieve success and meaning in their own lives. Read his books, and before moving on to another writer, read them a few more times.”


Distributed Processing Test Server

My Sun server is back up.  It is an X4600 M2 with eight 4-core Opterons and 64 GB of RAM.  It has just two 146 GB 10k SAS drives, so I will be adding more local storage soon.  Last week I played around with doing a bare-metal install of OpenStack.  After running into a few annoying problems with the UAR, I have finally decided to just use VirtualBox on top of Solaris 11.3 for distributed-processing tests.  Probably not near the performance of a bare-metal install, but it should be plenty adequate to test systems before deploying into production elsewhere.  These machines are an incredible bargain nowadays; you can grab one off eBay for under $500 all day long.  The Sun software repository is gone now.  If you want to run Linux natively, you will have to scam the appropriate drivers somewhere else or purchase an Oracle support contract.  Seems kinda lame that they want to charge the new owner for this.  A paid support contract makes sense when you are buying hardware like this new at $50k+ a pop; not so much when grabbing it off eBay for the price of a crappy laptop.  Fortunately, this doesn’t force the thing into retirement, as Solaris installs just fine.  I will typically set up 6 VirtualBox VMs running Ubuntu 14.04 LTS, each with 8 GB to play with.  This is perfect for testing clustered projects (Galera, Spark, Scattersphere).  While it won’t keep up with my i7 desktop for single-core work, it is amazingly quick even running virtuals for problems distributed correctly across its resources.  It’s awesome having something like this to work with at home.  It would have made a hell of a game server back in the day.  Additionally, I might add some NAS and host me some OpenSim.  The little server on top running FreeNAS will do the trick.
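Provisioning six identical VMs by hand gets tedious, so I script it. A minimal sketch that just generates the VBoxManage commands for review rather than running them (the VM name prefix, memory size, and CPU count are my own assumptions; adjust for your box):

```python
# Generate VBoxManage commands for a small cluster of identical Ubuntu VMs.
# Printing instead of executing lets you eyeball the commands first.

def cluster_commands(count=6, mem_mb=8192, cpus=4, prefix="ubuntu-node"):
    """Return the provisioning commands for `count` identical VMs."""
    cmds = []
    for i in range(1, count + 1):
        name = f"{prefix}{i}"
        cmds += [
            f"VBoxManage createvm --name {name} --ostype Ubuntu_64 --register",
            f"VBoxManage modifyvm {name} --memory {mem_mb} --cpus {cpus} --nic1 bridged",
        ]
    return cmds

for cmd in cluster_commands():
    print(cmd)
```

Piping the output through a shell (or swapping `print` for `subprocess.run`) does the actual provisioning; you would still attach disks and the install ISO afterwards.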


In 1945, Isaac Asimov wrote Escape!, a short story featuring a character known as “The Brain”, a self-aware supercomputer.  Such machines have played a prominent role in science fiction ever since.  At the same time, real computer technology has advanced at an astounding pace, often surpassing even the wildest speculation of earlier futurists.  While some areas of artificial intelligence technology have shown great progress, others, including many critical to the development of sentient silicon, have lagged.  Even today, whether a machine will ever possess the sense of self-awareness shown by an average house cat is a matter of serious debate.  It may never happen, or it may one day be man’s greatest accomplishment.

Really, it has been kind of disappointing watching the progress of this technology.  Only now do things seem to be taking off.  As a little kid, I created an Applesoft BASIC game of pick-up sticks using lo-res graphics and a simple AI.  As a philosophy undergrad in college, I wrote papers on theories of free will and their implications for the development of artificial personalities.  Fresh out of school, I started looking into chatterbots and the Turing test.  I was fond of checking out every bot I could find, spending lots of time on the Simon Laven page and pondering the problem of context.  After another 15 years of programming (Java, big data, medical software, etc.), I have come back to the problem with a few ideas, at a point where available hardware and software might finally be up to the task.  Or at least some of the task.  Most significantly, concepts such as horizontal scaling, neural networks, and contextual thesauruses may finally allow computers to replace humans in many decision-making roles and create the appearance of self-awareness, consciousness, or even fondness.  Will this consciousness be real or just appear so?  Does it matter?  After all, are we not as people judged entirely by what can be perceived through observation?  Is the unobservable quality we are looking for really the soul, or magic?

MySQL Performance and Scalability Notes Introduction

The MySQL category of this blog consists of a collection of my notes on MySQL Administration for Performance and Scalability.
I am starting with information related to single node systems. Some of the things I intend to address include:
1. Basic scaling concepts
2. Various MySQL forks and engines
3. How to use logs and tools to understand your database’s performance
4. The configuration of your single node MySQL database for optimal performance
5. Raw and logical backups
6. Disaster recovery procedures and considerations
7. The generation of sample data and the de-identification of data for testing and demo purposes.
My goal here is to present effective MySQL-based solutions to the problems that arise when working with extremely large data sets. I will discuss how to eliminate bottlenecks and tune for maximum performance and scalability in single-node environments. In addition, I intend to cover preparing for and dealing with failures, including high-performance methods of moving and recovering very big databases.  I will note that I’m not much of a web developer, but hopefully this information, while not very artfully delivered, will be of use to someone.  If you are new to MySQL or databases in general, stop reading now!  There are much better ways to spend your weekends.  Oh, and keep in mind that these are basically just notes.  I was not writing a book, just compiling a big list of helpful stuff for myself, and am now sharing it.  As such, I have plagiarized liberally using copy/paste and done very little in the way of proofreading.  If something is technically wrong or debatable, please point it out and I will fix or qualify it.  If something needs credit or even removal, please let me know that too.

MySQL Scaling

Types of scaling:
1. Scaling down
2. Scaling up
3. Scaling out

The scalability of a database system, in its most basic sense, is how much work the database can handle.  This, of course, is very dependent upon what you are actually doing with the database.  For our discussion, making a database more scalable means increasing its ability to handle the job at hand on very large data sets, whether that job is highly transactional or a data warehouse built only for running giant reports.  Increasing your MySQL database’s ability to do its job with these large data sets can be approached in several different ways without resorting to more nodes and machines.  Next we will discuss, in general, the most common approaches to database scaling.


Methods of scaling your database can be grouped into 3 categories: 1. scaling down, 2. scaling up, and 3. scaling out, which has become very popular as of late.

Scaling Down

Eliminating data from the database.
1. Unused data
2. Old data

Decreasing load on the server
1. Optimizing queries
2. Limiting access


Scaling down is a software solution that involves understanding your goals and current implementation well enough to repair code and eliminate unnecessary data, putting less of a load on your system.  Smart query benchmarking and design (which we will not discuss here, but most likely in another post) can drastically decrease the load on your database server in both transactional and data-warehouse scenarios.  We have all experienced the poorly written report (often created by a guy using Access and an ODBC connection) that brings the new $50k server to its knees for hours.  Eliminating this type of mistake is critical.

Often, simply limiting access can be a game changer.  In some cases, users may be running reports against a production server and competing for resources with customers.  Here, creating a data warehouse to offload this function to another machine, or simply saying “don’t do that between these hours”, can scale down the load on your system, increasing its capacity for its intended purposes.  Limiting the amount of data in your database can also provide great performance improvements.  Very often I have found production databases on large systems to be loaded with data that is not being used or is too old to be relevant.  Intelligently reviewing what is actually stored in your database relative to its purpose, and culling out the unnecessary stuff, will go a long way toward providing acceptable performance with your one-node system.
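Culling old rows is itself a load on the server if done as one giant DELETE, so I prefer to generate chunked statements that each remove a bounded number of rows (MySQL's `DELETE ... LIMIT` makes this easy). A hedged sketch; the table and column names here are made up for illustration:

```python
# Build chunked DELETE statements so purging old data doesn't hold locks
# on a huge table for minutes at a time. Table/column names are hypothetical.

def chunked_deletes(table, date_col, cutoff, limit=10000, chunks=3):
    """Return DELETE statements that each remove at most `limit` rows.

    In practice you would loop, re-running the statement until the server
    reports 0 affected rows; here we just emit a fixed number of chunks.
    """
    stmt = f"DELETE FROM {table} WHERE {date_col} < '{cutoff}' LIMIT {limit};"
    return [stmt] * chunks

for sql in chunked_deletes("audit_log", "created_at", "2014-01-01"):
    print(sql)
```

Run the emitted statements during a quiet window, and the purge never blocks production queries for long.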

Scaling Up

1. Upgrading Storage
2. Upgrading Processing
3. Better Servers
4. Optimize Database Configuration
5. Optimize Database Maintenance


Usually when people use this term they are talking about a hardware solution.  Often the most cost-effective option initially, hardware upgrades are a good first approach to scalability.  Buying newer hardware, faster hard drives, more memory, and more or better processors will often be the cheapest way to increase the capacity of your single-node database.  Once you have the best hardware you can afford (or get approved), it is time to turn to software solutions.  Along with hardware upgrades, I also consider software upgrades performed on a single node to be part of “scaling up”.  Reviewing your database’s configuration and optimizing it can help eliminate bottlenecks in much the same way hardware upgrades do.  Using higher-performance maintenance solutions, like the enterprise-level tools offered by Oracle and Percona, can also be critical to maximizing the scalability of your database.

Scaling Out or Horizontal Scaling

1. Separating data into different databases
2. Placing data on different servers
3. Clustering
4. Sharding
5. Distributed processing
6. Document databases and key-value stores


Once everything that can be done in terms of scaling down or up has been considered or exhausted, and it is clear that a one-node solution just won’t cut it, it is time to consider a multi-node configuration, or scaling out.  Solutions of this type include splitting related data logically across different databases and servers, clustering, sharding, distributed-processing solutions like Hadoop and Spark, and switching to NoSQL data stores such as document databases and key-value stores.  This is not a decision that should be taken lightly.  Multi-node solutions are inherently complex and difficult to use and maintain.

Very often I see people jump to scaling out in situations where scaling down and up are more than enough for current and all expected future capacity.  It makes the contractors more money, provides interesting challenges to the team, and satisfies the customer by checking off items in their bucket list of buzzwords.  Often, the customer will insist on a multi-node solution with no real understanding of whether it is necessary.  While giving these customers what they want can be fun and educational, not to mention profitable, I feel more ethical when given the opportunity to provide the right solution to the problem.  There is definitely a time and place for scaling out, but it can be a very expensive and time-consuming mistake for the larger share of projects out there.  For this reason, it is important to completely understand single-node scaling before committing to an expensive, complicated horizontally scaled solution.

That being said, the primary focus of these posts will be the software side of scaling up.  Scaling down (moving lesser-used data, splitting data, and eliminating unnecessary data from your database) is a fairly obvious task which doesn’t require much more explanation than has already been given, and scaling out is a topic for later.  Scaling up your hardware is equally obvious: when a resource-related bottleneck is located, and you have room to upgrade hardware in such a way as to resolve it, it is usually a good idea to do so.

MySQL Forks

1. Oracle
2. Drizzle
3. MariaDB
4. Percona
5. Webscale
(these are the main ones at least)

MySQL is open source.  People, usually former MySQL employees, have branched off the open-source MySQL codebase with their own versions, usually offering some degree of compatibility with the original product.  From my experience, MariaDB and Percona Server are forks that can be considered drop-in replacements for MySQL while adding functionality and performance.

MySQL (Oracle)

This is the original MySQL database, as well as new development since Oracle’s purchase of the product. There is still a “community” edition; however, Oracle seems committed to differentiating the community (free for non-embedded uses) and enterprise (the one you always pay for) licenses by concentrating its development of advanced features in the enterprise version. The community edition is just as good, if not better, than it’s ever been; however, do not expect modern “big-data” style enhancements to come from Oracle for free in the future.  Basically, if you are dealing with 20 million rows or fewer in your largest tables and don’t expect to need advanced features such as clustering or raw backups, the original community edition as Oracle offers it is a fine way to go.  Anything beyond this will require either an investment in Oracle enterprise technology or consideration of another fork.


Drizzle

Drizzle is open source and was forked from the 6.0 development branch of MySQL.  It has a client/server architecture and works with standard SQL.  Drizzle is distributed under the GNU General Public License, with certain drivers and such issued under the BSD license.


MariaDB

MariaDB is a fork designed to offer a higher-performance drop-in replacement for the MySQL product offered by Oracle. It is probably best known for its advanced query optimizer, which really speeds up certain types of queries without the user having to worry about what has changed.  It was forked by some of the original developers of MySQL who were worried about how the Oracle acquisition would turn out.  The plan for Maria is to strive for as high a level of compatibility as possible, including all commands and functions.  While you can use InnoDB with Maria, the default storage engine is Percona’s XtraDB, which offers quite a few performance enhancements and fixes for particular types of queries that InnoDB did not seem to like.

Percona Server

Percona Server is another fork aiming for high compatibility with official MySQL releases.  In addition, they have focused on performance enhancements and monitoring tools to aid in administration and query optimization.  Like MariaDB, Percona Server uses XtraDB.  Percona generously offers for free lots of scaling, monitoring, and backup tools rivaling Oracle’s enterprise-level products, which are, of course, not free.  My usual choice as of late for a one-node product is to install MariaDB with XtraDB tables and use Percona enterprise tools against it when needed for optimization and replication.


WebScaleSQL

WebScaleSQL is a branch developed in a partnership between Facebook, Google, LinkedIn, and Twitter for the purpose of handling very large data sets horizontally scaled across ridiculous amounts of commodity hardware.  All of these companies, being faced with similar challenges, were wise to pool their efforts in the development of this branch.  WebScale is also GPL licensed, and you can access the source on GitHub.


MySQL Engines

1. MyISAM
2. InnoDB and XtraDB
3. Memory
4. Merge
5. Archive
6. Federated
7. NDB

MySQL supports several different table engines, also known as “table types”. A database can have a mix of different engine types or all tables of the same type.
The most common by far are MyISAM and InnoDB.
MyISAM is the older and less sophisticated of the two.  In some ways it is actually downright primitive; however, I still run into MyISAM tables a lot, so it’s good to be familiar with them.
InnoDB improves on MyISAM by offering modern features such as:
1. Full transactional support
2. Automated crash recovery using logged transactions, which we will discuss later
3. And, most importantly (at least to me), row-level locking. MyISAM is limited to table-level locking.
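To make “fully transactional” concrete: with a transactional engine, a failed batch rolls back cleanly instead of leaving half-written rows, which is exactly the guarantee MyISAM lacks. A quick illustration using Python’s built-in sqlite3 as a stand-in (not MySQL, but the rollback behavior being demonstrated is the same idea):

```python
import sqlite3

# Demonstrate transactional rollback: after an error mid-batch, no partial
# data survives. InnoDB gives you this guarantee; MyISAM does not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("INSERT INTO accounts VALUES (1, 100)")
        raise RuntimeError("simulated crash mid-batch")
except RuntimeError:
    pass

rows = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(rows)  # 0: the partial insert was rolled back
```

With a non-transactional engine, that interrupted insert would simply be left sitting in the table, or worse, leave the table corrupted.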


MySQL uses a pluggable storage engine architecture that enables storage engines to be loaded into and unloaded from a running MySQL server.

The default storage engine for MySQL as of 5.5 is InnoDB.  It is ACID compliant, which basically means that it is transactionally safe.  It has configurable rollback as well as crash-recovery capability.  It functions much better than earlier architectures such as MyISAM in DW and reporting uses due to its non-locking reads.  It supports foreign key constraints, which is of course important (just don’t overdo them, because they are also a pain), and uses clustered indexes to speed up queries based on primary keys.

Percona XtraDB is an enhanced version of the InnoDB storage engine, designed to scale better on modern hardware, and including a variety of other features useful in high-performance environments. It is completely backwards compatible with InnoDB.  I have yet to run into an issue in this regard and would swap engines in production without even running a bunch of tests.  It just works.  Like I mentioned before, XtraDB is the default in Percona Server and MariaDB.   However, you can still use InnoDB if you want to in MariaDB (I’m not sure about Percona).
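One wrinkle: since XtraDB is a drop-in replacement, it still reports itself as "InnoDB" in most places.  A sketch of how to check which one you are actually running (the description text varies by build):

```sql
-- The server version string often carries a Percona/MariaDB suffix
SHOW VARIABLES LIKE 'version%';

-- The plugin description for "InnoDB" typically mentions
-- Percona-XtraDB when XtraDB is the engine behind it
SELECT plugin_name, plugin_version, plugin_description
FROM information_schema.plugins
WHERE plugin_name = 'InnoDB';
```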

To skip XtraDB in MariaDB and fall back to stock InnoDB, add something like the following to the [mysqld] section of my.cnf (the exact plugin filename and directory vary by platform and version, so check the MariaDB documentation for yours):

ignore_builtin_innodb
plugin_load=innodb=ha_innodb.so

# The following should not be needed if you are using a mariadb package:

plugin_dir=/usr/lib/mysql/plugin

MyISAM: The default storage engine prior to MySQL 5.5.  If you run into MyISAM tables, you might want to consider upgrading them.  You can do this a table at a time since you can run MyISAM and InnoDB/XtraDB tables in the same database.  In some cases MyISAM actually seems to perform better; however, this is not usually the case (table-level locking, yuck) and, furthermore, when things go wrong with MyISAM, they can really go wrong.  Minor corruption/failure events which more modern engines handle in stride will send you searching for backups with MyISAM.
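The upgrade itself is a one-liner per table.  On a hypothetical table named orders it would look like this (note the ALTER rebuilds the table, so expect it to take a while on big tables):

```sql
-- Check which engine a table currently uses
SHOW TABLE STATUS LIKE 'orders';

-- Convert it in place to InnoDB (or XtraDB, under the same name)
ALTER TABLE orders ENGINE=InnoDB;
```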

Memory Engine: This used to be called the Heap engine.  Basically, as you would expect, it stores everything in RAM, similar to something you would write yourself using hashmaps in Java.  This can really speed things up in certain situations; however, it limits the amount of data you can deal with in one table quite a bit, and the contents are lost whenever the server restarts.
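A sketch of declaring one (the table and columns are made up; something cache-like is the typical use case):

```sql
-- A MEMORY table: fast, but capped by max_heap_table_size
-- and emptied on every server restart
CREATE TABLE session_cache (
    session_id CHAR(32) NOT NULL PRIMARY KEY,
    user_id    INT      NOT NULL,
    last_seen  DATETIME NOT NULL
) ENGINE=MEMORY;
```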

Merge: Allows grouping of identical MyISAM tables so they can be referenced at the same time.  I haven’t really run into a need for this (it has some uses in data warehousing, and I guess transactional safety is less of an issue in reporting).

Archive: Designed for large data sets that do not change often.

Federated: Allows tables that actually live on remote servers to be queried as if they were local, so databases spread across many machines can be viewed virtually as one.  I’ve never tried it this way.
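For illustration, a FEDERATED table pointing at a table on another server might be declared like this (the host, credentials, and table names are all made up; the local column definitions must match the remote table):

```sql
CREATE TABLE remote_orders (
    id    INT NOT NULL,
    total DECIMAL(10,2)
) ENGINE=FEDERATED
CONNECTION='mysql://app_user:secret@db2.example.com:3306/sales/orders';
```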

NDB (NDBCLUSTER):  Another engine designed to spread a database across multiple machines.

Understanding MySQL Performance Bottlenecks and Logs

1. Hardware
2. Software
a. Configuration
b. Tools and Procedures

Informational Logs
1. Slow query log
2. General log
3. Error log

In deciding the best approach to scaling your system, one of the most important concepts is that of bottlenecks.  Bottlenecks are the limitations of your current system that are reached first as load increases and prevent the maximization of other resources.  So, obviously, the identification of bottlenecks is critical to knowing what improvements, if any, will allow your system to perform faster and handle more data.  In this section we will discuss how to analyse your system to identify these bottlenecks and the types of solutions that certain bottlenecks suggest to us.

Informational Logs

MySQL has a few different logs that can be utilized to identify bottlenecks and monitor server status.  Each of these is enabled and managed through the MySQL configuration file, my.cnf.  On Linux this file will most likely be located in /etc/mysql or just /etc.  On Windows, look instead for my.ini in the Windows directory, C:\, or the MySQL installation directory.  MySQL actually looks for the file in this order, so make sure you modify the one that it finds first.  The first type of log we will discuss is the slow query log.  To enable this log, add this in the [mysqld] section of my.cnf:

slow_query_log = 1

slow_query_log_file = <path to slow log>/mysql-slow.log

Additionally, you can specify long_query_time=<some number of seconds> to tell it how long a query has to run for you to consider it slow.  Other options are available in the MySQL documentation.  One of the most helpful is log-queries-not-using-indexes.  Monitor this log over a period of time to identify indexes that need to be added and queries being submitted that need to be optimized.  MySQL is fast so long as the amount of data being operated upon at any given time stays within memory.  Once swapping to disk begins, its performance declines dramatically.  The size of your available RAM in relation to the amount of data being processed is a common source of bottlenecks.  Using this log to identify the queries that are taking a lot of time will give you hints as to whether this type of paging to disk is going on.  Increasing RAM and adjusting queries to minimize paging will allow you to process much larger tables in a reasonable amount of time.
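Putting those options together, a my.cnf fragment for slow-query logging might look like this (the path and threshold are just examples):

```ini
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
# Queries running longer than 2 seconds count as slow
long_query_time = 2
# Also log queries that do full scans for lack of an index
log-queries-not-using-indexes
```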

The general log is enabled by adding general_log = 1 along with general_log_file=<path to general log>/mysql.log (or whatever file name you wish).  This contains information about all queries that have been run, including selects, and who, that is, which database user, attempted to execute them.  Whatever directory you choose for your logs, it is usually good practice to put them all in the same place and make sure that MySQL has the correct permissions necessary to write to that directory.  Otherwise, it is not critical to name or place these files in any particular manner.  Just choose a strategy that works for you.
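In my.cnf form (path is an example; remember this log grows fast on a busy server):

```ini
[mysqld]
general_log = 1
general_log_file = /var/log/mysql/mysql.log
```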

Finally, you can log errors with the error log by adding log-error=<path to error log>/mysql-error.log.  This, as can be expected, logs errors and, if configured to, warnings generated by the database.  Without it, errors will be sent to syslog (typically /var/log/syslog) on Linux.  The error log helps in diagnosing startup and shutdown issues as well as failures and warnings that impact performance and stability.  When dealing with large tables, disk space limitations and slow queries due to memory issues, failures resulting in logged errors will be common until your system is properly tuned.

Use these logs to identify the source of trouble, and once issues have been resolved and everything is running smoothly, you can turn them off if you like.  After all, logging is an expensive operation too.  With MySQL, disk space issues related to operational and informational logs are more common than those due to actual data storage.  Only enable the logging that you need. Due to the wide range of errors and warnings that can be logged in these files, I will not go into detail regarding the interpretation of any particular one.  When looking for the cause of an error or performance problem, check the logs and search on Google for more information.  MySQL is open source and the internet will have answers to most any problem you encounter.