MySQL Replication

We have an update process which currently takes over an hour and means that our DB is unusable during this period.
If I set up replication, would this solve the problem, or would the replicated DB suffer from exactly the same problem of the tables being locked during the update?
Is it possible to have the replicated DB prioritize reading over updating?
Thanks,
D

I suspect that with replication you're just going to be duplicating the issue (unless most of the time is spent in CPU and only results in a couple of records being updated).
Without knowing a lot more about the schema, the distribution and size of the data, and the update process, it's impossible to say how best to resolve the problem - but you might get some mileage out of using InnoDB instead of MyISAM, and out of making sure that the update is implemented as a number of discrete steps (e.g. using stored procedures) rather than a single DML statement.

MySQL gives you the ability to run queries delayed. For example, "INSERT DELAYED INTO ..." will cause the query to be executed only when MySQL has time to take it.
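A minimal sketch, assuming a hypothetical MyISAM table log_table (DELAYED works only with MyISAM, MEMORY, and ARCHIVE tables, and was deprecated in MySQL 5.6 and removed in 5.7):

INSERT DELAYED INTO log_table (user_id, message)
VALUES (42, 'example row');

The statement returns immediately; the row is written whenever the table is free.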
Based on your input, it sounds like you are using MyISAM tables. MyISAM only supports table-level locking, which means that a single update will lock the whole table until the query is completed. InnoDB, on the other hand, uses row-level locking, which will not cause SELECT queries to wait (hang) for updates to complete.
So you have the best chance of a better sysadmin life if you change to InnoDB :)
When it comes to replication, it is pretty normal to separate updates and selects onto two different MySQL servers, and that does tend to work very well. But if you are using MyISAM tables and do a lot of updates, the locking issue itself will still be there.
So my 2 cents: first get rid of MyISAM, then consider replication or a better-scaled MySQL server if the problem still exists. (The key to good performance in MySQL is to have at least as much physical RAM as the combined size of all indexes across all databases.)
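A sketch of both steps; the table name is a placeholder, and the index-size figure from information_schema is approximate (especially for InnoDB):

ALTER TABLE orders ENGINE=InnoDB;

SELECT SUM(index_length) / 1024 / 1024 AS total_index_mb
FROM information_schema.tables;

Compare total_index_mb with the RAM you can give MySQL.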


MySQL query synchronization/locking question

I have a quick question whose answer I can't seem to find online; I'm not sure I'm using the right wording.
Do MySQL databases automatically synchronize queries coming in at around the same time? For example, if I send a query to insert something into a database at the same time another connection sends a query to select something from it, does MySQL automatically lock the database while the insert is happening, and then unlock when it's done, allowing the select query to access it?
Thanks
Do MySQL databases automatically synchronize queries coming in at around the same time?
Yes.
Think of it this way: there's no such thing as simultaneous queries. MySQL always carries out one of them first, then the second one. (This isn't exactly true; the server is far more complex than that. But it robustly provides the illusion of sequential queries to us users.)
If, from one connection you issue a single INSERT query or a single UPDATE query, and from another connection you issue a SELECT, your SELECT will get consistent results. Those results will reflect the state of data either before or after the change, depending on which query went first.
You can even do stuff like this (read-modify-write operations) and maintain consistency.
UPDATE table
SET update_count = update_count + 1,
update_time = NOW()
WHERE id = something
If you must do several INSERT or UPDATE operations as if they were one, you'll need to use the InnoDB engine, and you'll need to use transactions. While a transaction is in progress, SELECTs from other connections simply won't see its changes until it commits. Teaching you to use transactions fully is beyond the scope of a Stack Overflow answer.
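Still, a minimal sketch of the shape of a transaction, using hypothetical accounts and audit_log tables:

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
INSERT INTO audit_log (note) VALUES ('moved 100 from 1 to 2');
COMMIT;
-- if anything goes wrong before COMMIT, issue ROLLBACK instead
-- and all three statements are undone together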
The key to understanding how a modern database engine like InnoDB works is Multi-Version Concurrency Control or MVCC. This is how simultaneous operations can run in parallel and then get reconciled into a consistent "view" of the database when fully committed.
If you've ever used Git, you know how you can have several updates to the same base happening in parallel, and so long as they can all cleanly merge together there's no conflict. The database works like that as well: you can begin a transaction, apply a bunch of operations, and commit it. Should those apply without conflict, the commit is successful. If there's trouble, the transaction is rolled back as if it never happened.
This ability to juggle multiple operations simultaneously is what makes a transaction-capable database engine really powerful. It's an important component necessary to meet the ACID standard.
MyISAM, the original engine from MySQL 3.0, doesn't have any of these features and locks the whole table on any INSERT operation to avoid conflict. It works like you thought it did.
When creating a database in MySQL you have your choice of engine, but using InnoDB should be your default. There's really no reason at all to use MyISAM as any of the interesting features of that engine (e.g. full-text indexes) have been ported over to InnoDB.
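For example, a hypothetical table with the engine made explicit (from MySQL 5.5 on, InnoDB is the default anyway, so the clause mostly serves as documentation):

CREATE TABLE page_views (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    url VARCHAR(255) NOT NULL,
    viewed_at DATETIME NOT NULL
) ENGINE=InnoDB;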

Move existing tables to InnoDB from MyISAM and which one is faster?

A database already has 25-30 tables, all of which are MyISAM. Most of these tables are related to each other, meaning a lot of queries use joins on IDs to retrieve data.
One of the tables contains 7-10 million records, and it becomes slow if I want to perform a search, an update, or even retrieval of all data. I proposed a solution to my boss, saying that converting the tables to InnoDB might give better performance.
I also explained the benefits of InnoDB:
Since we join multiple tables on keys anyway and they are related, it would be better to use foreign keys and have a relational database, which will avoid orphan rows (see the sketch after these points). I found around 10-15k orphan rows in one of the big tables and had to remove them manually.
Support for transactions: we perform big updates from time to time, and if one of them fails partway, we have to replace the entire table with the backed-up one and run the update again to make sure that all queries were executed. With InnoDB we can revert the changes from query 1 if query 2 fails.
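A hedged sketch of the foreign-key point, with hypothetical customers/orders tables (both sides must be InnoDB for the constraint to be enforced):

CREATE TABLE customers (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE orders (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id INT UNSIGNED NOT NULL,
    FOREIGN KEY (customer_id) REFERENCES customers (id)
        ON DELETE CASCADE
) ENGINE=InnoDB;

-- deleting a customer now removes its orders
-- instead of leaving orphan rows behind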
Now the response I got from my boss is that I need to prove that InnoDB will run faster than MyISAM. My question is, won't the two points above improve the speed of the application itself by eliminating orphan rows?
In general, is MyISAM faster than InnoDB?
Note: using MySQL 5.5
You should also mention to your boss probably the biggest benefit you get from InnoDB for large tables under a mixed read/write load: row-level locking rather than table-level locking. This can be a great performance benefit for the application in cases where you see a lot of waits for table locks to be released.
Of course the best way to convince your boss is to prove it. Make copies of your large table and place them in a testing database. Make one version of the data MyISAM and one InnoDB. Then run load testing against them with a load mix that approximates your current DB read/write activity. Find out for yourself if it is better.
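A sketch of setting up the two copies, assuming a hypothetical table big_table; CREATE TABLE ... LIKE copies the structure (and the original engine), so only one copy needs converting:

CREATE TABLE big_table_myisam LIKE big_table;
CREATE TABLE big_table_innodb LIKE big_table;
ALTER TABLE big_table_innodb ENGINE=InnoDB;

INSERT INTO big_table_myisam SELECT * FROM big_table;
INSERT INTO big_table_innodb SELECT * FROM big_table;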
Just updated for your comment that you are on 5.5: with 5.5 it is a no-brainer to use InnoDB. The MyISAM engine has seen basically no improvement over the last several years, and development effort has been focused on InnoDB. InnoDB is THE MySQL engine of choice going forward.

How to speed up MySQL INSERTs/UPDATEs?

I have three large MySQL tables. They are approaching 2 million records. Two of the tables are InnoDB and are currently around 500 MB in size. The other table is MyISAM and is about 2.5 GB.
We run an import script from FileMaker to insert and update records in these tables but lately it has become very slow - only inserting a few hundred records per hour.
What can I do to increase performance to make inserts and updates happen faster?
For INSERT it could have to do with the indexes you have defined on the tables (they have to be updated after each INSERT). Could you post more information about them? And are there triggers set on the tables?
For UPDATE it is a different story: it could be that it's not the record update that is slow, but finding the record. Could you try to change the UPDATE into a SELECT and see if it is still slow? If so, you should investigate your indexes.
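A sketch of that test, with hypothetical table and column names; EXPLAIN will also show whether the lookup uses an index at all:

-- the slow statement:
UPDATE records SET status = 'done' WHERE imported_id = 12345;
-- the same lookup, timed on its own:
SELECT COUNT(*) FROM records WHERE imported_id = 12345;
-- and its execution plan:
EXPLAIN SELECT * FROM records WHERE imported_id = 12345;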
For the InnoDB tables, if it's an acceptable risk, I'd consider changing the innodb_flush_log_at_trx_commit level. There are more details in this blog post, along with some more InnoDB tuning pointers.
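For reference, a sketch of the setting (the default of 1 flushes the log to disk at every commit; 2 or 0 trades up to about a second of committed transactions on a crash for much faster writes):

SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- or persistently in my.cnf:
--   [mysqld]
--   innodb_flush_log_at_trx_commit = 2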
For both engines, batching INSERTs together can speed things up to a point. See the documentation.
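For example, collapsing separate single-row statements into one multi-row statement (table and column are hypothetical):

-- three round trips, three separate index updates:
INSERT INTO records (name) VALUES ('a');
INSERT INTO records (name) VALUES ('b');
INSERT INTO records (name) VALUES ('c');
-- one round trip:
INSERT INTO records (name) VALUES ('a'), ('b'), ('c');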
What version of MySQL are you running? There have been many improvements with the new InnoDB "Plugin" engine, particularly in the concurrency of operations on servers with multiple processors.
Is the query slow when executed on MySQL from the command line?
If you're using the Execute SQL Script step from FileMaker, that connects and disconnects after every call, causing major slowdowns when executing large numbers of queries. We've had clients switch to our JDBC plugin (self-promotion disclaimer here) to avoid this, resulting in major speedups.
It turned out the reason for the slowness was on the FileMaker side of things. Exporting the FileMaker records to a CSV and running INSERT/UPDATE commands resulted in very fast execution.

Is MySQL InnoDB appropriate for this scenario?

My MySQL database contains multiple MyISAM tables, with each table containing millions of rows. There is a heavy insert load on the database, so I cannot issue SELECTs on that live database. Instead, I create a replica of the database for queries and conduct analysis on that.
For the analysis, I need to issue multiple parallel queries. The queries are independent (i.e., the results of the queries are not combined together), but they operate on the same tables most of the time. As far as I know, the entire MyISAM table is locked for each query, which means parallel independent queries would be slow. Ideally, I would prefer an engine that supports "no locking". I am assuming MySQL doesn't have such an engine, so should I use InnoDB? I might be missing a lot of things here. Please suggest the right path to take here.
Thanks
MyISAM read locks are compatible, so the SELECT queries won't lock each other.
If your analysis queries on the replica database don't write, only read, then it's OK to use MyISAM.
You could stick to MyISAM and use INSERT DELAYED:
When a client uses INSERT DELAYED, it gets an okay from the server at once, and the row is queued to be inserted when the table is not in use by any other thread.
Another major benefit of using INSERT DELAYED is that inserts from many clients are bundled together and written in one block. This is much faster than performing many separate inserts.

Will a MySQL table with 20,000,000 records be fast with concurrent access?

I ran a lookup test against an indexed MySQL table containing 20,000,000 records, and according to my results, it takes 0.004 seconds to retrieve a record given an id--even when joining against another table containing 4,000 records. This was on a 3GHz dual-core machine, with only one user (me) accessing the database. Writes were also fast, as this table took under ten minutes to create all 20,000,000 records.
Assuming my test was accurate, can I expect performance to be as snappy on a production server with, say, 200 users concurrently reading from and writing to this table?
I assume InnoDB would be best?
That depends on the storage engine you're going to use and on the read/write ratio.
InnoDB will be better if there are a lot of writes. If it's mostly reads with the very occasional write, MyISAM might be faster. MyISAM uses table-level locking, so it locks up the whole table whenever you need to update. InnoDB uses row-level locking, so you can have concurrent updates on different rows.
InnoDB is definitely safer, so I'd stick with it anyhow.
BTW, remember that RAM is very cheap right now, so buy a lot.
Depends on any number of factors:
Server hardware (Especially RAM)
Server configuration
Data size
Number of indexes and index size
Storage engine
Writer/reader ratio
I wouldn't expect it to scale that well. More importantly, this kind of thing is too important to speculate about. Benchmark it and see for yourself.
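One option is mysqlslap, the load-emulation client that ships with MySQL; a sketch with placeholder figures and query (point it at a disposable copy of the schema, since mysqlslap drops the schema named by --create-schema when it finishes):

mysqlslap --user=root --password \
    --concurrency=200 --iterations=10 \
    --create-schema=throwaway_test \
    --query="SELECT * FROM big_table WHERE id = 123456"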
Regarding the storage engine, I wouldn't dare use anything but InnoDB for a table of that size that is both read from and written to. With MyISAM, any write query that isn't a primitive insert or single-row update will end up locking the table, which yields terrible performance.
There's no reason that MySQL couldn't handle that kind of load without significant issues. There are a number of other variables involved, though (otherwise, it's a 'how long is a piece of string' question). Personally, I've had a number of tables in various databases that are well beyond that range.
How large is each record (on average)
How much RAM does the database server have - and how much of it is allocated to the various configurations of MySQL/InnoDB.
A default configuration may allow for only an 8MB buffer between disk and client (which might work fine for a single user) - but trying to fit a 6GB+ database through that is doomed to failure. That problem was real, btw, and was causing several crashes a day of a database/website till I was brought in to troubleshoot it.
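For illustration, the relevant my.cnf knobs (the sizes are placeholders; on a dedicated database server a common rule of thumb is to give the InnoDB buffer pool the majority of the RAM):

[mysqld]
# main cache for InnoDB data and indexes
innodb_buffer_pool_size = 4G
# index cache for any remaining MyISAM tables
key_buffer_size = 256M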
If you are likely to do a great deal more with that database, I'd recommend getting someone with a little more experience, or at least doing what you can to give it some optimisations. Reading 'High Performance MySQL, 2nd Edition' is a good start, as is looking at tools like Maatkit.
As long as your schema design and DAL are constructed well enough, you understand query optimization inside out, can adjust all the server configuration settings at a professional level, and have "enough" hardware properly configured, yes (except for sufficiently pathological cases).
Same answer for both engines.
You should probably perform a load test to verify, but as long as the indexes were created properly (meaning they are optimized for your query statements), the SELECT queries should perform at an acceptable speed. The INSERTs and/or UPDATEs may be more of a speed issue, though, depending on how many indexes you have and how large they get.