Will Hadoop be faster than MySQL?

I am facing a big data problem. I have a large MySQL (Percona) table which joins on itself once a day and produces about 25 billion rows. I am trying to group together and aggregate all the rows to produce a result. The query is a simple join:
-- This query produces about 25 billion rows
SELECT t1.colA AS 'varchar(45)_1', t2.colB AS 'varchar(45)_2', COUNT(*)
FROM `table` t1
JOIN `table` t2 ON t1.date = t2.date
GROUP BY t1.colA, t2.colB;
The problem is that this process takes more than a week to complete. I have started reading about Hadoop and am wondering whether its MapReduce feature can improve the processing time. I noticed Hive is a nice little add-on that allows SQL-like queries on Hadoop. This all looks very promising, but I am facing the issue that I will only be running on a single machine:
6-core i7-4930K
16GB RAM
128GB SSD
2TB HDD
When I run the query with MySQL, my resources are barely being used: only about 4GB of RAM, and one core working at 100% while the others sit close to 0%. I checked into this and found that MySQL is single-threaded. This is also why Hadoop seems promising, as I noticed it can run multiple mapper functions to better utilize my resources. My question remains: can Hadoop replace MySQL in my situation and produce results within a few hours instead of over a week, even though it will only be running on a single node (although I know it is meant for distributed computing)?

Some very large hurdles for you are going to be that Hadoop is really meant to run on a cluster, not a single server. It can make use of multiple cores, but the amount of resources it consumes is very significant. I have a single system that I use for testing that has Hadoop and HBase. It has the NameNode, Secondary NameNode, DataNode, NodeManager, ResourceManager, ZooKeeper, etc. running, and that is a very heavy load for a single system. Plus, Hive is not a truly SQL-compliant replacement for an RDBMS, so it has to emulate some of the work by creating map/reduce jobs. Those jobs are considerably more disk intensive and use the HDFS file system for mapping the data into virtual tables (terminology may vary). HDFS also has fairly significant overhead because the filesystem is meant to be spread over many systems.
With that said, I would not recommend solving your problem with Hadoop, though I would recommend checking out what it has to offer in the future.
Have you looked into sharding the data, which could take advantage of multiple processors? IMHO this would be a much cleaner solution:
http://www.percona.com/blog/2014/05/01/parallel-query-mysql-shard-query/
You might also look into testing Postgres; it has very good parallel query support built in.
Another idea is to look into an OLAP cube to do the calculations; it can rebuild its indexes on the fly so that only changes are taken into effect. Since you are really dealing with data analytics, this may be an ideal solution.

Hadoop is not a magic bullet.
Whether anything is faster in Hadoop than in MySQL is mostly a question of how good your abilities to write Java code (for the mappers and reducers in Hadoop) or SQL are...
Usually, Hadoop shines when you have a problem that runs well on a single host and you need to scale it up to 100 hosts at the same time. It is not the best choice if you have only a single computer, because it essentially communicates via disk, and writing to disk is not the best way to communicate. The reason it is popular in distributed systems is crash recovery, but you cannot benefit from this: if you lose your single machine, you lose everything, even with Hadoop.
Instead:
figure out if you are doing the right thing. There is nothing worse than spending time optimizing a computation that you do not need. Consider working on a subset first, to figure out whether you are doing the right thing at all... (chances are, there is something fundamentally broken with your query in the first place!)
optimize your SQL. Use multiple queries to split the workload. Reuse earlier results instead of computing them again.
reduce your data. A query that is expected to return 25 billion rows must be expected to be slow! It is just really inefficient to produce a result of this size. Choose a different analysis, and double-check that you are doing the right computation, because most likely you aren't; you are doing much too much work.
build optimal partitions. Partition your data by some key and put each date into a separate table, database, file, whatever... then process the joins one such partition at a time (or, if you have good indexes on your database, just query one key at a time)!
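A minimal sketch of that last point, assuming a scratch table called results and a placeholder date; the source table is just called `table` in the question, so it is backtick-quoted here:

-- Hypothetical accumulation table; colA/colB match the query in the question.
CREATE TABLE results (
  colA VARCHAR(45),
  colB VARCHAR(45),
  cnt  BIGINT
);

-- Run once per date (loop over the distinct dates from application code or a shell script).
INSERT INTO results (colA, colB, cnt)
SELECT t1.colA, t2.colB, COUNT(*)
FROM `table` t1
JOIN `table` t2 ON t1.date = t2.date
WHERE t1.date = '2014-01-01'          -- placeholder date
GROUP BY t1.colA, t2.colB;

-- Because the join only matches rows with the same date, summing the per-date
-- partial counts gives exactly the same result as the original single query.
SELECT colA, colB, SUM(cnt) AS total
FROM results
GROUP BY colA, colB;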

Yes, you are right: MySQL is single-threaded, i.e. 1 thread per query.
With only one machine, I don't think Hadoop will help you much, because even if you utilize all the cores you will have contention over I/O, since all the threads will try to access the same disk.
You mentioned the number of rows, but not the actual size of your table on disk.
How big is your table actually? (In bytes on the HD, I mean.)
Also, you have not mentioned whether the date column is indexed.
It could help if you removed t2.colB or removed the GROUP BY altogether.
GROUP BY does sorting, and in your case that isn't good. You could try to do the grouping in your application instead.
Perhaps you should tell us what exactly you are trying to achieve with your query. Maybe there is a better way to do it.

I had a similarly large query and was able to take advantage of all cores by breaking up my query into multiple smaller ones and running them concurrently. Perhaps you could do the same. Instead of one large query that processes all dates, you could run two (or N) queries that process a subset of dates and write the results into another table.
i.e. if your data spanned from 2012 to 2013
INSERT INTO myResults (colA, colB, colC)
SELECT t1.colA AS 'varchar(45)_1', t2.colB AS 'varchar(45)_2', COUNT(*)
FROM `table` t1
JOIN `table` t2 ON t1.date = t2.date
WHERE t1.date BETWEEN '2012-01-01' AND '2012-12-31'
GROUP BY t1.colA, t2.colB;

INSERT INTO myResults (colA, colB, colC)
SELECT t1.colA AS 'varchar(45)_1', t2.colB AS 'varchar(45)_2', COUNT(*)
FROM `table` t1
JOIN `table` t2 ON t1.date = t2.date
WHERE t1.date BETWEEN '2013-01-01' AND '2013-12-31'
GROUP BY t1.colA, t2.colB;

Related

Why does my MySQL search take so long?

I have a MySQL table with about 150 million rows. When I perform a search with one clause, it takes about 1.5 minutes to get the result. Why does it take so long? I am running Debian in VirtualBox with 2 CPU cores and 4GB of RAM, and I am using MySQL and Apache2.
I am a bit new to this, so I don't know what more information to provide.
Searches, or rather queries, in databases like MySQL or any other relational database management system (RDBMS) are subject to a number of performance factors, including:
Structure of the WHERE clause and Indexing to support it
Contention for system resources such as Memory and CPU
The amount of data being retrieved and how it is delivered
Some quick wins and strategies for each:
Structure of the WHERE clause and Indexing to support it
Order your WHERE clause so that the conditions which cut down the results by the biggest margin come first, going from left to right. Also, use indexes and align those indexes with the order of the columns in the WHERE clause. If you're searching a large table with SELECT * FROM TABLE WHERE SomeID = 5 AND CreatedDate > '10-01-2015', be sure you have an index in place with the columns SomeID and CreatedDate in the order that makes the most sense. If SomeID is a highly selective column, or likely to match far fewer rows than CreatedDate > '10-01-2015', then write the query in that order and create an index with the columns in the same order.
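As a rough sketch against the hypothetical table and columns from the example above:

-- Composite index matching the filter columns, most selective column first.
CREATE INDEX idx_someid_createddate ON `TABLE` (SomeID, CreatedDate);

-- Verify that the optimizer actually uses it (date written in MySQL's YYYY-MM-DD format).
EXPLAIN SELECT * FROM `TABLE`
WHERE SomeID = 5 AND CreatedDate > '2015-10-01';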
Contention for system resources such as Memory and CPU
Are you using a table that is constantly updated? There are transactional databases (OLTP) and databases meant for analysis (OLAP). If you're hitting a table that is being constantly updated you may be slowing things down for everyone including yourself. Remember you're a citizen in an environment and as such you need to respect the other use cases. This includes knowing a bit about how the system is used, what resources are available and making sure you are mindful of how your queries will affect others.
The amount of data being retrieved and how it is delivered
Even the best query cannot escape the time it takes to get data from one place to another. You can optimize the settings of the RDBMS, have incredible bandwidth, etc., but many factors, including disk IOPS and network bandwidth, all play into the cost of doing business. Make sure you're using the right protocols for transfer, have good disk IOPS, and follow the best practices for MySQL.
Some final thoughts:
If you're using AWS and hosting your database in the cloud, you may consider using Amazon Aurora, which is a MySQL-compatible RDBMS that is substantially faster than MySQL.

How to make sqlite in-memory db join query as fast as MySQL

I have a very complex SQL query - the logic is simple, but I need to join 17 tables (each has 10-20 fields and 100 to 1 million records), so there are a lot of (LEFT) JOINs and WHERE clauses.
SELECT table1.column_A,
       table2.column_B,
       table3.column_C,
       table4.column_D,
       ....
FROM table1
LEFT JOIN table2 ON table1.column_a = table2.column_b
JOIN table3 ON table3.column_c = table1.column_d
LEFT JOIN table4 ON table4.column_e = table3.column_f
    AND LENGTH(table4.column_g) > 6 AND (table4.column_h IN (123,234))
LEFT JOIN ....
....
WHERE table1.column_i = 21
  AND (table1.column_j IS NULL OR DATE(table1.column_k) <> DATE(table1.column_l))
The above query takes only 5 seconds to run in MySQL. But when I run it in a SQLite in-memory db (using Perl on Linux), it takes about 20 minutes. This is still acceptable.
When I add an ORDER BY clause (which I do need), the execution time increases dramatically.
ORDER BY table1.column_m, table6.column_n, table7.column_o IS NULL;
It takes 40 seconds in MySQL. In the SQLite in-memory db (using Perl on Linux), I waited for over an hour, but it still didn't finish.
What kind of tuning do I need to do to make the query faster? My threshold is within 1 hour.
The reason I'm making it an in-memory db is that I receive SQL-generated normalized data, but we need to load the data into a non-SQL db in the end, so I don't want to create an intermediate SQL db just for data loading - that makes the code ugly and increases maintenance complexity. Plus, the current timing issue I'm facing is a one-time thing. In the future, on a daily basis, the data volume we receive will be much, much smaller (less than 1% of what I have today).
Thanks in advance for your help!!
Your ORDER BY clause is on columns from 3 different tables. No amount of query optimization or index creation is going to change the fact that the DBMS must do an external sort, after (or as) the result set is produced. If you've constrained the amount of memory that SQLite can use (I'm not a SQLite expert, but I assume this is at least possible, if not required), then that could be the cause (e.g. it's going through some incredible machinations to get the job done within its limits). Or it's just hung. What's the CPU utilization for that hour you were waiting? What about I/O (is it thrashing because there was no limit on the amount of memory SQLite can use, as Sinan alluded to)?
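If you stay with SQLite, two cheap things to try before anything else are giving it a much bigger cache with in-memory temporary storage, and indexing the join columns inside the in-memory database. A rough sketch (the index names and column choices are assumptions based on the query above):

-- Bigger page cache (value is in pages) and in-memory temp B-trees for sorting.
PRAGMA cache_size = 500000;
PRAGMA temp_store = MEMORY;

-- Index the columns used in the JOIN conditions; without them SQLite may fall
-- back to nested full scans of the larger tables.
CREATE INDEX idx_t2_b ON table2(column_b);
CREATE INDEX idx_t3_c ON table3(column_c);
CREATE INDEX idx_t4_e ON table4(column_e);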
To make your query faster, you need to make changes in one or more of the following areas:
the way the query is written
the database table structure
proper indexing
All of this is covered at http://www.perlmonks.org/?node_id=273952

Run analytics on huge MySQL database

I have a MySQL database with a few (five, to be precise) huge tables. It is essentially a star schema based data warehouse. The table sizes range from 700GB (the fact table) down to 1GB, and the whole database comes to about 1 terabyte. Now I have been given the task of running analytics on these tables, which might even include joins.
A simple analytical query on this database could be "find the number of smokers per state and display it in descending order"; this requirement could be converted into a simple query like:
SELECT state, COUNT(smokingStatus) AS smokers
FROM abc
WHERE smokingStatus = 'current smoker'
GROUP BY state ....
This query (and many others of the same nature) takes a lot of time to execute on this database; the time taken is on the order of tens of hours.
This database is also heavily used for insertion which means every few minutes there are thousands of rows getting added.
In such a scenario how can I tackle this querying problem?
I have looked into Cassandra, which seemed easy to implement, but I am not sure it will be as easy for running analytical queries on the database, especially when I have to use WHERE clauses and GROUP BY constructs.
I have also looked into Hadoop, but I am not sure how I can implement RDBMS-type queries. I am also not sure I want to invest right away in at least three machines for the name node, ZooKeeper and data nodes!! Above all, our company prefers Windows-based solutions.
I have also thought of pre-computing all the data in simpler summary tables, but that limits my ability to run different kinds of queries.
Are there any other ideas which I can implement?
EDIT
Following is the MySQL environment setup:
1) master-slave setup
2) master for inserts/updates
3) slave for reads and running stored procedures
4) all tables are innodb with files per table
5) indexes on string as well as int columns.
Pre-calculating values is an option, but the requirements for this kind of ad-hoc aggregation keep changing.
Looking at this from the position of attempting to make MySQL work better rather than positing an entirely new architectural system:
Firstly, verify what's really happening. EXPLAIN the queries which are causing issues, rather than guessing what's going on.
Having said that, I'm going to guess as to what's going on since I don't have the query plans. I'm guessing that (a) your indexes aren't being used correctly and you're getting a bunch of avoidable table scans, (b) your DB servers are tuned for OLTP, not analytical queries, (c) writing data while reading is causing things to slow down greatly, (d) working with strings just sucks and (e) you've got some inefficient queries with horrible joins (everyone has some of these).
To improve things, I'd investigate the following (in roughly this order):
Check the query plans, make sure the existing indexes are being used correctly - look at the table scans, make sure the queries actually make sense.
Move the analytical queries off the OLTP system - the tunings required for fast inserts and short queries are very different to those for the sorts of queries which potentially read most of a large table. This might mean having another analytic-only slave, with a different config (and possibly table types - I'm not sure what the state of the art with MySQL is right now).
Move the strings out of the fact table - rather than having the smoking status column hold string values of (say) 'current smoker', 'recently quit', 'quit 1+ years', 'never smoked', push these values out to another table and keep integer keys in the fact table (this will help the size of the indexes too); a sketch follows after this list.
Stop the tables from being updated while the queries are running - if the indexes are moving while the query is running I can't see good things happening. It's (luckily) been a long time since I cared about MySQL replication, so I can't remember if you can batch up the writes to the analytical query slave without too much drama.
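A minimal sketch of the "move the strings out of the fact table" step mentioned above; the table and column names (fact_table, smokingStatus) are assumptions, not the real schema:

-- Small lookup table for the status strings.
CREATE TABLE smoking_status_dim (
  id    TINYINT UNSIGNED NOT NULL PRIMARY KEY,
  label VARCHAR(32)      NOT NULL
);

INSERT INTO smoking_status_dim (id, label) VALUES
  (1, 'current smoker'),
  (2, 'recently quit'),
  (3, 'quit 1+ years'),
  (4, 'never smoked');

-- Add the integer key to the fact table, backfill it, then drop the string column.
ALTER TABLE fact_table ADD COLUMN smoking_status_id TINYINT UNSIGNED;

UPDATE fact_table f
JOIN smoking_status_dim d ON d.label = f.smokingStatus
SET f.smoking_status_id = d.id;

ALTER TABLE fact_table DROP COLUMN smokingStatus;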
If you get to this point without solving the performance issues, then it's time to think about moving off MySQL. I'd look at Infobright first - it's open source/$$ and based on MySQL, so it's probably the easiest to put into your existing system (make sure the data is going to the Infobright DB, then point your analytical queries to the Infobright server and keep the rest of the system as it is, job done) - or see if Vertica ever releases its Community Edition. Hadoop+Hive has a lot of moving parts; it's pretty cool (and great on the resume), but if it's only going to be used for the analytic portion of your system it may take more care and feeding than the other options.
1 TB is not that big. MySQL should be able to handle that. At least simple queries like that shouldn't take hours! Can't be very helpful without knowing the larger context, but I can suggest some questions that you might ask yourself, mostly related to how you use your data:
Is there a way you can separate the reads and writes? How many reads do you do per day, and how many writes? Can you live with some lag, e.g. write to a new table each day and merge it into the existing table at the end of the day?
What are most of your queries like? Are they mostly aggregation queries? Can you do some partial aggregation beforehand? Can you pre-calculate the number of new smokers every day? (A sketch follows after this list.)
Can you use Hadoop for the aggregation process above? Hadoop is quite good at that kind of thing. Basically, use Hadoop just for daily or batch processing and store the results in the DB.
On the DB side, are you using InnoDB or MyISAM? Are the indexes on string columns? Can you make them ints, etc.?
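A sketch of the daily pre-calculation idea, assuming the fact table has a datetime column (called created_at here purely for illustration):

-- Roll-up table keyed by day and state.
CREATE TABLE daily_smokers_by_state (
  stat_date DATE        NOT NULL,
  state     VARCHAR(45) NOT NULL,
  smokers   INT         NOT NULL,
  PRIMARY KEY (stat_date, state)
);

-- Run once per day (cron or an event) against only yesterday's rows.
INSERT INTO daily_smokers_by_state (stat_date, state, smokers)
SELECT DATE(created_at), state, COUNT(*)
FROM abc
WHERE smokingStatus = 'current smoker'
  AND created_at >= CURRENT_DATE - INTERVAL 1 DAY
  AND created_at <  CURRENT_DATE
GROUP BY DATE(created_at), state;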
Hope that helps
MySQL has a serious limitation that prevents it from performing well in such scenarios: the lack of parallel query capability. It cannot utilize multiple CPUs for a single query.
Hadoop has an RDBMS-like addition called Hive. It is an application capable of translating queries written in HiveQL (an SQL-like language) into MapReduce jobs. Since it is actually a small addition on top of Hadoop, it inherits Hadoop's linear scalability.
I would suggest deploying Hive alongside MySQL, replicating the daily data to it, and running the heavy aggregations against it. That will offload a serious part of the load from MySQL. You still need MySQL for the short interactive queries, usually backed by indexes; Hive is inherently non-interactive, so each query will take at least a few dozen seconds.
Cassandra is built for key-value access and does not have scalable GROUP BY capability built in. There is DataStax's Brisk, which integrates Cassandra with Hive/MapReduce, but it might not be trivial to map your schema into Cassandra, and you still would not get the flexibility and indexing capabilities of an RDBMS.
As a bottom line, Hive alongside MySQL should be a good solution.
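A rough HiveQL sketch of that split; the column names and HDFS location are assumptions:

-- Hive-side copy of the fact data, loaded daily from the MySQL export.
CREATE EXTERNAL TABLE fact_daily (
  state          STRING,
  smoking_status STRING,
  created_at     STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/warehouse/fact_daily';

-- The heavy aggregation becomes a MapReduce job; expect tens of seconds of latency.
SELECT state, COUNT(*) AS smokers
FROM fact_daily
WHERE smoking_status = 'current smoker'
GROUP BY state
ORDER BY smokers DESC;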

MySQL slows down after INSERT

I have run into performance issues with my web application and found that the bottleneck is the db. The app is running on a LAMP server (VPS) with 4 CPUs and 2GB of RAM.
After the insertion of a new record into the DB (a table with around 100,000 records), select queries slow down significantly for a while (sometimes for several minutes). I thought the problem was reindexing, but there is practically no activity on the VPS after the insert. There is plenty of memory left, no need for swapping, and the CPU is idle.
Truth is, selects are quite complex:
SELECT COUNT(A.id), B.title FROM B JOIN A .... WHERE ..lot of stuff..
Both A and B have about 100K records. A has many columns, B only a few, but B is a tree structure represented by a nested set. B doesn't change very often, but A does. The WHERE conditions are mostly covered by indexes. There are usually about 10-30 rows in the result set.
Are there any optimizations I could perform?
You might want to include your "lot of stuff"... you could be doing LIKE comparisons or joining on unindexed varchar columns :)
You'll also need to look at indexing the columns that are used heavily.
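A quick sketch of both checks; the join and filter columns below are placeholders for the real "lot of stuff":

-- See whether MySQL is using indexes or falling back to full scans.
EXPLAIN
SELECT COUNT(A.id), B.title
FROM B
JOIN A ON A.b_id = B.id        -- placeholder join condition
WHERE A.some_flag = 1          -- placeholder filter
GROUP BY B.title;

-- If the join/filter columns come up as type ALL (full scan), index them.
CREATE INDEX idx_a_bid_flag ON A (b_id, some_flag);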
First thing is: DO NOT trust any CPU/RAM etc. measurements inside a VPS - they can be wrong, since they don't take into account what is going on elsewhere on the machine (in other VPSs)!
As for the performance:
Check the query plans for all your SQL statements... use a profiler on the app itself and see where the bottlenecks are...
Another point is to check the configuration of your MySQL DB... is there any replication going on (that might cause a slowdown too)? Does the DB have enough RAM? Is the DB on a different machine/VPS or in the same VPS?

database for analytics

I'm setting up a large database that will generate statistical reports from incoming data.
The system will for the most part operate as follows:
1) Approximately 400k-500k rows - about 30 columns, mostly varchar(5-30) and datetime - will be uploaded each morning. It's approximately 60MB in flat-file form, but grows steeply in the DB with the addition of suitable indexes.
2) Various statistics will be generated from the current day's data.
3) Reports from these statistics will be generated and stored.
4) The current data set will get copied into a partitioned history table.
5) Throughout the day, the current data set (which was copied, not moved) can be queried by end users for information that is not likely to include constants, but relationships between fields.
6) Users may request specialized searches from the history table, but the queries will be crafted by a DBA.
7) Before the next day's upload, the current data table is truncated.
This will essentially be version 2 of our existing system.
Right now, we're using MySQL 5.0 MyISAM tables (InnoDB was killing us on space usage alone) and suffering greatly on #6 and #4. #4 is currently not a partitioned table, as 5.0 doesn't support it. In order to get around the tremendous amount of time (hours and hours) it takes to insert records into the history table, we write each day to an unindexed history_queue table, and then on the weekends, during our slowest time, write the queue to the history table. The problem is that any historical queries generated during the week can be several days behind. We can't reduce the indexes on the historical table or its queries become unusable.
We're definitely moving to at least MySQL 5.1 (if we stay with MySQL) for the next release, but strongly considering PostgreSQL. I know that debate has been done to death, but I was wondering if anybody had any advice relevant to this situation. Most of the research revolves around web-site usage. Indexing is really our main beef with MySQL, and it seems like PostgreSQL may help us out through partial indexes and indexes based on functions.
I've read dozens of articles about the differences between the two, but most are old. PostgreSQL has long been labeled "more advanced, but slower" - is that still generally the case comparing MySQL 5.1 to PostgreSQL 8.3 or is it more balanced now?
Commercial databases (Oracle and MS SQL) are simply not an option - although I wish Oracle was.
NOTE on MyISAM vs Innodb for us:
We were running InnoDB and, for us, we found it MUCH slower, like 3-4 times slower. BUT, we were also much newer to MySQL, and frankly I'm not sure we had the db tuned appropriately for InnoDB.
We're running in an environment with a very high degree of uptime - battery backup, fail-over network connections, backup generators, fully redundant systems, etc. So the integrity concerns with MyISAM were weighed and deemed acceptable.
In regards to 5.1:
I've heard the stability concerns with 5.1. Generally, I assume that any recently released (within the last 12 months) piece of software is not rock-solid stable. The updated feature set in 5.1 is just too much to pass up given the chance to re-engineer the project.
In regards to PostgreSQL gotchas:
COUNT(*) without any where clause is a pretty rare case for us. I don't anticipate this being an issue.
COPY FROM isn't nearly as flexible as LOAD DATA INFILE but an intermediate loading table will fix that.
My biggest concern is the lack of INSERT IGNORE. We've often used it when building a processing table so that we could avoid putting multiple records in twice and then having to do a giant GROUP BY at the end just to remove the dups. I think it's used infrequently enough for the lack of it to be tolerable.
My workplace tried a pilot project to migrate historical data from an ERP setup. The size of the data is on the small side, only 60GB, covering ~21 million rows, the largest table having 16 million rows. There are an additional ~15 million rows waiting to come into the pipe, but the pilot has been shelved due to other priorities. The plan was to use PostgreSQL's "Job" facility to schedule queries that would regenerate data on a daily basis suitable for use in analytics.
Running simple aggregates over the large 16-million record table, the first thing I noticed is how sensitive it is to the amount of RAM available. An increase in RAM at one point allowed for a year's worth of aggregates without resorting to sequential table scans.
If you decide to use PostgreSQL, I would highly recommend re-tuning the config file, as it tends to ship with the most conservative settings possible (so that it will run on systems with little RAM). Tuning takes a little bit, maybe a few hours, but once you get it to a point where response is acceptable, just set it and forget it.
Once you have the server-side tuning done (and it's all about memory, surprise!) you'll turn your attention to your indexes. Indexing and query planning also requires a little effort but once set you'll find it to be effective. Partial indexes are a nice feature for isolating those records that have "edge-case" data in them, I highly recommend this feature if you are looking for exceptions in a sea of similar data.
Lastly, use the table space feature to relocate the data onto a fast drive array.
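To make those three points concrete, here is a hedged sketch; the memory figures, index predicate and paths are invented for illustration, and the parameters are the usual shared_buffers / work_mem / maintenance_work_mem / effective_cache_size knobs:

-- postgresql.conf (rough starting points for a dedicated analytics box, not a prescription):
--   shared_buffers       = 1GB      -- roughly 25% of RAM
--   work_mem             = 64MB     -- per sort/hash, so keep concurrency in mind
--   maintenance_work_mem = 512MB    -- speeds up CREATE INDEX and bulk loads
--   effective_cache_size = 3GB      -- tells the planner how much the OS will cache

-- Partial index for "edge case" rows, so the common case stays out of the index.
CREATE INDEX idx_current_exceptions
    ON current_data (record_date)
    WHERE status = 'exception';

-- Put the hot tables on the fast drive array via a tablespace.
CREATE TABLESPACE fastdisk LOCATION '/mnt/fastarray/pgdata';
ALTER TABLE current_data SET TABLESPACE fastdisk;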
In my practical experience I have to say that PostgreSQL had quite a performance jump from 7.x/8.0 to 8.1 (for our use cases, in some instances 2x-3x faster); from 8.1 to 8.2 the improvement was smaller but still noticeable. I don't know the improvements between 8.2 and 8.3, but I expect there is some performance improvement there too; I haven't tested it so far.
Regarding indexes, I would recommend dropping them and only creating them again after filling the database with your data; it is much faster.
Further, tune the crap out of your PostgreSQL settings; there is so much to gain from it. The default settings are at least sensible now; in pre-8.2 times pg was optimized for running on a PDA.
In some cases, especially if you have complicated queries, it can help to deactivate nested loops in your settings, which forces pg to use better-performing approaches on your queries.
Ah, yes, did I say that you should go for postgresql?
(An alternative would be firebird, which is not so flexible, but in my experience it is in some cases performing much better than mysql and postgresql)
In my experience InnoDB is slightly faster for really simple queries, pg for more complex queries. MyISAM is probably even faster than InnoDB for retrieval, but perhaps slower for indexing/index repair.
These mostly varchar fields, are you indexing them with char(n) indexes?
Can you normalize some of them? It'll cost you on the rewrite, but may save time on subsequent queries, as your row size will decrease, thus fitting more rows into memory at one time.
ON EDIT:
OK, so you have two problems: query time against the daily table, and updating the history, yes?
As to the second: in my experience, MySQL MyISAM is bad at re-indexing. On tables the size of your daily table (0.5 to 1M records, with rather wide (denormalized flat input) records), I found it was faster to re-write the table than to insert and wait for the re-indexing and attendant disk thrashing.
So this might or might not help:
CREATE TABLE new_table SELECT * FROM old_table;
copies the table but not the indexes.
Then insert the new records as normal. Then create the indexes on the new table and wait a while. Drop the old table, and rename the new table to the old table's name.
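In MySQL terms, the rest of that trick looks roughly like this (the index and column names are made up):

-- Add the indexes only after all rows are in place.
ALTER TABLE new_table ADD INDEX idx_record_date (record_date);

-- Swap the tables.
DROP TABLE old_table;
RENAME TABLE new_table TO old_table;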
Edit: In response to the fourth comment: I don't know that MyISAM is always that bad. I know in my particular case, I was shocked at how much faster copying the table and then adding the indexes was. As it happened, I was doing something similar to what you were doing, copying large denormalized flat files into the database and then renormalizing the data. But that's an anecdote, not data. ;)
(I also think I found that overall InnoDB was faster, given that I was doing as much inserting as querying. A very special case of database use.)
Note that copying with a select a.*, b.value as foo join ... was also faster than an update a.foo = b.value ... join, which follows, as the update was to an indexed column.
What is not clear to me is how complex the analytical processing is. In my opinion, having 500K records to process should not be such a big problem; in terms of analytical processing, it is a small record set.
Even if it is a complex job, if you can leave it overnight to complete (since it is a daily process, as I understood from your post), it should still be enough.
Regarding the resulting table, I would not reduce its indexes. Again, you can do the loading overnight, including the index refresh, and have the resulting, updated data set ready for use in the morning, with quicker access than with raw (non-indexed) tables.
I saw PostgreSQL used in a data-warehouse-like environment, working with the setup I've described (data transformation jobs over night), with no performance complaints.
I'd go for PostgreSQL. You need, for example, partitioned tables, which have been in stable Postgres releases since at least 2005 - in MySQL they are a novelty. I've heard about stability issues with the new features of 5.1. With MyISAM you have no referential integrity, no transactions, and concurrent access suffers a lot - read the blog entry "Using MyISAM in production" for more.
And Postgres is much faster on complicated queries, which will be good for your #6.
There is also a very active and helpful mailing list, where you can get support even from core Postgres developers for free. It has some gotchas though.
The Infobright people appear to be doing some interesting things along these lines:
http://www.infobright.org/
If Oracle is not considered an option because of cost issues, then Oracle Express Edition is available for free (as in beer). It has size limitations, but if you do not keep history around for too long anyway, it should not be a concern.
Check your hardware. Are you maxing the IO? Do you have buffers configured properly? Is your hardware sized correctly? Memory for buffering and fast disks are key.
If you have too many indexes, it'll slow inserts down substantially.
How are you doing your inserts? If you're doing one record per INSERT statement:
INSERT INTO blah VALUES (?, ?, ?, ?)
and calling it 500K times, your performance will suck. I'm surprised it's finishing in hours. With MySQL you can insert hundreds or thousands of rows at a time:
INSERT INTO blah VALUES
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?);
If you're doing one insert per web request, you should consider logging to the file system and doing bulk imports via a crontab. I've used that design in the past to speed up inserts. It also means your web pages don't depend on the database server.
It's also much faster to use LOAD DATA INFILE to import a CSV file. See http://dev.mysql.com/doc/refman/5.1/en/load-data.html
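For example (the path, table name and CSV layout are assumptions):

-- Bulk-load a day's CSV into a staging table in one statement.
LOAD DATA INFILE '/var/tmp/daily_upload.csv'
INTO TABLE staging_daily
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;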
The other thing I can suggest is be wary of the SQL hammer -- you may not have SQL nails. Have you considered using a tool like Pig or Hive to generate optimized data sets for your reports?
EDIT
If you're having trouble batch importing 500K records, you need to compromise somewhere. I would drop some indexes on your master table, then create optimized views of the data for each report.
Have you tried playing with the MyISAM key buffer (the key_buffer_size parameter)? It is very important for index update speed.
Also, if you have indexes on date, id, etc., which are correlated columns, you can do:
INSERT INTO archive SELECT .. FROM current ORDER BY id (or date)
The idea is to insert the rows in order, in this case the index update is much faster. Of course this only works for the indexes that agree with the ORDER BY... If you have some rather random columns, then those won't be helped.
but strongly considering PostgreSQL.
You should definitely test it.
it seems like PostgreSQL may help us out through partial indexes and indexes based on functions.
Yep.
I've read dozens of articles about the differences between the two, but most are old. PostgreSQL has long been labeled "more advanced, but slower" - is that still generally the case comparing MySQL 5.1 to PostgreSQL 8.3 or is it more balanced now?
Well that depends. As with any database,
IF YOU DON'T KNOW HOW TO CONFIGURE AND TUNE IT, IT WILL BE SLOW
If your hardware is not up to the task, it will be slow
Some people who know MySQL well and want to try Postgres don't factor in the fact that they need to re-learn some things and read the docs; as a result, a really badly configured Postgres gets benchmarked, and that can be pretty slow.
For web usage, I've benchmarked a well-configured Postgres on a low-end server (Core 2 Duo, SATA disk) with a custom forum benchmark that I wrote, and it spat out more than 4000 forum web pages per second, saturating the database server's gigabit Ethernet link. So if you know how to use it, it can be screaming fast (InnoDB was much slower due to concurrency issues). "MyISAM is faster for small simple selects" is total bull; Postgres will zap a "small simple select" in 50-100 microseconds.
Now, for your usage, you don't care about that ;)
You care about the ways your database can compute Big Aggregates and Big Joins, and a properly configured postgres with a good IO system will usually win against a MySQL system on those, because the optimizer is much smarter, and has many more join/aggregate types to choose from.
My biggest concern is the lack of INSERT IGNORE. We've often used it when building a processing table so that we could avoid putting multiple records in twice and then having to do a giant GROUP BY at the end just to remove the dups. I think it's used infrequently enough for the lack of it to be tolerable.
You can use a GROUP BY, but if you want to insert into a table only records that are not already there, you can do this :
INSERT INTO target SELECT .. FROM source LEFT JOIN target ON (...) WHERE target.id IS NULL
In your use case you have no concurrency problems, so that works well.