I have two big tables for example:
'tbl_items' and 'tbl_items_transactions'
The first table keeps some item metadata, maybe 20 varchar columns, with millions of rows, and the second table keeps every transaction on the first table.
For example, if a user inserts a new record into tbl_items, a new record is automatically added to tbl_items_transactions with the same data plus date, username and transaction type, to keep a history of each row.
So in this scenario the two tables have the same columns, but tbl_items_transactions has 3 extra columns (date, username, transaction_type) to keep the history of tbl_items.
Now assume we have 1000 users who insert, update and delete tbl_items records through a web application, so these two tables grow very quickly (maybe a billion rows in tbl_items_transactions).
I have tried MySQL, MariaDB and PostgreSQL. They are all very good, but once the tables grow and millions of rows have been inserted, some select queries on tbl_items_transactions become slow, though sometimes PostgreSQL is faster than MySQL or MariaDB.
Now I think I'm doing something wrong. If you were me, would you use MariaDB, PostgreSQL or something else, and would you structure your database the way I did?
Your setup is wrong.
You should not duplicate the columns from tbl_items in tbl_items_transactions, rather you should have a foreign key in the latter table pointing to the former.
That way data integrity is preserved, and tbl_items_transactions will be much smaller. This technique is called normalization.
To speed up queries once the tables get large, define indexes that match your WHERE and JOIN conditions.
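For illustration, a minimal sketch of that layout (the transaction columns come from the question above; the other column names and types are just placeholders):

CREATE TABLE tbl_items (
    item_id BIGINT AUTO_INCREMENT PRIMARY KEY,
    -- the ~20 varchar metadata columns from the question would go here
    item_name VARCHAR(255) NOT NULL
);

CREATE TABLE tbl_items_transactions (
    transaction_id BIGINT AUTO_INCREMENT PRIMARY KEY,
    item_id BIGINT NOT NULL,
    transaction_date DATETIME NOT NULL,
    username VARCHAR(64) NOT NULL,
    transaction_type VARCHAR(16) NOT NULL,
    FOREIGN KEY (item_id) REFERENCES tbl_items (item_id)
);

-- an index matching the typical WHERE/JOIN conditions on the history table
CREATE INDEX idx_tx_item_date ON tbl_items_transactions (item_id, transaction_date);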
I'm using a MySQL database and have to perform some select queries on large/huge tables (e.g. 267,736 rows and 30 columns).
Query details:
Only select queries (the data in the table is fixed, never an update, insert or delete)
Select query on all the columns (business requirement)
Mostly limit the number of rows (LIMIT 10 to all rows -> user can choose)
Could be ordered by one or multiple columns (creation of indexes here will not help since the user can order by any column he likes)
Could be filtered by a value the user chooses (where filter on one or more columns)
Currently the queries take up to 2 seconds, which is too long.
Is there a way to speed them up?
Which storage engine should I use: InnoDB/MyISAM/...
Should I have a primary key, even if I will never use it?
...?
You should (must actually) use indexes.
Create indexes on all columns with which WHERE or ORDER BY is going to be used. Also study and use EXPLAIN to see the impact of the indexes and to optimize your queries.
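For example, a hedged sketch assuming a hypothetical table big_table that is filtered on status and ordered by created_at:

CREATE INDEX idx_big_table_status_created ON big_table (status, created_at);

EXPLAIN
SELECT * FROM big_table
WHERE status = 'active'
ORDER BY created_at DESC
LIMIT 10;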
You don't have to create a primary key if there is no column with unique data in your table, but it is very likely that you do have such a column (id, time...). In that case you should use the primary key to filter your queries.
Number of columns in the query has close to no impact on SELECT speed.
As long as you run only SELECT queries, the storage engine does not matter much either. MyISAM might be a bit faster, but InnoDB has many features you will need when you decide that your "only select queries" rule must be broken.
I am currently using MySQL.
I have two tables called person and zim_list_id; both tables have over 2 million rows.
I want to update person table using zim_list_id table
the query I am using is
update person p JOIN zim_list_id z on p.person_id = z.person_id
set p.office_name = z.`Office Name`;
I have also created indexes on the zim_list_id and person tables; the statements I executed were
create index idx_person_office_name on person(`Office_name`);
create index idx_zim_list_id_office_name on zim_list_id(`Office name`);
The query execution is taking very long. Is there any way to reduce the execution time?
The indexes on Office Name do nothing at all for this query. All you've done with those indexes is make inserts and updates slower, as now the database has to update the index any time that column changes.
What you really need, if you don't already have them, are indexes on the person_id field in those tables, to make the join more efficient.
You might also consider adding Office Name as a second column on the zim_list_id table's index, as this will allow the database to fulfill that part of the query entirely from the index. But I wouldn't do that until I had checked the results after setting the plain person_id indexes first.
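A minimal sketch of those indexes (the index names are just illustrative; skip the first statement if person_id is already the primary key of person):

CREATE INDEX idx_person_person_id ON person (person_id);
CREATE INDEX idx_zim_person_id_office ON zim_list_id (person_id, `Office Name`);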
Finally, I'm curious how much memory is in that server (especially relative to the total size of the database), how much of it is available through your MySQL innodb_buffer_pool_size setting, and what other work that server might be doing... there could always be an environmental factor as well.
I am wondering what is more efficient and faster in performance:
Having an index on one big table or multiple smaller tables without indexes?
Since this is a pretty abstract problem let me make it more practical:
I have one table with statistics about users (20,000 users and about 30 million rows overall). The table has about 10 columns including the user_id, actions, timestamps, etc.
Most common applications are: Inserting data by user_id and retrieving data by user_id (SELECT statements never include multiple user_id's).
Now so far I have an INDEX on the user_id and the query looks something like this
SELECT * FROM statistics WHERE user_id = 1
Now, with more and more rows the table gets slower and slower. INSERT statements slow down because the INDEX gets bigger and bigger; SELECT statements slow down, well, because there are more rows to search through.
Now I was wondering why not have one statistics table for each user and change the query syntax to something like this instead:
SELECT * FROM statistics_1
where 1 represents the user_id obviously.
This way, no INDEX is needed and there is far less data in each table, so INSERT and SELECT statements should be much faster.
Now my questions again:
Are there any real world disadvantages to handling so many tables (in my case 20,000) instead of using one table with an INDEX?
Would my approach actually speed things up or might the lookup for the table eventually slow down things more than everything?
Creating 20,000 tables is a bad idea. You'll need 40,000 tables before long, and then more.
I called this syndrome Metadata Tribbles in my book SQL Antipatterns Volume 1. You see this happen every time you plan to create a "table per X" or a "column per X".
This does cause real performance problems when you have tens of thousands of tables. Each table requires MySQL to maintain internal data structures, file descriptors, a data dictionary, etc.
There are also practical operational consequences. Do you really want to create a system that requires you to create a new table every time a new user signs up?
Instead, I'd recommend you use MySQL Partitioning.
Here's an example of partitioning the table:
CREATE TABLE statistics (
id INT AUTO_INCREMENT NOT NULL,
user_id INT NOT NULL,
PRIMARY KEY (id, user_id)
) PARTITION BY HASH(user_id) PARTITIONS 101;
This gives you the benefit of defining one logical table, while also dividing the table into many physical tables for faster access when you query for a specific value of the partition key.
For example, when you run a query like your example, MySQL accesses only the partition containing the specific user_id:
mysql> EXPLAIN PARTITIONS SELECT * FROM statistics WHERE user_id = 1\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: statistics
partitions: p1 <--- this shows it touches only one partition
type: index
possible_keys: NULL
key: PRIMARY
key_len: 8
ref: NULL
rows: 2
Extra: Using where; Using index
The HASH method of partitioning means that the rows are placed in a partition by a modulus of the integer partition key. This does mean that many user_id's map to the same partition, but each partition would have only 1/Nth as many rows on average (where N is the number of partitions). And you define the table with a constant number of partitions, so you don't have to expand it every time you get a new user.
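As a quick illustration of the modulus (the user_id value here is made up), with PARTITION BY HASH(user_id) PARTITIONS 101 a row lands in partition MOD(user_id, 101):

SELECT MOD(123457, 101) AS partition_number;  -- returns 35 for this made-up user_id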
You can choose any number of partitions up to 1024 (or 8192 in MySQL 5.6), but some people have reported performance problems when they go that high.
It is recommended to use a prime number of partitions. In case your user_id values follow a pattern (like using only even numbers), using a prime number of partitions helps distribute the data more evenly.
Re your questions in comment:
How could I determine a reasonable number of partitions?
For HASH partitioning, if you use 101 partitions like I show in the example above, then any given partition has about 1% of your rows on average. You said your statistics table has 30 million rows, so if you use this partitioning, you would have only 300k rows per partition. That is much easier for MySQL to read through. You can (and should) use indexes as well -- each partition will have its own index, and it will be only 1% as large as the index on the whole unpartitioned table would be.
So the answer to how can you determine a reasonable number of partitions is: how big is your whole table, and how big do you want the partitions to be on average?
Shouldn't the amount of partitions grow over time? If so: How can I automate that?
The number of partitions doesn't necessarily need to grow if you use HASH partitioning. Eventually you may have 30 billion rows total, but I have found that when your data volume grows by orders of magnitude, that demands a new architecture anyway. If your data grow that large, you probably need sharding over multiple servers as well as partitioning into multiple tables.
That said, you can re-partition a table with ALTER TABLE:
ALTER TABLE statistics PARTITION BY HASH(user_id) PARTITIONS 401;
This has to restructure the table (like most ALTER TABLE changes), so expect it to take a while.
You may want to monitor the size of data and indexes in partitions:
SELECT table_schema, table_name, table_rows, data_length, index_length
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE partition_method IS NOT NULL;
Like with any table, you want the total size of active indexes to fit in your buffer pool, because if MySQL has to swap parts of indexes in and out of the buffer pool during SELECT queries, performance suffers.
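A rough way to check that, assuming your database is named your_db (placeholder):

SELECT SUM(index_length) AS total_index_bytes,
       @@innodb_buffer_pool_size AS buffer_pool_bytes
FROM INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'your_db';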
If you use RANGE or LIST partitioning, then adding, dropping, merging, and splitting partitions is much more common. See http://dev.mysql.com/doc/refman/5.6/en/partitioning-management-range-list.html
I encourage you to read the manual section on partitioning, and also check out this nice presentation: Boost Performance With MySQL 5.1 Partitions.
It probably depends on the type of queries you plan on making often, and the best way to know for sure is to just implement a prototype of both and do some performance tests.
With that said, I would expect that a single (large) table with an index will do better overall, because most DBMS systems are heavily optimized to deal with exactly this situation of finding and inserting data into large tables. If you try to make many little tables in hopes of improving performance, you're kind of fighting the optimizer (which is usually better at this).
Also, keep in mind that one table is probably more practical for the future. What if you want to get some aggregate statistics over all users? Having 20,000 tables would make this very hard and inefficient to execute. It's worth considering the flexibility of these schemas as well. If you partition your tables like that, you might be designing yourself into a corner for the future.
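For instance, with a single table an aggregate over all users is one straightforward query (using the statistics table and user_id column from the question):

SELECT user_id, COUNT(*) AS actions
FROM statistics
GROUP BY user_id;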
Concrete example:
I have one table with statistics about users (20,000 users and about 30 million rows overall). The table has about 10 columns including the user_id, actions, timestamps, etc.
Most common applications are: Inserting data by user_id and retrieving data by user_id (SELECT statements never include multiple user_id's).
Do this:
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
...
PRIMARY KEY(user_id, id),
INDEX(id)
Having user_id at the start of the PK gives you "locality of reference". That is, all the rows for one user are clustered together thereby minimizing I/O.
The id on the end of the PK is because the PK must be unique.
The strange-looking INDEX(id) is to keep AUTO_INCREMENT happy.
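Putting those pieces together, a minimal self-contained sketch (the action and created_at columns are only illustrative placeholders):

CREATE TABLE statistics (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id INT UNSIGNED NOT NULL,
    action VARCHAR(32) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (user_id, id),  -- clusters each user's rows together
    INDEX (id)                  -- keeps AUTO_INCREMENT happy
) ENGINE=InnoDB;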
Abstract question:
Never have multiple identical tables.
Use PARTITIONing only if it meets one of the use-cases listed in http://mysql.rjweb.org/doc.php/partitionmaint
A PARTITIONed table needs a different set of indexes than the non-partitioned equivalent table.
In most cases a single, non-partitioned, table is optimal.
Use the queries to design indexes.
There is little to add to Bill Karwin's answer. But one hint: check whether all of a user's data is really needed in full detail for all time.
If you want to report usage statistics or visit counts and the like, you usually don't need the granularity of individual actions and seconds for, say, the year 2009 from today's point of view. So you could build aggregation tables and an archive table (not the ARCHIVE engine, of course) to keep the recent data at action level and an overview of the older actions.
Old actions don't change, I think.
And you can still drill down from the aggregation into detail, for example via a week_id in the archive table.
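A minimal sketch of that idea, with entirely hypothetical table and column names:

-- weekly roll-up of old, unchanging actions
CREATE TABLE statistics_weekly (
    user_id INT UNSIGNED NOT NULL,
    week_id INT UNSIGNED NOT NULL,        -- e.g. YEARWEEK(created_at)
    action_count INT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, week_id)
);

INSERT INTO statistics_weekly (user_id, week_id, action_count)
SELECT user_id, YEARWEEK(created_at), COUNT(*)
FROM statistics
WHERE created_at < '2020-01-01'           -- arbitrary cut-off for "old" data
GROUP BY user_id, YEARWEEK(created_at);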
Instead of going from one table to one table per user, you can use partitioning to land somewhere in the middle in terms of number of tables and table size.
You can also keep stats on users to try to move 'active' users into 1 table to reduce the number of tables that you have to access over time.
The bottom line is that there is a lot you can do, but largely you have to build prototypes and tests and just evaluate the performance impacts of various changes you are making.
I have a MySQL 5.5 server with a big (~150M records) InnoDB table that I want to partition.
How can I determine the optimal number of partitions for the table?
The table is a "many-to-many" connection table that consists of only two int columns (aId, bId),
aId receives values roughly on the range of 1..1,000,000
bId receives values roughly on the range of 1..10,000,000
Most queries look up aId first.
the table has two indexes:
primary(aId, bId)
index(bId)
And, again, the question is: how can I determine the optimal number of partitions for the table?
Thanks
If you do not want to structure your data logically, then the optimal number of partitions is the same as the number of cores on the server. This will help multiple concurrent queries. Individual queries will take the same time, because they use the primary index.
I would suggest changing your second index to index (bId, aId), so queries that look up aId by bId can get the aId value from the index alone, without going to the table data for it.
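A minimal sketch of that change (the table and index names here are placeholders, since the question doesn't give them):

ALTER TABLE link_table
    DROP INDEX idx_bId,               -- whatever the existing index (bId) is named
    ADD INDEX idx_bId_aId (bId, aId);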