Finding Out Which Tables Are Included in a MySQL Merge Table

I have a couple of MRG_MyISAM tables that merge a bunch of other tables in a MySQL database. I would like to figure out programmatically which tables are included in each merge table.
I know I could run SHOW CREATE TABLE and then parse the UNION=(tbl1, tbl2) part of the statement, but that seems a little hacky. Is there a better way?
In an ideal world, I'm looking for something like this:
SELECT * FROM ?? WHERE merge_table = 'merge_table_1'
That would return rows that each contain the name of a table that's included in "merge_table_1":
+------------+
| table_name |
+------------+
| tbl1       |
| tbl2       |
+------------+

I don't think there is any data in INFORMATION_SCHEMA to list the members of a MERGE table.
If your application has direct access to the data directory on your database server, you can simply read the .MRG file for the merge table. It is a human-readable file that lists the member tables, one per line, along with any other merge table options.
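For illustration, the .MRG file for a merge table over tbl1 and tbl2 created with INSERT_METHOD=LAST would look roughly like this (the exact options line depends on how the table was created):
tbl1
tbl2
#INSERT_METHOD=LAST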
You really shouldn't be using MERGE tables anymore. Use MySQL's built-in table partitioning instead, which is much more flexible. With partitioned tables, you can query the INFORMATION_SCHEMA.PARTITIONS table to find information on each partition.
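A minimal sketch, where mydb and mytable are placeholder names:
SELECT PARTITION_NAME, PARTITION_METHOD, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME = 'mytable';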
In fact, you shouldn't be using MyISAM tables either. InnoDB is more scalable, and MyISAM supports none of the ACID properties.

SHOW CREATE TABLE table_name; -- see if this gives you the information
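If it does, the UNION list appears at the end of the output; a hypothetical merge table over tbl1 and tbl2 would print something like:
CREATE TABLE `merge_table_1` (
  `id` int(11) NOT NULL
) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 INSERT_METHOD=LAST UNION=(`tbl1`,`tbl2`)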

Related

MySQL batch insert of around 11 GB of data from one table to another [duplicate]

Is there a more efficient, less laborious way of copying all records from one table to another than doing this:
INSERT INTO product_backup SELECT * FROM product
Typically, the product table will hold around 50,000 records. Both tables are identical in structure and have 31 columns in them. I'd like to point out this is not my database design; I have inherited a legacy system.
There's just one thing you're missing: especially if you're using InnoDB, you want to explicitly add an ORDER BY clause to your SELECT statement to ensure you're inserting rows in primary key (clustered index) order:
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id
Consider removing secondary indexes on the backup table if they're not needed. This will also save some load on the server.
Finally, if you are using InnoDB, reduce the number of row locks that are required by explicitly locking both tables. Note that both tables must be named in a single LOCK TABLES statement, because a second LOCK TABLES would implicitly release the locks taken by the first:
LOCK TABLES product_backup WRITE, product READ;
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;
UNLOCK TABLES;
The locking probably won't make a huge difference, as row locking is very fast (though not as fast as table locks), but since you asked.
mysqldump -R --add-drop-table db_name table_name > filepath/file_name.sql
This will take a dump of the specified table, with a DROP TABLE statement included so the existing table is deleted when you import it. Then do:
mysql db_name < filepath/file_name.sql
DROP the destination table:
DROP TABLE DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
I don't think this is worthwhile for a 50k-row table, but:
If you have a dump of the database, you can reload the table from it. Since you want to load the table into another one, you can change the table name in the dump with a sed command.
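A hypothetical one-liner, assuming the dump file is named dump.sql and the table names in it are backtick-quoted (adjust both to match your dump):
sed 's/`product`/`product_backup`/g' dump.sql > product_backup.sql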
Here you have some hints:
http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
An alternative (depending on your design) would be to use triggers on the original table inserts so that the duplicated table gets the data as well.
And a better alternative would be to create another MySQL instance and either run it in a master-slave configuration or in a daily dump master/load slave fashion.

MySQL Select tables starting with "x"

I have a bunch of tables in my "stats" database.
tcl20151w1d1
tcl20151w1d2
tcl20151w2d1
tcl20151w2d2
tcl20151w3d1
tcl20151w3d2
tcl20151w4d1
eu20151w1d1
eu20151w1d2
eu20151w2d1
eu20151w2d2
eu20151w3d1
eu20151w3d2
eu20151w4d1
..
How can I select all tables that start with "tcl" in the "stats" database? Is it possible? Do I have to union them manually?
You can query the information_schema.tables table to get a list of tables whose names start with tcl.
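For example, matching the names in the question:
SELECT TABLE_NAME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'stats'
  AND TABLE_NAME LIKE 'tcl%';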
You can use the list to dynamically create a union query in a stored procedure using string concatenation and prepared statements.
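A minimal sketch of that idea, assuming all the tcl tables share the same structure; note that GROUP_CONCAT output is truncated at group_concat_max_len, so raise it if you have many tables:
SET SESSION group_concat_max_len = 1000000;
SET @sql = (
  SELECT CONCAT('SELECT * FROM ',
                GROUP_CONCAT(TABLE_NAME SEPARATOR ' UNION ALL SELECT * FROM '))
  FROM information_schema.tables
  WHERE TABLE_SCHEMA = 'stats'
    AND TABLE_NAME LIKE 'tcl%'
);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;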
If those tables are all MyISAM tables with the same structure, you may consider creating a merge table on them:
The MERGE storage engine, also known as the MRG_MyISAM engine, is a collection of identical MyISAM tables that can be used as one. "Identical" means that all tables have identical column and index information.
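A sketch under the assumption that the member tables share this (hypothetical) column layout; the merge table's definition must match the members exactly:
CREATE TABLE tcl20151 (
  id INT NOT NULL,
  score INT
) ENGINE=MERGE UNION=(tcl20151w1d1, tcl20151w1d2, tcl20151w2d1) INSERT_METHOD=LAST;
SELECT * FROM tcl20151; -- reads across all member tables at once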

How to sync two tables that are not identical?

I have two projects using the same data. However, this data is saved in 2 different databases. Each of these two databases has a table that is almost the same as its counterpart in the other database.
What I am looking for
I am looking for a method to synchronise two tables. Put simply: if database_one.table gets an insert, that same record needs to be inserted into database_two.table.
Database and Table One
Table Products
| product_id | name | description | price | vat | flags |
Database and Table Two
Table Articles
| articleId | name_short | name | price | price_vat | extra_info | flags |
The issue
I have never used and wouldn't know how to use any method of database synchronisation. What also worries me is that the tables are not identical and so I will somehow need to map columns to one another.
For example:
database_one.Products.name -> database_two.articles.name_short
Can someone help me with this?
You can use the MERGE statement:
https://www.mssqltips.com/sqlservertip/1704/using-merge-in-sql-server-to-insert-update-and-delete-at-the-same-time/
Then create a procedure that runs at the desired frequency, or, if it needs to be instant, put the MERGE into a trigger.
One possible method is to use triggers. You need to create triggers for insert, update and delete on database_one.table that perform the corresponding operation on database_two.table. There shouldn't be any problem doing insert/update/delete across the two databases. When using triggers, you can very easily map columns.
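A minimal sketch of the insert trigger, assuming both databases live on the same MySQL server. The mapping of name to name_short follows the question; the price_vat expression is purely a guess and should be replaced with your real semantics:
DELIMITER //
CREATE TRIGGER products_sync_insert
AFTER INSERT ON database_one.Products
FOR EACH ROW
BEGIN
  INSERT INTO database_two.Articles
    (articleId, name_short, name, price, price_vat, flags)
  VALUES
    (NEW.product_id,
     NEW.name,                  -- Products.name -> Articles.name_short, as in the question
     NEW.name,
     NEW.price,
     NEW.price * (1 + NEW.vat), -- hypothetical: assumes vat is a fractional rate
     NEW.flags);
END//
DELIMITER ;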
However, you need to consider the pros and cons of using triggers. From my experience, performance is very important, so if you have a heavily loaded DB it is not a good idea to use triggers for data replication.

Copy column data from one table to another

I have two databases, one old and one new. I need to copy one particular column from the old to the new. Structure-wise they are both totally identical, although the new table is significantly larger than the old. The only way I can connect the two tables by a foreign key is the uni_id column, which is just a normal integer field, but it's unique.
So this is basically the structure of the table:
+----+------+-------------+--------+
| id | name | name_pseudo | uni_id |
+----+------+-------------+--------+
I want to compare each row of new_db.mytable with old_db.mytable by uni_id and insert old_db.mytable.name_pseudo into new_db.mytable.name_pseudo.
Can such an expression be constructed in pure MySQL?
From the MySQL manual on UPDATE
You can also perform UPDATE operations covering multiple tables. However, you cannot use ORDER BY or LIMIT with a multiple-table UPDATE. The table_references clause lists the tables involved in the join. Its syntax is described in Section 13.2.9.2, “JOIN Syntax”. Here is an example:
UPDATE items,month SET items.price=month.price
WHERE items.id=month.id;
Which in your case should read like:
UPDATE new_db.mytable AS new, old_db.mytable AS old
SET new.name_pseudo = old.name_pseudo
WHERE old.uni_id = new.uni_id;

Triggers with complex configurable conditions

Some background
We have a system which optionally integrates with several other systems. Our system shuffles data from a MySQL database to the other systems. Different customers want different data transferred. In order not to trigger unnecessary transfers (when no relevant data has changed) to these external systems, we have an "export" table which contains all the information any customer is interested in, plus a service which runs SQL queries defined in a file to compare the data in the export table against the data in the other tables and update the export table as appropriate. We're not really happy with this solution, for several reasons:
No customer uses more than a fraction of these columns, although each column is used by at least one customer.
As the database grows, the service is causing increasing amounts of strain on the system. Some servers completely freeze while this service compares data, which may take up to 2 minutes (!) even though no customer has particularly large amounts of data (~15000 rows across all relevant tables, max). We fear what might happen if we ever get a customer with very large amounts of data. Performance could be improved by creating some indexes and improving the SQL queries, but we feel like that's attacking the problem from the wrong direction.
It's not very flexible or scalable. Having to add new columns every time a customer is interested in transferring data that no other customer has been interested in before (which happens a lot) just feels... icky. I don't know how much it really matters, but we're up to 37 columns in this table at the moment, and it keeps growing.
What we want to do
Instead, we would like to have a very slimmed down "export" table which only contains the bare minimum information, i.e. the table and primary key of the row that was updated, the system this row should be exported to, and some timestamps. A trigger in every relevant table would then update this export table whenever a column that has been configured to warrant an update is updated. This configuration should be read from another table (which, sometime in the future, could be configured from our web GUI), looking something like this:
+--------+--------+-----------+
| system | table  | column    |
+--------+--------+-----------+
| 'sys1' | 'tbl1' | 'column1' |
| 'sys2' | 'tbl1' | 'column2' |
+--------+--------+-----------+
Now, the trigger in tbl1 will read from this table when a row is updated. The configuration above should mean that if column1 in tbl1 has changed, then an export row for sys1 should be updated; if column2 has changed too, then an export row for sys2 should also be updated, etc.
So far, it all seems doable, although a bit tricky when you're not an SQL genius. However, we would preferably like to be able to define a little bit more complex conditions, at least something like "column3 = 'Apple' OR column3 = 'Banana'", and this is kind of the heart of the question...
So, to sum it up:
What would be the best way to allow for triggers to be configured in this way?
Are we crazy? Are triggers the right way to go here, or should we just stick to our service, smack on some indexes and suck it up? Or is there a third alternative?
How much of a performance increase could we expect to see? (Is this all worth it?)
This turned out to be impossible as stated, because dynamic SQL is not supported inside MySQL triggers (prepared statements cannot be used in a trigger body). Therefore we came up with reading the config table from PHP and generating "static" triggers. We'll try having 2 tables, one for columns and one for conditions, like so:
Columns
+--------+--------+-----------+
| system | table  | column    |
+--------+--------+-----------+
| 'sys1' | 'tbl1' | 'column1' |
| 'sys2' | 'tbl1' | 'column2' |
+--------+--------+-----------+
Conditions
+--------+--------+-------------------------------------------+
| system | table  | condition                                 |
+--------+--------+-------------------------------------------+
| 'sys1' | 'tbl1' | 'column3 = "Apple" OR column3 = "Banana"' |
+--------+--------+-------------------------------------------+
Then just build a statement like this in PHP; for the configuration above, the generated trigger would come out roughly as follows (with updateExportTable assumed to be a stored procedure on our side):
DROP TRIGGER IF EXISTS `tbl1_AUPD`;
DELIMITER //
CREATE TRIGGER `tbl1_AUPD` AFTER UPDATE ON tbl1 FOR EACH ROW
BEGIN
  -- "column changed" tests use the NULL-safe comparison operator <=>
  IF NOT (NEW.column1 <=> OLD.column1)
     AND (NEW.column3 = 'Apple' OR NEW.column3 = 'Banana') THEN
    CALL updateExportTable('sys1', 'tbl1', NEW.primary_key, NEW.timestamp);
  END IF;
  IF NOT (NEW.column2 <=> OLD.column2) THEN
    CALL updateExportTable('sys2', 'tbl1', NEW.primary_key, NEW.timestamp);
  END IF;
END//
DELIMITER ;
This seems to be the best solution for us, maybe even better than what I was asking for, but if anyone has a better suggestion I'm all ears!