duplicating a MariaDB table with a small /tmp filesystem - mysql

I am duplicating a table with this command:
CREATE TABLE new_table LIKE old_table;
INSERT INTO new_table SELECT * FROM old_table;
Unfortunately, on my system /tmp is placed on a separate filesystem with only 1 GB of space.
If a large query is executed, that 1 GB gets filled by MariaDB very quickly, making it impossible to execute large queries. It is a production server, so I'd rather leave the filesystems as they are and instruct MariaDB to make smaller temporary files and delete them on the fly.
How do you instruct MariaDB to split a large query into multiple temporary files so that /tmp doesn't get jammed with temporary files that cause query termination?

You can always split the INSERT into smaller pieces.
In the example below I assume that i is the primary key (and roughly contiguous, since the ranges are derived from the row count):
WITH RECURSIVE cte1 AS (
    SELECT 1 AS s, 9999 AS e
    UNION ALL
    SELECT e+1, e+9999
    FROM cte1
    WHERE e <= (SELECT COUNT(*) FROM old_table)
)
SELECT CONCAT('INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN ', s, ' AND ', e)
FROM cte1
LIMIT 10;
This script produces several INSERT statements that you can run one after the other...
output:
INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN 1 AND 9999
INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN 10000 AND 19998
INSERT INTO new_table SELECT * FROM old_table WHERE i BETWEEN 19999 AND 29997
You are, of course, free to change the 9999 to any other number.

It may be related to the table content.
As explained in the doc,
Some query conditions prevent the use of an in-memory temporary table, in which case the server uses an on-disk table instead
If you have the privileges, you can try to point temporary files at another disk location:
Temporary files are created in the directory defined by the tmpdir variable
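For example, a minimal sketch of checking and changing it (tmpdir is not a dynamic variable, so it has to be set in the server configuration and the server restarted; the directory path below is only an illustration):
SHOW VARIABLES LIKE 'tmpdir'; -- where on-disk temporary tables and files currently go
-- then in my.cnf under [mysqld], pointing at a filesystem with more room, and restart the server:
-- tmpdir = /data/mysql-tmp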
Other than that, the doc is unfortunately pretty clear:
In some cases, the server creates internal temporary tables while processing statements. Users have no direct control over when this occurs.

Related

INSERT INTO ... SELECT in mysql

I have a big table (more than 60k rows), and I am trying to copy unique rows from it to another table. The query is as follows:
INSERT INTO tbl2(field1, field2)
SELECT DISTINCT field1, field2
FROM tbl1;
But it is taking ages to run this query. Can someone suggest a way to speed this up?
Execute a mysqldump of your table to generate a SQL file, then filter the duplicated data with a shell command:
cat dump.sql | uniq > dump_filtered.sql
Check the generated file. Then create your new table and load your dump_filtered.sql file with LOAD DATA INFILE.
Try this:
1. Drop the destination table: DROP TABLE DESTINATION_TABLE;
2. CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
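Applied to the tables in the question, a rough sketch (the DISTINCT comes from the original query; note that CREATE TABLE ... AS does not copy indexes, so any keys on tbl2 have to be added afterwards):
DROP TABLE IF EXISTS tbl2;
CREATE TABLE tbl2 AS
SELECT DISTINCT field1, field2 FROM tbl1;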

mysql insert into table if exists

In my project I have two code paths that interact with MySQL during first-time setup. The first step is the database structure creation; here, a user can pick and choose the features they want, and some tables may not end up being created in the database depending on what the user selects.
In the second part, I need to preload the tables that did get created with some basic data. How could I go about inserting rows into these tables only if the table exists?
I know of IF NOT EXISTS, but as far as I know that only works when creating tables. I am trying to do something like this:
INSERT INTO table_a ( `key`, `value` ) VALUES ( "", "" ) IF EXISTS table_a;
This is loaded through a file that contains a lot of entries, so letting it throw an error when the table does not exist is not an option.
IF (SELECT COUNT(*)
    FROM information_schema.tables
    WHERE table_schema = 'databasename'
      AND table_name = 'tablename') > 0
THEN
    INSERT statement
END IF
Use information_schema to check for the existence of the table.
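Since IF ... THEN is only valid inside stored programs, a minimal sketch wraps the check in a stored procedure (the procedure name seed_table_a and the schema name are just illustrations):
DELIMITER //
CREATE PROCEDURE seed_table_a()
BEGIN
    IF (SELECT COUNT(*)
        FROM information_schema.tables
        WHERE table_schema = 'databasename'
          AND table_name = 'table_a') > 0 THEN
        INSERT INTO table_a (`key`, `value`) VALUES ('', '');
    END IF;
END //
DELIMITER ;
CALL seed_table_a();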
If you know that a particular table exists with at least one record (or you can create a dummy table with just a single record), then you can do a conditional insert this way without a stored procedure:
INSERT INTO table_a (`key`, `value`)
SELECT "", "" FROM known_table
WHERE EXISTS (SELECT *
FROM information_schema.TABLES
WHERE (TABLE_SCHEMA = 'your_db_name') AND (TABLE_NAME = 'table_a')) LIMIT 1;
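If no table is guaranteed to exist with at least one row, the dummy table mentioned above can be as small as this sketch (dummy_one_row is only an illustrative name):
CREATE TABLE IF NOT EXISTS dummy_one_row (x INT);
INSERT INTO dummy_one_row VALUES (1);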

Updating auto_increment value in an InnoDB table

Short version: Can I programmatically update the auto_increment value on a table? I'm trying to do this via the MySQL init_file so it happens on startup, but I don't see it working.
USE theDb;
SELECT max(maxid) FROM (SELECT max(RegistrationId)+1 maxid FROM Registration
UNION
SELECT max(RegistrationId)+1 maxid FROM RegistrationArchive) t INTO @maxId;
ALTER TABLE Registration AUTO_INCREMENT=@maxId;
Longer version:
I have a MySQL database with InnoDB tables. One table (holding registration info) has an auto-increment column, and when a row is processed it is copied to a second archive table and deleted from the first. The archive table does not have an auto-increment column. (BTW, not my design...)
The problem is that when the database is restarted for some reason, which is infrequent, the first table recalculates the next increment value (a feature of InnoDB). The table will often be empty or very small, and the calculated next increment will correspond to an id that has already been used and now lives in the archive table. The data gets moved to the archive OK, but subsequent processes don't work right after that.
I know this is late, but after asking this related question, I found out that certain parts of an SQL statement need to be literals and can't be replaced by user-defined variables. If you need to change such parts of an SQL statement, you need to use prepared statements.
So you need to do something like this:
SET @`stmt_alter` := CONCAT('ALTER TABLE `Registration` AUTO_INCREMENT = ', @`maxId`);
PREPARE `stmt` FROM @`stmt_alter`;
EXECUTE `stmt`;
DEALLOCATE PREPARE `stmt`;
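Putting the two pieces together for the init_file, a rough sketch (the IFNULL guard is an addition for the case where both tables are empty, which the question says can happen):
USE theDb;
SELECT IFNULL(MAX(maxid), 1) INTO @maxId FROM (
    SELECT MAX(RegistrationId) + 1 AS maxid FROM Registration
    UNION
    SELECT MAX(RegistrationId) + 1 AS maxid FROM RegistrationArchive
) t;
SET @stmt_alter := CONCAT('ALTER TABLE `Registration` AUTO_INCREMENT = ', @maxId);
PREPARE stmt FROM @stmt_alter;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;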

Can't UNION ALL on a temporary table?

I'm trying to run the following simple test: creating a temp table, and then UNIONing two different selections:
CREATE TEMPORARY TABLE tmp
SELECT * FROM people;
SELECT * FROM tmp
UNION ALL
SELECT * FROM tmp;
But I get a #1137 - Can't reopen table: 'tmp' error.
I thought temp tables were supposed to last the session. What's the problem here?
This error comes from the way MySQL manages temporary tables, which affects joins, unions and subqueries alike: a TEMPORARY table cannot be reopened within the same statement. To work around the "can't reopen table" error, try the following:
mysql> CREATE TEMPORARY TABLE tmp_journals_2 LIKE tmp_journals;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO tmp_journals_2 SELECT * FROM tmp_journals;
After this you can perform the union operation.
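Applied to the tmp table from the question, a minimal sketch (tmp2 is just an illustrative name):
CREATE TEMPORARY TABLE tmp2 LIKE tmp;
INSERT INTO tmp2 SELECT * FROM tmp;
SELECT * FROM tmp
UNION ALL
SELECT * FROM tmp2; -- the second branch now reads a different temporary table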
Useful reading
http://dev.mysql.com/doc/refman/5.0/en/temporary-table-problems.html
http://www.mysqlrepair.org/mysqlrepair/cant-reopen-table.php
Figured it out thanks to sshekar's answer; the solution in this case would be:
1. Create an empty temp table
2. Insert the results we want to UNION into the table separately
3. Query the temp table
SQL:
CREATE TEMPORARY TABLE tmp LIKE people;
INSERT INTO tmp SELECT * FROM people; /* First half of UNION */
INSERT INTO tmp SELECT * FROM people; /* Second half of UNION */
SELECT * FROM tmp;
(See Using MySQL Temporary Tables to save your brain)
As documented under TEMPORARY Table Problems:
You cannot refer to a TEMPORARY table more than once in the same query. For example, the following does not work:
mysql> SELECT * FROM temp_table, temp_table AS t2;
ERROR 1137: Can't reopen table: 'temp_table'
This error also occurs if you refer to a temporary table multiple times in a stored function under different aliases, even if the references occur in different statements within the function.
As others may wander past this question/solution thread... if they have an older Ubuntu 16.04 LTS machine or similar:
The limitation exists in Ubuntu 16.04 with MySQL 5.7, as documented here, like eggyal reported above. The bug/feature was logged here and ignored for more than a decade. Similarly, it was also logged against MariaDB and was resolved in version 10.2.1. Since Ubuntu 16.04 LTS ships MariaDB 10.0, the feature is out of easy reach without upgrading to 18.04 etc.; you have to download from an external repo and install directly.

multi-row multi-value update in a MySQL transaction

I am using InnoDB tables in MySQL. I want to update several rows in a table, with each row getting a different value, e.g.:
UPDATE tbl_1 SET
col1=3 WHERE id=25,
col1=5 WHERE id=26
In Postgres I believe this is possible:
UPDATE tbl_1 SET col1 = t.col1 FROM (VALUES
    (25, 3),
    (26, 5)
) AS t(id, col1)
WHERE tbl_1.id = t.id;
How do you do this efficiently and effectively in a transaction?
Issues I have hit so far:
using an intermediate temporary MEMORY table turns out not to be transaction-safe
using a TEMPORARY table (presumably of the MEMORY type again) is virtually undocumented, and I can find no real explanation of how it works or how well it would work in my case, for example any discussion of whether the table is truncated after each transaction on the session
using an InnoDB table as a temporary table, filling it, joining to it for the update, and then truncating it within the transaction seems a very expensive thing to do; I've been fighting MySQL's poor throughput enough as it is
You can update with a CASE, setting the value for col1 depending on the id:
UPDATE tbl_1 SET col1=CASE id WHEN 25 THEN 3 WHEN 26 THEN 5 END WHERE id IN (25,26)
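Since the question asks how to do this in a transaction, a minimal sketch with InnoDB (explicit START TRANSACTION / COMMIT, assuming autocommit is otherwise on):
START TRANSACTION;
UPDATE tbl_1
SET col1 = CASE id WHEN 25 THEN 3 WHEN 26 THEN 5 END
WHERE id IN (25, 26);
COMMIT;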