I have a table with about 50K rows. I need to multiply this data roughly 100-fold to have at least 5M rows for performance testing. It took me several long minutes to import 50K rows from a CSV file, so I don't want to create a 5M-record file and then import it into SQL.
Is there a way to duplicate the existing rows over and over again to create 5M records? I don't mind if the rows are identical; they just need a different id, which is the primary key (auto-increment) column.
I'm currently doing this on XAMPP with phpMyAdmin.
INSERT INTO my_table (y, z) SELECT y, z FROM my_table;
where x is your auto-incrementing id, left out of the column list so every copy gets a fresh one.
REPEAT a (remarkably small) number of times: every run re-selects the whole table, so the row count doubles each time.
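As a rough sketch of why so few runs are needed (my_table with auto-increment x and data columns y, z is assumed from the answer above):
INSERT INTO my_table (y, z) SELECT y, z FROM my_table; -- 50K -> 100K
INSERT INTO my_table (y, z) SELECT y, z FROM my_table; -- 100K -> 200K
-- ...five more identical runs: 400K, 800K, 1.6M, 3.2M, 6.4M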
Option 1: use UNION ALL
insert into your_table (col1,col2)
select col1,col2 from your_table
union all
select col1,col2 from your_table
union all
select col1,col2 from your_table
union all
select col1,col2 from your_table
continued... (each extra UNION ALL SELECT appends one more copy per run; with the four shown, every run multiplies the row count by 5)
Option 2: use a dummy table with 10 records and do a cross join
Create a dummy table with 10 rows (a sketch follows the query below):
insert into your_table (col1,col2)
select col1,col2 from your_table, dummy_table
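A minimal sketch of the dummy table; the name dummy_table and its column n are illustrative:
CREATE TABLE dummy_table (n INT);
INSERT INTO dummy_table (n) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
Each pass of the cross join appends ten copies of every existing row, multiplying the table by 11, so two passes take ~50K rows past 6M.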
If you have ~50K rows, you need roughly 100 times that to reach ~5M.
To do so, you can create a procedure with a loop. Note that each INSERT ... SELECT below re-reads the whole table, so every iteration doubles the row count and only 7 iterations are needed (50K × 2^7 ≈ 6.4M).
DELIMITER $$
CREATE PROCEDURE populate()
BEGIN
DECLARE counter INT DEFAULT 1;
-- each pass doubles the table: 7 passes turn ~50K rows into ~6.4M
WHILE counter <= 7 DO
INSERT INTO mytable (colA, colB) SELECT colA, colB FROM mytable;
SET counter = counter + 1;
END WHILE;
END $$
DELIMITER ;
Then you can call the procedure using
call populate();
I have this table seller whose columns are
id  mobile1
1   787811
I have another table, say "copy", with the same columns. I just want to update the mobile1 field in seller with the values from copy.
I have written this query
UPDATE seller
SET mobile1 = (
SELECT SUBSTRING_INDEX(mobile1, '.', 1)
FROM copy)
WHERE 1;
I am getting this obvious error when I run it:
Subquery returns more than 1 row
Is there any way to do this?
You need a condition that selects only one row, or you should use LIMIT:
UPDATE seller
SET mobile1 = (
SELECT SUBSTRING_INDEX(mobile1, '.', 1)
FROM copy
LIMIT 1)
WHERE id = 1;
You can constrain the number of rows returned to just one using MySQL's LIMIT.
UPDATE seller SET mobile1=(SELECT SUBSTRING_INDEX(mobile1,'.',1)
FROM copy LIMIT 1)
WHERE id=1;
If anyone is looking for a possible answer, here is what I did: I created a procedure with a WHILE loop.
DELIMITER $$
CREATE PROCEDURE update_mobile(IN counting BIGINT)
BEGIN
DECLARE x INT DEFAULT 1;
-- walk the ids 1..counting and copy the cleaned value row by row
WHILE x <= counting DO
UPDATE copy SET mobile1 = (SELECT SUBSTRING_INDEX(mobile1, '.', 1) FROM seller WHERE id = x LIMIT 1) WHERE id = x;
SET x = x + 1;
END WHILE;
END$$
DELIMITER ;
Finally, I counted the rows with COUNT(id) and passed that number to the procedure:
SELECT COUNT(id) INTO @var FROM copy;
CALL update_mobile(@var);
And it worked like a charm...
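For completeness, a set-based alternative to the row-by-row loop is MySQL's multi-table UPDATE; this sketch assumes, like the procedure above, that the two tables match rows on id:
UPDATE copy
JOIN seller ON seller.id = copy.id
SET copy.mobile1 = SUBSTRING_INDEX(seller.mobile1, '.', 1);
One statement replaces the whole loop and avoids running a subquery per row.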
If you want to copy all data, you can do this:
INSERT INTO `seller` (`mobile1`) SELECT SUBSTRING_INDEX(mobile1,'.',1) FROM copy
I am using this code to insert a default row if the table is definitely empty. I am trying to extend this to insert multiple rows but cannot figure out the syntax:
INSERT INTO myTable(`myCol`)
SELECT 'myVal'
FROM DUAL
WHERE NOT EXISTS (SELECT * FROM myTable);
What I am getting (@Uueerdo):
CREATE TEMPORARY TABLE `myDefaults` ( name VARCHAR(100) NULL DEFAULT NULL);# MySQL returned an empty result set (i.e. zero rows).
INSERT INTO myDefaults (name) VALUES ('a'), ('b');# 2 rows affected.
SET @valCount := 0;# MySQL returned an empty result set (i.e. zero rows).
SELECT COUNT(1) INTO @valCount FROM blsf;# 1 row affected.
INSERT INTO blsf(name)
SELECT name
FROM myDefaults
WHERE @valCount > 0;# MySQL returned an empty result set (i.e. zero rows).
DROP TEMPORARY TABLE `myDefaults`;# MySQL returned an empty result set (i.e. zero rows).
Something like this should work:
CREATE TEMPORARY TABLE `myDefaults` ( the_value INT|VARCHAR|whatever... )
;
INSERT INTO myDefaults (the_value) VALUES (myVal1), (myVal2), ....
;
SET @valCount := 0; -- Because I am paranoid ;)
SELECT COUNT(1) INTO @valCount FROM myTable;
INSERT INTO myTable(myCol)
SELECT the_value
FROM myDefaults
WHERE @valCount = 0
;
DROP TEMPORARY TABLE `myDefaults`;
http://sqlfiddle.com/#!2/ba9ed/1
INSERT INTO table1 (myColumn)
SELECT
'myValue'
FROM (
SELECT COUNT(*) c
FROM table1 t
HAVING c=0) t2;
The derived table yields exactly one row when table1 is empty and zero rows otherwise, so the INSERT adds 'myValue' only to an empty table.
In a previous question I asked how I could sum up a total based on some conditions: Count total on couple of conditions
Suppose I have a table like this:
id col1 col2 col3
1 a 1 k1
2 a 2 k2
3 a -3 k3
4 b 3 k4
Now, when I get id=1, I want to delete all the rows where col1=a.
When I get id=4, I want to delete all the rows where col1=b.
How would I do this in SQL?
I tried this, based on the previous answer:
DELETE FROM table WHERE (col1) IN (SELECT col1 FROM table WHERE id = '1')
But that gave me an error: #1093 - You can't specify target table 'table' for update in FROM clause
This has come up many times on Stack Overflow: you cannot UPDATE/DELETE a table using a nested SELECT on that same table. There are two ways to do this:
Load all the data beforehand (for example via PHP or an SQL procedure)
Create a temporary table like the one you're using, clone the data into it, and select items from the temporary table instead (see the sketch below)
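Wrapping the nested SELECT in a derived table achieves the second option in a single statement, because MySQL materializes the inner result before the DELETE runs; a sketch against the question's table (renamed t here, since table is a reserved word):
DELETE FROM t
WHERE col1 IN (
    SELECT col1 FROM (
        SELECT col1 FROM t WHERE id = 1 -- materialized first, so error #1093 goes away
    ) AS tmp
);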
I have another suggested solution for this: what if you create a STORED PROCEDURE for this problem?
Like this:
DELIMITER $$
CREATE PROCEDURE `DeleteRec`(IN xxx varchar(5))
BEGIN
DECLARE oID varchar(5);
-- look up col1 for the given id, then delete every row sharing that value
SET oID := (SELECT col1 FROM `table` WHERE id = xxx);
DELETE FROM `table` WHERE col1 = oID;
END$$
DELIMITER ;
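Then call it with the id you received:
CALL DeleteRec('1');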
Does this help you?
In mysql I can query select * ... LIMIT 10, 30 where 10 represents the number of records to skip.
Does anyone know how I can do the same thing in a DELETE statement, so that every record after the first 10 gets deleted?
Considering there is no rowid in MySQL (unlike Oracle), I would suggest the following:
alter table mytable add id int unique auto_increment not null;
This will automatically number your rows in the order of a select statement without conditions or order-by.
select * from mytable;
Then, after checking the order is consistent with your needs (and maybe a dump of the table)
delete from mytable where id > 10;
Finally, you may want to remove that field
alter table mytable drop id;
The following will NOT work:
DELETE
FROM table_name
WHERE id IN
( SELECT id
FROM table_name
ORDER BY --- whatever
LIMIT 10, 30
)
But this will (the extra derived table forces MySQL to materialize the id list before the DELETE touches the table):
DELETE
FROM table_name
WHERE id IN
( SELECT id
FROM
( SELECT id
FROM table_name
ORDER BY --- whatever
LIMIT 10, 30
) AS tmp
)
And this too:
DELETE table_name
FROM table_name
JOIN
( SELECT id
FROM table_name
ORDER BY --- whatever
LIMIT 10, 30
) AS tmp
ON tmp.id = table_name.id
When deleting a lot of rows, this is an efficient trick:
CREATE TABLE new LIKE real; -- empty table with same schema
INSERT INTO new SELECT * FROM real ... LIMIT 10; -- copy the rows to _keep_
RENAME TABLE real TO old, new TO real; -- rearrange
DROP TABLE old; -- clean up.
Is there a way to do an insert under a count condition, something like:
INSERT INTO my_table (colname) VALUES('foo') IF COUNT(my_table) < 1
Basically I want to insert a single default record if the table is currently empty. I'm using mysql.
Use SELECT instead of VALUES to be able to expand the query with a WHERE clause.
EXISTS is a better & faster test than COUNT
INSERT INTO my_table (colname)
SELECT 'foo'
WHERE NOT EXISTS (SELECT * FROM my_table)
One way would be to place a unique key on a column. Then execute a REPLACE:
REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name [(col_name,...)]
{VALUES | VALUE} ({expr | DEFAULT},...),(...),...
REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted.
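A short sketch of that approach; the UNIQUE key on colname is an assumption, since REPLACE needs one to detect the duplicate:
ALTER TABLE my_table ADD UNIQUE KEY (colname); -- assumed: colname must be unique for this to work
REPLACE INTO my_table (colname) VALUES ('foo'); -- first run inserts; later runs replace the 'foo' row instead of duplicating it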
This is easier to read:
INSERT INTO my_table (colname)
SELECT 'foo' FROM DUAL
WHERE NOT EXISTS (SELECT * FROM my_table);
The lack of a VALUES clause is mitigated by the SELECT FROM DUAL, which provides the values. The FROM DUAL is not always required, but it doesn't hurt to include it for those weird configurations where it is required (like the installation of Percona I am using).
The NOT EXISTS is faster than doing a COUNT, which can be slow on a table with a large number of rows.