Is it possible to insert a series of numbered rows all at once instead of inserting each one individually?
For example, if I had a table with columns A and B and I wanted 50 rows with column A filled in from 1 to 50, could I do that in a single command without writing each number out individually?
As you tagged this with Postgres:
insert into some_table (col_a, col_b)
select i, null
from generate_series(1,50) i;
More details about generate_series() in the manual:
http://www.postgresql.org/docs/current/static/functions-srf.html
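If col_b should be filled in at the same time, the same pattern works; a minimal sketch (assuming both columns are integers, and picking twice the row number purely for illustration):
insert into some_table (col_a, col_b)
select i, i * 2
from generate_series(1, 50) i;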
You also tagged this with mysql.
If you have a utility table of integers (holding 0-9, which I think is simpler than a series of UNIONs), then you can emulate Postgres's clever behaviour as follows:
DROP TABLE IF EXISTS ints;
CREATE TABLE ints(i INT NOT NULL PRIMARY KEY);
INSERT INTO ints VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table(a INT NOT NULL PRIMARY KEY,b CHAR(1) NOT NULL);
INSERT INTO my_table (a,b) SELECT i2.i*10+i1.i+1 n,'x' FROM ints i1 JOIN ints i2 HAVING n <= 50;
SELECT * FROM my_table;
+----+---+
| a | b |
+----+---+
| 1 | x |
| 2 | x |
| 3 | x |
| 4 | x |
| 5 | x |
| 6 | x |
| 7 | x |
| 8 | x |
| .. |.. |
| .. |.. |
| .. |.. |
| .. |.. |
| 46 | x |
| 47 | x |
| 48 | x |
| 49 | x |
| 50 | x |
+----+---+
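If you are on MySQL 8.0 or later, a recursive CTE can generate the series without a helper table. A sketch, assuming MySQL 8.0+ and the same my_table as above:
-- assumes MySQL 8.0+, where recursive CTEs are available
INSERT INTO my_table (a, b)
WITH RECURSIVE seq (n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 50
)
SELECT n, 'x' FROM seq;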
I am using MySQL on Windows 7. I have a column which has a "-" (minus) in its name. Somehow I cannot run the following command:
INSERT INTO table (..., var-name, ...) VALUES(..., value, ...);
Can somebody please tell me how I can execute this command?
Using
INSERT INTO table (..., [var-name], ...) VALUES(..., value, ...);
did not work
You have to wrap the name in backticks (`) like this:
INSERT INTO table (..., `var-name`, ...) VALUES(..., value, ...);
The backticks escape the dash character.
The MySQL quote character for column names is not [, it's the backtick (`).
So you need to use:
INSERT INTO table (..., `var-name`, ...) VALUES(..., value, ...);
An accident waiting to happen...
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table (plusses INT NOT NULL, minuses INT NOT NULL, `plusses-minuses` INT NOT NULL);
INSERT INTO my_table VALUES
(10,2,6),
(12,6,6),
(13,9,6),
(14,12,6),
(15,2,6);
SELECT * FROM my_table;
+---------+---------+-----------------+
| plusses | minuses | plusses-minuses |
+---------+---------+-----------------+
| 10 | 2 | 6 |
| 12 | 6 | 6 |
| 13 | 9 | 6 |
| 14 | 12 | 6 |
| 15 | 2 | 6 |
+---------+---------+-----------------+
SELECT plusses, minuses, plusses-minuses FROM my_table;
+---------+---------+-----------------+
| plusses | minuses | plusses-minuses |
+---------+---------+-----------------+
| 10 | 2 | 8 |
| 12 | 6 | 6 |
| 13 | 9 | 4 |
| 14 | 12 | 2 |
| 15 | 2 | 13 |
+---------+---------+-----------------+
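As described above, quoting the column with backticks makes MySQL treat it as an identifier rather than a subtraction, so this returns the stored values (all 6) instead of the computed differences:
SELECT plusses, minuses, `plusses-minuses` FROM my_table;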
I have a database table with a VARCHAR-based CSV field called sizes:
id | sizes
----------
1 | '1,2,4,5'
2 | '3,4,5,6,8'
3 | '3,5,6,1'
I'd like to select the set of sizes referenced by any row in the table:
sizes
-----
1
2
3
4
5
6
8
Is this possible?
N.B. I'm aware of the potential problems with CSV fields; I'm looking at one now. I just want to know whether this can be done. I'm also aware of how to normalise this data.
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id INT NOT NULL PRIMARY KEY
,sizes VARCHAR(30) NOT NULL
);
INSERT INTO my_table VALUES
(1 ,'1,2,4,5'),
(2 ,'3,4,5,6,8'),
(3 ,'3,5,6,1');
SELECT * FROM my_table;
+----+-----------+
| id | sizes |
+----+-----------+
| 1 | 1,2,4,5 |
| 2 | 3,4,5,6,8 |
| 3 | 3,5,6,1 |
+----+-----------+
SELECT * FROM ints;
+---+
| i |
+---+
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+---+
SELECT DISTINCT SUBSTRING_INDEX(SUBSTRING_INDEX(sizes,',',i+1),',',-1) n
FROM my_table, ints i
ORDER BY n;
+---+
| n |
+---+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 8 |
+---+
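A possible refinement (a sketch, not part of the original answer): positions past the end of a list just repeat its last element, and DISTINCT then throws those duplicates away. Limiting i to the number of elements (commas + 1) avoids generating them in the first place:
SELECT DISTINCT SUBSTRING_INDEX(SUBSTRING_INDEX(sizes,',',i+1),',',-1) n
FROM my_table
JOIN ints i ON i.i <= LENGTH(sizes) - LENGTH(REPLACE(sizes,',',''))
ORDER BY n;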
MySQL does not have a split function.
Reference:
https://dev.mysql.com/doc/refman/5.6/en/string-functions.html
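The closest built-in helper for CSV values is FIND_IN_SET(), which only tests membership rather than splitting. For example (a sketch against the table above), this returns the rows whose sizes list contains 8:
SELECT * FROM my_table WHERE FIND_IN_SET('8', sizes);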
I'd like to understand a side-effect of something I was working on.
I wanted to create a large (2+ million) test table of random integers, so I ran the following:
CREATE TABLE `block_tests` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `num` int(11)) ENGINE=InnoDB;
INSERT INTO `block_tests` (`num`) VALUES(ROUND(RAND() * 1E6));
-- every repeat of this line doubles the number of rows
INSERT INTO block_tests (num) SELECT ROUND(RAND() * 1E6) FROM block_tests;
INSERT INTO block_tests (num) SELECT ROUND(RAND() * 1E6) FROM block_tests;
INSERT INTO block_tests (num) SELECT ROUND(RAND() * 1E6) FROM block_tests;
-- etc
The table size correctly doubles every iteration. What's strange are the ids of the rows that have been added:
mysql> select * from block_tests limit 17;
+----+--------+
| id | num |
+----+--------+
| 1 | 814789 |
| 2 | 84489 |
| 3 | 978078 |
| 4 | 636924 |
| 6 | 250384 |
| 7 | 341151 |
| 8 | 954604 |
| 9 | 749565 |
| 13 | 884014 |
| 14 | 171375 |
| 15 | 204833 |
| 16 | 510040 |
| 17 | 935701 |
| 18 | 148383 |
| 19 | 934814 |
| 20 | 228923 |
| 28 | 340170 |
+----+--------+
17 rows in set (0.00 sec)
For some reason, there are skips in the ids. There's a pattern with the skips:
4 skip to 6 - skip 1
9 skip to 13 - skip 4
20 skip to 28 - skip 8
43 skip to 59 - skip 16
What's going on?
This may be a side effect of the "consecutive" algorithm (innodb_autoinc_lock_mode = 1): for bulk inserts such as INSERT ... SELECT, where InnoDB cannot know the number of rows in advance, it reserves auto-increment values in successively doubling chunks, and any reserved values it does not use are discarded, which produces the doubling gaps above (Source).
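You can check which mode your server is running with; note the variable is not dynamic, so changing it means editing the server configuration and restarting (a sketch):
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';
-- 0 = traditional, 1 = consecutive, 2 = interleaved (the default differs between MySQL versions)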
I have a table that has approximately 4 million records. I would like to make it have 240 million like so:
Add an additional column of type BIGINT,
import the data I already have 59 more times,
and for each group of 4 million records, have the additional column hold a different value.
The value of the additional column would come from another table.
So I have these records (except that I have 4 million of them and not just 3):
| id | value |
+----+-------+
| 1 | 123 |
| 2 | 456 |
| 3 | 789 |
And I want to achieve this (except that I want 60 copies and not just 3):
| id | value | data |
+----+-------+------+
| 1 | 123 | 1 |
| 2 | 456 | 1 |
| 3 | 789 | 1 |
| 4 | 123 | 2 |
| 5 | 456 | 2 |
| 6 | 789 | 2 |
| 7 | 123 | 3 |
| 8 | 456 | 3 |
| 9 | 789 | 3 |
I tried to export my data (using SELECT .. INTO OUTFILE ...) and then re-import it (using LOAD DATA INFILE ...), but it is painfully slow.
Is there a fast way to do this?
Thank you!
It sounds like you'd like to take the cartesian product of two tables and create a new table, since you say "The value of the additional column would come from another table". If so, something like this should work:
create table yourtable (id int, value int);
create table yournewtable (id int, value int, data int);
create table anothertable (data int);
insert into yourtable values (1, 123), (2, 456), (3, 789);
insert into anothertable values (1), (2), (3);
insert into yournewtable
select t.id, t.value, a.data
from yourtable t, anothertable a;
SQL Fiddle Demo
Results:
ID VALUE DATA
1 123 1
2 456 1
3 789 1
1 123 2
2 456 2
3 789 2
1 123 3
2 456 3
3 789 3
Edit, side note: it looks like the ID field in your new table is not supposed to keep repeating the same ids? If so, you can use an AUTO_INCREMENT field instead (see the sketch below). However, this could mess up the original rows if they aren't sequential.
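A sketch of that variant, reusing the table names from the example above but letting MySQL assign fresh ids:
create table yournewtable (id int not null auto_increment primary key, value int, data int);
insert into yournewtable (value, data)
select t.value, a.data
from yourtable t, anothertable a;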
First, I would recommend that you create a new table. You can do this using a cross join:
create table WayBigTable as
select t.*, n
from `table` t cross join
(select 1 as n union all select 2 union all select 3 union all select 4 union all select 5 union all
. . .
select 60
) n;
I'm not sure why you would want a bigint for this column. If you really need that, you can cast to unsigned.
Hmm. You need a cross join of your table with a range. Something along these lines:
INSERT INTO `table` (value, data) SELECT value, n FROM `table`
CROSS JOIN (SELECT 2 AS n UNION SELECT 3 UNION ... SELECT 60) AS nums;
Use this answer, Generating a range of numbers in MySQL, as a reference for generating the number range.
Here's one idea...
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
,value INT NOT NULL
);
INSERT INTO my_table VALUES
(1 ,123),
(2 ,456),
(3 ,789);
ALTER TABLE my_table ADD COLUMN data INT NOT NULL DEFAULT 1;
SELECT * FROM my_table;
+----+-------+------+
| id | value | data |
+----+-------+------+
| 1 | 123 | 1 |
| 2 | 456 | 1 |
| 3 | 789 | 1 |
+----+-------+------+
SELECT * FROM ints;
+---+
| i |
+---+
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+---+
INSERT INTO my_table SELECT NULL,value,data+i2.i*10+i1.i+1 FROM my_table,ints i1,ints i2;
SELECT * FROM my_table;
+-----+-------+------+
| id | value | data |
+-----+-------+------+
| 1 | 123 | 1 |
| 2 | 456 | 1 |
| 3 | 789 | 1 |
| 4 | 123 | 2 |
| 5 | 456 | 2 |
| 6 | 789 | 2 |
| 7 | 123 | 3 |
| 8 | 456 | 3 |
...
...
| 296 | 456 | 97 |
| 297 | 789 | 97 |
| 298 | 123 | 98 |
| 299 | 456 | 98 |
| 300 | 789 | 98 |
| 301 | 123 | 99 |
| 302 | 456 | 99 |
| 303 | 789 | 99 |
+-----+-------+------+
303 rows in set (0.00 sec)
Note, for 240 million rows, this is still going to be a bit slow :-(
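One way to keep the statements more manageable (a sketch, not part of the answer above) is to add the copies in batches, always selecting only the original rows (data = 1) so every batch reads the same 4 million source rows:
INSERT INTO my_table SELECT NULL,value,1+i.i+1 FROM my_table,ints i WHERE data = 1; -- adds data 2..11
INSERT INTO my_table SELECT NULL,value,11+i.i+1 FROM my_table,ints i WHERE data = 1; -- adds data 12..21
-- ...and so on, one statement per additional ten copies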
I have a MySQL table with rows containing duplicate values in the text column ('a' and 'c'):
+------+-----+
| text | num |
+------+-----+
| a | 10 |
| b | 10 |
| c | 10 |
| d | 10 |
| c | 5 |
| z | 10 |
| a | 6 |
+------+-----+
So I want to update these rows, summing the values of num. After that the table should look like this:
+------+-----+
| text | num |
+------+-----+
| a | 16 |
| b | 10 |
| c | 15 |
| d | 10 |
| z | 10 |
+------+-----+
Use the aggregate function SUM with a GROUP BY clause. Something like this:
SELECT `text`, SUM(num) AS num
FROM YourTableName
GROUP BY `text`;
SQL fiddle Demo
This will give you:
| TEXT | NUM |
--------------
| a | 16 |
| b | 10 |
| c | 15 |
| d | 10 |
| z | 10 |
You can create a temporary table to hold the aggregated data and then rebuild the original table from it:
Create a temporary table,
select the aggregated data from the original table into it,
then delete all data in the original table,
and finally insert the aggregated data from the temporary table back into the original table.
Example SQL:
BEGIN;
CREATE TEMPORARY TABLE `table_name_tmp` LIKE `table_name`;
INSERT INTO `table_name_tmp` SELECT `text`, SUM(num) AS num FROM `table_name` GROUP BY 1;
DELETE FROM `table_name`;
INSERT INTO `table_name` SELECT * FROM `table_name_tmp`;
-- COMMIT;
I commented out the COMMIT command so that nothing is made permanent by accident; please check the results before committing.
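For example, assuming an InnoDB table (so the DELETE and INSERT above can still be rolled back), you can inspect the result and then finish the transaction by hand:
SELECT * FROM `table_name`;   -- check the merged rows
COMMIT;                       -- or ROLLBACK; if something looks wrong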