I am using MySQL on Windows 7. I have a column which has a "-" (minus) in its name. Somehow I cannot run the following command:
INSERT INTO table (..., var-name, ...) VALUES(..., value, ...);
Can somebody please tell me how I can execute this command?
Using
INSERT INTO table (..., [var-name], ...) VALUES(..., value, ...);
did not work either.
You have to wrap the name in backticks (`), like this:
INSERT INTO table (..., `var-name`, ...) VALUES(..., value, ...);
The backticks escape the dash character.
The MySQL escape character for column names is not [ (that is SQL Server syntax); it's the backtick `
So you need to use:
INSERT INTO table (..., `var-name`, ...) VALUES(..., value, ...);
An accident waiting to happen...
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table (plusses INT NOT NULL, minuses INT NOT NULL, `plusses-minuses` INT NOT NULL);
INSERT INTO my_table VALUES
(10,2,6),
(12,6,6),
(13,9,6),
(14,12,6),
(15,2,6);
SELECT * FROM my_table;
+---------+---------+-----------------+
| plusses | minuses | plusses-minuses |
+---------+---------+-----------------+
|      10 |       2 |               6 |
|      12 |       6 |               6 |
|      13 |       9 |               6 |
|      14 |      12 |               6 |
|      15 |       2 |               6 |
+---------+---------+-----------------+
SELECT plusses, minuses, plusses-minuses FROM my_table;
+---------+---------+-----------------+
| plusses | minuses | plusses-minuses |
+---------+---------+-----------------+
|      10 |       2 |               8 |
|      12 |       6 |               6 |
|      13 |       9 |               4 |
|      14 |      12 |               2 |
|      15 |       2 |              13 |
+---------+---------+-----------------+
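Quoting the identifier removes the ambiguity: with backticks, MySQL reads plusses-minuses as the stored column rather than as a subtraction expression:

```sql
SELECT plusses, minuses, `plusses-minuses` FROM my_table;
-- now the third column returns the stored value (6 for every row),
-- not the result of plusses - minuses
```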
Related
I want to insert multiple rows using one insert query.
My table looks like below:
+---------------+-------------+------+-----+---------+-------+
| Field         | Type        | Null | Key | Default | Extra |
+---------------+-------------+------+-----+---------+-------+
| seq_no        | varchar(20) | YES  |     | NULL    |       |
| classname     | varchar(20) | YES  |     | NULL    |       |
| classteacher  | varchar(20) | YES  |     | NULL    |       |
| no_of_student | varchar(20) | YES  |     | NULL    |       |
| student_roll  | varchar(20) | YES  |     | NULL    |       |
+---------------+-------------+------+-----+---------+-------+
Now I want to insert values in such a way that the table looks like below:
+--------+-----------+--------------+---------------+--------------+
| seq_no | classname | classteacher | no_of_student | student_roll |
+--------+-----------+--------------+---------------+--------------+
| 1      | physics   | a            | 02            | 01           |
| 1      | physics   | a            | 02            | 02           |
+--------+-----------+--------------+---------------+--------------+
where the user doesn't have to supply 'student_roll'.
Help me find a way where, using the kind of query below, one can get the above table:
insert into temp2(seq_no,classname,classteacher,no_of_student) values ('1','physics','a','02');
Here, if no_of_student is 02, then rows will be created with student_roll 01 and 02 sequentially.
I was thinking of doing this with a for loop.
Solutions would typically include a table of numbers and insert ... select syntax.
Here is a simple solution using an explicit inline table:
insert into temp2(seq_no,classname,classteacher,no_of_student,student_roll)
select 1, 'physics', 'a', 2, n
from (
select 1 n
union all select 2
union all select 3
union all select 4
union all select 5
union all select 6
union all select 7
union all select 8
union all select 9
union all select 10
) t
where n <= 2
In MySQL 8.0, if you have a table with enough records, say other_table, you can use row_number():
insert into temp2(seq_no,classname,classteacher,no_of_student,student_roll)
select 1, 'physics', 'a', 2, n
from (select row_number() over(order by rand()) n from other_table) t
where n <= 2 -- the number of rows to insert
Note: it has already been commented by RiggsFolly that the numeric-like values should be stored as numbers, not as strings. This answer assumes that. Otherwise, you would need to cast() the integer values to char, like so:
insert into ...
select '1', 'physics', 'a', '2', cast(n as char)
from ...
Here I have created another table named "other_table". It holds two columns, "num" and "num_explain", and the table looks like below:
+-----+-------------+
| num | num_explain |
+-----+-------------+
| 02  | 01          |
| 02  | 02          |
+-----+-------------+
Then I have used the below code to get desired result:
insert into temp2(seq_no,classname,classteacher,no_of_student,student_roll) select '1','physics','a',num,num_explain from other_table where num='02';
I currently have a file containing data that needs to populate 9 different tables. Each of these tables has a different number of columns and different datatypes, so I need to filter the source file (using the first column, which determines which table the row will go into).
My current method is to create a table that has generic column names col_1, col_2 etc. up to the last filled column in the file, and then create 9 views that reference this file. The issue I have is that different data types appear in the same columns, because the tables all have different structures.
Is there a possibility to create a dynamic schema that filters the .csv that the Hive table points to based on the first column?
thanks
Demo
data.csv
1,1,Now,11,22,2016-12-12
1,2,I,33,44,2017-01-01
3,3,heard,55,66,2017-02-02
1,4,you,77,88,2017-03-03
2,5,know,99,1010,2017-04-04
1,6,that,1111,1212,2017-05-05
2,7,secret,1313,1414,2017-06-06
create external table mycsv
(
rec_type int
,id int
,mystring string
,myint1 int
,myint2 int
,mydate date
)
row format delimited
fields terminated by ','
stored as textfile
;
select * from mycsv;
+----------+----+----------+--------+--------+------------+
| rec_type | id | mystring | myint1 | myint2 | mydate     |
+----------+----+----------+--------+--------+------------+
|        1 |  1 | Now      |     11 |     22 | 2016-12-12 |
|        1 |  2 | I        |     33 |     44 | 2017-01-01 |
|        3 |  3 | heard    |     55 |     66 | 2017-02-02 |
|        1 |  4 | you      |     77 |     88 | 2017-03-03 |
|        2 |  5 | know     |     99 |   1010 | 2017-04-04 |
|        1 |  6 | that     |   1111 |   1212 | 2017-05-05 |
|        2 |  7 | secret   |   1313 |   1414 | 2017-06-06 |
+----------+----+----------+--------+--------+------------+
create table t1(id int,mystring string);
create table t2(id int,mystring string,mydate date);
create table t3(id int,mydate date,myint1 int,myint2 int);
from mycsv
insert into t1 select id,mystring where rec_type = 1
insert into t2 select id,mystring,mydate where rec_type = 2
insert into t3 select id,mydate,myint1,myint2 where rec_type = 3
select * from t1;
+----+----------+
| id | mystring |
+----+----------+
|  1 | Now      |
|  2 | I        |
|  4 | you      |
|  6 | that     |
+----+----------+
select * from t2;
+----+----------+------------+
| id | mystring | mydate     |
+----+----------+------------+
|  5 | know     | 2017-04-04 |
|  7 | secret   | 2017-06-06 |
+----+----------+------------+
select * from t3;
+----+------------+--------+--------+
| id | mydate     | myint1 | myint2 |
+----+------------+--------+--------+
|  3 | 2017-02-02 |     55 |     66 |
+----+------------+--------+--------+
I'd like to understand a side-effect of something I was working on.
I wanted to create a large (2+ million) test table of random integers, so I ran the following:
CREATE TABLE `block_tests` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `num` int(11)) ENGINE=InnoDB;
INSERT INTO `block_tests` (`num`) VALUES(ROUND(RAND() * 1E6));
-- every repeat of this line doubles number of rows;
INSERT INTO block_tests (num) SELECT ROUND(RAND() * 1E6) FROM block_tests;
INSERT INTO block_tests (num) SELECT ROUND(RAND() * 1E6) FROM block_tests;
INSERT INTO block_tests (num) SELECT ROUND(RAND() * 1E6) FROM block_tests;
-- etc
The table size correctly doubles every iteration. What's strange are the ids of the rows that have been added:
mysql> select * from block_tests limit 17;
+----+--------+
| id | num    |
+----+--------+
|  1 | 814789 |
|  2 |  84489 |
|  3 | 978078 |
|  4 | 636924 |
|  6 | 250384 |
|  7 | 341151 |
|  8 | 954604 |
|  9 | 749565 |
| 13 | 884014 |
| 14 | 171375 |
| 15 | 204833 |
| 16 | 510040 |
| 17 | 935701 |
| 18 | 148383 |
| 19 | 934814 |
| 20 | 228923 |
| 28 | 340170 |
+----+--------+
17 rows in set (0.00 sec)
For some reason, there are skips in the ids. There's a pattern with the skips:
4 skip to 6 - skip 1
9 skip to 13 - skip 4
20 skip to 28 - skip 8
43 skip to 59 - skip 16
What's going on?
Maybe an answer: it could be a side effect of the "consecutive" algorithm for innodb_autoinc_lock_mode - Source. In that mode, a bulk insert such as INSERT ... SELECT reserves auto-increment values in successively doubling blocks, and any reserved values that go unused are discarded, which matches the growing skips (1, 4, 8, 16) above.
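If you want to confirm which mode your server is using, the variable can be inspected at runtime (it is read-only; changing it requires setting it at server startup):

```sql
SELECT @@innodb_autoinc_lock_mode;
-- 0 = traditional
-- 1 = consecutive (the default before MySQL 8.0; produces these gaps
--     for bulk inserts like INSERT ... SELECT)
-- 2 = interleaved (the default from MySQL 8.0)
```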
Is it possible to insert a series of numbered rows in one statement instead of inserting each row individually?
So if I had a table with columns A and B, and I wanted 50 rows with column A filled in from 1-50, is it possible to do that all in the same command without writing out each number individually?
As you tagged this with Postgres:
insert into some_table (col_a, col_b)
select i, null
from generate_series(1,50) i;
More details about generate_series() in the manual:
http://www.postgresql.org/docs/current/static/functions-srf.html
You also tagged this with mysql.
If you have a utility table of integers (0-9, which is simpler, I think, than a series of UNIONs), then you can emulate Postgres's clever behaviour as follows:
DROP TABLE IF EXISTS ints;
CREATE TABLE ints(i INT NOT NULL PRIMARY KEY);
INSERT INTO ints VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table(a INT NOT NULL PRIMARY KEY,b CHAR(1) NOT NULL);
INSERT INTO my_table (a,b) SELECT i2.i*10+i1.i+1 n,'x' FROM ints i1 JOIN ints i2 HAVING n <= 50;
SELECT * FROM my_table;
+----+---+
| a  | b |
+----+---+
|  1 | x |
|  2 | x |
|  3 | x |
|  4 | x |
|  5 | x |
|  6 | x |
|  7 | x |
|  8 | x |
...
| 46 | x |
| 47 | x |
| 48 | x |
| 49 | x |
| 50 | x |
+----+---+
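On MySQL 8.0 and later, a recursive CTE can replace the utility table entirely; a sketch of the same 1-50 insert, assuming the my_table definition above:

```sql
INSERT INTO my_table (a, b)
WITH RECURSIVE seq (i) AS (
  SELECT 1
  UNION ALL
  SELECT i + 1 FROM seq WHERE i < 50
)
SELECT i, 'x' FROM seq;
```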
I have a MySQL table with rows containing duplicate values of text ('a' and 'c'):
+------+-----+
| text | num |
+------+-----+
| a    |  10 |
| b    |  10 |
| c    |  10 |
| d    |  10 |
| c    |   5 |
| z    |  10 |
| a    |   6 |
+------+-----+
So I want to update these rows, summing the values of num. After that the table should look like this:
+------+-----+
| text | num |
+------+-----+
| a    |  16 |
| b    |  10 |
| c    |  15 |
| d    |  10 |
| z    |  10 |
+------+-----+
UPD: Edited question.
Use the aggregate function SUM with a GROUP BY clause. Something like this:
SELECT `text`, SUM(num) AS num
FROM YourTableName
GROUP BY `text`;
SQL fiddle Demo
This will give you:
| TEXT | NUM |
--------------
|    a |  16 |
|    b |  10 |
|    c |  15 |
|    d |  10 |
|    z |  10 |
You can create a temporary table, store the aggregated data in it, and then rebuild the original table from it:
1. Create a temporary table.
2. Select the aggregated data from the original table into it.
3. Delete all data in the original table.
4. Insert the aggregated data from the temporary table back into the original table.
Example SQL:
BEGIN;
CREATE TEMPORARY TABLE `table_name_tmp` LIKE `table_name`;
INSERT INTO `table_name_tmp` SELECT `text`, SUM(num) AS num FROM `table_name` GROUP BY 1;
DELETE FROM `table_name`;
INSERT INTO `table_name` SELECT * FROM `table_name_tmp`;
-- COMMIT;
I commented out the COMMIT statement to avoid accidents: please check the results first, then run COMMIT yourself (or ROLLBACK to undo).
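Before running COMMIT, a quick sanity check (a sketch; substitute your real table name) can confirm that no duplicate text values survive:

```sql
SELECT `text`, COUNT(*) AS rows_per_value
FROM `table_name`
GROUP BY `text`
HAVING COUNT(*) > 1;
-- an empty result set means every text value now appears exactly once
```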