Hi all,
Suppose I have the following mysql table testtable:
+----------+------------------------------+------+-----+---------+----------------+
| Field    | Type                         | Null | Key | Default | Extra          |
+----------+------------------------------+------+-----+---------+----------------+
| testID   | bigint(12) unsigned zerofill | NO   | MUL | NULL    | auto_increment |
| testcol  | varchar(10)                  | YES  |     | NULL    |                |
| testcol1 | varchar(10)                  | YES  |     | NULL    |                |
| testcol2 | varchar(10)                  | YES  |     | NULL    |                |
| testcol3 | varchar(10)                  | YES  |     | NULL    |                |
| testcol4 | varchar(10)                  | YES  |     | NULL    |                |
+----------+------------------------------+------+-----+---------+----------------+
I then insert several rows by running, let's say 5 times:
INSERT INTO testtable VALUES (null, 'testcol', 'testcol1', 'testcol2', 'testcol3', 'testcol4');
Then delete one row with testID = 000000000002:
DELETE FROM testtable WHERE testID = 000000000002;
My question is:
Will testID be reassigned as 000000000002 again later by running the same insert statement?
Thanks in advance.
No. If you don't specify the key, the engine assigns a new auto-incremented value. You can assign the 000000000002 key with an explicit insert:
INSERT INTO testtable VALUES (000000000002, 'testcol', 'testcol1', 'testcol2', 'testcol3', 'testcol4');
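For example, a quick check (just a sketch, assuming the five inserts above and no server restart in between, since older InnoDB versions could recalculate the counter on restart):
-- row 000000000002 has already been deleted; re-run the original insert
INSERT INTO testtable VALUES (NULL, 'testcol', 'testcol1', 'testcol2', 'testcol3', 'testcol4');
SELECT testID FROM testtable ORDER BY testID;
-- expected: 000000000001, 000000000003, 000000000004, 000000000005, 000000000006
-- the gap at 000000000002 is not reused automatically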
I'm not sure if this is related to Laravel or not, but I created the table with Laravel. I've got a table called programmers:
DESC programmers;
+--------------+---------------------+------+-----+---------+----------------+
| Field        | Type                | Null | Key | Default | Extra          |
+--------------+---------------------+------+-----+---------+----------------+
| id           | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| name         | varchar(255)        | NO   |     | NULL    |                |
| age          | int(11)             | NO   |     | NULL    |                |
| created_at   | timestamp           | YES  |     | NULL    |                |
| updated_at   | timestamp           | YES  |     | NULL    |                |
| framework_id | int(10) unsigned    | NO   |     | NULL    |                |
| test         | tinyint(1)          | NO   |     | NULL    |                |
+--------------+---------------------+------+-----+---------+----------------+
As you can see, there's a column called test that's not nullable and has a default value of NULL. When I ran the following command from the database I expected an error:
INSERT INTO programmers (name, age, framework_id) VALUES ('Melly2', 19, 2)
It actually worked fine, and here's the data:
SELECT * FROM programmers;
+----+--------+-----+---------------------+---------------------+--------------+------+
| id | name   | age | created_at          | updated_at          | framework_id | test |
+----+--------+-----+---------------------+---------------------+--------------+------+
|  1 | melly  |  20 | 2022-05-03 16:36:12 | 2022-05-03 16:36:12 |            1 |    0 |
|  2 | Melly2 |  19 | NULL                | NULL                |            2 |    0 |
+----+--------+-----+---------------------+---------------------+--------------+------+
The test column actually defaulted to 0, not NULL, and if I run the following command it tells me I can't have NULL as a value, as expected:
INSERT INTO programmers (name, age, framework_id, test) VALUES ('Melly2', 19, 3, null);
ERROR 1048 (23000): Column 'test' cannot be null
Question: can someone briefly explain why the test column didn't default to NULL?
In this scenario, the "Default: NULL" shown by DESC just means the column has no declared default. Because test is NOT NULL, it can never default to NULL; when you omit it in an INSERT, MySQL falls back to the implicit default for the column's datatype, which is 0 for integer types.
When you do provide a value, it must be compatible with the datatype you set for the column. Here the datatype is tinyint, so you can provide true/false, which are in fact converted into 1/0, or insert integers directly, e.g. 0, 1, 2, etc.
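As a sketch (assuming the non-strict SQL mode that allowed your original insert; the 'Melly3' rows are hypothetical):
-- omitting `test`: MySQL falls back to the implicit default for the type, 0 for integer columns
INSERT INTO programmers (name, age, framework_id) VALUES ('Melly3', 21, 2);

-- passing NULL explicitly violates the NOT NULL constraint
INSERT INTO programmers (name, age, framework_id, test) VALUES ('Melly3', 21, 2, NULL);
-- ERROR 1048 (23000): Column 'test' cannot be null

-- TRUE/FALSE are stored as 1/0 in a tinyint(1) column
INSERT INTO programmers (name, age, framework_id, test) VALUES ('Melly3', 21, 2, TRUE);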
I am posting this thread in order to get some advice regarding the performance of my SQL query.
I have 2 tables: one called HGVS_SNP with about 44657169 rows, and a run table with an average of 2000 rows.
When I try to update the Comment field of my run table, the query takes a very long time. I was wondering whether there is any way to speed up my SQL query.
Structure of HGVS_SNP Table:
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| snp_id    | int(11)     | YES  | MUL | NULL    |       |
| hgvs_name | text        | YES  |     | NULL    |       |
| source    | varchar(8)  | NO   |     | NULL    |       |
| upd_time  | varchar(32) | NO   |     | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
My run table has the following structure:
+----------------------+--------------+------+-----+---------+-------+
| Field                | Type         | Null | Key | Default | Extra |
+----------------------+--------------+------+-----+---------+-------+
| ID                   | varchar(7)   | YES  |     | NULL    |       |
| Reference            | varchar(7)   | YES  | MUL | NULL    |       |
| HGVSvar2             | varchar(120) | YES  | MUL | NULL    |       |
| Comment              | varchar(120) | YES  |     | NULL    |       |
| Compute              | varchar(20)  | YES  |     | NULL    |       |
+----------------------+--------------+------+-----+---------+-------+
Here's my query:
UPDATE run
INNER JOIN SNP_HGVS
ON run.HGVSvar2=SNP_HGVS.hgvs_name
SET run.Comment=concat('rs',SNP_HGVS.snp_id) WHERE run.Compute not like 'tron'
I'm guessing, since you JOIN a TEXT column against a VARCHAR(120) column, that you don't really need a TEXT column. Make it a VARCHAR so you can index it:
ALTER TABLE `HGVS_SNP` modify hgvs_name VARCHAR(120);
ALTER TABLE `HGVS_SNP` ADD KEY idx_hgvs_name (hgvs_name);
This will take a while on large tables.
Now your JOIN should be much faster. Also add an index on the Compute column:
ALTER TABLE `run` ADD KEY idx_compute (compute);
And the LIKE is unnecessary; change it to:
WHERE run.Compute != 'tron'
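Putting it together, the tuned version could look roughly like this (a sketch; I'm using the HGVS_SNP spelling from the DESC output, while the original query spells it SNP_HGVS):
ALTER TABLE `HGVS_SNP` MODIFY hgvs_name VARCHAR(120);
ALTER TABLE `HGVS_SNP` ADD KEY idx_hgvs_name (hgvs_name);
ALTER TABLE `run` ADD KEY idx_compute (Compute);

UPDATE run
INNER JOIN HGVS_SNP
        ON run.HGVSvar2 = HGVS_SNP.hgvs_name
SET run.Comment = CONCAT('rs', HGVS_SNP.snp_id)
WHERE run.Compute != 'tron';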
I have to fill entries in a new database. The old schema looked like the following:
+------------------------+----------+------+-----+---------+----------------+
| Field                  | Type     | Null | Key | Default | Extra          |
+------------------------+----------+------+-----+---------+----------------+
| id                     | int(11)  | NO   | PRI | NULL    | auto_increment |
| trainee_id             | int(11)  | NO   | MUL | NULL    |                |
| date                   | date     | NO   |     | NULL    |                |
| duration               | int(11)  | NO   |     | NULL    |                |
| documentationReference | longtext | YES  |     | NULL    |                |
| educationDepartment    | longtext | YES  |     | NULL    |                |
| completedtasks         | longtext | NO   |     | NULL    |                |
| yearOfTraining         | int(1)   | YES  |     | NULL    |                |
+------------------------+----------+------+-----+---------+----------------+
So right now my insert statements look like this:
INSERT INTO `report_completedtask`
VALUES (997,
3,
'2015-01-23',
8,
NULL,
'Netzwerk und Sicherheit',
'Berufsschule',
1);
But since my new schema looks like this:
+----------------------+----------+------+-----+---------+----------------+
| Field                | Type     | Null | Key | Default | Extra          |
+----------------------+----------+------+-----+---------+----------------+
| id                   | int(11)  | NO   | PRI | NULL    | auto_increment |
| trainee_id           | int(11)  | NO   | MUL | NULL    |                |
| task                 | longtext | NO   |     | NULL    |                |
| date                 | date     | NO   |     | NULL    |                |
| year_of_training     | int(11)  | NO   |     | NULL    |                |
| duration             | int(11)  | YES  |     | NULL    |                |
| documentation        | longtext | YES  |     | NULL    |                |
| education_department | longtext | YES  |     | NULL    |                |
+----------------------+----------+------+-----+---------+----------------+
I would need the following insert statement structure:
INSERT INTO `report_completedtask`
VALUES (997,
3,
'Netzwerk und Sicherheit',
'2015-01-23',
1,
8,
NULL,
'Berufsschule');
Here is the real problem: I have a huge file with the old entries which has more than 1000 lines. Is there any way I can rearrange them all for the new schema and alter the file?
Edit: I took dan08's approach now and combined it with a simple vi command:
:%s/VALUES/(id,trainee_id,date,duration,documentation,education_department,task,year_of_training) VALUES/g
Sometimes it is just too simple :D
Did you know you can specify the columns to insert into? For example:
INSERT INTO my_table (col_a, col_c, col_b) VALUES ('a', 'c', 'b');
So I think all you need to do is explicitly specify the columns to insert into. And they can be in any order, regardless of their order in the table.
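Applied to your tables, that could look something like this (a sketch that keeps the old value order and simply names the matching new columns; double-check that each old column really maps to the new column named for it):
INSERT INTO `report_completedtask`
  (id, trainee_id, date, duration, documentation, education_department, task, year_of_training)
VALUES (997, 3, '2015-01-23', 8, NULL, 'Netzwerk und Sicherheit', 'Berufsschule', 1);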
You can also INSERT multiple rows at once like so:
INSERT INTO my_table (col_a, col_c, col_b) VALUES
('a', 'c', 'b'),
('b', 'c', 'a'),
... ,
('b', 'a', 'c');
I don't know your DBMS, but with Oracle you can do something like this:
INSERT INTO tbl_temp2 (fld_id)
SELECT tbl_temp1.fld_order_id
FROM tbl_temp1
WHERE tbl_temp1.fld_order_id > 100;
In your case I recommend writing a PL/SQL block, if your DBMS supports it.
Another possibility, if you don't want to bother with code, is to use Talend to read your file and insert your data!
I am getting this error:
javax.servlet.ServletException: com.mysql.jdbc.NotUpdatable: Result Set not updatable.
I know this error is usually about the primary key, but I define a primary key for all my tables from the start, so this table has a primary key too. I am posting part of my code.
Statement st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet rs = st.executeQuery("Select * from test3 order by rand() limit 5");
List arrlist = new ArrayList();
while (rs.next()) {
    String xa = rs.getString("display");
    if (xa.equals("1")) {
        // collect the question text of rows that are currently displayable
        arrlist.add(rs.getString("question_text"));
    }
    // mark the row as used; this is the call that triggers NotUpdatable
    rs.updateString("display", "0");
    rs.updateRow();
}
Just tell me if something is going wrong in this code. Please help.
This is my database:
+----------------+---------------+------+-----+---------+----------------+
| Field          | Type          | Null | Key | Default | Extra          |
+----------------+---------------+------+-----+---------+----------------+
| id             | int(11)       | NO   | PRI | NULL    | auto_increment |
| index_question | varchar(45)   | YES  |     | NULL    |                |
| question_no    | varchar(10)   | YES  |     | NULL    |                |
| question_text  | varchar(1000) | YES  |     | NULL    |                |
| file_name      | varchar(128)  | YES  |     | NULL    |                |
| attachment     | mediumblob    | YES  |     | NULL    |                |
| display        | varchar(10)   | YES  |     | NULL    |                |
+----------------+---------------+------+-----+---------+----------------+
You have to update the row immediately after you have fetched it (SELECT ... FOR UPDATE together with rs.updateRow()),
OR
you have to write an UPDATE tablename SET ... WHERE ... statement to update a row at any time.
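For instance, a plain UPDATE for this table could be (a sketch; 42 is a placeholder id):
UPDATE test3
SET display = '0'
WHERE id = 42;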
The query cannot use functions. Try removing the "rand()" from the SQL query string.
See the JDBC 2.1 API Specification, section 5.6 for more details.
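One way to keep the random selection is to pick the random ids in a separate, read-only query and then open the updatable ResultSet with a plain, function-free statement (a sketch; the id list is a placeholder for whatever the first query returned):
-- step 1 (read-only): pick five random primary keys
SELECT id FROM test3 ORDER BY RAND() LIMIT 5;

-- step 2 (updatable ResultSet): fetch exactly those rows without any functions
SELECT * FROM test3 WHERE id IN (3, 17, 25, 40, 61);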
mysql> desc oldtable;
+---------------+--------------+------+-----+---------+----------------+
| Field         | Type         | Null | Key | Default | Extra          |
+---------------+--------------+------+-----+---------+----------------+
| uid           | int(11)      | NO   | PRI | NULL    | auto_increment |
| active        | char(1)      | NO   |     | NULL    |                |
| field3        | char(256)    | NO   |     | NULL    |                |
| field4        | char(256)    | NO   |     | NULL    |                |
+---------------+--------------+------+-----+---------+----------------+
mysql> desc newtable;
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| uid        | int(11)      | NO   | PRI | NULL    | auto_increment |
| active     | tinyint(1)   | NO   |     | 0       |                |
| field5     | int(12)      | NO   |     | 0       |                |
| field6     | varchar(12)  | NO   |     | 0       |                |
| field7     | varchar(12)  | NO   |     | 0       |                |
+------------+--------------+------+-----+---------+----------------+
This is similar to my previous question, change a field and port mysql table data via script:
[I would like to port data (dump) from oldtable into newtable. One issue is that earlier the table used char(1) for active, which stores either 'Y' or 'N'. Now newtable stores it as an int, either 1 or 0.
How can I fix this before porting the data? Should I use a shell script for such fixing & porting? Any sample scripts or tips :)]
But this question is: how do I achieve the same porting if both tables have a different number of fields, like above?
The answer is similar to the previous question's answer:
INSERT INTO newtable (uid, active, field5, field6, field7)
SELECT uid, FIELD(active, 'Y') AS active, 0, '', ''
FROM oldtable
Then update newtable with values for the new fields:
UPDATE newtable
SET
    field5 = (SELECT someExpression FROM someTable5 t WHERE t.uid = newtable.uid),
    field6 = (SELECT someExpression FROM someTable6 t WHERE ...),
    field7 = (SELECT someExpression FROM someTable7 t WHERE ...)
Also, you can define the new fields as nullable and leave these fields without a value.
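A sketch of that alternative (the MODIFY types mirror the DESC output above; note that MODIFY replaces the whole column definition, so the old default of 0 is dropped):
-- allow NULLs for the extra columns, then port only what oldtable provides
ALTER TABLE newtable
  MODIFY field5 INT(12) NULL,
  MODIFY field6 VARCHAR(12) NULL,
  MODIFY field7 VARCHAR(12) NULL;

INSERT INTO newtable (uid, active)
SELECT uid, FIELD(active, 'Y')
FROM oldtable;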