I have to fill entries into a new database. The old schema looked like the following:
+------------------------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------------+----------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| trainee_id | int(11) | NO | MUL | NULL | |
| date | date | NO | | NULL | |
| duration | int(11) | NO | | NULL | |
| documentationReference | longtext | YES | | NULL | |
| educationDepartment | longtext | YES | | NULL | |
| completedtasks | longtext | NO | | NULL | |
| yearOfTraining | int(1) | YES | | NULL | |
+------------------------+----------+------+-----+---------+----------------+
So now my INSERT statements look like this:
INSERT INTO `report_completedtask`
VALUES (997,
3,
'2015-01-23',
8,
NULL,
'Netzwerk und Sicherheit',
'Berufsschule',
1);
But since my new schema looks like this:
+----------------------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+----------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| trainee_id | int(11) | NO | MUL | NULL | |
| task | longtext | NO | | NULL | |
| date | date | NO | | NULL | |
| year_of_training | int(11) | NO | | NULL | |
| duration | int(11) | YES | | NULL | |
| documentation | longtext | YES | | NULL | |
| education_department | longtext | YES | | NULL | |
+----------------------+----------+------+-----+---------+----------------+
I would need the following INSERT statement structure:
INSERT INTO `report_completedtask`
VALUES (997,
3,
'Netzwerk und Sicherheit',
'2015-01-23',
1,
8,
NULL,
'Berufsschule');
Here is the real problem: I have a huge file with the old entries, more than 1000 lines of them. Is there any way I can rearrange them all for the new schema and rewrite the file?
Edit: I took dan08's approach and combined it with a simple vi command:
:%s/VALUES/(id,trainee_id,date,duration,documentation,education_department,task,year_of_training) VALUES/g
Sometimes it is just too simple :D
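For completeness, the same substitution can also be scripted outside of vi. Below is a minimal Python sketch of the transformation; the column list is taken verbatim from the vi command above, and the sample line is one of the old INSERT statements:

```python
# Column list copied from the vi substitution above.
columns = ("(id,trainee_id,date,duration,documentation,"
           "education_department,task,year_of_training)")

def rewrite_dump(sql: str) -> str:
    # Prefix every VALUES keyword with the explicit column list --
    # exactly what the :%s/VALUES/.../g substitution does in vi.
    return sql.replace("VALUES", columns + " VALUES")

old_line = ("INSERT INTO `report_completedtask` VALUES "
            "(997, 3, '2015-01-23', 8, NULL, "
            "'Netzwerk und Sicherheit', 'Berufsschule', 1);")
new_line = rewrite_dump(old_line)
print(new_line)
```

To process the whole dump, read the file into a string, pass it through `rewrite_dump`, and write the result back out.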
Did you know you can specify the columns to insert into? Example:
INSERT INTO my_table (col_a, col_c, col_b) VALUES ('a', 'c', 'b');
So I think all you need to do is explicitly specify the columns to insert into. They can be in any order, regardless of their order in the table.
You can also INSERT multiple rows at once like so:
INSERT INTO my_table (col_a, col_c, col_b) VALUES
('a', 'c', 'b'),
('b', 'c', 'a'),
... ,
('b', 'a', 'c');
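To see that the column list really decouples the VALUES order from the table's column order, here is a quick runnable check (SQLite via Python's sqlite3; the column-list rule is the same in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (col_a TEXT, col_b TEXT, col_c TEXT)")

# Columns listed in a different order than the table declares them.
conn.execute(
    "INSERT INTO my_table (col_a, col_c, col_b) VALUES ('a', 'c', 'b')"
)

# Each value lands in its named column, not in positional order.
row = conn.execute("SELECT col_a, col_b, col_c FROM my_table").fetchone()
print(row)  # ('a', 'b', 'c')
```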
I don't know your DBMS, but with Oracle you can do something like this:
INSERT INTO tbl_temp2 (fld_id)
SELECT tbl_temp1.fld_order_id
FROM tbl_temp1
WHERE tbl_temp1.fld_order_id > 100;
In your case, I recommend writing a PL/SQL block, if your DBMS supports it.
Another possibility, if you don't want to bother with code: you can use Talend to read your file and insert your data!
Related
I'm not sure if this is related to Laravel or not, but I created the table with Laravel. I've got a table called programmers:
DESC programmers;
+--------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+---------------------+------+-----+---------+----------------+
| id | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
| name | varchar(255) | NO | | NULL | |
| age | int(11) | NO | | NULL | |
| created_at | timestamp | YES | | NULL | |
| updated_at | timestamp | YES | | NULL | |
| framework_id | int(10) unsigned | NO | | NULL | |
| test | tinyint(1) | NO | | NULL | |
+--------------+---------------------+------+-----+---------+----------------+
As you can see, there's a column called test that's not nullable and has a default value of NULL. When I run the following command against the database, I expect an error:
INSERT INTO programmers (name, age, framework_id) VALUES ('Melly2', 19, 2)
but it actually worked fine, and here's the data:
SELECT * FROM programmers;
+----+--------+-----+---------------------+---------------------+--------------+------+
| id | name | age | created_at | updated_at | framework_id | test |
+----+--------+-----+---------------------+---------------------+--------------+------+
| 1 | melly | 20 | 2022-05-03 16:36:12 | 2022-05-03 16:36:12 | 1 | 0 |
| 2 | Melly2 | 19 | NULL | NULL | 2 | 0 |
+----+--------+-----+---------------------+---------------------+--------------+------+
The test column actually defaulted to 0, not NULL. And if I run the following command, it tells me I can't have NULL as a value, as expected:
INSERT INTO programmers (name, age, framework_id, test) VALUES ('Melly2', 19, 3, null);
ERROR 1048 (23000): Column 'test' cannot be null
Question: can someone briefly explain why the test column didn't default to NULL?
The NULL shown under Default in the DESC output doesn't mean the default value is NULL; for a NOT NULL column it means no default was declared. When you omit such a column from an INSERT (and strict SQL mode is disabled), MySQL falls back to the implicit default for the column's type, which is 0 for numeric types; with strict SQL mode enabled, the INSERT would be rejected instead. The datatype here is tinyint, so you should provide true/false, which will in fact be converted to 1/0, or else insert integers, e.g. 0, 1, 2, etc.
I want start_date and start_time copied into latest_date and latest_time while adding a new entry to my logbook. But I want this to be tied to logbook.logbook_index_id = logbook_index.id for all entries, too.
mysql> describe logbook;
+-------------------------------+-----------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------------------+-----------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| logbook_index_id | int(10) unsigned | NO | | NULL | |
| start_date | date | NO | | NULL | |
| start_time | time | NO | | NULL | |
+-------------------------------+-----------------------+------+-----+---------+----------------+
mysql> describe logbook_index;
+--------------------+----------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------------+----------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| first_date | date | NO | | NULL | |
| first_time | time | NO | | NULL | |
| latest_date | date | NO | | NULL | |
| latest_time | time | NO | | NULL | |
+--------------------+----------------------+------+-----+---------+----------------+
At the moment, this is as far as I've got:
create trigger update_dates after insert on logbook
for each row update logbook_index
set latest_date = start_date where logbook_index.id = logbook_index_id;
I bet I'm doing it mostly wrong. How does this work correctly, and how do I get the time copied too?
If I understood your question correctly:
For this I would suggest using a trigger.
You can put an AFTER INSERT trigger on the table that you insert into; inside the trigger you can put the UPDATE to the other table.
In order to access values from the newly inserted record, you need to do the following:
UPDATE logbook_index
SET latest_date = NEW.start_date
WHERE logbook_index.id = NEW.logbook_index_id;
Notice the keyword NEW, which is used to access the newly inserted record.
If you were using an AFTER UPDATE trigger, you could access the old values by using OLD.
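Putting the pieces together, a complete trigger that copies both the date and the time could look like the sketch below. It is run here against SQLite via Python's sqlite3 (SQLite triggers also use NEW; MySQL's syntax additionally requires FOR EACH ROW, as in the question's attempt), with made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logbook_index (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    first_date TEXT NOT NULL, first_time TEXT NOT NULL,
    latest_date TEXT NOT NULL, latest_time TEXT NOT NULL
);
CREATE TABLE logbook (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    logbook_index_id INTEGER NOT NULL,
    start_date TEXT NOT NULL, start_time TEXT NOT NULL
);

-- After each logbook insert, push the new row's date and time
-- into the matching logbook_index row.
CREATE TRIGGER update_dates AFTER INSERT ON logbook
BEGIN
    UPDATE logbook_index
    SET latest_date = NEW.start_date,
        latest_time = NEW.start_time
    WHERE logbook_index.id = NEW.logbook_index_id;
END;

INSERT INTO logbook_index VALUES (1, '2024-01-01', '08:00', '2024-01-01', '08:00');
INSERT INTO logbook (logbook_index_id, start_date, start_time)
VALUES (1, '2024-03-15', '09:30');
""")
latest = conn.execute(
    "SELECT latest_date, latest_time FROM logbook_index WHERE id = 1"
).fetchone()
print(latest)  # ('2024-03-15', '09:30')
```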
What you're searching for is a trigger: a procedure that's automatically invoked in response to an event, in your case the insertion of a row into the logbook table.
I have two tables gains and final_gains.
I'm wondering how I could calculate the difference between two columns and insert it into a different table. I need to be using a WHERE clause that matches runescape_name in the gains table.
Like so
hitpoints_end_exp - hitpoints_starting_exp,
magic_end_exp - magic_starting_exp,
range_end_exp - range_starting_exp
And insert the results into final_gains.hp_gained, final_gains.magic_gained and final_gains.range_gained.
Here are my two tables
gains
+------------------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------------------+-------------+------+-----+---------+-------+
| runescape_name | varchar(12) | NO | PRI | NULL | |
| hitpoints_starting_exp | int(50) | NO | | NULL | |
| magic_starting_exp | int(50) | NO | | NULL | |
| range_starting_exp | int(50) | NO | | NULL | |
| hitpoints_end_exp | int(50) | NO | | NULL | |
| magic_end_exp | int(50) | NO | | NULL | |
| range_end_exp | int(50) | NO | | NULL | |
+------------------------+-------------+------+-----+---------+-------+
final_gains
+----------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------+-------------+------+-----+---------+-------+
| runescape_name | varchar(12) | NO | PRI | NULL | |
| hp_gained | int(50) | NO | | NULL | |
| magic_gained | int(50) | NO | | NULL | |
| range_gained | int(50) | NO | | NULL | |
+----------------+-------------+------+-----+---------+-------+
4 rows in set (0.00 sec)
Sorry if I'm unclear; I'm trying to explain as best I can. I hope I'm clear enough.
Use an INSERT ... SELECT like this:
INSERT INTO final_gains (runescape_name, hp_gained, magic_gained, range_gained)
SELECT runescape_name,
hitpoints_end_exp - hitpoints_starting_exp,
magic_end_exp - magic_starting_exp,
range_end_exp - range_starting_exp
FROM gains;
In order to avoid duplicate keys:
INSERT INTO final_gains (runescape_name, hp_gained, magic_gained, range_gained)
SELECT runescape_name,
hitpoints_end_exp - hitpoints_starting_exp,
magic_end_exp - magic_starting_exp,
range_end_exp - range_starting_exp
FROM gains
ON DUPLICATE KEY
UPDATE hp_gained = hitpoints_end_exp - hitpoints_starting_exp,
magic_gained = magic_end_exp - magic_starting_exp,
range_gained = range_end_exp - range_starting_exp;
This is untested code, but it should be close.
Note: I removed my first suggestion as it is not applicable to these table definitions. runescape_name is the primary key in table final_gains, so it has to be inserted/assigned as well.
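As a sanity check of the plain INSERT ... SELECT (leaving aside the MySQL-specific ON DUPLICATE KEY part), here it is run against SQLite via Python's sqlite3 with one made-up sample row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gains (
    runescape_name TEXT PRIMARY KEY,
    hitpoints_starting_exp INTEGER, magic_starting_exp INTEGER,
    range_starting_exp INTEGER, hitpoints_end_exp INTEGER,
    magic_end_exp INTEGER, range_end_exp INTEGER
);
CREATE TABLE final_gains (
    runescape_name TEXT PRIMARY KEY,
    hp_gained INTEGER, magic_gained INTEGER, range_gained INTEGER
);

-- Sample data: starting and ending exp for one player.
INSERT INTO gains VALUES ('Zezima', 100, 200, 300, 150, 260, 330);

-- The differences are computed in the SELECT and inserted directly.
INSERT INTO final_gains (runescape_name, hp_gained, magic_gained, range_gained)
SELECT runescape_name,
       hitpoints_end_exp - hitpoints_starting_exp,
       magic_end_exp - magic_starting_exp,
       range_end_exp - range_starting_exp
FROM gains;
""")
row = conn.execute("SELECT * FROM final_gains").fetchone()
print(row)  # ('Zezima', 50, 60, 30)
```

SQLite spells the duplicate-key variant `ON CONFLICT (runescape_name) DO UPDATE` instead of `ON DUPLICATE KEY UPDATE`, but the INSERT ... SELECT core is the same.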
Hi all,
Suppose I have the following MySQL table testtable:
+----------+------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------------------------+------+-----+---------+----------------+
| testID | bigint(12) unsigned zerofill | NO | MUL | NULL | auto_increment |
| testcol | varchar(10) | YES | | NULL | |
| testcol1 | varchar(10) | YES | | NULL | |
| testcol2 | varchar(10) | YES | | NULL | |
| testcol3 | varchar(10) | YES | | NULL | |
| testcol4 | varchar(10) | YES | | NULL | |
+----------+------------------------------+------+-----+---------+----------------+
I then insert several rows by running the following, let's say, 5 times:
INSERT INTO testtable VALUES (null, 'testcol', 'testcol1', 'testcol2', 'testcol3', 'testcol4');
Then delete one row with testID = 000000000002:
DELETE FROM testtable WHERE testID = 000000000002;
My question is:
Will testID be assigned the value 000000000002 again later by running the same INSERT statement?
Thanks in advance.
No. If you don't specify the key, the engine assigns a new auto-incremented value; the counter only moves forward, so a deleted id is not handed out again. (Caveat: with InnoDB before MySQL 8.0, the counter lives in memory and is re-initialized from MAX(testID) on server restart, so the highest freed ids can be reused after a restart.) You can still assign the 000000000002 key with an explicit insert:
INSERT INTO testtable VALUES (000000000002, 'testcol', 'testcol1', 'testcol2', 'testcol3', 'testcol4');
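The behavior is easy to demonstrate; SQLite's AUTOINCREMENT acts like MySQL's auto_increment in this respect (the counter never goes backwards), so the following Python sketch shows a freed middle id never being handed out again:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE testtable (testID INTEGER PRIMARY KEY AUTOINCREMENT, testcol TEXT)"
)

# Insert five rows: they receive testID 1..5.
for _ in range(5):
    conn.execute("INSERT INTO testtable (testcol) VALUES ('x')")

# Delete the row with testID = 2.
conn.execute("DELETE FROM testtable WHERE testID = 2")

# The next insert gets 6, not the freed 2.
cur = conn.execute("INSERT INTO testtable (testcol) VALUES ('x')")
print(cur.lastrowid)  # 6
```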
mysql> desc oldtable;
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| uid | int(11) | NO | PRI | NULL | auto_increment |
| active | char(1) | NO | | NULL | |
| field3 | char(256) | NO | | NULL | |
| field4 | char(256) | NO | | NULL | |
+---------------+--------------+------+-----+---------+----------------+
mysql> desc newtable;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| uid | int(11) | NO | PRI | NULL | auto_increment |
| active | tinyint(1) | NO | | 0 | |
| field5 | int(12) | NO | | 0 | |
| field6 | varchar(12) | NO | | 0 | |
| field7 | varchar(12) | NO | | 0 | |
+------------+--------------+------+-----+---------+----------------+
This is similar to my previous question, change a field and port mysql table data via script ?
[I would like to port data (a dump) from oldtable into newtable. One issue is that the old table used char(1) for active, which stores either 'Y' or 'N', while newtable stores it as an int, either 1 or 0.
How can I fix this before porting the data? Should I use a shell script for such fixing & porting? Any sample scripts or tips :)]
But the question this time is: how do I achieve the same porting if the two tables have a different number of fields, like above?
The answer is similar to the previous question's answer:
INSERT INTO newtable (uid, active, field5, field6, field7)
SELECT uid, FIELD(active, 'Y') AS active, 0, '', ''
FROM oldtable;
Then update newtable with the new fields' values:
update newTable
set
field5 = (select someExpression from someTable5 t where t.uid=newTable.uid),
field6 = (select someExpression from someTable6 t where ...),
field7 = (select someExpression from someTable7 t where ...)
Also, you can define the new fields as NULL-allowed and leave these fields without a value.
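Since FIELD() is MySQL-specific, the same Y/N-to-1/0 mapping can also be written as a portable CASE expression. Here is the porting query run against SQLite via Python's sqlite3, with two made-up rows and the answer's placeholder defaults for the new fields:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE oldtable (uid INTEGER PRIMARY KEY, active TEXT, field3 TEXT, field4 TEXT);
CREATE TABLE newtable (
    uid INTEGER PRIMARY KEY, active INTEGER,
    field5 INTEGER, field6 TEXT, field7 TEXT
);
INSERT INTO oldtable VALUES (1, 'Y', 'a', 'b'), (2, 'N', 'c', 'd');

-- CASE maps 'Y' -> 1 and everything else -> 0,
-- equivalent to FIELD(active, 'Y') in MySQL.
INSERT INTO newtable (uid, active, field5, field6, field7)
SELECT uid, CASE active WHEN 'Y' THEN 1 ELSE 0 END, 0, '', ''
FROM oldtable;
""")
rows = conn.execute("SELECT uid, active FROM newtable ORDER BY uid").fetchall()
print(rows)  # [(1, 1), (2, 0)]
```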