GROUP_CONCAT creates NULL entries - MySQL

I have an old helpdesk ticketing system, and I've been tasked with porting its data into our new ticketing system. Here's the query I'm running:
DROP TABLE IF EXISTS final_table;
CREATE TABLE final_table (
name varchar(40),
ticketID BIGINT,
ticket_id BIGINT,
subject varchar(255),
priority_id varchar(10),
status varchar(10),
updated DATE,
created DATE,
closed DATE,
response LONGTEXT
);
ALTER TABLE final_table ADD PRIMARY KEY(ticket_id);
INSERT INTO final_table
SELECT name, ticketID, ticket_id, subject, priority_id, status, updated, created, closed, response
FROM ost_ticket;
ALTER TABLE final_table ADD response LONGTEXT;
ALTER TABLE final_table ADD impact varchar(40);
ALTER TABLE final_table ADD category varchar(20);
ALTER TABLE final_table ADD queue varchar(20);
UPDATE final_table SET impact = "1 person can't work";
UPDATE final_table SET category = "Other";
UPDATE final_table SET queue = "Help Desk";
INSERT INTO final_table( response )
SELECT GROUP_CONCAT(ost_ticket_response.response SEPARATOR "\n\n") AS response
FROM ost_ticket_response
GROUP BY ost_ticket_response.ticket_id;
The old ticketing software has the PK listed multiple times in the ost_ticket_response table, hence the need for the GROUP_CONCAT call in the last four lines. Basically I'm trying to combine the entries that have the same ticket_id and then pull that into the temp table I'm creating. If I run the query as it stands now, it just populates the response column with null results. Any ideas?
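A note on why the NULLs appear: INSERT INTO final_table(response) SELECT ... creates brand-new rows that contain only a response, with every other column NULL; it does not attach the concatenated responses to the rows already copied from ost_ticket. One possible fix, as a sketch (assuming ticket_id in ost_ticket_response matches final_table.ticket_id), is to replace that last INSERT with a multi-table UPDATE against the grouped subquery:
UPDATE final_table f
JOIN (
    SELECT ticket_id,
           GROUP_CONCAT(response SEPARATOR "\n\n") AS response
    FROM ost_ticket_response
    GROUP BY ticket_id
) r ON r.ticket_id = f.ticket_id
SET f.response = r.response;
Note that GROUP_CONCAT output is capped by group_concat_max_len (1024 bytes by default), so long ticket threads may also need SET SESSION group_concat_max_len to a larger value before running this.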

Related

MySQL result won't update after new changes

So I made a change from 'Chemistry' to 'Physic' in the 'major' column, but as soon as I executed it, nothing changed and it showed the same result. Why? (I used MySQL Workbench)
Here's the original code:
CREATE TABLE people_data1 (
id INT,
username VARCHAR(20),
major VARCHAR(20) UNIQUE,
PRIMARY KEY(id)
);
# Insert some data into the table
INSERT INTO people_data1(id,username) VALUES(3,'Clara'); # Person with no major
INSERT INTO people_data1 VALUES(51,'Mr Bald','Chemistry'); # Change to 'Physic'
SELECT * FROM people_data1;
The new one:
CREATE TABLE people_data1 (
id INT,
username VARCHAR(20),
major VARCHAR(20) UNIQUE,
PRIMARY KEY(id)
);
# Insert some data into the table
INSERT INTO people_data1(id,username) VALUES(3,'Clara'); # Person with no major
INSERT INTO people_data1 VALUES(51,'Mr Bald','Physic');
SELECT * FROM people_data1;
Inserts add new records; you want an update here:
UPDATE people_data1
SET major = 'Physics'
WHERE id = 51;
Run this update after your first two insert statements to get your desired results.

MySQL Trigger To Populate New Table

Hey all, so I have created a table named products that looks like so:
CREATE TABLE products
(
Prod_ID int(10),
Prod_name varchar(20),
Prod_qty varchar(20),
Prod_price int(20)
);
The product_log table is nearly identical to the products table:
CREATE TABLE product_log
(
Prod_ID int(10),
Prod_name varchar(20),
Action_Date date,
Updated_by varchar(30),
Action varchar(30)
);
Next I have created a trigger called products_after_insert, which should insert data into the product_log table after a row in products is inserted.
The requirement for the AFTER INSERT trigger is that the action date should be inserted into the product_log table, along with the name of the user who inserted the data (e.g. a data operator),
and lastly the action performed (here, insertion) should be recorded in the product_log table automatically.
Here is my trigger:
DELIMITER //
CREATE TRIGGER products_after_insert
AFTER INSERT
ON products
FOR EACH ROW
BEGIN
DECLARE data_operator int(10);
DECLARE action_per varchar(200);
SET data_operator = 1;
SET action_per = 'INSERTION';
IF data_operator=1 THEN
INSERT INTO product_log(prod_id,prod_name,Action_date,Updated_by,Action)
VALUES(NEW.prod_id,NEW.prod_name,SYSDATE(),'data_operator','action_per');
END IF;
END;
//DELIMITER;
Now, I assume I am constructing my trigger incorrectly, because it appears not to be working.
Does anyone know what I am doing wrong? Any help would be greatly appreciated! Thanks!
There's at least this that you are doing wrong:
VALUES(NEW.prod_id,NEW.prod_name,SYSDATE(),'data_operator','action_per');
You have properly defined the variables data_operator and action_per previously:
SET data_operator = 1;
SET action_per = 'INSERTION';
But in your INSERT statement you surrounded them with quotes, which turned them into string literals. Your statement should be:
VALUES(NEW.prod_id,NEW.prod_name,SYSDATE(),data_operator,action_per);
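Another thing worth checking is the very last line. With DELIMITER //, the CREATE TRIGGER statement has to be terminated by // on its own before the delimiter is reset; //DELIMITER; on one line never resets it, because the client only recognizes the DELIMITER command at the start of a line. The ending would usually look like:
END;
//
DELIMITER ;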

MySQL: insert operation errors with "Data too long"

In MySQL, I create a table:
CREATE TABLE fb_group_feed (
Post_ID varchar(64),
Permalink varchar(128),
Create_time varchar(32),
Updated_time varchar(32),
Author varchar(32),
Author_ID bigint,
Message text,
Link varchar(1024),
Likes int,
Comments int,
group_ID bigint,
foreign key(group_ID) references fb_group_info(ID)
)
When I insert a row of data into this table, there is an error:
INSERT INTO fb_group_feed VALUES (
'1610393525875114_1842755835972214','https://www.facebook.com/groups/1610393525875114/permalink/1842755835972214/',
'2017-01-22T17:12:41+0000','2017-01-23T00:45:16+0000','Chibura Hakkai',457297014658600,
'Pretty hands he has... credit Wilma Alberti',
'https://www.facebook.com/photo.php?fbid=466756323712669&set=gm.1842755835972214&type=3',
175,7,1610393525875114)
it errors:
ERROR 1406 (22001): Data too long for column 'Post_ID' at row 1
But I have set Post_ID to varchar(64), which I think is long enough. Could you please help with that?
The first thing you should do is run DESCRIBE fb_group_feed; that will tell you the current order of the columns, which is the order used for insertions if you don't specify the columns explicitly. As per the doco:
If you do not specify a list of column names for INSERT ... VALUES or INSERT ... SELECT, values for every column in the table must be provided by the VALUES list or the SELECT statement. If you do not know the order of the columns in the table, use DESCRIBE tbl_name to find out.
Alternatively (and far safer), you could use the explicit form:
INSERT INTO fb_group_feed (
Post_ID,
Permalink,
<other columns>
) VALUES (
'1610393525875114_1842755835972214',
'https://www.facebook.com/groups/blah/blah/blah/',
<other values>
)

How to detect deleted rows when migrating data

I have a main database and am moving data from that database to a second data warehouse on a periodic schedule.
Instead of migrating an entire table each time, I want to migrate only the rows that have changed since the process last ran. This is easy enough to do with a WHERE clause. However, suppose some rows have been deleted in the main database. I don't have a good way to detect which rows no longer exist, so that I can delete them in the data warehouse too. Is there a good way to do this? (As opposed to reloading the entire table each time, since the table is huge.)
It can be done in the following steps; let's say in this example I am using a CUSTOMERS table:
CREATE TABLE CUSTOMERS(
ID INT NOT NULL,
NAME VARCHAR (20) NOT NULL,
AGE INT NOT NULL,
ADDRESS CHAR (25) ,
LAST_UPDATED DATETIME,
PRIMARY KEY (ID)
);
Create a CDC (change data capture) table:
CREATE TABLE CUSTOMERS_CDC(
ID INT NOT NULL,
LAST_UPDATED DATETIME,
PRIMARY KEY (ID)
);
Create a trigger on the source table for the delete event, like below:
CREATE TRIGGER TRG_CUSTOMERS_DEL
ON CUSTOMERS
FOR DELETE
AS
INSERT INTO CUSTOMERS_CDC (ID, LAST_UPDATED)
SELECT ID, getdate()
FROM DELETED
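Note that the trigger above is written in SQL Server syntax (FOR DELETE ... AS, the DELETED pseudo-table, getdate()). If your main database is MySQL, like the rest of these questions, a minimal equivalent would be something like this sketch:
CREATE TRIGGER TRG_CUSTOMERS_DEL
BEFORE DELETE ON CUSTOMERS
FOR EACH ROW
-- OLD.ID is the ID of the row being deleted; NOW() plays the role of getdate()
INSERT INTO CUSTOMERS_CDC (ID, LAST_UPDATED)
VALUES (OLD.ID, NOW());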
In your ETL process, where you query the source for changes, add the deleted-record information through a UNION, or create a separate process like below:
SELECT ID, NAME, AGE, ADDRESS, LAST_UPDATED, 'I/U' STATUS
FROM CUSTOMERS
WHERE LAST_UPDATED > #lastpulldate
UNION
SELECT ID, null, null, null, LAST_UPDATED, 'D' STATUS
FROM CUSTOMERS_CDC
WHERE LAST_UPDATED > #lastpulldate
If you just fire an UPDATE query, it won't update rows that no longer exist. The way I see it: you already have a WHERE clause that selects the changed rows, and you'd use that as part of an UPDATE query, unless you are doing a CSV export. If you take a mysqldump of the rows you wish to update and load them into a new tempTable in the main database, then:
UPDATE mainTable
SET <columns to update>
WHERE id IN (SELECT id FROM tempTable WHERE id > 0 AND id < 1000);
If there is no corresponding match, no update gets run and no error occurs, by using the id limits as parameters.

How can I use a trigger for populating 3 tables based on another table (populated with 'load data infile')?

I have a final project for a MySQL class, with the following requirements:
I have a table called tab_interm with the following columns:
DateVisit varchar(50),
hourentering time,
SnamePatient varchar(100),
FnamePatient varchar(100),
SNameDoctors varchar(100),
FnameDoctors varchar(100),
Cabinet varchar(100)
The data for populating this table are in a file.txt.
So I have to create a trigger which populates the following tables:
Pacients (id_pacient int auto_increment PK, Sname varchar(50), fname varchar(50))
Cabinets (id_cabinet int auto_increment PK, name) and
visits (datehourvisits datetime, id_doctor, id_patient, id_cabinet, the last 3 being FKs).
The table doctors (id_doctor, sname, fname) is already populated.
The FKs in visits will be populated using last_insert_id().
The trigger will also call a stored function that transforms DateVisit and hourentering into a datetime value (probably with str_to_date and concat).
Also, I have to declare a continue handler which is activated when a duplicate value is inserted.
I really don't know how to start this trigger. Maybe you will have an idea... thank you.
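One possible starting point, strictly as a sketch: the function name fn_make_datetime, the trigger name trg_tab_interm_ai, and the '%d.%m.%Y' date format are all invented here, so adjust them to your file and naming rules; the CONTINUE HANDLER for error 1062 (duplicate key) assumes Pacients and Cabinets carry UNIQUE constraints on their name columns.
-- Helper: combine the varchar date and the time into one DATETIME.
-- A single-statement body, so no DELIMITER change is needed yet.
CREATE FUNCTION fn_make_datetime(p_date VARCHAR(50), p_hour TIME)
RETURNS DATETIME DETERMINISTIC
RETURN TIMESTAMP(STR_TO_DATE(p_date, '%d.%m.%Y'), p_hour);
DELIMITER //
CREATE TRIGGER trg_tab_interm_ai
AFTER INSERT ON tab_interm
FOR EACH ROW
BEGIN
DECLARE v_patient INT;
DECLARE v_cabinet INT;
DECLARE v_doctor INT;
-- Continue handler: when a duplicate value is inserted (error 1062), skip it and carry on.
DECLARE CONTINUE HANDLER FOR 1062 BEGIN END;
INSERT INTO Pacients (Sname, fname)
VALUES (NEW.SnamePatient, NEW.FnamePatient);
SET v_patient = LAST_INSERT_ID();
INSERT INTO Cabinets (name)
VALUES (NEW.Cabinet);
SET v_cabinet = LAST_INSERT_ID();
-- doctors is already populated, so look the doctor up by name.
SELECT id_doctor INTO v_doctor
FROM doctors
WHERE sname = NEW.SNameDoctors AND fname = NEW.FnameDoctors;
INSERT INTO visits (datehourvisits, id_doctor, id_patient, id_cabinet)
VALUES (fn_make_datetime(NEW.DateVisit, NEW.hourentering),
v_doctor, v_patient, v_cabinet);
END;
//
DELIMITER ;
LOAD DATA INFILE does fire INSERT triggers, once per loaded row, so this covers the file import. One caveat: if the handler swallows a duplicate on Pacients or Cabinets, LAST_INSERT_ID() will not point at the already-existing row, so a more robust variant looks the id up with a SELECT after each insert.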