MySQL Stored Procedure | How to write it?

This time I have a MySQL question. I'm trying to create a stored procedure which will execute a prepared statement. The goal is to get a ranged list from a table ("order_info"): the list is divided into "pages", each page is determined by a record count, and the results should be ordered by a particular field sorted either 'ASC' or 'DESC'. Each record represents an "order". The catch here is that the procedure returns the orders of a particular group; the order is associated to a user, which belongs to a group. Here's what I've done so far:
CREATE DEFINER=`root`@`%` PROCEDURE `getGroupOrders`(IN grp INT,
IN page INT,
IN count INT,
IN ord TINYINT,
IN srt VARCHAR(4)
)
BEGIN
PREPARE prepGroupOrders FROM
"SELECT oi.* FROM `dbre`.`order_info` oi
INNER JOIN `dbre`.`users` usr
ON oi.`username` = usr.`username` AND usr.`id_group` = ?
ORDER BY ? ? LIMIT ?, ?";
SET @g := grp;
SET @cnt := count;
SET @start := page*count;
SET @orderBy := ord;
SET @sortBy := srt;
EXECUTE prepGroupOrders USING @g,@orderBy,@sortBy,@start,@cnt;
END
I get a syntax error when executing this, even though the editor does not highlight any errors and lets me save the procedure. I think that one of the following may be happening:
I am incorrectly using `ASC` or `DESC`, since it is a SQL reserved word.
I read somewhere that prepared statements are for only ONE SQL query, and since I have nested queries it can't be done.
I've tested this standard query:
SELECT oi.* FROM `dbre`.`order_info` oi
INNER JOIN `dbre`.`users` usr
ON oi.`username` = usr.`username` AND usr.`id_group` = 1
ORDER BY `status` DESC LIMIT 5, 10;
And it gives me the results I want. So how would I design the procedure?
Any help is truly appreciated.

This may not necessarily solve your issue, but you can probably clean that query up a bit, eliminate the subquery, and get something that should perform a little better.
SELECT oi.*
FROM `dbre`.`order_info` oi
INNER JOIN `dbre`.`users` u
ON oi.username = u.username
AND u.id_group = 1
ORDER BY `status` DESC
LIMIT 5, 10;
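As for the syntax error itself: in a MySQL prepared statement, ? placeholders can only stand in for values, so they work for the group id and the LIMIT arguments, but not for the ORDER BY column or the ASC/DESC keyword. Those parts are usually concatenated into the statement text before PREPARE. A rough sketch of how the procedure body could be rewritten, assuming ord holds a column position (it is declared TINYINT) and srt is either 'ASC' or 'DESC':
BEGIN
  -- Splice the ORDER BY parts into the SQL text; keep real values as ? placeholders.
  SET @sql := CONCAT(
    'SELECT oi.* FROM `dbre`.`order_info` oi ',
    'INNER JOIN `dbre`.`users` usr ',
    'ON oi.`username` = usr.`username` AND usr.`id_group` = ? ',
    'ORDER BY ', ord, ' ', IF(srt = 'DESC', 'DESC', 'ASC'),
    ' LIMIT ?, ?');
  PREPARE prepGroupOrders FROM @sql;
  SET @g := grp;
  SET @start := page * count;
  SET @cnt := count;
  EXECUTE prepGroupOrders USING @g, @start, @cnt;
  DEALLOCATE PREPARE prepGroupOrders;
END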

Related

MySQL 5.5.56 Temporary table always empty when used in a trigger but works when manually running the query

Can someone help or clear things up for me? I've got this SQL code that needs to run in a trigger but doesn't work, although it does work when run manually from an SQL client.
SET @sr_id = NEW.purchase_id; /* SET @sr_id = 123456 when run manually */
SET @ndi = (SELECT COUNT(a.id) FROM purchase_rewards a LEFT JOIN item b ON b.id = a.item_id WHERE a.unit_id IS NOT NULL AND COALESCE(b.is_privileged,0) = 0 AND a.purchase_id = @sr_id);
SET @res = @ndi - CEIL(@ndi/2);
DROP TEMPORARY TABLE IF EXISTS for_removal;
CREATE TEMPORARY TABLE for_removal
SELECT ID FROM (
SELECT a.id, @rownum := @rownum + 1 AS `rank` FROM purchase_rewards a LEFT JOIN item b ON b.id = a.item_id
WHERE a.purchase_id = @sr_id AND COALESCE(b.is_privileged,0) = 0
) ft CROSS JOIN (SELECT @rownum := 0) r WHERE `rank` <= @res;
DELETE ta FROM purchase_rewards ta INNER JOIN for_removal tb ON ta.id = tb.id WHERE ta.purchase_id = @sr_id;
The code queries the purchased items that are not "privileged", putting a rank column on each and removing half of them. You only get rewarded for half of them; that's the point. The software was created by someone else and there is no source code, so this is a piggyback system built behind it.
I placed debug code in between each statement to see if the connection changes or the results were empty, but all is good except for the last part. Before the delete I added this debug code:
SET @icount = (SELECT COUNT(ID) FROM for_removal);
INSERT INTO debug_log SET `log` = @icount;
and the result is that the table is always empty. I also tried converting the code into a stored procedure, but I'm getting the same problem; it only works when running the code manually.
I'm currently settling on CURSOR and loop-deletes which works, but it is slower when there are hundreds of items.
Sample Data: dbfiddle
Thanks!
Based on the comments above, the answer is to set the @rownum variable before the query.
SET @rownum = 0;
CREATE TEMPORARY TABLE ...
The reason is that you can't depend on the order of table evaluation in the CROSS JOIN. If the subquery is evaluated before the initialization of @rownum, then @rownum will be NULL, and any attempt to increment it with @rownum := @rownum + 1 will also yield NULL. So rank will be NULL on every row, and no rows will satisfy the WHERE clause.
As for why this works in the MySQL client but not in the trigger, I have a theory:
The session variable @rownum will keep its value if you test your query multiple times. So if you set it to some non-NULL value once in a session, then test the ranking query in the same session subsequently, it will increment.
But if you run it as part of a trigger, it will likely be a brand new session each time, and the value of @rownum will initially be NULL.
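Concretely, in the trigger body that would look something like this (a sketch based on the code from the question; the inline (SELECT @rownum := 0) can stay, it just becomes redundant once the SET runs first):
-- Initialize the user variable in its own statement so it is never NULL
-- when the ranking subquery starts incrementing it.
SET @rownum = 0;
DROP TEMPORARY TABLE IF EXISTS for_removal;
CREATE TEMPORARY TABLE for_removal
SELECT ID FROM (
  SELECT a.id, @rownum := @rownum + 1 AS `rank`
  FROM purchase_rewards a LEFT JOIN item b ON b.id = a.item_id
  WHERE a.purchase_id = @sr_id AND COALESCE(b.is_privileged,0) = 0
) ft CROSS JOIN (SELECT @rownum := 0) r WHERE `rank` <= @res;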

Making a query that works for one value at a time work for a bunch of values

I have this query that fetches results from a bunch of tables and functions (I use MySQL workbench).
It is like that:
SET @user_name := "any_username";
SELECT @user_id := user_id FROM main_db.user WHERE user_name=@user_name;
SELECT @available_balannce := JSON_EXTRACT(get_ewallet(@user_id),'$.available');
SELECT @current_commisions := JSON_EXTRACT(get_ewallet(@user_id),'$.current_commisions');
SELECT @commisions := JSON_EXTRACT(get_ewallet(@user_id),'$.commisions');
SELECT user_id, -- or you can use @user_id here. Since it's SET a bit higher
user_name,
@available_balannce,
@current_commisions,
@commisions
FROM main_db.user
where user_name=@user_name;
So if you type any of the usernames into the first line, it will fetch the needed information. The result is of course a single-row table displayed in the MySQL Workbench window.
Now I want to make that work for a bunch of usernames and preferably output the result in the same window as several rows, which I can then export. This will only be used through the Workbench interface. But I'm lost about how to do the looping process through the list of usernames.
I tried defining the list:
SET @user_list := (SELECT user_name FROM main_db.user WHERE user_name IN ("username1","username2","username3","username4"));
and then go through them with LIMIT and OFFSET
SET @user_name := @user_list LIMIT i,1;
But that didn't work. I was lost somewhere trying to figure it out syntactically I believe.
You don't really need that many statements to generate the result you want. Instead, you can do:
select
user_id,
json_extract(get_ewallet(user_id), '$.available' ) as available_balance,
json_extract(get_ewallet(user_id), '$.current_commisions') as current_commisions,
json_extract(get_ewallet(user_id), '$.commisions' ) as commisions
from main_db.user
where user_name = @user_name;
Now this is easily extensible to handle several users at once. You would just change the where clause to an in condition, like:
where user_name in ('username1', 'username2', 'username3', 'username4')
In very recent versions of MySQL (8.0.14 or higher), you can use a lateral join, so the function is invoked only once per row:
select
u.user_id,
json_extract(e.ewallet, '$.available' ) as available_balance,
json_extract(e.ewallet, '$.current_commisions') as current_commisions,
json_extract(e.ewallet, '$.commisions' ) as commisions
from main_db.user u
left join lateral (select get_ewallet(u.user_id) as ewallet) e on true
where user_name = @user_name;

Update MySQL table any time another table changes

I am trying to reduce the number of queries my application uses to build the dashboard, so I am trying to gather all the info I will need in advance into one table. Most of the dashboard can be built in JavaScript using the JSON, which reduces the server load from doing tons of PHP foreach loops that were resulting in excess queries.
With that in mind, I have a query that pulls together user information from 3 other tables and concatenates the results into JSON grouped by family. I need to update the JSON object any time anything changes in any of the 3 tables, but I'm not sure what the "right" way to do this is.
I could set up a regular job to do an UPDATE statement where date is newer than the last update, but that would miss new records, and if I do inserts it misses updates. I could drop and rebuild the table, but it takes about 16 seconds to run the query as a whole, so that doesn't seem like the right answer.
Here is my initial query:
SET group_concat_max_len = 100000;
SELECT family_id, REPLACE(REPLACE(REPLACE(CONCAT("[", GROUP_CONCAT(family), "]"), "\\", ""), '"[', '['), ']"', ']') as family_members
FROM (
SELECT family_id,
JSON_OBJECT(
"customer_id", c.id,
"family_id", c.family_id,
"first_name", first_name,
"last_name", last_name,
"balance_0_30", pa.balance_0_30,
"balance_31_60", pa.balance_31_60,
"balance_61_90", pa.balance_61_90,
"balance_over_90", pa.balance_over_90,
"account_balance", pa.account_balance,
"lifetime_value", pa.lifetime_value,
"orders", CONCAT("[", past_orders, "]")
) AS family
FROM
customers AS c
LEFT JOIN accounting AS pa ON c.id = pa.customer_id
LEFT JOIN (
SELECT customer_id,
GROUP_CONCAT(
JSON_OBJECT(
"id", id,
"item", item,
"price", price,
"date_ordered", date_ordered
)
) as past_orders
FROM orders
WHERE date_ordered < NOW()
GROUP BY customer_id
) AS r ON r.customer_id = c.id
where c.user_id = 1
) AS results
GROUP BY family_id
I briefly looked into triggers, but what I was hoping for was something like:
create TRIGGER UPDATE_FROM_ORDERS
AFTER INSERT OR UPDATE
ON orders
(EXECUTE QUERY FROM ABOVE WHERE family_id = orders.family_id)
I was hoping to create something like that for each table, but at first glance it doesn't look like you can run complex queries such as that where we are creating nested JSON.
Am I wrong? Are triggers the right way to do this, or is there a better way?
As a demonstration:
DELIMITER $$
CREATE TRIGGER orders_au
AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
SET group_concat_max_len = 100000
;
UPDATE target_table t
SET t.somecol = ( SELECT expr
FROM ...
WHERE somecol = NEW.family_id
ORDER BY ...
LIMIT 1
)
WHERE t.family_id = NEW.family_id
;
END$$
DELIMITER ;
Notes:
MySQL triggers are row-level triggers; a trigger is fired once for each row that is affected by the triggering statement. MySQL does not support statement-level triggers.
The reference to NEW.family_id is a reference to the value of the family_id column of the row that was just updated, the row that the trigger was fired for.
MySQL prohibits the SQL statements in the trigger from modifying any rows of the orders table itself (the table the trigger is defined on), but they can modify other tables.
The SQL statements in a trigger body can be arbitrarily complex DML (INSERT/UPDATE/DELETE), as long as there is no bare SELECT returning a resultset to the client. DDL statements (most if not all) are disallowed in a MySQL trigger.
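One more note, since the question asks for AFTER INSERT OR UPDATE: MySQL has no compound trigger events, so you create a separate trigger per event (and per source table) that performs the same update. A sketch of the insert-side counterpart, using the same placeholders as the demonstration above:
DELIMITER $$
CREATE TRIGGER orders_ai
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
SET group_concat_max_len = 100000
;
-- same UPDATE of the summary table as in the AFTER UPDATE trigger, keyed on NEW.family_id
UPDATE target_table t
SET t.somecol = ( SELECT expr
FROM ...
WHERE somecol = NEW.family_id
ORDER BY ...
LIMIT 1
)
WHERE t.family_id = NEW.family_id
;
END$$
DELIMITER ;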

SQL Server Stored Procedure taking too long to retrieve data from database

I have a stored procedure that retrieves simple data from a table. For more than a year it was working just fine, but for a couple of days it has been taking more than 30 seconds to select the data. Sometimes it does not even show anything on the user interface.
If I execute the same stored procedure in SQL Server Management Studio, it takes 2-3 seconds. I tried recompiling the table and the procedure being used and also increased the timeout, but it didn't help, so I need your suggestions.
Here below is my stored procedure:
ALTER PROCEDURE [dbo].[sp_Monitoring_ver2]
#AgentID int = NULL
AS
BEGIN
SET NOCOUNT ON;
-- Insert statements for procedure here
select ROW_NUMBER() OVER(order by AgentFullName ASC) as CodeID, res.*, DATEDIFF(mi, stsdate, getdate()) as MinFromLastSignal, DATEDIFF(MI, LastPaymentDateTime, getdate()) as MinFromLastPayment
from
(
SELECT s.AgentID, a.name+' '+a.surname as TerminalFullName, a.loginName,
s.KioskStatus, s.StsDate, s.TotalMoney, s.AmountMoney, s.MoneyInside, s.Version, s.PrinterErrorCode, s.ValidatorErrorCode,
(select top(1) StatusDateTime from Payment where AgentID = s.AgentID order by PaymentID desc) as LastPaymentDateTime,
prt.errtxt as PrinterErrorText, val.errtxt as ValidatorErrorText,
s.IPAddress,
b.AgentID as ParentID, b.[name]+' '+b.surname AS AgentFullName
,(SELECT TOP 1 i.RegDate FROM dbo.InkasaiyaOfTerm i WHERE i.AgentID=s.AgentID order by i.ID DESC) AS LastCollectionDate
,(SELECT TOP 1 i.Kol FROM dbo.InkasaiyaOfTerm i WHERE i.AgentID=s.AgentID order by i.ID DESC) AS LastCollectionQuantity
,(SELECT TOP 1 i.Summa FROM dbo.InkasaiyaOfTerm i WHERE i.AgentID=s.AgentID order by i.ID DESC) AS LastCollectionAmount
FROM StatusTerminal_ver2 s
INNER JOIN ErrorCodeTerminal prt ON s.PrinterErrorCode = prt.ecode
INNER JOIN ErrorCodeTerminal val ON s.ValidatorErrorCode = val.ecode
INNER JOIN Agents a ON s.AgentID=a.AgentID
INNER JOIN Agents b ON a.parentID=b.AgentID
where s.AgentID IN (select AgentID FROM Agents WHERE hrccrt LIKE '%.'+CAST(#AgentID as varchar(10))+'.%' and agentType=2)
and DATEDIFF(DAY, StsDate, GETDATE())<7
) as res
order by AgentFullName ASC
END
What is the best solution for this?
In the stored procedure at the beginning set:
SET ARITHABORT ON
If that doesn't make any difference, then the cause is probably parameter sniffing: SQL Server has compiled the query based on the first parameter value you passed and generated an execution plan for it. That plan may be bad for other parameter values. You can use the OPTIMIZE FOR query hint.
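If parameter sniffing is the cause, one common fix is to add a query hint at the end of the SELECT inside the procedure (a sketch; the body of the query is elided here):
SELECT ...                                   -- the existing query from sp_Monitoring_ver2
FROM ...
ORDER BY AgentFullName ASC
OPTION (OPTIMIZE FOR (@AgentID UNKNOWN));    -- plan for a "typical" value instead of the sniffed one
-- or, at the cost of a recompile on every execution:
-- OPTION (RECOMPILE);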

Loop through column and update it with MySQL?

I want to loop through some records and update them with an ad hoc query in MySQL. I have a name field, so I just want to loop through all of them and append a counter to each name, so it will be name1, name2, name3. Most examples I see use stored procs, but I don't need a stored proc.
As a stepping stone on your way to developing an UPDATE statement, first generate a SELECT statement that generates the new name values to your liking. For example:
SELECT t.id
, t.name
, CONCAT(t.name,s.seq) AS new_name
FROM ( SELECT @i := @i + 1 AS seq
, m.id
FROM mytable m
JOIN (SELECT @i := 0) i
ORDER BY m.id
) s
JOIN mytable t
ON t.id = s.id
ORDER BY t.id
To unpack that a bit... the @i is a MySQL user variable. We use an inline view (aliased as i) to initialize @i to a value of 0. This inline view is joined to the table to be updated, and each row gets assigned an ascending integer value (aliased as seq) 1,2,3...
We also retrieve a primary (or unique) key value, so that we can match each of the rows from the inline view (one-to-one) to the table to be updated.
It's important that you understand how that statement is working, before you attempt writing an UPDATE statement following the same pattern.
We can now use that SELECT statement as an inline view in an UPDATE statement, for example:
UPDATE ( SELECT t.id
, t.name
, CONCAT(t.name,s.seq) AS new_name
FROM ( SELECT @i := @i + 1 AS seq
, m.id
FROM mytable m
JOIN (SELECT @i := 0) i
ORDER BY m.id
) s
JOIN mytable t
ON t.id = s.id
ORDER BY t.id
) r
JOIN mytable u
ON u.id = r.id
SET u.name = r.new_name
SQL Fiddle demonstration here:
http://sqlfiddle.com/#!2/a8796/1
I had to extrapolate, and provide a table name (mytable) and a column name for a primary key column (id).
In the SQL Fiddle, there's a second table, named prodtable which is identical to mytable. SQL Fiddle only allows SELECT in the query pane, so in order to demonstrate BOTH the SELECT and the UPDATE, I needed two identical tables.
CAVEAT: be VERY careful in using MySQL user variables. I typically use them only in SELECT statements, where the behavior is very consistent with careful coding. With DML statements it gets more dicey; the behavior may not be as consistent there. The "trick" is to use a SELECT statement as an inline view: MySQL (v5.1 and v5.5) will process the query for the inline view and materialize the resultset as a temporary MyISAM table.
I have successfully used this technique to assign values in an UPDATE statement. But (IMPORTANT NOTE) the MySQL documentation does NOT specify that this usage of MySQL user variables is supported or guaranteed, or that this behavior will not change in a future release.
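As a side note, on MySQL 8.0+ a window function avoids the user-variable caveat entirely; a minimal sketch against the same hypothetical mytable/id/name columns used above:
UPDATE mytable u
JOIN ( SELECT id,
              ROW_NUMBER() OVER (ORDER BY id) AS seq
       FROM mytable
     ) r
  ON u.id = r.id
SET u.name = CONCAT(u.name, r.seq);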
Have the names stored in a table. Then do a join against the names and update the second table you want to.
Thanks