In our product we are extending support to Oracle and MySQL. Can anyone please help me migrate the following sample SQL query, which works fine with MS-SQL Server? I already tried at my end, but somehow it's not working for Oracle/MySQL. Any help is much appreciated, and I will convert the rest of the queries myself. Thank you.
SELECT A.SERVERID,A.DATAID
,A.CREATETIMESTAMP AS 'Date Time'
,A.OBJECTINSTNAME
,A.PROJECTNAME
,TEMP_IND_1.TEMP_ROW_NUM FROM DATALOG AS A WITH (NOLOCK) INNER JOIN
(
SELECT DATAID,ROW_NUMBER() OVER(ORDER BY CREATETIMESTAMP DESC) AS TEMP_ROW_NUM FROM DATALOG WITH (NOLOCK)
WHERE PROJECTNAME='ProjectA'
) AS TEMP_IND_1 ON A.DATAID = TEMP_IND_1.DATAID
WHERE TEMP_IND_1.TEMP_ROW_NUM BETWEEN 1 AND 50;
You can use the same query after removing the WITH (NOLOCK) parts; they have no effect in Oracle and you don't need them there. Also, Oracle does not allow the AS keyword before a table alias, and a column alias containing spaces has to be in double quotes rather than single quotes. So your query becomes:
SELECT A.SERVERID,A.DATAID
,A.CREATETIMESTAMP "Date Time"
,A.OBJECTINSTNAME
,A.PROJECTNAME
,TEMP_IND_1.TEMP_ROW_NUM FROM DATALOG A INNER JOIN
(
SELECT DATAID,
ROW_NUMBER() OVER(ORDER BY CREATETIMESTAMP DESC) TEMP_ROW_NUM
FROM DATALOG
WHERE PROJECTNAME='ProjectA'
) TEMP_IND_1 ON A.DATAID = TEMP_IND_1.DATAID
WHERE TEMP_IND_1.TEMP_ROW_NUM BETWEEN 1 AND 50;
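As an aside: if you are on Oracle 12c or later and only ever need the newest 50 rows, FETCH FIRST can replace the derived table entirely. A minimal sketch, assuming DATAID is unique and the row number column itself isn't needed:
SELECT SERVERID, DATAID, CREATETIMESTAMP "Date Time", OBJECTINSTNAME, PROJECTNAME
FROM DATALOG
WHERE PROJECTNAME='ProjectA'
ORDER BY CREATETIMESTAMP DESC
FETCH FIRST 50 ROWS ONLY;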
EDIT:
For MySQL, the only thing you need to do is set the session isolation level to READ UNCOMMITTED before executing your original query (again without the WITH (NOLOCK) hints):
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
-- your query without no lock expressions
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ; -- set back to original isolation level
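Note that ROW_NUMBER() itself requires MySQL 8.0 or later; on such a version the rest of the query runs essentially unchanged. A sketch with only the NOLOCK hints removed:
SELECT A.SERVERID,A.DATAID
,A.CREATETIMESTAMP AS 'Date Time'
,A.OBJECTINSTNAME
,A.PROJECTNAME
,TEMP_IND_1.TEMP_ROW_NUM FROM DATALOG AS A INNER JOIN
(
SELECT DATAID,ROW_NUMBER() OVER(ORDER BY CREATETIMESTAMP DESC) AS TEMP_ROW_NUM FROM DATALOG
WHERE PROJECTNAME='ProjectA'
) AS TEMP_IND_1 ON A.DATAID = TEMP_IND_1.DATAID
WHERE TEMP_IND_1.TEMP_ROW_NUM BETWEEN 1 AND 50;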
I am trying to reduce the number of queries my application uses to build the dashboard, so I am trying to gather all the info I will need in advance into one table. Most of the dashboard can be built in JavaScript from the JSON, which will reduce server load from doing tons of PHP foreach loops that were resulting in excess queries.
With that in mind, I have a query that pulls together user information from 3 other tables and concatenates the results into JSON, grouped by family. I need to update the JSON object any time anything changes in any of the 3 tables, but I'm not sure what the "right" way to do this is.
I could set up a regular job to run an UPDATE statement where the date is newer than the last update, but that would miss new records, and if I do inserts it misses updates. I could drop and rebuild the table, but it takes about 16 seconds to run the query as a whole, so that doesn't seem like the right answer.
Here is my initial query:
SET group_concat_max_len = 100000;
SELECT family_id, REPLACE(REPLACE(REPLACE(CONCAT("[", GROUP_CONCAT(family), "]"), "\\", ""), '"[', '['), ']"', ']') as family_members
FROM (
SELECT family_id,
JSON_OBJECT(
"customer_id", c.id,
"family_id", c.family_id,
"first_name", first_name,
"last_name", last_name,
"balance_0_30", pa.balance_0_30,
"balance_31_60", pa.balance_31_60,
"balance_61_90", pa.balance_61_90,
"balance_over_90", pa.balance_over_90,
"account_balance", pa.account_balance,
"lifetime_value", pa.lifetime_value,
"orders", CONCAT("[", past_orders, "]")
) AS family
FROM
customers AS c
LEFT JOIN accounting AS pa ON c.id = pa.customer_id
LEFT JOIN (
SELECT customer_id,
GROUP_CONCAT(
JSON_OBJECT(
"id", id,
"item", item,
"price", price,
"date_ordered", date_ordered
)
) as past_orders
FROM orders
WHERE date_ordered < NOW()
GROUP BY customer_id
) AS r ON r.customer_id = c.id
where c.user_id = 1
) AS results
GROUP BY family_id
I briefly looked into triggers, but what I was hoping for was something like:
create TRIGGER UPDATE_FROM_ORDERS
AFTER INSERT OR UPDATE
ON orders
(EXECUTE QUERY FROM ABOVE WHERE family_id = orders.family_id)
I was hoping to create something like that for each table, but at first glance it doesn't look like you can run complex queries such as that where we are creating nested JSON.
Am I wrong? Are triggers the right way to do this, or is there a better way?
As a demonstration:
DELIMITER $$
CREATE TRIGGER orders_au
AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
SET group_concat_max_len = 100000
;
UPDATE target_table t
SET t.somecol = ( SELECT expr
FROM ...
WHERE somecol = NEW.family_id
ORDER BY ...
LIMIT 1
)
WHERE t.family_id = NEW.family_id
;
END$$
DELIMITER ;
Notes:
MySQL triggers are row-level triggers; a trigger fires "for each row" that is affected by the triggering statement. MySQL does not support statement-level triggers.
The reference to NEW.family_id is a reference to the value of the family_id column of the row that was just updated, the row that the trigger was fired for.
MySQL prohibits the SQL statements in a trigger from modifying any rows in the table the trigger is defined on (here, orders), but they can modify other tables.
The SQL statements in a trigger body can be arbitrarily complex INSERT/UPDATE/DELETE DML, but a bare SELECT that returns a resultset is not allowed, and DDL statements (most if not all) are disallowed in a MySQL trigger.
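Since MySQL has no combined INSERT OR UPDATE trigger event, the insert case needs a second trigger with the same body. A sketch, reusing the placeholder names (target_table, somecol, expr) from the demonstration above:
DELIMITER $$
CREATE TRIGGER orders_ai
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
SET group_concat_max_len = 100000
;
UPDATE target_table t
SET t.somecol = ( SELECT expr
FROM ...
WHERE somecol = NEW.family_id
ORDER BY ...
LIMIT 1
)
WHERE t.family_id = NEW.family_id
;
END$$
DELIMITER ;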
I have a MySQL table with an 'Order' field, but when a record gets deleted a gap appears.
How can I update my 'Order' field sequentially?
If possible, in one query.
id.........order
1...........1
5...........2
4...........4
3...........6
5...........8
to
id.........order
1...........1
5...........2
4...........3
3...........4
5...........5
I could do this record by record, getting a SELECT ordered by Order and changing the Order field row by row, but to be honest I don't like that approach.
Thanks.
Extra info:
I would also like to be able to change it this way:
id.........order
1...........1
5...........2
4...........3
3...........3.5
5...........4
to
id.........order
1...........1
5...........2
4...........3
3...........4
5...........5
In MySQL you can do this:
update t join
(select t.*, (@rn := @rn + 1) as rn
from t cross join
(select @rn := 0) const
order by t.`order`
) torder
on t.id = torder.id
set `order` = torder.rn;
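On MySQL 8.0 and later, ROW_NUMBER() can replace the user variables; a sketch of the same update:
update t join
(select id, row_number() over (order by `order`) as rn
from t
) torder
on t.id = torder.id
set t.`order` = torder.rn;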
In most databases, you can also do this with a correlated subquery. But this is a problem in MySQL, because it doesn't allow the table being updated to be referenced in a subquery:
update t
set `order` = (select count(*)
from t t2
where t2.`order` < t.`order` or
(t2.`order` = t.`order` and t2.id <= t.id)
);
There is no need to re-number or re-order. The table just gives you all your data. If you need it presented a certain way, that is the job of a query.
You don't even need to change the order value in the query either, just do:
SELECT * FROM MyTable WHERE mycolumn = 'MyCondition' ORDER BY `order`;
(Note that order is a reserved word in MySQL, so it needs the backticks.)
The above answer is excellent but it took me a while to grok it so I offer a slight rewrite which I hope brings clarity to others faster:
update
originalTable
join (select originalTable.ID,
(@newValue := @newValue + 10) as newValue
from originalTable
cross join (select @newValue := 0) newTable
order by originalTable.Sequence)
originalTable_reordered
on originalTable.ID = originalTable_reordered.ID
set originalTable.Sequence = originalTable_reordered.newValue;
Note that originalTable.* is NOT required - only the field used for the final join.
My example assumes the field to be updated is called Sequence (perhaps clearer in intent than order, but mainly it sidesteps the reserved-keyword issue).
What took me a while to get was that "const" in the original answer was not a MySQL keyword. (I'm never a fan of abbreviations for that reason; they can be interpreted many ways, especially at the very times when it is best they not be misinterpreted. Makes for verbose code, I know, but clarity always trumps convenience in my book.)
Not quite sure what the select @newValue := 0 is for, but I think it initializes the variable in a derived table so it can be used later in the query.
The value of this update is of course that it atomically updates all the rows in question, rather than doing a data pull and updating single rows one by one programmatically.
My next question, which should not be difficult to ascertain (but I've learned that SQL can be a tricky beast at the best of times), is whether this can be safely done on a subset of data (where some originalTable.parentID is a set value).
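For what it's worth, here is a sketch of that subset variant; filtering inside the derived table restricts the renumbering to one group (the parentID value 42 is hypothetical):
update
originalTable
join (select originalTable.ID,
(@newValue := @newValue + 10) as newValue
from originalTable
cross join (select @newValue := 0) newTable
where originalTable.parentID = 42
order by originalTable.Sequence)
originalTable_reordered
on originalTable.ID = originalTable_reordered.ID
set originalTable.Sequence = originalTable_reordered.newValue;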
I have a stored procedure that retrieves simple data from a table. For more than a year it was working just fine, but for a couple of days it has been taking more than 30 seconds to select the data, and sometimes it does not show anything on the user interface at all.
If I execute the same stored procedure in SQL Server Management Studio, it takes 2-3 seconds. I tried recompiling the table and the procedure being used and also increased the timeout, but it didn't help, so I need your suggestions.
Here is my stored procedure:
ALTER PROCEDURE [dbo].[sp_Monitoring_ver2]
@AgentID int = NULL
AS
BEGIN
SET NOCOUNT ON;
-- Insert statements for procedure here
select ROW_NUMBER() OVER(order by AgentFullName ASC) as CodeID, res.*, DATEDIFF(mi, stsdate, getdate()) as MinFromLastSignal, DATEDIFF(MI, LastPaymentDateTime, getdate()) as MinFromLastPayment
from
(
SELECT s.AgentID, a.name+' '+a.surname as TerminalFullName, a.loginName,
s.KioskStatus, s.StsDate, s.TotalMoney, s.AmountMoney, s.MoneyInside, s.Version, s.PrinterErrorCode, s.ValidatorErrorCode,
(select top(1) StatusDateTime from Payment where AgentID = s.AgentID order by PaymentID desc) as LastPaymentDateTime,
prt.errtxt as PrinterErrorText, val.errtxt as ValidatorErrorText,
s.IPAddress,
b.AgentID as ParentID, b.[name]+' '+b.surname AS AgentFullName
,(SELECT TOP 1 i.RegDate FROM dbo.InkasaiyaOfTerm i WHERE i.AgentID=s.AgentID order by i.ID DESC) AS LastCollectionDate
,(SELECT TOP 1 i.Kol FROM dbo.InkasaiyaOfTerm i WHERE i.AgentID=s.AgentID order by i.ID DESC) AS LastCollectionQuantity
,(SELECT TOP 1 i.Summa FROM dbo.InkasaiyaOfTerm i WHERE i.AgentID=s.AgentID order by i.ID DESC) AS LastCollectionAmount
FROM StatusTerminal_ver2 s
INNER JOIN ErrorCodeTerminal prt ON s.PrinterErrorCode = prt.ecode
INNER JOIN ErrorCodeTerminal val ON s.ValidatorErrorCode = val.ecode
INNER JOIN Agents a ON s.AgentID=a.AgentID
INNER JOIN Agents b ON a.parentID=b.AgentID
where s.AgentID IN (select AgentID FROM Agents WHERE hrccrt LIKE '%.'+CAST(@AgentID as varchar(10))+'.%' and agentType=2)
and DATEDIFF(DAY, StsDate, GETDATE())<7
) as res
order by AgentFullName ASC
END
What is the best solution for this?
At the beginning of the stored procedure, set:
SET ARITHABORT ON
If that doesn't make any difference, then the cause may be parameter sniffing: SQL Server compiled the query based on the first parameter value you passed in and generated the execution plan from it. That plan may be bad for other parameter values. You can use the OPTIMIZE FOR clause.
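A minimal sketch of where the hint goes in this procedure; only the tail of the SELECT changes (OPTIMIZE FOR UNKNOWN makes the optimizer plan from average statistics instead of the first sniffed value):
...
) as res
order by AgentFullName ASC
option (optimize for (@AgentID unknown));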
In trying to avoid deadlocks and synchronize requests from multiple services, I'm using ROWLOCK, READPAST. My question is where should I put it in a query that includes a CTE, a subquery and an update statement on the CTE? Is there one key spot or should all three places have it (below)? Or maybe there's a better way to write such a query so that I can select ONLY the rows that will be updated.
alter proc dbo.Notification_DequeueJob
@jobs int = null
as
set nocount on;
set xact_abort on;
declare @now datetime
set @now = getdate();
if(@jobs is null or @jobs <= 0) set @jobs = 1
;with q as (
select
*,
dense_rank() over (order by MinDate, Destination) as dr
from
(
select *,
min(CreatedDt) over (partition by Destination) as MinDate
from dbo.NotificationJob with (rowlock, readpast)
) nj
where (nj.QueuedDt is null or (DATEDIFF(MINUTE, nj.QueuedDt, @now) > 5 and nj.CompletedDt is null))
and (nj.RetryDt is null or nj.RetryDt < @now)
and not exists(
select * from dbo.NotificationJob
where Destination = nj.Destination
and nj.QueuedDt is not null and DATEDIFF(MINUTE, nj.QueuedDt, @now) < 6 and nj.CompletedDt is null)
)
update t
set t.QueuedDt = @now,
t.RetryDt = null
output
inserted.NotificationJobId,
inserted.Categories,
inserted.Source,
inserted.Destination,
inserted.Subject,
inserted.Message
from q as t
where t.dr <= @jobs
go
I don't have an answer off-hand, but there are ways you can learn more.
The code you wrote seems reasonable. Examining the actual query plan for the proc might help verify that SQL Server can generate a reasonable query plan, too.
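For example, SET STATISTICS XML returns the actual plan alongside the results when you run the proc from a query window:
set statistics xml on;
exec dbo.Notification_DequeueJob;
set statistics xml off;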
If you don't have an index on NotificationJob.Destination that includes QueuedDt and CompletedDt, the not exists sub-query might acquire shared locks on the entire table. That would be scary for concurrency.
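A sketch of such a covering index (the index name is illustrative):
create index IX_NotificationJob_Destination
on dbo.NotificationJob (Destination)
include (QueuedDt, CompletedDt);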
You can observe how the proc behaves when it acquires locks. One way is to turn on trace flag 1200 temporarily, call your proc, and then turn off the flag. This will generate a lot of information about what locks the proc is acquiring. The amount of info will severely affect performance, so don't use this flag in a production system.
dbcc traceon (1200, -1) -- print detailed information for every lock request. DO NOT DO THIS ON A PRODUCTION SYSTEM!
exec dbo.Notification_DequeueJob
dbcc traceoff (1200, -1) -- turn off the trace flag ASAP
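A lighter-weight alternative is to query sys.dm_tran_locks from a second session while the proc runs; a sketch that summarizes lock requests in the current database:
select request_session_id, resource_type, request_mode, request_status,
count(*) as lock_count
from sys.dm_tran_locks
where resource_database_id = db_id()
group by request_session_id, resource_type, request_mode, request_status;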
I'm trying to write a function to SELECT the least-recently fetched value from a table in my database. I do this by SELECTing a row and then immediately changing the last_used field.
Because this involves a SELECT and UPDATE, I'm trying to do this with locks. The locks are to ensure that concurrent executions of this query won't operate on the same row.
The query runs perfectly fine in phpMyAdmin, but fails in Magento. I get the following error:
SQLSTATE[HY000]: General error
Error occurs here:
#0 /var/www/virtual/magentodev.com/htdocs/lib/Varien/Db/Adapter/Pdo/Mysql.php(249): PDOStatement->fetch(2)
Here is my model's code, including the SQL query:
$write = Mage::getSingleton('core/resource')->getConnection('core_write');
$sql = "LOCK TABLES mytable AS mytable_write WRITE, mytable AS mytable_read READ;
SELECT @val := unique_field_to_grab FROM mytable AS mytable_read ORDER BY last_used ASC LIMIT 1;
UPDATE mytable AS mytable_write SET last_used = unix_timestamp() WHERE unique_field_to_grab = @val LIMIT 1;
UNLOCK TABLES;
SELECT @val AS val;";
$result = $write->raw_fetchrow($sql, 'val');
I've also tried using raw_query and query instead of raw_fetchrow with no luck.
Any thoughts on why this doesn't work? Or is there a better way to accomplish this?
EDIT: I'm starting to think this may be related to the PDO driver, which Magento is definitely using. I think phpMyAdmin is using mysqli, but I can't confirm that.
Probably the function that Magento uses doesn't support multiple SQL statements in a single call. Call each statement separately:
exec("LOCK TABLES mytable AS mytable_write WRITE, mytable AS mytable_read READ");
exec("SELECT #val := unique_field_to_grab FROM mytable AS mytable_read ORDER BY last_used ASC LIMIT 1");
exec("UPDATE mytable AS mytable_write SET last_used = unix_timestamp() WHERE unique_field_to_grab = #val LIMIT 1");
exec("UNLOCK TABLES");
exec("SELECT #val AS val");
Use appropriate functions instead of exec().