The following MySQL statement works fine and returns the row number of each result as "row". What I want to do now is set the column pos to the value of "row" using an UPDATE statement, since I don't want to loop over thousands of records with individual queries.
Any ideas?
SELECT @row := @row + 1 AS row, u.ID, u.pos
FROM user u, (SELECT @row := 0) r
WHERE u.year <= 2010
ORDER BY u.pos ASC LIMIT 0, 10000
There is a risk in using user-defined variables. As the MySQL manual warns:
In a SELECT statement, each select expression is evaluated only when sent to the client. This means that in a HAVING, GROUP BY, or ORDER BY clause, referring to a variable that is assigned a value in the select expression list does not work as expected:
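To illustrate, here is a minimal sketch adapted from the manual's own example (the table t1 and its id column are hypothetical):

```sql
-- b refers to @aa, which is assigned in the select list.
-- When HAVING evaluates b, @aa may still hold the value from a
-- PREVIOUS row rather than the one just assigned for the current
-- row, so the query does not filter the way it appears to.
SELECT (@aa := id) AS a, (@aa + 3) AS b
FROM t1
HAVING b = 5;
```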
A safer method would be:
create table tmp_table
(
pos int(10) unsigned not null auto_increment,
user_id int(10) not null default 0,
primary key (pos)
);
insert into tmp_table
select null, u.ID
from user u
where u.year<=2010
order by YOUR_ORDERING_DECISION
limit 0, 10000;
alter table tmp_table add index (user_id);
update user, tmp_table
set user.pos=tmp_table.pos
where user.id=tmp_table.user_id;
drop table tmp_table;
Related
Is there any way to create a trigger that automatically calculates the sum and updates the rows in database?
For now I have this query, which sums my rows and displays a total.
Select
PreAgg.id,
PreAgg.debit,
@PrevBal := @PrevBal + PreAgg.debit As total
From
(Select
YT.id,
YT.debit
From
test.accounts YT
Order By
YT.id) As PreAgg,
(Select
@PrevBal := 0.00) As SqlVars
This gives me:
id debit total
1 1000 1000
2 2000 3000
My question is: how can this be converted into a trigger that calculates the sum after every insert and inserts it into the total field? Please give complete details and the query. Thanks.
After doing some research I came up with this trigger:
CREATE TRIGGER `update_bal` BEFORE INSERT ON `sp_records` FOR EACH ROW INSERT INTO ledger
SELECT
PreAgg.id,
PreAgg.tot_amnt,
@PrevBal := @PrevBal + PreAgg.tot_amnt as balance
from
( select
YT.id,
YT.tot_amnt
from
sp_records YT
order by
YT.id ) as PreAgg,
( select @PrevBal := 0.00 ) as SqlVars
But it doesn't let me update my table; it says "Column count doesn't match value count at row 1" when I insert something into my sp_records table. It works fine without this trigger, though.
I have two tables: sp_records, from which I want my tot_amnt field, and ledger, into which I want to insert the "balance". Both tables have additional fields beyond the ones I have mentioned.
CREATE TABLE `ledger` (
`id` int(11) NOT NULL,
`date` varchar(15) NOT NULL,
`debit` float NOT NULL,
`credit` float NOT NULL,
`balance` float NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Here's my ledger.
I think you can use a formula (computed) field on your table, where you want to show the calculation. The reason I am suggesting this is that a trigger may hurt your SQL engine's performance.
(I don't have enough points to add comments; that's why I am putting this as an answer.)
Cheers.
SELECT LAST_INSERT_ID() as id FROM table1
Why does this query sometimes return the last inserted id of another table other than table1?
I call it in Node.js (db-mysql plugin) and I can only do queries.
LAST_INSERT_ID() can only tell you the most recently auto-generated ID for the entire database connection, not for each individual table, which is also why the query should read just SELECT LAST_INSERT_ID(), without specifying a table.
As soon as you fire off another INSERT query on that connection, it gets overwritten. If you want the generated ID when you insert to some table, you must run SELECT LAST_INSERT_ID() immediately after doing that (or use some API function which does this for you).
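For example, the usual pattern (table and column names here are hypothetical) is to read the ID on the same connection, immediately after the INSERT:

```sql
-- Hypothetical table with an AUTO_INCREMENT primary key
INSERT INTO orders (customer_id, total) VALUES (42, 9.99);

-- Must run on the SAME connection, before any other INSERT,
-- or the value will be overwritten:
SELECT LAST_INSERT_ID();
```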
If you want the newest ID currently in an arbitrary table, you have to do a SELECT MAX(id) on that table, where id is the name of your ID column. However, this is not necessarily the most recently generated ID, in case that row has been deleted, nor is it necessarily one generated from your connection, in case another connection manages to perform an INSERT between your own INSERT and your selection of the ID.
(For the record, your query actually returns N rows containing the most recently generated ID on that database connection, where N is the number of rows in table1.)
SELECT id FROM tableName ORDER BY id DESC LIMIT 1
I usually select the auto-incremented ID field, order by the field descending and limit results to 1. For example, in a wordpress database I can get the last ID of the wp_options table by doing:
SELECT option_id FROM wp_options ORDER BY option_id DESC LIMIT 1;
Hope that helps.
Edit - It may make sense to lock the table to avoid concurrent inserts, which could result in an incorrect ID being returned.
LOCK TABLES wp_options READ;
SELECT option_id FROM wp_options ORDER BY option_id DESC LIMIT 1;
UNLOCK TABLES;
Try this; it works:
select (auto_increment-1) as lastId
from information_schema.tables
where table_name = 'tableName'
and table_schema = 'dbName'
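One caveat worth noting, assuming you are on MySQL 8.0: the server caches information_schema table statistics, so the AUTO_INCREMENT value returned can be stale unless you disable the cache for your session first. A sketch:

```sql
-- MySQL 8.0 caches these statistics for
-- information_schema_stats_expiry seconds (default 86400).
-- Setting it to 0 for the session forces fresh values.
SET SESSION information_schema_stats_expiry = 0;

SELECT auto_increment - 1 AS lastId
FROM information_schema.tables
WHERE table_name = 'tableName'
  AND table_schema = 'dbName';
```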
Easiest way:
select max(id) from table_name;
I only use auto_increment in MySQL or identity(1,1) in SQL Server if I know I'll never care about the generated id.
select last_insert_id() is the easy way out, but dangerous.
A way to handle correlative IDs is to store them in a utility table, something like:
create table correlatives(
last_correlative_used int not null,
table_identifier varchar(5) not null unique
);
You can also create a stored procedure to generate and return the next ID of a given table:
drop procedure if exists next_correlative;
DELIMITER //
create procedure next_correlative(
in in_table_identifier varchar(5)
)
BEGIN
declare next_correlative int default 1;
select last_correlative_used+1 into next_correlative from correlatives where table_identifier = in_table_identifier;
update correlatives set last_correlative_used = next_correlative where table_identifier = in_table_identifier;
select next_correlative from dual;
END //
DELIMITER ;
To use it
call next_correlative('SALES');
This allows you to reserve ids before inserting a record. Sometimes you want to display the next id in a form before completing the insertion and helps to isolate it from other calls.
Here's a test script to mess around with:
create database testids;
use testids;
create table correlatives(
last_correlative_used int not null,
table_identifier varchar(5) not null unique
);
insert into correlatives values(1, 'SALES');
drop procedure if exists next_correlative;
DELIMITER //
create procedure next_correlative(
in in_table_identifier varchar(5)
)
BEGIN
declare next_correlative int default 1;
select last_correlative_used+1 into next_correlative from correlatives where table_identifier = in_table_identifier;
update correlatives set last_correlative_used = next_correlative where table_identifier = in_table_identifier;
select next_correlative from dual;
END //
DELIMITER ;
call next_correlative('SALES');
If you want to use these workarounds:
SELECT id FROM tableName ORDER BY id DESC LIMIT 1
SELECT MAX(id) FROM tableName
it's recommended to add a WHERE clause restricting the result to the rows you just inserted. Without this, you are going to have consistency issues under concurrent inserts.
In my table, inv_id is auto-increment.
For my purpose, this worked:
select `inv_id` from `tbl_invoice` ORDER BY `inv_id` DESC LIMIT 1;
The base query works as intended, but when I try to sum the first column, it's supposed to be 5 but instead I get 4. Why?
Base query:
SET @last_task = 0;
SELECT
IF(@last_task = RobotShortestPath, 0, 1) AS new_task,
@last_task := RobotShortestPath
FROM rob_log
ORDER BY rog_log_id;
1 1456
0 1456
0 1456
1 1234
0 1234
1 1456
1 2556
1 1456
Sum query:
SET @last_task = 0;
SELECT SUM(new_task) AS tasks_performed
FROM (
SELECT
IF(@last_task = RobotShortestPath, 0, 1) AS new_task,
@last_task := RobotShortestPath
FROM rob_log
ORDER BY rog_log_id
) AS tmp
4
Table structure:
CREATE TABLE rob_log (
rog_log_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
# RobotPosX FLOAT NOT NULL,
# RobotPosY FLOAT NOT NULL,
# RobotPosDir TINYINT UNSIGNED NOT NULL,
RobotShortestPath MEDIUMINT UNSIGNED NOT NULL,
PRIMARY KEY(rog_log_id),
KEY (rog_log_id, RobotShortestPath)
);
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 1234;
INSERT INTO rob_log(RobotShortestPath) SELECT 1234;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 2556;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
testing it at sqlfiddle: http://sqlfiddle.com/#!2/e80f5/3
as an answer for Counting changes in timeline with MySQL, but I got really confused.
Here's the reason (as discussed on Twitter):
The variable #last_task was defined in a separate query "batch". I break up the queries on SQL Fiddle into individual batches, executed separately. I do this so you can see the output from each batch as a distinct result set below. In your Fiddle, you can see that there are two sets of output: http://sqlfiddle.com/#!2/e80f5/3/0 and http://sqlfiddle.com/#!2/e80f5/3/1. These map to the two statements you are running (the set and the select). The problem is, your set statement defines a variable that only exists in the first batch; when the select statement runs, it is a separate batch and your variable isn't defined within that context.
To correct this problem, all you have to do is define a different query terminator. Note the dropdown box/button under both the schema and the query panels ( [ ; ] ) - click on that, and you can choose something other than semicolon (the default). Then your two statements will be included together as part of the same batch, and you'll get the result you want. For example:
http://sqlfiddle.com/#!2/e80f5/9
It's probably a bug in an older version of MySQL.
I have tried it on MySQL 5.5 and it works perfectly.
I have a large table of data with a new field added called uniq_id what I am looking for is a query I can run that will update and increment this field for each row, without having to write a script to do this.
Any ideas?
Something like this (hard to do better without your table structure):
SET @rank = 0;
UPDATE <your table> JOIN (SELECT @rank := @rank + 1 AS rank, <your pk> FROM <your table> ORDER BY rank DESC)
AS ranked USING(<your pk>) SET <your table>.uniq_id = ranked.rank;
or easier
SET @rank = 0;
UPDATE <your table> SET uniq_id = @rank := (@rank + 1) ORDER BY <anything> DESC;
What other columns does the table have? If you define the new column with AUTO_INCREMENT MySQL will fill it with sequential values starting from 1. There's a catch though: the column has to be a key, or at least a part of the key (see the documentation for details). If you don't have a primary key in the table already, you can simply do this:
alter table MyTable add uniq_id int auto_increment, add primary key (uniq_id);
If you can't change the keys in the table and just want to fill the values of the new column as a one-off thing, you can use this update statement:
update MyTable set uniq_id = (@count := coalesce(@count + 1, 1));
(coalesce returns its first non-null argument; it effectively establishes the value of the column for the first row.)
I have one MySQL table:
CREATE TABLE IF NOT EXISTS `test` (
`Id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`SenderId` int(10) unsigned NOT NULL,
`ReceiverId` int(10) unsigned NOT NULL,
`DateSent` datetime NOT NULL,
`Notified` tinyint(1) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`Id`),
KEY `ReceiverId_SenderId` (`ReceiverId`,`SenderId`),
KEY `SenderId` (`SenderId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
The table is populated with 10,000 random rows for testing, using the following procedure:
DELIMITER //
CREATE DEFINER=`root`@`localhost` PROCEDURE `FillTest`(IN `cnt` INT)
BEGIN
DECLARE i INT DEFAULT 1;
DECLARE intSenderId INT;
DECLARE intReceiverId INT;
DECLARE dtDateSent DATE;
DECLARE blnNotified INT;
WHILE (i<=cnt) DO
SET intSenderId = FLOOR(1 + (RAND() * 50));
SET intReceiverId = FLOOR(51 + (RAND() * 50));
SET dtDateSent = str_to_date(concat(floor(1 + rand() * (12-1)),'-',floor(1 + rand() * (28 -1)),'-','2008'),'%m-%d-%Y');
SET blnNotified = FLOOR(1 + (RAND() * 2))-1;
INSERT INTO test (SenderId, ReceiverId, DateSent, Notified)
VALUES(intSenderId,intReceiverId,dtDateSent, blnNotified);
SET i=i+1;
END WHILE;
END//
DELIMITER ;
CALL `FillTest`(10000);
The problem:
I need to write a query that groups by SenderId, ReceiverId and returns the highest Id of each group, with the result limited to the first 100 rows ordered by Id in ascending order.
I played with GROUP BY, ORDER BY and MAX(Id), but the query was too slow, so I came up with this query:
SELECT SQL_NO_CACHE t1.*
FROM test t1
LEFT JOIN test t2 ON (t1.ReceiverId = t2.ReceiverId AND t1.SenderId = t2.SenderId AND t1.Id < t2.Id)
WHERE t2.Id IS NULL
ORDER BY t1.Id ASC
LIMIT 100;
The above query returns the correct data, but it becomes too slow when the test table has more than 150,000 rows. With 150,000 rows the above query needs 7 seconds to complete. I expect the test table to have between 500,000 and 1M rows, and the query needs to return the correct data in less than 3 seconds. If it's not possible to fetch the correct data in less than 3 seconds, then I need it to fetch the data using the fastest query possible.
So, how can the above query be optimized so that it runs faster?
Reasons why this query may be slow:
It's a lot of data, and lots of it may be returned: the query returns the last record for each SenderId/ReceiverId combination.
The distribution of the data (many Sender/Receiver combinations, or relatively few of them, but with multiple 'versions').
The whole result set must be sorted by MySQL, because you need the first 100 records, sorted by Id.
These make it hard to optimize this query without restructuring the data. A few suggestions to try:
- You could try using NOT EXISTS, although I doubt if it would help.
SELECT SQL_NO_CACHE t1.*
FROM test t1
WHERE NOT EXISTS
(SELECT 'x'
FROM test t2
WHERE t1.ReceiverId = t2.ReceiverId AND t1.SenderId = t2.SenderId AND t1.Id < t2.Id)
ORDER BY t1.Id ASC
LIMIT 100;
- You could try using proper indexes on ReceiverId, SenderId and Id. Experiment with creating a combined index on the three columns. Try two versions, one with Id being the first column, and one with Id being the last.
With slight database modifications:
- You could save a combination of SenderId/ReceiverId in a separate table with a LastId pointing to the record you want.
- You could save a 'PreviousId' with each record, keeping it NULL for the last record per Sender/Receiver. You only need to query the records where previousId is null.
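The first of these suggestions might be sketched like this (the latest_pair table and its column names are hypothetical, and it must be kept current by your insert logic or a trigger):

```sql
-- One row per Sender/Receiver combination,
-- pointing at the newest test.Id for that pair.
CREATE TABLE latest_pair (
  SenderId   INT UNSIGNED NOT NULL,
  ReceiverId INT UNSIGNED NOT NULL,
  LastId     INT UNSIGNED NOT NULL,
  PRIMARY KEY (SenderId, ReceiverId)
);

-- Keep it current on every insert into test
-- (example values shown):
INSERT INTO latest_pair (SenderId, ReceiverId, LastId)
VALUES (1, 51, 12345)
ON DUPLICATE KEY UPDATE LastId = VALUES(LastId);

-- The original query then becomes a cheap join:
SELECT t.*
FROM latest_pair lp
JOIN test t ON t.Id = lp.LastId
ORDER BY t.Id ASC
LIMIT 100;
```

The trade-off is a small write amplification on every insert in exchange for reading only one row per pair instead of scanning the whole history.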