If I have table structure as so:
CREATE TABLE a (
aid INT AUTO_INCREMENT,
acol1 INT,
acol2 INT,
PRIMARY KEY(aid)
);
CREATE TABLE b (
bid INT AUTO_INCREMENT,
bcol INT,
PRIMARY KEY(bid)
);
and run the statement:
INSERT INTO a SET acol1 = (SELECT MAX(acol1) + 1 AS newMax FROM a WHERE acol2 = ?)
Is there any way for me to retrieve the value of newMax after the query is executed? I am looking for something similar to LAST_INSERT_ID() (or PHP's mysql_insert_id()), but for temporary values computed in the query.
Obviously, I am trying not to query the database again if possible.
EDIT:
Actual situation:
CREATE TABLE `group` (
group_id INT AUTO_INCREMENT,
PRIMARY KEY(group_id)
) ENGINE = MyISAM;
CREATE TABLE item (
group_refid INT, -- references group.group_id
group_pos INT, -- represents this item's position in its group
text VARCHAR(4096), -- data
PRIMARY KEY(group_refid, group_pos)
) ENGINE = MyISAM;
So the issue is that when I add a new item to a group, I need to set its
group_pos = MAX(group_pos) + 1 WHERE group_refid = ?
which would require a query with something like:
INSERT INTO item (group_refid, group_pos) SET group_refid = 1, group_pos = (SELECT MAX(group_pos) + 1 FROM item WHERE group_refid = 1);
As you know, this query does not work. There is the added complexity that there may not be an item entry yet for a particular group_id.
I am trying to get this all into one atomic statement to prevent race conditions.
INSERT INTO item (group_refid,group_pos)
SELECT 1, (
SELECT IFNULL(MAX(group_pos),0) + 1
FROM item
WHERE group_refid=1
);
However, if we're talking about MyISAM tables specifically (not another engine), this would also work:
mysql> CREATE TABLE items (group_refid INT, group_pos INT AUTO_INCREMENT, PRIMARY KEY(group_refid,group_pos)) ENGINE=MyISAM;
Query OK, 0 rows affected (0.12 sec)
mysql> INSERT INTO items (group_refid) VALUES (1),(2),(1),(1),(2),(4),(2),(1);
Query OK, 8 rows affected (0.02 sec)
Records: 8 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM items ORDER BY group_refid, group_pos;
+-------------+-----------+
| group_refid | group_pos |
+-------------+-----------+
|           1 |         1 |
|           1 |         2 |
|           1 |         3 |
|           1 |         4 |
|           2 |         1 |
|           2 |         2 |
|           2 |         3 |
|           4 |         1 |
+-------------+-----------+
However, that AUTO_INCREMENT on a second column in the PK is not portable to another database engine.
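If the table can be InnoDB rather than MyISAM (an assumption; the question specifies MyISAM, where the whole statement is already protected by a table lock), a more portable pattern is to compute the next position inside a transaction using a locking read. A minimal sketch:
START TRANSACTION;
-- Locking read: on InnoDB this blocks concurrent writers for the same group
-- until COMMIT, so two sessions cannot compute the same position.
SELECT IFNULL(MAX(group_pos), 0) + 1 INTO @next_pos
  FROM item
 WHERE group_refid = 1
   FOR UPDATE;
INSERT INTO item (group_refid, group_pos, text)
VALUES (1, @next_pos, 'new item');
COMMIT;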
You can't. An INSERT query is for inserting, not selecting.
You would have to run another query, like this:
SELECT MAX(acol1) + 1 AS newMax FROM a WHERE acol2 = ?
For more, read this.
I think you can do:
INSERT INTO b
SET bcol = (SELECT @acol := MAX(acol1) + 1 AS newMax FROM a WHERE acol2 = ?);
Then you can use the variable @acol to get the value you want.
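For example, to read the captured value back later on the same connection (a trivial illustration, not part of the original answer):
SELECT @acol AS newMax;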
EDIT:
Is this what you want?
INSERT INTO item (group_refid, group_pos)
SELECT 1, MAX(group_pos) + 1
FROM item
WHERE group_refid = 1;
Not directly in the statement, no. You'll need a separate statement to retrieve values.
But, you could "capture" the value from the SELECT into a user-defined variable, and then retrieve that with a SELECT (in the same database session), if you needed to "know" the value returned from the SELECT.
For example:
INSERT INTO b (bcol)
SELECT @bcol := (MAX(a.acol1) + 1) AS newMax
FROM a WHERE a.acol2 = ?;
SELECT @bcol + 0 AS new_bcol;
NOTE:
The user-defined variable assigned in the SELECT is subject to modification elsewhere in the session; for example, it could be overwritten by the execution of a trigger defined on the target table of the INSERT.
As an edge case (not that anyone would do this), it's also possible there is a BEFORE INSERT trigger that modifies the value of bcol before it gets inserted. So, if you need to "know" the value that was actually inserted, that would be available in an AFTER INSERT trigger, where you could capture it in a user-defined variable.
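A minimal sketch of such a trigger, assuming the b table from the question (the trigger and variable names are made up for illustration):
DELIMITER $$
CREATE TRIGGER b_after_insert
AFTER INSERT ON b
FOR EACH ROW
BEGIN
  -- Capture the value that was actually written, after any BEFORE INSERT
  -- trigger has had its chance to modify it.
  SET @last_bcol = NEW.bcol;
END$$
DELIMITER ;
Then, in the same session, SELECT @last_bcol; returns the inserted value.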
Running a second, separate query against the a table is subject to a race condition: there is a small window of opportunity for another session to insert, update, or delete a row in table a, so the second query could return a different value than the first one... it might not be the value that was retrieved the first time. Unless, of course, you are within the context of an InnoDB transaction at the REPEATABLE READ isolation level, or you've implemented some concurrency-killing locking strategy.
How can I sort a MySQL data set based on a value that is stored at a random position in each row? (I.e., the 'field' I want to sort on looks like [x-yyyyyyy], where 'x' is the initial number I am looking for and 'yyyyyyy' is the value I want to sort by.) (EDIT: see the very end for the MySQL version.)
I.e., this is my data in a MySQL field (let's say it's called 'items'):
row 1: [1-283482][3-4848484][6-484868]
row 2: [6-484444][1-1111][5-4338484]
row 3: [7-484444][1-9999][3-4338484]
I want to "sort" any field that starts with a "[1-", and then sort the 2nd half numerically?
So, for example, if I was sorting ascending, it would give me the results:
row 2: [6-484444][1-1111][5-4338484]
row 3: [7-484444][1-9999][3-4338484]
row 1: [1-283482][3-4848484][6-484868]
(because, after removing the '[1-', the order is:
"1111"
"9999"
"283482"
in terms of numerical value.)
and of course descending would be:
row 1: [1-283482][3-4848484][6-484868]
row 3: [7-484444][1-9999][3-4338484]
row 2: [6-484444][1-1111][5-4338484]
Thanks very much!
In other words (from a MYSQL perspective), the data looks like this:
CREATE TABLE `testTable` (
`autoID` int(11) NOT NULL,
`item` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `testTable` (`autoID`, `item`) VALUES
(1, '[1-283482][3-4848484][6-484868]'),
(2, '[6-484444][1-1111][5-4338484]'),
(3, '[7-484444][1-9999][3-4338484]');
ALTER TABLE `testTable`
ADD PRIMARY KEY (`autoID`);
And I'd like to be able to do something like:
Select `item` from `testTable` order by '[1-*****]' asc
If all the rows contain the substring '[1-' in the column item, then this should do it:
select * from testTable
order by substring(item, locate('[1-', item) + 3) + 0
See the demo.
Results:
| autoID | item                            |
| ------ | ------------------------------- |
| 2      | [6-484444][1-1111][5-4338484]   |
| 3      | [7-484444][1-9999][3-4338484]   |
| 1      | [1-283482][3-4848484][6-484868] |
If there are also other rows that do not contain '[1-' and you want these rows at the end:
select * from testTable
order by item not like '%[1-%',
substring(item, locate('[1-', item) + 3) + 0
You can use MySQL's string functions to get the number on the right.
Example (using a literal value to show the extraction):
SELECT CONVERT(SUBSTRING_INDEX(REPLACE(SUBSTRING_INDEX('[1-283482][3-4848484][6-484868]','][',1),'[',''),'-',-1),SIGNED) AS num;
The other option is with SUBSTRING_INDEX on the table itself:
SELECT item, CONVERT(SUBSTRING_INDEX(REPLACE(SUBSTRING_INDEX(item,'][',1),'[',''),'-',-1),SIGNED) AS num FROM testTable
ORDER BY num DESC;
I have date-wise tables, with the date as part of the table name,
e.g. data_02272015, data_02282015 (the name format is data_<mmddyyyy>). All the tables have the same schema.
Now, the tables have a datetime column TransactionDate. I need to get all the records by querying against this column. Each table stores 24 hours of data for the corresponding day. So, if I query with date 2015-02-28 xx:xx:xx, I can just query the table data_02282015. But if I want to query with date 2015-02-27 xx:xx:xx, I have to consider both tables, data_02282015 and data_02272015.
I can get the union like this:
SELECT * FROM data_02272015
UNION
SELECT * FROM data_02282015;
But the problem is that I also need to check whether each of the tables exists; if data_02282015 does not exist, the query fails. Is there a way for the query to return records from only the table(s) that exist?
So:
If both tables exist, it should return the union of records from both tables.
If one of the tables does not exist, it should return the records from the existing table only.
If neither table exists, an empty result set.
I tried things like:
SELECT IF( EXISTS(SELECT 1 FROM data_02282015), (SELECT * FROM data_02282015), 0)
...
But it didn't work.
If I understand the question correctly, you need a FULL JOIN:
CREATE TABLE two
( val INTEGER NOT NULL PRIMARY KEY
, txt varchar
);
INSERT INTO two(val,txt) VALUES
(0,'zero'),(2,'two'),(4,'four'),(6,'six'),(8,'eight'),(10,'ten');
CREATE TABLE three
( val INTEGER NOT NULL PRIMARY KEY
, txt varchar
);
INSERT INTO three(val,txt) VALUES
(0,'zero'),(3,'three'),(6,'six'),(9,'nine');
SELECT *
FROM two t2
FULL JOIN three t3 ON t2.val = t3.val
ORDER BY COALESCE(t2.val , t3.val)
;
Result:
CREATE TABLE
INSERT 0 6
CREATE TABLE
INSERT 0 4
 val |  txt  | val |  txt
-----+-------+-----+-------
   0 | zero  |   0 | zero
   2 | two   |     |
     |       |   3 | three
   4 | four  |     |
   6 | six   |   6 | six
   8 | eight |     |
     |       |   9 | nine
  10 | ten   |     |
(8 rows)
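Note that MySQL itself does not support FULL JOIN. If you need the same result there, a common emulation (a sketch using the two/three tables above, assuming they are created with explicit varchar lengths in MySQL; this is not part of the original answer) is:
-- All rows from `two` via the LEFT JOIN, plus the rows that exist only in `three`.
SELECT * FROM (
    SELECT t2.val AS val2, t2.txt AS txt2, t3.val AS val3, t3.txt AS txt3
      FROM two t2
      LEFT JOIN three t3 ON t2.val = t3.val
    UNION ALL
    SELECT t2.val, t2.txt, t3.val, t3.txt
      FROM two t2
      RIGHT JOIN three t3 ON t2.val = t3.val
     WHERE t2.val IS NULL
) u
ORDER BY COALESCE(u.val2, u.val3);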
Try this script. As a complete solution, you could embed the following in a stored procedure, replacing the id column with all the columns you need (note that this is SQL Server / T-SQL syntax).
-- temp table that will collect results
create table #tempResults (id int)

-- Your min and max dates to iterate between
declare @dateParamStart datetime
set @dateParamStart = '2015-02-25'
declare @dateParamEnd datetime
set @dateParamEnd = '2015-02-28'

-- table name built from the current date
declare @currTblName nchar(13)

while @dateParamStart < @dateParamEnd
begin
    -- set table name with current date (data_mmddyyyy)
    SELECT @currTblName = 'data_' + REPLACE(CONVERT(VARCHAR(10), @dateParamStart, 101), '/', '')
    SELECT @currTblName -- show current table

    -- if the table exists, run dynamic SQL to insert into the temp table
    if OBJECT_ID(@currTblName, N'U') IS NOT NULL
    begin
        print ('table ' + @currTblName + ' exists')
        execute ('insert into #tempResults select id from ' + @currTblName)
    end

    -- set next date
    set @dateParamStart = dateadd(day, 1, @dateParamStart)
end

-- get your results.
-- Use distinct to act as a union if rows can be the same between tables.
select distinct * from #tempResults
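For MySQL (the platform in the question), a rough equivalent is to discover which of the date tables exist via INFORMATION_SCHEMA and build the UNION with a prepared statement. A sketch, assuming the table names from the question:
-- Build a UNION ALL over only those date tables that actually exist,
-- then run it via a prepared statement.
SET @sql = NULL;
SELECT GROUP_CONCAT(CONCAT('SELECT * FROM `', table_name, '`')
                    SEPARATOR ' UNION ALL ')
  INTO @sql
  FROM information_schema.tables
 WHERE table_schema = DATABASE()
   AND table_name IN ('data_02272015', 'data_02282015');
-- If neither table exists, fall back to an empty result set
SET @sql = IFNULL(@sql, 'SELECT NULL FROM DUAL WHERE FALSE');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;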
Consider two tables like this:
TABLE: current
---------------------
| id | dept | value |
|----|------|-------|
|  4 | A    |    20 |
|  5 | B    |    15 |
|  6 | A    |    25 |
---------------------
TABLE: history
---------------------
| id | dept | value |
|----|------|-------|
|  1 | A    |    10 |
|  2 | C    |    10 |
|  3 | B    |    20 |
---------------------
These are just simple examples... in the actual system both tables have considerably more columns and considerably more rows (10k+ rows in current and 1M+ rows in history).
A client application is continuously (several times a second) inserting new rows into the current table, and 'moving' older existing rows from current to history (delete/insert within a single transaction).
Without blocking the client in this activity we need to take a consistent sum of values per dept across the two tables.
With transaction isolation level set to REPEATABLE READ we could just do:
SELECT dept, sum(value) FROM current GROUP BY dept;
followed by
SELECT dept, sum(value) FROM history GROUP BY dept;
and add the two sets of results together. BUT each query would block inserts on its respective table.
Changing the isolation level to READ COMMITTED and doing the same two SQLs would avoid blocking inserts, but now there is a risk of entries being double counted if moved from current to history while we are querying (since each SELECT creates its own snapshot).
Here's the question then.... what happens with isolation level READ COMMITTED if I do a UNION:
SELECT dept, sum(value) FROM current GROUP BY dept
UNION ALL
SELECT dept, sum(value) FROM history GROUP BY dept;
Will MySQL generate a consistent snapshot of both tables at the same time (thereby removing the risk of double counting) or will it still take snapshot one table first, then some time later take snapshot of the second?
I have not yet found any conclusive documentation to answer my question, so I went about trying to prove it instead. Although not proof in the scientific sense, my findings suggest a consistent snapshot is created for all tables in a UNION query.
Here's what I did.
Create the tables
DROP TABLE IF EXISTS `current`;
CREATE TABLE IF NOT EXISTS `current` (
`id` BIGINT NOT NULL COMMENT 'Unique numerical ID.',
`dept` BIGINT NOT NULL COMMENT 'Department',
`value` BIGINT NOT NULL COMMENT 'Value',
PRIMARY KEY (`id`));
DROP TABLE IF EXISTS `history`;
CREATE TABLE IF NOT EXISTS `history` (
`id` BIGINT NOT NULL COMMENT 'Unique numerical ID.',
`dept` BIGINT NOT NULL COMMENT 'Department',
`value` BIGINT NOT NULL COMMENT 'Value',
PRIMARY KEY (`id`));
Create a procedure that sets up 10 entries in the current table (id = 0 .. 9), then sits in a tight loop inserting one new row into current and 'moving' the oldest row from current to history. Each iteration is performed in a transaction; as a result, the current table remains at a steady 10 rows while the history table grows quickly. At any point in time, min(current.id) = max(history.id) + 1.
DROP PROCEDURE IF EXISTS `idLoop`;
DELIMITER $$
CREATE PROCEDURE `idLoop`()
BEGIN
DECLARE n bigint;
-- Populate initial 10 rows in current table if not already there
SELECT IFNULL(MAX(id), -1) + 1 INTO n from current;
START TRANSACTION;
WHILE n < 10 DO
INSERT INTO current VALUES (n, n % 10, n % 1000);
SET n = n + 1;
END WHILE;
COMMIT;
-- In tight loop, insert new row and 'move' oldest current row to history
WHILE n < 10000000 DO
START TRANSACTION;
-- Insert new row to current
INSERT INTO current values(n, n % 10, n % 1000);
-- Move oldest row from current to history
INSERT INTO history SELECT * FROM current WHERE id = (n - 10);
DELETE FROM current where id = (n - 10);
COMMIT;
SET n = n + 1;
END WHILE;
END$$
DELIMITER ;
Start this procedure running (this call won't return for some time - which is intentional)
call idLoop();
In another session on the same database we can now try out a variation on the UNION ALL query in my original posting.
I have modified it to (a) slow down execution, and (b) return a simple result set (two rows) that indicates whether any entries 'moved' whilst the query was running have been missed or double counted.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT 'HST' AS src, MAX(id) AS idx, COUNT(*) AS cnt, SUM(value) FROM history WHERE dept IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
UNION ALL
SELECT 'CRT' AS src, MIN(id) AS idx, COUNT(*) AS cnt, SUM(value) FROM current WHERE dept IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
The sum(value) and where dept in (...) are just there to add work to the query and slow it down.
The indication of a positive outcome is if the two idx values are adjacent, like this:
+-----+--------+--------+------------+
| src | idx    | cnt    | SUM(value) |
+-----+--------+--------+------------+
| HST | 625874 | 625875 |  312569875 |
| CRT | 625875 |     10 |       8795 |
+-----+--------+--------+------------+
2 rows in set (1.43 sec)
I'd still be happy to hear any authoritative information on this.
I need to update rows by their row number (not the auto-increment ID, because some of the rows may have been removed). How can I do this?
I mean something like this:
UPDATE cars SET idx = value WHERE row_number = i
I would do this in a for loop, where i is the loop counter, so that I could update every row this way.
Sorry for my bad english, and thanks!
Here's a pure MySQL solution:
/*test data*/
create table foo (id int auto_increment primary key, a int);
insert into foo (a) values (10), (11), (12);
/*update statement*/
update foo
set a = 5
where id = (
select id from (
select id, @rownum := @rownum + 1 as rownumber
from foo, (select @rownum := 0) vars order by id
) sq where rownumber = 2
);
Results in:
| ID | A  |
|----|----|
|  1 | 10 |
|  2 |  5 |
|  3 | 12 |
Feel free to ask if you have any questions about this.
Also, note the ORDER BY id in there. It's important, because in a database there is no inherent first or last row; without an ORDER BY clause, you could theoretically get a different result each time.
You can also see it working live here in an sqlfiddle.
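On MySQL 8.0 or later (a version assumption; the answer above does not require it), the same idea can be written with the ROW_NUMBER() window function instead of a user variable:
-- Update whatever row comes 2nd when ordered by id.
UPDATE foo
SET a = 5
WHERE id = (
    SELECT id FROM (
        SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rownumber
        FROM foo
    ) sq
    WHERE rownumber = 2
);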
I don't know how to do this in MySQL alone, but you can do it in PHP:
$row_number = ?; // the row number of the MySQL row whose id you want to change
$id = ?;         // the new id

$conn = mysql_connect(...); // do it yourself

$query = "select * from tablename"; // the query
$result = mysql_query($query, $conn);
$count = 0;
while ($row = mysql_fetch_array($result)) // fetch each row one by one and put its data in the array $row
{
    $count++; // increment count, i.e. the number of rows seen so far
    if ($count == $row_number) // the row where you want to edit the id
    {
        $query1 = "update tablename set id='" . $id . "' where id=" . $row["id"]; // new query on that particular row
        $result1 = mysql_query($query1, $conn);
    }
}
This will work; just modify this code according to your use case.
My problem is: I have a table with an auto_increment column. When I insert some values, all is right.
Insert first row : ID 1
Insert second row : ID 2
Now I want to insert a row at ID 10.
My problem is that, after this, new rows are only inserted after ID 10 (which is the normal behaviour).
But I want the database to first fill up IDs 3-9 before doing that.
Any suggestions?
EDIT:
To clarify: this is for a URL shortener I want to build for myself.
I convert the ID to a word (a-zA-Z0-9) for the short link, and for saving in the database I convert it back to a number, which is the ID in the table.
The problem is now:
I shorten the first link (without a name) -> the ID is 1, and the automatically generated name is 1 converted to a-zA-Z0-9, which is 'a'.
Next, the same happens -> the ID is 2 and the name is 'b', which is 2 converted.
Now it gets interesting: somebody wants to name their link 'test' -> the ID is 4597691, which is 'test' converted.
Now, if somebody adds another link with no name -> the ID is 4597692, which would be 'tesu', because the number is converted.
I want new rows to be automatically inserted at the last gap that was left (here, 3).
You could have another integer column for URL IDs.
Your process then might look like this:
If a default name is generated for a link, then you simply insert a new row, fill the URL ID column with the auto-increment value, then convert the result to the corresponding name.
If a custom name is specified for a URL, then, after inserting a row, the URL ID column would be filled with the number obtained from converting the chosen name to an integer.
And so on. When looking up for integer IDs, you would then use the URL ID column, not the table auto-increment column.
If I'm missing something, please let me know.
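A rough sketch of that schema (the table and column names here are assumptions for illustration, not from the original answer):
-- Keep the internal surrogate key separate from the URL ID that the
-- short name encodes/decodes to.
CREATE TABLE short_url (
  id     INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- internal surrogate key
  url_id BIGINT UNSIGNED NOT NULL,              -- number the short name decodes to
  target VARCHAR(2048) NOT NULL,                -- the long URL being shortened
  PRIMARY KEY (id),
  UNIQUE KEY uq_url_id (url_id)                 -- lookups go through this column
) ENGINE=InnoDB;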
You could do 6 dummy inserts and delete/update them later as you need. Auto-increment, by design, is meant to limit the application's or user's control over the number, to ensure a unique value for every record entered into the table.
ALTER TABLE MY_TABLE AUTO_INCREMENT = 3;
You would have to find the first unused id, store it in a user variable, and use it as the id for the insert.
SELECT @id := t1.id + 1
FROM sometable t1 LEFT JOIN sometable t2
ON t2.id = t1.id + 1 WHERE t2.id IS NULL
ORDER BY t1.id LIMIT 1;
INSERT INTO sometable(id, col1, col2, ... ) VALUES(@id, 'aaa', 'bbb', ... );
You will have to run both queries for every insert while you still have gaps; it's up to you to decide whether it is worth doing.
Not 100% sure what you're trying to achieve, but something like this might work:
drop table if exists foo;
create table foo
(
id int unsigned not null auto_increment primary key,
row_id tinyint unsigned unique not null default 0
)
engine=innodb;
insert into foo (row_id) values (1),(2),(10),(3),(7),(5);
select * from foo order by row_id;
+----+--------+
| id | row_id |
+----+--------+
|  1 |      1 |
|  2 |      2 |
|  4 |      3 |
|  6 |      5 |
|  5 |      7 |
|  3 |     10 |
+----+--------+
6 rows in set (0.00 sec)