SQL Server: Set Serial_No Auto Increment and Decrement - mysql

I have a field Serial_No which just counts the serial number of each student. I used
identity(1,1)
But the problem is that if I delete a row or rows, it does not decrease the Serial_No, like:
Serial_No  Name      Marks
1          Ehsan     50
3          Nouman    40
9          ejaz      56
10         ali       30
11         saleem    78
15         abdullah  90
...        ...       ...
...        ...       ...
I need Serial_No to auto increment, but also to be updated after I delete and insert rows, so it reads
like 1,2,3,4,5,6,7,8,9,10,11,..........

This is a SQL Server answer, since both are tagged in the post. I would say leave your identity column alone and let it perform the way it is designed. Instead, create a view that selects all your columns and also includes a window function (ROW_NUMBER) to generate the sequential list you are looking for.
CREATE VIEW view_YourTable AS
SELECT *,
       ROW_NUMBER() OVER (ORDER BY serial_no) AS sequential_id
FROM your_table;

SELECT *
FROM view_YourTable;

Related

SQL join each row in a table with one row from another table

The Problem
I have a table window with start and end timestamps. I have another table activity that has a timestamp. I would like to create a query that:
For each row in activity, joins it with a single row from window where the timestamp falls between start and end, choosing the oldest such window.
Window Table
Start  End  ISBN
0      10   "ABC"
5      15   "ABC"
20     30   "ABC"
25     35   "ABC"
Activity Table
Timestamp  ISBN
7.5        "ABC"
27.5       "ABC"
Desired Result
Start  End  ISBN   Timestamp
0      10   "ABC"  7.5
20     30   "ABC"  27.5
The Attempt
My attempt at solving this so far has ended with the following query:
SELECT
    *
FROM
    test.activity AS a
    JOIN test.`window` AS w ON w.isbn = (
        SELECT
            w1.isbn
        FROM
            test.`window` AS w1
        WHERE a.`timestamp` BETWEEN w1.`start` AND w1.`end`
        ORDER BY w1.`start`
        LIMIT 1
    )
The output of this query is 8 rows.
When there is guaranteed to be a single oldest window (i.e. no two Start times are the same for any ISBN), you can pick that window with row_number():
with activity_window as (
    select
        a.`Timestamp`,
        a.`ISBN`,
        w.`Start`,
        w.`End`,
        row_number() over (partition by a.`ISBN`, a.`Timestamp` order by w.`Start`) rn
    from
        `Activity` a
        inner join `Window` w on a.`ISBN` = w.`ISBN` and a.`Timestamp` between w.`Start` and w.`End`
)
select `Start`, `End`, `ISBN`, `Timestamp` from activity_window where rn = 1;
Result:
Start  End  ISBN  Timestamp
0      10   ABC   7.5
20     30   ABC   27.5
(see complete example at DB<>Fiddle)
CTEs are available from MySQL 8.0; use subqueries if you are still on MySQL 5. Try to avoid table and column names that are reserved words in SQL (Window, Start, End and Timestamp are all examples of bad name choices).
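If you are still on MySQL 5.x, one way to get the same "oldest containing window" result without window functions is a correlated subquery; a rough sketch, reusing the Activity/Window names from the example and assuming Start values are unique per ISBN:
SELECT w.`Start`, w.`End`, a.`ISBN`, a.`Timestamp`
FROM `Activity` a
JOIN `Window` w
  ON w.`ISBN` = a.`ISBN`
 AND a.`Timestamp` BETWEEN w.`Start` AND w.`End`
WHERE w.`Start` = (
    SELECT MIN(w1.`Start`)
    FROM `Window` w1
    WHERE w1.`ISBN` = a.`ISBN`
      AND a.`Timestamp` BETWEEN w1.`Start` AND w1.`End`
);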
Keeping an index over (ISBN, Start, End) on Window (or clustering the entire table that way by defining those three columns as the primary key) helps this query.
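For example, a sketch of such an index (the index name is arbitrary, the column names are the ones used above):
CREATE INDEX ix_window_isbn_start_end ON `Window` (`ISBN`, `Start`, `End`);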

Finding count of unique value before a character

I have some entries in database table rows as follows.
101 - 1
101 - 2
101 - 3
102 - 1
102 - 2
102 - 3
103
I need the SELECT query to return the count as '3', since numbers like 101 and 102 should each be counted only once, based on the value before the -.
So is there any way to find the unique values in a db table column before a character?
EDIT: I also have entries without the -.
If your entries always have the format you have provided, you just have to find the position of the '-' character, split the values, take the first n characters, and count the distinct values.
This works for SQL Server; otherwise, tell us which DBMS you are using, or replace the functions with your DBMS's equivalents on your own.
SELECT COUNT(DISTINCT SUBSTRING(val,0,CHARINDEX('-', val))) from YourTable
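Since the edit mentions rows without a -, here is a variation of the same idea (still SQL Server, assuming the column is named val) that falls back to the whole value when no - is present:
SELECT COUNT(DISTINCT CASE
                          WHEN CHARINDEX('-', val) > 0
                          THEN RTRIM(SUBSTRING(val, 1, CHARINDEX('-', val) - 1))
                          ELSE RTRIM(val)
                      END) AS distinct_prefixes
FROM YourTable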
create table T1
(
    id int primary key identity,
    col1 varchar(20)
)
insert into T1 values ('101 - 1'),('101 - 2'),('101 - 3'),('102 - 1'),('102 - 2'),('102 - 3')
select SUBSTRING(col1, 0, CHARINDEX(' ', col1)) as 'Value', count(*) as 'Count'
from T1
group by SUBSTRING(col1, 0, CHARINDEX(' ', col1))

Efficient way to remove successive duplicate rows in MySQL

I have a table with columns like (PROPERTY_ID, GPSTIME, STATION_ID, PROPERTY_TYPE, VALUE), where PROPERTY_ID is the primary key and STATION_ID is a foreign key.
This table records state changes; each row represents the property value of some station at a given time. However, its data was converted from an old table where each property was a column (like (STATION_ID, GPSTIME, PROPERTY1, PROPERTY2, PROPERTY3, ...)). Because usually only one property changed at a time, I have lots of duplicates.
I need to remove all successive rows with the same values.
Example. Old table contained values like
time stn prop1 prop2
100 7 red large
101 7 red small
102 7 blue small
103 7 red small
The converted table is
(order by time,type) (order by type,time)
time stn type value time stn type value
100 7 1 red 100 7 1 red
100 7 2 large 101 7 1 red
101 7 1 red 102 7 1 blue
101 7 2 small 103 7 1 red
102 7 1 blue 100 7 2 large
102 7 2 small 101 7 2 small
103 7 1 red 102 7 2 small
103 7 2 small 103 7 2 small
should be changed to
time stn type value
100 7 1 red
100 7 2 large
101 7 2 small
102 7 1 blue
103 7 1 red
The table contains about 22 million rows.
My current approach is to use a procedure that iterates over the table and removes duplicates:
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE id INT;
    DECLARE psid,nsid INT DEFAULT null;
    DECLARE ptype,ntype INT DEFAULT null;
    DECLARE pvalue,nvalue VARCHAR(50) DEFAULT null;
    DECLARE cur CURSOR FOR
        SELECT station_property_id, station_id, property_type, value
        FROM station_property
        ORDER BY station_id, property_type, gpstime;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO id, nsid, ntype, nvalue;
        IF done THEN
            LEAVE read_loop;
        END IF;
        IF (psid = nsid and ptype = ntype and pvalue = nvalue) THEN
            delete from station_property where station_property_id = id;
        END IF;
        SET psid = nsid;
        SET ptype = ntype;
        SET pvalue = nvalue;
    END LOOP;
    CLOSE cur;
END
However, it is too slow. On a test table with 20000 rows it takes 6 minutes to remove 10000 duplicates. Is there a way to optimize the procedure?
P.S. I still have my old table intact, so maybe it is better to try and convert it without the duplicates rather than dealing with duplicates after conversion.
UPDATE.
To clarify which duplicates I want to allow and which I do not:
If a property changes, then changes back, I want all 3 records to be saved, even though the first and the last contain the same station_id, type, and value.
If there are several successive (by GPSTIME) records with the same station_id, type, and value, I want only the first one (which represents the change to that value) to be saved.
In short, a -> b -> b -> a -> a should be optimized to a -> b -> a.
SOLUTION
As @Kickstart suggested, I've created a new table populated with the filtered data. To refer to previous rows, I've used an approach similar to the one used in this question.
rename table station_property to station_property_old;
create table station_property like station_property_old;
set @lastsid = -1;
set @lasttype = -1;
set @lastvalue = '';
INSERT INTO station_property (station_id, gpstime, property_type, value)
select newsid as station_id, gpstime, newtype as type, newvalue as value from
-- this subquery adds columns with the previous row's values
(select station_property_id, gpstime,
        @lastsid as lastsid, @lastsid := station_id as newsid,
        @lasttype as lasttype, @lasttype := property_type as newtype,
        @lastvalue as lastvalue, @lastvalue := value as newvalue
 from station_property_old
 order by newsid, newtype, gpstime) sub
-- we filter the data, removing unnecessary duplicates
where lastvalue != newvalue or lastsid != newsid or lasttype != newtype;
drop table station_property_old;
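For what it's worth, on MySQL 8.0+ the same "keep only the rows where something actually changed" filter can be written with the LAG() window function instead of user variables; a rough sketch, reusing the table names from the rename above:
INSERT INTO station_property (station_id, gpstime, property_type, value)
SELECT station_id, gpstime, property_type, value
FROM (
    SELECT station_id, gpstime, property_type, value,
           LAG(value) OVER (PARTITION BY station_id, property_type
                            ORDER BY gpstime) AS prev_value
    FROM station_property_old
) t
-- keep the first row of each series and every row where the value changed
WHERE prev_value IS NULL OR prev_value <> value;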
Possibly create a new table, populated with a select from the existing table using a GROUP BY. Something like this (not tested so excuse any typos):-
INSERT INTO station_property_new
SELECT station_property_id, station_id, property_type, value
FROM (SELECT station_property_id, station_id, property_type, value, COUNT(*)
      FROM station_property
      GROUP BY station_property_id, station_id, property_type, value) Sub1
Regarding changing properties, can't you put a unique constraint on the combination of the station/type/value columns? That way you will not be able to change it to a value which will result in a duplication.
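A sketch of that constraint, assuming the station_property columns above (note that the question's update allows the same station/type/value combination to legitimately reappear after the value changes back, so this constraint may be too strict for this particular table):
ALTER TABLE station_property
    ADD CONSTRAINT uq_station_type_value UNIQUE (station_id, property_type, value);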

How to find "holes" in an auto_increment column?

When I DELETE, for example, the id 3, I have this:
id | name
1 |
2 |
4 |
5 |
...
Now, I want to search for the missing id(s), because I want to fill the id again with:
INSERT INTO xx (id,...) VALUES (3,...)
Is there a way to search for "holes" in the auto_increment index?
Thanks!
You can find the top value of gaps like this:
select t1.id - 1 as missing_id
from mytable t1
left join mytable t2 on t2.id = t1.id - 1
where t2.id is null
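If you want the full extent of each gap rather than just its top value, a sketch along the same lines (same assumed table mytable with integer column id):
select t1.id + 1 as gap_start,
       (select min(t3.id) - 1 from mytable t3 where t3.id > t1.id) as gap_end
from mytable t1
left join mytable t2 on t2.id = t1.id + 1
where t2.id is null
  and t1.id < (select max(id) from mytable);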
The purpose of AUTO_INCREMENT is to generate simple unique and meaningless identifiers for your rows. As soon as you plan to re-use those IDs, they're no longer unique (not at least in time) so I have the impression that you are not using the right tool for the job. If you decide to get rid of AUTO_INCREMENT, you can do all your inserts with the same algorithm.
As for the SQL code, this query will match existing rows with the rows that have the next ID:
SELECT a.foo_id, b.foo_id
FROM foo a
LEFT JOIN foo b ON a.foo_id=b.foo_id-1
E.g.:
1 NULL
4 NULL
10 NULL
12 NULL
17 NULL
19 20
20 NULL
24 25
25 26
26 27
27 NULL
So it's easy to filter out rows and get the first gap:
SELECT MIN(a.foo_id)+1 AS next_id
FROM foo a
LEFT JOIN foo b ON a.foo_id=b.foo_id-1
WHERE b.foo_id IS NULL
Take this as a starting point because it still needs some tweaking:
You need to consider the case where the lowest available number is the lowest possible one.
You need to lock the table to handle concurrent inserts.
On my computer it's slow as hell with big tables.
I think the only way you can do this is with a loop.
Any other solution won't show gaps bigger than 1:
insert into XX values (1)
insert into XX values (2)
insert into XX values (4)
insert into XX values (5)
insert into XX values (10)
declare @min int
declare @max int
select @min = MIN(ID) from xx
select @max = MAX(ID) from xx
while @min < @max begin
    if not exists (select 1 from XX where id = @min + 1) BEGIN
        print 'GAP: ' + cast(@min + 1 as varchar(10))
    END
    set @min = @min + 1
end
result:
GAP: 3
GAP: 6
GAP: 7
GAP: 8
GAP: 9
First, I agree with the comments that you shouldn't try filling in holes. You won't be able to find all the holes with a single SQL statement. You'll have to loop through all possible numbers starting with 1 until you find a hole. You could write a SQL function to do this for you, which could then be used in an insert. So if you wrote a function called find_first_hole, you could call it in an insert like:
INSERT INTO xx (id, ...) VALUES (find_first_hole(), ...)
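A rough sketch of such a function in MySQL (find_first_hole is hypothetical; xx and id are the table and column from the question):
DELIMITER //
CREATE FUNCTION find_first_hole() RETURNS INT
READS SQL DATA
BEGIN
    DECLARE candidate INT DEFAULT 1;
    -- walk upwards from 1 until an id is missing
    WHILE EXISTS (SELECT 1 FROM xx WHERE id = candidate) DO
        SET candidate = candidate + 1;
    END WHILE;
    RETURN candidate;
END//
DELIMITER ;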
This is a gaps & islands problem; see my (and other people's) replies here and here. In most cases, gaps & islands problems are most elegantly solved using recursive CTEs, which are not available in MySQL (prior to 8.0).

mysql update a row in between and shift the following rows to the right

How do I update the track column value of a specific id and shift the values after it to the right?
id track
1 3
2 5
3 8
4 9
I want to update the track column value of id 3 to 10, with a result like this:
id track
1 3
2 5
3 10
4 8
5 9
The id column is auto_increment.
Any other suggestion is also welcome.
Thank you.
You should avoid tweaking auto_increments. Auto increment keys are usually supposed to be used internally (e.g. for linking purposes). If you want to order tracks, I suggest you add a separate numeric field "ordernro" to the table and update that.
To add the column ordernro to a table named album, do it like this:
alter table album add ordernro int(2) after id;
Then copy the current value for id into this new column:
update album set ordernro=id;
(do this only once after adding the column)
To insert track 10 at position 3 first shift the rows:
update album set ordernro = ordernro + 1 where ordernro >= 3;
And then insert track 10:
insert into album (ordernro, track) values (3, 10);
Remember to update your existing insert/update/select statements accordingly.
The result can be checked by:
select * from album order by ordernro;
(The id will now be "mixed up", but that doesn't matter)
UPDATE table SET id = id + 1 WHERE id >= x;
x being the id where you place your current track.
The problem with JK's answer is that MySQL returns an error saying that it can't UPDATE because the index at x+1 would be a duplicate.
What I did is
UPDATE table SET id = id + 100 WHERE id >= x;
UPDATE table SET id = id - 99 WHERE id >= x;
And then INSERT my row at index x
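Alternatively, MySQL's single-table UPDATE supports ORDER BY, so shifting from the highest id downwards avoids the transient duplicate-key error in one statement; a sketch, where tracks is a hypothetical name for the question's table and x = 3:
UPDATE tracks SET id = id + 1 WHERE id >= 3 ORDER BY id DESC;
INSERT INTO tracks (id, track) VALUES (3, 10);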