Order and change one column of a database - MySQL

I have one column in my database, named position, which is used for ordering. However, when some records are deleted, the sequence gets messed up. I want to reorder this column whenever the table changes (maybe using a trigger).
position (old) -> position (new)
1 -> 1
3 -> 2
7 -> 3
8 -> 4
like this.
I don't think equal numbers can exist in position (old), because I have already attached a PHP function that reorders the column when updates occur. However, when a record is deleted because its parent was deleted, that function is not called.
Thanks for help!

If you are using the column just for ordering, you do not need to update it on deletion, because the relative order will still be correct, and you will save some resources.
But if you really need to update by sequence, look at this answer:
updating columns with a sequence number mysql
I believe (as scrowler wrote) the better way in such a case is to update the rows from the application, after the application deletes the parent record.

If you decide to update it in the application then...
If the row at position = n is deleted, your logic should be: set position = position - 1 where position > n.
Please note that this will work only if you delete one record at a time from your application, and it assumes that the data is already in sequence before the delete is triggered.
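The delete-then-shift logic above can be sketched as follows. This is an illustration using SQLite (the same statements work in MySQL); the table and column names (items, position) are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, position INTEGER)")
conn.executemany("INSERT INTO items (position) VALUES (?)",
                 [(1,), (2,), (3,), (4,)])

def delete_and_reorder(conn, item_id):
    # Assumes positions are already a gapless sequence and that
    # only one row is deleted at a time.
    row = conn.execute("SELECT position FROM items WHERE id = ?",
                       (item_id,)).fetchone()
    if row is None:
        return
    n = row[0]
    conn.execute("DELETE FROM items WHERE id = ?", (item_id,))
    # Close the gap left by the deleted row.
    conn.execute("UPDATE items SET position = position - 1 WHERE position > ?",
                 (n,))
    conn.commit()

delete_and_reorder(conn, 2)   # delete the row that sat at position 2
positions = [p for (p,) in conn.execute(
    "SELECT position FROM items ORDER BY position")]
print(positions)   # [1, 2, 3]
```

In a real application you would run the DELETE and the UPDATE inside one transaction, as done here via commit at the end.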

Related

What order do Duplicates get Removed in Power Query?

When running Remove Duplicates in Power Query, does it leave the first instance alone and then delete any following duplicates? E.g., if there were duplicates on rows 10, 11 and 12, would it delete rows 10 & 11? Is this documented somewhere?
Thanks!
As far as I am aware, remove duplicates will remove items based on the order the data was initially loaded into Power Query. Any sorting or other operations you have performed after the data is loaded will not be factored into this. So duplicate items on rows 11 and 12 would be removed in your example, even if you sorted the data so the items on rows 11 and 12 were now above the item on row 10.
It is possible to make Remove Duplicates follow the current sort order if you use the function Table.Buffer() on the data before the remove-duplicates step in PQ (the actual function it runs is Table.Distinct()). This is because Table.Buffer() loads the table into memory in its current state, and this resets the "load" order that Table.Distinct() uses to remove duplicates.
In practice, the simplest way to do this is to change the default function generated by Remove Duplicates from this
= Table.Distinct(#"Sorted Rows", {"DuplicateColumn"})
to this
= Table.Distinct(Table.Buffer(#"Sorted Rows"), {"DuplicateColumn"})
Not sure about the documentation, but from experience: yes, the first item is retained, and any duplicates that follow will be removed.
With this knowledge under your belt, you can use an Index column to manipulate the order of entries if the default order does not produce the result you want.

MySQL inserts in the middle instead of at the end of the table

I am new to databases. I deleted a few rows (up to the end of the table) from a database table. Now every time I execute an INSERT query via PHP, it inserts the new row immediately after the last deleted row and pushes the previous inserts down by one row.
As shown in the figure above, the rows were deleted from 2019-08-18 (red rectangle) via the query:
DELETE FROM mytable WHERE date > '2019-08-18'
Now the new inserts appear in the wrong order, as shown in the green rectangle in the figure above. Though the row with date 2019-08-19 was inserted first, it is pushed to the end of the table.
What am I doing wrong?
Most relational databases will not return the rows in any particular order unless you ask explicitly. They'll just do whatever's easiest. Often that's insertion order, or maybe order on disk, or whatever index was last used. It's really arbitrary and may even change from version to version of the same database platform.
"If you cared you'd ask" is the principle at work here. You didn't ask, so MySQL presumes you don't care.
Add an ORDER BY clause to get predictable orders. Otherwise be prepared for the unexpected.
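A tiny demonstration of the point, using SQLite in place of MySQL (the behavior described applies to both): without ORDER BY the engine may return rows in whatever order it finds convenient, while an explicit ORDER BY guarantees the result. The table name (log) is made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, day TEXT)")
# Rows inserted out of calendar order, as in the question.
conn.executemany("INSERT INTO log (day) VALUES (?)",
                 [("2019-08-20",), ("2019-08-17",), ("2019-08-19",)])

# SELECT without ORDER BY: the order is arbitrary and must not be relied on.
# SELECT with ORDER BY: the order is guaranteed by the SQL standard.
ordered = [d for (d,) in conn.execute("SELECT day FROM log ORDER BY day")]
print(ordered)   # ['2019-08-17', '2019-08-19', '2019-08-20']
```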

Best practice for handling positions of rows

What is the best practice for moving rows, so that you can change the order of items? If you make a new column called order_id or something, wouldn't that fail when I delete or select rows?
Another method, I guess, is to just swap the values completely while keeping the primary ID, so everything except the ID is changed. However, I do not know what people usually use. There are so many websites that let you change the order of things. How do they do that?
Every SQL statement that returns a visible result set should include an ORDER BY clause so that the results are consistent. The Standard does not guarantee that the order of rows in a particular table will remain constant or consistent, even if obvious changes aren't made to the table.
What you use for your ORDER BY clause depends on the use case. A date value is the usual choice for ordering a comment thread or blog entries. However, if you want the user to be able to customize the order in which a result set appears, then you have to provide a column that represents the position of the row, and adjust the value of that column when the user changes the order they see.
For example, if you decide that the column will contain a sequential number, starting with 1 for the first row, 2 for the second, etc., then you will be OK to delete rows when they need to be deleted without having to do updates. However, if you insert a row, you will need to give the inserted row the sequential number appropriate for its position, and update all rows below it with their new positions. The same goes for moving a row from one location to another: the rows between the new and old locations need to be updated with new position indexes.
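The insert case above can be sketched like this, using SQLite for illustration (the statements are plain SQL and work in MySQL as well); the table and column names (playlist, position) are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE playlist (id INTEGER PRIMARY KEY, title TEXT, position INTEGER)")
conn.executemany("INSERT INTO playlist (title, position) VALUES (?, ?)",
                 [("a", 1), ("b", 2), ("c", 3)])

def insert_at(conn, title, pos):
    # Make room: push every row at or below the target position down by one...
    conn.execute(
        "UPDATE playlist SET position = position + 1 WHERE position >= ?", (pos,))
    # ...then drop the new row into the gap.
    conn.execute(
        "INSERT INTO playlist (title, position) VALUES (?, ?)", (title, pos))
    conn.commit()

insert_at(conn, "x", 2)
titles = [t for (t,) in conn.execute(
    "SELECT title FROM playlist ORDER BY position")]
print(titles)   # ['a', 'x', 'b', 'c']
```

Deleting a row, by contrast, only leaves a gap in the sequence, which does not affect the ORDER BY result; that is why deletes need no bookkeeping here.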

Preventing 2 users from updating the same record simultaneously

I have a table tbl_orders. It has a quantity field quantity.
In some part of my application I need to decrement the quantity by 1.
I already know the id of the record (available from the client side), so I issue an update statement:
UPDATE tbl_orders
SET quantity=quantity-1
WHERE id= 6
The problem is that this query can accidentally be run multiple times concurrently.
For example, 2 customer service operators may update the same record simultaneously.
That means that the quantity will be decremented by 2 when it is supposed to be decremented once only.
I tried putting the update in a transaction, but that resulted in only delaying the second transaction until the first one was committed. Once it was committed the second update ran and decremented the record again.
How can I make sure that other queries fail if one is modifying a record?
UPDATE:
For an update to be valid, the quantity on the client side must match the value in the database. For example, if a user sees a quantity of 5 in his browser and wants to decrement it, the value in the database must still be 5.
UPDATE 2
I found a good explanation here for optimistic locking using Doctrine 2:
One approach I've used/seen in the past was having a timestamp column. When querying, ensure you have both the ID and the original timestamp from the start of editing the record. Then, when you try to update, send:
UPDATE YourTable
SET counter = counter - 1,
    TheTimestampColumn = <new timestamp value>
WHERE ID = <yourID>
  AND TheTimestampColumn = <timestampEditStartedWith>
This way, whoever gets to it first with the original starting timestamp wins. For the one following, you would have to check how many records were updated, and if that count is zero, notify the user that another person changed the record while they were viewing it, and ask whether they want to reload the latest data (or something like that).
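The compare-and-set update above can be sketched as follows, using SQLite for illustration and an integer version column instead of a wall-clock timestamp (same idea, easier to demonstrate deterministically). The affected-row count tells each writer whether it won.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tbl_orders (id INTEGER PRIMARY KEY, quantity INTEGER, version INTEGER)")
conn.execute("INSERT INTO tbl_orders VALUES (6, 5, 1)")

def decrement(conn, order_id, seen_version):
    # The WHERE clause only matches if nobody else changed the row
    # since this client read it (optimistic locking).
    cur = conn.execute(
        "UPDATE tbl_orders "
        "SET quantity = quantity - 1, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (order_id, seen_version))
    conn.commit()
    return cur.rowcount == 1   # True if this writer won

first = decrement(conn, 6, 1)    # succeeds: quantity 5 -> 4, version 1 -> 2
second = decrement(conn, 6, 1)   # stale version: matches no row, fails
print(first, second)             # True False
```

The losing writer sees rowcount 0 and can then reload the record and ask the user what to do, exactly as described above.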

Select only recently updated rows in a MySQL db with a large data volume

I'm working with an InnoDB table that contains up to 30,000 records. Rows are updated frequently with a stock_quantity value. What I need to do is select only the most recently updated rows with a scheduled task and perform some actions through a web service.
I'm just trying to understand the best way of doing this without killing performance. I'm thinking of 3 different solutions:
using a datetime column and updating its value on each modification. Then select rows where date_col > NOW() - 20 min (20 min being the frequency at which the crontab runs)
using a boolean column and setting the value to true each time the row is modified. Then select rows where boolean_col is true. When the task runs, set the value of boolean_col back to false.
using a second table to store recently updated rows. On each update of a row in table_1, copy the row to table_2. Then select all rows from table_2, perform the actions, and truncate table_2.
Anyway, I'm pretty sure the right solution is not listed above... so does anyone have some good advice? Thanks.
First of all,
30,000 records is not that big...
I prefer method 1, with some additional changes:
set the datetime column's default to ON UPDATE CURRENT_TIMESTAMP
build an index on this column
Method 2 will incur redundant writes and reads.
Method 3 is the worst: it almost doubles both the write and the read operations.
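Method 1 can be sketched like this, using SQLite for illustration (in MySQL you would declare the column as `updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP` so the touch happens automatically, and the 20-minute window would be `updated_at > NOW() - INTERVAL 20 MINUTE`). Table and column names here are hypothetical.

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stock (id INTEGER PRIMARY KEY, stock_quantity INTEGER, updated_at TEXT)")
# Index the timestamp column so the cron query stays cheap.
conn.execute("CREATE INDEX idx_stock_updated ON stock (updated_at)")

now = datetime.datetime(2019, 8, 20, 12, 0, 0)
conn.executemany("INSERT INTO stock VALUES (?, ?, ?)", [
    (1, 10, (now - datetime.timedelta(minutes=5)).isoformat()),   # recently updated
    (2, 20, (now - datetime.timedelta(hours=3)).isoformat()),     # stale
])

# The scheduled task selects only rows touched in the last 20 minutes.
cutoff = (now - datetime.timedelta(minutes=20)).isoformat()
recent = [i for (i,) in conn.execute(
    "SELECT id FROM stock WHERE updated_at > ? ORDER BY id", (cutoff,))]
print(recent)   # [1]
```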
I would personally use your option 2.
I would seriously look at a trigger that sets the value to 1 when the row is edited, of course excluding updates that only affect the boolean column.
I would then have the cron search the table for rows where boolean = 1, return the list, process it, and update the field back to 0 once complete.
This would be my approach, but like you said, there might be a better way.
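The trigger-plus-flag idea can be sketched as follows, again with SQLite for illustration (MySQL trigger syntax differs slightly); the names (stock, is_dirty, mark_dirty) are made up. The `AFTER UPDATE OF stock_quantity` clause means updates that only touch the flag itself do not re-fire the trigger.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stock (id INTEGER PRIMARY KEY, stock_quantity INTEGER, "
    "is_dirty INTEGER DEFAULT 0)")
conn.execute("""
    CREATE TRIGGER mark_dirty AFTER UPDATE OF stock_quantity ON stock
    BEGIN
        UPDATE stock SET is_dirty = 1 WHERE id = NEW.id;
    END
""")
conn.execute("INSERT INTO stock (id, stock_quantity) VALUES (1, 10), (2, 20)")

# An application update trips the flag via the trigger.
conn.execute("UPDATE stock SET stock_quantity = 9 WHERE id = 1")
dirty = [i for (i,) in conn.execute("SELECT id FROM stock WHERE is_dirty = 1")]
print(dirty)   # [1]

# The cron job processes the dirty rows, then clears the flag.
conn.execute("UPDATE stock SET is_dirty = 0 WHERE is_dirty = 1")
conn.commit()
```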
Another idea: you might also look at replacing your cron with the trigger entirely, and performing the action your cron does directly on record update; that might work...