I have a database table, for example 'items'. I have a timeline of these items, sorted by the field ascended_at (datetime). I need to make a pagination API for this timeline. My first version was:
HTTP GET /items/timeline?page=[PAGE_NUM]
which fires
SELECT * FROM items ORDER BY ascended_at LIMIT 10 OFFSET [0, 10, 20, ...];
but here is the problem: when a new item arrives, every page shifts by one item. To avoid this, I added a from_asc_at parameter:
HTTP GET /items/timeline?page=[PAGE_NUM]&from_asc_at=123123123
which fires
SELECT * FROM items WHERE ascended_at <= [asc_at_parameter] ORDER BY ascended_at LIMIT 10 OFFSET [0, 10, 20, ...];
but this is not accurate, because two items can have the same ascended_at, so the same item can appear on two different pages (but it should not).
So, my question is: what are the possible solutions for this?
Use the ID (because it is unique)? But what if the timeline is not ordered by ID?
Any more ideas?
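For reference, a minimal sketch of the ID idea raised above, assuming id is unique and using it only to break ties between equal timestamps (not as the primary sort):

SELECT *
FROM items
WHERE ascended_at <= [asc_at_parameter]
ORDER BY ascended_at, id     -- id disambiguates equal ascended_at values
LIMIT 10 OFFSET [0, 10, 20, ...];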
If your item IDs are auto-incremented, you could check what the next "autoincrement" value will be when retrieving items the first time (before paginating).
Store that value persistently (maybe in a session variable) until the next search, and add a filter id < {maximumID} to your SQL query to improve the "result set stability" when the user paginates (all new items created between the initial search and subsequent paginations won't be retrieved).
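A sketch of the resulting query, assuming {maximumID} is the placeholder captured on the first request:

SELECT *
FROM items
WHERE id < {maximumID}     -- hide items created after the initial search
ORDER BY ascended_at
LIMIT 10 OFFSET [0, 10, 20, ...];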
EDIT
To handle item deletions, you will have to do "soft deletes": do not immediately delete an item from the DB, but store a deletion date in a datetime field, so that items still exist in the DB for a while.
When a new search is issued, store the current server time in the session, and add a criterion (for example date_deleted IS NULL OR date_deleted > {searchDate}), so that all the items deleted after a search will still be displayed for that specific search.
You will have to create a scheduled job to "really" delete items from the DB after some delay.
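Putting the pieces together, a minimal sketch, assuming a date_deleted DATETIME column, the placeholders above, and a one-day retention delay:

-- Paginated query: exclude items created after the search began,
-- but keep showing items deleted after it began
SELECT *
FROM items
WHERE id < {maximumID}
  AND (date_deleted IS NULL OR date_deleted > {searchDate})
ORDER BY ascended_at
LIMIT 10 OFFSET [0, 10, 20, ...];

-- Scheduled cleanup job: physically delete soft-deleted items after the delay
DELETE FROM items
WHERE date_deleted IS NOT NULL
  AND date_deleted < NOW() - INTERVAL 1 DAY;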
Related
I know this is a tough one, but I'm basically trying to say: give me a service call and its completion date, then give me the max date of all service calls whose date is less than the date of the service call I'm inquiring about.
Basically, the end result I'm looking for is to say whether there was another service call on this piece of equipment within the last 30 days.
So, as you can see in the image, for say asset 50698, service call 579032 has a date of 11/9/2020; the call below that was 10/22/2020, which was less than 30 days earlier. I want to somehow find a way to count how many service calls I have where this has occurred. Is this possible?
I think you're looking for a context operator: In, ForEach or ForAll (In, in this case).
Add a variable "MaxAssetDate" and assign it a Formula similar to the following based on your column headers.
=Max([Service Call Completion Date] In ([Asset ID];[Service Call])) In ([Asset ID])
Then add this as a column. Provided you have a prompt filtering for a given asset or "date", this column will show the max date for each service call of the same asset ID. Then add a new variable, ServiceCallDaysDiff, using DatesBetween() with "MaxAssetDate", ServiceCallCompletionDate and DayPeriod:
=DatesBetween([ServiceCallCompletionDate];[MaxAssetDate];DayPeriod)
This should give you a number from 0 to X. Then add a filter: if the number is between 1 and 30, show those records and hide the rest, or apply whatever logic is then needed.
Now, if you're dealing with hundreds of thousands of records this isn't ideal, as you're putting all the processing on the WebI engine when it would ideally occur as an object in the database layer. However, if you only have a few thousand records this should be manageable.
To add a count of service calls...
add variable: ServiceCallsCount:
=Sum(Sum(If([ServiceCallDaysDiff]=0;0;1)) In ([AssetID]))
This will count the non-zero day differences. Note that this will extend beyond 30 days, so if you want to limit it to 30 days, adjust the If statement to zero out those not between 1 and 30.
This is but one approach; there may be simpler ways.
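If you do push this down to the database layer, as suggested above, a rough SQL sketch of the same logic (hypothetical service_calls table and column names):

-- Count service calls that had a prior call on the same asset
-- within the preceding 30 days
SELECT COUNT(*) AS calls_within_30_days
FROM service_calls c
WHERE EXISTS (
    SELECT 1
    FROM service_calls prior
    WHERE prior.asset_id = c.asset_id
      AND prior.completion_date < c.completion_date
      AND prior.completion_date >= c.completion_date - INTERVAL 30 DAY
);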
I have a query that returns some dates which are not in any order. I need to select the last row from the subquery. The problem is that all the solutions I can find online use something like
ORDER BY qry_doc_dates.arrival_date DESC LIMIT 1
SELECT qry_doc_dates.arrival_date
FROM qry_doc_date AS qry_doc_dates
ORDER BY qry_doc_dates.arrival_date DESC
LIMIT 1
which will not serve my purpose, because it first orders the dates DESC (or ASC).
Suppose qry_doc_date returns:
"2019-05-27",
"2019-05-13",
"2019-05-20",
"2019-05-22",
"2019-07-12",
"2019-05-22",
"2019-07-16",
"2019-05-22"
As we can see, the returned values are not in order. If I use
ORDER BY qry_doc_dates.arrival_date DESC LIMIT 1
then it returns "2019-07-16", but I need "2019-05-22", which is the last row.
EDIT 1:
I am trying to convert this VBA query to MySQL.
DLast("arrival_date", "qry_doc_date", "[package_id] = " & Me!lstPackage)
I suppose I misunderstood what the VBA query is meant to return. Another issue is that I do not have the means to run this VBA query and check the result myself.
Your question doesn't make too much sense according to the SQL standard. In the absence of an ORDER BY clause, the database engine is free to return the rows in any order. This order may even change over time.
So essentially you are requesting the "last random row" the query returns. If this is the case, why don't you get the "first random row"? It doesn't make any difference, does it?
The only way of getting the last random row is to get them all and discard all of them except for the last one.
Now, if you just need one random row, I would suggest you just get the first random row, and problem solved.
In response to the additional information from your edit:
EDIT 1: I am trying to convert this VBA query to MySQL.
DLast("arrival_date", "qry_doc_date", "[package_id] = " & Me!lstPackage)
I suppose I misunderstood what the VBA query is meant to return. Another issue is that I do not have the means to run this VBA query and check the result myself.
Unless your dataset qry_doc_date is ordered by means of an ORDER BY clause, the DFirst and DLast domain aggregate functions will return an essentially random record.
This is stated in the MS Access Documentation for these two functions:
You can use the DFirst and DLast functions to return a random record from a particular field in a table or query when you simply need any value from that field.
[ ... ]
If you want to return the first or last record in a set of records (a domain), you should create a query sorted as either ascending or descending and set the TopValues property to 1. For more information, see the TopValues property topic. From a Visual Basic for Applications (VBA) module, you can also create an ADO Recordset object and use the MoveFirst or MoveLast method to return the first or last record in a set of records.
What you need is to include a sequential row number in qry_doc_date.
Then you can use something like this:
ORDER BY qry_doc_dates.row_number DESC LIMIT 1
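If qry_doc_date cannot provide that number itself, a sketch using a MySQL user variable to number the rows as they arrive (assuming qry_doc_date is available as a view, and with the standing caveat that without an ORDER BY that arrival order is not guaranteed):

SELECT arrival_date
FROM (
    SELECT qry_doc_date.*, (@rn := @rn + 1) AS row_number
    FROM qry_doc_date, (SELECT @rn := 0) AS vars
) AS qry_doc_dates
ORDER BY qry_doc_dates.row_number DESC
LIMIT 1;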
I have a simple page that pulls results from MySQL and displays them in a table. I have enabled paging on the results and allowed the user to set the number of results displayed per page. I am passing two querystring values to handle this: 'page' and 'count'.
I then take these values to calculate the LIMITs of my MySQL query, using the SQL_CALC_FOUND_ROWS directive and following that with a call to SELECT FOUND_ROWS(); to get the total number of results. This all works nicely.
Now I want to validate the querystring values. As I store the possible "correct" values for the results-per-page value 'count' in an array, I simply check that the passed 'count' value is in that array, and if not, set it to the default value. For the 'page' value I am having a bit of a mental block: to determine whether there are any results for the passed 'page' (meaning it is "correct"), I need to go to the database and find the result count first, but since I only want to go to the DB once, I need to include the LIMITs, which are based on the passed 'page' value... chicken and egg. I have a couple of thoughts on how to solve this:
Run the query as coded above, and if the (('page' - 1) * 'count') result is greater than or equal to the value returned from SELECT FOUND_ROWS();, re-run the query with new LIMITs of 0, count.
Get the full result set, verify that the passed page is correct, then do another pull from the database with the LIMIT values.
I'd rather not go back to the database at all, but as I mentioned, having a mental block on this rather common issue.
Thanks,
Paul
I ended up using the first solution above -
Run the query as coded above, and if the (('page' - 1) * 'count') result is greater than or equal to the value returned from SELECT FOUND_ROWS();, re-run the query with new LIMITs of 0, count.
It's not perfect, in that a second database pull is required when the passed page value is bad, but given that this is an unexpected case only triggered by the user intentionally passing bad data, it's acceptable. If anyone else has a better solution, I'd be happy to re-open the question.
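For anyone landing here, a sketch of that flow (hypothetical results table, page = 3, count = 10):

-- First attempt: offset = (page - 1) * count = 20
SELECT SQL_CALC_FOUND_ROWS *
FROM results
LIMIT 20, 10;

-- Total row count, ignoring the LIMIT
SELECT FOUND_ROWS();

-- If (page - 1) * count >= FOUND_ROWS(), the passed page was bad:
-- re-run with the default first page
SELECT SQL_CALC_FOUND_ROWS *
FROM results
LIMIT 0, 10;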
I have an MLS site that you can search, and it gives you a list of results. I then store the MySQL query string in a session.
When you click on a result, you go to that property's page, keyed by its MLS number.
From that page I have forward and backward buttons that are supposed to go back and forth through the properties.
How do I use that same stored query string to find the next and previous results, depending on which property I am looking at?
Assuming the results are ordered by id, you could modify the stored query like this:
WHERE id > current_id ORDER BY id ASC LIMIT 1
to get the next property, where current_id is the id of the currently viewed record, and likewise:
WHERE id < current_id ORDER BY id DESC LIMIT 1
to get the previous one.
I have an app which has tasks in it, and you can reorder them. Now I was wondering how best to store them. Should I have a column for the order number and recalculate all of them every time I change one? Please suggest a version that doesn't require me to update all the order numbers, since that is very time-consuming (from the execution's point of view).
This is especially bad if I take one that is at the very top of the order and then drag it down to the bottom.
Name (ordernumber)
--
1Example (1)
2Example (2)
3Example (3)
4Example (4)
5Example (5)
--
2Example (1) *
3Example (2) *
4Example (3) *
5Example (4) *
1Example (5) *
*have to be changed in the database
Also, some tasks may get deleted once they are done.
You may keep the order keys as string literals and use lexical sorting:
1. A
2. Z
Add a task:
1. A
3. L
2. Z
Add more:
1. A
4. B
3. L
2. Z
Move 2 between 1 and 4:
1. A
2. AL
4. B
3. L
etc.
You update only one record at a time: just take the average letter between the first letters that differ. If you insert between A and C, you take B; if you insert between ALGJ and ALILFG, you take ALH.
When the two differing letters are adjacent, treat the upper one as the lower one concatenated with the letter after Z. I.e., if you need to insert between ABHDFG and ACSDF, count it as between ABH and AB(Z+1), and write AB(letter 35/2), that is ABP.
If you run out of string length, you may always perform a full reorder.
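A sketch of this in MySQL (hypothetical schema; the midpoint key itself is computed in application code as described above):

CREATE TABLE tasks (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(100),
    sort_key VARCHAR(255) NOT NULL    -- 'A', 'AL', 'B', 'L', 'Z', ...
);

-- Reading the list back always sorts by the key
SELECT * FROM tasks ORDER BY sort_key;

-- Moving a task between neighbours 'A' and 'B' touches exactly one row
UPDATE tasks SET sort_key = 'AL' WHERE id = 2;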
Update:
You can also keep your data as a linked list.
See the article in my blog on how to do it in MySQL:
Sorting Lists
In a nutshell:
/* This just returns all records in no particular order */
SELECT *
FROM t_list
id      parent
------  ------
1       0
2       3
3       4
4       1
/* This returns all records in intended order */
SELECT  @r AS _current,
        @r := (
        SELECT  id
        FROM    t_list
        WHERE   parent = _current
        )
FROM    (
        SELECT  @r := 0
        ) vars,
        t_list
_current  id
--------  ---
0         1
1         4
4         3
3         2
When moving the items, you'll need to update at most 4 rows.
This seems to be the most efficient way to keep an ordered list that is updated frequently.
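For illustration, a sketch of such a move with the sample data above (item 3 is moved to sit right after item 1; edge cases such as moving to the head of the list are left out):

-- Capture the moved item's old neighbours and its new successor first
SET @old_parent := (SELECT parent FROM t_list WHERE id = 3);
SET @old_child  := (SELECT id FROM t_list WHERE parent = 3 LIMIT 1);
SET @new_child  := (SELECT id FROM t_list WHERE parent = 1 AND id <> 3 LIMIT 1);

UPDATE t_list SET parent = @old_parent WHERE id = @old_child;  -- close the gap
UPDATE t_list SET parent = 3           WHERE id = @new_child;  -- relink the new successor
UPDATE t_list SET parent = 1           WHERE id = 3;           -- attach after item 1

The intended order is now 1, 3, 4, 2, and only three rows were touched.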
Normally I'll add an int or smallint column named something like 'Ordinal' or 'PositionOrdinal' as you suggest, and with the exact caveat you mention — the need to update a potentially significant number of records every time a single record is re-ordered.
The benefit is that given a key for a specific task and a new position for that task, the code to move an item is just two statements:
UPDATE `Tasks` SET Ordinal = Ordinal + 1 WHERE Ordinal >= @NewPosition;
UPDATE `Tasks` SET Ordinal = @NewPosition WHERE TaskID = @TaskID;
There are other suggestions for a doubly linked list or lexical order. Either can be faster, but at the cost of much more complicated code, and the performance will only matter when you have a lot of items in the same group.
Whether performance or code complexity is more important will depend on your situation. If you have millions of records, the extra complexity might be worth it. However, I normally prefer the simpler code, because users normally only order small lists by hand. If there aren't all that many items in the list, the extra updates won't matter. This approach can typically handle thousands of records without any noticeable impact on performance.
The one thing to keep in mind with your updated example is that the column is only used for sorting and is not otherwise shown directly to the user. Thus, when dragging an item from the top to the bottom as shown, the only thing you need to change is that one record; it doesn't matter that you leave the first position empty. This means there is a small potential to overflow your integer sort with enough re-ordering, but let me say it again: users normally only order small lists by hand. I've never heard of this risk actually causing a problem.
Out of your answers I came up with a mixture which goes as follows:
Say we have:
1Example (1)
2Example (2)
3Example (3)
4Example (4)
5Example (5)
Now if I sort something between 4 and 5 it would look like this:
2Example (2)
3Example (3)
4Example (4)
1Example (4.5)
5Example (5)
now again, something between 1Example (4.5) and 5Example (5):
3Example (3)
4Example (4)
1Example (4.5)
2Example (4.75)
5Example (5)
It will always take half of the difference between the two neighbouring numbers.
I hope that works; please do correct me ;)
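In SQL terms this mixture only ever needs a single-row update; a sketch with a hypothetical DECIMAL position column:

-- Move 1Example between 4Example (4) and 5Example (5): (4 + 5) / 2 = 4.5
UPDATE tasks SET position = 4.5 WHERE name = '1Example';

SELECT name, position FROM tasks ORDER BY position;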
We do it with a Sequence column in the database.
We use sparse numbering (e.g. 10, 20, 30, ...), so we can "insert" a row between existing values. If the adjacent rows have consecutive numbers, we renumber the minimum number of rows we can.
You could probably use decimal numbers: take the average of the Sequence numbers of the rows adjacent to where you are inserting; then you only have to update the row being "moved".
This is not an easy problem. If you have a low number of sortable elements, I would just reset all of them to their new order.
Otherwise, it seems it would take just as much work or more to "test-and-set" to modify only the records that have changed.
You could delegate this work to the client side. Have the client maintain the old sort order and the new sort order and determine which row[sort-order] values should be updated, then pass those tuples to the PHP-MySQL interface.
You could enhance this method in the following way (doesn't require floats):
If all sortable elements in a list are initialized to a sort-order according to their position in the list, set the sort-order of every element to something like row[sort-order] = row[sort-order] * K, where K is some number greater than the average number of times you expect the list to be reordered. This is O(N) for N elements, but it increases insertion capacity by at least N*K, with at least K open slots between each existing pair of elements.
Then, if you want to insert an element between two others, it's as simple as changing its sort-order to a value greater than the lower element's and less than the upper's. If there is no "room" between the elements, you can simply reapply the "spread" algorithm presented in the previous paragraph. The larger K is, the less often it will need to be applied.
The K algorithm would be applied selectively in the PHP script, while choosing the new sort-orders would be done by the client (JavaScript, perhaps).
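A sketch of the "spread" step and a subsequent insert, taking K = 100 (hypothetical table and column names):

-- O(N) spread: opens at least K slots between each pair of neighbours
UPDATE tasks SET sort_order = sort_order * 100;

-- Inserting between sort_order 100 and 200 now touches a single row
UPDATE tasks SET sort_order = 150 WHERE id = 7;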
I'd recommend having an order column in the database. When an object is reordered, swap the order values in the database between the object you reordered and the object that previously held that position; that way you don't have to reorder the entire set of rows.
Hope that makes sense... of course, this depends on your rules for re-ordering.
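A sketch of such a swap in MySQL, reading both positions before writing them back (hypothetical ids 3 and 5):

SET @pos_a := (SELECT sort_order FROM tasks WHERE id = 3);
SET @pos_b := (SELECT sort_order FROM tasks WHERE id = 5);

-- Write the two positions back, exchanged
UPDATE tasks
SET sort_order = IF(id = 3, @pos_b, @pos_a)
WHERE id IN (3, 5);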