For news-ticker applications like Facebook's, we see that as we scroll further, older news items appear. Surely the items are inserted into the table as they occur, so a normal selection would retrieve the oldest records first, whereas here the newest come first. I assume it works like this: when the user scrolls to the end, FB sends an Ajax request with the ID of the last item currently present in the ticker (I couldn't identify it for sure in Firebug; FB sends loads of data!), the PHP queries the DB, flips the result set according to the time column, then extracts, say, the next 5 records following the one with the received ID. Such tables are huge, so flipping them frequently must take a heavy toll on the DB. Is there any way to achieve this without flipping?
If you don't specify an ORDER BY clause in your query, MySQL is not guaranteed to return the records in insertion order. See this answer for more detailed information. If you want to be sure you're getting the most recent rows, you need an insertion-time column and you need to sort on it.
If you're sorting on time descending, then you can use the usual LIMIT clause to request just the first 5 records (i.e., the 5 most recent), or the 5 most recent after a certain ID, and so on.
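A minimal sketch of that keyset approach in PHP with PDO, assuming a hypothetical news table with an auto-increment id and a created_at timestamp (neither name comes from the question):

<?php
// Hypothetical schema: news(id INT AUTO_INCREMENT, body TEXT, created_at DATETIME).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// ID of the oldest item already shown in the ticker, sent by the client
// in the Ajax request when the user scrolls to the end.
$lastId = (int) $_GET['last_id'];

// No flipping needed: the index on id lets MySQL read the next 5 older
// rows directly in descending order.
$stmt = $pdo->prepare(
    'SELECT id, body, created_at
       FROM news
      WHERE id < ?
   ORDER BY id DESC
      LIMIT 5'
);
$stmt->execute([$lastId]);
$older = $stmt->fetchAll(PDO::FETCH_ASSOC);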
Yes, keeping in mind that we're speaking about this very abstractly. If you wanted to access an indexed collection in reverse order:
$collection = array(); // filled through a request to the server
// ...
for ($i = count($collection) - 1; $i >= 0; $i--) {
    echo $collection[$i];
    // ... execute some action based on the accessed element
}
Which is to say, there's no reason you can't access an array from its last index.
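Equivalently, PHP's built-in array_reverse() gives the same traversal without manual index bookkeeping:

// Same effect using PHP's built-in helper.
foreach (array_reverse($collection) as $item) {
    echo $item;
}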
Specify ORDER BY in your SQL statement to get the oldest first, and use WHERE id > last_id to get the rows after last_id.
OK, so what is the best practice when it comes to paginating in MySQL? Let me make it clearer: say that at a given time I have 2000 records, with more being inserted, and I am displaying 25 at a time. I know I have to use LIMIT to page through the records, but what am I supposed to do about the total count of my records? Do I count the records every time a user clicks to request the next 25? Please don't tell me the answer straight up, but rather point me in the right direction. Thanks!
The simplest solution would be to just continue working with the result set normally as new records are inserted. Presumably, each page you display will use a query looking something like the following:
SELECT *
FROM yourTable
ORDER BY someCol
LIMIT 25
OFFSET 100
As the user pages back and forth, if new data comes in, it is possible that a page could change from what it was previously. From a logical point of view, this isn't so bad. For example, if you had an alphabetical list of products and a new product appeared, the user would receive this information in a fairly natural way.
As for counting, your code can allow moving to the next page as long as there is data to support another page. Having new records added might mean more pages are required to cover the entire table, but it should not affect the logic used to decide when to stop allowing pages.
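One common way to implement that check without issuing a separate COUNT(*) query (a sketch, not something this answer prescribes; the table and column names are the same placeholders as above) is to fetch one extra row and use its presence to decide whether a next page exists:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$pageSize = 25;
$offset   = 100; // page 5, for example

// Ask for pageSize + 1 rows; the extra row only tells us whether
// a "next" link should be shown.
$stmt = $pdo->prepare('SELECT * FROM yourTable ORDER BY someCol LIMIT ? OFFSET ?');
$stmt->bindValue(1, $pageSize + 1, PDO::PARAM_INT);
$stmt->bindValue(2, $offset, PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

$hasNextPage = count($rows) > $pageSize;
$rows = array_slice($rows, 0, $pageSize); // display only the 25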
If your table has a date or timestamp column representing when a record was added, then you might actually be able to restrict the entire result set to a snapshot in time. In this case, you could prevent new data from entering over a given session.
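If such a column exists, the snapshot idea might look like this (a sketch; the created_at column and the session handling are assumptions, not something given in the question):

<?php
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Capture the snapshot moment once, when the user starts browsing.
if (!isset($_SESSION['snapshot_ts'])) {
    $_SESSION['snapshot_ts'] = date('Y-m-d H:i:s');
}

// Rows inserted after the snapshot never enter the result set,
// so page boundaries stay stable for this session.
$stmt = $pdo->prepare(
    'SELECT * FROM yourTable
      WHERE created_at <= ?
   ORDER BY someCol
      LIMIT 25 OFFSET ?'
);
$stmt->bindValue(1, $_SESSION['snapshot_ts']);
$stmt->bindValue(2, 100, PDO::PARAM_INT);
$stmt->execute();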
Three suggestions:
1. Refresh only the data grid when the user clicks the next button (via Ajax), or store the count in the session for the chosen search parameters.
2. Use Memcache, which is more advanced and can be shared across all users. Generate a unique key based on the filter parameters and cache the count under it, so you won't hit the database. When a new record gets added, clear the existing Memcache key. This requires a Memcache server to be running (see the sketch after this list).
3. Create an index; then even if you hit the database just to get the count, there won't be much impact on performance.
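A sketch of suggestion 2 using PHP's Memcached extension (the key scheme, filter, and table name are made up for illustration):

<?php
$mc = new Memcached();
$mc->addServer('localhost', 11211);
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// One cache key per distinct set of filter parameters.
$filters  = ['status' => 'active']; // example filter
$cacheKey = 'rowcount:' . md5(serialize($filters));

$count = $mc->get($cacheKey);
if ($count === false) {
    // Cache miss: hit the database once, then share the result
    // across all users until an insert invalidates it.
    $stmt = $pdo->prepare('SELECT COUNT(*) FROM yourTable WHERE status = ?');
    $stmt->execute([$filters['status']]);
    $count = (int) $stmt->fetchColumn();
    $mc->set($cacheKey, $count, 300); // short TTL as a safety net
}

// After an INSERT that affects this filter set:
// $mc->delete($cacheKey);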
I have a table "documents", with an id column, which is the principal key. The table has numerous other fields and users can view the table sorted by reference to many of these fields. The table data is displayed within a virtual tree control which requests only the data it requires for the current client area of the tree.
Say my documents table had the following structure and data (it doesn't, but the simple example below is hopefully sufficient to illustrate):
id   description   date_of_doc
------------------------------
1    Doc 1         10/05/1987
2    Doc 2         11/06/1988
3    Doc 3         12/07/1989
4    Doc 4         13/08/1990
5    Doc 5         14/09/1991
6    Doc 6         15/10/1992
My virtual control loads the data in id order, which is the default table order.
However, the control allows you to click on headers which are called "description" and "date_of_doc". Clicking on these headers changes the order in which the data is displayed in the control. Click the same header twice and it will sort descending. I issue a new query to get the data with an "ORDER BY" command depending on what header has been clicked.
So if I am sorting by date_of_doc descending, then the new position of id 2 is in fact 5. Having sorted, my user then clicks the "Find by ID" link to find the document with the id 2. I now need to take him to the correct node within my tree control to find this document. From the simple dataset above we can work out that the new position within the tree is 5, but how do I do that with a query, taking the ORDER BY clause into account?
Currently I am selecting the id field for every row in the table, using the same ORDER BY, and then iterating through the query result until I can match the document id with the id requested by the user. There is nothing wrong with this query in the sense that it gets me the correct position; it just strikes me as grossly inefficient, especially as I need to work with large tables.
What I am looking for is a query which is something like
SELECT row_num FROM documents WHERE id=12345 ORDER BY date_of_doc
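One way to get that position in a single query (a sketch, not something suggested in the answer below; it assumes a MySQL version without window functions and uses id as a tie-breaker for equal dates):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// 1-based position of a document when sorting by date_of_doc DESC:
// count the rows that sort before it, i.e. rows with a later date,
// or the same date and a higher id (the assumed tie-breaker).
$stmt = $pdo->prepare(
    'SELECT COUNT(*) + 1
       FROM documents d
       JOIN documents t ON t.id = ?
      WHERE d.date_of_doc > t.date_of_doc
         OR (d.date_of_doc = t.date_of_doc AND d.id > t.id)'
);
$stmt->execute([2]);
$position = (int) $stmt->fetchColumn(); // 5 for the sample data above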
However, the control allows you to click on headers which are called "description" and "date_of_doc". Clicking on these headers changes the order in which the data is displayed in the control. Click the same header twice and it will sort descending. I issue a new query to get the data with an "ORDER BY" command depending on what header has been clicked.
This is not very efficient, as it needlessly hits the DB every time someone sorts a column. You can retrieve the data once per user session and either cache it in the web server's memory and sort it there, or use client-side sorting via one of the many JavaScript libraries. I am not an expert on these techniques, but you should be able to find help on this topic, as it is a very common scenario. Also, you haven't mentioned what technology stack you are using to build your web app: C# ASP.NET, Java, PHP, etc.
So, assuming that we are sorting in memory, the only other call to the DB would be to fetch the particular requested document.
That said, for your immediate need to avoid iterating over rows to find the document, you can write a stored procedure that takes the doc ID and returns the record set, like so (just pseudocode; it may need tweaking for your actual scenario):
DELIMITER //

CREATE PROCEDURE GetDocDetailsByID(IN p_id INT)
BEGIN
    SELECT id, description, date_of_doc -- add the other columns you need
    FROM documents
    WHERE id = p_id;
END //

DELIMITER ;
You should have that doc ID in your application at run time, when the user clicks the button or the hyperlink. Call the above stored procedure with that ID. This part is platform specific, so let me know what your front-end platform is and we can see if it needs tweaking.
I am looking for a way to create a view that, when queried, will automatically retrieve only the records added since the last query. My tables have a timestamp field on all entries, so for a simple example I can
SELECT * FROM my_table WHERE timestamp >= 'blah'
but I don't know how to determine what blah should be from the last query. So if the view was queried at 11:00 and then again at 12:00, the query at 12:00 should return only records added since 11:00, and so on. This all needs to be accomplished in the view; the end user should simply be able to query the view and get the results.
Is this possible?
There are two ways:

1. Store the last access time in the database, in a per-user persistent session table if you have one. On the next view call, use the previously stored access time to filter the rows to start from.

2. Store the last access time in the user's session in the client environment. On every call to the server, send the last access time as well, so that the server can use it to filter the rows to start from.

I prefer the second option, since that way the process doesn't write any data to database tables.
Since an unread record may slip through undetected (say it arrived less than a second after the last one accessed, so it has the same timestamp), set a column to auto-increment (typically labelled id) and check for entries using it, e.g. in PHP save the last accessed record's ID in a $lastId variable and use:
$sql="SELECT * WHERE `id` > '$lastId'";
I have an online database table CUSTOMERINFO with more than 100k records stored in the following format: Cust Id, Customer name, Addr, Phone, ......., Call back time.
I want to retrieve the data automatically when the callback time equals the current time.
I have designed the front end with Java, and currently 10 employees work with the database; right now they retrieve the data manually by ID.
I know the SELECT command is very useful for retrieval, but I want it to happen automatically instead of being invoked manually each time.
Edited:
When customer data is retrieved from the table, we either set another callback time or mark it as no callback, and then push the row back into the table. The next time, if no callback is set in place of the callback time, that row need not be retrieved.
I'm likely missing something here, but:
Something like SELECT ... FROM customerinfo WHERE `Call back time` <= NOW()
I'd recommend not using simply equals, as has been said in the comments, because you might miss items.
If you update the callback time after you have retrieved the callback items, that should work, as long as you do not run the callback query more than once every few seconds.
As was mentioned in the comments, this is just the start though. There are going to be other issues you'll have to deal with.
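A sketch of that polling approach in PHP (meant to run from cron or a scheduler; the column names follow the question's description, but the exact schema is an assumption):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=crm', 'user', 'pass');

// Fetch every customer whose callback is now due (or overdue).
$stmt = $pdo->query(
    'SELECT cust_id, customer_name, phone, call_back_time
       FROM customerinfo
      WHERE call_back_time IS NOT NULL
        AND call_back_time <= NOW()'
);
$due = $stmt->fetchAll(PDO::FETCH_ASSOC);

$clear = $pdo->prepare(
    'UPDATE customerinfo SET call_back_time = NULL WHERE cust_id = ?'
);

foreach ($due as $row) {
    // Hand the row to an employee / the Java front end here ...

    // Clear the callback so the next poll does not pick this row up
    // again; the employee can set a new call_back_time later.
    $clear->execute([$row['cust_id']]);
}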
I apologize in advance that I don't know the terminology of the tools I'm trying to use.
I have a table of events with a startdate field (among others) and a related repeats table with a reference to the event id. The repeats table stores the days of the week on which the event repeats and whether it repeats monthly, weekly, etc. What I'm hoping to do is duplicate the repeating events within the SQL query, so my final result will have the same event in different places when ordered on start date, and I can then limit the results for proper pagination.
I'm looking at the documentation for creating virtual tables and cloning tables, but I'm having trouble applying the examples to my situation.
Update:
Hopefully I can elaborate on this.
The basics of what I have now is SELECT * FROM `events` WHERE `start_date` >= CURDATE() ORDER BY `start_date` LIMIT 20, which gets me every event from today on; I'm paginating the results so only 20 are displayed at a time.
What I would like to do is create a temporary 'virtual' table with the events which have an associated repeat entry, on which I will change the start_date based on the repeat information. So if it's a weekly repeat, this second table would be filled with identical events except that each start_date would be 7 days from the last. Then I could do a join on these two tables, limit those results to the 20 pagination limit I want, and have a query result with the events in the correct place and easy to perform pagination on.
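Roughly, what I picture is something like this sketch (the repeats columns and event fields are guesses at my own schema; I haven't written this yet):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Assumed schema: events(id, title, start_date) and
// repeats(event_id, frequency) with frequency = 'weekly', etc.
$pdo->exec(
    'CREATE TEMPORARY TABLE event_occurrences (
         event_id   INT,
         title      VARCHAR(255),
         start_date DATE
     )'
);

// Base occurrences: every event as stored.
$pdo->exec(
    'INSERT INTO event_occurrences
     SELECT id, title, start_date FROM events'
);

// Extra occurrences: shift weekly repeaters out 1..10 weeks.
$ins = $pdo->prepare(
    'INSERT INTO event_occurrences
     SELECT e.id, e.title, DATE_ADD(e.start_date, INTERVAL ? WEEK)
       FROM events e
       JOIN repeats r ON r.event_id = e.id
      WHERE r.frequency = "weekly"'
);
for ($week = 1; $week <= 10; $week++) {
    $ins->execute([$week]);
}

// Paginate over the combined occurrences as before.
$stmt = $pdo->query(
    'SELECT * FROM event_occurrences
      WHERE start_date >= CURDATE()
   ORDER BY start_date
      LIMIT 20'
);
$events = $stmt->fetchAll(PDO::FETCH_ASSOC);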
I understand that creating a function in MySQL might be on the right track, as I imagine I would have to loop through some information when adding to dates. I only know the level of SQL one picks up by writing PHP, so functions are a bit out of my scope, though it doesn't seem it will be too hard to pick up with a little reading. I'm more confused about how I would create a fake table, add entries to it in a loop, and then use a join on it to merge it with the first query.
I'm also beginning to wonder about the overhead of doing this in MySQL and, should I succeed in getting it to work, how I might cache these results, though that's only an afterthought right now.
Thanks to those who are trying to help me, I'm having trouble getting this question into words for some reason.