Passing table data with sort, search query and pagination to a different component in Angular - MySQL

I will try to explain as simple as possible.
There is a mat-table that initially loads the first 30 of n rows and supports sorting and searching. Even when the table is sorted, only the first 30 rows are sent; when the user goes to the next page, the next 30 rows are fetched from the backend, making a new request each time.
Now each row has a button that takes the user to another component, which shows detailed data about that specific row.
This component has previous and next buttons that show the detailed view of the next or previous row in the same order as displayed in the table (sort, search result, page number).
Currently the table rows are rebuilt in the backend (Django) with all the sort, search and other parameters, then the current row is located and the next and previous rows are sent back (this takes a minimum of 5 hits on the DB).
Hence it is very slow.
On the frontend I can only pass the data of the current page, which breaks next/previous navigation across page boundaries.
How do I properly tackle this?

Normal search UIs don't focus on 30 rows at a time. Instead, they first search the entire dataset, then 'paginate' the results. (Or is that what you intended to say?)
There are details that can make the processing fast, and details that can prevent it. Please go into detail about the table structure and the possible search criteria.
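One common way to tackle the prev/next requirement without rebuilding the table server-side on every click is to pass the full query context (sort column, direction, search) from the table to the detail component, and let the backend resolve both neighbours in a single query, for example with the LAG()/LEAD() window functions available in MySQL 8+. A minimal Angular sketch; the /api/rows/{id}/neighbors endpoint, parameter names and response shape are assumptions for illustration, not part of the original question:

// A sketch only: endpoint and response shape are assumed, not given.
import { HttpClient, HttpParams } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

export interface TableQueryState {
  sortField: string;          // column the table is currently sorted on
  sortDir: 'asc' | 'desc';
  search: string;             // current search query
}

export interface NeighborResponse {
  prevId: number | null;      // null when the row is first in the ordering
  nextId: number | null;      // null when the row is last in the ordering
}

@Injectable({ providedIn: 'root' })
export class RowNavigationService {
  constructor(private http: HttpClient) {}

  // One request per prev/next click instead of rebuilding the table:
  // the backend can resolve both neighbours in a single SQL query
  // using the LAG()/LEAD() window functions (MySQL 8+).
  getNeighbors(rowId: number, state: TableQueryState): Observable<NeighborResponse> {
    const params = new HttpParams()
      .set('sort', state.sortField)
      .set('dir', state.sortDir)
      .set('q', state.search);
    return this.http.get<NeighborResponse>(`/api/rows/${rowId}/neighbors`, { params });
  }
}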

Related

Find out the specific row that was selected in Dash DataTable

I have a Dash DataTable with row_selectable set to "multi". Therefore, the user can select multiple rows via a checkbox that will appear next to each row of the DataTable.
I also have a callback that has as input Input("datatable-id", "selected_rows"). Therefore, each time the user selects a row, I get ALL the rows that are selected.
What I want to do is to update my database column is_selected based on the row that the user just selected. To get the row that the user just selected I can either:
Read my entire is_selected column of my database and find the difference between that and selected_rows.
Use selected_rows to update ALL the rows in my database.
I wonder, is there a better way to find out which specific row the user just selected, so that I can simply update that single row in my database accordingly?
You can use a dcc.Store as a way to get access to the previous state.
If I'm not wrong, your problem is a comparison between two states: state_0, the one before the interaction, and state_1, the one after the interaction.
Using only DataTable attributes, it is hard (at least I don't know how) to get the previous state of selected rows. With that in mind, you can create a dcc.Store to hold the last state of selected_rows and read it every time the callback is triggered. In other words, make the store's data an Output of the callback and also a State, so you have access to state_0. After computing a simple set difference between the current selected_rows and the previous one, update the store's data with the current selected_rows.
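The set-difference step itself is language-agnostic; a minimal sketch (the function name and numeric row ids are illustrative, and in a real Dash app this logic would live inside the Python callback):

// A sketch only: function name and row-id arrays are illustrative.
function diffSelection(
  previous: number[],          // selected_rows stored in dcc.Store (state_0)
  current: number[],           // selected_rows from the triggered callback (state_1)
): { added: number[]; removed: number[] } {
  const prev = new Set(previous);
  const curr = new Set(current);
  return {
    // present now but not before: the row the user just checked
    added: current.filter((id) => !prev.has(id)),
    // present before but not now: the row the user just unchecked
    removed: previous.filter((id) => !curr.has(id)),
  };
}

// diffSelection([1, 4], [1, 4, 7]) -> { added: [7], removed: [] },
// so only row 7 needs its is_selected flag updated in the database.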

How to split a large JSON fetch in React

I have a question about best practice for fetching large JSON data in a React application.
Let's say we have a fetch URL foo.com, and we also have from/to variables like 0 to 6 or whatever we put there. And we have about 10,000 elements in the response JSON.
I need all these rows because I need to sort across all of them. But it takes too much time to render them all together.
Is there any good practice?
Fetch them all while the client waits (what I use at the moment)
Fetch n items at a time until all are fetched (better speed?)
Fetch n items, show the already-fetched data, and fetch the remaining values in the background (in this case only the sorting waits until the last fetch, but the client can already look around the page)
Fetching 10K items at once is not a good idea in most cases. Neither is rendering 10K items on the screen. The trick is to find a sweet spot in UX that allows you to NOT render/fetch this many items.
You need to ask yourself what's the most important thing for the user on this page.
Do they want to see the top most blah record?
Do they mostly care about seeing and eyeballing the items?
Are they searching for a particular item?
If you can't change the API to do the sorting for you on the server, then here is what I would do:
load the first 100 items, sort them and render them on the screen
kick off the fetch for the next 100 items and show a loading message to the user
once the second page is fully fetched, sort all 200 and, before rendering, indicate on the UI that "there is new data, click here to see" but still only render the top 100
repeat until you have all the items, but give the user the ability to cancel at any time
The benefits are:
√ user sees data as soon as first page is there
√ you always render 100 items with a "show more" button so your DOM will be lean
√ they can cancel at any time if they have found what they were looking for
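A minimal sketch of that loop, assuming a hypothetical endpoint that takes the from/to query parameters mentioned in the question; the page size, URL and item shape are illustrative:

// A sketch only: URL, page size and item shape are assumptions.
interface Item { id: number; name: string }

const PAGE_SIZE = 100;

async function fetchIncrementally(
  onPage: (topOfSortedSoFar: Item[]) => void,  // render/update callback
  isCancelled: () => boolean,                  // user can stop at any time
): Promise<Item[]> {
  const all: Item[] = [];
  for (let from = 0; ; from += PAGE_SIZE) {
    if (isCancelled()) break;                  // found what they wanted
    const res = await fetch(`https://foo.com/items?from=${from}&to=${from + PAGE_SIZE}`);
    const page: Item[] = await res.json();
    if (page.length === 0) break;              // no more data on the server
    all.push(...page);
    all.sort((a, b) => a.name.localeCompare(b.name));
    // only hand back the top 100 so the DOM stays lean; the caller can
    // show a "there is new data, click here to see" hint instead of
    // re-rendering everything
    onPage(all.slice(0, PAGE_SIZE));
  }
  return all;
}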

Complex HTML issue with a huge data set to show on one page

Person -> Item -> Work -> Component.
These are the main tables in the database.
I have to search for Items by a criterion, which gives a list. I join Person to get each Item's "parent". After this there may or may not be a record in the Work table; if there is, I also join Work, and then join the list of Components if any can be found.
The original code used nested tables. It would crash the browser, because that design took too much memory, and it was extremely slow at around 150 records.
I rewrote the nested tables with divs. Performance got a huge boost, but it started to get slow again because of the buttons and the design. (Before, it wasn't able to show 200 records even after 10 minutes of waiting; now it displays 5k rows in 23 seconds.)
Some of my benchmark logs:
SQL execution time: 0.18448090553284 seconds. Found 5624 rows.
For each result processing took: 0.29220700263977 seconds.
Writing the HTML code took: 0.4107129573822 seconds.
Rows in HTML: 26551 headers + data.
Total cells in HTML: 302491 headers and data.
Time until DOM ready: 23691 milliseconds (in JavaScript)
0.18 + 0.29 + 0.41 = 0.88, so the server side takes around 1 second!
But when the browser actually wants to show (paint) it, it takes around 20 seconds!
Please don't suggest paging! The customer (the final user) wants to see all the data on 1 web page, for whatever reason. No comments on that here.
Requiring an i7 processor and 8/16 GB of RAM is accepted.
Most of the data rows have a collapse/expand button.
Most of the data rows have CRUD buttons: Add, Edit, Delete, View details.
All 4 kinds of data tables have headers, and they don't match the other kinds' headers in width or in number of columns.
When I just list the data on a blank page (without the design) in 1 single table, it takes around 2 or 3 seconds, not 20-30.
The original nested-table solution has buttons with functionality in each data row.
I would like to reuse that and not implement it again.
My idea is to go back to the original nested-table design (so I don't have to re-implement a lot of the button functionality), then display only the top-level table, collapsed, with expand buttons. Expanding would call AJAX to get the second-level data, and when that is ready, the 3rd level, then the 4th (see the sketch below).
The user is on the intranet or on the same PC as the server, so maybe this is acceptable? It also doesn't block the user interface for a long time.
How would you handle this case, when showing a next-page button with 20 records per page is not an option?
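A minimal sketch of that lazy-expand idea, assuming a hypothetical /api/items?personId=... endpoint for the second level; the element handling and row markup are illustrative:

// A sketch only: endpoint and row markup are assumptions.
async function onExpandClick(personId: number, container: HTMLElement): Promise<void> {
  if (container.dataset.loaded === 'true') {
    container.hidden = !container.hidden;      // already fetched: just toggle
    return;
  }
  container.textContent = 'Loading…';
  const res = await fetch(`/api/items?personId=${personId}`);
  const items: { id: number; name: string }[] = await res.json();
  container.textContent = '';
  for (const item of items) {
    const row = document.createElement('div');
    row.className = 'item-row';
    row.textContent = item.name;
    // the next level (Work) is fetched the same way, only when this
    // row is expanded in turn, so only the top level renders up front
    container.appendChild(row);
  }
  container.dataset.loaded = 'true';           // fetch each level only once
}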

MySQL query performance for pagination

OK, so what is the best practice when it comes to paginating in MySQL? Let me make it clearer: say that at a given time I have 2000 records, with more being inserted, and I am displaying 25 at a time. I know I have to use LIMIT to paginate through the records, but what am I supposed to do for the total count of my records? Do I count the records every time the user clicks to request the next 25 records? Please don't tell me the answer straight up, but rather point me in the right direction. Thanks!
The simplest solution would be to just continue working with the result set normally as new records are inserted. Presumably, each page you display will use a query looking something like the following:
SELECT *
FROM yourTable
ORDER BY someCol
LIMIT 25
OFFSET 100
As the user pages back and forth, if new data were to come in it is possible that a page could change from what it was previously. From a logical point of view, this isn't so bad. For example, if you had an alphabetical list of products and a new product appeared, then the user would receive this information in a fairly nice way.
As for counting, your code can allow moving to the next page so long as data is there to support a new page being added. Having new records added might mean more pages required to cover the entire table, but it should not affect your logic used to determine when to stop allowing pages.
If your table has a date or timestamp column representing when a record was added, then you might actually be able to restrict the entire result set to a snapshot in time. In this case, you could prevent new data from entering over a given session.
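A minimal sketch of that snapshot approach; the created_at column and the mysql2 Node client are assumptions, not givens from the question:

// A sketch only: column name and client library are assumed.
import type { Connection, RowDataPacket } from 'mysql2/promise';

async function fetchPageSnapshot(
  conn: Connection,
  sessionStart: Date,    // captured once, when the user first opens the list
  pageSize: number,
  offset: number,
): Promise<RowDataPacket[]> {
  const [rows] = await conn.query<RowDataPacket[]>(
    `SELECT *
       FROM yourTable
      WHERE created_at <= ?   -- rows inserted mid-session are excluded
      ORDER BY someCol
      LIMIT ? OFFSET ?`,
    [sessionStart, pageSize, offset],
  );
  return rows;
}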
3 suggestions:
1. Only refresh the data grid when clicking the next button, via AJAX, or store the count in the session for the chosen search parameters.
2. Use memcache, which is more advanced and can be shared across all users. Generate a unique key based on the filter parameters and keep the count under it, so you won't hit the database. When a new record gets added, you need to clear the existing memcache key. This requires memcached to be running (see the sketch below).
3. Create an index; then, if you hit the database for the count alone, there won't be much impact on performance.
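A minimal sketch of suggestion 2, with an in-memory Map standing in where memcached would sit in production; the key scheme and helper names are illustrative:

// A sketch only: key scheme and helper names are illustrative.
const countCache = new Map<string, number>();

function cacheKey(filters: Record<string, string>): string {
  // one stable key per unique combination of search parameters
  return Object.entries(filters)
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([k, v]) => `${k}=${v}`)
    .join('&');
}

async function getTotalCount(
  filters: Record<string, string>,
  countFromDb: () => Promise<number>,  // runs SELECT COUNT(*) ... WHERE ...
): Promise<number> {
  const key = cacheKey(filters);
  const cached = countCache.get(key);
  if (cached !== undefined) return cached;   // repeat views skip the DB
  const total = await countFromDb();
  countCache.set(key, total);
  return total;
}

// When a new record is inserted, clear the cached counts so the next
// request recounts:
function onRecordAdded(): void {
  countCache.clear();
}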

Editing table row data populated with classic ASP

I'm hoping this will be a rather simple question to answer, as I'm not looking for any specific code. I have a table on a classic ASP page populated from a SQL Server database. I've just set the table up so that each row is clickable and takes you to a page to edit the data in that row. My question is this: would I be better off trying to reuse the recordset that populated the table, or should I reconnect to the DB and pull just the record I want to edit?
As always: it depends. It depends on what you need to edit about the record. It depends on how far apart your DB and site are from each other. It depends on which machine, if the DB and site are on separate machines, is more powerful.
That being said, you should make a new call for that specific record, mainly because of a specification you made in your question:
...and takes you to a page to edit the data in the row
You should not try to pass a record set between pages. There are a few reasons for this:
Only collect what you need
Make sure data is fresh
Consider how your program will scale
On point 1 there are two ways to look at this. One is that you are trying to pass the entire record set across a page when you only need 1 record; there are few situations where another DB call would cost more than this. The other is that you are only passing one record, which would make me question your design: why does this record set have every item related to a record? You are selecting way too much for just a result list. Or, if the record is that small, why do you need the new page at all? Why can you not just reveal an edit template for the item if it is that minimal?
On point 2, consider the following scenario. You are discussing with a coworker how you need to change a customer's record. You pull up this result set in an application, but then nature calls and you step away from your desk. The coworker gets called by the customer and asked why the record is not updated yet. To placate the customer, your coworker makes the changes. Now you are using an old record set and may overwrite the additional changes your coworker made while you were away. This all happens because you never refresh the record set; you just pass the old one from page to page.
On point 3, we can look back at point 1 a bit. Let us say that you are passing 5 fields now. You decide, though, that you need a comment field to attach to one of your existing fields. Do you intend to pass 2,000 characters of that comment field to the next page? How about if each of the 5 fields needs a comment field? Do you intend to pass 10,000 characters per record for a properly paged record set of 10? Or do you skip record-set paging and pass that much for a full 126 records?
There are more reasons too. Will you be able to keep your records secure passing them this way? Will this affect your users' experience because they have a crummy computer and cannot build that large a POST request quickly? Generally it is better to only keep what you need, and in most situations your result list should not contain everything you need for editing.
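The page in question is classic ASP talking to SQL Server, but the "pull just the record you want to edit" advice is language-agnostic; a minimal sketch, shown with the mysql2 client and a hypothetical customers table purely for illustration:

// A sketch only: client library, table and columns are illustrative.
import type { Connection, RowDataPacket } from 'mysql2/promise';

async function loadRecordForEdit(
  conn: Connection,
  id: number,              // passed to the edit page, e.g. as ?id=42
): Promise<RowDataPacket | null> {
  // select only the editable fields, never the whole result set
  const [rows] = await conn.query<RowDataPacket[]>(
    'SELECT id, name, email FROM customers WHERE id = ? LIMIT 1',
    [id],
  );
  return rows[0] ?? null;  // fresh data, tiny payload
}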