Get the total number of records when doing pagination - linq-to-sql

To get a page from a database I have to execute something like this:
var cs = ( from x in base.EntityDataContext.Corporates
select x ).Skip( 10 ).Take( 10 );
This will skip the first 10 rows and will select the next 10.
How can I find out how many rows the query would return without the pagination? I don't want to run another query just to get the count.

To get the total number of records before Skip/Take you have to run a separate query. Count() on the paged query gives the number of rows actually returned for that page, and it won't issue another query if the paged results have already been materialized (e.g. with ToList()).
var q = from x in base.EntityDataContext.Corporates
        select x;
var total = q.Count();                  // total rows, one round trip
var cs = q.Skip(10).Take(10);           // page query
var numberOnSelectedPage = cs.Count();  // rows on this page (another round trip unless cs is materialized first)

Bottom line: you have to run two queries; there is no way around it.
Here's a tidy way to do it, though, that reuses the original LINQ query and its filter, leaving less room for copy/paste errors:
var qry = from x in base.EntityDataContext.Corporates select x;
var count = qry.Count();
var items = qry.Skip(10).Take(10).ToList();
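If you page in more than one place, the two calls can be wrapped in a small helper so the count and the page always come from the same query. A minimal sketch, assuming LINQ to SQL as in the question; PagedResult<T> and GetPage are illustrative names, not part of the framework:

using System.Collections.Generic;
using System.Linq;

// Illustrative container for one page of results plus the pre-paging total.
public class PagedResult<T>
{
    public int TotalCount { get; set; }
    public IList<T> Items { get; set; }
}

public static class PagingExtensions
{
    // Runs the two queries (Count, then Skip/Take) against the same IQueryable.
    public static PagedResult<T> GetPage<T>(this IQueryable<T> query, int pageIndex, int pageSize)
    {
        return new PagedResult<T>
        {
            TotalCount = query.Count(),
            Items = query.Skip(pageIndex * pageSize).Take(pageSize).ToList()
        };
    }
}

Usage would then be something like var page = ( from x in base.EntityDataContext.Corporates select x ).GetPage(1, 10); with page.TotalCount holding the unpaged total and page.Items the ten rows of the second page.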

Related

How to construct having clause in Dynamic LINQ

I want to find duplicate rows in a table given a list of columns. I am using Dynamic LINQ to group by those columns and then want to check whether any group has a count greater than 1.
The group by and count are working correctly. However, I am not sure how to construct the having clause.
Currently, I am loading the list of group counts into memory and then checking whether there are any duplicates.
var columns = "new(FirstName, LastName)"
dynamic groups = await _dbContext.Users
.Where(x=>x.ClientID = 1234)
.GroupBy(columns)
.Select("new(Count() AS Count)")
.ToListAsync();
I am trying to avoid loading the list into memory. The query should just return a boolean, like Any(), indicating whether any count is > 1.
I think I got it:
var columns = "new(FirstName, LastName)";
var found = _dbContext.Users
    .Where(x => x.ClientID == 1234)
    .GroupBy(columns)
    .Select("new(Count() AS Count)")
    .Where("Count > 1")
    .Any();

Getting different values as the condition becomes more specific

I have two pieces of code
SELECT * FROM etel.ti18n_country
inner join etel.ti18n
ON id_i18nid = i18nid WHERE id_countryid = 1
and
SELECT * FROM etel.ti18n_country
inner join etel.ti18n
ON id_i18nid = i18nid WHERE id_countryid = 1 and id_i18nid = 4460;
The first returns a bunch of results, but noticeably none with id_i18nid = 4460.
The second, however, returns the row with id_i18nid = 4460.
How can that be? As I understand MySQL, the first query should have included the row with id_i18nid = 4460 for the second query to be able to return it as well, since I only made the WHERE clause more specific.
Turns out the problem was that I was relying on DataGrip's ordering to find my id. Since I had more than 500 results, DataGrip only fetches a subset of the results and sorts those. By ending the statement with ORDER BY id_i18nid DESC I found the row.

Selecting a random row with a WHERE clause is taking too long

I want to select a random row with a specific WHERE clause, but the query is taking too long (around 2.7 seconds):
SELECT * FROM PIN WHERE available = '1' ORDER BY RAND() LIMIT 1
The database contains around 900k rows
Thanks
SELECT * FROM PIN WHERE available = '1' ORDER BY RAND() LIMIT 1
means that a random number is generated for EVERY row, the whole result set is sorted, and finally one row is returned.
That's a lot of work just to fetch a single row.
Assuming your ids have no gaps, or only a few, you are better off letting your programming language generate ONE random number and then fetching that id:
Pseudo-Example:
result = null;
min_id = queryMinId();
max_id = queryMaxId();
while (result == null) {
    random_number = random_between(min_id, max_id);
    result = queryById(random_number);
}
If you have a lot of gaps, you could instead retrieve the whole id set and then pick ONE random index into that result first:
id_set = queryAllIds();
random_number = random_between(0, size(id_set) - 1);
result = queryById(id_set[random_number]);
The first example works without additional constraints. In your case you should use option 2, restricted to your condition: pre-select all IDs with available = 1 into an array indexed 0 to count() - 1, so that invalid ids are never considered.
Then generate a random number between 0 and count() - 1 to get an index within that result set, translate it to an actual ID, and finally fetch that row:
id_set = queryAllIdsWithAvailableEqualsOne(); // the "condition"
random_number = random_between(0, size(id_set) - 1);
result = queryById(id_set[random_number]);
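A concrete sketch of option 2 in C#, purely for illustration (it assumes the MySqlConnector package, an integer id column on the PIN table, and a placeholder connection string):

using System;
using System.Collections.Generic;
using MySqlConnector;

var connectionString = "Server=localhost;Database=mydb;Uid=user;Pwd=secret"; // placeholder
using var conn = new MySqlConnection(connectionString);
conn.Open();

// 1) Pre-select all candidate ids (only rows with available = 1).
var ids = new List<int>();
using (var cmd = new MySqlCommand("SELECT id FROM PIN WHERE available = '1'", conn))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
        ids.Add(reader.GetInt32(0));
}

// 2) Pick one id at random and fetch just that row by primary key.
var randomId = ids[Random.Shared.Next(ids.Count)];
using var rowCmd = new MySqlCommand("SELECT * FROM PIN WHERE id = @id", conn);
rowCmd.Parameters.AddWithValue("@id", randomId);
using var rowReader = rowCmd.ExecuteReader();

Fetching just the id column is usually far cheaper than sorting the whole table with ORDER BY RAND(), and the id list can be cached between requests if it does not change often.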

MySQL: get page rows and total row count with a single query?

Good day.
For page navigation you usually need two queries:
1) $res = mysql_query("SELECT * FROM Table");
-- query that gets the total row count, used to build the links to previous and next pages, e.g. <- 2 3 4 5 6 ->
2) $res = mysql_query("SELECT * FROM Table LIMIT 20, $num"); // where $num is the number of rows per page
Tell me please, is it really possible to use only one query to the database both to build the previous/next page links ( <- 2 3 4 5 6 -> ) and to output the rows of the current page (the SQL with LIMIT)?
P.S.: I know that I can use two queries and SELECT * FROM Table LIMIT 20; that is not the answer I'm looking for.
If you want to know how many rows would have been returned from a query while still using LIMIT you can use SQL_CALC_FOUND_ROWS and FOUND_ROWS():
A SELECT statement may include a LIMIT clause to restrict the number of rows the server returns to the client. In some cases, it is desirable to know how many rows the statement would have returned without the LIMIT, but without running the statement again. To obtain this row count, include a SQL_CALC_FOUND_ROWS option in the SELECT statement, and then invoke FOUND_ROWS() afterward:
$res = mysql_query("SELECT SQL_CALC_FOUND_ROWS, * FROM Table");
$count_result = mysql_query("SELECT FOUND_ROWS() AS found_rows");
$rows = mysql_fetch_assoc($rows);
$total_rows = $rows['found_rows'];
This is still two queries (which is inevitable) but is lighter on the DB as it doesn't actually have to run your main query twice.
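For illustration, the same pattern in C# (this assumes the MySqlConnector package and the table/limit from the question; the key point is that FOUND_ROWS() is per-connection, so both statements must run on the same connection, one after the other):

using System;
using System.Collections.Generic;
using MySqlConnector;

using var conn = new MySqlConnection("Server=localhost;Database=mydb;Uid=user;Pwd=secret"); // placeholder
conn.Open();

// Page query; SQL_CALC_FOUND_ROWS tells the server to remember the un-LIMITed row count.
var pageRows = new List<object[]>();
using (var cmd = new MySqlCommand("SELECT SQL_CALC_FOUND_ROWS * FROM `Table` LIMIT 20, 10", conn))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        var values = new object[reader.FieldCount];
        reader.GetValues(values);
        pageRows.Add(values);
    }
}

// Total number of rows the query would have returned without the LIMIT.
using var countCmd = new MySqlCommand("SELECT FOUND_ROWS()", conn);
var totalRows = Convert.ToInt64(countCmd.ExecuteScalar());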
Many database APIs don't actually grab all the rows of the result set until you access them.
For example, using Python's built-in sqlite3 module:
import sqlite3
cursor = sqlite3.connect("example.db").cursor()  # illustrative database file
q = cursor.execute("SELECT * FROM somewhere")
row1 = q.fetchone()
row2 = q.fetchone()
Of course, the library is free to prefetch an unknown number of rows to improve performance.

How can I "order by" only the LIMIT results in a mysql Query?

Hi, I need to get the results and apply the ORDER BY only within the limited section. Normally, when you apply ORDER BY you are ordering all the rows; what I want is to sort only the limited section. Here is an example:
// all rows
SELECT * FROM users ORDER BY name
// partial 40 rows ordered "globally"
SELECT * FROM users ORDER BY name LIMIT 200,40
The solution is:
// partial 40 rows ordered "locally"
SELECT * FROM (SELECT * FROM users LIMIT 200,40) AS T ORDER BY name
This solution works well, but there is a problem: I'm working with a ListView component that needs the TOTAL row count of the table (via SQL_CALC_FOUND_ROWS). If I use this solution I cannot get that total count; I only get the count of the limited section (40).
I hope you can give me a solution based on the query alone, for example something like "ORDER BY LOCALLY".
Since you're using PHP, might as well make things simple, right? It is possible to do this in MySQL only, but why complicate things? (Also, placing less load on the MySQL server is always a good idea)
$result = db_query_function("SELECT SQL_CALC_FOUND_ROWS * FROM `users` LIMIT 200,40");
$users = array();
while($row = db_fetch_function($result)) $users[] = $row;
usort($users,function($a,$b) {return strnatcasecmp($a['name'],$b['name']);});
$totalcount = db_fetch_function(db_query_function("SELECT FOUND_ROWS() AS `count`"));
$totalcount = $totalcount['count'];
Note that I used made-up function names, to show that this is library-agnostic ;) Sub in your chosen functions.