MySQL FOUND_ROWS() not working from phpMyAdmin

I'm performing the following MySQL query inside phpmyadmin. The found_rows() function is not returning any result.
SELECT SQL_CALC_FOUND_ROWS `name` , `ability`
FROM `magic`
WHERE `name` LIKE 'Black%'
GROUP BY `name`
ORDER BY `name`
LIMIT 0 , 10;
SELECT FOUND_ROWS();
I've tried running the SELECT queries separately, one after another, but I get unexpected results. The first query alone returns 13 results, which is the correct total. When I run the FOUND_ROWS() query afterwards, it returns 20.
How can I get FOUND_ROWS() to report properly?
Thanks,
skibulk

You can't, in PHPMyAdmin.
PHPMyAdmin itself uses FOUND_ROWS() to tell you how many rows your last query found, and parses the result itself to display 'x results found'.
Every time you run a simple SELECT, PHPMyAdmin adds a LIMIT, executes that SELECT, and then runs FOUND_ROWS() afterwards to tell you how many results you would have found without the LIMIT. This is also how PHPMyAdmin implements paging (30 rows per page by default).
You also cannot usefully call FOUND_ROWS() twice: the second call refers to the result set of the first call. Since the first call returns 1 row, you get 1 as the result.
Note too that, by default, each query you enter in PHPMyAdmin opens a new connection to MySQL, executes the query, and closes the connection. Your next query will not know about the first one unless you put them all together in the input textarea, separated by semicolons.
If you want to use FOUND_ROWS(), use another MySQL client. The mysql command line is fine:
mysql> SELECT SQL_CALC_FOUND_ROWS TABLE_NAME FROM `information_schema`.tables WHERE TABLE_NAME LIKE "COL%" LIMIT 3;
+---------------------------------------+
| TABLE_NAME                            |
+---------------------------------------+
| COLLATIONS                            |
| COLLATION_CHARACTER_SET_APPLICABILITY |
| COLUMNS                               |
+---------------------------------------+
3 rows in set (0.01 sec)
mysql> SELECT FOUND_ROWS( );
+---------------+
| FOUND_ROWS( ) |
+---------------+
|             5 |
+---------------+
1 row in set (0.00 sec)
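If all you need inside PHPMyAdmin is the total, a separate COUNT query over the same filter avoids depending on session state altogether. A minimal sketch against the question's magic table (assuming name is the only grouping column):
-- Counts the distinct grouped names matching the same filter,
-- independent of SQL_CALC_FOUND_ROWS / FOUND_ROWS() session state.
SELECT COUNT(DISTINCT `name`) AS total_rows
FROM `magic`
WHERE `name` LIKE 'Black%';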

Related

MySQL SELECT * optimization

Is there a reason why there is an enormous difference between
1. SELECT * FROM data; -- 45000 rows
2. SELECT data.* FROM data; -- 45000 rows
SHOW PROFILES;
+----------+------------+-------------------------+
| Query_ID | Duration   | Query                   |
+----------+------------+-------------------------+
|        1 | 0.10902800 | SELECT * FROM data      |
|        2 | 0.11139200 | SELECT data.* FROM data |
+----------+------------+-------------------------+
2 rows in set, 1 warning (0.00 sec)
As far as I know, they both return the same number of rows and columns. Why the disparity in duration?
MySQL version 5.6.29
That's not much of a difference. Neither query is optimized; both do full table scans, and both parse the same way for the optimizer. You are talking about fractions of a millisecond of difference.
You can't optimize a full table scan. The problem is not "SELECT *" versus "SELECT data.*". The problem is that there is no WHERE clause, because that's where optimization starts.
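A quick way to see this for yourself is EXPLAIN. A sketch (the indexed id column in the last statement is a hypothetical addition, not something from the question):
-- Both statements produce the same plan: a full table scan (type: ALL).
EXPLAIN SELECT * FROM data;
EXPLAIN SELECT data.* FROM data;
-- Only a selective WHERE clause on an indexed column changes the plan
-- (type: const or ref instead of ALL), assuming such an index exists.
EXPLAIN SELECT * FROM data WHERE id = 42;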
The particular examples specified would return the same result and have the same performance.
[TableName].[column] is usually used to pinpoint which table you mean when two tables are present in a join or a complex statement and you want to specify which of two identically named columns to use.
Its most common use is in a join, though; for a basic statement such as the one above there is no difference and the output will be the same.
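For instance, a sketch with two made-up tables, orders and customers, that both have an id column:
-- The table prefix says which id is meant on each side of the join
-- and which table's columns to return:
SELECT customers.name, orders.id
FROM orders
JOIN customers ON orders.customer_id = customers.id
WHERE customers.id = 42;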

Sorting order behaviour between Postgres and MySQL

I have come across some strange sort-order behaviour between Postgres and MySQL.
For example, I created a simple table with a varchar column and inserted two records, as below, in both Postgres and MySQL.
create table mytable(name varchar(100));
insert into mytable values ('aaaa'), ('aa_a');
Now, I executed a simple select query with ORDER BY on that column.
Postgres sort order:
test=# select * from mytable order by (name) asc;
name
------
aa_a
aaaa
(2 rows)
Mysql sort order:
mysql> select * from mytable order by name asc;
+------+
| name |
+------+
| aaaa |
| aa_a |
+------+
2 rows in set (0.00 sec)
Postgres and MySQL both return the same records, but in a different order.
My question is: which one is correct?
And how can I get the results in the same order in both databases?
Edit:
I tried the query with ORDER BY ... COLLATE and it solved my problem.
I tried it like this:
mysql> select * from t order by name COLLATE utf8_bin;
+------+
| name |
+------+
| aa_a |
| aaaa |
+------+
2 rows in set (0.00 sec)
Thanks.
There is no "correct" way to sort data.
You need to read up on "locales".
Different locales will provide (among other things) different sort orders. You might have a database using ISO-8859-1 or UTF-8, which can represent several different languages, and the rules for sorting English are different from those for French or German.
PostgreSQL uses the underlying operating-system's support for locales, and not all locales are available on all platforms. The alternative is to provide your own support, but then you can have incompatibilities within one machine.
I believe MySQL takes the second option, but I'm no expert on MySQL.
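If you need both systems to agree, the practical fix is to force a deterministic collation in the ORDER BY on both sides, as the question's edit already does for MySQL. A sketch (the collation names assume a UTF-8 setup; adjust them to your server's character set):
-- MySQL: binary (byte-by-byte) comparison
SELECT * FROM mytable ORDER BY name COLLATE utf8_bin;
-- PostgreSQL: the "C" collation also compares byte by byte
SELECT * FROM mytable ORDER BY name COLLATE "C";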

MySQL - if I insert multiple values into a column of a table simultaneously, is it possible that the insertion order of the values changes?

I am doing this:
insert into table_name(maxdate) values
((select max(date1) from table1)), -- goes in row1
((select max(date2) from table2)), -- goes in row2
.
.
.
((select max(date500) from table500));--goes in row500
Is it possible that, during insertion, the order of the inserts might change? E.g. when I run
select maxdate from table_name limit 500;
will I get this:
date1 date2 . . date253 date191 ... date500
Short answer:
No, not possible.
If you want to double-check:
mysql> create table letest (f1 varchar(50), f2 varchar(50));
Query OK, 0 rows affected (0.00 sec)
mysql> insert into letest (f1,f2) values
( (SELECT SLEEP(5)), 'first'),
( (SELECT SLEEP(1)), 'second');
Query OK, 2 rows affected, 1 warning (6.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> select * from letest;
+------+--------+
| f1   | f2     |
+------+--------+
|    0 | first  |
|    0 | second |
+------+--------+
2 rows in set (0.00 sec)
mysql>
SLEEP(5) is the first row, inserted after 5 seconds; SLEEP(1) is the second row, inserted after 5+1 seconds. That is why the query takes 6 seconds.
The warning that you see is
mysql> show warnings;
+-------+------+-------------------------------------------------------+
| Level | Code | Message                                               |
+-------+------+-------------------------------------------------------+
| Note  | 1592 | Statement may not be safe to log in statement format. |
+-------+------+-------------------------------------------------------+
1 row in set (0.00 sec)
This can affect you only if you are using a master-slave setup, because the statement will not be safe for the replication binlog. For more info on this, see http://dev.mysql.com/doc/refman/5.1/en/replication-rbr-safe-unsafe.html
Later edit: please consider leaving a comment if you find this answer not useful.
Yes, very possible.
You should consider a database table unordered, and a SELECT statement without an ORDER BY clause as well. Every DBMS can choose how to implement tables (often even depending on the storage engine) and how to return the rows. Sure, many DBMSs happen to return your data in the order you inserted it, but never rely on that.
The order of the retrieved data may depend on the execution plan, and may even be different when running the same query multiple times, especially when only retrieving part of the data (TOP/LIMIT).
If you want to impose an order, add a field that orders your data; an auto-increment primary key will be enough in many cases. If you think you'll want to change the order someday, add another field.
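A minimal sketch of that approach, reusing the question's table name (the id column is added for illustration):
-- An explicit ordering column makes the retrieval order deterministic.
CREATE TABLE table_name (
  id      INT AUTO_INCREMENT PRIMARY KEY,
  maxdate DATE
);
-- Rows inserted by one multi-row INSERT receive increasing ids,
-- so this returns them in insertion order:
SELECT maxdate FROM table_name ORDER BY id LIMIT 500;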

Total run time of multiple queries in mysql

I have some benchmark queries in a .sql file. If I use source in mysql to execute them, mysql shows the run time after each query, and there are pages and pages of query output. Is there any way I can obtain the total run time of all the queries?
Thanks a lot!
You can use MySQL's built-in profiling support by running this statement in your session (for example, at the top of your .sql file):
SET profiling=1;
This allows you to easily see the time it took for each query:
mysql> SHOW PROFILES;
+----------+-------------+---------------------------------------------------------+
| Query_ID | Duration    | Query                                                   |
+----------+-------------+---------------------------------------------------------+
|        1 |  0.33174700 | SELECT COUNT(*) FROM myTable WHERE extra LIKE '%zkddj%' |
|        2 |  0.00036600 | SELECT COUNT(id) FROM myTable                           |
|        3 |  0.00087700 | CREATE TEMPORARY TABLE foo LIKE myTable                 |
|        4 | 33.52952000 | INSERT INTO foo SELECT * FROM myTable                   |
|        5 |  0.06431200 | DROP TEMPORARY TABLE foo                                |
+----------+-------------+---------------------------------------------------------+
5 rows in set (0.00 sec)
You can then sum up the times to get the total time:
SELECT SUM(Duration) from information_schema.profiling;
You can find more details on MySQL's profiling here.
Another approach you could take is to execute the SQL queries from the command line and use the Unix time command to time the execution. This may not give you the most precise time, though, and it won't give you a breakdown of how long each query took unless you combine it with MySQL profiling.
You could also modify your .sql file to insert a begin row and an end row with a timestamp and then subtract, without having to bring out an Excel spreadsheet and add it all up.
Thanks for all the suggestions.
I ended up creating another table just to record the start and end time of each query in the .sql file.
I edited the .sql file and added an insert statement after each original query, just to record the time. At the end, I can query this "time" table to profile the .sql file's execution.
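For anyone curious, a minimal sketch of that timing-table idea (the table and label names are made up for illustration):
CREATE TABLE IF NOT EXISTS query_times (
  label     VARCHAR(100),
  logged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO query_times (label) VALUES ('start');
-- ... the original benchmark queries go here, optionally with an
-- INSERT INTO query_times after each one for a per-query breakdown ...
INSERT INTO query_times (label) VALUES ('end');
-- Total elapsed seconds between the first and last marker:
SELECT TIMESTAMPDIFF(SECOND, MIN(logged_at), MAX(logged_at)) AS total_seconds
FROM query_times;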

To cast or not to cast?

I am developing a system using MySQL queries written by another programmer, and am adapting his code.
I have three questions:
1.
One of the queries has this select statement:
SELECT
[...]
AVG(mytable.foo, 1) AS 'myaverage',
Is the 1 in AVG(mytable.foo, 1) AS 'myaverage' legitimate? I can find no documentation to support its usage.
2.
The result of this gives me average values to 2 decimal places. Why?
3.
I am using this to create a temp table. So:
(SELECT
[...]
AVG(`mytable`.`foo`, 1) AS `myaverage`,
FROM
[...]
WHERE
[...]
GROUP BY
[...])
UNION
(SELECT
[...]
FROM
[...]
WHERE
[...]
GROUP BY
[...])
) AS `tmptable`
ORDER BY
`tmptable`.`myaverage` DESC
When I sort the table on this column I get output which indicates that this average is being stored as a string, so the result is like:
9.3
11.1
In order to get around this what should I use?
Should I be using CAST or CONVERT, as DECIMAL (which I read is basically binary), BINARY itself, or UNSIGNED?
Or, is there a way to state that myaverage should be an integer when I name it in the AS statement?
Something like:
SELECT
AVG(myaverage) AS `myaverage`, INT(10)
Thanks.
On your last question: can you post the exact MySQL query that you are using?
The result type of a column from a UNION is determined by everything you get back; see http://dev.mysql.com/doc/refman/5.0/en/union.html
So even if your AVG() function returns a DOUBLE, the other part of the UNION may still return a string, in which case the column type of the result will be a string.
See the following example:
mysql> select a from (select 19 as a union select '120') c order by a;
+-----+
| a   |
+-----+
| 120 |
| 19  |
+-----+
2 rows in set (0.00 sec)
mysql> select a from (select 19 as a union select 120) c order by a;
+-----+
| a   |
+-----+
|  19 |
| 120 |
+-----+
2 rows in set (0.00 sec)
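If you cannot change what the other arm of the UNION returns, casting in the ORDER BY forces a numeric sort anyway. A sketch reusing the example above (the DECIMAL precision is an arbitrary choice):
-- The CAST makes the comparison numeric even when the UNION yields a
-- string column, so 120 sorts above 19:
SELECT a
FROM (SELECT 19 AS a UNION SELECT '120') c
ORDER BY CAST(a AS DECIMAL(10,1)) DESC;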
Just for anyone who's interested: I must have deleted or changed my predecessor's code, so this AVG question was incorrect. The correct code was ROUND(AVG(myaverage), 1). Apologies to those who scratched their heads over my stupidity.
On 1.:
AVG() accepts exactly one argument; otherwise MySQL will raise an error:
mysql> SELECT AVG( id, 1 ) FROM anytable;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' 1 )' at line 1
http://dev.mysql.com/doc/refman/5.1/en/group-by-functions.html#function_avg
Just because I'm curious - what should the second argument do?