I log into the (RDS) MySQL server with the console client in two terminals, use the same database (not that that should matter), and run the query SHOW STATUS LIKE 'created%' in each. Each shows a consistent number: no matter how many times I run the query, the answer doesn't change.
But the two sessions disagree with each other. Moreover, any time I use a different database, the query gives a different response, even though that variable is supposed to be server-wide.
The MySQL page gives this explanation for the variable:
The number of internal temporary tables created by the server while executing statements.
You can compare the number of internal on-disk temporary tables created to the total number of internal temporary tables created by comparing the values of the Created_tmp_disk_tables and Created_tmp_tables variables.
Can anybody explain why this would be happening? I can't understand how that variable could decrease at all, but the two sessions giving different numbers has me extra-stumped.
According to the manual at http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html, you can request this status information both per session and globally.
Can you give us the output of
SHOW GLOBAL STATUS LIKE ...
and
SHOW SESSION STATUS LIKE ...
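For example, with the pattern from the question:

SHOW GLOBAL STATUS LIKE 'created%';   -- server-wide counters
SHOW SESSION STATUS LIKE 'created%';  -- counters for the current connection only

If your client is showing session-scoped values, each connection keeping its own counters would explain both why the two terminals disagree and why the numbers never change.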
When I execute \s in the MySQL console, I get a bunch of information, including information I don't care about. I just want to get slow queries and Queries per second avg.
I found a solution for slow queries because Slow_queries is present in MySQL's status table. However, the Queries per second average isn't available there.
Is there any way to get only the Queries per second average? I could use grep to scrape the information I need in the SSH console, but I don't want to expose the MySQL password in logs. This process is automated, not manual, so it has to be non-interactive.
I tried to find the information in performance_schema, but it looks like I am missing something. Is this value calculated during execution of the \s command?
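As far as I know, the client computes that figure when \s runs, from two global status counters: Questions divided by Uptime. Assuming MySQL 5.7+, where status counters are exposed through performance_schema, a non-interactive query along these lines should reproduce it:

SELECT
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Questions')
    /
    (SELECT VARIABLE_VALUE FROM performance_schema.global_status
      WHERE VARIABLE_NAME = 'Uptime') AS queries_per_second_avg;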
I'm doing a SELECT from 3 joined tables on a MySQL 5.6 server running on an Azure instance with the InnoDB buffer pool set to 2GB. I used to have a 14GB RAM, 2-core server, and I just doubled the RAM and cores hoping this would help my SELECT, but it didn't.
The 3 tables I'm selecting from are 90MB, 15MB, and 3MB.
I don't believe I'm doing anything crazy in my query, where I select a few booleans, but I'm seeing this SELECT hang the server pretty badly and I can't get my data. I do see traffic increasing to around 500MB/s in MySQL Workbench, but I can't figure out what to do with this.
Is there anything I can do to get my SQL queries working? I don't mind waiting 5 minutes for the data, but I need to figure out how to get it.
==================== UPDATE ===============================
I was able to get it done by cloning the 90MB table and filling it with a filtered copy of the original. That ended up being ~15MB; then I just did a SELECT over all 3 tables, joining them via IDs. Now the request completes in 1/10 of a second.
What did I do wrong in the first place? I feel like there is some packet size I could increase to get such queries to work? Any suggestions on what I should google?
Just FYI, my SELECT query looked like this:
SELECT
    text_field1,
    text_field2,
    text_field3, ...,
    text_field12
FROM
    db.major_links, db.businesses, db.emails
WHERE bool1 = 1
    AND bool2 = 1
    AND text_field IS NOT NULL OR text_field != ''
    AND db.businesses.major_id = major_links.id
    AND db.businesses.id = emails.biz_id;
So bool1, bool2, and the text_field I'm filtering on are fields from that 90MB table.
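For what it's worth, here is the same query in explicit JOIN syntax, with parentheses added around the text_field test; the aliases are mine, and I'm assuming the intent was "not null and not empty". In the original, the unparenthesized OR binds lower than the ANDs, so rows matching the left side of the OR bypass the join conditions entirely, which can produce a partial Cartesian product and may be why the query hung:

SELECT
    b.text_field1,
    b.text_field2,
    -- ...
    b.text_field12
FROM db.businesses AS b
JOIN db.major_links AS ml ON b.major_id = ml.id
JOIN db.emails AS e ON e.biz_id = b.id
WHERE b.bool1 = 1
  AND b.bool2 = 1
  AND (b.text_field IS NOT NULL AND b.text_field != '');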
I know this might be a bit late, but I have some suggestions.
First, take a look at max_allowed_packet in your my.ini file. On Windows this is usually found here:
C:\ProgramData\MySQL\MySQL Server 5.6
This controls the maximum packet size, and usually causes errors in large queries if it isn't set correctly. I have mine set to 100M.
Here is some documentation for you: the official documentation on max_allowed_packet.
In addition, I've seen slow queries when there are a lot of conditions in the WHERE clause, and here you have several. Make sure you have indexes, including compound indexes, on the columns in your WHERE clause, especially the ones involved in the joins.
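A sketch of both suggestions; the index names are mine, and I'm assuming the filter columns live on db.businesses, since the question doesn't say which table is the 90MB one:

-- In my.ini under [mysqld], or at runtime:
SET GLOBAL max_allowed_packet = 104857600;  -- 100M in bytes

-- Compound index covering the filters plus the join column:
ALTER TABLE db.businesses ADD INDEX idx_filter (bool1, bool2, major_id);
-- Index supporting the join to emails:
ALTER TABLE db.emails ADD INDEX idx_biz (biz_id);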
I have a big table that stores data with an ID based on input from an external API. The ID is stored in an INT field. When I developed the system, I encountered no problems, because the IDs of records in the external API were always below 2147483647.
The system has been fetching data from the API for the last few months, and apparently the ID crossed the 2147483647 mark. I now have a database with thousands of unusable records with ID 2147483647.
It is not possible to fetch this information from the database again (basically, the API allows us to look up data from max x days ago).
I am pretty sure that I am doomed. But might there be any backlog, or any other way, to retrieve the original input queries, or numbers that were truncated by MySQL to fit in the int field?
As already discussed in the comments, there is no way to retrieve the information from the table. It was silently(?!!!) truncated to 32 bits.
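For reference, this is roughly what the silent truncation looks like in a non-strict MySQL session (compare the PostgreSQL example further down):

mysql> CREATE TABLE foo ( id INTEGER );
Query OK, 0 rows affected

mysql> INSERT INTO foo (id) VALUES (2147483648);
Query OK, 1 row affected, 1 warning

mysql> SHOW WARNINGS;
Warning | 1264 | Out of range value for column 'id' at row 1

mysql> SELECT id FROM foo;   -- the value was clamped to the INT maximum
2147483647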
First, call the API provider, explain your situation, and see if you can redo the queries. Best that happens is they say yes and you don't have to try to reconstruct things from logs. Worst that happens is they say no and you're back where you are now.
Then there are some logs I would check.
First is the MySQL General Query Log. If you had it turned on, it may contain the queries that were run. Another possibility is the Slow Query Log, which is more often enabled, if your queries happened to be slow.
In MySQL, data truncation is a warning by default. It's possible those warnings went into a log and included the original data. The MySQL error log is one possibility. On Windows it may have gone into the Windows Event Log. On a Mac, it might be in a log visible in the Console. On Unix, it might have gone to syslog.
Then it's possible the API queries themselves are logged somewhere. If you used a proxy it might contain them in its log. The program fetching from the API and adding to the database may also have its own logs. It's a long shot.
As a last resort, try grepping all of /var/log and /var/local/log and anywhere else you might think could contain a log.
In the future there are some things you can do to prevent this sort of thing from happening again. The most important is to turn on strict SQL mode. This will turn warnings, like that data has been truncated, into errors.
Set UNIQUE constraints on unique columns. Had your API ID column been declared UNIQUE the error would have been detected.
Use UNSIGNED BIGINT for numeric IDs. 2 billion is a number easily exceeded these days. It will mean 4 extra bytes per row or about 8 gigabytes extra to store 2 billion rows. Disk is cheap.
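A sketch of those three fixes together; the table and column names are hypothetical:

-- Make truncation an error instead of a silent warning (set it in my.cnf too):
SET GLOBAL sql_mode = CONCAT(@@GLOBAL.sql_mode, ',STRICT_ALL_TABLES');

-- Widen the ID and enforce uniqueness:
ALTER TABLE api_records
    MODIFY api_id BIGINT UNSIGNED NOT NULL,
    ADD UNIQUE KEY uq_api_id (api_id);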
Consider turning on ANSI SQL mode. This will disable a lot of MySQL extensions and make your SQL more portable.
Finally, consider switching to PostgreSQL. Over the years MySQL has accumulated a lot of bad ideas, mish-mashes of functions, and bad default behaviors. You just got bit by one. PostgreSQL is far better designed, more powerful and flexible, and usually as fast or faster.
In Postgres, you would have gotten an error.
test=# CREATE TABLE foo ( id INTEGER );
CREATE TABLE
test=# INSERT INTO foo (id) VALUES (2147483648);
ERROR: integer out of range
If you have binary logging enabled, you still have backups of the binlogs, and your binlog_format is not set to ROW, then your original INSERT and/or UPDATE statements should be preserved there, and you could extract them and replay them into another server with a more appropriate table definition.
If you don't have the binlog enabled and/or you aren't archiving the binlogs in perpetuity... this is one of the reasons why you should consider doing it.
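For example, to see which binlog files are still on the server and whether the statements are there (the file name below is illustrative), before extracting them with the mysqlbinlog utility:

SHOW BINARY LOGS;                         -- list the binlog files the server still has
SHOW BINLOG EVENTS IN 'mysql-bin.000123'  -- illustrative file name
    LIMIT 50;                             -- inspect the first events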
I have a number of MySQL servers running version 5.1.63, and while running some queries against the slave earlier this week, I noticed data on the slave that should have been removed by an UPDATE statement on the master.
My initial thoughts were:
someone on the team was updating the slave, which I have since disproved
that the column being updated had changed
So, I investigated by running SHOW TABLE STATUS against a test database on each of the servers to see what the data length was. In a lot of cases it showed the data length differing between servers, but eyeballing the data I could see it was the same, so I couldn't use this method to detect differences; it appears to be prone to error.
Next I ran a simple row count (across all DBs) for each table to confirm the row count was the same; it was.
I then started looking in the replication binlogs. The UPDATE statements that should have run were clearly visible in the logs, but the update never ran.
What I need to know is:
is replication broken? I'm assuming it is
if I create new slave servers, will I encounter the same issue?
how do I find out the extent of the issue on my servers?
Any help is appreciated.
If you are using statement-based replication then it is easily possible to end up with different results on master and slave due to badly constructed INSERT statements.
INSERT ... SELECT without an ORDER BY, or where the ORDER BY can leave non-deterministic results, will cause the slaves to diverge from the master.
From the MySQL site http://dev.mysql.com/doc/refman/5.1/en/insert-select.html
The order in which rows are returned by a SELECT statement with no ORDER BY clause is not determined. This means that, when using replication, there is no guarantee that such a SELECT returns rows in the same order on the master and the slave; this can lead to inconsistencies between them. To prevent this from occurring, you should always write INSERT ... SELECT statements that are to be replicated as INSERT ... SELECT ... ORDER BY column. The choice of column does not matter as long as the same order for returning the rows is enforced on both the master and the slave. See also Section 16.4.1.15, “Replication and LIMIT”.
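A minimal illustration of the recommended pattern (the table names are hypothetical):

-- Non-deterministic under statement-based replication:
INSERT INTO archive_orders SELECT * FROM orders LIMIT 1000;

-- Deterministic: the same rows in the same order on master and slave:
INSERT INTO archive_orders SELECT * FROM orders ORDER BY id LIMIT 1000;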
If this has happened, then your replicas have diverged, and the only safe way to bring them back in line is to rebuild them from a recent backup of the master DB. The worst part is that the error may never cause replication to fail, yet the results are inconsistent. Normally replication fails when an UPDATE or DELETE statement affects a different number of rows than on the master; this is confusing, since it was not the UPDATE that actually caused the error, and the only way I know to fix the issue is to inspect every INSERT query in the code base!
Status details come from information_schema, which collates statistics for the whole MySQL instance, and they are not stable from one execution to the next. The data and index lengths should be treated as rough estimates of size in bytes, never exact values; they can be used for estimation, but not for cross-checking. For replication, check whether the slave IO and SQL threads are running against the master, and in the relay log info you can see the corresponding log positions from the master and the slave.
Of course, (1) one way of doing it is a count(*) of the tables at end of day, which shows whether the data in the tables on master and slave is consistent. To be more accurate, (2) take random rows and cross-check field values between master and slave. And if you aren't satisfied with that, (3) dump the tables into an outfile and take a diff or checksum. I prefer (1) and (2); if (1) is not possible, (2) still convinces me. ;)
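For option (3), MySQL has a built-in statement you can run on both the master and the slave and then compare; the table name here is illustrative:

CHECKSUM TABLE db.mytable EXTENDED;  -- EXTENDED reads every row: slow but exact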
There is also a tool to verify replication, named pt-table-checksum.
While working with MySQL and some really performance-greedy queries, I noticed that such a query can take 2 or 3 minutes to compute. But if I retry the query immediately after it finishes the first time, it takes only a few seconds. Does MySQL store something like "the last x queries"?
The short answer is yes: there is a Query Cache.
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
from the MySQL Query Cache documentation
The execution plan for the query will be calculated and re-used. The data can be cached, so subsequent executions will be faster.
Yes, depending on how the MySQL Server is configured, it may be using the query cache. This stores the results of identical queries until a certain limit (which you can set if you control the server) has been reached. Read http://dev.mysql.com/doc/refman/5.1/en/query-cache.html to find out more about how to tune your query cache to speed up your application if it issues many identical queries.
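To check whether the cache is enabled and whether your repeated query is actually hitting it, e.g.:

SHOW VARIABLES LIKE 'query_cache%';  -- query_cache_type, query_cache_size, ...
SHOW STATUS LIKE 'Qcache%';          -- Qcache_hits rises when results come from the cache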