I have an intranet web application which uses AclExtras.
When I execute the following in the shell:
./Console/cake AclExtras.AclExtras aco_sync
it takes about 4-5 minutes (!!!).
Is there maybe something not correctly set up?
Well, it could take that much time simply because it needs to regenerate all the permissions, or:
If you're running it on an old or underpowered server, it could just run slowly.
If the DB server is under high load, that will slow it down too.
If you custom-coded something, it could run slower if it is not optimized.
... or whatever else.
There is no way to say why it takes 4-5 minutes if you do not provide any code or more detailed setup information. On the other hand, it may simply be resource-intensive enough that the shell needs those minutes to complete its task. So a question like "it takes about 4-5 minutes (!!!). Is there maybe something not correctly set up?" is not really answerable as asked. Please check this out on how to ask questions.
Scenario - you have hundreds of reports running on a slave machine. These reports are either scheduled by MySQL's event scheduler or are called via a Python/R or shell script. Apart from that, there are fifty-odd users connecting to the MySQL slave and running random queries. These people don't really know how to write good queries, and that's fair; they are not supposed to. So, every now and then (read: every day), you see some queries which are stuck because of read/write locks. How do you fix that?
What you do is that you don't kill whatever is being written. Instead, you kill the read queries. That is also tricky, though, because if you blindly kill all the read queries you will also kill SELECT ... INTO OUTFILE queries, which are effectively write queries (they just don't write to MySQL; they write to disk).
Why killing is necessary (I'm only speaking for MySQL, do not take this out of context)
I have got two words for you: slave lag. We don't want that to happen, because if it does, all users, reports, and consumers suffer.
I have written the following to kill processes in MySQL based on three questions:
how long has the query been running?
who is running the query?
do you want to kill write/modify queries too?
What I have intentionally not done yet is maintain a history of the processes that have been killed. One should do that in order to analyse and find out who is running all the bad queries, but there are other ways to find that out.
I have created a procedure for this. I haven't spent much time on it, so please suggest whether this is a good way to do it or not.
GitHub Gist
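The gist itself is not reproduced here, but as a rough sketch of the approach described above (the procedure name, parameter names, and the SELECT-only filter are all just illustrative), a procedure along these lines walks information_schema.PROCESSLIST and kills only the matching queries:

DELIMITER //

CREATE PROCEDURE kill_long_read_queries(
    IN p_max_seconds INT,          -- how long has the query been running?
    IN p_user        VARCHAR(64),  -- who is running the query?
    IN p_kill_writes TINYINT       -- kill write/modify queries too?
)
BEGIN
    DECLARE v_id   BIGINT;
    DECLARE v_info LONGTEXT;
    DECLARE v_done INT DEFAULT 0;
    DECLARE cur CURSOR FOR
        SELECT ID, INFO
        FROM   information_schema.PROCESSLIST
        WHERE  COMMAND = 'Query'
          AND  TIME    > p_max_seconds
          AND  USER    = p_user
          AND  ID     <> CONNECTION_ID();
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = 1;

    OPEN cur;
    kill_loop: LOOP
        FETCH cur INTO v_id, v_info;
        IF v_done THEN LEAVE kill_loop; END IF;
        -- Treat SELECT ... INTO OUTFILE as a write: only plain SELECTs are
        -- killed unless p_kill_writes is set.
        IF p_kill_writes = 1
           OR (v_info LIKE 'SELECT%' AND v_info NOT LIKE '%INTO OUTFILE%') THEN
            SET @kill_stmt = CONCAT('KILL ', v_id);
            PREPARE stmt FROM @kill_stmt;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END IF;
    END LOOP;
    CLOSE cur;
END//

DELIMITER ;

It would be called as, for example, CALL kill_long_read_queries(60, 'reporting_user', 0);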
Switch to MariaDB. Versions 10.0 and 10.1 implement several limits and timeouts: https://mariadb.com/kb/en/library/query-limits-and-timeouts/
Then write an API between what the users write and actually hitting the database. In this layer, add the appropriate limitations.
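For example, MariaDB's max_statement_time (available from 10.1) can cap runaway queries either globally or per statement; the 30-second limit and the table name below are only illustrative:

-- Abort statements that run longer than 30 seconds (value is in seconds).
SET GLOBAL max_statement_time = 30;

-- Or scope the limit to a single query with SET STATEMENT:
SET STATEMENT max_statement_time = 30 FOR
    SELECT * FROM report_data WHERE created_at >= CURDATE();  -- hypothetical table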
Not sure how to state this question.
I have a very busy DB in production with close to 1 million hits daily.
Now I would like to do some research on the real-time data (edit: "real-time" can be a few minutes old).
What is the best way to do this without interrupting production?
Ideas:
In the Unix shell, there is the nice concept. It lets me give a low priority to a specific process so it only uses the CPU when the other processes are idle. I am basically looking for the same thing in a MySQL context.
Get a DB dump and do the research offline:
Doesn't that take down my site for the several minutes it takes to get the dump?
Is there a way to configure the dump command so it does the extraction in a nice way (see above)?
Do the SQL commands directly on the live DB:
Is there a way, again, to configure the commands so they are executed in a nice way?
Update: What are the arguments against Idea 2?
From the comments on StackOverflow and in-person discussions, here's an answer for whoever gets here with the same question:
In MySQL, there does not seem to be any nice-type control over the prioritization of processes (I hear there is in Oracle, for example).
Since any "number-crunching" is at most treated like one more visitor to my website, it won't take down the site performance-wise. So it can safely be run in production (read-only, of course...).
Background
I have 5 servers all running essentially the same site, but I have had difficulties with database speed. Part of the process has led me to make changes to one of my my.cnf files to improve performance.
Problem
I am having difficulty finding out whether the settings are making any difference at all. I have restarted the MySQL service and even rebooted the entire server; the variables show up as changed, but I don't see any kind of noticeable difference when accessing my site. I would like a way to quantify how fast my database is without relying on the front end of the app, so I can show my boss real figures for database speed instead of looking at the Google console for load speeds.
Research
I thought that there might be some tool in phpMyAdmin to help track speed, but after going through the different tabs I couldn't find anything. All of the other online resources I have looked at seem to just talk about "expected results" instead of how to test directly.
Question
Is there a way to get speed information directly from the database (or phpMyAdmin) instead of using the front end of the web app?
The optimal realistic benchmark goes something like this:
Capture a copy of the live dataset.
Turn on the general log.
Run for some period of time (say, an hour).
Turn off the general log.
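Turning the general log on and off is just a couple of dynamic server variables; for example (the log path is arbitrary):

SET GLOBAL log_output       = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/replay-capture.log';  -- arbitrary path
SET GLOBAL general_log      = 'ON';
-- ... let production traffic run for the chosen period ...
SET GLOBAL general_log      = 'OFF';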
That gives you 2 things: a starting point, and a list of realistic instructions. Now to replay:
Load the data on a test machine.
Change some my.cnf setting.
Apply the captured general log, timing how long it takes.
Replay with another setting change; see if the timing is more than trivially faster or slower.
Even better would be to arrange for the replay to be multi-threaded like the real product.
Caveat... Some settings won't make any difference until the size of something (data, index, number of connections, etc) exceeds some threshold. Only at that point will the setting show a difference. This benchmark method fails to predict such.
If you would like an independent review of your my.cnf, please provide these:
How much RAM do you have?
SHOW VARIABLES;
SHOW GLOBAL STATUS; -- after mysqld has been running at least a day.
I will compute a couple hundred formulas and judge which ones are red flags.
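To give a flavour of what those formulas look like, here is one common red flag, the InnoDB buffer pool miss rate, computed from two of the status counters (on 5.6-era servers they live in information_schema.GLOBAL_STATUS; on 5.7+ the equivalent table is performance_schema.global_status):

SELECT (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
       /
       (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests')
       AS buffer_pool_miss_rate;  -- a persistently high ratio suggests the buffer pool may be too small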
I have a 600ish-line Perl script, not written by me but used with permission, that parses a set of XML files and inserts them into a few MySQL tables. I first ran this script on my machine back in May or so, and everything seemed fine; it was fast enough for my purposes (multiple insert queries per second) and worked great. I recently acquired more data that needed the same parsing, so I was going to run it again. This time, it was glacially slow: 10-12 seconds per query.
No hardware changed in the interim; the only significant software change was "upgrading" to Windows 8.1 from the original 8.0. Could that be the cause of the problem? Does anyone know how I might troubleshoot this? At this pace, it's literally going to take three months to complete.
I'm happy to provide some/all of the script upon request, as well as any other details you might want.
Thanks in advance!
To diagnose the performance problem in your script, consider installing and using tools such as the following:
Devel::NYTProf. This excellent profiler toolkit will show you exactly where you're spending cycles executing Perl code. By design, it leaves out time spent in I/O, including database calls.
DBI::Profile. This offers multiple levels of debug profiling, and can show you how long each database action takes. It's an excellent companion to Devel::NYTProf and easy to use.
If neither Devel::NYTProf nor DBI::Profile show a hotspot in your code (either in a perl computation or a database query / transaction), then you should look through your code for system calls or network accesses to see if those are the culprits.
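If you also want timings from the server side rather than from inside Perl, a crude complement (separate from the modules above) is to temporarily log the execution time of every statement in MySQL's slow query log while the script runs; the file path below is just an example:

SET GLOBAL slow_query_log_file = '/tmp/perl-import-slow.log';  -- example path
SET GLOBAL long_query_time     = 0;     -- 0 = log the timing of every statement
SET GLOBAL slow_query_log      = 'ON';
-- ... run the import script ...
SET GLOBAL slow_query_log      = 'OFF';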
This is the most puzzling MySQL problem that I've encountered in my career as an administrator. Can anyone with MySQL mastery help me a bit with this?
Right now, I run an application that queries my MySQL/InnoDB tables many times a second. These queries are simple and optimized -- either single row inserts or selects with an index.
Usually, the queries are super fast, running under 10 ms. However, once every hour or so, all the queries slow down. For example, at 5:04:39 today, a bunch of simple queries all took more than 1-3 seconds to run, as shown in my slow query log.
Why is this the case, and what do you think the solution is?
I have some ideas of my own: maybe the hard drive is busy during that time? I do run a cloud server (Rackspace). But I have flush_log_at_trx_commit set to 0 and tons of buffer memory (10x the table size on disk), so the inserts and selects should be served from memory, right?
Has anyone else experienced something like this before? I've searched all over this forum and others, and it really seems unlike any other MySQL problem I've seen before.
There are many reasons for sudden stalls. For example, even if you are using flush_log_at_trx_commit=0, InnoDB will need to pause briefly as it extends the size of its data files.
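If you want to check for that class of stall from the server side, a few standard counters and variables are worth a look (this is a hint, not a guaranteed diagnosis):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';   -- waits for free pages while dirty pages are flushed
SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';               -- waits because the redo log buffer was full
-- If the data lives in the shared/system tablespace, this controls how much
-- it grows (in MB) on each auto-extension:
SHOW GLOBAL VARIABLES LIKE 'innodb_autoextend_increment';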
My experience with the smaller instance types on Rackspace is that IO is completely awful. I've seen random writes (which should take 10ms) take 500ms.
There is nothing built into MySQL that will make it easier to identify the problem. What you might want to do is take a look at Percona Server's slow query log enhancements. There is a specific feature called "profiling_server" which can break down time:
http://www.percona.com/docs/wiki/percona-server:features:slow_extended#changes_to_the_log_format