MySQL: only INSERT queries slow (take exactly 60 sec), and only for some tables - mysql

I'm new to MySQL, and something really weird has happened that I can't figure out.
Recently, INSERT queries to some of the tables have become extremely slow. Weirdly enough, the query time is always right around 60 seconds.
The tables only have 10k to 35k entries each, so I don't think they're that big (though they are the biggest ones in the database).
The slowness affects only INSERT queries; DELETE, UPDATE, and SELECT all execute in 0.000x sec.
Can someone help me figure out why this is happening?
UPDATE: I turned on the general log and noticed that all my INSERT queries are followed by 'DO sleep(60)'. It seems my server got hacked?
Where can I find the malicious script that injects the sleep() command after every INSERT?

If you use code to build the queries, copy the code base off the server to your machine (ideally into a VM, just in case) and search it for the changes. Alternatively, you could restore the code base from source control (you use source control, right?!).
If it's stored procedures you use, you'll need to change them back to a working version without the sleep. Check previous backups to try to find out when this happened, which might help a wider investigation into how the attackers got in and what they did.
You'll also need to think about the wider implications of this. Do you store user data? If so, you'll need to inform your users that your database has been compromised, so they should assume their accounts are compromised too and change their passwords.
Finally, wipe the server. A hacked server is no longer under your control (or that's how you should treat it). Wipe it, reinstall everything, and put changes in place to help prevent the same hack from happening again.
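Since an injected DO sleep(60) can also live inside the database itself, it's worth scanning stored routines and triggers directly before (or while) auditing the code base. A minimal sketch, assuming you have read access to information_schema; schema names will vary:

```sql
-- Look for SLEEP() calls hidden in stored routines
SELECT ROUTINE_SCHEMA, ROUTINE_NAME
FROM information_schema.ROUTINES
WHERE ROUTINE_DEFINITION LIKE '%sleep%';

-- Triggers fire on INSERT, so they are a likely place to hide this
SELECT TRIGGER_SCHEMA, TRIGGER_NAME, EVENT_MANIPULATION, ACTION_STATEMENT
FROM information_schema.TRIGGERS
WHERE ACTION_STATEMENT LIKE '%sleep%';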

Related

How can I run a test on a production app without leaving trace?

I am making some changes to a web app developed in PHP+MySQL, and after testing everything in development I have uploaded it to production. However, I would like to make sure that everything runs; I may have forgotten some changes from development to production (database connection, etc.).
If I run some changes that insert new rows into the MySQL database, is removing the new rows and resetting the AUTO_INCREMENT value (so that the auto id doesn't reveal that a test has been done and removed) enough to leave no trace of the test?
Perhaps the question should be more general: how can I test that everything runs perfectly once in production without leaving those "footprints"?
Thank you!
The first answer is of course to use a tool to compare the environments so nothing is forgotten. But if that is not possible or not doable for some reason, this is what I would do:
You have changes in the form of queries that update the database in some way, I guess.
And you want to make sure that everything works as expected before letting everyone else in.
Then take a backup of production and restore it into a new database; now you have the same data as production. Run your update queries (the ones you want to run in production), then run your code against this new database. This will be exactly what production will look like after the changes. If it works, you can safely do the same thing in production, or let the users in if the changes have already been made.
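The backup-and-restore step can be sketched with mysqldump; the database names here are hypothetical, and this assumes shell access to a server with enough room for a full copy:

```shell
# Dump production and load it into a fresh staging database
mysqldump -u root -p production_db > prod_snapshot.sql
mysql -u root -p -e "CREATE DATABASE staging_copy"
mysql -u root -p staging_copy < prod_snapshot.sql
```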
If this is not what you want and for some reason you need to run in the actual production server, there is nothing we can help you with. Your server might do several things we don't know about:
Inserting new rows (these can be removed, and the counter fixed by resetting AUTO_INCREMENT to the correct value, if you know ALL the rows that have been inserted)
Triggers (fired when you added/modified/removed data) might have changed other data in the database.
Deleting records can't be undone.
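The insert-and-rewind cleanup from the first point above can be sketched like this; the table name and id range are hypothetical, and it only works if you know exactly which rows the test created:

```sql
-- Remove the rows the test inserted
DELETE FROM orders WHERE id BETWEEN 1001 AND 1005;
-- Rewind the counter so the next real insert reuses id 1001
ALTER TABLE orders AUTO_INCREMENT = 1001;
```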
There is also the risk of code that crashes on this change (or an unknown bug) and causes a method to exit before it has finished, so some rows could have changed but others not.
There is no way to know all this without knowing your system fully.
Another way is of course to make a backup now, test your things, and then delete production and restore it from the backup. But this is not something that I would ever try myself or recommend others to do...

Restore all Sql server transaction log but one query?

Is there any way to restore an entire transaction log except for one query?
For example, a DELETE or UPDATE query that you accidentally executed?
Because I know it is possible to restore to a certain point in time, but what if the evil query affected only one table and you don't want to lose the changes in the other tables?
You can certainly use ApexSQL Log. You can recover deleted or updated rows, and you can also choose which data you are going to recover.
The tool works great and is very flexible, with a lot of filters to help you narrow the search for the data you want to recover.
Here you can find some info: How to recover SQL Server data from accidental UPDATE and DELETE operations.
There is no way to do that using just the tools that come with SQL Server, though as you said you can come close using point in time recovery.
I believe ApexSQL Log may allow you to do this, but I have never tested it in that fashion. SQL Log Rescue from Red Gate also seems to allow this, and while I have used many products from Red Gate and have generally been very happy with them, I have not tried that particular product either.
This is what I would recommend you try; it is a fairly common scenario actually.
1. Restore the database to a "second" copy on the server (use a full backup, plus log backups to bring you back to the point of loss).
2. Isolate the single table or data in the copy database and move it over to the live database.
There are a variety of ways to accomplish step 2, such as:
direct queries (INSERT / UPDATE)
a MERGE statement
the SSMS export/import wizard
SSIS, etc.
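Step 1, restoring the copy to a point just before the bad query, can be sketched in T-SQL. The file paths, database names, and STOPAT time below are all hypothetical placeholders:

```sql
-- Restore the full backup into a side-by-side copy, left non-operational
-- so that log backups can still be applied
RESTORE DATABASE MyDb_Copy
FROM DISK = N'D:\backups\MyDb_full.bak'
WITH MOVE 'MyDb' TO N'D:\data\MyDb_Copy.mdf',
     MOVE 'MyDb_log' TO N'D:\data\MyDb_Copy.ldf',
     NORECOVERY;

-- Replay the transaction log up to just before the accidental query
RESTORE LOG MyDb_Copy
FROM DISK = N'D:\backups\MyDb_log.trn'
WITH STOPAT = '2012-01-07T13:59:00',
     RECOVERY;
```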

alter table mysql offline or not?

I need to add a column to my current table.
This table is used a lot during the day and night. I found out I need to alter it using the ALTER command found here:
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
I tested it on a development server.
It took about 2 hours to complete. Now I want to execute this on the production server.
Will this stop my website?
Why not display a message on the site saying you will perform maintenance from midnight UTC on January 7, 2012?
This way you won't break any data and you will not get any MySQL errors. You execute your ALTER and start the site again once it's completed (don't forget to check your code to make sure it uses the right field, etc.). Easy solution.
Stack Overflow does it, so why not your site?
Yes, during an ALTER TABLE writes to the table are blocked for the whole operation, and once a write is waiting, further reads queue up behind it as well. If your website needs to use that table, requests will hang.
Try pt-online-schema-change. It allows reads and writes to continue, while it captures changes to be replayed against the altered table once the restructure is done.
Test carefully on your development server so you know how it works and what to expect.
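A sketch of the pt-online-schema-change invocation; the database, table, and column names are hypothetical, and it is worth running with --dry-run on your development server first:

```shell
# Add a column without blocking reads/writes: pt-osc copies rows in chunks
# and replays concurrent changes via triggers before swapping the tables.
pt-online-schema-change \
  --alter "ADD COLUMN notes TEXT" \
  D=mydb,t=mytable \
  --dry-run      # switch to --execute once you are happy with the plan
```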
It won't stop your website, but it will likely make it throw errors.
Of course there is no way to answer this without looking at all the code of your application.
The bottom line is, when in doubt schedule a maintenance window.
Make the production server point to the dev DB (or a mirror of the production DB) for some time.
Alter the table in production.
Deploy the code which talks to the production DB (with the new attributes).
P.S.: I feel this is a safer and more foolproof way (based on my experience).

How to check which cache features are turned on? [PHP/MYSQL]

Is there an application or some code I can use to check which cache functions are turned on?
In the app I'm working on, I thought MySQL was doing the caching, but since I started using SELECT SQL_NO_CACHE in one of my queries (applied yesterday), the caching has not stopped. This leads me to assume it's a PHP cache feature that's kicking in.
I looked over php.ini for any possible cache features, but didn't see anything that stood out.
Which leads me to this: is there an app I can download, or a shell command I can run, to tell me which cache functions are on or what may be causing the caching?
You probably already know that MySQL has a query caching mechanism. For instance if you have a table named users, and run a query like this:
SELECT COUNT(*) FROM `users`
It may take 3 seconds to run. However, if you run the query again, it may only take 0.02 seconds, because MySQL has cached the results of the first query. MySQL will clear its cache whenever you update the users table in any way, such as inserting new rows or updating a row. So it's doubtful that MySQL is the problem here.
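To confirm whether MySQL's query cache is even enabled on your server, you can inspect the relevant variables and counters directly; a minimal sketch, with values that will of course differ per installation:

```sql
-- Is the query cache configured, and how large is it?
SHOW VARIABLES LIKE 'query_cache%';

-- Runtime counters such as Qcache_hits and Qcache_inserts;
-- if these never move, the query cache is not serving anything
SHOW STATUS LIKE 'Qcache%';
```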
My hunch is your browser is caching the data. It's also possible that logic in your code is grabbing the old row, updating it, and then displaying the old row data in the form. I really can't say without seeing your code.
You probably need to close and restart your browser. I'd bet it's your browser caching, not the back end.

How to make my MySQL databases available at all times? Some expert DB advice needed!

I've been doing a lot of research, reading about replication, etc., but I'm just not sure which MySQL solution would work.
This is what I'm looking at:
When my MySQL server fails for some reason, or when certain queries take really long to execute and lock some tables, I want the other INSERT/UPDATE/SELECT queries to still run at normal speed without having to wait for locks to be released or for the main database to come back up. I'm thinking there should be a second MySQL server for this, but is what I mentioned possible even then, and would it involve a lot of change to my existing programming logic?
When my database is being backed up, I would still like my site to function normally; all inserts/selects/updates should work as usual.
When I need to alter a large table, I wouldn't like it to affect my application; there should be a backup server to work from.
So what do I need to do to get all this done, and would it require changing a lot of existing code to suit the new setup? [My site has a lot of reads and writes]
There's no easy way. You're asking for a highly-available MySQL-based setup, and that requires a lot of work at the server and client ends.
Some issues, for example:
when I need to alter a large table, I wouldn't like it to affect my application, there should be a backup server to work from.
If you're altering the table, you can't trivially create a copy to work from during the update. What about the changes that are made to your copy while the first update is taking place?
Have a search for "High Availability MySQL". It's mostly a solved problem, but the solution depends heavily on your exact requirements. You cannot just ask for "I want my SQL server to run at full speed always forever no matter what I throw at it".
Not a MySQL-specific answer, but a general one: have a read-only copy of your DB for the site to render from, synced to the master DB regularly. This way you can keep your site working even if the master DB is under load or locked due to inserts/deletes. For efficiency, keep this copy as denormalized as you can.
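The read-only copy described above is essentially what MySQL's built-in replication provides. A minimal sketch of pointing a replica at the master; the host, user, password, and binary log coordinates are all hypothetical and come from your own master's SHOW MASTER STATUS:

```sql
-- On the replica: tell it where the master is and where to start reading
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS = 4;

START SLAVE;

-- Check replication health: Slave_IO_Running and Slave_SQL_Running
-- should both say Yes
SHOW SLAVE STATUS\G
```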