SQL database CPU 100% utilization - sql-server-2008

SQL Server CPU utilization is high. What should I check, and how do I fix it?
Please let me know what to do to resolve the issue.
All maintenance jobs are running fine.

It could be that some other program entirely is running on the server and chewing up the CPU, which would be unusual but is worth ruling out, so I'd suggest opening Task Manager or Resource Monitor first, just to check whether SQL Server itself is using all that CPU.
After you've done that, if you see that it really is SQL Server using all the CPU, a quick way of getting at least some information, which might let you see whether the CPU is being consumed by user queries or system processes, is to execute the procedure sp_who2. This returns a table of values indicating the session ID, login name, host name (the machine the client is connecting from), and so on, as well as a CPUTime column. Run that and have a look at any rows with high CPUTime values.
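For instance, a minimal sketch ('active' is an optional argument that filters out idle sessions):

    -- One row per session: SPID, Status, Login, HostName, DBName,
    -- CPUTime, DiskIO, and so on. Eyeball the CPUTime column.
    EXEC sp_who2;

    -- Or restrict the output to sessions currently doing work:
    EXEC sp_who2 'active';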
Note, however, that there are system processes that run for a very long time and will seem to have very high CPU time, because they're "always running" so to speak.
If you want to get more detailed information, you could create Adam Machanic's excellent sp_whoisactive stored procedure on your instance, and run that. You can download the DDL, and read the documentation about it, here.
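Once it's installed, the default call is enough to get started (@get_plans is one of its documented optional parameters; check the documentation for the version you download):

    -- One row per session that is actively running a query, including
    -- the SQL text and CPU, reads, and writes for that activity.
    EXEC sp_WhoIsActive;

    -- Optionally include the estimated query plans as well:
    EXEC sp_WhoIsActive @get_plans = 1;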

Related

MySQL not behaving asynchronously with an Amazon RDS instance?

I'm encountering a very strange issue with our MySQL RDS deployment. When a complex stored procedure that can take 10+ seconds to complete is called, all other calls to the database are bogged down and hung up, including any call to SHOW FULL PROCESSLIST. Note that the calls come from external/other sessions: the stored procedures that take 10-20 seconds are called by our web service, while my attempts to execute queries or SHOW FULL PROCESSLIST come from the IDE on my machine, so a completely different connection/session.
Yet my query hangs until the other process is complete, and Amazon RDS reports just 2.3% CPU usage for MySQL.
Heck, even opening a connection to RDS while these stored procedures are running takes forever, so something is very wrong. It's as if MySQL isn't operating in any asynchronous capacity.
Any ideas what's going on here? Am I missing a single simple default flag in RDS that's turned off asynchronous processing?
The issue was the class of instance we were using with AWS; it was just too small. Once we upgraded it to t2.medium, the problems disappeared. The unusual thing is that what we were running really wasn't anything intensive for the database; it appears the t2.micro class just isn't designed to be used in any real capacity. One of the issues is that price starts compounding very quickly in AWS, even for a sandbox system: a small company can quickly find fees in excess of $1,000 just by running test environments. That is not reasonable given the service and performance level AWS provides for the cost.

MySQL version of sys.dm_os_sys_info

Is there a MySQL version of sys.dm_os_sys_info?
I know of SHOW STATUS and SHOW VARIABLES, but I'm looking for hardware CPU and RAM numbers.
Or will I just have to ask the admins?
The nearest thing to what you're asking for is the Performance Schema:
The Performance Schema provides a way to inspect internal execution of the server at runtime.
The Performance Schema monitors server events. An “event” is anything the server does that takes time and has been instrumented so that timing information can be collected.
https://dev.mysql.com/doc/refman/8.0/en/performance-schema.html
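Note that the Performance Schema reports on the server process itself, not the machine's hardware. As a hedged sketch of the memory side (the table below exists in MySQL 5.7+ and depends on which instruments are enabled; it shows what mysqld has allocated, not physical RAM):

    -- Top memory consumers inside the server, by bytes currently used.
    SELECT event_name,
           current_number_of_bytes_used
    FROM performance_schema.memory_summary_global_by_event_name
    ORDER BY current_number_of_bytes_used DESC
    LIMIT 10;

For true hardware CPU and RAM numbers, you will most likely still have to ask the admins or look at the OS.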

Data queries and computation happen in MySQL server or Rails server?

I need to regularly run a long backend job with long MySQL queries, which takes several hours to complete. I set up the Delayed Job gem to schedule this job.
When this process is running:
Will this job slow down my Rails front-end server (i.e., will it take much longer to respond to a simple user request)?
Where does the heavy computation happen: in my Rails server, or in the MySQL server?
Will the MySQL server be occupied by my scheduled job, so that no one else can access MySQL at the same time?
Thank you.
The answer to your question is: it depends.
If your task is processor-intensive, it could slow down the Rails server. If you are concerned about the DJ workers impacting the front-end box, move them to another box with access to a shared DB. Your worker box needs the project set up, but it does not need to be the same box you are serving pages from.
This is completely dependent on how you wrote your task. Typically a Rails app does simple select/insert/update/delete, and the actual computation is done in Rails. But you can write SELECT statements that involve complex joins or take advantage of functions in the DB, which offloads the computation of complex fields to the database, as in the sketch below.
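As a hedged illustration (the orders table and its columns are hypothetical), the aggregation below runs entirely inside MySQL, so Rails receives one summarized row per customer instead of every order row:

    -- Hypothetical schema: orders(customer_id, amount, created_at).
    -- SUM() and GROUP BY execute in the database, not in Ruby.
    SELECT customer_id,
           SUM(amount) AS total_spent,
           COUNT(*)    AS order_count
    FROM orders
    GROUP BY customer_id;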
This is dependent on the number of connections your DB is configured to accept. In a production-level server you typically wouldn't see an issue here from the size of your query, but you should take into account how many connections are active and how many are permitted. Each Rails instance counts as a connection, as does each DJ worker.
In each case the actual performance is going to depend on several factors: how many connections you are creating, how much data you are transmitting between the worker and the DB, and where you are doing the work.
If the Rails server is on the same machine as the MySQL server, then there will be some impact, but your OS and MySQL together are pretty capable of minimizing the effects without much other intervention by you. Depending on how you're deployed, you can always use the 'nice' command to lower the priority of the delayed job, minimizing its impact on your site's responsiveness.

Slow data transfer of large result set

I have a large MySQL table, with proper indices etc. I run a simple SELECT * query from a remote machine and expect a large result set.
My problem is that when I run the query, the result set returns at a maximum data transfer speed of ~300 KBytes/sec.
I created the same table and ran the same query on SQL Server Express 2008 R2, and the results returned at a transfer speed of 2 MBytes/second (my line limit).
The server machine is Windows Server 2008 R2 x64, quad core, 4 GB RAM, and the MySQL version is 5.6.2 m5 64-bit. I tried disabling the compression in the communication protocol, but the results were the same.
Does anyone have an idea as to why this is happening ?
--theodore
You might be comparing apples to oranges.
I'd run SELECT * on the MySQL server, and see what kind of data rate you get for retrieving data on the server locally -- without the additional constraint of a network.
If that's slow also -- then it isn't the network's fault.
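A hedged way to do that from the mysql client on the server itself (big_table stands in for your table; SQL_NO_CACHE tells MySQL 5.6 to bypass the query cache so you measure the actual read speed, and the client prints the elapsed time):

    -- Run locally on the server to take the network out of the picture.
    SELECT SQL_NO_CACHE * FROM big_table;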
When the MySQL setup program runs, it asks the person setting up MySQL what role MySQL is going to play on the hardware -- i.e., Development Server, Shared Server, Dedicated.
The difference in all of these is how much memory MySQL will seek to consume on the Server.
The slowest setting is Development (use the least memory), and the fastest one is Dedicated (attempt to use a lot of memory). You can tinker with the my.ini file to change how much memory MySQL will allocate for itself, and/or google 'my.ini memory' for more detailed instructions.
The memory that MySQL is using (or isn't, as the case may be), will make a huge difference on performance.
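If you want to see the current values without digging through my.ini, a quick sketch (the variable names are standard, though which one matters depends on your storage engine):

    -- InnoDB's main cache; usually the setting that matters most.
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

    -- MyISAM index cache, if your tables are MyISAM.
    SHOW VARIABLES LIKE 'key_buffer_size';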
First, check what the speed of retrieving the data locally on the MySQL server is. If it's slow, the network isn't the problem: check MySQL's memory usage, and ideally give it as much as possible. And of course, if it's fast, then either the network, some piece of database middleware (ODBC?), or the tool used to display the data is what's slow.
One more thing: try the SELECT * twice. Why? The second time, some or all of the results (again, depending on memory) should be cached, so it should come back faster.
Also, don't forget to restart MySQL after changing the my.ini file (and create a backup before you make any changes).

Debugging "Error establishing mySQL database connection" under extreme load

Under high traffic, my MySQL 5.0.45 server (Apache 2, CentOS 5) is returning "Error establishing mySQL database connection". I need to find the root cause.
I would very much appreciate any pointer to information about the procedure I should follow to find the cause (memory limit, thread limits, CPU load, slow queries, large dataset, wrong keys, ...). I assume it involves looking at the relevant log files, etc.
Thank you.
That particular error message sounds like it's being generated by your application, and not by a system library. MySQL has functionality to report the specific errors that are occurring, so your best bet would be to utilize that in some way.
For instance, if you were using PHP, there is a function called mysql_error() that returns specifics about the last error encountered (too many connections, etc). You would put in some error handling near your connection call, and log the mysql_error() results if it failed.
You didn't mention what language you were using, but the MySQL libraries would provide the same functionality to whichever you are using. I'd suggest modifying your application code to take advantage of it.
I'm willing to bet this is because you're hitting the maximum connection limit allowed by the MySQL server. But in general, do print the MySQL errors: if not to the screen, then at least to a log or an email.
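To sanity-check that theory, a quick sketch against the server:

    -- The configured ceiling versus the high-water mark of connections
    -- actually used since the server last started.
    SHOW VARIABLES LIKE 'max_connections';
    SHOW STATUS LIKE 'Max_used_connections';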