MySQL version of sys.dm_os_sys_info

Is there a MySQL version of sys.dm_os_sys_info?
I know of SHOW STATUS and SHOW VARIABLES, but I'm looking for hardware CPU and RAM numbers.
Or will I just have to ask the admins?

The nearest thing to your question is the Performance Schema:
The Performance Schema provides a way to inspect internal execution of the server at runtime.
The Performance Schema monitors server events. An “event” is anything the server does that takes time and has been instrumented so that timing information can be collected.
https://dev.mysql.com/doc/refman/8.0/en/performance-schema.html
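Note that MySQL does not directly expose host hardware figures the way sys.dm_os_sys_info does; the closest you can get from SQL is the server's own memory accounting. A minimal sketch, assuming MySQL 5.7+ with the sys schema installed:

```sql
-- Total memory currently allocated by the server (not host RAM):
SELECT * FROM sys.memory_global_total;

-- I/O thread configuration (often sized relative to core count,
-- but MySQL itself does not report the number of cores):
SHOW GLOBAL VARIABLES LIKE 'innodb_%_io_threads';
```

For actual hardware CPU and RAM numbers you will likely still need OS-level access, so asking the admins may be unavoidable.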

Related

SQL database CPU 100% utilization

SQL Server CPU utilization is high. How do I fix it, and what all do I need to check?
Please let me know what to do to fix the issue.
All maintenance jobs are running fine.
It could be that there's some other program entirely running on the server chewing up the CPU, so I'd suggest opening task manager or resource monitor first, just to check whether SQL Server itself is using all that CPU. Because that would be unusual.
After you've done that, if you see that it really is SQL using all the CPU, a quick way of getting at least some information, which might let you see if the CPU is being consumed by user queries or system processes, is to execute the procedure sp_who2. This will return a table of values indicating the session id, login name, host name (the machine the client is connecting from), and so on, as well as a column for CPUTime. Run that, have a look at any of the rows with high CPUTime values.
Note, however, that there are system processes that run for a very long time and will seem to have very high CPU time, because they're "always running" so to speak.
If you want to get more detailed information, you could create Adam Machanic's excellent sp_whoisactive stored procedure on your instance, and run that. You can download the DDL, and read the documentation about it, here
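A quick sketch of the sp_who2 approach described above, plus a roughly equivalent query against the session DMV (adjust the columns to your needs):

```sql
-- Built-in overview of sessions, including a CPUTime column:
EXEC sp_who2;

-- Similar information via DMVs, user sessions only, sorted by cumulative CPU:
SELECT session_id, login_name, host_name, cpu_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY cpu_time DESC;
```

The DMV version makes it easy to exclude the long-running system sessions mentioned above, since their cumulative CPU time is misleading.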

Need Mysql Zabbix data for CPU utilization, memory utilization, Disk space utilization for creating ML profile

I have installed Zabbix 4.0 for remote monitoring of a Linux server. My first understanding is that the Zabbix agent monitors the server and sends the data to a MySQL database for storage. The Zabbix frontend retrieves the data from the MySQL database and shows the above-said metrics (in the form of graphs), as shown in the attached image.
Now, instead of directly viewing from the web interface, I want to construct an ML model from metrics like CPU utilization/load, memory utilization, hard disk usage, and traffic in/out. I checked all the columns of all the tables in the MySQL database to retrieve the above-said metrics. However, I could not find any columns or tables that stored these metrics. My second understanding is that the Zabbix frontend constructs these graphs indirectly from the columns stored in the MySQL database tables.
I want to know whether both of my understandings are correct.
I also want to know, assuming both understandings are true, how I can extract metrics like CPU utilization/load, memory utilization, hard disk usage, and traffic in/out from the data stored in the MySQL database to build the ML model.
If my understandings are false, how should I collect these metrics?
Any details or documentation that could help would be appreciated.
Zabbix data is stored in the MySQL database in various tables (history and trends, differentiated by data type).
The difference between history and trends is described here.
I strongly advise against querying MySQL directly, because of the schema's complexity and compatibility across Zabbix versions.
The best course of action is to extract the data through the API (history.get and trend.get) and feed it to your ML model.
Zabbix itself supports predictive triggers, but I have not implemented them yet.

Amazon Linux EC2 Webserver / MYSQL Upgrade – Traffic causing error establishing a database connection

To give you a little background, I currently have a website that allows users to upload photos. The website was initially on a GoDaddy shared server, but recent surges in traffic have forced me to explore other options. During peak hours, the site has 400+ active visitors, which, when combined with user uploads, forces the shared server to shut down.
I have a small amount of experience with setting up servers through AWS and attempted to place the website on a c1.medium instance running Amazon Linux. The website, along with the MySQL database, is on the same instance. While I have read that this is generally frowned upon, I have similarly read that moving the database to another instance would not significantly increase speeds. Unfortunately, the c1.medium instance also was unable to support the traffic, and I soon started receiving an "Error establishing a database connection" message. The site does load on occasion, so the problem stems from the traffic load and not an actual problem with the database.
My question is whether the problem lies solely with MySQL. The database itself, when backed up, is around 250MB. Is the issue caused by input/output requests to the database? I read posts from people with similar problems who stated that installing MySQL 5.6 solved the problem, but I have also read that MySQL 5.6 is slower than MySQL 5.5, which is my current version.
After conducting some preliminary research, I started to believe that I could resolve the problem by increasing the IOPS of the EBS volume. Originally I had the IOPS set to standard, but I changed it to Provisioned IOPS at 30x the size of the EBS volume (i.e., 60GB – 1800 IOPS). This once again appeared to have little impact. Do I need to upgrade my instance? What measures should I be focused on when deciding on the instance? It appears that the cheapest instance with high network performance and EBS optimization would be c3.xlarge. Suggestions?
Several things to consider:
1) Separate the database server from the web server
Your database should not share resources with your web server; they will both perform poorly as a result.
It also makes it easier to find where the bottleneck is.
2) Upgrade to MySQL 5.6
In all the benchmarks that I have seen and done, 5.6 performs better than 5.5.
3) Configure your database to take advantage of your resources
Depending on the storage engine and the memory available on your machine, configure MySQL accordingly; for example, set innodb_buffer_pool_size to 70% of the (dedicated) RAM.
4) Monitor MySQL and check the slow query log
The slow query log shows the queries that are slow and inefficient.
5) Learn to use EXPLAIN
EXPLAIN shows the query plan in MySQL; run EXPLAIN on slow queries to tune them.
6) Use key-value stores to cache queries
Use Memcached or Redis to cache query results so repeated queries are served from memory and don't hit your database.
7) Increasing IOPS and scaling out
Increasing IOPS and getting better hardware helps, but using efficient queries is much more effective; queries and the application are most often the greater contributing factor to performance problems.
8) Replication
To help with concurrency, consider moving to MySQL master/slave replication if you still have issues.
Final note: use EBS, because instance storage on EC2 is ephemeral and will not persist.
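A few of the steps above can be sketched in SQL. This is a minimal illustration, not a tuning recipe; the table name in the EXPLAIN example is a hypothetical placeholder:

```sql
-- 3) Check the current InnoDB buffer pool size (resizing is dynamic only
--    in MySQL 5.7.5+; on 5.5/5.6 change it in my.cnf and restart):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- 4) Enable the slow query log:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second

-- 5) Inspect the plan of a suspect query ('photos' is a placeholder table):
EXPLAIN SELECT * FROM photos WHERE user_id = 42;
```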
We recently did extensive research on the performance bottlenecks associated with massive end-user peaks across our global customer base, and the analysis indicates that the database is, by far, the most frequent cause of slowdowns or even crashes. The report (https://queue-it.com/trend-report) includes best-practice advice from our customers on how to improve the situation, which you may find helpful.

Upgrading from MySQL 4.1 to 5.0 - What kind of performance changes (good or bad) can we expect?

We currently have approximately 2000 simultaneous connections and average approximately 425 reads and writes per second, with a read-to-write ratio of 3:1. All of our tables are MyISAM. Can we expect better or worse performance when we go from MySQL 4.1.22 to 5.0?
There's no way for anyone here to tell you without the schema, queries and test data.
Why not set up a dev environment on 5.0 and test it out?
The main concern should be that the 5.0 INFORMATION_SCHEMA is a huge vulnerability: by dumping the schema through SQL injection, an unwanted viewer can easily view all of the tables from a remote location and use the same schema's column metadata to go after passwords.
The MySQL source tree includes a set of benchmark tests written as Perl scripts. See The MySQL Benchmark Suite for some information. You can download the source distribution for MySQL 5.0.91 at the archives.
A source distribution of MySQL 4.1 doesn't seem to be easily available anymore. You might have to check out old sources from Launchpad unless you can find a copy of an old source distribution elsewhere on the internet.
However, the comparison that these benchmarks show is only of general interest. It may be irrelevant to how your application performs. For instance, your usage of the database may not take advantage of some performance improvements in MySQL 5.0, but it may run into some regressions in MySQL 5.0 that were necessary.
The only way to get an answer that is relevant to your application is to try the new software with a test instance of your application, using a sample of data that is a realistic model of the type and volume of data your application typically deals with. As @BenS says, no one on a site like StackOverflow can give an answer specific to your application.
You say in a comment that you're very concerned about performance, but if you don't have an instance of your application and database that you can run tests on, you aren't doing the work necessary to satisfy this concern.
I would strongly suggest moving straight to 5.1.45 with Innodb Support. Percona provides an excellent version with XtraDB that provides a number of performance related improvements. Moving off of your MyISAM tables and onto Innodb will provide a huge performance increase in almost all cases. If you are going to burn the QA/Testing time to move, do a full move now, not a half-way step.
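The MyISAM-to-InnoDB move suggested above can be sketched like this (test on a copy of the database first; the table name is a hypothetical placeholder):

```sql
-- Find remaining MyISAM tables in the current schema:
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE() AND engine = 'MyISAM';

-- Convert one table ('users' is a placeholder name):
ALTER TABLE users ENGINE = InnoDB;
```

Note that the conversion rebuilds the table and can take a long time on large tables, so schedule it within the QA/testing window mentioned above.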

MySQL performance - 100Mb ethernet vs 1Gb ethernet

I've just started a new job and noticed that the analysts' computers are connected to the network at 100Mbps. The ODBC queries we run against the MySQL server can easily return 500MB+, and it seems that at times when the servers are under high load, the DBAs kill low-priority jobs because they are taking too long to run.
My question is this... How much of this server time is spent executing the request, and how much time is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1Gbps?
(Updated for the why): The database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the large amounts of data being transmitted. The queries are not poorly structured; they are very simple, and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more to do with general MySQL performance than query-related performance. I was kind of hoping that someone with a Gigabit connection might be able to quantify some results for me here by running a query that returns a decent amount of data, then limiting their connection speed to 100Mb and rerunning the same query. Hopefully this could be done in an environment where loads are reasonably stable, so as not to skew the results.
If ethernet speed can improve the situation I wanted some quantifiable results to help argue my case for upgrading the network connections.
Thanks
Rob
Benchmark. MySQL has many tools for determining how long queries take. Odds are you have really bad queries. Use the slow query log.
Why are you transmitting/storing 500MB of data from/in MySQL?
Divide the amount of data by the time of your query and you'll get your answer. If you're nearing the capacity of 100Mbps, you'll have I/O problems.
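As a back-of-envelope check of that division (pure arithmetic, runnable in any MySQL shell): 100 Mbps is roughly 12.5 MB/s and 1 Gbps roughly 125 MB/s, so wire time alone for a 500 MB result set is on the order of 40 s versus 4 s:

```sql
SELECT 500 / 12.5 AS seconds_at_100mbps,  -- ~40 s just to move the data
       500 / 125  AS seconds_at_1gbps;    -- ~4 s
```

If the total query time is close to the 100 Mbps figure, the network link is the bottleneck; if it is much larger, the server is.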
My suspicion is yes. It should be.
In the MySQL shell, I would run:
show full processlist
on the machine and check out the state of the queries. If you see any states similar to "reading from net" or "writing to net", that would imply that network transmission is directly impacting MySQL. You can also look at iostat results to see how much I/O the system is doing. If the system is on a managed switch, you might also want to check the load there.
Ref: show processlist
Ref: Status definitions
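A sketch of the processlist check described above; on MySQL 5.1+ the same information can also be filtered via INFORMATION_SCHEMA (the state names are the ones to watch for, per the status-definition docs):

```sql
SHOW FULL PROCESSLIST;

-- Or filter for sessions whose state suggests network-bound work:
SELECT id, user, host, time, state, LEFT(info, 80) AS query
FROM information_schema.processlist
WHERE state IN ('Writing to net', 'Reading from net', 'Sending to client');
```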