Is reading/writing to mysql database periodically CPU intensive? [closed] - mysql

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I will be writing a program in Delphi that reads from and writes to MySQL database tables on a regular basis, say every 5 seconds. Is this going to be CPU intensive, or could it get to a point where the computer freezes completely? I know that reading from and writing to a hard drive nonstop can freeze everything on your computer, but I am not sure about a MySQL database.

Databases are designed to handle many transactions frequently, but it really depends on the queries you are running. A simple SELECT on a couple of rows is unlikely to cause an issue, but large-scale updates targeting many tables, or queries with multiple joins, can slow performance.

This all depends on the computer and the complexity of the query.

As David has said, it really does depend on the hardware and queries you are processing.
I would suggest measuring the processing time of each query to determine whether the write operations will stack up across the 5-second intervals.
You can find information on how to measure your MySQL processes here.
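For example (a sketch assuming MySQL 8.0.18+ and a hypothetical `readings` table), you can time a single statement straight from the client and see where the effort goes:

```sql
-- EXPLAIN ANALYZE actually executes the statement and reports
-- per-step timings, so you can see whether a 5-second cycle
-- leaves enough headroom (MySQL 8.0.18+):
EXPLAIN ANALYZE
SELECT * FROM readings WHERE device_id = 42;
```

If each read/write cycle finishes in a few milliseconds, a 5-second interval is nowhere near enough load to freeze a machine.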

Related

What is faster / better? More SQL-Select statements or multiple detailed sql commands? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I work on a project with Node.js and a MySQL database.
I have a connection between them with the npm module mysql.
Now my question:
Is it better to send one SQL command and sort the data in Node.js, or to send multiple detailed SQL commands?
Which is faster / more performant?
Thanks.
Without knowing the exact SQL queries, I would say that database operations are faster than your own implementation. Many smart people have worked to ensure the performance, accuracy, atomicity, concurrency, etc. of the MySQL engine.
Even if you can gain marginal improvements in some aspect with your own code, it is unlikely that you will be able to justify the investment.
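As a sketch (the `users` table and its columns here are hypothetical), pushing the filtering, sorting, and limiting into one query usually beats pulling raw rows and sorting them in Node.js:

```sql
-- One round trip; the server filters, sorts (possibly via an index),
-- and returns only the 50 rows you need:
SELECT name, created_at
FROM users
WHERE active = 1
ORDER BY created_at DESC
LIMIT 50;

-- The alternative -- SELECT * FROM users, then sorting and slicing
-- in Node.js -- transfers every row over the wire first and redoes
-- work the database engine is optimized for.
```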

MySQL Community - Scaling [closed]

Closed 4 years ago.
I am working in an environment that is under extreme load. It is a DB used by about a thousand users with one application. This application does thousands of queries against the DB. We have noticed significant performance degradation over time and are looking for a long-term solution to this problem. Of course, query optimization is one of the tasks we are working on, and we are also optimizing indexes; however, this will not be enough to see the performance gains we need.
I have worked in SQL Server for several years but my MySQL knowledge is limited. To start scaling MySQL, I've researched Sharding, but as we are using MySQL community edition, I'm nervous that this will cause more headaches than it's worth. The only other possibility is to re-design the application, specifically how it pulls data from the DB, but I'd rather not do that.
So my question is, is sharding worthwhile to pursue? Is it feasible without an enterprise edition of MySQL? Is there another possibility you could recommend?
Turn on the slowlog with long_query_time=1. Wait a day. Use pt-query-digest to identify the 'worst' couple of queries. Then let's discuss them. Sometimes the fix is the trivial addition of a 'composite' index.
That is, slow queries are almost always the cause of scaling problems.
If we eliminate them as a problem, then we can discuss sharding and other non-trivial approaches.
We must see SHOW CREATE TABLE and other clues of what is going on.
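A minimal sketch of that setup (the log file path is just an example; these can be set at runtime without a restart):

```sql
-- Capture every statement slower than 1 second:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- seconds
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
```

After a day under real load, `pt-query-digest /var/log/mysql/slow.log` ranks the queries by total time consumed, which is where tuning effort pays off first.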

How to calculate/deal with big amounts of data? [closed]

Closed 6 years ago.
I have a table in MySQL that has about 50 million records (continually growing), and it is about subscription consumptions.
So, every day I have to select these records and run calculations on them in order to classify different kinds of consumption/clients, for example whether a client is active or inactive, how long they have been active, whether they have changed products, and so on.
At the moment, I have different queries to select the different business cases, and then I load the data into the staging area and data warehouse. However, some of these queries are very slow and they are overloading the production environment.
I would like to know if there are known solutions or technologies for this kind of daily task.
I am open to continuing with MySQL or trying a new big data technology. For example, selecting the millions of raw records every day into a staging area/ODS and then working on them with some technology.
Does anybody know good solutions for these kind of tasks?
Thank you.
One option might be replication - http://dev.mysql.com/doc/refman/8.0/en/replication.html
That way you can run whatever queries you want on the replicated DB without impacting the live DB.
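A minimal sketch of setting up an asynchronous replica, assuming MySQL 8.0.23+ syntax and GTID-based positioning (the host name and credentials are placeholders):

```sql
-- On the source (my.cnf: server_id=1, binary log and GTID mode enabled):
CREATE USER 'repl'@'%' IDENTIFIED BY 'choose-a-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the replica (my.cnf: server_id=2, GTID mode enabled):
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'primary.example.com',
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = 'choose-a-password',
  SOURCE_AUTO_POSITION = 1;
START REPLICA;
```

The heavy daily extraction queries then run against the replica, so the production server only pays the (much cheaper) cost of shipping its binary log.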

Searching logical shards [closed]

Closed 8 years ago.
Is the basic technique behind querying across logical shards just querying them all at the same time and consolidating the results?
There doesn't seem to be any built-in feature of MySQL or Postgres that allows you to query across logical shards, so I assume you must query each shard yourself, or put some sort of software in front of the database that indexes or queries for you.
MySQL is working on a new technology called MySQL Fabric to do this. It's still in early development (as of this writing). But they apparently intend it to be a built-in feature in MySQL 5.7.
You can also use Shard-Query today. This acts as a proxy to query across all your shards transparently. That is, you can write simple SQL queries as if you didn't have a sharded architecture. Shard-Query rewrites SQL and runs queries against each shard in parallel, then combines the results.
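The fan-out-and-merge idea can be sketched as follows (the `orders` table and sharding key are hypothetical):

```sql
-- Each shard holds a disjoint range of customer_id.
-- Step 1: the same query is sent to every shard in parallel:
SELECT customer_id, SUM(total) AS part_sum
FROM orders
GROUP BY customer_id;

-- Step 2: the proxy (or your application) consolidates the partial
-- results, e.g. by summing part_sum per customer_id across shards.
-- Note: the aggregate must be decomposable. SUM/COUNT/MIN/MAX merge
-- cleanly; AVG has to be rewritten as SUM(x) and COUNT(x) and
-- recombined after the merge.
```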
I don't know what, if any, solutions exist for PostgreSQL to automatically query across shards.

Best practices while designing databases in MySQL [closed]

Closed 8 years ago.
I am developing an Enterprise application in Java EE and I think it will have a huge amount of stored data. It is similar to a university management application in which students from all colleges are registered and have their own profiles.
I am using a MySQL database. I tried to explore on the internet and I found some tips on this link.
What are the best practices to develop huge databases so that they do not decrease its performance?
Thanks in advance.
First of all, your database is not huge but medium-to-small in size. A huge database is one where you need to deal with terabytes of data and millions of operations per second. For your case, MySQL (MyISAM) is enough, and rather than optimization you should focus first on correct database design (optimization is the next step).
Let me share some tips with you:
- scale your hardware (not so important for your case)
- identify relations (normalize) and pick correct datatypes (e.g. use TINYINT instead of BIGINT where you can)
- try to avoid NULL if possible
- use VARCHAR instead of TEXT/BLOB if possible
- index your tables (remember that indexes slow down UPDATE/DELETE/INSERT operations)
- design your queries so they can use those indexes
- always use transactions
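A sketch of those tips applied to a hypothetical `students` table (names and column sizes are illustrative, not prescriptive):

```sql
-- Tight datatypes, NOT NULL where possible, VARCHAR instead of TEXT,
-- and explicit indexes for the expected lookups:
CREATE TABLE students (
  id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
  college_id  SMALLINT UNSIGNED NOT NULL,  -- not BIGINT: thousands of colleges at most
  name        VARCHAR(100) NOT NULL,       -- VARCHAR, not TEXT
  year        TINYINT UNSIGNED NOT NULL,   -- TINYINT instead of INT
  email       VARCHAR(254) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY uq_email (email),
  KEY idx_college_year (college_id, year)  -- composite index for common queries
);
```

The composite index on (college_id, year) serves queries that filter by college alone or by college and year together, which is typically how such an application pulls its lists.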
Once you design and develop your database and the performance is not sufficient - think about optimization:
- check explain plans and tune the SQL
- check hardware utilization and tune either system or MySQL parameters (e.g. the query cache).
Please check also this link:
http://dev.mysql.com/doc/refman/5.0/en/optimization.html