I am fairly new to MySQL. I have a database consisting of a few hundred table files. When I run a report I notice (through ProcMon) that MySQL is opening and closing the tables hundreds of thousands of times! That greatly affects performance. Is there some setting to direct MySQL to keep table files open until MySQL is shut down? Or at least to reduce the file thrashing?
Thanks.
Plan A: Don't worry about it.
Plan B: Increase table_open_cache to a few thousand. (See SHOW VARIABLES LIKE 'table_open_cache';) If that value won't stick, check the Operating System to see if it is constraining things (ulimit). There's a sketch below, after Plan C.
Plan C: It is rare to see an application that needs over a hundred tables. Ponder what the application is doing. (WP, for example, uses 12 tables per user. This does not scale well.)
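A minimal sketch of Plan B, assuming you have privileges to SET GLOBAL; the 8000 is only an illustrative value (put it in my.cnf as well so it survives a restart):
SHOW VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS LIKE 'Open%tables';    -- Open_tables (currently open) vs Opened_tables (cumulative)
SET GLOBAL table_open_cache = 8000;       -- takes effect for new table opens
-- if the value won't stick, check open_files_limit and the OS ulimit on open files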
We are facing a problem: our MySQL 8.0 DB instance (production environment) is continuously showing an alert that the number of open tables is equal to the table_open_cache value. The number of open tables increased by more than 43,200 over a 24-hour observation period, bringing the total count of open tables to 2,845,063.
Please help me figure out how to reduce this. If I go for a FLUSH TABLES command with read only or with a read lock, will it cause any data loss or performance issues? I have to implement this on my production database. Is it good practice to run FLUSH TABLES manually once a day?
I am attaching an image for reference :-
[screenshot of the monitoring dashboard showing the open-table counters, 43.2K and 2.8M]
Misses/Hits is about 2% -- reasonable.
Apparently that screenshot should be talking about "opened" tables, not "open" tables. Only 4K are currently "open", limited by table_open_cache.
The image shows 43.2K vs 2.8M -- it is unclear what each means. 43.2K/24h is exactly 1 per 2 seconds. This is suspect.
2.8M openings of tables in 24 hours is high, but not necessarily "bad". (It's about the 95th percentile.)
Suggest increasing table_open_cache to 8000. What activity is going on? Perhaps you are opening a connection, performing a single operation (which involves opening one or more tables), then disconnecting? Can you cut back on the rapidity of creating connections?
Please provide SHOW GLOBAL STATUS LIKE 'Connections'; 50 per second is "high".
I await seeing Opened_tables and Uptime fetched at the 'same' time.
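For reference, a quick sketch of the arithmetic I have in mind; these are standard status variables, and only the division is my addition:
SHOW GLOBAL STATUS LIKE 'Opened_tables';   -- cumulative table opens since startup
SHOW GLOBAL STATUS LIKE 'Uptime';          -- seconds since startup
SHOW GLOBAL STATUS LIKE 'Connections';     -- cumulative connection attempts
-- table opens per second ~= Opened_tables / Uptime
-- connections per second ~= Connections / Uptime (around 50/sec is "high")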
No, I don't think FLUSH is the answer.
I have largish (InnoDB) tables in a database; apparently the users are capable of making SELECTs with JOINs that result in temporary, large (and thus on-disk) tables. Sometimes, those are so large that they exhaust disk space, leading to all sorts of weird issues.
Is there a way to limit temp table maximum size for an on-disk table, so that the table doesn't overgrow the disk? tmp_table_size only applies to in-memory tables, despite the name. I haven't found anything relevant in the documentation.
There's no option for this in MariaDB and MySQL.
I ran into the same issue as you some months ago. I searched a lot and finally partially solved it by creating a special storage area on the NAS for temporary datasets.
Create a folder on your NAS or a partition on an internal HDD; it will by definition be limited in size. Then mount it, and in the MySQL ini assign the temporary storage to this drive (choose either the Linux or Windows form):
tmpdir="mnt/DBtmp/"
tmpdir="T:\"
The MySQL service should be restarted after this change.
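For completeness, the setting belongs in the [mysqld] section of my.cnf / my.ini; a minimal sketch reusing the illustrative paths above:
[mysqld]
# Linux
tmpdir = /mnt/DBtmp/
# Windows (forward slashes avoid escaping issues)
# tmpdir = "T:/"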
With this approach, once the drive is full, you still have "weird issues" with on-disk queries, but the other issues are gone.
There was a discussion about an option disk-tmp-table-size, but it looks like the commit did not make it through review or got lost for some other reason (at least the option does not exist in the current code base anymore).
I guess your next best try (besides increasing storage) is to tune MySQL to not make on-disk temp tables. There are some tips for this on DBA. Another attempt could be to create a ramdisk for the storage of the "on-disk" temp tables, if you have enough RAM and only lack disk storage.
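If you go the tuning route, this is the usual first step; a sketch with a purely illustrative 256M value (an internal temp table stays in memory only up to the smaller of these two variables):
SET GLOBAL tmp_table_size      = 256 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;
-- existing sessions keep their old session values; new connections pick these up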
While it does not answer the question for MySQL, MariaDB has tmp_disk_table_size and potentially also useful max_join_size settings. However, tmp_disk_table_size is only for MyISAM or Aria tables, not for InnoDB. Also, max_join_size works only on the estimated row count of the join, not the actual row count. On the bright side, the error is issued almost immediately.
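On MariaDB that might look like the following; the numbers are illustrative, not recommendations:
SET GLOBAL tmp_disk_table_size = 10 * 1024 * 1024 * 1024;  -- cap each on-disk temp table at roughly 10 GB
SET GLOBAL max_join_size       = 100000000;                -- error out when the estimated row count exceeds this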
I'm programming an Access database, but I realized that its size increases dramatically as it is being used, growing to hundreds of MB. After compacting it, the size came back to 5 MB. What normally causes this increase in size, and how can I avoid it?
You can also turn off row locking. I have a process and a file of about 5 MB in size.
When you run a simple update, it bloats to 125 MB. If you turn off row locking, the file does not grow at all with the update.
So you want to disable row locking – this will MASSIVELY reduce bloating. The option you want to uncheck is this one:
File -> Options -> Client Settings, and then uncheck
[x] Open database by using record-level locking
Access does not have true row locking, but does have what is called database page locking. So in a roundabout way, if you turn on row locking, then Access just expands all records to the size of one database page – the result is massive bloating. Try disabling the above option. (You have to exit + re-start Access for this setting change to take effect).
If you're really going from 5MB to hundreds of MB that can be compacted back to 5 MB then as others have mentioned you're INSERTING and DELETING a lot of records. This is usually because you need to create temporary tables.
Most of the time temporary tables aren't technically required and can be removed by either querying a query or using dynamic SQL (see the sketch below). If you can't do this, it's probably worthwhile to create a separate temporary database that you link to.
It's important to note that each user should have their own copy of the temp database and that it gets destroyed at either the beginning or the end of their session.
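To make "querying a query" and the external temp database concrete, here is a rough sketch in Access SQL; qryOrderTotals, WorkTable, ScratchDB.accdb and the Orders table are all hypothetical names:
SELECT t.CustomerID, t.OrderTotal
FROM qryOrderTotals AS t
WHERE t.OrderTotal > 1000;
That queries a saved query directly instead of materialising a temp table. If you really do need a scratch table, point it at the separate temp database:
SELECT o.* INTO WorkTable IN 'C:\Temp\ScratchDB.accdb'
FROM Orders AS o;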
Lots of adding and deleting records is one cause of database bloat. If this is your development DB, then database bloat is unavoidable as you repeatedly compile and save your VBA project; the bloat may be far less pronounced in end-user databases.
Doing any work in an Access database will cause the size of the file to increase. I have several databases that bloat to almost 2GB in size when a morning process is running. This process inserts, updates and deletes data.
One thing that is important when working with MS Access is to use compact and repair. This will shrink the size of the database.
I wouldn't worry about the DB growing to a couple of hundred MB; that is still small for Access.
I was just wondering what will happen to my app if my MySQL database exceeds its size limit.
My hosting only allows 1 GB of space per database. I know that's plenty, but what if I make an app where people discuss something, and some years later the database exceeds the limit?
Then what will I do? And approximately how much text data can be stored in 1 GB?
And can I have two databases running one application? Like one database contains usernames and profiles and that sort of stuff, and the other contains questions and answers? And will that slow down the process of getting everything?
Update: can I set up MySQL on my own server and overcome the size limitation?
Thanks.
There will be no speed disadvantage from splitting your tables across two databases (assuming both databases are on the same MySQL server), but if the data are logically part of the same application then it is more sensible they be grouped together.
When you want to refer to a table in another database, you also have to qualify it with the appropriate database name, which you could see as an inefficiency.
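For example (the database and table names here are made up), a cross-database query just needs the qualified names, assuming both databases are on the same server:
SELECT u.username, q.title
FROM accounts_db.users AS u
JOIN content_db.questions AS q ON q.author_id = u.id;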
My guess is that if you approach 1GB with two databases or with one, it's not going to make a difference how your host treats you (it shouldn't make a difference for MySQL, after all). I suggest you not worry about it unless you're going to be generating data like nobody's business, and in that case you require a more dedicated host.
If you figure out years down the line that you're coming to the limit, you can make a decision then whether to dump some of your older data or move to a host that permits you to store more data.
I don't think your application would stop working immediately when you hit 1GB. I think it more likely that your host would start writing you emails telling you off and suggesting you upgrade packages, or something.
Most of this is specific to your host. 1 GB is ~1 billion bytes (one letter usually = one byte, so roughly a billion characters of plain text). Having two databases will not slow anything down, so long as they're both on the same host and they're properly set up.
One question about NDBCLUSTER.
I inherited the development of a web site based on an NDBCLUSTER 5.1 solution (LAMP platform).
Unfortunately, whoever designed the former solution didn't realize that this database engine has strong limits. For one, the maximum number of fields a table can have is 128. The former programmer designed tables with 369 fields in a single row, one for each day of the year plus some key fields (he originally worked with the MyISAM engine). OK, it must be refactored anyway, I know.
What is more, the engine needs a lot of tuning: the maximum number of attributes for a table (which defaults to 1000, a bit too few) and many other parameters, the misinterpretation or underestimation of which can lead to serious problems once you're in production with your database and you're forced to change something.
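(For reference, those limits are data-node parameters set in the cluster's config.ini read by ndb_mgmd, not in my.cnf; a rough sketch with purely illustrative values:)
[ndbd default]
# attributes that can be defined in the cluster; default 1000
MaxNoOfAttributes = 10000
# tables in the cluster; default 128
MaxNoOfTables = 1024
# ordered indexes; default 128
MaxNoOfOrderedIndexes = 512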
There's also the fact that disk storage for NDBCLUSTER tables is somewhat unpredictable if not precisely configured: even when specified in CREATE TABLE statements, the engine seems to prefer keeping data in memory - which explains the speed - but that can be a pain if your table on node 1 suddenly collapses (as it did during testing). All table data was lost on all nodes and the table was corrupted after only 1000 records.
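(For context, this is roughly the explicit disk-data pattern we mean - a sketch only, with hypothetical file, tablespace, and table names:)
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo1.log'
    ENGINE NDBCLUSTER;
CREATE TABLESPACE ts1
    ADD DATAFILE 'data1.dat'
    USE LOGFILE GROUP lg1
    ENGINE NDBCLUSTER;
CREATE TABLE readings (
    id INT NOT NULL PRIMARY KEY,
    reading_value DOUBLE
)
TABLESPACE ts1 STORAGE DISK
ENGINE NDBCLUSTER;
Even then, indexed columns are kept in memory; only non-indexed columns actually go to disk.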
We were on a server with 8 GB of RAM, and the table had just 27 fields.
Please note that no ndb_mgm node-shutdown operation was run that could have compromised the table data. It simply fell over, full stop. Our provider didn't understand why.
So the question is: would you recommend NDBCLUSTER as a stable solution for a large scale web service database?
We're talking about a database which should contain several millions of records, thousands of tables and thousands of catalogues.
If not, which database would you recommend as the best one for the task of building a nation-scale web service?
Thanks in advance.
I have had a terrible experience with NDBCLUSTER. It's a good replacement for memcached with range invalidation, nothing more. Stability and configurability simply do not exist for this solution. You cannot force all processes to listen on specific ports, backups worked but I had to edit the bkp files in vim to restore the database, etc.