I am building a mobile app that uses a backend server to store its data. In relational-DB terms it is ~10K records (2 or 3 tables). I plan to host it on a Linode VPS (512 MB of RAM). I know the question is very broad, but I want to get an idea of the performance under load.
Another option is a NoSQL store like Redis, but I would need to put in some time to learn it.
I have already searched SO, but found no satisfactory answers yet.
PS: This is a side project and I expect to learn things along the way, but some good pointers would help speed up the process.
It appears that you'll have to work on optimizing your queries rather than looking for a quick fix. I have worked on plenty of projects running on virtual servers with no performance issues, as long as the queries are written correctly. Have a read:
http://owaisahussain.blogspot.com/2012/06/yet-another-blog-on-query-optimization.html
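To illustrate what "writing queries correctly" often comes down to in practice: making sure lookups hit an index. Here is a minimal sketch using Python's standard-library sqlite3 module as a stand-in so it runs anywhere (the table and column names are made up for the example); on MySQL you would run `EXPLAIN SELECT ...` to inspect the plan instead:

```python
import sqlite3

# In-memory SQLite database stands in for MySQL so the example runs anywhere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

# Without an index, the lookup scans every row:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'a@b.c'"
).fetchall()
print(plan[0][3])  # e.g. "SCAN users"

# Add an index on the filtered column, and the plan changes to an index search:
conn.execute("CREATE INDEX idx_users_email ON users (email)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'a@b.c'"
).fetchall()
print(plan[0][3])  # e.g. "SEARCH users USING INDEX idx_users_email"
```

With ~10K records almost anything will be fast, but indexes on the columns you filter and join on are what keep it fast as the data grows.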
Disable InnoDB; that will save you a lot of RAM. To do this, add skip-innodb to your my.cnf file. (Note that this only works on MySQL 5.6 and earlier; InnoDB is mandatory and cannot be disabled as of MySQL 5.7.)
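A minimal my.cnf sketch of that setup (again, only applicable to MySQL 5.6 and earlier; the storage-engine lines are one reasonable way to pair it with MyISAM):

```ini
[mysqld]
# MySQL 5.6 and earlier only; InnoDB cannot be disabled from 5.7 onward
skip-innodb
default-storage-engine=MyISAM
default-tmp-storage-engine=MyISAM
```

Be aware of the trade-off: MyISAM has no transactions and no row-level locking, so this is a RAM-saving measure for small, read-mostly workloads, not a general recommendation.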
From the MySQL 8 documentation:
When innodb_dedicated_server is enabled, InnoDB automatically configures the following variables:
innodb_buffer_pool_size
innodb_log_file_size
innodb_log_files_in_group (as of MySQL 8.0.14)
innodb_flush_method
Only consider enabling innodb_dedicated_server if the MySQL instance resides on a dedicated server
where it can use all available system resources. Enabling innodb_dedicated_server is not recommended
if the MySQL instance shares system resources with other applications.
Assuming the server is dedicated for MySQL, does enabling innodb_dedicated_server actually give better performance than tuning those parameters on my own?
Short answer: No, it does not improve performance any more than setting those tuning options yourself.
The variable innodb_dedicated_server is explained in detail when the feature was announced (2017-08-24):
https://mysqlserverteam.com/plan-to-improve-the-out-of-the-box-experience-in-mysql-8-0/
It's just shorthand for a number of tuning options. The new variable doesn't improve performance in any special way; it's exactly the same as setting those other tuning options yourself.
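As a rough illustration of what that shorthand expands to, on a hypothetical dedicated box with 16 GB of RAM the option amounts to something like the following manual settings. The exact values are approximations of the documented formulas, which have changed across 8.0 releases, so treat these numbers as an assumption to verify against your version's docs:

```ini
[mysqld]
# Roughly what innodb_dedicated_server would pick on a 16 GB machine
innodb_buffer_pool_size = 12G                # 75% of RAM when RAM > 4 GB
innodb_log_file_size    = 2G                 # scaled up with buffer pool size
innodb_flush_method     = O_DIRECT_NO_FSYNC  # where the platform supports it
```

Setting these yourself gives identical behavior, with the advantage that you can deviate from the formula when your workload calls for it.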
I wrote this comment on the blog when they announced the feature:
I’m sorry, but I don’t like this feature at all. I understand the goal
of improving the out-of-the-box experience for naive users, but I
don’t think this solution will be successful at this goal.
Trying to pre-tune a MySQL installation with some formula is a
one-size-fits-all solution, and these kinds of solutions are
unreliable. We can recall examples of other products that have tried
to do this, but eventually removed their auto-tuning features.
It’s not a good assumption that the buffer pool needs as much physical
RAM as you can afford. You already know this, because you need the
innodb_dedicated_server option. Rick mentioned the possibility that
the dataset is already smaller than RAM. In this case, adding more RAM
has little or no benefit.
Many naive users mistakenly believe (after reading some blog) that
increasing RAM allocation always increases performance. It’s difficult
to explain to them why this is not true.
Likewise innodb log file. We assume that bigger is better, because of
benchmarks showing that heavy write traffic benefits from bigger log
files, because of delaying checkpoints. But what if you don’t have
heavy write traffic? What if you use MySQL for a blog or a CMS that is
99% reads? The large log file is unnecessary. Sizing it for an assumed
workload or dataset size has a high chance of being the wrong choice
for tuning.
I understand the difficulty of asking users questions during
installation. I recently did a project automating MySQL provisioning
with apt. It was annoying having to figure out debconf to work around
the installation prompts that do exist (btw, please document MySQL’s
debconf variables!).
There’s also the problem that even if you do prompt the user for
information, they don’t know the answers to the questions. This is
especially true of the naive users that you’re targeting with this
feature.
If the installer asks “Do you use MySQL on a dedicated server?” do
they even know what this means? They might think “dedicated” is simply
the opposite of shared hosting.
If the installer asks “Do you want to use all available memory on this
system?” you will be surprised at how many users think “memory” refers
to disk space, not RAM.
In short: (1) Using formulas to tune MySQL is error-prone. (2) Asking
users to make choices without information is error-prone.
I have an alternative suggestion: Make it easier for users to become
less naive about their choices.
I think users need a kind of friendly cheat-sheet or infographic of
how to make tuning decisions. This could include a list of questions
about their data size and workload, and then a list of performance
indicators to monitor and measure, like buffer pool page create rate,
and log file write rate. Give tips on how to measure these things,
what config options to change, and then how to measure again to verify
that the change had the desired effect.
A simple monitoring tool would also be useful. Nothing so
sophisticated as PMM or VividCortex for long-term trending, but
something more like pt-mext for quick, ephemeral measurements.
The only thing the installation process needs to do is tell the user
that tuning is a thing they need to do (many users don’t realize
this), and refer them to the cheat-sheet documentation.
Just tuning.
It is a challenging task to provide "good" defaults for everything. The biggest impediment is not knowing how much of the machine's RAM and CPU will be consumed by other products (Java, WordPress, etc, etc) running on the same server.
A large number of MySQL servers are run by big players; they separate MySQL servers from web servers, etc. This makes it simple for them to tweak a small number of tunables quickly when deploying a server.
Meanwhile, less-heavy users get decent tuning out of the box by leaving that setting off.
I have a website with 100k+ daily visitors. We use MySQL 5.1.
Serving all these visitors and logging their data, especially during rush hours, puts a lot of load on our server.
I just upgraded our server to an EC2 c3.8xlarge with vCPU = 32, ECU = 108 and memory = 60 GB.
Any suggestions on how I should set up my MySQL configuration to optimize our usage?
I know this is a very broad question, and the answer may differ depending on the nature of the load, but I'd appreciate any suggestions.
You might want to consider upgrading to MySQL 5.5 (or higher), since the performance_schema has been included since that version. It will help you monitor events a lot better and determine where you have to improve your configuration with regard to performance.
Other than that, I suggest you check some memory settings in your my.cnf. I'm not deeply read into it, but some variables you can check regarding performance are:
key_buffer_size
table_cache
sort_buffer_size
read_buffer_size
Upgrading your server is always good, but I think you have to tune MySQL accordingly as well.
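To make those variables concrete, here is a hedged starting-point sketch of a my.cnf for a 60 GB machine. Every value here is an assumption to measure against, not a recommendation; which knobs matter depends heavily on whether your tables are MyISAM or InnoDB:

```ini
[mysqld]
# Starting values to measure against, not recommendations.
key_buffer_size         = 4G     # only caches MyISAM indexes
table_cache             = 2048   # named table_open_cache on 5.5+
sort_buffer_size        = 2M     # allocated per connection: keep small
read_buffer_size        = 1M     # allocated per connection: keep small
innodb_buffer_pool_size = 40G    # only if most of your data is InnoDB
```

Note that the per-connection buffers (sort_buffer_size, read_buffer_size) multiply by your connection count, so on a busy site, raising them is often counterproductive. Change one setting at a time and measure.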
I am running a game server that uses a MySQL database to store a lot of information. I also have a couple of different hosts that offer MySQL.
So I was wondering: are there any ways of testing the connection or write speed (I am new to SQL, so I am not sure of the correct term for what determines the speed of a database) of the databases on the different hosts to see which one is better? Alternatively, are there any settings that would make the database faster?
Thanks.
Regarding question #1: Please see The MySQL Benchmarking page for assistance in benchmarking and performance tuning.
I would suggest using the benchmark tools available for download here against each of your hosts to check performance against each other.
Your second question is a book unto itself and may not have an easy answer. I suggest creating a separate question with specific details about your performance problem for better assistance. Possibly at the DBA site (link below).
Now, if your question is really "What is the best way to load-balance my database servers?", you have an entirely different question, and one that should probably be asked on the https://dba.stackexchange.com/ site.
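Beyond the benchmark tools above, the simplest way to compare hosts is to time a fixed batch of queries against each one. Here is a methodology sketch using the standard-library sqlite3 module as a stand-in so it runs anywhere; for a real comparison, replace the connection with your MySQL driver (e.g. mysql.connector or PyMySQL) pointed at each host in turn:

```python
import sqlite3
import time

def time_writes(conn, n=1000):
    """Time n single-row inserts and return seconds elapsed."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS bench (id INTEGER, payload TEXT)")
    start = time.perf_counter()
    for i in range(n):
        cur.execute("INSERT INTO bench VALUES (?, ?)", (i, "x" * 100))
    conn.commit()
    return time.perf_counter() - start

# In-memory database as a stand-in; swap in a MySQL connection per host.
conn = sqlite3.connect(":memory:")
elapsed = time_writes(conn)
print(f"1000 inserts took {elapsed:.3f}s")
```

Two caveats: MySQL drivers use %s rather than ? as the parameter placeholder, and for remote hosts, the number you measure is dominated by network round-trip latency, which is usually the figure you actually care about when choosing between hosts.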
I am currently creating a site (PHP, CSS, HTML, AJAX, MySQL) which will make heavy use of storage space for user data. These data are essential and can NOT be lost; that is really critical.
I am looking for tips on servers, languages and everything else (even theory) about distributed database systems. Any help would be really appreciated. It would also be great if the system used MySQL.
Thank you
P.S. Don't link Google.com. I have done that and hit nothing but a wall :(
My guess is that you're Googling for the wrong terms. If you searched for MySQL replication, you might run into this article.
Database replication is what enables a "distributed database system". You should also look into clustering to see if that type of distribution/replication might meet your needs.
Also, you didn't specify if you were running LAMP or WAMP but here's a how-to on setting up a MySQL and Apache cluster.
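To give a feel for how little configuration basic replication takes, here is a minimal sketch of the my.cnf on each machine. The server IDs and log names are placeholders; after this, you would still point the replica at the source (CHANGE MASTER TO ... on classic MySQL) and start replication:

```ini
# my.cnf on the source (master)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf on the replica (slave)
[mysqld]
server-id = 2
relay-log = relay-bin
read-only = 1
```

Keep in mind that replication protects against losing a server, not against bad writes: a mistaken DELETE replicates just as faithfully as good data, so you still need real backups for the "can NOT be lost" requirement.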
We have a dedicated MySQL server, with about 2000 small databases on it. (It's a Drupal multi-site install - each database is one site).
When you load each site for the first time in a while, it can take up to 30s to return the first page. After that, the pages return at an acceptable speed. I've traced this through the stack to MySQL. Also, when you connect with the command line mysql client, connection is fast, then "use dbname" is slow, and then queries are fast.
My hunch is that this is due to the server not being configured correctly, and the unused dbs falling out of a cache, or something like that, but I'm not sure which cache or setting applies in this case.
One thing I have tried is innodb_buffer_pool_size. This was set to the default 8 MB. I tried raising it to 512 MB (the machine has ~2 GB of RAM, and the additional RAM was available), as the reading I did indicated that more should give better performance, but this made the system run slower, so it's back at 8 MB now.
Thanks for reading.
With 2000 databases you should adjust the table cache setting. You certainly have a lot of misses in that cache.
Try using mysqltuner and/or tuning-primer.sh to get more information on potential issues with your settings.
Also, Drupal does database-intensive work; check your Drupal installations, as you may be generating a lot (too many) requests.
About innodb_buffer_pool_size: with such a small buffer (8 MB) you certainly have a lot of buffer-pool page misses. The ideal size is when all your data and indexes fit in the buffer, and with 2000 databases... 8 MB is certainly far too small, but it will be hard for you to grow it. Tuning a MySQL server is hard; if MySQL takes too much RAM, your Apache won't get enough.
Solutions are:
check that you make connections via IP addresses rather than DNS names (just in case)
buy more RAM
set MySQL on a separate server
adjust your settings
For Drupal, try to store sessions not in the database but in memcache (you'll need RAM for that, but it will be better for MySQL); modules for that are available. If you have Drupal 7, you can even try to put some of the cache tables in memcache instead of MySQL (do not do that with big cache tables).
Edit: one last thing, I hope you have not modified Drupal to use persistent database connections; some modules allow that (or an old Drupal 5, which tries to do it automatically). With 2000 databases, that would kill your server. Check the MySQL error log for "too many connections" errors.
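To confirm whether the table cache really is the problem before changing anything, run SHOW GLOBAL STATUS LIKE 'Opened_tables'; twice, a few minutes apart; if the counter keeps climbing steadily, tables are being evicted and reopened. In that case, raise the cache in my.cnf (the value below is a placeholder to tune, and the variable is named table_cache on MySQL 5.1 and earlier, table_open_cache from 5.5 onward):

```ini
[mysqld]
# Raise if Opened_tables climbs steadily under normal traffic;
# with 2000 databases the default is almost certainly too small.
table_open_cache = 4000
```

Each cached table descriptor costs a little memory and a file handle, so also check that the OS open-files limit (open_files_limit / ulimit -n) is high enough to match.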
Hello Rupertj, as I read it, you are using InnoDB tables, right?
InnoDB tables are a bit slower than MyISAM tables, but I don't think that is the major problem here. As you said, you are using Drupal; is that a kind of multi-site system, like WordPress?
If so, I'm sorry to say that with this kind of system, each time you install a plugin or anything else, it grows your database in tables and of course in data, and it can become very, very slow. I have experienced this myself, not with Drupal but with the WordPress blog system, and it was a nightmare for me and my friends.
Since then, I have abandoned the project... and my only advice to you is: don't install a lot of plugins in your Drupal system.
I hope this advice helps you, because it helped me a lot with WordPress.
This sounds like a caching issue in Drupal, not MySQL. It seems there are a few very heavy queries, or many, many small ones, or both, that hammer the database server. Once that is done, Drupal caches the results in several caching layers, after which only one (or very few) queries are needed to build a page. Slow in the beginning, fast after that.
You will have to profile it to determine what the cause is, but the table cache seems like a likely suspect.
However, you should also be mindful of persistent connections - which should absolutely definitely, always be turned off (yes, for everyone, not just you). Apache / PHP persistent connections are a pessimisation that you and everyone else can generally do without.