I have a question about these two dedicated servers.
Which is better for running a website with a large MySQL database (4-6 GB)?
------ My current server ------
Intel Xeon E5-1650v2 - 6c/12t - 3.5 / 3.8 GHz
Ram: 128 GB
Memory: DDR3 ECC 2133 MHz
Disks: 3x 600 GB SAS HDD + 80 GB SSD cache
------ The server I want to change to ------
2x Intel Xeon E5-2630v3 - 16c/32t - 2.4 / 3.2 GHz
Ram: 128 GB
Memory: DDR4 ECC 1866 MHz
Disks: 2x 480 GB SSD (software RAID)
Of course, the second server has a better processor and better memory... but what about the disks?
About my website:
It's a very lightweight site: it doesn't use images, videos, etc., and I don't have big files. It's developed with Laravel and runs on cPanel (CentOS).
The problem is the DB, because all content (for example, images) is requested from external websites, and all of those routes are stored in the DB. For now my DB is 4 GB, but in the next few months it could reach 6 GB.
I need a very fast server for the DB.
MANY THANKS.
Your database should rock on either of those specs. Your entire database can fit in RAM, and that will definitely make your web app/DB very responsive. Your database is really not large; these days 6 GB is a small-to-medium DB.
I'd go the SSD route. If you can, go with RAID 1 for data protection.
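Since the whole DB fits in RAM, the main lever is the InnoDB buffer pool. A minimal my.cnf sketch, assuming InnoDB on one of the 128 GB boxes above (values are illustrative, not a tuned config):

    [mysqld]
    # Size the buffer pool comfortably above the expected 6 GB of
    # data so the whole working set stays in RAM; 16G is still a
    # small slice of 128 GB.
    innodb_buffer_pool_size = 16G
    # A larger redo log avoids frequent checkpoints under heavy writes.
    innodb_log_file_size = 1G
    # Commonly recommended on Linux to avoid double buffering
    # between InnoDB and the OS page cache.
    innodb_flush_method = O_DIRECT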
Related
I have two database servers.
Server A has 128GB memory with 75% for buffer_pool
Server B has 64GB memory with 25% for buffer_pool
There is no activity on Server A other than an ALTER on a 220 GB table.
Server B has replication activity running alongside the same ALTER on the same 220 GB table.
Server B completes in half the time.
Can someone explain what might cause this behavior? All settings across Server A and B are similar except for memory and the buffer_pool allotments.
Both run the identical OS; Server A has a 16-core CPU, while Server B has 8 cores.
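One way to back up the "all settings are similar" claim is to dump the variables most relevant to ALTER performance on both servers and diff the output (a sketch; extend the variable list as needed):

    -- Run on both Server A and Server B and compare the results.
    SHOW GLOBAL VARIABLES
    WHERE Variable_name IN
      ('innodb_buffer_pool_size',
       'innodb_log_file_size',
       'innodb_io_capacity',
       'innodb_flush_log_at_trx_commit');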
Not everything is main memory; in my case, factors like OS and CPU made a big difference. I tested the same DB on different machines (for a project I worked on) and found better overall performance on a Linux machine with an i5-6200U and 8 GB DDR4 than on a Windows 7 machine with an i7-4000 and 16 GB DDR3 (around 20% better).
I am using Couchbase Server in a staging environment. Things were working fine until yesterday, but since today I am observing high CPU usage when the load increases moderately (screenshot attached).
Couchbase cluster configuration:
3-node cluster running 4.5.1-2844 Community Edition (build 2844),
each node an m4.2xlarge AWS machine (8 cores, 32 GB RAM).
Data RAM quota: 25000 MB
Index RAM quota: 2048 MB
It has 9 buckets, and the bucket in use has a 9 GB RAM quota (i.e. 3 GB per node).
Note: Since we are using the Community Edition, each node runs the Data, Full Text, Index, and Query services.
Let me know if I've misconfigured something or if any optimization is required.
I have a problem with MySQL CPU usage in Plesk.
I have a news site running Joomla, with about 100-150 users online.
CPU: AMD Opteron(tm) Processor 6366 HE, 3 cores
Memory: 4GB
HDD: 250 GB
Plesk 12 licence
Debian 7
My site is very slow: pages take about 10-12 seconds to open, and I see MySQL CPU usage at about 85.2% when there are 148 users online at the same time...
Can anyone help me with my MySQL settings? I'd hate to believe the server is too weak for about 1000 online users...
Here is my MySQL config: my.cnf
What should I change to get it working well?
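Without seeing the actual my.cnf it's hard to be specific, but a hedged starting point for a 4 GB box running Joomla on MySQL 5.x might look like this (all values are illustrative and need testing against your workload):

    [mysqld]
    # Leave roughly half of the 4 GB to Apache/PHP and the OS.
    innodb_buffer_pool_size = 1G
    # On MySQL 5.x the query cache can help a read-heavy news site
    # (it was removed in MySQL 8.0).
    query_cache_type = 1
    query_cache_size = 64M
    # Cap per-connection buffers; each connection can allocate
    # these separately, so 150 users multiply them quickly.
    sort_buffer_size = 2M
    join_buffer_size = 2M
    max_connections = 200
    # Log queries slower than 2 seconds to find the CPU burners.
    slow_query_log = 1
    long_query_time = 2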
I have MySQL running on a Windows machine with 3 GB usable RAM and a single core. However, when I allocate more than 1 GB to innodb_buffer_pool_size, I get an error saying
'mysql service cannot be started' because memory could not be
allocated to the innodb_buffer_pool.
I want to allocate at least 2 GB to improve my performance. Any ideas/suggestions as to how I can achieve this? All my other MySQL variable values are quite small (16M - 64M).
Very, very late answer, but I had the same problem and found this solution:
On 32-bit Windows with 4 GB RAM, not all 4 GB is available as application space. In reality there is a 2 GB/2 GB split between userland and kernel space.
The solution already given (and hopefully implemented) is to use a 64-bit OS along with a 64-bit version of MySQL.
This post contains an idea for extending userland memory to 3 GB via a modification to the MySQL binary.
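Once on a 64-bit OS with 64-bit MySQL, the original goal becomes a one-line change (a sketch; 2G assumes the same 3 GB-usable machine, leaving the rest to the OS and MySQL's other buffers):

    [mysqld]
    # Now allocatable: the 64-bit address space removes the 32-bit
    # userland ceiling that caused the startup failure.
    innodb_buffer_pool_size = 2G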
When you move beyond using one instance for your database, what is the best practice on EC2? If the first instance is a master and you're spinning up slaves, they would need to scan the transaction log and bring themselves up to date before they are usable, correct? If the master had been running a while and was busy, this could take a very long time, right? Is it smarter to use something besides master-slave on EC2? I've seen that MySQL Enterprise has support for EC2, but it wasn't clear (to me) from the MySQL site what features this adds. Does it have some added functionality that makes spawning new instances fast and turnkey-like?
Fundamentally, I'm trying to figure out how you auto-scale the database.
You could also use Amazon RDS (their version of MySQL in the cloud) and get out of the business of running a MySQL server altogether (you'll pay slightly more per server instance, but you get database snapshots, etc.).
Amazon RDS currently supports five DB Instance Classes, starting at 11 cents an hour and going all the way up to $3.10 an hour:
* Small DB Instance: 1.7 GB memory, 1 ECU (1 virtual core with 1 ECU), 64-bit platform
* Large DB Instance: 7.5 GB memory, 4 ECUs (2 virtual cores with 2 ECUs each), 64-bit platform
* Extra Large DB Instance: 15 GB memory, 8 ECUs (4 virtual cores with 2 ECUs each), 64-bit platform
* Double Extra Large DB Instance: 34 GB memory, 13 ECUs (4 virtual cores with 3.25 ECUs each), 64-bit platform
* Quadruple Extra Large DB Instance: 68 GB memory, 26 ECUs (8 virtual cores with 3.25 ECUs each), 64-bit platform
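For the self-managed master-slave path the question describes, provisioning a new slave is mostly mechanical once you restore a snapshot taken at a known binary-log position. A sketch with placeholder hostnames and coordinates:

    -- On the freshly restored slave:
    CHANGE MASTER TO
      MASTER_HOST = 'master.example.internal',  -- placeholder
      MASTER_USER = 'repl',
      MASTER_PASSWORD = '...',
      MASTER_LOG_FILE = 'mysql-bin.000123',     -- from the snapshot
      MASTER_LOG_POS  = 4;                      -- from the snapshot
    START SLAVE;
    -- Seconds_Behind_Master here is exactly the catch-up delay the
    -- question worries about; the slave is usable once it reaches 0.
    SHOW SLAVE STATUS\G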