Is Software RAID1 faster on a larger SSD? [closed] - mysql

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 days ago.
I am evaluating two Linux servers in terms of database read and write performance. The database sits on an SSD RAID1 pair, with MySQL 5.7 on each server. Server 2's specs are slightly better overall, except for this:
Server 1: Hardware RAID1, two 2 TB SSD drives.
Server 2: Software RAID1, two 1 TB SSD drives.
Server 2 is faster when it comes to READS.
But Server 2 is slower when it comes to WRITES: the same performance test completes 33% faster on Server 1 than on Server 2 (140 s vs 210 s). The test is identical on both servers: inserting thousands of rows into the database, 64 bytes per row.
Software RAID is generally slower than hardware RAID, so the slower writes could be understandable on that basis alone. But it was also suggested that the SIZE of the drive is an additional factor, i.e. that a 2 TB SSD will be faster than a 1 TB SSD.
Does anyone know if this is the case? I have not been able to find anything on this online. Any help would be appreciated.
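A quick sanity check of the reported numbers (the batch size below is a hypothetical example, since the question only says "1000s of rows" at 64 bytes each):

```python
# Sanity check of the reported write-test numbers from the question.
rows = 100_000                  # assumed batch size, for illustration only
server1_secs = 140
server2_secs = 210

s1_rows_per_sec = rows / server1_secs
s2_rows_per_sec = rows / server2_secs
saving = (server2_secs - server1_secs) / server2_secs

print(f"Server 1: {s1_rows_per_sec:.0f} rows/s, Server 2: {s2_rows_per_sec:.0f} rows/s")
print(f"Server 1 is {saving:.0%} faster")  # → Server 1 is 33% faster
```

That is, "33% faster" here means Server 1 needs 33% less wall-clock time for the same insert workload.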

Related

Database for large application [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I'm working on an application that will insert about 11,000 new rows into the database per day and handle approximately 800-1,000 queries per second (a rough estimate) during working hours. Outside working hours, the rate will drop to 100-150 queries per second.
The application has both a web and a desktop version; the web version, along with the database, will be hosted on a dedicated server with 32/64 GB of RAM and an Intel Xeon E5-1650 v3 (12 threads at 3.5 GHz), everything on two 240 GB Intel SSDs.
Which database would be best for this application? I'm considering MS SQL Server at the moment, but which database do you think will give the BEST performance? The database is efficiently designed and the hosting hardware is good: everything is on SSDs with a decent amount of RAM and processing power (which is the only reason for mentioning the server specifications).
So what are your recommendations: MS SQL Server, Oracle DB, MySQL, PostgreSQL, or something else?
Thank you :)
Both SQL Server and Oracle should easily be able to handle what you need on that hardware. When used properly, there is no significant performance difference between the two.
You should focus more on the knowledge and skill set of the people who will be developing and maintaining the system.
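A back-of-envelope calculation supports the answer above that this workload is modest for the stated hardware; the average row size below is an assumed figure, not something from the question:

```python
# Rough sizing for the workload described in the question.
rows_per_day = 11_000
avg_row_bytes = 200                        # assumption for illustration
yearly_growth_mb = rows_per_day * 365 * avg_row_bytes / 1e6

peak_qps = 1_000                           # upper end of the stated estimate
# Any of the mentioned databases can sustain thousands of simple indexed
# reads per second on SSD-backed hardware, so 1,000 qps leaves headroom.
print(f"~{yearly_growth_mb:.0f} MB of new row data per year, {peak_qps} qps peak")
```

Under that assumption, a year of inserts adds well under 1 GB of raw row data, which fits comfortably on the two 240 GB SSDs.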

MySQL high CPU usage myisam [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
MySQL CPU usage is high (80%-150%). This is a shop (10,000 products) with at most 15 users online.
my.cnf
http://pastebin.com/xJNuWTWT
MySQLTuner log:
http://pastebin.com/HxWwucE2
console:
First, run the following query while the CPU load is high:
SHOW PROCESSLIST;
I also suggest turning off persistent connections.
Then check the MySQL user accounts to make sure it is not possible for anyone to connect from a remote server.
At the same time, turn on the MySQL slow query log to keep an eye on long-running queries, and use it to make sure no queries are locking key tables for too long.
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
See also:
http://dev.mysql.com/doc/refman/5.0/en/memory-use.html
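As a sketch, a minimal my.cnf fragment that enables the slow query log (the threshold and file path are illustrative choices, not values from the question; these variable names apply to MySQL 5.1 and later, while 5.0 used log_slow_queries instead):

```ini
[mysqld]
# Log statements that take longer than 2 seconds (pick a threshold for your workload)
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log   # illustrative path
long_query_time     = 2
# Optionally also log queries that do not use an index
log_queries_not_using_indexes = 1
```

After the log has collected data for a while, mysqldumpslow can summarize it by query pattern.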
MySQL is only one part of the performance problem; on its own it is not suited to a high-traffic, data-heavy website. You should add a caching layer. 10,000 products is large enough to slow down your website, especially if there is no cache, if the server is standard shared hosting rather than a dedicated server, etc.
To summarize, you should rework the architecture to account for the large database, user performance, and dynamic pages.
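The caching layer suggested above can be sketched minimally in Python; in production you would use memcached, Redis, or the shop software's built-in page cache, and all names here are illustrative:

```python
# Minimal sketch of a time-based cache in front of MySQL.
import time

class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}                  # key -> (expires_at, value)

    def get(self, key, compute):
        now = time.monotonic()
        entry = self.store.get(key)
        if entry and entry[0] > now:
            return entry[1]              # fresh hit: no database work
        value = compute()                # miss or stale: query MySQL once
        self.store[key] = (now + self.ttl, value)
        return value

db_queries = 0

def load_product_page():
    # Stand-in for an expensive MySQL query building a catalogue page.
    global db_queries
    db_queries += 1
    return "<html>product list</html>"

cache = TTLCache(ttl_seconds=60)
for _ in range(100):
    page = cache.get("products:page:1", load_product_page)

print(db_queries)  # → 1: only the first request reached the database
```

With 15 users online, even a short TTL collapses repeated page builds into a single query per interval, which directly reduces CPU load on MySQL.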

Mysql server database limit [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 9 years ago.
Is it possible to have 20,000 databases on a MySQL server? They will not all be accessed at the same time, and their sizes should not be larger than 10 MB each. Let's say that 5,000 of them will be open at one time, serving various sites. Could the server process that many queries across that many databases?
From the manual:
MySQL has no limit on the number of databases. The underlying file system may have a limit on the number of tables, and individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
However, there are other limitations that may affect your setup:
- memory size, as MySQL will hold some information about each database in RAM;
- disk space for transaction logs and caches;
- the number of simultaneous connections the OS can handle: each connection consumes CPU, RAM, and disk I/O.
Check out E.7.6, "Windows Platform Limitations": there is quite a long list of things there.
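The RAM point above can be put into rough numbers; the per-connection figure below is an assumption for illustration, since the real cost depends on the per-thread buffers configured in my.cnf:

```python
# Rough RAM arithmetic for the scenario in the question.
open_connections = 5_000
mb_per_connection = 1                      # assumed per-thread footprint
connection_ram_gb = open_connections * mb_per_connection / 1024

total_dbs = 20_000
max_db_mb = 10                             # "should not be larger than 10 MB"
worst_case_data_gb = total_dbs * max_db_mb / 1024

print(f"~{connection_ram_gb:.1f} GB RAM just for connections, "
      f"~{worst_case_data_gb:.0f} GB of data worst case")
```

Even under these assumptions, the connection overhead (a few GB) and the worst-case data volume (under 200 GB) are within reach of ordinary server hardware; the practical limit is more likely connection handling than database count.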

Joomla 300 concurrent users [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
I have an issue with a website built in Joomla 2.5 with the JA Teline IV template that gets 300 concurrent users. It is a soccer magazine, so articles are updated often, even minute by minute during a match.
The server has 16 GB of RAM and a quad-core processor, but the website freezes when 300 users access it.
I've done all the front-end optimization; the last optimization left could be enabling caching.
My questions are:
- caching enabled also for logged-in users: is that possible?
- cache timing: for this type of article, can I set the cache to expire after 1 minute? Is that a good option? It could improve performance.
Can you suggest what to do? Any other possible optimizations?
16 GB should be enough to handle 300 concurrent users... it smells like a MySQL server that is not fine-tuned.
Run this script against your MySQL server:
https://github.com/rackerhacker/MySQLTuner-perl
You have to find the bottleneck: processor? memory? disk? database? network?
Once you find the issue, choose the right solution: a bigger processor, more memory, faster disks, database indexes, memory caching, network caching, etc.

Migrating localhost MySQL to AWS Oracle seems to wait for four hours [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I am currently migrating a localhost MySQL database of quite some size to an AWS server running Oracle. I am using SQL Developer with the add-on for MySQL support installed. The migration is going quite slowly, and from the diagnostic tools it seems that free space on the server decreases (a sign of data transfer) only every fourth hour.
Is this due to the diagnostic tool, or to some constraint on the server?
If it is a constraint on the server, how can I remove it so data can be transferred faster?
I have now been migrating for about 40 hours and only 2 gigabytes have been transferred. The transfers seem to happen every fourth hour.
Hard to tell from your post, but are you using the migration wizard in SQL Developer? Is it an online or offline migration?
An online migration of a large database will be very slow, as it literally rebuilds your database one row at a time, including integrity checks, redo generation, index building, etc.
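The rate reported in the question makes the problem concrete; the total database size below is a hypothetical figure, since the question only says the database is "of quite some size":

```python
# Extrapolating the transfer rate reported in the question.
transferred_gb = 2
elapsed_hours = 40
rate_gb_per_hour = transferred_gb / elapsed_hours   # 0.05 GB/h

total_gb = 50                                       # assumed total size
remaining_hours = (total_gb - transferred_gb) / rate_gb_per_hour
print(f"{rate_gb_per_hour:.3f} GB/h -> ~{remaining_hours / 24:.0f} more days")
```

At that rate a 50 GB database would need well over a month, which is why an offline migration (dump the MySQL data, then bulk-load it into Oracle) is usually preferred for large databases.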