Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I'm working on an application that will insert about 11,000 new rows into the database per day and serve approximately 800-1000 queries per second (that's a rough estimate) during working hours. After working hours, the rate will drop to 100-150 queries per second.
The application has both a web and a desktop version, and the web version along with the database will be hosted on a dedicated server with 32/64 GB of RAM and an Intel Xeon E5-1650 v3 (12 threads @ 3.5 GHz), everything on 2 x 240 GB Intel SSDs.
Which database will be best for this application? I'm considering MS SQL Server at the moment, but which database do you think will give me the BEST performance? The database is efficiently designed and the hardware hosting it is good as well: everything is on SSDs with a decent amount of RAM and processing power (this is the only reason for mentioning the server specifications).
So what are your recommendations: MS SQL Server, Oracle DB, MySQL, PostgreSQL, or something else?
Thank you :)
Both SQL Server and Oracle should easily be able to handle what you need with that hardware. When used properly, there's no significant performance difference between the two.
You should focus more on the knowledge/skillset of the people who will be developing & maintaining the system.
Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers.
Closed 5 days ago.
I am evaluating 2 Linux servers in terms of database read and write performance. The database sits on an SSD RAID1 pair, with MySQL 5.7 on each server. Server 2's specs are a bit better overall, except for this:
Server 1: Hardware RAID1, two 2 TB SSD drives.
Server 2: Software RAID1, two 1 TB SSD drives.
Server 2 is faster when it comes to READS.
But Server 2 is slower when it comes to WRITES. A performance test runs about 33% faster on Server 1 than on Server 2 (e.g. 140 sec vs 210 sec). The test is identical on both servers: inserting thousands of rows of data into the database, 64 bytes per row.
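For reference, here is a minimal sketch of the kind of write test being described; the table name, column layout, and single-transaction batching are my assumptions, not necessarily the exact test:

CREATE TABLE IF NOT EXISTS write_test (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload CHAR(60)     NOT NULL   -- keeps each row at roughly 64 bytes
) ENGINE=InnoDB;

START TRANSACTION;   -- one transaction, so the test measures insert throughput rather than one fsync per row
INSERT INTO write_test (payload) VALUES (REPEAT('x', 60));
-- ... repeated (or batched into multi-row INSERTs) until thousands of rows are loaded ...
COMMIT;

Whether each row (or each batch) gets its own fsync makes a big difference in a test like this, and that is exactly where a hardware RAID controller with a write cache tends to beat software RAID.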
Software RAID is generally slower than hardware RAID, so this slower result could be understandable. But it was also suggested that the SIZE of the drive is an additional factor, i.e. that a 2 TB SSD will be faster than a 1 TB SSD.
Does anyone know if this is the case? I have not been able to find anything on this online. Any help would be appreciated.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem.
Closed 8 years ago.
MySQL CPU usage is high (80%-150%). This is a shop with 10,000 products, max 15 users online.
my.cnf
http://pastebin.com/xJNuWTWT
MySQLTuner log:
http://pastebin.com/HxWwucE2
console:
First, some things you can check: run the following query while the CPU load is high:
SHOW PROCESSLIST;
I would also suggest turning off persistent connections.
Then check the MySQL users, just to make sure it is not possible for anyone to connect from a remote server.
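For example, a quick way to see which hosts each account is allowed to connect from (this is standard MySQL; the comment is just a rule of thumb):

SELECT User, Host FROM mysql.user;
-- Any account with Host = '%' (or another broad wildcard) can connect from anywhere.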
At the same time, turn on the MySQL Slow Query Log to keep an eye on queries that take a long time, and use it to make sure you don't have any queries locking key tables for too long.
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
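On MySQL 5.1 and later you can also turn it on at runtime, without editing my.cnf or restarting; the threshold and log path below are just example values:

SET GLOBAL slow_query_log = 'ON';                                   -- start logging slow queries
SET GLOBAL long_query_time = 1;                                     -- log anything slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';   -- example path, adjust as needed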
Also have a look at:
http://dev.mysql.com/doc/refman/5.0/en/memory-use.html
MySQL is only one part of the performance problem. MySQL alone is not designed for a high-traffic or data-heavy website; you should add a caching solution to deal with your problem. 10,000 products is large enough to slow down your website, especially if there is no cache, if the server is standard shared hosting rather than a dedicated server, and so on.
To summarize, you should rethink the architecture to take into consideration the large database, user performance, and dynamic pages.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm currently setting up Ghost on my server. I will host my own blog and probably a few more for my friends.
Ghost uses SQLite by default. SQLite is good for small applications and development environments.
I plan to run my blog for at least 1-2 years, or longer if Ghost works out well. A blog contains a lot of images and text, so the SQLite DB will grow over time with more and more images and so on.
Is it OK to use SQLite for this purpose for several years? MySQL would be much more powerful but also more complex to set up.
What would be the best choice for a Ghost Blog?
Please note that database performance depends not so much on the amount of data (until you run out of local disk space) but on the amount of concurrency.
The SQLite documentation says:
SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
[…]
But if your website is so busy that you are thinking of splitting the database component off onto a separate machine, then you should definitely consider using an enterprise-class client/server database engine instead of SQLite.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have an issue with a website built in Joomla 2.5 with the JA Teline IV template that has 300 concurrent users. It is a soccer magazine, so articles are updated often, sometimes minute by minute during a match.
I have a server with 16 GB of RAM and a quad-core processor, but the website freezes when 300 users are accessing it.
I've done all the frontend optimizations; the last optimization left would be enabling caching.
My questions are:
- Can caching be enabled for logged-in users as well?
- Cache timing: with this type of article, can I set the cache to expire after 1 minute? Is that a good option? Would it improve performance?
Can you suggest what to do? Are there other possible optimizations?
16 GB should be enough to handle 300 concurrent users... it smells like a MySQL server that is not well tuned.
Run this script against your MySQL server:
https://github.com/rackerhacker/MySQLTuner-perl
You have to find the bottleneck: processor? memory? disk? database? network?
After you find the issue, you have to choose the right solution: a bigger processor, more memory, faster disks, database indexes, memory caching, network caching, etc.
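If the bottleneck does turn out to be MySQL, a few of the standard status counters are a reasonable starting point; the variable names are standard MySQL, and the notes are rough rules of thumb rather than hard limits:

SHOW GLOBAL STATUS LIKE 'Threads_running';            -- queries executing right now
SHOW GLOBAL STATUS LIKE 'Slow_queries';               -- cumulative count of slow queries
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';    -- temporary tables spilling to disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';   -- reads that missed the buffer pool and hit disk
SHOW GLOBAL STATUS LIKE 'Table_locks_waited';         -- queries that had to wait for a table lock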
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I came across the free DB2 Express-C today. I have a few questions about it. Can someone please tell me:
How does the free DB2 Express-C compare with MySQL?
Is it a bad idea to switch from MySQL to Express-C?
What are the restrictions on the free version? I couldn't find that information on its website.
DB2 is a full database with all the essential components, such as referential integrity, stored procedures, and ACID transactions, plus some interesting extras such as native XML support.
MySQL has started to adopt some of these essentials in one of its storage engines, but it still remains immature in that regard. MySQL could be better than DB2 for specific cases where transactions are not really important, such as a small web site that shows simple content.
DB2 is NOT open source, and for Express-C you can only download the latest (most recent) version of DB2. That means you cannot apply patches or fix bugs yourself. However, when there is a new release in the DB2 family, the Express-C version is also released, so you always have access to the most recent updates (unlike Oracle, where the express version is still stuck at 10g).
The restrictions on DB2 Express-C are on memory, capped at 4 GB (for buffer pools and other elements), and on CPU: it will use only 2 cores even if the machine has more.
There is no limit on storage or on the number of users.
http://www.ibm.com/developerworks/wikis/display/DB2/DB2+Express-C+FAQ
When your business needs grow, you can move up to another edition in the DB2 family easily, because your applications are already designed to work with DB2.
DB2 works well for very small databases and for very big databases of several TB.
MySQL is open source; it was bought by Sun, which was in turn bought by Oracle. Recently, several open-source projects maintained by Oracle were discontinued and will only continue in paid versions, such as OpenSolaris and OpenOffice. We do not know the future of MySQL with Oracle as its owner.
On the other side, IBM has been working actively with open source (Eclipse, Apache Derby), and in recent years there has been a continuous effort behind DB2 Express-C, so it seems IBM will continue this way.
"DB2 is DB2 is DB2"