Which MySQL storage engine should I use? [closed]

I don't think InnoDB will work for me because I often need to truncate very large tables (several GB) and I need every bit of disk space.
I have 3 tables that contain over 2 million rows each. I need to run a large number of queries per second (>50). I have only one user.

InnoDB tables are transactional and are mainly useful when you may need to roll back a commit if something goes wrong, as in a typical RDBMS environment. Based on the information you have provided, which is next to none, if your only requirement is fast reads and writes I would say go with MyISAM. There are other engines too, including in-memory ones (definitely not a good idea with GBs of data), each with various other properties, but without more details it's hard to say whether one of the others is a better fit.
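As a rough sketch (the table and column names here are hypothetical), the engine is chosen per table when it is created, and TRUNCATE TABLE drops and recreates the table so its disk space is released quickly:

    -- Hypothetical log-style table using MyISAM for fast, non-transactional reads/writes
    CREATE TABLE events (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        created_at DATETIME NOT NULL,
        payload TEXT,
        PRIMARY KEY (id)
    ) ENGINE=MyISAM;

    -- TRUNCATE drops and recreates the table, freeing its space immediately
    TRUNCATE TABLE events;

If you do stay with InnoDB, enabling innodb_file_per_table at least keeps each table in its own file, so truncating or dropping it actually returns space to the filesystem.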

Related

Maximum hits on a table in MySQL [closed]

I need to do some analysis of table usage within my MySQL system. Can anyone point me in the right direction on a method for identifying which table has been queried most often in a given time period? I.e. if there are 30 tables, I want to know which table is accessed the most.
You should use pt-table-usage to analyze the general query log. It will output useful information about table usage (as long as you're not using stored procedures or stored functions, because those will be missed).
Enable the general query log temporarily while your application is running and review it. It can have some performance impact, so you don't want to leave it permanently enabled.
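A minimal sketch, assuming a MySQL version where the general log can be toggled at runtime, and using a hypothetical log path:

    -- Switch on the general query log temporarily (path is just an example)
    SET GLOBAL general_log_file = '/var/log/mysql/general.log';
    SET GLOBAL general_log = 'ON';

    -- ... let the application run for a representative period ...

    -- Switch it off again to avoid the logging overhead
    SET GLOBAL general_log = 'OFF';

Then point pt-table-usage at the log file from the shell (for example pt-table-usage /var/log/mysql/general.log); it prints, for each query, which tables are read and written, and from that you can tally which tables are hit most often.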

Very slow querying in Access, Can a SQL server do any better [closed]

I am working on MSAccess to quickly load the data and perform some analysis.
Since the data exceeds 200,000 records, it gets really slow and either takes forever or never returns a result. I know that this is expected.
Would installing a (freely available) SQL server like MySQL do better in this scenario?
(even for a million similar records)
I cannot ask for paid software to perform analysis.
You're not talking about a whole lot of records. Installing SQLite won't do much for you if the query you are using is not optimized for performance. I recommend attempting to optimize your query and/or increasing the query timeout so results can be returned. Indexing your tables will also improve performance somewhat, but the query optimization is the big thing.
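As a concrete, purely hypothetical illustration of what that optimization looks like: if the analysis repeatedly filters on one column, an index on that column lets the engine avoid scanning all 200,000+ rows.

    -- Hypothetical analysis query filtering on a date range
    SELECT customer_id, SUM(amount)
    FROM orders
    WHERE order_date BETWEEN '2012-01-01' AND '2012-12-31'
    GROUP BY customer_id;

    -- An index on the filtered column turns the full table scan into a range scan
    CREATE INDEX idx_orders_order_date ON orders (order_date);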

What's the fastest way to search through a MySQL database containing billions of records? [closed]

I've been looking for a very fast and efficient way to search a database of enormous size without using anything other than PHP and MySQL. What are some options I could use?
The exact same way you would do it if you had hundreds of rows. That's what indexes are for.
The most you can do is pay attention to the design of the tables, indexing strategy, and throw enough hardware at the solution.
If there were a silver bullet that could be explained in a paragraph or two here and applied universally (you haven't given any insight into your table structure), don't you think it would already be built into MySQL?
The good news is that you will probably find that for most searches MySQL will do the job just fine even on massive databases.
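As one hedged example of what design and indexing look like in practice: for text search over large tables, a full-text index (available on MyISAM, and on InnoDB from MySQL 5.6) is far faster than LIKE '%term%', which cannot use an ordinary index. The table and column names below are made up.

    -- Add a full-text index over the searchable columns
    ALTER TABLE articles ADD FULLTEXT INDEX ft_title_body (title, body);

    -- Search using the index instead of a leading-wildcard LIKE
    SELECT id, title
    FROM articles
    WHERE MATCH(title, body) AGAINST ('some search terms');

    -- EXPLAIN confirms whether the index is actually being used
    EXPLAIN SELECT id, title
    FROM articles
    WHERE MATCH(title, body) AGAINST ('some search terms');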

The size of MySQL and SQL Server databases [closed]

When storing data in MySQL and storing the same data in SQL Server, will the size of the databases differ, or will it be the same for both?
Update: What I mean exactly: if we have 20 GB of disk space and we store exactly the same data (for example, texts and dates) in both SQL Server and MySQL, will the size of both databases be the same?
The size of the DB on disk is entirely implementation dependent, so it will likely be different. If it's not, it's just a fluke. It is also something that you shouldn't really worry about. Just buy enough disk.
It depends on many factors: block/page size, kind of data, kind of tables, type of indexes, recovery model (MS SQL), data types, and so on.
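If you would rather measure than estimate, on the MySQL side the information_schema gives you the on-disk data and index size per table (the schema name below is a placeholder):

    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
    FROM information_schema.tables
    WHERE table_schema = 'your_database'
    ORDER BY size_mb DESC;

SQL Server reports comparable figures via sp_spaceused, so you can load the same data into both and simply compare the numbers.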

Query optimization techniques? [closed]

How do you optimize queries that are already written?
Use EXPLAIN to see what's going on - what indexes are being used and so on.
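For example (the table and columns here are invented):

    -- EXPLAIN shows the access plan: which indexes are candidates (possible_keys),
    -- which one is chosen (key), and roughly how many rows will be examined (rows)
    EXPLAIN SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.status = 'shipped';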
If you cannot change the queries themselves:
Indexes and statistics.
So you don't optimize the query but its execution plan.
If you can't change the query then it really depends on what features are available on your database engine of choice. As Ovidiu said you can use indexes and generate usage statistics to see where the bottleneck is.
Otherwise you can employ techniques like materialised views (sketched below) or horizontal partitioning.
Before you start, make sure you know what your optimisation target is.
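MySQL has no built-in materialised views, but the idea can be approximated with a summary table that is refreshed on a schedule; a rough sketch with invented names:

    -- Precompute an expensive aggregation once instead of in every report query
    CREATE TABLE daily_sales_summary (
        sale_date DATE NOT NULL,
        total_amount DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (sale_date)
    );

    -- Refresh periodically (cron job or MySQL event scheduler)
    REPLACE INTO daily_sales_summary (sale_date, total_amount)
    SELECT DATE(sold_at), SUM(amount)
    FROM sales
    GROUP BY DATE(sold_at);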
IBM Informix Dynamic Server supports a feature that allows you to add optimizer directives to pre-existing SQL when it is executed (without modifying the application). Look up 'external directives' at the Informix web site for more information (or Google 'site:ibm.com informix external directives').