Which of these queries would be faster?
1) A complicated query with subqueries
2) A simple query without subqueries, leaving the extra processing work to the application
I am deciding which approach to take. I do not have real code to test against at the moment. Can those with more experience provide an answer?
It depends on the number of rows in the table and the subqueries you are using. Check the manual for query optimization:
http://dev.mysql.com/doc/refman/5.0/en/optimization.html
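Before committing to either approach, it is also worth comparing the plans MySQL chooses for each. A minimal sketch (the orders and customers tables here are hypothetical, not from the question):

    -- Plan for the subquery version
    EXPLAIN SELECT * FROM orders
    WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US');

    -- Plan for an equivalent join, for comparison
    EXPLAIN SELECT o.*
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.country = 'US';

Comparing the row estimates and access types in the two plans tells you far more than guessing, especially on older MySQL versions, where IN-subqueries were often optimized poorly.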
I am doing some programming practice and going through dynamic programming theory. We always come across two properties:
Optimal Substructure (OSS)
Overlapping Subproblems (OSP)
Any optimization problem with these two characteristics can be solved using DP techniques (memoization or tabulation).
But we know it takes a lot of practice to identify which kind of problem you are dealing with.
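To make the two properties concrete, here is the standard textbook illustration (my example, not from the original post), the Fibonacci recurrence:

    F(n) = F(n-1) + F(n-2),   F(0) = 0,   F(1) = 1

A naive recursive evaluation recomputes the same values many times, e.g. F(n-2) is needed by both F(n) and F(n-1) (overlapping subproblems), and the answer for F(n) is assembled directly from the answers to its subproblems (optimal substructure).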
Let's say we have 4 types:

TYPE 1     TYPE 2     PROBLEMS
OSS        OSP        All DP problems
NON-OSS    OSP        ?
OSS        NON-OSP    ?
NON-OSS    NON-OSP    ?
For example, is there any problem that has Optimal Substructure but NON-Overlapping Subproblems?
I need your help listing problems of each type. This will help me, and whoever reads this, get better at identifying and then solving these problems.
If you have come across any problem (LeetCode, CodeChef, SPOJ, etc.) that you think fits into a '?' category, please comment.
Also, if you have any link/source for learning more about these types based on OSS/OSP, please share it.
I've been looking into a few ways of writing efficient ActiveRecord queries, and I thought I'd put this out there to gather a consensus on what might be best.
@page = @current_shop.pages.where(state: "home").first
At the moment, I've surmised that find_by_sql might be the best route?
Rails helpfully logs execution time for every query and a query of that form is usually quite simple. It's a dual-condition SELECT with a LIMIT applied.
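For a relation like the one above, the generated SQL is roughly the following (assuming a conventional shop_id foreign key; the column names are illustrative):

    SELECT pages.* FROM pages
    WHERE pages.shop_id = 1 AND pages.state = 'home'
    LIMIT 1;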
find_by_sql is reserved for exceptional circumstances, not routine ones. In this case, if you went the "raw query" route you might save, at best, a fraction of a millisecond. And a truly raw query (e.g. via the connection) gives you back raw rows rather than model instances, which you'll then have to do something with.
This is a classic case of premature optimization. If you have a measurable performance problem, as opposed to a suspected performance problem, then you might want to consider caching to avoid the database call entirely instead of trying to execute it slightly faster.
I am trying to decide which of two different schemas will be more efficient for MySQL to handle my data. I can either:
1: Create around 200 tables, each having around 30 rows & 5 columns.
2: Create 1 table, having around 6000 rows & 5 columns.
I am using Laravel for this project and Eloquent will be handling this. Does anybody have any opinions on this matter? I appreciate any/all responses.
Option 2.
For such low row counts, the overhead of joining 200(!) tables, both in programming effort and in computation, far outweighs anything you would save over the single "flat" table. Additionally, MySQL will attempt to cache the entire 6000-row table in RAM, assuming you're not storing massive BLOBs.
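A minimal sketch of option 2 (the table and column names here are assumptions, not from the question): one table with an indexed discriminator column standing in for the 200 separate tables.

    CREATE TABLE items (
        id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        group_id SMALLINT UNSIGNED NOT NULL,  -- which of the ~200 logical sets this row belongs to
        col_a    VARCHAR(255),
        col_b    VARCHAR(255),
        col_c    VARCHAR(255),
        INDEX idx_group (group_id)
    );

    -- Fetching one logical "table" is a single indexed lookup:
    SELECT * FROM items WHERE group_id = 42;

With only 6000 rows the index is almost incidental, but it keeps each lookup cheap as the data grows.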
I've been trying to get my head around some very tricky SQL queries in MySQL (ranging from nested queries, correlated subqueries and group concatenation to temporary tables and self joins). These are often very large and very complicated.
Recently I've been thinking of ways to improve how I do this. Sometimes I try to think about how a single record would be included in a dataset and follow how the keys bring the tables together. Other times I think of the entire joined table and mentally strip away rows according to the WHERE constraints.
Is it worthwhile looking at relational algebra to understand what is going on?
In summary, what strategies do you use for analysing large, complicated SQL queries?
For me, it was just experience. The more I had to interact with such large, complicated queries, and the more questions I asked of professors, friends and coworkers, the better I became at understanding everything that is going on in a query.
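One concrete tactic worth adding (my illustration, not from the original answer): read the query in its logical evaluation order rather than top to bottom. SQL is written SELECT-first but evaluated FROM-first, and tracing a query in that order matches the "strip away rows" strategy from the question. The tables here are hypothetical:

    SELECT c.name, COUNT(*) AS order_count   -- 5. shape the output
    FROM customers c                         -- 1. start from the base table
    JOIN orders o ON o.customer_id = c.id    -- 2. build the joined row set via the keys
    WHERE o.placed_at >= '2015-01-01'        -- 3. strip away rows that fail the filter
    GROUP BY c.name                          -- 4. collapse the survivors into groups
    HAVING COUNT(*) > 5                      -- 4b. strip away whole groups
    ORDER BY order_count DESC;               -- 6. sort the result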
I need to know whether it is more or less efficient to have multiple databases, with an index mapping each dataset to its database.
I do not know to what extent multiple caches could adversely affect performance.
Suppose 10 databases of 2GB of data each rather than a single 20GB one.
For example: the data for userid 293484 is in the third database.
Thanks.
Yes, this is a common technique known as sharding.
http://en.wikipedia.org/wiki/Shard_%28database_architecture%29
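A minimal sketch of the "index of databases" idea from the question (the shard_directory name and columns are assumptions for illustration): a small lookup table that tells the application which database holds a given user's data.

    CREATE TABLE shard_directory (
        user_id  BIGINT UNSIGNED PRIMARY KEY,
        shard_no TINYINT UNSIGNED NOT NULL   -- which of the 10 databases holds this user
    );

    -- The application resolves the shard first, then queries that database:
    SELECT shard_no FROM shard_directory WHERE user_id = 293484;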
Ultimately, the code you will have to write to maintain such a structure will kill you.
Keep it simple, keep it in one database, and use proper design patterns and indexing.
Database engines are designed to deal with large amounts of data, so if your hardware is sufficient, your queries well structured and the design good, you should not have too many performance problems.