Access: Query and Append - ms-access

Question:
I'm running a huge query over a query and then appending the result to a table. Do I really need to run the prior queries first, or can I just convert the parameterized query into an append query and save the processing time of those two queries? I'm trying to remove the redundancy, since each query runs for about 15 minutes, and then the append takes the same again.
Thanks.
-Al

Assuming that the phrase "running a query over a query" means that you have a subquery within your Append query, then the answer is no: you do not need to separately run the subqueries before running the parent query.
Aside from the execution plan, no result data is stored by a query: the data is retrieved from the referenced tables each & every time the query is run.
Therefore, when you nest a query within another query, your parent query is not working with any saved results of prior evaluation of the nested query, but is evaluating the nested query as part of its own evaluation.
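The point above can be illustrated with a small sketch using Python's sqlite3 (standing in for Access here; the table and column names `Orders`/`BigOrders` are hypothetical). A single append statement evaluates its nested query as part of its own execution; nothing needs to be run or materialized beforehand:

```python
import sqlite3

# Hypothetical tables for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (id INTEGER, amount REAL)")
conn.execute("CREATE TABLE BigOrders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(1, 50.0), (2, 500.0), (3, 1200.0)])

# One append statement: the nested SELECT is evaluated as part of this
# INSERT and is never run or stored separately beforehand.
conn.execute("""
    INSERT INTO BigOrders (id, amount)
    SELECT id, amount
    FROM (SELECT id, amount FROM Orders WHERE amount > 100)
""")

print(conn.execute("SELECT COUNT(*) FROM BigOrders").fetchone()[0])  # 2
```

Running the append repeatedly would re-evaluate the nested query each time, which is exactly why no separate prior run is needed.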

Related

MySQL History of the result of the queries?

Is it possible to see the history of the result of the queries? I know your query history is in ~/.mysql_history, but I also want to see the result of those queries.
Given that the result of an arbitrary SELECT query could be many times larger than the database itself (think of cartesian products), those results are not automatically logged anywhere.
The result would also be subject to change, because other updates are constantly changing the data in the database. So the query result would only be relevant to the transaction that ran that query at that time.
If there's any logging of query results, it would have to be something your own application does deliberately. The plain mysql client (which uses ~/.mysql_history) does not log results.
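As a sketch of what such deliberate application-side logging could look like (using sqlite3 for illustration; the `query_log` table and `logged_query` helper are hypothetical, not a MySQL feature):

```python
import sqlite3, json, datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])

# A results log kept by the application itself; ~/.mysql_history records
# only the statements you typed, never their output.
conn.execute("CREATE TABLE query_log (ts TEXT, stmt TEXT, result TEXT)")

def logged_query(stmt):
    """Run a SELECT and save its result rows alongside the statement."""
    rows = conn.execute(stmt).fetchall()
    conn.execute("INSERT INTO query_log VALUES (?, ?, ?)",
                 (datetime.datetime.now().isoformat(), stmt, json.dumps(rows)))
    return rows

logged_query("SELECT x FROM t ORDER BY x")
print(conn.execute("SELECT result FROM query_log").fetchone()[0])  # [[1], [2]]
```

Note the storage concern from the answer applies here too: logging every result set can easily outgrow the database itself.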

MySQL query with derived Table Query runs faster than without it

I have a MySQL Query.
When I run it without the derived table, the execution time is about 6 seconds; with the derived table, it takes less than 1 second. I have looked at the EXPLAIN SELECT plan of the query but don't get much out of it. I can't add indexes to the tables or use views or procedures.
I'm not sure which version to go with, the derived-table query or the simple one. Also, should I base the choice on the EXPLAIN result or on the actual execution time?
Yes, you should consider both the EXPLAIN output and the actual execution time.
I have lots of queries that have inline views ("derived tables" in MySQL parlance). And some of those run much faster than alternate queries that return equivalent results, but which don't use inline views.
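For readers unfamiliar with the term, here is a minimal inline-view ("derived table") example, sketched with sqlite3 (the `orders` table is hypothetical): the aggregate is computed once in the FROM clause, and the outer query then filters it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_id INTEGER, amount REAL);
INSERT INTO orders VALUES (1, 10), (1, 20), (2, 5);
""")

# Inline view ("derived table" in MySQL parlance): the aggregate runs
# once in the FROM clause; the outer WHERE filters its result.
rows = conn.execute("""
    SELECT customer_id, total
    FROM (SELECT customer_id, SUM(amount) AS total
          FROM orders
          GROUP BY customer_id) AS t
    WHERE total > 15
""").fetchall()
print(rows)  # [(1, 30.0)]
```

Whether this form is faster than an equivalent query without the derived table depends on the optimizer and the data, which is why measuring both, as the answer suggests, is the right approach.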

Why is it that when the same query is executed twice in MySQL, it returns two very different response times?

My question is as follows: why, if I run the same query twice in the MySQL shell, do I get two very different response times (i.e., the first run is slow and the second is much faster)? And what can I do to prevent this from happening?
Thank you very much in advance.
This is most likely down to query and/or result caching. If you run a query once, MySQL stores the compiled version of that query and also has the indexes for those tables in memory, so any subsequent runs are vastly faster than the original.
This could be due to (1) query caching being turned on, or (2) differences in the performance state of the system on which the query is executed.
With query caching, when you run a query once, MySQL stores the compiled version of the query and fetches it when the query is run again, so the compilation time is absent from repeated executions of the same statement. Query caching can be turned off, but that is not a good idea.
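A simple way to see the warm-up effect is to time the same statement twice; this sketch uses sqlite3, so only the generic one-time costs (statement parsing, cold caches) apply, not MySQL's query cache specifically:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (n INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(100_000)])

def timed(stmt):
    """Return (rows, elapsed seconds) for one execution of stmt."""
    start = time.perf_counter()
    rows = conn.execute(stmt).fetchall()
    return rows, time.perf_counter() - start

# The first run pays one-time costs; repeats are usually faster. Older
# MySQL versions (the query cache was removed in 8.0) added a further
# effect: identical statements could be answered straight from cached
# results, and SELECT SQL_NO_CACHE ... bypassed that per statement.
rows1, t1 = timed("SELECT COUNT(*) FROM big WHERE n % 7 = 0")
rows2, t2 = timed("SELECT COUNT(*) FROM big WHERE n % 7 = 0")
print(rows1 == rows2)  # True: identical data, even when the timings differ
```

When benchmarking, the warm (repeated) timing is usually the more representative number, since production queries rarely run against completely cold caches.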

dilemma about mysql. using condition to limit load on a dbf

I have a table of about 800,000 records. It's basically a log, which I query often.
I added a condition to fetch only the entries from the last month, in an attempt to reduce the load on the database.
My thinking is:
a) if the database only goes through the last month's entries and then returns them, that's good.
b) if the database goes through the whole table, checking the condition against every single record, it's actually worse than having no condition at all.
What is your opinion?
How would you go about reducing the load on the database?
If the field containing the entry date is keyed/indexed, and is used by the DB software to optimize the query, that should reduce the set of rows examined to the rows matching that date range.
That said, it is commonly understood that you are better off optimizing queries, indexes, database server settings and hardware, in that order. Changing how you query your data can reduce the impact of a query a millionfold for a query that is badly formulated in the first place, depending on the dataset.
If there are no obvious areas for speedup in how the query itself is formulated (joins done correctly or no joins needed, effective use of indexes), adding indexes to help your common queries would be a good next step.
If you want more information about how the database is going to execute your query; you can use the MySQL EXPLAIN command to find out. For example, that will tell you if it's able to use an index for the query.
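The two answers above can be checked concretely. This sketch uses sqlite3's EXPLAIN QUERY PLAN (the analogue of MySQL's EXPLAIN; the `log` table and `entered` date column are hypothetical) to show the planner switching from a full scan to an index range search once the date column is indexed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical log table; "entered" stands in for the entry-date column.
conn.execute("CREATE TABLE log (entered TEXT, message TEXT)")

query = "SELECT entered FROM log WHERE entered >= '2023-05-01'"

# Without an index, the planner has to scan every row to test the condition.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# With an index on the date column, only the matching range is examined.
conn.execute("CREATE INDEX idx_log_entered ON log (entered)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][3])  # e.g. "SCAN log" (full table scan)
print(plan_after[0][3])   # e.g. "SEARCH log USING COVERING INDEX idx_log_entered ..."
```

This is the scenario the questioner worried about in case (b): without the index, the condition really is checked against every record; with it, case (a) holds.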

Stored Procedure slower than LINQ query?

I was doing some testing, and straight LINQ-to-SQL queries run at least 80% faster than calling stored procedures via a LINQ query.
In SQL Server profiler a generic LINQ query
var results = from m in _dataContext.Members
select m;
took only 19 milliseconds as opposed to a stored procedure
var results = from m in _dataContext.GetMember(userName)
select m;
(GetMember being the stored procedure) doing the same query, which took 100 milliseconds.
Why is this?
Edit:
The straight LINQ looks like this in Profiler
SELECT
[t1].[MemberID], [t1].[Aspnetusername], [t1].[Aspnetpassword],
[t1].[EmailAddr], [t1].[DateCreated],
[t1].[Location], [t1].[DaimokuGoal], [t1].[PreviewImageID],
[t1].[value] AS [LastDaimoku],
[t1].[value2] AS [LastNotefied],
[t1].[value3] AS [LastActivityDate], [t1].[IsActivated]
FROM
(SELECT
[t0].[MemberID], [t0].[Aspnetusername], [t0].[Aspnetpassword],
[t0].[EmailAddr], [t0].[DateCreated], [t0].[Location],
[t0].[DaimokuGoal], [t0].[PreviewImageID],
[t0].[LastDaimoku] AS [value], [t0].[LastNotefied] AS [value2],
[t0].[LastActivityDate] AS [value3], [t0].[IsActivated]
FROM
[dbo].[Members] AS [t0]) AS [t1]
WHERE
[t1].[EmailAddr] = @p0
The stored procedure is this
SELECT Members.*
FROM Members
WHERE dbo.Members.EmailAddr = @Username
So you can see the stored procedure query is much simpler, yet it's slower... it makes no sense to me.
1) Compare like with like. Perform exactly the same operation in both cases, rather than fetching all values in one case and doing a query in another.
2) Don't just execute the code once - do it lots of times, so the optimiser has a chance to work and to avoid one-time performance hits.
3) Use a profiler (well, one on the .NET side and one on the SQL side) to find out where the performance is actually differing.
One thing that might make it slower is the SELECT *. Usually a query is faster if the columns are specified, and in particular, if the LINQ query does not use all the possible columns in the query, it will be faster than SELECT *.
I forgot, the proc could also have parameter sniffing issues.
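The SELECT * point can be sketched quickly with sqlite3 (the `Members` table below is a cut-down, hypothetical version of the question's table): naming only the needed columns keeps each result row small, while `*` drags every column along.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Cut-down, hypothetical version of the question's Members table.
conn.execute("CREATE TABLE Members (MemberID INTEGER, EmailAddr TEXT, Bio TEXT)")
conn.execute("INSERT INTO Members VALUES (?, ?, ?)",
             (1, "al@example.com", "x" * 10000))

# SELECT * returns every column, including the wide Bio field;
# naming only the needed columns keeps the result set small.
wide = conn.execute("SELECT * FROM Members WHERE EmailAddr = ?",
                    ("al@example.com",)).fetchone()
narrow = conn.execute("SELECT MemberID FROM Members WHERE EmailAddr = ?",
                      ("al@example.com",)).fetchone()
print(len(wide), len(narrow))  # 3 1
```

This is also why the LINQ query, which lists its columns explicitly, is not doing "the same query" as the procedure's SELECT Members.*.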
As noted in the comments, some of this is that you are not comparing apples to apples. You are trying to compare two different queries, thus getting different results.
If you want to try and determine performance you would want to compare the SAME queries, with the same values etc.
Also, you might try using LinqPad to be able to see the generated SQL to potentially identify areas that are causing slowness in response.
The * will extend the time it takes to run the query by quite a bit. Also, the straight SQL from LINQ you see in profiler is bracketing ([]) all of the object names - this will trim more time off the query execution time for the LINQ query.
May I add to Jon Skeet's answer that, when running the code several times, you should remember to clear any query cache.
I can suggest examining the execution plan for both queries: it seems the server creates the execution plan for an ad-hoc query and for a stored procedure differently. The stored procedure is compiled before its parameters are substituted with their values, and so it may not use the indexes that are used when the parameter is hard-coded or substituted. There is another question on SO about different run times for a stored procedure versus a straight query, with query plan data given for both cases.