I have used IIf(expression, truePart, falsePart) in my SQL queries in MS Access 2010. I have also found another function, Nz(expression, valueIfNull). I want to know which of the two is faster in terms of time and space complexity, and why. For example, if I want to fetch records from a table holding 10k records, which of the two is better to use?
Execution speed of the two will be near identical, except over extraordinarily large numbers of iterations.
In queries, a couple of issues stand out. For me those are the returned data type and the native status of the function.
IIf (Immediate If) preserves the data type. Not only that, you can use it to coerce the data type: if you want an Integer or a Date back, you can pass a value of that type as the returning argument. Nz gives you back a Variant, which Jet typically treats as text.
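For example, a quick sketch of the coercion point, using a hypothetical Orders table with an OrderDate field:

SELECT IIf(OrderDate Is Null, #12/30/1899#, OrderDate) AS SafeDate FROM Orders

Because both branches are dates, SafeDate comes back typed as Date/Time rather than as a text Variant.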
The other issue I mentioned is availability. IIf is implemented by Jet's expression service, so the full Access VBA library doesn't need to be loaded to expose it.
In other words - if you create a query such as
SELECT * FROM TableName WHERE IIF(FieldName Is Null, 0, FieldName) = 0
then you can execute that query from code libraries external to Access, without the Access application being involved; Jet will evaluate the function itself. (Not that this is a particularly good query: it's actually terrible using either function, as discussed below.)
The equivalent
SELECT * FROM TableName WHERE Nz(FieldName, 0) = 0
relies on the fact that Nz is a member of the Access Application object model. It absolutely requires that Access itself executes the query. Not necessarily a common problem, but a consideration.
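As an aside, this is why the example query is terrible with either function: wrapping the column in a function prevents the engine from using any index on FieldName. A sketch of an index-friendly rewrite, using the same hypothetical names:

SELECT * FROM TableName WHERE FieldName = 0 OR FieldName Is Null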
In our company we don't really use stored procedures, because they make the code inflexible to debug and to update when errors occur or database columns change, which takes a lot of time. I would like to know the ideal scenarios for using stored procedures in enterprise-level applications, with some examples. Or is it simply bad to use stored procedures?
Also, if I want to run 6 different queries, I need to make 6 calls to the database, whereas with a stored procedure I can do all of it in one single call. Will this make a significant improvement in performance in that situation? (I have heard that every query's execution plan is cached by MySQL/SQL Server for plain SQL queries and stored procedures alike, which would make no difference in performance.) Please give your opinions!
A stored procedure does not directly make individual queries any faster, but it does allow some other performance benefits:
If your workload is complex enough (multiple queries), stored procedures can perform better by reducing the back-and-forth communication between the client and the server, for example by storing intermediate results in temp tables.
Very complex queries can sometimes be optimized by breaking them into smaller ones (again, using temp tables). Stored procedures offer a nice way of encapsulating this inside the database.
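A minimal sketch of that pattern in MySQL; the orders table and its customer_id, amount, and order_date columns are hypothetical:

DELIMITER //
CREATE PROCEDURE monthly_summary(IN p_month DATE)
BEGIN
  -- The intermediate result stays on the server, so there is no
  -- client round trip between the two steps.
  CREATE TEMPORARY TABLE tmp_totals AS
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    WHERE order_date >= p_month
      AND order_date < p_month + INTERVAL 1 MONTH
    GROUP BY customer_id;

  SELECT customer_id, total FROM tmp_totals WHERE total > 1000;

  DROP TEMPORARY TABLE tmp_totals;
END //
DELIMITER ;

The client then issues a single CALL monthly_summary('2024-01-01'); instead of shipping each intermediate step across the wire.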
Is there any mechanism for storing all the information of a probability distribution (discrete and/or continuous) in a single cell of a table? If so, how is this achieved, and how might one go about querying those cells?
Your question is very vague, so only general hints can be provided. I'd say there are two typical approaches for this (if I got your question right).
First, you can store complex data in a single "cell" (as you call it) inside a database table. The easiest way is to use JSON encoding: you take an array of values, encode it to a string, and store that string. To access the values again, you query the string and decode it back into an array. Newer versions of MariaDB and MySQL offer extensions for accessing such values at the SQL level too, though access is pretty slow that way.
Second, you can use an additional table for the values and store only a reference in the cell. This is actually the typical and preferred approach; it is how the relational database model works. The advantages are that you can directly access each value separately in SQL, that you can use mathematical operations like sums and averages at the SQL level, and that you are not limited in storage space the way you are with a single cell. You can also filter the values, for example by date range or by value boundaries.
In the end, taken together, both approaches offer the same capabilities, though they require different handling of the data. The first approach additionally requires some scripting language on the client side to handle encoding and decoding, but that is typically available anyway. The second approach is considered cleaner and will be faster in most cases, except when you always access the whole set of values at once. So a decision can only be made knowing more specific details about the environment and the goal of the implementation.
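To make both options concrete, a minimal MySQL sketch; the table and column names are hypothetical:

-- Approach 1: JSON in a single column (MySQL 5.7+ / MariaDB 10.2+).
CREATE TABLE experiments_json (
  id INT PRIMARY KEY,
  distribution JSON   -- e.g. '[0.1, 0.2, 0.3, 0.4]'
);
-- Extracting one value at the SQL level is possible, but relatively slow:
SELECT JSON_EXTRACT(distribution, '$[2]') FROM experiments_json WHERE id = 1;

-- Approach 2: a child table with one row per value (the relational way).
CREATE TABLE distribution_values (
  experiment_id INT NOT NULL,
  idx INT NOT NULL,
  probability DOUBLE NOT NULL,
  PRIMARY KEY (experiment_id, idx)
);
-- Aggregates and filters now work directly in SQL:
SELECT SUM(probability) FROM distribution_values WHERE experiment_id = 1;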
Alternatively, if the "table" is an Excel worksheet: say we have a distribution in column B, one value per row in B1:B13, and we want to place the whole distribution in a single cell. In C1 enter:
=B1
and in C2 enter:
=C1 & CHAR(10) & B2
and copy downwards, so each cell in column C accumulates everything above it plus the value to its left. Finally, format cell C13 with text wrap on, and it will display the full distribution, one value per line.
When it comes to SQL DB schema design, is it better practice to add more "boolean"-like fields, or to keep one field and have it be a "mode" representing the different combinations? In either case, can you elaborate on why it's better?
Thanks
If you care about specific values of things, such as IsActive, IsBlue, or IsEnglishSpeaking, then have a separate flag for each one.
There are a few cases where a combined "mode" might be beneficial. However, in most cases you want your columns to represent the variables you actually care about; you don't want special logic dissecting particular values.
In MySQL, a boolean value actually occupies 1 byte (8 bits). MySQL has other data types, such as ENUM and SET, that might do what you want. The BIT data type, despite its name, does not seem to pack flags into a single byte (as happens in some other databases).
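To make the tradeoff concrete, a minimal MySQL sketch with a hypothetical users table:

-- Separate flags: each column is a variable you care about.
CREATE TABLE users (
  id INT PRIMARY KEY,
  is_active BOOLEAN NOT NULL DEFAULT FALSE,          -- TINYINT(1), 1 byte
  is_english_speaking BOOLEAN NOT NULL DEFAULT FALSE -- TINYINT(1), 1 byte
);
-- Queries stay declarative, with no bit-twiddling:
SELECT id FROM users WHERE is_active AND NOT is_english_speaking;

-- The combined-"mode" alternative packs the flags into one integer
-- (bit 0 = active, bit 1 = English-speaking), which forces special
-- logic to dissect particular values:
-- SELECT id FROM users WHERE (mode & 1) = 1 AND (mode & 2) = 0;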
I think I get where you're coming from... The answer is: use boolean for has_property scenarios, and something else for everything else.
Basically, you're talking about a "flag" versus a "variable". Flags are boolean; everything else gets some other type (integer, datetime, etc.). Databases have strict typing, so either work with a strictly typed language or make sure the data you pass to your DB is correctly typed. DBs have incredible flexibility as well; for instance, BLOBs can store arbitrary binary data (like pickled Python objects).
I have a program that captures many different types of structured messages, and I need to persist the messages to a database. What is the forum's view, on design and performance, between:
(a) using one big table for all message types, so that to handle any new message type, new columns are added to the big table. The database is then one table that may end up having hundreds of columns.
(b) using a table for each message type, so that for a new message type, a new table is added to the database.
By performance I mean searching all messages (i.e. searching one table versus searching across joined tables), development work (i.e. knowledge transfer between developers), and maintenance (i.e. when something goes wrong).
This sounds a bit like it's about normalisation, but I am not sure it is.
Thanks!
If I read you right, choice (a) amounts to what is called the "One True Lookup Table" (OTLT). OTLT is an antipattern. You can research it on the web.
Performance is degraded because the lookup has to be done on two fields, the type and the code. With separate tables for each type, the lookup is just on the code.
Queries are more complex, and therefore more likely to be in error.
Data management is harder if you want separate entry forms for each type. If you are going to have just one entry form for all types, you need to be careful when entering new lookup values. Good luck.
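To make the lookup difference concrete, a minimal sketch; the message tables and their columns are hypothetical:

-- (a) One big table: every lookup must filter on both the type and the key.
SELECT payload
FROM all_messages
WHERE msg_type = 'ORDER_PLACED'
  AND msg_key = 42;

-- (b) One table per message type: the lookup is on the key alone,
-- and the schema documents each type's columns explicitly.
SELECT order_id, amount
FROM order_placed_messages
WHERE msg_key = 42;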
I am working on a project where I need to calculate some average values based on users' interaction with a site.
Now, the number of records that need their total average calculated can range from a few to thousands.
My question is: at what threshold would it be wise to store the aggregated data in a separate table, updated through a stored procedure every time a new record is generated, instead of just calculating the average every time it is needed?
Thanks in advance.
Don't do it until you start having performance problems caused by the time it takes to aggregate your data.
Then do it.
If discovering this bottleneck in production is unacceptable, then run the system in a test environment that accurately matches your production environment and load in test data that accurately matches production data. If you hit a performance bottleneck in that environment that is caused by aggregation time, then do it.
You need to weigh the need for current data against the need for quick data. If you absolutely need current data, then you have to live with longer delays in your queries. If you absolutely need your data as fast as possible, then you will have to deal with older data.
You can time your queries, time the insertion into a separate table, and evaluate which approach seems to best fit your needs.
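For the update-on-insert side of that comparison, a minimal MySQL sketch, using a trigger rather than an application-driven stored procedure call; the interactions and interaction_stats tables and their columns are hypothetical:

CREATE TABLE interaction_stats (
  user_id INT PRIMARY KEY,
  total_value BIGINT NOT NULL DEFAULT 0,
  record_count INT NOT NULL DEFAULT 0   -- avg = total_value / record_count
);

DELIMITER //
CREATE TRIGGER interactions_after_insert
AFTER INSERT ON interactions
FOR EACH ROW
BEGIN
  -- Keep a running sum and count; storing these avoids recomputing
  -- the average over the whole table on every read.
  INSERT INTO interaction_stats (user_id, total_value, record_count)
  VALUES (NEW.user_id, NEW.value, 1)
  ON DUPLICATE KEY UPDATE
    total_value = total_value + NEW.value,
    record_count = record_count + 1;
END //
DELIMITER ;

-- Reading the average is then a single-row lookup:
SELECT total_value / record_count AS avg_value
FROM interaction_stats
WHERE user_id = 123;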