Best practices: are SQL views really worth it? [duplicate] - mysql

This question already has answers here:
Why do you create a View in a database?
(25 answers)
Closed 8 years ago.
I am building a new web application with data stored in the database. Like many web applications, I need to expose data from complex SQL queries (queries with conditions across multiple tables). I was wondering if it could be a good idea to build my queries in the database as SQL views instead of building them in the application. I mean, what would be the benefit of that? Database performance? Will it take longer to code? To debug?
Thank you

This can't really be answered objectively, since it depends on the case.
With views, functions, triggers and stored procedures you can move part of the logic of your application into the database layer.
This can have several benefits:
performance -- you might avoid round trips of data, and certain treatments are handled more efficiently using the whole range of DBMS features.
concision -- some treatments of data are expressed more easily with the DBMS features.
But also certain drawback:
portability -- the more you rely on specific features of the DBMS, the less portable the application becomes.
maintainability -- the logic is scattered across two different technologies, which means more skills are needed for maintenance, and local reasoning is harder.
If you stick to the SQL-92 standard, it's a good trade-off.
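As a minimal sketch of the "views" part of this trade-off (all table and column names here are invented for illustration): the multi-table logic lives in the database as a view, and the application issues one simple SELECT against it.

```sql
-- Hypothetical schema: the "complex" join is defined once, in the DB.
CREATE VIEW active_customer_orders AS
SELECT c.customer_id, c.name, o.order_id, o.total, o.ordered_at
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.customer_id
WHERE c.status = 'active';

-- The application side then only needs:
SELECT * FROM active_customer_orders
WHERE ordered_at >= '2024-01-01';
```

Note this only sticks to standard SQL; nothing here depends on MySQL-specific features, which is what keeps the portability cost low.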
My 2 cents.

I think your question is a little bit confusing in what you are trying to achieve (gaining knowledge about SQL views, or learning how to structure your application).
I believe all database logic should be stored at the database tier, ideally in a stored procedure or function rather than in the application logic. The application should then connect to the database, retrieve the data from these procedures/functions, and expose that data.
One of the benefits of storing this at the database tier is taking advantage of cached execution plans in SQL Server (which you may not get by running the query directly from the application logic). Another advantage is that you decouple the application: if the database needs to change, you don't have to modify the application directly.
For a direct point on Views, the advantages of using them include:
Restrict a user to specific rows in a table.
For example, allow an employee to see only the rows recording his or her work in a labor-tracking table.
Restrict a user to specific columns.
For example, allow employees who do not work in payroll to see the name, office, work phone, and department columns in an employee table, but do not allow them to see any columns with salary information or personal information.
Join columns from multiple tables so that they look like a single table.
http://msdn.microsoft.com/en-us/library/aa214068(v=sql.80).aspx
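The row-restriction case above can be sketched in MySQL roughly like this (table, column, and account names are invented; the `GRANT` assumes a default database is selected):

```sql
-- Each user sees only their own rows. SQL SECURITY INVOKER makes
-- CURRENT_USER() evaluate to the querying user, not the view's definer.
CREATE SQL SECURITY INVOKER VIEW my_labor_entries AS
SELECT entry_id, task, hours_worked, entry_date
FROM labor_tracking
WHERE employee_login = CURRENT_USER();

-- Grant access to the view only; no privilege on labor_tracking itself.
GRANT SELECT ON my_labor_entries TO 'some_employee'@'localhost';
```

The key point is that privileges are granted on the view, never on the underlying table.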

Personally I prefer views, especially for reports/apps, since if there is a problem in the data you only have to update a single view rather than rebuild the app or manually edit the queries.

SQL views have many uses. Try first reading about them and then asking a more specific question:
http://odetocode.com/code/299.aspx
http://msdn.microsoft.com/en-us/library/ms187956.aspx

I have seen that views are used a lot to do two things:
Simplify queries: if you have a HUGE select with join after join after join, you can create a view that will have the same performance, but the query becomes only a couple of lines.
For security reasons: if you have a table with information that shouldn't be accessible to all developers, you can create a view and grant privileges on the view instead of on the main table. E.g.:
Table users: name, last_name, user_id, credit_card, social_security. You create a view exposing only: name, last_name, user_id.
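Written out, that example might look like this (the table name `users` and the account are placeholders):

```sql
-- Expose only the non-sensitive columns through a view.
CREATE VIEW users_public AS
SELECT name, last_name, user_id
FROM users;          -- credit_card and social_security stay hidden

-- Developers query the view; they get no privilege on users itself.
GRANT SELECT ON users_public TO 'developer'@'%';
```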

You can run into performance issues and constraints on the types of queries you can run against a view.
Restrictions on what you can do with views.
http://dev.mysql.com/doc/refman/5.6/en/view-restrictions.html
Looks like the big one is that you cannot create an index on the view. This could cause a big performance hit if your final result set is large.
This is also a good forum discussing views: http://forums.mysql.com/read.php?100,22967,22967#msg-22967
In my experience a well indexed table, using the right engine, and properly tuned (for example setting an appropriate key_buffer value) can perform better than a view.
Alternatively you could create a trigger that updates a table based on the results of other tables. http://dev.mysql.com/doc/refman/5.6/en/triggers.html
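A sketch of that trigger alternative: maintain a summary table by hand so reads never touch a view (schema and names are invented; assumes an `orders` table with `customer_id` and `amount` columns).

```sql
-- Summary table that stands in for a "materialized view".
CREATE TABLE order_totals (
  customer_id INT PRIMARY KEY,
  total       DECIMAL(12,2) NOT NULL DEFAULT 0
);

DELIMITER //
CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
  -- Upsert the running total whenever an order is inserted.
  INSERT INTO order_totals (customer_id, total)
  VALUES (NEW.customer_id, NEW.amount)
  ON DUPLICATE KEY UPDATE total = total + NEW.amount;
END //
DELIMITER ;
```

Unlike a view, `order_totals` is a real table, so it can be indexed and tuned like any other; the cost is that every write path (UPDATE, DELETE) needs a matching trigger to keep it consistent.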

The technique you are describing is called denormalization. Cal Henderson, a software engineer from Flickr, openly supports this technique.
In theory a JOIN is one of the most expensive operations, so it is good practice to denormalize: you transform n queries with m JOINs each into 1 query with m JOINs plus n queries that select from a view.
That said, the best way is to test it for yourself. Because what could be incredibly good for Flickr may not be so good for your application.
Moreover, the performance of views may vary a lot from one RDBMS to another. For instance, depending on the RDBMS, views can be updated when the original table is changed, can have indexes, etc.

Related

Best practices for creating a huge SQL table

I want to create a table about "users" for each of the 50 states. Each state has about 2GB worth of data. Which option sounds better?
Create one table called "users" that will be 100GB large OR
Create 50 separate tables called "users_{state}", each which will be 2GB large
I'm looking at two things: performance, and style (best practices)
I'm also running RDS on AWS, and I have enough storage space. Any thoughts?
EDIT: From the looks of it, I will not need info from multiples states at the same time (i.e. won't need to frequently join tables if I go with Option 2). Here is a common use case: The front-end passes a state id to the back-end, and based on that id, I need to query data from the db regarding the specified state, and return data back to front-end.
Are the 50 states truly independent in your business logic? Meaning your queries would only need to run over one given state most of the time? If so, splitting by state is probably a good choice. In this case you would only need joining in relatively rarer queries like reporting queries and such.
EDIT: Based on your recent edit, splitting by state is the route I would recommend. You will get better performance from the table partitioning when no joining is required, and there are multiple other benefits to having the smaller partitioned tables like this.
If your queries would commonly require joining across a majority of the states, then you should definitely not partition like this. You'd be better off with one large table and just build the appropriate indices needed for performance. Most modern enterprise DB solutions are capable of handling the marginal performance impact going from 2GB to 100GB just fine (with proper indexing).
But if your queries on average would need to join results from only a handful of states (say no more than 5-10 or so), the optimal solution is a more complex gray area. You will likely be able to extract better performance from the partitioned tables with joining, but it may make the code and/or queries (and all coming maintenance) noticeably more complex.
Note that my answer assumes the more common access frequency breakdowns: high reads, moderate updates, low creates/deletes. Also, if performance on big data is your primary concern, you may want to check out NoSQL (for example, Amazon AWS DynamoDB), but this would be an invasive and fundamental departure from the relational system. But the NoSQL performance benefits can be absolutely dramatic.
Without knowing more of your model, it will be difficult for anyone to make judgement calls about performance, etc. However, from a data modelling point of view, when thinking about a normalized model I would expect to see a User table with a column (or columns, in the case of a compound key) which hold the foreign key to a State table. If a User could be associated with more than one state, I would expect another table (UserState) to be created instead, and this would hold the foreign keys to both User and State, with any other information about that relationship (for instance, start and end dates for time slicing, showing the timespan during which the User and the State were associated).
Rather than splitting the data into separate tables, if you find that you have performance issues you could use partitioning to split the User data by state while leaving it within a single table. I don't use MySQL, but a quick Google turned up plenty of reference information on how to implement partitioning within MySQL.
Until you try building and running this, I don't think you know whether you have a performance problem or not. If you do, following the above design you can apply partitioning after the fact and not need to change your front-end queries. Also, this solution won't be problematic if it turns out you do need information for multiple states at the same time, and won't cause you anywhere near as much grief if you need to look at User by some aspect other than State.
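The partitioning approach mentioned above might look roughly like this in MySQL (columns are invented; note MySQL requires the partitioning column to be part of every unique key, hence the composite primary key):

```sql
-- One "users" table, partitioned by state, instead of 50 tables.
CREATE TABLE users (
  user_id  BIGINT  NOT NULL,
  state_id TINYINT NOT NULL,
  name     VARCHAR(100),
  PRIMARY KEY (user_id, state_id)
)
PARTITION BY LIST (state_id) (
  PARTITION p_ca VALUES IN (5),
  PARTITION p_ny VALUES IN (32),
  PARTITION p_tx VALUES IN (43)
  -- ...in practice, one partition per state id
);

-- Queries filtered by state_id touch only that state's partition,
-- while the front-end still sees a single table:
SELECT name FROM users WHERE state_id = 32;
```

The advantage over 50 separate tables is exactly what the answer describes: the application queries one logical table, and partition pruning happens transparently.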

Creating a MySQL Database Schema for large data set

I'm struggling to find the best way to build out a structure that will work for my project. The answer may be simple but I'm struggling due to the massive number of columns or tables, depending on how it's set up.
We have several tools, each that can be run for many customers. Each tool has a series of questions that populate a database of answers. After the tool is run, we populate another series of data that is the output of the tool. We have roughly 10 tools, all populating a spreadsheet of 1500 data points. Here's where I struggle... each tool can be run multiple times, and many tools share the same data point. My next project is to build an application that can begin data entry for a tool, but allow import of data that shares the same datapoint for a tool that has already been run.
A simple example:
Tool 1 - company, numberofusers, numberoflocations, cost
Tool 2 - company, numberofusers, totalstorage, employeepayrate
So if the same company completed tool 1, I need to be able to populate "numberofusers" (or offer to populate) when they complete tool 2 since it already exists.
I think what it boils down to is: would it be better to create a structure that has 1500 tables, one for each data element (with additional data around each element), or to create a single massive table, something like...
customerID(FK), EventID(fk), ToolID(fk), numberofusers, numberoflocations, cost, total storage, employee pay,.....(1500)
If I go this route and have one large table, I'm not sure how that will impact performance. Likewise, how difficult will it be to maintain 1500 tables?
Another dimension is that it would be nice to have a description of each field:
numberofusers,title,description,active(bool). I assume this is only possible if each element is in its own table?
Thoughts? Suggestions? Sorry for the lengthy question, new here.
Build a main table with all the common data: company, # users, .. other stuff. Give each row a unique id.
Build a table for each unique tool, with the company id from above and any data unique to that implementation. Give each table a primary (unique) key on 'tool use' and 'company'.
This covers the common data in one place, identifies each 'customer' and provides for multiple uses of a given tool for each customer. Every use and customer is trackable and distinct.
More about normalization here.
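One possible shape of that layout, using invented names and just the example columns from the question:

```sql
-- Common data lives once, keyed by company.
CREATE TABLE companies (
  company_id    INT AUTO_INCREMENT PRIMARY KEY,
  name          VARCHAR(200) NOT NULL,
  num_users     INT,
  num_locations INT
);

-- One table per tool; each row is one run of that tool.
CREATE TABLE tool1_runs (
  run_id     INT AUTO_INCREMENT PRIMARY KEY,
  company_id INT NOT NULL,
  cost       DECIMAL(12,2),
  FOREIGN KEY (company_id) REFERENCES companies (company_id)
);
```

When Tool 2 is started for a company that already ran Tool 1, the shared data points (`num_users` here) are read straight from `companies`, which is exactly the "offer to populate" behaviour the question asks for.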
I agree with etherbubunny on normalization, but with larger datasets there are performance considerations that quickly become important. Joins, which are often required in normalized databases to display human-readable information, can be performance killers on even medium-sized tables, which is why a lot of data warehouse models use de-normalized datasets for reporting. This is essentially pre-building the joined reporting data into new tables, with heavy use of indexing, archiving, and partitioning.
In many cases smart use of partitioning on its own can also effectively help reduce the size of the datasets being queried. This usually takes quite a bit of maintenance unless certain parameters remain fixed though.
Ultimately, in your case (and most others) I highly recommend building it in the way you are best able to maintain and understand, and then performing regular performance checks via slow query logs, EXPLAIN, and performance monitoring tools like Percona's tool set. This will give you insight into what is really happening and give you some data to come back here or to the MySQL forums with. We can always speculate here, but ultimately the real data and your setup will be the driving force behind what is right for you.

MySql views performance [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
If you are going down the road of using views, how can you ensure good performance?
Or is it better not to use views in the first place and just incorporate the equivalent into your select statements?
It Depends.
It totally depends on what you are viewing through the view, but most probably it reduces your effort and gives higher performance. When a SQL statement references a nonindexed view, the parser and query optimizer analyze the source of both the SQL statement and the view and then resolve them into a single execution plan. There is not one plan for the SQL statement and a separate plan for the view.
A view is not compiled. It's a virtual table made up of other tables; when you create it, it doesn't reside somewhere on your server. The underlying queries that make up the view are subject to the same performance gains or dings from the query optimizer. I've never benchmarked a view against its underlying query, but I would imagine the performance may vary slightly. You can get better performance from an indexed view if the data is relatively static; this may be what you were thinking of as "compiled".
Advantages of views:
View the data without storing it in an object.
Restrict the view of a table, i.e. hide some of the columns in the table.
Join two or more tables and show it as one object to user.
Restrict the access of a table so that nobody can insert the rows into the table.
See these useful links:
Performance of VIEW vs. SQL statement
Is a view faster than a simple query?
Mysql VIEWS vs. PHP query
Are MySql Views Dynamic and Efficient?
Materialized View vs. Tables: What are the advantages?
Is querying over a view slower than executing SQL directly?
A workaround for the performance problems of TEMPTABLE views
See performance gains by using indexed views in SQL Server
Here's a tl;dr summary, you can find detailed evaluations from Peter Zaitsev and elsewhere.
Views in MySQL are generally a bad idea. At Grooveshark we consider them to be harmful and always avoid them. If you are careful you can make them work but at best they are a way to remember how to select data or keep you from having to retype complicated joins. At worst they can cause massive inefficiencies, hide complexity, cause accidental nested subselects (requiring temporary tables and leading to disk thrashing), etc.
It's best to just avoid them, and keep your queries in code.
I think the blog by Peter Zaitsev has most of the details. Speaking from personal experience, views can perform well if you generally keep them simple. At one of my clients they kept layering one view on top of another, and it ended up being a performance nightmare.
Generally I use views to show a different aspect of a table. For example, in my employees table, show me the managers, or hide the salary field from non-HR employees. Also, always make sure you run an EXPLAIN on the query and the view to understand exactly what is happening inside MySQL.
If you want solid proof in your scenario, I would suggest that you test. It is really hard to say that using views is always a performance killer; then again, a badly written view is probably going to kill your performance.
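Such a test is just a pair of EXPLAINs, one on the view and one on the inlined query (view and table names here are placeholders):

```sql
-- Plan for the query going through the view:
EXPLAIN SELECT * FROM quarterly_sales_view WHERE region = 'EMEA';

-- Plan for the equivalent query written inline:
EXPLAIN SELECT s.region, SUM(s.amount) AS total
FROM sales AS s
WHERE s.region = 'EMEA'
GROUP BY s.region;
```

If the view's plan shows a DERIVED step (a materialized temporary table) where the inline plan does not, MySQL could not merge the view into the outer query, and that is usually where the performance gap comes from.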
They serve their purpose, but the hidden complexities and inefficiencies usually outweigh a more direct approach. I once encountered a SQL statement that was joining on two views and sorting the results. The views were sorting as well, so the execution time could be measured in what seemed like hours.
A thing not mentioned so far but making a huge difference is adequate indexing of the views' source tables.
As mentioned above, views do not reside in your DB but are rebuilt every time. Thus everything that makes the rebuild easier for the DB increases the performance of the view.
Often, views join data in a way that is very bad for storage (not in normal form) but very good for further usage (doing analysis, presenting data to the user, ...), joining and aggregating data from different tables.
Whether or not the columns being operated on are indexed makes a huge difference to the performance of a view. If the tables and their relevant columns are already indexed, accessing the view does not require recomputing things over and over again first. (On the downside, the indexes must be maintained whenever data is manipulated in the source tables.)
! Index all columns used in JOINs and GROUP BY clauses in your CREATE VIEW statement !
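Concretely, that advice looks like this (view, table, and column names are invented):

```sql
-- A view that joins and groups on employees.dept_id.
CREATE VIEW dept_headcount AS
SELECT d.dept_name, COUNT(*) AS employees
FROM employees AS e
JOIN departments AS d ON d.dept_id = e.dept_id
GROUP BY d.dept_name;

-- Index the source-table column the view joins on; without this,
-- every SELECT from the view scans employees in full.
CREATE INDEX idx_employees_dept ON employees (dept_id);
-- departments.dept_id should already be its PRIMARY KEY.
```

The index is created on the base table, not the view: MySQL cannot index a view, but it will happily use base-table indexes when executing one.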
If we are discussing "if you use views, how to ensure performance", and not the performance effect of views in general, I think that it boils down to restraint (as in yourself).
You can get into big trouble if you just write views to make your queries simple in all cases, but do not take care that your views are actually useful performance-wise. Any queries you run in the end should execute sanely (see the comment example from that link by @eggyal). Of course that's a tautology, but it's not any less valuable for it.
You especially need to be careful not to build views on top of views, just because that might make the new view easier to write.
In the end you need to look at the reason you are using views. Any time you do this to make life easier on the programming end, you might be better off with a stored procedure, IMHO.
To keep things under control you might want to write down why you have a certain view, and decide why you are using it. For every 'new' use within your programming, recheck if you actually need the view, why you need it, and if this would still give you a sane execution-path. Keep on checking your uses to keep it speedy, and keep checking if you really need that view.

What are the ramifications of having a ton of views in MySql

I have a multi-tenant MySQL database. For most things I use a tenant-id column in tables to discriminate but for a few purposes, it is much better to have a view that is already filtered by tenant id. So for example, I might have a view called 'quarterly_sales_view' but for tenant 30 I would have 'quarterly_sales_view_30' and for tenant 51 I would have 'quarterly_sales_view_51'. I create these views dynamically and everything is working great but we have just a few tenants right now and I realize this would never work for millions of tenants.
My question is, am I going to run into either performance problems or just hard limits with a few thousand, few hundred, or few dozen custom views?
ADDITIONAL INFO:
I am using a 3rd party tool (immature) that requires a table name (or view name, since it's read-only) and operates on that. In the context it's working, I can't let it have access to the entire view, so I create another view that is simply defined as SELECT * FROM MasterView WHERE TenantId = 30. I recognize this as a workaround for a poor limitation of having to have the tool work on the table directly. Luckily this tool is open source, so I can tweak it to use a different approach. I just wanted to have an idea of how long I had before the current approach blew up.
The primary concern within this question (IMO) should be less with performance and more with the design. First, the number of views should not affect performance, but why do you need a view per tenant? Is it not possible to simply filter by tenant ID on a more generic view? E.g.:
SELECT * FROM vwMyTenants WHERE TenantId = 30
Whatever the reason, you should reconsider your approach because it is a sign of design smell.
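Side by side, the two approaches from the question look like this (schema is invented; `quarterly_sales_view` is the name used in the question):

```sql
-- Generic approach: one view for all tenants...
CREATE VIEW quarterly_sales_view AS
SELECT tenant_id, quarter, SUM(amount) AS total
FROM sales
GROUP BY tenant_id, quarter;

-- ...filtered per tenant at query time:
SELECT quarter, total FROM quarterly_sales_view WHERE tenant_id = 30;

-- Per-tenant workaround (what the question generates dynamically),
-- needed only because the tool demands a bare table/view name:
CREATE VIEW quarterly_sales_view_30 AS
SELECT quarter, total FROM quarterly_sales_view WHERE tenant_id = 30;
```

The per-tenant views are thin wrappers, so their cost is mostly catalog clutter rather than query time, but the generic view plus a WHERE clause is the cleaner design once the tool can be patched.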

Is it better to create tables based on content, or views?

I'm learning MySQL and was working on a database for work. Everything's fine so far, but I had a question. I am organizing financial statements for firms (balance sheet table, income statement table, cash flow table, etc.), and most companies have quarterly statements (which are unaudited) and annual statements (which are audited). Right now, for each statement I have a column that flags it as annual or quarterly.
It's not likely that someone will be running a report on an audited and an unaudited statement at the same time, so I was wondering whether it was worth it to create one table for audited statements and one for unaudited. My reasoning was that eventually the data will get fairly large, and I thought the smaller the tables, the faster the performance.
So when I design a database, should I design based on the content (i.e. group everything that's the same, regardless) or should I group based on how people will access it?
Another question this raises: should I group financial statements by country, since 90% of the analysis done at our firm is within the same country?
This is impossible to answer definitively without knowing the whole problem.
However, usually you want a single table to represent each logical entity in your system. From the sound of it, quarterly and annual statements represent the same logical entity, but differ by a single category column/field. The same holds true for the country question--if the only difference is the country (a categorization), then they likely should all be stored in the same table.
If you were to split your data into separate tables by category, your data would be scattered across multiple tables, and would be very hard to query. For example, if you wanted a count of all statements in the system, you would have to query ALL country tables and add the results together.
Edit: Joe Celko calls this anti-pattern "Attribute Splitting".
First of all I have to point out, I'm not a professional DB designer.
But if I were you, in this case I would create one table, as the entities are basically the same.
If you fear MySQL's performance on larger datasets, maybe it would be better to start building your app on Postgres. You can boost MySQL's performance with stored functions/procedures, or maybe views if you have to run complicated queries, and of course you can use memcached or any NoSQL stuff to let the SQL rest a bit.
If you are sure that users will mainly search for only one type of record or the other, you can build three tables: one for all of the records, and one each for the audited and unaudited ones. You can keep them synchronized with triggers (ON UPDATE/DELETE/INSERT). They would work like views, but I think (not tested) they would be faster than views. In this case you only have to manage the first "large" table: if you insert an audited record, the trigger fires and puts a record into the audited table, and so on...
Best wishes!
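The trigger-synchronized split described above might be sketched like this (table and column names are invented; only the INSERT trigger is shown, and matching UPDATE/DELETE triggers would be needed to stay consistent):

```sql
DELIMITER //
CREATE TRIGGER statements_after_insert
AFTER INSERT ON statements
FOR EACH ROW
BEGIN
  -- Route each new row into the matching side table.
  IF NEW.audited = 1 THEN
    INSERT INTO statements_audited (statement_id, firm_id, period, amount)
    VALUES (NEW.statement_id, NEW.firm_id, NEW.period, NEW.amount);
  ELSE
    INSERT INTO statements_unaudited (statement_id, firm_id, period, amount)
    VALUES (NEW.statement_id, NEW.firm_id, NEW.period, NEW.amount);
  END IF;
END //
DELIMITER ;
```

The application only ever writes to `statements`; the side tables exist purely to keep the common read paths small.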
I agree with Phil and Damien - one table is better. What you want is one table per type of real business thing. If you design your tables to resemble real things, even abstract or conceptual things, then your data design is more likely to stand the test of time. Once you've sketched out a schema based on the real things you have data about, then you can go back and apply the rules of normalization to formalize your design.
As a rule, it is a bad idea to design for a performance problem you are worried about, but haven't actually seen. Your intuition about big tables being slower might actually be wrong. Most DBMS systems like bigger tables, at least to a point. When tables are big the query optimizers choose to use indexes. When tables are small they often end up getting full table scans, which can really slow down concurrent access. If your tables get so big that they are beyond the capabilities of your DBMS then it is time to consider either archiving out old data that you aren't using anymore or buying a more scalable DBMS.