Performance suggestions for a MySQL table definition - mysql

I am concerned about the performance of a database table I use to store data for a customer survey application.
The table stores customer responses from a survey. Since the survey questions change according to the customer, instead of defining the table schema with one column per question ID, I thought I would define it as follows:
customerdata(customerid varchar,
partkey varchar,
questionkey varchar,
value varchar,
version int,
lastupdate timestamp)
Where:
partkey: the shortcode of the part (part1, part2, ...)
questionkey: the shortcode of the question, e.g. age, gender, etc.
Since some customers fill in the survey twice, three times, etc., I have added the version column.
With this design, customerid, partkey, questionkey, and version together form the primary key.
I am concerned about the performance of such a design. Should I define indexes on the other key columns? Would that help? So far, for 30 customers, I have 7000 records. I expect a maximum of 300-500 customers. What do you think?

Sounds like a pretty small database. I doubt you'll have performance issues but if you detect any when querying on partkey, questionkey, or version later on you can always add one or more indexes to solve the problem at that time. There's no need to solve a performance problem you don't have and probably never will have.
Performance issues will arise only if you have to perform time-sensitive queries that don't use the customerid field as the primary filter. I suspect you'll have some queries like that (when you want to aggregate data across customers) but I doubt they'll be time-sensitive enough to be impacted by the one second or less response time I would expect to see from such a small collection of data. If they are, add the index(es) then.
Also, note that a table has only a single PRIMARY KEY. That key can use more than one column, so you can say that the columns customerid, partkey, questionkey, and version are part of the PRIMARY KEY, but you can't say they're all "primary keys".
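For illustration, here is a minimal sketch of what that composite PRIMARY KEY could look like, plus an optional secondary index you could add later if needed (the column lengths and the index name are assumptions, not taken from the question):
CREATE TABLE customerdata (
  customerid  VARCHAR(36)  NOT NULL,
  partkey     VARCHAR(20)  NOT NULL,
  questionkey VARCHAR(20)  NOT NULL,
  value       VARCHAR(255),
  version     INT          NOT NULL,
  lastupdate  TIMESTAMP,
  PRIMARY KEY (customerid, partkey, questionkey, version)
);
-- Only worth adding if cross-customer queries on questionkey turn out to be slow:
ALTER TABLE customerdata ADD INDEX questionkey_ix (questionkey, partkey);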

Row-count-wise, I have seen MySQL databases with over 100,000 rows run just fine, so you should be okay.
Although it's a different case if you run complicated queries, which depend more on database design than on the number of rows.

Related

MySQL query and insertion optimisation with varchar(255) UUIDs

I think this question has been asked in some way, shape, or form, but I couldn't find a question that asked exactly what I wish to understand, so I thought I'd put it here.
Problem statement
I have built a web application with a MySQL database of, say, customer records, with an INT(11) id PK AI field and a VARCHAR(255) uuid field. The uuid field is not indexed or set as unique. The uuid field is used as a public identifier, so it's part of URLs etc., e.g. https://web.com/get_customer/[uuid]. This was done because the UUID is 'harder' to guess for a regular John Doe, though I understand that it is certainly not 'unguessable' in theory. But the issue now is that as the database grows larger, I have observed that the query to retrieve a particular customer record is taking longer to complete.
My thoughts on how to solve the issue
The solution that comes to mind is to make the uuid field unique and also index it. But I've been doing some reading on this, and various blog posts and StackOverflow answers describe putting indexes on UUIDs as really bad for performance. I also read that it will increase the time it takes to insert a new customer record, as MySQL will take time to find the correct location in which to place the record within the index.
The above-mentioned https://web.com/get_customer/[uuid] can be accessed without authentication, which is why I'm not using the id field for it. I could consider moving to integer-based unique identifiers (I don't need the UUIDs to be universally unique; they just need to be unique for that particular table). Will that improve the indexing performance, and in turn the insertion and querying performance?
Is there a good blog post or information page on how to best set up a database for such a requirement? I need the ability to store a customer record identifier which is 'hard' to guess, easy to insert, and easy to query in a large data set.
Any assistance is most appreciated. Thank you!
The received wisdom you mention about putting indexes on UUIDs only comes up when you use them in place of autoincrementing primary keys. Why? The entire table (InnoDB) is built behind the primary key as a clustered index, and bulk loading works best when the index values are sequential.
You certainly can put an ordinary index on your UUID column. If you want your INSERT operations to fail in the astronomically unlikely event you get a random duplicate UUID value you can use an index like this.
ALTER TABLE customer ADD UNIQUE INDEX uuid_constraint (uuid);
But duplicate UUIDv4s are very rare indeed. They have 122 random bits, and most software generating them these days uses cryptographic-quality random number generators. Omitting the UNIQUE index is, I believe, an acceptable risk. (Don't use UUIDv1, 2, 3, or 5: they're not hard enough to guess to keep your data secure.)
If your UUID index isn't unique, you save time on INSERTs and UPDATEs: they don't need to look at the index to detect uniqueness constraint violations.
Edit. When UUID data is in a UNIQUE index, INSERTs are more costly than they are in a similar non-unique index. Should you use a UNIQUE index? Not if you have a high volume of INSERTs. If you have a low volume of INSERTs it's fine to use UNIQUE.
This is the index to use if you omit UNIQUE:
ALTER TABLE customer ADD INDEX uuid (uuid);
To make lookups very fast you can use covering indexes. If your most common lookup query is, for example,
SELECT uuid, givenname, surname, email
FROM customer
WHERE uuid = :uuid
you can create this so-called covering index.
ALTER TABLE customer
ADD INDEX uuid_covering (uuid, givenname, surname, email);
Then your query will be satisfied directly from the index and therefore be faster.
There's always an extra cost to INSERT and UPDATE operations when you have more indexes. But the cost of a full table scan for a query is, in a large table, far far greater than the extra INSERT or UPDATE cost. That's doubly true if you do a lot of queries.
In computer science there's often a space / time tradeoff. SQL indexes use space to save time. It's generally considered a good tradeoff.
(There's all sorts of trickery available to you by using composite primary keys to speed things up. But that's a topic for when you have gigarows.)
(You can also save index and table space by storing UUIDs in BINARY(16) columns, using the UUID_TO_BIN() and BIN_TO_UUID() functions to convert them.)
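As a rough sketch of that last point (this assumes MySQL 8.0 or later, where UUID_TO_BIN() and BIN_TO_UUID() exist; the table, column names, and sample values are only illustrative):
CREATE TABLE customer (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  uuid      BINARY(16)   NOT NULL,
  givenname VARCHAR(64),
  surname   VARCHAR(64),
  email     VARCHAR(255),
  INDEX uuid_ix (uuid)
);
-- Store the UUID in its compact 16-byte form:
INSERT INTO customer (uuid, givenname, surname, email)
VALUES (UUID_TO_BIN(UUID()), 'Jane', 'Doe', 'jane@example.com');
-- Convert back to the familiar text form when reading:
SELECT BIN_TO_UUID(uuid) AS uuid, givenname, surname, email
FROM customer
WHERE uuid = UUID_TO_BIN('f47ac10b-58cc-4372-a567-0e02b2c3d479');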

Seeking a performant solution for accessing unique MySQL entries

I know very little about MySQL (or web development in general). I'm a Unity game dev and I've got a situation where users (of a region the size of which I haven't decided yet, possibly globally) can submit entries to an online database. The users must be able to then locate their entry at any time.
For this reason, I've generated a guid from .Net (System.Guid.NewGuid()) and am storing that in the database entry. This works for me! However... I'm no expert, but my gut tells me that looking up a complex string in what could be a gargantuan table might have terrible performance.
That said, it doesn't seem like anything other than a globally unique identifier will solve my problem. Is there a more elegant solution that I'm not seeing, or a way to mitigate against any issues this design pattern might create?
Thanks!
Make sure you define the GUID column as the primary key in the MySQL table. That will cause MySQL to create an index on it, which will enable MySQL to quickly find a row given the GUID. The table might be gargantuan but (assuming a regular B-tree index) the time required for a lookup will increase logarithmically relative to the size of the table. In other words, if it requires 2 reads to find a row in a 1,000-row table, finding a row in a 1,000,000-row table will only require 2 more reads, not 1,000 times as many.
As long as you have defined the primary key, the performance should be good. This is what the database is designed to do.
Obviously there are limits to everything. If you have a billion users and they're submitting thousands of these entries every second, then maybe a regular indexed MySQL table won't be sufficient. But I wouldn't go looking for some exotic solution before you even have a problem.
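For example, a minimal sketch along those lines (the table and column names are made up; the GUID is stored as its 36-character string form):
CREATE TABLE entries (
  guid    CHAR(36) NOT NULL,  -- e.g. 'f47ac10b-58cc-4372-a567-0e02b2c3d479' from System.Guid.NewGuid()
  payload TEXT,               -- whatever the user submitted
  PRIMARY KEY (guid)          -- gives MySQL an index for fast lookups by GUID
);
-- Finding a user's entry by its GUID uses the primary-key index:
SELECT payload FROM entries WHERE guid = 'f47ac10b-58cc-4372-a567-0e02b2c3d479';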
If you have a key of the row you want, and you have an index on that key, then this query will take less than a second, even if the table has a billion rows:
SELECT ... FROM t WHERE id = 1234.
The index in question might be the PRIMARY KEY, or it could be a secondary key.
GUIDs/UUIDs should be used only if you need to manufacture unique ids in multiple clients without asking the database for an id. If you do use such, be aware that GUIDs perform poorly if the table is bigger than RAM.

Optimizing a mysql query to fetch "unseen" entries per user

This title is rather mesmerizing but I couldn't come up with something clearer.
Long story short, we're creating a mobile app connected to a node.js server communicating with a mySql database. Pretty common setup. Now, we have multiple users connected that are able to upload "moments" to our servers. These moments can be only seen once by all other users.
As soon as a user x sees another user y's moment, x can never see that moment of y's again. Maybe a bit like Snapchat, except the moment goes from a single user to multiple users instead of single to single. Moments are also ordered by distance according to the current user's location.
Now, I'm looking for an intelligent way of only fetching the "unseen" moments from database. For now, we're using a relational table between Users and Moments.
Let's say a user (ID = 20) sees a moment (ID = 30320), then we insert into this table 20 and 30320. I know. This is hardly scalable and probably a terrible idea.
I thought about maybe checking the last seen date and only fetching moments that are past this date, but again, moments are ordered by distance before being ordered by date so it is possible to see a moment that is 3 minutes old followed by a moment that is 30 seconds old.
Is there a more clever way of doing this, or am I doomed to use a relationship table between Moments and Users, and join to it when querying?
Thanks a lot.
EDIT -
This logic uses in total 3 tables.
Users
Moments
MomentSeen
MomentSeen only contains what user has seen what moment, and when. Since the moments aren't ordered by date, I can't fetch all the moments that were uploaded after the last seen moment.
EDIT -
I just realized the mobile app Tinder must use similar logic for which user "liked" which other user. Since you can't go back in time and see a user twice, they probably use a very similar query as what I'm looking for.
Considering they have a lot of users, and that they're ordered by distance and some other unknown criteria, there must be a more clever way of doing things than a "UserSawUser" relational table.
EDIT
I can't provide the entire database structure so I'll just leave the important tables and some of their fields.
Users {
UserID INT UNSIGNED AUTO_INCREMENT PRIMARY KEY
}
Moments {
MomentID INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
UploaderID INT UNSIGNED, /* FK to UserID */
TimeUploaded DATE /* usually NOW() while insertion */
}
MomentSeen {
/* Both are FK to Users and Moments */
MomentID INT UNSIGNED,
UserID INT UNSIGNED
}
You can consider implementing a Bloom filter. It is widely used to reduce disk seeks and improve performance.
Medium uses it to check whether a user has already read a post.
More details here:
https://medium.com/the-story/what-are-bloom-filters-1ec2a50c68ff
https://en.wikipedia.org/wiki/Bloom_filter
Do not use one table per user. Do have a single table for the moments.
You seem to have two conflicting orderings for "moments": 'distance' and 'unseen'; which is it?
If it is 'unseen', are the 'moments' numbered chronologically? This implies that each user has a last_moment_seen -- all Moments before then have been seen; all after that have not been seen. So...
SELECT ...
WHERE moment > ( SELECT last_moment_seen
FROM Users WHERE user_id = $user_id );
would get all the moments not yet seen for a given user.
Munch on that for a while; then come back for more suggestions.
Edit
This should give you the Moments not yet seen. You can then order them as you see fit.
SELECT m....
FROM Moments m
LEFT JOIN MomentSeen ms ON ms.MomentID = m.MomentID AND ms.UserID = $user_id
WHERE ms.MomentID IS NULL
ORDER BY ...
LIMIT 1 -- if desired
Why hesitate to use a join?
Have you tried filling your database with dummy data, millions of rows, so you can measure the performance impact on your system?
Using joins is not such a bad idea and is often faster than a single table, if done right.
You should probably do some research on database design for reference.
For instance, ordering a table is done using an index.
You can also have more than one index on a table, and a combination of columns in each index.
Which indexes to create can be determined by analyzing the queries most often run against the table.
A handy recipe is to create an index containing the set of columns used as join keys, another index for each possible "where" parameter combination, and an index for each "order by" used against that table (ascending versus descending order matters).
So don't be shy about adding another column or index to suit your needs.
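As a rough sketch of that recipe applied to the tables from the question (the index names are made up, and which combinations you actually need depends on your real queries):
-- Join/filter key for the per-user lookup in MomentSeen:
ALTER TABLE MomentSeen ADD PRIMARY KEY (UserID, MomentID);
-- A "where"/"order" combination on Moments, e.g. "by uploader, then by upload time":
ALTER TABLE Moments ADD INDEX uploader_time (UploaderID, TimeUploaded);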
If you are talking about scalability, then you should also consider tuning the database engine,
e.g. using a BIGINT key to allow for a larger key range.
A database cluster setup would also require in-depth analysis, because auto-increment keys have issues in a multi-master setup.
If you want to squeeze more performance out of your system, consider designing the whole database to be partition-friendly from the very start. That will involve serious analysis of your business rules. Creating a partition-friendly environment requires choosing a set of columns as the partitioning key and splitting the data physically (remember to set innodb_file_per_table = 1 in the MySQL config, otherwise the benefit of table partitioning is lost).
If not done right, however, partitioning will not benefit you at all.
https://dev.mysql.com/doc/refman/5.5/en/partitioning.html
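For instance, a minimal sketch of range partitioning for the Moments table (the partition names and ranges are made up; note that MySQL requires the partitioning column to be part of every unique key, which is why TimeUploaded is added to the primary key here):
CREATE TABLE Moments (
  MomentID INT UNSIGNED NOT NULL AUTO_INCREMENT,
  UploaderID INT UNSIGNED,
  TimeUploaded DATE NOT NULL,
  PRIMARY KEY (MomentID, TimeUploaded)
)
PARTITION BY RANGE (YEAR(TimeUploaded)) (
  PARTITION p2015 VALUES LESS THAN (2016),
  PARTITION p2016 VALUES LESS THAN (2017),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);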

SQL - performance in varchar vs. int

I have a table whose primary key is a varchar, and another table whose foreign key is also a varchar.
I am joining the two tables on this pair of varchar columns. Though I am dealing with only a small number of rows (say a hundred), the join takes 60 ms. But when the system is finally deployed, it will have around thousands of rows.
I read Performance of string comparison vs int join in SQL, and concluded that the performance of an SQL query depends on the DB and the number of rows it is dealing with.
But when I am dealing with a very large amount of data, would it matter much?
Should I create a new column with a numeric datatype in both tables and join on that to reduce the time taken by the SQL query?
You should use the correct data type for that data that you are representing -- any dubious theoretical performance gains are secondary to the overhead of having to deal with data conversions.
It's really impossible to say what that is based on the question, but most cases are rather obvious. Where they are not obvious are in situations where you have a data element that is represented by a set of digits but which you do not treat as a number -- for example, a phone number.
Clues that you are dealing with this situation are:
leading zeroes that must be preserved
no arithmetic operations are carried out on the element.
string operations are carried out: eg. "take the last four characters"
If that's the case then you probably want to store your "number" as a varchar.
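A tiny illustration (the table and column names are made up): a phone number kept as a varchar so that leading zeroes survive and string operations still work:
-- "take the last four characters" of a number stored as a string:
SELECT RIGHT(phone_number, 4) AS last_four
FROM customers
WHERE phone_number LIKE '020%';  -- the leading zero would be lost in an INT column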
Yes, you should give that a shot. But before you do, make a test version of your db that you populate with the level of data you expect to have in production, and run some tests not just on SELECT, but on INSERT, UPDATE, and DELETE as well. Then make a version with integer keys, and perform equivalent tests.
The numeric keys WILL be faster, for the simple reason that the keys are of smaller size, but the difference may not be noticeable. Don't blindly trust books when you can test and measure the difference yourself.
(One thing to remember: if there are occasions when all you need from a relation is the value you currently have as its key, your database may run significantly faster if you can skip entire table lookups by just referencing the foreign-key on the records you have.)
Should I create a new column with a numeric datatype in both tables and join on that to reduce the time taken by the SQL query?
If you're in a position where you can change the design of the database with ease then yes, your Primary Key should be an integer. Unless there is a really good reason to have an FK as a varchar, then they should be integers as well.
If you can't change the PK or FK fields, then make sure they're indexed properly. This will eventually become a bottleneck though.
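A rough sketch of both options (the table and column names are made up): index the existing VARCHAR foreign key if you cannot change the schema, or add an integer surrogate key if you can:
-- Option 1: keep the VARCHAR keys but make sure the join column is indexed.
ALTER TABLE orders ADD INDEX customer_code_ix (customer_code);
-- Option 2: add an auto-increment integer key to the parent table and
-- reference it from the child table with an INT column instead of the VARCHAR.
ALTER TABLE customers ADD COLUMN customer_id INT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE;
ALTER TABLE orders    ADD COLUMN customer_id INT UNSIGNED,
                      ADD FOREIGN KEY (customer_id) REFERENCES customers (customer_id);
-- (Existing rows in orders would still need their customer_id backfilled.)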
It just does not sound right to me. It will use more space and result in more reads, etc. Also, is the varchar the clustered index key? If so, the table is going to get very fragmented.

When is it a good idea to move columns off a main table into an auxiliary table?

Say I have a table like this:
create table users (
user_id int not null auto_increment,
username varchar,
joined_at datetime,
bio text,
favorite_color varchar,
favorite_band varchar
....
);
Say that over time, more and more columns -- like favorite_animal, favorite_city, etc. -- get added to this table.
Eventually, there are like 20 or more columns.
At this point, I'm feeling like I want to move columns to a separate user_profiles table so I can do select * from users without returning a large number of usually irrelevant columns (like favorite_color). And when I do need to query by favorite_color, I can just do something like this:
select * from users inner join user_profiles using (user_id) where user_profiles.favorite_color = 'red';
Is moving columns off the main table into an "auxiliary" table a good idea?
Or is it better to keep all the columns in the users table, and always be explicit about the columns I want to return? E.g.
select user_id, username, last_logged_in_at, etc. etc. from users;
What performance considerations are involved here?
Don't use an auxiliary table if it's going to contain a collection of miscellaneous fields with no conceptual cohesion.
Do use a separate table if you can come up with a good conceptual grouping of a number of fields e.g. an Address table.
Of course, your application has its own performance and normalisation needs, and you should only apply this advice with proper respect to your own situation.
I would say that the best option is to have properly normalized tables, and also to only ask for the columns you need.
A user profile table might not be a bad idea, if it is structured well to provide data integrity and simple enhancement/modification later. Only you can truly know your requirements.
One thing that no one else has mentioned is that it is often a good idea to have an auxiliary table if the row size of the main table would get too large. Read about the row size limits of your specific database in the documentation. There are often performance benefits to having less wide tables and moving the fields you don't use as often off to a separate table. If you choose to create an auxiliary table with a one-to-one relationship, make sure to set up the PK/FK relationship to maintain data integrity, and set a unique index or constraint on the FK field to maintain the one-to-one relationship.
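For example, a minimal sketch of such a one-to-one auxiliary table (the column names and lengths are made up), where making the foreign key the primary key enforces the one-to-one relationship:
create table user_profiles (
  user_id int not null primary key,  -- same value as users.user_id, so at most one row per user
  favorite_color varchar(50),
  favorite_band varchar(50),
  favorite_animal varchar(50),
  foreign key (user_id) references users (user_id)
);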
And to go along with everyone else, I cannot stress strongly enough how bad it is to ever use select * in production queries. You save a few seconds of development time, but you create a performance problem and make the application less maintainable (yes, less: you should not willy-nilly return things you need in the database but do not want to show in the application. When you use select *, you will break insert statements that use selects, and you will show users things you don't want them to see).
Try not to get in the habit of using SELECT * FROM ... If your application becomes large, and you query the users table for different things in different parts of your application, then when you do add favorite_animal you are more likely to break some spot that uses SELECT *. Or at the least, that place is now getting unused fields that slow it down.
Select the data you need specifically. It self-documents to the next person exactly what you're trying to do with that code.
Don't de-normalize unless you have good reason to.
Adding a favorite column every other day, every time a user has a new favorite, is a maintenance headache at best. I would strongly consider creating a table to hold favorites values in your case. I'm pretty sure I wouldn't just keep adding a new column all the time.
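As a rough sketch of that idea (the table and column names are made up), a single favorites table instead of one column per favorite:
create table user_favorites (
  user_id int not null,
  favorite_type varchar(32) not null,   -- e.g. 'color', 'band', 'animal'
  favorite_value varchar(64) not null,
  primary key (user_id, favorite_type),
  foreign key (user_id) references users (user_id)
);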
The general guideline that applies to this (called normalization) is that tables are grouped by distinct entities/objects/concepts, and that each column (field) in a table should describe some aspect of that entity.
In your example, it seems that favorite_color describes (or belongs to) the user. Sometimes it is a good idea to move data to a second table: when it becomes clear that the data actually describes a second entity. For example: you start your database collecting user_id, name, email, and zip_code. Then at some point, the CEO decides he would also like to collect the street_address. At this point a new entity has been formed, and you could conceptually view your data as two tables:
user: userid, name, email
address: streetaddress, city, state, zip, userid (as a foreign key)
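In SQL terms, that split might look something like this (the column types and lengths are assumptions):
create table user (
  userid int not null auto_increment primary key,
  name varchar(100),
  email varchar(255)
);
create table address (
  streetaddress varchar(255),
  city varchar(100),
  state varchar(50),
  zip varchar(10),
  userid int not null,
  foreign key (userid) references user (userid)
);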
So, to sum it up: the real challenge is to decide what data describes the main entity of the table, and what, if any, other entity exists.
Here is a great example of normalization that helped me understand it better
When there is no other reason (e.g. the normal forms for databases), you should not do it. You don't save any space, as the data must still be stored; instead you waste more, as you need another index to access it.
It is always better (though it may require more maintenance if schemas change) to fetch only the columns you need.
This will result in lower memory usage by both MySQL and your client application, and reduced query times as the amount of data transferred is reduced. You'll see a benefit whether this is over a network or not.
Here's a rule of thumb: if adding a column to an existing table would require making it nullable (after data has been migrated etc) then instead create a new table with all NOT NULL columns (with a foreign key reference to the original table, of course).
You should not rely on using SELECT * for a variety of reasons (google it).