I want to know which structure is better for storing about 50 pieces of data at the same time in MySQL.
First
data_id  user_id  data_key    data_content
--------------------------------------------
1        2        data_key1   content 1
2        2        data_key2   content 2
3        2        data_key3   content 3
..       ..       ..          ..
..       ..       ..          ..
50       2        data_key50  content 50
Second
data_id  user_id  data_key1  data_key2  data_key3  ..  ..  data_key50
-----------------------------------------------------------------------
1        2        content 1  content 2  content 3  ..  ..  content 50
Or is there another solution? There will be more than 5000 users.
Do you know the data keys before hand and are they likely to change? If they're along the lines of "first_name, last_name, hair_color, eye_color, birthday, ..." then the second model would be feasible since you'll often want to retrieve all of these for a user at the same time.
The first model is like having a persistent hashtable, so you'd have to store all the values as strings, and your app will have to know which keys to query for. This can make sense if the keys are user defined, however.
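For what it's worth, here is a minimal sketch of the first model as a MySQL table; the table name user_data and the column types are my assumptions, since the question doesn't give them:

-- Hypothetical DDL for the first (key-value) model
CREATE TABLE user_data (
    data_id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id      INT UNSIGNED NOT NULL,
    data_key     VARCHAR(64)  NOT NULL,
    data_content TEXT         NOT NULL,
    UNIQUE KEY uq_user_key (user_id, data_key)  -- one value per key per user
);

-- Fetch all 50 values for one user in a single query:
SELECT data_key, data_content
FROM user_data
WHERE user_id = 2;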
Definitely the first one! What if the need arises to have more than 50 different "data keys"? The first solution is way more flexible and unpolluted.
I do not think the second solution is a good one.
If I'm not mistaken, not all the user_id values will be 2; you'll have different numbers. What you should do is normalize that table and split it up into:
Table main:
id, data_id, user_id
Table key:
main_id, data_key
Table data:
main_id, data_content
In both tables key and data, main_id refers to the id in the table main, which is how you join them.
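A rough sketch of that split, with the join that stitches it back together; the column types are my assumptions:

CREATE TABLE main (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    data_id INT UNSIGNED NOT NULL,
    user_id INT UNSIGNED NOT NULL
);

CREATE TABLE `key` (              -- KEY is a reserved word, hence the backticks
    main_id  INT UNSIGNED NOT NULL,
    data_key VARCHAR(64) NOT NULL
);

CREATE TABLE data (
    main_id      INT UNSIGNED NOT NULL,
    data_content TEXT NOT NULL
);

-- Joining the three tables back together on main.id:
SELECT m.user_id, k.data_key, d.data_content
FROM main m
JOIN `key` k ON k.main_id = m.id
JOIN data d ON d.main_id = m.id;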
Environment: MySQL 5.6
Table name: CategoryTable
Columns:
CATEGORY_ID (INT)
CATEGORY_NAME (VARCHAR)
LEVEL (INT)
MOTHER_CATEGORY (INT)
I've tried with
SELECT
CATEGORY_ID, CATEGORY_NAME , LEVEL , MOTHER_CATEGORY
FROM
CategoryTable
But I don't know how to use the ORDER BY in order to get that result.
The first line here shows the columns, and the table content starts from the second line:
CATEGORY_ID  CATEGORY_NAME      LEVEL  MOTHER_CATEGORY
1            MainCategory       0      0
2            -SubCategory1      1      1
3            --SubCategory2     2      2
4            ---SubCategory3    3      3
5            2Nd_Main_Category  0      0
6            -SubCategory1      1      5
7            --SubCategory2     2      6
8            ---SubCategory3    3      7
Is there a way to achieve something like this with a MySQL query?
You aren't very clear about what you're trying to achieve. I'll take a guess that you want to order rows using a multi-level parent-child structure. There are some very complicated ways of handling such a feat within MySQL 5.6, a database that's not really ideal for such a structure, but I have come up with something simple myself that I use in my own apps: you create a special ordering field that holds a path of zero-filled ids for each record.
ordering_path_field
/0000000001/
/0000000001/0000000002/
/0000000003/
/0000000003/0000000005/
/0000000003/0000000005/0000000006/
etc.
So each record contains the path of every parent up to the root, using zero-filled ids, and you can then just sort by this field to get the records in the proper order. The drawbacks: you'll have to set a maximum number of levels allowed so that the ordering field doesn't overflow, and moving a record to a new parent, if that's ever needed, would be a big pain.
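Applied to the CategoryTable above, a sketch might look like this; the ORDERING_PATH column and its width are my hypothetical additions, and the application fills the column in when rows are created:

ALTER TABLE CategoryTable
    ADD COLUMN ORDERING_PATH VARCHAR(255) NOT NULL DEFAULT '';

-- The app stores e.g. '/0000000001/' for MainCategory and
-- '/0000000001/0000000002/' for its child; LPAD(CATEGORY_ID, 10, '0')
-- builds the zero-filled segments.

SELECT CATEGORY_ID, CATEGORY_NAME, LEVEL, MOTHER_CATEGORY
FROM CategoryTable
ORDER BY ORDERING_PATH;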
I have two tables, one user table and an items table. In the user table, there is the field "items". The "items" table only consists of a unique id and an item_name.
Now each user can have multiple items. I wanted to avoid creating a third table that would connect the items with the user but rather have a field in the user_table that stores the item ids connected to the user in a "csv" field.
So any given user would have a field "items" that could have a value like "32,3,98,56".
It maybe is worth mentioning that the maximum number of items per user is rather limited (<5).
The question: Is this approach generally a bad idea compared to having a third table that contains user->item pairs?
Wouldn't a third table create quite an overhead when you want to find all items of a user? (I would have to iterate through all elements returned by MySQL individually.)
You don't want to store the value in the comma separated form.
Consider the case when you decide to join this column with some other table.
Consider you have:
x  items
1  1, 2, 3
1  1, 4
2  1
and you want to find distinct values for each x i.e.:
x  items
1  1, 2, 3, 4
2  1
or maybe you want to check if it has 3 in it,
or maybe you want to convert them into separate rows:
x items
1 1
1 2
1 3
1 1
1 4
2 1
It will be a HUGE PAIN.
At the very least, apply first normal form - have a separate row for each value.
Now, say you originally had this as your table:
x item
1 1
1 2
1 3
1 1
1 4
2 1
You can easily convert it into csv values:
select x, group_concat(item order by item) items
from t
group by x
If you want to check whether x = 1 has item 3 - easy:
select * from t where x = 1 and item = 3
which in the earlier case would need the horrible find_in_set:
select * from t where x = 1 and find_in_set(3, items);
If you think you can use LIKE to search CSV values: first, LIKE '%x%' can't use indexes; second, it will produce wrong results.
Say you want to check whether item ab is present: searching with %ab% will also return rows containing abc, abcd, abcde, and so on.
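To make the wrong-results point concrete, a tiny example; t_csv and its items column are hypothetical stand-ins for the CSV table:

-- Suppose one row has items = 'abc,abcd'. Searching for item 'ab':
SELECT * FROM t_csv WHERE items LIKE '%ab%';         -- matches, although 'ab' is not in the list
SELECT * FROM t_csv WHERE FIND_IN_SET('ab', items);  -- correctly finds no match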
If you have many users and items, then I'd suggest creating a separate table users with a PK userid, another table items with a PK itemid, and lastly a mapping table user_item with userid and itemid columns.
If you know you'll only ever store and retrieve these values and never operate on them - no joins, searching, distinct, conversion to separate rows, etc. - then maybe, just maybe, you can (I still wouldn't).
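A minimal sketch of that three-table layout; the column types and lengths are assumptions:

CREATE TABLE users (
    userid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name   VARCHAR(100) NOT NULL
);

CREATE TABLE items (
    itemid    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    item_name VARCHAR(100) NOT NULL
);

CREATE TABLE user_item (
    userid INT UNSIGNED NOT NULL,
    itemid INT UNSIGNED NOT NULL,
    PRIMARY KEY (userid, itemid)
);

-- All items of one user in a single query, no per-row iteration:
SELECT i.item_name
FROM user_item ui
JOIN items i ON i.itemid = ui.itemid
WHERE ui.userid = 42;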
Storing complex data directly in a relational database is a nonstandard use of a relational database. Normally they are designed for normalized data.
There are extensions which vary according to the brand of software which may help. Or you can normalize your CSV file into properly designed table(s). It depends on lots of things. Talk to your enterprise data architect in this case.
Whether it's a bad idea depends on your business needs. I can't assess your business needs from way out here on the internet. Talk to your product manager in this case.
Let's say we have a table called Workorders and another table called Parts. I would like to have a column in Workorders called parts_required. This column would contain a single item that tells me what parts were required for that workorder. Ideally, this would contain the quantities as well, but a second column could contain the quantity information if needed.
Workorders looks like:
WorkorderID  date  parts_required
1            2/24  ?
2            2/25  ?
3            3/16  ?
4            4/20  ?
5            5/13  ?
6            5/14  ?
7            7/8   ?
Parts looks like:
PartID  name         cost
1       engine       100
2       belt         5
3       big bolt     1
4       little bolt  0.5
5       quart oil    8
6       Band-aid     0.1
Idea 1: create a string like '1-1:2-3:4-5:5-4'. My application would parse this string and show that I need --> 1 engine, 3 belts, 5 little bolts, and 4 quarts of oil.
Pros - simple enough to create and understand.
Cons - will make deep introspection into our data much more difficult. (costs over time, etc)
Idea 2: use a binary number. For example, referencing the above list (engine, belt, little bolts, oil) as bits of an 8-bit integer would give 54, because 54 in binary representation is 110110.
Pros - the datatype is optimal in terms of size. Also, I am guessing there are bitwise tricks I could use in my queries to search for parts used (I don't know what those are; correct me if I'm in the clouds here).
Cons - I do not know how to handle quantity using this method. Also, even a 64-bit BIGINT still only gives me 64 parts that can be in my table; I expect many hundreds.
Any ideas? I am using MySQL. I may be able to use PostgreSQL, and I understand it has more flexible datatypes like JSON and arrays, but I am not familiar with how querying those would perform. Also, it would be much easier to stay with MySQL.
Why not create a Relationship table?
You can create a table named Workorders_Parts with the following columns:
workorderId | partId
So when you want to get all parts from a specific workorder you just type:
select p.name
from parts p inner join workorders_parts wp on wp.partId = p.partId
where wp.workorderId = x;
What the query says is:
give me the names of the parts that belong to workorderId = x and are listed in the table workorders_parts.
Remember that INNER JOIN means "intersection"; in other words, the data I'm looking for (generally the id) must exist in both tables.
It will give you all the part names used to build workorder x.
Let's say we have workorderId = 1 with partIds 1, 2, 3; it will be represented in our relationship table as:
workorderId | partId
1           | 1
1           | 2
1           | 3
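Since the question also asks about quantities, the relationship table can simply carry a quantity column. A sketch - the types and the quantity column are my additions, not part of the answer above:

CREATE TABLE workorders_parts (
    workorderId INT UNSIGNED NOT NULL,
    partId      INT UNSIGNED NOT NULL,
    quantity    INT UNSIGNED NOT NULL DEFAULT 1,
    PRIMARY KEY (workorderId, partId)
);

-- Parts, quantities, and line costs for workorder 1:
SELECT p.name, wp.quantity, p.cost * wp.quantity AS line_cost
FROM workorders_parts wp
INNER JOIN parts p ON p.PartID = wp.partId
WHERE wp.workorderId = 1;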
I am developing an evaluation system for different programs that needs a lot of flexibility. Each program will have different things to track, so I need to store what data points they want to track, and the corresponding data for the person being evaluated on the particular data point. I am guessing several tables are appropriate. Here is a general outline:
Table: accounts
- unique ID assigned to each account. We'll call this 'aid'
Table: users
- each user with unique ID.
Table: evaluation
- each program will enter the metrics it wants to track into this table (e.g. attendance)
- column 'aid' will correspond to 'aid' in account table
Table: evaluation_data
- data (e.g. attendance) entered into this table
- column 'aid' will correspond to 'aid' in account table
- column 'uid' will correspond to 'uid' in user table
The input form for evaluation_data will be generated from what's in the evaluation table.
This is the only logical way I can think of to do this. Some of these tables will grow quite large over time. Is this the optimal way of doing it?
I'm a little confused about how accounts, users, and programs all relate to each other, and whether account and program are the same thing with the terms used interchangeably. I'm going to use different terms which are just easier for me to understand.
Say you have a website that allows freelancers to keep track of different projects and they can create their own data to track. (Hope you see the similarity)
Tables...
freelancers
id title etc
projects
id freelancer_id title description etc
data_options
id freelancer_id title
You can even add other columns like data_type and give options like URL, email, text, date, etc which can be used for validation or to help format the input form.
example data:
id  freelancer_id  title
1   5              Status
2   5              Budget
3   5              Customer
4   99             Job Type
5   99             Deadline
6   102            Price
7   102            Status
8   102            Due By
This shows three different freelancers tracking data: freelancers with the ids 5, 99, and 102. Deadline and Due By are essentially the same, but freelancers can call these whatever they want.
data_values
id project_id option_id option_value
There's no need for a freelancer_id column, as you would be able to do a join and get the freelancer_id from either the project_id or the option_id.
example data:
id    project_id  option_id  option_value
1000  1           2          $250
1001  1           1          Completed
1002  1           3          Martha Hayes
This only shows information entered by the freelancer with id 5, because option_ids 1-3 belong to that user.
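To reassemble one project's data with its option titles, a join along these lines should do; this is a sketch against the schema above, not a fixed API:

SELECT o.title, v.option_value
FROM data_values v
JOIN data_options o ON o.id = v.option_id
WHERE v.project_id = 1;

-- With the example data this returns:
--   Status    Completed
--   Budget    $250
--   Customer  Martha Hayes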
I actually have a table with 30 columns. In one day this table can get around 3000 new records!
The column data looks like:
IMG Name Phone etc..
http://www.site.com/images/image.jpg John Smith 123456789 etc..
http://www.site.com/images/image.jpg Smith John 987654321 etc..
I'm looking for a way to optimize the size of the table and also the response time of the SQL queries. I was thinking of doing something like:
Column1
http://www.site.com/images/image.jpg|John Smith|123456789|etc..
And then via PHP I would split each value into an array.
Would it be faster?
Edit
So to take an example of the structure, let's say I have two tables:
package
package_content
Here is the structure of the table package:
id | user_id | package_name | date
Here is the structure of the table package_content:
id | package_id | content_name | content_description | content_price | content_color | etc. (30+ columns)
The thing is, for each package I can get up to 16 rows of content. For example:
id | user_id | package_name | date
260 | 11 | Package 260 | 2013-7-30 10:05:00
id | package_id | content_name | content_description | content_price | content_color | etc. (30+ columns)
1 | 260 | Content 1 | Content 1 desc | 58 | white | etc.
2 | 260 | Content 2 | Content 2 desc | 75 | black | etc.
3 | 260 | Content 3 | Content 3 desc | 32 | blue | etc.
etc...
Then with PHP I do something like this:
select * from package
while not EOF {
    show package name, date, etc.
    select * from package_content where package_content.package_id = package.id
    while not EOF {
        show package_content name, desc, price, color, etc.
    }
}
Would it be faster? Definitely not. If you needed to search by Name or Phone and so on, you'd have to pull those values out of Column1 every time. You'd never be able to optimize those queries, ever.
If you want to make the table smaller it's best to look at splitting some columns off into another table. If you'd like to pursue that option, post the entire structure. But note that the number of columns doesn't affect speed that much. I mean it can, but it's way down on the list of things that will slow you down.
Finally, 3,000 rows per day is about 1 million rows per year. If the database is tolerably well designed, MySQL can handle this easily.
Addendum: partial table structures plus sample query and pseudocode added to question.
The pseudocode shows the package table being queried all at once, then matching package_content rows being queried one at a time. This is a very slow way to go about things; better to use a JOIN:
SELECT
package.id,
user_id,
package_name,
date,
package_content.*
FROM package
INNER JOIN package_content ON package.id = package_content.package_id
WHERE whatever
ORDER BY whatever
That will speed things up right away.
If you're displaying on a web page, be sure to limit results with a WHERE clause - nobody will want to see 1,000 or 3,000 or 1,000,000 packages on a single web page :)
Finally, as I mentioned before, the number of columns isn't a huge worry for query optimization, but...
Having a really wide result row means more data has to go across the wire from MySQL to PHP, and
It isn't likely you'll be able to display 30+ columns of information on a web page without it looking terrible, especially if you're reading lots of rows.
With that in mind, you'll be better off picking specific package_content columns in your query instead of grabbing them all with SELECT *.
Don't combine any columns; it's no use and might even be slower in the end.
You should put indexes on the columns you query on. I have a website with about 30 columns and at the moment around 600,000 rows. If you put EXPLAIN before a query, you can see whether it uses any indexes. If you have a JOIN on two columns and a WHERE on the same table, you should make a combined index over those three columns, ordered JOIN -> WHERE. If you join the same table again, treat that as a separate index.
For example:
SELECT p.name, p.id, c.name, c2.name
FROM product p
JOIN category c ON p.cat_id=c.id
JOIN category c2 ON c.parent_id=c2.id AND c.name='Niels'
WHERE p.filterX='blaat'
You should have a combined index on category:
parent_id, name
plus the index on id (probably the auto-increment primary key), and a combined index on product:
cat_id, filterX
With this easy solution you can optimize queries from NOT DOABLE to 0.10 seconds, or even faster.
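Spelled out as index definitions for the example query - the index names here are made up:

-- Combined index on category for the second join's ON columns (JOIN -> WHERE order):
CREATE INDEX idx_category_parent_name ON category (parent_id, name);
-- category.id is normally already covered by the auto-increment PRIMARY KEY.

-- Combined index on product: the JOIN column first, then the WHERE column:
CREATE INDEX idx_product_cat_filter ON product (cat_id, filterX);

-- Prefix the query with EXPLAIN to confirm the indexes are used:
EXPLAIN SELECT p.name, p.id, c.name, c2.name
FROM product p
JOIN category c ON p.cat_id = c.id
JOIN category c2 ON c.parent_id = c2.id AND c.name = 'Niels'
WHERE p.filterX = 'blaat';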
If you use MySQL 5.6, you should switch over to InnoDB, because MySQL is better there at optimizing JOINs and subqueries. MySQL will also try to keep them in memory, which makes things a lot faster as well. Keep in mind that backing up InnoDB tables may need some extra attention.
You might also think about making MEMORY tables for super fast querying (you do still need indexes).
You can also optimize by using appropriately sized integer types (note that INT(11) and INT(4) both take 4 bytes; the number in parentheses is just a display width, so real savings come from smaller types like SMALLINT) and by not always using VARCHAR(255).
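For example, a generic sketch - whether the smaller types fit depends on your data's actual ranges:

CREATE TABLE example (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- 4 bytes
    qty  SMALLINT UNSIGNED NOT NULL,  -- 2 bytes, 0..65535
    flag TINYINT UNSIGNED NOT NULL,   -- 1 byte, 0..255
    name VARCHAR(50) NOT NULL         -- sized to the data instead of a blanket 255
);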