Part of my app includes volume automation for songs.
The volume automation is described in the following format:
[[0,50],[20,62],[48,92]]
I consider each item in this array a 'data point' with the first value containing the position in the song and the second value containing the volume on a scale of 0-100.
I then take these values and run a client-side function that interpolates this data with 'virtual' data points in order to create a Bézier curve, allowing smooth volume transitions as an audio file plays.
However, the need has arisen to allow a user to save this automation into the database for recall at a later date.
The number of data points is unbounded (though in reality it should never exceed around 40-50, and most will have fewer than 10).
Also how should I handle the data? Should it be stored as is, in a text field? Or should I process it in some way beforehand for optimum results?
What data type would be best to use in MySQL to store an array?
Definitely not a text field; a varchar, perhaps. I wouldn't recommend parsing the values and storing them in individual columns unless you want to take advantage of that data in a database sense -- statistics, etc.
If you never see yourself asking "What is the average volume that users use?" then don't bother parsing it.
To figure out how to store this data, ask yourself: "How will I use it later?" If you will retrieve the array and use it with PHP, you can use the serialize function. If you will use the values in JavaScript, then JSON encoding will probably be best for you (plus many languages know how to decode it).
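For illustration, here is a minimal sketch of both approaches (the $points array mirrors the format from the question):
<?php
// The automation data from the question: [position, volume] pairs.
$points = [[0, 50], [20, 62], [48, 92]];

// Option 1: PHP-native serialization -- compact, but only PHP reads it back easily.
$serialized = serialize($points);
$restored   = unserialize($serialized);

// Option 2: JSON -- readable from JavaScript and most other languages.
$json    = json_encode($points);      // '[[0,50],[20,62],[48,92]]'
$decoded = json_decode($json, true);  // back to a PHP array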
Good luck!
I suggest you take a look at the JSON data type. This way you can store your array more efficiently than with text or varchar, and you can access your data directly from MySQL without having to parse the whole thing.
Take a look at this link: https://dev.mysql.com/doc/refman/5.7/en/json.html
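As a rough sketch of that direct access (the automation table and points column are invented names for the example), you can extract a single value from the stored array without decoding the whole document client-side:
<?php
// Assumed table for illustration: automation(id INT, points JSON).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Pull one value out of the stored array ([1][1] = volume of the
// second data point) directly in the query.
$stmt = $pdo->prepare(
    "SELECT JSON_EXTRACT(points, '$[1][1]') AS volume
     FROM automation WHERE id = ?"
);
$stmt->execute([1]);
echo $stmt->fetchColumn(); // e.g. 62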
If speed is the most important factor when retrieving rows, make a new table dedicated to holding the elements of your array. Use an integer data type and have each row represent one index of the array. You'll need to create another numeric column that binds the rows together so you can reassemble the array with a SQL query.
This way you give MySQL the structure it needs to speed up access. If you only want certain parts of the array, you just change the range in the SQL query and reassemble the array however you want.
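A minimal sketch of that layout (table and column names are invented placeholders):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// One row per array element: automation_id groups the elements back into
// one array, idx preserves element order.
$pdo->exec("
    CREATE TABLE automation_point (
        automation_id INT NOT NULL,
        idx           INT NOT NULL,
        position      INT NOT NULL,
        volume        INT NOT NULL,
        PRIMARY KEY (automation_id, idx)
    )
");

// Reassemble the whole array -- or just a slice of it -- in order.
$stmt = $pdo->prepare(
    "SELECT position, volume FROM automation_point
     WHERE automation_id = ? AND idx BETWEEN ? AND ?
     ORDER BY idx"
);
$stmt->execute([1, 0, 9]);
$points = $stmt->fetchAll(PDO::FETCH_NUM); // e.g. [[0,50],[20,62],...]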
The best way to store an array is the JSON data type:
CREATE TABLE example (
`id` int NOT NULL AUTO_INCREMENT,
`docs` JSON,
PRIMARY KEY (`id`)
);
INSERT INTO example (docs)
VALUES ('["hot", "cold"]');
Read more: https://sebhastian.com/mysql-array/
I'm working on a website which has a database table with more than 100 fields.
The problem is that when the number of records grows large (more than 10,000 or so), responses get very slow and sometimes no answer comes back at all.
Now I want to optimize this table.
My question is: can we use the JSON type for fields to reduce the number of columns?
My limitation is that I need to search, change, and maybe remove specific pieces of the data stored in that JSON.
PS: I read this question: Storing JSON in database vs. having a new column for each key, but that was asked in 2013, and as we know the JSON field type was added in MySQL 5.7.
Thanks for any guidance.
First of all, having a table with 100 columns suggests you should rethink your architecture before proceeding; otherwise it will only become more and more painful in later stages.
Maybe you are storing data as separate columns that could instead be broken down and stored as separate rows.
I suspect the SQL query you are writing is something like SELECT * ..., which fetches more columns than you require. Specify only the columns you need; it will definitely speed up the API response.
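A quick sketch of the difference (the orders table and its columns are invented for the example):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Instead of SELECT * on a 100-column table, fetch only what the
// response actually needs.
$stmt = $pdo->prepare("SELECT id, status, total FROM orders WHERE id = ?");
$stmt->execute([42]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);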
In my personal view, storing active data as JSON inside SQL is not useful. JSON should be a last resort, for metadata that does not mutate and does not need to be searched.
Please add more detail about your database schema and the query your API is making.
I have a situation where I have to create tables dynamically. Depending on some criteria I am going to vary the size of the columns of a particular table.
For that purpose I need to calculate the size of one row.
For example, if I am going to create the following table:
CREATE TABLE sample(id int, name varchar(30));
I need a formula that would give me the size of a single row for the table above, considering all the overhead of storing a row in a MySQL table.
Is it possible to do this, and is it feasible?
It depends on the storage engine you use and the row format chosen for that table, and also on your indexes. But it is not very useful information.
Edit:
I suggest going against normalization only when you know exactly what you're doing. A DBMS is built to deal with large amounts of data. You probably don't need to serialize your structured data into a single field.
Keep in mind that your application layer then has to tokenize (or worse) the serialized field data to get the original meaning back, which certainly has more overhead than getting the data from the DB already in structured form.
The only exception I can think of is a client-heavy architecture, where moving processing to the client side actually takes the burden off the server, and you would serialize your data anyway for the sake of the transfer. In server-side code (like PHP) it is not good practice to save serialized-style data into the DB.
(Though, using PHP's built-in serialization may be a good idea in some cases. Your current project does not seem to benefit from it.)
VARCHAR is a variable-length data type: it has a maximum length, but the stored value can be shorter or even empty, so any calculation will be inexact. Have a look at the Avg_row_length field in information_schema.tables.
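For instance, a small sketch of reading that estimate (the mydb schema name is an assumption; sample is the table from the question):
<?php
$pdo = new PDO('mysql:host=localhost', 'user', 'pass');

// AVG_ROW_LENGTH is MySQL's own estimate of bytes per row -- useful,
// but approximate for variable-length rows.
$stmt = $pdo->prepare(
    "SELECT AVG_ROW_LENGTH FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?"
);
$stmt->execute(['mydb', 'sample']);
echo $stmt->fetchColumn();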
I want to create a table that will contain dynamic data; it can take the form of a date, a boolean, or a text article.
for example:
meta_key = "isActive"
meta_value = "1"
or
meta_key = "theDate"
meta_value = "Sat Jul 23 02:16:57 2005"
or
meta_key = "description"
meta_value = "this is a description and this text can go on and on so i need a long field"
The question is: what type should the meta_value field be so as not to inflate the DB too much for every "1" inserted? Which field types are dynamic and will consume only the space of their actual length?
I hope I was clear.
I would only use an unstructured data model like the one you suggest if you are storing unstructured data or documents (e.g. FriendFeed).
Alternative storage thoughts
There are many data storage systems far more suitable for unstructured data than SQL Server. I would recommend combining one of these with your existing structured database.
SQL Options
If you can't do this and must store unstructured data in your SQL DB, you have a couple of options; the data type isn't really the only concern -- how your data is stored is. You want:
Some structure, so an application reading the data can parse it without complex string-manipulation functions.
The ability to define a model for the data in your application, so when you read the data, you know what you've got.
The following 2 options provide a solution to both these challenges...
XML - xml data type
You need to consider the data you are storing. If you need to return it and perform complex searches on its contents, then XML is your best bet. It also allows you to validate that the stored data matches a defined structure (using a DTD). See this article:
http://msdn.microsoft.com/en-us/library/ms189887.aspx
or JSON - nvarchar(max) datatype
If you need to return this data for display on a webpage or for use in JavaScript, then storing it as JSON will be easiest to work with. You can easily load it into an object model which can be worked with and manipulated directly. The downside is that complex searches on the data will be very slow compared to XPath (you have to iterate through all the objects and find the ones that match).
If you are storing data in other languages or with unusual characters, go with nvarchar (the Unicode version); otherwise varchar is most efficient.
You probably want the VARCHAR field type.
In contrast to CHAR, VARCHAR values are stored as a one-byte or two-byte length prefix plus data.
Hope this helps: use the TEXT data type.
Are these being used as temp tables or live tables?
Here's an idea I haven't seen yet that MAY work for you if you are primarily worried about size explosion and don't mind the program doing a little extra work. That said, I believe the best practice is to give these meta keys their own fields in their own table (for example, OrderDate), so you can have proper descriptions, dates, etc. A catch-all DB table can make for a lot of headaches.
Create the meta table, using this idea:
MetaID
MetaKey
MetaVarchar(255)
MetaText
MetaDate
varchar, text, and date can be null.
Let the inserting program decide which column to put the value in, and the database call will just return whichever field isn't null. Short items go in varchar, long ones in text, and dates in the date column so you can control how dates are displayed.
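A rough sketch of that table and the insert-side decision (column sizes and the helper name are my own assumptions):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$pdo->exec("
    CREATE TABLE meta (
        MetaID      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        MetaKey     VARCHAR(64) NOT NULL,
        MetaVarchar VARCHAR(255) NULL,
        MetaText    TEXT NULL,
        MetaDate    DATETIME NULL
    )
");

// The inserting program decides which column fits the value; the other
// typed columns stay NULL. ($col comes from a fixed whitelist here, so
// interpolating it into the SQL string is safe.)
function insertMeta(PDO $pdo, string $key, $value): void
{
    if ($value instanceof DateTimeInterface) {
        $col   = 'MetaDate';
        $value = $value->format('Y-m-d H:i:s');
    } elseif (strlen((string) $value) <= 255) {
        $col = 'MetaVarchar';
    } else {
        $col = 'MetaText';
    }
    $stmt = $pdo->prepare("INSERT INTO meta (MetaKey, $col) VALUES (?, ?)");
    $stmt->execute([$key, $value]);
}

insertMeta($pdo, 'isActive', '1');                        // -> MetaVarchar
insertMeta($pdo, 'theDate', new DateTime('2005-07-23'));  // -> MetaDate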
In MySQL I generally use the BLOB data type, in which I store a serialized version of a dynamic class that I use for a website.
A blob is basically binary data, so once you figure out how to serialize and de-serialize the data, you should for the most part be golden.
Please note that for large amounts of data it does become much less efficient, but then again it doesn't require you to change your whole structure.
Here is a better explanation of the blob data type:
http://dev.mysql.com/doc/refman/5.0/en/blob.html
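As a minimal sketch, assuming a made-up page_cache table with a BLOB column named data:
<?php
// A stand-in for the "dynamic class" mentioned above.
class PageState
{
    public array $settings = ['volume' => 80, 'muted' => false];
}

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Store the serialized object in the BLOB column.
$stmt = $pdo->prepare("INSERT INTO page_cache (data) VALUES (?)");
$stmt->execute([serialize(new PageState())]);

// Later: fetch the binary payload and de-serialize it back into an object.
$blob  = $pdo->query("SELECT data FROM page_cache ORDER BY id DESC LIMIT 1")
             ->fetchColumn();
$state = unserialize($blob);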
I currently have a table in MySQL that stores values normally, but I want to add a field to that table that stores an array of values, such as cities. Should I simply store that array as a CSV? Each row will need its own array, so I feel uneasy about making a new table and inserting 2-5 rows for each row inserted into the previous table.
I feel like this situation should have a name, I just can't think of it :)
Edit
Number of elements: 2-5 (a selection from a dynamic list of cities; the array references that list, which is a table).
This field would not need to be searchable, simply retrieved alongside other data.
The "right" way would be to have another table that holds each value but since you don't want to go that route a delimited list should work. Just make sure that you pick a delimiter that won't show up in the data. You can also store the data as XML depending on how you plan on interacting with the data this may be a better route.
I would go with the idea of a field containing your delimiter-separated values (comma or otherwise). Just make sure your field is big enough to hold your maximum array size. Then when you pull the field out, it's easy to perform an explode() on the long string using your delimiter, which immediately populates your array in the code.
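A quick sketch of that round trip:
<?php
$cities = ['Boston', 'Chicago', 'Denver'];

// Before storing: join the array into one delimited string
// (make sure the field is wide enough for your largest array).
$stored = implode(',', $cities);      // "Boston,Chicago,Denver"

// After retrieving: explode the string straight back into an array.
$restored = explode(',', $stored);    // ['Boston', 'Chicago', 'Denver']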
Maybe the word you're looking for is "normalize". As in, move the array to a separate table, linked to the first by means of a key. This offers several advantages:
The array size can grow almost indefinitely
Efficient storage
Ability to search for values in the array without having to use "like"
Of course, the decision of whether to normalize this data depends on many factors that you haven't mentioned, like the number of elements, whether or not the number is fixed, whether the elements need to be searchable, etc.
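For the cities case, a minimal sketch of what that normalization might look like (table and column names are invented):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// A link table: one row per (record, city) pair instead of a CSV field.
$pdo->exec("
    CREATE TABLE record_city (
        record_id INT NOT NULL,
        city_id   INT NOT NULL,
        PRIMARY KEY (record_id, city_id)
    )
");

// Searching becomes an indexed equality test instead of a LIKE scan.
$stmt = $pdo->prepare("SELECT record_id FROM record_city WHERE city_id = ?");
$stmt->execute([42]);
$recordIds = $stmt->fetchAll(PDO::FETCH_COLUMN);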
Is your application PHP? It might be worth investigating the functions serialize and unserialize.
These two functions allow you to easily store an array in the database, then recreate that array at a later time.
As others have mentioned, another table is the proper way to go.
But if you really don't want to do that, and assuming you're using PHP with MySQL, why not use serialize() and store the serialized value?
I have an array of values called A, B, ... X, Y, Z. Fun though it would be to have 26 columns in the table, I can't help but feel there is a better way. I have considered creating a second table holding the id of the row from the first table, the id of the item in the array, and the boolean value, but it seems clunky and confusing.
Is there a better way?
Short answer, no. Long answer, it depends.
You can store binary data in a bunch of ways -- abusing a number, using BINARY or VARBINARY, using a BLOB or TINYBLOB, etc. BINARY types will generally be faster than BLOB types, provided your data is a known size.
However, relational databases aren't designed to do anything intelligent with binary data. On a project I used to work on, there was a table where each record had a specific binary pattern -- stored as some sort of integer -- and searching required a lot of ANDs, ORs, XORs, and NOTs. It never really worked very well, performance sucked, and it held the whole project down. Looking back, I would have taken a completely different approach.
So if you just want to drop the data in and pull it out again, great. If you want to use it for anything intelligent, tough.
The situation may be different on other database vendors. In fact, have you considered using something else in place of the database? Some sort of object persistence?
Are your possible array values static?
If so, try using MySQL's SET data type.
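A small sketch, with invented table and member names:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// A SET column stores any combination of its predefined members
// in a single value.
$pdo->exec("
    CREATE TABLE item_flags (
        id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        flags SET('A','B','C','X','Y','Z') NOT NULL DEFAULT ''
    )
");
$pdo->exec("INSERT INTO item_flags (flags) VALUES ('A,X,Z')");

// FIND_IN_SET tests membership of one element in the stored set.
$count = $pdo->query(
    "SELECT COUNT(*) FROM item_flags WHERE FIND_IN_SET('X', flags)"
)->fetchColumn();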
You can try storing it as a TINYBLOB, or even an UNSIGNED INT, but you'll have to do bit masking in your code.
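A minimal sketch of the bit-masking side (pure PHP; the packed value would go into the UNSIGNED INT column):
<?php
// Pack 26 booleans (flags A..Z) into the bits of one integer.
$flags = array_fill(0, 26, false);
$flags[0]  = true; // A
$flags[23] = true; // X

$packed = 0;
foreach ($flags as $i => $on) {
    if ($on) {
        $packed |= 1 << $i;
    }
}

// Reading a flag back out is a single mask test.
$hasX = ($packed & (1 << 23)) !== 0; // true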
You can store it as a string and use text manipulation functions to (re)create your array.