MySQL field sizes on metadata

I want to create a table that will contain dynamic data; it can take the form of a date, a boolean, or a text article.
for example:
meta_key = "isActive"
meta_value = "1"
or
meta_key = "theDate"
meta_value = "Sat Jul 23 02:16:57 2005"
or
meta_key = "description"
meta_value = "this is a description and this text can go on and on so i need a long field"
The question is: what type should meta_value be so that the DB isn't inflated too much for every "1" inserted? Which field types are dynamic, consuming only the space of their own length?
Hope I was clear...

I would only use an unstructured data model, like the one you suggest, if you are storing unstructured data or documents (e.g. FriendFeed).
Alternative storage thoughts
There are many data storage systems better suited to unstructured data than a SQL server. I would recommend combining one of these with your existing structured database.
SQL Options
If you can't do this and must store unstructured data in your SQL DB, you have a couple of options. The datatype isn't really the only concern; how your data is stored is. You want:
Some structure, so that an application reading the data can easily parse it without complex string manipulation functions.
The ability to define a model for the data in your application, so that when you read the data, you know what you've got.
The following two options address both of these challenges...
XML - xml data type
You need to consider the data you are storing. If you need to return it and perform complex searches on its contents, then XML is your best bet. It also allows you to validate that the stored data matches a defined structure (using an XML schema). See this article:
http://msdn.microsoft.com/en-us/library/ms189887.aspx
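For illustration, a minimal sketch of the xml data type in SQL Server (table and element names are hypothetical):
-- SQL Server: store XML and query inside it with XQuery
CREATE TABLE MetaXml (
    MetaID    INT IDENTITY PRIMARY KEY,
    MetaValue XML
);
INSERT INTO MetaXml (MetaValue)
VALUES ('<meta><key>theDate</key><value>2005-07-23T02:16:57</value></meta>');
-- extract a value and cast it to a SQL type
SELECT MetaValue.value('(/meta/value)[1]', 'datetime') AS TheDate
FROM MetaXml;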
JSON - nvarchar(max) data type
If you need to return this data for display on a web page or for use in JavaScript, then storing it as JSON is easiest to work with. You can easily load it into an object model and work with it directly. The downside is that complex searches on the data will be very slow compared to XPath (you have to iterate through all the objects and find the ones that match).
If you are storing data from other languages or unusual characters, go with nvarchar (the Unicode version). Otherwise varchar is more efficient.
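A minimal sketch of the JSON-in-nvarchar approach (table and column names are hypothetical; built-in JSON functions only arrived in SQL Server 2016, so on older versions the application does all the parsing):
-- SQL Server: JSON stored as plain Unicode text, opaque to the database
CREATE TABLE MetaJson (
    MetaID  INT IDENTITY PRIMARY KEY,
    MetaDoc NVARCHAR(MAX)
);
INSERT INTO MetaJson (MetaDoc)
VALUES (N'{"isActive": 1, "description": "a long article..."}');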

You probably want the VARCHAR field type.
In contrast to CHAR, VARCHAR values are stored as a one-byte or two-byte length prefix plus data.
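For the metadata table in the question, that could look like this (a sketch; names and sizes are illustrative):
CREATE TABLE meta_data (
    meta_id    INT NOT NULL AUTO_INCREMENT,
    meta_key   VARCHAR(64) NOT NULL,
    meta_value VARCHAR(255),  -- a value like "1" uses 1 byte of data plus the length prefix
    PRIMARY KEY (meta_id)
);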

Hope this helps:
datatype = TEXT

Are these being used as temp tables or live tables?
Here's an idea I haven't seen yet that MAY work for you if you are primarily worried about size explosion and don't mind the program doing a little extra work. That said, I believe the best practice is to promote these meta keys to real columns in their own table (for example, OrderDate), so you can have proper descriptions, dates, etc. A catch-all DB table can make for a lot of headaches.
Create the meta table, using this idea:
MetaID
MetaKey
MetaVarchar(255)
MetaText
MetaDate
The varchar, text, and date columns can be NULL.
Let the inserting program decide which column to put the value in, and the database call will simply return whichever field isn't NULL. Short items go in the varchar column, long ones in text, and the date column lets you control how dates are displayed.
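A minimal sketch of that table (types and sizes are illustrative assumptions):
CREATE TABLE meta (
    MetaID      INT NOT NULL AUTO_INCREMENT,
    MetaKey     VARCHAR(64) NOT NULL,
    MetaVarchar VARCHAR(255) NULL,  -- short values
    MetaText    TEXT NULL,          -- long articles
    MetaDate    DATETIME NULL,      -- dates, formatted at read time
    PRIMARY KEY (MetaID)
);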

In MySQL I generally use the blob datatype, in which I store a serialized version of a dynamic class that I use for a website.
A blob is basically binary data, so once you figure out how to serialize and deserialize the data you should, for the most part, be golden.
Please note that for large amounts of data it does become much less efficient, but then again it doesn't require you to change your whole structure.
Here is a better explanation of the blob data type:
http://dev.mysql.com/doc/refman/5.0/en/blob.html
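A minimal sketch (assuming, for example, that PHP's serialize() output is written into the column as-is):
-- MySQL: opaque binary payload; the database cannot search or index its contents
CREATE TABLE object_store (
    id      INT NOT NULL AUTO_INCREMENT,
    payload BLOB,
    PRIMARY KEY (id)
);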

Related

How to order by a DC2Type:json_array subfield

I'm working on an existing application and have been asked to order by a child field of a DC2Type:json_array field in Symfony. Normally I would add the field as a column in the table, but in this case that is not possible.
We have a JsonSerializable invoice entity with a normal date attribute, but also a data attribute which contains the due_date. I would like to order by data[due_date] in a Symfony query. Is this at all possible?
tl;dr: No, not really.
According to Doctrine's type mapping matrix, json_array gets mapped to MySQL's MEDIUMTEXT column type, which by default does not index its contents as JSON and hence provides little to no performance advantage. (Also, as far as I can tell, Doctrine doesn't provide any JSON functionality beyond converting DB JSON to and from PHP arrays/nulls.)
Maybe you could do some string-search magic to extract a value and sort by it, but you still wouldn't get the performance boost a proper index provides. Depending on your data this could get noticeably slow (and eat memory).
The JSON data type is fairly new to the relational database world, and mappers like Doctrine have not yet fully adopted it either. Extending Doctrine to handle this data type would probably take a lot of work. Instead, you could rethink your table schema to include as real columns all the fields you want to order by, so you get the full benefits a relational database provides (like indexing).
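A sketch of that schema change (table and column names are assumptions based on the question, and it assumes due_date is stored as 'YYYY-MM-DD' inside the JSON text):
-- MySQL 5.7+: promote due_date out of the JSON text into its own indexed column
ALTER TABLE invoice ADD COLUMN due_date DATE NULL;
UPDATE invoice
SET due_date = JSON_UNQUOTE(JSON_EXTRACT(data, '$.due_date'));  -- backfill existing rows
CREATE INDEX idx_invoice_due_date ON invoice (due_date);
-- ordering can now use the index:
SELECT * FROM invoice ORDER BY due_date;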

Storing large amounts of data as a single JSON field - extract important fields to their own fields?

I'm planning on storing a large amount of data from a user-submitted form (around 100 questions) in a JSON field.
I will only need to query two pieces of data from the form: name and type.
Would it be advisable (and more efficient) to extract name and type into their own fields for querying, or shall I just whack it all into one JSON field and query that, since JSON searching is now supported?
If you are concerned about performance, then maintaining separate fields for the name and type is probably the way to go here. The reason is that if these two points of data exist as separate fields, it leaves open the possibility of adding indices to those columns. While you can use MySQL's JSON API to query by name and type, it most likely would never compete with an index lookup, at least not in terms of performance.
From a storage point of view, you would not pay much of a price to maintain two separate columns. The main price is that every time the JSON gets updated, you would also have to update the name and type columns.
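If you are on MySQL 5.7+, one way to avoid that manual bookkeeping (a sketch; table and column names are hypothetical) is to let the database maintain the columns itself, as generated columns extracted from the JSON:
CREATE TABLE submission (
    id        INT NOT NULL AUTO_INCREMENT,
    form_data JSON,
    -- kept in sync by MySQL automatically on every insert/update
    name VARCHAR(100) GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(form_data, '$.name'))) STORED,
    type VARCHAR(50)  GENERATED ALWAYS AS (JSON_UNQUOTE(JSON_EXTRACT(form_data, '$.type'))) STORED,
    PRIMARY KEY (id),
    KEY idx_name_type (name, type)  -- index lookups instead of JSON scans
);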

json field type vs. one field for each key

I'm working on a website which has a database table with more than 100 fields.
The problem is that when the number of records grows large (more than 10,000 or so), responses get very slow and sometimes no answer is returned at all.
Now I want to optimize this table.
My question is: can I use the JSON type for some fields to reduce the number of columns?
My constraint is that I want to be able to search, change, and maybe remove the specific data stored in the JSON.
PS: I read this question: Storing JSON in database vs. having a new column for each key, but that was asked in 2013, and as we know, the JSON field type was added in MySQL 5.7.
Thanks for any guidance...
First of all, a table with 100 columns suggests you should rethink your architecture before proceeding; otherwise it will only become more and more painful in later stages.
Maybe you are storing data as separate columns that could instead be broken down and stored as separate rows.
I suspect the SQL query you are writing is something like SELECT * ..., which fetches more columns than you require. Specify only the columns you need; it will definitely speed up the API response.
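For example (table and column names are hypothetical):
-- instead of dragging all 100+ columns across the wire:
SELECT * FROM submissions WHERE id = 42;
-- fetch only what the API response actually needs:
SELECT id, name, type FROM submissions WHERE id = 42;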
In my personal view, storing active data as JSON inside SQL is not useful. JSON should be a last resort, for metadata that does not mutate and does not need to be searched.
Please make your question more descriptive about your database schema and the query you are making for the API.

Best way to store an array in MySQL database?

Part of my app includes volume automation for songs.
The volume automation is described in the following format:
[[0,50],[20,62],[48,92]]
I consider each item in this array a 'data point' with the first value containing the position in the song and the second value containing the volume on a scale of 0-100.
I then take these values and perform a function client-side to interpolate this data with 'virtual' data points in order to create a bezier curve allowing smooth volume transition as an audio file is playing.
However, the need has arisen to allow a user to save this automation into the database for recall at a later date.
The data points can be unlimited (though in reality they should never really exceed around 40-50, with most being fewer than 10).
Also how should I handle the data? Should it be stored as is, in a text field? Or should I process it in some way beforehand for optimum results?
What data type would be best to use in MySQL to store an array?
Definitely not a text field, but a varchar -- perhaps. I wouldn't recommend parsing the results and storing them in individual columns unless you want to take advantage of that data in database sense -- statistics etc.
If you never see yourself asking "What is the average volume that users use?" then don't bother parsing it.
To figure out how to store this data, ask yourself, "How will I use it later?" If you will retrieve the array and need to use it in PHP, you can use the serialize function. If you will use the values in JavaScript, then JSON encoding will probably be best for you (plus many languages know how to decode it).
Good luck!
I suggest you take a look at the JSON data type. This way you can store your array more efficiently than with text or varchar, and you can access your data directly from MySQL without having to parse the whole thing.
Take a look at this link : https://dev.mysql.com/doc/refman/5.7/en/json.html
If speed is the most important concern when retrieving the rows, make a new table dedicated to holding the elements of your array. Use integer columns and have each row represent one index of the array. You'll have to create another numeric column that binds these rows together so you can reassemble the array with an SQL query.
This way you help MySQL help you speed up access. If you only want certain parts of the array, you just change the range in the SQL query, and you can reassemble the array however you want.
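A sketch of that layout for the volume data above (table and column names are illustrative):
CREATE TABLE volume_points (
    automation_id INT NOT NULL,  -- binds the rows of one array together
    idx           INT NOT NULL,  -- position of the element within the array
    song_position INT NOT NULL,  -- first value of the data point
    volume        INT NOT NULL,  -- second value, on the 0-100 scale
    PRIMARY KEY (automation_id, idx)
);
-- reassemble the whole array, or any slice of it, in order:
SELECT song_position, volume
FROM volume_points
WHERE automation_id = 1 AND idx BETWEEN 0 AND 9
ORDER BY idx;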
The best way to store an array is the JSON data type:
CREATE TABLE example (
    `id` int NOT NULL AUTO_INCREMENT,
    `docs` JSON,
    PRIMARY KEY (`id`)
);
INSERT INTO example (docs)
VALUES ('["hot", "cold"]');
Read more: https://sebhastian.com/mysql-array/
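As a follow-up, the stored array can also be searched in place; for example, using the table above (MySQL 5.7+):
-- find rows whose docs array contains the string "hot"
SELECT id FROM example
WHERE JSON_CONTAINS(docs, '"hot"');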

Storing statistics of multiple data types in SQL Server 2008

I am creating a statistics module in SQL Server 2008 that allows users to save data in any number of formats (date, int, decimal, percent, etc.). Currently I am using a single table to store these values as type varchar, with an extra field to denote the datatype they should be.
When I display a value, I use that datatype field to format it. I use sprocs to calculate the data for reporting, and the datatype field to convert to the appropriate datatype for the appropriate calculations.
This approach works, but I don't like storing all kinds of data in a varchar field. The only alternative I can see is to have a separate table for each datatype I want to store, and save each record to the appropriate table based on its datatype. To retrieve, I run a case statement to join the appropriate table and get the data. That seems to solve the problem, but it also seems like a lot of work for... what gain?
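For illustration, a minimal T-SQL sketch of the single-table approach described above (names are hypothetical):
-- catch-all table: every value stored as varchar plus a type tag
CREATE TABLE StatValue (
    StatID   INT IDENTITY PRIMARY KEY,
    DataType VARCHAR(10) NOT NULL,  -- 'date', 'int', 'decimal', 'percent'
    RawValue VARCHAR(100) NOT NULL
);
-- retrieval: the CASE guards the conversion so non-decimal rows never get cast
SELECT CASE WHEN DataType = 'decimal'
            THEN CAST(RawValue AS DECIMAL(18, 4))
       END AS Value
FROM StatValue
WHERE DataType = 'decimal';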
Wondering if I'm missing something here. Is there a better way to do this?
Thanks in advance!
Presumably, when you pull this data from the database, it goes through some kind of normalisation before being displayed, to make the report useful? If so, can it not be normalised before it goes into the database?