Proper way to store json in database - mysql

Does json usually have enclosing quotation marks when stored in a database field?
Which of the following would be correct for the `json` column:
'{"Color": "Red"}'
{"Color": "Red"}
My assumption would be the second, but I just wanted to make sure that's correct. And if so, why?

As of MySQL 5.7.8, MySQL supports a native JSON data type.
If you use an earlier version and store it as text, store it without the enclosing quotation marks.
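A minimal sketch of both cases, assuming MySQL 5.7.8+ for the native type; the table and column names are hypothetical. The single quotes in the INSERT are SQL string-literal syntax only; the value actually stored has no enclosing quotation marks.

-- Hypothetical table; on a pre-5.7.8 server, declare `attributes` as TEXT instead of JSON.
CREATE TABLE products (
    id INT PRIMARY KEY AUTO_INCREMENT,
    attributes JSON NOT NULL
);

-- The stored value is {"Color": "Red"} with no enclosing quotes;
-- the single quotes below only delimit the SQL string literal.
INSERT INTO products (attributes) VALUES ('{"Color": "Red"}');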

While it is possible to store the data as you suggest, it would be better to store the data in a table named json with a color field, then insert two records, each containing the value 'red'.
It's more work now, because it involves deconstructing and reconstructing the JSON, but it saves work later: if you need to serve the data in some format other than JSON, you won't need to reformat all of the data in your database.
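A rough sketch of that normalized alternative, with hypothetical column sizes; JSON_OBJECT() (available in MySQL 5.7+) shows one way the JSON could be reconstructed at query time.

-- Normalized layout instead of storing raw JSON.
CREATE TABLE json (
    id INT PRIMARY KEY AUTO_INCREMENT,
    color VARCHAR(32) NOT NULL
);

INSERT INTO json (color) VALUES ('Red');

-- Reconstruct the JSON only when it is actually needed (MySQL 5.7+).
SELECT JSON_OBJECT('Color', color) FROM json;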

Related

Is there an UPDATE to add commas to existing integers/decimals used as currencies in a table?

Data already exists, but since it refers to large amounts, it's not so readable without commas.
I've read this is the function:
(https://i.stack.imgur.com/LVoEC.png)
but how do I write a statement to modify the existing data in a table?
As far as I know, this should be done in your frontend application. The database should only store data in its raw format. That way you keep the possibility of changing the formatting depending on the user's locale, for example, since not all countries use "." and "," the same way in numbers.
If you just need it while developing your queries and not for the end user, check whether your SQL client has an option to format the output; it is usually not something you do in the SQL query itself.
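If you do want formatted output straight from a query, for inspection rather than for storage, MySQL's FORMAT() function adds thousands separators; the table and column names below are hypothetical.

-- FORMAT(value, decimals) returns a string with thousands separators.
SELECT FORMAT(amount, 2) AS amount_display   -- 1234567.891 -> '1,234,567.89'
FROM payments;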

Google-BigQuery - schema parsing of CSV file

We are using Java API to load a CSV file to Google Big Query. Is there a way to detect the columns on load and auto select the appropriate schema type?
For example, if a specific column contains only floats, then BigQuery assigns the column the float type; if it is non-numeric, then it assigns the column the string type. Is there a method to do this?
The roundabout way is to assign each column as string by default when loading the CSV.
Then do a query on each column -
SELECT COUNT(columnname) - COUNT(FLOAT(columnname)) FROM dataset.table
(assuming I am only interested in isolating columns that have "float values" that I can use for math functions from my application)
Any other method to solve this problem?
Right now, BigQuery does not support schema inference, so as you suggest, your options are:
Provide the schema explicitly when loading data.
Load all data using the string type, and cast/convert at query time.
Note that you can use the allowLargeResults feature to clean up and rewrite your imported data (you'll be charged for the query, though, which will increase your data ingestion costs).
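A minimal sketch of the second option, in the legacy SQL dialect used in the question; the dataset, table, and column names are hypothetical. As the counting trick in the question assumes, the legacy FLOAT() cast yields NULL for strings that don't parse as numbers.

-- Everything was loaded as STRING; cast the column when querying.
SELECT FLOAT(price) AS price_as_float
FROM dataset.table
WHERE FLOAT(price) IS NOT NULL;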
For the record, schema auto-detect is now supported: https://cloud.google.com/bigquery/federated-data-sources#auto-detect

Best column data type for JSON data in PostgreSQL < 9.2?

I'm going to store records with arbitrary fields, and the custom ones will automatically go into a separate serialized field. I don't care that they're not searchable or sortable.
I've chosen the JSON serialization format. What is the best column data type, provided I don't have the new json type?
The underlying data type in 9.2 is TEXT, so you should be able to use that - see http://michael.otacoo.com/postgresql-2/postgres-9-2-highlight-json-data-type/
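A minimal sketch for a pre-9.2 server, using hypothetical table and column names; the TEXT column simply holds the serialized JSON with no indexing or validation.

CREATE TABLE records (
    id   serial PRIMARY KEY,
    data text NOT NULL
);

INSERT INTO records (data) VALUES ('{"custom_field": "value"}');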

sort custom encoded data in mysql

I need to sort my data by one of the columns in my table, vendor_params; the thing is, it holds custom-encoded data. Below is how I saved the data in the DB:
vendor_min_pov="200"|vendor_min_poq=1
At first I was thinking of sorting it in PHP, but that was increasing the page load time, since the query sometimes returns a large amount of data as an object with different keys of the same array, and other filtering is applied to that array too; so it would be better to sort it in the SQL query.
I tried to search for how to order encoded data, but the solutions I found are mostly for serialized data.
Can someone please guide me on how to order the results of this table by the value of vendor_min_pov inside the vendor_params column?
In the end I used the other option for sorting this type of data: decoding it would need a bit of tweaking in PHP and would increase the load time, so I sort the data with jQuery on the front end.
However, what I preferred was #mike's suggestion of using MID(); with string functions like that we can sort this sort of data, as sketched below.
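A hedged sketch of that string-extraction idea, using SUBSTRING_INDEX() rather than MID() but in the same spirit; it assumes every row stores vendor_params in the vendor_min_pov="200"|vendor_min_poq=1 format shown above, and the table name is hypothetical.

-- Pull out the digits between vendor_min_pov=" and the closing quote,
-- cast them to a number, and order by that.
SELECT *
FROM vendors
ORDER BY CAST(
    SUBSTRING_INDEX(
        SUBSTRING_INDEX(vendor_params, 'vendor_min_pov="', -1),
        '"', 1
    ) AS UNSIGNED
);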

Mysql field sizes on meta data

I want to create a table that will contain dynamic data; it can be in the form of a date, a boolean, or a text article.
for example:
meta_key = "isActive"
meta_value = "1"
or
meta_key = "theDate"
meta_value = "Sat Jul 23 02:16:57 2005"
or
meta_key = "description"
meta_value = "this is a description and this text can go on and on so i need a long field"
The question is: what type of field should meta_value be so as not to inflate the DB too much for every "1" inserted? Which field types are dynamic and will consume only the space of their own length?
Hope I was clear...
I would only use an unstructured data model like the one you suggest if you are storing unstructured data or documents (e.g. FriendFeed).
Alternative storage thoughts
There are many data storage systems more suitable for unstructured data than an SQL server. I would recommend combining one of these with your existing structured database.
SQL Options
If you can't do this and must store unstructured data in your SQL DB, you have a couple of options. The datatype isn't really the only concern; how your data is stored is. You want:
Some structure that allows an application reading the data to parse it easily, without complex string-manipulation functions.
The ability to define a model for the data in your application, so when you read the data, you know what you've got.
The following two options address both of these challenges.
XML - xml data type
You need to consider the data you are storing. If you need to return it and perform complex searches on the contents, then XML is your best bet. It also allows you to validate that the stored data matches a defined structure (using a DTD). See this article:
http://msdn.microsoft.com/en-us/library/ms189887.aspx
or JSON - nvarchar(max) datatype
If you need to return this data for display on a webpage or for use in JavaScript, then storing it as JSON would be easiest to work with. You can easily load it into an object model which can be worked with directly and manipulated. The downside is that complex searches on the data will be very slow compared to XPath (you have to iterate through all the objects and find the ones that match).
If you are storing data from other languages or unusual characters, go with nvarchar (the Unicode version); otherwise varchar would be most efficient.
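A minimal SQL Server sketch of the two layouts described above; the table and column names are hypothetical.

-- XML column: contents can be validated and queried (XQuery/XPath).
CREATE TABLE meta_xml  (meta_key VARCHAR(64), meta_value XML);

-- JSON stored as plain Unicode text: easy to hand to JavaScript, slow to search.
CREATE TABLE meta_json (meta_key VARCHAR(64), meta_value NVARCHAR(MAX));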
You probably want the VARCHAR field type.
In contrast to CHAR, VARCHAR values are stored as a one-byte or two-byte length prefix plus data.
Hope this helps:
datatype=Text
Are these being used as temp tables or live tables?
Here's an idea I haven't seen yet, but it MAY work for you if you are primarily worried about size explosion and don't mind having the program do a little extra work. That said, I believe the best practice is to give each of these meta keys its own field in a proper table (for example, OrderDate), so you can have descriptions, dates, etc. A catch-all DB table can make for a lot of headaches.
Create the meta table, using this idea:
MetaID
MetaKey
MetaVarchar(255)
MetaText
MetaDate
varchar, text, and date can be null.
Let the inserting program decide which cell to put the value in, and the database call will just return whichever field isn't null. Short items go in the varchar, long ones in the text field, and dates go in the date field so you can change how dates are displayed, as in the sketch below.
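A hedged sketch of that table; the exact column sizes and types are assumptions, and the application fills exactly one of the nullable value columns per row.

CREATE TABLE meta (
    MetaID      INT PRIMARY KEY AUTO_INCREMENT,
    MetaKey     VARCHAR(64) NOT NULL,
    MetaVarchar VARCHAR(255) NULL,   -- short values such as '1'
    MetaText    TEXT NULL,           -- long articles and descriptions
    MetaDate    DATETIME NULL        -- real date values, formatted at display time
);

INSERT INTO meta (MetaKey, MetaVarchar) VALUES ('isActive', '1');
INSERT INTO meta (MetaKey, MetaDate)    VALUES ('theDate', '2005-07-23 02:16:57');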
In MySQL I generally use the BLOB datatype, in which I store a serialized version of a dynamic class that I use for a website.
A BLOB is basically binary data, so once you figure out how to serialize and deserialize the data, you should for the most part be golden.
Please note that for large amounts of data it does become much less efficient, but then again it doesn't require you to change your whole structure.
Here is a better explanation of the blob data type:
http://dev.mysql.com/doc/refman/5.0/en/blob.html
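A minimal sketch of such a column; the table and column names are hypothetical, and the serialization format is whatever the application chooses.

-- The application serializes the object into `payload` and deserializes it on read.
CREATE TABLE object_store (
    id      INT PRIMARY KEY AUTO_INCREMENT,
    payload BLOB NOT NULL
);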