Storing JSON in an MS SQL database?

I'm developing a form generator, and wondering if it would be bad mojo to store JSON in an SQL database?
I want to keep my database & tables simple, so I was going to have
`pKey, formTitle, formJSON`
on a table, and then store
{["firstName":{"required":"true","type":"text"},"lastName":{"required":"true","type":"text"}}
in formJSON.
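A minimal sketch of the table I have in mind (SQL Server syntax; the table name and column sizes are just placeholders):

CREATE TABLE Forms (
    pKey      INT IDENTITY(1,1) PRIMARY KEY,
    formTitle NVARCHAR(200) NOT NULL,
    formJSON  NVARCHAR(MAX) NOT NULL
);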
Any input is appreciated.

I use JSON extensively in my CMS (which hosts about 110 sites) and I find the speed of accessing the data to be very fast. I was surprised that there wasn't more speed degradation. Every object in the CMS (Page, Layout, List, Topic, etc.) has an NVARCHAR(MAX) column called JSONConfiguration. My ORM tool knows to look for that column and reconstitute it as an object if needed. Or, depending on the situation, I will just pass it to the client for jQuery or Ext JS to process.
As for readability / maintainability of my code, you might say it's improved because I now have classes that represent a lot of the JSON objects stored in the DB.
I used JSON.net for all serialization / deserialization. https://www.newtonsoft.com/json
I also use a single query to return meta-JSON with the actual data. As in the case of Ext JS, I have queries that return both the structure of the Ext JS object as well as the data the object will need. This cuts out one post back / SQL round trip.
I was also surprised at how fast the code was to parse a list of JSON objects and map them into a DataTable object that I then handed to a GridView.
The only downside I've seen to using JSON is indexing. If you have a property of the JSON you need to search, then you have to store it as a separate column.
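On SQL Server 2016 and later, one option is to surface the property as a computed column over JSON_VALUE and index that, rather than maintaining the copy by hand; a sketch with made-up table and property names:

ALTER TABLE Pages ADD PageTitle AS JSON_VALUE(JSONConfiguration, '$.title');
CREATE INDEX IX_Pages_PageTitle ON Pages (PageTitle);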
There are JSON DBs out there that might serve your needs better: CouchDB, MongoDB, and Cassandra.

A brilliant way to make an object database out of SQL Server. I do this for all config objects and everything else that doesn't need any specific querying. Extending your object is easy: just create a new property in your class and initialize it with a default value. Don't need a property any more? Just delete it from the class. Easy roll-out, easy upgrade. Not suitable for all objects, but as long as you extract any prop you need to index on into its own column, keep using it. A very modern way of using SQL Server.

It will be slower than having the form defined in code, but one extra query shouldn't cause you much harm. (Just don't let 1 extra query become 10 extra queries!)
Edit: If you are selecting the row by formTitle instead of pKey (I would, because then your code will be more readable), put an index on formTitle
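A one-line sketch, assuming the table from the question is called Forms:

CREATE INDEX IX_Forms_formTitle ON Forms (formTitle);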

We have used a modified version of XML for exactly the purpose you describe for seven or eight years and it works great. Our customers' form needs are so diverse that we could never keep up with a table/column approach. We are too far down the XML road to change very easily, but I think JSON would work as well and maybe even better.
Reporting is no problem with a couple of good parsing functions and I would defy anyone to find a significant difference in performance between our reporting/analytics and a table/column solution to this need.

I wouldn't recommend it.
If you ever want to do any reporting or query based on these values in the future it's going to make your life a lot harder than having a few extra tables/columns.
Why are you avoiding making new tables? I say if your application requires them go ahead and add them in... Also if someone has to go through your code/db later it's probably going to be harder for them to figure out what you had going on (depending on what kind of documentation you have).

You should be able to use SisoDb for this. http://sisodb.com

I think it's not an optimal idea to store object data as a string in SQL. You have to do the transformation outside of SQL in order to parse it. That presents a performance issue, and you lose the leverage of SQL's native data-parsing capabilities. A better way would be to store the data in SQL's XML datatype rather than as JSON text. This way, you kill two birds with one stone: you don't have to create a shitload of tables and you still get all the native querying benefits of SQL.
XML in SQL Server 2005? Better than JSON in Varchar?
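A minimal sketch of what the XML-datatype route can look like in SQL Server (the table layout and XPath are invented for illustration):

CREATE TABLE Forms (pKey INT PRIMARY KEY, formTitle NVARCHAR(200), formDef XML);

SELECT formDef.value('(/form/field/@required)[1]', 'NVARCHAR(10)') AS firstFieldRequired
FROM Forms
WHERE pKey = 1;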

Related

How efficient is using JSON with MySQL? (esp. JSON_EXTRACT, JSON_MERGE..)

I am used to MSSQL, now using MySQL, and have come to a point, where it is convenient to dump a whole JSON object into a single db column, instead of hacking it away into separate ones, in a relational database fashion.
Most of the time, I will just SELECT the whole object back into the app when needed, but occasionally, I will want to filter rows according to contents of the JSONs and maybe do some other operations.
Is working with JSON objects "native" to MySQL, or is it kind of bolted on for extended usability (JSON_EXTRACT, JSON_MERGE, ...), sacrificing performance, and therefore not a good way to go?
It's native to MySQL and works very smoothly.
Read through this example; it's pretty simple and useful:
https://scotch.io/tutorials/working-with-json-in-mysql
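A minimal sketch of the two access patterns mentioned in the question (table, column and field names are made up):

CREATE TABLE orders (id INT PRIMARY KEY, doc JSON);
INSERT INTO orders VALUES (1, '{"status": "shipped", "items": 3}');

-- Pull the whole document back into the app:
SELECT doc FROM orders WHERE id = 1;

-- Occasionally filter rows on a field inside the document:
SELECT id FROM orders
WHERE JSON_UNQUOTE(JSON_EXTRACT(doc, '$.status')) = 'shipped';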

Native JSON support in MySQL 5.7: what are the pros and cons of the JSON data type in MySQL?

In MySQL 5.7 a new data type for storing JSON data in MySQL tables has been added. It will obviously be a great change in MySQL. They listed some benefits:
Document Validation - Only valid JSON documents can be stored in a JSON column, so you get automatic validation of your data.
Efficient Access - More importantly, when you store a JSON document in a JSON column, it is not stored as a plain text value. Instead, it is stored in an optimized binary format that allows for quicker access to object members and array elements.
Performance - Improve your query performance by creating indexes on values within the JSON columns. This can be achieved with "functional indexes" on virtual columns.
Convenience - The additional inline syntax for JSON columns makes it very natural to integrate document queries within your SQL. For example (features.feature is a JSON column):
SELECT feature->"$.properties.STREET" AS property_street FROM features WHERE id = 121254;
Wow! They include some great features. Now it is easier to manipulate data, and it is possible to store more complex data in a column.
So MySQL is now flavored with NoSQL.
Now I can imagine a query for JSON data something like
SELECT * FROM t1
WHERE JSON_EXTRACT(data, "$.series") IN
(
    SELECT JSON_EXTRACT(data, "$.inverted")
    FROM t1                                  -- sample row: {"series": 3, "inverted": 8}
    WHERE JSON_EXTRACT(data, "$.inverted") < 4
);
So can I store lots of small relations in a few JSON columns? Is it good? Does it break normalization? If this is possible then I guess it will act like NoSQL in a MySQL column. I really want to know more about this feature. What are the pros and cons of the MySQL JSON data type?
SELECT * FROM t1
WHERE JSON_EXTRACT(data,"$.series") IN ...
Using a column inside an expression or function like this spoils any chance of the query using an index to help optimize the query. The query shown above is forced to do a table-scan.
The claim about "efficient access" is misleading. It means that after the query examines a row with a JSON document, it can extract a field without having to parse the text of the JSON syntax. But it still takes a table-scan to search for rows. In other words, the query must examine every row.
By analogy, if I'm searching a telephone book for people with first name "Bill", I still have to read every page in the phone book, even if the first names have been highlighted to make it slightly quicker to spot them.
MySQL 5.7 allows you to define a virtual column in the table, and then create an index on the virtual column.
ALTER TABLE t1
  ADD COLUMN series INT AS (JSON_EXTRACT(data, '$.series')),
  ADD INDEX (series);
Then if you query the virtual column, it can use the index and avoid the table-scan.
SELECT * FROM t1
WHERE series IN ...
This is nice, but it kind of misses the point of using JSON. The attractive part of using JSON is that it allows you to add new attributes without having to do ALTER TABLE. But it turns out you have to define an extra (virtual) column anyway, if you want to search JSON fields with the help of an index.
But you don't have to define virtual columns and indexes for every field in the JSON document—only those you want to search or sort on. There could be other attributes in the JSON that you only need to extract in the select-list like the following:
SELECT JSON_EXTRACT(data, '$.series') AS series FROM t1
WHERE <other conditions>
I would generally say that this is the best way to use JSON in MySQL. Only in the select-list.
When you reference columns in other clauses (JOIN, WHERE, GROUP BY, HAVING, ORDER BY), it's more efficient to use conventional columns, not fields within JSON documents.
I presented a talk called How to Use JSON in MySQL Wrong at the Percona Live conference in April 2018. I'll update and repeat the talk at Oracle Code One in the fall.
There are other issues with JSON. For example, in my tests it required 2-3 times as much storage space for JSON documents compared to conventional columns storing the same data.
MySQL is promoting their new JSON capabilities aggressively, largely to dissuade people against migrating to MongoDB. But document-oriented data storage like MongoDB is fundamentally a non-relational way of organizing data. It's different from relational. I'm not saying one is better than the other, it's just a different technique, suited to different types of queries.
You should choose to use JSON when JSON makes your queries more efficient.
Don't choose a technology just because it's new, or for the sake of fashion.
Edit: The virtual column implementation in MySQL is supposed to use the index if your WHERE clause uses exactly the same expression as the definition of the virtual column. That is, the following should use the index on the virtual column, since the virtual column is defined AS (JSON_EXTRACT(data,"$.series"))
SELECT * FROM t1
WHERE JSON_EXTRACT(data,"$.series") IN ...
Except I have found by testing this feature that it does NOT work for some reason if the expression is a JSON-extraction function. It works for other types of expressions, just not JSON functions. UPDATE: this reportedly works, finally, in MySQL 5.7.33.
The following, from "MySQL 5.7 brings sexy back with JSON", sounds good to me:
Using the JSON Data Type in MySQL comes with two advantages over storing JSON strings in a text field:
Data validation. JSON documents will be automatically validated and invalid documents will produce an error.
Improved internal storage format. The JSON data is converted to a format that allows quick read access to the data in a structured format. The server is able to lookup subobjects or nested values by key or index, allowing added flexibility and performance.
...
Specialised flavours of NoSQL stores (Document DBs, Key-value stores and Graph DBs) are probably better options for their specific use cases, but the addition of this datatype might allow you to reduce complexity of your technology stack. The price is coupling to MySQL (or compatible) databases. But that is a non-issue for many users.
Note the language about document validation, as it is an important factor. I guess a battery of tests needs to be performed to compare the two approaches. Those two being:
MySQL with JSON datatypes
MySQL without
The net has only shallow slideshares as of now on the topic of MySQL / JSON / performance, from what I am seeing.
Perhaps your post can be a hub for it. Or perhaps performance is an afterthought, not sure, and you are just excited to not create a bunch of tables.
From my experience, the JSON implementation, at least in MySQL 5.7, is not very useful due to its poor performance.
Well, it is not so bad for reading data and validation. However, JSON modification is 10-20 times slower with MySQL than with Python or PHP.
Let's imagine a very simple JSON document:
{ "name": "value" }
Let's suppose we have to convert it to something like this:
{ "name": "value", "newName": "value" }
You can create a simple script with Python or PHP that will select all rows and update them one by one. You are not forced to make one huge transaction for it, so other applications will still be able to use the table in parallel. Of course, you can also make one huge transaction if you want, so you'll get a guarantee that MySQL performs "all or nothing", but other applications will most probably not be able to use the database during the transaction execution.
I have a 40 million row table, and the Python script updates it in 3-4 hours.
Now we have MySQL JSON, so we don't need Python or PHP anymore; we can do something like this:
UPDATE `JsonTable` SET `JsonColumn` = JSON_SET(`JsonColumn`, '$.newName', JSON_EXTRACT(`JsonColumn`, '$.name'))
It looks simple and excellent. However, its speed is 10-20 times slower than the Python version, and it is a single transaction, so other applications cannot modify the table data in parallel.
So, if we want to just duplicate a JSON key in a 40 million row table, we'd need to not use the table at all for 30-40 hours. It makes no sense.
About reading data: from my experience, direct access to a JSON field via JSON_EXTRACT in WHERE is also extremely slow (much slower than TEXT with LIKE on a non-indexed column). Virtual generated columns perform much faster; however, if we know our data structure beforehand, we don't need JSON, we can use traditional columns instead. When we use JSON where it is really useful, i.e. when the data structure is unknown or changes often (for example, custom plugin settings), creating virtual columns on a regular basis for every possible new column doesn't look like a good idea.
Python and PHP handle JSON validation like a charm, so it is questionable whether we need JSON validation on the MySQL side at all. Why not also validate XML, Microsoft Office documents, or check spelling? ;)
I got into this problem recently, and I can sum up the following experiences:
1. There isn't one way to solve all problems.
2. You should use JSON properly.
One case:
I have a table named CustomField, and it must have two columns: name, fields.
name is a localized string; its content should look like:
{
    "en": "this is English name",
    "zh": "this is Chinese name"
    ...(other languages)
}
And fields should be like this:
[
    {
        "field1": "value",
        "field2": "value"
        ...
    },
    {
        "field1": "value",
        "field2": "value"
        ...
    }
    ...
]
As you can see, both the name and the fields can be saved as JSON, and it works!
However, if I use the name to search this table very frequently, what should I do? Use JSON_CONTAINS, JSON_EXTRACT...? Obviously, it's not a good idea to save it as JSON anymore; we should save it to an independent table: CustomFieldName.
From the above case, I think you should keep these ideas in mind:
Why does MySQL support JSON?
Why do you want to use JSON? Does your business logic just need this? Or is there something else?
Never be lazy
Thanks
Strongly disagree with some of the things that are said in other answers (which, to be fair, were written a few years ago).
We have very carefully started to adopt JSON fields with a healthy skepticism. Over time we've been adding this more.
This generally describes the situation we are in:
Like 99% of applications out there, we are not doing things at a massive scale. We work with many different applications and databases, the majority of these are capable of running on modest hardware.
We have processes and know-how in place to make changes if performance does become a problem.
We have a general idea of which tables are going to be large and think carefully about how we optimize queries for them.
We also know in which cases this is not really needed.
We're pretty good at data validation and static typing at the application layer.
Lastly,
When we use JSON for storing complex data, that data is never referenced directly by other tables. We also tend to never need to use them in where clauses in hot paths.
So with all this in mind, using a little JSON field instead of 1 or more tables vastly reduces the complexity of queries and data model. Removing this complexity makes it easier to write certain queries, makes our code simpler and just generally saves time.
Complexity and performance is something that needs to be carefully balanced. JSON fields should not be blindly applied, but for the cases where this works it's fantastic.
'JSON fields don't perform well' is a valid reason to not use JSON fields, if you are at a place where that performance difference matters.
One specific example is that we have a table where we store settings for video transcoding. The settings table has 1 'profile' per row, and the settings themselves have a maximum nesting level of 4 (arrays and objects).
Despite this being a large database overall, there's only a few hundreds of these records in the database. Suggesting to split this into 5 tables would yield no benefit and lots of pain.
This is an extreme example, but we have plenty of others (with more rows) where the decision to use JSON fields is a few years in the past, and hasn't yet caused an issue.
Last point: it is now possible to directly index on JSON fields.
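For example, on recent MySQL versions (8.0.13+) you can put a functional index directly over an extracted JSON value; a sketch with invented names (the CAST and COLLATE are there so the index key has a typed, consistently collated value):

CREATE TABLE profiles (id INT PRIMARY KEY, settings JSON);
CREATE INDEX idx_profiles_codec
    ON profiles ((CAST(settings->>'$.codec' AS CHAR(32)) COLLATE utf8mb4_bin));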

Postgresql JSONB is coming. What to use now? Hstore? JSON? EAV?

After going through the relational DB/NoSQL research debate, I've come to the conclusion that I will be moving forward with PG as my data store. A big part of that decision was the announcement of JSONB coming to 9.4. My question is what should I do now, building an application from the ground up knowing that I want to migrate to (I mean use right now!) jsonb? The DaaS options for me are going to be running 9.3 for a while.
From what I can tell, and correct me if I'm wrong, hstore would run quite a bit faster since I'll be doing a lot of queries of many keys in the hstore column and if I were to use plain json I wouldn't be able to take advantage of indexing/GIN etc. However I could take advantage of nesting with json, but running any queries would be very slow and users would be frustrated.
So, do I build my app around the current version of hstore or json data type, "good ol" EAV or something else? Should I structure my DB and app code a certain way? Any advice would be greatly appreciated. I'm sure others may face the same question as we await the next official release of PostgreSQL.
A few extra details on the app I want to build:
-Very relational (with one exception below)
-Strong social network aspect (groups, friends, likes, timeline etc)
-Based around a single object with variable user assigned attributes, maybe 10 or 1000+ (this is where the schema-less design need comes into play)
Thanks in advance for any input!
It depends. If you expect to have a lot of users, a very high transaction volume, or an insane number of attribute fetches per query, I would say use HSTORE. If, however, your app will start small and grow over time, or have relatively few transactions that fetch attributes, or just fetch a few per query, then use JSON. Even in the latter case, if you're not fetching many attributes but checking one or two keys often in the WHERE clause of your queries, you can create a functional index to speed things up:
CREATE INDEX idx_foo_somekey ON foo((bar ->> 'somekey'));
Now, when you have WHERE bar ->> 'somekey', it should use the index.
And of course, it will be easier to use nested data and to upgrade to jsonb when it becomes available to you.
So I would lean toward JSON unless you know for sure you're going to kick your server's ass with heavy use of key fetches before you have a chance to upgrade to 9.4. But to be sure of that, I would say, do some benchmarking with anticipated query volumes now and see what works best for you.
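For reference, a minimal sketch of the hstore route with a GIN index (table and keys invented):

CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE things (id serial PRIMARY KEY, attrs hstore);
CREATE INDEX idx_things_attrs ON things USING GIN (attrs);

INSERT INTO things (attrs) VALUES ('color => red, size => large');

-- The @> containment operator can use the GIN index:
SELECT * FROM things WHERE attrs @> 'color => red';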
You probably don't give quite enough detail to get a very detailed answer, but I will say this... If your data is "very relational" then I believe your best course is to build it with a good relational design. If it's just one field with "variable assigned attributes", then that sounds like a good use for an hstore, which is pretty tried and true at this point. I've been doing some reading on 9.4 and jsonb sounds cool, but that won't be out for a while. I suspect that a good schema design in 9.3 plus a very targeted use of hstore will probably yield a good combination of performance and flexibility.

Performance of MySql Xml functions?

I am pretty excited about the new MySQL XML functions.
Now I can finally embed something like "object oriented" documents in my oldschool relational database.
For an example use-case, consider a user who signs up at your website using Facebook Connect.
You can fetch an object for the user using the graph api, and get nice information. This information however can vary vastly. Some fields may or may not be set, some may be added over time and so on.
Well, if you are just interested in a few specific fields (for example friend relations, gender, movies...), you can project them into your relational database schema.
However, using the XML functions you could store the whole object inside a field and then your different models can access the data using the ExtractValue function. You can store everything right away without needing to worry about what you will need later.
But what will the performance be?
For example, I have a table with 50,000 entries which represent users.
I have an enum field that states "male", "female" (or various other genders to be politically correct).
The performance of for example fetching all males will be very fast.
But what about something like WHERE ExtractValue(userdata, '/gender') = 'male'?
How will the performance vary if the object gets bigger?
Can I maybe somehow put an index on specified XPath selections?
How do field types work together with these functions and their performance? Varchar/blob?
Do I need fulltext indexes?
To sum up my question:
MySQL XML functions look great. And I am sure they are really great if you just want to store structured data that you fetch and analyze further in your application.
But how will they hold up in procedures where there are internal scans/sorting/comparisons/calculations performed on them?
Can MySQL replace document oriented databases like CouchDB/Sesame?
What are the gains and trade-offs of XML functions?
How and why are they better/worse than a dynamic application that stores various data as attributes?
For example a key/value table with an xpath as key and the value as value connected to the document entity.
Has anyone had any other experiences with it or noticed anything worth mentioning?
I tend to make comments similar to Pekka's, but I think the reason we cannot laugh this off is your statement "This information however can vary vastly." That means it is not realistic to plan to parse it all and project it into the database.
I cannot answer all of your questions, but I can answer some of them.
Most notably, I cannot tell you about performance on MySQL. I have seen it in SQL Server, tested it, and found that SQL Server performs in-memory XML extractions very slowly; to me it seemed as if it were reading from disk, but that is a bit of an exaggeration. Others may dispute this, but that is what I found.
"Can Mysql replace document oriented databases like CouchDB/Sesame?" This question is a bit over-broad but in your case using MySQL lets you keep ACID compliance for these XML chunks, assuming you are using InnoDB, which cannot be said automatically for some of those document oriented databases.
"How and why are they better/worse than a dynamic application that stores various data as attributes?" I think this is really a matter of style. You are given XML chunks that are (presumably) documented and MySQL can navigate them. If you just keep them as-such you save a step. What would be gained by converting them to something else?
The MySQL docs suggest that the XML file will go into a clob field. Performance may suffer on larger docs. Perhaps then you will identify sub-documents that you want to regularly break out and put into a child table.
Along these same lines, if there are particular sub-docs you know you will want to know about, you can make a child table, "HasDocs", do a little pre-processing, and populate it with names of sub-docs with their counts. This would make for faster statistical analysis and also make it faster to find docs that have certain sub-docs.
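A rough sketch of that pre-processing child table (all names are made up):

CREATE TABLE HasDocs (
    user_id   INT NOT NULL,
    sub_doc   VARCHAR(64) NOT NULL,   -- e.g. 'friends', 'movies'
    doc_count INT NOT NULL,
    PRIMARY KEY (user_id, sub_doc)
);

-- Populated whenever the XML blob is inserted or updated:
INSERT INTO HasDocs (user_id, sub_doc, doc_count)
VALUES (42, 'friends', 120)
ON DUPLICATE KEY UPDATE doc_count = VALUES(doc_count);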
Wish I could say more, hope this helps.

Storing Data in MySQL as JSON

I thought this was a n00b thing to do. And, so, I've never done it. Then I saw that FriendFeed did this and actually made their DB scale better and decreased latency. I'm curious if I should do this. And, if so, what's the right way to do it?
Basically, what's a good place to learn how to store everything in MySQL as a CouchDB sort of DB? Storing everything as JSON seems like it'd be easier and quicker (not to build, less latency).
Also, is it easy to edit, delete, etc., things stored as JSON on the DB?
Everybody commenting seems to be coming at this from the wrong angle. It is fine to store JSON via PHP in a relational DB, and it will in fact be faster to load and display complex data like this; however, you will have design considerations such as searching, indexing, etc.
The best way of doing this is to use hybrid data. For example, if you need to search based upon a datetime, MySQL (performance-tuned) is going to be a lot faster than PHP, and for something like searching the distance of venues MySQL should also be a lot faster (notice searching, not accessing). Data you do not need to search on can then be stored in JSON, BLOB or any other format you really deem necessary.
Data you only need to access is very easily stored as JSON, for example a basic per-case invoice system. It doesn't benefit very much at all from an RDBMS, and could be stored in JSON just by json_encoding($_POST['entries']) if you have the correct HTML form structure.
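A minimal sketch of that hybrid layout (names and column choices are invented):

CREATE TABLE invoices (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    created_at DATETIME NOT NULL,        -- searchable, indexed
    venue_lat  DECIMAL(9,6),
    venue_lng  DECIMAL(9,6),
    payload    TEXT NOT NULL,            -- the json_encode()d rest of the form
    KEY idx_created_at (created_at)
);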
I am glad you are happy using MongoDB and I hope that it continues to serve you well, but don't think that MySQL is always going to be off your radar, as your app increases in complexity you may well end up needing an RDBMS for some functionality and features (even if it is just for retiring archived data or business reporting)
MySQL 5.7 Now supports a native JSON data type similar to MongoDB and other schemaless document data stores:
JSON support
Beginning with MySQL 5.7.8, MySQL supports a native JSON type. JSON values are not stored as strings, instead using an internal binary format that permits quick read access to document elements. JSON documents stored in JSON columns are automatically validated whenever they are inserted or updated, with an invalid document producing an error. JSON documents are normalized on creation, and can be compared using most comparison operators such as =, <, <=, >, >=, <>, !=, and <=>; for information about supported operators as well as precedence and other rules that MySQL follows when comparing JSON values, see Comparison and Ordering of JSON Values.
MySQL 5.7.8 also introduces a number of functions for working with JSON values. These functions include those listed here:
Functions that create JSON values: JSON_ARRAY(), JSON_MERGE(), and JSON_OBJECT(). See Section 12.16.2, “Functions That Create JSON Values”.
Functions that search JSON values: JSON_CONTAINS(), JSON_CONTAINS_PATH(), JSON_EXTRACT(), JSON_KEYS(), and JSON_SEARCH(). See Section 12.16.3, “Functions That Search JSON Values”.
Functions that modify JSON values: JSON_APPEND(), JSON_ARRAY_APPEND(), JSON_ARRAY_INSERT(), JSON_INSERT(), JSON_QUOTE(), JSON_REMOVE(), JSON_REPLACE(), JSON_SET(), and JSON_UNQUOTE(). See Section 12.16.4, “Functions That Modify JSON Values”.
Functions that provide information about JSON values: JSON_DEPTH(), JSON_LENGTH(), JSON_TYPE(), and JSON_VALID(). See Section 12.16.5, “Functions That Return JSON Value Attributes”.
In MySQL 5.7.9 and later, you can use column->path as shorthand for JSON_EXTRACT(column, path). This works as an alias for a column wherever a column identifier can occur in an SQL statement, including WHERE, ORDER BY, and GROUP BY clauses. This includes SELECT, UPDATE, DELETE, CREATE TABLE, and other SQL statements. The left hand side must be a JSON column identifier (and not an alias). The right hand side is a quoted JSON path expression which is evaluated against the JSON document returned as the column value.
See Section 12.16.3, “Functions That Search JSON Values”, for more information about -> and JSON_EXTRACT(). For information about JSON path support in MySQL 5.7, see Searching and Modifying JSON Values. See also Secondary Indexes and Virtual Generated Columns.
More info:
https://dev.mysql.com/doc/refman/5.7/en/json.html
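A quick sketch of that -> shorthand with made-up names:

SELECT id, doc->"$.title" AS title
FROM articles
ORDER BY doc->"$.created_at" DESC;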
CouchDB and MySQL are two very different beasts. JSON is the native way to store stuff in CouchDB. In MySQL, the best you could do is store JSON data as text in a single field. This would entirely defeat the purpose of storing it in an RDBMS and would greatly complicate every database transaction.
Don't.
Having said that, FriendFeed seemed to use an extremely custom schema on top of MySQL. It really depends on what exactly you want to store, there's hardly one definite answer on how to abuse a database system so it makes sense for you. Given that the article is very old and their main reason against Mongo and Couch was immaturity, I'd re-evaluate these two if MySQL doesn't cut it for you. They should have grown a lot by now.
JSON characters are nothing special when it comes down to storage; chars such as {, }, [, ], ', a-z, 0-9 are really nothing special and can be stored as text.
The first problem you're going to have is with this:
{
    profile_id: 22,
    username: 'Robert',
    password: 'skhgeeht893htgn34ythg9er'
}
That, stored in a database, is not that simple to update unless you had your own procedure and developed a JSON decoder for MySQL:
UPDATE users SET JSON(user_data,'username') = 'New User';
Since you can't do that, you would have to first SELECT the JSON, decode it, change it, and update it; so in theory you might as well spend more time constructing a suitable database structure!
I do use JSON to store data, but only metadata: data that doesn't get updated often and isn't specific to the user. For example, if a user adds a post and in that post he adds images, I'll parse the images, create thumbs, and then use the thumb URLs in a JSON format.
To illustrate how difficult it is to get JSON data using a query, I will share the query I made to handle this.
It doesn't take into account arrays or other objects, just basic datatypes. You should change the 4 instances of column to the column name storing the JSON, and change the 4 instances of myfield to the JSON field you want to access.
SELECT
    SUBSTRING(
        REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', ''),
        LOCATE(
            CONCAT('myfield', ':'),
            REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', '')
        ) + CHAR_LENGTH(CONCAT('myfield', ':')),
        LOCATE(
            ',',
            SUBSTRING(
                REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', ''),
                LOCATE(
                    CONCAT('myfield', ':'),
                    REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', '')
                ) + CHAR_LENGTH(CONCAT('myfield', ':'))
            )
        ) - 1
    )
    AS myfield
FROM mytable WHERE id = '3435'
This is an old question, but I am still able to see it at the top of Google's search results, so I guess it is meaningful to add a new answer 4 years after the question was asked.
First of all, there is better support for storing JSON in an RDBMS now. You may consider switching to PostgreSQL (although MySQL has supported JSON since v5.7.7). PostgreSQL uses very similar SQL commands to MySQL, except it supports more functions. One of the things it added is a JSON data type, and you are now able to query the stored JSON. (Some reference on this.) If you are not making up the query directly in your program, for example, using PDO in PHP or Eloquent in Laravel, all you need to do is install PostgreSQL on your server and change the database connection settings. You don't even need to change your code.
Most of the time, as the other answers suggest, storing data as JSON directly in an RDBMS is not a good idea. There are some exceptions though. One situation I can think of is a field with a variable number of linked entries.
For example, for storing the tags of a blog post, normally you would need a table for blog posts, a table of tags, and a matching table. So, when the user wants to edit a post and you need to display which tags are related to that post, you will need to query 3 tables. This will damage performance a lot if your matching table / tag table is long.
By storing the tags as JSON in the blog post table, the same action only requires a single table lookup. The user will then be able to see the blog post to be edited quicker, but this will damage performance if you want to make a report on which posts are linked to a tag, or maybe search by tag.
You may also try to de-normalize the database. By duplicating the data and storing it both ways, you receive the benefits of both methods. You will just need a little bit more time to store your data and more storage space (which is cheap compared to the cost of more computing power).
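A minimal sketch of the two layouts described above (all table and column names are invented):

-- Normalized: a three-way join to list a post's tags.
SELECT t.name
FROM post_tag pt
JOIN tag t ON t.id = pt.tag_id
WHERE pt.post_id = 123;

-- Denormalized: the tags kept as a JSON array on the post row.
SELECT tags FROM post WHERE id = 123;   -- e.g. '["mysql", "json"]'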
It really depends on your use case. If you are storing information that has absolutely no value in reporting, and won't be queried via JOINs with other tables, it may make sense for you to store your data in a single text field, encoded as JSON.
This could greatly simplify your data model. However, as mentioned by RobertPitt, don't expect to be able to combine this data with other data that has been normalized.
I would say the only two reasons to consider this are:
performance just isn't good enough with a normalised approach
you cannot readily model your particularly fluid/flexible/changing data
I wrote a bit about my own approach here:
What scalability problems have you encountered using a NoSQL data store?
(see the top answer)
Even JSON wasn't quite fast enough so we used a custom-text-format approach. Worked / continues to work well for us.
Is there a reason you're not using something like MongoDB? (could be MySQL is "required"; just curious)
Here is a function that will save/update keys of a JSON array in a column, and another function that retrieves JSON values. These functions are written assuming that the column storing the JSON array is named json. They use PDO.
Save/Update Function
function save($uid, $key, $val){
    global $dbh; // The PDO object
    $sql = $dbh->prepare("SELECT `json` FROM users WHERE `id`=?");
    $sql->execute(array($uid));
    $data = $sql->fetch();
    $arr = json_decode($data['json'], true);
    $arr[$key] = $val; // Update the value
    $sql = $dbh->prepare("UPDATE `users` SET `json`=? WHERE `id`=?");
    $sql->execute(array(
        json_encode($arr),
        $uid
    ));
}
where $uid is the user's id, $key is the JSON key to update, and $val is its value.
Get Value Function
function get($uid, $key){
    global $dbh;
    $sql = $dbh->prepare("SELECT `json` FROM `users` WHERE `id`=?");
    $sql->execute(array($uid));
    $data = $sql->fetch();
    $arr = json_decode($data['json'], true);
    return $arr[$key];
}
where $key is the key of the JSON array from which we need the value.
It seems to me that everyone answering this question is kind-of missing the one critical issue, except #deceze -- use the right tool for the job. You can force a relational database to store almost any type of data and you can force Mongo to handle relational data, but at what cost? You end up introducing complexity at all levels of development and maintenance, from schema design to application code; not to mention the performance hit.
In 2014 we have access to many database servers that handle specific types of data exceptionally well.
Mongo (document storage)
Redis (key-value data storage)
MySQL/Maria/PostgreSQL/Oracle/etc (relational data)
CouchDB (JSON)
I'm sure I missed some others, like RabbitMQ and Cassandra. My point is, use the right tool for the data you need to store.
If your application requires storage and retrieval of a variety of data really, really fast, (and who doesn't) don't shy away from using multiple data sources for an application. Most popular web frameworks provide support for multiple data sources (Rails, Django, Grails, Cake, Zend, etc). This strategy limits the complexity to one specific area of the application, the ORM or the application's data source interface.
Early support for storing JSON in MySQL has been added to the MySQL 5.7.7 JSON labs release (linux binaries, source)! The release seems to have grown from a series of JSON-related user-defined functions made public back in 2013.
This nascent native JSON support seems to be heading in a very positive direction, including JSON validation on INSERT, an optimized binary storage format including a lookup table in the preamble that allows the JSN_EXTRACT function to perform binary lookups rather than parsing on every access. There is also a whole raft of new functions for handling and querying specific JSON datatypes:
CREATE TABLE users (id INT, preferences JSON);
INSERT INTO users VALUES (1, JSN_OBJECT('showSideBar', true, 'fontSize', 12));
SELECT id, JSN_EXTRACT(preferences, '$.showSideBar') FROM users;
+----+--------------------------------------------+
| id | JSN_EXTRACT(preferences, '$.showSideBar')  |
+----+--------------------------------------------+
|  1 | true                                       |
+----+--------------------------------------------+
IMHO, the above is a great use case for this new functionality; many SQL databases already have a user table and, rather than making endless schema changes to accommodate an evolving set of user preferences, having a single JSON column a single JOIN away is perfect. Especially as it's unlikely that it would ever need to be queried for individual items.
While it's still early days, the MySQL server team are doing a great job of communicating the changes on the blog.
JSON is a valid datatype in PostgreSQL database as well. However, MySQL database has not officially supported JSON yet. But it's baking: http://mysqlserverteam.com/json-labs-release-native-json-data-type-and-binary-format/
I also agree that there are many valid cases where some data is better serialized to a string in a database. The primary reason might be when it's not regularly queried, and when its own schema might change - you don't want to change the database schema to match. The second reason is when the serialized string comes directly from external sources; you may not want to parse all of it and feed it into the database until you actually use it. So I'll be waiting for the new MySQL release to support JSON, since it'll make switching between different databases easier.
I know this is really late, but I did have a similar situation where I used a hybrid approach of maintaining RDBMS standards of normalizing tables up to a point and then storing data as JSON text beyond that point. So, for example, I store data in 4 tables following RDBMS rules of normalization. However, in the 4th table, to accommodate a dynamic schema, I store data in JSON format. Every time I want to retrieve data, I retrieve the JSON data, parse it, and display it in Java. This has worked for me so far, and to ensure that I am still able to index the fields, I transform the JSON data in the table into a normalized form using an ETL. This ensures that while the user is working on the application he faces minimal lag, and the fields are transformed into an RDBMS-friendly format for data analysis etc. I see this approach working well and believe that, given MySQL (5.7+) also allows parsing of JSON, this approach gives you the benefits of both RDBMS and NoSQL databases.
I use JSON to record anything for a project; in fact I use three tables! One for the data in JSON, one for the index of each piece of metadata of the JSON structure (each meta is encoded by a unique id), and one for the user session, that's all.
The benchmark cannot be quantified at this early stage of the code, but for example I was using views (inner join with index) to get a category (or anything: a user, ...), and it was very slow (very, very slow; using views in MySQL is not the right way).
The search module, in this structure, can do anything I want, but I think MongoDB will be more efficient in this concept of full-JSON data records.
For my example, I used views to create the category tree and the breadcrumb. My god! So many queries to do! Apache itself gave up! And, in fact, for this little website, I now use a PHP script that generates the tree and breadcrumb; the extraction of the data is done by the search module (which uses only the index), and the data table is used only for updates.
If I want, I can destroy all the indexes and regenerate them from the data, or do the reverse: destroy all the data (JSON) and regenerate it using only the index table.
My project is young, running under PHP and MySQL, but sometimes I think using Node.js and MongoDB would be more efficient for this project.
Use JSON if you think you can, just do it, because you can! And forget it if it was a mistake; try, making good or bad choices, but try!
Low
a French user
I believe that storing JSON in a mysql database does in fact defeat the purpose of using RDBMS as it is intended to be used. I would not use it in any data that would be manipulated at some point or reported on, since it not only adds complexity but also could easily impact performance depending on how it is used.
However, I was curious if anyone else has thought of a possible reason to actually do this. I was thinking of making an exception for logging purposes. In my case, I want to log requests that have a variable number of parameters and errors. In this situation, I want to use tables for the types of requests, and the requests themselves with a JSON string of the different values that were obtained.
In the above situation, the requests are logged and never manipulated or indexed within the JSON string field. HOWEVER, in a more complex environment, I would probably try to use something that has more of an intention for this type of data and store it with that system. As others have said, it really depends on what you are trying to accomplish, but following standards always helps longevity and reliability!
You can use this gist: https://gist.github.com/AminaG/33d90cb99c26298c48f670b8ffac39c3
After installing it on the server (you just need root privileges, not SUPER), you can do something like this:
select extract_json_value('{"a":["a","2"]}','(/a)')
It will return
a 2
You can return anything inside the JSON by using this.
The good part is that it supports MySQL 5.1, 5.2, 5.6, and you do not need to install any binary on the server.
It is based on the old common_schema project, but it still works today:
https://code.google.com/archive/p/common-schema/