MySQL JSON column array value query

I have a table that contains a JSON-type column called history, and the structure is this:
[
{
"admin_id": "1",
"process_time": "2017-6-6 14:14:14"
},
{
"admin_id": "2",
"process_time": "2017-6-6 14:14:14"
}
]
Each record's history column may contain multiple elements in the array. Now I want to build a query that selects records which have a specific id in the history array. For example, I want to select all records whose history array contains an admin_id equal to 1. I don't know how to write this query; can someone help me? Thanks.
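A minimal sketch, assuming MySQL 5.7 or later and a hypothetical table name: JSON_CONTAINS checks whether the candidate document is contained in some element of the array (note that admin_id is stored as a string in your data):
SELECT *
FROM your_table
WHERE JSON_CONTAINS(history, '{"admin_id": "1"}');
An equivalent alternative is JSON_SEARCH: WHERE JSON_SEARCH(history, 'one', '1', NULL, '$[*].admin_id') IS NOT NULL.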

Related

ADF dataflow and columns/rows in separate array in JSON

I have a bunch of JSON files which have an array with column names and a separate array for the rows.
I want a dynamic way of retrieving the column names and merging them with the rows for each JSON file.
I've been playing around with derived columns and column patterns, but I'm struggling to get it working.
I want the column names from [data.columns.shortText] and the values from each corresponding [data.rows.value], matched by position.
Example format
{
"messages":{
},
"data":{
"columns":[
{
"columnName":"SelectionCriteria1",
"shortText":"Case no."
},
{
"columnName":"SelectionCriteria2",
"shortText":"Period for periodical values",
},
{
"columnName":"SelectionCriteria3",
"shortText":"Location"
},
{
"columnName":"SelectionCriteriaAggregate",
"shortText":"Value"
}
],
"rows":[
[
{
"value":"23523"
},
{
"value":12342349
},
{
"value":"234234",
"code":3342
},
{
"value":234234234
}
]
]
}
}
First, you need to fix your JSON data: there is a trailing comma in the second object of columns, and in rows the value field appears both as an int and as a string, so when I tried to parse it in ADF I got an error.
I don't quite understand why you are merging by position, because there are normally more rows than columns; if you get 5 rows and 3 columns you will get an error.
Here is my approach to your problem:
The main idea is to add an index column to both arrays and join the JSONs with an inner join.
Created a source dataset (I used two, but you can make it one to simplify your data flow).
Added a Select activity to pick the relevant arrays from the data.
Flattened the arrays (in order to add an index column).
Added an index using a Rank activity (please read up on rank vs. dense rank and the difference between the two).
Added a Join activity: an inner join on the index column.
Added a Select activity to remove the index column from the result.
Saved the output to the sink.
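For illustration only (this is not ADF code), the same index-and-join idea can be sketched in plain SQL. A PostgreSQL version with hypothetical table and column names (reports, doc), pairing the columns array with the first rows array by position:
SELECT c.col ->> 'shortText' AS column_name,
       r.cell ->> 'value'    AS value
FROM reports t,
     jsonb_array_elements(t.doc -> 'data' -> 'columns')
         WITH ORDINALITY AS c(col, idx),
     jsonb_array_elements(t.doc -> 'data' -> 'rows' -> 0)
         WITH ORDINALITY AS r(cell, idx)
WHERE c.idx = r.idx;  -- the "inner join by index column"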
(The original answer included screenshots of the sample JSON data, the overall data flow, and the SelectRows, Flatten, Rank, and Join activities; they are omitted here.)
Please check these links:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-expressions-usage#mapAssociation
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-map-functions

Postgresql - Count of elements in nested JSON blob

I have a Postgres statement that extracts/iterates over a JSON blob in the value column of a table. I am able to get a count one level deep using the query below, but I can't count any deeper. I was using:
select jsonb_array_length(value -> 'team') as team_count
This returns the proper count, but I can't seem to leverage it to count the names under each team.
In a perfect world I would like my results to be 4 lines like this (a title and a matching count of names):
Product Owner, 2
Technical Product Manager, 2
Data Modeler, 0
Engineer, 0
How would I go about amending this query to give me the count of names under each team? I tried all sorts of things, but nothing got me close.
Sample Json is below.
"team":[
{
"title":"Product Owner",
"names":[
"John Smith",
"Jane Doe"
]
},
{
"title":"Technical Project Manager",
"names":[
"Fred Flintstone",
"Barney Rubble"
]
},
{
"title":"Data Modeler"
},
{
"title":"Engineer"
}
]
You seem to be looking for
SELECT
role ->> 'title' AS team_role,
coalesce(jsonb_array_length(role -> 'names'), 0) AS member_count
FROM jsonb_array_elements(value -> 'team') AS team(role)
Join this against your table so that the value column is in scope. The ->> operator returns the title as text rather than jsonb, and coalesce turns the NULL you would otherwise get for teams without a names array (Data Modeler, Engineer) into the 0 you asked for.

Filtering on JSON internal keys stored in PostgreSQL table

I have a report in JSON format stored in a field in a PostgreSQL database table.
Say the (simplified) table format is:
Column | Type
-------------------+----------------------------
id | integer
element_id | character varying(256)
report | json
and the structure of the data in the reports is like this
{
"section1":
"test1": {
"outcome": "nominal",
"results": {
"value1": 34.,
"value2": 56.
}
},
"test2": {
"outcome": "warning",
"results": {
"avg": 4.5,
"std": 21.
}
}
},
...
"sectionN": {
...
}
}
That is, there are N keys at the first level (the sections), each of them being an object with a set of keys (the tests), each with an outcome and a variable set of results in the form of (key, value) pairs.
I need to filter based on internal JSON keys. More specifically, in this example, I want to know whether it is possible, using SQL alone, to obtain the elements that have, say, an std value in their results above a certain threshold, e.g. 10. I may know that std lives in test2, but I do not know a priori in which section. With this filter (test2.std > 10), for example, the record with the sample data shown above would be returned, since the std variable in its test2 has the value 21.0 (> 10).
Another, simpler, filter could be to request all the records for which the test2.outcome is not nominal.
One way is jsonb_each (the report column in your table is json, so cast it to jsonb first, or use json_each), like:
select section.key
, test.key
from t1
cross join
jsonb_each(t1.col1) section
cross join
jsonb_each(section.value) test
where (test.value->'results'->>'std')::numeric > 10
(Casting to numeric rather than int avoids failures on fractional values such as 4.5.)
Example at SQL Fiddle.
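The simpler filter from the question (all records whose test2.outcome is not nominal) can be written the same way. A sketch, assuming the same t1(col1) layout as above and an id column to identify records:
select distinct t1.id
from t1
cross join
jsonb_each(t1.col1) section
where section.value -> 'test2' ->> 'outcome' <> 'nominal'
The distinct collapses duplicates if several sections match; sections without a test2 key yield NULL, which the <> comparison filters out.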

Select JSON array's fields from SQL view to create a column

I have a SQL table in which one of the columns contains a JSON array in the following format:
[
{
"id":"1",
"translation":"something here",
"value":"value of something here"
},
{
"id":"2",
"translation":"something else here",
"value":"value of something else here"
},
..
..
..
]
Is there any way to use a SQL query to retrieve columns with the id as the header and the "value" as the value of the column, instead of returning only one column with the JSON array?
For example, if I run:
SELECT column_with_json FROM myTable
it will return the above array, whereas I want to return:
1,2
value of something here, value of something else here
You can't use SQL to retrieve columns from the JSON stored inside the table: to the database engine the JSON is just unstructured text saved in a text field.
Some relational databases, like PostgreSQL, have a JSON type and functions to support JSON queries. If this is your case, you should be able to perform the query you want.
Check this for an example of how it works with PostgreSQL:
http://clarkdave.net/2013/06/what-can-you-do-with-postgresql-and-json/
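For example, in PostgreSQL you could unpack the array into one row per element. A sketch, reusing the myTable and column_with_json names from the question and assuming the column holds valid JSON text:
SELECT elem ->> 'id'    AS id,
       elem ->> 'value' AS value
FROM myTable,
     jsonb_array_elements(column_with_json::jsonb) AS elem;
Turning the ids into actual column headers (a pivot) would additionally require something like crosstab from the tablefunc extension, or application-side code.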

How to enter multiple table data in mongoDB using json

I am trying to learn MongoDB. Suppose there are two tables and they are related, for example like this:
1st table has
First name- Fred, last name- Zhang, age- 20, id- s1234
2nd table has
id- s1234, course- COSC2406, semester- 1
id- s1234, course- COSC1127, semester- 1
id- s1234, course- COSC2110, semester- 1
How do I insert this data into MongoDB? I wrote it like this, but I am not sure whether it is correct:
db.users.insert({
given_name: 'Fred',
family_name: 'Zhang',
Age: 20,
student_number: 's1234',
Course: ['COSC2406', 'COSC1127', 'COSC2110'],
Semester: 1
});
Thank you in advance
This assumes that what you want to model has the "student_number" and the "Semester" as what is basically a unique identifier for the entries. There is, however, a way to do this without accumulating the array contents in code.
You can make use of the upsert functionality in the .update() method, with the help of a few other operators in the statement.
I am going to assume you are doing this inside a loop of sorts, so every value on the right-hand side is actually a variable:
db.users.update(
{
"student_number": student_number,
"Semester": semester
},
{
"$setOnInsert": {
"given_name": given_name,
"family_name": family_name,
"Age": age
},
"$addToSet": { "courses": course }
},
{ "upsert": true }
)
What this does in an "upsert" operation is first look for a document in your collection that matches the given query criteria: in this case, a "student_number" with the current "Semester" value.
When that match is found, the document is merely "updated". What is being done here is using the $addToSet operator to "update" only unique values into the "courses" array element. Having unique courses would seem to make sense, but if that is not your case then you can simply use the $push operator instead. Either way, that is the operation you want to happen every time, whether the document was "matched" or not.
In the case where no "matching" document is found, a new document will then be inserted into the collection. This is where the $setOnInsert operator comes in.
The point of that section is that it is only applied when a new document is created, since there is no need to update those fields with the same information every time. In addition, the fields you specified in the query criteria have explicit values, so the behavior of the "upsert" is to automatically create those fields, with those values, in the newly created document.
After a new document is created, the next "upsert" statement that uses the same criteria will of course only "update" the now-existing document, and as such only your new course information will be added.
Overall, working like this allows you to "pre-join" the two tables from your source with an appropriate query. Then you just loop over the results without writing code to group the correct entries together, letting MongoDB do the accumulation work for you.
Of course, you can always write the code to do this yourself, and that would result in fewer "trips" to the database to insert your already-accumulated records, if that suits your needs.
As a final note, though it does require some additional complexity, you can get better performance out of the operation by using the newly introduced "batch updates" functionality. For this your MongoDB server version will need to be 2.6 or higher. It is one way of keeping the logic simple while making fewer actual "over the wire" writes to the database.
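A sketch of that batched form using the mongo shell's Bulk API (available from MongoDB 2.6; the literal values here stand in for the loop variables above):
var bulk = db.users.initializeUnorderedBulkOp();

// Queue one upsert per (student, course) row from the source data
bulk.find({ "student_number": "s1234", "Semester": 1 }).upsert().updateOne({
    "$setOnInsert": {
        "given_name": "Fred",
        "family_name": "Zhang",
        "Age": 20
    },
    "$addToSet": { "courses": "COSC2406" }
});
// ...queue the remaining courses the same way...

// A single execute() sends all queued operations in one batch
bulk.execute();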
You can either have two separate collections (one with student details and the other with courses) and link them with "id",
or you can have a single document with the courses as an inner array of documents, as below:
{
"FirstName": "Fred",
"LastName": "Zhang",
"age": 20,
"id": "s1234",
"Courses": [
{
"courseId": "COSC2406",
"semester": 1
},
{
"courseId": "COSC1127",
"semester": 1
},
{
"courseId": "COSC2110",
"semester": 1
},
{
"courseId": "COSC2110",
"semester": 2
}
]
}