Trying to map JSON to MySQL

These are the two tables I want to end up with:
tableA (I already have data in this table)
id | initials | name
1 | ER | Eric Robinson
2 | DD | David Dobson
tableB (nothing in here yet)
id | tableA_id | nickname
1 | 1 | Rick
2 | 1 | Ricky
3 | 1 | Mr. Bossman
4 | 2 | Dave
5 | 2 | Davey
This is the JSON I have:
[
  {
    "name": "Eric Robinson",
    "initials": "ER",
    "nicknames": ["Rick", "Ricky", "Mr. Bossman"]
  },
  {
    "name": "David Dobson",
    "initials": "DD",
    "nicknames": ["Dave", "Davey"]
  }
]
Inserting into tableA is very easy; you can do it like this with node-mysql:
var mysql = require("mysql");
var connection = mysql.createConnection({ /* connection config */ });
var json = require("./data.json"); // require() already parses .json files
var values = json.map(function (p) { return [p.initials, p.name]; });
var sql = "INSERT INTO tableA (initials, name) VALUES ?";
connection.query(sql, [values], callback); // bulk insert expects an array of row arrays
But as a complete SQL noob, how would I map the data into tableB? After some research I'm not sure if I can do it with something like the following:
INSERT INTO tableB (tableA_id, nickname)
SELECT id
FROM tableA
Or maybe I need to include a left join? The part that confuses me the most is how to work the tableA_id part into the statement. I've tried:
INSERT INTO tableB (tableA_id, nickname)
SELECT id
FROM tableA
WHERE tableB.tableA_id = tableA.id -- this is the part I don't get
This is just an abstracted example. Also, I'm using node-mysql, so when I'm inserting into tableB the re-mapped JSON looks like this:
[
  { "initials": "ER", "nickname": "Rick" },
  { "initials": "ER", "nickname": "Ricky" },
  { "initials": "ER", "nickname": "Mr. Bossman" },
  { "initials": "DD", "nickname": "Dave" },
  { "initials": "DD", "nickname": "Davey" }
]
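One way to handle the tableA_id part (a minimal sketch, assuming the re-mapped array above is in a variable rows and connection is the same node-mysql connection as before) is to let MySQL look the id up from the initials inside an INSERT ... SELECT, one query per nickname:

var sql = "INSERT INTO tableB (tableA_id, nickname) " +
          "SELECT id, ? FROM tableA WHERE initials = ?";
rows.forEach(function (row) {
  connection.query(sql, [row.nickname, row.initials], callback);
});

The WHERE clause on tableA is what ties each nickname to its tableA_id; no join is needed, because tableB only appears as the target of the insert, not as a source in the query.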

Related

Expanding a record with unknown keys in Power Query

I am working with a nested JSON file. The issue is that the keys of the nested JSON are dates, and they are not known beforehand. Therefore I am unable to apply the Table.ExpandRecordColumn method to it.
Each row has a unique refId and looks like this
{
  "refId": "XYZ",
  "snapshotIndexes": {
    "19-07-2021": {
      "url": "abc1",
      "value": "123"
    },
    "20-07-2021": {
      "url": "abc2",
      "value": "567"
    }
  }
}
I finally want a table with these columns,
refid | date | url | value
XYZ | 19-7-2021 | abc1 | 123
XYZ | 20-7-2021 | abc2 | 567
PQR | 7-5-2021 | srt | 999
In the new table, refId and date will together make a unique entry.
(Power BI screenshot omitted: the snapshotIndexes column appears as a column of Record values.)
I was able to solve it by using Record.ToTable on each row to convert the record to a table, then applying Table.ExpandTableColumn:
let
    Source = DocumentDB.Contents("sourceurl"),
    Collections = Source{[id="dbid"]}[Collections],
    SourceTable = Collections{[db_id="dbid", id="PartnerOfferSnapshots"]}[Documents],
    ExpandedDocument = Table.ExpandRecordColumn(SourceTable, "Document", {"refId", "snapshotIndexes"}, {"Document.refId", "Document.snapshotIndexes"}),
    TransformColumns = Table.TransformColumns(ExpandedDocument, {"Document.snapshotIndexes", each Table.ExpandRecordColumn(Record.ToTable(_), "Value", {"url", "id", "images"}, {"url", "id", "images"})}),
    ExpandedTable = Table.ExpandTableColumn(TransformColumns, "Document.snapshotIndexes", {"Name", "url", "id", "images"}, {"Document.dates", "Document.url", "Document.id", "Document.images"})
in
    ExpandedTable
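To see the core trick in isolation, here is a minimal sketch with a hand-built record standing in for one snapshotIndexes value: Record.ToTable turns the record into a Name/Value table, where Name carries the date keys, and Table.ExpandRecordColumn then flattens the inner records.

let
    snapshot = [ #"19-07-2021" = [url = "abc1", value = "123"],
                 #"20-07-2021" = [url = "abc2", value = "567"] ],
    asTable = Record.ToTable(snapshot)  // columns: Name (the date key), Value (the inner record)
in
    Table.ExpandRecordColumn(asTable, "Value", {"url", "value"})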

Karate API framework: how to match the response values with the table columns?

I have the API response sample below:
{
  "items": [
    {
      "id": 11,
      "name": "SMITH",
      "prefix": "SAM",
      "code": "SSO"
    },
    {
      "id": 10,
      "name": "James",
      "prefix": "JAM",
      "code": "BBC"
    }
  ]
}
As per the above response, my test says that whenever I hit the API, ID 11 should be SMITH and ID 10 should be James.
So I thought to store this in a table and assert it against the actual response:
* table person
| id | name |
| 11 | SMITH |
| 10 | James |
| 9 | RIO |
Now how would I match them one by one? Like, first it parses the first ID and first name from the API response and matches them against the table's first ID and first name.
Please share any convenient way of doing this in Karate.
There are a few possible ways, here is one:
* def lookup = { 11: 'SMITH', 10: 'James' }
* def items =
"""
[
  {
    "id": 11,
    "name": "SMITH",
    "prefix": "SAM",
    "code": "SSO"
  },
  {
    "id": 10,
    "name": "James",
    "prefix": "JAM",
    "code": "BBC"
  }
]
"""
* match each items contains { name: "#(lookup[_$.id+''])" }
And you already know how to use a table instead of JSON; a sketch of that conversion follows below.
Please read the docs and other Stack Overflow answers to get more ideas.
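For completeness, one hedged way to drive the same match from the table in the question (a sketch: the lookup-building step via karate.forEach is an assumption layered on the answer above, and table string values are quoted per Karate's table syntax):

* table person
  | id | name    |
  | 11 | 'SMITH' |
  | 10 | 'James' |
* def lookup = {}
* eval karate.forEach(person, function(row){ lookup[row.id + ''] = row.name })
* match each items contains { name: "#(lookup[_$.id+''])" }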

SQL query with join to get nested array of objects

Summary: I'll start with a JSON schema to describe the expectation. Notice the roles key with a nested array of objects; I'm looking for a "smart query" that can fetch it in one single query.
{
  "id": 1,
  "first": "John",
  "roles": [ // Expectation -> array of objects
    {
      "id": 1,
      "name": "admin"
    },
    {
      "id": 2,
      "name": "accounts"
    }
  ]
}
user
+----+-------+
| id | first |
+----+-------+
| 1 | John |
| 2 | Jane |
+----+-------+
role
+----+----------+
| id | name |
+----+----------+
| 1 | admin |
| 2 | accounts |
| 3 | sales |
+----+----------+
user_role
+---------+---------+
| user_id | role_id |
+---------+---------+
| 1 | 1 |
| 1 | 2 |
| 2 | 2 |
| 2 | 3 |
+---------+---------+
Attempt 01
In a naive approach I'd run two SQL queries in my Node.js code, with the help of multipleStatements: true in the connection string.
User.getUser = function(id) {
const sql = "SELECT id, first FROM user WHERE id = ?; \
SELECT role_id AS id, role.name from user_role \
INNER JOIN role ON user_role.role_id = role.id WHERE user_id = ?";
db.query(sql, [id, id], function(error, result){
const data = result[0][0]; // first query result
data.roles = result[1]; // second query result, join in code.
console.log(data);
});
};
Problem: The above code produces the expected JSON schema, but it takes two queries. I was able to narrow it down to the smallest possible unit of code thanks to multiple statements, but I don't have that luxury in other languages like Java or C#, where I'd have to create two functions and two SQL queries. So I'm looking for a single-query solution.
Attempt 02
In an earlier attempt, with the help of the SO community, I was able to get close using the following single query, but it can only produce an array of strings (not an array of objects).
User.getUser = function(id) {
const sql = "SELECT user.id, user.first, GROUP_CONCAT(role.name) AS roles FROM user \
INNER JOIN user_role ON user.id = user_role.user_id \
INNER JOIN role ON user_role.role_id = role.id \
WHERE user.id = ? \
GROUP BY user.id";
db.query(sql, id, function (error, result) {
const data = {
id: result[0].id, first: result[0].first,
roles: result[0].roles.split(",") // manual split to create array
};
console.log(data);
});
};
Attempt 02 Result
{
"id": 1,
"first": "John",
"roles": [ // array of string
"admin",
"accounts"
]
}
Producing an array of objects is such a common requirement that I suspect there must be something in SQL I'm not aware of. Is there a way to achieve this with a single, optimal query?
Or let me know that there's no such solution, that this is it, and that this is how it's done in production code out there: with two queries.
Attempt 03
Use role.id instead of role.name, as in GROUP_CONCAT(role.id); that way you can get hold of the ids and then use another subquery to fetch the associated role names. Just thinking...
SQL (doesn't work but just to throw something out there for some thought)
SELECT
user.id, user.first,
GROUP_CONCAT(role.id) AS role_ids,
(SELECT id, name FROM role WHERE id IN role_ids) AS roles
FROM user
INNER JOIN user_role ON user.id = user_role.user_id
INNER JOIN role ON user_role.role_id = role.id
WHERE user.id = 1
GROUP BY user.id;
Edit
Based on Amit's answer, I've learned that there is such a solution in SQL Server using FOR JSON AUTO. Yes, this is something I'm looking for in MySQL.
To articulate precisely:
When you join tables, columns in the first table are generated as
properties of the root object. Columns in the second table are
generated as properties of a nested object.
Use this join query; FOR JSON AUTO will return JSON for your query result:
SELECT U.UserID, U.Name, Roles.RoleID, Roles.RoleName
FROM [dbo].[User] as U
INNER JOIN [dbo].UserRole as UR ON UR.UserID=U.UserID
INNER JOIN [dbo].RoleMaster as Roles ON Roles.RoleID=UR.RoleMasterID
FOR JSON AUTO
The output of the above query is:
[
  {
    "UserID": 1,
    "Name": "XYZ",
    "Roles": [
      { "RoleID": 1, "RoleName": "Admin" }
    ]
  },
  {
    "UserID": 2,
    "Name": "PQR",
    "Roles": [
      { "RoleID": 1, "RoleName": "Admin" },
      { "RoleID": 2, "RoleName": "User" }
    ]
  },
  {
    "UserID": 3,
    "Name": "ABC",
    "Roles": [
      { "RoleID": 1, "RoleName": "Admin" }
    ]
  }
]
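For MySQL itself, a close equivalent is possible on 5.7.22+ with JSON_ARRAYAGG and JSON_OBJECT. A minimal sketch against the user/role/user_role tables from the question (the functions and join shape are standard, but treat it as untested):

SELECT u.id,
       u.first,
       JSON_ARRAYAGG(JSON_OBJECT('id', r.id, 'name', r.name)) AS roles
FROM user u
INNER JOIN user_role ur ON ur.user_id = u.id
INNER JOIN role r ON r.id = ur.role_id
WHERE u.id = 1
GROUP BY u.id, u.first;

The roles column comes back as a JSON string such as [{"id": 1, "name": "admin"}, {"id": 2, "name": "accounts"}], so one JSON.parse on that column in the Node.js callback yields the nested array of objects in a single query.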
Though it is an old question, I just thought it might help others looking into the same issue. The script below returns one row per user-role pair by joining through the linking table (it must go through user_role; joining user.id to role.id directly would be incorrect), which the JSON aggregation shown above can then fold into the schema you have been looking for:
SELECT user.id, user.first, role.id AS role_id, role.name AS role_name
FROM user
INNER JOIN user_role ON user.id = user_role.user_id
INNER JOIN role ON user_role.role_id = role.id;

Postgres 9.4: Include sibling column in jsonb array on SELECT

If I have a table like this:
office_id int
employees jsonb
and the data looks something like this:
1
[{ "name" : "John" }, { "name" : "Jane" }]
Is there an easy way to query so that the results look like this:
office_id,employees
1,[{ "name" : "John", "office_id" : 1 }, { "name" : "Jane", "office_id" : 1 }]
For example data, check out this sqlfiddle: http://sqlfiddle.com/#!15/ac37b/1/0
The results should actually look like this:
id employees
1 [{ "name" : "John", "office_id" : 1 }, { "name" : "Jane", "office_id" : 1 }]
2 [{ "name" : "Jamal", "office_id" : 1 }]
I've been reading through the json functions and it seems like it's possible, but I can't seem to figure it out. I would rather not have to store the office_id on each nested object.
Note: This is similar to my other question about jsonb arrays, but the desired output is different.
I'm not sure if you are selecting from a Postgres table or a json object table. Doing a normal query and converting it to json can be done with json_agg().
Here is a normal query:
ao_db=# SELECT * FROM record.instance;
id | created_by | created_on | modified_by | modified_on
--------------------------------------+------------+-------------------------------+-------------+-------------------------------
18d8ca56-87b6-11e5-9c15-48d22415d991 | sysop | 2015-11-10 23:19:47.181026+09 | sysop | 2015-11-10 23:19:47.181026+09
190a0e86-87b6-11e5-9c15-48d22415d991 | sysop | 2015-11-10 23:19:47.56517+09 | sysop | 2015-11-10 23:19:47.56517+09
57611c9c-87b6-11e5-8c4b-48d22415d991 | admin | 2015-11-10 23:21:32.399775+09 | admin | 2015-11-10 23:22:27.975266+09
(3 rows)
Here is the same query passed through json_agg():
ao_db=# WITH j AS (SELECT * FROM record.instance) SELECT json_agg(j) FROM j;
json_agg
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{"id":"18d8ca56-87b6-11e5-9c15-48d22415d991","created_by":"sysop","created_on":"2015-11-10T23:19:47.181026+09:00","modified_by":"sysop","modified_on":"2015-11-10T23:19:47.181026+09:00"}, +
{"id":"190a0e86-87b6-11e5-9c15-48d22415d991","created_by":"sysop","created_on":"2015-11-10T23:19:47.56517+09:00","modified_by":"sysop","modified_on":"2015-11-10T23:19:47.56517+09:00"}, +
{"id":"57611c9c-87b6-11e5-8c4b-48d22415d991","created_by":"admin","created_on":"2015-11-10T23:21:32.399775+09:00","modified_by":"admin","modified_on":"2015-11-10T23:22:27.975266+09:00"}]

How to Simulate subquery in MongoDB query condition

Let's suppose that I have a product logs collection; all changes made to my products are recorded in this collection, e.g.:
+------------------------------+
| productId - status - comment |
| 1 0 .... |
| 2 0 .... |
| 1 1 .... |
| 2 1 .... |
| 1 2 .... |
| 3 0 .... |
+------------------------------+
I want to get all products whose status is 1 but has never become 2. In SQL the query would look something like:
select productId from productLog as PL1
where
status = 1
and productId not in (
select productId from productLog as PL2 where
PL1.productId = PL2.productId and PL2.status = 2
)
group by productId
I'm using native PHP MongoDB driver.
Well, since the logic of the subquery join here is simply that exactly the same key matches the other:
Setup
db.status.insert([
{ "productId": 1, "status": 0 },
{ "productId": 2, "status": 0 },
{ "productId": 1, "status": 1 },
{ "productId": 2, "status": 1 },
{ "productId": 1, "status": 2 },
{ "productId": 3, "status": 0 }
])
Then use .aggregate():
db.status.aggregate([
{ "$match": {
"status": { "$ne": 2 }
}},
{ "$group": {
"_id": "$productId"
}}
])
Or using mapReduce:
db.status.mapReduce(
    function() {
        // emit only records that are not the "became 2" entries
        if ( this.status != 2 ) {
            emit( this.productId, null );
        }
    },
    function(key, values) {
        return null;
    },
    { "out": { "inline": 1 } }
);
But again the SQL here was as simple as:
select productId
from productLog
where status <> 2
group by productId
Without the superfluous join on exactly the same key value
The mongo query above doesn't meet the requirements in the question: its result includes documents with productId = 1, while the result of the SQL in the question doesn't, because the sample data contains one record with status = 2 whose productId is 1.
So, assuming db.productLog.insert executed as stated above, you can use the code below to get the results:
// First: subquery collecting the productIds that have a record with status = 2:
var productsWithStatus2 = db.productLog.find({ "status": 2 }).map(function(rec) { return rec.productId; });
// Second: final query to get the productIds for which no record with status = 2 and the same productId exists:
db.productLog.aggregate([ { "$match": { productId: { $nin: productsWithStatus2 } } }, { "$group": { "_id": "$productId" } } ]);
// Alternative for the second query:
// db.productLog.distinct("productId", { productId: { $nin: productsWithStatus2 } });
// Alternative for the second query, returning full product and status detail:
// db.productLog.find({ productId: { $nin: productsWithStatus2 } });
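The two round trips can also be collapsed into a single aggregation pipeline (a sketch against the same sample data; the $match mirrors the SQL's "status 1 but never 2" condition):

db.productLog.aggregate([
    // collect the distinct statuses each product has ever had
    { "$group": { "_id": "$productId", "statuses": { "$addToSet": "$status" } } },
    // keep products whose status set contains 1 but not 2
    { "$match": { "statuses": { "$all": [1], "$nin": [2] } } }
])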