Get specific data from MySQL using PHP

I have two tables as follows
user table
user_id | name
1       | zia
2       | john
3       | raza
subject table
data_id | user_id | subject
1       | 1       | Math
2       | 1       | Chem
3       | 1       | Bio
4       | 2       | Math
5       | 2       | Phy
When I query the data I am getting results like this:
[
{
"user_id": "1",
"name": "zia",
"subject": [
"Math",
"Chem",
"Bio"
]
},
{
"user_id": "2",
"name": "john",
"subject": [
"Math",
"Phy"
]
},
]
My query is as follows
SELECT
users.user_id,
users.name,
GROUP_CONCAT(subjects.subject) sub
FROM
`users`
INNER JOIN subjects ON users.user_id = subjects.user_id
GROUP BY
subjects.user_id;
But actually I want to get the data in the following way:
even if an entry from the user table does not have corresponding entries in the subject table, we must still have the user name and user id in our results, as follows:
[
{
"user_id": "1",
"name": "zia",
"subject": [
"Math",
"Chem",
"Bio"
]
},
{
"user_id": "2",
"name": "john",
"subject": [
"Math",
"Phy"
]
},
{
"user_id": "3",
"name": "Raza",
}
]
Here, as you can see, all the entries from the user table are shown along with the matching subject entries from the subject table, and user table entries without matching subjects still show up, unaffected.
Please help me in solving this issue.

You probably need to use LEFT JOIN instead of INNER JOIN, because you want to retrieve users without subjects too.
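A minimal sketch of the adjusted query, keeping the tables and columns from the question; note that it also groups by users.user_id rather than subjects.user_id, since with a LEFT JOIN the latter is NULL for users without subjects:
SELECT
    users.user_id,
    users.name,
    GROUP_CONCAT(subjects.subject) sub
FROM
    `users`
LEFT JOIN subjects ON users.user_id = subjects.user_id
GROUP BY
    users.user_id;
Users with no subjects come back with sub set to NULL, so the application side can simply skip building the "subject" array for them.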

Related

Create json object from tree structure in Postgresql

I have a nested tree data structure of groups and layers that needs to be generated from a couple of joined database tables and then be stored as JSON in a PostgreSQL (version 12.8) database. Groups can contain layer objects, but also (sub)groups. The desired JSON output (generated via SQL and/or a function) would look as shown below.
[{
"title": "Folder 1",
"type": "group",
"folded": false,
"layers": [{
"title": "Layer 1",
"type": "WMS",
"visible": true
},
{
"title": "Folder 2",
"type": "group",
"folded": true,
"layers": [{
"title": "Layer 2",
"type": "WMS",
"visible": true
},
{
"title": "Layer 3",
"type": "WMS",
"visible": true
},
{
"title": "Folder 3",
"type": "group",
"folded": true,
"layers": [{
"title": "Layer 4",
"type": "WMS",
"visible": false
},
{
"title": "Layer 5",
"type": "WMS",
"visible": true
}
]
}
]
}
]
}]
These are the database tables and sample data:
CREATE TABLE IF NOT EXISTS jtest.folders
(
id serial,
item_id integer,
title text,
folded boolean NOT NULL
);
CREATE TABLE IF NOT EXISTS jtest.layers
(
id serial,
item_id integer,
title text,
type text,
visible boolean
);
CREATE TABLE IF NOT EXISTS jtest."connect"
(
id serial,
item_id integer,
child_id integer,
layer boolean DEFAULT true
);
--data
INSERT INTO jtest.folders(id, item_id, title, folded) VALUES
(1,1,'Folder 1',false),(2,2,'Folder 2', true),(3,3,'Folder 3', true);
INSERT INTO jtest.layers(item_id,id,title,type,visible) VALUES
(1,4,'Layer 1','WMS',true),(2,5,'Layer 2','WMS',true),(3,6,'Layer 3','WMS',true),(4,7,'Layer 4','WMS',true),(5,8,'Layer 5','WMS',true);
INSERT INTO jtest.connect (item_id, child_id, layer) VALUES
(1,4,true),(1,2,false),(2,5,true),(2,6,true),(2,3,false),(3,7,true),(3,8,true);
I partially succeeded in generating JSON output for a simple folder-layer tree using queries like the one below, but could not figure out how to correctly handle folders nested inside a list of layers (i.e. sub-folders).
SELECT jsonb_agg(sub)
from (
SELECT f.id, f.folded,f.item_id,f.title, to_jsonb(array_agg(l.*)) as layers
FROM jtest.folders f
JOIN jtest.connect c ON f.item_id = c.item_id
JOIN jtest.layers l ON l.item_id = c.child_id
where c.layer is true
GROUP BY f.id,f.folded,f.item_id,f.title
ORDER BY f.item_id ) sub;
Any ideas or examples on how to solve this?
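One way to handle arbitrarily nested sub-folders is a recursive function that builds a single folder and calls itself for every child folder. Below is a rough, untested sketch against the tables above; it reuses the join from the partial query (connect.child_id pointing at layers.item_id for layer children and at folders.item_id for folder children), so adjust the join columns if your real keys differ:
CREATE OR REPLACE FUNCTION jtest.folder_tree(_item_id integer)
RETURNS jsonb
LANGUAGE plpgsql STABLE AS $$
DECLARE
    result jsonb;
BEGIN
    SELECT jsonb_build_object(
               'title',  f.title,
               'type',   'group',
               'folded', f.folded,
               'layers',
               (SELECT jsonb_agg(
                           CASE WHEN c.layer
                                -- layer child: emit the layer row minus its id columns
                                THEN to_jsonb(l) - 'id' - 'item_id'
                                -- folder child: recurse into the sub-folder
                                ELSE jtest.folder_tree(c.child_id)
                           END
                           ORDER BY c.id)
                  FROM jtest.connect c
                  LEFT JOIN jtest.layers l
                         ON c.layer AND l.item_id = c.child_id
                 WHERE c.item_id = f.item_id))
      INTO result
      FROM jtest.folders f
     WHERE f.item_id = _item_id;

    RETURN result;
END;
$$;
-- top level: folders that never appear as a sub-folder of another folder
SELECT jsonb_pretty(jsonb_agg(jtest.folder_tree(f.item_id)))
FROM jtest.folders f
WHERE NOT EXISTS (
    SELECT 1 FROM jtest.connect c
    WHERE NOT c.layer AND c.child_id = f.item_id);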

Postgresql join on jsonb array

I'm new to JSONB and I am wondering if the following would be possible with a single query:
I have a lot of tables that look like this:
ID (INT) | members (JSONB)
All the tables have only one row.
Example for 2 tables:
table1:
id: 1
data:
[
{
"computer": "12.12.12.12",
"tag": "dog"
},
{
"computer": "1.1.1.1",
"tag": "cat"
},
{
"computer": "2.2.2.2",
"tag": "cow"
}
]
table2:
id: 1
data:
[
{
"IP address": "12.12.12.12",
"name": "Beni",
"address": "Rome"
},
{
"IP address": "1.1.1.1",
"name": "Jone",
"address": "Madrid"
}
]
The result should be rows like this:
computer    | tag | name
12.12.12.12 | dog | Beni
1.1.1.1     | cat | Jone
Thanks!
Convert the JSON arrays into sets of records using the jsonb_to_recordset function and then join them (as if they were relational tables).
with table1 (id,members) as (
values (1,'[{"computer": "12.12.12.12","tag": "dog"},{"computer": "1.1.1.1","tag": "cat"},{"computer": "2.2.2.2","tag": "cow"}]'::jsonb)
), table2 (id,members) as (
values (1,'[{"IP address": "12.12.12.12","name": "Beni", "address": "Rome"},{"IP address": "1.1.1.1","name": "Jone", "address": "Madrid"}]'::jsonb)
)
select t1.computer, t1.tag, t2.name
from jsonb_to_recordset((select members from table1 where id=1)) as t1(computer text,tag text)
join jsonb_to_recordset((select members from table2 where id=1)) as t2("IP address" text,name text)
on t1.computer = t2."IP address"
db fiddle
To get values out of a jsonb array of objects you somehow have to explode them.
Another way, with jsonb_array_elements (here the tables are named members and members2, each with a data column):
with _m as (
select
jsonb_array_elements(members.data) as data
from members
),
_m2 as (
select
jsonb_array_elements(members2.data) as data
from members2
)
select
_m.data->>'computer' as computer,
_m.data->>'tag' as tag,
_m2.data->>'name' as name
from _m
left join _m2 on _m2.data->>'IP address' = _m.data->>'computer'
https://www.db-fiddle.com/f/68iC5TzLKbzkLZ8gFWYiLz/0

group objects by a field and sum another, then produce a CSV report

How can I create a CSV from this JSON? I have:
[
{
"name": "John",
"cash": 5
},
{
"name": "Anna",
"cash": 4
},
{
"name": "Anna",
"cash": 3
},
{
"name": "John",
"cash": 8
}
]
I need to group by name, sum the cash, and write the result to a .csv like:
John,13
Anna,7
Thanks!
jq has group_by as a builtin; use that and do map(.cash) | add to sum the cash values for each group.
group_by(.name)[] | [.[0].name, (map(.cash) | add)] | @csv
Online demo
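For reference, a full invocation might look like this (assuming the array is saved in data.json; -r emits raw CSV lines, group_by sorts by the grouping key so Anna comes first, and @csv double-quotes the names):
jq -r 'group_by(.name)[] | [.[0].name, (map(.cash) | add)] | @csv' data.json
"Anna",7
"John",13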

Flatten nested JSON structure in PostgreSQL

I'm trying to write a Postgres query that will output my json data in a particular format.
JSON data structure
{
user_id: 123,
data: {
skills: {
"skill_1": {
"title": "skill_1",
"rating": 4,
"description": 'description text'
},
"skill_2": {
"title": "skill_2",
"rating": 2,
"description": 'description text'
},
"skill_3": {
"title": "skill_3",
"rating": 5,
"description": 'description text'
},
...
}
}
}
This is how I need the data to be formatted in the end:
[
{
user_id: 123,
skill_1: 4,
skill_2: 2,
skill_3: 5,
...
},
{
user_id: 456,
skill_1: 1,
skill_2: 3,
skill_3: 4,
...
}
]
So far I'm working with a query that looks like this:
SELECT
user_id,
data#>>'{skills, "skill_1", rating}' AS "skill_1",
data#>>'{skills, "skill_2", rating}' AS "skill_2",
data#>>'{skills, "skill_3", rating}' AS "skill_3"
FROM some_table
There has to be a better way to go about writing my query. There are 400+ rows and 70+ skills. My above query is a little crazy. Any guidance or help would be greatly appreciated.
Some things to note:
Users rated themselves on 70+ skills
Each skill object has the same structure
Each user rated themselves on the exact same set of skills
db<>fiddle
I expanded your test data to (note the array around all users):
[{
"user_id": 123,
"data": {
"skills": {
"skill_1": {
"title": "skill_1",
"rating": 4,
"description": "description text"
},
"skill_2": {
"title": "skill_2",
"rating": 2,
"description": "description text"
},
"skill_3": {
"title": "skill_3",
"rating": 5,
"description": "description text"
}
}
}
},
{
"user_id": 456,
"data": {
"skills": {
"skill_1": {
"title": "skill_1",
"rating": 1,
"description": "description text"
},
"skill_2": {
"title": "skill_2",
"rating": 3,
"description": "description text"
},
"skill_3": {
"title": "skill_3",
"rating": 4,
"description": "description text"
}
}
}
}]
The query:
SELECT
jsonb_pretty(jsonb_agg(user_id || skills)) -- E
FROM (
SELECT
json_build_object('user_id', user_id)::jsonb as user_id, -- D
json_object_agg(skill_title, skills -> skill_title -> 'rating')::jsonb as skills
FROM (
SELECT
user_id,
json_object_keys(skills) as skill_title, -- C
skills
FROM (
SELECT
(datasets -> 'user_id')::text as user_id,
datasets -> 'data' -> 'skills' as skills -- B
FROM (
SELECT
json_array_elements(json) as datasets -- A
FROM (
SELECT '/* the JSON data; see db<>fiddle */'::json
)s
)s
)s
)s
GROUP BY user_id
ORDER BY user_id
)s
A Make each array element ({user_id: '42', data: {...}}) its own row
B The first column saves the user_id. The cast to text is necessary for the later GROUP BY, which cannot group json output. The second column extracts the user's skills data
C Extract the skill titles so they can be used as keys in (D.1)
D.1 skills -> skill_title -> 'rating' extracts the rating value from each skill
D.2 json_object_agg aggregates the skill titles and each corresponding rating value into one JSON object, grouped by the user_id
D.3 json_build_object makes the user_id a JSON object again
E.1 user_id || skills merges the two json objects into one
E.2 jsonb_agg aggregates these json objects into an array
E.3 jsonb_pretty makes the result look pretty.
Result:
[{
"skill_1": 4,
"skill_2": 2,
"skill_3": 5,
"user_id": "123"
},
{
"skill_1": 1,
"skill_2": 3,
"skill_3": 4,
"skill_4": 42,
"user_id": "456"
}]
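If the rows live in some_table as in the original query, an alternative that avoids listing every skill by hand is to explode the skills object with jsonb_each. A sketch, assuming the data column is json or jsonb (the cast covers both):
SELECT jsonb_pretty(jsonb_agg(jsonb_build_object('user_id', t.user_id) || sk.skills))
FROM some_table t
CROSS JOIN LATERAL (
    -- one object per user: {"skill_1": 4, "skill_2": 2, ...}
    SELECT jsonb_object_agg(e.key, e.value -> 'rating') AS skills
    FROM jsonb_each((t.data)::jsonb -> 'skills') AS e
) sk;
This scales to any number of skills per user without naming them in the query.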

Is it better to store data in multiple rows or just in one json in MYSQL?

One such example is a question bank for users and tests. There can be a lot of relations that may arise in the future.
When a user attempts questions in a set, the set contains a random number of questions.
So when submitting that set, is it better to store the data of a particular set as JSON or as multiple rows?
Approach 1
QuesTable
id | ques
UserTable
id | username | setinfo
Where setinfo can be stored as JSON for a particular user for any number of sets they create;
when the user creates a set we can append the data to this JSON.
{
"sets": [
{
"set1": [
{
"q1": {
"given_answer": "a",
"some_key1": "some_value1",
"some_key2": "some_value2"
},
"q2": {
"given_answer": "c",
"some_key1": "some_value1",
"some_key2": "some_value2"
},
"q3": {
"given_answer": "b",
"some_key1": "some_value1",
"some_key2": "some_value2"
}
}
]
},
{
"set2": [
{
"q1": {
"given_answer": "a",
"some_key1": "some_value1",
"some_key2": "some_value2"
},
"q2": {
"given_answer": "c",
"some_key1": "some_value1",
"some_key2": "some_value2"
},
"q3": {
"given_answer": "b",
"some_key1": "some_value1",
"some_key2": "some_value2"
}
}
]
}
]
}
Approach 2
It's the same, but we can create another table for the set info and store each set under its own id
QuesTable
id | ques
UserTable
id | username
user_set_table
id | userid | setinfo
Here, every time a user creates a set, a new row is added to user_set_table, linked via the userid FK,
where each setinfo is
[
{
"q1": {
"given_answer": "a",
"some_key1": "some_value1",
"some_key2": "some_value2"
},
"q2": {
"given_answer": "c",
"some_key1": "some_value1",
"some_key2": "some_value2"
},
"q3": {
"given_answer": "b",
"some_key1": "some_value1",
"some_key2": "some_value2"
}
}
]
Approach 3
QuesTable
id | ques
UserTable
id | username
User_Set_Info
id | userid | quesid | given_ans | somekey1 | somekey2
Here the issue is that if a user takes a test that has 100 questions, it will create 100 rows and need 100 insertions, though it can be done in a single query.
Is it a good idea to create that many rows? When is it best to use JSON in a MySQL column and when not?
The questions you should ask yourself are:
- Is the data easy to retrieve?
- Is the data easy to update?
- What is my data's durability if I change the model later?
Relational databases answer these questions with:
- SELECT
- UPDATE
- ALTER and UPDATE
When using JSON storage, you may have trouble altering the data, which drastically reduces the durability of your database and makes it much more difficult to maintain.
When making your decision about data storage, always think CRUD and ACID.
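For instance, with the Approach 3 layout every answer stays individually addressable with plain SQL. A rough sketch, reusing the table and column names from the question (the literal ids are made up for illustration):
-- all answers one user gave, joined back to the question text
SELECT q.ques, usi.given_ans
FROM User_Set_Info usi
JOIN QuesTable q ON q.id = usi.quesid
WHERE usi.userid = 1;
-- correcting a single answer touches one row instead of rewriting a whole JSON document
UPDATE User_Set_Info
SET given_ans = 'd'
WHERE userid = 1 AND quesid = 3;
With the JSON approaches, the same correction means reading, modifying, and rewriting the entire setinfo document (or reaching into it with a path-based function such as JSON_SET).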