Table:

id | filejson
---+------------
 1 | data1.json
 2 | data2.json
Or:

ID | DATA
---+-------
 1 | data1
 3 | data2
Also, how would I query the data in data.json? Is there an alternative way of storing JSON data in SQLite?
Here's something to get you started. More details can be found at https://www.sqlite.org/json1.html.
create table json_example (
  id int primary key,
  json text
);

insert into json_example
values
  (1, '{"name": "jason", "gender": "m"}'),
  (2, '{"name": "catherine", "gender": "f"}');

select id,
       json,
       json_extract(json, '$.name') as name,
       json_extract(json, '$.gender') as gender
from json_example;
id|json |name |gender|
--+------------------------------------+---------+------+
1|{"name": "jason", "gender": "m"} |jason |m |
2|{"name": "catherine", "gender": "f"}|catherine|f |
Related
I have started using MySQL 8 and am trying to update a JSON column in a MySQL table.
My table t1 looks like this:

id      | group  | names
--------+--------+----------------------------------------------------------------------------------------------------------------
1100000 | group1 | [{"name": "name1", "type": "user"}, {"name": "name2", "type": "user"}, {"name": "techDept", "type": "dept"}]
I want to add user3 to group1 and have written the query below:

update t1 set names = JSON_SET(names, "$.name", JSON_ARRAY('user3')) where group = 'group1';

However, the above query is not working.
I suppose you want the result to be:
[{"name": "name1", "type": "user"}, {"name": "name2", "type": "user"}, {"name": "techDept", "type": "dept"}, {"name": "user3", "type": "user"}]
This should work:
UPDATE t1 SET names = JSON_ARRAY_APPEND(names, '$', JSON_OBJECT('name', 'user3', 'type', 'user'))
WHERE `group` = 'group1';
But it's not clear why you are using JSON at all. The normal way to store this data would be to create a second table for group members:
CREATE TABLE group_members (
  member_id INT PRIMARY KEY,
  `group` VARCHAR(10) NOT NULL,
  member_type ENUM('user','dept') NOT NULL DEFAULT 'user',
  name VARCHAR(10) NOT NULL
);
Then store one member per row.
Adding a new member is a plain INSERT:
INSERT INTO group_members
SET `group` = 'group1', name = 'user3';
So much simpler than using JSON!
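And if some consumer still needs the JSON shape, MySQL (5.7.22+) can rebuild it on demand from the normalized rows; a sketch, assuming the group_members table above:

SELECT `group`,
       JSON_ARRAYAGG(JSON_OBJECT('name', name, 'type', member_type)) AS names
FROM group_members
GROUP BY `group`;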
I have a table named mainapp_project_data which has a jsonb column project_user_data.
TABLE

Schema | Name                 | Type  | Owner
-------+----------------------+-------+-------
public | mainapp_project_data | table | admin
select project_user_data from mainapp_project_data;

project_user_data
-----------------------------------------------------------------------------------------------------
[{"name": "john", "age": "21", "gender": "M"}, {"name": "randy", "age": "23", "gender": "M"}]
[{"name": "donald", "age": "31", "gender": "M"}, {"name": "wick", "age": "32", "gender": "M"}]
[{"name": "orton", "age": "18", "gender": "M"}, {"name": "russel", "age": "55", "gender": "M"}]
[{"name": "angelina", "age": "open", "gender": "F"}, {"name": "josep", "age": "21", "gender": "M"}]
(4 rows)
I would like to count the distinct values of the keys gender and age across the JSON arrays.

Output format: [{key: count(repeated_values)}]

Filtering on gender: [{"M":7},{"F":1}]
Filtering on age: [{"21":2},{"23":1},{"31":1}.....]
WITH flat AS (
  SELECT
    kv.key,
    -- make into a JSON object with a single value and count, e.g. '{"M": 7}'
    jsonb_build_object(kv.value, COUNT(*)) AS val_count
  FROM mainapp_project_data AS mpd
    -- flatten the JSON arrays into one object per row
    CROSS JOIN LATERAL jsonb_array_elements(mpd.project_user_data) AS unarrayed(udata)
    -- convert to a long, flat list of key-value pairs
    CROSS JOIN LATERAL jsonb_each_text(unarrayed.udata) AS kv(key, value)
  GROUP BY kv.key, kv.value
)
SELECT
  -- de-duplicated object keys
  flat.key,
  -- aggregation of all values and counts per key
  jsonb_agg(flat.val_count) AS value_counts
FROM flat
GROUP BY flat.key;
Returns
key | value_counts
--------+---------------------------------------------------------------------------------------------------------------------
gender | [{"M": 7}, {"F": 1}]
name | [{"josep": 1}, {"russel": 1}, {"orton": 1}, {"donald": 1}, {"wick": 1}, {"john": 1}, {"randy": 1}, {"angelina": 1}]
age | [{"18": 1}, {"32": 1}, {"21": 2}, {"23": 1}, {"open": 1}, {"31": 1}, {"55": 1}]
This counts every key-value pair instance. If you just want genders and ages, add a WHERE clause before the first GROUP BY clause:

WHERE kv.key IN ('gender', 'age')
Does something like this work for you?

postgres=# select count(*), (foo->'gender')::text as g from (select jsonb_array_elements(project_user_data) as foo from mainapp_project_data) as j group by (foo->'gender')::text;
count | g
-------+-----
7 | "M"
1 | "F"
(2 rows)
postgres=# select count(*), (foo->'age')::text as g from (select jsonb_array_elements(project_user_data) as foo from mainapp_project_data) as j group by (foo->'age')::text;
count | g
-------+--------
2 | "21"
1 | "32"
1 | "open"
1 | "23"
1 | "18"
1 | "55"
1 | "31"
(7 rows)
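If you want exactly the output shape the question asks for (e.g. [{"M": 7}, {"F": 1}]) for one key at a time, the counts can be wrapped in jsonb_build_object and aggregated; a sketch for gender, using the same table:

select jsonb_agg(jsonb_build_object(g, n)) as gender_counts
from (
  select elem->>'gender' as g, count(*) as n
  from mainapp_project_data,
       jsonb_array_elements(project_user_data) as elem
  group by elem->>'gender'
) t;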
I have this MySQL (ver 5.7.14) table:

id | shop_name | json_string
---+-----------+------------------------------------------------------------------------------------------------------------------------------------
 1 | shop_1    | [{"your_number": "2", "player_id": "6789"}, {"your_number": "3", "player_id": "9877"}, {"your_number": "4", "player_id": "132456"}]
 2 | shop_2    | [{"your_number": "2", "player_id": "6789"}, {"your_number": "3", "player_id": "9877"}, {"your_number": "4", "player_id": "132456"}]
How can I update the JSON string based on id and your_number?

For example, I'd like to remove the object with your_number = 3 where id = 2.

Result:

id | shop_name | json_string
---+-----------+-------------------------------------------------------------------------------------------
 2 | shop_2    | [{"your_number": "2", "player_id": "6789"}, {"your_number": "4", "player_id": "132456"}]
thanks!
I tested this and it works:
UPDATE Shops
SET json_string = JSON_REMOVE(
    json_string,
    SUBSTRING_INDEX(
        JSON_UNQUOTE(JSON_SEARCH(json_string, 'one', '3', NULL, '$[*].your_number')),
        '.', 1)
)
WHERE id = 2;
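To see what the nested calls produce, the inner expressions can be selected on their own; a sketch against the sample row (the values in the comments are what the sample data should yield, not verified output):

SELECT
  JSON_SEARCH(json_string, 'one', '3', NULL, '$[*].your_number') AS found_path, -- '"$[1].your_number"'
  SUBSTRING_INDEX(
      JSON_UNQUOTE(JSON_SEARCH(json_string, 'one', '3', NULL, '$[*].your_number')),
      '.', 1) AS array_path                                                     -- '$[1]'
FROM Shops
WHERE id = 2;

JSON_REMOVE(json_string, '$[1]') then drops the second array element, which is the object with your_number = 3.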
I have to comment that this does NOT make JSON seem like a good idea for your application.
If you have a requirement to manipulate individual elements within JSON documents, it is easier to store your data in a normalized set of tables:
CREATE TABLE Shops (
  id INT PRIMARY KEY,
  shop_name VARCHAR(10)
);

CREATE TABLE ShopPlayers (
  shop_id INT NOT NULL,
  your_number INT NOT NULL,
  player_id INT NOT NULL,
  PRIMARY KEY (shop_id, your_number)
);
Now you can remove a player with more straightforward SQL:
DELETE FROM ShopPlayers WHERE shop_id = 2 AND your_number = 3;
I've been watching questions about mysql and json on Stack Overflow for a while, and I have to say that in virtually all cases I've seen, it would be better if the tables were designed in a traditional way, according to rules of normalization. The SQL queries would be easier to write and easier to debug, they would run faster, and the database would store the data more efficiently.
update users as u set -- postgres FTW
  email = u2.email,
  first_name = u2.first_name,
  last_name = u2.last_name
from (values
  (1, 'hollis#weimann.biz', 'Hollis', 'O''Connell'),
  (2, 'robert#duncan.info', 'Robert', 'Duncan')
) as u2(id, email, first_name, last_name)
where u2.id = u.id;
The query above updates multiple rows in one statement and works efficiently as well, but I have the JSON below:
Person:{[id:1,email:"[xyz#abc.com]",first_name:"John",last_name:"Doe"],[id:2,email:"[xyz#abc.com]",first_name:"Robert",last_name:"Duncan"],[id:3,email:"[xyz#abc.com]",first_name:"Ram",last_name:"Das"],[id:4,email:"[xyz#abc.com]",first_name:"Albert",last_name:"Pinto"],[id:5,email:"[xyz#abc.com]",first_name:"Robert",last_name:"Peter"],[id:6,email:"[xyz#abc.com]",first_name:"Christian",last_name:"Lint"],[id:7,email:"[xyz#abc.com]",first_name:"Mike",last_name:"Hussey"],[id:8,email:"[xyz#abc.com]",first_name:"Ralph",last_name:"Hunter"]};
The JSON has 1000 entries that I want to insert into the database using JPA. Currently I insert them by iterating over the list, which makes my code slow. Is there an alternative approach?
Any help will be appreciated.
Here is my Java Code :
public Boolean multiEditPerson(List<PersonList> personList) {
    for (PersonList list : personList) {
        Person personMstr = em.find(Person.class, list.getId());
        personMstr.setFirstName(list.getFirstName());
        personMstr.setLastName(list.getLastName());
        personMstr.setEmail(Arrays.toString(list.getEmail()));
        em.persist(personMstr);
    }
    return Boolean.TRUE;
}
You can do a bulk insert based on the JSON document. You should reformat the document first, as the format shown in the question is strange and impractical.
Full working example:
create table example(id int primary key, email text, last_name text, first_name text);
with jsondata(jdata) as (
values
(
'[
{"id": 1, "email": "[xyz#abc.com]", "first_name": "John", "last_name": "Doe"},
{"id": 2, "email": "[xyz#abc.com]", "first_name": "Robert", "last_name": "Duncan"},
{"id": 3, "email": "[xyz#abc.com]", "first_name": "Ram", "last_name": "Das"},
{"id": 4, "email": "[xyz#abc.com]", "first_name": "Albert", "last_name": "Pinto"},
{"id": 5, "email": "[xyz#abc.com]", "first_name": "Robert", "last_name": "Peter"},
{"id": 6, "email": "[xyz#abc.com]", "first_name": "Christian", "last_name": "Lint"},
{"id": 7, "email": "[xyz#abc.com]", "first_name": "Mike", "last_name": "Hussey"},
{"id": 8, "email": "[xyz#abc.com]", "first_name": "Ralph", "last_name": "Hunter"}
]'::jsonb)
)
insert into example
select (elem->>'id')::int, elem->>'email', elem->>'last_name', elem->>'first_name'
from jsondata,
jsonb_array_elements(jdata) as elem;
The result:
select *
from example
id | email | last_name | first_name
----+---------------+-----------+------------
1 | [xyz#abc.com] | Doe | John
2 | [xyz#abc.com] | Duncan | Robert
3 | [xyz#abc.com] | Das | Ram
4 | [xyz#abc.com] | Pinto | Albert
5 | [xyz#abc.com] | Peter | Robert
6 | [xyz#abc.com] | Lint | Christian
7 | [xyz#abc.com] | Hussey | Mike
8 | [xyz#abc.com] | Hunter | Ralph
(8 rows)
If you want to update the table (instead of insert into it):
with jsondata(jdata) as (
-- values as above
)
update example set
email = elem->>'email',
last_name = elem->>'last_name',
first_name = elem->>'first_name'
from jsondata,
jsonb_array_elements(jdata) as elem
where id = (elem->>'id')::int;
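If the incoming JSON can contain both new and existing ids, the insert and update can be combined into a single upsert (PostgreSQL 9.5+); a sketch, relying on id being the primary key of example:

with jsondata(jdata) as (
    -- values as above
)
insert into example
select (elem->>'id')::int, elem->>'email', elem->>'last_name', elem->>'first_name'
from jsondata,
     jsonb_array_elements(jdata) as elem
on conflict (id) do update
set email = excluded.email,
    last_name = excluded.last_name,
    first_name = excluded.first_name;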
The trick is to do a batch insertion without a commit for each record. If this is a one-time job, it is better to process it on the PostgreSQL side: insert the JSON entries into the database all at once using an unlogged table, then update your main table.
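A minimal sketch of that staging approach (person_staging and the load step are assumptions, not code from the question):

-- an unlogged staging table skips WAL, so the bulk load is cheap
create unlogged table person_staging (like person);
-- bulk-load the parsed JSON entries into person_staging in one statement, then:
update person p
set first_name = s.first_name,
    email      = s.email,
    last_name  = s.last_name
from person_staging s
where p.id = s.id;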
Here is an example from the documentation that turns a JSON object into rows:

select * from json_each('{"a":"foo", "b":"bar"}');

If this is not a one-time job, you need to create a batch insert in your Java code: do not process one person at a time, but the whole list of persons.
Thanks for the replies, everyone!
I used the query below to update records using JSON directly in the database.
UPDATE person p
SET (first_name, email, last_name) =
    (COALESCE(ab.first_name, p.first_name),
     COALESCE(ab.email, p.email),
     COALESCE(ab.last_name, p.last_name))
FROM (
    select *
    from json_populate_recordset(null::person,
        '[{"id":1,"first_name":"Robert","email":"robert.stark#xyz.com","last_name":"Stark"},
          {"id":2,"first_name":"John","email":"John.Doe#xyz.com","last_name":"Doe"}]')
) ab
WHERE p.id = ab.id;
I am using Postgres 9.5 and I have a table like this:
create table t1 (
  id serial,
  fkid int not null,
  tstamp timestamp with time zone,
  data jsonb
);
A typical JSON document is:
{
  "WifiStatistic": {
    "Interfaces": {
      "wlan0": {
        "qualityLevel": {
          "type": "graph",
          "unit": "string",
          "value": "75",
          "graphDisplayName": "Quality level"
        },
        "SNR": {
          "type": "graph",
          "unit": "string",
          "value": "75",
          "graphDisplayName": "SNR"
        }
      }
    }
  }
}
What I'd like as a result of a query that extracts the quality level is a recordset like:

id | fkid | tstamp     | value | graphdisplayname
---+------+------------+-------+------------------
 1 |    1 | 2017-01-22 |    75 | "Quality Level"

What kind of query might I use?
Thanks to @VaoTsun for his comment, I ended up using this:
select
  tstamp, id, fkid,
  data -> 'WifiStatistic' -> 'Interfaces' -> 'wlan0' -> 'qualityLevel' -> 'value' as value,
  data #> '{WifiStatistic, Interfaces, wlan0, qualityLevel}' -> 'graphDisplayName' as dname
from t1;
I was trying to recover two values with a single JSON selection; that is what puzzled me.
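Since both values live under the same path, the #>> operator can also pull each one out as text directly; an equivalent sketch against t1 above:

select id, fkid, tstamp,
       data #>> '{WifiStatistic, Interfaces, wlan0, qualityLevel, value}' as value,
       data #>> '{WifiStatistic, Interfaces, wlan0, qualityLevel, graphDisplayName}' as graphdisplayname
from t1;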