I have a table of data with an id column and a jsonb column containing object data:
id (int)
data (jsonb)
I'm querying the data like this:
select row_to_json(t)
from (
select id, data from accounts
) t;
which gets me data that looks like this:
{
"id":3,
"data":
{
"emailAddress": "someone#gmail.com",
"mobileNumbers": ["5559991212"]
}
}
I would like to merge the data field into the main record; basically, I want the keys of the data object promoted into the main record:
{
"id":3,
"emailAddress": "someone#gmail.com",
"mobileNumbers": ["5559991212"]
}
You can use
SELECT jsonb_set(data, '{id}', to_jsonb(id))
FROM accounts;
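An equivalent approach, if you are on Postgres 9.5 or later (a sketch; the `||` concatenation operator and `jsonb_build_object` require 9.5+), is to concatenate the id into the data object:

```sql
-- Merge the id key into the jsonb object; || overwrites any duplicate keys
SELECT data || jsonb_build_object('id', id)
FROM accounts;
```

Either way, the result is a single jsonb object with id alongside the original keys.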
I cannot help remarking that a table with just a primary key and a jsonb column seems like problematic database design to me.
I have a simple JSON document "example_1.JSON":
{
"fruit": "Apple",
"size": "Large",
"color": "Red"
}
I created a temp table "temp_json" to copy the file into it:
CREATE TEMP TABLE temp_json (mydata text);
I copied the JSON file using the following statement:
COPY temp_json from 'C:\Program Files\PostgreSQL\9.5\data\example_1.JSON';
The copy part works fine. When I insert the values from the temp table into my database table "jsontable", the insertion happens with no errors, but it splits the JSON value across several rows of my table!
My database table is created as follows:
CREATE TABLE public.jsontable (
id bigint NOT NULL DEFAULT nextval('jsontable_id_seq'::regclass),
jsondata jsonb,
CONSTRAINT jsontable_pkey PRIMARY KEY (id)
);
The insert statement from the temp table to the jsontable:
INSERT INTO jsontable(jsondata) SELECT to_jsonb(mydata::text) FROM temp_json;
But when I select rows from jsontable, I don't get the JSON values in a single row!
SELECT * FROM jsontable;
Any suggestions to solve this problem?
You have two options. One is to control the data at the source, i.e. pre-format the contents of the file so the JSON sits on a single line before copying.
The other is to concatenate the lines using string_agg. To maintain the order, I suggest adding a default id column to your temp table.
create sequence seq_temp_json;
CREATE temp TABLE temp_json
(
id INT DEFAULT NEXTVAL('seq_temp_json'::regclass),
mydata TEXT
);
Now, load the temp table and check the order; the JSON lines should appear in ascending order of id.
COPY temp_json(mydata) from 'C:\Program Files\PostgreSQL\9.5\data\example_1.JSON';
knayak=# select * from temp_json;
id | mydata
----+-------------------
1 | {
2 | "fruit": "Apple",
3 | "size": "Large",
4 | "color": "Red"
5 | }
(5 rows)
Load the JSON into the main table:
INSERT INTO jsontable ( jsondata )
SELECT string_agg( mydata ,e'\n' ORDER BY id)::jsonb
FROM temp_json;
The column now contains the complete JSONB.
knayak=# select * from jsontable;
id | jsondata
----+-----------------------------------------------------
6 | {"size": "Large", "color": "Red", "fruit": "Apple"}
(1 row)
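A side note on the original INSERT: to_jsonb(mydata::text) produces a jsonb string (the text wrapped in quotes), not an object; casting the aggregated text with ::jsonb, as done above, parses it instead. A quick illustration:

```sql
-- to_jsonb(text) wraps the text in a JSON string scalar; ::jsonb parses it
SELECT to_jsonb('{"fruit": "Apple"}'::text);  -- "{\"fruit\": \"Apple\"}"  (a jsonb string)
SELECT '{"fruit": "Apple"}'::jsonb;           -- {"fruit": "Apple"}        (a jsonb object)
```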
I'm running PostgreSQL 9.6, and I've got a table consisting of lots of columns.
I've got a csv-file containing the following format:
id, insert_time, JSON-object
The JSON-object has the following format:
{ "column_nameX": valueX, "column_nameY": valueY, ... }
The column_names in the JSON-object matches the columns in my PostgreSQL-table.
Is there a dynamic way to import such file, so I'll get the id, insert_time, and the remaining column values from the JSON object?
The order of columns in the JSON object might not match the order of the columns in the PostgreSQL table.
I am assuming you know how to import that csv file into PostgreSQL and that you know the fields inside the JSON object field.
First we create a table to store the contents of the csv file. Notice the data type of the JSON field is jsonb.
create table test11 (id int, insert_time timestamp, json_object jsonb);
Now you import the csv file, but for illustration purposes I will insert sample data into this table.
insert into test11 (id, insert_time, json_object) values (1, '2017-11-14'::timestamp, '{ "column_nameX": "3", "column_nameY": "4" }'::jsonb);
insert into test11 (id, insert_time, json_object) values (1, '2017-11-14'::timestamp, '{ "column_nameX": "13", "column_nameY": "14" }'::jsonb);
We now select from that table:
Select id, insert_time, json_object->>'column_nameY' as Column_NameY, json_object->>'column_nameX' as Column_NameX from test11;
your results should look like this...
id | insert_time | column_namey | column_namex
---+-------------+--------------+-------------
 1 | 11/14/2017  | 4            | 3
 1 | 11/14/2017  | 14           | 13
-HTH
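If you want the mapping to happen dynamically rather than spelling out each key, jsonb_populate_record (available since 9.4) can expand the JSON object into a row matching a table's columns. A sketch, assuming a hypothetical target table; note the column names must match the JSON keys exactly, including case, hence the quoted identifiers:

```sql
-- Hypothetical table whose quoted column names match the JSON keys exactly
CREATE TABLE target ("column_nameX" int, "column_nameY" int);

-- Expand each JSON object into a row of target's type and insert it
INSERT INTO target
SELECT r.*
FROM test11, jsonb_populate_record(NULL::target, json_object) AS r;
```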
I'm getting this error when trying to access data in a JSON object, does anybody know what it is causing it?
This is the query:
SELECT id, data FROM cities WHERE data->'location'->>'population' = '270816'
This is the JSON object:
location": {
"population": 270816,
"type": "city"
}
Any help would be really appreciated. Thanks
I was able to get this SELECT to work in Postgres 9.3.1. Here's a sqlfiddle which illustrates that.
Here is the DDL and INSERT statement I used in the sqlfiddle:
create table cities
(
id serial,
data json
);
insert into cities (data) VALUES
('{
"location": {
"population": 270816,
"type": "city"
}
}'
);
What version of Postgres are you using? How are you inserting the JSON? What's the DDL for your cities table?
I suspect it may be an issue with the way you are inserting the JSON data. Try inserting it similar to the way I do in the sqlfiddle above and see if that works for you, i.e. as a pure SQL string, but one with valid JSON inside, into a column defined as json.
Just had what sounds like the same issue on Postgres 9.6.6. Improper string escaping caused mysterious JSONB behavior. Using pgAdmin4,
CREATE TABLE json_test (json_data JSONB);
INSERT INTO json_test (json_data) VALUES ('"{\"id\": \"test1\"}"');
INSERT INTO json_test (json_data) VALUES ('{"id": "test2"}');
SELECT json_data, json_data->>'id' as id FROM json_test;
returns pgAdmin4 output showing a baffling failure to find id test2. It turns out the pgAdmin4 display is misleading; the situation becomes clear using the text display from psql:
db=> CREATE TABLE json_test (json_data JSONB);
CREATE TABLE
db=> INSERT INTO json_test (json_data) VALUES ('"{\"id\": \"test1\"}"');
INSERT 0 1
db=> INSERT INTO json_test (json_data) VALUES ('{"id": "test2"}');
INSERT 0 1
db=> SELECT json_data, json_data->>'id' as id FROM json_test;
json_data | id
-----------------------+-------
"{\"id\": \"test1\"}" |
{"id": "test2"} | test2
(2 rows)
There it is obvious that the first row was inserted as a string that merely looks like JSON, not as a nested JSON object.
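If double-encoded rows like test1 are already in the table, one way to repair them (a sketch; `#>> '{}'` extracts the string scalar as text, which `::jsonb` then parses) is:

```sql
-- Unwrap JSON string scalars back into real JSONB objects
UPDATE json_test
SET json_data = (json_data #>> '{}')::jsonb
WHERE jsonb_typeof(json_data) = 'string';
```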
Does PostgreSQL provide any notation/method for putting a constraint on each element of a JSON array?
An example:
create table orders(data json);
insert into orders values ('
{
"order_id": 45,
"products": [
{
"product_id": 1,
"name": "Book"
},
{
"product_id": 2,
"name": "Painting"
}
]
}
');
I can easily add a constraint on the order_id field:
alter table orders add check ((data->>'order_id')::integer >= 1);
Now I need to do the same with product_id. I can put a constraint on individual array items:
alter table orders add check ((data->'products'->0->>'product_id')::integer >= 1);
alter table orders add check ((data->'products'->1->>'product_id')::integer >= 1);
-- etc.
So obviously what I'm looking for is some kind of wildcard operator for matching any JSON array element:
alter table orders add check ((data->'products'->*->>'product_id')::integer >= 1);
-- ^ like this
I know that this can be done by extracting products to a separate products table with a foreign key to orders. But I want to know if this is possible within single JSON column, so I can keep that in mind when designing a database schema.
So I asked this question on the PostgreSQL mailing list, as suggested by Craig Ringer, and I've got the answer.
In short, the solution is to write a function that materializes the JSON array into a PostgreSQL array:
create function data_product_ids(JSON) returns integer[] immutable as $$
    select array_agg((a->>'product_id')::integer)
    from json_array_elements($1->'products') as a
$$ language sql;
and use that function in a CHECK constraint:
alter table orders add check (1 <= ALL(data_product_ids(data)));
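As a quick sanity check, an insert whose products array contains a non-positive product_id should now be rejected (a sketch; the exact error text will vary):

```sql
-- Passes: every product_id is >= 1
insert into orders values ('{"order_id": 46, "products": [{"product_id": 3, "name": "Lamp"}]}');

-- Fails the check constraint: product_id 0 is not >= 1
insert into orders values ('{"order_id": 47, "products": [{"product_id": 0, "name": "Mug"}]}');
```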
For more details on how this works, see the answer on the PostgreSQL mailing list. Credits to Joel Hoffman.
From one of the developers of JSON for Postgres:
The path stuff does not support wildcards.