Is it possible to use json_populate_recordset so that the table column names and JSON keys are compared in a case-insensitive way, using PostgreSQL (9.6)?
For example, the following snippet returns zero rows.
CREATE TABLE foo (bar TEXT);
SELECT * from json_populate_recordset(null::foo, '[{"bAr":1}]')
Of course I could transform the JSON keys to lowercase, or create the table columns with case-sensitive (quoted) names.
I don't believe case-insensitive matching is possible. If you know in advance the case that will be used in the records (e.g. they are always camel-cased) you can match a specific case by quoting the column name.
Baseline example to show that the matching is case-sensitive:
# create type x as (abc integer);
CREATE TYPE
# select * from json_populate_recordset(null::x, '[{"abc" : 1}, {"Abc" : 2}, {"aBc" : 3}, {"abC" : 4}]');
abc
-----
1



(4 rows)
Only the exact-case key "abc" populated a value; the other three rows are null.
Now let's choose a specific case we want to use by quoting the column name.
# drop type x;
DROP TYPE
# create type x as ("aBc" integer);
CREATE TYPE
# select * from json_populate_recordset(null::x, '[{"abc" : 1}, {"Abc" : 2}, {"aBc" : 3}, {"abC" : 4}]');
aBc
-----


3

(4 rows)
This time only the third record, whose key exactly matches "aBc", populated a value.
If you can't guarantee the case of your input data, you should lower-case everything, for example with a helper along the lines of the sketch below.
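A minimal sketch of that workaround (jsonb_keys_to_lower is a hypothetical helper, not a built-in), reusing the foo table from the question:
CREATE OR REPLACE FUNCTION jsonb_keys_to_lower(j jsonb)
RETURNS jsonb LANGUAGE sql AS $$
  -- rebuild the object with lower-cased top-level keys
  SELECT jsonb_object_agg(lower(key), value) FROM jsonb_each(j)
$$;

SELECT *
FROM jsonb_populate_recordset(
       null::foo,
       (SELECT jsonb_agg(jsonb_keys_to_lower(elem))
        FROM jsonb_array_elements('[{"bAr":1}]'::jsonb) AS elem));
-- now returns one row with bar = '1'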
Related
I have the following table:
column1 column2 ...
-------------------
a b
a b
a b
a b
Assume "a" and "b" are all different string, int or boolean values. I want to represent this structure with an Avroschema. However I don't want the following structure as it consumes too much space because of the repeated column names:
[{"column1":"a", "column2":"b"}{{"column1":"a", "column2":"b"}, ...]
What I want is the following:
{
"columns": ["column1", "column2", ...],
"rows: [["a", "b"], ["a", "b"], ...]
}
So the column names are always "column1", "column2", etc. My question is: in order to serialize the above structure, how can I create an Avro schema and specify that the "columns" field must contain the values "column1", "column2", etc.?
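For reference, a minimal sketch of an Avro schema with this shape (the record and field names are illustrative, not from the question). Note that an Avro schema constrains types, not concrete values, so pinning "columns" to exactly ["column1", "column2", ...] would have to be enforced by the writing application rather than by the schema itself:
{
  "type": "record",
  "name": "TableData",
  "fields": [
    {"name": "columns", "type": {"type": "array", "items": "string"}},
    {"name": "rows",
     "type": {"type": "array",
              "items": {"type": "array",
                        "items": ["string", "int", "boolean"]}}}
  ]
}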
I have JSON stored in a table. The JSON is nested and has the following structure
[
{
"name": "abc",
"ques": [
{
"qId": 100
},
{
"qId": 200
}
]
},{
"name": "xyz",
"ques": [
{
"qId": 100
},
{
"qId": 300
}
]
}
]
Update TABLE_NAME
set COLUMN_NAME = jsonb_set(COLUMN_NAME, '{ques,qId}', '101')
WHERE COLUMN_NAME->>'qId'=100
I am trying to update the qId value in the JSON. If qId is 100, I want to update it to 101.
1st solution: simple, but to be used carefully (a plain text replace also matches the pattern anywhere else it appears, including inside string values).
You convert your JSON data to text and use the replace function:
Update TABLE_NAME
set COLUMN_NAME = replace(COLUMN_NAME :: text,'"qId": 100}', '"qId": 101}') :: jsonb
2nd solution: more elegant, and more complex.
jsonb_set cannot make several replacements in the same jsonb data at the same time. To do so, you need to create your own aggregate based on the jsonb_set function:
CREATE OR REPLACE FUNCTION jsonb_set(x jsonb, y jsonb, path text[], new_value jsonb)
RETURNS jsonb LANGUAGE sql AS $$
  -- x is the aggregate state: null on the first call, so fall back to y (the original jsonb)
  SELECT jsonb_set(COALESCE(x, y), path, new_value);
$$;

CREATE OR REPLACE AGGREGATE jsonb_set_agg(x jsonb, path text[], new_value jsonb)
  (stype = jsonb, sfunc = jsonb_set);
Then you get your result with the following query :
UPDATE TABLE_NAME
SET COLUMN_NAME =
( SELECT jsonb_set_agg(COLUMN_NAME :: jsonb, array[(a.id - 1) :: text, 'ques', (b.id - 1) :: text], jsonb_build_object('qId', 101))
FROM jsonb_path_query(COLUMN_NAME :: jsonb, '$[*]') WITH ORDINALITY AS a(content, id)
CROSS JOIN LATERAL jsonb_path_query(a.content->'ques', '$[*]') WITH ORDINALITY AS b(content, id)
WHERE (b.content)->'qId' = to_jsonb(100)
)
Note that this query is not universal: it has to break down the jsonb data according to its structure.
Note that jsonb_array_elements can be used in place of jsonb_path_query, but jsonb_array_elements raises an error when the jsonb data is not an array, whereas jsonb_path_query in lax mode (the default) does not, as the snippet below illustrates.
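A quick illustration of that difference, assuming the default lax mode:
SELECT jsonb_path_query('{"a": 1}'::jsonb, '$[*]');
-- returns {"a": 1}: lax mode treats the object as a one-element array
SELECT jsonb_array_elements('{"a": 1}'::jsonb);
-- ERROR: cannot extract elements from an object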
Full test results in dbfiddle
You must specify the whole path to the value.
In this case your json is an array, so you need to address which element of the array you are trying to modify.
A direct approach (for your example; note that ques is itself an array, so its element index is part of the path) would be:
jsonb_set(
  jsonb_set(
    COLUMN_NAME
    , '{0,ques,0,qId}'
    , '101'
  )
  , '{1,ques,0,qId}'
  , '101'
)
Of course, if you want to modify every element of arrays of different lengths, you would need to elaborate on this approach, disassembling the array to modify every contained element, along the lines of the sketch below.
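A minimal sketch of that generic rewrite, assuming the structure from the question and that every element has a non-empty ques array (TABLE_NAME and COLUMN_NAME are the question's placeholders):
UPDATE TABLE_NAME t
SET COLUMN_NAME = (
  SELECT jsonb_agg(
           jsonb_set(elem, '{ques}',
             -- rebuild each ques array, bumping qId 100 to 101
             (SELECT jsonb_agg(CASE WHEN q->>'qId' = '100'
                                    THEN jsonb_set(q, '{qId}', '101')
                                    ELSE q END)
              FROM jsonb_array_elements(elem->'ques') AS q)))
  FROM jsonb_array_elements(t.COLUMN_NAME) AS elem);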
In my postgres database I have json that looks similar to this:
{
"myArray": [
{
"myValue": 1
},
{
"myValue": 2
},
{
"myValue": 3
}
]
}
Now I want to rename myValue to otherValue. I can't be sure about the length of the array! Preferably I would like to use something like jsonb_set with a wildcard as the array index, but that does not seem to be supported. So what is the nicest solution?
You have to decompose a whole jsonb object, modify individual elements and build the object back.
The custom function will be helpful:
create or replace function jsonb_change_keys_in_array(arr jsonb, old_key text, new_key text)
returns jsonb language sql as $$
  select jsonb_agg(
           case
             when value->old_key is null then value
             -- drop the old key and append the renamed one
             else (value - old_key) || jsonb_build_object(new_key, value->old_key)
           end)
  from jsonb_array_elements(arr)
$$;
Use:
with my_table (id, data) as (
values(1,
'{
"myArray": [
{
"myValue": 1
},
{
"myValue": 2
},
{
"myValue": 3
}
]
}'::jsonb)
)
select
id,
jsonb_build_object(
'myArray',
jsonb_change_keys_in_array(data->'myArray', 'myValue', 'otherValue')
)
from my_table;
id | jsonb_build_object
----+------------------------------------------------------------------------
1 | {"myArray": [{"otherValue": 1}, {"otherValue": 2}, {"otherValue": 3}]}
(1 row)
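To persist the rename rather than just select it, the same function works in an UPDATE (a sketch, assuming a real table with the same id and data columns and a myArray array in every row; jsonb_set preserves any other top-level keys):
UPDATE my_table
SET data = jsonb_set(
  data,
  '{myArray}',
  jsonb_change_keys_in_array(data->'myArray', 'myValue', 'otherValue'));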
Using JSON functions is definitely the most elegant approach, but you can get by with character replacement. Cast the json(b) to text, perform the replace, then cast it back to json(b). In this example I included the quotes and colon so that the text replace targets the JSON keys without clashing with values.
CREATE TABLE mytable ( id INT, data JSONB );
INSERT INTO mytable VALUES (1, '{"myArray": [{"myValue": 1},{"myValue": 2},{"myValue": 3}]}');
INSERT INTO mytable VALUES (2, '{"myArray": [{"myValue": 4},{"myValue": 5},{"myValue": 6}]}');
SELECT * FROM mytable;
UPDATE mytable
SET data = REPLACE(data :: TEXT, '"myValue":', '"otherValue":') :: JSONB;
SELECT * FROM mytable;
http://sqlfiddle.com/#!17/1c28a/9/4
I was testing some queries on pg9.4 in "JSON mode", and now I am checking whether pg9.5 brings all of the same JSONB functionality... But there is no row_to_jsonb() function (!). (Why isn't the basic function set orthogonal?)
The guide only says "the to_jsonb function supplies much the same functionality". Where can we check "how much"? Is there another JSONB-specific guide about these details?
((Year 2022 update and pg upgrade))
The phrase "supplies much the same functionality" was removed in version 13. The current guide uses neither the phrase nor the word "much".
Now row_to_json is an alias for to_json, except when its optional boolean parameter is true: in that case the result includes line feeds, as in jsonb_pretty().
Now the functions to_jsonb and to_json are orthogonal (!), and typical use is the same:
SELECT t.a, t.b, to_jsonb(r) json_info
-- or to_json(r)
FROM t, LATERAL (SELECT t.c,t.d,t.f) r;
-- or SELECT to_jsonb(r) FROM (SELECT c,d,f FROM t) r;
You can just use to_jsonb() instead of row_to_json(), example:
with the_table(a, b, c) as (
select 1, 'alfa', '2016-01-01'::date
)
select to_jsonb(t), row_to_json(t)
from the_table t;
to_jsonb | row_to_json
------------------------------------------+-------------------------------------
{"a": 1, "b": "alfa", "c": "2016-01-01"} | {"a":1,"b":"alfa","c":"2016-01-01"}
(1 row)
The first has wider application than the second because of its argument type (anyelement versus record). For example, you can convert a Postgres array to a JSON array using to_jsonb(), which cannot be done with row_to_json():
select to_jsonb(array['a', 'b', 'c']);
to_jsonb
-----------------
["a", "b", "c"]
(1 row)
To match the two-argument form of row_to_json(), you should additionally use jsonb_pretty():
with the_table(a, b, c) as (
select 1, 'alfa', '2016-01-01'::date
)
select jsonb_pretty(to_jsonb(t)), row_to_json(t, true)
from the_table t;
jsonb_pretty | row_to_json
-----------------------+--------------------
{ +| {"a":1, +
"a": 1, +| "b":"alfa", +
"b": "alfa", +| "c":"2016-01-01"}
"c": "2016-01-01"+|
} |
(1 row)
You can use to_jsonb as a drop-in replacement for row_to_json.
SELECT to_jsonb(rows) FROM (SELECT * FROM my_table) rows;
You can also cast json to jsonb: row_to_json(...)::jsonb. Not ideal, but it often does the trick.
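For example:
SELECT row_to_json(t)::jsonb FROM (SELECT 1 AS a, 'x' AS b) t;
-- {"a": 1, "b": "x"}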
So I have been trying to find an answer on the internet with zero luck.
Does postgres support having arrays of objects in a single field, e.g.
[
{
key: value,
another: value
},
{
key: value,
value: key
}
]
and saving this to a single field?
Also, how would you perform the single INSERT or UPDATE?
Would it be: UPDATE db SET value='[{ key: val }, { key: val }]'?
Postgres supports any valid json values, including json arrays.
What you are going to use is a single json (jsonb) column, not a Postgres array:
create table example (id int, val jsonb);
insert into example
values (1, '[{ "name": "aga" }, { "gender": "female" }]');
select * from example;
id | val
----+-----------------------------------------
1 | [{"name": "aga"}, {"gender": "female"}]
(1 row)
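And the UPDATE the question asks about, against the same table (note that the keys must be quoted so the value is valid JSON):
UPDATE example
SET val = '[{"name": "aga"}, {"gender": "female"}, {"key": "value"}]'
WHERE id = 1;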
It depends on your definition of objects, I guess.
You can use JSON: http://www.postgresql.org/docs/current/static/functions-json.html and insert unstructured data:
# create table test (field json);
CREATE TABLE
# insert into test values ('[1,2,3]');
INSERT 0 1
# insert into test values ('[{"key": "value"}, {"key": "value"}]');
INSERT 0 1
# select * from test;
field
--------------------------------------
[1,2,3]
[{"key": "value"}, {"key": "value"}]
There is also support for arrays: http://www.postgresql.org/docs/current/static/arrays.html
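For contrast, a minimal sketch of a native Postgres array column, as opposed to JSON:
CREATE TABLE test_arr (vals integer[]);
INSERT INTO test_arr VALUES ('{1,2,3}');
SELECT vals[2] FROM test_arr;  -- Postgres arrays are 1-based, so this returns 2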