I have the following table:
column1  column2  ...
---------------------
a        b
a        b
a        b
a        b
Assume "a" and "b" are all different string, int or boolean values. I want to represent this structure with an Avroschema. However I don't want the following structure as it consumes too much space because of the repeated column names:
[{"column1":"a", "column2":"b"}{{"column1":"a", "column2":"b"}, ...]
What I want is the following:
{
  "columns": ["column1", "column2", ...],
  "rows": [["a", "b"], ["a", "b"], ...]
}
So the column names are always "column1", "column2", etc. My question is: in order to serialize the above structure, how can I create an Avro schema and say that the "columns" field must have the values "column1", "column2", etc.?
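For what it's worth, a minimal Avro schema for this envelope could look like the sketch below (the record name Table is a placeholder; the union mirrors the string/int/boolean cell values mentioned above). Note that an Avro schema constrains types, not concrete values, so on its own it cannot require that "columns" equals ["column1", "column2", ...]; short of modeling the names as an enum, that check has to live in the producer:

{
  "type": "record",
  "name": "Table",
  "doc": "Illustrative sketch only; record and field layout are assumptions, not a fixed answer.",
  "fields": [
    {"name": "columns", "type": {"type": "array", "items": "string"}},
    {"name": "rows",
     "type": {"type": "array",
              "items": {"type": "array", "items": ["string", "int", "boolean"]}}}
  ]
}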
My table looks like this:
id | data
---+-----------------------------------------
 1 | {"tags": {"env": "dev", "owner": "me"}}
I want to fetch the data and, inside the SELECT query, convert the data column to the following format:
id | data
---+-----------------------------------------------------------------------------
 1 | {"tags": [{"key": "env", "value": "dev"}, {"key": "owner", "value": "me"}]}
I've tried several MySQL JSON functions, but the closest I got is:
id | data
---+-----------------------------------------------------------------------------------------------
 1 | {"tags": [{"key": "env", "value": ["dev", "me"]}, {"key": "owner", "value": ["dev", "me"]}]}
Any suggestions?
Thanks
SELECT id,
       JSON_OBJECT("tags", JSON_ARRAY(
           JSON_OBJECT("key", "env", "value", JSON_EXTRACT(json_column, "$.tags.env")),
           JSON_OBJECT("key", "owner", "value", JSON_EXTRACT(json_column, "$.tags.owner"))
       )) AS data
FROM table_name
JSON_EXTRACT: extracts the values of the "env" and "owner" keys from json_column
JSON_OBJECT: creates two JSON objects with the "key" and "value" keys and the extracted values
JSON_ARRAY: creates a JSON array of these two objects
Finally, the outer JSON_OBJECT wraps the array under the "tags" key.
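As a quick sanity check, here is a hypothetical setup matching the sample row above (table and column names taken from the query):

CREATE TABLE table_name (id INT PRIMARY KEY, json_column JSON);
INSERT INTO table_name VALUES (1, '{"tags": {"env": "dev", "owner": "me"}}');

-- The query then returns:
-- {"tags": [{"key": "env", "value": "dev"}, {"key": "owner", "value": "me"}]}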
This is a generic approach which will also work on data fields that have multiple top-level keys and multiple second-level keys:
select t.id,
       (select json_objectagg(t1.k1,
                (select json_arrayagg(json_object('key', t2.k2,
                        'value', json_extract(t.data, concat('$.', t1.k1, '.', t2.k2))))
                 from json_table(json_keys(json_extract(t.data, concat('$.', t1.k1))),
                                 '$[*]' columns (k2 text path '$')) t2))
        from json_table(json_keys(t.data), '$[*]' columns (k1 text path '$')) t1) as data
from tbl t;
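To try it, here is a hedged setup with two top-level keys (tbl and data come from the query; the meta key is made up):

create table tbl (id int primary key, data json);
insert into tbl values
  (1, '{"tags": {"env": "dev", "owner": "me"}, "meta": {"region": "eu"}}');

-- Expected result for id 1, one key/value array per top-level key (key order may differ):
-- {"tags": [{"key": "env", "value": "dev"}, {"key": "owner", "value": "me"}],
--  "meta": [{"key": "region", "value": "eu"}]}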
Let's say I have inserted a record like this:
id | u_id | the_data
---+------+----------------------------------
 1 | 2863 | [{"name": "a", "body": "lorem"}]
using these commands:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
u_id INT,
the_data JSON
);
INSERT INTO users (u_id, the_data) VALUES (2863, '[{"name": "a", "body": "lorem"}]');
But now, I want to insert some more data into the same record without losing the old array of json. How to do this type of insertion?
id | u_id | the_data
---+------+-------------------------------------------------------------------
 1 | 2863 | [{"name": "a", "body": "lorem"}, {"name": "b", "body": "ipsum"}]
Please note: the command below creates a new record, which I don't want.
INSERT INTO users (u_id, the_data)
VALUES (2863, '[{"name": "b", "body": "ipsum"}]');
I'm not looking for solutions like the ones below, since they insert everything at the same time:
INSERT INTO users (u_id, the_data)
VALUES (2863, '[{"name": "a", "body": "lorem"}, {"name": "b", "body": "ipsum"}]');
INSERT INTO users (u_id, the_data)
VALUES (2863, '[{"name": "a", "body": "lorem"}]'), (2863, '[{"name": "b", "body": "ipsum"}]');
As the top-level JSON object is an array, you can use the standard concatenation operator || to append an element to the array:
update users
set the_data = the_data || '{"name": "b", "body": "ipsum"}'
where u_id = 2863;
You should change your column definition to jsonb, as that offers a lot more possibilities for querying or changing the value. Otherwise you will be forced to cast the column to jsonb every time you want to do something more interesting with it.
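If you do switch, a minimal sketch of the one-time conversion (assuming the users table from the question):

alter table users
  alter column the_data type jsonb
  using the_data::jsonb;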
If you can't or don't want to change the data type you need to cast it:
set the_data = the_data::jsonb || '....'
You can create a list of objects and parse it in a loop.
For example:
var data = {
  Id: Id,
  Name: Name
};
JSON request:
Data: data
Well, that's not a simple JSON object. You're trying to add an object to an array of values that is saved as a json field.
So it's not about keeping the old array, but rather keeping the objects that were already present in the array saved in the json field and adding the new one.
I tried this on Postgres 12 and it works. Basically, as someone else said, you need to cast to jsonb if your column is json, and use the pipes operator (||) to concatenate the new value.
UPDATE users
SET the_data = the_data::jsonb || '{"name": "b", "body": "ipsum"}'
WHERE id = 1;
Taken from here:
https://stackoverflow.com/a/69630521/9231145
I have a JSON array column in a table, e.g. ["A1", "A2", "B1"]. I want to reference that array in a WHERE IN clause, but I could not evaluate the JSON array to ... WHERE tbl2.refID IN ("A1", "A2", "B1").
SET @ref = replace(replace('["A1", "A2", "B1"]', '[', ''), ']', ''); SELECT @ref;
returns "A1", "A2", "B1" as I want, but it does not work in ... WHERE tbl2.refID IN (@ref)
So how can I evaluate the array to be used as "WHERE IN" values?
Table 1
id | array of ids       | other cols
---+--------------------+-----------
 1 | ["A1", "A2", "B1"] |

Table 2
id | refID | col 3
---+-------+------
 1 | A1    | [ ]
 2 | A2    | [ ]
Using the elements of table1.col2, I want to select and group col3 from table2.
Wish I could illustrate it better!
I have tried passing the evaluated array column to WHERE IN (), but it does not return any value. The evaluation is broken somehow.
WHERE tbl2.refID IN (replace(replace('["A1", "A2", "B1"]', '[', ''), ']', ''))  -- not evaluating
You could use JSON_CONTAINS to check whether the value exists in the JSON array.
Beware: JSON_CONTAINS needs valid JSON for both parameters, so JSON_CONTAINS('["A1"]', 'A1') would be invalid, as A1 by itself is not a valid JSON string representation.
For the WHERE clause, you can simply do:
WHERE JSON_CONTAINS('["A1", "A2", "B1"]', JSON_QUOTE(tbl2.refID))
JSON_QUOTE adds quotes around the string so it can be tested against your array.
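Applied to the two tables above, a hedged sketch (the array column is assumed to be named array_of_ids, and "col 3" is written col3; adjust to your real names):

SELECT t2.col3
FROM table1 t1
JOIN table2 t2
  ON JSON_CONTAINS(t1.array_of_ids, JSON_QUOTE(t2.refID))
WHERE t1.id = 1
GROUP BY t2.col3;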
I'm trying to count the number of nested JSON array elements, grouped by parent index, using a MySQL 8 JSON-type field. My JSON string looks like:
{
"a": [
{
"b": [
1,
2,
3
]
},
{
"b": [
1
]
}
]
}
I'm trying to get the count of elements under the "b" key for each "a" element. I need an output similar to:
{0: 3, 1: 1}
Meaning that a[0] has 3 elements under "b", while a[1] has 1.
This query counts the total number of "b" elements across all "a"s (it yields 4):
select JSON_LENGTH(json->>'$.a[*].b[*]') from myTable
Is it possible to somehow group it by a's index?
Thank you!
One option is JSON_TABLE and JSON_OBJECTAGG:
SELECT
JSON_OBJECTAGG(
`rowid` - 1,
JSON_LENGTH(`count`)
)
FROM JSON_TABLE(
'{"a":[{"b":[1,2,3]},{"b":[1]}]}',
'$.a[*]'
COLUMNS(
`rowid` FOR ORDINALITY,
`count` JSON PATH '$.b'
)
) `der`;
See db-fiddle.
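To run the same aggregation against a real column instead of a literal, a hedged sketch (myTable and the json column appear in the question; the id column is an assumption):

SELECT
  m.id,
  JSON_OBJECTAGG(`rowid` - 1, JSON_LENGTH(`count`)) AS b_counts
FROM myTable m,
     JSON_TABLE(
       m.json,
       '$.a[*]'
       COLUMNS(
         `rowid` FOR ORDINALITY,
         `count` JSON PATH '$.b'
       )
     ) `der`
GROUP BY m.id;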
Is it possible to use json_populate_recordset so that the table column names/json keys are compared in an case-insensitive way, using PostgreSQL (9.6)?
For example, the following snippet would return zero rows.
CREATE TABLE foo (bar TEXT);
SELECT * from json_populate_recordset(null::foo, '[{"bAr":1}]')
Of course I could transform the JSON keys to lowercase, or the table's column names could be made case sensitive (quoted).
I don't believe case-insensitive matching is possible. If you know in advance the case that will be used in the records (e.g. they are always camel cased), you can target a specific case by quoting the column name.
Baseline example to show the case sensitivity:
# create type x as (abc integer);
CREATE TYPE
# select * from json_populate_recordset(null::x, '[{"abc" : 1}, {"Abc" : 2}, {"aBc" : 3}, {"abC" : 4}]');
 abc
-----
   1



(4 rows)
Now let's choose a specific case we want to use by quoting the column name.
# drop type x;
DROP TYPE
# create type x as ("aBc" integer);
CREATE TYPE
# select * from json_populate_recordset(null::x, '[{"abc" : 1}, {"Abc" : 2}, {"aBc" : 3}, {"abC" : 4}]');
 aBc
-----


   3

(4 rows)
If you can't guarantee the case of your input data, you should lower-case everything.
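A hedged sketch of that lower-casing step in SQL (json_each and json_object_agg exist since 9.4, so this works on 9.6; it assumes flat, one-level objects and the foo table from the question):

SELECT p.*
FROM json_array_elements('[{"bAr": 1}, {"BAR": 2}]'::json) AS elem
CROSS JOIN LATERAL (
  -- rebuild each array element with lower-cased keys
  SELECT json_object_agg(lower(key), value) AS obj
  FROM json_each(elem)
) lowered
CROSS JOIN LATERAL json_populate_record(null::foo, lowered.obj) AS p;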