psql - Check if json value has specific property

I'm trying to delete rows from a table depending on a specific value on a details column which is of json type.
The column is expected to have a json value like this one:
{
  "tax": 0,
  "note": "",
  "items": [
    {
      "price": "100",
      "quantity": "1",
      "description": "Test"
    }
  ]
}
The objects inside items could have a name entry or not. I'd like to delete those that don't have that entry.
NOTE: All objects inside items have the same entries so all of them will have or will not have the name entry

You can use a JSON path expression.
delete from the_table
where not details::jsonb @@ '$.items[*].name <> ""'
The JSON path predicate is true when at least one array element has a non-empty name, so the not targets exactly the rows you want to delete: those where no element does. Note that this would also delete rows whose elements only have "name": ""; if those should survive, test for the key's existence instead: where not details::jsonb @? '$.items[*].name'
As you didn't use the recommended jsonb type (which is the one that supports all the nifty JSON path operators), you need to cast the column to jsonb.
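For intuition, the deletion condition can be mimicked outside the database. A minimal Python sketch of the same test (the function name is made up for illustration; it returns True for rows where no item carries a non-empty name):

```python
import json

def should_delete(details):
    # Mirrors the negated jsonpath test: delete when no item in "items"
    # carries a non-empty "name".
    return not any(item.get("name") for item in details.get("items", []))

row = json.loads('{"tax": 0, "note": "", "items": '
                 '[{"price": "100", "quantity": "1", "description": "Test"}]}')
print(should_delete(row))  # True: no item has a "name" entry
```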

Related

Order of my JSON request is getting sorted alphabetically when I read data from a data file; I don't want my JSON request to get sorted

{
  "ID": 0,
  "OrganizationId": "",
  "OrganizationName": "",
  "Name": "",
  "IsActive": "True",
  "Type": 2,
  "AppliesTo": 1,
  "TagHOD": "",
  "DisplayAsPrimary": "false",
  "Values": []
}
Above is my JSON request, which I have stored in a data file.
Below is the JSON request body I get after sending a parameter into it. It is sorted into alphabetical order, which I don't want; I want the same order as above, e.g. ID should come first, then OrganizationId.
{
  "AppliesTo": 1,
  "DisplayAsPrimary": "false",
  "ID": 0,
  "IsActive": "True",
  "Name": "TAG1205510333275",
  "OrganizationId": 2404,
  "OrganizationName": "",
  "TagHOD": "",
  "Type": 2,
  "Values": [
    {
      "HODEmail": "tagsapiautomationae#mailinator.com",
      "Id": 1,
      "IsDeleted": false,
      "Text": "Level20"
    }
  ]
}
The JSON specification states: "An object is an unordered set of name/value pairs."
When working with JSON (and objects in most languages), the properties in objects are inherently unordered. You can't rely on different systems giving you the properties in the same order you supply them. In fact, you can't even rely on a single system giving you the properties in the same order all the time within a given execution of the code, even though many systems do behave that way.
If you want to preserve ordering, you either need to use an array to store the data, or you can use an array of object property names that stores the keys in the order you want, so you can use that array to reference them in the desired order later.
EG:
keyorder = ["ID",
"OrganizationId",
"OrganizationName",
"Name",
"IsActive",
"Type",
"AppliesTo",
"TagHOD",
"DisplayAsPrimary",
"Values"
]
You can then loop over this array when accessing elements in your object, so you are always accessing them in your defined order.
In Python, with a dict named data, this would look like:
for key in keyorder:
    print(data.get(key))
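In Python specifically, json.loads already preserves the document's key order (dicts keep insertion order since Python 3.7), and object_pairs_hook makes that explicit; alphabetical sorting only happens when you ask for it:

```python
import json
from collections import OrderedDict

raw = '{"ID": 0, "OrganizationId": "", "Name": ""}'
data = json.loads(raw, object_pairs_hook=OrderedDict)

print(list(data.keys()))                 # ['ID', 'OrganizationId', 'Name']
print(json.dumps(data))                  # keys keep the document's order
print(json.dumps(data, sort_keys=True))  # sorting happens only on request
```

If your test tool sorts keys when it serializes the request, look for a sort_keys-style option to turn off.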

Extracting data from JSON field in Redshift

I am trying to extract some data from a JSON field.
[{"id": 10001, "person1": {"name": "Kevin", "role": "junior"},
"person2": {"name": "Scott", "role": "senior"}}]
I am trying to extract the name and role under each ID.
I tried the below but it returned empty record.
SELECT json_extract_path(column_name::json,'person1','name') FROM table
The JSON you have shown is:
A list (as indicated by [])
that contains a dictionary,
whose person1 and person2 entries are themselves dictionaries.
You will first need to extract the first list element, and then apply the path extraction you already have.
Try something like:
SELECT
  json_extract_path_text(
    json_extract_array_element_text(column_name, 0),
    'person1',
    'name'
  )
FROM table_name
(Redshift's documented extraction functions are the _text variants, which operate on varchar, so the ::json cast is not needed.)
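The nesting can be traced in plain Python, which mirrors the two nested Redshift calls (take array element 0 first, then follow the path):

```python
import json

raw = '''[{"id": 10001,
           "person1": {"name": "Kevin", "role": "junior"},
           "person2": {"name": "Scott", "role": "senior"}}]'''
data = json.loads(raw)

# json_extract_array_element_text(..., 0) corresponds to data[0];
# json_extract_path(..., 'person1', 'name') to ["person1"]["name"].
print(data[0]["person1"]["name"])  # Kevin
```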

How to replace JSON key's value in mysql

I have a MySQL JSON column named data, with values like:
[{ "report1": { "result": "5"}, "report2": {"result": "6"}, "report3": {"a": "4"}}, {"report1": { "result": "9"},"report4": {"details": "<b>We need to show the details here</b>"}, "report3": {"result": "5"}}]
another instance of data is:
[{ "report1": { "result": "5"}, "report2": {"result": "6"}, "report3": {"a": "4"}}, {"report1": { "result": "9"}, "report3": {"result": "5"},"report4": {"details": "<b>We need to show the details here</b>"}}]
In the record above, the report4 key is present in the second array element.
And in this:
[{ "report1": { "result": "5"}, "report2": {"result": "6"}, "report3": {"a": "4"}}, {"report1": { "result": "9"}, "report3": {"result": "5"}}]
The key is not present.
I need to replace key report4's value, i.e. {"details": "<b>We need to show the details here</b>"}, with just [].
The logic for generating this data has changed from XML to JSON for that key only, so we need to replace its old value with a blank array (the new target type) without affecting the other data.
Is there any direct solution to that? I'm trying to avoid creating procedures here.
So, The Target data will be:
[{ "report1": { "result": "5"}, "report2": {"result": "6"}, "report3": {"a": "4"}}, {"report1": { "result": "9"},"report4": [], "report3": {"result": "5"}}]
And yes, the keys in the JSON are not consistent: a key may be present in the next or previous record in the table but not in this one.
The column should be of type JSON to use MySQL's JSON features efficiently. Then use the JSON modification functions, such as JSON_REPLACE.
Since each value contains a JSON array whose size may not be known in advance, you can create a small utility function to modify each element in the array.
create function modify_json(val json)
returns json
deterministic
begin
  declare len int default json_length(val);
  declare i int default 0;
  while i < len do
    # Replace the report4 property of the i'th element with an empty array.
    # json_array() is needed here: passing the string '[]' would insert the
    # JSON string "[]" rather than an empty array.
    set val = json_replace(val, concat('$[', i, '].report4'), json_array());
    set i = i + 1;
  end while;
  return val;
end;
With your utility function, update the records:
update the_table set data = modify_json(data)
where json_contains_path(data, 'one', '$[*].report4');
The records containing at least one element with a report4 property will be updated according to the modify_json function in this case. You could achieve the same thing with multiple update commands that operate on each index of the JSON array separately.
If the column can't be of type JSON for some reason, then you can allow MySQL to coerce the data or your program can marshall the string into a JSON object, modify the data, then serialize it to a string, and update the row.
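If you take that last route in application code, the round trip is short. A Python sketch of the marshall-modify-serialize step (the function name is illustrative):

```python
import json

def blank_report4(raw):
    # Parse the stored string, blank out report4 where present, re-serialize.
    records = json.loads(raw)
    for rec in records:
        if "report4" in rec:
            rec["report4"] = []
    return json.dumps(records)

before = '[{"report1": {"result": "9"}, "report4": {"details": "<b>x</b>"}}]'
print(blank_report4(before))  # report4 becomes []
```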

Postgres replace an array inside a JSONB field

I have a table where the data field has JSONB type and among many other data I have a notes key inside the data json value where I store an array of notes.
Each note has (at least) two fields: title and content.
Sometimes I have to replace the whole list of notes with a different list, but not affecting any other fields inside my json record.
I tried something like this:
UPDATE mytable
SET data = jsonb_set("data", '{notes}', '[{ "title": "foo1" "content": "bar"'}, { "title": "foo2" "content": "bar2"}]', true)
WHERE id = ?
And I get an exception (through a js wrapper)
error: invalid input syntax for type json
How should I correctly use the jsonb_set function?
You have a stray single quote and missing commas in your JSON payload.
Instead of
[{ "title": "foo1" "content": "bar"'}, { "title": "foo2" "content": "bar2"}]
(a missing comma after "foo1", a stray ' after "bar", and a missing comma after "foo2")
it should rather look like
[{ "title": "foo1", "content": "bar"}, { "title": "foo2", "content": "bar2"}]
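Hand-writing JSON inside an SQL string invites exactly this kind of quoting slip; it is safer to build the array in code, serialize it, and pass the result as a bound parameter to jsonb_set. A Python sketch of the serialization step:

```python
import json

notes = [
    {"title": "foo1", "content": "bar"},
    {"title": "foo2", "content": "bar2"},
]
# json.dumps guarantees well-formed JSON: no stray quotes, no missing commas.
payload = json.dumps(notes)
print(payload)
```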

Is it possible to have an optional field in an Avro schema (i.e. the field does not appear at all in the .json file)?

In my Avro schema, I have two fields:
{"name": "author", "type": ["null", "string"], "default": null},
{"name": "importance", "type": ["null", "string"], "default": null},
And in my JSON files those two fields can exist or not.
However, when they do not exist, I receive an error (e.g. when I test such a JSON file using avro-tools command line client):
Expected field name not found: author
I understand that as long as the field name exists in a JSON, it can be null, or a string value, but what I'm trying to express is something like "this JSON is valid if the those field names do not exist, OR if they exist and they are null or string".
Is this possible to express in an Avro schema? If so, how?
You can define the default attribute (here the string "undefined") so the field can be skipped, for example:
{
  "name": "first_name",
  "type": "string",
  "default": "undefined"
},
Also, all fields are mandatory in Avro. If you want a field to be optional, union its type with null.
Example:
{
  "name": "username",
  "type": ["null", "string"],
  "default": null
},
According to avro specification this is possible, using the default attribute.
See https://avro.apache.org/docs/1.8.2/spec.html
default: A default value for this field, used when reading instances that lack this field (optional). Permitted values depend on the field's schema type, according to the table below. Default values for union fields correspond to the first schema in the union.
In the example you gave, you do add the default attribute with value null, so this should work. However, support for this also depends on the library you use for reading the Avro message (there are libraries for C, C++, Python, Java, C#, Ruby, etc.). The library you use probably lacks this feature.
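What the spec's default rule amounts to can be sketched in plain Python (this is not an Avro implementation, just the default-filling behavior a conforming reader applies to fields absent from the input):

```python
schema_fields = [
    {"name": "author", "type": ["null", "string"], "default": None},
    {"name": "importance", "type": ["null", "string"], "default": None},
]

def apply_defaults(record, fields):
    # A reader that supports defaults fills in the declared default
    # for every field missing from the incoming record.
    out = dict(record)
    for f in fields:
        out.setdefault(f["name"], f.get("default"))
    return out

print(apply_defaults({"importance": "high"}, schema_fields))
# {'importance': 'high', 'author': None}
```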