Serialize an R list to a JSON string

How can I get a JSON string from an R list? I have the following code:
library(jsonlite)
x <- list(
  "a" = "test",
  "b" = 1,
  "c" = 2
)
serial_data <- toJSON(x)
but it returns
{"a":["test"],"b":[1],"c":[2]}
I need this string instead:
{"a":"test","b":1,"c":2}

Related

Update column value of type JSON string with another JSON string value from different column

I have a PySpark dataframe where columns have JSON string values like this:
col1                       col2
{"d1":"2343","v1":"3434"}  {"id1":"123"}
{"d1":"2344","v1":"3435"}  {"id1":"124"}
I want to update "col1" JSON string values with "col2" JSON string values to get this:
col1                                   col2
{"d1":"2343","v1":"3434","id1":"123"}  {"id1":"123"}
{"d1":"2344","v1":"3435","id1":"124"}  {"id1":"124"}
How to do this in PySpark?
Since you're dealing with string-type columns, you can remove the trailing } from "col1", remove the leading { from "col2", and join the two strings with a comma as the delimiter.
Input:
from pyspark.sql import functions as F
df = spark.createDataFrame(
    [('{"d1":"2343","v1":"3434"}', '{"id1":"123"}'),
     ('{"d1":"2344","v1":"3435"}', '{"id1":"124"}')],
    ["col1", "col2"])
Script:
df = df.withColumn(
    "col1",
    F.concat_ws(
        ",",
        F.regexp_replace("col1", r"}$", ""),   # strip the trailing }
        F.regexp_replace("col2", r"^\{", "")   # strip the leading {
    )
)
df.show(truncate=0)
# +-------------------------------------+-------------+
# |col1 |col2 |
# +-------------------------------------+-------------+
# |{"d1":"2343","v1":"3434","id1":"123"}|{"id1":"123"}|
# |{"d1":"2344","v1":"3435","id1":"124"}|{"id1":"124"}|
# +-------------------------------------+-------------+
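If the JSON values could ever contain nested braces, the regex surgery becomes fragile. A schema-aware alternative, sketched under the assumption that the objects stay flat with string values (as in the sample) and that col1 and col2 share no keys: parse both columns into maps, merge them, and re-serialize.
from pyspark.sql import functions as F
from pyspark.sql.types import MapType, StringType

kv = MapType(StringType(), StringType())
df = df.withColumn(
    "col1",
    F.to_json(F.map_concat(
        F.from_json("col1", kv),   # parse each JSON string into a map
        F.from_json("col2", kv)
    ))
)
# Note: map_concat raises an error on duplicate keys under Spark's
# default mapKeyDedupPolicy, so this assumes disjoint key sets.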

Conditionally select values from an array in a nested JSON string in a Mysql database

I am struggling to conditionally extract values from a nested JSON string in a MySQL table.
{"users": [{"userId": "10000001", "userToken": "11000000000001", "userTokenValidity": 1}, {"userId": "10000002", "userToken": "12000000000001", "userTokenValidity": 1}, {"userId": "10000003", "userToken": "13000000000001", "userTokenValidity": 0}]}
I want to select a userToken but only if the userTokenValidity is 1. So in this example only "11000000000001" and "12000000000001" should get selected.
This will extract the whole array ... how should I filter the result?
SELECT t.my_column->>"$.users" FROM my_table t;
SELECT CAST(value AS CHAR) output
FROM test
CROSS JOIN JSON_TABLE(test.data, '$.users[*]' COLUMNS (value JSON PATH '$')) jsontable
WHERE value->>'$.userTokenValidity' = 1
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=4876ec22a9df4f6d2e75a476a02a2615
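If you want only the tokens rather than the whole matching objects, a variant of the same JSON_TABLE approach can expose typed columns directly. This sketch reuses the test table and data column from the answer above and assumes MySQL 8.0+:
SELECT jt.userToken
FROM test
CROSS JOIN JSON_TABLE(
  test.data, '$.users[*]'
  COLUMNS (
    userToken         VARCHAR(32) PATH '$.userToken',
    userTokenValidity INT         PATH '$.userTokenValidity'
  )
) jt
WHERE jt.userTokenValidity = 1;
-- returns 11000000000001 and 12000000000001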

Add a new key/value pair into a nested array inside a PostgreSQL JSON column

Using PostgreSQL 13.4 I have a table with a JSON column in a structure like the following sample:
{
"username": "jsmith",
"location": "United States",
"posts": [
{
"id":"1",
"title":"Welcome",
"newKey":true <----------- insert new key/value pair here
},
{
"id":"4",
"title":"What started it all",
"newKey":true <----------- insert new key/value pair here
}
]
}
For changing keys on the first level, I used a simple query like this:
UPDATE sample_table_json
SET json = json::jsonb || '{"active": true}';
But this doesn't work for nested objects and objects in an array like in the sample.
How would I insert a key/value pair into a JSON column with nested objects in an array?
You have to use the jsonb_set function and specify the right path; see the manual.
For a single json update:
UPDATE sample_table_json
SET json = jsonb_set( json::jsonb
                    , '{posts,0,active}'
                    , 'true'
                    , true )
For a (very) limited set of json updates:
UPDATE sample_table_json
SET json = jsonb_set(jsonb_set( json::jsonb
                              , '{posts,0,active}'
                              , 'true'
                              , true )
                    , '{posts,1,active}'
                    , 'true'
                    , true )
For a larger set of json updates of the same json data, you can create an "aggregate version" of the jsonb_set function:
-- state function: on the first call the accumulated state x is NULL,
-- so fall back to the incoming json y
CREATE OR REPLACE FUNCTION jsonb_set(x jsonb, y jsonb, p text[], e jsonb, b boolean)
RETURNS jsonb LANGUAGE sql AS $$
SELECT jsonb_set(COALESCE(x, y), p, e, b) ; $$ ;
CREATE OR REPLACE AGGREGATE jsonb_set_agg(x jsonb, p text[], e jsonb, b boolean)
( STYPE = jsonb, SFUNC = jsonb_set) ;
and then use the new aggregate function jsonb_set_agg while iterating over a query result in which the path and val fields can be computed:
SELECT jsonb_set_agg('{"username": "jsmith","location": "United States","posts": [{"id":"1","title":"Welcome"},{"id":"4","title":"What started it all"}]}' :: jsonb
, l.path :: text[]
, to_jsonb(l.val)
, true)
FROM (VALUES ('{posts,0,active}', 'true'), ('{posts,1,active}', 'true')) AS l(path, val) -- this list could be the result of a subquery
This query can finally be used to update the data:
WITH list AS
(
SELECT id
, jsonb_set_agg(json :: jsonb
, l.path :: text[]
, to_jsonb(l.val)
, true) AS res
FROM sample_table_json
CROSS JOIN (VALUES ('{posts,0,active}', 'true'), ('{posts,1,active}', 'true')) AS l(path, val)
GROUP BY id
)
UPDATE sample_table_json AS t
SET json = l.res
FROM list AS l
WHERE t.id = l.id
See the test result in dbfiddle.
It became a bit complicated. Loop through the array, add the new key/value pair to each array element and re-aggregate the array, then rebuild the whole object.
with t(j) as
(
values ('{
"username": "jsmith",
"location": "United States",
"posts": [
{
"id":"1", "title":"Welcome", "newKey":true
},
{
"id":"4", "title":"What started it all", "newKey":true
}]
}'::jsonb)
)
select j ||
jsonb_build_object
(
'posts',
(select jsonb_agg(je||'{"active":true}') from jsonb_array_elements(j->'posts') je)
)
from t;
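Applied as an update to the sample table (assuming, as in the question, a table sample_table_json with a json column holding the object shown), the same rebuild looks like this:
UPDATE sample_table_json t
SET json = t.json::jsonb || jsonb_build_object(
  'posts',
  (SELECT jsonb_agg(je || '{"active": true}')   -- add the pair to each element
   FROM jsonb_array_elements(t.json::jsonb -> 'posts') je)
);
Rows whose posts array is missing or empty would end up with "posts": null here, so guard for that case if it can occur.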

Merging 2 arrays into an array with 2 columns

I want to merge 2 arrays in the following format.
array1 = [ "a" , "b" , "c"]
array2 = [ 1 , 2 , 3]
merged_array = [ {"a",1} , {"b",2} , {"c",3}]
The goal is to use this as the values of 2 columns and write it back to a Google Sheet.
Is my format correct, and if so, how should I merge the arrays as described above?
EDIT:
I decided to use this:
var output = [];
for (var a = 0; a < array1.length; a++)
  output.push([array1[a], array2[a]]);
How would this compare to the map function, performance-wise?
array1 = [ "a" , "b" , "c"]
array2 = [ 1 , 2 , 3]
merged_array = []
for index, value in enumerate(array1): merged_array.append({value,array2[index]})
print (merged_array)
-> [{'a', 1}, {'b', 2}, {'c', 3}]
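Note that {value, array2[index]} builds Python sets, which are unordered and have no JSON equivalent, so the {"a",1} notation from the question is not really a usable format. If the target is two spreadsheet columns, a 2D array (a list of pairs) is the shape to aim for, e.g. with zip:
merged_array = [list(pair) for pair in zip(array1, array2)]
print(merged_array)
# -> [['a', 1], ['b', 2], ['c', 3]]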
Merging two arrays into an array of arrays
function myFunk() {
let array1 = ["a", "b", "c"];
let array2 = [1, 2, 3];
let a = array1.map((e,i) => {return [e,array2[i]];})
Logger.log(JSON.stringify(a));
}
Execution log
4:17:09 PM Notice Execution started
4:17:08 PM Info [["a",1],["b",2],["c",3]]
Array.map()
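On the performance question from the edit: both the indexed for loop and Array.map() are O(n), and for sheet-sized arrays the difference is negligible; in Apps Script the dominant cost is usually the setValues() call, not the merge. A quick micro-benchmark sketch in plain JavaScript (the array size is chosen arbitrarily for illustration):
const N = 100000;
const array1 = Array.from({ length: N }, (_, i) => "v" + i);
const array2 = Array.from({ length: N }, (_, i) => i);

console.time("for loop");
const out1 = [];
for (let a = 0; a < array1.length; a++) out1.push([array1[a], array2[a]]);
console.timeEnd("for loop");

console.time("map");
const out2 = array1.map((e, i) => [e, array2[i]]);
console.timeEnd("map");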

How to match a top level array in json with specs2

In specs2 you can match an array for elements like this:
val json = """{"products":[{"name":"shirt","price":10, "ids":["1", "2", "3"]},{"name":"shoe","price":5}]}"""
def aProductWith(name: Matcher[JsonType], price: Matcher[JsonType]): Matcher[String] =
/("name").andHave(name) and /("price").andHave(price)
def haveProducts(products: Matcher[String]*): Matcher[String] =
/("products").andHave(allOf(products:_*))
json must haveProducts(
aProductWith(name = "shirt", price = 10) and /("ids").andHave(exactly("1", "2", "3")),
aProductWith(name = "shoe", price = 5)
)
(Example taken from here: http://etorreborre.github.io/specs2/guide/SPECS2-3.0/org.specs2.guide.Matchers.html)
How do I do the same thing, i.e. match the contents of products, if products is the root element in the JSON? What should haveProducts look like?
val json = """[{"name":"shirt","price":10, "ids":["1", "2", "3"]},{"name":"shoe","price":5}]"""
You can replace /("products").andHave(allOf(products:_*)) with have(allOf(products:_*)) like this:
val json = """[{"name":"shirt","price":10, "ids":["1", "2", "3"]},{"name":"shoe","price":5}]"""
def aProductWith(name: Matcher[JsonType], price: Matcher[JsonType]): Matcher[String] =
/("name").andHave(name) and /("price").andHave(price)
def haveProducts(products: Matcher[String]*): Matcher[String] = have(allOf(products:_*))
json must haveProducts(
aProductWith(name = "shirt", price = 10) and /("ids").andHave(exactly("1", "2", "3")),
aProductWith(name = "shoe", price = 5)
)