I want to convert the values of a column into an array, but I don't know how. Can anyone help?
Below is the output my query currently produces:
[{"entity":"Job","value":"400072 "},{"entity":"Job","value":"400087"}]
Expected result:
[{"entity":"Job","value":[400072, 400087]}]
The code I tried:
SELECT (
SELECT ose.TaggedEntity AS 'entity', ose.TaggedEntityId AS 'value'
FROM #OldSharedEntity AS ose
WHERE ose.TaggedEntityId NOT IN (
SELECT nse.TaggedEntityId
FROM #NewSharedEntity AS nse
)
FOR JSON PATH, INCLUDE_NULL_VALUES
) AS json
If your table's name is #yourtable, you can try this:
SELECT t.entity,
       (SELECT JSON_QUERY('[' + STRING_AGG(t2.value, ',') + ']')
        FROM #yourtable t2
        WHERE t2.entity = t.entity) AS value
FROM #yourtable t
GROUP BY t.entity
FOR JSON PATH
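As a sanity check of the aggregation idea in a different engine: SQLite's json_group_array stands in for the STRING_AGG + JSON_QUERY combination above. This is only a sketch, run from Python's stdlib sqlite3, and the table and column names are invented:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE old_shared_entity (entity TEXT, value INTEGER);
    INSERT INTO old_shared_entity VALUES ('Job', 400072), ('Job', 400087);
""")

# Inner query collects the values per entity into a JSON array;
# json() marks the text as JSON so it is embedded as an array, not a string.
row = conn.execute("""
    SELECT json_group_array(json_object('entity', entity, 'value', json(vals)))
    FROM (SELECT entity, json_group_array(value) AS vals
          FROM old_shared_entity
          GROUP BY entity)
""").fetchone()
result = json.loads(row[0])
```

The per-group aggregation into an array, then a second aggregation over the groups, is the same two-step shape as the T-SQL answer.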
Related
I have a PostgreSQL query that uses a CTE, and the SELECT within the CTE uses json_agg() to aggregate data as JSON objects. Is there a way to query the results of the CTE for a specific object in the array, based on the value of a field of the objects?
For example, let's say the CTE creates a temporary table named results. The values from json_agg() are available in a field called owners, and each owner object has a field called name. I want to SELECT * FROM results WHERE owner.name = 'John Smith'. I am not sure how to write the WHERE clause below so that the name field of each object in the owners array is checked for the value.
WITH results AS (
-- some other fields here
(SELECT json_agg(owners)
FROM (
SELECT id, name, telephone, email
FROM owner
) owners
) as owners
)
SELECT *
FROM results
WHERE owners->>'name' == 'John Smith'
For that query you can use the jsonpath language, after converting your json data to jsonb (see the manual here and here):
WITH results AS (
-- some other fields here
(SELECT json_agg(owners)
FROM (
SELECT id, name, telephone, email
FROM owner
) owners
) as owners
)
SELECT *
FROM results
WHERE jsonb_path_exists(owners::jsonb, '$[*] ? (@.name == "John Smith")')
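For comparison, the same "does any object in the array have this name?" test can be sketched with SQLite's json_each from Python's stdlib sqlite3. The table and data here are made up; this is the EXISTS-style equivalent of the jsonpath filter, which is handy when jsonpath is not available:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER, owners TEXT)")
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [
        (1, json.dumps([{"name": "John Smith"}, {"name": "Jane Doe"}])),
        (2, json.dumps([{"name": "Alice"}])),
    ],
)

# json_each expands the array; json_extract pulls the name out of each object.
matches = conn.execute("""
    SELECT id
    FROM results r
    WHERE EXISTS (SELECT 1
                  FROM json_each(r.owners) o
                  WHERE json_extract(o.value, '$.name') = 'John Smith')
""").fetchall()
```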
How to use DISTINCT with JSON_ARRAYAGG?
Let's consider the below query as an example.
SELECT
staff.company,
JSON_ARRAYAGG(
JSON_OBJECT(
'uuid', UuidFromBin(staff.uuid),
'username', staff.username,
'name', staff.name,
'surname', staff.surname
)
)
FROM events_staff
JOIN staff ON staff.id = staff_id
LEFT JOIN skills s ON s.id = events_staff.skill_id
GROUP BY staff.company
Now, how can I use DISTINCT with JSON_ARRAYAGG in this query so that the JSON objects are distinct? Better still if DISTINCT can be applied based on a key such as uuid.
After googling for half an hour, I found the options below, but was not able to apply them to the above query.
A JSON_ARRAYAGG DISTINCT returns a JSON array composed of all the
different (unique) values for string-expr in the selected rows:
JSON_ARRAYAGG(DISTINCT col1). The NULL string-expr is not included in
the JSON array. JSON_ARRAYAGG(DISTINCT BY(col2) col1) returns a JSON
array containing only those col1 field values in records where the
col2 values are distinct (unique). Note however that the distinct col2
values may include a single NULL as a distinct value.
I have come up with a workaround for this issue. First, note that JSON_ARRAYAGG(DISTINCT JSON_OBJECT()) simply will not work.
So the workaround is CONCAT('[', GROUP_CONCAT(DISTINCT JSON_OBJECT('key', value)), ']'); this will produce something like [{"key": <value1>},{"key": <value2>}, ...], with duplicates removed.
Note: you might need to cast the result as JSON at the end, like this: CAST(CONCAT('[', GROUP_CONCAT(DISTINCT JSON_OBJECT('key', value)), ']') AS JSON);
I have some JSON in an oracle table:
{"orders":[{"timestamp": "2016-08-10T06:15:00.4"}]}
And using JSON_TABLE to select/create a view:
SELECT jt.*
FROM table1
JSON_TABLE (table1.json_data, '$.orders[*]' ERROR ON ERROR
COLUMNS ( StartTime TIMESTAMP PATH '$.timestamp')) AS jt;
However, no matter what format I put the date/time in the JSON, I always get:
ORA-01830: date format picture ends before converting entire input
string
Is there a way to format the JSON, or is there something I am missing? If I pass in a date like "2016-08-10", then it will successfully create a DATE column.
When running your query on my Oracle 19.6.0.0.0 database, I do not have any problem parsing your example (see below). If you are on an older version of Oracle, it may help to apply the latest patch set. You also might have to parse it out as a string, then use TO_DATE based on the format of the date you are receiving.
SQL> SELECT jt.*
2 FROM (SELECT '{"orders":[{"timestamp": "2016-08-10T06:15:00.4"}]}' AS json_data FROM DUAL) table1,
3 JSON_TABLE (table1.json_data,
4 '$.orders[*]'
5 ERROR ON ERROR
6 COLUMNS (StartTime TIMESTAMP PATH '$.timestamp')) AS jt;
STARTTIME
__________________________________
10-AUG-16 06.15.00.400000000 AM
In Oracle 18c, your query also works (if you add in a CROSS JOIN, a CROSS APPLY, or a comma for a legacy cross join after table1, and change $.timeStamp to lower case).
However, if you can't get it working in Oracle 12c, then you can get the string value and use TO_TIMESTAMP to convert it:
SELECT StartTime,
TO_TIMESTAMP( StartTime_Str, 'YYYY-MM-DD"T"HH24:MI:SS.FF9' )
AS StartTime_FromStr
FROM table1
CROSS JOIN
JSON_TABLE(
table1.json_data,
'$.orders[*]'
ERROR ON ERROR
COLUMNS (
StartTime TIMESTAMP PATH '$.timestamp',
StartTime_Str VARCHAR2(30) PATH '$.timestamp'
)
) jt;
So, for your sample data:
CREATE TABLE table1 ( json_data VARCHAR2(4000) CHECK ( json_data IS JSON ) );
INSERT INTO table1 ( json_data )
VALUES ( '{"orders":[{"timestamp": "2016-08-10T06:15:00.4"}]}' );
This outputs:
STARTTIME | STARTTIME_FROMSTR
:------------------------ | :---------------------------
10-AUG-16 06.15.00.400000 | 10-AUG-16 06.15.00.400000000
db<>fiddle here
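The TO_TIMESTAMP fallback boils down to two steps: extract the timestamp from the JSON as plain text, then parse it with an explicit format mask. A Python sketch of those same two steps, using the sample document from the question (the strptime format is only an approximate analogue of Oracle's 'YYYY-MM-DD"T"HH24:MI:SS.FF9' mask):

```python
import json
from datetime import datetime

doc = '{"orders":[{"timestamp": "2016-08-10T06:15:00.4"}]}'

# Step 1: pull the timestamp out as an ordinary string.
raw = json.loads(doc)["orders"][0]["timestamp"]

# Step 2: parse it with an explicit format; %f accepts the short ".4" fraction.
parsed = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S.%f")
# parsed is 2016-08-10 06:15:00.400000
```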
I want to find multiple rows where a JSON array contains a specific value or values. Sometimes all of the items will need to match (ANDs), sometimes only some (ORs), and sometimes a combination of both (ANDs and ORs).
This is in Microsoft SQL Server 2017.
I've tried using an AS alias in the SELECT, but the alias created for the subquery was not recognised later on in the query.
The example below works; it just seems inefficient and has code duplication.
How would I specify SELECT value FROM OPENJSON(JsonData, '$.categories') only once? Or perhaps there is some other way to do this?
DECLARE #TestTable TABLE
(
Id int,
JsonData nvarchar(4000)
);
INSERT INTO #TestTable
VALUES
(1,'{"categories":["one","two"]}'),
(2,'{"categories":["one"]}'),
(3,'{"categories":["two"]}'),
(4,'{"categories":["one","two","three"]}');
SELECT [Id]
FROM #TestTable
WHERE ISJSON(JsonData) = 1
-- These two lines are the offending parts of code
AND 'one' in (SELECT VALUE FROM OPENJSON(JsonData, '$.categories'))
AND 'two' in (SELECT VALUE FROM OPENJSON(JsonData, '$.categories'));
The table format cannot change, though I can add computed columns - if need be.
Well, I'm not sure if this helps you...
It might help to transform the nested array into a derived table and use it as a CTE. Check this out:
DECLARE #TestTable TABLE
(
Id int,
JsonData nvarchar(4000)
);
INSERT INTO #TestTable
VALUES
(1,'{"categories":["one","two"]}'),
(2,'{"categories":["one"]}'),
(3,'{"categories":["two"]}'),
(4,'{"categories":["one","two","three"]}');
--This is the query
WITH JsonAsTable AS
(
SELECT Id
,JsonData
,cat.*
FROM #TestTable tt
CROSS APPLY OPENJSON(tt.JsonData,'$.categories') cat
)
SELECT *
FROM JsonAsTable
The approach is very close to the query you formed yourself. The result is a table with one line per array entry. The former Id is repeated as a grouping key, key is the ordinal position within the array, and value is one of the words you are searching for.
In your query you can use JsonAsTable like you'd use any other table in this place.
But - instead of the repeated FROM OPENJSON queries - you will need repeated EXISTS() predicates...
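As a cross-check in a different engine: SQLite's json_each plays the same role as OPENJSON, and the repeated-EXISTS idea looks like this (a sketch run from Python's stdlib sqlite3, with the sample data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_table (id INTEGER, json_data TEXT);
    INSERT INTO test_table VALUES
        (1, '{"categories":["one","two"]}'),
        (2, '{"categories":["one"]}'),
        (3, '{"categories":["two"]}'),
        (4, '{"categories":["one","two","three"]}');
""")

# "contains 'one' AND 'two'": one EXISTS predicate per required value.
# Swap AND for OR (or mix them) to get the OR / combined cases.
ids = [r[0] for r in conn.execute("""
    SELECT id
    FROM test_table t
    WHERE EXISTS (SELECT 1 FROM json_each(t.json_data, '$.categories')
                  WHERE value = 'one')
      AND EXISTS (SELECT 1 FROM json_each(t.json_data, '$.categories')
                  WHERE value = 'two')
""").fetchall()]
```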
A hacky solution might be this:
SELECT Id
,JsonData
,REPLACE(REPLACE(REPLACE(JsonData,'{"categories":[','",'),']}',',"'),'","',',')
FROM #TestTable
This will return all nested array values in one string, separated by commas. You can query that string using a LIKE pattern... You could even expose it as a computed column, though...
I am trying to export a DB2 SELECT with headers, but without any success. My actual code is:
db2 "EXPORT TO /tmp/result5.csv OF DEL MODIFIED BY NOCHARDEL
SELECT 1 as id, 'DEVICE_ID', 'USER_ID' from sysibm.sysdummy1
UNION ALL (SELECT 2 as id, DEVICE_ID, USER_ID FROM MOB_DEVICES) ORDER BY id"
which is not working (I suspect because USER_ID is an INTEGER). When I change it to:
db2 "EXPORT TO /tmp/result5.csv OF DEL MODIFIED BY NOCHARDEL
SELECT 1 as id, 'DEVICE_ID', 'PUSH_ID' from sysibm.sysdummy1
UNION ALL (SELECT 2 as id, DEVICE_ID, PUSH_ID FROM MOB_DEVICES) ORDER BY id"
It works; DEVICE_ID and PUSH_ID are both VARCHAR.
Any suggestions on how to solve this? Thanks for any advice.
DB2 will not export a CSV file with the headers, because the headers would be included as data. Normally, a CSV file is for storage, not viewing. If you want to view a file with its headers, you have the following options:
Export to IXF file, but this file is not a flat file. You will need a spreadsheet to view it.
Export to a CSV file and include the headers by:
Select the column names from the catalog, and then perform an extra step to prepend them to the file. You can use the describe command or perform a select on syscat.columns for this purpose, but this process is manual.
Perform a select union, in one part the data and in the other part the headers.
Perform a select and take the output to a file. Do not use export.
db2 "select * from myTable" > myTable.txt
Ignoring the EXPORT, and looking exclusively at the problematic UNION ALL query:
DB2 will try to conform the data of the mismatched data types to the numeric data type; in this scenario, to INTEGER. Because the literal string value 'USER_ID' is not a valid representation of a numeric value, it cannot be cast to an INTEGER value.
However, one can explicitly cast in the opposite direction [from INTEGER into string] to get the desired effect: converting the INTEGER values from the column into VARCHAR values. Explicit casting ensures the data types of the corresponding columns of the UNION are compatible, by forcing the values from the INTEGER column to match the data type of the literal/constant character-string value 'USER_ID':
with
mob_devices (DEVICE_ID, USER_ID, PUSH_ID) as
( values( varchar('dev', 1000 ), int( 1 ), varchar('pull', 1000) ) )
( SELECT 1 as id, 'DEVICE_ID', 'USER_ID'
from sysibm.sysdummy1
)
UNION ALL
( SELECT 2 as id, DEVICE_ID , cast( USER_ID as varchar(1000) )
FROM MOB_DEVICES
)
ORDER BY id
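A minimal sketch of the header-row UNION with the explicit cast, run against SQLite from Python's stdlib sqlite3. SQLite's dynamic typing would not raise DB2's type error in the first place, but the CAST shows the shape of the fix; the table is reduced to two columns for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mob_devices (device_id TEXT, user_id INTEGER);
    INSERT INTO mob_devices VALUES ('dev', 1);
""")

# Row 1 carries the header strings; row 2 carries the data, with the
# INTEGER column cast to text so both branches of the UNION agree.
rows = conn.execute("""
    SELECT 1 AS id, 'DEVICE_ID' AS c1, 'USER_ID' AS c2
    UNION ALL
    SELECT 2, device_id, CAST(user_id AS TEXT)
    FROM mob_devices
    ORDER BY id
""").fetchall()
```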