How to insert JSON field values in PostgreSQL 14

I am trying to insert some data into my Postgres table. One of the columns holds JSON data, but every time I try to insert, it throws an error.
Below is my query:
INSERT INTO settings.tbl_settings
(sin_product_id,
vhr_sys_module_name,
vhr_grouping_name,
vhr_settings_sys_name,
vhr_label,
vhr_value,
arj_select_items,
txt_remarks,
vhr_widget_type,
sin_settings_category,
int_sys_action_id,
fk_created_user_id,
dtm_created)
VALUES
(1,
'XO_PURCHASE',
'XO Purchase',
'NEED_XO_PURCHASE_SUPPLIER_SIDE_POSTING',
'Need XO Purchase Supplier Side Posting',
'NO_POSTING',
'[{"strSysName":"NO_POSTING","strLabel":"No Posting"} ,{"strSysName":"POSTING","strLabel":"Posting"}]'::JSONB[],
'',
'SELECTBOX',
2,
0,
1,
TO_TIMESTAMP('27-08-2022 17.24.34', 'DD/MM/YYYY HH24:MI:SS')
);
It returns the following error:
ERROR: malformed array literal: "[{"strSysName":"NO_POSTING","strLabel":"No Posting"} ,{"strSysName":"POSTING","strLabel":"Posting"}]"
LINE 27: '[{"strSysName":"NO_POSTING","st...
^
DETAIL: "[" must introduce explicitly-specified array dimensions.
SQL state: 22P02
Character: 1024

Your mistake is that when you perform the insert, you cast the string to a jsonb array (::JSONB[]), but you just need ::JSONB, since the type of your arj_select_items column is most likely defined as jsonb.
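For example, a minimal sketch assuming a table with a single jsonb column (the table name here is illustrative):
-- demo table, assuming the column is declared as jsonb
CREATE TABLE demo (arj_select_items jsonb);
INSERT INTO demo (arj_select_items)
VALUES ('[{"strSysName":"NO_POSTING","strLabel":"No Posting"},{"strSysName":"POSTING","strLabel":"Posting"}]'::JSONB);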
Demo in dbfiddle

Related

Converting table with Varchar columns to custom JSON in Snowflake

I have a table with the following data:
START_DATE    NAME    ID
01/18/2022    JOHN    10
01/19/2022    ADAM    20
I am trying to convert this to JSON in a custom format like below:
{
  "labels": {
    "name": "JOHN",
    "id": [10]
  },
  "values": {
    "startDate": "01/18/2022"
  }
}
The PARSE_JSON approach of
SELECT parse_json('{"values": {startDate: A.STARTDATE}}')
FROM TABLE_A A;
resulted in this error:
Error parsing JSON: unknown keyword "A", pos 25
OBJECT_CONSTRUCT uses the column name as the key and the column value as the value.
Please advise how to have custom field names in JSON conversion in Snowflake.
I renamed the object names to match the given data and changed ID to an array:
create table test1(START_DATE date, NAME string, ID number);
insert into test1(START_DATE, NAME, ID) values('01/18/2022', 'JOHN', 10);
insert into test1(START_DATE, NAME, ID) values('01/19/2022', 'ADAM', 20);
Select
    OBJECT_CONSTRUCT('ID', TO_ARRAY(id), 'NAME', name) as label_obj,
    OBJECT_CONSTRUCT('start_date', START_DATE::string) as start_dt_obj,
    object_insert(object_construct('labels', label_obj), 'values', start_dt_obj) as final_json
from Test1;
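Against the sample rows, final_json for the first row should come out roughly like this (my illustration of the expected shape, not verified output; key order may differ):
{"labels": {"ID": [10], "NAME": "JOHN"}, "values": {"start_date": "2022-01-18"}}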

Convert string -> binary -> integer in snowflake

I am attempting to convert a string to its binary representation and ultimately an integer in Snowflake, but am unable to get the desired result with either TO_BINARY or ::BINARY(<n>). For example, I am able to do what I want in Postgres with the following code:
SELECT ('x' || 'abcd')::BIT(32)
which returns 10101011110011010000000000000000 as desired.
I want to get the same result in Snowflake, but can't. I have tried both of the following, but simply get the same string returned (e.g. ABCD returned as output)
SELECT TO_BINARY('abcd')
...
SELECT 'abcd'::BINARY(32)
"I am attempting to convert a string to its binary representation and ultimately an integer in Snowflake"
If you need to convert a hex value to integer in Snowflake, it's easy:
select TO_NUMBER( 'abcd', 'XXXX' );
43981
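As an aside (my note, not part of the original answer): TO_BINARY('abcd') appears to return the input unchanged because its default input format is HEX, so 'abcd' is parsed as the two bytes 0xAB 0xCD, which are then displayed as hex again:
select TO_BINARY('abcd');
ABCD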
If you need to convert a hex value to bits, Snowflake does not have a native function but it's possible to write a JavaScript function:
create or replace function convert_to_bits( decimal_value float, size float )
returns string
language javascript
as $$
  // binary digits of the value; note toString(2) drops leading zero bits
  var res = parseInt(DECIMAL_VALUE).toString(2);
  // pad with zeros on the right up to SIZE bits, matching the Postgres BIT(n) example above
  return res + "0".repeat( SIZE - res.length );
$$;
select convert_to_bits(TO_NUMBER( 'abcd', 'XXXX' ), 32) ;
10101011110011010000000000000000
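The same function works for other widths, e.g. (my addition, using an 8-bit width):
select convert_to_bits(TO_NUMBER('ff', 'XX'), 8);
11111111
One caveat: toString(2) drops leading zero bits, so a hex value whose top bits are zero (e.g. '2bcd') will not line up with the Postgres BIT(n) result.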

sql udf sum varying inputs [duplicate]

I am trying to create a MySQL function IS_IN_ENUM('value', 'val1', 'val2', 'val3') which returns true if 'value' is in ('val1', 'val2', 'val3'). I know I can do SELECT 'value' IN ('val1', 'val2', 'val3'), but that's less interesting because I just want to learn how to create such functions.
As an example, consider the following ADD function:
CREATE FUNCTION my_add (
    a DOUBLE,
    b DOUBLE
)
RETURNS DOUBLE
BEGIN
    IF a IS NULL THEN
        SET a = 0;
    END IF;
    IF b IS NULL THEN
        SET b = 0;
    END IF;
    RETURN (a + b);
END;
If I do SELECT my_add(1, 1), I get 2 (wow!).
How can I improve this function to be able to call:
SELECT my_add(1, 1); -- 2
SELECT my_add(1, 1, 1); -- 3
SELECT my_add(1, 1, 1, 1); -- 4
SELECT my_add(1, 1, 1, 1, 1, 1, .....); -- n
The function example you show is a Stored Function, not a UDF. Stored Functions in MySQL don't support a variable number of arguments, as @Enzino answered.
MySQL UDFs are written in C or C++, compiled into dynamic object files, and then linked with the MySQL server with a different syntax of CREATE FUNCTION.
See http://dev.mysql.com/doc/refman/5.5/en/adding-udf.html for details of writing UDFs. But I don't know if you want to get into writing C/C++ code to do this.
MySQL UDFs do support a variable number of arguments. In fact, all UDFs implicitly accept any number of arguments, and it's up to you as the programmer to determine whether the number and datatypes of the arguments given are valid for your function.
Processing function arguments in UDFs is documented in http://dev.mysql.com/doc/refman/5.5/en/udf-arguments.html
"I am trying to create a MySQL function IS_IN_ENUM('value', 'val1', 'val2', 'val3') which returns true if 'value' is in ('val1', 'val2', 'val3')."
For this you can use the native function FIELD:
http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_field
IS_IN_ENUM means FIELD != 0.
Check also FIND_IN_SET
http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_find-in-set
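For example, a quick sketch of both functions:
SELECT FIELD('val2', 'val1', 'val2', 'val3') != 0;   -- 1: 'val2' is in the list
SELECT FIELD('value', 'val1', 'val2', 'val3') != 0;  -- 0: not in the list
SELECT FIND_IN_SET('val2', 'val1,val2,val3') != 0;   -- 1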
Stored functions do not support variable number of parameters.
Now, if you really want to implement such a native function yourself in the MySQL server code, look for subclasses of Create_native_func in sql/item_create.cc
Old question, but you don't need to create IS_IN_ENUM since that functionality is already built in. Simply do: select true from table where value IN (1,2,3,4);

Unable to work with a unix timestamp column stored as string data type

I have a Hive table into which I load JSON data. There are two values in my JSON, both with data type string. If I declare them as bigint instead, then a select on this table gives the error below:
Failed with exception java.io.IOException:org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Current token (VALUE_STRING) not numeric, can not use numeric value accessors
at [Source: java.io.ByteArrayInputStream#3b6c740b; line: 1, column: 21]
If I change them to string, then it works OK.
Now, because these columns are strings, I am not able to use the from_unixtime method on them.
If I try to alter these columns' data types from string to bigint, I get the error below:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions : uploadtimestamp
Below is my create table statement:
create table ABC
(
uploadTimeStamp bigint
,PDID string
,data array
<
struct
<
Data:struct
<
unit:string
,value:string
,heading:string
,loc:string
,loc1:string
,loc2:string
,loc3:string
,speed:string
,xvalue:string
,yvalue:string
,zvalue:string
>
,Event:string
,PDID:string
,`Timestamp`:string
,Timezone:string
,Version:string
,pii:struct<dummy:string>
>
>
)
row format serde 'org.apache.hive.hcatalog.data.JsonSerDe'
stored as textfile;
My JSON:
{"uploadTimeStamp":"1488793268598","PDID":"123","data":[{"Data":{"unit":"rpm","value":"100"},"EventID":"E1","PDID":"123","Timestamp":1488793268598,"Timezone":330,"Version":"1.0","pii":{}},{"Data":{"heading":"N","loc":"false","loc1":"16.032425","loc2":"80.770587","loc3":"false","speed":"10"},"EventID":"Location","PDID":"skga06031430gedvcl1pdid2367","Timestamp":1488793268598,"Timezone":330,"Version":"1.1","pii":{}},{"Data":{"xvalue":"1.1","yvalue":"1.2","zvalue":"2.2"},"EventID":"AccelerometerInfo","PDID":"skga06031430gedvcl1pdid2367","Timestamp":1488793268598,"Timezone":330,"Version":"1.0","pii":{}},{"EventID":"FuelLevel","Data":{"value":"50","unit":"percentage"},"Version":"1.0","Timestamp":1488793268598,"PDID":"skga06031430gedvcl1pdid2367","Timezone":330},{"Data":{"unit":"kmph","value":"70"},"EventID":"VehicleSpeed","PDID":"skga06031430gedvcl1pdid2367","Timestamp":1488793268598,"Timezone":330,"Version":"1.0","pii":{}}]}
Is there any way I can convert this string unix timestamp to standard time, or work with bigint for these columns?
If you are talking about Timestamp and Timezone, then you can define them as int/bigint types.
If you look at their definition you'll see that there are no quotation marks (") around the values, therefore they are numeric types within the JSON doc:
"Timestamp":1488793268598,"Timezone":330
create external table myjson
(
uploadTimeStamp string
,PDID string
,data array
<
struct
<
Data:struct
<
unit:string
,value:string
,heading:string
,loc3:string
,loc:string
,loc1:string
,loc4:string
,speed:string
,x:string
,y:string
,z:string
>
,EventID:string
,PDID:string
,`Timestamp`:bigint
,Timezone:smallint
,Version:string
,pii:struct<dummy:string>
>
>
)
row format serde 'org.apache.hive.hcatalog.data.JsonSerDe'
stored as textfile
location '/tmp/myjson'
;
myjson.uploadtimestamp: 1486631318873
myjson.pdid:            123
myjson.data:            [{"data":{"unit":"rpm","value":"0","heading":null,"loc3":null,"loc":null,"loc1":null,"loc4":null,"speed":null,"x":null,"y":null,"z":null},"eventid":"E1","pdid":"123","timestamp":1486631318873,"timezone":330,"version":"1.0","pii":{"dummy":null}},{"data":{"unit":null,"value":null,"heading":"N","loc3":"false","loc":"14.022425","loc1":"78.760587","loc4":"false","speed":"10","x":null,"y":null,"z":null},"eventid":"E2","pdid":"123","timestamp":1486631318873,"timezone":330,"version":"1.1","pii":{"dummy":null}},{"data":{"unit":null,"value":null,"heading":null,"loc3":null,"loc":null,"loc1":null,"loc4":null,"speed":null,"x":"1.1","y":"1.2","z":"2.2"},"eventid":"E3","pdid":"123","timestamp":1486631318873,"timezone":330,"version":"1.0","pii":{"dummy":null}},{"data":{"unit":"percentage","value":"50","heading":null,"loc3":null,"loc":null,"loc1":null,"loc4":null,"speed":null,"x":null,"y":null,"z":null},"eventid":"E4","pdid":"123","timestamp":1486631318873,"timezone":330,"version":"1.0","pii":null},{"data":{"unit":"kmph","value":"70","heading":null,"loc3":null,"loc":null,"loc1":null,"loc4":null,"speed":null,"x":null,"y":null,"z":null},"eventid":"E5","pdid":"123","timestamp":1486631318873,"timezone":330,"version":"1.0","pii":{"dummy":null}}]
Even if you have defined Timestamp as a string you can still cast it to a bigint before using it in a function that requires a bigint.
cast (`Timestamp` as bigint)
hive> with t as (select '0' as `timestamp`) select from_unixtime(`timestamp`) from t;
FAILED: SemanticException [Error 10014]: Line 1:45 Wrong arguments 'timestamp': No matching method for class org.apache.hadoop.hive.ql.udf.UDFFromUnixTime with (string). Possible choices: FUNC(bigint)  FUNC(bigint, string)  FUNC(int)  FUNC(int, string)
hive> with t as (select '0' as `timestamp`) select from_unixtime(cast(`timestamp` as bigint)) from t;
OK
1970-01-01 00:00:00
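One caveat worth adding (my observation from the sample JSON, not part of the original answer): the Timestamp values there (e.g. 1488793268598) look like milliseconds, while from_unixtime expects seconds, so divide by 1000 before casting:
hive> select from_unixtime(cast(1488793268598/1000 as bigint));
OK
2017-03-06 09:41:08
(from_unixtime formats in the server's time zone; the value shown is for UTC.)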

Cassandra + data insertion + set<FROZEN<map<text,text>>>

My column family structure is:
create table mykeyspc."test" (
id int PRIMARY KEY,
val set<frozen<map<text,text>>>
);
When I insert data through the CQL shell:
insert into "test" JSON '{"id":1,"val":{"ab","bc"}}';
Error: INVALIDREQUEST: code=2200 [Invalid query] message="Could not decode JSON string as a
map: org.codehaus.jackson.JsonParseException: Unexpected character ('{' (code 123))"
or
insert into "test" (id,val) values (1,{{'ab','bc'},{'sdf','name'}});
Error: INVALIDREQUEST: code=2200 [Invalid query] message="Invalid set literal for
val: value {'ab', 'bc'} is not of type frozen<map<text, text>>"
In your second example, try separating the map key/values with colons : instead of commas.
aploetz@cqlsh:stackoverflow> INSERT INTO mapOfSet (id,val)
VALUES (1,{{'ab':'bc'},{'sdf':'name'}});
aploetz@cqlsh:stackoverflow> SELECT * FROM mapofset WHERE id=1;
id | val
----+---------------------------------
1 | {{'ab': 'bc'}, {'sdf': 'name'}}
(1 rows)
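The first attempt (INSERT ... JSON) fails for a similar reason: {"ab","bc"} is not valid JSON. As a sketch (this was not covered in the original answer), a set<frozen<map<text,text>>> is written in the JSON syntax as a list of objects:
insert into "test" JSON '{"id": 1, "val": [{"ab": "bc"}, {"sdf": "name"}]}';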