Store JSON in multiple columns and concatenate them to query them - json

I have a table with an extended column that is 32,768 bytes in size, so the database uses that space whether we store 1 byte or all 32,768.
I store JSON in this column.
I need to reduce the space this column takes.
Can I store the json in multiple columns and then concatenate the columns to work with the complete JSON?
For example
column has data:
'{"REC_TYPE_IND":"1","ID":"999999","2nd ID":"1111","location":"0003","BEGIN_DT":"20000101","END_DT":"20991231"}'
I want to split it out like
column1:
'{"REC_TYPE_IND":"1","ID":"999999","2nd '
column2:
'ID":"1111","location":"0003","BEGIN_DT":"20000101","END_DT":"20991231"}'
How then do I use built-in functions like json_value(column1 || column2, 'location') to get a value?
The error I get when trying the above is:
ORA-00932: inconsistent datatypes: expected - got CHAR
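As a sanity check on the approach itself (separate from the Oracle-side datatype error), splitting a JSON string at an arbitrary character position and concatenating the pieces does restore the original, parseable text. A quick Python sketch mirroring the split shown above (the split position 39 matches the example columns):

```python
import json

# Original JSON string from the question
full = '{"REC_TYPE_IND":"1","ID":"999999","2nd ID":"1111","location":"0003","BEGIN_DT":"20000101","END_DT":"20991231"}'

# Split at an arbitrary position, as the question proposes
col1, col2 = full[:39], full[39:]

# Concatenating the pieces restores the original, valid JSON
rejoined = col1 + col2
assert rejoined == full
print(json.loads(rejoined)["location"])  # → 0003
```

So the concatenation idea is sound in principle; the ORA-00932 error points at a datatype mismatch in the SQL expression rather than at the JSON itself.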

Related

JSON file Size loaded into Snowflake Variant Column

We have JSON files loaded into a Snowflake table (Variant column); the table has 2 columns: file name and a Variant column (JSON records).
I am able to get the whole table size from the information schema; however, we are interested in getting the size of each row/record in the table.
Is there a formula/function that can give me the row size?
LEN(expression) Returns the length of an input string or binary value.
You can concatenate columns using type cast operator and get resulted string length.
Like this:
len(col1::string||col2::string||colN::string)
len(variant) also works
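To illustrate what that length-based measure gives you, here is a small Python sketch (the row data is made up for the example) that approximates each row's size as the length of its string form, analogous to len(col1::string || ... || colN::string):

```python
import json

# Hypothetical rows: file name plus a JSON record (stand-in for the Variant column)
rows = [
    {"file_name": "a.json", "v": {"id": 1, "tags": ["x", "y"]}},
    {"file_name": "b.json", "v": {"id": 2}},
]

# Approximate each row's size as the total length of its string form
for r in rows:
    size = len(r["file_name"]) + len(json.dumps(r["v"], separators=(",", ":")))
    print(r["file_name"], size)
```

Note this measures the serialized text length, not Snowflake's internal storage size, but it is usually good enough for comparing rows.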

Apache NiFi: Creating a new column by comparing multiple rows with various data

I have a csv which looks like this:
first,second,third,num1,num2,num3
12,312,433,0787388393,0783452323,01123124
12,124345,453,07821323,077213424,0123124421
33,2432,214,077213424,07821323,0234234211
I have to create another column according to the data stored in num1 and num2. There can be various values in the columns, but the new column should only contain 2 values: either original or fake. (I should only compare the first 3 digits in both num1 and num2.)
For the mapping part I have another csv which looks like this (I have many more rows):
078,078,fake
072,078,original
077,078,original
My Output csv should look like this after mapping:
first,second,third,num1,num2,num3,status
12,312,433,0787388393,0783452323,01123124,fake
12,124345,453,07821323,072213424,0123124421,original
33,2432,214,078213424,07821323,0234234211,fake
Hope you can suggest a NiFi workflow to get this done.
You can use LookupRecord for this, but due to the special logic you'll likely have to write your own ScriptedLookupService to read in the mapping file and compare the first 3 digits.
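The comparison logic that such a ScriptedLookupService would need can be prototyped in a few lines. This Python sketch (the `classify` helper and the "unknown" fallback are my own naming, not NiFi API) builds a lookup table keyed on the first 3 digits of num1 and num2, using the mapping rows from the question:

```python
import csv
import io

# Mapping file contents from the question: prefix1, prefix2, status
MAPPING_CSV = """078,078,fake
072,078,original
077,078,original
"""

# Build a lookup table keyed on the two 3-digit prefixes
mapping = {}
for p1, p2, status in csv.reader(io.StringIO(MAPPING_CSV)):
    mapping[(p1, p2)] = status

def classify(num1: str, num2: str) -> str:
    """Return the mapped status, comparing only the first 3 digits."""
    return mapping.get((num1[:3], num2[:3]), "unknown")

print(classify("0787388393", "0783452323"))  # fake
```

Inside a ScriptedLookupService you would read the mapping file once at startup and apply the same dictionary lookup per record.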

Mysql: extract number from string

Hello, I have a column in the table whose values are serialized:
I want the number in the array, but only if there is exactly one number in that array.
column
a:1:{s:7:"general";s:6:"666423";}
a:1:{s:7:"general";s:5:"36624";}
a:1:{s:7:"general";s:12:"36628, 36624";}
a:1:{s:7:"general";s:5:"36601";}
a:1:{s:7:"general";s:4:"9847";}
a:1:{s:7:"general";s:3:"444";}
a:1:{s:7:"general";s:2:"56";}
a:1:{s:7:"general";s:1:"7";}
Expected output
666423
36624
null (I do not want to extract if there is more than one number)
36601
9847
444
56
7
Which string function would be the most efficient in this case?
You can take advantage of the fact that the serialized format always ends the same way: use the REVERSE and LOCATE functions first to get the string, then check whether it contains multiple values.
See SQLFiddle.
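The extraction rule itself is easy to pin down before translating it into SQL string functions. Here is a regex sketch in Python (the `extract_single_number` helper is hypothetical, just to make the rule explicit): grab the last quoted value of the serialized string, and return it only when it is a single number.

```python
import re

def extract_single_number(serialized: str):
    """Return the value from a:1:{s:7:"general";s:N:"...";}
    only when it holds exactly one number; otherwise None."""
    m = re.search(r's:\d+:"([^"]*)";}$', serialized)
    if not m:
        return None
    value = m.group(1)
    # Reject values like "36628, 36624" that contain more than one number
    return value if re.fullmatch(r"\d+", value) else None

print(extract_single_number('a:1:{s:7:"general";s:6:"666423";}'))  # 666423
```

The SQL version applies the same two steps: isolate the trailing quoted value, then test it for non-digit characters.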

Add value from every row in a table and output (Cast JSON string to int)

I'm querying an SQL database that I have read-only access to (I cannot edit tables, create columns, etc.).
My table contains a column with JSON strings that have (Actual strings are much larger, this is just an example) the following syntax
{"value":"442","country":"usa"}
I would like to add together the values contained in the JSON string from each row and output the total, if this is possible.
The values are at the same position in the JSON, as shown above. The values also vary in length; most are 3-4 characters long.
Try the following (for MySQL v5.7+):
select sum(json_extract(jsonString, '$.value')) from mytable;
An example of this is here.
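To make the behavior of that query concrete, this Python sketch (the row values beyond the question's example are invented) does the equivalent of SUM(JSON_EXTRACT(jsonString, '$.value')): extract "value" from each row, cast it to a number, and sum:

```python
import json

# Hypothetical rows mirroring the question's JSON shape
rows = [
    '{"value":"442","country":"usa"}',
    '{"value":"17","country":"uk"}',
    '{"value":"1003","country":"usa"}',
]

# Extract "value" from each row, cast to int, and sum
total = sum(int(json.loads(r)["value"]) for r in rows)
print(total)  # 1462
```

Note that JSON_EXTRACT returns a quoted JSON string (e.g. "442"), which MySQL implicitly coerces to a number inside SUM; the explicit `int()` cast here makes that step visible.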

Preserving decimal values in SSIS

I have a column from my .csv file coming in with values such as 1754625.24, whereas we have to save it as an integer in our database. So I am trying to split the number on '.' and divide the second part by 1000 (24/1000), as I want a 3-digit number.
That gives me 0.024, but I am having issues storing/preserving that value as a decimal.
I tried a (DT_DECIMAL,3) conversion, but I get '0' as the result.
My idea is then to append the '024' part to the original first part, so my final result should look like 1754625024.
Please help.
I am not sure why you would store 1754625.24 as 1754625024 when storing it as an int.
But still, for your case we can use a Derived Column task and
apply REPLACE to the source column of the csv, e.g.:
REPLACE("1754625.24", ".", "0")
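The trick works because replacing the decimal point with '0' produces the desired result in one step, with no splitting or division needed. A quick Python check of the same transformation:

```python
# Replacing '.' with '0' turns 1754625.24 into 1754625024 directly
value = "1754625.24"
as_int = int(value.replace(".", "0"))
print(as_int)  # 1754625024
```

This only matches the asker's intent when the fractional part has exactly two digits (so that '0' + '24' yields the 3-digit '024'); inputs with a different number of decimal places would need padding first.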