row_to_json gives JSON with lower case field names

I have created a custom type as following:
create type my_type as (camelCasedIdentifier uuid, ...);
I am using this custom type my_type to define the field names in a JSON body:
select row_to_json(row(my_table.id, ...)::my_type) from my_table;
The reason I thought a custom type would be useful is that this way I don't have to define the JSON field names in every query (they differ from the table column names in my case), as I would have to do with json_build_object().
The problem here however is that the field names are now all in lower case:
{"camelcasedidentifier":"d8f0a177-af13-4fa2-a2af-3bc8296d848e", ...}
I expected:
{"camelCasedIdentifier":"d8f0a177-af13-4fa2-a2af-3bc8296d848e", ...}
How can I fix this? I know this can be fixed by using select json_build_object('camelCasedIdentifier', my_table.id) from my_table, but I would rather not do that, as I will be forced to enumerate the JSON field names in every query.

In SQL, unquoted identifiers are not case sensitive (PostgreSQL folds them to lower case), so your type was actually created with a field named camelcasedidentifier.
If you really need that, you have to use quoted identifiers:
create type my_type as ("camelCasedIdentifier" uuid, ...);
If you only use that type to do the JSON conversion, this is acceptable, but using those dreaded quoted identifiers everywhere is going to give you more problems in the long run than they are worth.
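For illustration, a minimal sketch of the full round trip with a single field (my_table and its id column stand in for the real schema):

create type my_type as ("camelCasedIdentifier" uuid);
select row_to_json(row(my_table.id)::my_type) from my_table;
-- {"camelCasedIdentifier":"d8f0a177-af13-4fa2-a2af-3bc8296d848e"}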

How to set a value in a MySQL (5.6) column that contains a JSON document as a string
For example, say we have a table user with three columns id, name, and jsonConfig, and the jsonConfig column contains data as a JSON document:
{"key1":"val1","key2":"val2","key3":"val3"}
I would like to replace the value val1 with, say, val4 in the jsonConfig column.
Can we do that using MySQL(5.6) queries?
I don't think there is a direct way to do this; in later versions a lot of JSON support was added (JSON_EXTRACT, JSON_CONTAINS, etc.). You might have to write your own custom function.
With MySQL 5.6, since it does not have the JSON data type or the supporting functions, you are going to have to replace the entire string via an UPDATE query if you want to change any part of the JSON document in your string.
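For a simple one-off swap, a string-level REPLACE inside the UPDATE can do it, with the caveat that it blindly rewrites every occurrence of the substring (table and column names taken from the question; the WHERE clause is illustrative):

UPDATE user
SET jsonConfig = REPLACE(jsonConfig, '"key1":"val1"', '"key1":"val4"')
WHERE jsonConfig LIKE '%"key1":"val1"%';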

Talend Casting of JSON string to JSON or JSONB in PostgreSQL

I'm trying to use Talend to get JSON data that is stored in MySQL as a VARCHAR datatype and export it into a PostgreSQL 9.4 table of the following type:
CREATE TABLE myTable (myJSON JSONB);
When I try running the job I get the following error:
ERROR: column "json_string" is of type json but expression is of type
character varying
Hint: You will need to rewrite or cast the expression. Position:
54
If I use python or just plain SQL with PostgreSQL insert I can insert a string such as '{"Name":"blah"}' and it understands it.
INSERT INTO myTable(myJSON) VALUES ('{"Name":"blah"}');
Any ideas how this can be done in Talend?
You can add a type cast by opening the "Advanced settings" tab on your "tPostgresqlOutput" component. Consider the following example:
In this case, the input row to "tPostgresqlOutput_1" has one column, data. This column is of type String and is mapped to the database column data of type VARCHAR (as suggested by default by Talend):
Next, open the component settings for tPostgresqlOutput_1 and locate the "Advanced settings" tab:
On this tab, you can replace the existing data column by a new expression:
In the name column, specify the target column name.
In the SQL Expression column, do your type casting. In this case: "?::json". Note the usage of the placeholder character ?, which will be replaced with the original value.
In Position, specify Replace. This will replace the value proposed by Talend with your SQL expression (including the type cast).
As Reference Column use the source value.
This should do the trick.
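The net effect is that Talend emits the cast into its generated statement; conceptually, the INSERT it produces looks roughly like this (table and column names from the example above):

INSERT INTO myTable (data) VALUES (?::json);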
Here is a sample schema where the input row r has question_json and choice_json columns, which are JSON strings, and from which I know which keys I want to extract; here is how I do it.
You should look at the columns question_value and choice_value. Hope this helps you.

Parsing json into data structures with lower case field names

I am parsing JSON into ABAP structures, and it works:
DATA cl_oops TYPE REF TO cx_dynamic_check.
DATA(text) = `{"TEXT":"Hello ABAP, I'm JSON!","CODE":"123"}`.

TYPES: BEGIN OF ty_structure,
         text TYPE string,
         code TYPE char3,
       END OF ty_structure.

DATA wa_structure TYPE ty_structure.

TRY.
    text = |\{"DATA":{ text }\}|.
    CALL TRANSFORMATION id OPTIONS clear = 'all'
                           SOURCE XML text
                           RESULT data = wa_structure.
    WRITE: wa_structure-text, wa_structure-code.
  CATCH cx_transformation_error INTO cl_oops.
    WRITE cl_oops->get_longtext( ).
ENDTRY.
The interesting part is that CODE and TEXT are case sensitive. For most external systems, having all-caps identifiers is ugly, so I have been trying to parse {"text":"Hello ABAP, I'm JSON!","code":"123"} without any success. I looked into the options, I looked into whether a changed copy of id might accomplish this, I googled it, and I have no idea how to accomplish this.
Turns out that SAP has a sample program on how to do this.
There is basically an out-of-the-box transformation that does this for you, called demo_json_xml_to_upper. The name is a bit unfortunate, so I would suggest renaming this transformation and adding it to the customer namespace.
I am a bit bummed that this only works through xstrings, so debugging it becomes a pain. But it works perfectly and solved my problem.
My solution to this is low tech. I spent hours looking for a simple way out of this mess: the JSON response could have the field names in lower or camel case. Here it is: if you know the field names - obviously you do, because your table has the same column names - just replace the lower case name with an upper case one in your xstring.
If in your table the field is USERS_ID and in the JSON xstring it is users_ID - go for that:
REPLACE ALL OCCURRENCES OF 'users_ID' IN ls_string WITH 'USERS_ID'.
Do the same for all fields and the object name and call transformation ID.

Cast JSON to HSTORE in Postgres 9.3+?

I've read the docs and it appears that there's no discernible way to perform an ALTER TABLE ... ALTER COLUMN ... USING statement to directly convert a json type column to an hstore type. There's no function available (that I'm aware of) to perform the cast.
The next best alternative I have is to create a new column of type hstore, copy my JSON data to that new column using some external tool, drop the old json column and rename the new hstore column to the old column's name.
Is there a better way?
What I have so far is:
CREATE TABLE blah (unstructured_data JSON);

ALTER TABLE blah ALTER COLUMN unstructured_data
  TYPE hstore USING CAST(unstructured_data AS hstore);

ERROR: cannot cast type json to hstore
Unfortunately, PostgreSQL doesn't allow all kinds of expressions within the USING clause of ALTER TABLE ... SET DATA TYPE ... (e.g. sub-queries are disallowed).
But you can write a function to overcome this; you just need to decide what to do with complex types in the object's values, like arrays and objects. Here is an example, which simply converts them to strings:
CREATE OR REPLACE FUNCTION my_json_to_hstore(json)
RETURNS hstore
IMMUTABLE
STRICT
LANGUAGE sql
AS $func$
SELECT hstore(array_agg(key), array_agg(value))
FROM json_each_text($1)
$func$;
After that, you can use this in your ALTER TABLE, like:
ALTER TABLE blah
ALTER COLUMN unstructured_data
SET DATA TYPE hstore USING my_json_to_hstore(unstructured_data);
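A quick standalone check of the function before committing to the ALTER TABLE (the value is arbitrary):

SELECT my_json_to_hstore('{"a":"1","b":null}');
-- "a"=>"1", "b"=>NULL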
There is "trap" for repeated keys - allowed by both json and hstore input, but unfortunately resolved differently (!). Consider this example value:
json '{"double_key":"key1","foo":null,"double_key":"key2"}'
In json, 'double_key' is effectively 'key2'. The manual:
Because the json type stores an exact copy of the input text, it will
preserve semantically-insignificant white space between tokens, as
well as the order of keys within JSON objects. Also, if a JSON object
within the value contains the same key more than once, all the
key/value pairs are kept. (The processing functions consider the last value as the operative one.)
Bold emphasis mine.
In hstore, however, for the same order of key/value pairs, 'double_key' might effectively be 'key1'. The manual:
Each key in an hstore is unique. If you declare an hstore with
duplicate keys, only one will be stored in the hstore and there is no guarantee as to which will be kept:
Typically, the first instance of a key, but that's an implementation detail that might change.
A simple and fast option to always preserve the effective, operative value: cast to jsonb before the conversion. The manual again:
[...] jsonb does not preserve white space, does not preserve
the order of object keys, and does not keep duplicate object keys.
If duplicate keys are specified in the input, only the last value is kept.
Modifying @pozs's conversion function:
CREATE OR REPLACE FUNCTION json2hstore(json)
RETURNS hstore AS
$func$
SELECT hstore(array_agg(key), array_agg(value))
FROM jsonb_each_text($1::jsonb) -- !
$func$ LANGUAGE sql IMMUTABLE STRICT;
Requires Postgres 9.4 or later. Postgres 9.3 has the json type, but not jsonb yet. A no-op in PL/v8 might be an alternative there, as @jpmc mentioned.
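To see the trap in action, compare both functions on a duplicate-key value (the first result is typical but, per the manual, not guaranteed):

SELECT my_json_to_hstore('{"double_key":"key1","double_key":"key2"}');
-- "double_key"=>"key1"  (hstore kept the first pair; no guarantee)
SELECT json2hstore('{"double_key":"key1","double_key":"key2"}');
-- "double_key"=>"key2"  (jsonb keeps the last value)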

How do I cast ENUMs to their integer values when selecting into an outfile in MySQL?

Description is basically the question.
I have a CSV file that I load into a table in my database. I then SELECT * FROM tablename INTO OUTFILE '...' and check that the two CSVs are equal. I do this to make sure that the data was loaded properly.
My problem is that one of the fields is an ENUM type. The CSV I use to load the table contains the integer representation (for reasons I can't control) and the outfile CSV contains the string representation. Is there any way for me to cast the ENUM into its integer value when creating the outfile? I feel as though this should be an easy thing to do but I wasn't able to find an answer on Google.
Note: I have to do this for many different tables so it's difficult for me to treat each table on an individual basis.
You need to use a numeric context to get the enum's integer value. Try this:
SELECT ..., enum_field+0, ...
This will evaluate the enum_field in a numeric context, which will return the index value.
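Applied to the outfile export from the question, that might look like this (table, column, and file path are placeholders):

SELECT id, name, status+0
FROM tablename
INTO OUTFILE '/tmp/tablename.csv'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';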