Here is my input JSON data file:
{"uq_id" : "12rtyADF",
"item_ids": {"item_id" : "123",
"item_name" : "telephone"},
"path" : ["1|2|6|3"],
"time" : "20150818150000" }
{"uq_id" : "1234yADF",
"item_ids" : {},
"time" : "20150818150000"}
{"uq_id" : "1er45ADF",
"item_ids" : {},
"path" : ["1|2|6|3"] }
I would like to convert this to CSV or txt format as below:
----------------------------------------------------------------------------
uq_id | item_ids_itemid | item_ids_itemname | path | time
----------------------------------------------------------------------------
12rtyADF | 123 | telephone | "1|2|6|3" | 20150818150000
1234yADF | | | | 20150818150000
1er45ADF | | | "1|2|6|3" |
Please note that no field in the input file is mandatory except uq_id.
The request is to read the JSON file and load it into an Oracle DB. I was thinking that if the JSON file could be converted to CSV or text, it would be easy to load into the Oracle DB. The Oracle version is 11g.
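Since the input concatenates pretty-printed JSON objects (it is not one big JSON array), a small preprocessor is one route. Below is a minimal Python sketch under that assumption; `iter_objects`, `flatten`, and `convert` are illustrative names. The resulting pipe-delimited file can then be loaded into 11g with SQL*Loader or an external table.

```python
import csv
import json

def iter_objects(text):
    """Yield each top-level JSON object from a file that simply
    concatenates pretty-printed objects (it is not one JSON array)."""
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        while pos < len(text) and text[pos].isspace():
            pos += 1  # skip whitespace between objects
        if pos >= len(text):
            break
        obj, pos = decoder.raw_decode(text, pos)
        yield obj

def flatten(record):
    """Map one record to the five CSV columns; missing keys become ''."""
    item = record.get("item_ids") or {}
    return [
        record["uq_id"],  # the only mandatory field
        item.get("item_id", ""),
        item.get("item_name", ""),
        (record.get("path") or [""])[0],
        record.get("time", ""),
    ]

def convert(in_path, out_path):
    """Write a pipe-delimited file with a fixed header row."""
    with open(in_path) as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="|")
        writer.writerow(["uq_id", "item_ids_itemid", "item_ids_itemname",
                         "path", "time"])
        for rec in iter_objects(src.read()):
            writer.writerow(flatten(rec))
```

Note the pipe delimiter clashes with the "1|2|6|3" values inside path, so for a real load you may want a different delimiter or quoting via `csv.QUOTE_ALL`.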
I have a MySQL data table and a CSV file. The table has a JSON-typed column, and the CSV file has a corresponding JSON field. When I use the "load data local infile ..." method to import the CSV file into MySQL, I run into a problem with this process.
Here are my table details:
mysql> desc test;
+---------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+----------------+
| id | int | NO | PRI | NULL | auto_increment |
| content | json | YES | | NULL | |
| address | varchar(255) | NO | | NULL | |
| type | int | YES | | 0 | |
+---------+--------------+------+-----+---------+----------------+
and my sql statement:
mysql> load data local infile '/Users/kk/Documents/test.csv'
-> into table test
-> fields terminated by ','
-> lines terminated by '\n'
-> ignore 1 rows
-> (id,address,content,type);
ERROR 3140 (22032): Invalid JSON text: "The document root must not be followed by other values." at position 3 in value for column 'test.content'.
My CSV file data is as follows:
"id","address","content","type"
1,"test01","{\"type\": 3, \"chain\": 1, \"address\": \"test01\"}",1
2,"test02","{\"type\": 3, \"chain\": 2, \"address\": \"test02\"}",1
If you are able to hand-craft a single insert statement that works (example here), you could go via a preprocessor written in a simple scripting language: Python, AutoIt, PowerShell, etc. With a preprocessor you have more control over fields, quoting, ordering, and so on than with a direct import in MySQL.
So, for example (assuming you use Python):
python split.py /Users/kk/Documents/test.csv > /tmp/temp.sql
mysql -h myhostname -u myUser mydatabase < /tmp/temp.sql
where temp.sql would be something like
insert into test (content, address, type) values ('{"type":3,"chain":1,"address":"test01"}', 'test01', 1);
...
For example, I have the following JSON file:
[
{
"DataType":"DataType_A",
"JSON":"{\"x\":\"0\", \"y\":\"1\", \"z\":\"2\"}"
},
{
"DataType":"DataType_A",
"JSON":"{\"x\":\"1\", \"y\":\"3\", \"z\":\"0\"}"
},
{
"DataType":"DataType_B",
"JSON":"{\"Name\":\"steve\", \"Id\":\"4b\"}"
},
{
"DataType":"DataType_B",
"JSON":"{\"Name\":\"andy\", \"Id\":\"7c\"}"
},
{
"DataType":"DataType_C",
"JSON":"{\"Address\":\"123 Anywhere St.\", \"Town\":\"Springfield\"}"
},
{
"DataType":"DataType_C",
"JSON":"{\"Address\":\"1400 Another Rd.\", \"Town\":\"Anytown\"}"
}
]
I can import the file via the Get Data > From JSON function, resulting in this table:
| DataType | JSON |
| DataType_A | {"x":"0", "y":"1", "z":"2"} |
| DataType_A | {"x":"1", "y":"3", "z":"0"} |
| DataType_B | {"Name":"steve", "Id":"4b"} |
| DataType_B | {"Name":"andy", "Id":"7c"} |
| DataType_C | {"Address":"123 Anywhere St.", "Town":"Springfield" } |
| DataType_C | {"Address":"1400 Another Rd.", "Town":"Anytown" } |
How do I go from the table I have to 3 tables, one for each data type?
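If a scripted route outside Power Query is acceptable, the same split-by-type can be sketched in plain Python (the sample below inlines two of the records from the question; `split_by_datatype` is an illustrative name):

```python
import json
from collections import defaultdict

def split_by_datatype(records):
    """Group records by DataType and parse the embedded JSON string of
    each one, yielding one list of row-dicts per data type."""
    tables = defaultdict(list)
    for rec in records:
        tables[rec["DataType"]].append(json.loads(rec["JSON"]))
    return dict(tables)

# Two of the records from the question, inlined for the example
sample = json.loads("""[
  {"DataType":"DataType_A", "JSON":"{\\"x\\":\\"0\\", \\"y\\":\\"1\\", \\"z\\":\\"2\\"}"},
  {"DataType":"DataType_B", "JSON":"{\\"Name\\":\\"steve\\", \\"Id\\":\\"4b\\"}"}
]""")
tables = split_by_datatype(sample)
```

Each entry in `tables` then corresponds to one of the three per-type tables, with the inner JSON already expanded into columns.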
I want to load files of raw JSON into MySQL.
I want to map each key in the raw file to a column and each value to that column's data, but I do not know how to do this.
Is there a way to do it without writing each insert by hand?
I am trying to insert raw JSON files into MySQL.
Help!
Example raw files
file1
[{"app":"unknown", "uid": "1000", "says": "hello"}, {"app":"hi", "uid": "1020", "says": "good"}]
file2
[{"app":"wowo", "uid": "20", "says": "asdf"}, {"app":"no", "uid": "1030", "says": "goso"}]
Want MYSQL Result
+-----------------+------+-------------+
| app | uid | says |
+-----------------+------+-------------+
| unknown | 1000 | hello |
| hi | 1020 | good |
| wowo | 20 | asdf |
| no | 1030 | goso |
+-----------------+------+-------------+
I've got several Postgres 9.4 tables that contain data like this:
| id | data |
|----|-------------------------------------------|
| 1 | {"user": "joe", "updated-time": 123} |
| 2 | {"message": "hi", "updated-time": 321} |
I need to transform the JSON column into something like this (wrapping the timestamp value together with its unit):
| id | data |
|----|--------------------------------------------------------------|
| 1 | {"user": "joe", "updated-time": {"value": 123, "unit": "millis"}} |
| 2 | {"message": "hi", "updated-time": {"value": 321, "unit": "millis"}} |
Ideally it would be easy to apply the transformation to multiple tables. Tables that contain the JSON key data->'updated-time' should be updated, and ones that do not should be skipped. Thanks!
You can use the || operator to merge two jsonb objects together.
select '{"foo":"bar"}'::jsonb || '{"baz":"bar"}'::jsonb;
= {"baz": "bar", "foo": "bar"}
I am trying to migrate data from an old database into our new application.
In the process, I need to grab data from the old DB to create a JSON document, which must be stored in a field of the new MySQL DB.
So I use the components tWriteJSONField and tExtractJSONFields.
In tWriteJSONField, my XML tree looks like this:
path
|-- id [loop element]
|-- name
|-- description
N.B.: I can't figure out how to use the loop element and group element properties. I don't understand how they work, and the documentation doesn't cover them.
The tWriteJSONField component is linked to a tExtractJSONFields in order to extract the id from the JSON. I need this to know which record each JSON document must be linked to.
tExtractJSONFields configuration: XPath request
"/path"
tExtractJSONFields configuration: Mapping
-----------------------------------------------
| column | XPath request | get nodes ? |
-----------------------------------------------
| idForm | "id" | false |
-----------------------------------------------
| jsonStructure | "*" | yes |
-----------------------------------------------
My problem is that in the jsonStructure output by tExtractJSONFields, I only get the first child of my root tag. In my case, jsonStructure looks like this:
{
"id": "123"
}
The expected result is:
{
"id": "123",
"name": "Test",
"description": "Test"
}
If I declare the child name before id, for example, I get:
{
"name": "Test"
}
I have tried changing the XPath query for jsonStructure, but I never get all the fields.
Why?
It's my first question about Talend, so if it lacks information, let me know in the comments.
Thanks for your help.
EDIT :
Data from tMysqlInput to tWriteJSONField:
N.B.: My flow contains more columns, but I only show the ones used to create the JSON.
---------------------------------------------------------------------------------------
| IdForm | NomForm | DescrForm |
---------------------------------------------------------------------------------------
| 1 | English training | <p>This is a description of the training</p> |
---------------------------------------------------------------------------------------
| 2 | French training | <p>This contains HTML tags from a WYSIWYG</p> |
---------------------------------------------------------------------------------------
| 3 | How to use the application | <p>Description</p> |
---------------------------------------------------------------------------------------
In tWriteJSONField, columns are mapped to the JSON like this :
path
|-- id [loop element] --> IdForm
|-- name --> NomForm
|-- description --> DescrForm
tWriteJSONField outputs a new flow with the same columns as the input (although these columns are all empty in the output, even if they were populated in the input) and adds a new column, jsonStructure, which contains the generated JSON.
This new flow is consumed by a tExtractJSONFields (the configuration for this component is given in my original post above).
tExtractJSONFields outputs this flow:
--------------------------
| IdForm | jsonStructure |
--------------------------
| 1 | { "id": "1" } |
--------------------------
| 2 | { "id": "2" } |
--------------------------
| 3 | { "id": "3" } |
--------------------------
And I expect it to return this one:
--------------------------------------------------------------------------------------------
| IdForm | jsonStructure |
--------------------------------------------------------------------------------------------
| 1 | { "id": "1", "name": "English training", "description": "<p>This is[...]</p>" } |
--------------------------------------------------------------------------------------------
| 2 | { "id": "2", "name": "French training", "description": "<p>[...]</p>" } |
--------------------------------------------------------------------------------------------
| 3 | { "id": "3", "name": "How to use the [...]", "description": "<p>[...]</p>" } |
--------------------------------------------------------------------------------------------
EDIT 2
I use TOS 5.4.0.r110020, if that helps.
Your XPath request for the jsonStructure column is not correct. Just remove the "*" and you will get the expected result.
Also, if you don't need the root node in the JSON output, just check "Remove root node" on tWriteJSONField and change the Loop XPath query to "/" in tExtractJSONFields.
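For comparison outside Talend, what the mapping is meant to produce — the id plus the complete document rather than only its first child — looks like this in plain Python (names here are illustrative, not part of the Talend components):

```python
import json

def extract_fields(json_structure):
    """Return (idForm, jsonStructure): the id field plus the complete
    JSON document, which is what the tExtractJSONFields mapping above
    is intended to yield for each row."""
    doc = json.loads(json_structure)
    return doc["id"], json.dumps(doc)
```

Running it on the expected document from the question returns both the id and the untruncated structure.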