{"edit_history_tweet_ids":["1607615574473080832"],"id":"1607615574473080832","text":"Twitter bms with this dumb ass update ðŸ˜"}
This is my flowfile, and I want to INSERT it into a MySQL database using PutDatabaseRecord.
[screenshot: table_desc]
This is my table description (MySQL 8.0).
The PutDatabaseRecord processor's bulletin says: java.lang.String cannot be cast to java.lang.Byte
I think the 'edit_history_tweet_ids' column is the problem.
What should I do?
I also tried sending the JSON flowfile through ConvertJSONToSQL and PutSQL, but after the ConvertJSONToSQL processor, the edit_history_tweet_ids column's data disappears.
[screenshot: after_convertjsontosql]
When I INSERT the valid generated flowfile, it completes successfully.
[screenshot: updated_attribute]
I solved it by escaping the text as JSON with UpdateRecord and converting the flowfile to CSV format. The fix seems to be writing the field as a string instead of an array.
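For reference, here is a rough sketch of the kind of UpdateRecord configuration that does this; the record path is based on the flowfile above and is an assumption, not the exact properties used:

Replacement Value Strategy: Record Path Value
/edit_history_tweet_ids : escapeJson(/edit_history_tweet_ids)

This rewrites the array as its JSON-escaped string representation, so the downstream CSV writer and PutDatabaseRecord see a plain string rather than an array.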
Related
I'm trying to load CSV data into MySQL from Talend Studio and am getting the error below:
Couldn't parse value for column 'RegisterTime' in 'row1', value is '1974-10-22 08:46:40.000'. Details: java.lang.RuntimeException: Unparseable date: "1974-10-22 08:46:40.000"
The field RegisterTime has the data type 'Date' and the format yyyy-MM-dd hh:mm:ss.SSS in the schema I defined as metadata in Talend Studio.
Am I using an incorrect format? Any help in suggesting the right format would be great.
This is probably a Date pattern problem even though the one you indicate is the right one. You should make sure this pattern is used in the component itself by going to the component view -> "Edit Schema".
Because the value has milliseconds, the 'RegisterTime' data type should be DATETIME with a length (fractional-seconds precision) of 3. Try again.
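In MySQL terms that corresponds to a column with fractional-seconds precision 3; a minimal sketch, with my_table as a placeholder name:

ALTER TABLE my_table MODIFY RegisterTime DATETIME(3);

DATETIME(3) keeps the millisecond part, so values such as '1974-10-22 08:46:40.000' can be stored without truncation.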
I am trying to ingest the data from my CSV file into a MySQL DB. My CSV file has a field called 'MeasurementTime' with a value like 2018-06-27 11:14.50. My flow is treating that field as a string, so PutSQL gives an error. I am using the same template as in this Template, but without the InferAvroSchema processor, since I already have a pre-defined schema. This is the website: Website link
How can I pass a datetime field into my MySQL DB as the correct data type and not as a string? What setting should I change?
Thank you
With PutDatabaseRecord you can avoid this whole chain of transformations and over-engineering. The flow would be:
GetFile -> PutDatabaseRecord
You need to configure PutDatabaseRecord with its Record Reader property set to a CSVReader, then configure the CSVReader to use an AvroSchemaRegistry as its Schema Registry and provide a valid schema, as sketched below. You can find the template for a sample flow here.
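As a rough sketch, the schema registered in the AvroSchemaRegistry could declare MeasurementTime with a timestamp logical type so the reader produces a real timestamp instead of a string (the record name and any other fields are assumptions, not from the question):

{
  "type": "record",
  "name": "measurement",
  "fields": [
    {"name": "MeasurementTime", "type": {"type": "long", "logicalType": "timestamp-millis"}}
  ]
}

The CSVReader's Timestamp Format property then needs to match the values actually present in the file, and PutDatabaseRecord can bind the field as a SQL timestamp/datetime rather than a string.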
I have an old table that used to save JSON to a string SQL field using the Rails ActiveRecord serializer. I created a new table with the field's data type as JSON, which was introduced in MySQL 5.7. But directly copying the data from the old field to the new one gives me a JSON error saying the JSON structure is wrong.
More specifically, the problem is with Unicode characters, which my database does not yet support, and the database is too large to just migrate everything to support them.
I am looking for a way to migrate the data from the old field to the new JSON field.
I have seen that replacing \u (the Unicode escape) with \\u in the JSON string solves the issue, but I am not able to just do this:
update table_name set column_name=REPLACE(column_name, '\u', '\\u');
since it gives an error again
ERROR 3140 (22032): Invalid JSON text: "Invalid escape character in string." at position 564 in value for column 'table_name.column_name'.
To update the table values from \u to \\u in SQL (the backslashes are doubled again because MySQL string literals themselves treat \ as an escape character):
update table_name set column_name=REPLACE(column_name, '\\u', '\\\\u') [where condition];
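Once the escapes are repaired, the old field can be copied into the new JSON column; a minimal sketch, where new_table and json_column are placeholder names, not from the question:

-- Copy the repaired strings into the new JSON column.
-- CAST(... AS JSON) parses each string and fails loudly on any value that is still invalid.
INSERT INTO new_table (json_column)
SELECT CAST(column_name AS JSON) FROM table_name;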
Background
In BigQuery with autodetect, I have the following JSON data being loaded into a BQ table.
"a":"","b":"q"
"a":"","b":"q1"
"a":"1","b":"w2"
Now, when this JSON is uploaded, BQ throws an error: cannot convert field "a" to integer.
Thoughts
I guess that after reading the first two rows, BQ infers field "a" as a string, and then later when "a":"1" arrives, BQ tries to convert it to an integer (but why?).
So, to investigate further, I modified the JSON as follows.
"a":"f","b":"q"
"a":"v","b":"q1"
"a":"1","b":"w2"
Now, when I use this JSON, there are no errors and the data loads into the table smoothly.
I don't see why in this scenario, if BQ infers field "a" as a string, it throws no error (why does it not try to convert "a":"1" to an integer)?
Query
My assumption is that BQ infers a field's type only when it sees data in the field ("a":"1" or "a":"f"), but what I don't get is why BQ tries to automatically convert "a":"1" to an integer when the field is of type string.
This auto-conversion could create issues.
Please let me know if my assumptions are correct and what can be done to avoid such errors, because the real-time data is not in my control; I can only control my code (which uses autodetect).
It is a bug with autodetect. We are working on a fix.
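Until the fix is out, one possible workaround (not part of the original answer) is to skip autodetect and supply an explicit schema for the load job, declaring both fields as strings so BQ never has to guess:

[
  {"name": "a", "type": "STRING"},
  {"name": "b", "type": "STRING"}
]

With an explicit schema, "a":"1" is simply stored as the string "1" instead of triggering an integer conversion.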
I am new to Pentaho Kettle and I am trying to build a simple data transformation (filter, data conversion, etc). But I keep getting errors when reading my CSV data file (whether using CSV File Input or Text File Input).
The error is:
... couldn't convert String to number : non-numeric character found at
position 1 for value [ ]
What does this mean exactly and how do I handle it?
Thank you in advance
I have solved it. The idea is similar to what @nsousa suggested, but I didn't use the Trim option because I tried it and it didn't work in my case.
What I did was specify that if the value is a single space, it is set to null: in the Fields tab of the Text File Input step, set the 'Null if' column to a space.
That value looks like an empty space. Set the Format of the Integer field to # and set trim to both.