I'm doing a transformation that has two different flows. At the end of the transformation the two flows converge and save the data into the same JSON file output. When I check a specific column in the result file, the values look strange. They look like the following:
Column
[B@3e8fe299
[B@50b541fb
[B@44b719d4
[B@7dad3c13
[B@6e46a542
[B@170d9515
When I save the two flows to different files this doesn't occur; the values stay correct. Does anyone know what could be causing this and how I can solve it?
Thanks.
Looks like you're printing out Java byte array object IDs. Here is a link which shows similar values to yours ([B@...):
Java: Syntax and meaning behind "[B@1ef9157"? Binary/Address?
Can you verify what types your fields are?
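For reference, the behavior is easy to reproduce in plain Java: the default toString() of a byte[] is the type tag [B plus a hex hash code, which is exactly the shape of the values above. A minimal, self-contained sketch:

import java.nio.charset.StandardCharsets;

public class ByteArrayDemo {
    public static void main(String[] args) {
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);

        // println(Object) falls back to the array's default toString(),
        // which prints "[B@" plus a hex hash code, e.g. [B@3e8fe299.
        System.out.println(data);

        // Decoding the bytes explicitly recovers the original text.
        System.out.println(new String(data, StandardCharsets.UTF_8));
    }
}

If one of your two flows produces this field as a byte array while the other produces a string, converting the bytes to a string before the flows converge should fix the merged output.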
I have a CSV which contains a column with a date and time. I want to change the format of the date-time column. The first three rows of my CSV look like the following:
Dater,test1,test2,test3,test4,test5,test6,test7,test8,test9,test10,test11
20011018182036,,,,,166366183,,,,,,
20191018182037,,27,94783564564,,162635463,817038655446,,,0,,
I want to change the CSV to look like this:
Dater,test1,test2,test3,test4,test5,test6,test7,test8,test9,test10,test11
2001-10-18-18-20-36,,,,,166366183,,,,,,
2019-10-18-18-20-37,,27,94783564564,,162635463,817038655446,,,0,,
How can this be done?
I tried using the UpdateRecord processor. My properties look like this (screenshot omitted). But this approach doesn't work: the data gets routed to the failure relationship from the UpdateRecord processor. Please suggest a method to complete the task.
I was able to accomplish this using the UpdateRecord processor. The expression language I used is ${field.value:toDate('yyyyMMddHHmmss'):format('yyyy-MM-dd HH:mm:ss')}.
This alone didn't work, though: every time, the data was routed to the failure path from the UpdateRecord processor.
To fix this error I changed the configuration of the CSVRecordSetWriter: the Schema Access Strategy must be changed to Use String Fields From Header (by default it is Use Schema Name Property).
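NiFi's toDate() and format() functions use Java's SimpleDateFormat patterns, so the conversion the expression performs is equivalent to this plain-Java sketch (illustrative only, using the sample value from the question):

import java.text.SimpleDateFormat;
import java.util.Date;

public class ReformatTimestamp {
    public static void main(String[] args) throws Exception {
        // Parse the raw CSV value, as toDate('yyyyMMddHHmmss') does...
        Date parsed = new SimpleDateFormat("yyyyMMddHHmmss").parse("20011018182036");

        // ...then render it in the new pattern, as format('yyyy-MM-dd HH:mm:ss') does.
        System.out.println(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(parsed));
        // Prints: 2001-10-18 18:20:36
    }
}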
Strategy: use UpdateRecord to manipulate the timestamp value using expression language:
${field.value:toDate():format('ddMMyyyy')}
Flow: GenerateFlowFile -> UpdateRecord. In UpdateRecord, set up the reader and writer to inherit the schema, include the header line, and leave the other properties untouched. (Configuration and result screenshots omitted.)
However, this solution might not satisfy you because of a strange problem. When you format the date like this:
${field.value:toDate():format('dd-MM-yyyy')}
ConvertRecord routes to the failure relationship: type coercion does not work properly. Maybe it is a bug; I could not find a solution for this problem.
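For what it's worth, the dashed pattern itself is valid as far as Java is concerned, which supports the suspicion that the failure lies in NiFi's record type coercion rather than in the pattern. A quick plain-Java check:

import java.text.SimpleDateFormat;
import java.util.Date;

public class DashPatternCheck {
    public static void main(String[] args) throws Exception {
        Date parsed = new SimpleDateFormat("yyyyMMddHHmmss").parse("20011018182036");

        // The dashed pattern formats without any problem in plain Java.
        System.out.println(new SimpleDateFormat("dd-MM-yyyy").format(parsed));
        // Prints: 18-10-2001
    }
}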
I have a solution that helps me create a wizard to fill in some data and turn it into JSON. The problem now is that I have to receive an xlsx file and turn specific data from it into JSON: not all of the data, only the fields I want, which are documented in the last link.
In this link, https://stackblitz.com/edit/xlsx-to-json, I can access the Excel data and turn it into an object (when I print it with document.getElementById('output').innerHTML = JSON.parse(dataString); it shows [object Object]).
I want to implement this solution and automatically get the fields specified in config.ts, but I can't get it to work. For now, I have these in my HTML and app-component.ts:
https://stackblitz.com/edit/angular-xbsxd9 (it probably doesn't compile, but it's only there to show the code)
It wasn't quite clear what you were asking, but based on the assumption that what you are trying to do is:
Given the data in the spreadsheet that is uploaded
Use a config that holds the list of column names you want returned in the JSON when the user clicks to download
Based on this, I've created a fork of your sample here -> Forked StackBlitz
What I've done is:
Use the map operator on the array returned from the sheet_to_json method.
Within the map, loop through each key of the record (each key being a column in this case).
If a column in the row is defined in the property map file (config), then return it.
This approach strips out all the columns you don't care about up front, so that by the time the user clicks to download the file, only the columns you want are returned (see the sketch below). If you need to keep the original columns, you can move this logic somewhere more convenient for you.
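The fork itself does this in TypeScript, but the filtering idea is simple enough to sketch in Java as well; the names below are illustrative, not taken from the sample:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ColumnFilter {
    // For every row, keep only the columns that the config names.
    static List<Map<String, Object>> keepConfiguredColumns(
            List<Map<String, Object>> rows, Set<String> configuredColumns) {
        List<Map<String, Object>> result = new ArrayList<>();
        for (Map<String, Object> row : rows) {
            Map<String, Object> kept = new LinkedHashMap<>();
            // Loop through each key of the record (each key being a column)
            // and keep it only if it is defined in the property map.
            for (Map.Entry<String, Object> e : row.entrySet()) {
                if (configuredColumns.contains(e.getKey())) {
                    kept.put(e.getKey(), e.getValue());
                }
            }
            result.add(kept);
        }
        return result;
    }
}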
I also augmented the property map a little to give you more granular control over how to format the data in the returned JSON, i.e. so numbers aren't treated as strings in the final output. You can use this as a template if it suits your needs for any additional formatting.
Hope it helps.
I've got parent and child data which I am trying to convert into a flat file using Dell Boomi. The flat file is column-based and needs a structure where the line-item data is on the same line of the file as the header data.
For instance, a header record which has 4 line items needs to generate a file with a structure of:
[header][line][line][line][line]
Currently what I have been able to generate is either
[header][line]
[header][line]
[header][line]
[header][line]
or
[header]
[line]
[line]
[line]
[line]
I think taking the results of the second profile and then using a Data Process shape to strip [\r][\n] might be my best option, but I wanted to check before implementing it.
I created a user-defined function for each field of data. For this example, let's just look at "FirstName".
I used a branch immediately. Branch 1 split the documents, flow-controlled them one at a time, entered the map, and then stopped. Branch 2 contained a Message shape where I built the new file: I typed a static value with the header name, then used a dynamic process property as the parameter next to the header.
The user-defined function for "FirstName" accepts the first name as input, appends it to the dynamic process property (you need to define a dynamic process property for each field), prepends your delimiter of choice, and then sets the same dynamic process property.
This was all done with NO scripting at all. I hope this helps. I can provide screenshots if you need more clarification.
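Although the function is assembled visually rather than scripted, the accumulation it performs is equivalent to this small Java sketch (the names are illustrative):

public class FieldAccumulator {
    // Mirrors the "FirstName" user-defined function: take the value
    // accumulated so far in the dynamic process property, add the
    // incoming value behind a delimiter, and store the result back.
    static String accumulate(String accumulatedSoFar, String firstName, String delimiter) {
        if (accumulatedSoFar == null || accumulatedSoFar.isEmpty()) {
            return firstName; // first document: nothing to delimit yet
        }
        return accumulatedSoFar + delimiter + firstName;
    }

    public static void main(String[] args) {
        String property = "";
        // One document at a time, as the flow control step enforces.
        for (String name : new String[] {"Ann", "Bob", "Cal"}) {
            property = accumulate(property, name, "|");
        }
        System.out.println(property); // Ann|Bob|Cal
    }
}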
I have some big JSON files that differ slightly in the types that the fields contain.
{ "a":"1" }
vs.
{ "a":1 }
When I unmarshal the second I get:
cannot unmarshal number into Go value of type string
However, since these JSON files are large, I would like to know the actual field that is in error so I can fix them. The UnmarshalTypeError does not hold the struct's field name.
Does anybody know of a way to get the field name? (Not by debugging; I have a lot of different fields that err.)
[EDIT]
I know how to solve the type conversion. What I need is a method to see what fields I need to apply that conversion to.
The short answer is that you can't.
However, to fix your problem, there are multiple solutions:
Dive into the json.Unmarshal source code to change how it works and add the information you need: copy the function into a local package, make your edits, and use that function.
Use a third-party tool to help you, for example a JSON validator compatible with JSON Schema: here is an online example; there is probably a better-suited tool.
Now (since Go 1.8) the UnmarshalTypeError contains the field name in its Field member.