I have a JSON file with almost 50 fields in it, and I need to validate each of them through automation. I also have the test data for all 50 fields in an Excel sheet.
Where I am stuck is that I cannot place all 50 fields' data in the Excel cells, since it would make my sheet look bulky. Any suggestions on how I can validate this?
Can you use Python/Java/C#, etc., to read the Excel sheet directly (or export it to a CSV file) and then write code to validate the JSON against the data you read? You don't have to put the validation results back into the Excel sheet, do you?
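For example, here is a minimal sketch in Python, assuming the sheet is exported to CSV with two columns named field and expected; the file names and column names are hypothetical, so adapt them to your layout:

    import csv
    import json

    JSON_PATH = "payload.json"     # hypothetical path to the JSON under test
    CSV_PATH = "test_data.csv"     # hypothetical export of the Excel sheet

    def load_expected(csv_path):
        # Read expected values keyed by field name; assumes 'field' and 'expected' columns.
        with open(csv_path, newline="", encoding="utf-8") as fh:
            return {row["field"]: row["expected"] for row in csv.DictReader(fh)}

    def validate(json_path, expected):
        # Compare each expected value against the JSON document and collect mismatches.
        with open(json_path, encoding="utf-8") as fh:
            data = json.load(fh)
        failures = []
        for field, expected_value in expected.items():
            actual = data.get(field)
            # CSV values come in as strings, so compare string representations.
            if str(actual) != str(expected_value):
                failures.append((field, expected_value, actual))
        return failures

    if __name__ == "__main__":
        for field, expected_value, actual in validate(JSON_PATH, load_expected(CSV_PATH)):
            print(f"FAIL {field}: expected {expected_value!r}, got {actual!r}")

This keeps the sheet small (one row per field) and leaves the results in the console or a log rather than writing them back into Excel.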
I want to write some data to the second sheet of a CSV file using the FileConnector in IBM TDI/SDI.
The first sheet of the same file has data which should not be overwritten.
Is it possible to do so?
Any lead will be appreciated! Thank you.
CSV files do not have 'sheets'.
They are plain-text files of tabular data with a single structure for the whole file, resulting in a single table.
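To illustrate the difference outside of TDI/SDI, here is a minimal Python sketch (assuming openpyxl is installed; the file and sheet names are hypothetical): appending to a CSV only ever extends its one table, while separate sheets require a workbook format such as .xlsx.

    import csv
    from openpyxl import load_workbook

    # Appending to a CSV always extends the single table it contains.
    with open("data.csv", "a", newline="", encoding="utf-8") as fh:
        csv.writer(fh).writerow(["value1", "value2"])

    # To keep existing data on one sheet and write new data to another,
    # the target must be a workbook format such as .xlsx.
    wb = load_workbook("data.xlsx")        # existing workbook; first sheet stays untouched
    ws = wb.create_sheet("Sheet2")         # add a second sheet
    ws.append(["value1", "value2"])
    wb.save("data.xlsx")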
So I have an input CSV sheet that I want to copy into an output CSV sheet. The output CSV sheet has all the columns in the input sheet, plus a bunch of other columns. (I will be copying data into those from other input sheets later.)
When I run the pipeline containing my Copy Activity, the only columns present in the new output sheet are the 5 columns from the input sheet, I assume because those are the only ones in the mapping. I've also tried creating 15 "Additional Columns" in the "source" section of the Copy Activity, trying out things like "test", \"test\", test, #test, #pipeline().DataFactory, $$FILEPATH, etc., but when I debug the pipeline and go back to my container and look at the output sheet, still only the 5 columns from the input sheet are present!
How do I get the output sheet to contain columns that are not present in the input sheet? Do I need to create an ARM template?
I am doing this entirely via the Azure Portal, btw.
This will be much easier to design in ADF's Data Flows instead, by creating Derived Columns to append to your output sink schema.
This works fine on my side. Are there any differences?
I have imported JSON data into my sheets. I want to use functions such as =STDEV and =AVERAGE on the imported data, but I keep getting a #DIV/0! error when I do. What can I do to solve this?
Wrap your JSON import formula in ARRAYFORMULA and multiply it by 1 to convert string numerals into numeric values:
=ARRAYFORMULA(IMPORTJSON(...)*1)
Or you can just convert to numbers after the import; say your imported JSON data is in A1:A10, then:
=ARRAYFORMULA(AVERAGE(A1:A10*1))
=ARRAYFORMULA(AVERAGE(VALUE(A1:A10)))
I have an Excel sheet that has three columns: A, B and C.
A and B contain regular text, a first name and last name, if you will. The third column, C, contains JSON data.
Is there a way I can read this file into Power BI and have it automatically parse out the JSON data into additional columns? In the Power BI Desktop client, I can use an Excel sheet as the data source, and it loads my data into the client; however, it naturally treats column C as just text. I've had a look at the Advanced Editor and I'm thinking I might have to include something in there to help parse that out.
Any ideas?
I figured it out. In the Query Editor, right-click on the column that contains the JSON, go to Transform and select JSON. It will parse out the data, allowing you to add the fields in as additional columns.
Extremely handy!
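For reference, the same idea (parse a JSON text column and expand it into extra columns) can be sketched outside Power BI in Python with pandas; the column names and values below are made up for the example:

    import json
    import pandas as pd

    # Toy frame standing in for the Excel sheet: two text columns plus JSON stored as text.
    df = pd.DataFrame({
        "FirstName": ["Ada"],
        "LastName": ["Lovelace"],
        "C": ['{"age": 36, "city": "London"}'],
    })

    # Parse the JSON strings, expand the resulting records into columns,
    # and place them alongside the original text columns.
    parsed = pd.json_normalize(df["C"].apply(json.loads).tolist())
    result = pd.concat([df.drop(columns=["C"]), parsed], axis=1)
    print(result)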
I have a tabular CSV file that has seven columns and contains the following data:
ID,Gender,PatientPrefix,PatientFirstName,PatientLastName,PatientSuffix,PatientPrefName
2 ,M ,Mr ,Lawrence ,Harry , ,Larry
I am new to Pentaho, and I want to design a transformation that moves the data (the values of the 7 columns) to an empty Excel sheet. The Excel sheet has different column names but should carry the same data, as shown:
prefix_name,first_name,middle_name,last_name,maiden_name,suffix_name,Gender,ID
I tried to design a transformation using the following series of steps, but it gives errors at the end that I could not interpret.
What is the proper design to move the data from the CSV file to the Excel sheet in this case? Any ideas for solving this problem?
As @Brian.D.Myers mentioned in the comment, you can use the Select values step. Here is a step-by-step explanation:
1. Select all the fields in the CSV file input step.
2. Configure the Select values step as follows (rename the fields to the target column names).
3. In the Content tab of the Excel Writer step, click the Get fields button and fill in the fields. Alternatively, you can use the Excel Output step as well.
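If it helps to see the field mapping spelled out, here is a rough, non-Pentaho Python sketch of what the rename/reorder amounts to. The mapping below is an assumption based on the two header lists (middle_name and maiden_name have no obvious source column and are left empty, and PatientPrefName is dropped); openpyxl is assumed to be installed and the file names are placeholders.

    import csv
    from openpyxl import Workbook

    MAPPING = [                      # (target column, source column or None)
        ("prefix_name", "PatientPrefix"),
        ("first_name",  "PatientFirstName"),
        ("middle_name", None),       # assumed: no source column, left empty
        ("last_name",   "PatientLastName"),
        ("maiden_name", None),       # assumed: no source column, left empty
        ("suffix_name", "PatientSuffix"),
        ("Gender",      "Gender"),
        ("ID",          "ID"),
    ]

    wb = Workbook()
    ws = wb.active
    ws.append([target for target, _ in MAPPING])           # new header row

    with open("patients.csv", newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh, skipinitialspace=True):
            ws.append([(row[src].strip() if src else "") for _, src in MAPPING])

    wb.save("patients.xlsx")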