Power Automate Desktop - Convert a data table (with multiple rows) to JSON

I've been researching the best way to convert a data table from Excel (with multiple rows) to JSON.
I found a solution on here that appears to "mostly" work, but I am not familiar enough with JSON to know whether it's converting multiple rows correctly.
Here is the data table that I am starting with (from Excel).
Here are the steps I took to convert this to JSON
Step 1: Set a variable called INVObject to empty to initialize it
Step 2: Added a For each to loop through each data row in the data table
Step 3: Added a Set variable inside the loop to set INVObject (a custom object) from the data table on each iteration
Step 4: Converted the custom object INVObject to JSON
Results: There is one row/object with all 3 rows from the data table on the same line.
If you scroll to the right, the 2nd row eventually starts, and then the 3rd row.
I was expecting to see 3 lines/objects to represent the 3 different rows in the data table.
Can someone provide some insight as to whether I am doing something wrong, or if this is the expected result for multiple rows?
Thank You!

There is an option in Actions under Variables: 'Convert Custom Object to JSON'
https://learn.microsoft.com/en-us/power-automate/desktop-flows/actions-reference/variables#convertcustomobjecttojson
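For reference, a correctly converted 3-row data table should come out as a JSON array with one object per row, along these lines (the column names below are placeholders, not your table's actual headers):
[
  { "Invoice": "1001", "Amount": "50.00" },
  { "Invoice": "1002", "Amount": "75.00" },
  { "Invoice": "1003", "Amount": "20.00" }
]
If everything instead appears in one object on a single line, the conversion was most likely fed the whole table (or a concatenation of the rows) rather than one object per row.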


AWS Glue: crawler does not identify metadata when CSV contains string and timestamp/date values

I have come across an issue when using a CSV file as input to a crawler:
the crawler doesn't identify the column headers when all the data in the CSV is in string format.
#P1 Headers are displayed as col0, col1, ..., colN.
#P2 The actual column names are treated as data.
#P3 The metadata is wrong (column datatypes are shown as string even when the CSV dataset contains date/timestamp values).
If we use a custom CSV classifier, we have to mention the column headers manually.
#P2 then gets covered, i.e. the column names are no longer read as data; however,
#P1 remains the same: headers are still displayed as col0, col1, ..., colN.
There are 3 things I want to avoid in order to achieve the expected result:
A CSV with strings only should show the actual column names instead of col0, col1, ..., colN.
The metadata of the generated table should be correct (i.e. date/timestamp, string) once it is crawled by the crawler.
If a custom classifier is used, we need to mention the column header names manually in the classifier, yet the result is still not satisfactory.
I need a generic solution instead of manual intervention.
I have gone through this document: here.
If anyone has already implemented a solution, please help.
I got a solution to one of the above points: the headers (i.e. the first line of the CSV) are displayed by using 'Has heading' in the CSV classifier.
However, solutions for the following are yet to be figured out:
The metadata of the CSV file is shown as string even if a column contains timestamp/date values; the crawler reads these datatypes as string.
The custom classifier needs manual intervention: I have mentioned all the column names in the classifier. Is there a generic solution?
If you are using pandas' to_csv to write the dataframe, then to avoid getting column names such as col0, col1 and so on, add the parameter
index_label='index'. Note that to_csv is a DataFrame method, not a top-level pandas function:
df.to_csv('output.csv', index_label='index')
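On the classifier side, one way to reduce the manual work is to create a custom CSV classifier that only declares that a header row is present, so the crawler reads the column names from the file instead of you listing them one by one. A minimal sketch with boto3 (the classifier name is made up; this addresses the header naming, not the date/timestamp typing):

import boto3

glue = boto3.client("glue")

# Declare "first row is a header" without hard-coding any column names
glue.create_classifier(
    CsvClassifier={
        "Name": "csv-with-header",  # hypothetical classifier name
        "Delimiter": ",",
        "QuoteSymbol": '"',
        "ContainsHeader": "PRESENT",
    }
)

Attach the classifier to the crawler and the generated table should pick up the real header names even when every value is a string.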

Apache NiFi: Creating new column using a condition

I have asked a similar question before, yet I wasn't able to find a solution for my problem through that approach. I have a CSV which looks like this:
studentID,regger,age,number
123,west,12,076392367
456,nort,77,098123124
231,west,33,076346325
I want to add a new column and fill in its values according to the data in the number field. This is the logic:
If the first 4 digits of the value in the number column are equal to "0763", then the new column (named status) must be set to INSIDE; for any other value it is OUTSIDE.
As described in the logic, the output must look like this:
studentID,regger,age,number,status
123,west,12,076392367,INSIDE
456,nort,77,098123124,OUTSIDE
231,west,33,076346325,INSIDE
My Approach
I tried to achieve this by first duplicating the number column into the status column, and then taking the first 4 digits and working from there.
I hope you can suggest a NiFi workflow to make this possible.
I used the UpdateRecord processor twice and got the results that you want.
Input
I started with your input data.
studentID,regger,age,number
123,west,12,076392367
456,nort,77,098123124
231,west,33,076346325
Process
First, set the UpdateRecord processor as follows:
Record Reader: CSVReader
Record Writer: CSVRecordSetWriter
Replacement Value Strategy: Record Path Value
/status: /number
This creates the new status column with the value of the number column.
Second, the first output should go to another UpdateRecord processor with the options:
Record Reader: CSVReader
Record Writer: CSVRecordSetWriter
Replacement Value Strategy: Literal Value
/status: ${field.value:substring(0,4):equals('0763'):ifElse('INSIDE','OUTSIDE')}
and this will give you the final results.
Be aware that the number column must not be treated as an integer column (the leading zero would be lost), so you have to set the CSVReader's Schema Access Strategy option to Use String Fields From Header.
Output
studentID,regger,age,number,status
123,west,12,076392367,INSIDE
456,nort,77,098123124,OUTSIDE
231,west,33,076346325,INSIDE
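If you ever want to sanity-check the flow's output outside NiFi, the same derivation is a few lines of pandas (a sketch; the file names are hypothetical, and number must be read as a string to keep its leading zero):

import pandas as pd

# Read "number" as a string so the leading zero is not stripped
df = pd.read_csv("students.csv", dtype={"number": str})

# First four digits == "0763" -> INSIDE, anything else -> OUTSIDE
df["status"] = df["number"].str[:4].eq("0763").map({True: "INSIDE", False: "OUTSIDE"})

df.to_csv("students_with_status.csv", index=False)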
You can also try the logic below:
SplitText ->
ExtractText processor ->
RouteOnAttribute (add a condition checking whether the first four digits are 0763)
-----Match relation--> ReplaceText (extracted attribute from file + "INSIDE") -> PutFile
-----Unmatch relation--> ReplaceText (extracted attribute from file + "OUTSIDE") -> PutFile
Hope this helps.

Best way to parse a big and intricate JSON file with OpenRefine (or R)

I know how to parse JSON cells in OpenRefine, but this one is too tricky for me.
I've used an API to extract the calendars of 4730 Airbnb rooms, identified by their IDs.
Here is an example of one JSON file: https://fr.airbnb.com/api/v2/calendar_months?key=d306zoyjsyarp7ifhu67rjxn52tv0t20&currency=EUR&locale=fr&listing_id=4212133&month=11&year=2016&count=12&_format=with_conditions
For each ID and each day of the year from now until November 2017, I would like to extract the availability of the room (true or false) and its price on that day.
I can't figure out how to parse out this information. I guess that it involves a series of nested forEach calls, but I can't find the right way to do this with OpenRefine.
I've tried, of course,
forEach(value.parseJson().calendar_months, e, e.days)
The result is an array of arrays of dictionaries that confuses me.
Any help would be appreciated. If the operation is too difficult in OpenRefine, a solution with R (or Python) would also be fine for me.
Rather than just creating your project as text and working with GREL to parse it out...
The best way is to select the JSON record part that you want to work with using our visual importer wizard for JSON and XML files (you can even use a URL pointing to a JSON file, as in your example). (A video tutorial shows how: https://www.youtube.com/watch?v=vUxdB-nl0Bw )
Select the JSON part that contains the records you want to parse and work with (this can be any repeating part; just select one of them and OpenRefine will extract all the rest).
Limit the number of data rows you want to load during creation, or leave the default of all rows.
Click Create Project and now you're in Rows mode. However, if you think that Records mode might be better suited for the context, just import the project again as JSON and then select the next outside area of the content, perhaps a larger array that contains a key field. In the example, the key field would probably be the date, which is why I would highlight the whole record for a given date. This way OpenRefine will have a key for each record, and Records mode lets you work with them better than Rows mode.
Feel free to take this example, make it better and even more helpful for all, and add it to our Wiki section on How to Use.
I think you are on the right track. The output of:
forEach(value.parseJson().calendar_months, e, e.days)
is hard to read because OpenRefine (OR) and JSON both use square brackets to indicate arrays. What you are getting from this expression is an OR array containing twelve items (one for each month of the year). The items in the OR array are JSON - each one an array of days in the month.
To keep the steps manageable I'd suggest tackling it like this:
First use
forEach(value.parseJson().calendar_months,m,m.days).join("|")
You have to use 'join' because OR can't store OR arrays directly in a cell - it has to be a string.
Then use "Edit Cells->Split multi-valued cells" - this will get you 12 rows per ID, each containing a JSON expression. Now for each ID you have 12 rows in OR
Then use:
forEach(value.parseJson(),d,d).join("|")
This splits the JSON down into the individual days
Then use "Edit Cells->Split multi-valued cells" again to split the details for each day into its own cell.
Using the JSON from example URL above - this gives me 441 rows for the single ID - each contains the JSON describing the availability & price for a single day. At this point you can use the 'fill down' function on the ID column to fill in the ID for each of the rows.
You've now got some pretty easy JSON in each cell - so you can extract availability using
value.parseJson().available
etc.
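Since you mentioned Python as an option: the nesting is just months containing days, so a plain loop flattens it. A sketch against the example URL; the available key matches the GREL above, while date and price are assumed key names you should verify against one actual day object:

import json
from urllib.request import urlopen

# Example URL from the question (one listing, 12 months from Nov 2016)
url = ("https://fr.airbnb.com/api/v2/calendar_months"
       "?key=d306zoyjsyarp7ifhu67rjxn52tv0t20&currency=EUR&locale=fr"
       "&listing_id=4212133&month=11&year=2016&count=12&_format=with_conditions")

data = json.load(urlopen(url))

rows = []
for month in data["calendar_months"]:      # twelve items, one per month
    for day in month["days"]:              # one item per day
        rows.append({
            "listing_id": 4212133,
            "date": day.get("date"),            # assumed key name
            "available": day.get("available"),  # as in value.parseJson().available
            "price": day.get("price"),          # may be a nested object; inspect one day
        })

print(len(rows), rows[0] if rows else None)

Loop the same code over your 4730 listing IDs to build the full table.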

How to change Column Delimiter in MS VSTS for web performance test

I am using Microsoft VSTS to performance test a web application.
I am adding a data pool (.csv file) to parameterize multiple values, but the problem is that the .csv file is being read as column-delimited, like:
VariableA,VariableB,Variable3
Test1,Test2,Test3
Test4,Test5,Test6
But I want these multiple values in a single column, because with the column-delimited type selected, the .csv file automatically splits the values into different columns.
In HP LoadRunner we have 3 delimiter options [Comma, Tab, Space]; I tried to find a similar option in the VSTS data-pool settings but was not able to find one.
I am trying to do this:
VariableA
Test1,Test2,Test3
Test5,Test6,Test7
Kindly help me out.
If you want to use Test1,Test2,Test3 in the first iteration and Test5,Test6,Test7 in the second iteration, then try the below in your csv file:
VariableA
"Test1,Test2,Test3"
"Test5,Test6,Test7"
The quotes should cause Test1,Test2,Test3 to be treated as a single value.
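This relies on standard CSV quoting, which most readers honor: a field wrapped in double quotes keeps its embedded commas. A quick way to confirm the behavior, sketched with Python's csv module:

import csv
import io

# Same shape as the data pool above: quoted fields containing commas
data = 'VariableA\n"Test1,Test2,Test3"\n"Test5,Test6,Test7"\n'

for row in csv.reader(io.StringIO(data)):
    print(row)
# ['VariableA']
# ['Test1,Test2,Test3']   <- one field, commas preserved
# ['Test5,Test6,Test7']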

SSIS 2012 Full Result Set to set variables

I'm trying to create an SSIS package that reads a mapping table containing foreign key information and the tables they point to, and stores the full result set to be used to populate 7 variables representing columns in the result set, which are then used to update an xxxSID column on 6 servers.
I'm stuck! Please help.
I've created the SQL Task with a query to build the result set and mapped it to an object variable SidMap, and the task runs successfully; however, I don't know where to go from there. Some blogs say to create a Foreach Loop Container and map the object variable to the collection, which I've done. I've also created string variables representing the 7 columns but don't know how to populate them.
The blogs I've read so far suggest this can only be done from a Script Task. Is that true? If so, how is it done?
Another user posted a question that sounded like he may be doing the same or a very similar thing using a SQL Task, but I didn't see how he was populating the column object variables and then converting the data into string variables.
SSIS Result set, Foreachloop and Variable
Currently I'm updating the tables manually using a cursor. If anyone cares to see the code I can post it, but I didn't think it relevant to the question other than to provide a clearer picture of what I'm doing.
I would create a For Each Loop Container using the Foreach ADO Enumerator, and map the object variable to the collection. I would map the 7 string variables on the Variable Mappings page.
This process is documented in detail here:
http://technet.microsoft.com/en-us/library/cc879316.aspx
A common "gotcha" is mismatched datatypes between the result set and the variables. To avoid this I always wrap CAST(... AS NVARCHAR(4000)) or similar around the columns in the query that produces the dataset, and all my receiving variables are of the String datatype.
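For example, the query in the Execute SQL Task can cast each returned column up front; a sketch, with a hypothetical mapping table and column names:

SELECT
    CAST(ForeignKeyName AS NVARCHAR(4000)) AS ForeignKeyName,
    CAST(TargetTable    AS NVARCHAR(4000)) AS TargetTable,
    CAST(TargetColumn   AS NVARCHAR(4000)) AS TargetColumn
FROM dbo.FKMapping;  -- hypothetical mapping table

With every column arriving as NVARCHAR, each one maps cleanly to a String variable on the Foreach Loop's Variable Mappings page.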