What is the best way of parsing CSV data in a logic app without using a custom connector?

I have an SFTP trigger in a logic app which fires when a file is added to a certain file area. It is a CSV-formatted file and I want the rows to be parsed and converted into JSON. What is the best way to convert CSV data into JSON without using any custom connectors?
I cannot find any built-in connector that does this job, and as far as I know there are no Logic Apps functions that do it either.

Right now, there is no connector/action in Logic Apps that provides an out-of-the-box solution for your requirement. You could loop through the array and build the output yourself, but I would not suggest using the loop and variable actions, as they take time and will cost you more.
The alternative is to use the Inline Code (JavaScript) action to do the conversion. Please note that you will need an Integration Account to run inline code.
Refer to the JavaScript code below and modify it as needed. I have used '_' for differentiating the nested objects. For more details you can refer to the previous discussion here.
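A minimal sketch of what that inline code could look like, assuming the trigger output body holds the raw CSV text (the property path and the header handling are illustrative; adjust them to your trigger and file layout):

// Executes inside a Logic Apps "Execute JavaScript Code" action;
// workflowContext gives access to trigger/action outputs.
const csv = workflowContext.trigger.outputs.body;
const lines = csv.split(/\r?\n/).filter(line => line.length > 0);
const headers = lines[0].split(',');
const rows = lines.slice(1).map(line => {
  const cells = line.split(',');
  const row = {};
  // Headers such as "address_city" keep the '_' convention for nested fields.
  headers.forEach((header, i) => { row[header.trim()] = cells[i]; });
  return row;
});
return rows; // the action output becomes the JSON array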
For complex transformations, you can offload this functionality to an Azure Function, write your code in one of the supported languages, and call the function from the logic app.

1. Created the logic app as shown below:
2. Created a container in a storage account and uploaded a CSV file to the container.
3. Next, used a Compose action to split the contents of the CSV file on every new line into an array.
a. Here is the expression used in the SplitLines compose action:
split(body('Get_blob_content_(V2)'),decodeUriComponent('%0D%0A'))
b. Follow the MS Doc below to write expressions:
4. Removed the last (empty) line from the previous output using another Compose action:
take(outputs('SplitLines'),add(length(outputs('SplitLines')),-1))
5. Separated the field names using a Compose action:
split(first(outputs('SplitLines')), ',')
6. Formed JSON as shown below using a Select action:
From: skip(outputs('RemoveLastLine'), 1)
Map:
outputs('SplitFieldName')[0] -> split(item(), ',')?[0]
outputs('SplitFieldName')[1] -> split(item(), ',')?[1]
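For illustration (this sample is mine, not from the original post), a CSV file such as:

Name,Age
Alice,30
Bob,25

comes out of the Select action as:

[
  { "Name": "Alice", "Age": "30" },
  { "Name": "Bob", "Age": "25" }
]

Note that all values arrive as strings; wrap the map expressions in int() or float() if you need numbers.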
Tested the logic app and it ran successfully: the contents of the CSV file are converted into JSON.
Reference: Use data operations in Power Automate (contains video) — Power Automate | Microsoft Docs
Credit: Iason Koulas

Related

Best data processing software to parse CSV file and make API call per row

I'm looking for ideas for open-source ETL or data-processing software that can monitor a folder for CSV files, then open and parse the CSV.
For each CSV row, the software will transform the row into JSON and make an API call to start a Camunda BPM process, passing the cell data as variables into the process.
Looking for ideas,
Thanks
You can use a Java WatchService or Spring FileSystemWatcher as discussed here with examples:
How to monitor folder/directory in spring?
referencing also:
https://www.baeldung.com/java-nio2-watchservice
Once you have picked up the CSV, you can use my example here as inspiration or extend it: https://github.com/rob2universe/csv-process-starter specifically
https://github.com/rob2universe/csv-process-starter/blob/main/src/main/java/com/camunda/example/service/CsvConverter.java#L48
The example starts a configurable process for every row in the CSV and includes the content of the row as JSON process data.
I wanted to limit the dependencies of this example, so the CSV parsing logic applied is very simple: commas inside field values may break the example, and special characters may not be handled correctly. A more robust implementation could replace the simple Java String.split(",") with an existing CSV parser library such as OpenCSV.
The file watcher would actually be a nice extension to the example. I may add it when I get around to it, but would also accept a pull request in case you fork my project.
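For readers who would rather see the flow than run the Java project, here is a hedged sketch of the same idea in JavaScript (Node 18+ for the built-in fetch; the file name orders.csv and the process key my-process are illustrative). It uses Camunda's standard REST endpoint for starting a process instance by key:

// Read the CSV, build Camunda variables per row, start one process per row.
const fs = require('fs');

const [header, ...lines] = fs.readFileSync('orders.csv', 'utf8').trim().split(/\r?\n/);
const fields = header.split(',');

for (const line of lines) {
  const cells = line.split(',');
  // Camunda's REST API expects variables as { name: { value, type } }.
  const variables = {};
  fields.forEach((field, i) => {
    variables[field.trim()] = { value: cells[i], type: 'String' };
  });

  fetch('http://localhost:8080/engine-rest/process-definition/key/my-process/start', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ variables })
  }).then(res => console.log('started instance, HTTP', res.status));
}

As in the linked example, the naive split(',') breaks on quoted fields containing commas; swap in a real CSV parser for production use.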

Upload from CSV file to DHTMLX GANTT

There is a custom implementation in dhtmlx gantt for uploading from MPP/XML which goes to their servlet and renders the gantt. Has anyone tried to build a custom CSV upload, or are there any third parties available to load the CSV into the gantt?
https://dhtmlx.com/blog/export-import-ms-project-dhtmlx-gantt-chart/
There is no such solution from DHTMLX (FYI, I work for DHTMLX), and I'm not aware of any third-party service or ready-to-use solution that could be used for development.
At the code level, importing CSV into gantt breaks down into three steps (see the sketch after this list):
parsing the CSV into an array of objects,
mapping the columns of the CSV to properties of those objects (the mandatory properties of gantt tasks: text/start_date/duration/parent),
and inserting the result into the database.
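A minimal sketch of the first two steps (the CSV column names and the fixed mapping below are illustrative; a real importer would let the user choose the mapping, as described next):

// Parse CSV text into objects, then map columns to gantt task properties.
function csvToGanttTasks(csvText) {
  const lines = csvText.trim().split(/\r?\n/);
  const headers = lines[0].split(',');
  return lines.slice(1).map((line, i) => {
    const cells = line.split(',');
    const row = {};
    headers.forEach((h, idx) => { row[h.trim()] = cells[idx]; });
    // Map CSV columns to the mandatory gantt task properties.
    return {
      id: i + 1,
      text: row['Task Name'],
      start_date: row['Start'],        // must match gantt.config.date_format
      duration: Number(row['Duration']),
      parent: row['Parent'] || 0
    };
  });
}

// Render client-side, e.g.: gantt.parse({ data: csvToGanttTasks(csvText), links: [] });
// Persisting the tasks is the third step.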
The first step is trivial. Mapping columns may require implementing some sort of UI so the user can specify which columns of the CSV mean what in gantt.
For inspiration, you can check how it's done in this app: https://app.ganttpro.com/ (requires registration, but you can create a free account using a Google or Facebook account). Create a new project ("+ CREATE NEW" in the left-hand menu), select "Import from", and try uploading a CSV file to see how the UI works.
As for the last step, inserting the parsed records into the database: you'll need to do some coding in order to insert tasks without losing the project hierarchy (task.parent -> task.id relations, given that the database ids of your items will likely change after insertion), but overall it shouldn't be very difficult.
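A minimal sketch of that id remapping, assuming parents appear before their children in the parsed array and that db.insertTask is a placeholder for your persistence layer returning the new database id:

// Insert tasks one by one, translating CSV-level parent ids to database ids.
async function saveTasks(tasks) {
  const idMap = {}; // CSV id -> database id
  for (const task of tasks) {
    const parent = idMap[task.parent] || 0; // 0 = root of the project
    const dbId = await db.insertTask({ ...task, parent }); // db is hypothetical
    idMap[task.id] = dbId;
  }
  return idMap;
}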
If you are looking for something more specific, please update your question.

Watson Conversation dialog for large number of entities?

I currently have a chat bot that has an entity for each stock symbol. There are over 3,000. For my dialog I want to be able to detect questions like #get #price #stockSymbol. Is there a way to deal with a large number of entities without writing an if statement for each one?
You are only allowed to have 100 entities in a single workspace. However, those entities can have 100,000 values.
So you could create an entity called @StockSymbol, and then each value would be the stock identifier (e.g. IBM).
You would then only need one IF statement to determine that it is a stock, and pass the entity information back to your calling application to take action on the value.
To put this in programmatically, if it is a one-time thing, you can create a CSV file like the following:
StockSymbol,IBM
StockSymbol,MSFT
StockSymbol,APPL
and so on. Then import that entity file. Alternatively you can use the workspace API to update an already deployed workspace.
I am sorry to say there is no process within the Conversation Service UI that can create dialog automatically. In cases like this, many teams create an external script that reads a file containing your entities and then creates a workspace JSON file with the required dialog nodes. The workspace JSON file is a relatively simple format, and I have found you can easily merge any new JSON file into an already-created workspace. In fact, with the new APIs it is even possible to load the new elements into a running workspace (although if you are new to this, create a duplicate workspace and merge into that one, or download and merge via a good editor).
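A hedged sketch of such a script in JavaScript (Node), generating a workspace fragment with the entity values and a single dialog node; the node fields follow the general workspace JSON shape, but compare against an export of your own workspace before merging:

// Generate a workspace JSON fragment: one entity with many values,
// plus one dialog node that fires for any recognized stock symbol.
const fs = require('fs');

const symbols = ['IBM', 'MSFT', 'APPL']; // in practice, read these from a file

const fragment = {
  entities: [{
    entity: 'StockSymbol',
    values: symbols.map(s => ({ value: s }))
  }],
  dialog_nodes: [{
    dialog_node: 'get_stock_price',
    conditions: '#get && #price && @StockSymbol',
    output: { text: 'Looking up the price of @StockSymbol...' }
  }]
};

fs.writeFileSync('workspace-fragment.json', JSON.stringify(fragment, null, 2));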

Custom dynamic inventory scripts/plugins in Ansible

Ansible allows devs to write programs (in any language) that will return JSON describing the dynamic “snapshot” of current hosts. I'm using vSphere, which is currently not supported by Ansible OSS, and so I need to write such a "custom inventory plugin".
I can handle the querying of vSphere for a list of hosts, as well as constructing the JSON that is compatible with what Ansible is expecting.
Where the documentation completely (seemingly) falls flat is:
How do I “connect” Ansible with my inventory app? That is, say my inventory app is a simple bash script (inventory.sh), how do I configure Ansible to call bash inventory.sh and obtain JSON from it? In reality the app will likely be a Java executable (inventory.jar), but I figure that if I can figure out how to get it working with bash, I can extrapolate to Java; and
How does Ansible actually capture/fetch the JSON back from the app? STDOUT? Is this all supposed to happen over an HTTP connection? Examples? How does inventory.sh or inventory.jar communicate that JSON back to Ansible?
The inventory script has to be located on the same machine where Ansible runs. It does not communicate over HTTP; Ansible will simply parse the STDOUT of your program. The location does not matter at all; you just have to pass the path to Ansible when you call it:
ansible-playbook ... -i /path/to/your/inventory.sh
To avoid passing the inventory location every time, you could add this to your ansible.cfg:
inventory = /path/to/your/inventory.sh
You could also copy the script to /etc/ansible/hosts, which is the default location where Ansible looks for inventory files/scripts, but I prefer to keep things together, so I suggest placing it close to your playbooks/roles etc.
And (3) Is any of this documented, anywhere? Don't see anything in the Ansible docs...
It is not mentioned on the page Developing Dynamic Inventory Sources, but it can be seen in some of the examples on the page Dynamic Inventory. The docs are community managed and at times a little unstructured and lacking important information.
BTW, there is a VMware inventory script included. Looking at the source, I have seen that it imports some vSphere stuff. I have little experience with VMware, so I can't judge whether it is actually what you need, in which case you wouldn't need to write your own.
This is completely user defined. Typically you would write your dynamic inventory in Python and print a JSON dump of the output to create the inventory.
Here is an example for the use case you mentioned (vSphere): https://github.com/RaymiiOrg/ansible-vmware/blob/master/query.py
In a nutshell, you create it like a normal Python file, define the options (as he does in main), and selectively execute functions based on which options are passed. These make REST calls and return the output in the form of a JSON dump, which Ansible can parse for use as inventory.
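Since the inventory contract is just "print JSON on STDOUT, honor --list and --host", the language really doesn't matter; here is a minimal sketch in JavaScript (Node) with illustrative group and host names:

#!/usr/bin/env node
// Ansible calls the script with --list for the full inventory and
// --host <name> for per-host variables (empty here because _meta is used).
const inventory = {
  webservers: {
    hosts: ['vm1.example.com', 'vm2.example.com'],
    vars: { ansible_user: 'deploy' }
  },
  _meta: { hostvars: {} }
};

if (process.argv[2] === '--list') {
  console.log(JSON.stringify(inventory));
} else if (process.argv[2] === '--host') {
  console.log(JSON.stringify({}));
}

Make it executable (chmod +x) and pass it with -i as shown above.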

Google Charts using JSON (without Webserver!)

So I want to create a chart using Google Chart API. The problem is following:
I need to create a chart which would use JSON data for the Output.
Example:
I am creating a BarChart which should show monthly income. Each bar should represent X value of income. But this income changes monthly, and I don't want to update the new numbers (as there are plenty) manually by writing them inside the JSON structure (I will use Perl scripts that will gather the new data).
I have read plenty on the web, but everything uses JSON combined with PHP! But I do not have a web server. What I want is to create a JSON folder on my desktop, which will contain 100 JSON files.
And when I browse my HTML page and click on Monthly Income (May), it should open a new HTML page with some sort of include or call function that loads a specific JSON file from the JSON folder on my computer, corresponding to the item I have chosen on my webpage (in this case Monthly Income (May)).
I think my problem is simpler than the method with PHP updating, because I am not worried about the updates; I will update my text files with a Perl script. I just need a way to include a JSON file in my HTML page without using PHP or any of that web-server-related stuff.
Any ideas / suggestions?
Thanks,
David
You can't reliably do it cross-browser with JSON because some browsers block ajax calls from local files (e.g., file:// URLs). But you can do it with JSON-P.
You create the files like this:
dataCallback(/*...JSON here...*/)
E.g.:
dataCallback({
    "foo": "bar"
});
...and use this to load the relevant file:
<script>
function dataCallback(data) {
// Here, the data will be the deserialized (parsed) data
}
</script>
<script src="/path/to/jsonp/file"></script>
Basically this is using JavaScript rather than JSON, but the data file, although technically code, is really just a function call with the data as its argument.
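To tie this back to the Google Charts use case, a hedged sketch (the file name json/may.js, the element id chart_div, and the sample data are all illustrative; the Perl script would generate the monthly files):

<div id="chart_div"></div>
<script src="https://www.gstatic.com/charts/loader.js"></script>
<script>
// The JSON-P file json/may.js is expected to contain one call:
//   drawIncome([["Day", "Income"], ["1", 120], ["2", 95]]);
google.charts.load('current', { packages: ['corechart'] });

function drawIncome(rows) {
  const data = google.visualization.arrayToDataTable(rows);
  const chart = new google.visualization.BarChart(document.getElementById('chart_div'));
  chart.draw(data, { title: 'Monthly Income (May)' });
}

// Load the data file only after the charts library is ready, so that
// drawIncome can run as soon as the JSON-P call executes.
google.charts.setOnLoadCallback(function () {
  const s = document.createElement('script');
  s.src = 'json/may.js';
  document.head.appendChild(s);
});
</script>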