There is a custom implementation in dhtmlx gantt for upload from MPP/XML which goes to their servlet and renders the gantt. Has anyone tried to build a custom CSV upload, or is there any third party available to load a CSV into the gantt?
https://dhtmlx.com/blog/export-import-ms-project-dhtmlx-gantt-chart/
There is no such solution from DHTMLX (FYI I work for DHTMLX), and I'm not aware of any third-party service or ready-to-use solution that could be used for development.
At the code level, importing csv into gantt breaks down into three steps:
parsing CSV into an object array
mapping columns of the CSV to properties of those objects (the mandatory properties of gantt tasks - text/start_date/duration/parent)
and inserting the result into the database.
The first step is trivial. Mapping columns may require implementing some sort of UI so the user can specify which columns of the csv mean what in gantt.
For inspiration, you can check how it's done in this app https://app.ganttpro.com/ - it requires registration, but you can create a free account using a Google or Facebook account. Create a new project ("+ CREATE NEW" in the left-hand menu), select "Import from" and try uploading a csv file to see how the UI looks.
As for the last step - inserting parsed records into the db - you'll need to do some coding in order to insert tasks without losing the project hierarchy (task.parent -> task.id relations, given that the database ids of your items will likely change after inserting), but overall it shouldn't be very difficult.
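Here is a rough sketch of the first and last steps in Python, purely for illustration - the column names, the file name and the insert_task helper are all hypothetical and depend on your stack and CSV layout:

import csv

_next_id = 0
def insert_task(text, start_date, duration, parent_db_id):
    # Hypothetical DB helper: INSERT the task and return the new row id.
    global _next_id
    _next_id += 1
    return _next_id

# Hypothetical CSV layout: ID, Task name, Start, Duration, Parent
# (the Parent column refers to the ID column of another row).
with open("project.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Map CSV columns onto the mandatory gantt task properties.
tasks = [{
    "csv_id": row["ID"],
    "csv_parent": row["Parent"],
    "text": row["Task name"],
    "start_date": row["Start"],
    "duration": row["Duration"],
} for row in rows]

# Insert while preserving the hierarchy: the database assigns new ids, so keep
# a map from CSV id to new database id and rewrite each task's parent with it.
# Rows are assumed to be ordered so that parents appear before their children.
id_map = {}
for task in tasks:
    parent_db_id = id_map.get(task["csv_parent"], 0)  # 0 = project root in gantt
    new_id = insert_task(task["text"], task["start_date"],
                         task["duration"], parent_db_id)
    id_map[task["csv_id"]] = new_id

The important detail is the id map: without it, imported tasks would still point at their old CSV parent ids and the tree structure would be lost.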
If you're looking for something more specific, please update your question.
I have an SFTP trigger in a logic app which fires when a file is added to a certain file area. It is a CSV-formatted file and I want the rows to be parsed and converted into JSON. What is the best way to convert CSV data into JSON without using any custom connectors?
I cannot find any built-in connectors doing this job, and as far as I know there are no Logic Apps functions doing it either.
Right now there is no connector/action in Logic Apps that provides an out-of-the-box solution for your requirement. You could loop through the array and build the result yourself, but I would not suggest leveraging the Loop and Variables actions for this, as it may take time and cost you more.
One alternative is to leverage the Inline Code action (JavaScript) to do the conversion. Please note that you will need an Integration Account to run inline code.
Modify the JavaScript code as needed for your scenario; I have used '_' to differentiate the nested objects. For more details you can refer to the previous discussion here.
For complex transformations you can offload this functionality to an Azure Function, write your code in one of the supported languages, and call the Azure Function from the logic app.
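As a minimal sketch of what such an offloaded conversion could look like - here as an HTTP-triggered Azure Function in Python (one of the supported languages), assuming the logic app posts the raw CSV text in the request body:

import csv
import io
import json

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # The logic app sends the raw CSV text as the request body.
    csv_text = req.get_body().decode("utf-8-sig")

    # DictReader takes the first row as field names, so every data row
    # becomes a {column: value} object.
    rows = list(csv.DictReader(io.StringIO(csv_text)))

    return func.HttpResponse(json.dumps(rows), mimetype="application/json")

The logic app can then call this with an HTTP or Azure Functions action and use the JSON response directly in later steps.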
1. Created a logic app as shown below:
2. Created a container in a storage account and uploaded a CSV file to the container.
3. Next, used a Compose action to split the contents of the CSV file on every new line into an array.
a. Here is the expression used in SplitLines compose action:
split(body('Get_blob_content_(V2)'),decodeUriComponent('%0D%0A'))
b. Follow the below MS Doc to write expressions:
4. Removed the last (empty) line from the previous output using another Compose action, as shown below:
take(outputs('SplitLines'),add(length(outputs('SplitLines')),-1))
5. Separated the field names using a Compose action:
split(first(outputs('SplitLines')), ',')
6. Formed the JSON using a Select action, as shown below:
From: skip(outputs('RemoveLastLine'), 1)
Map:
outputs('SplitFieldName')[0] -> split(item(), ',')?[0]
outputs('SplitFieldName')[1] -> split(item(), ',')?[1]
7. Tested the logic app and it runs successfully; the content of the CSV file comes out formatted as JSON.
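For illustration, with a hypothetical two-column CSV like this:

Symbol,Price
IBM,140
MSFT,300

the Select action above would produce JSON of the form:

[
  { "Symbol": "IBM", "Price": "140" },
  { "Symbol": "MSFT", "Price": "300" }
]

(the values stay strings, since they come out of split()).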
Reference: Use data operations in Power Automate (contains video) - Power Automate | Microsoft Docs
Credit: #Iason Koulas
1) I have already made a transformation for getting data from a specific MySQL table (Table Input) and converting it to Text File output.
2) I have also created a Facebook developer account and page and am trying to figure out how the Facebook API works to push data from MySQL to Facebook.
3) I would appreciate it if a transformation mapping could be provided. Also, I would not like to use XML; instead I would like to use JSON.
The MySQL table is already converted to a CSV file, but I am not sure how to post the CSV file to Facebook, or whether there is a way to connect the MySQL table to Facebook directly. Please share your ideas or a transformation mapping. Thanks
I would assume you are familiar with the Facebook development API and its actions like POST, GET and so on.
Pentaho has a step called "REST Client".
You will have an API URL to post the data that you want from MySQL; the step supports several methods (GET, PUT, POST, DELETE).
Also set the application format to JSON (XML, JSON etc. are available).
As a workaround I have used the REST Client with the GET method to read data from FB.
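To give a feel for the request the REST Client step ends up making, here is a rough sketch in Python with the requests library; the page ID, access token, Graph API version and message format are all placeholders you would replace with your own:

import requests

PAGE_ID = "<your-page-id>"                 # placeholder
ACCESS_TOKEN = "<your-page-access-token>"  # placeholder

# One row pulled from the MySQL table, already converted to a dict
row = {"product": "Widget", "price": 9.99}

# Post a message to the page feed via the Graph API
resp = requests.post(
    f"https://graph.facebook.com/v2.10/{PAGE_ID}/feed",
    data={
        "message": f"{row['product']} now costs {row['price']}",
        "access_token": ACCESS_TOKEN,
    },
)
print(resp.status_code, resp.json())

In Pentaho the same thing is configured through the REST Client step fields (URL, HTTP method, application format) rather than in code.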
I currently have a chat bot that has an entity for each stock symbol. There are over 3,000. For my dialog I want to be able to detect questions like #get #price #stockSymbol. Is there a way to deal with a large number of entities without writing an if statement for each one?
You are only allowed to have 100 entities in a single workspace. However those entities can have 100,000 values.
So you could create an entity called #StockSymbol and then each value would be the Stock identifier (eg. IBM).
So you would only need one IF statement to determine it is a stock, then pass back the entity information to your calling application to take action on the value.
To do this programmatically, if it is a one-time thing you can create a CSV file like the following:
StockSymbol,IBM
StockSymbol,MSFT
StockSymbol,APPL
and so on. Then import that entity file. Alternatively you can use the workspace API to update an already deployed workspace.
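A rough sketch of the API route, assuming the Conversation v1 workspace API - the URL, credentials, workspace ID and version date below are placeholders you would replace with your own:

import csv

import requests

CONVERSATION_URL = "https://gateway.watsonplatform.net/conversation/api"  # placeholder
USERNAME = "<service-username>"   # placeholder
PASSWORD = "<service-password>"   # placeholder
WORKSPACE_ID = "<workspace-id>"   # placeholder
VERSION = "2017-05-26"            # API version date your instance supports

# Read the symbols from the same kind of CSV shown above (entity,value per line)
with open("stock_symbols.csv") as f:
    values = [{"value": row[1]} for row in csv.reader(f) if row]

# Create the @StockSymbol entity with all of its values in one request
resp = requests.post(
    f"{CONVERSATION_URL}/v1/workspaces/{WORKSPACE_ID}/entities",
    params={"version": VERSION},
    auth=(USERNAME, PASSWORD),
    json={"entity": "StockSymbol", "values": values},
)
print(resp.status_code, resp.json())

With 3,000+ symbols you may want to send the values in smaller batches if a single request turns out to be too large.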
I am sorry to say there is no process within the Conversation Service UI that has an automatic dialog creation method. In cases like this, many teams create an external script that reads a file with your entities in it and then creates a workspace JSON file with the required dialog nodes. The workspace JSON file is a relatively simple format, and I have found you can easily merge any new JSON file into an already created workspace. In fact, with the new APIs it is even possible to load the new elements into a running workspace (although if you are new to this, create a duplicate workspace and merge into that one, or download and merge via a good editor).
I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments. The total size is around 50MB. JIRA allows importing CSV data so I was thinking of trying to convert the JSON data to CSV, but all convertors I have seen online will only do a file, rather than parsing recursively through an entire folder structure, nicely creating the CSV equivalent which can then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
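A rough sketch of such a script, assuming each exported ticket sits in its own ticket.json file; the column names and JSON keys below are illustrative - you map the columns to JIRA fields in the import wizard, and the keys need to match whatever your Lighthouse export actually contains:

import csv
import glob
import json

with open("jira_import.csv", "w", newline="") as out:
    writer = csv.writer(out)
    # Illustrative columns - the JIRA CSV importer lets you map them to fields
    writer.writerow(["Lighthouse ID", "Summary", "Description", "Status"])

    for path in glob.glob("lighthouse_export/tickets/*/ticket.json"):
        with open(path) as f:
            data = json.load(f)
        # Adjust these keys to match the layout of your export
        ticket = data.get("ticket", data)
        writer.writerow([
            ticket.get("number"),
            ticket.get("title"),
            ticket.get("original_body"),
            ticket.get("state"),
        ])

Keeping the Lighthouse ID in its own column makes it much easier later to record which new JIRA issue key each ticket became, which you will need when adding attachments through the API.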
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do is in your script, start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all tickets.
In Ruby (which is what we used), it would look something like this:
require 'json'

Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket_file|
  data = JSON.parse(File.read(ticket_file))
  # access the ticket data here and add it as a row to the CSV
end
I need to pull data from a csv file into a SQL Server table. Which control task should I use? Is it Flat File? What is the correct method to pull the data?
The problem is I have used the Flat File task for pulling the csv file, but the csv file which I have contains a heading as the first row, then the column names on the third row, and the data starting from the fifth row.
Another problem is that in this file the column names come again after every 1000 data rows, i.e. the column row appears more than once. Is it possible to pull the data? If so, how?
While Valentino's suggestion should work, I suggest that first you work with the provider of the file to get them to provide the data in a better format. When we get stuff like this we almost always push it back and ask for properly formatted data, and we get it about 90% of the time. It will save you work if they will fix their own dreck. In our case, the customers providing the data are paying for our programming services, and when they understand how substantially it increases their cost, they are usually more than willing to accommodate our needs.
I believe you'll first have to transform your file into a proper CSV file so that the SSIS Flat File Source component (Data Flow) can read it. If the source system cannot produce a real CSV file, we usually create custom .NET applications for the cleanup/conversion task.
An Execute Process task (Control Flow) that executes the custom app can then be called prior to the Data Flow.
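The cleanup itself is straightforward; here is the idea sketched in Python (the answer above suggests a custom .NET app, but the logic is the same, and the file names and row positions below are assumptions based on the layout described in the question):

# Turn the irregular export into a plain CSV the SSIS Flat File Source can read.
INPUT_PATH = "raw_export.csv"     # hypothetical file names
OUTPUT_PATH = "clean_export.csv"

with open(INPUT_PATH) as src, open(OUTPUT_PATH, "w") as dst:
    lines = [line.rstrip("\n") for line in src]

    header = lines[2]             # column names are on the third row
    dst.write(header + "\n")

    for line in lines[3:]:
        # skip blank rows and every repeated copy of the column row
        if not line.strip() or line == header:
            continue
        dst.write(line + "\n")

The Execute Process task then just runs this script (or the equivalent .NET console app) before the Data Flow reads the cleaned file.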