Automatically export files from a given software - csv

I have a folder with hundreds of files that were saved in the specific format of a given software (in this case Qualisys Track Manager; the file format is .qtm).
This software has the option of exporting the files to other formats such as TSV, MAT, C3D, ...
My problem: I want to export all my files to TSV format, but the only way I know is to open the software and go to File->Export->To TSV, and doing this for hundreds of files is time consuming. So I was thinking of writing a script that would take my files, drive the software, and do the export automatically.
But I have no clue how to do this. I was thinking of writing a script in Notepad++ and running it from the command window, so that I would end up with all the files in TSV format.
[EDIT] After some research I think a batch script or a PowerShell script may help me, but I have no idea how to run the software's commands automatically, or if it is even possible... (I am using Windows 10)

It is highly likely that .qtm is a proprietary file format, and PowerShell/batch would not understand it. Unless the file can be read in a known way (text, XML, etc.), they would not be able to convert it.
I googled it, and it seems QTM has a REST API interface, which would be your best chance. I'm not sure whether the documentation is available publicly; I didn't find it. I'd recommend contacting their support for the REST API documentation, to ask whether the API can handle this task, and for sample code to get you started.
Then you can make REST API calls with Invoke-RestMethod in a loop from PowerShell.
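Once you have the API details, the script itself is just a loop: enumerate the .qtm files and fire one export request per file. Invoke-RestMethod would work for that; the same shape is sketched below in Python with the requests library. The endpoint, route, and payload here are placeholders, not QTM's documented API, so adapt them to whatever their documentation actually specifies.

# Sketch only: the endpoint and payload below are hypothetical placeholders.
# Replace them with whatever QTM's REST API documentation actually specifies.
from pathlib import Path
import requests

QTM_API = "http://localhost:7979/api/v1"      # assumed local QTM REST endpoint
SOURCE_DIR = Path(r"C:\data\qtm_files")       # folder with the .qtm files

for qtm_file in SOURCE_DIR.glob("*.qtm"):
    response = requests.post(
        f"{QTM_API}/export",                  # hypothetical export route
        json={
            "file": str(qtm_file),
            "format": "tsv",
            "output": str(qtm_file.with_suffix(".tsv")),
        },
        timeout=60,
    )
    response.raise_for_status()
    print(f"Exported {qtm_file.name}")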

Related

Best data processing software to parse CSV file and make API call per row

I'm looking for ideas for open-source ETL or data-processing software that can monitor a folder for CSV files, then open and parse each CSV.
For each CSV row, the software should transform the row into JSON and make an API call to start a Camunda BPM process, passing the cell data as variables into the process.
Looking for ideas,
Thanks
You can use a Java WatchService or Spring FileSystemWatcher as discussed here with examples:
How to monitor folder/directory in spring?
referencing also:
https://www.baeldung.com/java-nio2-watchservice
Once you have picked up the CSV you can use my example here as inspiration or extend it: https://github.com/rob2universe/csv-process-starter specifically
https://github.com/rob2universe/csv-process-starter/blob/main/src/main/java/com/camunda/example/service/CsvConverter.java#L48
The example starts a configurable process for every row in the CSV and includes the content of the row as JSON process data.
I wanted to limit the dependencies of this example, so the CSV parsing logic is very simple: commas inside field values may break it, and special characters may not be handled correctly. A more robust implementation could replace the simple Java String.split(",") with an existing CSV parser library such as OpenCSV.
The file watcher would actually be a nice extension to the example. I may add it when I get around to it, but I would also accept a pull request if you fork my project.
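For orientation, here is roughly the same flow sketched in Python rather than Java: poll a folder for new CSV files, parse each row with the standard csv module (which already handles quoted commas), and start one process instance per row through Camunda's REST API. The folder path, engine URL, and process definition key are assumptions you would need to adapt.

# Rough sketch: watch a folder, parse CSVs, start one Camunda process per row.
# The folder, engine URL and process key below are assumptions.
import csv
import time
from pathlib import Path
import requests

WATCH_DIR = Path("/data/incoming")              # folder to monitor (assumed)
CAMUNDA = "http://localhost:8080/engine-rest"   # default Camunda 7 REST base URL
PROCESS_KEY = "csvRowProcess"                   # hypothetical process definition key

def start_process_for_row(row: dict) -> None:
    # Camunda expects variables as {"name": {"value": ..., "type": ...}}
    variables = {name: {"value": value, "type": "String"} for name, value in row.items()}
    resp = requests.post(
        f"{CAMUNDA}/process-definition/key/{PROCESS_KEY}/start",
        json={"variables": variables},
        timeout=30,
    )
    resp.raise_for_status()

seen = set()
while True:
    for csv_file in WATCH_DIR.glob("*.csv"):
        if csv_file in seen:
            continue
        with csv_file.open(newline="") as f:
            for row in csv.DictReader(f):       # header row becomes variable names
                start_process_for_row(row)
        seen.add(csv_file)
    time.sleep(5)                               # simple polling instead of a watcher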

Is there any way to fill in SharePoint entries by parsing a text file?

My workplace has a whole bunch of unannotated .zip files that need to be uploaded to the new file server (Windows). I've used Perl to parse the Excel files within the .zip files and create an annotation.txt file for each .zip file that contains information about it. I have thousands of zip files and do not want to enter the information for each entry manually if there is a way to automate it. I am proficient in Perl and MySQL, and I'm wondering if there is any way to use those skills to port this information into the Microsoft SharePoint site.
Thank you in advance for any advice or suggestions.
There are many, many ways to meet your requirement.
You could write an event receiver to parse the files once uploaded and set the metadata.
A better approach for your use case might be to write a .NET-based console application that references Microsoft.SharePoint.Client, uploads your files using the client-side object model (CSOM), and sets the metadata during that process, as outlined here: Upload a document to a SharePoint list from Client Side Object Model
There are also REST and ASMX web services that you could call from a non-.NET runtime process.
Plenty of options, pick the one that fits your needs and skills best.
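Since you already have Perl tooling and no .NET background, the REST web service route may be the most natural fit: it is just HTTP calls, so any language works. Below is a hedged sketch in Python of uploading one .zip and setting one metadata field. It assumes an on-premises site with NTLM authentication (via the requests_ntlm package); the site URL, library name, field name, and especially the list item type in __metadata are placeholders you would need to check against your own site.

# Hedged sketch: upload a .zip to a SharePoint document library over REST and
# set one metadata field from its annotation.txt. Site URL, credentials,
# library, field and item type names are placeholders.
from pathlib import Path
import requests
from requests_ntlm import HttpNtlmAuth          # on-prem Windows auth (assumed)

SITE = "https://sharepoint.example.com/sites/archive"     # placeholder site URL
LIBRARY = "Shared Documents"                               # placeholder library
auth = HttpNtlmAuth("DOMAIN\\user", "password")
headers = {"Accept": "application/json;odata=verbose"}

# 1. Get a form digest, required for REST write operations.
digest = requests.post(f"{SITE}/_api/contextinfo", auth=auth, headers=headers)
digest.raise_for_status()
headers["X-RequestDigest"] = digest.json()["d"]["GetContextWebInformation"]["FormDigestValue"]

zip_path = Path("example.zip")                             # placeholder file
annotation = Path("annotation.txt").read_text().strip()    # produced earlier by the Perl step

# 2. Upload the file into the library.
upload = requests.post(
    f"{SITE}/_api/web/GetFolderByServerRelativeUrl('{LIBRARY}')"
    f"/Files/Add(url='{zip_path.name}',overwrite=true)",
    data=zip_path.read_bytes(),
    auth=auth,
    headers=headers,
)
upload.raise_for_status()

# 3. Set metadata on the file's list item (MERGE = partial update).
item_headers = {
    **headers,
    "Content-Type": "application/json;odata=verbose",
    "X-HTTP-Method": "MERGE",
    "IF-MATCH": "*",
}
update = requests.post(
    f"{SITE}/_api/web/GetFileByServerRelativeUrl('/{LIBRARY}/{zip_path.name}')/ListItemAllFields",
    json={
        # The __metadata type varies per library; inspect an existing item to find it.
        "__metadata": {"type": "SP.Data.Shared_x0020_DocumentsItem"},
        "Title": annotation,                               # placeholder field
    },
    auth=auth,
    headers=item_headers,
)
update.raise_for_status()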

Extract Excel metadata in Linux

I have used the "extract" command, but it was never able to find as much information as FOCA found in the Excel spreadsheets I am dealing with.
For example, I am using the FOCA application to harvest and download files from the web. Afterwards, it extracts metadata from all of the files.
With regard to Excel files, it appears that these files contain more metadata than the average PDF file. FOCA is able to detect printer names, email addresses, and a few other things stored within the spreadsheet file. However, I cannot find any way to get the same information in Linux using the "extract" command.
Does anyone know a way to process these files on Linux and grab ALL of their metadata? From what I understand, the extract command may be limited.
Thanks,
Excel files store a lot of metadata within the file itself, so you would have to parse the file to get at it. Since you're on Linux and can't use the Excel interop, you could try an Excel library like ExcelWriter or something similar. ExcelWriter is written for .NET, so you'd have to use Mono.
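Alternatively, for the modern .xlsx format you don't need an Excel library at all: the file is just a ZIP container, and the document metadata lives in plain XML parts inside it, so it can be dumped on Linux with nothing but Python's standard library. (Legacy .xls files are a binary OLE2 format and need a different tool.) A minimal sketch:

# Minimal sketch: an .xlsx file is a ZIP, so its document metadata can be read
# directly from the XML parts inside it (docProps/core.xml and docProps/app.xml).
import sys
import zipfile
import xml.etree.ElementTree as ET

def dump_xlsx_metadata(path: str) -> None:
    with zipfile.ZipFile(path) as xlsx:
        # core.xml: author, last modified by, timestamps; app.xml: application,
        # company, document statistics, and similar fields.
        for part in ("docProps/core.xml", "docProps/app.xml"):
            if part not in xlsx.namelist():
                continue
            print(f"--- {part} ---")
            for elem in ET.fromstring(xlsx.read(part)):
                tag = elem.tag.split("}")[-1]        # strip the XML namespace
                if elem.text:
                    print(f"{tag}: {elem.text}")
        # Printer details live in binary parts such as xl/printerSettings/*.bin
        printers = [name for name in xlsx.namelist() if "printerSettings" in name]
        print("printer settings parts:", printers or "none")

if __name__ == "__main__":
    dump_xlsx_metadata(sys.argv[1])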

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments; the total size is around 50 MB. JIRA allows importing CSV data, so I was thinking of converting the JSON data to CSV, but all the converters I have seen online only handle a single file rather than recursing through an entire folder structure and building the CSV equivalent, which could then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
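Such a script can be very short. The sketch below walks the export, flattens each ticket into one CSV row, and keeps the Lighthouse ID so attachments can be matched up later; the JSON field names are assumptions, so open one ticket.json from your export and adjust them.

# Sketch of the "small Python script": flatten each Lighthouse ticket into one
# CSV row. The JSON field names used here are assumptions -- check your export.
import csv
import json
from pathlib import Path

EXPORT_DIR = Path("path/to/lighthouse_export")   # root of the Lighthouse export

with open("jira_import.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Lighthouse ID", "Summary", "Description", "Status", "Comments"])
    for ticket_file in EXPORT_DIR.glob("tickets/*/ticket.json"):
        data = json.loads(ticket_file.read_text())
        ticket = data.get("ticket", data)                    # assumed wrapper key
        comments = " || ".join(
            version.get("body", "") or ""                    # assumed comment field
            for version in ticket.get("versions", [])
        )
        writer.writerow([
            ticket.get("number"),                            # assumed field names
            ticket.get("title"),
            ticket.get("original_body", ""),
            ticket.get("state", ""),
            comments,
        ])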
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do in your script is to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file, containing all tickets, to import into JIRA.
In Ruby (which is what we used), it would look something like this:
Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
  JSON.parse(File.read(ticket)).each do |data|
    # access ticket data and add it to a CSV
  end
end

How might I write a program to extract my data from Google Code?

I'm about to start writing a program which will attempt to extract data from a Google Code site so that it may be imported into another project management site. Specifically, I need to extract the full issue detail from the site (description, comments, and so on).
Unfortunately Google doesn't provide an API for this, nor an export feature, so the only option looks to be extracting the data from the actual HTML (yuck). Does anyone have any suggestions on "best practice" for parsing data out of HTML? I'm aware that this is less than ideal, but I don't think I have much choice. Can anyone think of a better way, or has someone already done this?
Also, I'm aware of the CSV export feature on the issue page; however, this does not give complete data about issues (but it could be a useful starting point).
I just finished a program called google-code-export (hosted on GitHub). It allows you to export your Google Code project to an XML file, for example:
>main.py -p synergy-plus -s 1 -c 1
parse: http://code.google.com/p/synergy-plus/issues/detail?id=1
wrote: synergy-plus_google-code-export.xml
... will create a file named synergy-plus_google-code-export.xml.
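If the XML export doesn't capture everything and you do end up scraping the issue pages yourself, the usual best practice is to use a real HTML parser rather than regular expressions. A rough Python sketch with requests and BeautifulSoup is below; the URL pattern matches the example above, but the CSS class names are placeholders that would need to be checked against the actual page markup.

# Hedged sketch: fetch one issue page and pull fields out with an HTML parser.
# The selectors are placeholders -- inspect the real issue page markup first.
import requests
from bs4 import BeautifulSoup

def fetch_issue(project: str, issue_id: int) -> dict:
    url = f"http://code.google.com/p/{project}/issues/detail?id={issue_id}"
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    summary = soup.select_one(".issue-summary")              # placeholder class name
    comments = soup.select(".issue-comment")                 # placeholder class name
    return {
        "id": issue_id,
        "summary": summary.get_text(strip=True) if summary else "",
        "comments": [c.get_text(strip=True) for c in comments],
    }

print(fetch_issue("synergy-plus", 1))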