I'm looking for ideas for an Open Source ETL or Data Processing software that can monitor a folder for CSV files, then open and parse the CSV.
For each CSV row, the software will transform the row into JSON and make an API call to start a Camunda BPM process, passing the cell data as variables into the process.
Looking for ideas,
Thanks
You can use a Java WatchService or Spring FileSystemWatcher as discussed here with examples:
How to monitor folder/directory in spring?
referencing also:
https://www.baeldung.com/java-nio2-watchservice
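For the watching part, a minimal Java NIO.2 WatchService sketch could look like this (the inbox path and the handoff at the end are assumptions, adjust them to your setup):

import java.nio.file.*;

public class CsvFolderWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/data/csv-inbox"); // hypothetical inbox folder
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take(); // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                Path file = dir.resolve((Path) event.context());
                if (file.toString().endsWith(".csv")) {
                    System.out.println("New CSV: " + file); // hand the file to your parser here
                }
            }
            key.reset(); // re-arm the key, otherwise no further events are delivered
        }
    }
}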
Once you have picked up the CSV you can use my example here as inspiration or extend it: https://github.com/rob2universe/csv-process-starter specifically
https://github.com/rob2universe/csv-process-starter/blob/main/src/main/java/com/camunda/example/service/CsvConverter.java#L48
The example starts a configurable process for every row in the CSV and includes the content of the row as JSON process data.
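For reference, starting one process instance per row through the Camunda REST API boils down to a single POST per row. Here is a minimal sketch using java.net.http; the engine URL, the process key csvProcess and the variable name rowData are placeholders:

import java.net.URI;
import java.net.http.*;

public class ProcessStarter {
    public static void startProcess(String rowAsJson) throws Exception {
        // Hypothetical engine URL and process definition key
        String url = "http://localhost:8080/engine-rest/process-definition/key/csvProcess/start";
        // Pass the serialized row as a String variable to keep the sketch dependency-free
        String escaped = rowAsJson.replace("\\", "\\\\").replace("\"", "\\\"");
        String body = "{\"variables\":{\"rowData\":{\"value\":\"" + escaped + "\",\"type\":\"String\"}}}";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}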
I wanted to limit the dependencies of this example, so the CSV parsing logic applied is very simple. Commas inside field values may break the example, and special characters may not be handled correctly. A more robust implementation could replace the simple Java String.split(",") with an existing CSV parser library such as OpenCSV.
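A sketch of what that replacement could look like with OpenCSV (assuming the com.opencsv dependency is on the classpath):

import com.opencsv.CSVReader;
import java.io.FileReader;

public class CsvParsing {
    public static void parse(String path) throws Exception {
        try (CSVReader reader = new CSVReader(new FileReader(path))) {
            String[] row;
            while ((row = reader.readNext()) != null) {
                // Quoted fields, embedded commas and escape characters are handled for you
                System.out.println(String.join(" | ", row));
            }
        }
    }
}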
The file watcher would actually be a nice extension to the example. I may add it when I get around to it, but I would also accept a pull request in case you fork my project.
I have a KNIME workflow; in the middle I must execute an external program to create an Excel file.
Is there a node that allows me to achieve this? I don't need to pass any input or output, only execute the program and wait for it to generate the Excel file (I need to use this Excel file in the next nodes).
There are (at least) two “External Tool” nodes which allow running executables on the command line:
External Tool
External Tool (Labs)
In case that should not be enough, you can always go for a Java Snippet node. The java.lang.Runtime class should be your entry point.
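A minimal sketch of what that could look like inside the snippet; the executable path is a placeholder, and waitFor() blocks until the program has finished writing the Excel file:

public class RunExternalTool {
    public static void main(String[] args) throws Exception {
        // Hypothetical executable path; adjust to your program
        Process p = Runtime.getRuntime().exec(new String[] {"C:\\tools\\create-excel.exe"});
        int exitCode = p.waitFor(); // wait until the Excel file has been generated
        if (exitCode != 0) {
            throw new IllegalStateException("External program exited with code " + exitCode);
        }
    }
}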
The External Tool node can be used. The node requires inputs and outputs... but you can use a Table Creator node for the input:
This creates an empty table.
In the External Tool node, you must include an input file and an output file; depending on your use case this configuration may be meaningless, but it is required for the node to work.
In this case, the external app writes a text file with the result of the execution, so after the initial table (Table Creator node), the file can be read to get the information into KNIME.
Calling a file resulting from the concatenation (bash: cat ... >> app.js) of the following three files:
/usr/share/ceylon/1.2.0/repo/ceylon/language/1.2.0/ceylon.language-1.2.0.js
modules/com/example/helloworld/1.0.0/com.example.helloworld-1.0.0-model.js
modules/com/example/helloworld/1.0.0/com.example.helloworld-1.0.0.js
with the command nodejs app.js does nothing. The same happens when the file is used in a web page. How do I have to call that JavaScript program so that it runs without using require.js?
Please give the rules for how Ceylon modules, the run function, and the other functions contained within translate to JavaScript and are to be called.
How can I get one JavaScript file from the compilation of several Ceylon modules without concatenating them manually or with require.js?
The above is without using the Google Closure Compiler.
Given the 1.6 MB size of the language module, it makes no sense to run ceylon-js without the Google Closure Compiler.
Compiling "ceylon.language-1.2.0.js" alone with the Google Closure Compiler results in a lot of warnings.
java -jar compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS --js /usr/share/ceylon/1.2.0/repo/ceylon/language/1.2.0/ceylon.language-1.2.0.js --js_output_file lib-compiled.js
How can I get rid of those warnings?
In what order do I have to chain together the files resulting from ceylon-js with the model file and the language file to compile them in advanced mode with the Google Closure Compiler for dead code elimination?
These are 3 questions, really.
A Ceylon module is compiled to a CommonJS module. Concatenating the resulting files won't work because each file is in CommonJS format, which is a big function that returns an object with the exported declarations.
You can compile the modules with the --no-module option to get just the generated code, without it being wrapped in CommonJS format. For the language module, you can copy the file and just delete the first line and the last 5 lines.
I do not yet know how to get rid of the warnings you mention in the second question.
And as for the third question, I would recommend putting the language module first, then the rest of the files. If you have any toplevel declarations with the same name in different modules, you'll have conflicts (only the last declaration will remain), even if they're not shared, since they're all in the same module/unit.
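For example, an invocation in that order could look like this (a sketch, assuming the module files were compiled with --no-module as described above, and following the file order from the question):

java -jar compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS --js ceylon.language-1.2.0.js --js com.example.helloworld-1.0.0-model.js --js com.example.helloworld-1.0.0.js --js_output_file app-compiled.js

The Closure Compiler processes the --js inputs in the order given, so the language module comes first, then the model file, then the module itself.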
Well, I think the require.js optimizer can combine the modules into one file and then run the Google Closure Compiler; see: http://www.requirejs.org/docs/optimization.html
I have the following issue.
I have 100+ JMeter tests as separate files, with the tendency to add more. Using Ant, I have configured the results to go into a separate output HTML file for each test. So now that I have 100+ tests, I get 100+ resulting HTML files, and I need to check every single one to see whether the tests ran OK.
My question is how to make Ant append the results into one HTML file for all 100+ tests, so I can see at a single glance that the tests ran OK.
I guess I either need to modify the ..extras/build.xml file in JMeter or modify the command line where I invoke my tests via Ant.
Thank you in advance.
If you are using the JMeter Ant Task, try this - it uses a fileset for the test plans:
<jmeter jmeterhome="c:\jakarta-jmeter-1.8.1"
        resultlog="${basedir}/loadtests/JMeterResults.jtl">
    <testplans dir="${basedir}/loadtests" includes="*.jmx"/>
</jmeter>
So only one result file will be generated, which is then transformed into HTML.
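The transformation step could then be an xslt task along these lines (a sketch; the stylesheet name follows the report stylesheets shipped in JMeter's extras folder, adjust it to your version):

<xslt in="${basedir}/loadtests/JMeterResults.jtl"
      out="${basedir}/loadtests/JMeterResults.html"
      style="${jmeter.home}/extras/jmeter-results-detail-report_21.xsl"/>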
I am using an F# JSON type provider to create a type from a reference JSON document. The reference document "ReferenceItem.json" is part of the F# library. In addition I have a unit test project which tests the library. I am struggling with making the reference document available for the test project without duplicating it.
No matter how I mark "ReferenceItem.json" in Visual Studio (Content, None, Copy to Output, etc.), my test project fails to compile, because the statement JsonProvider<"ReferenceItem.json"> expects "ReferenceItem.json" to be present in the project source folder at compilation time. Including it as a linked item from the library project doesn't help: it's not copied to the test source folder at compile time. So I need to make a duplicate copy of the file in the test project.
I noticed that in F# projects I can mark files as "DesignData" or "DesignDataWithDesignTimeCreatableTypes", but I wasn't able to figure out how I can use them.
This is a tricky problem - when the F# compiler references the library, it will invoke the type provider, and so the type provider needs to be able to access the sample.
The easiest solution is to just always copy the sample JSON file so that it is in the folder from which the application starts. This is obviously sub-optimal, and so we have another way of handling this using resources.
See the "Using JSON provider in a library" section of the documentation. The idea is that you can embed the sample document as a resource in the library and specify the resource name as an additional parameter:
type WB = JsonProvider<"../data/WorldBank.json",
                       EmbeddedResource="MyLib, worldbank.json">
This will then load the resource when using the library (but it still needs the file name in the original compilation mode). This is still somewhat experimental, so please open an issue on GitHub if you cannot get it to work!