How to log processed md files with JUnit? - Concordion

I work on a project using Concordion, Maven and JUnit. There are a lot of md files which are processed, but the logs do not contain the name of the processed md file, so it is hard to link log lines to executed tests.
How is it possible to log the name of the processed md file?

Assuming you want the details logged in the output specification, take a look at the echo command (or the embed extension if you want to alter the HTML formatting).
If you just want it logged to a log file, you'll need to look at a logging framework (e.g. slf4j/logback). The logback extension can make the log accessible from the Concordion output specification.
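If you go the slf4j/logback route, one simple option is to log from the fixture itself, since by convention each md specification is paired with a fixture class of a matching name. A minimal sketch (the fixture and specification names here are hypothetical, and it assumes slf4j is on the classpath):

import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@RunWith(ConcordionRunner.class)
public class MySpecFixture {  // paired with MySpec.md by Concordion's naming convention

    private static final Logger LOG = LoggerFactory.getLogger(MySpecFixture.class);

    public MySpecFixture() {
        // The fixture class name identifies the md specification being processed,
        // so this line lets you tie the following log output to that specification.
        LOG.info("Processing specification for fixture: {}", getClass().getSimpleName());
    }
}

If you put the logging call in a shared base fixture, every specification gets tagged in the log without editing each fixture individually.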

Related

Using Apache NiFi to collect files from a 3rd party REST API - Flow advice

I am trying to create a flow within Apache NiFi to collect files from a 3rd party RESTful API, and I have set up my flow with the following:
InvokeHTTP - ExtractText - PutFile
I can collect the file that I am after, as I have specified it within my Remote URL. However, when I get all of the data from said file, it outputs multiple (hundreds of) copies of the same file to my output directory.
3 things I need help with:
1: How do I get the flow to output the file as a readable .csv rather than just a file with no extension?
2: How can I stop the processor once I have all of the data that I need?
3: The JSON file that I have been supplied with gives me the option to get files from a certain date range:
https://api.3rdParty.com/reports/v1/scheduledReports/877800/1553731200000
Or I can choose a specific file:
https://api.3rdParty.com/reports/v1/scheduledReports/download/877800/201904/CTDDaily/2019-04-02T01:50:00Z.csv
But how can I create a command in NiFi to automatically check for newer files, as this process will be running daily and we will be looking at downloading a new file each day?
If this is too broad, please help me by letting me know so I can edit this post.
Thanks.
Note: the 3rdParty host name has been renamed for security reasons, therefore the links will not work directly. Thanks.
1) You can change the filename of the flow file to anything you want using the UpdateAttribute processor. If you want it to have a ".csv" extension, add a property named "filename" with a value of "${filename}.csv" (without the quotes when you enter it).
2) By default most processors have a scheduling strategy of timer driven with a run schedule of 0 seconds, which means keep running as fast as possible. Go to the processor's configuration, and on the Scheduling tab set the appropriate schedule; it sounds like you probably want CRON scheduling to run it daily.
3) You can use NiFi expression language statements to create dynamic time ranges. I don't fully understand the syntax for the API that you have to communicate with, but you could do something like this for the URL:
https://api.3rdParty.com/reports/v1/scheduledReports/877800/${now()}
Where now() returns the current date and time; if the API expects an epoch timestamp in milliseconds, you can chain ${now():toNumber()}.
You can also format it to a date string if necessary:
${now():format('yyyy-MM-dd')}
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html

Copy/Move files in PDI / Spoon yields 'is not a file' error

I am trying to automate weekly generation of a database. As a first step in this process, I need to obtain a set of files from network location M:\. The process is as follows:
Delete any possibly remaining old source files from my local folder (REMOVE_OLD_FILES).
Obtain the names of the required files using regular expressions (GET_FILES).
Copy the files from the network location to my local folder for further processing (COPY/MOVE FILES).
Step 3 is where I run into trouble: I frequently receive the error below:
Error processing files. Exception : org.apache.commons.vfs.FileNotFoundException: Could not read from "file:///M:/FILESOURCE/FILENAME.zip" because it is a not a file.
However, when I manually locate the 'erroneous' file on the network location and try to open or copy it, there are no problems. If I then re-run the Spoon job, no errors occur for this file (although the next file might lead to an error).
So far, I have verified that steps 1 and 2 run correctly: more specifically, there are no errors in the file names returned from step 2.
Obviously, I would prefer not having to manually open all the files first to ensure that Spoon can correctly copy them. Does anyone have an idea what might be causing this behaviour?
For completeness, below are the parameters selected in the COPY/MOVE FILES step.
I was facing the same issue with different clients, and finally I tried a basic approach that resolved it. It might help in your case as well; other users can follow this rule too.
Just try this: create all required folders with the Spoon job entry "Create a Folder", and deactivate/delete those hops from your job or transformation once your folders are created.
This happens because the user you are using to delete the file(s) is not recognized as a Windows user. Once your folders are in place you can remove the "Create a Folder" steps from your job.
The path to the file is wrong. If you are running Spoon in a Windows environment you should use the Windows format for file paths. Try changing from
"file:///M:/FILESOURCE/FILENAME.zip"
To
"M:\FILESOURCE\FILENAME.zip"
By the way, it will only work if M: is an actual drive in the machine. If you want to access a file in the network you should use the network path to the shared folder, this way:
"\\MachineName\M$\FILESOURCE\FILENAME.zip"
or
"\\MachineName\FILESOURCE\FILENAME.zip"
If you try to access a file on a network-mounted drive, it won't work.

How to append results to one HTML output file for JMeter tests using Ant

I have the following issue.
I have 100+ JMeter tests as separate files, with the tendency to add more. Using Ant, I have configured the results to go into a separate HTML output file for each test. So now that I have 100+ tests, I get 100+ resulting HTML files, and I need to check every single one to see whether the tests ran OK.
My question is: how can I make Ant append the results into one HTML file for all 100+ tests, so I can see at a single glance that the tests ran OK?
I guess I either need to modify the ..extras/build.xml file in JMeter or modify the command line where I invoke my tests via Ant.
Thank you in advance.
If you are using the JMeter Ant Task, try this - it uses a FileSet for the test plans:
<jmeter
        jmeterhome="c:\jakarta-jmeter-1.8.1"
        resultlog="${basedir}/loadtests/JMeterResults.jtl">
    <testplans dir="${basedir}/loadtests" includes="*.jmx"/>
</jmeter>
So only one result file will be generated, which is then transformed into HTML.

Camel streaming splitter behavior different between file2 and ftp2

I'm new to Camel and the lack of similar questions online leads me to believe I'm doing something silly. I am using Camel 2.12.1 components and am parsing large CSV files both from local directories and by downloading them over SFTP. I've found that
split(body().tokenize("\n")).streaming().unmarshal().csv()
works for local files (Windows 7); I get multiple exchanges with a
List<List<String>>
for each line in the CSV file. But when I use that same route syntax with an SFTP component (connecting to a Linux server to download the files), I get a single exchange containing a single line that reads like the output of "ls":
-rwxrwxrwx 1 userName userName 83400 Dec 16 14:11 fileName.csv
Through trial and error, I found that
split(body()).streaming().unmarshal().csv()
with the SFTP component will correctly load and parse the file, but it doesn't do it in streaming mode; it loads the entire file into memory before unmarshalling it into a single exchange.
I found a similar bug report (https://issues.apache.org/jira/browse/CAMEL-6231) from Camel 2.10 which Claus closed as Invalid, indicating the reporter was using thread and parallel with stream incorrectly, but I'm not configuring either of those capabilities.
The SFTP endpoint URI I'm using is:
sftp://192.168.1.1?fileName=fileName.csv&username=userName&password=secret!&idempotent=true&localWorkDirectory=tmp
The file endpoint URI is:
"file:test/data?noop=true&fileName=fileName.csv"
Does anyone have an idea what I'm doing wrong?
Make an intermediate route to solve the problem:
<route id="StagingFtpFileCopy">
    <from uri="ftp://{{uriFtpPath}}"/>
    <to uri="file://data/staging"/>
</route>
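If you prefer the Java DSL, a rough equivalent of that two-route workaround might look like the sketch below (the endpoint options and paths are placeholders rather than values from the question):

import org.apache.camel.builder.RouteBuilder;

public class StagingFtpRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Route 1: just download the remote file into a local staging directory.
        from("sftp://192.168.1.1?username=userName&password=secret&fileName=fileName.csv")
            .to("file://data/staging");

        // Route 2: consume the local copy, split it line by line in streaming mode,
        // and unmarshal each line as CSV, exactly as the working file-based route does.
        from("file://data/staging?fileName=fileName.csv")
            .split(body().tokenize("\n")).streaming()
            .unmarshal().csv()
            .to("log:csvLine");
    }
}

Splitting the work into two routes means the remote download has finished before the line-by-line splitting starts, which is what the intermediate copy is buying you.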
I faced the same issue with SFTP (Camel 2.25.0). However, before splitting the route into 2 different routes (as proposed by others), I used the below URL
sftp://:22/?username=random&password=random&delay=2000&move=archive&readLock=changed&bridgeErrorHandler=true&recursive=false&disconnect=true&stepwise=false&streamDownload=true&localWorkDirectory=C:/temp
with the below route definition:
from("sftp url").split().tokenize("\n", 10, true).streaming().to("log:out")
As this route also downloads the remote file locally (the same as the 2-route option) and then treats the local file with normal streaming (as Sinsanator mentioned, it works perfectly with file), the memory footprint shows a true saw-tooth pattern while downloading (up to 100 MB); it then used up to 150 MB while processing, but again with a roughly saw-tooth shape.
One advantage (in my view) of this approach is that we can handle completion-related tasks (e.g. moving the remote file into another directory) based on actual processing completion, which is not possible automatically if we break up the routes. Also, since the download is managed by Camel, the local file gets deleted automatically when processing completes.

How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about storing binary files in databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using: a Perl CGI script, MySQL db, Linux server and the files are being uploaded to the server. I AM NOT looking to add the actual files to the db. Just their location(s).
It sounds like you have your method complete where you take the upload, make it a string, and toss it into MySQL, similar to reading a file in as a string. However, since CGI gives you a filehandle rather than a filename to read from, you are wondering where that file actually is.
If you're using CGI.pm, then upload, uploadInfo, the param for the upload, and CGI.pm's private upload files will help you deal with the uploaded file's source. Where the files are stashed after the remote client and the CGI are done usually isn't permanent, and at a minimum is volatile.
You've got a bunch of already-uploaded files that need to be added to the db? It should be trivial to dash off a one-off script to loop through all the files and insert the details into the DB. If they're all in one spot, then a simple opendir()/readdir() type loop would catch them all; otherwise you can build a list of file paths and loop over that.
If you're talking about recording new uploads on the server, then it would be something along these lines (see the sketch below):
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
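Purely to make those steps concrete, here is a language-agnostic sketch of the insert-then-rename flow, rendered in Java with JDBC rather than the question's Perl/CGI stack (the table name, columns, connection string, and method are all hypothetical):

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class RecordUpload {
    // Records an uploaded file's metadata, then moves the file to its final
    // directory using the new database ID as the filename.
    public static void recordUpload(Path tempUpload, String originalName, long size,
                                    String mimeType, Path uploadDir) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password")) {
            conn.setAutoCommit(false);            // start transaction
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO uploads (original_name, size_bytes, mime_type) VALUES (?, ?, ?)",
                    Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, originalName);
                ps.setLong(2, size);
                ps.setString(3, mimeType);
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    keys.next();
                    long id = keys.getLong(1);    // ID of the new record
                    // Move the upload to its final resting place, named after the ID.
                    Files.move(tempUpload, uploadDir.resolve(Long.toString(id)));
                }
            }
            conn.commit();  // everything went fine: commit
            // If the move throws, we never reach commit() and the uncommitted
            // transaction is discarded when the connection is closed.
        }
    }
}

Because the file only gets its final name after the row exists, the ID-as-filename trick described above falls out naturally.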