JMeter Report Dashboard - csv

The most recent version of JMeter has an option to generate a Report Dashboard, which is great, but I am struggling to customize it to match my needs.
I am running performance tests for every new version of the application.
Let's start with the current state of my reports.
I have a User-Defined Variable named Version, which I change for every new performance test run.
The timestamp serves as a second axis of comparison: it makes it possible to compare earlier results of the same version, e.g. yesterday's results against today's.
I am using the Flexible File Writer to save results to a CSV file. With this plugin it is easy to store the version number (the User-Defined Variable) in every row, which is important for the next step.
The results are imported into an Excel pivot table, from where you can do basically everything.
This works, but it would be great to generate a consolidated report directly from JMeter, and there are a few problems with that.
The Report Dashboard is created from the JMeter results file, which raises these questions:
How do I pass a User-Defined Variable into the results file?
How do I make JMeter keep appending results to an existing results file?
Currently it asks for a new filename, so it is one test per file, and I need several tests in one file.
How do I modify the JMeter properties so that I can compare results across multiple versions/dates in the JMeter Report Dashboard? Thanks.

You could use the JMeter Plugins MergeResults plugin.
Add a date prefix to the merged results.
For example:
LOGIN for date 1 will be date1:LOGIN or 2017_01_16:LOGIN
LOGIN for date 2 will be date2:LOGIN or 2017_01_17:LOGIN
https://jmeter-plugins.org/wiki/MergeResults/
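In essence, the plugin concatenates several result files and relabels every sample with its prefix. A minimal sketch of that relabeling in Python, assuming two result files in the default JTL CSV format (the file names and dates here are made up):

import csv

runs = [("2017_01_16", "results_day1.jtl"), ("2017_01_17", "results_day2.jtl")]
with open("merged.jtl", "w", newline="") as out:
    writer = None
    for prefix, path in runs:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if writer is None:
                    # Reuse the first file's header for the merged output.
                    writer = csv.DictWriter(out, fieldnames=row.keys())
                    writer.writeheader()
                row["label"] = prefix + ":" + row["label"]
                writer.writerow(row)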
Regards,
Vincent D.

Related

What is the best way of parsing CSV data in a logic app without using a custom connector?

I have an SFTP trigger in a logic app which fires when a file is added to a certain file area. The file is CSV-formatted and I want the rows to be parsed and converted into JSON. What is the best way to convert CSV data into JSON without using any custom connectors?
I cannot find any built-in connector that does this, and as far as I know there is no Logic Apps function that does it either.
Right now there is no connector/action in Logic Apps that provides an out-of-the-box solution for this requirement. You could loop through the array and do the transformation yourself, but I would not suggest using the loop and variables actions, as they take time and will cost you more.
The alternative is to use inline code (JavaScript) to do the transformation. Please note that you will need an Integration Account to run inline code.
Refer to the JavaScript code and modify it to suit your needs. I have used '_' to differentiate the nested objects. For more details you can refer to the previous discussion here.
For complex transformations you can offload this functionality to an Azure Function, write your code in any of the supported languages, and call the function from the Logic App.
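As a sketch of that option: Azure Functions support Python among other languages, and the core CSV-to-JSON conversion could look like the snippet below (the function name and sample data are illustrative, not from the original post):

import csv
import json

def csv_to_json(csv_text):
    # The first row holds the field names; DictReader maps each data row to a dict.
    return json.dumps(list(csv.DictReader(csv_text.splitlines())))

print(csv_to_json("Name,Country\r\nAlice,SE\r\nBob,DE\r\n"))
# [{"Name": "Alice", "Country": "SE"}, {"Name": "Bob", "Country": "DE"}]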
1. Created the logic app.
2. Created a container in a storage account and uploaded a CSV file to the container.
3. Used a Compose action to split the contents of the CSV file on every new line into an array.
a. Here is the expression used in the SplitLines Compose action:
split(body('Get_blob_content_(V2)'),decodeUriComponent('%0D%0A'))
b. See the Microsoft Docs reference at the end of this answer for how to write expressions.
4. Removed the last (empty) line from the previous output using another Compose action:
take(outputs('SplitLines'),add(length(outputs('SplitLines')),-1))
5. Separated the field names using a Compose action:
split(first(outputs('SplitLines')), ',')
6. Formed the JSON using a Select action:
From: skip(outputs('RemoveLastLine'), 1)
Map:
outputs('SplitFieldName')[0] -> split(item(), ',')?[0]
outputs('SplitFieldName')[1] -> split(item(), ',')?[1]
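If the expressions are hard to follow, here is the same step 3-6 pipeline mirrored in plain Python (the sample data is made up):

csv_text = "Name,Country\r\nAlice,SE\r\nBob,DE\r\n"
split_lines = csv_text.split("\r\n")       # step 3: SplitLines
remove_last = split_lines[:-1]             # step 4: drop the empty last line
field_names = remove_last[0].split(",")    # step 5: SplitFieldName
records = [dict(zip(field_names, row.split(","))) for row in remove_last[1:]]  # step 6: the Select mapping
print(records)  # [{'Name': 'Alice', 'Country': 'SE'}, {'Name': 'Bob', 'Country': 'DE'}]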
Tested the logic app and it ran successfully: the CSV file content was converted to JSON as expected.
Reference: Use data operations in Power Automate (contains video) - Power Automate | Microsoft Docs
Credit: #Iason Koulas

Using Azure Data Factory to read only one file from blob storage and load into a DB

I'd like to read just one file from a blob storage container and load it into a DB via a copy activity, after the file's arrival has set off a trigger.
Going by the Microsoft documentation, the closest I seem to get is reading all the files in order of modified date.
Would anyone out there know how to read one file after it has arrived in my blob storage?
EDIT:
Just to clarify, I want to read only the latest file automatically, without hardcoding the filename.
You can specify a single blob in the dataset. This value can be hardcoded or supplied through variables (using dataset parameters).
If you need to run this process whenever a new blob is created/updated, you can use the Event Trigger.
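If I recall the trigger correctly, the name and folder of the blob that fired it are exposed as @triggerBody().fileName and @triggerBody().folderPath, which you can pass to pipeline parameters and on to the dataset parameters.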
EDIT:
Based on your addition of "only the latest", I don't have a direct solution. Normally, you could use Lookup or GetMetadata activities, but neither they nor the expression language support sorting or ordering. One option would be to use an Azure Function to determine the file to process.
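A minimal sketch of such a function's core logic, using the azure-storage-blob Python SDK (the connection string and container name are placeholders):

from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    conn_str="<storage-connection-string>",  # placeholder
    container_name="input-files",            # placeholder
)
# Each listed blob carries a last_modified timestamp; pick the newest.
latest = max(container.list_blobs(), key=lambda b: b.last_modified)
print(latest.name)  # return this to the pipeline, e.g. as the function's JSON response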
However, if you think about the Event Trigger I mentioned above, every time it fires, the file (blob) in question is the most recent one in the folder. If you want to coalesce this across a certain period of time, something like this might work:
Logic App 1 on event trigger: store the blob name in a log [blob, SQL, whatever works for you].
Logic App 2 OR ADF pipeline on recurrence trigger: read the log to grab the "most recent" blob name.

Using Apache NiFi to collect files from a 3rd party REST API - flow advice

I am trying to create a flow in Apache NiFi to collect files from a 3rd-party RESTful API, and I have set up my flow with the following:
InvokeHTTP - ExtractText - PutFile
I can collect the file that I am after, as I have specified it within my Remote URL; however, when I get all of the data from said file, it outputs multiple copies (hundreds) of the same file to my output directory.
3 things I need help with:
1: How do I get the flow to output the file as a readable .csv rather than just a file with no extension?
2: How can I stop the processor once I have all of the data that I need?
3: The API that I have been supplied with gives me the option to get files for a certain date range:
https://api.3rdParty.com/reports/v1/scheduledReports/877800/1553731200000
Or I can choose a specific file:
https://api.3rdParty.com/reports/v1/scheduledReports/download/877800/201904/CTDDaily/2019-04-02T01:50:00Z.csv
But how can I get NiFi to automatically check for newer files? This process will be running daily and we will be looking at downloading a new file each day.
If this is too broad, please help me by letting me know so I can edit this post.
Thanks.
Note: the 3rdParty hostname has been renamed for security reasons, so the links will not work directly. Thanks.
1) You can change the filename of the flow file to anything you want using the UpdateAttribute processor. If you want it to have a ".csv" extension then you can add a property named "filename" with a value of "${filename}.csv" (without the quotes when you enter it).
2) By default most processors have a scheduling strategy of timer-driven with a run schedule of 0 seconds, which means they keep running as fast as possible. Go to the processor's configuration, open the Scheduling tab, and configure an appropriate schedule; it sounds like you probably want CRON-driven scheduling to run it daily.
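For example, assuming NiFi's Quartz-style cron syntax, a CRON schedule of 0 0 6 * * ? would run the processor once a day at 06:00.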
3) You can use NiFi expression language statements to create dynamic time ranges. I don't fully understand the syntax for the API that you have to communicate with, but you could do something like this for the URL:
https://api.3rdParty.com/reports/v1/scheduledReports/877800/${now():toNumber()}
where now() returns the current date and time, and toNumber() converts it to an epoch timestamp in milliseconds, matching the 1553731200000 in your example.
You can also format it to a date string if necessary:
${now():format('yyyy-MM-dd')}
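To make the two expressions concrete, here is what they evaluate to, mirrored in Python (NiFi's format() uses Java's SimpleDateFormat, which lines up with these patterns):

from datetime import datetime, timezone

now = datetime.now(timezone.utc)
print(int(now.timestamp() * 1000))  # ${now():toNumber()} -> epoch milliseconds
print(now.strftime("%Y-%m-%d"))     # ${now():format('yyyy-MM-dd')} -> e.g. 2019-04-02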
https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html

How to load HTML data into SQL Server (non-table format)?

I'm posting this here because I couldn't find any such scenario on the web so far. I have a webpage which contains a set of reports in both XLS and PDF formats. I need to download the Excel files from the page and load them into my database. I wish I could use the URL of the XLS file directly, but the problem is that the naming convention may change every time (Sales_Quarter1.xlsx may become Sales_Q1.xlsx the next year). The only thing that stays constant in the following example is "Sales for Calendar Year". I need to look up the file that corresponds to this text and download it before loading it into a database table.
I would like to know from the experts if this is possible.
<li>
<sub>Sales for Calendar Year 2015--All Countries </sub>
<a href="/Data/Downloads/Documents/Sales/Sales_Quarter1.xlsx">
<sub>[XLS]</sub></a><sub> , <sub>[PDF]</sub><sub>​</sub></sub>
</li>
PS: I am using SQL Server 2014.
Thanks!
Have a look at Integration Services. Create a package that pulls the web page using a Script Task, with variables holding the downloaded local filenames for the HTML file and Excel files (you will also have to parse the link out of the HTML file). Then use an Excel Source next in your package.
The variable for the Excel filename used in the Script Task will need to be set to ReadWrite as well.
You can also schedule the resulting package's execution via a SQL Agent job if you plan to run this on a recurring basis, placing logic in the script or the execution paths.
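The Script Task itself would be written in C# or VB.NET; purely to illustrate the parsing step, here is the link-extraction logic sketched in Python (the host and page path are placeholders):

import re
import urllib.request

BASE_URL = "https://example.com"  # placeholder host for the reports page
page = urllib.request.urlopen(BASE_URL + "/reports").read().decode()
# Find the anchor that follows the constant "Sales for Calendar Year" text
# and capture its .xlsx href, whatever the file is named this year.
match = re.search(r'Sales for Calendar Year.*?href="([^"]+\.xlsx)"', page, re.DOTALL)
if match:
    href = match.group(1)  # e.g. /Data/Downloads/Documents/Sales/Sales_Quarter1.xlsx
    urllib.request.urlretrieve(BASE_URL + href, "Sales.xlsx")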

Import CSV file

I need to pull data from a CSV file into a SQL Server table. Which control flow task should I use? Is it Flat File? What is the correct method to pull the data?
The problem is that I have used the Flat File task for pulling the CSV file, but the CSV file I have contains headings as the first row, the column names on the third row, and data starting from the fifth row.
Another problem is that the column names appear again after every 1000 data rows, i.e. the column headers occur multiple times in the file. Is it possible to pull the data? If so, how?
While Valentino's suggestion should work, I suggest you first work with the provider of the file to get them to deliver the data in a better format. When we get files like this we almost always push back and ask for properly formatted data, and we get it about 90% of the time. It will save you work if they fix their own dreck. In our case, the customers providing the data are paying for our programming services, and when they understand how substantially it increases their cost, they are usually more than willing to accommodate our needs.
I believe you'll first have to transform your file into a proper CSV file so that the SSIS Flat File Source component (Data Flow) can read it. If the source system cannot produce a real CSV file, we usually create custom .NET applications for the cleanup/conversion task.
An Execute Process task (Control Flow) that executes the custom app can then be called prior to the Data Flow.
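As a sketch of that cleanup step (the answer suggests a custom .NET app; the same logic is shown here in Python, with the row positions taken from the question):

def clean_csv(src_path, dst_path):
    # Rewrite the vendor file as a plain CSV the Flat File Source can read.
    with open(src_path) as src, open(dst_path, "w") as dst:
        lines = src.read().splitlines()
        header = lines[2]          # row 3 holds the real column names
        dst.write(header + "\n")
        for line in lines[4:]:     # data starts at row 5
            # Skip blank lines and the header rows repeated after every 1000 data rows.
            if line.strip() and line != header:
                dst.write(line + "\n")

clean_csv("raw_export.csv", "clean_export.csv")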