Round-off issue with the JMeter Throughput metric in the exported CSV file in JMeter 5.0

In JMeter 5.0 I can see the full Throughput number if I double-click each sampler.
But the full value does not appear when I export the same report to a .csv file.
The value is rounded off in the CSV file, and I need the full number so that I can compare it with baseline and prior deployments.
How can I deal with this? The same approach worked in the older JMeter 2.13; I recently upgraded to the latest version, 5.0, and am now facing this issue.
Could anyone help me out with this?
Thanks

Looking into the Synthesis Report plugin source:
new RateRenderer("#.0"), // Throughput
I don't see an easy way of getting the full throughput number, as it is being cut to one decimal place.
I would recommend going for the Summary Report listener instead; looking into its source, you will get 5 decimal places in the resulting table:
new DecimalFormat("#.00000"), // Throughput //$NON-NLS-1$
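For reference, here is a minimal standalone Java sketch showing how those two patterns render the same (hypothetical) throughput value:
import java.text.DecimalFormat;

public class ThroughputFormatDemo {
    public static void main(String[] args) {
        double requestsPerSecond = 123.45678;                         // hypothetical sampler throughput
        DecimalFormat synthesisFormat = new DecimalFormat("#.0");     // Synthesis Report pattern
        DecimalFormat summaryFormat   = new DecimalFormat("#.00000"); // Summary/Aggregate Report pattern
        System.out.println(synthesisFormat.format(requestsPerSecond)); // prints 123.5
        System.out.println(summaryFormat.format(requestsPerSecond));   // prints 123.45678
    }
}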
Also be aware that you can use the Merge Results tool to combine the results of two test runs into a single .jtl file and assign different prefixes to the different runs. Once done, you will be able to visualize the difference in throughput between the two runs using, for example, the Transactions Per Second listener.
You can install the Merge Results tool using the JMeter Plugins Manager.

The Throughput number is limited to 1 decimal place in the jp@gc - Synthesis Report (filtered) listener,
whereas we can still get the Throughput number up to 5 decimal places in the Summary and Aggregate Report listeners.
This only happens with the latest JMeter version along with the latest plugins and Plugins Manager.
But I need to use the jp@gc - Synthesis Report (filtered) listener specifically, as I have to use both the 90% Line response time and Std. Dev. metrics in my custom report; it also has the RegExp filtering capability that the two built-in listeners above do not have.
Hence, I have found a workaround:
I manually replaced the following older JAR files in the latest JMeter 5.0, and it works:
-JMeterPlugins-Standard.jar
-JMeterPlugins-Extras.jar
-JMeterPlugins-ExtrasLibs.jar
This lets me get the full Throughput number from the "jp@gc - Synthesis Report (filtered)" listener.

Related

mxmlc - Warning: Failed to parse corrupt data - once project reaches certain size?

I have a long running project which is compiled as modules for release, but the test suite potentially runs all the tests for every module.
The project is quite large - currently around 1250 test cases (classes), pulling in around 4000 classes in total. It's an asunit3 project, so all the test cases are listed in one AllTests.as file.
Obviously I don't run all the tests all the time - the suite takes minutes to run, so most of the time I'm running focused tests, but a couple of times a day I run the full suite, which includes integration tests and so on.
As of my last few hours of work, I'm no longer able to successfully build and run the whole suite. We have a script that allows us to filter tests using the package name or class name, so I can testpackage['modules'] or testpackage['com'] etc. I can also exclude packages - testallexcept['utils'] and so on.
I'm able to run any and all subsets of the tests, but if I try to test the whole set, I get:
Warning: Failed to parse corrupt data.
If I filter out just a few classes then I'm able to get the swf to compile and open, but it's just a white box and doesn't actually run the tests. If I filter a few more then all is fine. It doesn't appear to matter which ones I filter - as long as I take around 15 test cases out, all is fine (though I haven't found an exact number that is the line between ok / not ok.)
I'm compiling with -benchmark and get the following output:
Initial setup: 34ms
start loading swcs 7ms Running Total: 41ms
Loaded 45 SWCs: 253ms
precompile: 456ms
Files: 4013 Time: 16087ms
Linking... 91ms
SWF Encoding... 833ms
/Users/me/Documents/clients/project/bin/ProjectShellRunner.swf (4888318 bytes)
postcompile: 927ms
Total time: 17312ms
Peak memory usage: 413 MB (Heap: 343, Non-Heap: 70)
mxmlc finished compiling bin/ProjectShellRunner.swf in 18 seconds
As the peak memory usage is over the default heap in mxmlc, I increased it to
VMARGS="-Xmx1024m -Dsun.io.useCanonCaches=false "
This doesn't appear to have helped.
The way asunit3 and projectsprouts are set up pulls all the tests together into one single AllTests.as file. This is now over 2500 lines long and imports all 1250 test cases.
Is there anything I'm missing in terms of hard limits on number of classes, class length, number of imports in one class, etc? Or any settings I'm able to change other than the VM heap for java? I'm using the Flex 4.2 mxmlc compiler.
Obviously I can work around this via a script to run a series of subsets instead of one single suite, but I'd like to understand why this is happening.
Any clues?
Some extra info based on Qs from twitter:
I'm running Mac OS X 10.8.5
mxmlc is running via command line
I've tried forcing it to use the 32 bit runtime - no change
I've switched mxmlc to use headless mode, also no change

View plot for Node in KNIME_BATCH_APPLICATION

I have been using KNIME 2.7.4 for running an analysis algorithm. I have integrated KNIME with our existing application to run in batch mode using the command below:
<<KNIME_ROOT_PATH>>\\plugins\\org.eclipse.equinox.launcher_1.2.0.v20110502.jar -application org.knime.product.KNIME_BATCH_APPLICATION -reset -workflowFile=<<Workflow Archive>> -workflow.variable=<<parameter>>,<<value>>,<<DataType>>
KNIME provides different kinds of plots which I want to use. However, I am running the workflow in batch mode. Is there any option in KNIME where I can specify the node ID and "View" option as a parameter to KNIME_BATCH_APPLICATION?
I would appreciate suggestions or guidance on how to achieve this.
I posted this question in the KNIME forum and got a satisfactory answer, summarized below.
This requirement does not fit the concept of command-line execution, and there is no way for the batch executor to open the view of a specific plot node.
Hence there are two possible solutions:
Solution 1
Write the output of the workflow to a file and use any charting plugin to plot the graph and do the drill-down activity.
Solution 2
Use JFreeChart and write the image using the ImageWriter node, so it can be displayed on any screen.

large data download and local processing and storage for monotouch app

This is a more general software architecture question for the MonoTouch / Xamarin environment.
Here's my problem:
The app I am currently building downloads around 30k JSON objects (6 MB) on app launch. The data is then stored locally, so all screens make local DB (SQLite) calls.
The main issue is the time it takes to perform the download. At the moment it's about 36 seconds total on the simulator, split between the following tasks:
download ~ 10 sec
data conversion (json to native obj) ~ 16 sec
db insert ~ 10 sec
This is far too long, especially when I compare it with similar apps on the App Store. I feel like I am not doing something right here, or am not aware of an alternative approach. Here are the improvements I've implemented:
gzipped the response - currently 6 MB; with gzip it goes down to ~1 MB
installed the ServiceStack.Text JSON serializer, about 2.5x faster than JSON.NET (but 16 seconds is still too long)
flattened the JSON response, so I can execute db.InsertAll() on the response array (without extra looping etc.) for a more robust DB import (transactions)
added a one-call-per-day limitation
Now, what I want to do is to display local data on app launch and initialise download / updater in the background. The only problem is the time it takes to download + newly installed apps won't have any local data to display...
My questions are:
Is MVC 4 API -> JSON conversion -> SQLite import a good approach for this type of app? If not, what are the alternatives?
I've been thinking of having the server return the actual SQLite file instead, in a zipped response, or returning zipped DB commands... Or perhaps SQLite is not suitable for this type of app? Are there any better alternatives for local storage - a .NET serializer, XML, etc.?
Thanks for all your suggestions!
My suggestion would be to do your work asynchronously - and you're lucky since C# makes that very easy. E.g.
Start a background download;
Process (in the background) the objects as they are downloaded;
Insert (in the background) the objects as they are processed;
If applicable, update the UI (from the main thread) for every X objects you add.
Since the download is (mostly, see note) network-bound, your CPU will be idle for many seconds. That's a waste of time, considering your next step (processing) will be CPU-bound, and even more so since the step after that will likely be I/O-bound (database).
In other words, it looks like a good idea to run all three tasks simultaneously while giving progress feedback (showing data or a progress bar) to the application user.
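Here is a minimal sketch of that pipeline using the Task Parallel Library. DownloadBatches, ParseBatch, MyObject and UpdateProgress are hypothetical placeholders for your own download, ServiceStack.Text deserialization, model and UI code; db is the sqlite-net connection from the question, and InvokeOnMainThread assumes this runs inside an NSObject such as a UIViewController:
// Requires System.Collections.Generic, System.Collections.Concurrent and System.Threading.Tasks.
// Run this from a background thread so Task.WaitAll does not block the UI.
var rawBatches    = new BlockingCollection<string>(boundedCapacity: 4);
var parsedBatches = new BlockingCollection<List<MyObject>>(boundedCapacity: 4);

var download = Task.Factory.StartNew(() => {
    foreach (string chunk in DownloadBatches())               // network-bound
        rawBatches.Add(chunk);
    rawBatches.CompleteAdding();
});

var parse = Task.Factory.StartNew(() => {
    foreach (string chunk in rawBatches.GetConsumingEnumerable())
        parsedBatches.Add(ParseBatch(chunk));                 // CPU-bound (ServiceStack.Text)
    parsedBatches.CompleteAdding();
});

var insert = Task.Factory.StartNew(() => {
    foreach (var batch in parsedBatches.GetConsumingEnumerable()) {
        db.InsertAll(batch);                                  // I/O-bound (sqlite-net, one transaction per batch)
        InvokeOnMainThread(() => UpdateProgress(batch.Count)); // UI feedback on the main thread
    }
});

Task.WaitAll(download, parse, insert);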
Note #1: A gzipped response will download faster. On the other hand, it will take some extra (CPU) time to decompress locally. It should still be faster overall, but it's worth measuring both options (e.g. using Apple's Instruments tool, which works nicely with Xamarin.iOS).
Note #2: A zip file, as a response, will also need extra time (to uncompress). That's not something you want to do sequentially after the download (but you could uncompress it as it's downloaded).

How to make Hudson aggregate results provided in build artifacts over several builds

I have a hudson job that performs a stress test, torturing a virtual machine for several hours with some CPU- and IO-intensive tasks. The build scripts write a few interesting results into several files which are then stored as build artifacts. For example, one result is the time it took to perform certain operations.
I need to monitor the development of these results. For example, I need to know when the time for certain operations suddenly increases. So I need to aggregate these results over several (all?) builds. The ideal scenario would be if I could download the aggregated data from hudson.
I've been thinking about several possibilities to do this, but they all seem quite complicated. That's when I thought someone else might have had that problem already.
Maybe there already are some plugins doing this?
If you can write a script to extract the relevant numbers from the log files, you can use the Plot Plugin to visualize the data. We use this for simple stuff like tracking the executable size of build artifacts.
The Plot Plugin is more manual than the PerfPublisher Plugin mentioned by @Tao, but it might be easier to integrate, depending on how much data munging the PerfPublisher Plugin requires.
Update: Java-style properties files (which are used as input to the Plot Plugin) are just simple name-value pairs in a text file, e.g.:
YVALUE=1234
Here's a build script that shows a (very stupid) example:
echo YVALUE=$RANDOM > buildtime.properties
This example plots a random number with each build.
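For instance, assuming your stress-test log contains a line like "copy operation took 1234 ms" (a made-up format), the extraction step could instead be a one-liner whose output the Plot Plugin then reads:
# hypothetical log format: "copy operation took 1234 ms"
echo YVALUE=$(grep 'copy operation took' results.log | awk '{print $4}') > copytime.properties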
I have not personally used this plugin yet, but it might fit your needs if you can generate the XML file in the format given in the plugin's description.
PerfPublisher Plugin
What about creating the results as JUnit results (XML files), so they can be recorded by Hudson and aggregated by Hudson across different builds?

How can I generate a list of function dependencies in MATLAB?

In order to distribute a function I've written that depends on other functions I've written that have their own dependencies and so on without distributing every m-file I have ever written, I need to figure out what the full list of dependencies is for a given m-file. Is there a built-in/freely downloadable way to do this?
Specifically I am interested in solutions for MATLAB 7.4.0 (R2007a), but if there is a different way to do it in older versions, by all means please add them here.
For newer releases of MATLAB (e.g. 2007 or 2008) you could use the built-in tools:
mlint
the Dependency Report, and
the Coverage Report
Another option is to use MATLAB's profiler. The command is profile, and it can also be used to track dependencies. To use profile, you could do:
>> profile on % turn profiling on
>> foo; % entry point to your matlab function or script
>> profile off % turn profiling off
>> profview % view the report
If the profiler is not available, then perhaps the following two functions are (for releases before MATLAB R2015a):
depfun
depdir
For example,
>> deps = depfun('foo');
gives a cell array, deps, that contains all the dependencies of foo.m.
As noted in answers 2 and 3, newer versions of MATLAB (R2015a and later) use matlab.codetools.requiredFilesAndProducts instead; see those answers for details.
EDIT:
Caveats, thanks to @Mike Katz's comments:
Remember that the Profiler will only show you files that were actually used in those runs, so if you don't go through every branch, you may have additional dependencies. The Dependency Report is a good tool, but it only resolves static dependencies on the path, and just for the files in a single directory.
Depfun is more reliable but gives you every possible thing it can think of, and still misses LOADs and EVALs.
For MATLAB R2015a and later you should preferably look at matlab.codetools.requiredFilesAndProducts (or doc matlab.codetools.requiredFilesAndProducts), because depfun is marked for removal in a future release.
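For example, mirroring the depfun call above (foo.m is again a hypothetical entry point):
>> [fList, pList] = matlab.codetools.requiredFilesAndProducts('foo.m');
>> fList'          % cell array of full paths to the required user-written files
>> {pList.Name}    % names of the required MathWorks products/toolboxes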