Save View Results in Table listener output as a CSV file in JMeter

I am running a test script in JMeter and I want to save the results from the 'View Results in Table' listener. I can't save them the same way I save the results of the Summary Report and Aggregate Report listeners; unlike those listeners, 'View Results in Table' has no 'Save Table Data' button at the bottom. How can I do this?

The 'View Results in Table' listener has a Filename field.
Fill the Filename field with a valid local path.
Click the Configure button and tick the fields you want written to the file.
Run the test and check the result: the CSV file will be created.
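If you prefer to control the output globally rather than per listener, the standard JMeter save-service properties can be set in jmeter.properties or user.properties. A minimal sketch (the chosen columns are just an example, keep or drop fields as you need):
# Write results as CSV instead of XML
jmeter.save.saveservice.output_format=csv
# Pick the columns to save
jmeter.save.saveservice.time=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.latency=true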

Summary Report and Aggregate Report have a "Save Table Data" button because the majority of the values in them are calculated.
The View Results in Table listener just displays raw data; it does not need to calculate anything, hence there is no "Save Table Data" button.
I am under the impression that you are using JMeter a little bit wrong. Normally you should not be saving listener output separately; moreover, you don't even need the listeners during the test run.
Run your test in command-line non-GUI mode like:
jmeter -n -t test.jmx -l results.jtl
After your test is done, open the JMeter GUI, add a listener of your choice to the Test Plan, and open the results.jtl file with that listener.
Make sure that all the listeners are either disabled or deleted during the test execution, as they cause a huge memory overhead which may reduce throughput or even ruin your test. See the Greedy Listeners - Memory Leeches of Performance Testing article for a more detailed explanation.

Zabbix - track config files

I would like to track changes to one config file. The reason for this is that multiple users access it to solve different issues, but every now and then those fixes break something else. The diff function in Zabbix shows that a file was changed, but I would like Zabbix to also track what changed. Is there a combination of triggers that would let me do that? Any help is greatly appreciated.
Do you store file checksum or contents in the item? In any case, there is no built-in way to do that, but you can implement it with a script.
If a checksum, you will need a way to store the previous version and the new version, and to run the diff command. The easiest would be a userparameter that does a diff between a temporary copy of the file and the current copy, then copies the current file over the temporary copy. In this case, you would store the diff results directly in an item and your trigger would check that the last value is not an empty string. See https://www.zabbix.com/documentation/3.0/manual/config/items/userparameters for more information on userparameters.
If you are storing file contents already, presumably you want to reuse them. This would be a bit more complicated, as you would have to kick off the script whenever a new value arrives - maybe a special trigger could kick off an action that would compare the last two values (probably using the API), then push the result in another item that has another trigger. Unless you have a good reason to do it this way, I'd opt for the first approach.
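A minimal sketch of the first approach as an agent userparameter (the key name and paths are hypothetical, adjust them to your environment):
# zabbix_agentd.conf - diff against the previous copy, then refresh the copy
UserParameter=myapp.conf.diff,diff /var/tmp/app.conf.prev /etc/myapp/app.conf; cp /etc/myapp/app.conf /var/tmp/app.conf.prev
The item then holds the diff output (empty when nothing changed), and the trigger only needs to check that the last value is not an empty string.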
Make a copy of your file, e.g. file.txt.copy or something like that. Make this file writable only by the zabbix user.
Create an item and trigger in Zabbix to check when the file was changed (using diff or a checksum).
Create an action in Zabbix to execute a script (sketched below) that will
1 - diff file.txt against file.txt.copy and send the difference to your email
2 - copy file.txt over file.txt.copy so you can do the diff the next time the file changes.
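A minimal sketch of such a script, assuming the file lives at /etc/myapp/file.txt and a local mail command is available (paths and the recipient address are placeholders):
#!/bin/sh
# Hypothetical paths - adjust to your environment
FILE=/etc/myapp/file.txt
COPY=/etc/myapp/file.txt.copy
# 1 - mail the difference between the previous copy and the current file
diff "$COPY" "$FILE" | mail -s "file.txt changed" admin@example.com
# 2 - refresh the copy so the next run diffs against the current version
cp "$FILE" "$COPY"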
To create an action with a script:
Create an action in Zabbix, go to the "Operations" tab, and select "Remote Command" as the operation type.
Choose Custom script.
Enter the script with its full path and arguments.
Sample
/opt/script/my_script.sh
The user zabbix must have permission to run the script.
Zabbix docs

Is there a design pattern for this...?

When a user selects a record in a datagrid I launch a pop-up window with more detailed info. The user can make changes to the record in this window but they don't have to save them. For example, they can click the X to close the window.
Unfortunately, I am stupid and whenever a user makes changes I update the object directly.
Is there a pattern for copying the object and then mapping the changes to it when a user confirms they want to save?
Thanks!
I wouldn't go with copy and merge. Why don't you just update the object only if the user explicitly wants to update/save? Let the UI be UI and condense the relevant information from it as soon as you need it.
Another way that may be applicable, if you want something like temporary edits, would be using commands for every atomic update, where every command has an inverse - undo - command. If you keep these in a history, you could just go back to the initial state.
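A minimal sketch of that command/undo idea, written in TypeScript purely as an illustration (the class and field names are invented, not from any particular framework):
interface Command {
  execute(): void;
  undo(): void;
}

// One command per atomic field edit; it stores enough state to reverse itself.
class SetFieldCommand<T, K extends keyof T> implements Command {
  private previous!: T[K];
  constructor(private target: T, private key: K, private value: T[K]) {}
  execute(): void {
    this.previous = this.target[this.key];
    this.target[this.key] = this.value;
  }
  undo(): void {
    this.target[this.key] = this.previous;
  }
}

// The edit dialog applies commands as the user types; Cancel/X replays the undos.
class EditSession {
  private history: Command[] = [];
  apply(cmd: Command): void {
    cmd.execute();
    this.history.push(cmd);
  }
  cancel(): void {
    while (this.history.length > 0) {
      this.history.pop()!.undo();
    }
  }
  confirm(): void {
    this.history = []; // keep the changes, forget the undo history
  }
}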

Design Pattern to require multiple events before executing method?

There are many times that I've needed to execute some code after a number of events have fired, and I've come up with counters and such but I feel there must be a better way.
For example, say five files need to be loaded, after which a UI component will become active.
If I set up a counter that increments each time a file is requested and decrements each time one has loaded, I run the risk that the first two or three files may be completely loaded before my code gets around to requesting the fourth and fifth. The counter would then be at zero while I still have two files to load, allowing the UI component to be activated prematurely.
There are some cases where you could know the number of files that need to be loaded before the requests go out, but it's also possible that the first file contains the paths to (and therefore the number of) the remaining files. (And this file-loading scenario is only an example of the pattern I'm trying to explain.)
Does anyone have an elegant solution for this? (Does my description make sense?) Thanks!
You could do something with a task framework like spicelib.
Using that as an example:
Create a FileRecursionLoadTask which grabs a file and completes when that file and any references it makes are loaded.
Add each FileRecursionLoadTask to a SequentialTaskGroup.
When the TaskGroup is completed, then you know all of the file loads have completed.
There are also plenty of other task frameworks which you might like better. For example, Spring ActionScript also has one.
Before executing a request, store a reference (a unique request URI, the loader object or a special command object) in a list. When a loader has finished, remove that object and call a function that checks whether there are remaining active tasks in the list.
This isn't specific to file requests, or requests in general; it can be used for anything that needs to wait for multiple actions to finish. Multiple lists can be used to process multiple types of action at the same time. The object stored in the list could be implemented as a command object, which could provide more information about the task. This is called the command pattern.
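A minimal sketch of that pending-list idea, shown here in TypeScript as an illustration (the original answers are about Flash/Flex, so translate as needed; the names are made up):
// Tracks outstanding work by reference instead of by count, so a task can
// only be completed if it was actually registered first.
class PendingTracker<T> {
  private pending = new Set<T>();
  constructor(private onAllDone: () => void) {}

  register(task: T): void {
    this.pending.add(task);
  }

  complete(task: T): void {
    this.pending.delete(task);
    if (this.pending.size === 0) {
      this.onAllDone();
    }
  }
}

// Usage: register every file URI before requesting it; paths discovered inside
// the first file are registered before the first file is marked complete.
const tracker = new PendingTracker<string>(
  () => console.log("all loads finished - enable the UI component")
);
tracker.register("config.xml");
// in the load-complete handler for config.xml:
//   discoveredPaths.forEach(p => tracker.register(p));
//   tracker.complete("config.xml");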
If you're just doing loading then, like Jacob, I would also suggest a library that handles loading.
In the case of a more complicated situation, like mixing loaders and other event listeners, I would suggest using an event that fires whenever there is any change to any of the dependencies. In addition, all the objects/classes would have a state.
Then I would create a listener-adding function for the class that needs to perform (or initiate) the final action; it would have 3 parameters:
an object with an event dispatcher (assuming they all use the same update event), e.g. assetLoader
the name of the object's state, e.g. headerLoaded
the desired state value, e.g. true
The function would add the listener to a chain of listeners, and any time any of the listeners fires, every object would check whether its state value matches the desired one.
This would allow for regression as well: for example, when a user presses a button and the content starts loading, but then presses cancel, the state of one object would be false even if all the assets load, thus not allowing the item to complete. If you were using counters, it would be the equivalent of adding instead of subtracting, but much more reliable.
Looking for a design pattern? Try the command pattern (http://johnlindquist.com/2010/09/09/patterncraft-command-pattern/)
The video is a great example of what the command pattern is and how it works, using StarCraft as an example.
The implementation is that you queue your load commands so that they do not execute out of order, and you can add the enable or disable commands to your command queue. So the command pattern will play back your commands something like: load, load, load, enable UI item, load, load, enable another item.
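A minimal sketch of that queue idea in TypeScript (the command classes and URLs are invented for illustration):
interface Command {
  execute(): Promise<void>;
}

// Runs queued commands strictly in order, one after another.
class CommandQueue {
  private queue: Command[] = [];

  enqueue(cmd: Command): void {
    this.queue.push(cmd);
  }

  async run(): Promise<void> {
    for (const cmd of this.queue) {
      await cmd.execute(); // the next command starts only after this one finishes
    }
  }
}

class LoadFileCommand implements Command {
  constructor(private url: string) {}
  async execute(): Promise<void> {
    await fetch(this.url); // placeholder for the real load
  }
}

class EnableUiCommand implements Command {
  constructor(private label: string) {}
  async execute(): Promise<void> {
    console.log(`enable ${this.label}`);
  }
}

// Playback order: load, load, enable UI item
const q = new CommandQueue();
q.enqueue(new LoadFileCommand("/assets/header.xml"));
q.enqueue(new LoadFileCommand("/assets/footer.xml"));
q.enqueue(new EnableUiCommand("header panel"));
q.run();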
Good luck

Access Confirmation message

I have an Access database that is saved in a network location so that any of the 600 employees who work at the company can access it. When they open the main form it runs a make-table query. However, there is a pop-up from MS Access stating "You are about to run a make-table query that will modify data in your table. Do you want to continue?"
The form will not run correctly if they do not press Yes, so I want to suppress this prompt so that it does not ask them. I changed the settings in the Options > Edit/Find > Confirm menu so that it doesn't show this confirmation. However, this is apparently a local setting, so to enforce it every user would have to change those settings.
Is there any other possible solution to suppress the confirmation message?
There is so much wrong here, I hardly know where to start:
you can't possibly have an Access database used by 600 people.
if more than one person opens it and runs the MakeTable, it will break, because you'd be making a structural change that collides between the two users.
turning off error notification is a HUGE MISTAKE. You don't know exactly which errors you might end up ignoring.
turning off SetWarnings means that you can get inconsistent updates from a SQL DML statement, and then you have no way to know which data was updated or not.
MakeTable queries do not belong in any production application. Instead, create a persistent table, and clean it out and append new records to it. But it doesn't belong in your main application -- this is the very definition of temporary data, since it's constantly being replaced, so it needs to be in a separate temp database.
you'd likely want all users to have their own temp databases so there are no collisions if more than one opens the app at a time.
Yes, from VBA:
DoCmd.SetWarnings False ' Don't show the confirmation prompt
DoCmd.OpenQuery "MyMakeTableQuery" ' Run the make-table query silently
DoCmd.SetWarnings True ' Turn warnings back on
Use DoCmd.SetWarnings False to stop the message box from popping up. Be warned, however, that this action is global to the Access application. You have to re-enable warnings with DoCmd.SetWarnings True as needed.
You can also try:
Application.SetOption "Confirm Action Queries", False
If you don't already, you may want to have a hidden form open every time the database is opened. You can use the OnOpen event of that form to run startup code like the above.

SSIS: Why is this not logging?

I don't know if this will help, but I enabled logging to a text file called test.txt on my C drive.
Public Sub Main()
    ' Values to include in the log entry
    Dim rowsProcessed As Integer = 100
    Dim emptyBytes(0) As Byte

    ' Dts.Log(messageText, dataCode, dataBytes) - only written when the
    ' ScriptTaskLogEntry event is enabled for this task in SSIS logging
    Dts.Log("Testing, Test 1,2,3", rowsProcessed, emptyBytes)

    Dts.TaskResult = ScriptResults.Success
End Sub
You have to go into the SSIS->Logging menu and tick checkboxes like a crazy checkbox-ticking-ninja to get this to work.
There are various checkboxes that have to be checked, and some of them only appear when you click on the script tasks, so it took me a while to figure this out:
First, enable your logging provider (that you have set up, right?) by ticking it on the Providers and Logs tab.
Then switch to the Details tab (which shows various events that you might want to log).
For the Dts.Log() method you need the ScriptTaskLogEntry event, but it only shows up when you click on a Script Task in the tree on the left.
So, click each of your Script Tasks in the tree on the left, enable it for logging, and then tick the ScriptTaskLogEntry event on the Details tab.
Also make sure your logging provider is ticked for each script task.
See also: http://msdn.microsoft.com/en-us/library/ms136131.aspx
This is an old question and @codeulike has answered it well, but I would like to add a note about the logging behavior in debug mode, especially for someone new to SSIS or SSIS logging (like me). Assuming you have all the configuration required for logging in place, if you execute a single selected task that you expect to log, it still will not log. The logging only works if you execute (or debug) the entire package.
If you are sure you have configured everything correctly and still can't see dbo.sysssislog in the database you selected, then check the following:
YourDatabase > Tables > System Tables
You will find your logging table there.
It was my first time doing package logging; I had configured everything correctly but could not see dbo.sysssislog in the database it was supposed to log to, and I was banging my head for half an hour before I realised it was under the system tables of that particular database.
I realize that this question is rather old, but maybe it helps someone.
I ran into the same problem - while debugging in Visual Studio my textfile logging provider just did not write into the configured logfile.
My logfile was in the directory "project-directory/bin/development".
After changing the path to the project-directory root (e.g. "/project-directory-name/test.txt") it worked.
I cannot explain this - it is just what I have observed.