Access CSV Data Set Config variables in Backend Listener

I'm trying to fetch my variables from the CSV Data Set Config and add them to my Backend Listener in a distributed testing environment, as shown below. FYI, it works on my local machine.
Here is my test plan:
[screenshot: Test Plan]
CSV Data Set Config:
[screenshot: CSV Data Set Config]
My csv looks like this:
SELECT count(*) FROM github_events;simpleQuery
SELECT count(*) FROM github_events;medium
SELECT count(*) FROM github_events;complexQuery
SELECT count(*) FROM github_events;simpleQuery
Backend Listener:
[screenshot: Backend Listener]
I'm setting the CSV config variables in the Beanshell PreProcessor like this:
props.put("query", "${QUERY}");            // ${QUERY} is interpolated by JMeter before the script runs
props.put("query_type", "${QUERY_TYPE}");
and that's why I have ${__P(query)} and ${__P(query_type)} in the Backend Listener.
The goal is to grab the QUERY and QUERY_TYPE from the CSV Data Set Config and send them to the Backend Listener.
Any help would be appreciated. Let me know if I need to add more info on here. Thank you!
Solution:
How I got this to work... kind of hacky but it'll work for what I need:
I created a JSR223 PostProcessor on my JDBC Request and added the following code:
import groovy.json.JsonOutput

// Read the values the CSV Data Set Config put into JMeter variables
def my_query = vars.get("QUERY")
def my_query_type = vars.get("QUERY_TYPE")

// Serialize them once and attach the result to the sampler data so the
// Backend Listener forwards them with the sample (the original double
// toJson() call would have JSON-encoded an already-encoded string)
def json = JsonOutput.toJson([myQuery: my_query, myQueryType: my_query_type])
prev.setSamplerData(JsonOutput.prettyPrint(json))
This won't work if you need whatever is in your response data, but in my case it was okay to replace it. Note that this only worked in my distributed test; to make it work locally, use prev.setResponseData instead. Hope this helps someone.

I don't think you can. As of JMeter 5.4.1, all fields of the Backend Listener are populated in the "testStarted" phase,
and the same applies to your custom listener.
This means that JMeter Variables originating from the CSV Data Set Config don't exist yet at the time the Backend Listener is initialized, so your references to JMeter Properties return the default value of 1 because no such properties have been set.
If you're looking for a way to dynamically send metrics to Azure, you will need to replicate the logic of the Azure Backend Listener in a JSR223 Listener using Groovy.
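For illustration only, here is a minimal JSR223 Listener sketch. This is not the actual Azure Backend Listener code; the endpoint URL and payload shape are placeholders for whatever your metrics backend expects:
import groovy.json.JsonOutput

// A JSR223 Listener runs after each sample with the thread's variables already
// resolved, so the CSV-driven values are visible here. sampleResult is the
// listener binding for the just-finished sample.
def payload = JsonOutput.toJson([
    label    : sampleResult.getSampleLabel(),
    elapsedMs: sampleResult.getTime(),
    success  : sampleResult.isSuccessful(),
    query    : vars.get('QUERY'),
    queryType: vars.get('QUERY_TYPE')
])

// Placeholder endpoint - swap in whatever your metrics collector expects
def conn = new URL('https://example.com/metrics').openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
conn.setRequestProperty('Content-Type', 'application/json')
conn.outputStream.withWriter('UTF-8') { it << payload }
assert conn.responseCode < 400 : "metrics POST failed: ${conn.responseCode}"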
The only way this could appear to work on your local machine is:
You run your test plan in GUI mode the 1st time: it fails, but it sets the properties.
You run your test plan in GUI mode the 2nd time: it passes, but it uses the property values left over from the previous run.
And so on.

Related

Foundry writebacks - is it possible to restore an edited record to its unedited version (BaseVersion)

Palantir-Foundry - We have a workflow that needs updates from the backing dataset of an object with a writeback to persist in the writeback, but this fails on rows that have previously been edited. Due to the "edits-win" model, the writeback will always choose the edited version of the row, which makes sense. Short of re-architecting the entire app, I am looking into ways to handle this using the Foundry REST API.
Is it possible to revert an edited row in Foundry writebacks to the original unedited version? I found some API documentation in our instance for phonograph2 BaseVersion, but I have not been able to find/understand anything that would restore a row to BaseVersion. I would need to be able to do this from a functions repository using typescript, on certain events.
One way to overwrite the edits with the values from the backing dataset is to build a transform off of the backing dataset that produces a new, identical dataset. Then you can use the new dataset as the backing dataset for a new object.
Transform using a simple code repo:
from transforms.api import transform_df, Input, Output

@transform_df(
    Output(".../static_guests"),
    source_df=Input("<backing dataset RID>"),
)
def compute(source_df):
    # Pass the backing dataset through unchanged to get an edit-free copy
    return source_df
You can then build up the ontology of a static object that will always equal the writeback dataset.
Then create an action that will modify your edited object (in my example, that is Test Guest) by reverting a value to equal the value in the static object type.
You can then use the Apply Action API to automatically apply this action to certain values on a schedule or based on a certain condition; a sketch follows below. Documentation for the API is available in your instance.
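As a rough, unverified sketch of calling the Apply Action endpoint from TypeScript: the host, ontology RID, action name, and parameter names below are all placeholders, so check them against the API documentation in your instance before use.
// All identifiers below are placeholders; match them to your own ontology.
const FOUNDRY_HOST = "https://your-instance.palantirfoundry.com";
const TOKEN = process.env.FOUNDRY_TOKEN; // a token with access to the ontology

async function revertGuestValue(guestId: string, staticValue: string): Promise<void> {
  const response = await fetch(
    `${FOUNDRY_HOST}/api/v1/ontologies/<ontology-rid>/actions/revert-guest-value/apply`,
    {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${TOKEN}`,
        "Content-Type": "application/json",
      },
      // Parameter names must match the action definition in the ontology
      body: JSON.stringify({ parameters: { guestId, staticValue } }),
    },
  );
  if (!response.ok) {
    throw new Error(`Apply Action failed: ${response.status}`);
  }
}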

SSIS SQL Task Map Result Set to Project Parameter

I am implementing a custom auditing framework, logging ETL events such as start, end, error, insertrows etc.
As well as logging at a package level, I'm implementing "session logging", where a sequence of package executions (i.e. a controller package that executes several packages) is a session. To keep track of the session, the stored procedures always return a SessionLogID.
I was hoping I could map this result set to a project parameter; otherwise, I will have to save it to a user variable and then pass it around between packages via parameters, which means every single package will need a package parameter and a user variable called SessionLogID. I don't want to do this if I don't need to.
Open to other suggestions.
Thanks,
Adam
Parameters cannot change at runtime; they are a set-once kind of deal, whereas variables can change at any time. You can set the variable once in the parent package and map it into each child package via a parameter.

Logging different project libraries, with a single logging library

I have a project in Apps Script that uses several libraries. The project needed a more complex logger (logging levels, color coding), so I wrote one that outputs to Google Docs. All is fine and dandy if I immediately print the output to the Google Doc, importing the logger in each of the libraries separately. However, I noticed that heavy logging takes much longer than no logging at all. So I am looking for a way to write all of the output in a single go at the end, when the main script finishes.
This would require one of the following:
Being able to define the logging library once (in the main file) and somehow accessing it in the attached libraries. I can't seem to find a way to get the main project's closure from within the libraries, though.
Some sort of singleton logger object. Not sure whether this is possible from within a library; I have trouble figuring it out either way.
Extending the built-in Logger to suit my needs, though I'm not sure about that either.
My project looks as follows:
Main Project
Library 1
Library 2
Library 3
Library 4
This is how I use my current logger:
var logger = new BetterLogger(/* logging level */);
logger.warn('this is a warning');
Thanks!
Instead of writing to the file at each logged message (which is the source of your slowdown), you could write your log messages to the logger library's ScriptDB instance and add a .write() method to your logger that outputs the messages in one go. Your logger constructor can take a messageGroup parameter to serve as a unique identifier for the lines you would like to write. This would also allow you to use different files for logging output. A sketch of such a logger follows the query example below.
As you build your messages into proper output to write to the file (don't write each line individually; batch operations are your friend), you might want to remove the messages from ScriptDB. However, it might also be a nice place to pull back old logs.
Your message object might look something like this:
{
message: "My message",
color: "red",
messageGroup: "groupName",
level: 25,
timeStamp: new Date().getTime(), //ScriptDB won't take date objects natively
loggingFile: "Document Key"
}
The query would look like:
var db = ScriptDb.getMyDb();
var results = db.query({messageGroup: "groupName"}).sortBy("timeStamp",db.NUMERIC);
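A minimal sketch of what that buffered logger could look like, assuming the ScriptDB API (BetterLogger's constructor signature and the field names mirror the message object above; adapt as needed):
var BetterLogger = function(level, messageGroup, loggingFile) {
  this.level = level;
  this.messageGroup = messageGroup;
  this.loggingFile = loggingFile; // document key to write to
  this.db = ScriptDb.getMyDb();
};

BetterLogger.prototype.warn = function(message) {
  // Queue the message in ScriptDB instead of writing to the document immediately
  this.db.save({
    message: message,
    color: 'red',
    messageGroup: this.messageGroup,
    level: this.level,
    timeStamp: new Date().getTime(),
    loggingFile: this.loggingFile
  });
};

BetterLogger.prototype.write = function() {
  var results = this.db.query({messageGroup: this.messageGroup})
                       .sortBy('timeStamp', this.db.NUMERIC);
  var lines = [];
  while (results.hasNext()) {
    var item = results.next();
    lines.push(item.message); // apply color/level formatting here as needed
    this.db.remove(item);     // drop consumed messages, or keep them as an archive
  }
  // One batched write instead of one write per message
  DocumentApp.openById(this.loggingFile).getBody().appendParagraph(lines.join('\n'));
};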

Jenkins/Hudson job parameters at runtime?

PROBLEM
Let's say I have a Jenkins/Hudson job (for example, free-style) that takes two parameters, PARAM_ONE and PARAM_TWO. Now, I do not know the values of those parameters, but I can run a script (Perl/shell) to find them; I then want the user to select a value from a dropdown list, after which I can start the build.
Is there any way of doing that?
Sounds like you've found a plug-in that does what you need, and it is pretty similar to the built-in Parameterized Builds functionality.
To answer your second question: when you define parameterized builds, the parameters are typically passed to your job as environment variables. You access them however you access environment variables in your language. For instance, if you defined a parameter PARAM_ONE, you'd access it as:
In bash:
$PARAM_ONE
In Windows batch:
%PARAM_ONE%
In Python:
import os
os.getenv('PARAM_ONE')
etc.
I imagine this would be the same for the Extended Choice Parameter plugin you are using.
Just install the plugin, and pass the parameters to the build script like:
Windows
"your build script" %PARAMONE% %PARAMTWO%
In Java, you can access these parameters from the Run object:
// The build environment, including job parameters, is exposed by the Run object
EnvVars envVars = run.getEnvironment(listener);
for (String envName : envVars.keySet()) {
    listener.getLogger().println(envName + " = " + envVars.get(envName));
}

How to log SQL queries to a log file with CakePHP

I have a CakePHP 1.2 application that makes a number of AJAX calls using the AjaxHelper object. The AjaxHelper makes a call to a controller function which then returns some data back to the page.
I would like to log the SQL queries that are executed by the AJAX controller functions. Normally, I would just set the debug level to 2 in config/core.php; however, this breaks my AJAX functionality because the SQL queries get appended to the output that is returned to the client side.
To get around this issue, I would like to be able to log any SQL queries performed to a log file. Any suggestions?
I found a nice way of adding this logging functionality at this link:
http://cakephp.1045679.n5.nabble.com/Log-SQL-queries-td1281970.html
Basically, in your cake/libs/model/datasources/dbo/ directory, you can make a subclass of the dbo that you're using. For example, if you're using the dbo_mysql.php database driver, then you can make a new class file called dbo_mysql_with_log.php. The file would contain some code along the lines of the following:
App::import('Core', array('Model', 'datasource', 'dbosource', 'dbomysql'));

class DboMysqlWithLog extends DboMysql {
    // Log each query before delegating to the normal driver logic
    function _execute($sql) {
        $this->log($sql);
        return parent::_execute($sql);
    }
}
In a nutshell, this class overrides the _execute function of the superclass to log the SQL query before doing whatever logic it normally does.
You can then modify your app/config/database.php configuration file to use the new driver you just created.
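A sketch of that configuration, assuming CakePHP 1.2 conventions (the 'driver' value is my guess at how the ConnectionManager maps to dbo_mysql_with_log.php, and the host/credentials are placeholders, so verify against your setup):
class DATABASE_CONFIG {
    var $default = array(
        // 'mysql_with_log' should resolve to dbo_mysql_with_log.php
        'driver'   => 'mysql_with_log',
        'host'     => 'localhost',
        'login'    => 'user',
        'password' => 'secret',
        'database' => 'my_app',
    );
}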
DebugKit is also a fantastic way to debug things like this: https://github.com/cakephp/debug_kit