Which Beanshell or Groovy code can be used to push JTL results to a MySQL database with a single sampler?

In JMeter, I want to collect the results while the test is running: what Beanshell code can I add to a sampler to convert the Summary Report values into milliseconds and push them into a MySQL database automatically, by adding just one sampler?
Please give me a step-by-step process and explain all the possible ways.
Also, how do I create a table in MySQL for the particular JTL file values: average, min, max, response time and error values? Please explain.

Wouldn't it be easier to use InfluxDB instead? JMeter provides the Backend Listener which automatically sends metrics to InfluxDB, and they can be visualized via Grafana. Check out the How to Use Grafana to Monitor JMeter Non-GUI Results - Part 2 article for more details.
If you have to use MySQL, the correct approach would be writing your own implementation of the AbstractBackendListenerClient.
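For illustration, here is a minimal Groovy sketch of such a client. The class name, the results table with its sampler/elapsed/success columns, and the connection parameters are all assumptions for the example, not anything JMeter prescribes:

import java.sql.Connection
import java.sql.DriverManager
import org.apache.jmeter.config.Arguments
import org.apache.jmeter.samplers.SampleResult
import org.apache.jmeter.visualizers.backend.AbstractBackendListenerClient
import org.apache.jmeter.visualizers.backend.BackendListenerContext

class MySqlBackendListenerClient extends AbstractBackendListenerClient {

    Connection connection

    @Override
    Arguments getDefaultParameters() {
        // these show up as editable fields in the Backend Listener GUI
        def arguments = new Arguments()
        arguments.addArgument('jdbcUrl', 'jdbc:mysql://localhost:3306/jmeter') // assumed defaults
        arguments.addArgument('username', 'jmeter')
        arguments.addArgument('password', 'jmeter')
        arguments
    }

    @Override
    void setupTest(BackendListenerContext context) throws Exception {
        // open a single connection for the whole test run
        connection = DriverManager.getConnection(context.getParameter('jdbcUrl'),
                context.getParameter('username'), context.getParameter('password'))
        super.setupTest(context)
    }

    @Override
    void handleSampleResults(List<SampleResult> sampleResults, BackendListenerContext context) {
        // results arrive in batches, so a batched insert keeps overhead low
        def insert = connection.prepareStatement(
                'INSERT INTO results (sampler, elapsed, success) VALUES (?, ?, ?)')
        sampleResults.each { result ->
            insert.setString(1, result.getSampleLabel())
            insert.setLong(2, result.getTime())        // elapsed time in milliseconds
            insert.setBoolean(3, result.isSuccessful())
            insert.addBatch()
        }
        insert.executeBatch()
        insert.close()
    }

    @Override
    void teardownTest(BackendListenerContext context) throws Exception {
        connection?.close()
        super.teardownTest(context)
    }
}

Compile it against ApacheJMeter_core.jar, drop the resulting jar into JMeter's lib/ext folder and select the class in a Backend Listener element.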
If you need a "single sampler" - take a look at the JSR223 Listener: it has the prev shorthand for the SampleResult class instance, providing access to all the necessary information, like:
def name = prev.getSampleLabel() // get sampler name
def elapsed = prev.getTime() // get elapsed time (in milliseconds)
// etc.
and in order to insert them into the database you could do something like:
import groovy.sql.Sql
def url = 'jdbc:mysql://localhost:3306/your-database'
def user = 'your-username'
def password = 'your-password'
def driver = 'com.mysql.cj.jdbc.Driver'
def sql = Sql.newInstance(url, user, password, driver)
def insertSql = 'INSERT INTO your-table-name (sampler, elapsed) VALUES (?,?)'
def params = [name , elapsed]
def keys = sql.executeInsert insertSql, params
sql.close()
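As for creating the table in the first place (the avg/min/max part of the question): one sketch, assuming you store one row per sample in a table called results and derive the summary values with an aggregate query afterwards, would be to run something like this once before the test:

import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:mysql://localhost:3306/your-database',
        'your-username', 'your-password', 'com.mysql.cj.jdbc.Driver')

// one row per sample; summary values can be computed later, e.g.
// SELECT sampler, AVG(elapsed), MIN(elapsed), MAX(elapsed) FROM results GROUP BY sampler
sql.execute '''
    CREATE TABLE IF NOT EXISTS results (
        id      BIGINT AUTO_INCREMENT PRIMARY KEY,
        sampler VARCHAR(255) NOT NULL,
        elapsed BIGINT NOT NULL, -- elapsed time in milliseconds
        success BOOLEAN NOT NULL,
        ts      TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
'''
sql.close()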

Related

What is the best way to share the sqlalchemy db session between FastAPI and pytest

I am using FastAPI to design the GUI and RESTful APIs. When one API is called, it triggers pytest as a background task. I want to use a table in the database to monitor the progress of the test cases in pytest. Then the RESTful API logic can also refer to the table to update the test progress in the GUI. To do that, I am currently using two DB sessions based on SQLAlchemy to query and update the database.
In the main.py of FastAPI, I implement the db session as below.
def get_db():
    try:
        db = SessionLocal()
        yield db
    finally:
        db.close()
@app.post("/runtest")
async def run_test(test_request: TestRequest, background_tasks: BackgroundTasks, db: Session = Depends(get_db)):
    """
    add a test request and trigger background task
    """
    test = Test()
    test.panel_id = test_request.panel_id
    test.device_pos = test_request.device_pos
    db.add(test)
    db.commit()
    background_tasks.add_task(schedule_test, test.id)
In the schedule_test background task, I call pytest.main to start the tests using the pytest framework. On the pytest side, I implemented fixtures to set up another session to talk to the DB.
@pytest.fixture(scope="session")
def connection() -> Any:
    SQLALCHEMY_DATABASE_URL = "postgresql+psycopg2://xxxx"
    engine = create_engine(SQLALCHEMY_DATABASE_URL)
    return engine.connect()

@pytest.fixture(scope="session")
def db_session(connection) -> Session:
    """Returns an sqlalchemy session, and after the test tears down everything properly."""
    transaction = connection.begin()
    session = sessionmaker(autocommit=False, autoflush=False, bind=connection)
    yield session()
Then in my test case, I want to update the table in the database as below.
@pytest.mark.dependency(name="test_uid", scope="session")
@pytest.mark.order(1)
def test_case1(
    db_session: Session
) -> None:
    db_session.query(Test).filter(Test.device_pos == '1').update({'test_progress': 'Started'})
    db_session.commit()
However, I found that the db session in pytest cannot actually update the table, even after calling the commit() function.
What could be wrong in this implementation? Is there a better way to share the db session between FastAPI and pytest? Thanks a lot!

Odoo 10 QWeb report: how to pass values that are used in methods in the parser?

I have a model that saves reports in binary fields for archiving. To do that I use get_pdf().
document = self.env['report'].sudo().get_pdf(ids, report_name)
The problem is when I want to create a report that doesn't use the model's fields but has to compute values from models related to the model that is passed with ids.
My report model
class ReportHistory(models.AbstractModel):
    _name = 'report.hr.report_history'

    def _get_report(self, ids):
        record = self.env['hr.history'].search([('id', '=', ids[0])])
        return record

    def _get_company(self, ids):
        rec = self._get_report(ids)
        if len(rec) > 0:
            return rec[0].company_name
My biggest problem is that I can't debug, so I can't see what data is passed. print, logger and raise ValidationError won't work, probably due to running Odoo on a Windows PC.
Every answer I found said to pass values to the report like this, but it doesn't work.
@api.model
def render_html(self, docids, data=None):
    docargs = {
        'doc_ids': self.ids,
        'doc_model': self.model,
        'data': data,
        'company': self._get_company,
    }
    return self.env['report'].render()
So how do I correctly pass values from methods to the report? Or did I just make a dumb mistake?
Try this:
return self.env['report'].render(report_name, docargs)

Using Groovy in Confluence

I'm new to Groovy and coding in general, but I've come a long way in a very short amount of time. I'm currently working in Confluence to create a tracking tool which connects to a MySQL database. We've had some great success with this, but have hit a wall with using Groovy and the Run Macro.
Currently, we can use Groovy to populate fields within the Run Macro, which works really well for drop-down options. Example:
{groovy:output=wiki}
import com.atlassian.renderer.v2.RenderMode
def renderMode = RenderMode.suppress(RenderMode.F_FIRST_PARA)
def getSql = "select * from table where x = y"
def getMacro = "{sql-query:datasource=testdb|table=false} ${getSql} {sql-query}"
def get = subRenderer.render(getMacro, context, renderMode)
def runMacro = """
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters = true}
{sql:datasource=testdb|table=false|p1=\$name|p2=\$type}
insert into table1 (name, type) values (?, ?)
{sql}
{run}
"""
out.println runMacro
{groovy}
We've also been able to use Groovy within the Run Macro, example:
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters = true}
{groovy}
def checkSql = "{select * from table where name = '\$name' and type = '\$type'}"
def checkMacro = "{sql-query:datasource=testdb|table=false} ${checkSql} {sql-query}"
def check = subRenderer.render(checkMacro, context, renderMode)
if (check == "") {
    println("This information does not exist.")
} else {
    println(checkMacro)
}
{groovy}
{run}
However, we can't seem to get both scenarios to work together: Groovy inside of a Run Macro inside of Groovy.
We need to be able to get the variables out of the Run Macro form so that we can perform other functions, like checking the DB for duplicates before inserting data.
My first thought is to bypass the Run Macro and create a simple form in Groovy, but I haven't had much luck finding good examples. Can anyone help steer me in the right direction for creating a simple form in Groovy that would replace the Run Macro? Or have suggestions on how to get the rendered variables out of the Run Macro?
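One possible workaround that avoids nesting Groovy altogether, sticking to the macros already shown above: push the duplicate check into the SQL statement itself, so the Run Macro no longer needs an inner Groovy block. This is only a sketch, and it assumes the SQL macro accepts more than two parameters (p3 and p4 here are an assumption):

{run:id=test|autorun=false|replace=name::Name, type::Type|keepRequestParameters = true}
{sql:datasource=testdb|table=false|p1=$name|p2=$type|p3=$name|p4=$type}
insert into table1 (name, type)
select ?, ? from dual
where not exists (select 1 from table1 where name = ? and type = ?)
{sql}
{run}

With that shape MySQL inserts the row only when no matching name/type combination already exists, so the duplicate check happens without a second scripting layer.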

Play + Slick: How to do partial model updates?

I am using Play 2.2.x with Slick 2.0 (with a MySQL backend) to write a REST API. I have a User model with a bunch of fields like age, name, gender, etc. I want to create a route PATCH /users/:id which takes a partial user object (i.e. a subset of the fields of a full user model) in the body and updates the user's info. I am confused about how I can achieve this:
How do I use PATCH verb in Play 2.2.x?
What is a generic way to parse the partial user object into an update query to execute in Slick 2.0? I am expecting to execute a single SQL statement, e.g. update users set age=?, dob=? where id=?
Disclaimer: I haven't used Slick, so am just going by their documentation about Plain SQL Queries for this.
To answer your first question:
PATCH is just another HTTP verb in your routes file, so for your example:
PATCH /users/:id controllers.UserController.patchById(id)
Your UserController could then be something like this:
val possibleUserFields = Seq("firstName", "middleName", "lastName", "age")

def patchById(id: String) = Action(parse.json) { request =>
  def addClause(fieldName: String) = {
    (request.body \ fieldName).asOpt[String].map { fieldValue =>
      s"$fieldName='$fieldValue'"
    }
  }
  val clauses = possibleUserFields.flatMap(addClause)
  val updateStatement = "update users set " + clauses.mkString(",") + s" where id = $id"
  // TODO: Actually make the Slick call, possibly using the 'sqlu' interpolator (see docs)
  Ok(s"$updateStatement")
}
What this does:
Defines the list of JSON field names that might be present in the PATCH JSON
Defines an Action that will parse the incoming body as JSON
Iterates over all of the possible field names, testing whether they exist in the incoming JSON
If so, adds a clause of the form fieldname=<newValue> to a list
Builds an SQL update statement, comma-separating each of these clauses as required
I don't know if this is generic enough for you, there's probably a way to get the field names (i.e. the Slick column names) out of Slick, but like I said, I'm not even a Slick user, let alone an expert :-)

Django equivalent of SqlAlchemy's literal_column

Trying to port some SqlAlchemy to Django and I've got this tricky little bit:
version = Column(
    BIGINT,
    default=literal_column(
        'UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP)'
    ),
    nullable=False)
What's the best option for porting the literal_column bit to Django? The best idea I've got so far is a function to set as the default that executes the same raw SQL, but I'm not sure if there's an easier way? My Google-fu is failing me there.
Edit: the reason we need to use a timestamp created by mysql is that we are measuring how out of date something is (so we need to actually know time) and we want, for correctness, to have only one time-stamping authority (so that we don't introduce error using python functions that look at system times, which could be different across servers).
At present I've got:
def get_current_timestamp():
    cursor = connection.cursor()
    cursor.execute("SELECT UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP)")
    row = cursor.fetchone()
    return row[0]

version = models.BigIntegerField(default=get_current_timestamp)
which, at this point, sounds like my best/only option.
If you don't care about having a central time authority:
import time

version = models.BigIntegerField(
    default=lambda: int(time.time() * 1000000))
To bend the database to your will:
from django.db.models.expressions import ExpressionNode

class NowInt(ExpressionNode):
    """ Pass this in the same manner you would pass Count or F objects """
    def __init__(self):
        super(NowInt, self).__init__(None, None, False)

    def evaluate(self, evaluator, qn, connection):
        return '(UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP))', []

### Model
version = models.BigIntegerField(default=NowInt())
Because expression nodes are not callable, the expression will be evaluated database-side.