Gradle task prototyping - mysql

I'm new to Gradle and Groovy and I'm trying to define a task that executes a SQL script in MySQL. Here's what I have so far:
task executeSomeSQL(type: Exec) {
    def pwd = getMySQLPwd()
    workingDir './'
    commandLine 'mysql', '-uroot', "--password=$pwd", 'dbname'
    standardInput file('database/script.sql').newInputStream()
}
Now this works as expected; however, I'd like to be able to define many such tasks that differ only in the input script they take. In my mind, I need a way to prototype the SQL execution task with the common properties (getting the password, setting the working directory, and setting the command line) and then define each task with just its own filename. In a sort of pseudocode:
// define a function or closure? this doesn't work because the
// three task-specific properties aren't defined
def sqlExecutorDef(filename) {
    def pwd = getMySQLPwd()
    workingDir './'
    commandLine 'mysql', '-uroot', "--password=$pwd", 'dbname'
    standardInput file(filename).newInputStream()
}
// this is truly pseudocode: somehow the task should be created
// using a function that defines it
task executeSomeSQL(type: Exec) = sqlExecutorDef('database/script.sql')
In this way, I could define many tasks, one per SQL script that needs to be executed, with a one-liner.
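For concreteness, something like the following sketch is the shape I have in mind - a closure that returns a configuration block, applied to a bare Exec task via configure() (it reuses the getMySQLPwd() helper from above; I don't know whether this is idiomatic Gradle):

def sqlExecutorDef = { filename ->
    // returns a configuration closure for an Exec task
    return {
        def pwd = getMySQLPwd()
        workingDir './'
        commandLine 'mysql', '-uroot', "--password=$pwd", 'dbname'
        standardInput file(filename).newInputStream()
    }
}

// declare a bare Exec task, then apply the shared configuration
task executeSomeSQL(type: Exec)
executeSomeSQL.configure sqlExecutorDef('database/script.sql')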
EDIT: this is probably trivial for somebody with more Groovy experience. I apologize!

Though this may not be standard Gradle, dynamic tasks might help out here. The example below uses a list of names both as the task names and (with some massaging) as the SQL file names. It simply prints to the console, but executing the SQL should be straightforward given your original work:
def username = "admin"
def password = "swordfish"
def taskNames = ["abc_sql", "def_sql", "ijk_sql"]
taskNames.each { taskName ->
def sqlFile = taskName.replaceAll("_", ".")
task "${taskName}" (type:Exec) {
workingDir "."
commandLine "echo", "run SQL script '${sqlFile}' as ${username} / ${password}"
}
}
gradle tasks gives:
[snip]
Other tasks
-----------
abc_sql
def_sql
ijk_sql
example run of 'abc_sql':
bash-3.2$ gradle abc_sql
:abc_sql
run SQL script 'abc.sql' as admin / swordfish
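To wire this up to the original mysql invocation instead of echo, the loop body can reuse the question's command verbatim. A sketch, assuming the getMySQLPwd() helper from the question ('database/cleanup.sql' is a made-up second script for illustration):

// one Exec task per SQL script; the script list is illustrative
def sqlScripts = ['database/script.sql', 'database/cleanup.sql']

sqlScripts.each { scriptPath ->
    // e.g. 'database/script.sql' -> task 'execute_script_sql'
    def taskName = "execute_" + file(scriptPath).name.replace('.', '_')
    task "${taskName}"(type: Exec) {
        def pwd = getMySQLPwd()   // the question's existing helper
        workingDir './'
        commandLine 'mysql', '-uroot', "--password=$pwd", 'dbname'
        standardInput file(scriptPath).newInputStream()
    }
}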

Related

Which Beanshell or Groovy code can be used to push the JTL results to a DB using a single sampler

In JMeter, I want the results while the test is running: which Beanshell code, added to a single sampler, converts the Summary Report values into milliseconds and pushes those values into a MySQL DB automatically?
Please explain the step-by-step process and all the possible ways to do it.
Also, how do I create a table in the MySQL DB for the particular JTL file values (avg, min, max, response time, errors)? Please explain.
Wouldn't it be easier to use InfluxDB instead? JMeter provides the Backend Listener which automatically sends metrics to InfluxDB, and they can be visualized via Grafana. Check out the How to Use Grafana to Monitor JMeter Non-GUI Results - Part 2 article for more details.
If you have to use MySQL, the correct approach would be writing your own implementation of the AbstractBackendListenerClient.
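A rough skeleton of that approach, written in Groovy (compile it and drop the jar into JMeter's lib/ext; the class name, table name and connection settings below are made-up placeholders, and the exact method set should be checked against your JMeter version):

import org.apache.jmeter.samplers.SampleResult
import org.apache.jmeter.visualizers.backend.AbstractBackendListenerClient
import org.apache.jmeter.visualizers.backend.BackendListenerContext
import groovy.sql.Sql

// Placeholder implementation: opens a connection at test start,
// inserts one row per sample, closes the connection at test end.
class MySqlBackendListener extends AbstractBackendListenerClient {
    Sql sql

    @Override
    void setupTest(BackendListenerContext context) throws Exception {
        sql = Sql.newInstance('jdbc:mysql://localhost:3306/your-database',
                'your-username', 'your-password', 'com.mysql.cj.jdbc.Driver')
        super.setupTest(context)
    }

    @Override
    void handleSampleResults(List<SampleResult> results, BackendListenerContext context) {
        results.each { r ->
            sql.executeInsert('INSERT INTO results (sampler, elapsed) VALUES (?,?)',
                    [r.getSampleLabel(), r.getTime()])
        }
    }

    @Override
    void teardownTest(BackendListenerContext context) throws Exception {
        sql?.close()
        super.teardownTest(context)
    }
}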
If you need a "single sampler" - take a look at JSR223 Listener, it has prev shorthand for SampleResult class instance providing access to all the necessary information like:
def name = prev.getSampleLabel() // get sampler name
def elapsed = prev.getTime() // get elapsed time (in milliseconds)
// etc.
and in order to insert them into the database you could do something like:
import groovy.sql.Sql

// JDBC connection settings - adjust to your environment
def url = 'jdbc:mysql://localhost:3306/your-database'
def user = 'your-username'
def password = 'your-password'
def driver = 'com.mysql.cj.jdbc.Driver'
def sql = Sql.newInstance(url, user, password, driver)

// parameterized insert of the values collected above
def insertSql = 'INSERT INTO your-table-name (sampler, elapsed) VALUES (?,?)'
def params = [name, elapsed]
def keys = sql.executeInsert insertSql, params
sql.close()
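Putting the two snippets together, a complete JSR223 Listener script looks something like this (the results table name is a placeholder, and MySQL Connector/J must be on JMeter's classpath, e.g. in the lib folder):

import groovy.sql.Sql

// pull the metrics for the sample that just finished
def name = prev.getSampleLabel()   // sampler name
def elapsed = prev.getTime()       // elapsed time in milliseconds

// connection settings - placeholders, adjust to your environment
def url = 'jdbc:mysql://localhost:3306/your-database'
def user = 'your-username'
def password = 'your-password'
def driver = 'com.mysql.cj.jdbc.Driver'

def sql = Sql.newInstance(url, user, password, driver)
try {
    // assumes a table like: CREATE TABLE results (sampler VARCHAR(255), elapsed BIGINT)
    sql.executeInsert('INSERT INTO results (sampler, elapsed) VALUES (?,?)', [name, elapsed])
} finally {
    sql.close()
}

Note that this opens a new connection per sample; for longer runs it would be better to create the connection once (for example in a setUp Thread Group) and reuse it.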

Best way to connect to MySQL and execute a query? (probably with Dapper)

I will preface with: I simply could not get the SQL Type Provider to work - it threw a dozen different errors at points and seemed to be a version conflict, so I want to avoid that. I've been following mostly C# examples and can't always get the syntax right in F#.
I am targeting .NET 6 (though I can drop to 5 if it's going to be an issue).
I have modelled the data as a type as well.
I like the look of Dapper the best but I generally don't need a full ORM and would just like to run raw SQL queries so am open to other solutions.
I have a MySQL server running and a connection string.
I would like to
Initialize an SQL connection with my connection string.
Execute a query (preferably in raw SQL). If a select query, map it to my data type.
Be able to neatly execute more queries from elsewhere in the code without reinitializing a connection.
It's really just a package and a syntax example of those three things that I need. Thanks.
This is an example where I've used Dapper to query an MS SQL Express database. I have quite a lot of helper methods that I've made through the years in order to make Dapper (and, to a slight degree, also SqlClient) easy and type-safe in F#. Below you see just two of these helpers - queryMultipleAsSeq and queryMultipleToList.
I realize now that it's not that easy to get going with Dapper and F# unless these can be made available to others. I have created a repo on GitHub for this, which will be updated regularly with new helper functions and demos to show how they're used.
The address is https://github.com/BentTranberg/DemoDapperStuff
Ok, now this initial demo:
module DemoSql.Main

open System
open System.Data.SqlClient
open Dapper
open Dapper.Contrib
open Dapper.Contrib.Extensions

let queryMultipleAsSeq<'T> (conn: SqlConnection, sql: string, args: obj) : 'T seq =
    conn.Query<'T> (sql, args)

let queryMultipleToList<'T> (conn: SqlConnection, sql: string, args: obj) : 'T list =
    queryMultipleAsSeq (conn, sql, args)
    |> Seq.toList

let connectionString = @"Server=.\SqlExpress;Database=MyDb;User Id=sa;Password=password"

let [<Literal>] tableUser = "User"

[<Table (tableUser); CLIMutable>]
type EntUser =
    {
        Id: int
        UserName: string
        Role: string
        PasswordHash: string
    }

let getUsers () =
    use conn = new SqlConnection(connectionString)
    (conn, "SELECT * FROM " + tableUser, null)
    |> queryMultipleToList<EntUser>

[<EntryPoint>]
let main _ =
    getUsers ()
    |> List.iter (fun user -> printfn "Id=%d User=%s" user.Id user.UserName)
    Console.ReadKey() |> ignore
    0
The packages used for this demo:
<PackageReference Include="Dapper.Contrib" Version="2.0.78" />
<PackageReference Include="System.Data.SqlClient" Version="4.8.2" />
Dapper.Contrib will drag along Dapper itself.

Creating SSIS Folder with PowerShell 2.0

I'm looking for information that might look simple to find, but I can't put my hands on it.
I want to create a folder in an SSISDB catalog through a PowerShell script, but I get an error saying that PowerShell can't load the assemblies Microsoft.SqlServer.BatchParser and Microsoft.SqlServer.BatchParserClient, even though they are present in C:\Windows\Assembly.
I actually suspect that PowerShell is running with too old a version, which is 2.0. Can anyone confirm whether an SSIS catalog folder can be created with PowerShell 2.0?
Thanks for your help.
Since no code was provided, it's terribly challenging to debug why it isn't working. However, this code is what I use as part of my ispac deployment.
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.IntegrationServices") | Out-Null

# This allows the debug messages to be shown
$DebugPreference = "Continue"

# Retrieves an Integration Services CatalogFolder object
# Creates one if not found
Function Get-CatalogFolder
{
    param
    (
        [string] $folderName
        , [string] $folderDescription
        , [string] $serverName = "localhost\dev2012"
    )

    $connectionString = [String]::Format("Data Source={0};Initial Catalog=msdb;Integrated Security=SSPI;", $serverName)
    $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
    $integrationServices = New-Object Microsoft.SqlServer.Management.IntegrationServices.IntegrationServices($connection)

    # The one, the only SSISDB catalog
    $catalog = $integrationServices.Catalogs["SSISDB"]

    $catalogFolder = $catalog.Folders[$folderName]
    if (-not $catalogFolder)
    {
        Write-Debug([System.String]::Format("Creating folder {0}", $folderName))
        $catalogFolder = New-Object Microsoft.SqlServer.Management.IntegrationServices.CatalogFolder($catalog, $folderName, $folderDescription)
        $catalogFolder.Create()
    }
    else
    {
        $catalogFolder.Description = $folderDescription
        $catalogFolder.Alter()
        Write-Debug([System.String]::Format("Existing folder {0}", $folderName))
    }

    return $catalogFolder
}

$folderName = "ProdSupport HR export"
$folderDescription = "Prod deployment check"
$serverName = "localhost\dev2012"
$catalogFolder = Get-CatalogFolder $folderName $folderDescription $serverName
$folderName = "ProdSupport HR export"
$folderDescription = "Prod deployment check"
$serverName = "localhost\dev2012"
$catalogFolder = Get-CatalogFolder $folderName $folderDescription $serverName
There may be more graceful ways of doing this within PowerShell, but this gets the job done. Reading the above code logically:
Create a SqlClient connection to the server in question
Instantiate the IntegrationServices class
Point it at the actual catalog (assumes it has already been created)
Test whether the folder already exists
If the folder does not exist, create it
If the folder exists, update the description

Using Groovy in Confluence

I'm new to Groovy and coding in general, but I've come a long way in a very short amount of time. I'm currently working in Confluence to create a tracking tool, which connects to a MySQL database. We've had some great success with this, but have hit a wall with using Groovy and the Run Macro.
Currently, we can use Groovy to populate fields within the Run Macro, which works really well for drop-down options. Example:
{groovy:output=wiki}
import com.atlassian.renderer.v2.RenderMode

def renderMode = RenderMode.suppress(RenderMode.F_FIRST_PARA)

def getSql = "select * from table where x = y"
def getMacro = "{sql-query:datasource=testdb|table=false} ${getSql} {sql-query}"
def get = subRenderer.render(getMacro, context, renderMode)

def runMacro = """
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters = true}
{sql:datasource=testdb|table=false|p1=\$name|p2=\$type}
insert into table1 (name, type) values (?, ?)
{sql}
{run}
"""

out.println runMacro
{groovy}
We've also been able to use Groovy within the Run Macro, example:
{run:id=test|autorun=false|replace=name::Name, type::Type:select::${get}|keepRequestParameters = true}
{groovy}
def checkSql = "{select * from table where name = '\$name' and type = '\$type'}"
def checkMacro = "{sql-query:datasource=testdb|table=false} ${checkSql} {sql-query}"
def check = subRenderer.render(checkMacro, context, renderMode)

if (check == "") {
    println("This information does not exist.")
} else {
    println(checkMacro)
}
{groovy}
{run}
However, we can't seem to get both scenarios to work together: Groovy inside of a Run Macro inside of Groovy.
We need to be able to get the variables out of the Run Macro form so that we can perform other functions, like checking the DB for duplicates before inserting data.
My first thought is to bypass the Run Macro and create a simple form in Groovy, but I haven't had much luck finding good examples. Can anyone help steer me in the right direction for creating a simple form in Groovy that would replace the Run Macro? Or have suggestions on how to get the rendered variables out of the Run Macro?
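Not a full answer, but as a starting point for the "simple form in Groovy" idea: the {groovy} macro can print raw HTML, so one option is to render a plain HTML form and read its fields back as request parameters, keeping everything in Groovy and skipping the Run Macro entirely. A minimal sketch, assuming the macro supports output=html alongside the output=wiki used above (the field names and submit handling are placeholders):

{groovy:output=html}
// Sketch of a hand-rolled form replacing the Run Macro.
// On submit, the page reloads and the posted parameters can be read
// back in Groovy to run the duplicate check before inserting.
out.println """
<form method="post">
  <label>Name: <input type="text" name="name"/></label>
  <label>Type:
    <select name="type">
      <option>TypeA</option>
      <option>TypeB</option>
    </select>
  </label>
  <input type="submit" value="Save"/>
</form>
"""
{groovy}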

Jenkins parameterized job that only queues one build

Imagine a Jenkins job A which takes 1 minute to run, and job B which takes 5 minutes.
If we configure job A to trigger job B, while job B is running job A may run 5 times before B completes. However, Jenkins doesn't add 5 builds to job B's queue, which is great because otherwise speedy job A would be creating an ever-growing backlog of builds for poor slow job B.
However, now we want to have job A trigger B as a parameterized job, using the parameterized trigger plugin. Parameterized jobs do queue up a backlog, which means job A is happily creating a huge pile of builds for job B, which can't possibly keep up.
It does make sense to add a new parameterized build to the queue each time it's triggered, since the parameters may be different. Jenkins should not always assume that a new parameterized build renders previously queued ones unnecessary.
However, in our case we actually would like this. Job A builds and packages our application, then Job B deploys it to a production-like environment and runs a heavier set of integration tests. We also have a build C which deploys to another environment and does even more testing, so this is an escalating pattern for us.
We would like the queue for our parameterized job B to only keep the last build added to it; each new build would replace any job currently in the queue.
Is there any nice way to achieve this?
Add a "System Groovy Script" pre-build step to job B that checks for (newer) queued jobs of the same name, and bails out if found:
def name = build.properties.environment.JOB_NAME
def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
if (queue.any{ it.task.getName() == name }) {
    println "Newer " + name + " job(s) in queue, aborting"
    build.doStop()
} else {
    println "No newer " + name + " job(s) in queue, proceeding"
}
You could get rid of the Parameterized Trigger Plugin and, instead, use traditional triggering. As you said, this would prevent job B's queue from piling up.
How do you pass the parameters from A to B, then? Make job A print the parameters in its console output. In job B, to get these build parameters, examine the console output of the latest A build (with a Python script, perhaps?).
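A rough System Groovy version of that console-scraping idea, run as the first build step of job B (it assumes job A is named "JobA" and prints lines like PARAM_FOO=bar; both are placeholders):

// read the latest successful JobA build's console log and recover parameters
def jobA = jenkins.model.Jenkins.getInstance().getItem("JobA")
def lastBuild = jobA.getLastSuccessfulBuild()

def params = [:]
lastBuild.getLog(1000).each { line ->   // scan up to the last 1000 log lines
    def m = line =~ /^PARAM_(\w+)=(.*)$/
    if (m) { params[m[0][1]] = m[0][2] }
}
println "Recovered parameters from ${lastBuild}: ${params}"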
Ron's solution worked for me. If you don't like having a bunch of cancelled builds in the build history, you can add the following System Groovy script to job A before you trigger job B:
import hudson.model.*

def q = jenkins.model.Jenkins.getInstance().getQueue()
def items = q.getItems()
for (i = 0; i < items.length; i++) {
    if (items[i].task.getName() == "JobB") {
        items[i].doCancelQueue()
    }
}
Here's one workaround:
Create a job A2B between jobs A and B
Add a build step in job A2B that determines whether B is running. To achieve that, check:
Determine if given job is currently running using Hudson/Jenkins API
Python API's is_queued_or_running()
Finally, trigger job B from A2B only if there are no B builds queued or running, carrying the parameters through (a rough Groovy version of this check is sketched below)
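A rough System Groovy equivalent of that check for the A2B step (the downstream job name "JobB" is a placeholder):

// trigger B only if it has no queued or running builds
def jobB = jenkins.model.Jenkins.getInstance().getItemByFullName("JobB")
if (jobB.isBuilding() || jobB.isInQueue()) {
    println "JobB is queued or running - skipping this trigger"
} else {
    println "JobB is idle - safe to trigger with pass-through parameters"
}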
In case you're using Git, this is now supported by the "Combine Queued git hashes" option under Triggering/Parameters/Pass-through.
The first Git plugin version that should actually work with this is 1.1.27 (see JENKINS-15160).
Here's a more flexible option if you only care about a few parameters matching. This is especially helpful when a job is triggered externally (i.e. from GitHub or Stash) and some parameters don't warrant a separate build.
If the checked parameters match in both value and existence in both the current build and a queued build, the current build will be aborted and the description will show that a future build contains the same checked parameters (along with what they were).
It could be modified to cancel all other queued jobs except the last one if you don't want to have build history showing the aborted jobs.
checkedParams = [
    "PARAM1",
    "PARAM2",
    "PARAM3",
    "PARAM4",
]

def buildParams = null
def name = build.project.name
def queuedItems = jenkins.model.Jenkins.getInstance().getQueue().getItems()
yieldToQueuedItem = false

for (hudson.model.Queue.Item item : queuedItems.findAll { it.task.getName() == name }) {
    if (buildParams == null) {
        buildParams = [:]
        paramAction = build.getAction(hudson.model.ParametersAction.class)
        if (paramAction) {
            buildParams = paramAction.getParameters().collectEntries {
                [(it.getName()): it.getValue()]
            }
        }
    }
    itemParams = [:]
    paramAction = item.getAction(hudson.model.ParametersAction.class)
    if (paramAction) {
        itemParams = paramAction.getParameters().collectEntries {
            [(it.getName()): it.getValue()]
        }
    }
    equalParams = true
    for (String compareParam : checkedParams) {
        itemHasKey = itemParams.containsKey(compareParam)
        buildHasKey = buildParams.containsKey(compareParam)
        if (itemHasKey != buildHasKey || (itemHasKey && itemParams[compareParam] != buildParams[compareParam])) {
            equalParams = false
            break
        }
    }
    if (equalParams) {
        yieldToQueuedItem = true
        break
    }
}

if (yieldToQueuedItem) {
    out.println "Newer " + name + " job(s) in queue with matching checked parameters, aborting"
    build.description = "Yielded to future build with:"
    checkedParams.each {
        build.description += "<br>" + it + " = " + build.buildVariables[it]
    }
    build.doStop()
    return
} else {
    out.println "No newer " + name + " job(s) in queue with matching checked parameters, proceeding"
}
The following is based on Ron's solution, but with some fixes to work on my Jenkins 2 installation, including removing a java.io.NotSerializableException and handling that the format of getName() is sometimes different from that of JOB_NAME.
// Exception to distinguish abort due to newer jobs in queue
class NewerJobsException extends hudson.AbortException {
    public NewerJobsException(String message) { super(message); }
}

// Find the Jenkins job name from the URL name (which is the most consistently
// named field in the task object)
// Known forms:
//   job/NAME/
//   job/NAME/98/
@NonCPS
def name_from_url(url)
{
    url = url.substring(url.indexOf("/") + 1);
    url = url.substring(0, url.indexOf("/"));
    return url
}

// Depending on installed plugins multiple jobs may be queued. If that is the
// case skip this one.
// http://stackoverflow.com/questions/26845003/how-to-execute-only-the-most-recent-queued-job-in-jenkins
// http://stackoverflow.com/questions/8974170/jenkins-parameterized-job-that-only-queues-one-build
@NonCPS
def check_queue()
{
    def name = env.JOB_NAME
    def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
    if (queue.any{ name_from_url(it.task.getUrl()) == name }) {
        print "Newer ${name} job(s) in queue, aborting"
        throw new NewerJobsException("Newer ${name} job(s) in queue, aborting")
    } else {
        print "No newer ${name} job(s) in queue, proceeding"
    }
}