I have a table in my database called locations, which contains two columns (id, which is auto-incremented, and location), and I have a CSV file containing a single column.
When I try to import that file into the locations table, I get this error: invalid column count in CSV input on line 1.
I also tried importing the CSV using LOAD DATA, but I get this: MySQL returned an empty result set (i.e. zero rows).
Maybe you should use:
$array = array_map('str_getcsv', file('file.csv'));
That way you have more options for checking the values before inserting them.
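If you would rather fix the LOAD DATA route, the usual cause of both messages is a column-count mismatch: the table has two columns but the file has only one, so you need to name the target column explicitly (phpMyAdmin's "MySQL returned an empty result set" is also what it prints for any statement that returns no rows). Below is a minimal JDBC sketch of that idea; the file path, credentials and database name are placeholder assumptions, and allowLoadLocalInfile must be enabled on both client and server.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadLocations {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; allowLoadLocalInfile=true lets Connector/J send LOCAL infiles.
        String url = "jdbc:mysql://localhost:3306/test1?allowLoadLocalInfile=true";
        try (Connection con = DriverManager.getConnection(url, "root", "");
             Statement st = con.createStatement()) {
            // Naming (location) maps the single CSV field onto that column
            // and leaves the auto-incremented id for MySQL to fill in.
            int rows = st.executeUpdate(
                "LOAD DATA LOCAL INFILE 'D:/test_source.csv' "
                + "INTO TABLE locations "
                + "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' "
                + "(location)");
            System.out.println(rows + " rows loaded");
        }
    }
}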
If Java is available on your machine, you can use the solution below:
https://dbisweb.wordpress.com/
The only configuration required is:
<?xml version="1.0" encoding="UTF-8"?>
<config>
<connections>
<jdbc name="mysql">
<driver>com.mysql.jdbc.Driver</driver>
<url>jdbc:mysql://localhost:3306/test1</url>
<user>root</user>
<password></password>
</jdbc>
</connections>
<component-flow>
<execute name="file2table" enabled="true">
<migrate>
<source>
<file delimiter="," header="false" path="D:/test_source.csv"/>
</source>
<destination>
<table connection="mysql">locations</table>
</destination>
<mapping>
<column source="1" destination="location" />
</mapping>
</migrate>
</execute>
</component-flow>
</config>
If you are interested, the same can be achieved with Java code:
Source source = new File("D:/test_source.csv", ',', false);
Destination destination = new Table("locations", new JDBCConnection("com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/test1", "root", ""));
List<Mapping> mapping = new ArrayList<>();
mapping.add(new Mapping("1", "location", null));
Component component = new MigrateExecutor(source, destination, mapping);
component.execute();
Scenario:
I have used the DBLookup mediator to retrieve the full name by passing part of the name. For that I used the LIKE option in SQL.
Full Name: John Smith
Value passed: John
SQL: SELECT * FROM table WHERE FullName like '%John%'
Used config:
<dblookup>
<connection>
<pool>
<driver>com.mysql.jdbc.Driver</driver>
<url>jdbc:mysql://localhost:3306/world</url>
<user>root</user>
<password>root</password>
</pool>
</connection>
<statement>
<sql>SELECT TP_ID, TP_FULL_NAME, TP_USER_NAME, TP_USER_PASSWORD, TP_ACTIVE, TP_CHANGED_TIME, TP_TENANT_ID FROM tp_user WHERE TP_FULL_NAME like ('%?%');</sql>
<parameter expression="get-property('name')" type="VARCHAR" />
</statement>
</dblookup>
Error:
[2018-11-08 13:04:14,943] [] ERROR - DBLookupMediator SQL Exception occurred while executing statement : SELECT TP_ID, TP_FULL_NAME, TP_USER_NAME, TP_USER_PASSWORD, TP_ACTIVE, TP_CHANGED_TIME, TP_TENANT_ID FROM tp_user WHERE TP_FULL_NAME like ('%?%'); against DataSource : jdbc:mysql://localhost:3306/world
java.sql.SQLException: Parameter index out of range (1 > number of parameters, which is 0).
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965)
Change your SQL statement as below:
SELECT TP_ID, TP_FULL_NAME, TP_USER_NAME, TP_USER_PASSWORD, TP_ACTIVE, TP_CHANGED_TIME, TP_TENANT_ID FROM tp_user WHERE TP_FULL_NAME like CONCAT('%',?,'%');
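Why this works: inside a quoted literal, ? is just a character, so the driver prepares a statement with zero parameters while the mediator still tries to bind one, hence "Parameter index out of range (1 > number of parameters, which is 0)". CONCAT('%', ?, '%') keeps the ? outside the quotes as a real placeholder. Here is a plain-JDBC sketch of the same fix; the connection details are placeholders taken from the pool config above.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NameLookup {
    public static void main(String[] args) throws Exception {
        // like '%?%' would leave the statement with zero bind parameters;
        // CONCAT('%', ?, '%') keeps ? as a genuine placeholder.
        String sql = "SELECT TP_FULL_NAME FROM tp_user "
                   + "WHERE TP_FULL_NAME LIKE CONCAT('%', ?, '%')";
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/world", "root", "root");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "John"); // part of the name, e.g. matches "John Smith"
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("TP_FULL_NAME"));
                }
            }
        }
    }
}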
For further reference you can follow this LINK
Summary
Configuration file lookup works for Execute SQL Task, but fails for Dataflow tasks.
Problem
I have 2 databases:
Source (On-premise SQL Server database)
Destination (Azure SQL Database)
I have 2 packages that I want to create from BIML code.
1) Create Staging (Works fine)
Creates tables in destination database using for each loop and metadata from source database
2) Load Staging (Does not work)
Loads created tables in destination database using for each loop and Dataflow tasks (Source to Destination)
Both of these packages need to use a Package Configuration file that I have created, which stores the Username and Password of the Destination database (Azure database, using SQL Server Authentication).
Using this configuration file works fine for Package 1), but when I try to create the SSIS package using BIML code for Package 2) I get the following error:
Could not execute Query on Connection Dest: SELECT * FROM stg.SalesTaxRate. Login failed for user ''.
I have tried taking the Biml code for Package 1) and adding in a Dataflow task, and that raises the same error. It seems that with an Execute SQL Task the package can find and use the configuration file without a problem, but with a Dataflow Task it cannot.
Script for Package 1):
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ template language="C#" tier="2" #>
<#
string _source_con_string = #"Data Source=YRK-L-101098;Persist Security Info=true;Integrated Security=SSPI;Initial Catalog=AdventureWorks2016";
string _dest_con_string = #"Data Source=mpl.database.windows.net;Initial Catalog=mpldb;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False";
string _table_name_sql = "SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='BASE TABLE'";
DataTable _table_names = new DataTable();
SqlDataAdapter _table_name_da = new SqlDataAdapter(_table_name_sql, _source_con_string);
_table_name_da.Fill(_table_names);
#>
<#+
public string RowConversion(DataRow Row)
{
string _ret = "[" + Row["COLUMN_NAME"] + "] " + Row["DATA_TYPE"];
switch (Row["DATA_TYPE"].ToString().ToUpper())
{
case "NVARCHAR":
case "VARCHAR":
case "NCHAR":
case "CHAR":
case "BINARY":
case "VARBINARY":
if (Row["CHARACTER_MAXIMUM_LENGTH"].ToString() == "-1")
_ret += "(max)";
else
_ret += "(" + Row["CHARACTER_MAXIMUM_LENGTH"] + ")";
break;
case "NUMERIC":
_ret += "(" + Row["NUMERIC_PRECISION"] + "," + Row["NUMERIC_SCALE"] + ")";
break;
case "FLOAT":
_ret += "(" + Row["NUMERIC_PRECISION"] + ")";
break;
}
return _ret;
}
#>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<Connections>
<OleDbConnection Name="Dest" ConnectionString="Data Source=mpl.database.windows.net;Initial Catalog=mpldb;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False" />
</Connections>
<Packages>
<Package Name="005_Create_Staging_Configuration" ConstraintMode="Linear">
<PackageConfigurations>
<PackageConfiguration Name="Configuration">
<ExternalFileInput ExternalFilePath="C:\VSRepo\BIML\Configurations\AzureConfigEdit.dtsConfig">
</ExternalFileInput>
</PackageConfiguration>
</PackageConfigurations>
<Tasks>
<Container Name="Create Staging Tables" ConstraintMode="Linear">
<Tasks>
<# foreach(DataRow _table in _table_names.Rows) { #>
<ExecuteSQL Name="SQL-S_<#= _table["TABLE_NAME"] #>" ConnectionName="Dest">
<DirectInput>
IF OBJECT_ID('stg.<#= _table["TABLE_NAME"] #>','U') IS NOT NULL
DROP TABLE stg.<#= _table["TABLE_NAME"] #>;
CREATE TABLE stg.<#= _table["TABLE_NAME"] #>
(
<#
string _col_name_sql = "select COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION, NUMERIC_SCALE from INFORMATION_SCHEMA.COLUMNS where TABLE_SCHEMA='" + _table["TABLE_SCHEMA"] + "' and TABLE_NAME='"+ _table["TABLE_NAME"] + "' order by ORDINAL_POSITION ";
DataTable _col_names = new DataTable();
SqlDataAdapter _col_names_da = new SqlDataAdapter(_col_name_sql, _source_con_string);
_col_names_da.Fill(_col_names);
for (int _i=0; _i<_col_names.Rows.Count ; _i++ )
{
DataRow _r = _col_names.Rows[_i];
if (_i == 0)
WriteLine(RowConversion(_r));
else
WriteLine(", " + RowConversion(_r));
}
#>
, append_dt datetime
)
</DirectInput>
</ExecuteSQL>
<# } #>
</Tasks>
</Container>
</Tasks>
</Package>
</Packages>
</Biml>
Script for Package 2):
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ template language="C#" tier="2" #>
<#
string _source_con_string = @"Data Source=YRK-L-101098;Persist Security Info=true;Integrated Security=SSPI;Initial Catalog=AdventureWorks2016";
string _dest_con_string = @"Data Source=mpl.database.windows.net;Initial Catalog=mpldb;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False";
string _table_name_sql = "select TABLE_SCHEMA , table_name from INFORMATION_SCHEMA.TABLES where TABLE_TYPE='BASE TABLE'";
DataTable _table_names = new DataTable();
SqlDataAdapter _table_name_da = new SqlDataAdapter(_table_name_sql, _source_con_string);
_table_name_da.Fill(_table_names);
#>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<Connections>
<OleDbConnection Name="Source" ConnectionString="Data Source=YRK-L-101098;Provider=SQLNCLI11.1;Persist Security Info=true;Integrated Security=SSPI;Initial Catalog=AdventureWorks2016" />
<OleDbConnection Name="Dest" ConnectionString="Data Source=mpl.database.windows.net;Initial Catalog=mpldb;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False" />
</Connections>
<Packages>
<Package Name="006_Load_Staging_Configuration" ConstraintMode="Linear">
<PackageConfigurations>
<PackageConfiguration Name="Configuration">
<ExternalFileInput ExternalFilePath="C:\VSRepo\BIML\Configurations\AzureConfigDF.dtsConfig"></ExternalFileInput>
</PackageConfiguration>
</PackageConfigurations>
<Tasks>
<Container Name="Load Staging Tables" ConstraintMode="Linear">
<Tasks>
<# foreach(DataRow _table in _table_names.Rows) { #>
<Dataflow Name="DFT-S_<#= _table["TABLE_NAME"] #>">
<Transformations>
<OleDbSource Name="SRC-<#= _table["TABLE_SCHEMA"] #>_<#= _table["TABLE_NAME"] #>" ConnectionName="Source">
<DirectInput>
SELECT *
FROM <#= _table["TABLE_SCHEMA"] #>.<#= _table["TABLE_NAME"] #>
</DirectInput>
</OleDbSource>
<OleDbDestination Name="DST-<#= _table["TABLE_SCHEMA"] #>_<#= _table["TABLE_NAME"] #>" ConnectionName="Dest">
<ExternalTableOutput Table="stg.<#= _table["TABLE_NAME"] #>"/>
</OleDbDestination>
</Transformations>
</Dataflow>
<# } #>
</Tasks>
</Container>
</Tasks>
</Package>
</Packages>
</Biml>
Configuration file:
<?xml version="1.0"?>
<DTSConfiguration>
<Configuration ConfiguredType="Property" Path="\Package.Connections[Source].Properties[ConnectionString]" ValueType="String">
<ConfiguredValue>"Data Source=YRK-L-101098;Provider=SQLNCLI11.1;Persist Security Info=true;Integrated Security=SSPI;Initial Catalog=AdventureWorks2016"</ConfiguredValue>
</Configuration>
<Configuration ConfiguredType="Property" Path="\Package.Connections[Dest].Properties[ConnectionString]" ValueType="String">
<ConfiguredValue>Data Source=mpl.database.windows.net;User ID=*****;Initial Catalog=mpldb;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False</ConfiguredValue>
</Configuration>
<Configuration ConfiguredType="Property" Path="\Package.Connections[Dest].Properties[Password]" ValueType="String">
<ConfiguredValue>******</ConfiguredValue>
</Configuration>
<Configuration ConfiguredType="Property" Path="\Package.Connections[Dest].Properties[UserName]" ValueType="String">
<ConfiguredValue>******</ConfiguredValue>
</Configuration>
</DTSConfiguration>
Note: the values for 'User ID' and 'Password' are filled in with the correct values in the actual script.
Summary
The issue is that you are trying to use package configurations (an SSIS run-time feature) when developing the packages by generating them using Biml (a build-time feature).
What is happening?
Think of it this way. If you were to manually create the SSIS package, during development, you would have to connect to the source and destination databases by specifying user names and passwords. Without connecting to the databases, SSIS will not be able to get the metadata needed. Once the package has been developed and all the metadata has been mapped, you can use package configurations. When you open or execute a package with package configurations, all hardcoded values will be replaced by configuration values. This value replacement is the SSIS run-time feature.
Now, compare that to using Biml instead of manually creating the SSIS package: When generating the packages, you are expecting the Biml engine to get the user names and passwords from the package configuration files. Since this is an SSIS run-time feature, Biml is unable to get that data, and will use the connection strings specified in the BimlScript. Since these connection strings don't specify the user names and passwords, Biml will not be able to connect and you get the error Login failed for user ''. (This is similar to creating a connection manager in SSIS without providing a user name and password and getting an error when clicking "Test Connection".)
But it works for the Execute SQL task?
It may look like it, but it doesn't. The Execute SQL task basically just gets ignored. The SQL code in Execute SQL tasks is not checked or validated by SSIS or the Biml engine until the package is executed. You can type anything in there, and SSIS will be happy until you try to execute invalid code, at which point it will give you an error. Because this code is not validated by SSIS during development, it is not validated by Biml during package generation. The package gets generated successfully, and then when you open it, the package configurations will be applied, and you will not see any errors.
An OLE DB Destination, however, is validated by both SSIS and the Biml engine during development. They both need to connect to the database to get the metadata. This is why you get an error on this file only.
Solution
Package Configurations are an SSIS run-time feature only. You cannot use them to pass connection strings, user names, or passwords to the Biml engine. You can either hardcode the connection strings in your BimlScript or store them in an external metadata repository, but either way you will need to provide the user names and passwords to the Biml engine during package generation.
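For example, hardcoding the credentials into the Dest connection at build time might look like this (the User ID and Password values are placeholders):
<OleDbConnection Name="Dest" ConnectionString="Data Source=mpl.database.windows.net;User ID=*****;Password=*****;Initial Catalog=mpldb;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False" />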
I'm using Groovy to concatenate two fields in a CSV file.
It's working OK, except that the concatenated field appears with quotes.
Is there any way to resolve this?
ant.mkdir(dir:"target")
new File("target/UpsertCheckDeals.csv").withWriter {
new File("C:/Users/alon/Documents/CheckDealReadyForConcat.csv").splitEachLine(",") {Customer__c,Name__c,Deal__c,Check_Count__c ->
it.println "${Customer__c},${Deal__c},${Deal_Source__c},${Salesperson_Name__c},${Customer__c}-${Deal__c}"
CSV is a more complicated file format than it first appears. Fields can be optionally quoted, which appears to be your problem.
Most programming languages have a library that will parse CSV. In the case of Groovy, I'd recommend opencsv:
http://opencsv.sourceforge.net/
The following example extends the example I created for your previous question.
Example
├── build.xml
├── src
│ └── file1.csv
└── target
└── file1.csv
src/file1.csv
"customer",deal
"200000042",23
"200000042",34
"200000042",35
"200000042",65
target/file1.csv
customer,deal,customer-deal
200000042,23,200000042-23
200000042,34,200000042-34
200000042,35,200000042-35
200000042,65,200000042-65
build.xml
<project name="demo" default="build" xmlns:ivy="antlib:org.apache.ivy.ant">
<available classname="org.apache.ivy.Main" property="ivy.installed"/>
<target name="build" depends="resolve">
<taskdef name="groovy" classname="org.codehaus.groovy.ant.Groovy" classpathref="build.path"/>
<groovy>
import com.opencsv.CSVReader
ant.mkdir(dir:"target")
new File("target/file1.csv").withWriter { writer ->
new File("src/file1.csv").withReader { reader ->
CSVReader csv = new CSVReader(reader);
csv.iterator().each { row ->
if (row.size() == 2) {
writer.println "${row[0]},${row[1]},${row[0]}-${row[1]}"
}
}
}
}
</groovy>
</target>
<target name="resolve" depends="install-ivy">
<ivy:cachepath pathid="build.path">
<dependency org="org.codehaus.groovy" name="groovy-all" rev="2.4.7" conf="default"/>
<dependency org="com.opencsv" name="opencsv" rev="3.8" conf="default"/>
</ivy:cachepath>
</target>
<target name="install-ivy" unless="ivy.installed">
<mkdir dir="${user.home}/.ant/lib"/>
<get dest="${user.home}/.ant/lib/ivy.jar" src="http://search.maven.org/remotecontent?filepath=org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar"/>
<fail message="Ivy has been installed. Run the build again"/>
</target>
</project>
Notes:
Uses Apache Ivy to manage dependencies like Groovy and opencsv.
I included the test "row.size() == 2" to prevent empty rows from throwing errors.
In SoapUI, I have a JDBC Test Step that returns the following data:
<Results>
<ResultSet fetchSize="128">
<Row rowNumber="1">
<ID>1</ID>
<NAME>TestName1</NAME>
<DESCRIPTION/>
<TYPE>Bool</TYPE>
<ISPRODUCTTAG>true</ISPRODUCTTAG>
<ISLOCATIONTAG>false</ISLOCATIONTAG>
<SUBSECTION>Default Sub Section</SUBSECTION>
<SECTION>Default Section</SECTION>
<SUBGROUP>Default Sub Group</SUBGROUP>
<GROUP>Default Group</GROUP>
</Row>
<Row rowNumber="2">
<ID>2</ID>
<NAME>TestName2</NAME>
<DESCRIPTION/>
<TYPE>Bool</TYPE>
<ISPRODUCTTAG>true</ISPRODUCTTAG>
<ISLOCATIONTAG>false</ISLOCATIONTAG>
<SUBSECTION>Default Sub Section</SUBSECTION>
<SECTION>Default Section</SECTION>
<SUBGROUP>Default Sub Group</SUBGROUP>
<GROUP>Default Group</GROUP>
</Row>
</ResultSet>
</Results>
I have a REST API XML response that contains the following data:
<ArrayOfTagInfo>
<TagInfo id="1" name="TestName1" type="Bool" isProductTag="true" isLocationTag="false" subsection="Default Sub Section" section="Default Section" subgroup="Default Sub Group" group="Default Group"/>
<TagInfo id="2" name="TestName2" type="Bool" isProductTag="true" isLocationTag="false" subsection="Default Sub Section" section="Default Section" subgroup="Default Sub Group" group="Default Group"/>
</ArrayOfTagInfo>
I would like to be able to compare (assert) the database values against the response values (the response can be XML or JSON depending on the request's Accept header), using Groovy arrays if possible, as the data returned from the database can be very large.
Can anyone help?
You can design your test case with the following test steps:
jdbc step
json step
groovy script step
The idea (pseudo-code) for the Groovy script step is:
get the response of the jdbc step
get the response of the json step
build the data in the form of objects, so that it is easy to compare
store the objects in lists, one for jdbc and another for json
sort both lists, to make sure the same data is being compared
assert that both lists are equal.
Here goes the groovy script:
/**
* Model object for comparing
*/
@groovy.transform.Canonical
class Model {
def id
def name
def type
def isProductTag
def isLocationTag
def subSection
def section
def subGroup
def group
/**
* this will accept the jdbc row
* @param row
* @return
*/
def buildJdbcData(row) {
row.with {
id = ID
name = NAME
type = TYPE
isProductTag = ISPRODUCTTAG
isLocationTag = ISLOCATIONTAG
subSection = SUBSECTION
section = SECTION
subGroup = SUBGROUP
group = GROUP
}
}
/**
* this will accept the json TagInfo
* @param tagInfo
* @return
*/
def buildJsonData(tagInfo){
id = tagInfo.@id
name = tagInfo.@name
type = tagInfo.@type
isProductTag = tagInfo.@isProductTag
isLocationTag = tagInfo.@isLocationTag
subSection = tagInfo.@subsection
section = tagInfo.@section
subGroup = tagInfo.@subgroup
group = tagInfo.@group
}
}
/**
* Creating the jdbcResponse with a fixed value for testing.
* If you want, you can assign the actual response instead, using the line below; make sure you replace the step name with the correct one:
* def jdbcResponse = context.expand('${JdbcStepName#Response}')
*/
def jdbcResponse = '''<Results>
<ResultSet fetchSize="128">
<Row rowNumber="1">
<ID>1</ID>
<NAME>TestName1</NAME>
<DESCRIPTION/>
<TYPE>Bool</TYPE>
<ISPRODUCTTAG>true</ISPRODUCTTAG>
<ISLOCATIONTAG>false</ISLOCATIONTAG>
<SUBSECTION>Default Sub Section</SUBSECTION>
<SECTION>Default Section</SECTION>
<SUBGROUP>Default Sub Group</SUBGROUP>
<GROUP>Default Group</GROUP>
</Row>
<Row rowNumber="2">
<ID>2</ID>
<NAME>TestName2</NAME>
<DESCRIPTION/>
<TYPE>Bool</TYPE>
<ISPRODUCTTAG>true</ISPRODUCTTAG>
<ISLOCATIONTAG>false</ISLOCATIONTAG>
<SUBSECTION>Default Sub Section</SUBSECTION>
<SECTION>Default Section</SECTION>
<SUBGROUP>Default Sub Group</SUBGROUP>
<GROUP>Default Group</GROUP>
</Row>
<!--added 3rd row for testing the failure -->
<Row rowNumber="3">
<ID>3</ID>
<NAME>TestName3</NAME>
<DESCRIPTION/>
<TYPE>Bool</TYPE>
<ISPRODUCTTAG>false</ISPRODUCTTAG>
<ISLOCATIONTAG>false</ISLOCATIONTAG>
<SUBSECTION>Default Sub Section3</SUBSECTION>
<SECTION>Default Section3</SECTION>
<SUBGROUP>Default Sub Group3</SUBGROUP>
<GROUP>Default Group3</GROUP>
</Row>
</ResultSet>
</Results>'''
/**
* Creating the restResponse with a fixed value for testing.
* If you want, you can assign the actual response instead, using the line below; make sure you replace the step name with the correct one:
* def restResponse = context.expand('${JsonStepName#Response}')
*/
def restResponse = '''
<ArrayOfTagInfo>
<TagInfo id="1" name="TestName1" type="Bool" isProductTag="true" isLocationTag="false" subsection="Default Sub Section" section="Default Section" subgroup="Default Sub Group" group="Default Group"/>
<TagInfo id="2" name="TestName2" type="Bool" isProductTag="true" isLocationTag="false" subsection="Default Sub Section" section="Default Section" subgroup="Default Sub Group" group="Default Group"/>
<!--added 3rd row for testing the failure -->
<TagInfo id="3" name="TestName3" type="Bool" isProductTag="true" isLocationTag="false" subsection="Default Sub Section" section="Default Section" subgroup="Default Sub Group" group="Default Group"/>
</ArrayOfTagInfo>'''
//Parse the jdbc response and build the jdbc model object list
def results = new XmlSlurper().parseText(jdbcResponse)
def jdbcDataObjects = []
results.ResultSet.Row.each { row ->
jdbcDataObjects.add(new Model().buildJdbcData(row))
}
//Parse the json response and build the json model object list
def arrayOfTagInfo = new XmlSlurper().parseText(restResponse)
def jsonDataObjects = []
arrayOfTagInfo.TagInfo.each { tagInfo ->
jsonDataObjects.add(new Model().buildJsonData(tagInfo))
}
//sorting the Data before checking for equality
jdbcDataObjects.sort()
jsonDataObjects.sort()
if (jdbcDataObjects.size() != jsonDataObjects.size()) {
System.err.println("Jdbc resultset size is : ${jdbcDataObjects.size()} and Json result size is : ${jsonDataObjects.size()}")
}
assert jdbcDataObjects == jsonDataObjects, "Comparison of Jdbc and Json data is failed"
In the above, the 3rd row does not match, so the assert throws the following error:
Caught: java.lang.AssertionError: Comparison Failed. Expression: (jdbcDataObjects == jsonDataObjects). Values: jdbcDataObjects = [Default Group3, Default Group, Default Group], jsonDataObjects = [Default Group, Default Group, Default Group]
java.lang.AssertionError: Comparison Failed. Expression: (jdbcDataObjects == jsonDataObjects). Values: jdbcDataObjects = [Default Group3, Default Group, Default Group], jsonDataObjects = [Default Group, Default Group, Default Group]
at So31472381.run(So31472381.groovy:104)
Process finished with exit code 1
If you remove the 3rd row (from both responses), then you will not see any error, which indicates a successful comparison of the jdbc and json responses.
Note that the Groovy script step is available in both the free and pro versions of SoapUI, so this solution works in both editions.
If you have SoapUI Pro, you should be able to accomplish all of this with no Groovy:
Make the REST call to retrieve all your data.
Start a DataSource step that parses the XML.
Make a JDBC call that selects the correct ID of the row you want to verify. Make all the assertions in here.
Loop back to step 2 (the DataSource step).
I am using the following XML file (users_doc.xml):
<users>
<user trusted="false">
<userid>vsony7@vt.edu</userid>
<password>sony</password>
</user>
<user trusted="false">
<userid>shivi</userid>
<password>shivi</password>
</user>
<user trusted="false">
<userid>xyz</userid>
<password>xyz</password>
</user>
</users>
I am running the following XQuery (here $doc_name = users_doc and $userid = xyz):
declare variable $doc_name as xs:string external;
declare variable $userid as xs:string external;
let $users_doc := doc($doc_name)/users
return delete node $users_doc/user/userid=$userid/..
I am trying to find a given node <userid>xyz</userid>, and if the user exists, I would like to delete its parent node:
<user trusted="false">
<userid>xyz</userid>
<password>xyz</password>
</user>
But, when I run this query I get the following exception:
Exception in thread "main" java.io.IOException: Stopped at line 5, column 51:
[XPTY0019] Context node required for ..; xs:string found.
How do I fix this?
Thanks,
Sony
From http://www.w3.org/TR/xquery/#ERRXPTY0019
err:XPTY0019
It is a type error if the result of a step (other than the last step) in a path expression contains an atomic value.
Let's look at your expression:
$users_doc/user/userid=$userid/..
Since / binds more tightly than =, the parser reads the tail of your expression as:
$userid/..
That applies the parent step (..) to $userid, which is an xs:string, i.e. an atomic value. You can't select the parent::node() of an atomic value, which is exactly what the error reports: Context node required for ..; xs:string found.
You want this expression:
$users_doc/user[userid=$userid]
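Putting it together, the full query from the question becomes (a sketch using the same external variables as before):
declare variable $doc_name as xs:string external;
declare variable $userid as xs:string external;
let $users_doc := doc($doc_name)/users
return delete node $users_doc/user[userid = $userid]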