Create XML object with dynamic content from a source CSV file

I have the following CSV file:
iDocType,iDocId,iName,iDate
P,555551555,Braiden,2022-12-31
I,100000001,Dominique,2024-12-10
P,100000002,Joyce,2025-11-15
Using JMeter's JSR223 PreProcessor element, I need to compose an XML parent node containing multiple child nodes (how many is parametrized), and each child node must carry attributes whose values come from one of these CSV rows.
I think I need some way to loop over this CSV file and extract values from each row until all of my target objects are composed.
Maybe the approach should be a method, e.g. createMasterXml, with two arguments such as findTargetIdInCsv and targetNumberOfXmlNodes, containing a for loop that parses the CSV file and composes a child node on each pass with groovy.xml.MarkupBuilder. But I don't know how to approach the problem.
Target logic:
find a CSV row based on an ID variable
compose the first object with values from the first row found with this ID
find the next CSV row downwards
compose the 2nd object with values from that next row
.....
do this until the target number of objects is created
if the end of the file is reached, start again from the top row of the file (skipping the header) - see the sketch below the example
For example, given the CSV file described above:
I get a variable docId populated with the value 100000001, which is found on the 2nd row of data in the CSV file (ignoring the header);
I define a variable numberOfNodes = 3;
Then I expect an object created by this mapping:
child node 1 - ValuesFromCsvRow2
child node 2 - ValuesFromCsvRow3
child node 3 - ValuesFromCsvRow1
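To illustrate just the row selection I have in mind, here is a rough, standalone sketch (the rows are hard-coded only to show the wrap-around; in the real script they would come from the CSV file, and the names dataRows/docId/numberOfNodes are only illustrative):
// rough sketch of the intended row selection (hard-coded sample data)
def dataRows = [
    ['P', '555551555', 'Braiden', '2022-12-31'],
    ['I', '100000001', 'Dominique', '2024-12-10'],
    ['P', '100000002', 'Joyce', '2025-11-15']
]
def docId = '100000001'                                     // the ID variable
def numberOfNodes = 3

def startIndex = dataRows.findIndexOf { it[1] == docId }    // finds the 2nd data row
numberOfNodes.times { i ->
    def row = dataRows[(startIndex + i) % dataRows.size()]  // wraps back to the top
    println "child node ${i + 1} <- ${row}"                 // prints rows 2, 3, 1 as above
}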
Update:
JSR223 PreProcessor code:
(Note: with this current approach I am not able to compose the sub-node objects according to the intended logic described above, because it does not parse the CSV file and extract values row by row; I am missing the knowledge to do that.)
// input from the CSV Data Set Config variables (one row per iteration)
docType = vars.get('iDocType').toString()
docId = vars.get('iDocId').toString()
name = vars.get('iName').toString()
date = vars.get('iDate').toString()

def numberOfNodes = 3
def writer = new StringWriter()
def xml = new groovy.xml.MarkupBuilder(writer)

xml.objects() {
    createNode(xml, numberOfNodes, 'ID0000')
}

def createNode(builder, repeat, pReqID) {
    for (int i = 0; i < repeat; i++) {
        builder.object(a: 'false', b: 'false', pReqID: pReqID + (i + 1).toString()) {
            builder.al(ad: '2021-09-20', alc: 'bla', bla: '2021-09-20T11:00:00.000Z', sn: 'AB8912')
            builder.doc(docType: docType, docId: docId, name: name, date: date)
        }
    }
}

def nodeAsText = writer.toString()
//log.info(nodeAsText)
vars.put('passengers3XmlObj', nodeAsText)
The values on the builder.doc line are the ones I need to change for each created node, based on the values from each line of the source CSV file.
Currently my master object looks like the following, because per JMeter iteration I only know how to get the values of one CSV row per sampler (using the CSV Data Set Config test plan element):
<objects>
  <object a='false' b='false' pReqID='ID00001'>
    <al ad='2021-09-20' alc='bla' bla='2021-09-20T11:00:00.000Z' sn='AB8912' />
    <doc docType='P' docId='100000001' date='2024-12-10' name='Dominique' />
  </object>
  <object a='false' b='false' pReqID='ID00002'>
    <al ad='2021-09-20' alc='bla' bla='2021-09-20T11:00:00.000Z' sn='AB8912' />
    <doc docType='P' docId='100000001' date='2024-12-10' name='Dominique' />
  </object>
  <object a='false' b='false' pReqID='ID00003'>
    <al ad='2021-09-20' alc='bla' bla='2021-09-20T11:00:00.000Z' sn='AB8912' />
    <doc docType='P' docId='100000001' date='2024-12-10' name='Dominique' />
  </object>
</objects>
But I need it to look like this, keeping in mind the target logic:
<objects>
  <object a='false' b='false' c='ID00001'>
    <al ad='2021-09-20' alc='bla' dt='2021-09-20T11:00:00.000Z' sn='AB8912' />
    <doc docType='I' docId='100000001' date='2024-12-10' name='Dominique' />
  </object>
  <object a='false' b='false' c='ID00002'>
    <al ad='2021-09-20' alc='bla' dt='2021-09-20T11:00:00.000Z' sn='AB8912' />
    <doc docType='P' docId='100000002' date='2025-11-15' name='Joyce' />
  </object>
  <object a='false' b='false' c='ID00003'>
    <al ad='2021-09-20' alc='bla' dt='2021-09-20T11:00:00.000Z' sn='AB8912' />
    <doc docType='P' docId='555551555' date='2022-12-31' name='Braiden' />
  </object>
</objects>
Can someone please help me achieve this?

CSV Data Set Config by default reads the next line on each iteration of each virtual user. The behaviour is controllable to a certain extent via the Sharing Mode setting, but none of the sharing modes is suitable for reading the whole content of the CSV file at once.
If you want to parse all the entries of the CSV file in a single shot, do the reading/parsing in Groovy itself.
Something like:
def writer = new StringWriter()
def xml = new groovy.xml.MarkupBuilder(writer)
def pReqID = 'ID0000'

// read the whole CSV file at once; line 0 is the header, so data starts at line 1
def lines = new File('test.csv').readLines()

xml.objects() {
    1.upto(lines.size() - 1, { lineNo ->
        def fields = lines.get(lineNo).split(',')
        xml.object(a: 'false', b: 'false', pReqID: pReqID + lineNo.toString()) {
            xml.al(ad: '2021-09-20', alc: 'bla', bla: '2021-09-20T11:00:00.000Z', sn: 'AB8912')
            xml.doc(docType: fields[0], docId: fields[1], name: fields[2], date: fields[3])
        }
    })
}

def nodeAsText = writer.toString()
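The snippet above emits one child per data row of the whole file. If you also want the wrap-around behaviour described in the question (start at the row whose ID matches the current docId, take the following rows, wrap past the end of the file, stop after numberOfNodes children), a sketch along the same lines could look like the following. This is only an illustration: the test.csv location, the iDocId input variable and the passengers3XmlObj output variable are carried over from the question as assumptions.
import groovy.xml.MarkupBuilder

// read the whole CSV once; drop the header and split each data row into fields
def lines = new File('test.csv').readLines()
def dataRows = lines.drop(1).collect { it.split(',') }

def docId = vars.get('iDocId')          // ID to start from, e.g. 100000001
def numberOfNodes = 3                   // how many child nodes to compose
def startIndex = dataRows.findIndexOf { it[1] == docId }
if (startIndex == -1) {
    startIndex = 0                      // fall back to the first data row if the ID is not found
}

def writer = new StringWriter()
def xml = new MarkupBuilder(writer)
xml.objects() {
    (0..<numberOfNodes).each { i ->
        def row = dataRows[(startIndex + i) % dataRows.size()]   // wrap around at the end of the file
        xml.object(a: 'false', b: 'false', pReqID: 'ID0000' + (i + 1)) {
            xml.al(ad: '2021-09-20', alc: 'bla', bla: '2021-09-20T11:00:00.000Z', sn: 'AB8912')
            xml.doc(docType: row[0], docId: row[1], name: row[2], date: row[3])
        }
    }
}
vars.put('passengers3XmlObj', writer.toString())
The sampler body can then reference the result as ${passengers3XmlObj}.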

Related

BIML: automatic creation of OleDbDestinations for XMLSource in Dataflow

I have an XML file with 2 output paths and 2 tables in my staging DB. The tables and output paths have the same names.
Instead of writing OleDbDestination twice and changing the InputPath and ExternalTableOutput, I would like to use some BimlScript.
My current solution:
<Dataflow Name="DF_MyXml">
    <Transformations>
        <XmlSource Name="MyXml">
            <FileInput ConnectionName="simple.xml" />
            <XmlSchemaFileInput ConnectionName="simple.xsd" />
        </XmlSource>
        <OleDbDestination Name="Database" ConnectionName="Dest">
            <InputPath OutputPathName="MyXml.Database" />
            <ExternalTableOutput Table="Database" />
        </OleDbDestination>
        <OleDbDestination Name="Project" ConnectionName="Dest">
            <InputPath OutputPathName="MyXml.Project" />
            <ExternalTableOutput Table="Project" />
        </OleDbDestination>
    </Transformations>
</Dataflow>
What I would like to achieve:
<Dataflow Name="DF_MyXML">
    <Transformations>
        <XmlSource Name="MyXml">
            <FileInput ConnectionName="simple.xml" />
            <XmlSchemaFileInput ConnectionName="simple.xsd" />
        </XmlSource>
        <#foreach (var OutP in ["myXML"].DataflowOutputs) { #>
        <OleDbDestination Name="<#=OutP.Name#>" ConnectionName="Dest">
            <InputPath OutputPathName="MyXml.<#=OutP.Name#>" />
            <ExternalTableOutput Table="<#=OutP.Name#>" />
        </OleDbDestination>
        <# } #>
    </Transformations>
</Dataflow>
Sadly this isn't working. ;-)
In the API documentation for AstXMLSourceNode I found the property "DataflowOutputs", which "Gets a collection of all dataflow output paths for this transformation" (sounds promising, huh?), but I can't figure out how to reference the XMLSource in BimlScript at all.
Starting from RootNode I was able to find my Dataflow-Task but then I got stuck and didn't manage to "find" my Transformations\XMLSource.
Any help would be much appreciated!!
BTW: if there is a solution to automatically create destination tables based on a given XSD, that would be great too. :-)
You need to make sure your connections are declared in a separate file to be easily accessed in Biml script. You can mess with Console.WriteLine() to print out details about objects to the output window and get a glimpse of what is going on in the BimlScript.
In the second file, traditionally called Environment.biml, you need something like this (only with your XML file connection info; the data here is just a placeholder):
<Connections>
    <FileConnection Name="XmlFile" FilePath="C:\test\XmlFile.xml" RelativePath="true" />
    <FileConnection Name="XmlXsd" FilePath="C:\test\XmlXsd.Xsd" RelativePath="true" />
</Connections>
then you can do something to the effect of:
var fileConnection = RootNode.Connections["XmlFile"];
(sorry before I accidentally put DbConnections)
and play with it from there. I do not have any xml files at my disposal right now to play around with to help you get the exact information that you are looking for. I will update on Monday.

Hit query on pagination in display tag

I use DisplayTag in my Struts2 application and I want to fire a query when the user clicks through the pagination.
Example: when the user clicks on the next page (or any page), the query should be fired in the action class.
FILE : displayTag.jsp
<display:table name="list1" sort="list" size="20" pagesize="5" id="table1" export="true" requestURI="" partialList="true">
    <display:column property="no" group="1" sortable="true" headerClass="sortable"></display:column>
    <display:column property="nam" group="2" sortable="true" headerClass="sortable"></display:column>
    <display:column property="ct" group="3" sortable="true" headerClass="sortable" autolink="true"></display:column>
    <display:setProperty name="export.excel.filename" value="diplayTag.xls"></display:setProperty>
    <display:setProperty name="export.pdf.filename" value="diplayTag.pdf"></display:setProperty>
    <display:setProperty name="export.csv.filename" value="diplayTag.csv"></display:setProperty>
    <display:setProperty name="export.pdf" value="true"></display:setProperty>
</display:table>
I use request.setAttribute("list1", li); where I put all the data into list1 (an ArrayList) and pass it to displayTag.jsp.
DisplayTag gets all the data and displays it in table format. But what I need is to pass only 5 rows at a time, and on clicking the next page the action class should send the next 5 rows, and so on.
I referred to this link: Display tag pagination problem
But I cannot understand it because I use MySQL and I am also new to DisplayTag.
DB : MySql
Framework : struts2
After researching and hard work, I found the answer.
FILE : displayTag.jsp
<display:table name="list1" sort="external" size="20" pagesize="5" id="table1" export="true" requestURI="disTag" partialList="true">
    // code as above
</display:table>
Here requestURI="disTag" is the action name.
FILE : struts.xml
<action name="disTag" class="className">
    <result name="success">/displayTag.jsp</result>
    <result name="error">/error.jsp</result>
</action>
FILE : class file
page = Integer.parseInt(request.getParameter(new ParamEncoder("table1").encodeParameterName(TableTagParameters.PARAMETER_PAGE)));
if (page != 0) {
    start = (page - 1) * 5; // 5 is the number of rows per page
}
getData(start, 5); // getData is a method that loads the data into an ArrayList, based on the start index

How to skip CSV header line with camel-beanio

How do I skip the CSV header line when using camel-beanio from Apache?
My XML mapping file looks like this:
<beanio>
    <record name="myRecord" class="my.package.MyConditionClass">
        <field name="myField" position="1" />
        <field name="mylist" position="2" collection="list" type="string" />
        <segment name="conditions" class="my.package.MyConditionClass" nillable="true" collection="map" key="myKey">
            <field name="myKey" position="2" />
            <field name="myValue" position="3" />
        </segment>
    </record>
</beanio>
But to make my code run I must delete the first (header) line manually. How do I skip the header line automatically?
To read a CSV file and ignore the header line, you can declare the first field value of the header as a comment prefix for the CSV stream.
Example CSV:
toto;tata;titi
product1;1;18
product2;2;36
product3;5;102
The mapping file:
<beanio ...
    <stream name="dataStream" format="csv">
        <parser>
            <property name="delimiter" value=";" />
            <!-- ignore header line -->
            <property name="comments" value="toto" />
        </parser>
        <record name="record" minOccurs="0" maxOccurs="unbounded" class="com.stackoverflow.Product" />
    </stream>
</beanio>
Source : http://beanio.org/2.0/docs/reference/index.html#CSVStreamFormat
Another way would be to use camel-bindy instead of camel-beanio, with its skipFirstLine option (see https://camel.apache.org/components/latest/bindy-dataformat.html#_1_csvrecord).
Shortcut:
As soon as you define the BeanReader to read/process the records, use its skip method with count 1 to skip the header.
e.g.
// define the reader to process records
BeanReader beanReader = factory.createReader("STREAM", inputStreamReader);
// skip the first record (the header line)
beanReader.skip(1);
// process the rest of the stream
Object record = null;
do {
    try {
        record = beanReader.read();
    } catch (BeanReaderException e) {
        e.printStackTrace();
    }
} while (record != null);
Refer http://beanio.org/2.0/docs/reference/index.html#TheMappingFile.
Skip method signature:
public int skip(int count) throws BeanReaderException;

load parts of a JSON in many divs + Struts2

I need to load the content of a JSON object into several divs, but in parts. For example:
My JSON structure:
{"Example": {
"Hi": "hi",
"Bye": "bye"
}
}
Assuming the JSON string successfully loads in my JSP page, I am trying to load the contents of the JSON like this (for the attributes Hi and Bye):
<sj:div id="div1" dataType = "json">
<s:property value="Example.Hi"/>
</sj:div>
<sj:div id="div2" dataType = "json">
<s:property value="Example.Bye"/>
</sj:div>
Struts.xml:
<action name="name" class="class" method="method">
<result type="json">
<param name="root">
Example
</param>
</result>
</action>
But this doesn't work... What can I do?
I'm using: Struts2 Jquery Library
You should build the URLs like this:
<s:url var="remoteurl1" action="name"><s:param name="div1" value="true"/></s:url>
<sj:div id="div1" href="%{#remoteurl1}" dataType = "json"/>
<s:url var="remoteurl2" action="name"><s:param name="div2" value="true"/></s:url>
<sj:div id="div2" href="%{#remoteurl2}" dataType = "json"/>
Then in your action you check whether isDiv1() or isDiv2() is set and return the corresponding result.

classify raster through WMS

I am trying to use WMS to display a raster map in GeoTIFF format on the web, and I want to classify the raster file. How can I do this? I use MapServer for Windows. The following is my .map file.
MAP
  NAME PM10
  IMAGECOLOR 255 255 255
  SIZE 600 800
  IMAGETYPE PNG24 ## use AGG for anti-aliasing
  OUTPUTFORMAT
    NAME 'AGG'
    DRIVER AGG/PNG
    MIMETYPE "image/png"
    IMAGEMODE RGB
    EXTENSION "png"
  END # outputformat
  PROJECTION
    "init=epsg:3035" # latlon on ETRS 1989 LAEA
  END
  EXTENT 3487500 2297500 4402500 3202500 # meters, extent of region2
  WEB
    IMAGEPATH "c:/tmp/ms_tmp/"
    IMAGEURL "/ms_tmp/"
    METADATA
      "ows_enable_request" "*"
      "map" "C:/ms4w/apps/airpollution/config.map"
      "ows_schemas_location" "http://schemas.opengeospatial.net"
      "ows_title" "Sample WMS"
      "ows_enable_request" "*"
      "ows_onlineresource" "http://localhost:7070/cgi-bin/mapserv.exe? map=C:/ms4w/apps/airpollution/config.map&"
      "ows_srs" "EPSG:3035 " # latlon
      "wms_feature_info_mime_type" "text/plain"
      "wms_feature_info_mime_type" "text/html"
      "wms_server_version" "1.1.1"
      "wms_formatlist" "image/png,image/gif,image/jpeg, image/geotiff"
      "wms_format" "image/geotiff"
    END # metadata
  END # web
  LAYER
    NAME "pm10"
    DATA "pm10.tif"
    TYPE RASTER
    STATUS ON
    METADATA
      "ows_title" "pollution"
    END # metadata
    PROJECTION
      "init=epsg:3035"
    END # projection
    CLASSITEM "[pixel]"
    # class using simple string comparison, equivalent to ([pixel] = 0)
    CLASS
      EXPRESSION "0"
      STYLE
        COLOR 20 20 20
      END
    END
    # classes using an EXPRESSION using only [pixel]
    CLASS
      EXPRESSION ([pixel] >= 0 AND [pixel] < 7)
      STYLE
        COLOR 255 0 0
      END
    END
    CLASS
      EXPRESSION ([pixel] >= 7 AND [pixel] < 20)
      STYLE
        COLOR 0 255 0
      END
    END
    CLASS
      EXPRESSION ([pixel] >= 7 AND [pixel] < 50)
      STYLE
        COLOR 0 0 255
      END
    END
  END # layer pm10
END # map
What I get as a response is an image that looks as if only line 3 of the map file is used:
IMAGECOLOR 255 255 255
I don't know MapServer very well, but I know that you can style rasters from a WMS using SLD (Styled Layer Descriptor), which is simply an XML file that you can pass in a WMS request according to OGC standards.
In other words, you can specify the classification in an XML document. The following is an example of a simple SLD that styles everything in a raster black except pixels with value 0, which are made transparent.
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd">
  <NamedLayer>
    <Name>undefined</Name>
    <UserStyle>
      <Name>rasterr</Name>
      <Title>Rasterr</Title>
      <Abstract>A simple raster style</Abstract>
      <FeatureTypeStyle>
        <FeatureTypeName>Feature</FeatureTypeName>
        <Rule>
          <RasterSymbolizer>
            <Opacity>1.0</Opacity>
            <ColorMap>
              <ColorMapEntry color="#ffffff" quantity="0" opacity="0.0" />
              <ColorMapEntry color="#000000" quantity="1" />
              <ColorMapEntry color="#000000" quantity="2" />
              <ColorMapEntry color="#000000" quantity="3" />
              <ColorMapEntry color="#000000" quantity="4" />
              <ColorMapEntry color="#000000" quantity="5" />
              <ColorMapEntry color="#000000" quantity="6" />
              <ColorMapEntry color="#000000" quantity="7" />
              <ColorMapEntry color="#000000" quantity="8" />
              <ColorMapEntry color="#000000" quantity="9" />
              <ColorMapEntry color="#000000" quantity="10" />
            </ColorMap>
          </RasterSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
Pass the SLD like this:
http://demo.mapserver.org/cgi-bin/wms?SERVICE=wms&VERSION=1.1.1&REQUEST=GetMap&LAYERS=country_bounds&SLD=http://demo.mapserver.org/ogc-demos/map/sld/sld_line_simple.xml
Read more here:
http://mapserver.org/ogc/sld.html - this one is for MapServer; use the RasterSymbolizer and a ColorMap to do your classification. That page also describes how the ColorMap works.
http://www.opengeospatial.org/standards/sld