Quotes appearing in CSV after concatenating fields using Groovy

I'm using Groovy to concatenate two fields in a CSV file.
It's working OK except that the concatenated field appears wrapped in quotes.
Is there any way to resolve this?
ant.mkdir(dir:"target")
new File("target/UpsertCheckDeals.csv").withWriter {
new File("C:/Users/alon/Documents/CheckDealReadyForConcat.csv").splitEachLine(",") {Customer__c,Name__c,Deal__c,Check_Count__c ->
it.println "${Customer__c},${Deal__c},${Deal_Source__c},${Salesperson_Name__c},${Customer__c}-${Deal__c}"

CSV is a more complicated file format than it first appears. Fields can be optionally quoted, which is what appears to be your problem.
Most programming languages have a library that will parse CSV. For Groovy I'd recommend opencsv:
http://opencsv.sourceforge.net/
The following example extends the example I created for your previous question.
Example
├── build.xml
├── src
│   └── file1.csv
└── target
    └── file1.csv
src/file1.csv
"customer",deal
"200000042",23
"200000042",34
"200000042",35
"200000042",65
target/file1.csv
customer,deal,customer-deal
200000042,23,200000042-23
200000042,34,200000042-34
200000042,35,200000042-35
200000042,65,200000042-65
build.xml
<project name="demo" default="build" xmlns:ivy="antlib:org.apache.ivy.ant">

    <available classname="org.apache.ivy.Main" property="ivy.installed"/>

    <target name="build" depends="resolve">
        <taskdef name="groovy" classname="org.codehaus.groovy.ant.Groovy" classpathref="build.path"/>
        <groovy>
            import com.opencsv.CSVReader

            ant.mkdir(dir: "target")

            new File("target/file1.csv").withWriter { writer ->
                new File("src/file1.csv").withReader { reader ->
                    CSVReader csv = new CSVReader(reader)
                    csv.iterator().each { row ->
                        if (row.size() == 2) {
                            writer.println "${row[0]},${row[1]},${row[0]}-${row[1]}"
                        }
                    }
                }
            }
        </groovy>
    </target>

    <target name="resolve" depends="install-ivy">
        <ivy:cachepath pathid="build.path">
            <dependency org="org.codehaus.groovy" name="groovy-all" rev="2.4.7" conf="default"/>
            <dependency org="com.opencsv" name="opencsv" rev="3.8" conf="default"/>
        </ivy:cachepath>
    </target>

    <target name="install-ivy" unless="ivy.installed">
        <mkdir dir="${user.home}/.ant/lib"/>
        <get dest="${user.home}/.ant/lib/ivy.jar" src="http://search.maven.org/remotecontent?filepath=org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar"/>
        <fail message="Ivy has been installed. Run the build again"/>
    </target>

</project>
Notes:
Uses Apache Ivy to manage dependencies like Groovy and opencsv
I included a test "row.size() == 2" to prevent empty rows from throwing errors
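The example writes the rows back as raw strings, which is fine while no field contains a comma. If you'd rather let opencsv handle the output side too, CSVWriter controls quoting explicitly. A minimal sketch along the lines of the build above (CSVWriter.NO_QUOTE_CHARACTER disables quoting entirely; drop that argument to get standard quoting back):
import com.opencsv.CSVReader
import com.opencsv.CSVWriter

ant.mkdir(dir: "target")

new File("target/file1.csv").withWriter { w ->
    // NO_QUOTE_CHARACTER stops opencsv from wrapping every output field in quotes
    def csvOut = new CSVWriter(w, ',' as char, CSVWriter.NO_QUOTE_CHARACTER)
    new File("src/file1.csv").withReader { r ->
        new CSVReader(r).iterator().each { row ->
            if (row.size() == 2) {
                csvOut.writeNext([row[0], row[1], "${row[0]}-${row[1]}"] as String[])
            }
        }
    }
    csvOut.flush()
}
Bear in mind that unquoted output breaks if a field ever contains the separator, so only disable quoting when you know the data is clean.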

Related

Fortify not able to scan json files

I am doing SCA analysis using Fortify for an Axway API. The code is written in JSON and XML format. When scanning, Fortify is able to scan the .xml files only and not the .json files.
Can anyone tell me if there is a plugin or any other way to get the .json files scanned as well?
The github url of the sample code: https://github.com/amolmandloi037/axway-swagger-maven.git
Below is the command I am using in Jenkins Scripted Pipeline:
stage('Fortify scan') {
    pom = readMavenPom file: "pom.xml"
    fortify_name = pom.artifactId
    fortify_version = pom.version
    withCredentials([usernamePassword(credentialsId: 'Fortify', passwordVariable: 'password', usernameVariable: 'username')]) {
        bat """
        dir
        sourceandlibscanner -auto -bt none -scan -sonatype -iqrl https://{fortify_url} --nexusauth {username}:{password} -iqappid ${fortify_name} -stage build -r sonatype_result.json -f result.fpr
        """
    }
    fortifyUpload appName: fortify_name, appversion: fortify_version, resultsFile: 'result.fpr'
}

Read CSV file escaping more than one character in Mulesoft

I have a CSV file with a header and these values:
"20000160";"20000160";"177204930";"Zusammendruck ""Blumen"" nk 01.03.07";"2021";"01";"EUR";"599.000";"599,000";"599.00";"599,00 EUR";"EUR";"0.00";"0,00 EUR";"0.00";"0,00 EUR";"EUR"
"20000000";"20000000";"1013";"Einschreiben";"2021";"01";"EUR";"0.000";"0,000";"22.80";"22,80 EUR";"EUR";"0.00";"0,00 EUR";"0.00";"0,00 EUR";"EUR"
"20000000";"20000000";"1018";"Rückschein";"2021";"01";"EUR";"0.000";"0,000";"6.60";"6,60 EUR";"EUR";"0.00";"0,00 EUR";"0.00";"0,00 EUR";"EUR"
"8003325905";"8003325905";"233800118";"Prof.Services: Datenmanagement;Pauschale";"2021";"01";"EUR";"0.000";"0,000";"600.00";"600,00 EUR";"EUR";"0.00";"0,00 EUR";"108.00";"108,00 EUR";"EUR"
I configured the File Read connector to escape the doubled quotes in "Zusammendruck ""Blumen"" nk 01.03.07", and it is working:
<file:read doc:name="Read CSV, Set MIME Type" doc:id="bb378f83-d0ea-4951-8253-8253953ed9e7" path="${outputCSV}" outputMimeType='application/csv; streaming=true; quote="\""; separator=";"; escape="\""' outputEncoding="UTF-8" />
But I also have to escape ; to correctly parse "Prof.Services: Datenmanagement;Pauschale". I tried to configure the pattern as escape="\"|;" but I got a warning:
WARN 2021-12-20 16:59:34,604 [[MuleRuntime].uber.26: [test].upload.BLOCKING #27454a45] [processor: ; event: eb625e31-61b5-11ec-a30d-00090ffe0001] org.mule.weave.v2.model.service.DefaultLoggingService$: Option escape="|; expects a value of length 1 but got "|;. Only the first character is going to be used and the rest is going to be ignored.
How can I read and parse data correctly, considering the example data?
The default CSV parser in DataWeave has some limitations. For this reason I have developed a Mule module based on Apache Commons CSV. You can find it on GitHub: https://github.com/rbutenuth/csv-module. The dependency is available on Maven Central, so you don't need to compile it yourself.
It can parse your CSV with the following settings:
<csv:config name="Csv_Config" doc:name="Csv Config" recordSeparator="\n" withHeaderLine="false" escape="">
    <csv:columns>
        <csv:column columnName="column_01" type="TEXT" />
        <csv:column columnName="column_02" type="TEXT" />
        <!-- columns 3 to 16 omitted -->
        <csv:column columnName="column_17" type="TEXT" />
    </csv:columns>
</csv:config>
I have used column_01 to column_17 as column names, as I could not guess meaningful names from the content.
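Since the module sits on top of Apache Commons CSV, you can sanity-check the parsing behaviour in a few lines of plain Groovy. A minimal sketch using a two-column cut-down of the sample rows (the @Grab version is an assumption; any recent commons-csv behaves the same): with the delimiter set to ;, quoted fields may contain semicolons, and doubled quotes inside a quoted field are unescaped automatically.
@Grab('org.apache.commons:commons-csv:1.9.0')
import org.apache.commons.csv.CSVFormat

def data = '"20000160";"Zusammendruck ""Blumen"" nk 01.03.07"\n' +
           '"8003325905";"Prof.Services: Datenmanagement;Pauschale"'

// RFC 4180 rules: the ; delimiter is ignored inside quotes, "" becomes "
CSVFormat.DEFAULT.withDelimiter(';' as char).parse(new StringReader(data)).each { record ->
    println record.get(1)
}
// prints:
// Zusammendruck "Blumen" nk 01.03.07
// Prof.Services: Datenmanagement;Pauschale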
You can achieve it in your case with DataWeave using the following settings:
application/csv separator=";",quoteValues=true,quote="\"",escape="\""

Module or namespace not defined error FS0039 [duplicate]

RaceConditionTest.fs
namespace FConsole

module RaceConditionTest =
    let test x =
        ...
Program.fs
open System
open FConsole

[<EntryPoint>]
let main argv =
    RaceConditionTest.test 1000
    0 // return an integer exit code
Then I run my console app (linux)
$ dotnet run
error FS0039: The namespace or module 'FConsole' is not defined.
There is only one test method in RaceConditionTest.fs
Is the order of files the problem? If so, how do I indicate the order of *.fs files?
As @boran suggested in the comments, the fix is in FConsoleProject.fsproj.
I just added my file before Program.fs:
<ItemGroup>
    <!-- F# compiles files in the order listed, so dependencies must come first -->
    <Compile Include="RaceConditionTest.fs" />
    <Compile Include="Program.fs" />
</ItemGroup>

MySQL: Invalid column count in CSV input while importing CSV file

I have a table in the database called locations which contains 2 columns (id, which is auto-incremented, and location), and I have a CSV file that contains a single column.
When I try to import that file into the locations table, I get this error: invalid column count in CSV input on line 1.
I also tried importing the CSV using LOAD DATA, but I get this: MySQL returned an empty result set (i.e. zero rows).
Maybe you should use:
$array = array_map('str_getcsv', file('file.csv'));
That way you have more options to check the values before inserting them.
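The "invalid column count" error usually means the number of columns in the file doesn't match the number of columns in the table: here the table has two columns (id, location) but the file has only one. Telling MySQL which column to fill fixes the mismatch. A minimal Groovy sketch of the LOAD DATA variant (the connection URL, credentials, and file name are assumptions for illustration):
@GrabConfig(systemClassLoader = true)
@Grab('mysql:mysql-connector-java:8.0.28')
import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:mysql://localhost:3306/test1?allowLoadLocalInfile=true',
        'root', '', 'com.mysql.cj.jdbc.Driver')

// Name the target column so MySQL does not expect a value for the
// auto-incremented id column
sql.execute('''
    LOAD DATA LOCAL INFILE 'file.csv'
    INTO TABLE locations
    FIELDS TERMINATED BY ','
    (location)
''')
sql.close()
The same idea applies when importing through phpMyAdmin: put location in the "Column names" field of the CSV import options so the id column is skipped.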
If Java is supported on your machine, use the solution below:
https://dbisweb.wordpress.com/
The simple configuration required is:
<?xml version="1.0" encoding="UTF-8"?>
<config>
    <connections>
        <jdbc name="mysql">
            <driver>com.mysql.jdbc.Driver</driver>
            <url>jdbc:mysql://localhost:3306/test1</url>
            <user>root</user>
            <password></password>
        </jdbc>
    </connections>
    <component-flow>
        <execute name="file2table" enabled="true">
            <migrate>
                <source>
                    <file delimiter="," header="false" path="D:/test_source.csv"/>
                </source>
                <destination>
                    <table connection="mysql">locations</table>
                </destination>
                <mapping>
                    <column source="1" destination="location" />
                </mapping>
            </migrate>
        </execute>
    </component-flow>
</config>
If you are interested, the same can be achieved with Java code:
Source source = new File("D:/test_source.csv", ',', false);
Destination destination = new Table("locations", new JDBCConnection("com.mysql.jdbc.Driver", "jdbc:mysql://localhost:3306/test1", "root", ""));
List<Mapping> mapping = new ArrayList<>();
mapping.add(new Mapping("1", "location", null));
Component component = new MigrateExecutor(source, destination, mapping);
component.execute();

Supplied connections must be of type AstDbConnectionNode

I have been working on a simple BIML solution to start learning how to use it. I keep getting an error message:
Supplied connections must be of type AstDbConnectionNode for this method.
at Varigence.Biml.Extensions.ExternalDataAccess.GetDatabaseSchema in :line 0
I've been searching and trying different solutions and have not found an answer yet. So, I'm turning to everyone here. I need another set of eyes on this so I can figure out what I'm doing wrong.
My first BIML file has my connection setup to World Wide Importers on my local box.
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <#@ template language="C#" tier="0" #>
    <Connections>
        <OleDbConnection
            Name="src"
            ConnectionString="Data Source=localhost\SQL16;Initial Catalog=WorldWideImporters;Provider=SQLNCLI11.1;Integrated Security=SSPI;"
            CreateInProject="true">
        </OleDbConnection>
    </Connections>
    <Databases>
        <Database Name="src" ConnectionName="src" />
    </Databases>
</Biml>
The second BIML file is what throws the error:
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <#@ template language="C#" tier="1" #>
    <#@ import namespace="Varigence.Biml.CoreLowerer.SchemaManagement" #>
    <# var srcDB = RootNode.OleDbConnections["src"]; #>
    <# var WWIdb = srcDB.GetDatabaseSchema(ImportOptions.ExcludeViews); #>
    <Packages>
        <# foreach (var table in WWIdb.TableNodes) { #>
        <Package Name="<#=table.Schema#>_<#=table.Name#>" ConstraintMode="Linear">
            <Tasks>
                <Dataflow Name="DF Copy <#=table.Name#>">
                </Dataflow>
            </Tasks>
        </Package>
        <# } #>
    </Packages>
</Biml>
That misleading error surfaces from the call to GetDatabaseSchema. I say it's misleading because the root problem is that srcDB is null. See for yourself by using this code in your second Biml file:
<#@ import namespace="System.Windows.Forms" #>
<#@ assembly name="C:\Windows\Microsoft.NET\Framework\v4.0.30319\System.Windows.Forms.dll" #>
<# var srcDB = RootNode.OleDbConnections["ConnectionDoesNotExist"]; #>
<#
if (srcDB == null)
{
    MessageBox.Show("It's null");
}
else
{
    MessageBox.Show(string.Format("It's not null - {0}", srcDB.Name));
}
#>
Root problem
You are accessing an object in the connections collection that doesn't exist - probably because, while you have your tiering correct, you need to "include" all the files when you build.
How do you resolve this?
If you're using BimlExpress or BIDS Helper then you simply need to select both file1.biml and file2.biml in the solution menu and right click to generate package.
If you are using Mist/BimlStudio, then I would just right click on file1.biml and change that to Convert to Live BimlScript.