Dynamically merge two CSV files using DataWeave in Mule

I receive CSV files of different lengths from different sources. The columns within the CSVs differ; the only exception is that each CSV file always has an id column, which can be used to tie together the records across the different CSV files. Two such CSV files need to be processed at a time. The process is to take the id column from the first file, match the rows within the second CSV file, and create a third file that contains the contents of the first and second files. The id column can be repeated in the first file. An example is given below. Please note that the first file may have 18 to 19 different combinations of data columns, so I cannot hardcode the transformation within DataWeave, and there is a chance that a new file will be added at any time as well. A dynamic approach is what I want to accomplish, so that once written, the logic works even if a new file is added. These files also get pretty big.
The sample files are given below.
CSV1.csv
--------
id,col1,col2,col3,col4
1,dat1,data2,data3,data4
2,data5,data6,data6,data6
2,data9,data10,data11,data12
2,data13,data14,data15,data16
3,data17,data18,data19,data20
3,data21,data22,data23,data24
CSV2.csv
--------
id,objectId,resid,remarks
1,obj1,res1,rem1
2,obj2,res2,rem2
3,obj3,res3,rem3
Expected file output - CSV3.csv
---------------------
id,col1,col2,col3,col4,objectId,resid,remarks
1,dat1,data2,data3,data4,obj1,res1,rem1
2,data5,data6,data6,data6,obj2,res2,rem2
2,data9,data10,data11,data12,obj2,res2,rem2
2,data13,data14,data15,data16,obj2,res2,rem2
3,data17,data18,data19,data20,obj3,res3,rem3
3,data21,data22,data23,data24,obj3,res3,rem3
I was thinking of using pluck to get the column values from the first file. The idea was to get the columns in the transformation without hardcoding them, but I am getting some errors. After this I have the task of searching for the id and getting the values from the second file.
{(
using(keys = payload pluck $$)
(
payload map
( (value, index) ->
{
(keys[index]) : value
}
)
)
)}
I am getting the following error when using pluck
Type mismatch for 'pluck' operator
found :array, :function
required :object, :function
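The mismatch happens because pluck expects a single object, while payload here is an array of row objects. As a minimal sketch in DataWeave 2.x syntax (assuming payload is the parsed first CSV), applying pluck to one row lists its column names without hardcoding them:
%dw 2.0
output application/json
---
// assumes payload is the parsed first CSV (an array of row objects);
// payload[0] is the first row, so pluck can iterate over its key/value pairs
payload[0] pluck ((value, key) -> key as String)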
I am thinking of using groupBy on id on the second file to facilitate better searching, but I need suggestions on how to append the contents in one transformation to form the third file.

Since you want to combine both CSVs without renaming the column names, you can try something like the following:
var file2Grouped=file2 groupBy ((item) -> item.id)
---
file1 map ((item) -> item ++ ((file2Grouped[item.id])[0] default {}) - 'id')
Output:
id,col1,col2,col3,col4,objectId,resid,remarks
1,dat1,data2,data3,data4,obj1,res1,rem1
2,data5,data6,data6,data6,obj2,res2,rem2
2,data9,data10,data11,data12,obj2,res2,rem2
2,data13,data14,data15,data16,obj2,res2,rem2
3,data17,data18,data19,data20,obj3,res3,rem3
3,data21,data22,data23,data24,obj3,res3,rem3

The working expression is given below; removing the id should happen before applying the default:
var file2Grouped=file2 groupBy ((item) -> item.id)
---
file1 map ((item) -> item ++ ((file2Grouped[item.id])[0] - 'id' default {}))
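For completeness, here is a sketch of the same logic as a full DataWeave 2.0 script. It assumes the first CSV arrives as the payload and the second has been parsed into a variable named file2; adjust those references to however the files actually arrive in your flow.
%dw 2.0
output application/csv
// vars.file2 is assumed to hold the parsed rows of CSV2; group them by id so each
// lookup is a direct key access rather than a scan over the whole file.
var file2Grouped = vars.file2 groupBy ((item) -> item.id)
---
// For every row of the first file (the payload), append the matching row from the
// second file with its id removed, falling back to {} when there is no match.
payload map ((item) -> item ++ ((file2Grouped[item.id])[0] - 'id' default {}))
With application/csv output, the header row is derived from the keys of the merged objects, so the result should match the expected CSV3.csv layout.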

Related

Importing multiple 1D JSON arrays in Excel

I'm trying to import a JSON file containing multiple unrelated 1D arrays with a variable number of elements into Excel.
The JSON I wrote is:
{
"table":[1,2,3],
"table2":["A","B","C"],
"table3":["a","b","c"]
}
When I import the file using Power Query and expand the columns, it multiplies the previous entries each time I expand a new column.
Is there a way to solve this, so that the elements of each array are shown below each other and each array becomes a new column?
One method would be to transform each Record into a List and then create a table using the Table.FromColumns method.
This needs to be done from the Advanced Editor.
Read the code comments and explore the Applied Steps to understand it better; the Help topics for the various functions will also be useful.
let
    //Change following line to reflect your actual data source
    Source = Json.Document(File.Contents("C:\Users\ron\Desktop\New Text Document.txt")),

    //Get Field Names (= table names)
    fieldNames = Record.FieldNames(Source),

    //Create a list of lists whereby each sublist is derived from the original record
    jsonLists = List.Accumulate(fieldNames, {}, (state, current) => state & {Record.Field(Source, current)}),

    //Convert the lists into columns of a new table
    myTable = Table.FromColumns(
        jsonLists,
        fieldNames
    )
in
    myTable
Results (screenshot of the resulting table omitted)

Create a node for each column only once while importing csv into Neo4j

I have a csv file that looks the following way (screenshot of the file omitted):
I want to create a database from it in Neo4j. Rows are nodes with the label Gene; columns are also nodes with the label Cell. I need to write a CREATE query that creates all my Gene and Cell nodes and a relationship for each combination of gene and cell. Currently I am stuck with the following code:
LOAD CSV WITH HEADERS FROM 'file:///merged_full.csv' AS line
CREATE (:Gene {id: line.gene_ids, name: line.wikigene_name})
I need to somehow iterate over all columns - starting from index 3 - after creating gene nodes, but I do not know how to do that.
Here are 3 queries that, performed in order, should do what you want.
This query creates a temporary Headers node with a names property that contains the collection of headers from the CSV file. It uses LIMIT 1 to only process the first row of the file. It also creates all the Cell nodes, each with its own name property.
LOAD CSV FROM 'file:///merged_full.csv' AS line
MERGE (h:Headers)
SET h.names = line
WITH line
LIMIT 1
UNWIND line[3..] AS name
MERGE (c:Cell {name: name})
This query uses the APOC function apoc.map.fromNodes to generate a map named cells, which maps each cell name to its cell node. It also gets the Headers node. It then loads the non-header data from the CSV file (using SKIP 1 to skip over the header row), and processes each row as follows. It uses MERGE to get/create a Gene node, g, with the desired id and name. It uses the REDUCE function to generate a collection of the Cell nodes that have a "1" column value in the current row, and the FOREACH clause then creates a (g)-[:HAS]->(x) relationship (if necessary) for every cell, x, in that collection.
WITH apoc.map.fromNodes('Cell', 'name') AS cells
MATCH (h:Headers)
LOAD CSV FROM 'file:///merged_full.csv' AS line
WITH h, cells, line
SKIP 1
MERGE (g:Gene {id: line[1], name: line[2]})
FOREACH(
x IN REDUCE(s = [], i IN RANGE(3, SIZE(line)-1) |
CASE line[i] WHEN "1" THEN s + cells[h.names[i]] ELSE s END) |
MERGE (g)-[:HAS]->(x))
This query just deletes the temporary Headers node (if you wish):
MATCH (h:Headers)
DELETE h;
If the columns correspond with cell nodes, then you should know all the cell nodes you need just by looking at the CSV header.
I'd recommend writing a small query just to create each of the cell nodes you need, then create an index or unique constraint on :Cell(id) (or name, or whatever the property is that is meant to identify a :Cell).
At that point the problem becomes getting and processing each relevant column (I assume only the ones with 1 as the value). APOC Procedures may help here.
apoc.map.sortedProperties() can be used to take your line map and give you a list of key/value list pairs. You can filter those down to the pairs where the key begins with 'V' and the value is 1, then use what remains to match the relevant :Cell node and create the relationship.

NiFi : Regular Expression in ExtractText gets CSV header instead of data

I'm working on a flow where I get CSV files. I want to put the records into different directories based on the first field in the CSV record.
For example, the CSV file would look like this:
country,firstname,lastname,ssn,mob_num
US,xxxx,xxxxx,xxxxx,xxxx
UK,xxxx,xxxxx,xxxxx,xxxx
US,xxxx,xxxxx,xxxxx,xxxx
JP,xxxx,xxxxx,xxxxx,xxxx
JP,xxxx,xxxxx,xxxxx,xxxx
I want to get the value of the first field, i.e. country, and put those records into a particular directory: US records go to the US directory, UK records go to the UK directory, and so on.
The flow that I have right now is:
GetFile ----> SplitText(line split count = 1 & header line count = 1) ----> ExtractText (line = (.+)) ----> PutFile(Directory = \tmp\data\${line:getDelimitedField(1)}). I need the header line to be replicated across all the split files for a different purpose, so I have to keep it.
The thing is, the incoming CSV file gets split into multiple flow files with the header successfully. However, the regex that I have given in the ExtractText processor evaluates against the split flow files' CSV header instead of the record. So instead of getting US or UK in the "line" attribute, I always get "country", and all the files go to \tmp\data\country. How can I resolve this?
I believe getDelimitedField will only work on a single line and is likely not moving past the newline in your split file.
I would advocate for a slightly different approach in which you could alter your ExtractText to find the country code through a regular expression and avoid the need to include the contents of the file as an attribute.
Using a regex of ^.*\n+(\w+) will skip past the header line and capture the first run of word characters on the next line (everything up to the comma), placing it in the attribute you specify for capture group 1 (e.g. country.1).
I have created a template that should get the value you are looking for available at https://github.com/apiri/nifi-review-collateral/blob/master/stackoverflow/42022249/Extract_Country_From_Splits.xml

Import CSV column from different file into new file

I have 2 CSV files almost identical with the following differences:
The first has a column, "date".
The second doesn't have "date" and also has 50 fewer rows than the first ("email").
They are lists of subscribers with the date created. The second, however, is the updated list with the subscribers who wanted to be removed, but this one no longer has the date created.
Is there any way to import column "date" from 1st CSV into the 2nd CSV by making a reference to the "email" column so I can get the correct date of that subscriber?
Sorry, there doesn't seem to be a ready-made command line tool available (building one is probably an evening's worth of effort).
You could look at different ways; one more involved way is to load the files into database tables, do the merge (using a select and join on the two tables), and export the result back as CSV.
The simplest I could think of was to use R (given that you have header names in your CSV):
csv1_data <- read.csv('/path/to/csv1.csv')
csv2_data <- read.csv('/path/to/csv2.csv')
merged_csv <- merge(csv1_data, csv2_data, by = "email")
write.table(merged_csv, file="/path/to/merged_csv.csv", sep=",", row.names=FALSE)
The first two lines load the data into R, the third line merges them on the shared email column, and the final line exports the result as a CSV file with the headers.
Hope this helps!

how to load extracted file name into sql server table in SSIS

I have CSV files in a folder which contain eid, ename, and country fields, and my CSV file names are test1_20120116_034512, test1_20120116_035512, test1_20120116_035812, etc. My requirement is to take the latest file based on timestamp and modified date, which I have done. Now I want to import the extracted file name into the destination table.
my destination tables contains fields like,
filepath, filename, eid, ename, country
I have posted about this before on the same site and got an answer for extracting the filename; now I want to load the extracted FileName into the destination table:
Import most recent csv file to sql server in ssis
My destination table should have output like:
C:/source test1_20120116_035812 1234 tester USA
In your Data Flow task, add a Derived Column transformation. The value of CurrentFile will be the fully qualified path to the file. As you only want the file name, I would use a replace function on that with the base folder and then strip the remaining slash. This does not strip the file extension, but you can add yet another call to REPLACE and substitute an empty string for it.
Derived Column Name: filename
Derived Column:
Expression: REPLACE(REPLACE(@[User::CurrentFile], @[User::RootFolder], ""), "\\", "")
The above expects it to look like
CurrentFile = "C:\source\test1_20120116_035812.csv"
RootFolder = "C:\source"
Edit
I believe you've done something in your approach that I did not do. You should see a warning about possible truncation but given the values discussed in this and the preceding question, I don't believe the 4k limit on expressions will be of concern.
(screenshot: displaying the derived column)
(screenshot: demonstrating the derived column does work)
I will give you a +1 for providing an approach I wasn't aware of, but you'll still need to add a derived column to match your provided format (base path name).
The full path is provided from the custom properties. Use the above REPLACE section to remove the path info, except use the column [FileName] instead of @[User::CurrentFile].
I tried to get the filename through the procedure which Billinkc has given, but it's throwing me an error stating that the filename column failed because of a truncation error.
Anyhow, I tried a different approach to load the file name into the table.
Steps I have used:
1. Right-click on the Flat File Source and click on Show Advanced Editor for Flat File.
2. Select the Component Properties tab.
3. Inside the Custom Properties section there is a property FileNameColumnName.
I assigned FileName to that column property, like
FileNameColumnName ----> FileName
That's it, I am able to get the filename into my destination table.