How to update the values of specific fields in a CSV using NiFi?

I have a CSV file that contains id, name, and salary fields. The data in my CSV file looks like this:
id,name,salary
1,Jhon,2345
2,Alex,3456
I want to update the CSV so that each id is replaced with a new id (id*4):
id,name,salary
4,Jhon,2345
8,Alex,3456
The file at the destination should also be in CSV format. Can anyone tell me the flow (which processors do I need)? I'm very new to NiFi. A big thanks in advance.

Use an UpdateRecord processor with the following settings:
Record Reader: CSVReader
Record Writer: CSVRecordSetWriter
Replacement Value Strategy: Literal Value
/id: ${field.value:multiply(4)}
This gives the desired result: CSV in, CSV out.

Related

Storing PySpark data in CSV creates problems

I am trying to store my PySpark output as CSV, but when I save it, the output does not look the same as what PySpark displays. When I convert it to CSV, the Concat tasks column does not show up properly, due to the size of the data. Given my requirement, it's necessary for me to store the data in CSV format. Is there a way out of this? (P.S. I also see columns showing nonsensical values, even though the PySpark output shows the correct values.)

Solr: how to write an array of data in a CSV file

What's the syntax for writing an array for Solr in a CSV file? I need to update a multivalued field, but when I upload the file, all the data ends up in the array as a single element, like this:
multiField:["data1,data2,data3"]
instead of this:
multiField:["data1", "data2", "data3"]
How can I write this in the CSV file by default?
You can use the split parameter to split a single field into multiple values:
&f.multiField.split=true&f.multiField.separator=,
That should do what you want.
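For instance, here is a minimal Java sketch of uploading the CSV through Solr's CSV update handler with those per-field parameters; the core name (mycore) and file name (data.csv) are assumptions, so adjust them to your setup. In the file itself, the multivalued column should be quoted, e.g. "data1,data2,data3", so the list survives the outer comma parsing and can then be split by Solr.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class SolrCsvUpload {
    public static void main(String[] args) throws Exception {
        // Assumed local Solr core named "mycore"; adjust URL and file to your setup.
        String url = "http://localhost:8983/solr/mycore/update"
                + "?commit=true"
                + "&f.multiField.split=true"      // treat this column as multivalued
                + "&f.multiField.separator=%2C";  // split its values on commas

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "text/csv")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("data.csv")))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}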

How to write a JSON-extracted value to a CSV file in JMeter for a specific variable

I have a csv file that looks like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
When I run the HTTP POST request to create a new customer, I get a JSON response. I am extracting the cust_id and cust_name using the JSON Extractor. How can I enter these new values into the CSV under the correct variables? For example, after creating the customer, the CSV would look like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
1234,My Customer Name
Or once I create a user, the file might look like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
1234,My Customer Name,,,,9876,myusername
In my searching around the net, I have found ways to append these extracted variables as a new line, but in my case I need to replace the values in the correct locations so they are associated with the correct variables I have set up in the CSV file.
I believe what you're looking to do can be done via a BeanShell PostProcessor and is answered here.
Thank you for the reply. I ended up using User Defined Variables for some things and BeanShell PreProcessors for other bits vs. using the CSV.
Well, I've never tried this, but what you can do is create all these variables and initialize them to null/0.
Then update them during your execution. At the end, you can concatenate them with any delimiter (say ';' or tab) and push them into the CSV as a single string.
Once the data is in the CSV, you can easily split it in MS Excel.
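As a rough illustration of that approach, here is a BeanShell (Java-syntax) PostProcessor sketch; the variable names match the ones above, but the output file customers.csv and the column layout are assumptions:

import java.io.FileWriter;
import java.io.PrintWriter;

// Values set earlier by the JSON Extractor; any not set yet come back null.
String custId   = vars.get("varCust_id");
String custName = vars.get("varCust_name");
String userId   = vars.get("varUserId");
String username = vars.get("varUsername");

// Build one CSV line in header order, leaving unknown columns empty so every
// value stays aligned with its variable's position.
String line = custId + "," + custName + ",,,,"
        + (userId == null ? "" : userId) + ","
        + (username == null ? "" : username);

// Append the assembled row to the file (hypothetical name).
PrintWriter out = new PrintWriter(new FileWriter("customers.csv", true));
out.println(line);
out.close();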

Distinct column values in a CSV file

I have a CSV file with these columns: SNo, StateName, CityName, AreaName, PinCode, NonServ.Area, MessangerService, Remark.
The CityName column has repeated values; for example, many records share the same value (Delhi).
Is there any approach in Java to read that CSV file and get the distinct values from that column?
The only way I can think of is to go through it row by row and store each value in a collection. Using a set such as HashSet or TreeSet will ensure unique values.
The other option, which isn't what you were looking for but might work depending on your project, is to use a database instead of a CSV file. It then becomes very easy to select distinct values in a column.
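For example, a minimal sketch of the set approach, assuming a plain comma-separated file named cities.csv with a CityName header (a naive split like this won't handle quoted fields containing commas):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DistinctCities {
    public static void main(String[] args) throws Exception {
        Set<String> cities = new LinkedHashSet<>(); // keeps first-seen order, drops duplicates
        try (BufferedReader reader = new BufferedReader(new FileReader("cities.csv"))) {
            String header = reader.readLine();      // e.g. SNo,StateName,CityName,...
            int cityIdx = Arrays.asList(header.split(",")).indexOf("CityName");
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",", -1); // -1 keeps trailing empty fields
                if (cityIdx < fields.length) {
                    cities.add(fields[cityIdx].trim());
                }
            }
        }
        System.out.println(cities); // e.g. [Delhi, Mumbai, ...]
    }
}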
Alternatively, in Python with pandas, where df is the DataFrame into which you've read the CSV data:
df['CityName'].unique()

Parsing CSV file in Hadoop

How can MapReduce parse a CSV file with 80 columns, where each row (one row in Excel format) spans two to three lines in the CSV? TextInputFormat doesn't work in this case. Does KeyValueTextInputFormat work here?
You can write your own InputFormat and RecordReader that will read multiple lines and send them as a single record to your Mapper.
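Here is a minimal sketch of that idea, assuming rows only spill across lines because a quoted field contains embedded newlines (the usual reason an Excel export produces multi-line CSV records); the class names are illustrative. It wraps the stock LineRecordReader and keeps appending physical lines until the double quotes balance, then hands the Mapper the whole logical record:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class MultiLineCsvInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
                                                               TaskAttemptContext context) {
        return new MultiLineCsvRecordReader();
    }

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // records cross line boundaries, so don't split a file mid-record
    }

    public static class MultiLineCsvRecordReader extends RecordReader<LongWritable, Text> {
        private final LineRecordReader lineReader = new LineRecordReader();
        private final LongWritable key = new LongWritable();
        private final Text value = new Text();

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context)
                throws IOException, InterruptedException {
            lineReader.initialize(split, context);
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            StringBuilder record = new StringBuilder();
            boolean gotLine = false;
            // Keep consuming physical lines until the quote count is even,
            // i.e. we are no longer inside a quoted field with embedded newlines.
            while (lineReader.nextKeyValue()) {
                if (!gotLine) {
                    key.set(lineReader.getCurrentKey().get());
                    gotLine = true;
                } else {
                    record.append('\n');
                }
                record.append(lineReader.getCurrentValue().toString());
                if (countQuotes(record) % 2 == 0) {
                    break; // the logical record is complete
                }
            }
            if (!gotLine) {
                return false; // no more input
            }
            value.set(record.toString());
            return true;
        }

        private static int countQuotes(CharSequence s) {
            int n = 0;
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) == '"') n++;
            }
            return n;
        }

        @Override public LongWritable getCurrentKey() { return key; }
        @Override public Text getCurrentValue() { return value; }
        @Override public float getProgress() throws IOException { return lineReader.getProgress(); }
        @Override public void close() throws IOException { lineReader.close(); }
    }
}

In the driver you would then set job.setInputFormatClass(MultiLineCsvInputFormat.class). Disabling splits keeps the logic simple at the cost of per-file parallelism; a production version would need to find a safe record boundary at the start of each split.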