Remove last blank row in CSV using Logic App

I have a CSV file stored in SFTP where the last row is a blank, so the data looks like this in text:
a,b,c
d,e,f
,,
How can I use a Logic App to remove that final row and then save the result to Blob storage? I have a workflow started, but I think it needs some extra steps before the Create blob action.

Using the same sample, here is my Logic App.
Compose_2 retrieves the index of the last newline. Below is the expression that I used:
lastIndexOf(variables('Sample'),'\n')
Then in Compose_3 I take the substring up to that index:
substring(variables('Sample'),0,outputs('Compose_2'))
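The same lastIndexOf/substring logic can be sketched outside Logic Apps. Here is a minimal Python equivalent (the sample string is taken from the question; this is an illustration, not Logic App code):

```python
sample = "a,b,c\nd,e,f\n,,"

# Equivalent of lastIndexOf(variables('Sample'), '\n'):
# index of the last newline in the string.
last_index = sample.rfind("\n")

# Equivalent of substring(variables('Sample'), 0, outputs('Compose_2')):
# keep everything before that newline, dropping the empty last row.
trimmed = sample[:last_index]

print(trimmed)
```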
The result is the sample CSV without the trailing empty row.
NOTE:-
In code view, make sure you remove the extra '\' that gets attached to '\n' in Compose_2, so that the expression contains a literal newline.
The final Compose_2 expression looks like:
lastIndexOf(variables('Sample'),'
')
Updated Answer
If the received data is coming from a CSV, you can use the take() expression to retrieve the rows you want.
Below is the expression in the Compose action:
take(outputs('Split_To_Get_Rows'),sub(length(outputs('Split_To_Get_Rows')),1))
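The take() approach maps directly onto list slicing. A minimal Python sketch of splitting into rows and dropping the last one (again using the question's sample as an assumed input):

```python
sample = "a,b,c\nd,e,f\n,,"

rows = sample.split("\n")            # Split_To_Get_Rows
kept = rows[:len(rows) - 1]          # take(rows, sub(length(rows), 1))
result = "\n".join(kept)

print(result)
```

Unlike the substring approach, this works row-by-row, so it is easier to extend if you later need to drop more than one trailing row.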

Related

importing CSV into FileMaker Pro 18 with CSV header line

Despite searching both Google and the documentation I can't figure this out:
I have a CSV file that has a header line, like this:
ID,Name,Street,City
1,John Doe,Main Street,Some City
2,Jane Done,Sideroad,Other City
Importing into FileMaker works well, except for two things:
It imports the header line as a data set, so I get one row that has an ID of "ID", a Name of "Name", etc.
It assigns the items to fields by order, including the default primary key, created date, etc. I have to manually re-assign them, which works but seems like work that could be avoided.
I would like it to understand that the header line is not a data set and that it could use the field names from the header line and match them to the field names in my FileMaker table.
How do I do that? Where is it explained?
When you import records, you have the option to select a record in the source file that contains field names (usually the first row). See #4 here.
Once you have done that, you will get the option to map the fields automatically by matching names.
If you're doing this periodically, it's best to script the action. A script will remember your choices, so you only need to do this once.

Ignore last/corrupted record from flat file source in SSIS

I have the following CSV file:
col1, col2, col3
"r1", "r2", "r3"
"r11", "r22", "r33"
"totals","","",
followed by two blank lines. The import fails because there is an extra comma at the end of the last data row, and it will most probably also fail because of the blank lines at the end.
Can I skip the last row somehow, or even better, stop the import when I reach that row? It always has the string "totals" in col1.
UPDATE:
As far as I understand from the answers, it is not possible to do this with the Flat File source alone. I ended up doing it with a Script Component as the source.
You can do it by reading each row as a single string:
Conditionally split out rows that are NULL or where left(col0) == "totals".
In the Script Component, use a split function to break the string into columns.
Finally, trim the surrounding double quotes with trim("\"").
I know of nothing built into SSIS that lets you ignore the last line of a CSV.
One way to handle this is to precede your dataflow with a script task that uses the FileSystemObject to edit the CSV and remove the last line.
You will need to create a custom script where you read all lines but the last within SSIS.
This is old but it came up for me when searching this topic. My solution was to redirect rows on the destination. The last row is redirected instead of failing and the job completes. Of course you will potentially redirect rows you don't want to. It all depends on how much you can trust the data.
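If preprocessing the file before the Flat File source is acceptable (as the script-task answer suggests), the skip/stop logic can be sketched in Python; the "totals" marker and the sample rows are assumptions taken from the question:

```python
def clean_rows(lines):
    """Keep data rows; stop at the 'totals' row and skip blank lines."""
    kept = []
    for line in lines:
        if not line.strip():
            continue                            # ignore trailing blank lines
        if line.lstrip().lower().startswith('"totals"'):
            break                               # stop the import at the totals row
        kept.append(line.rstrip(","))           # drop any trailing extra comma
    return kept

raw = ['col1, col2, col3',
       '"r1", "r2", "r3"',
       '"r11", "r22", "r33"',
       '"totals","","",',
       '',
       '']

print(clean_rows(raw))
```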

Read second column of csv

I have this CSV data file that looks like
a,b
1,2
3,4
data = readdlm("my/local/path", ',')
however, when I access data[1], I only get "a"; I thought it was supposed to be ["a", "b"]. Doing things like data[1:2] gets me the first column only.
Any idea how I can access the second column?
From the docs for readdlm:
Read a matrix from the source where each line gives one row...
So use data[row, col] syntax to get each element; for example, data[1, 2] is the first row of the second column, and data[:, 2] is the entire second column. Linear indexing like data[1] walks the matrix in column-major order, which is why data[1:2] gave you the first column.

How to change Column Delimiter in MS VSTS for web performance test

I am using Microsoft VSTS to performance-test a web application.
I am adding a data pool (.csv file) to parameterize multiple values, but the .csv file shows the data as column-delimited, like:
VariableA,VariableB,Variable3
Test1,Test2,Test3
Test4,Test5,Test6
But I want these multiple values in a single column, because with the column-delimited type the .csv file automatically splits the values into different columns.
In HP LoadRunner there are three delimiter options (Comma, Tab, Space). I tried to find a similar option in the VSTS data-pool settings but could not find one.
I am trying to do this:
VariableA
Test1,Test2,Test3
Test5,Test6,Test7
Kindly help me out.
If you want to use Test1,Test2,Test3 in the first iteration and Test5,Test6,Test7 in the second iteration, then try the following in your csv file:
VariableA
"Test1,Test2,Test3"
"Test5,Test6,Test7"
This should consider Test1,Test2,Test3 as a single variable.
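Quoting works because CSV parsers treat a quoted field as one value, commas and all. A quick Python check of this behavior (not VSTS-specific, just the general CSV rule):

```python
import csv
import io

data = 'VariableA\n"Test1,Test2,Test3"\n"Test5,Test6,Test7"\n'
rows = list(csv.reader(io.StringIO(data)))

# Each quoted value parses as a single field, not three.
print(rows[1])  # ['Test1,Test2,Test3']
```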

ssis flat file source trim blank spaces

Is there any way I can trim the blank spaces when I read data from a CSV file?
Thanks.
The Send Mail task requires ';' between email IDs. If you are building one large string of recipients, consider using a Script Component to remove spaces and append a ';' between each one.
Use a Derived Column component after the CSV source. In the editor you will see the available columns in the top-left; drag the email column into the expression area and replace any spaces with empty strings. You should also set the derived column to replace the email column (or add it as a new column, if you need both).
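Outside SSIS, the same per-field trim can be sketched in Python while reading the CSV; the sample addresses here are hypothetical:

```python
import csv
import io

data = " a@example.com , b@example.com \n c@example.com , d@example.com \n"

# Strip leading/trailing blanks from every field as rows are read,
# mirroring what the Derived Column does inside the data flow.
trimmed = [[field.strip() for field in row]
           for row in csv.reader(io.StringIO(data))]

print(trimmed)
```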