I want to upload my .csv file into my Solr core using the SimplePostTool, but there is a problem. Excel creates the .csv file with semicolons (;) as separators, so I replace the semicolons with commas (,). However, some of the data in my .csv file also contains commas, so when I try to upload the file I get this error.
error
17 commas separate the fields, but some of the data contains commas as well. I have been dealing with this for a long time without making any progress.
You can configure the separator with the separator argument when uploading CSV files.
&separator=%3B
(%3B is the URL encoded version of ;)
You can give extra parameters to the bin/post command by adding -params "separator=%3B".
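For example, here is a minimal sketch in Python of posting the semicolon-separated file directly with that parameter (the core name mycore, the file name data.csv, and the local Solr URL are assumptions about your setup; the requests package is assumed to be installed):

import requests

# Post the semicolon-separated CSV to the core's CSV update handler.
# "mycore" and "data.csv" are placeholders -- adjust them to your setup.
with open("data.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:8983/solr/mycore/update",
        params={"separator": ";", "commit": "true"},  # requests URL-encodes ';' as %3B
        data=f,
        headers={"Content-Type": "text/csv"},
    )
print(resp.status_code, resp.text)

This way you never have to replace the semicolons with commas, so the commas inside your data stop being a problem.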
Related
I am creating a pipeline in Google Data Fusion that should read records from a source database and write them to a target CSV file in Cloud Storage.
The problem is that in the resulting file the separator character is a comma (","), and some fields are strings containing phrases with commas, so when I try to load the resulting file into Wrangler as CSV I get an error: the number of fields in the CSV does not match the number of fields in the schema (because of the fields containing commas).
How can I escape these special characters in the pipeline?
Thanks and regards
Try writing the data as TSV instead of CSV (set the format of the sink plugin to tsv). Then load the data as tsv in Wrangler.
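Once the sink writes TSV, commas inside string fields are no longer ambiguous. A minimal sketch of reading such a file in Python (the file name output.tsv is hypothetical):

import csv

# Read a tab-separated file; commas inside string fields stay inside the field.
with open("output.tsv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter="\t"):
        print(row)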
I am getting the following error when I run a COPY command to copy the contents of a .csv file in S3 to a table in Redshift.
Error: "String length exceeds DDL length".
I am using the following COPY command:
COPY enjoy FROM 's3://nmk-redshift-bucket/my_workbook.csv' CREDENTIALS 'aws_access_key_id=****;aws_secret_access_key=****' CSV QUOTE '"' DELIMITER ',' NULL AS '\0'
I figured I would open the link given by S3 for my file through the AWS console.
The link for the workbook is:
link to my s3 bucket csv file
The file at that link is filled with many weird characters that I really don't understand.
The COPY command is taking in these characters instead of the information I entered in my csv file, hence the string-length-exceeded error.
I use SQL Workbench to query. My 'stl_load_errors' table in Redshift has raw_field_values entries similar to the characters at the link mentioned above; that is how I found out what input it is actually taking.
I am new to AWS and UTF-8 configuration, so I would appreciate any help with this.
The link you provide points to an .xlsx file (but with a .csv extension instead of .xlsx), which is actually a zip archive.
That is why you see those strange characters: the first two are 'PK', which marks a zip file.
So you will have to export to .csv first, before using the file.
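If you want to verify this before re-exporting, here is a small Python sketch (the file name is taken from the COPY command above; pandas and openpyxl are assumed to be installed for the optional conversion):

# An .xlsx workbook is a zip archive and starts with the bytes b"PK";
# a real CSV is plain text.
with open("my_workbook.csv", "rb") as f:
    magic = f.read(2)

if magic == b"PK":
    # It is really an Excel workbook: convert it to a true CSV first.
    import pandas as pd
    pd.read_excel("my_workbook.csv", engine="openpyxl").to_csv(
        "my_workbook_fixed.csv", index=False
    )
else:
    print("Looks like plain text; the problem is elsewhere.")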
I am trying to load the following simple CSV file into Tableau Public 9.3:
customers,item1,item2,item3,item4
1,0,0,0,0
2,0,0,0,0
3,0,0,0,0
However, it doesn't read the file as separate columns, despite the field separator being set to Comma; instead it treats the whole line as one column. Any help would be greatly appreciated.
If you change your locale settings to English US you will be able to load the file. You should also be able to work around this by creating a schema.ini file.
Go to Data > Manage fields > [Field] Options
You can also control imported CSV behavior after import, either by splitting individual columns (the split will be preserved on update as well) or, at the CSV level, via the settings shown in the image below.
That doesn't work for me. So I reopen the .csv file in Excel and save it again in .csv format with ',' as the delimiter.
After that my file looks like a .csv with ';' as the delimiter, and it works with Tableau.
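If you would rather not round-trip through Excel, the same conversion can be sketched in a few lines of Python (file names are placeholders; this assumes your locale-configured Tableau expects ';' as the separator, as described above):

import csv

# Re-save the comma-delimited file with ';' as the delimiter.
with open("customers.csv", newline="", encoding="utf-8") as src, \
     open("customers_semicolon.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst, delimiter=";")
    writer.writerows(csv.reader(src, delimiter=","))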
I have a folder containing a number of csv files, e.g. "leeds dz.csv", "leeds gh.csv", "leeds fr.csv". The first part of the file names is constant (i.e. always "leeds").
I want to import each to Stata individually, convert to .dta file and save it. Currently I have this code:
cd "etcetc"
clear
local myfilelist : dir . files"*.csv"
foreach file of local myfilelist {
drop _all
insheet using `file', comma
local outfile = subinstr("`file'",".csv","",.)
save "`outfile'", replace
}
The code works fine if I manually rename all the .csv files to delete the "leeds" part, i.e. if each .csv is named "dz.csv" instead of "leeds dz.csv", etc.
However, if I do not do this deletion, I receive the error "invalid 'dz.csv'".
I'm guessing this has something to do with the 3rd line of my code, in particular the "*.csv", but I'm unsure how to adapt the code, or why it won't let me import files with a space in the name.
The line
insheet using `file', comma
will be problematic with any filename containing spaces.
Try
insheet using "`file'", comma
The help for insheet is quite explicit on this:
If filename is specified without an extension, .raw is assumed. If your
filename contains embedded spaces, remember to enclose it in double
quotes.
I am importing several thousand lines of data from a .txt file containing two columns, in the following format:
A8041550408#=86^:|blablablablablablablablablablablablablablablablablablablabla1
blablablablablablablablablablablablablablablablablablablabla2
blablablablablablablablablablablablablablablablablablablabla3
A8041550408#=86^:|blablablablablablablablablablablablablablablablablablablabla1
blablablablablablablablablablablablablablablablablablablabla2
A8041550408#=86^:|blablablablablablablablablablablablablablablablablablablabla1
blablablablablablablablablablablablablablablablablablablabla2
blablablablablablablablablablablablablablablablablablablabla3
blablablablablablablablablablablablablablablablablablablabla4
etc....
What I have done so far is create a table with the two fields, but when I try to import the .txt file as CSV with "Columns separated by" set to |, I get an error:
"Invalid column count in CSV input on line 2."
This is quite obvious, since the second line of the .txt file is empty.
Moreover, I have tried importing the file as CSV using LOAD DATA, and that did not work either; it just filled the table with random words and phrases from the .txt file.
So my question is: how can I import the data from this file?
You have to fix your file; in its current state you cannot expect the import module to be able to understand it. First step would be to remove the empty lines: How to remove blank lines from a Unix file
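If you prefer to do that first step in Python rather than on the Unix command line, a minimal sketch (input and output file names are placeholders):

# Copy the file, skipping lines that are empty or contain only whitespace.
with open("data.txt", encoding="utf-8") as src, \
     open("data_noblanks.txt", "w", encoding="utf-8") as dst:
    for line in src:
        if line.strip():
            dst.write(line)

After that, re-check the remaining lines before retrying the import.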