I've exported a data frame from R to a .csv file and then tried to open it in Tableau. What is the correct way to import these files? I went to Connect to data source > To a file > Text file and then simply clicked on the CSV.
However, the columns and rows are all mixed up, and I'm not sure what's gone wrong, as the files open just fine in Numbers and Excel!
Please see the incorrectly imported data (rem_posts.csv) and the correctly imported data (kylie_posts.csv).
It turned out to be an issue with some columns containing string data with commas in it, which was breaking the import in CSV format. I resolved this by exporting to Excel format instead.
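For reference, the same fix is easy to demonstrate in pandas (a hedged sketch, not the original R workflow; the DataFrame and file names are illustrative): either quote every field on CSV export so embedded commas stay inside one column, or sidestep CSV parsing entirely with an .xlsx file.
import csv
import pandas as pd

df = pd.DataFrame({"caption": ["hello, world", "no comma"]})

# Option 1: quote every field so embedded commas stay in one column.
df.to_csv("rem_posts.csv", index=False, quoting=csv.QUOTE_ALL)

# Option 2: avoid CSV parsing entirely (requires the openpyxl package).
df.to_excel("rem_posts.xlsx", index=False)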
Try using Tableau's Data Interpreter.
You may also try using Tableau Prep to do some cleanup.
When importing, it didn't prompt a dialog to edit the file's settings. Is there an easy way to convert it into a table?
I also tried a JSON file with the same data, but when importing it I also couldn't convert it into a table.
I know that reading in the CSV is:
pd.read_csv("s3://data-science/misc/survey.csv")
But I am trying to export results there using:
filex.to_csv("s3://data-science/misc/filex.csv")
and this does not work. How can this be done?
If you just want the CSV saved alongside your script, try using just the name of the new file; it will be saved in the current path (the one from which you execute the script):
df1.to_csv('df1.csv', sep=',', encoding='utf-8')
and I recommend paying attention to the arguments.
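If you do need to write straight to S3: recent pandas versions accept s3:// paths in to_csv when the s3fs package is installed, so the original call can work as written. Failing that, here is a minimal sketch using boto3 (assuming it is installed and AWS credentials are configured; the bucket and key come from the question, and filex is the DataFrame being exported):
import io
import boto3

# Serialize the DataFrame into an in-memory CSV buffer.
buffer = io.StringIO()
filex.to_csv(buffer, index=False)

# Upload the buffer's contents to the target bucket and key.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="data-science",
    Key="misc/filex.csv",
    Body=buffer.getvalue().encode("utf-8"),  # put_object wants bytes
)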
For a course on Excel, I was trying to load a CSV into Neo4j (my first time using this application) when I got stuck at the very first step of replicating an example shown in the course: loading the file.
The command used in the example was this:
LOAD CSV WITH HEADERS FROM "file:/path/to/file/file.csv"
as row
CREATE (m:movie {name:row.movie})
But it gave syntax errors. I found out I could correct it by using double backslashes and adding "file://":
LOAD CSV WITH HEADERS FROM "file://C:\\path\\to\\file\\file.csv"
as row
CREATE (m:movie {name:row.movie})
Neo4j accepts this syntax, processes for a few moments, and then returns yet another error:
Neo.TransientError.Statement.ExternalResourceFailure
I tried the same commands (the original and my own) in the online Neo4j console, but no luck. I can reach the file using that path without a problem; it really is there. The CSV file consists of just 5 strings of regular letters, that's all. No fancy formatting or characters.
What's going on?
Not that mysterious: Neo4j's LOAD CSV clause looks for the specified CSV file in the import directory configured for that database, as specified in its server configuration file (i.e., dbms.directories.import=import in your neo4j.conf file).
You should create the import directory in:
"C:\Users\[User Name]\Documents\Neo4j\default.graphdb\"
If you place your CSV file there, you can specify any subdirectory, or just the "file.csv" you want to import, with LOAD CSV as below.
LOAD CSV WITH HEADERS FROM "file:///file.csv"
AS row
RETURN row
LIMIT 5
Try using:
"file:///C:/path/to/file/file.csv"
Since your file is on your local computer, the third / following the file scheme is not preceded by a host name or address -- but it still needs to be there. Also, file URI path separators should be forward slashes (even on Windows machines).
See the File URI scheme Wikipedia page if you need more information.
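If you end up scripting this from Python, the corrected URL can be passed through the official neo4j driver as well (a minimal sketch; the connection URI and credentials are illustrative, and the import-directory restriction described above still applies):
from neo4j import GraphDatabase

# Connection details are illustrative; adjust to your server.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
LOAD CSV WITH HEADERS FROM "file:///C:/path/to/file/file.csv" AS row
CREATE (m:movie {name: row.movie})
"""

with driver.session() as session:
    session.run(query)
driver.close()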
I have a 1.9GB tab-delimited file that is in the form of an .xlsx file. I could write a script to convert it to CSV and then convert that to JSON, but I'm just curious whether there is a more direct way to do this. Thanks! :)
Not sure if this is an option, but you could import it into some database (for example, MongoDB) and then export it to JSON relatively easily from there.
I came across a similar problem recently, and what I found really easy to do was to go directly from XLSX to JSON using MATLAB.
IMPORTING XLSX: https://www.mathworks.com/help/matlab/ref/xlsread.html
EXPORTING JSON: https://www.mathworks.com/help/matlab/ref/jsonencode.html
It might take a little bit of time for such a large file, but I did it on a file about 400MB in size with no problem.
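The same direct route exists in Python with pandas, if MATLAB isn't at hand (a sketch; file names are illustrative, reading .xlsx requires the openpyxl package, and a 1.9GB workbook will need a correspondingly large amount of memory):
import pandas as pd

# Read the first sheet of the workbook into a DataFrame.
df = pd.read_excel("data.xlsx")

# Write records-oriented JSON: one object per row.
df.to_json("data.json", orient="records")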
I have a large CSV file that I am trying to import into Microsoft Access, but I am running into issues. Assume pipes represent different cells in the database.
Assume my content is as shown below. With the default settings (a comma delimiter and " as the text qualifier), the second entry will only parse the word my and will not import the word content into the database, even though the import wizard implies that it will.
|my content is good|
|my|
Now if I change the text qualifier to NONE, it parses the entire second entry and my content is imported into the database; however, the first entry winds up split across four different cells in the database and shows up as
my|content|is|good.
|my content
Again, I used pipes to indicate different cells.
This seems like a limitation in Microsoft Access. Is anyone familiar with a workaround for this?
Original content:
,"my,content,is,good","",
,my"content","",
I am using the import wizard.
Yes, this is a limitation of the CSV import capabilities in Access. For whatever reason, Access has always been more restrictive than Excel in its abilities to parse CSV files.
So, one workaround would be to open the CSV file in Excel, save the file as an actual Excel sheet, and then import the Excel sheet into Access. For example, the CSV file
this,is,a "test",CSV file,"Ugly, yes, but still parsable."
is "non-standard" (if one is willing to concede that there is such a thing as a CSV "standard"), and Access cannot import it directly. (It either complains of an "Unparsable Record" or it splits the last field on the commas, depending on the "Text Qualifier" setting.)
However, we can open it in Excel, save the file as "foo.xlsx", and then import the .xlsx file into Access.
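As an alternative to the Excel round trip, Python's csv module parses this kind of file with its default dialect, so you can rewrite the data with consistent quoting that Access will accept (a sketch; file names are illustrative):
import csv

# Read the "non-standard" CSV and rewrite it with every field quoted,
# so embedded commas survive Access's import wizard.
with open("ugly.csv", newline="") as src, \
     open("clean.csv", "w", newline="") as dst:
    writer = csv.writer(dst, quoting=csv.QUOTE_ALL)
    for row in csv.reader(src):
        writer.writerow(row)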