Export CSV data from a Jupyter notebook to S3

I know that to read in the CSV I can use:
pd.read_csv("s3://data-science/misc/survey.csv")
But I am trying to export results there using:
filex.to_csv("s3://data-science/misc/filex.csv")
and this does not work. How can this be done?

If you just want the CSV saved next to where you run the script, use only the file name and it will be written to the current working directory (the one from which you execute the script):
df1.to_csv('df1.csv', sep=',', encoding='utf-8')
I also recommend paying attention to the arguments.
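If you do want to write directly to S3, one option is to serialize the CSV in memory and upload it with boto3. A minimal sketch, assuming boto3 is installed and credentials are configured (the bucket and key below mirror the question and may need adjusting); note also that recent pandas versions can write to an s3:// path directly if the s3fs package is installed:
import io

import boto3
import pandas as pd

# Placeholder data; replace with your own DataFrame.
filex = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Serialize the DataFrame to an in-memory CSV buffer.
buffer = io.StringIO()
filex.to_csv(buffer, index=False)

# Upload the buffer contents to S3 (bucket and key from the question).
s3 = boto3.client("s3")
s3.put_object(Bucket="data-science", Key="misc/filex.csv", Body=buffer.getvalue())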

How to properly import a .csv file into Tableau?

I've exported a dataframe from R into a .csv file and then tried to open it in Tableau. What is the correct way to import these files? I've gone to Connect to data source > To a file > Text file, then simply clicked on the CSV.
However, the columns and rows are all mixed up and I'm not sure what's gone wrong, as the files open in Numbers and Excel just fine!
Please see the incorrectly imported data rem_posts.csv and the correctly imported data kylie_posts.csv.
It was an issue with some columns having string data with commas in them, which was messing up the import when using the CSV format. I resolved this by exporting to Excel instead.
Try using the Data Interpreter.
You may also try using Tableau Prep to do some cleanup.
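The root cause here is usually embedded commas in unquoted fields. As an illustration (not the asker's original R workflow), here is a hypothetical pandas round trip that forces quoting so embedded commas cannot be mistaken for delimiters:
import csv

import pandas as pd

# Hypothetical data containing a comma inside a string field.
df = pd.DataFrame({"post": ["hello, world", "plain text"], "likes": [10, 3]})

# QUOTE_ALL wraps every field in quotes, so the embedded comma survives.
df.to_csv("posts.csv", index=False, quoting=csv.QUOTE_ALL)

# Reading it back keeps the columns intact.
print(pd.read_csv("posts.csv"))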

Export BigQuery table to GCS as CSV or JSON generates a file of type "file"

After running a query and saving the results into a table, I went on to export its contents into a GCS bucket.
In the table view, I clicked Export.
Because the table was bigger than 1 GB, I used the wildcard URI
bucketname/all_years*
Then, because I wanted it in both CSV and JSON, I specified the Export format CSV, started the export, and repeated for JSON.
I didn't check whether the objects in the bucket already had .csv / .json extensions (I deleted the bucket right away due to costs, but as far as I remember they didn't), and once I downloaded the content from the bucket to my Windows machine, I got files of type "file".
To work around this I had to open each file's properties, append .csv / .json, and click OK.
Why is it that, even though I specified the export format as CSV and JSON, I got files of type "file"?
Setting Export Format controls the file format but not the file extension, so you should set the extension explicitly.
So, instead of bucketname/all_years*, use bucketname/all_years*.csv or bucketname/all_years*.json.
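The same applies when exporting programmatically. A minimal sketch using the google-cloud-bigquery Python client (the project, dataset, and table names are placeholders):
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table reference; replace with your own project.dataset.table.
table_ref = "my-project.my_dataset.all_years"

# The wildcard shards the export; the .csv suffix gives each shard an extension.
job_config = bigquery.ExtractJobConfig(destination_format="CSV")
extract_job = client.extract_table(
    table_ref,
    "gs://bucketname/all_years_*.csv",
    job_config=job_config,
)
extract_job.result()  # wait for the export job to finish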

ArangoDB: How to export collection to CSV?

I have noticed there is a feature in the web interface of ArangoDB which allows users to download or upload data as a JSON file. However, I find nothing similar for CSV exporting. How can an existing ArangoDB collection be exported to a .csv file?
If you want to export data from ArangoDB to CSV, you should use Arangoexport. It is included in the full packages as well as the client-only packages. You will find it next to the arangod server executable.
Basic usage:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-csv
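A minimal invocation, assuming a collection named mycollection and placeholder field names (the linked docs are authoritative; for CSV output, arangoexport requires the fields to be listed explicitly):
arangoexport --type csv --collection mycollection --fields "_key,name,email" --output-directory "export"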
Also see the CSV example with AQL query:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-via-aql-query
Using an AQL query for a CSV export allows you to transform the data if desired, e.g. to concatenate an array to a string or unpack nested objects. If you don't do that, then the JSON serialization of arrays/objects will be exported (which may or may not be what you want).
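For the query route, a sketch along the lines of the linked example (the collection, attribute names, and separator are placeholders; --custom-query is the 3.4-era flag name):
arangoexport --type csv --fields "name,tags" --output-directory "export" --custom-query "FOR doc IN mycollection RETURN { name: doc.name, tags: CONCAT_SEPARATOR(';', doc.tags) }"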
The default Arango install includes the following file:
/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js
It includes this comment:
// This is a generic CSV exporter for collections.
//
// Usage: Run with arangosh like this:
// arangosh --javascript.execute <CollName> [ <Field1> <Field2> ... ]
Unfortunately, at least in my experience, that usage tip is incorrect. Arango team, if you are reading this, please correct the file or correct my understanding.
Here's how I got it to work:
arangosh --javascript.execute "/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js" "<CollectionName>"
Please specify a password:
Then it sends the CSV data to stdout. (If you wish to send it to a file, you have to deal with the password prompt in some way.)
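One way around the prompt (assuming default connection settings; adjust the credentials to your setup) is to pass the password on the command line so stdout can be redirected cleanly:
arangosh --server.password "yourpassword" --javascript.execute "/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js" "<CollectionName>" > export.csv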

Export from Sketch App to JSON

I want to be able to export Layer Names and properties from Sketch to JSON format. I think I can figure out how to pull the info I need from Sketch, but I haven't started to code anything, because I haven't been able to find any info about this export issue.
I'm wondering if anyone can help confirm whether Sketch can only export its supported formats, or if export to JSON is possible. I don't want to dive into this project only to find out that I can't end up with a JSON file.
I have been trying to work with this as well, and it turns out there are a few ways to get access to a JSON file in Sketch:
Use the npm package sketch2json (a usage sketch follows at the end of this answer).
It turns out that if you unzip the .sketch file, there is a JSON file hiding inside:
unzip sketch-header.sketch
This creates a folder called 'pages' with the .json file inside. To get the layer names, you can read the .json file into a string, parse it, and then collect the layer names like this:
const fs = require('fs');

// The file name inside 'pages' is a page ID; this path is a placeholder.
const fileString = fs.readFileSync('pages/<page-id>.json', 'utf8');
const obj = JSON.parse(fileString);
obj.layers.forEach((layer) => {
  console.log(layer.name);
});
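For the sketch2json route mentioned above, a rough usage sketch, assuming (per the package README) that its default export accepts the raw .sketch file buffer and returns a promise resolving to the parsed contents:
const fs = require('fs');
const sketch2json = require('sketch2json');

fs.readFile('sketch-header.sketch', (err, data) => {
  if (err) throw err;
  sketch2json(data).then((result) => {
    // Top-level keys mirror the zip layout: pages, document, meta, user.
    console.log(Object.keys(result));
  });
});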
If you rename the .sketch file's extension to .zip and extract it, you will see as many JSON files as your Sketch document has pages, inside a folder called "pages", along with some BMP preview images and other JSON files related to user and document information.

neo4j LOAD CSV returns Couldn't Load external resource

Trying a CSV import into Neo4j - it doesn't seem to be working.
I'm loading a local file using the syntax:
LOAD CSV WITH HEADERS FROM "file:///location/local/my.csv" AS csvDoc
I'm wondering if there's something wrong with my CSV file, or if there's some syntax problem here.
As the title says, the error is:
Couldn't load the external resource at: file:/location/local/my.csv
[Neo.TransientError.Statement.ExternalResourceFailure]
Neo4j seems to need a full path spec to get a file on the local system.
On Linux or Mac, try
LOAD CSV FROM "file:/Users/you/location/local/my.csv"
On Windows, try
LOAD CSV FROM "file://c:/location/local/my.csv"
In the browser interface (Neo4j 3.0.3, macOS 10.11) it looks like Neo4j prefixes your file path with $path_to_graph_database/import, so you could move your files there. If you are using a command-line tool, see this SO question.
Easy solution:
Once you choose your database location (in my case ReactomeGraphDB60), go to that folder and create a folder called "import" inside it.
Then, in the Cypher query, write (as an example):
LOAD CSV WITH HEADERS FROM "file:///ILClasiffStruct.csv" AS row
CREATE (n:Interleukines)
SET n = row
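The "import" folder behaviour comes from Neo4j's configuration. A sketch of the relevant neo4j.conf setting in Neo4j 3.x (commenting it out lets LOAD CSV read arbitrary local paths, which is a security trade-off):
# neo4j.conf - file:/// URLs in LOAD CSV resolve relative to this directory
dbms.directories.import=import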