How to import CSV files into Firebase

I see that we can import JSON files into Firebase.
What I would like to know is whether there is a way to import CSV files (I have files that could have about 50K or more records, with about 10 columns).
Does it even make sense to have such data in Firebase?

I can't tell you whether it makes sense to have such files in Firebase; only you can answer that.
I also had to upload CSV files to Firebase, and in the end I transformed my CSV into JSON and used firebase-import to push the JSON into Firebase.
There are a lot of CSV-to-JSON converters (even online ones). Pick the one you like most (I personally used node-csvtojson).
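If you would rather use a small script than an off-the-shelf converter, a minimal Python sketch could look like the one below. The file names and the "id" key column are assumptions about your data; the output is the keyed-object JSON shape commonly used for Realtime Database imports.

import csv
import json

# Convert a CSV into a JSON object keyed by one column, the shape that
# tools such as firebase-import can push under a database path.
records = {}
with open("data.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        key = row.pop("id")  # assumed key column; use whatever is unique in your data
        records[key] = row

with open("data.json", "w", encoding="utf-8") as out:
    json.dump(records, out, indent=2)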

I've uploaded many tab-separated files (40 MB each) into Firebase.
Here are the steps:
I wrote Java code to translate the TSV files into JSON.
I used firebase-import to upload them. To install it, run:
npm install firebase-import
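A typical invocation then looks roughly like the line below; the flag names are taken from the firebase-import README as best I recall, so check firebase-import --help for your version and substitute your own database URL, JSON file, and service account key:

firebase-import --database_url https://<your-project>.firebaseio.com --path / --json data.json --service_account key.json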

One trick I used on top of all the ones already mentioned is to synchronize a Google Spreadsheet with Firebase.
You create a script that uploads rows/columns directly to the Firebase database. It worked quite well and is more visual for fine-tuning the raw data than working with CSV/JSON directly (a rough sketch of the idea follows the reference below).
Ref: https://www.sohamkamani.com/blog/2017/03/09/sync-data-between-google-sheets-and-firebase/
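The linked post does this with Google Apps Script; as a language-neutral illustration of the same row/column-to-database mapping, here is a rough Python sketch against the Realtime Database REST API. The database URL, the /items path, the "id" column, and the assumption that your security rules (or an auth token) permit the write are all placeholders:

import requests

DATABASE_URL = "https://<your-project>.firebaseio.com"  # placeholder

# Rows as you would read them out of the spreadsheet: first row = headers.
rows = [
    ["id", "name", "price"],
    ["item-1", "Widget", 9.99],
    ["item-2", "Gadget", 14.50],
]

headers, *records = rows
for record in records:
    data = dict(zip(headers, record))
    key = data.pop("id")  # assumed unique column used as the node key
    # PUT writes the row under /items/<key>.json via the REST API.
    requests.put(f"{DATABASE_URL}/items/{key}.json", json=data).raise_for_status()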

Here is the fastest way to import your CSV into Firestore:
Create an account in Jet Admin
Connect Firebase as a DataSource
Import CSV to Firestore
Ref:
https://blog.jetadmin.io/how-to-import-csv-to-firestore-database-without-code/

Related

Failing to Upload Large JSON file to Firebase Real Time Database

I have a 1GB json file to upload to Firebase RTDB but when I press Import, it's loading for a while and then I get this Error:
There was a problem contacting the server. Try uploading your file again.
I have tried to upload a 30 MB file and everything is OK.
It sounds like your file is too big to upload to Firebase in one go. There are no parameters to tweak here, and you'll have to use another means of getting the data into the database.
You might want to give the firebase-import library a go, use the Firebase CLI's database:set command, or write your own importer for your file format using the Firebase API.
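For reference, the CLI route looks something like the line below (the project ID and file name are placeholders, and the exact flags can vary between firebase-tools versions):

firebase database:set / data.json --project your-project-id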

ArangoDB: How to export collection to CSV?

I have noticed there is a feature in web interface of ArangoDB which allows users to Download or Upload data as JSON file. However, I find nothing similar for CSV exporting. How can an existing Arango DB collection be exported to a .csv file?
If you want to export data from ArangoDB to CSV, then you should use Arangoexport. It is included in the full packages as well as the client-only packages. You find it next to the arangod server executable.
Basic usage:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-csv
Also see the CSV example with AQL query:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-via-aql-query
Using an AQL query for a CSV export allows you to transform the data if desired, e.g. to concatenate an array to a string or unpack nested objects. If you don't do that, then the JSON serialization of arrays/objects will be exported (which may or may not be what you want).
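A minimal arangoexport invocation for a CSV export might look like the following; the database name, collection name, and field list are placeholders, and the exact options are best checked against the linked documentation for your version:

arangoexport --type csv --server.database mydb --collection mycollection --fields "_key,name,price" --output-directory export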
The default Arango install includes the following file:
/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js
It includes this comment:
// This is a generic CSV exporter for collections.
//
// Usage: Run with arangosh like this:
// arangosh --javascript.execute <CollName> [ <Field1> <Field2> ... ]
Unfortunately, at least in my experience, that usage tip is incorrect. Arango team, if you are reading this, please correct the file or correct my understanding.
Here's how I got it to work:
arangosh --javascript.execute "/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js" "<CollectionName>"
Please specify a password:
Then it sends the CSV data to stdout. (If you wish to send it to a file, you have to deal with the password prompt in some way.)

How to bulk import documents with custom metadata from csv to Alfresco repo?

I have an Excel file (or CSV) that holds a list of documents with their properties and absolute paths on the local hard drive.
Now that we are going to use Alfresco (v5.0.d) as our DMS, I have already created a custom aspect that reflects the CSV fields, and I'm looking for the best approach to import all the documents from the CSV file into the Alfresco repository.
You could simply write a Java application to parse your CSV and upload the files one by one using the RESTful API. Do not forget to replicate the folder tree in your Alfresco repo (it is not recommended to have more than 1000 folders/documents at the same level of the hierarchy, since that would require some tweaking for a few non-trivial use cases).
To create the folder, refer to this answer.
To actually upload the files, refer to my answer here.
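As a rough, language-agnostic illustration of the driving loop (in Python rather than Java), the sketch below parses the CSV and hands each row to a hypothetical upload_to_alfresco helper; the column names are assumptions about your spreadsheet, and the helper would wrap whichever Alfresco API you pick (CMIS or the upload calls from the linked answers):

import csv
from pathlib import Path

def upload_to_alfresco(local_path, target_folder, properties):
    # Hypothetical helper: implement it with the Alfresco API of your choice,
    # creating the folder first and then posting the file with its properties.
    raise NotImplementedError

with open("documents.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        local_path = Path(row["absolute_path"])  # assumed column name
        target_folder = row["target_folder"]     # assumed column name
        # Every other column is treated as a property of the custom aspect.
        properties = {k: v for k, v in row.items()
                      if k not in ("absolute_path", "target_folder")}
        upload_to_alfresco(local_path, target_folder, properties)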

parse.com export database table into *.csv stored in parse cloud

How can I export one database table from parse.com into a *.csv file that is stored on Parse online?
I just once got a file in the following format and now I need to do that on my own:
http://files.parsetfss.com/f0e70754-45fe-43c2-5555-6a8a0795454f/tfss-63214f6e-1f09-481c-83a2-21a70d52091f-STUDENT.csv
So, the question is: how can I do this? I have not found a dashboard function for this yet.
Thank you very much
You can create a job in cloud code which will query through all the rows in the table and generate CSV data for each. This data can then be saved to a parse file for access by URL.
If you are looking to simply export a class every once in a while and you are on a Mac, check out ParseToCSV on the Mac App Store. It works very well.

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments. The total size is around 50MB. JIRA allows importing CSV data, so I was thinking of converting the JSON data to CSV, but all the converters I have seen online only handle a single file rather than recursing through an entire folder structure and producing a CSV equivalent that could then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
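A minimal sketch of such a Python script is shown below; the Lighthouse export layout and the field names ("title", "state", "latest_body", "number") are assumptions to adjust against the actual JSON files:

import csv
import json
from pathlib import Path

# Walk the Lighthouse export and emit one CSV row per ticket.
with open("jira_import.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["Summary", "Status", "Description", "Lighthouse ID"])
    for ticket_file in Path("lighthouse_export/tickets").glob("*/ticket.json"):
        ticket = json.loads(ticket_file.read_text(encoding="utf-8"))
        writer.writerow([
            ticket.get("title", ""),
            ticket.get("state", ""),
            ticket.get("latest_body", ""),  # assumed field name
            ticket.get("number", ""),
        ])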
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do is to have your script start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all tickets.
In Ruby (which is what we used), it would look something like this:
require "json"

# Walk every ticket.json in the Lighthouse export and pull out its data.
Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
  JSON.parse(File.read(ticket)).each do |field, value|
    # access the ticket data here and add it to a CSV row
  end
end