How to use bsondump in Windows?

How do I convert BSON to JSON from MongoDB?
I inserted documents into MongoDB, and when I run bsondump I get the following error:
C:\mongodb\bin>bsondump collection.bson > collection.json
error: boost::filesystem::file_size: The system cannot find the file specified: "collection.bson"

Please use the absolute path to the collection.bson file, or check whether the file is actually in C:\mongodb\bin.
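For example, if the dump lives elsewhere, pass the full path (the C:\dump\mydb location below is hypothetical):
C:\mongodb\bin>bsondump C:\dump\mydb\collection.bson > collection.json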

First, you have to run mongodump, which produces the .bson files. It creates a dump folder containing a separate subfolder for each database. After that, run bsondump coll.bson > coll.json.
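A minimal sketch of the whole sequence, assuming a database named mydb and a collection named coll (both hypothetical):
C:\mongodb\bin>mongodump --db mydb
C:\mongodb\bin>bsondump dump\mydb\coll.bson > coll.json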

Related

ArangoDB: How to export collection to CSV?

I have noticed there is a feature in the web interface of ArangoDB which allows users to download or upload data as a JSON file. However, I find nothing similar for CSV export. How can an existing ArangoDB collection be exported to a .csv file?
If you want to export data from ArangoDB to CSV, you should use arangoexport. It is included in the full packages as well as the client-only packages; you will find it next to the arangod server executable.
Basic usage:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-csv
Also see the CSV example with AQL query:
https://docs.arangodb.com/3.4/Manual/Programs/Arangoexport/Examples.html#export-via-aql-query
Using an AQL query for a CSV export allows you to transform the data if desired, e.g. to concatenate an array to a string or unpack nested objects. If you don't do that, then the JSON serialization of arrays/objects will be exported (which may or may not be what you want).
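A sketch of both approaches, assuming a collection named mycoll and the listed field names (all hypothetical):
arangoexport --type csv --collection mycoll --fields "_key,name,value" --output-directory "export"
arangoexport --type csv --custom-query "FOR d IN mycoll RETURN { key: d._key, tags: CONCAT_SEPARATOR(';', d.tags) }" --fields "key,tags" --output-directory "export"
The second form uses AQL (here CONCAT_SEPARATOR) to flatten an array into a single string column before export.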
The default Arango install includes the following file:
/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js
It includes this comment:
// This is a generic CSV exporter for collections.
//
// Usage: Run with arangosh like this:
// arangosh --javascript.execute <CollName> [ <Field1> <Field2> ... ]
Unfortunately, at least in my experience, that usage tip is incorrect. Arango team, if you are reading this, please correct the file or correct my understanding.
Here's how I got it to work:
arangosh --javascript.execute "/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js" "<CollectionName>"
Please specify a password:
Then it sends the CSV data to stdout. (If you wish to send it to a file, you have to deal with the password prompt in some way.)
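One way around the prompt is to pass the credentials on the command line (--server.username and --server.password are standard arangosh options; adjust them to your setup) so that stdout can be redirected:
arangosh --server.username root --server.password "yourpassword" --javascript.execute "/usr/share/arangodb3/js/contrib/CSV_export/CSVexport.js" "<CollectionName>" > collection.csv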

LogParser JSON log files

Does LogParser support JSON log files? I am working with an app that outputs simple JSON log files into a folder and I'm trying to run aggregate SQL style queries against the files in the folder.
The format of the files is simple:
{"f1":"value", "f2":NumericValue, "f3":"DateValue", etc...}
You can try out Musoq, which already has a plugin that lets you treat JSON as a queryable source.
Unfortunately not. LogParser was written when XML was the cool thing :-)
But you can always write your own COM input-format extension, and then LogParser will be able to query JSON files as well.

Open local JSON file for examination

I was wondering if it's possible to open a local JSON file so I can just check its structure? I didn't want to upload the file to an online JSON format checker site, and was hoping I could just use PAW to do that.
I don't seem to be able to do this with a local file unless I run it through a local server, e.g. using MAMP. Have I missed something?
Thanks.
You could copy the content into the text body and then switch to the JSON body; this will let you view it in the nice structure. Sorry, there is currently no way to directly import a file; you need to copy-paste the content.
Take a look at the jsonlint npm module. It supports JSON schema validation and pretty printing.
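A quick sketch of the command-line usage (the file name is hypothetical):
npm install -g jsonlint
jsonlint myfile.json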

Export from Couchbase to CSV file

I have a Couchbase Cluster with only one node (let's call it localhost) and I need to export all the data from a very big bucket (let's call it XXX) into a CSV file.
Now, this seems to be a pretty easy task, but I can't find a way to make it work.
According to the (really bad) documentation on the cbtransfer tool from Couchbase, http://docs.couchbase.com/admin/admin/CLI/cbtransfer_tool.html, this is possible, but it is not explained clearly. They just mention a flag for making the transfer occur in CSV format, but it is not working for me. Maybe someone who has already done this can give me a hand?
Using the documentation, I've been able to get close to the result I want (a clean CSV file with all the documents in the XXX bucket) using this command:
/opt/couchbase/bin/cbtransfer http://localhost:8091 /path/to/export/output.csv -b XXX
But what I get is that /path/to/export/output.csv is actually a folder with a lot of folders inside, storing some kind of JSON metadata that can be used to restore the XXX bucket in another instance of Couchbase.
Has anyone been able to export data from a Couchbase bucket (Json documents) into a CSV file?
From looking at the documentation, you have to use a slightly different syntax to export to CSV: http://docs.couchbase.com/admin/admin/CLI/cbtransfer_tool.html
It needs to look like this:
cbtransfer http://[localhost]:8091 csv:./data.csv -b default -u Administrator -p password
Notice the "csv:" before the name of the CSV file.
I tested this and it does export a CSV. Just be forewarned that you need a relatively flat document structure for this to work really well, since JSON can obviously represent far more complex data structures than CSV (arrays, sub-documents, etc.), and cbtransfer will not unravel those. For example, if there is a sub-document, cbtransfer will represent it as a JSON doc within the corresponding CSV line.
So depending on what your document structure is, CSV may not be an ideal export format. It is a step backwards.
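To illustrate the sub-document point (the exact column layout depends on the cbtransfer version, so this only sketches how the value behaves): a hypothetical stored document
{"name": "alice", "address": {"city": "Paris"}}
would end up with the sub-document serialized as JSON inside a single CSV column, roughly:
...,"{""name"":""alice"",""address"":{""city"":""Paris""}}"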

How to run .sql file using Java without manually parsing the document

How can I run a .sql file using Java without manually parsing the document? Is there any prewritten class to do the parsing if it has to be done?
I'm not sure but this may help you.
In the MySQL JDBC implementation, setLocalInfileInputStream() sets an InputStream instance that will be used to send data to the MySQL server for a LOAD DATA LOCAL INFILE statement, rather than a FileInputStream or URLInputStream that represents the path given as an argument to the statement.
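A minimal sketch of that approach, assuming MySQL Connector/J 8.x; the database, table, and credentials below are hypothetical, and allowLoadLocalInfile must be enabled on both client and server:

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadDataExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical database "mydb" and credentials; adjust to your setup.
        String url = "jdbc:mysql://localhost:3306/mydb?allowLoadLocalInfile=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement();
             FileInputStream in = new FileInputStream("data.csv")) {
            // Unwrap to the Connector/J-specific statement to attach the stream.
            com.mysql.cj.jdbc.JdbcStatement mysqlStmt =
                    stmt.unwrap(com.mysql.cj.jdbc.JdbcStatement.class);
            mysqlStmt.setLocalInfileInputStream(in);
            // With a stream set, the driver sends its contents to the server
            // instead of reading the file path named in the statement.
            stmt.execute("LOAD DATA LOCAL INFILE 'ignored' INTO TABLE mytable"
                    + " FIELDS TERMINATED BY ','");
        }
    }
}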