Import many JSON files into MongoDB with mongoimport

I use Linux, and in a local folder I have a myriad of files (with no extension, but they are JSON array files).
I would like to import all of them into the database (coches) and collection (dtc) already created in Mongo.
I tried several commands found in different questions (1, 2) with no success. This is a list of files found in my local folder, but there are many more.
/media/mario/prueba/2020-02-25_00-30-33_588844_0Auy
/media/mario/prueba/2020-02-25_06-26-02_816819_KXbU
/media/mario/prueba/2020-02-25_07-07-22_868748_DCmL
/media/mario/prueba/2020-02-25_16-02-12_371020_eYjf
This is an example I tried unsuccessfully:
for filename in *; do mongoimport -db coches --collection dtc --file "$filename" done;
I am new to Mongo and would like to know how to deal with this situation.
How can I import many extensionless JSON files into the database coches and collection dtc?
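A minimal sketch of a corrected loop, assuming the files really are JSON arrays (hence --jsonArray) and that mongod is reachable on the default local port; note the long --db option and the newline before done, both of which differ from the attempt above:

for filename in /media/mario/prueba/*; do
    # each file holds a JSON array, so tell mongoimport to expect one
    mongoimport --db coches --collection dtc --jsonArray --file "$filename"
done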

Related

How to import JSON file into MongoDB using Laravel?

I want to import a JSON file into MongoDB and fill a collection with data from that file. How can I do it? Is there any package I have to install using Composer, or do I just import some library?
I was searching YouTube and other websites, but everywhere I only found examples importing CSV files into a MySQL database.
Looking forward to your answers!
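If the goal is simply to get the file into MongoDB, one framework-independent option is the mongoimport command-line tool; a minimal sketch, assuming a hypothetical database mydb, collection items, and file data.json:

# run from the operating-system shell, not from PHP
mongoimport --db mydb --collection items --file data.json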

How to export subset of data from mongodb

I currently have a large database, and I need a means of backing up subsets of the data that can then be imported into another MongoDB instance.
For example, I would need to find all documents that contain a key, so essentially find({key: 'somekey'}), and then export that data set. I thought of simply running the query in Node.js and saving the data in JSON format, but I don't think this is optimal, since from my understanding re-importing that JSON later won't be straightforward, as the data types will be lost.
So my question is: how would I go about exporting a subset of the dataset so that it can be re-imported into another MongoDB instance on another server?
Thanks to Veeram's comment, the way to do this is to export as BSON so that it retains all the data structure and types:
sudo mongodump -d DB_Name -c Collection -q '{"key_name": "value"}' --out /home/collection
Then to import it back:
sudo mongorestore -d DB_Name -c Collection /home/collection/DB_Name/Collection.bson

Getting RECORD Array in MongoDB

I imported JSON data from MySQL using this command:
mongoimport --db your --collection categories categories.json --type json
But when I started searching the data I found an issue: the MongoDB collection has a RECORDS array, and the ids were not imported as objects like in the first one.
Does anyone know how to import data from MySQL into MongoDB so that it ends up as objects, without the extra RECORDS array?
I think RECORDS is coming from your MySQL export; please check your JSON file by opening it in an editor like Sublime.
To export from MySQL to Mongo as JSON objects:
Install the gem:
gem install mysql2xxxx
Then run:
mysql2json --user=root --database=yourdb --execute "select * from categories" > cat.json
After running the above command you will get clean records in JSON format. I don't know how you are importing, but I don't think RECORDS should appear.
After that you can use:
mongoimport --db your --collection categories cat.json --type json
Hopefully this will work.

How to import from a JSON file to MongoDB

How do I import a JSON file into MongoDB?
I tried:
mongoimport --db test --collection restaurants --drop --file primer-dataset.json
mongoimport is a stand-alone application that needs to be executed from the shell (Windows Command Prompt, Bash, etc.). It seems you are currently executing the command inside the Mongo shell itself.
The docs read:
The mongoimport tool imports content from an Extended JSON, CSV, or TSV export created by mongoexport, or potentially, another third-party export tool.
The word "potentially" means it should be a valid json with expected structure. Not an arbitrary json.

Importing large datasets into Couchbase

I am having difficulty importing large datasets into Couchbase. I have experience doing this very fast with Redis via the command line but I have not seen anything yet for Couchbase.
I have tried using the PHP SDK and it imports about 500 documents per second. I have also tried the cbdocloader script in the Couchbase bin folder, but it seems to want each document in its own JSON file. It is a bit of work to create all these files and then load them. Is there some other import process I am missing? If cbdocloader is the only way to load data fast, is it possible to put multiple documents into one JSON file?
Take the file that has all the JSON documents in it and zip up the file:
zip somefile.zip somefile.json
Place the zip file(s) into a directory. I used ~/json_files/ in my home directory.
Then load the file or files with the following command:
cbdocloader -u Administrator -p s3kre7Pa55 -b MyBucketToLoad -n 127.0.0.1:8091 -s 1000 \
~/json_files/somefile.zip
Note: '-s 1000' is the memory size. You'll need to adjust this value for your bucket.
If successful you'll see output stating how many documents were loaded, success, etc.
Here is a brief script to load up a lot of .zip files in a given directory:
#!/bin/bash
# Load every .zip of JSON documents in the directory into the bucket.
JSON_Dir=~/json_files
for ZipFile in "$JSON_Dir"/*.zip; do
    /Applications/Couchbase\ Server.app/Contents/Resources/couchbase-core/bin/cbdocloader \
        -u Administrator -p s3kre7Pa55 -b MyBucketToLoad \
        -n 127.0.0.1:8091 -s 1000 "$ZipFile"
done
UPDATED: Keep in mind this script will only work if your data is formatted correctly and each document is under the maximum single-document size of 20 MB (not the zip file itself, but any document extracted from the zip).
I have created a blog post describing bulk loading from a single file as well and it is listed here:
Bulk Loading Documents Into Couchbase