I want to import JSON data into MongoDB Compass, but the import function gives this error:
"unexpected end of JSON input"
Here is a sample from my JSON file:
[
{
"id":4,
"user":"test#example.com",
"date1":"2019-03-01",
"date2":"2019-04-01",
"statut":"Good",
"guest_number":4
}
]
One suggested solution is to write all the JSON on one line, but what if we have a big document?!
I just found a solution: I can import the data with this command in the terminal:
mongoimport --jsonArray --db YourDatabase --collection YourCollection --file Yourfile.json
I had this issue 6 months ago; the solution is to write all the JSON on one line:
[{"id":4,"user":"test#example.com","date1":"2019-03-01","date2":"2019-04-01","statut":"Good","guest_number":4}]
MongoDB Compass will tell you:
Import success!
But the document will definitely not appear in your collection, so you are better off using Robo 3T if you are going to insert JSON. After that you can go back to using Compass, like I do.
It is weird, yes, but I haven't found another solution yet.
[UPDATE]
I managed to import data with Compass, but first I exported a document from Compass to see how it writes the JSON.
{"_id":{"$oid":"5e4cf105c9ba1a21143d04a2"},"tPreguntas":["Pregunta 1","Pregunta 2","Pregunta 3","Pregunta 4","Pregunta 5"],"tCategorias":[],"tPublico":true,"tFechaCreacion":{"$date":{"$numberLong":"1582100741716"}},"tCodigo":"test1","tTitulo":"Test 1","tDescripcion":"Test de muestreo nĂºmero uno para comprobar.","tCreadoPor":"eoeo#eoeo.com"}
It looks different from the JSON I posted at the start (note the ObjectId "$oid", for example). If you follow that pattern, Compass will import it fine.
This parsing error can be solved by minification, although it is quite a hectic process to do for each object. This kind of minification worked for me, going from:
{
"_id" : ObjectId("5b9ecf9a64f634289ca895bb"),
"name" : "Mark"
}
{
"_id" : ObjectId("5b9edd9064f634289ca895e4"),
"name" : "David"
}
to:
{"_id":"ObjectId(\"5b9ecf9a64f634289ca895bb\")","name":"Mark"}
{"_id":"ObjectId(\"5b9edd9064f634289ca895e4\")","name":"David"}
Just copy the contents of your JSON file. Then, in MongoDB Compass, select your database, click Add Data (a drop-down appears), and choose Insert Document. A dialog pops up; paste the JSON in there and click Insert.
Minification also solved this parsing error for me, even though it is quite hectic to repeat for each object. Change:
{
"_id" : "123456",
"name" : "stackoverflow"
}
to:
{"_id":"123456","name":"stackoverflow"}
This answer (Solution) solved the issue for me. It seems to be a formatting issue.
It's an issue with the end-of-line characters (EOL).
In a Windows environment, line terminations are normally CR LF (\r\n), while MongoDB Compass seems to only support CR (\r).
You can open the file in Notepad++, enable the "Show all characters" toggle in the toolbar and inspect your current end-of-line character.
To fix the issue, select Edit > EOL Conversion > Macintosh (CR).
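If you prefer scripting the conversion instead of using Notepad++, a one-off Node.js sketch (file names are placeholders) that mirrors that step might look like:

// fix-eol.js - replace Windows CR LF line endings with CR, the same
// conversion as Notepad++'s EOL Conversion > Macintosh (CR)
const fs = require('fs');

const text = fs.readFileSync('Yourfile.json', 'utf8');
fs.writeFileSync('Yourfile-fixed.json', text.replace(/\r\n/g, '\r'));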
The structure of your JSON is incorrect; you might want to read up on the JSON standard:
A value can be a string in double quotes, or a number, or true or false or null, or an object or an array. These structures can be nested.
Try using double quotes instead of single ones; JSON validators could help you as well:
[
{
"id" : 4,
"user" : "test#example.com",
"date1" : "2019-03-01",
"date2" : "2019-04-01",
"statut" : "Good",
"guest_number" : 4
}
]
I had a similar issue, but it turned out to be additional line feeds at the end of the file. Removing these fixed the issue. I suggest opening your file in an editor that shows line feeds, e.g. Notepad++.
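For what it's worth, stripping those trailing line feeds can also be scripted; a minimal Node.js sketch (the file name is a placeholder):

// trim-trailing.js - remove whitespace, including stray line feeds,
// from the end of the file
const fs = require('fs');

const text = fs.readFileSync('Yourfile.json', 'utf8');
fs.writeFileSync('Yourfile.json', text.trimEnd());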
Add --jsonFormat=canonical to your mongoexport script:
mongoexport --db=quotes --collection=quotes --jsonFormat=canonical --out=data/quotes.json
JSON can only directly represent a subset of the types supported by BSON. To preserve type information, MongoDB adds the following extensions to the JSON format.
Source
You can also use the MongoDB shell, like this:
db.user.insert(
[
{
"id" : 4,
"user" : "test#example.com",
"date1" : "2019-03-01",
"date2" : "2019-04-01",
"statut" : "Good",
"guest_number" : 4
},
{
"id" : 5,
"user" : "test2#example.com",
"date1" : "2019-03-01",
"date2" : "2019-04-01",
"statut" : "Good",
"guest_number" : 4
}
]
)
Run this command in cmd; the working directory should be the same folder where the JSON file is located.
mongoimport --jsonArray --db YourDatabase --collection YourCollection --file Yourfile.json
I am trying to fetch a specific record from the database using QueryDatabaseTable -> UpdateAttribute -> PutElasticSearchHttpRecord.
The ES processor is throwing this error: java.io.IOException: Invalid char between encapsulated token and delimiter.
My config is attached. How do I fix this?
I am getting the correct result in the queue after UpdateAttribute, but I am not able to push it into ES. I have set the schema.name property to the appropriate schema.
The following is the correct result I am getting in the queue after the UpdateAttribute processor. How do I fix the "Invalid char between encapsulated token and delimiter" error?
[ {
"TimeOfDay" : "2018-09-20T18:10:36.941",
"BMU_Debug_Pack_BlkVolt_Max2" : 4114.0,
"BMU_Debug_Pack_BlkVolt_Max1" : 4114.0,
"BMU_Debug_Pack_BlkVolt_Max3" : 4114.0,
"BMU_Debug_Pack_BlkVolt_Max0" : 4116.0,
"BMU_Debug_Pack_CTemp_Min" : 21.0,
"BMU_Debug_Pack_CurrVolt_Curr" : 2.0,
"BMU_Debug_Pack_Blk_Volt_Delta" : 6.0,
"total_Difference" : 15.0
} ]
Thank you! What should I change?
You need to configure an Avro Reader instead of a CSV Reader in PutElasticSearchHttpRecord, as the QueryDatabaseTable processor outputs flowfiles in Avro format.
Use the embedded Avro schema option in the Avro Reader controller service.
With .phpstorm.meta.php you can overwrite the return type of your class methods. For example:
// .phpstorm.meta.php
namespace PHPSTORM_META
{
$STATIC_METHOD_TYPES = [
\Acme\Node::closest('') => [
"" == "#",
],
];
}
The example above lets PhpStorm understand that the return value of $node->closest(SomeNodeType::class) is an instance of SomeNodeType.
However, my question is how to make PhpStorm understand that a function returns an array of instances of a given type.
For example:
$node->findAllOfType(SomeNodeType::class) would return an array of SomeNodeType instances (normally one would use SomeNodeType[] if it were not dynamic).
Attempts to use "#[]" or "#Iterator" instead of "#", in the above code example, failed.
That's simply not supported ... unless you try to list all possible types manually instead of using a single generic rule (which may work in theory -- sorry, I have not tried it myself).
https://youtrack.jetbrains.com/issue/WI-27832 -- watch that and related tickets (star/vote/comment) to get notified on any progress.
A practical example works best here, so here goes. I have a JSON file named file.json, as such:
{
"key": "hello"
}
And a JS file in the same directory, as such (Node.js):
'use strict';
// Dependencies
const jsonFile = require('./file.json');
// Globals
const anotherFile = {
key: 'world'
}
const fn = () => {
console.log(jsonFile.key); // this 'key' is not recognized
console.log(anotherFile.key); // this 'key' is recognized
};
Now, that fn function should, and does, print hello and world to the console. My problem is that while WebStorm recognizes the second object (anotherFile) as a JavaScript object, it doesn't do so with the one imported from the JSON file.
That means that anotherFile.key shows "key" in purple (in my color scheme), while the other one is plain white with the underline you get for an "Unresolved variable key".
It also means that if I type anotherFile, WebStorm will show me all available options (functions, properties, etc.) and key will be there, while for the first one, jsonFile, it won't.
Is there any way to force WebStorm to recognize JSON files as JavaScript objects?
Thanks!
Edit: Using WebStorm 11.0.1
Please try upgrading - your code works fine for me in 2016.2:
In Preferences > Editor > File Types you should have this:
But I thought it was like this by default ...
I have a DB containing a large dataset: an array of JSON objects, around ~10k of them for now. I want to fetch them all from the DB, generate a CSV, and download it via a route.
Here's a sample JSON object:
{
"_id" : ObjectId("56bc3a7da30befd952349542"),
"asin" : "B00T2Q1S18",
"searchRank" : 113,
"name" : "FREEing Racing Miku 2014 (EV Mirai Version) Figma Action Figure",
"createdAt" : ISODate("2016-02-11T07:38:37.774Z"),
"updatedAt" : ISODate("2016-02-11T07:44:07.667Z"),
"linkIds" : [
"25b1071a9e908806338c4106"
],
"price" : {
"amazon" : 50.49
},
"ranks" : [
{
"number" : 43619,
"category" : "Baby Toys"
}
],
"upc" : ""
}
Is there a better npm (Node) library that can convert my JSON collection to CSV?
I have tried these, but on a large dataset they aren't working:
papaparse / babyparse
json2csv
Are there any other libraries you know to be better, or any other better approach?
Thanks.
I have done this before using an npm library called csv-builder. Based on my experience, I can say that it gives good performance and is quite easy to implement.
I have generated a CSV of about 200,000 (2 lakh) rows and around 8-10 columns, with manipulation in between, using this library.
I tried many libraries, and at last I found one - a great npm module that handles the large-dataset problem nicely:
https://www.npmjs.com/package/csvwriter
It has exported 500,000+ (5 lakh) JSON objects (for now).
Here is my small demo large-dataset JSON-to-CSV exporter app, built with Node, Express, and MongoDB.
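If a library still chokes on the data size, streaming the documents row by row avoids ever holding the full CSV in memory. Here's a minimal dependency-light sketch (the database, collection, and field names are hypothetical) using the native MongoDB driver's cursor with Express:

// export-csv.js - hypothetical Express route that streams a large
// collection out as CSV, one document at a time
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const fields = ['asin', 'searchRank', 'name']; // example column subset

app.get('/export.csv', async (req, res) => {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const cursor = client.db('mydb').collection('products').find();

  res.setHeader('Content-Type', 'text/csv');
  res.setHeader('Content-Disposition', 'attachment; filename="export.csv"');
  res.write(fields.join(',') + '\n');

  for await (const doc of cursor) {
    // quote every value and escape embedded quotes (RFC 4180)
    const row = fields.map(f => '"' + String(doc[f] ?? '').replace(/"/g, '""') + '"');
    res.write(row.join(',') + '\n');
  }

  await client.close();
  res.end();
});

app.listen(3000);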
Hope this helps others as well when they come over here.
Cheers,
Thanks.