I get an Assertion: 10340:Failure parsing JSON string error when running mongoimport on a pipe from the GitHub API, like the following:
lsoave@ubuntu:~/rails/github/gitwatcher$ curl https://api.github.com/users/lgs/repos | mongoimport -h localhost -d gitwatch_dev -c repo -f repositories
connected to: localhost
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0Mon Jun 20 00:56:01 Assertion: 10340:Failure parsing JSON string near: [
100 22303 100 22303 0 0 31104 0 --:--:-- --:--:-- --:--:-- 111k
0x816d8a1 0x8118814 0x84b357a 0x84b5bb8 0x84adc65 0x84b2ee1 0x60bbd6 0x80f5bc1
mongoimport(_ZN5mongo11msgassertedEiPKc+0x221) [0x816d8a1]
mongoimport(_ZN5mongo8fromjsonEPKcPi+0x3b4) [0x8118814]
mongoimport(_ZN6Import9parseLineEPc+0x7a) [0x84b357a]
mongoimport(_ZN6Import3runEv+0x1a98) [0x84b5bb8]
mongoimport(_ZN5mongo4Tool4mainEiPPc+0x1ce5) [0x84adc65]
mongoimport(main+0x51) [0x84b2ee1]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0x60bbd6]
mongoimport(__gxx_personality_v0+0x3f1) [0x80f5bc1]
exception:Failure parsing JSON string near: [
[
...
...
Mon Jun 20 00:45:20 Assertion: 10340:Failure parsing JSON string near: "name": "t
0x816d8a1 0x8118814 0x84b357a 0x84b5bb8 0x84adc65 0x84b2ee1 0x126bd6 0x80f5bc1
mongoimport(_ZN5mongo11msgassertedEiPKc+0x221) [0x816d8a1]
mongoimport(_ZN5mongo8fromjsonEPKcPi+0x3b4) [0x8118814]
mongoimport(_ZN6Import9parseLineEPc+0x7a) [0x84b357a]
mongoimport(_ZN6Import3runEv+0x1a98) [0x84b5bb8]
mongoimport(_ZN5mongo4Tool4mainEiPPc+0x1ce5) [0x84adc65]
mongoimport(main+0x51) [0x84b2ee1]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0x126bd6]
mongoimport(__gxx_personality_v0+0x3f1) [0x80f5bc1]
exception:Failure parsing JSON string near: "name": "t
"name": "tentacles"
...
...
See the full trace here: http://pastie.org/2093486. Anyway, the JSON I get back from the GitHub API seems fine (curl https://api.github.com/users/lgs/repos):
[
{
"open_issues": 0,
"watchers": 3,
"homepage": "http://scrubyt.org",
"language": null,
"forks": 1,
"pushed_at": "2009-02-25T22:49:08Z",
"created_at": "2009-02-25T22:22:40Z",
"fork": true,
"url": "https://api.github.com/repos/lgs/scrubyt",
"private": false,
"size": 188,
"description": "A simple to learn and use, yet powerful web scraping toolkit!",
"owner": {
"avatar_url": "https://secure.gravatar.com/avatar/9c7d80ebc20ab8994e51b9f7518909ae?d=https://a248.e.akamai.net/assets.github.com%2Fimages%2Fgravatars%2
Fgravatar-140.png",
"login": "lgs",
"url": "https://api.github.com/users/lgs",
"id": 1573
},
"name": "scrubyt",
"html_url": "https://github.com/lgs/scrubyt"
},
...
...
]
Here is a snippet: http://www.pastie.org/2093524.
If I try specifying CSV format, it works:
lsoave@ubuntu:~/rails/github/gitwatcher$ curl https://api.github.com/users/lgs/repos | mongoimport -h localhost -d gitwatch_dev -c repo -f repositories --type csv
connected to: localhost
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 22303 100 22303 0 0 23914 0 --:--:-- --:--:-- --:--:-- 106k
imported 640 objects
lsoave@ubuntu:~/rails/github/gitwatcher$
It worked for me by using "mongoimport --jsonArray ..."
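For reference, a minimal sketch of the full pipeline with that flag, reusing the host, database and collection from the question:
curl https://api.github.com/users/lgs/repos | mongoimport -h localhost -d gitwatch_dev -c repo --jsonArray
--jsonArray tells mongoimport to treat the input as one JSON array rather than one document per line.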
Alright, here is what could be going on. First, I removed all the newlines in the JSON to reduce the number of errors from n (where n = number of lines) to 1. Then it turned out that I had to wrap the JSON array in another variable, and it worked thereafter. I think mongoimport is designed to work with mongoexport, so most likely you cannot use it to import arbitrary JSON. However, if you want to, what I did is something you'd have to do in code before calling the import utility.
I used only 1 record while I was testing. Here is the record with no newlines.
[{"url":"https://api.github.com/repos/lgs/scrubyt", "pushed_at": "2009-02-25T22:49:08Z","homepage": "http://scrubyt.org", "forks": 1,"language": null,"fork": true,"html_url": "https://github.com/lgs/scrubyt","created_at": "2009-02-25T22:22:40Z", "open_issues": 0,"private": false,"size": 188,"watchers": 3,"owner": {"url": "https://api.github.com/users/lgs","login": "lgs","id": 1573,"avatar_url": "https://secure.gravatar.com/avatar/9c7d80ebc20ab8994e51b9f7518909ae?d=https://a248.e.akamai.net/assets.github.com%2Fimages%2Fgravatars%2Fgravatar-140.png"},"name": "scrubyt","description": "A simple to learn and use, yet powerful web scraping toolkit!"}]
Then I wrapped it with somedata (you can use any name here):
{somedata:[{"url":"https://api.github.com/repos/lgs/scrubyt", "pushed_at": "2009-02-25T22:49:08Z","homepage": "http://scrubyt.org", "forks": 1,"language": null,"fork": true,"html_url": "https://github.com/lgs/scrubyt","created_at": "2009-02-25T22:22:40Z", "open_issues": 0,"private": false,"size": 188,"watchers": 3,"owner": {"url": "https://api.github.com/users/lgs","login": "lgs","id": 1573,"avatar_url": "https://secure.gravatar.com/avatar/9c7d80ebc20ab8994e51b9f7518909ae?d=https://a248.e.akamai.net/assets.github.com%2Fimages%2Fgravatars%2Fgravatar-140.png"},"name": "scrubyt","description": "A simple to learn and use, yet powerful web scraping toolkit!"}]}
And I was able to see the record in Mongo.
> db.repo.findOne()
{
"_id" : ObjectId("4dff91d29c73f72483e82ef2"),
"somedata" : [
{
"url" : "https://api.github.com/repos/lgs/scrubyt",
"pushed_at" : "2009-02-25T22:49:08Z",
"homepage" : "http://scrubyt.org",
"forks" : 1,
"language" : null,
"fork" : true,
"html_url" : "https://github.com/lgs/scrubyt",
"created_at" : "2009-02-25T22:22:40Z",
"open_issues" : 0,
"private" : false,
"size" : 188,
"watchers" : 3,
"owner" : {
"url" : "https://api.github.com/users/lgs",
"login" : "lgs",
"id" : 1573,
"avatar_url" : "https://secure.gravatar.com/avatar/9c7d80ebc20ab8994e51b9f7518909ae?d=https://a248.e.akamai.net/assets.github.com%2Fimages%2Fgravatars%2Fgravatar-140.png"
},
"name" : "scrubyt",
"description" : "A simple to learn and use, yet powerful web scraping toolkit!"
}
]
}
Hope this helps!
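As an alternative to wrapping the array, a minimal sketch assuming jq is installed: flatten the array into one document per line, which is what mongoimport expects by default:
curl https://api.github.com/users/lgs/repos | jq -c '.[]' | mongoimport -h localhost -d gitwatch_dev -c repo
jq -c '.[]' emits each array element as compact, single-line JSON, so no wrapping or newline stripping is needed.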
This worked fine for me after I removed every '\n'. You can use tr on Linux; write the result to a new file, since redirecting back onto the same file would truncate it before it is read:
cat file.json | tr -d '\n' > file.oneline.json
Using both the answers provided by @Daniel and @lobster1234, I created a script which I use to import the JSON entries into mongo.
#!/bin/sh
# usage: pass a .json file; the script strips newlines and imports it
if [ -z "$1" ]; then
    echo "missing argument"
    exit 1
fi
FILE=${1%%.json}
echo "$FILE"
# strip newlines so the whole array is on one line, then import it as a JSON array
tr -d '\n' < "$FILE.json" > "$FILE.import.json"
mongoimport --collection collection --db main --file "$FILE.import.json" --jsonArray --upsert
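It can be invoked like this (the script and data file names are just placeholders):
chmod +x import.sh
./import.sh repos.json   # reads repos.json, writes repos.import.json, then imports it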
Related
We are using a server software offering called FreezerPro (https://www.freezerpro.com/product-tour) with an API that can be called programmatically. There are simple methods like freezers that work with curl calls like this:
freezers -- Retrieve a list of freezers
Returned objects: Freezers
Required parameters: None
Optional query parameters: None
Optional control parameters: None
curl -g --insecure 'https://username:password@demo-usa.freezerpro.com/api?method=freezers' | jq . | head -n 12
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8697 0 8697 0 0 15980 0 --:--:-- --:--:-- --:--:-- 15987
{
"Freezers": [
{
"rfid_tag": "355AB1CBC00000700000075A",
"barcode_tag": "7000001882",
"boxes": 0,
"subdivisions": 1,
"access": 0,
"description": "[1000000000]",
"name": "[1000000000]",
"id": 1882
},
Then there is a search_samples method that searches for any fields in samples given a query. E.g.:
search_samples -- search for samples:
Returned objects: Samples
Required parameters: None
Optional query parameters:
query = <filter text> optional search string to filter the results.
Optional control parameters:
start = <starting record>
limit = <limit number of records to retrieve>
sort = <sort_field>
dir = <ASC / DESC>
curl -g --insecure 'https://username:password@demo-usa.freezerpro.com/api?method=search_samples&query=111222333' | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 347 0 347 0 0 977 0 --:--:-- --:--:-- --:--:-- 977
{
"Samples": [
{
"created_at": "06/11/2018",
"owner_id": 45,
"owner": "<span ext:qtip=\"username\">username</span>",
"description": "test",
"sample_id": 53087,
"id": 53087,
"loc_id": 54018,
"type": "cfDNA",
"scount": 1,
"name": "123456AB",
"location": "ER111→Level 1→Level 2→test001 (1)",
"icon": "images/box40/i53.png"
}
],
"Total": 1
}
So far so good. The problem comes when trying to run the advanced_search query, which takes an array of hashes in the query section. Given the sample above, which has a udf called patient_id with value 111222333, an advanced_search query for udf patient_id with value=111222333 should return something, but it just gives a blank result:
Example command:
curl -g --insecure 'https://username:password@demo-usa.freezerpro.com/api?method=advanced_search&subject_type=Sample&query=[{type="udf",field="patient_id",value=111222333}]'
I am using:
curl --version
curl 7.35.0 (x86_64-pc-linux-gnu) libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
Is this something to do with the way curl interprets/passes the query section of the URL?
Any ideas on how to construct the query? Is this a curl-specific issue?
EDIT: I tried curl's --data-urlencode; it complains about the query not being set up:
curl -g -G --insecure 'https://username:password@demo-usa.freezerpro.com/api' --data-urlencode 'method=advanced_search' --data-urlencode 'query=[{type="udf",field="patient_id",value=111222333}]'
{"error":true,"message":"Query or search conditions must be specified","success":false}
You must URL-encode the values of your URL parameters, e.g.
curl -g --insecure 'https://username:password@demo-usa.freezerpro.com/api?method=advanced_search&subject_type=Sample&query=%5B%7Btype%3D%22udf%22%2Cfield%3D%22patient_id%22%2Cvalue%3D111222333%7D%5D'
Also, please run curl with the -v parameter to make it verbose, so we at least know what HTTP status is returned.
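If you don't want to hand-encode that by eye, a small sketch that produces the percent-encoded string with jq (the raw query text is copied verbatim from the question):
jq -rn --arg q '[{type="udf",field="patient_id",value=111222333}]' '$q|@uri'
The output can then be pasted into the query= parameter of the URL.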
I've found a solution using the --data flag together with the -k flag:
curl -k --header "Content-Type: application/json" --request GET --data '{"username":"user", "password":"password", "method":"advanced_search", "query":[{"type":"udf","field":"patient_id","op":"eq","value":"111222333"}], "udfs":["patient_id","other"]}' https://demo-usa.freezerpro.com/api | jq .
I have some logs that output information in JSON. This is for collection into Elasticsearch.
Some testers and operations people want to be able to read logs on the servers.
Here is some example JSON:
{
"#timestamp": "2015-09-22T10:54:35.449+02:00",
"#version": 1,
"HOSTNAME": "server1.example",
"level": "WARN",
"level_value": 30000,
"logger_name": "server1.example.adapter",
"message": "message"
"stack_trace": "ERROR LALALLA\nERROR INFO NANANAN\nSOME MORE ERROR INFO\nBABABABABABBA BABABABA ABABBABAA BABABABAB\n"
}
And so on.
Is it possible to make jq print a real newline instead of the \n character sequence seen in the value of .stack_trace?
Sure! Using the -r option, jq will print string contents directly to the terminal instead of as JSON escaped strings.
jq -r '.stack_trace'
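For example, to print a few fields from the sample record together with the expanded stack trace (the log file name here is only a placeholder):
jq -r '"\(.level) \(.logger_name): \(.message)\n\(.stack_trace)"' server1.log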
Unless you're constrained to using jq only, you can "fix" (or actually "un-JSON-ify") the jq output with sed:
cat the-input | jq . | sed 's/\\n/\n/g'
If you happen to have tabs in the input as well (\t in JSON), then:
cat the-input | jq . | sed 's/\\n/\n/g; s/\\t/\t/g'
This is especially handy if your stack_trace was generated by Java (you didn't say what the source of the logs is), since Java stack-trace lines begin with <tab>at<space>.
Warning: naturally, this is not strictly correct: JSON input containing \\n (a literal backslash followed by n) will come out with a stray newline, when it should come out as the literal two characters \n. While not correct, it's certainly sufficient for humans peeking at the data. The sed patterns can be improved to take care of this (at the cost of readability).
The input as originally given isn't quite valid JSON, and it's not clear precisely what the desired output is, but the following might be of interest. It is written for the current version of jq (version 1.5) but could easily be adapted for jq 1.4:
def json2qjson:
def pp: if type == "string" then "\"\(.)\"" else . end;
. as $in
| foreach keys[] as $k (null; null; "\"\($k)\": \($in[$k] | pp)" ) ;
def data: {
"#timestamp": "2015-09-22T10:54:35.449+02:00",
"#version": 1,
"HOSTNAME": "server1.example",
"level": "WARN",
"level_value": 30000,
"logger_name": "server1.example.adapter",
"message": "message",
"stack_trace": "ERROR LALALLA\nERROR INFO NANANAN\nSOME MORE ERROR INFO\nBABABABABABBA BABABABA ABABBABAA BABABABAB\n"
};
data | json2qjson
Output:
$ jq -rnf json2qjson.jq
"#timestamp": "2015-09-22T10:54:35.449+02:00"
"#version": 1
"HOSTNAME": "server1.example"
"level": "WARN"
"level_value": 30000
"logger_name": "server1.example.adapter"
"message": "message"
"stack_trace": "ERROR LALALLA
ERROR INFO NANANAN
SOME MORE ERROR INFO
BABABABABABBA BABABABA ABABBABAA BABABABAB
"
While working with JSON on Windows is very easy, on Linux I'm running into trouble.
I found a way to convert a list into JSON using jq:
For example:
ls | jq -R -s -c 'split("\n")'
output:
["bin","boot","dev","etc","home","lib","lib64","media","mnt","opt","proc","root","run","sbin","srv","sys","tmp","usr","var"]
I'm having trouble converting a table into JSON.
I'm looking for a way to convert a table produced by a bash command into JSON. I have already searched for many tools, but none of them are generic; you have to adjust the commands for each different table.
Do you know how I can convert a table that I get from a bash command into JSON in a generic way?
table output for example:
rpm -qai
output:
Name : gnome-session
Version : 3.8.4
Release : 11.el7
Architecture: x86_64
Install Date: Mon 21 Dec 2015 04:12:41 PM EST
Group : User Interface/Desktops
Size : 1898331
License : GPLv2+
Signature : RSA/SHA256, Thu 03 Jul 2014 09:39:10 PM EDT,
Key ID 24c6a8a7f4a80eb5
Source RPM : gnome-session-3.8.4-11.el7.src.rpm
Build Date : Mon 09 Jun 2014 09:12:26 PM EDT
Build Host : worker1.bsys.centos.org
Relocations : (not relocatable)
Packager : CentOS BuildSystem <http://bugs.centos.org>
Vendor : CentOS
URL : http://www.gnome.org
Summary : GNOME session manager
Description : nome-session manages a GNOME desktop or GDM login session. It starts up the other core GNOME components and handles logout and saving the
session.
Thanks!
There are too many poorly-specified textual formats to create a single tool for what you are asking for, but Unix is well-equipped for the task. Usually, you would create a simple shell or Awk script to convert from one container format to another. Here's one example:
printf '"%s", ' * | sed 's/, $//;s/.*/[ & ]/'
The printf will produce a comma-separated, double-quoted list of wildcard matches. The sed will trim the final comma and add a pair of square brackets around the entire output. The results will be incorrect if a file name contains a double quote, for example, but in the name of simplicity, let's not embellish this any further.
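If jq is available, it takes care of the quoting and escaping for you; a rough equivalent of the printf/sed pipeline would be:
printf '%s\n' * | jq -Rn '[inputs]'
Names containing double quotes or backslashes come out correctly escaped; only names containing newlines would still be a problem.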
Here's another:
rpm -qai | awk -F ' *: ' 'BEGIN { print "{\n"; }
{ printf "%s\"%s\": \"%s\"", delim, $1, substr($0, 15); delim="\n," }
END { print "\n}"; }'
The -qf output format is probably better but this shows how you can extract fields from a reasonably free-form line-oriented format using a simple Awk script. The first field before the colon is extracted as the key, and everything from the 15th column onwards is extracted as the value. Again, we ignore the possible complications (double quotes in the values would need to be escaped, again, for example) to keep the example simple.
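As a sketch of that idea (the tags are standard rpm query tags, though this particular field selection is only an example), --queryformat can emit one JSON object per package directly, and jq can then collect them into an array:
rpm -qa --queryformat '{"name":"%{NAME}","version":"%{VERSION}","release":"%{RELEASE}","arch":"%{ARCH}","size":%{SIZE}}\n' | jq -s .
Fields whose values may contain double quotes (Summary, Description) would need escaping before being embedded this way.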
If your needs are serious, you will need to spend more time on creating a robust parser; but then, you will usually want to work with tools which have a well-defined output format in the first place (XML, JSON, etc) and spend as little time as possible on ad-hoc parsers. Unfortunately, there is still a plethora of tools out there which do not support an --xml or --json output option out of the box, but JSON support is fortunately becoming more widely supported.
You can convert a table from a bash command into JSON using jq.
This command will return a detailed report on the system’s disk space usage
df -h
The output is something like this
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk3s1s1 926Gi 20Gi 803Gi 3% 502068 4293294021 0% /
devfs 205Ki 205Ki 0Bi 100% 710 0 100% /dev
/dev/disk3s6 926Gi 7.0Gi 803Gi 1% 7 8418661400 0% /System/Volumes/VM
/dev/disk3s2 926Gi 857Mi 803Gi 1% 1811 8418661400 0% /System/Volumes/Preboot
/dev/disk3s4 926Gi 623Mi 803Gi 1% 267 8418661400 0% /System/Volumes/Update
Now we can convert the output of this command into json with jq
command=$(df -h | tr -s ' ' | jq -c -Rn 'input | split(" ") as $head | inputs | split(" ") | to_entries | map(.key = $head[.key]) | from_entries')
echo "$command" | jq
{
"Filesystem": "/dev/disk3s1s1",
"Size": "926Gi",
"Used": "20Gi",
"Avail": "803Gi",
"Capacity": "3%",
"iused": "502068",
"ifree": "4293294021",
"%iused": "0%",
"Mounted": "/"
}
{
"Filesystem": "devfs",
"Size": "205Ki",
"Used": "205Ki",
"Avail": "0Bi",
"Capacity": "100%",
"iused": "710",
"ifree": "0",
"%iused": "100%",
"Mounted": "/dev"
}
{
"Filesystem": "/dev/disk3s6",
"Size": "926Gi",
"Used": "7.0Gi",
"Avail": "803Gi",
"Capacity": "1%",
"iused": "7",
"ifree": "8418536520",
"%iused": "0%",
"Mounted": "/System/Volumes/VM"
}
{
"Filesystem": "/dev/disk3s2",
"Size": "926Gi",
"Used": "857Mi",
"Avail": "803Gi",
"Capacity": "1%",
"iused": "1811",
"ifree": "8418536520",
"%iused": "0%",
"Mounted": "/System/Volumes/Preboot"
}
I'm trying to export a CSV file list from MongoDB and save the output file to my directory, /home/asaj/. The output file should have the following columns: name, file_name, d_start and d_end.
The query should filter data with status equal to "FU" or "FD", and d_end > Dec. 10, 2012.
In MongoDB, the query works properly. The query below is limited to one result:
> db.Samples.find({ $or : [ { status : 'FU' }, { status : 'FD'} ], d_end : { $gte : ISODate("2012-12-10T00:00:00.000Z") } }, {_id: 0, name: 1, file_name: 1, d_start: 1, d_end: 1}).limit(1).toArray();
[
{
"name" : "sample"
"file_name" : "sample.jpg",
"d_end" : ISODate("2012-12-10T05:1:57.879Z"),
"d_start" : ISODate("2012-12-10T02:31:34.560Z"),
}
]
>
On the CLI, the mongoexport command looks like this:
mongoexport -d maindb -c Samples -f "name, file_name, d_start, d_end" -q "{'\$or' : [ { 'status' : 'FU' }, { 'status' : 'FD'} ] , 'd_end' : { '\$gte' : ISODate("2012-12-10T00:00:00.000Z") } }" --csv -o "/home/asaj/currentlist.csv"
But I always end up with this error:
connected to: 127.0.0.1
Wed Dec 19 16:58:17 Assertion: 10340:Failure parsing JSON string near: , 'd_end
0x5858b2 0x528cb4 0x52902e 0xa9a631 0xa93e4d 0xa97de2 0x31b441ecdd 0x4fd289
mongoexport(_ZN5mongo11msgassertedEiPKc+0x112) [0x5858b2]
mongoexport(_ZN5mongo8fromjsonEPKcPi+0x444) [0x528cb4]
mongoexport(_ZN5mongo8fromjsonERKSs+0xe) [0x52902e]
mongoexport(_ZN6Export3runEv+0x7b1) [0xa9a631]
mongoexport(_ZN5mongo4Tool4mainEiPPc+0x169d) [0xa93e4d]
mongoexport(main+0x32) [0xa97de2]
/lib64/libc.so.6(__libc_start_main+0xfd) [0x31b441ecdd]
mongoexport(__gxx_personality_v0+0x3c9) [0x4fd289]
assertion: 10340 Failure parsing JSON string near: , 'd_end
The error points at ", 'd_end'" in the mongoexport command line. I'm not sure whether it is a JSON syntax error, because the query works in MongoDB.
Please help.
After asking someone who knows MongoDB better than me, we found out that the problem is the
ISODate("2012-12-10T00:00:00.000Z")
We found the answer on this question: mongoexport JSON parsing error
To resolve this error, we first convert the date to a Unix timestamp with strtotime:
php > echo strtotime("12/10/2012");
1355126400
Next, multiply the strtotime result by 1000. The date will look like this:
1355126400000
Lastly, change ISODate("2012-12-10T00:00:00.000Z") to new Date(1355126400000) in the mongoexport command.
Now the mongoexport command looks like this, and it works:
mongoexport -d maindb -c Samples -f "id,file_name,d_start,d_end" -q "{'\$or' : [ { 'status' : 'FU' }, { 'status' : 'FD'} ] , 'd_end' : { '\$gte' : new Date(1355126400000) } }" --csv -o "/home/asaj/listupdate.csv"
Note: remove the spaces between the field names in the -f or --fields option.
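On a Unix shell you could also avoid backslash-escaping the $ altogether by single-quoting the whole query; a sketch of the same command in that style:
mongoexport -d maindb -c Samples -f "id,file_name,d_start,d_end" -q '{"$or": [{"status": "FU"}, {"status": "FD"}], "d_end": {"$gte": new Date(1355126400000)}}' --csv -o "/home/asaj/listupdate.csv"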
I know this has little to do with the question, but the title of this post brought it up in Google, and since I was getting the exact same error, I'll add an answer. Hopefully it helps someone.
My issue was adding a MongoId query for _id to a mongoexport console command on Windows. Here's the error:
Assertion: 10340:Failure parsing JSON string near: _id
The problem ended up being that I needed to wrap the JSON query in double quotes, and the ObjectId had to be in double quotes (not single!), so I had to escape those quotes. Here's the final query that worked, for future reference:
mongoexport -u USERNAME -pPASSWORD -d DATABASE -c COLLECTION
--query "{_id : ObjectId(\"5148894d98981be01e000011\")}"
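On Linux or macOS the same query is simpler, because the outer quotes can be single quotes and nothing needs escaping; a sketch:
mongoexport -u USERNAME -pPASSWORD -d DATABASE -c COLLECTION --query '{_id : ObjectId("5148894d98981be01e000011")}'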