ElasticSearch, specific field not returning - json

I'm stuck with a little elasticsearch problem. I'm new to elasticsearch and don't know why this doesn't work.
curl -XPOST 'http://myhost.nl:9200/my_index/test/_search?pretty=true' -d '{ "fields": ["message"] }'
I don't get any field back. The field "message" does exist and really looks like the example on the elasticsearch site: http://www.elasticsearch.org/guide/reference/api/search/fields.html
Can anybody see what I'm missing?

Your query would have worked if this field were stored. But since it's not stored and is only available as part of the source, you need to specify the full path to it. Try:
curl -XPOST 'http://myhost.nl:9200/my_index/test/_search?pretty=true' -d '{ "fields": ["tweet.message"] }'
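If you would rather keep requesting it as plain "message", the field has to be marked as stored in the mapping. A rough sketch of the old (pre-1.x) syntax, assuming message sits under a tweet object inside type test (as the tweet.message path above suggests); adjust the names to your actual mapping, and note you would typically set this when creating the index and then reindex:
curl -XPUT 'http://myhost.nl:9200/my_index/test/_mapping' -d '
{
  "test": {
    "properties": {
      "tweet": {
        "properties": {
          "message": {"type": "string", "store": "yes"}
        }
      }
    }
  }
}'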

Related

Adding Variable to JSON CURL

I am trying to add a text string to a curl request with JSON. See the $test variable below. When I submit this, the application sees it as the literal string $test.
--data "{"fields": {"project": {"key": "Ticket"},"summary":"Account - Missing Tags","description":"The following AWS assets do not have the minimally required tags................ $test ","issuetype": {"name": "Service"}}}}"
I have tried various methods such as "'$Test'", and that hasn't worked either. Can you help explain how to accomplish this?
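No answer was recorded here, but this is usually a quoting problem: either escape the inner double quotes inside a double-quoted --data string, or keep the JSON single-quoted and splice the shell variable in between quotes. A rough sketch of the second approach (the value of test and the URL are placeholders, not from the question):
test="i-0abc123"   # placeholder value
curl -X POST -H "Content-Type: application/json" \
     --data '{"fields": {"project": {"key": "Ticket"}, "summary": "Account - Missing Tags", "description": "The following AWS assets do not have the minimally required tags: '"$test"'", "issuetype": {"name": "Service"}}}' \
     "$JIRA_URL"   # placeholder URL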

How to write a correct mongodb query for mongodump?

I'm trying to back up 3 articles from my database. I have their IDs, but when I try to use mongodump I just can't seem to write the proper JSON query: I get either a JSON error message or a cryptic "cannot decode objectID into a slice" message.
Here's the command that I'm trying to run at the moment:
mongodump -d 'data' -c 'articles' -q '{"$oid": "5fa0bd32f7d5870029c7d421" }'
This is returning the "cannot decode ObjectID into a slice" error, which I don't really understand. I also tried with ObjectId, like this:
mongodump -d 'data' -c 'articles' -q '{"_id": ObjectId("5fa0bd32f7d5870029c7d421") }'
But this one gives me an invalid JSON error.
I've tried all forms of escaping, escaping the double quotes, escaping the dollar, but nothing NOTHING seems to work. I'm desperate, and I hate mongodb. The closest I've been able to get to a working solution was this:
mongodump -d 'nikkei' -c 'articles' -q '{"_id": "ObjectId(5fa0bd32f7d5870029c7d421)" }'
And I say closest because this didn't fail; the command ran, but it returned done dumping data.articles (0 documents), which means, if I understood correctly, that no articles were saved.
What would be the correct format for the query? I'm using mongodump version r4.2.2 by the way.
I have a collection with these 4 documents:
> db.test.find()
{ "_id" : ObjectId("5fab80615397db06f00503c3") }
{ "_id" : ObjectId("5fab80635397db06f00503c4") }
{ "_id" : ObjectId("5fab80645397db06f00503c5") }
{ "_id" : ObjectId("5fab80645397db06f00503c6") }
I make the binary export using mongodump. This is using MongoDB v4.2 on Windows OS.
>> mongodump --db=test --collection=test --query="{ \"_id\": { \"$eq\" : { \"$oid\": \"5fab80615397db06f00503c3\" } } }"
2020-11-11T11:42:13.705+0530 writing test.test to dump\test\test.bson
2020-11-11T11:42:13.737+0530 done dumping test.test (1 document)
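Applied to the database and collection names from the question (and using single quotes, which avoid the escaping on Linux/macOS), the same extended-JSON query would look something like:
mongodump -d 'data' -c 'articles' -q '{"_id": {"$oid": "5fa0bd32f7d5870029c7d421"}}'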
Here's an answer for those using Python:
Note: you must have the MongoDB Database Tools installed on your system.
import json
import os
# insert your query here (as extended JSON, so an ObjectId is wrapped in "$oid")
query = {"_id": {"$oid": "5fa0bd32f7d5870029c7d421"}}
# cast the query to a string
query = json.dumps(query)
# run the mongodump
command = f"mongodump --db my_database --collection my_collection --query '{query}'"
os.system(command)
If your query is for JSON, then try this format:
mongodump -d=nikkei -c=articles -q '{"_id": "ObjectId(5fa0bd32f7d5870029c7d421)" }'
Is there nothing else you could query on, though, like a title? That might make things a little simpler.
I pulled this from the MongoDB docs. It was pretty far down the page, but here is the link:
https://docs.mongodb.com/database-tools/mongodump/#usage-in-backup-strategy

Passing key value from json in CURL request (command line)

I am using a REST API to move a user into a group, but need to append the group name to the URL when I run the command.
So the path to my REST API is:
http://server:8080/rest/api/2/group/user?groupname
And it's expecting "groupname" to be passed as groupname=Name%20Of%20Group, i.e.
http://server:8080/rest/api/2/group/user?groupname=Name%20Of%20Group
The full command I'm running on Windows is
curl -u name:pass -X POST --data #add_user.txt -H "Content-Type: application/json" http://server:8080/rest/api/2/group/user?groupname
add_user.txt is structured like this
{
    "name": "tim",
    "groupname": "MY%20TEST%20GROUP"
},
{
    "name": "carol",
    "groupname": "MY%20TEST%20GROUP"
}
It's looping through the names I believe, but I want CURL to pick up on each "groupname" defined in the file.
Any ideas on how I can do this?
Each curl call makes a single request.
It looks like you need to change from POST to GET and use the -G, --get option to add the values as parameters after the ?.
See How do I use cURL to perform multiple simultaneous requests?
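If the goal is one request per entry in add_user.txt, another option is to loop in the shell. A rough sketch, assuming jq is available and the file is reshaped into a valid JSON array ([{...}, {...}]):
jq -c '.[]' add_user.txt | while read -r entry; do
  group=$(jq -r '.groupname' <<< "$entry")
  curl -u name:pass -X POST -H "Content-Type: application/json" \
       --data "$entry" \
       "http://server:8080/rest/api/2/group/user?groupname=$group"
done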
HTH

Range query in ElasticSearch (GET without body)

So, a very basic question about elasticsearch which the docs don't answer very clearly (because they seem to go into many advanced details but miss the basic ones!).
Example: range query
http://www.elasticsearch.org/guide/reference/query-dsl/range-query.html
It doesn't tell you how to PERFORM the range query. Is it via the search endpoint?
And if it is, how do I do it via the querystring? I mean, I want to do a GET, not a POST (because it's a query, not an insertion/modification). However, the documentation for GET requests doesn't tell you how to use JSON like in the Range sample:
http://www.elasticsearch.org/guide/reference/api/search/uri-request.html
What am I missing?
Thanks
Use the Lucene query syntax:
curl -X GET 'http://localhost:9200/my_index/_search?q=my_field:[0+TO+25]&pretty'
Let's assume we have an index
curl -XPUT localhost:9200/test
And some documents
curl -XPUT localhost:9200/test/range/1 -d '{"age": 9}'
curl -XPUT localhost:9200/test/range/2 -d '{"age": 12}'
curl -XPUT localhost:9200/test/range/3 -d '{"age": 16}'
Now we can query these documents within a certain range via
curl -XGET 'http://localhost:9200/test/range/_search?pretty=true' -d '
{
    "query" : {
        "range" : {
            "age" : {
                "from" : "10",
                "to" : "20",
                "include_lower" : true,
                "include_upper": true
            }
        }
    }
}
'
This will return the documents 2 and 3.
I'm not sure if there is a way to perform these kinds of complex queries via a URI request, though.
Edit: Thanks to karmi, here is the solution without a JSON request body:
curl -XGET --globoff 'localhost:9200/test/range/_search?q=age:["10"+TO+"20"]&pretty=true'
Replying to myself, thanks to @javanna:
In the Request Body section of the Search docs:
http://www.elasticsearch.org/guide/reference/api/search/request-body.html
At the end, it says:
The rest of the search request should be passed within the body itself. The body content can also be passed as a REST parameter named source.
So I guess that I need to use the search endpoint with the source parameter to pass the JSON.
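Untested, but that would look something like the following, reusing the test/range example from above (the JSON may need to be URL-encoded depending on your client):
curl -XGET --globoff 'localhost:9200/test/range/_search?source={"query":{"range":{"age":{"from":10,"to":20}}}}&pretty=true'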

curl response / github oauth

I'm trying to use OAuth authentication for github in a bash script.
this:
curl -u $USER_NAME --silent -d '{"scopes":["repo"]}' \
https://api.github.com/authorizations
works and as a result I get a response like this:
"created_at": "2012-09-03T13:02:30Z",
"token": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"updated_at": "2012-09-03T13:02:30Z",
"note_url": null,
"note": null,
"url": "https://api.github.com/authorizations/620793",
"app": {
"url": "http://developer.github.com/v3/oauth/#oauth-authorizations-api",
"name": "GitHub API"
},
"id": 620793,
"scopes": [
"repo"
]
But I need to keep the value of "token" in a variable for future use. How could I do that?
You could use sed and grep to get what you want:
curl -u $USER_NAME --silent -d '{"scopes":["repo"]}' "https://api.github.com/authorizations" | grep '"token":' | sed 's/.*:\s\+"\([^"]\+\).*/\1/g'
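To actually keep it in a variable, as asked, you can wrap that same pipeline in command substitution:
TOKEN=$(curl -u $USER_NAME --silent -d '{"scopes":["repo"]}' "https://api.github.com/authorizations" | grep '"token":' | sed 's/.*:\s\+"\([^"]\+\).*/\1/g')
echo "$TOKEN"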
I tried to set up the suggested tool, jsawk, to post another alternative here, but jsawk needs spidermonkey-bin, and you can't get spidermonkey-bin in Ubuntu anymore; the PPA that had it packaged doesn't have it either, so now you need to compile from source. Too much trouble for a slightly cleaner solution.
I did it like this, to get certain data from a different json response:
yourcodehere | grep token | awk '{ print $2 }' | sed s/\"//g | sed s/,//g
Similar to what Onilton did, but this just uses awk to get the second item from that line, and sed to strip the quotes and comma out.
There is at least one JSON library (probably more than one) for bash, too, I believe, that could parse out the value you want, but I haven't looked into it.
I'm not saying this is any better than Onilton's method, but just another option.
It works. If I had better awk fu, I could probably do this all with awk, and not need sed, but I don't.
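For what it's worth, one awk-only variant would be to split on the double quotes and take the fourth field, something like:
yourcodehere | awk -F'"' '/"token":/ { print $4 }'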