I want to change the type of a field from 'string' to 'date' (format: 'epoch_second', to be specific). Since there is no way to change a field's type in the mapping of an existing index, I need to create a new index, mostly reusing the mapping from my existing one. Here is what I am using:
curl -XGET 'http://localhost:9200/sam/saga/_mapping?pretty' >saga.json
to dump the mapping of the current index into a JSON file, the content of which is this:
{
  "sam" : {
    "mappings" : {
      "saga" : {
        "properties" : {
          "name" : {
            "type" : "long"
          }
        }
      }
    }
  }
}
Then I replace

"name" : {
  "type" : "long"
}

with

"name" : {
  "type" : "date"
}

and save the new file as saga2.json. Then I run this:
curl -XPUT 'http://localhost:9200/sam/_mapping/saga2' -d @saga2.json
However, when I check the mapping of the new index, all types have changed to "string" now.
I even have this problem using Elasticsearch's own example.
Does anyone know what is wrong?
You need to make one more change in your saga2.json file, namely the mapping type name: saga -> saga2. (Since your earlier PUT already created a saga2 type, you will probably need to rename it all to saga3 now.)
{
  "sam" : {
    "mappings" : {
      "saga2" : {            <--- here
        "properties" : {
          "name" : {
            "type" : "date"  <--- and here
          }
        }
      }
    }
  }
}
Only then can you run this:
curl -XPUT 'http://localhost:9200/sam/_mapping/saga2' -d @saga2.json
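Note that the PUT above only adds a second mapping type to the same sam index; it does not copy any documents or change the old saga type. A minimal sketch of the separate-new-index route instead, assuming a version with the _reindex API (Elasticsearch 2.3+) and a hypothetical target index named sam2:

# 1. Create the new index with the corrected mapping up front
curl -XPUT 'http://localhost:9200/sam2' -d '
{
  "mappings" : {
    "saga" : {
      "properties" : {
        "name" : { "type" : "date", "format" : "epoch_second" }
      }
    }
  }
}'

# 2. Copy the documents from the old index into the new one
curl -XPOST 'http://localhost:9200/_reindex' -d '
{
  "source" : { "index" : "sam" },
  "dest" : { "index" : "sam2" }
}'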
I have a JSON object, and I'm trying to find the root elements under it. Can someone please help me figure this out?
{
  "store" : {
    "10162021" : {
      "id" : 812340,
      "properties" : {
        "server" : "server1.example.org",
        "serverip" : ""
      }
    },
    "10162022" : {
      "properties" : {
        "serverip" : "127.0.0.1",
        "server" : "server2.example.org"
      },
      "id" : 859480
    }
  }
}
I need to extract the root elements 10162022, 10162021 based on the server name.
I have tried syntax like the one below, but it was not successful:
$..*..[?(@.server == server2.example.org)]
I will appreciate any suggestions.
It's not clear whether you want to return the keys "10162022", etc, or the values, like:
{
  "properties" : {
    "serverip" : "127.0.0.1",
    "server" : "server2.example.org"
  },
  "id" : 859480
}
If you want to return values, the following JSONPath should work:
$.store[?( @.properties.server == "server2.example.org" )]
If you want to return keys, I'm not entirely sure that's possible. JSONPath isn't really designed to find keys, but values.
If you need the keys, I would suggest pre-processing the structure to stash the keys into objects as values, like this:
{
  "store" : {
    "10162021" : {
      "__key" : "10162021",
      "id" : 812340,
      "properties" : {
        "server" : "server1.example.org",
        "serverip" : ""
      }
    },
    "10162022" : {
      "__key" : "10162022",
      "properties" : {
        "serverip" : "127.0.0.1",
        "server" : "server2.example.org"
      },
      "id" : 859480
    }
  }
}
Then use this JSONPath:
$.store[?( @.properties.server == "server2.example.org" )].__key
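For completeness, a minimal sketch of that pre-processing step in JavaScript (input and __key are just illustrative names):

// Parse the document and copy each key under "store"
// into its own object as a "__key" value
var data = JSON.parse(input);
Object.keys(data.store).forEach(function(key) {
  data.store[key].__key = key;
});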
I have a DB in Firebase with this structure:
{
  "chats" : {
    "-L-hPbTK51XFwjNPjz3X" : {
      "lastMessage" : "Hello!",
      "timestamp" : 1512590440336,
      "title" : "chat 1",
      "users" : {
        "Ol0XhKBksFcrYmF4MzS3vbODvT83" : true
      }
    }
  },
  "messages" : {
    "-L-hPbTK51XFwjNPjz3X" : {
      "-L-szWDIKX2SQl4YZFw9" : {
        "message" : "Hello!",
        "timestamp" : 1512784663447,
        "userId" : "Ol0XhKBksFcrYmF4MzS3vbODvT83"
      }
    }
  },
  "users" : {
    "Ol0XhKBksFcrYmF4MzS3vbODvT83" : {
      "chats" : {
        "-L-hPbTK51XFwjNPjz3X" : true
      },
      "email" : "mm@gmail.com",
      "name" : "mm"
    }
  }
}
My code:
Database.database().reference().child("chats")
    .queryOrdered(byChild: "users/\(userId)").queryEqual(toValue: true).observe(.value, with: { snapshot in ... })
When I try to get chat members or user chats, it shows these warnings:
Using an unspecified index. Your data will be downloaded and filtered on the client. Consider adding ".indexOn": "chats/-L-hPbTK51XFwjNPjz3X" at /users to your security rules for better performance.
Using an unspecified index. Your data will be downloaded and filtered on the client. Consider adding ".indexOn": "users/Ol0XhKBksFcrYmF4MzS3vbODvT83" at /chats to your security rules for better performance.
I found lots of solutions, but none of them works for me. I want to define .indexOn rules in my DB. Can you help me?
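For reference, the warnings are literally asking for index rules like the sketch below (using the user id from the warning). Note that .indexOn cannot contain a wildcard such as users/$uid, so every user would need their own entry; with this data layout the usual workaround is to read the chat ids from /users/$uid/chats and fetch each chat by key instead of querying /chats:

{
  "rules": {
    "chats": {
      ".indexOn": ["users/Ol0XhKBksFcrYmF4MzS3vbODvT83"]
    }
  }
}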
This is my sample data
{
  _id: 123123123,
  author: {
    name: "username"
  },
  data: {
    title: "Hello World"
  }
}
And this is my index command: db.post.createIndex({"data.title":"text"})
But when I execute db.post.find( { $text: { $search: "Hello" } } ) I get nothing back.
What command should I run to index an embedded object in MongoDB?
This: db.post.createIndex({"data.title":"text"}) is the correct command to create a text index on an embedded field.
This: db.post.find( { $text: { $search: "Hello" } } ) is the correct way of engaging the text index to search for the value Hello in the embedded field: data.title.
You are doing everything correctly. To verify this, I took your document, wrote it to a collection, created a text index using the createIndex() command you supplied, and searched for it using the find() command you supplied; the document was returned.
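A minimal sketch of that repro in the mongo shell (insertOne assumes shell 3.2+; the collection name is taken from your commands):

// Write the sample document, index it, then search it
db.post.insertOne({ _id: 123123123, author: { name: "username" }, data: { title: "Hello World" } })
db.post.createIndex({ "data.title": "text" })
db.post.find({ $text: { $search: "Hello" } })  // returns the document above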
So, perhaps the issue is elsewhere. I would suggest that you:
Confirm that the text index was definitely created. You can do this by running db.post.getIndexes(); if the text index is present and covers data.title, you should see something like this in the output of that command:
{
  "v" : 2,
  "key" : {
    "_fts" : "text",
    "_ftsx" : 1
  },
  "name" : "data.title_text",
  "ns" : "<your database name>.post",
  "weights" : {
    "data.title" : 1
  },
  "default_language" : "english",
  "language_override" : "language",
  "textIndexVersion" : 3
}
Confirm that there is definitely a document with data.title containing Hello. You can do this by running a simple find: db.post.find({'data.title': { $regex: /Hello/ } }).
Confirm that this command: db.post.find( { $text: { $search: "Hello" } } ) definitely uses your text index. You can do this by invoking that command with .explain() (e.g. db.post.find( { $text: { $search: "Hello" } } ).explain()) and the output should include something like this:
"inputStage" : {
"stage" : "TEXT_MATCH",
"inputStage" : {
"stage" : "TEXT_OR",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"_fts" : "text",
"_ftsx" : 1
},
"indexName" : "data.title_text",
"isMultiKey" : true,
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "backward",
"indexBounds" : {}
}
}
I am trying to create an index in Elasticsearch for a JSON file in the following bulk format:
{ "index" : { "_index" : "entity", "_type" : "type1", "_id" : "0" } }
{ "eid":"guid of Event autogenerated", "entityInfo": { "entityType":"qualityevent", "defaultLocale":"en-US" }, "systemInfo": { "tenantId":"67" }, "attributesInfo" : { "jobId":"21", "matchStatus": "new" } }
{ "index" : { "_index" : "entity", "_type" : "type1", "_id" : "1" } }
{ "eid":"guid of Event autogenerated", "entityInfo": { "entityType":"qualityevent", "defaultLocale":"en-US" }, "systemInfo": { "tenantId":"67" }, "attributesInfo" : { "jobId":"20", "matchStatus": "existing" } }
I want the fields jobId and tenantId to be integers.
I am giving the following mapping in curl command:
curl -XPUT http://localhost:9200/entity -d '
{
  "mappings": {
    "entityInfo": {
      "properties" : {
        "entityType" : { "type" : "string", "index" : "not_analyzed" },
        "defaultLocale" : { "type" : "string", "index" : "not_analyzed" }
      }
    },
    "systemInfo": {
      "properties" : {
        "tenantId" : { "type" : "integer" }
      }
    },
    "attributesInfo" : {
      "properties" : {
        "jobId" : { "type" : "integer" },
        "matchStatus" : { "type" : "string", "index" : "not_analyzed" }
      }
    }
  }
}
';
This does not give me an error. However, it creates new empty fields jobId and tenantId as integers, while keeping the existing data in attributesInfo.jobId as a string. The same is the case with systemInfo.tenantId. I want to use these two fields in Kibana for visualization; I currently cannot, as they are empty.
I am new to Kibana and Elasticsearch, so I am not sure if the mapping is correct.
I have tried a couple of other mappings as well, but they give errors. The mapping above does not.
This is how the Discover tab looks in Kibana (screenshot omitted).
Please let me know where I am going wrong.
I tried what you mentioned, but it didn't help. What I realised after a lot of trial and error was that my mapping was incorrect. I finally wrote the correct mapping and now it works: jobId and tenantId are recognised as numbers by Kibana. I am new to JSON, Kibana, bulk indexing and Elasticsearch, so it took time to understand how mapping works.
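The post doesn't show the final mapping, but given that the bulk lines use _type type1, the likely fix is to nest the three objects under that type's properties instead of declaring them as separate mapping types. A sketch:

curl -XPUT http://localhost:9200/entity -d '
{
  "mappings": {
    "type1": {
      "properties": {
        "entityInfo": {
          "properties": {
            "entityType": { "type": "string", "index": "not_analyzed" },
            "defaultLocale": { "type": "string", "index": "not_analyzed" }
          }
        },
        "systemInfo": {
          "properties": {
            "tenantId": { "type": "integer" }
          }
        },
        "attributesInfo": {
          "properties": {
            "jobId": { "type": "integer" },
            "matchStatus": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}
'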
I'm pretty new to the NoSQL world and I'd like to try the "geoNear" (geospatial) feature in MongoDB, so I imported some data in this form:
{
  "_id" : ObjectId("549164b752c5c30b15bbc26a"),
  "ville" : "Auenheim",
  "lat" : "48,81",
  "lon" : "8,01"
}
and I need to update my whole collection to this form:
{
  "_id" : ObjectId("549164b752c5c30b15bbc26a"),
  "ville" : "Auenheim",
  "loc" : { "type" : "Point", "coordinates" : [ 8.01, 48.81 ] }
}
Is there a way to do that with a Mongo update query, or should I use a PHP script (the collection is huge)?
Thanks for the help,
happy
You can iterate through each document and change the format with a simple script. In the mongo shell, you would write something like
db.test.find({}, { "lat" : 1, "lon" : 1 }).forEach(function(doc) {
db.test.update({ "_id" : doc._id },
{
"$unset" : { "lat" : 1, "lon" : 1 },
"$set" : { "loc" : { "type" : "Point", "coordinates" : [ doc.lon, doc.lat ] } }
})
})
You need to change your lat and lon to numbers as well. I'm not sure if that was a typo or what, but you can do that as part of the function too, if need be. To make this faster, you can use a parallel collection scan, which is supported in most drivers, to process all the documents using multiple threads.
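For instance, a sketch of the same loop with the string-to-number conversion folded in (assuming the comma in "48,81" is a decimal separator):

db.test.find({}, { "lat" : 1, "lon" : 1 }).forEach(function(doc) {
    // "48,81" -> 48.81: swap the decimal comma for a dot, then parse
    var lat = parseFloat(doc.lat.replace(",", "."));
    var lon = parseFloat(doc.lon.replace(",", "."));
    db.test.update({ "_id" : doc._id },
    {
        "$unset" : { "lat" : 1, "lon" : 1 },
        // GeoJSON expects coordinates in [longitude, latitude] order
        "$set" : { "loc" : { "type" : "Point", "coordinates" : [ lon, lat ] } }
    })
})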