Mapping definition for [suggest] has unsupported parameters: [payloads : true]

I am using an example right from the Elasticsearch documentation for the Completion Suggester, but I am getting an error saying payloads: true is an unsupported parameter. It should obviously be supported, unless the docs are wrong. I have the latest Elasticsearch install (5.3.0).
Here is my cURL:
curl -X PUT localhost:9200/search/pages/_mapping -d '{
  "pages" : {
    "properties" : {
      "title" : {
        "type" : "string"
      },
      "suggest" : {
        "type" : "completion",
        "analyzer" : "simple",
        "search_analyzer" : "simple",
        "payloads" : true
      }
    }
  }
}'
And the error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "Mapping definition for [suggest] has unsupported parameters: [payloads : true]"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "Mapping definition for [suggest] has unsupported parameters: [payloads : true]"
  },
  "status" : 400
}

The payload parameter has been removed in Elasticsearch 5.x by the following commit: Remove payload option from completion suggester. Here is the commit message:
The payload option was introduced with the new completion
suggester implementation in v5, as a stop gap solution
to return additional metadata with suggestions.
Now we can return associated documents with suggestions
(#19536) through fetch phase using stored field (_source).
The additional fetch phase ensures that we only fetch
the _source for the global top-N suggestions instead of
fetching _source of top results for each shard.
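Given that, the mapping from the question should go through on 5.3.0 once the payloads line is dropped; any metadata that previously went into a payload can instead be stored in the documents themselves and read back from _source:

```shell
curl -X PUT localhost:9200/search/pages/_mapping -d '{
  "pages" : {
    "properties" : {
      "title" : {
        "type" : "string"
      },
      "suggest" : {
        "type" : "completion",
        "analyzer" : "simple",
        "search_analyzer" : "simple"
      }
    }
  }
}'
```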

How to get all errors from lua json schema validation

I am able to work with Lua JSON schema validators like ljsonschema and rapidjson, but I noticed that none of them report all the errors; they abort on the first one.
Is it possible to get the complete list of errors if the input JSON has more than one validation issue?
For example, for a schema like
{
  "type" : "object",
  "properties" : {
    "foo" : { "type" : "string" },
    "bar" : { "type" : "number" }
  }
}
The sample JSON { "foo": 12, "bar": "42" } should give 2 errors. However, I get only one: property "foo" validation failed: wrong type: expected string, got number.
How can I get both of the errors below
property "foo" validation failed: wrong type: expected string, got number
property "bar" validation failed: wrong type: expected number, got string
in the same run?
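None of the Lua validators I have tried expose this, but the underlying pattern is simple: accumulate failures in a list instead of raising on the first one. Below is a minimal sketch of that pattern (in Python rather than Lua, and covering only the "type" keyword that the schema above uses):

```python
# Minimal illustration of the "collect all errors" pattern that the
# abort-on-first-error validators lack: walk the schema and accumulate
# every failure instead of stopping at the first one.

TYPE_CHECKS = {
    "string": str,
    "number": (int, float),
    "object": dict,
}

def validate(schema, value, path="$"):
    """Return a list of all validation errors (empty list means valid)."""
    errors = []
    expected = schema.get("type")
    if expected and not isinstance(value, TYPE_CHECKS[expected]):
        errors.append(f'property "{path}" validation failed: '
                      f"wrong type: expected {expected}, got {type(value).__name__}")
        return errors
    # Recurse into object properties, collecting errors from every branch.
    for name, subschema in schema.get("properties", {}).items():
        if isinstance(value, dict) and name in value:
            errors.extend(validate(subschema, value[name], name))
    return errors

errors = validate(
    {"type": "object",
     "properties": {"foo": {"type": "string"}, "bar": {"type": "number"}}},
    {"foo": 12, "bar": "42"},
)
for e in errors:
    print(e)
```

Both mismatches are reported in a single run; the same accumulate-and-recurse structure could be ported to Lua if the libraries' hooks allow it.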

Moving mapping from old ElasticSearch to latest ES (5)

I've inherited a pretty old (v2.something) ElasticSearch instance running in a cloud somewhere and need to get the data out, starting with the mappings, into a local instance of the latest ES (v5). Unfortunately, it fails with the following error:
% curl -X PUT 'http://127.0.0.1:9200/easysearch?pretty=true' --data @easysearch_mapping.json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "unknown setting [index.easysearch.mappings.espdf.properties.abstract.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "unknown setting [index.easysearch.mappings.espdf.properties.abstract.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
  },
  "status" : 400
}
The mapping I got from old instance does contain some fields of this kind:
"espdf" : {
  "properties" : {
    "abstract" : {
      "type" : "string"
    },
    "document" : {
      "type" : "attachment",
      "fields" : {
        "content" : {
          "type" : "string"
        },
        "author" : {
          "type" : "string"
        },
        "title" : {
          "type" : "string"
        },
This "espdf" thing probably comes from Meteor's "EasySearch" component, but I have more structures like this in the mapping and new ES rejects each of them (I tried editing the mapping and deleting the "espdf" key and value).
How can I get the new ES to accept the mapping? Is this some legacy issue from 2.x ES and I should somehow convert this to new 5.x ES format?
The reason it fails is that the older ES had a plugin installed called mapper-attachments, which added the attachment mapping type to ES.
In ES 5, this plugin has been replaced by the ingest-attachment plugin, which you can install like this:
bin/elasticsearch-plugin install ingest-attachment
After running this command in your ES_HOME folder, restart your ES cluster and it should go better.
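Note that ingest-attachment works through an ingest pipeline rather than a mapping type, so after installing it you would also define a pipeline and index through it. A rough sketch (the pipeline name and the "data" field are placeholders; the source field must hold base64-encoded file content):

```shell
# define a pipeline that runs the attachment processor on the "data" field
curl -X PUT 'localhost:9200/_ingest/pipeline/attachment' -d '{
  "description" : "Extract attachment information",
  "processors" : [
    { "attachment" : { "field" : "data" } }
  ]
}'

# index a document through the pipeline; "data" is base64-encoded content
curl -X PUT 'localhost:9200/easysearch/espdf/1?pipeline=attachment' -d '{
  "data" : "UERGIGNvbnRlbnQgaGVyZQ=="
}'
```

The old attachment-typed fields would then come out of the mapping entirely, with the extracted text landing in the attachment processor's output fields instead.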

Invalid request error in AWS::Route53::RecordSet when creating stack with AWS CloudFormation json

Invalid request error in AWS::Route53::RecordSet when creating stack with AWS CloudFormation json. Here is the error:
CREATE_FAILED AWS::Route53::RecordSet ApiRecordSet Invalid request
Here is the ApiRecordSet:
"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneName"]},
      "HostedZoneId" : {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneNameID"]}
    },
    "HostedZoneName" : "some.net.",
    "Comment" : "A records for my frontends.",
    "Name" : {"Fn::Join" : ["", ["api", {"Ref" : "Env"}, ".some.net."]]},
    "Type" : "A",
    "TTL" : "300"
  }
}
What is wrong/invalid in this request?
The only thing I see immediately wrong is that you are using both an AliasTarget and a TTL at the same time. You can't do that, since the record uses the TTL defined in the AliasTarget. For more info, check the documentation on RecordSet.
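In other words, the snippet should validate once the TTL line is removed, since the alias target supplies its own TTL:

```json
"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneName"]},
      "HostedZoneId" : {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneNameID"]}
    },
    "HostedZoneName" : "some.net.",
    "Comment" : "A records for my frontends.",
    "Name" : {"Fn::Join" : ["", ["api", {"Ref" : "Env"}, ".some.net."]]},
    "Type" : "A"
  }
}
```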
I also got this error and fixed it by removing the "SetIdentifier" field on record sets where it was not needed.
It is only needed when the "Name" and "Type" fields of multiple records are the same.
Documentation on AWS::Route53::RecordSet

Fiware: No observation attributes in Orion CB when registered/sent via IDAS UltraLight

This question is very similar to Missing attributes on Orion CB Entity when registering device through IDAS but found no definitive answer there.
I have been trying to get FIWARE UL2.0 via IDAS to the Orion CB working in the FIWARE Lab environment:
using the latest GitHub scripts: https://github.com/telefonicaid/fiware-figway/tree/master/python-IDAS4
following the tutorials, in particular http://www.slideshare.net/FI-WARE/fiware-iotidasintroul20v2
I have a FI-WARE Lab account with token generated. Adapted the config.ini file:
[user]
# Please, configure here your username at FIWARE Cloud and a valid Oauth2.0 TOKEN for your user (you can use get_token.py to obtain a valid TOKEN).
username=MY_USERNAME
token=MY_TOKEN
[contextbroker]
host=130.206.80.40
port=1026
OAuth=no
# Here you need to specify the ContextBroker database you are querying.
# Leave it blank if you want the general database or the IDAS service if you are looking for IoT devices connected by you.
# fiware_service=
fiware_service=bus_auto
fiware-service-path=/
[idas]
host=130.206.80.40
adminport=5371
ul20port=5371
OAuth=no
# Here you need to configure the IDAS service your devices will be sending data to.
# By default the OpenIoT service is provided.
# fiware-service=fiwareiot
fiware-service=bus_auto
fiware-service-path=/
#apikey=4jggokgpepnvsb2uv4s40d59ov
apikey=4jggokgpepnvsb2uv4s40d59ov
[local]
#Choose here your System type. Examples: RaspberryPI, MACOSX, Linux, ...
host_type=MACOSX
# Here please add a unique identifier for you. Suggestion: the 3 lower hexa bytes of your Ethernet MAC. E.g. 79:ed:af
# Also you may use your e-mail address.
host_id=a0:11:00
I used the SENSOR_TEMP template, adding the 'protocol' field (PDI-IoTA-UltraLight), which was the first problem I stumbled upon:
{
  "devices": [
    {
      "device_id": "DEV_ID",
      "entity_name": "ENTITY_ID",
      "entity_type": "thing",
      "protocol": "PDI-IoTA-UltraLight",
      "timezone": "Europe/Amsterdam",
      "attributes": [
        {
          "object_id": "otemp",
          "name": "temperature",
          "type": "int"
        }
      ],
      "static_attributes": [
        {
          "name": "att_name",
          "type": "string",
          "value": "value"
        }
      ]
    }
  ]
}
Now I can register the device OK, like
python RegisterDevice.py SENSOR_TEMP NexusPro Temp-Otterlo
and see it in Device List:
python ListDevices.py
I can send Observations like
python SendObservation.py Temp-Otterlo 'otemp|17'
But in the ContextBroker I see the Entity but never the measurements, e.g.
python GetEntity.py Temp-Otterlo
Gives
* Asking to http://130.206.80.40:1026/ngsi10/queryContext
* Headers: {'Fiware-Service': 'bus_auto', 'content-type': 'application/json', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
  "entities": [
    {
      "type": "",
      "id": "Temp-Otterlo",
      "isPattern": "false"
    }
  ],
  "attributes": []
}
...
* Status Code: 200
* Response:
{
  "contextResponses" : [
    {
      "contextElement" : {
        "type" : "thing",
        "isPattern" : "false",
        "id" : "Temp-Otterlo",
        "attributes" : [
          {
            "name" : "TimeInstant",
            "type" : "ISO8601",
            "value" : "2015-10-03T14:04:44.663133Z"
          },
          {
            "name" : "att_name",
            "type" : "string",
            "value" : "value",
            "metadatas" : [
              {
                "name" : "TimeInstant",
                "type" : "ISO8601",
                "value" : "2015-10-03T14:04:44.663500Z"
              }
            ]
          }
        ]
      },
      "statusCode" : {
        "code" : "200",
        "reasonPhrase" : "OK"
      }
    }
  ]
}
Strangely, I get a TimeInstant attribute. I tried playing with settings of the .ini like fiware-service=fiwareiot, but to no avail. I am out of ideas. The documentation at the catalogue for IDAS4
talks about observations being sent to port 8002 and setting the "OpenIoT" service, but that failed as well.
Any help appreciated.
You should run "python SendObservation.py NexusPro 'otemp|17'" instead of "python SendObservation.py Temp-Otterlo 'otemp|17'".
The reason is that you are providing an observation at the southbound, and there the DEV_ID should be used.
The entity does not include an attribute until an observation is received, so it is normal that you are not able to see it. Once you try the command above, it should all work.
Cheers,

Elasticsearch queries on "empty index"

In my application I use several Elasticsearch indices which will contain no indexed documents in their initial state. I consider that can be called "empty" :)
The document's mapping is correct and working.
The application also has a relational database containing entities that MIGHT have documents associated with them in Elasticsearch.
In the initial state of the application it is very common that there are only entities without documents, so not a single document has been indexed: an "empty index". The index has been created nevertheless, and the document's mapping has been put to the index and is present in the index's metadata.
Anyway, when I query Elasticsearch with a SearchQuery to find a document for one of the entities (the document contains a unique id from the entity), Elasticsearch throws an ElasticsearchException complaining that no mapping is present for field xy, etc.
But if I insert one single blank document into the index first, the query won't fail.
Is there a way to "initialize" an index so that the query doesn't fail, and to get rid of the silly "dummy document" workaround?
UPDATE:
Plus, the workaround with the dummy doc pollutes the index; for example, a count query now always returns +1... so I added a deletion to the workaround as well...
Your question lacks details and is not clear. If you had provided a gist of your index schema and query, that would have helped. You should also have provided the version of Elasticsearch that you are using.
The "no mapping" exception that you have mentioned has nothing to do with initializing the index with some data. Most likely you are sorting on a field which doesn't exist. This is common if you are querying multiple indexes at once.
Solution: it depends on the version of Elasticsearch. If you are on 1.3.x or lower, you should use ignore_unmapped. If you are on 1.4 or higher, you should use unmapped_type.
Click here to read official documentation.
If you find the documentation confusing, this example will make it clear:
Let's create two indexes, testindex1 and testindex2:
curl -XPUT localhost:9200/testindex1 -d '{"mappings":{"type1":{"properties":{"firstname":{"type":"string"},"servers":{"type":"nested","properties":{"name":{"type":"string"},"location":{"type":"nested","properties":{"name":{"type":"string"}}}}}}}}}'
curl -XPUT localhost:9200/testindex2 -d '{"mappings":{"type1":{"properties":{"firstname":{"type":"string"},"computers":{"type":"nested","properties":{"name":{"type":"string"},"location":{"type":"nested","properties":{"name":{"type":"string"}}}}}}}}}'
The only difference between these two indexes is that testindex1 has a "servers" field and testindex2 has a "computers" field.
Now let's insert test data into both indexes.
Index test data on testindex1:
curl -XPUT localhost:9200/testindex1/type1/1 -d '{"firstname":"servertom","servers":[{"name":"server1","location":[{"name":"location1"},{"name":"location2"}]},{"name":"server2","location":[{"name":"location1"}]}]}'
curl -XPUT localhost:9200/testindex1/type1/2 -d '{"firstname":"serverjerry","servers":[{"name":"server2","location":[{"name":"location5"}]}]}'
Index test data on testindex2:
curl -XPUT localhost:9200/testindex2/type1/1 -d '{"firstname":"computertom","computers":[{"name":"computer1","location":[{"name":"location1"},{"name":"location2"}]},{"name":"computer2","location":[{"name":"location1"}]}]}'
curl -XPUT localhost:9200/testindex2/type1/2 -d '{"firstname":"computerjerry","computers":[{"name":"computer2","location":[{"name":"location5"}]}]}'
Query examples:
Using "unmapped_type" for Elasticsearch version 1.4 and higher:
curl -XPOST 'localhost:9200/testindex2/_search?pretty' -d '{"fields":["firstname"],"query":{"match_all":{}},"sort":[{"servers.location.name":{"order":"desc","unmapped_type":"string"}}]}'
Using "ignore_unmapped" for Elasticsearch version 1.3.x and lower:
curl -XPOST 'localhost:9200/testindex2/_search?pretty' -d '{"fields":["firstname"],"query":{"match_all":{}},"sort":[{"servers.location.name":{"order":"desc","ignore_unmapped":"true"}}]}'
Output of query1:
{
  "took" : 15,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : null,
    "hits" : [ {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "1",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computertom" ]
      },
      "sort" : [ null ]
    }, {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "2",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computerjerry" ]
      },
      "sort" : [ null ]
    } ]
  }
}
Output of query2:
{
  "took" : 10,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : null,
    "hits" : [ {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "1",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computertom" ]
      },
      "sort" : [ -9223372036854775808 ]
    }, {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "2",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computerjerry" ]
      },
      "sort" : [ -9223372036854775808 ]
    } ]
  }
}
Note:
These examples were created on Elasticsearch 1.4.
These examples also demonstrate how to do sorting on nested fields.
Are you doing a sort when you search? I've run into the same issue ("No mapping found for [field] in order to sort on"), but only when trying to sort results. In that case, the solution is simply to add the ignore_unmapped: true property to the sort parameter in your query:
{
  ...
  "body": {
    ...
    "sort": [
      {"field_name": {
        "order": "asc",
        "ignore_unmapped": true
      }}
    ]
    ...
  }
  ...
}
I found my solution here:
No mapping found for field in order to sort on in ElasticSearch