Moving mapping from old ElasticSearch to latest ES (5) - json

I've inherited a pretty old (v2.something) Elasticsearch instance running in the cloud somewhere, and I need to get the data out, starting with the mappings, into a local instance of the latest ES (v5). Unfortunately, it fails with the following error:
% curl -X PUT 'http://127.0.0.1:9200/easysearch?pretty=true' --data @easysearch_mapping.json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "unknown setting [index.easysearch.mappings.espdf.properties.abstract.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "unknown setting [index.easysearch.mappings.espdf.properties.abstract.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"
  },
  "status" : 400
}
The mapping I got from the old instance does contain some fields of this kind:
"espdf" : {
"properties" : {
"abstract" : {
"type" : "string"
},
"document" : {
"type" : "attachment",
"fields" : {
"content" : {
"type" : "string"
},
"author" : {
"type" : "string"
},
"title" : {
"type" : "string"
},
This "espdf" thing probably comes from Meteor's "EasySearch" component, but I have more structures like this in the mapping and new ES rejects each of them (I tried editing the mapping and deleting the "espdf" key and value).
How can I get the new ES to accept the mapping? Is this some legacy issue from 2.x ES and I should somehow convert this to new 5.x ES format?

The reason it fails is that the older ES had a plugin installed called mapper-attachments, which added the attachment mapping type to ES.
In ES 5, this plugin has been replaced by the ingest-attachment plugin, which you can install like this:
bin/elasticsearch-plugin install ingest-attachment
After running this command in your ES_HOME folder, restart your ES cluster and things should work better.
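Note that ingest-attachment works differently from the old mapper-attachments plugin: instead of an attachment mapping type, you define an ingest pipeline with an attachment processor and send documents through it. A minimal sketch of the idea (the pipeline name and the "data" field name here are illustrative, not taken from your mapping; "string" is also deprecated in 5.x in favour of text/keyword, so the rest of the mapping will likely need hand-editing either way):

```
# Create an ingest pipeline with an attachment processor; it extracts
# text and metadata from a base64-encoded field ("data" here).
curl -X PUT 'http://127.0.0.1:9200/_ingest/pipeline/attachment?pretty' \
  -H 'Content-Type: application/json' -d '{
  "description" : "Extract attachment information",
  "processors" : [
    { "attachment" : { "field" : "data" } }
  ]
}'

# Index a document through the pipeline; "data" holds the base64 file content.
curl -X PUT 'http://127.0.0.1:9200/easysearch/espdf/1?pipeline=attachment&pretty' \
  -H 'Content-Type: application/json' -d '{
  "data" : "ZWxhc3RpY3NlYXJjaA=="
}'
```

The extracted text then lands in an attachment.content field on the indexed document rather than in a sub-field of the original field, so queries against the old attachment sub-fields need adjusting too.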

Related

how do I implement if condition in my cloudformation stack

I have two attributes in my stack: Environment and IAMProfileName. If I select one of the non-prod environments, i.e. "use1dev" or "use1qa", I should get MyPlatformEC2NonProd as the default value of "IAMProfileName".
If I select one of the prod environments, i.e. "useProd1" or "useProd2", I must get MyPlatformEC2Prod as the default value of "IAMProfileName".
How can I achieve this?
"Environment" : {
"Description" : "Environment being deployed to - use1dev, use1qa,
use1sbox etc",
"Type" : "String",
"Default" : "use1sbox",
"AllowedValues" : ["use1dev","use1qa","useProd1","useProd2"]
},
"IAMProfileName" : {
"Default" : "MyPlatformEC2",
"Type" : "String",
"Description" : "Name of IAM profile to attach to created
machines",
"AllowedValues" : ["MyPlatformEC2","MyPlatformEC2NonProd"]
Use CloudFormation conditions. For example, in your case I would do something like the following:
"Conditions": {
  "ProdProfileCondition": {
    "Fn::Or": [
      {"Fn::Equals": ["useProd1", {"Ref": "Environment"}]},
      {"Fn::Equals": ["useProd2", {"Ref": "Environment"}]}
    ]
  }
}
Now wherever you want to use the IAMProfileName value, use something like the following (note the branches are plain strings, not Refs, since MyPlatformEC2Prod and MyPlatformEC2NonProd are profile names, not parameters or resources):
"SomeAWSResource": {
  "Properties": {
    "ProfileName" : {
      "Fn::If" : [
        "ProdProfileCondition",
        "MyPlatformEC2Prod",
        "MyPlatformEC2NonProd"
      ]
    }
  }
}
For more information on how to use conditionals, check out the following link:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-conditions.html
Also, you can achieve more complicated conditionals using Jinja: just create a template and fill in values according to conditions. But I won't go into the details of that, because what you need is already covered by the above.

Mapping definition for [suggest] has unsupported parameters: [payloads : true]

I am using an example right from the Elasticsearch documentation here, using the Completion Suggester, but I am getting an error saying payloads: true is an unsupported parameter. Which obviously is supported, unless the docs are wrong? I have the latest Elasticsearch installed (5.3.0).
Here is my cURL:
curl -X PUT localhost:9200/search/pages/_mapping -d '{
  "pages" : {
    "properties": {
      "title": {
        "type" : "string"
      },
      "suggest" : {
        "type" : "completion",
        "analyzer" : "simple",
        "search_analyzer" : "simple",
        "payloads" : true
      }
    }
  }
}';
And the error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "Mapping definition for [suggest] has unsupported parameters: [payloads : true]"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "Mapping definition for [suggest] has unsupported parameters: [payloads : true]"
  },
  "status" : 400
}
The payloads parameter has been removed in Elasticsearch 5 by the following commit: Remove payload option from completion suggester. Here is the commit message:
The payload option was introduced with the new completion
suggester implementation in v5, as a stop gap solution
to return additional metadata with suggestions.
Now we can return associated documents with suggestions
(#19536) through fetch phase using stored field (_source).
The additional fetch phase ensures that we only fetch
the _source for the global top-N suggestions instead of
fetching _source of top results for each shard.
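So the fix is to drop the payloads line from the mapping and, if you need metadata with suggestions, keep it in the document _source instead. A sketch of the same request without the removed parameter, assuming the search index already exists (title is also switched to text, since string is deprecated in 5.x):

```
curl -X PUT localhost:9200/search/pages/_mapping \
  -H 'Content-Type: application/json' -d '{
  "pages" : {
    "properties" : {
      "title" : {
        "type" : "text"
      },
      "suggest" : {
        "type" : "completion",
        "analyzer" : "simple",
        "search_analyzer" : "simple"
      }
    }
  }
}'
```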

Elasticsearch - Sense - Indexing JSON files?

I'm trying to load some JSON files to my local ES instance via Sense, but I can't seem to figure the code out. I know ES has the Bulk API and the Index API, but I can't seem to bring the code together. How can I upload/index JSON files to my local ES instance using Sense? Thank you!
Yes, ES has a bulk API for uploading JSON files to the ES cluster. Sense itself is just JavaScript running in the browser, so it can't read files from disk; high-level clients are available in Java or C# which expose more control over the ES cluster. I don't think the Chrome browser will support execution of this command.
To upload a JSON file to Elasticsearch using the bulk API:
1) This command uploads JSON documents from a JSON file:
curl -s -XPOST localhost:9200/_bulk --data-binary @path_to_file
2) The JSON file should be formatted as follows (note each action needs a distinct _id, otherwise the documents overwrite each other):
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }
{ "field1" : "value3" }
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "3" } }
{ "field2" : "value2" }
Each index line carries the metadata for the JSON document on the line that follows it: the document id, the type within the index, and the index name.
See the Elasticsearch Bulk API documentation for more details.
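That said, for small amounts of data you may not need curl at all: Sense speaks the same REST API, so (assuming your version of Sense accepts multi-line request bodies) you can paste a small bulk request straight into it. Index and type names here are illustrative:

```
POST _bulk
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }
{ "field1" : "value3" }
```

For whole files, though, the curl command above remains the practical route.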

Invalid request error in AWS::Route53::RecordSet when creating stack with AWS CloudFormation json

I get an invalid request error in AWS::Route53::RecordSet when creating a stack with an AWS CloudFormation JSON template. Here is the error:
CREATE_FAILED AWS::Route53::RecordSet ApiRecordSet Invalid request
Here is the ApiRecordSet:
"ApiRecordSet" : {
"Type" : "AWS::Route53::RecordSet",
"Properties" : {
"AliasTarget" :{
"DNSName": {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneName"]},
"HostedZoneId": {"Fn::GetAtt": ["RestELB", "CanonicalHostedZoneNameID"]}
},
"HostedZoneName" : "some.net.",
"Comment" : "A records for my frontends.",
"Name" : {"Fn::Join": ["", ["api",{"Ref": "Env"},".some.net."]]},
"Type" : "A",
"TTL" : "300"
}
}
What is wrong/invalid in this request?
The only thing I see immediately wrong is that you are using both an AliasTarget and a TTL at the same time. You can't do that, since the record uses the TTL defined in the AliasTarget. For more info, check out the documentation on RecordSet here.
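In other words, drop the TTL and keep the AliasTarget. A sketch of the corrected resource, with everything else unchanged from the template above:

```
"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneName"]},
      "HostedZoneId" : {"Fn::GetAtt" : ["RestELB", "CanonicalHostedZoneNameID"]}
    },
    "HostedZoneName" : "some.net.",
    "Comment" : "A records for my frontends.",
    "Name" : {"Fn::Join" : ["", ["api", {"Ref" : "Env"}, ".some.net."]]},
    "Type" : "A"
  }
}
```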
I also got this error and fixed it by removing the "SetIdentifier" field on record sets where it was not needed.
It is only needed when the "Name" and "Type" fields of multiple records are the same.
Documentation on AWS::Route53::RecordSet

Error Loading json file on elasticsearch aws

I've just set up an Elasticsearch domain using the Elasticsearch Service from AWS.
Now I want to feed it some JSON using:
curl -XPOST 'my-aws-domain-here/_bulk/' --data-binary @base_enquete.json
according to the documentation here.
My json file looks like the following:
[{"INDID": "10040","DATENQ": "29/7/2013","Name": "LANDIS MADAGASCAR SA"},
{"INDID": "10050","DATENQ": "14/8/2013","Name": "MADAFOOD SA","M101P": ""}]
which gives me this error:
{"error":"ActionRequestValidationException[Validation Failed: 1: no requests added;]","status":400}
I tried without [ and ]: same error!
Note that I already set up access policy to be open to the world for dev stage purpose.
Any help of any kind will be helpful :)
This is because of the wrong format of the data.
Please go through the documentation here.
Ideally it should be in this format:
action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n
This means that the content of the file you are sending should be in the following format:
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{"INDID": "10040","DATENQ": "29/7/2013","Name": "LANDIS MADAGASCAR SA"}
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }
{"INDID": "10050","DATENQ": "14/8/2013","Name": "MADAFOOD SA","M101P": ""}