Handling nested integration response from AWS API Gateway - JSON

I have a JSON response from AWS Lambda going to AWS API Gateway, as follows:
[
  {
    "key1" : "fruit",
    "key2" : "citrus",
    "key3" : {
      "key31" : "lemon",
      "key32" : "orange",
      "key33" : "lime"
    }
  },
  {
    "key1" : "vegetable",
    "key2" : "green",
    "key3" : {
      "key31" : "spinach",
      "key32" : "lettuce",
      "key33" : "cabbage"
    }
  }
]
Before sending to the client application from API Gateway, I want to modify the keys in the response as below:
[
  {
    "category" : "fruit",
    "subCategory" : "citrus",
    "examples" : {
      "eg1" : "lemon",
      "eg2" : "orange",
      "eg3" : "lime"
    }
  },
  {
    "category" : "vegetable",
    "subCategory" : "green",
    "examples" : {
      "eg1" : "spinach",
      "eg2" : "lettuce",
      "eg3" : "cabbage"
    }
  }
]
In AWS API Gateway we have mapping templates, written in the Apache Velocity Template Language, to transform the response coming from Lambda before it goes out of API Gateway.
I am using the application/json content type to create the mapping template.
Below is the code I have written for the transformation:
#set($inputRoot = $input.path('$'))
[
#foreach($elem in $inputRoot)
  {
    "category": "$elem.key1",
    "subCategory": "$elem.key2",
    "examples" : #set($example in $elem.key3)
    {
      "eg1" : "$example.key31",
      "eg2" : "$example.key32",
      "eg3" : "$example.key33"
    }#end
  }#if($foreach.hasNext),#end
#end
]
The response I receive from API Gateway after hitting it is as below:
{
"message": "Internal server error"
}
I am still new to API Gateway, so if anyone could help, it would be really great.
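For what it's worth, the likely culprit is the #set($example in $elem.key3) line: #set takes an assignment (#set($example = $elem.key3)) and is never closed with #end, so the stray #end makes the template fail to parse, which API Gateway surfaces as a 500. A minimal corrected sketch of the same template (untested; it reads key3 directly off each element and does no string escaping):
#set($inputRoot = $input.path('$'))
[
#foreach($elem in $inputRoot)
  {
    "category": "$elem.key1",
    "subCategory": "$elem.key2",
    "examples": {
      "eg1": "$elem.key3.key31",
      "eg2": "$elem.key3.key32",
      "eg3": "$elem.key3.key33"
    }
  }#if($foreach.hasNext),#end
#end
]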

Related

How to send the below request body to hit a service and get a response

{
  "version": 46,
  "actions": [
    {
      "action" : "addCustomLineItem",
      "name" : {
        "en" : "Global India"
      },
      "quantity" : 1,
      "money" : {
        "currencyCode" : "INR",
        "centAmount" : 4200
      },
      "slug" : "mySlug",
      "taxCategory" : {
        "typeId" : "tax-category",
        "id" : "20f5a0ca-e3fd-48b6-8258-8dc8c75fe22a"
      }
    }
  ]
}
Above is my request body.
We have stored the above data in the cartIdData variable, but I am not getting a response; it says the body does not contain valid JSON.
this.cartIdData = {
  version: this.cartversion,
  productId: this.productID,
  currency: this.currencycode,
  price: this.priceparseInt,
  cartaction: action,
  india: this.name,
  slugname: this.slugname
};
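For reference, the object above does not match the shape the service expects, which is a top-level version plus an actions array. A hedged TypeScript sketch of what the construction might look like, reusing the names from the snippet (this.cartversion, this.currencycode, etc. are taken from the question; whether this.priceparseInt already holds the cent amount is an assumption):
this.cartIdData = {
  version: this.cartversion,
  actions: [
    {
      action: 'addCustomLineItem',
      // the 'en' key mirrors the name object in the target body
      name: { en: this.name },
      quantity: 1,
      money: {
        currencyCode: this.currencycode,
        centAmount: this.priceparseInt
      },
      slug: this.slugname,
      taxCategory: {
        typeId: 'tax-category',
        id: '20f5a0ca-e3fd-48b6-8258-8dc8c75fe22a'
      }
    }
  ]
};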

Index a JSON file into Elasticsearch: command/mapping errors

I'm new to ELK and I want to import a JSON file into Elasticsearch. This is my file:
{
"news":{
"1":{
"_score":1.0,
"_index":"newsvit",
"_source":{
"content":" \u0641\u0647\u06cc\u0645\u0647 \u062d\u0633\u0646\u200c\u0645\u06cc\u0631\u06cc: \u0627\u06af\u0631\u0686\u0647 \u062f\u0631 \u0647\u06cc\u0627\u0647\u0648\u06cc \u0627\u0646\u062a\u062e\u0627\u0628\u0627\u062a \u0631\u06cc\u0627\u0633\u062a \u062c\u0645\u0647\u0648\u0631\u06cc\u060c \u0645\u0648\u0636\u0648\u0639\u06cc \u0645\u0627\u0646\u0646\u062f \u0645\u0639\u0631\u0641\u06cc \u06a9\u0627\u0646\u062f\u06cc\u062f\u0627\u0647\u0627\u06cc \u0634\u0648\u0631\u0627\u06cc \u0634\u0647\u0631 \u062f\u0631 \u062d\u0627\u0634\u06cc\u0647 \u0642\u0631\u0627\u0631 \u06af\u0631\u0641\u062a\u0647\u060c \u0627\u0645\u0627 \u0627\u0645\u0633\u0627\u0644 \u0628\u0647 \u0639\u0646\u0648\u0627\u0646 \u067e\u0646\u062c\u0645\u06cc\u0646 \u062f\u0648\u0631\u0647 \u0627\u0646\u062a\u062e\u0627\u0628 \u0627\u0639\u0636\u0627\u06cc \u0634\u0648\u0631\u0627\u06cc \u0634\u0647\u0631\u060c \u0627\u06cc\u0646 \u0631\u0648\u06cc\u062f\u0627\u062f \u0628\u0647 \u0646\u0633\u0628\u062a \u062f\u0648\u0631\u0647\u200c\u0647\u0627\u06cc \u0642\u0628\u0644\u060c \u0628\u06cc\u0634\u062a\u0631 \u0645\u0648\u0631\u062f \u062a\u0648\u062c\u0647 \u0648\u0627\u0642\u0639 \u0634\u062f\u0647. \u0627\u06cc\u0646 \u0627\u0642\u0628\u0627\u0644\u060c \u0686\u0647 \u0627\u0632 \u0633\u0648\u06cc \u0686\u0647\u0631\u0647\u200c\u0647\u0627\u06cc \u0645\u0637\u0631\u062d \u0628\u0631\u0627\u06cc \u062b\u0628\u062a \u0646\u0627\u0645 \u0648 \u0686\u0647 \u0627\u0632 \u0633\u0648\u06cc \u0645\u0631\u062f\u0645 \u0628\u0631\u0627\u06cc \u0645\u0634\u0627\u0631\u06a9\u062a \u062f\u0631 \u0627\u06cc\u0646 \u0631\u0648\u06cc\u062f\u0627\u062f\u060c \u0639\u0644\u062a\u200c\u0647\u0627\u06cc \u06af\u0648\u0646\u0627\u06af\u0648\u0646\u06cc \u0645\u06cc\u200c\u062a\u0648\u0627\u0646\u062f \u062f\u0627\u0634\u062a\u0647 \u0628\u0627\u0634\u062f \u06a9\u0647 \u062a\u0648\u062c\u0647 \u0628\u0647 \u0622\u0646\u060c \u0645\u06cc\u200c\u062a\u0648\u0627\u0646\u062f \u0631\u0627\u0647\u06af\u0634\u0627\u06cc \u0627\u0639\u0636\u0627\u06cc \u0631\u06",
"lead":"\u062c\u0627\u0645\u0639\u0647 > \u0634\u0647\u0631\u06cc - \u0645\u06cc\u0632\u06af\u0631\u062f\u06cc \u062f\u0631\u0628\u0627\u0631\u0647 \u0639\u0645\u0644\u06a9\u0631\u062f \u062f\u0648\u0631\u0647\u200c\u0647\u0627\u06cc \u06af\u0630\u0634\u062a\u0647 \u0634\u0648\u0631\u0627\u06cc \u0634\u0647\u0631\u060c \u0622\u0646\u0686\u0647 \u0627\u0639\u0636\u0627\u06cc \u062c\u062f\u06cc\u062f \u0628\u0627\u06cc\u062f \u0645\u062f \u0646\u0638\u0631 \u062f\u0627\u0634\u062a\u0647 \u0628\u0627\u0634\u0646\u062f \u0648 \u0647\u0645\u0686\u0646\u06cc\u0646 \u0645\u0627\u0647\u06cc\u062a \u0633\u06cc\u0627\u0633\u06cc \u0628\u0648\u062f\u0646 \u06cc\u0627 \u0646\u0628\u0648\u062f\u0646 \u0634\u0648\u0631\u0627\u06cc \u0634\u0647\u0631.",
"agency":"13",
"date_created":1494518193,
"url":"http://www.khabaronline.ir/(X(1)S(bud4wg3ebzbxv51mj45iwjtp))/detail/663749/society/urban",
"image":"uploads/2017/05/11/1589793661.jpg",
"category":"15"
},
"_type":"news",
"_id":"2981643"
},
"2": {
...
Based on what I have learned, I first tried to create a mapping for it in the Dev Tools console of Kibana. I want to be able to run queries and searches on this file based on the fields in _source, such as category, id, and so on. This is my mapping:
PUT /main-news-test-data
{
  "mappings": {
    "properties": {
      "_score": {"type":"integer"},
      "_index": {"type":"keyword"},
      "_type": {"type":"keyword"},
      "_id": {"type":"keyword"}
    },
    "_source": {
      "properties": {
        "content": {"type":"text"},
        "title": {"type":"text"},
        "lead": {"type":"text"},
        "agency": {"type":"keyword"},
        "date_created": {"type":"date"},
        "url": {"type":"keyword"},
        "image": {"type":"keyword"},
        "category": {"type":"keyword"}
      }
    }
  }
}
HEAD main-news-test-data
GET /main-news-test-data/_search?q=*
But when I run this in Dev Tools I receive this error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "Mapping definition for [_source] has unsupported parameters: [properties : {image={type=keyword}, agency={type=keyword}, date_created={type=date}, title={type=text}, category={type=keyword}, content={type=text}, lead={type=text}, url={type=keyword}}]"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "Failed to parse mapping [_doc]: Mapping definition for [_source] has unsupported parameters: [properties : {image={type=keyword}, agency={type=keyword}, date_created={type=date}, title={type=text}, category={type=keyword}, content={type=text}, lead={type=text}, url={type=keyword}}]",
    "caused_by" : {
      "type" : "mapper_parsing_exception",
      "reason" : "Mapping definition for [_source] has unsupported parameters: [properties : {image={type=keyword}, agency={type=keyword}, date_created={type=date}, title={type=text}, category={type=keyword}, content={type=text}, lead={type=text}, url={type=keyword}}]"
    }
  },
  "status" : 400
}
I also tried to index my file into Elasticsearch using this PowerShell command afterwards:
Invoke-RestMethod "http://localhost:9200/main-news-test-data/doc/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "test.json"
But again I get this error from PowerShell:
Invoke-RestMethod : {
  "error" : {
    "root_cause" : [
      {
        "type" : "json_e_o_f_exception",
        "reason" : "Unexpected end-of-input: expected close marker for Object (start marker at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 1])\n at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 2, column: 1]"
      }
    ],
    "type" : "json_e_o_f_exception",
    "reason" : "Unexpected end-of-input: expected close marker for Object (start marker at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 1])\n at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 2, column: 1]"
  },
  "status" : 400
}
So what should I do? How do I import a JSON file into Elasticsearch so that it is queryable by these fields?
From what I read, I can say that your mapping is strange. Just put:
PUT /main-news-test-data
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text"
      },
      "title": {
        "type": "text"
      },
      "lead": {
        "type": "text"
      },
      "agency": {
        "type": "keyword"
      },
      "date_created": {
        "type": "date"
      },
      "url": {
        "type": "keyword"
      },
      "image": {
        "type": "keyword"
      },
      "category": {
        "type": "keyword"
      }
    }
  }
}
Your JSON is also wrong: the _bulk API does not take a plain JSON document. A file for the _bulk API looks like this:
{ "index" : { "_index" : "main-news-test-data", "_id" : "1" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "main-news-test-data", "_id" : "2" } }
{ "field1" : "value2" }
Please also note that "_score": 1.0 has no reason to be in your request, and that _type is deprecated (on 7.0+, _type can only be _doc and should be left out).
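To get the nested file from the question into that bulk shape, a small script can flatten it to NDJSON first. A rough PowerShell sketch (untested; it assumes the layout shown in the question, with documents keyed "1", "2", ... under news; it writes via [IO.File]::WriteAllText because a UTF-8 BOM at the start of the file can upset the bulk parser):
$data = Get-Content "test.json" -Raw | ConvertFrom-Json
$lines = foreach ($prop in $data.news.PSObject.Properties) {
    $doc = $prop.Value
    # Action line: target index, reusing the original _id
    ConvertTo-Json @{ index = @{ _index = "main-news-test-data"; _id = $doc._id } } -Compress
    # Document line: only the _source payload, without the _score/_index/_type metadata
    ConvertTo-Json $doc._source -Compress
}
# The bulk body must end with a single trailing newline
[IO.File]::WriteAllText("$PWD\bulk.ndjson", ($lines -join "`n") + "`n")
Then POST it with the same Invoke-RestMethod call as above, pointed at http://localhost:9200/_bulk?pretty with bulk.ndjson as the -InFile.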

Ingest node Filebeat to Elasticsearch

We are sending logs directly from Filebeat to Elasticsearch, without Logstash.
The logs can contain JSON in different fields that also need to be parsed. I have created an ingest pipeline to parse the logs, tested it in the developer console, and the output was as expected. I have set Filebeat to send logs to this pipeline by adding 'pipeline: application_pipeline' to filebeat.yml, but in Index Management I see only my docs.
How can I check whether Filebeat is actually sending these logs through the pipeline?
log example:
{"level":"info","message":"Webhook DeletePrice-{\"_headers\":{\"x-forwarded-proto\":[\"https\"],\"x-requested-with\":[\"\"],\"x-client-ip\":[\"93.84.120.32\"],\"user-agent\":[\"1C+Enterprise\\/8.3\"],\"accept\":[\"application\\/json\"],\"host\":[\"host.com\"],\"content-length\":[\"\"],\"content-type\":[\"\"]},\"company_id\":\"10248103\",\"service_id\":\"102.01.02S\",\"service_type\":\"clientApi\"}","service":"servicename","project":"someproject.com","event_id":"255A854BED569B8D4C21B5DE6D8E109C","payload":[],"date_server":"2020-07-24T11:45:48+00:00","date_unix":1595591148.966919}
{"level":"error","message":"NO service integration","service":"servicename","project":"someproject.com","event_id":"D3986456E5A42AF8574230C29D1D474D","payload":{"exception":{"class":"\\Ship\\Exceptions\\IntegrationException","message":"NO service integration","code":0,"file":"/var/www/builds/someproject.com/build.lab.service-public-api.2020_07_22_12_17_45/app/Containers/Price/UI/API/Controllers/Controller.php:406"}},"date_server":"2020-07-24T08:40:34+00:00","date_unix":1595580034.975073}
{"level":"info","message":"No photo in priceId-3696930","service":"service-private-api","project":"someproject.com","event_id":"FBEDA2C9600BFE11523592114B32BAEB","payload":[],"date_server":"2020-07-24T12:16:40+00:00","date_unix":1595593000.97212}
{"level":"error","message":"C404HttpException: 404 \u0421\u0442\u0440\u0430\u043d\u0438\u0446\u0430 \u043d\u0435 \u043d\u0430\u0439\u0434\u0435\u043d\u0430 in \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/modules\/personal\/controllers\/RobotsController.php:65\nStack trace:\n#0 \/var\/www\/builds\/build.artox-lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/vendor\/yiisoft\/yii\/framework\/yiilite.php(4226): RobotsController->actionIndex()\n#1 \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/vendor\/yiisoft\/yii\/framework\/yiilite.php(3739): CInlineAction->runWithParams(Array)\n#2 \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/vendor\/yiisoft\/yii\/framework\/yiilite.php(3724): CController->runAction(Object(CInlineAction))\n#3 \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/vendor\/yiisoft\/yii\/framework\/yiilite.php(3714): CController->runActionWithFilters(Object(CInlineAction), Array)\n#4 \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/vendor\/yiisoft\/yii\/framework\/yiilite.php(1799): CController->run('index')\n#5 \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/vendor\/yiisoft\/yii\/framework\/yiilite.php(1719): CWebApplication->runController('personal\/robots...')\n#6 \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/protected\/vendor\/yiisoft\/yii\/framework\/yiilite.php(1236): CWebApplication->processRequest()\n#7 \/var\/www\/builds\/build.lab.classified-platform.2020_07_29_12_13_54\/htdocs\/index.php(22): CApplication->run()\n#8 {main}\nREQUEST_URI=\/robots.txt\n---","service":"artox-lab\/classified-platform","project":"someproject.com","event_id":"91a10782a3566a74d5abefa9589c926c","payload":"exception.C404HttpException.404","date_server":"2020-07-29T14:25:34+03:00","date_unix":1596021934.218448}
pipeline example:
PUT _ingest/pipeline/application_pipeline
{
  "description" : "Pipeline for parsing application.log for services",
  "processors" : [
    {
      "grok" : {
        "field" : "message",
        "patterns" : [
          "%{JSON:json_message_payload}"
        ],
        "pattern_definitions" : {
          "JSON" : "{.*$"
        },
        "ignore_failure" : true,
        "ignore_missing" : true
      }
    },
    {
      "remove" : {
        "field" : "json_message_payload",
        "ignore_failure" : true
      }
    }
  ]
}
output:
{
  "_index" : "application_index",
  "_type" : "_doc",
  "_id" : "6",
  "_version" : 1,
  "_seq_no" : 3,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "date_server" : "2020-07-29T15:16:17+03:00",
    "level" : "error",
    "project" : "103by",
    "message" : """
C404HttpException: 404 Страница не найдена in /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/protected/modules/personal/components/PersonalController.php:140
Stack trace:
#0 /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/protected/vendor/yiisoft/yii/framework/yiilite.php(3737): PersonalController->beforeAction(Object(ShowGalleryPhotoAction))
#1 /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/protected/vendor/yiisoft/yii/framework/yiilite.php(3724): CController->runAction(Object(ShowGalleryPhotoAction))
#2 /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/protected/vendor/yiisoft/yii/framework/yiilite.php(3714): CController->runActionWithFilters(Object(ShowGalleryPhotoAction), Array)
#3 /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/protected/vendor/yiisoft/yii/framework/yiilite.php(1799): CController->run('showGalleryPhot...')
#4 /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/protected/vendor/yiisoft/yii/framework/yiilite.php(1719): CWebApplication->runController('personal/galler...')
#5 /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/protected/vendor/yiisoft/yii/framework/yiilite.php(1236): CWebApplication->processRequest()
#6 /var/www/builds/build.artox-lab.classified-platform.2020_07_29_12_13_54/htdocs/index.php(22): CApplication->run()
#7 {main}
REQUEST_URI=/gallery/23609/1439643/
HTTP_REFERER=http://rnpcomr.103.by/gallery/23609/1439643/
---
""",
    "date_unix" : 1.596024977817727E9,
    "event_id" : "b75c7a1ef2f8780986931b038d2f8599",
    "payload" : "exception.C404HttpException.404",
    "service" : "artox-lab/classified-platform"
  }
}
Filebeat config:
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["elk.artoxlab.com:9200"]
  pipeline: application_pipeline
If you run GET _nodes/stats/ingest, you're going to see the usage statistics for your pipeline under nodes.xyz.ingest.pipelines.application_pipeline.
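Roughly like this (a trimmed sketch, not verbatim output; the node id and counters will differ):
GET _nodes/stats/ingest
{
  "nodes" : {
    "node_id_here" : {
      "ingest" : {
        "pipelines" : {
          "application_pipeline" : {
            "count" : 1234,
            "time_in_millis" : 56,
            "current" : 0,
            "failed" : 0
          }
        }
      }
    }
  }
}
If count stays at 0 while new documents keep arriving, Filebeat is not routing them through the pipeline.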
Another thing worth noting is that you could also do the same thing in Filebeat itself without resorting to using an ingest pipeline simply by defining a decode_json_fields processor, like this:
processors:
  - decode_json_fields:
      fields: ["message"]
      process_array: true
      max_depth: 2
      target: ""
      overwrite_keys: true
      add_error_key: false
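A note on those settings: with target: "" the decoded keys land at the root of the event, and overwrite_keys: true lets them replace existing fields such as message.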
UPDATE: if you still don't see your data being indexed, what I suggest is to build some failure handling into your pipeline. Change it to this, so in case the indexing fails for some reason, you can see the document in the failed-xyz index together with the reason for the error:
PUT _ingest/pipeline/application_pipeline
{
  "description": "Pipeline for parsing application.log for services",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{JSON:json_message_payload}"
        ],
        "pattern_definitions": {
          "JSON": "{.*$"
        },
        "ignore_failure": true,
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "json_message_payload",
        "ignore_failure": true
      }
    }
  ],
  "on_failure": [
    {
      "append": {
        "field": "meta.errors",
        "value": "{{ _ingest.on_failure_message }}, {{ _ingest.on_failure_processor_type }}, {{ _ingest.on_failure_processor_tag }}"
      }
    },
    {
      "set": {
        "field": "_index",
        "value": "failed-{{ _index }}"
      }
    }
  ]
}
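Also, a quick way to sanity-check the pipeline against a real log line is the simulate API (a minimal sketch; paste one of the raw log lines from the question in as the message value):
POST _ingest/pipeline/application_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "{\"level\":\"info\",\"message\":\"...\"}"
      }
    }
  ]
}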

Manipulating JSON messages from Kafka topic using Logstash filter

I am using Logstash 2.4 to read JSON messages from a Kafka topic and send them to an Elasticsearch index.
The JSON format is as below --
{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "string",
        "optional": false,
        "field": "reloadID"
      },
      {
        "type": "string",
        "optional": false,
        "field": "externalAccountID"
      },
      {
        "type": "int64",
        "optional": false,
        "name": "org.apache.kafka.connect.data.Timestamp",
        "version": 1,
        "field": "reloadDate"
      },
      {
        "type": "int32",
        "optional": false,
        "field": "reloadAmount"
      },
      {
        "type": "string",
        "optional": true,
        "field": "reloadChannel"
      }
    ],
    "optional": false,
    "name": "reload"
  },
  "payload": {
    "reloadID": "328424295",
    "externalAccountID": "9831200013",
    "reloadDate": 1446242463000,
    "reloadAmount": 240,
    "reloadChannel": "C1"
  }
}
Without any filter in my config file, the target documents from the ES index look like below --
{
  "_index" : "kafka_reloads",
  "_type" : "logs",
  "_id" : "AVfcyTU4SyCFNFP2z5-l",
  "_score" : 1.0,
  "_source" : {
    "schema" : {
      "type" : "struct",
      "fields" : [ {
        "type" : "string",
        "optional" : false,
        "field" : "reloadID"
      }, {
        "type" : "string",
        "optional" : false,
        "field" : "externalAccountID"
      }, {
        "type" : "int64",
        "optional" : false,
        "name" : "org.apache.kafka.connect.data.Timestamp",
        "version" : 1,
        "field" : "reloadDate"
      }, {
        "type" : "int32",
        "optional" : false,
        "field" : "reloadAmount"
      }, {
        "type" : "string",
        "optional" : true,
        "field" : "reloadChannel"
      } ],
      "optional" : false,
      "name" : "reload"
    },
    "payload" : {
      "reloadID" : "155559213",
      "externalAccountID" : "9831200014",
      "reloadDate" : 1449529746000,
      "reloadAmount" : 140,
      "reloadChannel" : "C1"
    },
    "@version" : "1",
    "@timestamp" : "2016-10-19T11:56:09.973Z"
  }
}
But I want only the value part of the "payload" field to go to my ES index as the target JSON body. So I tried to use the 'mutate' filter in the config file, as below --
input {
  kafka {
    zk_connect => "zksrv-1:2181,zksrv-2:2181,zksrv-4:2181"
    group_id => "logstash"
    topic_id => "reload"
    consumer_threads => 3
  }
}
filter {
  mutate {
    remove_field => [ "schema", "@version", "@timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => ["datanode-6:9200","datanode-2:9200"]
    index => "kafka_reloads"
  }
}
With this filter, the ES documents now look like below --
{
  "_index" : "kafka_reloads",
  "_type" : "logs",
  "_id" : "AVfch0yhSyCFNFP2z59f",
  "_score" : 1.0,
  "_source" : {
    "payload" : {
      "reloadID" : "850846698",
      "externalAccountID" : "9831200013",
      "reloadDate" : 1449356706000,
      "reloadAmount" : 30,
      "reloadChannel" : "C1"
    }
  }
}
But actually it should be like below --
{
  "_index" : "kafka_reloads",
  "_type" : "logs",
  "_id" : "AVfch0yhSyCFNFP2z59f",
  "_score" : 1.0,
  "_source" : {
    "reloadID" : "850846698",
    "externalAccountID" : "9831200013",
    "reloadDate" : 1449356706000,
    "reloadAmount" : 30,
    "reloadChannel" : "C1"
  }
}
Is there a way to do this? Can anyone help me on this?
I also tried the below filter --
filter {
  json {
    source => "payload"
  }
}
But that is giving me errors like --
Error parsing json {:source=>"payload", :raw=>{"reloadID"=>"572584696", "externalAccountID"=>"9831200011", "reloadDate"=>1449093851000, "reloadAmount"=>180, "reloadChannel"=>"C1"}, :exception=>java.lang.ClassCastException: org.jruby.RubyHash cannot be cast to org.jruby.RubyIO, :level=>:warn}
Any help will be much appreciated.
Thanks
Gautam Ghosh
You can achieve what you want using the following ruby filter:
ruby {
  code => "
    event.to_hash.delete_if {|k, v| k != 'payload'}
    event.to_hash.update(event['payload'].to_hash)
    event.to_hash.delete_if {|k, v| k == 'payload'}
  "
}
What it does is:
- remove all fields but the payload one
- copy all of payload's inner fields to the root level
- delete the payload field itself
You'll end up with what you need.
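Note that this snippet relies on the in-place event API of Logstash 2.x; on Logstash 5+ the ruby filter would have to go through event.get/event.set instead, since direct hash-style access to the event was removed.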
It's been a while, but here is a valid workaround; I hope it's useful.
json_encode {
  source => "json"
  target => "json_string"
}
json {
  source => "json_string"
}
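This works because the json filter needs a string as its source, while payload here is already a parsed hash (hence the RubyHash cast error above): json_encode serializes the field back into a string, and json then re-parses that string into fields on the event. In this question's case the source would be the payload field rather than json. Also note that json_encode is not bundled with Logstash by default, so it may need to be installed first:
bin/logstash-plugin install logstash-filter-json_encode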

Is it possible to connect a load balancer's DNS name to Route 53 using an AWS CloudFormation template?

What I am trying to do is connect a load balancer's DNS name to Route 53.
Let's look at an example.
Here is the load balancer from the template's Resources section:
"RestELB" : {
  "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
  "DependsOn": "AttachGateway",
  "Properties": {
    "LoadBalancerName": {"Fn::Join": ["", ["Rest-ELB-", {"Ref": "VPC"}]]},
    "CrossZone" : "true",
    "Subnets": [{ "Ref": "PublicSubnet1" }, { "Ref": "PublicSubnet2" }],
    "Listeners" : [
      {"LoadBalancerPort" : "80", "InstancePort" : "80", "Protocol" : "HTTP"},
      {"LoadBalancerPort" : "6060", "InstancePort" : "6060", "Protocol" : "HTTP"}
    ]
  }
},
And here is the Route 53 record set:
"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : [
        {"Fn::Join": ["", [{"ElasticLoadBalancer": "DNSName"}, "."]]}
      ],
      "EvaluateTargetHealth" : "Boolean",
      "HostedZoneId" : "String"
    },
    "HostedZoneName" : "example.net.",
    "Comment" : "A records for my frontends.",
    "Name" : "api.example.net.",
    "Type" : "A",
    "TTL" : "900"
  }
}
Just putting {"ElasticLoadBalancer": "DNSName"} didn't work. Can someone suggest or show me the correct way to do this?
Thanks!
Most likely you want to get the attribute DNSName of the load balancer whose logical ID is RestELB, so you will need something with Fn::GetAtt, like this (untested; note that TTL is dropped, since alias records don't take one):
"ApiRecordSet" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : { "Fn::GetAtt" : [ "RestELB", "DNSName" ]},
      "EvaluateTargetHealth" : "Boolean",
      "HostedZoneId" : "String"
    },
    "HostedZoneName" : "example.net.",
    "Comment" : "A records for my frontends.",
    "Name" : "api.example.net.",
    "Type" : "A"
  }
}
For anyone reading this answer in 2018: I got mine working using CanonicalHostedZoneNameID and not CanonicalHostedZoneID. (The classic AWS::ElasticLoadBalancing::LoadBalancer exposes CanonicalHostedZoneNameID, while the newer AWS::ElasticLoadBalancingV2::LoadBalancer exposes CanonicalHostedZoneID.)
"MyRecordSet": {
  "Type": "AWS::Route53::RecordSet",
  "Properties": {
    "HostedZoneName" : "example.com.",
    "Name": "abc.example.com.",
    "Type": "A",
    "AliasTarget": {
      "HostedZoneId" : {"Fn::GetAtt": ["MyELB", "CanonicalHostedZoneNameID"]},
      "DNSName": {"Fn::GetAtt": ["MyELB", "DNSName"]},
      "EvaluateTargetHealth": "false"
    }
  }
}
Be sure to read the CloudFormation documentation on the AWS::Route53::RecordSet AliasTarget type:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-aliastarget.html
This is how it looks in my CloudFormation when creating an alias target for an ELB:
"Route53LoadBalancerAlias" : {
  "Type" : "AWS::Route53::RecordSet",
  "Properties" : {
    "AliasTarget" : {
      "DNSName" : { "Fn::GetAtt" : [ "ELB", "DNSName" ]},
      "EvaluateTargetHealth" : false,
      "HostedZoneId" : { "Fn::GetAtt" : [ "ELB", "CanonicalHostedZoneID" ]}
    },
For load balancers, use the canonical hosted zone ID of the load balancer. For Amazon S3, use the hosted zone ID for your bucket's website endpoint. For CloudFront, use Z2FDTNDATAQYW2. For a list of hosted zone IDs of other services, see the relevant service in the AWS Regions and Endpoints.
YAML for deploying a RecordSet referencing an ELB deployed in the same template.
Route53RecordSet:
  Type: AWS::Route53::RecordSet
  Properties:
    Name: !Ref HostName
    HostedZoneId: !Ref HostedZoneId
    Type: A
    AliasTarget:
      DNSName: !GetAtt ElasticLoadBalancer.DNSName
      HostedZoneId: !GetAtt ElasticLoadBalancer.CanonicalHostedZoneID
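For completeness, HostName and HostedZoneId referenced above would be template parameters, along these lines (names assumed to match the !Ref targets):
Parameters:
  HostName:
    Type: String
  HostedZoneId:
    Type: AWS::Route53::HostedZone::Id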