I'm trying to use Apache Drill with the logfile regex plugin, but I can't get it configured. I followed the example on https://drill.apache.org/docs/logfile-plugin/, but I get an error when I try to save the configuration.
I have tried:
"log" : {
"type" : "logRegex",
"extension" : "log",
"regex" : "(\\d{6})\\s(\\d{2}:\\d{2}:\\d{2})\\s+(\\d+)\\s(\\w+)\\s+(.+)",
"maxErrors": 10,
"schema": [
{
"fieldName": "eventDate",
"fieldType": "DATE",
"format": "yyMMdd"
},
{
"fieldName": "eventTime",
"fieldType": "TIME",
"format": "HH:mm:ss"
},
{
"fieldName": "PID",
"fieldType": "INT"
},
{
"fieldName": "action"
},
{
"fieldName": "query"
}
]
}
That on its own didn't make much sense to me, so I tried this too:
{
"type": "file",
"enabled": true,
"connection": "file:///",
"workspaces": {
"root": {
"location": "/user/max/donuts",
"writable": false,
"defaultInputFormat": null
}
},
"formats" : {
"json" : {
"type" : "json"
}
},
"log" : {
"type" : "logRegex",
"extension" : "log",
"regex" : "(\\d{6})\\s(\\d{2}:\\d{2}:\\d{2})\\s+(\\d+)\\s(\\w+)\\s+(.+)",
"maxErrors": 10,
"schema": [
{
"fieldName": "eventDate",
"fieldType": "DATE",
"format": "yyMMdd"
},
{
"fieldName": "eventTime",
"fieldType": "TIME",
"format": "HH:mm:ss"
},
{
"fieldName": "PID",
"fieldType": "INT"
},
{
"fieldName": "action"
},
{
"fieldName": "query"
}
]
}
}
Does anybody know how to configure this plugin correctly?
It looks like your JSON config is not valid: your "formats" key is closed right after the "json" format plugin, so the "log" format ends up outside of it. Please double-check it or try this:
{
"storage":{
"dfs": {
"type": "file",
"connection": "file:///",
"workspaces": {
"root": {
"location": "/",
"writable": false,
"allowAccessOutsideWorkspace": false
},
"tmp": {
"location": "/tmp",
"writable": true,
"allowAccessOutsideWorkspace": false
}
},
"formats": {
"log" : {
"type" : "logRegex",
"extension" : "log",
"regex" : "(\\d{6})\\s(\\d{2}:\\d{2}:\\d{2})\\s+(\\d+)\\s(\\w+)\\s+(.+)",
"maxErrors": 10,
"schema": [
{
"fieldName": "eventDate",
"fieldType": "DATE",
"format": "yyMMdd"
},
{
"fieldName": "eventTime",
"fieldType": "TIME",
"format": "HH:mm:ss"
},
{
"fieldName": "PID",
"fieldType": "INT"
},
{
"fieldName": "action"
},
{
"fieldName": "query"
}
]
}
}
}
}
}
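Once the plugin saves without an error, you can sanity-check it by querying a .log file from one of the workspaces defined above; the file name below (mysql.log under the tmp workspace) is just a placeholder:

SELECT `eventDate`, `eventTime`, `PID`, `action`, `query`
FROM dfs.tmp.`mysql.log`
LIMIT 10;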
Related
I'm trying to send the content of JSON files into Elasticsearch.
Each file contains only one simple JSON object (just attributes, no arrays, no nested objects). Filebeat sees the files, but they're not sent to Elasticsearch (it works with CSV files, so the connection is correct)...
Here is the JSON file (all on one line in the file, but I ran it through a JSON formatter to display it here):
{
"IPID": "3782",
"Agent": "localhost",
"User": "vtom",
"Script": "/opt/vtom/scripts/scriptOK.ksh",
"Arguments": "",
"BatchQueue": "queue_ksh-json",
"VisualTOMServer": "labc",
"Job": "testJSONlogs",
"Application": "test_CAD",
"Environment": "TEST",
"JobRetry": "0",
"LabelPoint": "0",
"ExecutionMode": "NORMAL",
"DateName": "TEST_CAD",
"DateValue": "05/11/2022",
"DateStart": "2022-11-05",
"TimeStart": "20:58:14",
"StandardOutputName": "/opt/vtom/logs/TEST_test_CAD_testJSONlogs_221105-205814.o",
"StandardOutputContent": "_______________________________________________________________________\nVisual TOM context of the job\n \nIPID : 3782\nAgent : localhost\nUser : vtom\nScript : ",
"ErrorOutput": "/opt/vtom/logs/TEST_test_CAD_testJSONlogs_221105-205814.e",
"ErrorOutputContent": "",
"JsonOutput": "/opt/vtom/logs/TEST_test_CAD_testJSONlogs_221105-205814.json",
"ReturnCode": "0",
"Status": "Finished"
}
The input definition in Filebeat is (it's a merge of data from different web sources):
- type: filestream
id: vtomlogs
enabled: true
paths:
- /opt/vtom/logs/*.json
index: vtomlogs-%{+YYYY.MM.dd}
parsers:
- ndjson:
keys_under_root: true
overwrite_keys: true
add_error_key: true
expand_keys: true
The definition of the index template:
{
"properties": {
"IPID": {
"coerce": true,
"index": true,
"ignore_malformed": false,
"store": false,
"type": "integer",
"doc_values": true
},
"VisualTOMServer": {
"type": "keyword"
},
"Status": {
"type": "keyword"
},
"Agent": {
"type": "keyword"
},
"Script": {
"type": "text"
},
"User": {
"type": "keyword"
},
"ErrorOutputContent": {
"type": "text"
},
"ReturnCode": {
"type": "integer"
},
"BatchQueue": {
"type": "keyword"
},
"StandardOutputName": {
"type": "text"
},
"DateStart": {
"format": "yyyy-MM-dd",
"index": true,
"ignore_malformed": false,
"store": false,
"type": "date",
"doc_values": true
},
"Arguments": {
"type": "text"
},
"ExecutionMode": {
"type": "keyword"
},
"DateName": {
"type": "keyword"
},
"TimeStart": {
"format": "HH:mm:ss",
"index": true,
"ignore_malformed": false,
"store": false,
"type": "date",
"doc_values": true
},
"JobRetry": {
"type": "integer"
},
"LabelPoint": {
"type": "keyword"
},
"DateValue": {
"format": "dd/MM/yyyy",
"index": true,
"ignore_malformed": false,
"store": false,
"type": "date",
"doc_values": true
},
"JsonOutput": {
"type": "text"
},
"StandardOutputContent": {
"type": "text"
},
"Environment": {
"type": "keyword"
},
"ErrorOutput": {
"type": "text"
},
"Job": {
"type": "keyword"
},
"Application": {
"type": "keyword"
}
}
}
The file is seen by Filebeat but it does nothing with it...
0100","log.logger":"input.filestream","log.origin":{"file.name":"filestream/prospector.go","file.line":177},"message":"A new file /opt/vtom/logs/TEST_test_CAD_testJSONlogs_221106-124138.json has been found","service.name":"filebeat","id":"vtomlogs","prospector":"file_prospector","operation":"create","source_name":"native::109713280-64768","os_id":"109713280-64768","new_path":"/opt/vtom/logs/TEST_test_CAD_testJSONlogs_221106-124138.json","ecs.version":"1.6.0"}
My version of Elasticsearch is: 8.4.3
My version of Filebeat is: 8.5.0 (with allow_older_versions: true in my configuration file)
Thanks for your help
I'm new to JSON Schema validation. I think the validation should fail, but it passes. I'm not sure why the if/then is not forcing the required fields. I believe I formatted the if/then correctly.
JSON:
{
"name": "Battery Wear",
"triggerAlert": {
"trigger": "When",
"timeSpan": 50,
"timeSpanMeasure": "Hours"
}
}
SCHEMA:
{
"$schema": "http://json-schema.org/draft-07/schema",
"type": "object",
"required": [
"name",
"triggerAlert"
],
"properties": {
"name": {
"type": "string"
},
"triggerAlert": {
"type": "object",
"required": ["trigger"],
"properties": {
"trigger": {
"type": "string",
"enum": ["Always","When"]
},
"numberOfEvents": {
"type": "integer"
},
"timeSpan": {
"type": "integer"
},
"timeSpanMeasure": {
"type": "string"
}
},
"if": { "properties": {"trigger": {"enum": ["When"]} } },
"then": {
"required": [
"numberOfEvents",
"timeSpan",
"timeSpanMeasure"
]
}
}
}
}
Depending on the implementation you are using, you may not have support for these conditionals; if/then/else was only added in draft 7 of the specification.
The schema is correct; the expected error result is:
{
"errors" : [
{
"error" : "missing property: numberOfEvents",
"instanceLocation" : "/triggerAlert",
"keywordLocation" : "/properties/triggerAlert/then/required"
},
{
"error" : "subschema is not valid",
"instanceLocation" : "/triggerAlert",
"keywordLocation" : "/properties/triggerAlert/then"
},
{
"error" : "not all properties are valid",
"instanceLocation" : "",
"keywordLocation" : "/properties"
}
],
"valid" : false
}
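For comparison, an instance that includes the fields required by the then branch validates successfully against the same schema (the numberOfEvents value here is just illustrative):

{
  "name": "Battery Wear",
  "triggerAlert": {
    "trigger": "When",
    "numberOfEvents": 3,
    "timeSpan": 50,
    "timeSpanMeasure": "Hours"
  }
}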
I am trying to deploy a MySQL Flexible Server cluster using ARM templates and Terraform (since Terraform doesn't have a resource for mysql_flexible), but it fails with the following "Internal Server Error" and no meaningful information.
Please provide string value for 'version' (? for help): 5.7
{"status":"Failed","error":{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"Conflict","message":"{\r\n "status": "Failed",\r\n "error": {\r\n "code": "ResourceDeploymentFailure",\r\n "message": "The resource operation completed with terminal provisioning state 'Failed'.",\r\n "details": [\r\n {\r\n "code": "InternalServerError",\r\n "message": "An unexpected error occured while processing the request. Tracking ID: 'b8ab3a01-d4f2-40d5-92cf-2c9a239bdac3'"\r\n }\r\n ]\r\n }\r\n}"}]}}
There's not much information when I paste this tracking ID in Azure Activity Log.
Here's my sample template.json file which I am using.
{
"$schema" : "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"administratorLogin" : {
"type" : "String"
},
"administratorLoginPassword" : {
"type" : "SecureString"
},
"availabilityZone" : {
"type" : "String"
},
"location" : {
"type" : "String"
},
"name" : {
"type" : "String"
},
"version" : {
"type" : "String"
}
},
"resources" : [
{
"apiVersion" : "2021-05-01-preview",
"identity" : {
"type" : "SystemAssigned"
},
"location" : "eastus",
"name" : "mysql-abcd-eastus",
"properties" : {
"administratorLogin" : "randomuser",
"administratorLoginPassword" : "randompasswd",
"availabilityZone" : "1",
"backup" : {
"backupRetentionDays" : "7",
"geoRedundantBackup" : "Disabled"
},
"createMode" : "Default",
"highAvailability" : {
"mode" : "Enabled",
"standbyAvailabilityZone" : "2"
},
"network" : {
"delegatedSubnetResourceId" : "myactualsubnetid",
"privateDnsZoneResourceId" : "myactualprivatednszoneid"
},
"version" : "[parameters('version')]"
},
"sku" : {
"name" : "Standard_E4ds_v4",
"tier" : "MemoryOptimized"
},
"type" : "Microsoft.DBforMySQL/flexibleServers"
}
]
}
I tested your code and faced the same issue. As a solution, you can try the code below:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "yourresourcegroup"
}
resource "azurerm_resource_group_template_deployment" "example" {
name = "acctesttemplate-01"
resource_group_name = data.azurerm_resource_group.example.name
parameters_content = jsonencode({
"administratorLogin"= {
"value"= "sqladmin"
},
"administratorLoginPassword"= {
"value": "password"
},
"location"= {
"value": "eastus"
},
"serverName"= {
"value"= "ansumantestsql1234"
},
"serverEdition"= {
"value"= "GeneralPurpose"
},
"vCores"= {
"value"= 2
},
"storageSizeGB"= {
"value"= 64
},
"haEnabled"= {
"value"= "ZoneRedundant"
},
"availabilityZone"= {
"value"= "1"
},
"standbyAvailabilityZone"= {
"value"= "2"
},
"version"= {
"value"= "5.7"
},
"tags"= {
"value"= {}
},
"firewallRules"= {
"value"= {
"rules"= []
}
},
"backupRetentionDays"= {
"value"= 7
},
"geoRedundantBackup"= {
"value"= "Disabled"
},
"vmName"= {
"value"= "Standard_D2ds_v4"
},
"publicNetworkAccess"= {
"value"= "Enabled"
},
"storageIops"= {
"value": 1000
},
"storageAutogrow"= {
"value"= "Enabled"
},
"vnetData"= {
"value"= {
"virtualNetworkName"= "testVnet",
"subnetName"= "testSubnet",
"virtualNetworkAddressPrefix"= "10.0.0.0/16",
"virtualNetworkResourceGroupName"= "[resourceGroup().name]",
"location"= "eastus2",
"subscriptionId"= "[subscription().subscriptionId]",
"subnetProperties"= {},
"isNewVnet"= false,
"subnetNeedsUpdate"= false,
"Network"= {}
}
},
"infrastructureEncryption"= {
"value"= "Disabled"
}
})
template_content = <<DEPLOY
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"administratorLogin": {
"type": "string"
},
"administratorLoginPassword": {
"type": "securestring"
},
"location": {
"type": "string"
},
"serverName": {
"type": "string"
},
"serverEdition": {
"type": "string"
},
"vCores": {
"type": "int",
"defaultValue": 4
},
"storageSizeGB": {
"type": "int"
},
"haEnabled": {
"type": "string",
"defaultValue": "Disabled"
},
"availabilityZone": {
"type": "string"
},
"standbyAvailabilityZone": {
"type": "string"
},
"version": {
"type": "string"
},
"tags": {
"type": "object",
"defaultValue": {}
},
"firewallRules": {
"type": "object",
"defaultValue": {}
},
"backupRetentionDays": {
"type": "int"
},
"geoRedundantBackup": {
"type": "string"
},
"vmName": {
"type": "string",
"defaultValue": "Standard_B1ms"
},
"publicNetworkAccess": {
"type": "string",
"metadata": {
"description": "Value should be either Enabled or Disabled"
}
},
"storageIops": {
"type": "int"
},
"storageAutogrow": {
"type": "string",
"defaultValue": "Enabled"
},
"vnetData": {
"type": "object",
"metadata": {
"description": "Vnet data is an object which contains all parameters pertaining to vnet and subnet"
},
"defaultValue": {
"virtualNetworkName": "testVnet",
"subnetName": "testSubnet",
"virtualNetworkAddressPrefix": "10.0.0.0/16",
"virtualNetworkResourceGroupName": "[resourceGroup().name]",
"location": "westus2",
"subscriptionId": "[subscription().subscriptionId]",
"subnetProperties": {},
"isNewVnet": false,
"subnetNeedsUpdate": false,
"Network": {}
}
},
"infrastructureEncryption": {
"type": "string"
}
},
"variables": {
"api": "2021-05-01-preview",
"firewallRules": "[parameters('firewallRules').rules]"
},
"resources": [
{
"apiVersion": "[variables('api')]",
"location": "[parameters('location')]",
"name": "[parameters('serverName')]",
"properties": {
"version": "[parameters('version')]",
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"publicNetworkAccess": "[parameters('publicNetworkAccess')]",
"Network": "[if(empty(parameters('vnetData').Network), json('null'), parameters('vnetData').Network)]",
"Storage": {
"StorageSizeGB": "[parameters('storageSizeGB')]",
"Iops": "[parameters('storageIops')]",
"Autogrow": "[parameters('storageAutogrow')]"
},
"Backup": {
"backupRetentionDays": "[parameters('backupRetentionDays')]",
"geoRedundantBackup": "[parameters('geoRedundantBackup')]"
},
"availabilityZone": "[parameters('availabilityZone')]",
"highAvailability": {
"mode": "[parameters('haEnabled')]",
"standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
},
"dataencryption": {
"infrastructureEncryption": "[parameters('infrastructureEncryption')]"
}
},
"sku": {
"name": "[parameters('vmName')]",
"tier": "[parameters('serverEdition')]",
"capacity": "[parameters('vCores')]"
},
"tags": "[parameters('tags')]",
"type": "Microsoft.DBforMySQL/flexibleServers"
},
{
"condition": "[greater(length(variables('firewallRules')), 0)]",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2019-08-01",
"name": "[concat('firewallRules-', copyIndex())]",
"copy": {
"count": "[if(greater(length(variables('firewallRules')), 0), length(variables('firewallRules')), 1)]",
"mode": "Serial",
"name": "firewallRulesIterator"
}
}
]
}
DEPLOY
deployment_mode = "Incremental"
}
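Assuming this is saved as main.tf and the referenced resource group already exists, the deployment can then be run with the usual Terraform workflow:

terraform init
terraform plan
terraform apply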
Output: (screenshots of the deployment result omitted)
I want to do fuzzy matching on emails or telephone numbers with Elasticsearch. For example:
match all emails that end with @gmail.com
or
match all telephone numbers that start with 136.
I know I can use a wildcard query,
{
"query": {
"wildcard" : {
"email": "*gmail.com"
}
}
}
but the performance is very poor. I tried to use a regexp query:
{"query": {"regexp": {"email": {"value": "*163\.com*"} } } }
but it doesn't work.
Is there a better way to do this?
curl -XGET localhost:9200/user_data
{
"user_data": {
"aliases": {},
"mappings": {
"user_data": {
"properties": {
"address": {
"type": "string"
},
"age": {
"type": "long"
},
"comment": {
"type": "string"
},
"created_on": {
"type": "date",
"format": "dateOptionalTime"
},
"custom": {
"properties": {
"key": {
"type": "string"
},
"value": {
"type": "string"
}
}
},
"gender": {
"type": "string"
},
"name": {
"type": "string"
},
"qq": {
"type": "string"
},
"tel": {
"type": "string"
},
"updated_on": {
"type": "date",
"format": "dateOptionalTime"
}
}
}
},
"settings": {
"index": {
"creation_date": "1458832279465",
"uuid": "Fbmthc3lR0ya51zCnWidYg",
"number_of_replicas": "1",
"number_of_shards": "5",
"version": {
"created": "1070299"
}
}
},
"warmers": {}
}
}
the mapping:
{
"settings": {
"analysis": {
"analyzer": {
"index_phone_analyzer": {
"type": "custom",
"char_filter": [ "digit_only" ],
"tokenizer": "digit_edge_ngram_tokenizer",
"filter": [ "trim" ]
},
"search_phone_analyzer": {
"type": "custom",
"char_filter": [ "digit_only" ],
"tokenizer": "keyword",
"filter": [ "trim" ]
},
"index_email_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [ "lowercase", "name_ngram_filter", "trim" ]
},
"search_email_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [ "lowercase", "trim" ]
}
},
"char_filter": {
"digit_only": {
"type": "pattern_replace",
"pattern": "\\D+",
"replacement": ""
}
},
"tokenizer": {
"digit_edge_ngram_tokenizer": {
"type": "edgeNGram",
"min_gram": "3",
"max_gram": "15",
"token_chars": [ "digit" ]
}
},
"filter": {
"name_ngram_filter": {
"type": "ngram",
"min_gram": "3",
"max_gram": "20"
}
}
}
},
"mappings" : {
"user_data" : {
"properties" : {
"name" : {
"type" : "string",
"analyzer" : "ik"
},
"age" : {
"type" : "integer"
},
"gender": {
"type" : "string"
},
"qq" : {
"type" : "string"
},
"email" : {
"type" : "string",
"analyzer": "index_email_analyzer",
"search_analyzer": "search_email_analyzer"
},
"tel" : {
"type" : "string",
"analyzer": "index_phone_analyzer",
"search_analyzer": "search_phone_analyzer"
},
"address" : {
"type": "string",
"analyzer" : "ik"
},
"comment" : {
"type" : "string",
"analyzer" : "ik"
},
"created_on" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"updated_on" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"custom": {
"type" : "nested",
"properties" : {
"key" : {
"type" : "string"
},
"value" : {
"type" : "string"
}
}
}
}
}
}
}
An easy way to do this is to create custom analyzers that use an n-gram token filter for emails (see index_email_analyzer and search_email_analyzer below, plus email_url_analyzer for exact email matching) and an edge-ngram tokenizer for phones (see index_phone_analyzer and search_phone_analyzer below).
The full index definition is available below.
PUT myindex
{
"settings": {
"analysis": {
"analyzer": {
"email_url_analyzer": {
"type": "custom",
"tokenizer": "uax_url_email",
"filter": [ "trim" ]
},
"index_phone_analyzer": {
"type": "custom",
"char_filter": [ "digit_only" ],
"tokenizer": "digit_edge_ngram_tokenizer",
"filter": [ "trim" ]
},
"search_phone_analyzer": {
"type": "custom",
"char_filter": [ "digit_only" ],
"tokenizer": "keyword",
"filter": [ "trim" ]
},
"index_email_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [ "lowercase", "name_ngram_filter", "trim" ]
},
"search_email_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [ "lowercase", "trim" ]
}
},
"char_filter": {
"digit_only": {
"type": "pattern_replace",
"pattern": "\\D+",
"replacement": ""
}
},
"tokenizer": {
"digit_edge_ngram_tokenizer": {
"type": "edgeNGram",
"min_gram": "1",
"max_gram": "15",
"token_chars": [ "digit" ]
}
},
"filter": {
"name_ngram_filter": {
"type": "ngram",
"min_gram": "1",
"max_gram": "20"
}
}
}
},
"mappings": {
"your_type": {
"properties": {
"email": {
"type": "string",
"analyzer": "index_email_analyzer",
"search_analyzer": "search_email_analyzer"
},
"phone": {
"type": "string",
"analyzer": "index_phone_analyzer",
"search_analyzer": "search_phone_analyzer"
}
}
}
}
}
Now, let's dissect it piece by piece.
For the phone field, the idea is to index phone values with index_phone_analyzer, which uses an edge-ngram tokenizer in order to index all prefixes of the phone number. So if your phone number is 1362435647, the following tokens will be produced: 1, 13, 136, 1362, 13624, 136243, 1362435, 13624356, 136243564, 1362435647.
Then when searching we use another analyzer search_phone_analyzer which will simply take the input number (e.g. 136) and match it against the phone field using a simple match or term query:
POST myindex
{
"query": {
"term":
{ "phone": "136" }
}
}
For the email field, we proceed in a similar way, in that we index the email values with the index_email_analyzer, which uses an ngram token filter, which will produce all possible tokens of varying length (between 1 and 20 chars) that can be taken from the email value. For instance: john@gmail.com will be tokenized to j, jo, joh, ..., gmail.com, ..., john@gmail.com.
Then when searching, we'll use another analyzer called search_email_analyzer which will take the input and try to match it against the indexed tokens.
POST myindex
{
"query": {
"term":
{ "email": "#gmail.com" }
}
}
The email_url_analyzer analyzer is not used in this example but I've included it just in case you need to match on the exact email value.
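If you want to verify which tokens each analyzer actually produces, the _analyze API is handy; this is just a sketch assuming the index is called myindex as above, with placeholder input values:

curl -XGET 'localhost:9200/myindex/_analyze?analyzer=index_phone_analyzer' -d '1362435647'
curl -XGET 'localhost:9200/myindex/_analyze?analyzer=index_email_analyzer' -d 'john@gmail.com'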
I'm trying to add a custom template for all Logstash indices in Elasticsearch; however, whenever I add one, Logstash raises a 400 error on all the logs and fails to add anything to Elasticsearch.
I'm adding the template using the Elasticsearch REST API:
POST _template/logstash
{
"order": 0,
"template" : "logstash*",
"settings": {
"index.refresh_interval": "5s"
},
"mappings": {
"_default_": {
"_all" : {
"enabled" : true,
"omit_norms": true
},
"dynamic_templates": [
{
"message_field": {
"mapping": {
"index": "analyzed",
"omit_norms": true,
"type": "string"
},
"match_mapping_type": "string",
"match": "message"
}
},
{
"string_fields": {
"mapping": {
"index": "analyzed",
"omit_norms": true,
"type": "string",
"fields": {
"raw": {
"ignore_above": 256,
"index": "not_analyzed",
"type": "string"
}
}
},
"match_mapping_type": "string",
"match": "*"
}
}
],
"properties": {
"geoip": {
"dynamic": true,
"type": "object",
"properties": {
"location": {
"type": "geo_point"
}
}
},
"#version": {
"index": "not_analyzed",
"type": "string"
},
"#fields": {
"type": "object",
"dynamic": true,
"path": "full"
},
"#message": {
"type": "string",
"index": "analyzed"
},
"#source": {
"type": "string",
"index": "not_analyzed"
},
"method": {
"type": "string",
"index": "not_analyzed"
},
"requested": {
"type": "date",
"format": "dateOptionalTime",
"index": "not_analyzed"
},
"response_time": {
"type": "float",
"index": "not_analyzed"
},
"hostname": {
"type": "string",
"index": "not_analyzed"
},
"ip": {
"type": "string",
"index": "not_analyzed"
},
"error": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
You should try to add the template through Logstash instead of using the REST API directly.
In your Logstash configuration:
output {
elasticsearch {
# add additional configurations appropriately
template => # path to the template file you want to use
template_name => "logstash"
template_overwrite => true
}
}
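Once Logstash has restarted and shipped an event, you can confirm that the template was installed with the standard template API (the host below is an assumption):

curl -XGET 'localhost:9200/_template/logstash?pretty'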