I have one development server where I have already installed:
elasticsearch
kibana
filebeat
docker
Docker is already running two MariaDB containers. I have already set up Filebeat for one of the MariaDB databases, with this config in /etc/filebeat/modules.d/mysql.yml:
- module: mysql
  # Error logs
  error:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_error.log"]
  # Slow logs
  slowlog:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_slow.log"]
If I need to collect the error log and slow log from the other MariaDB container as well, do I just change /etc/filebeat/modules.d/mysql.yml like this?
- module: mysql
  # Error logs
  error:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_error.log","/media/dbdev2/data/mysql_error.log"]
  # Slow logs
  slowlog:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_slow.log","/media/dbdev2/data/mysql_slow.log"]
My expectation is that Filebeat can pull mysql_error.log and mysql_slow.log from the two different MariaDB containers, each from its own path.
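For reference, the merged module config can be sanity-checked before restarting the service with Filebeat's own test command (a quick check, assuming the filebeat binary is on the PATH):
# Verify that the full configuration, including the files under modules.d, parses cleanly
filebeat test config
# Confirm which modules are currently enabled
filebeat modules list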
Here is part of the Filebeat setup log:
2021-10-25T10:43:25.616+0700 INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-10-25T10:43:25.619+0700 INFO instance/beat.go:673 Beat ID: cb340c7a-15b4-44f7-8a66-06f6850c1c0f
2021-10-25T10:43:26.499+0700 INFO [beat] instance/beat.go:1014 Beat info {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "cb340c7a-15b4-44f7-8a66-06f6850c1c0f"}}}
2021-10-25T10:43:26.501+0700 INFO [beat] instance/beat.go:1023 Build info {"system_info": {"build": {"commit": "5ae799cb1c3c490c9a27b14cb463dc23696bc7d3", "libbeat": "7.15.1", "time": "2021-10-07T22:06:49.000Z", "version": "7.15.1"}}}
2021-10-25T10:43:26.501+0700 INFO [beat] instance/beat.go:1026 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.16.6"}}}
2021-10-25T10:43:26.503+0700 INFO [beat] instance/beat.go:1030 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-10-25T09:45:23+07:00","containerized":false,"name":"localhost.localdomain","ip":["127.0.0.1/8","::1/128","10.0.2.20/24","fe80::a00:27ff:fe8c:82d0/64","192.168.131.5/24","fe80::a00:27ff:fedd:bb9e/64","172.17.0.1/16","172.18.0.1/16","fe80::42:89ff:fe04:e2cb/64","fe80::2c35:92ff:fe88:4daf/64","fe80::38b2:66ff:fe52:b1ec/64"],"kernel_version":"4.18.0-305.19.1.el8_4.x86_64","mac":["08:00:27:8c:82:d0","08:00:27:dd:bb:9e","02:42:ad:3d:07:6b","02:42:89:04:e2:cb","2e:35:92:88:4d:af","3a:b2:66:52:b1:ec"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"8","major":8,"minor":4,"patch":2105},"timezone":"WIB","timezone_offset_sec":25200,"id":"b14f68ad4b8c4732a4cfe379692179ec"}}}
2021-10-25T10:43:26.503+0700 INFO [beat] instance/beat.go:1059 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"ambient":null}, "cwd": "/media/dbdev1", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 3503, "ppid": 1941, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2021-10-25T10:43:23.920+0700"}}}
2021-10-25T10:43:26.503+0700 INFO instance/beat.go:309 Setup Beat: filebeat; Version: 7.15.1
2021-10-25T10:43:26.504+0700 INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.15.1' as ILM is enabled.
2021-10-25T10:43:26.517+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:43:26.521+0700 INFO [publisher] pipeline/module.go:113 Beat name: localhost.localdomain
2021-10-25T10:43:26.585+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:43:26.820+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:43:26.895+0700 INFO [index-management] idxmgmt/std.go:261 Auto ILM enable success.
2021-10-25T10:43:26.929+0700 INFO [index-management.ilm] ilm/std.go:170 ILM policy filebeat exists already.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:401 Set setup.template.name to '{filebeat-7.15.1 {now/d}-000001}' as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:406 Set setup.template.pattern to 'filebeat-7.15.1-*' as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:440 Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.15.1 {now/d}-000001} as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:444 Set settings.index.lifecycle.name in template to {filebeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2021-10-25T10:43:26.974+0700 INFO template/load.go:229 Existing template will be overwritten, as overwrite is enabled.
2021-10-25T10:43:28.637+0700 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2021-10-25T10:43:31.539+0700 INFO template/load.go:132 Try loading template filebeat-7.15.1 to Elasticsearch
2021-10-25T10:43:32.442+0700 INFO template/load.go:124 Template with name "filebeat-7.15.1" loaded.
2021-10-25T10:43:32.442+0700 INFO [index-management] idxmgmt/std.go:297 Loaded index template.
2021-10-25T10:43:32.475+0700 INFO [index-management.ilm] ilm/std.go:126 Index Alias filebeat-7.15.1 exists already.
2021-10-25T10:43:32.476+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:43:38.391+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:44:58.953+0700 INFO instance/beat.go:848 Kibana dashboards successfully loaded.
2021-10-25T10:44:58.976+0700 WARN [cfgwarn] instance/beat.go:574 DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
2021-10-25T10:44:58.993+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.006+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.006+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:44:59.098+0700 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
2021-10-25T10:44:59.207+0700 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
2021-10-25T10:44:59.207+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.212+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.214+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.219+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.351+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.1-mysql-error-pipeline"}
2021-10-25T10:44:59.480+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.1-mysql-slowlog-pipeline"}
2021-10-25T10:44:59.480+0700 INFO cfgfile/reload.go:262 Loading of config files completed.
2021-10-25T10:44:59.481+0700 INFO [load] cfgfile/list.go:129 Stopping 1 runners ...
I am relatively new to Logstash and Elasticsearch. I installed Logstash and Elasticsearch on macOS Mojave (10.14.2) using:
brew install logstash
brew install elasticsearch
When I check the installed versions:
brew list --versions
I receive the following output:
elasticsearch 6.5.4
logstash 6.5.4
When I open Google Chrome and type this into the URL address field:
localhost:9200
This is the JSON response that I receive:
{
  "name" : "9oJAP16",
  "cluster_name" : "elasticsearch_local",
  "cluster_uuid" : "PgaDRw8rSJi-NDo80v_6gQ",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
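The same response can also be fetched from a terminal, for example:
curl -s 'http://localhost:9200/?pretty'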
Inside /usr/local/etc/logstash/logstash.yml reside the following settings:
path.data: /usr/local/Cellar/logstash/6.5.4/libexec/data
pipeline.workers: 2
path.config: /usr/local/etc/logstash/conf.d
log.level: info
path.logs: /usr/local/var/log
Inside /usr/local/etc/logstash/pipelines.yml reside the following settings:
- pipeline.id: main
  path.config: "/usr/local/etc/logstash/conf.d/*.conf"
I have set up the following logstash_etl.conf file under /usr/local/etc/logstash/conf.d. Its contents:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products"
    jdbc_user => "products_admin"
    jdbc_password => "products123"
    jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "*/5 * * * *"
    statement => "select * from products"
    use_column_value => false
    clean_run => true
  }
}
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
output {
  if ([purge_task] == "yes") {
    exec {
      command => "curl -XPOST 'localhost:9200/_all/products/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
      {
        \"query\": {
          \"range\" : {
            \"@timestamp\" : {
              \"lte\" : \"now-3h\"
            }
          }
        }
      }
      '"
    }
  }
  else {
    stdout { codec => json_lines }
    elasticsearch {
      "hosts" => "localhost:9200"
      "index" => "product_%{product_api_key}"
      "document_type" => "%{[@metadata][index_type]}"
      "document_id" => "%{[@metadata][index_id]}"
      "doc_as_upsert" => true
      "action" => "update"
      "retry_on_conflict" => 7
    }
  }
}
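A quick way to sanity-check the pipeline syntax before starting the service, assuming the Homebrew-installed logstash binary is on the PATH:
# Parse the pipeline file and exit without running it
logstash -f /usr/local/etc/logstash/conf.d/logstash_etl.conf --config.test_and_exit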
When I run:
brew services start logstash
I receive the following inside my /usr/local/var/log/logstash-plain.log file:
[2019-01-15T14:51:15,319][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x399927c7 run>"}
[2019-01-15T14:51:15,663][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-15T14:51:16,514][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-15T14:57:31,432][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
[2019-01-15T14:57:31,435][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
What am I possibly doing wrong?
Is there a way to obtain a dump (e.g. mysqldump) from an Elasticsearch server (Stage or Production) and then reimport into a local instance running Elasticsearch without using logstash?
This is the same configuration file that works inside an Amazon EC2 production instance, so I don't know why it isn't working on my local macOS Mojave machine.
You may be running into the SSL behavior of RDS, since:
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your applications to use SSL/TLS when connecting to your databases, these client drivers default to using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and fail to connect if the database server certificate is expired.
as described in the AWS RDS documentation.
To work around this, either set up a trust store for Logstash, which is described in the same link, or take the risk of disabling SSL in the connection string, like:
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=DISABLED"
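Alternatively, if disabling SSL is not acceptable, the trust-store route from the same AWS document can be expressed directly in the connection string. A sketch, assuming a Connector/J 8.x driver is the one actually being loaded (the com.mysql.cj exception in the log suggests it is) and using placeholder keystore values:
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=VERIFY_CA&trustCertificateKeyStoreUrl=file:///path/to/rds-truststore.jks&trustCertificateKeyStorePassword=<keystore-password>"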
I am trying to connect MongoDB with Hadoop. I have Hadoop 1.2.1 installed on Ubuntu 14.04. I installed MongoDB 3.0.4 and also downloaded and added the mongo-hadoop-hive-1.3.0.jar and mongo-java-driver-2.13.2.jar jars in my Hive session. I also downloaded mongo-connector.sh (found on this site) and placed it under Hadoop_Home/lib.
I have set the input and output sources like this:
hive> set MONGO_INPUT=mongodb://[user:password@]<MongoDB Instance IP>:27017/DBname.collectionName;
hive> set MONGO_OUTPUT=mongodb://[user:password@]<MongoDB Instance IP>:27017/DBname.collectionName;
hive> add JAR brickhouse-0.7.0.jar;
hive> create temporary function collect as 'brickhouse.udf.collect.CollectUDAF';
My collection in MongoDB is this:
> db.shows.find()
{ "_id" : ObjectId("559eb22fa7999b1a5f50e4e6"), "title" : "Arrested Development", "airdate" : "November 2, 2003", "network" : "FOX" }
{ "_id" : ObjectId("559eb238a7999b1a5f50e4e7"), "title" : "Stella", "airdate" : "June 28, 2005", "network" : "Comedy Central" }
{ "_id" : ObjectId("559eb23ca7999b1a5f50e4e8"), "title" : "Modern Family", "airdate" : "September 23, 2009", "network" : "ABC" }
>
Now I am trying to create a Hive table
CREATE EXTERNAL TABLE mongoTest(title STRING,network STRING)
> STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
> WITH SERDEPROPERTIES('mongo.columns.mapping'='{"title":"name",”airdate”:”date”,”network”:”name”}')
> TBLPROPERTIES('mongo.uri'='${hiveconf:MONGO_INPUT}');
When I run this command, it says
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mongodb/util/JSON
Then I added hive-json-serde.jar and hive-serdes-1.0-SNAPSHOT.jar jars and tried to create the table again. But the error remains the same. How can I rectify this error?
I actually added the mongo-hadoop-core-1.3.0.jar, mongo-hadoop-hive-1.3.0.jar, and mongo-java-driver-2.13.2.jar jars to the Hadoop_Home/lib folder. After that I was able to get data from MongoDB into Hive without any errors.
There are smart quotes, ”, which the parser is seeing:
”airdate”:”date”,”network”:”name”
They should be
"airdate":"date","network":"name"
I'm trying to configure the shovel plugin via the config file (running in docker) but I get this error:
BOOT FAILED
===========
Error description:
{error,{failed_to_cluster_with,[rabbit@dalmacpmfd57],
"Mnesia could not connect to any nodes."}}
The config is set up this way because the destination for the shovel will be created on demand when a dev environment is spun up; the source is a permanent RabbitMQ instance that the new dev environment will attach to.
Here is the config file contents:
[
{rabbitmq_shovel,
[{shovels,
[{indexer_replica_static,
[{sources,
[{broker, [ "amqp://guest:guest@rabbitmq/newdev" ]},
{declarations,
[{'queue.declare', [{queue, <<"Indexer_Replica_Static">>}, durable]},
{'queue.bind',[ {exchange, <<"Indexer">>}, {queue, <<"Indexer_Replica_Static">>}]}
]
}
]
},
{destinations,
[{broker, "amqp://"},
{declarations, [ {'exchange.declare', [ {exchange, <<"Indexer_Replica_Static">>}
, {type, <<"fanout">>}, durable]},
{'queue.declare', [
{queue, <<"Indexer_Replica_Static">>},
durable]},
{'queue.bind',
[ {exchange, <<"Indexer_Replica_Static">>}
, {queue, <<"Indexer_Replica_Static">>}
]}
]
}
]
},
{queue, <<"Indexer_Replica_Static">>},
{prefetch_count, 0},
{ack_mode, on_confirm},
{publish_properties, [ {delivery_mode, 2} ]},
{reconnect_delay, 2.5}
]
}
]
},
{reconnect_delay, 2.5}
]
}
].
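For reference, the shovel plugin itself also has to be enabled before a config like this takes effect; with the 3.3.4 install paths shown in the startup report further down, that would be roughly:
# Enable the shovel plugin (and, optionally, its management UI), then restart the broker
/usr/local/rabbitmq_server-3.3.4/sbin/rabbitmq-plugins enable rabbitmq_shovel
/usr/local/rabbitmq_server-3.3.4/sbin/rabbitmq-plugins enable rabbitmq_shovel_management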
[UPDATE]
This is being run in Docker, but since I couldn't debug the issue there, I tried booting RabbitMQ locally with the same config file. I noticed in the logs that the config file environment variable I set (RABBITMQ_CONFIG_FILE) is not reflected in the log, and the shovel settings have not been applied (no surprise there). I verified the variable with an echo statement, and the correct path is displayed: /dev/rabbitmq_server-3.3.4/rabbitmq
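For comparison, a minimal sketch of how that variable is usually exported for the classic config format: RabbitMQ 3.x expects the path without the trailing .config extension, which the broker appends itself, and the variable has to be set in the environment of the process that actually starts the server.
# The broker will then look for /dev/rabbitmq_server-3.3.4/rabbitmq.config
export RABBITMQ_CONFIG_FILE=/dev/rabbitmq_server-3.3.4/rabbitmq
/usr/local/rabbitmq_server-3.3.4/sbin/rabbitmq-server
The startup report from that local run still shows config file(s) : (none):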
=INFO REPORT==== 3-Sep-2014::15:30:37 ===
node : rabbit@dalmacpmfd57
home dir : /Users/e002678
config file(s) : (none)
cookie hash : n6vhh8tY7Z+uR2DV6gcHUg==
log : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57.log
sasl log : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57-sasl.log
database dir : /usr/local/rabbitmq_server-3.3.4/sbin/../var/lib/rabbitmq/mnesia/rabbit@dalmacpmfd57
Thanks!
I am using Elasticsearch version 1.2.0 and JDBC river version 1.2.0.1. The following is my JDBC river command:
curl -XPUT 'localhost:9200/_river/tbl_messages/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "strategy" : "simple",
    "url" : "jdbc:mysql://localhost:3306/messageDB",
    "user" : "username",
    "password" : "password",
    "sql" : "select messageAlias.id as _id,messageAlias.subject as subject from tbl_messages messageAlias",
    "index" : "MessageDb",
    "type" : "tbl_messages",
    "maxbulkactions" : 1000,
    "maxconcurrentbulkactions" : 4,
    "autocommit" : true,
    "schedule" : "0 0-59 0-23 ? * *"
  }
}'
The subject column's index metadata:
subject: {
type: string
}
This table has 2 million records, and the subject field contains arbitrary strings. Some sample values are "You're invited", "{New York:45} We rock!!", "{Invitation:27}", and so on.
My problem is that when the JDBC river encounters one such record with {anything inside braces}, it stalls the river and throws a parsing exception. It never moves on to index the next records.
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [subject]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:418)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeObject(ObjectMapper.java:537)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:479)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:515)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:462)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:394)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:413)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:534)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:433)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: unknown property [Inivitation]
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateFieldForString(StringFieldMapper.java:332)
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateField(StringFieldMapper.java:278)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:408)
... 12 more
Deleting this record in the DB, clearing the data inside ES_HOME/data, and recreating the river seems to be the only way to proceed, until it encounters another record in the same format.
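For reference, the river definition itself can be dropped and re-created with a plain DELETE instead of wiping ES_HOME/data; a sketch:
# Remove just the river definition, then re-issue the PUT above to recreate it
curl -XDELETE 'localhost:9200/_river/tbl_messages/'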
How do I make it continue indexing, irrespective of the exception thrown when parsing a few records?
It is related to Elasticsearch and not to the river:
https://github.com/jprante/elasticsearch-river-jdbc/issues/258
https://github.com/elasticsearch/elasticsearch/issues/2898