How to connect MySQL with Logstash?

This file worked well when I was on Windows, but now I must restart my project on Linux and I don't know how to do it. Below is my logstash.conf:
Command: ./logstash -f logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    jdbc_user => "root"
    jdbc_password => "password"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.45.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"
    statement => "SELECT * FROM Pro WHERE last_modificate > :sql_last_value"
    use_column_value => true
    tracking_column => "last_modificate"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    action => "update"
    document_id => "%{id}"
    doc_as_upsert => true
    index => "blog"
    document_type => "pro"
  }
}
And below is the error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-02-06 16:24:38.441 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-02-06 16:24:38.495 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.6.0"}
[WARN ] 2019-02-06 16:24:58.845 [Converge PipelineAction::Create<main>] elasticsearch - You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], doc_as_upsert=>true, action=>"update", index=>"blog", id=>"0d9b8021264f8db7c25bca76842096f28d088e42d8e84a573b39874bc2c38c19", document_id=>"%{id}", document_type=>"pro", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0d66fa34-7e13-432a-9405-8084af971c1a", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[INFO ] 2019-02-06 16:24:58.963 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-02-06 16:25:00.378 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-02-06 16:25:01.241 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-02-06 16:25:02.692 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-02-06 16:25:02.705 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-02-06 16:25:02.805 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2019-02-06 16:25:02.881 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2019-02-06 16:25:03.044 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-02-06 16:25:03.726 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5e9c3d30 run>"}
[INFO ] 2019-02-06 16:25:03.868 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-02-06 16:25:05.345 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
Wed Feb 06 16:26:05 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Feb 06 16:26:06 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[ERROR] 2019-02-06 16:26:06.396 [Ruby-0-Thread-15: :1] jdbc - Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'"}
{ 2014 rufus-scheduler intercepted an error:
2014 job:
2014 Rufus::Scheduler::CronJob "* * * * *" {}
2014 error:
2014 2014
2014 Sequel::DatabaseConnectionError
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
2014 com.mysql.jdbc.SQLError.createSQLException(com/mysql/jdbc/SQLError.java:965)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3973)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3909)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:873)
2014 com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(com/mysql/jdbc/MysqlIO.java:1710)
2014 com.mysql.jdbc.MysqlIO.doHandshake(com/mysql/jdbc/MysqlIO.java:1226)
2014 com.mysql.jdbc.ConnectionImpl.coreConnect(com/mysql/jdbc/ConnectionImpl.java:2188)
2014 com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(com/mysql/jdbc/ConnectionImpl.java:2219)
2014 com.mysql.jdbc.ConnectionImpl.createNewIO(com/mysql/jdbc/ConnectionImpl.java:2014)
2014 com.mysql.jdbc.ConnectionImpl.<init>(com/mysql/jdbc/ConnectionImpl.java:776)
2014 com.mysql.jdbc.JDBC4Connection.<init>(com/mysql/jdbc/JDBC4Connection.java:47)
2014 java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)

As the error is self-explanatory:
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
Kindly check the configuration credentials. On Linux you have to go to the /usr/share/logstash directory and then run the following command:
sudo bin/logstash -f /etc/logstash/conf.d/yourfilename.conf
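Since the failure is a MySQL authentication error rather than a Logstash one, it is worth confirming the credentials outside Logstash first. A minimal sketch, assuming a local MySQL server and the test database from the config above; the dedicated logstash user shown here is a hypothetical example, and useSSL=false merely silences the SSL warning visible in the log:

# 1) verify the credentials by hand
mysql -h 127.0.0.1 -P 3306 -u root -p test

# 2) or create a dedicated user for Logstash instead of root (inside the mysql shell)
CREATE USER 'logstash'@'localhost' IDENTIFIED BY 'password';
GRANT SELECT ON test.* TO 'logstash'@'localhost';
FLUSH PRIVILEGES;

# 3) then point the jdbc input at it and disable SSL explicitly:
#    jdbc_connection_string => "jdbc:mysql://localhost:3306/test?useSSL=false"
#    jdbc_user => "logstash"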

Related

Possible to set multiple slow log and error log on mysql module filebeat?

I have one development server with the following already installed:
elasticsearch
kibana
filebeat
docker
On docker, 2 MariaDB database containers are already running.
I have already set up filebeat for 1 MariaDB database,
with the config in /etc/filebeat/modules.d/mysql.yml like this:
- module: mysql
  # Error logs
  error:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_error.log"]
  # Slow logs
  slowlog:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_slow.log"]
If I need the error log and slow log from the other MariaDB database container as well, do I just change /etc/filebeat/modules.d/mysql.yml like this?
- module: mysql
  # Error logs
  error:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_error.log","/media/dbdev2/data/mysql_error.log"]
  # Slow logs
  slowlog:
    enabled: true
    var.paths: ["/media/dbdev1/data/mysql_slow.log","/media/dbdev2/data/mysql_slow.log"]
My expectation is that filebeat can pull mysql_error.log from the 2 different MariaDB containers, each from its own path.
Some filebeat setup log:
2021-10-25T10:43:25.616+0700 INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2021-10-25T10:43:25.619+0700 INFO instance/beat.go:673 Beat ID: cb340c7a-15b4-44f7-8a66-06f6850c1c0f
2021-10-25T10:43:26.499+0700 INFO [beat] instance/beat.go:1014 Beat info {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "cb340c7a-15b4-44f7-8a66-06f6850c1c0f"}}}
2021-10-25T10:43:26.501+0700 INFO [beat] instance/beat.go:1023 Build info {"system_info": {"build": {"commit": "5ae799cb1c3c490c9a27b14cb463dc23696bc7d3", "libbeat": "7.15.1", "time": "2021-10-07T22:06:49.000Z", "version": "7.15.1"}}}
2021-10-25T10:43:26.501+0700 INFO [beat] instance/beat.go:1026 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.16.6"}}}
2021-10-25T10:43:26.503+0700 INFO [beat] instance/beat.go:1030 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-10-25T09:45:23+07:00","containerized":false,"name":"localhost.localdomain","ip":["127.0.0.1/8","::1/128","10.0.2.20/24","fe80::a00:27ff:fe8c:82d0/64","192.168.131.5/24","fe80::a00:27ff:fedd:bb9e/64","172.17.0.1/16","172.18.0.1/16","fe80::42:89ff:fe04:e2cb/64","fe80::2c35:92ff:fe88:4daf/64","fe80::38b2:66ff:fe52:b1ec/64"],"kernel_version":"4.18.0-305.19.1.el8_4.x86_64","mac":["08:00:27:8c:82:d0","08:00:27:dd:bb:9e","02:42:ad:3d:07:6b","02:42:89:04:e2:cb","2e:35:92:88:4d:af","3a:b2:66:52:b1:ec"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"8","major":8,"minor":4,"patch":2105},"timezone":"WIB","timezone_offset_sec":25200,"id":"b14f68ad4b8c4732a4cfe379692179ec"}}}
2021-10-25T10:43:26.503+0700 INFO [beat] instance/beat.go:1059 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39"],"ambient":null}, "cwd": "/media/dbdev1", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 3503, "ppid": 1941, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2021-10-25T10:43:23.920+0700"}}}
2021-10-25T10:43:26.503+0700 INFO instance/beat.go:309 Setup Beat: filebeat; Version: 7.15.1
2021-10-25T10:43:26.504+0700 INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.15.1' as ILM is enabled.
2021-10-25T10:43:26.517+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:43:26.521+0700 INFO [publisher] pipeline/module.go:113 Beat name: localhost.localdomain
2021-10-25T10:43:26.585+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:43:26.820+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:43:26.895+0700 INFO [index-management] idxmgmt/std.go:261 Auto ILM enable success.
2021-10-25T10:43:26.929+0700 INFO [index-management.ilm] ilm/std.go:170 ILM policy filebeat exists already.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:401 Set setup.template.name to '{filebeat-7.15.1 {now/d}-000001}' as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:406 Set setup.template.pattern to 'filebeat-7.15.1-*' as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:440 Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.15.1 {now/d}-000001} as ILM is enabled.
2021-10-25T10:43:26.929+0700 INFO [index-management] idxmgmt/std.go:444 Set settings.index.lifecycle.name in template to {filebeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2021-10-25T10:43:26.974+0700 INFO template/load.go:229 Existing template will be overwritten, as overwrite is enabled.
2021-10-25T10:43:28.637+0700 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2021-10-25T10:43:31.539+0700 INFO template/load.go:132 Try loading template filebeat-7.15.1 to Elasticsearch
2021-10-25T10:43:32.442+0700 INFO template/load.go:124 Template with name "filebeat-7.15.1" loaded.
2021-10-25T10:43:32.442+0700 INFO [index-management] idxmgmt/std.go:297 Loaded index template.
2021-10-25T10:43:32.475+0700 INFO [index-management.ilm] ilm/std.go:126 Index Alias filebeat-7.15.1 exists already.
2021-10-25T10:43:32.476+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:43:38.391+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:44:58.953+0700 INFO instance/beat.go:848 Kibana dashboards successfully loaded.
2021-10-25T10:44:58.976+0700 WARN [cfgwarn] instance/beat.go:574 DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
2021-10-25T10:44:58.993+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.006+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.006+0700 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2021-10-25T10:44:59.098+0700 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
2021-10-25T10:44:59.207+0700 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
2021-10-25T10:44:59.207+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.212+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.214+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-10-25T10:44:59.219+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2021-10-25T10:44:59.351+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.1-mysql-error-pipeline"}
2021-10-25T10:44:59.480+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.1-mysql-slowlog-pipeline"}
2021-10-25T10:44:59.480+0700 INFO cfgfile/reload.go:262 Loading of config files completed.
2021-10-25T10:44:59.481+0700 INFO [load] cfgfile/list.go:129 Stopping 1 runners ...

How to manually recreate the bootstrap client certificate for OpenShift 3.11 master?

Our origin-node.service on the master node fails with:
root@master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root@master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contains an expired certificate and the service tries to create a CSR using an endpoint which is probably not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation Redeploying Certificates, but this fails while detecting an expired certificate:
root@master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root@master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
There is a guide for OpenShift 4.4 for Recovering from expired control plane certificates, but that does not apply for 3.11 and we did not find such a guide for our version.
Is it possible to recreate the expired certificates without a running master node for 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root@master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root@master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root@master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment; the error occurs because the certificate was expired. I "cheated" by changing the OS date to before the expiry date, and the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
Active: active (running) since Sáb 2021-02-20 20:22:21 -02; 6min ago
Docs: https://github.com/openshift/origin
Main PID: 37230 (hyperkube)
Memory: 79.0M
CGroup: /system.slice/origin-node.service
└─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
You have mail in /var/spool/mail/okd
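For reference, the clock trick can be done like this; a minimal sketch, assuming a systemd host where NTP would otherwise snap the clock back (the expiry date comes from the cert-expiry report above):

sudo timedatectl set-ntp false            # stop NTP from re-correcting the clock
sudo date -s "2020-02-19 12:00:00"        # any time before the 2020-02-20 expiry
sudo systemctl start origin-node          # the node can now authenticate with the old certs
# once new certificates have been issued, restore the real time
sudo timedatectl set-ntp true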
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running the redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml

Logstash breaking with recent renaming of JDBC MySQL connector

For some reason Logstash with the Elastic Stack X-Pack is breaking. I believe it has to do with the recent renaming of the MySQL connector, which hasn't been updated in the various config files. However, I can't find where the error is originating from in this error log. Additionally, if anyone knows how to rename the actual MySQL connector class from com.mysql.cj.jdbc.Driver back to com.mysql.jdbc.Driver, this should fix everything.
Error Log:
C:\Program Files\logstash-6.3.2\bin>logstash -f sql.conf
Sending Logstash's logs to C:/Program Files/logstash-6.3.2/logs which is now configured via log4j2.properties
[2018-08-31T15:01:52,502][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-31T15:01:53,031][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-31T15:01:55,217][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-31T15:02:06,342][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-31T15:02:06,342][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-31T15:02:06,539][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-31T15:02:06,575][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-31T15:02:06,592][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-31T15:02:06,609][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-31T15:02:06,625][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-31T15:02:06,659][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-31T15:02:06,909][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x598b8674 sleep>"}
[2018-08-31T15:02:06,996][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-31T15:02:07,359][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-08-31T15:02:07,895][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLNonTransientConnectionException: Cannot load connection class because of underlying exception: com.mysql.cj.exceptions.WrongArgumentException: Failed to parse the host:port pair 'localhost:3306;user=test;password=test123;databaseName=test;integratedSecurity=true;'."}
[2018-08-31T15:02:07,916][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Jdbc jdbc_connection_string=>"jdbc:mysql://localhost:3306;user=test;password=test123;databaseName=test;integratedSecurity=true;", jdbc_driver_class=>"com.mysql.cj.jdbc.Driver", jdbc_user=>"doesntmatterwithauthentication", statement=>"SELECT * FROM phones", id=>"de31f73d4505e1de7e76bce4917c48b412909473f3872288edd51acccf0e0be6", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_ad3ad9d5-a3e8-492a-9231-1e729e8c4190", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 01:00:00 +0100}, last_run_metadata_path=>"C:\\Users\\ross.massie/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
Error: Java::JavaSql::SQLNonTransientConnectionException: Cannot load connection class because of underlying exception: com.mysql.cj.exceptions.WrongArgumentException: Failed to parse the host:port pair 'localhost:3306;user=test;password=test123;databaseName=test;integratedSecurity=true;'.
Exception: Sequel::DatabaseConnectionError
Stack: com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:110)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:97)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:89)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:63)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:73)
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(com/mysql/cj/jdbc/exceptions/SQLExceptionsMapping.java:79)
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(com/mysql/cj/jdbc/exceptions/SQLExceptionsMapping.java:131)
com.mysql.cj.jdbc.NonRegisteringDriver.connect(com/mysql/cj/jdbc/NonRegisteringDriver.java:227)
java.lang.reflect.Method.invoke(java/lang/reflect/Method)
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:423)
org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:290)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.adapters.jdbc.connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/adapters/jdbc.rb:215)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.adapters.jdbc.RUBY$method$connect$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/adapters/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/adapters/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.make_new(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool.rb:127)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.RUBY$method$make_new$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.assign_connection(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb:206)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.RUBY$method$assign_connection$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/connection_pool/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.acquire(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb:138)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.RUBY$method$acquire$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/connection_pool/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.hold(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb:90)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.database.connecting.synchronize(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/database/connecting.rb:270)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.database.connecting.test_connection(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/database/connecting.rb:279)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.database.connecting.connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/database/connecting.rb:58)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.core.connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/core.rb:116)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.block in jdbc_connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:114)
org.jruby.RubyKernel.loop(org/jruby/RubyKernel.java:1292)
org.jruby.RubyKernel$INVOKER$s$0$0$loop.call(org/jruby/RubyKernel$INVOKER$s$0$0$loop.gen)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.jdbc_connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:111)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.RUBY$method$jdbc_connect$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/plugin_mixins/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.open_jdbc_connection(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:164)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.RUBY$method$open_jdbc_connection$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/plugin_mixins/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.execute_statement(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:220)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.RUBY$method$execute_statement$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/plugin_mixins/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.execute_query(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:264)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.RUBY$method$execute_query$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/inputs/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.run(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:250)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.RUBY$method$run$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/inputs/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.logstash_minus_core.lib.logstash.pipeline.inputworker(C:/Program Files/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:512)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/logstash_minus_core/lib/logstash/C:/Program Files/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.logstash_minus_core.lib.logstash.pipeline.block in start_input(C:/Program Files/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:505)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:289)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:246)
sql.conf:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test?useSSL=false&serverTimezone=GMT&DatabaseName=test"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_driver_library => "C:\Program Files (x86)\MySQL\Connector J 8.0\mysql-connector-java-8.0.12.jar"
    jdbc_user => "test"
    jdbc_password => "test123"
    statement => "SELECT * FROM phones"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "phones"
  }
}
Rewriting the whole connection string worked after a few tweaks. At the time of the issue, if memory serves, a few driver references I found within X-Pack were pointing at the wrong driver and needed changing.
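The answer above doesn't show the final string, but the parse error gives a strong hint: the failing config packed user, password, and databaseName into the URL with semicolons, which is SQL Server-style syntax, while MySQL JDBC URLs take the database in the path and options after ?key=value&. A minimal sketch of such a rewrite, reusing the names from the question:

input {
  jdbc {
    # database in the URL path; options as ?key=value&... pairs
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test?useSSL=false&serverTimezone=GMT"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    # credentials go in the dedicated settings, not the URL
    jdbc_user => "test"
    jdbc_password => "test123"
    statement => "SELECT * FROM phones"
  }
}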

Logstash pipeline error: error registering jdbc plugin

When I first created the Logstash jdbc conf file to import my MySQL data into Elasticsearch, it worked well. But suddenly the same file that had worked fine stopped working and started giving the error "Error registering plugin".
Here is my sms-logstash.conf file
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sms"
    # The user we wish to execute our statement as
    jdbc_user => "root"
    jdbc_password => ""
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/bin/mysql-connector-java-5.1.45-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # our query
    statement => "SELECT * FROM salon_reg"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
    index => "sms"
    document_type => "salon_reg"
  }
}
When I run this command as bin/logstash -f sms-logstash.conf
It gives the following error
C:\Users\robesh\Downloads\logstash-6.2.3\logstash-6.2.3\bin>logstash -f sms-logstash.conf
Sending Logstash's logs to C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logs which is now configured via log4j2.properties
[2018-04-15T15:05:46,900][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/modules/fb_apache/configuration"}
[2018-04-15T15:05:47,028][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/modules/netflow/configuration"}
[2018-04-15T15:05:47,665][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-04-15T15:05:49,635][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-04-15T15:05:51,303][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-15T15:06:04,935][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], index=>"sms", document_type=>"salon_reg", id=>"7eecf64f77b050d7ebba1e645e2de1d988a4f3d4b88814c75044d6e6c4606a2b", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_cbc6f6b2-287c-44bf-8771-3a951d7ceabf", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-04-15T15:06:05,141][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-04-15T15:06:06,405][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-04-15T15:06:06,426][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-04-15T15:06:07,079][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-04-15T15:06:07,280][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-04-15T15:06:07,289][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-04-15T15:06:07,336][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-04-15T15:06:07,424][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-04-15T15:06:07,533][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-04-15T15:06:08,863][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Jdbc jdbc_connection_string=>\"jdbc:mysql://localhost:3306/sms\", jdbc_user=>\"root\", jdbc_driver_library=>\"C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/bin/mysql-connector-java-5.1.46-bin.jar\", jdbc_driver_class=>\"com.mysql.jdbc.Driver\", statement=>\"SELECT * FROM salon_reg\", id=>\"0c99246377cb88117db974a51d7bdcb982e8fe882ab825575c8ebdc3c890fb5a\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_d687f831-1ac5-4480-b23d-e7fc976f5e9a\", enable_metric=>true, charset=>\"UTF-8\">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>\"info\", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, last_run_metadata_path=>\"C:\\\\Users\\\\robesh/.logstash_jdbc_last_run\", use_column_value=>false, tracking_column_type=>\"numeric\", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>", :error=>"(<unknown>): 'reader' unacceptable code point '\u0000' (0x0) special characters are not allowed\nin \"'reader'\", position 0 at line 0 column 0", :thread=>"#<Thread:0x1ad70417 run>"}
[2018-04-15T15:06:09,386][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Psych::SyntaxError: (<unknown>): 'reader' unacceptable code point ' ' (0x0) special characters are not allowed
in "'reader'", position 0 at line 0 column 0>, :backtrace=>["org/jruby/ext/psych/PsychParser.java:231:in `parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/psych.rb:377:in `parse_stream'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/psych.rb:325:in `parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/psych.rb:252:in `load'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:102:in `read'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:78:in `get_initial'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:36:in `initialize'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:29:in `build_last_value_tracker'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/inputs/jdbc.rb:216:in `register'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:341:in `register_plugin'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:352:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:352:in `register_plugins'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:502:in `start_inputs'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:393:in `start_workers'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:289:in `run'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:249:in `block in start'"], :thread=>"#<Thread:0x1ad70417 run>"}
[2018-04-15T15:06:09,506][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}
C:\Users\robesh\Downloads\logstash-6.2.3\logstash-6.2.3\bin>
The interesting thing here is that this same file was working fine previously and indexed my data nicely into Elasticsearch, but suddenly it's now giving an error.
It may be an encoding problem; ensure that the Logstash config file has UTF-8 encoding.
Make sure your config is correct :
bin/logstash --config.test_and_exit -f <path_to_config_file>
If your config is valid, then look at the $USER_HOME/.logstash_jdbc_last_run file; probably that file exists but isn't valid YAML. Fix what's broken about the file, or just delete it.
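Putting those two checks together; a minimal sketch, noting that on the Windows machine from the question the tracking file sits at %USERPROFILE%\.logstash_jdbc_last_run (per the last_run_metadata_path shown in the error log):

bin/logstash --config.test_and_exit -f sms-logstash.conf
# if the config is valid, remove the stale tracking file and rerun
rm $HOME/.logstash_jdbc_last_run    # on Windows: del "%USERPROFILE%\.logstash_jdbc_last_run"
bin/logstash -f sms-logstash.conf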

Upgrading K8S cluster from v1.2.0 to v1.3.0

I have 1 master and 4 minions all running on version 1.2.0. I am planning to upgrade them to 1.3.0. I want this done with minimal downtime.
So I did the following on one minion.
systemctl stop kubelet
yum update kubernetes-1.3.0-0.3.git86dc49a.el7
systemctl start kubelet
Once I bring up the service, I see the following ERROR.
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.215614 9902 kubelet.go:1222] Unable to register node "172.29.240.169" with API server: the body of the request was in an unknown format - accepted media types include: application/json, application/yaml
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.217612 9902 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"172.29.240.169.14b01ded8fb2d07b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.29.240.169", UID:"172.29.240.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node 172.29.240.169 status is now: NodeHasSufficientDisk", Source:api.EventSource{Component:"kubelet", Host:"172.29.240.169"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63626321182, nsec:814949499, loc:(*time.Location)(0x4c8a780)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63626330215, nsec:213372890, loc:(*time.Location)(0x4c8a780)}}, Count:1278, Type:"Normal"}': 'the body of the request was in an unknown format - accepted media types include: application/json, application/yaml' (will not retry!)
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.246100 9902 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"172.29.240.169.14b01ded8fb2fc88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.29.240.169", UID:"172.29.240.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.29.240.169 status is now: NodeHasSufficientMemory", Source:api.EventSource{Component:"kubelet", Host:"172.29.240.169"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63626321182, nsec:814960776, loc:(*time.Location)(0x4c8a780)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63626330215, nsec:213381138, loc:(*time.Location)(0x4c8a780)}}, Count:1278, Type:"Normal"}': 'the body of the request was in an unknown format - accepted media types include: application/json, application/yaml' (will not retry!)
Is v1.2.0 incompatible with v1.3.0?
It seems like the issue is a JSON incompatibility? (application/json, application/yaml)
From the master's standpoint:
root@master> kubectl get nodes
NAME STATUS AGE
172.29.219.105 Ready 3h
172.29.240.146 Ready 3h
172.29.240.168 Ready 3h
172.29.240.169 NotReady 3h
The node that I upgraded is in NotReady state.
As per the documentation, you must upgrade your master components (kube-scheduler, kube-apiserver, and kube-controller-manager) before your node components (kubelet, kube-proxy).
https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/
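A sketch of that order for this yum-based install; the kubernetes-master and kubernetes-node package names are assumptions based on common RHEL/CentOS packaging, so adjust to whatever packages your repos actually ship:

# on the master first
systemctl stop kube-apiserver kube-controller-manager kube-scheduler
yum update kubernetes-master-1.3.0-0.3.git86dc49a.el7
systemctl start kube-apiserver kube-controller-manager kube-scheduler

# then one minion at a time, to keep downtime minimal
systemctl stop kubelet kube-proxy
yum update kubernetes-node-1.3.0-0.3.git86dc49a.el7
systemctl start kubelet kube-proxy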