I used "fluent-plugin-snmp" and "fluent-plugin-mysql" want to get snmp info and save in MySQL.
but I can't save SNMP info to MySQL
my config code is
<source>
type snmp
tag snmpser1
nodes name, value
host localhost
community public
mib sysDescr.0, sysName.0
method_type get
polling_time 10
polling_type async_run
</source>
<match snmpser*>
@type mysql_bulk
host localhost
database test01
username root
password password
column_names name,value
key_names data01,data02
table new_table
flush_interval 10s
</match>
But it can't save to MySQL. The error info is:
2017-09-08 14:05:53 -0700 [info]: #0 bulk insert values size (table: new_table) => 2
2017-09-08 14:05:53 -0700 [warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2017-09-08 14:05:53 -0700 chunk="558b3f08bf79121a7b4d669b30703220" error_class=Mysql2::Error error="Unknown column 'name' in 'field list'"
2017-09-08 14:05:53 -0700 [warn]: #0 suppressed same stacktrace
2017-09-08 14:05:55 -0700 [info]: #0 bulk insert values size (table: new_table) => 2
2017-09-08 14:05:55 -0700 [warn]: #0 failed to flush the buffer. retry_time=2 next_retry_seconds=2017-09-08 14:05:55 -0700 chunk="558b3f08bf79121a7b4d669b30703220" error_class=Mysql2::Error error="Unknown column 'name' in 'field list'"
So... what should I do?
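One hedged reading of that error: MySQL is rejecting a column called name, which suggests the plugin is using the record keys (name, value) as table column names, while new_table presumably has the columns data01 and data02 that are listed under key_names. With mysql_bulk, column_names is generally the list of MySQL columns and key_names the list of keys in the fluentd record, so the two options above may simply be swapped. A sketch of the match block under that assumption:
<match snmpser*>
@type mysql_bulk
host localhost
database test01
username root
password password
# columns that exist in new_table (assumed: data01, data02)
column_names data01,data02
# keys in the fluentd record (the SNMP "nodes" from the source)
key_names name,value
table new_table
flush_interval 10s
</match>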
When I was on Windows this file worked well, but now I must restart my project on Linux and I don't know how to do it. Below is my logstash.conf.
Command: ./logstash -f logstash.conf
input {
jdbc {
jdbc_connection_string =>"jdbc:mysql://localhost:3306/test"
jdbc_user =>"root"
jdbc_password =>"password"
jdbc_driver_library =>"/usr/share/java/mysql-connector-java-5.1.45.jar"
jdbc_driver_class =>"com.mysql.jdbc.Driver"
schedule =>"* * * * *"
statement =>"SELECT * FROM Pro WHERE last_modificate >:sql_last_value"
use_column_value =>true
tracking_column =>"last_modificate"
}
}
output {
elasticsearch {
hosts =>"localhost:9200"
action =>"update"
document_id =>"%{id}"
doc_as_upsert =>true
index =>"blog"
document_type =>"pro"
}
}
And below is the error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-02-06 16:24:38.441 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-02-06 16:24:38.495 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.6.0"}
[WARN ] 2019-02-06 16:24:58.845 [Converge PipelineAction::Create<main>] elasticsearch - You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], doc_as_upsert=>true, action=>"update", index=>"blog", id=>"0d9b8021264f8db7c25bca76842096f28d088e42d8e84a573b39874bc2c38c19", document_id=>"%{id}", document_type=>"pro", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0d66fa34-7e13-432a-9405-8084af971c1a", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[INFO ] 2019-02-06 16:24:58.963 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-02-06 16:25:00.378 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-02-06 16:25:01.241 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-02-06 16:25:02.692 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-02-06 16:25:02.705 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-02-06 16:25:02.805 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2019-02-06 16:25:02.881 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2019-02-06 16:25:03.044 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-02-06 16:25:03.726 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5e9c3d30 run>"}
[INFO ] 2019-02-06 16:25:03.868 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-02-06 16:25:05.345 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
Wed Feb 06 16:26:05 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Feb 06 16:26:06 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[ERROR] 2019-02-06 16:26:06.396 [Ruby-0-Thread-15: :1] jdbc - Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'"}
{ 2014 rufus-scheduler intercepted an error:
2014 job:
2014 Rufus::Scheduler::CronJob "* * * * *" {}
2014 error:
2014 2014
2014 Sequel::DatabaseConnectionError
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
2014 com.mysql.jdbc.SQLError.createSQLException(com/mysql/jdbc/SQLError.java:965)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3973)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3909)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:873)
2014 com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(com/mysql/jdbc/MysqlIO.java:1710)
2014 com.mysql.jdbc.MysqlIO.doHandshake(com/mysql/jdbc/MysqlIO.java:1226)
2014 com.mysql.jdbc.ConnectionImpl.coreConnect(com/mysql/jdbc/ConnectionImpl.java:2188)
2014 com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(com/mysql/jdbc/ConnectionImpl.java:2219)
2014 com.mysql.jdbc.ConnectionImpl.createNewIO(com/mysql/jdbc/ConnectionImpl.java:2014)
2014 com.mysql.jdbc.ConnectionImpl.<init>(com/mysql/jdbc/ConnectionImpl.java:776)
2014 com.mysql.jdbc.JDBC4Connection.<init>(com/mysql/jdbc/JDBC4Connection.java:47)
2014 java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)
The error is self-explanatory:
Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
Kindly check the configuration credentials. On Linux, you have to go to the /usr/share/logstash directory and then run the following command:
sudo bin/logstash -f /etc/logstash/conf.d/yourfilename.conf
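If the password itself is wrong on the MySQL side, a minimal sketch of checking and resetting it (assuming MySQL 5.7+ syntax and a working administrative account):
-- verify the user/host combinations that exist
SELECT user, host FROM mysql.user WHERE user = 'root';
-- reset the password to what the logstash config expects
ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
Separately, the SSL warnings above can be silenced by appending ?useSSL=false to the jdbc_connection_string, e.g. "jdbc:mysql://localhost:3306/test?useSSL=false".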
I have 1 master and 4 minions all running on version 1.2.0. I am planning to upgrade them to 1.3.0. I want this done with minimal downtime.
So I did the following on one minion.
systemctl stop kubelet
yum update kubernetes-1.3.0-0.3.git86dc49a.el7
systemctl start kubelet
Once I bring up the service, I see the following ERROR.
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.215614 9902 kubelet.go:1222] Unable to register node "172.29.240.169" with API server: the body of the request was in an unknown format - accepted media types include: application/json, application/yaml
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.217612 9902 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"172.29.240.169.14b01ded8fb2d07b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.29.240.169", UID:"172.29.240.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node 172.29.240.169 status is now: NodeHasSufficientDisk", Source:api.EventSource{Component:"kubelet", Host:"172.29.240.169"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63626321182, nsec:814949499, loc:(*time.Location)(0x4c8a780)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63626330215, nsec:213372890, loc:(*time.Location)(0x4c8a780)}}, Count:1278, Type:"Normal"}': 'the body of the request was in an unknown format - accepted media types include: application/json, application/yaml' (will not retry!)
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.246100 9902 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"172.29.240.169.14b01ded8fb2fc88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.29.240.169", UID:"172.29.240.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.29.240.169 status is now: NodeHasSufficientMemory", Source:api.EventSource{Component:"kubelet", Host:"172.29.240.169"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63626321182, nsec:814960776, loc:(*time.Location)(0x4c8a780)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63626330215, nsec:213381138, loc:(*time.Location)(0x4c8a780)}}, Count:1278, Type:"Normal"}': 'the body of the request was in an unknown format - accepted media types include: application/json, application/yaml' (will not retry!)
Is v1.2.0 incompatible with v1.3.0? It seems like the issue is a serialization incompatibility? (accepted media types: application/json, application/yaml)
From the master's standpoint:
[root@kubernetes-master ~]# kubectl get nodes
NAME STATUS AGE
172.29.219.105 Ready 3h
172.29.240.146 Ready 3h
172.29.240.168 Ready 3h
172.29.240.169 NotReady 3h
The node that I upgraded is in NotReady state.
As per the documentation, you must upgrade your master components (kube-scheduler, kube-apiserver and kube-controller-manager) before your node components (kubelet, kube-proxy).
https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/
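A rough sketch of that order, reusing the question's own commands (unit and package names vary by distro, so treat these as placeholders rather than exact commands):
# on the master first
systemctl stop kube-apiserver kube-controller-manager kube-scheduler
yum update kubernetes-1.3.0-0.3.git86dc49a.el7
systemctl start kube-apiserver kube-controller-manager kube-scheduler
# then on each minion, one at a time to limit downtime
systemctl stop kubelet kube-proxy
yum update kubernetes-1.3.0-0.3.git86dc49a.el7
systemctl start kubelet kube-proxy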
I've had ModSecurity and the OWASP Core Rule Set v2.2.5 installed for some months now, but a JSON endpoint on the site has recently stopped responding, and the Apache log gets the following:
[Tue Jul 21 10:41:12 2015] [error] [client 194.54.11.146] ModSecurity:
Warning. Match of "streq %{SESSION.IP_HASH}" against "TX:ip_hash"
required. [file
"/etc/modsecurity/activated_rules/modsecurity_crs_16_session_hijacking.conf"]
[line "35"] [id "981059"] [msg "Warning - Sticky SessionID Data
Changed - IP Address Mismatch."] [hostname "************"] [uri
"/api/campaigns/d3c735cb-0773-11e4-98bd-02f651afdab5"] [unique_id
"Va4hyKwfKiYAAAYSLigAAAAJ"]
[Tue Jul 21 10:41:12 2015] [error] [client 194.54.11.146] ModSecurity:
Warning. Match of "streq %{SESSION.UA_HASH}" against "TX:ua_hash"
required. [file
"/etc/modsecurity/activated_rules/modsecurity_crs_16_session_hijacking.conf"]
[line "36"] [id "981060"] [msg "Warning - Sticky SessionID Data
Changed - User-Agent Mismatch."] [hostname "************"] [uri
"/api/campaigns/d3c735cb-0773-11e4-98bd-02f651afdab5"] [unique_id
"Va4hyKwfKiYAAAYSLigAAAAJ"]
[Tue Jul 21 10:41:12 2015] [error] [client 194.54.11.146] ModSecurity:
Warning. Operator EQ matched 2 at TX:sticky_session_anomaly. [file
"/etc/modsecurity/activated_rules/modsecurity_crs_16_session_hijacking.conf"]
[line "37"] [id "981061"] [msg "Possible Session Hijacking - IP
Address and User-Agent Mismatch."] [hostname "************"] [uri
"/api/campaigns/d3c735cb-0773-11e4-98bd-02f651afdab5"] [unique_id
"Va4hyKwfKiYAAAYSLigAAAAJ"]
[Tue Jul 21 10:41:12 2015] [error] [client 194.54.11.146] ModSecurity:
Warning. Match of "rx ^%{tx.allowed_request_content_type}$" against
"TX:0" required. [file
"/etc/modsecurity/activated_rules/modsecurity_crs_30_http_policy.conf"]
[line "64"] [id "960010"] [msg "Request content type is not allowed by
policy"] [data "application/json"] [severity "WARNING"] [tag
"POLICY/ENCODING_NOT_ALLOWED"] [tag "WASCTC/WASC-20"] [tag
"OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/EE2"] [tag "PCI/12.1"]
[hostname "************"] [uri
"/api/campaigns/d3c735cb-0773-11e4-98bd-02f651afdab5"] [unique_id
"Va4hyKwfKiYAAAYSLigAAAAJ"]
I'm new to mod_security and the OWASP rules (I basically followed the guide here), but as I understand it, rules are scored, and if a request exceeds a threshold, it's blocked. I assume this is what I'm seeing here.
The final one is the one that concerns me - "application/json" should certainly be allowed. From looking at /etc/modsecurity/modsecurity_crs_10_setup.conf, I see:
setvar:'tx.allowed_request_content_type=application/x-www-form-urlencoded|multipart/form-data|text/xml|application/xml|application/x-amf'
My question is:
1. Can I just add application/json in here to make the error go away?
2. Is that the correct way to do it?
Yes, you can, so that it reads like this:
setvar:'tx.allowed_request_content_type=application/x-www-form-urlencoded|multipart/form-data|text/xml|application/xml|application/x-amf|application/json'
Yes, that is the correct way to do it.
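For completeness: that setvar line lives inside a SecAction in /etc/modsecurity/modsecurity_crs_10_setup.conf, and Apache needs to re-read its configuration before the change takes effect. A hedged sketch, assuming a Debian-style Apache layout:
sudo apachectl configtest
sudo service apache2 reload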
My Rails app generates the following server log:
Processing by ResponsesController#create as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"WJBM8CJQaLxkRfC3/cGimaEytM3EvqCWhOmfX1WBRQA=", "response"=>{"q_id"=>"4", "taker_id"=>"10", "value"=>"4", "text"=>"fsfsf"}, "commit"=>"Create Response", "locale"=>"en"}
Question Load (0.1ms) SELECT "questions".* FROM "questions" WHERE "questions"."id" = ? LIMIT 1 [["id", "4"]]
(0.1ms) begin transaction
SQL (0.4ms) INSERT INTO "responses" ("created_at", "q_id", "taker_id", "text", "updated_at", "value") VALUES (?, ?, ?, ?, ?, ?) [["created_at", Fri, 11 Oct 2013 18:54:10 UTC +00:00], ["q_id", 4], ["taker_id", 10], ["text", "fsfsf"], ["updated_at", Fri, 11 Oct 2013 18:54:10 UTC +00:00], ["value", 4]]
(2.3ms) commit transaction
default_url_options is passed options: {}
However, there is no actual change I can see in my local MySQL DB (on macOS):
mysql> use medisupply
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from responses
-> ;
Empty set (0,00 sec)
Any ideas what could be the reason for that? It's driving me insane.
My database.yml config:
development:
adapter: mysql
encoding: utf8
reconnect: false
host: localhost
database: medisupply
pool: 5
username: root
password:
socket: /tmp/mysql.sock
Thanks heaps.
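One hedged observation that may explain this: the log output above shows "begin transaction", double-quoted identifiers, and ? placeholders, which is what ActiveRecord typically prints for SQLite, not MySQL, so the app may not be talking to the database you are inspecting. A quick way to check which connection is actually in use (method names assume a Rails 3.x-era app, as the 2013 log suggests):
# in `rails console`, development environment
ActiveRecord::Base.connection.adapter_name      # expect "Mysql" or "Mysql2"
ActiveRecord::Base.connection_config[:database] # expect "medisupply"; may vary by Rails version
Also note that more recent Rails versions expect adapter: mysql2 (backed by the mysql2 gem) rather than adapter: mysql in database.yml.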
I am running a MySQL recipe that is failing. When I do vagrant up after a halt, it claims that the grants.sql template's checksum has changed, causing it to re-run when it shouldn't.
[default] [Wed, 28 Mar 2012 12:58:48 -0700] INFO: Processing template[/etc/mysql/grants.sql] action create (mysql::server line 128)
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] DEBUG: Current content's checksum: 3992e44304b56cebdbd4bf23183ddd78f877539c025227546e19098b0b5872ca
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] DEBUG: Rendered content's checksum: f967f212b3e7b25a08ed35d086938846c188f6e9980a1ecc42635136841587a4
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] INFO: template[/etc/mysql/grants.sql] backed up to /var/chef/backup/etc/mysql/grants.sql.chef-20120328125848
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] INFO: template[/etc/mysql/grants.sql] updated content
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] INFO: template[/etc/mysql/grants.sql] sending run action to execute[mysql-install-privileges] (immediate)
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] INFO: Processing execute[mysql-install-privileges] action run (mysql::server line 137)
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] INFO: execute[mysql-install-privileges] sh(/usr/bin/mysql -u root -p"evanta" < /etc/mysql/grants.sql)
: stdout
[default] [Wed, 28 Mar 2012 12:58:48 -0700] ERROR: execute[mysql-install-privileges] (mysql::server line 137) has had an error
[Wed, 28 Mar 2012 12:58:48 -0700] ERROR: template[/etc/mysql/grants.sql] (/tmp/vagrant-chef-1/chef-solo-1/mysql/recipes/server.rb:128:in `rescue in from_file') had an error:
execute[mysql-install-privileges] (mysql::server line 137) had an error: Chef::Exceptions::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /usr/bin/mysql -u root -p"evanta" < /etc/mysql/grants.sql ----
STDOUT:
STDERR: ERROR 1396 (HY000) at line 12: Operation CREATE USER failed for 'root'@'%'
---- End output of /usr/bin/mysql -u root -p"evanta" < /etc/mysql/grants.sql ----
Ran /usr/bin/mysql -u root -p"evanta" < /etc/mysql/grants.sql returned 1
Any ideas how these checksums are computed and how to fix this?
The best way to debug these issues is to take a look at the new file it created, in this case at /etc/mysql/grants.sql, and then to look at the backup at /var/chef/backup/etc/mysql/grants.sql.chef-20120328125848 (from the logs you posted). The backup is always made, so you can compare the contents of the two, and proceed to fix the Chef recipe to make sure it generates the same content.
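For example, with the paths taken from the log above, a plain diff shows exactly what Chef thinks changed:
diff /var/chef/backup/etc/mysql/grants.sql.chef-20120328125848 /etc/mysql/grants.sql
As a separate issue, ERROR 1396 on CREATE USER usually means the 'root'@'%' user already exists, so even once the checksum churn is fixed, the grants.sql script may need to be made idempotent (e.g. skipping or dropping users that already exist).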