Can I concatenate OpenSSL conf files?

If I have a file (my.cnf) containing

[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS ....
DNS ....

and I concatenate it with openssl.cnf (cat openssl.cnf my.cnf > myopenssl.cnf), will openssl parse the new file and add the subjectAltName to the earlier [v3_req] section, or will it overwrite it (so I lose the previous values in [v3_req])?

On OpenSSL 1.1.1 you can concatenate, and OpenSSL will merge sections with the same name.
For example, the following configuration results in a certificate request with basicConstraints = CA:TRUE and all three DNS entries: localhost, localhost.localdomain and 127.0.0.1.
...
[ req_ext ]
basicConstraints = CA:FALSE
[ req_ext ]
basicConstraints = CA:TRUE
subjectAltName = @alternate_names
[ alternate_names ]
DNS.1 = localhost
DNS.2 = localhost.localdomain
[ alternate_names ]
DNS.3 = 127.0.0.1
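A quick way to verify the merged file yourself (a sketch; the key/CSR file names are illustrative and assume the concatenated config is saved as myopenssl.cnf):

openssl req -new -newkey rsa:2048 -nodes -keyout test.key -out test.csr \
    -subj "/CN=localhost" -config myopenssl.cnf -reqexts req_ext
# print the request; both basicConstraints = CA:TRUE and all three DNS entries should show up
openssl req -in test.csr -noout -text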

Collecting JVM heap dump from AWS ElasticBeanstalk

I'm using AWS Elastic Beanstalk with Tomcat as the web server. I would like to debug and log the Java Virtual Machine performance and crash reports, and write them to CloudWatch Logs.
Currently AWS Elastic Beanstalk collects the logs created by the web server, the application server, and the Elastic Beanstalk platform scripts, and you can use CloudWatch Logs as a centralized log system.
How can I collect my custom JVM logs in CloudWatch, as I mentioned at the beginning?
Thanks.
Florin
I have contacted the AWS support team and their answer was:
This can be done using an .ebextensions configuration file to install the awslogs package and define the log file, log_group_name and log_stream_name to stream to CloudWatch.
Kindly find an example below for enabling Catalina logs in CloudWatch for a Tomcat 8 environment.
---.ebextensions/cwatch.config---
packages:
  yum:
    awslogs: []

files:
  "/etc/awslogs/awscli.conf":
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`

  "/etc/awslogs/awslogs.conf":
    mode: "000600"
    owner: root
    group: root
    content: |
      [general]
      state_file = /var/lib/awslogs/agent-state

      [/var/log/tomcat8]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "catalina"]]}`
      log_stream_name = {instance_id}_messages
      file = /var/log/tomcat8/catalina.out
      datetime_format = %b %d %H:%M:%S
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/messages]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "syslog"]]}`
      log_stream_name = {instance_id}_messages
      file = /var/log/messages
      datetime_format = %b %d %H:%M:%S
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-activity.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-activity.log
      file = /var/log/eb-activity.log
      datetime_format = [%Y-%m-%dT%H:%M:%S.%3NZ]
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-cfn-init.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-cfn-init.log
      file = /var/log/eb-cfn-init.log
      datetime_format = [%Y-%m-%dT%H:%M:%S.%3NZ]
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-commandprocessor.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-commandprocessor.log
      file = /var/log/eb-commandprocessor.log
      datetime_format = [%Y-%m-%dT%H:%M:%S.%3NZ]
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-publish-logs.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-publish-logs.log
      file = /var/log/eb-publish-logs.log
      datetime_format = %Y-%m-%d %H:%M:%S,%3N
      initial_position = start_of_file
      buffer_duration = 5000

      [/var/log/eb-tools.log]
      log_group_name = `{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "eb"]]}`
      log_stream_name = {instance_id}_eb-tools.log
      file = /var/log/eb-tools.log
      datetime_format = %Y-%m-%d %H:%M:%S,%3N
      initial_position = start_of_file
      buffer_duration = 5000

commands:
  "01":
    command: chkconfig awslogs on
  "02":
    command: service awslogs restart
---end---
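For reference, this is roughly how such a config is applied and verified (a sketch; the file name is illustrative and it assumes the EB CLI is set up):

mkdir -p .ebextensions
cp cwatch.config .ebextensions/      # the config must live under .ebextensions/ in the app source bundle
eb deploy                            # redeploy so Elastic Beanstalk processes the config
# then, on the instance, the agent's own log is the first place to look:
sudo tail -f /var/log/awslogs.log
ls /etc/awslogs/config/              # any extra per-log config files should appear here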
And here is a simpler version that adds a log file to the default CloudWatch Logs streaming configuration.
Warning: it does not set a retention policy and will not obey the environment's retention policy (e.g. "delete when the environment is deleted").
---.ebextensions/log.config---
files:
  /etc/awslogs/config/mylog.conf:
    owner: root
    group: root
    mode: "000644"
    content:
      Fn::Sub: |
        [/var/log/mylog.log]
        log_group_name=/aws/elasticbeanstalk/${AWSEBEnvironmentName}/var/log/mylog.log
        log_stream_name={instance_id}
        file=/var/log/mylog.log

commands:
  restart_awslogs:
    command: service awslogs restart || service awslogs start
---end---
Note: Do not use the sample provided in production. The goal of the sample is to illustrate the functionality.
I didn't check it myself, but if someone tests this solution, please comment and mark it as valid. If not, I'll delete my answer.

Cygnus didn't update database

I want to subscribe Orion to send notifications to Cygnus, so that Cygnus saves all the data in a MySQL database. I use this script to subscribe to the speed attribute of car1.
(curl 130.206.118.44:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: vehicles' --header 'Fiware-ServicePath: /4wheels' -d @- | python -mjson.tool) <<EOF
{
"entities": [
{
"type": "car",
"isPattern": "false",
"id": "car1"
}
],
"attributes": [
"speed",
"oil_level"
],
"reference": "http://192.168.1.49:5050/notify",
"duration": "P1M",
"notifyConditions": [
{
"type": "ONCHANGE",
"condValues": [
"speed"
]
}
],
"throttling": "PT1S"
}
EOF
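For reference, the kind of update that should trigger the ONCHANGE notification looks like this in NGSIv1 (a sketch; the speed value is illustrative, and the headers must match the ones used in the subscription):

(curl 130.206.118.44:1026/v1/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: vehicles' --header 'Fiware-ServicePath: /4wheels' -d @- | python -mjson.tool) <<EOF
{
    "contextElements": [
        {
            "type": "car",
            "isPattern": "false",
            "id": "car1",
            "attributes": [
                {
                    "name": "speed",
                    "type": "float",
                    "value": "98"
                }
            ]
        }
    ],
    "updateAction": "APPEND"
}
EOF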
But when I update the speed attribute of car1, Cygnus doesn't update the database.
Databases available:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
Some information about my cygnus service and my cygnus configuration (systemctl status cygnus):
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: active (exited) since Wed 2015-10-21 17:54:07 UTC; 8min ago
Process: 31566 ExecStop=/etc/rc.d/init.d/cygnus stop (code=exited, status=0/SUCCESS)
Process: 31588 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=0/SUCCESS)
Oct 21 17:54:05 cygnus systemd[1]: Starting SYSV: cygnus...
Oct 21 17:54:05 cygnus su[31593]: (to cygnus) root on none
Oct 21 17:54:07 cygnus cygnus[31588]: Starting Cygnus mysql... [ OK ]
Oct 21 17:54:07 cygnus systemd[1]: Started SYSV: cygnus.
agent_mysql.conf:
# main configuration
cygnusagent.sources = http-source
cygnusagent.sinks = mysql-sink
cygnusagent.channels = mysql-channel
# source configuration
cygnusagent.sources.http-source.channels = mysql-channel
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnusagent.sources.http-source.port = 5050
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# url target
cygnusagent.sources.http-source.handler.notification_target = /notify
cygnusagent.sources.http-source.handler.default_service = def_serv
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
cygnusagent.sources.http-source.handler.events_ttl = 10
cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
#Orion MysqlSink Configuration
cygnusagent.sinks.mysql-sink.channel = mysql-channel
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
cygnusagent.sinks.mysql-sink.enable_grouping = false
# mysqldb ip
cygnusagent.sinks.mysql-sink.mysql_host = 127.0.0.1
# mysqldb port
cygnusagent.sinks.mysql-sink.mysql_port = 3306
cygnusagent.sinks.mysql-sink.mysql_username = root
cygnusagent.sinks.mysql-sink.mysql_password = 12345
cygnusagent.sinks.mysql-sink.attr_persistence = column
cygnusagent.sinks.mysql-sink.table_type = table-by-destination
# configuracao do canal mysql
cygnusagent.channels.mysql-channel.type = memory
cygnusagent.channels.mysql-channel.capacity = 1000
cygnusagent.channels.mysql-channel.transactionCapacity = 100
After reading this question, I changed this line in my agent_mysql.conf:
cygnusagent.sinks.mysql-sink.attr_persistence = column to cygnusagent.sinks.mysql-sink.attr_persistence = row, and restarted the service. Then I updated the Orion entity, queried the database, and nothing had happened.
Cygnus log file: http://pastebin.com/B2FNKcVf
Note: My JAVA_HOME is set.
As you can see in the logs you posted, there is a problem with the Cygnus log file:
java.io.FileNotFoundException: ./logs/cygnus.log (No such file or directory)
After that, Cygnus stops. You must check your configuration regarding log4j; everything is at /usr/cygnus/conf/log4j.properties (it should exist, since it is created by the RPM; if it does not exist because you installed from sources instead of the RPM, it must be created from the available template). In addition, can you post your instance configuration file? And which version are you running?
EDIT 1
Recently, we have found another user dealing with the same error, and the problem was that the content of the /usr/cygnus/conf/log4j.properties file was:
flume.root.logger=INFO,LOGFILE
flume.log.dir=./log
flume.log.file=flume.log
Instead of what the template contains:
flume.root.logger=INFO,LOGFILE
flume.log.dir=/var/log/cygnus/
flume.log.file=flume.log
Once changed, it worked, because the RPM creates /var/log/cygnus but not ./log.
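Something like this should repair it by hand (a sketch; it assumes the RPM layout, and the cygnus user/group is an assumption to adjust to whatever the service actually runs as):

sudo sed -i 's|^flume.log.dir=.*|flume.log.dir=/var/log/cygnus/|' /usr/cygnus/conf/log4j.properties
sudo mkdir -p /var/log/cygnus
sudo chown cygnus:cygnus /var/log/cygnus   # assumed user/group
sudo service cygnus restart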

Cygnus: Bad HTTP notification (curl/7.29.0 user agent not supported)

I installed Cygnus version 0.8.2 on a FIWARE instance based on the CentOS-7-x64 image, using:
sudo yum install cygnus
I configured my agent as the following:
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = kura_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = kura_
# true if collection names are based on a hash, false for human-readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# number of events that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
I tried to test it locally using the following curl command:
URL=$1
curl $URL -v -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header "Fiware-Service: qsg" --header "Fiware-ServicePath: testsink" -d @- <<EOF
{
"subscriptionId" : "51c0ac9ed714fb3b37d7d5a8",
"originator" : "localhost",
"contextResponses" : [
{
"contextElement" : {
"attributes" : [
{
"name" : "temperature",
"type" : "float",
"value" : "26.5"
}
],
"type" : "Room",
"isPattern" : "false",
"id" : "Room1"
},
"statusCode" : {
"code" : "200",
"reasonPhrase" : "OK"
}
}
]
}
EOF
but I got this exception:
2015-10-06 14:38:50,138 (1251445230#qtp-1186065012-0) [INFO - com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents(OrionRestHandler.java:150)] Starting transaction (1444142307-244-0000000000)
2015-10-06 14:38:50,140 (1251445230#qtp-1186065012-0) [WARN - com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents(OrionRestHandler.java:180)] Bad HTTP notification (curl/7.29.0 user agent not supported)
2015-10-06 14:38:50,140 (1251445230#qtp-1186065012-0) [WARN - org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:186)] Received bad request from client.
org.apache.flume.source.http.HTTPBadRequestException: curl/7.29.0 user agent not supported
at com.telefonica.iot.cygnus.handlers.OrionRestHandler.getEvents(OrionRestHandler.java:181)
at org.apache.flume.source.http.HTTPSource$FlumeHTTPServlet.doPost(HTTPSource.java:184)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:814)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Any idea of what can be the cause of this exception?
Cygnus versions <= 0.8.2 check the HTTP headers, only accepting user agents starting with orion. This has been fixed in 0.9.0 (this is the particular issue). Thus, you have two options:
Avoid sending such a user-agent header. According to the curl documentation, you can use the -A, --user-agent <agent string> option to modify the user agent and send something starting with orion (e.g. orion/0.24.0).
Move to Cygnus 0.9.0 (so you don't have to install it from sources, I'll upload an RPM to the FIWARE repo later today).
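For example, assuming the JSON payload above is saved to notification.json, your earlier test command should be accepted once the user agent is spoofed (the version string is illustrative):

curl $URL -v -s -S -A 'orion/0.24.0' --header 'Content-Type: application/json' --header 'Accept: application/json' --header "Fiware-Service: qsg" --header "Fiware-ServicePath: testsink" -d @notification.json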

Using Fiware Cygnus with mysql sink, no data base has been created

I configured my cygnusagent to use the MySQL sink as below, and I used the notification script from the tutorial [1] to test it, but nothing has changed in my database; no new database has been created.
Any idea what I may have missed? Thanks!
cygnusagent.sources = http-source
cygnusagent.sinks = mysql-sink
cygnusagent.channels = mysql-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mysql-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
#cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
#cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMySQLSink configuration
# channel name from where to read notification events
cygnusagent.sinks.mysql-sink.channel = mysql-channel
# sink class, must not be changed
cygnusagent.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.OrionMySQLSink
# the FQDN/IP address where the MySQL server runs
cygnusagent.sinks.mysql-sink.mysql_host = 192.168.1.107
# the port where the MySQL server listens for incoming connections
cygnusagent.sinks.mysql-sink.mysql_port = 3306
# a valid user in the MySQL server
cygnusagent.sinks.mysql-sink.mysql_username = root
# password for the user above
cygnusagent.sinks.mysql-sink.mysql_password = poiu
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.mysql-sink.attr_persistence = row
#=============================================
# mysql-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mysql-channel.type = memory
# capacity of the channel
cygnusagent.channels.mysql-channel.capacity = 1000
# number of events that can be sent per transaction
cygnusagent.channels.mysql-channel.transactionCapacity = 100
Here are my Cygnus logs after the data was received:
15/07/16 16:42:34 INFO handlers.OrionRestHandler: Starting transaction (1437057740-95-0000000000)
15/07/16 16:42:34 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "51c0ac9ed714fb3b37d7d5a8", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "attributes" : [ { "name" : "temperature", "type" : "centigrade", "value" : "26.5" } ], "type" : "Room", "isPattern" : "false", "id" : "Room1" }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/16 16:42:34 INFO handlers.OrionRestHandler: Event put in the channel (id=119782869, ttl=10)
15/07/16 16:42:34 INFO sinks.OrionSink: Event got from the channel (id=119782869, headers={content-type=application/json, fiware-service=room, fiware-servicepath=room, ttl=10, transactionId=1437057740-95-0000000000, timestamp=1437057754877}, bodyLength=612)
15/07/16 16:42:34 WARN sinks.OrionSink:
15/07/16 16:42:34 INFO sinks.OrionSink: Finishing transaction (1437057740-95-0000000000)
[1] https://github.com/telefonicaid/fiware-cygnus/blob/release/0.8.2/doc/quick_start_guide.md
You have commented this part of the configuration file:
# GroupinInterceptor, do not change
#cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
#cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
You must uncomment those parameters, even if you are not going to use the grouping rules feature (simply leave /usr/cygnus/conf/grouping_rules.conf blank); see the snippet below.
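Putting it together, the interceptor part of the source configuration would presumably need to end up like this (note that gi must also be restored to the interceptors list, since uncommenting the two gi lines alone does not re-enable it), followed by an agent restart:

cygnusagent.sources.http-source.interceptors = ts gi
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf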

MySQL import hangs on Vagrant CoreOS box on Mac

I have a local development setup using the following:
Mac Yosemite 10.10.3
Vagrant 1.7.3
CoreOS alpha version 681.0.0
2 Docker containers one for apache PHP and another for mysql both based on Ubuntu 12.10
It's set up to sync the local dev directory ~/Sites to the Vagrant box using NFS, since my working directories as well as the MySQL directories are located there (~/Sites/.coreos-databases/mysql). From what I have read this is not the best type of setup, but it has worked for me for quite some time, as well as for others at work.
Recently I have not been able to import any database dumps into this setup. The import starts and hangs approximately halfway through the process. It happens on the command line as well as with Sequel Pro. It does import some of the tables, but freezes at exactly the same spot every time. It doesn't seem to matter what the size of the dump is; the one I have been attempting is only 104Kb. Someone else is having the same issue with a 100MB+ dump, freezing at the same spot approximately halfway through.
My Vagrantfile:
Vagrant.configure("2") do |config|
  # Define the CoreOS box
  config.vm.box = "coreos-alpha"
  config.vm.box_url = "http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"

  # Define a static IP
  config.vm.network "private_network",
    ip: "33.33.33.77"

  # Share the current folder via NFS
  config.vm.synced_folder ".", "/home/core/sites",
    id: "core",
    :nfs => true,
    :mount_options => ['nolock,vers=3,udp,noatime']

  # Provision docker with shell
  # config.vm.provision
  config.vm.provision "shell",
    path: ".coreos-devenv/scripts/provision-docker.sh"
end
Dockerfile for mysql:
# Start with Ubuntu base
FROM ubuntu:12.10
# Install some basics
RUN apt-get update
# Install mysql
RUN apt-get install -y mysql-server
# Clean up after install
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Add a grants file to set up remote user
# and disable the root user's remote access.
ADD grants.sql /etc/mysql/
# Add a conf file for correcting "listen"
ADD listen.cnf /etc/mysql/conf.d/
# Run mysqld on standard port
EXPOSE 3306
ENTRYPOINT ["/usr/sbin/mysqld"]
CMD ["--init-file=/etc/mysql/grants.sql"]
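For context, this container is typically built and run along these lines (a sketch; the image name is illustrative, and the data directory is the synced path described above as seen from inside the CoreOS VM):

docker build -t dev-mysql .
# inside the CoreOS VM, the synced folder is mounted at /home/core/sites
docker run -d -p 3306:3306 \
    -v /home/core/sites/.coreos-databases/mysql:/var/lib/mysql \
    dev-mysql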
I 'vagrant ssh' in and run dmesg, and this is what it spits out after the freeze:
[ 465.504357] nfs: server 33.33.33.1 not responding, still trying
[ 600.091356] INFO: task mysqld:1501 blocked for more than 120 seconds.
[ 600.092388] Not tainted 4.0.3 #2
[ 600.093277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 600.094442] mysqld D ffff880019dbfbc8 0 1501 939 0x00000000
[ 600.095953] ffff880019dbfbc8 ffffffff81a154c0 ffff88001ec61910 ffff880019dbfba8
[ 600.098871] ffff880019dbffd8 0000000000000000 7fffffffffffffff 0000000000000002
[ 600.101594] ffffffff8150b4e0 ffff880019dbfbe8 ffffffff8150ad57 ffff88001ed5eb18
[ 600.103794] Call Trace:
[ 600.104376] [<ffffffff8150b4e0>] ? bit_wait+0x50/0x50
[ 600.105934] [<ffffffff8150ad57>] schedule+0x37/0x90
[ 600.107505] [<ffffffff8150da7c>] schedule_timeout+0x20c/0x280
[ 600.108369] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.109370] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.110353] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.111327] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.112347] [<ffffffff8150a31c>] io_schedule_timeout+0xac/0x130
[ 600.113368] [<ffffffff810a9ee7>] ? prepare_to_wait+0x57/0x90
[ 600.114358] [<ffffffff8150b516>] bit_wait_io+0x36/0x50
[ 600.115332] [<ffffffff8150b145>] __wait_on_bit+0x65/0x90
[ 600.116343] [<ffffffff81146072>] wait_on_page_bit+0xc2/0xd0
[ 600.117453] [<ffffffff810aa360>] ? autoremove_wake_function+0x40/0x40
[ 600.119304] [<ffffffff81146179>] filemap_fdatawait_range+0xf9/0x190
[ 600.120646] [<ffffffff81152ffe>] ? do_writepages+0x1e/0x40
[ 600.121346] [<ffffffff81147f96>] ? __filemap_fdatawrite_range+0x56/0x70
[ 600.122397] [<ffffffff811480bf>] filemap_write_and_wait_range+0x3f/0x70
[ 600.123460] [<ffffffffa0207b1e>] nfs_file_fsync_commit+0x23e/0x3c0 [nfs]
[ 600.124399] [<ffffffff811e7bf0>] vfs_fsync_range+0x40/0xb0
[ 600.126163] [<ffffffff811e7cbd>] do_fsync+0x3d/0x70
[ 600.127092] [<ffffffff811e7f50>] SyS_fsync+0x10/0x20
[ 600.128086] [<ffffffff8150f089>] system_call_fastpath+0x12/0x17
Any idea what's going on here?
I am also using this same setup. Vagrant defaults to UDP, so removing that from your mount options seems to work. I haven't tested it thoroughly, but I didn't run into the MySQL issues you had.
config.vm.synced_folder ".", "/home/core/sites",
  id: "core",
  nfs_version: "4",
  :nfs => true,
  :mount_options => ['nolock,noatime']
This worked for me. YMMV.
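Note that synced-folder changes only take effect when the share is re-mounted, so after editing the Vagrantfile:

vagrant reload    # restarts the VM and re-mounts the NFS share with the new options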