RabbitMQ 3.3.4 shovel configuration is crashing the start process

I'm trying to configure the shovel plugin via the config file (running in docker) but I get this error:
BOOT FAILED
===========
Error description:
{error,{failed_to_cluster_with,[rabbit@dalmacpmfd57],
"Mnesia could not connect to any nodes."}}
The config is set up this way because the destination for the shovel will be created on demand when a dev environment is spun up; the source is a permanent RabbitMQ instance that the new dev environment attaches to.
Here is the config file contents:
[
  {rabbitmq_shovel,
   [{shovels,
     [{indexer_replica_static,
       [{sources,
         [{broker, ["amqp://guest:guest@rabbitmq/newdev"]},
          {declarations,
           [{'queue.declare', [{queue, <<"Indexer_Replica_Static">>}, durable]},
            {'queue.bind', [{exchange, <<"Indexer">>},
                            {queue, <<"Indexer_Replica_Static">>}]}]}]},
        {destinations,
         [{broker, "amqp://"},
          {declarations,
           [{'exchange.declare', [{exchange, <<"Indexer_Replica_Static">>},
                                  {type, <<"fanout">>}, durable]},
            {'queue.declare', [{queue, <<"Indexer_Replica_Static">>}, durable]},
            {'queue.bind', [{exchange, <<"Indexer_Replica_Static">>},
                            {queue, <<"Indexer_Replica_Static">>}]}]}]},
        {queue, <<"Indexer_Replica_Static">>},
        {prefetch_count, 0},
        {ack_mode, on_confirm},
        {publish_properties, [{delivery_mode, 2}]},
        {reconnect_delay, 2.5}]}]},
    {reconnect_delay, 2.5}]}
].
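As a quick sanity check (my suggestion, not part of the original post; the path is a placeholder), the file can be parsed as Erlang terms before handing it to RabbitMQ; file:consult/1 prints {error, ...} with a line number if a bracket or comma is off:
erl -noinput -eval 'io:format("~p~n", [file:consult("/path/to/rabbitmq.config")]), halt().'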
[UPDATE]
This is being run in Docker, but since I couldn't debug the issue there I tried booting RabbitMQ locally with the same config file. I noticed in the logs that the config file variable I set (RABBITMQ_CONFIG_FILE) isn't reflected in the log and the shovel settings haven't been applied (no surprise, huh). I verified the variable with an echo statement and the correct path is displayed: /dev/rabbitmq_server-3.3.4/rabbitmq
=INFO REPORT==== 3-Sep-2014::15:30:37 ===
node           : rabbit@dalmacpmfd57
home dir       : /Users/e002678
config file(s) : (none)
cookie hash    : n6vhh8tY7Z+uR2DV6gcHUg==
log            : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57.log
sasl log       : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57-sasl.log
database dir   : /usr/local/rabbitmq_server-3.3.4/sbin/../var/lib/rabbitmq/mnesia/rabbit@dalmacpmfd57
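Note the config file(s) : (none) line in that banner: the server never picked the file up. A minimal sketch of how the variable is usually wired (the paths below are examples, not taken from the post): for the classic .config format, RABBITMQ_CONFIG_FILE names the file without its .config extension, and it has to be set in the environment of the process that starts the server.
# file on disk: /usr/local/etc/rabbitmq/rabbitmq.config
export RABBITMQ_CONFIG_FILE=/usr/local/etc/rabbitmq/rabbitmq   # note: no .config suffix
rabbitmq-server
# when running in Docker, the variable has to reach the container's environment, e.g.
docker run -e RABBITMQ_CONFIG_FILE=/etc/rabbitmq/rabbitmq <your-rabbitmq-image>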
Thanks!


packer build of a Debian arm64 image fails with "no root file system is defined"

Problem
I need to create a qcow2 image of Debian (arm64) with Packer from the Debian arm64 netinst ISO. The community Packer examples are all amd64. I modified the community amd64 HCL file so that it starts an arm64 QEMU virtual machine, loads the preseed file, and enters the automated installation.
However, the "Partition disks" step fails with "no root file system is defined". I checked the QEMU disk (qcow2): partman-auto doesn't work. Here is the relevant code; please point out the problems and solutions.
Related code
packer .hcl
The .hcl file is also from the x86 example, but after switching to the arm64 QEMU binary and adjusting boot_command and qemuargs, the automated installation can start. The modifications are as follows:
boot_command = [
"<wait10>c<wait5><wait10>",
"linux /install.a64/vmlinuz --quiet",
" auto=true ",
" url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/${var.preseed_file} ",
" hostname=${var.vm_name} ", " domain=${var.domain} ", " interface=auto ",
" ---",
"<enter><wait>",
"initrd /install.a64/initrd.gz",
"<enter><wait>",
"boot<enter><wait>"
]
qemuargs = [
[ "-m", "8192" ],
[ "-cpu", "max" ],
[ "-smp", "4" ],
[ "-M", "virt" ],
# iso
[ "-device", "nec-usb-xhci" ],
[ "-device", "usb-storage,drive=install" ],
[ "-drive", "file=/root/packer-build-master/source/debian/debian-11.2.0-arm64-netinst.iso,if=none,id=install,media=cdrom,readonly=on" ],
# hd
[ "-drive", "if=none,file=build/test/arm64/base-bullseye,id=hd,cache=writeback,discard=ignore,format=qcow2" ],
["-device", "virtio-blk-device,drive=hd"],
# [ "-bios", "edk2-aarch64-code.fd" ],
[ "-bios", "/usr/share/qemu-efi-aarch64/QEMU_EFI.fd" ],
[ "-boot", "strict=off" ],
[ "-monitor", "none" ]
]
The preseed file is taken directly from the x86 example, because the community example has no arm64 one. Another pitfall: the community example uses Packer variables inside the preseed file, but that feature has not been implemented yet, so in my local preseed the variables are replaced with their actual values.
preseed code
Results and error output
Connecting to the QEMU virtual machine over VNC to watch the disk partitioning step, the installer reaches it and then "no root file system is defined" appears.
What I have tried
I looked for a Debian preseed partitioning example for arm64, but could not find one.
Official links:
https://www.debian.org/releases/stable/arm64/apbs04.en.html - preseed documentation on partitioning (partman)
https://salsa.debian.org/installer-team/partman-auto - the repository contains partitioning recipes for various architectures, but none for arm64
What I want to achieve
A working partitioning recipe for the arm64 Debian preseed, so that the automated Packer installation can continue. A sketch of one possible recipe follows.
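For reference, a minimal partman-auto sketch for a UEFI arm64 guest. This is an assumption on my part rather than a recipe taken from the Debian repository: /dev/vda matches the virtio-blk disk in the qemuargs above, and the sizes are placeholders. The point is that the recipe must define an ESP, a root filesystem and (to avoid an extra prompt about missing swap) a swap partition; without a mountpoint{ / } entry the installer reports "no root file system is defined".
d-i partman-auto/method string regular
d-i partman-auto/disk string /dev/vda
d-i partman-auto/expert_recipe string \
      root :: \
              538 538 1075 free \
                      $iflabel{ gpt } $reusemethod{ } \
                      method{ efi } format{ } \
              . \
              1024 2048 200% linux-swap \
                      method{ swap } format{ } \
              . \
              4096 10000 -1 ext4 \
                      method{ format } format{ } \
                      use_filesystem{ } filesystem{ ext4 } \
                      mountpoint{ / } \
              .
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true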

Disabling the Consul HTTP endpoints

We have enabled ACLs and TLS for the Consul cluster in our environment, and we have disabled the UI as well. But the URL http://<consul_agent>:8500/v1/coordinate/datacenters is still reachable. How can we disable URLs like this?
I tried adding the following to consulConfig.json:
"ports":{
"http": -1
}
This did not solve the problem.
Apart from the suggestion to use "http_config": { "block_endpoints": ... }, I am trying to see whether an ACL policy can solve this.
I enabled ACLs first.
I created a policy using the command: consul acl policy create -name "urlblock" -description "Url Block Policy" -rules @service_block.hcl -token <tokenvalue>
Contents of service_block.hcl: service_prefix "/v1/status/leader" { policy = "deny" }
I created an agent token for it using the command: consul acl token create -description "Block Policy Token" -policy-name "urlblock" -token <tokenvalue>
I copied the agent token from the output of the above command and pasted it into the consul_config.json file in the acl -> tokens section as "tokens": { "agent": "<agenttokenvalue>" }
I restarted the Consul agents (and did the same on the Consul client).
I am still able to access the endpoint /v1/status/leader. Any ideas what is wrong with this approach?
That configuration should properly disable the HTTP server. I was able to validate this works using the following config with Consul 1.9.5.
Disabling Consul's HTTP server
Create config.json in the agent's configuration directory which completely disables the HTTP API port.
config.json
{
"ports": {
"http": -1
}
}
Start the Consul agent
$ consul agent -dev -config-file=config.json
==> Starting Consul agent...
Version: '1.9.5'
Node ID: 'ed7f0050-8191-999c-a53f-9ac48fd03f7e'
Node name: 'b1000.local'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: -1, HTTPS: -1, gRPC: 8502, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
...
Note the HTTP port is set to "-1" on the Client Addr line. The port is now inaccessible.
Test connectivity to HTTP API
$ curl localhost:8500
curl: (7) Failed to connect to localhost port 8500: Connection refused
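Since the cluster in the question already has TLS enabled, a variant worth considering (my addition, not part of the original answer) is to close only the plaintext port and keep the API reachable over HTTPS on its conventional port:
{
"ports": {
"http": -1,
"https": 8501
}
}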
Blocking access to specific API endpoints
Alternatively you can block access to specific API endpoints, without completely disabling the HTTP API, by using the http_config.block_endpoints configuration option.
For example:
Create a config named block-endpoints.json
{
"http_config": {
"block_endpoints": [
"/v1/catalog/datacenters",
"/v1/coordinate/datacenters",
"/v1/status/leader",
"/v1/status/peers"
]
}
}
Start Consul with this config
consul agent -dev -config-file=block-endpoints.json
==> Starting Consul agent...
Version: '1.9.5'
Node ID: '8ff15668-8624-47b5-6e83-7a8bfd715a56'
Node name: 'b1000.local'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
...
In this example, the HTTP API is enabled and listening on port 8500.
Test connectivity to HTTP API
If you issue a request to one of the blocked endpoints, the following error will be returned.
$ curl localhost:8500/v1/status/peers
Endpoint is blocked by agent configuration
However, access to other endpoints is still permitted.
$ curl localhost:8500/v1/agent/members
[
{
"Name": "b1000.local",
"Addr": "127.0.0.1",
"Port": 8301,
"Tags": {
"acls": "0",
"build": "1.9.5:3c1c2267",
"dc": "dc1",
"ft_fs": "1",
"ft_si": "1",
"id": "6d157a1b-c893-3903-9037-2e2bd0e6f973",
"port": "8300",
"raft_vsn": "3",
"role": "consul",
"segment": "",
"vsn": "2",
"vsn_max": "3",
"vsn_min": "2",
"wan_join_port": "8302"
},
"Status": 1,
"ProtocolMin": 1,
"ProtocolMax": 5,
"ProtocolCur": 2,
"DelegateMin": 2,
"DelegateMax": 5,
"DelegateCur": 4
}
]
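A side note on the ACL attempt in the question (my addition): Consul ACL rules match resource names such as services or key prefixes, not HTTP URL paths, so service_prefix "/v1/status/leader" will never block an endpoint. A service rule normally looks like the sketch below; blocking individual endpoints is done with http_config.block_endpoints as shown above.
service_prefix "web" {
  policy = "deny"
}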

Logstash: Unable to connect to external Amazon RDS Database

I am relatively new to Logstash and Elasticsearch.
I installed Logstash and Elasticsearch with Homebrew on macOS Mojave (10.14.2):
brew install logstash
brew install elasticsearch
When I check for these versions:
brew list --versions
I receive the following output:
elasticsearch 6.5.4
logstash 6.5.4
When I open up Google Chrome and type this into the URL Address field:
localhost:9200
This is the JSON response that I receive:
{
"name" : "9oJAP16",
"cluster_name" : "elasticsearch_local",
"cluster_uuid" : "PgaDRw8rSJi-NDo80v_6gQ",
"version" : {
"number" : "6.5.4",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "d2ef93d",
"build_date" : "2018-12-17T21:17:40.758843Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
Inside /usr/local/etc/logstash/logstash.yml are the following settings:
path.data: /usr/local/Cellar/logstash/6.5.4/libexec/data
pipeline.workers: 2
path.config: /usr/local/etc/logstash/conf.d
log.level: info
path.logs: /usr/local/var/log
Inside /usr/local/etc/logstash/pipelines.yml are the following settings:
- pipeline.id: main
path.config: "/usr/local/etc/logstash/conf.d/*.conf"
I have set up the following logstash_etl.conf file under /usr/local/etc/logstash/conf.d. Its contents:
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products"
jdbc_user => "products_admin"
jdbc_password => "products123"
jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
jdbc_driver_class => "com.mysql.jdbc.driver"
schedule => "*/5 * * * *"
statement => "select * from products"
use_column_value => false
clean_run => true
}
}
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
output {
if ([purge_task] == "yes") {
exec {
command => "curl -XPOST 'localhost:9200/_all/products/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
{
\"query\": {
\"range\" : {
\"@timestamp\" : {
\"lte\" : \"now-3h\"
}
}
}
}
'"
}
}
else {
stdout { codec => json_lines}
elasticsearch {
"hosts" => "localhost:9200"
"index" => "product_%{product_api_key}"
"document_type" => "%{[@metadata][index_type]}"
"document_id" => "%{[@metadata][index_id]}"
"doc_as_upsert" => true
"action" => "update"
"retry_on_conflict" => 7
}
}
}
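One small thing worth double-checking in the input block above (my observation, not part of the original question): Java class names are case-sensitive, so the driver class is normally spelled with a capital D, for example:
jdbc_driver_class => "com.mysql.jdbc.Driver"   # Connector/J 5.x; use "com.mysql.cj.jdbc.Driver" for 8.x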
When I do this:
brew services start logstash
I receive the following inside my /usr/local/var/log/logstash-plain.log file:
[2019-01-15T14:51:15,319][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x399927c7 run>"}
[2019-01-15T14:51:15,663][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-15T14:51:16,514][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-15T14:57:31,432][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
[2019-01-15T14:57:31,435][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
What am I possibly doing wrong?
Is there a way to obtain a dump (e.g. like mysqldump) from an Elasticsearch server (stage or production) and then reimport it into a local Elasticsearch instance without using Logstash?
This is the same configuration file that works inside an Amazon EC2 production instance, so I don't know why it's not working on my local macOS Mojave machine.
You may be running into the SSL issue with RDS, since:
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your applications to use SSL/TLS when connecting to your databases, these client drivers default to using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and fail to connect if the database server certificate is expired.
as described in the AWS RDS documentation.
To work around this, either set up a trust store for Logstash, which is described in the same link,
or accept the risk and disable SSL in the connection string, like:
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=DISABLED"
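For the trust-store route, a rough sketch (file names, paths and the keystore password are placeholders, and the connection-string property names assume Connector/J 8.x):
# Import the RDS CA bundle into a Java keystore; if the bundle contains
# several certificates, each one may need to be imported under its own alias.
keytool -importcert -alias rds-ca \
  -file /path/to/rds-combined-ca-bundle.pem \
  -keystore /usr/local/etc/logstash/rds-truststore.jks \
  -storepass changeit -noprompt
# Then point the JDBC connection string at the keystore instead of disabling SSL
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=VERIFY_CA&trustCertificateKeyStoreUrl=file:///usr/local/etc/logstash/rds-truststore.jks&trustCertificateKeyStorePassword=changeit"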

"Connection refused" when creating Elastic Beanstalk environment

I'm following this tutorial to set up Laravel on an Elastic Beanstalk environment:
https://deliciousbrains.com/scaling-laravel-using-aws-elastic-beanstalk-part-3-setting-elastic-beanstalk/
I've gone through it twice on a completely fresh install of Laravel just to see if it works, and it worked both times.
Now, I've gone through it again, but this time on my main Laravel project. I've double checked everything in the tutorial, and I'm confident that I didn't miss anything.
However, when I create the environment using this command from the tutorial (with the values filled in of course):
eb create --vpc.id {VPCID} --vpc.elbpublic --vpc.elbsubnets {VPCELBSUBNETS} --vpc.ec2subnets {VPCEC2SUBNETS} --vpc.securitygroups {VPCSG}
I get the following output error:
Printing Status:
INFO: createEnvironment is starting.
INFO: Using elasticbeanstalk-us-east-1-487650495335 as Amazon S3 storage bucket for environment data.
INFO: Created security group named: sg-018fe470
INFO: Created load balancer named: awseb-e-7-AWSEBLoa-1M3V7HA824OQ0
INFO: Created security group named: sg-7489ec08
INFO: Environment health has transitioned to Pending. Initialization in progress (running for 19 seconds). There are no instances.
INFO: Created Auto Scaling launch configuration named: awseb-e-7xdtjzn4bn-stack-AWSEBAutoScalingLaunchConfiguration-RZPSBCGFS6HY
INFO: Added instance [i-09cc6faf451ef3670] to your environment.
INFO: Created Auto Scaling group named: awseb-e-7xdtjzn4bn-stack-AWSEBAutoScalingGroup-BQI6UG2OLL7E
INFO: Waiting for EC2 instances to launch. This may take a few minutes.
INFO: Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:457680865345:scalingPolicy:a3629314-6d24-4871-a0a1-59d74a1087c2:autoScalingGroupName/awseb-e-7xdtjzn4bn-stack-AWSEBAutoScalingGroup-BQI6UG2OLL7E:policyName/awseb-e-7xdtjzn4bn-stack-AWSEBAutoScalingScaleDownPolicy-1SM372VEND7T6
INFO: Created Auto Scaling group policy named: arn:aws:autoscaling:us-east-1:457680865345:scalingPolicy:b03a08fb-e39f-4dc5-8e00-f81f8059fc56:autoScalingGroupName/awseb-e-7xdtjzn4bn-stack-AWSEBAutoScalingGroup-BQI6UG2OLL7E:policyName/awseb-e-7xdtjzn4bn-stack-AWSEBAutoScalingScaleUpPolicy-AS65NNA5M4PP
INFO: Created CloudWatch alarm named: awseb-e-7xdtjzn4bn-stack-AWSEBCloudwatchAlarmLow-1D9SO13U3HBR0
INFO: Created CloudWatch alarm named: awseb-e-7xdtjzn4bn-stack-AWSEBCloudwatchAlarmHigh-1FUCKP1GWED3A
ERROR: [Instance: i-09cc6faf451ef3670] Command failed on instance. Return code: 1 Output: (TRUNCATED)...
[PDOException]
SQLSTATE[HY000] [2002] Connection refused
Script php artisan optimize handling the post-install-cmd event returned with error code 1.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/10_composer_install.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Create environment operation is complete, but with errors. For more information, see troubleshooting documentation.
As you can see, it says SQLSTATE[HY000] [2002] Connection refused when running php artisan optimize in the post-install-cmd.
If I try to connect to my MySQL RDS database on the command line, it connects successfully, so it doesn't look like connecting is the problem.
It took me a while to find what it meant by post-install-cmd, but I found it in my composer.json file:
{
"name": "laravel/laravel",
"description": "The Laravel Framework.",
"keywords": ["framework", "laravel"],
"license": "MIT",
"type": "project",
"require": {
"php": ">=5.6.4",
"laravel/framework": "5.3.*",
"doctrine/dbal": "^2.5",
"embed/embed": "^2.7",
"pda/pheanstalk": "~3.0",
"guzzlehttp/guzzle": "^6.2",
"intervention/image": "^2.3",
"approached/laravel-image-optimizer": "2.3.0",
"php-ffmpeg/php-ffmpeg": "0.9.3",
"laravelcollective/html": "^5.3",
"tymon/jwt-auth": "0.5.*",
"brozot/laravel-fcm": "^1.2",
"league/flysystem": "^1.0",
"cybercog/laravel-ban": "^2.1",
"pragmarx/firewall": "^1.0",
"predis/predis": "^1.1",
"aws/aws-sdk-php": "^3.31",
"dusterio/laravel-aws-worker": "^0.1.9",
"fideloper/proxy": "^3.3",
"aws/aws-sdk-php-laravel": "^3.1",
"league/flysystem-aws-s3-v3": "^1.0"
},
"require-dev": {
"fzaninotto/faker": "~1.4",
"mockery/mockery": "0.9.*",
"phpunit/phpunit": "~5.0",
"symfony/css-selector": "3.1.*",
"symfony/dom-crawler": "3.1.*"
},
"autoload": {
"classmap": [
"database"
],
"psr-4": {
"App\\": "app/"
}
},
"autoload-dev": {
"classmap": [
"tests/TestCase.php"
]
},
"scripts": {
"post-root-package-install": [
"php -r \"file_exists('.env') || copy('.env.example', '.env');\""
],
"post-create-project-cmd": [
"php artisan key:generate"
],
"post-install-cmd": [
"Illuminate\\Foundation\\ComposerScripts::postInstall",
"php artisan optimize"
],
"post-update-cmd": [
"Illuminate\\Foundation\\ComposerScripts::postUpdate",
"php artisan optimize"
]
},
"config": {
"preferred-install": "dist"
}
}
To me, it doesn't look like anything is out of place.
What could the problem be?
Update
I just tried copying and pasting the contents of my project's composer.json to the fresh install of Laravel, and it successfully created an Elastic Beanstalk environment, so it looks like my composer.json isn't the problem.
What could it be?
Update #2
Looking at the eb logs, I discovered this:
[Illuminate\Database\QueryException]
SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `categories` where exists (select * from `topics` where `topics`.`category_id` = `categories`.`id`) order by `name` asc)
[Doctrine\DBAL\Driver\PDOException]
SQLSTATE[HY000] [2002] Connection refused
The only place this query is called is in my AppServiceProvider.php class. So I removed it, along with everything else I had added to the class, but I'm still getting the exact same error above, even though I removed that query call.
Why?
To answer my own question: the code in my AppServiceProvider.php file (within the boot() method) was causing it to fail.
What I did was wrap all the code in the boot() method with this:
public function boot()
{
if (!$this->app->runningInConsole())
{
// Code here
}
}
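The guard works because php artisan optimize runs through the console during composer's post-install-cmd, so runningInConsole() is true on the build instance and the database is never touched at deploy time. If the query's result is still needed on every web request, a deferred variant keeps it out of boot() entirely; this is only a sketch, and the model (\App\Category) and view name ('partials.sidebar') are hypothetical stand-ins, not taken from the original project:
public function boot()
{
    // Defer the query into a view composer so it only runs when the view is
    // actually rendered, never while composer scripts run php artisan optimize.
    view()->composer('partials.sidebar', function ($view) {
        $view->with('categories', \App\Category::has('topics')->orderBy('name')->get());
    });
}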

MySQL import hangs on Vagrant CoreOS box on Mac

I have a local development setup using the following:
Mac Yosemite 10.10.3
Vagrant 1.7.3
CoreOS alpha version 681.0.0
2 Docker containers, one for Apache/PHP and another for MySQL, both based on Ubuntu 12.10
It's set up to sync the local dev directory ~/Sites to the Vagrant box using NFS, since my working directories as well as the MySQL data directories are located there (~/Sites/.coreos-databases/mysql). From what I have read this is not the best type of setup, but it has worked for me for quite some time, as well as for others at work.
Recently I have not been able to import any database dumps into this setup. The import starts and hangs approximately halfway through the process. It happens on the command line as well as with Sequel Pro. It does import some of the tables, but freezes at exactly the same spot every time. It doesn't seem to matter what the size of the dump is; the one I have been attempting is only 104Kb. Someone else is having the same issue with a 100MB+ dump, freezing at the same spot approximately halfway through.
My Vagrantfile:
Vagrant.configure("2") do |config|
# Define the CoreOS box
config.vm.box = "coreos-alpha"
config.vm.box_url = "http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"
# Define a static IP
config.vm.network "private_network",
ip: "33.33.33.77"
# Share the current folder via NFS
config.vm.synced_folder ".", "/home/core/sites",
id: "core",
:nfs => true,
:mount_options => ['nolock,vers=3,udp,noatime']
# Provision docker with shell
# config.vm.provision
config.vm.provision "shell",
path: ".coreos-devenv/scripts/provision-docker.sh"
end
Dockerfile for mysql:
# Start with Ubuntu base
FROM ubuntu:12.10
# Install some basics
RUN apt-get update
# Install mysql
RUN apt-get install -y mysql-server
# Clean up after install
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Add a grants file to set up remote user
# and disable the root user's remote access.
ADD grants.sql /etc/mysql/
# Add a conf file for correcting "listen"
ADD listen.cnf /etc/mysql/conf.d/
# Run mysqld on standard port
EXPOSE 3306
ENTRYPOINT ["/usr/sbin/mysqld"]
CMD ["--init-file=/etc/mysql/grants.sql"]
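For context, this is roughly how I assume the MySQL container is started; the image name and flags below are illustrative, not taken from the question. The point is that /var/lib/mysql ends up on the NFS-synced ~/Sites/.coreos-databases/mysql directory, so every fsync() mysqld issues during the import goes over NFS.
docker run -d --name mysql -p 3306:3306 \
  -v /home/core/sites/.coreos-databases/mysql:/var/lib/mysql \
  <your-mysql-image> --init-file=/etc/mysql/grants.sql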
I 'vagrant ssh' in and run dmesg and this is what it spits out after it freezes:
[ 465.504357] nfs: server 33.33.33.1 not responding, still trying
[ 600.091356] INFO: task mysqld:1501 blocked for more than 120 seconds.
[ 600.092388] Not tainted 4.0.3 #2
[ 600.093277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 600.094442] mysqld D ffff880019dbfbc8 0 1501 939 0x00000000
[ 600.095953] ffff880019dbfbc8 ffffffff81a154c0 ffff88001ec61910 ffff880019dbfba8
[ 600.098871] ffff880019dbffd8 0000000000000000 7fffffffffffffff 0000000000000002
[ 600.101594] ffffffff8150b4e0 ffff880019dbfbe8 ffffffff8150ad57 ffff88001ed5eb18
[ 600.103794] Call Trace:
[ 600.104376] [<ffffffff8150b4e0>] ? bit_wait+0x50/0x50
[ 600.105934] [<ffffffff8150ad57>] schedule+0x37/0x90
[ 600.107505] [<ffffffff8150da7c>] schedule_timeout+0x20c/0x280
[ 600.108369] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.109370] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.110353] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.111327] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.112347] [<ffffffff8150a31c>] io_schedule_timeout+0xac/0x130
[ 600.113368] [<ffffffff810a9ee7>] ? prepare_to_wait+0x57/0x90
[ 600.114358] [<ffffffff8150b516>] bit_wait_io+0x36/0x50
[ 600.115332] [<ffffffff8150b145>] __wait_on_bit+0x65/0x90
[ 600.116343] [<ffffffff81146072>] wait_on_page_bit+0xc2/0xd0
[ 600.117453] [<ffffffff810aa360>] ? autoremove_wake_function+0x40/0x40
[ 600.119304] [<ffffffff81146179>] filemap_fdatawait_range+0xf9/0x190
[ 600.120646] [<ffffffff81152ffe>] ? do_writepages+0x1e/0x40
[ 600.121346] [<ffffffff81147f96>] ? __filemap_fdatawrite_range+0x56/0x70
[ 600.122397] [<ffffffff811480bf>] filemap_write_and_wait_range+0x3f/0x70
[ 600.123460] [<ffffffffa0207b1e>] nfs_file_fsync_commit+0x23e/0x3c0 [nfs]
[ 600.124399] [<ffffffff811e7bf0>] vfs_fsync_range+0x40/0xb0
[ 600.126163] [<ffffffff811e7cbd>] do_fsync+0x3d/0x70
[ 600.127092] [<ffffffff811e7f50>] SyS_fsync+0x10/0x20
[ 600.128086] [<ffffffff8150f089>] system_call_fastpath+0x12/0x17
Any ideas as to what's going on here?
I am also using this same setup. Vagrant defaults to UDP, so removing that from your mount options seems to work. I haven't tested an import myself, but I didn't run into the MySQL issues you had.
config.vm.synced_folder ".", "/home/core/sites",
id: "core",
nfs_version: "4",
:nfs => true,
:mount_options => ['nolock,noatime']
This worked for me. YMMV.
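For completeness (my addition, not part of the original answer): changes to the synced_folder options only take effect after the box is reloaded, e.g.
vagrant reload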