Reload vagrant after an update - mysql

What is the fastest method to reload Vagrant after changing a provisioning script?
I am currently copying the file "mysql.sql" to the guest machine in my "Vagrantfile":
config.vm.provision "file", source: "mysql.sql", destination: "mysql.sql"
and calling it from "bootstrap.sh":
mysql -h localhost -u root -proot < /home/vagrant/mysql.sql
I used to use:
vagrant destroy
vagrant up
vagrant provision
vagrant ssh
I tried to see if:
vagrant reload
will do the same thing, but I am not sure about this, since my modifications happen in the .sql file and not in the Vagrantfile.

You can run vagrant reload --provision to run your provisioner.
Provision
On the first vagrant up that creates the environment, provisioning is run. If the environment was already created and the up is just resuming a machine or booting it up, provisioners won't run unless the --provision flag is explicitly provided.
When vagrant provision is used on a running environment.
When vagrant reload --provision is called. The --provision flag must be present to force provisioning.
P.S. A bug was fixed in Vagrant 1.7.2: if you get an error during vagrant reload --provision, you can rm .vagrant/machines/default/virtualbox/synced_folders and then run vagrant provision.
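In short, after editing mysql.sql the re-provisioning workflow looks like this (no destroy needed):
# Re-run the provisioners on the existing machine after editing mysql.sql:
vagrant provision            # if the VM is already running
vagrant reload --provision   # if you also want to restart the VM
vagrant ssh                  # then inspect the loaded data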

How to connect docker container with host machine's localhost mysql database?

I have a war file that uses the MySQL database in the backend.
I have deployed my war file in a Docker container and I am able to reach it from my browser.
I want to connect my app with the MySQL database. This database exists on my host machine's localhost:3306
Since I am unable to connect to it via localhost from inside the container, here is what I tried:
I ran the command docker inspect --format '{{ .NetworkSettings.IPAddress }}' 213be777a837
This command gave me the IP address 172.17.0.2. I went to the MySQL server options, put this IP address in the bind field, and restarted the server. After that, I updated my project's database connection string to 172.17.0.2:3306.
But it is not working. Could anyone please tell me what I am missing?
I have also tried adding a new DB user with root@'%' and granting that user all permissions, but nothing worked.
Follow these steps:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 dockernet
docker run -p 8082:8080 --network dockernet -d 6ab907c973d2
In your project, set the connection string: jdbc:mysql://host.docker.internal:3306/....
And then deploy.
tl;dr: Use 172.17.0.1:3306 if you're on Linux.
Longer description:
As I understand it, what you need to do is connect from your Docker container to a host port. But what you have done is try to bind the host process (MySQL) to the container's networking interface. I'm not sure what the implications are of a host process trying to bind into another process's network namespace, but as I understand it, your MySQL process should not be able to bind to that address.
When you start MySQL with default settings that bind it to 0.0.0.0, it is reachable from Docker containers through the Docker virtual bridge. Therefore, what you should do is route requests from the WAR process to the host process through that virtual bridge (assuming bridge is the networking mode you're using; if you have not changed any Docker networking settings, it is). This is done by specifying the bridge gateway address as the MySQL address, along with the port MySQL is started on.
You can get the bridge IP address by checking your network interfaces. When Docker is installed, it configures the virtual bridge by default, and on Linux it shows up as docker0. Its IP address will most probably be 172.17.0.1, so your MySQL address from the container's point of view is jdbc:mysql://172.17.0.1:3306/....
1 - https://docs.docker.com/network/
2 - https://docs.docker.com/network/bridge/
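If you want to confirm the gateway address rather than assume 172.17.0.1, either of these standard commands will show it:
# Read the docker0 address from the host's interfaces:
ip addr show docker0 | grep 'inet '
# Or ask Docker for the default bridge gateway directly:
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'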
From your question, I am assuming both your war file and MySQL are deployed locally and you want to connect them. One way to allow two locally deployed containers to talk to each other is:
Create your own network: docker network create <network-name>
Then when you run your war file and MySQL containers, deploy both of them using the --network flag. E.g.
War File: docker run --name war-file --network <network-name> <war file image>
MySQL: docker run --name mysql --network <network-name> <MySQL image>
After that, you should be able to connect to MySQL as mysql:3306 from inside your war file's Docker container, since both containers are on the same custom network.
If you want to read up more on this, take a look at the Docker documentation on networking (https://docs.docker.com/network/bridge/).
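For illustration, a minimal end-to-end sketch of that approach (the image names and password here are placeholders, not from the question):
# Create a user-defined bridge network and attach both containers to it:
docker network create appnet
docker run -d --name mysql --network appnet -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name war-file --network appnet -p 8082:8080 my-war-image
# Inside the war-file container, the database is now reachable at mysql:3306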
Your setup is fine. You just need to make one change.
While running the application container (the one in which you are deploying your war file), add the following argument to its docker run command:
--net=host
Example:
docker run -itd -p 8082:8080 --net=host --name myapp myimage
With this change, you need not change the connection string either: localhost:3306 will work fine, and you will be able to set up a connection with MySQL.

Laravel Docker Container not connecting to local mysql

I have an issue connecting to MySQL running on the local machine. In my Dockerfile I have:
FROM php:7
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo mbstring pdo_mysql
WORKDIR /home
COPY . /home
RUN composer install --ignore-platform-reqs
CMD php artisan serve --host=0.0.0.0 --port=8081
EXPOSE 8081
and this in my .env configuration
DB_HOST=localhost
DB_DATABASE=databasename
DB_USERNAME=root
DB_PASSWORD=testpassword
I have very little clue about where it is failing. Do I need to install MySQL in the Docker container as well?
A much simpler solution (for Docker for Mac and Docker for Windows) is to replace the host address localhost with host.docker.internal:
DB_HOST=host.docker.internal
DB_DATABASE=databasename
DB_USERNAME=root
DB_PASSWORD=testpassword
Basically, the DNS name host.docker.internal resolves to the internal IP address used by the host.
NB: If you have changed your address to host.docker.internal but still receive a connection refused error, it's most probably because MySQL is configured to listen only on the local network.
To resolve that, update the value of bind_address to 0.0.0.0 in your my.cnf configuration file.
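For reference, the relevant my.cnf fragment would look like this (the file's location varies by platform and install method, e.g. /etc/mysql/my.cnf):
[mysqld]
# Listen on all interfaces so containers can reach the host's MySQL:
bind-address = 0.0.0.0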
You are trying to connect to MySQL on localhost, which is (unsurprisingly) a reference to the local host. Since it is a relative address, inside the container it resolves to the container's own address, and no MySQL is awaiting you there...
So to solve it, just give it your real host IP instead of localhost or 127.0.0.1.
Step 1 - fix the .env file:
DB_HOST=<your_host_ip> #run `ifconfig` and look for your ip on `docker0` network
DB_DATABASE=databasename
DB_USERNAME=laravel_server #not root, since we are going to allow this user remote access.
DB_PASSWORD=testpassword
Step 2 - create a dedicated user:
Open your MySQL shell with mysql -u root -p, give your root password, and run the following:
CREATE USER 'laravel_server'@'%' IDENTIFIED BY 'testpassword';
GRANT ALL PRIVILEGES ON databasename.* TO 'laravel_server'@'%';
FLUSH PRIVILEGES;
We created the user and gave it the permissions.
Step 3 - open MySQL to remote access:
We have to make it listen on all interfaces, not just localhost, so we run:
sudo sed -i 's/.*bind-address.*/bind-address=0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
(You will be prompted for your password. This command just replaces the bind-address line in the MySQL configuration file; note the -i flag, which makes sed edit the file in place.)
Step 4 - update:
In the project directory: php artisan config:cache
service mysql restart
Then docker build and run a new container again. It should work for you.
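As a quick sanity check that step 3 took effect (a sketch; ss is the usual tool on modern Ubuntu), MySQL should now be listening on all interfaces:
# After the restart, expect 0.0.0.0:3306 rather than 127.0.0.1:3306:
sudo ss -ltn | grep 3306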
I see two options:
Use the private IP of your Docker host, i.e. where the MySQL server is running.
Use host network mode while running the container, in case you want to use localhost:
docker run --net=host ...
In my case (Ubuntu 20.04 Desktop), I had MariaDB already running and using port 3306, so when the app inside the Docker container tried to start its own MySQL it failed, because it was trying to listen on an already-used port. I stopped the already-running MariaDB using the command below:
sudo systemctl stop mariadb.service
Then I tried starting the Docker app. It ran successfully, because port 3306 was now free and used by MySQL inside the container. But since I intend to use both of them, a more permanent solution is to configure one of the two database systems, i.e. either the one inside the Docker container or the one outside it, to use a port other than the default 3306.
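For instance (a sketch, not from the original answer; the image tag and password are placeholders), the containerized server can be published on a different host port so both can coexist:
# Keep the host MariaDB on 3306 and expose the container's MySQL on host port 3307:
docker run -d --name mysql-alt -e MYSQL_ROOT_PASSWORD=secret -p 3307:3306 mysql:5.6
# Connect to the containerized server from the host:
mysql -h 127.0.0.1 -P 3307 -u root -psecret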

MySQL server in a Docker container doesn't work after committing the container

I created a docker-compose.yml file, which brings up a MySQL container and sets the password of the MySQL root user to "hello".
# docker-compose.yml
version: '3.1'
services:
  mysql:
    image: mysql:5.6.40
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=hello
      - MYSQL_ALLOW_EMPTY_PASSWORD=hello
      - MYSQL_RANDOM_ROOT_PASSWORD=hello
Then I run:
sudo docker-compose up # (1)
... from the directory with this file.
The first problem is that the newly created container runs in the foreground, not in bash, and I can't put it in the background without exiting it (which I can do with Ctrl+C), nor can I somehow enter bash while getting out of this process.
But when I open a new terminal window and run:
sudo docker exec -it bdebee1b8090 /bin/bash # (2)
..., where bdebee1b8090 is the ID of the running container, I enter bash, where I can enter the MySQL shell as the root user by entering the password "hello":
mysql -u root -p # (3)
Then I exit the MySQL shell and the bash shell in the container without stopping the container.
And then I commit changes to the container:
sudo docker commit bdebee1b8090 hello_mysql # (4)
..., creating an image. And then, when I run the image:
sudo docker run -it --rm hello_mysql /bin/bash # (5)
... and try to start the MySQL shell again as the root user, entering the password "hello", I get an error like:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
And even after I restart the MySQL server:
/etc/init.d/mysql restart # (6)
..., I get the same error.
All of the above commands were run on Ubuntu.
Why is this happening?
Edit:
When I try these steps on macOS High Sierra, I get stuck on step (3), because when I enter the password "hello", it is not accepted. This error is shown on the screen:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
And when I try to restart the MySQL server in the container:
/etc/init.d/mysql restart
..., the container restarts in the background, but when I run it again and try to repeat steps (2) and (3), it gives the same error; and when I restart the MySQL server again, the container restarts in the background again...
Edit2:
After I removed the lines:
- MYSQL_ALLOW_EMPTY_PASSWORD=hello
- MYSQL_RANDOM_ROOT_PASSWORD=hello
... it started to work on the Mac, but after I commit the container and try to enter the MySQL shell, it still gives an error like:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
.... again.
According to the official documentation for the MySQL Docker images (https://hub.docker.com/r/mysql/mysql-server/):
The boolean variables including MYSQL_RANDOM_ROOT_PASSWORD,
MYSQL_ONETIME_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, and
MYSQL_LOG_CONSOLE are made true by setting them with any strings of
non-zero lengths. Therefore, setting them to, for example, “0”,
“false”, or “no” does not make them false, but actually makes them
true. This is a known issue of the MySQL Server containers.
Which means your config:
- MYSQL_ALLOW_EMPTY_PASSWORD=hello
- MYSQL_RANDOM_ROOT_PASSWORD=hello
is equal to:
- MYSQL_ALLOW_EMPTY_PASSWORD = true
- MYSQL_RANDOM_ROOT_PASSWORD = true
Which brings us to the point:
MYSQL_RANDOM_ROOT_PASSWORD: When this variable is true (which is its
default state, unless MYSQL_ROOT_PASSWORD is set or
MYSQL_ALLOW_EMPTY_PASSWORD is set to true), a random password for the
server's root user is generated when the Docker container is started.
The password is printed to stdout of the container and can be found by
looking at the container’s log.
So either check your container's log to find the generated random password,
or just remove the MYSQL_RANDOM_ROOT_PASSWORD parameter from your file.
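For example (a sketch; the exact log text can vary between image versions), the generated password can be pulled out of the container's log:
# The entrypoint prints the generated root password to the container log:
docker logs bdebee1b8090 2>&1 | grep -i 'generated root password'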
Docker images are generally designed to run a single process or server, in the foreground, until they exit. The standard mysql image works this way: if you docker run mysql without -d you will see all of the server logs printed to stdout, and there’s not an immediate option to get a shell. You should think of interactive shells in containers as a debugging convenience, and not the usual way Docker containers run.
When you docker run --rm -it hello_mysql /bin/bash, it runs the interactive shell instead of the database server, and that's why you get the "can't connect to server" error. As a general rule you should assume things like init.d scripts just don't work in Docker; depending on the specific setup they might, but again, they're not the usual way Docker containers run.
You can install a mysql client binary on your host or elsewhere and use that to interact with the server running in the container. You don’t need an interactive shell in a container to do any of what you’re describing here.
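Following that advice, a minimal sketch (image tag from the compose file above; the container name is a placeholder, and no interactive shell is involved):
# Run the server detached with a published port, then talk to it with a host-side client:
docker run -d --name hello-mysql -e MYSQL_ROOT_PASSWORD=hello -p 3306:3306 mysql:5.6.40
mysql -h 127.0.0.1 -P 3306 -u root -phello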
Bit late here:
I also faced this issue.
I took an Ubuntu image and ran apt-get install mysql-server. It worked fine in the current instance, but after committing it to an image it didn't work, and I was unable to bring the service up either.
So on Ubuntu I switched to apt-get install mariadb-server. After committing that to an image, in the new container I ran service mysql start; mysql.
It worked fine!

How to set up and configure mysql-proxy on Ubuntu on Amazon EC2

I am trying to set up mysql-proxy on Ubuntu on Amazon EC2.
I have done the following:
sudo apt-get install mysql-proxy --yes
vi /etc/default/mysql-proxy
I put the following content in "/etc/default/mysql-proxy":
ENABLED="true"
OPTIONS="--proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua
--proxy-address=127.0.0.1:3306
--proxy-backend-addresses=private_ip_of_another_ec2_db_server:3306,private_ip_of_another_ec2_db_server:3306"
I also tried with "--proxy-address=private_ip_or_public_ip_of_proxy-server:3306 or 4040"
and "--proxy-backend-addresses=public_ip_of_another_ec2_db_server:3306,public_ip_of_another_ec2_db_server:3306".
After that I tried to connect to the proxy server from another PC using mysql, like:
mysql -u some_user -pxxxxx -h proxy_server_ip
or
mysql -u some_user -pxxxxx -h proxy_server_ip -P 4040
but it's not working;
it's showing this error:
ERROR 2003 (HY000): Can't connect to MySQL server on 'ip' (10061)
Note that I can connect to the DB server remotely; I have allowed remote connections from any host there.
I also tried /etc/init.d/mysql-proxy start or /etc/init.d/mysql-proxy restart, but no result.
Just to inform you, /etc/init.d/mysql-proxy stop is showing "failed".
Can anyone please help me set up and configure mysql-proxy on Ubuntu?
===
Edit
I found some help in another Stack Overflow question and, following a suggestion in the comments, have done the procedure below. It seems it's working now.
I installed mysql-client and mysql-server locally (on the proxy server).
Then I tried to run mysql-proxy using the following command:
mysql-proxy --proxy-backend-addresses=10.73.151.244:3306 --proxy-backend-addresses=10.73.198.7:3306 --proxy-address=:4040 --admin-username=root --admin-password=root --admin-lua-script=/usr/lib/mysql-proxy/lua/admin.lua
Then I tried to connect remotely to the proxy server, and it's working.
But it seems I need to run this command under screen, because when I close the terminal the proxy stops working.
Can you please tell me whether I need to run this command under screen, or is there another way to keep it alive all the time?
There is no need to install the MySQL client or MySQL server on your mysql-proxy host.
mysql-proxy has "full daemon capabilities" compiled into it.
If you are running Ubuntu Server, you may wish to use an Upstart service script.
This script can be copied into /etc/init/mysql-proxy.conf
# mysql-proxy.conf (Ubuntu 14.04.1) Upstart proxy configuration file for AWS RDS
# mysql-proxy - mysql-proxy job file
description "mysql-proxy upstart script"
author "shadowbq <shadowbq#gmail.com>"
# Stanzas
#
# Stanzas control when and how a process is started and stopped
# See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [016]
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect daemon
# Run before process
pre-start script
[ -d /var/run/mysql-proxy ] || mkdir -p /var/run/mysql-proxy
echo "starting mysql-proxy"
end script
# Start the process
exec /usr/bin/mysql-proxy --plugins=proxy --proxy-lua-script=/usr/share/mysql-proxy/rw-splitting.lua --log-level=debug --proxy-backend-addresses=private_ip_of_another_ec2_db_server:3306,private_ip_of_another_ec2_db_server:3306 --daemon --log-use-syslog --pid-file=/var/run/mysql-proxy/mysql-proxy.pid
In the above example I hard-coded the AWS RDS servers into the script instead of fiddling with the defaults and config file.
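Assuming a stock Upstart setup (Ubuntu 14.04), the job can then be loaded and started like this; the job name mysql-proxy comes from the file name above:
# Tell Upstart to re-read its job files, then start and inspect the job:
sudo initctl reload-configuration
sudo start mysql-proxy
sudo status mysql-proxy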
Install upgraded version 0.8.5
Note:
The apt repo does not have 0.8.5, so we need to download the tarball from the official MySQL site.
Prerequisite:
Create the file /etc/default/mysql-proxy with the following content:
ENABLED="true"
OPTIONS="--defaults-file=/etc/mysql/mysql-proxy.cnf"
Installation procedure:
Download mysql-proxy 0.8.x
Untar in /usr/local
Update the PATH environment variable with /usr/local/mysql-proxy-0.8.5-linux-debian6.0-x86-64bit/bin
vim /etc/environment (to update the environment path)
cd /usr/local/mysql-proxy-0.8.5-linux-debian6.0-x86-64bit/bin
Run the command sudo ./mysql-proxy --defaults-file=/etc/mysql/mysql-proxy.cnf
Sample mysql-proxy.cnf file
[mysql-proxy]
log-level=debug
log-file=/var/log/mysql-proxy.log
pid-file = /var/run/mysql-proxy.pid
daemon = true
no-proxy = false
admin-username=ADMIN
admin-password=ADMIN
proxy-backend-addresses=RDS-ENDPOINT:RDS-PORT
admin-lua-script=/usr/lib/mysql-proxy/lua/admin.lua
proxy-address=0.0.0.0:4040
admin-address=localhost:4041
Change the host IP and port to those of your RDS instance or MySQL server.
Connect to the MySQL server via the proxy with:
mysql -h{proxy-host-ip} -P 4040 -u{mysql_username} -p
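As a quick sanity check (assuming the admin plugin from the sample config above, listening on localhost:4041 with the ADMIN/ADMIN credentials), you can query the proxy's admin interface for backend status:
# Ask the admin interface which backends the proxy knows about and their state:
mysql -h 127.0.0.1 -P 4041 -u ADMIN -pADMIN -e "SELECT * FROM backends;"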

Something goes wrong with SSH while setting up Hadoop

I'm new to Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on a single node. I searched and found lots of tutorials, but I have a problem with SSH. I did what the tutorial said.
I am sure the problem is with SSH. I installed openssh-server and did this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I can successfully ssh to my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed the path to hadoop and then:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So, what's the problem?
You have set up password-less SSH only for your current account. Since you can use ssh localhost without any problem, the next thing you need to do is give execution permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh ---> assigns execution permission to all the scripts
./start-all.sh ----> executes the script
Note: Hadoop can also be run without a password-less SSH setup, using the hadoop-daemon.sh script. The only advantage of password-less SSH is that the start-all.sh script will take the trouble of doing that on each of the nodes on your behalf.
You need to change permissions for your Hadoop folder to be owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo, you're running the scripts as root, which has different environment variables, etc., which is why you see different behavior.
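Putting the two suggestions together (the paths and user name are taken from the question), the fix looks roughly like this:
# Make hadoop00 own the Hadoop install, then start the daemons without sudo,
# so the start scripts SSH as hadoop00@localhost, where key auth is already set up:
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
cd /usr/local/hadoop
chmod +x bin/*.sh
bin/start-all.sh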
Why are you using sudo? This is clearly a permission problem.
Try running this without sudo:
bin/start-all.sh