PM2 doesn't watch file changes on a Vagrant machine - pm2

I set up a server with the following directory structure:
project
|-- bootstrap
|   `-- process.json
`-- server
    |-- server.js
    `-- other_folder
The project/bootstrap/process.json file is the PM2 app config and project/server/server.js is the server entry point. I define process.json as:
{
  "apps": [
    {
      "name": "odd.server",
      "script": "../server/server.js",
      "watch": "../server"
    }
  ]
}
I start the server and watch for changes with the following command:
pm2 start process.json --only odd.server --env production
The server comes up, but file watching does not work: changes made to server.js do not trigger a restart of the server.
The path is correct and I have no idea why it doesn't work. I'd be grateful for any hints.
UPDATE:
PM2 is running in a Vagrant machine and the project folder is a folder of my host machine that is exposed to Vagrant.

You need to set watch_options to use polling, because file-change (inotify) events from the host do not propagate into Vagrant shared folders:
"watch_options": {
  "usePolling": true
}
source: https://github.com/Unitech/pm2/issues/931
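Putting it together, the full process.json with polling enabled would look like this (same fields as the config above, with watch_options added):

```json
{
  "apps": [
    {
      "name": "odd.server",
      "script": "../server/server.js",
      "watch": "../server",
      "watch_options": {
        "usePolling": true
      }
    }
  ]
}
```

Polling is slower than native file-system events, but it is the only mechanism that reliably detects changes made on the host through a shared folder.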

Related

pm2-logrotate install on offline linux machine

I want to install pm2-logrotate on a Linux machine that has no internet connectivity.
https://www.npmjs.com/package/pm2-logrotate
To make an offline installer, you need to make your own tar.gz file.
First, download the Source code (tar.gz) from the pm2-logrotate GitHub releases.
Extract the tar.gz file; this will create a folder with the source inside.
Rename the extracted folder (normally named pm2-logrotate-2.7.0) to module, then re-create the pm2-logrotate-2.7.0.tar.gz file from that module folder.
The structure of the files inside the tar.gz will be something like:
module
|- .gitignore
|- CHANGELOG.md
|- README.md
|- app.js
|- node_modules
|- package-lock.json
|- package.json
|- pres
|- test
Then you can use the command pm2 install pm2-logrotate-2.7.0.tar.gz without getting errors
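The rename-and-repack step above can be sketched as follows. The source folder is mocked here so the commands run standalone; with the real GitHub download you would skip the mkdir/echo lines:

```shell
# Mock the extracted source folder (in practice: tar -xzf the GitHub tarball)
mkdir -p pm2-logrotate-2.7.0
echo '{"name": "pm2-logrotate"}' > pm2-logrotate-2.7.0/package.json

# pm2 expects the archive's top-level folder to be named "module"
mv pm2-logrotate-2.7.0 module
tar -czf pm2-logrotate-2.7.0.tar.gz module

# Verify the archive's top-level entry is "module"
tar -tzf pm2-logrotate-2.7.0.tar.gz
```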
1. Download the source code tar.gz of pm2-logrotate.
2. On a machine with internet access and the same environment as the target server, install the module's dependencies and re-pack the code:
tar -xzvf pm2-logrotate-2.7.0.tar.gz
cd pm2-logrotate-2.7.0/
npm install
tar -czvf pm2-logrotate.tar.gz pm2-logrotate-2.7.0
3. Copy pm2-logrotate.tar.gz to the offline server's ~/.pm2/modules directory:
cd ~/.pm2/modules/
tar -xzvf pm2-logrotate.tar.gz
pm2 module:generate pm2-logrotate
cd pm2-logrotate
pm2 install .
4. Add the configuration to ~/.pm2/module_conf.json:
{
  "pm2-logrotate": {
    "max_size": "10M",
    "retain": "30",
    "compress": false,
    "dateFormat": "YYYY-MM-DD_HH-mm-ss",
    "workerInterval": "30",
    "rotateInterval": "0 0 * * *",
    "rotateModule": true
  },
  "module-db-v2": {
    "pm2-logrotate": {}
  }
}
Running pm2 conf pm2-logrotate shows the pm2-logrotate config. To change a setting:
pm2 set pm2-logrotate:max_size 100M
pm2 install pm2-logrotate-2.7.0.tgz

"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges" (directory to this file does not *appear* to exist)

I am working on a server running Ubuntu 18.04. This DigitalOcean tutorial on Django deployment (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) is telling me to do the following:
"We're now finished configuring our Django application. We can back out of our virtual environment by typing:
(env): deactivate"
I am familiar with virtual environments, so I did this. Now for the part I am not at all familiar with:
"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:
sudo nano /etc/systemd/system/gunicorn.socket
"
First, since I just deactivated my env, I am now at justin@ubuntu-s-1vcpu-1gb-nyc3-01:~$. If I run ls I only see the project folder I created, which holds the virtualenv, the Python project, manage.py and the static directory. Nowhere can I find this
/etc/systemd/system/
directory, and the command they are telling me to use can only create files, not directories. So I am very confused; any help would be greatly appreciated.
/etc doesn't live inside ~; it sits at the root of the filesystem. Try ls /etc/systemd/system to see what's already in that directory. In the unlikely case it doesn't exist, you can create it with sudo mkdir -p /etc/systemd/system/ (the -p flag also creates any missing parent directories, so /etc/systemd would get created too if needed).
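For reference, the unit file the tutorial has you paste into /etc/systemd/system/gunicorn.socket is a small ini-style file roughly like this (socket path follows the tutorial's conventions):

```ini
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target
```

The nano command itself creates the file inside the already-existing /etc/systemd/system/ directory; no directory creation is needed.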

Docker with Angular 2: Cannot GET /

We're trying to set up an Angular 2 application with Docker following this tutorial: https://scotch.io/tutorials/create-a-mean-app-with-angular-2-and-docker-compose
The application deploys but we get 'Cannot GET /'
This is how we build our app:
sudo docker build -t frontend:dev .
and this is how we run our app
sudo docker run -d --name frontend -p 1014:4200 frontend:dev
Our dockerfile is exactly the same as the tutorial:
# Create image based on the official Node 6 image from dockerhub
FROM node:6
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package.json /usr/src/app
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app
# Expose the port the app runs in
EXPOSE 4200
# Serve the app
CMD ["npm", "start"]
And this is an excerpt from package.json
{
  "name": "courseka",
  "version": "0.0.0",
  "scripts": {
    "start": "ng serve -H 0.0.0.0",
    "build": "ng build"
  }
}
And as last, something from our index.html file
<html>
<head>
<base href="/">
</head>
</html>
It turned out the problem was a Windows/Linux issue. The app was developed on Windows, where import paths for folders and classes are case-insensitive, while on Linux they are case-sensitive, and two imports were wrongly capitalized.
The problem was detected by running the container with the
-it
flags, which attach a terminal so all the output is visible.
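The underlying pitfall can be reproduced outside Docker. A hypothetical file name, assuming a case-sensitive Linux filesystem: an import path that worked on Windows fails on Linux when the capitalization doesn't match the file on disk.

```shell
# Create a file with the "real" capitalization
touch AppComponent.ts

# The exact name resolves...
[ -f AppComponent.ts ] && echo "exact case: found"

# ...but a differently-capitalized lookup does not (on Linux)
[ -f appcomponent.ts ] || echo "wrong case: not found"
```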

Redis file deleted under the ./etc folder path

I accidentally deleted the Redis file under ./etc and now I cannot seem to connect to the Redis server. Is there any way I could reinstall Redis? If so, how? Please help me. Thank you in advance.
Hm... it seems you mean '/etc'.
You can manually build and install. It is described in https://redis.io/download#installation.
Or reinstall the package via a package manager like yum, if Redis was installed that way:
$ yum --enablerepo=epel,remi reinstall redis
But in your case I think it's enough to recreate the /etc/redis/* files such as redis.conf. Here is an example run of the installer script for Redis 3.2.8 with default settings:
$ wget http://download.redis.io/releases/redis-3.2.8.tar.gz
$ tar xzf redis-3.2.8.tar.gz
$ cd redis-3.2.8/utils
$ sudo ./install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [] /usr/local/bin/redis-server
Selected config:
Port : 6379
Config file : /etc/redis/6379.conf
Log file : /var/log/redis_6379.log
Data dir : /var/lib/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful!
$
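The installer writes /etc/redis/6379.conf for you. If you ever need to recreate such a file by hand, a minimal sketch matching the defaults above (all directive names are standard redis.conf options) would be:

```conf
port 6379
bind 127.0.0.1
daemonize yes
pidfile /var/run/redis_6379.pid
logfile /var/log/redis_6379.log
dir /var/lib/redis/6379
```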
After that, you should configure other parameters such as the password, bind address, and so on.
I hope this is helpful.

Install an sql dump file to a docker container with mariaDB

I am just learning the basics of Docker but have gotten stuck importing an SQL file from the local system. I am on Windows 10 and have allowed my Docker containers to access my shared drives. I have an SQL file located on D: that I would like to import into the base MariaDB image I got from Docker Hub.
I found a command to install that SQL file on my image and tried to import it directly from the SQL command prompt inside the container, but I get a "failed to open file" error.
Below are the two methods I have tried, but where do I store my SQL dump and how do I import it?
Method 1 tried via mysql command line
winpty docker run -it --link *some-mariadb*:mysql --rm mariadb sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
Then
use *database-created* // previously created
Then
source d:/sql_dump_file.sql
Method 2
docker exec -i my-new-database mysql -uroot -pnew-db-password --force < d:/sql_dump_file.sql
Update 5/12/2016
So after a disappointing weekend of playing around, I changed the drive to C:, as there seemed to be some issue I couldn't debug that kept the D: drive from working.
Then I found the inspect command to see what volumes are mounted to a container. I have the following, but I can't import the SQL file: it says the file does not exist, even though docker inspect clearly shows it is mounted. The SQL file is inside the map_me folder, which I created in the root of C:
docker inspect dbcontainer
"Mounts":
[
{
"Source": "/map_me",
"Destination": "/docker-entrypoint-initdb.d/sl_dump_fil‌​e.sql",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
I recommend the following:
Mount your sql_dump_file.sql at /docker-entrypoint-initdb.d/ when creating the container. The official MariaDB image will restore it at startup.
docker run -d --name <containername> -v d:/sql_dump_file.sql:/docker-entrypoint-initdb.d/sl_dump_file.sql <environment_variables> <imagename> <startup commands>
So after some tweaking and a better understanding, I came to the conclusion after testing that docker-compose is the way to go. The first folder contains a my.cnf file that does the configuration; the other folder, which @Farhad identified, is used to initialize the .sql file.
version: "2"
services:
mariadb_a:
image: mariadb:latest
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=111111
volumes:
- c:/some_folder:/etc/mysql/conf.d
- c:/some_other_folder:/docker-entrypoint-initdb.d