I'm trying to use pm2 deploy, but I can't find how to integrate the user password in the config file (https://pm2.keymetrics.io/docs/usage/deployment/):
{
  "apps" : [{
    "name" : "HTTP-API",
    "script" : "http.js"
  }],
  "deploy" : {
    // "production" is the environment name
    "production" : {
      "user" : "ubuntu",
      "host" : ["xxxxxxxx"],
      "ref" : "origin/master",
      "repo" : "git@github.com:Username/repository.git",
      "path" : "/var/www/my-repository",
      "post-deploy" : "npm install; grunt dist"
    }
  }
}
I'm not able to run npm install on my server without sudo. Given that, how can I pass the password inside this config?
================= SOLUTIONS ===================
The only solution I found is to pass the password directly on the command line and read it with sudo -S:
production: {
  key: "/home/xxxx/.ssh/xxx.pem",
  user: "xxx",
  host: ["xxxxxxxx"],
  ssh_options: "StrictHostKeyChecking=no",
  ref: "origin/xxxx",
  repo: "xxxxx@bitbucket.org:xxxx/xxxxx.git",
  path: "/home/xxxx/xxxxxx",
  'pre-setup': "echo '## Pre-Setup'; echo 'MYPASS' | sudo -S bash setup.sh;",
  'post-setup': "echo '## Post-Setup'",
  'post-deploy': "echo 'MYPASS' | sudo -S bash start.sh;",
}
As I understand it, there is no option for an SSH password in pm2 deploy, only keys.
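If storing the plaintext password in the config bothers you, a common alternative (assuming you control the server) is a passwordless sudo rule scoped to just the deploy scripts; the user, path, and file names below are hypothetical:
# Hypothetical sudoers rule: let the deploy user run only the setup script without a password
echo 'ubuntu ALL=(ALL) NOPASSWD: /bin/bash /var/www/my-repository/setup.sh' | sudo tee /etc/sudoers.d/pm2-deploy
sudo visudo -cf /etc/sudoers.d/pm2-deploy   # always syntax-check sudoers changes
With a rule like that in place, the pre-setup and post-deploy hooks can call sudo directly, without echoing the password.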
In OpenShift 4.3, I'm trying to set an env key from a parameter value within a template. For example:
"env": [
{
"name: "${FOO}-TEST",
"value": "${BAR}"
},
{
"name: "TEST",
"value": "${BAR}"
}
]
"parameters": [
{
"name": "FOO",
"required": true
},
{
"name": "BAR",
"required": true
}
]
Then I run oc new-app with -p FOO=X -p BAR=Y, and checking the env vars on the pod shows:
TEST=Y
but it does not show:
X-TEST=Y
In a template, can I not use a parameter value as an env key?
I think you can use a parameter value as an env key.
Could you check whether the template works as you expect, as follows?
Export the template as a YAML file first.
$ oc get template <your template name> -o yaml > test-template.yml
Check from the output whether the parameter you specified is substituted as expected.
$ oc process -f test-template.yml -p FOO=X -p BAR=Y
Here is my simple test result:
$ cat test-temp.yml
:
    containers:
    - env:
      - name: "${NAME}-KEY"
        value: ${NAME}
:
$ oc process -f test-temp.yml -p NAME=test
:
    "containers": [
      {
        "env": [
          {
            "name": "test-KEY",
            "value": "test"
          }
        ],
:
I hope it helps you.
Export your variables, then process the template:
oc process FOO=${FOO} BAR=${BAR} -f yamlFile
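If the processed output looks correct, it can be piped straight to the cluster; a minimal sketch using the same file and parameters as above:
oc process -f test-template.yml -p FOO=X -p BAR=Y | oc create -f -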
I have this UserData in the configurationConfig resource in my CloudFormation template:
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xv\n",
"yum -y update\n",
"yum -y install aws-cfn-bootstrap\n",
"yum -y install awslogs jq\n",
"#Install NFS client\n",
"yum -y install nfs-utils\n",
"#Install pip\n",
"yum -y install python27 python27-pip\n",
"#Install awscli\n",
"pip install awscli\n",
"#Upgrade to the latest version of the awscli\n",
"#pip install --upgrade awscli\n",
"#Add support for EFS to the CLI configuration\n",
"aws configure set preview.efs true\n",
"#Get region of EC2 from instance metadata\n",
"EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`\n",
"EC2_REGION=",{ "Ref": "AWS::Region"} ,"\n",
"mkdir /efs-tmp/\n",
"chown -R ec2-user:ec2-user /efs-tmp/\n",
"DIR_SRC=$EC2_AVAIL_ZONE.",{ "Fn::FindInMap" : [ "FileSystemMap", {"Ref" : "EnvParam"}, "FileSystemID"] },".efs.$EC2_REGION.amazonaws.com\n",
"DIR_TGT=/efs-tmp/\n",
"touch /home/ec2-user/echo.res\n",
"echo ",{ "Fn::FindInMap" : [ "FileSystemMap", {"Ref" : "EnvParam"}, "FileSystemID"] }," >> /home/ec2-user/echo.res\n",
"echo $EC2_AVAIL_ZONE >> /home/ec2-user/echo.res\n",
"echo $EC2_REGION >> /home/ec2-user/echo.res\n",
"echo $DIR_SRC >> /home/ec2-user/echo.res\n",
"echo $DIR_TGT >> /home/ec2-user/echo.res\n",
"#Mount EFS file system\n",
"mount -t nfs4 -o vers=4.1 $DIR_SRC:/ $DIR_TGT >> /home/ec2-user/echo.res\n",
"#Backup fstab\n",
"cp -p /etc/fstab /etc/fstab.back-$(date +%F)\n",
"echo -e \"$DIR_SRC:/ $DIR_TGT nfs4 nfsvers=4.1 0 0 | tee -a /etc/fstab\n",
"docker ps\n",
"service docker stop\n",
"service docker start\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource ContainerInstances",
" --region ", { "Ref" : "AWS::Region" },"\n",
"service awslogs start\n",
"chkconfig awslogs on\n"
]]}
Here is the security group of the ECS container:
"EcsSecurityGroup":{
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "ECS SecurityGroup",
"SecurityGroupIngress" : [
{
"IpProtocol" : "tcp",
"FromPort" : "2049",
"ToPort" : "2049",
"CidrIp" : {"Ref" : "CIDRVPC"}
},
{
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "0.0.0.0/0"
}
],
"SecurityGroupEgress" : [
{
"IpProtocol" : "-1",
"FromPort" : "-1",
"ToPort" : "-1",
"CidrIp" : "0.0.0.0/0"
}
],
"VpcId":{ "Ref":"VpcId" }
}
},
After running the template, I SSHed into the instance and waited for the user data to finish executing; then I found this error in /var/log/cloud-init-output.log:
mount.nfs4: Connection timed out
Moreover, the /etc/fstab file does not contain the mount line, and I can't access the file system, because the folder created for EFS is empty.
Where is the issue here?
Ensure you created an EFS security group and allowed your EC2 security group in its ingress rules:
"EfsSecurityGroup": {
"Properties": {
"GroupDescription": "EFS security group",
"SecurityGroupIngress": [
{
"FromPort": 2049,
"IpProtocol": "tcp",
"SourceSecurityGroupId": {
"Ref": "YOUR_EC2_SECURITY_GROUP"
},
"ToPort": 2049
},
],
"Tags": [
{
"Key": "Application",
"Value": {
"Ref": "AWS::StackName"
}
},
{
"Key": "Name",
"Value": "efs-sg"
}
],
"VpcId": {
"Ref": "YOUR_VPC_ID"
}
},
"Type": "AWS::EC2::SecurityGroup"
}
Ensure the EFS mount target exists:
"EFSMountTargetYourAZ": {
"Properties": {
"FileSystemId": "EFS_id",
"SecurityGroups": [
{
"Ref": "EFS_SECURITY_GROUP"
}
],
"SubnetId": {
"Ref": "SUBNET_ID"
}
},
"Type": "AWS::EFS::MountTarget"
},
There's a typo (missing closing \") in this line in your script, which is causing the attempted write to /etc/fstab to fail:
echo -e \"$DIR_SRC:/ $DIR_TGT nfs4 nfsvers=4.1 0 0 | tee -a /etc/fstab\n",
This should read:
echo -e \"$DIR_SRC:/ $DIR_TGT nfs4 nfsvers=4.1 0 0\" | tee -a /etc/fstab\n",
You need to make sure that an AWS::EFS::MountTarget resource exists in the availability zone specified. Otherwise, the attempt to mount the filesystem using the DNS name will fail to resolve correctly. See Mounting File Systems and AWS::EFS::FileSystem for further documentation.
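A quick way to tell from the instance whether the timeout is a security-group problem or a missing mount target, reusing the $DIR_SRC variable from the script above:
nslookup "$DIR_SRC"   # failure to resolve usually means no mount target in this AZ
timeout 5 bash -c "cat < /dev/null > /dev/tcp/$DIR_SRC/2049" && echo "port 2049 reachable"   # a timeout here points at security groups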
I created an AWS CloudFormation template that creates a launch configuration and an auto scaling group. In the UserData of the launch configuration I configured the file system mount and installed the CloudWatch agent:
Code EDITED
"LaunchConfig":{
"Type":"AWS::AutoScaling::LaunchConfiguration",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"files" : {
"/etc/cwlogs.cfg": {
"content": { "Fn::Join" : ["", [
"[general]",
"state_file = /var/awslogs/state/agent-state",
"[/var/log/syslog]",
"file = /tmp/",
"log_group_name = ecs-dataloader",
"log_stream_name = ECS-loader",
"datetime_format = %b %d %H:%M:%S"
]]},
"mode": "000755",
"owner": "root",
"group": "root"
},
"/etc/ecs/ecs.config": {
"content": { "Fn::Join" : ["", [
"ECS_CLUSTER=", { "Ref" : "ClusterName" }
]]},
"mode": "000755",
"owner": "root",
"group": "root"
}
},
"commands": {
"Update": {
"command": "yum -y update"
},
"InstallNfs":{
"command": "yum -y install nfs-utils"
},
"CreatFolder": {
"command": "mkdir -p /efs-mount-point/"
},
"EditPerms": {
"command": "chown ec2-user:ec2-user /efs-mount-point/"
},
"MountPoint": {
"command": { "Fn::Join" : ["", [
"AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)\n",
"echo LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0\n",
"$AZ.",{ "Fn::FindInMap" : [ "FileSystemMap", {"Ref" : "EnvParam"}, "FileSystemID"] },
".efs.",{ "Ref" : "AWS::Region" },".amazonaws.com:/ /efs-script-import-tmp nfs4 nfsvers=4.1 0 0 >> /etc/fstab"
]]}
},
"Mount": {
"command": "mount -a -t nfs4"
},
"CloudWatchAgent": {
"command": { "Fn::Join" : ["", [
"curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\n",
"python ./awslogs-agent-setup.py --region ",{"Ref" : "AWS::Region"},"\n",
"chmod +x ./awslogs-agent-setup.py ./awslogs-agent-setup.py -n -r",
{"Ref" : "AWS::Region"}," -c /etc/cwlogs.cfg"
]]}
}
},
"services" : {
"sysvinit" : {
"awslogs" : { "enabled" : "true", "ensureRunning" : "true" }
}
}
}
}
},
"Properties":{
"ImageId":{ "Fn::FindInMap":[ "AWSRegionToAMI", { "Ref":"AWS::Region" }, "AMIID" ] },
"SecurityGroups":[ { "Ref":"EcsSecurityGroup" } ],
"InstanceType": {"Ref":"InstanceType" },
"IamInstanceProfile":{ "Ref":"EC2InstanceProfile" },
"KeyName":{ "Fn::FindInMap" : [ "KeyPairMapping", {"Ref" : "EnvParam"}, "Key"] },
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"/opt/aws/bin/cfn-init -v ",
" --stack ", { "Ref": "AWS::StackName" },
" --resource LaunchConfig",
" --region ", { "Ref" : "AWS::Region" },"\n"
]]}
}
}
}
The image details: "eu-west-1": { "AMIID": "ami-ba346ec9" }
After running the template, the resources got created successfully, so I connected via SSH to the instance created by the auto scaling group to see whether the user data had run properly.
Unfortunately, after checking, this is what I found in the /etc/fstab file:
$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
$ cat /etc/ecs/ecs.config
cat: /etc/ecs/ecs.config: No such file or directory
The instance is not connected to the file system, and the file I tried to create via CloudFormation::Init, /etc/cwlogs.cfg (the CloudWatch agent config file), does not exist either. Can anyone tell me what is wrong in the user data, such that it didn't get executed?
I tried to check the log files, but:
$ cat /var/log/cfn-init.log
cat: /var/log/cfn-init.log: No such file or directory
What is the problem here ?
EDIT
$ cat /var/log/cloud-init-output.log
...
Cloud-init v. 0.7.6 running 'modules:final' at Fri, 17 Feb 2017 11:43:42 +0000. Up 44.66 seconds.
+ yum install -y aws-cfn-bootstrap/opt/aws/bin/cfn-init -v --stack Mystack --resource LaunchConfig --region eu-west-1
Loading "priorities" plugin
Loading "update-motd" plugin
Config time: 0.009
Command line error: no such option: --stack
Feb 17 11:43:43 cloud-init[2814]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Feb 17 11:43:43 cloud-init[2814]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Feb 17 11:43:43 cloud-init[2814]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 0.7.6 finished at Fri, 17 Feb 2017 11:43:43 +0000. Datasource DataSourceEc2. Up 45.18 seconds
User Data log files are located at:
Linux cloud-init: /var/log/cloud-init.log
Windows EC2Config: C:\cfn\log\cloud-init.log
Check to see whether anything is in the log file. If not, then something's wrong with passing the User Data script from the template. (Why do you have the initial empty quotes in the Join?)
cfn-init is only installed by default on Amazon Linux AMI, so if you're using any other Image ID to launch your EC2 instance you need to ensure that it's installed correctly before invoking it. See my previous answer to the question, "Installing packages using apt-get in CloudFormation file" for more info.
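For a Debian/Ubuntu-style AMI, a hedged sketch of bootstrapping the helper scripts in user data before calling cfn-init (package names vary by release; the tarball URL is the one AWS documents for installs on other distributions; the stack and region values are taken from the question's log):
#!/bin/bash -xe
apt-get update
apt-get -y install python-pip   # Python 2 pip on older Ubuntu releases
pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
cfn-init -v --stack Mystack --resource LaunchConfig --region eu-west-1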
Here is how I resolved the problem: I install the CloudFormation helper scripts (aws-cfn-bootstrap) in the user data before calling cfn-init, and instead of installing the CloudWatch agent in the metadata, I do it in the user data.
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum -y install aws-cfn-bootstrap\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource LaunchConfig",
" --region ", { "Ref" : "AWS::Region" },"\n",
"# Get the CloudWatch Logs agent\n",
"wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py\n",
"# Install the CloudWatch Logs agent\n",
"python ./awslogs-agent-setup.py -n -r ", { "Ref" : "AWS::Region" }, " -c /etc/cwlogs.cfg || error_exit 'Failed to run CloudWatch Logs agent setup'\n",
"service awslogs start"
]]}
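Once aws-cfn-bootstrap is installed, cfn-init writes its own logs, so the earlier "No such file or directory" for /var/log/cfn-init.log should disappear; the standard locations on Amazon Linux are:
sudo tail -n 50 /var/log/cfn-init.log       # cfn-init execution log
sudo tail -n 50 /var/log/cfn-init-cmd.log   # stdout/stderr of the individual config commands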
I'm going through the manual installation for Vim's apt-vim plugin, and when I try the command apt-vim install -y, brew is installed and I get the following error: -e:1: '$(' is not allowed as a global variable name. However, the installation completes, and then another error crops up when I try apt-vim init as directed by the installation guide.
Traceback (most recent call last):
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 805, in <module>
    apt_vim.main()
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 785, in main
    self.process_cmd_args()
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 758, in process_cmd_args
    self.MODES[mode]()
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 534, in first_run
    shutil.copy('vim_config.json', VIM_CONFIG_PATH)
  File "/Users/cssummer16/anaconda/lib/python3.5/shutil.py", line 235, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/Users/cssummer16/anaconda/lib/python3.5/shutil.py", line 114, in copyfile
    with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'vim_config.json'
In my apt-vim file, the global VIM_CONFIG_PATH is set to os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'vim_config.json')). Here is my vim_config.json file, which should be the same as the one I got from the apt-vim repository.
{
  "global": {
    "depends-on": [
      {
        "name": "vim",
        "recipe": {
          "darwin": [],
          "linux": [
            "sudo apt-get install -y vim"
          ]
        }
      },
      {
        "name": "git",
        "recipe": {
          "darwin": [],
          "linux": [
            "sudo apt-get install -y git"
          ]
        }
      },
      {
        "name": "brew",
        "recipe": {
          "darwin": [
            "ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\""
          ],
          "linux": []
        }
      },
      {
        "name": "python",
        "recipe": {}
      }
    ],
    "install-target": "~/.vimpkg/bundle"
  },
  "packages": [
    {
      "depends-on": [],
      "name": "pathogen",
      "pkg-url": "https://github.com/tpope/vim-pathogen.git",
      "recipe": {
        "all": [
          "mkdir -p ~/.vim/autoload",
          "curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim"
        ]
      }
    },
    {
      "depends-on": [
        {
          "name": "ctags",
          "recipe": {
            "darwin": [
              "curl -LSso ctags-5.8.tar.gz 'http://sourceforge.net/projects/ctags/files/ctags/5.8/ctags-5.8.tar.gz/download?use_mirror=iweb'",
              "tar xzf ctags-5.8.tar.gz",
              "cd ctags-5.8",
              "sudo ./configure",
              "sudo make",
              "sudo make install"
            ],
            "linux": [
              "curl -LSso ctags-5.8.tar.gz 'http://sourceforge.net/projects/ctags/files/ctags/5.8/ctags-5.8.tar.gz/download?use_mirror=iweb'",
              "tar xzf ctags-5.8.tar.gz",
              "cd ctags-5.8",
              "sudo ./configure",
              "sudo make",
              "sudo make install"
            ]
          }
        }
      ],
      "name": "tagbar",
      "pkg-url": "https://github.com/majutsushi/tagbar.git",
      "recipe": {}
    }
  ]
}
Here is the code block in the apt-vim file which assigns all the global path variables.
import json, sys, os, re, shutil, shlex, getopt, platform, stat, ast
from distutils.util import strtobool
from subprocess import call, check_output, CalledProcessError
HOME = os.path.expanduser("~")
SCRIPT_ROOT_DIR = os.path.abspath(os.path.join(HOME, '.vimpkg'))
VIM_ROOT_DIR = os.path.abspath(os.path.join(HOME, '.vim'))
BUNDLE_PATH = os.path.abspath(os.path.join(VIM_ROOT_DIR, 'bundle'))
SRC_DIR = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'src'))
BIN_DIR = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'bin'))
VIM_CONFIG_PATH = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'vim_config.json'))
SCRIPT_EXE_PATH = os.path.abspath(os.path.join(BIN_DIR, 'apt-vim'))
There is just one copy of the vim_config.json file, in ~/.vimpkg. If someone could point me in the right direction for troubleshooting this installation, I would really appreciate it.
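Those globals explain the traceback: shutil.copy('vim_config.json', VIM_CONFIG_PATH) at line 534 uses a relative source path, so it only succeeds when the current working directory contains vim_config.json. A quick check of both sides (paths as above):
ls ~/.vimpkg/vim_config.json   # the destination apt-vim expects (exists, per the above)
ls ./vim_config.json           # the source, resolved relative to the current directory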
Maintainer of apt-vim here.
It's unfortunate that you're seeing this bug. I'm not entirely sure why it's happening, but it appears to be an issue with the automated installation of brew (see the brew recipe in vim_config.json).
As a workaround, try installing brew first with the official install command from brew.sh:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then, try starting apt-vim init once more. If you continue having issues, please open an issue on apt-vim's GitHub page. Thanks!
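If init still fails with the same FileNotFoundError, one hedged workaround, given that the copy at line 534 resolves vim_config.json relative to the current directory, is to run init from a directory that contains the file (the clone path below is hypothetical):
cd ~/apt-vim && apt-vim init   # adjust to wherever you cloned the apt-vim repository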