I'm going through the manual installation of Vim's apt-vim plugin. When I run apt-vim install -y, brew gets installed, but I see the following error: -e:1: '$(' is not allowed as a global variable name. The installation still completes, and then another error crops up when I run apt-vim init as directed by the installation guide.
Traceback (most recent call last):
File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 805, in <module>
apt_vim.main()
File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 785, in main
self.process_cmd_args()
File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 758, in process_cmd_args
self.MODES[mode]()
File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 534, in first_run
shutil.copy('vim_config.json', VIM_CONFIG_PATH)
File "/Users/cssummer16/anaconda/lib/python3.5/shutil.py", line 235, in copy
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/Users/cssummer16/anaconda/lib/python3.5/shutil.py", line 114, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'vim_config.json'
In my apt-vim file, the global VIM_CONFIG_PATH is set to os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'vim_config.json')). Here is my vim_config.json file, which should be the same as the one I got from the apt-vim repository.
{
"global": {
"depends-on": [
{
"name": "vim",
"recipe": {
"darwin": [],
"linux": [
"sudo apt-get install -y vim"
]
}
},
{
"name": "git",
"recipe": {
"darwin": [],
"linux": [
"sudo apt-get install -y git"
]
}
},
{
"name": "brew",
"recipe": {
"darwin": [
"ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\""
],
"linux": []
}
},
{
"name": "python",
"recipe": {}
}
],
"install-target": "~/.vimpkg/bundle"
},
"packages": [
{
"depends-on": [],
"name": "pathogen",
"pkg-url": "https://github.com/tpope/vim-pathogen.git",
"recipe": {
"all": [
"mkdir -p ~/.vim/autoload",
"curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim"
]
}
},
{
"depends-on": [
{
"name": "ctags",
"recipe": {
"darwin": [
"curl -LSso ctags-5.8.tar.gz 'http://sourceforge.net/projects/ctags/files/ctags/5.8/ctags-5.8.tar.gz/download?use_mirror=iweb'",
"tar xzf ctags-5.8.tar.gz",
"cd ctags-5.8",
"sudo ./configure",
"sudo make",
"sudo make install"
],
"linux": [
"curl -LSso ctags-5.8.tar.gz 'http://sourceforge.net/projects/ctags/files/ctags/5.8/ctags-5.8.tar.gz/download?use_mirror=iweb'",
"tar xzf ctags-5.8.tar.gz",
"cd ctags-5.8",
"sudo ./configure",
"sudo make",
"sudo make install"
]
}
}
],
"name": "tagbar",
"pkg-url": "https://github.com/majutsushi/tagbar.git",
"recipe": {}
}
]
}
Here is the code block in the apt-vim file which assigns all the global path variables.
import json, sys, os, re, shutil, shlex, getopt, platform, stat, ast
from distutils.util import strtobool
from subprocess import call, check_output, CalledProcessError
HOME = os.path.expanduser("~")
SCRIPT_ROOT_DIR = os.path.abspath(os.path.join(HOME, '.vimpkg'))
VIM_ROOT_DIR = os.path.abspath(os.path.join(HOME, '.vim'))
BUNDLE_PATH = os.path.abspath(os.path.join(VIM_ROOT_DIR, 'bundle'))
SRC_DIR = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'src'))
BIN_DIR = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'bin'))
VIM_CONFIG_PATH = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'vim_config.json'))
SCRIPT_EXE_PATH = os.path.abspath(os.path.join(BIN_DIR, 'apt-vim'))
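For reference, a minimal sketch (not apt-vim's actual code; the paths are illustrative) of why the shutil.copy call in the traceback can raise FileNotFoundError: the relative source path 'vim_config.json' is resolved against the current working directory, not against SCRIPT_ROOT_DIR.
import os
import shutil

# Paths mirror the globals above but are illustrative only.
SCRIPT_ROOT_DIR = os.path.expanduser('~/.vimpkg')
VIM_CONFIG_PATH = os.path.join(SCRIPT_ROOT_DIR, 'vim_config.json')

try:
    # Same shape as the call in the traceback: the relative source
    # 'vim_config.json' is looked up in os.getcwd(), not in SCRIPT_ROOT_DIR.
    shutil.copy('vim_config.json', VIM_CONFIG_PATH)
except FileNotFoundError as err:
    # Raised whenever the current working directory has no vim_config.json.
    print(err)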
There is just one copy of the vim_config.json file in ~/.vimpkg. If someone could point me in the right direction for troubleshooting this installation, I would really appreciate it.
Maintainer of apt-vim here.
It's unfortunate you're seeing this bug. I'm not entirely sure why it's happening, but it appears to be an issue with the automated installation of brew (see vim_config.json --> brew recipe).
As a workaround, try installing brew first with the official install command from brew.sh:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then, try running apt-vim init once more. If you continue having issues, please open an issue on apt-vim's GitHub page. Thanks!
I'm having trouble getting Anaconda Prompt to work with the VSCode Shell Launcher extension.
I'm trying to set up the Shell Launcher extension for VSCode to run the following terminals on Windows 10:
Git Bash,
CMD,
Powershell,
Anaconda Prompt
I have configured my settings.json with the following code:
{
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe",
"shellLauncher.shells.windows": [
{
"shell": "C:\\Program Files\\Git\\bin\\bash.exe",
"args": [],
"label": "bash"
},
{
"shell": "cmd",
"args": [],
"label": "cmd"
},
{
"shell": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
"args": [],
"label": "PowerShell"
},
{
"shell": "cmd",
"args": [
"/K",
"C:\\ProgramData\\Anaconda3\\Scripts\\activate.bat C:\\ProgramData\\Anaconda3"
],
"label": "Conda"
}
]
}
As you can see, Bash is my default terminal that opens with ctrl+`, and Shell Launcher opens with ctrl+shift+t.
Shell Launcher lists all of the entries above, and all terminals launch through it except Anaconda Prompt.
From what I understand, according to the blog post How to Add Anaconda Prompt to VSCode Integrated Terminal, Anaconda Prompt extends the Windows cmd and I just need to pass in the arguments that run the script.
I pulled the args out of the properties of the Anaconda Prompt menu entry, but when I try to launch the Anaconda Prompt I get the following error message:
The terminal process command 'cmd /K 'C:\ProgramData\Anaconda3\Scripts\activate.bat C:\ProgramData\Anaconda3'' failed to launch (exit code: 2)
Here is the path from the properties menu of the Anaconda Prompt desktop icon, which works normally.
%windir%\System32\cmd.exe "/K" C:\ProgramData\Anaconda3\Scripts\activate.bat C:\ProgramData\Anaconda3
I have tried adding the actual path of cmd as:
%windir%\\System32\\cmd.exe
but this just removes the Anaconda Prompt from the Shell Launcher drop-down menu completely.
How can I fix this?
Any help will be appreciated. :)
I fixed it. "cmd.exe" was the path that worked.
For anyone else who wants to set up multiple integrated terminals in VSCode for Windows 10, here are the settings for the Shell Launcher extension that I am using.
This sets my default terminal to Git Bash and allows me to open bash, cmd, Anaconda Prompt, and PowerShell with Shell Launcher.
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe",
"shellLauncher.shells.windows": [
{
"shell": "C:\\Program Files\\Git\\bin\\bash.exe",
"args": [],
"label": "bash"
},
{
"shell": "cmd.exe",
"args": [],
"label": "cmd"
},
{
"shell": "cmd.exe",
"args": [
"/K",
"C:\\ProgramData\\Anaconda3\\Scripts\\activate C:\\ProgramData\\Anaconda3"
],
"label": "Conda"
},
{
"shell": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
"args": [],
"label": "PowerShell"
}
]
Happy Hacking. ;)
In AWS Batch I have a job definition, a job queue, and a compute environment in which to execute my AWS Batch jobs.
After submitting a job, I find it in the list of failed jobs with this error:
Status reason
Essential container in task exited
Container message
CannotStartContainerError: API error (404): oci runtime error: container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file= --key=.
and in the CloudWatch logs I have:
container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file=Toulouse.json --key=out\": stat /var/application/script.sh --file=Toulouse.json --key=out: no such file or directory"
I have specified a correct Docker image that has all the scripts (we already use it and it works), so I don't know where the error is coming from.
Any suggestions are very appreciated.
The Dockerfile is something like this:
# Pull base image.
FROM account-id.dkr.ecr.region.amazonaws.com/application-image.base-php7-image:latest
VOLUME /tmp
VOLUME /mount-point
RUN chown -R ubuntu:ubuntu /var/application
# Create the source directories
USER ubuntu
COPY application/ /var/application
# Register aws profile
COPY data/aws /home/ubuntu/.aws
WORKDIR /var/application/
ENV COMPOSER_CACHE_DIR /tmp
RUN composer update -o && \
rm -Rf /tmp/*
Here is the Job Definition:
{
"jobDefinitionName": "JobDefinition",
"jobDefinitionArn": "arn:aws:batch:region:accountid:job-definition/JobDefinition:25",
"revision": 21,
"status": "ACTIVE",
"type": "container",
"parameters": {},
"retryStrategy": {
"attempts": 1
},
"containerProperties": {
"image": "account-id.dkr.ecr.region.amazonaws.com/application-dev:latest",
"vcpus": 1,
"memory": 512,
"command": [
"/var/application/script.sh",
"--file=",
"Ref::file",
"--key=",
"Ref::key"
],
"volumes": [
{
"host": {
"sourcePath": "/mount-point"
},
"name": "logs"
},
{
"host": {
"sourcePath": "/var/log/php/errors.log"
},
"name": "php-errors-log"
},
{
"host": {
"sourcePath": "/tmp/"
},
"name": "tmp"
}
],
"environment": [
{
"name": "APP_ENV",
"value": "dev"
}
],
"mountPoints": [
{
"containerPath": "/tmp/",
"readOnly": false,
"sourceVolume": "tmp"
},
{
"containerPath": "/var/log/php/errors.log",
"readOnly": false,
"sourceVolume": "php-errors-log"
},
{
"containerPath": "/mount-point",
"readOnly": false,
"sourceVolume": "logs"
}
],
"ulimits": []
}
}
In the CloudWatch log stream /var/log/docker:
time="2017-06-09T12:23:21.014547063Z" level=error msg="Handler for GET /v1.17/containers/4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67/json returned error: No such container: 4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67"
This error was because the command was malformed. I was submitting the job from a Lambda function (Python 2.7) using boto3, and the syntax of the command should be something like this:
'command' : ['sudo','mkdir','directory']
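For context, here is a minimal sketch of that submit_job call from the Lambda (the job name, queue, and definition names below are placeholders, not from this setup):
import boto3

batch = boto3.client('batch')

# Each argument must be its own list element; joining everything into one
# string makes Docker look for an executable with that literal name.
batch.submit_job(
    jobName='script-run',              # placeholder
    jobQueue='my-job-queue',           # placeholder
    jobDefinition='JobDefinition',     # placeholder
    containerOverrides={
        'command': ['/var/application/script.sh',
                    '--file=Toulouse.json',
                    '--key=out']
    }
)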
Hope it helps somebody.
I created an AWS CloudFormation template which creates a launch configuration and an auto scaling group. In the user data of the launch configuration I have configured the file system mount target, and I installed the CloudWatch agent:
Code EDITED
"LaunchConfig":{
"Type":"AWS::AutoScaling::LaunchConfiguration",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"files" : {
"/etc/cwlogs.cfg": {
"content": { "Fn::Join" : ["", [
"[general]",
"state_file = /var/awslogs/state/agent-state",
"[/var/log/syslog]",
"file = /tmp/",
"log_group_name = ecs-dataloader",
"log_stream_name = ECS-loader",
"datetime_format = %b %d %H:%M:%S"
]]},
"mode": "000755",
"owner": "root",
"group": "root"
},
"/etc/ecs/ecs.config": {
"content": { "Fn::Join" : ["", [
"ECS_CLUSTER=", { "Ref" : "ClusterName" }
]]},
"mode": "000755",
"owner": "root",
"group": "root"
}
},
"commands": {
"Update": {
"command": "yum -y update"
},
"InstallNfs":{
"command": "yum -y install nfs-utils"
},
"CreatFolder": {
"command": "mkdir -p /efs-mount-point/"
},
"EditPerms": {
"command": "chown ec2-user:ec2-user /efs-mount-point/"
},
"MountPoint": {
"command": { "Fn::Join" : ["", [
"AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)\n",
"echo LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0\n",
"$AZ.",{ "Fn::FindInMap" : [ "FileSystemMap", {"Ref" : "EnvParam"}, "FileSystemID"] },
".efs.",{ "Ref" : "AWS::Region" },".amazonaws.com:/ /efs-script-import-tmp nfs4 nfsvers=4.1 0 0 >> /etc/fstab"
]]}
},
"Mount": {
"command": "mount -a -t nfs4"
},
"CloudWatchAgent": {
"command": { "Fn::Join" : ["", [
"curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\n",
"python ./awslogs-agent-setup.py --region ",{"Ref" : "AWS::Region"},"\n",
"chmod +x ./awslogs-agent-setup.py ./awslogs-agent-setup.py -n -r",
{"Ref" : "AWS::Region"}," -c /etc/cwlogs.cfg"
]]}
}
},
"services" : {
"sysvinit" : {
"awslogs" : { "enabled" : "true", "ensureRunning" : "true" }
}
}
}
}
},
"Properties":{
"ImageId":{ "Fn::FindInMap":[ "AWSRegionToAMI", { "Ref":"AWS::Region" }, "AMIID" ] },
"SecurityGroups":[ { "Ref":"EcsSecurityGroup" } ],
"InstanceType": {"Ref":"InstanceType" },
"IamInstanceProfile":{ "Ref":"EC2InstanceProfile" },
"KeyName":{ "Fn::FindInMap" : [ "KeyPairMapping", {"Ref" : "EnvParam"}, "Key"] },
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"/opt/aws/bin/cfn-init -v ",
" --stack ", { "Ref": "AWS::StackName" },
" --resource LaunchConfig",
" --region ", { "Ref" : "AWS::Region" },"\n"
]]}
}
}
}
The image details: "eu-west-1": { "AMIID": "ami-ba346ec9" },
After running the template, the resources got created successfully. So I connected via SSH to the instance that was created by the auto scaling group to see if the user data was properly run and set.
Unfortunately, after checking, this is what I found in the /etc/fstab file:
$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
$ cat /etc/ecs/ecs.config
cat: /etc/ecs/ecs.config: No such file or directory
The instance is not connected to the file system, and the file /etc/cwlogs.cfg that I tried to create in AWS::CloudFormation::Init does not exist either (it's the CloudWatch agent config file). Can anyone tell me what is wrong in the user data such that it didn't get executed?
I tried to check the log files, but:
$ cat /var/log/cfn-init.log
cat: /var/log/cfn-init.log: No such file or directory
What is the problem here?
EDIT
$ cat /var/log/cloud-init-output.log
...
Cloud-init v. 0.7.6 running 'modules:final' at Fri, 17 Feb 2017 11:43:42 +0000. Up 44.66 seconds.
+ yum install -y aws-cfn-bootstrap/opt/aws/bin/cfn-init -v --stack Mystack --resource LaunchConfig --region eu-west-1
Loading "priorities" plugin
Loading "update-motd" plugin
Config time: 0.009
Command line error: no such option: --stack
Feb 17 11:43:43 cloud-init[2814]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Feb 17 11:43:43 cloud-init[2814]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Feb 17 11:43:43 cloud-init[2814]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 0.7.6 finished at Fri, 17 Feb 2017 11:43:43 +0000. Datasource DataSourceEc2. Up 45.18 seconds
User Data log files are located at:
Linux cloud-init: /var/log/cloud-init.log
Windows EC2Config: C:\cfn\log\cloud-init.log
Check to see whether anything is in the log file. If not, then something's wrong with passing the User Data script from the template. (Why do you have the initial empty quotes in the Join?)
cfn-init is only installed by default on Amazon Linux AMI, so if you're using any other Image ID to launch your EC2 instance you need to ensure that it's installed correctly before invoking it. See my previous answer to the question, "Installing packages using apt-get in CloudFormation file" for more info.
Here is how I resolved the problem: I install aws-cfn-bootstrap (which provides cfn-init) in the user data before calling the metadata, and instead of installing the CloudWatch agent in the metadata, I do it in the user data.
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum -y install aws-cfn-bootstrap\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource LaunchConfig",
" --region ", { "Ref" : "AWS::Region" },"\n",
"# Get the CloudWatch Logs agent\n",
"wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py\n",
"# Install the CloudWatch Logs agent\n",
"python ./awslogs-agent-setup.py -n -r ", { "Ref" : "AWS::Region" }, " -c /etc/cwlogs.cfg || error_exit 'Failed to run CloudWatch Logs agent setup'\n",
"service awslogs start"
]]}
What is the best possible way to install logstash-forwarder on an Elastic Beanstalk application (a Rails application) to forward logs to Logstash?
Here is what I did: create the config file .ebextensions/02-logstash.config:
files:
"/etc/yum.repos.d/logstash.repo":
mode: "000755"
owner: root
group: root
content: |
[logstash-forwarder]
name=logstash-forwarder repository
baseurl=http://packages.elasticsearch.org/logstashforwarder/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
commands:
"100-rpm-key":
command: "rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
"200-install-logstash-forwarder":
command: "yum -y install logstash-forwarder"
"300-install-contrib-plugin":
command: "rm -rf /etc/logstash-forwarder.conf && cp /var/app/current/logstash-forwarder.conf /etc/ "
test: "[ ! -f /etc/logstash-forwarder.conf ]"
"400-copy-cert":
command: "cp /var/app/current/logstash-forwarder.crt /etc/pki/tls/certs/"
"500-install-logstash":
command: "service logstash-forwarder restart"
And here is logstash-forwarder.conf:
{
"network": {
"servers": [
"logstashIP:5000"
],
"timeout": 15,
"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
},
"files": [
{
"paths": [
"/var/log/messages",
"/var/log/secure",
"/var/log/eb-version-deployment.log",
"/var/app/support/logs/passenger.log",
"/var/log/eb-activity.log",
"/var/log/eb-commandprocessor.log"
],
"fields": {
"type": "syslog"
}
}
]
}
I have a program which uses something called wmake to build its code, and it's very convenient. Suppose I have a folder with a C++ file /path/to/file.C: all I have to do is go to the /path/to folder, type the wmake command, hit return, and all is set.
When I am using Sublime Text, I would like to open this file.C and press ctrl+B to build it, but it doesn't work. Currently I have customized a build system like this:
{
"cmd": "wmake"
}
The error shows as:
[Errno 2] No such file or directory
[cmd: wmake]
[dir: /path/to/file.C]
[path: /home/meee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin]
[Finished]
Does anyone know how to customize a build system in Sublime Text 2? I read the online manual but still have no clue. Thanks.
My aim
All I want to do is get the same effect as typing, in a shell window, a simple
wmake /path/to
Edit-1
I tried this, but it's not working either; I get the same error. I don't understand why "no such file"?
{
"cmd": "wmake",
"selector" : "source.C",
"shell": false,
"working_dir" : "$file_path",
"variants":
[
{
"name": "Run",
"cmd": ["bash", "-c", "wmake '${file_path}'"]
}
]
}
Edit-2
I tried using the full path of wmake, and now the error complains that the environment variable $WM_OPTIONS is not set. In a shell, ~/.bashrc is loaded automatically every time and initializes all the environment variables, but this is not the case in Sublime. What should I do? (See the sketch after the block below.)
{
"cmd": "/fullpath/to/wmake",
"selector" : "source.C",
"shell": false,
"working_dir" : "$file_path",
"variants":
[
{
"name": "Run",
"cmd": ["bash", "-c", "/fullpath/to/wmake '${file_path}'"]
}
]
}
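A minimal sketch of what seems to be going on, using plain Python (the bare wmake call is an assumption about the setup): variables exported from ~/.bashrc only exist in shells that actually source it, and a GUI-launched Sublime never does.
import os
import subprocess

# Inside Sublime's build process this is typically None, because the GUI
# process never sourced ~/.bashrc.
print(os.environ.get("WM_OPTIONS"))

# Running the build through an interactive bash (-i) forces ~/.bashrc to be
# read first, so WM_OPTIONS and friends are defined when wmake starts.
subprocess.call(["bash", "-ic", "wmake"])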
Your build command isn't complete. See my customized C build:
{
"cmd" : ["/usr/local/gfortran/bin/gcc", "$file_name", "-o", "${file_base_name}", "-lgsl","-lgslcblas", "-lm" , "-Wall"],
"selector" : "source.c",
"shell":false,
"working_dir" : "$file_path",
"variants":
[
{
"name": "Run",
"cmd": ["bash", "-c", "/usr/local/gfortran/bin/gcc '${file}' -lgsl -lgslcblas -lm -Wall -o '${file_path}/${file_base_name}' && '${file_path}/${file_base_name}'"]
}
]
}