Error while creating a bucket through the cbq shell in Couchbase

When I execute the following command to create a bucket in Couchbase Server:
cbq.exe> bucket-create -c 127.0.0.1:8091 -u Administrator -p password --bucket=test_bucket --bucket-type=couchbase --bucket-port=11222 --bucket-ramsize=200 --bucket-replica=1 --wait;
Error output:
{
  "requestID": "d88894ad-085d-44c7-b600-3917bc254035",
  "errors": [
    {
      "code": 3000,
      "msg": "syntax error - at bucket"
    }
  ],
  "status": "fatal",
  "metrics": {
    "elapsedTime": "1.0001ms",
    "executionTime": "1.0001ms",
    "resultCount": 0,
    "resultSize": 0,
    "errorCount": 1
  }
}
Could somebody tell me what is wrong with my command?

I see you are using cbq, which is for running N1QL queries.
Perhaps you meant couchbase-cli instead:
$ /opt/couchbase/bin/couchbase-cli bucket-create -c 127.0.0.1:8091 \
  -u Administrator -p password --bucket=test_bucket --bucket-type=couchbase \
  --bucket-port=11222 --bucket-ramsize=100 --bucket-replica=1 --wait
...SUCCESS: bucket-create
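For context, cbq is the interactive shell for N1QL statements rather than cluster administration, which is why bucket-create is a syntax error there. A minimal sketch of the kind of statement cbq does accept (assuming test_bucket already exists and has a primary index):
cbq> SELECT * FROM `test_bucket` LIMIT 1;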

Related

Wazuh remote command from API doesn't work

I'm trying to execute a remote command from the Wazuh manager on the agent using the API. Below is what I'm trying to do:
curl -k -X PUT "https://192.168.1.76:55000/active-response?agents_list=001" -H "Authorization: Bearer $TOKEN" -H "content-type: application/json" -d '{"command": "customA", "custom":true}'
and then the response:
{"data": {"affected_items": ["001"], "total_affected_items": 1, "total_failed_items": 0, "failed_items": []}, "message": "AR command was sent to all agents", "error": 0}
The problem is simply that the command "customA" isn't triggered in the agent.
Here is the body of the "/var/ossec/etc/ossec.conf" file on the MANAGER:
<command>
  <name>customA</name>
  <executable>launcher.cmd</executable>
  <extra_args>custom_remove.py</extra_args>
</command>
<command>
  <name>customB</name>
  <executable>launcher.cmd</executable>
  <extra_args>custom_remove.py</extra_args>
</command>
<command>
  <name>forRemote</name>
  <executable>custom_remove.exe</executable>
</command>
<active-response>
  <disabled>no</disabled>
  <command>customA</command>
  <location>local</location>
  <rules_id>255001</rules_id>
</active-response>
<active-response>
  <disabled>no</disabled>
  <command>customA</command>
  <location>local</location>
  <rules_id>999001</rules_id>
</active-response>
And this is the "local_internal_options.conf" file on the Windows AGENT 001:
windows.debug=2
rootcheck.sleep=0
syscheck.sleep=0
logcollector.remote_commands=1
wazuh_command.remote_commands=1
In any case, I think the command and active response are correctly configured, because they work correctly if I test them by triggering a rule (for example, rule 999001).
Moreover, here is the response of the API call "GET /manager/configuration/analysis/command":
{
  "data": {
    "affected_items": [
      {
        "command": [
          {
            "name": "disable-account",
            "executable": "disable-account",
            "timeout_allowed": 1
          },
          {
            "name": "restart-wazuh",
            "executable": "restart-wazuh",
            "timeout_allowed": 0
          },
          {
            "name": "firewall-drop",
            "executable": "firewall-drop",
            "timeout_allowed": 1
          },
          {
            "name": "host-deny",
            "executable": "host-deny",
            "timeout_allowed": 1
          },
          {
            "name": "route-null",
            "executable": "route-null",
            "timeout_allowed": 1
          },
          {
            "name": "win_route-null",
            "executable": "route-null.exe",
            "timeout_allowed": 1
          },
          {
            "name": "netsh",
            "executable": "netsh.exe",
            "timeout_allowed": 1
          },
          {
            "name": "customA",
            "executable": "launcher.cmd",
            "timeout_allowed": 0
          },
          {
            "name": "customB",
            "executable": "launcher.cmd",
            "timeout_allowed": 0
          },
          {
            "name": "forRemote",
            "executable": "custom_remove.exe",
            "timeout_allowed": 0
          },
          {
            "name": "remove-threat",
            "executable": "remove-threat.exe",
            "timeout_allowed": 0
          }
        ]
      }
    ],
    "total_affected_items": 1,
    "total_failed_items": 0,
    "failed_items": []
  },
  "message": "Active configuration was successfully read",
  "error": 0
}
I hope that someone can help me. Thanks in advance!
Please open the C:\Program Files (x86)\ossec-agent\etc\shared\ar.conf file and verify that it contains:
customA0 - launcher.cmd - 0
If you don't have it, create any file in /var/ossec/etc/shared/default/ on the manager; the manager will then update the agent by sending a new merged.mg, which resets the agent and updates it according to what you configured in ossec.conf on the manager.
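For example, a minimal way to force that refresh (a sketch to run on the manager; the file name is arbitrary):
touch /var/ossec/etc/shared/default/force_refresh.txt
The manager should then regenerate merged.mg for the default agent group and push it to the agents in that group.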
Also, the command in the API request should be customA0 instead of customA.
Example:
curl -k -X PUT "https://192.168.1.72:55000/active-response?agents_list=001" -H "Authorization: Bearer $(curl -u wazuh:wazuh -k -X GET "https://192.168.1.xxx:55000/security/user/authenticate?raw=true")" -H "content-type: application/json" -d '{"command": "customA0", "custom":true}'
I hope this is useful.
Regards
Note: I attached screenshots of a test I did on the manager and on the agent.

kubectl create pod using override returns error: Invalid JSON Patch

I am trying to run my pod using the command below but keep getting an error:
error: Invalid JSON Patch
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides= "$(cat pod.json)"
Here is my pod.json file:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "test",
    "namespace": "my-ns",
    "labels": {
      "app": "test"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "test",
        "image": "myimage",
        "command": [
          "python",
          "/usr/bin/cma/excute.py"
        ]
      }
    ]
  }
}
What am I doing wrong here?
I did a bit of testing, and it seems there is an issue with Cmder not executing $() properly: either it does not work at all, or it treats newlines as Enter and thus executes the command before the entire JSON is passed.
You may want to try running your commands in PowerShell:
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides=$(Get-Content pod.json -Raw)
There is a similar issue on GitHub [Windows] kubectl run not accepting valid JSON as --override on Windows (same JSON works on Mac) #519. Unfortunately, there is no clear solution for this.
Possible solutions are:
Passing JSON as a string directly (see the sketch after this list)
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides='{"apiVersion":"v1","kind":"Pod","metadata":{...}}'
Using ' instead of " around the overrides value.
Using triple quotes (""") instead of double quotes (") in the JSON file.
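For the first option, this is roughly what the inline form can look like (a hedged sketch written for bash; the JSON is a trimmed, hand-written version of the pod.json above, and Windows shells apply different quoting rules, which is exactly what the linked issue is about):
kubectl run -i tmp-pod --rm -n=my-scripts --image=placeholder --restart=Never --overrides='{"apiVersion":"v1","kind":"Pod","metadata":{"name":"test"},"spec":{"containers":[{"name":"test","image":"myimage","command":["python","/usr/bin/cma/excute.py"]}]}}'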

CannotStartContainerError while submitting an AWS Batch Job

In AWS Batch I have a job definition, a job queue, and a compute environment in which to execute my AWS Batch jobs.
After submitting a job, I find it in the list of failed jobs with this error:
Status reason
Essential container in task exited
Container message
CannotStartContainerError: API error (404): oci runtime error: container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file= --key=.
and in the cloudwatch logs I have:
container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file=Toulouse.json --key=out\": stat /var/application/script.sh --file=Toulouse.json --key=out: no such file or directory"
I have specified a correct Docker image that has all the scripts (we already use it elsewhere and it works), and I don't know where the error is coming from.
Any suggestions are very appreciated.
The Dockerfile is something like this:
# Pull base image.
FROM account-id.dkr.ecr.region.amazonaws.com/application-image.base-php7-image:latest
VOLUME /tmp
VOLUME /mount-point
RUN chown -R ubuntu:ubuntu /var/application
# Create the source directories
USER ubuntu
COPY application/ /var/application
# Register aws profile
COPY data/aws /home/ubuntu/.aws
WORKDIR /var/application/
ENV COMPOSER_CACHE_DIR /tmp
RUN composer update -o && \
    rm -Rf /tmp/*
Here is the Job Definition:
{
  "jobDefinitionName": "JobDefinition",
  "jobDefinitionArn": "arn:aws:batch:region:accountid:job-definition/JobDefinition:25",
  "revision": 21,
  "status": "ACTIVE",
  "type": "container",
  "parameters": {},
  "retryStrategy": {
    "attempts": 1
  },
  "containerProperties": {
    "image": "account-id.dkr.ecr.region.amazonaws.com/application-dev:latest",
    "vcpus": 1,
    "memory": 512,
    "command": [
      "/var/application/script.sh",
      "--file=",
      "Ref::file",
      "--key=",
      "Ref::key"
    ],
    "volumes": [
      {
        "host": {
          "sourcePath": "/mount-point"
        },
        "name": "logs"
      },
      {
        "host": {
          "sourcePath": "/var/log/php/errors.log"
        },
        "name": "php-errors-log"
      },
      {
        "host": {
          "sourcePath": "/tmp/"
        },
        "name": "tmp"
      }
    ],
    "environment": [
      {
        "name": "APP_ENV",
        "value": "dev"
      }
    ],
    "mountPoints": [
      {
        "containerPath": "/tmp/",
        "readOnly": false,
        "sourceVolume": "tmp"
      },
      {
        "containerPath": "/var/log/php/errors.log",
        "readOnly": false,
        "sourceVolume": "php-errors-log"
      },
      {
        "containerPath": "/mount-point",
        "readOnly": false,
        "sourceVolume": "logs"
      }
    ],
    "ulimits": []
  }
}
In Cloudwatch log stream /var/log/docker:
time="2017-06-09T12:23:21.014547063Z" level=error msg="Handler for GET /v1.17/containers/4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67/json returned error: No such container: 4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67"
This error occurred because the command was malformed. I was submitting the job from a Lambda function (Python 2.7) using boto3, and the command syntax should be something like this:
'command' : ['sudo','mkdir','directory']
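For comparison, the equivalent one-off submission with the AWS CLI looks roughly like this (a sketch; the job name, queue, and definition names are placeholders):
aws batch submit-job --job-name test-job --job-queue my-queue --job-definition JobDefinition --container-overrides '{"command":["sudo","mkdir","directory"]}'
The point in both cases is that every argument is a separate list element, not one space-separated string.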
Hope it helps somebody.

apt-vim Plugin FileNotFoundError

I'm going through the manual installation of apt-vim, a plugin manager for Vim. When I try the command apt-vim install -y, brew is installed and I get the following error: -e:1: '$(' is not allowed as a global variable name. The installation nevertheless completes, but another error crops up when I try apt-vim init as directed by the installation guide.
Traceback (most recent call last):
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 805, in <module>
    apt_vim.main()
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 785, in main
    self.process_cmd_args()
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 758, in process_cmd_args
    self.MODES[mode]()
  File "/Users/cssummer16/.vimpkg/bin/apt-vim", line 534, in first_run
    shutil.copy('vim_config.json', VIM_CONFIG_PATH)
  File "/Users/cssummer16/anaconda/lib/python3.5/shutil.py", line 235, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/Users/cssummer16/anaconda/lib/python3.5/shutil.py", line 114, in copyfile
    with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'vim_config.json'
In my apt-vim file, the global VIM_CONFIG_PATH is set to os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'vim_config.json')). Here is my vim_config.json file, which should be the same as the one I got from the apt-vim repository.
{
  "global": {
    "depends-on": [
      {
        "name": "vim",
        "recipe": {
          "darwin": [],
          "linux": [
            "sudo apt-get install -y vim"
          ]
        }
      },
      {
        "name": "git",
        "recipe": {
          "darwin": [],
          "linux": [
            "sudo apt-get install -y git"
          ]
        }
      },
      {
        "name": "brew",
        "recipe": {
          "darwin": [
            "ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\""
          ],
          "linux": []
        }
      },
      {
        "name": "python",
        "recipe": {}
      }
    ],
    "install-target": "~/.vimpkg/bundle"
  },
  "packages": [
    {
      "depends-on": [],
      "name": "pathogen",
      "pkg-url": "https://github.com/tpope/vim-pathogen.git",
      "recipe": {
        "all": [
          "mkdir -p ~/.vim/autoload",
          "curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim"
        ]
      }
    },
    {
      "depends-on": [
        {
          "name": "ctags",
          "recipe": {
            "darwin": [
              "curl -LSso ctags-5.8.tar.gz 'http://sourceforge.net/projects/ctags/files/ctags/5.8/ctags-5.8.tar.gz/download?use_mirror=iweb'",
              "tar xzf ctags-5.8.tar.gz",
              "cd ctags-5.8",
              "sudo ./configure",
              "sudo make",
              "sudo make install"
            ],
            "linux": [
              "curl -LSso ctags-5.8.tar.gz 'http://sourceforge.net/projects/ctags/files/ctags/5.8/ctags-5.8.tar.gz/download?use_mirror=iweb'",
              "tar xzf ctags-5.8.tar.gz",
              "cd ctags-5.8",
              "sudo ./configure",
              "sudo make",
              "sudo make install"
            ]
          }
        }
      ],
      "name": "tagbar",
      "pkg-url": "https://github.com/majutsushi/tagbar.git",
      "recipe": {}
    }
  ]
}
Here is the code block in the apt-vim file which assigns all the global path variables.
import json, sys, os, re, shutil, shlex, getopt, platform, stat, ast
from distutils.util import strtobool
from subprocess import call, check_output, CalledProcessError
HOME = os.path.expanduser("~")
SCRIPT_ROOT_DIR = os.path.abspath(os.path.join(HOME, '.vimpkg'))
VIM_ROOT_DIR = os.path.abspath(os.path.join(HOME, '.vim'))
BUNDLE_PATH = os.path.abspath(os.path.join(VIM_ROOT_DIR, 'bundle'))
SRC_DIR = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'src'))
BIN_DIR = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'bin'))
VIM_CONFIG_PATH = os.path.abspath(os.path.join(SCRIPT_ROOT_DIR, 'vim_config.json'))
SCRIPT_EXE_PATH = os.path.abspath(os.path.join(BIN_DIR, 'apt-vim'))
There is just one copy of the vim_config.json file, in ~/.vimpkg. If someone could point me in the right direction for troubleshooting this installation, I would really appreciate it.
Maintainer of apt-vim here.
It's unfortunate you're seeing this bug. I'm not entirely sure why it's happening, but it appears to be an issue with the automated installation of brew (see the brew recipe in vim_config.json).
As a workaround, try installing brew first with the official install command from brew.sh:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then try running apt-vim init once more. If you continue having issues, please open an issue on apt-vim's GitHub page. Thanks!
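If brew is already present and init still fails, note that the traceback shows shutil.copy('vim_config.json', VIM_CONFIG_PATH) resolving its source file relative to the current working directory. Running the init from a directory that actually contains vim_config.json may work around it (a hedged sketch; /path/to/apt-vim stands for wherever the repo was cloned):
cd /path/to/apt-vim   # repo root that contains vim_config.json
apt-vim init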

Logstash Forwarder on AWS Elastic Beanstalk

What is the best way to install logstash-forwarder on an Elastic Beanstalk application (a Rails application) so that it forwards logs to Logstash?
Here is what I did: I created the config file .ebextensions/02-logstash.config:
files:
  "/etc/yum.repos.d/logstash.repo":
    mode: "000755"
    owner: root
    group: root
    content: |
      [logstash-forwarder]
      name=logstash-forwarder repository
      baseurl=http://packages.elasticsearch.org/logstashforwarder/centos
      gpgcheck=1
      gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
      enabled=1
commands:
  "100-rpm-key":
    command: "rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
  "200-install-logstash-forwarder":
    command: "yum -y install logstash-forwarder"
  "300-install-contrib-plugin":
    command: "rm -rf /etc/logstash-forwarder.conf && cp /var/app/current/logstash-forwarder.conf /etc/ "
    test: "[ ! -f /etc/logstash-forwarder.conf ]"
  "400-copy-cert":
    command: "cp /var/app/current/logstash-forwarder.crt /etc/pki/tls/certs/"
  "500-install-logstash":
    command: "service logstash-forwarder restart"
And this is the logstash-forwarder.conf:
{
  "network": {
    "servers": [
      "logstashIP:5000"
    ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure",
        "/var/log/eb-version-deployment.log",
        "/var/app/support/logs/passenger.log",
        "/var/log/eb-activity.log",
        "/var/log/eb-commandprocessor.log"
      ],
      "fields": {
        "type": "syslog"
      }
    }
  ]
}
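When debugging a setup like this, it can help to confirm on the instance that the ebextensions commands actually ran (a hedged check; the service name assumes the yum package installs an init script called logstash-forwarder):
eb ssh
sudo service logstash-forwarder status
sudo cat /etc/logstash-forwarder.conf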