couchbase-cli: command not found - couchbase

I downloaded the deb package from https://www.couchbase.com/downloads and installed it using:
sudo dpkg -i couchbaseXXX.deb
It installed successfully, but when I try to execute:
couchbase-cli bucket-create -c localhost:8091 -u Administrator ****
it returns:
couchbase-cli: command not found
What is the issue, and how do I fix it?

First you have to set up the Couchbase cluster with the cluster-init command before creating the bucket. An example is below; --services can be any combination of data, index, and query.
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 -u Administrator -p Public123 --cluster-username=Administrator --cluster-password=Public123 --cluster-port=8091 --cluster-ramsize=49971 --cluster-index-ramsize=2000 --services=data
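If the cluster has already been initialized, cluster-init will fail; you can check its state first with server-list (a quick sanity check, reusing the credentials from the example above):
/opt/couchbase/bin/couchbase-cli server-list -c 127.0.0.1:8091 -u Administrator -p Public123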

The couchbase-cli binary is installed under /opt/couchbase/bin, which is not on your PATH, so you have to go into that directory and run the command from there.
Below are the steps I took.
cd /opt/couchbase/bin
./couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket test-data --bucket-type couchbase --bucket-ramsize 100
Once I ran the above command, I got a success message and the bucket was created.
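Alternatively, instead of cd-ing into /opt/couchbase/bin every time, you can append that directory to your PATH so couchbase-cli resolves from anywhere; a minimal sketch for a bash login shell:
echo 'export PATH="$PATH:/opt/couchbase/bin"' >> ~/.bashrc
source ~/.bashrc
couchbase-cli bucket-list -c localhost:8091 -u Administrator -p password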

Related

Can't automatically connect to RDS via the script on EC2?

I'm using Terraform to create infrastructure in AWS. I'm using a script in the EC2 user data that connects to RDS, but this script doesn't work.
#! /bin/bash
yum update -y
yum install -y httpd
service httpd start
usermod -a -G apache centos
chown -R centos:apache /var/www
yum install -y mysql php php-mysql
systemctl enable httpd.service
cd /var/www/html/
echo "[mysql]" > ~/.my.cnf
echo "user = myuser" >> ~/.my.cnf
echo "password = passworddata" >> ~/.my.cnf
chmod 600 ~/.my.cnf
cd database/
mysql -h db_server_address < script.sql
systemctl restart httpd.service
The log /var/log/cloud-init-output.log shows:
ERROR 1045 (28000): Access denied for user 'root'@'10.0.1.91' (using password: NO)
The application cannot connect to the database because no schema has been created.
But when I do it manually on the instance, everything works fine. I understand the script is not perfect, but what is the problem? Why doesn't the script take the credentials from ~/.my.cnf?
Probably the environment variables needed to resolve ~ to /root are not loaded; use:
mysql --defaults-file=~/.my.cnf -h db_server_address < script.sql
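If tilde expansion is the problem (cloud-init user data runs as root, and HOME may not be set the way an interactive shell sets it), using an absolute path removes the ambiguity entirely; a sketch assuming the file is written to /root/.my.cnf:
echo "[mysql]" > /root/.my.cnf
echo "user = myuser" >> /root/.my.cnf
echo "password = passworddata" >> /root/.my.cnf
chmod 600 /root/.my.cnf
mysql --defaults-file=/root/.my.cnf -h db_server_address < script.sql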

OpenShift Login failed (401 Unauthorized)

I'm new to OpenShift and K8s. I have installed an OpenShift v3.11.0+bf985b1-463 cluster on my CentOS 7.
Running prerequisites.yml and deploy_cluster.yml completed successfully, and I have updated htpasswd and granted the cluster-admin role to my user.
htpasswd -b ${HTPASSWD_PATH}/htpasswd $OKD_USERNAME ${OKD_PASSWORD}
oc adm policy add-cluster-role-to-user cluster-admin $OKD_USERNAME
and I have also created the user and identity with the commands below.
oc create user bob
oc create identity ldap_provider:bob
oc create useridentitymapping ldap_provider:bob bob
When I try to log in with oc login -u bob -p password it says:
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
But I am able to log in with oc login -u system:admin.
For your information: the OKD deploy_cluster.yml ran successfully, but the below pod is in an error state (cmd: oc get pods). Could that be causing the problem?
Please suggest how I can fix this issue. Thank you.
UPDATE:
I ran deploy_cluster.yml once again and the login issue is solved; I am able to log in. But it now fails with the error below.
This phase can be restarted by running: playbooks/openshift-logging/config.yml
Node logging-es-data-master-ioblern6 in cluster logging-es was unable to rollout. Please see documentation regarding recovering during a rolling cluster restart
In the OpenShift console the logging pod has the below event, but all the servers have enough memory: more than 65% is free.
The Ansible version is 2.6.5.
1 Master node config:
4CPU, 16GB RAM, 50GB HDD
2 Slave and 1 infra node config:
4CPU, 16GB RAM, 20GB HDD
To create a new user, try following these steps:
1 On each master node, create the password entry in the htpasswd file:
$ htpasswd -b </path/to/htpasswd> <user_name> <password>
$ htpasswd -b /etc/origin/master/htpasswd myUser myPassword
2 Restart the master api and master controllers on each master node
$ master-restart controllers && master-restart api
or
$ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
3 Apply needed roles
$ oc adm policy add-cluster-role-to-user cluster-admin myUser
4 Login as myUser
$ oc login -u myUser -p myPassword
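To confirm the login actually took effect, oc whoami should now report the new user:
$ oc whoami
myUser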
Running deploy_cluster.yml again after configuring the htpasswd file forces a restart of the master controllers and API, which is why you were then able to log in as your new user.
About the other problem: the registry-console and logging-es-data-master pods are not running because you cannot run deploy_cluster.yml again when your cluster is already up and running, so you have to uninstall OKD and then run the playbook again. This happens because the SDN is already working and all your nodes already own all the needed certificates.
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
and then again
$ ansible-playbook -i path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
More detailed information is here.
If, after all this, the logging-es-data-master pod still does not run, uninstall the logging component with:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
and then uninstall the whole of OKD and install it again.
If your cluster is already working and you cannot perform the installation again, try uninstalling and reinstalling only the logging component:
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=False -e openshift_logging_purge_logging=true
$ ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml -e openshift_logging_install_logging=True
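After the reinstall you can watch the logging pods until they settle; a quick check, assuming the default openshift-logging namespace that 3.11 deploys into:
$ oc get pods -n openshift-logging -w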
Detailed Red Hat instructions are here.

How to import mysql database (schema.sql) in OpenShift v3 with mysql datastore?

Previously I was using OpenShift v2, where it was quite easy to import a MySQL schema into the app: I would just add a phpMyAdmin cartridge to my OpenShift app and then import my SQL file. But OpenShift v3 doesn't have a phpMyAdmin cartridge.
If I understand correctly, you want to migrate a MySQL database application from OpenShift version 2 (v2) to OpenShift version 3 (v3). If so, here are the steps:
Export all databases to a dump file and copy it to a local machine (into the current directory):
$ rhc ssh <v2_application_name>
$ mysqldump --skip-lock-tables -h $OPENSHIFT_MYSQL_DB_HOST -P ${OPENSHIFT_MYSQL_DB_PORT:-3306} -u ${OPENSHIFT_MYSQL_DB_USERNAME:-'admin'} \
--password="$OPENSHIFT_MYSQL_DB_PASSWORD" --all-databases > ~/app-root/data/all.sql
$ exit
Download the dump to your local machine:
$ mkdir mysqldumpdir
$ rhc scp -a <v2_application_name> download mysqldumpdir app-root/data/all.sql
Create a v3 mysql-persistent pod from template:
$ oc new-app mysql-persistent -p \
MYSQL_USER=<your_V2_mysql_username> -p \
MYSQL_PASSWORD=<your_v2_mysql_password> -p MYSQL_DATABASE=<your_v2_database_name>
Check to see if the pod is ready to use:
$ oc get pods
When the pod is up and running, copy database archive files to your v3 MySQL pod:
$ oc rsync /local/mysqldumpdir <mysql_pod_name>:/var/lib/mysql/data
Restore the database in the v3 running pod:
$ oc rsh <mysql_pod>
$ cd /var/lib/mysql/data/mysqldumpdir
In v3, you need to access MySQL as the root user to restore databases.
In v2, the $OPENSHIFT_MYSQL_DB_USERNAME had full privileges on all databases. In v3, you must grant privileges to $MYSQL_USER for each database.
$ mysql -u root
$ source all.sql
Grant all privileges on <dbname> to <your_v2_username>@localhost, then flush privileges, as shown below.
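For example (substitute your actual database and user names):
mysql> GRANT ALL PRIVILEGES ON <dbname>.* TO '<your_v2_username>'@'localhost';
mysql> FLUSH PRIVILEGES;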
Remove the dump directory from the pod:
$ cd ../; rm -rf /var/lib/mysql/data/mysqldumpdir
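To double-check the restore, you can list the databases as the application user (same placeholder names as above):
$ oc rsh <mysql_pod>
$ mysql -u <your_v2_mysql_username> -p -e "SHOW DATABASES;"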

Can't log into MariaDB/MySQL after install (RedHat) using mysql -u root -p

I have done a binary install of MariaDB; I will provide the commands that I used below.
(Forgive me, I have a fairly basic level of MySQL, so I have annotated the commands with my understanding of what they do.)
Adds a mysql group and a mysql user in that group.
shell> groupadd mysql
shell> useradd -r -g mysql mysql
shell> cd /usr/local
Untars the MariaDB binaries into the directory you choose.
shell> tar zxvf /usr/local/mysql/mysql-VERSION-OS.tar.gz
Creates a symbolic link.
shell> ln -s /usr/local/mysql/mysql-VERSION-OS mysql
shell> cd mysql
Recursively changes ownership to the user/group.
shell> chown -R mysql .
shell> chgrp -R mysql .
Runs mysql_install_db.
shell> scripts/mysql_install_db --user=mysql
shell> chown -R root .
shell> chown -R mysql data
Copies the sample my.cnf file into /etc.
shell> cp support-files/my-medium.cnf /etc/my.cnf
shell> bin/mysqld_safe --user=mysql &
Copies the server script into /etc/init.d, which allows it to start automatically.
shell> cp support-files/mysql.server /etc/init.d/mysql.server
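For the init.d copy to actually start MariaDB at boot on a RedHat-style system, the script still has to be registered with the init system; a minimal sketch, assuming chkconfig is available:
shell> chkconfig --add mysql.server
shell> chkconfig mysql.server on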
Runs the mysql_secure_installation script.
./mysql_secure_installation --basedir=/usr/local/mysql/mariadb-5.5.34-linux-x86_64
I then closed the terminal, reopened it, and ran:
ps -ef | grep mysql
to check that the mysqld server was running (it was).
Having done the above steps, I try to enter:
mysql -u root -p
and I receive the error
bash: mysql: command not found.
Any ideas why I cannot access it? Thanks in advance.
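Given the binary install into /usr/local/mysql above, one likely explanation is that the client binaries simply aren't on your PATH; a quick way to test, using the paths from your own steps:
shell> /usr/local/mysql/bin/mysql -u root -p
If the full path works, appending /usr/local/mysql/bin to PATH (for example in ~/.bash_profile) will make plain mysql resolve:
shell> export PATH=$PATH:/usr/local/mysql/bin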

Can't delete mysql database, table or even alter table inside docker

I want to have a MySQL database with a basic dataset.
I created a MySQL Docker image using the Dockerfile from https://index.docker.io/u/brice/mysql/, but deleted the VOLUME ["/var/lib/mysql", "/var/log/mysql"] line, so the Dockerfile looks like:
FROM ubuntu:12.10
MAINTAINER Brandon Rice <brice84@gmail.com>
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -s /bin/true /sbin/initctl
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install mysql-server
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
RUN /usr/bin/mysqld_safe & \
sleep 10s && \
mysql < create_my_db.sql && \
mysql -e "GRANT ALL ON *.* to 'root'#'%'; FLUSH PRIVILEGES"
EXPOSE 3306
CMD ["mysqld_safe"]
After that I build the image:
docker build -t my_db_mysql .
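A quick way to sanity-check the image before layering more changes on top (the container name here is arbitrary):
docker run -d -p 3306:3306 --name my_db my_db_mysql
docker logs my_db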
Everything is OK while I append data, but when I want to delete the database, for example:
FROM my_db_mysql
RUN /usr/bin/mysqld_safe & \
sleep 10s && \
mysql -e "DROP DATABASE my_db;"
EXPOSE 3306
CMD ["mysqld_safe"]
I get the following error:
ERROR 6 (HY000) at line 1: Error on delete of './my_db//db.opt' (Errcode: 1)
It appears not only when I build the image, but even when I execute: mysql -u user -p -e "DROP DATABASE my_db;"
How can I solve this?
Thanks
Update: I also tried to run Docker with a different storage driver, e.g. -s vfs or -s devicemapper, but nothing changed.
When I build the image with VOLUME ["/var/lib/mysql", "/var/log/mysql"], everything works properly, but I can't commit those changes.
Update: It seems I have resolved this issue. The problem was the host machine running Ubuntu 12.04; the issue disappeared when I updated Ubuntu to 13.10. Thanks a lot!