I need to create a PostgreSQL application in OpenShift 4 using the 'oc' command line.
When I add the app from the Catalog, the application is created correctly.
On the other hand, when I use the following shell command, the Pod goes into CrashLoopBackOff:
oc new-app -e POSTGRESQL_USER=postgres -e POSTGRESQL_PASSWORD=postgres -e POSTGRESQL_DATABASE=demodb postgresql
The log file contains the following error:
fixing permissions on existing directory /var/lib/postgresql/data ... initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
What is the correct shell command to start up PostgreSQL?
Thanks
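A likely cause, judging from the data directory in the log: new-app resolved the Docker Hub postgres image, which tries to change permissions on /var/lib/postgresql/data as root and fails under OpenShift's restricted security context. Explicitly selecting the OpenShift postgresql imagestream avoids this. A minimal sketch, assuming the default openshift/postgresql imagestream is available in the cluster (note that the Red Hat image rejects POSTGRESQL_USER=postgres, so a different user name is used here):
oc new-app --image-stream=openshift/postgresql:latest -e POSTGRESQL_USER=demo -e POSTGRESQL_PASSWORD=demo -e POSTGRESQL_DATABASE=demodb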
I encountered the following issue and spent around an hour on it. I was following the documentation at https://docs.bitnami.com/aws/apps/wordpress/get-started/connect-mysql/ to change the admin user email address, in the browser-based SSH client of the Amazon AWS Lightsail terminal, when executing
mysql -u root -p
it returns -bash: mysql: command not found.
One way to fix this is to create an alias pointing at the working mysql path. In my case, the mysql path is /opt/bitnami/mysql/bin/mysql, so I created an alias to that path:
first, run sudo nano /etc/bash.bashrc,
second, add this line: alias mysql='sudo /opt/bitnami/mysql/bin/mysql', and save the file,
third, exit the terminal and start it again.
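Equivalently, the alias can be added non-interactively; a small sketch assuming the same Bitnami path as above:
# append the alias to the system-wide bashrc and reload it in the current shell
echo "alias mysql='sudo /opt/bitnami/mysql/bin/mysql'" | sudo tee -a /etc/bash.bashrc
source /etc/bash.bashrc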
From then on, I can execute the mysql command as usual and continue following the documentation.
Greetings and thanks in advance. I'm new to docker and docker-compose; I've been watching a lot of videos and reading a lot of articles so far, along with trying things myself.
I've got a front end container and a back end container that each build and run on their own, both from a Dockerfile and in a docker-compose setup.
(I've been building with a Dockerfile first and then integrating the containers into docker-compose to make sure I understand things correctly.)
I'm at the point where I need my database set up. Since I'll use docker-compose, as I understand it, everything should build under the same network, with a React front end and a Django back end.
I have a backup MySQL dump file that I'm working with. What I think I need to do is have a container running the MySQL server and serving out my tables (like I have working locally). I haven't been able to figure out how to import the backup into my docker MySQL container.
Any help is appreciated.
What I've tried so far is using docker on the command line to outline the pieces I'll need in the Dockerfile, and then deciding what to move into docker-compose as mentioned above:
docker run -d --name root -e MYSQL_ROOT_PASSWORD=root mysql # to create my db container
Then I've tried a bunch of commands and permutations of commands in the CLI. Here are some of my most recent trials and errors:
docker exec -i root mysql -uroot -proot --force < /Users/homeImac/Downloads/dump-dev-2020-11-10-22-43-06.dmp
ERROR 1046 (3D000) at line 22: No database selected
docker exec -i f803170ce38b sh -c 'exec mysql -uroot -p"root"' < /Users/homeImac/Downloads/dump-dev-2020-11-10-22-43-06.dmp
ERROR 1046 (3D000) at line 22: No database selected
docker exec -i f803170ce38b sh -c 'exec mysql -uroot -h 192.168.1.51 -p"root"' < /Users/homeImac/Downloads/dump-dev-2020-11-10-22-43-06.dmp
ERROR 1045 (28000): Access denied for user 'root'@'homeimac' (using password: YES)
I've scoured the web so far and I'm not sure where to go next. Have I got the right idea? If anyone has an example of how to import a database dump (in dmp or dmp.gz), once I get that working, I'll actually do that in the docker-compose file.
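From what I can tell, ERROR 1046 means the dump itself never issues a USE statement, so presumably the target database has to be named on the command line, something like this (assuming a database called app was already created inside the container):
docker exec -i root mysql -uroot -proot app < /Users/homeImac/Downloads/dump-dev-2020-11-10-22-43-06.dmp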
Thinking about it, I just have to create the container and import, so I might not even need a Dockerfile.
I'll cross that bridge when I get there. This is what I'm thinking, though:
db:
  image: mysql:5.7
  restart: always
  environment:
    MYSQL_DATABASE: 'app'
    # etc etc
I've learned a lot super fast, maybe too fast. Thanks for any tips!
The answer to your question is given on the Docker Hub page for MySQL.
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
In your docker-compose.yml use:
volumes:
  - ${PWD}/config/start.sql:/docker-entrypoint-initdb.d/start.sql
and that's it.
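For completeness, a sketch of a full service definition built on that mechanism (the credentials and the dump file name are assumptions, not taken from the question):
db:
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: app
  volumes:
    - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql
Note that the entrypoint only runs the init scripts when the data directory is empty, so an existing database volume has to be removed before the import will trigger again.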
Here's the answer that worked for me, after working with two colleagues at my job who know the backend better than I do.
It was pretty simple actually. I created a directory in my repo that would be empty.
I added *.sql and *.dmp to my .gitignore so the dump files would not increase the size of my image.
Using docker-compose, that directory is mounted as a volume under the mysql service:
volumes:
  - ~/workspace/app:/workspace/app
The dump file is placed there and is imported into the mysql service when I run:
mysql -u app -papp app < /path/to/the/dumpfile
I can go in using docker exec and verify not only the database is there but the tables from my dump file are there as well.
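For example, a hedged one-liner for that check (the container name db and the app credentials are assumptions):
# list the tables that the dump created inside the running container
docker exec -it db mysql -u app -papp -e 'SHOW TABLES' app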
For me, I also had to create a new superuser in my backend container through our Django app.
python3 manage.py createsuperuser
With that, then logging in on localhost:8000/api, everything was linked up between the mysql, backend, and frontend containers.
Hope this helps! I'm sure not all the details are the same for others, but using volumes, I didn't have to copy any dump file in, and it ended up automatically imported and served. That was my big issue.
Another way, run from the folder where dump.sql resides:
docker exec -i containername mysql -uroot -ppassword mysql < dump.sql
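For a gzipped dump (the dmp.gz case mentioned in the question), a variant that streams through gunzip should work the same way (the database name app is an assumption):
gunzip -c dump.sql.gz | docker exec -i containername mysql -uroot -ppassword app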
Does anyone have an idea how to execute a MySQL command inside a Kubernetes pod from a GitLab Runner?
My Problem:
I want to create two view tables for my database, which is set up and ready, from inside a GitLab pipeline.
My current approach:
1. I read out the wordpress pod info:
MSPOD=$(kubectl get pods --namespace=default -o=jsonpath="{.items[*].metadata.name}" -l app=wordpress,tier=mysql)
2. I try to execute the CREATE VIEW as a single command, since I cannot sh into the pod via the Runner:
kubectl exec $MSPOD -- mysql --database=wordpress --password='M*****?' -e "CREATE VIEW ...;"
But this does not work; it actually tries to run the individual parts of the command in the terminal.
It also does not work as an embedded execution:
kubectl exec $MSPOD -- $(mysql --database=wordpress --password='M*****?' -e "CREATE VIEW ...;")
This causes the same error.
An init container with the MySQL client should work for you. Your SQL code can be provided as a ConfigMap.
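Alternatively, since kubectl exec -i forwards stdin, the SQL can be piped in from a file kept in the repository; a minimal sketch (views.sql and the $DB_PASS CI variable are assumptions):
# run the SQL file from the pipeline job against the pod found earlier
kubectl exec -i "$MSPOD" -- mysql --database=wordpress --password="$DB_PASS" < views.sql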
Using: MySQL 5.7
What I want to achieve:
To save the console output of Cloud SQL as text.
Error:
Cloud SQL returns this:
ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)
Things I tried:
Logging in with no password → it asks for a password anyway, and no password, including that of the server itself, works.
Creating various users with password → same error result
Creating a Cloud SQL instance with skip-grant-tables so that no permission is required to modify the table → Cloud SQL does not support this flag
I tried manually flagging the database with this option, but Cloud Shell doesn't even support root login without a password.
Possible solution:
If I could run mysql -u root against Cloud SQL with no password, then this would work just fine. It seems that no user besides root can even log in to the instance.
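One thing I haven't tried yet: from what I read, the root password can be reset from Cloud Shell with the gcloud CLI, which might unblock the login (the instance and password values here are placeholders):
gcloud sql users set-password root --host=% --instance=INSTANCE_ID --password=NEW_PASSWORD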
Thank you in advance. Any clues / help is appreciated.
I believe the most trivial solution is to use the Google Cloud SDK with the following commands.
You will export the results of the query in CSV format to a Google Cloud Storage bucket, and copy them from the bucket to your system. Then you'll have to parse the CSV file, which is a standard procedure.
There's a how-to guide here, and you can take a look at a concrete example below:
Have some variables that will be used in multiple commands:
INSTANCE_ID=your_cloud_sql_instance_id
BUCKET=gs://your_bucket_here
Create a bucket if you don't have one, choosing the location accordingly:
gsutil mb -l EU -p $DEVSHELL_PROJECT_ID $BUCKET
You can read the explanation of the following commands in the documentation, but the bottom line is that you will have a CSV file on your file system at the end. Also make sure to edit the DATABASE variable below, as well as the corresponding query.
gsutil acl ch -u `gcloud sql instances describe $INSTANCE_ID --format json | jq -c ".serviceAccountEmailAddress" | cut -d \" -f 2`:W $BUCKET
DATABASE=db_visit
FILENAME=$DATABASE'_'`date +%Y%m%d%H%M%S`_query.csv
gcloud beta sql export csv $INSTANCE_ID $BUCKET/$FILENAME --database=$DATABASE --query="select * from visit"
gsutil cp $BUCKET/$FILENAME .
To automate the login through the mysql client, make subsequent queries, and capture their output, I encourage you to research a solution along the lines of pexpect.
I’m looking to create a deploy script that I can run from a terminal and it automatically deploys my site from a repository. The steps I’ve identified are:
Connect to remote server via SSH
Fetch latest version of site from remote repository
Run any SQL patches
Clean up and exit
I’ve placed the SSH connection and git pull commands in my shell file, but what I’m stuck with is MySQL with it being an (interactive?) shell itself. So in my file I have:
#!/bin/bash
# connect to remote server via SSH
ssh $SSH_USER@$SSH_HOST
# update code via Git
git pull origin $GIT_BRANCH
# connect to the database
mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME
# run any database patches
# disconnect from the database
# TODO
exit 0
As you can see, I’m connecting to the database, but not sure how to then execute any MySQL statements.
At the moment, I have a directory containing SQL patches in numerical order. So 1.sql, 2.sql, and so on. Then in my database, I have a table that simply records the last patch to be run. So I'd need to do a SELECT statement, read the last patch that was run, and then run any necessary patches.
How do I issue the SELECT statement to the mysql prompt in my shell script?
Then what would be the normal flow? Close the connection and re-open it, passing a patch file as the input? Or to run all required patches in one connection?
I assume I’ll be checking the last patch file, and doing a do loop for any patches in between?
Help here would be greatly appreciated.
Assuming you want to do all the business on the remote side:
ssh $SSH_USER@$SSH_HOST << END_SSH
git pull origin $GIT_BRANCH
mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME << END_SQL
<sql statements go here>
END_SQL
END_SSH
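One detail worth noting about this sketch: because the END_SSH delimiter is unquoted, $GIT_BRANCH, $MYSQL_USER, and the other variables are expanded on the local machine before the script is sent over SSH; quote the delimiter (<< 'END_SSH') if you want them expanded on the remote side instead.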
You could get the output from mysql using Perl or similar. This could be used to do your control flow.
Put your mysql commands into a file as you would enter them.
Then run it as: mysql -u <user> -p -h <host> < file.sql.
You can also put queries on the mysql command line using '-e'. Put your 'select max(patch) from …' query there and read the output in your script.
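Building on that, a minimal sketch of the whole patch loop (the patches table, its patch column, and the patches/ directory are assumptions based on the question):
# read the highest applied patch number (-N: no header row, -B: batch output)
LAST=$(mysql -N -B --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME -e 'SELECT MAX(patch) FROM patches')
# apply any newer patches in numeric order (1.sql, 2.sql, ...)
for f in $(ls patches/*.sql | sort -t / -k 2 -n); do
  n=$(basename "$f" .sql)
  if [ "$n" -gt "$LAST" ]; then
    mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME < "$f"
    mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME -e "UPDATE patches SET patch = $n"
  fi
done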
cat *.sql | mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME