Restoring a DokuWiki to OpenShift - openshift

I have an OpenShift account with DokuWiki in one app on the PHP 5.3 cartridge. I do backups using rhc snapshot save every day. Today I tried to do a restore with rhc snapshot restore, but it looks like the data is from the last git push I did, and the changes I made inside DokuWiki aren't in the restored "snapshot".
Am I doing something wrong?
The rhc command help says snapshot saves "the state of the application"; doesn't that mean what I expect (saving the whole state of the application)?
Thanks :)
OpenShift offers functionality to backup and restore with the snapshot command within the rhc client tools.
To backup your application code, data, logs and configuration, you run:
rhc snapshot save -a {appName}
To restore your application, you run:
rhc snapshot restore -a {appName} -f {/path/to/snapshot/appName.tar.gz}

When you do an rhc snapshot save, it saves what is in your git repository, what is in your app-root/data, and what is in any databases that you have running. So if you have ssh'd or sftp'd into your application and made changes, or used a web editor to make physical file changes (ones not stored in a database), then those changes will not be reflected in the backup/restore procedure.
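One quick sanity check, assuming DokuWiki keeps its pages on disk rather than in a database: list app-root/data over ssh and confirm your changed pages actually live there, since only that directory (plus the git repo and any databases) ends up in the snapshot.
rhc ssh -a {appName} --command 'ls app-root/data'
If the wiki's pages and conf directories are not under app-root/data, changes made through the web UI exist only elsewhere on the gear's filesystem and will not be captured by rhc snapshot save.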

Related

How to view logs of a running ejabberd node

I host my ejabberd on an AWS cloud server and access it using PuTTY. I start my ejabberd node using the ./ejabberdctl live command, which works perfectly fine. But when I close my PuTTY session and connect again the next day, I can't attach to the live logs until I stop the running node and start it again. How can I attach live logging to the previously running node?
There are typically two ways to run ejabberd:
A)
ejabberdctl live starts a new node and immediately attaches an interactive shell to it. You see the log messages right there in the shell. This is useful for debugging, testing and developing.
B)
ejabberdctl start starts a new node keeping it running in the background. You can see the log messages in the log files (/var/log/ejabberd/ejabberd.log or something like that). This is useful for production servers.
Later, you can run ejabberdctl debug to attach an interactive shell to that node. This is useful when you run a production server, and want to perform some administrative task.
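For the workflow described in the question, that works out to something like the following sequence (the log path is the one mentioned above and may differ on your install):
ejabberdctl start
tail -f /var/log/ejabberd/ejabberd.log
ejabberdctl debug
The node started with ejabberdctl start keeps running after you close the PuTTY session, so the next day you can simply re-attach with ejabberdctl debug or tail the log files again.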

Is there any way to access the git repository when pushing to openshift (v2)?

I'd like to access the git commit hash and tag, say using git describe, but the OpenShift virtual machines seem to have no git repository (readable by me, anyway).
Is there any way to access this information using the OpenShift deployment hooks? I'd like each instance to log the version (i.e. tag/commit hash) it's running.
At the moment, my best option seems to be to write a deployment script and use rhc scp to push the version string to the server, but that feels a bit hacky.
Thanks!
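For reference, the scp-based workaround mentioned above could look roughly like the local wrapper below (the file name, remote path and exact rhc scp argument order are assumptions and may vary between rhc versions):
git describe --always --tags > version.txt
rhc scp {appName} upload version.txt app-root/data/
git push openshift master
The app can then read app-root/data/version.txt at startup to log which tag/commit it is running.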

Automatically start gcloud sql proxy when google compute engine VM starts

I'm using Google Compute Engine and have an auto-scaling instance group that spins up new VMs as needed, all sitting behind a load balancer. I'm also using Google's Cloud SQL in the same project. The VMs need to connect to the Cloud SQL instance.
Since the IPs of the VMs are dynamic, I can't just plug the IPs into the SQL access config, so I followed the Cloud SQL Proxy setup along with the notes from this very similar question:
How to connect from a pool of Google Compute Engine instances to Cloud SQL DB in the same project?
I can now log into a single test VM and run:
./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306
and everything works great and that VM connects to the cloud SQL instance.
The next step is where I'm having issues. How can I set up the VM so that it automatically starts the proxy when it's either built from an instance template or simply restarted? The obvious answer seems to be to put the above command in the VM's startup script, but that doesn't seem to be working. With my single test VM I can SSH in, manually run the cloud_sql_proxy command, and everything works. If I then include the below in my startup script and restart the VM, it doesn't connect:
#! /bin/bash
./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306
Any suggestions? I seriously can't believe it's this hard to connect to Cloud SQL from a VM in the same project...
The startup script you have shown doesn't include the step that downloads cloud_sql_proxy.
You need to first download and then launch the proxy. So, your startup script should look like:
sudo wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
sudo mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
sudo chmod +x cloud_sql_proxy
sudo ./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306 &
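To attach that script to an existing instance (or bake it into an instance template), pass it as startup-script metadata; the instance name and file name below are placeholders:
gcloud compute instances add-metadata INSTANCE_NAME --metadata-from-file startup-script=startup.sh
Remember to keep the #! /bin/bash line at the top of the file so Compute Engine executes it as a shell script.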
I chose crontab to run cloud_sql_proxy automatically when the VM starts up.
crontab -e
and add a line like the following (cron starts with a minimal environment, so use the absolute path to the binary):
@reboot /path/to/cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306

How to recover the credentials of a cartridge installation?

Is it possible to recover the credentials generated during installation? In particular, those for the JBoss BPM Suite console and dashboard.
It happens that the creation takes quite a while and I need to refresh the page, so I never get the green frame with all the details.
If you have the rhc command-line tool installed, then you can simply do
rhc ssh --app <app_name> --command 'env'
Get to your application console page (through the OpenShift web UI), grab the SSH connection link (Remote Access) and use it to open a terminal connection in your application space.
Then type "env" and you should see a bunch of environment variables; your credentials should be stored in one of them.

Mercurial Repository Nightly Pull from a subdirectory on a server

I am attempting to run a Windows batch script nightly to pull a fresh copy of data from a Mercurial repository to my local hard drive, overwriting any data I have locally. The server on which the repository is located hosts many repos, so the repository lives in a subdirectory on the server. I have set up PuTTY to use an RSA key, so when I log onto the server with PuTTY I need only enter my username.
The batch script has a command:
hg pull ssh://myusername@mydomain.com/targetrepo/
...but this only opens a prompt for me to enter my password. Normally, this would be fine but because the pull will be executed from a batch script, I need the RSA key authentication to work.
How do I allow the batch script to pull from the Mercurial repository in a subdirectory on the server without requiring entry of a password?
You said it yourself -- you need the RSA key authentication to work, so you'll need to debug why it isn't working. The easiest way is to check the sshd logs on the server side. The cause is probably one of the following (a possible client-side fix with plink is sketched after the list):
Your key isn't on the server
The ~/.ssh directory or its contents' permissions on the server are wrong
The SSH daemon on the server doesn't allow passwordless access
It's not actually asking for a password at all; it's asking for a passphrase for your key
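If the server side checks out, the problem may simply be that Mercurial is not using your PuTTY key at all. A possible client-side fix on Windows is to point hg at plink explicitly (the key path below is an assumption):
hg pull --ssh "plink -batch -i C:\keys\mykey.ppk" ssh://myusername@mydomain.com/targetrepo/
Alternatively, keep Pageant running with the key loaded, or set ui.ssh in your mercurial.ini so every hg command uses plink with that key.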