OpenStack Trove database instance status=error after creating a new instance - mysql

Recently I installed OpenStack Trove using the automated script (devstack). After it installed successfully and I created some users and projects, I managed to create a database instance and a database inside it. Unfortunately, every database instance that I build (via the command line or the Horizon dashboard) gets an error status from Trove, so when I try to create a database inside any of the created database instances I get stuck with "database instance is not ready".
I did some googling and some people mentioned that I should check nova-compute.log, but unfortunately I could not find that log file. Would you please guide me?
Regards.

If you are running devstack, it creates a screen session for all the services, and that session contains the logs. Try running "screen -x" to attach to the session and view the logs from there. Each window within the screen session is a separate running service.
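For example, a quick way to get at the logs (the session and window names below are what a typical devstack setup uses and may differ on yours, so treat this as a sketch):
# attach to the devstack screen session (usually named "stack")
screen -x stack
# inside screen: Ctrl-a " lists the windows; n-cpu is nova-compute,
# tr-api / tr-tmgr / tr-cond are the trove services
# Ctrl-a d detaches again without stopping anything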
If you had an issue creating an instance from Trove, it may be because the image you are using is not set up correctly for Trove: it needs the Trove guest agent installed and a configuration for the guest agent baked into the image.
We have a repo that uses devstack to create a development Trove installation that might be of use for testing things out. This README should help get you started.
https://github.com/openstack/trove-integration/blob/master/README.md
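Roughly, getting started with that repo looks like the following; the commands are from memory and the README is authoritative, so double-check there:
git clone https://github.com/openstack/trove-integration
cd trove-integration/scripts
./redstack install
./redstack kick-start mysql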

How do you change the public key of an Oracle Cloud instance?

I accidentally deleted my public and private key and had to generate new SSH keys, since I wasn't able to restore the old ones (and didn't have a backup anywhere). How do I change the public SSH key of my Oracle Cloud instance?
Terminating the instance and remaking it isn't an option, and I've tried looking online but wasn't able to find much. Any help would be appreciated.
Thanks
Some background
Found a solution! Just so people are aware, there are methods online that involve connecting to the machine via VNC, but for me personally it felt very trial-and-error (pressing buttons at the wrong time), and it ended up not working properly (VNC didn't display recovery mode for me, just a blank screen after selecting it).
Summary
This guide involves creating another machine (it's included in the free tier anyway), detaching the boot volume from the original machine and attaching it to the machine you just created, editing the keys on that volume, then attaching the volume back.
Create another VPS (Oracle includes them in the free tier)
I deleted one of my other VPSes in the Oracle panel (a free machine I didn't need and wasn't using) and created it anew (making sure to delete the old boot volumes before continuing).
(This solution assumes you're using Ubuntu 20.04, but it will probably work for other OSes as well.)
Basically from there,
I powered off the machine whose SSH key I wanted to change.
Once it was fully powered off, I detached the boot volume from the VPS and attached it as a block volume to the machine I had just created.
Log in to the new machine via SSH, hit the three dots next to the attached volume (image below) to view the connection commands, and run them to connect the drive.
Editing files on the drive & mounting process
Then, by running blkid (or sudo fdisk -l for a friendlier view), you can see what drives are available for mounting. So then you just make a folder and simply type:
sudo mount [drive path e.g. /dev/sdb] [folder path e.g. ./drive]
Edit the authorized_keys file on the mounted volume (e.g. ./drive/home/ubuntu/.ssh/authorized_keys), or wherever your machine is configured to keep it (Oracle disallows root login by default, but if you've changed that configuration it's up to you).
Then move out of the mounted folder so the drive can be unmounted, and run: umount [folder path e.g. ./drive]
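Putting those steps together, a minimal sketch of the sequence (assuming the attached volume shows up as /dev/sdb1 and your new public key is in ~/.ssh/new_key.pub; adjust both to your setup):
# find the attached volume and mount it
sudo blkid
mkdir ~/drive
sudo mount /dev/sdb1 ~/drive
# append the new public key for the default ubuntu user
cat ~/.ssh/new_key.pub | sudo tee -a ~/drive/home/ubuntu/.ssh/authorized_keys
# move out of the mount point and unmount before disconnecting the volume
cd ~ && sudo umount ~/drive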
Run the disconnect commands for the drive from the panel.
Then simply detach the drive from the other machine and reattach it to the original machine. Wait until it's fully attached, then start the machine again.
Alternatively, you can create a console connection, connect to it, then reboot the instance (through the OCI console) and get to GRUB in the console connection... a few more steps and you can upload a new SSH key: https://docs.oracle.com/en-us/iaas/Content/Compute/References/serialconsole.htm

Trigger external pipeline / job after Jira in OpenShift has started

I'm running Jira in OpenShift using the basic image from Atlassian: https://hub.docker.com/r/atlassian/jira-software
So far most things work fine.
I installed a plugin using the web UI, which worked as well.
But now I'm running into an issue when a pod is restarted. The pod uses the image, and naturally (as specified) my plugin is not installed anymore. I can install the plugin via web service calls and register it as an OSGi module for Jira, but I don't want to do this manually. Building a pipeline or job for this is quite easy (I'm thinking Jenkins or Ansible Tower), but so far I haven't found a way to trigger this pipeline after the pod is started (or better, after Jira is started).
Anyone got an idea how to handle this?
Thanks and best regards. Sebastian
Why not create a custom image based on the Atlassian image with everything you need installed?
As far as I know, there isn't a way to trigger a pipeline when a Pod is started; only Webhook, Image Change, and Config Change triggers are available. You'll need to write a Jenkinsfile to script all of the installation and setup you want, but then that can be triggered in one of the three ways mentioned.
I'm thinking an Image Change trigger would work best for you, so when the latest version of Atlassian's image comes out, you can run your pipeline to set everything up on the latest version.
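As a rough sketch of that idea (the ImageStream and BuildConfig names here are made up, and it assumes you already have a pipeline BuildConfig wrapping your Jenkinsfile):
# track the upstream Atlassian image in an ImageStream and re-import it periodically
oc import-image jira-software --from=docker.io/atlassian/jira-software --confirm --scheduled
# run the setup pipeline whenever that image changes
oc set triggers bc/jira-setup-pipeline --from-image=jira-software:latest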
Also, just curious, but do you have some persistent storage attached to the Jira pod? If not, you'll lose everything in Jira if the Pod dies; that means tickets, boards, comments, everything.
Update:
Looking at this page, it looks like most of the stuff you're trying to persist is stored in jira-home, so maybe mounting that as a persistent volume will be a good solution for you.
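A minimal sketch of that (the DeploymentConfig and PVC names are placeholders, and the mount path assumes the image's default Jira home of /var/atlassian/application-data/jira):
oc set volume dc/jira --add --name=jira-home \
  --type=persistentVolumeClaim --claim-name=jira-home-pvc \
  --mount-path=/var/atlassian/application-data/jira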
You're correct that the tickets are stored in the database, but I'm guessing the database connection settings are getting wiped when the Pod is cycled.
The jira-home directory stores your application and database connection settings, as well as a subdirectory for your plugins.
dbconfig.xml
This file (located at the root of your JIRA home directory) defines all details for JIRA's database connection. This file is typically created by running the JIRA setup wizard on new installations of JIRA or by configuring a database connection using the JIRA configuration tool.
You can also create your own dbconfig.xml file. This is useful if you need to specify additional parameters for your specific database configuration, which are not generated by the setup wizard or JIRA configuration tool. For more information, refer to the 'manual' connection instructions of the appropriate database configuration guide in Connecting JIRA to a database.
jira-config.properties
This file (also located at the root of your JIRA home directory) stores custom values for most of JIRA's advanced configuration settings. Properties defined in this file override the default values defined in the jpm.xml file (located in your JIRA application installation directory). See Advanced JIRA configuration for more information.
In new JIRA installations, this file may not initially exist and if so, will need to be created manually. See Making changes to the jira-config.properties file for more information. This file is typically present in JIRA installations upgraded from version 4.3 or earlier, whose advanced configuration options had been customized (from their default values).
plugins/
This is the directory where plugins built on Atlassian's Plugin Framework 2 (i.e. 'Plugins 2' plugins) are stored. If you are installing a new 'Plugins 2' plugin, you will need to deploy it into this directory under the installed-plugins sub-directory.
'Plugins 1' plugins should be stored in the JIRA application installation directory.
This directory is created on JIRA startup, if it does not exist already.
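So with jira-home on a persistent volume, dropping a 'Plugins 2' plugin into the running pod would look roughly like this (pod name and jar are placeholders; the path again assumes the image's default Jira home):
oc cp ./my-plugin-1.0.0.jar jira-1-abcde:/var/atlassian/application-data/jira/plugins/installed-plugins/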

Legacy GCE and GKE metadata requests from google_daemon/manage_addresses.py

I have an old Debian Compute Engine instance (created and running since December 2013) and got an email warning about the turndown of Legacy GCE and GKE metadata server endpoints (more details at https://cloud.google.com/compute/docs/migrating-to-v1-metadata-server).
I followed the directions for locating the process and found that the requests were coming from /usr/share/google/google_daemon/manage_addresses.py. The script seems to be the same as what's at https://github.com/gtt116/gce/blob/master/google_daemon/manage_addresses.py (also with what's in that directory).
I don't recall installing this, so I imagine it came with the Debian image I used in 2013.
Does anyone know what this manage_addresses.py script is, what it does, and what I should do with it now that the legacy metadata server endpoints are turning down? Is it safe to just stop running it? Or is there a new script I should replace it with? Or should I just try to update it myself to use the new endpoint?
I dug around and was able to trace /usr/share/google/google_daemon/manage_addresses.py to a package called google-compute-daemon. A search for that brought me to https://github.com/GoogleCloudPlatform/compute-image-packages#troubleshooting, which explains that google-compute-daemon has been replaced with python-google-compute-engine. That led me to https://cloud.google.com/compute/docs/images/install-guest-environment. I followed the instructions there and manually installed the guest environment.
I noticed during installation that it said it was removing the google-compute-daemon package (and a package called google-startup-scripts), so this seems like the right thing. And I'm no longer seeing any requests to the legacy endpoints. So it seems like at some point the old guest environment failed to update.
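One generic way to double-check that nothing is still hitting the legacy paths is to watch traffic to the metadata server and grep for the old URL prefixes; this is just a sanity check I'd suggest, not a step from the official doc:
sudo tcpdump -A -s 0 -i any host 169.254.169.254 and port 80 \
  | grep --line-buffered -E 'GET /(0\.1|computeMetadata/v1beta1)'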
TL;DR: If you have this problem, follow the instructions at https://cloud.google.com/compute/docs/images/install-guest-environment#installing_guest_environment to manually update the guest environment.
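On a Debian instance, the guide's manual install boils down to something like the following; the repo suite and package names depend on your Debian release and may have changed, so treat this as an approximation and follow the guide itself:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://packages.cloud.google.com/apt google-compute-engine-$(lsb_release -cs)-stable main" \
  | sudo tee /etc/apt/sources.list.d/google-compute-engine.list
sudo apt update
sudo apt install -y google-compute-engine python-google-compute-engine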

I need a foolproof checklist for hosting an already built e-commerce site on Amazon Web Services

I have built an e-commerce website on my local computer that uses Django 2.2 and Python 3.7.
The website consists of:
fancyfetish is the main project directory.
The apps (cart, users, baseapp, products, blog) are all stored in their own directory, 'apps'.
Within the settings folder I have three settings files:
- production.py
- base.py
- development.py
The 'static' folder in the main directory is where I put collectstatic files.
Media is where I store externally uploaded images (product images for example)
Docs is just random bits like a hand drawn site layout.
Static files like JS and CSS are stored within baseapp, within apps.
I want to host this website on Amazon Web Services, and I assume I need to use Elastic Beanstalk. I went through the process of trying to host with the free tier of EB, installed the EB CLI, and after using eb create and eb deploy on the CLI my website appeared.
However, the static files didn't load properly in the first instance because I had not properly configured DJANGO_SETTINGS_MODULE. I have now done this. But before deploying I added eb migrate functionality so that I could also migrate my database.
This seems to have messed everything up. I can no longer deploy because there is a DATABASE error, which I expected. The error said 'Not able to connect to MySQL database through localhost'. Well, of course it can't.
So, in order to deploy my site on AWS I needed to configure the databases, because with the eb migrate functionality it will no longer deploy without trying to also connect to my database using the settings I have configured.
I have so far, whilst in development mode, connected my project to MySQL and everything is running perfectly on localhost, with my models transferring beautifully to the database just as I would like.
I worked out that I need to create a database on AWS, obviously. So I set up an RDS instance. I didn't link it to my deployed application, because the application doesn't appear to have an environment that I can see when I log into my console. So I don't know where my project has been deploying to, because it doesn't look like the CLI version is connected to the online version in my console.
So I thought I'd deal with that problem later and work out how to make a database, which I managed to do. However, migrating the database I already have up and running in MySQL to my RDS database seems impossible, and there are not very good instructions. Let alone trying to then connect said database to my deployed application, which doesn't seem to sync with my local app.
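For reference, the data migration itself is usually just a dump and restore over the network; the database, user, and endpoint names below are placeholders, and the RDS security group has to allow MySQL traffic from wherever the restore runs:
# dump the local database
mysqldump -u local_user -p fancyfetish_db > fancyfetish_db.sql
# load the dump into the RDS instance
mysql -h mydb.xxxxxxxx.us-west-2.rds.amazonaws.com -P 3306 -u rds_user -p fancyfetish_db < fancyfetish_db.sql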
So, I have ended up deleting everything because I was becoming so confused, with so many new directories (.ebextensions etc.), a database that won't connect, a project that won't deploy, a database that won't point to my project, and so on. I ended up creating an EC2 folder and all sorts, getting myself massively confused about what I actually need to do to make this whole thing work.
If any part of this ramble makes any sense to anyone out there, and you yourself have managed to deploy a larger django project to AWS and keep your existing databases then please do let me know. But I have a feeling this may be a long shot.
Basically I need a step by step list of what to do to deploy:
For example:
1) Create an elastic beanstalk instance
2) Create an environment on CLI that syncs to the one in my AWS console
etc
etc
(With how to's if you possibly have the time!)
Thank you, and I am so sorry for being so confused by something that may be simple
Edited to show my process:
I have built a directory called .ebextensions with a file within it called django.config with the following content:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: fancyfetish/wsgi.py
I have run the following command:
eb init -p python-3.6 fancyfetish
There was no output as a result of this in the terminal, however a directory was created called .elasticbeanstalk with one file in it called config.yml
I then typed eb init to create an SSH key pair and there was no output from this command at all:
As you can see I have tried doing this several times.
Instead I created a key pair manually within the AWS console, and a file called keyname.pem was automatically downloaded to my computer.
I then typed into the console
chmod 400 path/to/key/keyname.pem
This provided no output on the terminal so I cannot know if it worked.
I moved the downloaded SSH key file into the .ssh directory in my computer's home directory, and then in the terminal typed:
eb init -k nameofkey
The output was:
WARNING: Uploaded SSH public key for "fancyfet" into EC2 for region us-west-2.
I then went on to type
eb create fancyfet-env
And an environment was created with the following output:
I know that this has to do with databases and connecting to MySQL.
I then typed:
eb deploy
With the following output:
So now comes the bit where I get stuck: successfully creating a database on AWS, getting the data that already lives in my local MySQL database into it, and connecting the project to that database.
HELP! (Thank you so much!)
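For reference, a common pattern for that last step is to hand the Elastic Beanstalk environment the connection details as environment variables and have production.py read them via os.environ; the names and values below are placeholders, not something from this post:
eb setenv RDS_HOSTNAME=mydb.xxxxxxxx.us-west-2.rds.amazonaws.com RDS_PORT=3306 \
  RDS_DB_NAME=fancyfetish_db RDS_USERNAME=rds_user RDS_PASSWORD=change-me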

Why does my custom beanstalk keep restarting?

I am trying to customize the default AMI of Beanstalk, but every time, the server restarts after some random amount of time. I went so far as to not change anything at all, but nothing works.
I have tried the following:
find the instance running the Beanstalk app, create an AMI from it, change the Beanstalk AMI setting to it: crashes
create a new instance from the same AMI Beanstalk uses, create an AMI from it, change the configuration: crashes
I have tried both stopping the instance before creating AMI, and creating AMI of running instance.
Edit: I found the answer here: Can't generate a working customized EC2 AMI from Amazon Beanstalk sample appl
From personal experience, point the health status page at a dummy, static .html file. Although not recommended, this will prevent the health checks from restarting the machine, and you can do more inspection inside.
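For example, from the CLI (the environment name and health page path are placeholders; this sets the standard "Application Healthcheck URL" option):
aws elasticbeanstalk update-environment --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:application,OptionName="Application Healthcheck URL",Value=/static/health.html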
AWS captures into the S3 logs only what is output via java.util.logging. That means all console logging is not transferred.
That said, make sure you define a key pair in your environment config so you can ssh in easily and see the output (the location varies: for Tomcat 7 it is at /opt/tomcat7; for Tomcat 6 it is under /usr/share/tomcat6).
Just to add to what aldrinleal wrote (can't comment yet): in the past, I would often find that a failed health check would also disable my site. By which I mean: if the health check points at your actual app and that app throws an exception, you don't actually get to see anything; the environment just reports a failed state. Only after I changed to a static file for the health check did I manage to see the errors.
Now obviously this is more of a problem with a dev environment, and you can always just pull the logs. But especially in the beginning, as someone new to AWS/Beanstalk, this helped me a lot.