Legacy GCE and GKE metadata requests from google_daemon/manage_addresses.py - google-compute-engine

I have an old Debian Compute Engine instance (created and running since December 2013) and got an email warning about the turndown of Legacy GCE and GKE metadata server endpoints (more details at https://cloud.google.com/compute/docs/migrating-to-v1-metadata-server).
I followed the directions for locating the process and found that the requests were coming from /usr/share/google/google_daemon/manage_addresses.py. The script appears to be the same as the one at https://github.com/gtt116/gce/blob/master/google_daemon/manage_addresses.py (as does the rest of that directory).
I don't recall installing this, so I'm guessing it came with the Debian image I used in 2013.
Does anyone know what this manage_addresses.py script is, what it does, and what I should do with it now that the legacy metadata server endpoints are turning down? Is it safe to just stop running it? Or is there a new script I should replace it with? Or should I just try to update it myself to use the new endpoint?

I dug around and was able to trace /usr/share/google/google_daemon/manage_addresses.py as being installed by a package called google-compute-daemon. A search for that brought me to https://github.com/GoogleCloudPlatform/compute-image-packages#troubleshooting which explains that google-compute-daemon has been replaced with python-google-compute-engine. That led me to https://cloud.google.com/compute/docs/images/install-guest-environment . I followed the instructions there and manually installed the guest environment.
I noticed during installation that it said it was removing the google-compute-daemon package (and a package called google-startup-scripts), so this seems like the right thing. And I'm no longer seeing any requests to the legacy endpoints. So it seems like at some point the old guest environment failed to update.
TL;DR: If you have this problem, follow the instructions at https://cloud.google.com/compute/docs/images/install-guest-environment#installing_guest_environment to manually update the guest environment.
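For reference, on my Debian instance the manual installation boiled down to roughly the following (a sketch only; the repository name depends on your Debian release, and the authoritative package list is in the linked documentation):
# Add Google's apt repository for the guest environment.
# DIST is your Debian release codename (e.g. stretch); adjust as needed.
DIST=stretch
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://packages.cloud.google.com/apt google-compute-engine-${DIST}-stable main" | sudo tee /etc/apt/sources.list.d/google-compute-engine.list
# Install the current guest environment; apt removed the obsolete
# google-compute-daemon and google-startup-scripts packages in the process.
sudo apt-get update
sudo apt-get install -y python-google-compute-engine google-compute-engine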

Related

Trigger external pipeline / job after Jira in OpenShift starts

I'm running Jira in OpenShift using the basic image from Atlassian: https://hub.docker.com/r/atlassian/jira-software
So far most things work fine.
I installed a plugin using the web UI, which worked as well.
But now I'm running into an issue when a pod is restarted. The pod uses the image, and naturally (as specified) my plugin is no longer installed. I can install the plugin via web service calls and register it as an OSGi module for Jira, but I don't want to do this manually. Building a pipeline or job for this is quite easy (I'm thinking Jenkins or Ansible Tower), but so far I haven't found a way to trigger this pipeline after the pod is started (or better, after Jira is started).
Anyone got an idea how to handle this?
Thanks and best regards. Sebastian
Why not create a custom image based on the Atlassian image with everything you need installed?
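For example, a hypothetical sketch of such a derived image (my-plugin.jar and the registry name are placeholders; the plugin path follows the JIRA home layout quoted in the answer below):
# Hypothetical sketch: bake the plugin into a derived image so restarts keep it.
cat > Dockerfile <<'EOF'
FROM atlassian/jira-software
# Plugins 2 plugins are picked up from JIRA home under plugins/installed-plugins
COPY my-plugin.jar /var/atlassian/application-data/jira/plugins/installed-plugins/
EOF
docker build -t registry.example.com/myteam/jira-with-plugin:latest .
docker push registry.example.com/myteam/jira-with-plugin:latest
Note that if you later mount a persistent volume over the JIRA home directory, it will shadow anything baked into the image at that path.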
As far as I know, there isn't a way to trigger a pipeline when a Pod is started; only Webhook, Image Change, and Config Change triggers are available. You'll need to write a Jenkinsfile to script all of the installation and setup you want, but then that can be triggered in one of the three ways mentioned.
I'm thinking an Image Change trigger would work best for you, so when the latest version of Atlassian's image comes out, you can run your pipeline to set everything up on the latest version.
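A rough sketch of wiring that up with the oc CLI (the BuildConfig name jira-setup is a placeholder for a pipeline build that contains your Jenkinsfile):
# Track Atlassian's image in an image stream and re-import it on a schedule.
oc import-image jira-software --from=docker.io/atlassian/jira-software --confirm --scheduled
# Add an image-change trigger so the pipeline runs whenever the tracked image updates.
oc set triggers bc/jira-setup --from-image=jira-software:latest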
Also, just curious, but do you have some persistent storage attached to the Jira pod? If not, you'll lose everything in Jira if the Pod dies; that means tickets, boards, comments, everything.
Update:
Looking at this page, it looks like most of the stuff you're trying to persist is stored in jira-home, so mounting that as a persistent volume may be a good solution for you (see the sketch after the excerpt below).
You're correct that the tickets are stored in the database, but I'm guessing the database connection settings are getting wiped when the Pod is cycled.
The jira-home directory stores your application and database connection settings, as well as a subdirectory for your plugins.
dbconfig.xml
This file (located at the root of your JIRA home directory) defines all details for JIRA's database connection. This file is typically created by running the JIRA setup wizard on new installations of JIRA or by configuring a database connection using the JIRA configuration tool.
You can also create your own dbconfig.xml file. This is useful if you need to specify additional parameters for your specific database configuration, which are not generated by the setup wizard or JIRA configuration tool. For more information, refer to the 'manual' connection instructions of the appropriate database configuration guide in Connecting JIRA to a database.
jira-config.properties
This file (also located at the root of your JIRA home directory) stores custom values for most of JIRA's advanced configuration settings. Properties defined in this file override the default values defined in the jpm.xml file (located in your JIRA application installation directory). See Advanced JIRA configuration for more information.
In new JIRA installations, this file may not initially exist and, if so, will need to be created manually. See Making changes to the jira-config.properties file for more information. This file is typically present in JIRA installations upgraded from version 4.3 or earlier, whose advanced configuration options had been customized (from their default values).
plugins/
This is the directory where plugins built on Atlassian's Plugin Framework 2 (i.e. 'Plugins 2' plugins) are stored. If you are installing a new 'Plugins 2' plugin, you will need to deploy it into this directory under the installed-plugins sub-directory.
'Plugins 1' plugins should be stored in the JIRA application installation directory.
This directory is created on JIRA startup, if it does not exist already.
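If you do mount jira-home on a persistent volume, a minimal sketch with the oc CLI might look like this (it assumes a DeploymentConfig named jira and the image's default home path of /var/atlassian/application-data/jira; adjust both to your setup):
# Create a PVC (if it doesn't exist yet) and mount it over the JIRA home directory
# so dbconfig.xml, jira-config.properties and installed plugins survive pod restarts.
oc set volume dc/jira --add --name=jira-home \
  --type=persistentVolumeClaim --claim-name=jira-home --claim-size=10Gi \
  --mount-path=/var/atlassian/application-data/jira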

Openshift OKD Excessive Logging

So I installed a single-host OpenShift OKD v3.11 cluster on a VM running CentOS 7.8.2003.
It seems to have installed OK, except that it continually streams verbose logs to /var/log/messages, around 5 messages per second, all seemingly about throttling requests. Example of a typical log message:
Jun 13 15:49:13 centos7 journal: I0613 14:49:13.011402 1 request.go:485] Throttling request took 196.341689ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-cert-signer/serviceaccounts/service-serving-cert-signer-sa
The only reference I have managed to find is a question here, but access to the discussion is only available to those with deep pockets.
https://access.redhat.com/solutions/3348921
I assume these logs are nothing to worry about, so my main question is: what is the "best"/cleanest/simplest/easiest way to ensure the OpenShift cluster doesn't keep filling up /var/log/messages while still logging any important messages there?
I would recommend looking at the root cause of this behavior. These messages indicate that there are a lot of requests coming to your API; typically this is due to some application performing calls in a tight loop. In your case, check your openshift-service-cert-signer to see whether it shows any warnings or an abnormal number of log messages (a couple of oc commands for that are sketched below).
If you want to get rid of the throttling messages, you can increase the number of queries per second (QPS) allowed for the API server clients: see Recommended Practices for OKD Master Hosts (lower part).
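To check the cert-signer, something along these lines should work (standard oc commands; the namespace comes from the log line in the question, and the pod name is whatever the first command shows):
# List the pods in the namespace the throttled requests refer to,
# then inspect a pod's logs for warnings or an unusually high message rate.
oc get pods -n openshift-service-cert-signer
oc logs -n openshift-service-cert-signer <pod-name> --tail=100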
The only reference I have managed to find is a question here but the access to the discussion is only available to those with deep pockets. https://access.redhat.com/solutions/3348921
I do not understand why you're saying that, as I can access that document with my free Red Hat account without any subscriptions. Have you tried with a free account as it says on the site?
Simon's answer was helpful but I've finally got to the bottom of this.
The problem was simply that the version of Docker I had installed was old. At the time of writing, the latest version of CentOS is 7.8.2003, and if you install that and then simply run "yum install docker", hoping you'll get something at least reasonably new and certainly compatible with the rest of the Linux installation, you'll probably be making a mistake.
The right thing to do is to follow the simple steps here:
https://docs.docker.com/engine/install/centos/
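For reference, the steps on that page boil down to roughly the following (check the page itself for the current details):
# Remove the distro-packaged Docker, then install Docker CE from Docker's own repo.
sudo yum remove -y docker docker-client docker-common docker-engine
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker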
The reason I found the problem was that the excessive logging of my OpenShift cluster wasn't the only issue. I started seeing strange behaviour in other containers. A process of trial and error narrowed the issue down to the default CentOS version of Docker. Once I followed the page above, all my problems vanished, including the original problem of /var/log/messages getting hammered by OpenShift containers.
The main reason I decided to answer my own question is that surely someone else is going to be as impatient/thick as me and simply install CentOS 7, then try "yum install docker" without knowing they're about to enter a world of pain.

2 factor authentication (2 step verification) with Google compute engine

Is there a way to enable 2-factor authentication (or "2-step verification" in Google terminology) for Google Compute Engine?
I'm interested in protecting my VMs, cloud storage and the developers console.
I've tried using the Google Authenticator PAM module (libpam), following the article Securing SSH with two factor authentication using Google Authenticator on a VM, but it didn't succeed (I managed to log in with gcloud compute ssh without any additional code).
[Jan 12th]
Some updates:
Google developer console works perfectly. Thanks.
For 2-step verification with the compute-engine SSH access, I retried everything all over again. Followed the instructions mentioned in the links provided, and did the following:
I created a new Google-Cloud project.
I used 2 different OS instances - Debian 8.2 and Ubuntu 15.10.
All of these tests failed - there was no prompt for a verification code.
I looked around in the Google compute-engine documentation, and they mention explicitly they support only certificate authentication (rather than username/password), so I cannot verify whether this is the root cause.
Is there anyone using 2-step verification with Google compute-engine?
Thanks
At last, a solution (thanks to Google Cloud support).
A couple of updates on top of the document I referred to:
Apart from adding a line to /etc/pam.d/sshd (per the original instructions), one should also comment out the @include common-auth line, so the relevant part looks something like:
auth required pam_google_authenticator.so
#@include common-auth
Apart from changing the ChallengeResponseAuthentication property in /etc/ssh/sshd_config (per the original instructions), one should also add an AuthenticationMethods line, so that it reads:
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
Of course, this is on top of the regular instructions: installing libpam-google-authenticator, changing sshd and sshd_config (as mentioned above), restarting the ssh/sshd service, and setting up google-authenticator for the account.
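Putting it together, the whole sequence on a Debian/Ubuntu VM looks roughly like this (a sketch of the steps above, not a drop-in script; keep a session open and double-check each file before restarting sshd, or you may lock yourself out):
# Install the PAM module (see the note about apt-get further down).
sudo apt-get install -y libpam-google-authenticator
# /etc/pam.d/sshd: require the authenticator and comment out common-auth.
echo 'auth required pam_google_authenticator.so' | sudo tee -a /etc/pam.d/sshd
sudo sed -i 's/^@include common-auth/#@include common-auth/' /etc/pam.d/sshd
# /etc/ssh/sshd_config: enable challenge-response and require key + code.
sudo sed -i 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
echo 'AuthenticationMethods publickey,keyboard-interactive' | sudo tee -a /etc/ssh/sshd_config
# Restart sshd, then enroll each account (scan the QR code / note the secret).
sudo service ssh restart
google-authenticator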
Finally, a few more points:
Consider this carefully: from the moment you restart the ssh/sshd service, no one can log in without proper 2FA. So make sure everyone who should have SSH access has configured it properly.
I'm contemplating whether this is the proper solution for us, as it requires setting up each VM separately and manually configuring the authenticator for each account on each VM. I'm not sure how scalable this alternative is. I would appreciate your thoughts...
Last but not least, the setup of libpam-google-authenticator can be simplified by using apt-get; there is no need to manually install all dependencies and build it. This worked for me:
sudo apt-get install libpam-google-authenticator
Good Luck!

OpenShift system and package updates/patches

How does one keep OpenShift gears up-to-date? For example, updates to:
The Linux kernel
Important components/libraries like libc
Apache
Apache modules like mod_wsgi
Python
Python packages
Does OpenShift automatically update these and then restart the gear (or reboot the node)? Or does OpenShift send email notifications so that end users can restart their gears during a maintenance window? What is the model?
What got me thinking about this was the remote-code-execution bug in Ruby on Rails back in January that everyone had to patch immediately.
This FAQ seems to suggest that some level of upgrades happen automatically, but it isn’t clear whether this only applies to the OpenShift-specific code, or also other components like the kernel, Apache, etc.
I can tell you from my experience that changes to the OpenShift system are not always automatic. They made a change about 10 days ago and I'm still tracking down what they did so I can make my app run correctly again. As far as I know, no email was sent. I did find a blog post covering some of the major changes, but not all of them. Of course, they introduced at least one bug that I know of. YMMV.
My experiences over the last few weeks have been the following:
Last week there seemed to be an unannounced reboot of the server. I detected this by logging from a custom action hook. I didn't receive any email about it and I didn't see any notice at https://twitter.com/openshift_ops or https://openshift.redhat.com/app/status.
This week, there was the Heartbleed OpenSSL vulnerability and it seems like some gears were restarted. I didn't receive any email about it, Twitter didn't show anything, but there was information on the status page.

Windows Store App Cert Kit Fails to Start

The Windows Store App Cert kit fails to start.
Normally the flow is:
Start App Cert Kit
Choose Windows Store Application
Choose App
Choose Certs to Run
Run Certs
Save results
Currently, it fails just before 'Choose App'.
I get this result:
I have tried uninstalling and reinstalling, installing all updates, and running appcert.exe reset. All to no avail.
I think it may have to do with the same issue I get with the Windows Store app. I have 38 languages installed (our app supports all of them) and the Windows Store app doesn't like it. I've had to escalate with customer support repeatedly for this, and have not had it resolved properly. I wonder if the same issue applies here.
So I was unable to start the UI, but I was still able to run the App Certification Kit by calling appcert.exe manually on the command line as described here. It worked fine. One thing to note: between each attempt, make sure to run appcert.exe reset.
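For anyone else who needs the command-line route, the invocation is roughly as follows (the package name is a placeholder; I believe these are the documented appcert.exe switches, but verify against the page linked above):
rem Reset between attempts, look up the installed package's full name, then run the tests.
appcert.exe reset
powershell -Command "Get-AppxPackage *MyApp* | Select-Object PackageFullName"
appcert.exe test -packagefullname <PackageFullName> -reportoutputpath "%USERPROFILE%\appcert_report.xml"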