OS Patch Management feature - reports important security updates available for instances after running patch job - google-compute-engine

I have set up 3 CentOS 7 VMs to test out the OS patch management feature that is now available in GCP. I deployed the patch manager agent and the Patch Management console reports all 3 VMs have "Important / security updates available". I then scheduled a Patch deployment job for CentOS and it ran on all 3 machines.
When I check the logs, I can see that the task began at the scheduled time and reports "No packages to update".
An hour later, the dashboard still reports: "Important / security updates available".
I have rebooted the VMs and the dashboard still has not changed and it shows 100% of the VMs requiring patching.
While I suspect that there really IS no security update available, I am not sure how we can trust the dashboard. Further, there are no hyperlinks to more information about what these important/security updates are, so how would you even know what fixes were going to be applied, if there were any?
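As a sketch, the same per-instance results should also be retrievable from the CLI rather than the dashboard (this assumes the os-config command group in a reasonably recent gcloud; command names may vary by version):

    # List recent patch jobs and note the job ID
    gcloud compute os-config patch-jobs list

    # Show what the job reported for each VM (state, failure reason, etc.)
    gcloud compute os-config patch-jobs list-instance-details PATCH_JOB_ID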

I was able to reproduce this error in my own project.
I created a VM instance with CentOS 7, then created a new patch deployment.
It is worth mentioning that I selected 'Minimal and Security updates' in the Patch config menu.
Then I received the same "No packages to update" message.
But I was able to fix it.
I found the following documentation: What is included in an OS patch job?
There it is mentioned that:
For Red Hat Enterprise Linux and CentOS operating systems, you can apply all or select from the following updates:
System updates
Security updates
So I created another patch deployment, but this time I didn't select 'Minimal and Security updates'; I just left it blank. That way it uses the default value, which applies all the updates instead of only the minimal and security ones.
And it worked.
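A likely explanation: stock CentOS repositories don't publish the updateinfo (security) metadata that yum's security filter relies on, so a security-only patch job finds nothing to apply even when newer packages exist. A minimal sketch of the difference as seen on the VM itself (assuming a stock CentOS 7 with the standard base/updates repos):

    # Security-only update: typically reports nothing on CentOS, because the
    # repos carry no security (updateinfo) metadata for yum to match against.
    sudo yum update --security

    # Plain update: applies everything, which is what the blank/default
    # patch config does.
    sudo yum update -y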

Related

OpenShift OKD Excessive Logging

So I installed a single-host OpenShift OKD v3.11 cluster on a VM running CentOS 7.8.2003.
It seems to have installed OK, except that it continually streams verbose logs to /var/log/messages, around five per second, nearly all about throttling requests. A typical example:
Jun 13 15:49:13 centos7 journal: I0613 14:49:13.011402 1 request.go:485] Throttling request took 196.341689ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-cert-signer/serviceaccounts/service-serving-cert-signer-sa
The only reference I have managed to find is a question here, but access to the discussion is only available to those with deep pockets.
https://access.redhat.com/solutions/3348921
I assume these logs are nothing to worry about, so my main question is: what is the "best"/cleanest/simplest/easiest way to ensure the OpenShift cluster doesn't keep filling up /var/log/messages while still logging any important messages there?
I would recommend looking at the root cause of this behavior. These messages indicate that there are a lot of requests coming to your API. Typically this is due to some application performing calls in a tight loop, which leads to this many messages. In your case, check openshift-service-cert-signer to see whether it shows any warnings or an abnormal number of log messages.
If you want to get rid of the throttling messages, you can increase the queries-per-second (QPS) limit for the API server: see Recommended Practices for OKD Master Hosts (lower part).
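From memory of the 3.11 docs, this knob lives in /etc/origin/master/master-config.yaml; the key names below are as I recall them and the values are purely illustrative, so verify both against the linked recommended-practices page before applying:

    masterClients:
      openshiftLoopbackClientConnectionOverrides:
        qps: 300     # sustained queries per second before throttling kicks in
        burst: 600   # short-lived burst allowance above the QPS limit

The master API and controllers need a restart afterwards for the change to take effect.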
The only reference I have managed to find is a question here but the access to the discussion is only available to those with deep pockets. https://access.redhat.com/solutions/3348921
I do not understand why you're saying that, as I can access that document with my free Red Hat account without any subscriptions. Have you tried with a free account as it says on the site?
Simon's answer was helpful but I've finally got to the bottom of this.
The problem was simply that the version of Docker I had installed was old. At the time of writing, the latest version of CentOS is 7.8.2003, and if you install that and then simply run "yum install docker", hoping that you'll get something at least reasonably new and certainly compatible with the rest of the Linux installation, you'll probably be making a mistake.
The right thing to do is to follow the simple steps here:
https://docs.docker.com/engine/install/centos/
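For reference, the steps on that page boil down to roughly the following (quoted from memory of the Docker docs, so check the page itself for the current form):

    # Remove the distro-packaged Docker, if present
    sudo yum remove docker docker-common docker-engine

    # Add Docker's official repository and install Docker CE from it
    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    sudo yum install -y docker-ce docker-ce-cli containerd.io

    # Start Docker now and enable it at boot
    sudo systemctl enable --now docker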
I found the problem because excessive logging from my OpenShift cluster wasn't the only issue: I started seeing strange behaviour from other containers too. A process of trial and error narrowed the issue down to the default CentOS version of Docker. Once I followed the page above, all my problems vanished, including the original problem of /var/log/messages getting hammered by OpenShift containers.
The main reason I decided to answer my own question is that surely someone else is going to be as impatient/thick as me and simply install CentOS 7, then try "yum install docker", without knowing they're about to enter a world of pain.

Legacy GCE and GKE metadata requests from google_daemon/manage_addresses.py

I have an old Debian Compute Engine instance (created and running since December 2013) and got an email warning about the turndown of Legacy GCE and GKE metadata server endpoints (more details at https://cloud.google.com/compute/docs/migrating-to-v1-metadata-server).
I followed the directions for locating the process and found that the requests were coming from /usr/share/google/google_daemon/manage_addresses.py. The script seems to be the same as what's at https://github.com/gtt116/gce/blob/master/google_daemon/manage_addresses.py (along with the rest of that directory).
I don't recall installing this, so I'm imagining it came with the provided Debian image I used in 2013.
Does anyone know what this manage_addresses.py script is, what it does, and what I should do with it now that the legacy metadata server endpoints are being turned down? Is it safe to just stop running it? Or is there a new script I should replace it with? Or should I just try to update it myself to use the new endpoint?
I dug around and was able to trace /usr/share/google/google_daemon/manage_addresses.py to a package called google-compute-daemon. A search for that brought me to https://github.com/GoogleCloudPlatform/compute-image-packages#troubleshooting, which explains that google-compute-daemon has been replaced by python-google-compute-engine. That led me to https://cloud.google.com/compute/docs/images/install-guest-environment. I followed the instructions there and manually installed the guest environment.
I noticed during installation that it said it was removing the google-compute-daemon package (and a package called google-startup-scripts), so this seems like the right thing. And I'm no longer seeing any requests to the legacy endpoints. So it seems that at some point the old guest environment failed to update.
TLDR; If you have this problem, follow the instructions at https://cloud.google.com/compute/docs/images/install-guest-environment#installing_guest_environment to manually update the guest environment.
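Once nothing is hitting the legacy endpoints any more, the migration page above also documents a metadata key that blocks them outright; a sketch, assuming current gcloud syntax:

    # Disable legacy (v1beta1/0.1) metadata endpoints on a single instance
    gcloud compute instances add-metadata INSTANCE_NAME \
        --metadata disable-legacy-endpoints=true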

Sikuli with Jenkins setup for continuous integration

I have my tests written in Sikuli. If I RDP into my Jenkins machine and have an active session, then all Sikuli tests pass.
However, during overnight runs my Jenkins machine gets locked. I want to know whether anyone has encountered and solved this issue before. Thanks!
Note: I cannot leave my Jenkins slave unlocked due to security reasons.
It's a known limitation of RDP.
Two possible solutions:
Install a VNC server (such as UltraVNC) and run it as a Windows service (make sure it is launched during Windows logon).
OR
Create a batch file that disconnects Remote Desktop, and use it instead of closing the RDP session with the regular X button. The batch command is:
%windir%\system32\tscon.exe %SESSIONNAME% /dest:console
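A minimal wrapper for that command (my own sketch, not part of the original answer): save it as something like disconnect.bat and run it elevated, since tscon needs administrative rights:

    @echo off
    rem Detach this RDP session to the physical console so the desktop stays
    rem unlocked and Sikuli can keep driving the GUI.
    %windir%\System32\tscon.exe %SESSIONNAME% /dest:console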

OpenShift system and package updates/patches

How does one keep OpenShift gears up-to-date? For example, updates to:
The Linux kernel
Important components/libraries like libc
Apache
Apache modules like mod_wsgi
Python
Python packages
Does OpenShift automatically update these and then restart the gear (or reboot the node)? Or does OpenShift send email notifications and the end-user can restart the gear during maintenance windows? What is the model?
What got me thinking about this was back in January there was a remote-code-execution bug in Ruby on Rails that everyone had to patch immediately.
This FAQ seems to suggest that some level of upgrade happens automatically, but it isn't clear whether this only applies to the OpenShift-specific code or also to other components like the kernel, Apache, etc.
I can tell you from my experience that changes to the OpenShift system are not always automatic. They made a change about 10 days ago and I'm still tracking down what they did so I can get my app running correctly again. As far as I know, no email was sent. I did find a blog post covering some of the major changes, but not all of them. Of course, they introduced at least one bug that I know of. YMMV
My experiences over the last few weeks have been the following:
Last week there seemed to be an unannounced reboot of the server. I detected this by logging from a custom action hook. I didn't receive any email about it and I didn't see any notice at https://twitter.com/openshift_ops or https://openshift.redhat.com/app/status.
This week, there was the Heartbleed OpenSSL vulnerability and it seems like some gears were restarted. I didn't receive any email about it, Twitter didn't show anything, but there was information on the status page.

ClickOnce and application settings

I have a ClickOnce-deployed Windows Forms application that uses application settings for two key features: which database the user connects to, and whether they use replication services or connect to the main server. Those settings were changed for some, but not all, users after they installed the most recent update.
What can cause application settings to be changed and how can I prevent it from happening in the future? The only explanation I can come up with is that I published from a different workstation than I had in the past.