https://docs.openshift.com/container-platform/4.5/support/troubleshooting/investigating-pod-issues.html
Says:
Depending on policy and exit code, pod and container logs remain available after pods have been terminated.
But it doesn't say which policy I can set to keep pod logs available after termination, or which exit codes keep the logs. Anyone got any ideas? Clearly it must be possible to keep the logs available after termination, but how?
(Yes, I know I could probably use some external logging solution. But I don't have one handy, and it would be a lot of work to stand one up in the dev environment and change the pod being developed to send its logs there, none of which is valid in the "prod" environment, so it would all need removing before the changes could go live. A change to a policy somewhere, on the other hand, sits entirely outside the code that launches the pod, so I can leave it in place in dev and not worry about undoing it before the dev work goes live.)
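For context, what I'd like is for something along these lines to still work after the pod has gone away (the pod and container names here are just placeholders):

oc logs my-pod                                # logs from the current container instance (placeholder pod name)
oc logs my-pod -c my-container --previous     # logs from the previous, crashed instance of that container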
Related
So I installed a single-host OpenShift OKD v3.11 cluster on a VM running CentOS 7.8.2003.
It seems to have installed OK, except that it continually streams verbose logs to /var/log/messages, around five per second, all apparently about throttling requests. A typical log message looks like this:
Jun 13 15:49:13 centos7 journal: I0613 14:49:13.011402 1 request.go:485] Throttling request took 196.341689ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-cert-signer/serviceaccounts/service-serving-cert-signer-sa
The only reference I have managed to find is a question here, but access to the discussion is only available to those with deep pockets.
https://access.redhat.com/solutions/3348921
I assume these logs are nothing to worry about, so my main question is: what is the "best"/cleanest/simplest/easiest way to ensure the OpenShift cluster doesn't keep filling up /var/log/messages, while still logging any important messages there?
I would recommend looking at the root cause of this behaviour. These messages indicate that a lot of requests are hitting your API; typically this is due to some application performing calls in a tight loop. In your case, check your openshift-service-cert-signer components for warnings or an abnormal amount of log messages.
If you want to get rid of the throttling messages, you can increase the number of queries per second (QPS) allowed for the API server; see Recommended Practices for OKD Master Hosts (lower part).
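If I remember the linked page correctly, the relevant knobs live in /etc/origin/master/master-config.yaml and look roughly like the sketch below. Treat the key names and values as something to verify against the doc rather than to paste in blindly, and I believe the master services need restarting afterwards:

masterClients:
  openshiftLoopbackClientConnectionOverrides:
    # raise these if the defaults are causing client-side throttling (values are illustrative)
    qps: 300
    burst: 600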
The only reference I have managed to find is a question here but the access to the discussion is only available to those with deep pockets. https://access.redhat.com/solutions/3348921
I do not understand why you're saying that, as I can access that document with my free Red Hat account without any subscriptions. Have you tried with a free account as it says on the site?
Simon's answer was helpful but I've finally got to the bottom of this.
The problem was simply that the version of Docker I had installed was old. At the time of writing, the latest version of CentOS is 7.8.2003, and if you install that and then simply run "yum install docker", hoping that you'll get something at least reasonably new and certainly compatible with the rest of the Linux installation, you'll probably be making a mistake.
The right thing to do is to follow the simple steps here:
https://docs.docker.com/engine/install/centos/
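For anyone skimming, the steps on that page boil down to roughly the following at the time of writing (check the page itself for the current package list):

sudo yum remove docker docker-common docker-engine       # drop the distro packages first (see the page for the full list)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker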
The reason I found the problem was that excessive logging from my OpenShift cluster wasn't the only issue; I started seeing strange behaviour from other containers. A process of trial and error narrowed the issue down to the default CentOS version of Docker. Once I followed the page above, all my problems vanished, including the original problem of /var/log/messages getting hammered by OpenShift containers.
The main reason I decided to answer my own question is that surely someone else is going to be as impatient/thick as me and simply install CentOS 7, then try "yum install docker", without knowing they're about to enter a world of pain.
My old father is using Ubuntu GNOME. He has no static IP address. In order to perform remote administration, I need to know his IP. I was using a free DynDNS account (configured in the ADSL modem), but this will stop working in a couple of days.
I would like to run a script each time he logs in, to publish his IP on my website. I have tried putting a script in the boot sequence, but the network is not available at that point. It seems that it is GNOME 3 that starts the network, but I do not know much about GNOME 3.
What should I do to have my script run automatically as soon as the network is available?
One possible, non-elegant solution for this is to put your script in his cron to run every X minutes :)
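A minimal sketch of what that crontab entry could look like, assuming a hypothetical URL on your website that you can later grep out of the web server's access log:

# run every 5 minutes; the server's access log will record his current public IP as the source address
# (https://example.com/ip-beacon is a placeholder, use whatever URL you actually serve)
*/5 * * * * /usr/bin/curl -fsS https://example.com/ip-beacon >/dev/null 2>&1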
Looking at my own /etc/NetworkManager/, it looks like there is a dispatcher.d folder that will do what you want. Just experiment with a bash/perl/python (or whatever) script in there and set the permissions appropriately. You can find the UUID in the system-connections/ folder. More information is available in man NetworkManager.
EDIT: Look what I found: https://askubuntu.com/questions/13963/call-script-after-connecting-to-a-wireless-network. It seems like this is exactly what you want.
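For illustration, a dispatcher script might look something like this (the file name and URL are made up; the script must be root-owned and executable):

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/90-publish-ip (hypothetical name)
# NetworkManager calls dispatcher scripts with $1 = interface and $2 = action ("up", "down", ...)
if [ "$2" = "up" ]; then
    curl -fsS https://example.com/ip-beacon >/dev/null 2>&1 || true   # placeholder endpoint
fi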
The easiest way is to use another dynamic DNS service; I used to run my own. You could also put a curl or wget command in cron, or create a systemd service that calls the command periodically. As a target you would have to use a machine of yours with a web server, where you can then see the IP in your logs.
It is not GNOME that connects the network; it is a system service called NetworkManager. It tries to connect at boot if possible. In some cases it waits for a wireless signal, in other cases it waits for a user password. I recently verified that in Fedora, NetworkManager properly implements systemd's network-online.target, but this may have yet to be fixed in other distributions; see the upstream bug report:
https://bugzilla.gnome.org/show_bug.cgi?id=728965
If you want to run a system service just after boot, you need to use:
[Unit]
...
Wants=network-online.target
After=network-online.target
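To make that concrete, a minimal complete unit might look like the sketch below, assuming a hypothetical notification script at /usr/local/bin/publish-ip.sh:

[Unit]
Description=Publish the current public IP once the network is up
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/publish-ip.sh

[Install]
WantedBy=multi-user.target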
You could also just run a script that calls nm-online at the beginning to wait for network connectivity, provided you can expect connectivity to come up in a reasonable time; otherwise it times out. Such a script can be run from any environment, including a user session.
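Such a script might look roughly like this (the URL is again a placeholder):

#!/bin/sh
# wait up to 60 seconds for NetworkManager to report connectivity, then publish the IP
if nm-online -q -t 60; then
    curl -fsS https://example.com/ip-beacon >/dev/null 2>&1   # hypothetical endpoint
fi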
And, as noted already, you can put a script into /etc/NetworkManager/dispatcher.d that will be called on any network configuration change and such a script can then filter connection up events and start the notification script.
I'm migrating a few projects from SVN to Mercurial and I'm not sure how to address this issue: because we are working with MVC 3, we have some SQL connection strings stored in our Web.config file.
Since TortoiseHg automatically starts a wide-open web server when you click "Web Server" from the context menu, I'm looking into ways to restrict it or lock it down, but I haven't had any luck. We obviously don't want anyone being able to browse or pull, which is enabled by default. While the simplest solution is just not to run it, it is entirely possible that a developer accidentally clicks it while trying to synchronize or clone, clicks X to close it, and then ends up with his local server running without a clue.
How do other developers address this? Am I missing something? I've thought about pushing out a GPO blocking remote access to port 8000, but there's nothing stopping a dev from scrolling up and changing the port or something silly.
After all clarifications, I still believe you're trying to solve the wrong problem.
hg serve is a legitimate tool that can be used to pull changesets between developers on the same network when it's too early to push those changesets to the server. It may or may not fit into your workflow, but I don't think the problem lies there.
If you expect malice, then nothing prevents a developer from exposing the sensitive information in the Web.config (and, by the way, the source code itself) to a third party, even if you somehow block hg serve.
On the other hand, if you expect carelessness, then you should instruct the developers not to use hg serve, or stop storing any sensitive information there, or possibly do both.
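Not something the answer above suggests, but as a partial mitigation for the carelessness case, a site-wide Mercurial config along these lines should make an accidentally started hg serve refuse to show repository contents; the [web] options are standard hgrc settings, but verify the behaviour against your Mercurial version:

[web]
# sketch: deny read/browse access to everyone over hgweb / hg serve
deny_read = *
# and make sure nobody can push over HTTP either
allow_push =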
I am trying to customize the default AMI of Elastic Beanstalk, but every time I end up with server restarts after some random amount of time. I went so far as to not change anything at all, but nothing works.
I have tried the following:
find the instance running Beanstalk, create an AMI from it, set that AMI on Beanstalk: crashing
create a new instance from the same AMI as Beanstalk uses, create an AMI from it, modify the Beanstalk configuration to use it: crashing
I have tried both stopping the instance before creating the AMI and creating the AMI from the running instance.
Edit: I found the answer here: Can't generate a working customized EC2 AMI from Amazon Beanstalk sample appl
From personal experience: point the health status page to a dummy, static .html file. Although not recommended, this will prevent the health checks from restarting the machine, and you can then do more inspection from inside.
AWS captures into the S3 logs only output sent via java.util.logging, which means plain console logging is not transferred.
That said, make sure you define a private key in your environment config so you can SSH in easily and see the output (the location changes: for Tomcat 7 it is at /opt/tomcat7; for Tomcat 6 it is under /usr/share/tomcat6).
Just to add to what aldrinleal wrote (can't comment yet): in the past, I would often find that a failed health check would also disable my site. By which I mean: if you have the health check pointed at your actual app and that app threw an exception, you wouldn't actually get to see anything; the environment would just report a failed state. Only after I changed to a static file for the health check did I manage to see the errors.
Now, obviously this is more of a problem with a dev environment, and you can always just pull the logs. But especially in the beginning, as someone new to AWS/Beanstalk, this helped me a lot.
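For what it's worth, pointing the health check at a static page can be done from the CLI with something like the command below; the environment name and path are placeholders, and the option namespace is worth double-checking against the current Beanstalk docs:

# sketch: my-env and /health.html are placeholders
aws elasticbeanstalk update-environment --environment-name my-env \
    --option-settings "Namespace=aws:elasticbeanstalk:application,OptionName=Application Healthcheck URL,Value=/health.html"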
I am setting up a number of slaves to my Hudson master, grouped by labels. I would like to be able to have a set of nodes that run during the day and an additional set of nodes that are turned on during the evening.
Is this possible, either directly by hudson or via plugin or script? If so what is your recommended solution?
There is an experimental feature to schedule when each slave should be available. It is in core, but you have to set a system property to enable it. So if you start Hudson with
java -Dhudson.scheduledRetention=true -jar hudson.war
You will get an extra configuration option on each node, allowing you to specify a schedule of when that node should be used.
Let the OS (or any other scheduler) control the start and stop of a node. Hudson only uses what's available. Not sure how Hudson acts if a node dies while running a job.
Update: the feature that Michael Donohue described is no longer experimental and is available for all nodes (I use SSH nodes). It works great (at least the "take only if needed" feature).
Expanding on what Peter Schuetze said...
Unless the nodes are VMs that you want Hudson to manage (see the VMware plugin), the start and stop operations are out of Hudson's control. Depending on how you have your slaves set up, Hudson may just automatically connect when it sees the node is online, or you may need to make sure the slave runs something at startup.
You can use the Hudson API (generally HTTP POSTs to URLs on the Hudson master) to tell Hudson that nodes are going offline ahead of time. This will help avoid builds that get killed when the node goes down. Check out the HTML source on the node's page (http://hudson/computer/node_name) to see what the web interface does for the "mark offline" and "disconnect" operations.
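As a sketch of what that looks like in practice (the node name is a placeholder, and the exact URL is worth confirming against your Hudson version by checking that page's HTML as suggested):

# take the node offline before its scheduled shutdown; POSTing to the same URL again toggles it back online
curl -X POST "http://hudson/computer/node_name/toggleOffline?offlineMessage=scheduled+shutdown"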