IBM's Bluemix Toolchain process hangs on completion - containers

I've built my container multiple times successfully. Each time, the build sits at 99% for more than 20 minutes after the log says 'Finished: SUCCESS'. It never gets past this, and I can't kick off the deploy phase until the build registers completion. Is there a way to get past this hang?
I've got no notable errors in the console. The build is based on the registry.ng.bluemix.net/ibmnode:latest image, runs an apache2 server, and has some Node.js processes that run during the build phase. Lastly, it kicks off a bash script that runs apache2 in the foreground.

I just checked my toolchain and wasn't able to reproduce this problem. Please try again, it might have been a transient issue with the toolchains.
If the problem persists, it might have to do with how you have your build script set up. If you are spawning processes and leaving them running, that could be preventing the build from finishing.
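For example, a build script along these lines would let the build stage exit cleanly (just a hedged sketch; the script and file names are placeholders, not taken from the original setup):

    #!/bin/bash
    # Hypothetical toolchain build script - file names below are placeholders.

    # One-off Node.js build tasks that terminate on their own are fine here:
    npm install
    node generate-assets.js

    # If a helper process really has to be started during the build, detach it fully
    # so its open stdout/stderr don't keep the build stage hanging:
    nohup node warm-cache.js > /dev/null 2>&1 &
    disown

    # Don't start apache2 in the foreground here; leave that to the image's start
    # script / CMD (e.g. "apachectl -D FOREGROUND") so it only blocks at run time.
    exit 0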

Related

AWS Beanstalk keeps changing the instance underneath

I have a Node.js application running on AWS Elastic Beanstalk with a load balancer; I set the minimum instance count to 1 and the maximum to 5.
The application executes some time-consuming tasks that can take several hours.
Everything works OK in my local environment, but after deploying to Beanstalk I notice it runs fine at first; after 5-10 minutes, though, the instance seems to be replaced with a clone.
For example, my Node.js task keeps writing to the log file to track its progress. At the beginning I can see the Beanstalk log file contains the progress text and keeps growing, but after 5 minutes, when I look at the log file again, the progress the task logged is gone. I then downloaded the full log file and checked it: the file is very small and only contains basic server startup information. It looks like a newly started instance, because even if the instance had simply restarted, the previous log entries should still be in the file.
This gives me a strong indication that the log file I am looking at now is not from the same instance I was looking at before, and I suspect Beanstalk swapped the instance that was running my task for a brand-new but cloned instance. All the code I deployed is still there, but the task's progress is gone, so the task is never able to fully complete.
Has anyone seen this before, and how can I fix it?

Openshift OKD Excessive Logging

So I installed a single-host OpenShift OKD v3.11 cluster on a VM running CentOS 7.8.2003.
It seems to have installed OK, except that it continually streams verbose logs to /var/log/messages: around 5 messages per second, and all seem to be about throttling requests. Example of a typical log message:
    Jun 13 15:49:13 centos7 journal: I0613 14:49:13.011402 1 request.go:485] Throttling request took 196.341689ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-cert-signer/serviceaccounts/service-serving-cert-signer-sa
The only reference I have managed to find is a question here, but access to the discussion is only available to those with deep pockets:
https://access.redhat.com/solutions/3348921
I assume these logs are nothing to worry about, so my main question is: what is the "best"/cleanest/simplest/easiest way to ensure the OpenShift cluster doesn't keep filling up /var/log/messages but still logs any important messages there?
I would recommend looking at the root cause of this behavior. These messages indicate that there are a lot of requests coming to your API, typically because some application is performing calls in a tight loop. In your case, check your openshift-service-cert-signer to see whether it shows any warnings or an abnormal number of log messages.
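As a rough sketch of that check (the pod name is a placeholder; use whatever the first command actually reports):

    # List the pods in the namespace that shows up in the throttled requests:
    oc get pods -n openshift-service-cert-signer

    # Tail the signer's logs and look for warnings or a tight request loop
    # (replace the placeholder with the real pod name from the command above):
    oc logs -n openshift-service-cert-signer <service-serving-cert-signer-pod> --tail=100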
If you want to get rid of the throttling messages, you can increase the number of queries per second (QPS) allowed for the API server: see Recommended Practices for OKD Master Hosts (lower part).
"The only reference I have managed to find is a question here, but access to the discussion is only available to those with deep pockets. https://access.redhat.com/solutions/3348921"
I do not understand why you're saying that, as I can access that document with my free Red Hat account without any subscriptions. Have you tried with a free account as it says on the site?
Simon's answer was helpful but I've finally got to the bottom of this.
The problem was simply that the version of Docker I had installed was old. At the time of writing, the latest version of CentOS is 7.8.2003, and if you install that and then simply run "yum install docker", hoping to get something at least reasonably new and certainly compatible with the rest of the Linux installation, you'll probably be making a mistake.
The right thing to do is to follow the simple steps here:
https://docs.docker.com/engine/install/centos/
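Roughly, the steps on that page look like this (just a sketch; check the linked page for the current commands and package names before running anything):

    # Remove the old CentOS-packaged docker first:
    sudo yum remove -y docker docker-client docker-common docker-engine

    # Add Docker's own repository and install Docker CE from it:
    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo yum install -y docker-ce docker-ce-cli containerd.io

    # Start the daemon and enable it at boot:
    sudo systemctl enable --now docker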
The reason I found the problem was that the excessive logging from my OpenShift cluster wasn't the only issue; I started seeing strange behaviour from other containers. A process of trial and error narrowed the issue down to the default CentOS version of Docker. Once I followed the page above, all my problems vanished, including the original problem of /var/log/messages getting hammered by OpenShift containers.
The main reason I decided to answer my own question was that surely someone else is going to be as impatient/thick as me and simply install CentOS 7, then try "yum install docker", without knowing they're about to enter a world of pain.

Is there a way to stop OpenShift restarting my application when it exits?

I want to run an OpenShift application that takes a few hours to run. Once the application finishes (exit 0) I don't want it to run again via a restart. I just want to look at the logs to see what it did.
One way of achieving this is for the application to stop itself once it's done processing, using the OpenShift API (a rough sketch of that idea follows below). I'm hoping there is an easier way than this.
The application uses Node.js.
The use case is that I'm running backtests for a trading application. There is no UI; the application just processes historical trades stored in a large file.
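For what it's worth, a very rough sketch of that self-stopping idea, assuming the app runs under a deployment config named backtest (a placeholder) and that something with oc access runs this once the work is done:

    # Hypothetical: executed after the backtest finishes (exit 0), either by a wrapper
    # script with the oc client available or via the equivalent OpenShift API call.
    # Note: scaling to zero removes the pod, so persist the backtest results to a file
    # or volume first rather than relying on the pod log alone.
    oc scale dc/backtest --replicas=0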

Openshift: Cron job fails to run

As of yesterday, 3/12/2014, the cron job for my free Python 3.3 / MongoDB / cron app on OpenShift stopped running. Everything was working fine until late afternoon. I can still run the cron job manually without error.
The execution bit is set. I wouldn't think I have run out of processing time since I can run it manually. I'm not sure if it's related to this issue:
Openshift app push error 98
Both issues started at the same time. Any ideas?

Application Error: Application Launch was not detected for application App

When I was validating my Windows Store app, I got the following error:
Application Error: Application Launch was not detected for application
App. This could be because your application failed to launch
correctly. Please consider re-running the test and avoid interacting
with the application while tests are running.
What does this mean? The app will not validate.
I thought it was a bit weird, and when I googled it I couldn't find anything except people who had almost the same problem as me, though in their case the app was failing to sleep rather than failing to launch. This was just plain odd.
I tried to launch the app from Visual Studio 2012 just to prove to myself that it did start properly, and for some crazy reason it didn't work. I usually test the game on my Local Machine rather than the Simulator, but now, for some reason, the target was set to the Simulator, and lately I have been having problems getting the simulator to start.
I changed it back to the local machine and ran the tests again. This time it worked.
So, if you get this error, it might be time to check whether your simulator works and, if not, set the Local Machine as the default target to run with.