Container keeps exiting on Windows - .NET Framework 4.8

We are using mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019 as our base image and performing a docker build. The docker build exits with an error. We manually tried to log in to the base image for validation, and the container exited within a minute. Has anyone faced a similar issue? Please note that the base image was updated two days ago.
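For reference, a minimal way to reproduce the manual check described above is to run the base image interactively and inspect why it exits; the commands below are a sketch that assumes Windows containers are enabled on the Docker host:
docker pull mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
# start an interactive shell in the base image; a healthy image should keep the session open
docker run -it --name net48-check mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019 cmd
# if the session exits on its own, check the exit code and logs of the stopped container
docker inspect --format "{{.State.ExitCode}}" net48-check
docker logs net48-check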

Related

GitLab CI shell runner reports profile loading problem

I'm trying to set up the GitLab CI shell runner. I've used the Docker runner before successfully, but now I'd like to use another Docker container within my testing routine and have therefore switched to the shell runner.
After registering, I'm running into an error:
ERROR: Job failed (system failure): prepare environment: exit status 1. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
So, I went through the linked material, but that didn't cure the problem. I verified that the gitlab-runner user exists and has access to Docker (needed to run the Docker test container), and that it is part of the docker group. I can also log in with --login and fire up /bin/bash without problems.
Still, all I get from the runner side is the enigmatic message above. What other checks do I need to do to track down this issue?
The careful reader will find the answer:
"A common failure is when you have a .bash_logout that tries to clear
the console."
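For example, on a Debian/Ubuntu runner host the check looks roughly like the sketch below; the home directory path and file contents are assumptions based on a default setup:
# inspect the gitlab-runner user's logout script for console-clearing commands
sudo cat /home/gitlab-runner/.bash_logout
# a typical default contains a line such as:
#   [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
# comment out that line or remove the file, then retry the job
sudo rm /home/gitlab-runner/.bash_logout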

Hyperledger: Error trying to ping. Unexpected end of JSON input

I'm trying to deploy my own .bna file to a business network using this tutorial.
The only difference is that I am doing this with three organizations.
Everything works fine until step eight, because when I run this command:
composer network ping -c alice@trade-network
I get this error:
Error trying to ping. Unexpected end of JSON input.
Does anybody know how I can solve this?
Thank you
I had the same issue, and I knew it was a matter of local configuration since it stopped working suddenly after running several commands. So, I fixed it by stopping Fabric and tearing it down.
Then I started Fabric again, created my peer admin credential and deployed everything again.
Review the commands here (section: Controlling your dev environment):
https://hyperledger.github.io/composer/v0.19/installing/development-tools.html
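For reference, a rough sketch of that teardown-and-restart cycle using the fabric-dev-servers scripts from the Composer development-tools guide (the paths and script names assume a default ~/fabric-dev-servers install):
cd ~/fabric-dev-servers
./stopFabric.sh            # stop the running Fabric containers
./teardownFabric.sh        # remove the containers and local state
./startFabric.sh           # start a fresh Fabric
./createPeerAdminCard.sh   # recreate the PeerAdmin card used for deployment
# after re-installing and starting the business network, ping again
composer network ping -c alice@trade-network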

OpenShift Pro - Start Build Not Working

I have two web apps running on OpenShift Pro; they have been running nicely for a couple of weeks, but today I had to make a very small change and push it to OpenShift. The push failed...
Upon investigation I have discovered that both apps have the same problem (which is strange).
The problem:
On the Builds | AppName page there is a button labelled Start Build. Clicking this button just produces an error message alert:
An error occurred while starting the build. Reason: Error resolving ImageStreamTag jboss-webserver30-tomcat8-openshift:1.2 in namespace openshift: unable to find latest tagged image
If I click on the latest build I go to the Builds | AppName | Build # page where there is a button labelled Rebuild. Clicking this button rebuilds successfully.
The real problem is that this means GitHub pushes fail to start a build, so development and changes are no longer possible...
Any ideas as to why Start Build no longer works?
I think it may be a problem at OpenShift as I have changed nothing recently...
We are looking into what happened, but in the meantime you can update your build configuration to use tag 1.3 or latest instead of 1.2.
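As an illustration (not an official fix), the tag change can be made with the oc CLI by updating the BuildConfig's ImageStreamTag reference; the BuildConfig name below is a placeholder:
# see which ImageStreamTag the build currently points at
oc get bc <appname> -o yaml | grep -A 4 sourceStrategy
# point the build at the 1.3 tag (or latest) instead of 1.2
oc patch bc <appname> -p '{"spec":{"strategy":{"sourceStrategy":{"from":{"kind":"ImageStreamTag","name":"jboss-webserver30-tomcat8-openshift:1.3","namespace":"openshift"}}}}}'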

Elastic Beanstalk stops at EbExtensionPostBuild

I am having a problem deploying an EB instance with a custom .ebextensions file. This is the relevant part in that file:
container_commands:
  01_migrate:
    command: 'python db_migrate.py'
  02_npm_build:
    command: 'npm install && npm run prod'
As you can see, these commands are for migrating my PostgreSQL database (via a Flask backend) and building my React .jsx files.
If I leave these commands out, the deployment completes perfectly well. However, once I put them in and look at eb-activity.log, it stalls at this part forever (as far as I can tell):
[2017-04-10T02:39:24.106Z] INFO [3023] - [Application deployment app-613e-170409_223418#1/StartupStage0/EbExtensionPostBuild] : Starting activity...
I also get this message on the Health overview in the console (this is after 1 day):
Performing application deployment (running for 1 day).
I have also tried deploying without those container_commands and then adding them back after the successful initial deployment. Then I get the same error message as before in eb-activity.log, and I also get this message on the Health overview:
Incorrect application version "app-2a3d-170409_214923" (deployment 1). Expected version "app-2a3d-170409_214923" (deployment 1).
Which is very strange, because the two versions referenced are the same. I don't know what this means!
I found a solution.
Remove all your container_commands from .ebextensions/.
SSH into the instance and kill the stuck process with:
sudo killall python
Then deploy the new version without container_commands.
Finally, start debugging your container_commands one by one over SSH.
Have fun.
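A sketch of that "debug one by one over SSH" step, assuming the EB CLI is configured, the application lives in the usual /var/app/current directory, and my-env is a placeholder environment name (paths can differ by platform version):
# connect to the instance
eb ssh my-env
# run each container command by hand from the deployed application directory
cd /var/app/current
python db_migrate.py
npm install && npm run prod
# in another session, watch the deployment log to see where things hang
tail -f /var/log/eb-activity.log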

Troubleshooting failed packer build

I am just getting started with Packer, and have had several instances where my build is failing and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell. Instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious whether there is a way to just pause after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with Packer. Thankfully, packer build now has an -on-error flag that gives you options.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (in the current dir) and figure out what is going on.
Yeah, the way I handle this is to put a long sleep in an inline shell provisioner after the failing step; then I can SSH onto the box and see what's up. Certainly the debug flag is useful, but if you're running the packer build remotely (I do it on Jenkins) you can't really sit there and hit the button.
I do try to run tests on all the stuff I'm packing outside of the build - using the Chef provisioner I've got Kitchen tests over everything before it gets packed. It's a royal pain to try to debug anything besides Packer during a Packer run.
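A sketch of that long-sleep trick as an inline shell provisioner dropped into the template right after the failing step (the sleep length is arbitrary and the rest of the template is omitted):
{
  "type": "shell",
  "inline": [
    "echo 'pausing build for debugging - SSH in now'",
    "sleep 7200"
  ]
}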
While looking up info on this myself, I ran across numerous bug reports and feature requests for Packer.
Apparently, someone added new features to the VirtualBox and VMware builders a year ago (https://github.com/mitchellh/packer/issues/409), but it hasn't been merged into main.
In another issue (https://github.com/mitchellh/packer/issues/1687), they were looking at adding additional features to --debug, but that seemed to stall out.
If a Packer build is failing, check where the build process got stuck, in this sequence:
1. Are the boot commands the appropriate ones?
2. Is the preseed config OK?
3. If 1 and 2 are OK, the box has booted; the next thing to check is the login: SSH keys, ports, ...
4. Finally, check for any issues within the provisioning scripts.