I configured my build triggers to start as soon as I push to the repo on GitHub. This has always worked well.
Until yesterday, that is: for a long stretch it didn't work at all. I kept pushing for a few hours, and eventually it worked.
I thought it had something to do with this incident: https://www.githubstatus.com/incidents/zms09plx5zps
But today it still doesn't work.
On GitHub, the commit shows the message: "Some checks haven't completed yet".
The affected build trigger says "Queued - Waiting to run this check".
I've pushed more than once, but the build triggers in Cloud Build still do not start.
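For what it's worth, one way to tell a broken trigger apart from a broken GitHub webhook is to run the trigger by hand with gcloud (a sketch; my-trigger and the branch name below are placeholders):

# If a manual run starts a build, the trigger itself is fine and the
# problem is on the GitHub -> Cloud Build webhook side.
# "my-trigger" and "master" are placeholders for your own setup.
gcloud beta builds triggers run my-trigger --branch=master
gcloud builds list --limit=5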
Related
I'm trying to publish a website to IPFS using the commands below:
1. ipfs add -r my-website
2. ipfs name publish <hash of the website>
While publishing, I get the error "Error: context deadline exceeded". What does that mean, and how do I resolve it?
It means that it took too long (more than 1 minute) to publish the data.
The built-in timeout will be removed in the soon-to-be-released patch version v0.5.1.
More information on why this is happening is at https://github.com/ipfs/go-ipfs/issues/7244. If you don't want to wait for the patch release or to rebuild from the latest master, you may have to retry a few times (in my tests a few days ago, publish times averaged ~30s).
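If you're stuck on v0.5.0 in the meantime, a simple retry loop is enough (a sketch; the CID is a placeholder for the hash that ipfs add -r printed):

# Retry the publish until it beats the built-in 1-minute deadline.
# The CID below is a placeholder for your own website hash.
CID=QmYourWebsiteHashHere
for attempt in 1 2 3 4 5; do
    ipfs name publish "/ipfs/$CID" && break
    echo "attempt $attempt hit the deadline, retrying..." >&2
done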
Note: v0.5.0 was released recently (less than a week ago as of this post) and contained a number of large upgrades to IPNS publishing performance. While some of the improvements are immediately noticeable, the lion's share will only kick in once more of the network upgrades.
Update: go-ipfs v0.5.1 has been released
I have two web apps running on OpenShift Pro, they have been running nicely for a couple of weeks but today I had to make a very small change and push the change to OpenShift. The push failed...
Upon investigation I have discovered that both apps have the same problem (which is strange).
The problem:
On the Builds | AppName page there is a button labelled Start Build. Clicking this button just produces an error alert:
An error occurred while starting the build. Reason: Error resolving
ImageStreamTag jboss-webserver30-tomcat8-openshift:1.2 in namespace
openshift: unable to find latest tagged image
If I click on the latest build I go to the Builds | AppName | Build # page where there is a button labelled Rebuild. Clicking this button rebuilds successfully.
The real problem is that this means GitHub pushes fail to start a build, so development and changes are no longer possible...
Any ideas as to why Start Build no longer works?
I think it may be a problem on OpenShift's end, as I have changed nothing recently...
We're looking into what happened, but in the meantime you can update your build configuration to use tag 1.3 or latest instead of 1.2.
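If you prefer the CLI to the console button, the same change can be made with oc patch (a sketch; my-app is a placeholder BuildConfig name, and the path assumes a source-strategy build):

# Point the build's ImageStreamTag at 1.3 instead of 1.2.
# "my-app" is a placeholder for your BuildConfig name.
oc patch bc/my-app --type=json \
  -p '[{"op": "replace", "path": "/spec/strategy/sourceStrategy/from/name", "value": "jboss-webserver30-tomcat8-openshift:1.3"}]'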
Out of the blue recently, I started receiving notifications that my Jekyll builds were failing on GitHub Pages:
Page build failed. For more information, see https://help.github.com/articles/troubleshooting-github-pages-builds/.
Beyond that, no information was given, and the site built fine on my local machine. I tried everything I could think of: I deleted the last few files that had been added (no improvement), and I reset the master branch to exactly the state of my last successful build. I figured for sure that last tactic would work, but I kept getting build failures.
I eventually figured out the answer, which I'm going to write in a moment.
It turned out the problem was that GitHub had upgraded their version of Jekyll. I arrived at the solution in two steps:
Upgrade the github-pages gem on my own computer:
$ bundle update github-pages
Discover an interesting new error message:
Liquid Exception: undefined method `gsub' for 1000:Fixnum in /_layouts/post.html
After some fiddling around (and using Jekyll's --verbose option to find where the build was choking), I discovered that the gsub error was caused by a post of mine titled "1,000". (It was about a sleepless night in which I tried to count my way to sleep and gave up after 1,000.) Apparently some updated parser was now treating the title as a number. To fix it I changed
title: 1,000
to
title: "1,000"
And voilà, GitHub Pages was satisfied.
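For anyone hitting a similarly opaque Pages failure, the two steps above boil down to reproducing GitHub's build locally:

# Match the gem versions GitHub Pages currently runs, then build with
# verbose output to see exactly which file trips the parser.
bundle update github-pages
bundle exec jekyll build --verbose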
I am just getting started with Packer, and have had several instances where my build is failing and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell. Instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious whether there is a way to just pause after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with Packer. Thankfully, packer build now has an -on-error flag that gives you a choice of behaviours.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (written to the current directory) and figure out what is going on.
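In practice that looks something like this (the key file name and SSH user vary by builder and source image; the ones below are just examples):

# -debug pauses before each step and writes the build's temporary SSH
# private key into the current directory.
packer build -debug template.json
# From a second terminal, once the machine is up (its IP appears in
# the -debug output); key name and user here are examples only:
ssh -i ec2_amazon-ebs.pem ubuntu@<instance-ip>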
Yeah, the way I handle this is to put a long sleep in an inline shell provisioner after the failing step; then I can SSH onto the box and see what's up. The debug flag is certainly useful, but if you're running the packer build remotely (I do it on Jenkins) you can't really sit there and hit the button.
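A minimal version of that trick (the provisioner fragment is hypothetical, and the two-hour sleep is arbitrary):

# Appended after the suspect step in the template's provisioner list:
#   { "type": "shell", "inline": ["sleep 7200"] }
# While the build sits in the sleep, SSH in and look around (key and
# user are placeholders):
ssh -i your_build_key.pem user@<instance-ip>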
I do try to run tests on everything I'm packing outside of the build: using the Chef provisioner, I have kitchen tests over all of it before it gets packed. It's a royal pain to debug anything besides Packer itself during a packer run.
While looking up info for this myself, I ran across numerous bug reports/feature requests for Packer.
Apparently, someone added new features to the virtualbox and vmware builders a year ago (https://github.com/mitchellh/packer/issues/409), but the work has never been merged into main.
In another bug (https://github.com/mitchellh/packer/issues/1687), they were looking at adding additional features to --debug, but that seemed to stall out.
If a Packer build is failing, first work out where the process is getting stuck, checking in this order:
Are the boot commands the appropriate ones?
Is the preseed config OK?
If 1. and 2. are OK, the box has booted, and the next thing to check is the login: SSH keys, ports, ...
Finally, look for any issues within the provisioning scripts.
I've set up deployment in Hudson: SVN > Build > copy to production. I need to set up a scheduled build, running every hour or so, to test for build errors. What I don't want is for the scheduled builds to deploy to production.
Is it possible to detect, in NAnt, whether the current build is a scheduled build or a manually started one? Or should I create a separate project for the scheduled build?
The cleanest option is to create a separate job for your scheduled build; you can then keep other artifacts like test results separated (since I assume your scheduled job will be running a different set of tests).
If you're just running the scheduled job to look for build errors, this will also keep the checked-out code that you're building separate from the triggered builds, which will minimize the risk of the production builds getting polluted with artifacts created by the test build.
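If you do want a single script anyway, one common pattern is to gate the production copy on a property that only the manually started job sets (a sketch; the target names and property are hypothetical, and the deploy target would be guarded in the build file with an if condition on that property):

# Scheduled Hudson job: compile and test only (targets are examples).
nant build test
# Manually started job: same script, plus the gated deploy target.
nant -D:deploy.to.production=true build test deploy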