GitLab Runner waiting before launching the next pipeline - gitlab-ci-runner

My GitLab Runner sometimes waits for a few minutes (up to 10 minutes) before starting a new pipeline, even when all runners are free.
Is this normal? Is there an option I missed?
Thanks :)

During our testing, we created a lot of runners that ended up unused.
After removing these from our GitLab instance and then cleaning up gitlab-runner by deleting the inactive runners one by one, our pipelines now start without delay :)
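For anyone hitting the same thing, here is a rough sketch of the cleanup using the gitlab-runner CLI (flags may differ slightly between versions, and the runner name below is just an example):
gitlab-runner list                          # show runners registered in the local config
gitlab-runner verify --delete               # drop runners the GitLab server no longer recognizes
gitlab-runner unregister --name old-runner  # remove a specific leftover runner by name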

Related

Google Cloud Build triggers don't fire when I push my repo

I configured my build triggers to start as soon as I push to the repo on GitHub. This has always worked well.
But not since yesterday. It didn't work for most of yesterday; after pushing again over a few hours, it eventually worked.
I thought it had something to do with this incident: https://www.githubstatus.com/incidents/zms09plx5zps.
But today it still doesn't work.
On GitHub, the commit shows the message "Some checks haven't completed yet".
The affected build trigger says "Queued - Waiting to run this check".
I've pushed more than once, but the build triggers in Cloud Build do not start.

context deadline exceeded - IPFS

I'm trying to publish a website to IPFS using the commands below:
1. ipfs add -r my-website
2. ipfs name publish <hash of the website>
While publishing, I get the error Error: context deadline exceeded. What does that mean, and how do I resolve it?
It means that it took too long (more than 1 minute) to publish the data.
The built-in timeout will be removed in the soon-to-be-released patch release v0.5.1.
More information on why this is happening is at https://github.com/ipfs/go-ipfs/issues/7244. If you don't want to wait for the patch release or rebuild from the latest master, you may have to retry a few times (in my tests a few days ago, publish times averaged ~30s).
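If you need a stop-gap until then, a crude retry loop works (just a sketch; the hash below is a placeholder for your website's hash):
until ipfs name publish QmYourWebsiteHashHere; do
    echo "publish timed out, retrying..."
    sleep 5
done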
Note: v0.5.0 was recently released (less than a week ago as of this post) and contained a number of large upgrades to IPNS publishing performance. While some of the performance improvements are immediately noticeable, the lion's share will only materialize as more of the network upgrades.
Update: go-ipfs v0.5.1 has been released

Troubleshooting failed packer build

I am just getting started with Packer, and have had several instances where my build is failing and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell. Instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious if there is a way to just pause after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with Packer. Thankfully, packer build now has an -on-error flag that gives you a choice.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
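For example (the template filename here is just a placeholder):
packer build -on-error=ask my-template.json
When a step fails, Packer then prompts before tearing anything down, so you can SSH in and investigate first.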
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (written to the current directory) and figure out what is going on.
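Something along these lines, though the exact key filename and user depend on the builder - check what --debug actually dropped into the current directory:
ssh -i ./debug_key.pem ubuntu@<instance-ip>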
Yeah, the way I handle this is to put a long sleep in an inline shell provisioner after the failing step; then I can SSH onto the box and see what's up. Certainly the debug flag is useful, but if you're running the Packer build remotely (I do it on Jenkins) you can't really sit there and hit the button.
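Something like this as the last entry in the template's provisioners array (just a sketch; the sleep length is arbitrary):
{
  "type": "shell",
  "inline": ["echo 'pausing for debugging'; sleep 3600"]
}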
I do try to run tests on all the stuff I'm packing outside of the build - using the Chef provisioner, I've got Test Kitchen tests all over everything before it gets packed. It's a royal pain to try to debug anything besides Packer during a Packer run.
While looking up info for this myself, I ran across numerous bug reports/feature requests for Packer.
Apparently, someone added new features to the virtualbox and vmware builders a year ago (https://github.com/mitchellh/packer/issues/409), but it hasn't been merged in.
In another bug (https://github.com/mitchellh/packer/issues/1687), they were looking at adding additional features to --debug, but that seemed to stall out.
If a Packer build is failing, first check where the build process has gotten stuck, and check in this sequence:
Are the boot commands the appropriate ones?
Is the preseed config OK?
If 1 and 2 are OK, the box has booted, and the next thing to check is the login: SSH keys, ports, etc. (see the sketch after this list)
Finally, check for any issues within the provisioning scripts
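For the login step, the usual suspects are the SSH settings in the builder section of the template; a rough sketch of the fields to double-check (values are only examples, and field names may vary by Packer version):
"ssh_username": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s"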

How to run 2 scripts in a job from Hudson master?

I have implemented a Hudson master/slave job configuration and it is working fine. Now I need to run 2 scripts in a job from the Hudson master. Can you help me: is there a possibility to run 2 scripts? The scripts are also dependent - the second should run only after the first script has executed.
Thanks in advance - sri
All you need to do is include the script as a build step in your current build, for example:
Pull down source
Build it
Run the script, as part of the build step.
This option is only available for free-style jobs.
That build step is where the scripts would go, or the path to the script to execute.
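To handle the dependency between the two scripts, one simple approach (paths here are placeholders) is a single "Execute shell" build step that chains them, so the second only runs if the first succeeds:
./scripts/first.sh && ./scripts/second.sh
Alternatively, use two separate build steps - Hudson runs build steps in order and stops the build if one fails.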
Good luck.

Detect if a Hudson build is manually or periodically (scheduled) invoked

I've set up deployment in Hudson: SVN > Build > copy to production. I need to set up a scheduled build, running every hour or so, to test for build errors. What I don't want is the scheduled builds deploying to production.
Is it possible to detect, in NAnt, whether the current build is a scheduled build or a manually started build? Or should I create a separate project for the scheduled build?
The cleanest option is to create a separate job for your scheduled build; you can then keep other artifacts like test results separated (since I assume your scheduled job will be running a different set of tests).
If you're just running the scheduled job to look for build errors, this will also keep the checked-out code that you're building separate from the triggered builds, which will minimize the risk of the production builds getting polluted with artifacts created by the test build.
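If you do end up keeping a single NAnt script across both jobs, a hedged alternative is to pass a property from each job's build step so the script knows whether to deploy (the property name here is made up):
nant -D:deploy.to.production=true     (in the manual/deploy job)
nant -D:deploy.to.production=false    (in the scheduled test job)
and then guard the copy-to-production target on that property.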