context deadline exceeded - IPFS

I'm trying to publish a website to IPFS using the commands below:
1. ipfs add -r my-website
2. ipfs name publish <hash of the website>
While publishing, I get: Error: context deadline exceeded. What does that mean, and how do I resolve it?

It means that it took too long (more than 1 minute) to publish the data.
The built-in timeout will be removed in the upcoming patch release, v0.5.1.
More information on why this is happening is at https://github.com/ipfs/go-ipfs/issues/7244. If you don't want to wait for the patch release or rebuild from the latest master, you may just have to retry a few times (in my tests a few days ago, publish times were ~30s on average).
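If you go the retry route, here is a minimal shell sketch that just keeps retrying the publish a few times (the hash is a placeholder for the one returned by ipfs add -r):

#!/bin/sh
# Retry `ipfs name publish` a few times, since a single attempt can hit
# the built-in one-minute timeout on go-ipfs v0.5.0.
HASH="<hash of the website>"   # placeholder: use the root hash from `ipfs add -r`
for i in 1 2 3 4 5; do
    if ipfs name publish "$HASH"; then
        echo "Published on attempt $i"
        exit 0
    fi
    echo "Attempt $i failed, retrying..."
done
echo "Giving up after 5 attempts" >&2
exit 1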
Note: v0.5.0 was recently released (less than a week ago as of this post) and contained a number of large upgrades to IPNS publishing performance. While some of the performance improvements are immediately noticeable, the lion's share will only arrive once more of the network has upgraded.
Update: go-ipfs v0.5.1 has been released

Related

Google Cloud Build triggers don't fire when I push my repo

I configured my build triggers to start as soon as I push to the repo on GitHub. This has always worked well.
Since yesterday it doesn't. It didn't work for a long time yesterday; I kept pushing for a few hours, and then it worked.
I thought it had something to do with this GitHub incident: https://www.githubstatus.com/incidents/zms09plx5zps.
But today it still doesn't work.
On GitHub the commit shows the message "Some checks haven't completed yet".
The affected build trigger says "Queued - Waiting to run this check".
I've pushed more than once, but the build triggers in Cloud Build do not start.
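As a stopgap while the triggers are stuck, builds can usually be kicked off by hand with gcloud; a rough sketch (the trigger name and branch are placeholders, and the triggers commands were still under gcloud beta when this was written):

# List the configured triggers and their IDs
gcloud beta builds triggers list
# Run a specific trigger manually against a branch
gcloud beta builds triggers run my-trigger --branch=master
# Confirm the build actually started
gcloud builds list --limit=5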

Gitlab Runner waiting before launching next pipeline

My GitLab Runner sometimes waits a few minutes (up to 10 minutes) to start a new pipeline, even when all runners are free.
Is this normal? Is there an option I missed?
Thanks :)
During our testing we had created a lot of runners that were never used.
After removing these from our GitLab and then cleaning up gitlab-runner by deleting the inactive runners one by one, our pipelines now start without delay :)
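If you prefer to clean up from the runner host rather than the GitLab UI, a sketch along these lines should do the same thing (the runner name is a placeholder):

# Show the runners registered in this host's config.toml
gitlab-runner list
# Drop runners from config.toml that have already been removed on the GitLab side
gitlab-runner verify --delete
# Or unregister one stale runner explicitly by name
gitlab-runner unregister --name my-old-runner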

JFrog Mission Control - Version Control with GitLab

Using Mission Control, we want to use the Version Control integration with GitLab so that we have version history of all the scripts.
We are running the Docker container version.
During this setup process we encountered some problems where the commits back to GitLab were failing, even though the UI said they were successful. (Long story, not really relevant, but it boiled down to a project restriction that checks whether the author is a GitLab user.)
The concern here is: if these commits fail AND the UI assumes they worked, it is possible to modify and run a script from MC without that change ever being reflected in version control.
How do I force MC to disallow execution of a script that has not been committed to the GitLab source control?
Mission Control v2.1.0

Chef MySQL cookbook deprecation warnings

When using the current mysql 8.4.0 cookbook, I get a full screen of deprecation warnings when deploying.
Deprecated features used!
rename install_method to new_resource.install_method at 1 location:
- /root/chef-solo/local-mode-cache/cache/cookbooks/mysql/libraries/mysql_service.rb:34:in `installation'
See https://docs.chef.io/deprecations_namespace_collisions.html for further details.
The github project does not show any outstanding issues related to the deprecation warnings.
Does anyone know how to get rid of these messages so I can have a clean deploy?
You'll have to wait until the cookbook fixes things, but it's only a warning, so you don't need to worry. It's a new warning in Chef 13.2, so we're still working on getting things cleaned up. The feature won't actually be removed until Chef 14 in April 2018, so there's no rush :)
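If the wall of warnings is a problem in the meantime, Chef's silence_deprecation_warnings setting in client.rb/solo.rb can hide specific deprecations. A sketch only: the solo.rb path and the deprecation name here are assumptions, so check the docs page linked in the warning for the exact identifier to silence.

# Assumes chef-solo with a solo.rb at this path; adjust for your setup.
echo 'silence_deprecation_warnings ["namespace_collisions"]' >> /root/chef-solo/solo.rb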

Troubleshooting failed packer build

I am just getting started with Packer, and have had several instances where my build is failing and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell. Instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious if there is a way to just pause after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with packer. Thankfully, packer build now has an option -on-error that gives you options.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (in the current dir) and figure out what is going on.
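For reference, a typical -debug session looks roughly like this (the key file name depends on the builder and build name, and the SSH user is an assumption):

packer build -debug template.json
# Packer pauses before each step; once the instance is up, SSH in with the
# key it drops in the current directory, e.g. for an amazon-ebs build:
ssh -i ec2_amazon-ebs.pem ubuntu@<instance-ip>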
Yeah, the way I handle this is to put a long sleep in a script inline provisioner after the failing step; then I can SSH onto the box and see what's up (a sketch of such a script is below). Certainly the debug flag is useful, but if you're running the packer build remotely (I do it on Jenkins) you can't really sit there and hit the button.
I do try to run tests on all the stuff I'm packing outside of the build - using the Chef provisioner I've got kitchen tests all over everything before it gets packed. It's a royal pain to try to debug anything besides Packer during a packer run.
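A minimal sketch of that kind of debug pause, assuming it is wired in as an extra shell provisioner right after the step you suspect:

#!/bin/sh
# Debug pause: keeps the box alive long enough to SSH in and poke around.
# Remove this provisioner once the build is fixed.
echo "Build paused for debugging; SSH in now."
sleep 3600   # one hour, then Packer carries on (or tears the box down)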
While looking up info for this myself, I ran across numerous bug reports/feature requests for Packer.
Apparently, someone added new features to the virtualbox and vmware builders a year ago (https://github.com/mitchellh/packer/issues/409), but it hasn't gotten merged into main.
In another bug (https://github.com/mitchellh/packer/issues/1687), they were looking at adding additional features to --debug, but that seemed to stall out.
If a Packer build is failing, first check where the build process got stuck, working through the checks in this sequence (a logging sketch follows the list):
1. Are the boot commands the appropriate ones?
2. Is the preseed config OK?
3. If 1 and 2 are OK, the box has booted, and the next thing to check is the login: SSH keys, ports, ...
4. Finally, check for any issues within the provisioning scripts.
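For all of those checks, verbose logging is the quickest way to see how far the build got; a sketch (template.json is a placeholder):

# PACKER_LOG enables verbose output; PACKER_LOG_PATH sends it to a file
PACKER_LOG=1 PACKER_LOG_PATH=packer.log packer build template.json
# Then search the log for the boot command, the preseed fetch and the SSH handshake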