Dependabot pip package-ecosystem with two separate schedules

I have Dependabot running daily for package-ecosystem: "pip".
The problem I face is that the AWS boto library has a lot of updates, and this is inflating my GitHub costs substantially.
I would like to be able to run Dependabot on two separate schedules:
Daily for Python packages except boto and botocore
Weekly for boto and botocore
Dependabot complains of a duplicate if I have two package-ecosystem entries set to pip, so I am struggling to see how I can achieve this.
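For reference, what I am effectively trying to express looks something like the sketch below (a rough sketch only; the directory and the boto3/botocore dependency names are assumptions for illustration). The second pip block is exactly what Dependabot rejects as a duplicate:
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"                  # assumption: requirements live at the repo root
    schedule:
      interval: "daily"
    ignore:
      - dependency-name: "boto3"    # skip boto here...
      - dependency-name: "botocore"
  - package-ecosystem: "pip"        # ...and handle it weekly here (rejected as a duplicate)
    directory: "/"
    schedule:
      interval: "weekly"
    allow:
      - dependency-name: "boto3"
      - dependency-name: "botocore"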

Related

Github azure/arm-deploy actions fails when a new bicep version is available

The GitHub action fails when using azure/arm-deploy to deploy a Bicep template on a GitHub-hosted agent, because Bicep writes output to stderr indicating that a new version is available, and the action fails as soon as anything is sent to stderr.
I saw this behavior a couple of days back when Bicep was upgraded from v0.13.1 to v0.14.6. Today I ran into the same thing with the upgrade to v0.14.46. The only thing I could do at the time was wait until the hosted agents had the latest Bicep version (luckily it took less than a day).
While experimenting further, I noticed that some pipeline runs succeeded. This was probably because agents were being updated and I was simply lucky enough to get one with the latest Bicep version.
Is there a way to work around this? Can I deploy a Bicep template even if the GitHub-hosted agent is not on the latest Bicep version?
The following has been tried:
I added a step in the pipeline to install a specific Bicep version. This didn't seem to work; the Bicep version already available on the hosted agent was used (multiple runs resulted in a seemingly random Bicep version, depending on what was available on the agent).
Setting failOnStdErr: false (a property on azure/arm-deploy) had no effect, and it is not preferred anyway because I want to be informed whether a Bicep deployment failed.
There are a few options; run these before the arm-deploy step.
Upgrade Bicep in your workflow. Add:
steps:
  - run: az bicep upgrade
Turn off update checking:
steps:
  - run: az config set bicep.version_check=False
Don't let az pick up Bicep from the PATH:
steps:
  - run: az config set bicep.use_binary_from_path=false
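Putting it together, the job might look roughly like this (a sketch only; it assumes an earlier azure/login step has authenticated the az CLI, and the arm-deploy inputs shown are placeholders, abbreviated from whatever your deployment actually needs):
steps:
  - uses: actions/checkout@v3
  # Assumes azure/login has already run in a previous step.
  - name: Pin Bicep behaviour before deploying
    run: |
      az bicep upgrade
      az config set bicep.version_check=False
      az config set bicep.use_binary_from_path=false
  - uses: azure/arm-deploy@v1
    with:
      resourceGroupName: my-resource-group   # placeholder
      template: ./main.bicep                 # placeholder; add scope/subscriptionId/parameters as needed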

How expensive are github actions in terms of network traffic?

One of the steps of my GitHub test action for pull requests is installing third-party software via
- name: Install imagemagick and graphviz
  run: |
    sudo apt install graphviz
    sudo apt install imagemagick
The package size seems to be about 15 MB, see https://imagemagick.org/script/download.php. That's not too bad. But it made me wonder: if I installed a package of, say, 500 MB, would the GitHub servers have to download the 500 MB every time the action is triggered? That would be bad.
Yes, it will download them each time unless you cache them. You can find more details in Caching APT packages in GitHub Actions workflow. You can also create your own Docker image with pre-installed packages and use that image in your pipeline; you will find an example of that in the above-mentioned topic as well.
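As a rough illustration of the caching idea (a minimal sketch, not the exact recipe from the linked topic; the cache path and key are assumptions, and for very large packages a prebuilt Docker image is often the simpler route):
- name: Cache downloaded apt packages
  uses: actions/cache@v3
  with:
    path: /var/cache/apt/archives          # assumed location of the downloaded .deb files
    key: apt-${{ runner.os }}-graphviz-imagemagick
- name: Install imagemagick and graphviz
  run: |
    sudo apt-get update
    sudo apt-get install -y graphviz imagemagick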

How can I upgrade to the latest Operator Lifecycle Manager on OpenShift 3.11?

I've found version 0.6.0 of the Operator Framework's Operator Lifecycle Manager (OLM) to be lacking and see that 0.12.0 is available with lots of new updates. How can I upgrade to that version?
Also, what do I need to consider regarding this upgrade? Will I lose any integrations, etc.?
One thing to keep in mind is that in OpenShift 3, OLM runs from a namespace called operator-lifecycle-manager; in later versions that becomes simply olm. Some things to consider:
Do you have operators running right now, and if you make this change, will your catalog names change? Any change will need to be reflected in your Subscriptions.
Do you want to change any of the default install configuration?
Look into values.yaml to configure your OLM.
Look into the YAML files in step 2 and adjust them if needed.
1) First, turn off OLM 0.6.0 or whatever version you might have.
You can delete that namespace, or, as I did, scale the deployments within it down to 0 replicas so their ReplicaSets run no pods, which effectively turns OLM 0.6.0 off (a sketch of that follows below).
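A rough sketch of that scale-down (the deployment names olm-operator and catalog-operator are assumptions; check what is actually running in your cluster first):
oc get deployments -n operator-lifecycle-manager
oc scale deployment olm-operator --replicas=0 -n operator-lifecycle-manager
oc scale deployment catalog-operator --replicas=0 -n operator-lifecycle-manager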
2) Install OLM 0.12.0
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/crds.yaml
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/olm.yaml
alt 2) If you'd rather just install the latest from the repo's master branch:
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml
Now you have OLM 0.12.0 installed. You should be able to see in the logs that it picks up where 0.6.0 left off. You'll need to start learning about OperatorGroups, though, as that concept is new and will start affecting how you operate your operators pretty quickly. The Cluster Console's ability to show your catalogs does seem to be lost, but you can still view that information from the command line with oc get packagemanifests.
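For reference, a minimal OperatorGroup looks roughly like this (a sketch; the names and target namespace are placeholders, and on some OLM releases the apiVersion may still be operators.coreos.com/v1alpha2):
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-operatorgroup        # placeholder
  namespace: my-operators       # placeholder
spec:
  targetNamespaces:
    - my-operators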

Deploying Code and Managing configuration with Terraform

Just to give context:
I am planning to use Terraform to bring up new, separate environments with EC2 machines, ELBs, etc., and then maintain their configuration as well.
Doing that with Terraform and the AWS provider sounds fairly simple.
Problem 1:
While launching those instances I want to install a few packages, etc., so that when Terraform launches the instances (servers), the apps are up and running.
Assuming the above is up and running:
Problem 2:
How do I deploy new code on the servers in this environment launched by Terraform?
Should I use, for example, Ansible playbooks/Chef recipes/Puppet manifests for that, or does Terraform give some other options/ways?
Brief answers:
Problem 1: While launching those instances I want to install a few packages, etc., so that when Terraform launches the instances (servers), the apps are up and running.
A couple of options:
Create an AMI of your instance with the installed packages and specify that in the resource.
Use a user data script to install the packages that you need when the instance starts (see the sketch after this list).
Use Ansible playbooks/Chef recipes/Puppet to install packages once the instance is running (e.g. by creating an OpsWorks stack with Terraform).
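A minimal sketch of the user data option (the AMI ID, instance type, and installed packages are placeholders; adjust the package manager to your distribution):
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t3.micro"                # placeholder size

  # Runs once on first boot and installs whatever the app needs.
  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd
    systemctl enable --now httpd
  EOF
}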
Problem 2: How do I deploy new code on the servers in this environment launched by Terraform? Should I use, for example, Ansible playbooks/Chef recipes/Puppet manifests for that, or does Terraform give some other options/ways?
This is not the intended use case for Terraform; use other tools such as Jenkins, or AWS services like CodePipeline or CodeDeploy. Ansible/Chef/Puppet can also help here (e.g. with OpsWorks).

Elastic Beanstalk stops at EbExtensionPostBuild

I am having a problem deploying an EB instance with a custom .ebextensions file. This is the relevant part in that file:
container_commands:
  01_migrate:
    command: 'python db_migrate.py'
  02_npm_build:
    command: 'npm install && npm run prod'
As you can see, these commands are for migrating my PostgreSQL database (via a Flask backend) and building my React .jsx files.
If I leave these commands out, the deployment completes perfectly well. However, once I put them in, eb-activity.log shows the deployment stalling at this step forever (as far as I can tell):
[2017-04-10T02:39:24.106Z] INFO [3023] - [Application deployment app-613e-170409_223418#1/StartupStage0/EbExtensionPostBuild] : Starting activity...
I also get this message on the Health overview in the console (this is after 1 day):
Performing application deployment (running for 1 day).
I have also tried deploying without those container_commands and then adding them back after a successful initial deployment. I then get the same message as before in eb-activity.log, plus this message on the Health overview:
Incorrect application version "app-2a3d-170409_214923" (deployment 1). Expected version "app-2a3d-170409_214923" (deployment 1).
This is very strange because the two versions referenced are identical. I don't know what it means!
I found a solution.
Remove all your container_commands from .ebextensions/.
SSH into the instance and kill the stuck process:
sudo killall python
Then deploy a new version without container_commands.
Finally, start debugging your container_commands one by one over SSH, as sketched below.
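A rough sketch of that debugging loop (the environment name and staging path are assumptions; on the older Python-on-Amazon-Linux platform the staged application usually lives under /opt/python/ondeck/app):
eb ssh my-env                      # placeholder environment name
cd /opt/python/ondeck/app          # assumed staging directory; check your platform's layout
python db_migrate.py               # run each container command by hand and watch the output
npm install && npm run prod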
Have fun.