Ansible module: ec2_elb unable to find ELB (when ELB count>400?) - boto

I have EC2 instances that need to be added to an ELB. While trying this from Ansible, I get the following error, even though I am able to add the same instances using the AWS CLI. I found this open issue with the ec2_elb module in Ansible: https://github.com/ansible/ansible-modules-core/issues/2115
Is there any workaround for this, or another version of boto/Python where this works as expected? I have more than 400 ELBs in the profile that I am using.
msg: ELB MyTestELB does not exist.

This worked for me: calling the AWS CLI from Ansible to get around boto/Ansible not being able to find the ELB.
- name: Add EC2 instance to ELB {{ elb_result.elb.name }} using the AWS CLI from within the Ansible play
  command: "sudo -E aws elb register-instances-with-load-balancer --load-balancer-name {{ elb_result.elb.name }} --instances i-456r3546 --profile <<MyProfileHereIfNeeded>>"
  environment:
    http_proxy: http://{{ proxyUserId }}:{{ proxyPwd }}@proxy.com:port
    https_proxy: http://{{ proxyUserId }}:{{ proxyPwd }}@proxy.com:port
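For reference, the same registration can be run straight from a shell with the AWS CLI; a describe call first confirms the ELB is actually visible under the profile even though boto cannot find it. The ELB name, instance ID, and profile below are just the placeholders from this question:

# Confirm the ELB is visible to the CLI under this profile
aws elb describe-load-balancers --load-balancer-names MyTestELB --profile MyProfile

# Register the instance with the ELB
aws elb register-instances-with-load-balancer --load-balancer-name MyTestELB --instances i-456r3546 --profile MyProfile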


Pulling GitHub Actions images via Docker Hub whilst authenticated

I have a GitHub Actions workflow which uses various services, for example:
services:
  postgres:
    image: postgres:14.5
We're moving to running these jobs on self-hosted runners, but we keep hitting Docker Hub rate limits:
Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
The problem is that although I have a Docker Hub account, and we have been adding a login step from the suggested action to remedy this, the service images are pulled before that step gets to execute, so the login never takes effect.
Is there a way to keep using services while pulling their images from Docker Hub authenticated?
Supply credentials for pulling the service image in the workflow YAML:
services:
  postgres:
    image: postgres:14.5
    credentials:
      username: ${{ secrets.DOCKER_USER }}
      password: ${{ secrets.DOCKER_PASSWORD }}
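The DOCKER_USER and DOCKER_PASSWORD secrets referenced above have to exist in the repository (or organization). As a rough sketch, assuming the GitHub CLI is installed and authenticated against the repo, they could be created like this; a Docker Hub access token can be used in place of the account password:

# Create the repository secrets used by the credentials block above
gh secret set DOCKER_USER --body "my-dockerhub-username"
gh secret set DOCKER_PASSWORD --body "my-dockerhub-access-token"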

Changing an API Gateway REST API stage deployment via the CLI or SDK

I have a system in place for creating new deployments, but I would like to be able to change a stage to use a previous deployment. You can do this via the AWS console, but it appears it's not an option for v1 API Gateways via the SDK or CLI?
This can be done via the CLI for v1 APIs. You have to run two commands: get-deployments and update-stage. Get the deployment ID from the output of the first and use it in the second.
$ aws apigateway get-deployments --rest-api-id $API_ID
$ aws apigateway update-stage --rest-api-id $API_ID --stage-name $STAGE_NAME --patch-operations op=replace,path=/deploymentId,value=$DEPLOYMENT_ID
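If the API has accumulated many deployments, a JMESPath --query can make it easier to spot the deployment ID to roll back to; the column selection below is just an example:

# List deployment IDs with their creation dates and descriptions
aws apigateway get-deployments --rest-api-id $API_ID --query 'items[*].[id,createdDate,description]' --output table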

GitHub Actions echo command not creating file

I'm in the process of moving a working CircleCI workflow over to GitHub Actions.
I'm running:
runs-on: ubuntu-latest
container:
  image: google/cloud-sdk:latest
I run the following command:
echo ${{ secrets.GCLOUD_API_KEYFILE }} > ./gcloud-api-key.json
Before running this command, gcloud-api-key.json has not yet been created. This command works in CircleCI, but in GitHub Actions I get the error:
/__w/_temp/asd987as89d7cf.sh: 2: /__w/_temp/asd987as89d7cf.sh: type:: not found
Does anyone know what this error means?
The reason was that my secret key was more than one line long. Once I made it one line, it worked.
In order to use secrets which contain more than one line (like secret JSON files), you have to save the base64-encoded secret in GitHub, which makes it one line.
On Linux the encoding is done via:
cat mysecret.json | base64
Then in the action you need to decode it using
echo ${{ secrets.YOUR_SECRET }} | base64 -d > secret.json
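One thing to watch for: on GNU/Linux, base64 wraps its output at 76 characters by default, so it helps to disable wrapping when generating the value you paste into the GitHub secret. The secret name below is just an example:

# Encode without line wrapping (GNU coreutils); paste the single-line output into the GitHub secret
base64 -w 0 mysecret.json

# Decode it back into a file inside the workflow step
echo "${{ secrets.GCLOUD_API_KEYFILE_B64 }}" | base64 -d > ./gcloud-api-key.json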

Scaling up GlusterFS storage only adds a new peer without new bricks in OpenShift

Observed behavior
I started with a one-node OpenShift cluster, and it successfully deployed the master/node and the Gluster volume. I then extended the OpenShift cluster successfully,
but on extending the GlusterFS volume with the inventory and command below,
[glusterfs]
10.1.1.1 glusterfs_devices='[ "/dev/vdb" ]'
10.1.1.2 glusterfs_devices='[ "/dev/vdb" ]' openshift_node_labels="type=upgrade"
ansible-playbook -i inventory2.ini /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml -e openshift_upgrade_nodes_label="type=upgrade"
it only added 10.1.1.2 as a peer; the volume still has only one brick.
The following customization was made so that the Gluster deployment could start from one node (--durability none):
openshift-ansible/roles/openshift_storage_glusterfs/tasks/heketi_init_db.yml
- name: Create heketi DB volume
  command: "{{ glusterfs_heketi_client }} setup-openshift-heketi-storage --image {{ glusterfs_heketi_image }} --listfile /tmp/heketi-storage.json --durability none"
  register: setup_storage
gluster peer status
Number of Peers: 1
Hostname: 10.1.1.2
Uuid: 1b8159e4-99e2-4f4d-ad95-e97bc8655d32
State: Peer in Cluster (Connected)
gluster volume info
Volume Name: heketidbstorage
Type: Distribute
Volume ID: 769419b9-d28f-4cdd-a8f3-708b6b738f65
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.1.1.1:/var/lib/heketi/mounts/vg_4187bfa3eb090ceffea9c53b156ddbd4/brick_80401b43be8c3c8a74417b18ad574524/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Expected/desired behavior
I am expecting that the addition of every new node should create a new brick too.
Details on how to reproduce (minimal and precise)
Add nodes to the Gluster cluster with the command below:
ansible-playbook -i inventory2.ini /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml -e openshift_upgrade_nodes_label="type=upgrade"
Information about the environment:
Heketi version used (e.g. v6.0.0 or master): OpenShift 3.10
Operating system used: CentOS
Heketi compiled from sources, as a package (rpm/deb), or container: Container
If container, which container image: docker.io/heketi/heketi:latest
Using kubernetes, openshift, or direct install: Openshift
If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: outside
If kubernetes/openshift, how was it deployed (gk-deploy, openshift-ansible, other, custom): openshift-ansible
Just adding a node/server does not mean that a brick will also be added to the existing Gluster volume.
You have to add the brick hosted on the new node to the existing volume, with a command of the form:
gluster volume add-brick <volname> <host>:<brick-path> force
Not sure if you have provided this command in your automation script or not.
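As a concrete sketch, using the heketidbstorage volume from the output above (the brick path on the new node is illustrative; it would normally be carved out of /dev/vdb the same way it was on the first node):

# Add a brick hosted on the new peer to the existing volume
gluster volume add-brick heketidbstorage 10.1.1.2:/var/lib/heketi/mounts/<vg>/<brick>/brick force

# For a Distribute volume, rebalance so existing data is spread onto the new brick
gluster volume rebalance heketidbstorage start
gluster volume rebalance heketidbstorage status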

Cannot create dataproc cluster due to SSD label error

I've been creating dataproc clusters successfully over the past couple of weeks using the following gcloud command:
gcloud dataproc --region us-east1 clusters create test1 \
  --subnet default --zone us-east1-c \
  --master-machine-type n1-standard-4 --master-boot-disk-size 250 \
  --num-workers 10 --worker-machine-type n1-standard-4 --worker-boot-disk-size 200 \
  --num-worker-local-ssds 1 --image-version 1.2 \
  --scopes 'https://www.googleapis.com/auth/cloud-platform' \
  --project MyProject \
  --initialization-actions gs://MyBucket/MyScript.sh
But today I'm getting the following error when I try to create a Dataproc cluster from either the gcloud CLI or the GCP web console:
ERROR: (gcloud.dataproc.clusters.create) Operation
[projects/MyProject/regions/us-east1/operations/SOMELONGIDHERE] failed:
Invalid value for field 'resource.disks[1].initializeParams.labels': ''.
Cannot specify initializeParams.labels for local SSD..
I tried changing the cluster name and the zone (not region), without any success.
Thanks in advance
There was an issue on Google's end that was corrected.
It should be working now.