Hello GitHub Actions community :)
I have a workflow in GitHub Actions that is not working, and I don't quite understand why.
I am using git-secret to encrypt my credentials, and I am trying to decrypt them in the GitHub Actions workflow.
This is the step I execute when I want to decrypt the files:
- name: Reveal data
  run: |
    echo
    echo 'Before decrypt'
    ls -ls
    git secret reveal -p ${{ secrets.PASSPHRASE }} -f
    echo 'After decrypt'
    ls -ls
    git secret whoknows
Before decrypt
total 4
4 -rw-r--r-- 1 runner docker 630 Jul 18 09:39 secrets.md.secret
done. all 1 files are revealed.
After decrypt
total 4
4 -rw-r--r-- 1 runner docker 630 Jul 18 09:39 secrets.md.secret
testing@testing.com
According to the log this works, because git secret prints 'done. all 1 files are revealed.'. However, as you can see above, no new file is being generated.
Locally it works, and I get the decrypted file by running the same command.
How to reproduce it locally (a combined command sketch follows this list):
Install git-secret
Create a GPG key (gpg --full-generate-key)
Run 'git secret tell email-used-in-the-gpg-key'
Run 'git secret add filename'
Run 'git secret hide' to encrypt the file
Run 'rm filename'
Run 'git secret reveal' and enter the passphrase. This will recreate the decrypted file
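Put together, the local steps look roughly like this (a sketch; the e-mail address and file name are placeholders, and note that git secret init is needed once per repository before git secret tell):
# assumes git, gpg and git-secret are installed
gpg --full-generate-key                  # create a GPG key pair
git secret init                          # initialise git-secret in the repo
git secret tell you@example.com          # register the key's e-mail with git-secret
git secret add secrets.md                # mark the file as secret
git secret hide                          # encrypt it to secrets.md.secret
rm secrets.md                            # remove the plaintext copy
git secret reveal                        # prompts for the passphrase and recreates secrets.md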
How to reproduce it in GitHub Actions:
Create a new workflow
Paste this step:
- name: Reveal
  run: |
    git secret reveal -p ${{ secrets.PASSPHRASE }}
Does anyone have any idea what is going on here? Maybe GitHub workflows do not allow file creation?
Thank you very much in advance and best regards!
Related
I was using a CI/CD pipeline to deploy my project to the server.
However, it suddenly stopped working and I got two errors.
The first one is related to git and the second one is a Docker error.
Can somebody help me figure out what the problem could be?
out: Total reclaimed space: 0B
err: error: cannot pull with rebase: You have unstaged changes.
err: error: please commit or stash them.
out: docker build -f Dockerfile . -t tourmix-next
err: time="20***-10-08T11:06:33Z" level=error msg="Can't add file /mnt/tourmix-main/database/mysql.sock to tar: archive/tar: sockets not supported"
out: Sending build context to Docker daemon 255MB
out: Step 1/21 : FROM node:lts as dependencies
out: lts: Pulling from library/node
out: Digest: sha256:b35e76ba744a975b9a5428b6c3cde1a1cf0be53b246e1e9a4874f87034***b5a
out: Status: Downloaded newer image for node:lts
out: ---> 946ee375d0e0
out: Step 2/21 : WORKDIR /tourmix
out: ---> Using cache
out: ---> 05e933ce4fa7
This is my Dockerfile:
FROM node:lts as dependencies
WORKDIR /tourmix
COPY package*.json ./
RUN npm install --force

FROM node:lts as builder
WORKDIR /tourmix
COPY . .
COPY --from=dependencies /tourmix/node_modules ./node_modules
RUN npx prisma generate
RUN npm run build

FROM node:lts as runner
WORKDIR /tourmix
ENV NODE_ENV production
# If you are using a custom next.config.js file, uncomment this line.
COPY --from=builder /tourmix/next.config.js ./
COPY --from=builder /tourmix/public ./public
COPY --from=builder /tourmix/.next ./.next
COPY --from=builder /tourmix/node_modules ./node_modules
COPY --from=builder /tourmix/package.json ./package.json
COPY --from=builder /tourmix/.env ./.env
# copy the prisma folder
EXPOSE 3000
CMD ["yarn", "start"]
This is my GitHub workflow file:
# This is a basic workflow
name: Deploy application

# Controls when the action will run. The workflow runs on pushes to master.
on:
  push:
    branches: [master]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "deploy"
  deploy:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      - name: multiple command
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: ${{ secrets.SSH_PORT }}
          passphrase: ${{ secrets.SSH_PASSPHRASE }}
          script: |
            docker system prune -a -f
            cd /mnt/tourmix-main
            git pull origin master --rebase
            make release
            docker system prune -a -f
      - uses: actions/checkout@v3
        with:
          clean: 'true'
Start with the first error:
Add a git clean pre-step in your pipeline to clean any private files from your workspace.
If you are using GitLab as a CI/CD platform, use Git clean flags (GitLab Runner 11.10+, Q2 2019).
For a GitHub Action, if the error is on the git pull command, add a git clean -ffdx just before the git pull:
script: |
  docker system prune -a -f
  cd /mnt/tourmix-main
  git clean -ffdx   # <====
  git stash         # <====
  git pull origin master --rebase
  make release
  docker system prune -a -f
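For reference: git clean -f removes untracked files, the second f also cleans untracked nested Git repositories, -d removes untracked directories, and -x additionally removes files ignored by .gitignore, so the workspace is back to a pristine checkout; git stash then parks any remaining changes to tracked files so the rebase can proceed.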
I'm trying to trigger a workflow event in GitHub.
For some reason, I'm able to GET information about my organization repository's workflow, but I cannot use '/dispatches'.
Work is based on: https://docs.github.com/en/rest/actions/workflows#create-a-workflow-dispatch-event
Here is the curl command:
curl -X POST \
-H "Accept:application/vnd.github.v3+json" \
-H 'Authorization:token ${{ github.token }}' \
'https://api.github.com/repos/[owner/org]/[repo]/actions/workflows/9999999/dispatches' \
-d '{"event_type":"semantic-release"}'
Getting this error:
422 Unprocessable Entity
{
  "message": "Invalid request.\n\nFor 'links/0/schema', nil is not an object.",
  "documentation_url": "https://docs.github.com/rest/reference/repos#create-a-repository-dispatch-event"
}
Am I missing some basic information for this to work and trigger an event?
Instead of trying to call the GitHub API directly, try and use the GitHub CLI gh (which you can install first to test locally).
You can also use the GitHub CLI in workflows:
GitHub CLI is preinstalled on all GitHub-hosted runners.
For each step that uses GitHub CLI, you must set an environment variable called GITHUB_TOKEN to a token with the required scopes.
It has a gh workflow run command, which does create a workflow_dispatch event for a given workflow.
Authenticate first (gh auth login, if you are doing a local test):
# authenticate against github.com by reading the token from a file
$ gh auth login --with-token < mytoken.txt
Examples:
# Run the workflow file 'triage.yml' at the remote's default branch
$ gh workflow run triage.yml
# Run the workflow file 'triage.yml' at a specified ref
$ gh workflow run triage.yml --ref my-branch
# Run the workflow file 'triage.yml' with command line inputs
$ gh workflow run triage.yml -f name=scully -f greeting=hello
# Run the workflow file 'triage.yml' with JSON via standard input
$ echo '{"name":"scully", "greeting":"hello"}' | gh workflow run triage.yml --json
In your case (GitHub Action):
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - run: gh workflow run triage.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
As explained by hanayama in the comments:
Found out secrets.GITHUB_TOKEN doesn't work, even with permissions edited for the entire workflow.
Using a personal access token worked.
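If you hit the same limitation, a minimal sketch of that variant is to store a personal access token with the repo/workflow scopes as a repository secret (here hypothetically named PERSONAL_TOKEN) and expose it to gh instead of the default token:
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - run: gh workflow run triage.yml
        env:
          # PERSONAL_TOKEN is a placeholder secret name holding a PAT with repo/workflow scopes
          GITHUB_TOKEN: ${{ secrets.PERSONAL_TOKEN }}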
So I am pretty stuck, yet so close, to getting a Google Apps Script project to push and deploy with clasp through Google Cloud Build. The push and deploy commands come from Google's clasp CLI, which requires you to log in with your Google credentials via clasp login. The login creates a file in your home dir called ~/.clasprc.json with your credentials; this is needed to push and deploy. In the cloudbuild.yaml I created a substitution called _CLASPRC to hold the contents of this file and used my own custom image to write it into the container while running the build.
Now for the issue: I get the error below when the push command runs, which is basically a not very useful way of saying I'm not logged in, or of reporting any other error with the .clasprc.json. Since this is the only error I ever get no matter what the problem is, the issue is a bit hard to debug.
Could not read API credentials. Are you logged in globally?
I have tried putting the .clasprc.json in the home dir and in the project dir, but I get the same issue both ways. I'm pretty sure the file is getting written into the project dir, because when I try to run locally without the .clasp.json it complains that it's missing before complaining that I'm not logged in; when the .clasp.json is there, it only complains that I'm not logged in.
The project is just a personal project of mine and it is all open source on GitHub, so here is a link to the actual project if you want some reference to the actual code: My Lil Admin, and the builder I used: My Builders. However, you don't really need the project; to reproduce, follow the steps below on your local machine.
Make sure you have a GCP project created and the gcloud CLI installed, with the Apps Script API enabled
Have the clasp CLI: npm install -g @google/clasp
Run clasp login to get a .clasprc.json and auth with GCP
Run clasp create --title "My Script" --type webapp and take note of the script ID
Associate the Apps Script project with your GCP project (a combined command sketch of these steps follows below)
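Put together, the local setup looks roughly like this (a sketch; the script title is a placeholder, and linking the Apps Script project to the GCP project is done in the Apps Script project settings):
# assumes gcloud is installed and the GCP project already exists
npm install -g @google/clasp                      # install the clasp CLI
clasp login                                       # writes ~/.clasprc.json with your credentials
clasp create --title "My Script" --type webapp    # note the script ID it prints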
The following steps are the files which lead to the problem. Simply add them to the clasp project you created.
6. Here is the entrypoint for my Clasp Builder Image:
builder/clasp_ci.sh
#!/bin/bash
# if there is a _CLASPRC var and no .clasprc.json file, write the credentials file
if [ ! -z "${_CLASPRC}" -a ! -f "${HOME}/.clasprc.json" ]; then
  echo "$_CLASPRC" > "$HOME/.clasprc.json"
fi

# if there is a _SCRIPT_ID and a PROJECT_ID and no .clasp.json file, generate one
if [ ! -z "${_SCRIPT_ID}" -a ! -z "$PROJECT_ID" -a ! -f ".clasp.json" ]; then
cat > '.clasp.json' << EOF
{"scriptId":"$_SCRIPT_ID","projectId": "$PROJECT_ID"}
EOF
fi

# pass all arguments through to clasp
clasp "$@"
The builders dockerfile
builder/Dockerfile
# use Node 8 LTS
FROM node:8.16.1

COPY clasp_ci.sh /usr/local/bin/clasp_ci

# install the clasp CLI and make the entrypoint executable
RUN npm install -g @google/clasp && \
    chmod +x /usr/local/bin/clasp_ci

ENTRYPOINT ["/usr/local/bin/clasp_ci"]
Now the Cloud Build config to build and push the clasp builder image:
builder/cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/clasp', '.']
images:
  - 'gcr.io/$PROJECT_ID/clasp'
And my Cloud Build CI config for the Apps Script project. If you're making a new project to follow along, you don't need the build steps or the dir key in the push and deploy steps; those are specific to the project linked above.
cloudbuild.yaml
steps:
  - id: install
    name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - id: build-server
    name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'gas']
    env:
      - 'NODE_ENV=production'
  - id: build-client
    name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'prod']
    env:
      - 'NODE_ENV=production'
  - id: push
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['push', '-f']
  - id: deploy
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['deploy', '$TAG_NAME']
substitutions:
  _CLASPRC: 'your clasp rc file in your home dir after logging in locally'
  _SCRIPT_ID: 'your script id of the apps script project to deploy to'
Here is the command to build and push the builder image. Make sure to replace yourproject with your actual project ID.
cd builder && gcloud builds submit --project yourproject --config=cloudbuild.yaml .
And the command that finally produces the error. Make sure to replace yourproject with your actual project ID and your_script_id with the script ID you took note of in step 4.
gcloud builds submit --project yourproject --config=cloudbuild.yaml . \
--substitutions=_CLASPRC="$(cat $HOME/.clasprc.json)" \
--substitutions=_SCRIPT_ID="your_script_id"
I have also tried using the credentials created from logging in with OAuth, but I got the same exact error. However, this may be useful in solving the issue: Docs for Clasp Run with OAuth.
Hopefully someone can help me get this working. If so, this would be the first documentation online for a Cloud Build CI with Apps Script and clasp, since I can't find anyone doing this anywhere. I have found some links using Travis and Jenkins, but what they are doing for some reason does not work. Does anyone see something that I'm not? What am I missing here?
Some other somewhat related or never solved issues:
https://github.com/google/clasp/issues/524
https://github.com/google/clasp/blob/master/tests/README.md
https://github.com/google/clasp/issues/225
https://github.com/gazf/google-apps-script-ci-starter
OK, so after a bunch of debugging I found out that Cloud Build substitution variables do not translate to environment variables in the container. You have to manually map the substitution variables to environment variables, and then the container gets the variables it needs.
Here is the updated CI entrypoint:
builder/clasp_ci.sh
#!/bin/bash
if [ ! -z "${CLASPRC}" -a ! -f "${HOME}/.clasprc.json" ]; then
echo $CLASPRC > "${HOME}/.clasprc.json"
fi
if [ ! -z "${SCRIPT_ID}" -a ! -z "$PROJECT_ID" -a ! -f ".clasp.json" ]; then
cat > '.clasp.json' << EOF
{"scriptId":"$SCRIPT_ID","projectId": "$PROJECT_ID"}
EOF
fi
clasp "$#"
and then the updated cloudbuild config:
cloudbuild.yaml
steps:
  - id: install
    name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  - id: build-server
    name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'gas']
    env:
      - 'NODE_ENV=production'
  - id: build-client
    name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'prod']
    env:
      - 'NODE_ENV=production'
  - id: push
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['push', '-f']
    env:
      - 'CLASPRC=$_CLASPRC'
      - 'SCRIPT_ID=$_SCRIPT_ID'
      - 'PROJECT_ID=$PROJECT_ID'
  - id: deploy
    name: 'gcr.io/$PROJECT_ID/clasp'
    dir: './dist/gas'
    args: ['deploy', '$TAG_NAME']
    env:
      - 'CLASPRC=$_CLASPRC'
      - 'SCRIPT_ID=$_SCRIPT_ID'
      - 'PROJECT_ID=$PROJECT_ID'
substitutions:
  _CLASPRC: 'your clasp rc file in your home dir after logging in locally'
  _SCRIPT_ID: 'your script id of the apps script project to deploy to'
I've been trying to get gcloud into a usable state on Travis, and I just can't seem to get past the gcloud auth activate-service-account step.
Whenever it runs, I just get the following error:
ERROR: (gcloud.auth.activate-service-account) PyOpenSSL is not available.
See https://developers.google.com/cloud/sdk/crypto for details.
I've tried apt-get and pip installs, both with export CLOUDSDK_PYTHON_SITEPACKAGES=1 set, and nothing seems to work.
Does anyone have any ideas or alternatives?
This is Travis running Ubuntu 14.04.
Update
If I run the command from the docs on Travis I get the following error:
usage: gcloud auth activate-service-account ACCOUNT --key-file KEY_FILE [optional flags]
ERROR: (gcloud.auth.activate-service-account) too few arguments
This made me think I had to have an ACCOUNT parameter, but after running the command locally with the unencrypted service account key, I know it's not needed (unless something has changed).
The only other thing I can think of is that the file isn't being decrypted correctly, or the command itself isn't happy on Travis:
- gcloud auth activate-service-account --key-file client-secret.json
Update 2
Just dumped a load of logs to figure out what is going on. (Massive shout out to @Vilas for his help.)
It looks like gcloud is already installed on the VM for Node, but it's a super old version.
$ which gcloud
/usr/bin/gcloud
$ gcloud --version
Google Cloud SDK 0.9.37
bq 2.0.18
bq-nix 2.0.18
compute 2014.11.25
core 2014.11.25
core-nix 2014.11.25
dns 2014.11.25
gcutil 1.16.5
gcutil-nix 1.16.5
gsutil 4.6
gsutil-nix 4.6
sql 2014.11.25
The next question is how can I get the path to find the right gcloud?
I've confirmed that the downloaded SDK installs to ${HOME}/google-cloud-sdk/bin by running this command.
$ ls -l ${HOME}/google-cloud-sdk/bin
total 24
drwxr-xr-x 2 travis travis 4096 Apr 27 21:44 bootstrapping
-rwxr-xr-x 1 travis travis 3107 Mar 28 14:53 bq
-rwxr-xr-x 1 travis travis 912 Apr 21 18:56 dev_appserver.py
-rwxr-xr-x 1 travis travis 3097 Mar 28 14:53 gcloud
-rwxr-xr-x 1 travis travis 3144 Mar 28 14:53 git-credential-gcloud.sh
-rwxr-xr-x 1 travis travis 3143 Mar 28 14:53 gsutil
I finally got a solution for it. Essentially Travis has a super old version of the gcloud SDK installed that was taking precedence over the downloaded SDK.
Steps to Help Diagnose
In your .travis.yml file add:
env:
  global:
    # Ensure the downloaded SDK is first on the PATH
    - PATH=${HOME}/google-cloud-sdk/bin:$PATH
    # Ensure the install happens without prompts
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
Then in your install step add the following:
install:
  # Make sure the SDK is downloaded - cache it once it's working
  # NOTE: not sure how to update the SDK if it's cached
  - curl https://sdk.cloud.google.com | bash;
  # List the SDK contents to ensure it's downloaded
  - ls -l ${HOME}/google-cloud-sdk/bin
  # Ensure the correct gcloud is being used
  - which gcloud
  # Print the gcloud version and make sure it's reasonably up to date
  # compared with: https://cloud.google.com/sdk/downloads#versioned
  - gcloud --version
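With the downloaded SDK first on the PATH, the service-account activation from the question can then run in a later phase. A minimal sketch, assuming the key has been decrypted to client-secret.json as in the question:
before_script:
  # activate the service account with the decrypted key file
  - gcloud auth activate-service-account --key-file client-secret.json
  # optionally set the default project (your-project-id is a placeholder)
  - gcloud config set project your-project-id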
When I try to clone a repository from Bitbucket with Ansible, it seems like the task 'hangs'.
In the documentation I found some information, but I'm not using SSH:
If the task seems to be hanging, first verify the remote host is in known_hosts. SSH will prompt the user to authorize the first contact with a remote host. One solution is to add StrictHostKeyChecking no in .ssh/config, which will accept and authorize the connection on behalf of the user. However, if you run as a different user (such as setting sudo to True), for example, root will not look at the user's .ssh/config setting.
These are the two playbooks I've tried. They both 'hang'.
Playbook #1
- hosts: staging_mysql
  user: ec2-user
  sudo: yes
  vars_files:
    - vars/mercurial.yml
  tasks:
    - name: Mercurial credentials setup
      action: template src=templates/hgrc.j2 dest=/home/ec2-user/.hgrc
    - name: Install Mercurial
      action: yum name=hg
    - name: Setup API repository
      action: command hg clone https://bbusername@bitbucket.org/username/my-repo -r default --debug
Playbook #2
- hosts: staging_mysql
  user: ec2-user
  sudo: yes
  vars_files:
    - vars/mercurial.yml
  tasks:
    - name: Mercurial credentials setup
      action: template src=templates/hgrc.j2 dest=/home/ec2-user/.hgrc
    - name: Install Mercurial
      action: yum name=hg
    - name: Clone API repo
      hg: dest=/home/ec2-user repo=https://bbusername@bitbucket.org/username/my-repo
Any help is welcome. Thanks in advance!
I found a better answer for those who want to clone a private repository. Bitbucket has a feature called "Deployment keys". Log into your project, go to "Settings" and "Deployment keys", "Add key", and then provide this key within your project deployment process, before the hg task:
- file: dest=/var/www/someuser/.ssh/config state=touch mode=600
- lineinfile: dest=/var/www/someuser/.ssh/config
              line="Host bitbucket.org"
              state=present
- copy: src=someuser.key dest=/var/www/someuser/.ssh/id_rsa mode=0600
- copy: src=someuser.key.pub dest=/var/www/someuser/.ssh/id_rsa.pub mode=0600
- lineinfile: dest=/var/www/someuser/.ssh/config
              line="IdentityFile ~/.ssh/id_rsa"
- lineinfile: dest=/var/www/someuser/.ssh/config
              line=" StrictHostKeyChecking no"
              insertafter="Host bitbucket.org"
              state=present
- name: install site code
  hg: repo='ssh://hg@bitbucket.org/somecode'
      dest=someuser
      revision=stable
  tags: someuser_code
I think it is easier to access Bitbucket using the HTTPS protocol rather than SSH. If you are using private repositories in Bitbucket, you should also use Ansible to create (or copy) a $HOME/.hgrc on your server.
Here is the content of the .hgrc file:
[auth]
bb.prefix = https://bitbucket.org/{{ user }}/
bb.username = {{ user }}
bb.password = {{ password }}
Two extra tips (an example task for deploying the .hgrc is sketched after this list):
Now it isn't necessary to put bbusername@ in your Bitbucket URLs.
Create another user in Bitbucket with access to your repositories and configure it as your user on the Ansible host. If someone breaches your site, they will be able to modify the repository, but won't be able to delete it. Since everything is version controlled, you will always be able to roll back the modifications.
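A minimal sketch of deploying that .hgrc with Ansible, mirroring the task already in the question and assuming a templates/hgrc.j2 containing the [auth] section above, with the user/password variables defined in a vars file or vault:
- name: Mercurial credentials setup
  template: src=templates/hgrc.j2 dest=/home/ec2-user/.hgrc owner=ec2-user mode=0600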
This solution uses SSH (so that we can use an SSH deployment key instead of storing credentials for HTTPS) and pre-populates ~/.ssh/known_hosts with the relevant entries so that hg doesn't hang on the prompt to accept the host key verification. This should also work whether or not you use sudo, as long as you populate the correct user's known_hosts file.
# copy the deploy key to ~/.ssh/id_rsa of the ansible user - we use copy here to
# simplify things but really you should use ansible vault or something similar
- name: copy deploy key
copy: src=id_rsa_deploy dest=/home/{{ ansible_ssh_user }}/.ssh/id_rsa
owner={{ ansible_ssh_user }} group={{ ansible_ssh_user }} mode=0600
- name: add bitbucket to deploy user's ~/.ssh/known_hosts
lineinfile: dest=/home/{{ ansible_ssh_user }}/.ssh/known_hosts line="bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw=="
- name: 2 add bitbucket to deploy user's ~/.ssh/known_hosts
lineinfile: dest=/home/{{ ansible_ssh_user }}/.ssh/known_hosts line="|1|w3ouhSzx3veHkFkoo/0KlzmLWiY=|dyifJ0YlWhJOElkc09kd5ZP2i6c= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw=="
- name: 3 add bitbucket to deploy user's ~/.ssh/known_hosts
lineinfile: dest=/home/{{ ansible_ssh_user }}/.ssh/known_hosts line="|1|/an77APTih6pDOBpi0GcQ8b5uno=|VOep3g6ll+3Xd8WdUQ/1BqtiF1A= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw=="
- name: copy repo
hg: repo={{ project.repo }} dest={{ project.local_repo }}
How are you actually accessing the hg repository? Try leaving off the last task in your playbook, then log in and manually try the hg clone to see what happens. I suspect it is indeed prompting for a password.
I've managed to solve the problem. The Mercurial task 'hangs' when running as a sudo user.
After removing the line sudo: yes from both playbooks, everything works as expected.
Working Playbook
- hosts: staging_mysql
  user: ec2-user
  vars_files:
    - vars/mercurial.yml
  tasks:
    - name: Mercurial credentials setup
      action: template src=templates/hgrc.j2 dest=/home/ec2-user/.hgrc
    - name: Install Mercurial
      action: yum name=hg
    - name: Clone API repo
      hg: dest=/home/ec2-user repo=https://bbusername@bitbucket.org/username/my-repo