Scenario:
Create an issue in org/bar whenever an action gets triggered by a PR in org/foo.
I've used the organization's private key to create a token and called https://api.github.com/repos/org/bar/issues to create the issue. The problem is that if the PR comes from a fork, the action can't access org/foo's secrets, so it can't read the private key from the secrets to create the access token.
The access limitation is described in the documentation: https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets#using-encrypted-secrets-in-a-workflow
So, since GITHUB_TOKEN doesn't have access to other repositories in the organization, is there any other way to authorize a GitHub Action to create an issue in another repository?
I think you could use the same approach as I did in "Workaround to post comments from GitHub Actions from forked repos":
create a cron job to regularly check for some condition (e.g. a PR has been created), and if the condition is met, execute steps that require elevated privileges (a minimal sketch follows below).
a PR could create some artifacts that the cron job could use. Just make sure the cron job is reasonably secure so it can't be manipulated into doing something malicious.
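For the privileged step itself, here is a minimal sketch (not from the linked answer) of what the cron job could run once its condition is met. It uses the same https://api.github.com/repos/org/bar/issues endpoint from the question; the ORG_BAR_TOKEN environment variable name is an assumption for however the scheduled workflow exposes the token built from the private key:

# Minimal sketch: runs inside the privileged scheduled workflow, not in the fork's PR run.
# ORG_BAR_TOKEN is a hypothetical env var holding a token that can write to org/bar.
import os
import requests

def create_issue(title: str, body: str) -> dict:
    token = os.environ["ORG_BAR_TOKEN"]
    response = requests.post(
        "https://api.github.com/repos/org/bar/issues",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github.v3+json",
        },
        json={"title": title, "body": body},
    )
    response.raise_for_status()
    return response.json()

issue = create_issue("PR opened in org/foo", "Created by the scheduled check.")
print("Created issue:", issue["html_url"])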
Related
What is the difference between the pull_request and pull_request_target events in GitHub Actions?
I found this explanation in the GitHub Actions docs:
This event (pull_request_target) runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does.
But I can't understand what the "context" is in GitHub Actions. Can anybody explain it?
That is summarized in "Github Actions and the threat of malicious pull requests" by Nathan Davison:
When Github first launched Actions in 2018, this was not the case (or at least, it wasn't intended to be) - in Actions terminology, the pull_request event and its variants were the only events that triggered on a PR being opened from a fork, and these events were made to not have access to repo secrets, including having access to a GITHUB_TOKEN value that is read-only.
However, sometime later, in August 2020, the pull_request_target event was added.
This event is given repo secrets and a full read/write GITHUB_TOKEN to boot, however there is a catch - this action only runs in the pull request's target branch, and not the pull request's branch itself.
This differs from the CircleCI approach, which happily checked out the pull request's code when it was instructed to share secrets with PRs from forked repositories, including the pipeline configuration in the pull request (that would allow pull requests to be submitted just to steal tokens and secrets stored within the settings of a CircleCI project).
GitHub's blog post confirms:
In order to protect public repositories for malicious users we run all pull request workflows raised from repository forks with a read-only token and no access to secrets.
This makes common workflows like labeling or commenting on pull requests very difficult.
In order to solve this, we’ve added a new pull_request_target event, which behaves in an almost identical way to the pull_request event with the same set of filters and payload.
However, instead of running against the workflow and code from the merge commit, the event runs against the workflow and code from the base of the pull request.
This means the workflow is running from a trusted source and is given access to a read/write token as well as secrets enabling the maintainer to safely comment on or label a pull request.
This event can be used in combination with the private repository settings as well.
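To make the "commenting on pull requests" use case concrete, here is a rough sketch (not from either post) of the kind of step a pull_request_target workflow could run with its read/write GITHUB_TOKEN; it assumes the workflow passes secrets.GITHUB_TOKEN to the script via a GITHUB_TOKEN environment variable and reads the PR number from the event payload:

# Rough sketch: comment on the incoming PR using the read/write GITHUB_TOKEN
# available to a pull_request_target workflow. Assumes the workflow exposes
# secrets.GITHUB_TOKEN to this script via the GITHUB_TOKEN environment variable.
import json
import os
import requests

token = os.environ["GITHUB_TOKEN"]
repo = os.environ["GITHUB_REPOSITORY"]            # e.g. "org/foo"
with open(os.environ["GITHUB_EVENT_PATH"]) as f:  # event payload written by the runner
    pr_number = json.load(f)["number"]

resp = requests.post(
    f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
    headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    },
    json={"body": "Thanks for the pull request!"},
)
resp.raise_for_status()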
I want to schedule a Vertex Pipelines run and deploy it from my local machine for now.
I have defined my pipeline, which runs well when I deploy it once using create_run_from_job_spec on AIPlatformClient.
When trying to schedule it with create_schedule_from_job_spec, I do get a Cloud Scheduler object created correctly, with an HTTP endpoint pointing to a Cloud Function. But when the scheduler runs, it fails with a Permission denied error. I used several service accounts with owner permissions on the project.
Do you know what could have gone wrong?
Since AIPlatformClient from Kubeflow Pipelines raises a deprecation warning, I also want to use PipelineJob from google.cloud.aiplatform, but I can't see any direct way to schedule the pipeline execution.
I've spent about 3 hours banging my head on this too. In my case, what seemed to fix it was either:
disabling and re-enabling the Cloud Scheduler API. Why did I do this? There is supposed to be a service account called service-[project-number]@gcp-sa-cloudscheduler.iam.gserviceaccount.com. If it is missing, then re-enabling the API might fix it
for older projects there is an additional step: https://cloud.google.com/scheduler/docs/http-target-auth#add
Simpler explanations include not having done some of the following steps:
create a service account for the scheduler job and grant it the Cloud Functions Invoker role during creation
use this service account (see create_schedule_from_job_spec below)
find the (sneaky) Cloud Function that was created for you (it will be called something like 'templated_http_request-v1') and add your service account as a Cloud Functions Invoker on it
# client is the AIPlatformClient mentioned above; the project and region values are placeholders.
from kfp.v2.google.client import AIPlatformClient

client = AIPlatformClient(project_id="<project_id>", region="<region>")

response = client.create_schedule_from_job_spec(
    job_spec_path=pipeline_spec,   # path to the compiled pipeline spec
    schedule="*/15 * * * *",       # cron expression for Cloud Scheduler
    time_zone="Europe/London",
    parameter_values={},
    cloud_scheduler_service_account="<your-service-account>@<project_id>.iam.gserviceaccount.com",
)
If you are still stuck, it is also useful to run gcloud scheduler jobs describe <pipeline-name>, as it really helps to understand what the scheduler is doing. You'll see the Cloud Function URL and the POST payload, which is base64-encoded and contains the pipeline YAML, and you'll see that it uses OIDC with a service account for security. It is also useful to view the code of the 'templated_http_request-v1' Cloud Function (sneakily created!). I was able to invoke the Cloud Function from Postman using the payload obtained from the scheduler job.
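If you want to inspect that payload rather than just eyeball it, a small sketch for decoding it might look like this, assuming you have copied the base64 body from the gcloud output into payload_b64:

# Small sketch: decode the base64-encoded POST body shown by
# `gcloud scheduler jobs describe <pipeline-name>` to inspect the pipeline definition.
import base64

payload_b64 = "<paste the body field from the gcloud output here>"
print(base64.b64decode(payload_b64).decode("utf-8"))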
I am trying to make sure we have a secure way to integrate our cloud and GitHub Actions.
We have multiple accounts in our cloud to reduce the blast radius if there is an issue. For this, we need to make sure we can assume the correct role to deploy to the correct sub-account. We were planning to build a discovery capability based on extracting the metadata of the GITHUB_TOKEN generated at runtime.
Is there a way to obtain the repo name or action that generated the GITHUB_TOKEN?
The GitHub integration for https://desert.readthedocs.io/en/stable/ was working fine, then the GitHub user account (python-desert) was converted to an organization, and since then we (myself and the owner) have been unable to get the hook re-synced or deleted and re-added. I am able to add a webhook on GitHub, so it seems my account does have the necessary permissions. I could delete the project and reconnect it and hope that helps, but it would obviously be nice to avoid that.
Following https://docs.readthedocs.io/en/latest/webhooks.html#webhook-activation-failed-make-sure-you-have-the-necessary-permissions, I followed the link to the OAuth apps, already had the RTD app in my user account, clicked it, granted permission to the org, deleted the webhook setup on both GitHub and RTD, and had RTD recreate it via the integrations page. I still get the following complaint from RTD, but the PR did trigger builds on RTD and status callbacks to the PR.
The project desert doesn't have a valid webhook set up, commits won't trigger new builds for this project. See the project integrations for more information.
I have two interdependent jobs, and my goal is to send an email notification to the committers of job A after the completion of job B.
For sending the notification, I got a reply saying that I need fingerprinting between the dependent jobs.
So my question is: for fingerprinting, do I need to archive artifacts and fingerprint those artifacts, or can I fingerprint whatever file is required for checking the dependency between the two jobs?
How can I send an email notification based on job B's result (success/failure) to those who committed on job A?
Please can somebody explain this in detail, because I am new to Jenkins.
Yes, you need to archive artifacts and fingerprint them in order to create a dependency chain between your jobs. That way, Job A build #256 with a fingerprinted item "some.file" will be linked to Job B build #1623 carrying the same fingerprint ID for "some.file".
In order to send email notifications when Job B fails, you need to set up promoted builds.
Promoted builds allow you to define an action when a downstream project succeeds, breaks, fails, etc., for example sending an e-mail.