Running the dbt docs generate command generates a catalog.json file in the target folder. The process works well locally.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
After generating the catalog.json file, I want to upload it to S3 in the next step. I copy it from the target folder to the root folder and then upload it like this:
- step:
    name: 'Upload to S3'
    image: python:3.7.2
    script:
      - aws s3 cp catalog.json s3://testunzipping/
However, I get the following error:
+ aws s3 cp catalog.json s3://testunzipping/
The user-provided path catalog.json does not exist.
Although the copy command works well locally, the file does not seem to be generated properly within the Bitbucket pipeline. Is there any other way to save the content of catalog.json in some variable in the first step and then upload it to S3 later?
In Bitbucket Pipelines, each step has its own build environment. To share files between steps, you should use artifacts.
You may want to try the steps below.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
      artifacts:
        - catalog.json
  - step:
      name: 'Upload to S3'
      image: python:3.7.2
      script:
        - aws s3 cp catalog.json s3://testunzipping/
Reference: https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
Related
I have two steps in GitHub Actions:
The first uploads a zipped artifact:
- name: Upload artifact
  uses: actions/upload-artifact@master
  with:
    name: artifacts
    path: target/*.jar
The second uses a custom java command to read the uploaded artifact:
- name: Read artifact
  run: java -jar pipeline-scan.jar -- "artifacts.zip"
I've redacted the java command, but it's supposed to scan my zip file using Veracode. GitHub Actions returns the following error:
java -jar pipeline-scan.jar: error: argument -f/--file: Insufficient
permissions to read file: 'artifacts.zip'
I've tried changing the permissions of the GITHUB_TOKEN, but apparently you can only pass in the $GITHUB_TOKEN secret with a "uses" parameter and not a "run" parameter. I've also made sure that my default workflow permissions are set to "read and write permissions."
Does anyone know how to resolve this permissions issue?
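One possible direction, sketched under the assumption that the upload and scan steps run in separate jobs, so the artifact has to be downloaded (and re-zipped, since actions/download-artifact extracts the archive) before the scanner can read it. The paths, the re-zip, and the chmod below are assumptions, not something confirmed above:

- name: Download artifact
  uses: actions/download-artifact@v3
  with:
    name: artifacts
    path: artifacts

- name: Repackage artifact and make it readable
  # re-zip the downloaded files and ensure the current user can read the result
  run: |
    zip -r artifacts.zip artifacts
    chmod u+r artifacts.zip

- name: Read artifact
  run: java -jar pipeline-scan.jar --file "artifacts.zip"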
I have an ECS Fargate cluster that is deployed to through Bitbucket Pipelines, and my Docker image is stored in ECR. Within Bitbucket Pipelines I use one pipe to push the Docker image to ECR and a second pipe to deploy to Fargate.
I'm facing a blocker when it comes to Fargate deploying the correct image on each deployment. The way I have the pipeline set up is shown below. The Docker image gets tagged with the Bitbucket build number for each deployment. Below is the pipe for the Docker image that gets built and pushed to ECR:
name: Push Docker Image to ECR
script:
  - ECR_PASSWORD=`aws ecr get-login-password --region $AWS_DEFAULT_REGION`
  - AWS_REGISTRY=$ACCT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  - docker login --username AWS --password $ECR_PASSWORD $AWS_REGISTRY
  - docker build -t $DOCKER_IMAGE .
  - pipe: atlassian/aws-ecr-push-image:1.6.2
    variables:
      IMAGE_NAME: $DOCKER_IMAGE
      TAGS: $BITBUCKET_BUILD_NUMBER
The next part of the pipeline deploys the image that was pushed to ECR to Fargate. The pipe associated with the Fargate deployment is below:
name: Deploy to Fargate
script:
  - pipe: atlassian/aws-ecs-deploy:1.6.2
    variables:
      CLUSTER_NAME: $CLUSTER_NAME
      SERVICE_NAME: $SERVICE_NAME
      TASK_DEFINITION: $TASK_DEFINITION
      FORCE_NEW_DEPLOYMENT: 'true'
      DEBUG: 'true'
Within this pipe, the TASK_DEFINITION attribute points to a JSON file in the repo that ECS runs its tasks from. This file contains a key-value pair for the image ECS is to use. Below is an example of that key-value pair:
"image": "XXXXXXXXXXXX.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$DOCKER_IMAGE:latest",
The problem with this line is that the image tag changes with each deployment, while the task definition stays pinned to latest.
I would like this entire deployment process to be automated, but this step is preventing that. I came across a link that shows how to change the tag in the task definition within the build environment of the pipeline; the article uses envsubst. I've seen how envsubst works, but I'm not sure how to use it on a JSON file.
Any recommendations on how I can change the tag in the task definition from latest to the Bitbucket build number using envsubst would be appreciated.
Thank you.
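For what it's worth, envsubst doesn't care that the file is JSON; it simply replaces $VARIABLE placeholders in whatever text it reads from stdin. A minimal sketch of the deploy step, assuming the task definition is kept in the repo as a template (the template and output file names and the IMAGE_TAG placeholder are assumptions, and depending on the build image envsubst may first need to be installed, e.g. from the gettext-base package):

name: Deploy to Fargate
script:
  # Render the template: every $VAR in the JSON is replaced with the value
  # set in this shell, so the image line can end in ...:$IMAGE_TAG
  - export IMAGE_TAG=$BITBUCKET_BUILD_NUMBER
  - envsubst < task-definition.template.json > task-definition.json
  - pipe: atlassian/aws-ecs-deploy:1.6.2
    variables:
      CLUSTER_NAME: $CLUSTER_NAME
      SERVICE_NAME: $SERVICE_NAME
      TASK_DEFINITION: 'task-definition.json'
      FORCE_NEW_DEPLOYMENT: 'true'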
I have two JSON files: Parameters.json and updatedParam.json.
I want to write a YAML pipeline script that takes both JSON files and patches the changes made in Parameters.json into updatedParam.json.
I am trying to trigger a pipeline whenever a change is made in the Parameters.json file.
Thanks in advance.
I want to write a YAML pipeline script that takes both JSON files and patches the changes made in Parameters.json into updatedParam.json.
Azure DevOps doesn't support this out of the box; you need to write your own script for it, for example along the lines of the sketch below.
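A minimal sketch of such a script step, assuming "patch" means a recursive object merge in which values from Parameters.json overwrite those in updatedParam.json; it uses jq, which is available on the Microsoft-hosted ubuntu-latest image, and the file paths are assumptions:

- script: |
    # merge the two JSON files; keys present in Parameters.json win
    jq -s '.[0] * .[1]' test/updatedParam.json test/Parameters.json > merged.json
    mv merged.json test/updatedParam.json
  displayName: 'Patch Parameters.json into updatedParam.json'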
And if you want your pipeline to be triggered only by changes to a specific file such as Parameters.json, your pipeline should look like this:
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - test/Parameters.json

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo Hello, world!
    displayName: 'Run a one-line script'

  - script: |
      echo Add other tasks to build, test, and deploy your project.
      echo See https://aka.ms/yaml
    displayName: 'Run a multi-line script'
Several Cloud Functions use the same requirements: a few libraries and a utility module in localpackage. All of those functions are built and deployed by Cloud Build.
Is there any way to use the Cloud Build '--cache-from' feature to use the same base for all of those Cloud Functions?
Here are the steps in the YAML file to build and deploy a Cloud Function:
steps:
  # create gcf source directory
  - name: 'bash'
    args:
      - '-c'
      - |
        echo 'Creating gcf_source directory for ${_GCF_NAME}'
        mkdir _gcf_source
        cp -r cloudfuncs/${_GCF_NAME}/. _gcf_source
        rm -f _gcf_source/readme.md
        mkdir _gcf_source/localpackage
        touch _gcf_source/localpackage/__init__.py
        cp cloudfuncs/localpackage/gcf_utils.py _gcf_source/localpackage
        echo "" >> _gcf_source/requirements.txt
        cat cloudfuncs/localpackage/requirements.txt >> _gcf_source/requirements.txt

  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - functions
      - deploy
      - ${_GCF_NAME}
Cloud Build is serverless and you can't keep data from one execution to the next.
However, you have two options:
- Build a container with your localpackage and use it in your subsequent deployments as the container of the step (see the sketch after this list).
- Store the files in Google Cloud Storage and load them into your Cloud Build workspace as the first step of your deployments. The artifacts feature can help you with saving the files.
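As a sketch of the first option (the image name, tag, and Dockerfile location are assumptions, not something prescribed by Cloud Build): build a helper image once that already contains localpackage and its pinned requirements, push it, and then use it as the builder image of the preparation step instead of the plain bash image.

# one-time (or occasional) cloudbuild.yaml for the shared base image
steps:
  # assumes a Dockerfile in cloudfuncs/localpackage that installs
  # requirements.txt and copies gcf_utils.py; --cache-from could be added
  # here once a previous image exists to speed up rebuilds
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/gcf-base:latest', 'cloudfuncs/localpackage']
images:
  - 'gcr.io/$PROJECT_ID/gcf-base:latest'

Each function's build can then start its preparation step with name: 'gcr.io/$PROJECT_ID/gcf-base:latest', so the shared files are already present in the builder image.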
I'm creating a role in Ansible and got stuck on a step that requires downloading a publicly shared archive from Google Drive (https://drive.google.com/file/d/0BxpbZGYVZsEeSFdrUnBNMUp1YzQ/view?usp=sharing).
I didn't find any Ansible module that would be able to get such a file from Google Drive, and (as far as I know) it's not possible to get a direct link with an extension at the end...
Is there any solution to this problem, or do I need to download the file and upload it somewhere else, so I can then get it directly through the Ansible get_url module?
I found a solution myself :)
By using the third-party script from here: https://github.com/circulosmeos/gdown.pl/blob/master/gdown.pl
and then running the command module with the proper arguments to download the file.
- name: Copy "gdown" script to /usr/local/bin
  copy:
    src: gdown.pl
    dest: /usr/local/bin/gdown
    mode: '0755'

- name: Download DRAGNN CONLL2017 data archive
  command: "/usr/local/bin/gdown {{ dragnn_data_url }} {{ dragnn_dir }}/conll17.tar.gz"
  args:
    creates: "{{ dragnn_dir }}/conll17.tar.gz"
  become_user: "{{ docker_user }}"
  become: yes
  become_method: sudo
You can do it like this:
- name: Download archive from google drive
  get_url:
    url: "https://drive.google.com/uc?export=download&id={{ file_id }}"
    dest: /file/destination/file.tgz
    mode: u=r,g-r,o=r
For file_id, use 0BxpbZGYVZsEeSFdrUnBNMUp1YzQ.
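If the ID is needed in more than one place, it can also be kept as a variable; a small illustration (the variable location is just an example):

# e.g. in defaults/main.yml of the role, or in group_vars
file_id: 0BxpbZGYVZsEeSFdrUnBNMUp1YzQ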