I'm creating a role in Ansible and got stuck on a step that requires downloading a publicly shared archive from Google Drive (https://drive.google.com/file/d/0BxpbZGYVZsEeSFdrUnBNMUp1YzQ/view?usp=sharing).
I didn't find any Ansible module that can fetch such a file from Google Drive, and (as far as I know) it's not possible to get a direct download link with a file extension at the end...
Is there any solution for this problem, or do I need to download the file and upload it somewhere else, so I can then fetch it directly with Ansible's get_url module?
I found a solution myself :)
By using the third-party script from here: https://github.com/circulosmeos/gdown.pl/blob/master/gdown.pl
and then running the command module with the proper arguments to download the file.
- name: Copy "gdown" script to /usr/local/bin
  copy:
    src: gdown.pl
    dest: /usr/local/bin/gdown
    mode: '0755'

- name: Download DRAGNN CONLL2017 data archive
  command: "/usr/local/bin/gdown {{ dragnn_data_url }} {{ dragnn_dir }}/conll17.tar.gz"
  args:
    creates: "{{ dragnn_dir }}/conll17.tar.gz"
  become_user: "{{ docker_user }}"
  become: yes
  become_method: sudo
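If the role also needs to unpack the downloaded archive, a follow-up task could look roughly like this (a sketch only; it assumes the tarball should simply be extracted in place under the same dragnn_dir):
# Sketch: extract the archive that the previous task downloaded on the target
- name: Unpack DRAGNN CONLL2017 data archive
  unarchive:
    src: "{{ dragnn_dir }}/conll17.tar.gz"
    dest: "{{ dragnn_dir }}"
    remote_src: yes
  become: yes
  become_user: "{{ docker_user }}"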
You can do it like this:
- name: Download archive from google drive
  get_url:
    url: "https://drive.google.com/uc?export=download&id={{ file_id }}"
    dest: /file/destination/file.tgz
    mode: u=r,g=r,o=r
For file_id, use 0BxpbZGYVZsEeSFdrUnBNMUp1YzQ.
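If you want to keep that id out of the task itself, it can be supplied as a plain variable; a minimal sketch (variable placement is your choice, e.g. role defaults would work just as well):
vars:
  file_id: 0BxpbZGYVZsEeSFdrUnBNMUp1YzQ
  # the dest path used above could be parameterised the same way if needed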
I have two steps in GitHub Actions:
The first uploads a zipped artifact:
- name: Upload artifact
  uses: actions/upload-artifact@master
  with:
    name: artifacts
    path: target/*.jar
The second uses a custom java command to read the uploaded artifact:
- name: Read artifact
  runs: java -jar pipeline-scan.jar -- "artifacts.zip"
I've redacted the java command, but it's supposed to scan my zip file using Veracode. GitHub Actions returns the following error:
java -jar pipeline-scan.jar: error: argument -f/--file: Insufficient
permissions to read file: 'artifacts.zip'
I've tried changing the permissions of the GITHUB_TOKEN, but apparently you can only pass in the $GITHUB_TOKEN secret with a "uses" parameter and not a "runs" parameter. I've also made sure that my default workflow permissions are set to "read and write permissions."
Does anyone know how to resolve this permissions issue?
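One thing that stands out: actions/upload-artifact only stores the files, so if the scan runs in a separate job it has to fetch them first with actions/download-artifact before anything can read them. A rough sketch of that pattern (the action version, download path, and jar name are assumptions, so this may or may not be the fix for the permission error):
# Fetch the previously uploaded artifact into this job's workspace.
# Note: download-artifact extracts the uploaded files (here the jars),
# so the scanner is pointed at an extracted file, not a zip.
- name: Download artifact
  uses: actions/download-artifact@v3
  with:
    name: artifacts
    path: artifacts

- name: Read artifact
  # my-app.jar is a hypothetical name; point this at whatever the download step produced
  run: java -jar pipeline-scan.jar --file artifacts/my-app.jar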
I have a local file and I need to upload it to a remote FTP (not SFTP) server that requires a login.
How could I do that?
Thanks in advance!
Depending on your use case, infrastructure, capabilities of the remote FTP server, etc., there might be several options.
If you'd like to use plain File Transfer Protocol (FTP) over TCP/21:
A custom module like ftp – Transfers files and directories from or to FTP servers
The shell module – Execute shell commands on targets, here calling curl:
- name: Transfer file to FTP server
  shell:
    cmd: "curl --silent --user {{ ansible_user }}:{{ ansible_password }} ftp://ftp.example.com -T {{ fileToTransfer }}"
  register: result
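Since that command line embeds credentials, it may also be worth keeping it out of the logs; the same task with no_log added (a sketch, same variables assumed):
- name: Transfer file to FTP server
  shell:
    cmd: "curl --silent --user {{ ansible_user }}:{{ ansible_password }} ftp://ftp.example.com -T {{ fileToTransfer }}"
  register: result
  no_log: true  # prevent the credentials in the command line from being logged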
If the FTP server software additionally has HTTP server capabilities implemented:
The uri module – Interacts with webservices, with parameter method: PUT
- name: Upload content
  local_action:
    module: uri
    url: "http://ftp.example.com"
    method: PUT
    url_username: "{{ ansible_user }}"
    url_password: "{{ ansible_password }}"
    body: "{{ lookup('file', fileToTransfer) }}"
  register: result
... not sure if this would work; I haven't tested such a setup yet, and there is still some information missing.
Other Q&A
How to upload one file by FTP from command line?
How to upload a file to FTP via curl but from stdin?
Further Documentation
curl --upload-file
RFC 959
Running the dbt docs generate command generates a catalog.json file in the target folder. The process works well locally.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
After generating the catalog.json file, I want to upload it to S3 in the next step. I copy it from the target folder to the root folder and then upload it like this:
- step:
    name: 'Upload to S3'
    image: python:3.7.2
    script:
      - aws s3 cp catalog.json s3://testunzipping/
However, I get an error that:
+ aws s3 cp catalog.json s3://testunzipping/
The user-provided path catalog.json does not exist.
Although the copy command works well locally, it seems not to generate the file properly within the Bitbucket pipeline. Is there any other way that I can save the content of catalog.json in some variable in the first step and then upload it to S3 later?
In Bitbucket Pipelines, each step has its own build environment. To be able to share things between steps, you should use artifacts.
You may want to try the steps below.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
      artifacts:
        - catalog.json
  - step:
      name: 'Upload to S3'
      image: python:3.7.2
      script:
        - aws s3 cp catalog.json s3://testunzipping/
Reference: https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
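One more note: the plain python:3.7.2 image does not ship the AWS CLI, so depending on how your pipeline is set up, the upload step may also need to install it first, roughly like this (credentials are assumed to come from repository variables):
- step:
    name: 'Upload to S3'
    image: python:3.7.2
    script:
      - pip install awscli   # only needed if the CLI is not already available in the image
      - aws s3 cp catalog.json s3://testunzipping/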
In my case I am mounting my Ansible code inside a Docker container under /ansible/playbook/. Under this directory you will find the roles, inventories, etc.
I would like to mount another directory that contains some RPM files.
In Ansible I have this code:
---
- name: copy ZooKeeper rpm file
  copy:
    src: zookeeper-3.4.13-1.x86_64.rpm
    dest: /tmp

- name: install ZooKeeper rpm package
  yum:
    name: /tmp/zookeeper-3.4.13-1.x86_64.rpm
    state: present
The problem is that the ZooKeeper RPM does not exist in any of the default search paths:
Could not find or access 'zookeeper-3.4.13-1.x86_64.rpm'
Searched in:
/ansible/playbook/roles/kafka/files/zookeeper-3.4.13-1.x86_64.rpm
/ansible/playbook/roles/kafka/zookeeper-3.4.13-1.x86_64.rpm
/ansible/playbook/roles/kafka/tasks/files/zookeeper-3.4.13-1.x86_64.rpm
/ansible/playbook/roles/kafka/tasks/zookeeper-3.4.13-1.x86_64.rpm
/ansible/playbook/files/zookeeper-3.4.13-1.x86_64.rpm
/ansible/playbook/zookeeper-3.4.13-1.x86_64.rpm
How can I add extra search paths to this list, for example:
/ansible/rpms/zookeeper-3.4.13-1.x86_64.rpm
I don't want to hardcode absolute paths in the Ansible code (even if that would work). I would like to provide something like ANSIBLE_EXTRA_SEARCH_PATH.
How can I do this?
PS: I cannot create a symlink to my RPM directory inside the already mounted /ansible/playbook, because Docker will see it as a broken link (it cannot be read, since the target directory containing the RPM files is not part of the Docker container's file system).
An option would be to
put the rpm in any_path
link any_path to /ansible/playbook/rpms
use src: rpms/zookeeper-3.4.13-1.x86_64.rpm
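If you prefer to create that link from Ansible itself rather than by hand, a sketch with the file module (any_path is the placeholder from the steps above; the link has to exist on the controller, hence the delegation):
- name: Link the RPM directory into the playbook tree
  file:
    src: /any_path                     # placeholder for wherever the RPMs are mounted
    dest: /ansible/playbook/rpms
    state: link
  delegate_to: localhost
  run_once: true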
You could use a lookup with a list of paths and the first_found query:
- name: install ZooKeeper rpm package
  yum:
    name: "{{ item }}"
    state: present
  loop: "{{ query('first_found', { 'files': ['zookeeper-3.4.13-1.x86_64.rpm'], 'paths': mypaths }) }}"
  vars:
    mypaths: ['/tmp', '/opt/other_location/somedir/', '/rpms']
More information at https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html#selecting-files-and-templates-based-on-variables
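An alternative sketch resolves the path once with the lookup instead of looping (same assumed locations):
- name: install ZooKeeper rpm package
  yum:
    # first_found returns the first existing file from the listed paths
    name: "{{ lookup('first_found', { 'files': ['zookeeper-3.4.13-1.x86_64.rpm'], 'paths': mypaths }) }}"
    state: present
  vars:
    mypaths: ['/tmp', '/opt/other_location/somedir/', '/rpms']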
When I build a maven project from GitHub using Cloud Build (resulting in jar files in a bucket) I get an extra file uploaded to my bucket that specifies what files have been built (artifacts-[build-no].json). The file has a unique name for every build, so the bucket gets filled up with loads of unwanted files. Is there a way to disable the creation of that file?
I think the JSON file is only generated when using the artifacts field, such as:
artifacts:
  objects:
    location: 'gs://$PROJECT_ID/'
    paths: ['hello']
You could manually push to a bucket in a step with the gsutil cloud builder, without the helper syntax. This would avoid the json creation.
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gsutil
# Upload it into a GCS bucket.
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gopath/bin/hello', 'gs://$PROJECT_ID/hello']
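Putting it together, a minimal cloudbuild.yaml without the artifacts block could look roughly like this (the Maven step, jar path, and bucket are assumptions):
steps:
  # Build the project; the jar ends up under target/
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['package']

  # Copy the jar yourself so Cloud Build never writes artifacts-[build-no].json
  - name: 'gcr.io/cloud-builders/gsutil'
    args: ['cp', 'target/*.jar', 'gs://$PROJECT_ID/']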