Upload a local file to a remote FTP (not SFTP) server with Ansible

I have a local file that I need to upload to a remote FTP (not SFTP) server that requires a login.
How can I do that?
Thanks in advance!

Depending on your use case, infrastructure, capabilities of the remote FTP server, etc., there might be several options.
If you want to use plain File Transfer Protocol (FTP) over TCP/21:
A custom module like ftp – Transfers files and directories from or to FTP servers
The shell module – Execute shell commands on targets, combined with curl
- name: Transfer file to FTP server
  shell:
    cmd: "curl --silent --user {{ ansible_user }}:{{ ansible_password }} ftp://ftp.example.com -T {{ fileToTransfer }}"
  register: result
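If the file to transfer lives on the control node rather than on the managed host, the same task could also be delegated to localhost; a minimal sketch, reusing ftp.example.com and the variables from the example above:
- name: Transfer file to FTP server from the control node
  shell:
    cmd: "curl --silent --user {{ ansible_user }}:{{ ansible_password }} ftp://ftp.example.com -T {{ fileToTransfer }}"
  delegate_to: localhost   # run curl on the control node, where the file is located
  register: result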
If the FTP server software additionally implements HTTP server capabilities
The uri module – Interacts with webservices, with parameter method: PUT
- name: Upload content
  local_action:
    module: uri
    url: "http://ftp.example.com"
    method: PUT
    url_username: "{{ ansible_user }}"
    url_password: "{{ ansible_password }}"
    body: "{{ lookup('file', fileToTransfer) }}"
  register: result
... not sure if this would work; I haven't tested such a setup yet, and there is still information missing.
Other Q&A
How to upload one file by FTP from command line?
How to upload a file to FTP via curl but from stdin?
Further Documentation
curl --upload-file
RFC 959

Related

Update Task Definition for ECS Fargate

I have an ECS Fargate cluster that is deployed to through Bitbucket Pipelines, and my Docker image is stored in ECR. Within Bitbucket Pipelines I am using one pipe to push my Docker image to ECR and a second pipe to deploy to Fargate.
I'm facing a blocker when it comes to Fargate deploying the correct image on each deployment. The way the pipeline is set up is shown below. The Docker image gets tagged with the Bitbucket build number for each deployment. Below is the step in which the Docker image is built and pushed to ECR:
name: Push Docker Image to ECR
script:
  - ECR_PASSWORD=`aws ecr get-login-password --region $AWS_DEFAULT_REGION`
  - AWS_REGISTRY=$ACCT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  - docker login --username AWS --password $ECR_PASSWORD $AWS_REGISTRY
  - docker build -t $DOCKER_IMAGE .
  - pipe: atlassian/aws-ecr-push-image:1.6.2
    variables:
      IMAGE_NAME: $DOCKER_IMAGE
      TAGS: $BITBUCKET_BUILD_NUMBER
The next part of the pipeline deploys the image that was pushed to ECR to Fargate. The pipe associated with the deployment to Fargate is below:
name: Deploy to Fargate
script:
  - pipe: atlassian/aws-ecs-deploy:1.6.2
    variables:
      CLUSTER_NAME: $CLUSTER_NAME
      SERVICE_NAME: $SERVICE_NAME
      TASK_DEFINITION: $TASK_DEFINITION
      FORCE_NEW_DEPLOYMENT: 'true'
      DEBUG: 'true'
Within this pipe, the TASK_DEFINITION attribute points to a file in the repo that ECS runs its tasks from. This JSON file has a key/value pair for the image ECS is to use. Below is an example of the key/value pair:
"image": "XXXXXXXXXXXX.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$DOCKER_IMAGE:latest",
The problem with this line is that the image tag changes with each deployment.
What I would like to do is have this entire deployment process automated, but this step is preventing me from doing that. I came across this link, which shows how to change the tag in the task definition in the build environment of the pipeline. The article uses envsubst. I've seen how envsubst works, but I'm not sure how to use it on a JSON file.
Any recommendations on how I can change the tag in the task definition from latest to the Bitbucket build number using envsubst would be appreciated.
Thank you.
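For illustration, a minimal sketch of how envsubst could fit into the deploy step, assuming the repo's task definition is kept as a template (here called task-definition.tmpl.json, an assumed name) with an ${IMAGE_TAG} placeholder instead of the hard-coded latest tag, and assuming envsubst (from gettext) is available in the build image:
# In task-definition.tmpl.json the image entry would read (assumed template):
#   "image": "XXXXXXXXXXXX.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${DOCKER_IMAGE}:${IMAGE_TAG}",
# In the "Deploy to Fargate" step, render the real file before the pipe runs.
# Note: envsubst replaces every ${VAR} it knows; pass an explicit variable list
# as its argument if the JSON contains other dollar signs.
script:
  - export IMAGE_TAG=$BITBUCKET_BUILD_NUMBER
  - envsubst < task-definition.tmpl.json > task-definition.json
  - pipe: atlassian/aws-ecs-deploy:1.6.2
    variables:
      CLUSTER_NAME: $CLUSTER_NAME
      SERVICE_NAME: $SERVICE_NAME
      TASK_DEFINITION: task-definition.json
      FORCE_NEW_DEPLOYMENT: 'true'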

Using aws cli without a home directory

I need to use the aws cli on an OpenShift cluster that is quite restricted: it looks like the home directory is set to /, while the user in the container does not have permission to write to /.
The only directory that is writable by that user is /tmp. Now I need to use the aws cli from within a pod of this OpenShift cluster. I came across the environment variables AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE, so I would place a credentials file and a config file in /tmp.
When running aws configure list-profiles with this setup, only the profile from AWS_SHARED_CREDENTIALS_FILE is listed, not the one from AWS_CONFIG_FILE.
So it looks to me like AWS_CONFIG_FILE is not respected by the aws cli.
Do you have an idea why these files might not be respected by the aws executable? Is there a way to pass the location of these files directly to the cli as a parameter or something similar?
Instead of configuring files for the AWS CLI, I would assume you could set the following two environment variables, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and issue your CLI commands immediately.
bruno@pop-os ~> export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
bruno@pop-os ~> export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bruno@pop-os ~> aws cloudformation list-stacks --region us-east-2
{
    "StackSummaries": []
}
To answer:
So it looks to me like AWS_CONFIG_FILE is not respected by the aws cli.
The AWS CLI does respect it; from the documentation:
You can specify a non-default location for the config file by setting the AWS_CONFIG_FILE environment variable to another local path.
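As a minimal sketch of the /tmp-only setup described in the question (the exact paths under /tmp, the profile name, and the region are placeholders, not prescribed values):
# Only /tmp is writable in the pod, so point both files there.
export AWS_CONFIG_FILE=/tmp/aws/config
export AWS_SHARED_CREDENTIALS_FILE=/tmp/aws/credentials
mkdir -p /tmp/aws

# "aws configure set" writes region/output settings to AWS_CONFIG_FILE and the
# access keys to AWS_SHARED_CREDENTIALS_FILE, so both locations can be verified.
aws configure set region eu-central-1 --profile myprofile
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE --profile myprofile
aws configure list-profiles    # should now list "myprofile"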

How to place files outside app deployment directory in AWS Elastic Beanstalk?

In AWS EB, how do I place my environment.properties file (which contains app runtime config like port, logs dir, DB info, security keys, etc.) under /var/env_config/myapp, so it can be read by the app at runtime?
My further plan is to put this environment.properties file in a secure non-app directory of a local or remote file system, as it contains sensitive information.
global.env = propsReader(path.join(process.env.ENV_PATH, 'env-main.properties'));
On EB, I have added an environment property 'ENV_PATH = /var/env_config/myapp'.
EB logs:
web: > myapp#1.0.0 start /var/app/current
web: > node src/app-main.js
web: 8266 [
web:   '/opt/elasticbeanstalk/node-install/node-v12.18.1-linux-x64/bin/node',
web:   '/var/app/current/src/app-main.js'
web: ]
web: /var/env_config/myapp
web: internal/fs/utils.js:230
web:     throw err;
web:     ^
web: Error: ENOENT: no such file or directory, open '/var/env_config/myapp/env-main.properties'
I just want to deploy my application in the same fashion in AWS EB, Docker, a VM, or a local machine, with just an environment property saying where the required runtime input files are.
How do I access the Elastic Beanstalk file system to configure my .properties file?
Not sure what you mean by "accessing the file system", but usually you would create an .ebextensions folder in your project directory. The extensions are commonly used for running commands or scripts when you are deploying your app. There are special sections for that:
commands: You can use the commands key to execute commands on the EC2 instance. The commands run before the application and web server are set up and the application version file is extracted.
container_commands: You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed.
Therefore, you could use the above sections to modify your .properties file during deployment of your application into EB.
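For illustration, a hypothetical .ebextensions config could create the directory and copy the properties file into place during deployment. The target path comes from the question; the config file name and the idea of shipping env-main.properties inside the source bundle under .ebextensions are assumptions:
# .ebextensions/env-config.config (hypothetical file name)
container_commands:
  01_create_config_dir:
    command: mkdir -p /var/env_config/myapp
  02_copy_properties:
    # assumes env-main.properties is included in the application source bundle
    command: cp .ebextensions/env-main.properties /var/env_config/myapp/env-main.properties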

Ansible - download a file from Google Drive

I'm creating a role in Ansible and got stuck on a step that requires downloading a publicly shared archive from Google Drive (https://drive.google.com/file/d/0BxpbZGYVZsEeSFdrUnBNMUp1YzQ/view?usp=sharing).
I didn't find any Ansible module that would be able to get such a file from Google Drive, and (as far as I know) it's not possible to get a direct link with the file extension at the end...
Is there any solution to this problem, or do I need to download the file and upload it somewhere else, so I could then get it directly through the Ansible get_url module?
I found a solution myself :)
I used the third-party script from https://github.com/circulosmeos/gdown.pl/blob/master/gdown.pl
and then ran the command module with the proper arguments to download the file.
- name: Copy "gdown" script to /usr/local/bin
  copy:
    src: gdown.pl
    dest: /usr/local/bin/gdown
    mode: 0755

- name: Download DRAGNN CONLL2017 data archive
  command: "/usr/local/bin/gdown {{ dragnn_data_url }} {{ dragnn_dir }}/conll17.tar.gz"
  args:
    creates: "{{ dragnn_dir }}/conll17.tar.gz"
  become_user: "{{ docker_user }}"
  become: yes
  become_method: sudo
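If the archive then needs to be unpacked on the target, a follow-up task with the unarchive module could look roughly like this (a sketch; dragnn_dir is reused from above, and the creates path assumes the archive's top-level directory name):
- name: Unpack DRAGNN CONLL2017 data archive
  unarchive:
    src: "{{ dragnn_dir }}/conll17.tar.gz"
    dest: "{{ dragnn_dir }}"
    remote_src: yes
    creates: "{{ dragnn_dir }}/conll17"   # assumed directory inside the archive
  become: yes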
You can do it like this:
- name: Download archive from google drive
  get_url:
    url: "https://drive.google.com/uc?export=download&id={{ file_id }}"
    dest: /file/destination/file.tgz
    mode: u=r,g=r,o=r
For file_id use 0BxpbZGYVZsEeSFdrUnBNMUp1YzQ

Validate OpenShift objects defined in yaml before actually applying or executing it

I have an OpenShift template in a template.yaml file which includes the following objects: deployment config, pod, service, and route. I am using the following command to process and apply the template:
oc process -f template.yml | oc apply -f -
I want to perform the following validations before I actually apply/execute the YAML:
YAML syntax validation - whether there are any issues with the YAML syntax.
OpenShift schema validation - whether each object definition abides by the OpenShift object schema.
It seems that the oc process command already performs the following checks:
Basic YAML syntax validation
Template object schema validation
How do I perform schema validation of the other objects (e.g. deployment config, service, pod, etc.) that are defined in template.yaml?
This is now possible with the OpenShift client (and on Kubernetes in general), e.g.
$ oc login
Username: john.doe
Password:
Login successful.
$ oc apply -f openshift/template-app.yaml --dry-run
template "foobar-app" created (dry run)
It's also possible to process the template locally, so you can avoid sending it to the server first, e.g.
$ oc process -f openshift/template-app.yaml --local -p APP_NAME=foo | oc apply --dry-run --validate -f -
deploymentconfig "foo" created (dry run)
service "foo" created (dry run)
Also note the --validate option I'm using for schema validation. Unfortunately, you still have to log in for the apply command to work (there's no --local option for apply).
Oddly, this feature is not described in the CLI documentation; however, it is mentioned on the help screen:
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
JSON and YAML formats are accepted.
Usage:
  oc apply -f FILENAME [options]
...
Options:
...
  --dry-run=false: If true, only print the object that would be sent, without sending it.
...
  --validate=false: If true, use a schema to validate the input before sending it
Use "oc <command> --help" for more information about a given command.
Use "oc options" for a list of global command-line options (applies to all commands).
I'm having the same issue, with cryptic errors coming back from the oc process command.
However, if you go into the OpenShift console, use the "Add to Project" link at the top of the console, choose the "Import YAML / JSON" option, and import your YAML/JSON that way, you get slightly more useful errors.