We've created a bash script to roll out our Azure infrastructure based on the Azure CLI & ARM templates.
We also use Key Vault to store our secrets, and we need to reference it when deploying resources.
Example (this works with static values in the parameters JSON):
templateUri="armdeploymysql.json"
az group deployment create \
--name $Environment \
--resource-group $RSGName \
--template-file $templateUri \
--parameters @armdeploymysql-parameters.json
In the armdeploymysql-parameters.json you find this:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "administratorLogin": {
      "value": "termysqladmin"
    },
    "administratorLoginPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/xxx-xxx-xxx-xxx--xx/resourceGroups/resourcegroupname/providers/Microsoft.KeyVault/vaults/keyvaultname"
        },
        "secretName": "WORDPRESSDBPASSWORD"
      }
    },
As you can see, we are using static values. But we need to deploy this template for multiple environments (Test, Acc & Prod), so we would like to use variables instead of static values.
It works for most of the ARM parameters, and we configured it like this:
templateUri="armdeploymysql.json"
az group deployment create \
--name $Environment \
--resource-group $RSGName \
--template-file $templateUri \
--parameters "version=$version" \
"location=$location" \
"administratorLogin=$SQLAdmin" \
"administratorLoginPassword=$SQLPass"
So the questions are:
Can we make a parameter reference like in the last example point to a Key Vault?
How can we pass variables into the parameters JSON?
Why don't you use az to get the secret, and then pass it to your template?
WpPwd=$(az keyvault secret show --vault-name "keyvaultname" --name "WORDPRESSDBPASSWORD")
templateUri="armdeploymysql.json"
az group deployment create \
--name $Environment \
--resource-group $RSGName \
--template-file $templateUri \
--parameters "version=$version" \
"location=$location" \
"administratorLogin=$SQLAdmin" \
"administratorLoginPassword=$SQLPass" \
"wordpresspassword=$WpPwd"
Final fix for this specific case (credits to @Murray Foxcroft), snippet from the code:
keyVaultName="keyvaultname-$Environment"
keyVaultsecret="WORDPRESSDBPASSWORD"
SQLPass=$(az keyvault secret show --vault-name $keyVaultName --name $keyVaultsecret --query value -o tsv)
az group deployment create \
--name $Environment \
--resource-group $RSGName \
--template-file $templateUri \
--parameters "version=$version" \
"location=$location" \
"administratorLogin=$SQLAdmin" \
"administratorLoginPassword=$SQLPass"
The -o tsv was important: it avoids the extra characters (the quotes from the default JSON output) that would otherwise end up in the variable.
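To illustrate the difference (the secret value shown here is just a made-up example):
# Default JSON output wraps the value in quotes, which then land in the variable:
az keyvault secret show --vault-name $keyVaultName --name $keyVaultsecret --query value
# "MyS3cretValue"
# With -o tsv you get the bare string, which is what you want in a bash variable:
az keyvault secret show --vault-name $keyVaultName --name $keyVaultsecret --query value -o tsv
# MyS3cretValue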
Thanks for the help!
Related
I am trying to upload to Elastic Beanstalk with Bitbucket, and I am using the following YAML file:
image: atlassian/default-image:2
pipelines:
  branches:
    development:
      - step:
          name: "Install Server"
          image: node:10.19.0
          caches:
            - node
          script:
            - npm install
      - step:
          name: "Install and Build Client"
          image: node:14.17.3
          caches:
            - node
          script:
            - cd ./client && npm install
            - npm run build
      - step:
          name: "Build zip"
          script:
            - cd ./client
            - shopt -s extglob
            - rm -rf !(build)
            - ls
            - cd ..
            - apt-get update && apt-get install -y zip
            - zip -r application.zip . -x "node_modules/**"
      - step:
          name: "Deployment to Development"
          deployment: staging
          script:
            - ls
            - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_REGION
                APPLICATION_NAME: $APPLICATION_NAME
                ENVIRONMENT_NAME: $ENVIRONMENT_NAME
                ZIP_FILE: "application.zip"
All goes well until I reach the AWS deployment and I get this error:
+ docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
--env=BITBUCKET_STEP_TRIGGERER_UUID="$BITBUCKET_STEP_TRIGGERER_UUID" \
--env=BITBUCKET_REPO_FULL_NAME="$BITBUCKET_REPO_FULL_NAME" \
--env=BITBUCKET_GIT_HTTP_ORIGIN="$BITBUCKET_GIT_HTTP_ORIGIN" \
--env=BITBUCKET_PROJECT_UUID="$BITBUCKET_PROJECT_UUID" \
--env=BITBUCKET_REPO_IS_PRIVATE="$BITBUCKET_REPO_IS_PRIVATE" \
--env=BITBUCKET_WORKSPACE="$BITBUCKET_WORKSPACE" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID="$BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID" \
--env=BITBUCKET_SSH_KEY_FILE="$BITBUCKET_SSH_KEY_FILE" \
--env=BITBUCKET_REPO_OWNER_UUID="$BITBUCKET_REPO_OWNER_UUID" \
--env=BITBUCKET_BRANCH="$BITBUCKET_BRANCH" \
--env=BITBUCKET_REPO_UUID="$BITBUCKET_REPO_UUID" \
--env=BITBUCKET_PROJECT_KEY="$BITBUCKET_PROJECT_KEY" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT="$BITBUCKET_DEPLOYMENT_ENVIRONMENT" \
--env=BITBUCKET_REPO_SLUG="$BITBUCKET_REPO_SLUG" \
--env=CI="$CI" \
--env=BITBUCKET_REPO_OWNER="$BITBUCKET_REPO_OWNER" \
--env=BITBUCKET_STEP_RUN_NUMBER="$BITBUCKET_STEP_RUN_NUMBER" \
--env=BITBUCKET_BUILD_NUMBER="$BITBUCKET_BUILD_NUMBER" \
--env=BITBUCKET_GIT_SSH_ORIGIN="$BITBUCKET_GIT_SSH_ORIGIN" \
--env=BITBUCKET_PIPELINE_UUID="$BITBUCKET_PIPELINE_UUID" \
--env=BITBUCKET_COMMIT="$BITBUCKET_COMMIT" \
--env=BITBUCKET_CLONE_DIR="$BITBUCKET_CLONE_DIR" \
--env=PIPELINES_JWT_TOKEN="$PIPELINES_JWT_TOKEN" \
--env=BITBUCKET_STEP_UUID="$BITBUCKET_STEP_UUID" \
--env=BITBUCKET_DOCKER_HOST_INTERNAL="$BITBUCKET_DOCKER_HOST_INTERNAL" \
--env=DOCKER_HOST="tcp://host.docker.internal:2375" \
--env=BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes" \
--env=BITBUCKET_PIPE_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy" \
--env=APPLICATION_NAME="$APPLICATION_NAME" \
--env=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
--env=AWS_DEFAULT_REGION="$AWS_REGION" \
--env=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
--env=ENVIRONMENT_NAME="$ENVIRONMENT_NAME" \
--env=ZIP_FILE="application.zip" \
--add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
bitbucketpipelines/aws-elasticbeanstalk-deploy:1.0.2
unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory
I'm unsure how to approach this, as I've followed the documentation Bitbucket lays out exactly, and it doesn't look like there's any place to add a .pem file.
Following https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-custom-image-cli fails here with the error
...
$ az vmss create --resource-group myResourceGroup --name myScaleSet --image /subscriptions/.../myGallery/images/myImageDefinition
Deployment failed. Correlation ID: 6c5f031b-aa0e-42a8-a1d9-faba9b11b208. {
"error": {
"code": "InvalidParameter",
"message": "Parameter 'osProfile' is not allowed.",
"target": "osProfile"
}
}
Any suggestions? You can reproduce this easily using the script https://github.com/dankegel/azure-scripts/blob/main/ss-demo.sh
I can reproduce the error with your script. The problem is that there is a missing "\" after the parameter --image $IDID in your script: the command ends there, --specialized is never passed, and the CLI then tries to add an osProfile, which is not allowed for a specialized image.
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image $IDID
--specialized
It should be
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image $IDID \
--specialized
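To double-check the result once the corrected command has run, something like this should show which image the scale set was created from (just a sketch, reusing the resource names from the tutorial):
az vmss show \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --query "virtualMachineProfile.storageProfile.imageReference.id" \
  -o tsv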
I am trying to cross-compile qt-everywhere-opensource-5.4.0 for an i.MX6 board.
The following is my configuration file (config.imx6):
./configure --prefix=/tools/rootfs/usr/local/qt-5.4.0 -examplesdir /tools/rootfs/usr/local/qt-5.4.0/examples -verbose -opensource -confirm-license -make libs -make examples -device imx6 \
-device-option CROSS_COMPILE=\
/home/acsia/Desktop/imx6-Qt5/arm-tool-chain/freescale/usr/local/gcc-4.6.2-glibc-2.13-linaro-multilib-2011.12/fsl-linaro-toolchain/bin/arm-fsl-linux-gnueabi- \
-no-pch -no-opengl -no-icu -no-xcb -no-c++11 \
-opengl es2 \
-eglfs \
-compile-examples \
-glib -gstreamer -pkg-config -no-directfb
When I run ./config.imx6 I get the following error:
-gstreamer: invalid command-line switch
But the same configuration file runs fine with qt-everywhere-opensource-5.1.1.
The platform I am using is Ubuntu 14.04.
How do I resolve this?
First I've created an embedded Virtual File System, as described here.
It generates this AS code:
package C_Run {}
package com.adobe.flascc.vfs {
  import com.adobe.flascc.vfs.*;
  import com.adobe.flascc.BinaryData

  public class myvfs extends InMemoryBackingStore {
    public function myvfs() {
      addDirectory("/data")
      addFile("/data/localization.en.afgpack", new C_Run.ALC_FS_6D79766673202F646174612F6C6F63616C697A6174696F6E2E656E2E6166677061636B)
      addFile("/data/dataAudio.afgpack", new C_Run.ALC_FS_6D79766673202F646174612F64617461417564696F2E6166677061636B)
      addFile("/data/data.afgpack", new C_Run.ALC_FS_6D79766673202F646174612F646174612E6166677061636B)
    }
  }
}
It is compiled into myvfs.abc.
Then I'm trying to create a custom console with this VFS.
I've imported myvfs in Console.as:
import com.adobe.flascc.vfs.myvfs;
And created a VFS object:
var my_vfs_embedded:InMemoryBackingStore = new myvfs();
So the problem is that compiling Console.abc sometimes fails with the error "Call to a possibly undefined method myvfs" and sometimes builds successfully with the same code. How can this be?
Console.abc is built by this command:
cd ./../../Engine/library/baselib/sources/flash && \
java -jar $(FLASCC_FOR_EXT)/usr/lib/asc2.jar -merge -md -AS3 -strict -optimize \
-import $(FLASCC_FOR_EXT)/usr/lib/builtin.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/playerglobal.abc \
-import $(GLS3D_ABS)/install/usr/lib/libGL.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/ISpecialFile.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/IBackingStore.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/IVFS.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/InMemoryBackingStore.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/AlcVFSZip.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/CModule.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/C_Run.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/BinaryData.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/PlayerKernel.abc \
-import $(BUILD_FULL_PATH)/myvfs.abc \
Console.as -outdir $(BUILD_FULL_PATH) -out Console
myvfs.abc is located at BUILD_FULL_PATH, which hints that it might be built at the same time as Console.as. If the build order is not fully predictable, the myvfs.abc binary might be in an undetermined state when Console.as is compiled. This can happen if, for instance, you build myvfs.as and Console.as as independent targets and are using make's multithreaded option (-j).
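If that is what is happening, one hedged workaround is to force the ordering explicitly, e.g. build the VFS bytecode on its own before starting the parallel build (the target names myvfs.abc and Console.abc, and the -j8 level, are assumptions based on your snippet):
make myvfs.abc        # finish the VFS bytecode first
make -j8 Console.abc  # the parallel build can then safely -import it
Declaring Console.abc as depending on myvfs.abc in the Makefile itself would achieve the same thing more permanently.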
It seems my VFS was too big for the compiler. When I used less data, everything was OK. So I suppose it was a bug in the compiler.
There is a move operation in the v1 API.
But is there an equivalent in the v2 REST API? There is a copy function in v2, and I tried changing it to a move, but no luck; I got an "operation not permitted" error or something like that:
curl https://api.box.com/2.0/files/FILE_ID/move \
-H "Authorization: BoxAuth api_key=API_KEY&auth_token=AUTH_TOKEN" \
-d '{"parent": {"id" : FOLDER_ID}}' \
-X MOVE
You can do this by updating the parent of the item via a PUT request, i.e.:
curl https://api.box.com/2.0/files/FILE_ID \
-H "Authorization: BoxAuth api_key=API_KEY&auth_token=AUTH_TOKEN" \
-d '{"parent": {"id": "THE_NEW_PARENT_ID"}}' \
-X PUT
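For what it's worth, the same "update the parent" approach should also work for moving folders in the v2 API (a sketch mirroring the request above, not verified here):
curl https://api.box.com/2.0/folders/FOLDER_ID \
 -H "Authorization: BoxAuth api_key=API_KEY&auth_token=AUTH_TOKEN" \
 -d '{"parent": {"id": "THE_NEW_PARENT_ID"}}' \
 -X PUT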