How to add application packages to an Azure Batch task from the Azure CLI?

I am trying to write a bash command-line script that will create an Azure Batch task with an application package. The package is called "testpackage"; it exists and is activated on the Batch account. However, every time I create this task, I get the following error code: BlobAccessDenied.
This only occurs when I include the --application-package-references option on the command line. I tried to follow the documentation here, which states the following:
--application-package-references
The space-separated list of IDs specifying the application packages to be installed. Space-separated application IDs with optional version in 'id[#version]' format.
I have tried --application-package-references "test", --application-package-references "test[1]", and --application-package-references test[1], all with no luck. Does anyone have an example of doing this properly?
Here is the complete script I am running:
#!/usr/bin/env bash
AZ_BATCH_KEY=myKey
AZ_BATCH_ACCOUNT=myBatchAccount
AZ_BATCH_ENDPOINT=myBatchEndpoint
AZ_BATCH_POOL_ID=myPoolId
AZ_BATCH_JOB_ID=myJobId
AZ_BATCH_TASK_ID=myTaskId
az batch task create \
    --task-id $AZ_BATCH_TASK_ID \
    --job-id $AZ_BATCH_JOB_ID \
    --command-line "/bin/sh -c \"echo HELLO WORLD\"" \
    --account-name $AZ_BATCH_ACCOUNT \
    --account-key $AZ_BATCH_KEY \
    --account-endpoint $AZ_BATCH_ENDPOINT \
    --application-package-references testpackage

Ah, the classic "write up a detailed SO question, then immediately answer it yourself" conundrum.
All I needed was --application-package-references testpackage#1
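For anyone copy-pasting, here is the full working call, using the same placeholder variables as the script above (the #1 suffix pins version 1 of the package, per the id[#version] format from the docs):

az batch task create \
    --task-id $AZ_BATCH_TASK_ID \
    --job-id $AZ_BATCH_JOB_ID \
    --command-line "/bin/sh -c \"echo HELLO WORLD\"" \
    --account-name $AZ_BATCH_ACCOUNT \
    --account-key $AZ_BATCH_KEY \
    --account-endpoint $AZ_BATCH_ENDPOINT \
    --application-package-references testpackage#1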
Have a good day world.

Related

'gcloud functions deploy' deploys code that cannot listen to Firestore events

When I try to use the gcloud CLI to deploy a small Python script that listens to Firestore events, the deployed function never receives the Firestore events. If I deploy the same code through the web inline UI or the web zip upload, it listens to Firestore events just fine. The command line doesn't show any errors.
Deploy script
gcloud beta functions deploy print_name \
    --runtime python37 \
    --service-account <myprojectid>@appspot.gserviceaccount.com \
    --verbosity debug \
    --trigger-event providers/cloud.firestore/eventTypes/document.create \
    --trigger-resource projects/<myprojectid>/databases/default/documents/Test/{account}
main.py
def print_name(event, context):
    value = event["value"]["fields"]["name"]["stringValue"]
    print("New name: " + str(value))
gcloud --version
Google Cloud SDK 243.0.0
beta 2019.02.22
bq 2.0.43
core 2019.04.19
gsutil 4.38
The document is pretty basic (it has a name string field).
Any ideas? I'm curious whether the gcloud CLI has a bug.
The inline web UI and zip uploader work great. I've tried multiple variations of this (e.g. removing 'beta', adding and removing different deploy args).
I'd expect the deployed script to actually listen to Firestore events.
The "default" in trigger-resource needs parentheses around it.
gcloud beta functions deploy print_name \
    --runtime python37 \
    --service-account <myprojectid>@appspot.gserviceaccount.com \
    --verbosity debug \
    --trigger-event providers/cloud.firestore/eventTypes/document.create \
    --trigger-resource "projects/<myprojectid>/databases/(default)/documents/Test/{account}"
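To verify the trigger actually fires after redeploying, one quick smoke test (hedged: the logs command is from the current gcloud SDK, not from the question) is to create a document in the Test collection and then read the function's logs:

# The "New name: ..." output should appear here if the trigger is wired up.
gcloud functions logs read print_name --limit 20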

How to import/load/run a MySQL file using Go?

I'm trying to run/load an SQL file into a MySQL database using this Go statement, but it is not working:
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}", "<", "{file abs path}")
But when I use the following command in the Windows command prompt, it works perfectly:
mysql -u {username} -p{db password} {db name} < {file abs path}
So what is the problem?
As others have answered, you can't use the < redirection operator because exec doesn't use the shell.
But you don't have to redirect input to read an SQL file. You can pass arguments to the MySQL client to use its source command.
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}",
"-e", "source {file abs path}" )
The source command is a builtin of the MySQL client. See https://dev.mysql.com/doc/refman/5.7/en/mysql-commands.html
Go's exec.Command runs the first argument as a program with the rest of the arguments as parameters. The '<' is interpreted as a literal argument.
e.g. exec.Command("cat", "<", "abc") is the following command in bash: cat \< abc.
To do what you want you have got two options.
Run (ba)sh and the command as argument: exec.Command("bash", "-c", "mysql ... < full/path")
Pipe the content of the file in manually; see the sketch below, and https://stackoverflow.com/a/36383984/8751302 for details.
The problem with the bash version is that it is not portable between operating systems; it won't work on Windows.
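A minimal sketch of option 2, assuming placeholder credentials and a placeholder path (swap in real values): open the SQL file and wire it to the client's stdin, which is exactly what the shell's < would have done. Since no shell is involved, this also works on Windows.

package main

import (
    "os"
    "os/exec"
)

func main() {
    // Open the SQL file; feeding it to stdin replaces the shell's "<" redirection.
    f, err := os.Open("/path/to/file.sql") // placeholder path
    if err != nil {
        panic(err)
    }
    defer f.Close()

    cmd := exec.Command("mysql", "-u", "username", "-ppassword", "dbname")
    cmd.Stdin = f          // the mysql client reads the statements from here
    cmd.Stdout = os.Stdout // surface client output while debugging
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}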
Go's os.exec package does not use the shell and does not support redirection:
Unlike the "system" library call from C and other languages, the os/exec package intentionally does not invoke the system shell and does not expand any glob patterns or handle other expansions, pipelines, or redirections typically done by shells.
You can call the shell explicitly to pass arguments to it:
cmd := exec.Command("/bin/sh", "-c", yourBashCommand)
Depending on what you're doing, it may be helpful to write a short bash script and call it from Go.

How to pass an array to a Jenkins parameterized job via the remote access API?

I am trying to call a Jenkins parameterized job using a curl command, following the Jenkins remote access API documentation.
I have the Active Choices plugin installed, and one of the job's parameters is an Active Choices Reactive parameter.
Here is the screenshot of the job:
I am using the following curl command to trigger it with parameter:
curl -X POST http://localhost:8080/job/active-choice-test/buildWithParameters -u abhishek:token --data-urlencode json='{"parameter": [{"name":"state", "value":"Maharashtra"},{"name":"cities", "value":["Mumbai", "Pune"]}]}'
But I am not able to pass the cities parameter, which should be a JSON array; the above call gives an error.
I am printing the state & cities variables like this (each echoed in a shell build step):
The job executes but shows an error for cities:
Started by user abhishek
Building in workspace /var/lib/jenkins/workspace/active-choice-test
[active-choice-test] $ /bin/sh -xe /tmp/hudson499503098295318443.sh
+ echo Maharashtra
Maharashtra
+ echo error
error
Finished: SUCCESS
How can I pass an array parameter to a Jenkins parameterized job while using the remote access API?
You may change the value to a comma-separated string rather than an array:
curl -X POST http://localhost:8080/job/active-choice-test/buildWithParameters -u abhishek:token --data-urlencode json='{"parameter": [{"name":"state", "value":"Maharashtra"},{"name":"cities", "value":"Mumbai,Pune"}]}'
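An equivalent sketch without the json payload (hedged: this assumes the job accepts plain form fields, which buildWithParameters does for most parameter types; multi-select values still arrive as one comma-separated string):

curl -X POST http://localhost:8080/job/active-choice-test/buildWithParameters \
    -u abhishek:token \
    --data-urlencode "state=Maharashtra" \
    --data-urlencode "cities=Mumbai,Pune"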

Export .MWB to working .SQL file using command line

We recently installed a server dedicated to unit tests. It deploys updates automatically via Jenkins whenever commits are made, and sends mails when a regression is noticed. This requires our database to always be up to date.
Since the database schema reference is our MWB, we added some scripts to the deploy step which export the .mwb to a .sql (using Python). This worked fine... but still has some issues.
Our main concern is that the functions attached to the schema are not exported at all, which makes the DB unusable.
We'd like to hack into the Python code to make it export the scripts as well, but didn't find enough information about it.
Here is the only piece of documentation we found. It's not very clear to us, and we didn't find any information about exporting scripts.
All we found is that a db_Script class exists. We don't know where to find its instances in our execution context, nor whether they can be exported easily. Did we miss something?
For reference, here is the script we currently use for the mwb to sql conversion (mwb2sql.sh).
It calls MySQL Workbench from the command line (we use a dummy X server to flush graphical output).
What we need to complete is the python part passed in our command-line call of workbench.
# generate sql from mwb
# usage: sh mwb2sql.sh {mwb file} {output file}
# prepare: set env MYSQL_WORKBENCH
if [ "$MYSQL_WORKBENCH" = "" ]; then
    export MYSQL_WORKBENCH="/usr/bin/mysql-workbench"
fi
# Resolve both arguments to absolute paths.
export INPUT=$(cd $(dirname $1); pwd)/$(basename $1)
export OUTPUT=$(cd $(dirname $2); pwd)/$(basename $2)
"$MYSQL_WORKBENCH" \
    --open "$INPUT" \
    --run-python "
import os
import grt
from grt.modules import DbMySQLFE as fe

# Take the first physical model's catalog and write its CREATE statements.
c = grt.root.wb.doc.physicalModels[0].catalog
fe.generateSQLCreateStatements(c, c.version, {})
fe.createScriptForCatalogObjects(os.getenv('OUTPUT'), c, {})" \
    --quit-when-done
set -e
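As a starting point for that Python part, here is a hedged exploration sketch (nothing below is a confirmed Workbench API beyond the grt.root traversal already used above): dump the members of the model object from the same --run-python hook and look for an attribute holding the db_Script instances, then drill into it.

"$MYSQL_WORKBENCH" \
    --open "$INPUT" \
    --run-python "
import grt
model = grt.root.wb.doc.physicalModels[0]
# Plain Python introspection: list the model's members so a scripts-like
# attribute can be spotted; dir() works on any Python object.
for name in dir(model):
    print(name)" \
    --quit-when-done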

Bug in GCE Developer Console 'equivalent command line'

When attempting to create an instance in a project that includes local SSDs, I am given the following (redacted) command line equivalent:
gcloud compute --project "PROJECTS" instances create "INSTANCE" --zone "us-central1-f" \
    --machine-type "n1-standard-2" --network "default" --maintenance-policy "MIGRATE" \
    --scopes [...] --tags "http-server" --local-ssd-count "2" \
    --image "ubuntu-1404-trusty-v20150316" --boot-disk-type "pd-standard" \
    --boot-disk-device-name "INSTANCEDEVICE"
This fails with:
ERROR: (gcloud) unrecognized arguments: --local-ssd-count 2
Indeed, I find no mention of --local-ssd-count in the current docs: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Changing this to --local-ssd --local-ssd works, as the defaults are then used for each disk.
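For reference, the same invocation with the working flags substituted (a sketch; everything else is unchanged from the console output above, including the elided --scopes):

gcloud compute --project "PROJECTS" instances create "INSTANCE" --zone "us-central1-f" \
    --machine-type "n1-standard-2" --network "default" --maintenance-policy "MIGRATE" \
    --scopes [...] --tags "http-server" --local-ssd --local-ssd \
    --image "ubuntu-1404-trusty-v20150316" --boot-disk-type "pd-standard" \
    --boot-disk-device-name "INSTANCEDEVICE"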
This is using Google Cloud SDK 0.9.54, the most recent after gcloud components update.
If you've found a bug in GCE or have a feature you'd like to propose or request, the best way to report it is the Public Issue Tracker that Google has made available.
Visit the Issue Tracker to report your feedback, bug, or feature request. It does not require any support package at all.
I highly encourage you to do so, as they have staff actively monitoring and working on those reports. Note that the tracker serves a different purpose than Stack Overflow: bug reports and feature requests go to the tracker, while questions belong here on SO, where they also have staff answering. It is likely the best way to get your feedback to their engineers.