Azure CLI: running a .sh script on Windows - azure-cli

Is it possible to run .sh files from the Azure CLI in the Windows command line?
The following is the Cosmos DB collection-creation script that I'm trying to run with the local Azure CLI:
#!/bin/bash
# Set variables for the new account, database, and collection
resourceGroupName='testgropu'
location='testlocation'
accountName='testaccount'
databaseName='testDB'
prefix='prefix.'
tenantName='testTenant'
collectionTest='.test'
originalThroughput=400
newThroughput=500
az cosmosdb collection create \
--resource-group "$resourceGroupName" \
--collection-name "$prefix$tenantName$collectionTest" \
--name "$accountName" \
--db-name "$databaseName" \
--partition-key-path "/'\$v'/testId/'\$v'"
Is it possible to run these commands as a script from the Azure CLI, like . test.sh on Linux?

Actually, you cannot run the bash script with . test.sh as you would on Linux. But you can run the bash script in the command prompt if you install WSL on Windows, like this:
bash test.sh
Additionally, if you do not install WSL on Windows, then bash is not recognized as an internal or external command. You also cannot use the bash command in PowerShell.
Note: take care when you create the bash script on Windows: line endings differ between Windows (CRLF) and Linux (LF). Use an editor that supports Unix-style line endings, for example Notepad++.
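If a script was already saved with Windows line endings, a quick fix is to strip the carriage returns before running it under WSL. A minimal sketch (the test.sh here is created with CRLF endings just to make the example self-contained):

```shell
# Simulate a script saved by a Windows editor (CRLF line endings)
printf 'echo hello\r\n' > test.sh
# Strip the carriage returns so bash does not choke on lines ending in \r
tr -d '\r' < test.sh > test_unix.sh
bash test_unix.sh   # prints: hello
```

Tools like dos2unix (available in most WSL distributions) do the same conversion in place.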

Related

OpenSSH tcl package

I am trying to replace a tcl/Expect script that calls ssh from Windows. We are upgrading to a 64-bit OS and Expect does not work with 64 bit Windows. I am trying to run the code found on this page but I cannot get it to recognize the ssh command in the third line.
set ssh [open "|ssh -tt -q -o {StrictHostKeyChecking no} $user@$host /bin/sh" r+]
OpenSSH is installed and I can run ssh commands from the Windows command line, but I cannot do it from Tcl. I am assuming I need to install an OpenSSH package, but I can't find any information on how to do that or where to get it.

Connect to MySQL Database using gcloud console by shell script

I want to write a shell script that enters a gcloud-side environment where multiple Docker images are running. I will enter bash mode in a container and then run the gcloud MySQL connect command, which is nothing but Rubik. How can I do this from a shell script? The script I have written is plain and simple:
#!/bin/bash
echo "Executing cluster"
docker exec -ti cluster-name bash
echo "Starting a connection to gcloud mysql"
gcloud sql connect ql03-ee102-mysql --user=root --quiet
The script runs up to "docker exec -ti cluster-name bash" and enters bash mode inside Rubik, but it stops there and never runs the next statements. If it did, I could create other DB scripts. How do I achieve this? I am stuck. Any help would be highly appreciated.
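One likely culprit, as a sketch rather than a verified fix: docker exec -ti ... bash opens an interactive shell that blocks the script, so the following lines only run after that shell exits. Passing the command to docker exec via bash -c runs it inside the container and then returns control to the script. The container name and gcloud invocation below are copied from the question; the rewritten script is only written out and syntax-checked here, since Docker is not assumed to be available:

```shell
# Write a non-blocking version of the script from the question
cat > run_mysql.sh <<'EOF'
#!/bin/bash
echo "Executing cluster"
# -i without -t: non-interactive, so control returns when the command ends
docker exec -i cluster-name bash -c \
  'gcloud sql connect ql03-ee102-mysql --user=root --quiet'
echo "Connection command finished"
EOF
bash -n run_mysql.sh   # syntax check only; running it requires Docker
```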

Unable to run startup script when creating instance on Google Cloud Platform

I have a simple startup script which looks like so:
#!/usr/bin/env bash
sudo apt update
sudo apt install -y ruby-full ruby-bundler build-essential
And I create the VM instance on GCP like so:
$ gcloud compute instances create test-app --boot-disk-size=10GB --image-family ubuntu-1604-lts --image-project=ubuntu-os-cloud --machine-type=g1-small --zone europe-west1-b --tags test-server --restart-on-failure --metadata-from-file startup-script=startup.sh
My startup.sh is executable. I set its rights like so:
$ chmod +x startup.sh
However, when I enter the shell of my newly created instance and check Bundler:
test-app:~$ bundle -v
I get these messages:
The program 'bundle' is currently not installed...
So, what is wrong with that and how can I fix it? P.S. If I run all the commands from inside the instance shell, everything works, so there is some problem with using a startup script on GCP.
I tested your use case, but the bundler package was installed without making any changes.
Output:
bundle -v
Bundler version 1.11.2
You can check the VM serial console log output to verify that the startup script ran. Then check on the VM instance whether the package is installed, using the commands below:
sudo apt list --installed | grep -i bundle
sudo egrep bundle /var/log/dpkg.log
In addition, check with gem list bundler
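The serial console check mentioned above can be done from the local machine with gcloud; the instance name and zone below match the question. A sketch (only written out and syntax-checked here, since running it needs gcloud credentials):

```shell
cat > check_startup.sh <<'EOF'
#!/bin/bash
# Print the serial console log and filter for startup-script messages
gcloud compute instances get-serial-port-output test-app \
  --zone europe-west1-b | grep -i 'startup-script'
EOF
bash -n check_startup.sh   # syntax check only
```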

gcloud ssh and commands with parameters

I'm having issues trying to execute a command over ssh using gcloud. This works perfectly when I execute from my Mac:
gcloud compute ssh instanceName --command="cd /folder; ls"
However, when I try to run that from Ubuntu inside one of the VMs, I get the following error:
ERROR: (gcloud.compute.ssh) unrecognized arguments: /folder; ls
It sounds like it is splitting the command by spaces. I tried different options, like using single quotes and variables, but nothing worked for me.
What is the correct way to do it?
I found the issue. If you install from the Debian packages following these instructions:
https://cloud.google.com/sdk/#debubu
it will install an old version of gcloud. After installing using these instructions:
https://cloud.google.com/sdk/#nix
I got the latest version (0.9.83) and was able to execute the command without issues.
For me it was fixed by changing single quotes to double quotes.
I changed
gcloud compute ssh --zone us-east1-b instance-1 --command 'echo hello'
to
gcloud compute ssh --zone us-east1-b instance-1 --command "echo hello"
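The underlying shell behavior is easy to demonstrate without gcloud at all: an unquoted space splits the value of --command into separate arguments, which is exactly what the "unrecognized arguments" error reports. A minimal illustration (count_args is a throwaway helper defined here, not a gcloud feature):

```shell
# Print how many arguments the function receives
count_args() { echo "$#"; }
count_args --command="echo hello"   # prints 1: the quotes keep it one argument
count_args --command=echo hello     # prints 2: the space splits the argument
```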

How to copy the environment variables in cluster system using qsub?

I use Sun's SGE to submit my jobs to a cluster system. The problem is how to let the compute machine find the environment variables of the host machine, or how to configure the qsub script so that the compute machine loads the host machine's environment variables.
The following is an example script, but it fails with errors such as libraries not found:
#!/bin/bash
#
#$ -V
#$ -cwd
#$ -j y
#$ -o /home/user/jobs_log/$JOB_ID.out
#$ -e /home/user/jobs_log/$JOB_ID.err
#$ -S /bin/bash
#
echo "Starting job: $SGE_TASK_ID"
# Modify this to use the path to matlab for your system
/home/user/Matlab/bin/matlab -nojvm -nodisplay -r matlab_job
echo "Done with job: $SGE_TASK_ID"
The technique you are using (adding -V) should work. One possibility, since you are specifying the shell with -S, is that grid engine is configured to launch /bin/bash as a login shell, and your profile scripts are stomping all over the environment you are trying to pass to the job.
Try using qstat -xml -j <job_id> on the job while it is queued/running to see what environment variables grid engine is trying to pass to it.
Try adding an env command to the script to see what variables are set.
Try adding shopt -q login_shell;echo $? in the script to tell you if it is being run as a login shell.
To list out shells that are configured as login shells in grid engine try:
SGE_SINGLE_LINE=true qconf -sconf|grep ^login_shells
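The login-shell check suggested above can be tried directly, outside of grid engine: a shell started with bash -c is not a login shell, while -l forces one, so the two invocations below print different results:

```shell
# Prints 1: bash -c starts a non-login shell
bash -c 'shopt -q login_shell; echo $?'
# Prints 0: -l makes bash behave as a login shell
bash -lc 'shopt -q login_shell; echo $?'
```

Putting the same one-liner inside the job script tells you which way grid engine started it.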
I think this issue is because you didn't configure bash in the login_shells of SGE.
Check your login_shells with qconf -sconf and see if bash is in there.
login_shells
UNIX command interpreters like the Bourne shell (see sh(1)) or the C shell (see csh(1)) can be used by Grid Engine to start job scripts. The command interpreters can either be started as login shells (i.e. all system and user default resource files like .login or .profile will be executed when the command interpreter is started, and the environment for the job will be set up as if the user had just logged in) or just for command execution (i.e. only shell-specific resource files like .cshrc will be executed, and a minimal default environment is set up by Grid Engine - see qsub(1)). The parameter login_shells contains a comma-separated list of the executable names of the command interpreters to be started as login shells. Shells in this list are only started as login shells if the parameter shell_start_mode (see above) is set to posix_compliant.
Changes to login_shells take immediate effect. The default for login_shells is sh,csh,tcsh,ksh.
This value is a global configuration parameter only. It cannot be overwritten by the execution host local configuration.
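If bash is indeed missing, it has to be added to login_shells in the global configuration. A sketch of the check-and-fix procedure (only written out and syntax-checked here, since it requires qconf and SGE admin rights):

```shell
cat > fix_login_shells.sh <<'EOF'
#!/bin/bash
# Show the current login_shells entry on a single line
SGE_SINGLE_LINE=true qconf -sconf | grep '^login_shells'
# To add bash, open the global configuration in $EDITOR with:
#   qconf -mconf
# and change:  login_shells  sh,csh,tcsh,ksh
#         to:  login_shells  sh,bash,csh,tcsh,ksh
EOF
bash -n fix_login_shells.sh   # syntax check only
```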