OpenSSH tcl package - tcl

I am trying to replace a Tcl/Expect script that calls ssh from Windows. We are upgrading to a 64-bit OS, and Expect does not work with 64-bit Windows. I am trying to run the code found on this page, but I cannot get it to recognize the ssh command in the third line.
set ssh [open "|ssh -tt -q -o {StrictHostKeyChecking no} $user@$host /bin/sh" r+]
OpenSSH is installed and I can run ssh commands from the Windows command line, but I cannot do it from Tcl. I am assuming I need to install an OpenSSH package, but I can't find any information on where to get it or how to install it.
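A first diagnostic, before looking for a package, is to check which binary the calling process actually resolves, since a process launched outside the interactive shell may see a different PATH. This is a minimal sketch; `ls` stands in for `ssh` here so the snippet runs anywhere, and on Windows `cmd.exe` the equivalent lookup is `where ssh`:

```shell
# Print the absolute path the shell resolves for a command.
# If this prints nothing for ssh, the directory containing ssh.exe
# is not on PATH for this process, and you can instead hardcode the
# full path in the Tcl open call.
command -v ls
```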

Related

How to execute a command inside a Singularity container that does not interact with or source options from the host OS?

I have a binary installed on a Docker container that I have been trying to run via Singularity:
singularity run docker://repo/container_image ./repository/bin --flag
The problem is that this command sources my .bashrc, which causes some problems with the binary.
So I tried running it with --no-home and flagged the repositories to be mounted with -B:
singularity run --no-home -B /hostrepo01:/data,/hostrepo02:/results docker://repo/container_image ./repository/bin --flag
This still imports some paths from my host OS; for instance, if I open a Singularity shell with the options below and run cd, the shell tries to access the path of my home directory on the host OS.
singularity run --no-home -B /hostrepo01:/data,/hostrepo02:/results docker://repo/container_image
How can I execute a command inside a Singularity container that does not interact with or source options from the host OS, other than what I specify with the -B flag?
You can use the --contain flag
-c, --contain use minimal /dev and empty other
directories (e.g. /tmp and $HOME) instead
of sharing filesystems from your host
singularity run --contain -B /hostrepo01:/data,/hostrepo02:/results docker://repo/container_image

Azure CLI running .sh script on windows

Is it possible to run .sh files from the Azure CLI in the Windows command line?
The following is the Cosmos DB collection creation script that I'm trying to run with the local Azure CLI:
#!/bin/bash
# Set variables for the new account, database, and collection
resourceGroupName='testgropu'
location='testlocation'
accountName='testaccount'
databaseName='testDB'
prefix='prefix.'
tenantName='testTenant'
collectionTest='.test'
originalThroughput=400
newThroughput=500
az cosmosdb collection create \
--resource-group $resourceGroupName \
--collection-name "$prefix$tenantName$collectionTest" \
--name $accountName \
--db-name $databaseName \
--partition-key-path "/'\$v'/testId/'\$v'"
Is it possible to run these commands as a script from the Azure CLI, like . test.sh in Linux?
Actually, you cannot run the bash script with . test.sh as in Linux. But you can run it from the command prompt if you install WSL on your Windows machine, like this:
bash test.sh
Additionally, if you do not install WSL, then bash is not recognized as an internal or external command, and you cannot use the bash command in PowerShell either.
Note: take care when you create the bash script on Windows, because line endings and character encodings differ between Windows and Linux. Use an editor that handles both, for example Notepad++.
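The line-ending caveat can be checked and fixed from the shell itself. A minimal sketch, assuming a script saved with Windows-style CRLF endings (the filenames here are illustrative):

```shell
# Simulate a script saved on Windows with CRLF line endings
printf '#!/bin/bash\r\necho hello\r\n' > test_crlf.sh

# Strip the carriage returns (what dos2unix does, if installed)
tr -d '\r' < test_crlf.sh > test.sh

bash test.sh   # now runs cleanly and prints: hello
rm -f test_crlf.sh test.sh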

gcloud ssh and commands with parameters

I'm having issues trying to execute a command over ssh using gcloud. This works perfectly when I execute it from my Mac:
gcloud compute ssh instanceName --command="cd /folder; ls"
However, when I try to run that from Ubuntu inside one of the VMs, I get the following error:
ERROR: (gcloud.compute.ssh) unrecognized arguments: /folder; ls
It sounds like the command is being split on spaces. I tried different options, like using single quotes, variables, etc., but nothing worked for me.
What is the correct way to do it?
I found the issue. If you install from the Debian packages following these instructions:
https://cloud.google.com/sdk/#debubu
it will install an old version of gcloud. After installing using these instructions:
https://cloud.google.com/sdk/#nix
I got the latest version (0.9.83) and was able to execute the command without issues.
For me it's fixed by changing single-quotes to double-quotes.
I changed
gcloud compute ssh --zone us-east1-b instance-1 --command 'echo hello'
to
gcloud compute ssh --zone us-east1-b instance-1 --command "echo hello"
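The single- versus double-quote distinction matters because the local shell expands variables and other metacharacters inside double quotes but passes single-quoted text literally, which can change what gcloud ultimately receives. A quick local illustration:

```shell
name="world"
echo "hello $name"   # double quotes: the shell expands $name -> hello world
echo 'hello $name'   # single quotes: passed literally     -> hello $name
```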

how to execute init scripts from the command line using ssh

ssh -t user@server1 "ls -al /root/test/"
The above command works fine and displays all the contents of the test directory, but this one fails:
ssh -t user@server1 "/etc/init.d/mysql start"
It does not start the MySQL server, yet when I log in to the server I can use the same command to start it.
Can anyone explain this behaviour? What am I doing wrong? A bit puzzled :(
Do something like this:
ssh user@hostname "/etc/init.d/mysql start < /dev/null > /tmp/log 2>&1 &"
ssh needs to use stdin and stdout to interact. This will allow it to do that and redirect the output to somewhere useful.
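The redirection pattern can be tried locally with any long-running command; this sketch uses a short `sleep` in place of the init script, and the log path is illustrative:

```shell
# Detach a command from stdin, capture all of its output, and
# background it -- mirroring the remote half of the ssh invocation above
(sleep 1; echo started) < /dev/null > /tmp/mysql_start.log 2>&1 &

wait                       # wait for the background job to finish
cat /tmp/mysql_start.log   # -> started
rm -f /tmp/mysql_start.log
```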
I'm not sure about the roots of this behaviour; it is probably because ssh allocates a pseudo-terminal. However, you can use a workaround with sudo:
ssh -t user@server1 "sudo service mysql start"

How to copy the environment variables in cluster system using qsub?

I use Sun's SGE to submit my jobs to a cluster system. The problem is how to let the computing machine find the environment variables of the host machine, or how to configure the qsub script so that the computing machine loads the host machine's environment variables.
The following is a script example, but it reports some errors, such as libraries not found:
#!/bin/bash
#
#$ -V
#$ -cwd
#$ -j y
#$ -o /home/user/jobs_log/$JOB_ID.out
#$ -e /home/user/jobs_log/$JOB_ID.err
#$ -S /bin/bash
#
echo "Starting job: $SGE_TASK_ID"
# Modify this to use the path to matlab for your system
/home/user/Matlab/bin/matlab -nojvm -nodisplay -r matlab_job
echo "Done with job: $SGE_TASK_ID"
The technique you are using (adding a -V) should work. One possibility since you are specifying the shell with -S is that grid engine is configured to launch /bin/bash as a login shell and your profile scripts are stomping all over the environment you are trying to pass to the job.
Try using qstat -xml -j on the job while it is queued/running to see what environment variables grid engine is trying to pass to the job.
Try adding an env command to the script to see what variables are set.
Try adding shopt -q login_shell;echo $? in the script to tell you if it is being run as a login shell.
To list out shells that are configured as login shells in grid engine try:
SGE_SINGLE_LINE=true qconf -sconf|grep ^login_shells
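The shopt check suggested above can be tried locally to see the difference between a login and a non-login shell: bash started with -l reports itself as a login shell, while a plain invocation does not.

```shell
# A plain bash invocation is NOT a login shell:
bash -c  'shopt -q login_shell; echo $?'    # -> 1
# bash started with -l IS a login shell (profile scripts are sourced first):
bash -lc 'shopt -q login_shell; echo $?'    # -> 0
```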
I think this issue is because bash is not configured in the login_shells of SGE.
Check your login_shells with qconf -sconf and see if bash is in there.
login_shells
UNIX command interpreters like the Bourne shell (see sh(1)) or the C shell (see csh(1)) can be used by Grid Engine to start job scripts. The command interpreters can either be started as login shells (i.e. all system and user default resource files like .login or .profile will be executed when the command interpreter is started, and the environment for the job will be set up as if the user had just logged in) or just for command execution (i.e. only shell-specific resource files like .cshrc will be executed, and a minimal default environment is set up by Grid Engine - see qsub(1)). The parameter login_shells contains a comma-separated list of the executable names of the command interpreters to be started as login shells. Shells in this list are only started as login shells if the parameter shell_start_mode (see above) is set to posix_compliant.
Changes to login_shells take immediate effect. The default for login_shells is sh,csh,tcsh,ksh.
This value is a global configuration parameter only. It cannot be overwritten by the execution host local configuration.