OpenShift - oc rsh hangs when not passing in <cmd> arg

We are running OSE 3.2 and I'm trying to rsh with oc to various pods. I'm using Cygwin. As long as I pass it a command, it works, so I assume it's unable to give me a shell. I've tried setting my TERM environment variable to vt100, xterm, and ansi with no luck. I am able to rsh into pods with oc using the Windows cmd prompt with TERM not set at all, but I really don't like that thing and would prefer to use Cygwin for all functions. I've searched quite a bit for a solution to this, but have come up empty. Thanks much.
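A workaround commonly used for interactive CLIs under Cygwin (an assumption here, not something confirmed in this thread) is to wrap the call in winpty, which bridges mintty's pseudo-terminal to a real Windows console:

```shell
# winpty is a separate helper tool, not part of oc. Cygwin's mintty-based
# terminal does not present a Windows console, so interactive programs
# such as `oc rsh` can hang waiting for a TTY. Wrapping them fixes that:
winpty oc rsh <pod-name>
```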

Related

VSCode with Remote Docker does not launch Anaconda base and does not recognize the conda command

I use WSL2 and Docker Desktop, and my system is Windows 11. I created a container (the system inside the container is Ubuntu 20.04) and then connected to it with VS Code (Remote - Containers). I have installed Miniconda in the container. But when I connect to the container with VS Code, I can't use any conda commands. It seems that VS Code blocks Miniconda or doesn't recognize it. However, I can use conda commands in this container if I access it with "docker exec" (not with VS Code).
When I run "conda -h" in the VS Code terminal, it shows the following (conda does not work):
$ conda -h
bash: conda: command not found.
When I run "conda -h" in a container terminal (accessed with "docker exec"), it shows (conda works):
(base) root ➜ / $ conda -h
usage: conda [-h] [-V] command ...
conda is a tool for managing and deploying applications, environments and packages.
This problem may be similar to another issue that VS Code causes when connecting to a running container (VS Code does not launch the Anaconda base Python).
But I have no idea why either happens.
Does anybody have a fix for these problems? Thank you.
Python works fine in both cases.
Miniconda was installed into the Ubuntu system after I created the container. Is that the reason VS Code does not recognize the conda commands?
I had problems with the same combination (VS Code and Anaconda) on Windows 10.
There is a good chance that you have to define your PATH variables for Anaconda, and that it will work fine after that:
1. path\to\Anaconda\Scripts
2. path\to\Anaconda\Lib
3. path\to\Anaconda\
But that isn't the most elegant way. According to Anaconda, the preferred way is to work with a $PROFILE and environment variables.
On Stack Overflow there are a lot of posts about similar problems I noticed; maybe in some of the comments you'll find your answer:
Conda: Creating a virtual environment
https://stackoverflow.com/questions/53137700/ssl-module-in-python-is-not-available-windows-7
I hope you find it quickly; it's probably something small and easy to change, like choosing the right terminal in VS Code.
Good luck!
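Since the container's Ubuntu shell is bash, the Linux equivalent of the PATH fix above is a one-liner (the install location ~/miniconda3 is an assumption; adjust it to wherever Miniconda actually lives in your container):

```shell
# Make conda visible to every interactive shell, including the one
# VS Code's integrated terminal spawns, by appending to ~/.bashrc:
echo 'export PATH="$HOME/miniconda3/bin:$PATH"' >> ~/.bashrc

# Or use conda's own shell hook, which also activates the base environment:
echo '. "$HOME/miniconda3/etc/profile.d/conda.sh" && conda activate base' >> ~/.bashrc
```

Open a new terminal in VS Code afterwards so the updated ~/.bashrc is sourced.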

Install MySQL on Windows Docker Image

Has anyone had success adding MySQL to a Windows Docker image? I tried two different ways of deploying MySQL to my image.
I tried using the MSI from MySQL in non-interactive mode. It does not work at all in a container (see: While installing mysql.msi through PowerShell, getting the below error).
I also tried extracting the zip and setting things up manually with the mysqld commands, following this script, but that does nothing at all. Literally nothing: the executables behave as if they just run and exit (no output, nothing):
https://github.com/Somesh-K/Automation-Mysql/blob/main/1.mysql_setup_v2.ps1
Something is very weird about all of this.
Yes, I know that there's a perfectly good MySQL Linux container published by Oracle on Docker Hub, and it works. The problem is that running a Windows container and a Linux container that need to interact creates really unnecessary frustration for the user in terms of networking between the two.
Using a different back end (like SQL Server) for our application is not feasible, and using .NET Core instead of .NET Framework is not feasible. To simplify, I'd like to just install MySQL on our Windows-based web server Docker image. This seems doable using the two methods described in the links above, but as noted, it does not work, and the MySQL binaries behave very oddly when run in the container.
Here's an example of the odd behavior:
Install Docker Desktop for Windows
Download the Win32 install zips from MySQL and place in C:\mydata
https://dev.mysql.com/downloads/mysql/
Pull down the ASP.NET image from Docker Hub, run it, and open up PowerShell:
# docker pull mcr.microsoft.com/dotnet/framework/aspnet:4.8
# docker run --name testweb -v C:\mydata:C:\mydata:R -d mcr.microsoft.com/dotnet/framework/aspnet:4.8
# docker exec -it testweb powershell
C:\ > cd C:\mydata
C:\mydata\ > Expand-Archive -Path .\mysql-5.7.36-winx64.zip .
C:\mydata\ > cd \mysql-5.7.36-winx64\bin
C:\mydata\mysql-5.7.36-winx64\bin\ > .\mysql.exe -version
[zero output, acts like it's an empty executable]
Results
None of the executables/binaries in the extracted MySQL bin directory do anything at all inside the container. They behave as if someone wrote an executable that just exits. I thought I had a bad install zip, so I extracted the same zip on my regular Windows 10 workstation; there, all of the binaries at least return errors or do something.
This is super odd. Any help would be appreciated.
Downloading this executable (the Visual C++ redistributable, whose runtime DLLs the MySQL binaries apparently link against) and putting it into my container did the trick:
https://download.microsoft.com/download/2/E/6/2E61CFA4-993B-4DD4-91DA-3737CD5CD6E3/vcredist_x64.exe
I placed this on my container and ran it in quiet mode:
C:\vcredist.exe /Q
After doing this, the executables started working (presumably they had been exiting silently before because, in a container, there is no desktop on which to show the missing-DLL error dialog):
C:\ > cmd.exe /C "C:\mysql\bin\mysqld" --initialize-insecure
C:\ > cmd.exe /C "C:\mysql\bin\mysqld" --install
C:\ > start-service mysql
C:\ > cmd.exe /C "C:\mysql\bin\mysql" -u root
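For repeatability, the same steps can be captured in a Dockerfile sketch. This is an untested assumption-laden sketch: it presumes the extracted mysql-5.7.36-winx64 directory and vcredist_x64.exe sit next to the Dockerfile in the build context.

```dockerfile
# escape=`
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8

# The VC++ redistributable must go in before the MySQL binaries will run
COPY vcredist_x64.exe C:\vcredist.exe
RUN C:\vcredist.exe /Q

# Extracted MySQL zip, assumed to sit next to the Dockerfile
COPY mysql-5.7.36-winx64 C:\mysql
RUN C:\mysql\bin\mysqld --initialize-insecure && C:\mysql\bin\mysqld --install
```

The service registration persists into the image, but a service started during a RUN step does not, so the start-service mysql step belongs in the container's entrypoint.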

Adding Labels to Images with Openshift s2i Binary build

I would like to add some labels (commit hash, branch name, ...) to images I create using an OpenShift source-to-image binary build. These labels will naturally have different values for every build.
Currently oc start-build does not even seem to support a -e flag to add environment variables for binary builds (it works for Git sources; is this a bug?).
Binary builds also do not support --build-arg to pass arguments to the Dockerfile.
The only way I was able to accomplish this was to call oc set env bc [build-name] and then start the build, using LABEL instructions in the Dockerfile with values taken from those environment variables.
My question is: isn't there a better way to do this (ideally one where the Dockerfile does not need to change)? Doesn't s2i support passing --label down to the underlying docker build?
Thank you.
Do you want to add an environment variable when you run oc start-build? (You mentioned oc set env bc [build-name].)
You can use the --env=<key>=<value> option; refer to Starting a Build for more details.
$ oc start-build <buildconfig_name> --env=<key>=<value>
I hope it helps.
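Putting that together for a binary build, a sketch might look like the following. The BuildConfig name myapp and the variable names are illustrative, not from the original thread, and this assumes the Dockerfile reads them (e.g. LABEL commit=$GIT_COMMIT after a matching ENV/ARG):

```shell
# Per-build values are computed at trigger time and passed into the build
# environment; --from-dir makes this a binary build from the local tree.
oc start-build myapp --from-dir=. --follow \
  --env=GIT_COMMIT="$(git rev-parse HEAD)" \
  --env=GIT_BRANCH="$(git rev-parse --abbrev-ref HEAD)"
```

For label values that do not change per build, the BuildConfig's spec.output.imageLabels field can attach labels without touching the Dockerfile at all.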

Unable to delete openshift application

I installed a PHP application on one of my gears in OpenShift. It is a git clone of https://github.com/ThinkUpLLC/ThinkUp/tree/v2.0-beta.10. Something went wrong with the application, and hence I would like to delete it now. However, I get the error "Unable to perform action on app object. Another operation is already running." while trying to delete the application with the rhc command-line tool. I have already tried rhc app-force-stop, but it did not make any difference.
Sounds a bit like this bug - https://bugzilla.redhat.com/show_bug.cgi?id=997008. There seems to be no solution/workaround though.
Have you tried deleting the application via the web console?
You can try this command; it will kill all the background processes associated with the app:
pbrun /usr/sbin/oo-admin-ctl-app -l svc-<domain-id> -c destroy -a <app-name> -n <domain-id>
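Once the stuck operation has been cleared on the node, retrying the delete from the client side should go through (the app name myapp here is a stand-in):

```shell
# --confirm skips the interactive "are you sure" prompt
rhc app delete myapp --confirm
```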

Openshift MYSQL environment variables not set

All my MySQL environment variables result in an empty string, such as
echo getenv('OPENSHIFT_MYSQL_DB_URL');
echo getenv('OPENSHIFT_MYSQL_DB_HOST');
however, others such as
echo getenv('OPENSHIFT_APP_NAME');
echo getenv('OPENSHIFT_REPO_DIR');
work perfectly fine. Any ideas what I am doing wrong?
I had the same issue and tried creating and recreating applications multiple times without success.
The solution was to use Git to push the code to OpenShift (at least once); if you only push the code over SFTP, those variables will not be accessible.
You can just use the rhc app stop and rhc app start commands to restart your application, and the environment variables will then be provided to it. Make sure that you don't just use rhc app restart, as that does not usually work; think of it as apachectl stop/start vs. apachectl reload.
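The stop/start cycle described above, assuming the app is named myapp:

```shell
# A full stop and start, not a restart, so the gear re-reads its
# environment and the OPENSHIFT_MYSQL_* variables become visible:
rhc app stop -a myapp
rhc app start -a myapp
```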