Blank screen error when connecting to the VNC server on a SUSE 12 SP4 server

I get a blank screen when connecting to the VNC server. I have installed the VNC server on the SUSE 12 platform, and when I connect, the log file shows the following errors:
vncext: VNC extension running!
vncext: Listening for VNC connections on all interface(s), port 5901
vncext: Listening for HTTP connections on all interface(s), port 5801
vncext: created VNC server for screen 0
** (process:9295): WARNING **: Could not make bus activated clients aware of XDG_CURRENT_DESKTOP=GNOME environment variable: Could not connect: Connection refused
gnome-session-is-accelerated: llvmpipe detected.
gnome-session-binary[9295]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Could not connect: Connection refused
gnome-session-binary[9295]: WARNING: Could not make bus activated clients aware of XDG_MENU_PREFIX=gnome- environment variable: Could not connect: Connection refused
gnome-session-binary[9295]: WARNING: Could not make bus activated clients aware of QT_QPA_PLATFORMTHEME=qgnomeplatform environment variable: Could not connect: Connection refused
gnome-session-binary[9295]: WARNING: Lost name on bus: org.gnome.SessionManager
Unable to init server: Could not connect: Connection refused
** (gnome-session-failed:9316): WARNING **: Cannot open display:
/root/.xinitrc: line 1: 9295 Terminated gnome-session
I followed the link below to install the VNC server:
https://www.dell.com/support/article/in/en/inbsd1/sln283107/how-to-install-and-configure-a-vnc-server-on-suse-linux-enterprise-server-sles-11?lang=en
Thanks in advance for any help.
I have tried editing .vnc/xstartup and updating all the required packages, but it didn't work.
Right now my .vnc/xstartup file contains the following configuration:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
export SESSION_MANAGER
export DBUS_SESSION_BUS_ADDRESS
userclientrc=$HOME/.xinitrc
sysclientrc=/etc/X11/xinit/xinitrc
if [ -f "$userclientrc" ]; then
    client="$userclientrc"
elif [ -f "$sysclientrc" ]; then
    client="$sysclientrc"
fi
if [ -x "$client" ]; then
    exec "$client"
fi
if [ -f "$client" ]; then
    exec sh "$client"
fi
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
if [ -x /usr/bin/icewm ]; then
    /usr/bin/icewm &
else
    echo "No window manager found. You should install a window manager to get properly working VNC session."
fi
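Not from the original post, but as a hedged sketch of an alternative xstartup (assuming dbus-launch plus gnome-session and/or icewm are installed from the SLES repositories): the repeated "Could not connect: Connection refused" D-Bus warnings suggest gnome-session has no session bus to talk to, so starting the desktop through a private bus, with IceWM as a fallback, is one thing to try:

#!/bin/sh
# Sketch only: start a private D-Bus session bus for the desktop so
# gnome-session does not fail with "Could not connect: Connection refused".
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
[ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
if command -v gnome-session >/dev/null 2>&1; then
    exec dbus-launch --exit-with-session gnome-session
elif [ -x /usr/bin/icewm ]; then
    exec /usr/bin/icewm
else
    # Last resort: at least give the session a terminal.
    xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop"
fi

After editing the file, kill and restart the session (vncserver -kill :1 && vncserver :1, assuming display :1 since the log shows port 5901) so the new xstartup is picked up.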

Related

How can I do an SSH tunnel with port forwarding on a Windows runner in GitHub Actions?

I have a MongoDB instance running on a Google Compute Engine VM that I want to connect to from my GitHub Actions workflow (on a Windows runner, if that makes a difference) to insert test and performance results.
Currently, I am trying to open an SSH tunnel with port forwarding and just test that the port is open.
Here is my GitHub Actions step:
- name: 'Create ssh tunnel'
  if: (runner.os == 'Windows')
  run: |
    gcloud config set auth/impersonate_service_account *****@***.iam.gserviceaccount.com
    gcloud compute config-ssh
    $sshTunnelJob = Start-Job -Name SshTunnelJob -ScriptBlock { ssh -o "User=*****_iam_gserviceaccount_com" *****.us-east1-b.**** -vvv -fNT -L 27017:0.0.0.0:27017 }
    Get-Job
    Receive-Job -Name SshTunnelJob | Format-List -Force -Expand CoreOnly
    netstat -aon
    Test-NetConnection localhost -port 27017
    gcloud config unset auth/impersonate_service_account
    gcloud compute config-ssh --remove
I expect Test-NetConnection localhost -port 27017 to succeed, but it fails. Forwarding port 80 succeeds, though.
Here is the output:
WARNING: TCP connect to (::1 : 27017) failed
WARNING: TCP connect to (127.0.0.1 : 27017) failed
ComputerName: localhost
RemoteAddress: ::1
ResolvedAddresses: {::1, 127.0.0.1}
PingSucceeded: True
PingReplyDetails: System.Net.NetworkInformation.PingReply
TcpClientSocket:
TcpTestSucceeded: False
RemotePort: 27017
TraceRoute:
Detailed: False
InterfaceAlias: Loopback Pseudo-Interface 1
InterfaceIndex: 1
InterfaceDescription:
NetAdapter:
NetRoute: MSFT_NetRoute (InstanceID = "DD;9;?B55;55DD55;")
SourceAddress: ::1
NameResolutionSucceeded: True
BasicNameResolution: {Microsoft.DnsClient.Commands.DnsRecord_AAAA,Microsoft.DnsClient.Commands.DnsRecord_A}
LLMNRNetbiosRecords: {}
DNSOnlyRecords: {Microsoft.DnsClient.Commands.DnsRecord_A}
AllNameResolutionResults: {Microsoft.DnsClient.Commands.DnsRecord_AAAA,Microsoft.DnsClient.Commands.DnsRecord_A}
IsAdmin: True
NetworkIsolationContext: Loopback
MatchingIPsecRules:
What am I missing? Is GitHub limiting ports? I couldn't find any documentation on what ports are blocked or not.
Solution 1:
The issue might be that the connection from the client to the server is blocked by a firewall. Please check whether the relevant GCP firewall rule is enabled for port 27017.
Also, please check the target tags and update them if required. This will allow instances tagged with mongodb-instance to accept connections on port 27017.
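For illustration, a rule along these lines would open the port for tagged instances; this is only a sketch, and the rule name, network, and source range are assumptions you should adapt to your project:

# Hypothetical rule name, network, and source range; tcp:27017 and the
# mongodb-instance target tag are the parts that matter here.
gcloud compute firewall-rules create allow-mongodb \
    --network=default \
    --allow=tcp:27017 \
    --target-tags=mongodb-instance \
    --source-ranges=<your-client-cidr>

You can confirm what is currently allowed with gcloud compute firewall-rules list.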
Solution 2:
As per the output you provided, PingSucceeded returned True while TcpTestSucceeded returned False. In such cases the host itself is reachable over ICMP, but nothing is accepting TCP connections on that port of the remote server/device (the port may be closed or filtered).
PingSucceeded: True
TcpTestSucceeded: False
Since you expect Test-NetConnection localhost -port 27017 to succeed, please follow the steps below.
Open PowerShell on the Windows server and type the following command:
tnc <ip_address> -port <PortNumber>
If the device has issues, for example it has powered off or been disconnected from the network, a response like the one below is expected:
PingSucceeded : False
TCPTestSucceeded : False
If the connection is healthy (i.e. the MongoDB server can be reached successfully), then the following response is expected in PowerShell:
TcpTestSucceeded : True
The above response tells us specifically that port 27017 is open and that Test-NetConnection was able to complete the TCP handshake, so the port should be ready to establish a connection.
The above information is derived from a write-up drafted by Rodrigo Restrepo.

How to connect QEMU qmp-shell to a VM via unix socket?

I followed this tutorial to connect qmp-shell to a QEMU VM instance.
1. Start QMP on a unix socket
# qemu-system-aarch64 -M virt -qmp unix:./qmp-sock,server,wait=off
2. Run the script
# qmp-shell ./qmp-sock
3. You should get the following prompt
(QEMU)
But step 2 gives the following error:
ERROR: Couldn't connect to ./qmp-sock: Failed to establish connection: [Errno 2] No such file or directory
What could be wrong?
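One assumption-based thing to check (not mentioned in the tutorial): unix:./qmp-sock is a path relative to the directory QEMU was started in, so qmp-shell must be run against that same file. Using an absolute path on both sides, and verifying the socket exists, rules that mismatch out:

# Sketch with an absolute socket path (assuming /tmp is suitable)
qemu-system-aarch64 -M virt -qmp unix:/tmp/qmp-sock,server,wait=off
ls -l /tmp/qmp-sock        # the socket file should exist before connecting
qmp-shell /tmp/qmp-sock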

Powershell remote fails with "The SSH client session has ended with error message: subsystem request failed on channel 0."

I am running Windows 10 Pro on one of my workstations with SSH enabled. I am able to ssh from my Mac to the Windows machine successfully, but when I try the command
New-PSSession -HostName xxxx -UserName yyyy
I receive the following message after entering my password: The background process reported an error with the following message:
The SSH client session has ended with error message: subsystem request failed on channel 0.
Sorry, I also ran into the same issue and couldn't get it done.
The best way, I think, is to run it using ssh directly.
E.g.:
ssh user@ip-or-hostname "quser"
I also have this issue... using ssh directly works, but I was trying to do something like this:
$s = New-PSSession -ComputerName myComputer -UserName userName -Port sshPort
Invoke-Command -Session $s -ScriptBlock {
    cd /pathToDockerCompose
    docker-compose down
    docker-compose up -d
}
So if someone has an alternative way to do this, I'm all ears :-).
Edit: by the way, I want it to be called from a Windows machine and executed on Linux.
I had a similar situation occur on one of my machines: SSH worked, but trying to connect through PowerShell with Enter-PSSession gave the same "subsystem request failed on channel 0" error. It turned out I didn't have the PowerShell subsystem registered in the sshd_config file for OpenSSH.
See https://lazyadmin.nl/powershell/powershell-ssh/ for additional info and the same change I made to get it working.
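For reference, the change is a Subsystem line in sshd_config; the paths below follow the PowerShell-remoting-over-SSH documentation and are assumptions to adjust to wherever pwsh is actually installed (restart the sshd service afterwards):

# Windows target (sshd_config under %ProgramData%\ssh; the 8.3 short path avoids the space in "Program Files"):
Subsystem powershell c:/progra~1/powershell/7/pwsh.exe -sshs -nologo

# Linux target (/etc/ssh/sshd_config):
Subsystem powershell /usr/bin/pwsh -sshs -nologo

Once the subsystem is registered, New-PSSession -HostName <host> -UserName <user> (the SSH transport parameters) should connect instead of failing on channel 0.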

Orion Context Broker functional test failure

I have successfully forked and built the Context Broker source code on a CentOS 6.9 VM and now I am trying to run the functional tests as the official documentation suggests. First, I installed the accumulator-server.py script:
$ make install_scripts INSTALL_DIR=~
I verified that it is installed:
$ accumulator-server.py -u
Usage: accumulator-server.py --host <host> --port <port> --url <server url> --pretty-print -v -u
Parameters:
--host <host>: host to use database to use (default is '0.0.0.0')
--port <port>: port to use (default is 1028)
--url <server url>: server URL to use (default is /accumulate)
--pretty-print: pretty print mode
--https: start in https
--key: key file (only used if https is enabled)
--cert: cert file (only used if https is enabled)
-v: verbose mode
-u: print this usage message
And then I ran the functional tests:
$ make functional_test INSTALL_DIR=~
But the test fails and exits with the message below:
024/927: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both : (0000_ipv6_support/ipv4_ipv6_both.test)
make: *** [functional_test] Error 11
$
I checked the file ../0000_ipv6_support/ipv4_ipv6_both.shellInit.stdout for any hint on what may be going wrong, but the error log does not lead me anywhere:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 6404
Unable to start listening application after waiting 30
Does anyone have any idea about what may be going wrong here?
I checked the script which prints the error line Unable to start listening application after waiting 30 and noticed that stderr for accumulator-server.py is logged into the /tmp folder.
The accumulator_9977_stderr file had this log: 0000_ipv6_support/ipv4_ipv6_both.shellInit: line 27: accumulator-server.py: command not found
Once I saw this log, I understood the mistake I had made: I was running the functional tests with sudo, so sudo's secure_path was being used instead of my PATH variable.
In the end, running the functional tests with the command below solved the issue for me:
$ sudo "PATH=$PATH" make functional_test INSTALL_DIR=~
This can also be solved by editing the /etc/sudoers file with:
$ sudo visudo
and modifying the secure_path value.
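For illustration, the secure_path entry in /etc/sudoers looks like the line below (the CentOS default plus the directory that holds accumulator-server.py); the extra directory is an assumption that depends on where INSTALL_DIR=~ placed the scripts, e.g. ~/bin:

# /etc/sudoers -- edit only through visudo
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/home/youruser/bin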

Windows container failed to start with error, "failed to create endpoint on network nat: HNS failed with error : Failed to create endpoint."

I have been trying Windows Containers on Windows Server 2016 TP5. Suddenly I started getting an error while running a container with the port mapping option -p 80:80:
c:\>docker run -it -p 80:80 microsoft/iis cmd
docker: Error response from daemon: failed to create endpoint sharp_brahmagupta on network nat: HNS failed with error : Failed to create endpoint.
I made sure that no other container was running and that port 80 on the host machine was not being used by any other service.
Did anyone face the same issue?
After searching around I stumbled upon this issue on GitHub. This seemed to be a known issue with Windows containers on Windows Server 2016 TP5.
Then, thanks to this forum, I found the solution.
You can check the active static port mappings with the command below:
C:\>powershell
PS C:\>Get-NetNatStaticMapping
StaticMappingID : 3
NatName : Hda6caca4-06ec-4251-8a98-1fe0b4c5af88
Protocol : TCP
RemoteExternalIPAddressPrefix : 0.0.0.0/0
ExternalIPAddress : 0.0.0.0
ExternalPort : 80
InternalIPAddress : 172.31.181.4
InternalPort : 80
InternalRoutingDomainId : {00000000-0000-0000-0000-000000000000}
Active : True
From the above output it seemed that even though the container was removed, the static port mapping was not removed and was still active.
I removed it with the command below:
PS C:\> Get-NetNatStaticMapping | ? ExternalPort -eq 80 | Remove-NetNatStaticMapping
Then I simply rebooted the system and the error was gone.
For me these steps solved the problem:
Stop-Service docker
Get-ContainerNetwork | Remove-ContainerNetwork
Get-NetNat | Remove-NetNat
Get-VMSwitch | Remove-VMSwitch
Start-Service docker
(suggested by JMesser81 at:https://github.com/Microsoft/Virtualization-Documentation/issues/273)
I had a similar error.
$ docker --version
Docker version 1.13.0-rc3, build 4d92237
$ docker-compose -f .\docker-compose.windows.yml up
Starting musicstore_db_1
ERROR: for db Cannot start service db: {"message":"failed to create endpoint musicstore_db_1 on network nat: HNS failed with error : Unspecified error"}
ERROR: Encountered errors while bringing up the project.
Removing the static mapping did not work; only removing the network helped:
Get-ContainerNetwork -Name nat | Remove-ContainerNetwork
Execute the command in PowerShell as administrator, then restart Docker.
Update:
Use the CleanupContainerHostNetworking.ps1 script to resolve Docker 17 networking issues:
.\CleanupContainerHostNetworking.ps1 -Cleanup -ForceDeleteAllSwitches
I had a docker and docker-compose setup that was already working on CentOS.
I made the following changes to get it working on Windows Server 2016:
Stop the Docker service, remove the NAT network, and start the Docker service.
ps>stop-service docker
ps>Get-ContainerNetwork | Remove-ContainerNetwork -Force -ea SilentlyContinue
ps>start-service docker
Configure the network in your docker-compose.yml:
version: '3.7'
networks:
  default:
    external:
      name: nat
That's It!