I can't connect to remote server - powershell-remoting

I am trying to install a .exe file with PowerShell on a remote computer, but I get the following error:
Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config.
I run the following command:
Invoke-Command -ComputerName y.y.y.y -ScriptBlock { & "d:\UltraVNC_1_2_10_X86_Setup.exe" }
I changed the execution policy to RemoteSigned, but that didn't help.

If the computer is password protected, try using a PSCredential object:
$user = "SRV\test"
$ip = "75.12.35.36"
$pass = "P455W0RD" | ConvertTo-SecureString -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($user, $pass)
And use the command as follows:
Invoke-Command -ComputerName $ip -Credential $cred {your instructions}
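For example, to run the installer from the question on the remote machine (a sketch; the path is reused from the question, and any silent-install switches depend on the installer itself):
Invoke-Command -ComputerName $ip -Credential $cred -ScriptBlock {
    # Launch the installer on the remote machine and wait for it to exit
    Start-Process -FilePath "d:\UltraVNC_1_2_10_X86_Setup.exe" -Wait
}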
If that does not work, verify that the WinRM service is running on the remote machine:
Get-Service WinRM
Start-Service WinRM
And also add the remote machine to your TrustedHosts list:
Set-Item WSMan:\localhost\Client\TrustedHosts -Value $ip -Force
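You can also probe the remote WinRM listener directly with the built-in Test-WSMan cmdlet; it returns protocol details on success and a transport error otherwise:
Test-WSMan -ComputerName $ip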

I solved this problem by adding the target nodes to the TrustedHosts list:
Set-Item WSMan:\localhost\Client\TrustedHosts <target node ip> -Force -Concatenate
Say I want to connect from PC1 to PC2 (192.168.1.2) and PC3 (192.168.1.3) using WinRM; I add PC2 and PC3 to PC1's TrustedHosts list:
Set-Item WSMan:\localhost\Client\TrustedHosts 192.168.1.2 -Force -Concatenate
Set-Item WSMan:\localhost\Client\TrustedHosts 192.168.1.3 -Force -Concatenate
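To confirm the entries took effect, read the list back from the local client configuration:
Get-Item WSMan:\localhost\Client\TrustedHosts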


How can I do an SSH tunnel with port forwarding on a Windows runner in GitHub Actions?

I have a MongoDB instance running on a Google Compute Engine VM that I want to connect to from my GitHub Actions workflow (on a Windows runner, if it makes a difference) to insert test and performance results.
Currently, I am trying to open an SSH tunnel with port forwarding and just test that the port is open.
Here is my GitHub Actions step:
- name: 'Create ssh tunnel'
  if: (runner.os == 'Windows')
  run: |
    gcloud config set auth/impersonate_service_account *****@***.iam.gserviceaccount.com
    gcloud compute config-ssh
    $sshTunnelJob = Start-Job -Name SshTunnelJob -ScriptBlock { ssh -o "User=*****_iam_gserviceaccount_com" *****.us-east1-b.**** -vvv -fNT -L 27017:0.0.0.0:27017 }
    Get-Job
    Receive-Job -Name SshTunnelJob | Format-List -Force -Expand CoreOnly
    netstat -aon
    Test-NetConnection localhost -Port 27017
    gcloud config unset auth/impersonate_service_account
    gcloud compute config-ssh --remove
I expect Test-NetConnection localhost -Port 27017 to succeed, but it fails. Forwarding port 80 succeeds, though.
Here is the output:
WARNING: TCP connect to (::1 : 27017) failed
WARNING: TCP connect to (127.0.0.1 : 27017) failed
ComputerName: localhost
RemoteAddress: ::1
ResolvedAddresses: {::1, 127.0.0.1}
PingSucceeded: True
PingReplyDetails: System.Net.NetworkInformation.PingReply
TcpClientSocket:
TcpTestSucceeded: False
RemotePort: 27017
TraceRoute:
Detailed: False
InterfaceAlias: Loopback Pseudo-Interface 1
InterfaceIndex: 1
InterfaceDescription:
NetAdapter:
NetRoute: MSFT_NetRoute (InstanceID = "DD;9;?B55;55DD55;")
SourceAddress: ::1
NameResolutionSucceeded: True
BasicNameResolution: {Microsoft.DnsClient.Commands.DnsRecord_AAAA,Microsoft.DnsClient.Commands.DnsRecord_A}
LLMNRNetbiosRecords: {}
DNSOnlyRecords: {Microsoft.DnsClient.Commands.DnsRecord_A}
AllNameResolutionResults: {Microsoft.DnsClient.Commands.DnsRecord_AAAA,Microsoft.DnsClient.Commands.DnsRecord_A}
IsAdmin: True
NetworkIsolationContext: Loopback
MatchingIPsecRules:
What am I missing? Is GitHub limiting ports? I couldn't find any documentation on which ports are blocked.
Solution 1:
The issue might be that the connection from client to server is blocked by a firewall. Please check whether the relevant GCP firewall rule is enabled for port 27017.
Also, please check the target tags and update them if required. This will allow instances tagged with mongodb-instance to accept connections on port 27017.
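As a sketch, such a rule could be created with gcloud (the rule name allow-mongodb is a placeholder; restrict the source ranges to suit your setup):
gcloud compute firewall-rules create allow-mongodb \
    --allow tcp:27017 \
    --target-tags mongodb-instance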
Solution 2:
As per the output you provided, PingSucceeded was True while TcpTestSucceeded was False: the host answers ICMP, but the TCP connection to the port fails. (Also note that on some remote servers/devices ICMP requests are disabled, in which case PingSucceeded can be False even when the port itself is reachable.)
PingSucceeded: True
TcpTestSucceeded: False
As you are expecting Test-NetConnection localhost -Port 27017 to succeed, please follow the steps below.
Open PowerShell on the Windows server and run the following command (tnc is the built-in alias for Test-NetConnection):
tnc <ip_address> -Port <PortNumber>
If the device has issues, such as being powered off or disconnected from the network, a response like the one below is expected:
PingSucceeded : False
TCPTestSucceeded : False
If the connection is healthy (i.e. the MongoDB server can be reached successfully), the following response is expected in PowerShell:
TcpTestSucceeded : True
This response tells us that port 27017 is open and the Test-NetConnection cmdlet was able to complete a TCP handshake, so the port should be ready to establish a connection.
The above information is derived from a link drafted by Rodrigo Restrepo.
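One more thing worth checking in the workflow itself is timing: the ssh job is started asynchronously with Start-Job, so the forward may not be listening yet at the moment Test-NetConnection runs. A minimal sketch that polls the port for up to 30 seconds before giving up:
$deadline = (Get-Date).AddSeconds(30)
do {
    # Re-test until the forwarded port accepts a TCP handshake
    $ok = (Test-NetConnection localhost -Port 27017).TcpTestSucceeded
    if (-not $ok) { Start-Sleep -Seconds 2 }
} until ($ok -or (Get-Date) -gt $deadline)
$ok   # True once the tunnel is actually up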

Powershell remote fails with "The SSH client session has ended with error message: subsystem request failed on channel 0."

I am running Windows 10 Pro on one of my workstations with SSH enabled. I am able to ssh from my Mac to the Windows machine successfully, but when I try the command
New-PSSession -HostName xxxx -UserName yyyy
I receive the following message after entering my password: The background process reported an error with the following message:
The SSH client session has ended with error message: subsystem request failed on channel 0.
Sorry, I also ran into the same issue and couldn't get it to work.
The best way, I think, is to run it using ssh directly.
E.g.:
ssh user@ip-or-hostname "quser"
I also have this issue... using ssh directly works, but I was trying to do something like this:
$s = New-PSSession -HostName myComputer -UserName userName -Port sshPort
Invoke-Command -Session $s -ScriptBlock {
    cd /pathToDockerCompose
    docker-compose down
    docker-compose up -d
}
So if someone has an alternative way to do this, I'm all ears :-).
Edit: BTW, I want this to be called from a Windows machine and executed on Linux.
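Building on the earlier suggestion to call ssh directly, the same docker-compose sequence can be chained into a single remote command (hostname and path are the placeholders from the posts above):
ssh user@myComputer "cd /pathToDockerCompose && docker-compose down && docker-compose up -d"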
Had a similar situation occur on one of my machines: SSH worked, but connecting through PowerShell with Enter-PSSession gave the same "subsystem request failed on channel 0" error. It turned out I didn't have the PowerShell subsystem registered in the sshd_config file for OpenSSH.
See https://lazyadmin.nl/powershell/powershell-ssh/ for additional info and the same change I made to get it working
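For reference, the missing piece is a Subsystem line in sshd_config on the remote machine, along the lines of the following (a sketch for a PowerShell 7 install on Windows; the exact pwsh path depends on your setup, and the sshd service must be restarted after editing):
# sshd_config on the remote machine: register the PowerShell subsystem
Subsystem powershell c:/progra~1/powershell/7/pwsh.exe -sshs -NoLogo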

Blank screen error while connecting to the vncserver on a SUSE-12 SP4 server

When connecting to the vncserver I get a blank screen. I have installed the VNC server on the SUSE-12 platform, and when connecting, the log file shows the errors below:
vncext: VNC extension running!
vncext: Listening for VNC connections on all interface(s), port 5901
vncext: Listening for HTTP connections on all interface(s), port 5801
vncext: created VNC server for screen 0
** (process:9295): WARNING **: Could not make bus activated clients aware of XDG_CURRENT_DESKTOP=GNOME environment variable: Could not connect: Connection refused
gnome-session-is-accelerated: llvmpipe detected.
gnome-session-binary[9295]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Could not connect: Connection refused
gnome-session-binary[9295]: WARNING: Could not make bus activated clients aware of XDG_MENU_PREFIX=gnome- environment variable: Could not connect: Connection refused
gnome-session-binary[9295]: WARNING: Could not make bus activated clients aware of QT_QPA_PLATFORMTHEME=qgnomeplatform environment variable: Could not connect: Connection refused
gnome-session-binary[9295]: WARNING: Lost name on bus: org.gnome.SessionManager
Unable to init server: Could not connect: Connection refused
** (gnome-session-failed:9316): WARNING **: Cannot open display:
/root/.xinitrc: line 1: 9295 Terminated gnome-session
I followed the link below for the installation of the vncserver:
https://www.dell.com/support/article/in/en/inbsd1/sln283107/how-to-install-and-configure-a-vnc-server-on-suse-linux-enterprise-server-sles-11?lang=en
Thanks in advance for any help.
I have tried editing .vnc/xstartup and also updated all the required packages, but it didn't work.
Right now my .vnc/xstartup file is configured as follows:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
export SESSION_MANAGER
export DBUS_SESSION_BUS_ADDRESS
userclientrc=$HOME/.xinitrc
sysclientrc=/etc/X11/xinit/xinitrc
if [ -f "$userclientrc" ]; then
    client="$userclientrc"
elif [ -f "$sysclientrc" ]; then
    client="$sysclientrc"
fi
if [ -x "$client" ]; then
    exec "$client"
fi
if [ -f "$client" ]; then
    exec sh "$client"
fi
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
if [ -x /usr/bin/icewm ]; then
    /usr/bin/icewm &
else
    echo "No window manager found. You should install a window manager to get a properly working VNC session."
fi

Gunicorn always listening at http://127.0.0.1:8000

I have set up my Django application on WebFaction and now I am trying to move to Gunicorn to serve it. When I set up my files and config, everything seems to work except that it is always listening at 127.0.0.1:8000.
My configuration is as below.
supervisord.conf
[unix_http_server]
file=/home/devana/tmp/supervisor.sock
[supervisord]
logfile=/home/devana/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/home/devana/webapps/devana/etc/supervisord.pid
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///home/devana/tmp/supervisor.sock
[include]
files = /home/devana/webapps/devana/etc/supervisord/*.ini
Supervisor.ini
[program:devana]
command=/home/devana/webapps/devana/scripts/start_server
directory=/home/devana/webapps/devana/csiop/
user=devana
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile = /home/devana/tmp/gunicorn_supervisor.log
start_server
NAME="devana" # Name of the application
DJANGODIR=/home/devana/webapps/devana/csiop # Django project directory
SOCKFILE=/home/devana/webapps/devana/run/gunicorn.sock # we will communicte using this
unix socket
USER=devana # the user to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=devana.settings.production # which settings should Django use
DJANGO_WSGI_MODULE=devana.wsgi # WSGI module name
BIND=2.14.5.58:31148 (IP and the port number provided by webfaction in this place)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER \
--log-level=debug \
--bind=$BIND
Now when I run the '../bin/supervisord' command, gunicorn starts, but it listens at 127.0.0.1:8000 instead of the address in the BIND variable I provided, and I am not able to open my website at http://mywebsite.com.
Could someone point out what I am doing wrong?
I found the problem. Instead of using a single BIND variable containing both the IP and the port, I separated them into two variables and used --bind=$IP:$PORT. That works.
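For reference, the working variant of the start_server script ends up looking like this (same values as in the question):
IP=2.14.5.58       # IP provided by WebFaction
PORT=31148         # port provided by WebFaction

exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --workers $NUM_WORKERS \
  --user=$USER \
  --log-level=debug \
  --bind=$IP:$PORT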
If gunicorn listens on 127.0.0.1:8000, it is probably the default being applied because the supplied -b / --bind parameter could not be applied.
In my case, I ran gunicorn via Docker and had the following directive in my Dockerfile as the default command:
CMD ["gunicorn", "config.wsgi", "--bind 0.0.0.0:8000"] # listening on 127.0.0.1:8000
CMD ["gunicorn", "config.wsgi", "--bind", "0.0.0.0:8000"] # listening on 0.0.0.0:8000
I'm not sure what was broken in your case, but if someone from the future stumbles upon this: check how the --bind value is passed to gunicorn.

kvm net devices sharing traffic

Using Linux KVM/QEMU, I have a virtual machine with two NICs, presented to the host as tap interfaces:
-net nic,macaddr=AA:AA:AA:AA:00:01,model=virtio \
-net tap,ifname=tap0a,script=ifupbr0.sh \
-net nic,macaddr=AA:AA:AA:AA:00:02,model=virtio \
-net tap,ifname=tap0b,script=ifupbr1.sh \
In the guest (also running Linux), these are configured on different subnets:
eth0 Link encap:Ethernet HWaddr aa:aa:aa:aa:00:01
inet addr:10.0.0.10 Bcast:10.0.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth1 Link encap:Ethernet HWaddr aa:aa:aa:aa:00:02
inet addr:192.168.0.10 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Routes only go to the expected places:
ip route list
default via 10.0.0.1 dev eth0 metric 100
10.0.0.0/16 dev eth0 proto kernel scope link src 10.0.0.10
192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.10
But somehow the two NICs don't seem to be treated by KVM as being connected to distinct networks.
If I trace the individual interfaces, they both see the same traffic.
For example, if I ping on the 10.0.0.0/16 subnet with ping -I eth0 10.0.0.1
and simultaneously trace the two tap interfaces with tcpdump, I see the pings coming through on both tap interfaces:
sudo tcpdump -n -i tap0a
10:51:56.308190 IP 10.0.0.10 > 10.0.0.1: ICMP echo request, id 867, seq 1, length 64
10:51:56.308217 IP 10.0.0.1 > 10.0.0.10: ICMP echo reply, id 867, seq 1, length 64
sudo tcpdump -n -i tap0b
10:51:56.308190 IP 10.0.0.10 > 10.0.0.1: ICMP echo request, id 867, seq 1, length 64
10:51:56.308217 IP 10.0.0.1 > 10.0.0.10: ICMP echo reply, id 867, seq 1, length 64
That seems strange to me, since it's pretty clear the guest OS would actually have sent this only on the tap0a interface.
Is this expected behavior? Is there a way to keep the interfaces separate as I expected?
Is this some misconfiguration issue on my part?
Additional info: here are the two ifupbr0.sh and ifupbr1.sh scripts:
% cat ifupbr0.sh
#!/bin/sh
set -x
switch=br0
echo args = $*
if [ -n "$1" ]; then
    sudo tunctl -u `whoami` -t $1
    sudo ip link set $1 up
    sleep 0.5s
    sudo brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
% cat ifupbr1.sh
#!/bin/sh
set -x
switch=br1
echo args = $*
if [ -n "$1" ]; then
    sudo tunctl -u `whoami` -t $1
    sudo ip link set $1 up
    sleep 0.5s
    sudo brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
I see this problem even if I detach the tap0b interface from br1. It still shows the traffic that I'd expect only on tap0a. That is, even when:
% brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.26a2d168234b       no              tap0a
br1             8000.000000000000       no
br2             8000.000000000000       no
It looks like I answered my own question eventually, but I'll document it for anyone else who hits this.
Evidently this really is the intended behavior of KVM for the options I was using.
At this URL:
http://wiki.qemu.org/Documentation/Networking
I found:
QEMU previously used the -net nic option instead of -device DEVNAME and -net TYPE instead of -netdev TYPE. This is considered obsolete since QEMU 0.12, although it continues to work.
The legacy syntax to create virtual network devices is:
-net nic,model=MODEL
And sure enough, I'm using this legacy syntax. I thought the new syntax was just more flexible, but it apparently has exactly this intended behavior:
The obsolete -net syntax automatically created an emulated hub (called a QEMU "VLAN", for virtual LAN) that forwards traffic from any device connected to it to every other device on the "VLAN". It is not an 802.1q VLAN, just an isolated network segment.
The VLANs it supports are also just emulated hubs, and as best I can tell they don't forward out to the host at all.
Regardless, I reworked the QEMU options to use the "new" netdev syntax and obtained the behavior I wanted.
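For reference, a minimal sketch of the equivalent -netdev form for the two NICs above (the ids net0 and net1 are arbitrary backend names; each -device is wired to exactly one backend, so no hub is created):
-netdev tap,id=net0,ifname=tap0a,script=ifupbr0.sh \
-device virtio-net-pci,netdev=net0,mac=AA:AA:AA:AA:00:01 \
-netdev tap,id=net1,ifname=tap0b,script=ifupbr1.sh \
-device virtio-net-pci,netdev=net1,mac=AA:AA:AA:AA:00:02 \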
What do you have in the ifupbr0.sh and ifupbr1.sh scripts? What bridging tool are you using? That is the important piece that segregates your traffic onto the desired interfaces.
I've used Open vSwitch to handle my bridging, but before that I used bridge-utils on Debian.
I wrote some information about bridge-utils at http://blog.raymond.burkholder.net/index.php?/archives/31-QEMUKVM-BridgeTap-Network-Configuration.html. I have other posts regarding what I did with bridging on the Open vSwitch side of things.