I've added the following configuration to the runner's config.toml:
[session_server]
listen_address = "[::]:8093" # listen on all available interfaces on port 8093
advertise_address = "runner-host-name.tld:8093"
session_timeout = 1800
The manager instance and workers are running in the same VPC on AWS. I have put the manager's private IP address in the advertise_address option, and this address and port are reachable from the worker machines. But when I click the Debug button on the job, it opens the debug page with a black rectangle and nothing more; no shell appears in it. There are no errors or warnings related to session server connectivity in the job log. What am I doing wrong?
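For what it's worth, this is how I verified that the advertise_address is reachable from a worker machine (nc just happens to be installed there, and the hostname is the one from my config, which resolves to the manager's private IP; all that matters is that the TCP connection opens):
nc -vz runner-host-name.tld 8093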
I have deployed a Spring Boot app on an OCI compute instance and it's coming up nicely. The compute instance was created with a public IP and its security list updated to allow connections from the internet. But I wasn't able to hit the endpoint from the internet. For that reason, I thought of configuring a load balancer.
I created the load balancer in a separate subnet (10.0.1.0/24) with its own route table and security list. I configured the LB's security list to send all protocol packets to the compute's CIDR (10.0.0.0/24) and configured the compute's security list to accept packets from the LB. I was expecting the LB to make a connection with the backend, but it's not.
I am able to hit the LB from the internet.
The LB's route table routes all IPs through the internet gateway. There is no route defined for the compute's CIDR, as it's within the VCN.
The LB has its own security list, which allows outgoing packets to the compute subnet and incoming packets from the internet, as below:
The compute's security list accepts packets from the LB:
Let me know if I am missing something here.
My internet gateway:
My backend set connection configuration from the LB:
The LB fails to make a connection with the backend, and there seems to be no logging info available:
The app is working fine if I access it from the compute node.
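For reference, this is how I check it from the compute node itself, and what an equivalent check from another host in the VCN would look like (I'm assuming the default Spring Boot port 8080 here, and 10.0.0.x stands for the instance's private IP):
# on the compute node itself
curl -v http://localhost:8080/
# from another host in the compute subnet (10.0.0.0/24) - effectively what the LB needs to be able to do
curl -v http://10.0.0.x:8080/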
The LB has a health check that tests the connection to your service. If it fails, the LB will keep your backend out of rotation and report the Critical health status you're seeing.
You can get to it by looking at the backend set and clicking the Update Health Check button.
Edit:
Ultimately I figured it out; you should run the following commands on your backend:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Use the port that you configured your app to listen on.
I used httpd instead of Spring, but I also did the following:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -F -R -v /var/www/html
I'm not really too familiar with SELinux, but you may need to do something similar for your application.
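I can't say what exactly Spring needs on the SELinux side; a rough way to check whether SELinux is involved at all, and to label the port if a confined service needs it (8080 is an assumption, and semanage comes from the policycoreutils python utilities):
# is SELinux enforcing, and has it denied anything recently?
getenforce
sudo ausearch -m avc -ts recent
# if the app's port needs the http label and isn't already listed:
sudo semanage port -l | grep http_port_t
sudo semanage port -a -t http_port_t -p tcp 8080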
Additionally, setting up a second host in the same subnet to log in to and test connecting to the other host will help with troubleshooting, since it verifies whether your app is accessible at all outside the host it's on. Once it is, the LB should come up fine.
TL;DR In my case it helped to switch the Security List rules from stateful to stateless on the 2 relevant subnets (where the loadbalancer was hosted and where the backends were located).
In our deployment I had a loadbalancer with public IP located on one subnet, while the backend to this loadbalancer was on another subnet. Both subnets had one ingress and one egress rule - to allow everything (i.e. 0.0.0.0/0 and all ports allowed). The backends were still not reachable from the loadbalancer and the healthchecks were failing.
Even though, per the documentation, switching between stateful and stateless should not have made a difference in my case, it solved my issue.
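For completeness, if you later want something tighter than allow-everything, stateless rules have to cover both directions explicitly. Roughly, using the subnets from the question above and assuming the backend listens on 8080:
LB subnet (10.0.1.0/24), stateless:
  egress  TCP to 10.0.0.0/24, destination port 8080
  ingress TCP from 10.0.0.0/24, source port 8080
Backend subnet (10.0.0.0/24), stateless:
  ingress TCP from 10.0.1.0/24, destination port 8080
  egress  TCP to 10.0.1.0/24, source port 8080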
I have a Symfony 4 application running on 1und1 on a package called "1&1 Unlimited Plus". The SMTP config looks like this:
MAILER_URL=smtp://smtp.1and1.com?username=****&password=******
and it works fine. I also have a development copy of this application on my local dev server with same config. This dev copy can send emails, too.
Since the databases on "1&1 Unlimited Plus" are limited to 1GB, I ordered another cloud server from Ionos. With the same config I am not able to send emails. I got this error in dev.log:
Exception occurred while flushing email queue: Connection could not be established with host smtp.1and1.com [Connection timed out #110]
Pinging smtp.1and1.com works and resolves to the same IP as when I ping from my dev server. On this cloud server I am running:
Plesk Onyx
Ubuntu 18.04.2 LTS‬
DNS is turned off. I have just an A record on the origin server pointing to the IP of the cloud server. No MX records are set.
I checked the firewall rules. No outgoing limits found, just incoming. I added TCP 25 to the incoming rules, but I don't know if that is necessary.
I tried other ports, but then I got this:
Exception occurred while flushing email queue: Expected response code 220 but got an empty response []
More config:
swiftmailer:
    url: '%env(MAILER_URL)%'
    spool: { type: 'memory' }
Any idea what's wrong?
I found the solution. Ionos closes outgoing port 25 by default. This is nothing I can find or change in the admin area; only technical support can open this port.
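A quick way to see from the server whether a given outgoing SMTP port is open at all (openssl is assumed to be installed; 587 is only worth a try if the provider accepts authenticated submission there):
openssl s_client -connect smtp.1and1.com:25 -starttls smtp
openssl s_client -connect smtp.1and1.com:587 -starttls smtp
If 587 answers, the port and encryption can go straight into the DSN, e.g. MAILER_URL=smtp://smtp.1and1.com:587?encryption=tls&username=****&password=******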
I am trying to setup multiple IPFS peers on the same Windows machine, in order to test file sharing and pubsub service.
I have created a different .ipfs folder for each peer, that is .ipfs1, .ipfs2.
In each config file I have replaced the ports 4001, 5001 and 8080 so they don't overlap.
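For peer 2, for example, the Addresses section of the config ends up looking roughly like this (the exact ports are arbitrary, as long as they don't clash):
"Addresses": {
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4002",
    "/ip6/::/tcp/4002"
  ],
  "API": "/ip4/127.0.0.1/tcp/5002",
  "Gateway": "/ip4/127.0.0.1/tcp/8081"
}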
So when I want to run all the daemons at the same time, I open 2 console windows and enter in each one:
set IPFS_PATH=C:\Users\MyName\.ipfsX (X = the peer number)
ipfs daemon --enable-pubsub-experiment
When I want to execute commands on a specific peer, I open a new console window and type:
set IPFS_PATH=C:\Users\MyName\.ipfsX (X = the peer number)
cmd
So let's get to the problem. I want to run 2 peers, subscribe both to the same pubsub channel and exchange messages.
I have 6 open console windows, 3 for each peer:
1 for the running daemon
1 for executing sub and listening for messages
1 for inputing commands
The issue is that when I send a pubsub message, only the same peer receives it.
Only Peer1 receives messages published by Peer1, and so on.
Is there something wrong with my multi-peer setup? Any help would be appreciated.
A better approach is to use Docker or VMs; the setup you described is very likely to cause issues. Try running ipfs swarm peers to see whether your nodes are connected to any peers.
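If the list comes back empty, you can connect the two local daemons to each other manually, roughly like this (run ipfs id in peer 2's console to get its peer ID; 4002 is whatever swarm port you gave peer 2, and older go-ipfs versions use /ipfs/ instead of /p2p/ in the multiaddr):
ipfs id
ipfs swarm connect /ip4/127.0.0.1/tcp/4002/p2p/<peer-2-id>
Pubsub messages are only relayed between connected peers, so once swarm peers shows the other node, the sub window of peer 2 should start seeing messages published by peer 1.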
I was having issues with internet access on a Windows Server 2012 VM, and accidentally disabled the NIC via RDP. Now I can't access the VM.
I tried editing the instance from console; I wanted to add a 2nd NIC, but couldn't do so. I saw something about having to add another "network" but couldn't figure it out.
Is there any way to regain access to this VM?
You can re-enable the network interface on the Windows VM using the Serial port.
Try these steps:
Open the VM instance page from the Google Cloud Platform Console.
Click Edit on the top bar.
Enable the Enable connecting to serial ports option and click Save.
Start the VM if it isn't already running.
You will be taken back to the VM's info page; now you can open the Connect to serial port dropdown and select Port 2.
A new window will open up and you will get the Special Administration Console (SAC). Run cmd in this serial command prompt.
Open up the command prompt channel by pressing Esc + Tab.
You will have to log in as an admin on that instance using your admin credentials.
In the command prompt, you can enable back your network interface by running these commands:
Commands:
# List all network adapters - The name is important
netsh interface show interface
# Enable the network adapter
netsh interface set interface "MY_NETWORK_ADAPTER_NAME" admin=enable
Your instance's network adapter should now be enabled and you should have network access to your VM again.
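If you prefer the gcloud CLI to the console UI, the same can be done roughly like this (instance name and zone are placeholders):
gcloud compute instances add-metadata my-instance --zone us-central1-a --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port my-instance --zone us-central1-a --port 2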
I have observed a similar problem on a Windows server, where it was not able to resolve the metadata server and auto-assigned an APIPA address (169.254.x.x).
I performed the troubleshooting steps from the Google documentation and also tried restarting network-related services; nothing worked. I finally reached this post, tried the following, and it got fixed:
netsh interface set interface name="NAME OF INTERFACE" admin=disabled
netsh interface set interface name="NAME OF INTERFACE" admin=enabled
Not sure why Windows behaves this way. Hope this helps.
I accidentally disabled the network adapter on a Windows virtual machine on Compute Engine. I tried delete-access-config and add-access-config through the gcloud utility, roughly as shown below, and that did not seem to make any difference. Any suggestions on how to re-enable the network adapter so I can RDP back into the VM, or am I going to have to rebuild it?
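The commands I tried looked like this (instance name, zone and access config name are just the defaults in my case):
gcloud compute instances delete-access-config my-instance --zone us-central1-a --access-config-name "External NAT"
gcloud compute instances add-access-config my-instance --zone us-central1-a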
Thanks,
To those having this issue in 2017: I was just having the same problem and spent a day trying to figure out a solution. I ended up successfully enabling the network adapter using a serial port connection. Here are the steps:
1) Open the Google Compute Engine VM instance from the console and click "Edit"
2) Scroll down to "Serial port" and enable it. Save...
3) Select the dropdown next to "Connect to serial port" and choose "Port 2"
4) A new window will open with a serial command prompt. Run the "cmd" command in it
5) Once it has executed, switch to the cmd channel by pressing ESC+Tab
6) Authenticate with user credentials that have admin rights on your instance
7) Now you have access to your instance's command prompt. To enable your network interface, run the following commands:
netsh interface show interface (this will show you all network adapter names; note the name of the one you need to enable)
netsh interface set interface "network_adapter_name" admin=enable
(e.g netsh interface set interface "MyEthernet" admin=enable)
8) Show everyone around how happy you are to figure this one out.
At present, there's no way to enable a disabled network interface on a Windows GCE VM, which renders RDP unusable. Recreating the VM seems like the only option.
In order not to lose data from your disabled instance, make sure "Delete boot disk when instance is deleted" is unchecked in the Developers Console configuration for this VM before deleting the instance. You can then attach the left-behind disk to your new instance to retrieve the data. Afterwards you may keep or delete the additional disk.
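Attaching the left-behind disk to the new instance can then be done from the console or roughly like this with gcloud (names and zone are placeholders):
gcloud compute instances attach-disk my-new-instance --disk my-old-boot-disk --zone us-central1-a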