It appears my GCP Compute Engine service/instance/whatever-you-call-it is refusing connections from my machine at times. I was just trying to set up an SFTP connection through a desktop app and probably failed a password too many times.
But I don't have Fail2Ban installed, and I don't see any Firewall Rules in the GCP interface blocking my IP. During what I perceive as the block, I can't even ping the machine. As soon as I switch to my cellphone's hotspot, I can ping it again. See the screenshot below - I switched to the hotspot midway through that ping.
Does anyone know where I can look to control this setting and/or see what's being done here?
The lastb output shows regular attempts to get into my machine, so I don't understand why something is being so harsh on me when this level of spam is getting through to the Linux level anyway.
Found the answer: it's sshguard running on the Linux side.
In /var/log/auth.log:
Apr 19 01:43:05 x-x sshguard[696]: Blocking "-.-.-.-/32" for 122880 secs (3 attacks in 1 secs, after 11 abuses over 3268716 secs.)
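For anyone else stuck behind one of these blocks, here is roughly how I cleared it and whitelisted my own address. This is a sketch assuming sshguard's iptables backend and Debian/Ubuntu default paths; the chain name, backend and whitelist location can differ by distro, and 203.0.113.10 is a placeholder for your own IP:
# list the addresses sshguard is currently dropping
sudo iptables -L sshguard -n
# clear the current blocks (they will return after repeated failures)
sudo iptables -F sshguard
# whitelist your own address so sshguard ignores future failed logins from it
echo "203.0.113.10/32" | sudo tee -a /etc/sshguard/whitelist
sudo systemctl restart sshguard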
Related
My GCP Compute Engine instance is down, although the GCP console shows it as up and running. It is an Ubuntu 18.04 server with 0.6 GB of memory (an always-free-tier instance). It has not been restarted for more than a couple of months, and system usage was around 60% when last checked. I have already checked this answer, and none of the suggestions there seem to apply to my case.
Free disk space stands at 54%.
SSH is configured correctly.
There is no firewall issue.
The VM simply seemed to stop responding, taking the hosted URL down. When I checked the Compute Engine monitoring tabs, all the graphs looked normal with no visible changes. I even checked the logging, but there were no crash-type log entries. I stopped and restarted the instance, and it started working perfectly, as if nothing had happened. In AWS, a VM instance fails the system reachability checks in scenarios like this.
Does GCP have something similar to the AWS system reachability checks?
Are there any logs, or something similar, that would help me understand why the Compute Engine instance stopped responding?
There are different kinds of tests with custom scenarios that you can run in your project; please see the reference link [1].
[1] https://cloud.google.com/network-intelligence-center/docs/connectivity-tests/how-to/running-connectivity-tests
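For what it's worth, a connectivity test can also be created from the CLI. The sketch below is based on the gcloud reference linked above; the project, zone, instance name, test name and source IP are placeholders, and it is worth confirming the exact flags with gcloud network-management connectivity-tests create --help on your gcloud version (the Network Management API also needs to be enabled in the project):
# test reachability from an external address to the VM on port 22
gcloud network-management connectivity-tests create ssh-reachability \
    --source-ip-address=203.0.113.10 \
    --destination-instance=projects/my-project/zones/us-central1-a/instances/my-vm \
    --destination-port=22 \
    --protocol=TCP
# view the analysis result once the test has run
gcloud network-management connectivity-tests describe ssh-reachability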
I have a basic stack of containers on their own user-defined network with a subnet of 172.21.0.0/16. My MySQL container's address is 172.21.0.2 and the PHP/Apache container's address is 172.21.0.3.
Until this point I had configured MySQL to allow incoming connections from PHP at 172.21.0.3, which made perfect sense. Now, it seems as though the connections are coming from 172.21.0.1, the gateway, and this doesn't make much sense to me. My (basic to intermediate) understanding is that the gateway should only be used when traffic is destined for an address outside the local network, but obviously in this case MySQL and PHP/Apache are on the same network.
Two of our environments have started acting like this, and while it's a simple fix to permit connections from the gateway address, I'm hesitant to proceed without an understanding as to what has happened and why. This also seems to add extra delay to database queries within the application.
Logging in to an affected environment via phpMyAdmin displays "User: root@172.21.0.1" in the "Database Server" information pane. An unaffected environment displays "root@phpmyadmin_1.test_default" (user@[container].[network]).
Both environments are using the exact same images, and the same version of Docker - 18.06.1-ce. Other than a version upgrade of Docker, nothing else has changed with regards to the docker-compose.yml I was using.
Why has my environment started acting like this? Should I prefer the connection coming in from the actual source, and not via the gateway? How can I return to that way of operation?
Thank you for any guidance or knowledge.
For anyone else that experiences a similar rut, I'm of the mind that this was caused by an upgrade of Docker from 18.03.1-ce to 18.06.1-ce via Docker's own repository. Performing a server reboot after this operation has (for now) restored sense to the networking of the stack.
The connection to my MySQL container is now correctly coming from the PHP/Apache container and not from the gateway address of the bridge network. The lag this introduced is gone, and I'm able to remove the privilege associated with the gateway address.
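In case it helps anyone comparing environments, this is roughly how I check which addresses Docker has assigned and scope the MySQL grant to the application subnet instead of the gateway. The network name test_default comes from my compose project; the container, database, user and password names below are placeholders:
# confirm the addresses of the containers on the user-defined network
docker network inspect test_default --format '{{json .Containers}}'
# see which hosts each MySQL account may connect from
docker exec -it mysql_1 mysql -uroot -p -e "SELECT user, host FROM mysql.user;"
# allow the app user in from the containers' subnet rather than from 172.21.0.1
docker exec -it mysql_1 mysql -uroot -p -e "CREATE USER IF NOT EXISTS 'app_user'@'172.21.0.%' IDENTIFIED BY 'secret'; GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'172.21.0.%';"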
I'm using Azure App Services to run about 15 PHP web apps. Most of these apps connect to my 'Azure Database for MySQL server' instance. This is a Basic-tier instance (1 vCore & 2GB memory).
The MySQL instance hosts about 30 small databases (ranging between 1 to 100MB in size).
The load on the MySQL instance is stable and low. CPU is constantly under 20%, memory is constantly under 50% and IO does not even show up in the metrics in the Azure Portal.
My problem is this:
Every once in a while the server goes offline for about 1 or 2 minutes (max 5 min). I see that client applications try to connect, they hang for a while to finally get the error:
SQLSTATE[HY000] [2006] MySQL server has gone away
It seems to happen randomly. Sometimes a few times a week or even a day. But sometimes it doesn't happen for weeks.
What's noticeable, though, is that when it happens I see a downward spike in memory and an upward spike in CPU in the metrics graph on the portal, like this:
Does anyone experience the same issue on Azure Database for MySQL? And did anyone find a solution?
I'm starting to think that it's caused by resource movement on the Azure side, but I don't have any evidence to back that up. If so, shouldn't that happen without any downtime?
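In the meantime, this is roughly what I run right after one of these blips to see whether the server process actually restarted or merely stopped answering. The server and admin names are placeholders; Azure Database for MySQL logins use the user@servername form:
# a very small Uptime value suggests a restart/failover happened during the blip
mysql -h myserver.mysql.database.azure.com -u myadmin@myserver -p \
      -e "SHOW GLOBAL STATUS LIKE 'Uptime'; SHOW GLOBAL VARIABLES LIKE 'wait_timeout';"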
Scaling up from the Basic 1 core tier with Compute Gen 4 to Basic 2 core tier with Compute Gen 5 seemed to resolve the problem.
I'm not sure what was causing the issue, though.
I started experiencing this error in May 2019.
If I happen to be connected to the MariaDB server over SSH when it occurs and htop is running, I can see rsyslog suddenly going crazy. It bogs down the CPU and the network connection becomes unresponsive. The CPU and network activity don't show up in Azure, but running w in the SSH session after the network recovers shows that the CPU load was definitely very high during the last 15 minutes.
I traced it back to the OMS agent. When that service is killed on the MariaDB server, the server runs without any problem. As soon as the OMS agent is started, "MySQL has gone away" pops up on the clients within 24 hours because the network connection to the server machine becomes unresponsive.
It is possible to uninstall the OMS agent from the Azure portal, but it comes back within 48 hours.
The only way I found to get rid of the OMS agent is to also stop walinuxagent on the Linux server.
Scaling the server up may solve the problem, since you have more CPU power to absorb the extra load induced by the OMS agent. I prefer to kill the OMS agent and walinuxagent instead of spending more money on a more expensive server.
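If you want to confirm the agent is the culprit before removing anything, something along these lines works as a quick check and a temporary stop. Treat it as a sketch: the service and path names below are what I have seen on Ubuntu installs of the agent and can vary by distro and agent version:
# check whether the OMS/guest-agent processes are at the top of the CPU list
ps aux --sort=-%cpu | grep -E 'omsagent|omiagent|waagent' | grep -v grep
# stop the Azure guest agent first so it does not reinstall the OMS extension
sudo systemctl stop walinuxagent
# then stop the OMS agent itself
sudo /opt/microsoft/omsagent/bin/service_control stop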
Edit:
It turns out OMS is installed because the VM is part of a Log Analytics workspace (search for "Log Analytics workspaces" in the search bar). Removing the VM from the workspace immediately uninstalls OMS. There is no need to stop walinuxagent.
It all started when I was trying to connect to a VM set up on GCP for SFTP only. Every time I try to check SSH or set up SFTP on this machine, it becomes unreachable (while it is reachable and well connected from my friend's laptop at the same time).
ping <ip address>
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
I can ping other VMs and also able to SSH into them.
I thought the problem might be with my router, so I used my phone's hotspot to connect to the internet and tried again. It still did not work. Moreover, to my surprise, I got the same timeouts when pinging www.google.com (strange, because I am able to use the internet through the Chrome browser).
Other details:
MacBook Pro, High Sierra
Airtel Broadband/ Vodafone (HOTSPOT)
Firewall is off.
Others seem to have faced a similar issue (but I could not find any satisfactory answer in these links):
https://askubuntu.com/questions/608194/have-internet-connection-but-cant-ping-external-sites
Check whether you have turned on Stealth Mode in Settings > Security & Privacy > Firewall.
If it's turned off try turning the whole firewall off, just to test whether that's the culprit.
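You can also check this from Terminal instead of the UI. On stock macOS the application firewall is driven by socketfilterfw; these are the standard flags, though paths can differ on older releases:
# show the current firewall and stealth-mode state
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
/usr/libexec/ApplicationFirewall/socketfilterfw --getstealthmode
# temporarily turn stealth mode off to test whether it is involved
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setstealthmode off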
I want to send email with Exchange by using telnet to port 25. Until two weeks ago I was able to, but now a "security fix" from Microsoft has removed this possibility.
When I try, I get this message:
421 4.3.2 Service not available, closing transmission channel
What can I do?
I use a service (Message Labs (ML)) to filter out all the spam. We got a new internet connection, and in the process of re-configuring ML's inbound/outbound services to the new IP, I got an error. So, I tested it externally by telnetting to the IP on port 25 and got the "421 4.3.2 Service not available, closing transmission channel" error. What I didn't realize at first was that it failed because I had set a specific group of IPs on the 2007 Edge server receive connector (for the ML servers). So, I added my LAN network and another IP for the external host I was testing from, and lo and behold, I could connect from both.
What I figured was happening with ML was that the server they used to test connectivity was on an address that was excluded on the edge server.
So, I removed my testing IPs and created a new, temporary, receive connector on the edge server, accepting from all addresses (0.0.0.0 - 255.255.255.255). I then submitted the change to ML again and guess what...this time they accepted it. Now, I'll simply remove the test receive connector and everything should be golden.
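For reference, this is the sort of manual check I ran against port 25 from each source address (the host names and addresses below are placeholders). In my case, connecting from an address outside the connector's range produced the 421 error straight away, while an allowed address got the 220 banner:
telnet mail.example.com 25
# once you see the 220 banner, walk through a test delivery
EHLO test.example.com
MAIL FROM:<test@example.com>
RCPT TO:<user@example.com>
QUIT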
SMTP is the protocol that is used to receive email from the rest of the world, so I doubt that Microsoft has dropped it. There must be some other misconfiguration on your server.
Try double-checking your relay settings and the event log on your Exchange server.
I found the answer at this website:
http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=2900802&SiteID=17
Thanks for your help!
Basically, this functionality was removed by default, and it can be restored by means of an ad hoc configuration - but with no guarantee that further "updates" won't break the system again. Thanks, Microsoft.
After more than 5 years of flawless operation, the 2010 Edge server suddenly stopped accepting mail with "421 4.3.2 Service not available". The SmtpReceive log (Get-TransportServer | select ReceiveProtocolLogPath) confirmed that it was indeed the Edge server generating this error.
The Edge server had two IP addresses on a single NIC. After the following steps, all worked fine again:
remove one IP address from the NIC on the Edge server
update the static DNS entry to point to the second IP address
on the Default internal receive connector, allow receiving mail on all available IPv4 addresses.
Note: this setup is not a security best practice for a DMZ. It is better to use two NICs, each with a leg in a different zone.