Configuring sendmail behind a firewall - configuration

I'm setting up a server which is on a network behind a firewall and I want programs on this computer to be able to use sendmail to send emails to any email address. We have an SMTP server running on this network (let's call it mailrelay.example.com) which is how we're supposed to get outgoing emails through the firewall.
So how do I configure sendmail to send all mail through mailrelay.example.com? Googling hasn't given me the answer yet, and has only revealed that sendmail configuration is extremely complex and annoying.

#eli: modifying sendmail.cf directly is not usually recommended, since it is generated by the macro compiler.
Edit /etc/mail/sendmail.mc to include the line:
define(`SMART_HOST',`mailrelay.example.com')dnl
After changing the sendmail.mc macro configuration file, it must be recompiled
to produce the sendmail configuration file.
# m4 /etc/mail/sendmail.mc > /etc/sendmail.cf
And restart the sendmail service (Linux):
# /etc/init.d/sendmail restart
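To confirm the smart host made it into the compiled configuration, you can check the generated file for the DS entry (a quick sanity check; on some distributions the file is /etc/mail/sendmail.cf instead):
# grep '^DS' /etc/sendmail.cf
DSmailrelay.example.com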
As well as setting the smart host, you might also want to disable DNS-based name resolution and possibly shift sendmail to a non-standard port, or disable daemon mode.
Disable Name Resolution
Servers that are within fire-walled networks or using Network Address
Translation (NAT) may not have DNS or NIS services available. This creates
a problem for sendmail, since it will use DNS by default, and if it is not
available you will see messages like this in mailq:
(host map: lookup (mydomain.com): deferred)
Unless you are prepared to setup an appropriate DNS or NIS service that
sendmail can use, in this situation you will typically configure name
resolution to be done using the /etc/hosts file. This is done by enabling
a 'service.switch' file and specifying resolution by file, as follows:
1: Enable service.switch for sendmail
Edit /etc/mail/sendmail.mc to include the line:
define(`confSERVICE_SWITCH_FILE',`/etc/mail/service.switch')dnl
2: Configure service.switch for files
Create or modify /etc/mail/service.switch to refer only to /etc/hosts for name
resolution:
# cat /etc/mail/service.switch
hosts files
3: Recompile sendmail.mc and restart sendmail for this setting to take effect.
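Note that with resolution restricted to files, the relay host itself must be resolvable from /etc/hosts, so it needs an entry there; for example (the address below is only a placeholder):
# cat /etc/hosts
192.0.2.10   mailrelay.example.com mailrelay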
Shift sendmail to non-standard port, or disable daemon mode
By default, sendmail will listen on port 25. You may want to change this port
or disable the sendmail daemon mode altogether for various reasons:
- if there is a security policy prohibiting the use of well-known ports
- if another SMTP product/process is to be running on the same host on the standard port
- if you don't want to accept mail via smtp at all, just send it using sendmail
1: To shift sendmail to a non-standard port.
Edit /etc/mail/sendmail.mc and modify the "Port" setting in the line:
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')
For example, to get sendmail to use port 125:
DAEMON_OPTIONS(`Port=125,Addr=127.0.0.1, Name=MTA')
This will require sendmail.mc to be recompiled and sendmail to be restarted.
2: Alternatively, to disable sendmail daemon mode altogether (Linux)
Edit /etc/sysconfig/sendmail and modify the "DAEMON" setting to:
DAEMON=no
This change will require sendmail to be restarted.
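Either way, you can verify what sendmail is (or is no longer) listening on after the restart, for example (assuming netstat is available):
# netstat -ltnp | grep sendmail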

http://www.elandsys.com/resources/sendmail/smarthost.html
Sendmail Smarthost
A smarthost is a host through which outgoing mail is relayed. Some ISPs block outgoing SMTP traffic (port 25) and require their users to send out all mail through the ISP's mail server. Sendmail can be configured to use the ISP's mail server as the smart host.
Read the linked article for instructions on how to set this up.

#Espo: Thanks for the great advice on where to start. Your link would have been better if I had been configuring sendmail for its first use instead of taking an existing configuration and making this small change. However, once I knew to look for stuff on "SmartHost", I found an easier way.
All I had to do was edit my /etc/mail/sendmail.cf file to change
DS
to
DSmailrelay.example.com
then restart sendmail and it worked.
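To confirm the change, a verbose test send should show the message being handed off to mailrelay.example.com in the SMTP dialog (the recipient address below is only an example):
# echo "test" | sendmail -v user@example.com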

Related

How do I ensure my JDBC source is possible to connect to from Foundry?

I want to ensure I can connect to my JDBC-capable source, but I want to ensure my firewall rules are correctly set up to allow this. How can I ensure my network infrastructure is correctly set up to connect?
One way to validate this is to open an SSH session to the Agent VM and use the ping command (or similar tools such as traceroute) to ensure a simple route to your host is available; note that ping only checks host reachability, so checking a specific port needs a tool like netcat (nc) or telnet.
This would look like:
ssh onto the Agent VM
Run ping my.hostname.com (and, to test the specific port, nc -vz my.hostname.com PORT)
If "no route to host" comes back, your firewall rules aren't correctly set up, so the Agent VM owner will need to change the configuration to allow the traffic.
If a reply indicating X bytes received from my.hostname.com comes back, you can reach the host and your firewall rules are correctly set up.
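A minimal check from the Agent VM might look like this (the hostname and port are placeholders; 5432 is just an example JDBC port, e.g. PostgreSQL, and nc may need to be installed):
ping -c 3 my.hostname.com
nc -vz my.hostname.com 5432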

Zabbix user-defined parameters with PSK encryption

I am trying to configure a user defined parameter on a Windows host. All my hosts are configured with PSK encryption and Zabbix server is able to get data without any issues.
However I cannot figure out how to use the zabbix_get manually with PSK encryption enabled.
zabbix_get -s x.x.x.x -p 10050 -k "internet.connection.check" --tls-connect=psk --tls-psk-identity="name" --tls-psk-file=cannot find any psk file on zabbix server
The problem is I cannot locate any PSK file on the zabbix server. Can I pass the PSK somehow?
The serverside PSK is configured in the GUI and stored in the database.
The Zabbix agent stores the PSK in a file.
I see three options:
1: Manually create a PSK file. Remember that any change of the key must then be made in the GUI, at the agent, and in your extra file.
2: Make a script that reads the key from the database. Remember that direct access to an application's database is usually forbidden and can cause compatibility issues after updating the application; read-only access should be possible.
3: Use the same key for all your agents. When you install a Zabbix agent on the Zabbix server (allowing you to monitor the server itself), you do have a PSK file in a normal place.
I wouldn't try to use an API or some clever script during discovery; that will make the solution hard to maintain. (I withdraw that last remark if you have thousands of servers to monitor and a team working with Zabbix.)
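A minimal sketch of option 1, assuming you copy the PSK value shown in the Zabbix GUI for that host into a local file (the identity and key below are placeholders and must match the GUI configuration):
# copy the hex key shown in the GUI into a file readable only by you
echo "<hex-key-from-the-GUI>" > /tmp/agent.psk
chmod 600 /tmp/agent.psk
zabbix_get -s x.x.x.x -p 10050 -k "internet.connection.check" \
    --tls-connect=psk --tls-psk-identity="name" --tls-psk-file=/tmp/agent.psk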

Cannot access Google Cloud Compute Instance External IP

I have set up a Google Cloud Compute instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can ssh in using the external address.
I have installed the vncserver and can access it on port 5901 from localhost as well as the internal IP.
I am trying to access it from the static, external IP address but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
--- After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:
The external IP address can be reached
It is enabled and working for SSH
It is enabled and working for HTTP
It does not seem to allow traffic from vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that the Google firewall and your VM are not blocking packets, you must make sure that the VNC service is configured to listen on the external address (on GCE the external IP is NATed and never appears inside the VM, so in practice this means listening on all interfaces, 0.0.0.0, rather than only on localhost).
You can always use a utility like nmap from outside the Google project to reveal information about the port status.
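For example, from a machine outside the project (5901 is the VNC port mentioned in the question; the IP is a placeholder):
nmap -Pn -p 5901 <EXTERNAL-IP>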
Enable HTTP/HTTPS traffic in the firewall as needed; it should then work.
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute instance's external IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TL;DR: make sure you are requesting http, not https.
In my case I was following the link from my GCE instance's External IP property, which takes you directly to the https version, and I didn't set up https, so that was causing the 'site not found' error.
Create an entry in your local SSH config file as below, with the local forward port mentioned. In my case it is an example with YARN's IP, which I want to access in a browser.
Host hadoop
HostName <External-IP>
User <Local-machine-username>
IdentityFile ~/.ssh/<private-key-for-above-user>
LocalForward 8089 <Internal-IP>:8088
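With that entry in place, something like the following should open the tunnel (the host alias 'hadoop' comes from the entry above); the forwarded UI is then reachable at http://localhost:8089 on your local machine:
ssh -N hadoop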
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, you connect to the instance using SSH and verify you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection is positive, it means that you can access your external URL:
http://<IP-EXTERNAL-VM>
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 (plus ICMP) are exposed. If your server is running on a different port, create a firewall rule for it.
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic based on your need.
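For example, a rule for the VNC port from the question might look like this (the rule name and source range are placeholders, assuming the default network):
gcloud compute firewall-rules create allow-vnc --allow=tcp:5901 --source-ranges=0.0.0.0/0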
For me the problem was that I had set the firewall rule's direction to 'Egress' instead of 'Ingress'.
If you have already enabled 'https', just disable it and check again.

Server Sent Events in Google Compute Engine

I'm trying to get an app that uses Server-Sent Events working on Google Compute Engine. When SSH'd into the box I can view them, but not externally via the ephemeral IP, aka
curl 0.0.0.0/route
works from inside the box but
curl xx.xx.xx.xx/route
just hangs. Looking at the headers from other routes, there seems to be some sort of caching proxy between the box and the outside world that is preventing Server-Sent Events from getting out because the connection hasn't completed. There is a similar issue with nginx until you set proxy_cache off, but as far as I can tell there is no documentation for configuring the proxy that Compute Engine uses.
Is it possible to do server sent events from Google Compute Engine and if so what do you have to do to get it to work?
edit:
Request is created with the browser EventSource object, so it has the default headers which look to be Accept:text/event-stream, Cache-Control:no-cache, plus Referer and User-Agent.
The headers I add are Content-Type:text/event-stream, Cache-Control:no-cache, and Connection:keep-alive.
In AWS everything is fine when I run it behind nginx, assuming I modify the config appropriately.
In Google Compute Engine other pages load fine, but the route with Server-Sent Events just hangs, never even receiving headers. The reason I suspect Google is sticking a proxy between the GCE box and the outside world is the addition of Via:HTTP/1.1 proxy10205 headers.
There may be magic on the lower network layers, but there is no proxy (transparent or otherwise) between your VM and the internet on GCE for the external IP. I'm not sure where the Via header comes from; doesn't the browser/client have a proxy configured?
External IPs are not configured in the most straightforward way on GCE, though, which might be tripping up something in the stack. For external IPs, the external IP itself does not appear anywhere in the VM config; it is translated to the VM's internal IP by 1-to-1 NAT. Load-balanced IPs do end up on the host with the external IP visible, though (even though these too are configured in a funny way).
Even though I don't think anything should really care about the server IP for SSE, maybe try setting up a load-balanced IP pointing to just that one instance and see if it works any better?
"Via:HTTP/1.1 proxy10205" in your HTTP response is not from Google Compute Engine.
GCE does not strip out the Server-Sent Events headers. I list the simple steps below, which can help you configure a demo of Server-Sent Events on a GCE VM instance:
Create a GCE instance using the CentOS image.
Install Apache web server and PHP:
$ sudo yum install httpd php
Create an index.html file with the HTML content from this page:
$ sudo vi /var/www/html/index.html
Create a PHP file called demo_sse.php in the www root directory ($ sudo vi /var/www/html/demo_sse.php ) with the following content:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
$time = date('r');
echo "data: The server time is: {$time}\n\n";
flush();
?>
Now visit the webpage. You can also verify the header using the curl command:
$ curl -H "Accept:text/event-stream" --verbose http://<YOUR-GCE-IP ADDRESS>/demo_sse.php

When trying to connect through a proxy server TortoiseHg for Windows says "SSL error: unknown protocol"

The scenario:
You're behind a proxy server on Windows. You've configured TortoiseHg to use a proxy server; that is, you've entered a server name/IP and port number. You are able to connect to the internet using Internet Explorer. But when you try to pull or push, it produces the error message "SSL error: unknown protocol".
(I plan to answer this myself.)
The cause is that Internet Explorer is using an automatic proxy configuration script and TortoiseHg is using a particular proxy server. IE is not using the same proxy server because the automatic script picked a different proxy server.
The solution is to enter the proxy server used by TortoiseHg in IE's connection settings, or figure out which proxy server you're using at the moment and tell TortoiseHg to use that one. You may need to browse an external web site before TortoiseHg can connect.
You can figure out which proxy server you're using by browsing with IE and then running the DOS command:
netstat
and you'll see some connections in the Foreign Address column on port 80 or 8080 (common proxy server ports).
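For example (a rough sketch; 8080 is just a guess at a common proxy port, so adjust the filter as needed):
C:\> netstat -n | findstr ":8080"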
In addition to your excellent tip, I offer one more...
If your company is using an automatic proxy script, then the proxy used for web browsing may not be the one you need for Mercurial. Thus if you try the proxy you find via netstat, and you get "getaddrinfo failed" errors in tortoise, then try this...
Get the proxy script address: IE -> Internet Options -> Connections -> LAN Settings. Copy the URL from the "Address" box.
Browse to that address and save the file to disk.
Open that file in Notepad and scroll to the end; it probably ends with something like return "PROXY ipaddresshere:port" -- that's the IP and port you need.
Plug that IP and port into TortoiseHg: right-click the repo, click Settings, click Proxy, and put the IP and port into the Host field. I generally don't need user and password, so try without them first.