How to block all connections from the FortiGate to FortiGuard servers and FDN - FortiGate

I have purchased a new FortiGate 101E running FortiOS 6.0.6, and before I connect it to the internet I want to disable all connections to the FortiGuard servers and the FortiGuard Distribution Network (FDN). Our environment will use manual updates for the unit and its services, so I have:
Changed the DNS and NTP servers (because the defaults point to IPs owned by Fortinet).
In FortiGuard settings, disabled push updates, scheduled updates, "improve IPS quality", and the FortiGuard server override.
Disabled sending malware statistics to FortiGuard.
Disabled the submission of security rating results to FortiGuard with:
set security-rating-result-submission disable
Changed the DNS record for update.fortiguard.net to resolve to a local IP on the DNS server.
Disabled FortiGuard anycast.
In the web filter and DNS filter I will not use the FortiGuard category-based filter; I will use a static URL filter instead.
I just want to make sure none of my traffic reaches FortiGuard, the FDN, or any of their servers before I connect the unit to the internet.
Appreciate your help.
thanks.

Everything you've done so far appears to be solid. You could also block UDP/8888 and HTTPS over TCP/8888. I like your approach for update.fortiguard.net; you could also handle service.fortiguard.net, securewf.fortiguard.net, usservice.fortiguard.net, and ussecurewf.fortiguard.net the same way.
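If your local DNS server happens to run dnsmasq, sinkholing those extra hostnames the same way as update.fortiguard.net might look like the following sketch (10.0.0.250 is a placeholder for an unused internal sinkhole address, not anything from the original question):

```
# /etc/dnsmasq.conf — resolve FortiGuard hosts to a local sinkhole
# so the unit never reaches the real FDN servers
address=/update.fortiguard.net/10.0.0.250
address=/service.fortiguard.net/10.0.0.250
address=/securewf.fortiguard.net/10.0.0.250
address=/usservice.fortiguard.net/10.0.0.250
address=/ussecurewf.fortiguard.net/10.0.0.250
```

The same idea works with static A records on any other DNS server; the point is that every FortiGuard hostname the unit might resolve should land on an address you control.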

Related

Zabbix: filter discovery action by IP address

I'm currently monitoring several routers I have in my network with Zabbix 3.4.4. I'm now adding them manually but I'd like to use the discovery feature to do this automatically. The problem I have is that I need to monitor only the router, and not all other hosts on the net.
For example: I have a discovery rule for 10.0.0.0/16, I add a new network 10.0.10.0/24 which has several hosts, but I want to monitor only 10.0.10.1. Sadly being routers and from different manufacturers I cannot test services or responses, I can rely on ping only.
From what I see in the Action options there's no way to filter on such a condition, am I right? Is there any other way to filter host IPs so that Zabbix monitoring is only added to the routers' IPs?
It seems like the benefit of repeatedly scanning the whole subnet just to find a small number of hosts is just not there. I'd suggest looking into creating those hosts via API instead.
Having said that, a range of 10.0.0-255.1 might work, and also reduce your network traffic significantly.
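Following the API suggestion, a host.create call against the Zabbix JSON-RPC endpoint could be sketched roughly like this (the frontend URL, host-naming scheme, and group ID are assumptions for illustration; interface type 1 is the agent interface, which Zabbix requires even for ping-only hosts):

```python
import json
from urllib import request

ZABBIX_URL = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder URL

def host_create_payload(ip, auth_token, groupid="2"):
    """Build a Zabbix JSON-RPC host.create request for a ping-only router.
    groupid="2" is a placeholder; look up your real host-group ID first."""
    return {
        "jsonrpc": "2.0",
        "method": "host.create",
        "params": {
            "host": "router-%s" % ip,  # assumed naming convention
            "interfaces": [{
                "type": 1,   # agent interface, required even if unused
                "main": 1,
                "useip": 1,
                "ip": ip,
                "dns": "",
                "port": "10050",
            }],
            "groups": [{"groupid": groupid}],
        },
        "auth": auth_token,  # token from a prior user.login call
        "id": 1,
    }

def create_host(ip, auth_token):
    """POST the request to the Zabbix frontend and return the decoded reply."""
    req = request.Request(
        ZABBIX_URL,
        data=json.dumps(host_create_payload(ip, auth_token)).encode(),
        headers={"Content-Type": "application/json-rpc"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

A cron job iterating over your known router addresses (e.g. the 10.0.x.1 convention above) and calling create_host for any that are missing avoids scanning the whole /16 entirely.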

Solution for 1 GCP network-to-many GCP networks VPN topologies that addresses internal IP ambiguity

I have a problem where our firm has many GCP projects, and I need to expose services on my project to these distinct GCP projects. Firewalling in individual IPs isn't really sustainable, as we dynamically spin up and tear down hundreds of GCE VMs a day.
I've successfully joined a network from my project to another project via GCP's VPN, but I'm not sure what the best practice is for joining multiple networks to my single network, especially since most of the firm uses the same default internal address range for each project's default network. I understand that doing it the way I am will probably work (it's unclear if traffic will actually reach the right network, though), but this creates a huge ambiguity in terms of IP collisions, where two VMs could potentially exist in separate networks and have the same internal IP.
I've read that outside of the cloud, most VPNs support NAT remapping, which seems to let you remap the internal IP space of the remote peer's subnet (like, 10.240.* to 11.240.*), such that you can never have ambiguity from the peer doing the remapping.
I also know that Cloud Router may be an option, but it seems like a solution to a very specific problem that doesn't fully encompass this one: dynamically adding and removing subnets to the VPN.
Thanks.
I think you will need to use custom subnet mode networks (non-default) and specify non-overlapping IP ranges for the networks to avoid collisions. See "Creating a new network with custom subnet ranges" in this doc: https://cloud.google.com/compute/docs/subnetworks#networks_and_subnetworks
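Before creating the custom-mode networks, it can help to plan the per-project ranges up front so no two peers can ever collide. A minimal sketch of that allocation, assuming hypothetical project names and one 10.0.0.0/8 supernet carved into /16s:

```python
import ipaddress

def allocate_project_ranges(project_names, supernet="10.0.0.0/8", prefix=16):
    """Assign each project a distinct subnet carved from one supernet,
    guaranteeing the ranges joined over the VPN can never overlap.
    Project names here are placeholders."""
    subnets = ipaddress.ip_network(supernet).subnets(new_prefix=prefix)
    return {name: next(subnets) for name in project_names}

ranges = allocate_project_ranges(["hub", "spoke-a", "spoke-b"])
# every pair of allocated ranges is disjoint:
assert all(
    not a.overlaps(b)
    for na, a in ranges.items() for nb, b in ranges.items() if na != nb
)
```

Each project then creates its custom-mode network with its assigned range, and the hub can firewall and route per-project without ambiguity.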

AWS Elastic Load Balancing: Seeing extremely long initial connection time

For the past couple of days, we have often seen an extremely long initial connection time (15 seconds to 1.3 minutes) to our ELBs when making any request via SSL.
Oddly, I was only able to observe this in Google Chrome (not Safari nor Firefox nor curl).
It does not occur on every single request, but on around 50% of requests. It occurs with the first request (the OPTIONS call).
Our setup is the following:
Cross-Zone ELB that connects to a node.js backend (currently in 2 AZs in eu-west-1). All instances are healthy and once the request comes through, it is processed normally. Currently, there is basically no load on the system. Cloudwatch for ELB does not report any backend connection errors, neither a SurgeQueue (value 0) nor a spillover count. The ELB metrics show a low latency (< 100 ms).
We have Route53 configured to route to the ELB (we don't see any dns trouble, see attached screenshot).
We have different REST APIs that all have this setup. It occurs on all of the ELBs (each of them is connecting to an independent node.js backend). All of these ELBs are set up the same way via our CloudFormation template.
The ELBs also do our SSL-termination.
What could lead to such a behavior? Is it possible that the ELBs are not configured properly? And why could it only appear on Google Chrome?
I think it is possibly an ELB misconfiguration. I had the same problem when I attached private subnets to the ELB. Fixed it by changing the private subnets to public ones. See https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-manage-subnets.html
Just to follow up on Nikita Ogurtsov's excellent answer: I had the same problem, except that just one of my subnets happened to be private and the rest public.
Even if you think your subnets are public, I recommend you double check the route tables to ensure that they all have a Gateway.
You can use a single route table that has a gateway for all your LB subnets, if that makes sense for your layout:
VPC/Subnets/(select subnet)/Route Table/Edit
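If you pull the route tables programmatically (for example via boto3's EC2 describe_route_tables), the "does every LB subnet really have an internet gateway" check reduces to a small predicate. A sketch over data shaped like that API's output (the igw- ID below is fabricated for illustration):

```python
def subnet_is_public(route_table):
    """Return True if the route table has a default route to an internet
    gateway. The dict shape mirrors EC2 describe_route_tables output."""
    for route in route_table.get("Routes", []):
        if (route.get("DestinationCidrBlock") == "0.0.0.0/0"
                and route.get("GatewayId", "").startswith("igw-")):
            return True
    return False

# Example route tables (fabricated IDs for illustration):
public_rt = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc1234"},
]}
private_rt = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
]}
assert subnet_is_public(public_rt) and not subnet_is_public(private_rt)
```

Running that across the route table associated with each LB subnet quickly surfaces the one accidentally-private subnet described above.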
For me the issue was that I had an unused "Availability Zone" in my Classic Load Balancer. Once I removed the unhealthy and unused Availability Zone the consistent 20 or 21 second delay in "Initial Connection" dropped to under 50ms.
Note: You may need to give it time to update. I had my DNS TTL set to 60 seconds so I would see the fix within a minute of removing the unused Availability Zone.
This can be a problem with Amazon's ELB itself. The ELB scales the number of underlying instances with the number of requests, so you should see some peaks of requests at those times. Amazon adds instances to fit the load, and those instances are not always reachable during the launch process, so your clients get those timeouts. It's fairly random, so you should:
ping the ELB to collect all the IPs in use
run mtr against every IP found
keep an eye on CloudWatch
and look for clues there
Solution: if your DNS is configured to point directly at the ELB, you should reduce the TTL of the (IP, DNS) association. The IPs behind an ELB can change at any time, so stale caches can do serious damage to your traffic.
Clients keep some of the ELB's IPs in cache, which causes this kind of trouble.
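The "collect every IP behind the name and probe each one" step can be automated. A small sketch that resolves all A records for a hostname and times a TCP connect to each, so a dead or slow IP stands out (hostname and port are whatever your ELB uses; nothing here is specific to this setup):

```python
import socket
import time

def probe_all_ips(hostname, port=443, timeout=5.0):
    """Resolve every IPv4 address for the host and time a TCP connect to
    each; a value of None marks an IP that failed or timed out."""
    ips = sorted({info[4][0] for info in
                  socket.getaddrinfo(hostname, port, socket.AF_INET,
                                     socket.SOCK_STREAM)})
    results = {}
    for ip in ips:
        start = time.monotonic()
        try:
            socket.create_connection((ip, port), timeout=timeout).close()
            results[ip] = time.monotonic() - start
        except OSError:
            results[ip] = None  # unreachable — a likely culprit for the hang
    return results
```

Run it repeatedly while the problem is occurring: if one returned IP is consistently None or slow while the others connect in milliseconds, you have found the stale or broken endpoint the browser keeps stumbling on.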
Scaling Elastic Load Balancers
Once you create an elastic load balancer, you must configure it to accept incoming traffic and route requests to your EC2 instances. These configuration parameters are stored by the controller, and the controller ensures that all of the load balancers are operating with the correct configuration. The controller will also monitor the load balancers and manage the capacity that is used to handle the client requests. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, with the expectation that clients will re-lookup the DNS at least every 60 seconds. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered on each DNS resolution request. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones.
Best Practices ELB on AWS
An ALB load balancer needs 2 Availability Zones. If you use a private/public/NAT VPC setup, then all public subnets must have a connection to the internet.
For me the issue was that the ALB was pointing to an Nginx instance, which had a misconfigured DNS resolver. This meant that Nginx tried to use the resolver, timed out, and then actually started working a bit later.
Not really super connected with Load Balancer itself, but maybe helps someone figure out the issue in their own setup.
Check a security group too. That was an issue in my case.
I see a similar problem in my Chrome logs (1.3 minute lag). It happens on an OPTIONS request, and in Wireshark I don't even see the request leaving the PC in the first place. Any suggestions as to what Chrome might be doing are welcome.
We recently encountered Chrome taking 1.3 minutes to load pages, but the cause was slightly different. Just popping it here in case it helps someone.
1.3 minutes seems to be how long Chrome will wait when trying to connect to a specific IP. Our domain name has multiple IP addresses in the A record (similar to a CNAME setup), and one of those IPs belonged to a server that had crashed. So sometimes the browser would connect quickly because it used a valid IP, and sometimes we would get the long wait as it tried to connect to the invalid IP, timed out, and then retried with a valid one.
So it is worth checking that all the IPs listed when you dig your domain are responding correctly.

how do you add additional nics to a compute engine vm?

How do I add a NIC to a Compute Engine instance? I need more than one NIC so I can build out an environment. I've looked all over and there is nothing on how to do it.
I know it's probably some API call through the SDK, but I have no idea, and I can't find anything on it.
EDIT:
It's the RHEL 6 image. Figured I should clarify.
The question is probably old and a lot has changed since. Now it's definitely possible to add more NICs to an instance, but only at creation time (there's a networking tab on the create-instance page in the console; a corresponding REST API exists too). Each NIC has to connect to a different virtual network, so you need to create the additional networks before creating the instance (if you don't have them already).
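For the REST route, the relevant part is the networkInterfaces list in the instance resource passed to instances.insert. A sketch of that body, with placeholder machine type and network names (net-a and net-b are assumed to be pre-created networks, per the constraint above):

```python
# Sketch of the instance-resource body for the Compute Engine
# instances.insert REST call; names and zone are placeholders.
instance_body = {
    "name": "multi-nic-vm",
    "machineType": "zones/us-central1-a/machineTypes/n1-standard-1",
    "disks": [{
        "boot": True,
        "initializeParams": {
            # RHEL 6 image, matching the asker's setup
            "sourceImage": "projects/rhel-cloud/global/images/family/rhel-6",
        },
    }],
    # one entry per NIC — each must reference a *different* network
    "networkInterfaces": [
        {"network": "global/networks/net-a"},
        {"network": "global/networks/net-b"},
    ],
}

nets = [nic["network"] for nic in instance_body["networkInterfaces"]]
assert len(nets) == len(set(nets)), "each NIC needs its own network"
```

The same body shape applies whether you call the API directly, through a client library, or via a deployment tool; the console's networking tab is just filling this list in for you.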
Do you need an external address or an internal address? If external, you can use gcutil to add an IP address to an existing instance. If internal, you can configure a static network address on the instance, and add a route entry to send traffic for that address to that instance.
I was looking for a similar thing (to have a VM which runs Apache and nginx simultaneously on different IPs), but it seems that although you can have multiple networks (up to 5) in a project and each network can belong to multiple instances, you cannot have more than one network per instance. From the documentation:
A project can contain multiple networks and each network can have multiple instances attached to it. [...] A network belongs to only one project and each instance can only belong to one network.

Business website hosted publicly (for APIs, etc) needs to be accessible ONLY from inside office?

Would appreciate your patience with this question; still learning a lot of things.
My taxi booking start-up has a website (CakePHP) hosted on EC2 (for reliability) which is an ERP of sorts used only by internal employees. This tool also interacts with the cabs'/taxis' GPS receivers, in that these GPS units send data to the public server through some APIs which drive logic for the booking process. And as we don't have a very strong connection on premises, we've kept it all on EC2.
Now we are increasingly concerned about leaving information like this (customer data, vehicle info) publicly hosted, accessible from the internet and from outside the premises by a rogue employee. For our implementation, MySQL replication has already been considered, with us reading from a local slave, writing to the master, and so on. The only issue is that there's no way non-technical employees would know whether the data is current or whether the replication is broken. Also, we'd prefer to keep our servers online, as we don't want to invest in physical security for this hardware.
We are thinking of the following:
IP-address-based auth; addresses belonging to the office NAT would be allowed. The problem is we have a dynamic IP.
Computer-name/MAC-ID-based auth; almost no security once a user finds out. Also, can we even read these parameters from Chrome?
Storing a list of IP addresses that log in; as there are just 6 employees, we'd be able to monitor it for weird IPs. Not scalable, or even secure.
A hosts-file entry on each employee PC, with this "host" configured as a virtual host in apache2, so that hitting the IP address directly would do no good. Again, it only takes one smart employee.
Do help us out!
I think you should look at VPC, Amazon's Virtual Private Cloud. It's the better option for hosting solutions on EC2 that are private to your enterprise.
It would allow you to create a private network that is only accessible from your computers, with internet-facing servers in a separate subnet.
You have a number of ways of connecting the private subnet to your office; a VPN seems the option for you here (low cost, no special hardware required). See http://aws.amazon.com/vpc/ and http://d36cz9buwru1tt.cloudfront.net/Extend_your_IT_infrastructure_with_Amazon_VPC.pdf
I considered writing this as a comment but don't have enough rep...
I'm not sure where you are from, but in my region the cost of a static IP is negligible ($10-$50 a month), which is a drop in the bucket considering the liability risk you are facing. Then you can secure the server with usernames and passwords, and check the originating IP as well.
You may also be able to set up a computer to scrape something like whatismyip every hour to see if the IP changes, and update the allowed IP when it does.
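The IP-allowlist half of that idea is a few lines of code on the server side. A sketch, assuming a placeholder office range (203.0.113.0/29 below is documentation space, not a real address) that the hourly watcher job would rewrite whenever the dynamic IP moves:

```python
import ipaddress

# Placeholder office range; an hourly job checking a what-is-my-IP
# service would rewrite this list when the dynamic IP changes.
ALLOWED_NETWORKS = [ipaddress.ip_network("203.0.113.0/29")]

def is_office_ip(client_ip):
    """IP-allowlist check (option 1 above): True only for addresses
    inside the office's currently known public range(s)."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```

Combined with per-user logins, this gives two independent factors: even if the dynamic IP briefly drifts, credentials still gate access, and the watcher closes the allowlist gap within the hour.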