I am trying to run Quagga on a couple of connected VMs and am confused about how to write the neighbor command in the bgpd.conf configuration file. All my questions are about this specific neighbor statement:
neighbor peer remote-as asn
What should I provide for the 'peer' IP value?
Say I am configuring a VM A that is many hops away from a neighbor B (let's assume the same AS number). When I add neighbor B to the bgpd.conf configuration file, which of B's interface IPs should be used as the peer IP?
I am seeing that the session comes up for some interface IPs but not for others, so I want to know which interface IP should, in theory, be specified.
I have done a lot of searching, but nothing I found is clear about this. Please help.
Are you able to ping the IP addresses/interfaces for which you are unable to establish the BGP session?
If not, the session cannot come up: BGP runs over a TCP connection, so the peer address you configure must be reachable and routable from the local router.
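A common approach for a peer that is several hops away (iBGP in particular) is to peer between stable loopback addresses rather than physical interface IPs, so the session does not depend on any single link being up. A minimal bgpd.conf sketch, assuming both routers are in AS 65000 and B's loopback is 10.0.0.2 (both values are placeholders for this example):

router bgp 65000
 ! Peer with B's loopback rather than a physical interface address
 neighbor 10.0.0.2 remote-as 65000
 ! Source our side of the TCP session from the local loopback
 neighbor 10.0.0.2 update-source lo

B would carry the mirror-image configuration pointing at A's loopback, and each router still needs an ordinary IP route to the other's loopback (static or learned via an IGP), since the BGP session is just a TCP connection.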
I have a problem where our firm has many GCP projects, and I need to expose services on my project to these distinct GCP projects. Firewalling in individual IPs isn't really sustainable, as we dynamically spin up and tear down hundreds of GCE VMs a day.
I've successfully joined a network from my project to another project via GCP's VPN, but I'm not sure what the best practice is for joining multiple networks to my single network, especially since most of the firm uses the same default internal address range for each project's default network. I understand that doing it the way I am will probably work (it's unclear if it'll actually reach the right network, though), but this creates a huge ambiguity in terms of IP collisions, where two VMs could exist in separate networks and have the same internal IP.
I've read that outside of the cloud, most VPNs support NAT remapping, which seems to let you remap the internal IP space of the remote peer's subnet (like, 10.240.* to 11.240.*), such that you can never have ambiguity from the peer doing the remapping.
I also know that Cloud Router may be an option, but it seems like a solution to a very specific problem that doesn't fully encompass this one: dynamically adding and removing subnets to the VPN.
Thanks.
I think you will need to use custom subnet mode networks (non-default) and specify non-overlapping IP ranges for the networks to avoid collisions. See "Creating a new network with custom subnet ranges" in this doc: https://cloud.google.com/compute/docs/subnetworks#networks_and_subnetworks
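A rough sketch of what that might look like with gcloud (the network name, region, and IP range below are placeholders; pick ranges that don't overlap with the peer networks):

gcloud compute networks create shared-net --subnet-mode=custom
gcloud compute networks subnets create shared-subnet \
    --network=shared-net --region=us-central1 --range=10.200.0.0/20

Each project that peers over the VPN would use its own non-overlapping range, which removes the ambiguity about which network an internal IP belongs to.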
I assume NAT support is already available with the routing and networking features in Compute Engine? I'm looking for some easy-to-read documentation and commands for setting up a scenario where one instance acts as a router and the other instances use it to reach the public internet. A related scenario I'm looking at is how to let instances with no external IP address access the internet. Is there a gcutil-friendly way of scripting this up?
It sounds like you're looking for the Routes Collection. For your first case, the examples should show you how one instance can act as a gateway for other instances by setting a route for the internal nodes to use the gateway as a "next hop" for their traffic.
For your second scenario, there is a caveat listed that "Currently, any packets sent to the Internet must be sent by an instance that has an external IP address. If you create a route that sends packets to the Internet from a particular instance, that instance must also have an external IP. If you create a route that sends packets to the Internet gateway, but the source instance doesn't have an external IP address, the packet will be dropped."
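As a rough sketch with gcloud (gcutil's successor), where the instance name, zone, and tag are placeholders: the gateway instance needs IP forwarding enabled at creation time, and the internal instances can be matched by tag in the route.

# The gateway needs an external IP and IP forwarding enabled
gcloud compute instances create nat-gateway --zone=us-central1-a --can-ip-forward

# Send internet-bound traffic from instances tagged "no-ip" via the gateway
gcloud compute routes create no-ip-internet-route \
    --network=default --destination-range=0.0.0.0/0 \
    --next-hop-instance=nat-gateway --next-hop-instance-zone=us-central1-a \
    --tags=no-ip --priority=800

The gateway itself still needs NAT configured on it (for example an iptables MASQUERADE rule) so that return traffic is translated back correctly.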
How do I add a NIC to a Compute Engine instance? I need more than one NIC so I can build out an environment... I've looked all over and there is nothing on how to do it...
I know it's probably some API call through the SDK, but I have no idea, and I can't find anything on it.
EDIT:
It's the rhel6 image. Figured I should clarify.
The question is probably old and a lot has changed since. It's now definitely possible to add more NICs to an instance, but only at creation time (there is a networking tab on the create-instance page in the console; a corresponding REST API exists too). Each NIC has to connect to a different network, so you need to create the extra networks before creating the instance (if you don't have them already).
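On the command line, a rough gcloud sketch might look like this (the instance, network, and subnet names are placeholders, and both networks/subnets must already exist in the instance's region):

gcloud compute instances create multi-nic-vm --zone=us-central1-a \
    --network-interface network=net-a,subnet=subnet-a \
    --network-interface network=net-b,subnet=subnet-b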
Do you need an external address or an internal address? If external, you can use gcutil to add an IP address to an existing instance. If internal, you can configure a static network address on the instance, and add a route entry to send traffic for that address to that instance.
I was looking for a similar thing (a VM running Apache and nginx simultaneously on different IPs), but it seems that although you can have multiple networks (up to 5) in a project and each network can have multiple instances, you cannot have more than one network per instance. From the documentation:
A project can contain multiple networks and each network can have multiple instances attached to it. [...] A network belongs to only one project and each instance can only belong to one network.
I'm looking for a simple way to implement this scenario:
Say I have two machines I'd like to share data between. The locations/addresses of these machines can change at any time. I'd like both machines to check in to a central server to announce their availability. One of the two systems wants to pull a file from the other. I know that I can have the sink system make a request to the server, which then requests the file from the source, pulls it, and feeds it to the requester. However, this seems inefficient from a bandwidth perspective: the file will be transferred twice. Is there a system in place where the source can send it directly to the sink?
Without being able to guarantee things like port forwarding when a system is behind a firewall, etc., I don't know of a way.
Thanks.
When machine A wants to send data to machine B, A sends a request to the central server C. C asks B for permission; if B accepts, C gives B's IP and port to A. A then attempts to connect to B directly. If that fails (i.e., B is behind a router/firewall), A notifies C of the failure, and C gives A's IP and port to B. B then attempts to connect directly to A, which should be able to pass through B's firewall/router since the connection is outbound from B. If either connection succeeds, A has a direct connection for sending data to B. If both attempts fail (i.e., A is also behind a firewall/router), then C has to act as a proxy for all transfers between A and B.
I can think of a few hacks using ping, the box name, and the HA shared name, but I think they lead to data leakage.
Should a box even know it's part of an HA cluster, or what that cluster's name is? Is this more a function of DNS? Is there some API exposed for boxes to join an HA cluster and request the ID of the currently active node?
I want to differentiate between the inactive node and the active node in the alerting mechanisms for a running program. If the active node is alerting, I want to hit a pager; on the inactive node, I want to send an email. Pushing the determination into the alerting layer just moves the same problem elsewhere.
EASY SOLUTION: Polling the server from an external agent that connects over the network makes any shell game of who is the active node a moot point. To clarify: the only thing that will page is the remote agent monitoring the real server. Each box can send emails all day long for all I care.
It really depends on the HA system you're using.
For example, if your system uses a shared IP and the traffic is managed by some hardware box, it can be hard to determine whether a certain box is the master or a slave. That really depends on the specific solution... As long as you can add a custom script to the supervisor, you should be OK; for example, the controller can ping a daemon on the master server every second. In the alerting script, simply check whether the time since the last ping is < 2 seconds...
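As a minimal sketch of that check, assuming the daemon on the master touches /var/run/ha-last-ping whenever the controller's ping arrives (the path and the 2-second threshold are placeholders):

#!/bin/sh
# Treat this box as active if the ping marker was updated within the last 2 seconds
LAST_PING_FILE=/var/run/ha-last-ping
NOW=$(date +%s)
LAST=$(stat -c %Y "$LAST_PING_FILE" 2>/dev/null || echo 0)
if [ $((NOW - LAST)) -lt 2 ]; then
    echo active    # this node should page
else
    echo inactive  # this node should only email
fi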
If your system doesn't have a supervisor/controller node, but each node tries to determine the state itself, you can have more problems. If a split brain occurs, you can end up with two slaves or two masters, so your alerting software will be wrong in both cases. Mechanisms that ensure only one live node (STONITH and the like) can help.
On the other hand, in the second scenario, if the HA software works on both hosts properly, you should be able to obtain the master/slave information straight from it. It has to know its own state at any time, because it's one of its main functions. In most HA solutions you should be able to either get the current state, or add some code to run when the state changes. Heartbeat offers both.
I wouldn't worry about the edge cases like a split brain though. Almost any situation when you lose connection between the clustered nodes will be more important than the stuff that happens on the separate nodes :)
If the thing you care about is really only logging/alerting, then ideally you could have a separate logger box that gets all the information about the current network/cluster status. An external box will probably have a better idea of how to deal with the situation. If your cluster gets DoS'ed, disconnected from the network, or loses power, you won't get any alert from the nodes themselves; a redundant pair of independent monitors can save you from that.
I'm not sure why you mentioned DNS - due to its refresh time it shouldn't be a source of any "real-time" cluster information.
One way is to have each box export its idea of whether it is active into your monitoring. From there you can predicate paging/emailing on this status (with a race condition around failover), and alert when none or too many systems believe they are active.
Another option is to monitor the active system via a DNS alias (or some other method to address the active system) and page on that. Then also monitor all the systems, both active and inactive, and email on that. This will cause duplicate alerts for the active system, but that's probably okay.
It's hard to be more specific without knowing more about your setup.
As a rule, the machines in an HA cluster shouldn't really know which one is active. There's one exception, mind, and that's cronjobs. At work, we have an HA cluster on top of which some rather important services run. Some of those services have cronjobs, and we only want them running on the active box. To do that, we use this shell script:
#!/bin/sh
# Replace 0.0.0.0 with the external (shared) IP of your HA cluster
HA_CLUSTER_IP=0.0.0.0
# Only run the wrapped command if this box currently holds the cluster IP
if ip addr | grep "$HA_CLUSTER_IP" >/dev/null; then
    eval "$@"
fi
(Note that this is running on Debian.) What this does is check to see if the current box is the active one within the cluster (replace 0.0.0.0 with the external IP of your HA cluster), and if so, executes the command passed in as arguments to the script. This ensures that one and only one box is ever actually executing the cronjobs.
Other than that, there's really no reason I can think of why you'd need to know which box is the active one.
UPDATE: Our HA cluster uses Heartbeat to assign the cluster's external IP address as a secondary address to the active machine in the cluster. Programmatically, you can check to see if your machine is the current active box by calling gethostbyname(), and iterating over the data returned until you either get to the end or you find the cluster's IP in the list.
Without hard-coding...? I assume you mean some native Heartbeat query; I'm not sure. However, you could use ifconfig: HA creates a virtual interface on whatever interface it is configured to run on. For instance, if HA is configured on eth0, it creates a virtual interface eth0:0, but only on the active node.
Therefore you could do a simple query of the ifconfig output to determine whether the server is the active node. For example, if eth0 is the configured interface:
ACTIVE_NODE=`ifconfig | grep -c 'eth0:0'`
That will set the $ACTIVE_NODE variable to 1 (active) or 0 (standby). Hope that helps.
http://www.of-networks.co.uk