I have a server (S) and two connected clients (A, B). I know each client's ping to the server. How can I calculate the clients' ping to each other without sending any packets?
I don't think you can do it reliably. There may be a shorter direct path between A and B, so simply adding the two ping values may overestimate the ping between A and B.
I am trying to run Quagga on a couple of connected VMs and am confused about how to write the neighbor command in the bgpd.conf configuration file. My questions all concern the following neighbor statement:
neighbor peer remote-as asn
What should I provide for the 'peer' IP value?
Say I am configuring a VM A that is many hops away from a neighbor B (let's assume the same AS number). When I add neighbor B to the bgpd.conf configuration file, which particular interface IP of B should be used as the peer IP?
I am seeing that for some interface IPs the session is established and for others it is not. I want to know which of the interface IPs should, in theory, be specified.
I did a lot of searching on Google, but nothing was clear about this. Please help.
Are you able to ping the IP addresses/interfaces where you are unable to establish the BGP session? If not, the session cannot be established.
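As an illustrative sketch (the addresses and AS number here are invented): for an iBGP neighbor that is several hops away, a common pattern is to peer between stable (often loopback) addresses and tell bgpd which source address to use. The session will only come up if the peer address is actually routable from the local VM, which is why some interface IPs work and others don't:

```
! bgpd.conf on VM A (10.0.0.1 = A's loopback, 10.0.0.2 = B's loopback)
router bgp 65000
 bgp router-id 10.0.0.1
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source lo
```

If the peer were in a different AS and more than one hop away, you would additionally need `neighbor 10.0.0.2 ebgp-multihop`.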
I have a dedicated server hosting one domain, and I have lots of IPs. Is it possible to set reverse DNS for these IPs?
Example :
On one server, one domain:
127.0.0.1 Reverse DNS : a1.xyz.com
127.0.0.2 Reverse DNS : a2.xyz.com
127.0.0.3 Reverse DNS : a3.xyz.com
Based on http://en.wikipedia.org/wiki/RDNS, you can have multiple reverse DNS entries for an IP address with different hostnames: "While most rDNS entries only have one PTR record, DNS does not restrict the number." It is said to be not recommended, since it depends on programs actually reading all the replies. There is a discussion of this on the wiki page's talk page, http://en.wikipedia.org/wiki/Talk:Reverse_DNS_lookup, under "Multiple records".
It may be that in the future programs will adapt to this, since it is very common to have many virtual hosts / web servers on the same host.
Yes, you can set a reverse DNS name per IP address. If you have multiple IPs then you can set multiple reverse DNS names.
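For illustration, the PTR records for the example above would live in the reverse zone for the address block. The zone and addresses here mirror the made-up 127.0.0.x example; in practice the reverse zone for your public IPs is usually delegated to you (or managed for you) by your hosting provider:

```
; reverse zone 0.0.127.in-addr.arpa (illustrative addresses only)
1   IN  PTR  a1.xyz.com.
2   IN  PTR  a2.xyz.com.
3   IN  PTR  a3.xyz.com.
```

Note the trailing dots: PTR targets are fully qualified names.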
I was wondering if I could see any example source code (Language: C) of a client that uses client-side Moxi.
I've seen the architecture, but I have no idea how to write the code.
Also, from the get_callback function, if I need to return the CAS value and the data received, is there a suggested way to do this?
And what is this vbucketmap thing? What does it represent, and how do I configure it?
Client-side moxi means that you set up a moxi server on your client machine and then tell the client to connect to moxi on localhost. If moxi is running on localhost port 11211, you point your client at localhost port 11211 and moxi handles communication with the server. You don't need to write any special code to do this.
Also, from the get_callback function, if I need to return the CAS value and the data received, is there a suggested way to do this?
I'm not very familiar with the C API, but there is probably a gets function call that returns the CAS ID in the callback.
And what is this vbucketmap thing? What does it represent, and how do I configure it?
A vbucket map is a map of servers to vbuckets. In Couchbase Server there are 1024 vbuckets that your data can hash into. Vbuckets are spread around the cluster, and the map tells the client which server to send a request to. That said, you shouldn't ever touch the vbucket map in your own code: the map is obtained from the cluster and managed either by the client-side SDK or, in your case, by Moxi.
I have searched but could not find an answer to the following:
Process1 transmits data over a TCP socket. The code that does the transmission is (pseudocode):
//Section 1
write(sock, data, len); // any language; just write the data
//Section 2
After the write, Process1 can continue into section 2, but this does not mean the data has been transmitted; TCP may have buffered it for later transmission.
Now Process2 runs concurrently with Process1, and both try to send data concurrently, i.e. both have code like the above.
Question 1: If both processes write data to a TCP socket simultaneously, how will the data eventually be transmitted over the wire by IP/the OS?
a) All data of Process1 followed by all data of Process2 (or the reverse), i.e. in some FIFO order?
or
b) Data from Process1 and Process2 multiplexed by the IP layer (or the OS) and sent "concurrently" over the wire?
Question 2: If, for example, I added a delay, could I be sure that the data from the two processes was sent serially over the wire (e.g. all data of Process1 followed by all data of Process2)?
UPDATE:
Process1 and Process2 are not parent and child. They are also working on different sockets.
Thanks
Hmm, are you talking about a single socket shared by two processes (like parent and child)? In that case the data will be buffered in the order of the output system calls (the write(2)s).
If, as is more likely, you are talking about two unrelated TCP sockets in two processes, then there is no guarantee of the order in which the data will hit the wire. The reason is that the sockets may be connected to remote endpoints that consume data at different speeds; TCP flow control then ensures that a fast sender does not overwhelm a slow receiver.
Answer 1: the order is unspecified, at least on the socket-supporting OSes I've seen. Processes 1 and 2 should be designed to cooperate, e.g. by sharing a lock/mutex on the socket.
Answer 2: no, not if you mean just a fixed-time delay. Instead, have Process1 send a go-ahead signal to Process2 indicating that it has finished sending. Use pipes, local sockets, signals, shared memory, or whatever your operating system provides for interprocess communication. Only send the signal after "flushing" the socket (which isn't actually flushing).
A TCP socket is identified by a tuple, usually at least (source IP, source port, destination IP, destination port). Different sockets have different identifying tuples.
Now, if you are using the same socket from two processes, it depends on the order of the write(2) calls. But take into account that write(2) may not consume all the data you pass to it: if the send buffer is full, you can get a short write (write() writing less than asked for and returning the number of bytes actually written), a write() that blocks/sleeps until buffer space is available, or an EAGAIN/EWOULDBLOCK error (for non-blocking sockets).
write() is atomic, and likewise send() and friends: whichever executes first transmits all its data while the other blocks.
The delay is unnecessary; see (1).
EDIT: but if, as I now see, you are talking about different sockets per process, your question seems moot. There is no way for an application to know how TCP used the network, so what does it matter? TCP will transmit in packets of up to one MTU each, in whatever order it sees fit.
I'm looking for a simple way to implement this scenario:
Say I have two machines I'd like to share data between. The location/addresses of these machines can change at any time. I'd like both machines to check in with a central server to announce their availability. One of the two systems wants to pull a file from the other. I know that I can have the sink system make a request to the server, which then requests the file from the source, pulls it, and feeds it to the requester. However, this seems inefficient from a bandwidth perspective: the file is transferred twice. Is there a scheme whereby the source can send it directly to the sink?
Without being able to guarantee things like port forwarding when a system is behind a firewall, etc., I don't know of a way.
Thanks.
When machine A wants to send data to machine B:
1. A sends a request to the central server C.
2. C asks B for permission. If accepted, C gives B's IP and port to A.
3. A attempts to connect to B directly. If unsuccessful (i.e., if B is behind a router/firewall), A notifies C of the failure.
4. C then gives A's IP and port to B, and B attempts to connect directly to A (which should be able to pass through B's firewall/router).
If either connection is successful, A has a direct connection to send data to B. If both connections are unsuccessful (i.e., if A is also behind a firewall/router), then C has to act as a proxy for all transfers between A and B.