How to extend IPv4 addresses to 64 bits?

I know that IPv4 addresses are 32 bits. But is it possible to extend IPv4 addresses from 32 bits to 64 bits?

The feature is called Enhanced IP (EnIP); take a look at this document (IPv4 with 64 bit Address Space, January 2015):
Enhanced IP (EnIP) was designed to minimize impact on core and border routers. ... EnIP packets carry additional address bits and state in an IP option, eliminating routing table updates like IPv6. EnIP supports end-to-end connectivity, a shortcoming of NAT, making it easier to implement mobile networks. Host renumbering is also not required in EnIP as has been the case with other 64-bit protocol proposals.

Enhanced IP (EnIP) is a method for extending the IP address space from 32 bits to 64 bits. The 64-bit addresses look like two IP addresses concatenated together. Enhanced IP is much simpler to implement than IPv6. To illustrate, there are 432 IPv6 RFCs and 1 Enhanced IP RFC.
Example addresses for comparison:
IPv6 address: 2001:0101:c000:0202:0a01:0102::0
EnIP address: 192.0.2.2.10.1.1.2
IPv4 address: 192.0.2.2
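To make the relationship between those examples concrete, here is a small Python sketch (my own illustration, not part of any EnIP implementation; the real protocol carries the extra 32 bits in an IP option rather than as a single 64-bit field) that packs the eight dotted octets into one 64-bit value and splits an EnIP address back into its two IPv4 halves:

```python
def enip_to_u64(enip_addr: str) -> int:
    """Pack a dotted 8-octet EnIP-style address into one 64-bit integer."""
    octets = [int(o) for o in enip_addr.split(".")]
    if len(octets) != 8 or any(not 0 <= o <= 255 for o in octets):
        raise ValueError("expected eight dotted-decimal octets")
    value = 0
    for o in octets:
        value = (value << 8) | o          # shift in one octet at a time
    return value

def split_enip(enip_addr: str) -> tuple:
    """Split an EnIP-style address into its two concatenated IPv4 halves."""
    octets = enip_addr.split(".")
    return ".".join(octets[:4]), ".".join(octets[4:])

addr = "192.0.2.2.10.1.1.2"               # the EnIP example above
print(hex(enip_to_u64(addr)))             # 0xc00002020a010102
print(split_enip(addr))                   # ('192.0.2.2', '10.1.1.2')
```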
Quick intro to Enhanced IP
Github project

Related

QEMU riscv32-softmmu: what does 'softmmu' mean?

Does the 'softmmu' mean that the virtual machine has a single linear address space available to machine and user mode? Or does it have some virtual memory capabilities that are implemented via software and not the underlying processor? Or maybe it means something different entirely?
-softmmu as a suffix in QEMU target names means "complete system emulation including an emulated MMU, for running entire guest OSes or bare metal programs". It is opposed to QEMU's -linux-user mode, which means "emulates a single Linux binary only, translating syscalls it makes into syscalls on the host". Building the foo-softmmu target will give you a qemu-system-foo executable; building foo-linux-user will give you a qemu-foo executable.
So a CPU emulated by -softmmu should provide all the facilities that the real guest CPU's hardware MMU provides, which usually means multiple address spaces which can be configured via the guest code setting up page tables and enabling the MMU.

Maximum number of external IP addresses per machine?

What is the maximum number of external IP addresses that can be assigned to a machine on Google Compute Engine? I found the AWS limits but I can't find the same for Google Compute Engine.
The Resource Quotas and Interconnect Quotas pages do not state any limit for external IP addresses, so they are effectively unlimited, but they may still be limited by two factors: a) the availability of the resource type in the region, and b) how many of them you are willing to pay for. In case you wonder how many one can assign to a single instance: reports suggest around 4,000-5,000, at which point the network stack becomes unstable, versus the theoretical limit of 4,294,967,294 (2^32 minus 2). One can only estimate that value, because the hardware configuration also plays a role; with multiple virtual NICs an instance should be able to take a multiple of that.

Is it valid to assume that Google virtual CPUs are all on 1 socket (if < 16 vCPUs)?

We're building a high performance computing scientific application (lots and lots of computations) using Java. To the best of our knowledge, Google Compute Engine does not provide the "true" physical socket information, nor does it have a service like AWS's dedicated hosting (https://aws.amazon.com/ec2/dedicated-hosts/, see the section on "affinity") where (for a fee) one could see the actual physical sockets.
However, based on our understanding, the JIT compiler will do a lot better if it knows that all the threads are really on a single physical socket. Would it be reasonable, therefore, to assume that even though Google Compute Engine does NOT display the true underlying physical socket structure, an instance with <= 16 cores is definitely (or most likely, e.g. >95%) backed by a single physical socket? If so, can we also assume that the CPU numbers (from cat /proc/cpuinfo) correspond in sequence to the physical cores and logical cores, so that if we wanted to put two threads onto the same physical core (but two logical cores), we could pin them to CPU 0 and CPU 1 and know that CPU0 and CPU1 belong to the same physical core, CPU2 and CPU3 belong to the same physical core, and so on?
If so, would it be reasonable to assume that for instances with 32 vCPUs or 64 vCPUs the number of sockets is 2 and 4 respectively? And that the output of cat /proc/cpuinfo also follows a logical order, so that not only are CPU0 and CPU1 on the same physical core, but CPU0 through CPU15 are on physical socket #1, CPU16 through CPU31 are on physical socket #2, and so on?
As you inferred, GCE currently does not expose the actual NUMA architecture of the machine, and we do not guarantee that a VM will run entirely on one socket, nor can you intentionally land VM threads on specific cores/hyperthreads. These capabilities are on our radar for possible future enhancements/features.
I don't believe this is specifically documented currently; however, I am speaking as a Product Manager for GCE.
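Rather than assuming a numbering scheme, you can read whatever topology the guest kernel reports. A minimal sketch, assuming a Linux guest; note it only shows the virtual topology the hypervisor presents, not the physical host layout:

```python
from collections import defaultdict

def read_topology(path="/proc/cpuinfo"):
    """Group logical CPUs by the (physical id, core id) pairs the kernel reports."""
    topo = defaultdict(list)
    cpu = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:                  # a blank line ends one processor block
                if cpu:
                    key = (cpu.get("physical id"), cpu.get("core id"))
                    topo[key].append(cpu.get("processor"))
                cpu = {}
                continue
            field, _, value = line.partition(":")
            cpu[field.strip()] = value.strip()
    if cpu:                               # file may not end with a blank line
        topo[(cpu.get("physical id"), cpu.get("core id"))].append(cpu.get("processor"))
    return topo

if __name__ == "__main__":
    for (socket_id, core_id), cpus in sorted(read_topology().items()):
        print(f"socket {socket_id} core {core_id}: logical CPUs {cpus}")
```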

Parse logical location with segments

I'm programming a C# DLL to parse messages from an NX 584 module.
I'm new to binary messages and I'm stuck with the following message:
I'm having trouble understanding the location part.
The logical location is 12 bits long, but an IP address is 4 bytes long. So I don't get how they fit an IP address in 12 bits.
What should I do with the segment size and offset?
Also, what is meant by the number of segments?
Any help would be appreciated, thanks.
Disclaimer: I know absolutely nothing about NX 584, but skimming through the documentation suggested that "logical location" isn't an IP address, but some kind of storage/RAM address.
12 bits would therefore seem to be the address bus size of this thing.
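I can't speak to the exact NX 584 byte layout, but if the 12-bit logical location is packed into two message bytes (a common arrangement), extracting it is just masking and shifting. A purely hypothetical sketch; check the protocol document for the real bit ordering:

```python
def logical_location(byte_hi: int, byte_lo: int) -> int:
    """Combine a 12-bit value from two bytes.

    Assumed layout (hypothetical): the low 4 bits of byte_hi form the
    upper nibble of the location, byte_lo holds the lower 8 bits.
    """
    return ((byte_hi & 0x0F) << 8) | (byte_lo & 0xFF)

print(logical_location(0x02, 0x1A))   # 0x21A -> 538
```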

Are there any protocols/standards on top of TCP optimized for high throughput and low latency?

Are there any protocols/standards that work over TCP that are optimized for high throughput and low latency?
The only one I can think of is FAST.
At the moment I have devised just a simple text-based protocol delimited by special characters. I'd like to adopt a protocol which is designed for fast transfer and supports perhaps compression and minification of the data that travels over the TCP socket.
Instead of using heavyweight TCP, we can get the connection-oriented/reliable features of TCP on top of UDP in either of the following ways:
UDP-based Data Transfer Protocol (UDT):
UDT is built on top of User Datagram Protocol (UDP) by adding congestion control and reliability control mechanisms. UDT is an application level, connection oriented, duplex protocol that supports both reliable data streaming and partial reliable messaging.
Acknowledgment:
UDT uses periodic acknowledgments (ACK) to confirm packet delivery, while negative ACKs (loss reports) are used to report packet loss. Periodic ACKs help to reduce control traffic on the reverse path when the data transfer speed is high, because in these situations, the number of ACKs is proportional to time, rather than the number of data packets.
Reliable User Datagram Protocol (RUDP):
It aims to provide a solution where UDP is too primitive because guaranteed-order packet delivery is desirable, but TCP adds too much complexity/overhead.
It extends UDP by adding the following additional features:
Acknowledgment of received packets
Windowing and congestion control
Retransmission of lost packets
Overbuffering (Faster than real-time streaming)
en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
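To illustrate the idea these protocols build on (reliability added in user space on top of UDP), here is a toy stop-and-wait sketch in Python. It is nothing like real UDT or RUDP, which pipeline packets and use windowed, periodic and negative ACKs, but it shows the ACK-plus-retransmit mechanism:

```python
import socket

def send_reliable(sock, dest, seq, payload, timeout=0.2, retries=5):
    """Send one datagram and wait for a matching ACK, retransmitting on timeout."""
    packet = seq.to_bytes(4, "big") + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True               # delivery confirmed
        except socket.timeout:
            continue                      # lost data or lost ACK: retransmit
    return False

def serve_once(sock):
    """Receive one datagram and acknowledge its sequence number."""
    data, addr = sock.recvfrom(65535)
    seq, payload = data[:4], data[4:]
    sock.sendto(seq, addr)                # echo the sequence number as the ACK
    return payload
```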
If layered on top of TCP, you won't get better throughput or latency than the 'barest' TCP connection.
There are other non-TCP high-throughput and/or low-latency connection-oriented protocols, usually layered on top of UDP.
Almost the only one I know is UDT, which is optimized for networks where high bandwidth or long round-trip times (RTT) make typical TCP retransmissions suboptimal. These are called 'extremely long fat networks' (LFN, pronounced 'elefan').
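The 'long fat' part refers to the bandwidth-delay product: how much data must be in flight to keep such a link busy. A quick back-of-the-envelope calculation with illustrative numbers:

```python
bandwidth_bps = 1_000_000_000    # assume a 1 Gbit/s path
rtt_s = 0.1                      # assume a 100 ms round-trip time
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"{bdp_bytes / 1e6:.1f} MB must be in flight to fill the pipe")   # 12.5 MB
```

That is far more than the 64 KB a plain TCP window allows without window scaling, which is why standard TCP loss recovery performs poorly on such paths.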
You may want to consider JMS. JMS can run on top of TCP, and you can get reasonable latency with a message broker like ActiveMQ.
It really depends on your target audience though. If you're building a game which must run anywhere, you pretty much need to use HTTP or HTTP/Streaming. If you are pushing around market data on a LAN, then something NOT using TCP would probably suit you better. Tibco RV and JGroups both provide reliable low-latency messaging over multicast.
As you mentioned, FAST is intended for market data distribution, is used by leading stock exchanges, and runs on top of UDP multicast.
In general, with the current level of network reliability, it is always worth putting your protocol on top of UDP.
Anything with session sequence numbers, NACK plus server-to-client heartbeats, and binary marshalling should be close to theoretical performance.
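For the binary marshalling part, a minimal length-prefixed framing sketch (the field layout is my own, not any standard) avoids delimiter scanning and keeps per-message overhead to a fixed header:

```python
import struct

HEADER = struct.Struct("!IHB")   # payload length, session sequence number, message type

def frame(seq: int, msg_type: int, payload: bytes) -> bytes:
    """Prefix the payload with a fixed binary header instead of text delimiters."""
    return HEADER.pack(len(payload), seq & 0xFFFF, msg_type) + payload

def unframe(buf: bytes):
    """Read the header first, then slice out exactly that many payload bytes."""
    length, seq, msg_type = HEADER.unpack_from(buf)
    payload = buf[HEADER.size:HEADER.size + length]
    return seq, msg_type, payload

print(unframe(frame(42, 1, b"price=101.25")))   # (42, 1, b'price=101.25')
```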
If you have admin/root privilege on the sending side, you can also try a TCP acceleration driver like SuperTCP.