Is it possible / common sense to change a CIDR prefix to meet a certain requirement of a network? - ipv4

Say, for example, you have an IP address with a CIDR prefix of /27 (mask of 255.255.255.224) and are required to create 4 subnets, each with an equal number of usable host addresses. Would it be commonplace / accepted to adjust the prefix to /26 (using only 2 additional bits, leaving 6 for the host ID) to allow for 4 subnets, each with 64 addresses (62 usable)?
Or
Is it just a case of: when a prefix is already set (in this case /27), you simply "get what you're given" and utilise only 4 of the possible 8 subnets that can be created with the additional 3 bits, leaving 4 subnets, each with 32 addresses (30 usable)?
I understand the prefix denotes how many bits are considered significant for the network address with the remainder allocated for the host / node.
Thanks in advance; I am quite new to networking but feel like common sense might apply in this case.
I haven't been able to find anything that confirms or denies the possibility of doing this when required.
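For anyone who wants to sanity-check the arithmetic, here is a minimal sketch using Python's ipaddress module (the 192.168.1.0/24 parent block is a made-up example):

import ipaddress

# Hypothetical /24 parent block; any parent network behaves the same way.
parent = ipaddress.ip_network("192.168.1.0/24")

# /26 subnets: 2 subnet bits -> 4 subnets, 6 host bits each (64 addresses, 62 usable).
for subnet in parent.subnets(new_prefix=26):
    print(subnet, subnet.num_addresses, subnet.num_addresses - 2)

# /27 subnets: 3 subnet bits -> 8 subnets, 5 host bits each (32 addresses, 30 usable).
for subnet in parent.subnets(new_prefix=27):
    print(subnet, subnet.num_addresses, subnet.num_addresses - 2)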

Nand2Tetris-Obtaining Register from RAM chips

I've recently completed Chapter 3 of the associated textbook for this course: The Elements of Computing Systems, Second Edition.
While I was able to implement all of the chips described in this chapter, I am still trying to wrap my head around how exactly the RAM chips work. I think I understand them in theory (e.g. a RAM4K chip stores a set of 8 RAM512 chips, each of which is itself a set of 8 RAM64 chips).
What I am unsure about is actually using the chips. For example, suppose I try to output a single register from RAM16K using this code, given an address:
CHIP RAM16K {
    IN in[16], load, address[14];
    OUT out[16];

    PARTS:
    // The top 2 address bits select which RAM4K bank drives the output.
    Mux4Way16(a=firstRam, b=secondRam, c=thirdRam, d=fourthRam, sel=address[12..13], out=out);
    // And(load, load) is just a pass-through of the load signal.
    And(a=load, b=load, out=shouldLoad);
    // The same 2 bits route load to exactly one bank, so only that bank writes.
    DMux4Way(in=shouldLoad, sel=address[12..13], a=setRamOne, b=setRamTwo, c=setRamThree, d=setRamFour);
    // The low 12 address bits go to every bank in parallel.
    RAM4K(in=in, load=setRamOne, address=address[0..11], out=firstRam);
    RAM4K(in=in, load=setRamTwo, address=address[0..11], out=secondRam);
    RAM4K(in=in, load=setRamThree, address=address[0..11], out=thirdRam);
    RAM4K(in=in, load=setRamFour, address=address[0..11], out=fourthRam);
}
How does the above code get the underlying register? If I understand the description of the chip correctly, it is supposed to return a single register. I can see that it outputs a RAM4K based on a series of address bits -- does it also get the base register itself recursively through the chips at the bottom? Why doesn't this code have an error if it's outputting a RAM4K when we expect a register?
It's been a while since I did the course so please excuse any minor errors below.
Each RAM chip (whatever the size) consists of an array of smaller chips. If you are implementing a 16K chip with 4K subchips, then there will be 4 of them.
So you would use 2 bits of the incoming address to select which sub-chip you need to work with, and the remaining 12 bits are sent on to all the sub-chips. It doesn't matter how you divide up the bits, as long as you have a set of 2 and a set of 12.
Specifically, the 2 select bits are used to route the load signal to just one sub-chip (i.e. using a DMux4Way), so loads only affect that one sub-chip, and they are also used to pick which of the sub-chips' outputs is used (i.e. a Mux4Way16).
When I was doing it, I found that the simplest approach was to always use the least-significant bits as the select bits. So, for example, my RAM64 chip used address[0..2] as the select bits and passed address[3..5] to the RAM8 sub-chips.
The thing that may be confusing you is that in these kinds of circuits, all of the sub-chips are activated. It's just that you use the select bits to decide which sub-chip's output to pass on to the outputs, and also as a filter to decide which sub-chip might perform a load.
As the saying goes, "It's turtles (or ram chips) all the way down."
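To make that concrete, here is a minimal Python model of a 4-bank RAM (an illustrative sketch of the routing logic, not the course's HDL): every bank is active on every access, and the two select bits only decide which bank's output is returned and which bank is allowed to load.

# Model of a RAM16K built from four RAM4K banks (illustrative sketch).
class BankedRam:
    def __init__(self):
        self.banks = [[0] * 4096 for _ in range(4)]  # four RAM4K banks

    def cycle(self, address, in_value=0, load=False):
        select = (address >> 12) & 0b11  # top 2 bits, like sel=address[12..13]
        offset = address & 0x0FFF        # low 12 bits, sent to every bank

        # DMux4Way: route the load signal so only one bank actually writes.
        if load:
            self.banks[select][offset] = in_value

        # Mux4Way16: every bank produces an output; only one is passed on.
        outputs = [bank[offset] for bank in self.banks]
        return outputs[select]

ram = BankedRam()
ram.cycle(address=0x2ABC, in_value=42, load=True)
print(ram.cycle(address=0x2ABC))  # 42

The recursion in the real chips works the same way: each RAM4K internally splits its 12-bit address again, all the way down to individual registers, so the out pin ultimately carries one register's value, not a whole RAM4K.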

Need a formula to get total LUN size using lunSizeLow and lunSizeHigh SNMP objects

I have 2 SNMP Objects/OIDs. Below are the details:
Object1:
Name: lunSizeLow
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.9
Description: `LUN` size in bytes - low order bytes
Object2:
Name: lunSizeHigh
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.10
Description: `LUN` size in bytes - high order bytes
My requirement:
I want to monitor LUN size through a script, but I didn't find any SNMP object that gives the total LUN size directly. I found 2 separate objects (lunSizeLow and lunSizeHigh), so I need a formula to get the total LUN size from these low-order and high-order SNMP objects.
I went through many articles on the internet and found a couple of formulas on community.hpe.com,
but I'm not sure which one is correct.
Formula 1:
The maximum unsigned number that can be stored in a 32-bit counter is 4294967295, so the multiplier for the high-order word is 4294967296 (2^32).
Total size would be: LOW_ORDER_BYTES + HIGH_ORDER_BYTES * 4294967296
Formula 2:
Total size in GB is LOW_ORDER_BYTES / 1073741824 + HIGH_ORDER_BYTES * 4
Could anyone help me find the correct formula?
Most languages have a bit-shift operator, allowing you to do something similar to the below (pseudo-Java):
long total = lunSizeHigh;
total = total << 32;       // shift the high bits 32 positions left, into the high half of the long
total = total + lunSizeLow;
This has two advantages over multiplying:
Bit shifting is often faster than multiplication, even though most compilers would optimize that particular multiplication into a bit shift anyway.
It is easier to read the code and understand why this would provide the correct answer, given the description from the MIB. Magic numbers should be avoided where possible.
That aside, putting some numbers into the Windows Calculator (using Programmer Mode) and trying formula 1, we can see that it works.
Now, you don't specify what language or environment you're working in, and in some languages you won't have a number type that supports the size of numbers you want to manipulate. (That's the same reason this number had to be split into two counters to begin with: it's larger than the largest number representation available on some primitive platforms.) If you want to do it with multiplication instead, you'll still have to make sure your implementation language can handle 64-bit values.
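In a language with big (or arbitrary-precision) integers the problem disappears entirely; a quick sketch in Python, with made-up counter values for illustration:

# Hypothetical values read from the two OIDs, for illustration only.
lun_size_low = 1705032704   # lunSizeLow
lun_size_high = 2           # lunSizeHigh

# Formula 1 written as a bit shift: total = high * 2**32 + low.
total_bytes = (lun_size_high << 32) + lun_size_low
print(total_bytes)                                # 10294967296

# Formula 2 is the same value expressed in GiB (2**30 bytes):
print(total_bytes / 2**30)                        # ~9.588
print(lun_size_low / 2**30 + lun_size_high * 4)   # same result

So the two formulas agree: formula 1 gives bytes, and formula 2 gives the same quantity in GiB.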

What is the IPv4 class?

I'm trying to identify the classes of IPv4 addresses. I convert the first octet into binary and then follow the algorithm in the photo. My problem is when the IP starts with 7, for example: its binary is 111, which doesn't match any of the classes. Another thing: when I turn 47 into binary (101111), it should belong to class B, but instead its class is A, and I don't know why.
The binary pattern you derived isn't how the class is defined: you have to write the first octet as a full 8 bits and look at its leading bits. For example, the IP you mention that starts with 7 belongs to class A:
7 = 00000111
Padded to 8 bits it starts with a 0 bit, which is the class A pattern. Likewise 47 = 00101111, which also starts with 0, so it is class A rather than class B (class B requires the leading bits 10). The class then determines the network portion:
Class A (leading bit 0) defines the first 8 bits as the network portion
Class B (leading bits 10) defines the first 16 bits as the network portion
Class C (leading bits 110) defines the first 24 bits as the network portion
Let me know if you have more questions.
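A minimal sketch of that rule in Python (the function name is my own):

def ipv4_class(address):
    # Classful addressing: the class is set by the leading bits of the first octet.
    first_octet = int(address.split(".")[0])
    if first_octet < 128:   # 0xxxxxxx
        return "A"
    if first_octet < 192:   # 10xxxxxx
        return "B"
    if first_octet < 224:   # 110xxxxx
        return "C"
    if first_octet < 240:   # 1110xxxx
        return "D (multicast)"
    return "E (experimental)"

print(ipv4_class("7.0.0.1"))   # A  (7  = 00000111)
print(ipv4_class("47.0.0.1"))  # A  (47 = 00101111, leading bit is 0)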

Does 1 KB (kilobyte) really equal 1024 bytes?

Until now I believed that 1024 bytes equals 1 KB (kilobyte), but I have been reading on the internet about the decimal and binary systems.
So is 1024 bytes = 1 KB actually the correct definition, or is there simply general confusion?
What you are seeing is a marketing stunt.
Since non-technical people don't know the difference between the metric meg, gig, etc. and the binary meg, gig, etc., storage marketers use the metric calculation, thus 1000 bytes == 1 kilobyte.
This can cause issues for developers or highly technical people, so you get the idea of a binary meg, gig, etc., which is designated with a "bi" infix instead of the standard name (e.g. mebibyte vs. megabyte, or gibibyte vs. gigabyte).
There are two ways to represent big numbers: you can display them in multiples of 1000 (base 10) or of 1024 (base 2). If you divide by 1000, you probably use the SI prefix names; if you divide by 1024, you probably use the IEC prefix names. The problem starts with dividing by 1024: many applications use the SI prefix names for it, and some use the IEC prefix names. So it is important how it is written:
Using IEC standard:
1 KiB = 1,024 bytes (Note: big K)
1 MiB = 1,024 KiB = 1,048,576 bytes
Using SI standard:
1 kB = 1,000 bytes (Note: small k)
1 MB = 1,000 kB = 1,000,000 bytes
Source: Ubuntu units policy: https://wiki.ubuntu.com/UnitsPolicy
In the normal world, most things go by powers of 10. This includes electricity, for example.
But the computer world is about half binary. For example, hard drives are sold by powers of 10, so a 1 KB drive would be 1000 B; but when the computer reads it, the OS usually counts in units of 1024. This is why, when you read the space available on a drive, it shows much less than what was advertised: a 500 GB drive will read as only about 466 GB, because the computer is reading the drive in binary 1024-based units, not the power-of-10 units it was sold and advertised in. The same goes for flash drives. RAM, however, is both sold and read by the computer in binary 1024-based units.
One thing to note: it is "B", not "b". There are 8 bits ("b") in a byte ("B"). The reason I bring this up is that when you get internet service, the speed is usually advertised in bits, not bytes, while the download box on your computer reads the speed in bytes. Say you have a 50 Mb internet connection: that is actually a 6.25 MB connection in the download-speed box, because you have to divide the 50 by 8, since there are 8 bits in a byte. Another marketing strategy; after all, 50 Mb sounds much faster than 6.25 MB. Other than speeds through a network, most things are read in bytes ("B"). Some people do not realize that there is a difference between "B" and "b".
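A quick check of those two figures (a sketch; the 500 GB drive and 50 Mb connection are the examples from above):

# A 500 GB drive (decimal, as sold) shown in binary units (as many OSes report it):
advertised_bytes = 500 * 10**9
print(advertised_bytes / 2**30)  # ~465.66, the "466 GB" the OS shows

# A 50 megabit/s connection expressed in megabytes/s:
print(50 / 8)                    # 6.25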
Quite simple...
The word 'byte' is a computing term for which the letter 'B' is used as an abbreviation.
It must follow, then, that any reference to bytes, e.g. KB, MB, etc., must be based on the well-known and widely accepted 1024 base.
Therefore 1 KB must equal 1024 bytes, 1 MB must equal 1048576 bytes (1024 x 1024), etc.
Any non-computing reference to kilo/mega etc. is based on the decimal 1000 base, e.g. 1 kW, or 1 kilowatt, which is 1000 watts.

Only one node owns data in a Cassandra cluster

I am new to Cassandra and have just set up a Cassandra cluster (version 1.2.8) with 5 nodes, and I have created several keyspaces and tables on it. However, I found that all data is stored on one node (in the output below, I have manually replaced IP addresses with node numbers):
Datacenter: 105
==========
Address  Rack  Status  State   Load       Owns      Token
                                                    4
node-1   155   Up      Normal  249.89 KB  100.00%   0
node-2   155   Up      Normal  265.39 KB  0.00%     1
node-3   155   Up      Normal  262.31 KB  0.00%     2
node-4   155   Up      Normal  98.35 KB   0.00%     3
node-5   155   Up      Normal  113.58 KB  0.00%     4
In their cassandra.yaml files I use all default settings except cluster_name, initial_token, endpoint_snitch, listen_address, rpc_address, seeds, and internode_compression. Below I list the non-IP-address fields I modified:
endpoint_snitch: RackInferringSnitch
rpc_address: 0.0.0.0
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "node-1, node-2"
internode_compression: none
All nodes use the same seeds.
Could you point out where I might have gone wrong in the config? Please feel free to let me know if any additional information is needed to figure out the problem.
Thank you!
If you are starting with Cassandra 1.2.8, you should try using the vnodes feature. Instead of setting initial_token, uncomment # num_tokens: 256 in cassandra.yaml, and leave initial_token blank or comment it out. Then you don't have to calculate token positions: each node will randomly assign itself 256 tokens, and your cluster will be mostly balanced (within a few %). Using vnodes also means that you don't have to "rebalance" your cluster every time you add or remove nodes.
See this blog post for a full description of vnodes and how they work:
http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2
Your token assignment is the problem here. A node's assigned token determines its position in the ring and the range of data it stores. When you generate tokens, the aim is to spread them across the entire range from 0 to (2^127 - 1). Tokens aren't IDs as in a MySQL cluster, where you increment them sequentially.
There is a tool on git that can help you calculate the tokens based on the size of your cluster.
Read this article to gain a deeper understanding of the tokens. And if you want to understand the meaning of the numbers that are generated check this article out.
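The calculation those tools perform is straightforward; here is a sketch for this 5-node cluster, assuming the RandomPartitioner's 0 to 2^127 - 1 range that the figures above refer to:

# Evenly spaced initial tokens for a 5-node ring (RandomPartitioner range).
node_count = 5
ring_size = 2**127

tokens = [i * ring_size // node_count for i in range(node_count)]
for node, token in enumerate(tokens, start=1):
    print(f"node-{node}: initial_token = {token}")

Each node's initial_token in cassandra.yaml would then be set to the corresponding value instead of 0, 1, 2, 3, 4.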
You should provide a replication_factor when creating a keyspace:
CREATE KEYSPACE demodb
WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 3};
If you use DESCRIBE KEYSPACE x in cqlsh, you'll see which replication_factor is currently set for your keyspace (I assume the answer is 1).