Computing virtual address sizes [closed] - language-agnostic

I am stuck on this problem which I am studying for an exam tomorrow. (I understand the concept of virtual vs. physical addresses, page frames, address bus, etc.)
If you're using 4K pages with 128K of RAM and a 32 bit address bus, how large could a virtual address be? How many regular page frames could you have?
EDIT: I believe the answer is 2^32 and 2^20. I just do not know how to compute this.

Your answers are exactly right.
With a 32-bit address bus, you can access a virtual space of 2^32 unique addresses.
Each 4K page uses 2^12 (physical) addresses, so you can fit (2^32) / (2^12) = 2^20 pages into the space.
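If it helps, here is the same arithmetic as a quick Python sketch (assuming byte-addressable memory and a 4K = 2^12-byte page size):

    # Sketch: virtual address space and page count for the numbers above
    ADDRESS_BITS = 32                        # width of the address bus
    PAGE_SIZE = 4 * 1024                     # 4K page = 2**12 bytes

    virtual_space = 2 ** ADDRESS_BITS        # 2**32 = 4,294,967,296 addresses
    pages = virtual_space // PAGE_SIZE       # 2**20 = 1,048,576 pages

    print(virtual_space, pages)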
Good luck with your exam!
Edit to address questions in the comments:
How do you know you cannot access more than 2^32 addresses?
A 32-bit address bus means there are 32 wires connected to the address pins on the RAM--each wire is represented by one of the bits. Each wire is held at either a high or low voltage, depending on whether the corresponding bit is 1 or 0, and each particular combination of ones and zeroes, represented by a 32-bit value such as 0xFFFF0000, selects a corresponding memory location. With 32 wires, there are 2^32 unique combinations of voltages on the address pins, which means you can access 2^32 locations.
So what about the 4K page size?
If the system has a page size of 4K, it means the RAM chips in each page have 12 address bits (because 2^12 = 4K). If your hypothetical system has 128K of RAM, you'd need 128K/4K = 32 pages, or sets of RAM chips. So you can use 12 bits to select the physical address on each chip by routing the same 12 wires to the 12 address pins on every RAM chip. Then use 5 more wires to select which one of the 32 pages contains the address you want. We've used 12 + 5 = 17 address bits to access 2^17 = 128K of RAM.
Let's take the final step and imagine that the 128K of RAM resides on a memory card. But with a 32-bit address bus, you still have 32-17 = 15 address bits left! You can use those bits to select one of 2^15 = 32768 memory cards, giving you a total virtual address space of 2^32 = 4G of RAM.
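To make the split concrete, here is a small sketch that carves a 32-bit address into the 12/5/15 fields described above (the field layout belongs to this hypothetical machine, not to any real hardware):

    # Sketch: decompose a 32-bit address into offset, page, and card fields
    address = 0x48C7D3AB                     # any 32-bit address

    offset = address & 0xFFF                 # low 12 bits: byte within a 4K page
    page   = (address >> 12) & 0x1F          # next 5 bits: which of the 32 pages on a card
    card   = (address >> 17) & 0x7FFF        # top 15 bits: which of the 32768 memory cards

    print(hex(offset), page, card)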
Is this useful beyond RAM and memory cards?
It's common practice to divide a large set of bits, like those in the address, into smaller sub-groups to make them more manageable. Sometimes they're divided for physical reasons, such as address pins and memory cards; other times it's for logical reasons, such as IP addresses and subnets. The beauty is that the implementation details are irrelevant to the end user. If you access memory at address 0x48C7D3AB, you don't care which RAM chip it's in, or how the memory is arranged, as long as a memory cell is present. And when you browse to 67.199.15.132, you don't care if the computer is on a class C subnet as long as it accepts your upvotes. :-)

Related

How many Cycles are needed to transfer a block of 32 bytes?

The question is:
A wide bus configuration has the following parameters:
Number of cycles to send the address
Number of cycles for a bus transfer = 2 cycles
Memory Access = 30 cycles
How many cycles are needed to transfer a block of 32 bytes?
So, since it's a wide bus configuration, I assumed that the bus transfer would be done in one iteration, and the same for the memory access.
Which means that I got 30 + 2 cycles = 32.
However, I can't make sense of the size of the bus and its impact. I can't understand how I can calculate the remaining number of cycles from it.

Virtual memory: If each page table entry maps one word and requires 4 bytes, how large is the whole page table for a 32-bit machine?

When I try to solve it, I get that the virtual address is 20 bits, thus the number of entries is 2^20 and each entry holds 1 word, i.e. 4 bytes. Hence, 2^20 * 4 bytes, i.e. 4,194,304 bytes ≈ 4 MB, is the size of the page table.
Correct?
There is missing information: the page size, and page table construction (flat or hierarchical).
However, assuming a 4K-byte page size with flat page tables, the page number is 20 bits (32 - 12) as you computed. (A virtual address is still 32 bits, and a physical address could be the same size, smaller, or larger.)
That means each process using virtual memory would have a 4 MB page table, assuming there is a page table entry for every page in the process's virtual address space (not always the case; for example, some MIPS process layouts give a process at most only the lower 2 GB).
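A quick check of those numbers, as a sketch assuming the flat layout and 4K pages described above (the names are only for illustration):

    # Sketch: size of a flat page table on a 32-bit machine with 4K pages
    VIRTUAL_BITS = 32
    PAGE_OFFSET_BITS = 12              # 4K page = 2**12 bytes
    PTE_SIZE = 4                       # bytes per page table entry

    entries = 2 ** (VIRTUAL_BITS - PAGE_OFFSET_BITS)   # 2**20 = 1,048,576 entries
    table_bytes = entries * PTE_SIZE                   # 4,194,304 bytes, i.e. ~4 MB

    print(entries, table_bytes)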

Safe maximum amount of nodes in the DOM? [closed]

For a web application, given the available memory in a target mobile device [1] running a target mobile browser [2], how can one estimate the maximum number of DOM nodes, including text nodes, that can be generated via HTML or DHTML?
How can one calculate the estimate before
Failure
Crash
Significant degradation in response
Also, is there a hard limit in any browser on the number of nodes allowed per open tab?
Regarding Prior Closure
This is not like the other questions this one has been compared to. It is also asking a very specific question, seeking a method for estimation. There is nothing duplicated, broad, or opinion-based about it, especially now that it has been rewritten for clarity without changing its author's expressed interests.
Footnotes
[1] For instance, Android or iOS mobile devices sold from 2013 to 2018 with some specific RAM capacity
[2] Firefox, Chrome, IE 11, Edge, Opera, Safari
This is a question for which only a statistical answer could be accurate and comprehensive.
Why
The appropriate equation is this, where N is the number of nodes, bytes_N is the total bytes required to represent them in the DOM, and the node index n ∈ [0, N):
bytes_N = Σ_{n=0}^{N-1} (bytesContent_n + bytesOverhead_n)
The value requested in the question is the maximum value of N in the worst case handheld device, operating system, browser, and operating conditions. Solving for N for each permutation is not trivial. The equation above reveals three dependencies, each of which could drastically alter the answer.
The average size of a node is dependent on the average number of bytes used in each to hold the content, such as UTF-8 text, attribute names and values, or cached information.
The average overhead of a DOM object is dependent on the HTTP user agent that manages the DOM representation of each document. W3C's Document Object Model FAQ states, "While all DOM implementations should be interoperable, they may vary considerably in code size, memory demand, and performance of individual operations."
The memory available to use for DOM representations is dependent upon the browser used by default (which can vary depending on what browser handheld device vendors or users prefer), user override of the default browser, the operating system version, the memory capacity of the handheld device, common background tasks, and other memory consumption.
Rigorous Solution
One could run tests to determine (1) and (2) for each of the common HTTP user agents used on handheld devices. The distribution of user agents for any given site can be obtained by configuring the web server's logging mechanism to record the HTTP_USER_AGENT field if it isn't there by default, then stripping all but that field from the log and counting the instances of each value.
The number of bytes per character would need to be tested for both attributes values and UTF-8 inner text (or whatever the encoding) to get a clear pair of factors for calculating (1).
The memory available would need to be tested too under a variety of common conditions, which would be a major research project by itself.
The particular value of N chosen would have to be ZERO to handle the actual worst case, so one would choose a certain percentage of typical cases of content, node structures, and run-time conditions. For instance, one may take a sample of cases using some form of randomized in situ (within normal environmental conditions) study and find the N that satisfies 95% of those cases.
Perhaps a set of cases could be tested in the above ways and the results placed in a table. Such would represent a direct answer to your question.
I'm guessing it would take an excellent mobile software engineer with a good math background and a statistics expert working together full time with a substantial budget for about four weeks to get reasonable results.
A More Practical Estimation
One could guess the worst-case scenario. With a few full days of research and a few proof-of-concept apps, this proposal could be refined. Absent the time to do that, here's a good first guess.
Consider a cell phone that permits 1 GB for the DOM because normal operating conditions consume 3 GB of its 4 GB for the above-mentioned purposes. One might assume the following average memory consumption per node to get a ballpark figure.
2 bytes per character for 40 characters of inner text per node
2 bytes per character for 4 attribute values of 10 characters each
1 byte per character for 4 attribute names of 4 characters each
160 bytes for the C/C++ node overhead
In this case N_worst_case, the worst-case maximum number of nodes, is
= 1,024 X 1,024 X 1,024
/ (2 X 40 + 2 X 4 X 10 + 1 X 4 X 4 + 160)
= 1,073,741,824 / 336
≈ 3,195,660.
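That ballpark figure is easy to reproduce, as a sketch using only the assumed per-node numbers listed above (they are guesses, not measurements):

    # Sketch: ballpark node count for 1 GiB of DOM budget with the assumptions above
    budget = 1024 * 1024 * 1024              # 1 GiB assumed available for the DOM

    bytes_per_node = (2 * 40                 # inner text
                      + 2 * 4 * 10           # attribute values
                      + 1 * 4 * 4            # attribute names
                      + 160)                 # assumed C/C++ node overhead

    print(budget // bytes_per_node)          # roughly 3,195,660 nodes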
I would not, however, build a document in a browser with three million DOM nodes if it could be at all avoided. Consider employing the more common practice below.
Common Practice
The best solution is to stay far below whatever N might be and simply reduce the total number of nodes to the degree possible using standard web page design techniques.
Reduce the size and complexity of that which is displayed on any given page, which also improves visual and conceptual clarity.
Request minimal amounts of data from the server, deferring content that is not yet visible using windowing techniques or balancing response time with memory consumption in well-planned ways.
Use asynchronous calls to assist with the above minimalism.
There is no limit for the DOM itself. Instead there is a limit for the running application, i.e. the browser. Like any other application, a 32-bit browser process is limited to 4 GB of virtual memory. How much resident memory is used depends on the amount of physical memory. With little RAM you might get into a situation of constantly swapping pages in and out (given a sufficient amount of swap space). Some systems (Linux, Android) have a special kernel task that kills applications if the system runs out of memory. Also, the maximum virtual memory size of an application in Linux-like systems can be limited, and changed, with the ulimit command.

size of memory of computer that uses 16 bits memory address

If the memory address of a computer uses 16 bits, what is the size of its memory? I find many references online but I can't be certain which are relevant. Thank you.
2^16?
From wikipedia:
For instance, a computer said to be "32-bit" also usually allows 32-bit memory addresses; a byte-addressable 32-bit computer can address 2^32 = 4,294,967,296 bytes of memory, or 4 gibibytes (GiB). This seems logical and useful, as it allows one memory address to be efficiently stored in one word.
So to answer your question: in general, yes, a computer with 16-bit memory addresses can address 2^16 bytes of memory, assuming it is byte-addressable (and a 16-bit address then fits in one word).
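In concrete numbers, a sketch assuming byte addressing:

    # Sketch: addressable memory with 16-bit, byte-addressed addresses
    ADDRESS_BITS = 16
    size_bytes = 2 ** ADDRESS_BITS           # 65,536 bytes
    print(size_bytes, size_bytes // 1024)    # 65536 bytes = 64 KiB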

Math behind 4GB limit on 32 bit systems

I have a very fundamental question relating to 32 bit memory addresses. My understanding is that 2^32 is the maximum number of possible memory addresses on a 32 bit system. Where I am confused is how we go from this number to the alleged 4GB limit. In my research I have seen some people do this:
2^32 = 4,294,967,296 bytes
4,294,967,296 / (1,024 * 1,024) = ~4 GB
First, where does this (1,024 * 1,024) come from?
Second, correct me if I am wrong, but 4,294,967,296 is labeled as bytes because a byte is the smallest unit of storage space that can be addressed in RAM. Since we're limited to 2^32 addresses, that's the number of bytes that can be addressed.
Third, even though the smallest addressable space in RAM is a byte, this must not be the case for the hard drive, because 32-bit systems usually have hard disks well in excess of 4 GB. Can someone briefly describe the addressing scheme for hard disks?
This is a case of basic arithmetic: the number of bytes per addressed unit times the number of addressable units equals the number of addressable bytes.
The hard part is where to get those numbers from. Here is my take on it:
1 - What is a Kilobyte, Megabyte, Gigabyte?
For RAM, there is consensus that a Gigabyte is 1024 Megabytes, each consisting of 1024 Kilobytes, each being 1024 Bytes. This stems from the fact that 1024 is 2^10, yet close enough to 1000 to historically allow the Kilo prefix.
For storage, vendors years ago started to use strictly decimal units, a Megabyte being 1,000,000 bytes (as it makes the capacities look bigger in glossy brochures).
This has led to 1024 * 1024 bytes being called a MiB and 1000 * 1000 bytes being called a MB.
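To see the difference in numbers for the 2^32-byte example, a quick sketch:

    # Sketch: binary vs. decimal units for 2**32 bytes
    total = 2 ** 32
    print(total / (1024 ** 3))   # 4.0 GiB (binary units, 1024 * 1024 * 1024)
    print(total / (1000 ** 3))   # ~4.29 GB (decimal units)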
2 - The addressable unit
For RAM, the addressable unit is the byte, even if it is fetched from physical RAM in chunks of at least 4 bytes.
For mass storage, the addressable unit is the sector or block, which most often is 512 bytes, though 4096-byte blocks are catching up fast.
3 - The number of addressable units is much more complicated, let's start with RAM:
A 32 Bit CPU (sans the MMU!) can address 2^32 Bytes or 4 GiB
All modern 32 Bit CPUs include a MMU, that maps these 4 GiB of virtual address space into a physical address space
This physical address space can have a different size than 4 GiB, as a function of the MMU using more (or, in prehistoric times, fewer) than 32 physical address lines. Today's most common implementations use 36 or more physical address bits, resulting in 16 * 4 GiB or more (PAE, or Physical Address Extension); see the sketch after this list.
This MMU magic does not get around the CPU running in 32-bit mode, i.e. for every process the virtual address space still can't be larger than 4 GiB.
To make things a little more interesting, a part of this address space is used for kernel functionality in every modern OS I know of. This results in 2 GiB or 3 GiB maximum usable address space per process for all mainstream OSes.
And as if this still weren't complicated enough: running the MMU in a mode where it can actually use more than 4 GiB of physical RAM must be supported by the OS. A notable example is 32-bit Windows XP, which does NOT allow that.
And last but not least: a part of the physical address space is most often used for memory-mapping hardware. Combined with the OS limits above, this results in 32-bit Windows XP sometimes being able to use only 2.5 to 3.5 GiB of physical RAM.
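Putting numbers on the virtual vs. physical sizes mentioned in the list above, a sketch (the 36 physical bits are just the common PAE example):

    # Sketch: 32-bit virtual address space vs. a 36-bit (PAE) physical address space
    virtual_space  = 2 ** 32                 # per-process virtual space
    physical_space = 2 ** 36                 # with 36 physical address lines

    print(virtual_space // 2 ** 30)          # 4 GiB
    print(physical_space // 2 ** 30)         # 64 GiB = 16 * 4 GiB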
It's much less of a hassle for storage:
In all modern PC-based cases I know of, the addressable units are simply counted with 32 or 48 bits (LBA, or logical block addressing). Even in its most basic version this is enough for 2 TiB of storage per disk (2^32 blocks of 512 bytes each). Maxed-out versions with 48-bit LBA and 4 KiB per block result in roughly a million TiB (1 EiB) per disk.
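The storage figures work out as follows, a sketch using the 512-byte and 4 KiB block sizes mentioned above:

    # Sketch: maximum addressable storage for the LBA variants above
    basic = 2 ** 32 * 512                    # 32-bit LBA, 512-byte blocks
    maxed = 2 ** 48 * 4096                   # 48-bit LBA, 4 KiB blocks

    print(basic // 2 ** 40)                  # 2 TiB
    print(maxed // 2 ** 40)                  # 1,048,576 TiB (1 EiB)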
A computer is not all memory. The 32 bits are the maximum spots for an Instruction Set to be organized. 64 bits gives you more bits to reference more memory. I think those people meant 4,294,967,296 bit combinations not bytes (8 bits).
As for the math - it seems to mean that 20 bits are reserved for other uses besides specifying a possible memory address.