What is the meaning of Radio_on during radio duty cycle simulation in Cooja?

There are three headings shown during radio duty cycle simulation in Cooja: radio_on, radio_rx and radio_tx. The meanings of radio_rx and radio_tx are obvious. What is meant by the radio_on option?

It's the time when the radio chip hardware is turned on, i.e. it is either in the ready-to-receive state, receiving (Rx'ing) or transmitting (Tx'ing).
Radio Tx time: how long the chip spends transmitting PHY-layer packets.
Radio Rx time: how long the chip spends receiving PHY-layer packets.
And by the way, when the radio chip is on and neither transmitting nor receiving, the energy consumption is almost the same as in receive mode. The energy is spent keeping the receive machinery active and continuously sampling the medium to detect the start of a packet.
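For reference, Contiki's energest module tracks these same times, which Cooja's PowerTracker aggregates. A minimal sketch of reading them, assuming the classic Contiki energest API (names differ slightly in Contiki-NG):

    #include <stdio.h>
    #include "sys/energest.h"

    /* Sketch: estimate the radio duty cycle from Contiki's energest
       counters. radio_on corresponds to LISTEN + TRANSMIT time. */
    static void print_radio_duty_cycle(void)
    {
      unsigned long tx, rx, cpu, lpm, total;

      energest_flush();  /* bring the counters up to date */
      tx  = energest_type_time(ENERGEST_TYPE_TRANSMIT); /* radio_tx */
      rx  = energest_type_time(ENERGEST_TYPE_LISTEN);   /* Rx + idle listening */
      cpu = energest_type_time(ENERGEST_TYPE_CPU);
      lpm = energest_type_time(ENERGEST_TYPE_LPM);
      total = cpu + lpm;  /* total elapsed time in rtimer ticks */
      if(total == 0) {
        return;
      }
      printf("radio on %lu%%, tx %lu%%\n",
             (100UL * (rx + tx)) / total,
             (100UL * tx) / total);
    }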

Related

Is a "single cycle cpu" possible if asynchronous components are used?

I have heard the term "single cycle CPU" and was trying to understand what a single cycle CPU actually is. Is there a clear, agreed definition and consensus on what it means?
Some home-brew "single cycle CPUs" I've come across seem to use both the rising and the falling edges of the clock to complete a single instruction. Typically, the rising edge acts as fetch/decode and the falling edge as execute.
However, in my reading I came across the reasonable point made here ...
https://zipcpu.com/blog/2017/08/21/rules-for-newbies.html
"Do not transition on any negative (falling) edges.
Falling edge clocks should be considered a violation of the one clock principle,
as they act like separate clocks.".
This rings true to me.
Needing both the rising and falling edges (or high and low phases) is effectively the same as needing the rising edges of two cycles of a single clock running twice as fast; and that would be a "two cycle" CPU, wouldn't it?
So is it honest to state that a design is a "single cycle CPU" when both the rising and falling edges are actively used for state change?
It would seem that a true single cycle CPU must perform all state-changing operations on a single clock edge of a single clock cycle.
I can imagine such a thing is possible provided the data storage is all synchronous. If we have a synchronous system that has settled, then on the next clock edge we can clock the results into a synchronous data store and simultaneously clock the program counter on to the next address.
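To make that concrete, here's a hedged C sketch of the idea, modelled as a simulation (all names and the instruction encoding are invented for illustration):

    #include <stdint.h>

    /* Sketch: a single-edge, single-cycle CPU modelled in C. The whole
       next state is computed combinationally from the settled old state;
       the "rising edge" then commits everything at once. */
    struct cpu_state {
      uint16_t pc;
      uint16_t reg[8];
      uint16_t mem[256];   /* synchronous store: written only on the edge */
    };

    static struct cpu_state next_state(struct cpu_state s)
    {
      uint16_t instr = s.mem[s.pc];   /* fetch (combinational read) */
      /* decode/execute combinationally; say it's an ADD rd, rs, rt */
      s.reg[(instr >> 6) & 7] = s.reg[(instr >> 3) & 7] + s.reg[instr & 7];
      s.pc++;                         /* PC advances on the same edge */
      return s;
    }

    static void rising_edge(struct cpu_state *s)
    {
      *s = next_state(*s);  /* one edge: all storage updates together */
    }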
But if the target data store is, for example, async RAM, then surely the control lines would be changing whilst the data is being stored, leading to unintended behaviour.
Am I wrong? Are there any examples of a "single cycle cpu" that include async storage in the mix?
It would seem that using async RAM in one's design means one must use at least two logical clock cycles to achieve the state change.
Of course, with some more complexity one could perhaps have a CPU that uses a single edge where instructions use solely synchronous components, but relies on an extra cycle when storing to async data; but then that still wouldn't be a single cycle CPU, rather a mostly single cycle CPU.
So no CPU that writes to async RAM (or another async component) can honestly be considered a single cycle CPU, because the entire instruction cannot be carried out on a single clock edge. The RAM write needs two edges (i.e. falling and rising) and this breaks the single clock principle.
So is there a commonly accepted single cycle CPU and are we applying the term consistently?
What's the story?
(Also posted in my Hackaday log https://hackaday.io/project/166922-spam-1-8-bit-cpu/log/181036-single-cycle-cpu-confusion and also in a private group on Hackaday)
=====
Update: Looking at simple MIPS implementations, it seems the models use synchronous memory and so can probably operate off a single edge - and maybe they do - therefore warranting the category "single cycle".
And perhaps FPGA memory is always synchronous - I don't know about that.
But is the term used inconsistently elsewhere - i.e. by most home-brew TTL computers out there?
Or am I just plain wrong?
====
Update:
Some may have misunderstood my point.
Numerous home-brew TTL CPUs claim "single cycle CPU" status (for the purposes of this discussion I'm not interested in more complex beasts that do pipelining or whatever).
By single cycle these CPUs typically mean that they do something like advancing the PC on one edge of the clock and then using the opposing edge of the clock to update flip-flops with the result. Or they will use the other phase of the clock to update async components like latches and SRAM.
However, the ZipCPU reference I provided suggests that using the opposing clock edge is akin to using a second clock cycle, or even a second clock. BTW Ben Eater in his videos even compares the inverted clock that he uses to update his SRAM to a second clock.
My objection to the use of "single cycle CPU" for such CPUs (basically most/all home-brew TTL CPUs I've seen, as they all work that way) is that I agree with ZipCPU: using the opposing edge (or phase) of the clock for the commit is effectively the same as using a second clock, and this makes a mockery of the "single cycle" claim.
If using the opposing edge is effectively the same as using a single edge of two clock cycles, then I think that makes use of the term questionable. So I take ZipCPU's point to heart and tighten the term to mean use of a single edge.
On the other hand, it seems perfectly possible to build a CPU that uses only sync components (i.e. edge-triggered flip-flops) and only a single edge, where on each edge we clock whatever is on the bus into whatever device is selected for write, and at the same moment advance the PC.
Between one edge and the next same direction edge, settling occurs.
In this manner we end up with CPI=1 and use of only a single edge - which is very distinctly different to the common TTL CPU pattern of using both edges of the clock.
BTW my impression of FPGAs (which I'm not referring to here) is that the storage elements in an FPGA are all synchronous flip-flops. I don't know, but that's what my reading suggests. Anyway, if this is true then a trivial FPGA-based CPU probably has a CPI=1 and uses only, say, the +ve edge, so these might well meet my narrow definition of "single cycle CPU". Also, my reading suggests that various MIPS implementations (educational efforts, probably) probably meet my definition OK.
This seems mostly a question of definitions and terminology, more so than of how to actually build simple CPUs.
If you insist on that strict definition of "single cycle CPU" - truly using only one clock edge to set everything in motion for that instruction - then yes, that would exclude real-world toy/hobby CPUs that use a 2nd clock edge to give a consistent interval for memory access.
But they certainly still fulfil the spirit of a single-cycle CPU, which is that every instruction runs in 1 clock cycle, with no pipelining and no multi-cycle microcode.
A whole clock cycle does have 2 clock edges, and it's normal for real-world (non-single-cycle) CPUs to use the "other" edge for internal timing in some cases, but we still talk about their frequency in whole cycles, not the edge frequency - except in cases like DDR memory, where we do talk about the transfer rate being twice the memory clock frequency. What sets DDR apart is that it always uses both edges, and for approximately equal things, not just some extra timing/synchronization within a clock cycle.
Now could you build a CPU that keeps a store value on a memory bus for some minimum time, without using a clock edge? Maybe.
Perhaps make sure the critical path leading to the store data is short enough that the data is always ready. And possibly propagate some "data-ready" signal along with your computations (or just derive it from the longest critical path of any instruction), and a couple of gate delays after the data is on the bus, flip the memory clock. (And on the next CPU clock edge, flip it back.) If your memory doesn't mind its clock not having a uniform duty cycle, this might be fine as long as each half of the memory clock is long enough.
For loading from memory, you can maybe do something similar by initiating a memory load cycle some gate-delays after the CPU clock edge that starts this "cycle" of your single-cycle CPU. This might even involve building a long chain of gate delays intentionally with inverters dedicated to that purpose. Or perhaps even an analog RC time delay, but either way that's obviously worse than just using the other edge of the main clock, and you'd only ever do this as an exercise in single-cycle dogmatic purity. (It can also be flaky because gate-delay isn't constant, depending on voltage and temperature, so one side of the chip running hotter than the other could change relative timing.)
The definition says that a single cycle CPU executes just one instruction per cycle. So it's possible to conclude, at least in theory, that there are other CPUs that take more or fewer instructions per cycle. You can check out related concepts like the multi-cycle processor and the pipelined processor (instruction pipelining). "Pipelining attempts to keep every part of the processor busy with some instruction by dividing incoming instructions into a series of sequential steps," according to Wikipedia. I don't know exactly how it works, but maybe it just uses the available registers (for example using ECX alongside EAX, or maybe it works some other way); one thing that is certainly true is that the number of registers keeps increasing, so maybe that's one of the main purposes. Source: https://en.wikipedia.org/wiki/Instruction_pipelining
I think the answer to the question "is a single cycle CPU possible if asynchronous components are used" depends on the CPU controller that controls both CPU and RAM with opcodes. I found interesting information about this at: http://people.cs.pitt.edu/~cho/cs1541/current/handouts/lect-pipe_4up.pdf
https://ibb.co/tKy6sR2
CONCLUSION: In my opinion, if we consider the term "single cycle CPU", it should mean the simplest possible construction. The term "asynchronous" implies something more complex than "synchronous", so the two terms are not equivalent. It's something like asking "Can a basic data type be considered a structure?". In my opinion the word "single" means the simplest possible, and "asynchronous" means some modification, so something more complex; so I just think it's not possible. Maybe the phrase "are used" can be softened to "are used at the time" - if some switch, some controller, can turn off the asynchronous mode and make it all as simple as possible. But generally I just think it's not possible.

implementing jump register (jr, sll, slti) in multicycle

I've been asked to sketch a multicycle datapath and control unit for the instructions (jr, sll, slti) all together, and to draw the main controller FSM for these 3 instructions.
I'm struggling with it. I know how to do it with a single cycle datapath, but not for multicycle.
Please help.
If you know the single cycle datapath and want to take that to multicycle, then generally speaking we subdivide the single cycle into stages.
As only one stage will execute at a time, a relatively simple state machine controls what stage to execute currently/next to activate the appropriate stage, and deactivate the others.
The stages would be similar to the stages of a pipelined processor, namely those corresponding to Instruction Fetch, Decode, Execute, Memory, and Writeback.
In a pipelined machine, all the instructions need to go through all the stages, however, in a multicycle machine, certain instructions can skip certain states, and this should be manifest in the state machine. For example, while all instructions share Instruction Fetch and Decode, only loads and stores interact with the Data Memory, so any other instruction can skip the Mem stage.
The simplest possible state machine simply chooses all the states in succession, for every instruction, without ever consulting the instruction. This would fail to take advantage of the ability to skip stages for certain instructions that don't need those stages, and of course, you're being asked to do more than the simplest state machine.
A better state machine might start with Instruction Fetch as the active state, go to Decode state next, and by then it can use knowledge of the actual instruction to decide which of the remaining stages to skip for that given instruction.
You can imagine that the Control logic fetches some bits that inform the state machine of which stages can be skipped for any given opcode. There are only three states/stages left, so all that is needed from Control is a boolean for each.
For more information, especially on the stages and what they do, I suggest searching on pipelined/pipelining MIPS, as I believe there is more material on pipelined processors than on the multicycle design. You won't be concerned with pipeline hazards, but the breakdown of the single cycle design into stages might help.
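As a rough illustration of such a state machine (C used as executable pseudocode; the stage names follow the classic five-stage breakdown, the opcode encoding is invented, and finishing jr in Decode is one reasonable design choice, not the only one):

    /* Sketch: multicycle controller FSM for jr, sll and slti.
       None of the three touches data memory, so MEM is never entered. */
    enum stage { FETCH, DECODE, EXECUTE, MEM, WRITEBACK };
    enum op    { OP_JR, OP_SLL, OP_SLTI };

    static enum stage next_stage(enum stage s, enum op opcode)
    {
      switch (s) {
      case FETCH:     return DECODE;
      case DECODE:    /* jr can finish here: PC <- reg[rs], back to fetch */
                      return (opcode == OP_JR) ? FETCH : EXECUTE;
      case EXECUTE:   /* sll and slti compute in the ALU... */
                      return WRITEBACK;
      case WRITEBACK: /* ...then write the result register */
      default:        return FETCH;
      }
    }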

STM32F4 exit from STOP on Usart receive interrupt

STM32F429 discovery board:
Is it not possible to exit from STOP mode on a UART receive interrupt, because all the clocks are stopped? As far as I have read, any EXTI line configured in interrupt mode (EXTI0 - EXTI15) can wake up the microcontroller.
Please, I'd appreciate any advice on how to get started.
I tried the following with STM32CubeMX:
PA0 as GPIO_EXT0, and generated the code.
How do I link the UART receive pin to GPIO_EXT0?
While you are correct that the EXTI0 - EXTI15 pins are configurable for a wake-up, unfortunately this particular series of microcontroller (STM32F4) cannot have the USART clock active while stop mode is on. This means that the peripheral cannot see any data. You can, however, use an external watchdog, RTC, etc. - these give you workarounds with your current microcontroller.
You could use sleep mode, in which just the Cortex-M4 core clock and the CPU are stopped while all the peripherals are left running. However, with all the peripheral clocks enabled you will draw more current.
If you are interested in USART clock functionality in stop mode, check out the STM32L0 or STM32L4. Both of these have that feature, it works phenomenally well, and I would highly recommend these two series for a low-power application, as this is what they are designed for.
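For illustration, a hedged sketch of entering the two modes with the STM32 HAL (SystemClock_Config is the usual user-defined clock setup from a Cube project, assumed to exist elsewhere):

    #include "stm32f4xx_hal.h"

    extern void SystemClock_Config(void);  /* user-defined, as in Cube projects */

    /* Sleep: core stops, peripherals keep running; any interrupt wakes. */
    void enter_sleep(void)
    {
      HAL_SuspendTick();  /* otherwise SysTick wakes the core immediately */
      HAL_PWR_EnterSLEEPMode(PWR_MAINREGULATOR_ON, PWR_SLEEPENTRY_WFI);
      HAL_ResumeTick();
    }

    /* Stop: most clocks gated; only EXTI lines (and a few other sources)
       can wake the device. */
    void enter_stop(void)
    {
      HAL_PWR_EnterSTOPMode(PWR_LOWPOWERREGULATOR_ON, PWR_STOPENTRY_WFI);
      SystemClock_Config();  /* system clock falls back to HSI after stop */
    }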
It can be done in software, but not with STM32CubeMX.
GPIO inputs and EXTI (if configured) are active even if the pin is configured as alternate function. Configure the UART RX pin as you would for UART receive, then select that pin as EXTI source in the appropriate SYSCFG->EXTICR* register, and configure EXTI registers accordingly. You'll probably want interrupt on falling edge, as the line idle state is high.
Keep in mind that it takes some time for the MCU to resume operations, therefore some data received on the UART port will be inevitably lost.
PA0 cannot be configured as a UART RX pin, use the EXTI line corresponding to the RX pin of the used UART.
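A register-level sketch of what this answer describes, assuming USART1 RX on PA10 (so EXTI line 10) - adapt the line and port to your actual RX pin:

    #include "stm32f4xx.h"

    static void uart_rx_exti_init(void)
    {
      RCC->APB2ENR |= RCC_APB2ENR_SYSCFGEN;      /* SYSCFG clock on */

      /* Route PA10 to EXTI line 10 (EXTICR[2] covers lines 8..11). */
      SYSCFG->EXTICR[2] &= ~SYSCFG_EXTICR3_EXTI10;
      SYSCFG->EXTICR[2] |=  SYSCFG_EXTICR3_EXTI10_PA;

      EXTI->IMR  |= EXTI_IMR_MR10;               /* unmask line 10 */
      EXTI->FTSR |= EXTI_FTSR_TR10;              /* falling edge = start bit */

      NVIC_EnableIRQ(EXTI15_10_IRQn);
    }

    void EXTI15_10_IRQHandler(void)
    {
      if (EXTI->PR & EXTI_PR_PR10) {
        EXTI->PR = EXTI_PR_PR10;                 /* clear the pending flag */
        /* The MCU is awake again here; restore clocks and re-enable the
           UART. The byte that caused the wake-up is lost, as noted above. */
      }
    }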

When pressing Enter in the address bar of a modern browser, is there an interrupt that handles that event?

Some context: I was witness to an interview not so long ago, where the interviewer asked the interviewee what happens when a user presses the Enter button. A long explanation later, the interviewer explained that this action actually fires an interrupt so the CPU can handle the event (the event being the user hit Enter in the address bar).
This got me thinking as to whether or not this would actually result in an interrupt. While such low-level system/OS semantics are not my speciality, I was always under the impression that interrupts were mostly (exclusively?) for hardware devices.
So, when a user presses the Enter button in an address bar, is there ultimately an interrupt that causes the CPU to execute the code that loads the webpage?
Not just ultimately - the very moment you press a key or move the mouse, the input device generates an interrupt, which the CPU services in a dedicated interrupt handler routine where it reads the data off the device (the key code or the distance moved). And this has nothing to do with the browser per se.
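For the curious, the handler end of that flow looks roughly like this on a legacy PC with a PS/2 keyboard (freestanding kernel-style sketch; inb/outb are the usual port-I/O helpers, assumed defined elsewhere):

    #include <stdint.h>

    extern uint8_t inb(uint16_t port);            /* assumed port-I/O helpers */
    extern void    outb(uint16_t port, uint8_t v);

    /* Sketch: IRQ 1 handler. The keyboard controller raised the interrupt;
       the CPU vectors here, reads the scancode, and acknowledges the PIC. */
    void keyboard_irq_handler(void)
    {
      uint8_t scancode = inb(0x60);  /* PS/2 controller data port */
      /* hand the scancode to the OS input layer; the browser only sees a
         key event much later, via the windowing system */
      (void)scancode;
      outb(0x20, 0x20);              /* end-of-interrupt to the 8259 PIC */
    }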

How do machines without RTC generate pseudorandom numbers?

I always wondered how some old game consoles like the NES were able to generate random numbers without a seed like time(NULL);
Thanks
You can do that using the time between subsequent key presses, joystick movements or any similar human-originated interaction. If you can time the events, for example in microseconds, and take modulo 100 or so, you end up with a reasonable seed. If needed you can also do this several times to harvest enough bits to create a big enough (i.e. 64-bit) seed.
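A sketch of that idea in C (timer_us and wait_for_keypress are hypothetical platform hooks):

    #include <stdint.h>

    extern uint32_t timer_us(void);        /* free-running microsecond timer */
    extern void     wait_for_keypress(void);

    /* Harvest a few low-order timing bits per key press and mix them
       into a seed, as described above. */
    static uint32_t gather_seed(void)
    {
      uint32_t seed = 0;
      int i;
      for (i = 0; i < 8; i++) {
        wait_for_keypress();
        seed = (seed << 7) ^ (timer_us() % 100);  /* keep the jittery bits */
      }
      return seed;
    }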
On some other systems that cannot depend on human interaction for the seed, the reset circuit uses an RC circuit, but component variations will make its timing slightly different on every system. An external (CPU-independent) counter can be started on power-up and then read by the CPU during start-up. If the counter has enough resolution relative to the reset circuit's time constant, then the last bits can be used as a seed. This was used a long time ago by networked devices to generate a MAC-equivalent address before the Ethernet era.
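A sketch of this second approach (COUNTER_REG is a hypothetical memory-mapped counter that starts running at power-up, before the CPU comes out of reset):

    #include <stdint.h>

    #define COUNTER_REG (*(volatile uint32_t *)0x40001000u)  /* hypothetical address */

    /* The RC reset delay varies slightly per board and per power-up, so
       the counter's low bits differ each time - usable as a seed. */
    static uint16_t boot_seed(void)
    {
      return (uint16_t)(COUNTER_REG & 0xFFFFu);
    }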