How do you adjust the size of windows in the ModelSim GUI through Tcl? - tcl

Right now I build and add signals to ModelSim through a TCL script. The time I spend resizing windows is obnoxious.
Is there a way to use TCL to tell ModelSim, "Make the wave window X by Y, and scrunch everything else as small as you need to"?
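One avenue worth trying: the ModelSim GUI is built on Tk, so standard Tk geometry commands are usually available from its Tcl shell alongside ModelSim's own `view` command. A sketch only; the toplevel window path (`.wave` here) and the geometry values are assumptions that vary between ModelSim versions, so list the real names first with `winfo children .`:

```tcl
# Sketch, not a tested recipe: window path names differ across ModelSim
# versions, so discover them before hard-coding anything.
puts [winfo children .]        ;# list candidate window paths

view wave -undock              ;# pop the wave window into its own toplevel
# Standard Tk geometry string: WIDTHxHEIGHT+X+Y
wm geometry .wave 1200x800+0+0 ;# assumed path; substitute what winfo reported
```

Once the wave window takes a fixed geometry, the remaining docked panes reflow into whatever space is left, which gets most of the way to "scrunch everything else".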

Related

Using "tic" without "toc" in Octave

Quick background: the Octave GUI, version 6.2.0, takes at least 20-30 seconds to initialize on one computer, but 1-2 seconds on another computer running the same version and the same OS. I wanted to figure out where the holdup was, so I used the tic and toc timer functions in various places within the octaverc startup file. At some point, I noticed that merely having tic at the very end of the startup file completely fixed the slow initialization time.
So while my main issue is resolved (though I don't know why that worked), I wanted to see if using tic without ever using toc in an Octave session can cause any issues, e.g., does this leave some timer process running in the background? Thanks in advance.
tic stores the current clock time in an internal variable (see tic_toc_timestamp here; the source code for tic starts right after that line), and toc compares the current clock time to that stored variable (see its source code here).
So, no, there’s no timer running in the background after you call tic.

MARS Mips Breakpoint

I want to check out what the registers are at this point in my program (as pictured). Naturally, I tried to add a breakpoint in MARS, my MIPS simulator, as pictured below. However, MARS just blows through it without stopping. So, I tried to get my program to crash right after this point, but I couldn't get it to crash, even by setting the stack pointer to -4 and calling lw. Any ideas as to how to get it to stop at a given line?
After assembling, do not close the "Execute" window, or any breakpoints you set will stop working (or at least that was my experience). I don't know if anyone else can corroborate this.

Octave JIT compiler. Current state, and minimal example demonstrating effect

I hear very conflicting information about Octave's experimental JIT compiler feature, ranging from "it was a toy project but it basically doesn't work" to "I've used it and I get a significant speedup".
I'm aware that in order to use it successfully one needs to:
Compile Octave with the --enable-jit flag at configure time
Launch Octave with the --jit-compiler option
Specify JIT compilation preference at runtime using the jit_enable and jit_startcnt commands
but I have been unable to reproduce the effects convincingly; not sure if this is because I've missed out any other steps I'm unaware of, or it simply doesn't have much of an effect on my machine.
Q: Can someone who has used the feature successfully provide a minimal working example demonstrating its proper use and the effect it has (if any) on their machine?
In short:
you don't need to do anything to use JIT, it should just work and speed up your code if it can;
it's mostly useless because it only works for simple loops;
it's little more than a proof of concept;
there is no one currently working on improving it because:
it's a complicated problem;
it's mostly a fix for sloppy Octave code;
it uses LLVM, which is too unstable.
Q: Can someone who has used the feature successfully provide a minimal working example demonstrating its proper use and the effect it has (if any) on their machine?
There is nothing to show. If you build Octave with JIT support, Octave will automatically use faster code for some loops. The only difference is the speed, and you don't have to change your code (although you can disable JIT at runtime):
octave> jit_enable (1) # enable JIT
octave> tic; x = 0; for i=1:100000, x += i; endfor, toc
Elapsed time is 0.00490594 seconds.
octave> jit_enable (0) # disable JIT
octave> tic; x = 0; for i=1:100000, x += i; endfor, toc
Elapsed time is 0.747599 seconds.
## but you should probably write it like this
octave> tic; x = sum (1:100000); toc
Elapsed time is 0.00327611 seconds.
## If Octave was built without JIT support, you will get the following
octave> jit_enable (1)
warning: jit_enable: support for JIT was unavailable or disabled when Octave was built
This is a simple example, but you can see better examples and more details on the blog of the only person who worked on it, as well as in his presentation at OctConf 2012. There are more details on Octave's (outdated) JIT wiki page.
Note that Octave's JIT works only for very simple loops: loops so simple that no one familiar with the language would write them in the first place. The feature is there as a proof of concept and as a starting point for anyone who may want to extend it (I personally prefer to write vectorized code; that's what the language is designed for).
Octave's JIT has one other problem: it uses LLVM, and the Octave developers have found LLVM too unreliable for this purpose because it keeps breaking backwards compatibility. Every minor release of LLVM has broken Octave builds, so the Octave developers stopped fixing those breakages when LLVM 3.5 was released and disabled JIT by default.

Write to QEMU guest system registers & memory?

How do you write to the processor registers and specific memory addresses of a virtual system running in QEMU?
My desire would be to accomplish this from a user space program running outside of QEMU. This would be to induce interrupts and finely control execution of the processor and virtual hardware.
The QEMU Monitor can read parameters and do simple injections of mouse or keyboard events, but I haven't seen anything about writing registers or memory.
The GDB server within QEMU seems to be the best fit for your purpose. One option is implementing the GDB remote protocol yourself; another is driving gdb itself through its command line.
I've tested it a bit: attaching, reading, and writing memory seem to work (I read back what I write); jumping to another address seems to work too. (If you can call injected code, you can theoretically do anything.) Writing to text-mode video memory doesn't work: I don't even read back what I wrote, and nothing changes on the display.
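If you go the route of implementing the GDB remote protocol yourself, the wire framing is simple: each command is sent as `$<payload>#<checksum>`, where the checksum is the modulo-256 sum of the payload bytes, written as two hex digits. A minimal Python sketch of framing the memory read (`m<addr>,<len>`) and memory write (`M<addr>,<len>:<hex-data>`) commands; the addresses are just examples:

```python
def rsp_packet(payload: str) -> str:
    """Frame a GDB Remote Serial Protocol command as $payload#checksum,
    where the checksum is the payload byte sum modulo 256, in hex."""
    checksum = sum(payload.encode("ascii")) % 256
    return f"${payload}#{checksum:02x}"

# Read 4 bytes at address 0x1000.
read_cmd = rsp_packet("m1000,4")
# Write the bytes de ad be ef at address 0x1000.
write_cmd = rsp_packet("M1000,4:deadbeef")
```

You would send these over the TCP socket QEMU's gdbstub listens on (start QEMU with `-s` for port 1234, `-S` to pause at startup) and read back `+` acknowledgements and reply packets in the same framing.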

adding functions in a CUDA program

So, I think I have a very weird question.
Let's say that I already have a program on my GPU, and in that program I call a function X. But that function X is not declared yet.
I want to be able to modify that function X dynamically, completely changing its code and putting it into the program without recompiling the rest or losing any pointers whatsoever.
To compare it with something most of us know, I want to do what shaders allow in OpenGL: in the middle of execution, I can change the code of one shader, recompile only that shader, activate the program, and use the new one.
So, is it possible? Or do I need to recompile the whole thing every time? And if I have to recompile, do I lose the arrays that I created in global memory?
Thanks
W
If you compile with the -cuda flag using nvcc, you can get the intermediate C++ source that streams PTX to the processor. In theory, you could post-process this intermediate output to dynamically generate PTX on the fly and send it over. You might even be able to make the PTX self-modifying, but that's way out of my league.