I just started using Vivado 2016.1.
1. What is considered a simulation of a circuit?
2. What steps do I follow to simulate a circuit?
Thank you
I’m trying to use QEMU to emulate a piece of firmware, but I’m having trouble getting the UART device to properly update the Line Status Register and display the input character.
Details:
Target device: Qualcomm QCA9533 (Documentation here if you're curious)
Target firmware: VxWorks 6.6 with U-Boot bootloader
CPU: MIPS 24Kc
Board: mipssim (modified)
Memory: 512MB
Command used: qemu-system-mips -S -s -cpu 24Kc -M mipssim -nographic -device loader,addr=0xBF000000,cpu-num=0 -serial /dev/ttyS0 -bios target_image.bin
I have to apologize here, but I am unable to share my source. However, as I am attempting to retool the mipssim board, I have only made minor changes to the code, which are as follows (a rough sketch of these changes appears after the list):
Rebased bios memory region to 0x1F000000
Changed load_image_targphys() target address to 0x1F000000
Changed $pc initial value to 0xBF000000 (TLB remap of 0x1F000000)
Replaced the mipssim serial_init() call with serial_mm_init(isa, 0x20000, env->irq[0], 115200, serial_hd(0), DEVICE_NATIVE_ENDIAN).
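For concreteness, here is roughly what those four changes look like in the style of hw/mips/mipssim.c. This is a sketch, not my actual diff: the sizes, the variable names, and the exact serial_mm_init() signature (notably the regshift argument) vary across QEMU versions.

/* Rough sketch of the changes above -- not the real diff. */
#include "qemu/osdep.h"
#include "qemu/units.h"
#include "qapi/error.h"
#include "exec/memory.h"
#include "hw/loader.h"
#include "hw/char/serial.h"
#include "cpu.h"                        /* target/mips CPUMIPSState */

#define BIOS_BASE 0x1f000000ULL         /* rebased bios physical base */
#define BIOS_SIZE (4 * MiB)             /* illustrative size */

static void retooled_mipssim_init(MemoryRegion *sysmem, MemoryRegion *isa,
                                  CPUMIPSState *env, const char *firmware)
{
    MemoryRegion *bios = g_new(MemoryRegion, 1);

    /* Changes 1 and 2: map the bios region at the new base and load
     * the firmware image at that same physical address. */
    memory_region_init_rom(bios, NULL, "mips_mipssim.bios", BIOS_SIZE,
                           &error_fatal);
    memory_region_add_subregion(sysmem, BIOS_BASE, bios);
    load_image_targphys(firmware, BIOS_BASE, BIOS_SIZE);

    /* Change 3: start execution at the KSEG1 (uncached) alias of the
     * new base; mipssim does this in its CPU reset callback. */
    env->active_tc.PC = 0xBF000000LL;

    /* Change 4: memory-mapped 16550 at offset 0x20000 of the I/O region. */
    serial_mm_init(isa, 0x20000, 0 /* regshift */, env->irq[0], 115200,
                   serial_hd(0), DEVICE_NATIVE_ENDIAN);
}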
While it seems like serial_init() is probably the currently accepted standard, I wasn’t having any luck with remapping it. I noticed the malta board had no issues outputting on a MIPS test kernel I gave it, so I tried to mimic what was done there. However, I still cannot understand how QEMU works and I am unable to find many good resources that explain it. My slog through the source and included docs is ongoing, but in the meantime I was hoping someone might have some insight into what I’m doing wrong.
The binary loads and executes correctly from address 0xBF000000, but hangs when it hits the first UART polling loop. A look at mtree in the QEMU monitor shows that the I/O device is mapped correctly to address range 0x18020000-0x1802003F, and when the firmware writes to the Tx buffer, gdb shows the character is successfully written to memory. There's just no further action from the serial device to pull that character and display it, so the firmware endlessly polls the LSR waiting for an update.
Is there something I’m missing when it comes to serial/hardware interaction in QEMU? I would have assumed that remapping all of the existing functional components of the mipssim board would be enough to at least get serial communication working, especially since the target uses the same 16550 UART that mipssim does. Please let me know if you have any insights. It would be helpful if I could find a way to debug QEMU itself with symbols, but at the same time I’m not totally sure what I’d be looking for. Even advice on how to scope down the issue would be useful.
Thank you!
Well after a lot of hard work I got the UART working. The answer to the question lies within the serial_ioport_read() and serial_ioport_write() functions. These two methods are assigned as the callbacks QEMU invokes when data is read or written to the MemoryRegion for the serial device (which is initialized in serial_init() or serial_mm_init()). These functions do a bit of masking on the address (passed into the functions as addr) to determine which register is being referenced, then return the value from the SerialState struct corresponding to that register. It's surprisingly simple, but I guess everything seems simple once you've figured it out. The big turning point was the realization that QEMU effectively implements the serial device as a MemoryRegion with special functionality that is triggered on a memory operation.
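To illustrate the mechanism, here is a minimal toy model in the spirit of QEMU's serial device. This is not the actual hw/char/serial.c code: the struct fields and register handling are simplified, and only the two registers relevant here are modeled.

/* Toy model of how QEMU turns guest memory accesses into device
 * behaviour: the MemoryRegionOps callbacks below play the role of
 * serial_ioport_read()/serial_ioport_write(). */
#include "qemu/osdep.h"
#include "exec/memory.h"

typedef struct ToySerialState {
    MemoryRegion io;
    uint8_t thr;                  /* Transmit Holding Register */
    uint8_t lsr;                  /* Line Status Register */
} ToySerialState;

/* Invoked by QEMU on every guest load from the mapped range; addr is
 * the offset within the region and selects the register. */
static uint64_t toy_serial_read(void *opaque, hwaddr addr, unsigned size)
{
    ToySerialState *s = opaque;
    switch (addr) {
    case 5:  return s->lsr;       /* the LSR the firmware polls */
    default: return 0;
    }
}

/* Invoked by QEMU on every guest store to the mapped range. */
static void toy_serial_write(void *opaque, hwaddr addr, uint64_t val,
                             unsigned size)
{
    ToySerialState *s = opaque;
    if (addr == 0) {              /* Tx buffer */
        s->thr = val;             /* the real device hands this byte to
                                   * the chardev backend for display */
        s->lsr |= 0x60;           /* THRE | TEMT: transmitter empty, so
                                   * a polling loop on the LSR can exit */
    }
}

static const MemoryRegionOps toy_serial_ops = {
    .read  = toy_serial_read,
    .write = toy_serial_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};

/* At board init the region is created and mapped at the UART's physical
 * base; from then on every guest access triggers the callbacks above. */
static void toy_serial_map(ToySerialState *s, MemoryRegion *sysmem,
                           hwaddr base)
{
    memory_region_init_io(&s->io, NULL, &toy_serial_ops, s,
                          "toy-serial", 8);
    memory_region_add_subregion(sysmem, base, &s->io);
}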
Anyway, hope this helps someone in the future avoid the nightmare I went through. Cheers!
I've cloned a "PointPillars" repo for 3D detection using just a point cloud as input. But when I came to run it, I noticed it uses CUDA and Numba. Without any prior knowledge of these two, I'm asking if there is any way to remove or disable Numba and CUDA. I want to run it on a local server with CPU only, so I would appreciate your advice.
The actual code matters here.
If the usage is only of vectorize or guvectorize with the target='cuda' parameter, then "removal" of CUDA should be trivial. Just remove the target parameter.
However, if there is use of the @cuda.jit decorator, or explicit copying of data between host and device, then other code refactoring would be involved. In that case there is no simple answer; the code would have to be converted to an alternate serial or parallel realization via refactoring or porting.
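For illustration, here is the trivial case on a hypothetical function (this is made-up example code, not from the PointPillars repo):

from numba import vectorize
import numpy as np

# GPU version (requires a CUDA toolkit and device):
#   @vectorize(['float32(float32, float32)'], target='cuda')
# CPU-only version: drop the target parameter (it defaults to 'cpu');
# target='parallel' is another CPU option for multi-core machines.
@vectorize(['float32(float32, float32)'])
def add(a, b):
    return a + b

x = np.arange(4, dtype=np.float32)
print(add(x, x))   # runs on the CPU; no CUDA required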
I have a custom bootloader and I know its entry point. How do I specify this address to QEMU?
I also get this warning when I try to load the image with the line qemu-system-mips -pflash img_:
WARNING: Image format was not specified for 'img_' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
I tried -pflash=img_,format=raw but it didn't work.
Thanks.
You will need to dig into the QEMU sources to play with custom bootloader images in QEMU. QEMU loads the bootloader in the board initialization function, which is board specific.
Run the following to view all available MIPS board models:
qemu-system-mips64 -machine ?
Supported machines are:
magnum MIPS Magnum
malta MIPS Malta Core LV (default)
mips mips r4k platform
mipssim MIPS MIPSsim platform
none empty machine
pica61 Acer Pica 61
These models are implemented in the qemu/hw/mips/ files (look for the *_init functions).
For example, for the malta board (which is the default) it is hw/mips/mips_malta.c. Look at the mips_malta_init function: it constructs the devices, buses, and CPU, registers memory regions, and places the bootloader into memory. It seems that the FLASH_ADDRESS macro is what you are looking for.
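The overall pattern looks something like this. This is an illustrative skeleton, not the actual mips_malta.c (which additionally wires the -pflash drive through a CFI flash device); FLASH_SIZE stands in for a board-specific limit.

/* Illustrative board init skeleton: the bootloader image is placed at
 * the board's flash base so the CPU's reset vector finds it there. */
#include "qemu/osdep.h"
#include "hw/boards.h"     /* MachineState */
#include "hw/loader.h"     /* load_image_targphys() */

static void some_board_init(MachineState *machine)
{
    /* ... construct the CPU, RAM, buses, and devices ... */

    /* Copy the -bios image into guest memory at the flash base. */
    if (machine->firmware) {
        load_image_targphys(machine->firmware, FLASH_ADDRESS, FLASH_SIZE);
    }

    /* ... point the CPU reset vector into this region ... */
}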
Note that such an init function is common to all the boards that QEMU implements.
Also, every board has a reference/datasheet document, and the QEMU model should agree with it from a programmer's point of view.
I hear very conflicting information about Octave's experimental JIT compiler feature, ranging from "it was a toy project but it basically doesn't work" to "I've used it and I get a significant speedup".
I'm aware that in order to use it successfully one needs to
Compile Octave with --enable-jit at configure time
Launch Octave with the --jit-compiler option
Specify JIT compilation preference at runtime using the jit_enable and jit_startcnt commands
but I have been unable to reproduce the effects convincingly (those steps, as I understand them, are sketched in command form below); I'm not sure whether I've missed some other step I'm unaware of, or whether it simply doesn't have much of an effect on my machine.
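For reference, this is how I understand those steps in command form (the configure invocation and the run-octave launcher are my assumptions; details may differ by version):
./configure --enable-jit && make       # 1. build Octave with JIT support
./run-octave --jit-compiler            # 2. launch with the JIT compiler
octave> jit_enable (1)                 # 3a. enable JIT at runtime
octave> jit_startcnt (1000)            # 3b. loop-count threshold for JIT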
Q: Can someone who has used the feature successfully provide a minimal working example demonstrating its proper use and the effect it has (if any) on their machine?
In short:
you don't need to do anything to use JIT, it should just work and speed up your code if it can;
it's mostly useless because it only works for simple loops;
it's little more than a proof of concept;
there is no one currently working on improving it because:
it's a complicated problem;
it's mostly a fix for sloppy Octave code;
it uses LLVM, which is too unstable.
Q: Can someone who has used the feature successfully provide a minimal working example demonstrating its proper use and the effect it has (if any) on their machine?
There is nothing to show. If you build Octave with JIT support, Octave will automatically use faster code for some loops. The only difference is in speed; you don't have to change your code (although you can disable JIT at runtime):
octave> jit_enable (1) # enable JIT
octave> tic; x = 0; for i=1:100000, x += i; endfor, toc
Elapsed time is 0.00490594 seconds.
octave> jit_enable (0) # disable JIT
octave> tic; x = 0; for i=1:100000, x += i; endfor, toc
Elapsed time is 0.747599 seconds.
## but you should probably write it like this
octave> tic; x = sum (1:100000); toc
Elapsed time is 0.00327611 seconds.
## If Octave was built without JIT support, you will get the following
octave> jit_enable (1)
warning: jit_enable: support for JIT was unavailable or disabled when Octave was built
This is a simple example, but you can see better examples and more details on the blog of the only person who worked on it, as well as in his presentation at OctConf 2012. There are more details on Octave's (outdated) JIT wiki page.
Note that Octave's JIT works only for very simple loops; loops so simple that no one familiar with the language would write them in the first place. The feature is there as a proof of concept, and a starting point for anyone who may want to extend it (I personally prefer to write vectorized code; that's what the language is designed for).
Octave's JIT has one other problem: it uses LLVM, which the Octave developers have found too unreliable for this purpose because it keeps breaking backwards compatibility. Every minor release of LLVM has broken Octave builds, so the Octave developers stopped fixing those breakages around the release of LLVM 3.5 and disabled JIT by default.
I am now trying to cross-compile my C code for the MIPS architecture.
I am using a cross-compiler I built on my host computer. I have an FPGA board implemented with a multi-cycle (not pipelined) MIPS core. I was just wondering if I could compile the code without delay slots yet keep the other optimizations GCC makes.
Even the simplest optimization flag, -O, still fills the delay slots. So I was wondering whether there are any options to simply disable the delay slot when cross-compiling for MIPS.