Programming a purchased EP2C35F672C6 FPGA - configuration

I am new to FPGAs & board development. This semester, I was introduced to Quartus II, VHDL, and FPGAs. I have downloaded several basic designs onto the DE2 board, which has an EP2C35F672C6N FPGA on it. However, every time I power up the board, I must re-download the configuration. I was wondering if someone could explain which Altera FPGAs, similar to the EP2C35F672C6, retain their configuration once set, until a new configuration is uploaded to the board.
Also, I purchased an EP2C35F672C6 FPGA chip from Altera. However, I do not see a way to program it using my current board, because the FPGA on my DE2 board appears to be soldered onto it. Are there special boards out there that you can use to configure standalone FPGAs? Thank you.

The FPGA can load its configuration from a flash chip. The FPGA itself cannot store anything non-volatile. You must use it on a board and configure it each time the power comes on. Up until now you have been doing this with a JTAG cable (I assume).
You can also program the EPCS16 serial flash device on the DE2 board. According to the user manual (page 24), this can be programmed over JTAG by assigning a POF file to the device in the chain. The FPGA will then configure itself from that flash device on each power-up.
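For reference, programming the configuration device from the command line typically looks something like this (a sketch: the cable name and the POF file name are placeholders for whatever your setup uses, and on the DE2 the RUN/PROG switch must be set to PROG first):

    quartus_pgm -c USB-Blaster -m as -o "p;my_design.pof"

Here -m as selects Active Serial mode, which targets the EPCS16 rather than the FPGA's JTAG port; flip the switch back to RUN afterwards so the FPGA configures from the flash.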

Related

What is the real difference between Firmware and Embedded Software?

I am searching for the real difference between firmware and embedded software.
On the internet it is written that firmware is a type of embedded software, but not vice versa. Beyond that, the classic BIOS example is very old.
Both run from non-volatile memory. One claimed difference is that embedded software is like application programming, with an RTOS and a file system, and can run from RAM.
If I don't use an RTOS or RAM and only use flash memory, does that mean my embedded software is firmware?
Is it the memory layout that actually makes the real difference?
The answers on the internet lack technical explanations and are not satisfying.
Thank you very much.
They are not distinctly separate things, or even well defined. Firmware is a subset of software; the term typically implies that it is in read-only memory:
Software refers to any machine executable code - including "firmware".
Firmware refers to software in read-only memory
Read-only memory in this context includes re-writable memory such as flash or EPROM that requires a specific erase/write operation and is not simply random-access writable.
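To make that last point concrete, here is a schematic sketch of what "writing" such memory involves. Every register name, address and key value below is invented for illustration (real MCU flash controllers differ), but the unlock/erase/program shape is typical and is exactly what makes flash not simply random-access writable:

    #include <cstdint>

    // All addresses, bits and key values below are hypothetical.
    constexpr std::uintptr_t FLASH_KEYR = 0x40022004; // fake unlock register
    constexpr std::uintptr_t FLASH_CR   = 0x40022010; // fake control register
    constexpr std::uint32_t  CR_PAGE_ERASE = 1u << 1;
    constexpr std::uint32_t  CR_PROGRAM    = 1u << 0;

    inline volatile std::uint32_t& reg(std::uintptr_t a)
    {
        return *reinterpret_cast<volatile std::uint32_t*>(a);
    }

    void flash_write_word(std::uintptr_t addr, std::uint32_t value)
    {
        reg(FLASH_KEYR) = 0x11111111;    // magic unlock sequence (made up)
        reg(FLASH_KEYR) = 0x22222222;
        reg(FLASH_CR)   = CR_PAGE_ERASE; // erase the whole page containing addr...
        // ...busy-wait for the erase to finish (omitted)...
        reg(FLASH_CR)   = CR_PROGRAM;    // ...then program one word at a time
        reg(addr)       = value;
        // ...busy-wait for programming to finish (omitted)...
    }

Contrast this with RAM, where a plain assignment suffices; that operational asymmetry is what the "read-only" in the definition above is getting at.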
The distinction between RAM and ROM execution is not really a distinction between firmware and software. Many embedded systems load executable code from ROM and execute it from RAM for performance reasons, while others execute directly from ROM. Rather, if the end-user cannot easily modify or replace the software without special tools or a bootloader, then it might be regarded as "firm". If, on the other hand, a normal end-user can modify, update or replace the software using facilities on the system itself (by copying a file from removable media or the network, for example), then it is not firmware. Consider, for example, the difference between updating your PC's BIOS and updating Microsoft Office - the former requires a special procedure distinct from normal operating system services for loading and running software.
For example, the operating system, bootloader and BIOS of a smart phone might be considered firmware. The apps a user loads from an app-store are certainly not firmware.
In other contexts, "firmware" might refer to the configuration of a programmable logic device such as an FPGA, as opposed to sequentially executed processor instructions. That is rather a niche distinction, but a useful one in systems employing both programmable logic and software execution.
Ultimately, you would use the term "firmware" to imply some level of "permanence" of software in a system, but there is a spectrum, so you would use the term in whatever manner is useful in the context of your particular system. For example, I am working on a system where all the code runs from flash, so I only ever use the term software to refer to it, because there is no need to distinguish it from any other kind of software in the system.

What does the term "Real-Time Software Development" refer to?

I saw a job description with the term Real-Time Software Development:
Software Engineers at Boeing develop solutions that provide world class performance and capability to customers around the world.
Boeing Defense, Space and Security in St. Louis is looking for
software engineers to join the growing and talented teams developing
modeling and simulation software for a variety of applications,
including flight control and aerodynamic performance, weapon and
sensor systems, simulation tools and more. The software is integrated
with live assets to enable a next-generation virtual battle
environment to explore new system concepts and optimal engineering
solutions.
Our software engineers are responsible for full life-cycle software development which means you will have a hand in defining the
requirements; designing, implementing and testing the software. You
will work with a team in a casual but professional environment where
there is long-term potential for career growth into management or
technical leadership positions.
Languages & Databases
Real-time SW Development Tool
Real-time Target Environment
Job: Software Engineer
I can't figure out what that means in this context. What does real-time software development mean?
The links in the comments give some useful information. The real problem with real time is that there are far fewer uses for it than for ordinary scientific or data-processing applications, and so fewer specialists around.
I used a real-time development environment many years ago, and a friend of mine used another one more recently. The general characteristics were:
the development system is an IDE more or less like any other IDE
you can determine precisely how long any routine will take, because if you use an RT system it is because you need deterministic processing times (a minimal illustration follows this list)
you have an emulator that allows you to run the program, or more exactly simulate it running on the real system, with different inputs (including hardware inputs) and check both the outputs and the timings
you generally mix high-level programming (C or others) for non-critical parts with low-level assembly routines in time-critical parts.
The rest really depended on the system being simulated.
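To give a flavour of what "deterministic processing times" look like in code, here is a minimal sketch of the classic fixed-rate loop (POSIX assumed; read_sensors and update_control are hypothetical stand-ins for the real work). It wakes at absolute deadlines so that jitter does not accumulate the way it would with relative sleeps:

    #include <time.h> // clock_gettime, clock_nanosleep, TIMER_ABSTIME (POSIX)

    void read_sensors()   { /* sample inputs; must complete within the period */ }
    void update_control() { /* compute and write outputs */ }

    int main()
    {
        const long period_ns = 10'000'000; // 10 ms control period
        timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;)
        {
            read_sensors();
            update_control();

            // Advance the absolute deadline and sleep until it; TIMER_ABSTIME
            // keeps the period exact even if the work above varies in length.
            next.tv_nsec += period_ns;
            if (next.tv_nsec >= 1'000'000'000) { next.tv_nsec -= 1'000'000'000; ++next.tv_sec; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
        }
    }

On a genuine RT system the loop would additionally run at a real-time scheduling priority and its worst-case execution time would be verified against the period; the point here is only the absolute-deadline pattern.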
Real time in this context means software whose execution time is deterministic. Normal server and desktop OSes such as Mac, Linux, and Windows multitask without exact scheduling, making it impossible to say exactly how long a piece of code will take to run. In a real-time OS, the time a piece of code takes is bounded and predictable.
This is used in spacecraft, aircraft and similar areas.
Not to be confused with real-time processing speed, e.g. encoding video in real time means encoding it as fast as the frames are coming.

Direct3D texture resource life cycle

I have been working on a project with Direct3D on Windows Phone. It is just a simple game with 2D graphics, and I make use of DirectXTK to help me out with sprites.
Recently, I came across an out-of-memory error while I was debugging on the 512 MB emulator. This error was not common and was the result of a sequence of open, suspend, open, suspend, ...
Tracking it down, I found out that the textures are loaded on every activation of the app, eventually filling up the allowed memory. To solve it, I will probably edit the code to load textures only on a fresh open, not on activation from a suspend; but after this problem I am curious about the correct life-cycle management of texture resources. While searching, I came across automatic (or "managed", as Microsoft calls it) texture management (http://msdn.microsoft.com/en-us/library/windows/desktop/bb172341(v=vs.85).aspx), which can probably help with some management of the textures in video memory.
However, I would also like to know about other methods, since I couldn't figure out a good way to incorporate managed textures into my code.
My best idea is to call the Release method of the ID3D11ShaderResourceView pointers I store in destructors to prevent filling up the memory, but how do I ensure the textures are not sitting in memory while other apps want to use it (the memory)?
Windows Phone uses Direct3D 11, which 'virtualizes' the GPU memory. Essentially every texture is 'managed'. If you want a detailed description of this, see "Why Your Windows Game Won't Run In 2,147,352,576 Bytes?". The link you provided is from the Direct3D 9 era for Windows XP XPDM, not for any Direct3D 11-capable platform.
It sounds like the key problem is that your application is leaking resources or has too large a working set. You should enable the debug device and first make sure you have cleaned up everything as you expected. You may also want to check that you are following the recommendations for launching/resuming on MSDN. Also keep in mind that Direct3D 11 uses 'deferred destruction' of resources, so just because you've called Release everywhere doesn't mean that all the resources are actually gone... To force a full destruction, you need to use Flush.
With Windows Phone 8.1 and Direct3D 11.2, there is specific Trim functionality you can use to reduce an app's memory footprint, but I don't think that's actually your issue.
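As a concrete illustration of the clean-up advice above, here is a minimal sketch of a suspend handler (how you store the device and context is up to your app; the parameter passing here is just for illustration):

    #include <wrl/client.h> // Microsoft::WRL::ComPtr
    #include <d3d11.h>
    #include <dxgi1_3.h>    // IDXGIDevice3 (Windows Phone 8.1 / Windows 8.1)

    using Microsoft::WRL::ComPtr;

    void OnSuspending(const ComPtr<ID3D11Device>& device,
                      const ComPtr<ID3D11DeviceContext>& context)
    {
        // Unbind everything and flush, so deferred destruction actually runs
        // and resources you have Release()d are really freed.
        context->ClearState();
        context->Flush();

        // On DXGI 1.3 platforms, ask the driver to drop internal allocations
        // the app is not using while suspended.
        ComPtr<IDXGIDevice3> dxgiDevice;
        if (SUCCEEDED(device.As(&dxgiDevice)))
        {
            dxgiDevice->Trim();
        }
    }

Creating the device with the D3D11_CREATE_DEVICE_DEBUG flag during development also makes the debug layer report any objects still alive, which is the quickest way to spot a leak.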

What is an "interlocked pipeline" as in the MIPS acronym?

I am going through the MIPS processor architecture.
As per this article, the acronym stands for: Microprocessor without Interlocked Pipeline Stages
http://en.wikipedia.org/wiki/MIPS_architecture
One major barrier to pipelining was that some instructions, like division, take longer to complete and the CPU therefore has to wait before passing the next instruction into the pipeline. One solution to this problem is to use a series of interlocks that allows stages to indicate that they are busy, pausing the other stages upstream. Hennessy's team viewed these interlocks as a major performance barrier since they had to communicate to all the modules in the CPU which takes time, and appeared to limit the clock speed. A major aspect of the MIPS design was to fit every sub-phase, including cache-access, of all instructions into one cycle, thereby removing any needs for interlocking, and permitting a single cycle throughput.
This link says:
https://www.cs.tcd.ie/Jeremy.Jones/vivio/dlx/dlxtutorial.htm
issue a "stall" instruction instead of a nop instruction upon a stall
What exactly is the disadvantage of pipeline interlocks?
Why do routers tend to prefer processors with the MIPS architecture?
A major aspect of the MIPS design was to fit every sub-phase, including cache-access, of all instructions into one cycle, thereby removing any needs for interlocking, and permitting a single cycle throughput.
But in later versions of MIPS (see http://cs.nyu.edu/courses/spring02/V22.0480-002/vliw.pdf, slide 9), interlocking was reintroduced into the architecture:
After all MIPS originally stood for something like
Microprocessor without interlocking pipeline stages
Because new implementations (with different memory latencies) would have required more than one slot and we don’t like correctness of code being dependent on the version of the implementation.
Because other instructions required interlocking anyway (e.g. floating-point)
Because it is not that painful to do interlocking
So, considering your questions:
What exactly is the disadvantage of pipeline interlocks?
Interlocking needs more complex hardware (the CPU's control unit), which was not easy to design and test in the era of hand-drawn transistors and CPUs of a few hundred thousand transistors. The MIPS designers set the goal of designing a CPU core without interlocking, but ultimately failed: they were unable to produce a compatible series of commercial chips without it.
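The classic concrete example is the original MIPS I load delay slot: the hardware did not interlock after a load, so the result was simply not available to the very next instruction and the compiler had to schedule around it. A sketch (illustrative assembly, not taken from the linked slides):

    lw   $t0, 0($a0)     # load word; on MIPS I the result is not ready for the next instruction
    nop                  # load delay slot: filler inserted by the compiler (or unrelated useful work)
    add  $t1, $t0, $t2   # only now is $t0 safe to use

    # MIPS II and later added a hardware interlock that stalls the pipeline
    # instead, so the nop is unnecessary and correctness no longer depends
    # on the memory latency of a particular implementation.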
Why do routers tend to prefer processors with the MIPS architecture?
Historically, they were popular in early network devices and were used in later devices possibly due to inertia and investment in MIPS-based designs (both from the network device makers and from the MIPS chip makers).
Check the book "See MIPS Run" by Dominic Sweetman, pages 15, 16 and 22:
http://books.google.com/books?id=kk8G2gK4Tw8C&pg=PR15
There were several easily accessible MIPS chips in the mid-1990s: the R4600, RM5200 and RM7000. The R4600 from 1993 was used by Cisco; later models had a 64-bit bus and a large on-chip L2 cache. They had enough performance to drive the routers of the time.
In the 2010s, I think, there are routers based on ARM (there are a lot of SoCs with networking and ARM cores now). This is because ARM is the most widely licensed architecture (78% of licensed core count in 2011); the second is ARC with 10% (check for an Intel vPro sticker on your PC or laptop - if you have the sticker, you have an ARC core in your chipset; ARC cores are also used in many SSD controllers). MIPS is only third in this rating, with only 6% of the roughly 10 billion cores on the market.

How much interaction can I get with the GPU with Flash CS4?

As many of you most likely know, Flash CS4 integrates with the GPU. My question to you is: is there a way to make all of your rendering execute on the GPU, or can I not get that much access?
The reason I ask is that, with regard to Flash 3D, nearly all existing engines are software renderers. However, I would like to work on top of one of these existing engines and convert it to be as much of a hardware renderer as possible.
Thanks for your input
Regards
Mark
First off, it's not Flash CS4 that is hardware accelerated, it is Flash Player 10 that does it.
Apparently "The player offloads all raster content rendering (graphics effects, filters, 3D objects, video etc) to the video card". It does this automatically. I don't think you get much choice.
The new GPU-accelerated abilities of Flash Player 10 are not something accessible to you as a developer; it's simply accelerated blitting that's done "over your head".
The closest you can get to the hardware is Pixel Bender filters. They are basically Flash's equivalent of pixel shaders. However, due to (AFAIK) cross-platform consistency issues, these do not actually run on the GPU when run in the Flash Player (they're available in other Adobe products, and some of those do run them on the GPU).
So, as far as real hardware acceleration goes, the pickings are pretty slim.
If you need all the performance you can get, Alchemy can be something worth checking out; this is a project that allows for cross-compiling C/C++ code to the AVM2 (the virtual machine that runs ActionScript 3). It does some nifty tricks to allow for better performance (due to the non-dynamic nature of these languages).
Wait for Flash Player 11 to be released as a beta in the first half of next year. It should be awesome.