I want to design and implement an H.264 baseline/main profile encoder on an FPGA for real-time HD video processing. To begin with, I am looking for design examples that would help me understand H.264 implementation on FPGAs. Is there any open source project for this? I tried searching on GitHub, but found only one repository.
If anybody knows anything about this, please help me!
I would also appreciate some information about the following:
What are the essential technical skills required?
What is the best way to implement this?
As far as I know, we can write RTL-only code or use HW/SW co-design.
But I have no idea what the differences are or which is better.
Question 1:
What are the essential technical skills required?
Answer:
H.264 is a widely used video coding standard. The complete standard is a family of specifications covering a variety of encoding/decoding features, resolutions, and frame rates. So the first step is to understand the fundamentals of the standard: how an H.264 encoder and decoder actually work. Maybe this reference can help you start with H.264 fundamentals from an FPGA point of view. You can also go through the H.264 codec explained.
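To give you a concrete feel for what's inside the standard, below is a small C sketch of my own (an illustration, not taken from any reference implementation) of the 4x4 forward integer transform at the heart of H.264. It is exactly this kind of butterfly arithmetic that you would later pipeline in RTL on the FPGA.

/* A minimal sketch of the H.264 4x4 forward integer transform (a scaled
   integer approximation of the DCT), written in butterfly form. */
void forward_transform_4x4(const int in[4][4], int out[4][4])
{
    int tmp[4][4];
    /* Transform the rows */
    for (int i = 0; i < 4; i++) {
        int s0 = in[i][0] + in[i][3], s1 = in[i][1] + in[i][2];
        int d0 = in[i][0] - in[i][3], d1 = in[i][1] - in[i][2];
        tmp[i][0] = s0 + s1;
        tmp[i][2] = s0 - s1;
        tmp[i][1] = 2 * d0 + d1;
        tmp[i][3] = d0 - 2 * d1;
    }
    /* Transform the columns */
    for (int j = 0; j < 4; j++) {
        int s0 = tmp[0][j] + tmp[3][j], s1 = tmp[1][j] + tmp[2][j];
        int d0 = tmp[0][j] - tmp[3][j], d1 = tmp[1][j] - tmp[2][j];
        out[0][j] = s0 + s1;
        out[2][j] = s0 - s1;
        out[1][j] = 2 * d0 + d1;
        out[3][j] = d0 - 2 * d1;
    }
}

Note that there is no floating point and the only multiplications are by 2 (a shift by one bit), which is precisely what makes the standard hardware-friendly.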
Question 2:
What is the best way to implement this? As far as I know, we can write RTL-only code or use HW/SW co-design.
Answer:
Preferably, one should write the main design, the RTL, in a hardware description language (HDL) such as Verilog or VHDL, because most FPGA compilers expect to be given a design description in RTL form. RTL is an acronym for register transfer level: your Verilog or VHDL code describes how data is transformed as it is passed from register to register, e.g. on every clock edge a register captures the output of the combinational logic feeding it.
However, it's plain wrong to say that you can only implement your H.264 design (your H.264 RTL) in VHDL or Verilog. You can also write your H.264 design in C/C++ and use a high-level synthesis compiler to generate your RTL in Verilog or VHDL. Below is a code snippet of a simple H.264 decoder written in plain C that can be synthesized for almost any FPGA.
void decode_main(NALU_t* nalu,
                 StorablePicture pic[MAX_REFERENCE_PICTURES],
                 StorablePictureInfo pic_info[MAX_REFERENCE_PICTURES]) {
#pragma HLS INTERFACE ap_none register port=nalu->startcodeprefix_len
#pragma HLS RESOURCE core=AXI4LiteS variable=nalu->startcodeprefix_len
#pragma HLS INTERFACE ap_none register port=nalu->len
#pragma HLS RESOURCE core=AXI4LiteS variable=nalu->len
#pragma HLS INTERFACE ap_none register port=nalu->nal_unit_type
#pragma HLS RESOURCE core=AXI4LiteS variable=nalu->nal_unit_type
#pragma HLS INTERFACE ap_none register port=nalu->nal_reference_idc
    // ... further interface/optimization pragmas continue ...

    // Global decoder state: parameter sets, image parameters, slice header
    extern seq_parameter_set_rbsp_t SPS_GLOBAL;
    extern pic_parameter_set_rbsp_t PPS_GLOBAL;
    extern ImageParameters img_inst;
    extern slice_header_rbsp_t sliceHeader_inst;
    extern char intra_pred_mode[PicWidthInMBs*4][FrameHeightInMbs*4];

    // ... rest of the decode logic continues ...
}
As you can see, it has explicit compiler-specific optimizations in the form of HLS pragmas, i.e. High-Level Synthesis (HLS) directives. On Stack Overflow (SO), seeking recommendations for books, tools, software libraries, and the like is rightly not appreciated; the link is only to help you understand that you can still implement an H.264 design without HDLs like Verilog or VHDL. Since I have given you a brief explanation of my own, you can go through the complete design here for your further understanding.
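As an aside, to see how such pragmas shape the generated hardware, here is a sketch of my own (pragma spellings follow Xilinx Vivado HLS; other tools use different directives) of a 16x16 sum-of-absolute-differences kernel, the inner loop of motion estimation in an H.264 encoder:

#include <stdint.h>

/* Sum of absolute differences over a 16x16 block. UNROLL asks the compiler
   to build 16 parallel absolute-difference units plus an adder tree, and
   PIPELINE II=1 asks it to accept a new row every clock cycle. */
uint32_t sad_16x16(const uint8_t cur[16][16], const uint8_t ref[16][16])
{
    uint32_t sad = 0;
ROW:
    for (int y = 0; y < 16; y++) {
#pragma HLS PIPELINE II=1
COL:
        for (int x = 0; x < 16; x++) {
#pragma HLS UNROLL
            int d = cur[y][x] - ref[y][x];
            sad += (d < 0) ? -d : d;
        }
    }
    return sad;
}

The same C compiled without the pragmas would synthesize to a slow, sequential loop; with them it becomes a genuinely parallel datapath. That difference is the whole trade-off of HW/SW co-design: you write software-like code, but you still have to think like a hardware designer.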
Related
Apologies in advance for the perhaps stupid question. Is it possible to integrate into the Chisel flow a Scala script that generates timing constraint specifications (SDC) for a given design? E.g. press a button and you get your Chisel design converted to Verilog along with an SDC file, ready for synthesis.
I currently have such a toolflow in place for VHDL (using Python to generate the constraint files). But while in VHDL the naming conventions are quite clear, I'm not so sure about the Chisel backend (and I couldn't find any reference on the web to anyone doing this).
Is it possible, or is this just not how Chisel was intended to be used?
Thanks in advance!
Chisel has an annotation system to support tracking and linking against signals in the emitted Verilog. I've described this system in a previous question here on Stack Overflow: Chisel: getting signal name in final Verilog
There is existing work to leverage this support and build physical design flows; see Hammer, which is used by Chipyard.
Basically, I would like to start hacking on the internals of Chisel/FIRRTL. It would help if someone could point me to where I should start looking.
I have been reading through the source code. So far I understand that Chisel is implemented as a Scala library: each Chisel object has some methods for emitting FIRRTL, and after a particular Scala program is run, the objects are traversed and FIRRTL is generated.
What I wanted to know is whether I have been looking in the right direction. I still haven't figured out where the AST formation for the Chisel modules and the type inference happen. Eventually I will get there, but it would be great if someone could summarize the places I should look.
Of course, this may be too much to ask of the Chisel developers, but even some basic information would help!
I'd say there are two basic places to start.
FIRRTL is a good start because it's newer than Chisel and the code base is more modern overall. FIRRTL consists of a parser, transforms, and emitters, and those are pretty straightforward. The transforms encapsulate most operations quite nicely.
Chisel as an EDSL is much more complicated and quirky. The place to start is chiselFrontend. The Builder class is the root of the magic for constructing the internal graph that is used to emit CHIRRTL/high FIRRTL. It uses a dynamic variable to provide a place where modules and their components register their creation and their connections into the graph.
Hope that helps you get started. Happy sleuthing!
I program ActionScript for the Flash Player. This means compiling a set of ActionScript files into a SWF file (a bunch of bytecode that gets executed by the Flash Player in your browser). Anything that is not compiled into the SWF file must be requested at runtime. Examples of this would include ANY textual, media, or graphical content that wasn't originally compiled in. Unfortunately, this means dealing with a lot of asynchrony. It's a double-edged sword, since dealing with asynchrony can be a pain in the ass, but it can also be a positive force on your design.
I just want to make the point that ActionScript is single-threaded, but the Flash Player is multi-threaded, so things like requesting content over HTTP are done in the background and we are notified of completion via an event broadcasting system (which is built into the language). So the issue here is not a concurrency issue (although I'm interested in any concurrency literature that might be relevant).
When I'm putting together a website, I add functionality a little bit at a time, which usually translates into small steps. And by small I mean small enough that I don't go from needing content to loading content (e.g. XML over HTTP) in one step. So I'll use, say... Fake It, but at some point I need to implement it for real, hence my search for literature on Refactoring To Asynchrony.
Any thoughts or help would be greatly appreciated. Thanks =)
There are the astonishingly beautiful Reactive eXtensions for C# (all of .NET) and JavaScript.
They have been ported to ActionScript 3 as well, and the port has its own wiki.
From the description:
raix (Reactive And Interactive eXtensions) is a functional, composable, API for AS3 that simplifies working with data, regardless of whether its interactive (arrays) or reactive (events). raix was previously RxAs
The reactive part of it helps you build highly asynchronous applications in a simple, intuitive way.
Hope it helps!
If one wanted to program a game in an unusual language, but no library or functions existed to manipulate graphics, how would this be accomplished? By writing your own low-level routines?
You are standing by a house. A road runs off to the east, and south there is a narrow path leading around the back of the house.
There is a mailbox here.
What do you do? _
By interfacing to OpenGL; it's a graphics-card-independent API.
For a concrete example of what that means and how it's done, have a look at the OpenGL bindings for Python.
Look here for bindings to other languages.
Why do you have to do this?
Well, graphics programming is highly dependent on the hardware (i.e. the graphics card), and there are many kinds of them. OpenGL is the standard language that they all understand. (I think the same can be said about Direct3D, but that's owned by Microsoft, while OpenGL is more open.)
Find out how your language interfaces with C libraries, then use that to make an interface to OpenGL, DirectX, or OpenAL. Alternatively, you can port something else, like, say, SDL. In a weird case you might want to embed a language that has the library you'd like, say Java3D, and compile it with Mono 2.2. I'm not sure that's even possible, but one of the Mono project's changes is Java support. Of course, on Mono you have other game library options.
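For instance, a thin C shim like the sketch below can be compiled into a shared library and then loaded through your language's C interface. The shim_* names are made up for illustration; the gl* calls are the real (legacy, fixed-function) OpenGL API.

#include <GL/gl.h>

/* Clear the screen to a solid color. */
void shim_clear(float r, float g, float b)
{
    glClearColor(r, g, b, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
}

/* Draw one triangle in normalized device coordinates. */
void shim_draw_triangle(void)
{
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
}

Build it with something like cc -shared -fPIC shim.c -lGL -o libshim.so, and then your unusual language only has to know how to call two C functions rather than the whole of OpenGL. (Window and context creation still have to come from somewhere, e.g. another small shim over GLX/WGL or SDL.)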
If you plan to use graphics hardware, you need to have its drivers and OpenGL or DirectX.
If you're programming this for some exotic piece of limited hardware, there's only so much you can do. If you're simply not given access to draw to the screen other than in some extremely limited manner (perhaps all you can do is render a string of text), then there's nothing you can do.
If you're doing this on an ordinary computer but your language of choice simply doesn't have any OpenGL or DirectX bindings, then you'll need to write some yourself.
Conventional wisdom would suggest using the best tool for the job; if there are no existing libraries to manipulate graphics in your language, it's likely not the best tool for the job.
I'm interested in making a language to run on the AVM2 and I'm looking for advice on where to start. I do realize that this is by no means a trivial task, but I would like to give it a try and at the very least learn more about implementing a language along the way.
I have messed around with ANTLR and have been reading up on syntax issues for language development. What I'm looking for is advice on a path to take or useful references/books.
For instance, I would like to generate (by script or by hand) some very simple AVM2 bytecode and get that to run on the VM as a start.
Thanks
If you are not interested in Haxe, you will basically need to write your own compiler that compiles objects down to ABC (ActionScript Byte Code). The AVM2 Overview document available from Adobe covers ABC and the AVM2 and should help you get started. It's a fairly thorough document, but stay alert for a few typos in the bytecode instructions.
You will also need to wrap the bytecode in a doABC tag as part of a SWF container. You can get more information from the SWF File Format documentation.
If you'd like a head start on writing the data structures (optimised int formats, etc.), feel free to check out the code at asmock, a dynamic mocking project I've been working on. The SWF/bytecode generation stuff is a bit messy, but there are IDataOutput wrappers (SWF, ByteCode) that might come in handy.
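To make the "optimised int formats" concrete: almost every integer in an ABC file is stored as a variable-length u30/u32 value, little-endian in groups of 7 bits, with the high bit of each byte flagging that another byte follows (see the AVM2 Overview). Here is a minimal sketch of the encoder, in C for illustration since the asmock project above already covers the AS3 side:

#include <stdint.h>
#include <stddef.h>

/* Encode a u30/u32 value as used throughout the ABC format.
   Writes 1 to 5 bytes into out; returns the number of bytes written. */
size_t write_u30(uint8_t *out, uint32_t value)
{
    size_t n = 0;
    do {
        uint8_t b = value & 0x7F;   /* low 7 bits */
        value >>= 7;
        if (value != 0)
            b |= 0x80;              /* continuation bit: more bytes follow */
        out[n++] = b;
    } while (value != 0);
    return n;
}

Getting this encoding right early will save you a lot of grief, because a single mis-encoded length makes the rest of the ABC stream unparseable.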
Project Alchemy by Adobe can be a good reference
http://labs.adobe.com/technologies/alchemy/
How did it go?
I'm also interested in writing a Java-to-AVM2 compiler...
Do you have any published code?
Take a look at Haxe: it is an open source language that can target different platforms, including the AVM. You can dig into the SWF compiler source code to get some inspiration.