VGA Video using an ARM7

I need to put out a VGA signal from an AT91SAM7SE512. How can I do this without using an extra controller? I saw stuff on the web, but I need to be able to modify individual pixels.

You could probably use something similar to the old tricks for generating NTSC signals with PWM, but it will probably look horrible. A better bet is to get some form of video controller, even a cheap low-resolution one.
You could also try some form of FPGA-to-VGA solution like this

Unless your ARM7 has some kind of controller capable of reading memory and outputting a video signal without CPU intervention, i.e. some kind of framebuffer, I don't think you can do that with an ARM7. Well, you probably can, but not within a general-purpose OS like Linux.
What you can do is turn your ARM7 into a dedicated VGA controller that spends its time launching DMA transfers from SDRAM to an external bus. This will, IMO, not leave a lot of resources to do anything else.

Your ARM chip has an ADC, but it doesn't have a DAC. VGA is a multiple-channel analog output, so you need some kind of DAC, and in turn an external component. Another problem you might encounter is the need for proper drivers (the electronic kind, not software). A VGA cable can be quite long, which means you have large capacitances to overcome, plus it may act as an antenna.

Related

qemu-system-arm and lm3s6965evb / Cortex M3 ... need more ram

I'm successfully compiling my unit tests with arm-none-eabi-gcc and running them in qemu-system-arm with the machine lm3s6965evb... But for some of the unit tests I need more than the 64k of RAM that the lm3s6965evb mcu/machine has.
The IAR simulator apparently has no hard limit in the 'machine', so I just made a phony linker file that allows the unit-test program to use e.g. 512k of RAM. This works (surprisingly) fine, but qemu doesn't play along like that (it hangs the moment I change the RAM section in the linker file). So I need another machine...
But thinking about it: I think I just need something that executes ARMv7 Thumb(-2?) code, like the Cortex-M3. It could also be a Cortex-M33, which is ARMv8...
I don't care about hardware registers or interrupts etc. I do need, however, printf() to work via semihosting or other means (UART etc.) to print out unit-test status (success/failures).
What are my best candidates?
modify the lm3s6965evb somehow?
take an A7?
take some of the ARM vhdl/fpga machines (mps2... musca...)?
(The 'virt' machine does not support Cortex-M3/M4, according to the error message.)
Thanks
/T
(It turns out that I misread the "mps2-an385" documentation & tutorials - it wasn't complicated at all.)
It works if I just use the "mps2-an385" machine and modify the linker file to use more flash and RAM. Currently I beefed it up to 4x the RAM and flash, which is enough for now. (I haven't found out what the exact limits are.)
Still, I would like to hear if there are other solutions.
QEMU's lm3s6965evb model follows the real hardware, which does not have much RAM. If you want more RAM and you don't specifically want to have a model of those Stellaris boards, pick a board model type which has more RAM. If you need to use an M-profile core, try one of the MPS2 boards. If you are happy with an A-profile core, then the "virt" board with a Cortex-A15 may be a good choice.

Why is my game running slow in libgdx?

I am making a racing game in Libgdx. My game APK size is 9.92 MB and I am using four packed textures with a total size of 9.92 MB. My game runs fine on desktop, but it runs very slowly on an Android device. What is the reason behind this?
There are a few pitfalls which we tend to neglect while programming.
Desktop processors are way more powerful, so the game may run smoothly on desktop but slowly on a mobile device.
Here are some key notes you should follow for optimum game flow:
No I/O operations in the render method.
Avoid creating objects in the render method.
Objects must be reused (for instance, if your game has 1000 platforms but the current screen can only display 3, then instead of making 1000 objects make 5 or 6 and reuse them). You can use the Pool class provided by LibGDX for object pooling (see the sketch after this list).
Try to load only those assets which are necessary for the current screen.
Check your logcat to see whether the garbage collector is being called. If so, try using the finalize method of Object to find out which classes' objects are being collected as garbage, and improve on it.
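To make the pooling point above concrete, here is a minimal sketch using libgdx's Pool and Poolable; the Platform class and its fields are made up purely for illustration:

```java
import com.badlogic.gdx.utils.Pool;

public class Platforms {
    // Hypothetical game object, only for illustration.
    static class Platform implements Pool.Poolable {
        float x, y;
        boolean alive;

        void init(float x, float y) {
            this.x = x;
            this.y = y;
            alive = true;
        }

        @Override
        public void reset() { // called by the pool when the object is freed
            x = y = 0;
            alive = false;
        }
    }

    // The pool creates a new Platform only when no free one is available.
    private final Pool<Platform> pool = new Pool<Platform>() {
        @Override
        protected Platform newObject() {
            return new Platform();
        }
    };

    Platform spawn(float x, float y) {
        Platform p = pool.obtain();   // reuse instead of new Platform()
        p.init(x, y);
        return p;
    }

    void despawn(Platform p) {
        pool.free(p);                 // hand it back instead of leaving it to the GC
    }
}
```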
Good luck.
I've got some additional tips for improving performance:
Try to minimize texture bindings (or bindings in general, when you're making a 3D game for example) in your render loop. Use texture atlases and try to draw as much as possible with one texture after binding it, before binding another texture.
Don't display things that are not in the frustum/viewport. First calculate whether the drawn object can even be seen by the active camera; if it can't be seen, just don't send it to the GPU for rendering!
Don't call spritebatch.begin() and spritebatch.end() too often in the render loop, because every time you end a batch it gets flushed and its contents are sent to the GPU for rendering.
Do NOT load assets while rendering, unless you're doing it once in another thread.
The latest versions of libgdx also provide a GLProfiler with which you can measure how many draw calls, texture bindings, vertices, etc. you have per frame. I'd strongly recommend it, since there can always be situations where you would not expect an overhead in memory or computation (see the sketch after these tips).
Use libgdx's Poolable interface and the Pool class for pooling objects and minimizing object-creation time, since creating objects can cause tiny but noticeable stutters in your game's render loop.
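To make the GLProfiler tip concrete, here is a minimal sketch; note that the profiler API differs between libgdx versions (newer releases use the instance-based form shown here, older ones a static GLProfiler), so treat it as an approximation:

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.profiling.GLProfiler;

public class ProfiledGame extends ApplicationAdapter {
    private GLProfiler profiler;

    @Override
    public void create() {
        profiler = new GLProfiler(Gdx.graphics); // newer, instance-based API
        profiler.enable();
    }

    @Override
    public void render() {
        // ... draw your scene here ...

        Gdx.app.log("perf", "draw calls: " + profiler.getDrawCalls()
                + ", texture bindings: " + profiler.getTextureBindings());
        profiler.reset(); // counters are per-frame, so clear them every render()
    }
}
```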
By the way, without any additional information, no one's going to give you a good or precise answer. If you think it's not worth it to write enough text or information for your question, why should it be worth it to answer it?
To really understand why your game is running slow you need to profile your application.
There are free tools available for this.
On Desktop you can use VisualVM.
On Android you can use Android Monitor.
With profiling you will find exactly which methods are taking up the most time.
A likely cause of slowdowns is texture binding. Do you switch between different pages of packed textures often? Try to draw everything from one page before switching to another page.
The answer is likely a little more than just "computer fast; phone slow". Rather, it's important to note that your computer's Java VM is likely Oracle's very nicely optimized JVM, while your phone's Java VM is likely Dalvik, which, to say nothing else of its performance, does not have the same optimizations for object creation and management.
As others have said, libGDX provides a Pool class for just this reason. Take a look here: https://github.com/libgdx/libgdx/wiki/Memory-management
One very important thing in LibGDX is to make sure that loading assets never happens in the render() method. Make sure that you are loading the assets at the right times and that they are not being loaded in the render method.
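One way to keep loading out of render() is libgdx's AssetManager on a dedicated loading screen, sketched below; the file names and the commented-out RaceScreen are made up for illustration:

```java
import com.badlogic.gdx.Game;
import com.badlogic.gdx.ScreenAdapter;
import com.badlogic.gdx.assets.AssetManager;
import com.badlogic.gdx.audio.Sound;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;

public class LoadingScreen extends ScreenAdapter {
    private final Game game;
    private final AssetManager manager = new AssetManager();

    public LoadingScreen(Game game) {
        this.game = game;
        // Queue the assets once, here - NOT in render(). File names are made up.
        manager.load("cars.atlas", TextureAtlas.class);
        manager.load("engine.ogg", Sound.class);
    }

    @Override
    public void render(float delta) {
        // update() loads a small slice per call and returns true when done,
        // so no single frame stalls on disk I/O.
        if (manager.update()) {
            // Everything is loaded - switch to the real game screen here,
            // e.g. game.setScreen(new RaceScreen(game, manager)); // RaceScreen is hypothetical
        }
        // manager.getProgress() gives 0..1 if you want to draw a loading bar.
    }
}
```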
Another very important thing is to try to do your calculations independently of the render loop, in the sense that your next frame should not have to wait for calculations to happen!
These are the 2 major things I encountered when I was making the Snake game tutorial.
Thanks,
Abhijeet.
One thing I have found is that drawing is expensive. This means that if you are drawing offscreen items, you are using a lot of resources for nothing. If you just check whether they are onscreen before drawing them, your performance improves by a surprising amount (see the sketch below).
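A rough sketch of that onscreen check for a 2D game, assuming an OrthographicCamera and a hypothetical Enemy type that exposes a bounding Rectangle:

```java
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Rectangle;
import com.badlogic.gdx.utils.Array;

// Enemy is a stand-in for whatever you draw; it just needs bounds and a draw call.
void drawVisible(OrthographicCamera camera, SpriteBatch batch, Array<Enemy> enemies) {
    // The rectangle of world space the camera can currently see.
    Rectangle viewBounds = new Rectangle(
            camera.position.x - camera.viewportWidth  * camera.zoom / 2f,
            camera.position.y - camera.viewportHeight * camera.zoom / 2f,
            camera.viewportWidth  * camera.zoom,
            camera.viewportHeight * camera.zoom);

    for (Enemy e : enemies) {
        if (viewBounds.overlaps(e.getBounds())) {
            e.draw(batch);           // only draw what is actually on screen
        }
    }
}
```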
Points to ponder (From personal experience)
DO NOT keep calling a function in the render method that updates something like the time or score on the HUD. Make these updates only when required, e.g. when the score increases, ONLY then update the score, etc.
Make calls condition-specific (make updates on certain conditions, not all the time).
E.g. calling/updating in the render method at 60 FPS means you update the time 60 times a second when it only needs to be updated once per second (see the sketch below).
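A small sketch of the "update only when it changes" idea, using a scene2d Label; the class and method names are made up:

```java
import com.badlogic.gdx.scenes.scene2d.ui.Label;

public class Hud {
    private final Label scoreLabel;
    private int lastShownScore = -1;   // force the first update

    public Hud(Label scoreLabel) {
        this.scoreLabel = scoreLabel;
    }

    // Call this every frame; the Label is only touched when the value changed,
    // so its text layout is not rebuilt 60 times per second.
    public void setScore(int score) {
        if (score != lastShownScore) {
            scoreLabel.setText("Score: " + score);
            lastShownScore = score;
        }
    }
}
```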
These points will have a huge effect on performance (thumbs up).
You need to check the image sizes in your game. If your images are large, decrease their size using the following link: http://tinypng.org/.
It will help you.

Is it possible to control LCD components in software?

Is it possible, say, using a programming language like C or C++, to write a program that directly interacts with the power inverter or controller in a modern LCD monitor?
I'm told that it used to be possible to forcefully overclock the oscillator in a CRT to make it catch on fire. I'm curious as to whether the same principle can be applied to a modern monitor.
Being able to inflict real damage on a modern external monitor is highly unlikely.
Connections like VGA, DVI and HDMI don't provide sufficiently direct access to the screen's hardware.
The hardware design of a consumer product can be considered flawed if it allows a killer poke, i.e. destruction of a hardware component by issuing software instructions.
In modern PC hardware, laptops have a tightly integrated display. It may be possible to write a program that has harmful effects on a laptop's backlight, e.g. by flicking it on and off rapidly by calling the ACPI interface.
From http://ibm-acpi.sourceforge.net/README:
Whatever you do, do NOT ever call thinkpad-acpi backlight-level change
interface and the ACPI-based backlight level change interface
(available on newer BIOSes, and driven by the Linux ACPI video driver)
at the same time. The two will interact in bad ways, do funny things,
and maybe reduce the life of the backlight lamps by needlessly kicking
its level up and down at every change.
Since the inputs are digital, or at least go through A/D converters, it is unlikely. That might work with really old VGA monitors without any digital logic. VGA in general does not even have a clock, just hsync and vsync, which give the timing for the returning electron beam and directly controlled it. Most modern CRT monitors had automatic detection of improper inputs, so no, it is not possible to kill an LCD this way.
http://www.epanorama.net/documents/pc/vga_timing.html

Advice needed for a physics engine

I've recently started a project, building a physics engine.
I was hoping you could give me some advice related to some documentation and/or best technologies for this.
First of all, I've seen that Game-Physics-Engine-Development is highly recommended for the task at hand, and I was wondering if you could give me a second opinion. Should I get it?
Also, while browsing Amazon, I stumbled onto Game Engine Architecture, and since I want to build my physics engine for games, I thought this might be a good read as well.
Second, I know that simulating physics is highly computation-intensive, so I would like to use either CUDA or OpenCL. Right now I'm leaning towards OpenCL, because it would work on both NVIDIA and ATI chipsets. What do you guys suggest?
PS: I will be implementing this in C++ on Linux.
Thanks.
I would suggest first of all planning a simple game as a test case for your engine. Having a basic game will drive feature and API development. Writing an engine without a clear goal makes the project riskier. While I agree nVidia and ATI should be treated as separate targets for performance reasons, I'd recommend you start with neither.
I personally wrote the physics engine for Uncharted: Drake's Fortune - a PS3 game - and I did a pass in C++, and when it worked, made a pass to optimize it for VMX and then put it on SPU. Mind you, I did just a fraction of what I wanted to do initially because of time constraints. After that I made an iteration to split the data stages out and formulate a pipeline of data transformations. It's important because whether on CPU, GPU or SPU, modern processors running nontrivial code spend most of their time waiting for caches. You have to pay special attention to data structures and pipeline them such that you have a small working set of data at any stage. E.g. first I do the broadphase, so I don't need shapes, but I do need world-space bounding boxes. So I split the bounding boxes into a separate array and compute them all together in another pass that writes them out in an optimal way. As input to the bbox computation, I need shape transformations and some bounds from them, but not necessarily the whole shapes. After the broadphase, I compute/update sim islands, at the same time performing the narrow phase, for which I do actually need the shapes. And so on - I described this with pictures in an article I wrote for Game Physics Pearls.
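To illustrate the "bounding boxes in their own array, computed in one pass" idea, here is a deliberately simplified 2D sketch (written in Java purely for readability - the engine described above was C++ on the PS3); all names are made up:

```java
// Structure-of-arrays broadphase data: each pass touches only the arrays it
// needs, which keeps the working set small and cache-friendly.
public class Broadphase {
    int count;
    float[] posX, posY;              // inputs: body positions
    float[] halfW, halfH;            // inputs: local half-extents
    float[] minX, minY, maxX, maxY;  // outputs: world-space AABBs

    // Pass 1: update every world-space box in one tight loop; no shapes needed.
    void updateBounds() {
        for (int i = 0; i < count; i++) {
            minX[i] = posX[i] - halfW[i];
            maxX[i] = posX[i] + halfW[i];
            minY[i] = posY[i] - halfH[i];
            maxY[i] = posY[i] + halfH[i];
        }
    }

    // Pass 2: overlap tests read only the AABB arrays, never the full shapes.
    boolean overlaps(int a, int b) {
        return minX[a] <= maxX[b] && minX[b] <= maxX[a]
            && minY[a] <= maxY[b] && minY[b] <= maxY[a];
    }
}
```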
I guess what I'm trying to say are the following points:
Make sure you have a clear goal that drives your development - a very basic game with a fleshed-out design would be best in the case of a game physics engine.
Don't try to optimize before you have a working product. Write it in the simplest and fastest way possible first and fix all the bugs in math. Design it so that you can port it to CUDA later, but don't start writing CUDA kernels before you have boxes rolling on the screen.
After you write the first pass in C++, optimize it for CPU : streamline it such that it doesn't thrash the cache, and compartmentalize the code so that there's no spaghetti of calls and all the code from each stage is localized. This will help a) port to CUDA b) port to OpenCL c) port to a console d) make it run reasonably fast e) make it possible to debug.
While developing, resist the temptation to go off and do something you just thought about unless that feature is necessary for your clear goal (see #1) - that's why you need a goal, to steer you towards it and make it possible to finish the actual project. Distractions usually kill projects without clear goals.
Remember that in one way or another, software development is iterative. It's ok to do a rough-in and then refine it. Lather, rinse, repeat - it's a mantra of a programmer :)
It's easy to give advice. If you wanna do something, just go and do it, and we'll sit back and critique :)
Here is an answer regarding the choice of CUDA or OpenCL. I do not have a recommendation for a book.
If you want to run your program on both NVIDIA and ATI chipsets, then OpenCL will make the job easier. However, you will want to write a different version of each kernel to get good performance on each chipset. For example, on ATI cards you'll want to manually vectorize code using float4/int4 data types (or accept a nearly 4x performance penalty), while NVIDIA works better with scalar data types.
If you're only targeting NVIDIA, then CUDA is somewhat more convenient to program in.

Actionscript memory management?

I saw System.gc() somewhere on the internet today and I wanted to know if it is or isn't recommended to use in a Flash CS5 project and why.
In every garbage-collected system I know of, the garbage collection machinery was designed to run in the background as an abstraction the programmer should theoretically pay no attention to. There are some special situations where forcing a collection is useful, but these usually involve interrupts (real machine interrupts, not actionscript events), testing/debugging scenarios, or some tricky latency management necessities. Odds are you will never need to call System.gc() and you can safely ignore it.
System.gc() is only available in the debugger version of Flash Player and some AIR applications. Calling it on a normal website, under a normal Flash Player will have no effect whatsoever and will silently fail.
System.gc() is designed only for testing purposes.
System.gc() is only for testing purposes. It can be handy to monitor your application's memory usage and call System.gc() in order to highlight the possibility of any memory leaks.
Tip: As far as I remember you have to call System.gc() twice to force it to collect immediately.
The documentation states that this method only works in a debug player.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/system/System.html#gc()
So, to summarise, if you're testing memory, it's quite handy, otherwise don't use it.