Is it possible to control LCD components in software? - language-agnostic

Is it possible, say, using a programming language like C or C++, to write a program that directly interacts with the power inverter or controller in a modern LCD monitor?
I'm told that it used to be possible to forcefully overclock the oscillator in a CRT to make it catch on fire. I'm curious as to whether the same principle can be applied to a modern monitor.

It is highly unlikely that you could inflict real damage on a modern external monitor.
Connections like VGA, DVI and HDMI don't provide sufficiently direct access to the screen's hardware.
The hardware design of a consumer product can be considered flawed if it allows a killer poke, i.e. the destruction of a hardware component by issuing software instructions.
Among modern PC hardware, laptops have tightly integrated displays. It may be possible to write a program that has harmful effects on a laptop's backlight,
e.g. by flicking it on and off rapidly through the ACPI interface.
From http://ibm-acpi.sourceforge.net/README:
Whatever you do, do NOT ever call thinkpad-acpi backlight-level change
interface and the ACPI-based backlight level change interface
(available on newer BIOSes, and driven by the Linux ACPI video driver)
at the same time. The two will interact in bad ways, do funny things,
and maybe reduce the life of the backlight lamps by needlessly kicking
its level up and down at every change.
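To make the kind of access involved concrete, here is a minimal C sketch (for illustration only) that rapidly toggles a Linux backlight through the sysfs interface exposed by the ACPI video driver. The device name acpi_video0, the brightness values and the delay are assumptions that vary per machine; the point is simply that software can drive the backlight level directly, which is the pattern the README above warns against.

    /* Illustration only: rapidly flick a Linux backlight via sysfs.
     * The device name "acpi_video0" and the brightness values are
     * assumptions -- check /sys/class/backlight/ on the actual machine.
     * Needs root privileges. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/sys/class/backlight/acpi_video0/brightness";

        for (int i = 0; i < 100; i++) {
            FILE *f = fopen(path, "w");
            if (!f)
                return 1;                   /* no such device or no permission */
            fputs((i % 2) ? "0" : "7", f);  /* alternate between two levels */
            fclose(f);
            usleep(50 * 1000);              /* 50 ms between flips */
        }
        return 0;
    }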

Since the inputs are digital, or at least analog inputs that pass through A/D converters and digital logic, it is unlikely. That might have worked with really old VGA monitors without any digital logic. VGA in general does not even carry a pixel clock, just hsync and vsync, which set the timing of the returning electron beam and directly controlled the beam. Most modern CRT monitors had automatic detection of improper input timings, so no, it is not possible to kill an LCD this way.
http://www.epanorama.net/documents/pc/vga_timing.html
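For reference, the classic 640x480 @ 60 Hz VGA mode is defined purely by sync timing, from which the monitor reconstructs everything else. A short C sketch of the standard timing constants (values as given in commonly published timing tables such as the page above):

    /* Standard 640x480 @ 60 Hz VGA timing, pixel clock ~25.175 MHz.
     * Horizontal values are in pixel clocks, vertical values in lines. */
    struct vga_timing {
        int visible, front_porch, sync_pulse, back_porch, total;
    };

    static const struct vga_timing h_640x480 = { 640, 16, 96, 48, 800 };
    static const struct vga_timing v_640x480 = { 480, 10,  2, 33, 525 };

    /* 25.175e6 / 800 / 525 ~= 59.94 frames per second */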

Related

When to use Vectored-Interrupt vs. Non-vectored Interrupt?

Why would you choose a vectored interrupt over a non-vectored interrupt?
I know the differences between them, but I'm not sure when you would use one over the other, or which devices use each.
Thank you so much.
If the hardware supports vectored interrupts, there is no reason not to use them. This is more a question of implementation cost (vector tables and prioritisation logic) vs software cost (reading status registers and looking up the correct vector).
As hardware has become cheaper over time, it makes sense to have dedicated logic provide the correct vector address; in typical real-world implementations this improves interrupt latency, since the CPU can start processing the actual handler code sooner.
Where hardware supports both, the non-vectored mode may be for legacy compatibility, or for the unusual case where only one interrupt is required (possibly saving one or two cycles of latency).
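As a rough sketch of the difference, consider the dispatch code below. The register names INT_STATUS and INT_VECTOR and their addresses are hypothetical; real names and semantics depend on the interrupt controller.

    #include <stdint.h>

    /* Hypothetical memory-mapped interrupt controller registers. */
    #define INT_STATUS (*(volatile uint32_t *)0x40000000u)
    #define INT_VECTOR (*(volatile uint32_t *)0x40000004u)

    typedef void (*isr_t)(void);
    extern isr_t handler_table[32];     /* one handler per interrupt source */

    /* Non-vectored: software reads a status register and searches for the
     * highest-priority pending source before it can call the handler. */
    void irq_dispatch_nonvectored(void)
    {
        uint32_t pending = INT_STATUS;
        for (int i = 0; i < 32; i++) {
            if (pending & (1u << i)) {
                handler_table[i]();
                break;                  /* lowest bit = highest priority here */
            }
        }
    }

    /* Vectored: the hardware has already resolved priority and supplies the
     * vector index, so dispatch is a single indexed call. */
    void irq_dispatch_vectored(void)
    {
        handler_table[INT_VECTOR & 31u]();
    }

The scan loop in the non-vectored case is the software cost mentioned above; the vector register and its prioritisation logic are the hardware cost.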

Type-1 Hypervisors non-volatile memory isolation

Hypervisors isolate the different OSes running on the same physical machine from each other. This isolation also covers non-volatile memory (such as hard drives or flash).
When thinking about Type-2 hypervisors, it is easy to understand how they separate non-volatile memory: they simply use the file system of the underlying OS to allocate different "hard-drive files" to each VM.
But then, when I come to think about Type-1 hypervisors, the problem becomes harder. They can use the IOMMU to isolate different hardware interfaces, but when there is only one non-volatile memory interface in the system, I don't see how that helps.
So one way to implement it would be to separate one device into two "partitions" and have the hypervisor interpret calls from the VMs and decide whether each call is legitimate. I'm not well versed in the communication protocols for non-volatile interfaces, but the hypervisor would have to be familiar with those protocols in order to make that verdict, which sounds (maybe) like overkill.
Are there other ways to implement this kind of isolation?
Yes, you are right: the hypervisor has to be familiar with those protocols to make the isolation possible.
The overhead depends mostly on the protocol. For example, NVMe-based SSDs work over PCIe, and some NVMe devices support SR-IOV, which greatly reduces the effort; others don't, leaving the burden on the hypervisor.
Mostly this support is configured at build time (how much memory is given to each guest, the command privileges of each guest, and so on), and when a guest sends a command, the hypervisor verifies its bounds and forwards it accordingly.
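A hedged sketch of that bounds check for a generic block command is shown below; the structures and fields are invented for illustration and are not taken from the NVMe or AHCI specifications or any real hypervisor.

    #include <stdbool.h>
    #include <stdint.h>

    /* Per-guest storage window, configured when the guest is set up (illustrative). */
    struct guest_storage {
        uint64_t lba_base;    /* first device LBA the guest may touch */
        uint64_t lba_count;   /* size of its "partition" in blocks */
        bool     may_write;
    };

    /* Simplified guest block command; not a real NVMe/AHCI structure. */
    struct blk_cmd {
        uint64_t lba;         /* guest-relative LBA */
        uint32_t nblocks;
        bool     is_write;
    };

    /* Verify the command stays inside the guest's window, then translate the
     * guest-relative LBA into the device's address space before forwarding. */
    bool hv_check_and_translate(const struct guest_storage *g, struct blk_cmd *cmd)
    {
        if (cmd->is_write && !g->may_write)
            return false;
        if (cmd->lba + cmd->nblocks < cmd->lba)        /* arithmetic overflow */
            return false;
        if (cmd->lba + cmd->nblocks > g->lba_count)    /* outside the partition */
            return false;

        cmd->lba += g->lba_base;                       /* guest -> device LBA */
        return true;
    }

At run time the hypervisor would reject any command for which this returns false instead of forwarding it to the device.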
So why isn't there hardware support, like the MMU or IOMMU, for this case?
There are hundreds of types of such devices with different protocols (NVMe, AHCI, and so on), and if a vendor tried to support all of them to allow better virtualization, the result would be a huge chip that isn't going to fit.

Can I lower a Chrome tab's memory capacity?

Websites often take up too much memory and can make the browser slow. I'd prefer that the tab not crash (a laggy UI is preferable). I could turn off JavaScript if the site became unusable under the memory cap.
This kind of functionality is not available, nor will it ever be. A feature such as per-tab memory limits is an extremely niche use case, and it wouldn't have a return on investment for implementing and rolling it out. The team is far better off working on the memory issues themselves and reducing usage internally.
It is difficult to simply "cap memory" for web apps. When memory is available it will get used, and when it isn't, something is going to get dumped out of memory. Part of the memory issues lie with the Chrome team; however, some exist because web developers fail to program responsibly and waste user resources.
There simply isn't much that can be done in user space here. It is up to the Chrome team to optimize internally, and to web developers to be responsible and thoughtful about what they produce.

Does watching HD videos slow down my program using the CUDA GPU? [duplicate]

I'm trying to figure out whether I can use OpenACC in place of normal serial CPU execution. Usually my programming is all about 3D graphics, or uses the GPU in some other way, e.g. image processing or some other type of rendering that requires shaders. I'm trying to figure out whether this library would benefit me or not.
The reason I ask is: if I'm rendering 3D graphics (as fast as possible), would OpenACC slow down that process in any way? Or can the renderer maintain its (in theory) high frame rates?
If so, what's the trade-off, and how much? I'm not willing to lose 3D graphics (display) performance to accelerate operations that could otherwise be done serially on the CPU.
Edit:
This is a C++ context.
On the AMD and NVIDIA GPUs that I am familiar with, OpenACC programs will make use of compute resources that would also be used, to some degree, by shader programs. There are many other pieces of graphics hardware in a GPU that are not shared between compute and graphics, but there are some shared resources. Likewise, the GPU may be connected to the system by PCIe, so this can also present a shared resource or contention point (although it's the rare compute or graphics program that would even come close to using up the bandwidth of a modern Gen3 x16 PCIe connection).
So if you were using both graphics (or compute) shaders and OpenACC acceleration, there would be contention for resources to some degree. The level of contention, or the trade-off, is not something I can generalize about. It will depend very much on the specifics of your program and on the extent and detailed sequencing of the compute and graphics functions.
GPU designers have these types of use-cases in mind, and so GPUs are generally pretty good at rapid context switching between the various tasks that may compete for resources.
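For context, a minimal OpenACC kernel in C looks like the sketch below; the same GPU compute units that execute it are the ones shader programs also use, which is where the contention described above comes from. It needs an OpenACC-capable compiler (for example "nvc -acc"; flag names vary by toolchain).

    /* Minimal OpenACC example: offload a SAXPY-style loop to the accelerator. */
    void saxpy(int n, float a, const float *restrict x, float *restrict y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }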

VGA Video using an ARM7

I need to output a VGA signal from an AT91SAM7SE512. How can I do this without using an extra controller? I saw some approaches on the web, but whatever I use needs to be able to modify individual pixels.
You could probably use something similar to the old tricks for generating NTSC signals with PWM, but it will probably look horrible. A better bet is to get some form of video controller, even a cheap low-resolution one.
You could also try some form of FPGA to VGA like this
Unless your ARM7 has some kind of controller capable of reading memory and outputting a video signal without CPU intervention, i.e. some kind of framebuffer, I don't think you can do that with an ARM7. Well, you probably can, but not within a general-purpose OS like Linux.
What you can do is turn your ARM7 into a dedicated VGA controller that spends its time launching DMA transfers from SDRAM to an external bus. IMO this will not leave many resources to do anything else.
Your ARM chip has an ADC, but it doesn't have a DAC. VGA is a multi-channel analog output, so you need some kind of DAC, and in turn an external component. Another problem you might encounter is the need for proper drivers (the electronic kind, not software). A VGA cable can be quite long, which means you have large capacitances to overcome, plus it may act as an antenna.
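To give a feel for what "turning the ARM7 into a VGA controller" involves, here is a conceptual C sketch of generating the sync pulses from a per-scanline timer interrupt. gpio_write(), the pin numbers and the timer setup are hypothetical placeholders, not AT91SAM7SE512 register names, and the analog RGB levels would still need an external resistor-ladder DAC as noted above.

    #include <stdint.h>

    #define V_TOTAL_LINES 525           /* 640x480 @ 60 Hz: 525 lines per frame */
    #define V_SYNC_LINES  2             /* vsync asserted for 2 lines */

    extern void gpio_write(int pin, int level);  /* hypothetical GPIO helper */
    #define HSYNC_PIN 0
    #define VSYNC_PIN 1

    static volatile uint16_t line;

    /* Called once per scanline (~31.47 kHz) from a hardware timer. */
    void line_timer_isr(void)
    {
        /* hsync pulse at the start of every line (negative polarity in this mode) */
        gpio_write(HSYNC_PIN, 0);
        /* ... wait ~3.8 us (96 pixel clocks), ideally using a second timer ... */
        gpio_write(HSYNC_PIN, 1);

        /* vsync asserted during the first two lines of the frame */
        gpio_write(VSYNC_PIN, (line < V_SYNC_LINES) ? 0 : 1);

        line = (line + 1) % V_TOTAL_LINES;

        /* Pixel data would still have to be clocked out at ~25 MHz, e.g. by
         * DMA from SDRAM, which is the part a ~48 MHz ARM7 struggles with. */
    }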