Where does the BIOS store the COM port enable setting?

My goal is to write a program in DOS that will disable COM2 in BIOS. Any solution must work in DOS.
Where does the BIOS store the Enable / Disable state for peripherals such as serial ports?
I have examined the CMOS and nothing other than the clock data changes when I enable the port in BIOS and reboot.
The CPU board is an Advantech PCA6028, a single-board computer with an AMI BIOS circa 2019.
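For anyone trying to reproduce the CMOS check, a minimal DOS sketch is below (assuming a Borland-style compiler where outportb/inportb live in dos.h; Watcom and Microsoft C use outp/inp from conio.h instead). It dumps the 128 standard CMOS bytes through index port 0x70 and data port 0x71, so dumps taken before and after toggling COM2 in setup can be diffed.

```c
/* Dump the 128 standard CMOS registers via index port 0x70 and
 * data port 0x71 (Borland Turbo C style port I/O from dos.h).
 * Run once with COM2 enabled and once with it disabled, then
 * diff the two outputs. */
#include <stdio.h>
#include <dos.h>

int main(void)
{
    int i;

    for (i = 0; i < 128; i++) {
        outportb(0x70, i);               /* select CMOS register i   */
        printf("%02X ", inportb(0x71));  /* read and print its value */
        if ((i & 15) == 15)
            printf("\n");                /* 16 bytes per output line */
    }
    return 0;
}
```

Note that the RTC clock and status bytes will differ between runs regardless of the COM2 setting, so only other differences are interesting.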

Related

How can a QEMU guest send packets to the host's NIC?

I'm trying to test an XDP program, but the test environment provided by the client consists of one server and a QEMU guest running on it that acts as a traffic generator (using Scapy or DPDK). For this test to run successfully, the packets from the guest should arrive at / be injected into the NIC driver (XDP working in native mode) of the host. Are there any configs/hacks that can make the traffic go from the guest to the host driver?
Edit:
Some points to clarify, as #vipin suggested:
On the host, the NIC is connected to virbr0 in the kernel.
The XDP program is running on the physical NIC.
I'm using bpf_redirect_map for redirecting, as we are still at a simple stage.
Anyway, my workaround was just to add a physical router to the lab setup, and that's enough for this stage of testing.
XDP (eXpress Data Path) support in Linux is on the RX side; there were patches for TX, but they were not integrated. Based on the current update, the XDP eBPF program is on the physical NIC, so all RX packets on the physical NIC are processed.
But as per the question, "packets from the guest should arrive/be injected to the NIC driver (XDP working in native mode) of the host". If the logic needs to run on traffic coming from the guest OS, the XDP program has to be loaded on the emulator, TAP, or bridge interface. This allows packets to be redirected, based on the kernel NIC id, to the physical NIC.
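For reference, a minimal sketch of the bpf_redirect_map approach described above, assuming libbpf-style BTF map definitions and compilation with clang -O2 -target bpf (the map name tx_port and the single-slot layout are illustrative, not from the original setup):

```c
/* Sketch of an XDP program that redirects every packet it sees to the
 * interface stored in a one-slot devmap. Attach it to the bridge/TAP
 * interface and have user space fill slot 0 with the ifindex of the
 * physical NIC. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_DEVMAP);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u32);   /* target ifindex */
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_to_phys(struct xdp_md *ctx)
{
    __u32 key = 0;
    /* flags = 0: the packet is aborted (dropped) if slot 0 is empty */
    return bpf_redirect_map(&tx_port, key, 0);
}

char _license[] SEC("license") = "GPL";
```

User space would then load the object, update map slot 0 with the physical NIC's ifindex (for example with bpftool map update), and attach the program to the interface that actually sees the guest's traffic.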

EVE-NG QEMU-based nodes are not starting

Setup: Dell PowerEdge R620 server, 12 cores, 128 GB RAM.
VMware ESXi 6.5 based setup: one VM for EVE-NG: 500 GB SSD + 32 GB allocated RAM.
A second VM for Windows Server 2016: 100 GB HDD + 16 GB RAM.
On the Windows client, I can access EVE-NG via Firefox and PuTTY. I have tried Cisco Dynamips images and those nodes start (I can telnet with PuTTY and change the config).
When I try to create nodes based on QEMU images (Cisco, Aruba, Palo Alto, etc.), the nodes do not start. I have followed the guidelines for qcow2 names and checked multiple sources. I have also edited the nodes and tried to play with all possible settings.
I have reinstalled EVE-NG on ESXi as well, but the issue remains the same.
Thanks a lot for your help and support.
I finally found an answer in the EVE-NG cookbook: https://www.eve-ng.net/wp-content/uploads/2020/06/EVE-Comm-BOOK-1.09-2020.pdf
Page 33, Step 6: IMPORTANT. Open the VM settings. Set the quantity of CPUs and the number of cores per socket. Set the Intel VT-x/EPT hardware virtualization engine to ON (checked).
Once I checked this field, all nodes started to work.
In the CPU settings, also check "Enable Virtualized CPU Performance Counter".
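Not from the cookbook, but a quick way to confirm from inside the EVE-NG VM (which is Linux-based) that the VT-x/EPT exposure actually took effect is to look for the vmx (Intel) or svm (AMD) flag in /proc/cpuinfo. A minimal sketch:

```c
/* Check /proc/cpuinfo for the vmx/svm CPU flags, which only show up
 * inside the VM when the hypervisor exposes hardware virtualization. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    char line[4096];
    int found = 0;

    if (!f) {
        perror("/proc/cpuinfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "flags", 5) == 0 &&
            (strstr(line, " vmx") || strstr(line, " svm"))) {
            found = 1;
            break;
        }
    }
    fclose(f);

    puts(found ? "hardware virtualization is exposed to this VM"
               : "no vmx/svm flag - check the ESXi CPU settings");
    return found ? 0 : 1;
}
```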
This helped me a bit. I was using a Mac and struggling to get the consoles up. Posting the steps below in case they help someone like me:
Shut down the VM using command shutdown -h now
Go to the VM settings
Click Advanced -> Check "Remote display over VNC"
Check Enable IOMMU in this virtual machine

Resizing a cloud VM disk without taking the instance down (Google Cloud)

So I saw there is an option in Google Compute (I assume the same option exists with other cloud VM suppliers, so the question isn't specifically about Google Compute but about the underlying technology) to resize the disk without having to restart the machine, and I ask: how is this possible?
Even if it uses some sort of abstraction over the disk and they don't actually assign a physical disk to the VM, but just part of a disk (or parts of a number of disks), once the disk is created in the guest VM it has a certain size. How can it change without needing a restart? Does it utilize NFS somehow?
This is built directly into disk protocols these days. The capability has existed for a while, since disks have been virtualized since the late 1990s (either through network protocols like iSCSI / Fibre Channel, or through a software-emulated version of hardware, as in VMware).
Like the VMware model, GCE doesn't require any additional network hops or protocols to do this; the hypervisor just exposes the virtual disk as if it were a physical device, and the guest knows that its size can change and handles that. GCE uses a virtualization-specific driver type for its disks called VirtIO SCSI, but this feature is implemented in many other driver types (across many OSes) as well.
Since a disk can be resized at any time, disk protocols need a way to tell the guest that an update has occurred. In general terms, this works as follows in most protocols:
Administrator resizes disk from hypervisor UI (or whatever storage virtualization UI they're using).
Nothing happens inside the guest until it issues an IO to the disk.
Guest OS issues an IO command to the disk, via the device driver in the guest OS.
Hypervisor emulates that IO command, notices that the disk has been resized and the guest hasn't been alerted yet, and returns a response to the guest telling it to update its view of the device.
The guest OS recognizes this response and re-queries the device size and other details via some other command.
I'm not 100% sure, but I believe the reason it's structured like this is that traditionally disks cannot send updates to the OS unless the OS requests them first. This is probably because the disk has no way to know what memory is free to write to, and even if it did, no way to synchronize access to that memory with the OS. However, those constraints are becoming less true to enable ultra-high-throughput / ultra-low-latency SSDs and NVRAM, so new disk protocols such as NVMe may do this slightly differently (I don't know).
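As a concrete illustration of that last step, here is a minimal Linux sketch (the device path /dev/sda is a placeholder) that asks the kernel for the current capacity of a block device via the BLKGETSIZE64 ioctl; this is essentially the size re-query the guest performs once the driver learns the device has changed. On a SCSI-style virtual disk you would typically also trigger a rescan first (for example by writing 1 to /sys/class/block/sdX/device/rescan) so the kernel picks up the new capacity.

```c
/* Query the current size of a block device, the same information a
 * guest re-reads after being told the virtual disk was resized. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKGETSIZE64 */

int main(void)
{
    int fd = open("/dev/sda", O_RDONLY);   /* placeholder device */
    uint64_t bytes = 0;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, BLKGETSIZE64, &bytes) != 0) {
        perror("BLKGETSIZE64");
        close(fd);
        return 1;
    }
    printf("device size: %llu bytes\n", (unsigned long long)bytes);
    close(fd);
    return 0;
}
```

After the device reports the larger size, the partition table and filesystem still have to be grown separately inside the guest (for example with growpart and resize2fs).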

compute.instances.hostError on an instance during maintenance

In the operations history of my compute engine project, my machines all have an operation listed as "Automatically migrate an instance (compute.instances.automaticRestart)". They are all on the same zone, using the same debian template.
I suppose there was some maintenance on the platform, which is fine for me as long as the OS doesn't reboot.
Unfortunately, two machines suffered a reboot. The operations history listed the operation as "compute.instances.hostError an instance" (compute.instances.hostError).
In addition, syslog doesn't suggest a clean shutdown.
Is there anything I should/can do to prevent such a problem?
Edit: We are in europe-west1-b and all servers have "On host maintenance" set to "Migrate VM instance".
Doesn't look like this was ever answered.
compute.instances.hostError means there was a hardware or software failure on the physical machine that was hosting your VM.
The FAQ has a description -- https://cloud.google.com/compute/docs/faq#hosterror
As per this article, all zones in GCE except "europe-west1-a" have transparent maintenance, where the instance is live-migrated without rebooting it. If your instance is in a zone with transparent maintenance, you can set the option "On host maintenance" to "Migrate VM instance" using your developer console. Once this option is set, your instance will be live-migrated without rebooting.

KVM/QEMU maximum VM count limit

For a research project I am trying to boot as many VMs as possible, using the Python libvirt bindings, in KVM under Ubuntu Server 12.04. All the VMs are set to idle after boot and to use a minimal amount of memory. At most I was able to boot 1000 VMs on a single host, at which point the kernel (Linux 3.x) became unresponsive, even though both CPU and memory usage were nowhere near the limits (48 AMD cores, 128 GB memory). Before that, the boot process had become progressively slower after a couple of hundred VMs.
I assume this must be related to the KVM/QEMU driver, as the Linux kernel itself should have no problem handling this few processes. However, I did read that the QEMU driver is now multi-threaded. Any ideas what the cause of this slowness may be, or at least where I should start looking?
You are booting all the VMs using qemu-kvm, right? After hundreds of VMs you see it becoming progressively slower. When you notice that, stop using KVM and boot with plain QEMU; I expect you will see the same slowness. My guess is that after that many VMs, KVM's hardware support is exhausted, because KVM is essentially a software layer over a few added hardware registers. So KVM might be the culprit here.
Also, what is the purpose of this experiment?
The following virtual hardware limits for guests have been tested. We ensure that the host and VMs install and work successfully, even when reaching the limits, and that there are no major performance regressions (CPU, memory, disk, network) since the last release (SUSE Linux Enterprise Server 11 SP1).
Max. Guest RAM Size --- 512 GB
Max. Virtual CPUs per Guest --- 64
Max. Virtual Network Devices per Guest --- 8
Max. Block Devices per Guest --- 4 emulated (IDE), 20 para-virtual (using virtio-blk)
Max. Number of VM Guests per VM Host Server --- Limit is defined as the total number of virtual CPUs in all guests being no greater than 8 times the number of CPU cores in the host
For more KVM limitations, please refer to this document (link).
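Not an answer to the bottleneck itself, but when driving a run like this it helps to poll how many guests are actually active so you can see where the boot rate collapses. The question used the Python libvirt bindings; the C API calls below are their direct equivalents (a minimal sketch, assuming a local qemu:///system hypervisor and linking with -lvirt):

```c
/* Poll libvirt for the number of running and defined-but-inactive
 * guests on the local QEMU/KVM host. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    int active, inactive;

    if (conn == NULL) {
        fprintf(stderr, "failed to connect to qemu:///system\n");
        return 1;
    }
    active = virConnectNumOfDomains(conn);          /* running guests  */
    inactive = virConnectNumOfDefinedDomains(conn); /* shut-off guests */
    printf("active: %d, defined but inactive: %d\n", active, inactive);

    virConnectClose(conn);
    return 0;
}
```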