Pulseaudio + Qemu high cpu usage - qemu

I am rerouting the audio input and output of a Qemu guest by using the following:
In Environment:
QEMU_AUDIO_DRV=pa
QEMU_PA_SINK=some_sink
QEMU_PA_SOURCE=some_source
QEMU_AUDIO_DAC_FIXED_FREQ=48000
QEMU_AUDIO_ADC_FIXED_FREQ=48000
some_sink is a null sink created with pactl load-module module-null-sink, and some_source is the monitor of another null sink.
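For reference, a minimal sketch of how such a null-sink pair could be created (the sink names and descriptions here are placeholders, not the exact ones from the setup above):
pactl load-module module-null-sink sink_name=some_sink sink_properties=device.description=guest_out
pactl load-module module-null-sink sink_name=guest_in sink_properties=device.description=guest_in
QEMU_PA_SOURCE would then point at the monitor of the second sink, i.e. guest_in.monitor.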
I have also set the default sample rate of the host's Pulseaudio to 48000 so that no resampling occurs:
/etc/pulse/daemon.conf:
default-sample-rate = 48000
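To confirm that the daemon actually picked up the 48 kHz default, the advertised sample spec can be checked (the exact output format may vary between versions):
pactl info | grep -i "sample spec"
# e.g. Default Sample Specification: s16le 2ch 48000Hz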
Pulseaudio version:
$ pulseaudio --version
pulseaudio 13.99.1
The audio out is NOT output on the machine, but forwarded to another system for processing.
The setup works fine (there is audio in and out), but the Pulseaudio CPU usage (on an Intel Xeon 3.50GHz) as reported by top is constantly between 15%-30%, which to me seems like A LOT.
Not doing any resampling and just forwarding a byte stream seems to me like an inexpensive operation...
Is the high CPU usage expected in this setup - if yes, why?
How could I investigate/troubleshoot the reason for pulseaudio's high CPU usage?
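A sketch of how one might start investigating, assuming perf and pidof are available on the host:
perf top -p "$(pidof pulseaudio)"
# or run the daemon in the foreground with verbose logging for a while:
pulseaudio -k
pulseaudio -vvv --log-time=1 > /tmp/pa.log 2>&1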

I get this too, although not all the time and only on the machine that actually plays the sound.
I have VMs running zoom and citrix that play audio through my laptop. Periodically the CPU usage on my laptop goes to 30% or so.
pulseaudio -k; pulseaudio -D; fixes the cpu usage until it happens again.
(Annoyingly, once this is done, citrix sound doesn't work until citrix is restarted)

Related

PhpStorm 2020 seems to be working very slowly. Can I improve it?

After I moved to PhpStorm 2020.1.4 from PhpStorm 2019, it seems to me that PhpStorm 2020.1.4 works rather slowly.
My processor and memory:
$ sudo lshw | grep -i cpu
*-cpu
description: CPU
product: Intel(R) Core(TM) i5-4210H CPU @ 2.90GHz
bus info: cpu@0
version: Intel(R) Core(TM) i5-4210H CPU @ 2.90GHz
$ free
              total        used        free      shared  buff/cache   available
Mem:        8085244     5175784      417904      184048     2491556     2443804
Swap:       2104476       31744     2072732
Also, in the IDE I very often get a help hint on items I did not request help for, like: https://prnt.sc/12f6wcp
How can I show the help hint only on some action from me (hot key + click)?
Also, are there other options to make the IDE work less slowly?
Thanks!

ARM AT91 CPU startup in qemu

ARM AT91 cannot start up in QEMU. I can't get any output on the console.
I am trying to use QEMU (latest code pulled by git) to simulate an ARM AT91 board, but when starting up QEMU I get no output on the console. In my understanding, there are two steps to achieve this:
1. Properly set up the memory addresses in QEMU and let QEMU decompress the zImage. In this step, I should see "Uncompressing Linux... done, booting the kernel."
2. Properly set up the output device (e.g. uart0). Then I should get the kernel startup messages.
I've succeeded in starting up with the ARM versatilePB because QEMU supports versatilePB itself. The difference between versatilePB and AT91 is that they have different SDRAM addresses. I've tried to modify loader_start to 0x20000000, but it still does not seem to work.
hwaddr loader_start; /* set to 0x20000000, the AT91 SDRAM base address */
memory_region_add_subregion(sysmem, 0x20000000, ram);
At least it should print "Uncompressing Linux... done, booting the kernel.", which would indicate that the zImage was executed and decompressed.
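For comparison, the versatilePB case that does work boots with a command roughly along these lines (the kernel path, memory size, and console argument are assumptions here):
qemu-system-arm -M versatilepb -m 128M -kernel zImage -append "console=ttyAMA0" -nographic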
QEMU (at least upstream QEMU) does not have a model of the AT91 SoCs. The differences between these systems and ones like the versatilePB that QEMU does support are greater than just "the RAM is at a different address" -- they will have different devices of all kinds (including the UART) which both behave differently and are found at different locations. It is impossible to run bare metal code intended for an AT91 without implementing in QEMU a model of the correct board and at least some of the AT91 devices. The changes required would be much much more substantial than just changing a few addresses for the RAM base address.

Trying to monitor resource usage of a kvm/qemu virtual machine with mesos

I'm currently deploying a kvm/qemu virtual machine with mesos/marathon. In marathon, I'm using the built-in mesos command executor and running this script:
virsh start centos7.0; while true; do echo 'centos 7.0 guest is running'; sleep 5; done
Note the while loop is there only to keep the task running. My issue is that I cannot get mesos to monitor the resource usage of the virtual machine.
When marathon deploys this task on a mesos-agent, it is creating a container that uses the memory and cpu cgroups.
/sys/fs/cgroup/cpu/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
/sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
When the virtual machine is being kicked off, the virsh start command is sending a request to libvirtd. Libvirtd then reads the guest.xml file located in /etc/libvirt/qemu/ and then sends a request to the qemu/kvm driver to deploy it.
In my guest.xml file I’m using a custom partition cgroup slice to monitor my virtual machine usage.
https://libvirt.org/cgroups.html
(for each cgroup)
/sys/fs/cgroup/???/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
What I have tried:
I tried deleting my memory / cpu cgroup from this slice by doing
cgdelete -r cpu,memory:vmHolder.slice
and then adding my qemu guest process to the mesos controllers
cgclassify -g cpu,memory:mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895 GUEST-PID
When I run the command cat /proc/5531/cgroup, I get:
11:perf_event:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
10:pids:/
9:devices:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
8:cpuset:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope/emulator
7:net_prio,net_cls:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
6:freezer:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
5:blkio:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
4:hugetlb:/
3:cpuacct,cpu:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
2:memory:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
1:name=systemd:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
It shows that I'm using those controllers, but when I run systemd-cgtop it does not include the memory usage of the VM. I'm not sure what to do next. Any suggestions?
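One way to sanity-check where the guest's memory is actually charged (using the same mesos container ID as above) would be to read the cgroup v1 controller files directly:
cat /sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895/cgroup.procs
cat /sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895/memory.usage_in_bytes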

Unable to access Google Compute Engine instance using external IP address

I have a Google Compute Engine instance (CentOS) which I could access using its external IP address until recently.
Now suddenly the instance cannot be accessed using its external IP address.
I logged in to the developer console and tried rebooting the instance but that did not help.
I also noticed that the CPU usage is almost at 100% continuously.
On further analysis of the serial port output, it appears that init is not loading properly.
I am pasting below the last few lines from the serial port output of the virtual machine.
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2014-07-04 07:40:53 UTC (1404459653)
Initalizing network drop monitor service
Freeing unused kernel memory: 1280k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 800k freed
Freeing unused kernel memory: 1584k freed
Failed to execute /init
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Pid: 1, comm: swapper Not tainted 2.6.32-431.17.1.el6.x86_64 #1
Call Trace:
[] ? panic+0xa7/0x16f
[] ? init_post+0xa8/0x100
[] ? kernel_init+0x2e6/0x2f7
[] ? child_rip+0xa/0x20
[] ? kernel_init+0x0/0x2f7
[] ? child_rip+0x0/0x20
Thanks in advance for any tips to resolve this issue.
Mathew
It looks like you might have a script or other program that is causing you to run out of inodes.
You can delete the instance without deleting the persistent disk (PD) and create a new VM with a higher capacity using your PD; however, if it's a script causing this, you will end up with the same issue. It's always recommended to back up your PD before making any changes.
Run this command to find more info about your instance:
gcutil --project=<project-id> getserialportoutput <instance-name>
If the issue still continues, you can either:
- Make a snapshot of your PD and create a copy of the PD, or
- Delete the instance without deleting the PD.
Then attach and mount the PD to another VM as a second disk, so you can access it and find what is causing the issue. Visit this link https://developers.google.com/compute/docs/disks#attach_disk for more information on how to do this.
Visit this page http://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html for more information about inodes troubleshooting.
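A sketch of that rescue-VM approach with the gcutil tooling of the time; the disk name, rescue instance name, and device path below are assumptions:
gcutil --project=<project-id> attachdisk --disk=<pd-name> <rescue-instance>
# then, on the rescue instance:
sudo mkdir -p /mnt/rescue
sudo mount /dev/sdb1 /mnt/rescue
df -i /mnt/rescue    # check inode usage, since inode exhaustion is suspected above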
Make sure the Allow HTTP traffic setting on the VM is still enabled.
Then see which network firewall you are using and its rules.
If your network is set up to use an ephemeral IP, it will periodically be released back. This will cause your IP to change over time. In that case, set it to static/reserved (on the networks page).
https://developers.google.com/compute/docs/instances-and-network#externaladdresses

Hot reconfiguration of HAProxy still leads to failed requests, any suggestions?

I found that there are still failed requests when the traffic is high, using a command like this
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
to hot reload the updated config file.
Below is the pressure-testing result using webbench:
/usr/local/bin/webbench -c 10 -t 30 targetHProxyIP:1080
Webbench – Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET targetHProxyIP:1080
10 clients, running 30 sec.
Speed=70586 pages/min, 13372974 bytes/sec.
Requests: 35289 susceed, 4 failed.
I ran the command
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
several times during the pressure testing.
In the haproxy documentation, it is mentioned:
They will receive the SIGTTOU signal to ask them to temporarily stop listening to the ports so that the new process can grab them
So there is a time period during which the old process is no longer listening on the PORT (say 80) and the new process has not yet started listening on the PORT (say 80), and during this specific time period new connections will fail. Does that make sense?
So is there any approach to reloading the haproxy configuration that impacts neither existing connections nor new connections?
On recent kernels where SO_REUSEPORT is finally implemented (3.9+), this dead period does not exist anymore. While a patch has been available for older kernels for something like 10 years, it's obvious that many users cannot patch their kernels. If your system is more recent, then the new process will succeed its attempt to bind() before asking the previous one to release the port, then there's a period where both processes are bound to the port instead of no process.
There is still a very tiny possibility that a connection arrives in the leaving process's queue at the moment it closes it. There is no reliable way to stop this from happening, though.
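So on a 3.9+ kernel the reload command from the question can be kept as-is; a quick sketch of what to check:
uname -r    # SO_REUSEPORT needs Linux 3.9 or newer
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)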