I want to use stress-ng to mimic a workload. The end goal is to be able to load the system with different percentages of different tasks, for example 50% CPU, 25% I/O, etc.
So I started with this command...
sudo stress-ng -v --taskset 1 --sched fifo --sched-prio 80 --perf --all 2 --ioport 1 --ioport-ops 1000000 --matrix 1 -t 60s
stress-ng: debug: [2077] 2 processors online, 2 processors configured
stress-ng: debug: [2077] sched: setting scheduler class 'fifo', priority 80
stress-ng: info: [2077] dispatching hogs: 1 ioport, 1 matrix
stress-ng: debug: [2077] cache allocate: default cache size: 4096K
stress-ng: debug: [2077] starting stressors
stress-ng: debug: [2077] 2 stressors spawned
stress-ng: debug: [2078] stress-ng-ioport: started [2078] (instance 0)
stress-ng: debug: [2079] stress-ng-matrix: started [2079] (instance 0)
But when I monitored processes 2078 and 2079, I noticed that only process 2078 was loading the CPU.
Image 1
When I swapped the order of the two stressors...
sudo stress-ng -v --taskset 1 --sched fifo --sched-prio 80 --perf --all 2 --matrix 1 --ioport 1 --ioport-ops 1000000 -t 60s
stress-ng: debug: [2084] 2 processors online, 2 processors configured
stress-ng: debug: [2084] sched: setting scheduler class 'fifo', priority 80
stress-ng: info: [2084] dispatching hogs: 1 matrix, 1 ioport
stress-ng: debug: [2084] cache allocate: default cache size: 4096K
stress-ng: debug: [2084] starting stressors
stress-ng: debug: [2084] 2 stressors spawned
stress-ng: debug: [2085] stress-ng-matrix: started [2085] (instance 0)
stress-ng: debug: [2086] stress-ng-ioport: started [2086] (instance 0)
stress-ng: debug: [2085] stress-ng-matrix using method 'all' (x by y)
Only process 2085 was loading the CPU.
Image 2
This suggests the two stressors do not run in parallel.
How can I get the stressors to run in parallel and, moreover, proportion the two tasks?
Are there better open-source tools to mimic workloads?
Thanks!
The stress-ng stressors are designed to max out the resources, so specifying 25% I/O is not easy to do, as one has to estimate what 100% I/O is and then scale down appropriately. Since I/O rates can vary over time (because they ultimately depend on many hardware-specific variables), one needs a second-order differential feedback loop to make it work well, which is out of the scope of what stress-ng is able to do.
One other way to look at the issue is to scale things with the number of CPUs. Suppose you have 4 CPUs on your system; then 2 CPUs for CPU work = 50% and 1 CPU for I/O = 25%, so run:
stress-ng --cpu 2 --iomix 1
or on an 8-CPU system you could do something else like:
stress-ng --matrix 4 --hdd 2
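If you need finer-grained CPU proportions than whole CPUs allow, the cpu stressor also takes a --cpu-load percentage. A sketch combining both ideas (the I/O share is still approximate, since iomix simply uses whatever bandwidth it can get):
stress-ng --cpu 2 --cpu-load 50 --iomix 1 -t 60s
Here each of the two cpu stressors idles half the time, giving roughly one CPU's worth of compute load overall.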
Related
I am a beginner with OpenOCD, and I am trying to flash 3 STM32 targets in a daisy chain using an ST-Link v2 debugger or an OLIMEX adapter, as shown below, using OpenOCD.
The code that I use works if only one target is connected, but if I connect more than one target, OpenOCD throws an error stating that init failed.
"C:\Program Files\GNU ARM Eclipse\OpenOCD\0.10.0-201601101000-dev\bin\openocd" -f "C:\Program Files\GNU ARM Eclipse\OpenOCD\0.10.0-201601101000-dev\scripts\interface\stlink-v2.cfg" -f "C:\Program Files\GNU ARM Eclipse\OpenOCD\0.10.0-201601101000-dev\scripts\target\stm32f3x.cfg" -c init -c targets -c "halt" -c "flash erase_sector 0 0 127" -c "reset halt" -c "flash write_image C:/Users/Buero-1/Desktop/openOCD/init.hex" -c "verify_image C:/Users/Buero-1/Desktop/openOCD/init.hex" -c "reset run" -c shutdown
A successful result from executing this command is shown below.
GNU ARM Eclipse 64-bits Open On-Chip Debugger 0.10.0-dev-00287-g85cec24-dirty (2016-01-10-10:13)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
adapter speed: 1000 kHz
adapter_nsrst_delay: 100
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
none separate
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v29 API v2 SWIM v7 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 3.223311
Info : stm32f3x.cpu: hardware has 6 breakpoints, 4 watchpoints
TargetName Type Endian TapName State
-- ------------------ ---------- ------ ------------------ ------------
0* stm32f3x.cpu hla_target little stm32f3x.cpu halted
Info : device id = 0x20006432
Info : flash size = 256kbytes
erased sectors 0 through 127 on flash bank 0 in 0.025984s
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
adapter speed: 950 kHz
stm32f3x.cpu: target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0xfffffffe msp: 0xfffffffc
Info : Padding image section 0 with 31880 bytes
Info : Padding image section 1 with 1 bytes
stm32f3x.cpu: target state: halted
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000003a msp: 0xfffffffc
wrote 47676 bytes from file Z:/Elektronik/GSV13/Fertigung_GSV-13iu/Init/GSV13init_Ver1_6.hex in 1.635796s (28.462 KiB/s)
stm32f3x.cpu: target state: halted
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000002e msp: 0xfffffffc
stm32f3x.cpu: target state: halted
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000002e msp: 0xfffffffc
verified 15795 bytes in 0.483325s (31.914 KiB/s)
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
adapter speed: 950 kHz
shutdown command invoked
But as mentioned, if I connect multiple targets in a JTAG chain, the process stops at init and the program ends.
The config files are target/stm32f3x.cfg, interface/ftdi/olimex-arm-usb-ocd-h.cfg and interface/stlink-v2.cfg.
Please excuse me if my question is very basic. Any guidance on how to proceed with my problem would be a great help.
Thank you.
As far as I know, the ST-Link v2 does not support daisy-chaining JTAG targets.
https://community.st.com/s/question/0D50X00009XkZTdSAN/does-stlink-utility-support-multiple-devices-on-jtag-chain
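If you drive the chain with the OLIMEX (FTDI-based) adapter instead, which does support JTAG chains, the config must declare every TAP in the chain explicitly. A hypothetical, untested sketch for two STM32F3 parts (the TAP names are made up; verify the declaration order, IR lengths and expected IDs against your hardware and the OpenOCD documentation before use):
source [find interface/ftdi/olimex-arm-usb-ocd-h.cfg]
transport select jtag

# each STM32F3 contributes a CPU TAP (irlen 4) and a boundary-scan TAP
# (irlen 5); declare them in scan-chain order
jtag newtap chip0 cpu -irlen 4 -ircapture 0x1 -irmask 0xf -expected-id 0x4ba00477
jtag newtap chip0 bs -irlen 5
jtag newtap chip1 cpu -irlen 4 -ircapture 0x1 -irmask 0xf -expected-id 0x4ba00477
jtag newtap chip1 bs -irlen 5

# create a debug target on each CPU TAP
target create chip0.cpu cortex_m -chain-position chip0.cpu
target create chip1.cpu cortex_m -chain-position chip1.cpu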
I am trying to emulate a firmware image using QEMU. During booting, I get the following errors:
can't run '/etc/init.d/rcS': No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
.
.
.
This is the content of the inittab file
# Startup the system
null::sysinit:/etc/init.d/rc.sysinit
# now run any rc scripts
::sysinit:/etc/init.d/rcS
# Put a getty on the serial port
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
# Stuff to do before rebooting
null::shutdown:/bin/umount -a -r
It is able to run rc.sysinit, but not rcS.
I have checked the permissions of rcS. Also, the filesystem is mounted as read-only cramfs; could this be causing an issue?
This is the command I am running:
QEMU_AUDIO_DRV=none qemu-system-arm -m 256M -M versatilepb \
    -kernel ~/linux-2.6.23/arch/arm/boot/zImage \
    -append "console=ttyAMA0,115200 root=/dev/ram rdinit=/sbin/init" \
    -initrd ~/tmpcramfs2 \
    -nographic
These are the boot messages obtained on running the command:
Linux version 2.6.23 (hsailer@SvanteArrhenius) (gcc version 4.0.2) #1 Thu May 27 09:31:10 EDT 2021
CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00093177
Machine: ARM-Versatile PB
Memory policy: ECC disabled, Data cache writeback
CPU0: D VIVT write-through cache
CPU0: I cache: 4096 bytes, associativity 4, 32 byte lines, 32 sets
CPU0: D cache: 65536 bytes, associativity 4, 32 byte lines, 512 sets
Built 1 zonelists in Zone order. Total pages: 65024
Kernel command line: console=ttyAMA0,115200 root=/dev/ram rdinit=/sbin/init
PID hash table entries: 1024 (order: 10, 4096 bytes)
Console: colour dummy device 80x30
Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
Memory: 256MB = 256MB total
Memory: 249600KB available (2508K code, 227K data, 100K init)
Mount-cache hash table entries: 512
CPU: Testing write buffer coherency: ok
NET: Registered protocol family 16
NET: Registered protocol family 2
Time: timer3 clocksource has been installed.
IP route cache hash table entries: 2048 (order: 1, 8192 bytes)
TCP established hash table entries: 8192 (order: 4, 65536 bytes)
TCP bind hash table entries: 8192 (order: 3, 32768 bytes)
TCP: Hash tables configured (established 8192 bind 8192)
TCP reno registered
checking if image is initramfs...it isn't (bad gzip magic numbers); looks like an initrd
Freeing initrd memory: 7184K
NetWinder Floating Point Emulator V0.97 (double precision)
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
JFFS2 version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
JFS: nTxBlock = 2007, nTxLock = 16063
io scheduler noop registered
io scheduler anticipatory registered (default)
io scheduler deadline registered
io scheduler cfq registered
CLCD: Versatile hardware, VGA display
Clock CLCDCLK: setting VCO reg params: S=1 R=99 V=98
Console: switching to colour frame buffer device 80x60
Serial: AMBA PL011 UART driver
dev:f1: ttyAMA0 at MMIO 0x101f1000 (irq = 12) is a AMBA/PL011
console [ttyAMA0] enabled
dev:f2: ttyAMA1 at MMIO 0x101f2000 (irq = 13) is a AMBA/PL011
dev:f3: ttyAMA2 at MMIO 0x101f3000 (irq = 14) is a AMBA/PL011
fpga:09: ttyAMA3 at MMIO 0x10009000 (irq = 38) is a AMBA/PL011
RAMDISK driver initialized: 16 RAM disks of 8192K size 1024 blocksize
smc91x.c: v1.1, sep 22 2004 by Nicolas Pitre <nico@cam.org>
eth0: SMC91C11xFD (rev 1) at d098e000 IRQ 25 [nowait]
eth0: Ethernet addr: 52:54:00:12:34:56
armflash.0: Found 1 x32 devices at 0x0 in 32-bit bank
Intel/Sharp Extended Query Table at 0x0031
Using buffer write method
RedBoot partition parsing not available
afs partition parsing not available
armflash: probe of armflash.0 failed with error -22
mice: PS/2 mouse device common for all mice
input: AT Raw Set 2 keyboard as /class/input/input0
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
VFP support v0.3: implementor 41 architecture 1 part 10 variant 9 rev 0
input: ImExPS/2 Generic Explorer Mouse as /class/input/input1
RAMDISK: cramfs filesystem found at block 0
RAMDISK: Loading 7184KiB [1 disk] into ram disk... done.
VFS: Mounted root (cramfs filesystem) readonly.
Freeing init memory: 100K
can't run '/etc/init.d/rcS': No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
.
.
.
The errors about /dev/ttyS0 are because your inittab is specifying the wrong device name for the serial port for the (emulated) hardware you're running on. Your QEMU command specifies the 'versatilepb' board, whose serial devices are PL011s, which appear in /dev/ as /dev/ttyAMA0, /dev/ttyAMA1, etc. (/dev/ttyS0 is what the serial ports on an x86 PC appear as.) You need to fix that line of the inittab to refer to ttyAMA0 instead.
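That is, the getty line of the inittab shown above becomes:
ttyAMA0::respawn:/sbin/getty -L ttyAMA0 115200 vt100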
For the rcS error, I would suggest you start by double-checking all the things listed in all the responses to this older question.
I successfully set up a mini Ray cluster (1 head + 1 worker, each with 4 CPU cores) manually. However, I failed to set it up automatically using the Apache Ray autoscaler. The head node starts correctly, while the worker node never joins the cluster. Below is my YAML configuration for the autoscaler. Is there anything I did wrong?
cluster_name: my_ray_cluster

min_workers: 8
initial_workers: 8
max_workers: 8

provider:
    type: local
    head_ip: 10.148.186.178
    worker_ips: [10.148.186.18]

auth:
    ssh_user: USER_NAME
    ssh_private_key: ~/.ssh/id_rsa

# Files or directories to copy to the head and worker nodes.
file_mounts: {
    # "/path1/on/remote/machine": "/path1/on/local/machine",
    # "/path2/on/remote/machine": "/path2/on/local/machine",
}

head_setup_commands:
    - pip3 install ray[debug,dashboard]

setup_commands:
    - pip3 install ray[debug,dashboard]

# Command to start ray on the head node. You don't need to change this.
head_start_ray_commands:
    - ray stop
    - ray start --head --redis-port=6379

worker_start_ray_commands:
    - ray stop
    - ray start --address=10.148.186.178:6379
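For reference, a config like this is normally launched with the autoscaler CLI; assuming the file is saved as cluster.yaml (the filename here is just an example):
ray up cluster.yaml
and torn down again with ray down cluster.yaml.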
I am using OpenShift and testing HA features. Pods have been running on two nodes, as follows:
$ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hello-1-7j6zp 1/1 Running 0 18m 10.128.0.153 node1.exampledis.com
hello-1-mztf8 1/1 Running 0 18m 10.128.0.152 node1.exampledis.com
hello-1-pmz2g 1/1 Running 0 26m 10.130.0.46 node2.exampledis.com
I shut down the VM running as node2.exampledis.com. After about 1 minute, a new pod began to start up on node1, and the pod on node2 became "Unknown". I think there should be some parameter to control this interval; can anyone share some pointers on this?
version:
oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server https://master.exampledis.com:8443
openshift v3.7.9
kubernetes v1.7.6+a08f5eeb62
Best regards
Lan
The kubelet --sync-frequency parameter controls the sync interval, as shown in the kubelet documentation:
--sync-frequency: Max period between synchronizing running containers and config (default 1m0s)
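On OpenShift 3.x, kubelet flags are normally set via kubeletArguments in the node configuration rather than on the command line. A sketch, assuming the default /etc/origin/node/node-config.yaml location (verify the path and the flag against your version):
kubeletArguments:
  sync-frequency:
    - "30s"
followed by a restart of the node service for the change to take effect.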
I am trying to run sonatype/nexus3 on OpenShift Online v3 Pro. If I just use the web console to create a new app from the image, it assigns it only 512Mi and it dies with OOM. It did get created, though, and logged a lot of Java output before it died of out-of-memory. When using the web console there doesn't appear to be a way to set the memory on the image, and when I try to edit the YAML of the pod it doesn't let me edit the memory limit.
Reading the docs about memory limits suggests that I can run with this:
oc run nexus333 --image=sonatype/nexus3 --limits=memory=750Mi
Then it doesn't even start. It dies with:
{kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: mkdir /var/lib/docker/volumes/c30deb38b3c26252bf1218cc898fbf1c68d8fc14e840076710c211d58ed87a59: permission denied"}
More information from oc get events:
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
16m 16m 1 nexus333-1-deploy Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-deploy to ip-172-31-50-97.ec2.internal
16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulling {kubelet ip-172-31-50-97.ec2.internal} pulling image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21"
16m 16m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Pulled {kubelet ip-172-31-50-97.ec2.internal} Successfully pulled image "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.6.173.0.21"
15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Created {kubelet ip-172-31-50-97.ec2.internal} Created container
15m 15m 1 nexus333-1-deploy Pod spec.containers{deployment} Normal Started {kubelet ip-172-31-50-97.ec2.internal} Started container
15m 15m 1 nexus333-1-rftvd Pod Normal Scheduled {default-scheduler } Successfully assigned nexus333-1-rftvd to ip-172-31-59-148.ec2.internal
15m 14m 7 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulling {kubelet ip-172-31-59-148.ec2.internal} pulling image "sonatype/nexus3"
15m 10m 19 nexus333-1-rftvd Pod spec.containers{nexus333} Normal Pulled {kubelet ip-172-31-59-148.ec2.internal} Successfully pulled image "sonatype/nexus3"
15m 15m 1 nexus333-1-rftvd Pod spec.containers{nexus333} Warning Failed {kubelet ip-172-31-59-148.ec2.internal} Error: Error response from daemon: {"message":"create 3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: mkdir /var/lib/docker/volumes/3aa35201bdf81d09ef4b09bba1fc843b97d0339acfef0c30cecaa1fbb6207321: permission denied"}
I am not sure why I cannot assign more memory when using the web console. I am not sure why running it with oc run dies with the mkdir error. Can anyone tell me how to run sonatype/nexus3 on OpenShift Online Pro?
Looking in the documentation, I see that it is a Java VM solution.
When using Java 8, memory usage can be DRAMATICALLY IMPROVED using only the following 2 runtime Java VM options:
... "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap" ...
I just deployed my container (Spring Boot JAR) that consumed over 650 MB RAM. With just these two (new) options RAM consumption dropped to just 270 MB!!!
So, with these 2 runtime settings all OOM's are left far behind! Enjoy!
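One way to pass those options to the sonatype/nexus3 image on OpenShift is through its INSTALL4J_ADD_VM_PARAMS environment variable (check the documentation for your image tag; the deployment config name nexus333 is taken from the question above):
oc set env dc/nexus333 INSTALL4J_ADD_VM_PARAMS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"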
You may also want to follow along with the tutorial in the OpenShift docs: https://docs.openshift.com/online/dev_guide/app_tutorials/maven_tutorial.html
I have had success deploying this in OpenShift Online Pro.
Okay, the mkdir /var/lib/docker/volumes/ permission denied error seems to be because the image needs a /nexus-data mount and that is refused. I saw that by deploying from the web console (which dies with OOM) and then editing the YAML of the created pod to see the generated volume mount.
Creating the pod with the following YAML, using cat nexus3_pod.ephemeral.yaml | oc create -f -, with the volume mount and explicit memory settings, the container now starts up:
apiVersion: "v1"
kind: "Pod"
metadata:
name: "nexus3"
labels:
name: "nexus3"
spec:
containers:
-
name: "nexus3"
resources:
requests:
memory: "1200Mi"
limits:
memory: "1200Mi"
image: "sonatype/nexus3"
ports:
-
containerPort: 8081
name: "nexus3"
volumeMounts:
- mountPath: /nexus-data
name: nexus3-1
volumes:
- emptyDir: {}
name: nexus3-1
Notes
The image sets -Xmx1200m, as documented at sonatype/docker-nexus3. So if you assign less than 1200Mi of memory, it will crash with OOM when the heap grows over the limit. You may as well set both the requested memory and the limit to the max heap size anyway.
When the allocated memory was too low, it crashed just as it was setting up the DB, which corrupted the DB log. It then got into a crash loop ("couldn't load 4 byte from 0 byte file") when I recreated it with more memory. It seems that with an emptyDir the files hang around between crash restarts and memory changes (that's documented behaviour, I think). I had to recreate the pod with a different name to get a clean emptyDir and an assigned memory of 1200Mi to get it all to start.
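A hypothetical recovery sequence for that situation, using the names from the YAML above (adjust to your setup):
oc delete pod nexus3
# edit metadata.name in nexus3_pod.ephemeral.yaml to a fresh name, then:
cat nexus3_pod.ephemeral.yaml | oc create -f -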