What's the max resolution that I can achieve using Xvfb? - puppeteer

Xvfb :1 -screen 0 1600x1200x24
Using this command I can get a 1600x1200 screen. But what keeps me from setting it to Xvfb :1 -screen 0 160000x120000x32? Are there any limitations on the screen size that I can use? Is it limited by the amount of CPU and RAM that the virtual screen is going to use?
I couldn't find anything related to this in the documentation.
xvfb documentation link

I did not find reliable information either, but I tested with 8K and it works on my machine (it's a bit slow):
xvfb-run -a -s "-screen 0 7680x4320x24" <x11 app>
My config:
CPU: AMD R7 5800H
RAM: 32 GB
GPU: RTX 3070 125W (OC to 140W)
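The main practical constraint is the framebuffer itself, which Xvfb keeps in memory (or in a memory-mapped file with -fbdir): roughly width × height × bytes per pixel. A quick sketch of the arithmetic:

```shell
# Framebuffer size is roughly width * height * bytes per pixel (4 at depth 24/32).
# An 8K (7680x4320) screen:
echo "$((7680 * 4320 * 4 / 1024 / 1024)) MiB"   # prints: 126 MiB
# A 160000x120000 screen would need about 71 GiB just for the framebuffer,
# so available RAM (and the X server's internal limits) is what stops you in practice.
```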

Related

Memory limit on composer installation

I have a DigitalOcean cloud server with 1 GB of RAM.
I need to set up a Docker, Laravel, MySQL, Nginx environment. I found Laradock and installed it normally, but when running Composer in the container I get a memory limit error.
Error running: composer install
root@b9864446a1e1:/var/www/site# composer install
Loading composer repositories with package information
Updating dependencies (including require-dev)
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
PHP Fatal error: Out of memory (allocated 677388288) (tried to allocate 4096 bytes) in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/RuleWatchGraph.php on line 52
Fatal error: Out of memory (allocated 677388288) (tried to allocate 4096 bytes) in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/RuleWatchGraph.php on line 52
Error when trying to change the memory limit:
WARNING: Your kernel does not support swap limit capabilities or the
cgroup is not mounted. Memory limited without swap.
This could be happening because the VPS runs out of memory and has no Swap space enabled.
free -m
total used free shared buffers cached
Mem: xxxx xxx xxxx x x xxx
-/+ buffers/cache: xxx xxxx
Swap: 0 0 0
To enable swap you can, for example, run:
/bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
/sbin/mkswap /var/swap.1
/sbin/swapon /var/swap.1
You can make a permanent swap file following this tutorial from DigitalOcean.
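As a sketch (assuming the /var/swap.1 file created above), making the swap permanent and sidestepping Composer's memory use can look like this; COMPOSER_MEMORY_LIMIT and php -d memory_limit are standard Composer/PHP knobs:

```shell
# Persist the swap file across reboots (assumes /var/swap.1 from above).
chmod 600 /var/swap.1
echo '/var/swap.1 none swap sw 0 0' >> /etc/fstab

# Alternatively, lift PHP's memory limit just for Composer:
php -d memory_limit=-1 /usr/local/bin/composer install
# or, with newer Composer versions:
COMPOSER_MEMORY_LIMIT=-1 composer install
```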

Openshift: How to increase the memory limit for sti-build

I'm using (the free trial of) OpenShift Online, and this tier apparently offers 2G of memory for the pods.
I'm trying to install a Node project, and the npm install phase tries to build some native modules and terminates with an OOM error while attempting a gcc compile.
Looking at the console, we can see that the sti-build container has a limit of 512M:
Containers
sti-build
Image: openshift3/ose-docker-builder
Command: openshift-sti-build --loglevel=0
Mount: buildworkdir → /tmp/build read-write
Mount: docker-socket → /var/run/docker.sock read-write
Mount: crio-socket → /var/run/crio/crio.sock read-write
Mount: builder-dockercfg-kpj4q-push → /var/run/secrets/openshift.io/push read-only
Mount: builder-dockercfg-kpj4q-pull → /var/run/secrets/openshift.io/pull read-only
Mount: builder-token-pl672 → /var/run/secrets/kubernetes.io/serviceaccount read-only
CPU: 30 millicores to 1 core
Memory: 409 MiB to 512 MiB
This seems to come from a LimitRange that the platform has injected, as the documentation says that builds should be unlimited.
Any way of overriding it?
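One possible override is to set resource limits directly on the BuildConfig (spec.resources), which builds use in preference to injected defaults; a sketch, where myapp is a hypothetical BuildConfig name:

```shell
# Raise the build container's memory limit on the BuildConfig ("myapp" is hypothetical).
oc patch bc/myapp --patch '{"spec":{"resources":{"limits":{"memory":"2Gi"}}}}'
# Trigger a new build with the raised limit:
oc start-build myapp
```

Note that if the project's LimitRange enforces a maximum, the BuildConfig value still has to fit within it, so this may not help on a capped free tier.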

CUDA : How to detect shared memory bank conflict on device with compute capabiliy >= 7.2?

On devices with compute capability <= 7.2, I always use
nvprof --events shared_st_bank_conflict
but when I run it on an RTX 2080 Ti with CUDA 10, it returns
Warning: Skipping profiling on device 0 since profiling is not supported on devices with compute capability greater than 7.2
So how can I detect shared memory bank conflicts on these devices?
I've installed NVIDIA Nsight Systems and Nsight Compute, but found no such profiling report...
Thanks.
You can use --metrics:
Either
nv-nsight-cu-cli --metrics l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum
for conflicts when reading (loading) from shared memory, or
nv-nsight-cu-cli --metrics l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum
for conflicts when writing (storing) to shared memory.
This issue is also discussed in a post on the NVIDIA forums; apparently it should be supported using one of the Nsight tools (either the CLI or the UI).
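For reference, both metrics can be collected in one profiling pass; in newer CUDA toolkits the CLI was renamed from nv-nsight-cu-cli to ncu, so (assuming your binary is ./app):

```shell
# Collect both shared-memory bank-conflict metrics in one run.
# "ncu" is the renamed nv-nsight-cu-cli in newer toolkits; ./app is your CUDA binary.
ncu --metrics l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum,l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum ./app
```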

Ethminer Ubuntu 16 not using NVIDIA GPU

I have followed the instructions here and successfully built and set up geth.
Ethminer seems to work except it doesn't use the Titan X GPU and the mining rate is only 341022 H/s.
Also when I try to use the -G option ethminer says it is an invalid argument; the -G flag also doesn't appear in the ethminer help command.
Your GPU must have a minimum amount of memory to perform mining. Upgrade to a GPU with more memory (at least 4 GB is preferable).
The current DAG size is over 2 GB, which means you can't mine on a GPU with less than 2 GB of memory.
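To check how much memory your GPU has before mining, you can query the driver (nvidia-smi ships with the NVIDIA driver):

```shell
# Report the GPU's name and total memory; compare the latter
# against the current DAG size (over 2 GB here).
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
```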

Why does the boot system load two versions of U-Boot?

I have a gateway device with an MT7620a (MIPS architecture). The device runs OpenWRT. When I connect to the device via UART to flash new firmware, I see something I don't understand: the MCU appears to load two versions of U-Boot.
U-Boot 1.1.3
Ralink UBoot Version: 4.3.0.0
Here is the system log after startup:
U-Boot 1.1.3 (Apr 27 2015 - 13:54:38)
Board: Ralink APSoC DRAM: 128 MB
relocate_code Pointer at: 87fb8000
enable ephy clock...done. rf reg 29 = 5
SSC disabled.
spi_wait_nsec: 29
spi device id: 1c 70 18 1c 70 (70181c70)
find flash: EN25QH128A
raspi_read: from:30000 len:1000
*** Warning - bad CRC, using default environment
============================================
Ralink UBoot Version: 4.3.0.0
--------------------------------------------
ASIC 7620_MP (Port5<->None)
DRAM component: 1024 Mbits DDR, width 16
DRAM bus: 16 bit
Total memory: 128 MBytes
Flash component: SPI Flash
Date:Apr 27 2015 Time:13:54:38
I have a few additional questions about this:
What is the difference between these U-Boot versions?
Why does my device need two versions of U-Boot?
Do these U-Boots need separate *.bin images, or are they combined in one *.bin image? My device has only one partition for the U-Boot image and one partition for variables:
mtd0: 00030000 00010000 "u-boot"
mtd1: 00010000 00010000 "u-boot-env"
As Alexandre Belloni said, there is probably only one version of U-Boot on your device, it just has two different version identifiers.
The reason for this is that manufacturers often need to modify the U-Boot source code in order to get it to operate on their device, or to add features.
On your device, it looks like the version of U-Boot that Ralink pulled from the official U-Boot source code repository is 1.1.3. Ralink's own internal version number that they use for tracking their internal modifications is 4.3.0.0.
There is probably only one U-Boot; "Ralink UBoot Version: 4.3.0.0" is Ralink's internal U-Boot version number.