Memory limit on composer installation - mysql

I have a DigitalOcean droplet with 1 GB of RAM. I need to set up a Docker environment with Laravel, MySQL, and nginx. I found Laradock and installed it without problems, but when I run Composer inside the container I get a memory limit error.
Error when running composer install:
root@b9864446a1e1:/var/www/site# composer install
Loading composer repositories with package information
Updating dependencies (including require-dev)
mmap() failed: [12] Cannot allocate memory
mmap() failed: [12] Cannot allocate memory
PHP Fatal error: Out of memory (allocated 677388288) (tried to allocate 4096 bytes) in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/RuleWatchGraph.php on line 52
Fatal error: Out of memory (allocated 677388288) (tried to allocate 4096 bytes) in phar:///usr/local/bin/composer/src/Composer/DependencyResolver/RuleWatchGraph.php on line 52
I also get an error when trying to change the memory limit:
WARNING: Your kernel does not support swap limit capabilities or the
cgroup is not mounted. Memory limited without swap.

This could be happening because the VPS is running out of memory and has no swap space enabled.
free -m
total used free shared buffers cached
Mem: xxxx xxx xxxx x x xxx
-/+ buffers/cache: xxx xxxx
Swap: 0 0 0
To enable swap you can, for example, run:
/bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
/sbin/mkswap /var/swap.1
/sbin/swapon /var/swap.1
You can make the swap file permanent by following DigitalOcean's tutorial on adding swap.
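A hedged sketch of making the swap file survive reboots, using the /var/swap.1 file created above (the tutorial covers the same steps in more detail):
# Restrict permissions; swapon warns about world-readable swap files:
chmod 600 /var/swap.1
# Activate the swap file automatically at boot:
echo '/var/swap.1 none swap sw 0 0' >> /etc/fstab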

Related

Install OpenShift on Google Cloud ("You need to enable virtualization")?

Virtualization is enabled; I created the machine with:
gcloud compute instances create openshift-server --enable-nested-virtualization --zone=europe-north1-a --machine-type=e2-standard-4 --image-family=centos-stream-8 --image-project=centos-cloud
I installed libvirt:
sudo yum install libvirt
and checked that CPU virtualization is active, yet I still get "You need to enable virtualization in BIOS":
[n_turri@openshift-server crc-linux-2.9.0-amd64]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2200.214
BogoMIPS: 4400.42
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 56320K
NUMA node0 CPU(s): 0-3
But when I run the setup, I get "You need to enable virtualization in BIOS" anyway:
[n_turri@openshift-server crc-linux-2.9.0-amd64]$ ./crc setup
INFO Using bundle path /home/n_turri/.crc/cache/crc_libvirt_4.11.3_amd64.crcbundle
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Checking if Virtualization is enabled
INFO Setting up virtualization
You need to enable virtualization in BIOS
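One hedged diagnostic (an assumption about the cause, not a confirmed fix): lscpu reporting "Hypervisor vendor: KVM" only shows the instance is itself a VM; crc's check needs the hardware virtualization flag exposed inside the guest. Google's documented check for that:
# A count of 0 means nested virtualization is not exposed to this VM:
grep -cw vmx /proc/cpuinfo
# libvirt's own host validation, if installed:
sudo virt-host-validate qemu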

Out of memory error on initializing Couchbase Java Client

I'm facing an out-of-memory error when initializing the Couchbase Java client. It happens when running the test cases in a Gradle build; it doesn't seem to happen when running individual test cases, only when running all test cases in the build. The error occurs on macOS but not on the Linux build machine.
Environment
JVM = 16 (OpenJDK)
OS = macOS Monterey
task = Gradle build
JVM memory settings = -Xmx8000m -Xms512m -XX:MaxDirectMemorySize=2000m
Stack trace:
Caused by: java.lang.OutOfMemoryError: Cannot reserve 16384 bytes of direct buffer memory (allocated: 536861104, limit: 536870912)
at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:330)
at com.couchbase.client.core.deps.io.netty.channel.unix.Buffer.allocateDirectWithNativeOrder(Buffer.java:40)
at com.couchbase.client.core.deps.io.netty.channel.unix.IovArray.<init>(IovArray.java:72)
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoop.<init>(KQueueEventLoop.java:62)
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.newChild(KQueueEventLoopGroup.java:151)
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.newChild(KQueueEventLoopGroup.java:32)
at com.couchbase.client.core.deps.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
at com.couchbase.client.core.deps.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60)
at com.couchbase.client.core.deps.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49)
at com.couchbase.client.core.deps.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.<init>(KQueueEventLoopGroup.java:110)
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.<init>(KQueueEventLoopGroup.java:97)
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoopGroup.<init>(KQueueEventLoopGroup.java:73)
at com.couchbase.client.core.env.IoEnvironment.createEventLoopGroup(IoEnvironment.java:476)
at com.couchbase.client.core.env.IoEnvironment.<init>(IoEnvironment.java:285)
at com.couchbase.client.core.env.IoEnvironment.<init>(IoEnvironment.java:66)
at com.couchbase.client.core.env.IoEnvironment$Builder.build(IoEnvironment.java:674)
at com.couchbase.client.core.env.CoreEnvironment.<init>(CoreEnvironment.java:153)
at com.couchbase.client.java.env.ClusterEnvironment.<init>(ClusterEnvironment.java:53)
at com.couchbase.client.java.env.ClusterEnvironment.<init>(ClusterEnvironment.java:46)
at com.couchbase.client.java.env.ClusterEnvironment$Builder.build(ClusterEnvironment.java:213)
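The limit in the stack trace, 536870912 bytes, is exactly 512 MiB: Gradle's default maximum heap for forked test workers, and the JVM's direct-buffer limit defaults to the maximum heap when -XX:MaxDirectMemorySize is not set. That suggests the flags above reach the Gradle daemon but not the forked test JVM (an inference from the numbers, not a confirmed diagnosis). A quick way to see what a given JVM resolves the flag to:
# Print the direct-memory limit the JVM resolved at startup;
# 0 means "default to the maximum heap size":
java -XX:+PrintFlagsFinal -version | grep MaxDirectMemorySize
If that is the cause, the settings need to go on the test task itself (Gradle's test { maxHeapSize = ...; jvmArgs(...) }) rather than on the build JVM.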

How to configure slurm on ubuntu 20.04 with minimum requirements?

I am trying to set up the Slurm configuration file on Ubuntu 20.04. I have tried several things and searched for the errors on other websites (link1, link2, link3) and on the Slurm website, plus a similar question on SO.
Given the following information about my computer, what is the minimum information that must be provided in the slurm.conf file?
The general information for my computer;
RAM: 125.5 GB
CPU: 1-20 (Intel® Xeon(R) CPU E5-2687W v3 @ 3.10GHz × 20)
Graphics: NVIDIA Corporation GP104 [GeForce GTX 1080] / NVIDIA Corporation
OS: Ubuntu 20.04.2 LTS 64 bit
and I want to have 2 nodes with 10 CPUs each and 1 node for the GPU.
Here is what I have tried. After configuring, restarting the controller runs with no error:
> sudo systemctl restart slurmctld
But slurmd fails:
> sudo systemctl restart slurmd
The error is below:
Job for slurmd.service failed because the control process exited with error code.
See "systemctl status slurmd.service" and "journalctl -xe" for details.
Running systemctl status slurmd.service gives:
● slurmd.service - Slurm node daemon
Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2021-06-06 21:47:26 CEST; 1min 14s ago
Docs: man:slurmd(8)
Process: 52710 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE)
Here is my configuration file slurm.conf generated by configurator_easy.html and saved in /etc/slurm-llnl/slurm.conf
# slurm.conf file generated by configurator easy.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
SlurmctldHost=myhostname
#
AuthType=auth/munge
Epilog=/usr/local/slurm/epilog
Prolog=/usr/local/slurm/prolog
FirstJobId=0
InactiveLimit=120
JobCompType=jobcomp/filetxt
JobCompLoc=/var/log/slurm/jobcomp
KillWait=30
MinJobAge=300
MaxJobCount=10000
#PluginDir=/usr/local/lib
ReturnToService=0
SlurmdPort=6818
SlurmctldPort=6817
SlurmdSpoolDir=/var/spool/slurmd.spool
StateSaveLocation=/var/spool/slurm-llnl/slurm.state
SwitchType=switch/none
TmpFS=/tmp
WaitTime=30
SlurmctldPidFile=/run/slurmctld.pid
SlurmdPidFile=/run/slurmd.pid
SlurmUser=slurm
SlurmdUser=root
TaskPlugin=task/affinity
#
# TIMERS
SlurmctldTimeout=120
SlurmdTimeout=300
#
# SCHEDULING
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_Core
#
# LOGGING AND ACCOUNTING
#AccountingStorageType=accounting_storage/none
ClusterName=cluster
#JobAcctGatherFrequency=30
#JobAcctGatherType=jobacct_gather/linux
#SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurm-llnl/SlurmctldLogFile
#SlurmdDebug=info
#SlurmdLogFile=
#
# COMPUTE NODES
NodeName=Linux[1-32] State=UP
NodeName=DEFAULT State=UNKNOWN
PartitionName=Linux[1-32] Default=YES
I have Ubuntu 20.04 running on WSL and was also struggling with setting up Slurm. It looks like everything is running fine now, though I am still a beginner.
I recommend really checking the logs:
cat /var/log/slurmctld.log
cat /var/log/slurmd.log
In my case I had some permission issues and had to make sure the Slurm-related directories were owned by the SlurmUser defined in your config.
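For example, a hedged ownership fix using the state and log paths from the config above (verify the paths on your system first):
# Give the slurm user (SlurmUser=slurm) ownership of slurmctld's state
# and log directories:
sudo chown -R slurm:slurm /var/spool/slurm-llnl /var/log/slurm-llnl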
At first glance, comparing your config with mine, I see lines that could cause the problem: you define NodeName twice, whereas in my config it is set once and takes the value of SlurmctldHost. A sketch of a node/partition section follows below.
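A hedged sketch of what that section could look like for the setup you describe (node names and the Gres line are illustrative; a GPU node also needs a matching gres.conf, and exposing one physical machine as several Slurm nodes requires extra care):
# Two CPU nodes with 10 CPUs each, plus one GPU node (hypothetical names):
NodeName=node[1-2] CPUs=10 State=UNKNOWN
NodeName=gpunode1 CPUs=10 Gres=gpu:1 State=UNKNOWN
# PartitionName is the partition's own name and refers to nodes via Nodes=,
# rather than taking a node range itself:
PartitionName=main Nodes=node[1-2] Default=YES MaxTime=INFINITE State=UP
PartitionName=gpu Nodes=gpunode1 MaxTime=INFINITE State=UP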
Hope something of the above helps. Regards.
Edit: I would also refer to the following post, which could be similar to your issue, if you run your command with sudo.

OpenShift: How to increase the memory limit for sti-build

I'm using (the free trial of) OpenShift Online, and this tier apparently offers 2 GiB of memory for the pods.
I'm trying to install a Node project, and the npm install phase tries to build some native modules and terminates with an OOM error while running a gcc compile.
Looking at the console, we can see that the sti-build container has a limit of 512 MiB:
Containers
sti-build
Image: openshift3/ose-docker-builder
Command: openshift-sti-build --loglevel=0
Mount: buildworkdir → /tmp/build read-write
Mount: docker-socket → /var/run/docker.sock read-write
Mount: crio-socket → /var/run/crio/crio.sock read-write
Mount: builder-dockercfg-kpj4q-push → /var/run/secrets/openshift.io/push read-only
Mount: builder-dockercfg-kpj4q-pull → /var/run/secrets/openshift.io/pull read-only
Mount: builder-token-pl672 → /var/run/secrets/kubernetes.io/serviceaccount read-only
CPU: 30 millicores to 1 core
Memory: 409 MiB to 512 MiB
This seems to come from a LimitRange that the platform has injected, even though the documentation says builds should be unlimited.
Is there any way of overriding this?
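One hedged sketch of an override (not verified against OpenShift Online, whose injected LimitRange may still cap the value): a BuildConfig can declare its own resources under spec.resources, for example via oc patch on a hypothetical BuildConfig named myapp:
# Raise the build pod's memory limit, then retrigger the build:
oc patch bc/myapp --type=merge -p '{"spec":{"resources":{"limits":{"memory":"1Gi"}}}}'
oc start-build myapp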

Why does the boot system load two versions of U-Boot?

I have a gateway device with an MT7620a (MIPS architecture) running OpenWrt. When I connect to the device via UART to flash new firmware, I see something I don't understand: the MCU loads two versions of U-Boot:
U-Boot 1.1.3
Ralink UBoot Version: 4.3.0.0
Here is the boot log after start:
U-Boot 1.1.3 (Apr 27 2015 - 13:54:38)
Board: Ralink APSoC DRAM: 128 MB
relocate_code Pointer at: 87fb8000
enable ephy clock...done. rf reg 29 = 5
SSC disabled.
spi_wait_nsec: 29
spi device id: 1c 70 18 1c 70 (70181c70)
find flash: EN25QH128A
raspi_read: from:30000 len:1000
*** Warning - bad CRC, using default environment
============================================
Ralink UBoot Version: 4.3.0.0
--------------------------------------------
ASIC 7620_MP (Port5<->None)
DRAM component: 1024 Mbits DDR, width 16
DRAM bus: 16 bit
Total memory: 128 MBytes
Flash component: SPI Flash
Date:Apr 27 2015 Time:13:54:38
Of course, I have a few additional questions on this issue:
What is the difference between these U-Boots?
Why does my device need two versions of U-Boot?
Do these U-Boots need separate *.bin images, or do they ship together in one *.bin image? On my device there is only one partition for the U-Boot image and one for its variables:
mtd0: 00030000 00010000 "u-boot"
mtd1: 00010000 00010000 "u-boot-env"
As Alexandre Belloni said, there is probably only one version of U-Boot on your device; it just has two different version identifiers.
The reason for this is that manufacturers often need to modify the U-Boot source code in order to get it to operate on their device, or to add features.
On your device, it looks like the version of U-Boot that Ralink pulled from the official U-Boot source code repository is 1.1.3. Ralink's own internal version number that they use for tracking their internal modifications is 4.3.0.0.
There is probably only one U-Boot, and "Ralink UBoot Version: 4.3.0.0" is Ralink's internal U-Boot version number.
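A hedged way to confirm that both version strings live in a single image, assuming the partition layout above where mtd0 holds the bootloader (run as root on the device; -a is GNU grep's binary-as-text flag, which some BusyBox builds lack, in which case copy the partition off the device first):
# Both identifiers turning up in the one u-boot partition would confirm
# a single binary carrying two version strings:
grep -aEo 'U-Boot 1\.1\.3|Ralink UBoot Version: [0-9.]+' /dev/mtd0ro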