I am trying to emulate a firmware image using QEMU. During boot, I get the following errors:
can't run '/etc/init.d/rcS': No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
.
.
.
This is the content of the inittab file:
# Startup the system
null::sysinit:/etc/init.d/rc.sysinit
# now run any rc scripts
::sysinit:/etc/init.d/rcS
# Put a getty on the serial port
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
# Stuff to do before rebooting
null::shutdown:/bin/umount -a -r
It is able to run rc.sysinit, but not rcS.
I have checked the permissions of rcS. Also, the filesystem is mounted as a read-only cramfs. Could this be causing an issue?
This is the command I am running:
QEMU_AUDIO_DRV=none \qemu-system-arm -m 256M -M versatilepb \
    -kernel ~/linux-2.6.23/arch/arm/boot/zImage \
    -append "console=ttyAMA0,115200 root=/dev/ram rdinit=/sbin/init" \
    -initrd ~/tmpcramfs2 \
    -nographic
These are the boot messages obtained on running the command:
Linux version 2.6.23 (hsailer#SvanteArrhenius) (gcc version 4.0.2) #1 Thu May 27 09:31:10 EDT 2021
CPU: ARM926EJ-S [41069265] revision 5 (ARMv5TEJ), cr=00093177
Machine: ARM-Versatile PB
Memory policy: ECC disabled, Data cache writeback
CPU0: D VIVT write-through cache
CPU0: I cache: 4096 bytes, associativity 4, 32 byte lines, 32 sets
CPU0: D cache: 65536 bytes, associativity 4, 32 byte lines, 512 sets
Built 1 zonelists in Zone order. Total pages: 65024
Kernel command line: console=ttyAMA0,115200 root=/dev/ram rdinit=/sbin/init
PID hash table entries: 1024 (order: 10, 4096 bytes)
Console: colour dummy device 80x30
Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
Memory: 256MB = 256MB total
Memory: 249600KB available (2508K code, 227K data, 100K init)
Mount-cache hash table entries: 512
CPU: Testing write buffer coherency: ok
NET: Registered protocol family 16
NET: Registered protocol family 2
Time: timer3 clocksource has been installed.
IP route cache hash table entries: 2048 (order: 1, 8192 bytes)
TCP established hash table entries: 8192 (order: 4, 65536 bytes)
TCP bind hash table entries: 8192 (order: 3, 32768 bytes)
TCP: Hash tables configured (established 8192 bind 8192)
TCP reno registered
checking if image is initramfs...it isn't (bad gzip magic numbers); looks like an initrd
Freeing initrd memory: 7184K
NetWinder Floating Point Emulator V0.97 (double precision)
Installing knfsd (copyright (C) 1996 okir#monad.swb.de).
JFFS2 version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
JFS: nTxBlock = 2007, nTxLock = 16063
io scheduler noop registered
io scheduler anticipatory registered (default)
io scheduler deadline registered
io scheduler cfq registered
CLCD: Versatile hardware, VGA display
Clock CLCDCLK: setting VCO reg params: S=1 R=99 V=98
Console: switching to colour frame buffer device 80x60
Serial: AMBA PL011 UART driver
dev:f1: ttyAMA0 at MMIO 0x101f1000 (irq = 12) is a AMBA/PL011
console [ttyAMA0] enabled
dev:f2: ttyAMA1 at MMIO 0x101f2000 (irq = 13) is a AMBA/PL011
dev:f3: ttyAMA2 at MMIO 0x101f3000 (irq = 14) is a AMBA/PL011
fpga:09: ttyAMA3 at MMIO 0x10009000 (irq = 38) is a AMBA/PL011
RAMDISK driver initialized: 16 RAM disks of 8192K size 1024 blocksize
smc91x.c: v1.1, sep 22 2004 by Nicolas Pitre <nico#cam.org>
eth0: SMC91C11xFD (rev 1) at d098e000 IRQ 25 [nowait]
eth0: Ethernet addr: 52:54:00:12:34:56
armflash.0: Found 1 x32 devices at 0x0 in 32-bit bank
Intel/Sharp Extended Query Table at 0x0031
Using buffer write method
RedBoot partition parsing not available
afs partition parsing not available
armflash: probe of armflash.0 failed with error -22
mice: PS/2 mouse device common for all mice
input: AT Raw Set 2 keyboard as /class/input/input0
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
VFP support v0.3: implementor 41 architecture 1 part 10 variant 9 rev 0
input: ImExPS/2 Generic Explorer Mouse as /class/input/input1
RAMDISK: cramfs filesystem found at block 0
RAMDISK: Loading 7184KiB [1 disk] into ram disk... done.
VFS: Mounted root (cramfs filesystem) readonly.
Freeing init memory: 100K
can't run '/etc/init.d/rcS': No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
can't open /dev/ttyS0: No such file or directory
.
.
.
The errors about /dev/ttyS0 are because your inittab is specifying the wrong device name for the serial port for the (emulated) hardware you're running on. Your QEMU command specifies the 'versatilepb' board, whose serial devices are PL011s, which appear in /dev/ as /dev/ttyAMA0, /dev/ttyAMA1, etc. (/dev/ttyS0 is what the serial ports on an x86 PC appear as.) You need to fix that line of the inittab to refer to ttyAMA0 instead.
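That line of your inittab would then read (only the device name changes; the baud rate and terminal type stay the same):
ttyAMA0::respawn:/sbin/getty -L ttyAMA0 115200 vt100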
For the rcS error, I would suggest you start by double-checking all the things listed in the answers to this older question.
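In my experience the usual culprits for a "No such file or directory" error on a script that clearly exists are a missing shebang interpreter or a binary built for the wrong architecture. Assuming you still have the unpacked root filesystem around (./rootfs here is just a placeholder for wherever you extracted the cramfs image), a few quick checks look like this:
ls -l ./rootfs/etc/init.d/rcS      # present and executable?
head -n 1 ./rootfs/etc/init.d/rcS  # which interpreter does the shebang ask for?
file ./rootfs/bin/busybox          # built for ARM, not for your host architecture?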
I tried:
git clone git://git.buildroot.net/buildroot
cd buildroot
git checkout 2019.08
make qemu_aarch64_virt_defconfig
make menuconfig
In menuconfig, I set:

Bootloaders
    -> U-Boot configuration (Using an in-tree board defconfig file)
        -> qemu_arm64
Kernel
    -> Install kernel image to /boot in target
and finally:
make BR2_JLEVEL="$(nproc)"
Now, I can boot fine without U-Boot with the command line mentioned at: How to download the Torvalds Linux Kernel master, (re)compile it, and boot it with QEMU?
./output/host/usr/bin/qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic -smp 1 -kernel output/images/Image -append "root=/dev/vda console=ttyAMA0" -netdev user,id=eth0 -device virtio-net-device,netdev=eth0 -drive file=output/images/rootfs.ext4,if=none,format=raw,id=hd0 -device virtio-blk-device,drive=hd0
but that is not using U-Boot.
When I do:
ls -l output/images/
it contains:
-rw-r--r-- 1 ciro ciro 6.5M 2019-09-20_13:36:23 Image
-rw-r--r-- 1 ciro ciro 60M 2019-09-20_13:39:02 rootfs.ext2
lrwxrwxrwx 1 ciro ciro 11 2019-09-20_13:36:25 rootfs.ext4 -> rootfs.ext2
-rw-r--r-- 1 ciro ciro 583K 2019-09-20_13:34:15 u-boot.bin
so there is a U-Boot binary there: u-boot.bin, but how do I use that with QEMU?
As mentioned at: Can ARM qemu system emulator boot from card image without kernel param?, I tried removing -kernel and -append and adding -bios u-boot.bin:
./output/host/usr/bin/qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic -smp 1 -bios output/images/u-boot.bin -netdev user,id=eth0 -device virtio-net-device,netdev=eth0 -drive file=output/images/rootfs.ext4,if=none,format=raw,id=hd0 -device virtio-blk-device,drive=hd0
Now I do get the U-Boot shell, but booting fails and leaves me at the U-Boot prompt:
U-Boot 2019.07 (Sep 20 2019 - 13:34:10 +0100)
DRAM: 128 MiB
Flash: 128 MiB
*** Warning - bad CRC, using default environment
In: pl011#9000000
Out: pl011#9000000
Err: pl011#9000000
Net:
Warning: virtio-net#31 using MAC address from ROM
eth0: virtio-net#31
Hit any key to stop autoboot: 0
starting USB...
No working controllers found
USB is stopped. Please issue 'usb start' first.
scanning bus for devices...
Device 0: unknown device
Device 0: QEMU VirtIO Block Device
Type: Hard Disk
Capacity: 60.0 MB = 0.0 GB (122880 x 512)
... is now current device
** No partition table - virtio 0 **
starting USB...
No working controllers found
BOOTP broadcast 1
DHCP client bound to address 10.0.2.15 (2 ms)
Using virtio-net#31 device
TFTP from server 10.0.2.2; our IP address is 10.0.2.15
Filename 'boot.scr.uimg'.
Load address: 0x40200000
Loading: *
TFTP error: 'Access violation' (2)
Not retrying...
BOOTP broadcast 1
DHCP client bound to address 10.0.2.15 (0 ms)
Using virtio-net#31 device
TFTP from server 10.0.2.2; our IP address is 10.0.2.15
Filename 'boot.scr.uimg'.
Load address: 0x40400000
Loading: *
TFTP error: 'Access violation' (2)
Not retrying...
=>
so it appears that U-Boot cannot handle the VirtIO device? Or, according to Peter, I have to create a partition table. I couldn't find a way to do that automatically in Buildroot, but it can be done manually; here is one approach: https://unix.stackexchange.com/questions/209566/how-to-format-a-partition-inside-of-an-img-file/527132#527132
Another approach would be to keep -kernel -append and let QEMU put the kernel into memory as it does without U-Boot, and then use the booti U-Boot command I found in the help:
booti - boot Linux kernel 'Image' format from memory
so I just need to find out its address. But that is kind of cheating, since I want U-Boot to do the hard work rather than have QEMU do it.
My goal is to reach a good setup to develop U-Boot and QEMU's early boot stuff.
Given that U-Boot correctly detects the virtio block device, I think it is unlikely that it cannot handle it. The error printed is "** No partition table - virtio 0 **", which is correct, because you've set up the block device to contain just rootfs.ext4, which is a filesystem image. That suggests you'll have more luck if you create a disk image with a partition table and write the rootfs to a partition within that disk image.
I followed Peter's advice and put the root filesystem into a partitioned image with the sfdisk-fs-to-img command from https://unix.stackexchange.com/questions/209566/how-to-format-a-partition-inside-of-an-img-file/527132#527132
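For reference, the linked helper boils down to roughly the following (a sketch, not the exact script; the 2048-sector offset and the extra headroom are the conventional defaults):
fs=output/images/rootfs.ext2
img=output/images/disk.img
size=$(stat -c %s "$fs")
# disk image = filesystem size plus ~2 MiB for the partition table and rounding
dd if=/dev/zero of="$img" bs=1M count=$(( size / 1048576 + 2 ))
printf 'start=2048, type=83\n' | sfdisk "$img"
# copy the filesystem into the first partition, which starts at sector 2048
dd if="$fs" of="$img" bs=512 seek=2048 conv=notrunc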
I am now able to read the root filesystem with:
ls virtio 0 /boot
and it contains the Image file.
Now I think there are only some U-Boot specifics left to resolve, which I'm not very familiar with (a rough sketch of the commands follows below):
load /boot/Image into memory with something like load virtio 0 0x100000 /boot/Image. TODO: which address is valid? This arbitrary choice gave ** Reading file would overwrite reserved memory **
find out how to load the DTB and the kernel command line arguments. The DTB would need to be auto-generated by QEMU with qemu-system-aarch64 -machine dumpdtb=dtb.dtb
boot it with something like: booti 0x100000
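Here is a sketch of the command sequence that I believe typically works with U-Boot's qemu_arm64 board on -M virt. The ${kernel_addr_r} and ${fdt_addr} variables are the conventional names in that board's default environment (QEMU places a generated DTB in memory and U-Boot records its address in ${fdt_addr}), so printenv should be checked before relying on them:
=> setenv bootargs 'root=/dev/vda1 console=ttyAMA0'
=> load virtio 0:1 ${kernel_addr_r} /boot/Image
=> booti ${kernel_addr_r} - ${fdt_addr}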
I was hoping Buildroot would have automated things a bit more for me sadface.
I'm running a VPS with these specs:
Ubuntu 12.04.5 LTS (GNU/Linux 3.13.0-32-generic x86_64)
512 MB RAM
1 CPU
20 GB SSD
If you're wondering, it's a DigitalOcean droplet. It's running TS3, LAMP (with WordPress), OpenVPN, byobu, and ownCloud.
Now my problem is MySQL dying on me after 30 minutes to an hour. Usually after a reboot, memory usage is around 54% and MySQL has no problems, but as memory usage climbs towards 80-89% the issues start.
System load: 0.01 Users logged in: 0
Usage of /: 22.1% of 19.56GB IP address for eth0: *****
Memory usage: 90% IP address for as0t0: *****
Swap usage: 0% IP address for as0t1: *****
Processes: 93
As you can see, the memory usage is very high, and I've noticed that the MySQL process dies as memory usage climbs. However, swap usage stays at 0%.
Is there a way to make MySQL and the other processes use swap?
Would letting MySQL use swap stop it from dying once memory usage gets this high?
After the high memory usage, the process dies and I get this error:
[2002] SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
The processor load rarely goes above 25%. The server also runs on a fast SSD, so using swap wouldn't be a problem, and I don't have that much traffic.
Fixed it by creating a 256 MB swap file. MySQL no longer dies when it runs out of free memory.
After following this tutorial by Etel Sverdlov:
https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04
I was able to make a swap file. I'll copy the tutorial below in case it gets deleted.
How To Add Swap on Ubuntu 12.04
About Linux Swapping
Linux RAM is composed of chunks of memory called pages. To free up pages of RAM, a “Linux swap” can occur and a page of memory is copied from RAM to preconfigured space on the hard disk. Swapping allows a system to harness more memory than is physically available.
However, swapping does have disadvantages. Because hard disks are much slower than RAM, virtual private server performance may slow down considerably. Additionally, swap thrashing can begin to take place if the system gets swamped by too many pages being swapped in and out.
Check for Swap Space
Before we proceed to set up a swap file, we need to check if any swap files have been enabled on the VPS by looking at the summary of swap usage.
sudo swapon -s
An empty list will confirm that you have no swap files enabled:
Filename Type Size Used Priority
Check the File System
After we know that we do not have a swap file enabled on the virtual server, we can check how much space we have on the server with the df command. The swap file will take up 256 MB; since we are only using about 8% of /dev/sda, we can proceed.
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda 20907056 1437188 18421292 8% /
udev 121588 4 121584 1% /dev
tmpfs 49752 208 49544 1% /run
none 5120 0 5120 0% /run/lock
none 124372 0 124372 0% /run/shm
Create and Enable the Swap File
Now it’s time to create the swap file itself using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
Subsequently we are going to prepare the swap file by creating a linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
The swap will remain active on the virtual private server only until the machine reboots. You can make it permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 makes the swap act as an emergency buffer, helping to prevent out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
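To confirm the new value is in effect (these are standard sysctl/procfs reads, nothing specific to this tutorial):
cat /proc/sys/vm/swappiness
sysctl vm.swappiness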
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
All credit to: Etel Sverdlov at: https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04
I am new to Amazon EC2. Recently I found that MySQL is not working properly and keeps crashing.
I think the issue is low memory or disk space, or something similar.
Here are some of the outputs:
free -k
total used free shared buffers cached
Mem: 1020536 254744 765792 0 54028 83748
-/+ buffers/cache: 116968 903568
Swap: 82628 0 82628
swapon -s
Filename Type Size Used Priority
/swapfile file 82628 0 -1
I am not able to start MySQL.
Can someone please suggest a solution?
Check the InnoDB buffer pool size (innodb_buffer_pool_size) in the my.cnf file. If it is larger than your system's memory, you have found your problem. Try reducing it and then start MySQL again.
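As a rough illustration (the config file location, service name, and the 128M figure are just examples and differ between distros; pick a value that leaves room for the rest of the system):
grep -ri innodb_buffer_pool_size /etc/my.cnf /etc/mysql/ 2>/dev/null
# then, under the [mysqld] section, set something like:
#   innodb_buffer_pool_size = 128M
# and restart the server:
sudo service mysql restart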
I have been running a highly concurrent application on my HP ProLiant servers. The application is a file system indexer I coded in Erlang. It spawns a process per folder it finds on the file system and records all file paths in a fragmented Mnesia database. (The database consists of disc_only_copies tables; a screenshot of its file system can be viewed here.)
The snippet of code that does the intensive job of walking the file system is shown below:
%%% -------- COPYRIGHT NOTICE --------------------------------------------------------------------
%% @author Muzaaya Joshua, <joshmuza@gmail.com> [http://joshanderlang.blogspot.com]
%% @version 1.0 free software, but modification prohibited
%% @copyright Muzaaya Joshua (file_scavenger-1.0) 2011 - 2012. All rights reserved
%% @reference OpenSource Erlang WebSite
%%
%%% ---------------- EDOC INTRODUCTION TO THE MODULE ----------------------------------------------
%% @doc This module provides the low level APIs for reading, writing,
%% searching, joining and moving within directories. The module implementation
%% took place on @date at @time.
%% @end
-module(file_scavenger_utilities).
%%% ------- EXPORTS -------------------------------------------------------------------------------
-compile(export_all).
%%% ------- INCLUDES -----------------------------------------------------------------------------
%%% -------- MACROS ------------------------------------------------------------------------------
-define(IS_FOLDER(X),filelib:is_dir(X)).
-define(IS_FILE(X),filelib:is_file(X)).
-define(FAILED_TO_LIST_DIR(X),error_logger:error_report(["*** File Scavenger Utilities Error ***** ",{error,"Failed to List Directory"},{directory,X}])).
-define(NOT_DIR(X),error_logger:error_report(["*** File Scavenger Utilities Error ***** ",{error,"Not a Directory"},{alleged,X}])).
-define(NOT_FILE(X),error_logger:error_report(["*** File Scavenger Utilities Error ***** ",{error,"Not a File"},{alleged,X}])).
%%%--------- TYPES -------------------------------------------------------------------------------
%% @type dir() = string().
%% Must use forward slashes, not back slashes, and must not end with a slash
%% after the exact directory, e.g. this is wrong: "C:/Program Files/SomeDirectory/"
%% but this is right: "C:/Program Files/SomeDirectory"
%% @type file_path() = string().
%% Must use forward slashes, not back slashes.
%% Should include the file extension as well, e.g. "C:/Program Files/SomeFile.pdf"
%% -----------------------------------------------------------------------------------------------
%% @doc Enters a directory and executes the fun ForEachFileFound/2 for each file it finds.
%% If it finds a directory, it executes the fun ForEachDirFound/2.
%% Both funs above take the parent Dir as the first argument. Then, it will spawn an
%% erlang process that will spread the found directory too, in the same way as the parent
%% directory was spread. The process of spreading goes on and on until every file (whether
%% it is in a nested directory) is registered by its full path.
%% @end
%%
%% @spec spread_directory(dir(),dir(),function(),function())-> ok.
spread_directory(Dir,Top_Directory,ForEachFileFound,ForEachDirFound) when is_function(ForEachFileFound),is_function(ForEachDirFound) ->
case ?IS_FOLDER(Dir) of
false -> ?NOT_DIR(Dir);
true ->
F = fun(X)->
FileOrDir = filename:absname_join(Dir,X),
case ?IS_FOLDER(FileOrDir) of
true ->
(catch ForEachDirFound(Top_Directory,FileOrDir)),
spawn(fun() -> ?MODULE:spread_directory(FileOrDir,Top_Directory,ForEachFileFound,ForEachDirFound) end);
false ->
case ?IS_FILE(FileOrDir) of
false -> {error,not_a_file,FileOrDir};
true -> (catch ForEachFileFound(Top_Directory,FileOrDir))
end
end
end,
case file:list_dir(Dir) of
{error,_} -> ?FAILED_TO_LIST_DIR(Dir);
{ok,List} -> lists:foreach(F,List)
end
end.
The function spread_directory/4 is generic in that it takes two funs. One fun, ForEachFileFound/2, takes the top-most directory together with the file it finds and can do anything with it; the other fun, ForEachDirFound/2, takes the top-most directory together with the folder it finds and can use it in any way it wants.
The start script I use for this application makes sure that Erlang will be able to spawn as many processes as possible. Once a process finishes indexing a folder, it exits.
#!/usr/bin/env sh
echo "Starting File Scavenger System. Layer 1 on the P2P File Sharing System....."
erl \
-name file_scavenger@127.0.0.1 \
+P 13421779 \
-pa ./ebin ./lib/*/ebin ./include \
-mnesia dir '"./database"' \
-mnesia dump_log_write_threshold 10000 \
-eval "application:load(file_scavenger)" \
-eval "application:start(file_scavenger)"
There is a gen_server that interfaces the intensive module with the database in which I record all the paths. A snippet of where it starts the spread_directory work is shown below:
handle_cast(index_dirs,#scavenger{directory_paths = Dirs} = State)->
{File,Folder} = case {State#scavenger.verbose,State#scavenger.verbose_to} of
{true,tty} ->
{
fun(TopDir,Fl)->
io:format(" File: ~p~n",[Fl]),
file_scavenger_database:insert_file(filename:basename(Fl),file,Fl,TopDir,filename:extension(Fl))
end,
fun(TopDir,Fd) ->
io:format(" Folder: ~p~n",[Fd]),
file_scavenger_database:insert_file(Fd,folder,Fd,TopDir,undefined)
end
};
{true,SomeFile}->
{
fun(TopDir,Fl)->
os:cmd("echo File: " ++ Fl ++ " >> " ++ SomeFile),
file_scavenger_database:insert_file(filename:basename(Fl),file,Fl,TopDir,filename:extension(Fl))
end,
fun(TopDir,Fd)->
os:cmd("echo Folder: " ++ Fd ++ " >> " ++ SomeFile),
file_scavenger_database:insert_file(Fd,folder,Fd,TopDir,undefined)
end
}
end,
Main = fun(Dir) ->
error_logger:info_msg("*** File scavenger Server indexing directory: ~p~n",[Dir]),
spawn(fun() -> file_scavenger_utilities:spread_directory(Dir,Dir,File,Folder) end)
end,
lists:foreach(Main,Dirs),
{noreply,State};
handle_cast(stop, State) -> {stop, normal, State}.
More source details can be found in the whole application.
The application's entire source and build can be found here: File_scavenger-1.0.zip.
Now, I start the application on the server (an HP ProLiant G6 with two Intel processors, each with 4 cores at 2.4 GHz and 8 MB of cache, 20 GB of RAM, and 1.5 TB of disk space; two of these machines are at our disposal, the system database should be replicated across them, and each runs 64-bit Solaris 10), whose terminal then looks like this:
bash-3.00# sh file_scavenger.sh
Starting File Scavenger System. Layer 1 on the P2P File Sharing System.....
Erlang R14B03 (erts-5.8.4) [source] [smp:8:8] [rq:8] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.8.4 (abort with ^G)
(file_scavenger@127.0.0.1)1>
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Starting File Scavenger Database......
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Database Successfully Started....
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Starting File Scavenger Database......
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Database Successfully Started....
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
File Scavenger Server starting with default verbose settings....
(file_scavenger@127.0.0.1)1> file_scavenger_server:index_dirs().
The server starts running and prints to the terminal every file and folder it finds. The machine is equipped with plenty of RAM (20 GB) and swap space (16 GB). However, it ran for about 18 hours and finally the Erlang virtual machine reported this:
File: "/proc/4324/root/opt/csw/gcc4/share/locale/ja/LC_MESSAGES/gcc.mo"
Folder: "/proc/4324/root/opt/csw/gcc4/share/locale/da"
Folder: "/proc/4324/root/opt/csw/gcc4/share/locale/es/LC_MESSAGES"
File: "/proc/4324/root/proc/4984/root/.thumbnails/normal/dc259e3897e8af4b379c6d956b6c1393.png"
File: "/proc/4324/root/proc/4984/root/.thumbnails/fail/gnome-thumbnail-factory/223c19786421b7101d14075bdec46f61.png"
File: "/proc/4324/root/opt/csw/gcc4/libexec/gcc/i386-pc-solaris2.10/4.5.1/install-tools/mkheaders"
File: "/proc/4324/root/opt/csw/gcc4/libexec/gcc/i386-pc-solaris2.10/4.5.1/cc1plus"
File: "/proc/4324/root/opt/csw/gcc4/lib/libsupc++.la"
Crash dump was written to: erl_crash.dump
eheap_alloc: Cannot allocate 153052320 bytes of memory (of type "heap").
Abort - core dumped
bash-3.00#
Question 1. With such a powerful server, why would the operating system fail to provide such memory to the application (it was the only application running)?
Question 2. The Erlang emulator I start is instructed to be able to spawn as many processes as it may need, hence the value +P 13421779. Is the Erlang VM failing to access this memory, or failing to allocate it to its processes?
Question 3. Solaris sees only one process, epmd, perhaps containing and starting thousands of micro threads. What configuration can I make to Solaris so that it never stops my application, however memory hungry it may be? The available swap space is 16 GB and RAM is 20 GB; honestly, there must be something wrong.
Question 4. What configuration can I make to the Erlang emulator to avoid these heap memory crash dumps, especially when all the memory it may need is available on the server? How will I run more memory-consuming apps on this server if Erlang still fails to allocate such memory to a simple file system indexer (well, a heavily concurrent one)?
Finally, any other tweaks I could make to avoid heap memory problems on such capable hardware are welcome. Thanks in advance.
I haven't had time to look at the source, but here are some comments:
Question 1. With such a powerful server, why would the operating
system fail to provide such memory to the application (it was the only
application running)?
Because the Erlang VM tried to consume more than the available free memory.
Question 2. The Erlang emulator I start is instructed to be able to spawn as
many processes as it may need, hence the value +P 13421779. Is the Erlang VM
failing to access this memory, or failing to allocate it to its processes?
No. If you had run out of processes, the Erlang VM would have said so (and the VM would still be up and running):
=ERROR REPORT==== 18-Aug-2011::10:04:04 ===
Error in process <0.31775.138> with exit value: {system_limit,[{erlang,spawn_link, [erlang,apply,[#Fun<shell.3.130303173>,[]]]},{erlang,spawn_link,1},{shell,get_command,5}, {shell,server_loop,7}]}
Question 3. Solaris sees only one process, epmd, perhaps containing and
starting thousands of micro threads. What configuration can I make to Solaris
so that it never stops my application, however memory hungry it may be? The
available swap space is 16 GB and RAM is 20 GB; honestly, there must be
something wrong.
epmd is the Erlang port mapper daemon. It is responsible for managing distributed Erlang and has nothing to do with your individual Erlang application. The processes you should look for will most likely be named beam.smp; these show the OS memory consumption of the Erlang VM, etc.
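For example, something along these lines (ps, prstat, and swap -s are standard on Solaris 10; the grep pattern and the <pid> placeholder are just illustrative) shows what the VM process itself is consuming:
ps -ef | grep beam.smp    # find the VM's process id
prstat -p <pid> 5         # watch its SIZE/RSS over time
swap -s                   # how much swap is actually in use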
Question 4. What configuration can I make to the Erlang emulator to avoid
these heap memory crash dumps, especially when all the memory it may need is
available on the server? How will I run more memory-consuming apps on this
server if Erlang still fails to allocate such memory to a simple file system
indexer (well, a heavily concurrent one)?
The Erlang VM should be able to use all of the available memory on your machine. However, it depends on how your application is written. There can be many reasons for memory leaks (a few ways to check are sketched after this list):
The atom table filling up (you create too many unique atoms)
ETS or Mnesia tables not being garbage collected (you do not delete old, unused elements)
Not enough memory for processes (you spawn too many processes)
Too many binaries being created (you might keep unused references to old binaries)
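If it helps, the running node can be inspected from a second shell to narrow down which of these is growing; the node name comes from your start script above, and erlang:memory/0, erlang:system_info/1, and mnesia:info/0 are standard introspection calls:
erl -name debug@127.0.0.1 -remsh file_scavenger@127.0.0.1
# then, at the remote shell prompt:
#   erlang:memory().
#   erlang:system_info(process_count).
#   mnesia:info().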