Get mountpoint path from device name - partitioning

Suppose I have a path to a Blu-ray device, say /dev/sr0.
How can I get the mountpoint from it? I understand it should be somewhere under /media/user/.
Can a single Blu-ray disc carry more than one filesystem, and therefore have multiple mountpoints?
A terminal command or Python code is needed; it must work at least on Linux (and possibly on Windows).

Here is what I just found:
$ udevadm info -q all -n /dev/sr0
yields, among other data fields:
S: disk/by-id/*dvd drive model and serial number*
S: disk/by-label/*mountpoint name*
S: disk/by-uuid/*disk serial number*
E: DEVNAME=/dev/sr0
E: ID_MODEL=*dvd drive model*
E: ID_SERIAL_SHORT=*dvd drive serial number*
E: ID_FS_UUID=*disk serial number*
E: ID_FS_VOLUME_SET_ID=*full label, not a short*
E: ID_FS_LABEL=*disk label*
E: ID_FS_LOGICAL_VOLUME_ID=*disk label*
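Note that udevadm prints device properties, not mount state; on Linux the mount table is what maps a device to its mountpoint, so the util-linux tools (or reading /proc/mounts from Python) answer this directly. A minimal sketch, with illustrative output:
$ lsblk -no MOUNTPOINT /dev/sr0
/media/user/DISC_LABEL
$ findmnt -n -o TARGET /dev/sr0
/media/user/DISC_LABEL
If a disc exposed more than one mountable filesystem, each mounted one would show up as its own line.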

Related

How to fix "qemu-system-mipsel: The following two regions overlap (in the memory address space)"?

I would like to run a Linux root filesystem for MIPSEL on qemu-system-mipsel.
The root filesystem was extracted from the firmware using "firmware-analysis-toolkit" (firmadyne).
However, after building the root filesystem as required, I encounter an error when I run it.
The script for running qemu is:
qemu-system-mipsel -M malta -kernel vmlinuz.elf \
-drive if=ide,format=raw,file=squashfs-factory.raw \
-append "root=/dev/sda1 console=ttyS0 nandsim.parts=64,64,64,64,64,64,64,64,64,64 \
rdinit=/firmadyne/preInit.sh rw debug ignore_loglevel print-fatal-signals=1 user_debug=31 firmadyn" \
-nographic
If I use the vmlinux.elf provided by the firmadyne toolkit (kernel 2.6.39.4+), everything works.
If I use a vmlinux.elf (kernel 5.4) provided by openwrt-imagebuilder (or compiled by me), I get this error:
The following two regions overlap (in the memory address space):
vmlinux-5.4.111.mipsel ELF program header segment 0 (addresses 0x0000000000001000 - 0x000000000084b910)
prom (addresses 0x0000000000002000 - 0x0000000000003040)
I've tried everything. How can it be fixed?
QEMU is complaining that the ELF file you've asked it to load overlaps the blob of 'prom' data that QEMU uses to pass information to the kernel, such as the memory size and the kernel command line. That PROM data always starts at address 0x2000, so you need to build your kernel so that it doesn't put anything at that address.
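A quick way to check a candidate kernel before booting it is to inspect its program headers; none of the LOAD segments may overlap the PROM region at 0x2000-0x3040. A hedged sketch (the file name is taken from the error above):
$ readelf -l vmlinux-5.4.111.mipsel | grep LOAD
For reference, mainline malta kernels are normally linked at 0x80100000, well clear of this region.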

Function uploadData not found in contract SmartContract - Hyperledger Fabric

I'm altering the fabcar sample from Hyperledger Fabric and wrote some functions. When I executed them, I got the error below (the command comes from a shell script):
$ peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile $ORDERER_CA -C $CHANNEL_NAME -n cloud $PEER_CONN_PARMS --isInit -c '{"function":"uploadData","Args":["DATA1","ID12345","/home/samplefile___pdf","3"]}'
Error: endorsement failure during invoke. response: status:500 message:"error in simulation: transaction returned with failure: Function uploadData not found in contract SmartContract"
Below is the chaincode (abridged):
type SmartContract struct {
    contractapi.Contract
}

type Data struct {
    Owner           string `json:"owner"`
    File            string `json:"file"`
    FileChunkNumber string `json:"filechunknumber"`
    SHA256          string `json:"sha256"`
}

// Uploads new data to the world state with given details
func (s *SmartContract) uploadData(ctx contractapi.TransactionContextInterface, args []string) error {
    /*...*/
}
I don't see what I need to change.
I assume that you have updated the chaincode version number or chaincode name during installation and instantiation (Fabric 1.4.6).
Have you tried the pre-existing functions of the chaincode? Do they work with your invoke command?
If not, please try this invoke command:
peer chaincode invoke -o orderer.example.com:7050 -C $CHANNEL_NAME -n cloud $PEER_CONN_PARMS -c '{"Args":["uploadData","DATA1","ID12345","/home/samplefile___pdf","3"]}'
I had faced a similar problem before; there can be two possible errors:
Fabric might be using the old chaincode docker image; try deleting that image and re-creating it with the updated chaincode (see the sketch after this list).
There might be some problem in the body of your uploadData function (a syntactical or logical error) which you'll have to debug.
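For the first point, a rough sketch of that cleanup (the dev-peer name prefix is the Fabric convention for chaincode containers and images; adjust to what docker ps and docker images actually show):
docker rm -f $(docker ps -aq --filter "name=dev-peer")
docker rmi -f $(docker images -q "dev-peer*")
For the second point, note also that with the Go contract API only exported (capitalized) methods are visible via reflection, so an unexported method like uploadData cannot be found by contractapi at all.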
Hope that helps!

How to send binary flashing file to embedded system with only serial console?

I have an embedded Linux system that boots from a ramdisk, so at run time it has no persistent storage available (it does have flash to store the kernel and ramdisk).
The only connectivity is an RS-232 serial login console, so I am limited to what its built-in busybox provides. I want to retrieve the ramdisk, modify it, and rewrite it. The kernel does not have flash filesystem support built in. The ramdisk partition size is about 10 MB; when all files in the user directory are deleted, about 14 MB of ramdisk space is free.
The dd command is available, so I can copy the ramdisk's flash partition into a file on the ramdisk and write a file from the ramdisk back to flash; flashcp is also available. Presumably something like the sketch below.
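A hedged sketch of that read/modify/write cycle (the mtd partition number is an assumption):
dd if=/dev/mtd2 of=/tmp/ramdisk.img
# ... unpack, modify and repack /tmp/ramdisk.img ...
flashcp -v /tmp/ramdisk.img /dev/mtd2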
So my problem is now how to receive and send binary files through the RS-232 serial console?
I researched the following and none of it was useful to me:
Linux command to send binary file to serial port with HW flow control? on stackoverflow
Binary data over serial terminal on stackoverflow
Transferring files using serial console on k.japko.eu
File transfer over a serial line on superuser.com
How to get file to a host when all you have is a serial console? on stackexchange
Mostly because x/y/zmodem are not available in the busybox.
Any idea? Thanks!
Per the request, here's what I should have included in the first place.
Available u-boot commands:
U-Boot >?
? - alias for 'help'
askenv - get environment variables from stdin
base - print or set address offset
bdinfo - print Board Info structure
boot - boot default, i.e., run 'bootcmd'
bootd - boot default, i.e., run 'bootcmd'
bootm - boot application image from memory
cmp - memory compare
coninfo - print console devices and information
cp - memory copy
crc32 - checksum calculation
crc32_chk_uimage- checksum calculation of an image for u-boot
echo - echo args to console
editenv - edit environment variable
env - environment handling commands
exit - exit script
false - do nothing, unsuccessfully
fatinfo - print information about filesystem
fatload - load binary file from a dos filesystem
fatls - list files in a directory (default /)
fatwrite- write file into a dos filesystem
go - start application at address 'addr'
gpio - input/set/clear/toggle gpio pins
help - print command description/usage
i2c - I2C sub-system
iminfo - print header information for application image
imxtract- extract a part of a multi-image
itest - return true/false on integer compare
loadb - load binary file over serial line (kermit mode)
loads - load S-Record file over serial line
loady - load binary file over serial line (ymodem mode)
loop - infinite loop on address range
md - memory display
mdc - memory display cyclic
mm - memory modify (auto-incrementing address)
mw - memory write (fill)
mwc - memory write cyclic
nm - memory modify (constant address)
printenv- print environment variables
reset - Perform RESET of the CPU
run - run commands in an environment variable
saveenv - save environment variables to persistent storage
saves - save S-Record file over serial line
setenv - set environment variables
sf - SPI flash sub-system
showvar - print local hushshell variables
sleep - delay execution for some time
source - run script from memory
sspi - SPI utility command
test - minimal test like /bin/sh
true - do nothing, successfully
usb - USB sub-system
usbboot - boot from USB device
version - print monitor, compiler and linker version
U-Boot >
Available busybox commands:
BusyBox v1.13.2 (2015-03-16 10:50:56 EDT) multi-call binary
Copyright (C) 1998-2008 Erik Andersen, Rob Landley, Denys Vlasenko
and others. Licensed under GPLv2.
See source distribution for full notice.
Usage: busybox [function] [arguments]...
or: function [arguments]...
BusyBox is a multi-call binary that combines many common Unix
utilities into a single executable. Most people will create a
link to busybox for each function they wish to use and BusyBox
will act like whatever it was invoked as!
Currently defined functions:
[, [[, addgroup, adduser, ar, ash, awk, basename, blkid,
bunzip2, bzcat, cat, chattr, chgrp, chmod, chown, chpasswd,
chroot, chvt, clear, cmp, cp, cpio, cryptpw, cut, date,
dc, dd, deallocvt, delgroup, deluser, df, dhcprelay, diff,
dirname, dmesg, du, dumpkmap, dumpleases, echo, egrep, env,
expr, false, fbset, fbsplash, fdisk, fgrep, find, free,
freeramdisk, fsck, fsck.minix, fuser, getopt, getty, grep,
gunzip, gzip, halt, head, hexdump, hostname, httpd, hwclock,
id, ifconfig, ifdown, ifup, inetd, init, insmod, ip, kill,
killall, klogd, last, less, linuxrc, ln, loadfont, loadkmap,
logger, login, logname, logread, losetup, ls, lsmod, makedevs,
md5sum, mdev, microcom, mkdir, mkfifo, mkfs.minix, mknod,
mkswap, mktemp, modprobe, more, mount, mv, nc, netstat,
nice, nohup, nslookup, od, openvt, passwd, patch, pidof,
ping, ping6, pivot_root, poweroff, printf, ps, pwd, rdate,
rdev, readahead, readlink, readprofile, realpath, reboot,
renice, reset, rm, rmdir, rmmod, route, rtcwake, run-parts,
sed, seq, setconsole, setfont, sh, showkey, sleep, sort,
start-stop-daemon, strings, stty, su, sulogin, swapoff,
swapon, switch_root, sync, sysctl, syslogd, tail, tar, tcpsvd,
tee, telnet, telnetd, test, tftp, tftpd, time, top, touch,
tr, traceroute, true, tty, udhcpc, udhcpd, udpsvd, umount,
uname, uniq, unzip, uptime, usleep, vconfig, vi, vlock,
watch, wc, wget, which, who, whoami, xargs, yes, zcat
In U-Boot you can use loady/loadx to get a file from the PC via UART. I usually use Teraterm to send the file.
The process is:
run loady in U-Boot
send the file from Teraterm
the file is transferred to your device's memory at 0x01000000.
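If your host is a Linux box rather than Windows with Teraterm, a rough equivalent (assuming the console is on /dev/ttyUSB0 and the lrzsz package provides sb, the ymodem sender):
picocom -b 115200 --send-cmd "sb -vv" /dev/ttyUSB0
# in U-Boot run: loady 0x01000000
# then press C-a C-s in picocom and enter the file name to start the ymodem transfer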
Independently, I found a way to upload binary files through the Linux console, and I'll document the steps here in case others find them useful, since I had a hard time finding this information on the net.
Here's the theory: change the console mode to raw so the binary traffic isn't interpreted as console commands (e.g. Ctrl-C). Turn off echo so it doesn't add extra serial traffic. Run tar to accept input from stdin. Since Ctrl-C won't work and tar won't know when to terminate, use a background task to kill the login shell so you can log in again to do your stuff.
Steps:
Create a script to run in the background. Change the myvar variable so it kills the login shell after the transfer is complete; currently 120 iterations of a 10-second sleep give 1200 seconds, sufficient for a 10 MB file. In addition, edit the 808 to match your login shell's PID.
Create the bg file:
myvar=120
while [ $myvar -gt 0 ]
do
    myvar=$(( $myvar - 1 ))
    echo -e " $myvar \n"
    ls -l
    sleep 10
done
kill -9 808
Launch the script in the background:
in console type:
source ./bg &
Use stty to change the console to raw mode with no echo
in console type:
stty raw -echo
Start tar to untar from stdin. Note: I have to terminate the command with Ctrl-J, since Enter no longer works after the stty command.
in console, type the following and end it with Ctrl-J, not Enter:
tar zx -f - 1> 1.log 2> 2.log
Start Teraterm and send the binary file
Wait for completion and the new login prompt
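For completeness, the Teraterm step can also be done from a Linux host; a minimal sketch (assuming the host's port is /dev/ttyUSB0 and the archive is files.tar.gz):
stty -F /dev/ttyUSB0 115200 raw -echo
cat files.tar.gz > /dev/ttyUSB0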
I forgot I asked this question. In the meantime I figured out how to make an ssh connection, which in turn allows many more things to be done easily. Of course it requires sshd in addition to nc and stty, so you are out of luck if these are not available on your embedded Linux. I have tried it several times and it seems to work well, allowing multiple ssh sessions to be established and mc to transfer files.
You will need two shell sessions on the host computer, one to loop the serial port to socket, and the other for the ssh, and more if you want to establish more ssh sessions.
First you need to set up the serial port. The '--noreset' option for picocom does this:
sudo picocom --noreset -b 115200 -e b /dev/ttyUSB3
Quit picocom once this is done (^B^X to exit).
Next we need to verify that the line endings are not translated or else ssh won't work. In the first shell run:
cat /dev/ttyUSB3 | hexdump -C
In the second shell run:
echo "echo -e \"LFLF\\n\\nCRCR\\r\\rEND\"" > /dev/ttyUSB3
You may see that \n (0x0A) is translated to \r\n (0x0D 0x0A).
Use stty to set raw mode without echo and you should see no more translation:
echo "stty raw -echo" > /dev/ttyUSB3
echo "echo -e \"LFLF\\n\\nCRCR\\r\\rEND\"" > /dev/ttyUSB3
Finally in the first shell run nc to funnel local traffic between the serial port and ssh socket:
cat /dev/ttyUSB3 | nc -l -p 2222 > /dev/ttyUSB3
and funnel remote serial traffic to sshd:
echo "while true ; do nc localhost 22 ; done" > /dev/ttyUSB3
and connect ssh with port forwarding:
ssh -vvv root@localhost -p 2222 -L 0.0.0.0:22022:localhost:22
You can make more ssh connections simultaneously:
ssh -vvv root@localhost -p 22022
If you use mc, you can connect so you can easily browse the remote file system and copy files:
sh://root@localhost:22022
Last words: nc strips the TCP headers, so the ssh packets are not checksummed and are not retried; if there is a data error, the connection will break. If you remember your login shell's PID, you can kill it and log in again; otherwise you have to reboot. The '-vvv' flag for ssh is for debugging.

"gclient sync" fails due to SSL3 certificate verify failed

I have been trying to fetch the Chromium source code. However, I have been stuck on gclient sync for 2 days.
gclient sync fails every time due to an SSL certificate verification error.
The log is below:
rna#rna-P580:~/workspace/project$ gclient sync
Syncing projects: 98% (83/84), done.
________ running 'download_from_google_storage --no_resume --platform=linux* --no_auth --bucket chromium-gn -s src/buildtools/linux32/gn.sha1' in '/home/rna/workspace/project'
/home/rna/workspace/project/depot_tools/third_party/boto/pyami/config.py:75: UserWarning: Unable to load AWS_CREDENTIAL_FILE ()
warnings.warn('Unable to load AWS_CREDENTIAL_FILE (%s)' % full_path)
Failure: [Errno 1] _ssl.c:509: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed.
Error: Command download_from_google_storage --no_resume --platform=linux* --no_auth --bucket chromium-gn -s src/buildtools/linux32/gn.sha1 returned non-zero exit status 1 in /home/rna/workspace/project
I am guessing this happens because I am behind a company firewall.
So I requested that http & https be opened, but there is still no luck.
Can someone help me out, please? I'm on Ubuntu 13.10.
I ran into this problem as well; what fixed it for me was running sudo apt-get update and sudo apt-get upgrade.
I modified the DEPS file under the /trunk directory, commenting out some code like this:
#{
# # Download test resources, i.e. video and audio files from Google Storage.
# "pattern": "\\.sha1",
# "action": ["download_from_google_storage",
# "--directory",
# "--recursive",
# "--num_threads=10",
# "--no_auth",
# "--bucket", "chromium-webrtc-resources",
# Var("root_dir") + "/resources"],
# },
Then I re-ran gclient runhooks and got a correct result.
FROM:
https://code.google.com/p/webrtc/issues/detail?id=3314
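A quick way to test the company-proxy theory from the question (the endpoint is an assumption; download_from_google_storage talks to Google Storage):
openssl s_client -connect storage.googleapis.com:443 -showcerts < /dev/null | head -20
# if the certificate issuer shown is your corporate proxy rather than a public CA,
# the proxy is re-signing TLS and its CA certificate must be added to the trust store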

Erlang and its consumption of Heap Memory

I have been running a highly concurrent application on my HP ProLiant servers. The application is a file system indexer I coded in Erlang. It spawns a process per folder it finds on the file system and records all file paths in a fragmented Mnesia database. (The database consists of disc_only_copies tables; a screenshot of its file system can be viewed here.)
The snippet of code that does the intensive job of walking the file system is shown below:
%%% -------- COPYRIGHT NOTICE --------------------------------------------------------------------
%% @author Muzaaya Joshua, <joshmuza@gmail.com> [http://joshanderlang.blogspot.com]
%% @version 1.0 free software, but modification prohibited
%% @copyright Muzaaya Joshua (file_scavenger-1.0) 2011 - 2012 . All rights reserved
%% @reference OpenSource Erlang WebSite
%%
%%% ---------------- EDOC INTRODUCTION TO THE MODULE ----------------------------------------------
%% @doc This module provides the low level APIs for reading, writing,
%% searching, joining and moving within directories. The module implementation
%% took place on @date at @time.
%% @end
-module(file_scavenger_utilities).
%%% ------- EXPORTS -------------------------------------------------------------------------------
-compile(export_all).
%%% ------- INCLUDES -----------------------------------------------------------------------------
%%% -------- MACROS ------------------------------------------------------------------------------
-define(IS_FOLDER(X),filelib:is_dir(X)).
-define(IS_FILE(X),filelib:is_file(X)).
-define(FAILED_TO_LIST_DIR(X),error_logger:error_report(["*** File Scavenger Utilities Error ***** ",{error,"Failed to List Directory"},{directory,X}])).
-define(NOT_DIR(X),error_logger:error_report(["*** File Scavenger Utilities Error ***** ",{error,"Not a Directory"},{alleged,X}])).
-define(NOT_FILE(X),error_logger:error_report(["*** File Scavenger Utilities Error ***** ",{error,"Not a File"},{alleged,X}])).
%%%--------- TYPES -------------------------------------------------------------------------------
%% @type dir() = string().
%% Must contain forward slashes, not back slashes. Must not end with a slash
%% after the exact directory, e.g. this is wrong: "C:/Program Files/SomeDirectory/"
%% but this is right: "C:/Program Files/SomeDirectory"
%% @type file_path() = string().
%% Must contain forward slashes, not back slashes.
%% Should include the file extension as well, e.g. "C:/Program Files/SomeFile.pdf"
%% -----------------------------------------------------------------------------------------------
%% @doc Enters a directory and executes the fun ForEachFileFound/2 for each file it finds.
%% If it finds a directory, it executes the fun ForEachDirFound/2.
%% Both funs above take the parent Dir as the first argument. Then, it will spawn an
%% erlang process that will spread the found directory in the same way as the parent
%% directory was spread. The process of spreading goes on until every file (whether it
%% is in a nested directory) is registered by its full path.
%% @end
%%
%% @spec spread_directory(dir(),dir(),function(),function())-> ok.
spread_directory(Dir,Top_Directory,ForEachFileFound,ForEachDirFound)
        when is_function(ForEachFileFound), is_function(ForEachDirFound) ->
    case ?IS_FOLDER(Dir) of
        false -> ?NOT_DIR(Dir);
        true ->
            F = fun(X) ->
                    FileOrDir = filename:absname_join(Dir,X),
                    case ?IS_FOLDER(FileOrDir) of
                        true ->
                            (catch ForEachDirFound(Top_Directory,FileOrDir)),
                            spawn(fun() -> ?MODULE:spread_directory(FileOrDir,Top_Directory,ForEachFileFound,ForEachDirFound) end);
                        false ->
                            case ?IS_FILE(FileOrDir) of
                                false -> {error,not_a_file,FileOrDir};
                                true -> (catch ForEachFileFound(Top_Directory,FileOrDir))
                            end
                    end
                end,
            case file:list_dir(Dir) of
                {error,_} -> ?FAILED_TO_LIST_DIR(Dir);
                {ok,List} -> lists:foreach(F,List)
            end
    end.
The function spread_directory/4 is generic in that it takes two funs. One fun, ForEachFileFound/2, takes the top-most directory together with the found file and does anything with it; the other fun, ForEachDirFound/2, takes the top-most directory together with the folder it finds and uses it in any way it wants.
The start script I use for this application makes sure that Erlang can spawn as many processes as it needs. Once a process finishes indexing a folder, it exits.
#!/usr/bin/env sh
echo "Starting File Scavenger System. Layer 1 on the P2P File Sharing System....."
erl \
-name file_scavenger@127.0.0.1 \
+P 13421779 \
-pa ./ebin ./lib/*/ebin ./include \
-mnesia dir '"./database"' \
-mnesia dump_log_write_threshold 10000 \
-eval "application:load(file_scavenger)" \
-eval "application:start(file_scavenger)"
There is a gen_server which interfaces the intensive module with the database in which I record all paths. A snippet of where it starts the spread_directory work is shown below:
handle_cast(index_dirs, #scavenger{directory_paths = Dirs} = State) ->
    {File,Folder} = case {State#scavenger.verbose, State#scavenger.verbose_to} of
        {true,tty} ->
            {
                fun(TopDir,Fl) ->
                    io:format(" File: ~p~n",[Fl]),
                    file_scavenger_database:insert_file(filename:basename(Fl),file,Fl,TopDir,filename:extension(Fl))
                end,
                fun(TopDir,Fd) ->
                    io:format(" Folder: ~p~n",[Fd]),
                    file_scavenger_database:insert_file(Fd,folder,Fd,TopDir,undefined)
                end
            };
        {true,SomeFile} ->
            {
                fun(TopDir,Fl) ->
                    os:cmd("echo File: " ++ Fl ++ " >> " ++ SomeFile),
                    file_scavenger_database:insert_file(filename:basename(Fl),file,Fl,TopDir,filename:extension(Fl))
                end,
                fun(TopDir,Fd) ->
                    os:cmd("echo Folder: " ++ Fd ++ " >> " ++ SomeFile),
                    file_scavenger_database:insert_file(Fd,folder,Fd,TopDir,undefined)
                end
            }
    end,
    Main = fun(Dir) ->
        error_logger:info_msg("*** File scavenger Server indexing directory: ~p~n",[Dir]),
        spawn(fun() -> file_scavenger_utilities:spread_directory(Dir,Dir,File,Folder) end)
    end,
    lists:foreach(Main,Dirs),
    {noreply,State};
handle_cast(stop, State) -> {stop, normal, State}.
More source details can be found in the whole application.
The application's entire source and build can be found here: File_scavenger-1.0.zip.
Now, I start the application on the server (an HP ProLiant G6: two Intel processors, each with 4 cores at 2.4 GHz and an 8 MB cache; 20 GB of RAM; 1.5 TB of disk space. Two of these machines are at our disposal, the system database is to be replicated across the two, and each server runs Solaris 10, 64-bit). Its terminal now looks like this:
bash-3.00# sh file_scavenger.sh
Starting File Scavenger System. Layer 1 on the P2P File Sharing System.....
Erlang R14B03 (erts-5.8.4) [source] [smp:8:8] [rq:8] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.8.4 (abort with ^G)
(file_scavenger@127.0.0.1)1>
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Starting File Scavenger Database......
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Database Successfully Started....
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Starting File Scavenger Database......
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
Database Successfully Started....
=INFO REPORT==== 18-Aug-2011::09:36:04 ===
File Scavenger Server starting with default verbose settings....
(file_scavenger@127.0.0.1)1> file_scavenger_server:index_dirs().
The server starts running and prints to the terminal every file and folder it finds. The server has plenty of RAM (20 GB) and swap space (16 GB). However, after about 18 hours of running, the Erlang virtual machine finally reported this:
File: "/proc/4324/root/opt/csw/gcc4/share/locale/ja/LC_MESSAGES/gcc.mo"
Folder: "/proc/4324/root/opt/csw/gcc4/share/locale/da"
Folder: "/proc/4324/root/opt/csw/gcc4/share/locale/es/LC_MESSAGES"
File: "/proc/4324/root/proc/4984/root/.thumbnails/normal/dc259e3897e8af4b379c6d956b6c1393.png"
File: "/proc/4324/root/proc/4984/root/.thumbnails/fail/gnome-thumbnail-factory/223c19786421b7101d14075bdec46f61.png"
File: "/proc/4324/root/opt/csw/gcc4/libexec/gcc/i386-pc-solaris2.10/4.5.1/install-tools/mkheaders"
File: "/proc/4324/root/opt/csw/gcc4/libexec/gcc/i386-pc-solaris2.10/4.5.1/cc1plus"
File: "/proc/4324/root/opt/csw/gcc4/lib/libsupc++.la"
Crash dump was written to: erl_crash.dump
eheap_alloc: Cannot allocate 153052320 bytes of memory (of type "heap").
Abort - core dumped
bash-3.00#
Question 1. With such a powerful server, why would the operating system fail to provide such memory to the application (it was the only application running)?
Question 2. The Erlang emulator I start is instructed to be able to spawn as many processes as it may need, via the value +P 13421779. Is the Erlang VM failing to access this memory, or failing to allocate it to its processes?
Question 3. Solaris sees one process, epmd, perhaps containing and starting thousands of micro threads. What configuration can I make to Solaris so that it never stops my application, however "memory hungry" it may be? The available swap space is 16 GB and the RAM 20 GB; honestly, there must be something wrong.
Question 4. What configuration can I make to the Erlang emulator to avoid these heap memory crash dumps, especially when all the memory it may need is available on the server? How will I run more memory-consuming apps on this server if Erlang still fails to allocate such memory to a simple file system indexer (well, a heavily concurrent one)?
Finally, any other tweaks I could make to avoid heap memory problems on such capable hardware are welcome. Thanks in advance.
I haven't had time to look at the source, but here are some comments:
Question 1. With such a powerful server, why would the operating
system fail to provide such memory to the application (it was the only
application running)?
Because the Erlang VM tried to consume more than the available free memory.
Question 2. The Erlang emulator I start is instructed to be able to
spawn as many processes as it may need, via the value +P 13421779.
Is the Erlang VM failing to access this memory, or failing to
allocate it to its processes?
No. If you had run out of processes, the Erlang VM would have said so (and the VM would still be up and running):
=ERROR REPORT==== 18-Aug-2011::10:04:04 ===
Error in process <0.31775.138> with exit value: {system_limit,[{erlang,spawn_link, [erlang,apply,[#Fun<shell.3.130303173>,[]]]},{erlang,spawn_link,1},{shell,get_command,5}, {shell,server_loop,7}]}
Question 3. Solaris sees one process, epmd, perhaps containing
and starting thousands of micro threads. What configuration can I
make to Solaris so that it never stops my application, however
"memory hungry" it may be? The available swap space is 16 GB and
the RAM 20 GB; honestly, there must be something wrong.
epmd is the Erlang port mapping daemon. It is responsible for managing distributed Erlang and has nothing to do with your individual Erlang application. The process you should look for will most likely be named beam.smp; that is the one that shows the OS memory consumption of the Erlang VM.
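A minimal sketch for watching that process on Solaris (assuming pgrep finds exactly one beam.smp):
prstat -p `pgrep beam.smp` 5
# or a one-shot snapshot of virtual and resident set size:
ps -o pid,vsz,rss,args -p `pgrep beam.smp`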
Question 4. What configuration can I make to the Erlang emulator to
avoid these heap memory crash dumps, especially when all the memory
it may need is available on the server? How will I run more
memory-consuming apps on this server if Erlang still fails to allocate
such memory to a simple file system indexer (well, a heavily concurrent one)?
The Erlang VM should be able to use all of the available memory on your machine. However, it depends on how your application is written. There can be many reasons for memory leaks:
The atom table filling up (you create too many unique atoms)
ETS or Mnesia tables not being garbage collected (you do not delete old, unused elements)
Not enough memory for processes (you spawn too many processes)
Too many binaries being created (you might keep unused references to old binaries)
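To see which of these is growing, you can attach a remote shell to the running node and query the VM. A hedged sketch (the cookie value is an assumption; omit -setcookie if both nodes share the default ~/.erlang.cookie, and the node name comes from your start script):
erl -name probe@127.0.0.1 -setcookie mycookie -remsh file_scavenger@127.0.0.1
# then, in the remote Erlang shell, evaluate:
#   erlang:memory().              (breakdown: processes, atom, binary, ets, ...)
#   length(erlang:processes()).   (number of live processes)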