I'm using siege to locate some problem pages on our new sitemap, and I'm having trouble getting it to stop after it runs through the urls.txt file. I have tried using reps=once on the command line as well as in the .siegerc config file. I find that I have to use the config file, as I want the output written verbosely to a log file so that I can see page load times, 302 and 404 errors, etc., and import them into Excel. However, no matter what I try, siege won't stop when it completes the urls.txt file; it just runs through it again.
I have configured 40 concurrent users, the time and reps variables are commented out in the config, and the urls.txt file is set in the config. The command I run is:
sudo siege --reps=once -v > outputfile.csv
I have tried setting reps in the config as well, with no luck. Any ideas where I'm going wrong?
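For reference, the relevant part of my siegerc looks roughly like this (a sketch only: the directive names come from the stock siegerc template, and the paths are placeholders rather than my exact file):
verbose = true
logging = true
logfile = ${HOME}/var/siege.log
file = ${HOME}/urls.txt
concurrent = 40
# time =
# reps =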
I ran into similar problems, and after trying multiple options I got it to work with:
# siege -c 10 -b -r 10 -f urls.txt
where urls.txt is a simple list of URLs like
http://ip-address/url1.html
http://ip-address/url2.html
....
....
The logs were written to the file specified in the siegerc file, ${HOME}/var/siege.log (the columns are date & time, transactions, elapsed time, data transferred, response time, transaction rate, throughput, concurrency, OK, failed):
2016-08-05 17:52:59, 100, 0.88, 4, 0.09, 113.64, 4.55, 9.67, 100, 0
2016-08-05 17:53:00, 100, 0.91, 4, 0.09, 109.89, 4.40, 9.76, 100, 0
2016-08-05 17:53:01, 100, 0.90, 4, 0.09, 111.11, 4.44, 9.78, 100, 0
2016-08-05 17:53:02, 100, 0.89, 4, 0.09, 112.36, 4.49, 9.64, 100, 0
2016-08-05 17:53:03, 100, 0.86, 4, 0.08, 116.28, 4.65, 9.84, 100, 0
2016-08-05 17:53:04, 100, 0.89, 4, 0.09, 112.36, 4.49, 9.80, 100, 0
2016-08-05 17:53:05, 100, 0.88, 4, 0.09, 113.64, 4.55, 9.83, 100, 0
2016-08-05 17:53:06, 100, 0.88, 4, 0.09, 113.64, 4.55, 9.89, 100, 0
2016-08-05 17:53:07, 100, 0.87, 4, 0.09, 114.94, 4.60, 9.79, 100, 0
2016-08-05 17:53:07, 100, 0.88, 4, 0.09, 113.64, 4.55, 9.85, 100, 0
I also observed that the logfile option is either buggy or very strict.
'-l filename.log' does not work.
$ siege -c 10 -b -r 10 -f urls.txt -l ./siege.log
** SIEGE 2.70
** Preparing 10 concurrent users for battle.
The server is now under siege...
done.
Transactions: 0 hits
Availability: 0.00 %
Elapsed time: 0.08 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 0.00 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 100
Longest transaction: 0.00
Shortest transaction: 0.00
FILE: /home/xxxx/var/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
But --log=filename.log works, e.g.:
$ siege -c 10 -b -r 10 -f urls.txt --log=./siege.log
** SIEGE 2.70
** Preparing 10 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200 0.08 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.08 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.09 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.09 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.10 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.10 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.10 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.10 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.10 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.10 secs: 45807 bytes ==> /8af6cacb-50ed-40b6-995f-49480f9f74fa.html
HTTP/1.1 200 0.10 secs: 55917 bytes ==> /create_and_delete_networks.html
HTTP/1.1 200 0.10 secs: 55917 bytes ==> /create_and_delete_networks.html
HTTP/1.1 200 0.10 secs: 55917 bytes ==> /create_and_delete_networks.html
HTTP/1.1 200 0.10 secs: 55917 bytes ==> /create_and_delete_networks.html
HTTP/1.1 200 0.09 secs: 55917 bytes ==> /create_and_delete_networks.html
done.
Transactions: 100 hits
Availability: 100.00 %
Elapsed time: 0.89 secs
Data transferred: 4.60 MB
Response time: 0.09 secs
Transaction rate: 112.36 trans/sec
Throughput: 5.16 MB/sec
Concurrency: 9.74
Successful transactions: 100
Failed transactions: 0
Longest transaction: 0.15
Shortest transaction: 0.05
FILE: ./siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
Hope this helps.
I'm using ffmpeg to capture the activity of Chrome (driven via ChromeDriver) and record it into an mp4 file. However, the memory consumed by ffmpeg quickly blows up; after a minute or so my 8 GB of memory are saturated and I have to reboot the PC. My setup:
Ubuntu 16.04
ALSA loopback (loaded via modprobe snd-aloop)
ffmpeg version 4.2.2-1ubuntu1~16.04.york0
This is the command line to ffmpeg:
ffmpeg -y -v info -f x11grab -draw_mouse 0 -r 30 -s 1280x720 -thread_queue_size 4096
-i :0.0+0,0 -f alsa -thread_queue_size 4096 -i plug:cloop -acodec aac -strict -2 -ar 44100
-c:v libx264 -preset veryfast -profile:v main -level 3.1 -pix_fmt yuv420p -r 30
-crf 25 -g 60 -tune zerolatency -f mp4 file.mp4
If I remove all the sound input (-i plug:cloop -acodec aac -strict -2 -ar 44100), then the memory usage is fine and stable, but the file generated can't be played with VLC or a media player.
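One variant I have not fully tested (an assumption on my part: the mp4 muxer only writes its moov index on a clean shutdown, so a fragmented MP4 should stay playable even if the recorder is killed) would be to drop the audio input and add -movflags:
ffmpeg -y -f x11grab -draw_mouse 0 -r 30 -s 1280x720 -thread_queue_size 4096 -i :0.0+0,0 \
  -c:v libx264 -preset veryfast -profile:v main -level 3.1 -pix_fmt yuv420p -r 30 -crf 25 -g 60 \
  -movflags +frag_keyframe+empty_moov -f mp4 file.mp4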
The logs from ffmpeg look normal to me:
root$ ffmpeg -rtbufsize 15M -y -v info -f x11grab -draw_mouse 0 -r 30 -s 1280x720 -thread_queue_size 4096 -i :0.0+0,0 -f alsa -thread_queue_size 4096 -i hw:0 -acodec ac3_fixed -strict -2 -ar 44100 -c:v libx264 -preset veryfast -profile:v main -level 3.1 -pix_fmt yuv420p -r 30 -crf 25 -g 60 -tune zerolatency -f mp4 /tmp/recordings/stuff.mp4
ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, x11grab, from ':0.0+0,0':
Duration: N/A, start: 1587042601.295126, bitrate: N/A
Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1280x720, 30 fps, 30 tbr, 1000k tbn, 30 tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, alsa, from 'hw:0':
Duration: N/A, start: 1587042601.304179, bitrate: 1536 kb/s
Stream #1:0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
[libx264 @ 0x928ee0] using cpu capabilities: MMX2 SSE Cache64
[libx264 @ 0x928ee0] profile Main, level 3.1
Output #0, mpegts, to '/tmp/recordings/stuff.ts':
Metadata:
encoder : Lavf56.40.101
Stream #0:0: Video: h264 (libx264), yuv420p, 1280x720, q=-1--1, 30 fps, 90k tbn, 30 tbc
Metadata:
encoder : Lavc56.60.100 libx264
Stream #0:1: Audio: ac3 (ac3_fixed), 44100 Hz, stereo, s16p, 192 kb/s
Metadata:
encoder : Lavc56.60.100 ac3_fixed
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Stream #1:0 -> #0:1 (pcm_s16le (native) -> ac3 (ac3_fixed))
Press [q] to stop, [?] for help
frame= 7 fps=0.0 q=21.0 size= 5kB time=00:00:00.23 bitrate= 180.5kbits/s
frame= 15 fps= 15 q=21.0 size= 15kB time=00:00:00.50 bitrate= 240.6kbits/s
frame= 21 fps= 14 q=21.0 size= 21kB time=00:00:00.70 bitrate= 249.2kbits/s
frame= 28 fps= 14 q=21.0 size= 28kB time=00:00:00.93 bitrate= 246.5kbits/s
frame= 35 fps= 13 q=21.0 size= 35kB time=00:00:01.16 bitrate= 244.9kbits/s
frame= 42 fps= 13 q=21.0 size= 44kB time=00:00:01.40 bitrate= 260.0kbits/s
frame= 49 fps= 13 q=21.0 size= 51kB time=00:00:01.63 bitrate= 256.9kbits/s
frame= 56 fps= 13 q=21.0 size= 58kB time=00:00:01.86 bitrate= 254.6kbits/s
frame= 63 fps= 13 q=22.0 size= 66kB time=00:00:02.10 bitrate= 255.7kbits/s
frame= 70 fps= 13 q=21.0 size= 75kB time=00:00:02.33 bitrate= 263.0kbits/s
frame= 77 fps= 13 q=21.0 size= 82kB time=00:00:02.56 bitrate= 261.3kbits/s
frame= 79 fps= 13 q=21.0 Lsize= 85kB time=00:00:02.63 bitrate= 264.4kbits/s
video:8kB audio:61kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 22.604090%
[libx264 @ 0x928ee0] frame I:2 Avg QP: 5.10 size: 760
[libx264 @ 0x928ee0] frame P:77 Avg QP: 6.78 size: 88
[libx264 @ 0x928ee0] mb I I16..4: 99.9% 0.0% 0.1%
[libx264 @ 0x928ee0] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:100.0%
[libx264 @ 0x928ee0] coded y,uvDC,uvAC intra: 0.0% 0.1% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x928ee0] i16 v,h,dc,p: 91% 0% 9% 0%
[libx264 @ 0x928ee0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 0% 0% 100% 0% 0% 0% 0% 0% 0%
[libx264 @ 0x928ee0] i8c dc,h,v,p: 98% 2% 0% 0%
[libx264 @ 0x928ee0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x928ee0] kb/s:25.23
Exiting normally, received signal 2.
This is my .asoundrc file.
# playback PCM device: using loopback subdevice 0,0
pcm.amix {
type dmix
ipc_key 219345
slave.pcm "hw:Loopback,0,0"
}
# capture PCM device: using loopback subdevice 0,1
pcm.asnoop {
type dsnoop
ipc_key 219346
slave.pcm "hw:Loopback,0,1"
}
# duplex device combining our PCM devices defined above
pcm.aduplex {
type asym
playback.pcm "amix"
capture.pcm "asnoop"
}
# ------------------------------------------------------
# for jack alsa_in and alsa_out: looped-back signal at other ends
pcm.ploop {
type plug
slave.pcm "hw:Loopback,1,1"
}
pcm.cloop {
type dsnoop
ipc_key 219348
slave.pcm "hw:Loopback,1,0"
}
# ------------------------------------------------------
# default device
pcm.!default {
type plug
slave.pcm "aduplex"
}
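As a quick sanity check of the loopback chain defined above, a few seconds can be captured straight from the cloop device (assuming alsa-utils is installed; the device name matches the pcm.cloop definition):
arecord -D cloop -f S16_LE -r 48000 -c 2 -d 5 /tmp/loop-test.wav
aplay /tmp/loop-test.wav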
I'm not sure how to debug this kind of issue. Any idea why the memory blows up so fast like that?
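To at least quantify how fast the memory grows while the capture runs, the process's resident memory can be watched with plain ps (a generic sketch, nothing ffmpeg-specific):
watch -n 5 'ps -o pid,rss,vsz,etime,args -C ffmpeg'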
I'm working on CentOS 7 and have this JSON request:
curl --output 'json.data.json' -vvv -x '' -X POST -H "Content-Type: application/json" -H "Connection: keep-alive" -d '{"jsonrpc":"2.0","method":"item.get","params":{"output": ["name","lastvalue","lastclock","hostid"],"groupids": ["5"],"filter":{"name":["LDT Security Flag"]},"sortfield": "name" },"auth":"c1cxxxxxxxxx","id":1}' $CURLADDR
Now, in the middle of the results I see:
734247","lastvalue":"0"},{"itemid":"192890","name":"LDT Fl* transfer closed with outstanding read data remaining
100 86797 0 86569 100 228 75094 197 0:00:01 0:00:01 --:--:-- 75146
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining
And after that, some more JSON results.
But I noticed that no matter how many times I run it and redirect the output to a .json file, the file is always the same size: 88K, as if there were some kind of limit on the download size. What can I do?
UPDATE:
So I added it, and now the error curl: (18) transfer closed with outstanding read data remaining is gone, but it still cuts off in the middle with * Closing connection 0 and the file is 88K.
Also, the connection data:
> POST /api_jsonrpc.php HTTP/1.0
> User-Agent: curl/7.29.0
> Host: XXXXXX:1080
> Content-Type: application/json;charset=utf-8
> Accept: application/json, text/plain, */*
> Content-Length: 224
>
} [data not shown]
* upload completely sent off: 224 out of 224 bytes
100 224 0 0 100 224 0 223 0:00:01 0:00:01 --:--:-- 223
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 25 Dec 2018 15:21:07 GMT
< Content-Type: application/json
< Connection: close
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Headers: Content-Type
< Access-Control-Allow-Methods: POST
< Access-Control-Max-Age: 1000
<
{ [data not shown]
100 86840 0 86616 100 224 77583 200 0:00:01 0:00:01 --:--:-- 77612
* Closing connection 0
This seems to be a server-side issue.
You might try to work around it by forcing an HTTP/1.0 connection (to avoid the chunked transfer encoding, which might be causing this problem) with the --http1.0 option.
CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.
Source: https://curl.haxx.se/libcurl/c/libcurl-errors.html
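For example (a sketch that reuses the original request with the JSON body abbreviated; only the --http1.0 flag is new):
curl --http1.0 --output 'json.data.json' -vvv -x '' -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"item.get", ... ,"auth":"c1cxxxxxxxxx","id":1}' "$CURLADDR"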
I have a netcat connection open between a server and a client, and I am trying to craft a packet with hping to print text on the client.
My issue is that I can craft a packet very similar to what is needed, but it is missing the TCP options that are present in the packets sent from the server to the client via netcat.
Here is my hping command:
hping3 -A -y -M 717766814 -L 3830111434 -N 37033 -w 227 -b -p 55526 -s 5555 -P 192.168.0.116 -c 1 -d 8 -E task4.txt
Here is the packet I craft:
11:16:45.116157 00:a0:98:64:9f:40 > 00:a0:98:36:c8:07, ethertype IPv4 (0x0800), length 62: (tos 0x0, ttl 64, id 37033, offset 0, flags [DF], proto TCP (6), length 48)
192.168.0.216.5555 > 192.168.0.116.55526: Flags [P.], cksum 0x5600 (incorrect -> 0x0355), seq 717766814:717766822, ack 3830111434, win 227, length 8
0x0000: 4500 0030 90a9 4000 4006 2782 c0a8 00d8 E..0..@.@.'.....
0x0010: c0a8 0074 15b3 d8e6 2ac8 409e e44a dcca ...t....*.@..J..
0x0020: 5018 00e3 5600 0000 4243 4445 4647 410a P...V...BCDEFGA.
The actual packet I need to craft:
11:16:52.352624 00:a0:98:64:9f:40 > 00:a0:98:36:c8:07, ethertype IPv4 (0x0800), length 74: (tos 0x0, ttl 64, id 38493, offset 0, flags [DF], proto TCP (6), length 60)
192.168.0.216.5555 > 192.168.0.116.55526: Flags [P.], cksum 0x82cb (incorrect -> 0x0ce8), seq 717766814:717766822, ack 3830111434, win 227, options [nop,nop,TS val 1099353487 ecr 208117467], length 8
0x0000: 4500 003c 965d 4000 4006 21c2 c0a8 00d8 E..<.]@.@.!.....
0x0010: c0a8 0074 15b3 d8e6 2ac8 409e e44a dcca ...t....*.@..J..
0x0020: 8018 00e3 82cb 0000 0101 080a 4186 cd8f ............A...
0x0030: 0c67 9edb 4142 4344 4546 470a .g..ABCDEFG.
The packets are identical other than the missing options and the checksum.
How can I add the options to my crafted packet, or is there another method to get the text to appear on the client using hping?
As you saw, hping3 does not provide a way to set TCP options out of the box.
However, the good news is that the TCP options sit right next to the TCP payload in the packet, so you can prepend your data with the TCP options.
Instead of just the data, put the TCP options plus the data in the file you provide to hping3:
echo "0101080a4186cd8f0c679edb414243444546470a" | python3 -c "import sys, binascii; sys.stdout.buffer.write(binascii.unhexlify(input().strip()))" > /tmp/task4.txt
Send it with hping3. You will need to change the data size to 20 (12 bytes of options plus 8 bytes of payload) and set the data offset to 8, since the 20-byte base TCP header plus 12 bytes of options is 32 bytes, i.e. 8 32-bit words (the default data offset is 5 words); this makes the receiver treat the prepended bytes as options rather than payload:
-O --tcpoff
Set fake tcp data offset. Normal data offset is tcphdrlen / 4.
hping3 -A -y -M 717766814 -L 3830111434 -N 37033 -w 227 -b -p 55526 -s 5555 -P 192.168.134.161 -c 1 -d 20 -O 8 -E task4.txt
Resulting crafted packet:
08:27:07.956095 IP (tos 0x0, ttl 64, id 37033, offset 0, flags [DF], proto TCP (6), length 60)
192.168.134.142.5555 > 192.168.134.161.55526: Flags [P.], cksum 0x5451 (incorrect -> 0x0104), seq 0:8, ack 1, win 227, options [nop,nop,TS val 1099353487 ecr 208117467], length 8
0x0000: 4500 003c 90a9 4000 4006 1b92 c0a8 868e E..<..@.@.......
0x0010: c0a8 86a1 15b3 d8e6 2ac8 409e e44a dcca ........*.@..J..
0x0020: 8018 00e3 5451 0000 0101 080a 4186 cd8f ....TQ......A...
0x0030: 0c67 9edb 4142 4344 4546 470a .g..ABCDEFG.
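If hping3 ever gets too limiting, the same segment can also be built with Scapy; this is a hedged sketch rather than something verified against your exact hosts, so adjust addresses and ports as needed:
from scapy.all import IP, TCP, Raw, send

# PSH+ACK segment with the same seq/ack/window and the nop,nop,timestamp options
pkt = (
    IP(src="192.168.0.216", dst="192.168.0.116", id=37033, flags="DF")
    / TCP(sport=5555, dport=55526, flags="PA",
          seq=717766814, ack=3830111434, window=227,
          options=[("NOP", None), ("NOP", None),
                   ("Timestamp", (1099353487, 208117467))])
    / Raw(load=b"ABCDEFG\n")
)
send(pkt)  # needs root; Scapy fills in lengths and checksums automatically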
I created an API server with Flask and run it with gunicorn and eventlet. I noticed long response times from the Flask server when calling the APIs, so I profiled my client twice: one run from my laptop, one run directly on the Flask API server.
From my laptop:
302556 function calls (295712 primitive calls) in 5.594 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
72 4.370 0.061 4.370 0.061 {method 'poll' of 'select.epoll' objects}
16 0.374 0.023 0.374 0.023 {method 'connect' of '_socket.socket' objects}
16 0.213 0.013 0.213 0.013 {method 'load_verify_locations' of '_ssl._SSLContext' objects}
16 0.053 0.003 0.058 0.004 httplib.py:798(close)
52 0.034 0.001 0.034 0.001 {method 'do_handshake' of '_ssl._SSLSocket' objects}
On server:
231449 function calls (225936 primitive calls) in 3.320 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
12 2.132 0.178 2.132 0.178 {built-in method read}
13 0.286 0.022 0.286 0.022 {method 'poll' of 'select.epoll' objects}
12 0.119 0.010 0.119 0.010 {_ssl.sslwrap}
12 0.095 0.008 0.095 0.008 {built-in method do_handshake}
855/222 0.043 0.000 0.116 0.001 sre_parse.py:379(_parse)
1758/218 0.029 0.000 0.090 0.000 sre_compile.py:32(_compile)
1013 0.027 0.000 0.041 0.000 sre_compile.py:207(_optimize_charset)
12429 0.023 0.000 0.029 0.000 sre_parse.py:182(__next)
So, based on the profiling results, my client spends a long time waiting for the server's response.
I serve the Flask app with gunicorn and eventlet, with the following configuration:
import multiprocessing
bind = ['0.0.0.0:8000']
backlog = 2048
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
user = 'www-data'
group = 'www-data'
loglevel = 'info'
My client is a custom HTTP client that uses eventlet to patch httplib2 and creates a pool of connections to the server.
I'm stuck here with the troubleshooting; all server stats looked normal. How can I find the bottleneck of my API server?
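One thing I am considering in order to narrow this down is per-request profiling on the server side with Werkzeug's ProfilerMiddleware; this is only a sketch, it assumes a reasonably recent Werkzeug (where the middleware lives under werkzeug.middleware.profiler), and 'myapi' stands in for my actual module:
# wrap the Flask WSGI app so every request writes a .prof file
from werkzeug.middleware.profiler import ProfilerMiddleware
from myapi import app  # hypothetical module that holds the Flask app

app.wsgi_app = ProfilerMiddleware(app.wsgi_app, profile_dir="/tmp/profiles")
gunicorn is then pointed at the wrapped app as usual, and the per-request .prof files can be inspected with pstats.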
top - 12:24:22 up 1 day, 18:37, 2 users, load average: 1.19, 1.77, 1.59
Tasks: 166 total, 4 running, 162 sleeping, 0 stopped, 0 zombie
Cpu(s): 20.1%us, 5.8%sy, 0.0%ni, 62.4%id, 10.8%wa, 0.0%hi, 1.0%si, 0.0%st
Mem: 987780k total, 979052k used, 8728k free, 17240k buffers
Swap: 2104432k total, 106760k used, 1997672k free, 174100k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4116 mysql 15 0 356m 105m 4176 S 12.0 11.0 139:58.69 mysqld
5722 apache 15 0 160m 17m 4224 S 10.0 1.8 0:00.78 httpd
5741 apache 15 0 161m 17m 4220 S 9.3 1.8 0:00.37 httpd
5840 apache 15 0 161m 17m 4148 S 8.3 1.8 0:00.52 httpd
5846 apache 15 0 161m 17m 4132 S 6.3 1.8 0:00.47 httpd
5744 apache 15 0 162m 18m 4224 S 2.0 1.9 0:00.37 httpd
5725 apache 15 0 161m 17m 4424 S 1.3 1.8 0:00.34 httpd
5755 apache 15 0 105m 14m 4248 R 1.3 1.5 0:00.17 httpd
5564 apache 15 0 163m 19m 4360 S 1.0 2.0 0:00.65 httpd
5322 apache 16 0 162m 19m 4456 S 0.7 2.0 0:02.26 httpd
5586 apache 15 0 161m 18m 4468 S 0.7 1.9 0:01.77 httpd
5852 apache 16 0 99.9m 11m 3424 S 0.7 1.2 0:00.02 httpd
5121 root 18 0 98.3m 10m 4320 S 0.3 1.1 0:00.07 httpd
5723 apache 15 0 161m 17m 4240 S 0.3 1.8 0:00.31 httpd
5833 apeadm 15 0 12740 1128 808 R 0.3 0.1 0:00.03 top
5834 apache 15 0 160m 16m 4172 S 0.3 1.7 0:00.20 httpd
5836 apache 15 0 98.5m 9388 2912 S 0.3 1.0 0:00.01 httpd
1 root 15 0 10348 592 560 S 0.0 0.1 0:00.72 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/0
Server spec:
CentOS, CPU: E5200, RAM: 1 GB
Software: Zen Cart x 3, Piwik x 1
The server is always going down. How should I tune Apache and MySQL?
Thanks a lot.
httpd.conf
<IfModule mpm_prefork_module>
StartServers 8
MinSpareServers 8
MaxSpareServers 15
ServerLimit 450
MaxClients 450
MaxRequestsPerChild 10000
</IfModule>
<IfModule mpm_worker_module>
StartServers 2
ServerLimit 450
MaxClients 450
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 10000
</IfModule>
<IfModule mpm_beos_module>
StartThreads 10
MaxClients 50
MaxRequestsPerThread 10000
</IfModule>
<IfModule mpm_netware_module>
ThreadStackSize 65536
StartThreads 250
MinSpareThreads 25
MaxSpareThreads 250
MaxThreads 1000
MaxRequestsPerChild 10000
MaxMemFree 100
</IfModule>
<IfModule mpm_mpmt_os2_module>
StartServers 2
MinSpareThreads 5
MaxSpareThreads 10
MaxRequestsPerChild 10000
</IfModule>
my.cnf
[mysqld]
set-variable = query_cache_limit=1M
set-variable = query_cache_size=16M
set-variable = query_cache_type=1
set-variable = max_connections=400
set-variable = interactive_timeout=100
set-variable = wait_timeout=100
set-variable = connect_timeout=100
set-variable = thread_cache_size=16
#
# Set key_buffer to 5 - 50% of your RAM depending on how much
# you use MyISAM tables, but keep key_buffer_size + InnoDB
# buffer pool size < 80% of your RAM
set-variable = key_buffer=32M
set-variable = join_buffer=1M
set-variable = max_allowed_packet=8M
set-variable = table_cache=1024
set-variable = record_buffer=1M
set-variable = sort_buffer_size=2M
set-variable = read_buffer_size=2M
set-variable = max_connect_errors=10
set-variable = myisam_sort_buffer_size=16M
#Useful for SMP
set-variable = thread_concurrency=8
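(For reference, the set-variable = form has long been deprecated; the same settings in modern key = value syntax would look roughly like this sketch, noting that a few of the old aliases such as key_buffer and table_cache were later renamed:)
[mysqld]
query_cache_limit       = 1M
query_cache_size        = 16M
query_cache_type        = 1
max_connections         = 400
thread_cache_size       = 16
key_buffer_size         = 32M
max_allowed_packet      = 8M
table_open_cache        = 1024
sort_buffer_size        = 2M
read_buffer_size        = 2M
myisam_sort_buffer_size = 16M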
"The server is always going down. How should I tune Apache and MySQL?"
Eh? If it's always down, how did you get those figures? Tuning isn't going to fix stability issues.
A full answer to how to tune Apache and MySQL would fill a large book. Here are some links to books on MySQL: http://forums.mysql.com/read.php?24,92131,92131
And for Apache, try searching on Amazon.
Also, you've not provided any information about what's running in between Apache and MySQL - understanding what's going on there is pretty critical too - as is understanding how to improve HTTP and browser performance (caching, keepalives, compression, JavaScript, ...).
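To give a concrete flavour of that last point, here is a minimal sketch of the kind of Apache directives involved; it assumes mod_deflate and mod_expires are available, and the values are illustrative only:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/css "access plus 1 week"
</IfModule>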