I'm currently developing a practice application in node.js. The application consists of a JSON REST web service that exposes two operations:
Insert log (a PUT request to /log, with the message to log)
Last 100 logs (a GET request to /log, that returns the latest 100 logs)
The current stack consists of a node.js server holding the application logic and a MongoDB database handling persistence. The JSON REST web service is exposed with the node-restify module.
I'm currently running stress tests with ApacheBench (5000 requests at a concurrency level of 10) and get the following results:
1) Insert log
Requests per second: 754.80 [#/sec] (mean)
2) Last 100 logs
Requests per second: 110.37 [#/sec] (mean)
I'm surprised by the performance difference, since the query I'm executing uses an index. Interestingly enough, deeper tests I have performed suggest that the JSON output generation takes up most of the time.
Can node applications be profiled in detail?
Is this behaviour normal? Should retrieving data take so much longer than inserting it?
EDIT:
Full test information
1) Insert log
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Server Software: log-server
Server Hostname: localhost
Server Port: 3010
Document Path: /log
Document Length: 0 bytes
Concurrency Level: 10
Time taken for tests: 6.502 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 2240634 bytes
Total PUT: 935000
HTML transferred: 0 bytes
Requests per second: 768.99 [#/sec] (mean)
Time per request: 13.004 [ms] (mean)
Time per request: 1.300 [ms] (mean, across all concurrent requests)
Transfer rate: 336.53 [Kbytes/sec] received
140.43 kb/s sent
476.96 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 6 13 3.9 12 39
Waiting: 6 12 3.9 11 39
Total: 6 13 3.9 12 39
Percentage of the requests served within a certain time (ms)
50% 12
66% 12
75% 12
80% 13
90% 15
95% 24
98% 26
99% 30
100% 39 (longest request)
2) Last 100 logs
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Server Software: log-server
Server Hostname: localhost
Server Port: 3010
Document Path: /log
Document Length: 4601 bytes
Concurrency Level: 10
Time taken for tests: 46.528 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 25620233 bytes
HTML transferred: 23005000 bytes
Requests per second: 107.46 [#/sec] (mean)
Time per request: 93.057 [ms] (mean)
Time per request: 9.306 [ms] (mean, across all concurrent requests)
Transfer rate: 537.73 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 28 93 16.4 92 166
Waiting: 26 85 18.0 86 161
Total: 29 93 16.4 92 166
Percentage of the requests served within a certain time (ms)
50% 92
66% 97
75% 101
80% 104
90% 113
95% 121
98% 131
99% 137
100% 166 (longest request)
Retrieving data from the database
To query the database I use the mongoose module. The log schema is defined as:
{
date: { type: Date, 'default': Date.now, index: true },
message: String
}
and the query I execute is the following:
Log.find({}, ['message']).sort('date', -1).limit(100)
Can node applications be profiled in detail?
Yes. Use node --prof app.js to create a v8.log, then use linux-tick-processor, mac-tick-processor or windows-tick-processor.bat (in deps/v8/tools in the node src directory) to interpret the log. You have to build d8 in deps/v8 to be able to run the tick processor.
Here's how I do it on my machine:
apt-get install scons
cd ~/development/external/node-0.6.12/deps/v8
scons arch=x64 d8
cd ~/development/projects/foo
node --prof app.js
D8_PATH=~/development/external/node-0.6.12/deps/v8 ~/development/external/node-0.6.12/deps/v8/tools/linux-tick-processor > profile.log
There are also a few tools to make this easier, including node-profiler and v8-profiler (with node-inspector).
Regarding your other question, I would like some more information on how you fetch your data from Mongo, and what the data looks like (I agree with beny23 that it looks like a suspiciously low amount of data).
I strongly suggest taking a look at the DTrace support of Restify. It will likely become your best friend when profiling.
http://mcavage.github.com/node-restify/#DTrace
Related
One of my SGE jobs was running slowly and was killed by qmaster to enforce h_rt=1200.
Is it possible that an SGE admin dynamically changed a setting to make the job (id=2771780) run slowly? If yes, which setting could do that? If not, what else could cause this?
qname test.q
hostname abc
group domain
owner jenkins
project NONE
department defaultdepartment
jobname top
jobnumber 2771780
taskid undefined
account sge
priority 0
qsub_time Mon Dec 20 11:46:06 2021
start_time Mon Dec 20 11:46:07 2021
end_time Mon Dec 20 12:06:08 2021
granted_pe NONE
slots 1
failed 37 : qmaster enforced h_rt, h_cpu, or h_vmem limit
exit_status 137 (Killed)
ru_wallclock 1201s
ru_utime 0.088s
ru_stime 8.797s
ru_maxrss 5.559KB
ru_ixrss 0.000B
ru_ismrss 0.000B
ru_idrss 0.000B
ru_isrss 0.000B
ru_minflt 23574
ru_majflt 0
ru_nswap 0
ru_inblock 128
ru_oublock 240
ru_msgsnd 0
ru_msgrcv 0
ru_nsignals 0
ru_nvcsw 24156
ru_nivcsw 66
cpu 1454.650s
mem 54.658GBs
io 495.010GB
iow 0.000s
maxvmem 1014.082MB
arid undefined
ar_sub_time undefined
category -U arusers,digital -q test.q -l h_rt=1200
If you are saying that the job usually finishes within the 1200s limit but ran slowly on this particular occasion, that could be down to various external factors, such as contention for storage or network bandwidth. You may also have landed on a different compute node type with a slower CPU. An SGE admin can change various resource settings before the job starts executing, such as the number of cores, but the more likely issue is contention for storage/IO, or even a CPU throttled for thermal reasons.
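If you want to verify where the 1200s limit came from, note that the category line of the accounting record above shows the job itself requested -l h_rt=1200. Two standard SGE commands (queue name and job id taken from the record above) let you compare that against the queue configuration:
# show the queue's configured run-time limits
qconf -sq test.q | grep -E 'h_rt|s_rt'
# show the full accounting record for the job
qacct -j 2771780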
In GCP Cloud SQL for MySQL, say that for Network throughput (MB/s) it specifies
250 of 2000
Is 250 the average value and 2000 the maximum attainable value?
If you click on the question mark (to the left of your red square) it will lead you to this doc. You will see that the limiting factor is 16 Gbps.
Converting units: 16 Gbit/s ÷ 8 bits per byte = 2 GB/s = 2000 MB/s.
If you change your machine type to high-mem with 8 or 16 vCPUs, you will see the cap at 2000. So I suspect 250 MB/s is the value allocated to the machine type you chose, the 1 vCPU one.
I am trying to parse a string response from a server into JSON format. I am new to Go and need some help understanding the right way to achieve this. Here is the response I am getting from the server:
Test 1: local 1.1.1.1 remote 2.2.2.2 state GOOD
Test ID: 2.2.2.2
Test Type: ABD
Admin State: START
DFD: Disabled
Address family: ipv4-unicast
Options: < Refresh >
Updates Received: 0, Updates Sent: 7
Data Received: 853, Data Sent: 860
Time since last received update: n/a
Number of transitions to GOOD: 1
Time since last entering GOOD state: 22384 seconds
Retry Interval: 120 seconds
Hold Time: 90 seconds, Keep Test Time: 30 seconds
Test 2: local 1.1.1.1 remote 2.2.2.2 state GOOD
Test ID: 2.2.2.2
Test Type: ABD
Admin State: START
DFD: Disabled
Address family: ipv4-unicast
Options: < Refresh >
Updates Received: 0, Updates Sent: 7
Data Received: 853, Data Sent: 860
Time since last received update: n/a
Number of transitions to GOOD: 1
Time since last entering GOOD state: 22384 seconds
Retry Interval: 120 seconds
Hold Time: 90 seconds, Keep Test Time: 30 seconds
Test 3: local 1.1.1.1 remote 2.2.2.2 state GOOD
Test ID: 2.2.2.2
Test Type: ABD
Admin State: START
DFD: Disabled
Address family: ipv4-unicast
Options: < Refresh >
Updates Received: 0, Updates Sent: 7
Data Received: 853, Data Sent: 860
Time since last received update: n/a
Number of transitions to GOOD: 1
Time since last entering GOOD state: 22384 seconds
Retry Interval: 120 seconds
Hold Time: 90 seconds, Keep Test Time: 30 seconds
Thanks.
You are going to have to write a custom parser that cuts that input up into a form from which you can retrieve your keys and values. The strings package should be very helpful, specifically strings.Split.
I wrote a basic example that works on at least one section of your input; you will want to tweak it to work on the entire input. As it stands, mine overwrites keys as it continues to read, so you will want to add some sort of array structure to handle repeated keys. Also, mine treats all values as strings; I'm not sure whether that is useful to you.
http://play.golang.org/p/FZ_cQ-b-bx
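In case that playground link goes stale, here is a minimal sketch of the same idea (this is not the original playground code: it keeps one map per "Test N" section so repeated keys across sections do not collide, treats every value as a string, and splits comma-separated pairs like "Updates Received: 0, Updates Sent: 7" into separate keys):

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parse splits the raw response into one key/value map per "Test N" section.
func parse(raw string) []map[string]string {
	var sections []map[string]string
	var current map[string]string
	for _, line := range strings.Split(raw, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		// A section header looks like "Test 1: local ... remote ... state GOOD".
		// The "local" check keeps "Test ID:" and "Test Type:" lines out of this branch.
		if strings.HasPrefix(line, "Test ") && strings.Contains(line, "local") {
			current = map[string]string{"header": line}
			sections = append(sections, current)
			continue
		}
		if current == nil {
			continue
		}
		// A line may hold several comma-separated "Key: Value" pairs.
		for _, part := range strings.Split(line, ",") {
			if kv := strings.SplitN(part, ":", 2); len(kv) == 2 {
				current[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
			}
		}
	}
	return sections
}

func main() {
	raw := `Test 1: local 1.1.1.1 remote 2.2.2.2 state GOOD
Test ID: 2.2.2.2
Admin State: START
Updates Received: 0, Updates Sent: 7`
	out, _ := json.MarshalIndent(parse(raw), "", "  ")
	fmt.Println(string(out))
}

Fed your full response, this produces a JSON array with one object per test.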
However, if you control the server and application that you are getting this output from, the preferred solution would be to have that application convert it to JSON.
NOTE: The code above is very brittle.
About two months ago, I imported English Wikipedia data (http://dumps.wikimedia.org/enwiki/20120211/) into MySQL.
Since the import finished, I have been creating indexes on the tables of the EnWikipedia database for about two months.
I have now reached the point of creating the index on "pagelinks".
However, it seems to take a near-infinite time to get past that point.
Therefore, I estimated the time remaining to check whether my intuition was correct.
The expected time remaining turned out to be 60 days (assuming I create the index on "pagelinks" again from the beginning).
My EnWikipedia database has 7 tables:
"categorylinks"(records: 60 mil, size: 23.5 GiB),
"langlinks"(records: 15 mil, size: 1.5 GiB),
"page"(records: 26 mil, size 4.9 GiB),
"pagelinks"(records: 630 mil, size: 56.4 GiB),
"redirect"(records: 6 mil, size: 327.8 MiB),
"revision"(records: 26 mil, size: 4.6 GiB) and "text"(records: 26 mil, size: 60.8 GiB).
My server is:
Linux version 2.6.32-5-amd64 (Debian 2.6.32-39), 16 GB memory, 2.39 GHz Intel 4-core CPU
Is it a common phenomenon for index creation to take this many days?
Does anyone have a good solution for creating the index more quickly?
Thanks in advance!
P.S.: I performed the following steps to estimate the time remaining.
Reference (sorry, the following page is written in Japanese): http://d.hatena.ne.jp/sh2/20110615
1st: I counted the records in "pagelinks".
mysql> select count(*) from pagelinks;
+-----------+
| count(*) |
+-----------+
| 632047759 |
+-----------+
1 row in set (1 hour 25 min 26.18 sec)
2nd: I measured how many records are written per minute.
getHandler_write.sh
#!/bin/bash
# Print the Handler_write counter once a minute so the write rate can be derived.
while true
do
cat <<_EOF_
SHOW GLOBAL STATUS LIKE 'Handler_write';
_EOF_
sleep 60
done | mysql -u root -p -N
command
$ sh getHandler_write.sh
Enter password:
Handler_write 1289808074
Handler_write 1289814597
Handler_write 1289822748
Handler_write 1289829789
Handler_write 1289836322
Handler_write 1289844916
Handler_write 1289852226
3rd: I computed the write speed.
According to the result of step 2, the write speed is about
7233 records/minute
4th: The time remaining is therefore
(632047759 / 7233) / 60 / 24 ≈ 60 days
Those are pretty big tables, so I'd expect the indexing to be slow; 630 million records is a LOT of data to index. One thing to look at is partitioning: with data sets that large, and without correctly partitioned tables, performance will be slow. Here are some useful links: using partitioning on slow indexes. You could also try raising the buffer size used for building the indexes (the default is 8 MB, so for your large table that's going to slow you down a fair bit): buffer size documentation.
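As an illustration of the buffer tweak (a sketch, assuming the tables are MyISAM, which is what the stock Wikipedia import produces; the 256 MB figure is only an example, and myisam_sort_buffer_size is the variable behind the 8 MB default mentioned above):

mysql> -- raise the sort buffer for this session before rebuilding the index
mysql> SET SESSION myisam_sort_buffer_size = 256*1024*1024;
mysql> ALTER TABLE pagelinks ADD INDEX (pl_from);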
I am performing a number of trace routes to different IPs over the course of one week. I've got a script that performs a set of trace routes and appends the output to the same .log file.
This file is obviously quite large by now, as I'm performing trace routes 3 times a day on 6 targets for a week. I'm trying to write a simple program that converts my log files into CSV format for analysis in Excel.
Before each trace route runs, the script prints "--- START ---", and it finishes with "--- END ---". See the following example:
--- START ---
Mon Mar 12 22:45:05 GMT 2012
traceroute to xxxxxxxx (xxxxxx), 30 hops max, 60 byte packets
1 xxxxxxx (xxxxxxx) 1.085 ms 1.662 ms 2.244 ms
2 xxxxxx (xxxxxx) 0.792 ms 0.782 ms 0.772 ms
3 xxxxxx (xxxxxx) 8.545 ms 9.170 ms 9.644 ms
4 etc
5 etc
--- END ---
--- START ---
Mon Mar 12 22:45:05 GMT 2012
traceroute to xxxxxx (xxxxx), 30 hops max, 60 byte packets
1 xxxxxxx (xxxxxxx) 0.925 ms 1.318 ms 1.954 ms
2 xxxxx (xxxxxx) 0.345 ms 0.438 ms 0.496 ms
3 xxxxxxx (xxxxxx) 0.830 ms 2.553 ms 0.809 ms
4 etc
5 etc
--- END ---
I was going to use the START and END markers to delimit and separate the trace routes from one another. I also need the total number of hops each trace route makes, which is the last hop number on the line before "--- END ---".
If anyone could help me out it would be great. I need something that runs through the log, separates the trace routes, and then reports the number of hops each one makes. I'm currently using MATLAB.
Cheers.
The best way to solve your problem is with regular expressions. Just find those START and END tags and do the necessary processing on each match. :)
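A minimal MATLAB sketch of the idea (the file name traceroute.log is an assumption, and the hop count is taken as the largest leading hop number inside each block):

% Read the whole log as one string.
logtext = fileread('traceroute.log');
% Grab everything between the markers; 'tokens' returns one cell per trace route.
blocks = regexp(logtext, '--- START ---(.*?)--- END ---', 'tokens');
for k = 1:numel(blocks)
    body = blocks{k}{1};
    % Hop lines start with the hop number; the largest one is the hop count.
    hops = regexp(body, '^\s*(\d+)\s', 'tokens', 'lineanchors');
    if isempty(hops)
        continue
    end
    nhops = max(cellfun(@(t) str2double(t{1}), hops));
    fprintf('trace %d: %d hops\n', k, nhops);
end

From there, writing one CSV row per trace route (with fprintf to a file opened via fopen) gets you the Excel-ready output you described.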