Creating an index takes too long - mysql

About 2 months ago, I imported the English Wikipedia data (http://dumps.wikimedia.org/enwiki/20120211/) into MySQL.
Since the import finished, I have been creating indexes on the tables of the EnWikipedia database for about 2 months.
I have now reached the point of creating the index on "pagelinks".
However, it seems to take practically forever to get past that point.
Therefore, I checked the estimated time remaining to see whether my intuition was correct.
The result: the estimated time remaining was 60 days (assuming I create the index on "pagelinks" again from the beginning).
My EnWikipedia database has 7 tables:
"categorylinks"(records: 60 mil, size: 23.5 GiB),
"langlinks"(records: 15 mil, size: 1.5 GiB),
"page"(records: 26 mil, size 4.9 GiB),
"pagelinks"(records: 630 mil, size: 56.4 GiB),
"redirect"(records: 6 mil, size: 327.8 MiB),
"revision"(records: 26 mil, size: 4.6 GiB) and "text"(records: 26 mil, size: 60.8 GiB).
My server is:
Linux 2.6.32-5-amd64 (Debian 2.6.32-39), 16 GB of memory, 2.39 GHz quad-core Intel CPU.
Is it common for index creation to take this many days?
Does anyone have a good solution for creating the indexes more quickly?
Thanks in advance!
P.S.: I performed the following steps to estimate the time remaining.
Reference (sorry, the following page is written in Japanese): http://d.hatena.ne.jp/sh2/20110615
1st. I counted the records in "pagelinks".
mysql> select count(*) from pagelinks;
+-----------+
| count(*) |
+-----------+
| 632047759 |
+-----------+
1 row in set (1 hour 25 min 26.18 sec)
2nd. I measured the number of records written per minute.
getHandler_write.sh
#!/bin/bash
# Print the Handler_write counter once a minute; the difference between
# successive values gives the number of rows written per minute.
while true
do
cat <<_EOF_
SHOW GLOBAL STATUS LIKE 'Handler_write';
_EOF_
sleep 60
done | mysql -u root -p -N
command
$ sh getHandler_write.sh
Enter password:
Handler_write 1289808074
Handler_write 1289814597
Handler_write 1289822748
Handler_write 1289829789
Handler_write 1289836322
Handler_write 1289844916
Handler_write 1289852226
3rd. I computed the write rate.
According to the result of step 2, the write rate is about
7233 records/minute
4th. The time remaining is therefore
(632047759 / 7233) / 60 / 24 ≈ 60 days

Those are pretty big tables, so I'd expect the indexing to be pretty slow; 630 million records is a LOT of data to index. One thing to look at is partitioning: with data sets that large, performance will be very slow unless the tables are correctly partitioned, so see the MySQL documentation on using partitioning for slow indexes.
You could also try looking at the buffer size settings used for building the indexes (the default is 8 MB, so for a table of your size that's going to slow you down a fair bit); see the buffer size documentation. A sketch of that tweak follows below.
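For example, a minimal sketch of the buffer-size idea, assuming the tables are MyISAM (the default for a stock MediaWiki import) and that the session may use a much larger sort buffer; the index definition shown is only an example taken from the MediaWiki schema, so check it against your tables:
-- Give this session a larger sort buffer for the index build (default is 8 MB; the value here is illustrative).
SET SESSION myisam_sort_buffer_size = 1024 * 1024 * 1024;
-- Build the index in a single ALTER TABLE so the table is rewritten only once.
ALTER TABLE pagelinks
  ADD INDEX pl_namespace (pl_namespace, pl_title, pl_from);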

How to get 100% GPU usage using CUDA

I wonder how I can generate a high load on a GPU, step by step.
What I'm trying to do is write a program that puts the maximum load on one multiprocessor (SM), then on another, until it reaches the total number of multiprocessors.
It would be similar to running a "while true" loop on every single core of a CPU, but I'm not sure whether the same approach works on a GPU with CUDA.
Can you help me?
If you want to do a stress test / power consumption test, you'll need to pick the workload. The highest power consumption with compute-only code is most likely achieved with some synthetic benchmark that feeds the GPU the optimal mix and sequence of operations. Otherwise, BLAS level 3 is probably quite close to optimal.
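For instance, a minimal sketch of that idea using cuBLAS SGEMM as the load (compile with nvcc and link against -lcublas; the matrix size, iteration count and lack of error checking are simplifications, and the matrix contents are irrelevant for a pure load test):
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main()
{
    const int n = 4096;                       // matrix dimension: arbitrary, but large enough to keep the GPU busy
    float *dA, *dB, *dC;
    cudaMalloc(&dA, (size_t)n * n * sizeof(float));
    cudaMalloc(&dB, (size_t)n * n * sizeof(float));
    cudaMalloc(&dC, (size_t)n * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    for (int i = 0; i < 100; ++i)             // repeat to sustain the load for a while
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}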
Putting load only on a certain number of multi-processors will require that you tweak the workload to limit the block-level parallelism.
Briefly, this is what I'd do:
Pick a code that is well optimized and known to utilize the GPU to a great extent (high IPC, high power consumption, etc.). Have a look around on the CUDA developer forums; you should be able to find hand-tuned BLAS code or something similar.
Change the code to force it to run on a given number of multiprocessors. This will require that you tune the number of blocks and threads to produce exactly the right amount of load for the number of processors you want to utilize. (A sketch of this idea follows after this list.)
Profile: the profiler counters can show you the number of instructions per multiprocessor, which gives you a check that you are indeed running only on the desired number of processors, as well as other counters that indicate how efficiently the code is running.
Measure. If you have a Tesla or Quadro you get power consumption out of the box. Otherwise, try the nvml fix. Without a power measurement it will be hard for you to know how far you are from the TDP and especially whether the GPU is throttling.
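And a very rough sketch of step 2, limiting block-level parallelism by launching only as many blocks as the number of multiprocessors you want to keep busy (the block-to-SM mapping is up to the hardware scheduler, so treat this as an approximation rather than a guarantee):
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Dependent fused multiply-adds keep the arithmetic units of one block busy.
__global__ void burn(float *out, long iters)
{
    float a = threadIdx.x * 0.001f + 1.0f;
    const float b = 1.0000001f;
    for (long i = 0; i < iters; ++i)
        a = fmaf(a, b, 0.0001f);
    if (threadIdx.x == 0)
        out[blockIdx.x] = a;              // keeps the compiler from removing the loop
}

int main(int argc, char **argv)
{
    int smsToLoad = (argc > 1) ? atoi(argv[1]) : 1;   // how many multiprocessors to target
    float *d_out;
    cudaMalloc(&d_out, smsToLoad * sizeof(float));

    // One block per targeted SM; 256 threads per block is just a reasonable default.
    burn<<<smsToLoad, 256>>>(d_out, 1L << 28);
    cudaDeviceSynchronize();

    cudaFree(d_out);
    return 0;
}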
Some of my benchmarks carry out the same calculations via CUDA, OpenMP and programmed multithreading. The arithmetic operations executed are of the form x[i] = (x[i] + a) * b - (x[i] + c) * d + (x[i] + e) * f, with 2, 8 or 32 adds or subtracts and multiplies on each data element. A range of data sizes is also used. [Free] benchmarks, source code and results for Linux are available via my page:
http://www.roylongbottom.org.uk/linux%20benchmarks.htm
I also provide Windows variants. Below are some CUDA results, showing a maximum speed of 412 GFLOPS using a GeForce GTX 650. On the quad-core/8-thread Core i7, OpenMP produced up to 91 GFLOPS and multithreading up to 93 GFLOPS using SSE instructions and 178 GFLOPS with AVX 1. See also the section on Burn-In and Reliability Apps, where the most demanding CUDA test is run for a period to show temperature gains, at the same time as CPU stress tests.
Core i7 4820K 3.9 GHz Turbo Boost GeForce GTX 650
Linux CUDA 3.2 x64 32 Bits SP MFLOPS Benchmark 1.4 Tue Dec 30 22:50:52 2014
CUDA devices found
Device 0: GeForce GTX 650 with 2 Processors 16 cores
Global Memory 999 MB, Shared Memory/Block 49152 B, Max Threads/Block 1024
Using 256 Threads
Test 4 Byte Ops Repeat Seconds MFLOPS First All
Words /Wd Passes Results Same
Data in & out 100000 2 2500 0.837552 597 0.9295383095741 Yes
Data out only 100000 2 2500 0.389646 1283 0.9295383095741 Yes
Calculate only 100000 2 2500 0.085709 5834 0.9295383095741 Yes
Data in & out 1000000 2 250 0.441478 1133 0.9925497770309 Yes
Data out only 1000000 2 250 0.229017 2183 0.9925497770309 Yes
Calculate only 1000000 2 250 0.051727 9666 0.9925497770309 Yes
Data in & out 10000000 2 25 0.369060 1355 0.9992496371269 Yes
Data out only 10000000 2 25 0.201172 2485 0.9992496371269 Yes
Calculate only 10000000 2 25 0.048027 10411 0.9992496371269 Yes
Data in & out 100000 8 2500 0.708377 2823 0.9571172595024 Yes
Data out only 100000 8 2500 0.388206 5152 0.9571172595024 Yes
Calculate only 100000 8 2500 0.092254 21679 0.9571172595024 Yes
Data in & out 1000000 8 250 0.478644 4178 0.9955183267593 Yes
Data out only 1000000 8 250 0.231182 8651 0.9955183267593 Yes
Calculate only 1000000 8 250 0.053854 37138 0.9955183267593 Yes
Data in & out 10000000 8 25 0.370669 5396 0.9995489120483 Yes
Data out only 10000000 8 25 0.202392 9882 0.9995489120483 Yes
Calculate only 10000000 8 25 0.049263 40599 0.9995489120483 Yes
Data in & out 100000 32 2500 0.725027 11034 0.8902152180672 Yes
Data out only 100000 32 2500 0.407579 19628 0.8902152180672 Yes
Calculate only 100000 32 2500 0.113188 70679 0.8902152180672 Yes
Data in & out 1000000 32 250 0.497855 16069 0.9880878329277 Yes
Data out only 1000000 32 250 0.261461 30597 0.9880878329277 Yes
Calculate only 1000000 32 250 0.060132 133042 0.9880878329277 Yes
Data in & out 10000000 32 25 0.375882 21283 0.9987964630127 Yes
Data out only 10000000 32 25 0.207640 38528 0.9987964630127 Yes
Calculate only 10000000 32 25 0.054718 146204 0.9987964630127 Yes
Extra tests - loop in main CUDA Function
Calculate 10000000 2 25 0.018107 27613 0.9992496371269 Yes
Shared Memory 10000000 2 25 0.007775 64308 0.9992496371269 Yes
Calculate 10000000 8 25 0.025103 79671 0.9995489120483 Yes
Shared Memory 10000000 8 25 0.008724 229241 0.9995489120483 Yes
Calculate 10000000 32 25 0.036397 219797 0.9987964630127 Yes
Shared Memory 10000000 32 25 0.019414 412070 0.9987964630127 Yes

neo4j batchimporter is slow with big IDs

I want to import CSV files with about 40 million lines into Neo4j. For this I am trying to use the "batchimporter" from https://github.com/jexp/batch-import.
Maybe it's a problem that I provide my own IDs. This is the example:
nodes.csv
i:id l:label
315041100 Person
201215100 Person
315041200 Person
rels.csv:
start end type relart
315041100 201215100 HAS_RELATION 30006
315041200 315041100 HAS_RELATION 30006
the content of batch.properties:
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=1000M
neostore.relationshipstore.db.mapped_memory=5000M
neostore.propertystore.db.mapped_memory=4G
neostore.propertystore.db.strings.mapped_memory=2000M
neostore.propertystore.db.arrays.mapped_memory=1000M
neostore.propertystore.db.index.keys.mapped_memory=1500M
neostore.propertystore.db.index.mapped_memory=1500M
batch_import.node_index.node_auto_index=exact
./import.sh graph.db nodes.csv rels.csv
runs through without errors, but it takes about 60 seconds!
Importing 3 Nodes took 0 seconds
Importing 2 Relationships took 0 seconds
Total import time: 54 seconds
When I use smaller IDs - for example 3150411 instead of 315041100 - it takes just 1 second!
Importing 3 Nodes took 0 seconds
Importing 2 Relationships took 0 seconds
Total import time: 1 seconds
Actually I would like to use even bigger IDs, with 10 digits. I don't know what I'm doing wrong. Can anyone see an error?
JDK 1.7
batchimporter 2.1.3 (with neo4j 2.1.3)
OS: ubuntu 14.04
Hardware: 8-Core-Intel-CPU, 16GB RAM
I think the problem is that the batch importer is interpreting those IDs as actual physical ids on disk. And so the time is spent in the file system, inflating the store files up to the size where they can fit those high ids.
Are the ids that you're giving intended to be "internal" to the batch import? I'm not sure how to tell the batch importer that this is the case, though.
#michael-hunger any input there?
The problem is that those IDs are internal to Neo4j, where they represent disk record ids. If you provide high values there, Neo4j will create a lot of empty records until it reaches your ids.
So you can either create your node ids starting from 0 and store your own id as a normal node property:
i:id id:long l:label
0 315041100 Person
1 201215100 Person
2 315041200 Person
start:id end:id type relart
0 1 HAS_RELATION 30006
2 0 HAS_RELATION 30006
Or you don't provide node ids at all and only look up nodes via their "business id" value, in which case you have to configure and use an index:
id:long:people l:label
315041100 Person
201215100 Person
315041200 Person
id:long:people id:long:people type relart
315041100 201215100 HAS_RELATION 30006
315041200 315041100 HAS_RELATION 30006
HTH Michael
Alternatively, you can also just write a small Java or Groovy program to import your data if handling those ids with the batch-importer is too tricky (a rough sketch follows below).
See: http://jexp.de/blog/2014/10/flexible-neo4j-batch-import-with-groovy/
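For illustration, a rough Java sketch of that alternative using the Neo4j 2.1 BatchInserter API; the CSV handling is deliberately naive (whitespace splitting, header-line skipping omitted) and the file, label and property names are simply the ones from the example above:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class SimpleImport {
    public static void main(String[] args) throws Exception {
        BatchInserter inserter = BatchInserters.inserter("graph.db");
        Map<Long, Long> idMap = new HashMap<>();   // business id -> internal node id
        try {
            for (String line : Files.readAllLines(Paths.get("nodes.csv"), StandardCharsets.UTF_8)) {
                String[] cols = line.trim().split("\\s+");        // e.g. "315041100 Person"
                long businessId = Long.parseLong(cols[0]);
                Map<String, Object> props = new HashMap<>();
                props.put("id", businessId);                      // keep the big id as a plain property
                long nodeId = inserter.createNode(props, DynamicLabel.label(cols[1]));
                idMap.put(businessId, nodeId);
            }
            for (String line : Files.readAllLines(Paths.get("rels.csv"), StandardCharsets.UTF_8)) {
                String[] cols = line.trim().split("\\s+");        // e.g. "315041100 201215100 HAS_RELATION 30006"
                Map<String, Object> props = new HashMap<>();
                props.put("relart", Long.parseLong(cols[3]));
                inserter.createRelationship(idMap.get(Long.parseLong(cols[0])),
                                            idMap.get(Long.parseLong(cols[1])),
                                            DynamicRelationshipType.withName(cols[2]), props);
            }
        } finally {
            inserter.shutdown();                                  // flushes and closes the store
        }
    }
}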

How to Pretty Print Hive Output

How do I have Hive print out nicely formatted results, with column names and pleasant spacing, like MySQL does? For example:
$ hive -f performanceStatistics.hql
...
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201306211023_1053
Hadoop job information for Stage-1: number of mappers: 8; number of reducers: 1
2013-09-04 17:30:56,092 Stage-1 map = 0%, reduce = 0%
2013-09-04 17:31:03,132 Stage-1 map = 25%, reduce = 0%, Cumulative CPU 13.87 sec
...
MapReduce Total cumulative CPU time: 2 minutes 5 seconds 260 msec
Ended Job = job_201306211023_1053
MapReduce Jobs Launched:
Job 0: Map: 8 Reduce: 1 Cumulative CPU: 125.26 sec HDFS Read: 1568029694 HDFS Write: 93 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 5 seconds 260 msec
OK
19.866045211878546 0.023310810810810812 10 0 824821 25 1684.478659112734 0.16516737901191694
Time taken: 34.324 seconds
How do I get the results with column names and good spacing? I would also like an extended (vertical) view like \G in MySQL or \x in PostgreSQL.
Use
set hive.cli.print.header=true;
to print column names [1].
As for the spacing, the output is already tab-separated, so how you process it further is up to you (a small sketch follows below).
[1] https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-CommandLineInterface
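For example, a small sketch of that (the column utility and the -S flag are just one way to tidy things up; the file name is the one from the question):
# Print column headers, then align the tab-separated output into columns
hive -S --hiveconf hive.cli.print.header=true -f performanceStatistics.hql | column -t -s $'\t'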
You can now also use the Beeline command line tool, which outputs data in a pretty format. [0]
Should you want vertical output, like MySQL's \G, you can set --outputformat=vertical (example below).
[0] https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline%E2%80%93NewCommandLineShell
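A hedged example of what that could look like (the JDBC URL and table name are placeholders for your own HiveServer2 setup):
# Pretty, table-style output
beeline -u jdbc:hive2://localhost:10000 -f performanceStatistics.hql
# Vertical output, similar to MySQL's \G
beeline -u jdbc:hive2://localhost:10000 --outputformat=vertical -e "SELECT * FROM my_table LIMIT 10"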

How to profile a web service?

I'm currently developing a practice application in node.js. The application consists of a JSON REST web service that offers two operations:
Insert log (a PUT request to /log, with the message to log)
Last 100 logs (a GET request to /log, that returns the latest 100 logs)
The current stack consists of a node.js server that contains the application logic and a MongoDB database that takes care of persistence. To expose the JSON REST web services I'm using the node-restify module.
I'm currently running some stress tests with apache bench (5000 requests with a concurrency of 10) and get the following results:
Execute stress tests
1) Insert log
Requests per second: 754.80 [#/sec] (mean)
2) Last 100 logs
Requests per second: 110.37 [#/sec] (mean)
I'm surprised by the difference in performance, since the query I'm executing uses an index. Interestingly, in deeper tests I have performed, the JSON output generation seems to take up most of the time.
Can node applications be profiled in detail?
Is this behaviour normal? Does retrieving data really take so much longer than inserting data?
EDIT:
Full test information
1) Insert log
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Server Software: log-server
Server Hostname: localhost
Server Port: 3010
Document Path: /log
Document Length: 0 bytes
Concurrency Level: 10
Time taken for tests: 6.502 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 2240634 bytes
Total PUT: 935000
HTML transferred: 0 bytes
Requests per second: 768.99 [#/sec] (mean)
Time per request: 13.004 [ms] (mean)
Time per request: 1.300 [ms] (mean, across all concurrent requests)
Transfer rate: 336.53 [Kbytes/sec] received
140.43 kb/s sent
476.96 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 6 13 3.9 12 39
Waiting: 6 12 3.9 11 39
Total: 6 13 3.9 12 39
Percentage of the requests served within a certain time (ms)
50% 12
66% 12
75% 12
80% 13
90% 15
95% 24
98% 26
99% 30
100% 39 (longest request)
2) Last 100 logs
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Server Software: log-server
Server Hostname: localhost
Server Port: 3010
Document Path: /log
Document Length: 4601 bytes
Concurrency Level: 10
Time taken for tests: 46.528 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 25620233 bytes
HTML transferred: 23005000 bytes
Requests per second: 107.46 [#/sec] (mean)
Time per request: 93.057 [ms] (mean)
Time per request: 9.306 [ms] (mean, across all concurrent requests)
Transfer rate: 537.73 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 28 93 16.4 92 166
Waiting: 26 85 18.0 86 161
Total: 29 93 16.4 92 166
Percentage of the requests served within a certain time (ms)
50% 92
66% 97
75% 101
80% 104
90% 113
95% 121
98% 131
99% 137
100% 166 (longest request)
Retrieving data from the database
To query the database I use the mongoosejs module. The log schema is defined as:
{
date: { type: Date, 'default': Date.now, index: true },
message: String
}
and the query I execute is the following:
Log.find({}, ['message']).sort('date', -1).limit(100)
Can node applications be profiled in detail?
Yes. Use node --prof app.js to create a v8.log, then use linux-tick-processor, mac-tick-processor or windows-tick-processor.bat (in deps/v8/tools in the node src directory) to interpret the log. You have to build d8 in deps/v8 to be able to run the tick processor.
Here's how I do it on my machine:
apt-get install scons
cd ~/development/external/node-0.6.12/deps/v8
scons arch=x64 d8
cd ~/development/projects/foo
node --prof app.js
D8_PATH=~/development/external/node-0.6.12/deps/v8 ~/development/external/node-0.6.12/deps/v8/tools/linux-tick-processor > profile.log
There are also a few tools to make this easier, including node-profiler and v8-profiler (with node-inspector).
Regarding your other question, I would like some more information on how you fetch your data from Mongo, and what the data looks like (I agree with beny23 that it looks like a suspiciously low amount of data).
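In the meantime, a crude way to separate the query cost from the serialization/response cost is to time the two phases separately. A sketch, assuming a restify handler shaped roughly like yours (the timing labels are of course arbitrary):
server.get('/log', function (req, res, next) {
    console.time('mongo-query');
    Log.find({}, ['message']).sort('date', -1).limit(100).exec(function (err, logs) {
        console.timeEnd('mongo-query');
        console.time('serialize-and-send');
        res.send(logs);                 // restify serializes the documents to JSON here
        console.timeEnd('serialize-and-send');
        return next();
    });
});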
I strongly suggest taking a look at the DTrace support of Restify. It will likely become your best friend when profiling.
http://mcavage.github.com/node-restify/#DTrace

Mysql InnoDB optimisation

I'm having some trouble understanding InnoDB usage. We have a Drupal-based DB (5:1 read:write ratio) running on MySQL (server version: 5.1.41-3ubuntu12.10-log (Ubuntu)). Our current InnoDB data/index sizes are:
Current InnoDB index space = 196 M
Current InnoDB data space = 475 M
Looking around on the web and reading books like 'High Performance MySQL' suggests allowing roughly a 10% increase over the data size, so I set the buffer pool to (data + index) + 10% and noticed that it was at 100% usage. Even increasing it beyond that, to 896 MB, still leaves it at 100%, even though the data plus indexes are only ~671 MB.
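(For reference, the raw buffer pool counters can also be read directly from the server; these are standard InnoDB status/system variables, shown here just as a convenience check:)
-- Buffer pool page accounting: free, data, misc and total pages (16 KB pages by default)
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
-- The configured buffer pool size in bytes
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';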
I've attached the output of the InnoDB section of mysqlreport below. A 'Pages free' value of 1 also seems to be pointing at a major problem. innodb_flush_method is set to its default; I will investigate setting it to O_DIRECT, but I want to sort out this issue first.
__ InnoDB Buffer Pool __________________________________________________
Usage 895.98M of 896.00M %Used: 100.00
Read hit 100.00%
Pages
Free 1 %Total: 0.00
Data 55.96k 97.59 %Drty: 0.01
Misc 1383 2.41
Latched 0 0.00
Reads 405.96M 1.2k/s
From file 15.60k 0.0/s 0.00
Ahead Rnd 211 0.0/s
Ahead Sql 1028 0.0/s
Writes 29.10M 87.3/s
Flushes 597.58k 1.8/s
Wait Free 0 0/s
__ InnoDB Lock _________________________________________________________
Waits 66 0.0/s
Current 0
Time acquiring
Total 3890 ms
Average 58 ms
Max 3377 ms
__ InnoDB Data, Pages, Rows ____________________________________________
Data
Reads 21.51k 0.1/s
Writes 666.48k 2.0/s
fsync 324.11k 1.0/s
Pending
Reads 0
Writes 0
fsync 0
Pages
Created 84.16k 0.3/s
Read 59.35k 0.2/s
Written 597.58k 1.8/s
Rows
Deleted 19.13k 0.1/s
Inserted 6.13M 18.4/s
Read 196.84M 590.6/s
Updated 139.69k 0.4/s
Any help on this would be greatly appreciated.
Thanks!