Inspired by this xkcd cartoon, I wondered: what exactly is the best mechanism for giving the user an estimate of how long a file copy or move will take?
The alt text on the xkcd comic reads as follows:
They could say "the connection is probably lost," but it's more fun to do naive time-averaging to give you hope that if you wait around for 1,163 hours, it will finally finish.
Ignoring the funny, is that really how it's done in Windows? How about other OSes? Is there a better way?
Have a look at my answer to a similar question (and the other answers there) on how the remaining time is estimated in Windows Explorer.
In my opinion, there is only one way to get good estimates:
Calculate the exact number of bytes to be copied before you begin the copy process
Recalculate your estimate regularly (every 1, 5 or 10 seconds, YMMV) based on the current transfer speed
The current transfer speed can fluctuate heavily when you are copying over a network, so use an average, for example based on the number of bytes transferred since your last estimate.
Note that the first point may require a fair amount of work if you are copying many files. That is probably why the guys at Microsoft decided to go without it. You need to decide for yourself whether the additional overhead created by that calculation is worth giving your user a better estimate.
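For the first point, here is a minimal sketch of the up-front size calculation, assuming a plain local directory tree; the function name total_bytes is just illustrative and error handling is reduced to skipping unreadable files:

import os

def total_bytes(root):
    """Walk the source tree once, before copying, to get the exact byte count."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return total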
I've done something similar to estimate when a queue will be empty, given that items are being dequeued faster than they are being enqueued. I used linear regression over the most recent N readings of (time,queue size).
This gives better results than the naive
(elapsed_time / bytes_copied_so_far) * bytes_left_to_copy
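A minimal sketch of that regression approach, assuming the readings are (timestamp in seconds, bytes or items remaining) pairs; all names here are illustrative:

from collections import deque

N = 20                      # number of recent readings to keep
readings = deque(maxlen=N)  # (t, remaining) pairs

def record(t, remaining):
    readings.append((t, remaining))

def eta(now):
    """Least-squares fit remaining ~= a*t + b and report when the line hits zero."""
    if len(readings) < 2:
        return None
    n = len(readings)
    sum_t = sum(t for t, _ in readings)
    sum_y = sum(y for _, y in readings)
    sum_tt = sum(t * t for t, _ in readings)
    sum_ty = sum(t * y for t, y in readings)
    denom = n * sum_tt - sum_t * sum_t
    if denom == 0:
        return None
    a = (n * sum_ty - sum_t * sum_y) / denom  # slope: units per second (negative if draining)
    b = (sum_y - a * sum_t) / n               # intercept
    if a >= 0:
        return None                           # not draining, so no finite ETA
    return max(0.0, -b / a - now)             # time left until remaining reaches zero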
Start a global timer that fires, say, every 1000 milliseconds and updates a total elapsed time counter. Let's call this variable "elapsedTime".
While the file is being copied, update a local variable with the amount already copied. Let's call this variable "totalCopied".
In the timer event that is periodically raised, divide totalCopied by elapsedTime to get the number of bytes copied per timer interval (here, 1000 ms, i.e. per second). Let's call this variable "bytesPerSec".
Divide the total file size by bytesPerSec to obtain the total number of seconds theoretically required to copy this file. Let's call this variable "totalTime".
Subtract elapsedTime from totalTime and you have a somewhat accurate estimate of the remaining copy time.
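A small sketch of those steps, assuming the total size is known up front; the 1000 ms timer itself is left out and remaining_seconds is meant to be called from its tick handler:

import time

def make_copy_estimator(total_size):
    start = time.monotonic()

    def remaining_seconds(total_copied):
        elapsed = time.monotonic() - start       # elapsedTime
        if total_copied == 0 or elapsed == 0:
            return None                          # nothing to base an estimate on yet
        bytes_per_sec = total_copied / elapsed   # bytesPerSec
        total_time = total_size / bytes_per_sec  # theoretical time for the whole file
        return max(0.0, total_time - elapsed)    # what is still left to go

    return remaining_seconds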
I think dialogs should just admit their limitations. It's not annoying because it's failing to give a useful time estimate, it's annoying because it's authoritatively offering an estimate that's obvious nonsense.
So, estimate however you like: based on the current rate or the average rate so far, rolling averages discarding outliers, or whatever. That depends on the operation and the typical duration of the events that delay it, so you might use a different algorithm when you know the copy involves a network drive. But until your estimate has been fairly consistent for a period of time equal to the lesser of 30 seconds or 10% of the estimated time, display "oh dear, there seems to be some kind of holdup" when it has massively slowed, or just ignore it if it has massively sped up.
For example, dialog messages taken at 1-second intervals when a connection briefly stalls:
remaining: 60 seconds // estimate is 60 seconds
remaining: 59 seconds // estimate is 59 seconds
remaining: delayed [was 59 seconds] // estimate is 12 hours
remaining: delayed [was 59 seconds] // estimate is infinity
remaining: delayed [was 59 seconds] // got data: estimate is 59 seconds
// six seconds later
remaining: 53 seconds // estimate is 53 seconds
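A rough sketch of that display policy; the class and its thresholds are illustrative, and the 10x "massively slowed" factor and the 20% "same regime" band are assumptions layered on top of the 30 s / 10% rule suggested above:

import time

SLOWDOWN_FACTOR = 10  # assumed: a 10x jump in the estimate counts as "massively slowed"
DRIFT_BAND = 0.2      # assumed: changes within 20% are ordinary countdown drift

class EstimateDisplay:
    def __init__(self):
        self.shown = None        # estimate currently on screen, in seconds
        self.candidate = None    # a diverging estimate waiting to be trusted
        self.candidate_since = 0.0

    def update(self, estimate_s):
        now = time.monotonic()
        if self.shown is None:
            self.shown = estimate_s
        elif estimate_s > self.shown * SLOWDOWN_FACTOR:
            # Stalled or crawling: admit it instead of showing a nonsense number.
            self.candidate = None
            return "remaining: delayed [was %d seconds]" % round(self.shown)
        elif abs(estimate_s - self.shown) <= DRIFT_BAND * self.shown:
            self.shown = estimate_s           # normal drift: just count down
            self.candidate = None
        else:
            # A big change (for example a speed-up): only adopt it once it has been
            # consistent for the lesser of 30 seconds or 10% of the estimated time.
            if self.candidate is None or abs(estimate_s - self.candidate) > DRIFT_BAND * self.candidate:
                self.candidate, self.candidate_since = estimate_s, now
            elif now - self.candidate_since >= min(30.0, 0.10 * estimate_s):
                self.shown, self.candidate = estimate_s, None
        return "remaining: %d seconds" % round(self.shown)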
Most of all, I would never display seconds (only hours and minutes). I think it's really frustrating to sit there and wait for a minute while the timer jumps between 10 and 20 seconds. And always display real information, like: xxx/yyyy MB copied.
I would also include something like this:
if timeLeft > 5h --> Inform user that this might not work properly
if timeLeft > 10h --> Inform user that there might be better ways to move the file
if timeLeft > 24h --> Abort and check for problems
I would also inform the user if the estimated time varies too much
And if it's not too complicated, there should be an auto-check function that checks if the process is still alive and working properly every 1-10 minutes (depending on the application).
Speaking of network file copy, the best thing is to take into account the file size to be transferred, the network response time, and so on. An approach I used once was:
Connection speed: ping and measure the round-trip time for packets of 15 KB.
Take the file size and work out, theoretically, how long it would take if the file were broken into 15 KB packets at that connection speed.
Recalculate the connection speed after the transfer has started and adjust the estimated time accordingly.
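A sketch of that calculation reduced to arithmetic; the 15 KB probe size comes from the answer, the function names are made up, and how you actually time the probe (ICMP ping, a small TCP send, ...) is left open:

PROBE_BYTES = 15 * 1024

def initial_estimate(file_size, probe_round_trip_s):
    bytes_per_sec = PROBE_BYTES / probe_round_trip_s  # rough connection speed from the probe
    chunks = -(-file_size // PROBE_BYTES)             # ceiling division into 15 KB packages
    return chunks * PROBE_BYTES / bytes_per_sec       # theoretical transfer time

def refined_estimate(bytes_left, bytes_sent, elapsed_s):
    # Recalculate from what has actually been transferred once the copy is running.
    return bytes_left / (bytes_sent / elapsed_s)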
I've been pondering this one myself. I have a copy routine, behind a Windows Explorer style interface, which allows the transfer of selected files from an Android device to a PC.
At the start, I know the total size of the file(s) to be copied, and since I am using C#.NET, I use a Stopwatch to get the elapsed time; while the copy is in progress I keep a running total of what has been copied so far, in bytes.
I haven't actually tested it yet, but the best way seems to be this:
estimated = elapsed * ((totalSize - copiedSoFar) / copiedSoFar)
I never saw it the way you guys are explaining it, by transferred bytes and total bytes.
The "experience" always made a lot more sense (not better or more accurate, just more explicable) if you assume it instead uses the bytes of each file and the file count. That would explain how the estimate swings so wildly.
If you transfer the large files first, the estimate runs long, even with the connection steady. It is as if it naively assumes that all files are the average size of those transferred so far, and then guesses that this average file size will hold for the rest of the job.
This, like the other approaches, only gets worse when the connection speed varies...
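A toy illustration of that claim, with made-up sizes and a constant transfer speed, comparing a byte-based estimate with a count-based one after the large files have gone first:

sizes = [500, 400, 300, 5, 5, 5, 5, 5]  # MB, largest first (made-up numbers)
speed = 10                              # MB per second, held constant

done_mb = sum(sizes[:3])                # the three big files are finished
done_files = 3
elapsed = done_mb / speed               # 120 seconds so far

left_mb = sum(sizes) - done_mb          # 25 MB left
left_files = len(sizes) - done_files    # 5 files left

by_bytes = left_mb / (done_mb / elapsed)        # 2.5 seconds remaining
by_count = left_files * (elapsed / done_files)  # 200 seconds remaining
print("bytes-based: %.1f s, count-based: %.1f s" % (by_bytes, by_count))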
I have some gperftools profiles:
the first run took about 2 minutes and its file is 18 MB;
the other runs took about 2 hours and their files are about 800 MB.
When I try to use pprof --text to get the report, I find that the first one has 1300 samples, but those 2-hour runs have only about 5500 samples.
I expected the larger files to have about 2*3600*100 samples (because "by default gperftools takes 100 samples a second").
Same program and same operating environment, so why are there so few samples?
Sorry for my poor English.
It looks like it's I/O bound. In the 120-second job, you're getting 13 seconds of samples. In the 120-minute job, you're getting about 1 minute of samples. The actual fraction of time spent computing vs. I/O can vary pretty widely, especially if there is some constant startup overhead.
If the time ought to be roughly linear in file size, that 120-minute job should really only be about 40 minutes, so I would do some manual sampling on the big job, to see what's happening.
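The arithmetic behind that, assuming gperftools' default of 100 samples per second of CPU time:

SAMPLES_PER_SEC = 100

for name, samples, wall_s in [("2-minute job", 1300, 2 * 60),
                              ("2-hour job", 5500, 2 * 3600)]:
    cpu_s = samples / SAMPLES_PER_SEC
    print("%s: ~%.0f s on the CPU out of %d s wall clock (%.1f%%)"
          % (name, cpu_s, wall_s, 100.0 * cpu_s / wall_s))
# 2-minute job: ~13 s on the CPU out of 120 s wall clock (10.8%)
# 2-hour job: ~55 s on the CPU out of 7200 s wall clock (0.8%)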
I am using a sort of code ping to measure the time it takes to process the whole page, on all the pages in my web portal.
I figured that if I set a $count_start in the header, initialised with the current timestamp, and a $count_end in the footer, likewise, the difference is a rough meter of how well optimised the page is (queries, loading time of everything on that particular page).
Say for one page I get 0.0075 seconds, for others I get 0.045, etc. I'm working on optimising the queries better this way.
My question is: if this rough loading-time meter says a page takes 0.007 seconds,
will 1000 users querying the same page at the same time each get the result in 0.007 * 1000 = 7 seconds? Meaning each of them gets the page after 7 seconds?
Thanks.
Luckily, it doesn't usually mean that.
The missing variable in your equation is how your database, your application server, and anything else in your stack handle concurrency.
To illustrate this strictly from the MySQL perspective, I wrote a test client program that establishes a fixed number of connections to the MySQL server, each in its own thread (and so, able to issue a query to the server at approximately the same time).
Once all of the threads have signaled back that they are connected, a message is sent to all of them at the same time, to send their query.
When each thread gets the "go" signal, it looks at the current system time, then sends the query to the server. When it gets the response, it looks at the system time again, and then sends all of the information back to the main thread, which compares the timings and generates the output below.
The program is written in such a way that it does not count the time required to establish the connections to the server, since in a well-behaved application the connections would be reusable.
The query was SELECT SQL_NO_CACHE COUNT(1) FROM ... (an InnoDB table with about 500 rows in it).
threads 1 min 0.001089 max 0.001089 avg 0.001089 total runtime 0.001089
threads 2 min 0.001200 max 0.002951 avg 0.002076 total runtime 0.003106
threads 4 min 0.000987 max 0.001432 avg 0.001176 total runtime 0.001677
threads 8 min 0.001110 max 0.002789 avg 0.001894 total runtime 0.003796
threads 16 min 0.001222 max 0.005142 avg 0.002707 total runtime 0.005591
threads 32 min 0.001187 max 0.010924 avg 0.003786 total runtime 0.014812
threads 64 min 0.001209 max 0.014941 avg 0.005586 total runtime 0.019841
Times are in seconds. The min/max/avg are the best/worst/average times observed running the same query. At a concurrency of 64, you'll notice the best case wasn't all that different from the best case with only 1 query. But the biggest take-away here is the total runtime column. That value is the difference in time from when the first thread sent its query (they all send their queries at essentially the same time, but "precisely" the same time is impossible since I don't have a 64-core machine to run the test script on) to when the last thread received its response.
Observations: the good news is that the 64 queries, taking an average of 0.005586 seconds each, definitely did not require 64 * 0.005586 = 0.357504 seconds to execute... they didn't even require 64 * 0.001089 (the best-case time) = 0.069696 seconds. All of those queries were started and finished within 0.019841 seconds... or only about 28.5% of the time it would theoretically have taken for them to run one after another.
The bad news, of course, is that the average execution time on this query at a concurrency of 64 is over 5 times as high as the time when it's only run once... and the worst case is almost 14 times as high. But that's still far better than a linear extrapolation from the single-query execution time would suggest.
Things don't scale indefinitely, though. As you can see, the performance does deteriorate with concurrency and at some point it would go downhill -- probably fairly rapidly -- as we reached whichever bottleneck occurred first. The number of tables, the nature of the queries, any locking that is encountered, all contribute to how the server performs under concurrent loads, as do the performance of your storage, the size, performance, and architecture, of the system's memory, and the internals of MySQL -- some of which can be tuned and some of which can't.
But of course, the database isn't the only factor. The way the application server handles concurrent requests can be another big part of your performance under load, sometimes to a larger extent than the database, and sometimes less.
One big unknown from your benchmarks is how much of that time is spent by the database answering the queries, how much is spent by the application server executing the business logic, and how much is spent by the code that renders the page results into HTML.
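This is not the answerer's actual program, but a rough Python sketch of the same kind of harness; the connection parameters and the table name t are placeholders:

import threading
import time
import mysql.connector  # assumes the MySQL Connector/Python package is installed

QUERY = "SELECT SQL_NO_CACHE COUNT(1) FROM t"  # t: some small InnoDB table (placeholder)
N_THREADS = 8

barrier = threading.Barrier(N_THREADS)  # the "go" signal
results = []
results_lock = threading.Lock()

def worker():
    # Connect first so connection setup is not part of the measurement.
    conn = mysql.connector.connect(host="localhost", user="test",
                                   password="test", database="test")
    cur = conn.cursor()
    barrier.wait()                       # all threads fire at (roughly) the same time
    start = time.perf_counter()
    cur.execute(QUERY)
    cur.fetchall()
    end = time.perf_counter()
    with results_lock:
        results.append((start, end))
    conn.close()

threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

durations = [end - start for start, end in results]
total = max(end for _, end in results) - min(start for start, _ in results)
print("min %.6f max %.6f avg %.6f total runtime %.6f"
      % (min(durations), max(durations), sum(durations) / len(durations), total))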
I have a script that generates about 20,000 small objects with about 8 simple properties. My desire was to toss these objects into ScriptDb for later processing of the data.
What I'm experiencing, though, is that even with a saveBatch operation, the process takes much longer than desired and then silently stops. By too long, I mean it often exceeds the 5-minute execution limit, though without throwing any error. The script runs so long that I haven't attempted to check a mutation result to see what didn't make it, but from a check after execution it appears that most objects do not.
So although I'm quite certain that my collection of objects is below the storage size limit, is there a lesser-known limit or throttle on access that is causing me problems? Is the number of objects the culprit here? Should I instead be attempting to save one big object that is a collection of the smaller ones?
I think it's the amount of data you're writing. I know you can store 20,000 small objects; you just can't write that much in 5 minutes. Write 1000, then quit. Write the next thousand, and so on. Run your function 20 times and the data is loaded. If you need to do this more often or automate it, use ScriptApp.
I'm painfully aware there probably isn't a magic bullet to this, but it's becoming a problem. Each user has hundreds of thousands of rows of metrics data across 3 tables, this is updated on a second by second basis.
When a user logs in, I want to quickly deliver them top line stats for a number of their assets (i.e. alongside each asset in navi they have top level stats).
I've tried a number of ideas, but please, if someone has advice or experience in this area, it'd be great. Stuff tried or looked into so far:
Produce static versions of top line stats every hour or so - This is intensive across all users and all assets. So how this can be done regularly, I'm not sure.
Call stats via AJAX, so they can be processed and filled in (getting top-level stats right now can take up to 10 seconds for a larger user) once the page has loaded. This could also cache stats in the session to save redoing the queries on each page load.
Query run at 30 min intervals, i.e. you log on, it'll query and then it'll hopefully use query cache every time it's loaded (only 1/2 seconds) until the next 30min interval.
The first one seems to have the most legs, but I'm not sure how to do this, given that only a small number of users will need those stats; it seems awfully expensive to do it for everyone all the time.
Your options 1 and 3 are what is known in MySQL terms as a materialized view. MySQL doesn't currently support them natively, but the concept can be emulated; the link provides examples.
Hundreds of thousands of records isn't that much. Good indexes and the use of analytic queries will get you quite far. Sadly, that concept isn't fully implemented either, but there are workarounds, as indicated in the link provided.
It really depends on the top-line stats: do you want real-time data down to the second, or are 10-, 20-, or even 30-minute intervals acceptable? Using the event scheduler, one can schedule the creation/update of reporting tables that contain summarized data and are much faster to query. That data is then available with delivery times of fractions of a second, as all the heavy lifting has already been completed. Your focus can then be on indexing these tables to improve performance, without worrying about the impact on production tables.
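A rough sketch of that reporting-table idea, assuming MySQL's event scheduler is enabled; the table names (metrics, asset_stats_summary) and columns are hypothetical:

import mysql.connector  # assumes MySQL Connector/Python

ddl = """
CREATE TABLE IF NOT EXISTS asset_stats_summary (
    asset_id     INT PRIMARY KEY,
    metric_count BIGINT,
    metric_avg   DOUBLE,
    refreshed_at DATETIME
)
"""

refresh_event = """
CREATE EVENT IF NOT EXISTS refresh_asset_stats
ON SCHEDULE EVERY 30 MINUTE
DO
    REPLACE INTO asset_stats_summary (asset_id, metric_count, metric_avg, refreshed_at)
    SELECT asset_id, COUNT(*), AVG(value), NOW()
    FROM metrics
    GROUP BY asset_id
"""

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="stats")
cur = conn.cursor()
cur.execute(ddl)
cur.execute(refresh_event)  # needs the EVENT privilege and event_scheduler = ON
conn.commit()
conn.close()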
You are in the data-warehousing domain with your setup. This means that not all the normalization (NF1) rules apply. So my approach would be to use triggers to fill a separate stats table.
The hadoop documentation states:
The right number of reduces seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * mapred.tasktracker.reduce.tasks.maximum). With 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75 the faster nodes will finish their first round of reduces and launch a second wave of reduces doing a much better job of load balancing.
Are these values pretty constant? What are the results when you choose a value between these numbers, or outside of them?
The values should be what your situation needs them to be. :)
The below is my understanding of the benefit of the values:
The 0.95 is to allow maximum utilization of the available reducers. If Hadoop defaults to a single reducer, there is no distribution of the reduce work, causing it to take longer than it should. In my (limited) cases there was a near-linear relationship between the increase in reducers and the reduction in time: if it takes 16 minutes on 1 reducer, it takes 2 minutes on 8 reducers.
The 1.75 is a value that attempts to compensate for performance differences between the nodes in a cluster. It creates more than a single pass of reducers, so that the faster machines take on additional reducers while the slower machines do not.
This figure (1.75) is one that will need to be adjusted to your hardware much more than the 0.95 value. If you have 1 quick machine and 3 slower ones, maybe you'll only want 1.10. This number will need more experimentation to find the value that fits your hardware configuration. If the number of reducers is too high, the slow machines will be the bottleneck again.
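As a concrete, hypothetical example of what those multipliers translate to:

nodes = 10              # hypothetical cluster size
slots_per_node = 4      # mapred.tasktracker.reduce.tasks.maximum

capacity = nodes * slots_per_node  # 40 reduce slots in total
print(int(0.95 * capacity))        # 38 reducers: one wave, with a little slack for failures
print(int(1.75 * capacity))        # 70 reducers: roughly two waves, for better load balancing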
To add to what Nija said above, and also a bit of personal experience:
0.95 makes a bit of sense because you are utilizing the maximum capacity of your cluster, but at the same time, you are accounting for some empty task slots for what happens in case some of your reducers fail. If you're using 1x the number of reduce task slots, your failed reduce has to wait until at least one reducer finishes. If you're using 0.85, or 0.75 of the reduce task slots, you're not utilizing as much of your cluster as you could.
We can say that these numbers are not valid anymore. According to the book "Hadoop: The Definitive Guide" and the Hadoop wiki, the target is now that each reducer should run for about five minutes.
Fragment from the book:
Choosing the Number of Reducers: The single reducer default is something
of a gotcha for new users to Hadoop. Almost all real-world jobs should
set this to a larger number; otherwise, the job will be very slow
since all the intermediate data flows through a single reduce task.
Choosing the number of reducers for a job is more of an art than a
science. Increasing the number of reducers makes the reduce phase
shorter, since you get more parallelism. However, if you take this too
far, you can have lots of small files, which is suboptimal. One rule
of thumb is to aim for reducers that each run for five minutes or so,
and which produce at least one HDFS block’s worth of output.
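A back-of-the-envelope application of that rule of thumb; all of the numbers here are made up purely for illustration:

intermediate_gb = 200     # hypothetical total map output feeding the reducers
gb_per_reducer_min = 0.5  # hypothetical per-reducer processing throughput
block_mb = 128            # HDFS block size

by_runtime = intermediate_gb / (gb_per_reducer_min * 5)  # ~5 minutes each -> 80 reducers
max_by_block = intermediate_gb * 1024 / block_mb         # beyond ~1600, outputs shrink below a block
print("aim for roughly %d reducers (well under %d)" % (by_runtime, max_by_block))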