Today my co-worker noticed that adding a decimal place to a progress indicator creates the impression that the program is running faster than without it (i.e. instead of 1, 2, 3... it shows 1, 1.2, 1.4, 1.6, ...). I checked it and was surprised that I got the same impression even though I knew it was faked.
That makes me wonder: What other things are there to create the impression of a fast application?
Of course the best way is to actually make the application faster, but from an algorithmic point of view there's often not much you can do. Additionally, I think making a user less frustrated is a good thing, even though it is more or less a psychological trick.
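To make the effect concrete, here is a minimal sketch of what I mean (Python with made-up names; the real program could be written in anything): the same loop, with the number of decimal places on the readout as a switch.

```python
import sys
import time

def show_progress(done, total, decimals=1):
    # With decimals=1 the displayed number changes ten times as often as with
    # decimals=0, even though the underlying work is exactly the same.
    pct = 100.0 * done / total
    sys.stdout.write(f"\rProcessing: {pct:.{decimals}f}%")
    sys.stdout.flush()

total_items = 500
for i in range(total_items):
    time.sleep(0.01)                       # stand-in for the real work
    show_progress(i + 1, total_items, decimals=1)
print()
```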
This effect can be very dramatic: doing relatively large amounts of work to give users a correct and frequently updated view of progress can of course slow down the actual running time of the application (screen updates, calculations needed for the progress display, etc.) while still giving the user the feeling that it takes less time.
Some of the things you could do in GUIs:
make sure your application remains responsive (resizing the forms remains possible, perhaps offer a cancel button for the operation?) while background processing is occurring; see the sketch after this list
be very consistent in showing status messages/hourglass cursors throughout the application
if you have something updating during an operation, make sure it updates often (like the almost ridiculous display of filenames and registry keys during an install), or at least make it an option for users who like this behavior
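On the first point, here is a framework-agnostic sketch of the usual pattern, in Python with illustrative names: the long job runs on a worker thread, reports progress through a queue, and checks a cancel flag between steps, so the UI thread never blocks. A real GUI would hook the same idea into its own event loop rather than the polling loop shown here.

```python
import threading
import queue
import time

progress_updates = queue.Queue()      # worker -> UI messages
cancel_requested = threading.Event()  # would be set by a Cancel button

def background_job(n_steps):
    # Do the long-running work off the UI thread, reporting progress and
    # honouring a cancel request between steps.
    for step in range(n_steps):
        if cancel_requested.is_set():
            progress_updates.put("cancelled")
            return
        time.sleep(0.05)              # stand-in for one unit of real work
        progress_updates.put(f"{step + 1}/{n_steps} done")
    progress_updates.put("finished")

worker = threading.Thread(target=background_job, args=(100,), daemon=True)
worker.start()

# The "UI loop": it never blocks on the worker, so a real window would stay
# responsive (resizable, cancel button clickable) while the work happens.
while worker.is_alive() or not progress_updates.empty():
    try:
        print("status:", progress_updates.get(timeout=0.1))
    except queue.Empty:
        pass                          # a real UI would keep pumping input events here
```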
Present some intermediate, interesting results first. "We've found 2,359 zetuyls matching your request, we're just calculating their future value".
I've seen transport reservation systems do that sort of thing quite nicely.
Showing details (such as the names of files being copied in an installation process) can often make things seem like they're going faster because there's constant, noticeable activity (as opposed to a slowly-creeping progress bar).
If your algorithm is such that it generates a list of results, and you have some way of displaying results as they're generated (as opposed to all at once at the end), do so - the sooner the user has something else to look at besides a spinner, the better.
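As a tiny illustration of that (Python, hypothetical names), a generator lets you show each result the moment it exists instead of after the whole search finishes:

```python
import time

def find_results(n):
    # Pretend search that produces results one at a time.
    for i in range(n):
        time.sleep(0.2)               # stand-in for the cost of finding one result
        yield f"result #{i + 1}"

# Streaming display: the user sees the first result after ~0.2 s instead of
# staring at a spinner for the full 2 s it takes to produce all ten.
for item in find_results(10):
    print(item)
```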
Allow the user to do something else while your application is processing data or waiting for a result. Within the application you could let them refine the search query or collect information to prepare the next steps. Or just present some other "work" that needs doing, or some hints, documentation, statistics, entertainment...
Use one of those animated progress bars which look like they are doing something even when they aren't progressing. Also, as peSHIr said, print each filename that you copy and update it really fast; you could even fake it by cycling through a large string array N times a second.
I've read somewhere that if the process seems to be speeding up, it feels faster than when it's progressing at a steady pace. I can't find the reference right now, but it should be simple to implement (a small sketch follows the links below).
(10 minutes later...)
A further look down Google lane unearthed the following references:
http://www.azarask.in/blog/post/hacking-memory/
http://blogs.msdn.com/time/
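One simple way to get a "speeding up" bar, offered purely as an assumption rather than anything those references prescribe, is to display a power of the true progress: the shown value lags early on and accelerates toward the end while still starting at 0% and finishing at 100%.

```python
def displayed_progress(actual, exponent=1.8):
    # Map true progress in [0, 1] to a displayed value that starts slower and
    # accelerates toward the end; exponent > 1 is an arbitrary illustrative choice.
    return actual ** exponent

for step in range(11):
    actual = step / 10
    print(f"actual {actual:4.0%}  ->  shown {displayed_progress(actual):4.0%}")
```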
Here is an article about "Expressing time in your UI" and user perception of time. I do not know if it is exactly what you expect as an answer, but it is definitely worth the read.
Add a thread sleep at critical points. With each passing version, reduce the delay.
Related
I am building a Flash AS3 application that allows users to modify images (drag and drop, select, scale, alter saturation, etc.) and then submit/save them to a server.
The user will then have the ability to log in and access these saved images via a separate admin tool in a thumbnail gallery. They can either delete an image or click a thumbnail to view it at original size.
I am architecting and building the front end only and will have design-ready assets supplied.
Since I have been burned working to a fixed quote before, I would appreciate ANY feedback or advice on quoting this project!
Thanks in advance!
I've done a lot of estimating, and I've found that the only way I can get a reliable estimate is to break down all of the tasks and subtasks to as granular a level as I can, estimate all of those elements, and then add it up. This usually takes me several passes, and a couple of times waking up in the middle of the night.
It's time-intensive, but works out really well in at least three ways.
Obviously, the first way is that you end up with a pretty reliable estimate.
You also think of all kinds of things that you wouldn't have thought of if you hadn't sat down and written everything out (which is a big part of why estimates turn out to be wrong in the first place). You also give yourself the chance to really think through your overall approach, and you end up making better decisions on things like which framework to use.
Writing everything out to the detail level helps a lot in sequencing the work you're doing with the work of other teammates. Makes it easy to see that at a given point you'll be roadblocked if you don't have an API from the server team, etc. Also helps you realize how you will potentially roadblock your teammates, and gives you the ability to deal with that.
Hope that's helpful. Making myself work hard at the estimation end of a project has really helped me be successful in the actual development aspect.
I've recently begun to unveil and slowly roll out a homemade CMS. The site allows a lot of customization, with movement towards internationalization and customization at a level that doesn't require touching source code. This is a personal project, and the entire intent was to see how far I can push my own programming limits. (The question of distributing a CMS that handles a blog, a webcomic, and a small forum isn't one I'm willing to consider until I clean it up and work on it some more; besides, seeing as it's an amateur project, I doubt it has much gravity compared to other, more refined projects... but those are not topics that concern the question at hand.)
I've put together some instrumentation that lets me see how fast each page is generated and how many queries are run; I'm seeing 9-13 MySQL queries performed per page. Average time to generate a page is somewhere between 10-20 ms. Now, not having any experience with professional design, what is the optimum I should be striving for?
What are some ways to reduce generation time (or, with an average of 15 ms/page, is this not even a concern)? And what are tactics for reducing the number of queries on a page where most of the content, including things like menu items, is loaded from a MySQL database?
Mind you, this is a very broad question; it isn't my intent to ask a general question or spark conversation, but to find out ways of reducing the load (if any) on a server that such a system could create.
Using a PHP opcode cache will dramatically cut down on the time taken to open and compile PHP scripts, by skipping the parsing and compilation into bytecode.
Turning on the MySQL query cache is generally (though not always) a good idea.
Rather than focusing on the number of queries, focus on reducing the time those queries take by optimising your queries. It is often much more efficient to have a larger number of small, optimised queries than to try and reduce the number of queries.
Use a profiler such as the one built into XDebug. Together with a viewer like KCacheGrind or WinCacheGrind, optimising code really helps when you know what to focus on. It's not worth optimising something that contributes only a negligible amount to your total execution time. It's worth getting to know what everything in *CacheGrind means.
My PHP content management system usually loads a page in about the same amount of time (down to a minimum of 8 ms when everything is a cache hit). But very occasionally, when you do something complex, it may take over 500 ms. For user experience the typical time matters more than the outliers, but for server load the average time matters more, so those 500 ms outliers suddenly become quite important.
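A throwaway illustration of that distinction, with made-up numbers: a single slow outlier barely moves the median, which is closer to what a typical visitor feels, but it dominates the mean, which is what the server actually pays for.

```python
from statistics import mean, median

# Nine fast pages and one slow outlier, in milliseconds (made-up numbers).
request_times_ms = [12, 14, 11, 13, 15, 12, 14, 13, 12, 500]

print("median (typical experience):", median(request_times_ms), "ms")             # 13.0
print("mean (relevant to server load):", round(mean(request_times_ms), 1), "ms")  # 61.6
# The one 500 ms request barely shifts the median but drags the mean up,
# because it accounts for most of the total server time spent.
```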
If you are mainly developing these sites for small companies, or in other situations where you don't predict or imagine high traffic (i.e. nothing like Digg/Facebook/etc.), then an average of 15 ms should be fine.
May I ask what the 12 queries are for? I imagine they are for getting menu items, getting page content and the like. There are various methods of combining/optimising queries, so if you post a few of them, I (and other stackers) may be able to help you optimise them.
It depends... as it always does with performance questions. If the system currently meets your performance requirements, then don't worry too much.
Generally, if your page generation time is 15 ms it will only be a fraction of the total click-to-glass time that the user experiences; see the Yahoo Exceptional Performance pages. There will be other things to look at in order to get the fastest possible page load time.
On the server, the chances are that the DB is caching the results of nearly all, if not all, of the queries you are running, hence the very fast page timings. If you haven't already done so, you might want to load up a larger data set to test the app; you may find that performance degrades as the data set grows.
I'd like to have a special part of the admin area which shows a list of pages/queries that were slow over the past week.
Is there a gem / plugin or other easy way to implement/automate such functionality for the Rails framework?
http://www.newrelic.com/features.html
And a short screencast for RPM:
http://railslab.newrelic.com/2009/01/22/new-relic-rpm
P.S. I don't work for New Relic :)
Here's one example, and another and yet another. See google for even more.
Yeah, it seems like a nice idea. I've never seen it before, but I imagine you could just log the time each request takes to a DB; then a simple query will show you the slowest requests.
How you optimize your app depends entirely on your code; I guess there's no silver bullet for that.
First thing that comes to mind: one could track the execution time of the queries, and if it exceeds some threshold that is considered normal (the average, maybe), log it along with some profiling information (which is discarded otherwise). There is a rough sketch of this idea below.
It might also be feasible to profile individual parts of the request (like data acquisition from the DB, the logic, and so on) and again compare those times against some averages.
One pitfall is that some pages/queries are bound to take significantly longer than others because of the difference in the amount of "work" they do. One would have to keep a whole lot of averages for different parts of the site / different types of queries in order to filter out the constant flow of normal queries that simply take longer by design.
This is a very simple approach though, I'm sure there are better ways to do it.
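The question is about Rails, but the threshold idea is language-agnostic. Here is a rough sketch of it in Python (all names, the 0.5 s cut-off, and the in-memory log are illustrative assumptions); the same shape would apply to a Rails around_action or middleware:

```python
import time
import functools

SLOW_THRESHOLD_SECONDS = 0.5        # arbitrary cut-off for "worth logging"
slow_log = []                       # stand-in for a DB table of slow requests

def log_if_slow(handler):
    # Wrap a request handler; record it only when it exceeds the threshold.
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = handler(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if elapsed > SLOW_THRESHOLD_SECONDS:
            slow_log.append({"handler": handler.__name__, "seconds": elapsed})
        return result
    return wrapper

@log_if_slow
def report_page():
    time.sleep(0.6)                 # stand-in for a slow page
    return "ok"

report_page()
print(slow_log)                     # later, sort/query this to find the worst offenders
```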
I've always wondered how on earth somebody can develop algorithms to break/cheat the licensing constraints of the many shareware programs out there.
Just for curiosity.
Apart from being illegal, it's a very complex task.
Speaking at a purely theoretical level, the common way is to disassemble the program to be cracked and try to find where the key or the serial code is checked.
Easier said than done, since any serious protection scheme will check values in multiple places and will also derive critical information from the serial key for later use, so that when you think you have guessed it, the program will crash.
To create a crack you have to identify all the points where a check is done and modify the assembly code appropriately (often inverting a conditional jump, or storing constants into memory locations).
To create a keygen you have to understand the algorithm and write a program to redo the exact same calculation (I remember an old version of MS Office whose serial had a very simple rule: the sum of the digits had to be a multiple of 7, so writing the keygen was rather trivial).
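Just to show how trivial that quoted rule would be to reproduce, here is a toy generator for the digit-sum constraint (Python; obviously not a real scheme and not tied to any actual product's key format):

```python
import random

def digit_sum(n):
    return sum(int(d) for d in str(n))

def generate_toy_serial(length=10):
    # Keep drawing random numbers until the quoted rule holds: the digit sum
    # must be a multiple of 7. Real schemes are (hopefully) far less trivial.
    while True:
        candidate = random.randrange(10 ** (length - 1), 10 ** length)
        if digit_sum(candidate) % 7 == 0:
            return candidate

print(generate_toy_serial())
```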
Both activities require you to follow the execution of the application in a debugger and try to figure out what's happening. And you need to know the low-level API of your operating system.
Some heavily protected applications have their code encrypted so that the file can't be disassembled. It is decrypted when loaded into memory, but then they refuse to start if they detect that an in-memory debugger has been attached.
In essence it's something that requires very deep knowledge, ingenuity and a lot of time! Oh, did I mention that it is illegal in most countries?
If you want to know more, Google for the +ORC Cracking Tutorials; they are very old and probably useless nowadays, but they will give you a good idea of what it means.
Anyway, a very good reason to know all this is if you want to write your own protection scheme.
The bad guys search for the key-check code using a disassembler. This is relatively easy if you know how to do it.
Afterwards you translate the key-checking code to C or another language (this step is optional). Reversing the process of key-checking gives you a key-generator.
If you know assembler, it takes roughly a weekend to learn how to do this. I did it a few years ago (never released anything, though; it was just research for my game-development job, since to write a hard-to-crack key scheme you have to understand how people approach cracking).
Nils's post deals with key generators. For cracks, you usually find a branch point and invert (or remove) the condition. For example, the program will test whether the software is registered; the test may return zero if so, and then jump accordingly. You can change the "jump if equal to zero (je)" to "jump if not equal to zero (jne)" by modifying a single byte. Or you can write no-operations over various portions of the code that do things you don't want done.
Compiled programs can be disassembled and with enough time, determined people can develop binary patches. A crack is simply a binary patch to get the program to behave differently.
First, most copy-protection schemes aren't terribly well advanced, which is why you don't see a lot of people rolling their own these days.
There are a few methods used to do this. You can step through the code in a debugger, which does generally require a decent knowledge of assembly. Using that you can get an idea of where in the program copy protection/keygen methods are called. With that, you can use a disassembler like IDA Pro to analyze the code more closely and try to understand what is going on, and how you can bypass it. I've cracked time-limited Betas before by inserting NOOP instructions over the date-check.
It really just comes down to a good understanding of software and a basic understanding of assembly. Hak5 did a two-part series in the first two episodes this season covering the basics of reverse engineering and cracking. It's really basic, but it's probably exactly what you're looking for.
A would-be cracker disassembles the program and looks for the "copy protection" bits, specifically for the algorithm that determines if a serial number is valid. From that code, you can often see what pattern of bits is required to unlock the functionality, and then write a generator to create numbers with those patterns.
Another alternative is to look for functions that return "true" if the serial number is valid and "false" if it's not, then develop a binary patch so that the function always returns "true".
Everything else is largely a variant on those two ideas. Copy protection is always breakable by definition - at some point you have to end up with executable code or the processor couldn't run it.
For the serial number, you can just extract the algorithm and start throwing guesses at it, looking for a positive response. Computers are powerful; it usually only takes a little while before it starts spitting out hits.
As for cracking, I used to be able to step through programs at a high level and look for the point where they stopped working. Then you go back to the last call that succeeded, step into it, and repeat. Back then, the copy protection usually worked by writing to the disk and checking whether a subsequent read succeeded (if it did, the check concluded you were running a copy, because publishers used to burn part of the floppy with a laser so that the original couldn't be written to).
Then it was just a matter of finding the right call and hardcoding the correct return value from that call.
I'm sure it's still similar, but now they go to a lot of effort to hide the location of the call. The last one I tried, I gave up on because it kept loading code over the code I was single-stepping through, and I'm sure it has gotten a lot more complicated since then.
I wonder why they don't just distribute personalized binaries, where the name of the owner is stored somewhere (encrypted and obfuscated) in the binary, or better, distributed over the whole binary. AFAIK Apple does this with the music files from the iTunes store, though there it's far too easy to remove the name from the files.
I assume each crack is different, but I would guess in most cases somebody spends a lot of time in the debugger tracing the application in question.
The serial generator takes that one step further by analyzing the algorithm that checks the serial number for validity and reverse-engineering it.
There are a few posts on usability but none of them was useful to me.
I need a quantitative measure of usability of some part of an application.
I need to estimate it in hard numbers to be able to compare it with future versions (e.g. for reporting purposes). The simplest way is to count clicks and keystrokes, but this seems too simple (for example, is the cost of filling a text field a simple sum of typing all the letters? I guess it is more complicated).
I need some mathematical model for that so I can estimate the numbers.
Does anyone know anything about this?
P.S. I don't need links to resources about designing user interfaces. I already have them. What I need is a mathematical apparatus to measure existing applications interface usability in hard numbers.
Thanks in advance.
http://www.techsmith.com/morae.asp
This is what Microsoft used in part when they spent millions redesigning Office 2007 with the ribbon toolbar.
Here is how Office 2007 was analyzed:
http://cs.winona.edu/CSConference/2007proceedings/caty.pdf
Be sure to check out the references at the end of the PDF too, there's a ton of good stuff there. Look up how Microsoft did Office 2007 (regardless of how you feel about it), they spent a ton of money on this stuff.
The main ideas to pursue here are effectiveness and efficiency (and, in some cases, efficacy). The basic points to remember are outlined on this webpage.
What you really want to look at doing is 'inspection' methods of measuring usability. These are typically more expensive to set up (both in terms of time and money), but can yield significant results if done properly. These methods include things like heuristic evaluation, which is simply comparing the system interface, and the usage of the system interface, against your usability heuristics (though, from what you've said above, this probably isn't what you're after).
More suited to your use, however, will be 'testing' methods, whereby you observe users performing tasks on your system. This is partially related to the point of effectiveness and efficiency, but can include various things, such as the "Think Aloud" concept (which works really well in certain circumstances, depending on the software being tested).
Jakob Nielsen has a decent (short) article on his website. There's another one, but it's more related to how to test in order to be representative, rather than how to perform the testing itself.
Consider measuring the time to perform critical tasks (using a new user and an experienced user) and the number of data entry errors for performing those tasks.
First you want to define goals: for example increasing the percentage of users who can complete a certain set of tasks, and reducing the time they need for it.
Then get two cameras and a few users (5-10), give them a list of tasks to complete, and ask them to think out loud. Half of the users should use the "old" system; the rest should use the new one.
Review the tapes, measure the time it took, measure success rates, discuss endlessly about interpretations.
Alternatively, you can develop a system for bucket-testing -- it works the same way, though it makes it far more difficult to find out something new. On the other hand, it's much cheaper, so you can do many more iterations. Of course that's limited to sites you can open to public testing.
That obviously implies you're trying to get comparative data between two designs. I can't think of a way of expressing usability as a value.
You might want to look into the GOMS model (Goals, Operators, Methods, and Selection rules). It is a very difficult research tool to use in my opinion, but it does provide a "mathematical" basis to measure performance in a strictly controlled environment. It is best used with "expert" users. See this very interesting case study of Project Ernestine for New England Telephone operators.
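The simplest member of the GOMS family is the Keystroke-Level Model (KLM), which speaks directly to the "is a text field just the sum of its keystrokes?" question above: you assign a standard time to each physical and mental operator and add them up. A back-of-the-envelope sketch in Python; the operator times are the commonly cited textbook values, and the operator sequence chosen for the example is my own assumption:

```python
# Textbook operator times in seconds from the Keystroke-Level Model
# (Card, Moran & Newell); exact values vary by source and by user skill.
K = 0.28   # one keystroke, average typist
P = 1.10   # point at a target with the mouse
B = 0.10   # press or release a mouse button (a click is press + release)
H = 0.40   # move a hand between keyboard and mouse
M = 1.35   # mental preparation before a cognitive "chunk"

def fill_text_field(text):
    # Predicted time to click into a field and type `text`, using an assumed
    # sequence: M (decide) + H (reach for mouse) + P + 2*B (click the field)
    # + H (back to keyboard) + K per character.
    return M + H + P + 2 * B + H + K * len(text)

print(f"Filling a 10-character field: ~{fill_text_field('x' * 10):.1f} s")
# So a text field is not just the keystrokes: for short inputs the fixed
# pointing/homing/mental overhead dominates the total.
```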
Measuring usability quantitatively is an extremely hard problem. I tackled this as a part of my doctoral work. The short answer is, yes, you can measure it; no, you can't use the results in a vacuum. You have to understand why something took longer or shorter; simply comparing numbers is worse than useless, because it's misleading.
For comparing alternate interfaces it works okay. In a longitudinal study, where users are bringing their past expertise with version 1 into their use of version 2, it's not going to be as useful. You will also need to take into account time to learn the interface, including time to re-understand the interface if the user's been away from it. Finally, if the task is of variable difficulty (and this is the usual case in the real world) then your numbers will be all over the map unless you have some way to factor out this difficulty.
GOMS (mentioned above) is a good method to use during the design phase to get an intuition about whether interface A is better than B at doing a specific task. However, it only addresses error-free performance by expert users, and only measures low-level task execution time. If the user figures out a more efficient way to do their work that you haven't thought of, you won't have a GOMS estimate for it and will have to draft one up.
Some specific measures that you could look into:
Measuring clock time for a standard task is good if you want to know what takes a long time. However, lab tests generally involve test subjects working much harder and concentrating much more than they do in everyday work, so comparing results from the lab to real users is going to be misleading.
Error rate: how often the user makes mistakes or backtracks. Especially if you notice the same sort of error occurring over and over again.
Appearance of workarounds: if your users are working around a feature, or taking a bunch of steps that you think are dumb, it may be a sign that your interface doesn't give them the tools they need to solve their problems.
Don't underestimate simply asking users how well they thought things went. Subjective usability is finicky but can be revealing.