So, I've done a bit of programming in my day: Java, C#, C++. And I've always had a fascination with computers in general. One thing I would really like to learn, and one that I think would really help my programming skills, is how software tells the hardware what to do.
I'm aware that's quite a tall order: I know it differs per language and per OS. I'm not asking for an actual answer so much as a starting point. Also, if this is actually a waste of time, that is, if it wouldn't really help my programming or wouldn't be worth it because it's a massive amount of material that would take years to pay off, saying so would be helpful too.
I can't escape the feeling that I'm asking a stupid question.
What we commonly call hardware can be thought of as a (big) number of electrical devices that function according to specific rules. Put some electrons on the input(s), and the output(s) will vary according to a fixed rule (similar devices behave the same way). The best-known such device is the transistor. Transistors can be connected so that they perform logical functions, the most used being NAND (not-AND). Using NAND gates, any kind of logic can be (and is) implemented. To sum it up, hardware performs logic functions by moving electrons around.
Now comes the interesting question: what is software? People tend to think that because there is thought involved in writing software, it doesn't exist in the real world. That isn't true. The program is stored in RAM when you write it, effectively being a pattern of electrons. This pattern then undergoes some transformations (compiler, assembler); during those steps the pattern changes from something that is meaningful to humans into something that can be used as input to the logic functions above.
On a tangent: an RS flip-flop is an interesting device. It uses two NAND gates to create a memory cell, as the sketch below shows.
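To make the NAND-to-memory jump concrete, here is a minimal Python sketch (my own illustration, not from any particular textbook) of an SR latch built from two cross-coupled NAND gates. The "hold" case is the memory:

```python
def nand(a, b):
    """NAND: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def sr_latch(s_bar, r_bar, q, q_bar):
    """Settle a cross-coupled NAND SR latch.

    Inputs are active-low: pulling s_bar to 0 sets Q, pulling
    r_bar to 0 resets Q. With both held at 1 the latch keeps
    its previous state -- that is the 'memory'.
    """
    # Iterate until the cross-coupled outputs stabilize.
    for _ in range(4):
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, q_bar = 0, 1
q, q_bar = sr_latch(0, 1, q, q_bar)   # set:   Q becomes 1
print(q, q_bar)                       # 1 0
q, q_bar = sr_latch(1, 1, q, q_bar)   # hold:  Q stays 1 (memory!)
print(q, q_bar)                       # 1 0
q, q_bar = sr_latch(1, 0, q, q_bar)   # reset: Q becomes 0
print(q, q_bar)                       # 0 1
```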
Have you thought of hardware design? Either study it by reading up, or actually design your own hardware. You could buy yourself a Raspberry Pi or an Arduino to get your hands on real hardware, or, if you don't want to get your hands dirty, use something like VirtualBox and write your own operating system.
Some random thoughts to consider. And no, your question isn't a stupid one at all.
How do you deal with a client who has different time estimates for the software product than yours?
I am going to describe a scenario that is not exactly mine, but that captures broadly the same problem. I am working as a subcontractor to a large company that has a programming department. The software project we are working on is in an area the department believes it has a handle on, but because their expertise and mine are very different, we tend to get different results.
Example: At the start of the project I suggested one way of development which they rubbished as being unrealistically difficult and suggested integrating a different framework (one they are familiar with) with the programming language we are using (Python) to get more or less the same result.
Their estimate for this integration: less than a week (they haven't done the integration before).
My estimate for the integration: over two weeks.
Using my suggested way to get the result needed (including using matplotlib, among other libraries used elsewhere in the project): 45 minutes. This is not an estimate; the bit was actually finished in 45 minutes.
Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use. They provided a broken one; it works with their internal tool, but not with mainstream .NET or Java packages, among other options. They maintain that it is my fault that the integration has taken longer than the time estimated.
The problem is not that they don't know; the problem is that they have enough knowledge about programming to be dangerous (in my opinion). Are there any guidelines for how to deal with this type of situation? A way to manage expectations? Or maybe I shouldn't get involved in such projects from the start, and in that case, what are the telltale signs?
If a client isn't happy with a time estimate, don't do the work. If they think they can do it better or faster, tell them to go ahead.
The one thing I never allow is for my estimates to be modified. That's something that caught me out early on in my career but we learn our lessons.
If clients were so good at doing the work, they wouldn't be hiring me. I'd simply point out that they hired me for my expertise, so why are they disregarding that expertise? Of course, if they were to allow the scope of the project to change (i.e., less work), that would be another matter, and one up for discussion.
If you didn't lock in exactly what they were meant to provide as part of the deal, then it's a "he says, she says" situation and, unfortunately, the customer controls the purse strings. However, often, the greatest power you can have is the ability to just walk away.
No-one says you have to do the job.
Of course, all that advice above is worth every cent you paid for it :-)
I don't know your specific circumstances.
Or maybe I shouldn't get involved in such projects from the start, and in this case what are the telltale signs?
That's my answer, for sure. If you can avoid those projects, do it.
Some signs: people thinking they know how to do things when you can tell they can't. The "oh no, let's not use this perfectly suitable tool because I don't know it" attitude is a major indicator that the person is technically challenged.
First of all, it is no fun to be in such an environment.
So, if you like to have fun at your job, and you do not need to take this job for extenuating financial reasons, then simply do not take the job that is not fun.
Since that is hardly realistic in many cases, you will end up with the job and need to manage the situation as best you can. One way is to make sure there is a paper trail documenting your objections and concerns with the plan. Try not to be overtly negative, but try to be constructive and present valid alternatives. Here you will need to feel out the political landscape, determine if the 'boss' will be appreciative or threatened by your commentary, and act accordingly.
Many times there are other issues that management is dealing with that you are not aware of. Be cautious of this fact, and maybe ask the management team if this is the case, again without being condescending or negative.
Finally, if you have alternatives that would take less time than the meetings needed to discuss them, just try one in a sandbox and show it off. That would go a long way toward 'proving' your points. The caution here is that you could be accused of not being a team player, of wasting resources, or of not following direction. Mitigate this by doing such things on your own time, or after careful consideration of how long you are spending on them and how invested your boss seems to be in the alternatives.
Hope this helps.
I ran into the same problem with integration.
"Example: for the software to be integrated with their internal system, they needed to provide a web service for me to use... They maintain that it is my fault that the integration has taken longer than the time estimated."
Wow, very similar to what I was experiencing with a client. The best thing I can suggest is to keep good documentation. In the end, that is what saved me. When it came to finger pointing, I had all of the emails and facts in order and was prepared to defend myself.
One thing I would suggest is to separate out a target/goal from an estimate. I would not change my estimate unless it involved actually removing features, or something is revealed that would make the work easier. Tell them you will try to hit the target any way you can and that you care about the business goal. However, your estimate will not change. If it's getting nowhere and they are just dense, then smile, nod, and take it if it's the only gig around.
I was just writing about this on my blog:
How to estimate the WRONG way
There are a few posts on usability but none of them was useful to me.
I need a quantitative measure of usability of some part of an application.
I need to estimate it in hard numbers to be able to compare it with future versions (e.g., for reporting purposes). The simplest way is to count clicks and keystrokes, but this seems too simple (for example, is the cost of filling a text field simply the sum of typing all the letters? I suspect it is more complicated).
I need some mathematical model for that so I can estimate the numbers.
Does anyone know anything about this?
P.S. I don't need links to resources about designing user interfaces. I already have them. What I need is a mathematical apparatus to measure existing applications interface usability in hard numbers.
Thanks in advance.
http://www.techsmith.com/morae.asp
This is what Microsoft used in part when they spent millions redesigning Office 2007 with the ribbon toolbar.
Here is how Office 2007 was analyzed:
http://cs.winona.edu/CSConference/2007proceedings/caty.pdf
Be sure to check out the references at the end of the PDF too; there's a ton of good stuff there. Look up how Microsoft did Office 2007 (regardless of how you feel about it); they spent a ton of money on this stuff.
The main concepts to approach this with are Effectiveness and Efficiency (and, in some cases, Efficacy). The basic points to remember are outlined on this webpage.
What you really want to look at are 'inspection' methods of measuring usability. These are typically more expensive to set up (in both time and money), but can yield significant results if done properly. These methods include things like heuristic evaluation, which simply compares the system interface, and the usage of the interface, against your usability heuristics (though, from what you've said above, this probably isn't what you're after).
More suited to your use, however, will be 'testing' methods, whereby you observe users performing tasks on your system. This is partially related to the point of effectiveness and efficiency, but can include various things, such as the "Think Aloud" concept (which works really well in certain circumstances, depending on the software being tested).
Jakob Nielsen has a decent (short) article on his website. There's another one, but it's more related to how to test in order to be representative, rather than how to perform the testing itself.
Consider measuring the time to perform critical tasks (using a new user and an experienced user) and the number of data entry errors for performing those tasks.
First you want to define goals: for example, increasing the percentage of users who can complete a certain set of tasks, and reducing the time they need to do it.
Then get two cameras and a few users (5-10), give them a list of tasks to complete, and ask them to think out loud. Half of the users should use the "old" system; the rest should use the new one.
Review the tapes, measure the time it took, measure success rates, and discuss interpretations endlessly; a sketch of the arithmetic follows below.
Alternatively, you can develop a system for bucket-testing -- it works the same way, though it makes it far more difficult to find out something new. On the other hand, it's much cheaper, so you can do many more iterations. Of course that's limited to sites you can open to public testing.
That obviously implies you're trying to get comparative data between two designs. I can't think of a way of expressing usability as a value.
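If you do go the comparative route, the arithmetic on the raw observations is straightforward. Here is a minimal sketch, with entirely made-up task data, of the per-design summary you would compute:

```python
import statistics

# Hypothetical observations: (seconds to finish the task, completed?)
old_ui = [(72, True), (95, True), (140, False), (88, True), (110, True)]
new_ui = [(55, True), (61, True), (78, True), (49, True), (130, False)]

def summarize(label, runs):
    """Success rate, plus time stats over the successful runs only."""
    times = [t for t, ok in runs if ok]
    rate = sum(ok for _, ok in runs) / len(runs)
    print(f"{label}: success {rate:.0%}, "
          f"mean {statistics.mean(times):.0f}s, "
          f"stdev {statistics.stdev(times):.0f}s over {len(times)} runs")

summarize("old", old_ui)
summarize("new", new_ui)
```

With only 5-10 users per design the spread will be large, so treat the numbers as one input alongside the think-aloud observations rather than as proof.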
You might want to look into the GOMS model (Goals, Operators, Methods, and Selection rules). It is a very difficult research tool to use in my opinion, but it does provide a "mathematical" basis to measure performance in a strictly controlled environment. It is best used with "expert" users. See this very interesting case study of Project Ernestine for New England Telephone operators.
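As a taste of what that "mathematical apparatus" looks like, here is a rough sketch of a Keystroke-Level Model (KLM) estimate, the simplest member of the GOMS family. It addresses the text-field question from the post directly: filling a field is not just the sum of the keystrokes, because you also pay for pointing, homing between mouse and keyboard, and mental preparation. The operator times below are the classic published averages; the function and its breakdown are illustrative, not definitive:

```python
# Classic KLM operator estimates (Card, Moran & Newell), in seconds.
K = 0.28   # press a key (average non-secretary typist, ~40 wpm)
P = 1.10   # point at a target with the mouse
B = 0.10   # press or release a mouse button
H = 0.40   # move hands between mouse and keyboard
M = 1.35   # mental preparation before a unit of action

def fill_text_field(chars):
    """Estimated time to click into a text field and type `chars` keys."""
    # M: decide what to type; P + B: click the field; H: hands to keyboard.
    return M + P + B + H + chars * K

print(f"{fill_text_field(10):.2f} s")  # ~5.75 s, vs 2.8 s for keystrokes alone
```

So for a 10-character field, roughly half the predicted time is overhead around the typing, which is exactly why raw keystroke counts feel too simple.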
Measuring usability quantitatively is an extremely hard problem. I tackled this as a part of my doctoral work. The short answer is, yes, you can measure it; no, you can't use the results in a vacuum. You have to understand why something took longer or shorter; simply comparing numbers is worse than useless, because it's misleading.
For comparing alternate interfaces it works okay. In a longitudinal study, where users are bringing their past expertise with version 1 into their use of version 2, it's not going to be as useful. You will also need to take into account time to learn the interface, including time to re-understand the interface if the user's been away from it. Finally, if the task is of variable difficulty (and this is the usual case in the real world) then your numbers will be all over the map unless you have some way to factor out this difficulty.
GOMS (mentioned above) is a good method to use during the design phase to get an intuition about whether interface A is better than B at doing a specific task. However, it only addresses error-free performance by expert users, and only measures low-level task execution time. If the user figures out a more efficient way to do their work that you haven't thought of, you won't have a GOMS estimate for it and will have to draft one up.
Some specific measures that you could look into:
Task time: measuring clock time for a standard task is good if you want to know what takes a long time. However, lab tests generally involve test subjects working much harder and concentrating much more than they do in everyday work, so comparing lab results to real users is going to be misleading.
Error rate: how often the user makes mistakes or backtracks, especially if you notice the same sort of error occurring over and over again.
Appearance of workarounds: if your users are working around a feature, or taking a bunch of steps you think are dumb, it may be a sign that your interface doesn't give them the tools to solve their problems.
Subjective impressions: don't underestimate simply asking users how well they thought things went. Subjective usability is finicky but can be revealing.
Suppose you're working on an enterprise project in which you have to get management sign-off to develop a new feature set. Usually your management has no problem signing off on some bright shiny new UI feature. Unfortunately, they have a hard time appreciating behind-the-scenes issues that are crucial to the application's well-being, such as transactions, data integrity, workflow routing, configurability, security, etc. Since they're non-technical and these issues are not immediately visible, it's not obvious to them that they matter.
How have you convinced them that these infrastructural issues have to be dealt with and that it is important to their business process?
Every craft has its unsexy sides. Things that HAVE to be done, but nobody notices them directly. In a grocery store somebody has to organize how and when to fill the shelves so they always look fresh. In a laundry you need somebody who thinks about how the processes should be optimized so that the customer gets his clothes back on time.
The tricky part is: the customer won't notice when these subtle things have been done right UNTIL HE NOTICES THEY ARE MISSING! Like when the laundry is not ready on time but two days late, or the veggies in the supermarket have brown spots and look terrible.
Same goes for IT. You don't notice good transactions until your major customer knocks on your door and tells you that an important and expensive project has failed because the database entries of your product were mysteriously mixed up. You don't notice good security until customer credit card information shows up in Elbonia (and soon after word is in the national newspapers warning customers of your company).
The thing you really have to hammer in again and again and again is that software is NOT static. It has to be cared for even after its initial development phase is over. It is not just a product you buy once and forget about. Every car manufacturer knows that service is of prime importance to the products they build, simply because things WILL occur that have to be fixed and improved. It's the same with software.
So make a presentation, visualize, verbalize, translate your technical information into benefits. Business people don't care about your wish for code aesthetics in a refactoring project, but they WILL understand that your changes will help the product to become more reliable, gain a better reputation and reduce the amount of future service requests. Make them understand by showing them the benefits!
Same thing folks have been doing for thousands of years: draw pictures. Diagram the problems, use visual metaphors familiar to your audience, drag the problem into their territory.
Assuming they're not being intentionally obtuse...
A big +1 for analogies and metaphors. If possible, find one that will resonate with the personal interests of your audience (if it's 1-2 people). For general metaphors, I often find myself using commuter traffic or subways, for some reason.
e.g. We are currently migrating an app from an OODB to Postgres/Hibernate: the bulk of this work is done in Release '4'. Many domain experts have been asking why there are so few user-facing features in R4. I regularly tell them that we have been 'tearing up the city to put in a subway. It is very expensive and undeniably risky, but once it is done, the benefits in R5+ will be astounding, truly.' The true conversation is more involved, but I can return to this theme again and again, well after R4. Months from now, I hope to say "You asked for X and it is now very easy -- precisely because you let us put in that subway back in R4".
The surest way to get upper-level management to sign off on development work is to present it in a quantifiable way, ideally in dollars. You need to explain the consequences of skimping on data integrity, security, transactions, etc., and how that will affect the customer/user community and, eventually, the bottom line. Be careful in these situations, because sometimes management expects these non-functional requirements to "just work." If that is the case, either estimate high and do these items alongside the visible UI work (ignorance is bliss), or document these areas of need as you communicate with management, so that if things do go bad as you anticipate, it's not your job on the line.
Unfortunately, it usually takes a disaster or two before this stuff gets the attention it deserves.
It really depends what your management is like, but I've had luck with good old honest-to-goodness fearmongering. If you go through a couple of disaster scenarios, and point out someone's going to get blamed if they occur, that can be enough to make their arsecovering instincts kick in and finally pay attention :)
Car analogies.
Everybody knows that 'system', and it's sufficiently complex to depict the dire situation.
I'm battling with essentially the same kind of situation. Whether it is sign-off by management or acceptance by a user/sponsor, the problem remains one of different vocabularies, priorities and perspectives. I asked a similar question here.
I also got diverse answers, tantalizingly close to useful, but not quite definitive enough. Browsing and searching SO using relevant keywords led me to find usable insights in various answers spread across many unrelated questions. Finding and extracting these gems led me to pose this question on site-mining.
It would have been useful to be able to flag the various answers and see them all in a single list, but alas, that functionality is not yet available in SO. I suggested it on uservoice.
Hope you find something you can use from the references I gave.
The right kind of countering question is the secret.
Is it okay to crash every 5 web pages?
Do we have to protect the credit card numbers?
Is it okay to have to pay contractors to deploy a patch every weekend?
Did you want it now or did you want it to work?
Robustness. When it comes down to it, you need to talk their language, which is how it affects their bottom line. If it's a security or correctness issue, you need to tell them that customers aren't going to want incorrectly behaving products, no matter how nice they look.
I like the idea of Technical Debt, because it enables technical issues to be translated (albeit loosely) into money issues -- and money is something most managers do understand.
Although the idea of technical debt was originally applied to architectural issues, it can be used more broadly for any type of situation where there is pressure to take a shortcut -- that is, go into technical debt -- rather than do it right the first time. (Doing it right is the equivalent to saving up to buy something -- it takes time -- rather than buying it on credit and going into debt.)
Just as debts can be good (e.g. home loans) and bad (e.g. credit cards), so technical debt can be good and bad. I won't try to characterise the differences completely, but good technical debt can be tracked accurately, so that you know how much in debt you are.
So, try to explain your important, non-UI problem in terms of technical debt, and the cost of not fixing it in terms of paying interest on that debt.
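A toy calculation can make the interest metaphor concrete; the numbers below are invented purely for illustration:

```python
# Invented numbers for illustration: a proper fix costs 10 dev-days now.
# Skipping it saves those 10 days but adds 1.5 dev-days of 'interest'
# per release in workarounds, support tickets, and slower changes.
fix_now = 10.0
interest_per_release = 1.5

for release in (2, 4, 8, 12):
    deferred = release * interest_per_release
    print(f"after {release:2d} releases: paid {deferred:4.1f} days of interest "
          f"({'more' if deferred > fix_now else 'less'} than the {fix_now:.0f}-day fix)")
```

The value of the exercise is not the particular numbers, but that it frames the issue as a question management can actually answer: at what point does deferring the fix stop being cheaper than doing it right?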
A descriptive picture really helps non-technical people understand what you are talking about. For example, the diagram below, from Sun, describes how information is processed in one of their somewhat complex applications.
[architecture diagram from Sun omitted] (source: sun.com)
Trying to explain this application in words to a non-techy would be impossible. Pointing at the diagram and saying "look, this part is our weak point, we need to improve it" will make sense to them. If they feel they have some understanding of what you are doing, they will be far more willing to support your request.
If one has a peer-to-peer system that can be queried, one would like to
reduce the total number of queries across the network (by distributing "popular" items widely and "similar" items together)
avoid excess storage at each node
assure good availability to even moderately rare items in the face of client downtime, hardware failure, and users leaving (possibly detecting rare items for archivists/historians)
avoid queries failing to find matches in the event of network partitions
Given these requirements:
Are there any standard approaches? If not, is there any respected but experimental research? I'm somewhat familiar with distribution schemes, but I haven't seen anything really address learning for robustness.
Am I missing any obvious criteria?
Is anybody interested in working on/solving this problem? (If so, I'm happy to open-source part of a very lame simulator I threw together this weekend, and generally offer unhelpful advice).
#cdv: I've now watched the video and it is very good; although I don't feel it quite gets to a pluggable distribution strategy, it's definitely 90% of the way there. The questions, however, highlight useful differences with this approach that address some of my further concerns, and give me some references to follow up on. Thus, I'm provisionally accepting your answer, although I consider the question open.
There are multiple systems out there with various aspects of what you seek and each making different compromises, including but not limited to:
Amazon's Dynamo: http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf
Kai: http://www.slideshare.net/takemaru/kai-an-open-source-implementation-of-amazons-dynamo-472179
Hadoop: http://hadoop.apache.org/core/docs/current/hdfs_design.html
Chord: http://pdos.csail.mit.edu/chord/
Beehive: http://www.cs.cornell.edu/People/egs/beehive/
and many others. After building a custom system along those lines, I let some of the building blocks out in open source form as well: http://code.google.com/p/distributerl/
(that's not a whole system, but a few libraries useful in building one)
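If you want a feel for the building block Chord and Dynamo share, here is a minimal consistent-hashing sketch in Python (illustrative only; real systems add virtual nodes, replication, and membership protocols). It shows how keys spread across nodes and how few keys move when a node leaves:

```python
import bisect
import hashlib

def h(s):
    """Map a string to a position on the hash ring."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # Each node sits at a fixed point on the ring.
        self.points = sorted((h(n), n) for n in nodes)

    def owner(self, key):
        """A key belongs to the first node clockwise from its position."""
        i = bisect.bisect(self.points, (h(key), ""))
        return self.points[i % len(self.points)][1]

nodes = ["node-a", "node-b", "node-c", "node-d"]
keys = [f"item-{i}" for i in range(1000)]

full = Ring(nodes)
smaller = Ring(nodes[:-1])            # node-d leaves the network
moved = sum(full.owner(k) != smaller.owner(k) for k in keys)
print(f"{moved} of {len(keys)} keys moved")  # roughly node-d's quarter, no more
```

Only the departed node's share of keys gets reassigned, which is the property that makes this family of systems tolerant of churn; the "popular items widely, similar items together" requirement is where the pluggable (and more experimental) strategies come in on top.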