I've seen a lot of code in Sitecore where Assert.IsNotNull is used before any logic;
e.g.
Database database = Factory.GetDatabase(itemUri.DatabaseName);
Assert.IsNotNull(database, itemUri.DatabaseName);
return database.GetItem(attribute);
Could someone help me understand why I would use this?
This topic isn't really specific to Sitecore, even though in this case the assert methods are within the Sitecore library.
In general, assertions are used to ensure your code is correct during development, while exception handling makes sure your code copes with unpredictable circumstances at runtime.
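As a rough, language-agnostic sketch of that distinction (written in Java here purely for illustration; the class and names are hypothetical):

import java.util.List;

// Hypothetical sketch: assertions guard assumptions that should never fail if
// the code is correct; exceptions handle conditions the program must cope with.
public class OrderProcessor {

    public void process(List<String> items) {
        // A developer error if this ever fires; checked only when assertions
        // are enabled (java -ea), so it documents an internal invariant.
        assert items != null : "items must not be null";

        // A runtime condition that callers are expected to handle.
        if (items.isEmpty()) {
            throw new IllegalArgumentException("no items to process");
        }
        // ... actual processing ...
    }
}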
Take a look at these SO questions for some very good explanations.
When to use an assertion and when to use an exception
When to use assert() and when to use try catch?
Here's an article specifically about the use of Sitecore assertions:
http://briancaos.wordpress.com/2012/01/20/sitecore-diagnostics-assert-statements/
For TDD you have to:
Create a test that fails
Do the simplest thing that could possibly work to pass the test
Add more variants of the test and repeat
Refactor when a pattern emerges
With this approach you're supposed to cover all the cases (the ones that come to my mind, at least), but I wonder if I am being too strict here, and whether it is possible to "think ahead" to some scenarios instead of simply discovering them.
For instance, I'm processing a file, and if it doesn't conform to a certain format I have to throw an InvalidFormatException.
So my first test was:
@Test
public void testFormat() throws Exception {
    // empty file doesn't do anything nor throw anything
    processor.validate("empty.txt");
    try {
        processor.validate("invalid.txt");
        assert false : "Should have thrown InvalidFormatException";
    } catch (InvalidFormatException ife) {
        assert "Invalid format".equals(ife.getMessage());
    }
}
I run it and it fails because it doesn't throw an exception.
So the next thing that comes to my mind is "Do the simplest thing that could possibly work", so I write:
public void validate(String fileName) throws InvalidFormatException {
    if (fileName.equals("invalid.txt")) {
        throw new InvalidFormatException("Invalid format");
    }
}
Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.)
I know that I will eventually have to add another file name and another test that would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns that usage unveils), but:
Q: Am I taking the "Do the simplest thing..." advice too literally?
I think your approach is fine, if you're comfortable with it. You didn't waste time writing a silly case and solving it in a silly way - you wrote a serious test for real desired functionality and made it pass in - as you say - the simplest way that could possibly work. Now - and into the future, as you add more and more real functionality - you're ensuring that your code has the desired behavior of throwing the correct exception on one particular badly-formatted file. What comes next is to make that behavior real - and you can drive that by writing more tests. When it becomes simpler to write the correct code than to fake it again, that's when you'll write the correct code. That assessment varies among programmers - and of course some would decide that time is when the first failing test is written.
You're using very small steps, and that's the most comfortable approach for me and some other TDDers. If you're more comfortable with larger steps, that's fine, too - but know you can always fall back on a finer-grained process on those occasions when the big steps trip you up.
Of course your interpretation of the rule is too literal.
It should probably sound like "Do the simplest potentially useful thing..."
Also, I think that when writing the implementation you should forget the body of the test you are trying to satisfy. You should remember only the name of the test (which should tell you what it tests). In this way you will be forced to write code generic enough to be useful.
I too am a TDD newbie struggling with this question. While researching, I found this blog post by Roy Osherove that was the first and only concrete and tangible definition of "the simplest thing that could possibly work" that I have found (and even Roy admitted it was just a start).
In a nutshell, Roy says:
Look at the code you just wrote in your production code and ask yourself the following:
“Can I implement the same solution in a way that is ..”
“.. More hard-coded ..”
“.. Closer to the beginning of the method I wrote it in.. “
“.. Less indented (in as less “scopes” as possible like ifs, loops, try-catch) ..”
“.. shorter (literally less characters to write) yet still readable ..”
“… and still make all the tests pass?”
If the answer to one of these is “yes” then do that, and see all the tests still passing.
Lots of comments:
If validation of "empty.txt" throws an exception, you don't catch it.
Don't Repeat Yourself. You should have a single test function that decides if validation does or does not throw the exception. Then call that function twice, with two different expected results.
I don't see any signs of a unit-testing framework. Maybe I'm missing them? But just using assert won't scale to larger systems. When you get a result from validation, you should have a way to announce to a testing framework that a given test, with a given name, succeeded or failed.
I'm alarmed at the idea that checking a file name (as opposed to contents) constitutes "validation". In my mind, that's a little too simple.
Regarding your basic question, I think you would benefit from a broader idea of what the simplest thing is. I'm also not a fundamentalist TDDer, and I'd be fine with allowing you to "think ahead" a little bit. This means thinking ahead to this afternoon or tomorrow morning, not thinking ahead to next week.
You missed point #0 in your list: know what to do. You say you are processing a file for validation purposes. Once you have specified what "validation" means (hint: do this before writing any code), you might have a better idea of how to a) write tests that, well, test the specification as implemented, and b) write the simplest thing.
If, e.g., validation is "must be XML", your test case is just some non-XML-conformant string, and your implementation uses an XML library and (if necessary) transforms its exceptions into those specified for your "validation" feature.
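A minimal sketch of that approach, assuming the rule really is "must be well-formed XML" and reusing the InvalidFormatException from the question (the Processor class and method shape are otherwise made up):

import java.io.StringReader;

import javax.xml.parsers.DocumentBuilderFactory;

import org.xml.sax.InputSource;

// The question's checked exception, declared minimally so the sketch compiles.
class InvalidFormatException extends Exception {
    InvalidFormatException(String message) { super(message); }
}

public class Processor {

    // Delegates the hard work to the XML parser and translates its failures
    // into the exception specified for the "validation" feature.
    public void validate(String content) throws InvalidFormatException {
        try {
            DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(content)));
        } catch (Exception e) { // SAXException, IOException, ParserConfigurationException
            throw new InvalidFormatException("Invalid format");
        }
    }
}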
One thing of note to future TDD learners - the TDD mantra doesn't actually include "Do the simplest thing that could possibly work." Kent Beck's TDD Book has only 3 steps:
Red— Write a little test that doesn't work, and perhaps doesn't even compile at first.
Green— Make the test work quickly, committing whatever sins necessary in the process.
Refactor— Eliminate all of the duplication created in merely getting the test to work.
Although the phrase "Do the simplest thing..." is often attributed to Ward Cunningham, he actually asked a question, "What's the simplest thing that could possibly work?", and that question was later turned into a command - which Ward believes may confuse rather than help.
Edit: I can't recommend reading Beck's TDD Book strongly enough - it's like having a pairing session with the master himself, giving you his insights and thoughts on the Test Driven Development process.
Just as a method should do one thing only, one test should test one thing (one behavior) only. To address the example given, I'd write two tests, for instance test_no_exception_for_empty_file and test_exception_for_invalid_file. The second could indeed be several tests - one per sort of invalidity.
The third step of the TDD process should be interpreted as "add a new variant of the test", not "add a new variant to the test". Indeed, a unit test should be atomic (test one thing only) and generally follows the triple-A pattern: Arrange - Act - Assert. And it's very important to verify that the test fails first, to ensure it is really testing something.
I would also separate the responsibility of reading the file from validating its content. That way, the tests can pass a buffer to the validate() function and do not have to read files. Usually unit tests do not access the filesystem because that slows them down.
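Putting those suggestions together, the two atomic tests might look roughly like this once validate() takes the content rather than a file name (JUnit 4 here; Processor and InvalidFormatException come from the question, everything else is illustrative):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Test;

public class ProcessorTest {

    private final Processor processor = new Processor();

    @Test
    public void noExceptionForEmptyContent() throws Exception {
        // Per the question's spec, empty input must simply return without throwing.
        processor.validate("");
    }

    @Test
    public void exceptionForInvalidContent() {
        try {
            processor.validate("this is not the expected format");
            fail("Should have thrown InvalidFormatException");
        } catch (InvalidFormatException expected) {
            assertEquals("Invalid format", expected.getMessage());
        }
    }
}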
Just as the title says: what ways do you use to test your own code so that it isn't a boring task? Do you use any tool? For my projects, I use a spreadsheet to list all the possible routines, i.e. from the basic CRUD through to all the weird routines. I make about 10 routines.
I find about 2-3 bugs, sometimes major ones, by doing this. And if I don't do this, the client reports another bug.
So tell me, what technique do you use to test your own code in such a way that it doesn't bore you?
Edit:
I forgot to mention that I am working particularly on web-based apps, and my language is PHP with the CakePHP framework.
Have fast tests. The (more) immediate feedback helps to achieve short iterations. This can almost make you addicted to starting the next test run.
If you find testing boring, it is because you see testing your code as a necessary evil... at least that is how I perceive you see it.
All you need here is a change in your point of view towards testing... and more specifically... a change in HOW you are testing. You love programming a lot more than testing... well program your tests... then it is just as fun as programming the thing to begin with... and when you are done you have
a program that works
a test suite that remains and tests it on every build
So leave that Excel sheet and step-by-step debugger behind and join the fun :-)
Of course there is more to it, and this is where test frameworks (JUnit, TestNG, DUnit, NUnit...) come in handy: they take the little pains away and leave only the coding part of the test.
Happy coding and by extension.. happy testing :-)
A few references you may find useful. I am not a PHP expert, far from it, but these seemed to fit the purpose.
http://www.simpletest.org/
http://www.phpunit.de/
http://laughingmeme.org/2003/08/05/a-few-tips-for-writing-useful-libraries-in-php/
I used to think the same as you. When I first started programming, we had to work out what the output would be on paper and then do visual comparisons of the actual and expected output. Talk about tedious. A couple of years ago, I discovered Test Driven Development and xUnit and now I love tests.
Basically, in TDD, you have a framework designed to allow you to write tests and run them very easily. So, writing tests just becomes writing code. The process is:
Just write enough to allow you to write a test. E.g. you're adding a method to a class, so you just write the method signature and any return statement needed to get it to compile (see the sketch after this list).
Then you write your first test and run the framework to see that it fails.
Then you add code to/refactor your method to get the test to pass.
Then you add the next test and see that it fails.
Repeat 3 and 4 until you can't think of any more tests.
You've finished.
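As a tiny illustration of steps 1 and 2, here is a made-up Discount example in Java/JUnit (none of these names come from the answer):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {

    // Step 2: the first test; it fails against the stub below, which is the point.
    @Test
    public void tenPercentOffAHundredGivesNinety() {
        assertEquals(90.0, new Discount(10).apply(100.0), 0.001);
    }
}

// Step 1: just enough to compile - the method signature and a dummy return value.
class Discount {
    Discount(int percent) {}

    double apply(double price) {
        return 0.0; // deliberately wrong; steps 3 and 4 drive the real calculation
    }
}

Running the framework now shows a red bar; step 3 then replaces the dummy return with the real calculation until the bar goes green.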
That's one of the nice things about TDD: once your code passes every test you can think of, you know you're finished - without TDD, sometimes it's difficult to know when to stop. Where do your tests come from? They come from the spec. TDD often helps you to realise that the spec. is full of holes as you think of test cases for things that weren't in the spec. You can get these questions answered before you start writing the code to deal with them.
Another nice thing is that when you discover a bug later, you can start reworking your code safe in the knowledge that all of the existing tests will prove your code still works for all the known cases, whilst the new tests you've written to recreate the bug will show you when you've fixed it.
You can add unit tests to existing code - just add them for the bits you're changing. As you keep coming back to it, the tests will get more and more coverage.
xUnit is the generic name for a bunch of frameworks that support different languages: JUnit for Java, NUnit for .NET, etc. There's probably already one for whatever language you use. You can even write your own framework. Read this book - it's excellent.
For new code, work out what the code should do, write a test that asserts that the code does it, work out how to do it, then write the code.
For finding bugs in existing code, writing a test which reproduces the bug makes it easier to fix.
This isn't boring, because in both cases the tests have a high likelihood of failure.
For UAT, I haven't found any non-boring way - you go through the requirements one by one and write as many tests as are required for the functionality. Ideally for new projects, that would have been mostly done up-front as part of the elaboration, but not always. It's only when you're writing tests after the fact, and you have a long list of tests which you already know will pass, that it gets boring.
I don't see how it can be boring, since it's a large part of programming itself. Finding and removing bugs is very important, but if you think it's boring, maybe you would rather write code, in which case you can write a few lines that test the critical parts of your code.
Use a test-first approach / test-first pair programming.
If you write them after you have written your own code, then your target is to find mistakes in your work = sad target.
Conversely, if you write your tests before you code, then your target is to write flawless software = happy target.
You probably mean tedious, rather than boring.
If so, this article may help
"No testing, no boring."
Write automatic unit tests, with PHPUnit or SimpleTest since you're using PHP, or any other unit-testing framework available for your language of choice. Following Test-Driven Development (TDD), you will build a test suite along with your code. You won't have the impression you're testing anything. Really.
"Test a little, code a little."
One piece of advice I give to my team is that, for a new feature, 90% of the logic should run outside the context of the application.
Features that can run outside of the application context are always easy to test.
If you are using .NET, you can investigate NUnit.
You can also look at Pex. It seems to be an amazing test framework.
However, your question is a little generic, because there are a lot of testing types.
Have fun testing :).
I try to write my tests first and design the class around them, so I am really test-focused. I am using JUnit etc.
If you try programming that way, testing becomes more and more fun, from my point of view.
I work for a small company, yet we have a separate test team. This is because developers are often blind to their own errors, and thus they tend to be bad testers.
Our test team is made up of experienced Test Engineers who work according to predefined test-plans and who often use automated test-tools to test the applications we create. (Including websites!) They are not developers!
These testers use TMap for the automated testing. The rest is just manual labor, reading the functional designs and making sure that whatever is mentioned in the functional design will work exactly as described in the final version.
Any errors are reported back to the developers by using an internal bug reporting tool.
Write some unit tests/automated tests, which will run automatically e.g. after a new build has been done.
Use encapsulation and try to test against interfaces only.
Write some small tools to help you test your modules/classes.
Making an easy-to-use test suite is easy to do for Perl programs. There is a standard way to do testing in Perl using the Test Anything Protocol.
Basically you write a bunch of files with the .t extension, in the t/ directory of your project, and then run prove.
The files in t/ basically look like this:
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 8;
use Date::ICal;
my $ical = Date::ICal->new( year => 1964, month => 10, day => 16,
                            hour => 16, min => 12, sec => 47,
                            tz => '0530' );
ok( defined $ical, 'new() returned something' );
ok( $ical->isa('Date::ICal'), " and it's the right class" );
is( $ical->sec,   47,   ' sec()' );
is( $ical->min,   12,   ' min()' );
is( $ical->hour,  16,   ' hour()' );
is( $ical->day,   16,   ' day()' );
is( $ical->month, 10,   ' month()' );
is( $ical->year,  1964, ' year()' );
For more information you can read the tutorial.
There are many languages which have modules designed to work with TAP; have a look here for more information.
Unfortunately, TAP has only recently been used for languages other than Perl, so there isn't as much support for them as there is for Perl.
Do not write tests for trivial stuff - at least not until it breaks, i.e. on the rare occasion. If you do, then you will feel discomfort every time you need to come back and maintain those tests. It's absolutely normal: boredom, laziness, frustration, etc. are your natural instinctive reactions to pointless work.
Quite the opposite: writing tests for non-trivial algorithms and logic, and discovering corner cases which you didn't even think about, is actually fun and a very rewarding experience.
Do you use the debugger of the language that you work in to step through code to understand what the code is doing, or do you find it easy to look at code written by someone else to figure out what is going on? I am talking about code written in C#, but it could be any language.
I use the unit tests for this.
Yes, but generally only to investigate bugs that prove resistant to other methods.
I write embedded software, so running a debugger normally involves having to physically plug a debug module into the PCB under test, add/remove links, solder on a debug socket (if not already present), etc - hence why I try to avoid it if possible. Also, some older debugger hardware/software can be a bit flaky.
I will for particularly complex sections of code, but I hope that generally my fellow developers would have written code that is clear enough to follow without it.
Depending on who wrote the code, even a debugger doesn't help to understand how it works: I have a co-worker who prides himself on being able to get as much as possible done in every single line of code. This can lead to code that is often hard to read, let alone understand what it does in the long run.
Personally I always hope to find code as readable as the code I try to write.
I mostly use the debugger to set up breakpoints on exceptions.
That way I can execute any test or unit test I wrote, and still land right where the code fails if an exception occurs.
I won't say I use it all the time, but I do use it fairly often. The domain I work in is automation and controls. You often need the debugger to see the various internal states of the system. It is usually difficult or impossible to determine these simply from looking at the code.
Yes, but only as a last resort when there's no unit test coverage and the code is particularly hard to follow. Using the debugger to step through code is a time consuming process and one I don't find too fun. I tend to find myself using this technique a lot when trying to follow VBA code.
Being new to test based development, this question has been bugging me. How much is too much? What should be tested, how should it be tested, and why should it be tested? The examples given are in C# with NUnit, but I assume the question itself is language agnostic.
Here are two current examples of my own, tests on a generic list object (being tested with strings, the initialisation function adds three items {"Foo", "Bar", "Baz"}):
[Test]
public void CountChanging()
{
    Assert.That(_list.Count, Is.EqualTo(3));
    _list.Add("Qux");
    Assert.That(_list.Count, Is.EqualTo(4));
    _list[7] = "Quuuux";
    Assert.That(_list.Count, Is.EqualTo(8));
    _list.Remove("Quuuux");
    Assert.That(_list.Count, Is.EqualTo(7));
}

[Test]
public void ContainsItem()
{
    Assert.That(_list.Contains("Qux"), Is.EqualTo(false));
    _list.Add("Qux");
    Assert.That(_list.Contains("Qux"), Is.EqualTo(true));
    _list.Remove("Qux");
    Assert.That(_list.Contains("Qux"), Is.EqualTo(false));
}
The code is fairly self-commenting, so I won't go into what's happening, but is this sort of thing taking it too far? Add() and Remove() are tested separately, of course, so what level should I go to with these sorts of tests? Should I even have these sorts of tests?
I would say that what you're actually testing are equivalence classes. In my view, there is no difference between adding to a list that has 3 items or 7 items. However, there is a difference between 0 items, 1 item and >1 items. I would probably have 3 tests each for the Add/Remove methods for these cases initially.
Once bugs start coming in from QA/users, I would add each such bug report as a test case; see the bug reproduced by getting a red bar; fix the bug and get a green bar. Each such 'bug-detecting' test is there to stay - it is my safety net (read: regression test) so that even if I make this mistake again, I will have instant feedback.
Think of your tests as a specification. If your system can break (or have material bugs) without your tests failing, then you don't have enough test coverage. If one single point of failure causes many tests to break, you probably have too much (or are too tightly coupled).
This is really hard to define in an objective way. I suppose I'd say err on the side of testing too much. Then when tests start to annoy you, those are the particular tests to refactor/repurpose (because they are too brittle, or test the wrong thing, and their failures aren't useful).
A few tips:
Each testcase should only test one thing. That means that the structure of the testcase should be "setup", "execute", "assert". In your examples, you mix these phases. Try splitting your test-methods up. That makes it easier to see exactly what you are testing.
Try giving your test methods names that describe what they are testing. I.e. the three test cases contained in your ContainsItem() become: containsReportsFalseIfTheItemHasNotBeenAdded(), containsReportsTrueIfTheItemHasBeenAdded(), containsReportsFalseIfTheItemHasBeenAddedThenRemoved() (see the sketch after these tips). I find that forcing myself to come up with a descriptive name like that helps me conceptualize what I have to test before I code the actual test.
If you do TDD, you should write your tests first and only add code to your implementation when you have a failing test. Even if you don't actually do this, it will give you an idea of how many tests are enough. Alternatively, use a coverage tool. For a simple class like a container, you should aim for 100% coverage.
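For illustration only, here is roughly what that split looks like, sketched in Java/JUnit rather than C#/NUnit, and using a java.util.ArrayList preloaded with the three items as a stand-in for the custom list:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class ContainsItemTest {

    private List<String> list;

    @Before
    public void setUp() {
        // Setup phase shared by every test: the three initial items.
        list = new ArrayList<>(Arrays.asList("Foo", "Bar", "Baz"));
    }

    @Test
    public void containsReportsFalseIfTheItemHasNotBeenAdded() {
        assertFalse(list.contains("Qux"));
    }

    @Test
    public void containsReportsTrueIfTheItemHasBeenAdded() {
        list.add("Qux");                  // execute
        assertTrue(list.contains("Qux")); // assert
    }

    @Test
    public void containsReportsFalseIfTheItemHasBeenAddedThenRemoved() {
        list.add("Qux");
        list.remove("Qux");
        assertFalse(list.contains("Qux"));
    }
}

Each test now exercises exactly one behaviour, and its name states the expected outcome, so a red bar immediately tells you which behaviour broke.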
Is _list an instance of a class you wrote? If so, I'd say testing it is reasonable. Though in that case, why are you building a custom List class?
If it's not code you wrote, don't test it unless you suspect it's in some way buggy.
I try to test code that's independent and modular. If there's some sort of God function in code I have to maintain, I strip out as much of it as possible into sub-functions and test them independently. Then the God function can be written to be "obviously correct" -- no branches, no logic, just passing results from one well-tested sub-function to another.