Why is E(dfa) a decidable language? - language-agnostic

I don't understand why the Turing machine T accepts when no accept state is marked and rejects when an accept state is marked:
E(dfa) = { ⟨A⟩ | A is a DFA and L(A) = ∅ }
E(dfa) is a decidable language.
Proof: A DFA accepts some string iff reaching an accept state from the start state by traveling along the arrows of the DFA is possible. To test this condition, we can design a TM T that uses a marking algorithm similar to that used in Example 3.23.
T = "On input ⟨A⟩, where A is a DFA:
1. Mark the start state of A.
2. Repeat until no new states get marked:
3. Mark any state that has a transition coming into it from any state that is already marked.
4. If no accept state is marked, accept; otherwise, reject."
This seems backwards to me. Can anyone explain this?
Thank you.

I believe that your confusion results from the use of the words "accept" and "reject" in different contexts. At a high level, it's easy to avoid this confusion, because you can define your Turing machine T without referring to the process by which a DFA A does its own accepting and rejecting.
L(T) is { ⟨A⟩ | L(A) is empty }. This is the same as E(dfa) defined in your question, but using L(T) makes it more explicit that we're dealing with two separate languages here; one just happens to be defined in terms of another.
If we work from high-level to low-level, we can say:
T accepts ⟨A⟩ whenever L(A) is empty.
But how do we determine whether L(A) is empty? Well, L(A) is empty when A rejects all strings.
How do we know a string is rejected by A? Its computation does not end in an accept state.
We can also work from low to high now quite easily:
If a string given to A does not end its computation in an accept state, it is rejected.
If all strings are rejected by A, then L(A) is empty.
If L(A) is empty, then T accepts ⟨A⟩.
Now your proof goes into a bit more detail as to how T goes about deciding whether to accept A or not, but I think your confusion revolved more around the multiple uses of accept and reject. Very broadly, you can say T accepts A iff A rejects everything.
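To make the marking procedure concrete, here is a minimal sketch in Java, under an invented toy DFA representation (the Dfa class and its fields are illustration only, not from the proof). Marking is just reachability from the start state; returning true corresponds to T accepting ⟨A⟩:

import java.util.ArrayDeque;
import java.util.Deque;

class Dfa {
    int[][] delta;       // delta[state][symbol] = next state
    int start;           // the start state
    boolean[] accepting; // accepting[q] is true iff q is an accept state

    Dfa(int[][] delta, int start, boolean[] accepting) {
        this.delta = delta;
        this.start = start;
        this.accepting = accepting;
    }
}

class EmptinessTester {
    // Decides E(dfa): returns true iff no accept state of A is reachable
    // from the start state by traveling along the arrows of A.
    static boolean languageIsEmpty(Dfa a) {
        boolean[] marked = new boolean[a.delta.length];
        Deque<Integer> work = new ArrayDeque<>();
        marked[a.start] = true;   // step 1: mark the start state
        work.push(a.start);
        while (!work.isEmpty()) { // steps 2-3: repeat until nothing new is marked
            int q = work.pop();
            for (int next : a.delta[q]) {
                if (!marked[next]) {
                    marked[next] = true;
                    work.push(next);
                }
            }
        }
        for (int q = 0; q < marked.length; q++) {
            if (marked[q] && a.accepting[q]) {
                return false;     // step 4: an accept state is marked, so T rejects
            }
        }
        return true;              // no accept state is marked, so T accepts
    }
}

The inversion the question asks about lives entirely in that last loop: finding a marked accept state is what makes T reject, because it witnesses a string that A accepts.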

Related

Is an exception a valid postcondition?

Consider the following interface:
public interface AISPI
{
    public Path getPath(Entity entity, Entity target, World world) throws NoPathException;
}
Granted that entity, target, and world are all valid input, the algorithm used to find a path (A* in this case) can still fail to find one, due to e.g. the position of target being surrounded by concrete walls.
Is it valid to state that the postcondition is either a Path from entity to target (start to goal) or a NoPathException (given that a path was not found)?
- Or should the precondition state that there must be a valid path from start to goal?
This is not homework, but a question for improving our semester project report. I am not looking to learn about any frameworks; this is purely a question of standards and formalities with regard to design by contract. Thanks for any clarification on the matter.
It depends on the definition of the term postcondition. In general, a precondition is a relation on input state and input values at routine entry, and a postcondition is a relation on input state, input values and output state and output values at routine exit.
Because a routine can exit either normally or exceptionally, it is possible to define a postcondition for normal termination and a postcondition for abnormal termination. Clearly both involve input values, input state and output state. The key difference is in output values: in the first case this is a value specified in the routine signature; in the second, it depends on the language. In your example it might be NoPathException, but what if there is a memory allocation error, stack overflow or some other exception or signal that is not specified in the signature?

It may indeed seem possible to have a precondition that guarantees that there is always a valid result that does not involve exceptions. But this is not the case, e.g. when there is communication with the external world, concurrency, etc. Also, if a precondition is too costly to compute, it does not look nice to do the same work twice: on the client side to make sure a call is applicable, and on the supplier's side to do essentially the same computation to get the result.
According to the Design by Contract philosophy a postcondition is what the client can safely rely on after calling a routine. Coming back to the exceptional case, from the practical point of view it makes sense to make the abnormal postcondition strong enough so that a program can continue execution, but weak enough so that the cases that are not or cannot be specified in the signature, but are possible in practice, are allowed.
So, unless the language really does guarantee all possible exceptional cases and nothing else, the most important part is the output state, which should not make the associated objects unusable. And this could be expressed either in an explicit or implicit postcondition or as a class invariant.
As to the specific example with getPath, the situation when a path does not exist is normal, i.e. it may happen and is expected. Some guidelines recommend using normal values to indicate normal termination cases; here it would be the value null. Using null may lead to other issues on the caller's side, such as NullPointerException if the result is not checked for null-ness, but in some languages that guarantee the absence of such exceptions (e.g., void-safety in Eiffel) this would be the preferred way to indicate the absence of a path (the return type would be detachable PATH in that case).
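To make the contrast concrete, here is a Java sketch of both contract styles. The stub types stand in for the question's Entity, World, and Path, and the Optional-returning variant is a hypothetical alternative, not part of the original interface:

import java.util.Optional;

// Stub types standing in for the question's domain classes.
class Entity {}
class World {}
class Path {}
class NoPathException extends Exception {}

interface AISPI {
    // Style 1: "no path" is an abnormal postcondition.
    // Postcondition: either a Path from entity to target is returned,
    // or NoPathException is thrown because no path was found.
    Path getPath(Entity entity, Entity target, World world) throws NoPathException;
}

interface AISPIAlternative {
    // Style 2 (hypothetical): "no path" is a normal result encoded in the
    // return value. Postcondition: the result is empty iff no path from
    // entity to target exists in world.
    Optional<Path> findPath(Entity entity, Entity target, World world);
}

In the second style the postcondition needs no exceptional case at all, which matches the guideline above of using normal values for normal termination.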

Questions about the Boundary Value Check

I'm doing my JUnit homework and need some explanations here.
Here's the quotation from my homework description:
One of the issues with boundary conditions is that the system needs to behave well even if the boundary is approached multiple times. This should be obvious, but it doesn't always happen in practice.
Remember that we can characterize an object as state and behavior. Typically, the state is not directly accessible, but instead, is accessed indirectly by means of the behavior. That is, the behavior reflects the state of the object.
Now, if we think about boundaries in math, it might not be too surprising to imagine that the value at some boundary will be different if we approach that boundary in different ways. So, if the value can be likened to the state, the state at the boundary may vary depending on how we got there. This would mean that the behavior could be different.
To make objects that behave consistently, we would have to ensure that the internal state at those boundaries is consistent. So, test cases should check this assumption. To receive challenge points for this homework assignment, enhance your test cases so that potential problems around the boundaries may be discovered.
Clearly mark the Challenge test cases with the string "### challenge ###" in the comments. Include in those comments what boundary is being tested, and how you're guessing that the state of the object may be different depending on how the boundary is being approached.
I don't understand this, especially the highlighted part. What does he mean by "objects behave consistently" and the "potential problems"?
Also, how is this different from a general boundary check that just throws the exception I expect in JUnit?
Thank you!
Without knowing the details of the homework, an answer could only be somewhat generic, but I'll try.
Boundary checking is not just exception checking; it's about seeing which paths in your code are executed under which conditions. If you have control statements, loops, if-else, switch, etc., you have to verify on what conditions (of your internal state) those statements are processed in what way.
To me, boundary testing is that you change certain values of an instance field in a way that would cause the behavior to run through different branches of your code.
for example, you have this behavior:
if (someInstanceValue > 5) {
    return "great";
} else {
    return "poor";
}
Now you could test with data for someInstanceValue that define the boundary:
5 : "poor"
6 : "great"
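For example, those two boundary cases as JUnit 4 tests (the Rating wrapper class is invented here to host the snippet above):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class RatingBoundaryTest {
    // Hypothetical class under test, wrapping the if/else from above.
    static class Rating {
        private int someInstanceValue;
        void setSomeInstanceValue(int value) { someInstanceValue = value; }
        String describe() { return someInstanceValue > 5 ? "great" : "poor"; }
    }

    @Test
    public void lastValueOnThePoorSideOfTheBoundary() {
        Rating rating = new Rating();
        rating.setSomeInstanceValue(5);  // 5 > 5 is false
        assertEquals("poor", rating.describe());
    }

    @Test
    public void firstValueOnTheGreatSideOfTheBoundary() {
        Rating rating = new Rating();
        rating.setSomeInstanceValue(6);  // 6 > 5 is true
        assertEquals("great", rating.describe());
    }
}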
If you have multiple fields in your class, all of them define the state but only some of them may affect a certain path in your code. As the test is a specification of your class under test, written in code, you should specify which fields are relevant to a function, and which are not (by leaving them out).
So you should set up your instance-under-test accordingly (calling all setters) or if you require more complex objects, you could use frameworks like Mockito to specify the state (in a when().thenReturn() syntax).
If you want to verify whether you covered all your boundaries, you could run a mutation test against your suite using a mutation testing tool like PIT. It will flip the switches in your code (e.g. replacing a < with a >=) to check whether your tests will fail. Often, it's a good source of inspiration for improving the way you test.
Nevertheless, some parts of the homework assignment sound a bit confusing to me. You may approach a boundary from two sides, OK, but there is no such thing as a state that represents THE boundary; you're either on one or the other side of it. If the way you approached one side of a boundary matters, and the object behaves differently depending on that "history" of how you reached that state, the history becomes part of the state. In other words: different history = different state.
Keep in mind: every instance field is part of the state. Every possible combination of values of your instance fields defines a single state. Every transition from one combination to another is a state transition triggered by calling a behavior. Now think of your test as describing this state machine, by listing the triples {currentState, input} -> nextState (with input being a method invocation). Which is basically the Given-When-Then structure good tests should have.
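For example, one such {currentState, input} -> nextState test could look like this (the Turnstile class is an invented stand-in, not from the homework):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class TurnstileStateTest {
    // Minimal state machine: {locked, coin} -> unlocked, {unlocked, push} -> locked.
    static class Turnstile {
        boolean locked = true;
        void coin() { locked = false; }
        void push() { locked = true; }
    }

    @Test
    public void givenLocked_whenCoinInserted_thenUnlocked() {
        Turnstile turnstile = new Turnstile(); // Given: the initial (locked) state
        turnstile.coin();                      // When: the "coin" input arrives
        assertFalse(turnstile.locked);         // Then: the next state is unlocked
    }

    @Test
    public void givenUnlocked_whenPushed_thenLockedAgain() {
        Turnstile turnstile = new Turnstile();
        turnstile.coin();                      // Given: reach the unlocked state
        turnstile.push();                      // When: the "push" input arrives
        assertTrue(turnstile.locked);          // Then: back in the locked state
    }
}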

assert() vs enforce(): Which to choose?

I'm having a hard time choosing whether I should "enforce" a condition or "assert" a condition in D. (This is language-neutral, though.)
Theoretically, I know that you use assertions to find bugs, and you enforce other conditions in order to check for atypical conditions. E.g. you might say assert(count >= 0) for an argument to your method, because that indicates that there's a bug in the caller, and you would say enforce(isNetworkConnected), because that's not a bug; it's just something that you're assuming that could very well not be true in a legitimate situation beyond your control.
Furthermore, assertions can be removed from code as an optimization, with no side effects, but enforcements cannot be removed because they must always execute their condition code. Hence if I'm implementing a lazy-filled container that fills itself on the first access to any of its methods, I say enforce(!empty()) instead of assert(!empty()), because the check for empty() must always occur, since it lazily executes code inside.
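Here's how I picture that distinction, translated to Java as an analogy (Java's assert is stripped unless the JVM runs with -ea, much like D's assert; an explicit check that throws always executes, much like enforce; the names below are invented):

class Checks {
    static boolean networkConnected = true; // hypothetical environment state

    static void process(int count) {
        // Like assert in D: disabled by default, enabled with "java -ea".
        // A failure here means the caller has a bug.
        assert count >= 0 : "caller bug: negative count";

        // Like enforce in D: always executes, even in production, because
        // this condition can legitimately be false at runtime.
        if (!networkConnected) {
            throw new IllegalStateException("network is not connected");
        }
        // ... do the actual work ...
    }
}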
So I think I know what they're supposed to mean. But theory is easier than practice, and I'm having a hard time actually applying the concepts.
Consider the following:
I'm making a range (similar to an iterator) that iterates over two other ranges, and adds the results. (For functional programmers: I'm aware that I can use map!("a + b") instead, but I'm ignoring that for now, since it doesn't illustrate the question.) So I have code that looks like this in pseudocode:
Range add(Range range1, Range range2)
{
    Range result;
    while (!range1.empty)
    {
        assert(!range2.empty); // Should this be an assertion or enforcement?
        result += range1.front + range2.front;
        range1.popFront();
        range2.popFront();
    }
    return result;
}
Should that be an assertion or an enforcement? (Is it the caller's fault that the ranges don't empty at the same time? It might not have control of where the range came from -- it could've come from a user -- but then again, it still looks like a bug, doesn't it?)
Or here's another pseudocode example:
uint getFileSize(string path)
{
    HANDLE hFile = CreateFile(path, ...);
    assert(hFile != INVALID_HANDLE_VALUE); // Assertion or enforcement?
    return GetFileSize(hFile); // and close the handle, obviously
}
...
Should this be an assertion or an enforcement? The path might come from a user -- so it might not be a bug -- but it's still a precondition of this method that the path should be valid. Do I assert or enforce?
Thanks!
I'm not sure it is entirely language-neutral. No language that I use has enforce(), and if I encountered one that did then I would want to use assert and enforce in the ways they were intended, which might be idiomatic to that language.
For instance, assert in C or C++ stops the program when it fails; it doesn't throw an exception, so its usage may not be the same as what you're talking about. You don't use assert in C++ unless you think that either the caller has already made an error so grave that they can't be relied on to clean up (e.g. passing in a negative count), or else some other code elsewhere has made an error so grave that the program should be considered to be in an undefined state (e.g. your data structure appears corrupt). C++ does distinguish between runtime errors and logic errors, though, which may roughly correspond, but I think are mostly about avoidable vs. unavoidable errors.
In the case of add, you'd use a logic error if the author's intent is that a program which provides mismatched lists has bugs and needs fixing, or a runtime exception if it's just one of those things that might happen. For instance, if your function were to handle arbitrary generators that don't necessarily have a means of reporting their length short of destructively evaluating the whole sequence, you'd be more likely to consider it an unavoidable error condition.
Calling it a logic error implies that it's the caller's responsibility to check the length before calling add, if they can't ensure it by the exercise of pure reason. So they would not be passing in a list from a user without explicitly checking the length first, and in all honesty should count themselves lucky they even got an exception rather than undefined behavior.
Calling it a runtime error expresses that it's "reasonable" (if abnormal) to pass in lists of different lengths, with the exception indicating that it happened on this occasion. Hence I think an enforcement rather than an assertion.
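In Java terms the same choice might be sketched like this (an analogy with invented names: an unchecked IllegalArgumentException for the caller's-fault view, a checked exception for the expected-failure view):

import java.util.List;

class RangeSums {
    // "Logic error" view: mismatched lengths are the caller's bug,
    // signalled with an unchecked exception.
    static int addStrict(List<Integer> a, List<Integer> b) {
        if (a.size() != b.size()) {
            throw new IllegalArgumentException("lists must have equal length");
        }
        int sum = 0;
        for (int i = 0; i < a.size(); i++) {
            sum += a.get(i) + b.get(i);
        }
        return sum;
    }

    // "Runtime error" view: mismatched lengths are just one of those
    // things that happen, so the caller must handle a checked exception.
    static class LengthMismatchException extends Exception {}

    static int addLenient(List<Integer> a, List<Integer> b) throws LengthMismatchException {
        if (a.size() != b.size()) {
            throw new LengthMismatchException();
        }
        return addStrict(a, b);
    }
}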
In the case of filesize: for the existence of a file, you should if possible treat that as a potentially recoverable failure (enforcement), not a bug (assertion). The reason is simply that there is no way for the caller to be certain that a file exists: there's always someone with more privileges who can come along and remove it, or unmount the entire filesystem, in between a check for existence and a call to filesize. It's therefore not necessarily a logical flaw in the calling code when it doesn't exist (although the end-user might have shot themselves in the foot). Because of that fact it's likely there will be callers who can treat it as just one of those things that happens, an unavoidable error condition. Creating a file handle could also fail for out-of-memory, which is another unavoidable error on most systems, although not necessarily a recoverable one if, for example, over-committing is enabled.
Another example to consider is operator[] vs. at() for C++'s vector. at() throws out_of_range, a logic error, not because it's inconceivable that a caller might want to recover, or because you have to be some kind of numbskull to make the mistake of accessing an array out of range using at(), but because the error is entirely avoidable if the caller wants it to be - you can always check the size() before access if you have no other way of knowing whether your index is good or not. And so operator[] doesn't guarantee any checks at all, and in the name of efficiency an out of range access has undefined behavior.
assert should be considered a "run-time checked comment" indicating an assumption that the programmer makes at that moment. The assert is part of the function implementation. A failed assert should always be considered a bug at the point where the wrong assumption is made, so at the code location of the assert. To fix the bug, use a proper means to avoid the situation.
The proper means to avoid bad function inputs are contracts, so the example function should have an input contract that checks that range2 is at least as long as range1. The assertion inside the implementation could then still remain in place. Especially in longer, more complex implementations, such an assert may improve understandability.
An enforce is a lazy approach to throwing runtime exceptions. It is nice for quick-and-dirty code because it is better to have a check in there rather than silently ignoring the possibility of a bad condition. For production code, it should be replaced by a proper mechanism that throws a more meaningful exception.
I believe you have partly answered your question yourself. Assertions are bound to break the flow: if your assertion fails, you do not agree to continue with anything. If you enforce something, you are making a decision to allow something to happen based on the situation. If you find that the conditions are not met, you can enforce that entry to a particular section is denied.

Do you use tense when naming methods of boolean return type?

So, when you are writing a boolean method, do you use tense, like "has" or "was", in your return method naming, or do you solely use "is"?
The following is a Java method I recently wrote, very simply ..
boolean recovered = false;

public boolean wasRecovered()
{
    return recovered;
}
In this case, recovered is a state that may or may not have already occurred at this point in the code, so grammatically "was" makes sense. But does it make the same sense in code, where the "is" naming convention is usually standard?
I prefer to use IsFoo(), regardless of tense, simply because it's a well-understood convention that non-native speakers will still generally understand. Non-native speakers of English are a regular consideration in today's global development industry.
I use the tense which is appropriate to the meaning of the value. To do otherwise essentially creates code which reads one way and behaves another. Let's look at a real-world example in the .NET Framework: Thread.IsAlive.
This property is presented in the present tense. This implies that the value refers to the present, and it makes code like the following read very well:
if (thread.IsAlive) {
    // Code that depends on the thread being alive
    ...
The problem here is that the property does not represent the present state of the object; it represents a past state. Once the value is calculated to be true, the thread in question can immediately exit and invalidate the value. Hence the value can only safely be used to identify the past state of the thread, and a past-tense property is more appropriate. Let's now revisit the sample, which reads a bit differently:
if (thread.WasAlive) {
    // Code that depends on the thread being alive
    ...
They behave the same, but the second one reads very poorly, because it makes plain that this is in fact poor code: nothing guarantees the thread is still alive inside the branch.
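Java's Thread.isAlive() has the same problem, incidentally; the value can already be stale by the time you act on it:

public class StaleAliveCheck {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> { /* finishes immediately */ });
        worker.start();
        if (worker.isAlive()) {
            // By the time this branch runs, the worker may already have
            // exited: the check reported a past state, not the present one.
            System.out.println("worker was alive when we asked");
        }
    }
}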
Here's a list of some other offenders
File.Exists
Directory.Exists
DriveInfo.IsReady
WeakReference.IsAlive
The isXxx prefix is a widespread naming convention, so it's generally the best choice.
For order-sensitive operations, wasXxx is appropriate. For example, in JDBC, retrieving the value of a database column might return zero when the field is actually NULL (unset); in this case, a follow-up call to wasNull determines which it is after the actual retrieval was performed.
For retrieving attribute settings, hasXxx may be more appropriate. It's a grammar preference, as in "the object's flag is set" versus "the object has an attribute".
Then there are capability tests canXxx. For example, calling canWrite to see if a file is writable. But names like these can probably be renamed to the isXxx form, such as isWritable.
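For instance, the JDBC wasNull case might be used like this (a sketch; the column name is invented):

import java.sql.ResultSet;
import java.sql.SQLException;

class NullableColumn {
    // wasNull() is past tense because it reports on the previous get:
    // getInt returns 0 for SQL NULL, so a follow-up call disambiguates.
    static Integer readQuantity(ResultSet rs) throws SQLException {
        int quantity = rs.getInt("quantity");
        return rs.wasNull() ? null : quantity;
    }
}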
I tend to, yes. For example, in error checking:
private $errors = false;

public function hasErrors()
{
    return $this->errors;
}
I am not sure that you are thinking about this correctly. The reason one would use the Recovered property is that that is the state the object is in now, not the state the object used to be in. There may have been some process in the past (the Recovery) that has now completed, but the fact that we are accessing this property now means that there is something about that completed process that altered current state, and that current state is important. To me, "Recovered" captures the nature of that state. For this example (and most similar situations) I would use IsRecovered to name the predicate that indicates this condition. (This also matches normal English: "This is a recovered document.")
It is extremely rare that I would use anything other than present tense to name a predicate (IsDirty, HasCoupon) or boolean function (IsPrime(x)) in a program.
An exception would be to indicate state that has since been changed that might need to be reinstated (DocumentWindow.WasMaximizedAtLastExit).
I would usually use an infinitive for future tense (ToBeCopied rather than WillBeCopied), since the best laid plans of software are sometimes altered (or cancelled).
It depends on whether or not you care about the past or future state of the property in question.
To try to simplify the semantics, realize that there are a few scenarios that make the IsXXX form debatable and some very common scenarios where the IsXXX form is the only useful one.
Below is the 'truth table' for Thread.IsAlive() based on possible states of the thread over time. Forget about why a thread might flip flop states, we need to focus on the language used.
Scenarios of possible thread states over time:
    Past    Present   Future
    =====   =======   =======
1.  alive   alive     alive
2.  alive   alive     dead
3.  alive   dead      dead
4.  dead    dead      dead
5.  dead    dead      alive
6.  dead    alive     alive
7.  dead    alive     dead
8.  alive   dead      alive
Note: I talk about the Future state below for consistency. (Knowing whether a thread will die is very likely unknowable, as a subset of the Halting Problem.)
When we interrogate an object by calling a method, there is a common assumption: "Is this thread alive at the time I asked?" For these cases, the answer in the "Present" column is all we care about, and using the IsXXX form works fine.
Scenarios #1 (always alive) and #4 (always dead) are the simplest and most common. The answer to IsAlive() will not change between calls. The battle over language comes up because of the other six cases, where the result of calling IsAlive() depends on when it is called.
Scenarios #2 (will die) and #3 (has died) transition from alive to dead.
Scenarios #5 (will start) and #6 (has started) transition from dead to alive.
For these four (2, 3, 5, 6) the answer to IsAlive() is not constant. The question becomes: do I care about the Present state, IsAlive(), or am I interested in the Past/Future state, WasAlive() and WillBeAlive()? Unless you can predict the future, the WillBeAlive() call becomes meaningless for all but the most specific designs.
When dealing with a thread pool, we might need to restart threads that are in the 'dead' state to service connect requests and it doesn't matter whether they were ever alive, just that they are currently dead. In this case we might actually want to use WasDead(). Of course we should try to guarantee we don't restart a thread that was just restarted but that is a design problem, not a semantic one. Assuming that no one else can restart the thread, it doesn't matter much whether we use IsAlive() == false or WasDead() == true.
Now for the last two scenarios. Scenario #7 (was dead, is alive, will be dead) is practically the same as #6. Do you know when in the future it will die? In 10 seconds, 10 minutes, 10 hours? Are you going to wait before deciding what to do? No, you only care about the current (Present) state. We're talking about naming here, not multi-threaded design.
Scenario #8 (was alive, is dead, will be alive) is practically the same as #3. If you are reusing threads, then they can cycle through the alive/dead states several times. Worrying about the difference between #3 and #8 goes back to the Halting Problem and so can be disregarded.
IsAlive() should work for all cases. IsAlive() == false works (for #5 and #6) instead of adding WasAlive().
I don't mind wasRecovered that much. Recovery is a past event that may or may not have happened - this tells you whether it did or not. But if you're using it because of some consequence of recovery, I'd prefer isCached, isValid, or some other description of what those consequences actually are. Just because you've recovered something doesn't inherently mean you haven't lost it again since.
Always beware that in English, the use of a past participle as an adjective is ambiguous between transitive and intransitive verbs (and perhaps between active and passive voice). isRecovered might mean that the object has been recovered by something else, or it might mean that the object has recovered. If your object represents a patient at a hospital, does "isRecovered" mean that the patient is fit and well, or that someone has fetched the patient back from the X-ray department? wasRecovered might therefore be better for the latter.
The conceit for method naming is that you are retrieving information about the object in question. For it to be named in the past tense, it would have to be information about a previous state of the object, rather than its current state.
The only reason I could ever think of for using past tense is if I were checking a cached result of something that previously occurred but is no longer the case. For a contrived example, perhaps retrieving the previous value after something like a swap() call. It could be useful in operations that are atomic by design. Not real likely in the wild though.
Since your question is specific to Java, the method name should start with "is" if your class is a JavaBean and the method is an accessor method for a property.
http://download.oracle.com/javase/tutorial/javabeans/properties/properties.html
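So for a bean with a boolean property named recovered, the conventional accessor pair would look like this (a minimal sketch):

public class RecoveryStatusBean {
    private boolean recovered;

    // JavaBeans convention: a boolean property "recovered" gets an
    // "is"-prefixed accessor rather than "get" (or "was").
    public boolean isRecovered() {
        return recovered;
    }

    public void setRecovered(boolean recovered) {
        this.recovered = recovered;
    }
}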

Should I always/ever/never initialize object fields to default values?

Code styling question here.
I looked at this question which asks if the .NET CLR will really always initialize field values. (The answer is yes.) But it strikes me that I'm not sure that it's always a good idea to have it do this. My thinking is that if I see a declaration like this:
int myBlorgleCount = 0;
I have a pretty good idea that the programmer expects the count to start at zero, and is okay with that, at least for the immediate future. On the other hand, if I just see:
int myBlorgleCount;
I have no real immediate idea if 0 is a legal or reasonable value. And if the programmer just starts reading and modifying it, I don't know whether the programmer meant to start using it before they set a value to it, or if they were expecting it to be zero, etc.
On the other hand, some fairly smart people, and the Visual Studio code cleanup utility, tell me to remove these redundant declarations. What is the general consensus on this? (Is there a consensus?)
I marked this as language agnostic, but if there is an odd case out there where it's specifically a good idea to go against the grain for a particular language, that's probably worth pointing out.
EDIT: While I did put that this question was language agnostic, it obviously doesn't apply to languages like C, where no value initialization is done.
EDIT: I appreciate John's answer, but it is exactly what I'm not looking for. I understand that .NET (or Java or whatever) will do the job and initialize the values consistently and correctly. What I'm saying is that if I see code that is modifying a value that hasn't been previously explicitly set in code, I, as a code maintainer, don't know if the original coder meant it to be the default value, or just forgot to set the value, or was expecting it to be set somewhere else, etc.
Think long term maintenance.
Keep the code as explicit as possible.
Don't rely on language specific ways to initialize if you don't have to. Maybe a newer version of the language will work differently?
Future programmers will thank you.
Management will thank you.
Why obfuscate things even the slightest?
Update: Future maintainers may come from a different background. It really isn't about what is "right"; it is more about what will be easiest in the long run.
You are always safe in assuming the platform works the way the platform works. The .NET platform initializes all fields to default values. If you see a field that is not initialized by the code, it means the field is initialized by the CLR, not that it is uninitialized.
This concern is valid for platforms which do not guarantee initialization, but not here. In .NET, explicit initialization more often indicates ignorance on the developer's part, thinking initialization is necessary.
Another unnecessary hangover from the past is the following:
string foo = null;
foo = MethodCall();
I've seen that from people who should know better.
I think that it makes sense to initialize the values if it clarifies the developer's intent.
In C#, there's no overhead as the values are all initialized anyway. In C/C++, uninitialized values will contain garbage/unknown values (whatever was in the memory location), so initialization was more important.
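Java behaves the same way, for what it's worth: fields are zero-initialized, while locals must be definitely assigned before use, so the compiler already catches the truly dangerous case. A small sketch (class and member names invented):

public class BlorgleCounter {
    private int myBlorgleCount;      // implicitly 0: the JVM guarantees it
    private int myExplicitCount = 0; // explicit: documents that 0 is intended

    int bump() {
        // Both fields start at zero; only the second one says so in source.
        return ++myBlorgleCount + myExplicitCount;
    }

    static int localExample() {
        int local;        // locals get no default value
        // return local;  // would not compile: "variable local might not
        //                // have been initialized"
        local = 1;
        return local;
    }
}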
I think it should be done if it really helps to make the code more understandable.
But I think this is a general problem with all language features. My opinion on that is: if it is an official feature of the language, you can use it. (Of course there are some anti-features which should be used with caution or avoided altogether, like a missing Option Explicit in Visual Basic or diamond inheritance in C++.)
There was a time when I was very paranoid and added all kinds of unnecessary initializations, explicit casts, über-paranoid try-finally blocks... I once even thought about ignoring auto-boxing and replacing all occurrences with explicit type conversions, just "to be on the safe side".
The problem is: There is no end. You can avoid almost all language features, because you do not want to trust them.
Remember: It's only magic until you understand it :)
I agree with you; it may be verbose, but I like to see:
int myBlorgleCount = 0;
Now, I always initialize strings, though:
string myString = string.Empty;
(I just hate null strings.)
In the case where I cannot immediately set it to something useful, like
int myValue = SomeMethod();
I will set it to 0. That is more to avoid having to think about what the value would be otherwise. For me, the fact that integers are always set to 0 is not at my fingertips, so when I see
int myValue;
it will take me a second to pull up that fact and remember what it will be set to, disrupting my thought process.
For someone who has that knowledge readily available, they will encounter
int myValue = 0;
and wonder why the hell that person is setting it to zero, when the compiler would just do it for them. This thought would interrupt their thought process.
So do whichever makes the most sense for both you and the team you are working in. If the common practice is to set it, then set it; otherwise don't.
In my experience I've found that explicitly initializing local variables (in .NET) adds more clutter than clarity.
Class-wide variables, on the other hand should always be initialized. In the past we defined system-wide custom "null" values for common variable types. This way we could always know what was uninitialized by error and what was initialized on purpose.
I always initialize fields explicitly in the constructor. For me, it's THE place to do it.
I think a lot of that comes down to past experiences.
In older and unmanaged languages, the expectation is that the value is unknown. This expectation is retained by programmers coming from those languages.
Almost all modern or managed languages have defined values for recently created variables, whether that's from class constructors or language features.
For now, I think it's perfectly fine to initialize a value; what was once implicit becomes explicit. In the long run, say in the next 10 to 20 years, people may start learning that a default value is possible, expected, and known, especially if defaults stay consistent across languages (e.g., empty string for strings, 0 for numerics).
You should do it. There is no need to, but it is better if you do, because you never know whether the language you are using initializes the values. By doing it yourself, you ensure your values are both initialized and set to standard predefined values.
There is nothing wrong with doing it, except perhaps a bit of 'time wasted'. I would recommend it strongly. While the comment by John is quite informative, for general use it is better to go the safe path.
I usually do it for strings and in some cases collections where I don't want nulls floating around.
The general consensus where I work is "Not to do it explicitly for value types."
I wouldn't do it. C# initializes an int to zero anyways, so the two lines are functionally equivalent. One is just longer and redundant, although more descriptive to a programmer who doesn't know C#.
This is tagged as language-agnostic but most of the answers are regarding C#.
In C and C++, the best practice is to always initialize your values. There are some cases where this will be done for you such as static globals, but there shouldn't be a performance hit of any kind for redundantly initializing these values with most compilers.
I wouldn't initialise them. If you keep the declaration as close as possible to the first use, then there shouldn't be any confusion.
Another thing to remember is that if you are going to use automatic properties, you have to rely on implicit values, like:
public int Count { get; set; }
http://www.geekherocomic.com/2009/07/27/common-pitfalls-initialize-your-variables/
If a field will often have new values stored into it without regard for what was there previously, and if it should behave as though a zero was stored there initially but there's nothing "special" about zero, then the value should be stored explicitly.
If the field represents a count or total which will never have a non-zero value written to it directly, but will instead always have other amounts added or subtracted, then zero should be considered an "empty" value, and thus need not be explicitly stated.
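A small sketch of that distinction (field names invented):

public class Tally {
    // A running total: zero is the natural "empty" value that amounts
    // get added to, so relying on the implicit default reads fine.
    private int total;

    // A coordinate where zero is a position like any other: spelling it
    // out documents that the author really means "start at 0".
    private int xposition = 0;

    void add(int amount) {
        total += amount;
    }

    void moveTo(int newPosition) {
        xposition = newPosition; // stored without regard to the old value
    }
}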
To use a crude analogy, consider the following two conditions:
if (xposition != 0) ...
if ((flags & WoozleModes.deluxe) != 0) ...
In the former scenario, comparison to the literal zero makes sense because it is checking for a position which is semantically no different from any other. In the second scenario, however, I would suggest that the comparison to the literal zero adds nothing to readability because code isn't really interested in whether the value of the expression (flags & WoozleModes.deluxe) happens to be a number other than zero, but rather whether it's "non-empty".
I don't know of any programming languages that provide separate ways of distinguishing numeric values for "zero" and "empty", other than by not requiring the use of literal zeros when indicating emptiness.