AVR Studio: how can I see a program's output?

I do not have experience with microcontrollers, but I am working on something related to them. Here is an explanation of my issue:
I have an algorithm, and I want to calculate how many cycles it would cost on a specific AVR microcontroller.
To do that I downloaded AVR Studio 6 and used the simulator. I succeeded in obtaining the number of cycles for my algorithm. What I want to know is how I can make sure that my algorithm is working as it should. AVR Studio lets me debug using the simulator, but I am not able to see the output of my algorithm.
To simplify my question: I would like some help implementing the hello-world example in AVR Studio, that is, I want to see "hello world" in the output window, if that is possible.
My question is not how to program the microcontroller; my question is how I can see the output of a program in AVR Studio.
Many thanks

As Hanno Binder suggested in his comment:
Atmel Studio still does not provide any means to display debug messages sent by the simulated program. Your only option is to place breakpoints at appropriate locations and then inspect the state of the device in the simulator: for example, the locations in RAM where your result is stored, or the registers in which it may reside; maybe set a 'watch' on a variable or expression.
I think this is the best answer: watch variables and memory while in debug mode.
Note: turn off optimization when you want to debug, or some variables will be optimized away.
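A minimal sketch of that setup, assuming avr-gcc (the algorithm here is a made-up placeholder): declaring the result volatile keeps the optimizer from discarding it even when optimization is on, and the empty loop gives you a stable place for a breakpoint.

    #include <stdint.h>

    /* Hypothetical algorithm under test; replace with your own. */
    static uint16_t my_algorithm(uint16_t x)
    {
        return (uint16_t)(x * 3u + 1u);
    }

    /* volatile keeps the optimizer from discarding the result,
     * so it stays visible in the simulator's watch window. */
    volatile uint16_t result;

    int main(void)
    {
        result = my_algorithm(42);
        for (;;) { }   /* set a breakpoint here and inspect 'result' */
    }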

The best way to test whether an algorithm works is to run it in a regular PC program, feed it data, and compare the results with ground truth.
Clearly, to be able to do this, a good programming style is necessary, one that separates hardware-related tasks from the actual data processing. You also have to keep architectural differences in mind (e.g. a 16-bit int on AVR vs. a 32-bit int on a PC; use the fixed-width types from inttypes.h).
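A minimal sketch of such a PC-side harness, assuming the algorithm has been factored into its own hardware-independent function (all names and expected values here are illustrative):

    #include <inttypes.h>
    #include <stdio.h>

    /* Fixed-width types behave the same on AVR (16-bit int) and on a PC. */
    static uint16_t my_algorithm(uint16_t x)
    {
        return (uint16_t)(x * 3u + 1u);
    }

    int main(void)
    {
        const uint16_t inputs[]   = { 0, 1, 42, 65535 };
        const uint16_t expected[] = { 1, 4, 127, 65534 };  /* ground truth */
        size_t i, failures = 0;

        for (i = 0; i < sizeof inputs / sizeof inputs[0]; i++) {
            uint16_t got = my_algorithm(inputs[i]);
            if (got != expected[i]) {
                printf("FAIL: in=%" PRIu16 " got=%" PRIu16 " want=%" PRIu16 "\n",
                       inputs[i], got, expected[i]);
                failures++;
            }
        }
        printf("%zu failure(s)\n", failures);
        return failures != 0;
    }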

Related

How does a computer actually convert decimal numbers into binary?

We know that a computer performs all its operations in binary only. It cannot work with decimal or any other base.
If a computer cannot perform any operation in decimal, then how does it convert decimal numbers to binary? I think there are different stages during the conversion at which addition and multiplication are required. How can the computer add or multiply a number before it even has its binary equivalent?
I have searched for this in many places but couldn't find a convincing answer.
Note: this Stack Exchange site is not the right place to ask this question. I am still answering it, but better move it to the appropriate site, or delete the question after getting your answer.
Well, it doesn't care what input you supply to it. Think of it as your TV switch: when you switch it on, your TV starts to work, because it got the current flow it required. Similarly, in a computer there is a particular threshold voltage, let's say 5 V. Anything below it is treated as a '0', otherwise as a '1'. You may have seen AND, OR and other gates. If you supply two '1's to an AND gate, it outputs '1', otherwise '0'. There are many such digital circuits; some examples are a binary adder, a latch, and a flip-flop. They work with these current signals (characterised as 0 or 1 as explained above). A computer is a combination of millions of such circuits.
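To make the gate picture concrete, here is a sketch of a one-bit full adder, the building block of the binary adder mentioned above, with C's bitwise operators standing in for the gates:

    #include <stdio.h>

    /* One-bit full adder built from AND/OR/XOR gates.
     * a, b and cin are each 0 or 1. */
    static void full_adder(int a, int b, int cin, int *sum, int *cout)
    {
        *sum  = a ^ b ^ cin;               /* XOR gates */
        *cout = (a & b) | (cin & (a ^ b)); /* AND and OR gates */
    }

    int main(void)
    {
        int sum, carry;
        full_adder(1, 1, 0, &sum, &carry);       /* 1 + 1 = 10 in binary */
        printf("sum=%d carry=%d\n", sum, carry); /* prints sum=0 carry=1 */
        return 0;
    }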
When you talk about converting decimal to binary or something like that, it's actually not like that. Every program (spreadsheets, games, etc.) is written in some language, most commonly a compiled or interpreted one. Some languages that get compiled are C, Java, etc., and some interpreted ones are Python, Ruby, etc. The job of a compiler or interpreter is to convert the code you wrote in that language to assembly code, following the rules of that language. Assembly code is then converted into machine code when it has to run. Machine code is pure zeros and ones; these zeros and ones define what to execute and when.
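One concrete detail worth adding: the "decimal number" you type reaches the program as ASCII characters, which are already binary, so the conversion is nothing more than binary multiplications and additions. A sketch:

    #include <stdio.h>

    /* Convert the decimal string "123" to its binary value using only
     * binary arithmetic; the characters '1','2','3' are already stored
     * as binary ASCII codes (0x31, 0x32, 0x33). */
    int main(void)
    {
        const char *text = "123";
        unsigned value = 0;

        for (const char *p = text; *p != '\0'; p++) {
            value = value * 10 + (unsigned)(*p - '0'); /* binary multiply-add */
        }
        printf("%s -> %u (0x%X)\n", text, value, value);
        return 0;
    }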
Don't confuse this with what you see. The desktop that displays the data to you is a secondary thing, made specifically to make things easy for us.
In a computer, a clock keeps running; you must have heard of a 2.5 GHz processor or something like that. That is the frequency at which instructions are executed. It may seem odd, but whether you are doing work or not, when the computer is on it executes instructions continuously, and if you are not doing anything it keeps checking for interaction.
Picture it like this:
1) You turned on your PC; the hardware got ready for your commands and kept checking for interaction.
2) You opened a folder. To open a folder you obviously need to touch the keyboard or mouse, or use some voice interaction. That interaction is followed by your computer: pressing a down arrow produces a zero-or-one signal at the right place. Only after this does it get displayed to you. It is not that what is being displayed is being done; instead, what is being done is displayed for you to follow easily.

How can I analyze live data from a webcam?

I am going to be working on a self-chosen project for my college networking class, and I have a couple of questions to help me get started in the right direction.
My project will involve creating a new "physical" link over which data, in the form of text, will be transmitted from one computer to another. This link will involve one computer with a webcam that reads a series of flashing colors (black/white) as binary and converts it to text. Each series of flashes will simulate a packet of data. I will be using OS X and the integrated webcam in a MacBook; the flashing computer will be either Windows or OS X.
So my questions are: which programming languages or APIs would be best for reading live webcam data and analyzing the color of a certain area, as well as for programming and timing the flashes? Also, would I need to worry about matching the flash rate of the "writing" computer to the frame-capture rate of the "reading" computer?
Thank you for any help you might be able to provide.
Regarding the frame-capture rate, the Shannon sampling theorem says that "perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled". In other words, if your flashing light switches 10 times per second, you need a camera of more than 20 fps to properly capture it. So basically: check your camera specs, divide by 2, lower the result a little, and you have your maximum flashing rate.
Whatever can grab the frames will work. If the lighting conditions the camera works in are stable, and the position of the light in the image is static, then it is going to be very easy: just check the average pixel value of a certain area.
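A sketch of that idea in C, assuming you already have an 8-bit grayscale frame from whatever capture API you end up using (the region and threshold values are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Average the pixels of a rectangular region of an 8-bit grayscale
     * frame and threshold the mean into a single bit (1 = bright flash). */
    static int read_flash_bit(const uint8_t *frame, int frame_width,
                              int rx, int ry, int rw, int rh, int threshold)
    {
        long sum = 0;
        for (int y = ry; y < ry + rh; y++)
            for (int x = rx; x < rx + rw; x++)
                sum += frame[y * frame_width + x];
        return sum / (rw * rh) > threshold;
    }

    int main(void)
    {
        uint8_t frame[8 * 8];                  /* stand-in for a captured frame */
        for (int i = 0; i < 8 * 8; i++) frame[i] = 200;
        printf("bit = %d\n", read_flash_bit(frame, 8, 2, 2, 4, 4, 128)); /* 1 */
        return 0;
    }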
If you need additional image processing, you should probably also look into OpenCV (it has bindings for most major programming languages).
To answer your question about language choice, I would recommend Java. The Java Media Framework is great and easy to use; I have used it for capturing video from webcams in the past. Be warned, however, that everyone you ask will recommend a different language - everyone has their preferences!
What are you using as the flashing device? What kind of distance are you trying to achieve? Something worth thinking about is how are you going to get the receiver to recognise where within the captured image to look for the flashes. Some kind of fiducial marker might be necessary. Longer ranges will make this problem harder to resolve.
If you're thinking about shorter ranges, have you considered using a two-dimensional transmitter? (given that you're using a two-dimensional receiver, it makes sense) and maybe have a transmitter that shows a sequence of QR codes (or similar encodings) on a monitor?
You will have to consider some kind of error-correcting code, such as a Hamming code. While the encoding would increase the data footprint, it might give you better overall bandwidth, since you can crank the speed up much higher without having to worry about the odd corrupt bit.
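For illustration, a sketch of the classic Hamming(7,4) encoder, which protects every 4 data bits with 3 parity bits so the receiver can correct any single flipped bit (the bit layout below is one common convention, not the only one):

    #include <stdio.h>

    /* Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
     * Bit positions 1..7; parity bits sit at positions 1, 2 and 4. */
    static unsigned hamming74_encode(unsigned d) /* d = 4 data bits */
    {
        unsigned d1 = (d >> 0) & 1, d2 = (d >> 1) & 1;
        unsigned d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
        unsigned p1 = d1 ^ d2 ^ d4;  /* covers positions 3, 5, 7 */
        unsigned p2 = d1 ^ d3 ^ d4;  /* covers positions 3, 6, 7 */
        unsigned p3 = d2 ^ d3 ^ d4;  /* covers positions 5, 6, 7 */
        /* codeword bits, position 1 = LSB: p1 p2 d1 p3 d2 d3 d4 */
        return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3)
                  | (d2 << 4) | (d3 << 5) | (d4 << 6);
    }

    int main(void)
    {
        for (unsigned d = 0; d < 16; d++)
            printf("data %2u -> codeword 0x%02X\n", d, hamming74_encode(d));
        return 0;
    }

The receiver recomputes the three parities; the pattern of mismatches (the syndrome) directly gives the position of a single flipped bit.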
Some 'evaluation' type material might include you discussing the obvious security risks in using such a channel: anyone with line of sight to the transmitter can eavesdrop! You could suggest in your write-up using some kind of encryption; a block cipher in CBC mode would do, but it would require a key exchange prior to transmission, so you could think about public-key encryption.

Simulator or Emulator? What is the difference?

While I understand what simulation and emulation mean in general, I almost always get confused by them. Assume that I create a piece of software that mimics existing hardware/software: what should I call it, a simulator or an emulator?
Could anyone explain the difference in terms of programming?
Bonus: What is the difference in English between these two terms? (Sorry, I am not a native speaker :))
Emulation is the process of mimicking the outwardly observable behavior to match an existing target. The internal state of the emulation mechanism does not have to accurately reflect the internal state of the target which it is emulating.
Simulation, on the other hand, involves modeling the underlying state of the target. The end result of a good simulation is that the simulation model will emulate the target which it is simulating.
Ideally, you should be able to look into the simulation and observe properties that you would also see if you looked into the original target. In practice, there may be some shortcuts in the simulation for performance reasons -- that is, some internal aspects of the simulation may actually be an emulation.
MAME is an arcade game emulator; HyperTerminal is a (not very good) terminal emulator. There's no need to model the arcade machine or a terminal in detail to get the desired emulated behavior.
Flight Simulator is a simulator; SPICE is an electronics simulator. They model every detail of the target as closely as possible, to represent what the target does in reality.
EDIT: Other responses have pointed out that the goal of an emulation is to be able to substitute for the object it is emulating. That's an important point. A simulation's focus is more on modeling the internal state of the target -- and the simulation does not necessarily lead to emulation. In particular, a simulation may run far slower than real time. SPICE, for example, cannot substitute for an actual electronic circuit (even assuming there were some kind of magical device that perfectly interfaced electrical circuits to a SPICE simulation).
A simulation does not always lead to emulation --
If a flight-simulator could transport you from A to B then it would be a flight-emulator.
An emulator can replace the original for real use.
A Virtual PC emulates a PC.
A simulator is a model for study and analysis.
An emulator will always have to operate close to real-time. For a simulator that is not always the case. A geological simulation could do 1000 years/second or more.
Simulation = For analysis and study
Emulation = For usage as a substitute
A simulator is an environment that models something, whereas an emulator replicates the usage of the original device or system.
A simulator mimics the activity of the thing it is simulating. It "appears" (a lot can go with this "appears", depending on the context) to be the same as the thing being simulated. For example, a flight simulator "appears" to be a real flight to the user, although it does not transport you from one place to another.
An emulator, on the other hand, actually "does" what the thing being emulated does, and in doing so it too "appears to be doing the same thing". An emulator may use a different set of protocols for mimicking the thing being emulated, but the result/outcome is always the same as with the original object. For example, EMU8086 emulates the 8086 microprocessor on your computer, which obviously is not running on an 8086 (= different protocols), but the output it gives is what a real 8086 would give.
It's a difference in focus. Emulators [1] focus on recreating the behavior of a system, with no regard for how the system functions internally. Simulators [2] focus on modeling the components of a system. You use an emulator when you care mostly about what a system does, and a simulator when you care about how it does it.
As for their general English meanings, emulation is "the endeavor to equal or to excel another in qualities or actions", while simulation is "to model, replicate, duplicate the behavior, appearance or properties of". Not much difference. Emulation comes from æmulus, "striving, rivaling", and is related to "imitate" and "image", which suggests a surface-level resemblance. "Simulation" comes from similis, "like", as does the word "similar", which perhaps suggests a deeper congruence.
References:
Wikipedia: Emulator
Wikipedia: Computer Simulation
Wiktionary: emulation
Wiktionary: simulation
Etymology Online: emulation
Etymology Online: simulation
I don't think emulators and simulators can really be compared. Both mimic something, but they are not part of the same scope of reasoning and are not used in the same context.
In short: an emulator is designed to copy some features of the original and can even replace it in the real environment. A simulator is not designed to copy the features of the original, but only to appear similar to the original to human beings. Without the features of the original, the simulator cannot replace it in the real environment.
An emulator is a device that mimics something closely enough that it can be substituted for the real thing. E.g. you want a circuit to work like a ROM (read-only memory) circuit, but you also want to adjust its content until it is what you want. You'll use a ROM emulator: a black box (likely CPU-based) with physical and electrical interfaces compatible with the ROM you want to emulate. The emulator is plugged into the device in place of the real ROM. The motherboard will not see any difference while working, but you will be able to change the emulated ROM's content easily. Said otherwise, the emulator acts exactly like the actual thing in its motherboard context (maybe a little slower due to its internal model), but there are additional functions (like re-writing) visible only to the designer, outside the motherboard context. So an emulator definition would be: something that mimics the original, has all of its functional features, can actually replace it to some extent in the real world, and may have additional features not visible in the normal context.
A simulator is used in another thinking context, e.g. a plane simulator, a car simulator, etc. The simulation takes care of only some aspects of the actual thing, usually those related to how a human being will perceive and control it. The simulator will not perform the functions of the real stuff, and cannot be substituted for it. A plane simulator will not fly or carry anyone; that's not its purpose at all. The simulator is not intended to work, but to appear to the pilot somehow like the actual thing, for purposes other than its normal ones, e.g. to allow ground training (including in unusual situations like all-engine failure). So a simulator definition would be: something that can appear to a human, to some extent, like the original, but cannot replace it for actual use. In addition, the pilot will know that the simulator is a simulator.
I don't think we'll see any ROM simulator, because ROMs don't interact with human beings; nor will we see any plane emulator, because planes cannot have a replacement performing the same functions in the real world.
In my view, the model inside an emulator or a simulator can be anything, and need not be similar to the model of the original. A ROM emulator's model will likely be software instead of hardware; MS Flight Simulator cannot be more software than it is.
This comparison of the two terms contradicts the currently selected answer (from Toybuilder), which puts the difference in the internal model, while my suggestion is that the difference is whether the fake can or cannot be used to perform the actual function in the actual world (to some accepted extent, indeed).
Note that a plane simulator will also have to simulate the earth, the sun, the wind, etc., which are not part of the plane. So a plane simulator has to mimic some aspects of the plane, as well as the environment of the plane, because it is not used in that actual environment, but in a training room.
This is a big difference from the emulator, which emulates only the original, and whose purpose is to be used in the environment of the original with no need to emulate it. Back to the plane context... what could a plane emulator be? Maybe a train that connects two airports -- actually two plane steps -- carrying passengers, with stewardesses onboard, with a car interior looking like an actual plane cabin, and with the captain saying "ladies and gentlemen, our altitude is currently 10 km and the temperature at our destination is 24°C". Its benefit is difficult to see, hmm...
As a conclusion: the emulator is a real thing intended to work; the simulator is a fake intended to trick the user.
To understand the difference between a simulator and an emulator, keep in mind that a simulator tries to mimic the behavior of a real device. For example, the iOS Simulator simulates the real behavior of an actual iPhone/iPad device, but it uses the various libraries installed on the Mac (such as QuickTime) to perform its rendering, so that the effect looks the same as on an actual iPhone. In addition, applications tested on the Simulator are compiled into x86 code, which is the machine code the Simulator's host understands. A real iPhone device, conversely, uses ARM-based code.
In contrast, an emulator emulates the working of a real device. Applications tested on an emulator are compiled into the actual machine code used by the real device, and the emulator executes the application by translating that code into a form the host computer can execute.
To understand the subtle difference between simulation and emulation, imagine you are trying to convince a child that playing with knives is dangerous. To simulate this, you pretend to cut yourself with a knife and groan in pain. To emulate this, you actually cut yourself.
In more or less normal parlance: if your software can do everything the mimicked system can do, it's an emulator. If it only approximates the results of a system (IT or otherwise), it's a simulator.
An emulator is a model of a system that will accept any valid input the emulated system would accept, and produce the same output or result. So your software is an emulator only if it reproduces the behavior of the emulated system precisely.
Some years ago I came up with a very short adage that, I believe, captures the essence of the difference quite nicely:
A simulator is an emulator on a mission.
By that I mean that you use an emulator when you can't use the real thing, and you use a simulator when you can't use the real thing and you want to find something out about it.
Simple Explanation.
If you want to convert your PC (running Windows) into Mac, you can do either of these:
(1) You can simply install a Mac theme on your Windows. So, your PC feels more like Mac, but you can't actually run any Mac programs. (SIMULATION)
(or)
(2) You can program your PC to run like Mac (I'm not sure if this is possible :P ). Now you can even run Mac programs successfully and expect the same output as on Mac. (EMULATION)
In the first case, you can experience Mac, but you can't expect the same output as on Mac.
In the second case, you can expect the same output as on Mac, but still the fact remains that it is only a PC.
A simulator is like an interpreter: it actually executes the real code line by line to mimic the behaviour.
An emulator is like an executable: it takes compiled code and executes it.
The distinction between the two terms is a bit fuzzy. I come from a world where "emulators" are pieces of hardware that let you debug embedded systems, and I remember products that gave you ICE (In-Circuit Emulation) capabilities to debug a PC platform, so I find the use of the term "emulation" to be somewhat of a misnomer for software that SIMULATES the behaviour of a piece of hardware.
My justification for the current use of the term "emulation" is that it may "augment" the functionality, and is only concerned with a "reasonable" approximation of the behaviour of the system.
ICE (In-Circuit Emulation):
A piece of hardware that is plugged into a board in place of the actual processor. It allows you to run the system as if the actual processor were present. Typically these have a variant of the processor on them to actually execute the software, with glue logic that lets the user break execution and single-step under hardware control. Some would also provide logging capability. Most modern processor development systems have replaced ICE-type emulation with JTAG emulation, where the JTAG unit just talks to the processor via a special-purpose serial link and all execution is performed by the processor mounted on the board.
Software EMULATOR:
An x86 emulator is only concerned with being able to execute x86 assembly language, not with providing an accurate cycle-per-cycle behavioural model of a SPECIFIC x86 processor. Bochs is an example of this. QEMU does this too, but also allows "virtualization" using special kernel modules.
SIMULATOR:
Texas Instruments provides a CYCLE-ACCURATE behavioural model of their processors for software development, intended to be an accurate SIMULATION of a SPECIFIC processor core's behavior for developers to use before working hardware is available.
Software EMULATOR augmenting functionality:
Bleem! not only allowed you to run PlayStation software, but also allowed the display to be output at a higher resolution than the PlayStation could provide, and took advantage of more advanced capabilities of the GPUs that were available (i.e. better blending and smoothing of textures).
An emulator is an alternative to the real system but a simulator is used to optimize, understand and estimate the real system.
A simulation is a system that behaves similarly to something else, but is implemented in an entirely different way. It provides the basic behavior of a system, but may not necessarily abide by all of the rules of the system being simulated. It is there to give you an idea of how something works.
An emulation is a system that behaves exactly like something else, and abides by all of the rules of the system being emulated. It is effectively a complete replication of another system, right down to being binary-compatible with the emulated system's inputs and outputs, but operating in a different environment from the environment of the original emulated system. The rules are fixed and cannot be changed, or the system fails.
The two terms denote quite different things and intersect only a little. Finding the right term is actually very easy; just think about the following:
A simulation does not do anything for real. You can study it, for example how a computer works, but it usually has no outcome beyond that. A plane crash in a flight simulator causes no real harm. A weather-forecast simulation does not itself change the weather.
An emulation does something for real. You can work with an emulated computer as with a physical one and create documents with it. And a plane crash in a (hypothetical) flight emulator would have an outcome: people experiencing the real impact, including possible physical harm.
Your confusion probably stems from the fact that "studying the simulation" and "accessing the emulation" are often quite the same thing.
You are not alone in your confusion. The film "The Matrix" speaks of a simulation. However, the Matrix runs an emulation, as it has real impact on all members of the Matrix. In contrast, the training room has no real impact, so it is a simulation (of the Matrix).
Let's see some examples.
Simulated vs. Emulated Rain
Take a water hose in the garden and let it rain. What's the difference between simulation and emulation here?
When you are simulating rain, people will still blame you for getting wet. Your rain has some real impact on the world, but your simulation hasn't, as the simulation does not fool anybody into thinking it is real rain.
In contrast, when you are emulating rain, people will blame the weather. That is, your emulated rain really behaves like rain does in reality. The rain emulation hence distorts reality, making people believe in the wrong culprit.
It took me quite some time to understand that, so it isn't easy or obvious, which explains all the confusion.
Keep in mind that a simulation can have side effects: weather forecasts are based on simulations, which take quite some computing power and thus electrical energy, which has an environmental impact.
Hence, in the example of "simulated rain", people getting wet is just a side effect and not part of the simulation. The same is true if you simulate a rainbow with this simulated rain: while the property of "how rainbows work" is part of the simulation, the simulation itself is not providing the rainbow; that just happens due to refraction of the sun in the side effect of the water drops.
Simulated vs. Emulated Computer
While you might think "a simulated computer can have an outcome", this is practically wrong reasoning. If you save files onto a simulated hard drive, those files cannot leave the simulated drive outside of the simulation. You can obtain the files by studying the simulated drive, but that is not part of the simulation itself.
If the hard drive saves the data in such a way that the data is actually usable outside of the simulation, then you have an emulated hard drive within the simulation doing so.
So an emulation can be part of a simulation, and vice versa.
Simulated vs. Emulated Filesystem
If you simulate a filesystem, you will probably, for practicality, choose to save the files onto your real filesystem as-is (perhaps with some additional meta-information). In that case the simulation seems to create real "value" outside of the simulation: usable files!
But this is just a coincidence, because your simulated filesystem actually emulates a filesystem as well. You have emulated the outside filesystem inside your simulation!
Simulated vs. Emulated TPM or HSM
A good example of the difference is when you think of security. A TPM is a specific device that keeps its own keys secure (a source of identity), while an HSM is a general device for securing foreign keys (verifying identity).
Fun Fact: My fingers constantly type TMP instead of TPM.
If you simulate a TPM, this has a huge effect on security, because you can then observe the internal states of the TPM, which renders all the security void. Even though such a simulation can give you valuable hints for improving the design of a TPM itself, you won't want to expose precious data to a simulated TPM for real.
However, if you emulate a TPM, you will try to hide those internal states from the outside as well as you can. Such an emulated TPM can then possibly be used to really secure something else, better than without it.
With a real TPM you cannot emulate the properties of a real HSM. All you can achieve is to simulate an HSM, but this will not have the security properties of a real HSM, so all data stored in this simulated HSM will not be protected (it will only be protected within the simulation itself).
In contrast, with a real HSM you can emulate a TPM with all the properties of a real TPM. For this, the HSM needs to be constructed such that no information leaves the HSM that would not leave a TPM as well.
(Please note that I do not know anything about HSMs or TPMs in particular, so it might be that there are no HSMs out there able to provide emulated TPMs.)
Simulated vs. Emulated World
If our world is simulated, we are simulations, too. Hence some spectator (let's call her God) can look at us and change the simulation at any time. Also, we cannot find out whether we are simulated or not. As I am pretty sure that I know that I am, I do not think I am simulated, because self-awareness looks to me like an effect with a real component, which contradicts simulation. This also means our world cannot be a simulation either, as a simulation can only affect me the way the world does if I am part of the simulation.
But our world could still be emulated (like in the film "The Matrix"), as all I have to "prove the world" with is my state of mind and sensory input, which I cannot verify, as I cannot leave myself. If I am not part of the emulation, there should be a chance to observe discontinuities (as in the film "The Matrix"), in case the emulation does not work flawlessly.
This changes if I am emulated, too, like an OS running in an emulator. Then I cannot observe such errors, as my state can be reset from within the emulation (call it sleep) without observable discontinuity.
However, I rather think that the world is a holographic hallucination than something like an emulation. Because if it is emulated, then I am pwned by somebody (call him Rick) who is running the emulation for some purpose, while a hallucination is purely my own thing.
I'll stop here, because hallucinations lead us to something completely different.
This question is probably best answered by taking a look at historical practice.
In the past, I've seen gaming-console emulators on the PC for the PlayStation and SEGA consoles.
Simulators are commonplace when referring to software that tries to mimic real-life actions, such as driving or flying. Gran Turismo and Microsoft Flight Simulator spring to mind as classic examples of simulators.
As for the linguistic difference: emulation usually refers to the action of copying someone's (or something's) praiseworthy characteristics or behaviors. Emulation is distinct from imitation, in which a person is copied for the purpose of mockery.
The linguistic meaning of the verb 'simulate' is essentially to pretend or mimic someone or something.
In computer science, both a simulation and an emulation produce the same outputs, from the same inputs, that the original system does; however, an emulation also uses the same processes to achieve it and is made out of the same materials. A simulation uses different processes from the original system. Also worth noting is the term replication, which is the intermediate of the two: using the same processes but being made out of a different material.
So if I want to run my old Super Mario Bros. game on my PC, I use an SNES emulator, because it uses the same or similar computer code (processes) to run the game, and the same or similar materials (a silicon chip).
However, if I want to fly a Boeing 747 jet on my PC, I use a flight simulator, because it uses completely different processes from the original (there are no actual wings, lift or aerodynamics involved!).
Here are the exact definitions taken from a computer science glossary:
A simulation is a model of a system that captures the functional connections between inputs and outputs of the system, but without necessarily being based on processes that are the same as, or similar to, those of the system itself.
A replication is a model of a system that captures the functional connections between inputs and outputs of the system and is based on processes that are the same as, or similar to, those of the system itself.
An emulation is a model of some system that captures the functional connections between inputs and outputs of the system, based on processes that are the same as, or similar to, those of that system, and that is built of the same materials as that system.
Reference: The Open University, M366 Glossary 1.1, 2007
Both are models of an object that you have some means of controlling inputs to and observing outputs from.
The key difference is that:
With an emulator, you want the output to exactly match what the object you are emulating would produce.
With a simulator, you want certain properties of your output to be similar to what the object would produce.
Let me give an example: suppose you want to do some system testing to see how adding a new sensor (like a thermometer) to a system would affect it. You know that the thermometer sends a message 8 times a second containing its measurement.
Simulation -- if you do not have the thermometer yet, but you want to test that this message rate will not overload your system, you can simulate the sensor by attaching a unit that sends a random number 8 times a second. You can run any test that does not rely on the actual value the sensor sends.
Emulation -- suppose you have a very expensive thermometer that measures to 0.001 C, and you want to see if you can get by with a cheaper thermometer that only measures to the nearest 0.5 C. You can emulate the cheaper thermometer using the expensive one by rounding the reading to the nearest 0.5 C, and then run tests that rely on the temperature values.
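A minimal sketch of that emulation step in C (the function name is made up; quantising to the nearest 0.5 C is the whole trick):

    #include <math.h>
    #include <stdio.h>

    /* Emulate a cheap 0.5-degree thermometer on top of a precise reading. */
    static double emulate_cheap_thermometer(double precise_celsius)
    {
        return round(precise_celsius * 2.0) / 2.0;  /* nearest 0.5 C */
    }

    int main(void)
    {
        double precise = 21.437;  /* e.g. from the expensive 0.001 C sensor */
        printf("%.3f C -> %.1f C\n",
               precise, emulate_cheap_thermometer(precise));
        return 0;   /* prints 21.437 C -> 21.5 C */
    }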
Note that simulations can also be used for forecasting or predicting behavior. Finite element analysis simulations are used in many applications, including weather prediction and virtual wind tunnels.
The definitions of the terms:
emulation -- surpass or exactly match
simulate -- imitate in appearance or character
The definitions of the words describe the difference best. A Google search gives the following definitions of simulate and emulate:
simulate: imitate the appearance or character of.
emulate: match or surpass (a person or achievement), typically by imitation.
A simulation imitates a system. An emulation simulates a system so well that it could replace it, or may even surpass it.
In computing, an emulation is a drop-in replacement for the system it is emulating. Oftentimes it will even outperform the system it imitates. For example, game-console emulators usually make improvements such as greater hardware compatibility, better performance, and improved audio/video quality.
Simulations, on the other hand, are limited by being models. They are a best attempt to mimic a system, but not a replacement for it. There are hardware emulators because hardware can be imitated and it would be hard to tell the difference. There is no farming emulator, because no emulation could replace actual farming; we can only simulate a model of farming to gain insight into how to farm better.
A Virtual PC tries to emulate a computer from the point of view of a programmer BUT, at the same time, it simulates a computer from the point of view of an electrical engineer.
Simulator is a broader term than emulator, and it seems the duality of these terms is overthought in the posts above.
Emulator
People decided to use the new word emulation in the computer world when they started replacing some hardware parts of an existing system in a straightforward manner: imitating their behaviour while relying on their computational nature, to be sure not to break anything and to leave everything in an equivalent state. So: we have emulated the piece of this! (And the whole still works as before.)
Emulator is usually used in a narrow sense in the digital area, as a replacement and virtualization -- a presentation in digital form as a piece of software -- of something known that existed before (virtual chips, circuit boards, electronic devices). So when the world became more digital and brought the word emulator to the masses, the masses added uncertainty to it (or additional meanings).
Simulator
First of all, I saw many comments claiming that emulators do or replace something real but simulators do not.
BUT a flight simulator is used for a real thing: it trains pilots, builds their skills and knowledge, and it replaces expensive real planes and saves a lot of money. We cannot just call it a plane emulator, because we have an inner feeling that it is much more than that, so we call it a simulator :) A plane simulator could contain an emulated radar or transponder, that is true.
As for the counter-statements that simulators are used for analysis and study (and emulators for something real): that analysis and study is no less a real thing than emulated GSM boards (even more so in the information age we live in). Analysis adds value to the business and cuts costs or points to profits no less than the replaced (emulated) hardware does.
Simulation is akin to modelling something that we can't obtain for some reason (cost, technology, physical impossibility). Something is usually simulated when it is new, intangible, complex, or not properly known to us, like the market, weather, combustion, or a user. So here come the flight, black-hole and stock-exchange simulations.
So finally:
Simulator is broader than Emulator
Simulator tends to imitate/model more global processes/things in general, with the ability to narrow the imitation down (e.g. a capacitor simulator with presets representing some known models)
Emulator tends to imitate certain hardware devices with a certain specification, known characteristics and properties (e.g. an SNES emulator, an Intel 8087 or a Roland TB-303)
As for the words' origin, both came from Latin:
emulate means "to be equal" (looks more aggressive and straightforward: rivalry)
simulate means "to be similar" (looks more sly and tricky: imitation)
Emulator:
Consider a situation where you know only English and you are in China. In order to interact with a Chinese person you need a translator. The translator's role is to take input from you in English, convert it to Chinese, give that input to the Chinese person, get the response from the Chinese person, convert it back to English, and give you the output in English. The translator and the Chinese person together are the emulator: combined, they provide the same functionality as if you were communicating with an English speaker. The hardware may be different, but the functionality is the same.
Simulator:
I can't give a better example than SPICE or a flight simulator. Both replace hardware component behavior with software or a mathematical model that behaves like the hardware.
In the end, it depends on the context which solution better suits the project's needs.
Emulation is like abstraction: it shows what something can do. Example: emulating driving a car.
Simulation is like encapsulation: it shows how it does it. Example: the inner activity of a car's engine.
The simulator is necessarily a scale model.
Emulators pretend to be a 1:1 model.

What is Code Coverage?

I have 3 questions:
What is code coverage?
What is it good for?
What tools are used for analyzing code coverage?
You can get very good information from these Stack Overflow questions:
Free code coverage tools
What is Code Coverage and how do YOU measure it?
Code coverage is a measurement of how many lines/blocks/arcs of your code are executed while the automated tests are running. CC is collected by using a specialized tool to instrument the binaries to add tracing calls, and then running a full set of automated tests against the instrumented product. A good CC tool will give you not only the percentage of the code that is executed, but will also allow you to drill into the data and see exactly which lines of code were executed during a particular test.
Code coverage algorithms were first created to address the problem of assessing a program by looking directly at its source code. Code coverage belongs to the structural testing category, because the assertions are made on the internal parts of the program and not on system outputs. Therefore code coverage aims at finding parts of the code that are not exercised by the tests.
http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=7580
It is good for:
Functional coverage, aiming at finding how many functions or procedures were executed.
Statement or line coverage, which identifies how many lines in the source code were executed.
Condition coverage or decision coverage, which answers the question of how many of the branch conditions in the program were executed.
Path coverage, which focuses on whether all possible paths from a given starting point in the code have been executed.
Entry and exit coverage, which finds how many functions (C/C++, Java) or procedures (Pascal) were executed from beginning to end.
TOOLS
http://www.codecoveragetools.com/
http://java-source.net/open-source/code-coverage
http://www.codecoveragetools.com/index.php/coverage-process/code-coverage-tools-java.html
http://open-tube.com/10-code-coverage-tools-c-c/
http://csharp-source.net/open-source/code-coverage
http://www.kdedevelopers.org/node/3190
From the Wikipedia article:
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white-box testing. Currently, the use of code coverage is extended to the field of digital hardware, the contemporary design methodology of which relies on hardware description languages (HDLs).
Advocating the use of code coverage
A code coverage tool simply keeps track of which parts of your code get executed and which parts do not. Usually, the results are granular down to the level of each line of code. So in a typical situation, you launch your application with a code coverage tool configured to monitor it. When you exit the application, the tool will produce a code coverage report which shows which lines of code were executed and which ones were not. If you count the total number of lines which were executed and divide by the total number of lines which could have been executed, you get a percentage. If you believe in code coverage, the higher the percentage, the better. In practice, reaching 100% is extremely rare.
The use of a code coverage tool is usually combined with the use of some kind of automated test suite. Without automated testing, a code coverage tool merely tells you which features a human user remembered to use. Such a tool is far more useful when it is measuring how complete your test suite is with respect to the code you have written.
Related articles
The Future of Code-Coverage Tools
The effectiveness of code coverage tools in software testing
Tools
Open Source Code Coverage Tools in Java
Code coverage is a metric showing how "well" the source code is tested. There are several types of code coverage: line coverage, function coverage, branch coverage.
In order to measure coverage, you run the application either manually or through automated tests.
Tools can be divided into two categories:
- those that run the compiled code in a modified environment (like a debugger), counting the required points (functions, lines, etc.);
- those that require special compilation, so that the resulting binary already contains the code that actually does the counting.
There are several tools for measuring and visualizing the result; they depend on the platform and on the source language.
Please read the Wikipedia article.
To recommend tools, please specify which OS and language you use.
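As a concrete illustration of the second category: with gcc, the --coverage flag builds an instrumented binary, and gcov then reports per-line execution counts (the file name below is made up):

    /* cov_demo.c -- a tiny program to demonstrate line coverage.
     * Build, run, and report:
     *   gcc --coverage -O0 -o cov_demo cov_demo.c
     *   ./cov_demo
     *   gcov cov_demo.c   (writes cov_demo.c.gcov; unexecuted lines show #####)
     */
    #include <stdio.h>

    static const char *sign(int x)
    {
        if (x < 0) return "negative";
        if (x > 0) return "positive";  /* never reached by the run below */
        return "zero";                 /* never reached either */
    }

    int main(void)
    {
        printf("%s\n", sign(-5));      /* exercises only the x < 0 branch */
        return 0;
    }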
Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested.
http://en.wikipedia.org/wiki/Code_coverage
The Wikipedia definition is pretty good, but in my own words, code coverage tells you how much of your code is exercised by your automated tests. 100% would mean that every single line of code in your application is covered by a unit test.
NCover is an application for .NET
The term refers to how well your program is covered by your tests. See the following Wikipedia article for more info:
http://en.wikipedia.org/wiki/Code_coverage
The other answers already cover what code coverage is. The thing I'd like to stress is that you need to be careful not to treat high coverage as implicitly meaning you've tested all scenarios. It doesn't necessarily say how well you've tested the code or the quality of your tests, just that you've hit a certain percentage of the code while running the tests.
High Code Coverage does not necessarily mean High Test Quality, but High Test Quality does mean High Code Coverage
In practice, I usually aim for 90-95% code coverage which is often achievable. The last few % are often too expensive to be worth trying to hit.
There are many ways to develop applications. One of them is "Extreme Programming" or "Test-Driven Development (TDD)". It states that all code should be tested. Code coverage is a means of measuring how much is tested.
I'd like to make a small remark about this: I don't think all code should be tested, nor that one should set a specific percentage of code coverage. Neither do I think that code shouldn't be tested with unit tests (code testing code). I do think one should decide what makes sense to test. For this reason I generally don't use code coverage.
One thing that some tools provide is highlighting the parts that are tested. This way you might run into some code that isn't tested but actually should be, which is the only thing I use it for.
Good answers.
My two cents: there is no method of testing that catches all errors, but less testing will never catch more errors, so any testing is good. To my mind, coverage testing is not there to show what code has been exercised, but to show what code has not been exercised, because that is where bugs love to lurk.
If you combine it with single-stepping, it is a very good way to review code and catch bugs. Here's an example.
Another useful tool for ensuring code quality (which encompasses code coverage) that I recently used is Sonar.
Here is the link http://www.sonarqube.org/

How are serial generators / cracks developed?

I mean, I always wondered how the hell somebody can develop algorithms to break/cheat the constraints of legal use in many of the shareware programs out there.
Just out of curiosity.
Apart from being illegal, it's a very complex task.
Speaking at a purely theoretical level, the common way is to disassemble the program you want to crack and try to find where the key or the serial code is checked.
That is easier said than done, since any serious protection scheme will check values in multiple places and will also derive critical information from the serial key for later use, so that when you think you have guessed it, the program will crash.
To create a crack, you have to identify all the points where a check is done and modify the assembly code appropriately (often inverting a conditional jump, or storing constants into memory locations).
To create a keygen, you have to understand the algorithm and write a program to redo the exact same calculation (I remember an old version of MS Office whose serial had a very simple rule: the sum of the digits had to be a multiple of 7, so writing the keygen was rather trivial).
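As a toy illustration of re-doing such a calculation (the hypothetical digit-sum rule described above, not any real product's scheme), a keygen can be a few lines of C:

    #include <stdio.h>

    /* Toy "keygen" for the hypothetical rule above: a serial is
     * considered valid when the sum of its digits is a multiple of 7. */
    static int digit_sum(unsigned long n)
    {
        int s = 0;
        while (n > 0) { s += (int)(n % 10); n /= 10; }
        return s;
    }

    int main(void)
    {
        int found = 0;
        for (unsigned long n = 1000000; found < 5; n++) {
            if (digit_sum(n) % 7 == 0) {
                printf("candidate serial: %lu\n", n);
                found++;
            }
        }
        return 0;
    }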
Both activities require you to follow the execution of the application in a debugger and try to figure out what's happening. And you need to know the low-level API of your operating system.
Some heavily protected applications have their code encrypted so that the file can't be disassembled. It is decrypted when loaded into memory, but then they refuse to start if they detect that an in-memory debugger has started.
In essence, it's something that requires very deep knowledge, ingenuity and a lot of time! Oh, did I mention that it is illegal in most countries?
If you want to know more, Google for the +ORC Cracking Tutorials; they are very old and probably useless nowadays, but will give you a good idea of what it means.
Anyway, a very good reason to know all this is if you want to write your own protection scheme.
The bad guys search for the key-check code using a disassembler. This is relatively easy if you know what you are doing.
Afterwards you translate the key-checking code to C or another language (this step is optional). Reversing the process of key checking gives you a key generator.
If you know assembler, it takes roughly a weekend to learn how to do this. I did it just a few years ago (I never released anything, though; it was just research for my game-development job: to write a hard-to-crack key check you have to understand how people approach cracking).
Nils's post deals with key generators. For cracks, you usually find a branch point and invert (or remove) the condition. For example, the program will test whether the software is registered; the test may return zero if so, and the code then jumps accordingly. You can change the "jump if equal (je)" to "jump if not equal (jne)" by modifying a single byte. Or you can write no-operations over various portions of the code that do things you don't want done.
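The single-byte nature of that change follows from the x86 encoding: the short form of je is opcode 0x74 and jne is 0x75, each followed by a relative offset. A sketch of the idea on an in-memory buffer (the instruction bytes here are a made-up fragment, purely for illustration):

    #include <stdio.h>

    /* Illustration only: flipping one byte turns je into jne,
     * inverting the branch without touching anything else. */
    int main(void)
    {
        unsigned char code[] = { 0x74, 0x05 };  /* je  +5 (hypothetical) */
        printf("before: %02X %02X (je)\n",  code[0], code[1]);
        code[0] = 0x75;                         /* jne +5 */
        printf("after:  %02X %02X (jne)\n", code[0], code[1]);
        return 0;
    }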
Compiled programs can be disassembled, and with enough time, determined people can develop binary patches. A crack is simply a binary patch that gets the program to behave differently.
First, most copy-protection schemes aren't terribly advanced, which is why you don't see a lot of people rolling their own these days.
There are a few methods used to do this. You can step through the code in a debugger, which generally requires a decent knowledge of assembly. Using that, you can get an idea of where in the program the copy-protection/keygen methods are called. With that, you can use a disassembler like IDA Pro to analyze the code more closely and try to understand what is going on, and how you can bypass it. I've cracked time-limited betas before by inserting NOP instructions over the date check.
It really just comes down to a good understanding of software and a basic understanding of assembly. Hak5 did a two-part series on the basics of reverse engineering and cracking in the first two episodes this season. It's really basic, but it's probably exactly what you're looking for.
A would-be cracker disassembles the program and looks for the "copy protection" bits, specifically for the algorithm that determines if a serial number is valid. From that code, you can often see what pattern of bits is required to unlock the functionality, and then write a generator to create numbers with those patterns.
Another alternative is to look for functions that return "true" if the serial number is valid and "false" if it's not, then develop a binary patch so that the function always returns "true".
Everything else is largely a variant on those two ideas. Copy protection is always breakable by definition - at some point you have to end up with executable code or the processor couldn't run it.
For the serial number, you can just extract the algorithm and start throwing guesses at it, looking for a positive response. Computers are powerful; it usually takes only a little while before hits start coming out.
As for cracking, I used to be able to step through programs at a high level and look for the point where they stopped working. Then you go back to the last "call" that succeeded, step into it, and repeat. Back then, the copy protection usually wrote to the disk and checked whether a subsequent read succeeded (if it did, the copy protection failed, because publishers used to burn part of the floppy with a laser so that it couldn't be written to).
Then it was just a matter of finding the right call and hard-coding the correct return value from that call.
I'm sure it's still similar, but nowadays a lot of effort goes into hiding the location of the call. The last one I tried, I gave up on, because it kept loading code over the code I was single-stepping through, and I'm sure it's gotten a lot more complicated since then.
I wonder why they don't just distribute personalized binaries, where the name of the owner is stored somewhere (encrypted and obfuscated) in the binary, or better, spread over the whole binary. AFAIK Apple does this with the music files from the iTunes Store, although there it's far too easy to remove the name from the files.
I assume each crack is different, but I would guess that in most cases somebody spends a lot of time in the debugger tracing the application in question.
A serial generator takes that one step further, by analyzing the algorithm that checks the serial number for validity and reverse-engineering it.