We know that a computer performs all its operations in binary only. It cannot work with decimal or numbers in any other base.
If a computer cannot perform any operation on decimals, how does it convert them to binary? I think there are stages during the conversion at which addition and multiplication are required. How can the computer add or multiply a number before it even has its binary equivalent?
I have searched for this in many places but couldn't find a convincing answer.
Note: this Stack Exchange site is not the right place to ask this question. I am still answering it, but you'd better move it to the appropriate one or delete the question after getting your answer.
Well, it doesn't care what input you supply to it. Think of it like your TV switch: when you switch it on, your TV starts to work, because it got the exact current flow it required. Similarly, in a computer there is a threshold voltage, let's say 5 V; anything below it is treated as what you call '0', otherwise '1'. You may have seen AND, OR, etc. gates. If you supply two '1's to an AND gate, it outputs '1', otherwise '0'. There are many such digital circuits; some examples are a binary adder, a latch, and a flip-flop. These work with those voltage signals (characterised as 0 or 1 as explained above). A computer is a combination of millions of such circuits.
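To make the gate picture concrete, here is a hedged sketch in C++ of a one-bit full adder, the building block of the binary adder mentioned above, with boolean operators standing in for the physical gates:

```cpp
// A one-bit full adder built from the AND/OR/XOR gates described above,
// expressed as boolean operations (an illustration, not real hardware).
#include <iostream>

void full_adder(bool a, bool b, bool carry_in,
                bool& sum, bool& carry_out) {
    sum       = a ^ b ^ carry_in;                 // XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b));   // AND and OR gates
}

int main() {
    bool sum, carry;
    full_adder(1, 1, 0, sum, carry);
    std::cout << carry << sum << "\n";  // prints "10": 1 + 1 = binary 10
}
```

Chain enough of these together (each stage's carry feeding the next) and you have the multi-bit adder a CPU uses.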
When you talk about converting decimal to binary or something like that, it's actually not like that. Every program (spreadsheets, games, etc.) is written in some language; the most common kinds are compiled or interpreted. Some languages that get compiled are C, Java, etc., and some interpreted ones are Python, Ruby, etc. The job of a compiler or interpreter is to convert the code you wrote in that language to assembly code, as per the rules of that language. Assembly code is then converted into machine code when it has to run. Machine code is pure zeros and ones. These zeros and ones just define triggers for what to execute and when.
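To connect this back to the original question: when a program "converts decimal to binary", the decimal digits arrive as characters (keystrokes), and the running total is binary from the very first step, so the multiplication and addition the questioner worries about already happen in binary. A minimal sketch (the input string is just an example):

```cpp
// How "decimal to binary" actually looks to a program: the digits are
// characters, and 'value' is a binary register from the start.
#include <iostream>
#include <string>

int main() {
    std::string input = "2013";          // digit characters, e.g. typed in
    unsigned value = 0;                  // already binary in hardware
    for (char c : input)
        value = value * 10 + (c - '0');  // binary multiply and add
    std::cout << value << "\n";          // 2013, stored as 11111011101
}
```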
Don't confuse this with what you see. The desktop that displays the data to you is a secondary thing, made specifically to make things easy for us.
In a computer, a clock keeps running. You must have heard of a 2.5 GHz processor or something like that; this is the frequency at which instructions are executed. It seems odd, but yes: whether you are doing work or not, while the computer is on it executes instructions continuously, and if you are not doing anything it keeps checking for interaction.
Picture it like this:
1) You turned on your PC; the hardware got ready for your commands and kept checking for interaction.
2) You opened a folder. To open a folder, you obviously need to touch the keyboard or mouse, or do some voice interaction. Your computer follows that interaction: pressing the down arrow produces a zero-or-one signal at the right place, and after that the result gets displayed to you. It is not that what is being displayed is being done; rather, what is being done is displayed for you to follow easily.
For quite some time I have been wondering how automatic differentiation works. However, I am a bit confused about how forward mode works -- I am not equipped to deal with reverse mode at the moment. I have tried to read the source code of some libraries (mainly autodiff) and some papers (e.g. FAD) in order to understand how people do it, with little success.
My main issue is that I don't get how dual numbers are used. For example, let's say we define a class of dual numbers (in C++) that holds two numbers: value and derivative. Then, we can overload different mathematical functions and operators in order to define the dual number algebra (as in the complex number case). Then, and this is my problem: no matter what we do, we are only going to get first derivatives.
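For reference, here is a minimal sketch of the dual-number class described above (the names and the example function are mine, not from any particular library): each value carries its derivative, and the chain rule is baked into the operators.

```cpp
// Minimal forward-mode dual number: (val, der) propagated together.
#include <cmath>
#include <iostream>

struct Dual {
    double val;  // f(x)
    double der;  // f'(x)
};

Dual operator+(Dual a, Dual b) {        // sum rule
    return {a.val + b.val, a.der + b.der};
}
Dual operator*(Dual a, Dual b) {        // product rule
    return {a.val * b.val, a.der * b.val + a.val * b.der};
}
Dual sin(Dual a) {                      // chain rule through sin
    return {std::sin(a.val), std::cos(a.val) * a.der};
}

int main() {
    Dual x{2.0, 1.0};                   // seed: dx/dx = 1
    Dual y = x * x + sin(x);            // y = x^2 + sin(x)
    std::cout << y.val << " " << y.der << "\n";  // f(2) and f'(2) = 4 + cos(2)
}
```

Exactly as the question says, this propagates only first derivatives; higher orders need nested or hyper-dual variants.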
I keep reading about implementations of hyper-dual numbers, which are described as duals that store value, Jacobian, Hessian, etc. If this is true, then if I have a function of 15 variables and I need the third derivative with respect to all of them, my computer is going to blow up... Since there are very efficient libraries out there that do such calculations, I am clearly missing something.
I don't have a specific coding question; I would appreciate any input on how forward-mode autodiff can be implemented in a practical way.
More info
I have written a basic dual number library in C++, which you can find on GitHub. However, once I finished writing the class and a few function overloads, I gave up due to the problem I describe above (DualNumbers.cpp has several examples, though).
Recently I also started again, this time using expression templates (because I wanted to learn how to use them) -- see GitHub -- but this approach has another issue, which I describe in another question.
I do not have experience with microcontrollers, but I have a question related to them. Here is an explanation of my issue:
I have an algorithm, and I want to calculate how many cycles it would cost on a specific AVR microcontroller.
To do that I downloaded AVR Studio 6 and used the simulator. I succeeded in obtaining the number of cycles for my algorithm. What I want to know is how I can make sure that my algorithm is working as it should. AVR Studio allows me to debug using the simulator, but I am not able to see the output of my algorithm.
To simplify my question: I would like some help implementing the hello world example in AVR Studio, that is, I want to see "hello world" in the output window, if that is possible.
My question is not how to program the microcontroller; my question is how I can see the output of a program in AVR Studio.
Many thanks
As Hanno Binder suggested in his comment:
Atmel Studio still does not provide any means to display debug messages sent by the simulated program. Your only option is to place breakpoints at appropriate locations and then inspect the state of the device in the simulator: for example, the locations in RAM where your result is stored, or the registers in which it may reside; maybe set a 'watch' on a variable or expression.
I think this is the best answer: watch variables and memory while in debug mode.
Note: turn off optimization when you want to debug, or some variables will be optimized away.
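As a hedged illustration (the algorithm is a placeholder, and this assumes a plain avr-gcc project), you can park the result in a volatile global and break on the idle loop:

```cpp
// Sketch: making an algorithm's result visible in the AVR Studio
// simulator without any I/O peripherals.
#include <stdint.h>

// 'volatile' keeps the compiler from optimizing the result away,
// so it stays watchable in the debugger even with optimization on.
volatile uint16_t result;

static uint16_t my_algorithm(uint16_t x) {  // placeholder for your code
    return x * x + 1;
}

int main(void) {
    result = my_algorithm(42);
    for (;;) { }  // set a breakpoint here and inspect 'result'
}
```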
The best way to test whether an algorithm works is to run it in a regular PC program, feed it data, and compare the results with ground truth.
Clearly, to be able to do this, a good programming style is necessary, one that separates hardware-related tasks from the actual data processing. Additionally, you have to keep architectural differences in mind (e.g. int = 16 bit vs. int = 32 bit; use inttypes.h).
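A minimal sketch of that idea, with an invented placeholder algorithm and made-up ground-truth values; the point is the structure (fixed-width types, hardware-free core, host-side checks), not the numbers:

```cpp
// PC-side test harness for an algorithm shared with the AVR build.
#include <cstdint>
#include <iostream>

// The pure data-processing part, identical on AVR and PC.
uint16_t my_filter(uint16_t x) {         // placeholder algorithm
    return (x >> 1) + 100;
}

int main() {
    const uint16_t inputs[]   = {0, 2, 1000};
    const uint16_t expected[] = {100, 101, 600};  // precomputed ground truth
    for (int i = 0; i < 3; ++i)
        if (my_filter(inputs[i]) != expected[i]) {
            std::cout << "mismatch at input " << inputs[i] << "\n";
            return 1;
        }
    std::cout << "all tests passed\n";
}
```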
I am going to be working on a self-chosen project for my college networking class, and I just had a couple of questions to help get me started in the right direction.
My project will involve creating a new "physical" link over which data, in the form of text, will be transmitted from one computer to another. The link will involve one computer with a webcam that reads a series of flashing colors (black/white) as binary and converts it to text. Each series of flashes will simulate a packet of data. I will be using OS X and the integrated webcam in a MacBook; the flashing computer will be either Windows or OS X.
So my questions are: which programming languages or APIs would be best for reading live webcam data and analyzing the color of a certain area, as well as for programming and timing the flashes? Also, would I need to worry about matching the flash rate of the "writing" computer to the frame capture rate of the "reading" computer?
Thank you for any help you might be able to provide.
Regarding the frame capture rate, the Shannon sampling theorem says that "perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled". In other words, if your flashing light switches 10 times per second, you need a camera of more than 20 fps to capture it properly. So basically: check your camera specs, divide by 2, lower the result a little, and you have your maximum flashing rate.
Whatever can get the frames will work. If the light conditions the camera works in are going to be stable, and the position of the light in the images is going to be static, then it is going to be very easy: just check the average pixel value of a certain area.
If you need additional image processing, you should probably also look into OpenCV (it has bindings for many programming languages).
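For example, here is a hedged sketch using the OpenCV C++ API; the region of interest and the threshold are assumptions you would tune for your setup:

```cpp
// Sample the mean brightness of a fixed region each frame and
// threshold it into a 0/1 symbol, as described above.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);              // default webcam
    if (!cap.isOpened()) return 1;

    const cv::Rect roi(300, 220, 40, 40); // assumed position of the light
    const double threshold = 128.0;       // assumed black/white split

    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        double level = cv::mean(gray(roi))[0];  // average pixel value in ROI
        std::cout << (level > threshold ? 1 : 0);
    }
}
```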
To answer your question about language choice, I would recommend Java. The Java Media Framework is great and easy to use; I have used it for capturing video from webcams in the past. Be warned, however, that everyone you ask will recommend a different language - everyone has their preferences!
What are you using as the flashing device? What kind of distance are you trying to achieve? Something worth thinking about is how you are going to get the receiver to recognise where within the captured image to look for the flashes. Some kind of fiducial marker might be necessary. Longer ranges will make this problem harder to resolve.
If you're thinking about shorter ranges, have you considered using a two-dimensional transmitter? (Given that you're using a two-dimensional receiver, it makes sense.) You could have the transmitter show a sequence of QR codes (or similar encodings) on a monitor.
You will have to consider some kind of error-correcting code, such as a Hamming code. While the encoding would increase the data footprint, it might give you better overall bandwidth, given that you can crank the speed up much higher without having to worry about the odd corrupt bit.
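As an illustration of that suggestion, here is a sketch of the classic Hamming(7,4) code: 4 data bits become 7 transmitted bits, and any single flipped bit can be located and corrected.

```cpp
// Hamming(7,4): parity bits at positions 1, 2, 4; data at 3, 5, 6, 7.
#include <cstdint>
#include <iostream>

// Encode 4 data bits (d1..d4) into the 7-bit codeword p1 p2 d1 p4 d2 d3 d4.
uint8_t encode(uint8_t d) {
    uint8_t d1 = (d >> 3) & 1, d2 = (d >> 2) & 1,
            d3 = (d >> 1) & 1, d4 = d & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   // covers positions 1,3,5,7
    uint8_t p2 = d1 ^ d3 ^ d4;   // covers positions 2,3,6,7
    uint8_t p4 = d2 ^ d3 ^ d4;   // covers positions 4,5,6,7
    return (p1 << 6) | (p2 << 5) | (d1 << 4) |
           (p4 << 3) | (d2 << 2) | (d3 << 1) | d4;
}

// Decode: the syndrome gives the position of a single-bit error (0 = none).
uint8_t decode(uint8_t c) {
    uint8_t b[8];                            // b[1]..b[7], b[1] = MSB
    for (int i = 1; i <= 7; ++i) b[i] = (c >> (7 - i)) & 1;
    uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    uint8_t s4 = b[4] ^ b[5] ^ b[6] ^ b[7];
    int pos = s1 * 1 + s2 * 2 + s4 * 4;
    if (pos) b[pos] ^= 1;                    // correct the flipped bit
    return (b[3] << 3) | (b[5] << 2) | (b[6] << 1) | b[7];
}

int main() {
    uint8_t code = encode(0b1011);
    code ^= 1 << 4;                          // flip one bit "in transit"
    std::cout << int(decode(code)) << "\n";  // prints 11, i.e. 0b1011
}
```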
Some 'evaluation' type material might include discussing the obvious security risks of using such a channel - anyone with line of sight to the transmitter can eavesdrop! You could suggest in your write-up using some kind of encryption; a block cipher in CBC mode would do, but it would require a key exchange prior to transmission, so you could think about public-key encryption.
I mean, I have always wondered how on earth somebody can develop algorithms to break or cheat the constraints of legal use in many shareware programs out there.
Just for curiosity.
Apart from being illegal, it's a very complex task.
Speaking at a purely theoretical level, the common way is to disassemble the program to be cracked and try to find where the key or serial code is checked.
Easier said than done, since any serious protection scheme will check values in multiple places, and will also derive critical information from the serial key for later use, so that when you think you've guessed it, the program will crash.
To create a crack, you have to identify all the points where a check is done and modify the assembly code appropriately (often inverting a conditional jump, or storing constants into memory locations).
To create a keygen, you have to understand the algorithm and write a program to redo the exact same calculation (I remember an old version of MS Office whose serial had a very simple rule: the sum of the digits had to be a multiple of 7, so writing the keygen was rather trivial).
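Just to illustrate how trivial such a keygen can be, here is a toy sketch for a digit-sum rule like the one described (the serial length is made up; this is purely illustrative):

```cpp
// Toy keygen: emit a 10-digit serial whose digit sum is a multiple of 7.
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    std::string serial;
    int sum = 0;
    for (int i = 0; i < 9; ++i) {          // nine random digits
        int d = std::rand() % 10;
        serial += char('0' + d);
        sum += d;
    }
    int last = (7 - sum % 7) % 7;          // final digit fixes the sum mod 7
    serial += char('0' + last);
    std::cout << serial << "\n";
}
```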
Both activities require you to follow the execution of the application in a debugger and try to figure out what's happening. And you need to know the low-level API of your operating system.
Some heavily protected applications have their code encrypted so that the file can't be disassembled. It is decrypted when loaded into memory, but then they refuse to start if they detect that an in-memory debugger is running.
In essence, it's something that requires very deep knowledge, ingenuity, and a lot of time! Oh, did I mention that it is illegal in most countries?
If you want to know more, Google for the +ORC Cracking Tutorials; they are very old and probably useless nowadays, but they will give you a good idea of what it means.
Anyway, a very good reason to know all this is if you want to write your own protection scheme.
The bad guys search for the key-check code using a disassembler. This is relatively easy if you know how to do it.
Afterwards, you translate the key-checking code to C or another language (this step is optional). Reversing the process of key checking gives you a key generator.
If you know assembler, it takes roughly a weekend to learn how to do this. I did it just a few years ago (never released anything, though; it was just research for my game-development job - to write a hard-to-crack key you have to understand how people approach cracking).
Nils's post deals with key generators. For cracks, you usually find a branch point and invert (or remove) the condition. For example, the software will test whether it is registered; the test may return zero if so, and then jump accordingly. You can change the "jump if equal zero" (je) to "jump if not equal zero" (jne) by modifying a single byte. Or you can write no-operations (NOPs) over various portions of the code that do things you don't want done.
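As a sketch of what such a one-byte patch looks like in practice (the file name and offset are made up; on x86, the short-form JE opcode is 0x74 and JNE is 0x75):

```cpp
// Flip a conditional jump by rewriting a single opcode byte in a binary.
#include <fstream>

int main() {
    std::fstream f("target.exe",
                   std::ios::in | std::ios::out | std::ios::binary);
    if (!f) return 1;
    const std::streamoff offset = 0x1A2B;  // hypothetical location of the JE
    f.seekp(offset);
    f.put(char(0x75));                     // JE (0x74) -> JNE (0x75)
}
```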
Compiled programs can be disassembled, and with enough time determined people can develop binary patches. A crack is simply a binary patch that gets the program to behave differently.
First, most copy-protection schemes aren't terribly advanced, which is why you don't see a lot of people rolling their own these days.
There are a few methods used to do this. You can step through the code in a debugger, which generally requires a decent knowledge of assembly. With that you can get an idea of where in the program the copy-protection/keygen methods are called. Then you can use a disassembler like IDA Pro to analyze the code more closely and try to understand what is going on, and how you can bypass it. I've cracked time-limited betas before by inserting NOP instructions over the date check.
It really just comes down to a good understanding of software and a basic understanding of assembly. Hak5 did a two-part series in the first two episodes this season on the basics of reverse engineering and cracking. It's really basic, but it's probably exactly what you're looking for.
A would-be cracker disassembles the program and looks for the "copy protection" bits, specifically for the algorithm that determines whether a serial number is valid. From that code you can often see what pattern of bits is required to unlock the functionality, and then write a generator to create numbers with those patterns.
Another alternative is to look for functions that return "true" if the serial number is valid and "false" if it's not, then develop a binary patch so that the function always returns "true".
Everything else is largely a variant on those two ideas. Copy protection is always breakable by definition - at some point you have to end up with executable code, or the processor couldn't run it.
For the serial number, you can just extract the algorithm and start throwing guesses at it, looking for a positive response. Computers are powerful; it usually only takes a little while before hits start coming out.
As for hacking, I used to be able to step through programs at a high level and look for the point where they stopped working. Then you go back to the last "call" that succeeded, step into it, and repeat. Back then, the copy protection usually consisted of writing to the disk and seeing whether a subsequent read succeeded (if it did, the copy protection failed, because they used to burn part of the floppy with a laser so it couldn't be written to).
Then it was just a matter of finding the right call and hardcoding the correct return value from that call.
I'm sure it's still similar, but these days they go through a lot of effort to hide the location of the call. The last one I tried, I gave up on because it kept loading code over the code I was single-stepping through, and I'm sure it's gotten a lot more complicated since then.
I wonder why they don't just distribute personalized binaries, where the name of the owner is stored somewhere (encrypted and obfuscated) in the binary, or better, distributed over the whole binary. AFAIK Apple does this with the music files from the iTunes Store; however, there it's far too easy to remove the name from the files.
I assume each crack is different, but I would guess in most cases somebody spends a lot of time in the debugger tracing the application in question.

The serial generator takes that one step further by analyzing the algorithm that checks the serial number for validity and reverse engineers it.
A friend of mine brought up this question the other day. He recently bought a Garmin heart rate monitor device which keeps track of his heart rate and allows him to upload his heart rate stats for a day to his computer.
The only problem is that there are no Linux drivers for the Garmin USB device. He's managed to interpret some of the data, such as the model number and his user details, and has identified that there are some binary data tables which we assume represent a series of recordings of his heart rate and the time each recording was taken.
Where does one start when reverse engineering data whose structure you know nothing about?
I had the same problem and found this project on Google Code that aims to provide a cross-platform version of the tools for Garmin devices; see: http://code.google.com/p/garmintools/. There's a link on the front page of that project to the protocols you need, which Garmin was thoughtful enough to release publicly.
And here's a direct link to the Garmin I/O specification: http://www.garmin.com/support/pdf/IOSDK.zip
I'd start by looking at the data in a hex editor, hopefully a good one that knows the most common encodings (ASCII, Unicode, etc.), and then try to make sense of the data based on what you know it has stored.
As another poster mentioned, reverse engineering can be hairy, not in practice but in legality.
That being said, you may be able to find everything related to your root question by checking out this project and its code... and it does handle the runner's heart rate/GPS combo data as well:
http://www.gpsbabel.org/
I'd suggest you start by checking the legality of reverse engineering in your country. Most countries have very strict laws about what is and isn't allowed when reverse engineering devices and code.
I would start by seeing what data is being sent by the device, then consider how such data could be represented and packed.
I would first capture many samples and see if any pattern presents itself. Since the heart beat is fairly regular, a repeating pattern would suggest a measurement related to the heart itself. I would also look for bit fields which are monotonically increasing, as that would suggest some sort of timestamp.
Having formed a hypothesis about what is where, I would write a program to test it, graph the results, and see if they make sense. If they almost do, closer inspection would probably reveal that you need some scaling factors here or there. It is also entirely possible the data needs some processing before it looks anything like what their program shows, i.e. you might need to integrate the data points. If you get garbage, then it's back to the drawing board :-)
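As a hedged sketch of that kind of hypothesis testing, here is a brute-force scan for counter-like 16-bit fields in a raw dump; the record sizes tried and the little-endian assumption are guesses you would adjust:

```cpp
// Scan a dump for 16-bit fields that never decrease at a fixed record
// stride -- a crude way to spot timestamp/counter columns.
#include <cstdint>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::ifstream f("dump.bin", std::ios::binary);
    std::vector<uint8_t> buf((std::istreambuf_iterator<char>(f)),
                             std::istreambuf_iterator<char>());

    for (size_t stride = 2; stride <= 32; stride += 2) {       // guessed record sizes
        for (size_t off = 0; off + 2 <= stride; off += 2) {    // field offset in record
            size_t ok = 0, n = 0;
            uint16_t prev = 0;
            for (size_t i = off; i + 2 <= buf.size(); i += stride, ++n) {
                uint16_t v = buf[i] | (buf[i + 1] << 8);       // assume little-endian
                if (n && v >= prev) ++ok;
                prev = v;
            }
            if (n > 8 && ok == n - 1)                          // fully non-decreasing
                std::cout << "stride " << stride << ", offset " << off
                          << " looks like a counter\n";
        }
    }
}
```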
I would also check the manufacturer's website, or maybe run strings on their binaries. Finding someone who works in the field of biomedical engineering would also be on my list, as they would probably know what protocols are typically used, if any. I would also look for these protocols and see if any could be applied to the data I am seeing.
I'd start by creating a hex dump of the data; figure it's probably blocked in power-of-two-sized chunks. Start looking for repeating patterns, and think about what kind of data they're probably sending: either they're recording each heart beat individually, or they're recording whatever the sensor sends at fixed intervals. If it's individual beats, then there's going to be a time delta (since the last beat), a duration, and a max or average strength of some sort. If it's fixed intervals, then it'll probably be a simple vector of readings. There'll probably be a preamble of some sort, with a start timestamp and the sampling rate. You can try decoding the timestamp yourself, or you might try simply feeding it to ctime() and seeing if they're using the standard absolute time format.
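A hedged sketch of that ctime() trick (the bytes and the little-endian layout are made up for illustration):

```cpp
// Interpret 4 bytes at a guessed offset as a Unix timestamp and see
// whether the resulting date looks sane.
#include <cstdint>
#include <ctime>
#include <iostream>

int main() {
    uint8_t raw[4] = {0x60, 0xA3, 0xB2, 0x4E};    // made-up bytes from a dump
    uint32_t t = raw[0] | (raw[1] << 8) | (raw[2] << 16)
               | (uint32_t(raw[3]) << 24);         // assume little-endian
    std::time_t ts = t;
    std::cout << std::ctime(&ts);                  // prints a readable date
}
```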
Keep in mind that lots of cheap A/D converters only produce 12-bit outputs, so your readings are unlikely to be larger than 16 bits (and the high-order 4 bits may be used for flags). I'd recommend resetting the device so that it's "blank", dumping and storing the contents, then taking a set of readings, recording the results (whatever the device normally reports), then dumping the contents again and trying to correlate the recorded results with whatever data appeared after the "blank" dump.
Unsure if this is what you're looking for, but Garmin has created an API that runs in your browser. It seems OS X is supported, as well as Windows browsers... I would try it from Google Chromium to see if it can be used instead of this reverse engineering:
http://developer.garmin.com/web-device/garmin-communicator-plugin/
API Features
- Auto-detection of devices connected to a computer
- Access to device product information like product name and software version
- Read tracks, routes and waypoints from supported recreational, fitness and navigation devices
- Write tracks, routes and waypoints to supported recreational, fitness and navigation devices
- Read fitness data from supported fitness devices
- Geo-code an address and save it to a device as a waypoint or favorite
- Read and write Garmin XML files (GPX and TCX) as well as binary files
- Support for most Garmin devices (USB, USB mass-storage, most serial devices)
- Support for Internet Explorer, Firefox and Chrome on Microsoft Windows
- Support for Safari, Firefox and Chrome on Mac OS X
Can you synthesize a heart beat using something like a computer speaker? (I have no idea how such devices actually work.) Watch how the binary results change based on different inputs.
Ripping the device apart and checking out what's inside would probably help too.