What is a "badboy" in reverse engineering - reverse-engineering

Ok, so I've recently started doing some reverse engineering, and I keep coming across a term that I have no idea the meaning of. A badboy?
00013F92 7E 24 JLE SHORT function.00013FB8 ; badboy
Could anyone explain?

Maybe this is the answer:
http://www.codeproject.com/Articles/30815/An-Anti-Reverse-Engineering-Guide
Search for "bad boy".
Let me paste that in, four and a half years after the fact, to satisfy the moderator:
There are three types of breakpoints available to a reverse engineer:
hardware, memory, and INT 3h breakpoints. Breakpoints are essential to
a reverse engineer, and without them, live analysis of a module does
him or her little good. Breakpoints allow for the stopping of
execution of a program at any point where one is placed. By utilizing
this, reverse engineers can put breakpoints in areas like Windows
APIs, and can very easily find where a badboy message (a messagebox
saying you entered a bad serial, for example) is coming from. In fact,
this is probably the most utilized technique in cracking, the only
competition would be a referenced text string search. This is why
breakpoint checks are done over important APIs like MessageBox,
VirtualAlloc, CreateDialog, and others that play an important role in
the protecting user information process. The first example will cover
the most common type of breakpoint which utilizes the INT 3h
instruction.

Related

How does our computer actually convert decimal numbers into binary?

We know that a computer performs all its operations in binary only. It cannot work with decimal or any other base of numbers.
If a computer cannot perform any operation in decimal, then how does it convert decimal numbers to binary? I think there are different stages during the conversion at which addition and multiplication are required. How can the computer add or multiply any number even before getting its binary equivalent?
I have searched this in many places but couldn't find a convincing answer.
Note: This stackexchange site is not the right place to ask this question. I am still answering it, better shift it to the appropriate one or delete question after getting your answer.
Well, it doesn't care what input you supply to it. Think of it like your TV switch: when you switch it on, your TV starts to work, because it got the exact current flow it required. Similarly, in a computer there is a particular threshold voltage, let's say 5V: anything below 5V is considered a '0', otherwise a '1'. You may have seen AND, OR, etc. gates: if you supply two '1's to an AND gate it results in '1', otherwise '0'. There are many such digital circuits; some examples are a binary adder, a latch, a flip-flop, etc. These work with those current signals (which are characterised as 0 or 1 as explained above). A computer is a combination of millions of such circuits.
When you talk about converting decimal to binary or something like that, it's actually not like that. Every program (spreadsheets, games, etc.) is written in some language; most are either compiled or interpreted. Some languages that get compiled are C, Java, etc., and some interpreted ones are Python, Ruby, etc. The job of a compiler or interpreter is to convert the code you wrote in that language into assembly code, as per the rules of that language. The assembly code is then converted into machine code when it has to run. Machine code is pure zeros and ones; these zeros and ones just define triggers on what to execute and when.
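To make that concrete, here is a minimal sketch of what a decimal-parsing routine (something like C's `atoi`) does internally. The function name is mine; the point is that no "decimal arithmetic" ever happens, only shifts and adds on binary values, which the hardware's circuits already provide:

```python
def parse_decimal(text):
    """Accumulate the binary value of a decimal string, digit by digit."""
    value = 0
    for ch in text:
        digit = ord(ch) - ord("0")                    # '7' is code 0x37; 0x37 - 0x30 = 7
        value = (value << 3) + (value << 1) + digit   # value*10, done with shifts and adds
    return value

print(parse_decimal("1234"))        # 1234, built without ever "doing decimal math"
print(bin(parse_decimal("1234")))   # 0b10011010010
```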
Don't confuse this with what you see. Desktop that displays you the data is a secondary thing that is specifically made just to make things easy for us.
In a computer, a clock keeps running. You must have heard of a 2.5 GHz processor or something like that: this is the frequency at which instructions are executed. It seems odd, but yes, whether you are doing work or not, while the computer is on it executes instructions continuously, and if you are not doing anything it keeps checking for interaction.
Picture it this way:
1) You turned on your PC; the hardware got ready for your commands and kept checking for interaction.
2) You opened a folder. To open a folder you obviously need to touch the keyboard or mouse, or do some voice interaction, and this interaction is followed by your computer. Pressing the down arrow produces a zero-or-one signal at the right place; only after that does the result get displayed to you. It is not that what is being displayed is being done; instead, what is being done is displayed for you to follow easily.

How can I analyze live data from webcam?

I am going to be working on self-chosen project for my college networking class and I just had a couple questions to help get me started in the right direction.
My project will involve creating a new "physical" link over which data, in the form of text, will be transmitted from one computer to another. This link will involve one computer with a webcam that reads a series of flashing colors (black/white) as binary and converts it to text. Each series of flashes will simulate a packet of data. I will be using OS X and the integrated webcam in a MacBook; the flashing computer will be either Windows or OS X.
So my questions are: which programming languages or API's would be best for reading live webcam data and analyzing the color of a certain area as well as programming and timing the flashes? Also, would I need to worry about matching the flash rate of the "writing" computer and the frame capture rate of the "reading" computer?
Thank you for any help you might be able to provide.
Regarding the frame capture rate, the Shannon sampling theorem says that "perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled". In other words, if your flashing light switches 10 times per second, you need a camera of more than 20 fps to capture that properly. So basically: check your camera specs, divide by 2, lower the result a little, and you have your maximum flashing rate.
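As a small worked example of that rule (the 30 fps camera spec is just an assumed figure):

```python
camera_fps = 30                    # assumed from the camera's spec sheet
nyquist_limit = camera_fps / 2.0   # Shannon: the signal must stay below half the sampling rate
safety_margin = 0.8                # "lower the result a little"
max_flash_rate = nyquist_limit * safety_margin
print(max_flash_rate)              # 12.0 flashes per second at most
```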
Whatever can get the frames will work. If the light conditions the camera works in are going to be stable, and the position of the light in the images is going to be static, then it is going to be very easy: just check the average pixel values of a certain area.
If you need additional image processing you should probably also find out about OpenCV (it has bindings to every programming language).
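A minimal sketch of that average-pixel-value check, using plain Python lists to stand in for a grayscale frame (in a real build you would grab frames with something like OpenCV's `cv2.VideoCapture` and slice NumPy arrays; the threshold and region here are assumptions):

```python
THRESHOLD = 128  # assumed mid-grey cutoff between a "dark" (0) and "bright" (1) flash

def read_bit(frame, region):
    """Average the pixels of a fixed region of a grayscale frame and threshold to 0/1."""
    x, y, w, h = region
    total = count = 0
    for row in frame[y:y + h]:
        for px in row[x:x + w]:
            total += px
            count += 1
    return 1 if total / count > THRESHOLD else 0

bright = [[250] * 10 for _ in range(10)]  # simulated all-white frame
dark = [[10] * 10 for _ in range(10)]     # simulated all-black frame
print(read_bit(bright, (2, 2, 4, 4)), read_bit(dark, (2, 2, 4, 4)))  # 1 0
```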
To answer your question about language choice, I would recommend Java. The Java Media Framework is great and easy to use; I have used it for capturing video from webcams in the past. Be warned, however, that everyone you ask will recommend a different language - everyone has their preferences!
What are you using as the flashing device? What kind of distance are you trying to achieve? Something worth thinking about is how are you going to get the receiver to recognise where within the captured image to look for the flashes. Some kind of fiducial marker might be necessary. Longer ranges will make this problem harder to resolve.
If you're thinking about shorter ranges, have you considered using a two-dimensional transmitter? (given that you're using a two-dimensional receiver, it makes sense) and maybe have a transmitter that shows a sequence of QR codes (or similar encodings) on a monitor?
You will have to consider some kind of error-correction encoding, such as a Hamming code. While the encoding would increase the data footprint, it might give you better overall bandwidth, given that you can crank up the speed much higher without having to worry about the odd corrupt bit.
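To illustrate the error-correction suggestion, here is a sketch of the classic Hamming(7,4) code, which spends 3 parity bits per 4 data bits and can repair any single flipped bit per block:

```python
def hamming74_encode(d):
    """Encode 4 data bits into 7 bits: positions are p1, p2, d1, p3, d2, d3, d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip a single corrupted bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[3] ^= 1                          # corrupt one bit "in transit"
print(hamming74_correct(code))        # [1, 0, 1, 1] - the original data survives
```

At the cost of nearly doubling the transmitted bits, a mid-packet glitch no longer forces a retransmission.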
Some 'evaluation' type material might include you discussing the obvious security risks in using such a channel - anyone with line of sight to the transmitter can eavesdrop! You could suggest in your writeup using some kind of encryption, a block cipher in CBC would do, but would require a key-exchange prior to transmission, so you could think about public key encryption.

How to make a good anti-crack protection?

I will start off with saying I know that it is impossible to prevent your software from reverse engineering.
But, when I take a look at crackmes.de, there are crackmes with a difficulty grade of 8 and 9 (on a scale of 1 to 10). These crackmes get cracked by genius brains, who write a tutorial on how to crack them. Sometimes, such tutorials are 13+ pages long!
When I try to make a crackme, they crack it in 10 minutes. Followed by a "how-to-crack" tutorial with a length of 20 lines.
So the questions are:
How can I make a relatively good anti-crack protection?
Which techniques should I use?
How can I learn it?
...
Disclaimer: I work for a software-protection tools vendor (Wibu-Systems).
Stopping cracking is all we do and all we have done since 1989. So we thoroughly understand how SW gets cracked and how to avoid it. Bottom line: only with a secure hardware dongle, implemented correctly, can you guarantee against cracking.
Most strong anti-cracking relies on encryption (symmetric or public key). The encryption can be very strong, but unless the key storage/generation is equally strong, it can be attacked. Lots of other attacks are possible too, even with good encryption, unless you know what you are doing. A software-only solution has to store the key in an accessible place, where it is easily found or vulnerable to a man-in-the-middle attack; the same is true of keys stored on a web server. Even with good encryption and secure key storage, unless you can detect debuggers, a cracker can just take a snapshot of memory and build an exe from that. So you need to never have the program completely decrypted in memory at any one time, and to have some debugger-detection code. Obfuscation, dead code, etc. won't slow them down for long, because they don't crack by starting at the beginning and working through your code; they are far more clever than that. Just look at some of the how-to cracking videos on the net to see how to find the security-detection code and crack from there.
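As one small, concrete example of the debugger-detection idea, Windows exposes the `IsDebuggerPresent` API; here is a sketch via Python's `ctypes`. This flag lives in the process's PEB, so a lone check like this is trivially patched out, which is exactly why real protections layer many different checks:

```python
import ctypes
import sys

def debugger_present():
    """Return True if a user-mode debugger is attached (Windows only)."""
    if sys.platform != "win32":
        return False  # other platforms have no single equivalent call
    return bool(ctypes.windll.kernel32.IsDebuggerPresent())

if debugger_present():
    sys.exit("debugger detected")  # or, sneakier: silently corrupt later results
```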
Brief shameless promotion: Our hardware system has NEVER been cracked. We have one major client who uses it solely for anti-reverse engineering. So we know it can be done.
Languages like Java and C# are too high-level and do not provide any effective structures against cracking. You could make it hard for script kiddies through obfuscation, but if your product is worth it, it will be broken anyway.
I would turn this round slightly and think about:
(1) putting in place simple(ish) measures so that your program isn't trivial to hack, so e.g. in Java:
obfuscate your code, so that at least your enemy has to go to the moderate hassle of looking through a decompilation of obfuscated code
maybe write a custom class loader to load some classes encrypted in a custom format
look at what information your classes HAVE to expose (e.g. subclass/interface information can't be obfuscated away) and think about ways round that
put some small key functionality in a DLL/format less easy to disassemble
However, the more effort you go to, the more serious hackers will see it as a "challenge". You really just want to make sure that, say, an average 1st year computer science degree student can't hack your program in a few hours.
(2) putting more subtle copyright/authorship markers (e.g. metadata in images, maybe subtly embed a popup that will appear in 1 year's time to all copies that don't connect and authenticate with your server...) that hackers might not bother to look for/disable because their hacked program "works" as it is.
(3) just give your program away in countries where you don't realistically have a chance of making a profit from it, and don't worry about it too much; if anything, it's a form of viral marketing. Remember that in many countries, what we in the UK/US see as "piracy" of our Precious Things is openly tolerated by government/law enforcement; don't base your business model around copyright enforcement that doesn't exist.
I have a pretty popular app (which I won't specify here, to avoid crackers' curiosity, of course) and have suffered from cracked versions a few times in the past, a fact that really caused me many headaches.
After months of struggling with lots of anti-cracking techniques, in 2009 I managed to establish a method that has proved effective, at least in my case: my app has not been cracked since then.
My method consists of using a combination of three implementations:
1 - Lots of checks in the source code (size, CRC, date and so on; use your creativity. For instance, if my app detects tools like OllyDbg being executed, it forces the machine to shut down)
2 - CodeVirtualizer virtualization of sensitive functions in the source code
3 - EXE encryption
None of these is really effective alone: checks can be bypassed with a debugger, virtualization can be reversed and EXE encryption can be decrypted.
But when used all together, they will cause BIG pain to any cracker.
It's not perfect, though: so many checks make the app slower, and the EXE encryption can lead to false positives in some anti-virus software.
Even so, there is nothing like not being cracked ;)
Good luck.
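The simplest kind of check from point 1 above, a CRC self-check of the program's own file, can be sketched like this (`EXPECTED_CRC` would be stamped in after the final build; that step is hypothetical here):

```python
import sys
import zlib

EXPECTED_CRC = None  # hypothetically filled in after the final build

def self_crc():
    """CRC-32 of this program's file on disk; one patched byte changes it."""
    with open(sys.argv[0], "rb") as f:
        return zlib.crc32(f.read()) & 0xFFFFFFFF

if EXPECTED_CRC is not None and self_crc() != EXPECTED_CRC:
    sys.exit("tampering detected")
```

As the answer notes, a debugger can step over any single check; the value is in scattering many of them through the code.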
Personally, I am a fan of server-side checks.
It can be as simple as authenticating the application or user each time it runs; however, that can be easily cracked. Or you can put some part of the code on the server side, which would require a lot more work to crack.
However, your program will then require an internet connection as a must-have, and you will have expenses for the server. But that is the only way to make it relatively well protected; any stand-alone application will be cracked relatively fast.
The more logic you move to the server side, the harder it will be to crack. But it will be, if it is worth it. Even large companies like Blizzard can't prevent their server side from being reverse engineered.
I propose the following:
Create at home a key named KEY1 with N random bytes.
Sell the user a "license number" with the software. Take note of his/her name and surname, and tell him/her that those data are required to activate the software, along with an internet connection.
Within the next 24 hours, upload to your server the license number, the name and surname, and also KEY3 = KEY1 XOR hash_N_bytes(license_number, name and surname).
The installer asks for the license number and the name and surname, then sends those data to the server and downloads the key named KEY3 if they correspond to a valid sale.
Then the installer computes KEY1 = KEY3 XOR hash_N_bytes(license_number, name and surname).
The installer checks KEY1 using a 16-bit hash. The application is encrypted with the key KEY1, so the installer then decrypts the application with that key and it's ready.
Both the installer and the application must have a CRC content check.
Both could check whether they are being debugged.
Both could have parts of their code encrypted during execution time.
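The key arithmetic of this scheme can be sketched as follows, with SHA-256 truncated to N bytes standing in for the unspecified hash_N_bytes (the key length, serial and name are made-up placeholders):

```python
import hashlib

N = 16  # assumed key length in bytes

def hash_n_bytes(license_number, name):
    """Stand-in for the scheme's hash_N_bytes: first N bytes of SHA-256."""
    return hashlib.sha256((license_number + "|" + name).encode()).digest()[:N]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Vendor side: KEY1 encrypts the app; only KEY3 ever reaches the server.
key1 = b"0123456789abcdef"  # would be N random bytes in reality
key3 = xor(key1, hash_n_bytes("SN-1001", "Ada Lovelace"))

# Installer side: KEY1 = KEY3 XOR hash_N_bytes(license, name).
recovered = xor(key3, hash_n_bytes("SN-1001", "Ada Lovelace"))
print(recovered == key1)    # True; a wrong name or serial yields a useless key
```

One caveat worth noting: once any single customer's KEY1 is recovered, it decrypts every copy, so this protects the distribution channel more than the binary itself.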
What do you think about this method?

Usability: speech recognition versus keypad

We are seeing more and more speech recognition implemented and request for libraries that does good speech recognition. What's the rationale (in term of usability) behind it versus a keyboard or keypad? What reasons would you have to invest in this development?
For example, let's take the call centers. A few years ago, almost every call center used an IVR that prompted for a key for the menus. Now, we're seeing more and more menus with prompt for a spoken keyword and/or a pressed keypad: "please say invoice or press 1 to see your invoice". Or we are seeing the same thing in companies' phone directory: "please say the name of the person you are trying to reach" ... "Franck Loyd" ... "Did you say Jack Freud? Please say yes if you want to reach this person or say no to try again".
I guess it's a plus when you're in your car and not holding your phone, but is it worth the additional waiting time? Longer interaction for all the choices, longer prompt time while the system tries to analyze whether something was said, and so on? Also, reliability is better than it was, definitely, but sometimes it feels more like a toy someone decided to plug into the system so it can feel futuristic.
Any experience designing IVR or software that used (or chose not to) speech recognition?
Thanks!
What's the rationale (in terms of usability) behind it versus a keyboard or keypad?
Usability is a very broad term. If I were to attempt to enter my address with a touch pad, it wouldn't be considered very usable. Some argue that using a speech engine with an overall success rate of 70-80% isn't very usable either. As indicated in other posts, hands free input can be much easier for those on a mobile phone. However, using words versus numeric input can actually be less intuitive than a touch tone phone if the topic is somewhat foreign to the caller. A caller hearing terms and phrases that aren't very familiar can't remember them in the 10-30 seconds of the prompt but they can hover over the best sounding choice with their finger or remember the order of choices.
What reasons would you have to invest in this development?
This is an odd question. Usually the decision to use speech or not in an IVR environment is not driven from the development view of the world. Unless you have a specific requirement that really requires speech, you are almost always reducing overall success rates. Speech is usually a factor of corporate image ... or having the latest technological toy.
I guess it's a plus when you're in your car without holding your phone, but is it worth the additional waiting time?
Speech recognition latencies aren't very high these days when using modern ASRs. In most cases, input is handled in parallel with the speech, and the time between end of speech and recognition is 0.5 to 1 s. Be aware that many IVRs then need to perform data look-ups after some inputs, and this can make the system appear slower. Normal inputs pushing beyond 1 s are usually the sign of an under-powered deployment.
It may not have been under-powered when originally implemented, but through tuning efforts you make a lot of performance-versus-accuracy decisions. To get that next 0.1%, resources can be pushed beyond what they should be at peak.
Also, reliability is better than it was, definitely, but sometimes it feels more like a toy someone decided to plug into the system so it can feel futuristic.
In general, yes. On the reliability note, you need to really look at the overall numbers to get a sense of the system. It is a battle of statistics where the individual isn't very important (unless they hold the title of VP or above). Through optimization of the input (shifting prompting), resource usage and other speech-reco tuning parameters, you attempt to maximize accuracy. For basic natural-language responses, you can get into the upper 90s. However, your overall success rate is much lower. Imagine 5 prompts, all at 98% (in reality, you tend to have a bunch at 99 and then a few in the mid-90s or slightly below): .98 * .98 * .98 * .98 * .98 = 90%. That means 1 out of 10 calls failing, and that is before caller confusion and business rules. DTMF input is usually very near 100%, even after several inputs.
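That compounding is easy to verify:

```python
per_prompt = 0.98   # per-prompt recognition success rate
prompts = 5
overall = per_prompt ** prompts
print(round(overall, 3))  # 0.904, i.e. roughly 1 call in 10 hits at least one failure
```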
Any experience designing IVR or software that used (or chose not to) speech recognition?
Yes. But, I suspect that really isn't the question you want. As someone on the technology side, this is usually not your decision and you have limited influence on it. If you are really looking for the pros/cons of speech:
Pros:
Cool/hip (note, speech alone isn't sufficient. You need a great VUI and voice talent)
Good for a highly mobile crowd that shuns ear pieces. The future is supposed to be blending speech with tactile input. Maybe. It probably won't come from the IVR side of the market.
Good for tasks that can't be done with DTMF. Note, many of these problems tend to have low success rates in speech as well. Cost (versus humans) is usually the driving factor not usability. Dropping a call into a voicemail box for things like address change can be very cost effective.
Cons:
Expensive to develop, deploy and maintain. Adding new choices can have a significant impact on success rates if you aren't careful. Always monitor the impact of change.
Is often deployed inappropriately (for example, "just say your numeric menu choice"). This is almost always a case of wanting speech coolness, but not being able to afford what it really takes to achieve it.
Success rates will be lower and therefore call center costs will be higher.
Failures tend to focus on specific prompts and individual callers. A caller that regularly experiences problems with your system will be very unhappy with you.
Callers get angry when they aren't understood. Is your goal to identify a subset of your customer base and really get them angry?
I think that speech recognition, like any method of input, has its pros and cons.
Pros:
No learning curve, we have been speaking since a very young age.
Very user-intuitive.
On the phone, no need to constantly move the headset from your ear.
Cons:
Longer wait time
If the sound quality is bad, it takes multiple attempts to get the selection right.
In some cases a company is required to handle rotary phones; it might be found more cost-effective to just set up the recognition system instead of both.
Voice recognition has a lot more overhead than touch tones. If you want the best results you need to constantly tweak the app and train the system on unrecognized word pronunciations. You also need to be very particular on how you prompt the user with voice recognition or you may get unexpected responses.
Overall touch tone is a lot easier as there are only a limited set of possible options at any given time.
If your app is straightforward enough, voice rec may only complicate it ("press 2 for some other language...").
Speech recognition is definitely the wave of the future when combined with touchscreen technology. As an example, I use tazti speech recognition; it's available in XP and Vista versions. Since Microsoft's touchscreen "Surface" platform runs on Vista, I'm sure tazti will work with the touchscreen technology. When I tried tazti speech recognition, the built-in commands worked great. It also lets me create my own speech commands, and those work great too. Voice-searching Google, Yahoo, Wikipedia, YouTube and many other search engines works great. It has many other features as well, but it doesn't have dictation. I found that I eliminate 70% or more of my internet-generated clicks... maybe more. NOTE: tazti is a free download from their website.

How are serial generators / cracks developed?

I mean, I always wondered how the hell somebody can develop algorithms to break/cheat the constraints of legal use in many shareware programs out there.
Just for curiosity.
Apart from being illegal, it's a very complex task.
Speaking at a purely theoretical level, the common way is to disassemble the program to be cracked and try to find where the key or the serial code is checked.
That is easier said than done, since any serious protection scheme will check values in multiple places, and will also derive critical information from the serial key for later use, so that when you think you have guessed it, the program will crash.
To create a crack, you have to identify all the points where a check is done and modify the assembly code appropriately (often inverting a conditional jump or storing constants into memory locations).
To create a keygen, you have to understand the algorithm and write a program to re-do the exact same calculation (I remember an old version of MS Office whose serial had a very simple rule: the sum of the digits had to be a multiple of 7, so writing the keygen was rather trivial).
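For a toy rule like that remembered digit-sum check (whether the real Office serial worked exactly this way is anecdote, so treat the rule as illustrative), the corresponding keygen is a few lines:

```python
import random

def is_valid(serial):
    """Toy validation rule: the digits must sum to a multiple of 7."""
    return sum(int(c) for c in serial) % 7 == 0

def keygen(length=10):
    """Pick random digits, then choose the last digit so the sum works out."""
    digits = [random.randrange(10) for _ in range(length - 1)]
    last = (7 - sum(digits) % 7) % 7  # 0..6, makes the total divisible by 7
    return "".join(map(str, digits + [last]))

serial = keygen()
print(serial, is_valid(serial))  # every generated serial passes the check
```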
Both activities require you to follow the execution of the application in a debugger and try to figure out what's happening. And you need to know the low-level API of your operating system.
Some heavily protected applications have their code encrypted so that the file can't be disassembled. It is decrypted when loaded into memory, but then they refuse to start if they detect that an in-memory debugger has been started.
In essence, it's something that requires very deep knowledge, ingenuity and a lot of time! Oh, did I mention that it is illegal in most countries?
If you want to know more, Google for the +ORC cracking tutorials; they are very old and probably useless nowadays, but will give you a good idea of what it means.
Anyway, a very good reason to know all this is if you want to write your own protection scheme.
The bad guys search for the key-check code using a disassembler. This is relatively easy if you know how to do it.
Afterwards you translate the key-checking code to C or another language (this step is optional). Reversing the process of key-checking gives you a key-generator.
If you know assembler, it takes roughly a weekend to learn how to do this. I did it just a few years ago (never released anything, though; it was just research for my game-development job: to write a hard-to-crack key you have to understand how people approach cracking).
Nils's post deals with key generators. For cracks, usually you find a branch point and invert (or remove) the condition. For example, the program will test whether the software is registered; the test may return zero if so, and then jump accordingly. You can change the "jump if equals zero" (je) to "jump if not equals zero" (jne) by modifying a single byte. Or you can write no-operations over various portions of the code that do things you don't want done.
Compiled programs can be disassembled and with enough time, determined people can develop binary patches. A crack is simply a binary patch to get the program to behave differently.
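The single-byte branch inversion described above can be sketched on a toy code buffer (the instruction bytes are contrived for illustration, not taken from any real program):

```python
JE, JNE = 0x74, 0x75  # x86 short-jump opcodes: "jump if zero" / "jump if not zero"

code = bytearray([0x85, 0xC0,   # test eax, eax   (did the serial check return zero?)
                  JE,   0x10])  # je  +0x10       (jump to the "badboy" failure path)

offset = 2                      # position of the je, as found in a disassembler
assert code[offset] == JE
code[offset] = JNE              # the one-byte patch: invert the branch

print(code.hex())               # 85c07510
```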
First, most copy-protection schemes aren't terribly well advanced, which is why you don't see a lot of people rolling their own these days.
There are a few methods used to do this. You can step through the code in a debugger, which generally requires a decent knowledge of assembly. Using that, you can get an idea of where in the program the copy protection/keygen methods are called. With that, you can use a disassembler like IDA Pro to analyze the code more closely and try to understand what is going on and how you can bypass it. I've cracked time-limited betas before by inserting NOP instructions over the date check.
It really just comes down to a good understanding of software and a basic understanding of assembly. Hak5 did a two-part series on the first two episodes this season on kind of the basics of reverse engineering and cracking. It's really basic, but it's probably exactly what you're looking for.
A would-be cracker disassembles the program and looks for the "copy protection" bits, specifically for the algorithm that determines if a serial number is valid. From that code, you can often see what pattern of bits is required to unlock the functionality, and then write a generator to create numbers with those patterns.
Another alternative is to look for functions that return "true" if the serial number is valid and "false" if it's not, then develop a binary patch so that the function always returns "true".
Everything else is largely a variant on those two ideas. Copy protection is always breakable by definition - at some point you have to end up with executable code or the processor couldn't run it.
For the serial number, you can just extract the algorithm and start throwing "guesses" at it, looking for a positive response. Computers are powerful; it usually only takes a little while before it starts spitting out hits.
As for cracking, I used to be able to step through programs at a high level and look for the point where they stopped working. Then you go back to the last call that succeeded, step into it, and repeat. Back then, the copy protection was usually writing to the disk and checking whether a subsequent read succeeded (if it did, the copy protection failed, because publishers used to burn part of the floppy with a laser so it couldn't be written to).
Then it was just a matter of finding the right call and hardcoding the correct return value from that call.
I'm sure it's still similar, but they go through a lot of effort to hide the location of the call. The last one I tried, I gave up on because it kept loading code over the code I was single-stepping through, and I'm sure it has gotten a lot more complicated since then.
I wonder why they don't just distribute personalized binaries, where the name of the owner is stored somewhere (encrypted and obfuscated) in the binary, or better, distributed over the whole binary. AFAIK Apple does this with the music files from the iTunes Store, although there it's far too easy to remove the name from the files.
I assume each crack is different, but I would guess in most cases somebody spends a lot of time in the debugger tracing the application in question.
The serial generator takes that one step further by analyzing the algorithm that checks the serial number for validity, and reverse engineers it.