OllyDBG, follow Call Function - reverse-engineering

I recently started learning reversing again, and I ran into a problem using OllyDbg. When debugging an EXE that has several buttons, each of which does something different, I can't seem to find a way to follow a specific button's code.
For example: I have a KeygenMe with 3 buttons: "Login", "About", "Exit".
I want OllyDbg to follow what happens when I press the "Login" button.
How do I do that? I know it is possible as I've done it before.

You can follow a button by asking Olly to stop when the program returns from a function.
Do this:
Start debugging your KeygenMe.
Focus the OllyDbg window and press Ctrl+F9 (Execute till return).
Focus on the KeygenMe and click on the button.
Olly will stop on the return of the button function.
Sometimes Olly may stop a bit far from where you want to be, for example inside user32.dll, so you'll need to trace your way back.
You can do this using two techniques (that I know of):
(Use one of them after you have landed on the return.)
Use trace back:
Run your program normally and then hit trace over (Ctrl+F11).
Then go back using - (the minus key on the numeric keypad).
Or use breakpoints:
Set breakpoints until you find where this function is called from.
Using right-click on the code, find the references to the instruction you found in the first step.
Keep doing steps 1 and 2 until you find your function.
(I use both, but sometimes the first one doesn't work.)

The approach described above is, I think, the general one, and it should work in the majority of cases. However, if you already know which compiler the app was built with, you can use a compiler-specific approach and often reach what you are looking for faster and more precisely.
Suppose the worst case: your EXE wasn't built with .NET and you can't decompile it easily. There are still some tricks.
For instance, Delphi/C++ Builder apps store a table in the binary with public object events and their addresses; it is extremely easy to decode, and in fact there are Olly scripts that do it.
On the other hand, if it was made with Visual C++ using MFC or something like that, you can reach it easily if you know how MFC is called.

Related

Good way to draw Console UIs in Cosmos?

So I'm developing an OS using C# and Cosmos. It's called Memphis, and I want it to be entirely command-line for now (like DOS).
But most command-line OSes I've seen (Arch, DOS, etc.) have something like a console library that lets you create simple UIs with buttons, menus, inputs, etc.
I've already tried to write my own, but it was completely futile. I can draw a window and text on-screen, but not much else. (I did have a basic window manager that sort of worked: you press Tab to switch between windows and left/right to select inputs like buttons and textboxes; it accepted keyboard input and I could see the windows updating, but nothing actually changed despite the code theoretically working.)
So I'm looking for a way to draw UIs on the console for my OS, but it has to be ENTIRELY managed, and must ONLY use what's in System.Console. It also CANNOT contain events. This is because Cosmos can only really work with code that's 100% managed, System.Console is implemented almost fully in Cosmos, and events throw an 'OpCode Mkrefany not yet implemented' error when Cosmos' IL2CPU assembler tries to convert the compiled C# code to x86 ASM.

Is there any way to undo only parts of code that are highlighted in Sublime Text?

Are there any plugins that would do this? Let's say you highlight a code block; when you press undo, it undoes the last change within that code block.
I've done a lot of digging into Sublime's internals, and I don't think this is possible. Commands (processes executed when you select a menu item or hit a key combination) are implemented in one of two ways: either in Python, making use of the API, or internally in C++ and compiled directly into the executable or a library. If a command is implemented in pure Python, such as delete_word (source in Packages/Default/delete_word.py), you can edit the source if necessary or take portions to use in your own code. However, if a command is implemented internally, there's not much you can do to modify it, unless it has options that are documented somewhere. You basically have to use it as-is.
Which brings us to the undo/redo commands, and the edit history. As far as I can tell (since Sublime is not open source - yet), this entire functionality is completely implemented internally, with only the command names exposed. I have been completely unable to find any way of viewing or accessing the actual changes made to the undo/redo stack. The command_history() method of the sublime.View class accesses the commands in the undo/redo stack, but not the actual changes they made.
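To see concretely how little is exposed, here is a rough plugin sketch (the class name and output format are my own invention, not part of Sublime) that walks the undo stack via command_history() and prints what the API gives you: command names and arguments, but never the modified text itself.

    import sublime_plugin

    class DumpUndoStackCommand(sublime_plugin.TextCommand):
        """Print what the API exposes about the undo stack of the current view."""
        def run(self, edit):
            index = 0  # 0 is the most recent entry; negative indices go further back
            while True:
                name, args, repeat = self.view.command_history(index, True)
                if not name and not args and repeat == 0:
                    break  # ran off the end of the undo stack
                print("undo[{}]: {} {!r} (x{})".format(index, name, args, repeat))
                index -= 1

Running it from the console with view.run_command("dump_undo_stack") yields entries like ('insert', {'characters': 'foo'}, 1), which is exactly the problem: the region-level change data a selection-scoped undo would need simply isn't there.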
So, all of this is to say that one could not likely make a plugin that could access the change history of an arbitrary selection in Sublime. One of the major issues (aside from the fact that the change history of the view is not accessible) is that the text you select now might not correspond to anything at a certain point in the history - it might not exist, or could have been altered so fundamentally that it would be essentially impossible to identify which changes should be associated with the selection, and which not. I have never heard of a similar feature in any other editor, most likely for that exact reason.

What's debug section in IDA Pro?

I'm trying to analyze a DLL file with my poor assembly skills, so forgive me if I can't achieve something very trivial. My problem is that, while debugging the application, I can find the code I'm looking for only during the debug session; after I stop the debugger, the address is gone. The DLL doesn't look obfuscated, as much of the code is readable. Take a look at the screenshot. The code I'm looking for is located at address 07D1EBBF in the debug376 section. By the way, where did I get this debug376 section?
So my question is, How can I find this function while not debugging?
Thanks
UPDATE
OK, as I said, as soon as I stop the debugger, the code vanishes. I can't even find it via a sequence of bytes (but I can in debug mode). When I start the debugger, the code is not disassembled immediately; I have to add a hardware breakpoint at that place, and only when the breakpoint is hit will IDA show the disassembled code. Take a look at this screenshot.
You see the line of code I'm interested in, which is not visible if the program is not running in debug mode. I'm not sure, but I think it's something like unpacking the code at runtime, which is not visible at design time.
Anyway, any help would be appreciated. I want to know why that code is hidden until the breakpoint is hit (it's shown as "db 8Bh" etc.) and how to find that address without debugging, if possible. By the way, could this be code from a different module (DLL)?
Thanks
UPDATE 2
I found out that debug376 is a segment created at runtime. So simple question: how can I find out where this segment came from :)
So you see the code in the debugger window once your program is running, but you can't find the very same opcodes in the raw hex dump once it's no longer running?
What might help you is taking a memory snapshot. Pause the program's execution near the instructions you're interested in to make sure they are there, then choose "Take memory snapshot" from the "Debugger" menu. IDA will then ask whether to copy only the data found in the segments that are defined as "loader segments" (those the PE loader creates from the predefined table) or "all segments" that seem to currently belong to the debugged program (including ones that might have been created by an unpacking routine, a decryptor, whatever). Go for "All segments" and you should be able to see the memory contents, including your debug segments (segments created or recognized while debugging), in IDA when not debugging the application.
You can view the list of segments at any time by pressing Shift+F7 or by clicking "Segments" in View > Open subviews.
Keep in mind that the program you're trying to analyze might choose to create the segment somewhere else the next time it is loaded, to make it harder for you to understand what's going on.
UPDATE to match your second Question
When a program is unpacking data from somewhere, it will have to copy that data somewhere. Windows uses virtual memory and nowadays gets really nasty when you try to execute or write code at locations you're not allowed to. So any program, as long as we're under Windows, will somehow:
Reserve a bunch of new memory or overwrite memory it already owns. This is usually done by calling something like malloc [your code looks as if it could have come from a very pointer-intensive language... VB perhaps, or something object-oriented], but it mostly boils down to a call to VirtualAlloc or VirtualAllocEx from Windows's kernel32.dll. See http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887(v=vs.85).aspx for more detail on its calling convention.
Perhaps set up Windows exception handling on that range, and mark the memory range as executable if it wasn't already when calling VirtualAlloc. This would be done by calling VirtualProtect, again from kernel32.dll. See http://msdn.microsoft.com/en-us/library/windows/desktop/aa366898(v=vs.85).aspx and http://msdn.microsoft.com/en-us/library/windows/desktop/aa366786(v=vs.85).aspx for more info on that.
So now you should step through the program, starting at its original entry point (OEP), and look for calls to one of those functions, possibly with the memory protection set to PAGE_EXECUTE or a descendant. After that will probably come some sort of loop decrypting the memory contents and copying them to their new location. You might want to just step over it, depending on what your interest in the program is, by placing the cursor after the loop (the thick blue line in IDA, usually) and clicking "Run to cursor" from the menu that appears when you right-click the assembler code.
If that fails, just try placing a hardware breakpoint on kernel32.dll's VirtualAlloc and see if you get anything interesting when stepping through the return, so you end up wherever the execution chain takes you after the Alloc or Protect call.
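For reference, the allocate-then-protect pattern described above looks roughly like this when driven from Python via ctypes (Windows only; the block size, flags, and lack of error handling are illustrative, not taken from the DLL being analyzed):

    import ctypes

    kernel32 = ctypes.windll.kernel32
    kernel32.VirtualAlloc.restype = ctypes.c_void_p

    MEM_COMMIT  = 0x1000
    MEM_RESERVE = 0x2000
    PAGE_READWRITE         = 0x04
    PAGE_EXECUTE_READWRITE = 0x40

    # Step 1: reserve and commit a fresh block of memory; an unpacker would
    # copy the decrypted code into this block afterwards.
    buf = kernel32.VirtualAlloc(None, 0x1000, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE)

    # Step 2: flip the block to executable once the code has been written.
    old = ctypes.c_ulong(0)
    kernel32.VirtualProtect(ctypes.c_void_p(buf), 0x1000,
                            PAGE_EXECUTE_READWRITE, ctypes.byref(old))

Breaking on these two calls in the debugger and noting the returned address is usually the quickest way to find where the unpacked code will land.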
You need to find the relative virtual address (RVA) of that code; this will allow you to find it again regardless of the load address (pretty handy, with almost all systems using ASLR these days). The RVA is generally calculated as RVA = virtual address - base load address; however, you might also need to account for the section base as well.
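As a quick worked example of that calculation (the image base below is made up; read the real one from IDA's segments view or the debugger's module list):

    # Hypothetical numbers for illustration only.
    virtual_address = 0x07D1EBBF   # address seen during the debug session
    image_base      = 0x07D00000   # load base of the module that owns it

    rva = virtual_address - image_base
    print(hex(rva))                # 0x1ebbf - stays valid even under ASLR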
The alternative is to use IDA's rebasing tool to rebase the DLL to the same address every time.

AS3 Error #1502

AS3
Error: Error #1502: A script has executed for longer than the default timeout period of 15 seconds.
Is there a way to temporarily suppress this on a specific block of code?
I am creating a HUGE dynamic 3D array of objects, 1000x1000x1000, and need the build to actually finish initializing.
Your best bet would be to try and refactor your code. Perhaps you can make use of this tutorial which deals with the exact problem you are having.
http://www.senocular.com/flash/tutorials/asyncoperations/
Increasing the timeout is one option, however I would also suggest considering an approach that would build your arrays over multiple frames, that is splitting the work up into separate jobs. As long as you give control back to the Flash Player every once in a while, you will not get this exception.
I'm not certain of the specifics of your problem, however you will need to find a way to parallelize or just simply segment your calculations. If your algorithm centers around one major loop, then consider creating a function that takes all of the arguments necessary to record the context of a single iteration. Then, create a simple control loop that will call this function and determine when to wait until the next frame and when not to. Leveraging AS3 closures can also help with this.
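The question is about ActionScript 3, but the shape of that control loop is easy to sketch; here it is in Python (the language used for the other sketches on this page), with the frame boundary simulated by resuming a generator. In AS3 you would resume the work from an ENTER_FRAME handler instead:

    import time

    def build_cells(total, budget_ms=10):
        """Do at most ~budget_ms of work per resume, then hand control back."""
        done = 0
        while done < total:
            start = time.monotonic()
            while done < total and (time.monotonic() - start) * 1000 < budget_ms:
                # ... initialise one cell of the big 3D array here ...
                done += 1
            yield done  # one "frame" worth of work finished

    worker = build_cells(1000 * 1000)   # scaled-down total for the example
    for progress in worker:
        pass  # in AS3: update a progress bar here, then wait for the next frame

Because each resume stays well under the budget, the host (the Flash Player, in the original question) gets control back regularly and never hits the script timeout.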
Look for the script execution time limit in the "Publish Settings" (Flash). If you're using Flex, maybe this one can be useful: http://livedocs.adobe.com/flex/3/html/help.html?content=compilers_14.html (check default-script-limits, max-recursion-depth, max-execution-time). Oh! It seems there's apparently no way to make it behave in a different way on a specific piece of code (it is a global setting).
I don't recommend the increased-timeout option, because for all that time your application just hangs the whole Flash Player, and the user typically thinks it is down and forces it to quit.
Check this one out: How to show the current progressBar value of a process within a loop in Flex/AS3?
Then you can even show the progress, which is much more reassuring both for you and for the user.

What do you think of developing for the command line first?

What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods?
eg.
W:\ todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00"
loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database.
Eventually you add in a screen for this:
============================================================
Event: [meeting with John, re: login peer review]
Location: [John's office]
Date: [Fri. Aug. 22, 2008]
Time: [ 2:00 PM]
[Clear] [Submit]
============================================================
When you click submit, it calls the same AddTask function.
Is this considered:
a good way to code
just for the newbies
horrendous!
Addendum:
I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves?
Why not just call the same executable in different ways:
"todo /G" when you want the full-on graphical interface
"todo /I" for an interactive prompt within todo.exe (scripting, etc)
plain old "todo <function>" when you just want to do one thing and be done with it.
Addendum 2:
It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something."
Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library.
Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience.
I would go with building a library with a command line application that links to it. Afterwards, you can create a GUI that links to the same library. Calling a command line from a GUI spawns external processes for each command and is more disruptive to the OS.
Also, with a library you can easily do unit tests for the functionality.
But as long as your functional code is separate from your command-line interpreter, you can just reuse the source for a GUI without needing both front-ends at once to perform an operation.
Put the shared functionality in a library, then write a command-line and a GUI front-end for it. That way your layer transition isn't tied to the command-line.
(Also, this way adds another security concern: shouldn't the GUI first have to make sure it's the RIGHT todo.exe that is being called?)
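A minimal sketch of that split, reusing the todo example from the question (Python purely for illustration; todo_core, add_task and the sqlite storage are made-up names, not from the original post):

    # todo_core.py - the shared library; both the CLI and the GUI import this.
    import sqlite3

    def add_task(event, location, date, time):
        """Validate the input and store the task; raise ValueError if invalid."""
        if not event or not date:
            raise ValueError("event and date are required")
        with sqlite3.connect("todo.db") as db:
            db.execute("CREATE TABLE IF NOT EXISTS tasks"
                       " (event TEXT, location TEXT, date TEXT, time TEXT)")
            db.execute("INSERT INTO tasks VALUES (?, ?, ?, ?)",
                       (event, location, date, time))

    # todo_cli.py - thin command-line front-end.
    import sys
    from todo_core import add_task

    if __name__ == "__main__":
        if sys.argv[1] == "AddTask":
            add_task(*sys.argv[2:6])

A GUI front-end would import add_task the same way and call it from its Submit handler, so neither front-end ever needs to spawn the other.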
Joel wrote an article contrasting this ("unix-style") development to the GUI first ("Windows-style") method a few years back. He called it Biculturalism.
I think on Windows it will become normal (if it hasn't already) to wrap your logic into .NET assemblies, which you can then access from both a GUI and a PowerShell provider. That way you get the best of both worlds.
My technique for programming backend functionality first without having the need for an explicit UI (especially when the UI isn't my job yet, e.g., I'm desigining a web application that is still in the design phase) is to write unit tests.
That way I don't even need to write a console application to mock the output of my backend code -- it's all in the tests, and unlike your console app I don't have to throw the code for the tests away because they still are useful later.
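In that spirit, exercising the backend from a test looks like this (unittest shown; add_task and todo_core are the hypothetical names from the library sketch above):

    import unittest
    from todo_core import add_task

    class AddTaskTests(unittest.TestCase):
        def test_rejects_empty_event(self):
            with self.assertRaises(ValueError):
                add_task("", "John's office", "2008-08-22", "14:00")

    if __name__ == "__main__":
        unittest.main()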
I think it depends on what type of application you are developing. Designing for the command line puts you on the fast track to what Alan Cooper refers to as "Implementation Model" in The Inmates are Running the Asylum. The result is a user interface that is unintuitive and difficult to use.
37signals also advocates designing your user interface first in Getting Real. Remember, for all intents and purposes, in the majority of applications, the user interface is the program. The back end code is just there to support it.
It's probably better to start with a command line first to make sure you have the functionality correct. If your main users can't (or won't) use the command line then you can add a GUI on top of your work.
This will make your app better suited for scripting as well as limiting the amount of upfront Bikeshedding so you can get to the actual solution faster.
If you plan to keep your command-line version of your app then I don't see a problem with doing it this way - it's not time wasted. You'll still end up coding the main functionality of your app for the command-line and so you'll have a large chunk of the work done.
I don't see working this way as being a barrier to a nice UI - you've still got the time to add one and make it usable, etc.
I guess this way of working would only really work if you intend for your finished app to have both command-line and GUI variants. It's easy enough to mock a UI and build your functionality into that and then beautify the UI later.
Agree with Stu: your base functionality should be in a library that is called from the command-line and GUI code. Calling the executable from the UI is unnecessary overhead at runtime.
#jcarrascal
I don't see why this has to make the GUI "bad"?
My thought would be that it would force you to think about what the "business" logic actually needs to accomplish, without worrying too much about things being pretty. Once you know what it should/can do, you can build your interface around that in whatever way makes the most sense.
Side note: Not to start a separate topic, but what is the preferred way to address answers to/comments on your questions? I considered both this, and editing the question itself.
I did exactly this on one tool I wrote, and it worked great. The end result is a scriptable tool that can also be used via a GUI.
I do agree with the sentiment that you should ensure the GUI is easy and intuitive to use, so it might be wise to even develop both at the same time... a little command line feature followed by a GUI wrapper to ensure you are doing things intuitively.
If you are true to implementing both equally, the result is an app that can be used in an automated manner, which I think is very powerful for power users.
I usually start with a class library and a separate, really crappy and basic GUI. Since the command line involves parsing the command line, I feel like I'm adding a lot of unnecessary overhead.
As a Bonus, this gives an MVC-like approach, as all the "real" code is in a Class Library. Of course, at a later stage, Refactoring the library together with a real GUI into one EXE is also an option.
If you do your development right, then it should be relatively easy to switch to a GUI later on in the project. The problem is that it's kinda difficult to get it right.
Kinda depends on your goal for the program, but yeah, I do this from time to time - it's quicker to code, easier to debug, and easier to write quick and dirty test cases for. And as long as I structure my code properly, I can go back and tack on a GUI later without too much work.
To those suggesting that this technique will result in horrible, unusable UIs: You're right. Writing a command-line utility is a terrible way to design a GUI. Take note, everyone out there thinking of writing a UI that isn't a CLUI - don't prototype it as a CLUI.
But, if you're writing new code that does not itself depend on a UI, then go for it.
A better approach might be to develop the logic as a library with a well-defined API and, at the dev stage, no interface (or a hard-coded interface); then you can write the CLI or GUI later.
I would not do this for a couple of reasons.
Design:
A GUI and a CLI are two different interfaces used to access an underlying implementation. They are generally used for different purposes (GUI is for a live user, CLI is usually accessed by scripting) and can often have different requirements. Coupling the two together is not a wise choice and is bound to cause you trouble down the road.
Performance:
The way you've described things, you need to spawn an executable every time the GUI needs to do something. This is just plain ugly.
The right way to do this is to put the implementation in a library that's called by both the CLI and the GUI.
John Gruber had a good post about the concept of adding a GUI to a program not designed for one: Ronco Spray-On Usability
Summary: It doesn't work. If usability isn't designed into an application from the beginning, adding it later is more work than anyone is willing to do.
#Maudite
The command-line app will check params up front and the GUI won't - but they'll still be checking the same params and inputting them into some generic worker functions.
Still the same goal. I don't see the command-line version affecting the quality of the GUI one.
Make a program that you expose as a web service, then have the GUI and the command line call that same web service. This approach also allows you to build a web GUI, to provide the functionality as SaaS to extranet partners, and/or to better secure the business logic.
This also allows your program to participate more easily in an SOA environment.
For the web service, don't go overboard: use YAML or XML-RPC. Keep it simple.
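A bare-bones version of that with XML-RPC from the Python standard library (the host, port and stub implementation are illustrative; in a real setup add_task would come from the shared library):

    # server.py - expose the backend over XML-RPC.
    from xmlrpc.server import SimpleXMLRPCServer

    def add_task(event, location, date, time):
        # Stubbed here; normally this is the shared library function.
        return "stored: %s @ %s %s (%s)" % (event, date, time, location)

    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_function(add_task, "AddTask")
    server.serve_forever()

    # client.py (run separately) - the CLI, the GUI and a web front-end
    # can all talk to the same endpoint.
    import xmlrpc.client
    proxy = xmlrpc.client.ServerProxy("http://localhost:8000", allow_none=True)
    print(proxy.AddTask("meeting with John", "John's office", "2008-08-22", "14:00"))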
In addition to what Stu said, having a shared library will allow you to use it from web applications as well. Or even from an IDE plugin.
There are several reasons why doing it this way is not a good idea. A lot of them have been mentioned, so I'll just stick with one specific point.
Command-line tools are usually not interactive at all, while GUI's are. This is a fundamental difference. This is for example painful for long-running tasks.
Your command-line tool will at best print out some kind of progress information - newlines, a textual progress bar, a bunch of output, ... Any kind of error it can only output to the console.
Now you want to slap a GUI on top of that; what do you do? Parse the output of your long-running command-line tool? Scan for WARNING and ERROR in that output to throw up a dialog box?
At best, most UI's built this way throw up a pulsating busy bar for as long as the command runs, then show you a success or failure dialog when the command exits. Sadly, this is how a lot of UNIX GUI programs are thrown together, making it a terrible user experience.
Most repliers here are correct in saying that you should probably abstract the actual functionality of your program into a library, then write a command-line interface and the GUI at the same time for it. All your business logic should be in your library, and either UI (yes, a command line is a UI) should only do whatever is necessary to interface between your business logic and your UI.
A command line is too poor a UI to make sure you develop your library well enough for later GUI use. You should start with both from the get-go, or start with the GUI programming. It's easy to add a command-line interface to a library developed for a GUI, but a lot harder the other way around, precisely because of all the interactive features the GUI will need (reporting, progress, error dialogs, i18n, ...).
Command-line tools generate fewer events than GUI apps and usually check all the params before starting. This will limit your GUI, because for a GUI it could make more sense to ask for the params as your program works, or afterwards.
If you don't care about the GUI then don't worry about it. If the end result will be a gui, make the gui first, then do the command line version. Or you could work on both at the same time.
--Massive edit--
After spending some time on my current project, I feel as though I have come full circle from my previous answer. I think it is better to do the command line first and then wrap a GUI around it. If you need to, I think you can make a great GUI afterwards. By doing the command line first, you get all of the arguments down first, so there are no surprises (until the requirements change) when you are doing the UI/UX.
That is exactly one of my most important realizations about coding and I wish more people would take such approach.
Just one minor clarification: The GUI should not be a wrapper around the command line. Instead one should be able to drive the core of the program from either a GUI or a command line. At least at the beginning and just basic operations.
When is this a great idea?
When you want to make sure that your domain implementation is independent of the GUI framework. You want to code around the framework, not into the framework.
When is this a bad idea?
When you are sure your framework will never die