Reading data from a process (game) - reverse-engineering

I want to read data from a game. In this case it's Rocket League (Basically football with cars).
This data can be:
Current score
Player names / Team names
Game duration
etc.
Reading through the internet, I found two approaches:
Memory reading
I try to find the addresses of the variables in the game's memory. This could be done using a low-level programming language such as C++ (?) - see the sketch below.
Packet sniffing
This is probably a bad idea; however, since Rocket League is a multiplayer game, data is sent over the internet. One could capture and analyze the packets, provided that the connection is not encrypted.
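For the memory-reading approach, here is a minimal C++ sketch of the Win32 route (my own illustration, not anything Rocket League specific). The process ID and the score address are made-up placeholders; in practice you would find the address with a memory scanner such as Cheat Engine, and you would have to re-resolve it every session (typically via a pointer chain), since addresses move around.

    // Minimal sketch: read a 4-byte value out of another process on Windows.
    // The PID and score address are hypothetical placeholders for illustration.
    #include <windows.h>
    #include <cstdint>
    #include <iostream>

    int main() {
        DWORD pid = 1234;                          // hypothetical game PID
        std::uintptr_t scoreAddress = 0x12AB34C0;  // hypothetical address of the score

        HANDLE process = OpenProcess(PROCESS_VM_READ, FALSE, pid);
        if (!process) {
            std::cerr << "OpenProcess failed: " << GetLastError() << "\n";
            return 1;
        }

        int score = 0;
        SIZE_T bytesRead = 0;
        if (ReadProcessMemory(process, reinterpret_cast<LPCVOID>(scoreAddress),
                              &score, sizeof(score), &bytesRead)) {
            std::cout << "Current score: " << score << "\n";
        } else {
            std::cerr << "ReadProcessMemory failed: " << GetLastError() << "\n";
        }
        CloseHandle(process);
        return 0;
    }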
I know this is by no means an easy task. What is your advice?
Feel free to share good articles regarding this topic.

Related

Scaling a Server Horizontally

I have a question about scalability. Let's say I have a multiplayer game, such as Uno, where the server handles everything. (Assume this is a text-only game for simplicity). For example, to get information printed out to the user in the client, the server might send PRINT string, or CHOOSE data (to pick a card to play), etc. In this regard, the client is "dumb" and the server handles the game logic.
A quick example of how this might work on a protocol level:
Server sends: PRINT Choose a card
Server sends: CHOOSE Red 1,Blue 1 (user shown a button or something, and picks Red 1)
Client sends: Red 1
Let's say I have this architecture:
Player Class: stores the cards the user has, maybe some methods (such as tellData(String data) which would send PRINT data, sendPM() which could private message a user)
Server Class: handles authentication, allows users to create new games, shows users a list of games they can join
Game Class: handles users playing a card, handles switching to a new player for his or her turn, calls methods on player class like tellData(), pickCard(), etc
How would I scale this to run the server on multiple computers? Right now, all of the users connect to one server, and everything requires the Player, Server, and Game classes to interact with each other. If someone could provide some suggestions, and/or point me to some good resources/books on this, it would be greatly appreciated (no, this is not a homework assignment or something for a business; this is just a personal project and curiosity of mine). In terms of scalability, I'd like to just be able to add another server and handle the additional load of players - but the maximum would be about 1000 concurrent connections.
Also, would this become a significantly more difficult scalability challenge if we added more games?
Furthermore, what is the best way to store game data? In a SQL database, or by serializing objects, or what? By this, I mean: let's say 3 users are in a game of Uno and want to return to it later. I don't want to store their cards and information about the game in the Player/Server/Game classes (RAM) forever - I want to dump this somewhere, so that when the user logs in, the info can be loaded from wherever it was dumped back into RAM and into the appropriate Player/Game objects.
Finally, how can I make changes to the server without having to kill it, and restart it? Assume the server was written in Java or Python.
If anyone can provide suggestions or some resources it would be greatly appreciated - this includes changing the architecture I originally stated.
Thanks for any and all help!!
EDIT: Are there any good books or talks you all would recommend on the subject?
1. Scalability:
This involves an application architecture where the session is replicated/shared and load-balanced across multiple server instances. You can choose to implement a message queue (RabbitMQ) or an ESB (enterprise service bus) architecture for your app.
2. Ease of scaling:
Depends on deployment and the servers you choose.
3. Persistence:
A player's game is their particular game state at any point in time. If you can represent the state information semantically, you can keep the data in markup save files, or store the state information directly in a DB.
Otherwise, you may need to serialize objects and store them on the filesystem, or as a BLOB in the DB, in case the state space is huge.
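To make the serialize-and-dump idea concrete, here is a minimal C++ sketch of the round trip (the GameState shape and the plain-text format are invented for illustration; a real server would more likely use JSON, a serialization library, or a DB row):

    // Sketch: dump a hypothetical Uno-like game state to disk so it can
    // leave RAM, and load it back when the players return.
    #include <fstream>
    #include <string>
    #include <vector>

    struct GameState {
        std::string gameId;
        std::vector<std::string> playerCards;  // e.g. "Red 1", "Blue 4"
    };

    void save(const GameState& s, const std::string& path) {
        std::ofstream out(path);
        out << s.gameId << "\n";
        for (const auto& card : s.playerCards)
            out << card << "\n";
    }

    GameState load(const std::string& path) {
        std::ifstream in(path);
        GameState s;
        std::getline(in, s.gameId);
        for (std::string line; std::getline(in, line); )
            s.playerCards.push_back(line);
        return s;
    }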
4. Hot deployment:
The JVM will almost always need a restart to reload class files, so on the Java server side you will always need to restart. In Ruby/Rails, certain parts of the application can be hot-deployed. If you need 100% hot deployability, perhaps Erlang is the answer.
To improve concurrency you can also use evented server/app architectures: Thin/EventMachine for Ruby, or Apache MINA and JBoss Netty for Java.

How to secure Flash Games from Cheat Engine?

Other questions didn't really have an answer. I'm making a singleplayer game that saves to a leaderboard, and I can't have the scores be 999999999999999999999999 from Cheat Engine. How do I secure my AS3 Flash Game so that Cheat Engine does nothing?
You can't. It's unavoidable. Abandon all hope. Your game is client-side and can be tampered with. The score is sent from the client, and it can be intercepted and changed before it reaches the server. Anything you do to try to encode the score will fail, because your SWF can be decompiled and the algorithm reverse-engineered. Even if you put yourself through hell to obfuscate your SWF and the logic used to encode and decode a score, you will not prevail. All you can do is make cheating not worthwhile by maximising the difficulty of sending fake scores; make the criteria for a valid score strict and hard to determine, e.g. a multiple of a given number minus x.
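A toy sketch of that last idea, with invented constants: if every legitimate score is generated so that it has the form k*N - X for secret N and X, the server can cheaply reject anything that doesn't fit (it only raises the bar, for the reasons above).

    // Sketch of "valid scores satisfy a hidden criterion". N and X are
    // made-up secrets; a real game would derive them less obviously.
    #include <iostream>

    const long long N = 37;  // secret multiplier
    const long long X = 13;  // secret offset

    long long encodeScore(long long k) { return k * N - X; }            // client side
    bool isPlausible(long long score) { return (score + X) % N == 0; }  // server side

    int main() {
        std::cout << isPlausible(encodeScore(42)) << "\n";       // 1
        std::cout << isPlausible(999999999999999999LL) << "\n";  // 0
    }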
There are ways to make it very difficult, although as others have mentioned, it will never be completely secure.
Neopets, a popular site with flash games, combats this problem by sending extra information to the server. For example, in a game you might send to the server:
Time to complete level
Number of enemies killed
Number of items collected
Score
Then on the server, check if the values make sense. If they did not kill enough enemies, reject their score. If they completed the level too quickly, reject their score.
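A sketch of what such a server-side sanity check might look like; the fields and thresholds are invented for illustration:

    // Reject submitted results that are inconsistent with actual play.
    #include <iostream>

    struct GameResult {
        double levelSeconds;   // time to complete the level
        int enemiesKilled;
        int itemsCollected;
        long long score;
    };

    bool acceptScore(const GameResult& r) {
        if (r.levelSeconds < 30.0)
            return false;  // finished impossibly fast
        if (r.score > 100LL * r.enemiesKilled + 10LL * r.itemsCollected)
            return false;  // score can't be earned from this little play
        return true;
    }

    int main() {
        GameResult cheater{5.0, 0, 0, 999999999};
        std::cout << (acceptScore(cheater) ? "accepted" : "rejected") << "\n";  // rejected
    }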
Bind your data to something dynamically changing, like time, since Cheat Engine has no option to trace time-varying data. Additionally, if the trainer is professionally programmed, you can combine this method with other distraction techniques, such as multiplying the number by another piece of dynamic data.

Any good threads-related job-interview questions?

When interviewing graduates I usually ask them questions about data structures, algorithms and complexity theory. I would really like to ask a question that lets them show their familiarity with multi-threading concepts, without delving into language-specific issues.
Any good questions? The only question I could think of is how to write a Singleton that supports multi-threaded access.
I find the classic "write me a consumer-producer queue" question to be quite good. You can talk about synchronization in a handwavy way beforehand for five minutes or so (e.g. start with "What does Object.wait() do? What other methods on Object is it related to? Can you give me an example of when you might use these? What other concurrency techniques might you use in practice [because really, it's quite rare that actually using the wait/notify primitives is the best approach]?"). Make sure the candidate addresses (or at least makes clear they are aware of) both atomicity ("missed updates") and volatility (visibility of the new value on other threads).
Then after you've had a chat about the theory of these, get them to spend a few minutes actually writing the code for a primitive producer-consumer queue. This should be straightforward to anyone who actually understands what they were talking about above, yet it will weed out those who can "talk the talk" but don't actually understand it in practice (arguably the most dangerous group).
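For reference, a minimal sketch of what a passing answer could look like. The discussion above is framed around Java's wait()/notify(); the same shape in C++ uses a mutex plus two condition variables:

    // Bounded producer-consumer queue: the mutex gives atomicity, the
    // condition variables give signalling ("not full" / "not empty").
    #include <condition_variable>
    #include <cstddef>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    template <typename T>
    class BoundedQueue {
    public:
        explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

        void put(T item) {
            std::unique_lock<std::mutex> lock(mutex_);
            notFull_.wait(lock, [&] { return queue_.size() < capacity_; });
            queue_.push(std::move(item));
            notEmpty_.notify_one();
        }

        T take() {
            std::unique_lock<std::mutex> lock(mutex_);
            notEmpty_.wait(lock, [&] { return !queue_.empty(); });
            T item = std::move(queue_.front());
            queue_.pop();
            notFull_.notify_one();
            return item;
        }

    private:
        std::mutex mutex_;
        std::condition_variable notFull_, notEmpty_;
        std::queue<T> queue_;
        std::size_t capacity_;
    };

    int main() {
        BoundedQueue<int> q(4);
        std::thread producer([&] { for (int i = 0; i < 10; ++i) q.put(i); });
        std::thread consumer([&] { for (int i = 0; i < 10; ++i) std::cout << q.take() << " "; });
        producer.join();
        consumer.join();
    }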
What I like about these mini-coding exercises, is that they're often easy to extend. For instance, if the candidate completes the task easily, you can ask how they would extend it for situation XXX - invent requirements that you know will push the limits of the noddy solution you asked for. This not only lets you tailor the depth of questions you're asking but gives some insight into how well the candidate handles clarification of requirements, and modifications of existing design (which is pretty important in this industry).
Here you can find some topics to discuss:
thread implementation (kernel vs. user space)
thread-local storage
synchronization primitives
deadlocks, livelocks
differences between mutex and semaphore
use of condition variables
when not to use threads (e.g. I/O multiplexing)
Talk with them about a popular, but not well-known topic, where thread handling is essential.
I recommend you build a web server with them - of course, only on paper or just in words. The result should look something like this: there is a main thread listening on a socket. When something arrives, it passes the socket into a pool, then this thread returns to listening on the socket. The pool has a fixed number of slots. The request-processing threads take jobs from the pool. Find out what's better: the threads checking the pool concurrently, or the listener main thread selecting a free slot/thread for each new incoming request. Try to write a little pseudocode, or a diagram, for both sides of the pool handling.
Let's introduce a small application: a page counter, which tells how many page requests have been made since server startup. Don't tell them that the counter must be protected against concurrent modification; let them find out how to do this with mutexes or synchronization or whatever. Maybe you could skip the web server part; the page counter app is easier to specify.
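A minimal sketch of the point they should discover - the shared counter must be protected; shown here with std::atomic, though a mutex around a plain counter works just as well:

    // A plain "++counter" from many threads loses updates; an atomic doesn't.
    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <vector>

    std::atomic<long> pageRequests{0};

    void handleRequest() { pageRequests.fetch_add(1, std::memory_order_relaxed); }

    int main() {
        std::vector<std::thread> workers;
        for (int i = 0; i < 8; ++i)
            workers.emplace_back([] { for (int j = 0; j < 100000; ++j) handleRequest(); });
        for (auto& w : workers)
            w.join();
        std::cout << pageRequests << "\n";  // always 800000; a plain long would often print less
    }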
Another example is a chat, with 2+ clients and a server; find out how to solve the problem that all messages should arrive in the same order for all clients. Or a reflex game: the server waits for 1-5 seconds at random, then says "peek-a-boo", and the player who presses the space key first wins. Specify it with 2 players, then try to extend it to N players.
Also, be aware of NPPs. NPP stands for "non-programming programmer". There are dudes who can talk about programming issues, who know all the 3/4-letter abbreviations (there are lots in the Java world: EJB, JSP, XSLT, and my favourite, POJO, which means Plain Old Java Object, lol), who understand and modify code, or make similar programs from a base, but who fail at even small problems if they have to do it themselves, e.g. finding the element nearest to a given value in an array. Sometimes it takes months until this turns out. They perform well at interviews, because they prepare for them. Maybe they don't even know that they're NPPs; this is a known effect: http://en.wikipedia.org/wiki/Dunning-Kruger_effect
It's hard to recognize the opposite kind of dude, who has not heard about trendy libraries or patterns, but who can learn them even during the job interview. (Personal remark: my last interview was in 1999, and it seems I will not do interviews anymore. I had never heard of dynamic web pages before, but I figured out the term "session" during the interview - the question was how to build a simple hangman web app. I was hired.)

Flow Based Programming

I have been doing a little reading on Flow Based Programming over the last few days. There is a wiki which provides further detail, and Wikipedia has a good overview of it too. My first thought was, "Great, another proponent of lego-land pretend programming" - a concept harking back to the late 80s. But as I read more, I must admit I have become intrigued.
Have you used FBP for a real project?
What is your opinion of FBP?
Does FBP have a future?
In some senses, it seems like the holy grail of reuse that our industry has pursued since the advent of procedural languages.
1. Have you used FBP for a real project?
We've designed and implemented a DF server for our automation project (dispatcher, component interface, a bunch of components, a DF language, a DF compiler, UI). It is written in bare C++ and runs on several Unix-like systems (Linux x86, MIPS, AVR32, etc., Mac OS X). It lacks several features, e.g. sophisticated flow control and complex thread control (there is only a not-too-advanced component for that), so it is just a prototype, even though it works. We're now working on a full-featured server. We learned a lot implementing and using the prototype.
Also, we'll make a visual editor some day.
2. What is your opinion of FBP?
2.1. First of all, dataflow programming is ultimate fun
When I met dataflow programming, I felt like I did 20 years ago, when I first met programming. Although DF programming differs from procedural/OOP programming, it's still just a kind of programming. There are lots of things to discover, even sooo simple ones! It's very funny when, as an experienced programmer, you meet a DF problem which is a very, very basic thing, but was completely unknown to you before. So if you jump into DF programming, you will feel like a rookie programmer who first meets the "loop" or the "condition".
2.2. It can be used only for specific architectures
It's just a hammer, and hammers are for hammering nails. DF is not suitable for UIs, web servers and so on.
2.3. Dataflow architecture is optimal for some problems
A dataflow framework can do magic things. It can parallelize procedures that were not originally designed for parallelization. Components are single-threaded, but when they're organized into a DF graph, they become multi-threaded - see the sketch below.
Example: did you know that make is a DF system? Try make -j (see the man page for what -j does). If you have a multi-core machine, compile your project with and without -j and compare the times.
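A toy illustration of the point above, assuming nothing about any particular DF framework: each "component" below is plain single-threaded code, but wiring two of them together with a channel yields a multi-threaded program.

    // Two single-threaded components (a producer of squares and a printer)
    // connected by a small closable channel; the "graph" runs on two threads.
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <optional>
    #include <queue>
    #include <thread>

    template <typename T>
    class Channel {
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<T> q_;
        bool closed_ = false;
    public:
        void send(T v) {
            { std::lock_guard<std::mutex> l(m_); q_.push(std::move(v)); }
            cv_.notify_one();
        }
        void close() {
            { std::lock_guard<std::mutex> l(m_); closed_ = true; }
            cv_.notify_all();
        }
        std::optional<T> recv() {  // empty optional means "stream ended"
            std::unique_lock<std::mutex> l(m_);
            cv_.wait(l, [&] { return !q_.empty() || closed_; });
            if (q_.empty()) return std::nullopt;
            T v = std::move(q_.front());
            q_.pop();
            return v;
        }
    };

    int main() {
        Channel<int> wire;
        std::thread producer([&] { for (int i = 1; i <= 5; ++i) wire.send(i * i); wire.close(); });
        std::thread printer([&] { while (auto v = wire.recv()) std::cout << *v << "\n"; });
        producer.join();
        printer.join();
    }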
2.4. Optimal split of the problem
If you're writing a program, you often split up the problem for smaller sub-problems. There are usual split points for well-known sub-problems, which you don't need to implement, just use the existing solutions, like SQL for DB, or OpenGL for graphics/animation, etc.
A DF architecture splits your problem in a very interesting way:
the dataflow framework, which provides the architecture (just use an existing one),
the components: the programmer creates components; the components are simple, well-separated units - it's easy to make components;
the configuration: a.k.a. dataflow programming: the configurator puts the dataflow graph (program) together using components provided by the programmer.
If your component set is well designed, the configurator can build systems the programmer has never even dreamed of. The configurator can implement new features without disturbing the programmer. Customers are happy, because they get a personalized solution. The software manufacturer is also happy, because he/she doesn't need to maintain several customer-specific branches of the software, just customer-specific configurations.
2.5. Speed
If the system is built from native components, the DF program is fast. The only time loss compared to a simple OOP program is the message dispatching between components, and even that is minimal.
3. Does FBP have a future?
Yes, sure.
The main reason is that it can solve massive multiprocessing issues without introducing brand-new strange software architectures or weird languages. Dataflow programming is easy, and I mean both component programming and dataflow configuration building. (Even dataflow framework writing is not rocket science.)
Also, it's very economical. If you have a good set of components, you need only put the lego bricks together. A DF program is easy to maintain. Building a DF configuration requires no experienced programmer, just a system integrator.
I would be happy if native systems spread, with the door open for creating custom components. There should also be a standard DF language, so that it could be used with platform-independent visual editors and several DF servers.
Interesting discussion! It occurred to me yesterday that part of the confusion may be due to the fact that many different notations use directed arcs, but use them to mean different things. In FBP, the lines represent bounded buffers, across which travel streams of data packets. Since the components are typically long-running processes, streams may comprise huge numbers of packets, and FBP applications can run for very long periods - perhaps even "perpetually" (see a 2007 paper on a project called Eon, mostly by folks at UMass Amherst). Since a send to a bounded buffer suspends when the buffer is (temporarily) full, and a receive suspends when it is (temporarily) empty, indefinite amounts of data can be processed using finite resources.
By comparison, the E in Grafcet comes from Etapes, meaning "steps", which is a rather different concept. In this kind of model (and there are a number of these out there), the data flowing between steps is either limited to what can be held in high-speed memory at one time, or has to be held on disk. FBP also supports loops in the network, which is hard to do in step-based systems - see for example http://www.jpaulmorrison.com/cgi-bin/wiki.pl?BrokerageApplication - notice that this application used both MQSeries and CORBA in a natural way. Furthermore, FBP is natively parallel, so it lends itself to programming of grid networks, multicore machines, and a number of the directions of modern computing. One last comment: in the literature I have found many related projects, but few of them have all the characteristics of FBP. A list that I have amassed over the years (a number of them closer than Grafcet) can be found in http://www.jpaulmorrison.com/cgi-bin/wiki.pl?FlowLikeProjects .
I do have to disagree with the comment about FBP being just a means of implementing FSMs: I think FSMs are neat, and I believe they have a definite role in building applications, but the core concept of FBP is of multiple component processes running asynchronously, communicating by means of streams of data chunks which run across what are now called bounded buffers. Yes, definitely FSMs are one way of building component processes, and in fact there is a whole chapter in my book on FBP devoted to this idea, and the related one of PDAs (1) - http://www.jpaulmorrison.com/fbp/compil.htm - but in my opinion an FSM implementing a non-trivial FBP network would be impossibly complex. As an example the diagram shown in
is about 1/3 of a single batch job running on a mainframe. Every one of those blocks is running asynchronously with all the others. By the way, I would be very interested in hearing more answers to the questions in the first post!
(1) Push-down automata: http://en.wikipedia.org/wiki/Pushdown_automaton
Whenever I hear the term flow-based programming I think of LabVIEW, conceptually - i.e. component processes whose scheduling is driven primarily by changes to their input data. This really IS lego programming, in the sense that the LabVIEW platform was used for the latest crop of Mindstorms products. However, I disagree that this makes it a less useful programming model.
For industrial systems, which typically involve data collection, control, and automation, it fits very well. What is any control system if not data in, transformed to data out? I.e., which component in your control scheme would you not prefer to represent as a black box in a bigger picture, if you could do so? To achieve that level of architectural clarity using other methodologies you might have to draw a data-domain class diagram, then a problem-domain run-time class relationship, then on top of that a use-case diagram, and flip back and forth between them. With flow-driven systems you have the luxury of being able to collapse a lot of this information together accurately enough that you can realistically design a system visually once the components are built and defined.
One question I never had to ask when looking at an application written in LabVIEW was "What piece of code set this value?", as it was inherent and easy to trace backwards from the data; mistakes like multiple unintended writers were also impossible to create by accident.
If only that was true of code written in a more typically procedural fashion!
1) I built a small FBP framework for an anomaly detection project, and it turned out to have been a great idea.
You can also have a look at some of the KNIME videos, that give a good idea of what a flow based framework feels like when the framework is put together by a great team. Admittedly, it is batch based and not created for continuous operation.
By far the best example of flow-based programming, however, is UNIX pipes, one of the oldest and most overlooked FBP frameworks. I don't think I have to elaborate on the power of *nix pipes...
2) FBP is a very powerful tool for a large set of problems. The intrinsic parallelism is a great advantage, and any FBP framework can be made completely network transparent by using adapter modules. Smart frameworks are also absurdly fault tolerant, and able to dynamically reload crashed modules when necessary. The conceptual simplicity also allows cleaner communication with everybody involved in a project, and much cleaner code.
3) Absolutely! Pipes are here to stay, and are one of the most powerful features of Unix. The advantages inherent in an FBP framework compared to a static program are many, and they trivialise change, to the point where some frameworks can be reconfigured while running, with no special measures.
FBP FTW! ;-)
In automotive development there is a language-agnostic messaging protocol which is part of the MOST specification (Media Oriented Systems Transport); it was designed for communication between components over a network or within the same device. Systems usually have both a real and a visualized message bus - so you effectively have a form of flow-based programming.
That was what made the light bulb go on for me several years ago and brought me here. It really is a fantastic way to work, and so much more fun than conventional programming. The message catalog forms the central specification and point of reference. It works well for both developers and management, i.e. management are able to browse the message catalog instead of looking at source.
With integrated logging also referencing the catalog to produce intelligible analysis, things can get really productive. I have real-world experience of developing commercial products this way. I am interested in taking things further, particularly with regard to tools and IDEs. Unfortunately I think many people within the automotive sector have missed the point about how great this is and have failed to build on it. They are now distracted by other fads and have failed to realize that there was far more to MOST development than the physical bus.
I've used Spring Web Flow extensively in Java web applications to model (typically) application processes, which tend to be complex wizard-like affairs with lots of conditional logic as to which pages to display. It's incredibly powerful. When a new product was added, I managed to recut the existing pieces into a completely new application process in an hour or two (adding a couple of new views/states).
I also looked into using OS Workflow to model business processes but that project got canned for various reasons.
In the Microsoft world you have Windows Workflow Foundation ("WWF"), which is becoming more popular, particularly in conjunction with Sharepoint.
FBP is just a means of implementing a finite state machine. It's nothing new.
I realize that it is not exactly the same thing, but this model has been used for years in PLC programming. IEC 61131-3 calls it Sequential Function Chart, but many people call it Grafcet after a popular implementation. It offers parallel processing and defines transitions between states.
It's being used in the Business Intelligence world these days to mash up and process data. Data-processing steps like ETL, querying, joining, and producing reports can be done by the end user. I'm a developer on an open system, ComposableAnalytics.com. In CA, the flow-based apps can be shared and executed via the browser.
This is what MQ Series, MSMQ and JMS are for.
This is cornerstone of Web Services and Enterprise Service Bus implementations.
Products like TIBCO and Sun's JCAPS are basically flow-based without using this particular buzzword.
Most of the work of the application is done with small modules that pass messages through a processing network.

Reverse engineering war stories [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Sometimes you don't have the source code and need to reverse engineer a program or a black box. Any fun war stories?
Here's one of mine:
Some years ago I needed to rewrite a device driver for which I didn't have source code. The device driver ran on an old CP/M microcomputer and drove a dedicated phototypesetting machine through a serial port. Almost no documentation for the phototypesetting machine was available to me.
I finally hacked together a serial port monitor on a DOS PC that mimicked the responses of the phototypesetting machine. I cabled the DOS PC to the CP/M machine and started logging the data coming out of the device driver as I fed data in through the CP/M machine. This enabled me to figure out the handshaking and encoding used by the device driver and re-create an equivalent one for a DOS machine.
Read the story of FCopy for the C-64 here:
Back in the 80s, the Commodore C-64 had an intelligent floppy drive, the 1541, i.e. an external unit that had its own CPU and everything.
The C-64 would send commands to the drive, which in turn would execute them on its own - reading files and such - then send the data to the C-64, all over a proprietary serial cable.
The manual for the 1541 mentioned, besides the commands for reading and writing files, that one could read and write its internal memory space. Even more exciting, one could download 6502 code into the drive's memory and have it executed there.
This got me hooked and I wanted to play with that - execute code on the drive. Of course, there was no documentation on what code could be executed there, or which functions it could use.
A friend of mine had written a disassembler in BASIC, so I read out the drive's entire ROM contents - 16KB of 6502 CPU code - and tried to understand what it did. The OS on the drive was quite amazing and advanced, IMO - it had a kind of task management, with commands being sent from the communication unit to the disk I/O task handler.
I learned enough to understand how to use the disk I/O commands to read/write sectors of the disc. Actually, having read the Apple ]['s DOS 3.3 book, which explained all the workings of its disk format and algorithms in much detail, was a big help in understanding it all.
(I later learned that I could also have found reverse-engineered info on the disk drives for the "business" Commodore 4032/4016 models, which worked much the same as the 1541, but that was not available to me as a rather disconnected hobby programmer at the time.)
Most importantly, I also learned how the serial comms worked. I realized that the serial comms - using 4 lines, two for data, two for handshake - were programmed very inefficiently, all in software (though done properly, using classic serial handshaking).
Thus I managed to write a much faster comms routine, in which I made fixed timing assumptions, using both the data and the handshake lines for data transmission.
Now I was able to read and write sectors, and also transmit data faster than ever before.
Of course, it would have been great if one could simply load some code into the drive to speed up the comms, and then use the normal commands to read a file, which in turn would use the faster comms. This was not possible, though, as the OS on the drive did not provide any hooks for that (mind you, all of the OS was in ROM, unmodifiable).
Hence I was wondering how I could turn my exciting findings into a useful application.
Having been a programmer for a while already, dealing with data loss all the time (music tapes and floppy discs were not very reliable back then), I thought: Backup!
So I wrote a backup program which could duplicate a floppy disc at never-before-seen speed: the first version copied an entire 170 KB disc in only 8 minutes (yes, minutes), and the second version did it in about 4.5 minutes, whereas the apps before mine took over 25 minutes. (Mind you, the Apple ][, which had its disc OS running on the Apple directly, with fast parallel data access, did all this in a minute or so.)
And so FCopy for the C-64 was born.
It soon became extremely popular - not as a backup program as I had intended, but as the primary choice for anyone wanting to copy games and other software for their friends.
It turned out that a simplification in my code - it would simply skip unreadable sectors, writing a sector with a bad CRC to the copy - circumvented most of the copy protection schemes used at the time, making it possible to copy most formerly uncopyable discs.
I had tried to sell my app, and actually sold it 70 times. When it was advertised in the magazines, claiming it would copy a disc in less than 5 minutes, customers would call and not believe it, "knowing better" that it couldn't be done, yet they'd give it a try.
Not much later, others started to reverse engineer my app, and optimize it, making the comms even faster, leading to copy apps that did it even in 1.5 minutes. Faster was hardly possible, because, due to the limited amount of memory available on the 1541 and the C-64, you had to swap discs several times in the single disc drive to copy all 170 KB of its contents.
In the end, FCopy and its optimized successors were probably the most popular software ever on the C-64 in the 80s. And even though it didn't pay off financially for me, it still made me proud, and I learned a lot about reverse-engineering, the futility of copy protection, and how stardom feels. (Actually, Jim Butterfield, an editor of a C-64 magazine in Canada, told its readers my story, and soon he had a cheque of about 1000 CA$ for me - collected by the magazine from many grateful users sending $5 cheques, which was a big bunch of money for me back then.)
I actually have another story:
A few years past my FCopy "success" story, I was approached by someone who asked me if I could crack a slot machine's software.
This was in Germany, where almost every pub had one or two of those: You'd throw some money in what amounts to about a US quarter, then it would spin three wheels and if you got lucky with some pattern, you'd then have the choice to "double or nothing" your win on the next play, or get the current win. The goal of the play was to try to double your win a few times until you'd get in the "series" mode where any succeeding win, no matter how minor, would get you a big payment (of about 10 times your spending per game).
The difficulty was knowing when to double and when not. To an "outsider" this was completely random, of course. But it turned out that those German-made machines used simple pseudo-random tables in their ROMs. Now, if you watched the machine play for a few rounds, you could figure out where this "random table pointer" was and predict its next move. That way, a player would know when to double and when to pass, eventually leading him to the "big win series".
Now, this was already a common thing when this person approached me. There was an underground scene which had access to the ROMs in those machines, found the tables, and created software for computers such as the C-64 to predict the machine's next moves.
Then came a new type of machine, though, which used a different algorithm: Instead of using pre-calc'd tables, it did something else and none of the resident crackers could figure that out. So I was approached, being known as a sort of genius since my FCopy fame.
So I got the ROMs. 16KB, as usual. No information whatsoever on what it did or how it worked. I was on my own. Even the code didn't look familiar (by then I knew only 6502 and 8080). After some digging and asking around, I found it was a 6809 (which I found to be the nicest 8-bit CPU in existence, and which had analogies to the 680x0 CPU design, much more linear than the x86 family's instruction mess).
By that time I already had a 68000 computer (I worked for the company "Gepard Computer", which built and sold such a machine, with its own developer OS and all) and was into programming Modula-2. So I wrote a disassembler for the 6809, one that helped me with reverse engineering by finding subroutines, jumps, etc. Slowly I got an idea of the flow control of the slot machine's program. Eventually I found some code that looked like a mathematical algorithm, and it dawned on me that this could be the random-generating code.
As I never had a formal education in computer science, up to then I had no idea how a typical randomgen using mul, add and mod worked. But I remembered seeing something about it in a Modula-2 book and then realized what it was.
Now I could quickly find the code that called this randomgen and learn which "events" led to a randomgen iteration, meaning I knew how to predict the next iterations and their values during a game.
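(That "mul, add and mod" generator is what is now called a linear congruential generator. A toy sketch - with classic textbook constants, not the slot machine's real ones - shows why recovering the constants and the current state makes every future value predictable:)

    // Linear congruential generator: state' = (a * state + c) mod m.
    // The constants a=1103515245, c=12345, m=2^31 are purely for illustration.
    #include <cstdint>
    #include <iostream>

    std::uint32_t seed = 12345;  // the recovered internal state

    std::uint32_t nextRand() {
        seed = (1103515245u * seed + 12345u) % 2147483648u;
        return seed;
    }

    int main() {
        // Anyone who knows a, c, m and the state can print the machine's
        // future "random" decisions in advance:
        for (int i = 0; i < 5; ++i)
            std::cout << nextRand() % 6 << "\n";
    }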
What was left was to figure out the current position of the randomgen. I had never been good with abstract things such as algebra, but I knew someone who studied math and was a programmer too. When I called him, he quickly knew how to solve the problem and went on at length about how simple it would be to determine the randomgen's seed value. I understood nothing. Well, I understood one thing: the code to accomplish this would take a lot of time to run, and a C-64 or any other 8-bit computer would need hours if not days for it.
Thus, I decided to offer him 1000 DM (which was a lot of money for me back then) if he could write me an assembler routine in 68000. It didn't take him long, and I had code which I could test on my 68000 computer. It usually took between 5 and 8 minutes, which was acceptable. So I was almost there.
It still required a portable 68000 computer to be carried to the pub where the slot machine stood. My Gepard computer was clearly not of the portable type. Luckily, someone else I knew in Germany produced entire 68000 computers on a small circuit board. For I/O it had only serial comms (RS-232) and a parallel port (Centronics was the standard in those days). I could hook up some 9V block batteries to make it work. Then I bought a Sharp pocket computer with a rubber keyboard and a single-line 32-character display, running on batteries, which served as my terminal. It had an RS-232 connector, which I connected to the 68000 board. The Sharp also had some kind of non-volatile memory, which allowed me to store the 68000 random-cracking software on the Sharp and transfer it on demand to the 68000 computer, which then calculated the seed value. Finally, I had a small Centronics printer which printed on narrow thermo paper (the size of what cash registers use to print receipts). Once the 68000 had the results, it would send a row of results for the upcoming games on the slot machine to the Sharp, which printed them on paper.
So, to empty one of these slot machines, you'd work in pairs: you'd start playing and write down the results; once you had the minimum number of games required for the seed calculation, one of you would go to the car parked outside, turn on the Sharp, and enter the results; the 68000 computer would rattle for 8 minutes, and out came a printed list of the upcoming game runs. Then all you needed was this tiny piece of paper: take it back to your buddy, who had kept the machine occupied, align the past results with the printout, and no more than 2 minutes later you'd be "surprised" to win the all-time 100s series. You'd then play these 100 games, practically emptying the machine (and if the machine was empty before the 100 games were played, you had the right to wait for it to be refilled, maybe even come back the next day, with the machine stopped until you returned).
This wasn't Las Vegas, so you'd only get about 400 DM out of a machine that way, but it was quick and sure money, and it was exciting. Some pub owners suspected us of cheating, but had nothing against us due to the laws back then; even when some called the police, the police sided with us.
Of course, the slot-machine company soon got wind of this and tried to counteract it, turning off those particular machines until new ROMs were installed. But the first few times they only changed the randomgen's numbers. We only had to get hold of the new ROMs, and it took me a few minutes to find the new numbers and put them into my software.
So this went on for a while, during which my friends and I toured the pubs of several towns in Germany looking for those machines only we could crack.
Eventually, though, the machine maker learned how to "fix" it: until then, the randomgen was only advanced at certain predictable times, e.g. something like 4 times during play, and once more each time the player pressed the "double or nothing" button.
But then they finally changed it so that the randomgen was polled continuously, meaning we were no longer able to predict the next value exactly in time for the pressing of the button.
That was the end of it. Still, making the effort of writing a disassembler just for this single crack, finding the key routines in 16KB of 8-bit CPU code, figuring out unknown algorithms, investing quite a lot of money to pay someone else to develop code I didn't understand, assembling the parts for a portable high-speed computer involving the "blind" 68000 CPU with the Sharp as a terminal and the printer for convenient output, and then actually emptying the machines myself, was one of the most exciting things I ever did with my programming skills.
Way back in the early 90s, I forgot my CompuServe password. I had the encrypted version in CIS.INI, so I wrote a small program to perform a plaintext attack and analysis in an attempt to reverse-engineer the encryption algorithm. 24 hours later, I had figured out how it worked and what my password was.
Soon after that, I cleaned the program up and published it as freeware so that CompuServe customers could recover their lost passwords. The company's support staff would frequently refer these people to my program.
It eventually found its way onto a few bulletin boards (remember them?) and Internet forums, and was included in a German book about Compuserve. It's still floating around out there somewhere. In fact, Google takes me straight to it.
Once, when playing Daggerfall (The Elder Scrolls II), I could not afford the Daedric Dai-Katana, so I hex-edited the savegame.
On a more serious note, I managed to remove the dongle check on my dad's AutoCAD installation using SoftICE many years ago. This was before the Internet was big. He works as an engineer, so he had a legitimate copy; he had just forgotten the dongle at his job and needed to get some things done, and I thought it would be a fun challenge. I was very proud afterwards.
Okay, this wasn't reverse engineering (quite) but a simple hardware hack born of pure frustration. I was an IT manager for a region of Southwestern Bell's cell phone service in the early 90s. My IT department was dramatically underfunded and so we spent money on smart people rather than equipment.
We had a WAN between major cities, used exclusively for customer service, with critical IP links. Our corporate bosses were insistent that we install a network monitoring system to notify us when the lines went down (no money for redundancy, but spend bucks for handling failures. Sigh.)
The STRONGLY recommended solution ran on a SPARC workstation and started at $30K, plus the cost of a SPARCstation (around $20K then) - together a substantial chunk of my budget. I couldn't see it; this was a waste of $$. So I decided a little hacking was in order.
I took an old PC scheduled for destruction, put a copy of ProComm on it (remember ProComm?), and had it ping each of the required nodes along the route (this was one of the later versions of ProComm that scripted FTP as well as serial lines, KERMIT, etc.). A little logic in the scripting fired off a pager message when a node couldn't be reached. I had already used it to cobble together a pager system for our techs, so I reused the pager code. The script ran continuously, sending a ping across each critical link once a minute, and branched into the pager code when a ping wasn't returned.
We duplicated this system at each critical location for a cost of less than $500, and had very fast notification when a link went down. Next issue: one of our first troubleshooting methods was to power-cycle our routers and/or terminal servers. I got some dial-up X10 controllers and a few X10 on/off appliance power switches. You had to know the correct phone number to use and the correct tones to push, but we printed up a cheat card for each technician, and they kept it with their pager. Instant fast response! One of my techs then programmed our phones with speed-dials to reset specific equipment at each site. One tech solves the problem!
Now the "told-you-so" unveiling.
I'm sitting at lunch with our corporate network manager in Dallas, who is insisting on a purchase of the Sun-based network management product. I get a page that one of our links is down, and then a second page. Since the pager messages are coming from two different servers, I know exactly which router is involved. (It was a set-up - I knew anyway, as the tech at the meeting with me was cued to "down a router" during the meal so we could show off.) I show the pager messages to the manager and ask him what he would do to resolve this problem. He eyes me suspiciously, since he hasn't yet been successfully paged by his Solaris NMS system that is supposed to track critical links. "Well, I guess you'd better call a tech and get them to reset the router and see if that fixes it." I turned to the tech who was lunching with us and asked him to handle it. He drew out his cell phone (above the table this time) and pressed the speed-dial he had programmed to reset the router in question. The phone dialed the X10 switch, told it to power off the router, paused for five seconds, told it to power up the router, and disconnected. Our ProComm script sent us pages telling us the link was back up within three minutes of this routine. :-)
The corporate network manager was very impressed. He asked me what the cost of my new system was. When I told him less than $1K, he was apoplectic. He had just ordered a BIG set of the Sun Solaris network management solution, just for the tasks I'd illustrated. I think he had spent something like $150K. I told him how the magic was done and offered him the ProComm script for the price of lunch. TANSTAAFL. He told me he'd buy me lunch to keep my mouth shut.
Cleaning out my old drawers of disks and such, I found a copy of the code - "Pingasaurus Rex" was the name I had given it. That was hacking in the good old days.
The most painful one for me was a product where we wanted to include an image in an Excel spreadsheet (a few years back, before the open standards). I had to gain an "understanding" - if such a thing exists - of the internal format of the docs. I ended up doing some hex comparison between files with and without the image to figure out how to put it there, plus working through some little-endian math...
I once worked on a tool that would gather inventory information from a PC as it logged into the network. The idea was to keep track of all the PCs in your company.
We had a new requirement to support the Banyan VINES network system, now long forgotten but pretty cool at the time it came out. I couldn't figure out how to get the Ethernet MAC address from Banyan's adapter as there was no documented API to do this.
Digging around online, I found a program that some other Banyan nerd had posted that performed this exact action. (I think it would store the MAC address in an environment variable so you could use it in a script). I tried writing to the author to find out how his program worked, but he either didn't want to tell me or wanted some ridiculous amount of money for the information (I don't recall).
So I simply fired up a disassembler and took his utility apart. It turned out he was making one simple call to the server, an undocumented function code in the Banyan API. I worked out the details of the call pretty easily: it was basically asking the server for this workstation's address via RPC, and the MAC was part of the Banyan network address.
I then simply emailed the engineers at Banyan and told them what I needed to do. "Hey, it appears that RPC function number 528 (or whatever) returns what I need. Is that safe to call?"
The Banyan engineers were very cool, they verified that the function I had found was correct and was pretty unlikely to go away. I wrote my own fresh code to call it and I was off and running.
Years later I used basically the same technique to reverse engineer an undocumented compression scheme on an otherwise documented file format. I found a little-known support tool provided by the (now defunct) company which would decompress these files, and reverse engineered it. It turned out to be a very straightforward Lempel-Ziv variant applied within the block structure of their file format. The results of that work are recorded for posterity in the Wireshark source code, just search for my name.
Almost 10 years ago, I picked up the UFO/XCOM Collector's Edition in the bargain bin at a local bookstore, mostly out of nostalgia. When I got back home, I got kinda excited that it had been ported to Windows (the DOS versions didn't run under win2k)... and then disappointed that it had garbled graphics.
I was about to shrug my shoulders (bargain bin and all), but then my friend said "Haven't you... bugfixed... software before?", which led to a night of drinking lots of cola and reverse-engineering while hanging out with my friend. In the end, I had written a bugfix loader that fixed the pitch vs. width issue, and could finally play the first two XCOM games without booting old hardware (DOSBox wasn't around yet, and my machine wasn't really powerful enough for full-blown virtualization).
The loader gained some popularity, and was even distributed with the Steam re-release of the games for a while - I think they've switched to DOSBox nowadays, though.
I wrote a driver for the Atari ST that supported Wacom tablets. Some of the Wacom information could be found on their web sites, but I still had to figure out a lot on my own.
Then, once I had written a library to access the Wacom tablets (and a test application to show the results), it dawned on me that there was no API in the OS (the GEM windowing system) to actually place the mouse cursor somewhere. I ended up having to hook some interrupts in something called the VDI (like GDI in Windows), and be very, very careful not to crash the computer in there. I had some help (in the form of suggestions) from the developers of an accelerated version of the VDI (NVDI), and everything was written in PurePascal. I still occasionally have people asking me how to move the mouse cursor in GEM, etc.
I've had to reverse engineer a video-processing app, where I only had part of the source code. It took me weeks and weeks to even work out the control-flow, as it kept using CORBA to call itself, or be called from CORBA in some part of the app that I couldn't access.
Sheer idiocy.
I recently wrote an app that downloads the whole contents of a Domino webmail server using cURL, because the subcontractor running the server asks for a few hundred bucks for every archive request.
They changed their webmail version about one week after I released the app for the department, but I managed to make it work again using a GREAT deal of regex and XML.
When I was in high school, they introduced special hours each week (3 hours, if I recall correctly) in which we had to pick a classroom with a teacher there to help with any questions on their subject. Of course, everybody always wanted to spend their time in the computer room to play around on the computers there.
To choose your room there was an application that monitored how many students had signed up for each room, so you had to reserve your slot in time or there was not much choice of where to go.
At that time I always liked to play around on the computers there, and I had already obtained administrator access - only, for this application, that alone didn't help me much. So I used my administrator access to make a copy of the application and take it home to examine. I don't remember all the details, but I discovered that the application used an Access database file located on a hidden network share. After taking a copy of this database file, I found there was a password on the database. Using some Linux tools for Access databases, I could easily work around that, and after that it was easy to import the database into my own MySQL server.
Then, through a simple web interface, I could find the details of every student in the school, change their slots, and promote myself to sit in my room of choice every time.
The next step was to write my own application that would let me just select a student from the list and change anything without having to look up their password; this was implemented in only a few hours.
While not as impressive a story as some others in this thread, I still remember it was a lot of fun to do as a high-school kid back then.