Sometimes you don't have the source code and need to reverse engineer a program or a black box. Any fun war stories?
Here's one of mine:
Some years ago I needed to rewrite a device driver for which I didn't have source code. The device driver ran on an old CP/M microcomputer and drove a dedicated phototypesetting machine through a serial port. Almost no documentation for the phototypesetting machine was available to me.
I finally hacked together a serial port monitor on a DOS PC that mimicked the responses of the phototypesetting machine. I cabled the DOS PC to the CP/M machine and started logging the data coming out of the device driver as I fed data in through the CP/M machine. This enabled me to figure out the handshaking and encoding used by the device driver and re-create an equivalent one for a DOS machine.
Read the story of FCopy for the C-64 here:
Back in the 80s, the Commodore C-64 had an intelligent floppy drive, the 1541, i.e. an external unit that had its own CPU and everything.
The C-64 would send commands to the drive, which would then execute them on its own (reading files and such) and send the data back to the C-64, all over a proprietary serial cable.
The manual for the 1541 mentioned, besides the commands for reading and writing files, that one could read and write its internal memory space. Even more exciting was that one could download 6502 code into the drive's memory and have it executed there.
This got me hooked and I wanted to play with that: execute code on the drive. Of course, there was no documentation on what code could be executed there, or which functions it could use.
A friend of mine had written a disassembler in BASIC, so I read out the drive's entire ROM contents, 16KB of 6502 CPU code, and tried to understand what it did. The OS on the drive was quite amazing and advanced, IMO: it had a kind of task management, with commands being sent from the communication unit to the disk I/O task handler.
I learned enough to understand how to use the disk I/O commands to read and write sectors of the disc. Having read the Apple ][ DOS 3.3 book, which explained the workings of its disk format and algorithms in much detail, was a big help in understanding it all.
(I later learned that I could also have found reverse-engineered info on the 4032/4016 disk drives for the "business" Commodore models, which worked much the same as the 1541, but that was not available to me as a rather disconnected hobby programmer at the time.)
Most importantly, I also learned how the serial comms worked. I realized that the serial protocol, using 4 lines (two for data, two for handshake), was programmed very inefficiently, all in software (though done properly, using classic serial handshaking).
Thus I managed to write a much faster comms routine, where I made fixed timing assumptions and used both the data and the handshake lines for data transmission.
Now I was able to read and write sectors, and also transmit data faster than ever before.
Of course, it would have been great if one could simply load some code into the drive that sped up the comms, and then use the normal commands to read a file, which in turn would use the faster comms. This was not possible, though, as the OS on the drive did not provide any hooks for it (mind that all of the OS was in ROM, unmodifiable).
Hence I was wondering how I could turn my exciting findings into a useful application.
Having been a programmer for a while already, dealing with data loss all the time (music tapes and floppy discs were not very reliable back then), I thought: backup!
So I wrote a backup program that could duplicate a floppy disc at never-before-seen speed: the first version copied an entire 170 KB disc in only 8 minutes (yes, minutes), and the second version did it in about 4.5 minutes, whereas the apps before mine took over 25 minutes. (Mind you, the Apple ][, which had its disc OS running on the Apple directly, with fast parallel data access, did all this in a minute or so.)
And so FCopy for the C-64 was born.
It soon became extremely popular; not as the backup program I had intended, but as the primary choice for anyone wanting to copy games and other software for their friends.
It turned out that a simplification in my code, which would simply skip unreadable sectors and write a sector with a bad CRC to the copy, circumvented most of the then-used copy protection schemes, making it possible to copy most formerly uncopyable discs.
I tried to sell my app and actually sold it 70 times. When it got advertised in the magazines, claiming it would copy a disc in less than 5 minutes, customers would call and not believe it, "knowing better" that it couldn't be done, yet giving it a try.
Not much later, others started to reverse engineer my app, and optimize it, making the comms even faster, leading to copy apps that did it even in 1.5 minutes. Faster was hardly possible, because, due to the limited amount of memory available on the 1541 and the C-64, you had to swap discs several times in the single disc drive to copy all 170 KB of its contents.
In the end, FCopy and its optimized successors were probably the most popular software ever on the C-64 in the 80s. And even though it didn't pay off financially for me, it still made me proud, and I learned a lot about reverse engineering, the futility of copy protection, and how stardom feels. (Actually, Jim Butterfield, an editor for a C-64 magazine in Canada, told his readers my story, and soon he had a cheque for about CA$1000 for me, collected by the magazine from many grateful users sending $5 cheques, which was a big bunch of money for me back then.)
I actually have another story:
A few years past my FCopy "success" story, I was approached by someone who asked me if I could crack a slot machine's software.
This was in Germany, where almost every pub had one or two of these: you'd throw in money amounting to about a US quarter, it would spin three wheels, and if you got lucky with some pattern, you'd have the choice to "double or nothing" your win on the next play, or take the current win. The goal was to double your win a few times until you got into the "series" mode, where any succeeding win, no matter how minor, would get you a big payment (of about 10 times your spend per game).
The difficulty was knowing when to double and when not. To an outsider this looked completely random, of course. But it turned out that those German-made machines were using simple pseudo-random tables in their ROMs. Now, if you watched the machine play for a few rounds, you could figure out where this "random table pointer" was and predict its next move. That way, a player would know when to double and when to pass, eventually leading him to the "big win series".
Now, this was already a common thing when this person approached me. There was an underground scene that had access to the ROMs in those machines, found the tables, and created software for computers such as the C-64 to predict the machine's next moves.
Then came a new type of machine, though, which used a different algorithm: instead of using precalculated tables, it did something else, and none of the resident crackers could figure it out. So I was approached, being known as a sort of genius since my FCopy fame.
So I got the ROMs. 16KB, as usual. No information whatsoever on what it did or how it worked. I was on my own. Even the code didn't look familiar (I only knew 6502 and 8080 by then). After some digging and asking around, I found it was a 6809 (which I found to be the nicest 8-bit CPU in existence, with analogies to the 680x0 CPU design, which was much more linear than the x86 family's instruction mess).
By that time I already had a 68000 computer (I worked for the company "Gepard Computer", which built and sold such a machine, with its own developer OS and all) and was into programming Modula-2. So I wrote a disassembler for the 6809, one that helped me with reverse engineering by finding subroutines, jumps, etc. Slowly I got an idea of the flow control of the slot machine's program. Eventually I found some code that looked like a mathematical algorithm, and it dawned on me that this could be the random-generating code.
As I had never had a formal education in computer science, up to then I had no idea how a typical randomgen using mul, add, and mod worked. But I remembered seeing something about it in a Modula-2 book and then realized what it was.
Now I could quickly find the code that would call this randomgen and learn which "events" lead to a randomgen iteration, meaning I knew how to predict the next iterations and their values during a game.
What was left was to figure out the current position of the randomgen. I had never been good with abstract things such as algebra, but I knew someone who had studied math and was a programmer too. When I called him, he quickly knew how to solve the problem and went on at length about how simple it would be to determine the randomgen's seed value. I understood nothing. Well, I understood one thing: the code to accomplish this would take a lot of time, and a C-64 or any other 8-bit computer would take hours if not days to run it.
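For readers who, like the author back then, never met one: the "mul, add, mod" generator described here is a linear congruential generator. A minimal sketch in Python, with invented constants (the real ones lived in the machine's ROM), including the brute-force style of seed recovery that the story's 68000 routine handled far more cleverly with algebra:

```python
# Illustrative LCG: multiply, add, take the modulus.
# The constants A, C, M are made up for this sketch, not from the ROM.
M = 2 ** 16
A = 25173        # odd, so the state-to-state map is a bijection mod 2^16
C = 13849

def lcg_next(state):
    """One randomgen iteration: mul, add, mod."""
    return (A * state + C) % M

def recover_seed(observed):
    """Brute-force the hidden state from a few observed outputs.
    Trivial for a 16-bit state on modern hardware; on an 8-bit home
    computer this was the hours-long part the 68000 code shortcut."""
    for seed in range(M):
        s = seed
        ok = True
        for value in observed:
            s = lcg_next(s)
            if s != value:
                ok = False
                break
        if ok:
            return seed
    return None
```

Once the seed is known, every future "double or nothing" outcome is just repeated calls to `lcg_next`, which is exactly what made the printed prediction lists possible.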
Thus, I decided to offer him 1000 DM (which was a lot of money for me back then) if he could write me an assembler routine in 68000. Didn't take him long and I had the code which I could test on my 68000 computer. It took usually between 5 and 8 minutes, which was acceptable. So I was almost there.
It still required a portable 68000 computer to be carried to the pub where the slot machine stood. My Gepard computer was clearly not the portable type. Luckily, someone else I knew in Germany produced entire 68000 computers on a small circuit board. For I/O it only had serial comms (RS-232) and a parallel port (Centronics was the standard in those days). I could hook up some 9V block batteries to make it work. Then I bought a Sharp pocket computer, which had a rubber keyboard and a single-line 32-character display, ran on batteries, and served as my terminal. It had an RS-232 connector, which I connected to the 68000 board. The Sharp also had some kind of non-volatile memory, which allowed me to store the 68000 random-cracking software on the Sharp and transfer it on demand to the 68000 computer, which then calculated the seed value. Finally, I had a small Centronics printer which printed on narrow thermo paper (the size of what cash registers use to print receipts). Once the 68000 had the results, it would send a list of results for the upcoming games on the slot machine to the Sharp, which printed them on paper.
So, to empty one of these slot machines, you'd work in pairs: you'd start playing and write down the results; once you had the minimum number of games required for the seed calculation, one of you would go to the car parked outside, turn on the Sharp, and enter the results; the 68000 computer would rattle for 8 minutes, and out came a printed list of upcoming game runs. Then all you needed was this tiny piece of paper: take it back to your buddy, who had kept the machine occupied, align the past results with the printout, and no more than 2 minutes later you were "surprised" to win the all-time 100s series. You'd then play these 100 games, practically emptying the machine (and if the machine was emptied before the 100 games were played, you had the right to wait for it to be refilled, maybe even come back the next day, with the machine stopped until you returned).
This wasn't Las Vegas, so you'd only get about 400 DM out of a machine that way, but it was quick and sure money, and it was exciting. Some pub owners suspected us of cheating but had nothing on us under the laws back then, and even when some called the police, the police sided with us.
Of course, the slot machine company soon got wind of this and tried to counteract it, turning off those particular machines until new ROMs were installed. But the first few times, they only changed the randomgen's numbers. We only had to get hold of the new ROMs, and it took me a few minutes to find the new numbers and put them into my software.
So this went on for a while, during which my friends and I went through the pubs of several towns in Germany looking for those machines only we could crack.
Eventually, though, the machine maker learned how to "fix" it: until then, the randomgen was only advanced at certain predictable times, e.g. something like 4 times during play, and once more per press of the "double or nothing" button.
But then they finally changed it so that the randomgen was polled continually, meaning we were no longer able to predict its exact value at the moment the button was pressed.
That was the end of it. Still, writing a disassembler just for this single crack, finding the key routines in 16KB of 8-bit CPU code, figuring out unknown algorithms, investing quite a lot of money to pay someone else to develop code I didn't understand, assembling the parts for a portable high-speed computer involving the "blind" 68000 CPU with the Sharp as a terminal and the printer for convenient output, and then actually emptying the machines myself, was one of the most exciting things I ever did with my programming skills.
Way back in the early 90s, I forgot my Compuserve password. I had the encrypted version in CIS.INI, so I wrote a small program to do a plaintext attack and analysis in an attempt to reverse-engineer the encryption algorithm. 24 hours later, I figured out how it worked and what my password was.
Soon after that, I did a clean-up and published the program as freeware so that Compuserve customers could recover their lost passwords. The company's support staff would frequently refer these people to my program.
It eventually found its way onto a few bulletin boards (remember them?) and Internet forums, and was included in a German book about Compuserve. It's still floating around out there somewhere. In fact, Google takes me straight to it.
Once, when playing Daggerfall II, I could not afford the Daedric Dai-Katana so I hex-edited the savegame.
Being serious though, I managed to remove the dongle check on my dad's AutoCAD installation using SoftICE many years ago. This was before the Internet was big. He works as an engineer, so he had a legitimate copy; he had just forgotten the dongle at his job and needed to get some things done, and I thought it would be a fun challenge. I was very proud afterwards.
Okay, this wasn't reverse engineering (quite) but a simple hardware hack born of pure frustration. I was an IT manager for a region of Southwestern Bell's cell phone service in the early 90s. My IT department was dramatically underfunded and so we spent money on smart people rather than equipment.
We had a WAN between major cities, used exclusively for customer service, with critical IP links. Our corporate bosses were insistent that we install a network monitoring system to notify us when the lines went down (no money for redundancy, but plenty to spend on handling failures; sigh).
The STRONGLY recommended solution ran on a SPARC workstation and started at $30K, plus the cost of the SPARCstation (around $20K then), which together was a substantial chunk of my budget. I couldn't see it; this was a waste of money. So I decided a little hacking was in order.
I took an old PC scheduled for destruction, put a copy of ProComm on it (remember ProComm?), and had it ping each of the required nodes along the route (this was one of the later versions of ProComm that scripted FTP as well as serial lines, KERMIT, etc.). A little logic in the script fired off a pager message when a node couldn't be reached. I had already used ProComm to cobble together a pager system for our techs, so I reused the pager code. The script ran continuously, sending a ping once each minute across each of the critical links, and branched into the pager code when a ping wasn't returned.
We duplicated this system at each critical location for a cost of less than $500 and had very fast notification when a link went down. Next issue: one of our first troubleshooting methods was to power-cycle our routers and/or terminal servers. I got some dial-up X10 controllers and a few X10 on/off appliance power switches. You had to know the correct phone number to dial and the correct tones to press, but we printed up a cheat card for each technician, and they kept it with their pager. Instant fast response! One of my techs then programmed the speed-dials on the phones we all had to reset specific equipment at each site. One tech solves the problem!
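ProComm and its script language are long gone, but the watchdog logic described above is simple enough to sketch in modern terms. A hedged Python equivalent, where the hostnames and the pager hook are placeholders (the original paged via a modem dial-out, not a print statement):

```python
import subprocess
import time

CRITICAL_NODES = ["router-dallas", "router-houston"]  # placeholder names

def ping_once(host):
    """One ping with a 2-second timeout; True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def check_links(hosts, is_up):
    """Return the hosts that failed their check; kept pure for testing."""
    return [h for h in hosts if not is_up(h)]

def send_page(message):
    """Stand-in for the reused pager code."""
    print("PAGE:", message)

def watch():
    """One ping per link per minute; page on any failure, as in the story."""
    while True:
        for host in check_links(CRITICAL_NODES, ping_once):
            send_page(f"link to {host} is down")
        time.sleep(60)
```

Separating the pure `check_links` from the I/O mirrors the original design: the monitoring decision is trivial, and all the value is in wiring it to a cheap alert channel.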
Now the "told-you-so" unveiling.
I'm sitting at lunch with our corporate network manager in Dallas, who is insisting on a purchase of the Sun-based network management product. I get a page that one of our links is down, and then a second page. Since the pager messages are coming from two different servers, I know exactly which router is involved. (It was a set-up; I knew anyway, as the tech at the meeting with me was queued to "down" a router during the meal so we could show off.) I show the pager messages to the manager and ask him what he would do to resolve this problem. He eyes me suspiciously, since he hasn't yet successfully been paged by his Solaris NMS system that is supposed to track critical links. "Well, I guess you'd better call a tech and get them to reset the router and see if that fixes it." I turned to the tech who was lunching with us and asked him to handle it. He drew out his cell phone (above the table this time) and pressed the speed-dial he had programmed to reset the router in question. The phone dialed the X10 switch, told it to power off the router, paused for five seconds, told it to power up the router, and disconnected. Our ProComm script sent us pages telling us the link was back up within three minutes of this routine. :-)
The corporate network manager was very impressed. He asked me what the cost of my new system was. When I told him less than $1K, he was apoplectic. He had just ordered a BIG set of the Sun Solaris network management solution, just for the tasks I'd illustrated; I think he had spent something like $150K. I told him how the magic was done and offered him the ProComm script for the price of lunch. TANSTAAFL. He told me he'd buy me lunch to keep my mouth shut.
Cleaning out my old drawers of disks and such, I found a copy of the code - "Pingasaurus Rex" was the name I had given it. That was hacking in the good old days.
The most painful for me was a product where we wanted to include an image in an Excel spreadsheet (a few years back, before the open standards). So I had to get an "understanding", if such a thing exists, of the internal format of the docs as well. I ended up doing hex comparisons between files with and without the image to figure out how to put it there, plus working through some little-endian math...
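That with/without diffing approach generalizes well: save the document twice, once with the feature and once without, then compare the raw bytes. A hedged sketch of the two tools involved; the sample bytes and the 4-byte little-endian field are purely illustrative, not the actual Excel record layout:

```python
import struct

def differing_offsets(a, b):
    """Offsets where two byte strings differ, up to the shorter length."""
    return [i for i in range(min(len(a), len(b))) if a[i] != b[i]]

def read_le_u32(data, offset):
    """The 'little-endian math': decode a 4-byte unsigned int, LSB first."""
    return struct.unpack_from("<I", data, offset)[0]
```

Running `differing_offsets` over the with-image and without-image files narrows the search to the records that changed; `read_le_u32` then decodes candidate length or offset fields inside them.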
I once worked on a tool that would gather inventory information from a PC as it logged into the network. The idea was to keep track of all the PCs in your company.
We had a new requirement to support the Banyan VINES network system, now long forgotten but pretty cool when it came out. I couldn't figure out how to get the Ethernet MAC address from Banyan's adapter, as there was no documented API to do this.
Digging around online, I found a program some other Banyan nerd had posted that performed this exact action (I think it stored the MAC address in an environment variable so you could use it in a script). I tried writing to the author to find out how his program worked, but he either didn't want to tell me or wanted some ridiculous amount of money for the information (I don't recall which).
So I simply fired up a disassembler and took his utility apart. It turned out he was making one simple call to the server, an undocumented function code in the Banyan API. I worked out the details of the call pretty easily: it was basically asking the server for this workstation's address via RPC, and the MAC was part of the Banyan network address.
I then simply emailed the engineers at Banyan and told them what I needed to do. "Hey, it appears that RPC function number 528 (or whatever) returns what I need. Is that safe to call?"
The Banyan engineers were very cool, they verified that the function I had found was correct and was pretty unlikely to go away. I wrote my own fresh code to call it and I was off and running.
Years later I used basically the same technique to reverse engineer an undocumented compression scheme on an otherwise documented file format. I found a little-known support tool provided by the (now defunct) company which would decompress these files, and reverse engineered it. It turned out to be a very straightforward Lempel-Ziv variant applied within the block structure of their file format. The results of that work are recorded for posterity in the Wireshark source code, just search for my name.
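The actual variant is recorded in the Wireshark source, as noted; as a generic illustration only, an LZ77-style decompressor works on literals plus (distance, length) back-references into the already-produced output. The token format below is invented for this sketch, not the file format's real bit packing:

```python
def lz_decompress(tokens):
    """Tokens are either a bytes literal (copied verbatim) or a
    (distance, length) pair that copies `length` bytes starting
    `distance` bytes back in the output. Copying byte by byte makes
    overlapping references work, giving run-length-style repeats."""
    out = bytearray()
    for token in tokens:
        if isinstance(token, bytes):
            out += token
        else:
            distance, length = token
            for _ in range(length):
                out.append(out[-distance])
    return bytes(out)
```

Recognizing this copy-from-history loop in a disassembly is usually what gives a Lempel-Ziv variant away: a pointer walking backwards into the output buffer inside a counted loop.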
Almost 10 years ago, I picked up the UFO/XCOM Collector's Edition from the bargain bin at a local bookstore, mostly out of nostalgia. When I got back home, I got kind of excited that it had been ported to Windows (the DOS versions didn't run under Win2K)... and then disappointed that it had garbled graphics.
I was about to shrug my shoulders (bargain bin and all), but then my friend said "Haven't you... bugfixed... software before?", which led to a night of drinking lots of cola and reverse engineering while hanging out with my friend. In the end, I had written a bugfix loader that fixed the pitch vs. width issue, and could finally play the first two XCOM games without booting old hardware (DOSBox wasn't around yet, and my machine wasn't really powerful enough for full-blown virtualization).
The loader gained some popularity and was even distributed with the Steam re-release of the games for a while; I think they've switched to DOSBox nowadays, though.
I wrote a driver for the Atari ST that supported Wacom tablets. Some of the Wacom information could be found on their web sites, but I still had to figure out a lot on my own.
Then, once I had written a library to access the Wacom tablets (and a test application to show the results), it dawned on me that there was no API in the OS (the GEM windowing system) to actually place the mouse cursor somewhere. I ended up having to hook some interrupts in something called the VDI (like GDI in Windows) and be very, very careful not to crash the computer in there. I had some help (in the form of suggestions) from the developers of an accelerated version of the VDI (NVDI), and everything was written in PurePascal. I still occasionally have people asking me how to move the mouse cursor in GEM, etc.
I've had to reverse engineer a video-processing app where I only had part of the source code. It took me weeks and weeks just to work out the control flow, as it kept using CORBA to call itself, or was called via CORBA from some part of the app I couldn't access.
Sheer idiocy.
I recently wrote an app that downloads the whole content of a Domino webmail server using curl. This is because the subcontractor running the server asks for a few hundred bucks for every archive request.
They changed their webmail version about a week after I released the app for the department, but I managed to make it work again using a great deal of regex and XML parsing.
When I was in high school, they introduced special hours each week (there were 3 hours, if I recall correctly) in which we had to pick a classroom with a teacher present to help with any questions on their subject. Now, of course, everybody always wanted to spend their time in the computer room to play around on the computers there.
To choose your room, there was an application that tracked how many students had signed up for each room, so you had to reserve your slot in time, or there was not much choice of where to go.
At that time I liked to play around on the computers there, and I had already obtained administrator access; only it didn't help me much with this application. So I used my administrator access to make a copy of the application and take it home to examine. I don't remember all the details now, but I discovered the application used an Access database file located on a hidden network share. After taking a copy of this database file, I found there was a password on the database. Using some Linux tools for Access databases, I could easily work around that, and after that it was easy to import the database into my own MySQL server.
Then, through a simple web interface, I could find the details of every student in the school, change their slots, and promote myself to a seat in my room of choice every time.
The next step was to write my own application that would let me just select a student from the list and change anything without having to look up their password; it was implemented in only a few hours.
While not as impressive a story as some others in this thread, I still remember it being a lot of fun for a high-school kid back then.
I'm a little worried about my Raspberry Pi's SD card's life. On the Raspberry, there's a MySQL (MariaDB) server running. A program of mine reads from the database every second, then looks something up on the internet, and only rarely, when something happens, writes to the database.
I used to call commit() only once every 5 minutes, but apparently if I don't commit, the program doesn't see changes from other programs, even though those are in tables it doesn't write to.
1) Concerns about Raspberry Pi SD card life are all over the internet, so my question is: what is the best way to call commit()?
2) If the program just reads from the database and doesn't change anything, will commit even access the disk? Is there a way to see new changes without committing?
3) And if I do have to commit every second in order to see the changes in time, how bad is that?
PS: I'm using Python 3 with mysql-connector and an 8 GB SD card with the OS the Raspberry Pi Imager recommended to me.
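On point 2: in InnoDB, committing a transaction that only ran SELECTs just releases its read snapshot; there is nothing to flush, so it should not by itself wear the card. A hedged sketch of the polling loop using mysql-connector, with placeholder credentials and a made-up table name:

```python
import time

def poll_forever():
    """Read every second; commit after each read-only SELECT so the
    next iteration sees other sessions' changes. Assumes the
    mysql-connector-python driver and an invented table `events`."""
    import mysql.connector  # deferred so the sketch imports without the driver
    conn = mysql.connector.connect(
        host="localhost", user="pi", password="secret", database="mydb",
    )  # placeholder credentials
    cur = conn.cursor()
    while True:
        cur.execute("SELECT id, payload FROM events")
        rows = cur.fetchall()
        conn.commit()  # ends the snapshot; cheap for a read-only transaction
        for row in rows:
            pass  # look something up on the internet; occasionally write
        time.sleep(1)
```

Setting `conn.autocommit = True` would achieve the same for the pure reads, since each SELECT then runs in its own transaction with a fresh snapshot.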
So I guess, since it's just a couple hundred writes a day, it's totally fine.
But why have you guys all answered in the comments?
Who am I gonna pick as best answer?
As a developer, I love the new hosting platforms arising lately, such as Pantheon, Platform.sh or Acquia Cloud.
Automated workflows and task runners based on Git and simple YAML files are great features.
However, those platforms are quite expensive for someone who simply wants to host a personal website.
I'm wondering why PaaS offerings (aka managed hosts) are so expensive these days compared to other hosting solutions such as shared hosting or VPSes. The latter have seen their prices reduced significantly in the last few years.
In my opinion, the price of a hosting service should be mainly based on...
the amount of traffic
the disk usage
... not on the technology sustaining the platform.
I am really not sure this is a Stack Overflow kind of question (more of a Quora one?). Still, I am going to have a stab at answering it, as I may provide some insight (I work for Platform.sh):
What services like ours offer is a bargain of time vs. money. Basically, if we can save 100 hours a month of work for a skilled engineer, that is $10,000 of value delivered to a company.
Now, in order to achieve this, companies like the ones you cited invest heavily in R&D... but also in support and ops people who are around 24/7.
The time spent on the project doesn't disappear; it gets shared between anonymous, invisible people. You may not see it, but it's there. The time you did not spend on updating the kernel or a PHP version... the time you did not spend defending against a vulnerability. Someone spent that time (it is shared between many, many projects and rolled up into the global price).
... and there is also a lot of "invisible infrastructure" that you don't see, beyond the resources assigned to your specific project.
I think all three providers you cited offer multiple development/staging clusters (in our case, 3 staging clusters). So when you see 5GB, there are probably 20GB allocated...
If you were to install and run all the software necessary to run your single project under the same conditions... well, you would see CPU, memory and storage costs pile up quite rapidly: redundant storage nodes, storage for backups, monitoring and logging systems, multiple firewall layers, a build farm... and the coordination and control servers needed to run all of that...
So this is very much apples and oranges.
Often, when you work on a side or personal project, you don't have resources other than your time. So of course this kind of trade-off may be less appealing. You may think: well, I just need the CPU, I am a competent sysadmin, I don't need scaling and I don't need monitoring, nobody would scream if my blog were down... I am doing this by myself and I don't need multiple development environments... and you might very well be right...
I don't think providers like the ones you cited are a fit for every possible type of project. And none of us were dropped into a magic cauldron at birth: everything we do can be replicated, if you spend enough time and resources on it...
I hope this answer gives you some clarity as to why they cannot possibly cost the same as bare-bones hosting.
I created an AIR app which sends an ID to my server to verify the user's licence.
I generate the ID using
NetworkInfo.networkInfo.findInterfaces(), taking the first "name" value whose "displayName" contains "LAN" (or the first MAC address I get if the user is on a Mac).
But I get a problem:
sometimes users connect to the internet using a USB stick (provided by a mobile phone company), and that changes the serial number I get; probably the USB stick becomes the first value in the vector returned by findInterfaces().
I could take the last value instead, but I think I'd run into similar problems.
So is there a better way to identify the computer even with these small hardware changes?
It would be nice to get the motherboard or CPU serial, but that seems not to be possible. I've found some workarounds to get it, but they work on Windows and not on a Mac.
I don't want to store data on the user's computer for authentication, to make the software "a little" more difficult to hack.
Any idea?
Thanks
Nadia
So is there a better way to identify the computer even with these small hardware changes?
No, there are no best practices for identifying a personal computer and building software licensing on top of it. You should use a server-side licence manager to provide such functionality. It will also give your users flexibility with your desktop software. That makes the product much easier both for the owner (you don't want a call centre responding to every call about a changed network card, hard drive, whatever) and for the users.
Briefly speaking, a user's personal computer is an insecure environment (frankly speaking, you have no way to store anything valuable there) and a very dynamic one (hardware replacement cycles are too short to build a licensing scheme on).
I am in much the same boat as you, and I am now finally starting to address this... I have researched this for over a year, and there are a couple of options out there.
The biggest thing to watch out for when using a 3rd-party system is the leech effect. Nearly all of them want a percentage of your profit, which in my mind makes them nothing more than vampireware. This is on top of the percentage you WILL pay to PayPal, a merchant processor, etc.
The route I will end up taking is creating a secondary ANE, probably written in Java, because of 1) transitioning my knowledge and 2) the ability to run on various architectures. I have to concede this solution is not foolproof, since reverse engineering Java is nearly as easy as reverse engineering anything running on Flash Player. The point is just to make it harder, not bulletproof.
As a side note, to any naysayers citing CPU/motherboard changes: this is extremely rare, if it even still happens. I work on a laptop, and obviously once that hardware cycle is over, I need to re-register everything on a new one. So please...
Zarqon was developed by: Cliff Hall
This appears to be a good solution for small-scale use. The reason I don't believe it scales well (say, beyond a few thousand users), based on the documentation, is that it appears to be a completely manual process, i.e. there is no ability to tie into a payment system to auto-generate a key and notify the user (I could be wrong about this).
Other helpful resources:
http://www.adobe.com/devnet/flex/articles/flex_paypal.html
I have a business-critical Microsoft Access database that was originally created in the '90s and has been enlarged and upgraded through Access 2007 at this point. We have essentially been using this database as the front end for a custom-written ERP system. We moved most of the data over to a SQL Server long ago, but we are still using MS Access as the front end. As the project has grown (we now have a full-time developer), we have started having stability problems and extremely frequent crashes of unknown cause.
As an example: one time out of 10 or so, a certain form will crash if I change the data in one specific field. There is no code firing at the time; the data is in a local temporary table that typically only has 5 rows. If I change the data in the table directly, nothing goes wrong, but if I change it on the form, Access will hard crash and dump me to the desktop. There are other examples of unexplained crashes I could provide.
I am looking for advice on where to go at this point -- the Access front end essentially holds all of the business logic for running our company, so I can't just abandon it. Ideally we would rewrite the entire front end in some other language. The problem is that as a small company we don't have the resources to rewrite the entire system in anything resembling a good time frame, and we don't have the cash flow to pay someone else to do it. My ideal solution would be a conversion of some sort from the Access front end to another endpoint -- whether web or local Windows -- but my searches here and on Google make that seem like a non-starter.
So essentially every avenue I look at seems to be a dead-end:
We can't find the source of the crashes to stabilize our current system,
We can't stop production in our current system for as long as it would take to re-write it,
We can't afford to pay someone else to write a new system,
Automated conversion tools seem like a waste of money
Are there other options, or does one of the options I have thought of seem best?
We have an enterprise-level program with an Access front end and a SQL Server back end. I wonder if it might help to split the program up into different pieces for diagnostics. For instance, if you have Order Entry and Inventory Management, you could have one front end for each function. (Yes, I can hear the howling in the background, but if it were only for the purpose of diagnosis, maybe it would help...)
You can also export the Access database objects to text files and then import them into a fresh new database; on some occasions that gets rid of weird errors.
Well, I guess I will rephrase the basic issue here. If you have two computing science graduates on staff, then they should have anticipated long ago that they had reached the limits of Access. I fail to see how this is any different from an overworked oven in a restaurant that now has too many customers, or a delivery truck that does not have the capacity to deliver goods to customers.
Since funds don't exist for a rewrite, your staff failed to set aside funds on a monthly basis to deal with this situation, and the choices left by this delayed action place you in a difficult situation.
The computing science people you have on staff should have seen this wall and limit coming long ago. In my experience, most CS people consider Access rather limited, so even MORE alarm bells should have been ringing here, which means there is even less excuse for you to be placed in this unfortunate predicament.
So, assuming the computing science staff you have are maintaining this application well (and I graciously accept that this is the case), a logical conclusion is that the application has reached or exceeded the limits of Access. As noted, such limits should have been anticipated long ago.
As you point out, funds do not now exist for a rewrite, so few choices remain without such funds.
However, your case may not be so bad, since we NOW know you have experienced developers on staff. Given that, my suggestion would be to break out modules, or small manageable parts and features, one at a time from the application, and have your well-trained and experienced developers build either a web interface or perhaps something in .NET if you wish to stay 100% desktop. This "window" of opportunity is a great chance to consider a change in architecture.
Since the data limits are in SQL Server and NOT Access, both the existing Access front end and the new parts can easily operate on the same data. As you do this, you break out and remove the corresponding parts from the Access application. This suggests you would eventually return to acceptable stability in the Access application. At that point, you could continue, or stop to save funds.
As noted, without funds for a rewrite, the only choice is to find some means of freeing up SOME limited resources on a monthly basis to solve this problem.
At the end of the day, the solution to this problem is more resources; without them, few technical choices and options exist. Based on the information given so far, YOU have made it clear you don't have resources here. However, the solution to this problem requires resource allocation and planning.
In other words, a technical fix to this problem without allocated resources is not likely an available option for you.
I sincerely apologize for not being able to give you a technology solution, but this looks to be a problem that will require resources to be allocated to it; no simple shortcut, trick, or magic silver bullet exists here.
I will start off by saying that I know it is impossible to prevent your software from being reverse engineered.
But when I take a look at crackmes.de, there are crackmes with a difficulty grade of 8 or 9 (on a scale of 1 to 10). These crackmes get cracked by genius brains who then write a tutorial on how to crack them. Sometimes such tutorials are 13+ pages long!
When I try to make a crackme, they crack it in 10 minutes, followed by a "how-to-crack" tutorial 20 lines long.
So the questions are:
How can I make relatively good anti-crack protection?
Which techniques should I use?
How can I learn it?
...
Disclaimer: I work for a software-protection tools vendor (Wibu-Systems).
Stopping cracking is all we do, and all we have done since 1989, so we thoroughly understand how software gets cracked and how to avoid it. Bottom line: only with a secure hardware dongle, implemented correctly, can you guarantee against cracking.
Most strong anti-cracking relies on encryption (symmetric or public-key). The encryption can be very strong, but unless the key storage and generation are equally strong, it can be attacked. Lots of other attacks are possible too, even with good encryption, unless you know what you are doing. A software-only solution has to store the key in an accessible place, where it is easily found or is vulnerable to a man-in-the-middle attack. The same is true of keys stored on a web server.
Even with good encryption and secure key storage, unless you can detect debuggers, a cracker can just take a snapshot of memory and build an exe from that. So you need to ensure the code is never completely decrypted in memory at any one time, and you need some debugger-detection code. Obfuscation, dead code, etc. won't slow crackers down for long, because they don't crack by starting at the beginning and working through your code -- they are far more clever than that. Just look at some of the how-to cracking videos on the net to see how crackers find the security-detection code and attack from there.
Brief shameless promotion: our hardware system has NEVER been cracked. We have one major client who uses it solely for anti-reverse-engineering, so we know it can be done.
Languages like Java and C# are too high-level and do not provide any effective structures against cracking. You could make it hard for script kiddies through obfuscation, but if your product is worth it, it will be broken anyway.
I would turn this round slightly and think about:
(1) putting in place simple(ish) measures so that your program isn't trivial to hack, e.g. in Java:
obfuscate your code, so at least your enemy has to go to the moderate hassle of reading a decompilation of obfuscated code
maybe write a custom class loader that loads some classes encrypted in a custom format
look at what information your classes HAVE to expose (e.g. subclass/interface information can't be obfuscated away) and think about ways around that
put some small piece of key functionality in a DLL or another format that is less easy to disassemble
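The custom class loader bullet above can be sketched roughly like this; the XOR "encryption" is a stand-in for whatever custom format you pick (a real scheme would use an actual cipher, and the key would itself need hiding):

```java
public class XorClassLoader extends ClassLoader {
    private final byte[] key;

    public XorClassLoader(byte[] key) {
        this.key = key;
    }

    // The same XOR pass "encrypts" the .class bytes at build time
    // and decrypts them here at load time.
    static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }

    // Decrypts the stored class bytes and defines the class in this loader.
    public Class<?> defineEncrypted(String name, byte[] encrypted) {
        byte[] plain = xor(encrypted, key);
        return defineClass(name, plain, 0, plain.length);
    }
}
```

At build time you would XOR your sensitive .class files and ship only the encrypted bytes; the loader decrypts them in memory. As the answer notes, this only raises the bar -- the decrypted bytes still exist in memory for anyone with a debugger.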
However, the more effort you go to, the more serious hackers will see it as a "challenge". You really just want to make sure that, say, an average first-year computer science student can't hack your program in a few hours.
(2) putting in more subtle copyright/authorship markers (e.g. metadata in images; maybe subtly embed a popup that will appear in one year's time in all copies that don't connect and authenticate with your server...) that hackers might not bother to look for or disable, because their hacked program "works" as it is.
(3) just giving your program away in countries where you don't realistically have a chance of making a profit, and not worrying about it too much -- if anything, it's a form of viral marketing. Remember that in many countries, what we in the UK/US see as "piracy" of our Precious Things is openly tolerated by government/law enforcement; don't base your business model around copyright enforcement that doesn't exist.
I have a pretty popular app (which I won't specify here, to avoid piquing crackers' curiosity, of course) and have suffered from cracked versions a few times in the past, which really caused me many headaches.
After months of struggling with lots of anti-cracking techniques, in 2009 I arrived at a method that has proved effective, at least in my case: my app has not been cracked since.
My method consists of a combination of three implementations:
1 - Lots of checks in the source code (size, CRC, date, and so on -- use your creativity. For instance, if my app detects tools like OllyDbg running, it forces the machine to shut down)
2 - CodeVirtualizer virtualization of sensitive functions in the source code
3 - EXE encryption
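The integrity-check part of item 1 can be sketched as follows, in Java for brevity (the answer's app is native, and the "expected" value would be embedded at build time rather than passed in): the app checksums its own executable and refuses to start on a mismatch.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public class SelfCheck {
    // Computes the CRC32 of a file; an app would compare this against a
    // value recorded at build time.
    static long crcOf(Path file) throws Exception {
        CRC32 crc = new CRC32();
        crc.update(Files.readAllBytes(file));
        return crc.getValue();
    }

    // True if the executable still matches its build-time checksum.
    static boolean integrityOk(Path exe, long expected) throws Exception {
        return crcOf(exe) == expected;
    }
}
```

As the answer itself says, a check like this is trivially patched out on its own; it only becomes annoying in combination with virtualization and encryption, and ideally with many redundant copies of the check scattered through the code.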
None of these is really effective alone: checks can be bypassed with a debugger, virtualization can be reversed, and EXE encryption can be decrypted.
But used all together, they will cause BIG pain to any cracker.
It's not perfect, though: so many checks make the app slower, and the EXE encryption can lead to false positives in some anti-virus software.
Even so, there is nothing like not being cracked ;)
Good luck.
Personally, I am a fan of server-side checks.
It can be as simple as authenticating the application or the user each time it runs; however, that can be easily cracked. Or you can put some part of the code on the server side, which requires a lot more work.
However, your program will then require an internet connection as a must-have, and you will have server expenses. But that is the only way to get relatively good protection; any standalone application will be cracked relatively fast.
The more logic you move to the server side, the harder it will be to crack -- but it will still be cracked if it is worth the effort. Even large companies like Blizzard can't prevent their server-side logic from being reverse engineered.
I propose the following:
Create, in house, a key named KEY1 consisting of N random bytes.
Sell the user a "license number" with the software. Take note of his/her name and surname, and explain that those data, plus an internet connection, are required to activate the software.
Within the next 24 hours, upload to your server the license number, the name and surname, and also KEY3 = (KEY1 XOR hash_N_bytes(License_number, name and surname)).
The installer asks for the License_number and the name and surname, then sends those data to the server and downloads the key named KEY3 if they correspond to a valid sale.
The installer then computes KEY1 = KEY3 XOR hash_N_bytes(License_number, name and surname).
The installer checks KEY1 using a 16-bit hash. The application is encrypted with the key KEY1, so the installer decrypts the application with that key and it's ready.
Both the installer and the application must have a CRC content check.
Both could check whether they are being debugged.
Both could have parts of their code encrypted during execution.
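The XOR key-recovery arithmetic in the scheme above can be sketched as follows; SHA-256 truncated to N bytes stands in for the unspecified hash_N_bytes, and N=16, the license number, and the customer name are arbitrary example values:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;

public class KeyScheme {
    // hash_N_bytes(license, name): SHA-256 truncated to n bytes (an assumption).
    static byte[] hashN(String license, String name, int n) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest((license + "|" + name).getBytes(StandardCharsets.UTF_8));
        return Arrays.copyOf(digest, n);
    }

    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }

    public static void main(String[] args) throws Exception {
        int n = 16;
        byte[] key1 = new byte[n];
        new SecureRandom().nextBytes(key1);           // vendor's random KEY1
        byte[] h = hashN("LIC-1234", "Jane Doe", n);  // per-customer hash
        byte[] key3 = xor(key1, h);                   // stored on the server
        byte[] recovered = xor(key3, h);              // installer recomputes KEY1
        System.out.println(Arrays.equals(recovered, key1)); // true
    }
}
```

Note that this is still ultimately a client-side scheme: once KEY1 has been recovered on the user's machine and used to decrypt the application, a cracker who captures it from memory gets the decrypted program, which is the weakness the other answers describe.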
What do you think about this method?