TVR bits match TAC Online, but transaction does NOT go online? - emv

I have a scenario where an EMV contactless card image (American Express) is SUPPOSED to decline offline; however, the Ingenico PIN pad goes online and approves, while the VeriFone declines offline.
Even though this scenario is documented to decline offline, I am convinced it should go ONLINE. I think the VeriFone result is a false positive and the Ingenico is doing the right thing by going ONLINE.
The purpose of this scenario is to ensure that the terminal declines a transaction offline when CDA fails.
The card image has an IAC Denial of "0000000000" and IAC Online of "F470C49800".
The TVR generated during the first GENERATE AC is '0400008000'.
The TAC Denial is set to "0010000000" and the TAC Online is set to "DE00FC9800".
TVR = "0400008000"
IAC_Denial = "0000000000"
TAC_Denial = "0010000000"
IAC_Online = "F470C49800"
TAC_Online = "DE00FC9800"
When comparing the TVR to the IAC Denial and TAC Denial (which, per EMV Book 3, Terminal Action Analysis, should happen first), there are NO matching bits. The next step is to compare the TVR with the IAC Online and TAC Online. There, the matching bits are "CDA Failed" and "Transaction Exceeds Floor Limit".
This indicates to me that the transaction should go ONLINE; however, as stated above, the scenario expects it to decline OFFLINE.
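For reference, the Terminal Action Analysis checks are just bitwise ANDs of the TVR against the OR of the corresponding TAC and IAC. A minimal sketch using the values above (Python purely for illustration):

```python
# Values taken from the scenario above (hex strings, 5 bytes each).
TVR        = int("0400008000", 16)
IAC_DENIAL = int("0000000000", 16)
TAC_DENIAL = int("0010000000", 16)
IAC_ONLINE = int("F470C49800", 16)
TAC_ONLINE = int("DE00FC9800", 16)

# Step 1: denial test -- any TVR bit also set in (TAC Denial | IAC Denial)
# means decline offline.
deny = TVR & (TAC_DENIAL | IAC_DENIAL)

# Step 2: online test -- any TVR bit also set in (TAC Online | IAC Online)
# means go online.
online = TVR & (TAC_ONLINE | IAC_ONLINE)

print(f"denial match: {deny:010X}")    # 0000000000 -> no offline decline
print(f"online match: {online:010X}")  # 0400008000 -> CDA Failed + over floor limit
```

Run against the scenario's values, this yields no denial match and an online match on "CDA Failed" and "Transaction Exceeds Floor Limit", which supports the Ingenico's behavior.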
In a nutshell, the VeriFone PIN pad appears to give a false positive by declining OFFLINE without applying the Terminal Action Analysis logic, while the Ingenico seems to be doing the right thing by going ONLINE.
Is there something that I am missing?
Are there any configurations that can override the Terminal Action Analysis (matching the TVR against the TACs) and prevent a transaction from going online?
Could this be an issue with the VeriFone kernel?
Thanks.

I often got this kind of error when my POS terminal was not properly configured.
Scenarios like this one often have thresholds that must be configured in your terminal according to the relevant standard. For instance, my terminal was configured according to the SEPA-FAST standard.
There was a threshold for the maximum amount that could be approved offline. This is useful for merchants that want to approve small amounts offline for speed when they have long lines of customers to process - think of a cafeteria or a bus line. Of course, this is slightly risky, and many merchants won't approve high amounts without an online approval, to reduce losses from invalid or fraudulent payments.
In my opinion, your offline threshold looks fine: the transaction amount exceeds it and is refused offline for the reasons explained above.
Perhaps your maximum threshold is badly configured. Most scenarios require you to set a maximum amount threshold above which the transaction is refused offline.
Another thing that could be wrong is your EMV Terminal Capabilities (tag 9F33) supporting online PIN authentication when it shouldn't. Maybe you aren't using the terminal configuration prescribed by the scenario. What is your CVM? Should it be supported by your terminal? There is also the Terminal Transaction Qualifiers (TTQ, tag 9F66) field for contactless transactions, which plays a similar role in defining what a terminal can and cannot do. Maybe your terminal should be offline-only in this scenario - this could be the case for pizza deliveries or situations where an internet connection is not available.
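If you want to sanity-check those capability bits quickly, here is a small sketch. The two tag values are made-up examples, and the bit positions reflect my reading of the EMV definitions - verify them against your kernel's documentation:

```python
# Hypothetical example values -- substitute your terminal's real configuration.
TERM_CAPS = bytes.fromhex("E0F8C8")    # tag 9F33, 3 bytes
TTQ       = bytes.fromhex("36204000")  # tag 9F66, 4 bytes

def bit(byte_val, n):
    # EMV numbers bits 1..8 within a byte, with bit 8 the most significant.
    return bool(byte_val & (1 << (n - 1)))

# 9F33 byte 2, bit 7: enciphered PIN for online verification (online PIN capable)
print("online PIN capable: ", bit(TERM_CAPS[1], 7))
# 9F66 byte 1, bit 4: offline-only reader; bit 3: online PIN supported
print("offline-only reader:", bit(TTQ[0], 4))
print("TTQ online PIN:     ", bit(TTQ[0], 3))
```

A mismatch between what the scenario prescribes and what these bits say your terminal supports is exactly the kind of thing that changes the online/offline outcome.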


NullPointerExceptions while executing LoadTest on WSO2BPS

While performing load tests on WSO2 BPS 3.2.0 we ran into a problem.
Let me tell you more about our project and our actions.
Our BPS process is designed to manage interactions with 3 systems. Basically it is split into two parts - the first one CREATES an INSTANCE in one of the systems, then, after waiting a bit, the second SELECTS an OFFER in the instance context.
In real life it looks like this: a user wants to get a product, the application asks the system for offers, and the user then selects one of the available offers.
In BPS the first part is a straightforward process; the second part is split into two flows - one to refresh the information with new offers, and another to wait for the user to choose one of them.
Our aim is to sustain about 1000-1500 simultaneous threads in the load test. The external systems are simulated by mockups driven by LoadUI.
We can achieve our goal if we disable "Process-Level Monitoring Events" in the deployment descriptor of our process (set it to "none"). Everything runs well and smoothly for hours.
But if we enable this feature (and we need to), everything fails with an error very soon (at about run 100-200):
[2015-07-28 17:47:02,573] ERROR {org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy} - Error processing response for MEX null
java.lang.NullPointerException
at org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy.onResponse(BPELProcessProxy.java:402)
at org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy.onAxisServiceInvoke(BPELProcessProxy.java:187)
at
[....Et cetera....]
After the first appearance of this error, another kind appears: other threads simply fail after a timeout.
The database seems to be fine (it is MySQL 5.6.25, by the way). The dashboard shows no extreme levels of input or output.
So I think BPS itself is the bottleneck. We have given it an 8 GB heap, and its configuration options are set for extreme numbers of threads (negative values where possible, and otherwise just ridiculously big ones like 100000).
Has anyone ever faced this problem? Any help is much appreciated.
Solved in BPS version 3.5.0; refer to the release notes.

Can I connect to my car's can bus with an elm327 interface?

I've been fiddling around with a Bluetooth ELM327 device I bought a few months ago and am able to get standard OBD info like VIN, RPM, speed, etc.
But as I recently read, OBD2 and CAN are not the same. I've tried to sniff my CAN bus with the AT MA command, but I get no response, so I guess the CAN network is decoupled from the OBD2 interface. Is there any chance to get access to the CAN network? Or might I need a different device to do so?
Maybe this info helps: I have a 2011 Skoda.
On many modern vehicles there are actually multiple CAN buses controlling the numerous functions needed by the car. Some of these CAN buses are high-speed for important systems like engine control, and some are low-speed for less critical functions such as climate control (or in your case, diagnostics through the OBD2 port). These multiple CAN buses are usually interconnected through a gateway device in the car that arbitrates which CAN messages can be sent between buses. This is a safety net that prevents lower priority CAN buses from interfering with the more critical CAN buses.
As an example, the CAN bus used for engine control may be able to communicate with the radio CAN bus so that the radio volume is increased when the engine revs to higher RPMs, for comfort reasons. This would likely be a one-way connection through the gateway, though, as it would be in the interest of safety not to allow the radio's CAN bus to send signals back to the engine (this could lead to problems with aftermarket radios, for example).
As a result of everything mentioned above, a connection to the OBD2 port's CAN lines most likely will not have full access to the complete CAN network in your car. One way to confirm this would be to look at the Factory Service Manual for your particular vehicle to see how the CAN bus(es) are set up (there are actually quite a few cars that run on only a single CAN bus in order to cut costs).
Keep in mind that as an alternative to using the OBD2 port, you can always tap directly into the CAN bus that you are interested in. For example, if you remove the radio from your car to expose the radio harness, you can usually tap directly into the CAN lines for the radio bus with the correct equipment.
Hope this helps!
If your vehicle uses the CAN protocol then you should be able to issue ATMA from the ELM327 device.
Here are the conditions I met to get an ATMA dump:
my vehicle supports protocol 6 -- ISO 15765-4 CAN 11-bit (500 kbaud)
ATSP6 // I am using protocol 6, not auto mode
ATSH7E0 // now I am talking to the engine ECU
ATMA // returned a page full of data before getting a buffer full message
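The same sequence can be scripted. This sketch assumes a pyserial-accessible ELM327; the /dev/rfcomm0 port name and 38400 baud are assumptions, so adjust for your adapter:

```python
def elm_cmd(cmd):
    # ELM327 commands are plain ASCII terminated by a carriage return.
    return cmd.encode("ascii") + b"\r"

def monitor_can(port="/dev/rfcomm0", baud=38400):
    # Requires pyserial. Bluetooth ELM327s often appear as /dev/rfcomm0 on
    # Linux or a COMx port on Windows.
    import serial
    with serial.Serial(port, baud, timeout=5) as s:
        for cmd in ("ATZ", "ATSP6", "ATSH7E0", "ATMA"):
            s.write(elm_cmd(cmd))
            # The ELM327 signals it is ready for the next command with '>'.
            print(s.read_until(b">").decode(errors="replace"))
```

Note that ATMA keeps streaming until the adapter's buffer fills, so expect a BUFFER FULL message at the end of the dump.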

Adobe Air unique id issue

I created an AIR app which sends an ID to my server to verify the user's licence.
I generate the ID using
NetworkInfo.networkInfo.findInterfaces(): I use the first "name" value whose "displayName" contains "LAN" (or the first MAC address I get if the user is on a Mac).
But I ran into a problem:
sometimes users connect to the internet using a USB stick (supplied by a mobile phone company), and this changes the serial number I get; probably the USB stick becomes the first value in the vector returned by findInterfaces().
I could take the last value instead, but I think I could run into similar problems there too.
So is there a better way to identify the computer even with these small hardware changes?
It would be nice to get the motherboard or CPU serial number, but that seems not to be possible. I've found some workarounds, but they work on Windows and not on a Mac.
I don't want to store data on the user's computer for authentication, to make the software "a little" more difficult to hack.
Any idea?
Thanks
Nadia
So is there a better way to identify the computer even with these small hardware changes?
No, there are no best practices for identifying a personal computer and building user licensing for software on top of that. You should use a server-side licensing manager to provide such functionality. It will also give your users flexibility with your desktop software, and it's much easier both for the product owner (you won't need a call center responding to every call about a changed network card, hard drive, or whatever) and for the users of such a product.
Briefly speaking, a user's personal computer is an insecure (frankly speaking, you have no way to store anything valuable there) and very dynamic environment (hardware cycles are too short to use the hardware as part of a licensing program).
I am in much the same boat as you, and I am now finally starting to address this... I have researched this for over a year and there are a couple of options out there.
The biggest thing to watch out for when using a 3rd-party system is the leech effect. Nearly all of them want a percentage of your profit - which in my mind makes them nothing more than vampireware. This is on top of the percentage you WILL pay to PayPal, your merchant processor, etc.
The route I will end up taking is creating a secondary ANE, probably written in Java, because of 1) transitioning my knowledge and 2) the ability to run on various architectures. I have to concede this solution is not foolproof, since reverse engineering Java is nearly as easy as reverse engineering anything running on Flash Player. The point is just to make it harder, not bulletproof.
As a side note - to any naysayers about changing the CPU / motherboard: this is extremely rare, if it even still happens. I work on a laptop, and obviously once that hardware cycle is over, I need to re-register everything on a new one. So please...
Zarqon was developed by Cliff Hall.
This appears to be a good solution for small scale. The reason I do not believe it scales well (say beyond a few thousand users), based on the documentation, is that it appears to be a completely manual process, i.e. there is no ability to tie into a payment system to auto-generate the key and notify the user (I could be wrong about this).
Other helpful resources:
http://www.adobe.com/devnet/flex/articles/flex_paypal.html

Reverse engineering war stories [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Sometimes you don't have the source code and need to reverse engineer a program or a black box. Any fun war stories?
Here's one of mine:
Some years ago I needed to rewrite a device driver for which I didn't have source code. The device driver ran on an old CP/M microcomputer and drove a dedicated phototypesetting machine through a serial port. Almost no documentation for the phototypesetting machine was available to me.
I finally hacked together a serial port monitor on a DOS PC that mimicked the responses of the phototypesetting machine. I cabled the DOS PC to the CP/M machine and started logging the data coming out of the device driver as I fed data in through the CP/M machine. This enabled me to figure out the handshaking and encoding used by the device driver and re-create an equivalent one for a DOS machine.
Read the story of FCopy for the C-64 here:
Back in the 80s, the Commodore C-64 had an intelligent floppy drive, the 1541, i.e. an external unit that had its own CPU and everything.
The C-64 would send commands to the drive, which in turn would then execute them on its own - reading files and such - and send the data to the C-64, all over a proprietary serial cable.
The manual for the 1541 mentioned, besides the commands for reading and writing files, that one could read and write its internal memory space. Even more exciting, one could download 6502 code into the drive's memory and have it executed there.
This got me hooked and I wanted to play with that - execute code on the drive. Of course, there was no documentation on what code could be executed there, or which functions it could use.
A friend of mine had written a disassembler in BASIC, and so I read out all of its ROM contents - 16KB of 6502 CPU code - and tried to understand what it does. The OS on the drive was quite amazing and advanced, IMO - it had a kind of task management, with commands being sent from the communication unit to the disk I/O task handler.
I learned enough to understand how to use the disk I/O commands to read/write sectors of the disc. Actually, having read the Apple ]['s DOS 3.3 book, which explained all of the workings of its disk format and algorithms in much detail, was a big help in understanding it all.
(I later learned that I could also have found reverse-engineered info on the 4032/4016 disk drives for the "business" Commodore models, which worked much the same as the 1541, but that was not available to me as a rather disconnected hobby programmer at the time.)
Most importantly, I also learned how the serial comms worked. I realized that the serial comms, using 4 lines - two for data, two for handshake - were programmed very inefficiently, all in software (though done properly, using classic serial handshaking).
Thus I managed to write a much faster comms routine, in which I made fixed timing assumptions, using both the data and the handshake lines for data transmission.
Now I was able to read and write sectors, and also transmit data faster than ever before.
Of course, it would have been great if one could simply load some code into the drive which sped up the comms, and then use the normal commands to read a file, which in turn would use the faster comms. This was not possible, though, as the OS on the drive did not provide any hooks for that (mind you, all of the OS was in ROM, unmodifiable).
Hence I was wondering how I could turn my exciting findings into a useful application.
Having been a programmer for a while already, and dealing with data loss all the time (music tapes and floppy discs were not very reliable back then), I thought: Backup!
So I wrote a backup program which could duplicate a floppy disc at never-before-seen speed: the first version copied an entire 170 KB disc in only 8 minutes (yes, minutes); the second version did it in about 4.5 minutes, whereas the apps before mine took over 25 minutes. (Mind you, the Apple ][, which had its disc OS running on the Apple directly, with fast parallel data access, did all this in a minute or so.)
And so FCopy for the C-64 was born.
It soon became extremely popular - not as the backup program I had intended, but as the primary choice for anyone wanting to copy games and other software for their friends.
It turned out that a simplification in my code - which would simply skip unreadable sectors, writing a sector with a bad CRC to the copy - circumvented most of the then-used copy protection schemes, making it possible to copy most formerly uncopyable discs.
I had tried to sell my app and actually sold it 70 times. When it was advertised in the magazines, claiming it would copy a disc in less than 5 minutes, customers would call and not believe it, "knowing better" that it couldn't be done, yet giving it a try.
Not much later, others started to reverse engineer my app, and optimize it, making the comms even faster, leading to copy apps that did it even in 1.5 minutes. Faster was hardly possible, because, due to the limited amount of memory available on the 1541 and the C-64, you had to swap discs several times in the single disc drive to copy all 170 KB of its contents.
In the end, FCopy and its optimized successors were probably the most-popular software ever on the C-64 in the 80s. And even though it didn't pay off financially for me, it still made me proud, and I learned a lot about reverse-engineering, futility of copy protection and how stardom feels. (Actually, Jim Butterfield, an editor for a C-64 magazine in Canada, told its readers my story, and soon he had a cheque for about 1000 CA$ for me - collected by the magazine from many grateful users sending 5$-cheques, which was a big bunch of money back then for me.)
I actually have another story:
A few years past my FCopy "success" story, I was approached by someone who asked me if I could crack a slot machine's software.
This was in Germany, where almost every pub had one or two of these: you'd throw in a coin worth about a US quarter, it would spin three wheels, and if you got lucky with some pattern, you'd then have the choice to "double or nothing" your win on the next play, or take the current win. The goal of the play was to try to double your win a few times until you got into "series" mode, where any succeeding win, no matter how minor, would get you a big payout (of about 10 times your spending per game).
The difficulty was knowing when to double and when not to. To an "outsider" this was completely random, of course. But it turned out that those German-made machines used simple pseudo-random tables in their ROMs. If you watched the machine play for a few rounds, you could figure out where this "random table pointer" was and predict its next move. That way, a player would know when to double and when to pass, eventually leading him to the "big win series".
Now, this was already common knowledge when this person approached me. There was an underground scene which had access to the ROMs in those machines, would find the tables, and would create software for computers such as the C-64 to predict the machine's next moves.
Then came a new type of machine, though, which used a different algorithm: instead of using pre-calculated tables, it did something else, and none of the resident crackers could figure it out. So I was approached, being known as a sort of genius since my FCopy fame.
So I got the ROMs. 16KB, as usual. No information whatsoever on what it did or how it worked. I was on my own. Even the code didn't look familiar (I only knew 6502 and 8080 by then). After some digging and asking around, I found out it was a 6809 (which I found to be the nicest 8-bit CPU in existence, and which had analogies to the 680x0 CPU design, which was much more linear than the x86 family's instruction mess).
By that time I already had a 68000 computer (I worked for the company "Gepard Computer", which built and sold such a machine, with its own developer OS and all) and was into programming Modula-2. So I wrote a disassembler for the 6809, one that helped me with reverse engineering by finding subroutines, jumps, etc. Slowly I got an idea of the flow control of the slot machine's program. Eventually I found some code that looked like a mathematical algorithm, and it dawned on me that this could be the random-generating code.
As I never had a formal education in computer science, up to then I had no idea how a typical random generator using mul, add and mod worked. But I remembered seeing something about it in a Modula-2 book and then realized what it was.
Now I could quickly find the code that called this randomgen and learn which "events" led to a randomgen iteration, meaning I knew how to predict the next iterations and their values during a game.
What was left was to figure out the current position of the randomgen. I had never been good with abstract things such as algebra, but I knew someone who studied math and was a programmer too. When I called him, he quickly knew how to solve the problem and went on at length about how simple it would be to determine the randomgen's seed value. I understood nothing. Well, I understood one thing: the code to accomplish this would take a lot of time to run, and a C-64 or any other 8-bit computer would take hours if not days.
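A "mul, add, mod" generator is what is now called a linear congruential generator (LCG), and recovering its state from a few observed outcomes can indeed be brute-forced when the modulus is small. A toy sketch - the constants and the "state mod 8" outcome mapping are illustrative, not the machine's real parameters:

```python
M, A, C = 2**16 + 1, 75, 74  # illustrative LCG constants

def lcg(state):
    # One iteration of the generator: state' = (A*state + C) mod M
    return (A * state + C) % M

def recover_state(observed):
    # Brute-force the hidden seed from a short run of observed outcomes,
    # assuming each game exposes only the state mod 8.
    for seed in range(M):
        t, ok = seed, True
        for o in observed:
            t = lcg(t)
            if t % 8 != o:
                ok = False
                break
        if ok:
            return seed
    return None
```

Once a candidate seed reproduces the observed run, replaying lcg from it predicts every following outcome - which is exactly what made a printed list of upcoming games possible.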
Thus, I decided to offer him 1000 DM (which was a lot of money for me back then) if he could write me an assembler routine for the 68000. It didn't take him long, and I had code which I could test on my 68000 computer. It usually took between 5 and 8 minutes, which was acceptable. So I was almost there.
It still required a portable 68000 computer to be carried to the pub where the slot machine stood. My Gepard computer was clearly not of the portable type. Luckily, someone else I knew in Germany produced entire 68000 computers on a small circuit board. For I/O it only had serial comms (RS-232) and a parallel port (Centronics was the standard of those days). I could hook up some 9V block batteries to make it work. Then I bought a Sharp pocket computer, running on batteries, with a rubber keyboard and a single-line 32-character display, which became my terminal. It had an RS-232 connector which I connected to the 68000 board. The Sharp also had some kind of non-volatile memory, which allowed me to store the 68000 random-cracking software on the Sharp and transfer it on demand to the 68000 computer, which then calculated the seed value. Finally, I had a small Centronics printer which printed on narrow thermo paper (the size of what cash registers use to print receipts). Once the 68000 had the results, it would send a list of results for the upcoming games on the slot machine to the Sharp, which printed them on paper.
So, to empty one of these slot machines, you'd work in pairs: you'd start playing and write down the results, and once you had the minimum number of games required for the seed calculation, one of you would go to the car parked outside, turn on the Sharp, and enter the results; the 68000 computer would rattle away for 8 minutes, and out came a printed list of upcoming game runs. Then all you needed was this tiny piece of paper: take it back to your buddy, who had kept the machine occupied, align the past results with the printout, and no more than 2 minutes later you were "surprised" to win the all-time 100s series. You'd then play these 100 games, practically emptying the machine (and if the machine ran empty before the 100 games were played, you had the right to wait for it to be refilled, maybe even come back the next day, while the machine was stopped until you returned).
This wasn't Las Vegas, so you'd only get about 400 DM out of a machine that way, but it was quick and sure money, and it was exciting. Some pub owners suspected us of cheating but could do nothing about it under the laws of the time, and even when some called the police, the police sided with us.
Of course, the slot machine company soon got wind of this and tried to counteract it, turning off those particular machines until new ROMs were installed. But the first few times, they only changed the randomgen's numbers. We only had to get hold of the new ROMs, and it took me a few minutes to find the new numbers and put them into my software.
So this went on for a while, during which my friends and I browsed through the pubs of several towns in Germany looking for those machines only we could crack.
Eventually, though, the machine maker learned how to "fix" it: until then, the randomgen was only advanced at certain predictable times, e.g. something like 4 times during play, and once more each time the player pressed the "double or nothing" button.
But then they finally changed it so that the randomgen would continually be polled, meaning we were no longer able to predict the next seed value exactly on time for the pressing of the button.
That was the end of it. Still, making the effort of writing a disassembler just for this single crack, finding the key routines in 16KB of 8-bit CPU code, figuring out unknown algorithms, investing quite a lot of money to pay someone else to develop code I didn't understand, assembling a portable high-speed computer from the "blind" 68000 CPU with the Sharp as a terminal and the printer for convenient output, and then actually emptying the machines myself, was one of the most exciting things I ever did with my programming skills.
Way back in the early 90s, I forgot my Compuserve password. I had the encrypted version in CIS.INI, so I wrote a small program to do a plaintext attack and analysis in an attempt to reverse-engineer the encryption algorithm. 24 hours later, I figured out how it worked and what my password was.
Soon after that, I did a clean-up and published the program as freeware so that Compuserve customers could recover their lost passwords. The company's support staff would frequently refer these people to my program.
It eventually found its way onto a few bulletin boards (remember them?) and Internet forums, and was included in a German book about Compuserve. It's still floating around out there somewhere. In fact, Google takes me straight to it.
Once, when playing Daggerfall II, I could not afford the Daedric Dai-Katana so I hex-edited the savegame.
Being serious though, I managed to remove the dongle check in my dad's AutoCAD installation using SoftICE many years ago. This was before the Internet was big. He works as an engineer, so he had a legitimate copy. He had just forgotten the dongle at his job, he needed to get some things done, and I thought it would be a fun challenge. I was very proud afterwards.
Okay, this wasn't reverse engineering (quite) but a simple hardware hack born of pure frustration. I was an IT manager for a region of Southwestern Bell's cell phone service in the early 90s. My IT department was dramatically underfunded and so we spent money on smart people rather than equipment.
We had a WAN between major cities, used exclusively for customer service, with critical IP links. Our corporate bosses were insistent that we install a network monitoring system to notify us when the lines went down (no money for redundancy, but spend bucks for handling failures. Sigh.)
The STRONGLY recommended solution ran on a SPARC workstation and started at $30K, plus the cost of a SPARCstation (around $20K then), which together was a substantial chunk of my budget. I couldn't see it - this was a waste of $$. So I decided a little hacking was in order.
I took an old PC scheduled for destruction, put a copy of ProComm on it (remember ProComm?), and had it ping each of the required nodes along the route (this was one of the later versions of ProComm that scripted FTP as well as serial lines, KERMIT, etc.). A little logic in the script fired off a pager message when a node couldn't be reached. I had already used it to cobble together a pager system for our techs, so I reused the pager code. The script ran continuously, sending a ping once each minute across each of the critical links, and branched into the pager code when a ping wasn't returned.
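The same watchdog idea is a few lines today. A sketch in Python, where the node addresses and the pager hook are placeholders, and the ping flags are the Linux ones:

```python
import subprocess

CRITICAL_NODES = ["10.0.0.1", "10.0.1.1"]  # placeholder router addresses

def ping(host):
    # One ICMP echo with a 2-second wait; exit code 0 means a reply came back.
    return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                          capture_output=True).returncode == 0

def page(msg):
    print("PAGE:", msg)  # stand-in for the modem/pager dial-out

def sweep(nodes, probe=ping, alert=page):
    # One pass over the critical links; run this once a minute from a loop or cron.
    for host in nodes:
        if not probe(host):
            alert(f"link to {host} is DOWN")
```

Injecting the probe and alert functions keeps the sweep testable without any network, which is the same separation the ProComm script had between the ping loop and the pager branch.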
We duplicated this system at each critical location for a cost of less than $500 and had a very fast notification when a link went down. Next issue - one of our first trouble-shooting methods was to power-cycle our routers and/or terminal servers. I got some dial-up X10 controllers and a few X10 on/off appliance power switches. You had to know the correct phone number to use and the correct tones to push, but we printed up a cheat card for each technician and they kept it with their pager. Instant fast response! One of my techs then programmed the phones we all had to reset specific equipment at each site as a speed-dial. One-tech solves the problem!
Now the "told-you-so" unveiling.
I'm sitting at lunch with our corporate network manager in Dallas, who is insisting on a purchase of the Sun-based network management product. I get a page that one of our links is down, and then a second page. Since the pager messages are coming from two different servers, I know exactly which router is involved (it was a set-up; I knew anyway, as the tech at the meeting with me had been cued to "down a router" during the meal so we could show off). I show the pager messages to the manager and ask him what he would do to resolve this problem. He eyes me suspiciously, since he hasn't yet successfully been paged by his Solaris NMS system that is supposed to track critical links. "Well, I guess you'd better call a tech and get them to reset the router and see if that fixes it." I turned to the tech who was lunching with us and asked him to handle it. He drew out his cell phone (above the table this time) and pressed the speed-dial he had programmed to reset the router in question. The phone dialed the X10 switch, told it to power off the router, paused for five seconds, told it to power the router back up, and disconnected. Our ProComm script sent us pages telling us the link was back up within three minutes. :-)
The corporate network manager was very impressed. He asked me what the cost was for my new system. When I told him less than $1K, he was apoplectic. He had just ordered a BIG set of the Sun Solaris network management solution just for the tasks I'd illustrated. I think he had spent something like $150K. I told him how the magic was done and offered him the ProComm script for the price of lunch. TANSTAAFL. He told me he'd buy me lunch to keep my mouth shut.
Cleaning out my old drawers of disks and such, I found a copy of the code - "Pingasaurus Rex" was the name I had given it. That was hacking in the good old days.
The most painful for me was a product where we wanted to include an image in an Excel spreadsheet (a few years back, before the open standards). So I had to get an "understanding", if such a thing exists, of the internal format of the docs as well. I ended up doing some hex comparison between files with and without the image to figure out how to put it there, plus working out some little-endian math...
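That kind of with/without comparison is easy to automate. A sketch of the byte-level diff (the file paths are whatever pair you exported):

```python
def diff_offsets(path_a, path_b):
    # Return the offsets at which two binary files differ -- a crude but
    # effective way to locate where an embedded object (like an image)
    # lands inside an undocumented file format.
    with open(path_a, "rb") as f:
        a = f.read()
    with open(path_b, "rb") as f:
        b = f.read()
    diffs = [i for i in range(min(len(a), len(b))) if a[i] != b[i]]
    if len(a) != len(b):
        diffs.append(min(len(a), len(b)))  # mark where the lengths diverge
    return diffs
```

Clusters of consecutive differing offsets usually mark the inserted payload, while isolated differences point at length fields and checksums - the "little-endian math" part.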
I once worked on a tool that would gather inventory information from a PC as it logged into the network. The idea was to keep track of all the PCs in your company.
We had a new requirement to support the Banyan VINES network system, now long forgotten but pretty cool at the time it came out. I couldn't figure out how to get the Ethernet MAC address from Banyan's adapter as there was no documented API to do this.
Digging around online, I found a program that some other Banyan nerd had posted that performed this exact action. (I think it would store the MAC address in an environment variable so you could use it in a script). I tried writing to the author to find out how his program worked, but he either didn't want to tell me or wanted some ridiculous amount of money for the information (I don't recall).
So I simply fired up a disassembler and took his utility apart. It turned out he was making one simple call to the server - an undocumented function code in the Banyan API. I worked out the details of the call pretty easily; it was basically asking the server for this workstation's address via RPC, and the MAC was part of the Banyan network address.
I then simply emailed the engineers at Banyan and told them what I needed to do. "Hey, it appears that RPC function number 528 (or whatever) returns what I need. Is that safe to call?"
The Banyan engineers were very cool, they verified that the function I had found was correct and was pretty unlikely to go away. I wrote my own fresh code to call it and I was off and running.
Years later I used basically the same technique to reverse engineer an undocumented compression scheme on an otherwise documented file format. I found a little-known support tool provided by the (now defunct) company which would decompress these files, and reverse engineered it. It turned out to be a very straightforward Lempel-Ziv variant applied within the block structure of their file format. The results of that work are recorded for posterity in the Wireshark source code, just search for my name.
Almost 10 years ago, I picked up the UFO/XCOM Collector's Edition in the bargain bin at a local bookstore, mostly out of nostalgia. When I got back home, I got kinda excited that it had been ported to Windows (the DOS versions didn't run under win2k)... and then disappointed that it had garbled graphics.
I was about to shrug my shoulders (bargain bin and all), but then my friend said "Haven't you... bugfixed... software before?", which led to a night of drinking lots of cola and reverse-engineering while hanging out with my friend. In the end, I had written a bugfix loader that fixed the pitch vs. width issue, and could finally play the first two XCOM games without booting old hardware (DOSBox wasn't around yet, and my machine wasn't really powerful enough for full-blown virtualization).
The loader gained some popularity, and was even distributed with the Steam re-release of the games for a while - I think they've switched to DOSBox nowadays, though.
I wrote a driver for the Atari ST that supported Wacom tablets. Some of the Wacom information could be found on their web sites, but I still had to figure out a lot on my own.
Then, once I had written a library to access the Wacom tablets (and a test application to show the results), it dawned on me that there was no API in the OS (the GEM windowing system) to actually place the mouse cursor somewhere. I ended up having to hook some interrupts in something called the VDI (like GDI in Windows), and be very, very careful not to crash the computer in there. I had some help (in the form of suggestions) from the developers of an accelerated version of the VDI (NVDI), and everything was written in PurePascal. I still occasionally have people asking me how to move the mouse cursor in GEM, etc.
I've had to reverse engineer a video-processing app where I only had part of the source code. It took me weeks and weeks even to work out the control flow, as the app kept using CORBA to call itself, or was called over CORBA from some part of the app that I couldn't access.
Sheer idiocy.
I recently wrote an app that downloads the whole contents of a Domino webmail server using cURL, because the subcontractor running the server asks for a few hundred bucks for every archive request.
They changed their webmail version about a week after I released the app for the department, but I managed to get it working again with a GREAT deal of regex and XML.
When I was in high school they introduced special hours each week (there were 3 hours, if I recall correctly) in which we had to pick a classroom with a teacher there to help with any questions on their subject. Now of course everybody always wanted to spend their time in the computer room to play around on the computers there.
To choose the room, there was an application that monitored how many students were going to each room, so you had to reserve your slot in time or there wasn't much choice left of where to go.
At that time I liked to play around on the computers there and had already obtained administrator access, only that didn't help me much with this application. So I used my administrator access to make a copy of the application and take it home to examine. I don't remember all the details, but I discovered the application used an Access database file located on a hidden network share. After taking a copy of this database file, I found there was a password on the database. Using some Linux tools for Access databases I could easily work around that, and after that it was easy to import the database into my own MySQL server.
Then, through a simple web interface, I could find details for every student in the school, change their slots, and promote myself to my room of choice every time.
The next step was to write my own application that let me just select a student from the list and change anything, without having to look up their password; it was implemented in only a few hours.
While not as impressive a story as some others in this thread, I still remember it was a lot of fun for a high-school kid back then.

How do download accelerators work?

We require all requests for downloads to have a valid login (non-http) and we generate transaction tickets for each download. If you were to go to one of the download links and attempt to "replay" the transaction, we use HTTP codes to forward you to get a new transaction ticket. This works fine for a majority of users. There's a small subset, however, that are using Download Accelerators that simply try to replay the transaction ticket several times.
So, in order to determine whether we want to or even can support download accelerators or not, we are trying to understand how they work.
How does having a second, third or even fourth concurrent connection to the web server delivering a static file speed the download process?
What does the accelerator program do?
You'll get a more comprehensive overview of download accelerators on Wikipedia.
Acceleration is multi-faceted
First
A substantial benefit of managed/accelerated downloads is that the tool in question remembers the start/stop offsets already transferred and uses HTTP Range headers (answered with 206 Partial Content) to request parts of the file instead of all of it.
This means that if something dies mid-transfer (e.g. a TCP timeout), it just reconnects where it left off and you don't have to start from scratch.
Thus, if you have an intermittent connection, the aggregate transfer time is greatly lessened.
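The resume mechanic above can be sketched with a small helper that builds the request header from whatever is already on disk. This is a minimal Python sketch; `resume_headers` is my own name, not any particular accelerator's API:

```python
import os

def resume_headers(local_path):
    """Build the HTTP headers a download manager might send to resume a
    partial download: ask the server only for the bytes after what we
    already have on disk (Range header, per the HTTP range-request spec)."""
    already_have = os.path.getsize(local_path) if os.path.exists(local_path) else 0
    if already_have == 0:
        return {}  # nothing saved yet: issue a plain GET
    return {"Range": f"bytes={already_have}-"}
```

A server that honors the header replies with 206 Partial Content and only the missing tail, which the tool appends to the local file.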
Second
Download accelerators like to break a single transfer into several smaller segments of equal size, using the same start-range-stop mechanics, and perform them in parallel, which greatly improves transfer time over slow networks.
There's this annoying thing called the bandwidth-delay product: the TCP window size at either end, divided by the round-trip (ping) time, caps the speed you actually experience. In practice this means a large ping time will limit your speed regardless of how many megabits per second all the interim links carry.
However, this limit applies per connection, so multiple TCP connections to a single server can help mitigate the performance hit of a high-latency path.
Hence, people who live nearby are not so likely to need a segmented transfer, but people in far-away locations are more likely to benefit from going crazy with their segmentation.
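Concretely, the ceiling is throughput ≤ window / RTT per connection, and each extra connection brings its own window. A tiny sketch of the arithmetic (the function name is mine):

```python
def max_throughput_bps(window_bytes, rtt_seconds, connections=1):
    """Bandwidth-delay-product ceiling: each TCP connection can have at most
    one window's worth of data in flight per round trip, so per-connection
    throughput tops out at window / RTT. Parallel connections each carry
    their own window, multiplying the ceiling (until the link saturates)."""
    return connections * window_bytes * 8 / rtt_seconds

# A 64 KiB window over a 200 ms intercontinental path caps one connection
# at about 2.6 Mbit/s no matter how fast the links in between are;
# four connections raise the ceiling to about 10.5 Mbit/s.
single = max_throughput_bps(64 * 1024, 0.200)
four = max_throughput_bps(64 * 1024, 0.200, connections=4)
```

This is why the answer says segmentation pays off mainly for far-away (high-RTT) users: at a 20 ms RTT the same window already allows ten times the single-connection speed.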
Thirdly
In some cases it is possible to find multiple servers that provide the same resource, sometimes a single DNS address round-robins to several IP addresses, or a server is part of a mirror network of some kind. And download managers/accelerators can detect this and apply the segmented transfer technique across multiple servers, allowing the downloader to get more collective bandwidth delivered to them.
Support
Supporting the first kind of acceleration is what I personally suggest as a "minimum" for support, mostly because it makes a user's life easier and reduces the aggregate data transfer you have to provide, since users don't have to fetch the same content repeatedly.
To facilitate this, it's recommended you compute how much they have transferred and don't expire the ticket until they look "finished" (while binding traffic to the first IP that used the ticket), or until a given "reasonable" time to download it has passed; i.e. give them a window of grace before requiring them to get a new ticket.
Supporting the second and third earns you bonus points, and users generally want at least the second, mostly because international customers don't like being treated as second-class customers simply because of their greater ping time, and it doesn't objectively consume more bandwidth in any sense that matters. The worst that can happen is that the extra total throughput is undesirable for how your service operates.
It's reasonably straightforward to deliver the first kind of benefit without allowing the second, simply by restricting the number of concurrent transfers from a single ticket.
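One hypothetical way to enforce that restriction server-side is a per-ticket counter of open connections. This is a sketch under the assumption that your server lets you hook connection open/close per ticket; `TicketLimiter` and its methods are made up for illustration:

```python
from collections import defaultdict

class TicketLimiter:
    """Allow ranged/resumed requests (acceleration type 1) while capping
    concurrent streams per ticket, so a segmented downloader cannot open
    many parallel connections (type 2) against the same ticket."""

    def __init__(self, max_concurrent=1):
        self.max_concurrent = max_concurrent
        self.active = defaultdict(int)  # ticket -> open connection count

    def try_open(self, ticket):
        """Call when a request arrives; False means reject (429/503)."""
        if self.active[ticket] >= self.max_concurrent:
            return False  # too many parallel streams on this ticket
        self.active[ticket] += 1
        return True

    def close(self, ticket):
        """Call when a transfer finishes or the connection drops."""
        self.active[ticket] = max(0, self.active[ticket] - 1)
```

With `max_concurrent=1`, resume-after-disconnect still works (the old connection is closed before the new one opens), but parallel segmented transfers are refused.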
I believe the idea is that many servers limit or evenly distribute bandwidth across connections. By having multiple connections, you're cheating that system and getting more than your "fair" share of bandwidth.
It's all about Little's Law. Specifically each stream to the web server is seeing a certain amount of TCP latency and so will only carry so much data. Tricks like increasing the TCP window size and implementing selective acks help but are poorly implemented and generally cause more problems than they solve.
Having multiple streams means that the latency seen by each stream is less important as the global throughput increases overall.
Another key advantage of a download accelerator, even when using a single stream, is that it's generally better than the web browser's built-in download tool. For example, if the web browser decides to die, the download tool will continue, and the download tool may support functionality like pausing/resuming that the built-in browser tool doesn't.
My understanding is that one method download accelerators use is by opening many parallel TCP connections - each TCP connection can only go so fast, and is often limited on the server side.
TCP is implemented such that if a timeout occurs, the timeout period is increased. This is very effective at preventing network overloads, at the cost of speed on individual TCP connections.
Download accelerators can get around this by opening dozens of TCP connections and dropping the ones that slow to below a certain threshold, then opening new ones to replace the slow connections.
While effective for a single user, I believe it is bad etiquette in general.
You're seeing the download accelerator trying to re-authenticate using the same transaction ticket - I'd recommend ignoring these requests.
From: http://askville.amazon.com/download-accelerator-protocol-work-advantages-benefits-application-area-scope-plz-suggest-URLs/AnswerViewer.do?requestId=9337813
Quote:
The most common way of accelerating downloads is to open up parallel downloads. Many servers limit the bandwidth of one connection, so opening more in parallel increases the rate. This works by specifying an offset at which a download should start, which is supported by HTTP and FTP alike.
Of course this way of acceleration is quite "unsocial". The limitation of bandwidth is implemented to be able to serve a higher number of clients, so using this technique lowers the maximum number of peers that are able to download. That's the reason why many servers limit the number of parallel connections (recognized by IP); e.g. many FTP servers do this, so you run into problems if you download a file and try to continue browsing using your browser, since technically these are two parallel connections.
Another technique to increase the download rate is a peer-to-peer network, where different sources (e.g. peers whose upload side is limited by asymmetric DSL) are used for downloading.
Most download 'accelerators' really don't speed up anything at all. What they are good at is congesting network traffic, hammering your server, and breaking custom scripts like you've seen. Basically, instead of making one request and downloading the file from beginning to end, an accelerator makes, say, four requests at the same time: the first downloads 0-25%, the second 25-50%, and so on. The only case where this helps is if the user's ISP or firewall does some kind of traffic shaping such that an individual download's speed is limited to less than their total download speed.
Personally, if it's causing you any trouble, I'd say just put a notice that download accelerators are not supported, and have the users download them normally, or only using a single thread.
They don't, generally.
To answer the substance of your question, the assumption is that the server is rate-limiting downloads on a per-connection basis, so simultaneously downloading multiple chunks will enable the user to make the most of the bandwidth available at their end.
Typically, download accelerators depend on partial content download - status code 206 Partial Content - just like streaming media players, which ask the server for a small chunk of the full file, download it, and play it. Now the catch is that if a server restricts partial content downloads, the download accelerator won't work. It's easy to configure a server like Nginx to restrict partial content downloads.
How to know if a file can be downloaded via ranges/partially?
Ans: check for the Accept-Ranges: header in the response. If it is present (with the value bytes), then you are good to go.
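A minimal sketch of that check, assuming you already have the response headers as a dict (note that Accept-Ranges: none explicitly advertises that ranges are not supported):

```python
def supports_ranges(headers):
    """Given a dict of HTTP response headers, report whether the server
    advertises byte-range support via 'Accept-Ranges: bytes'."""
    value = headers.get("Accept-Ranges", "none").strip().lower()
    return value == "bytes"
```

In practice you would send a HEAD request first and feed its headers into a check like this before deciding to segment the download.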
How to implement a feature like this in any programming language?
Ans: well, it's pretty easy. Just spin up some threads/coroutines (choose threads/coroutines over processes for I/O- or network-bound work) to download the N chunks in parallel, and save each partial chunk at the right position in the file; then you are technically done. Calculate the download speed by keeping a global variable downloaded_till_now = 0 and incrementing it as each thread completes a chunk. Don't forget a mutex, since we are writing to a global resource from multiple threads, so do a lock.acquire() and lock.release() around the update. Also keep a Unix-time counter, and do math like
speed_in_bytes_per_sec = downloaded_till_now/(current_unix_time-start_unix_time)
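The chunk-splitting and speed arithmetic described above can be sketched like this (function names are mine, not from any particular library; the ranges are inclusive, matching the bytes=start-end form of the Range header):

```python
def split_ranges(total_size, n_chunks):
    """Split a file of total_size bytes into n_chunks contiguous
    (start, end) byte ranges, inclusive, one per download thread.
    The last chunk absorbs any remainder."""
    base = total_size // n_chunks
    ranges = []
    for i in range(n_chunks):
        start = i * base
        end = total_size - 1 if i == n_chunks - 1 else (i + 1) * base - 1
        ranges.append((start, end))
    return ranges

def speed_bytes_per_sec(downloaded_till_now, start_unix_time, current_unix_time):
    """The running-speed formula from the answer above."""
    elapsed = current_unix_time - start_unix_time
    return downloaded_till_now / elapsed if elapsed > 0 else 0.0
```

Each thread would then request its `(start, end)` pair as `Range: bytes=start-end` and, under the shared lock, add the chunk size to `downloaded_till_now` when it finishes.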