Specifically, how does an EMV device talk with the card issuer?

When doing an EMV online transaction (ARQC), an EMV device needs to communicate with the issuer (or a gateway) to get an approval or denial. I am writing POS software and need to support EMV, so I need to support this interaction. What I can't seem to answer is: is it part of the EMV specification for the EMV device to communicate directly with the issuer over the internet? Or do I need to be looking for some sort of send function in the device's API?
I know this question could be directed at a hardware manufacturer's design, but I have read the APIs for a few different EMV devices and none of them seem to detail this communication. Most of them have a function to initialize the EMV capabilities (with the transaction amount) and then a callback/event for when the transaction is completed. This leads me to believe that all I need to provide is a good internet connection to the device, and the magic will happen.
As a follow-up to that, I see some devices have USB communications (instead of Ethernet). These devices obviously can't talk directly to an outside network. Is it safe to assume these devices are going to do every EMV transaction offline? Or am I missing something?

As far as I have come to understand, EMV covers the finer details of the communication between a card and the reader device, then gives the procedures/standards to be followed when delivering that data online. Once you have performed the local processing of the card, you use whichever means you can find to deliver that info to an online acquirer (assuming it's an online transaction), and that communication must fulfill the EMV (and also PCI) security requirements. So yes, you will need an internet connection for online transactions. The part that "encodes" the data according to financial standards and protocols and sends it to a specified acquirer/issuer will need to be created by the developer (you).
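To make that "encode and send" step concrete, here is a minimal sketch of the kind of packaging involved: wrapping the card's TLV-encoded EMV data (the tags that carry the ARQC) into a simplified ISO 8583-style authorization request. The tag values and field layout here are illustrative assumptions only; a real integration follows the acquirer's own message specification.

```python
# Illustrative sketch only: a simplified ISO 8583-style authorization request.
# Real field layouts, bitmaps, and transport framing come from your acquirer's spec.

def tlv(tag: str, value: bytes) -> bytes:
    """Encode one EMV TLV element (single-byte length; enough for this sketch)."""
    return bytes.fromhex(tag) + bytes([len(value)]) + value

# EMV tags commonly carried to the issuer in "field 55" (ICC data):
icc_data = b"".join([
    tlv("9F26", bytes.fromhex("A1B2C3D4E5F60718")),  # Application Cryptogram (ARQC)
    tlv("9F27", bytes.fromhex("80")),                # Cryptogram Information Data
    tlv("9F36", bytes.fromhex("001B")),              # Application Transaction Counter
    tlv("95",   bytes.fromhex("0000008000")),        # Terminal Verification Results
    tlv("9A",   bytes.fromhex("250101")),            # Transaction date (YYMMDD)
    tlv("9C",   bytes.fromhex("00")),                # Transaction type (purchase)
    tlv("9F02", bytes.fromhex("000000012345")),      # Amount, authorised
    tlv("9F37", bytes.fromhex("DEADBEEF")),          # Unpredictable number
])

# A heavily simplified stand-in for the ISO 8583 "0100" authorization request:
auth_request = {
    "mti": "0100",              # message type: authorization request
    "pan": "476173******0119",  # masked here; never log a full PAN (PCI)
    "amount": "000000012345",
    "field55": icc_data.hex(),
}
print(auth_request)
```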

After much research and headaches, I think I have answered my question. And it is..... the Level 2 kernel. This is the piece I couldn't find anywhere because (I think) the whole EMV thing is so new in America. As Peter posted in his reply, EMV covers the finer details between the card and the device, but only gives suggestions on delivering that data. The kernel is the "vehicle" by which the cryptogram (created by the card and device talking) is delivered to the card issuer for approval. Because the kernel is usually a piece of software running on a computer (as a network service, for example), it can house both IP communication with the PIN pad and monitoring of a USB port.
From there I realized that I had two choices: first, develop my own kernel and go through the whole PIN pad integration AND EMV certification (ummm, no thank you); or second, find a company that has already done the certification and pay for use of their solution (yes please).
I found a company named CreditCall offering a product named ChipDNA, which is an interface to an existing certified Level 2 kernel. They have Microsoft and Java integrations. They have already integrated with certain PIN pads and handle the entire EMV online communication (end to end). They offer a lite version of their API for free, which I built a little console program against. Worked like a champ! Cut a ton of development time off and a big [certification] cost.
....now I've got to get the accountants to "ok" the ChipDNA cost.


EMV on chip application

Who writes the on-chip EMV application that communicates with the terminal?
Is it developed by entities like Visa and Mastercard and given to the issuing bank, or does the card-issuing bank develop it and load it onto the chip?
In the most simple terms:
Card manufacturers manufacture the cards and also install the operating system and applet (if Java / Open Platform).
Card issuers personalize these cards (I guess you know what will be in an EMV card).
After personalization, the card is ready to use.
Anyone who writes an EMV application (either native or a JavaCard applet) and passes the payment scheme certification (functional and security) can sell the product.
The general idea is that the application developed has to pass the mandated certification tests before going to market and carrying the payment scheme brand.
There is a specification (building on ISO/IEC 7816-4) which specifies the protocol of an EMV card. It is on the net: just google EMVCo and you will find it. The specification is complicated and takes time to understand. Here is a YouTube channel
https://youtu.be/iWg8EBhsfjY
explaining the specification in a simple way, especially for beginners. Check it out if you are interested.
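For a feel of what that protocol looks like on the wire, here is a small sketch of the SELECT command APDU a terminal sends to pick a payment application by its AID, following the CLA/INS/P1/P2/Lc layout from ISO/IEC 7816-4. The AID shown is Visa's well-known value; the rest is a minimal illustration.

```python
# Sketch: constructing an ISO/IEC 7816-4 SELECT-by-name APDU.
# CLA=00, INS=A4 (SELECT), P1=04 (select by DF name/AID), P2=00.

def build_select_apdu(aid_hex: str) -> bytes:
    aid = bytes.fromhex(aid_hex)
    return bytes([0x00, 0xA4, 0x04, 0x00, len(aid)]) + aid + b"\x00"  # trailing Le=00

visa_aid = "A0000000031010"  # well-known AID for Visa credit/debit
apdu = build_select_apdu(visa_aid)
print(apdu.hex().upper())    # -> 00A4040007A000000003101000
```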

EMV application development Questions [closed]

I am new to EMV, and I currently have an urgent EMV application development project. Could anybody help me answer the questions below?
1. What is an EMV L2 application kernel? Is it an API or just an executable EMV application?
2. During an EMV payment transaction, what data (message) needs to be captured from the Chip & PIN card so that it can be submitted to the card issuer for authorization? Which ISO specification does the payment transaction data follow?
3. What kind of connectivity is used between the EMV terminal and the acquirer? IP or serial port?
4. Are there any testing tools for EMV application development, such as an acquirer host simulation?
5. How much time will an EMV application development take?
1] What is an EMV L2 application kernel? Is it an API or just an executable EMV application?
It is more an API than an application. It's a piece of software that uses the underlying hardware to communicate with your EMV card, and it manages all of the EMV application-level protocol (APDUs). If you're developing for a specific payment terminal, you'll have to contact the manufacturer to buy its kernel (e.g. Ingenico, VeriFone). If you develop for a PC solution, you can buy a generic kernel (e.g. EmvX). You probably don't want to write your own kernel; this blog estimates the cost of doing so:
EMV recommends taking around 18 months to develop and certify a contact kernel. [...] Something between 200'000 and 400'000 Euro is a normal value.
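To illustrate "more an API than an application", here is a rough sketch of the shape such a kernel integration takes. Every name in it is invented for illustration (a stub class stands in for the certified kernel); a real kernel from Ingenico, VeriFone, or a generic vendor defines its own interface.

```python
# Sketch of the shape of an L2 kernel API, using a stub class.
# All names here are invented for illustration; a real kernel
# (Ingenico, VeriFone, EmvX, ...) defines its own interface.

class StubKernel:
    """Stands in for a certified L2 kernel: it drives the card dialogue
    (APDUs) and reports the card's decision back to the application."""

    def start_transaction(self, amount, currency, on_complete):
        # A real kernel would exchange APDUs with the card here.
        class Result:
            cryptogram_type = "ARQC"   # the card asked to go online
            icc_data = bytes.fromhex("9F2608A1B2C3D4E5F60718")
        on_complete(Result())

def on_complete(result):
    if result.cryptogram_type == "ARQC":
        print("Go online: send", result.icc_data.hex(), "to the acquirer")
    elif result.cryptogram_type == "TC":
        print("Approved offline")
    else:
        print("Declined (AAC)")

StubKernel().start_transaction(amount=12345, currency="0840", on_complete=on_complete)
```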
2] During an EMV payment transaction, what data (message) needs to be captured from the Chip & PIN card so that it can be submitted to the card issuer for authorization? Which ISO specification does the payment transaction data follow?
The documentation for the EMV protocol is publicly available at EMVCo.com. An EMV card is a chip card, meaning you don't capture info from the card to later submit it to your bank (acquirer). Very briefly: your card provides its characteristics to your application and requests a variable set of parameters (e.g. amount, date, tip, etc.). Your application replies with the required info, and the card then decides whether it accepts the transaction offline, accepts it online (after validation by the issuer), or rejects it.
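As a concrete illustration of that decision step: the terminal ends the dialogue with a GENERATE AC command asking the card for a cryptogram, and the card answers with a TC (offline approval), an ARQC (go online), or an AAC (decline). A minimal sketch of building that command, with dummy CDOL data:

```python
# Sketch: the GENERATE AC command APDU (EMV Book 3).
# CLA=80, INS=AE; P1 encodes what the terminal asks for:
#   0x00 = AAC (decline), 0x40 = TC (offline approve), 0x80 = ARQC (go online).

def generate_ac(p1: int, cdol_data: bytes) -> bytes:
    return bytes([0x80, 0xAE, p1, 0x00, len(cdol_data)]) + cdol_data + b"\x00"

ARQC = 0x80
cdol_data = bytes.fromhex("000000012345" + "000000000000" + "0840")  # dummy CDOL1 fields
print(generate_ac(ARQC, cdol_data).hex().upper())
# Note: the card may "downgrade" the request (e.g. answer an ARQC request
# with an AAC) -- the card, not the terminal, makes the final call.
```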
3] What kind of connectivity is used between the EMV terminal and the acquirer? IP or serial port?
Between terminal and acquirer it's most often a dial-up connection (60% of U.S. merchants in 2012) or an IP connection.
4] Are there any testing tools for EMV application development, such as an acquirer host simulation?
A bunch. You'll need a card issuer simulator (Visa, Mastercard, etc.) and an acquirer (bank) simulator, which will depend on the acquirer you're working with (in Canada, it could be Base24). You'll then need tools to troubleshoot communication problems between your application and the EMV card (e.g. SmartSpy), and eventually tools to prepare for certification (e.g. from ICC Solutions or Fime).
5] How much time will an EMV application development take?
A lot. Where I work, it took a little more than a year for a team of six developers with strong experience in EMV transactions and payment applications to write a new payment application from scratch for an Ingenico terminal and get it ready for certification. One of the most painful parts is passing the certification tests.
Targeting a PC environment may make development easier (easier debugging, more online resources and documentation, etc.), but not having in-house skills and experience will significantly increase the cost.
I can at least add to Nicolas Riousset's answer for a couple of these.
1) I unfortunately do not have anything to add here.
2) Check the specification for what applies to your terminal, the CVMs (cardholder verification methods) of both the terminal and the card, and any processor-specific requirements.
3) IP, yes, but there are established protocols, most using SSL these days. I believe even the dial-up share has dropped significantly, as those 'dial-up' terminals have migrated to internet-based connections, but I don't drive POS terminals, so I can't definitively confirm that.
4) A single simulator platform could accomplish a lot of this, as getting a Base24, Postilion, Connex, or SmartVista is no small undertaking. We have the Visa & Mastercard simulators in-house as well as a few others, and the Visa & Mastercard ones would be my last choice to pursue, as they are the least helpful for terminal-to-host work. My short list of tools that can do acquirer, issuer, and processor simulation all on a single workstation is the following; all have their quirks.
Paragon's FasTest
ACI Worldwide's "ASSET"
Clear2Pay's Lexcel (recently purchased by FIS)
5) Based on the complexity, nuances, backlog of talent, etc. around EMV, I think a year seems reasonable, if not longer.

In which domains are message oriented middleware like AMQP useful?

What problems do MOM (Message-Oriented Middleware) systems solve? Scalability? Integration?
In which domains are they typically used, and in which are they typically not?
For example, say, is Google using such a solution for its main search engine or to power Gmail?
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Does it make any sense when you're writing a webapp where you control the server side and have a homogeneous environment (say, tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, web browsers?
Does it make sense for desktop apps that need to communicate with a server?
Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another?
I'm a bit confused as to what they're useful for, and I think that with examples of where they're appropriate and where they're not, I could better understand their use.
This is a great question.
The main uses of messaging are: scaling, offloading work, integration, monitoring, event handling, routing, networking, push, mobility, buffering, queueing, task sharing, alerts, management, logging, batch, data delivery, pubsub, multicast, audit, scheduling, ... and more. Basically: anything where you need data but don't want to make a database request. (Caching is another, longer story).
Another way of looking at this is to notice that many applications used to be built by assuming that users (people) would perform actions that would be fulfilled by executing a transaction on a database (including reads and writes). But today, many actions are not user-initiated. Instead they are application-initiated. For example, "tell me when the book that I want to buy is in stock". The best way to solve this class of problems is with messaging of some sort. Whether you call it middleware or web push or real time salad dressing does not matter. It's all messaging.
When you enable applications to initiate or react to events, then it is much easier to scale because your architecture can be based on loosely coupled components. It is also much easier to integrate those components if your messaging is based on a stable, scalable, serviceable tool, preferably using open standard APIs and protocols.
I hope this helps. We try to maintain a list of useful links about messaging here
Please get in touch with questions and comments on any of this, we are dead easy to find.
To address your specific questions:
In which domains are they typically used, and in which are they typically not?
Like databases, messaging systems crop up everywhere.
For example, say, is Google using such a solution for its main search engine or to power Gmail?
Google uses a lot of home grown technology, but a lot of their open source contributions and known use cases suggest that messaging is (or should be) central to some of the main services.
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Very much so.
An example use case is scaling web page requests. When the user makes a web request, the web server puts it onto a queue for background processing. This means that the web server can keep working while the request is processed. It also means that the web server does not need to know how the request is handled, making system maintenance, upgrades, and rollbacks much simpler because the main parts are 'decoupled'.
So, anyway, the web request gets processed by a back-end service, or possibly by many services, e.g. 'look up book titles', 'draw shopping cart', 'get advertisement', 'check user account'... Finally all the results get put onto another queue, ready for collection and user response by the web server. Typically the system will include a timeout of around 100 ms so that any late requests just get thrown away. The user sees everything that got processed within the time interval. This is one reason why some large e-commerce sites have pages that appear to load in stages.
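Here is a minimal sketch of that pattern using RabbitMQ's Python client (pika), assuming a broker on localhost; the queue name and payload are made up for the example:

```python
# Sketch: offloading web requests onto a RabbitMQ work queue (pika client).
# Assumes a broker on localhost; queue name and payload are illustrative.
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="page_requests", durable=True)

# --- web tier: enqueue the request and keep serving ---
ch.basic_publish(
    exchange="",
    routing_key="page_requests",
    body=json.dumps({"user": 42, "action": "draw shopping cart"}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# --- background worker: consume and process ---
def handle(channel, method, properties, body):
    request = json.loads(body)
    print("processing", request)              # look up titles, ads, account...
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="page_requests", on_message_callback=handle)
ch.start_consuming()  # in practice the worker runs as its own process
```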
There are many more use cases...
Does it make any sense when you're writing a webapp where you control the server side and have a homogeneous environment (say, tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, web browsers?
Definitely. If you have an unknown, or unbounded, number of users, server side instances, and application latencies, then it makes sense to use messaging, even if just as a scalable substrate for non-blocking RPC.
Does it make sense for desktop apps that need to communicate with a server?
In lots of cases. One very common case is when the server pushes events to the desktop app, e.g. game events, tweets, price feeds in finance, system alerts...
Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another?
Definitely not only for those 'legacy integration' cases but they are important too. At RabbitMQ, the biggest customers we have in terms of pure scale or message volume are cloud providers and big web application providers.
I will answer only one part, from prior experience: take a look at the middleware that is employed by big companies. Middleware has one purpose: to glue disconnected systems (written in disparate languages) together so that they can interact with one another and streamline the business process. Entera, which I have had experience with, creates a middle layer in which a Unix box running processes written in C interacts with a mainframe system (DB2, COBOL) via a front end written in PowerBuilder (I am not naming the company!).
From the description I have given, Entera is a middleware which handles a number of things: smooth integration of the flow of data regardless of endian format, and the ability for different languages to talk to the middleware broker (a broker is a CORBA- or DCE-like process, conforming to The Open Group standards, that listens on a particular port), which is specified by an IDL that makes a process appear to be local. If you understand the terminology used in Remoting under Microsoft's .NET Framework, you are not far off the mark! The middleware generates stubs which are linked at compile time, and it manages the creation of the process, hosting it off a port, and multi-threading at run time. Modern front ends (such as .NET, Java, PowerBuilder, even the unspeakable VB6...ok, VB.NET for the purists out there) can interact by opening a connection to the specified port on a particular IP address and, using the generated stubs, can interact with it directly.
Obviously, from what was described, you can see how legacy systems can have new life breathed into them, and thus the process gains scalability. The major downside of this is the cost factor, which can run into thousands of dollars. Big companies who use mainframes as their back-end processing systems for billing/invoicing, and who generate huge revenue, can obviously afford such an expensive product; to them it would seem like throwing pennies into a pool of water, because middleware prolongs the business process and breathes new life into it, extending the business a good number of years into the future without the worry of a 'legacy' tag attached to it.
Incidentally, I carried this out as part of my thesis for my BSc in Information Systems, which covered this commercial front end. There was an open-source version of the middleware available on SourceForge called FreeDCE, but development efforts have declined or stopped.
Edit:
@cocotwo: That is exactly what middleware does; as you said, it is a plumbing tool. Message-oriented middleware is not really heard of much, AFAIK, because, I would imagine, the processes (functions) need to be called as if they were locally visible within the application domain of the front end to make them easy to interact with.
Using messages may have advantages over RPC calls in that messages are queued in a safe holding area in the event that a network disconnection occurs; there may be some data caching going on within that aspect to allow the front end to continue regardless. It would be useful in instances like 'updating the status of a particular billing/invoice number': a one-way data write to the back end via the middleware.
Granted, big companies have advanced systems infrastructure, with technicians around the clock to ensure smooth data flow, so that would have to be factored in. The company that I worked with had an IBM Global Support contract to fulfill in order to ensure maximum uptime of 99.999999% (six nines after the decimal point), with hot-swapping, balanced clusters, and mirroring systems in place.
Whereas with RPC, if a disconnection occurs, the front end has to be restarted or has to handle the disconnection event. It really depends whether the message-queueing middleware handles each message in real time and passes results back to the front end immediately.
This is where each (message-queueing and RPC-related middleware) has its strengths and weaknesses, and also the cost-mitigation factors such as support, maximum uptime, development effort, and training. That's a biggie here, as middleware is really proprietary (despite following The Open Group layouts/standards) and complex to set up and glue together via scripts.
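To make the store-and-forward point above concrete, here is a tiny self-contained sketch of a client-side outbox: messages are queued locally first and delivery is retried until the (simulated) network cooperates. The `deliver` function is a stand-in for whatever transport the middleware actually provides.

```python
# Sketch: a client-side store-and-forward outbox.
# `deliver` stands in for the real middleware send; here it just fails
# randomly to simulate a flaky network.
import collections
import random
import time

outbox = collections.deque()

def deliver(message) -> bool:
    """Pretend network send: fails roughly half the time."""
    return random.random() > 0.5

def send(message):
    outbox.append(message)   # write-ahead: queue first, so nothing is lost

def flush():
    while outbox:
        message = outbox[0]
        if deliver(message):
            outbox.popleft()
            print("delivered:", message)
        else:
            print("network down, will retry:", message)
            time.sleep(1)    # back off before retrying

send({"invoice": 1001, "status": "paid"})
send({"invoice": 1002, "status": "pending"})
flush()
```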
Good answers and discussion here. Our consulting team has two preferred "messaging" solutions: RabbitMQ and NXTera, a high-speed RPC middleware and the contemporary version of the Entera mentioned above. My partners and I have developed several solutions using RabbitMQ; it is the best tool available in that space right now. Additionally, I happen to work for the company that makes NXTera/Entera.
From experience I can clearly say that both of these products meet the need for reliability and low maintenance discussed above. There are situations where a messaging service, like RabbitMQ, is the right choice: where publish/subscribe, certified delivery, queueing, or store-and-forward are required.
In other cases, RPCs (remote procedure calls) are the best and fastest solutions for transactional and distributed processing for enterprise or cloud-based applications. When it is right to use an RPC, but SOAP/.NET (yes, these are RPC implementations) are too slow, expensive, or complex, a lightweight high-speed RPC middleware like NXTera/Entera is the right choice for us.
There is some use-case overlap between RPC middleware and message-oriented middleware, and where there is, you can use either successfully. But both are strong and dependable choices.
The large companies I work with use both RPC and MOM side by side. As far as internet companies go, Google (Protocol Buffers) and Facebook (Thrift) show that RPCs have a role to play in modern web and cloud-based development.

Should I code for browser or PC? (fleet management)

I have to architect a commercial vehicle fleet tracking system.
Each vehicle (a few hundred, at most a few thousand) will have a GPS and satellite transmitter and will periodically report its position. Positions will be stored in a database and used to create a Google Map.
There will of course be other functionality: security, log-in, etc., and probably lots of interaction with other corporate databases (drivers' start/stop times for salary purposes, etc.).
Question: pure Google Maps is probably best implemented as a browser-based app (PHP & MySQL?), but with the additional functionality of a commercial vehicle fleet tracking system, would it be better to do something PC-based (Windows/Linux)?
Any other advice? Thanks
I think that with the capabilities of modern browsers, along with various mature client-side frameworks, we are witnessing an ever-thinner distinction between web and desktop interfaces.
You may want to take into consideration that a web application automatically solves some important problems for you:
Distribution: No need to distribute your application. Simply provide a URL.
Updates: Upgrading and fixing problems in your software will be easier and quicker if you distribute it through a web interface.
Security: Deriving from the above, you are able to fix security vulnerabilities more promptly.
Compatibility: Your application will be able to work on any operating system that can launch a web browser.
Last but not least, remember that the Google Maps API is not free for this type of application. Article 10.9.C of the Google Maps API Terms and Conditions explicitly restricts using the standard Google Maps API for fleet management and asset tracking. You would need Google Maps API Premier to legally use Google Maps for your application.
According to one unofficial source (dated April 2008), this would cost USD 10,000 per year, which entitles you to track 100 vehicles. If you exceed the 100 vehicles, you would need to add USD 24 per additional vehicle per year.
Implement a solution for the domain problems first. That means data storage, data transmission between vehicles and your system, and methods of data analysis, aggregation, and visualisation.
These will likely sit as a headless system on a server and provide remote access in both directions: to input data and to query data.
Now, PC or web is more a question of presentation on the client side. You can build both if you like: a web client as well as a desktop application can serve as a client to the remote data and operational server.
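As a minimal sketch of that headless-server idea (Flask and SQLite chosen purely for brevity; the PHP/MySQL stack from the question would work the same way), vehicles POST position reports and any client, browser or desktop, can query them:

```python
# Sketch: a headless position service -- vehicles POST reports, clients GET them.
# Flask + SQLite chosen for brevity; any web stack and database would do.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
db = sqlite3.connect("fleet.db", check_same_thread=False)
db.execute("""CREATE TABLE IF NOT EXISTS positions
              (vehicle TEXT, lat REAL, lon REAL, reported_at TEXT)""")

@app.route("/positions", methods=["POST"])
def report_position():
    p = request.get_json()
    db.execute("INSERT INTO positions VALUES (?, ?, ?, ?)",
               (p["vehicle"], p["lat"], p["lon"], p["reported_at"]))
    db.commit()
    return "", 204

@app.route("/positions/<vehicle>", methods=["GET"])
def latest_position(vehicle):
    row = db.execute("""SELECT lat, lon, reported_at FROM positions
                        WHERE vehicle = ? ORDER BY reported_at DESC LIMIT 1""",
                     (vehicle,)).fetchone()
    if row is None:
        return jsonify({}), 404
    return jsonify({"lat": row[0], "lon": row[1], "reported_at": row[2]})

if __name__ == "__main__":
    app.run()  # both the map UI (browser) and a desktop client can call this
```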
Don't forget that you can always host a web control in a thick client app. This is actually trivial with .Net on the Windows platform with the IE control. You can also access the browser's DOM this way and do some neat things. So just because there's a strong web component to what you're doing you're not necessarily "stuck" writing a pure web app.
One big question is what kind of hardware you'll be able to put in the vehicles. Will they be laptops or small PCs with full-fledged OSes, or something more mobile like CE or a pared-down Linux distro?
Google Maps is JavaScript-based, so you can do most things with it, e.g. browser-based apps, widgets, etc. However, due to the licensing, Google won't allow you to use it in anything other than an internet environment unless you use their Enterprise License.
In terms of integrating it with other systems, it's really difficult to say what's best without knowing what other software you are using, what protocols it uses, whether web services are available, etc. I agree with Daniel, though, in that any distributed system not implemented in a browser had better have some good reasons for it, simply because the benefits are substantial. You'll need to weigh them up, though, with a full breakdown of all the different systems you will need to interact with, and work out what fits best.
The great thing is that with it being JavaScript based you have a lot of flexibility in what you can do with it.
This is more an extension to Daniel Vassallo's answer. Although a web-based application would solve most problems, there may be the small potential issue of bandwidth usage and reception for internet access. This may or may not be an issue for the fleet management, depending on how it is tackled on the hardware side of things.
An offline solution may assist with this issue, but a clever architect could create an initial web-based solution that is complemented by an offline application which picks up the slack and/or provides predictive reasoning until a connection is re-established.

Why is Google's "face recognition" feature available only in Picasa WEB and not Picasa for the PC?

A friend asked me this today.
Picasa Web has a cool (and frightening :-) feature where it will recognize all the faces in your photo album.
But the PC (desktop) version doesn't have this.
Several reasons I can think of:
They just haven't gotten around to writing the PC version of the code.
They are licensing that feature and it costs a lot more (or isn't available) on the PC.
It takes a lot of processing power (this seems odd because MY PC cycles are free to Google, whereas they have to pay for cycles consumed on their servers).
Any other thoughts?
I'm certain it'll make it out in coming releases but Google is a funny company when it comes to its own competing/complementing services. One thing is for sure, only somebody on the Picasa team could give an accurate answer.
But we could hypothesise several things...
They don't want their code reverse-engineered.
(As you say), they aren't licensed to redistribute it.
It's blocked in the dev version by other new features that aren't complete yet
They don't want to release it because they want people to use PicasaWeb as a social photo network.
I don't think processing power is an issue. If they're running it in bulk on their own servers for free, a modern desktop could probably run it without issue.
From my limited contact with face recognition software, it's probably the redistribution issue. When I dealt with it, face recognition was its own little world with extremely high per-CPU licensing costs and tremendous paranoia about code getting loose.
I'm not so sure it's not a processing issue. It took Google's massive servers 30 minutes to run through all my photos. I can only imagine that same task would have taken days on my local machine.
Actually, it's in, just with limited functionality: when you do a search, there's an icon to find only photos with faces. The experimental passport feature also works that way.
So the answer is:
The same base (APIs) is not available or used, and it's not the same language, so it's not directly portable.
They are not the same software, and there are no stated goals to make both apps feature-equivalent.
Programmers are limited, and their time is too. They make choices as to what to implement now.
No idea if this is the case for Picasa, but there's another case where licensing could be the issue: if the server-side code uses code under a restrictive copyleft license (the GPL, for example) which restricts how you can distribute modules using the code. Running such a module on a web server, where the user only gets the output, is legal under such licenses. If that code were distributed, many legal requirements would attach, which would likely be very undesirable for commercial software companies, including Google. This is one very good reason to have some capabilities accessible only through web services.
This was also the case with Riya (who was arguably the first to market with reliable facial recognition for consumer photo collections).
The biggest reasons are likely:
Processing time (they can't control how fast your CPU is, and therefore they can't control the experience). Facial recognition is very likely to be process-intensive (this was Riya's stated reason for not doing it client-side).
The recognition process requires a LARGE volume of data for processing that is only accessible on the server (in other words, the process needs to spin through millions of faces, not just the faces that you have on your hard drive).