I am new to EMV. I currently have an urgent EMV application development project; could anybody help me answer the questions below?
1. What is an EMV L2 application kernel? Is it an API or just an executable EMV application?
2. During an EMV payment transaction, what data (messages) need to be captured from the chip-and-PIN card so that they can be submitted to the card issuer for authorization? Which ISO specification should the payment transaction data follow?
3. What kind of connectivity is used between the EMV terminal and the acquirer? IP or serial port?
4. Are there any testing tools for EMV application development, such as an acquirer host simulator?
5. How much time will an EMV application development take?
1] What is an EMV L2 application kernel? Is it an API or just an executable EMV application?
It is more an API than an application. It's a piece of software that uses the underlying hardware to communicate with your EMV card and manages all of the EMV application-level protocol (APDUs). If you're developing for a specific payment terminal, you'll have to contact the manufacturer to buy its kernel (e.g. Ingenico, VeriFone). If you're developing for a PC solution, you can buy a generic kernel (e.g. EmvX). You probably don't want to write your own kernel; this blog estimates the cost of doing so:
EMV recommends taking around 18 months to develop and certify a contact kernel. [...] Something between 200’000 and 400’000 Euro is a normal value.
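To make the "application-level protocol (APDUs)" part concrete, here is a minimal sketch (not a kernel) using the JDK's built-in javax.smartcardio to exchange a single APDU with a contact card in a PC/SC reader; picking the first reader and selecting the contact Payment System Environment are just illustrative defaults. A real L2 kernel implements the whole EMV flow (application selection, data authentication, cardholder verification, risk management) on top of exchanges like this one.

    import javax.smartcardio.*;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public class ApduSketch {
        public static void main(String[] args) throws Exception {
            // Find a PC/SC reader and connect with whatever protocol the card offers
            List<CardTerminal> readers = TerminalFactory.getDefault().terminals().list();
            Card card = readers.get(0).connect("*"); // "*" = T=0 or T=1
            CardChannel channel = card.getBasicChannel();

            // SELECT the contact Payment System Environment by its DF name
            byte[] pse = "1PAY.SYS.DDF01".getBytes(StandardCharsets.US_ASCII);
            CommandAPDU select = new CommandAPDU(0x00, 0xA4, 0x04, 0x00, pse, 256);
            ResponseAPDU response = channel.transmit(select);

            // 0x9000 is the ISO 7816 status word for "success"
            System.out.printf("SW=%04X, %d data bytes%n", response.getSW(), response.getData().length);
            card.disconnect(false);
        }
    }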
2] During an EMV payment transaction, what data (messages) need to be captured from the chip-and-PIN card so that they can be submitted to the card issuer for authorization? Which ISO specification should the payment transaction data follow?
The documentation for the EMV protocol is publicly available at EMVco.com. An EMV card is a chip card, meaning you don't capture info from the card to later submit it to your bank (acquirer). Very briefly: your card provides its characteristics to your application and requires a variable set of parameters (e.g. amount, date, tip). Your application replies with the required info, and the card then decides whether to accept the transaction offline, accept it online (after validation by the issuer), or reject it.
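To give a feel for those parameters: each data element the card can request is identified by an EMV tag. A small hedged illustration (the amount, currency, and date values below are invented):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class TerminalData {
        public static void main(String[] args) {
            // Illustrative subset of terminal-supplied EMV data elements, keyed by tag.
            // The card's PDOL/CDOL lists tell the application exactly which tags it wants.
            Map<String, byte[]> terminalData = new LinkedHashMap<>();
            terminalData.put("9F02", new byte[] {0x00, 0x00, 0x00, 0x00, 0x10, 0x00}); // Amount, Authorised (BCD): 10.00
            terminalData.put("5F2A", new byte[] {0x09, 0x78});                         // Transaction Currency Code: 978 (EUR)
            terminalData.put("9A",   new byte[] {0x25, 0x01, 0x15});                   // Transaction Date (YYMMDD)
            terminalData.put("9C",   new byte[] {0x00});                               // Transaction Type: purchase
            terminalData.put("9F37", new byte[] {0x12, 0x34, 0x56, 0x78});             // Unpredictable Number
            terminalData.forEach((tag, value) -> System.out.println(tag + " -> " + value.length + " bytes"));
        }
    }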
3] What kind of connectivity is used between the EMV terminal and the acquirer? IP or serial port?
Between the terminal and the acquirer, it's most often a dial-up connection (60% of merchants in the U.S. in 2012), or otherwise an IP connection.
4] Are there any testing tools for EMV application development, such as an acquirer host simulator?
A bunch. You'll need a card issuer simulator (Visa, Mastercard, etc.) and an acquirer (bank) simulator, which will depend on the acquirer you're working with (in Canada, it could be Base24). You'll then need tools to troubleshoot communication problems between your application and the EMV card (e.g. SmartSpy), and finally tools to prepare for certification (e.g. from ICC Solutions or Fime).
5] How much time will an EMV application development take?
A lot. Where I work, it took a little more than a year for a team of 6 developers with strong experience in EMV transactions and payment applications to write a new payment application from scratch for an Ingenico terminal and get it ready for certification. One of the most painful parts is passing the certification tests.
Targeting a PC environment may make development easier (easier debugging, more online resources and documentation, etc.), but not having in-house skills and experience will significantly increase the cost.
I can at least add to Nicolas Riousset's answer on a couple of these.
1) I unfortunately do not have anything to add here.
2) The answer is to check the specification for what applies to your terminal, the CVMs (I believe) of both the terminal and the card, as well as any processor-specific requirements.
3) IP, yes, but there are established protocols, most using SSL these days. I believe even the dial-up numbers have dropped significantly as those 'dial-up' terminals have migrated to internet-based connections, but I don't drive POS terminals, so I can't definitively confirm that.
4) A single simulator platform could accomplish a lot of this, as getting a Base24, Postilion, Connex, or SmartVista is no small undertaking. We have the Visa & MasterCard simulators in-house, as well as a few others, and the Visa & MasterCard ones would be my last choice to pursue, as they are the least helpful for terminal-to-host work. My short list of ones to look at that can do acquirer, issuer, and processor simulation all on a single workstation would be the following; all have their quirks.
Paragon's FasTest
ACI Worldwide's "ASSET"
Clear2Pay's Lexcel (recently purchased by FIS)
5) Based on the complexity, nuances, backlog of talent, etc. around EMV, I think a year seems reasonable, if not longer.
I'm new to FIWARE and I'm disoriented by the large amount of information related to this platform and by the number of components, called Generic Enablers, that exist. I don't know where to start.
Any advice?
Thanks!
The best place to start would be the FIWARE Developer's Website - this gives you an overview of what FIWARE is and how the various enablers fit together:
Core Context Management holds the state of the system and allows other elements to subscribe to changes of state
IoT and Robots provides sensor data and transforms it into the NGSI standard interface
Processing/Analysis/Visualization provides dashboards and complex event processing
The pillar on the right - API Management - adds security, as well as interfaces for systems to publish data externally.
Deployment tools make the instantiation of a FIWARE system easier.
If you are looking for more information, the Tour Guide describes each enabler in depth and holds a series of links to videos, user guides, presentations and so on.
If you learn by doing, the Tutorials present a series of exercises to build up a "Powered by FIWARE" application - they also describe how the various Generic Enablers fit together. The first six tutorials concentrate on the use of the Orion Context Broker, which forms the central block of FIWARE.
Basically, the NGSI v2 standard provides a common interface that allows a series of disparate building blocks to talk to each other using a common language. These are things you're probably going to need in a generic smart application but that are not unique to your application - you provide the "special sauce" by either creating custom sensors sending context data into the system (i.e. the block at the bottom) or complex processing algorithms which read and alter the context state (i.e. the block at the top).
If you want to speed up development, you can buy-not-build and use the existing free open-source Generic Enablers. The whole system is modular, so it's easier to experiment and to add and refactor things as necessary.
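To make that common language concrete, here is a hedged sketch of creating a context entity in the Orion Context Broker through its NGSI v2 API, assuming a broker on the default port 1026 of localhost; the entity id, type, and attribute are invented:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OrionSketch {
        public static void main(String[] args) throws Exception {
            // NGSI v2 entity: an id, a type, and typed attributes
            String entity = "{\"id\": \"Room1\", \"type\": \"Room\","
                    + " \"temperature\": {\"value\": 23.5, \"type\": \"Number\"}}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:1026/v2/entities"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(entity))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode()); // 201 on success
        }
    }

Any other enabler (or your own code) can then read or subscribe to that same entity through the same interface, which is the point of the standard.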
In addition to Jason Fox's answer:
Every year the FIWARE Foundation organizes summits, which are pretty good. There are tutorials and hands-on sessions, and I think all the slides are available here. It could also be a good starting point.
On the other hand, most of the FIWARE software components (GEs) are available on Docker Hub. Therefore, if you feel comfortable using Docker, you can set up a bunch of FIWARE GEs in a few minutes.
Finally, there is the FIWARE-IoT-Stack website, where you can find a kind of FIWARE architecture for IoT. I do not know if it's official, but in my case it was very useful.
Regards!
Who writes the on-chip EMV application that communicates with the terminal?
Is it developed by entities like Visa and Mastercard and given to the issuing bank, or does the card-issuing bank develop it and load it onto the chip?
In the simplest terms:
Card manufacturers manufacture the card and also install the operating system and applet (if Java - open platform).
Card issuers personalize these cards (I guess you know what will be on an EMV card).
After personalization, the card is ready to use.
Anyone who writes an EMV application (either native or a JavaCard applet) and passes the payment scheme certification (functional and security) can sell their product.
The general idea is that the application developed has to pass the mandated certification tests before going to market and carrying the payment scheme's brand.
There is a specification (ISO/IEC 7816-4) which specifies the protocol of an EMV card. It is on the net - just google "emvco" and you will find it. The specification is complicated and takes time to understand. There is a YouTube channel at https://youtu.be/iWg8EBhsfjY explaining the specification in a simple way, especially for beginners. Check it out if you are interested.
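To give a beginner a head start, the command format ISO/IEC 7816-4 defines is just CLA INS P1 P2 [Lc data] [Le]. A sketch spelling out a SELECT-by-AID command byte by byte (the AID shown is the well-known Visa one):

    public class ApduLayout {
        public static void main(String[] args) {
            byte[] select = {
                (byte) 0x00,             // CLA: interindustry class
                (byte) 0xA4,             // INS: SELECT
                (byte) 0x04,             // P1: select by DF name (AID)
                (byte) 0x00,             // P2: first or only occurrence
                (byte) 0x07,             // Lc: 7 data bytes follow
                (byte) 0xA0, 0x00, 0x00, 0x00, 0x03, 0x10, 0x10, // data: Visa AID A0000000031010
                (byte) 0x00              // Le: expect up to 256 response bytes
            };
            System.out.println(select.length + "-byte SELECT command");
        }
    }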
When doing an EMV online transaction (ARQC), an EMV device needs to communicate with the issuer (or gateway) to get an approval/denial. I am writing POS software and need to support EMV, thus I need to support this interaction. What I can't seem to answer is: is it part of the EMV specification for the EMV device to communicate directly with the issuer over the internet? Or do I need to be looking for some sort of send function in the device's API?
I know this question could be directed at a hardware manufacturer's design, but I have read a few APIs for different EMV devices and none of them seem to detail this communication. Most of them have a function to initialize the EMV capabilities (with the transaction amount) and then a callback/event for when the transaction is completed. This leads me to believe that all I need to provide is a good internet connection to the device and the magic will happen.
As a followup to that, I see some devices have USB communications (instead of ethernet). These devices (obviously) couldn't talk directly to an outside network. Is it safe to assume these devices are going to do every EMV transaction offline? Or am I missing something?
As far as I have come to understand, EMV covers the finer details of the communication between a card and the reader device, then gives the procedures/standards to be followed when delivering that data online. Thus, once you have performed the local processing of the card, you will use whichever means you can find to deliver that info to an online acquirer (assuming it's an online transaction), and that communication must fulfill the EMV (and also PCI) security requirements. So yes, you will need an internet connection for online transactions. The part which will "encode" the data according to financial standards and protocols and send it to a specified acquirer/issuer will need to be created by the developer (you).
After much research and headaches, I think I have answered my own question. And the answer is... the Level 2 Kernel. This is the piece that I couldn't find anywhere, (I think) because the whole EMV thing is so new in America. As Peter posted in his reply, EMV covers the finer details between the card and the device, but only gives suggestions on delivering that data. The kernel is the "vehicle" by which the cryptogram (created by the card and device talking) is delivered to the card issuer for approval. Because the kernel is (usually) a piece of software running on a computer (as a network service, for example), it can both handle IP communication with the PIN pad and monitor a USB port.
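As a side note, the data the card returns (including that cryptogram) is BER-TLV encoded. Here is a rough sketch of walking such a buffer; the hex string is fabricated, with tag 9F26 (the Application Cryptogram itself) and tag 9F27 (Cryptogram Information Data), and real code would also have to handle multi-byte lengths and constructed tags:

    import java.util.HexFormat; // Java 17+

    public class TlvWalk {
        public static void main(String[] args) {
            byte[] data = HexFormat.of().parseHex("9F260811223344556677889F270180");
            int i = 0;
            while (i < data.length) {
                // BER-TLV: a second tag byte follows when the low 5 bits of the first are all 1s
                int tagLen = ((data[i] & 0x1F) == 0x1F) ? 2 : 1;
                String tag = HexFormat.of().formatHex(data, i, i + tagLen).toUpperCase();
                int len = data[i + tagLen] & 0xFF; // single-byte lengths only, for brevity
                int valueStart = i + tagLen + 1;
                String value = HexFormat.of().formatHex(data, valueStart, valueStart + len).toUpperCase();
                System.out.println(tag + " (" + len + " bytes): " + value);
                i = valueStart + len;
            }
        }
    }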
From there I realized that I had two choices: first, develop my own kernel and go through the whole PIN pad integration AND EMV certification (ummm, no thank you); or second, find a company that has already done the certification and pay for the use of their solution (yes please).
I found a company named CreditCall offering a product named ChipDNA, which is an interface to an existing certified Level 2 Kernel. They have Microsoft and Java integrations. They have already integrated with certain keypads and handle the entire EMV online communication (end to end). They offer a lite version of their API for free, which I created a little console program against. Worked like a champ! It cut a ton of development time and a big [certification] cost.
....now I've got to get the accountants to "ok" the ChipDNA cost.
I have a scenario for which I am looking for a message queue service that supports the following:
Ease of use
Very high performance
A message, once read, shouldn't be available to other consumers.
Should have the capability to delete a message once it is read.
A message, once published, should not get dropped.
The scenario I have is described below:
There are many publishers.
There will be many consumers.
The queuing server and consumers reside on the same machine, but the publishers reside on different machines.
Please let me know the best queuing service, apart from RabbitMQ and SQS, satisfying the above points.
I would recommend Apache Kafka: http://kafka.apache.org/
If you want to know a comparison between Kafka and RabbitMQ you should read this article: http://www.quora.com/RabbitMQ/RabbitMQ-vs-Kafka-which-one-for-durable-messaging-with-good-query-features
Also, you should take a look at this: ActiveMQ or RabbitMQ or ZeroMQ or
Kafka, as far as I know, is mainly meant for real-time data propagation, and I think my requirement doesn't call for something like Kafka. I have used SQS, but the only problem I have with SQS is high latency: the publisher pushes a message to the queue and the consumer keeps polling for new messages, and this implementation hits me with very high latency. My requirements are simple:
The queue service should have high availability and reliability, like SQS.
Latency should be very low, let's say not more than 10 ms (here the 10 ms includes publishing and receiving the message).
Also, my message size is very small, say not more than 20-30 bytes.
I have thought of using Redis, where I would push the messages to a list and workers would keep popping them back to back until the list becomes empty, but I have not done any benchmarking on that. So here I really need a suggestion so I go in the right direction.
Thanks,
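To sketch the Redis idea from the question (a hedged example using the Jedis client; the queue name and payload are invented): LPUSH plus a blocking BRPOP avoids the polling latency described with SQS, each message is delivered to exactly one worker, and the pop itself removes the message from the list, which covers the read-once and delete-once requirements.

    import redis.clients.jedis.Jedis;
    import java.util.List;

    public class RedisQueueSketch {
        public static void main(String[] args) {
            // Producer side: LPUSH is O(1), so enqueueing stays cheap under load
            try (Jedis producer = new Jedis("localhost", 6379)) {
                producer.lpush("jobs", "{\"id\":1,\"op\":\"charge\"}");
            }

            // Consumer side: BRPOP blocks until a message arrives - no polling loop
            try (Jedis consumer = new Jedis("localhost", 6379)) {
                List<String> msg = consumer.brpop(5, "jobs"); // returns [key, value] or null on timeout
                if (msg != null) {
                    System.out.println("got " + msg.get(1) + " from " + msg.get(0));
                }
            }
        }
    }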
For some of my system integration projects I have met MQ tasks. Several rich customers wanted production solutions like IBM WebSphere MQ, but I think that's too expensive and difficult.
I found and used a simple and stable analog: an e-mail server.
All integrated systems get local e-mail boxes. Messages are e-mails, with a command code in the subject and JSON in the attachments. The e-mail server listens to and dispatches all queues, to recipients or to groups of them. E-mail protocols are stable, and all developers know a lot of tools to work with them. Sysadmins and testers can use simple e-mail clients for testing and auditing. All e-mail servers have logging tools.
It's the best and easiest solution, and I suggest it for most integration projects.
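A minimal producer-side sketch of this pattern with the JavaMail API (the host, addresses, command code and payload are all invented for illustration):

    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeBodyPart;
    import javax.mail.internet.MimeMessage;
    import javax.mail.internet.MimeMultipart;
    import java.util.Properties;

    public class MailQueueSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.smtp.host", "mail.example.local"); // hypothetical internal SMTP server
            Session session = Session.getInstance(props);

            MimeMessage msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("billing@example.local"));
            msg.setRecipients(Message.RecipientType.TO, "dispatch@example.local");
            msg.setSubject("CMD-042"); // the command code rides in the subject line

            // The JSON payload travels as an attachment
            MimeBodyPart payload = new MimeBodyPart();
            payload.setFileName("payload.json");
            payload.setDisposition(MimeBodyPart.ATTACHMENT);
            payload.setText("{\"orderId\": 7, \"action\": \"ship\"}", "UTF-8");

            MimeMultipart body = new MimeMultipart();
            body.addBodyPart(payload);
            msg.setContent(body);
            Transport.send(msg);
        }
    }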
I have been writing software for several decades now, and these days everything is web.
Before the web we had client-server apps that were basically thick-client applications that spoke directly to the database. They had some disadvantages: deployment was cumbersome, and they did not scale because the DB handled all the traffic. Of course, back then, distribution of apps was limited to desktops on a corporate network. The benefit of these apps was that they had fewer layers and were quick to develop.
There are times when the requirements call for an app behind a firewall with a dedicated database and a relatively small number of clients. I suggest (sometimes on Stack Overflow) the old client/server type of architecture, and everybody looks at me like I have 3 legs and 6 arms.
With modern technologies that allow automatic deployment of apps, and the tools we have today, is there a reason this architecture is not viable? Is it that the new generation of developers only knows web stuff?
I can think of at least two large-ish markets where client-server is still big:
Online games and virtual worlds, such as Battlefield or Second Life. Usually you need a thick client plus a connection to a shared server.
Custom-made scientific software. Complex technical or scientific software, especially if it needs an interactive graphical UI that does direct manipulation, is sometimes written in this fashion too.
I'm sure thick clients are still being developed, even today.
Having said that, choosing a web-based architecture is not about the "new generation of developers" only knowing web stuff; you do get a lot of advantages if you can make your application web-based:
Deployment is dead simple. Even with things like ClickOnce, automatic updates, etc., nothing beats simply refreshing the page to get the latest version.
You can use something like Silverlight to get 99% of the benefits of a desktop application (in terms of the ability to run code on the client)
Web applications can be made available remotely much more easily than desktop applications (a lot of companies have remote workers these days, and setting up a VPN is a pain if all you want to do is access payroll (or whatever)).
But at the end of the day, it's all about the right tool for the job. Web applications don't help when you want to write plugins for Office (Word, Outlook, etc), they don't help if you have to control custom hardware (POS terminals, etc - although you could write that into the server in some cases...), and probably a few more cases as well.
We have some Flex apps that communicate with XML-based web services and are pretty close to old-school client-server apps. But rather than using SQL, they speak a custom XML language and render SOAP responses.
We currently develop and deploy numerous client/server applications annually. The development is simple and automated, and we are not limited in the database technologies we are able to deploy. Client/server deployments are faster for calculations, form updates, and reporting; web/cloud-based applications are less responsive than an application running on a client station (thick client).
This is because of the distribution of CPU load. Whereas a server-side application requires the server to perform all calculations, a client-side application can run them on the local machine. As any system gets more complex, the moments a user has to wait for results increase. These moments are expensive as they involve more paid employee time, and they add up within an organization to a great many "man hours" over a year.
The problem of updates is solved within our development tool set. Just as your favorite browser notices when the version you are using is not the most recent, we embed that same process within our client/server applications. In fact, we don't give users a choice about updating: since updates many times require database changes, we force the update to happen before the user is allowed to run the software.
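A hedged sketch of that kind of forced-update gate, checking a hypothetical server endpoint at startup before the application proper is allowed to run:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class UpdateGate {
        static final String CLIENT_VERSION = "2.4.1"; // baked in at build time

        public static void main(String[] args) throws Exception {
            // Ask the server for the minimum client version it will accept (invented endpoint)
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("https://appserver.example.local/api/min-client-version"))
                    .build();
            String required = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body().trim();

            if (CLIENT_VERSION.compareTo(required) < 0) { // naive compare; real code parses version parts
                System.err.println("Client " + CLIENT_VERSION + " is older than required " + required
                        + "; run the updater before launching.");
                System.exit(1); // hand off to the updater process here instead of just exiting
            }
            System.out.println("Version OK, starting application.");
        }
    }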
To improve the visibility of the information contained in our custom client/server systems, we offer custom-developed web sites with specific applications, such as field dispatch or customer support forum integration, alongside the desktop client/server applications. From my perspective, I see a complete integration of client-server and responsive web applications taking a better position in the years to come.