The payment was refused: Risk Score 100+ (Adyen)

While testing the integration of an Adyen library/SDK, a payment can be declined even when using the Adyen test cards.
The setup and code are correct; however, the risk score is 100 or more.
Is it possible to control or disable the Risk Rules so the integration can be tested without being affected by the risk factors?

There are two options to disable the Risk Rules. It is recommended to do this only when necessary (for example, at the start of implementing the integration with Adyen), and to make sure to test with the Risk Rules enabled afterwards.
Skip risk rules
During development/testing you can instruct the platform to skip risk checks (https://docs.adyen.com/risk-management/skip-risk-rules) by adding riskdata.skipRisk to the additionalData of the payment request, for example when you initiate the session:
    amount: {
        currency: "EUR",
        value: 1000
    },
    merchantAccount: "xyz",
    reference: "abc",
    additionalData: {
        "riskdata.skipRisk": true
    }
The transaction will then always have a Risk Score equal to 0.
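As a concrete sketch, the flag can be attached to any existing request body before it is sent to the test environment (the helper name withSkippedRisk is invented; the merchant values are the placeholders from the question):

```javascript
// Sketch: attach riskdata.skipRisk to a payment/session request body.
// Only do this against Adyen's TEST environment, never LIVE.
function withSkippedRisk(requestBody) {
  return {
    ...requestBody,
    additionalData: {
      ...(requestBody.additionalData || {}),
      "riskdata.skipRisk": true
    }
  };
}

const sessionRequest = withSkippedRisk({
  amount: { currency: "EUR", value: 1000 },
  merchantAccount: "xyz", // placeholder from the question
  reference: "abc"
});
```

Keeping the flag in one helper makes it easy to strip out before going live.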
Disable Risk Profile
Access the Risk Settings (if you are a Risk Admin) and turn off the Risk system completely (https://docs.adyen.com/risk-management/configure-risk-settings).

Related

How to send massive data of sensors in Orion

Suppose 100 sensors each send an attribute every second to Orion. How could I manage this volume of data?
via batch operations (but I don't know whether they can support this load)
using an edge node (to aggregate data) and sending it to Orion (e.g. after 1 minute)
Thank you
Let's take 100 tps as a high load for a given infrastructure (load and throughput must always be evaluated relative to the infrastructure and the end-to-end scenario).
The main problem you may encounter is not the updates themselves: Orion Context Broker and its fork Orion-LD can handle a lot of updates. The main problem in real/production scenarios, like the ones handled by Orion Context Broker and NGSIv2, is the NOTIFICATIONS triggered by those UPDATES.
If you need a 1:1 (or even a 1:2 or 1:4) ratio between UPDATES and NOTIFICATIONS (for example, you want to keep track of the history of every measure and also send the measures to a CEP for post-processing), then it's not only a matter of how many updates Orion can handle, but of how many update-notifications the end-to-end system can handle. If you have a slow notification endpoint, Orion will saturate its notification queues and you will lose notifications (so those updates won't reach the historic database or the CEP).
Batch updates don't help here, since the update request server is not the bottleneck and batches are internally processed as single updates.
To alleviate this problem I would recommend enabling the flow control mechanism (only available in NGSIv2), so the update process is automatically slowed down when the notification throughput requires it.
And of course, in any IoT scenario, if you don't need all the data, the earlier you aggregate the better. So if your end-to-end scenario doesn't need to keep track of every single measure, data loggers are more than welcome.
For 100 sensors sending one update per second (did I understand that correctly?) ... that's nothing. The broker can handle 2-3 thousand updates per second running on a single core with ~4 GB of RAM (MongoDB needs about 3 times that).
And, if it's more (a lot more), then yes, the NGSI-LD API defines batch operations (for Create, Update, Upsert, and Delete of entities), and Orion-LD implements them all.
However, there is no batch operation for attribute update. You'd need to use batch entity update in update mode (not replace mode). Check the NGSI-LD API spec for details.
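As a sketch of what such a batch could look like (the entity ids, the Sensor type and the temperature attribute are invented for illustration; the endpoint path in the comment is the standard NGSI-LD batch upsert path):

```javascript
// Sketch: build an NGSI-LD batch payload for 100 sensor readings.
// Each element is one entity carrying one Property attribute.
function buildBatch(readings) {
  return readings.map(({ sensorId, value, observedAt }) => ({
    id: `urn:ngsi-ld:Sensor:${sensorId}`,
    type: "Sensor",
    temperature: {
      type: "Property",
      value,
      observedAt
    }
  }));
}

const readings = Array.from({ length: 100 }, (_, i) => ({
  sensorId: i,
  value: 20 + (i % 5),
  observedAt: "2024-01-01T00:00:00Z"
}));

const batch = buildBatch(readings);
// POST the batch to /ngsi-ld/v1/entityOperations/upsert?options=update
// ("update" mode merges attributes instead of replacing the whole entity)
```

One POST per second carrying 100 entities is far cheaper than 100 individual updates, though (as the answer above notes) the notification side remains the real bottleneck.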

Dealing with exceptions in an event driven world

I'm trying to understand how exceptions are handled in an event-driven world using microservices (with Apache Kafka). For example, take the following order scenario, in which these actions need to happen before the order can be completed.
1) Authorise the payment with the payment service provider
2) Reserve the item from stock
3.1) Capture the payment with the payment service provider
3.2) Order the item
4) Send an email notification accepting the order, with a receipt
At any stage in this scenario, there could be a failure such as:
The item is no longer in stock
The payment information was incorrect
The account the payer is using doesn't have the funds available
External calls, such as those to the payment service provider, fail (for example due to downtime)
How do you track that each stage has been initiated and/or completed?
How do you deal with issues that arise? How would you notify the frontend of the failure?
Some of the things you describe are not errors or exceptions, but alternative flows that you should consider in your distributed architecture.
For example, that an item is out of stock is a perfectly valid alternative flow in your business process. One that possibly requires human intervention. You could move the message to a separate queue and provide some UI where a human operator can deal with the problem, solve it and cause the flow of events to continue.
A similar thing could be said of the payment problems you describe. If an order cannot successfully be settled, a human operator will need to investigate the case and solve it. For that matter, your design must contemplate that alternative flow and make it possible for a human to intervene when messages end up in a queue that requires a person to review them.
Those cases should be differentiated from errors or exceptions thrown by the program. Those, depending on the circumstances, might in fact require moving the message to a dead-letter queue (DLQ) for an engineer to look at.
This is a very broad topic and entire books could be written about it.
I believe you could probably benefit from gaining more understanding of concepts like:
Compensating Transactions Pattern
Try/Cancel/Confirm Pattern
Long Running Transactions
Sagas
The idea behind compensating transactions is that every yin has its yang: if you have one transaction that can place an order, then you can undo it with a transaction that cancels that order. This latter transaction is a compensating transaction. So, if you carry out a number of successful transactions and then one of them fails, you can retrace your steps, compensate every successful transaction you did and, as a result, revert their side effects.
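The trace-back-and-compensate logic can be sketched like this (all step names are invented; a real implementation would also persist the saga's state so it survives a crash):

```javascript
// Sketch of a compensating-transaction runner: each step pairs an action
// with the transaction that undoes it. On failure, every completed step
// is compensated in reverse order.
async function runSaga(steps) {
  const done = [];
  try {
    for (const step of steps) {
      await step.action();
      done.push(step);
    }
    return { ok: true };
  } catch (err) {
    for (const step of done.reverse()) {
      await step.compensate();
    }
    return { ok: false, error: err.message };
  }
}
```

For the order example, the steps would be pairs like reserveStock/releaseStock and authorisePayment/voidPayment.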
I particularly liked a chapter in the book REST from Research to Practice. Its chapter 23 (Towards Distributed Atomic Transactions over RESTful Services) goes deep in explaining the Try/Cancel/Confirm pattern.
In general terms it implies that when you run a group of transactions, their side effects are not effective until a transaction coordinator gets a confirmation that they all succeeded. For example, if you make a reservation on Expedia and your flight has two legs with different airlines, then one transaction would reserve a flight with American Airlines and another one would reserve a flight with United Airlines. If your second reservation fails, then you want to compensate the first one. But not only that: you want to avoid the first reservation becoming effective until you have been able to confirm both. So, the initial transaction makes the reservation but keeps its side effects pending confirmation, and the second reservation does the same. Once the transaction coordinator knows everything is reserved, it can send a confirmation message to all parties so that they confirm their reservations. If reservations are not confirmed within a sensible time window, they are automatically reversed by the affected system.
The book Enterprise Integration Patterns has some basic ideas on how to implement this kind of event coordination (e.g. see process manager pattern and compare with routing slip pattern which are similar ideas to orchestration vs choreography in the Microservices world).
As you can see, being able to compensate transactions can be complicated, depending on how complex your distributed workflow is. The process manager may need to keep track of the state of every step and know when the whole thing needs to be undone. This is pretty much the idea of Sagas in the microservices world.
The book Microservices Patterns has an entire chapter called Managing Transactions with Sagas that delves in detail on how to implement this type of solution.
A few other aspects I also typically consider are the following:
Idempotency
I believe that a key to a successful implementation of your service transactions in a distributed system is making them idempotent. Once you can guarantee that a given service is idempotent, you can safely retry it without worrying about causing additional side effects. However, just retrying a failed transaction won't solve your problems.
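A minimal sketch of an idempotent consumer, assuming each message carries a unique id (in production the processed-id set would live in durable storage shared by all consumer instances, not in memory):

```javascript
// Sketch: wrap a handler so that a redelivered message with an id we have
// already seen produces no additional side effects.
function makeIdempotent(handler) {
  const processed = new Set();
  return (message) => {
    if (processed.has(message.id)) return false; // duplicate: skip
    handler(message);
    processed.add(message.id);
    return true; // first delivery: handled
  };
}
```

With Kafka's at-least-once delivery, this kind of deduplication is what makes retries safe in the first place.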
Transient vs Persistent Errors
When it comes to retrying a service transaction, you shouldn't retry just because it failed. You must first know why it failed, and depending on the error it may or may not make sense to retry. Some types of errors are transient: for example, if a transaction fails due to a query timeout, it's probably fine to retry and it will most likely succeed the second time. But if you get a database constraint violation (e.g. because a DBA added a check constraint to a field), there is no point in retrying that transaction: no matter how many times you try, it will fail.
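A sketch of that classification (the error codes in TRANSIENT are examples; you would map your own exception types and codes into the two buckets):

```javascript
// Sketch: retry only errors classified as transient; rethrow persistent
// errors immediately, since retrying them can never succeed.
const TRANSIENT = new Set(['ETIMEDOUT', 'ECONNRESET', 'QUERY_TIMEOUT']);

async function callWithRetry(fn, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retriable = TRANSIENT.has(err.code) && attempt < maxAttempts;
      if (!retriable) throw err; // persistent error, or out of attempts
    }
  }
}
```

In a real system you would also add backoff between attempts so a struggling downstream service isn't hammered.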
Embrace Error as an Alternative Flow
As mentioned at the beginning of my answer, not everything is an error. Some things are just alternative flows.
In those cases of interservice communication (computer-to-computer interactions), when a given step of your workflow fails, you don't necessarily need to undo everything you did in previous steps. You can just embrace error as part of your workflow. Catalog the possible causes of error and make them an alternative flow of events that simply requires human intervention: just another step in the full orchestration that requires a person to intervene to make a decision, resolve an inconsistency in the data or approve which way to go.
For example, maybe while you're processing an order, the payment service fails because there aren't enough funds. There is no point in undoing everything else; all you need is to put the order in a state from which a problem solver can address it in the system and, once fixed, continue with the rest of the workflow.
Transaction and Data Model State are Key
I have discovered that this type of transactional workflow requires a good design of the different states your model has to go through. As in the Try/Cancel/Confirm pattern, this implies initially applying side effects without necessarily making the data model available to the users.
For example, when you place an order, maybe you add it to the database in a "Pending" status that will not appear in the UI of the warehouse systems. Once payments have been confirmed the order will then appear in the UI such that a user can finally process its shipments.
The difficulty here is designing the transaction granularity in such a way that, even if one step of your transaction workflow fails, the system remains in a valid state from which you can resume once the cause of the failure is corrected.
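One way to make those valid states explicit is a small transition table (the state and event names are illustrative, not a prescription):

```javascript
// Sketch of an order state machine: only the listed transitions are legal,
// so a failed step can never leave an order in an impossible state.
const TRANSITIONS = {
  PENDING:      { PAYMENT_CONFIRMED: 'CONFIRMED', PAYMENT_FAILED: 'NEEDS_REVIEW' },
  NEEDS_REVIEW: { RESOLVED: 'PENDING', CANCELLED: 'CANCELLED' },
  CONFIRMED:    { SHIPPED: 'SHIPPED' }
};

function applyEvent(state, event) {
  const next = (TRANSITIONS[state] || {})[event];
  if (!next) throw new Error(`illegal transition: ${state} + ${event}`);
  return next;
}
```

Note how the "Pending" order from the example above can only become visible to the warehouse by passing through PAYMENT_CONFIRMED, and a payment failure routes it to a human-review state instead of an error.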
Designing for Distributed Transactional Workflows
So, as you can see, designing a distributed system that works this way is a bit more complicated than individually invoking distributed transactional services. Every service invocation may now fail for a number of reasons and leave your distributed workflow in an inconsistent state. Retrying the transaction may not always solve the problem, and your data needs to be modeled like a state machine, such that side effects are applied but not confirmed until the entire orchestration is successful.
That's why the whole thing may need to be designed differently than you typically would in a monolithic client-server application. Your users may now be part of the designed solution when it comes to resolving conflicts, and you must contemplate that transactional orchestrations could take hours or even days to complete, depending on how their conflicts are resolved.
As I was originally saying, the topic is way too broad and it would require a more specific question to discuss, perhaps, just one or two of these aspects in detail.
At any rate, I hope this somehow helped you with your investigation.

TVR bits match TAC Online, but transaction does NOT go online?

I have a scenario where the EMV contactless card image (American Express) is SUPPOSED to decline offline; however, the Ingenico PIN pad goes online and approves, while the VeriFone declines offline.
Even though the scenario says it SHOULD decline offline, I am convinced it should go ONLINE. I think the VeriFone result is a false positive and the Ingenico is doing the right thing by going ONLINE.
The purpose of this scenario is to ensure that the terminal declines a transaction offline when CDA fails.
The card image has an IAC Denial of "0000000000" and IAC Online of "F470C49800".
The TVR that gets generated during the first GENERATE AC (1AC) is '0400008000'.
The TAC Denial is set to "0010000000" and the TAC Online is set to "DE00FC9800".
TVR = "0400008000"
IAC_Denial = "0000000000"
TAC_Denial = "0010000000"
IAC_Online = "F470C49800"
TAC_Online = "DE00FC9800"
When comparing the TVR to the TAC Denial (which should happen first according to EMV Book 3, Terminal Action Analysis), there are NO matching bits. So the next step is to match the TVR against the TAC Online. When comparing the TVR bits to the TAC Online, the matching bits are: "CDA Failed" and "Exceeds Floor Limit".
This indicates to me that this should go ONLINE; however, as previously stated the scenario is ensuring that it declines OFFLINE.
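That comparison can be checked mechanically. In EMV Book 3 terms, a decision applies when TVR AND (TAC OR IAC) is non-zero; a sketch using the values above (BigInt is used because the 5-byte values exceed 32 bits, the limit of JavaScript's bitwise operators on numbers):

```javascript
// Sketch of the Terminal Action Analysis bit test with the question's values.
const hex = (s) => BigInt('0x' + s);

const TVR       = hex('0400008000');
const iacDenial = hex('0000000000'), tacDenial = hex('0010000000');
const iacOnline = hex('F470C49800'), tacOnline = hex('DE00FC9800');

// Denial check first: no bits match, so no offline decline (AAC).
const declineOffline = (TVR & (iacDenial | tacDenial)) !== 0n;

// Online check next: "CDA Failed" and "Exceeds Floor Limit" bits match.
const goOnline = (TVR & (iacOnline | tacOnline)) !== 0n;
```

With these values, declineOffline is false and goOnline is true, which supports the conclusion that, by the book, the terminal should request online processing.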
In a nutshell, the VeriFone PIN pad appears to give a false positive by declining OFFLINE without applying the Terminal Action Analysis logic.
However, the Ingenico seems to be doing the right thing by going ONLINE.
Is there something that I am missing?
Are there any configurations that can override the Terminal Action Analysis matching of the TVR against the TACs, to prevent a transaction from going online?
Could this be an issue with the VeriFone kernel?
Thanks.
I often got this error when my POS terminal was not properly configured.
Often, scenarios like this one have thresholds to configure in your terminal according to its standards. For instance, my terminal was configured according to the SEPA-FAST standards.
There was a threshold for the maximum amount to approve offline. This is useful for merchants that want to approve small amounts offline for effectiveness and speed when they have long lines of customers to process (think of a cafeteria or a bus line). Of course, this is slightly risky, and many merchants won't approve high amounts without an online approval, to reduce their losses due to invalid/fraudulent payments.
In my opinion, your offline floor limit looks fine: the transaction amount exceeds it, and it is flagged for the reasons I explained above. Perhaps your maximum threshold is badly configured: most scenarios require you to set a maximum amount threshold over which the transaction is refused offline.
One other thing that could be wrong is your EMV Terminal Capabilities (tag 9F33), which may support Online PIN authentication when it shouldn't. Maybe you aren't using the terminal configuration prescribed by the scenario. What is your CVM? Should it be supported by your terminal? For NFC transactions there is also the Terminal Transaction Qualifiers (TTQ, tag 9F66), which plays a similar role in defining what a terminal can and cannot do. Maybe your terminal should be offline-only in this scenario; this could be the case for pizza deliveries or situations where an internet connection is not available.

How to store json-patch operations in redis queue and guarantee their consistency?

I have a collaborative web application that handles JSON objects like the following:
var post = {
    id: 123,
    title: 'Sterling Archer',
    comments: [
        {text: 'Comment text', tags: ['tag1', 'tag2', 'tag3']},
        {text: 'Comment test', tags: ['tag2', 'tag5']}
    ]
};
My approach is to use the RFC 6902 (JSON Patch) specification with the jsonpatch library for patching JSON documents. All such documents are stored in a MongoDB database, which, as you know, is very slow for frequent writes.
To get more speed under high load, I use Redis as a queue for patch operations like the following:
{ "op": "add", "path": "/comments/2", "value": {"text": "Comment test3", "tags": ["tag4"]} }
I store all such patch operations in the queue, and at midnight a cron script takes all the patches, reconstructs the full document and updates it in the MongoDB database.
What I don't understand yet is what I should do in the case of a corrupted patch like:
{ "op": "add", "path": "/comments/0/tags/5", "value": 'tag4'}
The patch above doesn't get applied to the document above, because the tags array has only 3 elements (according to the official spec, https://www.rfc-editor.org/rfc/rfc6902#page-5):
The specified index MUST NOT be greater than the number of elements in the array.
So while the user is online he doesn't get any errors, because his patch operations are stored in the Redis queue; but the next day he gets a broken document, due to the broken patch that didn't get applied by the cron script.
So my question is: how can I guarantee that all patches stored in the Redis queue are correct and won't corrupt the primary document?
As with any system that can become inconsistent, you should apply patches as quickly as possible if you wish to catch conflicts sooner and decrease the likelihood of running into them. That is likely your main issue if you are not notifying the other clients of updated data as soon as possible (and are just waiting for the cron job to update the shared data that the other clients can access).
As others have asked, it's important to understand how a "bad" patch got into the operation queue in the first place. Here are some guesses from my standpoint:
A user had applied some operations that got lost in translation. How? I don't know, but it would explain the discrepancy.
Operations are not being applied in the correct order. How? I don't know. I have no code to go off of.
Although I have no code to go off of, I can take a shot in the dark and help you analyze the latter point. The first thing we need to analyze is the different scenarios that may come up with updating a "shared" resource. It's important to note that, in any system that must eventually be consistent, we care about the:
Order of the operations.
How we will deal with conflicts.
The latter is really up to you, and you will need a good notification/messaging system to update the "truth" that clients see.
Scenario 1
User A applies operations 1 & 2. The document is updated on the server and then User B is notified of this. User B was going to apply operations 3 & 4, but these operations (in this order) do not conflict with operations 1 & 2. All is well in the world. This is a good situation.
Scenario 2
User A applies operations 1 & 2. User B applies operations 3 & 4.
If you apply the operations atomically per user, you can get the following queues:
[1,2,3,4] [3,4,1,2]
Anywhere along the line, if there is a conflict, you must notify either User A or User B based on "who got there first" (or any other weighting semantics you wish to use). Again, how you deal with conflicts is up to you. If you have not read up on vector clocks, you should do so.
If you don't apply operations atomically per user, you can get the following queues:
[1,2,3,4] [3,4,1,2] [1,3,2,4] [3,1,4,2] [3,1,2,4] [1,3,4,2]
As you can see, forgoing atomic updates per user increases the combinations of updates and will therefore increase the likelihood of a collision happening. I urge you to ensure that operations are being added to the queue atomically per user.
A Recap
Some important things you should remember:
Make sure updates to the queue are atomically applied per user.
Figure out how you will deal with several versions of a shared resource arising from multiple mutations from different clients (again I suggest you read up on vector clocks).
Don't update a shared resource that may be accessed by several clients in real time via a cron job.
When there is a conflict that cannot be resolved, figure out how you will deal with it.
As a result of point 3, you will need a notification system so that clients can get updated resources quickly. As a result of point 4, you may choose to tell clients that something went wrong with their update. It has just occurred to me that you're already using Redis, which has pub/sub capabilities.
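On the narrower question of keeping bad patches out of the queue: one concrete safeguard is to validate each patch against an up-to-date shadow copy of the document before it is enqueued. A deliberately minimal sketch for the "add" op only (this is not a full RFC 6902 implementation; your jsonpatch library may already offer a validation function):

```javascript
// Sketch: reject an "add" patch whose target parent doesn't exist or whose
// array index violates RFC 6902 ("MUST NOT be greater than the number of
// elements in the array").
function validateAdd(doc, patch) {
  const parts = patch.path.split('/').slice(1);
  let node = doc;
  for (const part of parts.slice(0, -1)) {
    node = Array.isArray(node) ? node[Number(part)] : node[part];
    if (node === undefined) return false; // parent path does not exist
  }
  const last = parts[parts.length - 1];
  if (Array.isArray(node)) {
    return last === '-' || Number(last) <= node.length;
  }
  return typeof node === 'object' && node !== null;
}
```

Run against the question's document, the valid patch ("/comments/2" on a 2-element array) passes and the corrupted one ("/comments/0/tags/5" on a 3-element array) is rejected at enqueue time, while the user is still online to see the error.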
EDIT:
It seems like Google Docs handles conflict resolutions with transformations. That is, by shifting whole characters/lines over to make way for a hybrid application of all operations: https://drive.googleblog.com/2010/09/whats-different-about-new-google-docs_22.html
As I said before, it's all up to how you want to handle your own conflicts, which should largely be determined by the application/product itself and its use cases.
IMHO you are introducing unneeded complexity instead of using a simpler solution. These are my alternative suggestions, instead of your approach of a JSON-patch cron job, which is very hard to make consistent and atomic:
1. Use MongoDB only: with proper database design and indexing, and proper hardware allocation/sharding, MongoDB's write performance is really fast. And the kinds of operations you are using in JSON Patch are natively supported on MongoDB BSON documents by its query language, e.g. $push, $set, $inc, $pull, etc.
2. Use task queues and MongoDB: perhaps you don't want to interrupt the user's activity with a synchronous write to MongoDB; the solution for that is an asynchronous task queue. Instead of storing patches in Redis as you do now, push the patching task to a task queue, which will perform the MongoDB update asynchronously, so the user will not experience any slow performance. One very good task queue is Celery, which can use Redis as its broker and messaging backend. Each user update becomes a single task that the queue applies to MongoDB, with no performance hit.
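As a sketch of how the question's patches could map onto native MongoDB update operators (only the array cases from the question are handled; a full translator would need $position for inserts at arbitrary indexes, and the field-name derivation here is an illustrative simplification):

```javascript
// Sketch: translate an RFC 6902 patch into a MongoDB update document, so
// the queue can carry DB-ready operations instead of raw patches.
function toMongoUpdate(patch) {
  const parts = patch.path.split('/').slice(1);
  const index = parts.pop();            // e.g. "2", assumed to be an array index
  const field = parts.join('.');        // "/comments/2" -> "comments"
  if (patch.op === 'add') {
    return { $push: { [field]: patch.value } }; // append to the array
  }
  if (patch.op === 'replace') {
    return { $set: { [`${field}.${index}`]: patch.value } };
  }
  throw new Error(`unsupported op: ${patch.op}`);
}
```

An invalid index then simply can't corrupt the stored document, because $push always appends and $set targets an explicit position.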

How to use a DHT for a social trading environment

I'm trying to understand if a DHT can be used to solve a problem I'm working on:
I have a trading environment where professional option traders can get an increase in their risk limit by requesting that fellow traders lend them some of their risk limit. The lending trader can either search for traders with certain risk parameters which are part of every trader's profile, i.e. Greeks, or the lending trader can subscribe to requests from certain traders who are looking for risk.
I want this environment to be scalable and decentralized, but I don't know how traders can search for specific profile parameters when the data is contained in a DHT. Could anybody explain how this can be done?
Update:
An example that might make this easier to understand is SO itself, except that instead of running as a web application, the Risk Exchange runs as a desktop application on each trader's workstation. Requests for risk are like questions (which may be tagged by contract, exchange, etc.), and each user has a profile showing their history of requests, their return on borrowed risk, etc.
Obviously the "exchange" can be run on a server, but I was hoping to decentralize it and make it scalable so that the system may support an arbitrary number of traders. How can I search for keywords, tags, and other data pertaining to a trader's profile if this information is stored in a distributed hash table?
Your question contains a contradiction, to my ears. A DHT is a great way of distributing data in a decentralized manner, but it cannot provide the nodes with an overview of the data. This means that any overview action, such as querying the network for certain data, has to be done at a centralized collection point. Solutions to this contradiction have been created, but their fault tolerance does not match what a critical system such as financial trading requires.
So my answer would be to use a centralized server to hold an overview cache of the DHT network.