With Braintree how to not save duplicate cards when creating a transaction?

I know that when I create a payment method I can use failOnDuplicatePaymentMethod to block duplicate cards. But when I use the storeInVaultOnSuccess option with Braintree_Transaction::sale, what is the best way to avoid saving the card if it is already saved?
Edit:
Let me clarify my situation to make sure. On my checkout page I am currently using this JavaScript:
braintree.setup(
  myToken,
  'custom',
  {
    id: 'my-form-id',
    hostedFields: {
      ...
    },
    onPaymentMethodReceived: function(obj) {
      ...
    },
    onError: function(obj) {
      ...
    }
  }
);
The customer fills in their CC number, CVV and expiration date, clicks submit, and then that onPaymentMethodReceived callback fires. In that JS callback I make an AJAX call to my back-end and pass the nonce. On the back-end I call Braintree_Transaction::sale in order to charge the customer.
I always need Braintree_Transaction::sale to complete successfully so that the sale goes through. In addition to this sale, I want the card to be saved if the customer has checked "save my card" and the card isn't already saved.
On this checkout page, the customer does have the option to select a saved card instead of inputting all their card info again, but they may type all the card info in again (for an already saved card) instead of selecting the saved card.
How would you do it given this setup? Does your setup below still apply? If so, how exactly would I integrate it with my setup above? Or do I need to rearrange my UI/UX for this (I think this is a pretty standard checkout flow)?

Full disclosure: I work at Braintree. If you have any further questions, feel free to contact support.
There isn't a way to prevent duplicate payment methods when making a Braintree_Transaction::sale API call. However, you can still achieve your goal with some settings on your client. Here are those steps:
On your server, create a client token and include the customerId and failOnDuplicatePaymentMethod parameters:
```
$clientToken = $gateway->clientToken()->generate([
    "customerId" => "aCustomerId",
    "options" => [
        "failOnDuplicatePaymentMethod" => true
    ]
]);
```
Use this client token as your authorization when creating the Braintree client instance:
```
var createClient = require('braintree-web/client').create;

createClient({
  authorization: CLIENT_AUTHORIZATION
}, function (createErr, clientInstance) {
  // ...
});
```
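From there, a custom Hosted Fields integration like the one in the question would hand the resulting clientInstance to Hosted Fields and tokenize on submit. A rough sketch (the selectors, form id, and the sendNonceToServer helper are assumptions based on the question's setup, not Braintree sample code):
```
var createHostedFields = require('braintree-web/hosted-fields').create;

createHostedFields({
  client: clientInstance,
  fields: {
    number: { selector: '#card-number' },
    cvv: { selector: '#cvv' },
    expirationDate: { selector: '#expiration-date' }
  }
}, function (createErr, hostedFieldsInstance) {
  document.querySelector('#my-form-id').addEventListener('submit', function (event) {
    event.preventDefault();
    hostedFieldsInstance.tokenize(function (tokenizeErr, payload) {
      if (tokenizeErr) {
        // With failOnDuplicatePaymentMethod baked into the client token, a
        // duplicate card is expected to surface as an error here rather than
        // producing a nonce (see the docs quoted below).
        return console.error(tokenizeErr);
      }
      // Equivalent of the old onPaymentMethodReceived callback: post the
      // nonce to the back-end, which runs Braintree_Transaction::sale.
      sendNonceToServer(payload.nonce); // hypothetical AJAX helper
    });
  });
});
```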
Per Braintree's docs on generating a client token:
If [the failOnDuplicatePaymentMethod] option is passed and the same payment method has already been added to the Vault for any customer, the request will fail. This can only be passed if a $customerId is passed as well. If the check fails, this option will stop the Drop-in from returning a $paymentMethodNonce. This option will be ignored for PayPal, Pay with Venmo, Apple Pay, and Google Pay payment methods.
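On the server, the charge itself can stay exactly as in the question; the duplicate check now happens before a nonce is ever produced. A minimal PHP sketch, assuming the same $gateway instance as above and a hypothetical $saveCard flag driven by the "save my card" checkbox:
```
$result = $gateway->transaction()->sale([
    'amount' => '10.00',                          // example amount
    'paymentMethodNonce' => $nonceFromTheClient,  // nonce posted by the AJAX call
    'options' => [
        'submitForSettlement' => true,
        // Only ask the Vault to store the card when the customer opted in.
        'storeInVaultOnSuccess' => $saveCard
    ]
]);

if ($result->success) {
    // The sale went through; the card was vaulted only if $saveCard was true.
} else {
    // Handle validation or processor errors.
}
```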

REST API which needs multiple different resources?

I'm designing a REST API for running jobs on virtual machines in different domains (Active Directory domains; virtual machines with the same name can exist in different domains).
/domains
/domains/{dname}
/domains/{dname}/vms
/domains/{dname}/vms/{cname}
And for jobs, which will be stored in a database
/jobs
/jobs/{id}
Now I need to add a new API for the following user stories.
As a user, I want to run a job (just job definition, not the stored job) on an existing VM.
As a user, I want to run a job (just job definition, not the stored job) on VM named x, which may or may not exist. The system should create the VM if x doesn't exist.
How should the API be designed?
Approach 1:
PUT /domains/{dname}
{ "state": "running_job", "vm": "vm_name", "job_definition": { .... } }
Approach 2:
PUT /domains/{dname}/vms/{vm_name}
{ "state": "running_job", "job_definition": { .... } }
Approach 3:
PUT /jobs
{ "state": "running", "domain": "name", "vm": "vm_name", "job_definition": { .... } }
Approach 4: create a new resource, saying scheduler,
PUT /scheduler
{ "domain": "name", "vm": "vm_name", "job_definition": { .... } }
(what if I need to update some attributes of scheduler in the future?)
In general, how should a REST API URL which needs multiple resources be designed?
How should the API be designed?
How would you design this on the web?
There would be an HTML form, right? With a bunch of input controls to collect information from the operator about what job to use, which VM to target, and so on. The operator would fill in the details and submit the form. The browser would then use the form to create the appropriate HTTP request to send to the server (the request-target being computed from the form metadata).
Since the server gets to decide what the request-target should be (benefits of using hypertext), it can choose any resource identifier it wants. In HTTP, a successful unsafe request happens to invalidate previously cached responses with the same request target, so one possible strategy is to consider which is the most important resource changed by successfully handling the request, and use that resource as the target.
In this specific case, we might have a resource that represents the job queue (e.g. /jobs), and what we are doing here is submitting a new entry to the queue, so we might expect:
POST /jobs HTTP/1.1
....
If the server, in its handling of the request, also creates new resources for the specific job, then those would be indicated in the response:
HTTP/1.1 201 Created
Location: /jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef
....
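For completeness, a sketch of what the request itself might carry in its body, borrowing the fields from approach 3 (the exact body shape is up to you):
```
POST /jobs HTTP/1.1
Content-Type: application/json

{ "domain": "name", "vm": "vm_name", "job_definition": { .... } }
```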
Could you instead just use PUT?
PUT /jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef HTTP/1.1
???
Yes, if (a) the client knows what spelling to use for the request-target and (b) the client knows what the representation of the resource should look like.
Which unsafe HTTP method you use in the messages that trigger your business activities doesn't actually matter very much. You need to use the methods correctly (so that general-purpose HTTP connectors don't get misled).
In particular, the important thing to remember about PUT is that the request body should be a complete representation of the resource - in other words, the request body for a PUT should match the response body of a GET. Think "save file"; we've made local edits to our copy of a resource, and we send back a copy of the entire document.
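Under that constraint, the PUT variant would carry the whole job document rather than just the fields being changed; a sketch with made-up values, reusing the fields from approach 3:
```
PUT /jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef HTTP/1.1
Content-Type: application/json

{
  "id": "931a8a02-1a87-485a-ba5b-dd6ee716c0ef",
  "state": "running",
  "domain": "name",
  "vm": "vm_name",
  "job_definition": { .... }
}
```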

Building a card and updating it after fetching data in Google Apps Script

I am trying to build a Gmail addon which includes 2 external API calls. The first one is fast (~200ms) and the second one is slow (~5s). Because of this I would like to first build the card with the results of the first fetch, and then update the card after the second call finishes.
Would it be possible to either:
Call fetchAll and build and render the card each time a request finishes
Trigger a function after the initial rendering is done (after return card.build())
Update the root card without returning it (I tried CardService.newNavigation().popToRoot().updateCard(card.build()) without success)
Any preferred way to render a card and then update it after data is fetched would be appreciated!
Below is an example function if useful.
function onGmailMessage(e) {
  // Fetching email
  var messageId = e.gmail.messageId;
  var accessToken = e.gmail.accessToken;
  GmailApp.setCurrentMessageAccessToken(accessToken);
  var message = GmailApp.getMessageById(messageId);
  // Preparing requests
  var data = {
    'text': message.getPlainBody(),
  };
  var options = {
    'method' : 'post',
    'contentType': 'application/json',
    'payload' : JSON.stringify(data)
  };
  // Fetching responses. Here I would love to first display
  // createCard(response_1) and then, when the second call finishes,
  // return createCard(response_1 + '\n' + response_2)
  var response_1 = UrlFetchApp.fetch('http://API_1/', options);
  var response_2 = UrlFetchApp.fetch('http://API_2/', options);
  return createCard(response_1 + '\n' + response_2);
}
Answer:
Unfortunately, this is not possible to do.
More Information:
This is a bit tricky so I'll split this answer down into your three points:
[Is it possible to] call fetchAll and build and render the card each time a request finishes?
A fetchAll call could be made to get all of the API responses (see the sketch below), but you'll still end up waiting for API 2 to respond before updating what can be seen in the card.
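For illustration, a sketch of what that would look like with UrlFetchApp.fetchAll, reusing the data object from the question:
```
// fetchAll fires both requests, but still blocks until the slowest one
// finishes, so the card can only be returned after API 2 responds anyway.
var requests = [
  { url: 'http://API_1/', method: 'post', contentType: 'application/json', payload: JSON.stringify(data) },
  { url: 'http://API_2/', method: 'post', contentType: 'application/json', payload: JSON.stringify(data) }
];
var responses = UrlFetchApp.fetchAll(requests);
return createCard(responses[0] + '\n' + responses[1]);
```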
Beyond that, the problem is that in order to display the rendered card, you need to return it. Once you return the card built from the first API's response, the second API call won't be made at all, as the function will have already finished executing. Which leads on to point two:
[Is it possible to] trigger a function after the initial rendering is done (after return card.build())
I did a test with this: instead of returning API 1's response directly, I stored its value in a Script Property and made a trigger execute 200 ms later with the call to API 2:
function onGmailMessage(e) {
  // previous code
  var response_1 = UrlFetchApp.fetch('http://API_1/', options);
  ScriptApp.newTrigger("getSecondResponse").timeBased().after(200).create();
  PropertiesService.getScriptProperties().setProperty('response1', response_1);
  return createCard(response_1);
}

function getSecondResponse() {
  // options 2 definition here;
  var response_1 = PropertiesService.getScriptProperties().getProperty("response1");
  var response_2 = UrlFetchApp.fetch('http://API_2/', options);
  return createCard(response_1 + '\n' + response_2);
}
and adding the correct scopes in the manifest:
{
  "oauthScopes": [
    "https://www.googleapis.com/auth/script.external_request",
    "https://www.googleapis.com/auth/script.locale",
    "https://www.googleapis.com/auth/gmail.addons.current.action.compose",
    "https://www.googleapis.com/auth/gmail.addons.execute",
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/script.scriptapp"
  ]
}
And while this did call the first API, display the response in the card and create the trigger, the card didn't update. I presume this is because the trigger acts as a cron job executed from somewhere which isn't the add-on itself, so the second card return is never seen in the UI.
[Is it possible to] update the root card without returning it (I tried CardService.newNavigation().popToRoot().updateCard(card.build()) without success)
updateCard() is a method of the Navigation class. There's a whole page in the documentation which details the uses of card navigation, but the important part to take away here is that the navigation methods are used in response to user interaction. From the documentation:
If a user interaction or event should result in re-rendering cards in the same context, use Navigation.pushCard(), Navigation.popCard(), and Navigation.updateCard() methods to replace the existing cards.
The following are navigation examples:
If an interaction or event changes the state of the current card (for example, adding a task to a task list), use updateCard().
If an interaction or event provides further detail or prompts the user for further action (for example, clicking an item's title to see more details, or pressing a button to create a new Calendar event), use pushCard() to show the new page while allowing the user to exit the new page using the back-button.
If an interaction or event updates state in a previous card (for example, updating an item's title from within the detail view), use something like popCard(), popCard(), pushCard(previous), and pushCard(current) to update the previous card and the current card.
You can create multiple cards which have different content - for example, one which contains response_1 and one which contains response_1 + "\n" + response_2 - but some kind of interaction from the user is still needed to switch between the two views, and it won't get around the wait time you need to get a response from API 2.
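A rough sketch of that pattern (the card contents and button text are placeholders; the point is that the slow call only runs when the user presses the button, and the response to that action is allowed to update the card in place):
```
function onGmailMessage(e) {
  GmailApp.setCurrentMessageAccessToken(e.gmail.accessToken);
  var text = GmailApp.getMessageById(e.gmail.messageId).getPlainBody();
  var options = {
    'method': 'post',
    'contentType': 'application/json',
    'payload': JSON.stringify({ 'text': text })
  };
  var response_1 = UrlFetchApp.fetch('http://API_1/', options); // fast call, shown immediately

  return CardService.newCardBuilder()
      .addSection(CardService.newCardSection()
          .addWidget(CardService.newTextParagraph().setText(response_1.getContentText()))
          .addWidget(CardService.newTextButton()
              .setText('Load details')
              .setOnClickAction(CardService.newAction()
                  .setFunctionName('loadSecondResponse')
                  .setParameters({ 'text': text })))) // pass along what the slow call needs
      .build();
}

// Runs only when the user presses the button, so the ~5s wait happens on demand.
function loadSecondResponse(e) {
  var options = {
    'method': 'post',
    'contentType': 'application/json',
    'payload': JSON.stringify({ 'text': e.parameters.text })
  };
  var response_2 = UrlFetchApp.fetch('http://API_2/', options);
  var updated = CardService.newCardBuilder()
      .addSection(CardService.newCardSection()
          .addWidget(CardService.newTextParagraph().setText(response_2.getContentText())))
      .build();
  // Returned from a user action, so Navigation.updateCard() is allowed here.
  return CardService.newActionResponseBuilder()
      .setNavigation(CardService.newNavigation().updateCard(updated))
      .build();
}
```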
Feature Request:
You can, however, let Google know that this feature is important and that you would like to request they implement it. Google's Issue Tracker is a place for developers to report issues and make feature requests for their development services. I would suggest using the feature request template for G Suite Add-ons for this, rather than Apps Script directly.
References:
Class Navigation | Apps Script | Google Developers
Card navigation | G Suite Add-ons | Google Developers

How to subscribe and listen to channels with feathers client?

Please point me in the right direction. I use Angular for the client and I get my data with:
private getOrders(query: {}) {
  return from(this._feathers.service('orders').find({ query }));
}
This works great and I get an observable in return.
But I don't know how to get the messages on the client side.
For instance, the app template in channels.ts mentions something like this:
app.service('messages').publish((data, context) => {
  return [
    app.channel(`userIds/${data.createdBy}`),
    app.channel(`emails/${data.recipientEmail}`)
  ];
});
Well, how can I get the data from the client for emails/${data.recipientEmail}?
What is the syntax?
Things are actually pretty straightforward!
My confusion was that I had experience with socket.io, where I had created a channel for each service I had back then.
In Feathers, channels are a means of sending the data. Every service can publish to any channel, and it all depends on which channels you subscribe to!
On the client side I was confused about how to access that data, thinking that I needed a 'custom' channel name and method. That actually does not matter: the client is simply subscribed to channels on the server, and in the end the data still comes from a service and a method on that service!
I hope this makes sense and clears up the confusion for people who are like me. :)
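In practice, nothing special is needed on the client: once the server-side publisher in channels.ts adds your connection to a channel, the published data simply arrives as the standard service events. A minimal sketch against the Feathers client from the question (the handler body is just an example):
```
// Listen for real-time events on the same Feathers client used for find();
// 'created', 'updated', 'patched' and 'removed' are the standard events.
this._feathers.service('messages').on('created', message => {
  // Fires only if the server published the event to a channel that this
  // client's connection has been added to (e.g. `emails/${recipientEmail}`).
  console.log('new message for me', message);
});
```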
This article clears things up: https://blog.feathersjs.com/feathersjs-channel-subscriptions-647c771ca6c8

Creating a user in Open edX via the REST API (or anything other than the traditional method)

I need to create users in Open edX and sign them in via an API call, and thus do all the API stuff. The main idea here is to create a single log-in system where my users can log into this software we have and then browse all the courseware, attend classes, and have their data tracked through the software. The interaction between the course and the software will be done via the REST API.
Would copying the user's identity into the relevant table/database of Open edX do the job? Even then it still wouldn't solve the log-in problem.
This is not possible at the moment in Open edX, as there is no API to create users. See the list of available APIs.
But it would not be too difficult to create an extra endpoint to create new users. To that end, I suggest you make use of the UserProfileFactory available in student.tests.factories: https://github.com/edx/edx-platform/blob/master/common/djangoapps/student/tests/factories.py#L39
It's a factory used for testing but that can also be used in production -- it's a dirty hack, but it works.
It is possible to create/register a user in Open edX using the REST API.
Send a POST request to this URL: youredxdomain.com/user_api/v1/account/registration/
Send the body as form-data.
A GET request to the URL above returns the form fields as JSON, for example:
{
  "restrictions": {
    "min_length": 3,
    "max_length": 254
  },
  "required": true,
  "name": "email",
  "errorMessages": {},
  "placeholder": "username#domain.com",
  "defaultValue": "",
  "instructions": "",
  "type": "email",
  "label": "Email"
}
When you send the POST request to this URL, make sure to set each key based on the value of name in the JSON. For example, the key for the JSON above is email, so set its value to the user's email. Do the same for the other fields.
Hope it helps.
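For illustration, a rough sketch of that request from Python (not an official client; the field names, CSRF handling, and which fields are required all depend on what the registration endpoint of your instance actually returns):
```
import requests

BASE = "https://youredxdomain.com"
REG_URL = BASE + "/user_api/v1/account/registration/"

session = requests.Session()
# GET first: this returns the field descriptions shown above and, on most
# setups, also sets the CSRF cookie needed for the POST.
form_description = session.get(REG_URL).json()

payload = {
    "email": "new.user@example.com",
    "name": "New User",
    "username": "newuser",
    "password": "a-strong-password",
    "honor_code": "true",        # commonly required; check your form description
    "terms_of_service": "true",  # commonly required; check your form description
}

resp = session.post(
    REG_URL,
    data=payload,  # sent as form fields, as described above
    headers={
        "X-CSRFToken": session.cookies.get("csrftoken", ""),
        "Referer": BASE,
    },
)
print(resp.status_code, resp.text)
```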

Retaining HTTP POST data when a request is interrupted by a login page

Say a user is browsing a website, and then performs some action which changes the database (let's say they add a comment). When the request to actually add the comment comes in, however, we find we need to force them to login before they can continue.
Assume the login page asks for a username and password, and redirects the user back to the URL they were going to when the login was required. That redirect works fine for a URL with only GET parameters, but if the request originally contained some HTTP POST data, that data is now lost.
Can anyone recommend a way to handle this scenario when HTTP POST data is involved?
Obviously, if necessary, the login page could dynamically generate a form with all the POST parameters to pass them along (though that seems messy), but even then, I don't know of any way for the login page to redirect the user on to their intended page while keeping the POST data in the request.
Edit: One extra constraint I should have made clear: imagine we don't know whether a login will be required until the user submits their comment. For example, their cookie might have expired between when they loaded the form and when they actually submitted the comment.
This is one place where Ajax techniques might be helpful: when the user clicks the submit button, show the login dialog on the client side and validate with the server before you actually submit the page.
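A rough sketch of that idea (the /api/session endpoint and the showLoginDialog helper are hypothetical):
```
// Intercept the comment form's submit, check the session first, and only
// send the real POST once we know the user is (still) logged in.
document.getElementById('comment-form').addEventListener('submit', function (event) {
  event.preventDefault();
  fetch('/api/session', { credentials: 'same-origin' }).then(function (res) {
    if (res.ok) {
      event.target.submit();          // still logged in: the POST goes through as normal
    } else {
      showLoginDialog(function () {   // hypothetical login dialog
        event.target.submit();        // retry the original POST after a successful login
      });
    }
  });
});
```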
Another way I can think of is showing or hiding the login controls in a DIV tag dynamically in the main page itself.
You might want to investigate why Django removed this feature before implementing it yourself. It doesn't seem like a Django-specific problem, but rather yet another cross-site request forgery attack.
2 choices:
Write out the messy form from the login page, and JavaScript form.submit() it to the page.
Have the login page itself POST to the requesting page (with the previous values), and have that page's controller perform the login verification. Roll this into whatever logic you already have for detecting a user who isn't logged in (frameworks vary on how they do this). In pseudo-MVC:
CommentController {
  void AddComment() {
    if (!Request.User.IsAuthenticated && !AuthenticateUser()) {
      return;
    }
    // add comment to database
  }

  bool AuthenticateUser() {
    if (Request.Form["username"] == "") {
      // show login page
      foreach (Key key in Request.Form) {
        // copy form values
        ViewData.Form.Add("hidden", key, Request.Form[key]);
      }
      ViewData.Form.Action = Request.Url;
      ShowLoginView();
      return false;
    } else {
      // validate login
      return TryLogin(Request.Form["username"], Request.Form["password"]);
    }
  }
}
Just store all the necessary data from the POST in the session until the login process is completed, or use some sort of temp table in the DB to store and then retrieve it. Obviously this is pseudo-code, but:
if ( !loggedIn ) {
  StorePostInSession();
  ShowLoginForm();
}

if ( postIsStored ) {
  RetrievePostFromSession();
}
Or something along those lines.
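A concrete sketch of that idea in Express (the route names and the authenticate/saveComment helpers are assumptions, not part of the original answer):
```
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

app.post('/comments', (req, res) => {
  if (!req.session.user) {
    // Not logged in: park the POST body in the session and bounce to login.
    req.session.pendingPost = { url: req.originalUrl, body: req.body };
    return res.redirect('/login');
  }
  saveComment(req.session.user, req.body); // hypothetical persistence helper
  res.redirect('/');                       // back to wherever makes sense
});

app.post('/login', (req, res) => {
  req.session.user = authenticate(req.body.username, req.body.password); // hypothetical
  const pending = req.session.pendingPost;
  delete req.session.pendingPost;
  if (pending) {
    saveComment(req.session.user, pending.body); // replay the stored POST
    return res.redirect(pending.url);
  }
  res.redirect('/');
});
```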
Collect the data on the page where they submitted it and store it in your backend (database?) while they go off through the login sequence; hide a transaction ID or similar on the page with the login form. When they're done, return them to the page they asked for by looking it up via the transaction ID on the backend, and dump all the data they posted into the form for previewing again, or just run whatever code that page would run.
Note that many systems, e.g. blogs, get around this by having login fields in the same form as the one for posting comments, if the user needs to be logged in to comment and isn't yet.
I know it says language-agnostic, but why not take advantage of the conventions provided by the server-side language you are using? If it were Java, the data could persist by setting a request attribute. You would use a controller to process the form, detect the login, and then forward through. If the attributes are set, then just prepopulate the form with that data.
Edit: You could also use a session, as pointed out, but I'm pretty sure that if you use a forward in Java back to the login page, the request attribute will persist.