How to identify the requests received in Azure API Management

We have a production issue where an order was submitted twice. We have an API for orders that we expose to clients through API Management, where we have policies mapping the customer-facing URL to the actual backend URL.
Our actual API received two requests, so we thought the customer had submitted twice, but they have confirmed they did not. So it seems there may be an issue with API Management firing the request twice.
How can I identify the requests received by API Management?
Is there any chance that API Management will fire a request twice?
Appreciate any pointers.

The only ways to fire a request twice from within APIM are the retry policy or manually using the send-request policy. Otherwise it must be the client calling your API two times. Each request in APIM gets its own unique id, accessible in policies as context.RequestId; this is the main way to track and identify them. But these ids are produced inside APIM itself, so they are useful only if you're tracking a call from APIM into the backend.
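For illustration, this is roughly what those two mechanisms look like in policy (the policy elements are real APIM policies; the status-code condition, retry count and header name are made-up examples). A retry wrapped around forward-request like this would hit the backend a second time on a 5xx response, and stamping context.RequestId onto the forwarded request gives the backend something to log for correlation:

```xml
<!-- backend section: a retry like this is one way APIM itself can call the backend twice -->
<backend>
    <retry condition="@(context.Response.StatusCode >= 500)" count="1" interval="1">
        <forward-request />
    </retry>
</backend>

<!-- inbound section: pass APIM's request id to the backend for correlation -->
<inbound>
    <set-header name="X-APIM-Request-Id" exists-action="override">
        <value>@(context.RequestId.ToString())</value>
    </set-header>
</inbound>
```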
Your best option now is to try to identify the requests by client IP, method, URI, and time frame. APIM allows you to grab logs for certain periods of time (better if kept short) in JSON or CSV with the data I mentioned above. To do that, look into the byRequest report (https://learn.microsoft.com/en-us/rest/api/apimanagement/reports#ReportByRequest), grab the JSON/CSV, and try to identify the calls of interest.
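As a rough sketch of pulling that report (subscription id, resource group, service name, time window and API version below are placeholders, and the bearer token would normally come from Azure AD, e.g. via the Azure.Identity library):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ApimByRequestReport
{
    static async Task Main()
    {
        // Keep the time window short, as suggested above; the report takes an OData filter on timestamp.
        string filter = "timestamp ge datetime'2019-01-01T00:00:00' and timestamp le datetime'2019-01-01T01:00:00'";
        string url = "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>" +
                     "/providers/Microsoft.ApiManagement/service/<service-name>/reports/byRequest" +
                     $"?$filter={Uri.EscapeDataString(filter)}&api-version=2019-01-01";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>");

        // Each record carries requestId, ipAddress, method, url and timestamp,
        // which is exactly what you need to line up the two suspect calls.
        Console.WriteLine(await client.GetStringAsync(url));
    }
}
```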
For the future, you could look into onboarding your service to Azure Monitor (https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-use-azure-monitor) or Log Analytics; those provide an easier way to traverse logs.

Related

Clarification on API RESTful request

I'm trying to create an applet in IFTTT; however, I need to obtain an auth token to allow the lights to call the service each time.
I'm trying to obtain an auth token via the below:
Account information
GET Request auth token
https://environexus-us-oem-autha1.mios.com/autha/auth/username/{{user}}?SHA1Password={{sha1-password}}&PK_Oem=6&TokenVersion=2
The Nero API is RESTful and stateless and therefore requires authentication tokens to accompany every request. Once these tokens are requested they can be stored in a database for quick reuse.
This is the initial request to the API servers that collects the tokens and various IDs required for all subsequent calls. Tokens are valid for 24 hours but should always be checked against the response in case this changes.
Request
{{user}} is the portal login
{{sha1-password}} is the hash of:
sha1(lowercase({username}).{password}.oZ7QE6LcLJp6fiWzdqZc)
(concatenated together - no additional characters should be inserted,
salt at end is static for all accounts)
PK_Oem and TokenVersion are static and provided above.
However, I'm not sure what to put in for the "sha1-password" section.
Any help would be appreciated.
You need to calculate the SHA1 hash for the information above, which is the username, password and 'static salt' concatenated together with each value separated by a period.
I don't know what language you are using, but most languages have libraries that will do this for you (e.g. the Apache Commons library for Java).
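For instance, a minimal sketch in C# (the credentials are placeholders, the salt is the static value quoted in the question, and I'm assuming the API expects the digest as lowercase hex):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class Sha1PasswordDemo
{
    static void Main()
    {
        string username = "myuser";            // placeholder portal login
        string password = "mypassword";        // placeholder password
        string salt = "oZ7QE6LcLJp6fiWzdqZc";  // static salt from the docs

        // lowercase(username).password.salt, joined with literal periods.
        string input = $"{username.ToLowerInvariant()}.{password}.{salt}";

        using var sha1 = SHA1.Create();
        byte[] digest = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));

        // Hex-encode the digest; this becomes the {{sha1-password}} query value.
        Console.WriteLine(Convert.ToHexString(digest).ToLowerInvariant());
    }
}
```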
This API is not particularly well designed in this respect: client-side hashing does not bring any benefits (when transmitting over HTTPS), and the 'static salt', as they call it, is utterly pointless, as it's public.

Using Web Services to Load Customer Orders using .Net

I would like to create web service(s) that I can publish to an external-facing network to allow our customers' teams to send us CRUD operations on customer orders.
What would be the best practice in this case, using Microsoft or open-source technologies, to serve the customer requests?
Option1:
The web service accepts data as XML/JSON
Stores the data locally in a file
A task picks up the file and attempts to load the data in the background
Send an email for records that failed
The drawback here is that the response from the web service will not be real-time and validation will be limited.
Option2:
The web service accepts data XML/JSON
Attempt data load
Respond immediately if load was success or failure
The drawback here is that if the volume of orders increases severalfold in the near future, the infrastructure may not be able to handle it.
I am open to using REST with WCF or Web API, and any other helpful technologies that can scale when demand grows.
Have you tried message queueing?
Basically, in this architecture there is a client application (called the producer) that submits a message to the message broker (message queue), and another application (called the consumer) that connects to the broker and subscribes to messages to be processed.
The message can be just simple information or a task that will be processed by another application.
The application can act both as producer and consumer.
There are many message queue products; one of them is RabbitMQ.
Here is a complete intro: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Since the communication is done through a middleman (the message queue), it will not provide an immediate response. But you don't need to send the processing result (i.e. the order processing in your case) by email, since an application can subscribe to the result messages as well.
It is a good fit for handling a huge load of processing. As always, you can start small (even free) and scale up in the future.
Take a look at the pricing details at https://www.cloudamqp.com/, which provides RabbitMQ software as a service.
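As a hedged sketch of that producer/consumer flow in C# (using the RabbitMQ.Client package; the broker address, queue name and payload are made-up assumptions):

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class OrderQueueDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumed local broker
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Durable queue so queued orders survive a broker restart.
        channel.QueueDeclare(queue: "orders", durable: true, exclusive: false, autoDelete: false);

        // Producer side: the web service publishes the incoming order.
        var body = Encoding.UTF8.GetBytes("{\"orderId\": 42, \"item\": \"widget\"}");
        var props = channel.CreateBasicProperties();
        props.Persistent = true; // persist the message to disk
        channel.BasicPublish(exchange: "", routingKey: "orders", basicProperties: props, body: body);

        // Consumer side: a background worker processes orders as they arrive.
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            var json = Encoding.UTF8.GetString(ea.Body.ToArray());
            Console.WriteLine($"Processing order: {json}");
            channel.BasicAck(ea.DeliveryTag, multiple: false); // ack only after successful processing
        };
        channel.BasicConsume(queue: "orders", autoAck: false, consumer: consumer);

        Console.ReadLine(); // keep the demo alive
    }
}
```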
I do this using ActiveMQ as a central message broker (you can also use Azure Service Bus if you have an Azure subscription) and pre-baked domain objects. For your scenario it might look like:
The web service accepts data as XML/JSON
yes, you have a REST service that accepts multipart requests; let's say JSON, as it's easier to work with on the client side. Before you can send messages, it's usually best to convert the incoming client message to a domain message, so all message consumers know the exact format to expect and can therefore validate the message. I usually create these using xsd.exe on Windows, using an XSD file that describes the format of the object; xsd.exe turns that XSD into a C# class. It's then just a process of taking the JSON fields and populating an Order instance. That Order then gets sent as a message to the broker. At that point you're in guaranteed-messaging land, as JMS will take care of that and ActiveMQ will take care of message persistence.
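As a hedged illustration of that JSON-to-domain step (the Order shape and field names below are invented; the real classes in this setup come from xsd.exe, and an older stack might use Newtonsoft.Json instead of System.Text.Json):

```csharp
using System.Text.Json;

// Invented stand-in for the xsd.exe-generated domain class.
public class Order
{
    public int OrderId { get; set; }
    public string Item { get; set; }
    public int Quantity { get; set; }
}

public static class OrderMapper
{
    // Convert the incoming client JSON into the domain message every consumer expects.
    public static Order FromJson(string json) =>
        JsonSerializer.Deserialize<Order>(json, new JsonSerializerOptions
        {
            PropertyNameCaseInsensitive = true // tolerate client-side casing differences
        });
}
```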
Stores the data locally in a file
rather than a file, you convert the incoming JSON to a domain class, e.g. an Order instance. You'll never see JSON or XML beyond this point as it's all domain classes from here.
A task picks up the file and attempts to load the data in the background
yes, the broker has routes defined in a Camel config that tells it to, for example, send messages coming in on the /client topic to the /orders topic. The task is set up as a durable topic subscriber so automatically gets that Order domain object.
Send an email for records that failed
if the Order object contains information about the client (email etc.) then the task can send the email on failure, but a better pattern is to route the failed Order to the /error topic, where a different task, which again is a durable topic subscriber, picks it up and logs/sends email/audits etc.
if the volume of orders increases severalfold in the near future
you can cluster the brokers and run multiple Order consumers. If you separate the failure handling into another route, all the order task has to do is process the order and route the message to either the /error or /success topic depending on the outcome. So each route provides a small piece of the puzzle and you can scale up the pieces if the puzzle gets too big.
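If you went the Azure Service Bus route instead of ActiveMQ, a hedged sketch of the same topic/durable-subscriber pattern might look like this (connection string, topic and subscription names are placeholders; uses the Azure.Messaging.ServiceBus package):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class OrderTopicDemo
{
    static async Task Main()
    {
        await using var client = new ServiceBusClient("<connection-string>"); // placeholder

        // Producer side: the REST service publishes the Order domain message to a topic.
        ServiceBusSender sender = client.CreateSender("orders");
        await sender.SendMessageAsync(new ServiceBusMessage("{\"orderId\": 42}"));

        // Consumer side: a subscription plays the role of the durable topic subscriber.
        ServiceBusProcessor processor = client.CreateProcessor("orders", "order-worker");
        processor.ProcessMessageAsync += async args =>
        {
            Console.WriteLine($"Processing: {args.Message.Body}");
            await args.CompleteMessageAsync(args.Message); // complete only on success
        };
        processor.ProcessErrorAsync += args =>
        {
            // A real route would divert failures to an /error topic or dead-letter queue.
            Console.WriteLine(args.Exception.Message);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        Console.ReadLine(); // keep the demo alive
        await processor.StopProcessingAsync();
    }
}
```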

How to secure my api key?

I want to use the Autocomplete API from my Android/iOS app. For this I need to call a URL like:
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=paris&key=<myapikey>
The problem is: what prevents someone else from extracting my API key from my app and using it for their own purposes? It's important because, in the end, it's me who will be billed by Google for the usage.
Your intention is to call a Places API web service. Google Maps web services support only IP address restrictions.
You can check what type of restriction is supported by each API on the following page:
https://developers.google.com/maps/faq#keysystem
In order to protect an API key that is used with your sample request, you should create an intermediate server and send your requests from that server. So your application sends a request to the intermediate server, and the intermediate server sends the Places Autocomplete request with the protected API key to Google and passes the response back to your app. In this case you can use the IP address of your intermediate server to protect against unauthorized use of your API key.
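A minimal sketch of such an intermediate server, assuming an ASP.NET Core minimal API with the key held in an environment variable (the route, variable name and hosting details are all placeholders):

```csharp
// Program.cs for a minimal ASP.NET Core proxy.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

// Clients call /autocomplete?input=paris; the Google key never leaves the server.
app.MapGet("/autocomplete", async (string input, IHttpClientFactory factory) =>
{
    var apiKey = Environment.GetEnvironmentVariable("PLACES_API_KEY"); // stored server-side
    var client = factory.CreateClient();
    var url = "https://maps.googleapis.com/maps/api/place/autocomplete/json" +
              $"?input={Uri.EscapeDataString(input)}&key={apiKey}";
    var json = await client.GetStringAsync(url);
    return Results.Content(json, "application/json");
});

app.Run();
```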
I hope this helps!
What if you create an intermediate server, create a token for each single user, and also create a monitoring service that blocks suspicious behavior?
For example, a normal user would request x times per day/hour/...
Or:
when a user runs the application for the first time, the application receives the [encrypted API key + decryption key] and stores them in a safe place like the keychain (for iOS).
As far as I know, if you make requests directly to the Google Maps API, there is always a way to sniff the packets.

Implementing IoT PowerBI table schema

I'm currently implementing an IoT solution that has a bunch of sensors sending information in JSON format through a gateway.
I was reading about doing this on Azure but couldn't quite figure out how the JSON schema and Event Hubs work to display the info on PowerBI.
Can I create a schema and upload it to PowerBI, then connect it to my device?
There are multiple sides to this. To start with, IoT ingestion in Azure is done through Event Hubs, as you've mentioned. If your gateway is able to make a RESTful call to the Event Hubs entry point, Event Hubs will get this data and store it temporarily for the specified retention period. Then Stream Analytics will consume the data from Event Hubs and enable you to do further processing and divert the data to different outputs. In your case, you can set one of the outputs to be a PowerBI dashboard, which you can authorize with an organizational account (more on that later), and the output will automatically be tied to PowerBI. The data schema part is interesting: the JSON itself defines the data table schema to be used on the PowerBI side, and it propagates from Event Hubs to Stream Analytics to PowerBI with the first JSON package sent. Once the schema is there it is fixed, and the rest of the data being streamed in should be in the same format.
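To make the schema point concrete, here is a hedged sketch of a gateway sending its first event (the hub name, connection string and JSON fields are invented; uses the Azure.Messaging.EventHubs package):

```csharp
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class SensorSender
{
    static async Task Main()
    {
        await using var producer =
            new EventHubProducerClient("<connection-string>", "sensors"); // placeholders

        // The fields of this first JSON event define the table schema that flows
        // through Stream Analytics into the PowerBI dataset; later events must match it.
        var payload = "{\"deviceId\":\"sensor-01\",\"temperature\":21.5,\"timestamp\":\"2017-01-01T00:00:00Z\"}";

        using EventDataBatch batch = await producer.CreateBatchAsync();
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes(payload)));
        await producer.SendAsync(batch);
    }
}
```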
If you don't have an organizational account at hand to use with PowerBI, you can register your domain under Azure Active Directory and use that account since it is considered within your org.
There may be a way of altering the schema afterwards using the PowerBI REST API. Kindly find the links below; haven't tried it myself, though.
https://msdn.microsoft.com/en-us/library/mt203557.aspx
Stream analytics with powerbi
Hope this helps, let me know if you need further info.
One way to achieve this is to send your data to Azure Events Hub, read it and send it to PowerBI with Stream Analytics. Listing all the steps here would be too long. I suggest that you take a look at a series of blog posts I wrote describing how I built a demo similar to what you try to achieve. That should give you enough info to get you started.
http://guyb.ca/IoTAzureDemo

Block unwanted use of json API

I have a website where you can request data from our servers as JSON using Ajax (only to be used on our site). Now I've found that people have started using these requests to get data from our system. Is there a way to block users from using our public JSON API? Ideas that I have been thinking about are:
Some kind of checksum.
A session unique javascript value on the page that have to match server-side
Some kind of rolling password with 1000 different valid values.
None of these are 100% safe, but they make it harder to use our data. Any other ideas or solutions would be great.
(The requests you can make are lookups and translations of zip codes, phone numbers, SSNs and so on.)
You could use the same API-key authentication method Google uses to limit access to its APIs.
Make it compulsory for every user to have a valid API key, to request data.
Generate an API key and store it in your database when a user requests one.
Link: Relevant Question
This way, you can monitor usage of your API, and impose usage limits on it.
As @c69 pointed out, you could also bind the API keys you generate to the API user's domain. You can then check the Referer URL ($_SERVER['HTTP_REFERER'] in PHP) and reject the request if it is not being made from the API user's domain.
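A hedged sketch of that key-plus-Referer check as ASP.NET Core middleware (the key table, domain binding and demo endpoint are all invented; a real version would load the keys from the database mentioned above, and note that the Referer header can be spoofed by non-browser clients):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Invented key -> allowed-domain table; in practice this comes from your database.
var apiKeys = new Dictionary<string, string>
{
    ["abc123"] = "https://customer-site.example"
};

app.Use(async (context, next) =>
{
    var key = context.Request.Query["key"].ToString();
    var referer = context.Request.Headers.Referer.ToString();

    // Reject unknown keys, and keys used from a domain they are not bound to.
    if (!apiKeys.TryGetValue(key, out var domain) ||
        !referer.StartsWith(domain, StringComparison.OrdinalIgnoreCase))
    {
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        await context.Response.WriteAsync("Invalid API key or origin.");
        return;
    }
    await next();
});

// Invented demo endpoint standing in for the zip-code/phone-number lookups.
app.MapGet("/lookup", (string zip) => Results.Json(new { zip, city = "Springfield" }));

app.Run();
```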