I would like to create web service(s) that I can publish to an external-facing network to allow our customers' team to send us CRUD operations on customer orders.
What would be the best practice in this case, using Microsoft or open-source technologies, to serve the customer requests?
Option 1:
The web service accepts data as XML/JSON
Stores the data locally in a file
A task picks up the file and attempts to load the data in the background
Sends an email for records that failed
The drawback here is that the response from the web service will not be real-time and validation will be limited.
Option 2:
The web service accepts data as XML/JSON
Attempts the data load
Responds immediately with success or failure
The drawback here is whether the infrastructure can handle it if the volume of orders increases several-fold in the near future.
I am open to using REST with WCF or Web API and any other helpful technologies that can be scaled when demand grows.
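For reference, here is a rough sketch of what the Option 2 endpoint might look like with ASP.NET Web API; the Order class, the repository interface and the route are just placeholders, not an agreed contract.

```csharp
// A minimal sketch of the Option 2 endpoint (ASP.NET Web API 2).
// "Order", "IOrderRepository" and the route are placeholders, not an agreed contract.
using System;
using System.Web.Http;

public class Order
{
    public string OrderId { get; set; }
    public string CustomerId { get; set; }
    public decimal Amount { get; set; }
}

public interface IOrderRepository
{
    void Save(Order order); // hypothetical data-access abstraction
}

public class OrdersController : ApiController
{
    private readonly IOrderRepository _repository;

    public OrdersController(IOrderRepository repository)
    {
        _repository = repository;
    }

    // POST api/orders - accepts JSON (or XML via content negotiation) and loads it immediately
    public IHttpActionResult Post(Order order)
    {
        if (order == null || !ModelState.IsValid)
            return BadRequest("Invalid order payload"); // synchronous validation feedback

        try
        {
            _repository.Save(order);                     // attempt the data load right away
            return Ok(new { status = "accepted", order.OrderId });
        }
        catch (Exception ex)
        {
            return InternalServerError(ex);              // caller learns about the failure immediately
        }
    }
}
```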
Have you tried message queueing?
Basically, in this architecture, there is a client application (called a producer) that submits a message to the message broker (message queue), and there is another application (called a consumer) that connects to the broker and subscribes to the messages to be processed.
The message can be just simple information or a task that will be processed by another application.
The application can act both as producer and consumer.
There are many message queue products; one of them is RabbitMQ.
Here is a thorough intro to it: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Since the communication is done through a middleman (the message queue), it will not provide an immediate response. But you don't need to send the processing result (i.e. the order processing in your case) by email, since the application can subscribe to a message containing the result.
It is well suited to handling a huge load of processing. As always, you can start small (even free) and scale up in the future.
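To make the producer/consumer idea concrete, here is a minimal sketch using the RabbitMQ .NET client (version 6.x assumed); the queue name, host and sample payload are my own placeholders, not part of your system.

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class OrderQueueDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumed local broker
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue so queued orders survive a broker restart.
            channel.QueueDeclare(queue: "orders", durable: true, exclusive: false, autoDelete: false);

            // Producer side: the web service publishes the incoming order.
            var body = Encoding.UTF8.GetBytes("{\"orderId\":\"123\",\"amount\":10}");
            channel.BasicPublish(exchange: "", routingKey: "orders", basicProperties: null, body: body);

            // Consumer side: a background worker subscribes and processes each order.
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var json = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine("Processing order: " + json);
                channel.BasicAck(ea.DeliveryTag, multiple: false); // acknowledge after a successful load
            };
            channel.BasicConsume(queue: "orders", autoAck: false, consumer: consumer);

            Console.ReadLine(); // keep the consumer alive
        }
    }
}
```

In practice the producer and consumer would be separate processes, which is exactly what lets you scale the order-processing side independently of the web service.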
Take a look at the pricing details at https://www.cloudamqp.com/, which provides RabbitMQ as a service.
I do this using ActiveMQ as a central message broker (you can also use Azure Service Bus if you have an Azure subscription) and pre-baked domain objects. For your scenario it might look like:
The web service accepts data as XML/JSON
yes, you have a REST service that accepts multipart requests, let's say JSON as it's easier to work with on the client side. Before you can send messages it's usually best to convert the incoming client message to a domain message, so all message consumers know the exact format to expect and can therefore validate the message. I usually create these using xsd.exe on Windows, with an XSD file that describes the format of the object; xsd.exe turns that XSD into a C# class. It's then just a matter of taking the JSON fields and populating an Order class. That Order then gets sent as a message to the broker. At that point you're in guaranteed-messaging land, as JMS will take care of that and ActiveMQ will take care of message persistence.
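As an illustration (not the exact code), that conversion-and-publish step might look roughly like this, assuming the Apache.NMS.ActiveMQ client, Newtonsoft.Json for the incoming JSON, and a hypothetical xsd.exe-generated Order class:

```csharp
using System.IO;
using System.Xml.Serialization;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using Newtonsoft.Json;

// Hypothetical domain class; in practice generated from the XSD by xsd.exe.
public class Order
{
    public string OrderId { get; set; }
    public string CustomerEmail { get; set; }
    public decimal Amount { get; set; }
}

public class OrderPublisher
{
    public void Publish(string incomingJson)
    {
        // 1. Translate the client JSON into the domain message.
        Order order = JsonConvert.DeserializeObject<Order>(incomingJson);

        // 2. Serialize the domain object (XML here, matching the XSD contract).
        var serializer = new XmlSerializer(typeof(Order));
        string xml;
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, order);
            xml = writer.ToString();
        }

        // 3. Hand it to the broker; persistence/guaranteed delivery is the broker's job.
        var factory = new ConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        using (IConnection connection = factory.CreateConnection())
        {
            connection.Start();
            using (ISession session = connection.CreateSession())
            {
                IDestination destination = session.GetTopic("client"); // incoming topic from the route below
                using (IMessageProducer producer = session.CreateProducer(destination))
                {
                    producer.DeliveryMode = MsgDeliveryMode.Persistent;
                    ITextMessage message = session.CreateTextMessage(xml);
                    producer.Send(message);
                }
            }
        }
    }
}
```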
Stores the data locally in a file
rather than a file, you convert the incoming JSON to a domain class, e.g. an Order instance. You'll never see JSON or XML beyond this point as it's all domain classes from here.
A task picks up the file and attempts to load the data in the background
yes, the broker has routes defined in a Camel config that tells it to, for example, send messages coming in on the /client topic to the /orders topic. The task is set up as a durable topic subscriber so automatically gets that Order domain object.
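A durable topic subscriber for the /orders topic might be set up roughly like this (again assuming Apache.NMS; the broker URL, client id and subscription name are placeholders):

```csharp
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

public class OrderSubscriber
{
    public void Run()
    {
        var factory = new ConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        using (IConnection connection = factory.CreateConnection())
        {
            // A durable subscription needs a stable client id so the broker
            // can keep messages for this task while it is offline.
            connection.ClientId = "order-loader";
            connection.Start();

            using (ISession session = connection.CreateSession())
            {
                ITopic topic = session.GetTopic("orders");
                using (IMessageConsumer consumer =
                    session.CreateDurableConsumer(topic, "order-loader-sub", null, false))
                {
                    consumer.Listener += message =>
                    {
                        var text = message as ITextMessage;
                        if (text != null)
                            Console.WriteLine("Received order message: " + text.Text);
                        // deserialize back into the Order domain class and load it here
                    };
                    Console.ReadLine(); // keep listening
                }
            }
        }
    }
}
```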
Sends an email for records that failed
if the Order object contains information about the client (email etc.) then the task can send the email on failure, but a better pattern is to route the failed Order to the /error topic, where a different task, which again is a durable topic subscriber, picks it up and logs/sends email/audits etc.
if the volume of orders increases several-fold in the near future
you can cluster the brokers and run multiple Order consumers. If you separate the failure handling into another route, all the order task has to do is process the order and route the message to either the /error or /success topic depending on the outcome. So each route provides a small piece of the puzzle and you can scale up the pieces if the puzzle gets too big.
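Putting those pieces together, the order-processing task could publish the outcome back to a /success or /error topic along these lines (a sketch under the same Apache.NMS assumptions; LoadOrder is a stand-in for your actual data load):

```csharp
using System;
using Apache.NMS;

public class OrderProcessor
{
    private readonly ISession _session;
    private readonly IMessageProducer _successProducer;
    private readonly IMessageProducer _errorProducer;

    public OrderProcessor(ISession session)
    {
        _session = session;
        _successProducer = session.CreateProducer(session.GetTopic("success"));
        _errorProducer = session.CreateProducer(session.GetTopic("error"));
    }

    // Called for each Order message received from the /orders topic.
    public void Handle(ITextMessage orderMessage)
    {
        try
        {
            LoadOrder(orderMessage.Text); // attempt the data load
            _successProducer.Send(_session.CreateTextMessage(orderMessage.Text));
        }
        catch (Exception ex)
        {
            // The error-handling task (another durable subscriber) does the
            // emailing/auditing, so this task only has to route the failure.
            ITextMessage failed = _session.CreateTextMessage(orderMessage.Text);
            failed.Properties.SetString("failureReason", ex.Message);
            _errorProducer.Send(failed);
        }
    }

    private void LoadOrder(string orderXml)
    {
        // placeholder for the actual database load
    }
}
```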
A service provider is supposed to forward messages on to an endpoint (specified by me), but all I get to give them is a URL. How can I make this work?
I have signed up to a satellite service and I am trying to make the first steps with their cloud API. I have hardware which sends simple messages over their satellite infrastructure to their cloud services. They provide the user (me) with a dashboard-type interface to register the hardware as well as a destination (or multiple destinations); each destination is a single URL. I don't get to specify usernames, passwords, code or anything else, just a single URL. The service says
"the data will be forwarded to the pre-registered http(s) endpoint (the URL I have given them). Data is sent as a http POST request with Content-Type: application/json. All data is accompanied by an endpoint reference, timestamp, a unique identifier (UUID), and a digital signature that may be used to verify that the data originated from Myriota. Multiple packets may be batched into a single request."
I have a website so to start with I just want to get a single message to display on my page. I have completed and tested the code to display posts by GETing from https://www.mywebsite.com/wp-json/wp/v2/posts. This works.
The URL that I have given the service provider is the same as above, but none of the data reaches my site.
I don't really know how the data exchange or handshaking works here, but I assume that for a third party to post to my site, they would need to include some sort of authentication. Can this authentication data be included in the URL? What is the authentication data? Is it my WordPress username and password? Is it safe to send this data in a URL? Can I turn off authentication so that anyone can post to my site? Surely that isn't safe?
I have minimal experience with web development but plenty with embedded systems. I am working with a young software engineer and he is stumped too. Together we have burned nearly a whole week on this, so I have bitten the bullet and turned to Stack Overflow to see if anyone can help.
We have a production issue where an order is submitted twice. Currently we have an API for orders and we are exposing it to the client using API Management, in which we have policies for URL mapping from the customer-facing URL to the actual one.
Now, our actual API got 2 requests, so we thought the customer submitted twice, but they have confirmed that they did not, so there may be an issue with API Management firing the request twice.
How can I identify the requests received by API Management?
Is there any chance that API management will fire the request twice ?
Appreciate any pointers
The only way to fire a request twice in APIM would be by means of the Retry policy or manually using SendRequest. Otherwise it should be the client calling your API two times. Each request in APIM gets its own unique id, accessible in policies as context.RequestId; this is the main way to track and identify them. But these ids are produced inside APIM itself and are thus useful only if you're tracking a call from APIM into the backend.
Your best option now is to try to identify requests by client IP, method, URI, and time frame. APIM allows you to grab logs for certain periods of time (better if kept short) in JSON or CSV with the data I mentioned above. To do that, look into the byRequest report (https://learn.microsoft.com/en-us/rest/api/apimanagement/reports#ReportByRequest), grab the JSON/CSV, and try to identify the calls of interest.
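If it helps, a rough sketch of pulling that report with plain HttpClient could look like the following; the exact filter syntax and api-version are in the linked docs, and the ids/token below are placeholders you would substitute with your own:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ApimReportDemo
{
    static async Task Main()
    {
        // Placeholders - use your own subscription, resource group, service name and an
        // Azure AD token with access to the APIM management plane.
        string subscriptionId = "<subscription-id>";
        string resourceGroup = "<resource-group>";
        string serviceName = "<apim-service-name>";
        string accessToken = "<management-api-token>";

        // Keep the time window short, as suggested above.
        string filter = "timestamp ge datetime'2021-01-01T00:00:00' and timestamp le datetime'2021-01-01T01:00:00'";
        string url =
            $"https://management.azure.com/subscriptions/{subscriptionId}" +
            $"/resourceGroups/{resourceGroup}/providers/Microsoft.ApiManagement" +
            $"/service/{serviceName}/reports/byRequest" +
            $"?api-version=2021-08-01&$filter={Uri.EscapeDataString(filter)}";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            string json = await client.GetStringAsync(url);
            // Each record includes timestamp, client IP, method, URL and the APIM request id,
            // which is what you would scan to find the two suspicious calls.
            Console.WriteLine(json);
        }
    }
}
```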
For the future, you could look into onboarding your service to Azure Monitor (https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-use-azure-monitor) or Log Analytics; those provide an easier way to traverse logs.
I'm reading on JWT, there are so many tutorials and so many approaches, it's confusing.
I have a couple of questions regarding proper usage of JWTs:
1) I keep seeing inconsistent means of transporting JWTs to and from the server. For example, here: one transport method for retrieving the token (via JSON-encoded object in POST body), another method for submitting it (via HTTP header). Why such inconsistency? Of course, it's up to the implementer to choose the methods, but wouldn't it be good practice at least to be consistent and use either only header or only body?
2) The JWT payload contains the information of state because the server is not maintaining it. It is obvious one should keep the size of the payload as small as possible, because the size of JWT is added to every request and response. Perhaps just a user id and cached permissions. When the client needs any information, it can receive it via (typically JSON-encoded) HTTP body and store it in the local storage, there seems to be no need to access the read-only JWT payload for the same purpose. So, why should one keep the JWT payload nonencrypted? Why mix the two ways of getting application data to the client and use both JWT payload and normal data-in-response-body? Shouldn't the best practice be to keep JWT always encrypted? It can be updated only on the server side anyway.
1) I keep seeing inconsistent means of transporting JWTs to and from the server. [...] wouldn't it be good practice at least to be consistent and use either only header or only body?
This may depend on the client. While a web app can get a higher degree of security by storing the JWT in cookie storage, native apps may prefer local storage in order to access the JWT information. [1]
2) The JWT payload contains the information of state because the server is not maintaining it. It is obvious one should keep the size of the payload as small as possible, because the size of JWT is added to every request and response. Perhaps just a user id and cached permissions. When the client needs any information, it can receive it via (typically JSON-encoded) HTTP body and store it in the local storage, there seems to be no need to access the read-only JWT payload for the same purpose.
The JWT keeps the backend state, not the client state. The backend state may be that user 128 is logged in as administrator. This is (in my example) stored in the JWT in the fields Subject and Scopes. Instead of the client sending an ID of a backend session that contains this information, the info is in the JWT directly. The backend thus does not have to keep a session that stores the logged-in state of user 128. If the client requests information about another user, the backend may decide that this info is forbidden based on the ID of the logged-in user that the JWT reports.
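As an illustration of that, here is a minimal sketch with the System.IdentityModel.Tokens.Jwt package; the secret key, claim names and values are made up for the example:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

class JwtDemo
{
    static void Main()
    {
        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("a-32-byte-or-longer-demo-secret!")); // demo secret
        var handler = new JwtSecurityTokenHandler();
        handler.InboundClaimTypeMap.Clear(); // keep raw claim names like "sub"

        // Backend issues the token at login: the "state" (who is logged in, with what rights)
        // travels with the client instead of living in a server-side session.
        var descriptor = new SecurityTokenDescriptor
        {
            Subject = new ClaimsIdentity(new[]
            {
                new Claim("sub", "128"),           // user 128 from the example above
                new Claim("scope", "administrator")
            }),
            Expires = DateTime.UtcNow.AddHours(1),
            SigningCredentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256)
        };
        string jwt = handler.WriteToken(handler.CreateToken(descriptor));

        // On every request the backend only verifies the signature and reads the claims back.
        var parameters = new TokenValidationParameters
        {
            ValidateIssuer = false,
            ValidateAudience = false,
            IssuerSigningKey = key
        };
        ClaimsPrincipal principal = handler.ValidateToken(jwt, parameters, out _);
        Console.WriteLine(principal.FindFirst("sub").Value);   // "128"
        Console.WriteLine(principal.FindFirst("scope").Value); // "administrator"
    }
}
```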
So, why should one keep the JWT payload nonencrypted?
The state is normally not secret to the client. The client cannot trust the information in the JWT, since it does not have access to the secret key used to validate the JWT, but it can still adjust the GUI etc. from the information in the JWT (like showing a button for the admin GUI or not).
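For example, any code without the signing key can still read the payload, it just cannot verify it; a quick sketch using the same JWT package:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;

class ReadJwtPayloadDemo
{
    static void Main(string[] args)
    {
        string jwt = args[0]; // token received from the backend

        // Reading does not verify the signature - fine for adjusting the GUI,
        // never for authorization decisions.
        JwtSecurityToken token = new JwtSecurityTokenHandler().ReadJwtToken(jwt);
        foreach (var claim in token.Claims)
            Console.WriteLine(claim.Type + " = " + claim.Value);

        // e.g. show the admin button only if the scope claim says "administrator"
    }
}
```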
Why mix the two ways of getting application data to the client and use both JWT payload and normal data-in-response-body?
See above; the main purpose of the JWT is to keep information for the backend, not the client. Once the user logs in, the backend asks: "Hey, can you hold this info for me and attach it to every request, so that I can forget about you in the meantime?" Like if your manager asks you to wear a name sticker on your shirt so that he/she doesn't have to remember your name. :-) (And he/she signs it so that you cannot alter it without him/her noticing.)
Shouldn't the best practice be to keep JWT always encrypted? It can be updated only on the server side anyway.
It doesn't really bring any security unless you store secret information in the JWT, and that may be better kept server side. Decryption is also a bit more cumbersome than just verifying a signature.
[1] Local Storage vs Cookies
I have been working with MVC for a little while. In the usual case, when an action returns JSON, it is invoked by AJAX in the view and the view expects info inside the JSON.
Is there a case where the action returns JSON and it is consumed by something other than JavaScript? Thanks.
Yes, a JSON API can be consumed by a large variety of clients. It can be the browser sending an AJAX request, but it can also be a desktop application fetching data from the Internet, a server-side job scraping the data for analysis, etc.
For example, let's say you're running a stock exchange website, and you're publishing current stock values as JSON. You can use that JSON on your website to display the data, but you (or any other developer) can also write a desktop application which will get that data and process it on a local machine (to, for example, show the user which stocks they should buy). Or aggregate data from different sources.
Many websites make their APIs public, so that third party developers can write alternative clients, integrate the API's functionality in their own products, and so on. For example, GitHub's APIs are public - the GitHub website can utilize them for the AJAX requests, and GitHub for Windows can show you the list of repositories you own by making a request to that API using C#'s WebClient.
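For instance, a minimal console sketch with WebClient hitting GitHub's public API (the endpoint and user are just examples):

```csharp
using System;
using System.Net;

class GitHubApiDemo
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // GitHub's API requires a User-Agent header on every request.
            client.Headers.Add("User-Agent", "json-api-demo");
            string json = client.DownloadString("https://api.github.com/users/octocat/repos");

            // The same JSON the website consumes via AJAX is now available to a desktop
            // or server-side application for local processing.
            Console.WriteLine(json);
        }
    }
}
```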
I'm currently implementing an IoT solution that has a bunch of sensors sending information in JSON format through a gateway.
I was reading about doing this on Azure but couldn't quite figure out how the JSON schema and Event Hubs work to display the info on PowerBI.
Can I create a schema and upload it to PowerBI then connect it to my device?
There are multiple sides to this. To start with, IoT ingestion in Azure is done through Event Hubs, as you've mentioned. If your gateway is able to do a RESTful call to the Event Hubs entry point, Event Hubs will get this data and store it temporarily for the retention period specified. Then Stream Analytics will consume the data from Event Hubs and will enable you to do further processing and divert the data to different outputs. In your case, you can set one of the outputs to be a PowerBI dashboard, which you can authorize with an organizational account (more on that later), and the output will automatically be tied to PowerBI. The data schema part is interesting: the JSON itself defines the data table schema to be used on the PowerBI side and will propagate from Event Hubs to Stream Analytics to PowerBI with the first JSON package sent. Once the schema is there it is fixed, and the rest of the data being streamed in should be in the same format.
If you don't have an organizational account at hand to use with PowerBI, you can register your domain under Azure Active Directory and use that account since it is considered within your org.
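On the gateway side, pushing the sensor JSON into Event Hubs can be as simple as the following sketch (assuming the Azure.Messaging.EventHubs package; the connection string, hub name and sample payload are placeholders):

```csharp
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class SensorIngestDemo
{
    static async Task Main()
    {
        string connectionString = "<event-hubs-connection-string>"; // placeholder
        string eventHubName = "sensor-readings";                    // placeholder

        // The first JSON payload sent defines the table schema that Stream Analytics
        // passes on to PowerBI, so keep the field names/types consistent afterwards.
        string json = "{\"deviceId\":\"sensor-01\",\"temperature\":21.5,\"timestamp\":\"2021-01-01T00:00:00Z\"}";

        await using (var producer = new EventHubProducerClient(connectionString, eventHubName))
        {
            using EventDataBatch batch = await producer.CreateBatchAsync();
            batch.TryAdd(new EventData(Encoding.UTF8.GetBytes(json)));
            await producer.SendAsync(batch);
        }
    }
}
```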
There may be a way of altering the schema afterwards using the PowerBI REST API. Kindly find the links below. Haven't tried it myself, though.
https://msdn.microsoft.com/en-us/library/mt203557.aspx
Stream analytics with powerbi
Hope this helps, let me know if you need further info.
One way to achieve this is to send your data to Azure Event Hubs, read it, and send it to PowerBI with Stream Analytics. Listing all the steps here would be too long. I suggest that you take a look at a series of blog posts I wrote describing how I built a demo similar to what you're trying to achieve. That should give you enough info to get started.
http://guyb.ca/IoTAzureDemo