NServiceBus messages not arriving in sequence (i.e. as they are sent)

We are using NServiceBus as our messaging framework. Sometimes messages do not arrive in the order they were sent: the last message sometimes arrives first and the first message arrives later.
Please help me out. Thanks.

The nature of NServiceBus does not guarantee that messages will be received in the order they were sent. Each message is meant to be processed independently.
If an action can only be undertaken after two related messages arrive, then you need to utilize a Saga.
Edit in response to first comment:
You mention you're sending the same message in chunks. Does this mean that you have a large payload that you have to split up into multiple parts to transmit via MSMQ?
If so, you have a few options:
Store the payload out of band, in a database or file system, and only put enough data in one message (an ID or file system path) to load the data from the message handler.
Make the message a MessagePart that contains a BundleID, PartNumber, TotalParts, and PayloadChunk. Then, create a saga for MessagePart that stores each part and, when all parts have been received, reconstitutes the chunks and does what you need (see the sketch after this list). Of course, if you need to then send the resulting large object back onto the Bus, this would get annoying really quickly, so the out-of-band option would look much more attractive.
In any case, there are a ton of reasons why any MSMQ message, not just NServiceBus messages, could arrive out of order, so you have to be able to deal with it.
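To make the second option concrete, here is a rough, saga-like sketch of reassembling the chunks. It is shown in plain Java rather than actual NServiceBus C# code, and the MessagePart/MessagePartAssembler names are illustrative; a real saga would also persist its state and handle timeouts for bundles that never complete.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical chunk message: BundleID, PartNumber (1-based), TotalParts, PayloadChunk.
record MessagePart(String bundleId, int partNumber, int totalParts, byte[] payloadChunk) {}

// Saga-like state: collect parts per bundle, regardless of arrival order,
// and only act once every part has been seen.
class MessagePartAssembler {
    private final Map<String, Map<Integer, byte[]>> pending = new ConcurrentHashMap<>();

    /** Returns the reassembled payload when the last part arrives, otherwise null. */
    byte[] handle(MessagePart part) {
        Map<Integer, byte[]> parts =
            pending.computeIfAbsent(part.bundleId(), id -> new ConcurrentHashMap<>());
        parts.put(part.partNumber(), part.payloadChunk());

        if (parts.size() < part.totalParts()) {
            return null; // still waiting for more parts; arrival order does not matter
        }

        // All parts received: stitch them back together in part-number order.
        List<byte[]> ordered = new ArrayList<>();
        int total = 0;
        for (int i = 1; i <= part.totalParts(); i++) {
            byte[] chunk = parts.get(i);
            ordered.add(chunk);
            total += chunk.length;
        }
        byte[] payload = new byte[total];
        int offset = 0;
        for (byte[] chunk : ordered) {
            System.arraycopy(chunk, 0, payload, offset, chunk.length);
            offset += chunk.length;
        }
        pending.remove(part.bundleId()); // the "saga" completes once the bundle is whole
        return payload;
    }
}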

Would Bus.Send with a collection of IMessages work? NServiceBus allows batching of messages.

Related

Writing multiple types of data into a list to be accessed by multiple threads

I send JSONs to my app via Postman in a list, each with a mapping type (CRUD), to send to my database.
I want my controller to put all this data, from multiple senders, into a list that will send the information to my DB. The problem is that I don't know how to store both the JSON and the mapping in the same list, so that when my threads do their work they know whether that JSON must be inserted, updated, deleted, and so on.
Do you guys have any ideas?
PS: It is a Spring Boot app that needs to be able to send 12,000 objects (made from those JSONs) to the DB.
I don't see a reason for putting all the data in one list and sharing it later; each HTTP request receives its own thread.
On a decent server you can handle a couple of thousand requests per second performing simple CRUD operations.
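If you do want to keep each JSON payload together with its CRUD mapping, a small wrapper type per request is usually enough. Below is a minimal sketch assuming Spring Web; the /ingest path, MappedPayload, and the commented repository calls are illustrative names, not an existing API.

// Per-request handling: every HTTP request runs on its own thread and is
// written straight to the DB, so no shared mutable list is needed.
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
import java.util.Map;

enum CrudMapping { CREATE, READ, UPDATE, DELETE }

// Hypothetical request item: the raw JSON fields plus the operation to perform.
record MappedPayload(CrudMapping mapping, Map<String, Object> payload) {}

@RestController
class IngestController {

    @PostMapping("/ingest")
    public void ingest(@RequestBody List<MappedPayload> items) {
        for (MappedPayload item : items) {
            switch (item.mapping()) {
                case CREATE -> { /* e.g. repository.insert(item.payload()) */ }
                case UPDATE -> { /* e.g. repository.update(item.payload()) */ }
                case DELETE -> { /* e.g. repository.delete(item.payload()) */ }
                case READ   -> { /* e.g. repository.find(item.payload())   */ }
            }
        }
    }
}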

How does google-cloud-function generate function-execution-id?

A Cloud Function triggered by an HTTP request has a corresponding function-execution-id for each calling request (in the request and response headers). It is used for tracing and viewing the log of a specific request in Stackdriver Logging. In my case, it is a string of 12 characters. When I make repeated HTTP requests to a cloud function and look at the function-execution-id, I get the results below:
j8dorcyxyrwb
j8do4wolg4i3
j8do8bxu260m
j8do2xhqmr3s
j8dozkdlrjzp
j8doitxtpt29
j8dow25ri4on
On each line, the first 4 characters are the same ("j8do") but the rest are different, so I wonder what the structure of function-execution-id is.
How was it generated?
The execution ID is opaque, meaning that it doesn't contain any useful data; it is just a unique ID. How it was generated should not be of any concern to you, the consumer. From examination, it looks like it might be some time-based value similar to UUIDv1, but any code you write that consumes these IDs should make no assumptions about how they were generated.

API POST Endpoints and Image Processing

So, for my Android app, not only do I have certain data that I would like to POST to an API endpoint in JSON format, but one of the data pieces is also an image. Everything besides the image goes into a PostgreSQL database. I want to put the images somewhere (not important where) and then put the link to each image in the database.
Here's the thing: while the image is connected to the other pieces of data I send to the API endpoint that get put into the database, I would be sending the image somewhere else, and the link would be put in at a different time. So here's the mental gymnastic I am trying to get over:
How would I send these two separate data pieces (an image, and all other data in a single JSON object) and have the image associated with the JSON object that gets put into the database, without the image and data getting mixed up due to multiple users doing the same thing?
To simplify, say I have the following information as a single JSON object going to an endpoint called api.example.com/frontdoor. The object looks something like this:
{
"visitor_id": "5d548e53-c351-4016-9078-b0a572df0bca",
"name": "John Doe",
"appointment": false,
"purpose": "blahblahblah..."
}
That JSON object is consumed by the server and its fields are then put into their respective tables in the database.
At the same time, an image is taken, given a UUID as a file name, and sent to api.example.com/face; the server then processes it and somehow adds a link to the image in the proper database row.
The question is, how do I accomplish that? How would I go about relating these two pieces of data that get sent to two different places?
In the end, I plan on having a separate endpoint such as api.example.com/visitors provide a JSON object with a list of all visits that looks something like:
{
"visits": [
{
"visitor_id": "5d548e53-c351-4016-9078-b0a572df0bca",
"name": "John Doe",
"appointment": false,
"purpose": "blahblahblah..."
"image": "imgbin.example.com/faces/c3118272-9e9d-4c54-8824-8cf4cfaa679f.png"
},
...
]
}
Mainly, I am trying to get my head around the design of all of this so I can start writing code. Any help would be appreciated.
As I understand it, your question is about executing an action on the server side where two different sub-services are involved: one service to update the text data in a SQL DB, and another to store an image and then put the image's reference back into the main data. Two approaches come to my mind.
1) Generate a unique ID on the client side and associate it with both the JSON object upload and the image upload. Then, when your image is uploaded, the image upload service can take this ID, find the corresponding record in SQL, and update the image path. However, generating unique IDs client-side is not a recommended approach, because there is a chance of collision: more than one client could generate the same ID, which would break the logic. To work around this, the client can call an ID generation service before uploading; it generates the ID on the server side and sends it back to the client, which then performs the uploads as above.

The downside to this approach is that the client needs to make an extra call to the server to get the unique ID. The advantage is that the UI can get separate updates for the data and the image: when the data upload service succeeds it can report that the data was successfully updated, and when the image is uploaded some time later it can report that the image upload is complete. Thus, the responses of each upload can be managed differently. However, if the data and image uploads have to happen together and be atomic (the whole upload fails if either the data or the image upload fails), then this approach can't be used, because the server must group both actions in a transaction.
2) Another approach is to have a common endpoint for both the image and the data upload. Both are uploaded together in a single call to the server; the server first generates a unique ID, then makes two parallel calls to the data upload service and the image upload service, and both sub-service calls receive this unique ID as a parameter. If both uploads have to be atomic, the server must group these sub-service calls in a transaction. Regarding the response, it can be synchronous or asynchronous. If the UI needs to wait for the uploads to succeed, the response will be synchronous and the server will have to wait for both sub-services to complete before responding. If the UI doesn't need to wait, the server can respond immediately after invoking the sub-services with a message that the upload request has been accepted; in that case the sub-service calls are processed asynchronously.
In my opinion, approach 2 is better because the server has more control over grouping the related actions together. Regarding the response, it depends on the use case: if the user cares about whether their post was properly recorded on the server (like making a payment), a synchronous implementation is better; if the user initiates the action and leaves (as in generating a report or sending an email), it can be asynchronous. An asynchronous implementation is better in terms of server utilization, because the server is free to accept other requests rather than waiting for the sub-services' actions to complete.
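As a rough sketch of approach 2, assuming Spring Web and a multipart upload (the endpoint, DTO, and service names are illustrative, not an existing API), the server generates the correlation ID itself and hands it to both sub-services:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestPart;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import java.util.UUID;

@RestController
class FrontDoorController {

    private final VisitService visitService;   // writes the JSON fields to PostgreSQL
    private final ImageService imageService;   // stores the image and returns its URL

    FrontDoorController(VisitService visitService, ImageService imageService) {
        this.visitService = visitService;
        this.imageService = imageService;
    }

    @PostMapping("/frontdoor")
    public String createVisit(@RequestPart("visit") VisitDto visit,
                              @RequestPart("image") MultipartFile image) throws Exception {
        // Server-side unique ID, so no client-side coordination is needed.
        UUID visitorId = UUID.randomUUID();

        // Ideally wrapped in a transaction or compensating action so the two
        // writes succeed or fail together.
        String imageUrl = imageService.store(visitorId, image.getBytes());
        visitService.save(visitorId, visit, imageUrl);

        return visitorId.toString();
    }
}

// Hypothetical collaborators, shown only to make the sketch self-contained.
record VisitDto(String name, boolean appointment, String purpose) {}

interface VisitService { void save(UUID visitorId, VisitDto visit, String imageUrl); }

interface ImageService { String store(UUID visitorId, byte[] imageBytes); }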
These are 2 general approaches. I am sure there will be several variations or may be entirely different approaches for this problem.
Ah, too long of an answer; hope it helps. Let me know if you have further questions.

What are the common ways to delete local/client objects using a REST API?

Is there a common design pattern for dispatching deleted objects to the requestor (the client of the API)?
Challenges we are having:
1. When an object is deleted on the API completely, the client will not know that the object is gone and will keep it locally (as the API only shows objects changed after a certain date).
2. If we add a property to the object to mark that it is deleted (e.g. "deleted = TRUE"), then eventually the number of objects returned by the API grows and slows down the transfer rate.
Another option we are looking into is to have a separate endpoint on the API that shows a list of deleted objects only (is this a pattern anyone uses?).
I'm looking for the most "RESTful" way to delete local objects.
The way I handle it is a variation on your #1: each item has a last-updated field in the database, and if something is deleted, I make an entry in another table of deleted items, whose updated value is the time it was deleted.
The client makes a request asking for "changes since X", where X is its own locally stored last-updated value. The response returns the new data plus an array of deleted items, and the client then purges those values locally.
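A minimal sketch of that "changes since X" exchange, assuming Spring Web; the /sync path, DTOs, and commented repository calls are illustrative names only.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import java.time.Instant;
import java.util.List;

// One payload carrying both new/updated items and the IDs of deleted ones,
// so the client can upsert the former and purge the latter locally.
record SyncResponse(List<Item> changed, List<String> deletedIds, Instant serverTime) {}

record Item(String id, String name, Instant updatedAt) {}

@RestController
class SyncController {

    @GetMapping("/sync")
    public SyncResponse changesSince(@RequestParam("since") String since) {
        Instant cutoff = Instant.parse(since);  // the client's last stored "updated" value
        List<Item> changed = List.of();         // e.g. itemRepository.findUpdatedAfter(cutoff)
        List<String> deleted = List.of();       // e.g. deletedItemRepository.findIdsDeletedAfter(cutoff)
        return new SyncResponse(changed, deleted, Instant.now());
    }
}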
Stale data is always a problem with client/server applications. If a client loads some data, then some object is deleted on the server, and then the client sends a DELETE request, the RESTful thing to do would be to return a 404, which indicates "not found". That way the client knows that if it sends a DELETE and gets a 404, the resource was already deleted from underneath it.
What if you think of your resource not as a list, but rather as a changeset?
E.g. the changesets you have in Git or SVN.
This way, there's always a "head" version, and the client always has some version, and the resource is the change between client's last and head.
That way you can apply whatever you've learned by examining/using version control systems.
If you need anything more complex, the science behind this is called Operational Transformation (OT): http://en.wikipedia.org/wiki/Operational_transformation

BitTorrent Peer wire protocol (TCP)

How are the messages encoded or sent/received by peers?
If there is a message
have: <len=0005><id=4><piece index>
How is this sent (in binary; how is it translated to binary?) and received?
Is there a specific order in which the messages are sent to peers?
I have read the specification but it leaves me with questions.
Thanks
I'll answer the ordering question.
In general, you can send any message at any time, but there are some messages which have special rules. The BITFIELD message has to be sent out early, for instance. Most clients send PIECEs back in the order they were REQUESTed, but I don't think that is a requirement, if memory serves.
In general the messages are of two types. One kind is control-oriented messages telling peers about general status (HAVE messages fall into this group). The other kind is data-oriented messages that actually transfer the file and request new data from the peer. These message types are "interleaved", and one of the reasons you send PIECE messages no larger than 16 kilobytes is to make sure control messages can be interleaved in between. A trick is that once a PIECE message has been sent, you send all pending control-oriented messages, by priority, before the next PIECE message. That way, you quickly tell the other party of your intent.
There is also a "bug" in the original protocol which is solved by the FAST extension. It effectively make each REQUEST result in either a PIECE message or a REJECT-REQUEST message. This is another example of an ordering. If you get a REJECT-REQUEST message for something you never REQUESTED you disconnect the peer.
Just before it defines the have message, the specification says:
All of the remaining messages in the protocol take the form of <length prefix><message ID><payload>. The length prefix is a four byte big-endian value. The message ID is a single decimal byte. The payload is message dependent.
You've got the binary format for the length and id right there. The 'piece index' part is this message's specific payload. It should be four bytes long, since the length prefix says five bytes follow and one byte goes to the message ID (viewing other messages with the same format should give you a clue).
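As a concrete illustration (a minimal sketch, not taken from any real client), the have message for piece 42 is the nine bytes 00 00 00 05 04 00 00 00 2a on the wire:

// Encoding/decoding the "have" message (<len=0005><id=4><piece index>) as raw bytes:
// 4-byte big-endian length prefix, 1-byte message ID, 4-byte big-endian piece index.
import java.nio.ByteBuffer;

public class HaveMessage {

    public static byte[] encode(int pieceIndex) {
        ByteBuffer buf = ByteBuffer.allocate(9);   // ByteBuffer is big-endian by default
        buf.putInt(5);                             // length prefix: 5 bytes follow
        buf.put((byte) 4);                         // message ID 4 = "have"
        buf.putInt(pieceIndex);                    // payload: 4-byte piece index
        return buf.array();
    }

    public static int decodePieceIndex(byte[] wire) {
        ByteBuffer buf = ByteBuffer.wrap(wire);
        int length = buf.getInt();                 // should be 5
        byte id = buf.get();                       // should be 4
        return buf.getInt();                       // the piece index
    }

    public static void main(String[] args) {
        byte[] wire = encode(42);
        // Prints: 00 00 00 05 04 00 00 00 2a
        for (byte b : wire) System.out.printf("%02x ", b);
        System.out.println("\npiece index = " + decodePieceIndex(wire));
    }
}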