I want to use a WebRTC data channel to exchange JSON messages between peers.
Can I safely assume that each JSON message arrives atomically on the remote side (unlike TCP, where packets may be split or coalesced), or do I need to implement something like a length prefix to know where one message ends and another begins?
I'm using a reliable channel, and possibly a TCP TURN server, if that's relevant.
Yes: according to the WebRTC draft spec, whatever message you send() down a data channel should arrive in a single onmessage callback at the far end.
In real life, however, Chrome sometimes calls onmessage with a partial message when it runs out of buffers. If you keep your messages under 64 KB, this does not seem to happen.
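If you want to be defensive about that caveat anyway, a length-prefix framing layer is simple to add on top of the data channel. Here is a minimal sketch of the technique, written in Go for concreteness (the helper names are made up; in browser JavaScript you would do the same with a DataView over a Uint8Array): each frame is a 4-byte big-endian length followed by the JSON payload.

package framing

import (
	"encoding/binary"
	"errors"
)

// Frame prepends a 4-byte big-endian length to one encoded JSON message.
func Frame(msg []byte) []byte {
	buf := make([]byte, 4+len(msg))
	binary.BigEndian.PutUint32(buf[:4], uint32(len(msg)))
	copy(buf[4:], msg)
	return buf
}

// SplitFrames pulls complete messages out of a receive buffer and returns the
// trailing partial frame, if any, so it can be prepended to the next chunk.
func SplitFrames(buf []byte) (msgs [][]byte, rest []byte, err error) {
	for {
		if len(buf) < 4 {
			return msgs, buf, nil
		}
		n := int(binary.BigEndian.Uint32(buf[:4]))
		if n > 16<<20 { // arbitrary sanity limit for this sketch
			return nil, nil, errors.New("frame too large")
		}
		if len(buf) < 4+n {
			return msgs, buf, nil
		}
		msgs = append(msgs, buf[4:4+n])
		buf = buf[4+n:]
	}
}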
We have a requirement to build a Spring Boot command-line application that sends messages to a queue.
Only a request queue has been set up.
As there is no response queue, we get no acknowledgement from the client side indicating whether they received a message or not.
Right now I am using Spring's JMSTemplate send() method to send messages to the request queue, and a SingleConnectionFactory to create one shared connection, since this is a command-line application.
As there is no acknowledgement/response to the messages we send to the request queue, end-to-end testing is difficult.
If a connection to the destination/request queue is obtained and the message is sent without any exception, I consider it a successful test.
Is it right to implement only Spring JMSTemplate's send() method, and not follow the JMS template send/receive pattern?
Note: it is not possible to set up a response queue and get any acknowledgement from the client side.
In JMS (and in most other messaging systems) producers and consumers are logically separated (i.e. de-coupled). This is part of the fundamental design of the system to reduce complexity and increase scalability. With these constraints your producers shouldn't care whether or not the message is consumed. The producers simply send messages. Likewise, the consumers shouldn't care who sends the messages or how often, etc. Their job is simply to consume the messages.
Assuming your application is actually doing something with the message (i.e. there is some kind of functional output of message processing) then that is what your end-to-end test should measure. If you get the ultimate result you're looking for then you may deduce that the steps in between (e.g. sending a message, receiving a message, etc.) were completed successfully.
To be clear, it's perfectly fine to send a message with Spring's JMSTemplate without using a request/response pattern. Generally speaking, if you get no exceptions then that means the message was sent successfully. However, there are other caveats when using JMSTemplate. For example, Spring's JavaDoc says this:
The ConnectionFactory used with this template should return pooled Connections (or a single shared Connection) as well as pooled Sessions and MessageProducers. Otherwise, performance of ad-hoc JMS operations is going to suffer.
That said, it's important to understand the behavior of your specific JMS client implementation. Many implementations will send non-persistent JMS messages asynchronously (i.e. fire and forget) which means they may not make it to the broker and no exception will be thrown on the client. Sending persistent messages is generally sufficient to guarantee that the client will throw an exception in the event of any problem, but consult your client implementation documentation to confirm.
I am currently working on a web service in Go that essentially takes a request and sends back JSON, rather typical. However, this particular JSON takes 10+ seconds to actually complete and return. Because I am also making a website that depends on the JSON, and the JSON contents are subject to change, I implemented a route that quickly generates and returns (potentially updated or new) names as placeholders that would get replaced later by real values that correspond to the names. The whole idea behind that is the website would connect to the service, get back JSON almost immediately to populate a table, then wait until the actual data to fill in came back from the service.
This is where I encounter an issue, potentially because I am new-ish to Go and don't completely understand its vast libraries. The previous method I used to send JSON back through HTTP requests was ResponseWriter.Write(theJSON). However, Write() terminates the response, so the website would have to continually ping the service, which could be a problem now and will be disastrous in the future.
So, I am seeking some industry knowledge about my issue. Can HTTP connections be continuous like that, where data is sent piecewise through the same HTTP request? Is that even a computationally or security-wise smart approach, or are there better ways to do what I am proposing? Finally, does Go even support a feature like that, and how would I handle it asynchronously for performance optimization?
For the record, my website is using React.js.
I would use WebSockets over HTTPS (wss) to achieve this effect, rather than a long-lived persistent TCP connection, or even in addition to it. See the golang.org/x/net/websocket package from the Go developers, or the excellent http://www.gorillatoolkit.org/pkg/websocket from the Gorilla web toolkit, for usage details; a sketch using the latter follows below. You might use padding and smaller subunits to allow interruption and restart of a submission, or a kind of diff protocol to rewrite previously submitted JSON. I have found WebSockets pretty stable even with small connection breakdowns.
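Here is a minimal sketch of that idea with gorilla/websocket, under the assumption that the handler path, the placeholder payload, and the simulated 10-second delay are all illustrative rather than taken from the question: upgrade the request, push the quick placeholder JSON immediately, then push the real data over the same connection once it is ready.

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// Sketch-only policy: accept any origin. Tighten this in real code.
	CheckOrigin: func(r *http.Request) bool { return true },
}

func tableHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	// Send the quick placeholder JSON right away so the React table can render.
	if err := conn.WriteJSON(map[string]interface{}{"status": "pending", "names": []string{"a", "b"}}); err != nil {
		return
	}

	// Simulate the slow (10+ second) computation, then push the real data
	// over the same connection.
	time.Sleep(10 * time.Second)
	conn.WriteJSON(map[string]interface{}{"status": "done", "values": map[string]int{"a": 1, "b": 2}})
}

func main() {
	http.HandleFunc("/table", tableHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}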
Go does have keep-alive support via net.TCPConn's SetKeepAlive, and the third-party tcpkeepalive package gives finer-grained control over the probe timing:
kaConn, err := tcpkeepalive.EnableKeepAlive(conn) // github.com/felixge/tcpkeepalive
if err != nil {
	return err // kaConn is not usable if enabling keep-alive failed
}
kaConn.SetKeepAliveIdle(30 * time.Second)    // idle time before probing starts
kaConn.SetKeepAliveCount(4)                  // unanswered probes before giving up
kaConn.SetKeepAliveInterval(5 * time.Second) // interval between probes
Code from felixge's tcpkeepalive package.
You can use a REST API as the web service and send the data as JSON, so you can continuously send data over a communication channel.
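Building on that, a single plain-HTTP response can also deliver data piecewise if the handler flushes after each write. Here is a minimal sketch using the standard library's http.Flusher with Server-Sent Events framing (the endpoint path and payloads are made up for illustration), which a React client could consume via EventSource:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func streamHandler(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")

	// Placeholder payload first, flushed immediately...
	fmt.Fprintf(w, "data: %s\n\n", `{"status":"pending"}`)
	flusher.Flush()

	// ...then the real payload once the slow work (simulated here) finishes.
	time.Sleep(10 * time.Second)
	fmt.Fprintf(w, "data: %s\n\n", `{"status":"done","value":42}`)
	flusher.Flush()
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}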
I've been working with AS3 sockets, and I noticed that small packets are 'Nagled' when sent. I tried to find a way to set NoDelay for the socket, but I didn't find a clue even in the documentation. Is there another way to turn Nagle's algorithm off in AS3 TCP sockets?
You can tell Flash to send the data out through the socket using the flush() method on the Socket object.
Flushes any accumulated data in the socket's output buffer.
That said, Flash does what it thinks is best and may not want to send your data too often. Still, the delay shouldn't be more than a few milliseconds.
I've managed to get the socket opened and the handshake completed, and while that's all fun, I would now like to handle the data itself. The small catch is that, unlike the HTTP headers, which are pure ASCII, the content seems to be encoded:
ÅÅúÅ à›ÅÅ»öë∑âÅÅ«∆{UÅÅeæƒ$ÅÅvü
‡7ÅÅŸJêÏòÅÅ~}Z¥?ÅÅ9TÉHxÅÅ[ 1†ÅÅs óE2ÅÅ9\ÅyxÅÅ#´°ºbÅÅïôx ‘ÅÅ)Ÿ1–hÅÅ⁄}
That's what the server received from the Google Chrome client's
socket.send("A");
socket.send("A");
Just skimming the protocol definition, I didn't find anything about encoding besides base64, which this clearly isn't.
How should I handle the content serverside?
Edit: I've already looked at quite a few articles, but nearly all of them are about the client side.
Data that is sent from the client to the server is masked (to protect misbehaving intermediaries from getting confused). It's a 4-byte running XOR, with the masking key sent as the first 4 bytes of the payload. It is described in the spec in section 5.3.
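As an illustration (in Go, assuming you have already parsed the frame header and extracted the 4-byte masking key and the masked payload bytes), the unmasking step is just this:

// Unmask reverses the client-to-server masking from RFC 6455, section 5.3:
// octet i of the data is XORed with octet (i mod 4) of the masking key.
func Unmask(maskKey [4]byte, payload []byte) []byte {
	out := make([]byte, len(payload))
	for i, b := range payload {
		out[i] = b ^ maskKey[i%4]
	}
	return out
}

For socket.send("A"), the unmasked payload of each text frame should come out as the single character "A"; the surrounding bytes you are seeing belong to the frame header and the masking key.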
Using Flex Builder 4.5, with Rails running on localhost on a brand-new MacBook Air. Using curl, the server's response time for a read is around 200-300 ms. When I use HTTPService, the time from send() to the result being received is between 1 and 5 seconds for fewer than ten lines of XML returned by the server. If I use the browser to render the URL, it matches curl, i.e. it is effectively instantaneous; I'm not waiting for the XML to render.
The number is the same in debug and non-debug mode. The HTTPService is called after creation is complete, so the GUI is done. After the data is received, the rest of my algorithms in the application complete in under 20 ms.
Is this time expected, or am I doing something wrong, or do I have something configured incorrectly?
What you've described sounds like HTTPService isn't setting the TCP_NODELAY socket option (setsockopt(3)) on its sockets before sending a request. From my Linux tcp(7):
TCP_NODELAY
If set, disable the Nagle algorithm. This means that segments are always sent as soon as possible, even if there is only a small amount of data. When not set, data is buffered until there is a sufficient amount to send out, thereby avoiding the frequent sending of small packets, which results in poor utilization of the network. This option is overridden by TCP_CORK; however, setting this option forces an explicit flush of pending output, even if TCP_CORK is currently set.
Perhaps your platform has another way you can ask to disable Nagle's algorithm for a specific connection.
To expand on sarnold's answer, what you need to do is add the following line:
<socket-tcp-no-delay-enabled>true</socket-tcp-no-delay-enabled>
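For comparison, here is what the same knob looks like at the raw socket level in other environments; a minimal Go sketch (a plain TCP client, nothing Flex-specific, with a made-up address) that disables Nagle's algorithm on one connection:

package main

import (
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "localhost:3000") // address is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// SetNoDelay(true) sets TCP_NODELAY, disabling Nagle's algorithm for this
	// connection. (Go already defaults to no-delay; it is shown here only to
	// make the option explicit.)
	if tcpConn, ok := conn.(*net.TCPConn); ok {
		if err := tcpConn.SetNoDelay(true); err != nil {
			log.Fatal(err)
		}
	}
}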