AS3 HttpService is slow, seeing 1-5 seconds response time - actionscript-3

Using Flex Builder 4.5 with Rails running on localhost on a brand-new MacBook Air. Using curl, the server's response time for a read is around 200-300 ms. When I use HTTPService, the time from send() to result received is between 1 and 5 seconds for fewer than ten lines of XML received from the server. If I use the browser to render the URL, it matches curl, i.e., it is instantaneous; I'm not waiting for the XML to render.
The number is the same in debug and non-debug mode. The HTTPService is called after creationComplete, so the GUI is done. After data is received, the rest of my algorithms complete in under 20 ms.
Is this time expected, or am I doing something wrong, or have something configured incorrectly?

What you've described sounds like HTTPService isn't setting the TCP_NODELAY socket option (setsockopt(3)) on its sockets before sending a request. From my Linux tcp(7):
TCP_NODELAY
    If set, disable the Nagle algorithm. This means that segments are always sent as soon as possible, even if there is only a small amount of data. When not set, data is buffered until there is a sufficient amount to send out, thereby avoiding the frequent sending of small packets, which results in poor utilization of the network. This option is overridden by TCP_CORK; however, setting this option forces an explicit flush of pending output, even if TCP_CORK is currently set.
Perhaps your platform has another way you can ask to disable Nagle's algorithm for a specific connection.

To expand on sarnold's answer, what you need to do is add the following line:
<socket-tcp-no-delay-enabled>true</socket-tcp-no-delay-enabled>

Related

Proper way to setup request specific read timeout on Spring 5 WebClient

Context
I'm trying to find the best way to combine the Spring 5 WebClient and Hystrix. Using Hystrix, I set different timeouts for different types of requests made by the WebClient.
When Hystrix reaches its timeout, I also want to make sure that the WebClient closes its connection. Previously, when using AsyncHttpClient, this was done by setting a requestTimeout before performing the specific call. However, setting the request timeout on the WebClient is much more complicated and needs to be done on the ClientHttpConnector, according to this answer.
Brian Clozel mentions that it is optimal to share the same ClientHttpConnector throughout the application. However, because the request-specific timeout needs to be set on the ClientHttpConnector, this does not seem possible.
Question
In Spring's Reactive WebClient, is there a proper way to set request specific timeouts, but still use a single ClientHttpConnector?
The timeout operations that you can configure on the client connector are quite low-level: they're about socket/connection timeouts. This configuration cannot be done at the request level, since connections might be shared and reused in a connection pool.
Your question is really about response timeouts, since you care about the amount of time it takes to get the response, on a per-request basis.
In this case, you can use the timeout operator on a per request basis:
Mono<UserData> result = this.webClient.get()
        .uri("/user")
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToMono(UserData.class)
        .timeout(Duration.ofSeconds(10));
The timeout operator will throw a TimeoutException into the pipeline; you can use one of the onError* operators to define what should be done in such cases. Alternatively, you can use the timeout(Duration, Mono) variant directly, which provides a fallback.

Are WebRTC data channel packets atomic?

I want to use a WebRTC data channel to exchange json messages between peers.
Can I safely assume that each JSON message arrives atomically at the remote end (unlike TCP, where packets may be split or coalesced), or do I need to implement something like a length prefix to know where one message ends and another begins?
I'm using a reliable channel, and possibly a TCP TURN server, if that's relevant.
Yes, according to the WebRTC draft spec, whatever message you send() down a data channel should arrive in a single onmessage callback at the far end.
In real life, however, Chrome sometimes calls onmessage with a partial message when it runs out of buffers. If you keep your messages under 64 KB, this seems not to happen.

Sending continuous data over HTTP with Go

I am currently working on a web service in Go that essentially takes a request and sends back JSON, which is rather typical. However, this particular JSON takes 10+ seconds to actually complete and return. Because I am also making a website that depends on the JSON, and the JSON contents are subject to change, I implemented a route that quickly generates and returns (potentially updated or new) names as placeholders, which would later get replaced by the real values corresponding to those names. The whole idea is that the website connects to the service, gets back JSON almost immediately to populate a table, then waits until the actual data to fill in comes back from the service.
This is where I encounter an issue, potentially because I am newish to Go and don't completely understand its vast libraries. The method I previously used to send JSON back through HTTP requests was ResponseWriter.Write(theJSON). However, Write() terminates the response, so the website would have to continually poll the service, which is wasteful now and would be disastrous in the future.
So, I am seeking some industry knowledge on my issue. Can HTTP connections be continuous like that, where data is sent piecewise through the same HTTP request? Is that even a computationally or security-smart approach, or are there better ways to do what I am proposing? Finally, does Go even support such a feature, and how would I handle it asynchronously for performance?
For the record, my website is using React.js.
I would use secure WebSockets to achieve this effect, rather than a long-persisting TCP connection, or even in addition to it. See the golang.org/x/net/websocket package from the Go developers, or the excellent http://www.gorillatoolkit.org/pkg/websocket from the Gorilla web toolkit, for usage details. You might use padding and smaller subunits to allow interruption and restart of submission, or a kind of diff protocol to rewrite previously submitted JSON. I found WebSockets pretty stable even with small connection breakdowns.
Go does have keep-alive ability via net.TCPConn's SetKeepAlive. For finer control, the github.com/felixge/tcpkeepalive package exposes the individual knobs:
kaConn, _ := tcpkeepalive.EnableKeepAlive(conn)
kaConn.SetKeepAliveIdle(30 * time.Second)
kaConn.SetKeepAliveCount(4)
kaConn.SetKeepAliveInterval(5 * time.Second)
Code from felixge's tcpkeepalive package.
You can also use a REST API as the web service and send the data as JSON, so you can continuously send data over a communication channel.

Disabling Nagle's Algorithm under Action Script 3

I've been working with AS3 sockets, and I noticed that small packets are 'Nagled' when sent. I tried to find a way to set NoDelay for the socket, but I didn't find a clue, even in the documentation. Is there another way to turn Nagle's algorithm off for AS3 TCP sockets?
You can tell Flash to send out the data through the socket using the flush() method on the Socket object:
Flushes any accumulated data in the socket's output buffer.
That said, Flash does what it thinks is best and may not want to send your data too often. Still, the delay shouldn't be more than a few milliseconds.

Memcache (northscale) socket pool question for Enyim

I'm using NorthScale 1.0.0 and need a little help getting it to limp along long enough to upgrade to the new version. I'm using C# and ASP.NET, working with it through the Enyim libraries. I currently suspect that the application does not have enough connections in the socketPool setting in my app.config. I also noted that the previous developer's code simply treats ANY exception from an attempted Get call to memcache as if the item isn't in the cache, which (I believe) may be resulting in periodic spikes in calls to the database when the pool gets starved. We've been having oddball load spikes that don't seem to have any relation to server load. I suspect that he is not correctly managing the lifecycle of the connections to NorthScale and that we are periodically experiencing starvation in the socket pool as a result, but I'm unable to prove it.
Is there a specific exception I should be looking for when I call the Get method to retrieve items from cache? I'm not really seeing much in the docs that gives me sufficient information on this. Anybody have any sample code on this? I'd even accept java or php code, as I think the .NET libraries were probably based on one of those anyway.
Any ideas?
Thanks,
Will
If you have made the connection correctly to the Membase server (formerly NorthScale), typically you only get an exception on a 'get' when it's not a hit.