Context
I'm trying to find the best way to combine Spring 5 WebClient and Hystrix. Using Hystrix, I set different timeouts for different types of requests made by the WebClient.
When Hystrix reaches its timeout, I also want to make sure that WebClient closes its connection. Previously, when using AsyncHttpClient, this was done by setting a requestTimeout before performing the specific call. However, setting the request timeout on WebClient is much more complicated and needs to be done on the ClientHttpConnector, according to this answer.
Brian Clozel mentions that it is optimal to share the same ClientHttpConnector throughout the application. However, because the request-specific timeout needs to be set on the ClientHttpConnector, this does not seem possible.
Question
In Spring's Reactive WebClient, is there a proper way to set request-specific timeouts, but still use a single ClientHttpConnector?
The timeout operations that you can configure on the client connector are quite low level: they're about socket/connection timeouts. This configuration cannot be done at the request level, since connections might be shared and reused in a connection pool.
This question is about response timeouts, since you seem to care about the time it takes to get the response, on a per-request basis.
In this case, you can use the timeout operator on a per-request basis:
Mono<UserData> result = this.webClient.get()
.uri("/user")
.accept(MediaType.APPLICATION_JSON)
.retrieve()
.bodyToMono(UserData.class)
.timeout(Duration.ofSeconds(10));
The timeout operator will propagate a TimeoutException through the pipeline; you can use one of the onError* operators to define what should be done in that case. Alternatively, you can use the timeout(Duration, Mono) variant directly, which takes a fallback.
Related
I am implementing a gRPC server (in Go) where I need to respond with some sort of server busy/unavailable message in case my server is already servicing a set maximum number of RPCs.
I implemented a gRPC server with grpc-python earlier, where I achieved this with a combination of maximum_concurrent_rpcs and the max number of threads in the thread pool. I am looking for something similar in grpc-go. The closest I could find is the server setting applied via the ServerOption returned by calling MaxConcurrentStreams. My application only supports unary RPCs, and I am not sure if this setting applies to them.
I am just looking to enforce a maximum number of active concurrent requests the server can handle. Would setting MaxConcurrentStreams work, or should I do it in my own code? (I have a rudimentary implementation for it, but I would rather use something provided by grpc-go.)
I've never used MaxConcurrentStreams before, because for high-load services you usually want to get the most out of your hardware, and this limitation doesn't seem to make sense there. Perhaps it's possible to achieve your goal with this setting, but you need to investigate which kind of error is returned when the MaxConcurrentStreams limit is reached. I think it would be a gRPC transport error rather than your own, so you won't be able to control the error message and code.
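If you want an application-level limit with a status code you control, one common approach is a unary server interceptor that counts in-flight requests with a semaphore and rejects the overflow with codes.ResourceExhausted. A rough sketch, assuming a hypothetical limit of 10 and leaving service registration out:

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// limitUnary returns an interceptor that allows at most max concurrent
// unary RPCs; extra requests are rejected immediately instead of queuing.
func limitUnary(max int) grpc.UnaryServerInterceptor {
	sem := make(chan struct{}, max) // counting semaphore
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		select {
		case sem <- struct{}{}:
			defer func() { <-sem }()
			return handler(ctx, req)
		default:
			return nil, status.Error(codes.ResourceExhausted, "server busy, try again later")
		}
	}
}

func main() {
	srv := grpc.NewServer(
		grpc.UnaryInterceptor(limitUnary(10)), // application-level cap across the whole server
		grpc.MaxConcurrentStreams(10),         // HTTP/2 streams, enforced per connection
	)
	_ = srv // register services and call srv.Serve(lis) as usual
}

Note that MaxConcurrentStreams is enforced per connection, so a client opening several connections can exceed it; the interceptor limit above is global to the process.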
I am currently working on a web service in Go that essentially takes a request and sends back JSON, rather typical. However, this particular JSON takes 10+ seconds to actually complete and return. Because I am also making a website that depends on the JSON, and the JSON contents are subject to change, I implemented a route that quickly generates and returns (potentially updated or new) names as placeholders that would get replaced later by real values that correspond to the names. The whole idea behind that is the website would connect to the service, get back JSON almost immediately to populate a table, then wait until the actual data to fill in came back from the service.
This is where I encounter an issue, potentially because I am newish to Go and don't completely understand its vast libraries. The previous method I used to send JSON back through the HTTP request was ResponseWriter.Write(theJSON). However, Write() terminates the response, so the website would have to continually ping the service, which could be disastrous now and will certainly be in the future.
So, I am seeking some industry knowledge on my issue. Can HTTP connections be continuous like that, where data is sent piecewise through the same HTTP request? Is that even a computationally or security-wise smart approach, or are there better ways to do what I am proposing? Finally, does Go even support a feature like that, and how would I handle it asynchronously for performance?
For the record, my website is using React.js.
I would use WebSockets (over HTTPS) to achieve this effect rather than a long-lived persistent TCP connection, or even in addition to one. See the golang.org/x/net/websocket package from the Go developers, or the excellent http://www.gorillatoolkit.org/pkg/websocket from the Gorilla web toolkit, for usage details. You might use padding and smaller subunits to allow interruption and restart of a submission, or a kind of diff protocol to rewrite previously submitted JSON. I found WebSockets pretty stable even with small connection breakdowns.
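A minimal sketch of the Gorilla approach, assuming the client opens a WebSocket at /ws; the handler name and the placeholder/real payloads here are made up:

package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // default options; configure CheckOrigin for cross-origin clients

func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil) // switch the HTTP request to a WebSocket
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	// Push the quick placeholder JSON first so the table can render.
	if err := conn.WriteJSON(map[string]string{"status": "pending"}); err != nil {
		return
	}

	// ... do the slow (10+ second) work here ...

	// Then push the real data over the same connection.
	_ = conn.WriteJSON(map[string]string{"status": "done", "data": "real values"})
}

func main() {
	http.HandleFunc("/ws", wsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

On the React side you would open a WebSocket to the same path and update the table each time a message arrives.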
Go does have a keep-alive ability: net.TCPConn's SetKeepAlive. For finer-grained control there is the tcpkeepalive package:
kaConn, _ := tcpkeepalive.EnableKeepAlive(conn) // conn is an existing net.Conn; error ignored for brevity
kaConn.SetKeepAliveIdle(30 * time.Second)
kaConn.SetKeepAliveCount(4)
kaConn.SetKeepAliveInterval(5 * time.Second)
Code from felixge.
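If you would rather stay in the standard library, net.TCPConn can at least toggle keep-alive and its period (a sketch, assuming conn is a net.Conn you already hold):

if tcp, ok := conn.(*net.TCPConn); ok {
	tcp.SetKeepAlive(true)                   // enable TCP keep-alive probes
	tcp.SetKeepAlivePeriod(30 * time.Second) // period between keep-alive probes
}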
You can also use a REST API as the web service and send the data as JSON, so you can continuously send data over the same communication channel.
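As a sketch of that idea (the route and payloads are invented for illustration), Go's http.Flusher lets one response carry several JSON chunks, so the client can read the placeholder immediately and the real data later:

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

func streamHandler(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")

	// First chunk: the quick placeholder names.
	json.NewEncoder(w).Encode(map[string]string{"status": "placeholder"})
	flusher.Flush() // send what we have without ending the response

	time.Sleep(10 * time.Second) // stand-in for the slow computation

	// Second chunk: the real data, on the same response.
	json.NewEncoder(w).Encode(map[string]string{"status": "complete"})
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

The client then has to parse the stream incrementally (one JSON object per line), which is a trade-off compared with the WebSocket approach above.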
Using Flex Builder 4.5 with Rails running on localhost on a brand new MacBook Air. Using curl, the response time of the server for a read is around 200-300 ms. When I use HTTPService, the time from send() to result received is between 1 and 5 seconds for less than ten lines of XML received from the server. If I use the browser to render the URL, it matches curl, i.e. it is essentially instantaneous; I'm not waiting for the XML to render.
The number is the same in debug and non-debug mode. The HTTPService is called after creationComplete, so the GUI is done. After data is received, the rest of my algorithms complete in under 20 ms in the application.
Is this time expected, or am I doing something wrong, or have something configured incorrectly?
What you've described sounds like HTTPService isn't setting the TCP_NODELAY socket option (setsockopt(3)) on its sockets before sending a request. From my Linux tcp(7):
TCP_NODELAY
If set, disable the Nagle algorithm. This means that segments are always sent as soon as possible, even if there is only a small amount of data. When not set, data is buffered until there is a sufficient amount to send out, thereby avoiding the frequent sending of small packets, which results in poor utilization of the network. This option is overridden by TCP_CORK; however, setting this option forces an explicit flush of pending output, even if TCP_CORK is currently set.
Perhaps your platform has another way you can ask to disable Nagle's algorithm for a specific connection.
To expand on sarnold's answer, what you need to do is add the following line:
<socket-tcp-no-delay-enabled>true</socket-tcp-no-delay-enabled>
I'm using Northscale 1.0.0 and need a little help getting it to limp along for long enough to upgrade to the new version. I'm using C# and ASP.NET to work with it using the Enyim libraries. I currently suspect that the application does not have enough connections per the socketPool setting in my app.config. I also noted that the previous developer's code simply treats ANY exception from an attempted Get call to MemCache as if the item isn't in the cache, which (I believe) may be resulting in periodic spikes in calls to the database when the pool gets starved. We've been having oddball load spikes that don't seem to have any relation to server load. I suspect that he is not correctly managing the lifecycle of the connections to Northscale, and that we are periodically experiencing starvation in the socket pool as a result, but I'm unable to prove it.
Is there a specific exception I should be looking for when I call the Get method to retrieve items from cache? I'm not really seeing much in the docs that gives me sufficient information on this. Anybody have any sample code on this? I'd even accept java or php code, as I think the .NET libraries were probably based on one of those anyway.
Any ideas?
Thanks,
Will
If you have made the connection to the membase server (formerly Northscale) correctly, you typically only get an exception on a 'get' when it's not a hit.
I want to do this (no particular language):
print(foo.objects.bookdb.books[12].title);
or this:
book = foo.objects.bookdb.book.new();
book.title = 'RPC for Dummies';
book.save();
Where foo is actually a service connected to my program via some IPC, and to access its methods and objects, some layer sends and receives messages over the network.
Now, I'm not really looking for an IPC mechanism, as there are plenty to choose from. It's likely not to be XML-based, but rather something like Google's protocol buffers, D-Bus, or CORBA. What I'm unsure about is how to structure the application so I can access the IPC just like I would any object.
In other words, how can I have OOP that maps transparently over process boundaries?
Note that this is a design question and I'm still working at a pretty high level of the overall architecture, so I'm still fairly agnostic about which language this is going to be in. C#, Java and Python are all likely to get used, though.
I think the way to do what you are requesting is to have all object communication regarded as message passing. This is how object methods are handled in Ruby and Smalltalk, among others.
With message passing (rather than method calling) as your object communication mechanism, operations such as calling a method that didn't exist when you wrote the code become sensible, since the object can do something reasonable with the message anyway: check for a remote procedure, return a value for a field with the same name from a database, throw a 'method not found' exception, or anything else you can think of.
It's important to note that for languages that don't use this as a default mechanism, you can do message passing anyway (every object has a 'handleMessage' method) but you won't get the syntax niceties, and you won't be able to get IDE help without some extra effort on your part to get the IDE to parse your handleMessage method to check for valid inputs.
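A rough Go-flavored illustration of the "every object has a handleMessage method" idea; the Message type and the transport stub are invented for the sketch:

package main

import "fmt"

// Message is a generic request: an operation name plus arguments.
type Message struct {
	Name string
	Args []interface{}
}

// Handler is anything that accepts messages; a remote proxy would
// serialize them onto the IPC channel instead of dispatching locally.
type Handler interface {
	HandleMessage(m Message) (interface{}, error)
}

// remoteProxy forwards every message over some transport function.
type remoteProxy struct {
	send func(Message) (interface{}, error) // stand-in for the real IPC layer
}

func (p remoteProxy) HandleMessage(m Message) (interface{}, error) {
	return p.send(m) // the remote side decides what the message means
}

func main() {
	// Fake transport that just echoes, to keep the sketch self-contained.
	book := remoteProxy{send: func(m Message) (interface{}, error) {
		return fmt.Sprintf("remote handled %s(%v)", m.Name, m.Args), nil
	}}
	out, _ := book.HandleMessage(Message{Name: "title"})
	fmt.Println(out)
}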
Read up on Java's RMI -- the introductory material shows how you can have a local definition of a remote object.
The trick is to have two classes with identical method signatures. The local version of the class is a facade over some network protocol. The remote version receives requests over the network and does the actual work of the object.
You can define a pair of classes so a client can have
foo = NonLocalFoo("http://host:port")
foo.this = "that"
foo.save()
And the server receives set_this() and save() method requests from a client connection. The server side is (generally) non-trivial because you have a bunch of discovery and instance management issues.
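In Go-ish terms the same pattern might look roughly like this; the Foo interface, the URL, and the /set_this and /save endpoints are placeholders for whatever protocol you pick, not a real library:

package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

// Foo is the shared contract: the local and remote versions have identical method sets.
type Foo interface {
	SetThis(value string) error
	Save() error
}

// localFoo does the real work in-process.
type localFoo struct{ this string }

func (f *localFoo) SetThis(v string) error { f.this = v; return nil }
func (f *localFoo) Save() error            { return nil } // persist locally

// nonLocalFoo is the facade: each call becomes a request to the remote server.
type nonLocalFoo struct{ base string }

func (f *nonLocalFoo) SetThis(v string) error {
	body, _ := json.Marshal(map[string]string{"this": v})
	_, err := http.Post(f.base+"/set_this", "application/json", bytes.NewReader(body))
	return err
}

func (f *nonLocalFoo) Save() error {
	_, err := http.Post(f.base+"/save", "application/json", nil)
	return err
}

func main() {
	var foo Foo = &nonLocalFoo{base: "http://host:port"}
	_ = foo.SetThis("that")
	_ = foo.Save()
}

The caller only sees the Foo interface; whether the work happens locally or over the network is invisible at the call site, which is exactly what the next answer warns about.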
You shouldn't do it! It is very important for programmers to see and feel the difference between an IPC/RPC call and a local method call in the code. If you make it so that they don't have to think about it, they won't think about it, and that will lead to very poorly performing code.
Think of:
foreach o in someList where o.isGreen {
    o.makeBlue();
}
The programmer assumes that the loop takes a few nanoseconds to complete; instead it takes close to a second if someList happens to be remote.