In the Couchbase Java client there are obsPollInterval & obsPollMax parameters that can be set on the client.
How do I set the equivalent in the .NET client?
Not exactly; the two clients have slightly different implementations. The closest equivalent is the ObserveTimeout configuration value, which defaults to 1 minute; internally, the observe is retried every 500 ms until either the timeout is reached or the operation succeeds.
More information regarding ObserveTimeout can be found here: http://docs.couchbase.com/couchbase-sdk-net-1.3/#appendix-configuring-the-net-client-library
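For reference, here is a hedged sketch of the Java-side settings the question refers to, assuming the 1.x Couchbase Java SDK's CouchbaseConnectionFactoryBuilder (the method names are from memory, so treat them as an assumption). It also shows how the two Java knobs roughly correspond to the single .NET ObserveTimeout:

import com.couchbase.client.CouchbaseClient;
import com.couchbase.client.CouchbaseConnectionFactoryBuilder;
import java.net.URI;
import java.util.Arrays;

// Assumed 1.x Java SDK API: poll interval x max polls ~ the .NET ObserveTimeout
CouchbaseConnectionFactoryBuilder builder = new CouchbaseConnectionFactoryBuilder();
builder.setObsPollInterval(500); // observe poll every 500 ms
builder.setObsPollMax(120);      // up to 120 polls, i.e. roughly 60 s total

CouchbaseClient client = new CouchbaseClient(
        builder.buildCouchbaseConnection(
                Arrays.asList(URI.create("http://127.0.0.1:8091/pools")),
                "default", ""));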
I am implementing a gRPC server (in Go) where I need to respond with some sort of server busy/unavailable message when my server is already servicing a set maximum number of RPCs.
I have implemented a gRPC server with grpc-python earlier, where I achieved this with a combination of maximum_concurrent_rpcs and the max number of threads in the thread pool. I am looking for something similar in grpc-go. The closest I could find is the server setting applied via the ServerOption returned by calling grpc.MaxConcurrentStreams. My application only supports unary RPCs, and I am not sure whether this setting applies to those.
I am just looking to enforce a maximum number of active concurrent requests the server will handle. Would setting MaxConcurrentStreams work, or should I do it in my own code? (I have a rudimentary implementation, but I would rather use something provided by grpc-go.)
I've never used MaxConcurrentStreams, because for high-load services you usually want to get the most out of your hardware, and this limitation doesn't seem to make sense there. It may be possible to achieve your goal with this setting, but you need to investigate what kind of error is returned once the MaxConcurrentStreams limit is reached. I believe it will be a gRPC transport error rather than one of your own, so you won't be able to control the error message and status code.
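If you end up enforcing the limit yourself, one common pattern in grpc-go is a unary server interceptor that uses a buffered channel as a semaphore and rejects excess requests with a RESOURCE_EXHAUSTED status, which you fully control. A minimal sketch (the limit of 100 is an arbitrary placeholder):

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// limitConcurrency rejects unary RPCs once max of them are already in flight.
func limitConcurrency(max int) grpc.UnaryServerInterceptor {
	sem := make(chan struct{}, max) // buffered channel as a counting semaphore
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {
		select {
		case sem <- struct{}{}: // a slot is free: handle the RPC
			defer func() { <-sem }()
			return handler(ctx, req, info)
		default: // all slots taken: fail fast with a status you control
			return nil, status.Error(codes.ResourceExhausted, "server busy, try again later")
		}
	}
}

func main() {
	s := grpc.NewServer(grpc.UnaryInterceptor(limitConcurrency(100)))
	_ = s // register services and call s.Serve(...) as usual
}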
Context
I'm trying to find the best way to combine the Spring 5 WebClient and Hystrix. Using Hystrix, I set different timeouts for the different types of requests done by the WebClient.
When Hystrix reaches its timeout, I also want to make sure that WebClient closes its connection. Previously, when using AsyncHttpClient, this was done by setting a requestTimeout before performing the specific call. However, setting the request timeout on WebClient is much more complicated and needs to be done on the ClientHttpConnector, according to this answer.
Brian Clozel mentions that it is optimal to share the same ClientHttpConnector throughout the application. However, because the request-specific timeout needs to be set on the ClientHttpConnector, this does not seem possible.
Question
In Spring's reactive WebClient, is there a proper way to set request-specific timeouts while still using a single ClientHttpConnector?
The timeout options that you can configure on the client connector are quite low-level: they're about socket/connection timeouts. This configuration cannot be done at the request level, since connections may be shared and reused in a connection pool.
Your question is really about response timeouts, since what you care about is the amount of time it takes to get the response, on a per-request basis.
In that case, you can apply the timeout operator to each request individually:
Mono<UserData> result = this.webClient.get()
.uri("/user")
.accept(MediaType.APPLICATION_JSON)
.retrieve()
.bodyToMono(UserData.class)
.timeout(Duration.ofSeconds(10));
The timeout operator will propagate a TimeoutException in the pipeline; you can use one of the onError* operators to define what should happen in those cases. Alternatively, you can directly use the timeout(Duration, Mono) variant, which takes a fallback.
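For example, the fallback variant keeps everything on one chain (FALLBACK_USER here is a hypothetical placeholder value):

import java.time.Duration;
import java.util.concurrent.TimeoutException;
import org.springframework.http.MediaType;
import reactor.core.publisher.Mono;

Mono<UserData> result = this.webClient.get()
        .uri("/user")
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToMono(UserData.class)
        // emit the fallback instead of an error if no response arrives in 10 s
        .timeout(Duration.ofSeconds(10), Mono.just(FALLBACK_USER));

// or, equivalently, handle the TimeoutException explicitly:
Mono<UserData> result2 = this.webClient.get()
        .uri("/user")
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToMono(UserData.class)
        .timeout(Duration.ofSeconds(10))
        .onErrorResume(TimeoutException.class, ex -> Mono.just(FALLBACK_USER));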
I'd like to know whether two capabilities are available in InfoSphere Streams; I could not find them documented anywhere.
1) To the best of my knowledge, when an InfoSphere Streams application starts, all of the operators are deployed on the hosts in the cluster. Is it possible to deploy a specific operator based on the results of previous operator(s), so that deployment happens during a job (and not only when a host fails)?
2) Also, to the best of my knowledge, tags exist that allow specifying which operators will be deployed to which hosts. Is it possible to change host tags during a job's runtime? Building on question (1): is it possible, at runtime, to deploy an operator to a specific machine based on computations that occurred during the job?
Thanks, Tom.
Answers to your questions:
1.) Operators can be placed relative to the placement of other operators, but not based upon the results of an operator's execution.
2.) There is currently no way for a running operator to change host tags based upon its calculations.
Host tags can be changed while a job is running, but this must be done through administrator operations. The PEs must then be stopped and restarted to take advantage of the new tagging configuration.
Using Flex Builder 4.5 with Rails running on localhost on a brand-new MacBook Air. Using curl, the server's response time for a read is around 200-300 ms. When I use HTTPService, the time from send() to result received is between 1 and 5 seconds for fewer than ten lines of XML received from the server. If I render the URL in the browser, it matches curl, i.e. it is instantaneous; I'm not waiting for the XML to render.
The number is the same in debug and non-debug mode. The HTTPService call is made after creationComplete, so the GUI is done. After the data is received, the rest of my algorithms complete in under 20 ms in the application.
Is this time expected, or am I doing something wrong, or do I have something configured incorrectly?
What you've described sounds like HTTPService isn't setting the TCP_NODELAY socket option (see setsockopt(3)) on its sockets before sending a request. From the Linux tcp(7) man page:
TCP_NODELAY
       If set, disable the Nagle algorithm. This means that segments are always sent as soon as possible, even if there is only a small amount of data. When not set, data is buffered until there is a sufficient amount to send out, thereby avoiding the frequent sending of small packets, which results in poor utilization of the network. This option is overridden by TCP_CORK; however, setting this option forces an explicit flush of pending output, even if TCP_CORK is currently set.
Perhaps your platform has another way you can ask to disable Nagle's algorithm for a specific connection.
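I don't know whether Flex exposes this option directly, but for illustration, this is what disabling Nagle's algorithm looks like at the socket level in Java:

import java.io.IOException;
import java.net.Socket;

Socket socket = new Socket("localhost", 3000);
// TCP_NODELAY: send segments immediately instead of buffering small writes
socket.setTcpNoDelay(true);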
To expand on sarnold's answer, what you need to do is add the following line:
<socket-tcp-no-delay-enabled>true</socket-tcp-no-delay-enabled>
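For context, here is a sketch of where that flag might live, assuming a BlazeDS/LCDS-style services-config.xml channel definition. The channel id, class, and endpoint URL below are placeholders, and the exact location of the property depends on your endpoint type:

<channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://{server.name}:{server.port}/messagebroker/amf"
              class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <socket-tcp-no-delay-enabled>true</socket-tcp-no-delay-enabled>
    </properties>
</channel-definition>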
I'm using NorthScale 1.0.0 and need a little help getting it to limp along long enough to upgrade to the new version. I'm working with it from C# and ASP.NET using the Enyim libraries. I currently suspect that the application does not have enough connections per the socketPool setting in my app.config. I also noted that the previous developer's code treats ANY exception from an attempted Get call to the cache as if the item simply isn't there, which (I believe) may result in periodic spikes in calls to the database when the pool gets starved.

We've been having oddball load spikes that don't seem to have any relation to server load. I suspect the previous developer is not correctly managing the lifecycle of the connections to NorthScale and that we periodically experience starvation in the socket pool as a result, but I'm unable to prove it.
Is there a specific exception I should be looking for when I call the Get method to retrieve items from the cache? I'm not seeing much in the docs that gives me sufficient information on this. Does anybody have any sample code for this? I'd even accept Java or PHP code, as I think the .NET libraries were probably based on one of those anyway.
Any ideas?
Thanks,
Will
If you have made the connection to the Membase server (formerly NorthScale) correctly, you typically only get an exception on a 'get' when it's not a hit.
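Since the question says Java samples are acceptable: a hedged sketch with the Java spymemcached client (which the Enyim client resembles), assuming a default client pointed at a Membase node, showing how to tell a plain miss apart from an operation failure:

import java.io.IOException;
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;
import net.spy.memcached.OperationTimeoutException;

MemcachedClient client = new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));
try {
    Object value = client.get("some-key");
    if (value == null) {
        // A genuine miss: safe to fall back to the database.
    }
} catch (OperationTimeoutException e) {
    // Not a miss: the pool or connection is in trouble. Log this separately
    // instead of silently treating it as a miss and hammering the database.
}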