Single HttpClientContext for multiple threads - apache-httpclient-4.x

I am using Apache HttpClient 4.5 to process HTTP requests in Java.
According to the documentation, HttpClient is thread safe, so the same instance of HttpClient can be used by all threads, but an HttpContext should be maintained by each thread of execution.
For authentication (NTLM authentication) we need to set a CredentialsProvider on the context, which will authenticate against the server.
Requirement
All requests will hit the same server with the same authentication details. I want to authenticate only once, when the application initialises or on the first request to the server; all other requests, possibly coming from different threads, should be served within the same session.
Can I share the same context, given that every thread hits the same server with the same authentication details, or is there another way to achieve this?

Even though HttpContext instances ought not be shared between threads, there is nothing wrong with sharing thread-safe objects between multiple contexts. For instance, one can easily use the same CredentialsProvider and AuthCache instances with multiple concurrent contexts.
// External dependencies
CloseableHttpClient client;
CredentialsProvider credentialsProvider;
AuthCache authCache;
CookieStore cookieStore;
Principal userPrincipal;
// request execution
HttpClientContext context = HttpClientContext.create();
context.setCredentialsProvider(credentialsProvider);
context.setAuthCache(authCache);
context.setCookieStore(cookieStore);
context.setUserToken(userPrincipal);
HttpGet httpGet = new HttpGet("http://targethost/");
try (CloseableHttpResponse response1 = client.execute(httpGet, context)) {
    System.out.println(response1.getStatusLine());
    EntityUtils.consume(response1.getEntity());
}
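The shared, thread-safe objects themselves only need to be created once, for example at application start-up. A minimal setup sketch (the host, port, credentials and domain values are placeholders, not taken from the question):
// One-time creation of the thread-safe objects that every per-thread
// HttpClientContext above can share (placeholder host and NTLM credentials).
CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(
        new AuthScope("targethost", 80, AuthScope.ANY_REALM, AuthSchemes.NTLM),
        new NTCredentials("user", "password", "workstation", "DOMAIN"));
AuthCache authCache = new BasicAuthCache();
CookieStore cookieStore = new BasicCookieStore();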
VERY IMPORTANT: NTLM connections are stateful and can be re-used between contexts only if associated with the same user identity. One can either turn off connection state tracking when wiring up the HttpClient instance (as below) or manually set up the user identity in the execution context (as above).
CloseableHttpClient client = HttpClientBuilder.create()
.disableConnectionState()
.build();

Related

Orion Context Broker delivery guarantees?

Thinking of 'production' usage of the Orion Context Broker, I wonder what kind of guarantees are provided by the Orion Context Broker in terms of delivery of messages, both from the producer and the consumer perspective? In particular, keeping in mind various possible failure scenarios (CB failure/restart, transient network failure, consumer failure/restart, etc.), as well as the possibility of resource congestion in the CB. A few examples:
1) if a context update operation succeeds, is it guaranteed that consequent queries will return the latest data (e.g., even if CB failed right after acknowledging the update request, and then restarted)?
2) if a consumer subscribed for certain context information, is it guaranteed that it will receive all the relevant updates -- exactly once, at least once, or even at all? (e.g., in case of transient network failure between CB and the consumer)
3) if a consumer updated its subscription, is it guaranteed that the consequent updates will accurately reflect it? (e.g., if CB failed right after acknowledging the subscription request, and then restarted)
4) if a consumer is subscribed for context changes ('onchange', no throttling), and there are multiple consequent updates from the producer affecting the same attribute, is it guaranteed that each of the changes will be sent (or some might be skipped -- e.g., due to too many notifications that CB needs to send during a certain period of time), in any particular order?
etc...
Thanks!
Answering bullet by bullet:
1) In general, if the client receives a 2xx response (inside the response payload in the case of NGSIv1, as the HTTP response code in the case of NGSIv2), it can assume that the update has been persisted in the context database, so subsequent queries will return that data (except when running CB with -writeConcern 0, if the DB fails before the update can be persisted from DB memory to disk).
2) In order to keep things simple, CB uses a "fire and forget" notification policy. However, CB can be combined with HTTP relaying software (e.g. Rush, event buses, etc.) in order to implement retries, etc.
3) Similar to case 1, if the client receives a 2xx response (inside the response payload in the case of NGSIv1, as the HTTP response code in the case of NGSIv2), it can assume that the update has been persisted in the context database (except when running CB with -writeConcern 0, if the DB fails before the update can be persisted from DB memory to disk), so notifications of such data (due to either existing subscriptions or new ones) will use the new value.
4) All notifications will be sent as long as thread saturation (in the case of -notificationMode transient) or queue saturation (in the case of -notificationMode threadpool:q:n) does not occur. You can find more information about notification modes in the Orion documentation.

why DeferredResult ends on setResult() on trying to use SSE

I am trying to implement a Server-Sent Events (SSE) web page powered by Spring. My test code does the following:
The browser uses EventSource(url) to connect to the server. Spring accepts the request with the following controller code:
@RequestMapping(value="myurl", method = RequestMethod.GET, produces = "text/event-stream")
@ResponseBody
public DeferredResult<String> subscribe() throws Exception {
    final DeferredResult<String> deferredResult = new DeferredResult<>();
    resultList.add(deferredResult);
    deferredResult.onCompletion(() -> {
        logTimer.info("deferedResult "+deferredResult+" completion");
        resultList.remove(deferredResult);
    });
    return deferredResult;
}
So essentially it puts the DeferredResult into a list and registers a completion callback so that I can remove it from the list on completion.
Now I have a timer method that periodically outputs the current timestamp to all registered browsers via their DeferredResults.
@Scheduled(fixedRate=10000)
public void processQueues() {
    Date d = new Date();
    log.info("outputting to "+ LoginController.resultList.size()+ " connections");
    LoginController.resultList.forEach(deferredResult -> deferredResult.setResult("data: "+d.getTime()+"\n\n"));
}
The data is sent to the browser and the following client code works:
var source = new EventSource('/myurl');
source.addEventListener('message', function (e) {
    console.log(e.data);
    $("#content").append(e.data).append("<br>");
});
Now the problem:
The completion callback on the DeferredResult is invoked on every setResult() call in the timer thread, so for some reason the connection is closed after setResult(). SSE in the browser reconnects as per the spec, and the same thing happens again. So on the client side I get polling behaviour, but what I want is a request that is kept open so I can push data through the same DeferredResult over and over again.
Am I missing something here? Is DeferredResult not capable of sending multiple results? I put a 10-second delay into the timer thread to check whether the request only terminates after setResult(): in the browser the request is kept open until the timer pushes the data, but then it is closed.
Thanks for any hint. One more note: I added async-supported to all filters/servlets in Tomcat.
Indeed, a DeferredResult can be set only once (notice that setResult returns a boolean). It completes processing with the full range of Spring MVC processing options, i.e. everything you know about what happens during a Spring MVC request remains more or less the same, except that the return value is produced asynchronously.
What you need for SSE is something more focused, i.e. writing each value to the response using an HttpMessageConverter. I've created a ticket for that: https://jira.spring.io/browse/SPR-12212.
Note that Spring's SockJS support does have an SSE transport which takes care of a few extras such as cross-domain requests with cookies (important for IE). It's also used on top of a WebSocket API and WebSocket-style messaging (even if WebSocket is not available on either the client or the server side) which fully abstracts the details of HTTP long polling.
As a workaround you can also write directly to the Servlet response using an HttpMessageConverter.
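A minimal sketch of that workaround, assuming async support is enabled as described in the question (the controller name, field names and URL mapping are illustrative, and the SSE frames are written as plain text rather than through an HttpMessageConverter): keep the AsyncContext open and write each event directly to the response.
import java.io.PrintWriter;
import java.util.Date;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class SseStreamController {

    private final List<AsyncContext> clients = new CopyOnWriteArrayList<>();

    @RequestMapping(value = "/myurl", method = RequestMethod.GET)
    public void subscribe(HttpServletRequest request, HttpServletResponse response) throws Exception {
        response.setContentType("text/event-stream");
        response.setCharacterEncoding("UTF-8");
        AsyncContext ctx = request.startAsync();
        ctx.setTimeout(0); // keep the connection open indefinitely
        clients.add(ctx);
    }

    @Scheduled(fixedRate = 10000)
    public void push() throws Exception {
        Date d = new Date();
        for (AsyncContext ctx : clients) {
            PrintWriter writer = ctx.getResponse().getWriter();
            writer.write("data: " + d.getTime() + "\n\n");
            writer.flush(); // each flush delivers one event without completing the request
        }
    }
}
Removing dead connections when a write fails is omitted here for brevity.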

Apache HttpClient 4.3.x has no default host?

HttpClient 4.3.x issue.
There does not seem to be a way to set a default host on CloseableHttpClient in 4.3.x.
This is frustrating, as it requires all of your request builders to know all the host info up front, rather than just building the request parts specific to the call and letting the client fill in any omitted defaults (e.g. a default host, port, etc.).
With 4.2.x and earlier, you could set the default host on the client, and any request just needed a subpath plus parameters.
But with 4.3.x you have confusing layers of setRoutePlanner(x) (which could carry proxy settings) and setProxy(x) (which could be overridden by the route planner), and I'm confused about how they settle on the actual client instance. Debugging shows that the route planner does not get used for the default host, and that 4.3.2 actually expects the deprecated ClientPNames.DEFAULT_HOST to be set (for the case of a null target host), which may be a defect.
I find Apache HttpClient to be going off the deep end with all these changes.
Also, the examples unfortunately do not fully clarify HttpClient usage.
As an aside: the new design is such mud, why not just have setDefaultHost(x) and clear up the confusion around proxy layering?
Unless I'm missing something, how does one set the default host in HttpClient 4.3.x?
Why do you think they decided to require everything up front in the request objects vs. defaults in the client?
This is how one can provide a default target host using a custom route planner:
HttpRoutePlanner routePlanner = new DefaultRoutePlanner(DefaultSchemePortResolver.INSTANCE) {

    @Override
    public HttpRoute determineRoute(
            final HttpHost target,
            final HttpRequest request,
            final HttpContext context) throws HttpException {
        return super.determineRoute(
                target != null ? target : new HttpHost("some.default.host", 80),
                request, context);
    }

};
CloseableHttpClient client = HttpClients.custom()
        .setRoutePlanner(routePlanner)
        .build();
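With this route planner in place, a request can be built with just a relative URI and the planner supplies the target host (a usage sketch; the path is illustrative):
HttpGet httpGet = new HttpGet("/subpath?param=value"); // no host; the route planner fills in the default
try (CloseableHttpResponse response = client.execute(httpGet)) {
    System.out.println(response.getStatusLine());
    EntityUtils.consume(response.getEntity());
}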

How to fix cross-site origin policy for server and web-site

I'm using Dropwizard, which I'm hosting, along with a website, on Google Cloud (GCE). This means that there are two locations currently active:
Some.IP.Address - UI
Some.IP.Address:8080 - Dropwizard server
When the UI tries to call anything from my dropwizard server, I get cross-site origin errors, which is understandable. However, this is posing a problem for me. How do I fix this? It would be great if I could somehow spoof the addresses so that I don't have to fully qualify the resource in the UI.
What I'm looking to do is this:
$.get('/provider/upload/display_information')
Or, if I have to fully qualify
$.get('http://Some.IP.Address:8080/provider/upload/display_information')
I tried setting Origin Filters in Dropwizard per this google groups thread (https://groups.google.com/forum/#!topic/dropwizard-user/ybDOTOxjlLI), but it doesn't seem to work.
In index.html, which is served by the server at http://Some.IP.Address, you might have a jQuery script that looks as follows.
$.get('http://Some.IP.Address:8080/provider/upload/display_information', data, callback);
Of course your browser will not allow access to http://Some.IP.Address:8080 due to the Same-Origin Policy (SOP): the protocol (http, https), the host and the port all have to be the same.
To achieve Cross-Origin Resource Sharing (CORS) with Dropwizard, you have to add a CrossOriginFilter to the servlet environment. This filter will add the Access-Control-* headers to every response the server sends. In the run method of your Dropwizard application write:
import java.util.EnumSet;

import javax.servlet.DispatcherType;
import javax.servlet.FilterRegistration;

import org.eclipse.jetty.servlets.CrossOriginFilter;

public class SomeApplication extends Application<SomeConfiguration> {

    @Override
    public void run(SomeConfiguration config, Environment environment) throws Exception {
        FilterRegistration.Dynamic filter = environment.servlets().addFilter("CORS", CrossOriginFilter.class);
        filter.addMappingForUrlPatterns(EnumSet.allOf(DispatcherType.class), true, "/*");
        filter.setInitParameter("allowedOrigins", "http://Some.IP.Address"); // allowed origins comma separated
        filter.setInitParameter("allowedHeaders", "Content-Type,Authorization,X-Requested-With,Content-Length,Accept,Origin");
        filter.setInitParameter("allowedMethods", "GET,PUT,POST,DELETE,OPTIONS");
        filter.setInitParameter("preflightMaxAge", "5184000"); // 2 months
        filter.setInitParameter("allowCredentials", "true");
        // ...
    }

    // ...
}
This solution works for Dropwizard 0.7.0 and can be found on https://groups.google.com/d/msg/dropwizard-user/xl5dc_i8V24/gbspHyl4y5QJ.
Have a look at http://www.eclipse.org/jetty/documentation/current/cross-origin-filter.html for a detailed description of the CrossOriginFilter's initialisation parameters.

Is NServiceBus (AsA_Server) without DTC possible?

I am using NServiceBus for the first time and have a small, simple application where a user submits a form, the form fields are then sent to the queue, and the handler collects this data and writes it to the database using linq-to-sql.
Any change within Component Services is a complete no-no as far as the DBA is concerned, so I'm now looking for an alternative to DTC (which is not enabled on the DB server) while still using AsA_Server so that messages do not get purged.
I have tried removing AsA_Server after IConfigureThisEndpoint and specifying the configuration myself, but this doesn't seem to work (the console appears, the page loads but nothing happens; it doesn't even stop at breakpoints). AsA_Client does work, but as I understand it the messages will be purged at startup, which I need to avoid.
Any suggestions?
Thanks,
OMK
EDIT: This has now been resolved by wrapping the call to the database in a suppressed transaction scope, which allows the database work to be done with no ambient transaction to enlist in:
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Suppress))
{
    // code here
    sc.Complete();
}
When you use AsA_Server, you are specifying you want durable queues and you will need to configure transactional queues.
With a transactional send/receive, MSMQ requires you to send, transmit, receive, and process as part of one transaction. However, all of these stages actually take place in their own transactions.
For example, the send transaction is complete when the sender puts a message onto its local MSMQ subsystem (even if the queue address is remote, the sender still sends to a local queue which acts as a kind of proxy to the remote queue).
The transmit transaction is complete when the MSMQ subsystem on the sender's machine successfully transmits the message to the MSMQ subsystem on the receiver's machine.
Even though this may all happen on one machine, I am guessing that your Handle() method is writing to a database on a different machine.
The problem here is that for the receive operation to complete satisfactorily from a transaction perspective, your call to the database must be successful. Only then will the message be de-queued from your input queue. This prevents any chance that the message is lost during processing failure.
However, in order to enforce that across the network you need to involve DTC to coordinate the distributed transaction to the database.
Bottom line, if you want durable queues in a distributed environment then you will need to use MSDTC.
Hope this helps.
There is an alternative. In your connection string you can add an option to not enlist in a distributed transaction, which will have your DB connection ignored by the DTC.
Of course, if this is set in the config then all database transactions for the application are ignored by the DTC rather than just a specific one.
Example:
<add key="DatabaseConnectionString" value="Data Source=SERVERNAME;Initial Catalog=DBNAME;Integrated Security=True;Enlist=False"/>
With NServiceBus 4.0 you can now do the following, which finally worked for me:
Configure.Transactions.Advanced(t =>
{
    t.DisableDistributedTransactions();
    t.DoNotWrapHandlersExecutionInATransactionScope();
});
When you use the As interfaces (AsA_Client, AsA_Server), the configuration is applied after Init(), so all the settings you make there regarding MsmqTransport and UnicastBus are overridden.
It's possible to override those settings using IWantTheEndpointConfig in an IHandleProfile implementation. You get the configuration after the default roles are applied but before the bus is started.
This way you can change the default profile settings and tailor them to your needs: deactivate transactions, enable impersonation...
Example:
public class DeactivateTransactions : IHandleProfile<Lite>, IWantTheEndpointConfig
{
    private IConfigureThisEndpoint configure;

    public IConfigureThisEndpoint Config
    {
        get { return configure; }
        set
        {
            this.configure = value;
            Configure.Instance.MsmqTransport()
                .PurgeOnStartup(false)
                .IsTransactional(false); // Or other changes
        }
    }

    public void ProfileActivated()
    {
    }
}