Is there a better approach for sending bulk mail using the JavaMail API? I currently use the approach below:
Transport tr = session.getTransport("smtp");
tr.connect(smtphost, username, password);
tr.sendMessage(msg, msg.getAllRecipients());
I reuse the same connection to send 'n' number of mails.
Is there any other, separate way to send bulk mail? Kindly help me find a better solution.
In what way do you want it to be "better"?
You can use multiple threads to send more messages in parallel,
up to the limit of what your mail server will allow.
You can use a thread pool, which gives very good performance. I have implemented this and am sharing the code snippet below:
ExecutorService executor = Executors.newFixedThreadPool(nThreads);
// nThreads depends on your CPU/memory usage; it is best to test with different pool sizes.
for (Message message : messages) {
    // message is the javax.mail.Message to send
    Runnable worker = new MyRunnable(message);
    executor.execute(worker);
}
executor.shutdown();
try {
    executor.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
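To make the pattern above self-contained, here is a minimal, runnable sketch using only the JDK. The class and method names (MailPoolSketch, sendAll) are my own, and the counter increment stands in for the actual Transport.sendMessage call:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MailPoolSketch {
    // Submits one task per message and waits for the pool to drain.
    // Returns the number of messages "sent" so the result is checkable.
    static int sendAll(List<String> messages, int nThreads) {
        AtomicInteger sent = new AtomicInteger();
        ExecutorService executor = Executors.newFixedThreadPool(nThreads);
        for (String message : messages) {
            executor.execute(() -> {
                // Real code would call transport.sendMessage(msg, recipients) here,
                // ideally reusing one Transport connection per worker thread.
                sent.incrementAndGet();
            });
        }
        executor.shutdown();
        try {
            executor.awaitTermination(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sent.get();
    }

    public static void main(String[] args) {
        System.out.println(sendAll(Arrays.asList("msg1", "msg2", "msg3"), 2)); // prints 3
    }
}
```

Note that the pool size is worth benchmarking: past the point where your SMTP server throttles concurrent connections, more threads only add contention.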
It seems to me that the built-in cancellation support of gRPC is quite aggressive in the following sense: if I invoke cancellation on the client side, the communication channel is closed immediately. The server is informed and can do cleanup work, but there seems to be no way to notify the client once cleanup has finished.
What am I supposed to do if I want the following "soft cancellation" behavior?
Client requests cancellation
Server receives cancellation request and starts cleanup
Server finishes cleanup and closes the communication channel
Client gets informed that cancellation procedure has been finished
This behavior could be achieved using a request stream and the oneof keyword:
service ServiceName {
  rpc Communicate (stream Request) returns (Response);
}

message Request {
  oneof request_oneof {
    ActualRequest actual = 1;
    CancellationRequest cancellation = 2;
  }
}
This should not be too hard to implement, but it also looks quite cumbersome. What do you think is the intended way?
For unary RPCs, there is no alternative to cancellation. You'd need to make it a client-streaming RPC in order to encode the additional communication events.
For client-streaming and bidi-streaming RPCs, a common pattern is for the client to delay half-close. When the client is done, the client half-closes, which notifies the server the client is done. The server then performs clean up and can close the stream cleanly with a status message. You could do the same approach, making use of half-close:
service ServiceName {
  rpc Communicate (stream ActualRequest) returns (Response);
}
message ActualRequest {...}
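In grpc-java, for example, the client's half-close corresponds to calling onCompleted() on the request observer. A hedged sketch, assuming stubs generated from the service above (the asyncStub variable name is illustrative):

```java
// Client side: send requests, then half-close to signal "I'm done, please wind down".
StreamObserver<ActualRequest> requests =
    asyncStub.communicate(new StreamObserver<Response>() {
        @Override public void onNext(Response response) { /* final result arrives here */ }
        @Override public void onError(Throwable t) { /* server aborted */ }
        @Override public void onCompleted() { /* server finished cleanup and closed cleanly */ }
    });

requests.onNext(request);   // normal traffic
requests.onCompleted();     // half-close: the server sees the client is done, performs
                            // cleanup, then closes the stream with a status the client observes
```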
Streaming allows you to make custom protocols like this, but nothing but your application will know how to interact with that protocol. A proxy, for example, could not initiate a "graceful cancellation" for this protocol. So you should only use streaming when you really need it and you should still properly implement the "hard" cancellation even though you may not use it as often.
I am using Apache HttpClient version 4.5.13 in my Java application to send a POST request. I used the following lines of code to set up the HTTP client:
SocketConfig socketConfig = SocketConfig.custom()
.setSoKeepAlive(true)
.setTcpNoDelay(true)
.build();
ConnectionConfig connectionConfig = ConnectionConfig.custom()
.setMalformedInputAction(CodingErrorAction.IGNORE)
.setUnmappableInputAction(CodingErrorAction.IGNORE)
.setCharset(Consts.UTF_8)
.setMessageConstraints(messageConstraints)
.build();
RequestConfig defaultRequestConfig = RequestConfig.custom()
.setCookieSpec(CookieSpecs.DEFAULT)
.setExpectContinueEnabled(true)
.setTargetPreferredAuthSchemes(Arrays.asList(AuthSchemes.NTLM, AuthSchemes.DIGEST))
.setContentCompressionEnabled(true)
.build();
BasicHttpClientConnectionManager connectionManager = new BasicHttpClientConnectionManager();
connectionManager.setSocketConfig(socketConfig);
connectionManager.setConnectionConfig(connectionConfig);
CloseableHttpClient httpClient = HttpClients.custom()
.setConnectionManager(connectionManager)
.setDefaultRequestConfig(defaultRequestConfig)
.build();
And I am sending the data via
CloseableHttpResponse response = httpClient.execute(postRequest);
The issue I am experiencing is that when I look at how the messages are sent (using tshark), I can see that the application data is split into two messages. The first one leaves my system around 0.5 ms after httpClient.execute(postRequest), but the second part is sent around 10-20 ms after the first one. It looks like the second part is waiting to receive the ACK for the first part of the message. I tried changing a lot of configurations (buffer sizes, TcpNoDelay, different TLS versions, ...) but cannot figure out what is causing this behavior.
I also tried the http.net client to send POST requests. With that client the message was also split into two messages, but they were both sent right after each other (with around 0.3 ms delay).
I am pretty new to networking, so I would appreciate a helpful answer, and I apologize upfront if I did not explain it very well (I do not know all the specific terminology).
Thanks
Try disabling the expect-continue handshake.
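With expect-continue enabled, the client sends the request headers first and waits for the server's "100 Continue" response before sending the body, which matches the observed two-part send with a delay in between. In the configuration from the question, that means flipping one flag on the RequestConfig (an untested fragment against HttpClient 4.5):

```java
RequestConfig defaultRequestConfig = RequestConfig.custom()
        .setCookieSpec(CookieSpecs.DEFAULT)
        .setExpectContinueEnabled(false) // don't wait for "100 Continue" before sending the body
        .setTargetPreferredAuthSchemes(Arrays.asList(AuthSchemes.NTLM, AuthSchemes.DIGEST))
        .setContentCompressionEnabled(true)
        .build();
```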
Sorry if my question is dumb, I'm new to MassTransit.
My system consists of a server and multiple client devices.
I'd like to send a message from the server to a specific client or to a group of clients.
As far as I understand, IBusControl.Publish sends the message to all subscribers, and IBusControl.Send sends it to only one subscriber.
How can I achieve this using MassTransit?
My transports are RabbitMQ / Azure Service Bus.
Thanks!
MassTransit implements standard messaging patterns, which aren't MassTransit-specific: point-to-point, publish-subscribe, invalid message channel, dead letter channel, and so on.
You indeed have a choice between sending a message to one consumer using Send and broadcasting a message to all subscribers for that message type using Publish.
Everything else can be easily done by adding code to consumers:
await bus.Publish(new MyMessage { ReceiverGroup = "group1", ... });
and
public async Task Consume(IContext<MyMessage> context)
{
    if (context.Message.ReceiverGroup != myGroup) return;
    ...
}
Minimum requirement: send nearly 1 MB/second or more to other WebSocket clients.
Questions:
Is video streaming possible with SuperWebSocket?
Which SuperWebSocket options/features (async mode, JSON commands, custom sessions, etc.) can be used to achieve the fastest data transfer?
How do I sequence big data sent in chunks if it is received out of order on the client or server side? Is there anything built in to sequence these chunks, or do I have to manually send sequence numbers in the messages themselves?
What I have tried:
Multiple secure sessions on the same port with different paths in JavaScript code:
ws = new WebSocket("wss://localhost:8089/1/1")
ws = new WebSocket("wss://localhost:8089/2/2")
ws = new WebSocket("wss://localhost:8089/3/3")
With the above sessions I send large data in chunks, but it is not received in the expected order on the server/client side, and after successfully sending a large chunk (size = 55000 kb) the session closes automatically!
I am looking into the sample projects of SuperWebSocket but am not sure where to go. I am open to trying any option inside SuperWebSocket. Thanks
1) I am not sure whether it does, but if it provides an API to send byte[], that may be enough.
2) No idea about this one; the documentation may explain it.
3) What do you mean by "without order"? WebSocket is TCP-based, so data segments sent over the same connection will arrive in the same order they were sent.
4) Why would you open different connections to the same site? There are probably also limits on how many connections you can open to the same host. One should be OK; opening several is not going to increase your bandwidth, it will only increase your problems.
I develop a WebSocket server component that handles messages as Stream-derived objects and has acceptable performance so far; you may like to give it a try.
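If you do split a payload across multiple connections anyway, neither WebSocket nor SuperWebSocket reassembles it for you, so each message would have to carry its own sequence number. A minimal, transport-agnostic sketch in plain Java (the chunk format and class name are my own invention, not part of SuperWebSocket):

```java
import java.util.Map;
import java.util.TreeMap;

public class ChunkReassembler {
    private final TreeMap<Integer, String> buffer = new TreeMap<>();
    private final int expectedChunks;

    public ChunkReassembler(int expectedChunks) {
        this.expectedChunks = expectedChunks;
    }

    // Chunks may arrive in any order (e.g. over different connections);
    // each message carries its own sequence number.
    public void accept(int seq, String payload) {
        buffer.put(seq, payload);
    }

    public boolean isComplete() {
        return buffer.size() == expectedChunks;
    }

    // TreeMap iterates keys in ascending order, restoring the original order.
    public String reassemble() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Integer, String> e : buffer.entrySet()) {
            sb.append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        ChunkReassembler r = new ChunkReassembler(3);
        r.accept(2, "C"); // arrives out of order
        r.accept(0, "A");
        r.accept(1, "B");
        System.out.println(r.isComplete() + " " + r.reassemble()); // prints: true ABC
    }
}
```

Over a single connection this bookkeeping is unnecessary, since TCP already delivers the frames in order.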
We have some very long running ETL packages (some run for hours) that need to be kicked off by NServiceBus endpoints. We do not need to keep a single transaction alive for the entire process, and can break it up into smaller transactions. Since an NServiceBus handler will wrap itself in a transaction for the entirety, we do not want to handle this in a single transaction because it will time out--let alone create issues with locking in the DBMS.
My current thoughts are that we could spawn another process asynchronously, immediately return from the handler, and publish an event upon completion (success or failure). I have not found a lot of documentation on how to integrate the new NServiceBus 4.0 SQL Server Broker support with the traditional MSMQ transport. Is that even possible?
What is the preferred way to have a long running process in SQL Server 2012 (or an SSIS package) notify NServiceBus subscribers when it completes in an asynchronous manner?
It looks like it is possible to do a http request from SSIS, see How to make an HTTP request from SSIS?
With that in mind, you can send a message via the Gateway (the Gateway is just an HttpListener) to your Publisher, telling it to publish a message informing all the subscribers that the long-running ETL package has completed.
To send a message to the gateway you need to do something like:
var webRequest = (HttpWebRequest)WebRequest.Create("http://localhost:25898/Headquarters/");
webRequest.Method = "POST";
webRequest.ContentType = "text/xml; charset=utf-8";
webRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)";
webRequest.Headers.Add("Content-Encoding", "utf-8");
webRequest.Headers.Add("NServiceBus.CallType", "Submit");
webRequest.Headers.Add("NServiceBus.AutoAck", "true");
webRequest.Headers.Add("NServiceBus.Id", Guid.NewGuid().ToString("N"));
const string message = "<?xml version=\"1.0\" ?><Messages xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns=\"http://tempuri.net/NServiceBus.AcceptanceTests.Gateway\"><MyRequest></MyRequest></Messages>";
using (var messagePayload = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(message)))
{
    // Need to specify the MD5 hash of the payload
    webRequest.Headers.Add(HttpRequestHeader.ContentMd5, HttpUtility.UrlEncode(Hasher.Hash(messagePayload)));
    webRequest.ContentLength = messagePayload.Length;
    using (var requestStream = webRequest.GetRequestStream())
    {
        messagePayload.CopyTo(requestStream);
    }
}
using (var myWebResponse = (HttpWebResponse) webRequest.GetResponse())
{
    if (myWebResponse.StatusCode == HttpStatusCode.OK)
    {
        //success
    }
}
Hope this helps!
There is actually a task in SSIS 2012 for placing messages in an MSMQ, the Message Queue Task. You just point it to your MSMQ connection and can use an Expression to customize your message with the package name, success/failure, row counts, etc.
Depending on how many packages we're talking about and how customized you want the messages to be, your best bet is to write a standalone utility to create messages in whatever format you desire, and then use an Execute Process Task to invoke that utility with whatever parameters from the package you want to pass in to be formatted into the message.
You could also use that same codebase and just create a custom SSIS task (a lot easier than it sounds.)
One thought I had to help adhere to the DRY principle would be to use a Master SSIS package.
In my mind, it would look something like an Execute Package Task with an X connected to it. Configure the package to take a Package Name as a parameter, and configure the Execute Package Task to use that parameter to determine which package to call.
The X would probably be a Script Task, but perhaps, as @Kyle Hale points out, it might be the Message Queue Task. I leave that decision to those more versed in NServiceBus.
The important thing in my mind, is to not add this logic into every package as that'd be a maintenance nightmare.