When to send SMTP Status "500 Line too long" during DATA phase?

I'm currently writing an SMTP Server.
One of the SMTP Status Codes defined in RFC 5321 § 4.5.3.1.9 is 500 Line too long. This status code must be returned whenever line-length limits are violated.
As with other SMTP Status Codes, this one must be returned to the client "before the client sends the next command". It's easy to do this for stateless commands such as MAIL FROM and RCPT TO ...
... but once DATA has been sent by the client, the SMTP server transitions into the "Receiving DATA" phase ... when is the right moment to return the 500 Line too long status code?
A) Immediately upon receipt of the too-long line?
This leads to
A1) Continue accepting lines until <CR><LF>.<CR><LF>, or
A2) Immediately exit the DATA phase and return to "Receiving Command" state
or
B) After the DATA phase has been terminated with <CR><LF>.<CR><LF>
I have searched high and low for weeks for an answer, even searching for "SMTP Server State Machine", but I cannot find any explicit instruction on whether (A) or (B) is the right answer, and if (A), whether to do (A1) or (A2).

My reading is that you have to wait for the <CRLF>.<CRLF> terminator before you send back any status code. See in particular section 4.2.5, which sort of implies this (but does not really spell it out).
What you do on the server side is up to you, but what would make sense is to discard any further input from the client and just wait for the terminator, so you can tell them that whatever they were trying to send was not accepted.
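To make that concrete, here is a minimal sketch of that approach in Python; the helper names (read_line, send_reply, deliver) are hypothetical, and the 998-octet limit comes from RFC 5321 section 4.5.3.1.6:

# Sketch of option (B): keep consuming DATA until the terminating
# "<CRLF>.<CRLF>", remember that a line-length violation occurred,
# and only then send the status code. Helper names are hypothetical.

MAX_LINE = 998  # max text line length excluding CRLF (RFC 5321 section 4.5.3.1.6)

def handle_data_phase(read_line, send_reply, deliver):
    """read_line() returns one line without its CRLF; send_reply() writes a reply."""
    body_lines = []
    line_too_long = False

    while True:
        line = read_line()
        if line == ".":                      # bare dot = <CRLF>.<CRLF> terminator
            break
        if line.startswith("."):             # undo dot-stuffing (section 4.5.2)
            line = line[1:]
        if len(line) > MAX_LINE:
            line_too_long = True             # remember the violation, keep draining input
        elif not line_too_long:
            body_lines.append(line)

    if line_too_long:
        send_reply("500 Line too long")      # reject the whole message after the terminator
    else:
        deliver("\r\n".join(body_lines))
        send_reply("250 OK")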

Related

How can one obtain the message from state-reverting exception using ethereum clients, when self did not broadcast transaction?

Suppose an Ethereum smart contract has an external function foo whose logic contains the state-reverting exception require(1 == 0, 'error: you broke the simulation!');.
If ethereum-client A broadcasts transaction "txA", which is a function call on foo, how can ethereum-client B access the state-reverting message corresponding to "txA"?
edit: by "how can", I mean: how can a developer practically enable ethereum-client B to access this data? I.e., can you please point me in the direction of the correct (lower-level, not web UI) API/RPC call from a particular tool?
Clearly this is possible, since block explorers provide such messages for failed transactions. I read through some of the source of Etherscan, but their JavaScript is minified and not easily readable.
Thanks in advance!
See this: https://ethereum.stackexchange.com/questions/39817/are-failed-transactions-included-in-the-blockchain
Failed transactions often are included in the chain.
What you sometimes see, if you're using e.g. MetaMask, is a popup saying "this transaction will fail" before the transaction is sent to the chain. This is MetaMask trying to be helpful and prevent you from wasting gas. But you can force-send the transaction anyway, and you'll get a failed/reverted transaction posted on-chain (like this one for this Solidity source).
So to answer the original question, if TxA was posted on-chain, then client B will process it and get the revert message itself. If TxA was not posted on-chain, then there is no record of it.
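If you want client B to recover the revert string on its own, one common trick is to replay the mined transaction as a read-only call at its block; the node then surfaces the revert reason. A rough sketch with web3.py follows - the node URL and transaction hash are placeholders, older blocks may need an archive node, and the exception type/text can differ between web3.py versions:

# Rough sketch: replay a mined (failed) transaction as a read-only call to
# surface the revert reason. Note this runs against the post-block state, so
# it is an approximation, not an exact re-execution.
from web3 import Web3
from web3.exceptions import ContractLogicError

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder node URL

def revert_reason(tx_hash):
    tx = w3.eth.get_transaction(tx_hash)
    call = {
        "from": tx["from"],
        "to": tx["to"],
        "data": tx["input"],
        "value": tx["value"],
        "gas": tx["gas"],
    }
    try:
        w3.eth.call(call, block_identifier=tx["blockNumber"])
        return None  # the call did not revert under these replay conditions
    except ContractLogicError as err:
        return str(err)  # e.g. "execution reverted: error: you broke the simulation!"

print(revert_reason("0x..."))  # placeholder transaction hash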

What do SMTP implementations typically do with the mail data in response to RSET after DATA?

Here is what I gathered from RFC 5321:
4.1.1.5. RESET (RSET)
This command specifies that the current mail transaction will be aborted. Any stored sender, recipients, and mail data MUST be discarded, and all buffers and state tables cleared. The receiver MUST send a "250 OK" reply to a RSET command with no arguments. A reset command may be issued by the client at any time. It is effectively equivalent to a NOOP (i.e., it has no effect) if issued immediately after EHLO, before EHLO is issued in the session, after an end of data indicator has been sent and acknowledged, or immediately before a QUIT.
The emphases are mine. This says that if we receive the RSET after the end-of-data indicator ".", but before we have sent the acknowledgement, then we must discard the content of the message that is currently being delivered. This does not seem practical. Moreover, the server can easily act as if it received the RSET after it sent the acknowledgement - the client would not be able to tell. Trying to find out what is usually done, I found this discussion https://www.ietf.org/mail-archive/web/ietf-smtp/current/msg00946.html where they say:
Under a RFC5321 compliant "No Quit/Mail" cancellation implementation, after
completing the DATA state, the server is waiting for a pending RSET, MAIL
or QUIT command:
QUIT - complete transaction, if any
MAIL - complete transaction, if any,
       perform a "reset"
RSET - cancel any pending DATA transaction delivery,
       perform a "reset"
drop - cancel any pending DATA transaction delivery
We added this support in 2008 as a local policy option (EnableNoQuitCancel)
which will alter your SMTP state flow, your optimization and now you MUST
follow RSET vs QUIT/MAIL correctly. RSET (after DATA) aborts the
transaction, QUIT/MAIL (after DATA) does not. RSET is not an NOOP at this
point.
The specification says that discarding is a MUST. However, the extract above suggests that in practice it is treated as a MAY. I could look at the code of known SMTP/LMTP implementations, such as Dovecot, but perhaps someone has already reviewed that, which would save me time.
The text says "end of data indicator has been sent and acknowledged" which suggests that the client has received the server's response to the DATA command. Since the base protocol doesn't support command pipelining, I don't think sending anything after DATA but before the server's response (after the dot which terminates the DATA but before you receive a reply from the server) is well-defined behavior.
Personally, I can't think of any more reasonable server behavior than "pretend it didn't happen."
The answer is here: https://www.rfc-editor.org/rfc/rfc1047 . It basically says that you can acknowledge before you start the processing, and that it is actually recommended to do so. This does not violate RFC 5321. Of course, more information on this issue would be useful, but I am happy with RFC 1047.
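In code terms, the RFC 1047 advice amounts to acknowledging the final dot as soon as the message is safely spooled and doing the slow processing afterwards; a later RSET then only resets the envelope. A minimal sketch in Python, with hypothetical helper names:

# Sketch of the RFC 1047 approach: acknowledge <CRLF>.<CRLF> as soon as the
# message is durably spooled, do the expensive processing asynchronously, and
# treat a subsequent RSET as a plain transaction reset.

def end_of_data(session, send_reply, spool, schedule_delivery):
    spool_id = spool(session.sender, session.recipients, session.data)  # durable write
    send_reply("250 OK")             # acknowledge before heavy processing (RFC 1047)
    schedule_delivery(spool_id)      # async: scanning, routing, final delivery
    session.reset_transaction()      # clear the envelope; the message is already accepted

def handle_rset(session, send_reply):
    session.reset_transaction()      # after an acknowledged end-of-data this is
    send_reply("250 OK")             # effectively a NOOP for the accepted message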

ssis send mail task: Error: An error occurred with the following error message: "The operation has timed out."

The SSIS package in question runs a series of stored procedures, fills 13 different Excel files with the results, and sends those Excel files to 13 different users as attachments. The package run stops with the message in the title of this question, sometimes right in the middle of sending, or, for example today, on the 4th user. The files do get created, because I can see them in their directories, so only the Send Mail task is failing. When I go back to Visual Studio and execute each send task manually, it works fine - even though it sometimes still gives me the error, it still sends the right file to the right person, just not through the SSIS package run in SQL Server... I tried to delay the SMTP processes, thinking that might be in the way (up to 660000 milliseconds), but it did not help. Has this happened to anybody?.. Thanks for all your answers in advance.
Here is the full message for a task that sent the e-mail with attachment regardless of the error when the task was manually executed...
[Send Mail Task] Error: An error occurred with the following error message: "The operation has timed out.".
Progress: The SendMail task is completed. - 100 percent complete
Task Send Mail Task for Inventory Reports 038 failed
Finished, 12:03:03 PM, Elapsed time: 00:00:00.655
I think I figured out why this was happening. In case somebody/anybody is interested, here is what I think happened.
I was trying to extend the timeout period through a script task, playing with the Threading.Thread.Sleep value, but I neglected to do the same in my SMTP connection properties. When I changed the timeout value in the SMTP connection properties, the error messages stopped coming :)
I wish I could post a picture to show you where exactly that property is located but my reputation failed me!.. :( (less than 10 points yet)
I am in the process of completing all of my changes then I will post again with final result hoping that will resolve all of my problems.
Thanks to all who showed interest.

MySQL listen notify equivalent

Is there an equivalent of PostgreSQL's NOTIFY and LISTEN in MySQL? Basically, I need to listen to triggers in my Java application server.
Ok, so what I found is that you can create UDFs (user-defined functions) in MySQL that can do almost anything, but they need to be written in C/C++. They can then be called from triggers on updates in the database and notify your application when an update happened. I saw that there are some security concerns. I have not used it myself, but from what I can see it looks like something that could accomplish what you want to do, and more.
http://dev.mysql.com/doc/refman/5.6/en/adding-udf.html
The GitHub project mysql-notification provides a MySQL user-defined function, MySQLNotification(), as a plugin to MySQL that sends notification events via a socket interface. The project includes a sample NodeJS test server that receives the notification events and could be adapted for Java or any other socket service.
Example use:
DELIMITER ##
CREATE TRIGGER <triggerName> AFTER INSERT ON <table>
FOR EACH ROW
BEGIN
    SELECT MySQLNotification(NEW.id, 2) INTO @x;
END##
The project includes full source code and installation instructions for OSX and Linux. The license is GNU GPL v3.
No, there aren't any built-in functions like these yet.
You need to "ping" (every 1-5 seconds) database with selecting with premade flag like "read" 0/1. After
SELECT * FROM mytable WHERE read = 0
update it with read = 1
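A rough sketch of that polling loop, assuming mysql-connector-python and a table mytable with an integer primary key id plus the read flag column (credentials are placeholders):

# Rough polling sketch; library, table and column names are assumptions.
import time
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="mydb"  # placeholders
)

def poll_forever(handle_row, interval=2):
    while True:
        cur = conn.cursor(dictionary=True)
        cur.execute("SELECT * FROM mytable WHERE `read` = 0")
        rows = cur.fetchall()
        for row in rows:
            handle_row(row)  # notify the application about the changed row
            cur.execute("UPDATE mytable SET `read` = 1 WHERE id = %s", (row["id"],))
        conn.commit()
        cur.close()
        time.sleep(interval)

poll_forever(lambda row: print("changed:", row))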
I needed to do this, so I designed my application to send the update notices itself.
E.g.
--Scenario--
User A is looking at record 1
User B saves an update to record 1 while User A has it open.
Process:
I wrote my own socket server as a Windows service. I designed a queue-like system which is basically
EntityType EntityID NoticeType
where EntityType is the type of POCO in my data layer that needs to send out notices, EntityID is the primary key value of the row that changed in SQL (the values of the POCO), and NoticeType is 1 = Updated, 2 = Inserted, and 3 = Deleted.
The socket server accepts connections from the server side application code on a secure connection "meaning client side code cannot make requests designed to be sent by the server side application code"
The socket server accepts a message like
900 1 1023 1
Which would mean the server needs to notify concerned client connections that Entity Type 1 "Person" with ID 1023 was Updated.
The server knows which users need to be notified, because when users look at a record they are registered in the socket server as having an interest in that record and its ID, which is done by the WebSocket code in the client-side JavaScript.
Record 1 is a POCO in my app code that has IsNew and IsDirty fields. "Using EntityFramework 6 and MySQL" If User B's save caused an actual change (and not just saving existing data), IsDirty will be true on the postback of User B's POCO.
The application code sees that the record is dirty, then notifies the socket server with a server-side socket message "which will be allowed" that says Entity 1 with ID 1023 was Updated.
The socket server sees it and puts it in the queue.
Being .NET, I have a class for concerned users that uses the same POCOs from the data layer running in the socket server Windows service. I use LINQ to select users who are working with an entity matching the entity type and primary key ID of the entity in the queue.
It then loops through those users and sends them a socket like
901 1 1023 1 letting them know the entity was updated.
The JavaScript on the client side receives it, causing User B's page to do an AJAX postback on Record 1, but what happens with User A is different.
If User A was in the process of making a change, they will get a popup showing them what changed and what their new value will be if they click save, and asking which change they want to keep. If User A doesn't have a change, the page does an AJAX postback with a notification bar at the top that says "Record Change: Refreshed Automatically", which expires after a few seconds.
The cons to this:
1. It's very complex.
2. It won't catch insert/update/delete operations from outside of the application.
In my case, 2 won't happen, and if it does happen it's by myself or another dev who knows how to manually create the notify queue requests "building an admin page for that".
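Purely to illustrate the wire format described above (opcode, entity type, entity ID, notice type), here is a small sketch in Python rather than .NET; the 900/901 opcodes and the 1/2/3 notice types come from the description, everything else is made up for the example:

# Parse a notice like "900 1 1023 1" ("entity type 1 with id 1023 was updated")
# and work out which registered clients should receive the matching 901 message.
from collections import namedtuple

Notice = namedtuple("Notice", "opcode entity_type entity_id notice_type")
NOTICE_TYPES = {1: "Updated", 2: "Inserted", 3: "Deleted"}

def parse_notice(raw):
    opcode, entity_type, entity_id, notice_type = (int(p) for p in raw.split())
    return Notice(opcode, entity_type, entity_id, notice_type)

def interested(watchers, notice):
    """watchers: iterable of (client, entity_type, entity_id) registrations."""
    return [c for c, etype, eid in watchers
            if etype == notice.entity_type and eid == notice.entity_id]

notice = parse_notice("900 1 1023 1")
print(NOTICE_TYPES[notice.notice_type])  # -> "Updated"
for client in interested([("userA", 1, 1023), ("userB", 1, 9999)], notice):
    print(f"send to {client}: 901 {notice.entity_type} {notice.entity_id} {notice.notice_type}")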
You can use https://maxwells-daemon.io to do so.
It is based on the MySQL binlog; when changes occur in the database, it sends a JSON message with the updates to Kafka, RabbitMQ, or other streaming platforms.
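For example, if Maxwell writes to Kafka, the application side just has to decode its JSON change events. A rough sketch assuming the kafka-python package and Maxwell's default topic name; the exact payload fields depend on your Maxwell configuration:

# Rough sketch: consume Maxwell change events from Kafka. The broker address
# and topic name are assumptions; database/table/type/data are the core
# fields of Maxwell's JSON format.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer("maxwell", bootstrap_servers="localhost:9092")

for message in consumer:
    event = json.loads(message.value)
    print(event["database"], event["table"], event["type"], event.get("data"))
    if event["type"] == "insert" and event["table"] == "mytable":
        pass  # notify the application here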

Send mail with SMTP adapter with retry, retryinterval and delivery notification

I have an orchestration that receives an XML message with some email properties (like: to, from, cc, subject, etc.).
Then I want to send the email message with a dynamic port (I assign some of the values according to the input XML). After the email has been sent, I want to do some further processing, but that processing may only execute once the mail has been delivered successfully to the SMTP server.
The functional design calls for a retry per hour, for a maximum of one day; after that period a message must be written to the EventLog if it cannot be delivered successfully.
Therefore I set the dynamic port with the context properties BTS.RetryCount to 23 and BTS.RetryInterval to 60.
I have set the dynamic SMTP port delivery notification to "Transmitted" and I have a catch exception block to catch the DeliveryFailureException.
Is this enough ?
It is a little bit confusing for me, reading several blogs, whether I should mark the scope Synchronized...
Patrick,
You're right, the documentation on this aspect of BizTalk delivery notification is scarce and confusing. After extensive testing, I have not been able to identify any difference whether the Scope is set to Synchronized = true or not.
The official documentation for the Synchronized setting only applies to shared variables when used in both branches of a Parallel execution.
As for the Delivery Notification itself, I'm currently facing a problem in production where the FILE adapter produces its ACK event before the entire contents of the file are written to the output folder - it renders this part of the solution useless!