554 SMTP synchronization error with exim4 and my code - smtp

I have run headfirst into this reject error from exim4:
2010-02-15 01:46:05 SMTP protocol synchronization error (input sent without waiting for greeting): rejected connection from H=ender [192.168.20.49] input="HELO 192.168.20.49\r\n"
I have modified my exim4 config to not enforce sync, like so:
smtp_enforce_sync='false'
acl_smtp_connect = nosync

nosync:
  control = no_enforce_sync
  accept
But that doesn't seem to matter. What makes even less sense to me is why I'm getting the 554 in the first place. I send a HELO, I wait for a response, and somewhere in the midst of that I manage to generate the "554 Error".
What am I doing wrong in the code below that makes this fail 99% of the time (yes, it has worked twice)? Yes, the socket is blocking; I hang in recv for ~5 seconds waiting for the rejection. On the two times it has worked, it didn't pause at all.
I've tried sending EHLO instead of HELO with no better luck. I've even had grief getting a telnet session to connect and say HELO. However, I can use Python's SMTP library (from another machine) to send emails just fine against this same server!
hSocket = _connectServerSocket(server, port);
if (hSocket != INVALID_SOCKET) {
    BYTE sReceiveBuffer[4096];
    int iLength = 0;
    int iEnd = 0;
    char buf[4096];

    strcpy(buf, "HELO ");
    strcat(buf, "192.168.20.49");
    strcat(buf, "\r\n");
    printf("%s", buf);

    if (send(hSocket, (LPSTR)buf, strlen(buf), NO_FLAGS) == SOCKET_ERROR) {
        printf("Socket send error: %d\r\n", WSAGetLastError());
        return (false);
    }

    iLength = recv(hSocket,
                   (LPSTR)sReceiveBuffer + iEnd, sizeof(sReceiveBuffer) - iEnd,
                   NO_FLAGS);
    iEnd += iLength;
    sReceiveBuffer[iEnd] = '\0';

Your code should wait for a 220 greeting line from the SMTP server before sending the HELO message. See section 3.1 of RFC 2821. That is probably what the Python library does.
There are several free libraries that can help you with this, for example libsmtp. Consider spending the time to learn one of these instead of patching together your own solution (unless the point of your project is to write your own mail solution).
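For illustration, a minimal sketch of that fix in the same Winsock style as the code above (the buffer size and the simple "220" prefix check are only illustrative; a robust client would also handle multi-line greetings and partial reads):
char greeting[512];
int n = recv(hSocket, greeting, sizeof(greeting) - 1, NO_FLAGS);
if (n <= 0) {
    printf("Greeting recv failed: %d\r\n", WSAGetLastError());
    return (false);
}
greeting[n] = '\0';
// Only proceed once the server has sent its 220 greeting.
if (strncmp(greeting, "220", 3) != 0) {
    printf("Unexpected greeting: %s\r\n", greeting);
    return (false);
}
// Now it is safe to send "HELO 192.168.20.49\r\n" and read the 250 reply.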

Related

How can I spoof a Ping reply with Scapy?

What's the simplest way to spoof a ping reply with Scapy? I have a compiled script that keeps pinging a certain domain, and I need to investigate its behavior when it receives a ping reply. I thought Scapy would be my best option for doing so, but I can't figure it out.
So far I've found the class scapy.layers.inet.ICMPEcho_am, but trying to import it from scapy.layers.inet throws an ImportError. Besides that, I also need to fake a DNS response, and I'm even more clueless about that.
Thanks in advance for any hint, solution, etc.
A ping (echo) reply is just an ICMP packet with a type and code of 0:
IP(src="FAKE INITIATOR ADDRESS", dst="THE SERVER ADDRESS") / ICMP(type=0, code=0)
or, alternatively:
IP(src="FAKE INITIATOR ADDRESS", dst="THE SERVER ADDRESS") / ICMP(type="echo-reply", code=0)
Obviously, "FAKE INITIATOR ADDRESS" and "THE SERVER ADDRESS" should be replaced by strings that hold the fake client address and the server address that you're spoofing a reply to.
The code=0 isn't actually necessary since 0 is the default, but I figured being explicit is nice.
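To actually put such a reply on the wire, you can hand the packet to Scapy's send() at layer 3. A minimal sketch (the addresses are the same placeholders as above, and a real target may also check the ICMP id/seq fields, as the IPv6 answer below points out):
from scapy.all import IP, ICMP, send

# Build and transmit one spoofed echo reply; run with sufficient privileges.
reply = IP(src="FAKE INITIATOR ADDRESS", dst="THE SERVER ADDRESS") / ICMP(type="echo-reply", code=0)
send(reply)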
I made this program to send spoofed IPv6 ping replies on a given interface. You also need to take care to set the proper sequence number in the packet.
import scapy.all as scapy
from scapy.layers.l2 import Ether
from scapy.layers.inet6 import IPv6, ICMPv6EchoRequest, ICMPv6EchoReply

def sniffer(interface):
    # scapy.sniff(iface=interface, filter="icmp6 && ip6[40] == 128", store=False, prn=process_packet)
    scapy.sniff(iface=interface, filter="icmp6", store=False, prn=process_packet)

def process_packet(packet):
    print("DUMP\n")
    print(packet.show())
    print(packet[Ether].src)
    print(Ether().src)
    if packet[Ether].src == Ether().src:
        print("OUTGOING PACKET")
        print(packet[IPv6].dst)
        if packet.haslayer(ICMPv6EchoRequest):
            print("OUTGOING ECHO REQUEST")
            # Spoofed reply: swap src/dst and copy the request's sequence number.
            reply_packet = Ether(dst=packet[Ether].src) \
                / IPv6(dst=packet[IPv6].src, src=packet[IPv6].dst) \
                / ICMPv6EchoReply(seq=packet[ICMPv6EchoRequest].seq, id=0x1)
            scapy.sendp(reply_packet, iface="Wi-Fi")
    else:
        print("INCOMING PACKET")

interface = "Wi-Fi"
sniffer(interface)

NodeJS - Process out of memory for 100+ concurrent connections

I am working on an IoT application where the clients send bio-potential information to the server every 2 seconds. Each client sends a CSV file containing 400 rows of data every 2 seconds. I have a Socket.IO websocket server running on my server which captures this information from each client. Once this information is captured, the server must push these 400 records into a MySQL database every 2 seconds for each client. While this worked perfectly well as long as the number of clients was small, as the number of clients grew the server started throwing the "Process out of memory" exception.
Following is the exception received :
<--- Last few GCs --->
98522 ms: Mark-sweep 1397.1 (1457.9) -> 1397.1 (1457.9) MB, 1522.7 / 0 ms [allocation failure] [GC in old space requested].
100059 ms: Mark-sweep 1397.1 (1457.9) -> 1397.0 (1457.9) MB, 1536.9 / 0 ms [allocation failure] [GC in old space requested].
101579 ms: Mark-sweep 1397.0 (1457.9) -> 1397.0 (1457.9) MB, 1519.9 / 0 ms [last resort gc].
103097 ms: Mark-sweep 1397.0 (1457.9) -> 1397.0 (1457.9) MB, 1517.9 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x35cc9bbb4629 <JS Object>
2: format [/xxxx/node_modules/mysql/node_modules/sqlstring/lib/SqlString.js:~73] [pc=0x6991adfdf6f] (this=0x349863632099 <an Object with map 0x209c9c99fbd1>,sql=0x2dca2e10a4c9 <String[84]: Insert into rent_66 (sample_id,sample_time, data_1,data_2,data_3) values ? >,values=0x356da3596b9 <JS Array[1]>,stringifyObjects=0x35cc9bb04251 <false>,timeZone=0x303eff...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
Aborted
Following is the code for my server:
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var mysql = require('mysql');

var conn = mysql.createConnection({
    host: '<host>',
    user: '<user>',
    password: '<password>',
    database: '<db>',
    debug: false,
});
conn.connect();

io.on('connection', function (socket) {
    console.log('connection');
    var finalArray = [];
    socket.on('data_to_save', function (from, msg) {
        var str_arr = msg.split("\n");
        var id = str_arr[1];
        var timestamp = str_arr[0];
        var data = str_arr.splice(2);
        finalArray = [];
        var dataPoint = [];
        data.forEach(function (value) {
            dataPoint = value.split(",");
            if (dataPoint[0] != '') {
                finalArray.push([dataPoint[0], 1, dataPoint[1], dataPoint[2], dataPoint[3]]);
                finalArray.push([dataPoint[0], 1, dataPoint[4], dataPoint[5], dataPoint[5]]);
            }
        });
        var sql = "Insert into rent_" + id + " (sample_id,sample_time, channel_1,channel_2,channel_3) values ? ";
        var query = conn.query(sql, [finalArray], function (err, result) {
            if (err)
                console.log(err);
            else
                console.log(result);
        });
        conn.commit();
        console.log('MSG from ' + str_arr[1] + ' ' + str_arr[0]);
    });
});

http.listen(9000, function () {
    console.log('listening on *:9000');
});
I was able to get the server to handle 100 concurrent connections, after which I started receiving process-out-of-memory exceptions. Before the database inserts were introduced, the server would simply store the CSV as a file on disk, and with that setup the server was able to handle 1200+ concurrent connections.
Based on the information available on the internet, it looks like the database insert query (which is asynchronous) holds the 400-row array in memory until the insert goes through. As a result, as the number of clients grows, the memory footprint of the server increases, and it eventually runs out of memory.
I did go through many suggestions made on the internet regarding --max_old_space_size, but I am not sure that this is a long-term solution. Also, I am not sure on what basis I should decide the value to set here.
I have also gone through suggestions which talk about the async utility module. However, inserting data serially may introduce a huge delay between the time the client sends data and the time the server saves it to the database.
I have gone in circles around this problem many times. Is there a way the server can handle information coming from 1000+ concurrent clients and save that data into a MySQL database with minimal latency? I have hit a roadblock here, and any help in this direction is highly appreciated.
I'll summarize my comments since they sent you on the correct path to address your issue.
First, you have to establish whether the issue is caused by your database or not. The simplest way to do that is to comment out the database portion and see how high you can scale. If you get into the thousands without a memory or CPU issue, then your focus can shift to figuring out why adding the database code into the mix causes the problem.
Assuming the issue is caused by your database, then you need to start understanding how it handles things when there are lots of active database requests. Often, the first thing to use with a busy database is connection pooling. This gives you three main things that can help with scale (a sketch of what that looks like with the mysql module follows below):
1. It gives you fast reuse of previously opened connections, so you don't have every single operation creating its own connection and then closing it.
2. It lets you specify the maximum number of simultaneous database connections you want in the pool (controlling the max load you throw at the database and probably also limiting the max amount of memory it will use). Connections beyond that limit will be queued, which is usually what you want in high-load situations so you don't overwhelm the resources you have.
3. It makes it easier to see if you have a connection leak problem: rather than just leaking connections until you run out of some resource, the pool will quickly be empty in testing and your server will not be able to process any more transactions (so you are much more likely to see the problem in testing).
Then, you probably also want to look at the transaction times for your database connections to see how fast they can handle any given transaction. You know how many transactions/sec you are trying to process so you need to see if your database and the way it's configured and resourced (memory, CPU, speed of disk, etc...) is capable of keeping up with the load you want to throw at it.
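As a minimal sketch of the pooling idea with the mysql module used in the question (the limit of 10 and the placeholder credentials are only illustrative), you create one pool at startup and let each insert borrow a connection from it:
var mysql = require('mysql');

// One pool for the whole process, instead of a single shared connection.
var pool = mysql.createPool({
    connectionLimit: 10,   // max simultaneous connections; tune to what your DB can handle
    host: '<host>',
    user: '<user>',
    password: '<password>',
    database: '<db>'
});

// pool.query() checks out a connection, runs the query, and releases it automatically.
function saveRows(id, finalArray, callback) {
    // Interpolating the id into the table name mirrors the original code; validate it in real use.
    var sql = "Insert into rent_" + id + " (sample_id,sample_time, channel_1,channel_2,channel_3) values ? ";
    pool.query(sql, [finalArray], callback);
}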
You can increase the default memory limit (512 MB) by using the command below:
node --max-old-space-size=1024 index.js
This increases the limit to 1 GB. You can use the same flag to increase it further.

MariaDB non-blocking with EPOLL

I have a single-threaded server written in C that accepts TCP/UDP connections using EPOLL and supports plugins for the multitude of protocol layers we need to support. That bit is fine.
Due to the single-threaded nature, I wanted to implement a database layer that could utilize the same EPOLL architecture rather than separately iterating over all of the open connections.
We use MariaDB and the MariaDB connector, which supports non-blocking functions in its API.
https://mariadb.com/kb/en/mariadb/using-the-non-blocking-library/
But what I'm finding is not what I expected, and what I was expecting is described below.
First I call mysql_real_connect_start(), and if it returns zero we dispatch the query immediately, as this indicates no blocking was required (although this never happens).
Otherwise, I fetch the file descriptor (which seems to be available immediately), register it with EPOLL, and bail back to the main EPOLL loop to wait for events.
s = mysql_get_socket(mysql);
if (s > 0)
{
    brt_socket_set_fds(endpoint, s);

    struct epoll_event event;
    event.data.fd = s;
    event.events = EPOLLRDHUP | EPOLLIN | EPOLLET | EPOLLOUT;

    s = epoll_ctl(efd, EPOLL_CTL_ADD, s, &event);
    if (s == -1) {
        syslog(LOG_ERR, "brd_db : epoll error.");
        // handle error.
    }
...
So, then some time later I do get the EPOLLOUT indicating the socket has been opened.
And I dutifully call mysql_real_connect_cont() but at this stage it is still returning a non-zero value, indicating I must wait longer?
But that is the last EPOLL event I get, except for the EPOLLRDHUP when, I guess, MariaDB hangs up after 10 seconds.
Can anyone help me understand if this idea is even workable?
Thanks so much.
OK, for anyone else who lands here: I fixed it, or rather un-broke it.
Notice that, in the examples, the returned status from the _start / _cont calls is passed in as a parameter to the next _cont. It turns out this is critical.
The status contains the flags MYSQL_WAIT_READ, MYSQL_WAIT_WRITE, MYSQL_WAIT_EXCEPT, and MYSQL_WAIT_TIMEOUT, and if it is not passed to the next _cont, my guess is you are messing up the _cont state machine.
I was not saving the state of status between different places where _start and _cont were being called.
struct MC
{
    MYSQL *mysql;
    int status;   // wait flags returned by the last _start / _cont call
} MC;
...
// Initial call.
mc->status = mysql_real_connect_start(&ret, mc->mysql, host, user, password, NULL, 0, NULL, 0);
// EPOLL raised calls: pass the saved status back in.
mc->status = mysql_real_connect_cont(&ret, mc->mysql, mc->status);
if (mc->status) return... // keep waiting; check for errors.

Scala / Slick, "Timeout after 20000ms of waiting for a connection" error

The block of code below has been throwing an error.
Timeout after 20000ms of waiting for a connection.","stackTrace":[{"file":"BaseHikariPool.java","line":228,"className":"com.zaxxer.hikari.pool.BaseHikariPool","method":"getConnection"
Also, my database accesses seem too slow, with each element of xs.map() taking about 1 second. Below, getFutureItem() calls db.run().
xs.map { x =>
  val item: Future[List[Sometype], List(Tables.myRow)] = getFutureItem(x)
  Await.valueAfter(item, 100.seconds) match {
    case Some(i) => i
    case None => println("Timeout getting items after 100 seconds")
  }
}
Slick logs this with each iteration of an "x" value:
[akka.actor.default-dispatcher-3] [akka://user/IO-HTTP/listener-0/24] Connection was PeerClosed, awaiting TcpConnection termination...
[akka.actor.default-dispatcher-3] [akka://user/IO-HTTP/listener-0/24] TcpConnection terminated, stopping
[akka.actor.default-dispatcher-3] [akka://system/IO-TCP/selectors/$a/0] New connection accepted
[akka.actor.default-dispatcher-7] [akka://user/IO-HTTP/listener-0/25] Dispatching POST request to http://localhost:8080/progress to handler Actor[akka://system/IO-TCP/selectors/$a/26#-934408297]
My configuration:
"com.zaxxer" % "HikariCP" % "2.3.2"
default_db {
  url = ...
  user = ...
  password = ...
  queueSize = -1
  numThreads = 16
  connectionPool = HikariCP
  connectionTimeout = 20000
  maxConnections = 40
}
Is there anything obvious that I'm doing wrong that is causing these database accesses to be so slow and throw this error? I can provide more information if needed.
EDIT: I have received one recommendation that the issue could be a classloader error, and that I could resolve it by deploying the project as a single .jar, rather than running it with sbt.
EDIT2: After further inspection, it appears that many connections were being left open, which eventually led to no connections being available. This can likely be resolved by calling db.close() to close the connection at the appropriate time.
EDIT3: Solved. The connections made by Slick exceeded the max connections allowed by my MySQL config.
OP wrote:
EDIT2: After further inspection, it appears that many connections were being left open, which eventually led to no connections being available. This can likely be resolved by calling db.close() to close the connection at the appropriate time.
EDIT3: Solved. The connections made by Slick exceeded the max connections allowed by my MySQL config.
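As a sketch of how one might check for that mismatch (this is not from the original post, and the value 200 is only an example), compare the pool settings above against the server-side limit and adjust one of the two:
-- Show how many simultaneous connections the MySQL server allows in total.
SHOW VARIABLES LIKE 'max_connections';
-- Either raise the server limit (requires the right privileges)...
SET GLOBAL max_connections = 200;
...or lower maxConnections in the Slick/HikariCP config so that all application instances together stay below the server's limit.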

TCP Socket frames 'queuing' up on Windows. How can I force each message into its own frame?

I use ActionScript 3 TCP sockets to connect with JavaScript WebSockets. Data is sent primarily from the WebSocket to the AS socket.
On Mac OS X there is no problem. On Windows, however, successive TCP messages seem to queue up somewhere. This causes the ProgressEvent.SOCKET_DATA event to fire at quite a large time interval, which creates noticeable lag.
I used Wireshark to monitor the TCP packets on both OS X and Windows. The difference I see is that on OS X each message arrives in its own packet, while on Windows successive messages are 'concatenated' into one packet.
Is this just the way the socket is implemented, or is there any way I can improve on this?
EDIT 1: I found this post on actionscript.org which outlines the same problem
EDIT 2: I found a way to work around the problem. I pad every message with dummy text to increase the frame size. This causes the TCP stack to send every message in its own frame instead of queuing them. This works, even though it's really, really ugly...
This is the code in the SOCKET_DATA event.
while (this.socket.bytesAvailable) {
    var byte:uint = this.socket.readUnsignedByte();
    if (byte == 0x00) {
        trace("Start byte found. - " + new Date().time);
        this.incomingMessageBytes = new ByteArray();
    } else if (byte == 0xFF) {
        trace("End byte found. Dispatching. - " + new Date().time);
        this.incomingMessageBytes.position = 0;
        var msg:String = incomingMessageBytes.readUTFBytes(incomingMessageBytes.bytesAvailable);
        var decodedMessage:Object = JSON.decode(msg, false);
        var message = new Message(decodedMessage.clientId, decodedMessage.command, decodedMessage.data);
        this.dispatchEvent(new MessageReceivedEvent(MessageReceivedEvent.RECEIVED_MESSAGE, message));
    } else {
        //trace("Appending.");
        this.incomingMessageBytes.writeByte(byte);
    }
}
It sounds like you might be seeing the effects of Nagle's algorithm. I don't know if there is a way to disable Nagle's algorithm (aka setting the TCP_NODELAY flag) under ActionScript, but if there is, you might try doing that.
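For reference only (this is not ActionScript and is not claiming an ActionScript API exists): at the native-socket level, disabling Nagle's algorithm looks roughly like the following C sketch, assuming you control a connected TCP socket descriptor fd:
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <stdio.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small writes are sent immediately
   instead of being coalesced into larger segments. */
static int disable_nagle(int fd)
{
    int flag = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) != 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}
Whether anything equivalent is exposed in ActionScript or in the browser's WebSocket stack is exactly the open question in the answer above.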