I have Tcl code that I want to loop in case I receive "Connection closed by foreign host" while connecting to the router (an IOL router).
Telnet to the routem PC:
connect sys -match_max 500000 "telnet $linux_machine $port"
# using the system passed login credentials
receive sys {login:} 120
sleep 5
transmit sys "$ta_user_id\r"
receive sys {Password: $}
transmit sys "$ta_user_passwd\r"
receive sys {%$}
transmit sys "su\r"
receive sys {Password: $} 20
transmit sys "$root_passwd\r"
Log of the failure scenario:
++++ 05:13:50 sys Control::connect +++
Connect: sys -match_max 500000 {telnet ssr-lnx-iol 5012}
+--- 05:13:50 ---
++++ 05:13:50 sys Control::receive +++
Trying 172.25.195.183...
Connected to ssr-lnx-iol.
Escape character is '^]'.
Fedora release 9 (Sulphur)
Kernel 2.6.25-14.fc9.i686 on an i686 (0)
login:
--- 05:14:01 ---
++++ 05:14:06 sys Control::transmit +++
Transmit: root
+--- 05:14:06 ---
++++ 05:14:06 sys Control::receive +++
root
Password:
devtest-
Connection closed by foreign host.
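The looping being asked about is a generic retry pattern; here is a minimal sketch in Python, purely for illustration (not Tcl): `attempt_fn` is a hypothetical stand-in for the connect/receive/transmit sequence above, assumed to raise `ConnectionError` when the foreign host closes the connection.

```python
import time

def login_with_retries(attempt_fn, retries=3, delay=5):
    """Retry attempt_fn until it succeeds or retries are exhausted."""
    for attempt in range(1, retries + 1):
        try:
            return attempt_fn()
        except ConnectionError as exc:
            print("attempt %d failed: %s" % (attempt, exc))
            if attempt < retries:
                time.sleep(delay)
    raise RuntimeError("all login attempts failed")
```

The same structure carries over to Expect/Tcl: wrap the connect/login block in a counted loop and re-run it whenever the "Connection closed" pattern is matched.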
I have an architecture where a Lambda runs when an IRS data file is put on the S3 bucket. I can easily connect to my RDS from my local machine, but for some very weird reason the Lambda is not able to access it, giving this error:
"errorMessage": "2022-11-15T22:22:51.919Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Task timed out after 60.06 seconds"
2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 URI updated to: https://irs-data.s3.amazonaws.com/?prefix=index&encoding-type=url
[DEBUG] 2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Calculating signature using v4 auth.
[DEBUG] 2022-11-15T22:21:53.402Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 CanonicalRequest:
GET
/
encoding-type=url&prefix=index
host:irs-data.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20221115T222153Z
x-amz-security-token:IQoJb3JpZ2luX2VjEP///////////wEaCXVzLWVhc3QtMSJHMEUCIQC9x2awzo/kQIantnRem2kmylKVHw5fBV+ylz/PQeP0DwIgQHovdX5Jv9/cpe/PAaWDTBZGcc3TxXGUALQRJCh1XMsq7QII9///////////ARAAGgw5NzU4MjIzMjkxNDIiDDegGWv5Wxk3ihIEdCrBAlqSbCaW/e4tIn2SK5gAcePArZf5Ij7o1qhoqEyG2boXivxftDkd7vM3RGg9lK2YaMEx9ku3mCBFpS03T5zlbr2EnaQjRuvEZzdHBKY79qqbUOCqcITmYkQQK+GSCoAyfnckjbjY0yORD41/7OS6wRa9pRKzu0ib8V/aE8Uln5Eem9ylYSn7LdyNWanD2I0CNfYNMV+Xx0bduAhVyXP6HjXikjTG5e2gqlA61xQmq4NMXyRixxINUk47R1FWBqPnYVqQWOIPW1HKcbj26qlW+JJyh530ML1RK3qqkssnH7c0LGu8rJz9Ag9wldHcRlODljZcaOmX7OlErdwIImGoeb99ngcVKVrCc+QnegTQolsoAhU3AG68LrZrmY/zRborttAslMzeUpiZ4fkA86QKJJDdpJEL/sZc/ZXzBMCj2x/ZozD+odCbBjqeAVPiKRQMCuBUqK8LlnALW2ki6RwMyS8WmGFpSoDjUYcyFDhMkHSa8TnTa+0gdertafyc4c4NPfsWFBYTLavdkgmACCkug75ENt3LWAgpGvBMxp6f2hiZKjJzqQnOE6VofIUXU8PLycB+L9uaJuYplLuMoRmjURtHFj5whMZrGclS0+V9/eH2ep8x9SAiFIJ1yOimmox6FTw2DhvpuE8U
host;x-amz-content-sha256;x-amz-date;x-amz-security-token
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 StringToSign:
AWS4-HMAC-SHA256
20221115T222153Z
20221115/us-east-1/s3/aws4_request
5d61ffe01d9d6dd6aee4b1faeecbf21721efb8696f94f969389c93b05579847c
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Signature:
d6e70d2c6350adfa7231bd7b2a63e5ac2fd83583f5dde1dfada2b08854d493d2
[DEBUG] 2022-11-15T22:21:53.439Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Sending http request: <AWSPreparedRequest stream_output=False, method=GET, url=https://irs-data.s3.amazonaws.com/?prefix=index&encoding-type=url, headers={'User-Agent': b'Boto3/1.19.10 Python/3.9.13 Linux/4.14.255-285-225.501.amzn2.x86_64 exec-env/AWS_Lambda_python3.9 Botocore/1.22.12 Resource', 'X-Amz-Date': b'20221115T222153Z', 'X-Amz-Security-Token': b'IQoJb3JpZ2luX2VjEP///////////wEaCXVzLWVhc3QtMSJHMEUCIQC9x2awzo/kQIantnRem2kmylKVHw5fBV+ylz/PQeP0DwIgQHovdX5Jv9/cpe/PAaWDTBZGcc3TxXGUALQRJCh1XMsq7QII9///////////ARAAGgw5NzU4MjIzMjkxNDIiDDegGWv5Wxk3ihIEdCrBAlqSbCaW/e4tIn2SK5gAcePArZf5Ij7o1qhoqEyG2boXivxftDkd7vM3RGg9lK2YaMEx9ku3mCBFpS03T5zlbr2EnaQjRuvEZzdHBKY79qqbUOCqcITmYkQQK+GSCoAyfnckjbjY0yORD41/7OS6wRa9pRKzu0ib8V/aE8Uln5Eem9ylYSn7LdyNWanD2I0CNfYNMV+Xx0bduAhVyXP6HjXikjTG5e2gqlA61xQmq4NMXyRixxINUk47R1FWBqPnYVqQWOIPW1HKcbj26qlW+JJyh530ML1RK3qqkssnH7c0LGu8rJz9Ag9wldHcRlODljZcaOmX7OlErdwIImGoeb99ngcVKVrCc+QnegTQolsoAhU3AG68LrZrmY/zRborttAslMzeUpiZ4fkA86QKJJDdpJEL/sZc/ZXzBMCj2x/ZozD+odCbBjqeAVPiKRQMCuBUqK8LlnALW2ki6RwMyS8WmGFpSoDjUYcyFDhMkHSa8TnTa+0gdertafyc4c4NPfsWFBYTLavdkgmACCkug75ENt3LWAgpGvBMxp6f2hiZKjJzqQnOE6VofIUXU8PLycB+L9uaJuYplLuMoRmjURtHFj5whMZrGclS0+V9/eH2ep8x9SAiFIJ1yOimmox6FTw2DhvpuE8U', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=ASIA6GM4LCU3GISECH6S/20221115/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=d6e70d2c6350adfa7231bd7b2a63e5ac2fd83583f5dde1dfada2b08854d493d2'}>
[DEBUG] 2022-11-15T22:21:53.459Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Certificate path: /var/task/botocore/cacert.pem
[DEBUG] 2022-11-15T22:21:53.459Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Starting new HTTPS connection (1): irs-data.s3.amazonaws.com:443
2022-11-15T22:22:51.919Z 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Task timed out after 60.06 seconds
END RequestId: 9f20c035-5a47-4c6f-be9f-407b4a43aee6
REPORT RequestId: 9f20c035-5a47-4c6f-be9f-407b4a43aee6 Duration: 60061.89 ms Billed Duration: 60000 ms Memory Size: 128 MB Max Memory Used: 116 MB Init Duration: 1040.64 ms
Lambda Code with S3 data:
import pandas as pd
import boto3
import os
from dotenv import load_dotenv
import logging
import sys
import time
import datetime as dt
import io
import pymysql
####### LOADING ENVIRONMENT VARIABLES #######
load_dotenv()
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
BUCKET = os.getenv('BUCKET')
BUCKET_PREFIX = os.getenv('BUCKET_PREFIX')
# Credentials to database connection
hostname= os.getenv('HOSTNAME')
dbname= os.getenv('DATABASE')
uname= os.getenv('USERNAME')
pwd= os.getenv('PASSWORD')
def lambda_handler(event, context):
    try:
        logger.info("TEST")
        logger.info(BUCKET)
        s3 = boto3.resource('s3')
        # assigning the bucket:
        my_bucket = s3.Bucket(BUCKET)
        data_list = []
        for my_bucket_object in my_bucket.objects.filter(Prefix=BUCKET_PREFIX):
            if my_bucket_object.key.endswith(".csv"):
                key = my_bucket_object.key
                body = my_bucket_object.get()['Body'].read()
                temp_data = pd.read_csv(io.BytesIO(body))
                data_list.append(temp_data)
        # concatenating all the files together:
        df = pd.concat(data_list)
        # Connect to MySQL Database
        connection = pymysql.connect(host=hostname, user=uname, password=pwd, database=dbname)
        cursor = connection.cursor()
        # Truncate the table every time before an ETL:
        sql_trunc = "TRUNCATE TABLE `irs990`"
        cursor.execute(sql_trunc)
        # commit the results
        connection.commit()
        # creating columns from the dataframe:
        cols = "`,`".join([str(i) for i in df.columns.tolist()])
        # adding dataframe to mysql RDS
        for i, row in df.iterrows():
            sql = "INSERT INTO `irs990` (`" + cols + "`) VALUES (" + "%s," * (len(row) - 1) + "%s)"
            cursor.execute(sql, tuple(row))
        connection.commit()
        # checking if data was successfully written:
        sql = "SELECT * FROM `irs990`"
        cursor.execute(sql)
        result = cursor.fetchall()
        for i in result:
            print(i)
        # closing MySQL connection:
        connection.close()
    except Exception as e:
        logging.error(e)
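As a side note, the row-by-row INSERT loop above can be batched. A sketch of the same statement built once and passed to `cursor.executemany()` (table and column names taken from the code above; the call itself is commented out since it needs a live connection):

```python
def build_insert_sql(table, columns):
    """Build the same parameterized INSERT the loop above constructs."""
    cols = "`,`".join(str(c) for c in columns)
    placeholders = ",".join(["%s"] * len(columns))
    return "INSERT INTO `%s` (`%s`) VALUES (%s)" % (table, cols, placeholders)

# cursor.executemany(build_insert_sql("irs990", df.columns.tolist()),
#                    [tuple(row) for _, row in df.iterrows()])
```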
My Lambda VPC details: (screenshot)
My RDS details: (screenshots)
Can somebody please help me figure out what to do? I am assigning the Lambda the same VPC as the RDS; I tried using the same security group as well, and made sure the outbound IP address of the Lambda is in the inbound rules for RDS. But nothing :(
The proper security configuration should be:
A Security Group on the AWS Lambda function (Lambda-SG) that permits All outbound access (which is the default configuration)
A Security Group on the Amazon RDS database (DB-SG) that permits inbound connections on port 3306 from Lambda-SG
That is, DB-SG should specifically reference Lambda-SG. This will then permit the incoming connection from the Lambda function.
Merely putting the Lambda function and the RDS database "in the same Security Group" is insufficient because security groups apply to each resource individually. Unless the security group allows a connection from 'itself', this will not permit the desired access. Much better to use two security groups as described above.
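For illustration, the DB-SG ingress rule described above has this shape in boto3 terms. The group IDs below are placeholders; the actual change would be applied with `ec2.authorize_security_group_ingress` (or, equivalently, in the console by selecting Lambda-SG as the rule's source).

```python
DB_SG = "sg-0123456789abcdef0"      # placeholder: DB-SG attached to the RDS instance
LAMBDA_SG = "sg-0fedcba9876543210"  # placeholder: Lambda-SG on the function

# Inbound rule for DB-SG: allow MySQL (3306) from Lambda-SG, referencing
# the security group itself rather than a CIDR range.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,
    "ToPort": 3306,
    "UserIdGroupPairs": [{"GroupId": LAMBDA_SG}],
}
# ec2.authorize_security_group_ingress(GroupId=DB_SG,
#                                      IpPermissions=[ingress_rule])
```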
I am trying to encrypt with gpg2 the mails sent by Nagios3. For that, I have created this custom command in /etc/nagios3/commands.cfg:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto#titi.com | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
}
Some points:
The e-mail is sent but it is "empty":
Sep 19 14:35:25 tutu nagios3: Finished daemonizing... (New PID=4313)
Sep 19 14:36:15 tutu nagios3: SERVICE ALERT:
tete_vm;HTTP;OK;HARD;4;HTTP OK: HTTP/1.1 200 OK - 347 bytes in 0.441
second response time Sep 19 14:36:15 tutu nagios3: SERVICE
NOTIFICATION: tata;tete_vm;HTTP;OK;notify-service-by-email;HTTP OK:
HTTP/1.1 200 OK - 347 bytes in 0.441 second response time
The command:
/usr/bin/gpg2 --armor --encrypt --recipient toto#titi.com | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
works very well on the command line.
I have tested this command:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto#titi.com >> /tmp/toto.txt
The file /tmp/toto.txt is created but "empty".
So, it seems to be a problem using /usr/bin/gpg2 on this file, but I cannot find why!
The most common mistake when encrypting from within services using GnuPG is that the recipient's key was imported by another (system) user than the one the service is running under, for example imported by root, but the service runs as nagios.
GnuPG maintains per-user "GnuPG home directories" (usually ~/.gnupg) with per-user keyrings in them. If you imported as root, other service accounts don't know anything about the keys in there.
The first step for debugging the issue would be to redirect gpg's stderr to a file, so you can read the error message by adding 2>>/tmp/gpg-error.log to the GnuPG call:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto#titi.com 2>>/tmp/gpg-error.log | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
If the issue is something like "key not found" or similar, you've got two possibilities to resolve the issue:
Import to the service's user account. Switch to the service's user, and import the key again.
Hard-code the GnuPG home directory to somewhere else using the --homedir [directory] option, for example in a place you also store your Nagios plugins.
Be aware of using appropriate, restrictive permissions. GnuPG is very picky if users other than the owner are allowed to read the files!
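That permission check can be scripted; a small sketch in Python (illustrative only, applied to whatever home directory you end up using):

```python
import os
import stat

def homedir_permissions_ok(path):
    """Return True if only the owner can access the directory (mode 700),
    which is what GnuPG expects for its home directory."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```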
I use tcpdump on OpenWrt to capture packets and send them to a Raspberry Pi with netcat.
The problem is that I want to use multiple routers to capture the requests and forward them to the Raspberry Pi.
tcpdump -i wlan0 -e -s 256 -l type mgt subtype probe-req | nc 192.168.0.230 22222
And I receive the packet info with a Python script:
import socket

HOST = 'localhost'  # use '' to expose to all networks
PORT = 12345

def incoming(host, port):
    """Open specified port and return file-like object"""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # set SOL_SOCKET.SO_REUSEADDR=1 to reuse the socket if
    # needed later without waiting for timeout (after it is
    # closed, for example)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(0)  # do not queue connections
    request, addr = sock.accept()
    return request.makefile('r')

# /-- network ---
for line in incoming(HOST, PORT):
    print(line, end='')
output:
15:17:57 801928 3933710786us tsft 1.0 Mb/s 2412 Mhz 11b -38dB signal antanna 1 BSSID: broadcast SA:xxxx ....
desired output:
192.168.0.130 15:17:57 801928 3933710786us tsft 1.0 Mb/s 2412 Mhz 11b -38dB signal antanna 1 BSSID: broadcast SA:xxxx ....
But how can I add the IP address of the router to the command, so I can see which router received the packet?
Or how can I just send an extra string like "router1" to identify the router?
You can prepend an extra identifying string with the script below:
#! /bin/bash
ip=$(ifconfig wlan0 | grep cast | awk -F: '{print $2}' | awk '{print $1}' )
tcpdump -i wlan0 -e -s 256 -l type mgt subtype probe-req |\
while read line; do
echo "$ip" "$(date +%T)" "$line"
done | nc 192.168.0.230 22222
It will insert the IP address and a timestamp at the beginning of each line of tcpdump's output and pipe it to netcat.
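On the receiving side, the Python script can then split the router's address back off each line; a minimal sketch (field layout assumed from the echo line above: IP, timestamp, then the original tcpdump line):

```python
def parse_prefixed_line(line):
    """Split '<ip> <HH:MM:SS> <tcpdump output>' into its three parts."""
    ip, timestamp, rest = line.rstrip("\n").split(" ", 2)
    return ip, timestamp, rest
```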
I'm trying to connect our ejabberd server to MySQL to add the mod_archive_odbc module. We're running ejabberd 2.1.13. The rest of the server uses mnesia for storage. I tried the DSN approach first, but that failed. I'm currently getting this error in erlang.log:
=PROGRESS REPORT==== 24-Sep-2013::13:50:27 ===
supervisor: {local,ejabberd_sup}
started: [{pid,<0.777.0>},
{name,'ejabberd_mod_archive_odbc_chat.hostname.com'},
{mfargs,
{mod_archive_odbc,start_link,
["chat.hostname.com",
[{database_type,"mysql"},
{default_auto_save,true},
{enforce_default_auto_save,false},
{default_expire,infinity},
{enforce_min_expire,0},
{enforce_max_expire,infinity},
{replication_expire,31536000},
{session_duration,1800},
{wipeout_interval,86400}]]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
=CRASH REPORT==== 24-Sep-2013::13:50:36 === crasher:
initial call: mod_archive_odbc:init/1
pid: <0.777.0>
registered_name: 'ejabberd_mod_archive_odbc_chat.hostname.com'
exception exit: {aborted,{no_exists,[sql_pool,"chat.hostname.com"]}}
in function gen_server:terminate/6
ancestors: [ejabberd_sup,<0.37.0>]
This is what the modules section looks like:
{mod_archive_odbc, [{database_type, "mysql"},
{default_auto_save, true},
{enforce_default_auto_save, false},
{default_expire, infinity},
{enforce_min_expire, 0},
{enforce_max_expire, infinity},
{replication_expire, 31536000},
{session_duration, 1800},
{wipeout_interval, 86400}]}
This is what the database section looks like:
{odbc_server, {mysql, "localhost", "ejabberd", "ejabberd", "password"}}.
I can connect to the mysql server locally and remotely using the ejabberd user as well.
Here is the ngrep output while the errors occur:
# ngrep port 3306
interface: eth0 (10.179.7.192/255.255.255.192)
filter: (ip or ip6) and ( port 3306 )
^Cexit
0 received, 0 dropped
# ngrep -d lo port 3306
interface: lo (127.0.0.0/255.0.0.0)
filter: (ip or ip6) and ( port 3306 )
^Cexit
0 received, 0 dropped
Here is ngrep output if I connect to MySQL with the ejabberd user via another computer on the network
# ngrep port 3306
interface: eth0 (10.179.7.192/255.255.255.192)
filter: (ip or ip6) and ( port 3306 )
####
T 10.179.7.235:3306 -> XX.XXX.XXX.XXX:55909 [AP]
J....5.5.32.....xxpKb-VK...................UKXV(a2rh6r].mysql_native_password.
##
T XX.XXX.XXX.XXX:55909 -> 10.179.7.235:3306 [AP]
>...................................ejabberd....).p.P..lt=BTK..w..
##
T 10.179.7.235:3306 -> XX.XXX.XXX.XXX:55909 [AP]
...........
#
T XX.XXX.XXX.XXX:55909 -> 10.179.7.235:3306 [AP]
!....select ##version_comment limit 1
#
T 10.179.7.235:3306 -> XX.XXX.XXX.XXX:55909 [AP]
.....'....def....##version_comment............................MySQL Community Server (GPL).........
##
T XX.XXX.XXX.XXX:55909 -> 10.179.7.235:3306 [AP]
.....
###
The MySQL module appears to be installed:
(ejabberd#ip-10-179-7-235)1> ejabberd_check:check_database_module(mysql).
ok
The problem was that the changes I was making were not being picked up by the running service. Another question, "How to uninstall odbc for Ejabberd?", and the comments above by @giavac made me realize that apparently the configuration file is not always authoritative for the configuration.
Specifically, I fixed my problem by adding this line:
override_local.
This is my first post and I'm hoping I can reach out to the group.
I'm pulling my hair out with this problem.
I'm running an EC2 Ubuntu micro instance with LAMP.
I'm using Java with JDBC to access the MySQL database.
The issue is that the Java code keeps throwing a ClassNotFoundException when I execute:
Class.forName("com.mysql.jdbc.Driver");
I have installed the following:
sudo apt-get install mysql-server
sudo apt-get install mysql-client
sudo apt-get install libmysql-java
My imports in the Java file are:
import java.text.CharacterIterator;
import java.text.StringCharacterIterator;
import java.util.regex.*;
import java.sql.*;
import java.util.Properties;
import java.net.*;
import java.sql.Connection;
import java.sql.DriverManager;
My $CLASSPATH shows:
.:/usr/share/java/mysql.jar:/usr/share/java/mysql-connector-java.jar:/usr/share/java/mysql.jar:/usr/share/java/mysql-connector-java.jar:/usr/share/java/mysql-5.1.10.jar:/usr/share/java/mysql-connector-java-5.1.10.jar
In /usr/share/java I have:
drwxr-xr-x 3 root root 4096 2012-05-25 02:01 .
drwxr-xr-x 316 root root 12288 2012-05-24 21:21 ..
-rwxrwxrwx 1 root root 448964 2009-11-23 22:38 gnome-java-bridge.jar
-rwxrwxrwx 1 root root 2621 2010-03-05 04:16 libintl.jar
lrwxrwxrwx 1 root root 31 2012-05-25 02:01 mysql-5.1.10.jar -> mysql-connector-java-5.1.10.jar
-rwxrwxrwx 1 root root 754057 2010-01-26 08:02 mysql-connector-java-5.1.10.jar
lrwxrwxrwx 1 root root 31 2012-05-25 02:01 mysql-connector-java.jar -> mysql-connector-java-5.1.10.jar
lrwxrwxrwx 1 root root 16 2012-05-25 02:01 mysql.jar -> mysql-5.1.10.jar
This is the code that always throws up the exception message to an output file:
try {
    try {
        Class.forName("com.mysql.jdbc.Driver");
        outyyy.write("Class loaded \n");
    }
    catch (ClassNotFoundException e) {
        outyyy.write("Class not found! \n");
        outyyy.write(e.getMessage() + " \n");
    }
    this._connection = DriverManager.getConnection(url, this._user, this._pass);
    this._isConnected = true;
}
catch (Exception e) {
    this._isConnected = false;
}
I'm not sure if it's relevant but I can access and query the database just fine using PHP.
Any assistance is much appreciated.
Thanks, Andy
Set the classpath:
export CLASSPATH=$CLASSPATH:/usr/share/java/mysql-connector-java.jar
Source: http://marksman.wordpress.com/2009/03/01/setting-up-mysqljdbc-driver-on-ubuntu/
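It can also be worth confirming that the connector jar really contains `com.mysql.jdbc.Driver`. A quick sketch in Python (a jar is just a zip archive; the path is the one from the listing above):

```python
import zipfile

def jar_contains_class(jar_path, class_name):
    """Check whether a jar (zip) contains the given fully-qualified class."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Example (path from the listing above):
# jar_contains_class("/usr/share/java/mysql-connector-java.jar",
#                    "com.mysql.jdbc.Driver")
```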