How can Exim show passwords in its log files? (SMTP)

I want to debug all auth sessions.
For example, /var/log/exim/mainlog displays auth errors like this:
login authenticator failed for (xx) [x.x.x.x]: 535 Incorrect
authentication data (set_id=xxx)
login authenticator failed for (xx) [x.x.x.x]: 535 Incorrect
authentication data (set_id=xxx)
login authenticator failed for (xx) [x.x.x.x]: 535 Incorrect
authentication data (set_id=xxx)
but I want it to display the password too, like this:
login authenticator failed for (xx) [x.x.x.x]: 535 Incorrect authentication data (set_id=xxx,set_pwd=yyy)
login authenticator failed for (xx) [x.x.x.x]: 535 Incorrect authentication data (set_id=xxx,set_pwd=yyy)
login authenticator failed for (xx) [x.x.x.x]: 535 Incorrect authentication data (set_id=xxx,set_pwd=yyy)
I changed the Dovecot conf and added:
auth_verbose = yes
auth_debug = yes
auth_debug_passwords = yes
but /var/log/exim/mainlog still doesn't display the password, and /var/log/maillog doesn't give any information about SMTP.
So, how can I catch auth errors together with the cleartext password?

Configuration options for Exim should be edited in exim.conf; dovecot.conf only affects how Dovecot works. They are two separate programs.
As far as I know, there is no way to directly configure Exim to log the password in cleartext in the logfile. What you can do is add lines like the following
server_debug_print = "running smtp auth $1 $2"
under the appropriate authenticator in your exim.conf (or under all of them), and then run exim -d, which enables debugging mode (but also makes Exim run in the foreground, with all debug output going to stdout).
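For example, a LOGIN authenticator with the debug line added might look like the following sketch (the authenticator layout and the server_condition check are illustrative assumptions, not your actual config):

```
begin authenticators

login:
  driver = plaintext
  public_name = LOGIN
  server_prompts = <| Username: | Password:
  # $1 is the username, $2 the password; only printed when exim runs with -d
  server_debug_print = "running smtp auth $1 $2"
  server_condition = ${if eq{$auth2}{secret}{yes}{no}}
  server_set_id = $auth1
```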

I just found a solution.
I changed the dovecot.conf passdb options like this:
passdb {
  driver = checkpassword
  args = /etc/dovecot/chk.sh
}
and wrote a bash script that writes the arguments to a log file, like:
#!/bin/bash
# Append each attempt so earlier entries are not overwritten
echo "$1 username and $2 password" >> /etc/dovecot/log.txt

Related

SSL integration with a Cloud SQL MySQL instance

I enabled SSL in a MySQL Cloud SQL instance. In order to connect to the instance, I downloaded the necessary certificates and can connect fine using the mysql command. The Cloud SQL instance is running with a private IP on a shared VPC network.
$ mysql -h 192.168.0.3 --ssl-ca=server-ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem -u testuser -p
Enter password:
Now, to test connectivity from code, I deployed the following in Cloud Functions:
import pymysql
from sqlalchemy import create_engine

def sql_connect(request):
    engine = create_engine('mysql+pymysql://testuser:<password>@192.168.0.3/mysql', echo=True)
    tab = engine.execute('show databases;')
    return str([t[0] for t in tab])
It fails with the "Access Denied" error shown below:
Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging Details:
(pymysql.err.OperationalError) (1045, "Access denied for user 'testuser'@'192.168.60.4' (using password: YES)")
When I disable SSL, it works fine, as shown below:
['information_schema', 'mysql', 'performance_schema', 'sys', 'testdb']
A) To enable SSL in the code, I did the following:
ssl_args = {'sslrootcert': 'server-ca.pem', 'sslcert': 'client-cert.pem', 'sslkey': 'client-key.pem'}
engine = create_engine('mysql+pymysql://testuser:<password>@192.168.0.3/mysql', echo=True, connect_args=ssl_args)
but it fails with the error below:
__init__() got an unexpected keyword argument 'sslrootcert'
B) I also tried disabling SSL with ssl=False in the code, but it fails with the error below:
Invalid argument(s) 'ssl' sent to create_engine(), using configuration MySQLDialect_pymysql/QueuePool/Engine
UPDATE:
Changed the code for SSL as follows:
ssl_args = {'ssl': {'ca': './server-ca.pem', 'cert': './client-cert.pem', 'key': './client-key.pem'}}
Uploaded the certs to the Cloud Function source.
Added 0.0.0.0/0 as an authorized network in Cloud SQL to allow connecting from Cloud Functions.
Now I see the following error:
"Can't connect to MySQL server on 'X.X.X.181' ([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for 'X.X.X.181'. (_ssl.c:1091))"). However, I can connect using the same certificates with the `mysql` command.
I need help with both: A) fixing the error above so that the code works with SSL, and B) modifying the code so that it does not use SSL.
Use ssl_ca for the root certificate, ssl_cert for the client certificate and ssl_key for the client key:
ssl_args = {'ssl_ca': 'server-ca.pem', 'ssl_cert': 'client-cert.pem', 'ssl_key': 'client-key.pem'}
engine = create_engine('mysql+pymysql://testuser:<password>@192.168.0.3/mysql', echo=True, connect_args=ssl_args)
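The key names matter because each driver spells them differently: sslrootcert/sslcert/sslkey look like libpq/psycopg2-style (PostgreSQL) names, which would explain why PyMySQL rejected them. A tiny sketch of the dict PyMySQL accepts (build_ssl_args is a hypothetical helper, not part of any library):

```python
def build_ssl_args(ca, cert, key):
    # Flat ssl_ca / ssl_cert / ssl_key keys are what PyMySQL accepts in
    # connect_args; 'sslrootcert' etc. are PostgreSQL-style names and
    # produce the "unexpected keyword argument" error seen above.
    return {"ssl_ca": ca, "ssl_cert": cert, "ssl_key": key}

connect_args = build_ssl_args("server-ca.pem", "client-cert.pem", "client-key.pem")
print(connect_args)
```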
Alternatively, pass the SSL parameter in the form ssl = {"ssl": {"ca": "server-ca.pem"}} to the connect function:
from pymysql import connect
from sqlalchemy import create_engine

SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://testuser:<password>@192.168.0.3/mysql?ssl_ca=server-ca.pem'
engine = create_engine(SQLALCHEMY_DATABASE_URI)
args, kwargs = engine.dialect.create_connect_args(engine.url)
# Create a connection to the DB (keyword arguments; newer PyMySQL rejects positional ones)
db_conn = connect(host=kwargs["host"], user=kwargs["user"], passwd=kwargs["passwd"], db=kwargs["db"], ssl={"ssl": {"ca": kwargs["ssl"]["ca"]}})
cursor = db_conn.cursor()
# Execute a query
cursor.execute("show tables")
cursor.fetchall()
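As a side note, the ?ssl_ca=... query parameter ends up in the kwargs dict that create_connect_args() returns. A stdlib-only sketch of that parsing (parse_db_url is a hypothetical stand-in, not the SQLAlchemy API, and 'secret' is a placeholder password):

```python
from urllib.parse import urlparse, parse_qs

def parse_db_url(url):
    # Minimal stand-in for engine.dialect.create_connect_args():
    # extract host, user, password, database and any ssl_* query params.
    parts = urlparse(url)
    kwargs = {
        "host": parts.hostname,
        "user": parts.username,
        "passwd": parts.password,
        "db": parts.path.lstrip("/"),
    }
    ssl = {k[len("ssl_"):]: v[0]
           for k, v in parse_qs(parts.query).items()
           if k.startswith("ssl_")}
    if ssl:
        kwargs["ssl"] = ssl
    return kwargs

print(parse_db_url("mysql+pymysql://testuser:secret@192.168.0.3/mysql?ssl_ca=server-ca.pem"))
```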

Unable to analyse MySQL error logs in OSSEC

I am trying to analyze MySQL error logs that are generated on my OSSEC agent and raise alerts via the OSSEC server.
Here is the block added to /var/ossec/etc/ossec.conf on the agent side to read MySQL's error log:
<localfile>
  <log_format>mysql_log</log_format>
  <location>/var/log/mysql/error.log</location>
</localfile>
After doing so, I restarted the agent and the server, but no alerts are raised for error logs generated on the agent side, like:
2020-09-15T04:09:24.164859Z 12 [Note] Access denied for user 'root'@'localhost' (using password: YES)
As per the doc https://ossec-docs.readthedocs.io/en/latest/docs/programs/ossec-logtest.html, under Caveats, we need to prepend "MySQL log: " to the log line fed to ossec-logtest.
This prefix is added automatically when the agent sends these logs to the OSSEC server for analysis.
ossec-logtest result for the MySQL error log:
ossec-logtest works fine after adding "MySQL log: " to the beginning, but alerts are not raised in real time.
Can anyone please help me with this problem?
The fact that ossec-logtest triggers an alert means the mysql decoder and rules are working fine.
Check on the agent:
MySQL is running: systemctl status mysqld.service
MySQL's configuration (log level and output file) allows logging that kind of event. From the MySQL docs:
If the value is greater than 1, aborted connections are written to the error log, and access-denied errors for new connection attempts are written.
MySQL is effectively logging 'Access denied' events: grep "Access denied" /var/log/mysql/error.log
OSSEC and its processes are running OK: /var/ossec/bin/ossec-control status
Check on the manager:
The log_alert_level field in /var/ossec/etc/ossec.conf is lower than or equal to 9 (the alert level shown in your ossec-logtest output).
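On the agent, if MySQL is not writing the access-denied entries at all, raising the error-log verbosity should produce them. A sketch for my.cnf (log_error_verbosity exists in MySQL 5.7+; older versions use log_warnings=2 instead):

```
[mysqld]
log_error = /var/log/mysql/error.log
# 3 = errors, warnings and notes; access-denied messages are logged as notes
log_error_verbosity = 3
```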

Error when using sendEmail

I'm trying to learn how to use sendEmail to send automated emails. This is the command I entered in the Windows command prompt:
sendEmail -f myemail@gmail.com -t youremail@gmail.com -m "This is a test message." -s smtp.gmail.com:465 -xu myemail@gmail.com -xp mypassword
However, I get the following error:
ERROR => Connection attempt to smtp.gmail.com:465 failed: IO::SOCKET::INET: Bad hostname 'smtp.gmail.com'
After researching this problem online, I ran telnet against smtp.gmail.com and found that I could not open a connection. I think this is the problem, though I am still unsure what is causing it. What can I do to fix this?
Update /etc/hosts, adding an IP address for smtp.gmail.com:
74.125.203.109 smtp.gmail.com
Update /etc/resolv.conf:
nameserver 8.8.8.8
nameserver 8.8.4.4
An IPv6-related IO::Socket::INET error may also cause this kind of issue (check with ifconfig/ipconfig).
If there are multiple IPv6 addresses, disconnect and reconnect your network (ifdown eth0 && ifup eth0).
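Since the error says "Bad hostname", it is worth confirming that the machine can resolve the name at all. A small stdlib sketch (the resolve helper is illustrative, not part of sendEmail):

```python
import socket

def resolve(host):
    # Returns the IPv4 address for a host name, or None when DNS fails
    # (the situation behind the "Bad hostname" error above).
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

print(resolve("smtp.gmail.com"))  # an IP when DNS works, None otherwise
```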

Can't receive email with msmtp using a Google account

I'm experiencing an issue with msmtp on my backup server (openSUSE 12.2). I'm trying to send an email every time one of my backups fails, and I would like to use msmtp for this. I have everything set up. However, even though I see the sent items in both my "Sent" and "Inbox" folders in Gmail, I never receive a single email at the desired email account. Could anyone help me, please? The scripts follow below. Note that the recipient in the log is my Gmail account, even though text.txt says otherwise.
.msmtprc
account default
host smtp.gmail.com
port 587
protocol smtp
from myemail@gmail.com
tls on
tls_starttls on
#tls_trust_file /etc/ssl/certs/ca-certificates.crt
tls_certcheck off
tls_nocertcheck
auth on
user myemail@gmail.com
password Mypassword
logfile ~/.msmtp
.msmtp
Feb 25 09:44:28 host=smtp.gmail.com tls=on auth=on user=myemail@gmail.com
from=myemail@gmail.com recipients=myemail@gmail.com mailsize=130 smtpstatus=250
smtpmsg='250 2.0.0 OK 1393317868 g1sm73904348eet.6 - gsmtp' exitcode=EX_OK
text.txt
From: Daily backups <myemail@gmail.com>
To: Recipient's Name <hisemail@domain.com>
Subject: Backup report
Sample text
Command used to send the email:
$ cat text.txt | msmtp -a default myemail@gmail.com
A big thank you to all of those who will try to help me.
David
This works for me:
account default
host smtp.gmail.com
port 587
logfile /tmp/msmtp.log
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
auth login
user myemail@gmail.com
password MyPassWord1
from First LastName
account account2
And then, from the Raspberry Pi command line (note that the recipient is given explicitly on the command line; in your original command you passed your own Gmail address as the recipient, which explains why the mail ended up in your own inbox):
echo -e "Subject: Test Mail\r\n\r\nThis is a test mail" | msmtp --debug --from=default -t destinationaddress@gmail.com

Documents visible but "doesn't exist"

I have been developing my application in a dev sandbox and want to push the reference data from "dev" to "prod". I thought I had succeeded by executing the following commands:
On my OSX dev machine:
cbbackup http://127.0.0.1:8091 ~/couchbase-reference-data -b reference_data -u username -p password
Again on my OSX dev machine:
cbrestore ~/couchbase-reference-data http://prod.server.com:8091/ -u username -p password
Now when I go to the admin console on production I see this:
Looks good at this point. However, if I click any of the "Edit Document" buttons, things go tragically wrong:
Any help would be GREATLY appreciated!
UPDATE:
I've noticed that when I now run the cbrestore command, I get the following errors:
2013-06-03 16:53:48,295: s0 error: CBSink.connect() for send: error: SASL auth exception: aws.internal-ip.com:11210, user: reference_data
2013-06-03 16:53:48,295: s0 error: async operation: error: SASL auth exception: aws.internal-ip.com:11210, user: reference_data on sink: http://prod.server.com:8091/(reference_data@127.0.0.1:8091)
error: SASL auth exception: aws.internal-ip.com:11210, user: reference_data
This reminds me that what I actually did was copy the ~/couchbase-reference-data directory to the production environment and run cbrestore from there. I have just done that again now and get the following confirmation:
[####################] 100.0% (189/189 msgs)
bucket: reference_data, msgs transferred...
: total | last | per sec
batch : 1 | 1 | 16.1
byte : 36394 | 36394 | 585781.0
msg : 189 | 189 | 3042.1
done
After this process, however, the problem still exists, in the same manner as described before.
UPDATE 2
I decided to delete, re-create, and re-import the bucket on production. All steps completed, and I still have the same error, but I'm wondering if the LOG file has any interesting information in it:
The things that stand out as interesting to me are:
The loading time was "0 seconds" ... as much as I'd like to believe that, it may be a little too quick? It's not a ton of data, but still.
The "module code" is named 'ns_memecached001' ... is that an issue? Memcached? I double-checked that I set this up as a Couchbase bucket. It is.
It seems your destination server is not OS X but, e.g., Linux. In that case you have to use the "rehash" extra option:
Back up your data on your dev machine (using cbbackup).
Copy the data to your prod machine.
Restore the data with the -x rehash=1 flag (using cbrestore -x rehash=1).
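Applied to the commands from the question, the sequence would look like this (host names and credentials are the placeholders used above; the rehash flag is only needed because the backup was taken on OS X):

```
cbbackup http://127.0.0.1:8091 ~/couchbase-reference-data -b reference_data -u username -p password
# copy ~/couchbase-reference-data to the prod machine, then on prod:
cbrestore ~/couchbase-reference-data http://prod.server.com:8091/ -u username -p password -x rehash=1
```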