How to set TLS for an RDS proxy with SQLAlchemy - mysql

I have an AWS Lambda function (Python 3.8) in which I am trying to connect to an RDS proxy using SQLAlchemy. I've confirmed that the function configuration allows for this by also connecting to the proxy directly using PyMySQL. When I run the function, I get an error message of "(pymysql.err.InternalError) (3159, 'This RDS Proxy requires TLS connections')\n(Background on this error at: http://sqlalche.me/e/2j85)". The "background" for that error says nothing about TLS. I understand what I need to do (tell SQLAlchemy to connect using SSL/TLS), but I cannot figure out the syntax to do so. Below is my current code.
import pymysql
import sqlalchemy
from database_info import make_connection_str
print('connecting to database')
CONN_STR = make_connection_str()
ENGINE = sqlalchemy.create_engine(CONN_STR)
METADATA = sqlalchemy.MetaData(ENGINE)
TABLE = sqlalchemy.Table('active_prospect', METADATA, autoload=True) #error comes with this line
Things I've tried have been related to the following:
According to https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html, ssl-mode=REQUIRED is needed.
pymysql.connect(... ssl={"true": True}) works.
Based on those two things, I've tried every combination of the keys (ssl-mode, ssl) with the values ('true', 'required'), in two different ways:
CONN_STR += '?ssl-mode=REQUIRED'
create_engine(CONN_STR, connect_args={'ssl': 'True'})
Results have varied. In some cases the error above was replaced by AttributeError: 'str' object has no attribute 'get', still raised from the same line.

I faced the same issue when I was trying to connect to RDS through an RDS proxy from Lambda. It was fixed by the following:
In the Lambda configuration, you will see the section 'Database proxies'. Go to 'Add database proxy' and add the proxy there. Basically, it modifies the IAM role of the Lambda so it can connect to the proxy.
In the Lambda code, I just replaced the host with the proxy endpoint (it worked without an SSL argument).

Related

Google Cloud Function with Cloud MySQL Authorisation fails ([Errno 111] Connection refused)

I'm trying to connect to my Google Cloud MySQL database through a Google Cloud Function to read some data. The function build succeeds, but when executed only this is displayed:
Error: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)") (Background on this error at: http://sqlalche.me/e/e3q8)
Here is my connection code:
import sqlalchemy

# Depending on which database you are using, you'll set some variables differently.
# In this code we are inserting only one field with one value.
# Feel free to change the insert statement as needed for your own table's requirements.
# Uncomment and set the following variables depending on your specific instance and database:
connection_name = "single-router-309308:europe-west4:supermarkt-database"
db_name = "supermarkt-database"
db_user = "hidden"
db_password = "hidden"

# If your database is MySQL, uncomment the following two lines:
driver_name = 'mysql+pymysql'
query_string = dict({"unix_socket": "/cloudsql/{}".format(connection_name)})
# If the type of your table_field value is a string, surround it with double quotes.
# (SO note: I didn't really understand this line. Is this the problem?)

def insert(request):
    request_json = request.get_json()
    stmt = sqlalchemy.text('INSERT INTO products VALUES ("Testid", "testname", "storename", "testbrand", "4.20", "1kg", "super lekker super mooi", "none")')
    db = sqlalchemy.create_engine(
        sqlalchemy.engine.url.URL(
            drivername=driver_name,
            username=db_user,
            password=db_password,
            database=db_name,
            query=query_string,
        ),
        pool_size=5,
        max_overflow=2,
        pool_timeout=30,
        pool_recycle=1800,
    )
    try:
        with db.connect() as conn:
            conn.execute(stmt)
    except Exception as e:
        return 'Error: {}'.format(str(e))
    return 'ok'
I got it mostly from following this tutorial: https://codelabs.developers.google.com/codelabs/connecting-to-cloud-sql-with-cloud-functions#0. I'm also using Python 3.7, as used in the tutorial.
SQLAlchemy's error documentation describes this one as not necessarily under the control of the programmer.
For context, the account used to connect has the Cloud SQL Admin role, and the Cloud SQL Admin API is enabled. Thanks in advance for the help!
PS: I did find this answer: Connecting to Cloud SQL from Google Cloud Function using Python and SQLAlchemy, but I have no idea where the firewall settings for Cloud SQL can be found. I didn't find them under SQL > Connection / Overview or Firewall.
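One way to sanity-check the URL the snippet builds is to construct it and inspect its parts (a sketch assuming SQLAlchemy 1.4+, where URL.create is the builder; earlier versions use sqlalchemy.engine.url.URL directly, as in the question):

```python
from sqlalchemy.engine import URL

connection_name = "single-router-309308:europe-west4:supermarkt-database"

# Rebuild the same URL the question constructs and inspect its parts.
url = URL.create(
    drivername="mysql+pymysql",
    username="hidden",
    password="hidden",
    database="supermarkt-database",
    query={"unix_socket": "/cloudsql/{}".format(connection_name)},
)
# Note there is deliberately no host/port: the Cloud SQL connector is
# addressed via the unix socket, so nothing should resolve to localhost.
print(url.host, url.query["unix_socket"])
```

If the driver nonetheless reports "Can't connect ... on 'localhost'", the unix_socket query parameter is not reaching the dialect, which points at the URL construction rather than credentials.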
Alright, so I figured it out! In Edit Function > Runtime, Build and Connection Settings, head over to Connection Settings and make sure "Only route requests to private IPs through the VPC connector" is enabled. The VPC connector requires different authorization.
Also, apparently I needed my TABLE name, not my DATABASE name, as the variable db_name. Thanks @guillaume blaquiere for your assistance!

AWS: connecting to Aurora from a Lambda JavaScript function using IAM as the authentication method

I've been following this tutorial [1] to connect to an Aurora RDS cluster from a Lambda JavaScript function using IAM authentication. I'm able to get an authentication token, but not able to use the token to get a connection to the database. I also followed these instructions [2] to get an authentication token from the CLI, and then used the token to connect with the 'mysql' command tool. All good from the command line, which kind of tells me that I have followed all the steps from [1] correctly.
I'm stuck, though. Although I can get an authentication token from my Lambda, I can't use it to connect to my database. My Lambda is written in JavaScript, and the mysql driver I'm using is [3]. This driver seems not to be compatible with IAM authentication. Here is why I think that:
In [1], there is an example for Python where a mysql driver is being used. In the example, the instructions say to use
auth_plugin="mysql_clear_password"
In the driver I'm using [3], from the docs, I can't find an option that maps to "auth_plugin". The closest is a flag called PLUGIN_AUTH, but the docs for [3] say:
Uses the plugin authentication mechanism when connecting to the MySQL server. This feature is not currently
supported by the Node.js implementation so cannot be turned on. (Default off)
Seems like Node doesn't support this.
This is a piece of my code:
const mysql = require('mysql');
var config = {
    host     : theHost,
    user     : theUser,
    password : token,  // token previously generated
    database : theDataBase,
    // this option is not really part of driver [3], I was just trying something out:
    auth_plugin: "mysql_clear_password",
    ssl: {
        ca: '/var/task/rds-combined-ca-bundle.pem'  // cert downloaded from [4]
    }
};
var connection = mysql.createConnection(config);
The error I'm getting is:
ERROR error connecting: Error: unable to get local issuer certificate
I have checked that "/var/task/rds-combined-ca-bundle.pem" exists. It is part of the zip package for my function.
If I remove the "ssl" key from the connection object, I get:
error connecting: Error: ER_NOT_SUPPORTED_AUTH_MODE: Client does not support authentication protocol
requested by server; consider upgrading MySQL client
From the AWS docs, I can't find a good example of using IAM authentication from a Lambda function implemented in JavaScript. So, my questions are:
Can you provide an example of a Lambda function implemented in JavaScript that connects to Aurora using IAM authentication?
Are you aware whether [3] really doesn't support IAM authentication?
Is there any other mysql driver that is really compatible with IAM authentication?
Thanks!
References:
[1] https://aws.amazon.com/blogs/database/iam-role-based-authentication-to-amazon-aurora-from-serverless-applications/
[2] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.html
[3] https://www.npmjs.com/package/mysql
[4] https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
A bit late, but the mysql library ships with the RDS certs as standard.
So, luckily, to solve this issue add 'ssl: "Amazon RDS"', which will use the inbuilt certs.
So something like this:
const signer = new AWS.RDS.Signer({
    'region': region,
    'username': username,
    'hostname': host,
    'port': port
});

let token = await signer.getAuthToken({});

var config = {
    host     : host,
    user     : username,
    password : token,  // token previously generated
    database : database,
    ssl      : "Amazon RDS"
};
see mysql docs for more
https://github.com/mysqljs/mysql#ssl-options
SSL options
The ssl option in the connection options takes a string or an object. When given a string, it uses one of the predefined SSL profiles included. The following profiles are included:
"Amazon RDS": this profile is for connecting to an Amazon RDS server and contains the certificates from https://rds.amazonaws.com/doc/rds-ssl-ca-cert.pem and https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
PS: a word to the wise, AWS RDS from Node is a bit tricky; there are lots of other trip wires to watch out for, like permissions, enabling IAM, and granting IAM users. Create a question and ping me if you have other issues and I'll answer them there.

AWS IAM Authentication to MySQL from dotnet core code running in Lambda

I have a lambda function written in dotnet core 2.1. This code uses the Oracle MySql provider (v8.0.19). A MySQL RDS (v5.7.26) is running with a user configured to use the AWSAuthenticationPlugin as described in the AWS article linked below.
I'm able to connect to this RDS using a normal username/password combination, but I want to use IAM Authentication instead. This article describes how to do this using the mysql client on a linux server: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.html
I've created the IAM policy and I'm able to load the token in the lambda function via RDSAuthTokenGenerator.GenerateAuthToken. The connection string is built as follows:
MySqlConnection c = new MySqlConnection();
MySqlConnectionStringBuilder csb = new MySqlConnectionStringBuilder();
csb.Server = "dbinstanceName.xxx.us-xxx-1.rds.amazonaws.com";
csb.Port = 3306;
csb.Database = "Database";
csb.UserID = "DatabaseUserName";
csb.Password = RDSAuthTokenGenerator.GenerateAuthToken("dbinstanceName.xxx.us-xxx-1.rds.amazonaws.com", 3306, "DatabaseUserName");
csb.SslMode = MySqlSslMode.VerifyFull;
c.ConnectionString = csb.ConnectionString;
When I pass this token as the value for password in the connection string, I get the exception:
"errorType": "MySqlException",
"errorMessage": "Authentication method 'mysql_clear_password' not supported by any of the available plugins."
This MySQL article describes Client-Side cleartext pluggable Authentication (https://dev.mysql.com/doc/refman/5.7/en/cleartext-pluggable-authentication.html), however I don't know how to enable this plugin in the MySQL provider in dotnet.
Can anyone suggest a way to send this RDS Token in the connection string in clear text in dotnet core? I can't seem to figure out how this is done.
I was able to make this work using a different MySQL provider. If you are stuck, try using the MySqlConnector provider instead:
https://mysqlconnector.net/
With this provider, I did not have to change anything about the code/connection string above.
Hope this is helpful for someone else.

unable to connect to AWS RDS instance in default VPC from AWS Lambda

I have an RDS MySQL instance running:
- it is assigned in the default VPC to all default subnets
- it has a security group with an inbound rule set to listen to all traffic, all protocols, all port ranges, and source 0.0.0.0/0
- Publicly accessible is set to True
I am able to connect to RDS from SQL Workbench and also from a local Python script.
In my Python Lambda function:
- I have assigned a role with AWSLambdaVPCAccessExecutionRole and lambda_basic_execution
- the Lambda is not assigned to any VPC
I get the following error message from Lambda:
"errorMessage": "RequestId: xx Process exited before completing request"
The code fails at the point where it tries to connect to the DB, get_database_connection(), and in the except block logs logger.error("ERROR: Unexpected error: Could not connect to MySql instance.")
Is it even possible for a Lambda to connect to an RDS instance in the default VPC? The Lambda is not assigned to any VPC.
Lambda Code
import sys
import logging
import package.pymysql
import package.pymysql.cursors

logger = logging.getLogger()
logger.setLevel(logging.INFO)

DATABASE_HOST = 'XXX'
DATABASE_USER = 'XXX'
DATABASE_PASSWORD = 'XXX'
DATABASE_DB_NAME = 'XXX'
port = 3306

def get_database_connection():
    "Build a database connection"
    conn = pymysql.connect(DATABASE_HOST, user=DATABASE_USER,
                           passwd=DATABASE_PASSWORD, db=DATABASE_DB_NAME,
                           connect_timeout=5)
    return conn

try:
    conn = get_database_connection()
except:
    logger.error("ERROR: Unexpected error: Could not connect to MySql instance.")
    sys.exit()

logger.info("SUCCESS: Connection to RDS mysql instance succeeded")

def lambda_handler(event, context):
    print("Lambda executed")
I followed this link:
https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds-deployment-pkg.html
What you need to do is this:
Create 2 private subnets for the default VPC
xxx.xxx.64.0/20
xxx.xxx.128.0/20
Go to your Lambda function in the console.
Scroll down and on the left hand side select the default VPC.
Select the 2 Private Subnets as your subnets on your lambda function.
Yes, your Lambda is not in a VPC, so the instance can't contact the public RDS instance. Follow this documentation to provide your Lambda function with internet "functionality":
https://aws.amazon.com/it/premiumsupport/knowledge-center/internet-access-lambda-function/
There is a lot of documentation that says to have 2 private subnets for the Lambda in your VPC and to have an internet connection using a NAT gateway, etc.
Actually, I was able to connect to RDS in the default VPC directly from the Lambda (without placing it in private subnets). The issue was that I had imported the pymysql file inside of a package folder, so I was getting that connection timeout error.
I just had to prefix package in front of pymysql (package.pymysql).
except Exception as error: did the trick for me.
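That last line is worth unpacking: a bare except: swallows the real error, while catching Exception as error and logging it reveals whether the failure is a timeout, an import problem, or something else. A minimal stdlib sketch of the pattern (the failing connect call here is a hypothetical stand-in, not pymysql):

```python
import logging

logger = logging.getLogger("rds_example")

def get_database_connection():
    # Stand-in for the real package.pymysql connect call, so the failure
    # path can be demonstrated without a database.
    raise RuntimeError("Connect call timed out")

caught = None
try:
    conn = get_database_connection()
except Exception as error:
    # Logging the exception object itself (rather than a fixed message
    # under a bare except:) surfaces the real cause in CloudWatch.
    logger.error("ERROR: Could not connect to MySQL instance: %s", error)
    caught = error
```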

SQLAlchemy AppEngine standard - Lost connection to MySQL server

I'm trying to connect to a Google Cloud SQL Second Generation instance in Python from App Engine standard (Python 2.7).
Until now, I was using the MySQLdb driver directly and it was fine.
I've tried to switch to SQLAlchemy, but now I always get this error when the code is deployed (it seems to work fine locally), resulting in an error 500 (it's not just some connections that are lost; it constantly fails):
OperationalError: (_mysql_exceptions.OperationalError) (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 38") (Background on this error at: http://sqlalche.me/e/e3q8)
I don't understand because the setup doesn't differ from before, so it must be related to the way I use SQLAlchemy.
I use something like this :
create_engine("mysql+mysqldb://appuser:password@x.x.x.x/db_name?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName")
I've tried different values (with and without the IP, ...), but it is still the same. Is it a version compatibility problem?
I use
MySQL-python in the app.yaml and SQLAlchemy 1.2.4 :
app.yaml :
- name: MySQLdb
  version: "latest"
requirements.txt :
SQLAlchemy==1.2.4
It was a problem in the URL. In a specific part of the code I was adding "/dbname" at the end of the connection string, resulting in something like this:
mysql+mysqldb://appuser:password@/db_name?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName/dbname
So in the end, this error can also mean that the unix socket path is wrong.
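The failure mode can be seen by parsing both strings with SQLAlchemy's make_url, which only parses and never connects (a sketch; the endpoint values are the placeholders from above):

```python
from sqlalchemy.engine import make_url

good = make_url(
    "mysql+mysqldb://appuser:password@/db_name"
    "?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName"
)
bad = make_url(
    "mysql+mysqldb://appuser:password@/db_name"
    "?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName/dbname"
)
# The stray "/dbname" ends up inside the socket path, not the database
# name, so the driver tries to open a unix socket that does not exist.
print(good.query["unix_socket"])
print(bad.query["unix_socket"])
```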
There are a number of causes for losing the connection to a Google Cloud SQL server, but quite rightly, you have to ensure that your setup is appropriate first. I don't think this issue is about version compatibility.
According to the documentation, for your application to be able to connect to your Cloud SQL instance when the app is deployed, you need to add the user, password, database, and instance connection name variables from Cloud SQL to the related environment variables in the app.yaml file (your displayed app.yaml does not seem to contain these environment variables).
I recommend you review the documentation for details on how to set up your Cloud SQL instance and connect to it.