Node Lambda: MySQL query never runs

I'm testing with Postman. I'll try to make this as clear as possible; please let me know if anything doesn't make sense.
I have a Lambda that uses a MySQL RDS database on AWS, and it works fine locally when accessing that same database on AWS. After successfully getting a JWT from an auth endpoint, I try to hit the login endpoint and get a 502 Bad Gateway. Using the CloudWatch logs, I can trace the failure to right before the login query runs. I've confirmed that my MySQL config is correct and that I have a connection to the database. The Lambda and the database are in the same region (DB: us-east-1f, Lambda: us-east-1).
I've confirmed that the OPTIONS and POST methods for this endpoint are both set up with CORS enabled in API Gateway. I'm using my serverless.yml to set cors: true on all the endpoints, even though I'm also using app.use(cors()) in my index file.
The error message for the 502 is: {"message": "Internal server error"}
Here is the point of failure in my code:
'use strict';

const mysql = require('./index');

module.exports = {
  loginSql: async (email, password) => {
    // MAKES IT HERE AND THE PARAMS ARE CORRECT
    try {
      console.log('IN TRY %%%%%%%%%%%%%%');
      // SEEMS TO DIE HERE
      const results = await mysql.query({
        sql: `SELECT
                id, first_name, last_name, email
              FROM users
              WHERE email = ?
              AND password = ?`,
        timeout: 50000,
        values: [email, password],
      });
      // NEVER MAKES IT HERE /////////
      console.log('QUERY RAN %%%%%%%%%%%%');
      mysql.end();
      if (results.length < 1) return false;
      return results;
    } catch (error) {
      // DOESN'T THROW ERROR
      console.log('LOGIN DB ERROR', error);
      // Note: Error() ignores a second argument, so fold the cause
      // into the message instead.
      throw new Error('LOGIN DB ERROR THROWN: ' + error.message);
    }
  },
};
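(For context: the query({ sql, timeout, values }) and end() calls above match the serverless-mysql package's API, so ./index presumably wraps it along these lines. This is a guess, with placeholder env var names, not code from the question.)

const mysql = require('serverless-mysql')({
  config: {
    host: process.env.DB_HOST, // placeholder names, not from the question
    user: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME,
  },
});

module.exports = mysql;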

I just recreated the exact same use case: a Lambda function, written in Java, querying data from a MySQL RDS instance. It works perfectly.
Here is your issue:
To connect to the RDS instance from a Lambda function, you must add an inbound rule to the RDS instance's security group allowing traffic from the Lambda function's security group. For details, see How do I configure a Lambda function to connect to an RDS instance?.
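For illustration, here is a sketch of that inbound rule created with the AWS SDK for JavaScript (v2); the two group IDs are placeholders for your own RDS and Lambda security groups:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Allow the Lambda's security group to reach the RDS instance's
// security group on the MySQL port (3306).
ec2.authorizeSecurityGroupIngress({
  GroupId: 'sg-rds-placeholder', // RDS instance's security group
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 3306,
    ToPort: 3306,
    UserIdGroupPairs: [{ GroupId: 'sg-lambda-placeholder' }], // Lambda's security group
  }],
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Ingress rule added');
});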

Related

How to query an AWS Aurora MySQL database that has been scaled to 0 ACUs

I am using an AWS RDS Aurora Serverless database as an API data source.
Aurora 2.08.3, compatible with MySQL 5.7.
I have enabled the "Pause after inactivity" feature to "Scale the capacity to 0 ACUs when cluster is idle".
I am finding that when the database has scaled to 0 Aurora Capacity Units, the API fails.
Even if I change the API configuration to time out after 40 seconds, it fails if the database was at 0 ACUs when it was called. Calling the API again after some time yields a successful call.
Whether at connection.connect or connection.query, the failures come with no useful response; the response just never arrives.
I have not been able to determine whether the database simply needs a moment to scale up, i.e. whether I need to pause before making the call. Logging the connection info to the console, it looks the same whether the database is scaled down or ready for a query.
Is there a way to programmatically check if an AWS Aurora Serverless v2 MySQL database is scaled to 0? connection.state does not seem to describe this.
I have tried many, many approaches. This post seemed promising, but it didn't solve my issue.
What I have now is...
const mysql = require('mysql');

var connection;

exports.getRecords = async (event) => {
  await openConnection();
  // Simplified connection.query code that works when the database is
  // scaled up. It is wrapped in a Promise so the async handler
  // actually waits for the result (the original called resolve()
  // without ever constructing a Promise).
  var q = 'SELECT * FROM databaseOne.tableOne';
  return new Promise((resolve, reject) => {
    connection.query(q, function (err, results) {
      if (err) {
        console.log('q err', err);
        return reject(err);
      }
      console.log(results);
      resolve(results);
    });
  });
};

async function openConnection() {
  connection = mysql.createConnection({
    host: process.env.hostname,
    port: process.env.portslot,
    user: process.env.username,
    password: process.env.userpassword,
  });
  console.log('connection1', connection.state, connection);
  try {
    // mysql's connect() takes a callback and does not return a
    // Promise, so it has to be wrapped to be awaitable.
    await new Promise((resolve, reject) => {
      connection.connect((err) => (err ? reject(err) : resolve()));
    });
    console.log('connection2', connection.state, connection);
  } catch (err) {
    console.log('c err', err);
  }
}
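Since a second call after some time succeeds, one approach worth sketching is to retry the connect with a delay so an idle (0 ACU) cluster has time to resume. This is only a sketch: the attempt count and delay are guesses, and connectTimeout is set so a hung connect fails instead of stalling silently.

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Try to connect, retrying a few times to give a paused cluster time
// to resume. 5 attempts x 5 seconds is a guess, not a measured value.
async function connectWithRetry(config, attempts = 5, delayMs = 5000) {
  for (let i = 0; i < attempts; i++) {
    // connectTimeout makes a hung handshake surface as an error we can
    // catch, rather than a response that never arrives.
    const conn = mysql.createConnection({ ...config, connectTimeout: 10000 });
    try {
      await new Promise((resolve, reject) => {
        conn.connect((err) => (err ? reject(err) : resolve()));
      });
      return conn; // connected successfully
    } catch (err) {
      console.log('connect attempt', i + 1, 'failed:', err.code);
      conn.destroy();
      await sleep(delayMs);
    }
  }
  throw new Error('Database did not become available in time');
}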
Thank you

CredentialsError: Could not load credentials from ChainableTemporaryCredentials in AWS-SDK v2 for Javascript

I am trying to set up temporary credentials in the AWS SDK v2 for JavaScript:
const aws = require('aws-sdk')

aws.config = new aws.Config({
  credentials: new aws.ChainableTemporaryCredentials({
    params: {
      RoleArn: roleArn, // Defined earlier
      RoleSessionName: sessionName, // Defined earlier
      DurationSeconds: 15 * 60
    },
    masterCredentials: new aws.Credentials({
      accessKeyId: accessKeyId, // Defined earlier
      secretAccessKey: awsSecretAccessKey // Defined earlier
    })
  }),
  region: 'us-east-1',
  signatureVersion: 'v4'
})

aws.config.getCredentials(function (err) {
  if (err) console.log(err.stack)
  else console.log('Access key:', aws.config.credentials.accessKeyId)
})
However, I keep getting the following error when calling getCredentials:
CredentialsError: Could not load credentials from ChainableTemporaryCredentials
Note that it works fine if I set the credentials parameter to the master credentials instead of the temporary credentials, as shown below:
aws.config = new aws.Config({
  credentials: new aws.Credentials({
    accessKeyId: accessKeyId, // Defined earlier
    secretAccessKey: awsSecretAccessKey // Defined earlier
  }),
  region: 'us-east-1',
  signatureVersion: 'v4'
})
Does anyone know what's causing this issue? Here's the documentation I was referencing:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Credentials.html
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ChainableTemporaryCredentials.html
I was finally able to figure out the cause of this error.
What led me there was printing out the full error object instead of just the most recent error. One of its properties was:
originalError: {
  message: 'The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.',
  code: 'SignatureDoesNotMatch',
  time: 2021-12-11T19:49:52.395Z,
  requestId: '402e4c32-7989-4287-a6a9-628bfc93f60f',
  statusCode: 403,
  retryable: false,
  retryDelay: 39.60145242362791
}
So I realized the problem was that the master credentials I provided were not correct!
I had actually always known that these credentials weren't correct, but for unit-testing purposes everything seemed to work fine with them as long as I didn't also supply the temporary credentials. Now I understand why: when you use temporary credentials, getCredentials verifies them with AWS (ChainableTemporaryCredentials calls STS AssumeRole, which is signed with the master credentials), but with just master credentials it never contacts AWS. That explains the strange behavior I was seeing.
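In case it helps anyone else, this is roughly the logging that surfaced the root cause; for credential-chain failures the underlying STS error is attached as err.originalError (shown above):

aws.config.getCredentials(function (err) {
  if (err) {
    // Log the whole error object, not just err.stack.
    console.log(err);
    if (err.originalError) {
      console.log('Root cause:', err.originalError.code, err.originalError.message);
    }
  } else {
    console.log('Access key:', aws.config.credentials.accessKeyId)
  }
})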

Issue trying to access MySQL on AWS RDS through a Lambda in a VPC

I have set up a VPC and an RDS MySQL database, accessed from a Node.js Lambda deployed with Serverless.
The issue I am having is that I get an internal server error when testing the Lambda. The Lambda is using the same VPC as the RDS instance.
Could this be a permission issue where the Lambda needs direct permission to the DB instance? If so, does anyone have suggestions on what permissions would be required?
Thank you.
Here is part of the code I am using; it tests a query and logs the result to CloudWatch. The result never shows up, and the invocation only shows a timeout.
It seems to work locally using Serverless. This is just for educational purposes.
let pool = mysql.createPool({
  host: host.length > 0 ? host : body.host,
  port: port.length > 0 ? port : body.port,
  user: body.username,
  password: body.password,
  database: body.dbname
});
pool.getConnection(function (err, connection) {
  if (err) {
    // This error was silently ignored before; a VPC or security-group
    // problem surfaces here as a connection timeout.
    console.log('getConnection error', err);
    throw err;
  }
  connection.query('SELECT 1 + 1 AS result', function (error, results, fields) {
    // And done with the connection.
    connection.release();
    // Handle error after the release.
    if (error) throw error;
    else Logger.info(JSON.stringify(results));
  });
});
return { statusCode: 200, body: JSON.stringify({}) };
}
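One other thing worth noting about the snippet above: the handler returns the 200 response before the query callback fires, so the log line can be lost when the runtime freezes the process. A promisified variant (a sketch; it assumes the same pool and an async handler) would rule that out:

const util = require('util');

// Wait for the query before returning, so the CloudWatch log line
// cannot be lost to the runtime freezing the event loop.
const query = util.promisify(pool.query).bind(pool);
const results = await query('SELECT 1 + 1 AS result');
Logger.info(JSON.stringify(results));
return { statusCode: 200, body: JSON.stringify(results) };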

Send multiple requests to Flask API

I ran into this exception while working with Angular to display data from my Flask REST API, which is deployed behind an nginx server:
{ "error": "(_mysql_exceptions.ProgrammingError) (2014, \"Commands out of sync; you can't run this command now\") [SQL: 'SELECT ......" }
This exception is caused by this function, which sends two requests to the database at the same time:
ngOnInit() {
  let url = this.baseUrl + `/items/${this.id}`;
  this.httpClient.get(url).subscribe((data: Array<any>) => {
    this.ItemToEdit = data;
  });
  // Redeclaring url with let in the same scope is a syntax error, so
  // reassign instead. Note that HttpClient requests are cold: without
  // a subscribe() the second request would never actually be sent.
  url = this.baseUrl + '/products';
  this.httpClient.get(url).subscribe();
}
I use SQLAlchemy in the API with a MySQL database. I thought that if I added a connection pool it would be resolved, but it didn't help; I still get the same exception:
engine = create_engine(connection_string, pool_size=20, max_overflow=0)
What exactly should I do to handle this?
Is there anything else to set on the server side to make this function work without raising an exception?
EDIT:
Is this an observable-handling problem, or can it be fixed by using Gunicorn with nginx on the server side?
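In case it is the former, one test would be serializing the two requests on the Angular side with RxJS operators, so the API never handles both queries at once. A sketch (untested):

import { switchMap, tap } from 'rxjs/operators';

ngOnInit() {
  const itemUrl = this.baseUrl + `/items/${this.id}`;
  const productsUrl = this.baseUrl + '/products';
  // Fire the second request only after the first completes.
  this.httpClient.get(itemUrl).pipe(
    tap((data) => { this.ItemToEdit = data; }),
    switchMap(() => this.httpClient.get(productsUrl))
  ).subscribe((products) => {
    // handle the product list here
  });
}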
Using Gunicorn solved my problem.
I launch the application this way:
gunicorn -b 0.0.0.0:5000 --workers=5 myapi:app
And I don't see the error anymore, even with 3 MySQL requests in ngOnInit.

Intermittent timeouts between AWS Lambda and RDS

We are currently experiencing what I can only describe as random, intermittent timeouts between AWS Lambda and RDS. After deploying our functions and running them successfully, they can randomly switch to timing out, with no configuration changes. Important to note: we are also monitoring the DB connections and can confirm that we aren't running into a max-connection issue.
Here are the details on our setup:
Code being executed (using Node.js v6.10):
const mysql = require('mysql');

exports.dbWrite = (events, context, callback) => {
  const db = mysql.createConnection({
    host: <redacted>,
    user: <redacted>,
    password: <redacted>,
    database: <redacted>
  });
  db.connect(function (err) {
    if (err) {
      console.error('error connecting: ' + err.stack);
      return;
    }
    console.log('connected !');
  });
  db.end();
};
We are using the Node.js mysql library, v2.14.1.
From a networking perspective:
- The Lambda function is in the same VPC as our RDS instance.
- The Lambda function has subnets assigned, which are associated with a routing table that does not have internet access (not associated with an internet gateway).
- The RDS database is not publicly accessible.
- A security group has been created and associated with the Lambda function that has wide-open access on all ports (for now; once DB connectivity is reliable, that will change).
- The above security group has been whitelisted on port 3306 within a security group associated with the RDS instance.
CloudWatch error:
{
  "errorMessage": "connect ETIMEDOUT",
  "errorType": "Error",
  "stackTrace": [
    "Connection._handleConnectTimeout (/var/task/node_modules/mysql/lib/Connection.js:419:13)",
    "Socket.g (events.js:292:16)",
    "emitNone (events.js:86:13)",
    "Socket.emit (events.js:185:7)",
    "Socket._onTimeout (net.js:338:8)",
    "ontimeout (timers.js:386:14)",
    "tryOnTimeout (timers.js:250:5)",
    "Timer.listOnTimeout (timers.js:214:5)",
    "    --------------------",
    "Protocol._enqueue (/var/task/node_modules/mysql/lib/protocol/Protocol.js:145:48)",
    "Protocol.handshake (/var/task/node_modules/mysql/lib/protocol/Protocol.js:52:23)",
    "Connection.connect (/var/task/node_modules/mysql/lib/Connection.js:130:18)",
    "Connection._implyConnect (/var/task/node_modules/mysql/lib/Connection.js:461:10)",
    "Connection.query (/var/task/node_modules/mysql/lib/Connection.js:206:8)",
    "/var/task/db-write-lambda.js:52:12",
    "getOrCreateEventTypeId (/var/task/db-write-lambda.js:51:12)",
    "exports.dbWrite (/var/task/db-write-lambda.js:26:9)"
  ]
}
Among the references already reviewed:
- https://forums.aws.amazon.com/thread.jspa?threadID=221928 (the invocation ID in CloudWatch is different in all of our timeout cases)
- pretty much every post in this list: https://stackoverflow.com/search?q=aws+lambda+timeouts+to+RDS
In summary, the fact that these timeouts are intermittent makes this issue totally confusing. AWS support has stated that the Node.js mysql library is a third-party tool and technically not supported, but I know folks are using this technique.
Any help is greatly appreciated!
Considering that the RDS connections are not exhausted, there is a possibility that a Lambda running in a particular subnet is always failing to connect to the DB. I am assuming that the RDS instances and Lambdas are running in separate subnets. One way to investigate this is to check the VPC flow logs.
Go to EC2 -> Network interfaces -> search for the Lambda name -> copy the ENI ref, then go to VPC -> Subnets -> select the Lambda's subnet -> Flow Logs -> search by the ENI ref.
If you see "REJECT OK" in your flow logs for your DB port, it means there is a missing config in your Network ACLs.
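If you would rather search the flow logs programmatically, something along these lines works when the flow logs are delivered to CloudWatch Logs (the log group name and ENI ref below are placeholders):

const AWS = require('aws-sdk');
const logs = new AWS.CloudWatchLogs({ region: 'us-east-1' });

logs.filterLogEvents({
  logGroupName: '/vpc/flow-logs',                  // placeholder
  filterPattern: '"eni-0123456789abcdef0" "3306"', // your ENI ref and DB port
  limit: 50,
}, (err, data) => {
  if (err) return console.error(err);
  // Look for lines containing "REJECT OK".
  data.events.forEach((e) => console.log(e.message));
});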
Updating this issue: it turns out the problem was that the database connection was being made within the handler! Due to the asynchronous nature of Lambda and Node, this was the culprit for the intermittent timeouts.
Here's the revised code:
const mysql = require('mysql');

const database = getConnection();

exports.dbWrite = (events, context, callback) => {
  database.connect(function (err) {
    if (err) {
      console.error('error connecting: ' + err.stack);
      return;
    }
    console.log('connected !');
  });
  database.end(); // was db.end(), which referenced an undefined variable
};

function getConnection() {
  let db = mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME
  });
  console.log('Host: ' + process.env.DB_HOST);
  console.log('User: ' + process.env.DB_USER);
  console.log('Database: ' + process.env.DB_NAME);
  console.log('Connecting to ' + process.env.DB_HOST + '...');
  return db;
}
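A common refinement of this pattern (not part of the original answer; a sketch that assumes warm invocations may reuse the connection) is to keep the connection open across invocations instead of calling end() each time, and to tell Lambda not to wait for the open socket:

const mysql = require('mysql');

let connection; // reused across warm invocations

function getConnection() {
  if (!connection) {
    connection = mysql.createConnection({
      host: process.env.DB_HOST,
      user: process.env.DB_USER,
      password: process.env.DB_PASS,
      database: process.env.DB_NAME
    });
    connection.connect();
  }
  return connection;
}

exports.dbWrite = (events, context, callback) => {
  // Let the invocation finish without waiting for the idle
  // connection's socket to close.
  context.callbackWaitsForEmptyEventLoop = false;
  getConnection().query('SELECT 1 AS ok', (err, results) => {
    if (err) return callback(err);
    callback(null, results);
  });
};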