I have a Node.js Lambda function on AWS and a MySQL database. I have been using the following code to connect:
const mysql = require('mysql');

const con = mysql.createConnection({
  host: "<endpoint>",
  user: "<user>",
  password: "<password>"
});

con.connect(function(err) {
  if (err) throw err;
  console.log("Connected!");
  con.end();
});
When I try to connect to the endpoint I get the message:
{"message": "Internal server error"}
Any help would be much appreciated! Hope everyone is healthy.
It is most likely an RDS set-up issue.
Check the settings on your security group. Does it have a rule that only allows connections from a certain security group?
Also, does your RDS instance have Public Accessibility? You would want to set it to 'Yes'.
The port also seems to be missing from your createConnection call; I've seen people skip it, while others needed to include it (see the snippet after the debugging example below).
Another thing: for debugging purposes, log the error stack as well; it will take you to the problem :)
con.connect(function(err) {
  if (err) {
    console.error('db connection failed: ' + err.stack);
    return;
  }
  console.log('connected to db');
});

con.end();
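For reference, a createConnection call with the port spelled out might look like this (placeholder values; I have also added the database name, which your snippet omits):

const mysql = require('mysql');

const con = mysql.createConnection({
  host: "<endpoint>",
  port: 3306, // the default MySQL port; adjust if your RDS instance uses another
  user: "<user>",
  password: "<password>",
  database: "<database>" // the schema you intend to query
});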
Check this link to the AWS docs; it is for Elastic Beanstalk, but the set-up is the same :)
Here is another link/blog post to get you there: using AWS RDS with Node.js
I am using an AWS Amazon RDS Aurora Serverless database as an API data source.
Aurora 2.08.3, compatible with MySQL 5.7.
I have enabled the "Pause after inactivity" feature to "Scale the capacity to 0 ACUs when cluster is idle".
I am finding that when the database is scaled to 0 Aurora Capacity Units (ACUs), the API fails.
Even if I change the API configuration to time out after 40 seconds, it fails if the database was at 0 ACUs when it was called. Calling the API again after some time yields a successful call.
Whether at connection.connect or connection.query, the failures come with no useful response; the response simply never arrives.
I have not been able to determine whether the database needs a moment to scale up, and therefore whether I need to delay the call. When I log the connection info to the console, it looks the same whether the database is scaled down or ready for a query.
Is there a way to programmatically check if an AWS Aurora Serverless v2 MySQL database is scaled to 0? connection.state does not seem to describe this.
I have tried many, many approaches. This post seemed promising, but didn't solve my issue.
What I have now is...
var mysql = require('mysql');
var connection;

exports.getRecords = async (event) => {
  await openConnection();
  // Simplified connection.query code that works when the database is scaled up
  var q = 'SELECT * FROM databaseOne.tableOne';
  return new Promise(function(resolve, reject) {
    connection.query(q, function(err, results) {
      if (err) {
        console.log('q err', err);
        reject(err);
        return;
      }
      console.log(results);
      resolve(results);
    });
  });
};
async function openConnection() {
  connection = mysql.createConnection({
    host: process.env.hostname,
    port: process.env.portslot,
    user: process.env.username,
    password: process.env.userpassword,
  });
  console.log('connection1', connection.state, connection);
  try {
    // Note: mysql's connect() is callback-based, so this await does not
    // actually wait for the handshake to finish or surface its errors.
    await connection.connect();
    console.log('connection2', connection.state, connection);
  } catch (err) {
    console.log('c err', err);
  }
}
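I am also considering polling the RDS API for the cluster's current capacity before querying. A sketch of the idea (this assumes the AWS SDK v2 for JavaScript, that describeDBClusters reports a Capacity field, as it does for Aurora Serverless v1 clusters, and a placeholder cluster identifier):

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

// Returns the cluster's current capacity in ACUs ('my-cluster' is a placeholder).
async function getClusterCapacity() {
  const data = await rds.describeDBClusters({
    DBClusterIdentifier: 'my-cluster'
  }).promise();
  return data.DBClusters[0].Capacity;
}

// Wait until the cluster reports a non-zero capacity before querying.
async function waitForCapacity() {
  let capacity = await getClusterCapacity();
  while (capacity === 0) {
    await new Promise((r) => setTimeout(r, 2000)); // poll every 2 seconds
    capacity = await getClusterCapacity();
  }
}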
Thank you
I have set up a VPC and an RDS MySQL database, accessed through a Node.js Lambda deployed with Serverless.
The issue I am having is that I get an internal server error when testing the Lambda. The Lambda is using the same VPC as the RDS instance.
Could this be a permission issue, where the Lambda needs direct permission to access the db instance? If so, does anyone have suggestions on what permissions would be required?
Thank you
Here is part of the code I am using; it tests a query and logs the result to CloudWatch. The log never shows up, and the invocation only shows a timeout.
It seems to work locally using Serverless. This is just for educational purposes.
let pool = mysql.createPool({
  host: host.length > 0 ? host : body.host,
  port: port.length > 0 ? port : body.port,
  user: body.username,
  password: body.password,
  database: body.dbname
});

pool.getConnection(function(err, connection) {
  if (err) throw err; // could not get a connection from the pool
  connection.query('SELECT 1 + 1 AS result', function (error, results, fields) {
    // Done with the connection; return it to the pool.
    connection.release();
    // Handle error after the release.
    if (error) throw error;
    else Logger.info(JSON.stringify(results));
  });
});

return {statusCode: 200, body: JSON.stringify({})};
}
We are currently experiencing what I can only describe as random intermittent timeouts between AWS Lambda and RDS. After deploying our functions and running them successfully, they can randomly switch to a state of timing out, with no configuration changes. It is important to note that we are also monitoring the DB connections and can confirm that we are not running into a max-connection issue.
Here are the details on our setup:
Code being executed (using Node.JS v. 6.10):
const mysql = require('mysql');

exports.dbWrite = (events, context, callback) => {
  const db = mysql.createConnection({
    host: <redacted>,
    user: <redacted>,
    password: <redacted>,
    database: <redacted>
  });

  db.connect(function (err) {
    if (err) {
      console.error('error connecting: ' + err.stack);
      return;
    }
    console.log('connected !');
  });

  db.end();
};
We are using the Node.JS mysql library, v. 2.14.1.
From a networking perspective:
The Lambda function is in the same VPC as our RDS instance
The Lambda function has subnets assigned, which are associated with a routing table that does not have internet access (not associated with an internet gateway)
The RDS database is not publicly accessible.
A security group has been created and associated with the Lambda function that has wide open access on all ports (for now - once DB connectivity is reliable, that will change).
The above security group has been whitelisted on port 3306 within a security group associated with the RDS instance.
CloudWatch error:
{
  "errorMessage": "connect ETIMEDOUT",
  "errorType": "Error",
  "stackTrace": [
    "Connection._handleConnectTimeout (/var/task/node_modules/mysql/lib/Connection.js:419:13)",
    "Socket.g (events.js:292:16)",
    "emitNone (events.js:86:13)",
    "Socket.emit (events.js:185:7)",
    "Socket._onTimeout (net.js:338:8)",
    "ontimeout (timers.js:386:14)",
    "tryOnTimeout (timers.js:250:5)",
    "Timer.listOnTimeout (timers.js:214:5)",
    "    --------------------",
    "Protocol._enqueue (/var/task/node_modules/mysql/lib/protocol/Protocol.js:145:48)",
    "Protocol.handshake (/var/task/node_modules/mysql/lib/protocol/Protocol.js:52:23)",
    "Connection.connect (/var/task/node_modules/mysql/lib/Connection.js:130:18)",
    "Connection._implyConnect (/var/task/node_modules/mysql/lib/Connection.js:461:10)",
    "Connection.query (/var/task/node_modules/mysql/lib/Connection.js:206:8)",
    "/var/task/db-write-lambda.js:52:12",
    "getOrCreateEventTypeId (/var/task/db-write-lambda.js:51:12)",
    "exports.dbWrite (/var/task/db-write-lambda.js:26:9)"
  ]
}
Amongst the references already reviewed:
https://forums.aws.amazon.com/thread.jspa?threadID=221928
(the invocation ID in CloudWatch is different on all timeout cases)
pretty much every post in this list: https://stackoverflow.com/search?q=aws+lambda+timeouts+to+RDS
In summary, the fact that these timeouts are intermittent makes this issue totally confusing. AWS support has stated that the Node.js mysql library is a third-party tool and is technically not supported, but I know folks are using this technique.
Any help is greatly appreciated!
Considering that the RDS connections are not exhausted, there is a possibility that a Lambda running in one particular subnet is always failing to connect to the db. I am assuming that the RDS instances and Lambdas are running in separate subnets. One way to investigate this is to check the VPC flow logs.
Go to EC2 -> Network interfaces -> search for the Lambda name -> copy the ENI ref, then go to VPC -> Subnets -> select the Lambda's subnet -> Flow Logs -> search by the ENI ref.
If you see "REJECT OK" for your db port in the flow logs, it means there is missing config in your Network ACLs.
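If your flow logs are delivered to CloudWatch Logs, you can also search them programmatically; a rough sketch with the AWS SDK v2 for JavaScript (the log group name and ENI ref are placeholders for your own values):

const AWS = require('aws-sdk');
const logs = new AWS.CloudWatchLogs();

// Look for rejected traffic involving the Lambda's network interface.
logs.filterLogEvents({
  logGroupName: '/vpc/flow-logs', // wherever your flow logs are delivered
  filterPattern: 'eni-0123456789abcdef0 REJECT'
}, function(err, data) {
  if (err) return console.error(err);
  data.events.forEach((e) => console.log(e.message));
});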
Updating this issue: it turns out that the problem was that the database connection was being made within the handler! Due to the asynchronous nature of Lambda and Node, this was the culprit for the intermittent timeouts.
Here's the revised code:
const mysql = require('mysql');
const database = getConnection();

exports.dbWrite = (events, context, callback) => {
  database.connect(function (err) {
    if (err) {
      console.error('error connecting: ' + err.stack);
      return;
    }
    console.log('connected !');
  });

  database.end();
};

function getConnection() {
  let db = mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME
  });

  console.log('Host: ' + process.env.DB_HOST);
  console.log('User: ' + process.env.DB_USER);
  console.log('Database: ' + process.env.DB_NAME);
  console.log('Connecting to ' + process.env.DB_HOST + '...');

  return db;
}
Sorry in advance - using Node.js for the first time..
I have installed Node.js and the npm package manager on a Linux machine. Through npm I installed the mysql module, and now I am trying to test the MySQL connection using the simple code below. The problem is - no output is printed to the console when the MySQL-related code runs!
source:
var mysql = require("mysql");
console.log('1');

var connection = mysql.createConnection({
  host: "127.0.0.1",
  user: "xxx",
  password: "xxxx",
  database: "xxx",
  port: 3306
});
console.log('2');

connection.connect(function(err){
  if(err){
    console.log('Error connecting to Db');
    return;
  }
  console.log('Connection established');
});
console.log('3');

connection.end(function(err) {
  console.log('Connection closed');
});
console.log('4');

process.exit();
The output is just 1234.
So my question is - why are there no messages coming from the MySQL connection? I tried entering incorrect connection details on purpose - no messages were produced.
I also tried running node as sudo, and tried running nodejs index.js instead of node index.js. Oh, and I also tried installing and using the nodejs-mysql module instead of mysql. Nothing seems to work.
You're writing asynchronous code, so you need to wait for those operations to complete before you force kill the process.
What you're essentially saying is "Do this, do this, do this, and get back to me later. Also shut everything down right now."
Remove the process.exit call or move it inside a callback function so it's triggered at the right time:
var connection = mysql.createConnection({
  ...
});
console.log('2');

connection.connect(function(err){
  if(err) {
    console.log('Error connecting to Db');
    return;
  }
  console.log('Connection established');
  console.log('3');

  connection.end(function(err) {
    console.log('Connection closed');
    console.log('4');
    process.exit();
  });
});
This aggressive nesting and re-nesting is why things like promises exist. Bluebird is a great library for implementing and using these and is used by database wrappers like Sequelize to keep your code organized.
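For illustration, here is a rough promise-based version of the same flow using Node's built-in util.promisify (a sketch with placeholder connection details; Bluebird or any other promise library would work similarly):

const mysql = require('mysql');
const util = require('util');

const connection = mysql.createConnection({
  host: '127.0.0.1',
  user: 'xxx',
  password: 'xxxx',
  database: 'xxx'
});

// Promisify the callback-based API so the steps can be chained flatly.
const connect = util.promisify(connection.connect).bind(connection);
const end = util.promisify(connection.end).bind(connection);

connect()
  .then(() => {
    console.log('Connection established');
    return end();
  })
  .then(() => {
    console.log('Connection closed');
    process.exit();
  })
  .catch((err) => {
    console.log('Error connecting to Db', err);
    process.exit(1);
  });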
What does the error mean?
{ [Error: Cannot enqueue Query after fatal error.] code: 'PROTOCOL_ENQUEUE_AFTER_FATAL_ERROR', fatal: false }
This code works in my test file:
function handleDisconnect() {
  // Recreate the connection, since the old one cannot be reused.
  objConn = mysql.createConnection(db_config);

  objConn.connect(function(err) {
    // The server is either down or restarting (takes a while sometimes).
    if (err) {
      console.log('error when connecting to db:', err.code);
      // We introduce a delay before attempting to reconnect, to avoid a hot
      // loop, and to allow our node script to process asynchronous requests
      // in the meantime. If you're also serving http, display a 503 error.
      setTimeout(handleDisconnect, 2000);
    } else {
      console.log('Connected to db!');
    }
  });

  objConn.on('error', function(err) {
    // Connection to the MySQL server is usually lost due to either a server
    // restart or a connection idle timeout.
    if (err.code === 'PROTOCOL_CONNECTION_LOST') {
      handleDisconnect();
    } else {
      throw err;
    }
  });
}

handleDisconnect();
megaLoop();

function megaLoop() {
  objConn.query('SELECT u.`email` FROM `users` as u', function(err, rows) {
    console.log(err);
    console.log(rows);
  });
  setTimeout(megaLoop, 100);
}
But when I use the function in my Express App I get the error:
{ [Error: Cannot enqueue Query after fatal error.] code: 'PROTOCOL_ENQUEUE_AFTER_FATAL_ERROR', fatal: false }
Why does it work in my test and not my app?
I had a similar issue connecting with MySQL. I solved it by using a connection pool. The concept is to get a connection from the pool only when required and release it after use. That way the connection will not be left in an unstable state. You can get implementation details here.
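A minimal sketch of the pool approach with the mysql package (placeholder connection details; pool.query checks out a connection and returns it to the pool automatically when the query completes):

const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 10, // max connections kept by the pool
  host: 'localhost',
  user: 'xxx',
  password: 'xxxx',
  database: 'xxx'
});

// pool.query is shorthand for getConnection + query + release.
pool.query('SELECT u.`email` FROM `users` as u', function(err, rows) {
  if (err) {
    console.error('query failed:', err.code);
    return;
  }
  console.log(rows);
});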
Open MySQL Workbench or the MySQL command line and run this:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
Now the password for your root user is 'password'; use that in your DB connection and it will work.
I believe the issue is in express-sql-session. I'm experiencing the same issue right now. I think someone is on to something in this post: NodeJS running on MAC, and have an error occurred when deploying on centOS
Check this out too: https://github.com/felixge/node-mysql/issues/1166
I had the same issue, only to find out that the DB name in my Node.js config was incorrect.
Open your MySQL command line client and run this command:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_current_password';
Insert your password in place of 'your_current_password', e.g.:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
1. Use the above query in your MySQL Workbench, replacing 'password' with your actual password.
2. Then restart your server.