I have two MySQL servers in a master-master replication setup. How can I implement clustering in Sequelize, so that if one MySQL server stops, all requests go to the other server without restarting the Node server?
Sequelize hasn't implemented a clustering or fallback feature. I found a workaround by overriding the connection manager:
var Sequelize = require("sequelize");
var sequelize = new Sequelize("database", "username", "password", { dialect: "mysql" });

// Override how Sequelize acquires a raw connection.
sequelize.connectionManager.connect = function () {
  return new Promise(function (resolve, reject) {
    // Create your own connection here (against whichever master is
    // currently reachable) and resolve the promise with its instance.
    resolve(connection);
  });
};

// Override how Sequelize releases a raw connection.
sequelize.connectionManager.disconnect = function (connection) {
  if (!connection._protocol._ended) {
    connection.release();
  }
  return Promise.resolve();
};
I can share more code if the above is not enough.
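For completeness, here is a minimal sketch of what the overridden connect could do to fall back between the two masters. It assumes the mysql2 driver and two hypothetical environment variables (PRIMARY_HOST, SECONDARY_HOST), and the resolved connection still has to be whatever object the Sequelize MySQL dialect expects:
const mysql = require("mysql2");

sequelize.connectionManager.connect = function () {
  // Hypothetical host list; adjust to your own master-master pair.
  const hosts = [process.env.PRIMARY_HOST, process.env.SECONDARY_HOST];

  const tryHost = (index) =>
    new Promise((resolve, reject) => {
      const connection = mysql.createConnection({
        host: hosts[index],
        user: process.env.DB_USER,
        password: process.env.DB_PASSWORD,
        database: process.env.DB_NAME,
      });
      connection.connect((err) => {
        if (!err) return resolve(connection);
        // This master is unreachable: fall back to the next one, if any.
        if (index + 1 < hosts.length) return resolve(tryHost(index + 1));
        reject(err);
      });
    });

  return tryHost(0);
};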
I am using an AWS RDS Aurora Serverless database as the data source for an API.
Aurora 2.08.3, compatible with MySQL 5.7.
I have enabled the "Pause after inactivity" feature to "Scale the capacity to 0 ACUs when cluster is idle".
I am finding that when the database is scaled to 0 Aurora Capacity Units (ACUs), the API fails.
Even if I change the API configuration to time out after 40 seconds, it fails if the database was at 0 ACUs when it was called. Calling the API again after some time yields a successful call.
Whether it is connection.connect or connection.query that is in flight, the failure comes with no useful response - the response just never arrives.
I have not been able to detect whether the database needs a moment to scale up, so I can't tell when to delay the call. Logging the connection info to the console, it looks the same whether the database is scaled down or ready for a query.
Is there a way to programmatically check whether an AWS Serverless v2 Aurora MySQL database is scaled to 0? connection.state does not seem to describe this.
I have tried many approaches. This post seemed promising, but it didn't solve my issue.
What I have now is...
const mysql = require('mysql'); // assuming the 'mysql' driver, since connection.state is used below

var connection;

exports.getRecords = async (event) => {
  await openConnection();
  // Simplified connection.query code that works when the database is scaled up
  var q = 'SELECT * FROM databaseOne.tableOne';
  return new Promise((resolve, reject) => {
    connection.query(q, function (err, results) {
      if (err) {
        console.log('q err', err);
        return reject(err);
      }
      console.log(results);
      resolve(results);
    });
  });
};

async function openConnection() {
  connection = mysql.createConnection({
    host: process.env.hostname,
    port: process.env.portslot,
    user: process.env.username,
    password: process.env.userpassword,
  });
  console.log('connection1', connection.state, connection);
  try {
    // connection.connect() takes a callback, so wrap it in a promise to await it
    await new Promise((resolve, reject) => {
      connection.connect((err) => (err ? reject(err) : resolve()));
    });
    console.log('connection2', connection.state, connection);
  } catch (err) {
    console.log('c err', err);
  }
}
Thank you
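One illustrative approach, given that a second call after some time succeeds, is to retry the initial connection while the paused cluster resumes. This is only a sketch assuming the mysql driver; the attempt count and delay are made-up values, not tested limits:
const mysql = require('mysql');

// Retry the initial connection while an idle Aurora Serverless cluster resumes.
async function connectWithRetry(config, attempts = 5, delayMs = 8000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    const connection = mysql.createConnection({ ...config, connectTimeout: 10000 });
    try {
      await new Promise((resolve, reject) => {
        connection.connect((err) => (err ? reject(err) : resolve()));
      });
      return connection; // the cluster is awake and accepting connections
    } catch (err) {
      lastError = err;
      connection.destroy();
      await new Promise((r) => setTimeout(r, delayMs)); // wait for scale-up
    }
  }
  throw lastError;
}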
I have a MySQL table with millions of rows.
For each row I have to apply custom logic and write the modified data to another table.
Using knex.js, I read the data with the stream() function.
Once I have the stream object, I apply my logic in the data event handler.
Everything works correctly, but at a certain point it stops without any error.
I tried pausing the stream before each update on the new table and resuming it after the update completed, but the problem is not solved.
If I put a limit on the query, for example 1000 results, the system works fine.
Sample code:
const readableStream = knex.select('*')
  .from('big_table')
  .stream();

readableStream.on('data', async (data) => {
  readableStream.pause(); // pause the stream while this row is processed
  const toUpdate = applyLogic(data); // sync func
  const whereCond = getWhereCondition(data); // sync func
  try {
    await knex('to_update').where(whereCond).update(toUpdate);
    console.log('UPDATED');
  } catch (e) {
    console.log('ERROR', e);
  }
  readableStream.resume(); // resume the stream whether or not the update succeeded
}).on('finish', () => {
  console.log('FINISH');
}).on('error', (err) => {
  console.log('ERROR', err);
});
Thanks!
I solved it.
The problem is not due to knex.js or streams, but to my development environment.
I use k3d to simulate the production environment on GCP, so to test my script locally I port-forwarded the MySQL service.
It is not clear to me why the system crashes in that setup, but by running my script in a container that connects directly to the MySQL service, the algorithm works as I expect.
Thanks
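For anyone who still hits the stall, an alternative to a single long-lived stream is to page through the table in fixed-size batches, which the question notes already works with a limit of 1000. This is only a sketch and assumes big_table has an auto-increment id column, which the question does not state:
// Process the table in keyset-paginated batches instead of one long-lived stream.
async function processInBatches(batchSize = 1000) {
  let lastId = 0;
  for (;;) {
    const rows = await knex.select('*')
      .from('big_table')
      .where('id', '>', lastId)
      .orderBy('id')
      .limit(batchSize);
    if (rows.length === 0) break; // no more rows to process

    for (const row of rows) {
      const toUpdate = applyLogic(row);         // same sync helpers as above
      const whereCond = getWhereCondition(row);
      await knex('to_update').where(whereCond).update(toUpdate);
    }
    lastId = rows[rows.length - 1].id;
  }
}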
I'm testing with Postman. I'll try to make this as clear as possible; please let me know if it doesn't make sense.
I have a Lambda that uses a MySQL RDS database on AWS, and it works fine locally when accessing that database. After successfully getting a JWT from an auth endpoint, I try to hit the login endpoint and get a 502 Bad Gateway. Using the CloudWatch logs I can trace the failure to right before the login query runs. I've confirmed that my MySQL config is correct and that I have a connection to the database. The Lambda and the database are in the same region (DB: us-east-1f, Lambda: us-east-1).
I've confirmed that the OPTIONS and POST request methods for this endpoint both have CORS enabled in API Gateway. I'm using my serverless.yml to set cors: true on all the endpoints, even though I'm also using app.use(cors()) in my index file.
The error message for the 502 is: {"message": "Internal server error"}
Here is the point of failure in my code:
'use strict';
const mysql = require('./index');

module.exports = {
  loginSql: async (email, password) => {
    // MAKES IT HERE AND THE PARAMS ARE CORRECT
    try {
      console.log('IN TRY %%%%%%%%%%%%%%');
      // SEEMS TO DIE HERE
      const results = await mysql.query({
        sql: `SELECT
                id, first_name, last_name, email
              FROM users
              WHERE email = ?
                AND password = ?`,
        timeout: 50000,
        values: [email, password],
      });
      // NEVER MAKES IT HERE /////////
      console.log('QUERY RAN %%%%%%%%%%%%');
      mysql.end();
      if (results.length < 1) return false;
      return results;
    } catch (error) {
      // DOESN'T THROW ERROR
      console.log('LOGIN DB ERROR', error);
      throw new Error('LOGIN DB ERROR THROWN: ' + error.message);
    }
  },
};
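The ./index module isn't shown, but the mysql.query({ sql, timeout, values }) / mysql.end() pattern matches the serverless-mysql package, so it presumably looks something like the following. This is a guess for context, not the poster's actual file, and the environment variable names are hypothetical:
// index.js - hypothetical reconstruction based on the query()/end() calls above
const mysql = require('serverless-mysql')({
  config: {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
  },
});

module.exports = mysql;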
I just created the exact same use case: a Lambda function (written in Java) querying data from a MySQL RDS instance. It works perfectly.
Here is your issue:
To connect to the RDS instance from a Lambda function, you must set the inbound rules using the same security group as the RDS instance. For details, see the AWS article "How do I configure a Lambda function to connect to an RDS instance?".
I'm pretty new to Node.js, which is probably why I'm asking this question. I recently discovered that database calls made from Node.js are asynchronous.
As a former C# .NET programmer this is a bit of a surprise; I'm used to coding synchronously, where it's fine to wait a little.
Currently I want to make a database call and continue running the code with the returned result. What is the best way to do this? I found something about promises, but I haven't found the proper solution yet.
What I really want is something like this:
var requestLoop = setInterval(function () {
  console.log('Trading bot (re)started..');
  var wlist = [];
  wlist = db_connection.getWatchList_DB(); // ==> Database call here
  if (wlist.length > 0) {
    // Perform the rest of the code
  }
}, 5000); // 300000 = five minutes
So for me it's OK to wait for the database call and then continue with the fetched results. Is there a simple solution for this?
You can try the mysql2 module, which has built-in support for promises. Here is a code snippet from the official documentation:
async function main() {
  // get the client
  const mysql = require('mysql2');
  // create the pool
  const pool = mysql.createPool({ host: 'localhost', user: 'root', database: 'test' });
  // now get a Promise-wrapped instance of that pool
  const promisePool = pool.promise();
  // query the database using promises
  const [rows, fields] = await promisePool.query('SELECT 1');
}
Also, if you are very new to Node and async programming, I would suggest learning about callbacks, promises, and of course async/await.
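Applied to the loop in the question, a sketch of the same idea could look like this. It assumes getWatchList_DB has been rewritten to return a promise, for example by using the promise-wrapped pool above internally:
var requestLoop = setInterval(async function () {
  console.log('Trading bot (re)started..');
  try {
    const wlist = await db_connection.getWatchList_DB(); // wait for the DB call
    if (wlist.length > 0) {
      // Perform the rest of the code with the fetched results
    }
  } catch (err) {
    console.log('watchlist error', err);
  }
}, 5000); // 300000 = five minutes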
First, I should say that I am very new to the wonders of Node.js, SocketStream, AngularJS, and JavaScript in general. I come from a Java background, which might explain my ignorance of the correct way of doing things asynchronously.
To toy around with things I installed the ss-angular-demo from americanyak. My problem now is that the RPC seems to be a synchronous interface, while my call to the MySQL database has an asynchronous interface. How can I return the database results from a call to the RPC?
Here is what I have done so far with SocketStream 0.3:
In app.js I successfully tell ss to expose my MySQL database connection by putting ss.api.add('coolStore', mysqlConn); in the right place (as explained in the SocketStream docs). I use the mysql npm package, so I can call MySQL within the RPC.
server/rpc/coolRpc.js
exports.actions = function (req, res, ss) {
  // use session middleware
  req.use('session');
  return {
    get: function (threshold) {
      var sql = "SELECT cool.id, cool.score, cool.data FROM cool WHERE cool.score > " + threshold;
      if (!ss.arbStore) {
        console.log("connecting to mysql arb data store");
        ss.coolStore = ss.coolStore.connect();
      }
      ss.coolStore.query(sql, function (err, rows, fields) {
        if (err) {
          console.log("error fetching stuff", err);
        } else {
          console.log("first row = " + rows[0].id);
        }
      });
      var db_rows = ???
      return res(null, db_rows || []);
    }
  };
};
The console logs the id of my database entry, as expected. However, I am clueless about how to make the RPC's return statement return the rows of my query. What is the right way to address this sort of problem?
Thanks for your help. Please be friendly with me, because this is also my first question on Stack Overflow.
It's not synchronous. When your results are ready, you can send them back:
exports.actions = function (req, res, ss) {
  // use session middleware
  req.use('session');
  return {
    get: function (threshold) {
      ...
      ss.coolStore.query(sql, function (err, rows, fields) {
        res(err, rows || []);
      });
    }
  };
};
You need to make sure that you always call res(...) from an RPC function, even when an error occurs, otherwise you might get dangling requests (where the client code keeps waiting for a response that's never generated). In the code above, the error is forwarded to the client so it can be handled there.
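One further note, not part of the original answer: the query in the question concatenates threshold directly into the SQL string, which is open to SQL injection. The mysql package supports placeholders, so the same handler could be written as follows (a suggested variation, not the accepted code):
exports.actions = function (req, res, ss) {
  req.use('session');
  return {
    get: function (threshold) {
      // A ? placeholder instead of string concatenation avoids SQL injection.
      var sql = "SELECT cool.id, cool.score, cool.data FROM cool WHERE cool.score > ?";
      ss.coolStore.query(sql, [threshold], function (err, rows, fields) {
        res(err, rows || []);
      });
    }
  };
};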