I'm struggling a bit with how to send several SQL update queries to the MySQL server from NodeJS. I need some kind of synchronous execution.
Situation:
I have an array of objects (let's say "formValues").
Some of these objects might have a corresponding row in a database table, others don't.
I need to do a MySQL query for each object in the array to find out for which objects a new row needs to be created and which only need to be updated.
At the end I need to update / create the rows in the MySQL table.
I need a single callback for the whole process.
This is more or less a general question on how to solve situations like this with NodeJS. As I understand it, MySQL queries are executed asynchronously.
How can I loop through the array to build one list of entries that need to be updated and another of entries that need to be created in the MySQL table?
Is there any "synchronous" MySQL query?
Regards
Jens
You can probably really benefit from switching to an async/await MySQL client. Chances are that the client you are using already has support for this.
But even if yours doesn't and you can't switch, it's still by far the easiest to write a helper function that converts your callback-based mysql client to a promise-based one.
Effectively the approach becomes:
for (const formValue of formValues) {
  await mysql.query('....');
}
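If your client only exposes the callback style, a minimal sketch of such a helper might look like this, assuming a node-mysql-style connection object (util.promisify is built into Node; the saveFormValues name is just for illustration):

const util = require('util');

// Wrap the callback-based query() so it returns a Promise instead.
// "connection" is assumed to be your existing mysql connection object.
const query = util.promisify(connection.query).bind(connection);

async function saveFormValues(formValues) {
  for (const formValue of formValues) {
    // Each query finishes before the next one starts.
    await query('....');
  }
}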
Doing this without async/await and with callbacks is significantly harder.
I imagine one approach might be something like:
function doQueriesForFormValue(formValues, cb) {
  const queries = [];
  for (const formValue of formValues) {
    queries.push('...');
  }
  runQueries(queries, cb);
}

function runQueries(queries, cb) {
  if (queries.length === 0) {
    // Nothing to run; call the final callback straight away.
    return cb(null, true);
  }
  // Grab the first query
  const currentQuery = queries[0];
  mysql.query(currentQuery, function (err, res) {
    if (err) {
      return cb(err);
    }
    if (queries.length > 1) {
      // Do the next query
      runQueries(queries.slice(1), cb);
    } else {
      // Call the final callback
      cb(null, true);
    }
  });
}
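And a hypothetical call site, so the whole batch resolves through one final callback (formValues is the array from the question):

doQueriesForFormValue(formValues, function (err) {
  if (err) {
    return console.error(err);
  }
  console.log('All rows created/updated');
});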
I wrote an article with some effective patterns for working with MySQL in Node.js. Perhaps it's helpful.
Related
So I'm currently fetching some remote JSON data through an async call. By using a completion handler I'm able to use the data outside the call.
But now I'm wondering how I could process this data the right way.
For example, I have different functions that rely on this data. Every time I need the data, I could request it inside these functions. E.g.:
func generateRandomItem() {
    DataManager().requestData() { (result: [Model]) in
        // Generate Random item from the results of call here
    }
}

func listAlphabetically() {
    DataManager().requestData() { (result: [Model]) in
        // List all data alphabetically from the results of the call here
    }
}
But I think this approach wastes API calls, since the data will only change once a week.
Another approach I was thinking about was calling this function on launch of the application and storing everything in a global var. This var can then be used in every function whenever I need it. E.g.:
var allItems = [Model]()

func getAllItems() {
    DataManager().requestData() { (result: [Model]) in
        self.allItems = result
    }
}
The disadvantage of this is that the data is only pulled in once, at app launch, and never updated afterwards.
So this isn't so much a technical question as a fundamental one. I'm pretty new to Swift and looking for the best design patterns. If my completion handler on the async call is also the wrong approach, I would like to know how I could improve it!
I need to update or create data in a MySQL table from a large array (a few thousand objects) with Sequelize.
When I run the following code it uses up almost all the CPU power of my DB server (a vServer with 2 GB RAM / 2 CPUs) and clogs my app for a few minutes until it's done.
Is there a better way to do this with Sequelize? Can this be done in the background somehow, or as a bulk operation, so it doesn't affect my app's performance?
data.forEach(function(item) {
  var query = {
    'itemId': item.id,
    'networkId': item.networkId
  };
  db.model.findOne({
    where: query
  }).then(function(storedItem) {
    try {
      if(!!storedItem) {
        storedItem.update(item);
      } else {
        db.model.create(item);
      }
    } catch(e) {
      console.log(e);
    }
  });
});
The first line of your sample code, data.forEach()..., makes a whole mess of calls to your function(item){}. Your code in that function fires off, in turn, a whole mess of asynchronously completing operations.
Try using the async package https://caolan.github.io/async/docs.htm and doing this:
const async = require('async');
...
async.mapSeries(data, function (item, done) {
  // run this item's queries, then call done(err, result)
});
It should allow each iteration of your function (which iterates once per item in your data array) to complete before starting the next one. Paradoxically enough, doing them one at a time will probably make them finish faster. It will certainly avoid soaking up your resources.
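For illustration, a rough sketch of what the question's loop might look like with mapSeries (same data and db.model as above; the final callback fires once every item has been processed):

const async = require('async');

async.mapSeries(data, function (item, done) {
  db.model.findOne({
    where: { itemId: item.id, networkId: item.networkId }
  }).then(function (storedItem) {
    // Update the existing row, or create a new one.
    return storedItem ? storedItem.update(item) : db.model.create(item);
  }).then(function (result) {
    done(null, result);
  }).catch(done);
}, function (err, results) {
  if (err) {
    console.log(err);
  }
  // All items have been processed, one at a time.
});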
Weeks later I found the actual reason for this. (Unfortunately, using async didn't really help after all.) It was as simple as it was stupid: I didn't have a MySQL index on itemId, so every iteration scanned the whole table, which caused the high CPU load (obviously).
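For reference, one way such an index could be declared on a Sequelize model (the model and column names here are assumed, based on the question's query object):

// Hypothetical model definition; the "indexes" option makes Sequelize
// create an index on itemId when the table is synced.
const Item = sequelize.define('item', {
  itemId: Sequelize.INTEGER,
  networkId: Sequelize.INTEGER
}, {
  indexes: [
    { fields: ['itemId'] }
  ]
});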
What's the lifetime of a Web SQL transaction, or, if it's dynamic, what does it depend on?
From my experience, opening a new transaction takes a considerable amount of time, so I was trying to keep the transaction open for as long as possible.
I also wanted to keep the code clean, so I was trying to separate the JS into abstract functions and pass a transaction as a parameter - something I'm sure is not good practice, but it sometimes greatly improves performance when it works.
As an example:
db.transaction(function (tx) {
  // First question: how many tx.executeSql
  // calls are allowed within one transaction?
  tx.executeSql('[some query]');
  tx.executeSql('[some other query]', [], function (tx, results) {
    // Do something with results
  });

  // Second question: passing the transaction
  // works some times, but not others. Is this
  // allowed by the spec, good practice, and/or
  // limited by any external factors?
  otherFunction(tx, 'some parameter');
});

function otherFunction(tx, param) {
  tx.executeSql('[some query]');
}
Also, any suggestions on techniques for speedy access to the Web SQL database would be welcome as well.
I have a method which is called frequently from multiple threads. It involves writing to disk using await FileIO.WriteTextAsync. This works fine when it is called from a single thread, but once I start doing this in multiple threads, I get this error:
The process cannot access the file because it is being used by another process.
I know what the error means, but I'm not sure how to work around it. Normally, I would use a lock(object) statement to ensure that the file is accessed by only one thread at a time. However, this is an asynchronous method, and as such I can't use the await operator in the body of a lock(object) statement.
Please advise on how to handle this scenario.
You can use SemaphoreSlim to act as an async-compatible lock:
SemaphoreSlim _mutex = new SemaphoreSlim(1);
async Task MyMethodAsync()
{
    await _mutex.WaitAsync();
    try
    {
        ...
    }
    finally
    {
        _mutex.Release();
    }
}
Personally, I don't like the finally, so I usually write my own IDisposable to release the mutex when disposed, and my code can look like this:
async Task MyMethodAsync()
{
    // LockAsync is an extension method returning my custom IDisposable
    using (await _mutex.LockAsync())
    {
        ...
    }
}
Previously I was a PHP developer, so this question might seem stupid to some of you.
I am using MySQL with Node.js.
client.query('SELECT * FROM users where id="1"', function selectCb(err, results, fields) {
  req.body.currentuser = results;
});
console.log(req.body.currentuser);
I tried to assign the result set (results) to a variable (req.body.currentuser) to use it outside the function, but it is not working.
Can you please let me know a way around it?
The query call is asynchronous. Hence selectCb is executed at a later point than your console.log call. If you put the console.log call into selectCb, it'll work.
In general, you want to call everything that depends on the results of the query from the selectCb callback. It's one of the basic architectural principles of Node.js.
The client.query call, like nearly everything in node.js, is asynchronous. This means that the method just initiates a request, but execution continues. So when it gets to the console.log, nothing has been defined in req.body.currentuser yet.
You can see if you move the console log inside the callback, it will work:
client.query('SELECT * FROM users where id="1"', function selectCb(err, results, fields) {
  req.body.currentuser = results;
  console.log(req.body.currentuser);
});
So you need to structure your code around this requirement. Event-driven functional programming (which is what this is) can be difficult to wrap your head around at first. But once you get it, it makes a lot of sense.
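For illustration, one minimal way to structure it: wrap the query in a function that takes its own callback, and put everything that needs the results inside that callback (getCurrentUser is a hypothetical helper; client and req come from the question, and the ? placeholder is the parameterized query form node-mysql supports):

function getCurrentUser(id, done) {
  client.query('SELECT * FROM users WHERE id = ?', [id], function (err, results) {
    if (err) return done(err);
    done(null, results);
  });
}

// Everything that depends on the results goes inside the callback:
getCurrentUser(1, function (err, user) {
  if (err) return console.error(err);
  req.body.currentuser = user;
  console.log(req.body.currentuser);
});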