My server keeps track of game instances. If there are no ongoing games when a user hits a certain endpoint, the server creates a new one. If the endpoint is hit twice at the same time, I want to make sure only one new game is created. I'm attempting to do this via Sequelize's transactions:
const t = await sequelize.transaction({
  isolationLevel: Sequelize.Transaction.ISOLATION_LEVELS.SERIALIZABLE,
});
let game = await Game.findOne({
  where: { status: { [Op.ne]: "COMPLETED" } },
  transaction: t,
});
if (game) {
  // ...
} else {
  game = await Game.create({}, {
    transaction: t,
  });
  // ...
}
await t.commit();
Unfortunately, when this endpoint is hit twice at the same time, I get the following error: SequelizeDatabaseError: Deadlock found when trying to get lock; try restarting transaction.
I looked at possible solutions here and here, and I understand why my code throws the error, but I don't understand how to accomplish what I'm trying to do (or whether transactions are the correct tool to accomplish it). Any direction would be appreciated!
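For illustration only (not part of the original question): the MySQL error text itself suggests restarting the transaction, so one common pattern is to wrap the whole find-or-create in a managed transaction and retry when a deadlock is reported. A minimal sketch, assuming the same Game model and that the deadlock can be recognized from the error message quoted above (findOrCreateGame and MAX_RETRIES are illustrative names):
// Sketch: retry the serializable find-or-create a few times when MySQL
// reports a deadlock.
const MAX_RETRIES = 3;

async function findOrCreateGame(attempt = 0) {
  try {
    // Managed transaction: commits on return, rolls back on throw.
    return await sequelize.transaction(
      { isolationLevel: Sequelize.Transaction.ISOLATION_LEVELS.SERIALIZABLE },
      async (t) => {
        let game = await Game.findOne({
          where: { status: { [Op.ne]: "COMPLETED" } },
          transaction: t,
        });
        if (!game) {
          game = await Game.create({}, { transaction: t });
        }
        return game;
      }
    );
  } catch (err) {
    // Crude check based on the error text quoted above; adjust as needed.
    if (attempt < MAX_RETRIES && /Deadlock found/.test(err.message)) {
      return findOrCreateGame(attempt + 1);
    }
    throw err;
  }
}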
I have a MySQL table with millions of rows.
For each row I have to apply custom logic and write the modified data to another table.
Using knex.js I run the query to read the data with the stream() function.
Once I get the stream object, I apply my logic in the data event handler.
Everything works correctly, but at a certain point it stops without giving any errors.
I tried pausing the stream before each update to the new table and resuming it after the update completes, but the problem is not solved.
If I put a limit on the query, for example 1000 results, the system works fine.
Sample code:
const readableStream = knex.select('*')
  .from('big_table')
  .stream();

readableStream.on('data', async (data) => {
  readableStream.pause() // pause stream
  const toUpdate = applyLogic(data) // sync func
  const whereCond = getWhereCondition(data) // sync func
  try {
    await knex('to_update').where(whereCond).update(toUpdate)
    console.log('UPDATED')
  } catch (e) {
    console.log('ERROR', e)
  }
  readableStream.resume() // resume stream (success or error)
}).on('finish', () => {
  console.log('FINISH')
}).on('error', (err) => {
  console.log('ERROR', err)
})
Thanks!
I solved it.
The problem is not due to knex.js or the streams, but to my development environment.
I use k3d to simulate the production environment on GCP, so to test my script locally I did a port-forward of the MySQL service.
It is not clear to me why it stalls, but by running my script in a container that connects to the MySQL service directly, the algorithm works as I expect.
Thanks
I am currently building a data connector but would like to throw an error to the user if the date range they have provided is not supported by my API endpoint (we don't have data for more than 90 days). I looked through the documentation and found this: https://developers.google.com/datastudio/connector/error-handling#user-facing-errors
I copied the code example exactly and tried to run it, but my project still isn't showing the error dialog box to the user.
I've also taken a look at how other people implement this in this repository (https://github.com/googledatastudio/community-connectors) but still can't see an issue with what I wrote.
function getData(request) {
  try {
    var dataSchema = getDataSchema(request);
    var data = lookupRequestData(request, dataSchema);
  } catch (e) {
    console.log('pre throw');
    // throw Error('some error!');
    cc.newUserError()
      .setDebugText('Error fetching data from API. Exception details: ' + e)
      .setText('There was an error communicating with the service. Try again later, or file an issue if this error persists.')
      .throwException();
    console.log('post throw');
  }
  return {
    schema: dataSchema,
    rows: data
  };
}
I can see both the pre throw and post throw strings in my log but there is still no error message being displayed. Just wondering if someone might be able to offer a bit of advice for other things to try.
Thanks
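Not part of the original post, but for reference, the documented way to raise this kind of user-facing error is through a CommunityConnector instance obtained from DataStudioApp. A minimal sketch of the 90-day check the poster describes, assuming cc is created that way and that the getData request carries the standard dateRange field (validateDateRange is an illustrative name):
// Sketch only: reject date ranges longer than 90 days before fetching data.
var cc = DataStudioApp.createCommunityConnector();

function validateDateRange(request) {
  var MS_PER_DAY = 24 * 60 * 60 * 1000;
  var start = new Date(request.dateRange.startDate);
  var end = new Date(request.dateRange.endDate);
  if ((end - start) / MS_PER_DAY > 90) {
    cc.newUserError()
      .setDebugText('Unsupported range: ' + request.dateRange.startDate +
                    ' to ' + request.dateRange.endDate)
      .setText('This connector only supports date ranges of up to 90 days.')
      .throwException();
  }
}

// In getData(request), call validateDateRange(request) before fetching.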
Some Knex errors log the file and line in which they occur, but many DO NOT. This makes debugging unnecessarily tedious. Is .catch((err)=>{console.log(err)}) supposed to take care of this?
Also, why does the code try to repeat around 4 times? I want it to try once and stop; there is absolutely no need for more attempts, ever - it only messes things up when further entries are made to the database.
Some Knex errors log the file and line in which they occur, but many DO NOT
Can you give us some examples of the queries that silence the error?
I'm a heavy Knex user, and during my development almost all errors show which file and line they occurred on, except in two kinds of situations:
a query in a transaction which may complete early.
In this situation, we have to customize knex's inner catch logic by injecting into internals such as Runner.prototype.query, identify the transactionEarlyCompletedError, and log more info (sql and bindings) in the catch clause; a rough sketch of this kind of patch follows below.
a pool connection error,
such as the MySQL error: Knex:Error Pool2 - Error: Pool.release(): Resource not member of pool
This is a different question and depends on your database environment and connection package.
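Not part of the original answer, but a rough illustration of the kind of injection meant above. Runner is knex's internal API, so the require path and method shape vary between knex versions; treat this as a sketch only:
// Sketch: wrap knex's internal Runner.prototype.query so failed queries also
// log their sql and bindings. Internal API; the path below is version-dependent.
const Runner = require('knex/lib/runner');

const originalQuery = Runner.prototype.query;
Runner.prototype.query = function (obj) {
  return originalQuery.call(this, obj).catch((err) => {
    // assumption: obj carries the compiled statement in this knex version
    console.error('knex query failed:', obj && obj.sql, obj && obj.bindings);
    throw err;
  });
};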
The fact that code tries to repeat around 4 times
If your repeating code is written as a Promise chain, I don't think it will throw 4 times; it should blow up at the first throw.
query1
  .then(query2)
  .then(query3)
  .then(query4)
  .catch(err => {})
concurrently executed queries
If any promise in the array is rejected, or any promise returned by the mapper function is rejected, the returned promise is rejected as well.
Promise.map(queries, (query) => {
  return query.execute()
    .then()
    .catch((err) => {
      return err;
    });
}, { concurrency: 4 })
  .catch((err) => {
    // handle error here
  })
if you use try catch and async await
It still would not repeat 4 times if you already know the error type. Meanwhile, if you don't know what error will be thrown, why not execute it only once to find out?
async function repeatInsert(retryTimes = 0) {
  try {
    await knex.insert().into();
  } catch (err) {
    // rethrow known errors instead of retrying
    if (err.isKnown) {
      throw err;
    }
    // otherwise retry, up to 4 attempts in total
    if (retryTimes < 4) {
      return await repeatInsert(retryTimes + 1);
    }
  }
}
TLDR: After writing a JSON document (successfully) to my Firestore, the next request gives me Internal Server Error (500). I have a suspicion that the problem is that the insert is not yet complete.
So basically, I have this code:
const jsonToDb = express();

exports.jsondb = functions.region('europe-west1').https.onRequest(jsonToDb);

jsonToDb.post('', (req, res) => {
  let doc;
  try {
    doc = JSON.parse(req.body);
  } catch (error) {
    res.status(400).send(error.toString()).end();
    return;
  }
  myDbFuncs.saveMyDoc(doc);
  res.status(201).send("OK").end();
});
The database functions are in another JS file.
module.exports.saveMyDoc = function (myDoc) {
  let newDoc = db.collection('insertedDocs').doc(new Date().toISOString());
  newDoc.set(myDoc).then().catch();
  return;
};
So I have several theories; maybe one of them is not wrong, but please help me with this. (Also, if I made some mistakes in this little snippet, just tell me.)
Reproduction:
I send the first request => everything is OK, the JSON is in the database.
I send a second request after the first one gives me an OK status => it does not do anything for a few seconds, then 500: Internal Server Error.
Logs: Function execution took 4345 ms, finished with status: 'connection error'.
I just don't understand. Let's imagine I'm using this as an API, with several requests arriving simultaneously. Can't it handle that? (I suppose it can; I'm just doing something stupid.) I'm deliberately sending the second request after the first has finished, and this still occurs.
Should I make saveMyDoc async?
saveMyDoc isn't returning a promise that resolves when all the async work is complete. If you lose track of a promise, Cloud Functions will shut down the work and clean up before the work is complete, making it look like it simply doesn't work. You should only send a response from an HTTP type function after all the work is fully complete.
Minimally, it should look more like this:
module.exports.saveMyDoc = function (myDoc) {
  let newDoc = db.collection('insertedDocs').doc(new Date().toISOString());
  return newDoc.set(myDoc);
};
Then you would use the promise in your main function:
myDbFuncs.saveMyDoc(doc).then(() => {
  res.status(201).send("OK").end();
});
See how the response is only sent after the data is saved.
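Not from the original answer, but the same idea can be written with async/await; a minimal sketch, assuming the Express app and myDbFuncs module from the question:
// Sketch: respond only after the Firestore write settles, and surface failures.
jsonToDb.post('', async (req, res) => {
  let doc;
  try {
    doc = JSON.parse(req.body);
  } catch (error) {
    res.status(400).send(error.toString());
    return;
  }
  try {
    await myDbFuncs.saveMyDoc(doc); // now returns the set() promise
    res.status(201).send("OK");
  } catch (error) {
    res.status(500).send(error.toString());
  }
});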
Read more about async programming in Cloud Functions in the documentation. Also watch this video series that talks about working with promises in Cloud Functions.
I currently use Knex.js (knexjs.org) with promises instead of regular callbacks, and a connection pool for my SQL queries. At first it ran smoothly, but now I usually face pool connection errors. The code looks something like this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  })
But now I usually get connection timeout and pool connection errors from it. My first thought was that the error occurs because I haven't released the connection, but I have code like this:
knex('user_detail')
  .select('id', 'full_name', 'phone', 'email')
  .where('id', id_user)
  .then((result) => {
    resolve(result);
  })
  .catch((error) => {
    reject(error);
  })
  .finally(() => {
    knex.destroy()
  })
It works on the first try, but fails on the second try with the error There is no pool defined on the current client and sometimes the error The pool is probably full.
Can someone explain to me what's going on and how I can solve it? Thanks.
There is not enough information in the question to tell why you are running out of pool connections in the first place.
The way you are calling resolve() and reject() gives a hunch that you are using promises inefficiently or completely wrong...
If you add a complete code example showing how you get the "pool is probably full" error, I can edit the answer and help more. For example, accidentally creating multiple transactions that are never resolved will fill up the pool.
In the second code example you are calling knex.destroy(), which doesn't destroy a single pool connection, but completely destroys the knex instance and the pool you are using.
So after knex.destroy() you won't be able to use that knex instance anymore, and you have to create a completely new instance by passing the database connection configuration again.
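As an aside (my reading of the resolve()/reject() calls above, not something stated in the question): if the query is being wrapped in a new Promise just to call resolve and reject, it is usually cleaner to return the knex chain directly, since it is already then-able. A sketch, with getUserDetail as a made-up helper name:
// Sketch: return the knex query instead of wrapping it in new Promise.
function getUserDetail(id_user) {
  return knex('user_detail')
    .select('id', 'full_name', 'phone', 'email')
    .where('id', id_user);
}

// usage: getUserDetail(42).then(rows => ...).catch(err => ...)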
If you use knex.transaction as shown below, you don't need to handle the connection yourself: it automatically commits and releases the connection back to the pool on return, and rolls back on a thrown error.
const resultsAfterTransactionIsComplete = await knex.transaction(async trx => {
  const result = await trx('insert-table').insert(req.list).returning('*');
  // insert logs in the same transaction
  const logEntries = result.map(o => ({ event_id: 1, resource: o.id }));
  await trx('log-table').insert(logEntries);
  // returning from the transaction handler automatically commits and frees the connection back to the pool
  return result;
});