What's the lifetime of a Web SQL transaction, or, if it's dynamic, what does it depend on?
In my experience, opening a new transaction takes a considerable amount of time, so I was trying to keep the transaction open for as long as possible.
I also wanted to keep the code clean, so I was trying to separate the JS into abstract functions and to pass a transaction as a parameter - something I'm sure is not good practice, but it sometimes greatly improves performance when it works.
As an example:
db.transaction(function (tx) {
    // First question: how many tx.executeSql
    // calls are allowed within one transaction?
    tx.executeSql('[some query]');
    tx.executeSql('[some other query]', [], function (tx, results) {
        // Do something with results
    });

    // Second question: passing the transaction
    // works sometimes, but not others. Is this
    // allowed by the spec, good practice, and/or
    // limited by any external factors?
    otherFunction(tx, 'some parameter');
});

function otherFunction(tx, param) {
    tx.executeSql('[some query]');
}
Any suggestions on techniques for speedy access to the Web SQL database would also be welcome.
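For context, the kind of batching I have in mind looks roughly like this - one transaction reused for many statements (the items array and the table/column names below are just placeholders):

// Rough sketch: batch many statements inside a single transaction.
// The items array and the table/column names are placeholders.
db.transaction(function (tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS items (id INTEGER, name TEXT)');
    items.forEach(function (item) {
        tx.executeSql('INSERT INTO items (id, name) VALUES (?, ?)',
                      [item.id, item.name]);
    });
}, function (error) {
    console.log('Transaction failed: ' + error.message);
});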
I am most likely overcomplicating this. I am fairly confident with MySQL but have never used transactions before. I know the concept is begin(), do stuff, commit() or rollback() on a failure, and I am pretty sure I can structure that with ease.
What I want to find out is: during a transaction, can I update a table and then use that updated value in another query within the same transaction? Here's an outline:
begin()
INSERT
SELECT FROM INSERT
UPDATE BASED ON SELECT
commit()
Obviously I have slimmed down the code here, and on its own this means nothing. I would like to know whether this concept works before I go too deep into transactions and find out it doesn't.
My actual transaction is going to be about five times larger, and parts of it rely on other parts of the unfinished transaction, as above.
I am using Laravel, so my code uses DB::beginTransaction(), DB::commit() and DB::rollback(), if this makes any difference to the question.
So, the simple answer to this is yes.
I ran through a number of tests with the code I had been developing, and this is the state of things, at least in my case.
DB::beginTransaction()
Now we do our thing. ALL STATEMENTS HERE ARE RELATIVE TO EACH OTHER AND PERFORMED IN ORDER.
DB::commit()
This then commits all the statements, in order, as long as the run-through was a success.
DB::rollback()
This is called in ALL methods where changes could potentially fail, inside a try/catch block.
So let me provide a working (simplified) Laravel example to demonstrate the basics:
public function store(Request $request)
{
    DB::beginTransaction();

    try
    {
        if (!$this->setAuthorisation($request))
            throw new Exception('Failed to set authorisation');

        DB::commit();

        return response()->json(['status' => 'OK'], 200);
    }
    catch (Exception $exception)
    {
        DB::rollBack();

        return response()->json(['status' => 'Failed', 'error' => $exception->getMessage()], 500);
    }
}
Now from here we can access the changes made in the setAuthorisation() method, say in a getAuthorisation() method, and then pass the data from $data = getAuthorisation() to another piece of code, for example setData($data).
The entire transaction runs its statements in order before committing, so it "applies" the changes before "actually applying" them. It's really hard to explain any more than that, so I hope this answers my own question.
I need to update or create data in a MySQL table from a large array (a few thousand objects) with Sequelize.
When I run the following code it uses up almost all the CPU of my DB server (vserver, 2 GB RAM / 2 CPUs) and clogs my app for a few minutes until it's done.
Is there a better way to do this with Sequelize? Can this be done in the background somehow, or as a bulk operation, so it doesn't affect my app's performance?
data.forEach(function(item) {
    var query = {
        'itemId': item.id,
        'networkId': item.networkId
    };

    db.model.findOne({
        where: query
    }).then(function(storedItem) {
        try {
            if (!!storedItem) {
                storedItem.update(item);
            } else {
                db.model.create(item);
            }
        } catch (e) {
            console.log(e);
        }
    });
});
The first line of your sample code, data.forEach(...), makes a whole mess of calls to your function(item) {}. The code in that function fires off, in turn, a whole mess of asynchronously completing operations.
Try using the async package (https://caolan.github.io/async/docs.htm) and doing this:
var async = require('async');
...
async.mapSeries(data, function (item, callback) {
    // process one item here, then call callback(err, result) when done
});
It should allow each iteration of your function (which iterates once per item in your data array) to complete before starting the next one. Paradoxically enough, doing them one at a time will probably make them finish faster. It will certainly avoid soaking up your resources.
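For illustration, the original loop could be rewritten roughly like this (just a sketch; it reuses the model and field names from the question and assumes the promise-returning Sequelize API):

var async = require('async');

// Process the items one at a time so only one query chain is in
// flight at any moment. Model and field names follow the question.
async.mapSeries(data, function (item, done) {
    db.model.findOne({
        where: { itemId: item.id, networkId: item.networkId }
    }).then(function (storedItem) {
        // update the existing row, or create a new one
        return storedItem ? storedItem.update(item) : db.model.create(item);
    }).then(function (result) {
        done(null, result);   // tell async this item is finished
    }).catch(done);           // forward any error to the final callback
}, function (err) {
    if (err) console.log(err);
});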
Weeks later I found the actual reason for this. (And unfortunately using async didn't really help after all.) It was as simple as it was stupid: I didn't have a MySQL index on itemId, so every iteration queried the whole table, which caused the high CPU load (obviously).
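For reference, such an index can also be declared on the Sequelize model so it gets created with the table. This is only a sketch, with assumed model and column names:

// Sketch: a composite index covering the columns used in the lookup,
// declared in the model definition (names here are assumptions, not
// taken from the original code).
var Item = sequelize.define('item', {
    itemId:    Sequelize.INTEGER,
    networkId: Sequelize.INTEGER
}, {
    indexes: [
        { fields: ['itemId', 'networkId'] }
    ]
});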
I have a situation where I need to perform dependent asynchronous operations. For example: check the database for data; if there is data, perform a database write (insert/update); if not, continue without doing anything. I have written myself a promise-based database API using promise-as3. Any database operation returns a promise that is resolved with the data of a read query, or with the Result object(s) of a write query. I do the following to nest promises and create one point of resolution or rejection for the entire 'initialize' operation.
public function initializeTable():Promise
{
    var dfd:Deferred = new Deferred();
    select("SELECT * FROM table").then(tableDefaults).then(resolveDeferred(dfd)).otherwise(errorHandler(dfd));
    return dfd.promise;
}

public function tableDefaults(data:Array):Promise
{
    if (!data || !data.length)
    {
        // defaultParams is an Object of table default fields/values.
        return insert("table", defaultParams);
    }
    else
    {
        var resolved:Deferred = new Deferred();
        resolved.resolve(null);
        return resolved.promise;
    }
}

public function resolveDeferred(deferred:Deferred):Function
{
    return function resolver(value:* = null):void
    {
        deferred.resolve(value);
    };
}

public function rejectDeferred(deferred:Deferred):Function
{
    return function rejector(reason:* = null):void
    {
        deferred.reject(reason);
    };
}
My main questions:
Are there any performance issues that will arise from this? Memory leaks, etc.? I've read that function variables perform poorly, but I don't see another way to nest operations so logically.
Would it be better to have, say, a global resolved instance that is created and resolved only once but returned whenever we need an 'empty' promise?
EDIT:
I'm removing question 3 (Is there a better way to do this??), as it seems to be leading to opinions on the nature of promises in asynchronous programming. I meant better in the scope of promises, not asynchronicity in general. Assume you have to use this promise based API for the sake of the question.
I usually don't write these kinds of opinion-based answers, but here it's pretty important. Promises in AS3 = THE ROOT OF ALL EVIL :) And I'll explain why.
First, as BotMaster said - it's weakly typed. What this means is that you aren't using AS3 properly, and the only reason this is even possible is backwards compatibility. The truth is that Adobe has put enormous effort into turning AS3 into a strongly typed OOP language. Don't stray away from that.
The second point is that Promises were created, in the first place, so that poor developers could actually get some work done in JavaScript. This is not a new design pattern or anything; it has no real benefits if you know how to structure your code properly. The thing Promises help with the most is avoiding the so-called wall of hell (deeply nested callbacks). But there are other ways to fix this in a natural manner (the very basic one being not to write functions within functions, but on the same level, and simply check the passed result).
The most important thing here is the nature of Promises. Very few people know what they actually do behind the scenes. Because of the nature of JavaScript (and ECMAScript in general), there is no real way to tell whether a function completed properly or not. If you return false / null / undefined, they are all regular return values. The only way a function can actually say "this operation failed" is by throwing an error. So every promisified method can potentially throw an error, and each error must be handled, otherwise your code can stop working properly. What this means is that every single action inside a Promise is wrapped in a try/catch block! Every time you do absolutely basic stuff, you wrap it in try/catch. Even this block of yours:
else
{
    var resolved:Deferred = new Deferred();
    resolved.resolve(null);
    return resolved.promise;
}
In a "regular" way, you would simply use else { return null }. But now, you create tons of objects, resolvers, rejectors, and finally - you try-catch this block.
I cannot stress more on this, but I think you are getting the point. Try-catch is extremely slow! I understand that this is not a big problem in such a simple case like the one I just mentioned, but imagine you are doing it more and on more heavy methods. You are just doing extremely slow operations, for what? Because you can write lame code and just enjoy it..
The last thing to say - there are plenty of ways to use asynchronous operations and make them work one after another. Just by googling as3 function queue I found a few. Not to say that the event-based system is so flexible, and there are even alternatives to it (using callbacks). You've got it all in your hands, and you turn to something that is created because lacking proper ways to do it otherwise.
So my sincere advise as a person worked with Flash for a decade, doing casino games in big teams, would be - don't ever try using promises in AS3. Good luck!
var dfd:Deferred = new Deferred();
select("SELECT * FROM table").then(tableDefaults).then(resolveDeferred(dfd)).otherwise(errorHandler(dfd));
return dfd.promise;
This is The Forgotten Promise antipattern. It can instead be written as:
return select("SELECT * FROM table").then(tableDefaults);
This removes the need for the resolveDeferred and rejectDeferred functions.
var resolved:Deferred = new Deferred();
resolved.resolve(null);
return resolved.promise;
I would either extract this to another function, or use Promise.when(null). A global instance wouldn't work, because it would mean that the result handlers from one call could be called for a different one.
Previously I was a PHP developer, so this question might seem stupid to some of you.
I am using MySQL with Node.js.
client.query('SELECT * FROM users where id="1"', function selectCb(err, results, fields) {
    req.body.currentuser = results;
});

console.log(req.body.currentuser);
I tried to assign the result set (results) to a variable (req.body.currentuser) to use it outside the function, but it is not working.
Can you please let me know a way around it?
The query call is asynchronous. Hence selectCb is executed at a later point than your console.log call. If you put the console.log call into selectCb, it'll work.
In general, you want to call everything that depends on the results of the query from the selectCb callback. It's one of the basic architectural principles of Node.js.
The client.query call, like nearly everything in node.js, is asynchronous. This means that the method just initiates a request, but execution continues. So when it gets to the console.log, nothing has been defined in req.body.currentuser yet.
You can see that if you move the console.log inside the callback, it will work:
client.query('SELECT * FROM users where id="1"', function selectCb(err, results, fields) {
    req.body.currentuser = results;
    console.log(req.body.currentuser);
});
So you need to structure your code around this requirement. Event-driven functional programming (which is what this is) can be difficult to wrap your head around at first. But once you get it, it makes a lot of sense.
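For instance, one common pattern is to wrap the query in a function that takes a callback, so anything that needs the rows runs only after they arrive. A rough sketch (the helper name loadCurrentUser is made up):

// Expose the query result through a callback instead of assigning it
// to an outer variable. loadCurrentUser is an illustrative helper name.
function loadCurrentUser(id, callback) {
    client.query('SELECT * FROM users WHERE id = ?', [id], function (err, results) {
        if (err) return callback(err);
        callback(null, results);
    });
}

loadCurrentUser(1, function (err, user) {
    if (err) return console.error(err);
    req.body.currentuser = user;
    console.log(req.body.currentuser); // runs only after the query returns
});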
When I make the same query twice, the second time it does not return new rows from the database (I guess it just uses the cache).
This is a Windows Form application, where I create the dataContext when the application starts.
How can I force Linq to SQL not to use the cache?
Here is a sample function where I have the problem:
public IEnumerable<Orders> NewOrders()
{
    return from order in dataContext.Orders
           where order.Status == 1
           select order;
}
The simplest way would be to use a new DataContext - given that most of what the context gives you is caching and identity management, it really sounds like you just want a new context. Why did you want to create just the one and then hold onto it?
By the way, for simple queries like yours it's more readable (IMO) to use "normal" C# with extension methods rather than query expressions:
public IEnumerable<Orders> NewOrders()
{
    return dataContext.Orders.Where(order => order.Status == 1);
}
EDIT: If you never want it to track changes, then set ObjectTrackingEnabled to false before you do anything. However, this will severely limit its usefulness. You can't just flip the switch back and forth (having made queries in between). Changing your design to avoid the singleton context would be much better, IMO.
HOW you add an object to the DataContext can determine whether or not it will be included in future queries.
Will NOT add the new InventoryTransaction to future in-memory queries
In this example I'm creating an object with an ID and then adding it to the context.
var transaction = new InventoryTransaction()
{
    AdjustmentDate = currentTime,
    QtyAdjustment = 5,
    InventoryProductId = inventoryProductId
};

dbContext.InventoryTransactions.Add(transaction);
dbContext.SubmitChanges();
Linq-to-SQL isn't clever enough to see this as needing to be added to the previously cached list of in-memory items in InventoryTransactions.
WILL add the new InventoryTransaction to future in-memory queries
var transaction = new InventoryTransaction()
{
    AdjustmentDate = currentTime,
    QtyAdjustment = 5
};

inventoryProduct.InventoryTransactions.Add(transaction);
dbContext.SubmitChanges();
Wherever possible, use the collections in Linq-to-SQL when creating relationships, not the IDs.
In addition, as Jon says, try to minimize the scope of a DataContext as much as possible.