Database update outside application - MySQL

Am I correct in assuming that if a different process updates the DB, then my NHibernate-powered application will be out of sync? I'm using almost entirely non-lazy updates.
My target DB is MySQL 5.0, if it makes any difference.

There isn't a simple way to answer that without more context.
What type of application are you thinking about (web, desktop, other)?
What do you think would be out of sync exactly?
If you have a desktop application with an open window and an open session that has data loaded, and you change the same entities somewhere else, then of course the DB will be out of sync with what is in memory, but you can use Refresh to update those entities.
If you use NHibernate's second-level cache and you modify the cached entities somewhere else, the cache contents will be out of sync, but you can still use Refresh or the cache-management methods to reload directly from the DB.
In all cases, NHibernate supports optimistic concurrency through Version properties, which prevent modifications to out-of-sync entities.
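Not NHibernate code, but to make the Version idea concrete, here is a rough, framework-agnostic sketch in TypeScript of the check a Version property gives you; the execute() helper and the customer table are assumptions for illustration.

```typescript
// Sketch of version-based optimistic concurrency: the UPDATE only succeeds if
// nobody else has bumped the version since this row was read.
// execute() is a hypothetical helper standing in for your DB driver.
declare function execute(sql: string, args: any[]): Promise<{ rowsAffected: number }>;

async function saveCustomer(id: number, name: string, versionWhenLoaded: number): Promise<void> {
  const result = await execute(
    "UPDATE customer SET name = ?, version = version + 1 WHERE id = ? AND version = ?",
    [name, id, versionWhenLoaded]
  );
  if (result.rowsAffected === 0) {
    // Another process updated (or deleted) the row: this entity was out of sync.
    throw new Error("stale object: reload the entity and retry");
  }
}
```

NHibernate performs this check for you on flush when an entity has a mapped Version property and raises a stale-object error instead of silently overwriting.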

Yes, the objects in your current session will be out of sync, the same way a DataSet/DataTable would be out of sync if you fetch it and another process updates the same data.

Related

Couchbase Sync Gateway - Server and Client API vs bucket shadowing

I am working on a project that uses Couchbase Server and Sync Gateway to synchronize the contents of a bucket with iOS and Android clients running Couchbase Lite. I also need read and write access to the Couchbase Server from a Node.js server application. From the research I've done, using shadowing is frowned upon (https://github.com/couchbase/sync_gateway/wiki/Bucket-Shadowing), which led me to look into the Sync Gateway API as a means to update the bucket from the Node.js application. Updating existing documents through the Sync Gateway API appears to require the most recent revision ID of the document to be passed in, requiring a separate read before the modification (http://mobile-couchbase.narkive.com/HT2kvBP0/cblite-sync-gateway-couchbase-server), which seems potentially inefficient. What is the best way to solve this problem?
Updating a document (which is really creating a new revision) requires the revision ID. Otherwise Couchbase can't associate the update with a parent. This breaks the whole approach to conflict resolution. (Couchbase uses a method known as multiversion concurrency control.)
The expectation is that you're updating the existing contents of a document. This implies you've read the document already, including the revision ID.
If for some reason you don't need the old contents to update the document, you still need the revision ID. If you work around it (for example, by purging a document through Sync Gateway and then pushing your new version), you can end up with two versions of the document in the system with no connection between them, which will cause a special kind of conflict.
So the short answer is no, there's no way to avoid this (without causing yourself other headaches).
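To make the read-then-write cycle concrete, here is a rough Node.js/TypeScript sketch against the Sync Gateway REST API; the host, port, bucket name, and document id are placeholders, and it assumes a Node version with a global fetch.

```typescript
// Sketch: update a document through the Sync Gateway REST API.
// 4984 is the default public port; "mybucket" and the doc id are assumptions.
const SYNC_GATEWAY = "http://localhost:4984/mybucket";

async function updateDocument(docId: string, changes: Record<string, unknown>): Promise<void> {
  // 1. Read the current revision so the update can name its parent.
  const getRes = await fetch(`${SYNC_GATEWAY}/${docId}`);
  if (!getRes.ok) throw new Error(`GET failed: ${getRes.status}`);
  const current = await getRes.json();

  // 2. Write the new revision, passing the parent revision id back in.
  const putRes = await fetch(`${SYNC_GATEWAY}/${docId}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...current, ...changes, _rev: current._rev }),
  });
  if (!putRes.ok) throw new Error(`PUT failed: ${putRes.status}`); // 409 means a conflicting update won
}
```

The extra GET is the cost of multiversion concurrency control; in practice you usually already have the document (and its _rev) in hand because you are modifying its contents.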
I am not sure why your question was downvoted, as it seems like a reasonable question. You are correct: the Couchbase bucket used by Sync Gateway is best thought of as opaque, and you should not be poking around in there and changing things. There are a number of implementations of Couchbase Lite, including ones for Java, .NET, and Mac OS X. Have you considered making a web service that, on one side, serves your application, and on the other side is itself a Couchbase Lite client? You should be able to separate your data as necessary using channels.

Can I keep websql database open to improve performance?

I have an HTML5 mobile app running on iOS and Android. Users will normally have a little bit of local data stored in a few tables. Let's say five tables with an average of three records.
Performance of websql is really bad. I read in this post that much of the delay is probably in opening and closing the database for each transaction. My users will normally only do one transaction at a time, so the time needed to open and close the database for each operation will usually be a relatively big chunk of total time needed.
I am wondering if I could just open the database once, dispense with all the transaction wrappers and just execute the sql straight away?
The table is never used by any other person or process than the user updating their data, or the app reading the data after an update and sending the data to a server for calculations and statistics.
Most crucially: if I follow the above strategy, and the database is never closed, but the user or the OS closes the app (properly speaking: the webview), will the changed data persist or be lost?
Okay, I found the problem. I use the persistenceJS framework to deal with the local database. This keeps a copy of the websql data stored in a js object and keeps database and js object in sync. That's a process that takes a while, and I was putting everything in the "flush" handler, which comes after the sync.
I also keep the connection open. For IndexedDB, I could keep it open on the UI thread and a background thread at the same time without observing any problem, and I believe WebSQL will behave the same. If you are using plain JS files, you could try my own JavaScript library; it is a very thin wrapper for both IndexedDB and WebSQL, though it is written in an IndexedDB style.
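As a rough illustration of the "open once, reuse the handle" approach (the typings, database name, and run() helper are just a sketch): the per-statement transaction wrapper is still required by WebSQL, but the repeated openDatabase calls go away, and data written in a committed transaction should persist even if the webview is closed afterwards.

```typescript
// Minimal WebSQL typings so the sketch stands alone (the browser provides the real API).
interface WebSQLResultSet { rows: { length: number; item(i: number): any } }
interface WebSQLTransaction {
  executeSql(sql: string, args?: any[],
    ok?: (tx: WebSQLTransaction, rs: WebSQLResultSet) => void,
    err?: (tx: WebSQLTransaction, e: any) => boolean): void;
}
interface WebSQLDatabase { transaction(fn: (tx: WebSQLTransaction) => void): void }
declare function openDatabase(name: string, version: string, desc: string, size: number): WebSQLDatabase;

// Open once at startup and reuse the handle for every operation.
const db = openDatabase("appdata", "1.0", "Local app data", 2 * 1024 * 1024);

function run(sql: string, args: any[] = []): Promise<WebSQLResultSet> {
  return new Promise((resolve, reject) => {
    db.transaction(tx => {
      tx.executeSql(sql, args,
        (_tx, rs) => resolve(rs),
        (_tx, e) => { reject(e); return false; });
    });
  });
}

// Usage: await run("UPDATE settings SET value = ? WHERE key = ?", ["on", "notifications"]);
```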

Configuration data in database or in file

I have found information about this already, but it was of a more general kind, focused on "if the data should change a lot...". I will try to be one step more specific here.
I am developing a web application. It should be possible to configure what is presented and what is not. E.g. a form can contain a number of different drop-down lists, but which of those drop-down lists are shown should be configurable.
Hence, there is going to be a lot of reading of the config info, while updating the configuration will be done very seldom. Also, the configuration itself should be done through a web application as well.
What's the best strategy, using files or database for the config data?
I guess this depends on whether you are already using a database for the rest of the web application. If you are, then it makes sense to just add another table. Otherwise, the overhead of setting up a database server and managing connections just for configuration is too much, in which case a flat file using structured text is probably your best bet.
If you are already using a database, you could cache the results so that the overhead of looking up the results is lower, then clear the cache when the config is updated.
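A minimal sketch of that pattern (the key/value table layout and the loadConfigRows() function are assumptions; any config table shape would do):

```typescript
// Sketch: read configuration from a database table, cache it in memory,
// and invalidate the cache when the admin UI saves changes.
type Config = Record<string, string>;

let cached: Config | null = null;

async function loadConfigRows(): Promise<Config> {
  // e.g. SELECT `key`, `value` FROM app_config; driver-specific code omitted
  throw new Error("wire up your DB driver here");
}

export async function getConfig(): Promise<Config> {
  if (cached === null) {
    cached = await loadConfigRows(); // hit the DB only on a cache miss
  }
  return cached;
}

export function invalidateConfigCache(): void {
  cached = null; // call this from the "save configuration" handler
}
```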
The best strategy is encapsulation.
If you encapsulate access to your configuration data properly, you'll be able to start off with whichever implementation meets your short term requirements, safe in the knowledge that you can change it later.
Up until I read the requirement that
the configuration itself should be done through a web application,
I'd have said a flat file or PHP include would have sufficed, but given that requirement (and the availability of MySQL), I'd say use a database.
Plus, you never know when the config's update frequency will increase.

Core Data and JSON question

I know this question has been posed before, but the explanation was a little unclear to me, and my question is a little more general. I'm trying to conceptualize how one would periodically update data in an iPhone app using a remote web service. In theory, a portion of the data on the phone would be synced periodically (only when updated), while other data would require the user to be online and be requested on the fly.
Conceptually, this seems possible using XML-RPC or JSON and Core Data. I wonder if anyone has an opinion on the best way to implement this. I am a novice iPhone developer, but I understand much of the process conceptually.
Thanks
To synchronize a set of entities when you don't have control over the server, here is one approach:
Add a touched BOOL attribute to your entity description.
In a sync attempt, mark all entity instances as untouched (touched = [NSNumber numberWithBool:NO]).
Loop through your server-side (JSON) instances and add or update entities from your Core Data store to your server-side store, or vice versa. The direction of updating will depend on your synchronization policy, and what data is "fresher" on either side. Either way, mark added, updated or sync'ed Core Data entities as touched (touched = [NSNumber numberWithBool:YES])
Depending on your sync policy, delete all entity instances from your Core Data store which are still untouched. Untouched entities were presumably deleted from your server-side store, because no addition, update or sync event took place between the Core Data store and the server for those objects.
Synchronization is a fair amount of work to implement and will depend on what degree of synchronization you need to support. If you're just pulling data, step 3 is considerably simpler because you won't need to push object updates to the server.
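The same mark-and-sweep idea in a language-agnostic sketch (TypeScript here rather than Core Data code; the entity shape, the in-memory store, and the "newer updatedAt wins" rule are assumptions; in Core Data, touched would simply be a Boolean attribute on the entity):

```typescript
// Sketch of the pull-only case: mark everything untouched, upsert from the
// server payload, then delete whatever was never touched.
interface LocalItem { id: string; updatedAt: number; touched: boolean; data: unknown }
interface ServerItem { id: string; updatedAt: number; data: unknown }

function sync(local: Map<string, LocalItem>, serverItems: ServerItem[]): void {
  // Steps 1-2: reset the touched flag on every local instance.
  for (const item of local.values()) item.touched = false;

  // Step 3: add or update from the server, marking everything we saw.
  for (const remote of serverItems) {
    const mine = local.get(remote.id);
    if (!mine || remote.updatedAt > mine.updatedAt) {
      local.set(remote.id, { ...remote, touched: true });
    } else {
      mine.touched = true; // already fresh locally; just record that it still exists
    }
  }

  // Step 4: anything still untouched no longer exists on the server.
  for (const [id, item] of local) {
    if (!item.touched) local.delete(id);
  }
}
```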
Syncing is hard, very hard. Ideally you would want to receive deltas of the changes from the server and then using a unique id for each record in Core Data, update only those records that are new or changed.
Assuming you can do that, the code is pretty straightforward. If you are syncing in both directions, things get more complicated because you need to track deltas on both sides and handle collisions.
Can you clarify what type of syncing you are wanting to accomplish? Is it bi-directional or pull only?
I have an answer, but it's sucky. I'm currently looking for a more acceptable/reliable solution (i.e. anything Marcus Zarra cooks up).
What I've done needs some work ... seriously, because it doesn't work all the time...
The mobile device has a json catalog of entities, their versions, and a url pointing to a json file with the entity contents.
The server has the same setup, the catalog listing the entities, etc.
Whenever the mobile device starts, it compares the entity versions of its local catalog with the catalog on the server. If any of the versions on the server are newer, it offers the user an opportunity to download the entity updates.
When the user elects to update, the mobile device now has the url for each of the new/changed entities and downloads it. Once downloaded, the app will blow away all objects for each of the changed entities, and then insert the new objects from JSON. In the event of an error, the deletions/insertions are rolled back to pre-update status.
This works, sort of. I can't catch it in a debug session when it goes awry, so I'm not sure what might cause corruption or inconsistency in the process.
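For what it's worth, the catalog comparison itself is simple; a sketch of it (the catalog shape is an assumption based on the description above):

```typescript
// Sketch: both sides expose { name, version, url } per entity; anything newer
// on the server is offered for download.
interface CatalogEntry { name: string; version: number; url: string }

function entriesNeedingUpdate(local: CatalogEntry[], server: CatalogEntry[]): CatalogEntry[] {
  const localVersions = new Map(local.map((e): [string, number] => [e.name, e.version]));
  return server.filter(e => (localVersions.get(e.name) ?? -1) < e.version);
}

// For each returned entry the app would download entry.url, delete the existing
// objects for that entity inside a transaction, insert the new ones, and only
// bump the local catalog version if the whole batch succeeds (otherwise roll back).
```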

How should my application keep clients in sync with schema changes to HTML5 databases?

I'm wanting to incorporate HTML5 database storage into my web application to make it offline-accessible.
I've done lots of development in server-side environments with databases, and we all know that database schema additions and modifications are often necessary.
I am wondering what should happen if my application uses an offline database schema, and that schema changes. How do I prevent the application from breaking on the client side? How do I ensure the database is always up to date on the client end?
Anyone have any solutions?
If you change the schema you might want to dump the browser db and re-sync it from the server.
This would at least be the safest way to do it.
If offline clients have added data to the DB, you should of course handle an up-sync of that data first.
An easy way could be to have an info table telling you which version of the application/DB was used for the last sync, so you know how to handle it, and also whether it should be updated to the latest version.
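A minimal sketch of that info-table idea (the run() helper, table names, and migration list are assumptions for illustration):

```typescript
// Sketch: keep a schema_info table so the client knows which schema version it
// has and which migrations still need to run on startup.
declare function run(sql: string, args?: any[]): Promise<{ rows: any[] }>;

const MIGRATIONS: string[] = [
  /* v1 */ "CREATE TABLE notes (id TEXT PRIMARY KEY, body TEXT)",
  /* v2 */ "ALTER TABLE notes ADD COLUMN updated_at INTEGER",
];

export async function migrate(): Promise<void> {
  await run("CREATE TABLE IF NOT EXISTS schema_info (version INTEGER)");
  const res = await run("SELECT version FROM schema_info");
  let version = res.rows.length ? res.rows[0].version : 0;

  // Apply only the migrations this client has not seen yet; if the local schema
  // is too old or damaged, the safest fallback is to drop it and re-sync from the server.
  while (version < MIGRATIONS.length) {
    await run(MIGRATIONS[version]);
    version++;
  }
  await run("DELETE FROM schema_info");
  await run("INSERT INTO schema_info (version) VALUES (?)", [version]);
}
```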