I am looking at databases for a home project (ASP.NET MVC) which I might host eventually. After reading a similar question here on Stack Overflow I have decided to go with MySQL.
However, the ease of use and deployment of SQLite is tempting, and I would like to confirm my reasons before I write it off completely.
My goal is to maintain user status messages (like Twitter). This would mean mostly a single table with user-id/status-message pairs, with read, insert, and delete operations on status messages. No modification is necessary.
After reading the following paragraph I have decided that SQLite can't work for me. I DO have a simple database, but since ALL my transactions work with the SAME table I might face some problems.
SQLite uses reader/writer locks on the entire database file. That means if any process is reading from any part of the database, all other processes are prevented from writing any other part of the database. Similarly, if any one process is writing to the database, all other processes are prevented from reading any other part of the database.
Is my understanding naive? Would SQLite work fine for me? Also, does MySQL offer something that SQLite wouldn't when working with ASP.NET MVC? Ease of development in Visual Studio, maybe?
If you're willing to wait half a month, the next SQLite release intends to support write-ahead logging, which should allow for more write concurrency.
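As a minimal sketch of what that buys you, in Python's sqlite3 (the table layout is just an assumption based on the question above):

```python
import sqlite3

conn = sqlite3.connect("status.db")
# WAL needs SQLite 3.7.0+; the pragma returns the journal mode now in
# effect, so any result other than 'wal' means it was not switched on.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)

# Hypothetical single-table layout from the question: user/status pairs.
conn.execute("""CREATE TABLE IF NOT EXISTS status (
    user_id INTEGER NOT NULL,
    message TEXT NOT NULL)""")
conn.execute("INSERT INTO status (user_id, message) VALUES (?, ?)",
             (1, "first status"))
conn.commit()
conn.close()
```

In WAL mode readers no longer block the writer and vice versa, which directly addresses the whole-file locking concern quoted in the question.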
I've been unable to get even the simple concurrency SQLite claims to support to work - even after asking on SO a couple of times.
Edit
Since I wrote the above, I have been able to get concurrent writes and reads to work with SQLite. It appears I was not properly disposing of NHibernate sessions; putting using blocks around all code that created sessions solved the problem.
/Edit
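The same fix sketched in Python's sqlite3 rather than NHibernate, since the principle carries over: scope each connection so it is closed deterministically instead of holding its locks until garbage collection.

```python
import sqlite3
from contextlib import closing

# Analogue of wrapping NHibernate sessions in using blocks: 'closing'
# guarantees conn.close() runs, and 'with conn' commits on success
# (or rolls back on exception), so no lock outlives the block.
with closing(sqlite3.connect("status.db")) as conn:
    with conn:
        conn.execute("DELETE FROM status WHERE user_id = ?", (1,))
```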
But it's probably fine for your application, especially with the Write-ahead Logging that user380361 mentions.
Small footprint, single file installation, fast, works well with NHibernate, free, public domain - a very nice product in almost all respects!
Related
I have an HTML5 mobile app running on iOS and Android. Users will normally have a little bit of local data stored in a few tables. Let's say five tables with an average of three records.
The performance of WebSQL is really bad. I read in this post that much of the delay is probably in opening and closing the database for each transaction. My users will normally only do one transaction at a time, so the time needed to open and close the database for each operation will usually be a relatively big chunk of the total time needed.
I am wondering if I could just open the database once, dispense with all the transaction wrappers, and just execute the SQL straight away.
The table is never used by any other person or process than the user updating their data, or the app reading the data after an update and sending the data to a server for calculations and statistics.
Most crucially: if I follow the above strategy, and the database is never closed, but the user or the OS closes the app (properly speaking: the webview), will the changed data persist or be lost?
Okay, I found the problem. I use the persistenceJS framework to deal with the local database. It keeps a copy of the WebSQL data in a JS object and keeps the database and the JS object in sync. That process takes a while, and I was putting everything in the "flush" handler, which runs after the sync.
I also keep the connection open. With IndexedDB I can keep it open on the UI thread and a background thread at the same time without observing any problems, and I believe WebSQL will behave the same. If you are using plain JS files, you could try out my own JavaScript library; it is a very thin wrapper over both IndexedDB and WebSQL, though it is written in an IndexedDB style.
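The open-per-operation cost described above is easy to demonstrate outside the browser; here is a rough analogue in Python's sqlite3 (not WebSQL, but it is SQLite underneath either way):

```python
import sqlite3, time
from contextlib import closing

with closing(sqlite3.connect("perf.db")) as c:
    c.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")
    c.commit()

N = 200

# Variant 1: open and close the database around every single statement.
t0 = time.perf_counter()
for i in range(N):
    with closing(sqlite3.connect("perf.db")) as conn:
        with conn:
            conn.execute("INSERT INTO t (v) VALUES (?)", (i,))
per_op = time.perf_counter() - t0

# Variant 2: one persistent connection, one enclosing transaction.
t0 = time.perf_counter()
with closing(sqlite3.connect("perf.db")) as conn:
    with conn:
        for i in range(N):
            conn.execute("INSERT INTO t (v) VALUES (?)", (i,))
batched = time.perf_counter() - t0

print(f"open/close per op: {per_op:.3f}s  persistent: {batched:.3f}s")
```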
I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both sets of databases somewhat synchronized (more or less daily).
So far, my ideas to solve the problem seem very clumsy. Anyway, for reference, here is what I have considered so far:
Make a database dump of the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time (which guarantees I won't do it as often as I would like it to be done). A scripted sketch of this idea follows this list.
Write a small web service to access the new data, and write a small application locally to communicate with said web service, download the newest data, and update the local databases.
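For what it's worth, the first idea can be scripted end to end. A minimal sketch in Python (hostnames, credentials, and database names are placeholders; it assumes the local databases already exist and that mysqldump and mysql are on the PATH):

```python
import subprocess

# Placeholder connection details -- replace with your own.
REMOTE = ["-h", "shared-host.example.com", "-u", "remote_user", "-pREMOTE_PASS"]
LOCAL  = ["-h", "localhost", "-u", "local_user", "-pLOCAL_PASS"]
DATABASES = ["blog", "forum"]

for db in DATABASES:
    # Dump each remote database and replay the dump locally.
    # With MyISAM tables, --lock-tables gives a consistent snapshot.
    dump = subprocess.run(
        ["mysqldump", "--lock-tables", *REMOTE, db],
        check=True, capture_output=True).stdout
    subprocess.run(["mysql", *LOCAL, db], input=dump, check=True)
```

Scheduled from cron, this is close to the set-up-once-and-forget workflow asked about below.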
Both solutions sound like a lot of work for a problem that is probably already solved a zillion times over. Or maybe it's even an existing feature which I completely overlooked.
Is there an easy way to keep databases more or less in sync? Ideally something that I can set up once, schedule, and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
=============
Edit: I had a look at replication, but it seems that I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there any other solution that doesn't require full control on the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with, but it may be what you want. See here for more details: http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
Take a look at the Microsoft Sync Framework - you will need to code in .NET, but it can resolve your issues.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
For this to work you will need additional tables in your MySQL database for change tracking and anchors (keeping track of the last synchronization), but you won't need full control as long as you can access the database.
Replication would have been simpler :), but this might just work in your case.
I'm rewriting a PHP+MySQL site that averages 40-50 hits a day using Django.
Is SQLite a suitable database to use here? Are there any advantages/disadvantages between them?
I'm just using the db to store a blog and the users who can edit it. I am using fulltext search for the blog search, but no complex joins anywhere.
40-50 hits per day is very small, and SQLite can be used without any problem.
MySQL might be better once you start getting more hits, because it handles multiple connections better (locking isn't the same in MySQL and SQLite).
The major problem with SQLite is concurrency. If you expect 40-50 hits a day, that's probably a non-issue. However, if that load increases you should be ready to migrate to a database daemon such as MySQL - better to abstract your database-specific code to make such a switch as painless as possible.
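In Django that abstraction largely comes for free: if you stick to the ORM, switching engines is mostly a settings change. A sketch, with placeholder names and credentials:

```python
# settings.py -- development runs on SQLite...
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
    }
}

# ...and migrating later means swapping the backend, e.g.:
# "ENGINE": "django.db.backends.mysql",
# "NAME": "blog", "USER": "blog_user", "PASSWORD": "...", "HOST": "localhost",
```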
The performance section of the SQLite wiki might be of use to you.
Since you're already using an adequate database, I don't see a reason to migrate to a smaller one.
While SQLite might be perfectly adequate too, changing to a less capable platform from a more capable one doesn't seem like the best choice :)
SQLite will work just fine for you. It sounds as though you're largely using the database read-only (with occasional writes to update the content). SQLite excels at this kind of access pattern. The only place where SQLite chokes is when you have a lot of writes to a database, because once a process attempts to write, the file is locked until the write is complete. Also, if you do lots of writes (like updating rows in a loop) you should look into putting all those writes into a transaction: while the file is locked once the transaction hits a write query, the updates themselves take much less time because they're written to the file at once rather than individually.
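A minimal sketch of that advice in Python's sqlite3 (the table is hypothetical): group the loop's writes into a single transaction instead of committing row by row.

```python
import sqlite3
from contextlib import closing

rows = [(n, "post %d" % n) for n in range(1000)]

with closing(sqlite3.connect("blog.db")) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER, title TEXT)")
    # One transaction: the file is locked once and all 1000 rows are
    # written together, instead of one lock/sync cycle per INSERT.
    with conn:
        conn.executemany("INSERT INTO posts (id, title) VALUES (?, ?)", rows)
```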
SQLite would be fine for this level of traffic. It actually performs quite well, the only thing that it is lacking is caching of data and queries because it needs to be spun up every time your page is accessed. That said, it is still very quick and it shouldn't be too hard to migrate to MySQL later if need be.
I wish to migrate the database of a legacy web app from SQL Server to MySQL. What are the limitations of MySQL that I must look out for? And what items should be part of a comprehensive checklist before jumping in and actually modifying the code?
First thing I would check is the data types - the exact definition of datatypes varies from database to database. I would create a mapping list that tells me what to map each datatype to; that will help in building the new tables. I would also check for tables or columns that are not being used now - no point in migrating them. Do the same with functions, jobs, stored procedures, etc. Now is the time to clean out the junk.
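As a starting point, the mapping list can be a simple lookup table. The entries below are common choices, not a definitive mapping; verify each one against your data:

```python
# SQL Server -> MySQL datatype map (a sketch, not exhaustive).
TYPE_MAP = {
    "NVARCHAR":         "VARCHAR",         # pair with a UTF-8 column charset
    "NTEXT":            "LONGTEXT",
    "BIT":              "TINYINT(1)",
    "MONEY":            "DECIMAL(19,4)",
    "SMALLDATETIME":    "DATETIME",
    "UNIQUEIDENTIFIER": "CHAR(36)",
    "IMAGE":            "LONGBLOB",
    "IDENTITY":         "AUTO_INCREMENT",  # a column attribute, not a type
}

def translate(sqlserver_type):
    # Fall back to the original name for types that carry over (INT, DATETIME...).
    return TYPE_MAP.get(sqlserver_type.upper(), sqlserver_type)
```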
How are you accessing the data - through stored procedures or dynamic queries against the database? Check each query by running it against a new dev database and make sure it still works. Again, there are differences between how the two flavors of SQL work. I've not used MySQL, so I'm not sure what some of the common failure points are. While you are at it, you might want to time the new queries and see if they can be optimized. Optimization also varies from database to database, and there are probably some poorly performing queries right now that you can fix as part of the migration.
User-defined functions will need to be looked at as well. Don't forget these if you have them.
Don't forget scheduled jobs; these will need to be checked and recreated in MySQL as well.
Are you importing any data on a regular schedule? All imports will have to be rewritten.
Key to everything is to use a test database and test, test, test. Test everything, especially quarterly or annual reports or jobs that you might forget.
Another thing you want to do is do everything through scripts that are version controlled. Do not move to production until you can run all the scripts in order on dev with no failures.
One thing I forgot: make sure the dev database you are running the migration from (the SQL Server database) is updated from production immediately before each test run. Hate to have something fail on prod because you were testing against outdated records.
Your client code is almost certain to be the most complex part to modify. Unless your application has a very high quality test suite, you will end up having to do a lot of testing. You can't rely on anything working the same, even things which you might expect to.
Yes, things in the database itself will need to change, but the client code is where the main action is, it will need heaps of work and rigorous testing.
Forget migrating the data; that is the last thing that should be on your mind. The database schema can probably be converted without too much difficulty; other database objects (SPs, views, etc.) could cause issues, but the client code is where the focus of the problems will be.
Almost every routine which executes a database query will need to be changed, but absolutely all of them will need to be tested. This will be nontrivial.
I am currently looking at migrating our application's main database from MySQL 4.1 to 5, that is much less of a difference, but it will still be a very, very large task.
I'm working on a Java-based project that has a client program which needs to connect to a MySQL database on a remote server. This was implemented as follows:
Use JDBC to write the SQL queries to be executed, which are then hosted as a servlet using Apache Tomcat and made accessible via XML-RPC. The client code uses XML-RPC to remotely execute these JDBC-based functions. This allows us to keep our MySQL database non-public, restricts use to the pre-defined functions, and allows Tomcat to manage the database transactions (which I've been told is better than letting MySQL do it alone, but I really don't understand why). However, this approach requires a lot of boilerplate code, and Tomcat is a huge memory hog on our server.
I'm looking for a better way to do this. One way I'm considering is to make the MySQL database publicly accessible, re-write the JDBC-based code as stored procedures, and restrict public use to these procedures only. The problem I see with this is that translating all the JDBC code to stored procedures will be difficult and time-consuming. I'm also not too familiar with MySQL's permissions. Can one grant access to a stored procedure which performs select statements on a table, but also deny arbitrary select statements on that same table?
Any other ideas are welcome, as are thoughts and/or suggestions on the stored procedure solution.
Thank you!
You can probably get the RAM upgraded in your server for less than the cost of even a few days' development time, so don't write any code if that's all you're getting from the exercise. Also, just because the memory is used inside of Tomcat, it doesn't mean that Tomcat itself is using it. The memory could be used up by data or by technical flaws in your code.
If you've tried additional RAM and it is being eaten up, then that smells like a coding issue, so I'd suggest using a profiler or log data to try and work out what the root cause is before changing anything. If the cause is large data sets, then using the database directly will only delay the inevitable; instead you'd need to look at things like paging, summarisation, client-side caching, or redesigning clients to reduce the use of expensive queries. Using a profiler, or simply reviewing the code base, will also tell you if something is creating too many objects (especially strings or XML nodes) or leaking memory.
Boilerplate code can be avoided by refactoring creatively, and it's good that you do avoid repetition. It's unclear how much structure you might already have, but with a little work it's easy to centralise boilerplate JDBC calls. There is no fundamental reason JDBC code should be repeated; perhaps you could tell us what code is being repeated?
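To make the centralisation point concrete, here is the shape of that refactoring sketched in Python's sqlite3 rather than JDBC (the helper and query are illustrative only); in JDBC the same one-helper-owns-the-boilerplate pattern means a single method that opens the Connection, binds parameters, and closes the Statement and ResultSet in finally blocks.

```python
import sqlite3
from contextlib import closing

DB_PATH = "app.db"  # illustrative

def query(sql, params=()):
    # All connection setup, parameter binding, and cleanup lives here,
    # so each call site shrinks to a single line.
    with closing(sqlite3.connect(DB_PATH)) as conn:
        return conn.execute(sql, params).fetchall()

# A call site: no boilerplate left.
tables = query("SELECT name FROM sqlite_master WHERE type = ?", ("table",))
```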
Finally, I'll venture that there are many good reasons to put a web tier over your database. Flexibility (of deployment), compatibility, control (over the SQL) and security are all good reasons to keep the web tier.
MySQL 5.0.3+ does have an EXECUTE privilege that you can grant (without granting SELECT privileges), which should give you the functionality you seek.
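For example, a sketch using MySQL Connector/Python (account, host, and procedure names are all placeholders):

```python
import mysql.connector  # MySQL Connector/Python

# Run as an administrative account. The app account gets EXECUTE on one
# procedure but no SELECT on the underlying table, so ad-hoc SELECTs fail.
admin = mysql.connector.connect(host="db.example.com",
                                user="admin", password="ADMIN_PASS")
cur = admin.cursor()
cur.execute("CREATE USER 'app'@'%' IDENTIFIED BY 'APP_PASS'")
cur.execute("GRANT EXECUTE ON PROCEDURE appdb.get_user_status TO 'app'@'%'")
admin.close()
```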
However, note this MySQL bug report regarding JDBC (and a lot of other drivers).
When calling the [procedure] with JDBC, I get "java.sql.SQLException: Driver requires declaration of procedure to either contain a '\nbegin' or '\n' to follow argument declaration, or SELECT privilege on mysql.proc to parse column types."
The workaround is: see "noAccessToProcedureBodies" in Connector/J 5.0.3 for a somewhat hackish, non-JDBC compliant workaround.
I am sure you could implement your solution without much boilerplate, especially using something like Spring's remoting. Also, how much memory is Tomcat eating? I frankly believe that if it's just doing what you are describing, it could work in less than 128 MB (conservative guess).
Your alternative is the "correct by the book" way of solving the problem. I say build a prototype and see how it works. The major problems you could have are:
MySQL having some important gotcha in this regard
MySQL's Stored Procedure support being too primitive and forcing you to do a lot of work
Some other strange hiccup
I'm probably one of those MySQL haters, so the situation might be better than I think.