Using JPA (Hibernate) vs stored procedures - MySQL

I am working on a project using the ZK Framework, Hibernate, Spring and MySQL.
I need to generate some charts from the MySQL database, but when I counted the objects whose values I need to calculate for those charts, I found more than 1400 objects, and the same number of queries and transactions.
So I thought of using stored procedures in MySQL to calculate those values and save them in separate tables (using an architecture close to a data warehouse), and then having my web application just read the values from those tables and display them as charts.
I want to know which of those methods is better in terms of speed and performance.
Thank you.

No way to tell, really, without many more details. However:
What you want to do is called Denormalisation. This is a recognised technique for speeding up reporting and making it easier. (If it doesn't, your denormalisation has failed!) When it works it has the following advantages:
Reports run faster
Report code is easier to write
On the other hand:
Report data is out of date, containing only data as at the time you last did the calculations
An extreme form of doing this is to take the OLTP database (a standard database) and export it into an Analysis database (aka a Cube or an OLAP database).
One of the problems of Denormalisation is that a) it is usually a significant effort, b) it adds extra code which adds complexity and thus increases support costs, and c) it might not make enough (or any) difference. Because of this, it is usual not to do it until you know you have a problem. This will happen when you have done your reports on the basic database and have found that they either are too difficult to write and/or run too slowly. I would strongly suggest that only when you reach that point do you go for Denormalisation.
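To make the summary-table idea concrete, here is a minimal MySQL sketch of the kind of thing the question describes; the table and column names (chart_summary, orders) are invented for illustration:

    -- Summary table holding pre-computed chart values
    CREATE TABLE chart_summary (
        metric_name  VARCHAR(64)   NOT NULL,
        metric_date  DATE          NOT NULL,
        metric_value DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (metric_name, metric_date)
    );

    DELIMITER //
    -- Recompute the summary in one set-based pass,
    -- instead of 1400 individual queries from the application
    CREATE PROCEDURE refresh_chart_summary()
    BEGIN
        DELETE FROM chart_summary WHERE metric_name = 'orders_per_day';
        INSERT INTO chart_summary (metric_name, metric_date, metric_value)
        SELECT 'orders_per_day', DATE(created_at), COUNT(*)
        FROM   orders
        GROUP  BY DATE(created_at);
    END //
    DELIMITER ;

    -- Run nightly (cron, or the MySQL event scheduler):
    -- CALL refresh_chart_summary();

The web application then only ever reads chart_summary.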
There can be times when you don't need to do that, but I've only seen 1 such example in over 25 years of development; and that decision was helped by a desire to use an OLAP database by Management for political purposes.

how much work should we do in the database?

OK, I'm really confused as to exactly how much "work" should be done in the database, and how much should instead be done at the application level.
I mean, I'm not talking about obvious things, like converting strings into SHA2 hashes at the application level instead of the database level.
I'm talking about blurrier cases, including, but not limited to: "should we retrieve the data for 4 columns and do an uppercase/concatenation at the application level, or should we do that at the database level and send the calculated result to the application level?"
And if you could list any more other examples it would be great.
It really depends on what you need.
I like to do my business logic in the database; other people are religiously against that.
You can use triggers and stored procedures/functions in SQL.
Links for MySQL:
http://dev.mysql.com/doc/refman/5.5/en/triggers.html
http://www.mysqltutorial.org/introduction-to-sql-stored-procedures.aspx
http://dev.mysql.com/doc/refman/5.5/en/stored-routines.html
My reasons for doing business logic in triggers and stored procedures:
Note that I'm not talking about bending the database structure towards the business logic; I'm talking about putting the business logic in triggers and stored procedures.
It centralizes your logic: the database is a central place that everything has to go through. If you have multiple insert/update/delete points in your app (or you have multiple apps), you'll need to do the checks multiple times; if you do it in the database, you only have to do the checks in one place.
It simplifies the application, e.g. you can just add a member and the database will figure out whether the member is already known and take the appropriate action.
It hides the internals of your database from the application. If you do all your logic in the application, you will need intricate knowledge of your database in the application. If you use database code (triggers/procs) to hide that, you don't need to know every database detail in your app.
It makes it easier to restructure your database. If you have the logic in your database, you can just change a table layout: replace the old table with a BLACKHOLE table, put a trigger on it, and let the trigger do the updates to the new table (see the sketch after this list). Your app does not even need to know the database has changed; this allows legacy apps to keep working unchanged, whilst new apps can use the improved database layout.
Some things are easier in SQL
Some things work faster in SQL
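As an illustration of the restructuring point above, here is a minimal MySQL sketch of the BLACKHOLE-table trick; all table and column names are invented:

    -- The old layout becomes a BLACKHOLE table: it stores nothing,
    -- but its triggers still fire on every write
    CREATE TABLE member_old (
        id   INT,
        name VARCHAR(100)
    ) ENGINE = BLACKHOLE;

    -- The improved layout that new apps use directly
    CREATE TABLE member_new (
        id        INT PRIMARY KEY,
        full_name VARCHAR(100) NOT NULL
    ) ENGINE = InnoDB;

    DELIMITER //
    -- Redirect legacy writes to the new table
    CREATE TRIGGER member_old_redirect
    BEFORE INSERT ON member_old
    FOR EACH ROW
    BEGIN
        INSERT INTO member_new (id, full_name) VALUES (NEW.id, NEW.name);
    END //
    DELIMITER ;

    -- Legacy apps keep issuing this, unchanged:
    -- INSERT INTO member_old (id, name) VALUES (1, 'Alice');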
I don't like to use (lots of and/or complicated) SQL code in my application. I like to put SQL code in a stored procedure/function and keep only simple queries in my application code; that way I can write application code that says what I mean and let the database layer do the heavy lifting.
Some people disagree strongly with this, but this approach works well for me and has simplified debugging and maintenance of my applications a lot.
Generally, it's good practice to expect only "data" from the database. It's up to the application(s) to apply business/domain logic and make sense of the data retrieved. It's highly recommended to do the following things in the application layer:
1) Formatting Date
2) Applying Math functions, such as interpolation/extrapolation, etc
3) Dynamic sorting (based on columns)
However, situations sometimes warrant a few things being done at the database level.
In my opinion, the application should use data and the database should provide it, and that should be a clear separation of concerns. So the database gives records sorted, ordered and filtered according to the requested conditions, but it is up to the application to apply business logic to those records and "convert" them into something meaningful to the user.
For example, in my previous company we worked on a big application for work-time calculations. One of the obvious functionalities in this kind of application is tracking the vacation days of employees: how many days an employee has per year, how many he has used, how many are left, etc. Basically, we could have written triggers and procedures that would update those columns automatically, so that when an employee had his vacation days approved, the number of days he applied for would be taken from his "vacation pool" and added to "vacation days used". Pretty easy stuff, but we decided to make it explicit at the application level, and boy, very soon we were happy we did it that way. The application had to be labor-law compliant, and it quickly turned out that vacation days are not calculated equally for all employees, and sometimes a vacation day can be not so much of a vacation day at all, but that is beside the point. Had we put this "easy" operation in the database, we would have had to version our database with every little change to the vacation-day logic, and that would have led us straight to hell in customer support, given that it was possible to update only the application without needing to update the database (except at clear "breakthrough" moments where the database structure changed, of course).
In my experience I've found that many applications start with a straightforward set of tables and then a handful of stored procedures to provide basic functionality. This works very well; it usually yields high performance and is simple to understand, and it also mitigates any need for a complex middle tier.
However, applications grow. It's not unusual to see large data-driven applications with thousands of stored procedures. Throw triggers into the mix and you have an application which, for anybody other than the original developers (if they're still working on it), is very difficult to maintain.
I will put a word in for applications which place most logic in the database - they can work well when you have some good database developers and/or you have a legacy schema which cannot be changed. The reason I say this is that ORMs take much of the pain out of this part of application development when you let them control the schema (if not, you often need to do a lot of fiddling to get it working).
If I was designing a new application then I would usually opt for a schema which is dictated by my application domain (the design of which will be in code). I would normally let an ORM handle the mapping between the objects and the database. I would treat stored procedures as exceptions to the rule when it came to data access (reporting can be much easier in sprocs than trying to coax an ORM into producing a complex output efficiently).
The most important thing to remember though, is that there are no "best practices" when it comes to design. It is up to you the developer to weigh up the pros and cons of each option in the context of your design.

MySQL stored procedures: to use them or not to use them

We are at the beginning of a new project, and we are really wondering if we should use stored procedures in MySQL or not.
We would use the stored procedures only to insert and update business model entities. There are several tables that represent a model entity, and we would abstract the entity behind those insert/update stored procedures.
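For concreteness, a minimal sketch of the kind of insert procedure being considered; the customer/customer_address split and all names are invented for illustration:

    DELIMITER //
    -- One business entity spread over two tables,
    -- abstracted behind a single insert procedure
    -- (assumes customer.id is AUTO_INCREMENT)
    CREATE PROCEDURE insert_customer (
        IN p_name  VARCHAR(100),
        IN p_email VARCHAR(100),
        IN p_city  VARCHAR(100)
    )
    BEGIN
        START TRANSACTION;
        INSERT INTO customer (name, email) VALUES (p_name, p_email);
        INSERT INTO customer_address (customer_id, city)
        VALUES (LAST_INSERT_ID(), p_city);
        COMMIT;
    END //
    DELIMITER ;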
On the other hand, we could do the inserts and updates from the model layer, in PHP rather than in MySQL.
In your experience, which is the better option? What are the advantages and disadvantages of both approaches? Which is the faster one in terms of performance?
PS: It is a web project, with mostly reads, and high performance is the most important requirement.
Unlike actual programming language code, they:
are not portable (every db has its own version of PL/SQL; sometimes different versions of the same database are incompatible - I've seen it!)
are not easily testable - you need a real (dev) database instance to test them, so unit testing their code as part of a build is virtually impossible
are not easily updatable/releasable - you must drop/create them, i.e. modify the production db, to release them
do not have library support (why write code when someone else already has?)
are not easily integrated with other technologies (try calling a web service from one)
use a language about as primitive as Fortran, which makes it inelegant and laborious to get useful coding done, so it is difficult to express business logic, even though that is typically their primary purpose
do not offer debugging/tracing/message-logging etc. (some dbs may support this - I haven't seen it though)
lack a decent IDE to help with syntax and linking to other existing procedures (e.g. like Eclipse does for Java)
people skilled in coding them are rarer and more expensive than app coders
their "high performance" is a myth: because they execute on the database server, they usually increase the db server load, so using them will usually reduce your maximum transaction throughput
cannot efficiently share constants (normally solved by creating a table and querying it from within your procedure - very inefficient)
etc.
If you have a very database-specific action (eg an in-transaction action to maintain db integrity), or keep your procedures very atomic and simple, perhaps you might consider them.
Caution is advised when specifying "high performance" up front. It often leads to poor choices at the expense of good design and it will bite you much sooner than you think.
Use stored procedures at your own peril (from someone who's been there and never wants to go back). My recommendation is to avoid them like the plague.
Unlike programming code, they:
render SQL injection attacks almost impossible (unless you are constructing and executing dynamic SQL from within your procedures)
require far less data to be sent over the IPC as part of the callout
enable the database to far better cache plans and result sets (this is admittedly not so effective with MySQL due to its internal caching structures)
are easily testable in isolation (i.e. not as part of JUnit tests)
are portable in the sense that they allow you to use db-specific features, abstracted away behind a procedure name (in code you are stuck with generic SQL-type stuff)
are almost never slower than SQL called from code
But, as Bohemian says, there are plenty of cons as well (this is just by way of offering another perspective). You'll perhaps have to benchmark before you decide what's best for you.
As for performance, they have the potential to be really performant in a future MySQL version (under SQL Server or Oracle, they are a real treat!). Yet, for all the rest... they totally blow away the competition. I'll summarize:
Security: you can give your app the EXECUTE right only, and everything is fine. Your SP will insert, update, select..., with no possible leak of any sort. It means global control over your model, and enforced data security (see the sketch after this list).
Security 2: I know it's rare, but sometimes PHP code leaks out from the server (i.e. becomes visible to the public). If it includes your queries, possible attackers know your model. This is pretty odd, but I wanted to mention it anyway.
Task force: yes, creating efficient SQL SPs requires some specific resources, which are sometimes more expensive. But if you think you don't need those resources just because you're embedding your queries in your client... you're going to have serious problems. I'd mention the analogy of web development: it's good to separate the view from the rest, because your designers can work with their own technology while the programmers focus on the business layer.
Encapsulating the business layer: using stored procedures totally isolates the business logic where it belongs: the damn database.
Quickly testable: one command line under your shell and your code is tested.
Independence from the client technology: if tomorrow you'd like to switch from PHP to something else, no problem. OK, just storing the SQL in a separate file would do the trick too, that's right. Also, good point in the comments: if you decide to switch SQL engines, you'd have a lot of work to do. You have to have a good reason to do that anyway, because for big projects and big companies that rarely happens (mostly due to the cost and HR management).
Enforcing agile 3+-tier development: if your database is not on the same server as your client code, you may have different servers, but only one for the database. In that case, you don't have to upgrade any of your PHP servers when you need to change the SQL-related code.
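A minimal sketch of the EXECUTE-only lockdown from the security point above; the account and schema names are invented:

    -- The application account may only CALL procedures;
    -- it has no direct access to the tables
    CREATE USER 'webapp'@'apphost' IDENTIFIED BY 'secret';

    GRANT EXECUTE ON myapp.* TO 'webapp'@'apphost';

    -- Deliberately no GRANT SELECT/INSERT/UPDATE/DELETE on the tables,
    -- so every read and write is forced through the stored procedures.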
OK, I think that's the most important thing I had to say on the subject. I developed in both styles (SP vs client) and I really, really love the SP style. I just wish MySQL had a real IDE for them, because right now it's kind of a limited pain in the ass.
Stored procedures are good to use because they keep your queries organized and allow you to perform a batch at once. Stored procedures are normally quick to execute because they are pre-compiled, unlike queries that are compiled on every run. This has a significant impact in situations where the database is on a remote server: if the queries are in a PHP script, there are multiple communications between the application and the database server (the query is sent, executed, and the result thrown back), but with stored procedures it only needs to send a small CALL statement instead of big, complicated queries.
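A hypothetical example of that trade-off (all names invented): the complicated query lives server-side once, and the application only ever sends the short CALL:

    DELIMITER //
    CREATE PROCEDURE get_monthly_sales (IN p_year INT, IN p_month INT)
    BEGIN
        -- The big query is stored (and pre-compiled) on the server
        SELECT   product_id, SUM(quantity) AS units, SUM(total) AS revenue
        FROM     sales
        WHERE    YEAR(sold_at) = p_year AND MONTH(sold_at) = p_month
        GROUP BY product_id;
    END //
    DELIMITER ;

    -- All the PHP script sends over the wire:
    CALL get_monthly_sales(2011, 6);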
It might take a while to adapt to programming stored procedures, because they have their own language and syntax. But once you are used to it, you'll see that your code is really clean.
In terms of performance, there might not be any significant gain whether you use stored procedures or not.
I will give my opinion, although my thoughts may not be directly related to the question:
As with many issues, the answer about using stored procedures or an application-layer-driven solution depends on questions that will drive the overall effort:
What you want to get.
Are you trying to do batch operations or on-line operations? Are they completely transactional? How recurrent are those operations? How heavy is the expected workload for the database?
What you have in order to get it.
What kind of database technology do you have? What kind of infrastructure? Is your team fully trained in the database technology? Is your team better capable of building a database-agnostic solution?
Time to get it.
No secrets about that.
Architecture.
Is your solution required to be distributed across several locations? Is your solution required to use remote communications? Is your solution working on several database servers, or possibly using a cluster-based architecture?
Maintenance.
How much is the application required to change? Do you have personnel specifically trained to maintain the solution?
Change Management.
Do you see your database technology changing in the short, medium or long term? Do you foresee needing to migrate the solution frequently?
Cost
How much will it cost to implement that solution using one or the other strategy?
The whole of those points will drive the answer, so you have to consider each of them when deciding whether or not to use either strategy. There are cases where stored procedures are better than application-layer-managed queries, and others where conducting queries from an application-layer-based solution is best.
Using stored procedures tends to be more adequate when:
Your database technology isn't expected to change in the short term.
Your database technology can handle parallelized operations, table partitions or other strategies for dividing the workload across several processors, memory and resources (clustering, grid).
Your database technology is fully integrated with the stored procedure definition language; that is, support is built into the database engine.
You have a development team who aren't afraid of using a procedural (3rd-generation) language to get a result.
The operations you want to perform are built in or supported inside the database (exporting XML data, managing data integrity and coherence appropriately with triggers, scheduled operations, etc.).
Portability isn't an important issue and you do not foresee a technology change in the short term in your organization; indeed, it may not even be desirable. Generally, portability is seen as a milestone by application-driven and layered-oriented developers. From my point of view, portability isn't an issue when your application isn't required to be deployed on several platforms, less so when there is no reason to make a technology change, or when the effort of migrating all the organizational data is higher than the benefit of making the change. What you can win by using an application-layer-driven approach (portability), you can lose in performance and value obtained from your database (why spend thousands of dollars on a Ferrari that you'll never drive over 60 miles/hr?).
Performance is an issue. First, in several cases you can achieve better results with a single stored procedure call than with multiple requests for data from another application. Moreover, some features you need may be built into your database, and using them is less expensive in terms of workload. When you use an application-layer-driven solution, you have to take into account the cost associated with making database connections, making calls to the database, network traffic, and data wrapping (i.e. with either Java or .NET, there is an implicit cost to JDBC/ADO.NET calls, as you have to wrap your data into objects that represent the database data, so instantiation has an associated cost in terms of processing, memory, and network when data comes from and goes to the outside).
Using application-layer-driven solutions tends to be more adequate when:
Portability is an important issue.
Application will be deployed onto several locations with only one or few database repositories.
Your application will use heavy business-oriented rules that need to be agnostic of the underlying database technology.
You have in mind changing technology providers based on market tendencies and budget.
Your database isn't fully integrated with the stored procedure language that calls it.
Your database's capabilities are limited and your requirements go beyond what you can achieve with your database technology.
Your application can tolerate the penalty inherent in external calls, is more transaction-based with business-specific rules, and has to abstract the database model into a business model for the users.
Parallelizing database operations isn't important; moreover, your database has no parallelization capabilities.
You have a development team which is not well trained in the database technology and is more productive using an application-driven technology.
I hope this helps anyone asking himself/herself which is better to use.
I would recommend you don't use stored procedures:
Their language in MySQL is very crappy
There is no way to send arrays, lists, or other types of data structure into a stored procedure
A stored procedure cannot ever change its interface; MySQL permits neither named nor optional parameters
It makes deploying new versions of your application more complicated - say you have 10x application servers and 2 databases, which do you update first?
Your developers all need to learn and understand the stored procedure language - which is very crap (as I mentioned before)
Instead, I recommend creating a layer/library and putting all your queries in there.
You can:
Update this library and ship it on your app servers with your app
Have rich data types, such as arrays, structures, etc. passed around
Unit test this library, instead of the stored procedures.
On performance:
Using stored procedures will decrease the performance of your application developers, which is the main thing you care about.
It is extremely difficult to identify performance problems within a complicated stored procedure (it is much easier for plain queries)
You can submit a query batch in a single chunk over the wire (if the CLIENT_MULTI_STATEMENTS flag is enabled), which means you don't incur any extra latency by not using stored procedures.
Application-side code generally scales better than database-side code
If your database is complex, and not a forum type with responses but true warehousing, SPs will definitely benefit you. You can put all your business logic in there, and not a single developer is going to care about it; they just call your SPs. I have been doing this; joining over 15 tables is not fun, and you cannot explain that to a new developer.
Developers also don't get access to the DB, which is great! Leave that to the database designers and maintainers. If you also decide that the table structure is going to change, you can hide this behind your interface. n-tier, remember??
High performance and relational DBs are not things that go together, not even with MySQL: InnoDB is slow, and MyISAM should have been thrown out of the window by now. If you need performance with a web app, you need a proper cache: memcache or others.
In your case, because you mentioned 'web', I would not use stored procedures; if it were a data warehouse I would definitely consider them (we use SPs for our warehouse).
Tip:
Since you mentioned a web project, have you ever thought about a NoSQL sort of solution? Also, you need a fast DB; why not use PostgreSQL? (Trying to advocate here...)
I used to use MySQL, and my understanding of SQL was poor at best. I have since spent a fair amount of time using SQL Server; I keep a clear separation between the data layer and the application layer, and I currently look after a server with 0.5 terabytes.
I have felt frustrated at times not using an ORM, as development with one is really quick; with stored procedures it is much slower. I think much of our work could have been sped up by using an ORM.
When your application reaches critical mass, the ORM's performance will suffer; a well-written stored procedure will give you your results faster.
As an example of performance, I collect 10 different types of data in an application, convert them to XML, and process that in the stored procedure; I make one call to the database rather than 10.
SQL is really good at dealing with sets of data. One thing that frustrates me is seeing someone get data from SQL in raw form and use application code to loop over the results and format and group them; that really is bad practice.
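A small example of what is meant by set-based, with invented table and column names; the grouping and averaging stay in SQL instead of being re-implemented in an application loop:

    -- Set-based: the database does the grouping and aggregation
    SELECT   department,
             COUNT(*)    AS headcount,
             AVG(salary) AS avg_salary
    FROM     employee
    GROUP BY department
    ORDER BY department;

    -- The bad practice: SELECT * FROM employee, then looping over
    -- every row in application code to group and average by hand.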
My advice is to learn and understand SQL well enough, and your applications will really benefit.
Lots of info here to confuse people; software development is evolutionary. What we did 20 years ago isn't best practice now. Back in the day, with classic client-server, you wouldn't dream of anything but SPs.
It is absolutely horses for courses. If you are a big organisation you will use multi-tier, and probably SPs, but you may care little about them because a dedicated team will be sorting them out.
The opposite is where I find myself, trying to quickly knock up a web app solution that fleshes out business requirements. It was super fast to leave the developer (remote from me) to knock up the pages and SQL queries while I defined the DB structure.
However, complexity is growing, and without an easy way to provide APIs, I am starting to use SPs to contain the business logic. I think it is working well and is sensible; I can control this because I can build the logic and provide a simple result set for my offshore developer to build a front end around.
Should my software prove a phenomenal success, more separation of concerns will occur, and different implementations of n-tier will come about, but for now SPs are perfect.
You should know all the toolsets available to you and match them wisely to where you are starting from. Unless you are building an enterprise system from the outset, fast and simple is best.
I would recommend that you stay away from DB specific Stored Procedures.
I've been through a lot of projects where they suddenly wanted to switch DB platform, and the code inside an SP is usually not very portable = extra work and possible errors.
Stored procedure development also requires the developer to have direct access to the SQL engine, whereas a normal connection can be changed by anyone on the project with code access only.
Regarding your Model/layer/tier idea: yes, stick with that.
Website calls Business layer (BL)
BL calls Data layer (DL)
DL calls whatever storage (SQL, XML, Webservice, Sockets, Textfiles etc.)
This way you can maintain the logic level between tiers. IF and ONLY IF the DL calls seem to be very slow, you can start to fiddle around with stored procedures, but maintain the original non-SP code somewhere, in case you suddenly need to transfer the DB to a whole new platform. With all the cloud hosting in the business, you never know what the next DB platform is going to be...
I keep a close eye on Amazon AWS for the very same reason.
I think there is a lot of misinformation floating around about database stored queries.
I would recommend using MySQL stored procedures if you're doing many static queries for data manipulation, especially if you're moving things from one table to another (e.g. moving from a live table to a historical table for whatever reason). There are drawbacks, of course, in that you'll have to keep a separate log of changes to them (you could in theory make a table that just holds changes to the stored procedures, which the DBAs update). If you have many different applications interfacing with the database, especially if, say, you have a desktop program written in C# and a web program in PHP, it might be more beneficial to have some of your procedures stored in the database, as they are platform independent.
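A minimal sketch of the live-to-historical move mentioned above, with invented table names; a production version would pin the row set (e.g. by primary key or a cutoff date) rather than repeat the WHERE clause:

    DELIMITER //
    -- Move closed orders from the live table to the history table
    CREATE PROCEDURE archive_closed_orders()
    BEGIN
        START TRANSACTION;
        INSERT INTO orders_history
            SELECT * FROM orders WHERE status = 'closed';
        DELETE FROM orders WHERE status = 'closed';
        COMMIT;
    END //
    DELIMITER ;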
This website has some interesting information on it you may find useful.
https://www.sitepoint.com/stored-procedures-mysql-php/
As always, build in a sandbox first, and test.
Try to update 100,000,000 records on a live system from a framework, and let me know how it goes. For small apps, SPs are not a must, but for large, serious systems they are a real asset.

SQL Assemblies vs Application code for complicated queries on large XML columns

I have a table with a few relational columns and one XML column which sometimes holds a fairly large chunk of data. I also have a simple webservice which uses the database. I need to be able to report on things like all the instances of a certain element within the XML column, a list of all the distinct values for a certain element, things like that.
I was able to get a list of all the distinct values for an element, but didn't get much further than that. I ended up writing incredibly complex T-SQL code to do something that seems pretty simple in C#: go through all the rows in this table, and apply this ( XPath | XQuery | XSLT ) to the XML column. I can filter on the relational columns to reduce the amount of data, but this is still a lot of data for some of the queries.
My plan was to embed an assembly in SQL Server (I'm using 2008 SP2) and have it create an indexed view on the fly for a given query (I'd have other logic to clean this view up). This would allow me to keep the network traffic down, and possibly also allow me to use tools like Excel and MSRS reports as a cheap user interface, but I'm seeing a lot of people saying "just use application logic rather than SQL assemblies". (I could be barking entirely up the wrong tree here, I guess).
Grabbing the big chunk of data to the web service and doing the processing there would have benefits as well - I'm less constrained by the SQL Server environment (since I don't live inside it) and my setup process is easier. But it does mean I'm bringing a lot of data over the network, storing it in memory while I process it, then throwing some of it away.
Any advice here would be appreciated.
Thanks
Edit:
Thanks guys, you've all been a big help. The issue was that we were generating a row in the table for each file, each file could have multiple results, and we would do this each time we ran a particular build job. I wanted to flatten this out into a table view.
Each execution of this build job checked thousands of files for several attributes, and in some cases each of these tests generated thousands of results (MSIVAL tests were the worst culprit).
The answer (duh!) is to flatten it out before it goes into the database! Based on your feedback, I decided to try creating a row for each result of each test on each file, with the XML holding only the details of that one result; this made the query much simpler. Of course, we now have hundreds of thousands of rows each time we run this tool, but the performance is much better. I now have a view which creates a flattened version of one of the classes of results emitted by the build job; it returns >200,000 rows in <5 seconds, compared to around 3 minutes for the equivalent (complicated) query before I went the flatter route, and between 10 and 30 minutes for the XML file processing of the old (non-database) version.
I now have some issues with the number of times I connect, but I have an idea of how to fix that.
Thanks again! +1's all round
I suggest using the standard XML tools in T-SQL (http://msdn.microsoft.com/en-us/library/ms189075.aspx). If you don't wish to use those, I would recommend processing the XML on another machine.
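For reference, a minimal T-SQL sketch of shredding an XML column with the nodes()/value() methods from that page; the table and element names are invented:

    -- Turn one element per row of the XML column into a relational rowset
    SELECT  t.id,
            r.item.value('(@name)[1]',  'varchar(100)') AS result_name,
            r.item.value('(text())[1]', 'varchar(max)') AS result_value
    FROM    dbo.BuildResults AS t
    CROSS APPLY t.ResultsXml.nodes('/results/result') AS r(item);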
SQLCLR is perfect for smaller functions, but with the restrictions on the usable methods it tends to become an exercise in frustration once you are trying to do more advanced things.
What you're asking about is really a huge balancing act and it totally depends on several factors. First, what's the current load on your database? If you're running this on a database that is already under heavy load, you're probably going to want to do this parsing on the web service. XML shredding and querying is an incredibly expensive procedure in SQL Server, especially if you're doing it on un-indexed columns that don't have a schema defined for them. Schemas and indexes help with this processing overhead, but they can't eliminate the fact that XML parsing isn't cheap. Secondly, the amount of data you're working with. It's entirely possible that you just have too much data to push over the network. Depending on the location of your servers and the amount of data, you could face insurmountable problems here.
Finally, what are the relative specs of your machines? If your web service machine has low memory, it's going to be thrashing data in and out of virtual memory trying to parse the XML which will destroy your performance. Maybe you're not running the most powerful database hardware and shredding XML is going to be performance prohibitive for the CPU you've got on your database machine.
At the end of the day, the only way to really know is to try both ways and figure out what makes sense for you. Doing the development on your web services machine will almost undoubtedly be easier, as LINQ to XML is a more elegant way of parsing through XML than XQuery shoehorned into T-SQL. My indication, given the information you provided in your question, is that T-SQL is going to perform better for you in the long run, because you're doing XML parsing on every row, or at least most rows, in the database for reporting purposes. Pushing that kind of information over the network is just ugly. That said, if performance isn't that important, there's something to be said for taking the easier and more maintainable route of doing all the parsing on the application server.

How can I integrate advanced computations into a database field?

My biological research involves the measurement of a cellular structure as it changes length throughout the course of observation (capturing images every minute for several hours). As my data sets have become larger I am trying to store them in an Access database, from which I would like to perform various queries about their changes in size.
I know that the SELECT statement can incorporate some mathematical operations, but I have been unable to incorporate many of my necessary calculations (probably due to my lack of knowledge). For example, one calculation involves determining the rate of change during specifically defined periods of growth. This calculation is entirely dependent on the raw data saved in the table, therefore I didn't think it would be appropriate to just calculate it in Excel prior to entry into the field.
So my question is, what would be the most appropriate method of performing this calculation? Should I attempt to string together a huge SELECT calculation in my query, or is there a way to use another language (I know Perl?) which can be called to populate the new query field?
I'm not looking for someone to write the code, just to say where it is appropriate to incorporate each step. Also, I am currently using Office Access, but would be interested in any MySQL answers as I may be moving to that platform at a later date. Thanks all!
You could encapsulate your logic and maths in a custom function in VBA and then call that in your SELECT statement. This methodology would also work with other database engines, but the exact wording might be slightly different.
Doing it in SQL will be a lot faster, but much harder to debug (I'm guessing that you're looking at things like ANOVA, t-tests, chi^2, etc.).
Having said that, you may want to store and calculate interim values, like the delay since the previous measurement and the change in measurement.
OTOH, the metrics you describe are very simple to do in SQL:
one calculation involves determining the rate of change during specifically defined periods of growth
C.
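A minimal sketch of that rate-of-change query, assuming an invented measurement table keyed by a per-minute sequence number; the same self-join works in both Access SQL and MySQL:

    -- Change in length between each observation and the previous one
    SELECT      cur.structure_id,
                cur.minute_index,
                cur.length_um - prev.length_um AS delta_per_minute
    FROM        measurement AS cur
    INNER JOIN  measurement AS prev
            ON  prev.structure_id = cur.structure_id
           AND  prev.minute_index = cur.minute_index - 1;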
If you need to populate a field with the result of a complex calculation, it will be easier to do the calculation in a full-fledged programming language, such as Perl, instead of trying to do it in SQL.
Perl has a very good database API, "DBI", with drivers for just about every database engine known to man. Here's a good short article on DBI:
http://www.perl.com/pub/a/1999/10/DBI.html
You would be at the mercy of the SQL implementation for calculation options, precision, etc. Better to use a separate language where you have control, extensibility and flexibility to store the results you want and need.
I'd propose that you use Access as a front-end tool for entering, editing and printing your data. You can store the data in any back-end database engine (MySQL, SQL Server, etc.), though Jet/ACE (the default Access database engine) is likely to be completely adequate unless your data set gets very, very large (it's limited to 2GBs but you don't really want to continue using it if your data grows to much over 1GB during regular usage).
For complex statistical analysis, though, I'd recommend considering exporting the data and using a proper statistics package for doing the analysis. This means your reporting might all be done from there.
In that situation, you could leverage Access's capabilities in allowing you to create an interface for selecting the datasets you wanted to export for analysis. The last time I did this for a client, they were using SPSS for the data analysis and I built them a very flexible export interface (they could choose any variables they liked for analysis).
Whether this is a helpful alternative depends on the extent and type of the analysis you're going to do. If you're using a lot of functions that Access VBA lacks and have to borrow them from Excel or write replacements for them, then you might be better off doing all of that in some other program.
Also, it may be that some or many or all of your calculations belong in the presentation layer and not in SQL. Access reports have a lot of capabilities here, and if you're summarizing data, it may be best done at that level, rather than in the SQL recordsources underlying your reports.

Large Analytics Database Responsive Retrieval (MYSQL)

I want to create a 'google analytics' type application for the web - i.e. a web-based tool to do some reporting and graphing for my database. The problem is that the database is HUGE, so I can't do the queries in real time because they will take too long and the tool will be unresponsive.
How can I use a cron job to help me? What is the best way to make my graphs responsive? I think I will need to denormalize some of my database tables, but how do I make those queries faster? What intermediate values can I store in another database table to make things quicker?
Thanks!
Business Intelligence (BI) is a pretty mature discipline - and you'll find answers to your questions in any book on scaling databases for reporting & data warehousing.
A high-level list of tactics would include:
partitioning (because indexes are little help for most reporting)
summary tables (usually generated through a batch process submitted via cron; see the sketch at the end of this answer)
you need a good optimizer (some databases, like MySQL, don't have one, and so make poor joining decisions)
query parallelism (some databases will provide linear speedups just by splitting your query into multiple threads)
star-schema - a good data model is crucial to good performance
In general dynamic reporting beats the pants off static reporting - so if you're after powerful reporting I'd just try to copy data into an appropriate model, use aggregates, possibly change the database to get a good optimizer and the appropriate features rather than run reports in batch.
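As a minimal sketch of the summary-table tactic (all names invented): the cron job periodically re-aggregates recent raw rows, and the web tool reads only the small summary table:

    CREATE TABLE pageviews_hourly (
        hour_start DATETIME NOT NULL PRIMARY KEY,
        views      INT      NOT NULL
    );

    DELIMITER //
    CREATE PROCEDURE refresh_pageviews_hourly()
    BEGIN
        -- Recompute only the recent hours; REPLACE overwrites by primary key
        REPLACE INTO pageviews_hourly (hour_start, views)
        SELECT DATE_FORMAT(viewed_at, '%Y-%m-%d %H:00:00'), COUNT(*)
        FROM   pageviews
        WHERE  viewed_at >= NOW() - INTERVAL 2 HOUR
        GROUP  BY DATE_FORMAT(viewed_at, '%Y-%m-%d %H:00:00');
    END //
    DELIMITER ;

    -- crontab entry on the database or batch host:
    -- 5 * * * * mysql analytics -e "CALL refresh_pageviews_hourly()"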