I have a website on a shared host, where I expect a lot of visitors. I don't need a database for reading (everything presented on the pages is hardcoded in PHP) but I would like to store data that my users enter, so for writing only. In fact, I only store this to do a statistical analysis on it afterwards (on my local computer, after downloading it).
So my two questions:
Is MySQL a viable option for this? It is meant to run on shared hosting with PHP/MySQL available, so I cannot really use many other fancy packages, but if e.g. writing to a file would be better for this purpose, that's possible too. As far as I understand, appending a line to a huge file is a relatively expensive operation. On the other hand, 100+ users simultaneously connecting to a MySQL database is probably also a heavy load, even if each of them only runs one cheap INSERT query.
If MySQL is a good option, how should the table best be configured? Currently I have one InnoDB table with an auto-incrementing primary key id (in addition, of course, to the columns storing the data). This is a general-purpose configuration, so maybe there are more optimized setups given that I only need to write to the table, not read from it?
Edit: I mainly fear that the website will go viral as soon as it's released, so I expect the users to visit in a very short timeframe. And of course I would not like to lose the data that they enter due to an overloaded database.
MySQL is a perfectly reasonable choice for this. Probably much better than a flat file, since you say you want to aggregate and analyze this data later. Doing so with a flat file might take a long time, especially if the file is large. Additionally, an RDBMS is built for aggregation and dataset manipulation, which is ideal for producing report data.
Put whatever data columns you want in your table, plus some kind of identifier to track a user, in addition to your existing row key. IP address is a logical choice for user tracking, or a magic cookie value could potentially work. It's only a single table, so you don't need to think too hard about it. You may want to add nonclustered (secondary) indexes on columns you'll frequently filter on for reports, e.g. IP address, access date, etc.
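A minimal sketch of what such a table could look like in MySQL (the table and column names below are just placeholders, not anything prescribed by the question):

-- Hypothetical names; adjust to the data actually being collected.
CREATE TABLE submissions (
    id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ip_address   VARCHAR(45)  NOT NULL,   -- long enough for IPv6 in text form
    submitted_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
    answer       VARCHAR(255) NOT NULL
) ENGINE=InnoDB;

-- A secondary index only helps the later reporting queries and slightly
-- slows down inserts, so it can also be added just before the analysis phase.
CREATE INDEX idx_submitted_at ON submissions (submitted_at);

Since the table is effectively write-only while the site is live, keeping secondary indexes to a minimum keeps each INSERT as cheap as possible.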
I mainly fear that the website will go viral as soon as it's released, so I expect the users to visit in a very short timeframe. And of course I would not like to lose the data that they enter due to an overloaded database.
RDBMS such as MySQL are explicitly designed to handle heavy loads, assuming appropriate hardware backing. Don't sweat it.
My site has a MySQL database with about 50 tables. I work hard to make it as safe and secure as possible.
Per our development plan, we will be adding a forum in the not too distant future.
I'm unsure about whether it is better to have the forum in its own database, or to insert all its tables into our existing database. I've listed the pros and cons of both approaches below as I understand them, and would appreciate some advice from those more knowledgeable and experienced than I, which is nearly all of you :-)
Merged into Existing Database
Pros
integrating forum data into existing site is easier (example: using forum thread tags to match threads to site pages and automatically display links to relevant discussions)
can merge existing users table into forum so users need not re-register to begin using the forum
all-in-one backups
Cons
I've instantly added a huge amount of new code, some of which has database access, and all of which is a much higher profile target for shenanigans, meaning my original database is now placed at much more risk of attack
updating the forum software will be more hands-on, as it will not be a straight database flop
Separate Databases for Forum and Main Site
Pros
easy install, testing, upgrade, tear down of forum
forum database security holes don't place my main site at risk (and vice-versa)
Cons
integration into existing site requires querying two databases at once. I suspect this would be rather more difficult to program.
users would have to re-register on the forum
backing up 2 databases rather than one (this is a minor con, but it is a con)
Your thoughts? :-)
Querying from 2 databases:
select db1.a.field1, db2.b.field2 from db1.a
inner join db2.b on (db1.a.id = db2.b.id);
Just make sure your connection string has access to both databases.
And both databases need to be on the same MySQL server.
The approach that has proven itself for me is:
Install the forum as a separate system
Write a thin layer to share logins (if both use OpenID or something similar, be happy)
As time goes on, I slowly and carefully merge the two systems where it makes sense; usually it does not. I love to share data between the two databases using views.
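For example, sharing the main site's user table with the forum via a view could look something like this in MySQL-style syntax (the database, table, and column names are hypothetical; both databases have to live on the same server):

-- Expose the main site's users to the forum code as a read-only view.
CREATE VIEW forum.site_users AS
SELECT id, username, email
FROM site.users;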
Merged into Existing Database
Pros
Integrating forum data into existing site is easier... [Nope. From a coding perspective there isn't any difference between running a query on one database versus another. Also, your queries themselves can cross databases.]
can merge existing users table into forum so users need not re-register to begin using the forum. [Nope. Yes, you can do this, but you could do it even if the forum tables aren't in this database, so it's a wash.]
all-in-one backups. [I think you're grasping at straws here. Whether one database or two, the backup procedures are the same. The only difference is you have 1 or 2 files.]
Cons
I've instantly added a huge amount of new code, some of which has database access, and all of which is a much higher profile target for shenanigans, meaning my original database is now placed at much more risk of attack. [Maybe. If the new code uses dynamic SQL and/or fails to use parameterized queries, then it's screwed regardless (see the sketch at the end of this answer). Further, if your data layer gives the account the queries execute under full access to your server, which unfortunately seems to be par for the course in most applications, then it doesn't matter whether the tables are in the same database or not. Interestingly, the MySQL site was cracked in this manner a month ago.]
updating the forum software will be more hands-on, as it will not be a straight database flop. [? I'm not entirely sure what you mean by this. I've never heard the term "flop" used in this context.]
Separate Databases for Forum and Main Site
Pros
easy install, testing, upgrade, tear down of forum [No. You have the same issues regardless of which database it lives in.]
forum database security holes don't place my main site at risk (and vice-versa). [depends on the types of holes and exactly how security was implemented]
Cons
integration into existing site requires querying two databases at once. I suspect this would be rather more difficult to program. [It's not. It has exactly the same level of complexity. Also, your queries can cross databases.]
users would have to re-register on the forum [Nope. You can reuse the same user table from the other database.]
backing up 2 databases rather than one (this is a minor con, but it is a con). [I would disagree, but then again we have dozens of databases on our servers and all of our backups are automated. Heck, as soon as we create one, the maintenance plans automatically add it to the nightly backup schedule, so it's not even a thought.]
Quite frankly, I'd say the only potential issue is in how the new forum stuff accesses the database and exactly what user rights that account needs in order to do its job. If done right, then there is no issue; but if it's done way wrong, then the only real protection would be to place the forum software on its own database server... and even then it might cause problems.
But this should be identified by a proper security audit.
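To illustrate the parameterized-query point raised above, here is a sketch using MySQL server-side prepared statements (the table and column names are made up; in application code you would normally use your driver's placeholder mechanism rather than raw PREPARE):

-- The user-supplied value is passed as a parameter, never spliced into the
-- SQL text, so it cannot change the structure of the query.
PREPARE find_user FROM
    'SELECT id, username FROM forum_users WHERE username = ?';
SET @name = 'alice';   -- would come from the application
EXECUTE find_user USING @name;
DEALLOCATE PREPARE find_user;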
Our company has a database of 17,000 entries. We have used MS Access for over 10 years for our various mailings. Is there something new and better out there? I'm not a techie, so keep that in mind when answering. Our problems with Access are:
- no record of what was deleted,
- will not turn up a name in a search if capitalization or punctuation is not entered exactly,
- the de-duping process is complicated for us to understand,
- we'd like a more nimble program that we can access from more than one dedicated computer.
The only applications I know of that are comparable to Access are FileMaker Pro and the database component of the OpenOffice suite (Base). FM Pro is a full-fledged product and gets good marks for ease of use from non-technical users, while Base is much less robust and is not nearly as easy for creating an application.
All of the answers recommending different databases completely miss the point here -- the original question is about a data store plus application builder, not just the data store.
To the specific problems:
PROBLEM 1: no record of what was deleted
This is a design error, not a flaw in Access. There is no database that really keeps a record of what's deleted unless someone programs logging of deleted data.
But backing up a bit: if you are asking this question, it suggests that you've got people deleting things that shouldn't be deleted. There are two solutions:
regular backups. That would mean you could restore data from the last backup and likely recover most of the lost data. You need regular backups with any database, so this is not really something that is specific to Access.
design your database so records are never deleted, just marked deleted and then hidden in data entry forms and reports, etc. This is much more complicated, but is very often the preferred solution, as it preserves all the data.
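A minimal sketch of the second approach in Jet/ACE SQL (the table and column names are invented for illustration):

-- Add a flag instead of ever deleting rows.
ALTER TABLE MailingList ADD COLUMN IsDeleted YESNO;

-- Base forms and reports on a query that hides the flagged rows.
SELECT *
FROM MailingList
WHERE IsDeleted = False;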
Problem #2: will not turn up a name in a search if capitalization or punctuation is not entered exactly
There are two parts to this, one of which is understandable, and the other of which makes no sense.
punctuation -- databases are stupid. They can't tell that Mr. and Mister are the same thing, for instance. The solution to this is that for all data that needs to be entered in a regularized fashion, you use all possible methods to ensure that the user can only enter valid choices. The most common control for this is a dropdown list (i.e., "combo box"), which limits the choices the user has to the ones offered in the list. It ensures that all the data in the field conforms to a finite set of choices. There are other ways of maintaining data regularity, and one of those involves normalization. That process avoids the issue of repeatedly storing, say, a company name in multiple records -- instead you'd store the companies in a different table and just link your records to a single company record (usually done with a combo box, too; a short sketch of this follows after the next point). There are other controls that can be used to help ensure regularity of data entry, but that's the easiest.
capitalization -- this one makes no sense to me, as Access/Jet/ACE is completely case-insensitive. You'll have to explain more if you're looking for a solution to whatever problem you're encountering, as I can't conceive of a situation where you'd actually not find data because of differences in capitalization.
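As a concrete sketch of the company-lookup normalization mentioned under the punctuation point (Jet/ACE-style DDL; all names are hypothetical):

-- Each company is stored exactly once...
CREATE TABLE Companies (
    CompanyID   AUTOINCREMENT PRIMARY KEY,
    CompanyName TEXT(100) NOT NULL
);

-- ...and each contact just points at a company record, typically chosen
-- from a combo box bound to the Companies table.
CREATE TABLE Contacts (
    ContactID  AUTOINCREMENT PRIMARY KEY,
    LastName   TEXT(50),
    FirstName  TEXT(50),
    CompanyID  LONG,
    CONSTRAINT fkContactsCompany FOREIGN KEY (CompanyID)
        REFERENCES Companies (CompanyID)
);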
Problem #3: the de-duping process is complicated for us to understand
De-duping is a complicated process, because it's almost impossible for the computer to figure out which record among the candidates is the best one to keep. So, you want to make sure your database is designed so that it is impossible to accidentally introduce duplicate records. Indexing can help with this in certain kinds of situations, but when mailing lists are involved, you're dealing with people data which is almost impossible to model in a way where you have a unique natural key that will allow you to eliminate duplicates (this, too, is a very complicated topic).
So, you basically have to have a data entry process that checks the new record against the existing data and informs the user if there's a duplicate (or near match). I do this all the time in my apps where the users enter people -- I use an unbound form where they type in the information that is the bare minimum to create a new record (usually some combination of lastname, firstname, company and email), and then I present a list of possible matches. I do strict and loose matching and rank by closeness of the match, with the closer matches at the top of the list.
Then the user has to decide if there's a match, and is offered the opportunity to create the duplicate anyway (it's possible to have two people with the same name at the same company, of course), or to abandon adding the new record and instead go to one of the existing records that was presented as a possible duplicate.
This leaves it up to the user to read what's onscreen and make the decision about what is and isn't a duplicate. But it maximizes the possibility of the user knowing about the dupes and never accidentally creating a duplicate record.
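A sketch of what the strict-plus-loose duplicate check could look like as an Access query (the form, control, field, and table names are invented; real ranking logic would be richer):

-- Rank 1: exact last name + email. Rank 2: last name merely starts the same.
-- Exact matches also appear in the loose branch; the form simply shows the
-- list ordered by rank so the closest candidates sit at the top.
SELECT ContactID, LastName, FirstName, Email, 1 AS MatchRank
FROM Contacts
WHERE LastName = [Forms]![frmNewContact]![txtLastName]
  AND Email = [Forms]![frmNewContact]![txtEmail]
UNION ALL
SELECT ContactID, LastName, FirstName, Email, 2 AS MatchRank
FROM Contacts
WHERE LastName LIKE [Forms]![frmNewContact]![txtLastName] & "*"
ORDER BY MatchRank, LastName, FirstName;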
Problem #4: We'd like a more nimble program that we can access from more than one dedicated computer.
This one confuses me. Access is multi-user out of the box (and has been from the very beginning, nearly 20 years ago). There is no limitation whatsoever to a single computer. There are things you have to do to make it work, such as splitting your database into two parts, one part with just the data tables, and the other part with your forms and reports and such (and links to the data tables in the other file). Then you keep the back end data file on one of the computers that acts as a server, and give a copy of the front end (the reports, forms, etc.) to each user. This works very well, actually, and can easily support a couple of dozen users (or more, depending on what they are doing and how well your database is designed).
Basically, after all of this, I would tend to second @mwolfe02's answer, and agree with him that what you need is not a new database, but a database consultant who can design for you an application that will help you manage your mailing lists (and other needs) without you needing to get too deep into the weeds learning Access (or FileMaker or whatever). While it might seem more expensive up front, the end result should be a big productivity boost for all your users, as well as an application that will produce better output (because the data is cleaner and better maintained thanks to the improved data entry systems).
So, basically, you either need to spend money upfront on somebody with technical expertise who would design something that allows you to do better work (and more efficiently), or you need to invest time in upping your own technical skills. None of the alternatives to Access are going to resolve any of the issues you've raised without significant investment in interface design to further the goals you have (cleaner data, easier to find information, etc.).
At the risk of sounding snide, what you are really looking for is a consultant.
In the hands of a capable programmer, all of your issues with Access are easily handled. The problems you are having are not the result of using the wrong tool, but using that tool less than optimally.
Actually, if you are not a techie then Access is already the best tool for you. You will not find a more non-techie friendly way to build a data application from bottom to top.
That said, I'd say you have three options at this point:
Hire a competent database consultant to improve your application
Find commercial off-the-shelf (COTS) software that does what you need (I'm sure there are plenty of products to handle mailings; you'll need to research)
Learn about database normalization and building proper MS Access applications
If you can find a good program that does what you want then #2 above will maximize your Return on Investment (ROI). One caveat is that you'll need to convert all of your existing data, which may not be easy or even possible. Make sure you investigate that before you buy anything.
While it may be the most expensive option up-front, hiring a competent database consultant is probably your best option if you need a truly custom solution.
SQL Server sounds like a viable alternative for your scenario. If cost is a concern, you can always use SQL Server Express, which is free. Full-blown SQL Server provides a lot more functionality that might not be needed right away. Express is a lot simpler, as the number of features provided with it is much smaller. With either version, though, you will have a centralized store for your data and the ability to have all transactions recorded in the transaction log. Also, both have the ability to import data from an Access database.
The newest version of SQL Server is 2008 R2
You probably want to take a look at modern databases. If you're into Microsoft-based products, start with SQL Server Express.
EDIT: However, since I understand that you're not a programmer yourself, you'd probably be better off having someone experienced look into your technical problem more deeply, like the other answer suggests.
It sounds like you may want to consider a front-end for your existing Access data store. Microsoft has yet to replace Access per se, but they do have a new tool that is a lot lower on the programming totem pole than some other options. Check out Visual Studio Lightswitch - http://www.microsoft.com/visualstudio/en-us/lightswitch.
It's fairly new (still in beta) but showing potential. With it, just as with any Visual Studio project, you can connect to an MS Access datasource and design a front-end to interact with it. The plus-side here is programming requirements are much lower than with straight-up Visual Studio (read: Wizards).
Given that replacing your Access DB will require some front-end programming, you may look into VistaDB. It should allow your front end to be created in .NET with an XCopy database on the backend without requiring a server. One plus is that it retains SQL Server syntax, so if you do decide to move to SQL Server you'll be one step ahead.
(Since you're not a techie and may not understand my previous statement, you might pass my answer on to the consultant/programmer/database guy who is going to do the work for you.)
http://www.vistadb.net/
We're thinking of "growing" a little MS Access DB with a few tables, forms and queries for multiple users. (Using a different back-end is another, more long-term option that is unfortunately not currently acceptable.)
Most users will be read-only, but there will be a few (currently one or two) users that have to be able to do changes (while the read-only users are also using the DB). We're not so much concerned about the security aspects, but more about some of the following issues:
How can we make sure that the write-user can make changes to the table data while other users use the data? Do the read-users put locks on tables? Does the write-user have to put locks on the table? Does Access do this for us or do we have to explicitly code this?
Are there any common problems with "MS Access transactions" that we should be aware of?
Can we work on forms, queries etc. while they are being used? How can we "program" without being in the way of the users?
Which settings in MS Access have an influence on how things are handled?
Our background is mostly in Oracle, where is Access different in handling multiple users? Is there such thing as "isolation levels" in Access?
Any tips or pointers to helpful articles would be greatly appreciated.
I find the answers to this question to be problematic, confusing and incomplete, so I'll make an effort to do better.
Q1: How can we make sure that the write-user can make changes to the table data while other users use the data? Do the read-users put locks on tables? Does the write-user have to put locks on the table? Does Access do this for us or do we have to explicitly code this?
Nobody has really answered this in any complete fashion. The information on setting locks in the Access options has nothing to do with read vs. write locking. No Locks vs. All Records vs. Edited Record is how you set the default record locking for WRITES.
No Locks means you are using OPTIMISTIC locking, which means you allow multiple users to edit the record and then inform them after the fact if the record has changed since they launched their own edits. Optimistic locking is what you should start with, as it requires no coding to implement, and for small user populations it hardly ever causes a problem.
All Records means that the whole table is locked any time an edit is launched.
Edited Record means that fewer records are locked, but whether or not it's a single record or more than one record depends on whether your database is set up to use record-level locking (first added in Jet 4) or page-level locking. Frankly, I've never thought it worth the trouble to set up record-level locking, as optimistic locking takes care of most of the problems.
One might think that you want to use record-level pessimistic locking, but the fact is that in the vast majority of apps, two users are almost never editing the same record. Now, obviously, certain kinds of apps might be exceptions to that, but if I ran into such an app, I'd likely try to engineer it away by redesigning the schema so that it would be very uncommon for two users to edit the same record (usually by going to some form of transactional editing instead, where changes are made by adding records, rather than editing the existing data).
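A tiny sketch of that "transactional editing" idea in Jet/ACE-style DDL (all names hypothetical): rather than two users updating the same balance field, every change becomes a new row and the current value is derived.

-- Every change is an insert, so concurrent users never edit the same record.
CREATE TABLE AccountAdjustments (
    AdjustmentID AUTOINCREMENT PRIMARY KEY,
    AccountID    LONG NOT NULL,
    Adjustment   CURRENCY NOT NULL,
    EnteredAt    DATETIME
);

-- The "current" balance is computed, not stored.
SELECT AccountID, SUM(Adjustment) AS Balance
FROM AccountAdjustments
GROUP BY AccountID;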
Now, for your actual question, there are a number of ways to accomplish restricting some users to read-only and granting others write privileges. Jet user-level security was intended for this purpose and works fine insofar as it's "security" for any meaningful definition of the term. In general, as long as you're using a Jet/ACE data store, the best security you're going to get is that provided by Jet ULS. It's crackable, yes, but your users would be committing a firable offense by breaking it, so it might be sufficient.
I would tend to not implement Jet ULS at all and instead just architect the data editing forms such that they checked the user's Windows logon and made the forms read-only or writable depending on which users are supposed to get which access. Whether or not you want to record group membership in a data table, or maintain Windows security groups for this purpose is up to you. You could also use a Jet workgroup file to deal with it, and provide a different system.mdw file for the write users. The read-only users would log on transparently as admin, and those logged on as admin would be granted only read-only access. The write users would log on as some other username (transparently, in the shortcut you provide them for launching the app, supplying no password), and that would be used to set up the forms as read or write.
If you use Jet ULS, it can become really hairy to get it right. It involves locking down all the tables as read-only (or maybe not even that) and then using RWOP queries to provide access to the data. I've built only one such app in my 14 years of professional Access development.
To summarize my answers to the parts of your question:
How can we make sure that the write-user can make changes to the table data while other users use the data?
I would recommend doing this in the application, setting forms to read/only or editable at runtime depending on the user logon. The easiest approach is to set your forms to be read-only and change to editable for the write users when they open the form.
Do the read-users put locks on tables?
Not in any meaningful sense. Jet/ACE does have read locks, but they are there only for the purpose of maintaining state for individual views, and for refreshing data for the user. They do not lock out write operations of any kind, though the overhead of tracking them theoretically slows things down. It's not enough to worry about.
Does the write-user have to put locks on the table?
Access in combination with Jet/ACE does this for you automatically, particularly if you choose optimistic locking as your default. The key point here is that Access apps are databound, so as soon as a form is loaded, the record has a read lock, and as soon as the record is edited, whether or not it is write-locked for other users is determined by whether you are using optimistic or pessimistic locking. Again, this is the kind of thing Access takes care of for you with its default behaviors in bound forms. You don't worry about any of it until the point at which you encounter problems.
Does Access do this for us or do we have to explicitly code this?
Basically, other than setting editability at runtime (according to who has write access), there is no coding necessary if you're using optimistic locking. With pessimistic locking, you don't have to code, but you will almost always need to, as you can't just leave the user stuck with the default behaviors and error messages.
Q2: Are there any common problems with "MS Access transactions" that we should be aware of?
Jet/ACE has support for commit/rollback transactions, but it's not clear to me if that's what you mean in this question. In general, I don't use transactions except for maintaining atomicity, e.g., when creating an invoice, or doing any update that involves multiple tables. It works about the way you'd expect it to but is not really necessary for the vast majority of operations in an Access application.
Perhaps one of the issues here (particularly in light of the first question) is that you may not quite grasp that Access is designed for creating apps with data bound to the forms. "Transactions" is a topic of great importance for unbound and stateless apps (e.g., browser-based), but for data bound apps, the editing and saving all happens transparently.
For certain kinds of operations this can be problematic, and occasionally it's appropriate to edit data in Access with unbound forms. But that's very seldom the case, in my experience. It's not that I don't use unbound forms -- I use lots of them for dialogs and the like -- it's just that my apps don't edit data tables with unbound forms. With almost no exceptions, all my apps edit data with bound forms.
Now, unbound forms are actually fairly easy to implement in Access (particularly if you name your editing controls the same as the underlying fields), but going with unbound data editing forms is really missing the point of using Access, which is that the binding is all done for you. And the main drawback of going unbound is that you lose all the record-level form events, such as OnInsert, BeforeUpdate and so forth.
Q3. Can we work on forms, queries etc. while they are being used? How can we "program" without being in the way of the users?
This is one of the questions that's been well-addressed. All multi-user or replicated Access apps should be split, and most single-user apps should be, too. It's good design and also makes the apps more stable, as only the data tables end up being opened by more than one user at a time.
Q4. Which settings in MS Access have an influence on how things are handled?
"Things?" What things?
Q5. Our background is mostly in Oracle, where is Access different in handling multiple users? Is there such thing as "isolation levels" in Access?
I don't know anything specifically about Oracle (none of my clients could afford it even if they wanted to), but asking for a comparison of Access and Oracle betrays a fundamental misunderstanding somewhere along the line.
Access is an application development tool.
Oracle is an industrial strength database server.
Apples and oranges.
Now, of course, Access ships with a default database engine, originally called Jet and now revised and renamed ACE, but there are many levels at which Access the development platform can be entirely decoupled from Jet/ACE, the default database engine.
In this case, you've chosen to use a Jet/ACE back end, which will likely be just fine for small user populations, i.e., under 25. Jet/ACE can also be fine up to 50 or 100, particularly when only a few of the simultaneous users have write permission. While the 255-user limit in Jet/ACE includes both read-only and write users, it's the number of write users that really controls how many simultaneous users you can support, and in your case, you've got an app with mostly read-only users, so it oughtn't be terribly difficult to engineer a good app that has no problems with the back end.
Basically, I think your Oracle background is likely leading you to misunderstand how to develop in Access, where the expected approach is to bind your forms to recordsources that are updated without any need to write code. Now, for efficiency's sake it's a good idea to bind your forms to subsets of records, rather than to whole tables, but even with an entire table in the recordsource behind a data editing form, Access is going to be fairly efficient in editing Jet/ACE tables (the old myth about pulling the whole table across the wire is still out there) as long as your data tables are efficiently indexed.
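For instance, binding a form to a subset rather than the whole table can be as simple as a recordsource like the following (the form, field, and table names are made up):

-- Only the selected customer's orders are retrieved for the form.
SELECT OrderID, OrderDate, Amount
FROM Orders
WHERE CustomerID = [Forms]![frmCustomers]![CustomerID];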
Record locking is something you mostly shouldn't have any cause to worry about, and one of the reasons for that is because of bound editing, where the form knows what's going on in the back end at all times (well, at intervals about a second apart, the default refresh interval). That is, it's not like a web page where you retrieve a copy of the data and then post your edits back to the server in a transaction completely unconnected to the original data retrieval operation. In a bound environment like Access, the locking file on the back-end data file is always going to be keeping track of the fact that someone has the record open for editing. This prevents a user's edits from stomping on someone else's edits, because Access knows the state and informs the user. This all happens without any coding on the part of the developer and is one of the great advantages of the bound editing model (aside from not having to write code to post the edits).
For all those who are experienced database programmers familiar with other platforms who are coming to Access for the first time, I strongly suggest using Access like an end user. Try out all the point and click features. Run the form and report wizards and check out the results that they produce. I can't vouch for all of them as demonstrating good practices, but they definitely demonstrate the default assumptions behind the way Access is intended to be used.
If you find yourself writing a lot of code, then you're likely missing the point of Access.
The first thing to do (if not already done) is to split your database into a front end (with all the forms, reports, etc.) and a back end (with all the data). The second thing is to set up version control on the front end.
The way I have done that in a lot of my databases is to have the users run a small “jumper” database to open the main database. This jumper does the following things:
• Checks to see if the user has the database on their C drive
• If they do not then install and run
• If they do then check what version they have
• If the version numbers do not match then copy down the latest version
• Open the database
This whole checking process normally takes under half a second. Using this model you can do all your development in a separate database; then, when you are ready to “release”, you just put the new MDE up on the network share, and the next time the user opens the jumper the latest version is copied down.
There are also other things to think about in a multi-user database, and it might be worth checking for the common mistakes, such as binding a form to a whole table, etc.
I think Access is the best choice for your case. But you have to split the database; see:
http://accessblog.net/2005/07/how-to-split-database-into-be-and-fe.html
•How can we make sure that the write-user can make changes to the table data while other users use the data? Do the read-users put locks on tables? Does the write-user have to put locks on the table? Does Access do this for us or do we have to explicitly code this?
There are no read locks unless you add them explicitly. Just use "No Locks".
•Are there any common problems with "MS Access transactions" that we should be aware of?
There should not be problems with 1-2 write users.
•Can we work on forms, queries etc. while they are being used? How can we "program" without being in the way of the users?
If you split the database, then there is no problem working on the FE design.
•Which settings in MS Access have an influence on how things are handled?
What do you mean?
•Our background is mostly in Oracle, where is Access different in handling multiple users? Is there such thing as "isolation levels" in Access?
There are no isolation levels in Access.
BTW, you can later move the data to Oracle and keep the Access front end if you end up with a lot of users and a big database.
Table or record locking is available in Access during data writes. You can control the Default record locking through Tools | Options | Advanced tab:
No Locks
All Records
Edited Record
You can set this on a form's Record Locks or in your DAO/ADO code for specific needs.
Transactions shouldn't be a problem if you use them correctly.
Best practice: separate your tables from all your other code. Give each user their own copy of the code file and then share the data file on a network server. Work on a 'test' copy of the code (and a link to a test data file) and then update users' individual code files separately. If you need to make data file changes (add tables, columns, etc.), you will have to have all users exit the application before making the changes.
See other answers for Oracle comparison.
The correct way of building client/server Microsoft Access applications where the data is stored in an RDBMS is to use the linked table method. This ensures data isolation and concurrency are maintained between the Microsoft Access client application and the RDBMS data with no additional, unnecessary programming logic and code, which would make maintenance more difficult and add to development time.
see: http://claysql.blogspot.com/2014/08/normal-0-false-false-false-en-us-x-none.html
Access is a great multi-user database, with lots of built-in features to handle the multi-user situation; in fact, that is a large part of why it is so popular. There is an upper limit on how many users can all use the database at the same time doing updates and edits -- depending on how knowledgeable the developer is about Access and how the database has been designed, anywhere from 20 to approximately 50 users. Some Access databases can be built to handle up to 50 concurrent users, while many others can handle 20 or 25 concurrent users updating the database. These figures have been observed for databases that have been in use for several years or more, and have been discussed many times on the Access newsgroups.
I have found that the SMB2 protocol introduced in Vista locks Access databases.
It can be disabled with the following registry edit:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters]
"Smb2"=dword:00000000
OK, maybe I'm missing something here, but I'm looking at various PHP hosting options and I see things like "10 MySQL databases", or 25, or even unlimited.
Now I've worked on sites with an Oracle backend that have 10,000+ concurrent users and we've had... one database.
The idea of a database is, of course, that you can store whatever you want in it. So why does the number matter for MySQL? Is there some table, row, or overall database limit I'm not aware of (entirely possible)? Or is it a question of concurrent connections? Or some other performance issue (e.g. sharding)? The sharding aspect seems unlikely, because I see even basic hosting options (i.e. under $5/month) offering 10 databases.
If someone could clue me in on this one, it'd be great.
It's mostly a marketing tactic, although there are some technical and historical considerations.
First, apologies if this is obvious, but SCHEMAs are to Oracle as DATABASES are to MySQL (in oversimplified terms, a logical collection of tables).
The host is saying you can have XX number of configured logical databases on a server. Lots of web applications need a database to run. Modern web applications like WordPress, Movable Type, Joomla, etc., will let you name your tables with a custom prefix. However, if an application doesn't have this configuration feature, that means you need one database per install. Also, in a similar vein, if two applications use the same table name, they can't coexist in a single database. Lots of early web applications started out like this, so early on the number of databases was an important feature to consider.
There's also access and security. While MySQL (and other databases) can be configured to give users fine-grained access control down to the table and column level, it's often easier to create one user who has full permission on one logical database. This is important to people who sell services but pass off the actual hosting of completed sites/applications to the shared web host.
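In MySQL terms, the "one user with full permission on one logical database" pattern looks roughly like this (the user, password, and database names are placeholders):

-- One account, scoped to a single logical database.
CREATE USER 'forum_app'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON forum_db.* TO 'forum_app'@'localhost';
FLUSH PRIVILEGES;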
Some people like one database per app
It's marketing, not technical. They want something to advertise. "10" sounds like a good number.
For development purposes, it's sometimes good to make a copy of your entire database to test new software against. That beats renaming all the tables in your code (although apps like WordPress let you specify a prefix for all your table names in case you don't have the luxury of multiple DBs).
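A sketch of cloning a table into a second database for testing, in MySQL (names are placeholders; with many tables you would more likely script it or use mysqldump):

CREATE DATABASE myapp_test;

-- Copy structure, then data, for each table you need.
CREATE TABLE myapp_test.posts LIKE myapp_live.posts;
INSERT INTO myapp_test.posts SELECT * FROM myapp_live.posts;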
When I used shared hosting, I set up a separate database for each site/client for custom apps, and if you use Fantastico to install applications it will use a database for each one by default.
I believe the limits are there to prompt you to upgrade to the next tier of service when you outgrow the current level.
Nick is partially correct, but it also has to do with people who will try to host multiple sites on one shared account, using a different database for each and a script to serve the correct content with a little DNS masquerading.
Additionally, it's possibly a marketing angle.
If you're only setting up databases for yourself, the low count is fine. But commercial users, who may want to host multiple sites for multiple clients on the one service to cut corners, are likely to need one database (or more) per client/project.
So putting a limit on the number of databases somewhat controls the variety of services you can offer, and limits the "resale" potential, i.e. it stops you from buying one plan and then selling it on to somebody else, like subleasing.
This is mainly for when you are hosting multiple sites on the same box. For me, I buy and sell a lot of websites, so I need to be able to keep each website as detached from the others as possible.