Does the number of MySQL users affect MySQL performance much?

When a user registers on my site, they have their own table created in one of my databases. This table stores all of the posts that the user makes.
What I would also like to do is generate them their own MySQL user - one which ONLY has permission to read, write and delete from their table.
Creating that shouldn't be a problem - I've got Google for that.
What I'm wondering is: let's imagine that I reach 10,000,000 or more users at some point in the future. Would having that many MySQL users affect my database performance?
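For reference, what I have in mind per user is something along these lines (all names made up):

    -- One table and one MySQL account per registered user (hypothetical names).
    CREATE TABLE mydb.posts_alice (
        id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        body       TEXT NOT NULL,
        created_at DATETIME NOT NULL
    );

    CREATE USER 'alice'@'localhost' IDENTIFIED BY 'some_password';

    -- Read, write and delete on that one table only.
    GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.posts_alice TO 'alice'@'localhost';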

For the sake of answering your question, a few quick points... before I explain why you are doing it wrong...
The performance hit will come from having a massive number of tables. (The limit is so high that, should you ever reach it, I would for God's sake hope you recruit someone who can slap your database silly and explain why you have mutilated it so much.) Excuse the harshness :)
Okay, now onto how you should actually be doing it.
Multi-Tenancy
First, you need to learn how to design a database for a multi-tenant application. That is exactly what you are creating, by the sounds of it, but you are doing it COMPLETELY wrong. I cannot stress that enough.
Here are some resources which you should read immediately.
Quick overview of what multi-tenancy actually is (you can skim-read this one).
Read this Multi-Tenant Data Architecture article several times! Then repeat.
Then read this question:
- How to design a multi tenant mysql database
After you have done that, you should learn about ACLs (Access Control Lists).
If you explain what sort of data you are trying to model, I will be happy to update this post with a simple table schema to match what you might require.
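In the meantime, here is a rough sketch of the shared-schema approach, just to show the shape of it (table and column names are placeholders):

    -- One posts table for ALL users, scoped by user_id, instead of a
    -- table (and a MySQL account) per user.
    CREATE TABLE users (
        id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(64) NOT NULL UNIQUE
    ) ENGINE=InnoDB;

    CREATE TABLE posts (
        id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        user_id    INT UNSIGNED NOT NULL,
        body       TEXT NOT NULL,
        created_at DATETIME NOT NULL,
        KEY idx_posts_user (user_id),
        CONSTRAINT fk_posts_user FOREIGN KEY (user_id) REFERENCES users (id)
    ) ENGINE=InnoDB;

    -- The application enforces tenancy: every query filters by user_id.
    SELECT body FROM posts WHERE user_id = 42 ORDER BY created_at DESC;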

Yes, it will drop your performance. Usually a server application uses a database connection pool with several connections (say, app_user connected 5 times). Every SQL request is handled by one of these connections; that way the overhead of creating a new connection, handling the query and dropping the connection is reduced to a minimum.
Now, in your scenario, every user would have his own table and his own MySQL account. That means that when a user logs into your application, he has to open his own connection, since he has to use his specific account. Instead of just 5 connections, 10,000 connections would have to be opened. That does not scale, as each connection has its own thread and uses some RAM. Furthermore, there are only about 64k ports available for your connections.
So your application would not scale for that many users.
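If you want to see those limits on your own server (the output will obviously vary):

    -- The hard cap on concurrent connections, and how many are open right now.
    SHOW VARIABLES LIKE 'max_connections';
    SHOW GLOBAL STATUS LIKE 'Threads_connected';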

Related

MySQL: Create a user for reading and another for writing?

I have been searching for this for a while and unable to find something useful.
Is it good practice, or even important, to create 2 MySQL users: one for reading, to be used whenever I'm issuing a SELECT,
and another for writing, to be used whenever I'm doing an INSERT, UPDATE, DELETE, ...?
Would this help at anything for example if I'm writing and reading to the database at the same time?
Assume we're using InnoDB tables.
"good practice" is very hard to define - you've got a whole bunch of different things to trade off against each other.
I'm assuming that the database is being used as a back-end for some other system, and that your users don't have direct access to a SQL prompt. In that case, there are no real benefits to creating different MySQL users - it simply makes the front-end more complex, and an attacker who can reach the database and knows the "read-only" credentials almost certainly also knows the "read/write" credentials. From a security point of view, you should invest your time in network security of the database server, and secure storage of connection details.
From a concurrency point of view - two or more users reading and writing at the same time - you won't really gain anything either. This particular requirement is one of the things relational databases do very well, and I don't think it's affected at all by the permissions of the users - it's far more to do with whether you're using transactions, and how quickly your SQL executes.
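For completeness, if you did decide to split them anyway, it is nothing more exotic than two accounts with different grants (names and host are hypothetical):

    CREATE USER 'app_read'@'%'  IDENTIFIED BY 'read_password';
    CREATE USER 'app_write'@'%' IDENTIFIED BY 'write_password';

    -- The read-only account can only SELECT; the writer gets DML as well.
    GRANT SELECT ON mydb.* TO 'app_read'@'%';
    GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'app_write'@'%';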

Database design: how to effectively manage about 4000 databases with mysql

It sounds crazy, but I started a data-intensive project [collecting online store inventories] which later grew to be very big. I currently have about 2000 users, and each user has about 100 tables. In essence, I created the system so that each user had his own MySQL database, hosted on a dedicated server. The problem is, the server becomes very slow and breaks down under the pressure and the number of connections. Is there a tool I can use to optimize the DB, or should I redesign to only 1 database, which would mean redesigning the whole system? I need advice and help.
4000 databases for one system?! Wowzer, did you invent Google?
I'd definitely say that you need to redesign that setup - unless your 'system' is actually database hosting and each user has paid for a private db, of course.
Nothing wrong with having multiple discrete databases, but 2-per-user is the wrong approach.
The 'right' approach will depend entirely on what your system is meant to do.
You mention everyone has a dedicated server too - this should prevent contention issues for other users. Are you sure it's not shared hosting?
Nine times out of ten, when someone structures an application database this way (segmenting identical data into different databases, or even into different tables) it's a mistake based on an unnecessary attempt to pre-optimize the system.
But without more information we cannot tell:
- Whether this is one of the nine times it's a mistake, or the tenth time, when it's an appropriate design.
- Whether the number of connections is what's causing the performance problems you see (which would be solved by switching to a single database), or something else.
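If it does turn out to be a mistake, the usual fix is consolidating the identical per-user tables into one schema keyed by user, roughly like this (all names hypothetical):

    -- One shared table replaces N identical per-user copies.
    CREATE TABLE inventory.items (
        user_id INT UNSIGNED NOT NULL,
        sku     VARCHAR(64)  NOT NULL,
        qty     INT          NOT NULL,
        PRIMARY KEY (user_id, sku)
    ) ENGINE=InnoDB;

    -- Run (or generate) one of these per legacy user database.
    INSERT INTO inventory.items (user_id, sku, qty)
    SELECT 1, sku, qty FROM user_0001.items;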

One user per database vs single user for all databases

I'm working on a SaaS application that uses the one-DB-per-client model. It also has a common "accounts" database where some basic information about each account is kept, and which also provides log-in functionality.
My question - is it worth creating a new database user for each client database, with permissions only on that database, or does a single database user with access to all client databases make more sense (i.e. "account\_%.*")?
If security is the concern, a user per database is the way to go.
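A rough sketch of both options, with made-up names (note the backslash in the second grant, which stops the underscore acting as a single-character wildcard):

    -- Option 1: one account per client database.
    CREATE USER 'client42'@'%' IDENTIFIED BY 'client42_password';
    GRANT ALL PRIVILEGES ON `account_42`.* TO 'client42'@'%';

    -- Option 2: one shared account matching every client database.
    CREATE USER 'app'@'%' IDENTIFIED BY 'app_password';
    GRANT ALL PRIVILEGES ON `account\_%`.* TO 'app'@'%';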
It's easy to think about creating all those databases.
But also please think about how you are going to maintain them all in the long run.
Will you have to run your database scripts on an ever-increasing number of databases?
You will have a script to run when you add a new client's database, and that will have to be continuously updated.
I'm not saying don't create multiple databases. I'm just suggesting that you think about the consequences.
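For instance, rather than hand-maintaining that script, you can generate the statements from information_schema; a sketch, assuming databases named account_* and a settings table in each:

    -- Emit one ALTER statement per client database; feed the output
    -- back into the mysql client to run it.
    SELECT CONCAT('ALTER TABLE `', schema_name, '`.`settings` ',
                  'ADD COLUMN locale VARCHAR(8) NOT NULL DEFAULT ''en'';')
    FROM information_schema.schemata
    WHERE schema_name LIKE 'account\_%';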
I would create new databases, but it depends. Basically whatever floats your boat :)
one database per user:
+ security is easier
+ async parallel requests (if your server can handle it)
- a bit heavier on disk
one database:
+ one file to handle instead of a bunch (if that's even a +)
+ a little more space-efficient
- slow when data reaches big amounts
- contention: a heavy SQL request from one user can effectively DoS all the others

How many databases can MySQL handle?

My MySQL server currently has 235 databases. Should I worry?
They all have same structure with MyISAM tables.
The hardware is a virtual machine with 2 GB RAM running on a Quad-Core AMD Opteron 2.2GHz.
Recently cPanel sent me an email saying that MySQL has failed and a restart has been made.
New databases are expected to be created, and I wonder if I should add more memory or simply add another virtual machine.
The "databases" in mysql are really catalogues, is has no effect on its limits whether you put all the tables in one or each in its own.
The main problem is the table cache. Without tuning it, you're going to have the default table cache (=64 typically), which means you will be closing a table every time you open one. This is incredibly bad.
In MyISAM it's even worse, because closing a table throws its key blocks out of the key cache, which means subsequent index lookups or scans will be reading actual blocks from disc, which is horribly slow and really needs to be avoided.
My advice is:
- If possible, immediately increase the table cache to more than the total number of tables
- Monitor the global status variable Opened_tables; if it increases rapidly, this is bad
- Carry out performance and robustness testing on the same hardware in a non-production environment (if you are not doing so already)
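For example (the cache size below is purely illustrative; size it to your table count, and note that on versions before MySQL 5.6 this variable can only be set in my.cnf followed by a restart):

    -- If Opened_tables climbs quickly, the table cache is too small.
    SHOW GLOBAL STATUS LIKE 'Opened_tables';
    SHOW VARIABLES LIKE 'table_open_cache';

    -- Raise it above the total number of tables.
    SET GLOBAL table_open_cache = 4096;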
(reposting my comment for better visibility)
Thank you all for your comments. The system is something similar to Google Analytics. Users' website visits are logged into a "master" table. A native application monitors the master table, processes the registered visits and writes them to the users' databases. Each user has their own DB; this was decided on for sharding. Various reports and statistics are run for each user, and they are faster when they only run on that specific DB (less data). I know this is not the best setup, but we have to deal with it for a while.
I don't believe there is a hard limit; the only things really limiting you will be your hardware and the traffic these databases get.
You seem to have very little memory, which probably means you don't have massive numbers of connections...
You should start by profiling usage for each database (or set of databases, depending on how they are used of course).
My suggestion - MySQL (or any database server for that matter) could use more memory. You can never have enough.
You are doing it wrong.
Comment with some specifics about your databases, and we can probably fill you in on where your design went wrong.

MySQL Databases. How Many for a Web App?

I'm building a web app. This app will use MySQL to store all the information associated with each user. However, it will also use MySQL to store sys admin type stuff like error logs, event logs, various temporary tokens, etc. This second set of information will probably be larger than the first set, and it's not as important. If I lost all my error logs, the site would go on without a hiccup.
I am torn on whether to have multiple databases for these different types of information, or stuff it all into a single database, in multiple tables.
The reason to keep it all in one is that I only have to open up one connection. I've noticed a measurable time penalty for connection opening, particularly when using remote MySQL servers.
What do you guys do?
First, I must say, I think storing all your event logs and error logs in the DB is a very bad idea; instead, you may want to store them on the filesystem.
You will only need the error logs or event logs if something in your web app goes unexpectedly wrong. Then you download the file and examine it, that's all. There's no need to store them in the DB; it will slow down your DB and your web app.
As an answer to your question: if you really want to do that, you should separate them, and you should find a way to keep your pages running even when your event log and error log databases are under load and responding slowly.
Going with two distinct databases (one for your application's "core" data, and another one for "technical" data) might not be a bad idea, at least if you expect your application to have a lot of users:
it'll allow you to put one DB on one server, and the other DB on a second server
and you can think about scaling a bit more, later : more servers for the "core" data, and still only one for the "technical" data -- or the opposite
if the "technical" data is not as important, you can (more easily) have two distinct backup processes / policies
having two distinct databases, and two distinct servers, also means you can have heavy calculations on the technical data, without impacting the DB server that hosts the "core" data -- and those calculations can be useful, on logs, or stuff like that.
as a sidenote: if you don't need that kind of "reporting" calculation, maybe storing that data in a DB is not useful, and files would do perfectly well?
Maybe opening two connections means a bit more time -- but that difference is probably rather negligible, is it not?
I've worked a couple of times on applications that used two databases:
One "master" / "write" database, used only for writes
and one "slave" database (a replication of the first one, to several slave servers), used for reads
This way, yes, we sometimes open two connections -- but one server alone would not have been able to handle the load...
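Setting up such a slave boils down to something like this (host, credentials and log coordinates are placeholders):

    -- On the slave: point it at the master's binary log, then start replicating.
    CHANGE MASTER TO
        MASTER_HOST     = 'master.db.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = 'repl_password',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;
    START SLAVE;

The application then sends writes to the master and SELECTs to the slave(s).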
Use connection pooling anyway, so the time to get a connection is not a problem. But if you have 2 connections, transaction handling becomes more complicated. On the other hand, sometimes it's handy to have 2 connections: if something goes wrong in the business transaction, you can roll back that transaction and still log the failure via the admin connection. But I would still stick to one database.
I would only use one database - mostly for the reason you supply: you only need one connection to reach both the logging and the user data.
Depending on your programming language, some frameworks (J2EE, for example) provide connection pooling. With two databases you would need two pools. In PHP, on the other hand, the cost of setting up a connection (or two) is paid on every request.
I see no reason for two databases. It'd be perfectly acceptable to have tables devoted to "technical" and "business" data, but the logical separation should be sufficient.
Physical separation doesn't seem necessary to me, unless you mean an application and data warehouse star schema. In that case, it's either real-time updates or, more typically, a nightly batch ETL.
It makes no difference to MySQL in any way whether you use separate "databases"; they are simply catalogues.
It may make setting permissions easier, and that is a legitimate reason to do it. Other than that, it is exactly the same as keeping the tables in the same DB (except that you can have several tables with the same name... but please don't).
Putting them on separate servers might be a good idea however, as you probably don't want your core critical (user info, for example) data mixed in with your high-volume, unimportant data. This is particularly true for old audit data, debug logs etc.
Also short-lived data, such as search results, sessions etc, could be placed on a different server - it presumably has no high availability[1] requirement.
Having said that, if you don't need to do this, dump it all on one server where it's easier to manage (backups, providing high availability, managing security, etc).
It is not generally possible to take a consistent snapshot of data across more than one server. This is a good reason to have only one server (or only one whose data you care about for backup purposes).
[1] Of the data, not the database.
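As a rough illustration of the permissions point above (database and account names are hypothetical), keeping "technical" tables in their own database lets the grants stay at the database level:

    -- Assumes the 'app' and 'reporting' accounts already exist.
    GRANT SELECT, INSERT, UPDATE, DELETE ON core.* TO 'app'@'%';
    GRANT INSERT ON logs.* TO 'app'@'%';        -- app only appends to logs
    GRANT SELECT ON logs.* TO 'reporting'@'%';  -- reporting can't touch core data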
In MySQL, InnoDB has an option of storing all tables of a certain database in one file, or having one file per table.
Having one file per table is somewhat recommended anyway, and if you do that, it makes no difference at the storage level whether you have one database or several.
With connection pooling, one database or several is probably not going to matter either.
So, in my opinion, the question is whether you'd ever consider separating the "other half" of the database onto a separate server - with the separate server perhaps having a very different hardware configuration, such as no RAID. If so, consider using separate databases. If not, use a single database.
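For example, to check and enable it (SET GLOBAL only affects tables created afterwards; add innodb_file_per_table=1 to my.cnf to make it stick across restarts):

    -- Is each InnoDB table getting its own .ibd file?
    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- Turn it on for newly created tables.
    SET GLOBAL innodb_file_per_table = ON;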