What is the best way to handle MySQL database user connections in PHP?
I have a web server running a PHP application on MySQL. I have created a database user for the application, dbuser1, with limited access: only SELECT, INSERT and UPDATE on tables. No ALTER TABLE.
Now the question is: should I use the same dbuser1 across all my scripts, so that if there are 100 people using my system concurrently, the 100 scripts running in parallel all connect to the database as the same dbuser1? Or should I create a few users and assign each script a different user, or load-balance between the db users?
Just use the same user. As long as that user has appropriate access rights (and it sounds like you've got that covered), you'll be ok.
You might want to check that your MySQL installation is configured to allow enough concurrent connections to support your expected usage load, but the defaults should be fine for most sites.
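For reference, the account setup and the connection-limit check might look like the following. This is only a minimal sketch: dbuser1 comes from your question, while the schema name appdb and the password are placeholders to adapt.

    -- One shared, least-privilege account for the whole application
    -- (MySQL 8.0 requires CREATE USER before GRANT):
    CREATE USER 'dbuser1'@'localhost' IDENTIFIED BY 'use-a-strong-password';
    GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'dbuser1'@'localhost';

    -- Check the concurrent-connection ceiling (default is 151 on modern
    -- MySQL) and the high-water mark actually reached so far:
    SHOW VARIABLES LIKE 'max_connections';
    SHOW STATUS LIKE 'Max_used_connections';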
Our application uses an SQL database for storing data which must not be modified by the user.
For now we are using a local SQLite db which is encrypted via SQLCipher and which gets decrypted on application start with a private key set by us. This way the user can't modify any data without knowing this key, or even load the database in his favourite db browser.
We now want to allow the database to be on a MySQL server. But as far as I understand, an equally secure way of protecting the data isn't possible, especially because we want the user to be able to host his own server (the same way he used his "own" local SQLite file). I understand there is now a so-called "at rest" encryption for InnoDB in MySQL, but this seems to be completely transparent to the user: when the user connects to the db, he doesn't have to enter a key for it to be decrypted; this happens automatically for him in the background.
Is there a way to allow the user to use his own MySQL server but prevent him from modifying any database we create on it? Or is this only possible with a server we host ourselves?
Let me first give a short comment regarding the method you have used until now.
I think the concept was wrong in the first place, because it is not secure. The decryption key has to be in the application, because otherwise your users would not be able to open the database. As soon as the application runs, a user could extract that key from RAM using well-known tools.
In contrast, when using a server in a locked room, you have real security, provided the server software does not have bugs which allow users to attack it.
Thus, the answer to your question is:
Yes, it is wise to upgrade to MySQL.
Use one server for all users, physically located at a place where normal users don't have access.
No, do not try to encrypt the MySQL table files on disk if your only concern is that users shall not be able to change the data.
Instead, assign access privileges on your central database and tables properly. If normal users have only the SELECT privilege on all tables, they will have no chance to modify any data via the network, but can read all of it. As far as I have understood, this is what you want.
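Concretely, the privilege setup could look something like this. A minimal sketch, assuming a hypothetical schema appdb and hypothetical account names reader and writer:

    -- Normal users get a read-only account: SELECT on everything, nothing else.
    CREATE USER 'reader'@'%' IDENTIFIED BY 'use-a-strong-password';
    GRANT SELECT ON appdb.* TO 'reader'@'%';

    -- Only your own application account may modify data, and only from your host:
    CREATE USER 'writer'@'app.example.com' IDENTIFIED BY 'another-strong-password';
    GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'writer'@'app.example.com';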
Oracle's database link allows users to query across multiple physical databases.
Is there any MySQL equivalent? A workaround?
I want to run a join query on two tables which are in two physical databases. Is it possible in MySQL?
I can think of four possible workarounds for your scenario:
1. Use fully-qualified table names when querying the external table. MySQL supports the dbname.tablename syntax to access tables outside the current database scope. This requires that the currently connected user has the appropriate rights to read from the requested table in the other physical db.
2. If your external database is running on a different MySQL server (either on the same machine or via a network connection), you could use replication to constantly update a read-only copy of the remote table. Replication is only possible if you're running two separate MySQL instances.
3. Use the FEDERATED MySQL storage engine to virtually import the table into your current database. This lifts the requirement of giving the current user access rights to the second database, as the credentials are given with the CREATE TABLE statement when using the FEDERATED storage engine. This also works with databases running on different physical servers or different MySQL instances. I think this will be the poorest-performing option, and it does have some limitations - more or less important depending on your usage scenario and your requirements.
4. As an extension to method 1: instead of having to specify the fully-qualified table names every time you request information from the external table, you can simply create a view inside your current database based on a simple SELECT <<columns>> FROM <<database>>.<<table>>. This resembles the way the FEDERATED method works, but is limited to tables on the same MySQL instance.
Personally, I'd consider method (4) the most useful - but the others could also be possible workarounds depending on your requirements.
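To make methods 1, 3 and 4 concrete, here is a rough sketch. All schema, table and credential names (shopdb, crmdb, feduser and so on) are hypothetical, and a FEDERATED table's column definitions must match the remote table:

    -- Method 1: fully-qualified table names, both schemas on the same instance
    SELECT o.id, c.name
    FROM shopdb.orders AS o
    JOIN crmdb.customers AS c ON c.id = o.customer_id;

    -- Method 3: FEDERATED table pointing at a table on a remote MySQL server
    CREATE TABLE customers_remote (
        id   INT NOT NULL,
        name VARCHAR(100),
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://feduser:fedpass@remote-host:3306/crmdb/customers';

    -- Method 4: a local view that hides the fully-qualified name
    CREATE VIEW customers AS
        SELECT id, name FROM crmdb.customers;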
There's no MySQL equivalent method at the moment; see this post. However, as the poster suggests, you can work around it if the databases are on the same machine by just adding the database name in front of the table name.
Also see this feature request - it's six years old but still not resolved. It's closed and probably not on their to-do list anymore.
Is it unsafe to open the mysql server port to allow remote connections?
If it is unsafe, what is a better solution?
EDIT:
- I need read and write rights.
- Each user has a password to connect, so not just any user can connect to the database.
What security problems does this environment have?
Is there a better solution?
In principle, MySQL has a rigorous permissions system which could be set up to allow remote users minimal levels of access to the tables they would need to do their job.
In practice, MySQL has had many exploits in the past, both in applying those permissions and in preventing access to the host server. It is reasonable to expect more in future; since very few admins allow untrusted access to a MySQL server, it is not strongly locked down against attacks (unlike, say, a web server like Apache).
MySQL's authentication model is also weak: passwords are stored in a table as unsalted hashes and there is no protection against brute-force password attacks. For communication between a trusted server app and the DB, you can get away with that; for authentication of not-wholly-trusted third parties it's not good enough.
If your “users” are database administrators, it's plausible to give them remote access, with access locked down by IP address/firewall or SSH tunnel. If the “users” are not-fully-trusted third parties you expect to be using the database as part of a client application, I wouldn't. And definitely don't open access to the whole public internet.
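If you do grant remote access to administrators, MySQL's host-based accounts let you pin each login to a trusted address range. A minimal sketch with hypothetical names, where the 203.0.113.0/24 subnet stands in for your office network:

    -- Admin account that can only connect from one trusted subnet;
    -- connections from any other host are rejected outright:
    CREATE USER 'dbadmin'@'203.0.113.%' IDENTIFIED BY 'use-a-strong-password';
    GRANT ALL PRIVILEGES ON appdb.* TO 'dbadmin'@'203.0.113.%';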
In any case, if we are talking about application users, your business rules are going to need more granularity in access rights than you can manage with table- or column-level controls. For example, rules like “reviewer-class users may set article.state to 3, but only if article.state was previously 1 or 2”, or “setting article.state to 4 always causes the associated articlecontent to be deleted”, cannot be reproduced in table permissions.
For that, you almost always need some component between the raw table storage and the remote client/application to manage the requests. That layer is traditionally a separate server application which is the only thing talking to the database. You could in theory write that component in database stored procedures, and give users access only to the procs, not the tables. But doing anything complicated in stored procs is a super pain to write and maintain compared to a general-purpose programming language.
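For what it's worth, the stored-procedure route would look roughly like this. A sketch only, using the hypothetical article table and reviewer rule from above; the caller gets EXECUTE on the proc but no privileges on the table itself:

    DELIMITER //

    -- Enforce: state may move to 3 only from 1 or 2.
    CREATE PROCEDURE review_article(IN p_article_id INT)
        SQL SECURITY DEFINER   -- runs with the definer's table rights, not the caller's
    BEGIN
        UPDATE article
           SET state = 3
         WHERE id = p_article_id
           AND state IN (1, 2);   -- the business rule lives here
    END //

    DELIMITER ;

    -- The reviewer account can call the proc but cannot touch the table:
    GRANT EXECUTE ON PROCEDURE appdb.review_article TO 'reviewer'@'%';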
I am working with a client who is syncing between SQL Server and MySQL databases containing the exact same schema and data. We want to centralize that data into one database. Other than performance and maintainability issues, what else is bad about the original design?
You can create a linked server instance in SQL Server that points at the MySQL instance.
Despite being completely proprietary, SQL Server offers one nice connectivity feature: the ability to query other servers through a Linked Server. Essentially, a linked server is a method of directly querying another RDBMS; this often happens through the use of an ODBC driver installed on the server.
Refer to this article for a step-by-step process of setting up a SQL Server linked server to MySQL.
Provided you grant the MySQL user you connect on behalf of the proper permissions, you can also write to the MySQL instance. So you can update your stored procedures to do an additional step that inserts records into MySQL.
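The setup and the extra write step might look roughly like this in T-SQL. A sketch under assumptions: a system ODBC DSN named MySQLApp (using the MySQL ODBC driver) already exists on the SQL Server machine, and all names and credentials are placeholders:

    -- Register the MySQL instance as a linked server via the ODBC provider:
    EXEC sp_addlinkedserver
         @server     = N'MYSQL_LINK',
         @srvproduct = N'MySQL',
         @provider   = N'MSDASQL',
         @datasrc    = N'MySQLApp';

    -- Map SQL Server logins to a MySQL account:
    EXEC sp_addlinkedsrvlogin
         @rmtsrvname  = N'MYSQL_LINK',
         @useself     = 'FALSE',
         @rmtuser     = N'syncuser',
         @rmtpassword = N'syncpass';

    -- Read from the MySQL side:
    SELECT * FROM OPENQUERY(MYSQL_LINK, 'SELECT id, name FROM appdb.customers');

    -- The additional insert step inside your stored procedure:
    INSERT INTO OPENQUERY(MYSQL_LINK, 'SELECT id, name FROM appdb.customers')
    VALUES (42, 'New customer');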
A much easier solution is to use a commercial application: Omega Sync from Spectral Core.
Omega Sync can compare and synchronize both database schema and table data. You can even synchronize data of heterogeneous databases (for example, compare your local SQL Server database with a MySQL replica on your web site - and synchronize all the differences in just a few minutes).
On the other hand, I think you've already mentioned the main problems you may encounter when synchronizing two databases at the same time. Aside from those two, I think resources would be an issue: since two different RDBMSs serve the application, each needs its own resources, and when you update a particular user record the application still has to check which data store it actually lives in. But I'd love to hear more from other people out there; this is really an interesting topic to discuss. ;)