I can only create one database on my current host, and that database already exists. The problem is that I need another one.
So my question is: is it possible to upload my files to the first host, then sign up for another host that has the option to create a MySQL database, and connect that database to my first host?
You should probably just add more tables to the existing database, but use a table prefix on them to keep them organized and not looking messy. Your tables might end up looking like opencart_users, opencart_items, otherapplication_users, otherapplication_pages, etc. Most CMSs support such a prefix right out of the box.
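For illustration only, the prefixed layout could be as simple as this (all table and column names are made up):

    -- two applications sharing one database, kept apart by prefix
    CREATE TABLE opencart_users (
        user_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        username VARCHAR(64)  NOT NULL
    );

    CREATE TABLE otherapplication_users (
        user_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        username VARCHAR(64)  NOT NULL
    );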
Getting another host with MySQL support, while possible, is definitely not the best idea: you would be sending every MySQL query over a larger network (maybe even the internet) than you should be, which can cause severe performance (and security) issues.
For security purposes, we will create a database log that will contain all changes made to the various tables in the database; to achieve this we will use triggers, as stated here. My concern is that if the system admin, or anyone else with root privileges, changes the data in the logs for their own benefit, then having logs becomes meaningless. Thus, I would like to know if there is a way to prevent anyone, and I mean anyone at all, from making any changes to the logs table, i.e. dropping the table, or updating and deleting rows. Is this even possible? Also, regarding my logs table: is it possible to keep track of the previous data that was changed by an UPDATE query? I would like to have both the previous and the new data in my logs table so that we know exactly what changes were made.
The problem you are trying to fix is hard: you want someone who can administer your system, but you don't want them to be able to actually touch all parts of it. That means you either need to administer the system yourself and give others only limited access, trust all administrators, or look for an external solution.
What you could do is write your logs to a system where only you (or at least a different administrator than the first) have access.
Then, if you only ever write to this system (and never allow updates or deletes), you will be able to keep a trusted log and even spot inconsistencies in case of tampering.
A second method would be to use a specific mechanism for writing logs, one that adds a signed message. That way you can be sure the logs were added by that system. If you also save a (signed) message of the state of the complete system, you will probably be able to recognize any tampering. The 'system' used for signing should obviously live on another machine, making this somewhat equivalent to the first option.
There is no way to stop root access from having permission to make alterations. A combination approach can help you detect tampering, though. You could create another server with more limited access and clone the database table there. Log all login activity on both servers, cross-backup the logs between servers, and also make very regular off-server backups. You could also create a hashing table that matches each row of the log table; an attacker would not only have to find the code that creates the hash, but reverse engineer it and alter the timestamps to match.

However, I think your best bet is a cloned server with no network login, physical login only. If you think there has been any tampering, you will have to do some forensics. You can even add a USB key to the physical clone server and keep it with a CEO or something. Of course, if you can't trust the sysadmins at all, your job is very difficult no matter what. The trick is not to build a solid wall, but a fine net, and to scrutinize everything coming through it.
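As for the last part of the question (keeping the previous and the new data), that part is straightforward: inside an UPDATE trigger you can read both the OLD and the NEW row. A minimal sketch, assuming a hypothetical users table with id and email columns and a hypothetical users_log table:

    CREATE TABLE users_log (
        log_id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        user_id    INT UNSIGNED NOT NULL,
        old_email  VARCHAR(255),
        new_email  VARCHAR(255),
        changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

    DELIMITER $$
    CREATE TRIGGER users_after_update
    AFTER UPDATE ON users
    FOR EACH ROW
    BEGIN
        -- OLD holds the row as it was before the UPDATE, NEW as it is after
        INSERT INTO users_log (user_id, old_email, new_email)
        VALUES (OLD.id, OLD.email, NEW.email);
    END$$
    DELIMITER ;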
Once you set up the master-slave relationship and only give untrusted users access to the slave database, you won't need to alter your code; just keep using the master database as the primary in your code. The link below explains how to set up master-slave replication. To be fully effective, though, the two need to be on different servers; I don't know whether this would work on a single server.
https://dev.mysql.com/doc/refman/5.1/en/replication.html
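For the "untrusted users only see the slave" part, a rough sketch of what the slave-side setup might look like (the account and database names are made up):

    -- On the slave only: a read-only account for untrusted users
    CREATE USER 'auditor'@'%' IDENTIFIED BY 'use-a-strong-password';
    GRANT SELECT ON mydb.* TO 'auditor'@'%';

    -- Belt and braces: make the slave reject writes from everyone
    -- except accounts with the SUPER privilege (and replication itself)
    SET GLOBAL read_only = 1;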
Open phpMyAdmin,
open the table,
and assign table-level privileges on the table.
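The same can be done from a SQL prompt with a table-level GRANT; a minimal sketch, assuming the log table is mydb.logs and the account to restrict is 'appuser' (both names made up, and the account is assumed to already exist):

    -- The account may only add rows to the log table;
    -- no UPDATE, DELETE or DROP is granted
    GRANT INSERT ON mydb.logs TO 'appuser'@'localhost';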
I have been searching for this for a while and have been unable to find anything useful.
Is it good practice, or even important, to create two MySQL users: one for reading, used whenever I'm issuing a MySQL SELECT,
and another for writing, used whenever I'm doing an INSERT, UPDATE, DELETE, and so on?
Would this help at all, for example when I'm writing to and reading from the database at the same time?
Assume we're using InnoDB tables.
"good practice" is very hard to define - you've got a whole bunch of different things to trade off against each other.
I'm assuming that the database is being used as a back-end for some other system, and that your users don't have direct access to a SQL prompt. In that case, there are no real benefits to creating different MySQL users - it simply makes the front-end more complex, and an attacker who can reach the database and knows the "read-only" credentials almost certainly also knows the "read/write" credentials. From a security point of view, you should invest your time in network security of the database server, and secure storage of connection details.
From a concurrency point of view - two or more users reading and writing at the same time - you won't really gain anything either. This particular requirement is one of the things relational databases do very well, and I don't think it's affected at all by the permissions of the users - it's far more to do with whether you're using transactions, and how quickly your SQL executes.
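For completeness: if you decide to split the accounts anyway, it only takes a few statements (all names and the passwords below are made up):

    -- one account that can only read...
    CREATE USER 'app_read'@'localhost' IDENTIFIED BY 'example-password';
    GRANT SELECT ON appdb.* TO 'app_read'@'localhost';

    -- ...and one that can also modify data
    CREATE USER 'app_write'@'localhost' IDENTIFIED BY 'example-password';
    GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_write'@'localhost';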
I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both set of databases somewhat synchronized (more or less daily).
My ideas for solving the problem all seem rather clumsy. Anyway, for reference, here is what I have considered so far:
Make a database dump from the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time (which guarantees I won't do it as often as I would like).
Write a small web service to access the new data, and write a small application locally to communicate with said web service, download the newest data, and update the local databases.
Both solutions sound like a lot of work for a problem that is probably already solved a zillion times over. Or maybe it's even an existing feature which I completely overlooked.
Is there an easy way to keep the databases more or less in sync? Ideally something that I can set up once, schedule, and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
=============
Edit: I had a look at replication, but it seems that I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there any other solution that doesn't require full control over the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with, but it may be what you want. See here for more details: http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
Take a look at the Microsoft Sync Framework - you will need to code in .NET, but it can solve your problem.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
You will need additional tables in your MySQL database for change tracking and anchors (keeping track of the last synchronization) for this to work, but you won't need full control as long as you can access the db.
Replication would have been simpler :), but this might just work in your case.
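If you do end up with the dump-based approach after all, one way to cut the download time is to only dump the tables that actually changed. CHECKSUM TABLE is an easy way to compare a table on the two servers (the table names below are made up):

    -- Run on both servers and compare the output;
    -- only re-dump the tables whose checksums differ
    CHECKSUM TABLE customers, orders, products;

As far as I know, on MyISAM tables created with CHECKSUM=1 this is nearly free; otherwise it has to read the whole table, which is still much cheaper than downloading a full dump every day.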
Oracle's database link allows users to query across multiple physical databases.
Is there any MySQL equivalent? A workaround?
I want to run a join query on two tables which are in two physical databases. Is this possible in MySQL?
I can think of four possible workarounds for your scenario:

1. Use fully qualified table names when querying the external table. MySQL supports the dbname.tablename syntax to access tables outside the current database scope. This requires that the currently connected user has the appropriate rights to read from the requested table in the other physical db.

2. If your external database is running on a different MySQL server (either on the same machine or via a network connection), you could use replication to constantly update a read-only copy of the remote table. Replication is only possible if you're running two separate MySQL instances.

3. Use the FEDERATED MySQL storage engine to virtually import the table into your current database. This lifts the requirement of giving the current user access rights to the second database, as the credentials are given with the CREATE TABLE statement when using the FEDERATED storage engine. It also works with databases running on different physical servers or different MySQL instances. I think this will be the poorest-performing option, and it does have some limitations - more or less important depending on your usage scenario and requirements.

4. This is an extension of method 1: instead of having to specify the fully qualified table name every time you request information from the external table, you can simply create a view inside your current database based on a simple SELECT <<columns>> FROM <<database>>.<<table>>. This resembles the way the FEDERATED method works, but is limited to tables on the same MySQL instance.

Personally I'd consider method (4) the most useful, but the others could also be possible workarounds depending on your requirements; a small sketch of methods (1), (4) and (3) follows below.
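To make those concrete, here is a sketch using made-up database, table and connection details; treat it as an illustration rather than copy-paste-ready code:

    -- (1) fully qualified names: join two tables that live in two
    --     databases on the same MySQL server
    SELECT c.id, o.total
    FROM db1.customers AS c
    JOIN db2.orders AS o ON o.customer_id = c.id;

    -- (4) hide the qualification behind a view in the current database
    CREATE VIEW orders_local AS
        SELECT * FROM db2.orders;

    -- (3) FEDERATED: a local proxy table pointing at a table on a
    --     *different* MySQL server (the engine must be enabled, and
    --     the connection string below is entirely made up)
    CREATE TABLE orders_federated (
        id          INT NOT NULL,
        customer_id INT NOT NULL,
        total       DECIMAL(10,2),
        PRIMARY KEY (id)
    )
    ENGINE=FEDERATED
    CONNECTION='mysql://user:pass@remote-host:3306/db2/orders';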
There's no MySQL equivalent method at the moment; see this post. However, as the poster suggests, you can use a workaround if the databases are on the same machine, by just adding the database name in front of the table name.
Also see this; it's 6 years old but still not resolved. It's closed and probably not on their to-do list anymore.
I'm building a web app. This app will use MySQL to store all the information associated with each user. However, it will also use MySQL to store sys admin type stuff like error logs, event logs, various temporary tokens, etc. This second set of information will probably be larger than the first set, and it's not as important. If I lost all my error logs, the site would go on without a hiccup.
I am torn on whether to have multiple databases for these different types of information, or stuff it all into a single database, in multiple tables.
The reason to keep it all in one is that I only have to open one connection. I've noticed a measurable time penalty for opening connections, particularly with remote MySQL servers.
What do you guys do?
First, I must say I think storing all your event logs and error logs in the db is a very bad idea; instead you may want to store them on the filesystem.
You will only need the error logs or event logs when something in your web app goes unexpectedly wrong. Then you download the file and examine it, that's all. There is no need to store them in the db; it will slow down both your db and your web app.
As an answer to your question: if you really want to do that, you should separate them, and you should find a way to keep your page running even if your event log and error log databases are under load and responding slowly.
Going with two distinct databases (one for your application's "core" data, and another for "technical" data) might not be a bad idea, at least if you expect your application to have a lot of users:
it'll allow you to put one DB on one server, and the other DB on a second server
and you can think about scaling a bit more later: more servers for the "core" data, and still only one for the "technical" data -- or the opposite
if the "technical" data is not as important, you can (more easily) have two distinct backup processes / policies
having two distinct databases, and two distinct servers, also means you can run heavy calculations on the technical data without impacting the DB server that hosts the "core" data -- and those calculations can be useful on logs and the like.
As a side note: if you don't need that kind of "reporting" calculation, maybe storing that data in a DB is not useful at all, and files would do perfectly?
Maybe opening two connections means a bit more time -- but that difference is probably rather negligible, is it not?
I've worked a couple of times on applications that used two databases:
one "master" / "write" database, used only for writes
and one "slave" database (a replication of the first one, to several slave servers), used for reads
This way, yes, we sometimes open two connections -- but one server alone would not have been able to handle the load...
Use connection pooling anyway, so the time to get a connection is not a problem. But if you have two connections, transaction handling becomes more complicated. On the other hand, sometimes it's handy to have two connections: if something goes wrong in the business transaction, you can roll it back and still log the failure on the admin connection. But I would still stick to one database.
I would only use one database, mostly for the reason you supply: you only need one connection to reach both the logging data and the user data.
Depending on your programming language, some frameworks (J2EE, for example) provide connection pooling; with two databases you would need two pools. In PHP, on the other hand, the performance cost comes into play every time a connection (or two) is set up.
I see no reason for two databases. It'd be perfectly acceptable to have tables that are devoted to "technical" and "business" data, but the logical separation should be sufficient.
Physical separation doesn't seem necessary to me, unless you mean an application and data warehouse star schema. In that case, it's either real-time updates or, more typically, a nightly batch ETL.
It makes no difference to MySQL in any way whether you use separate "databases"; they are simply catalogues.
It may make setting permissions easier, which is a legitimate reason to do it. Other than that, it is exactly the same as keeping the tables in the same db (except that you can have several tables with the same name... but please don't).
Putting them on separate servers might be a good idea however, as you probably don't want your core critical (user info, for example) data mixed in with your high-volume, unimportant data. This is particularly true for old audit data, debug logs etc.
Also short-lived data, such as search results, sessions etc, could be placed on a different server - it presumably has no high availability[1] requirement.
Having said that, if you don't need to do this, dump it all on one server where it's easier to manage (backup, high availability, security, etc.).
It is not generally possible to take a consistent snapshot of data on >1 server. This is a good reason to only have one (or one that you care about for backup purposes)
[1] Of the data, not the database.
In MySQL, InnoDB has an option of storing all tables in a single shared tablespace file, or having one file per table.
Having one file per table is somewhat recommended anyway, and if you do that, it makes little difference at the database storage level whether you have one database or several.
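For reference, the option in question is innodb_file_per_table. A quick sketch of checking and enabling it (it is dynamic in MySQL 5.6 and later; on older servers it has to be set in my.cnf followed by a restart, and either way it only affects newly created tables):

    -- check the current setting
    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- enable it for tables created from now on
    SET GLOBAL innodb_file_per_table = ON;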
With connection pooling, one database or several is probably not going to matter either.
So, in my opinion, the question is whether you'd ever consider moving the "other half" of the data to a separate server - one with perhaps a very different hardware configuration, such as no RAID. If so, use separate databases. If not, use a single database.