MySQL Simple Table Synchronization? - mysql

I'm developing a website which to begin with will have three clear sub sites: Forum, News and a Calendar.
Each sub-site will have its own database, and common to all of these databases will be a user table, which needs to exist in each database so that joins can be done.
How can I synchronize the user tables so that, no matter which database I update, all of the databases end up with the same user data?
I'm not worried if there is a short sync delay (less than 1 min), and I would prefer that the solution be as simple as possible.

Why do the sub-sites need to have their own databases? Can't you just use one database, with separate tables for each of the applications? Or, in PostgreSQL, you could use schemas to the same effect.

Though I would hardly endorse an architecture like this, federated tables may do what you want.

A single app can log into more than one database. While I'd advocate kquinn's answer of "all in one DB", because joins will work then, if you really must have separate databases, at least have the user table accessed from one database. "Cloning" a table across multiple databases is fraught with so much peril it's not funny.

I was over-complicating the problem.
Since the databases will (for the time being) exist on the same server, I can use a very simple view.
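Roughly what that looks like, assuming the authoritative user table lives in the forum database (all database, table and column names below are made up for illustration):

CREATE TABLE forum.users (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(64)  NOT NULL,
    email    VARCHAR(255) NOT NULL
) ENGINE=InnoDB;

-- the other databases on the same server get views instead of copies,
-- so there is nothing to synchronize
CREATE VIEW news.users     AS SELECT * FROM forum.users;
CREATE VIEW calendar.users AS SELECT * FROM forum.users;

-- joins in the other databases then work as if the table were local
SELECT e.title, u.username
FROM   calendar.events AS e
JOIN   calendar.users  AS u ON u.id = e.user_id;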

Related

Drawbacks of using manually created temporary tables in MySQL

I have many queries that use manually created temporary tables in MySQL.
I want to understand if there are any drawbacks associated with this.
I ask this because I use temporary tables for queries that fetch data shown on the home screen of a web application in the form of multiple widgets. In an organization with a significant number of users, this involves creating and dropping temporary tables numerous times. How does this affect the MySQL database server?
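For context, the pattern in question looks something like this (table and column names are invented for illustration):

-- built once per page load and dropped afterwards; it also disappears
-- automatically when the connection closes
CREATE TEMPORARY TABLE widget_stats (
    widget_id INT NOT NULL PRIMARY KEY,
    total     INT NOT NULL
) ENGINE=MEMORY;

INSERT INTO widget_stats (widget_id, total)
SELECT widget_id, COUNT(*) FROM events GROUP BY widget_id;

SELECT * FROM widget_stats ORDER BY total DESC LIMIT 10;

DROP TEMPORARY TABLE widget_stats;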
Speaking about databases in general: execution plans can't be optimal when you frequently create, use and drop tables. Since it takes time to generate an execution plan, the database can't build (or reuse) a good one when the tables involved keep appearing and disappearing, which is what happens with the approach you describe.

create view on different databases on different hosts

Is it possible to create a view from tables from two different databases? Like:
create view my_view as
select names as value
from host_a.db_b.locations
union
select description as value
from host_b.db_b.items;
The two tables currently use different storage engines (MyISAM and InnoDB).
Thanks in advance
Yes, you need to access the remote table via the FEDERATED db engine, then create a view using your query.
However this is a rather messy way to solve the problem - particularly as (from your example query) your data is effectively sharded.
This structure won't allow updates/inserts on the view. Even for an updatable/insertable view, my gut feeling is that you'll run into problems if you try to do anything other than auto-commit transactions, particularly as you're mixing table types. I'd recommend looking at replication as a better way to solve the problem.
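For reference, a rough sketch of that FEDERATED setup, run on host_a (the connection string, credentials and column definition are placeholders; the local column definition has to match the remote table, and the FEDERATED engine must be enabled on the local server):

CREATE TABLE db_b.items_remote (
    description VARCHAR(255)
) ENGINE=FEDERATED
  CONNECTION='mysql://remote_user:remote_pass@host_b:3306/db_b/items';

-- both sources are now visible locally, so the view can be created
CREATE VIEW db_b.my_view AS
    SELECT names       AS value FROM db_b.locations
    UNION
    SELECT description AS value FROM db_b.items_remote;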

Performing Heavy Crunching On a Table Without Affecting the Table

I'm looking for some general advice on the best way to perform heavy crunching/data-mining on a database table, without affecting the performance of regular site queries on the table. Some of the calculations may involve joining several tables, and involve complex sorting and ordering. So "use better indexes" isn't always the solution.
This question isn't really specific. I'm looking for a general way to solve a problem that's come up many times over the years, so I don't have a specific table schema or query to show. I've considered dumping the table with mysqldump, re-importing it under a different name, and then performing my heavy crunching on that copy. My sysadmin hates the idea, so I'm looking for any other solutions people have come up with to deal with this type of problem.
If your "heavy crunching" is all read only and you are not doing anything that needs to be written back into your production data, use a Master/Slave replication and use the Slave for all your reporting and data analysis needs. The replication link will keep the values up to date on the Slave, and you can hit the Slave with as much load as you want without slowing down the Master which is serving your production system.
If you want to avoid affecting performance of your production database, the only solution I have used previously is to run your queries on another database server.
I would take a backup of the entire database and then restore it on a separate server.
Obviously, you cannot do this if you want to analyze real-time data. But for most analysis, a snapshot from the previous day is sufficient.

performance effect of joining tables from different databases

I have a web site using a database named, let's say, "site1". I am planning to put another site on the same server which will also use some of the tables from "site1".
So should I use three different databases: "site1" (for first-site-specific data), "site2" (for second-site-specific data), and "general" (for common tables)? In that case there would be join statements between the general database and site1/site2. Or should I put all tables in one database?
Which is the best practice?
How does performance differ in each situation?
I am using MySQL, so what is the situation specifically for MySQL?
Thanks in advance...
From the performance point of view, there won't be ANY difference. Just keep your indexes in place and you will not notice whether you are using a single DB or multiple DBs.
Apart from performance, there are 2 small implications that I can think of:
1. You cannot have foreign keys across DBs.
2. Splitting tables into databases based on their usage or on the application can make it easier to manage permissions.
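For what it's worth, a cross-database join on the same server is just a matter of qualifying the table names (the schema names follow the question, the rest is illustrative):

SELECT p.id, p.title, u.username
FROM   site1.pages   AS p
JOIN   general.users AS u ON u.id = p.author_id;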
I can speak from recent personal experience. I have some old MySQL queries in some PHP code that worked fine with a relatively small database, but as the database grew the queries got slower and slower.
I have FreeRADIUS running on MySQL in its own database, along with another management PHP app that I wrote. The FreeRADIUS table is > 1.5 million rows. I was attempting to join tables from my app's database to the FreeRADIUS database, and I can say for sure that 1.5 million rows is too many. Running some queries locked up my app altogether. I ended up having to re-write portions of my PHP app to do things differently (i.e. not joining tables from different databases). I also indexed the RADIUS accounting table on some key fields and optimized some queries (MySQL's EXPLAIN statement is wonderful for this). Things are MUCH faster now.
I will definitely be hesitant to join 2 tables from different databases in the future unless really really necessary.
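As an illustration of that kind of tuning (the FreeRADIUS accounting table is usually called radacct, but treat the exact table and column names here as assumptions):

EXPLAIN
SELECT username, SUM(acctsessiontime) AS total_time
FROM   radius.radacct
WHERE  acctstarttime >= '2011-01-01'
GROUP  BY username;

-- if EXPLAIN reports a full scan of the 1.5 million rows, an index on the
-- filtered column is usually what turns the scan into a quick lookup
ALTER TABLE radius.radacct ADD INDEX idx_acctstarttime (acctstarttime);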

Best approach to relating databases or tables?

What I have:
A MySQL database running on Ubuntu that maintains a large table of articles (similar to WordPress).
I need to relate a given article to another set of data. This set of data will be fairly large.
There may be various sets of data that will be related.
The query:
Is it better to contain these various large sets of data within the same database as the articles, which will have a large amount of traffic on it?
or
Is it better to create different databases (on the same server) that relate by a primary key to the main database with the articles?
Put them all in the same DB initially, until you find that there is a performance issue. Much easier than prematurely optimising.
Modern RDBMS are very good at optimising data access.
If you need to connect frequently and read from both sets of records, you should put them in the same database. The server then won't have to run permission checks for each of your databases.
If you have serious traffic, you should consider using persistent connections for those queries.
If you don't need to read them together frequently, consider putting them on different machines, so that the high traffic on the bigger database won't cause slowdowns on the other.
Different databases on the same server give you all the problems of a distributed architecture without any of the benefits of scaling out. One database per server is the way to go.
When you say 'same database' and 'different databases related' don't you mean 'same table' vs 'different tables'?
If that's the question, I'd say:
One table for articles.
If these "other sets of data" are all of the same structure, put them all in the same table; if not, one table per kind of data.
Everything in the same database.
If you grow big enough to make database size a performance issue (after many millions of records and lots of queries a second), consider table partitioning, or maybe replacing the biggest table with a key/value store (CouchDB, MongoDB, Redis, Tokyo Cabinet, etc.), which can be a little faster than MySQL but a lot easier to distribute for performance.
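A minimal sketch of the table-partitioning option, assuming the articles table has an integer primary key named id (the partition boundaries are arbitrary and MySQL 5.1+ is required):

ALTER TABLE articles
PARTITION BY RANGE (id) (
    PARTITION p0   VALUES LESS THAN (1000000),
    PARTITION p1   VALUES LESS THAN (2000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);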