Is it possible to create a view from tables from two different databases? Like:
create view my_view as
select names as value
from host_a.db_b.locations
union
select description as value
from host_b.db_b.items;
The two tables currently use different storage engines (MyISAM and InnoDB).
Thanks in advance.
Yes, you need to access the remote table via the FEDERATED db engine, then create a view using your query.
However this is a rather messy way to solve the problem - particularly as (from your example query) your data is effectively sharded.
This structure won't allow updates/inserts on the view. Even for an updatable/insertable view, my gut feeling is that you'll run into problems if you try to do anything other than auto-commit transactions, particularly as you're mixing table types. I'd recommend looking at replication as a better way to solve the problem.
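As a rough sketch of the FEDERATED approach (the connection user, password, and port below are placeholders; the table and column names follow the question):

```sql
-- On host_a: create a FEDERATED table that proxies host_b's db_b.items.
-- Its column definitions must match the remote table.
CREATE TABLE db_b.items_federated (
    description VARCHAR(255)
) ENGINE=FEDERATED
CONNECTION='mysql://remote_user:remote_pass@host_b:3306/db_b/items';

-- The view can then union the local table with the federated one.
CREATE VIEW my_view AS
SELECT names AS value FROM db_b.locations
UNION
SELECT description AS value FROM db_b.items_federated;
```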
Related
I have an app that uses a DB with 50+ tables. Now I'm in a situation where I need to install another instance of the app. The problem is that I would like to use some tables as "common" data, e.g. "brands" or "cities" or "countries", in both apps (for now there are only 2, but there might soon be more).
I searched and found that I can make a "common" DB with such tables, and have views in each DB instance that point to the corresponding tables.
The main app queries rely heavily on those common tables, so I'm concerned whether that will slow down my queries, since views don't have indexes.
Are there better practices? I'm looking at replication in the MySQL manual now. Is that the way to go? Replicate tables from the common DB to the app instance DBs?
Can this be one-way replication (only tables in the "common" DB can be changed and are then replicated to the other DBs)?
Thanks for the advice.
Why don't you create materialized views, a.k.a. Flexviews? Then you could create indexes on them and would not have to worry about a slowdown.
More on Flexviews at MariaDB's site:
Flexviews
I'm considering adding some denormalized information in my database by adding one denormalized table fed by many (~8) normalized tables, specifically for improving select query times of a core use case on my site.
The problems with the current method of querying are:
Slow query times: there are between 8 and 12 joins (some of them LEFT JOINs) to access the information for this use case; this can take ~3000 ms for some queries.
Table locking/blocking: when information is updated during busy times of the day or week, queries are locked/blocked (because I'm using MyISAM tables), and this can cause further issues (connections running out, worse performance).
I'm using Hibernate (3.5.2), MySQL 5.0 (all MyISAM tables) and Java 1.6.
I'd like some specific suggestions (preferably based on concrete experience) about exactly what would be the best way to update the denormalized table.
The following come to mind:
Create the denormalized table with the InnoDB engine so that I get row-level locking rather than table locking.
Create triggers on the properly normalized tables that update the denormalized table.
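As a rough illustration of the trigger option (the table and column names here are hypothetical, not from the actual schema):

```sql
-- Hypothetical: keep the matching denormalized row in sync whenever
-- one of the normalized source tables changes.
DELIMITER //
CREATE TRIGGER trg_items_after_update
AFTER UPDATE ON items
FOR EACH ROW
BEGIN
    UPDATE denorm_table
       SET item_name = NEW.name
     WHERE item_id = NEW.id;
END//
DELIMITER ;
```

A similar AFTER INSERT and AFTER DELETE trigger would be needed on each source table, which is part of why this approach gets hard to maintain with ~8 tables.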
I'm looking for:
Gotchas - things that I may not be thinking about that will affect my desired result.
Specific MySql settings that may improve performance, reduce locking / blocking on the denormalized table.
Best approaches to writing the Triggers for this scenario.
Let me know if there is any other information needed to help answer this question.
Cheers.
I've now implemented this, so I thought I'd share what I did. I asked a mate who's a DBA (Greg) for a few tips, and his answers basically drove my implementation:
Anyway, as "Catcall" implied, using TRIGGERS (in my case at least) probably wasn't the best solution. Greg suggested creating two denormalized tables with the same schema, then creating a VIEW that alternates between them: one table is "active" (the one being actively queried by my web application) while the other is "inactive" and can be updated with the denormalized information.
My application would run queries against the VIEW whose name would stay the same.
That's the crux of it.
Some implementation details (mysql 5.0.n):
I used stored procedures to update the information and then switch the View from denorm_table_a to denorm_table_b.
I needed to update the permissions for my database user:
GRANT CREATE, CREATE VIEW, EXECUTE, CREATE ROUTINE, ALTER ROUTINE, DROP, INSERT, DELETE, UPDATE, ALTER, SELECT, INDEX ON dbname.* TO 'dbuser'@'%';
For creating a copy of a table, the CREATE TABLE ... LIKE ...; command is really useful (it copies the index definitions as well).
Creating the VIEW was simple
CREATE OR REPLACE VIEW denorm_table AS SELECT * FROM denorm_table_a;
CREATE OR REPLACE VIEW denorm_table AS SELECT * FROM denorm_table_b;
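In outline, one refresh cycle might look like this (the actual refresh query is omitted, since it depends on the normalized schema; the table names follow the post):

```sql
-- Suppose the view currently points at denorm_table_a.
-- 1. Rebuild the inactive copy (the refresh query is schema-specific):
TRUNCATE TABLE denorm_table_b;
-- INSERT INTO denorm_table_b SELECT ... FROM <normalized tables> ... ;

-- 2. Repoint the view at the freshly rebuilt table:
CREATE OR REPLACE VIEW denorm_table AS SELECT * FROM denorm_table_b;

-- On the next cycle, the roles of _a and _b swap.
```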
I created a special "Denormalised Query" object in my middle tier which then mapped (through Hibernate) to the denormalised table (or view, in fact) and allowed easy and flexible querying through the Hibernate Criteria mechanism.
Anyway, hope that helps someone. If anyone needs any more details, let me know.
Cheers
Simon
Here is a solution I used to denormalize a MySQL one-to-many relation using a stored procedure and triggers:
https://github.com/martintaleski/mysql-denormalization
It demonstrates a simple blog article-to-article-image relation; you will need to change the fields and queries to apply it to your scenario.
I have a web site using a database named, let's say, "site1". I am planning to put another site on the same server which will also use some of the tables from "site1".
So should I use three different databases: "site1" (for the first site's specific data), "site2" (for the second site's specific data), and "general" (for common tables), with join statements between general and site1/site2? Or should I put all tables in one database?
Which is the best practice?
How does performance differ in each situation?
I am using MySQL, so how is the situation specifically for MySQL?
Thanks in advance...
From the performance point of view, there won't be ANY difference. Just keep your indexes in place and you will not notice whether you are using single DB or multiple DBs.
Apart from performance, there are 2 small implications that I can think of:
1. You cannot have foreign keys across DBs.
2. Partitioning tables into DBs based on their usage or based on applications can help you manage permissions in an easy way.
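For reference, tables in different databases on the same MySQL server can be joined simply by qualifying the names (the schema, table, and column names below are illustrative):

```sql
-- Join a site-specific table against a shared lookup table.
SELECT u.user_id, c.country_name
  FROM site1.users AS u
  JOIN general.countries AS c
    ON c.country_id = u.country_id;
```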
I can speak from recent personal experience. I have some old MySQL queries in some PHP code that worked fine with a relatively small database, but as it grew the queries got slower and slower.
I have FreeRADIUS running on MySQL in its own database, along with a management PHP app that I wrote. The FreeRADIUS table is > 1.5 million rows. I was attempting to join tables from my app's database to the FreeRADIUS database, and I can say for sure 1.5 million rows is too many: running some queries locked up my app altogether. I ended up having to rewrite portions of my PHP app to do things differently (i.e. not joining two tables from different databases). I also indexed the RADIUS accounting table on some key fields and optimized some queries (MySQL's EXPLAIN statement is wonderful for this). Things are MUCH faster now.
I will definitely be hesitant to join 2 tables from different databases in the future unless really really necessary.
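For reference, EXPLAIN is run by prefixing the query; its output (columns such as key and rows) shows which index, if any, is used and roughly how many rows will be examined. The table and column names below are the standard FreeRADIUS accounting ones, used here purely as an example:

```sql
EXPLAIN SELECT username, SUM(acctsessiontime)
  FROM radacct
 WHERE username = 'alice'
 GROUP BY username;
```

If the key column comes back NULL, the query is doing a full table scan and the WHERE/GROUP BY columns are candidates for an index.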
I want to create a query result page for a simple search, and I don't know: should I use views in my DB, or would it be better to write the query into my code with the same syntax I would use to create the view?
What is the better solution for merging 7 tables when I want to build a search module for my site, which has lots of users and page loads?
(I'm searching multiple tables at the same time.)
You would be better off using a plain query with joins instead of a view. VIEWs in MySQL are not optimized. Be sure to have your tables properly indexed on the fields used in the joins.
If you always use all 7 tables, I think you should use views. Be aware that MySQL rewrites your original query when creating the view, so it's always good practice to save your query elsewhere.
Also, remember you can tweak MySQL's query cache system variables so that the cache stores more data, making your queries respond faster. However, I would suggest using some other method for caching, like memcached. The paid version of MySQL supports memcached natively, but I'm sure you can implement it in the application layer no problem.
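For example, the query cache (available up to MySQL 5.7; it was removed in 8.0) can be inspected and enlarged like this:

```sql
-- Check the current query cache settings.
SHOW VARIABLES LIKE 'query_cache%';

-- Allow up to 64 MB of cached result sets (takes effect immediately;
-- set query_cache_size in my.cnf to persist across restarts).
SET GLOBAL query_cache_size = 64 * 1024 * 1024;
```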
Good luck!
I'm developing a website which to begin with will have three clear sub sites: Forum, News and a Calendar.
Each sub site will have its own database, and common to all of these databases will be a user table, which needs to be in each database so that joins can be done.
How can I synchronize all the user tables so that it doesn't matter in which database I make an update, all the databases will have the same user table.
I'm not worried if there is a short sync delay (less than 1 min), and I would prefer that the solution be as simple as possible.
Why do the sub-sites need to have their own databases? Can't you just use one database, with separate tables for each of the applications? Or, in PostgreSQL, you could use schemas to the same effect.
Though I would hardly endorse an architecture like this, federated tables may do what you want.
A single app can log into more than one database. While I'd advocate kquinn's answer of "all in one DB", because joins will work then, if you really must have separate databases, at least have the user table accessed from one database. "Cloning" a table across multiple databases is fraught with so much peril it's not funny.
I was over-complicating the problem/solution.
Since the databases will (for the time being) exist on the same server, I can use a very simple view.
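Assuming a main database holds the shared users table (the database names below are illustrative), the view in each sub-site database is a single statement:

```sql
-- In the forum (likewise news/calendar) database, expose the shared table:
CREATE VIEW forum_db.users AS
    SELECT * FROM main_db.users;
```

Because a view is just a stored query over the original table, every database sees the same rows with no synchronization delay at all.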