MySQL docs say that tables that use the MERGE storage engine can only union underlying MyISAM tables, which don't allow use of transactions.
Is there an alternative or workaround so that I can have a table that contains the data of several transactional tables in MySQL?
Also, MySQL 4... I know, I know, but it's what I'm stuck with.
Perhaps you could use a view to accomplish this. I'm not too sure if you need the full insert, update, delete functionality or if you just want to select from many tables.
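If SELECT access is all you need, a UNION ALL view over the InnoDB tables would do it. A minimal sketch (table and column names are made up for illustration; note that views only arrived in MySQL 5.0, so this assumes an upgrade from 4.x is possible):

```sql
-- Hypothetical transactional (InnoDB) tables with identical columns
CREATE VIEW all_orders AS
    SELECT id, customer_id, total FROM orders_2005
    UNION ALL
    SELECT id, customer_id, total FROM orders_2006;
```

UNION ALL is usually preferable to UNION here, since it skips the duplicate-elimination pass.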
I'm experimenting with various indexing settings for my mysql database.
I wonder, though: by removing or adding indexes, is there any possibility of damaging data rows in any way? Obviously I realise that if I make any application queries fail, that can cause bad rows; I'm talking more about the structural queries themselves.
Or will I simply affect the efficiency of the database?
I just want to know whether it's safe to experiment or whether I have to be cautious.
The data isn't in phpMyAdmin, it's in MySQL. Adding/removing an index will not affect your data integrity by default. With a unique index and the IGNORE keyword it can: ALTER IGNORE TABLE ... ADD UNIQUE silently drops duplicate rows.
That said, you should always have a backup of your data. It's easy to run a test like:
CREATE TABLE t1 LIKE t;
INSERT INTO t1 SELECT * FROM t;
ALTER TABLE t1 ADD INDEX ...;
Then compare the difference in tables (perhaps a COUNT is fine in your case).
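For the comparison step, a row count or a whole-table checksum is usually enough (table names follow the sketch above):

```sql
SELECT COUNT(*) FROM t;
SELECT COUNT(*) FROM t1;
-- Or compare both in one statement:
CHECKSUM TABLE t, t1;
```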
Adding/removing indexes is safe in terms of the rows in your table. However as you note, too many indexes or poorly constructed indexes can be (very) detrimental to performance. Likewise, adding indexes on large tables can be a very expensive process, and can bring a MySQL server to its knees, so you're better off not "experimenting" on production tables.
I'm using MySQL MyISAM and I have 7 tables in my database linked by a primary key called ID. I want to PARTITION the data on one of these tables by its timestamp. When I want to delete a partition, I'd like to delete all records on the other tables with the same ID as the ones I deleted from the partition.
Can this be done at a similar speed as dropping a partition? I don't particularly want to go to each table and search for the right ID to delete as it would defeat the purpose of partitioning in the first place.
You cannot achieve what you want as you stated it.
Are all the tables the same size? Or perhaps the other tables are "normalization" tables? In that case, just leave the data there?
Please elaborate on what kind of data you have and the relationships (1:1, 1:many, etc) between the tables.
You can create triggers for those tables:
http://dev.mysql.com/doc/refman/5.7/en/create-trigger.html
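A minimal sketch, assuming the tables share the ID column (main_table and child_table are hypothetical names). One important caveat: ALTER TABLE ... DROP PARTITION does not fire DELETE triggers, so this only cascades ordinary row deletes, not partition drops:

```sql
DELIMITER //
CREATE TRIGGER cascade_delete
AFTER DELETE ON main_table
FOR EACH ROW
BEGIN
    -- Remove the matching rows from a related table
    DELETE FROM child_table WHERE ID = OLD.ID;
END//
DELIMITER ;
```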
Is it possible to create a view from tables from two different databases? Like:
create view my_view as
select names as value
from host_a.db_b.locations
union
select description as value
from host_b.db_b.items;
They currently are different database engines (MyISAM and InnoDB).
Thanks in advance.
Yes, you need to access the remote table via the FEDERATED db engine, then create a view using your query.
However this is a rather messy way to solve the problem - particularly as (from your example query) your data is effectively sharded.
This structure won't allow updates/inserts on the view. Even for an updatable/insertable view, my gut feeling is that you'll run into problems if you try to do anything other than auto-commit transactions, particularly as you're mixing table types. I'd recommend looking at replication as a better way to solve the problem.
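For completeness, a sketch of the FEDERATED route, with placeholder credentials and a made-up column list — the local table definition must match the remote one exactly:

```sql
-- Local proxy for the items table living on host_b
CREATE TABLE items_remote (
    description VARCHAR(255)
) ENGINE=FEDERATED
  CONNECTION='mysql://user:password@host_b:3306/db_b/items';

-- The view then unions local and remote data
CREATE VIEW my_view AS
    SELECT names AS value FROM db_b.locations
    UNION
    SELECT description AS value FROM items_remote;
```

The FEDERATED engine must be available on the local server (it is disabled by default in newer MySQL versions).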
I'm considering adding some denormalized information in my database by adding one denormalized table fed by many (~8) normalized tables, specifically for improving select query times of a core use case on my site.
The problems with the current method of querying are:
Slow query times: there are between 8 and 12 joins (some of them left joins) to access the information for this use case, which can take ~3000 ms for some queries.
Table locking/blocking: when information is updated during busy times of the day or week, queries are locked/blocked (because I'm using MyISAM tables), and this can cause further issues (connections running out, worse performance).
I'm using Hibernate (3.5.2), MySQL 5.0 (all MyISAM tables) and Java 1.6.
I'd like some specific suggestions (preferably based on concrete experience) about exactly what would be the best way to update the denormalized table.
The following come to mind:
Create a denormalized table with the InnoDB engine so that I get row-level locking rather than table locking.
Create triggers on the properly normalized tables that update the denormalized table.
I'm looking for:
Gotchas - things that I may not be thinking about that will affect my desired result.
Specific MySql settings that may improve performance, reduce locking / blocking on the denormalized table.
Best approaches to writing the Triggers for this scenario.
Let me know if there is any other information needed to help answer this question.
Cheers.
I've now implemented this, so I thought I'd share what I did. I asked a mate who's a DBA (Greg) for a few tips, and his answers basically drove my implementation:
Anyway, as "Catcall" implied, using TRIGGERS (in my case at least) probably wasn't the best solution. Greg suggested creating two denormalised tables with the same schema, then creating a VIEW that would alternate between the two: one table is "active" (the one being actively queried by my web application) while the other is "inactive" and can be updated with the denormalised information.
My application would run queries against the VIEW whose name would stay the same.
That's the crux of it.
Some implementation details (mysql 5.0.n):
I used stored procedures to update the information and then switch the View from denorm_table_a to denorm_table_b.
Needed to update the permissions for my database user:
GRANT CREATE, CREATE VIEW, EXECUTE, CREATE ROUTINE, ALTER ROUTINE, DROP, INSERT, DELETE, UPDATE, ALTER, SELECT, INDEX on dbname.* TO 'dbuser'@'%';
For creating a copy of a table, the CREATE TABLE ... LIKE ...; command is really useful (it also copies the index definitions).
Creating the VIEW was simple
CREATE OR REPLACE VIEW denorm_table AS SELECT * FROM denorm_table_a;
CREATE OR REPLACE VIEW denorm_table AS SELECT * FROM denorm_table_b;
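To make the mechanics concrete, here is a sketch of the refresh-and-swap (table and column names are illustrative, not from the original implementation). The view switch is shown as a separate statement: whether CREATE VIEW can run inside a stored procedure is version-dependent, so keeping it outside is the safe sketch:

```sql
DELIMITER //
CREATE PROCEDURE refresh_denorm_b()
BEGIN
    -- Rebuild the inactive copy while denorm_table_a keeps serving reads
    TRUNCATE TABLE denorm_table_b;
    INSERT INTO denorm_table_b
        SELECT o.id, o.created_at, c.name
        FROM orders o
        JOIN customers c ON c.id = o.customer_id;
END//
DELIMITER ;

CALL refresh_denorm_b();
-- Point the view at the freshly rebuilt copy:
CREATE OR REPLACE VIEW denorm_table AS SELECT * FROM denorm_table_b;
```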
I created a special "Denormalised Query" object in my middle tier which then mapped (through Hibernate) to the denormalised table (or View, in fact) and allowed easy and flexible querying through the Hibernate Criteria mechanism.
Anyway, hope that helps someone. If anyone needs any more details, let me know.
Cheers
Simon
Here is a solution that I used to denormalize a MySQL one-to-many relation using a stored procedure and triggers:
https://github.com/martintaleski/mysql-denormalization
It explains a simple blog-article-to-article-image relation; you will need to change the fields and queries to apply it to your scenario.
We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and apart from these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want.
Hope my question is clear enough, I'm a novice when it comes to db administration so any input or suggestions are welcome.
Instead of un- and re-compressing the history table: if you want to access a single table for the history, you can use a merge table to combine the compressed read-only history tables.
Thus assuming you have an active table and the compressed history tables with the same table structure, you could use the following scheme:
The tables:
compressed_month_1
compressed_month_2
active_month
Create a merge table:
create table history_merge like active_month;
alter table history_merge
ENGINE=MRG_MyISAM
union (compressed_month_1,compressed_month_2);
After a month, compress the active_month table and rename it to compressed_month_3. Now the tables are:
compressed_month_1
compressed_month_2
compressed_month_3
active_month
and you can update the history table:
alter table history_merge
union (compressed_month_1, compressed_month_2, compressed_month_3);
Yes, you can compress the MyISAM tables.
Here is the doc from 5.0 : http://dev.mysql.com/doc/refman/5.0/en/myisampack.html
You could use myisampack to generate fast, compressed, read-only tables.
(Not really sure if that hurts performance if you have to return most of the rows; testing is advisable; there could be a trade-off between compression and disk reads).
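For reference, the packing workflow looks roughly like this (paths and table name are illustrative; the server must not be writing to the table while you pack):

```shell
# myisampack operates on the table's index file
myisampack /var/lib/mysql/mydb/archive_2010.MYI
# Rebuild the indexes for the packed table
myisamchk -rq /var/lib/mysql/mydb/archive_2010.MYI
```

Afterwards, run FLUSH TABLES from a MySQL client so the server reopens the packed files.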
I'd also say: certainly apply the usual advice:
Provide appropriate indexes (based on the most used queries)
Have a look at clustering the data (again if this is useful given the queries)