I am copying individual tables from a master database to a number of read-only slaves (the MySQL user is restricted to read-only). One easy way to copy tables is:
CREATE TABLE slave_db.x LIKE master_db.x;
INSERT INTO slave_db.x SELECT * FROM master_db.x;
This will not copy the foreign keys or set the AUTO_INCREMENT counter correctly. Is there any reason to transfer the constraints, given there is no possibility of slave database modifications?
No, if it is only for read purposes you don't need to copy the constraints. Inserts will even be faster this way (although you should still create appropriate indexes).
Also, you can do it as simply as:
CREATE TABLE slave_db.x as
SELECT * FROM master_db.x;
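Note that CREATE TABLE ... AS SELECT copies the data but not the indexes or the AUTO_INCREMENT counter, so you may want to add those afterwards. A minimal sketch (the index and column names here are placeholders):
-- Add whatever indexes your read queries need; names are hypothetical.
ALTER TABLE slave_db.x ADD INDEX idx_x_lookup (some_column);
-- Optionally align the counter with the master's next value (100000 is a placeholder).
ALTER TABLE slave_db.x AUTO_INCREMENT = 100000;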
I have a typical Access front-end with a SQL Server back-end. I created some views in SQL Server and linked to them in Access. When I use "CREATE INDEX index_name ON view_name (field_name)", it creates a primary key even though I have not specified it to do so (and do not want it to). Why is that, and how can I create a non-primary-key index?
How does this work?
Any view, any linked table - in fact ANYTHING you hit, use, or consume from SQL Server?
All indexing is set up 100% in SQL Server. The Access client side does not, cannot, and WILL not create any kind of index for you.
The CREATE INDEX command to specify and set up a primary key? It does not really create an index in Access but ONLY SETS and TELLS Access what PK to use.
In fact, when you link to a view, you are prompted to select the PK.
SQL Server views DO NOT have the concept of, or even a setting that lets you specify, the PK column. Part of the reason for this is that a view can consist of more than one table - so which table is to define the primary key? In fact, if your view has a join with say 5 tables, then that view has 5 different primary keys from 5 different tables.
So, when you link to a view in Access, you are shown a prompt asking which column(s) uniquely identify each record.
If you don't select a column for the PK?
Then you have no PK set. However, you can use VBA to TELL Access which column is to be the PK.
So, say in the above I did not select a PK when linking with the GUI, or say I am using code to link to a view?
Then, to set the PK in code, I would (and could and should) execute the following command:
CurrentDb.Execute "CREATE UNIQUE INDEX IXPK ON dbo_ViewHotelsTest (ID) WITH PRIMARY"
AGAIN: Note the above comments. The CREATE UNIQUE INDEX command DOES NOT create an index in Access, nor does it create an index on SQL Server. That command is how you tell Access which column is to be seen and treated as the PK.
So, the above command, in plain English:
Please, Mr. Access, will you set the PK column - and we are using the above
command to do this.
In other words, there is no other command in code to "tell" Access what the PK is supposed to be, so the DDL CREATE INDEX command is used. But I STRESS AGAIN: this does NOT really create an index; it ONLY tells Access what column to use as the PK.
This command gives the SAME, IDENTICAL result as selecting a PK during the linking of a view.
If you want to create an index in SQL Server? Then go to SQL Server and create your index(es) there.
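For example, a minimal sketch on the SQL Server side (the table and column names are hypothetical):
-- Run in SQL Server, not from Access; names are placeholders.
CREATE INDEX IX_Hotels_City ON dbo.Hotels (City);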
FYI:
As a further explanation: in 99% of cases you NEVER want, nor need, nor even should create an index on a view on the SQL Server side of things.
In EVERY case, if the base table used as the source of the view has an index that can be used, it WILL be used - whether you build an on-the-fly query, build a SQL Server side view, or even create a SQL stored procedure. A simple create of an index on the base source table (using SQL Server tools) will suffice, and in ALL cases - including views, and including linked views from Access - ANY and ALL existing indexes on the base table will automatically be used.
So, not only is there zero requirement to EVER try and create an index in Access on linked tables (or linked views), but in fact it is not even possible. Of course, the CREATE INDEX command DOES need to be used to set the PK column when linking to a view.
If you link to a table, then Access can figure out which column is the PK, and will set this for you. But SQL Server does not have a setting, nor even the concept, of a PK column for a view. Thus you have to select the PK during linking using the GUI or, as noted, execute the above command in code to tell Access which column to use as the PK. And as noted, that command does not in fact create an index; it ONLY tells the Access client side which column to see/use as the PK.
You can skip this for views that don't require you to "update" the data. A linked view without you selecting (or better said, "setting") the PK column will be read-only.
So, if you are using the view for a combo box, or say just a report? Then you don't care, and don't need to set the PK for that view - it will simply be read-only. This means that you ONLY need to set the PK column for a view if you need to update that view (say, in place of updating the base table that the view is based on).
So, in summary:
That CREATE INDEX command does not actually create an index.
That CREATE INDEX command is ONLY required if you need a linked view that allows the Access client side to update the view. Without the setting, the linked view will be read-only. So the purpose, the act, the role, the "thing" that CREATE INDEX does on the linked view? It is ONLY to tell Access what column is to be used for the PK - it does not actually create an index anywhere, including NOT on the Access client side. (I can't really say why that command is used this way, but my best guess is that no other way existed to tell Access which column to use for the PK - so we use that command.)
If you use the Linked Table Manager and refresh the table links, Access WILL remember the PK settings for a view. However, if during linking you change the database that the linked tables point to, then the PK settings on views will be lost during that re-linking process (and then you have to re-execute those commands to re-tell Access which column in the linked view is to be seen/used as the PK column).
You don't ever need to create an index client side in Access for linked tables or views - all indexing is automatic, and if an index exists on the server table, it will and can be used.
So, the CREATE INDEX command is HOW you set up a PK column for linked views. In all other cases (linked tables, but not views), that command is not required, and ANY and all existing indexes created on the server side table will be used - thus there is no need to try to create an index in Access, since all such indexing is handled by the server side. Access has no say, nor even control, over how SQL Server uses indexes; a correct index on a SQL Server table will automatically be used for the requests Access makes to SQL Server. That "job" of indexing is 100% managed by the server, not Access.
Wondering if there is a way to skip/ignore all temp tables using mysqldump. In our instance, these tables are prefixed as tmp{guid}.
These temp tables have a very short lifespan; they are used for building some sort of report in their parent application. The lifetime may be up to 1 minute.
EDIT:
It has been suggested that I use the --ignore-table parameter; unfortunately, this doesn't provide a way for me to specify a wildcard as the table name (tmp*).
You are not talking about tables from CREATE TEMPORARY TABLE ..., correct? Instead, you are talking about a set of tables with a particular naming convention?
Instead of trying to do it with table names, do it with a DATABASE:
CREATE DATABASE TempTables;
CREATE TABLE TempTables.abcd (...);
And reference them via the db name:
INSERT INTO TempTables.abcd ...
SELECT ... FROM TempTables.abcd JOIN ...
Then use the suitable parameters on mysqldump to avoid that one DATABASE (or pick all the other databases to dump).
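If moving the tmp tables into their own database isn't feasible, a hedged alternative (a sketch, assuming the tables live in a database called mydb) is to generate the --ignore-table options from information_schema and paste them onto your mysqldump command line:
-- Emit one --ignore-table option per tmp-prefixed table.
-- The database name 'mydb' is a placeholder.
SELECT CONCAT('--ignore-table=', table_schema, '.', table_name)
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND table_name LIKE 'tmp%';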
I want to replicate a certain table from one database into another database on the same server. These tables contain exactly the same fields.
I was considering using MySQL replication to replicate that table, but some people said that it will increase I/O, so I found another way: creating three triggers (INSERT, UPDATE and DELETE) that will do exactly what I expect.
My question is, which way is better? Is MySQL replication better even though it's on the same server, or is using triggers to replicate the data better?
Thanks.
I don't know what your goal is, but I achieved mine by making use of the VIEW functionality.
I had two different applications with separate databases on the same MySQL server. Application2 needed to read a little data from Application1. In general, this is a trivial situation that you can handle with USE DB1; or USE DB2; as needed, but my programming framework does not work very well with multiple DBs.
So, let's see my solution...
Here is my select query to retrieve this data:
SELECT id, name FROM DB1.customers;
So, using DB2 as default schema, I've created a VIEW:
USE DB2;
CREATE VIEW app1_customers AS SELECT id, name FROM DB1.customers;
Now I can retrieve this data in DB2 as a regular table with a regular SELECT statement.
SELECT * FROM DB2.app1_customers;
Hope it's useful. BR
Assuming you have two databases on the same server, i.e. DB1 and DB2, and the table is called tbl1 and sits in DB1, you can query the table like this:
USE DB1;
SELECT * FROM tbl1;
USE DB2;
SELECT * FROM DB1.tbl1;
This way you won't need to copy the data or worry about extra space and extra code. You can query a table in another database on the same server; replication and triggers are not your answer here. You could also create a view to encapsulate the SQL statement, as in the sketch below.
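A minimal sketch of such a view (the view name is a placeholder):
USE DB2;
-- Hypothetical name; exposes DB1.tbl1 inside DB2 like a regular table.
CREATE VIEW tbl1_view AS SELECT * FROM DB1.tbl1;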
Definitely, triggers are the way to go. Another server (a slave) would need to spare several MB for the installation and logs, plus CPU and memory usage.
I'd use triggers to keep both tables equal. If you want to create a table with the same column definitions and data, use:
USE db2;
CREATE TABLE t1 AS SELECT * FROM db1.t1;
After that, go ahead and create the triggers for the UPDATE, INSERT and DELETE statements.
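For example, a minimal sketch of the INSERT trigger (the column list is hypothetical; mirror your real columns):
DELIMITER //
CREATE TRIGGER t1_after_insert
AFTER INSERT ON db1.t1
FOR EACH ROW
BEGIN
    -- Columns are placeholders; list the real ones explicitly.
    INSERT INTO db2.t1 (id, name) VALUES (NEW.id, NEW.name);
END//
DELIMITER ;
The UPDATE and DELETE triggers follow the same pattern, keyed on the table's primary key.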
Also, you could ALTER the new table to a different engine like MEMORY, or add indexes, to see if you can improve something.
I have a MySQL database that is about 17 GB in size and has 38 million entries. At the moment, I need to both increase the size of one column (VARCHAR(40) to VARCHAR(80)) and add more columns.
Many of the fields are indexed, including the one that I need to change; it is part of a unique pair that is necessary for the applications to work. When I attempted to make the change yesterday, the query ran for almost four hours without finishing, at which point I decided to cut our outage short and just bring the service back up.
What is the most efficient way to make changes to something of this size?
Many of these entries are also old. If there is a good way to shard off old entries but still have them available, that might help with this problem by making the table a much more manageable size.
You have some choices.
In any case you should take a backup before you do this stuff.
One possibility is to take your service offline and do it in place, as you have tried. If you do that you should disable key checks and constraints.
ALTER TABLE bigtable DISABLE KEYS;   -- defer nonunique index maintenance (effective on MyISAM)
SET FOREIGN_KEY_CHECKS=0;            -- skip FK validation for this session
ALTER TABLE (whatever);
ALTER TABLE (whatever else);
...
SET FOREIGN_KEY_CHECKS=1;            -- turn FK validation back on
ALTER TABLE bigtable ENABLE KEYS;    -- rebuild the deferred indexes in one pass
This will allow the ALTER TABLE operation to go faster. It will regenerate the indexes all at once when you do ENABLE KEYS.
Another possibility is to create a new table with the new schema you want, then disable the keys on the new table, then, as #Bader suggested, insert the contents of the old table.
After your new table is built, re-enable the keys on it, then rename the old table to some name like "old_bigtable" and rename the new table to "bigtable", as sketched below.
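A minimal sketch of that swap (the new table's name is assumed):
-- Atomically swap the tables once the new one is populated.
RENAME TABLE bigtable TO old_bigtable, new_bigtable TO bigtable;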
It's possible that you can keep your service online while you're populating the new table. But that might work poorly.
A third possibility is to dump your giant table to a flat file and then load it into a new table with the new layout. That is pretty much like the second possibility, except that you get a table backup for free. You can make this go pretty fast with SELECT ... INTO OUTFILE and LOAD DATA INFILE. You'll need access to your server machine's file system to do this.
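A minimal sketch of that dump-and-reload (the file path and table names are placeholders):
-- Dump the old table to a flat file the server can write.
SELECT * INTO OUTFILE '/tmp/bigtable.dat' FROM bigtable;
-- Load it into the new table created with the new layout.
LOAD DATA INFILE '/tmp/bigtable.dat' INTO TABLE new_bigtable;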
In all cases, disable, then re-enable, the constraints and keys to get things to go fast.
Create a new table with the new structure you want, using a different name, for example NewTable.
Then insert data into this new table from the old table using the following query:
INSERT INTO NewTable (field1, field2, ...)
SELECT field1, field2, ... FROM OldTable;
After this is done, you can drop the old table and rename the new table to the original name:
DROP TABLE `OldTable`;
RENAME TABLE `NewTable` TO `OldTable`;
I have tried this approach on a very large table and it's much much faster than altering the table.
With MySQL 5.1, and again with 5.5, certain ALTER statements were enhanced to just modify the structure without rewriting the entire table (http://dev.mysql.com/doc/refman/5.5/en/alter-table.html - search for "in place"). The availability of this varies by the type of change you are making and the engine in use; the most value comes from the InnoDB Plugin. In the case of your specific changes, though, the entire table would be rewritten.
When we encounter these issues, we typically try to leverage replica databases. As long as you are adding and not removing, you can run your DDL against the replica first and then schedule a brief outage to promote the replica to the master role. If you happen to be on RDS, this is even one of their suggested uses for replica instances: http://aws.amazon.com/about-aws/whats-new/2012/10/11/amazon-rds-mysql-rr-promotion/
Some other alternatives include:
Selecting out a subset of records into a new table with the desired structure (use INTO OUTFILE to avoid a table lock). Once complete, you can schedule a maintenance window and REPLACE INTO or UPDATE any records that have changed in the origin table since the initial data copy. Once the update is complete, a RENAME TABLE ... of both tables wraps the changes up (see the sketch after this list).
Using a tool like Percona's pt-online-schema-change: http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html. This tool works with triggers, so if you already have triggers on the tables you want to change, this may not fit your needs.
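A minimal sketch of the catch-up and swap from the first alternative (the table names and the updated_at column are hypothetical):
-- Re-sync rows changed since the initial copy; assumes an updated_at column
-- and a primary/unique key on bigtable_new so REPLACE can overwrite rows.
REPLACE INTO bigtable_new
SELECT * FROM bigtable WHERE updated_at >= '2013-01-01 00:00:00';
-- Swap both tables in one atomic statement.
RENAME TABLE bigtable TO bigtable_old, bigtable_new TO bigtable;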
What is the purpose of a temporary table like in the following statement? How is it different than a regular table?
CREATE TEMPORARY TABLE tmptable
SELECT A.* FROM batchinfo_2009 AS A, calibration_2009 AS B
WHERE A.reporttime LIKE '%2010%'
AND A.rowid = B.rowid;
Temp tables are kept only for the duration of your session with the server. Once the connection is severed for any reason, the table is automatically dropped. They're also only visible to the current session, so multiple sessions can use the same temporary table name without conflict.
A temporary table ceases to exist when the connection is closed. So, its purpose is, for instance, to hold a temporary result set that has to be worked on before it is used.
Temporary tables are mostly used to store query results that need further processing, for instance if the result needs to be queried or refined again, or is going to be used on different occasions by your application. Usually the data stored in a temporary table contains information from several regular tables (like in your example).
Temporary tables are deleted automatically when the current database session is terminated.
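A quick illustration of that session scoping (the table and column names are made up):
-- Session A:
CREATE TEMPORARY TABLE scratch (id INT);
INSERT INTO scratch VALUES (1);
SELECT * FROM scratch;  -- returns one row

-- Session B (a separate connection):
SELECT * FROM scratch;  -- fails: the table exists only in session A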
Support for temporary tables exists to allow procedural paradigms in a set-based 4GL, either because the coder has not switched their 3GL mindset to the new paradigm or to work around a performance or syntax issue (perceived or otherwise).