View All Table / Database Constraints in MySQL

How do you view all the constraints (primary keys, secondary keys, assertions, triggers, etc.) in a MySQL database/table from the command-line environment?

Just dump the database without data using
mysqldump --no-data (other options)
as if you were taking a backup. Use the same options as you would for a backup (perhaps with --lock-tables=0, since you don't need a lock when dumping only the schema).
Without the data, you get just the schema, which includes all the things you listed above.
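For example, a minimal sketch (the host, user, and database name are placeholders):
mysqldump --no-data --lock-tables=0 -h localhost -u root -p mydb > mydb_schema.sql
The resulting mydb_schema.sql contains the CREATE TABLE statements with primary keys, foreign keys and indexes, plus trigger definitions (triggers are included by default).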

DESCRIBE is an alias for SHOW COLUMNS - essentially a shortcut. If you want all the other objects, you need the corresponding SHOW commands; SHOW COLUMNS covers the rest.
Describe
Show
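For instance, to look at individual objects from the mysql prompt (a sketch; mydb and mytable are placeholder names):
SHOW CREATE TABLE mydb.mytable;   -- full DDL, including keys and foreign key constraints
SHOW INDEX FROM mydb.mytable;     -- primary and secondary indexes
SHOW TRIGGERS FROM mydb;          -- trigger definitions
SHOW COLUMNS FROM mydb.mytable;   -- same output as DESCRIBE mydb.mytable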

Related

mysqldump - sorting table output to avoid forward references in foreign keys

I'm currently attempting to create a script that will:
process a mysql schema and dump the descriptions of tables in the schema,
translate the MySQL-specific parts of the resulting DDL to another database system's equivalent (currently targeting H2, but any database that runs in-process in a Java environment and supports memory tables would be appropriate if I need to switch), and
then recreate them in a schema in the new database system.
Currently, I'm using mysqldump to perform the first part of the operation, and a sequence of string/regexp substitutions to perform the second.
The problem I have is that mysqldump is producing output in which tables that have foreign keys appear before the tables they reference. This isn't a problem for MySQL itself, because you can simply disable constraint checking and then create the tables -- unfortunately, while H2 does have an option for disabling constraints, it only applies to data changes and not to table creation.
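In MySQL that just means wrapping the import, roughly:
SET FOREIGN_KEY_CHECKS = 0;  -- allow CREATE TABLE / INSERT in any order
-- ... replay the mysqldump output here ...
SET FOREIGN_KEY_CHECKS = 1;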
I'd really rather not have to parse the SQL enough to identify the correct order of the tables myself, as it seems at least vaguely tricky to do it right.
Therefore, is there either (1) any way to get mysqldump to produce the tables in the correct order or (2) an alternative approach to produce the correct order?
(I'm aware that it is at least theoretically possible to have a schema where no correct order is possible, but I know none of the schemas I'm likely to work with have this problem, so this isn't something I need to worry about)

mysqldump: how to fetch dependent rows

I'd like a snapshot of a live MySQL DB to work with on my development machine. The problem is that the DB is too large, so my thought was to execute:
mysqldump [connection-info-here] --no-autocommit --where="1 limit 1000" mydb > /dump.sql
I think this will give me the first thousand rows of every table in database mydb. I anticipate that the resulting dataset will break a lot of foreign key constraints since some records will be missing. As a result the application I mean to run on the dev machine will fail.
Is there a way to mysqldump a sample of the database while ensuring that all records dumped abide by key constraints? (for instance if a foreign key is dumped, the matching record in the foreign table will also be dumped).
If that isn't possible, how do you guys deal with this problem?
No, there's no option for mysqldump to dump only rows that match in foreign key relationships. You already know about the --where option, and that won't do it.
I've had the same task as you, to dump a subset of data but only data that is related. For example, for creating a test instance.
I've been using MySQL for many years, I've worked as a MySQL consultant and trainer, and I try to keep up with current tools. I have never heard of any MySQL tool that does this operation.
The only solution I can suggest is to write your own script to dump table by table using SELECT...INTO OUTFILE.
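A rough sketch of that approach, assuming a child table orders that references customers (the table and column names are made up, and the server needs the FILE privilege and a writable secure_file_priv directory):
-- parent table: take a sample
SELECT * FROM customers WHERE id <= 1000
INTO OUTFILE '/tmp/customers.txt';
-- child table: only the rows whose parent made it into the sample
SELECT o.* FROM orders AS o
JOIN customers AS c ON o.customer_id = c.id
WHERE c.id <= 1000
INTO OUTFILE '/tmp/orders.txt';
You would then reload each file on the dev machine with LOAD DATA INFILE, working from parent tables down to child tables.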
It's sometimes easier to write a custom script just for your specific schema, than for someone to write a general-purpose tool that works for everyone's schema.
The way I have dealt with this problem in the past is not to copy data from the live database at all. I find some other way to create a subset of fake data for testing. It's probably better to create synthetic data anyway, because then you don't risk accidentally using live data in your dev/test environment, in case some of it is private.

Restoring data without recreating MySQL Tables

This may sound like a stupid question, but I can't find anything on Google; I'm probably using the wrong keywords.
Anyway, I have been working on a project (version 1) which has a MySQL database. I'm ready to release version 2, but there are changes to the database tables, e.g. extra columns.
Suppose I back up the current database with its data and create a database with the new structure. How can I add the data from the old database into the new database?
I know there won't be any problems with the existing data being added to the new database structure, as the existing fields haven't changed; it's just extra columns.
Thanks for your help.
In this case I use mysqldump with some additional options, something like
mysqldump --host=localhost --user=root --no-create-db --no-create-info --complete-insert --extended-insert
That will produce complete INSERT statements with column names, so you don't need to worry about the final table structure: as long as you did not change the column names, even the order of the columns can change.
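A fuller sketch of that approach (host, user, and database names are placeholders):
# dump the data only, as complete INSERTs with explicit column lists
mysqldump --host=localhost --user=root -p --no-create-db --no-create-info --complete-insert --extended-insert old_db > old_data.sql
# load it into the database that already has the new structure
mysql --host=localhost --user=root -p new_db < old_data.sql
The new columns are simply left at their defaults, because the INSERT statements only name the old columns.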
Consider using ALTER TABLE to resolve this issue.
The key is to take the new fields in your database and append them to the end of your entities, like so:
ALTER TABLE myTable ADD COLUMN myColumn (... further specification ...)
MySQL will expand the table and set the new fields to the defaults you specify. You can then layer any new data on top of the old, as long as there are no conflicts, as you describe.
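For example, for a single new column (the column name and type are just an illustration):
ALTER TABLE myTable ADD COLUMN myColumn INT NOT NULL DEFAULT 0;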
Option B, when the online solution is expensive, is to use mysqldump, then alter the output to fit the new table specification. As long as the columns align properly (this may require a simple regular expression to parse, in the worst case), you should be able to recreate the data by importing it into the new schema.
See also, this answer.

How to accomplish "MySQL cross database reference" with PostgreSQL

We are migrating the database in our product from MySQL to PostgreSQL (through Java), so we need to change the MySQL queries to PostgreSQL queries in the Java application. How do I create a table referenced as databasename.tablename in PostgreSQL?
In MySQL we can create the table directly, e.g. create table information.employee.
Here the database name is "information" and the table name is "employee". Is it possible to achieve the same query in PostgreSQL?
I searched Google and it says cross-database references are not possible. Please help me.
I looked at the pg_class table; it contains the table names of a specific database. Likewise, are the database-to-table relationships stored in any other table?
This is normally done using schemas rather than databases, which is more or less like how MySQL organizes it anyway.
Instead of
create database xyz
use
create schema xyz
When you create tables, create them:
create table xyz.myTable
You will need to update your search_path to see them in the psql command-line tool, or if you want to query them without naming the schema explicitly. The default schema is public, so when you create a table without a schema name, it ends up in public. If you modify your search_path as below, the default schema becomes the first in the list: xyz.
set search_path=xyz,public,pg_catalog;
Note that you must not have spaces in that statement. You can also do it globally for a user/role:
alter role webuser set search_path=xyz,public,pg_catalog;
Also, don't forget that PostgreSQL string matches are case-sensitive by default (this one catches people out a lot).
If you want to have different physical locations for the files for each schema, you can do that with tablespaces. If you have a look at the postgresql documentation page, they have info on how to do it, it's pretty easy.
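A minimal sketch, assuming a directory the postgres server user can write to (the path and names are placeholders):
CREATE TABLESPACE xyz_space LOCATION '/var/lib/postgresql/tablespaces/xyz';
CREATE SCHEMA xyz;
CREATE TABLE xyz.myTable (id integer PRIMARY KEY) TABLESPACE xyz_space;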
A database in MySQL == a schema in PostgreSQL. So you will most probably want to migrate all your MySQL databases into one Postgres database. Then you will be able to do "cross-database" queries.
See my answer to this question: Relationship between catalog, schema, user, and database instance

question about MySQL database migration

I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here also involves changes to some database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP/Python script (the two languages I know) to connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column should have a default value of 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail: I have FOUR old databases, which I'll call 'DB_a', 'DB_b', 'DB_c', and 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database with a new column value 'DB_x' (x indicating which database it comes from). If I can't tell the database ID from the row's content, the only way I can identify them is via some user-supplied parameters.
Are there any tools, or a better method, than writing a script yourself? I don't need to worry about concurrent-write problems here; the old database will be down (not open to public use, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after the migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create-dump / import-dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
Use mysqldump to dump the data, then load the dump on the new server with mysql < output.txt. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using the mysqldump --complete-insert option (see the link above)
5) Import your data, using mysql < data.sql
This should do the job for you, good luck!
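Put together as commands, the steps above might look roughly like this (the database names are placeholders):
# 1) schema only
mysqldump -u root -p --no-data old_db > schema.sql
# 2) edit schema.sql by hand to add the new columns and tables
# 3) create the new database and load the altered schema
mysql -u root -p -e "CREATE DATABASE new_db"
mysql -u root -p new_db < schema.sql
# 4) data only, as complete inserts with column names
mysqldump -u root -p --no-create-info --complete-insert old_db > data.sql
# 5) import the data
mysql -u root -p new_db < data.sql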
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will set the new column to the default value for all existing rows.
So here is what I would do:
1) Make a copy of your old database with the mysqldump command.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that will modify your database with table-modification (ALTER TABLE) commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (if it fails, go back to (2)).
5) If all is OK, go back to (1) and go live with your new database.
These are all valid approaches, but I believe what you want is to write an SQL statement that generates the INSERT statements needed to support the new columns you have.