PostgreSQL, MonetDB and MySQL add primary key to existing table - mysql

When I add a primary key to a table that already has data, what does each of these database management systems do?
Do they analyze each and every value of the column to confirm it is unique?
Or do they have some other optimized mechanism? And if so, what is that mechanism?

Yes, at least in PostgreSQL and MySQL (probably MonetDB too), the DBMS will first check that all values are unique, just as it does when you declare a UNIQUE constraint. You can simulate the check by counting all rows and then counting the distinct values of the column: if the two numbers differ, you will not be able to create the primary key.
An index is created as well; it enforces uniqueness from then on and speeds up lookups that use the primary key after it is created.
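The check described above can be sketched in SQL. The customers table and customer_id column are made-up names for illustration; note that COUNT(DISTINCT ...) ignores NULLs, while a primary key additionally requires NOT NULL:

```sql
-- Compare the total row count with the distinct count on the candidate column.
-- If the two numbers differ, ALTER TABLE ... ADD PRIMARY KEY will fail.
SELECT COUNT(*)                    AS total_rows,
       COUNT(DISTINCT customer_id) AS distinct_rows
FROM customers;

-- If they match (and the column has no NULLs), the key can be added:
ALTER TABLE customers ADD PRIMARY KEY (customer_id);
```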

Related

MySQL constraint/trigger to prevent duplicated rows?

Is there a performance difference between using unique constraint and trigger to prevent duplicated rows in MySQL?
Measure it, but the outcome should be fairly obvious.
Since the unique index exists specifically to enforce this constraint, it should be the first choice.
With a trigger you would have to do extra work just to check whether the row already exists (a table scan versus an index lookup, unless you also add a non-unique index on the column), and then react to it accordingly. So unless you need something extra, such as logging the failed attempt, the trigger is just unnecessary overhead.
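As a sketch of the two approaches, assuming a hypothetical users table with an email column (the trigger variant requires MySQL 5.5+ for SIGNAL, and is shown only to illustrate the extra work involved):

```sql
-- Preferred: let a unique index enforce the constraint directly.
ALTER TABLE users ADD UNIQUE INDEX uq_users_email (email);

-- Trigger alternative: an explicit lookup before every insert.
DELIMITER //
CREATE TRIGGER trg_users_no_dup
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
  -- Without an index on email, this is a full table scan per insert.
  IF EXISTS (SELECT 1 FROM users WHERE email = NEW.email) THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Duplicate email';
  END IF;
END//
DELIMITER ;
```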

Migrate SQLite data into MySQL and manage/update the foreign keys?

I'm developing an Android application in which the data is stored in a SQLite database.
I have set up a sync with a MySQL database on the web, to which I'm sending the data stored in SQLite on the device.
The problem is that I don't know how to maintain the relations between tables: the primary keys are going to be reassigned by AUTO_INCREMENT while the foreign keys keep their old values, breaking the relations between tables.
If this is a full migration, don't use auto-increment during the migration; create the tables with plain columns and use ALTER TABLE to change the model after the import.
For an incremental sync, the easiest way I see is an additional column in each MySQL table, called sqlite_id, filled with the original id. Then you can repair the references with UPDATE statements that join on it.
Alternatives involve temporary tables for storing the data and an auxiliary table used for pairing old and new ids, which gets tedious for a bigger data model.
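The sqlite_id approach might look like this; the orders and customers tables and their columns are hypothetical examples:

```sql
-- Each MySQL table keeps the original SQLite id in an extra column.
-- After inserting the rows (MySQL assigns new AUTO_INCREMENT ids),
-- repair the foreign keys with a joined UPDATE:
UPDATE orders o
JOIN customers c ON c.sqlite_id = o.customer_id  -- still the old SQLite value
SET o.customer_id = c.id;                        -- replace with the new MySQL id
```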
The approach I tend to use, if possible, is to avoid auto-increment in such situations. I usually have an auxiliary table with four columns like this: t_import(tablename, operationid, sqlite_id, mysqlid).
The process is the following:
1. Import the primary keys into t_import. Use operationid to separate parallel imports if needed.
2. Generate new keys for the data tables and store them in t_import. This can be combined with step 1.
3. Import the actual data, using t_import to set the new primary keys and restore the relations.
That should work for most scenarios I know about.
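One way the steps above could be sketched, assuming hypothetical staging tables holding the raw SQLite rows (the user-variable numbering is just one way to generate new keys):

```sql
-- Auxiliary pairing table from the answer above:
CREATE TABLE t_import (
  tablename   VARCHAR(64) NOT NULL,
  operationid INT         NOT NULL,
  sqlite_id   INT         NOT NULL,
  mysqlid     INT         NOT NULL,
  PRIMARY KEY (tablename, operationid, sqlite_id)
);

-- Steps 1 + 2 combined: register the old keys and generate new ones.
INSERT INTO t_import (tablename, operationid, sqlite_id, mysqlid)
SELECT 'customers', 1, s.id, (@n := @n + 1)
FROM staging_customers s, (SELECT @n := 0) init;

-- Step 3: import the data with the new keys and restored relations.
INSERT INTO customers (id, name)
SELECT m.mysqlid, s.name
FROM staging_customers s
JOIN t_import m ON m.tablename = 'customers'
               AND m.operationid = 1
               AND m.sqlite_id = s.id;
```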
Thanks for the help, you have given me some ideas.
I will try to add an id2 field to the tables that will store the same value as the primary key (_id).
When I send the information from SQLite to MySQL and the primary key is reassigned, the id2 field will still hold the original value of the primary key, so I can compare it with the foreign keys of the other tables and update them.
Let’s see if it works.
Thanks

Why is it much slower to add a primary key to tables in MySQL than in PostgreSQL?

Currently I am using DBGen to generate the database that will be used for the TPC-H benchmarks. I am importing the files (raw data directly from DBGen) into both MySQL and PostgreSQL. After the data is imported, I need to add primary keys as well as foreign keys to the existing tables.
I am using the simplest possible commands to add the primary and foreign keys.
In my experience, PostgreSQL runs much faster than MySQL, especially on big tables (a 1.4 GB lineitem table in my case).
But does anyone know why that is?
Does it mean that the two systems do something very different when adding primary or foreign keys?
When you add or remove a PRIMARY KEY in MySQL, it rebuilds the entire table, effectively re-importing it by making a copy.
Beyond being a general limitation (this happens with MyISAM too), InnoDB stores each table as a "clustered primary key": the rows are physically kept in a tree ordered by the primary key. The primary key is therefore integral to how the table is stored and sorted, so even if MySQL could somehow avoid copying everything, it would still have to completely reorganize the table anyway.
See: https://dev.mysql.com/doc/refman/5.6/en/innodb-index-types.html
I would suggest adding the PRIMARY KEY before you import the data, so that you only need to do it once.
You should be able to add secondary indexes and foreign key references online, without a table copy. See:
https://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html
https://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-limitations.html
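A sketch of that order of operations for the TPC-H lineitem table (schema abbreviated to a few columns; the ALGORITHM/LOCK clauses require MySQL 5.6+):

```sql
-- Declare the primary key when creating the table, before loading data,
-- so the clustered index is built exactly once during the import:
CREATE TABLE lineitem (
  l_orderkey   INT NOT NULL,
  l_linenumber INT NOT NULL,
  l_partkey    INT NOT NULL,
  -- ... remaining TPC-H columns ...
  PRIMARY KEY (l_orderkey, l_linenumber)
) ENGINE=InnoDB;

LOAD DATA LOCAL INFILE 'lineitem.tbl'
INTO TABLE lineitem
FIELDS TERMINATED BY '|';

-- Secondary indexes can then be added online, without a table copy:
ALTER TABLE lineitem
  ADD INDEX idx_partkey (l_partkey),
  ALGORITHM=INPLACE, LOCK=NONE;
```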

Creating UNIQUE constraint on multiple columns in MySQL Workbench EER diagram

In MySQL Workbench's EER diagram, there is a checkbox to make each column in a table unique, not null, primary key etc.
However, I would like to have a UNIQUE constraint on multiple columns. Is it possible to add it in MySQL Workbench's EER diagram?
EDIT: OK, I realised the unique checkbox creates a UNIQUE INDEX, not a UNIQUE CONSTRAINT.
In the Alter Table dialog of MySQL Workbench:
Go to Indexes tab.
Double-click on a blank row to create a new index.
Choose 'UNIQUE' as the index type.
Check the columns that you want to be unique together.
There's some discussion as to whether this is odd, since an index is not the same thing as a constraint, and I certainly wouldn't have thought to look there. However, the unique index enforces uniqueness in the same way as a unique constraint would, and may improve performance. For example, if I try to insert a row that would violate the uniqueness after using this method, MySQL throws a '1062 Duplicate entry' error.
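In SQL terms, the Workbench steps above generate something like the following (bookings and its columns are made-up example names; run one form or the other, not both, since they create the same index):

```sql
-- What the Indexes tab produces:
ALTER TABLE bookings
  ADD UNIQUE INDEX uq_room_date (room_id, booking_date);

-- Equivalent constraint syntax; MySQL implements it as a unique index anyway:
ALTER TABLE bookings
  ADD CONSTRAINT uq_room_date UNIQUE (room_id, booking_date);
```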
It does not seem to be available: http://bugs.mysql.com/bug.php?id=48468. It seems what you can do is create a multi-column unique index on the Indexes tab, but for a multi-column unique constraint you need to run the creation command manually.
With the latest MySQL Workbench (I'm on 6.0.8), it is possible to create composite keys.
If you wish to create a composite primary key, you can select multiple columns and check the PK checkbox. However, there is an additional required step: click the Indexes tab, then set the desired order of the primary key columns in the Index Columns panel.

How do I remove redundant primary keys from a MySQL table?

I have been developing an app for some time. This involves entering and deleting a lot of useless data in the tables. Now that I want to go to production, I want to get rid of all the data but also reset all the IDs (primary keys) so that the live system can start fresh with sensible IDs like 1, 2, 3, etc.
Using MySQL and PHP / Codeigniter
Many, many thanks for your help!
I would normally use TRUNCATE - this both removes the data and resets the AUTO_INCREMENT.
Note that MySQL will perform a row by row deletion if there is a foreign key relationship, which is quite convenient (compared to SQL Server).
If your PK is AUTO_INCREMENT, you can do:
ALTER TABLE tbl AUTO_INCREMENT = 1;
Make sure the table is empty before executing the query.
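Put together, the two options from the answers above look like this (users is a placeholder table name):

```sql
-- Option 1: remove all rows and reset AUTO_INCREMENT in one statement.
TRUNCATE TABLE users;

-- Option 2: empty the table yourself, then reset the counter explicitly.
DELETE FROM users;
ALTER TABLE users AUTO_INCREMENT = 1;
```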