Apologies for the noob question (I'm keenly learning as I go). I'd be grateful for some advice on the Primary Key.
I have 5 separate (unrelated) tables (Access 2003) containing similar fields that I will be merging (using append queries) into a single new table. Each record between tables is unique (no duplicates).
Each separate table already has a primary key field using the default autonumber method (1-n). This means (I'm thinking) that there will be many duplicate primary key numbers between tables.
Is it standard practice (and OK to do) to delete the existing primary key field and create a new one (autonumber; 1-n) upon merging? Should I do this before the merge (for each separate table) or after the merge (on the single new table)?
Create your new table with the table structure, primary keys and any other necessary metadata defined. Then run an INSERT INTO … SELECT (an append query) from each of the five tables, specifying the columns to copy into the new table. Since you already have your autonumber column defined on the new table and you are not selecting the identity column from the old table(s), the data should copy over and the insert will assign a new primary key value.
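A minimal sketch of one of those append queries in Access SQL (the table and field names here are placeholders, not from the original post):

INSERT INTO MergedTable (Field1, Field2, Field3)
SELECT Field1, Field2, Field3
FROM Table1;

Because the autonumber ID of MergedTable is not in the column list, Access assigns a fresh value for every appended row. Repeat the query once for each of the five source tables.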
I’m pretty new to PowerApps and need to migrate an Access database over to PowerApps, first of all its tables to Dataverse. It’s a typical use case for a model-driven app, with many relationships between the tables. All Access tables had an autogenerated ID field as their primary key.
I transferred all tables via Excel export/import to Dataverse. Before importing, I renamed all ID fields (columns) to ID_old and let Dataverse create its own autogenerated ID field for each table.
What I want to achieve is to re-establish all relationships between the tables, where the foreign key points to the new primary key provided by Dataverse, as I want to avoid double keys. As a first step I created relationships between the ID_old field and the corresponding (old) foreign key field in the related table.
In good old Access, I’d now simply run an update query, filling the new (yet empty) foreign key field with the new ID of the related table. Finally, I would change the relationship to the new primary and foreign keys and then delete the old ID fields.
Where I got stuck is the update query. I searched the net and found a couple of options like the UpdateIf/Patch functions, Power Query, Excel export/import and some more. They all looked pretty complicated and time-intensive, and I think I must have overlooked a very simple solution for such a common problem.
Is there someone out there who might point me in the right (and simple) direction? Thanks!
A more efficient approach would be to start with creating extra ID columns in Access. Generate your GUIDs and fix your foreign keys there. This can be done efficiently using a few SQL update statements.
When it comes to transferring your Access tables to Dataverse, you just provide your Access shadow primary keys in the Create message.
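A rough sketch of that fix-up in Access SQL (all table and column names here are placeholders, not from the original post):

ALTER TABLE ParentTable ADD COLUMN NewID GUID;
ALTER TABLE ChildTable ADD COLUMN NewFK GUID;

UPDATE ChildTable INNER JOIN ParentTable
ON ChildTable.OldFK = ParentTable.ID_old
SET ChildTable.NewFK = ParentTable.NewID;

Once ParentTable.NewID has been filled with generated GUIDs (for example via VBA or a ReplicationID AutoNumber), the UPDATE rewires each child row to its parent's new key, and both NewID and NewFK can then be supplied during the transfer to Dataverse.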
I solved the issue as follows, which is pretty efficient in my perception. I'm assuming you have an auto-numbered ID field in every Access table, which you used for your relationships.
Export your tables from Access to Excel.
Rename your ID fields to ID_old in all tables using Excel, as well as your foreign key fields to e.g. ForeignKey_old. This will make it easy to identify the fields later in Dataverse.
Import into Dataverse, using the Power Query tool. Important: make sure that you choose ID_old as an additional primary key field in the last import step.
Re-create all relationships in Dataverse, using the Lookup data type. This will create a new, yet empty column in your table.
Now use the “Edit in Excel” feature to open your table in Excel. You should get your prefix_foreignkey_old column with the old foreign keys displayed, as well as the reference to your related table, e.g. prefix_referencetable.prefix_id_old, which is still empty.
Now just copy the complete prefix_foreignkey_old column values into the prefix_referencetable.prefix_id_old column.
Import the changes and you’re done.
Hope this is helpful for some of you out there.
I am currently rebuilding a database which is used to store patient records. In the current database, the primary key for a patient is their name and date of birth (a single column, i.e. "John Smith 1970-01-01"; it is not composite). This is also a foreign key in many other tables to reference the patients table. I am planning to replace this key with an auto-generated integer key (since there will obviously be duplicate keys one day under the current system). How can I add a new primary key to this table and add appropriate foreign keys on all the other tables? Keep in mind that there is already a very large amount of data (~500,000 records) and these data references cannot be broken.
Thanks!
If it were up to me:
Add a new future-PK column with auto_increment, enforced by a non-null unique index (it must be a KEY, but not necessarily the PK).
Add the appropriate new-FK columns to all the related tables, these should be initially nullable.
Set the new-FK value to the appropriate future-PK value based on the current-PK/FK relationships. Use an "UPDATE .. JOIN" for this step (see the sketch after this list).
Enable the Referential Integrity Constraints (DRI) on the relevant tables. It only needs to be KEY/FK, not PK/FK, which is why the future-PK can be used. Every existing DRI constraint using the current-PK should likely be updated during this step.
Remove the new-FK column nullability based on modeling requirements.
Remove any residual old-FK columns as they are now redundant data.
Switch the old-PK and the new/future-PK (this can be done in one command and may take a while to physically reorganize all the rows). Remove the old PK column as applicable, or perhaps simply remove its KEY status.
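As a rough sketch in MySQL (all table and column names are hypothetical, not from the original schema):

-- steps 1-2: future-PK on patients, nullable new-FK on a referencing table
ALTER TABLE patients
  ADD COLUMN patient_id INT NOT NULL AUTO_INCREMENT,
  ADD UNIQUE KEY uq_patients_patient_id (patient_id);

ALTER TABLE visits ADD COLUMN patient_id INT NULL;

-- step 3: backfill the new-FK from the current name+dob key
UPDATE visits v
JOIN patients p ON v.patient_name_dob = p.name_dob
SET v.patient_id = p.patient_id;

-- step 4: point the DRI constraint at the future-PK (a UNIQUE KEY suffices)
ALTER TABLE visits
  ADD CONSTRAINT fk_visits_patients
  FOREIGN KEY (patient_id) REFERENCES patients (patient_id);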
I would also take the database offline during the process, review and test the process (use a testing database for dry runs), and maintain backups.
The Data-Access Layer and any Views/etc will also need to be updated. These should be done at the same time, again through a review and testing process.
Also, even when adding an auto-increment PK, the table should generally still have an appropriate natural key enforced with a unique constraint.
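For example, assuming the patients table also carries some genuinely unique business identifier (the column name below is purely hypothetical):

-- enforce the natural key alongside the surrogate auto-increment PK
ALTER TABLE patients ADD CONSTRAINT uq_patients_mrn UNIQUE (medical_record_number);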
I solved the problem using the following method:
1- Added a new primary key to the patients table and assigned unique values to all existing records
2- Created materialized views (without triggers) for each of the referencing tables that included all fields in the referencing table as well as the newly created id field in the patients table (via a join).
3- Deleted the source referencing tables
4- Renamed the materialized views to the names of the original source tables
The materialized views are now the dependent tables.
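A rough sketch of steps 2-4 in MySQL (table and column names are hypothetical, not from the original schema):

-- step 2: build the replacement table as a join that pulls in the new patients id
CREATE TABLE visits_new AS
SELECT v.*, p.patient_id
FROM visits AS v
JOIN patients AS p ON v.patient_name_dob = p.name_dob;

-- steps 3-4: swap it in for the original referencing table
DROP TABLE visits;
RENAME TABLE visits_new TO visits;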
A reference for materialized views: http://www.fromdual.com/mysql-materialized-views
I'm creating a database design in MySQL Workbench. I want to have an enumeration table which holds some standard values. The values of the enumeration table need to be linked to a column in my other table. So I have a table called 'club' which holds a column 'club_soort'. The column 'club_soort' needs to relate to the enumeration table.
Also, I want to use my tables (when I'm ready with my database design) in phpMyAdmin.
I understand the concept of enumeration, but I can't implement it. I hope someone can help me!
Thanks!
Rather than using enumerations, you should use what's known as a lookup or reference table. This table would contain your enumerations and be referenced as a foreign key by the parent table.
As an example, this would look like:
parent_table                club
------------                ----
id
club_soort  --------------> soort
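In MySQL, a minimal sketch of this could look as follows (only club and club_soort come from the question; the lookup table's name and columns are assumed):

-- lookup/reference table holding the standard values
CREATE TABLE soort (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  naam VARCHAR(50) NOT NULL UNIQUE
);

-- parent table; club_soort references the lookup table
CREATE TABLE club (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  club_soort INT NOT NULL,
  FOREIGN KEY (club_soort) REFERENCES soort (id)
);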
ENUM values cannot be linked to any other MySQL structure; an ENUM column can only hold a static list of values.
Are you talking about primary keys?
Being a relational database, MySQL uses primary keys and indexes to join data the way you want.
Primary keys join tables in an efficient way: a PK in the origin or parent table and an FK (foreign key) in the related table.
When creating a table, in MySQL Workbench or phpMyAdmin, define a primary key (just one per table) and, if needed, indexes and foreign keys.
Use JOIN clauses to combine two or more tables.
Always use numeric keys (data type INT) instead of natural, string keys. Also make them AUTO_INCREMENT and NOT NULL.
MySQL Workbench has an export tool, which allows you to export each created table, including its keys, indexes and cascading rules. You can copy and paste the output to create the tables in phpMyAdmin.
I have been looking for a solution to this problem for some time and did not manage to find anything satisfactory. I know that similar problems have been answered many times, but the answers are usually workarounds rather than standard solutions.
The problem in my particular case is:
I have one table that contains a predefined primary key that cannot be used as auto increment. It is predefined, and it is also used by several other tables as a foreign key. The table's columns are:
NID - my Primary Key
PID - the key from external source
Serial
bla1
bla2
NID is already in the target table (ids), not in the source table
PID is already in the source file/table, not in the target table
the other columns are in both tables
The NID-PID pair would be a unique match, as these pairs would be used further after the matching.
Now I need to be able to insert values into this table on a weekly basis, as these are sent to me in CSV/Excel files, hundreds of records at a time, so an easy approach would be best, especially since an easy approach makes the import process easy to validate.
Since there is no auto increment PK, I get an error:
1062 - Duplicate entry '' for key 'NID'
I was thinking about creating unique index on multiple fields like:
CREATE UNIQUE INDEX unique_index ON ids (NID,PID);
But it did not work very well either:
1062 - Duplicate entry '107521' for key 'unique_index'
I also tried to create a separate table with the data to be imported, but I get the same error.
The question is: what is the best way to insert records into a table that contains a PK, and to continue doing so on a regular basis without altering existing data? What should I do to achieve this?
I would really appreciate any help since I'm stuck.
I am very new to database concepts and currently learning how to design a database. I have a table with the columns below...
This is in MySQL:
1. Names - text - unique but might change in future
2. Result - varchar - not unique
3. issues_id - int - not unique
4. comments - text - not unique
5. level - varchar - not unique
6. functionality - varchar - not unique
I cannot choose any of the above columns as a primary key as they might change in the future. So I created an auto-increment id as names_id. I also have a GUI (a JTable) that shows this table, and the user updates Result, issues_id and comments based on Names. Names here is a big text column. I cannot display names_id in the GUI as it does not make any sense there. Now, when the user updates the database after giving inputs for columns 2, 3 and 4 in the GUI, I used the query below to update the database. I couldn't use names_id in the where clause because the JTable's row id does not match names_id, since not all the rows are loaded onto the JTable.
update <tablename> set Result=<value>,issues_id=<value>,comments=<value>
where Names=<value>;
I could get the database updated, but I want to know if it's OK to update the database without even using the PK. How efficient is this? What purpose does the surrogate key serve here?
It is perfectly acceptable to update the database using a where condition that doesn't reference the primary key.
You may want to learn about indexes and constraints, though. Your query could end up updating more than one row if multiple rows have the same name. If you want to ensure that names are unique, then you can create a unique constraint on the column.
A primary key always creates an index on that column. This index makes access fast. If there is no index on Names, then the update will need to scan the entire table to look at all names. You can make this faster by building an index on the field.
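For example, a sketch in MySQL (the table name is hypothetical; since Names is a TEXT column, the index needs a prefix length):

-- plain index to speed up lookups by name
CREATE INDEX idx_names ON mytable (Names(100));

-- or a unique index, to also guarantee that no two rows share a name
CREATE UNIQUE INDEX uq_names ON mytable (Names(100));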