I'm pretty new to PowerApps and need to migrate an Access database over to PowerApps, starting with its tables to Dataverse. It's a typical use case for a model-driven app, with many relationships between the tables. All Access tables had an autogenerated ID field as their primary key.
I transferred all tables to Dataverse via Excel export/import. Before importing, I renamed all ID fields (columns) to ID_old and let Dataverse create its own autogenerated ID field for each table.
What I want to achieve is to re-establish all relationships between the tables, where the foreign key points to the new primary key provided by Dataverse, as I want to avoid double keys. As a first step I created relationships between the ID_old field and the corresponding (old) foreign key field in the related table.
In good old Access, I’d now simply run an update query, filling the new (yet empty) foreign key field with the new ID of the related table. Finally, I would change the relationship to the new primary and foreign keys and then delete the old ID fields.
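For illustration, the kind of Access update query I have in mind would look something like this (table and column names are made up):

```sql
-- Hypothetical Access update query: fill the new FK column of the
-- child table with the new ID of the matching parent row, matched
-- via the old key pair.
UPDATE ChildTable
INNER JOIN ParentTable
  ON ChildTable.ForeignKey_old = ParentTable.ID_old
SET ChildTable.NewForeignKey = ParentTable.NewID;
```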
Where I got stuck is the update query. I searched the net and found a couple of options like the UpdateIf / Patch functions, Power Query, Excel export/import, and some more. They all read pretty complicated and time-intensive, and I think I must have overlooked a very simple solution for such a pretty common problem.
Is there someone out there who might point me in the right (and simple) direction? Thanks!
A more efficient approach would be to start by creating extra ID columns in Access: generate your GUIDs and fix your foreign keys there. This can be done efficiently with a few SQL update statements.
When it comes to transferring your Access tables to Dataverse, you just provide your Access shadow primary keys in the Create message.
I solved the issue as follows, which is pretty efficient in my perception. I'm assuming you have an auto-numbered ID field in every Access table, which you used for your relationships.
Export your tables from Access to Excel.
Rename your ID fields to ID_old in all tables using Excel, as well as your foreign key fields to e.g. ForeignKey_old. This will make it easy to identify the fields later in Dataverse.
Import into Dataverse using the Power Query tool. Important: make sure that you choose ID_old as an additional primary key field in the last import step.
Re-create all relationships in Dataverse using the Lookup data type. This will create a new, yet empty, column in your table.
Now use the “Edit in Excel” feature to open your table in Excel. You should get your prefix_foreignkey_old column with the old foreign keys displayed, as well as the reference to your related table, e.g. prefix_referencetable.prefix_id_old, which is still empty.
Now just copy the complete prefix_foreignkey_old column values into the prefix_referencetable.prefix_id_old column.
Import the changes and you’re done.
Hope this is helpful for some of you out there.
For context, I inherited a Laravel 6 project that made a rather odd choice, to put it mildly, on how to manage relationships.
I have a user object which has its usual auto-increment id, as well as a "system_id" which is provided by an external system.
For most of the project, relationships involving a user object make use of its "id" field as the foreign key in the belongsTo() part of the relationship, which is all well and good.
However, one many-to-many relationship, specifically the one between the user model and the group model, uses the user model's "system_id" field as the foreign key instead of the usual "id" field used everywhere else. This is beginning to cause all kinds of development headaches, and it is already in production.
So as part of a cleanup project for the system, I intend to migrate the pivot table to use the user model's "id" field. The challenge now is the following:
In a database-agnostic way, how do I copy the matching id into the "user_id" foreign key field in the pivot table, given a known "system_id"? (See the sketch below.)
How will it look in a migration? Is a migration even a good option or should it be done directly in the database instead?
Anything else I should account for?
Is this even a good idea in the first place or should we just live with it?
Obviously, a backup will be made and the whole thing will be tested in a test environment first before it's attempted in production.
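For the first point, here is a minimal sketch of the core update, assuming the pivot table is named group_user and its user_id column currently holds system_id values (both names are hypothetical). A correlated subquery keeps it database-agnostic:

```sql
-- Rewrite each pivot row so user_id holds users.id instead of
-- users.system_id. Standard SQL, so it should run unchanged on
-- MySQL, PostgreSQL, SQLite, etc.
UPDATE group_user
SET user_id = (
    SELECT u.id FROM users u WHERE u.system_id = group_user.user_id
)
WHERE EXISTS (
    SELECT 1 FROM users u WHERE u.system_id = group_user.user_id
);
```

In a Laravel migration, this could be run via DB::statement() in up(), with the reverse mapping in down() so the change stays reversible.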
A database was created with 5 tables. These tables were populated with data upon creation - perhaps it was imported from a previous database.
When the DB was created, primary keys were created for each table, however foreign keys were not.
How do I run a query to identify which table columns contain data that relates to the PK in other tables? Effectively, how do I identify the FK column(s) on each table? Some tables may contain two FKs.
The end goal is to identify the FK('s) in each table and properly set up the table with appropriate FK structure and table relations.
Don't try to use queries to automate this database design / reverse-engineering process. (If you had 500 tables, maybe. But you only have five.)
Eyeball your table definitions. If you have, for example, an id primary key column in your user table, your contact table might have a user_id column. That is the FK to user.id. It will help you greatly if you really understand how your tables tie together with FKs.
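Even for manual review, a quick inventory query can help you spot candidates (MySQL assumed, given the Workbench mention below; the naming pattern is only a heuristic):

```sql
-- List columns whose names end in `_id` in the current schema,
-- as candidate FK columns to review by hand.
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND column_name LIKE '%\_id';
```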
And, keep in mind that your system will still work tolerably well if you don't bother to actually declare these foreign keys. What you'll lose:
constraints, in which the database engine prevents, for example, a contact.user_id column value that doesn't point to any user.id row (see the DDL sketch after this list).
possibly some helpful indexing.
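Once you have worked out a relationship, declaring the FK is a small DDL change. A sketch using the example names from above:

```sql
-- Declare the FK so the engine rejects any contact.user_id value
-- that doesn't point to an existing user.id row.
ALTER TABLE contact
  ADD CONSTRAINT fk_contact_user
  FOREIGN KEY (user_id) REFERENCES user (id);
```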
MySQL Workbench has a reverse-engineering feature. It inspects the definition of a database and does its best to sort out the various entities (tables) and the relationships (foreign key dependencies) between them. It presents graphical E-R diagrams and can generate DDL. That can help you understand a database and set up appropriate FKs. But still, check the relationships it suggests: this data is yours, not Workbench's.
I am a total novice to this whole database world and I have a question. I am building a database for the final project of my master's class. The database includes cities, counties, and demographic data for the state of Colorado, and it will ultimately be used as a spatial database. At this point I have all my tables built in Access, and I have an ODBC connection to PostgreSQL to import the tables after they are created. Access does not allow shapefiles to be added to the database; PostgreSQL does.
My question is about primary keys. Each of my tables in Access shares a FIPS code (this code allows me to join the demographic data to a shapefile and display the data in ArcMap with the proper coordinates). I have many demographic data tables with this FIPS code. Is it acceptable to set the FIPS code as the primary key for each table? Or does each table need its own individual primary key that is different from the others?
Thanks for the help!
The default PK is "ID", so there's really no problem with using this default for all tables.
In fact it means for any table or code you write you can now always rest easy as to what the primary key is going to be.
And if you copy or rename a table, then again you know the ID.
Some people do prefer having the table name as part of the PK, but that does violate normalization, since you're now attaching an external attribute to that PK column.
However, for a FK (foreign key), since the VERY definition of the column is an external dependency, I tend to include the table name, like this:
Customers_ID
And once again, thanks to this naming convention, you can always "guess" or "know" the name of a FK column (table name + ID).
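In DDL terms, the convention might look like this (a sketch in Access SQL, with hypothetical tables):

```sql
-- Parent table: the default autonumber PK, simply named ID.
CREATE TABLE Customers (
  ID AUTOINCREMENT CONSTRAINT pk_customers PRIMARY KEY,
  CompanyName TEXT(100)
);

-- Child table: the FK named after the parent table plus ID.
CREATE TABLE Orders (
  ID AUTOINCREMENT CONSTRAINT pk_orders PRIMARY KEY,
  Customers_ID LONG CONSTRAINT fk_orders_customers
    REFERENCES Customers (ID)
);
```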
At the end of the day, there is not really a convention on this issue. However, I will recommend that for all tables you create, you let Access create that default PK of "id". This of course assumes your database design is not using natural keys. The debate of natural keys vs. surrogate keys (an autonumber PK "id") has many pros and cons; you can google natural keys vs. surrogate keys for endless discussions on this issue.
I am working on a large database with many tables, all of which have Auto numbered primary keys. The database is stored on a network, and several people have access.
My issue is this: one user lost network connection while adding data to a table via a form. Several other people have added data to the table since. This gives a situation where one primary key is missing (e.g. primary keys run from 1 to 2000, but the entry for PK 1974 is missing - the one that was being created when the user lost connection). I was asked to insert the missing data into the table with the missing key ID, so I used: DoCmd.RunSQL "INSERT INTO tablename (PrimaryKeyID, Field1) VALUES (1974, value1)".
This has caused issues in that Access thought the next 'newest' primary key it had to create was 1975, and we received a message about duplicate keys. A few people have since managed to add new data; however, any subsequent new data is created at 1976, 1977, etc., which is overwriting the existing data.
Can anyone tell me why this is happening? Is there a way to force Access to 'look' at the largest primary key in the table to create new auto numbered keys?
Thanks
Lee
Try compacting the back-end. I think it should reset the new values.
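If compacting doesn't reset it, Jet/ACE SQL can also reseed the autonumber explicitly. A sketch using the hypothetical names from the question; note that the COUNTER(seed, increment) form generally has to be executed through ADO (e.g. CurrentProject.Connection.Execute), not DoCmd.RunSQL:

```sql
-- Reseed the autonumber so the next generated key is 2001
-- (i.e. one past the current maximum PK value).
ALTER TABLE tablename ALTER COLUMN PrimaryKeyID COUNTER(2001, 1);
```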
I have a MySQL database containing data about users of an application. This application is already in production; however, improvements are added every day. The last improvement I made changed the way data is collected and inserted into the database.
Just to be clearer, my database is composed of 5 tables containing user data and 1 table that relates all the tables through foreign keys. These 5 foreign keys together form the unique index for this "main table".
The issue is that one of these tables containing user data changed its format, and I want to remove all the data older than the modification I made in my application (just from this table; the other ones I need to keep untouched). However, this dataset has foreign keys in the main table, and I can't just drop those lines from the main table, because the other information there is important. I tried to change the value of the foreign key for this specific table, but then, obviously, I ran into a problem with duplicated indexes.
Reading on the internet, I found a solution to my problem using "INSERT ... ON DUPLICATE KEY UPDATE ...", but I'm not inserting data, just updating it. I have an idea of how to write a PHP program to update my database, but is there an easier solution? Is it possible to avoid these problems using just MySQL syntax?
It might be worth looking at the link below:
http://www.kavoir.com/2009/05/mysql-insert-if-doesnt-exist-otherwise-update-the-existing-row.html
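The gist of that article, as a sketch with hypothetical table and column names: INSERT ... ON DUPLICATE KEY UPDATE turns a would-be duplicate-key error into an update of the existing row, which can stand in for a plain UPDATE when the unique key already exists:

```sql
-- If the unique index (assumed here to span the FK columns) doesn't
-- contain this key combination yet, insert a new row; otherwise
-- update the existing row in place.
INSERT INTO main_table (user_fk, profile_fk, other_col)
VALUES (1, 2, 'new value')
ON DUPLICATE KEY UPDATE other_col = VALUES(other_col);
```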