I have a MySQL database containing data about the users of an application. The application is already in production, but improvements are added every day. The latest improvement I've made changed the way data is collected and inserted into the database.
To be clearer, my database is composed of 5 tables containing user data and 1 table that relates them all through foreign keys. Together, these 5 foreign keys form the unique index for this "main table".
The issue is that one of these tables containing user data changed its format, and I want to remove all the data older than the modification I made to my application (just from this table; the others need to stay untouched). However, this dataset has foreign keys in the main table, and I can't just delete those rows from the main table because the other information there is important. I tried changing the value of the foreign key for this specific table, but then, obviously, I ran into a problem with duplicate entries in the unique index.
Reading on the internet, I found a solution to my problem using "INSERT ... ON DUPLICATE KEY UPDATE ...", but I'm not inserting data, just updating it. I have an idea of how to write a PHP program to update my database, but is there an easier solution? Is it possible to avoid these problems using just MySQL syntax?
It might be worth looking at the link below:
http://www.kavoir.com/2009/05/mysql-insert-if-doesnt-exist-otherwise-update-the-existing-row.html
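For reference, a minimal sketch of the pattern described in that link, using made-up table and column names; if a row with the same unique key already exists, MySQL updates it instead of raising a duplicate-key error:

INSERT INTO main_table (user_fk, address_fk, other_data)
VALUES (1, 42, 'new value')
ON DUPLICATE KEY UPDATE other_data = VALUES(other_data);

If you only ever update existing rows, a plain UPDATE filtered on the unique key columns is enough; the ON DUPLICATE KEY form matters when the row may or may not exist yet.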
I'm pretty new to PowerApps and need to migrate an Access database over to PowerApps, first of all its tables to Dataverse. It's a typical use case for a model-driven app, with many relationships between the tables. All Access tables had an autogenerated ID field as their primary key.
I transferred all tables via Excel export/import to Dataverse. Before importing, I renamed all ID fields (columns) to ID_old and let Dataverse create its own autogenerated ID field for each table.
What I want to achieve is to re-establish all relationships between the tables, where the foreign key points to the new primary key provided by Dataverse, as I want to avoid double keys. As a first step I created relationships between the ID_old field and the corresponding (old) foreign key field in the related table.
In good old Access, I’d now simply run an update query, filling the new (yet empty) foreign key field with the new ID of the related table. Finally, I would change the relationship to the new primary and foreign keys and then delete the old ID fields.
Where I got stuck is the update query. I searched the net and found a couple of options like the UpdateIf / Patch functions, Power Query, or Excel export/import, and some more. They all read as pretty complicated and time-intensive, and I think I must have overlooked a very simple solution to such a common problem.
Is there someone out there who might point me in the right (and simple) direction? Thanks!
A more efficient approach would be to start by creating extra ID columns in Access. Generate your GUIDs and fix your foreign keys there. This can be done efficiently using a few SQL update statements.
When it comes to transferring your Access tables to Dataverse, you just provide your Access shadow primary keys in the Create message.
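As a sketch, assuming you add hypothetical NewID and NewForeignKey columns to hold the generated GUIDs (all names here are made up), one such update statement per relationship could look like:

UPDATE ChildTable INNER JOIN ParentTable
  ON ChildTable.ForeignKey_old = ParentTable.ID_old
SET ChildTable.NewForeignKey = ParentTable.NewID;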
I solved the issue as follows, which is pretty efficient in my perception. I'm assuming you have an auto-numbered ID field in every Access table, which you used for your relationships.
1. Export your tables from Access to Excel.
2. Rename your ID fields to ID_old in all tables using Excel, as well as your foreign key fields to e.g. ForeignKey_old. This will make it easy to identify the fields later in Dataverse.
3. Import into Dataverse, using the Power Query tool. Important: make sure that you choose ID_old as an additional primary key field in the last import step.
4. Re-create all relationships in Dataverse, using the Lookup datatype. This will create a new, yet empty column in your table.
5. Now use the "Edit in Excel" feature to open your table in Excel. You should see your prefix_foreignkey_old column with the old foreign keys, as well as the reference to your related table, e.g. prefix_referencetable.prefix_id_old, which is still empty.
6. Copy the complete prefix_foreignkey_old column values into the prefix_referencetable.prefix_id_old column.
7. Import the changes and you're done.
Hope this is helpful for some of you out there.
There are two tables in my MySQL database which have a many-to-many relationship. A third table handles it, containing the foreign keys of the first two.
I need to update the relationship. I may have to add a row for a new relation and delete a row representing a relation that no longer exists. To keep track of the changes, I created a new table that contains all the relations that are valid and omits the old ones that are meant to be deleted.
There is a lot of content about the SQL MERGE statement, which would solve my problem:
https://www.sqlshack.com/sql-server-merge-statement-overview-and-examples/
https://codingsight.com/merge-updating-source-and-target-tables-located-on-separate-servers/
https://www.sqlservertutorial.net/sql-server-basics/sql-server-merge/
https://www.educba.com/mysql-merge/
The problem is that, for some unclear reason, MERGE does not exist in MySQL. It kind of has an alternative, called INSERT ... ON DUPLICATE KEY UPDATE, but it is not the same and does not cover what I am aiming for here. I don't want to delete all the relations in the table and re-insert the new ones.
I would like to know if there is any other alternative to MERGE in MySQL, or any way to "add" it to my database.
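One way to approximate MERGE in MySQL without a wipe-and-reload is a pair of statements: an insert for missing relations and a delete for obsolete ones. A minimal sketch, assuming a relation table relation(a_id, b_id) with a unique key over both columns and a staging table new_relation holding the desired final state (all names made up):

-- Add relations present in the new set but missing from the old one:
INSERT IGNORE INTO relation (a_id, b_id)
SELECT a_id, b_id FROM new_relation;

-- Remove relations that no longer appear in the new set:
DELETE r FROM relation r
LEFT JOIN new_relation n ON n.a_id = r.a_id AND n.b_id = r.b_id
WHERE n.a_id IS NULL;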
We are currently developing our own e-commerce solution. As part of our research we have been examining the ZenCart database schema and found that data is quite frequently duplicated between various tables where it would seem that a foreign key would have been sufficient to link the two or more tables in question. For example:
Given a table "Products" that has the following columns:
PRODUCT_ID
PRODUCT_NAME
PRODUCT_PRICE
PRODUCT_SKU
then if there is a "Sales_Item" table, a product (and all its constituent columns) may be referenced by simply doing something like:
SALES_ITEM_ID
Products_PRODUCT_ID  // This is the foreign key that relates a specific product to a sales item.
SALE_TIME
REST_OF_SALE_SPECIFIC_DATA
...
However, instead it seems that the sales table COPIES many of the field values defined in the Products table, so it in fact looks as follows:
SALES_ITEM_ID
PRODUCT_ID
PRODUCT_NAME
PRODUCT_PRICE
PRODUCT_SKU
SALE_TIME
My question is which approach would generally be considered best practice when attempting to build a scalable, efficient solution. Using foreign keys means data is not duplicated, but the caveat is that database- or application-level JOINs are needed in order to query the entire dataset. That being said, the foreign key approach seems cleaner and more correct somehow.
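For concreteness, here is a minimal sketch of the foreign-key variant and the JOIN it requires (column types are guesses; the names follow the question):

CREATE TABLE Products (
  PRODUCT_ID INT PRIMARY KEY,
  PRODUCT_NAME VARCHAR(255),
  PRODUCT_PRICE DECIMAL(10,2),
  PRODUCT_SKU VARCHAR(64)
);

CREATE TABLE Sales_Items (
  SALES_ITEM_ID INT PRIMARY KEY,
  Products_PRODUCT_ID INT,
  SALE_TIME DATETIME,
  FOREIGN KEY (Products_PRODUCT_ID) REFERENCES Products (PRODUCT_ID)
);

-- Reassembling a sale with its product data requires a JOIN:
SELECT s.SALES_ITEM_ID, p.PRODUCT_NAME, p.PRODUCT_PRICE, s.SALE_TIME
FROM Sales_Items s
JOIN Products p ON p.PRODUCT_ID = s.Products_PRODUCT_ID;

One common reason schemas like ZenCart's copy the product fields into the sales table is to freeze them at the time of sale, so that later catalog edits (e.g. a price change) don't silently rewrite sales history.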
I set up two WordPress blogs a while ago, both obviously having different databases. I've more recently merged these databases into one by changing the table prefixes, so these two 'entities' have the same number of tables with the same names (as they originate from a WordPress install) but different prefixes, i.e.:
Blog1_tabledata1
Blog1_tabledata2
Blog1_tabledata3
Blog1_tabledata4
Blog2_tabledata1
Blog2_tabledata2
Blog2_tabledata3
Blog2_tabledata4
I have now realised that I need to merge these two sets of tables (where they both use the same table structure) so that they can be used in the same WordPress instance (later separated by tags etc.).
What would be the most simple way of doing this?
(Please note I am asking this from a MySQL standpoint - this is not a WordPress question!)
If you absolutely are not looking for a WordPress solution, that means you are not looking at the domain at all. By this, I mean that you are not looking at what the data means. This could be a problem, but nevertheless:
1. Figure out the foreign keys. If the tables are MyISAM instead of InnoDB, the keys will be implicit. Figure out which ID points to which field.
2. Select from one database and insert into the other. This means you add the rows of each table to its equivalent in the target database. Auto-increment rows will be fine, but for foreign key fields (explicit AND implicit), here is where the trouble starts.
3. If you insert, say, a user, the user gets a new ID, so you have to find the equivalent of the user ID in the old database in order to insert the foreign keys with the right ID. This is tricky, and without making this a WordPress question there is no more help we can give you: just figure out what the rows should map to. It is database/domain specific. (By that I mean you can't just figure it out by looking at the fields; you must know what some of them mean.)
If the database is correct, this should work, but I'm not sure whether you'll get into trouble with duplicates. Everything should go on ID, and you fixed that in step 3 with unique and connected IDs, but if your domain doesn't want two accounts, two pages, or two whatevers (tags?) to have the same name, you still have a problem. Again, this is domain-specific logic, and you're specifically asking not to go there.
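To illustrate steps 2 and 3, a minimal sketch that remaps the auto-increment IDs by a fixed offset while copying (the column names here are made up; real WordPress tables differ):

-- Choose offsets larger than any ID already in the target tables.
SET @off1 = (SELECT MAX(id) FROM Blog1_tabledata1);
SET @off2 = (SELECT MAX(id) FROM Blog1_tabledata2);

-- Copy the parent rows, shifting their IDs:
INSERT INTO Blog1_tabledata1 (id, name)
SELECT id + @off1, name FROM Blog2_tabledata1;

-- Copy the child rows, shifting the implicit foreign key by the same offset:
INSERT INTO Blog1_tabledata2 (id, tabledata1_id, value)
SELECT id + @off2, tabledata1_id + @off1, value FROM Blog2_tabledata2;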
I am considering designing a relational DB schema for a DB that never actually deletes anything (sets a deleted flag or something).
1) What metadata columns are typically used to accommodate such an architecture? Obviously a boolean IsDeleted flag can be set. Or maybe just a timestamp in a Deleted column works better, or possibly both. I'm not sure which method will cause me more problems in the long run.
2) How are updates typically handled in such architectures? If you mark the old value as deleted and insert a new one, you will run into PK unique constraint issues (e.g. if you have PK column id, then the new row must have the same id as the one you just marked as invalid, or else all of your foreign keys in other tables for that id will be rendered useless).
If your goal is auditing, I'd create a shadow table for each table you have. Add some triggers that get fired on update and delete and insert a copy of the row into the shadow table.
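A minimal sketch of that trigger approach, assuming a hypothetical users table with columns (id, name); the shadow table and trigger names are made up:

CREATE TABLE users_shadow (
  id INT NOT NULL,
  name VARCHAR(100),
  action VARCHAR(10) NOT NULL,
  changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Copy the old row into the shadow table on every update...
CREATE TRIGGER users_audit_update BEFORE UPDATE ON users
FOR EACH ROW
  INSERT INTO users_shadow (id, name, action) VALUES (OLD.id, OLD.name, 'UPDATE');

-- ...and on every delete.
CREATE TRIGGER users_audit_delete BEFORE DELETE ON users
FOR EACH ROW
  INSERT INTO users_shadow (id, name, action) VALUES (OLD.id, OLD.name, 'DELETE');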
Here are some additional questions that you'll also want to consider:
- How often do deletes occur, and what's your performance budget like? This can affect your choices: the answer to your design will be different depending on whether a user deletes a single row (say, an answer on a Q&A site) or records are deleted from a feed on an hourly basis.
- How are you going to expose the deleted records in your system? Are they only for administrative purposes, or can any user see deleted records? This makes a difference because you'll probably need a filtering mechanism that depends on the user.
- How will foreign key constraints work? Can one table reference a deleted record in another table?
- When you add columns or alter existing tables, what happens to the deleted records?
Typically the systems that care a lot about audit use shadow tables as Steve Prentice mentioned. Such a table often has every field from the original table with all the constraints turned off, plus an action field to track updates vs. deletes and a date/timestamp of the change along with the user.
For an example, see the PostHistory table at https://data.stackexchange.com/stackoverflow/query/new
I think what you're looking for here is typically referred to as "knowledge dating".
In this case, your primary key would be your regular key plus the knowledge start date.
Your end date might either be null for a current record or an "end of time" sentinel.
On an update, you'd typically set the end date of the current record to "now" and insert a new record that starts at the same "now" with the new values.
On a "delete", you'd just set the end date to "now".
I've done that.
2.a) A version number solves the unique constraint issue somewhat, although that's really just relaxing the uniqueness, isn't it?
2.b) You can also archive the old versions into another table.
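A minimal sketch of the version-number idea (table and column names are hypothetical):

CREATE TABLE users (
  id INT NOT NULL,
  version INT NOT NULL DEFAULT 1,
  name VARCHAR(100),
  is_deleted BOOLEAN NOT NULL DEFAULT FALSE,
  PRIMARY KEY (id, version)
);

-- An "update" inserts a new version instead of modifying the row in place:
INSERT INTO users (id, version, name)
SELECT id, MAX(version) + 1, 'New Name'
FROM users WHERE id = 42
GROUP BY id;

Foreign keys in other tables then reference id alone, which is exactly the relaxed uniqueness mentioned above.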