I'm using peewee as my ORM for a MySQL database.
I have three tables in my schema: one for devices, one for apps, and one for results per device, per tester app, and per tested app.
The APPS table looks like:
package name | version name | version code | apk name
The first three columns form my primary key, since I want to keep every revision in the table and make it easy to filter apps by a certain version code (the version code is incremented with each revision in git/svn, while the version name represents the version itself as taken from the development branch).
My problem starts when I want to use the APPS table as a reference table for my TESTS table: each test refers to APPS twice, once for the tester app and once for the tested app.
I'm not sure it's such a good idea to have a three-column foreign key (which makes six columns!) in my TESTS table.
Is there a good solution for this?
I tried adding an _ID field with auto-increment as a 'KEY' so I'd have a single numeric field to access, but the ORM doesn't really support it, and I'm gritting my teeth trying to pull this off.
Is my DB just badly organized, or do I simply need to replace the ORM? I think that without the ORM I could probably pull this off pretty easily...
Options:
1. Define an auto-incremented primary key in the APPS table.
2. Define a composite unique key in the APPS table on the pkg, ver, and vcode columns.
3. Use the primary key value of this table as the FK reference in the child table (see the sketch after this list).
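A minimal sketch of that layout in MySQL (table and column names here are illustrative, not taken from the original schema):
CREATE TABLE apps (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    pkg VARCHAR(255) NOT NULL,
    ver VARCHAR(64) NOT NULL,
    vcode INT NOT NULL,
    apk_name VARCHAR(255),
    UNIQUE KEY uq_apps_revision (pkg, ver, vcode)  -- one row per revision, still filterable by vcode
);

CREATE TABLE tests (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    tester_app_id INT NOT NULL,  -- one column instead of three
    tested_app_id INT NOT NULL,
    FOREIGN KEY (tester_app_id) REFERENCES apps (id),
    FOREIGN KEY (tested_app_id) REFERENCES apps (id)
);
This keeps the composite uniqueness guarantee on APPS while the TESTS table only carries two single-column foreign keys instead of six columns.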
This is a very late answer, but I want to contribute the code for telling peewee about a composite key, something like the following. When this construct is used, Peewee does NOT add an "id" column to the table:
class SillyTable(peewee.Model):
    field1 = peewee.IntegerField(null=False)
    field2 = peewee.IntegerField(null=False)

    # Composite key, no "id" column needed.
    class Meta:
        primary_key = peewee.CompositeKey('field1', 'field2')
I'm using peewee v2.6.3 and PyMySQL v0.6.3 to access MySQL.
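For reference, the DDL that model should map to looks roughly like this (a sketch, not verified peewee output):
CREATE TABLE sillytable (
    field1 INT NOT NULL,
    field2 INT NOT NULL,
    PRIMARY KEY (field1, field2)  -- composite key, no surrogate id column
);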
I'm pretty new to PowerApps and need to migrate an Access database over to PowerApps, starting with its tables to Dataverse. It's a typical use case for a model-driven app, with many relationships between the tables. All Access tables had an autogenerated ID field as their primary key.
I transferred all tables via Excel export/import to Dataverse. Before importing, I renamed all ID fields (columns) to ID_old and let Dataverse create its own autogenerated ID field for each table.
What I want to achieve is to re-establish all relationships between the tables, with the foreign keys pointing to the new primary keys provided by Dataverse, as I want to avoid duplicate keys. As a first step, I created relationships between the ID_old field and the corresponding (old) foreign key field in the related table.
In good old Access, I'd now simply run an update query to fill the new (still empty) foreign key field with the new ID of the related table. Finally, I would change the relationship to the new primary and foreign keys and then delete the old ID fields.
Where I got stuck is the update query. I searched the net and found a couple of options, like the UpdateIf/Patch functions, Power Query, or Excel export/import, and some more. They all read as pretty complicated and time-intensive, and I think I must have overlooked a very simple solution to such a common problem.
Is there someone out there who might point me in the right (and simple) direction? Thanks!
A more efficient approach would be to start by creating the extra ID columns in Access. Generate your GUIDs and fix your foreign keys there; this can be done efficiently using a few SQL update statements.
When it comes to transferring your Access tables to Dataverse, you just provide your Access shadow primary keys in the Create message.
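A sketch of such an update statement in Access SQL, with illustrative table and column names (Parent/Child, ID_old/ID_new/ParentFK_old/ParentFK_new are assumptions, not names from the question; it presumes ID_new in Parent has already been filled with generated GUIDs):
UPDATE Child INNER JOIN Parent ON Child.ParentFK_old = Parent.ID_old
SET Child.ParentFK_new = Parent.ID_new;
One such statement per relationship fixes all the foreign keys before the transfer.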
I solved the issue as follows, which is pretty efficient in my perception. I'm assuming you have an auto-numbered ID field in every Access table, which you used for your relationships.
1. Export your tables from Access to Excel.
2. Using Excel, rename your ID fields to ID_old in all tables, and your foreign key fields to e.g. ForeignKey_old. This will make it easy to identify the fields later in Dataverse.
3. Import into Dataverse, using the Power Query tool. Important: make sure you choose ID_old as an additional primary key field in the last import step.
4. Re-create all relationships in Dataverse, using the Lookup datatype. This will create a new, yet empty, column in your table.
5. Now use the "Edit in Excel" feature to open your table in Excel. You should see your prefix_foreignkey_old column with the old foreign keys, as well as the reference to your related table, e.g. prefix_referencetable.prefix_id_old, which is still empty.
6. Copy the complete prefix_foreignkey_old column values into the prefix_referencetable.prefix_id_old column.
7. Import the changes and you're done.
Hope this is helpful for some of you out there.
I've been reading some articles about the use of composite keys in MySQL and found that a composite key can't contain an auto_increment id column. However, I'm interested in a similar feature. Let me explain:
I'm using MariaDB 10 (InnoDB) and Hibernate 3.6.9.
I want to make some of my application's table fields translatable, and I figured a single table for all translations should be enough. This table has a composite key: an int value that identifies the translation, plus the locale of the concrete text. The same combination of id and locale can't appear twice.
So this is how the model should look:
I don't want the translations to be loaded with each of the random entities as a Collection; I'm thinking a method like String translationFor(Integer id, Locale loc) could fetch the text for my current locale. However, when I save a Set of translations, I want to assign them all the same id. Let's take this case:
Spanish: Cuchara
English: Spoon
The table should look like:
id | locale | translation
1 | es | Cuchara
1 | en | Spoon
But I can't tell MySQL to include an auto_increment column in a composite id. So I suppose I should assign the id manually, performing these steps:
1. Build the Translation entities with the locale values.
2. Begin a transaction in the Hibernate session.
3. Retrieve the last id value in the translations table.
4. Assign it manually to the entities.
5. Save them.
6. Commit the transaction.
Is this the proper way to do it? Am I doing it atomically?
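In plain SQL, those steps boil down to something like the sketch below (table and column names assumed; the FOR UPDATE lock is what keeps the read-then-insert safe against concurrent writers, which plain steps 3-5 alone would not be):
START TRANSACTION;
-- Read the next id while locking the scanned index entries,
-- so a concurrent transaction cannot claim the same value.
SELECT COALESCE(MAX(id), 0) + 1 AS next_id FROM translations FOR UPDATE;
-- Insert one row per locale using the value just read (here: 1).
INSERT INTO translations (id, locale, translation) VALUES (1, 'es', 'Cuchara');
INSERT INTO translations (id, locale, translation) VALUES (1, 'en', 'Spoon');
COMMIT;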
I assume you are planning on having multiple tables needing the translation of 'spoon'? If so, let me move your focus away from id.
The translation table needs PRIMARY KEY(code, locale), where code is what you have as some_translatable_value in random_table_1.
code could be the string (perhaps abbreviated) in your favorite language. Note that if you later change the phrasing of the text (to "silver spoon"), do not go back and change code; it can stay the same ("spoon").
I do not know whether you can achieve this in Hibernate; I am not fluent in it. (I tend to avoid 3rd-party packages; they tend to get in the way.) If Hibernate forces you to have an AUTO_INCREMENT id on each table, so be it; it will be a harmless waste. You should then declare the pair (code, locale) as unique (in order to get the desired index).
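In MySQL terms, the suggested table would be a sketch like this (column sizes assumed):
CREATE TABLE translation (
    code VARCHAR(64) NOT NULL,          -- stable key, e.g. 'spoon', even if the text is later rephrased
    locale CHAR(5) NOT NULL,            -- e.g. 'en', 'es', 'en_US'
    translation VARCHAR(1000) NOT NULL, -- the localized text
    PRIMARY KEY (code, locale)
);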
It is popular to save all versions of posts when editing (as StackExchange projects do), so that old versions can be restored. I wonder what the best way to save all versions is.
Method 1: Store all versions in the same table, adding a column for ordering or for marking the active version. This makes the table very long.
Method 2: Create an archive table to store older versions.
In both methods, I wonder how to deal with the row ID, which is the main identifier of the article.
The "best" way to save revision history depends on what your specific goals/constraints are -- and you haven't mentioned these.
But here are some thoughts about your two suggested methods:
1. Create one table for posts and one for post history, for example:
create table posts (
id int primary key,
userid int
);
create table posthistory (
postid int,
revisionid int,
content varchar(1000),
foreign key (postid) references posts(id),
primary key (postid, revisionid)
);
(Obviously there would be more columns, foreign keys, etc.) This is straightforward to implement and easy to understand (and easy to let the RDBMS maintain referential integrity), but as you mentioned, it may result in posthistory having too many rows to be searched quickly enough.
Note that postid is a foreign key in posthistory (and the PK of posts).
2. Use a denormalized schema where all of the latest revisions are in one table and previous revisions are in a separate table. This requires more logic in the application: when adding a new version, replace the post with the same id in the posts table, and also add the old version to the revision table.
(This may be what SE sites use, based on the data dump in the SE Data Explorer. Or maybe not, I can't tell.)
For this approach, postid is also a foreign key in the posthistory table, and the primary key in the posts table.
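A sketch of that replace-and-archive step in SQL, assuming posts also carries content and revisionid columns (the schema above omits them):
START TRANSACTION;
-- Archive the version that is about to be replaced.
INSERT INTO posthistory (postid, revisionid, content)
    SELECT id, revisionid, content FROM posts WHERE id = 42;
-- Overwrite the live row with the new version.
UPDATE posts SET content = 'new text', revisionid = revisionid + 1 WHERE id = 42;
COMMIT;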
In my opinion, an interesting approach is:
to define another table, for example posts_archive (it would contain all columns of the posts table, plus an auto-incremented primary key and optionally a date...);
to feed this table through after-insert and after-update triggers defined on the posts table, as sketched below.
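A sketch of those triggers in MySQL, assuming posts has just id and content columns (names assumed):
CREATE TABLE posts_archive (
    archive_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    post_id INT NOT NULL,
    content VARCHAR(1000),
    archived_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

-- Capture the initial version of every post.
CREATE TRIGGER posts_ai AFTER INSERT ON posts
FOR EACH ROW
    INSERT INTO posts_archive (post_id, content) VALUES (NEW.id, NEW.content);

-- Capture each subsequent revision.
CREATE TRIGGER posts_au AFTER UPDATE ON posts
FOR EACH ROW
    INSERT INTO posts_archive (post_id, content) VALUES (NEW.id, NEW.content);
With this, the application only ever writes to posts; the archive maintains itself.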
If the size of the table is an issue, then the second option is the better choice: the active version can be returned quickly from a smaller table, while restoring an older version from the larger archive table is accepted to take longer. That said, the size of the table should not be an issue with a sensible database design and indexing.
Either way, you need a primary key that consists of multiple columns instead of just a row ID. The trivial answer is to include in the key a timestamp recording when each revision was created, so that the ID continues to identify a specific article, while the ID and revision time together identify a specific revision of the article.
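Sketched as DDL (names assumed):
CREATE TABLE articles (
    id INT NOT NULL,               -- identifies the article
    revised_at DATETIME NOT NULL,  -- identifies the revision within the article
    content VARCHAR(1000),
    PRIMARY KEY (id, revised_at)
);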
Dealing with temporal data is a known problem.
Method 1 simply changes your table identifier: you end up with a table containing messageID, version, description, ... with a primary key of (messageID, version).
Modifying the data is done by simply adding a row with an incremented version. Querying is a little more complicated.
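For example, fetching the current version of every message takes a correlated subquery (table name assumed):
SELECT m.*
FROM messages m
WHERE m.version = (SELECT MAX(version)
                   FROM messages
                   WHERE messageID = m.messageID);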
Method 2 is more tedious: you end up with a table with a rowID, plus a second table that is exactly the same as in method 1. Then, on every update, you have to remember to copy the data into the "backup" table.
Method 3: the answer given by Matt.
In my opinion, methods 1 and 3 are better. The schema is simpler with method 1, but method 3 lets you keep unversioned data for your posts.
Not sure if this is possible... I have a table called "LineItems" with a column called "PackageId". PackageId IS optional, but I'd still like to set up a foreign key relationship to the related Packages table. Is this possible? If so, how might I go about doing it?
Also, I will be using the ADO.NET Entity Framework Model v4 in conjunction with MySQL. I would like to apply this constraint via MySQL (if possible) and have it carry into the Entity Framework model code.
thanks!
Loren
I apologize for my ignorance, but I just learned that setting the column to allow NULL lets me keep the foreign key relationship.
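In MySQL terms, that looks something like this (column types assumed):
CREATE TABLE LineItems (
    LineItemId INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    PackageId INT NULL,  -- optional: NULL means "no package"
    FOREIGN KEY (PackageId) REFERENCES Packages (PackageId)
);
Rows where PackageId is NULL simply aren't checked against Packages; the constraint only applies to non-NULL values.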
I'm using before- and after-insert triggers to generate ids (primary keys) of the form "ID_NAME-000001" in several tables. At the moment, the Hibernate generator class of these POJOs is set to assigned. A random string is assigned to the object to be persisted, and when it's inserted by Hibernate, the trigger assigns the correct id value.
The problem with this approach is that I'm unable to retrieve the persisted object, because the id only exists in the database, not in the object I just saved.
I guess I need to create a custom generator class that can retrieve the id value assigned by the trigger. I've seen an example of this for Oracle (https://forum.hibernate.org/viewtopic.php?f=1&t=973262), but I haven't been able to create something similar for MySQL. Any ideas?
Thanks,
Update:
It seems that this is a common and, as yet, unsolved problem. I ended up creating a new column to serve as a unique key so I could use a select generator class.
I hope this won't spark a holy war over whether or not to use surrogate keys, but it's time to open that conversation here.
Another approach would be to just use the generated key as a surrogate key and add a new field for your trigger-assigned id. The surrogate key is the primary key, and you keep the logically named key (such as the "ID_NAME-000001" in your example) alongside it. So your database rows will have two keys: the primary key is the surrogate key (which could be a UUID, GUID, or running number).
Usually this approach is preferable, because it adapts better to new requirements.
Say you have these rows, using a surrogate key instead of using the generated id as a natural key.
Surrogate key:
id: "2FE6E772-CDD7-4ACD-9506-04670D57AA7F", logical_id: "ID_NAME-000001", ...
Natural key:
id: "ID_NAME-000001", ...
When a new requirement later needs the logical_id to be editable, auditable (was it changed? who changed it, and when?) or transferable, having the logical_id as the primary key will put you in trouble. You usually cannot change your primary key; that's a horrible disadvantage when you already have lots of data in your database and have to migrate it because of the new requirement.
With the surrogate key solution, it's easy; you just need to add:
id: "2FE6E772-CDD7-4ACD-9506-04670D57AA7F", logical_id: "ID_NAME-000001", valid: "F", ...
id: "0A33BF97-666A-494C-B37D-A3CE86D0A047", logical_id: "ID_NAME-000001", valid: "T", ...
MySQL doesn't support sequences (IMO auto_increment isn't comparable to a sequence); this is different from Oracle's and PostgreSQL's sequences. I guess that's why it's difficult to port the solution from Oracle to MySQL, while PostgreSQL can handle it.