I want to clarify the format for providing initial data in raw SQL for a Django model that uses a many-to-many relationship. I've based my query on this example. I know I'm not required to specify the through parameter in ManyToManyField for this case, but I would like to have the table listed explicitly.
The backend DB is MySQL, and table names are lowercased class names.
Model :
class Person(models.Model):
    name = models.CharField(max_length=128)

class Group(models.Model):
    name = models.CharField(max_length=128)
    members = models.ManyToManyField(Person, through='Membership')

class Membership(models.Model):
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    group = models.ForeignKey(Group, on_delete=models.CASCADE)
I suppose the correct way to provide data would be :
INSERT INTO person (name) VALUES ('Ringo Starr');
INSERT INTO person (name) VALUES ('Paul McCartney');
INSERT INTO `group` (name) VALUES ('The Beatles');
INSERT INTO membership (person_id, group_id) VALUES (1, 1);
INSERT INTO membership (person_id, group_id) VALUES (2, 1);
How would I specify this data if I had not declared the Membership table explicitly and had not used the through parameter? Is the following correct?
INSERT INTO person (name) VALUES ('Ringo Starr');
INSERT INTO person (name) VALUES ('Paul McCartney');
INSERT INTO `group` (name, members) VALUES ('The Beatles', 1);
INSERT INTO `group` (name, members) VALUES ('The Beatles', 2);
UPDATE: The second method isn't correct; the members field on the Group class isn't a real DB column.
Short answer:
The correct way to do it in SQL is the same as when you use through, but the table that was membership before will instead have either a name generated by Django, like person_group_a425b, or a name you specify with the db_table parameter on Group.members.
More details:
Even if you don't make a model explicitly for the table that joins them together, Django will create a join table for them.
From the Django docs about how the table is named:
Behind the scenes, Django creates an intermediary join table to represent the many-to-many relationship. By default, this table name is generated using the name of the many-to-many field and the name of the table for the model that contains it. Since some databases don't support table names above a certain length, these table names will be automatically truncated to 64 characters and a uniqueness hash will be used. This means you might see table names like author_books_9cdf4; this is perfectly normal.
You can provide a shorter/different name for the join table in the definition of the ManyToManyField, using the db_table option.
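For example, with the models above but without through, a hedged sketch of what the raw inserts could look like, assuming Group's table is named group so the generated join table is group_members with columns (id, group_id, person_id); check SHOW TABLES for the actual name before relying on it:
-- Assumed join table name; Django generates it from the Group table
-- name plus the field name (here: group + members).
INSERT INTO group_members (group_id, person_id) VALUES (1, 1);  -- Ringo joins The Beatles
INSERT INTO group_members (group_id, person_id) VALUES (1, 2);  -- Paul joins The Beatles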
I have made 2 separate tables, namely userlogin and userprofile, with the following columns.
userlogin:
id
uname
pass
userprofile:
user_id(Foreign key)
name
gender
aboutme
Since I will be taking some of the userprofile data at signup, I was wondering what would be the best way to give the newly generated id from the userlogin table to the userprofile table.
I thought of making both insert queries right after one another, but what if multiple users are signing up at the same time and one user's login is saved with another user's profile?
Should I instead save the profiles keyed by username?
If you worry about "simultaneous" sign-ups, you can use MySQL transactions:
START TRANSACTION; then COMMIT;
Example:
http://www.mysqltutorial.org/mysql-transaction.aspx
For inserting with a foreign key, see this question: https://dba.stackexchange.com/questions/46410/how-do-i-insert-a-row-which-contains-a-foreign-key
Using a transaction is good, but there is probably another thing to point out here.
If id is AUTO_INCREMENT, then:
INSERT INTO userlogin (uname, pass) VALUES (..., ...) -- without specifying id.
Get LAST_INSERT_ID() -- this gives you the value, and it is specific to your connection; no other connection can sneak in and grab your id.
Use that value for doing INSERT INTO userprofile (user_id, ...) VALUES ($id, ...)
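A minimal sketch combining both answers (placeholder values; assumes userlogin.id is AUTO_INCREMENT):
START TRANSACTION;
INSERT INTO userlogin (uname, pass) VALUES ('alice', '<password hash>');
-- LAST_INSERT_ID() is per-connection, so a concurrent signup on another
-- connection cannot change the value used here.
INSERT INTO userprofile (user_id, name, gender, aboutme)
VALUES (LAST_INSERT_ID(), 'Alice', 'F', 'Hello!');
COMMIT;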
Is there some reason for not combining the two tables? 1:1 relationships are rarely useful, as discussed here.
In my Users table I have a column called "products_in_sale" that contains the product ids of the products the user sells.
For one user it looks like this: "33" <-- therefore this user sells only one product, the one with id 33.
Now, since users can sell more than one product, I have to append another number (preferably with a "," in between) to this field when a user creates a new product.
What does the necessary UPDATE statement look like?
Maybe 'UPDATE Users SET products_in_sale = products_in_sale + ", '.$id.'"'; ?
Thanks for your help!
Use the CONCAT_WS function:
UPDATE Users SET products_in_sale = CONCAT_WS(',', products_in_sale, newProductId) where userId = ?
This query also works for the first insert, provided your products_in_sale column defaults to NULL.
Anyway, as suggested by juergen, using another table would be the better option.
Never, ever store multiple values in a single cell; this is a violation of first normal form (1NF). Read more: http://www.ntu.edu.sg/home/ehchua/programming/sql/Relational_Database_Design.html
Every value (e.g. product_id) should have its own cell in a relational database table. In your case, there should be a table, say "user_product", with at least 2 fields, "user_id" and "product_id", which together form the composite primary key.
Of course, there should be a "user" table storing user details, with a "user_id" field linking to the "user_id" field of the "user_product" table. Likewise, there should be a "product" table storing product details, with a "product_id" field linking to the "product_id" field of the "user_product" table.
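A sketch of that normalized design; the product table and the Users primary-key column are assumptions based on the question:
CREATE TABLE user_product (
    user_id    INT NOT NULL,
    product_id INT NOT NULL,
    PRIMARY KEY (user_id, product_id),               -- composite primary key
    FOREIGN KEY (user_id)    REFERENCES Users(userId),
    FOREIGN KEY (product_id) REFERENCES product(id)  -- hypothetical product table
);
-- "User 7 now also sells product 33" becomes an INSERT, not a string UPDATE:
INSERT INTO user_product (user_id, product_id) VALUES (7, 33);
-- And "which products does user 7 sell?" is a plain indexed query:
SELECT product_id FROM user_product WHERE user_id = 7;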
I have the following 2 tables:
Parameters table: ID, EntityType, ParamName, ParamType
Entity table: ID, Type, Name, ParamID, StringValue, NumberValue, DateValue
Entity.ParamID is linked to Parameters.ID
Entity.Type is linked to Parameters.EntityType
StringValue, NumberValue, and DateValue contain data based on Parameters.ParamType (1, 2, 3).
The query result should contain:
Entity.ID, Entity.Name, Parameters.ParamName1, Parameters.ParamName2... Parameters.ParamNameX
The content of ParamNameX follows the correlation above. How is it possible to turn the parameter names into columns and their values into data of those columns? I don't even know where to begin.
Explanation for the above: for example, entity X can be of entitytype 1 and entitytype 2. The parameters table contains paramnames for both types 1 and 2, but I need to get (for example) only entity type 1's paramnames.
What you are trying to achieve is an EAV (Entity-Attribute-Value) model.
But the way you set up your tables is just wrong.
You should have a table per type:
So entity_string, entity_number, entity_date, and a main table entity which holds the id and some general stuff like create_time, update_time, and so on.
Look at Magento and how they set up their tables. This makes it much easier to query and organize your data.
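That said, with the tables as described in the question, the pivot itself is commonly sketched with conditional aggregation. This assumes Entity.ID identifies the entity and repeats once per parameter row; the parameter names 'Color', 'Weight', and 'Expiry' are hypothetical, and each output column must be listed by hand:
SELECT e.ID, e.Name,
       MAX(CASE WHEN p.ParamName = 'Color'  THEN e.StringValue END) AS Color,
       MAX(CASE WHEN p.ParamName = 'Weight' THEN e.NumberValue END) AS Weight,
       MAX(CASE WHEN p.ParamName = 'Expiry' THEN e.DateValue   END) AS Expiry
FROM Entity e
JOIN Parameters p ON p.ID = e.ParamID AND p.EntityType = e.Type
WHERE e.Type = 1  -- only entity type 1's parameters
GROUP BY e.ID, e.Name;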
I need to add data to a MySQL database like that:
Person:
pId, nameId, titleId, age
Name:
nameId, name
Title:
titleId, title
I don't want to have any name or title more than once in the DB, so I didn't see a solution with LAST_INSERT_ID().
My approach looks like that:
INSERT IGNORE INTO Name(name) VALUES ("Peter");
INSERT IGNORE INTO Title(title) VALUES ("Astronaut");
INSERT INTO Person(nameId, titleId, age) VALUES ((SELECT nameId FROM Name WHERE name = "Peter"), (SELECT titleId FROM Title WHERE title = "Astronaut"), 33);
But I guess that's a quite dirty approach!?
If possible, I want to add multiple persons with one query, and without storing anything more than once in the DB.
Is this possible in a nice way? Thanks!
You could put title and name as two columns of your Person table and then:
set a UNIQUE index on each column if you don't want two identical titles or two identical names in the DB,
or set a UNIQUE index on (title, name) if you don't want two entries having both the same name and the same title.
If you really want separate tables, you could do as you suggested in your post, but wrap all your insert statements in a TRANSACTION to allow a rollback if you detect a duplicate somewhere.
See Design dilemma: If e-mail address already used, send e-mail "e-mail address already registered", but can't because can't add duplicate to table, which appears to be exactly the same problem, but with name & email instead of name & titles.
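With separate tables, a hedged sketch of the question's INSERT IGNORE approach; it only deduplicates if the UNIQUE indexes below exist, and the names follow the question:
ALTER TABLE Name  ADD UNIQUE (name);
ALTER TABLE Title ADD UNIQUE (title);

-- Multi-row inserts; duplicates are silently skipped thanks to IGNORE.
INSERT IGNORE INTO Name (name)   VALUES ('Peter'), ('Paula');
INSERT IGNORE INTO Title (title) VALUES ('Astronaut'), ('Engineer');

-- Resolve the ids by value, so this works whether the row was just
-- inserted or already existed.
INSERT INTO Person (nameId, titleId, age)
SELECT n.nameId, t.titleId, 33
FROM Name n, Title t
WHERE n.name = 'Peter' AND t.title = 'Astronaut';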
START TRANSACTION;
INSERT INTO title(value) VALUES ("Prof.");
SELECT LAST_INSERT_ID() INTO @title_id;
-- Instead of using a user-defined variable, you should be able to use
-- the last_insert_id equivalent of your host language's MySQL driver.
INSERT INTO username(value) VALUES ("Sylvain");
SELECT LAST_INSERT_ID() INTO @username_id;
INSERT INTO account(username_id, title_id) VALUES (@username_id, @title_id);
COMMIT;
See LAST_INSERT_ID()
A third solution would be to SELECT before doing your inserts to see if the entries are already present. But personally I wouldn't push the check-before-insert approach: at the very least, it requires an extra query which is mostly superfluous if you use your indexes correctly.
This question is mostly about how to do it, ideas, etc.
I have a situation where a user can create as many custom fields as he wants of type Number, Text, or Date, and use them to make a form. I have to design some table model which can handle and store the values so that queries can be run on them once saved.
Previously I hard-coded the format for 25 user-defined fields (UDFs). I made a table with 25 columns, with 10 Number, 10 Text, and 5 Date types, and stored the label in it if a user made use of a field. Then I mapped it to another table with the same format which stores the values. The mapping is done to know which field has what label, but I suspect this is not an efficient way.
Any suggestion would be appreciated.
Users have permission to create any number of UDFs of the above types; these can then be used to make forms, again N of them, and the data has to be saved for each form type.
E.g. let's say a user created 10 number, 10 date, and 10 text fields, used the first 5 of each to make form1 and all 10 to make form2, then saved the data.
My thoughts on it:
Make a table1 with [id, name (as UDF_xxx where xxx is the data type), UserLabel]
table2 to map form and table1 [id (FK to table1.id), F_id (form id)]
and make one table per data type as [id (FK to table1.id), F_id (form number), R_id (row id for the data, the same across data types), value]
Thanks to all. I'm going to implement it; both the DATASET_ENTRY and JSON approaches look good, as they give wider extensibility. I still have to figure out which will best fit with the existing format.
There are two approaches I have used.
XML: To create dynamic user attributes, you may use XML. This XML will be stored in a CLOB column, say user_attributes. Store the entire user data as XML key-value pairs, with the type as an attribute or another field. This will give you maximum freedom. You can use XOM or any other XML Object Model API to display or operate on the data. A typical node will look like:
<userdata>
...
...
<datanode>
<key type="Date">User Birth</key>
<value>1994-02-25</value>
</datanode>
...
</userdata>
Attribute-AttributeValue: This is the same thing as above, but using tables. You create a table attributes with an FK to user_id, and another table attribute_values with an FK to attribute_id. attributes contains the field names and types for each user, and attribute_values contains the values of those attributes. So basically:
users
user_id
attributes
attr_id
user_id (FK)
attr_type
attr_name
attribute_values
attr_val_id
attr_id (FK)
attr_val
In both approaches you are not limited in how many fields you have or what types of data they hold. But there is a downside: parsing. In either case, you will have to do a small amount of processing to display or analyze the data.
The best-of-both-worlds approach (rigid column structure vs. completely dynamic data) is to have a users table with the must-have columns (like user_name, age, sex, address, etc.) and keep the user-created data (like favorite pet, house, etc.) in either XML or attribute-attribute_value.
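A minimal DDL sketch of the attribute-attribute_value approach; all names and types are illustrative:
CREATE TABLE attributes (
    attr_id   INT AUTO_INCREMENT PRIMARY KEY,
    user_id   INT NOT NULL,
    attr_type VARCHAR(16) NOT NULL,   -- e.g. 'Number', 'Text', 'Date'
    attr_name VARCHAR(64) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users(user_id)
);
CREATE TABLE attribute_values (
    attr_val_id INT AUTO_INCREMENT PRIMARY KEY,
    attr_id     INT NOT NULL,
    attr_val    TEXT,                 -- parsed according to attributes.attr_type
    FOREIGN KEY (attr_id) REFERENCES attributes(attr_id)
);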
What do you want to achieve? A table per form permutation, or might each dataset consist of a different set of fields?
Two possibilities pop into my mind:
1. Create a table that describes one field of a dataset, i.e. the key might be dataset id + field id, and additional columns could contain the value stored as a string plus the type of that value (i.e. number, string, boolean, etc.).
That way each dataset might be different, but upon reading a dataset and storing it into an object you could create the appropriate value types (Integer, Double, String, Boolean, etc.).
2. Create a table per form, using some naming convention. When the form layout is changed, execute ALTER TABLE statements to add, remove, or rename columns, or to change their types.
When the user changes the type of a column or deletes it, you might need to either deny that if the values are not null, or at least ask the user whether she's willing to drop values that don't match the new requirements.
Edit: Example for approach 1
Table UDF // describes the available fields
id (PK)
user_id (FK)
type
name
Table FORM // describes a form's general attributes
id (PK)
user_id (FK)
name
description
Table FORM_LAYOUT // describes a form's field layout
form_id (FK)
udf_id (FK)
mapping //mapping info like column index, form field name etc.
Table DATASET_ENTRY // describes one entry of a dataset, i.e. the value of one UDF in one row of form data
id (PK)
row_id
form_id (FK)
udf_id (FK)
value
Selecting the content for a specific form might then be done like this:
SELECT e.value, f.type, l.mapping FROM DATASET_ENTRY e
JOIN UDF f ON e.udf_id = f.id
JOIN FORM_LAYOUT l ON e.form_id = l.form_id AND e.udf_id = l.udf_id
WHERE e.row_id = ? AND e.form_id = ?
Create a table which manages which fields exist. Then create tables for each data type you want to support, into which the users' values will be saved.
create table Fields (
    fieldid   int not null,
    fieldname text not null,
    fieldtype int not null
);

create table FieldDate (
    ValueId int not null,
    fieldid int not null,
    value   date
);

create table FieldNumber (
    ValueId int not null,
    fieldid int not null,
    value   decimal(20, 6)  -- MySQL has no NUMBER type; DECIMAL is the closest fit
);
..
Another possibility would be to use ALTER TABLE to create custom fields. If your application has the rights to perform this command and the custom fields change very rarely, this is the option I would choose.
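A brief sketch of that option; the table and column names are hypothetical:
-- User adds a custom Date field "birthday" to the table backing form1:
ALTER TABLE form1_data ADD COLUMN birthday DATE NULL;
-- User renames it and changes its type (MySQL syntax):
ALTER TABLE form1_data CHANGE COLUMN birthday date_of_birth DATETIME NULL;
-- User deletes the field; any stored values are dropped with it:
ALTER TABLE form1_data DROP COLUMN date_of_birth;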