Adding an auto-increment SQL script - MySQL

I have a child table named case_parties that holds the name and address of each plaintiff and defendant in a court case.
The table columns include:
case_id, which is a foreign key to the parent table
party_type, which has coded field values of either 1 or 2 (1 indicating a plaintiff and 2 indicating a defendant).
The caveat is that there is not always just one plaintiff and one defendant in every court case. Often there are multiple plaintiffs and/or multiple defendants in a single case; there can be anywhere from 1 to 1,000+ plaintiffs and/or defendants on any given case. I created a new column, let's call it party_id, and SET it with a CONCAT of the case_id and party_type columns. Matching values in this column therefore identify either all the plaintiffs or all the defendants for a given case_id.
To create a simple unique key for each row, I want to run a script that appends an auto-generated incremental number or letter to the end of the matching party_id field. For example, if there are 4 plaintiffs in the same court case, there are now 4 rows with matching party_id values, the last character of which is 1, indicating that the party is a plaintiff.
I want to add an increment so that each row is unique; the end of the 4 values would then reflect something like "1A", "1B", "1C", "1D" or "1-1", "1-2", "1-3", "1-4", etc. I'm thinking adding incremental numbers might be easier than adding incremental letters. No other column values, individually or collectively, make for an efficient composite index in this case. I'm seeking assistance with auto-incrementing the matching column values and would greatly appreciate any help. Thank you.
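Something along these lines is the effect I'm after (a rough sketch assuming MySQL 8+ for window functions; the id column used for ordering is just a stand-in for whatever column gives a stable order within each group):

SELECT party_id,
       CONCAT(party_id, '-', ROW_NUMBER() OVER (PARTITION BY party_id ORDER BY id)) AS unique_party_id
FROM case_parties;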

I would suggest creating a separate table to represent the plaintiffs/defendants and having a type column in there. Then give that table a primary key with a regular auto-increment.
You can then use that as your ID in the case_parties table (a foreign key) and it will address your issue with uniquely identifying each one.
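A rough sketch of that layout (column types are assumptions, the parent table is assumed to be cases(id), and case_parties is shown re-created as a plain link table):

CREATE TABLE parties (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
    party_type TINYINT NOT NULL,        -- 1 = plaintiff, 2 = defendant
    name       VARCHAR(255),
    address    VARCHAR(255),
    PRIMARY KEY (id)
);

CREATE TABLE case_parties (
    case_id  INT UNSIGNED NOT NULL,     -- FK to the parent cases table
    party_id INT UNSIGNED NOT NULL,     -- FK to parties.id, unique per person
    PRIMARY KEY (case_id, party_id),
    FOREIGN KEY (case_id)  REFERENCES cases (id),
    FOREIGN KEY (party_id) REFERENCES parties (id)
);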

Related

SQL two unique groups - ignore when first case occurs

My table has four columns: city, zipcode, number and extra. I created a unique group for city, zipcode and number called unique1, and another group for city, zipcode, number and extra called unique2. Those groups need to be unique, but the problem is that I can have non-unique values when extra is different or is null. For example:
city | zipcode | number | extra
A | 123 | 123 | null
A | 123 | 123 | 10  (I can't add this row because of the unique groups)
How can I solve this problem? (I'm using MySQL)
In other words, what I need is this:
1) The grouping of city, zipcode and number must be unique if extra is null
2) If extra isn't null, I'd like to insert the row even if it collides with the unique rule in 1).
In MySQL, using unique indexes to handle data constraints beyond simple ones is not a great idea. Other, more expensive, database servers have more elaborate ways to describe constraints.
Your first unique index (you called it a "group") -- unique1 -- prevents the second row in your example from being INSERTed into your table.
Edit: Your example shows that you require non-unique values for your first three columns.
I'm guessing a bit, but I think you should drop unique1 and just use unique2.
Drop unique1; unique2 should take care of it.
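A minimal sketch of that change, assuming the table is named addresses and the index names are as in the question:

ALTER TABLE addresses DROP INDEX unique1;
-- unique2 on (city, zipcode, number, extra) stays in place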

Can we use PK of M2M relation for another M2M table?

I have some tables like this:
batches(id, name)
terms(id, name)
subjects(id, name)
Batches has a many-to-many relation with Terms through a table called batches_x_terms.
I wanted to create a table to assign Subjects to many Terms, with those subjects also traceable from Batches, so I thought of creating a table like this:
batches_x_terms_x_subjects(id, batch_id, term_id, subject_id)
But on further thought I concluded that this table would hold a lot of rows for relatively little data, and there would be data redundancy too.
So I want to know if I can use the M2M table's PK as a FK in an M2M relation between:
'batches_x_terms' and 'subjects'
Update 1:
The Batches table has another column, 'year'.
Batches will contain the same batch names from different years, e.g.
'Science', 2010
'Science', 2011
Now, suppose every Batch has 4 terms (semesters), and in each term they have different subjects, but some subjects are common between these 2 batches.
If I go with 'batches_x_terms' and 'batches_x_subjects', then I won't be able to figure out which subject is taught to which batch in a specific term. I need to classify my data like this:
How many terms does each Batch have?
Which subjects are assigned to which Batch in a specific term.
Same subjects can be assigned to another Batch in some other term.
Moreover, I have a constraint that I can't assign a different Term ID to every Batch; for a single semester, every Batch will have a common Term ID.
I hope this much detail is useful.
Declare as UNIQUE NOT NULL every column set whose subrow values are unique and which doesn't contain any smaller column set whose subrow values are unique. (I'll limit myself to the case where NOT NULL is appropriate.) You can declare one such set per table as PRIMARY KEY instead; as a constraint, that's equivalent to UNIQUE NOT NULL.
When a column set's subrow value must appear in another table as the subrow value of a unique column set, declare a FOREIGN KEY. If that unique column set isn't already declared UNIQUE or PK, do so.
Not only "can" you declare a PK, UNIQUE and/or FK per the above, you should declare every one that could be declared. (Some DBMSs will prevent you from declaring FK cycles, though.)

MySQL performance with missing values on autoincrement id as key

Example:
I have a tab "books" with "id" (int autoincrement) as primary key,
name, author, price etc...(not important). There's 5 books and I delete the book with id=3.
When I add other books the autoincrement value will start from 6 and there's a missed 3 in the id sequence (1 2 4 5 6). So multiple delete/add can create a tab with missing id values if I don't set the autoincrement value or I don't reassign id to books.
Can the situation with missing numbers in id create lowering of performance in queries?
You don't need to worry about performance with missing auto-increment values.
Just make sure you choose the proper data type for the auto-increment column so you do not run out of values.
That's okay with auto-increment. A normal integer column will do the job for a very large amount of data, and you can use the UNSIGNED option to double that range.
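For example, a sketch of widening the key on the books table from the question (whether this is needed at all depends on the expected row count):

-- INT UNSIGNED already allows about 4.29 billion ids; BIGINT UNSIGNED is effectively inexhaustible
ALTER TABLE books MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
-- (any foreign keys referencing books.id would need the same type change)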

How to get groups of rows in MySQL and Cassandra

I have a table that is currently in MySQL but will be transferred to a NoSQL system soon. I denormalized the tables, so now there are duplicates of the data: one of the ids changes in each row while the rest of the data stays constant. All rows are connected through ID A, ID B changes for each row, and the user ID is the same for all rows sharing the same ID A.
Now I need to grab 2 groups of rows using the user ID. The number of ID B's varies for every group of A, though, so there can be a variable number of rows grouped together under each ID A. So far I have only been displaying one group at a time by selecting on ID A; now I need to grab 2 sets by the user ID.
I can't seem to find a way to do this, although I don't know everything about SQL. How can I do this now in MySQL, and then in NoSQL when I move to that system? I'll be happy to answer any further questions.
I think you're saying that the rows have a composite key made up of two columns, ids A and B. On the assumption that I got that right, here's how you'd do it in Cassandra (there are two approaches).
You could use CQL and declare your table to have two primary keys, A and B, in that order, along with any other columns in your original MySql table.
You could also create a column family whose row key is id A and which has a column for every unique id B under that id A. The name of the column is the value of id B, and the value of that column is the value (or serialized values) of the remaining MySQL row values. Note that id B doesn't have to be a String value. For any given value of id A, this results in a Cassandra column family row with as many columns as there are unique id B values for that id A value. This is called the "Dynamic Column Family Pattern".
If you take the first approach, you basically end up doing the second approach under the covers (oversimplification alert).
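As a sketch of the first approach in CQL (all names and types here are assumptions; other_data stands in for the remaining MySQL columns):

CREATE TABLE rows_by_a (
    a_id       bigint,      -- ID A: partition key, groups related rows together
    b_id       bigint,      -- ID B: clustering column, varies per row within the group
    user_id    bigint,
    other_data text,
    PRIMARY KEY (a_id, b_id)
);

-- fetch one whole group by its ID A:
SELECT * FROM rows_by_a WHERE a_id = 42;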

How can I change auto increment primary key values on row deletion?

I have a problem: whenever I delete a row, the ID corresponding to that row disappears from the sequence, but I don't want this. What I want is that if any row is deleted, the rows after it should shift up by one position (or by however many rows were deleted).
Example:
Suppose there is a user table (id and name):
id (auto-incremented primary key) | name
1 | xyz
2 | aaa
3 | ray
4 | mark
5 | allen
Now delete the row with id=3, and the table should look like:
id (auto-incremented primary key) | name
1 | xyz
2 | aaa
3 | mark
4 | allen
Is there any way to accomplish this?
No! Don't do this!
Your auto-increment ID is the IDENTITY of a row. Other tables use this ID to refer to a certain row. If you update the ID, you would have to update all other tables referencing this row, which defeats the point of a relational database.
Furthermore, there is never a need to do this: you won't run out of auto-increment values quickly (and if you do, just pick a bigger datatype).
An auto-increment ID is a purely technical number; your application users should never see or use it. If you want to display an identifier to your users, add another column!
You've got completely the wrong end of the stick. Auto-numbers should not be changed, as this would break the links from any referencing tables.
What you want, by the sounds of it, is a row counter, not a primary key.
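For example, a gap-free number for display can be computed at query time instead of being stored; a minimal sketch, assuming MySQL 8+ for ROW_NUMBER() and the user table from the example above:

SELECT ROW_NUMBER() OVER (ORDER BY id) AS display_position, name
FROM user;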
While it's generally not recommended to change these values, there do exist instances where you may need to change them. If you have the appropriate foreign key relationships set up to cascade on UPDATE, then you could do this. Granted, you need to be 100% sure all FK relationships are defined as expected.
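A sketch of what such a relationship might look like, with a hypothetical child table named orders referencing the user table from the example above (assuming user.id is INT):

CREATE TABLE orders (
    id      INT NOT NULL AUTO_INCREMENT,
    user_id INT NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (user_id) REFERENCES user (id)
        ON DELETE RESTRICT
        ON UPDATE CASCADE   -- renumbered user ids propagate here automatically
);

Even then, every table that references user.id would need the same cascading definition before renumbering could be considered safe.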