Automatically add letters in front of an auto-increment field - MySQL

Is it possible to attach a letter in front of an auto-increment value in MySQL? I have several tables in my database, some of which have unique IDs created by auto-increment. I want to be able to distinguish between the auto-generated numbers by a letter at the front.
This is not a feature that is absolutely required, but it would help make small tasks easier.

You could create views for the tables that need distinctive letters in front of their ID values, and read the tables through the views:
CREATE VIEW VTableA AS
SELECT
    CONCAT('A', ID) AS ID,
    other_columns        -- the remaining columns of TableA
FROM TableA;
Same for other tables.
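Reads then go through the view rather than the base table. As a quick usage sketch (still assuming the hypothetical TableA/VTableA names above), the prefixed IDs come back as strings:

SELECT ID
FROM VTableA
WHERE ID = 'A42';   -- the view exposes 'A42' instead of 42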

Unfortunately you cannot do that, at least not in SQL alone. An auto-increment field is of an integer type, so adding a letter there violates the type constraint.
You can have a look at the link below for one possible solution to this problem.
MySQL auto increment plus alphanumerics in one column
I hope this can guide you in the right direction.

The best answer probably depends on what you mean by an alphanumeric ID. Does the alpha part increment in some way, and if so, what are the rules for that? If the alpha part is static, then you don't even need it in the DB: just prepend it to the numeric ID when you output it (perhaps using [s]printf() or similar functions to prepend zeroes so that it's a fixed length?). Without knowing the full requirement, though, we're all just speculating.
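If it helps, here is a minimal SQL sketch of that output-time approach; the table and column names (TableA, ID) are placeholders, and LPAD supplies the fixed-length zero padding mentioned above:

SELECT CONCAT('A', LPAD(ID, 6, '0')) AS display_id   -- 42 becomes 'A000042'
FROM TableA;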

Related

What is the best practice - a new column or a new table?

I have a users table that contains many attributes like email, username, password, phone, etc.
I would like to save a new type of data (an integer), let's call it "superpower", but only very few users will have it. The users table contains 10K+ records, while fewer than 10 users will have a superpower (for all others it will be NULL).
So my question is which of the following options is more correct and better in terms of performance:
Add another column in the users table called "superpower", which will be NULL for almost all users.
Have a new table called users_superpower, which will contain at most 10 records and will map users to superpowers.
Some things I have thought about:
a. The first option seems wasteful of space, but it is really just an integer...
b. The second option will require a LEFT JOIN every time I query the users...
c. Will the answer change if the "superpower" data were 5 columns, for example?
Note: I'm using Hibernate and MySQL, in case that changes the answer.
This might be a matter of opinion. My viewpoint on this follows:
If superpower is an attribute of users and you are not in the habit of adding attributes, then you should add it as a column. 10,000*4 additional bytes is not very much overhead.
If superpower is just one attribute and you might add others, then I would suggest using a JSON column or an EAV table to store the values.
If superpower is really a new type of user with other attributes and dates and so on, then create another table. In this table, the primary key can be the user_id, making the joins between the tables even more efficient.
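As a rough sketch of the first and third options in SQL (the users.id column and the user_superpowers table name are assumptions for illustration):

-- Option 1: a nullable column directly on users
ALTER TABLE users ADD COLUMN superpower INT NULL;

-- Option 3: a separate table keyed by user_id, so the join is on the primary key
CREATE TABLE user_superpowers (
    user_id    INT NOT NULL PRIMARY KEY,
    superpower INT NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users (id)
);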
I would go with just adding a new boolean field in your user entity which keeps track of whether or not that user has superpowers.
Appreciate that adding a new table and linking it requires the creation of a foreign key in your current users table, and this key will be another column taking up space, so it doesn't really avoid the storage cost. If you just want a really small column to record whether a user has superpowers, you can use a boolean variable, which would map to a MySQL BIT(1) column. Because this is a fixed-width column, NULL values would still take up a single bit of space, but this is most likely not a big storage concern compared to the rest of your table.
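A minimal sketch of that boolean-flag approach (the column name has_superpower and the example IDs are hypothetical):

-- Single-bit flag column; NULL for the vast majority of users
ALTER TABLE users ADD COLUMN has_superpower BIT(1) NULL;

-- Mark the handful of users who have one
UPDATE users SET has_superpower = b'1' WHERE id IN (1, 2, 3);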

MySQL xref table - add column or separate table?

I have a cross reference table that contains three major columns:
object id
different object id
relation type between the two
The problem is, in some cases I need two more columns that help define the relation between the two objects.
My question is, what is the proper way to deal with this situation?
Should I create another table with five columns, and have two tables for practically the same purpose?
Or is it OK to add two more columns that will almost always contain NULL? Will it needlessly affect response time and size?
Thanks
Edit:
I've been asked for more information, so here it is:
The database holds philosophical arguments.
This specific table holds the information about which statements are connected by what logic.
These are the columns:
statement_id
logic_id
direction
These are good for two-way logic (such as 'if-then').
But in the case of multiple-statement logic (such as 'and' or 'or') I need two more columns:
exit
inner-logic type
I'm not sure if this extra information is helpful or just more confusing. Feel free to ignore it and answer the question on a purely academic basis.
It is OK to have two IDs and any number of columns describing the relationship. Those extra columns can be NULLable if they are optional.
It sounds like the two IDs JOIN to a single table, correct? In that case, you may need to UNION two SELECTs to check for an ID in either of the columns, and have multiple indexes: one starting with one ID, one starting with the other.
It would help if you provided SHOW CREATE TABLE and a SELECT or two. That might give us a better feel for what the tables are for.
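To illustrate the UNION and the two indexes, here is a rough sketch using hypothetical names xref(object_id, other_object_id, relation_type), since the question doesn't give the actual DDL:

-- Find every row that references object 42 on either side
SELECT other_object_id AS related_id, relation_type
FROM xref
WHERE object_id = 42
UNION
SELECT object_id AS related_id, relation_type
FROM xref
WHERE other_object_id = 42;

-- One index starting with each ID column
CREATE INDEX idx_xref_object ON xref (object_id, other_object_id);
CREATE INDEX idx_xref_other  ON xref (other_object_id, object_id);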

Text and auto-incremented number together in a field in MySQL

I want to create a table where a field's values are labeled with an auto-incremented number at the end, such as:
Comp1
Comp2
Comp3
Is that possible? If yes, how?
I have a question for you first. Will the word "Comp" ever change?
If it will never change, just create an auto-increment column called id and prepend "Comp" to it in your code.
If the word "Comp" may change, you have the option to split it into two columns: one id and another prefix. You would then query with both the id and the prefix in your WHERE clause:
SELECT * FROM yourtable WHERE id = 2232 AND prefix = 'Comp';
Another option is exactly what you desire: create a column of type BINARY(16) and use the HEX() and UNHEX() functions to store and retrieve the ID. However, you will still need to maintain a separate column for the auto-incrementing ID. If you don't want to do that, then before each insert, get the last inserted record and increment it yourself, then insert; but that has a chance of collision. Be sure to index this field if you plan to query on it. Get buy-in from your DBAs, as they won't be happy that your index will grow larger :)
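A minimal sketch of the two-column approach (the table name yourtable and the index name are assumptions):

-- prefix holds the text part, id is the auto-increment part
CREATE TABLE yourtable (
    id     INT NOT NULL AUTO_INCREMENT,
    prefix VARCHAR(16) NOT NULL DEFAULT 'Comp',
    PRIMARY KEY (id),
    KEY idx_prefix_id (prefix, id)   -- supports WHERE prefix = ... AND id = ...
);

-- Reading the combined label back out
SELECT CONCAT(prefix, id) AS label
FROM yourtable
WHERE id = 2232 AND prefix = 'Comp';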

Database storing multiple types of data, but needing unique IDs globally

A while ago, I asked about how to implement a REST API. I have since made headway with that, but am trying to fit my brain around another idea.
In my API, I will have multiple types of data, such as people, events, news, etc.
Now, with REST, everything should have a unique ID. This ID, I take it, should be unique to the whole system, and not just to each type of data.
For instance, there should not be a person with ID #1 and a news item with ID #1. Ultimately, these two things would be given different IDs altogether: person #1 with a unique ID of #1 and news item #1 with a unique ID of #2, since #1 was taken by the person.
In a database, I know that you can create primary keys that automatically increment. The problem is, you usually have a table for each data "type", and if you set the auto-increment for each table individually, you will get "duplicate" IDs (yes, the IDs are still unique in their own table, but not across the whole DB).
Is there an easy way to do this? For instance, can all of these tables be set to work off of one incrementer (the only way I could think of to put it), or would it require creating a table that holds these global IDs and ties them to a table and the unique ID in that table?
You could use a GUID; they will be unique everywhere (for all intents and purposes, anyway).
http://en.wikipedia.org/wiki/Globally_unique_identifier
+1 for UUIDs (note that GUID is a particular Microsoft implementation of a UUID standard)
MySQL has a built-in function UUID() for generating a UUID as text. You can prefix it with the table name so that you can easily recognize it later.
Each call to UUID() generates a fresh new value (as text). So with the above prefixing method, the INSERT query may look like this:
INSERT INTO my_table VALUES (CONCAT('my_table-', UUID()), ...)
And don't forget to make this column a VARCHAR of large enough size and, of course, to create an index for it.
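As a minimal sketch of that setup (my_table and its columns are hypothetical; a bare UUID is 36 characters, so VARCHAR(64) leaves room for the prefix):

CREATE TABLE my_table (
    id   VARCHAR(64) NOT NULL PRIMARY KEY,   -- e.g. 'my_table-6ccd780c-baba-1026-9564-5b8c656024db'
    name VARCHAR(255)
);

INSERT INTO my_table VALUES (CONCAT('my_table-', UUID()), 'example row');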
Now, with REST, everything should have a unique ID. This ID, I take it, should be unique to the whole system, and not just to each type of data.
That's simply not true. Every resource needs to have a unique identifier, yes, but in an HTTP system, for example, that means a unique URI. /people/1 and /news/1 are unique URIs. There is no benefit (and in fact quite a lot of pain, as you are discovering) in constraining the system such that /news/1 has to instead be /news/0983240-2309843-234802/ in order to avoid conflict.

How does Django name the index automatically created for foreign key columns?

I know that Django automatically generates indexes for foreign keys unless we define the field with db_index=False.
I have read this in the Django docs.
But I don't know if it is possible to choose the index name, or how Django chooses it.
It's always something like "tablename_xxxxxx".
Are the "xxxxx" just random characters?
EDIT:
I've discovered that the "xxxx" is some encoding derived from the model field name, but I still don't know whether we can choose an explicit name.
As far as I know, there's no way to choose the name of the index. It's actually computed dynamically, and it involves a hash of the table name and the column names.
See for instance the source code here; even though it's for a particular version of Django, I'm not aware of any changes towards a user-named index.
Is it possible to choose the index name? Yes, but there isn't much benefit in doing so.
The first step is to switch off the automatic index (db_index=False), then manually add one in the migration in the form of a migrations.RunSQL operation.
Yes, it's a lot of extra work and arguably not worth doing.
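For what it's worth, the SQL that such a migrations.RunSQL operation would run is just an ordinary index statement; the table, column, and index names below are hypothetical:

-- Forward migration: create an explicitly named index on the FK column
CREATE INDEX my_explicit_fk_idx ON myapp_book (author_id);

-- Reverse migration: drop it again
DROP INDEX my_explicit_fk_idx ON myapp_book;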