I have several tables like Buyers, Shops, Brands, Money_Collectors, etc.
Each one of those has a default value, e.g. the default Buyer is David, the default Shop is Ebay, and so on.
I would like to save those default values in a database (so that user could change them).
I thought of adding an is_default column to each of the tables, but that seems inefficient because only one row in each table can be the default.
Then I thought that the best approach would be to have a Defaults table containing all the default values. This table would have 1 row and N columns, where N is the number of default values:
Defaults table:
buyer shop brand money_collector
----- ---- ----- ---------------
David Ebay Dell NULL (no default value)
But this doesn't seem to be the best approach either, because the table structure has to change whenever a new default value is added.
What would be the best approach to storing default values?
Just to be clear.
The best way is a column on each table that your dropdowns source from.
And here's why...
"Shouldn't I worry about space when
saving data in a database?"
The short answer is no. The longer answer is what you should worry about is performance. Focusing on space will lead you to do very bad things.
Bad things you'll do if space is a concern:
You'll bury meaning into primary keys, i.e. smart keys.
You'll try to store multiple values in one column.
You'll index too little.
(No doubt we could create a list of 50 bad practices which save space.)
"suppose there are 50 shops (select box with 50 possible values). In this case, to store the default shop you need 50 boolean fields"
Well it's ONE Boolean column. It exists on each row.
Let me ask you this. If you created a table with 1 date column and inserted 1 row, how much space would you use on disk?
If you said 7 or 8 bytes, then you're off by about 1000 times.
The smallest unit of disk space is a block. Blocks are typically 8 KB (they can be as small as 2 KB or as large as 32 KB in general; no nitpicking here, the actual limits are unimportant).
Let's say you have 8 KB blocks; then your 1-column, 1-row table takes 8 KB. If you insert another 999 rows it will still take up 8 KB. (Again, no nitpicking; there is overhead per block and per row. It's an example.)
So in your lookup table with 50 store names, the likelihood that adding 50 bytes to the size of the table forces you to expand from 1 block to 2 is slim to none, and completely irrelevant.
On the other hand, your default table will certainly take up at least one additional block.
But the worst hit to PERFORMANCE is that your call to fill a dropdown will need two round trips to the database: one to get the list, one to get the default. (Yes, you may be able to do this in one, but go with it.)
So you've saved exactly zero space and doubled your network traffic.
You see what I'm saying.
Another crucial reason to stop worrying about space is that you're giving up clarity. Think of the developer you're going to hire to run this app. When he joins the team and looks at the database, imagine the two scenarios:
There's a Boolean column named Default_value
There's a table with no relationships to anything that's named Default_Values
You ask him to build a new form with a dropdown for 'store'.
In scenario 1 he finds the store table, wires up the dropdown to a simple query of the table and uses the default_value field to select the initial value.
In scenario 2, without some training, how would he know to look for a separate table? Maybe he'd see the table, but by the time you're hiring, your data model has hundreds of tables.
Again, a little contrived but the point is salient. Clarity in the database is well, well worth a byte per row.
Technical stuff
I'm not a MySQL guy, but in Oracle a NULL column at the end of a row takes no additional space. In Oracle I would use a VARCHAR2(1), set 'T' = default, and leave the others NULL. That would have the effect of using only 1 additional byte in total, not 1 per row. YMMV with MySQL; you can pose that question separately if you can't Google the answer.
But the time to worry about that is on millions of rows, not hundreds. Any table which feeds a dropdown will never be big enough to start worrying about extra bytes.
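For illustration, here is a minimal sketch of the flag-column approach in MySQL. The Shops table and its columns are hypothetical, not from the question:

CREATE TABLE Shops (
    shop_id    INT AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,
    is_default BOOLEAN NOT NULL DEFAULT FALSE
);

-- one round trip fills the dropdown and tells you which row to preselect
SELECT shop_id, name, is_default
FROM Shops
ORDER BY name;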
What if you created an XML document and stored it in the table in an XML column? The XML could have a tag per table and a sub-node holding its default value.
You should rather create a table with two columns and N rows:
Defaults table:
buyer, David
shop, Ebay
brand, Dell
This way you can add new values without having to change the table structure.
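A minimal sketch of that layout (the names are illustrative):

CREATE TABLE Defaults (
    name  VARCHAR(50) PRIMARY KEY,   -- e.g. 'buyer', 'shop', 'brand'
    value VARCHAR(100) NULL          -- NULL means no default value
);

INSERT INTO Defaults (name, value)
VALUES ('buyer', 'David'), ('shop', 'Ebay'), ('brand', 'Dell');

-- a new kind of default is just another row, not a schema change
INSERT INTO Defaults (name, value) VALUES ('money_collector', NULL);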
You can create a catalog table (some kind of metadata table) containing the default values as strings for the desired table columns. Then you can use the convert function for getting the appropriate value. Below is a sample table definition (Transact-SQL was used):
create table dbo.cat_default_values
(
id_column varchar(30) not null,
id_table varchar(30) not null,
datatype varchar(30) not null,
value varchar(100) not null,
f_creation datetime not null,
usr_creation char(8) null,
primary key clustered (id_column, id_table)
)
declare @defaultValueInt int,
        @defaultValueVarchar varchar(30)

select @defaultValueInt = convert(int, value)
from cat_default_values where id_column = 'defColumInteger' and id_table = 'table1'

select @defaultValueVarchar = value
from cat_default_values where id_column = 'defColumVarchar' and id_table = 'table1'
What you are trying to store is not metadata, so I would not invent an external data store for it (coupled with extra code).
I assume you have PK sequence generation logic under your control. I would assign a magic number x and insert a record in each table with _id = x as the default value. If you want to show the user the default value, you can handle it in your query in a uniform way, or handle it in application logic on insert. The good thing about this is that you have access to the default value at all times without writing any extra logic, and the logic for maintaining a table's default value can be maintained using the same code (templating ;)
(From the lessons the W3C learned from modeling schema information of XML using DTDs.)
The only catch is that this logic should be made explicit, either through thorough documentation or enforced with a trigger.
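A hedged sketch of that idea, assuming signed integer PKs and picking -1 as the reserved magic id (both are assumptions, and the Shops table here is hypothetical):

-- reserve a magic id in each table for the default row
INSERT INTO Shops (shop_id, name) VALUES (-1, 'Ebay');

-- fetching the default then looks the same for every table
SELECT name FROM Shops WHERE shop_id = -1;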
Related
How do I generate a range of numbers in one column in MySQL? I'm looking for any solution to generate a range of numbers that starts at 500000000 and ends at 889999999.
It seems that you may want to use AUTO_INCREMENT on the column. You can set the starting value to the one you desire this way.
Also, you can only have one AUTO_INCREMENT column in a given table.
CREATE TABLE your_table (
    column_1 INT NOT NULL AUTO_INCREMENT,
    -- add other columns here
    PRIMARY KEY (column_1)
) AUTO_INCREMENT = 500000000;
If you already have a table with the AUTO_INCREMENT column, just set the value to the one you want.
ALTER TABLE your_table AUTO_INCREMENT = 500000000;
If what you want is to insert rows with those numbers, use a loop.
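For example, a stored-procedure loop could do it. This is only a sketch reusing the your_table example above; note that inserting roughly 390 million rows one at a time will be very slow:

DELIMITER //
CREATE PROCEDURE fill_range()
BEGIN
    DECLARE n INT DEFAULT 500000000;
    WHILE n <= 889999999 DO
        INSERT INTO your_table (column_1) VALUES (n);
        SET n = n + 1;
    END WHILE;
END//
DELIMITER ;

CALL fill_range();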
Just for fun, generate the range by any means available.
Unload it to a text file.
Load that text file into your table. You do not say whether you are constrained by how long this takes. It sounds like you just want a table with a single INT column with lower and upper limits.
MySQL should just handle these numbers; this is not really a "big" range, seriously.
Do you want to constrain the values in the column to {500000000..889999999}?
Or do you want to know how to define a column to hold these values?
Do you want a written procedure to generate these numbers for you?
Do you want us to size this for you?
Do you want us to write a script or program to load these?
All of these answers are available with minimal sweat. Keywords are MySQL, integer, types.
We cannot see your problem because your question does not describe a problem.
Tell us what you tried, and tell us what happened...
Otherwise just add them, you are still in INT territory (-2Gi..2Gi), not BIGINT yet.
Switch to MariaDB, then JOIN to a pseudo-table called seq_500000000_to_889999999.
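If you go that route, the query is short. MariaDB's SEQUENCE engine exposes these pseudo-tables with a single seq column; the target table below reuses the your_table example from above:

INSERT INTO your_table (column_1)
SELECT seq FROM seq_500000000_to_889999999;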
I am building an application that will have one table of clients with an auto-increment id INT field. Then I have an HTML "case" form where the user has to choose a client from a dropdown and add some info about the "case" that will go into another table.
That means the clients will have ids of 1, 2, 3 and so on, and I would like each case to add one decimal number to the id of the client chosen from the dropdown. So for client number two: 2.1, 2.2 and so on; for client number three: 3.1, 3.2, etc.
What is the best way to add that case field in SQL? I see that if I choose DECIMAL for the case id field, I get the number 3.4 stored as 3.400 because I chose DECIMAL(4,3) (MySQL) for testing. I need that many decimals because the number of cases can go into the hundreds, and I cannot trim that. I'm struggling with the choice of MySQL field types and how to approach this problem.
I'd appreciate some guidance.
The only thing I can think of is to pass the value of a client and then do id + "." + 1, and store it as DECIMAL(1,1) (MySQL). Will that auto-increment to 1.2 and so on?
The MySQL auto-increment mechanism only increments by whole integers. Sorry, that's the way it is implemented.
The best way to design your Case table in MySQL is this:
CREATE TABLE Cases (
case_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
client_id INT NOT NULL,
...other attributes of the case...
FOREIGN KEY (client_id) REFERENCES Client (client_id)
);
It will have one auto-increment counter for the table, and all clients will need to share this number. This means the case numbers won't always be consecutive for a given client, and they won't start at 1 for each client. Sorry, that's the way auto-increment works in MySQL.
The question has been asked many times with some variation of, "how can I make an auto-increment that renumbers for each group?" You could read the MAX(case_id) for the given client for which you need to insert a case, and then use that max case_id + 1 in your INSERT. In other words, forget about using the auto-increment feature and calculate the id yourself.
You have to lock the table while doing this to avoid race conditions; two concurrent users could be inserting at the same time, and read the same value for MAX(case_id) and try to insert the same value.
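A hedged sketch of that approach. The case_no column and the client id 3 are hypothetical (the Cases table above does not have case_no), and this uses an InnoDB transaction with a locking read rather than LOCK TABLES:

START TRANSACTION;

-- the locking read blocks concurrent inserts for the same client
SELECT COALESCE(MAX(case_no), 0) + 1 INTO @next_no
FROM Cases
WHERE client_id = 3
FOR UPDATE;

INSERT INTO Cases (client_id, case_no) VALUES (3, @next_no);

COMMIT;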
Your plan of using decimal numbers will lead to problems.
What if one day you have a client with more than 999 cases? You'd have to reformat all your case id's, not only for the client with 1000 cases, but for all clients. Any references to the case id's that you had sent out in paper statements and reports would become invalid.
How would you do an SQL query to search for all cases for a given client? If you had client_id in its own column, it would be a query like SELECT ... FROM Case WHERE client_id = 3 but if you have to do a query like ... WHERE case_id BETWEEN 3.000 AND 3.999 it's less clear and harder to optimize. It's also harder to explain to a new programmer you hire for the project. If you end up extending the id format to 4 digits past the decimal, you'd have to rewrite all these SQL queries.
Don't do it. This is the best piece of advice I can give you.
You are trying to use what were called "intelligent codes" back in the 80s.
They went out of fashion for a good reason: very expensive to develop, non-maintainable, limited ranges, you name it. Stay away from them and use normal foreign keys instead. They give you all the flexibility you'll need when the application grows.
TL;DR: Is this design correct and how should I query it?
Let's say we have history tables for city and address designed like this:
CREATE TABLE city_history (
id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
name VARCHAR(128) NOT NULL,
history_at DATETIME NOT NULL,
obj_id INT UNSIGNED NOT NULL
);
CREATE TABLE address_history (
id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
city_id INT NULL,
building_no VARCHAR(10) NULL,
history_at DATETIME NOT NULL,
obj_id INT UNSIGNED NOT NULL
);
Original tables are pretty much the same except for history_id and obj_id (city: id, name; address: id, city_id, building_no). There's also a foreign key relation between city and address (city_id).
History tables are populated on every change of the original entry (create, update, delete) with the exact state of the entry at given time.
obj_id holds id of original object - no foreign key, because original entry can be deleted and history entries can't. history_at is the time of creation of history entry.
History entries are created for every table independently - change in city name creates city_history entry but does not create address_history entry.
So to see the state of the whole address with city (e.g. on printed documents) at any point in time T1, we take from both history tables the most recent entries for a given obj_id created before T1, right?
With this design, in theory, we should be able to see the state of a single address with its city at any given point in time. Could anyone help me create such a query for a given address id and time? Please note that there could be multiple records with the same exact timestamp.
There is also a need to create a report showing every change of state of a given address in a given time period, with entries like "city_name, building_no, changed_at". Is this something that can be created with an SQL query? Performance doesn't matter so much here; such reports won't be generated often.
The above report will probably be needed in an interactive version where user can filter results e.g. by city name or building number. Is it still possible to do in SQL?
In reality address table and address_history table have 4 more foreign keys that should be joined in report (street, zip code, etc.). Wouldn't the query be like ten pages long to provide all the needed functionality?
I've tried to build some queries, play with greatest-n-per-group, but I don't think I'm getting anywhere with this. Is this design really OK for my use cases (if so, can you please provide some queries for me to play with to get where I want?)? Or should I rethink the whole design?
Any help appreciated.
(My answer copied from here, since that question never marked an answer as accepted.)
My normal "pattern" in (very)pseudo code:
Table A: a_id (PK), a_stuff
Table A_history: a_history_id (PK), a_id(FK referencing A.a_id), valid_from, valid_to, a_stuff
Triggers on A (a MySQL sketch follows this list):
On insert: insert values into A_history with valid_from = now, and valid_to = null.
On update: set valid_to = now for last history record of a_id; and do the same insert from the "on insert" trigger with the updated values for the row.
On delete: set valid_to = now for last history record of a_id.
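A hedged MySQL sketch of those triggers. The table layout and column types are assumptions based on the pseudocode above:

CREATE TABLE A (
    a_id INT AUTO_INCREMENT PRIMARY KEY,
    a_stuff VARCHAR(100)
);

CREATE TABLE A_history (
    a_history_id INT AUTO_INCREMENT PRIMARY KEY,
    a_id INT NOT NULL,
    valid_from DATETIME NOT NULL,
    valid_to DATETIME NULL,
    a_stuff VARCHAR(100)
);

DELIMITER //
CREATE TRIGGER a_after_insert AFTER INSERT ON A FOR EACH ROW
BEGIN
    INSERT INTO A_history (a_id, valid_from, valid_to, a_stuff)
    VALUES (NEW.a_id, NOW(), NULL, NEW.a_stuff);
END//

CREATE TRIGGER a_after_update AFTER UPDATE ON A FOR EACH ROW
BEGIN
    -- close the currently open history record, then write the new state
    UPDATE A_history SET valid_to = NOW()
    WHERE a_id = NEW.a_id AND valid_to IS NULL;
    INSERT INTO A_history (a_id, valid_from, valid_to, a_stuff)
    VALUES (NEW.a_id, NOW(), NULL, NEW.a_stuff);
END//

CREATE TRIGGER a_after_delete AFTER DELETE ON A FOR EACH ROW
BEGIN
    UPDATE A_history SET valid_to = NOW()
    WHERE a_id = OLD.a_id AND valid_to IS NULL;
END//
DELIMITER ;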
In this scenario, you'd query history with "x >= from and x < to" (not BETWEEN, since a previous record's "to" value should match the next record's "from" value).
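For the point-in-time lookup asked about in the question, a hedged sketch against the A_history table above (the entity id 5 and the @t1 timestamp are placeholders):

SELECT *
FROM A_history
WHERE a_id = 5
  AND valid_from <= @t1
  AND (valid_to > @t1 OR valid_to IS NULL);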
Additionally, this pattern also makes "change log" reports easier.
Without a table dedicated to change logging, the relevant records can be found just by SELECT * FROM A_history WHERE valid_from BETWEEN [reporting interval] OR valid_to BETWEEN [reporting interval].
If there is a central change log table, the triggers can just be modified to include log entry inserts as well. (Unless log entries include "meta" data such as reason for change, who changed, etc... obviously).
Note: This pattern can be implemented without triggers. Using a stored procedure, or even just multiple queries in code, can actually negate the need for the non-history table.
The history table's "a_id" would need to be replaced with whatever uniquely identifies the record normally though; it could still be an id value, but these values would need synthesized when inserting, and known when updating/deleting.
Queries:
(if not new) UPDATE the most recent entry's valid_to.
(if not deleting) INSERT new entry
This is a very "traditional" Problem, when it comes down to versioning (or monitoring) of changes to a certain row.
There are various "solutions", each having its own drawback and advantage.
The following "statements" are a result of my expericence, they are neither perfect, nor do I claim they are the "only ones"!
1.) Creating a "history table": That's the worst Idea of all. You would always need to take into account which table you need to query, depending on DATA that should be queried. That's a "Chicken-Egg" Problem...
2.) Using ONE Table with ONE (increasing) "Revision" Number: That's a better approach, but it will get "hard" to query: Determining the "most recent row" per "id" is very costly no matter which aproach is used.
My personal expierence is, that following the pattern of a "double linked List" ist the best to solve this, when it comes down to Millions of records:
3.) Maintain two columns on every entity, let's say prev_version_id and next_version_id. prev_version_id points to NULL if there is no previous version; next_version_id points to NULL if there is no later version.
This approach would require you to ALWAYS perform two actions upon an update:
Create the new row
Update the old row's reference (next_version_id) to the just-inserted row.
However, when your database has grown to something like 100 million rows, you will be very happy that you have chosen this path:
Querying the "Oldest" Version is as simple as querying where ISNULL(prev_version_id) and entity_id = 5
Querying the "Latest" Version is as simple as querying where ISNULL(next_version_id) and entity_id = 5
Getting a full version history will just target the entity_id=5 of the data-table, sortable by either prev_version_id or next_version_id.
The very often neglected fact: the first two queries will also work to get a list of ALL first versions or of ALL recent versions of an entity, in about NO TIME! (Don't underestimate how "costly" it can be to determine the most recent version of an entity otherwise! Believe me, when "testing" everything seems equally fine, but the real struggle starts when live data with millions of records is used.)
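A hedged sketch of this doubly linked approach. All table and column names here are illustrative, and @old_version_id stands for the id of the version being superseded:

CREATE TABLE entity_versions (
    version_id      BIGINT AUTO_INCREMENT PRIMARY KEY,
    entity_id       INT NOT NULL,
    prev_version_id BIGINT NULL,
    next_version_id BIGINT NULL,
    payload         VARCHAR(255)
);

-- action 1: create the new row, pointing back at the old one
INSERT INTO entity_versions (entity_id, prev_version_id, payload)
VALUES (5, @old_version_id, 'new data');

-- action 2: point the old row forward at the row just inserted
UPDATE entity_versions
SET next_version_id = LAST_INSERT_ID()
WHERE version_id = @old_version_id;

-- latest version of entity 5
SELECT * FROM entity_versions
WHERE entity_id = 5 AND next_version_id IS NULL;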
cheers,
dognose
I have a MySQL table that stores user phone numbers:
user_id | user_phonenumber
----------------------------
id1 | 555-123456789
I want to allow the user to store multiple phone numbers, and I don't want to limit the number of numbers a user can be associated with.
What's the best way of structuring my data, and how would a query work in PDO?
For example, should I store them all in the same field with comma separators and then parse the output when the query is returned, or should I use another table and have each row as a separate number with common user_ids? How would a lookup work then (please provide example code if possible)?
Thanks
Generally, RDBMS systems are designed to access fields/rows. Everything becomes much harder when you start to break the data-field link/consistency/logic.
I mean, when you start to store more data in a single field.
But you know your system's future. It may be that you won't ever have to access, for example, the first phone number on its own, and if you can handle it everywhere as a blob, then it can be fine to store multiple values in a single field.
Anyway, if this is not homework or a similar short-lived task, you should choose the one phone number per record approach.
I mean something like this can be future proof:
create table user_phonenumbers(
    id integer auto_increment primary key,
    user_id integer references user(id),
    phonenumber varchar(32)
);
Yes, use another table to store user phone numbers.
Use an INNER JOIN for the lookup; it works well.
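A hedged lookup sketch against that design (the user table name and the id value 1 are assumptions):

SELECT u.id, p.phonenumber
FROM user u
INNER JOIN user_phonenumbers p ON p.user_id = u.id
WHERE u.id = 1;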
I have a requirement to have 612 columns in my database table. The # of columns as per data type are:
BigInt – 150 (PositionCol1, PositionCol2…………PositionCol150)
Int - 5
SmallInt – 5
Date – 150 (SourceDateCol1, SourceDate2,………….SourceDate150)
DateTime – 2
Varchar(2000) – 150 (FormulaCol1, FormulaCol2………………FormulaCol150)
Bit – 150 (IsActive1, IsActive2,……………….IsActive150)
When user does the import for first time the data gets stored in PositionCol1, SourceDateCol1, FormulaCol1, IsActiveCol1, etc. (other datetime, Int, Smallint columns).
When user does the import for second time the data gets stored in PositionCol2, SourceDateCol2, FormulaCol2, IsActiveCol2, etc. (other datetime, Int, Smallint columns)….. so and so on.
There is a ProjectID column in the table for which data is being imported.
Before starting the import process, user maps the excel column names with the database column names (PositionCol1, SourceDateCol1, FormulaCol1, IsActiveCol1) and this mapping get stored in a separate table; so that when retrieved data can be shown under these mapping column names instead of DB column names. E.g.
PositionCol1 may be mapped to SAPDATA
SourceDateCol1 may be mapped to SAPDATE
FormulaCol1 may be mapped to SAPFORMULA
IsActiveCol1 may be mapped to SAPISACTIVE
40,000 rows will be added to this table every day. My question is: will SQL be able to handle the load of that much data in the long run?
Most of the time, a row will have data in about 200-300 columns; in the worst case it'll have data in all 612 columns. With this in mind, should I make some changes to the design to avoid future performance issues? If so, please suggest what could be done.
If I stick with my current design, what points should I take care of, apart from indexing, to get optimal performance when retrieving data from this huge table?
If I need to retrieve data for a particular entity, e.g. SAPDATA, I'll have to go to my mapping table, get the database column name mapped to SAPDATA (PositionCol1 in this case), and retrieve it. But that way I'll have to write dynamic queries. Is there any better way?
Don't stick with your current design. Your repeating groups are unwieldy and self-limiting... What happens when somebody uploads 151 times? Normalise this table so that you have one of each type per row rather than 150. You won't need the mapping this way, as you can select SAPDATA from the position column without worrying whether it is 1-150.
You probably want a PROJECTS table with an ID, a PROJECT_UPLOADS table with an ID and an FK to the PROJECTS table. This table would have Position, SourceDate, Formula and IsActive given your use-case above.
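A hedged sketch of that shape; the column types are guessed from the data types listed in the question, so adjust as needed:

CREATE TABLE PROJECTS (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE PROJECT_UPLOADS (
    id INT NOT NULL PRIMARY KEY,
    projectid INT NOT NULL REFERENCES PROJECTS (id),
    Position BIGINT NULL,
    SourceDate DATE NULL,
    Formula VARCHAR(2000) NULL,
    IsActive BIT NULL
);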
Then you could do things like
select p.name, pu.position from PROJECTS p inner join PROJECT_UPLOADS pu on pu.projectid = p.id WHERE pu.position = 'SAPDATA'
etc.