I have a MySQL table that stores user emails:
user_id | user_phonenumber
----------------------------
id1 | 555-123456789
I want to allow the user to store multiple phone numbers, and I don't want to limit the number of numbers a user can be associated with.
What's the best way of structuring my data, and how would a query work in PDO?
For example, should I store them all in the same field with comma separators and then parse the output when the query is returned, or should I use another table and have each row as a separate number with common user_ids? How would a lookup work then (please provide example code if possible)?
Thanks
Generally, RDBMS systems are designed to access fields and rows; everything becomes much harder once you break the data-to-field mapping and its consistency/logic.
That is, once you start storing multiple values in a single field.
But you know your system's future. It may be that you will never need to access, say, the first phone number on its own, and if you can treat the whole thing as an opaque blob everywhere, then storing multiple values in one field can be fine.
Anyway, if this is not homework or a similarly short-lived task, you should choose the one-phone-number-per-record approach.
Something like this should be future-proof:
create table user_phonenumbers(
id integer auto_increment primary key,
user_id integer references user(id),
phonenumber varchar(32)
);
Yes, use another table to store user phone numbers.
Use an INNER JOIN for the lookup; that works well.
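For example, a lookup of one user's numbers could look like this (a sketch; the parent table and its id column follow the create table above and are assumptions, and in PDO you would prepare the statement and bind the user id to the ? placeholder):
SELECT u.id, p.phonenumber
FROM user AS u
INNER JOIN user_phonenumbers AS p ON p.user_id = u.id
WHERE u.id = ?;
-- In PDO, roughly: $stmt = $pdo->prepare($sql); $stmt->execute(array($userId));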
I am building an application that will have one table of clients with an auto-increment INT id field. Then I have an HTML "case" form where the user has to choose a client from a dropdown and add some info about the "case", which will go into another table.
That means the clients will have ids of 1, 2, 3 and so on, and I would like each case to add one decimal place to the id of the client chosen from the dropdown. So for client number two: 2.1, 2.2 and so on; for client number 3: 3.1, 3.2, etc.
What is the best way to add that case field in SQL? I see that if I choose DECIMAL for the case id field I get 3.4 stored as 3.400, because I chose DECIMAL(4,3) (MySQL) for testing. I need that many decimal places because the number of cases can go into the hundreds, so I cannot just trim them. I'm struggling with the MySQL field types and how to approach this problem.
I'd appreciate some guidance.
The only thing I can think of is to pass the value of the client and then do id + "." + 1, and store it as DECIMAL(1,1) (MySQL). Will that auto-increment to 1.2 and so on?
The MySQL auto-increment mechanism only increments by whole integers. Sorry, that's the way it is implemented.
The best way to design your Case table in MySQL is this:
CREATE TABLE Cases (
case_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
client_id INT NOT NULL,
...other attributes of the case...
FOREIGN KEY (client_id) REFERENCES Client (client_id)
);
It will have one auto-increment counter for the table, and all clients will need to share this number. This means the case numbers won't always be consecutive for a given client, and they won't start at 1 for each client. Sorry, that's the way auto-increment works in MySQL.
The question has been asked many times with some variation of, "how can I make an auto-increment that renumbers for each group?" You could read the MAX(case_id) for the given client for which you need to insert a case, and then use that maximum + 1 in your INSERT. In other words, forget about using the auto-increment feature and calculate the id yourself.
You have to lock the table while doing this to avoid race conditions; two concurrent users could be inserting at the same time, and read the same value for MAX(case_id) and try to insert the same value.
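A sketch of that approach (assuming you add a separate per-client counter column, here called case_no, which is not in the table above, and keep the auto-increment case_id as the surrogate key; client_id 3 is just an example):
LOCK TABLES Cases WRITE;

-- Compute the next per-client case number while the table is locked.
SELECT COALESCE(MAX(case_no), 0) + 1 INTO @next_no
FROM Cases
WHERE client_id = 3;

INSERT INTO Cases (client_id, case_no) VALUES (3, @next_no);

UNLOCK TABLES;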
Your plan of using decimal numbers will lead to problems.
What if one day you have a client with more than 999 cases? You'd have to reformat all your case id's, not only for the client with 1000 cases, but for all clients. Any references to the case id's that you had sent out in paper statements and reports would become invalid.
How would you do an SQL query to search for all cases for a given client? If you had client_id in its own column, it would be a query like SELECT ... FROM Cases WHERE client_id = 3, but if you have to write ... WHERE case_id BETWEEN 3.000 AND 3.999 it's less clear and harder to optimize. It's also harder to explain to a new programmer you hire for the project. And if you ever extend the id format to 4 digits past the decimal, you'd have to rewrite all of these SQL queries.
Don't do it. This is the best piece of advice I can give you.
You are trying to use what were called "intelligent codes" back in the 80s.
They went out of fashion for a good reason: very expensive to develop, non-maintainable, limited ranges, you name it. Stay away from them and use normal foreign keys instead. They give you all the flexibility you'll need when the application grows.
We have an older system that's being replaced piecemeal. The people who originally designed it broke US telephone numbers for our clients up into three fields: phone_part_1, phone_part_2, and phone_part_3, corresponding to US Areacodes, Exchanges, and Phone Numbers respectively.
We're transitioning to use a single field, phone_number, to hold all 10 digits. But, because some pieces of the system will continue to reference the older fields, we've been forced to double up for the moment.
I'm wondering if it's possible to use MySQL built-in features to reroute requests for the old fields (both on read and write) to the newer field without having to change the old code (which is in a language nobody here is comfortable in anyhow.) So that:
SELECT phone_part_1 FROM users;
Would end up the same as
SELECT SUBSTRING( phone_number, 1, 3 ) FROM users;
To be clear, I want to do this without manipulating the individual queries. Is it possible? How?
You could define a VIEW:
CREATE VIEW users AS
SELECT SUBSTRING( phone_number, 1, 3 ) AS phone_part_1, ... FROM real_users;
Then you can query it as if it were a table:
SELECT phone_part_1 FROM users;
But that would require your "real" table to be stored with a distinct table name. You can't make a view with the same name as an existing table.
When you're ready to really replace the table with the new structure, then you can use RENAME TABLE to change tables as a quick action (no table restructure required).
Have you looked into views? A view will take the place of a new table for now, providing a way to have your new structure, but still access the data in the original tables. Once you are ready for your final move, you can implement new tables and do a mass conversion of any remaining data you haven't done yet. Or you can go in reverse, which is what it sounds like you really would prefer.
Create your new table, convert your data, and set up a view that mimics the old structure.
Views in MySQL: http://dev.mysql.com/doc/refman/5.0/en/create-view.html
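In that reverse direction, a sketch of a view that mimics the old three-field structure on top of the new single-field table (the table, view, and column names here are assumptions):
CREATE VIEW users_legacy AS
SELECT user_id,
       SUBSTRING( phone_number, 1, 3 ) AS phone_part_1,
       SUBSTRING( phone_number, 4, 3 ) AS phone_part_2,
       SUBSTRING( phone_number, 7, 4 ) AS phone_part_3,
       phone_number
FROM users_new;
Old code can then run SELECT phone_part_1 FROM users_legacy; unchanged, apart from the view name.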
We would like to filter purchase orders either based on purchase order id (primary key) or name of the purchase order using a single search box.
We used the LIKE operator to search on the name field, but it doesn't seem to work on the primary key. It works only when we use the equals operator for the id(s). But it would be preferable if we could also filter purchase orders using LIKE on the id(s). How can we do this?
create table purchase_orders (
id int(11) primary key,
name varchar(255),
...
)
Option 1
SELECT *
FROM purchase_orders
WHERE id LIKE '%123%'; -- tribute to TemporaryNickName
This is horrible, performance-wise :)
Option 2a
Add a text column which receives a string version of id. Maybe add some triggers to populate it automatically.
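For example, a minimal sketch (the id_text column and trigger names are assumptions; this works here because id is not AUTO_INCREMENT, so NEW.id is already known in a BEFORE INSERT trigger):
ALTER TABLE purchase_orders ADD COLUMN id_text VARCHAR(20);

-- Backfill existing rows with the text copy of id.
UPDATE purchase_orders SET id_text = CAST(id AS CHAR);

-- Keep the text copy in sync for new rows.
CREATE TRIGGER trg_purchase_orders_id_text
BEFORE INSERT ON purchase_orders
FOR EACH ROW
SET NEW.id_text = CAST(NEW.id AS CHAR);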
Option 2b
Change the type of id column to CHAR or VARCHAR (I believe CHAR should be preferred for a primary key).
In both 2a. and 2b. cases, add an index (maybe a FULLTEXT one) to this column.
I think LIKE should work. I assume that your SQL wasn't correctly written.
Let's assume that you have order name "ABCDEF" then you can find this using the following query structure.
SELECT id FROM purchase_orders WHERE name LIKE '%CD%';
To explain: the % sign is a wildcard, so this query selects any string that contains "CD" anywhere inside it.
According to the table structure, name is a VARCHAR(255). That is a fairly large string, and searching it with SQL functions like LIKE is going to consume resources and take time. You can always search by id with WHERE id = something, which is much faster, but an order id isn't very user-friendly data; I would let users search by name instead. My recommendation is to use Apache Lucene or MySQL's full-text search feature, which can improve search performance.
Apache Lucene
MySQL full-text search
These are tools built to search for a pattern or word through a large list of strings much faster. Many websites use them to build their own mini search engines. I found that MySQL's full-text search function has pretty much no learning curve and is straightforward to use =D
I am currently working on a web service that stores and displays money currency data.
I have two MySQL tables, CurrencyTable and CurrencyValueTable.
The CurrencyTable holds the names of the currencies as well as their description and so forth, like so:
CREATE TABLE CurrencyTable ( name VARCHAR(20), description TEXT, .... );
The CurrencyValueTable holds the values of the currencies during the day - a new value is inserted every 2 minutes when the market is open. The table looks like this:
CREATE TABLE CurrencyValueTable ( currency_name VARCHAR(20), value FLOAT, `datetime` DATETIME, ....);
I have two questions regarding this design:
1) I have more than 200 currencies. Is it better to have a separate CurrencyValueTable for each currency or hold them all in one table?
2) I need to be able to show the current (latest) value of the currency. Is it better to just insert such a field to the CurrencyTable and update it every two minutes or is it better to use a statement like:
SELECT value FROM CurrencyValueTable ORDER BY `datetime` DESC LIMIT 1
The second option seems slower.. I am leaning towards the first one (which is also easier to implement).
Any input would be greatly appreciated!!
p.s. - please ignore SQL syntax / other errors, I typed it off the top of my head..
Thanks!
To your questions:
I would use one table. Especially if you need to report on or compare data from multiple currencies, everything will be much easier with a single table.
If you don't have a need to track the history of each currency's value, then go ahead and just update a single value -- but in that case, why even have a separate table? You can just add "latest value" as a field in the currency table and update it there. If you do need to track history, then you will need the two tables and the SQL you posted will work.
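Note that with all currencies in one table, that query also needs a filter on the currency, and a composite index keeps it fast (a sketch; 'USD' and the index name are just examples):
CREATE INDEX idx_currency_datetime ON CurrencyValueTable (currency_name, `datetime`);

-- Latest value for one currency; with the index above this reads a single row
-- instead of scanning the whole history.
SELECT value
FROM CurrencyValueTable
WHERE currency_name = 'USD'
ORDER BY `datetime` DESC
LIMIT 1;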
As an aside, instead of FLOAT I would use DECIMAL(10,2). As of MySQL 5.0, DECIMAL values are stored and computed exactly, which avoids the rounding surprises FLOAT gives you when handling currency.
It is better to have one table holding all currencies
If there is need for historical prices, then the table needs to hold them. A reasonable compromise in many situations is to split the price table into a full list of historical prices and another table which only has the current prices.
Using the FLOAT data type can be troublesome. Please be sure you know what you are doing. If not, use an exact numeric type (such as DECIMAL) intended for currency instead.
Since your web service is transactional, it is better to touch as few tables as possible per request. Since you will be reading and writing a lot, I would suggest having a single table.
It's better to add a field to CurrencyTable and update it than to hit two tables for a single request.
I have several tables like Buyers, Shops, Brands, Money_Collectors, e.t.c.
Each one of those has a default value, e.g. the default Buyer is David, the default Shop is Ebay, and so on.
I would like to save those default values in a database (so that user could change them).
I thought about adding an is_default column to each of the tables, but that seems wasteful because only one row in each table can be the default.
Then I thought that the best would be to have Defaults table that will contain all the default values. This table will have 1 row and N columns, where N is the number of the default values:
Defaults table:
buyer shop brand money_collector
----- ---- ----- ---------------
David Ebay Dell NULL (no default value)
But this does not seem to be the best approach either, because the table structure has to change whenever a new default value is added.
What would be the best approach to store default values ?
Just to be clear.
The best way is a column on each table that your dropdowns source from.
And here's why...
"Shouldn't I worry about space when
saving data in a database?"
The short answer is no. The longer answer is what you should worry about is performance. Focusing on space will lead you to do very bad things.
Bad things that you'll do if space is a concern.
You'll bury meaning into Primary Keys. i.e. Smart Keys.
You'll try to store multiple values in one column.
You'll index too little
(No doubt we could create a list of 50 bad practices which save space)
"Suppose there are 50 shops (select box with 50 possible values). In this case, to store the default shop you need 50 boolean fields."
Well it's ONE Boolean column. It exists on each row.
Let me ask you this. If you created a table with 1 date column and inserted 1 row, how much space would you use on disk?
If you said a 7 or 8 bytes then you're off by about 1000 times.
The smallest unit of disk space is a block. Blocks are typically 8KB (they can be as small as 2KB or as large as 32KB, in general; no nitpicking here, the actual limits are unimportant).
Let's say you have 8KB blocks; then your 1-column, 1-row table takes 8KB. If you insert another 999 rows it will still take up 8KB. (Again, no nitpicking; there is overhead per block and per row. It's an example.)
So in your look up table with 50 store names, the likelihood that adding 50 bytes to the size of the table forces you to expand from 1 block to 2 is slim to none and completely irrelevant.
On the other hand, your default table will certainly take up at least one additional block.
But the worst hit to PERFORMANCE is that your call to fill a drop down will need two round trips to the database, one to get the list, one to get the default. (yes, you may be able to do this in one but go with it)
So you've saved exactly zero space and doubled your network traffic.
You see what I'm saying.
Another crucial reason to stop worrying about space is that you're giving up clarity. Think of the developer you're going to hire to run this app. When he joins the team and looks at the database, imagine the two scenarios.
There's a Boolean column named Default_value
There's a table with no relationships to anything that's named Default_Values
You ask him to build a new form with a dropdown for 'store'.
In scenario 1 he finds the store table, wires up the dropdown to a simple query of the table and uses the default_value field to select the initial value.
In scenario 2, without some training, how would he know to look for a separate table? Maybe he'd see the table, but by the time you're hiring, your data model has hundreds of tables.
Again, a little contrived but the point is salient. Clarity in the database is well, well worth a byte per row.
Technical stuff
I'm not a MySQL guy, but in Oracle a null column at the end of a row takes no additional space. In Oracle I would use a VARCHAR2(1), set 'T' for the default, and leave the others null. That has the effect of using only 1 additional byte in total, not 1 per row. YMMV with MySQL; you can pose that question separately if you can't Google the answer.
But the time to worry about that is on millions of rows, not hundreds. Any table which feeds a dropdown will never be big enough to start worrying about extra bytes.
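As a rough MySQL counterpart of that Oracle trick (a sketch; the shops table and column name are assumptions): a nullable flag plus a unique index allows only one default row per table, because a MySQL unique index permits any number of NULLs but only one row with the value 1.
ALTER TABLE shops
  ADD COLUMN is_default BOOLEAN NULL,
  ADD UNIQUE KEY uq_shops_default (is_default);

-- Mark the default; every other row keeps is_default = NULL.
UPDATE shops SET is_default = 1 WHERE name = 'Ebay';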
What if you create an XML document and store it in a column in the table? The XML could have a tag per table and a child node holding that table's default value.
You should rather create a table with two columns and N rows:
Defaults table:
name   value
-----  -----
buyer  David
shop   Ebay
brand  Dell
This way you can add new default values without having to change the table structure.
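A sketch of that table and a lookup (names follow the example above):
create table defaults (
  name varchar(30) primary key,
  `value` varchar(100)
);

insert into defaults (name, `value`) values
  ('buyer', 'David'),
  ('shop', 'Ebay'),
  ('brand', 'Dell');

-- fetch the default shop
select `value` from defaults where name = 'shop';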
You can create a catalog table (some kind of metadata table) containing the default values as strings for the desired table columns. Then you can use the convert function for getting the appropriate value. Below is a sample table definition (Transact-SQL was used):
create table dbo.cat_default_values
(
id_column varchar(30) not null,
id_table varchar(30) not null,
datatype varchar(30) not null,
value varchar(100) not null,
f_creation datetime not null,
usr_creation char(8) null,
primary key clustered (id_column, id_table)
)
declare @defaultValueInt int,
        @defaultValueVarchar varchar(30)
select @defaultValueInt = convert(int, value)
from cat_default_values where id_column = 'defColumInteger' and id_table = 'table1'
select @defaultValueVarchar = value
from cat_default_values where id_column = 'defColumVarchar' and id_table = 'table1'
What you are trying to store is not metadata, so I would not invent an external data store (plus the extra code that comes with it) to hold it.
I assume you have PK sequence generation logic under your control. I would assign a magic number x and insert a record in each table with _id = x to act as the default value. If you want to show the user the default, you can handle it uniformly in your queries, or handle it in application logic at insert time. The nice thing about this is that you always have access to the default value without writing any extra logic, and the code that maintains a table's default can be the same for every table (templating ;)
(This follows the lessons the W3C learned from modeling schema information of XML using DTDs.)
The only catch is that this convention should be made explicit, either through thorough documentation or enforced with a trigger.
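One reading of that idea, as a sketch (assuming the magic id is 0 and using a shops table; both are assumptions):
-- Reserve id = 0 in every table for the row that acts as the default.
insert into shops (id, name) values (0, 'Ebay');

-- The default of any table can then be fetched the same uniform way:
select * from shops where id = 0;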