Modelling limited availability in Doctrine - mysql

This question is about modelling limited availability in Doctrine 2. I'm sure this has already been discussed here as it seems quite basic, but I could not find any best practices. It may be that limit/restrict/max/... are bad search terms, as they all mean something else in the db world :-).
Simplified example
Assume a typical online shop application that allows multiple users to buy items of some kind (at the same time). Some of these items may have a limited availability (first come, first served). So two users may be in a concurrent situation when checking out/confirming an order. The faster one must win the race; the other order should not even be processed (inserted into the database).
Entities/tables may look like this:
items
+----+-----+---------------+---------+
| id | ... | max_available | version |
+----+-----+---------------+---------+
| 7  |     | 4             | 2       |
| 8  |     | 1             | 0       |
+----+-----+---------------+---------+
orders
+----+---------+----------+
| id | item_id | quantity |
+----+---------+----------+
| 1  | 7       | 2        |
| 2  | 7       | 1        |
+----+---------+----------+
In this case: another order for item 8 with a quantity of 1 would be valid. Another order for item 7 with a quantity of 2 must be prevented, as this would be one more than are available.
Best practice?
The application uses the Doctrine 2 ORM; the db will be MySQL. The system may be coupled to the db type, but if there is a reasonable db-agnostic way, that's even better of course.
What's the best way to model this?
Transactions and locking at the db level (the db needs to support this)? Locking at the ORM level (integer version field)? Or should there (additionally) be triggers installed that ensure data integrity at the database level?
Sidenote: Should constraints be optional by design, or can they be part of the business logic? In other words: is it bad practice to test against constraints and let the test fail under normal conditions - e.g. by having a (concurrency-safe) trigger on updates/inserts that cancels the request if an item isn't available anymore? (This would only work for certain db types, and InnoDB as the engine in the case of MySQL...)
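For illustration only, here is a minimal sketch of the db-level transaction/locking option at the SQL level, assuming MySQL with InnoDB (Doctrine would issue something equivalent when a pessimistic write lock is requested inside a transaction); the item id and quantity are just the example's values:

-- Lock the item row so concurrent checkouts for the same item serialize here.
START TRANSACTION;

SELECT max_available
FROM items
WHERE id = 7
FOR UPDATE;

-- In application code: add the requested quantity to the total already ordered
-- and only insert if the result still fits within max_available.
SELECT COALESCE(SUM(quantity), 0) AS already_ordered
FROM orders
WHERE item_id = 7;

INSERT INTO orders (item_id, quantity) VALUES (7, 2);

COMMIT;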

Related

How to index a database?

This is killing me - everybody says what it is, but no one points to a guide or teaches the basics.
Is it something that is better done from the start, or can you add indexes just as easily later, when your loading times are getting longer?
Has anyone found a good starting point for someone who's not a pro in databases? (I mean a starting point for indexing - don't worry, I know the basics of databases.) Main rules, good practice, etc.
I'm not here to ask you to write a huge tutorial, but if you're really, really bored - go ahead. :)
I'm using WordPress, if that's important to know. Yes, I know that WP uses very basic indexing, but if it's something good to start with from the beginning, I can't see a reason why not to.
It's barely related, but I also didn't find an answer online. I can guess the answer, but I'm not 100% sure - what's the more efficient way to store data with the same key: in an array or in separate rows (separate ids but the same key)? There's usually a maximum of 20 items per post & the number of posts could be in the thousands in the future. Which would be the better solution?
Different rows, ids & values BUT same key
id | key | values
-----------------
25 | Bob | 3455
24 | Bob | 1654
23 | Bob | 8432
Same row, id & key BUT value is serialized array
id | key | values |
------------------------------
23 | Bob | serialized array |
------------------------------
If you want a quick rule of thumb, index any columns in a table that you will be using to look up rows. For example, I may have a table as follows:
id | Name | date
--------------------
0  | Bob  | 11.12.16
1  | John | 15.12.16
2  | Tim  | 19.12.16
So obviously your ID is your primary index, but let's say you have a page that will SORT the whole table by DATE - then you would add date as an index.
Basically, indexes make it a lot faster for the engine to find specific records or order them by a specific column. They do a lot more, but when I am designing sites for myself or little tools for the office at work, I usually just go by that.
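As a sketch of that rule of thumb (hypothetical table and index names):

CREATE TABLE people (
    id   INT PRIMARY KEY,
    name VARCHAR(100),
    date DATE
);

-- Speeds up queries that filter or sort by date, e.g.
-- SELECT * FROM people ORDER BY date;
CREATE INDEX idx_people_date ON people (date);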
Large corporate tables can have thousands of indexes and even more relations between tables, but usually for us small peasant folk, what I said should be enough.
You're asking a really complicated question. But the tl;dr: a database index is a data structure that improves the speed of data retrieval operations on a database table, at the cost of additional writes and storage space to maintain the index data structure.
More detailed info is already provided in the thorough answer here:
How does database indexing work?

node.js fetch mysql data using two tables

Table lists
id | user_id | name
1  | 3       | ListA
2  | 3       | ListB
Table celebrities
id | user_id | list_id | celebrity_code
1  | 3       | 1       | AA000297
2  | 3       | 1       | AA000068
3  | 3       | 2       | AA000214
4  | 3       | 2       | AA000348
I am looking for a JSON object like this:
[
{id:1, name:'ListA', celebrities:[{celebrity_code:AA000297},{celebrity_code:AA000068}]},
{id:2, name:'ListB', celebrities:[{celebrity_code:AA000214},{celebrity_code:AA000348}]}
]
Moved this to an answer since the details were getting long, and I thought the additional references would be useful to future readers.
Since you are using MySQL, check out GROUP_CONCAT. To get your object, you will want to GROUP_CONCAT on a CONCATenated string. If you could live with a schema more like {id:2, name:'ListB', celebrity_codes:['AA000214','AA000348']} you'll have a simpler query. If you make a SQLfiddle of your basic schema (basically your create tables plus the inserts of the above sample data), someone might even write it for you. :-)
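For instance, the simpler celebrity_codes variant could look roughly like this (a sketch against the lists and celebrities tables from the question):

SELECT
    l.id,
    l.name,
    GROUP_CONCAT(c.celebrity_code) AS celebrity_codes
FROM lists l
JOIN celebrities c ON c.list_id = l.id
WHERE l.user_id = 3
GROUP BY l.id, l.name;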
To be clear, while GROUP_CONCAT can do this, if you are trying to generate more than a fairly simple schema, it gets to be some pretty messy code, and it starts making more and more sense to move it into your application layer, both from a code maintenance standpoint and for performance & scalability reasons.
Also note that SQLite supports GROUP_CONCAT. For other databases:
Postgres users should look at string_agg.
SQL Server users should check out this project on CodePlex.
Oracle users can use MODEL, as illustrated here.

Whether to merge avatar and profile tables?

I have two tables:
Avatars:
Id | UserId | Name | Size
-----------------------------------------------
1 | 2 | 124.png | Large
2 | 2 | 124_thumb.png | Thumb
Profiles:
Id | UserId | Location | Website
-----------------------------------------------
1 | 2 | Dallas, Tx | www.example.com
These tables could be merged into something like:
User Meta:
Id | UserId | MetaKey | MetaValue
-----------------------------------------------
1 | 2 | location | Dallas, Tx
2 | 2 | website | www.example.com
3 | 2 | avatar_lrg | 124.png
4 | 2 | avatar_thmb | 124_thumb.png
This to me could be a cleaner, more flexible setup (at least at first glance). For instance, if I need to allow a "user status message", I can do so without touching the database.
However, the user's avatars will be pulled far more than their profile information.
So I guess my real questions are:
What kind of performance hit would this produce?
Is merging these tables just a really bad idea?
This is almost always a bad idea. What you are doing is a form of the Entity Attribute Value model. This model is sometimes necessary when a system needs a flexible attribute system to allow the addition of attributes (and values) in production.
This type of model is essentially built on metadata in lieu of real relational data. This can lead to referential integrity issues, orphan data, and poor performance (depending on the amount of data in question).
As a general matter, if your attributes are known up front, you want to define them as real data (i.e. actual columns with actual types) as opposed to string-based metadata.
In this case, it looks like users may have one large avatar and one small avatar, so why not make those columns on the user table?
We have a similar type of table at work that probably started with good intentions, but is now quite the headache to deal with. This is because it now has 100s of different "MetaKeys", and there is no good documentation about what is allowed and what each does. You basically have to look at how each is used in the code and figure it out from there. Thus, figure out how you will document this for future developers before you go down that route.
Also, to retrieve all the information about each user it is no longer a 1-row query, but an n-row query (where n is the number of fields on the user). Then, once you have that data, you have to post-process each of those rows based on your meta-key to get the details about your user (which usually turns out to be more of a development effort because you have to do a bunch of string comparisons). Next, many databases only allow a certain number of rows to be returned from a query, and thus the number of users you can retrieve at once is divided by n. Last, ordering users based on information stored this way will be much more complicated and expensive.
In general, I would say that you should make any fields that have specialized functionality or require ordering to be columns in your table. Since they will require a development effort anyway, you might as well add them as an extra column when you implement them. I would say your avatar pics fall into this category, because you'll probably have one of each, and will always want to display the large one in certain places and the small one in others. However, if you wanted to allow users to make their own fields, this would be a good way to do this, though I would make it another table that can be joined to from the user table. Below are the tables I'd suggest. I assume that "Status" and "Favorite Color" are custom fields entered by user 2:
User:
| Id | Name      | Location   | Website         | avatarLarge | avatarSmall   |
-------------------------------------------------------------------------------
| 2  | iPityDaFu | Dallas, Tx | www.example.com | 124.png     | 124_thumb.png |
UserMeta:
Id | UserId | MetaKey | MetaValue
-----------------------------------------------
1 | 2 | Status | Hungry
2 | 2 | Favorite Color | Blue
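For illustration, a rough sketch of how reads differ between the two layouts above, using the table and column names shown (not a full implementation):

-- Column-based layout: one row, typed columns, no post-processing.
SELECT Location, Website, avatarLarge, avatarSmall
FROM User
WHERE Id = 2;

-- Key/value layout: n rows per user that the application must pivot itself.
SELECT MetaKey, MetaValue
FROM UserMeta
WHERE UserId = 2;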
I'd stick with the original layout. Here are the downsides that jump out at me of replacing your existing table structure with a big table of key-value pairs:
Inefficient storage - since the data stored in the metavalue column is mixed, the column must be declared with the worst-case data type, even if all you would need to hold is a boolean for some keys.
Inefficient searching - should you ever need to do a lookup from the value in the future, the mishmash of data will make indexing a nightmare.
Inefficient reading - reading a single user record now means doing an index scan for multiple rows, instead of pulling a single row.
Inefficient writing - writing out a single user record is now a multi-row process.
Contention - having mixed your user data and avatar data together, you've forced threads that only care about one or the other to operate on the same table, increasing your risk of running into locking problems.
Lack of enforcement - your data constraints have now moved into the business layer. The database can no longer ensure that all users have all the attributes they should, or that those attributes are of the right type, etc.
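To make that last point concrete, a small sketch (hypothetical column definitions): with real columns, the database itself can enforce presence and type, which a generic MetaValue column cannot.

-- Dedicated columns let the database enforce the rules directly:
CREATE TABLE Profiles (
    Id       INT NOT NULL PRIMARY KEY,
    UserId   INT NOT NULL,
    Location VARCHAR(100) NOT NULL,  -- must be present
    Website  VARCHAR(200)            -- optional, but always a bounded string
);

-- A generic MetaValue column has to accept anything, so the database cannot
-- reject a missing 'location' row or a malformed 'website' value.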

What is the best way to handle these MySQL database relationships?

I'm building a small website that lets users recommend their favourite books to each other. So I have two tables, books and groups. A user can have 0 or more books in their library, and a book belongs to 1 or more groups. Currently, my tables look like this:
books table
|---------|------------|---------------|
| book_id | book_title | book_owner_id |
|---------|------------|---------------|
| 22 | something | 12 |
|---------|------------|---------------|
| 23 | something2 | 12 |
|---------|------------|---------------|
groups table
|----------|------------|---------------|---------|
| group_id | group_name | book_owner_id | book_id |
|----------|------------|---------------|---------|
| 231 | random | 12 | 22 |
|----------|------------|---------------|---------|
| 231 | random | 12 | 23 |
|----------|------------|---------------|---------|
As you can see, the relationships between users+books and books+groups are defined in the tables. Should I define the relationships in their own tables instead? Something like this:
books table
|---------|------------|
| book_id | book_title |
|---------|------------|
| 22 | something |
|---------|------------|
| 23 | something2 |
|---------|------------|
books_users_relationsship table
|---------|------------|---------|
| rel_id | user_id | book_id |
|---------|------------|---------|
| 1 | 12 | 22 |
|---------|------------|---------|
| 2 | 12 | 23 |
|---------|------------|---------|
groups table
|----------|------------|
| group_id | group_name |
|----------|------------|
| 231 | random |
|----------|------------|
groups_books_relationsship table
|----------|---------|
| group_id | book_id |
|----------|---------|
| 231 | 22 |
|----------|---------|
| 231 | 23 |
|----------|---------|
Thanks for your time.
The second form with four tables is the correct one. You could delete rel_id from books_users_relationsship, as the primary key can be a composite of user_id and book_id, just like in the groups_books_relationsship table.
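A hedged sketch of what that could look like in MySQL DDL (column types are assumed, and the users table referenced by user_id is not shown):

CREATE TABLE books_users_relationsship (
    user_id INT NOT NULL,
    book_id INT NOT NULL,
    PRIMARY KEY (user_id, book_id),
    FOREIGN KEY (book_id) REFERENCES books (book_id)
);

-- `groups` is quoted because it is a reserved word in newer MySQL versions.
CREATE TABLE groups_books_relationsship (
    group_id INT NOT NULL,
    book_id  INT NOT NULL,
    PRIMARY KEY (group_id, book_id),
    FOREIGN KEY (group_id) REFERENCES `groups` (group_id),
    FOREIGN KEY (book_id)  REFERENCES books (book_id)
);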
You do not need a "relationship table" to support a relationship. In Databases, implementing a Foreign Key in a child table defines the Relation between the parent and the child. You need tables only if they contain data, or to resolve a many-to-many relationship (and that has no data other than the Primary Keys of the parents).
The second problem you are facing, the reason the Relations become complex, and even optional, is due to the first two tables not being Normalised. Many problems ensue from that.
if you look closely at book, you may notice that the same book (title) gets repeated
likewise, there is no differentiation between (a) a book in terms of its existence in the world and (b) a copy of a book, that is owned by a member, and available for borrowing
eg. the review is about an existing book, once, and applies to all copies of a book; not to an owned book.
your "relationship" tables also have data in them, and the data is repeated.
all this repeated data needs to be maintained and kept in synch.
all those problems are eliminated if the data is Normalised.
Therefore (since you are seeking the "best way"), the sequence is to normalise the data first, after which (no surprise) the Relations are easy and not complex, and no data is repeated (in either the tables or the relations).
when Normalising, it is best to model the real world (not the entire real world, but whatever parts of it that you are implementing in the database). That insulates your database from the effects of change, and functional extensions to it in future do not require the existing tables to be changed.
It is also important to use accurate names for tables and columns, for the same reason. group is non-specific and will cause a problem in future when you implement some other form of grouping.
The relations can be now defined at the correct "level", between the correct tables.
The need to stick an Id column on everything that moves severely hinders your ability to understand the data and thus the Normalisation process, and robs the database of Relational power.
Notice that the existing keys are already unique and meaningful, short and efficient; no additional surrogate keys (and their additional indexes) are required.
ReviewerId, OwnerId and BorrowerId are all MemberIds, as Foreign Keys, showing the explicit Role in which they are used.
Note that your problem space is not as simple as you think; it is used as a case study and shipped with tutorials for SQL (eg. MS SQL, Sybase).
Social Library Data Model
Readers who are unfamiliar with the Standard for Modelling Relational Databases may find the IDEF1X Notation useful.
I have provided the structure required to support borrowing, to again illustrate how easy it is to implement Relations on Normalised data, and to show the correct tables upon which borrowing depends (it is not between any book and any person; only owned book can be borrowed).
These issues are very important because they define the Referential Integrity of the database.
It is also important to implement that in the database itself, which is the Standard location (rather than in app code all over the place). Declarative Referential Integrity is part of IEC/ISO/ANSI Standard SQL. And the question has a database design tag.
Referential Integrity cannot be defined or enforced in some databases that do not fully implement the SQL Standard (sometimes it can be defined but it is not enforced, which is confusing). Nevertheless, you can design and implement whatever parts of a database your particular database supports.

On a stats system, should I save little bits of information about a single visit in many tables or just one table?

I've been wondering about this for a while already. The title sums up my question. What do you prefer?
I made a pic to make my question clearer.
Why am I even thinking about this? Isn't one table the most obvious option? Well, kind of. It's the simplest way, but let's think more practically. When there is a ton of data in one table and the user only wants to see statistics about the browsers the visitors use, this may not work as well. Taking the browser data out into its own table is naturally better.
Multiple tables have disadvantages too. Writing data takes more time and resources. With one table there's only one MySQL query needed.
Anyway, I figured out a solution which I think makes sense. Data is written to some kind of temporary table. All of those rows will be exported to multiple tables later (by a scheduled script). This way the system doesn't add loading time to the user's page, but the data remains fast to browse.
Let's bring some discussion here. I'm hoping to raise some opinions.
Which one is better? Let's find out!
The date, browser and OS are all related on a one-to-one basis... Without more information to require distinguishing records further, I'd be creating a single table rather than two.
Database design is based on creating tables that reflect entities, and I don't see two distinct entities in the example provided. Consider using views to serve data without duplicating the data in the database; a centralized copy of the data makes managing the data much easier...
What you're really thinking of is whether to denormalize the table or use the first normal form. When you're using 1NF you have a table that looks like this:
Table statistic
id | date | browser_id | os_id
---------------------------------------------
1 | 127003727 | 1 | 1
2 | 127391662 | 2 | 2
3 | 127912683 | 3 | 2
And then to explain what browser and os the client used, you need other tables:
Table browser
id | name | company | version
-----------------------------------------------
1 | Firefox | Mozilla | 3.6.8
2 | Safari | Apple | 4.0
3 | Firefox | Mozilla | 3.5.1
Table os
id | name | company | version
-----------------------------------------------
1 | Ubuntu | Canonical | 10.04
2 | Windows | Microsoft | 7
3 | Windows | Microsoft | 3.11
As OMG Ponies already pointed out, this isn't a good case for creating several entities, so one can safely go with one table and then think about how he/she is going to deal with having to, say, find all the entries with a matching browser name.
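For example, under the 1NF layout above, finding all the entries with a matching browser name is a simple join (a sketch using the tables shown):

SELECT s.*
FROM statistic s
JOIN browser b ON b.id = s.browser_id
WHERE b.name = 'Firefox';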