Basic question about what the language can do.
I am developing a database to keep track of market trades and provide useful metrics to the user. Most brokers do not supply enough information in the transaction .csv file imported into this database to combine strategies and positions in a useful way, or at least in the way I envision being useful for users. For instance, combining a buy order on AAPL for 1,000 shares that was filled in three separate transactions on the same order (one order for 1,000 shares, filled first with 200, then 350, then 450 shares). The only approach I could think of was assigning a trade group to each of these transactions so that I can group them.
So, in the example above, each transaction would be a separate record. I've created a column in my table with the alias Trade Group, and each of those transactions would be assigned 1 under the Trade Group column. The sale of the 1,000 shares, no matter how many transactions it took to fill the order, would also be assigned to trade group 1.
My query combines the shares for both opening and closing transactions by using the trade group and transaction type (buy or sell). If there is a match, 1,000 shares on the buy side, and 1,000 shares on the sell side, then it runs some queries to provide useful data about the trade.
The problem I foresee is that the trade grouping can become cumbersome, since it currently has to be entered manually. I would like to develop a counter that automatically increments the trade group of the opening and closing transactions every time the balance of shares reaches 0.
So if both the buy and the sell in the above example belonged to trade group 1, and I now decide to open a position of 2,000 shares of AAPL and subsequently sell them, those transactions would automatically be assigned trade group 2. And once the balance of shares is 0 again, the next time I open and close a position in AAPL it will be assigned trade group 3.
That way, I don't need to clutter up my table with something that is entered manually and can create mistakes. Instead, the query assigns the trade grouping every time it is run and supplies the necessary metrics.
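Roughly, I imagine something along these lines, although I'm not sure of the syntax - this sketch assumes MySQL 8+ (for window functions) and a hypothetical trades table with columns symbol, transaction_type, shares and filled_at:

WITH running AS (
    SELECT
        symbol,
        transaction_type,
        shares,
        filled_at,
        -- Running net share balance per symbol: buys add, sells subtract.
        SUM(CASE WHEN transaction_type = 'BUY' THEN shares ELSE -shares END)
            OVER (PARTITION BY symbol ORDER BY filled_at
                  ROWS UNBOUNDED PRECEDING) AS balance
    FROM trades
)
SELECT
    symbol,
    transaction_type,
    shares,
    filled_at,
    -- Trade group = 1 + number of earlier rows where the balance returned to 0.
    1 + COALESCE(SUM(CASE WHEN balance = 0 THEN 1 ELSE 0 END)
            OVER (PARTITION BY symbol ORDER BY filled_at
                  ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS trade_group
FROM running
ORDER BY symbol, filled_at;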
Is this something that can be done using SQL alone?
Thanks.
Let's say I'm trying to create a sample DB schema for cinema reservations.
I have a table "auditorium" with an int capacity column, and I want to be able to create as many 'bookings' records as that capacity allows, but be unable to create a booking once the auditorium is full.
So for example, let's say Auditorium A has 100 seats. Then I want to be able to insert 100 bookings that have an FK to Auditorium A, and when I try to create the 101st booking I should get an error or something like that.
This would normally be handled by a trigger -- in either MySQL or SQL Server.
One method is to have an Auditoria table that has a maximum capacity (or perhaps a maximum per event, if the configuration could change).
The Bookings table would have a trigger. When records are inserted, the number of records for the event would be compared to the capacity for the auditorium, and inserts that exceed the capacity would fail.
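For instance, here is a rough MySQL sketch of such a trigger - the table and column names (auditoria(id, capacity), bookings(id, auditorium_id)) are assumptions, so adjust them to your schema:

DELIMITER //
CREATE TRIGGER bookings_capacity_check
BEFORE INSERT ON bookings
FOR EACH ROW
BEGIN
    DECLARE seats_taken INT;
    DECLARE max_capacity INT;

    -- Count bookings already made for this auditorium.
    SELECT COUNT(*) INTO seats_taken
    FROM bookings
    WHERE auditorium_id = NEW.auditorium_id;

    -- Look up the auditorium's capacity.
    SELECT capacity INTO max_capacity
    FROM auditoria
    WHERE id = NEW.auditorium_id;

    IF seats_taken >= max_capacity THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Auditorium is fully booked';
    END IF;
END//
DELIMITER ;

Note that two concurrent inserts could both pass the check, so you may still want a locking strategy or the seat-per-row design below if overbooking must be strictly impossible.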
If the Auditoria have a fixed capacity and layout, you can also have a table with one row per seat. The booking would then be for a particular seat -- and the capacity issue is automatically taken care of (because no seats would be available).
I'm working on the Product Catalog module of an Invoicing application.
When the user creates a new invoice the product name field should be an autocomplete field which shows the most recently used products from the product catalog.
How can I store this "usage recency/frequency" in the database?
I'm thinking about adding a new field, recency, which would be increased by 1 every time the product is used and decreased by 1/(count of all products) when another product is used, and then using this recency field for ordering, but that doesn't seem like the best solution to me.
Can you tell me what the best practice is for this kind of problem?
Solution for the recency calculation:
Create a new column in the products table, named last_used_on for example. Its data type should be TIMESTAMP (MySQL's Unix-time based date-and-time type). A small usage sketch follows the list of advantages below.
Advantages:
A timestamp contains both a date part and a time part.
It makes possible VERY precise calculations and comparisons with regard to dates and times.
It lets you format the saved values in the date-time format of your choice.
You can convert from any date-time format into a timestamp.
In regard to your autocomplete field, it allows you to filter the products list as you wish. For example, to display all products used since [date-time], to fetch all products used between [date-time-1] and [date-time-2], or to get the products used only on Mondays, at 1:37:12 PM, in the last two years, two months and three days (that's how flexible timestamps are).
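To make this concrete, a minimal sketch of the recency approach, assuming MySQL and a products(id, name, last_used_on) table (the product id and search text are placeholders):

-- Touch the product every time it appears on an invoice line:
UPDATE products
SET last_used_on = CURRENT_TIMESTAMP
WHERE id = 42;

-- Feed the autocomplete with the most recently used matches:
SELECT id, name
FROM products
WHERE name LIKE CONCAT('milk', '%')   -- 'milk' stands for the user's typed prefix
ORDER BY last_used_on DESC
LIMIT 10;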
Resources:
Unix-Time
The DATE, DATETIME, and TIMESTAMP Types
How should unix timestamps be stored in int columns?
How to convert human date to unix timestamp in Mysql?
Solution for the usage rate calculation:
Well, actually, you are not speaking about a frequency calculation, but about a rate - even though one can argue that frequency is a rate, too.
Frequency implies using the time as the reference unit and it's measured in Hertz (Hz = [1/second]). For example, let's say you want to query how many times a product was used in the last year.
A rate, on the other hand, is a comparison, a relation between two related units. Like for example the exchange rate USD/EUR - they are both currencies. If the comparison takes place between two terms of the same type, then the result is a number without measurement units: a percentage. Like: 50 apples / 273 apples = 0.1832 = 18.32%
That said, I suppose you tried to calculate the usage rate: the number of usages of a product in relation with the number of usages of all products. Like, for a product: usage rate of the product = 17 usages of the product / 112 total usages = 0.1517... = 15.17%. And in the autocomplete you'd want to display the products with a usage rate bigger than a given percentage (like 9% for example).
This is easy to implement. In the products table add a column usages of type int or bigint and simply increment its value each time a product is used. And then, when you want to fetch the most used products, just apply a filter like in this sql statement:
SELECT
    id,
    name,
    (usages * 100) / (SELECT SUM(usages) FROM products) AS usage_rate
FROM products
GROUP BY id, name
HAVING usage_rate > 9
ORDER BY usage_rate DESC;
In the end, recency, frequency and rate are three different things.
Good luck.
To allow for future flexibility, I'd suggest the following additional (*) table to store the entire history of product usage by all users:
Name: product_usage
Columns:
id - internal surrogate auto-incrementing primary key
product_id (int) - foreign key to product identifier
user_id (int) - foreign key to user identifier
timestamp (datetime) - date/time the product was used
This would allow the query to be fine-tuned as necessary. E.g. you may decide to only order by past usage for the logged-in user. Or perhaps total usage within a particular timeframe would be more relevant. Such a table may also have a dual purpose of auditing - e.g. to report on the most popular or unpopular products amongst all users.
(*) assuming something similar doesn't already exist in your database schema
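A minimal MySQL sketch of that table and a typical query against it - note the date/time column is named used_at here rather than timestamp to avoid the keyword, and the foreign keys assume existing products(id) and users(id) tables:

CREATE TABLE product_usage (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    product_id INT NOT NULL,
    user_id    INT NOT NULL,
    used_at    DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (product_id) REFERENCES products (id),
    FOREIGN KEY (user_id)    REFERENCES users (id)
);

-- Example: the logged-in user's ten most recently used products (user id 7 is a placeholder).
SELECT p.id, p.name
FROM products p
JOIN product_usage pu ON pu.product_id = p.id
WHERE pu.user_id = 7
GROUP BY p.id, p.name
ORDER BY MAX(pu.used_at) DESC
LIMIT 10;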
Your problem is related to many other web-scale search applications, such as showing spell corrections, related searches, or "trending" topics. You recognized correctly that both recency and frequency are important criteria in determining "popular" suggestions. In practice, it is desirable to compromise between the two: recency alone will suffer from random fluctuations; but you also don't want to use only frequency, since some products might have been purchased a lot in the past while their popularity is declining (or they might have gone out of stock or been replaced by successor models).
A very simple but effective implementation that is typically used in these scenarios is exponential smoothing. First of all, most of the time it suffices to update popularities at fixed intervals (say, once each day). Set a decay parameter α (say, .95) that tells you how much yesterday's orders count compared to today's. Similarly, orders from two days ago will be worth α*α~.9 times as today's, and so on. To estimate this parameter, note that the value decays to one half after log(.5)/log(α) days (about 14 days for α=.95).
The implementation only requires a single additional field per product, orders_decayed. Then, all you have to do is to update this value each night with the total daily orders:
orders_decayed = α * orders_decayed + (1 - α) * orders_today.
You can sort your applicable suggestions according to this value.
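A sketch of that nightly update in MySQL, assuming a products(id, orders_decayed) column as described and a hypothetical invoice_items(product_id, quantity, created_at) table, with α = 0.95; run shortly after midnight, so the previous calendar day's orders play the role of orders_today:

UPDATE products p
LEFT JOIN (
    -- Orders booked during the previous calendar day.
    SELECT product_id, SUM(quantity) AS orders_yesterday
    FROM invoice_items
    WHERE created_at >= CURRENT_DATE - INTERVAL 1 DAY
      AND created_at <  CURRENT_DATE
    GROUP BY product_id
) t ON t.product_id = p.id
SET p.orders_decayed = 0.95 * p.orders_decayed
                     + 0.05 * COALESCE(t.orders_yesterday, 0);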
To have an individual user experience, you should not rely on a field in the product table, but rather on the history of the user.
The occurrences of the product in past invoices created by the user would be a good starting point. The advantage is that you don't need to add fields or tables for this functionality. You simply rely on data that is already present anyway.
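For example, something along these lines, assuming MySQL and hypothetical invoices(id, user_id) and invoice_items(invoice_id, product_id) tables (the user id and search text are placeholders):

SELECT p.id, p.name, COUNT(*) AS times_used
FROM products p
JOIN invoice_items ii ON ii.product_id = p.id
JOIN invoices i       ON i.id = ii.invoice_id
WHERE i.user_id = 7
  AND p.name LIKE CONCAT('milk', '%')
GROUP BY p.id, p.name
ORDER BY times_used DESC
LIMIT 10;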
Since it is an auto-complete field, maybe past usage is not really relevant. Display n search results as the user types. If you feel that results are better if you include recency in the calculation of the order, go with it.
Now, the implementation may differ depending on how and when products should be displayed, and on whether it has to be user-specific usage frequency or application-wide (overall). But in both cases, I would suggest having a history table, which you can later use for other analysis as well.
You could design your history table with at least the columns below:
Id | ProductId | LastUsed (timestamp) | UserId
Now you can create a view that queries this table for a specific time range (something like product frequency over the last week, last month or last year) and gives you the most-used products for that range.
The same approach can be used for user-specific frequency by adding a condition to filter by UserId.
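As an illustration, a rough MySQL sketch of such a view, assuming the history table is called product_history with the columns listed above; the one-month window is just an example:

CREATE VIEW frequently_used_products AS
SELECT
    ProductId,
    COUNT(*) AS usage_count
FROM product_history
WHERE LastUsed >= NOW() - INTERVAL 1 MONTH
GROUP BY ProductId;

-- Query the view, most used first:
SELECT ProductId, usage_count
FROM frequently_used_products
ORDER BY usage_count DESC
LIMIT 10;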
I'm thinking about adding a new field, recency, which would be increased by 1 every time the product is used and decreased by 1/(count of all products) when another product is used, and then using this recency field for ordering, but that doesn't seem like the best solution to me.
Yes, it is not good practice to add a column for this and update it every time. Imagine this is a highly anticipated product and people love to buy it. Now 1,000 people, or maybe more, request this product at the same time, and for every request you would be updating the same row; to maintain concurrency, the database has to lock that specific row and update it for each request, which is definitely going to hurt your database and application performance. Instead, you can simply insert a new row.
The other possible solution is to use your existing invoice table, since it will certainly have all the product- and user-specific information, and create a view to get the frequently used products, as I mentioned above.
Please note that this is another option for achieving what you are expecting, but I would personally recommend having a history table instead.
The scenario
When the user creates a new invoice the product name field should be an autocomplete field which shows the most recently used products from the product catalogue.
Your suggested solution
How can I store this "usage recency/frequency" in the database?
If it is a web application, don't store it in a database on your server; each user has different choices.
Store it in the user's browser, as a cookie or in localStorage, because that will improve the user experience.
If you still want to store it in a MySQL table,
Do the following
Create a column recency as described in the question.
Each time the item is used, increase the count by 1, as described in the question (see the sketch after these steps).
Don't decrease it when other items get used.
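A minimal sketch of that counter update in MySQL, assuming the products table from earlier in the thread (product id 42 is a placeholder):

UPDATE products SET recency = recency + 1 WHERE id = 42;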
To get the most used item,
query
SELECT * FROM products WHERE recency = (SELECT MAX(recency) FROM products);
Side note
Go for the database approach only if you want to show the most used products regardless of the user.
As you aren't certain which measure to choose, and it's rather a user-experience-related problem, I advise providing a number of measures and giving the user an option to choose the one he/she prefers. For example, the set of available measures could include the most popular products of the last week, the last month, the last 3 months, the last year, and overall. For the sake of performance, I'd prefer to store those statistics in a separate table that is refreshed by a scheduled job running, for example, every 3 hours.
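As an illustration, a pre-aggregated statistics table refreshed by the MySQL event scheduler might look roughly like this - the names are hypothetical, it assumes a usage-history table such as product_usage(product_id, used_at), and the event scheduler must be enabled (event_scheduler = ON):

CREATE TABLE product_usage_stats (
    product_id  INT NOT NULL,
    period      VARCHAR(20) NOT NULL,   -- e.g. 'last_week', 'last_month'
    usage_count INT NOT NULL,
    PRIMARY KEY (product_id, period)
);

CREATE EVENT refresh_product_usage_stats
ON SCHEDULE EVERY 3 HOUR
DO
    REPLACE INTO product_usage_stats (product_id, period, usage_count)
    SELECT product_id, 'last_week', COUNT(*)
    FROM product_usage
    WHERE used_at >= NOW() - INTERVAL 7 DAY
    GROUP BY product_id;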
I have users who earn points by taking part in various activities on the website, and the users can then spend these points on whatever they like. The way I have it set up at the minute is that I have two tables -
tbl_users_achievements and tbl_users_purchased_items
I have these two tables to track what the users have done and what they have bought (Obviously!)
But instead of having a column in my users table called 'user_points', I have decided to display their points by doing a SELECT on all achievements and getting a sum of the points they have earned; I then do another SELECT to work out how many points they have spent.
I thought it might have been better to have a column to store their points and, when they buy or win something, do an UPDATE on that column for the user. But that seemed like multiple areas to manage: I would have to insert a new row for the transaction and then update their column, whereas if I use a query to work out their total won minus spent, I only have to insert the row and do no update. The problem then becomes the performance of running the query and doing the calculation.
So which solution would you go with and why?
Have a column to store their points and do an update
Use a query to work out the users points they can spend and have no column
Your current model is logically the right one - a key aspect of RDBMS normalization is not repeating any information, and keeping an explicit "this customer has x points" column repeats data.
The benefits of this are obvious - you have less data manipulation code to write, and don't have to worry about what happens when you insert the transaction but can't update the users table.
The downsides are that you're running additional queries every time you show the customer profile; this can create a performance problem. The traditional response to that performance problem is to de-normalize, for instance by keeping a calculated total against the user table.
Only do that if that's absolutely, provably necessary.
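For reference, computing the balance on the fly is a single query. A minimal sketch, assuming MySQL, a points column on tbl_users_achievements and a cost column on tbl_users_purchased_items (both column names and the user id are assumptions):

SELECT
    COALESCE((SELECT SUM(points) FROM tbl_users_achievements    WHERE user_id = 7), 0)
  - COALESCE((SELECT SUM(cost)   FROM tbl_users_purchased_items WHERE user_id = 7), 0)
      AS points_balance;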
Myself, I would put the user points into a separate table keyed by user ID (or whatever), store them there, and do updates to increment or decrement the value as achievements are attained or points are spent.
I apologize if this has been asked before, but I'm pretty new to this and unable to find an answer that addresses the situation I'm faced with.
I'm trying to put together a database to run behind our company website. The database will store information on customer invoices and payments. I'm trying to figure out whether I should create a field for the invoice balance, or just have it calculated whenever the customer account is accessed. I don't want to create redundant data, and I don't want to risk the field somehow not being updated and therefore being incorrect - but I also don't want to place too large a burden on the server, especially when we pull up an overview of customer accounts, which would then need to calculate the balance of every account. Right now we are starting from scratch, so I want to set it up right!
We are anticipating having a couple hundred customer accounts by the end of the year, but will most likely be up to a couple thousand by the end of next year. (Average number of invoices per customer would be roughly 2-3 per year.)
There are probably other things to consider as well. For example, what if your invoice consists of IDs of products in another table, and the prices of those products change? When you go to generate the invoice, you'll have the wrong total for what the customer actually paid 6 months ago. So in a situation like that, you'll probably want to store the total on the invoice. And I wouldn't worry too much about doing a little math if you go the other route; it's not likely to be a huge bottleneck.
Yes, remember that items/goods could and will change their prices over time. You need to have the invoice balance as of the day of the purchase. Calculating the balance on the fly could lead to wrong balances later on.
Invoice balance is essential data to store, however I think you meant account balance since you referred to that later.
Storing the account balance would be denormalizing it, and that's not how accounting databases are typically designed. Always calculate account balance from invoices minus payments. Denormalizing is almost always a bad idea, and if you need to optimize in the future, there are other places to cache data that are more efficient than the database.
In your use case, a query like that on a few thousand rows would be negligible anyway, so don't optimize before you have to.
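For what it's worth, the invoices-minus-payments calculation for an account overview is a single query; a rough sketch, assuming MySQL and hypothetical customers(id, name), invoices(customer_id, total) and payments(customer_id, amount) tables:

SELECT
    c.id,
    c.name,
    COALESCE(SUM(i.total), 0) - COALESCE(p.paid, 0) AS balance
FROM customers c
LEFT JOIN invoices i ON i.customer_id = c.id
LEFT JOIN (
    -- Pre-aggregate payments per customer to avoid double counting.
    SELECT customer_id, SUM(amount) AS paid
    FROM payments
    GROUP BY customer_id
) p ON p.customer_id = c.id
GROUP BY c.id, c.name, p.paid;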