how to use multiple tables without duplication in tableau - mysql

I'm having trouble understanding how this should work... Basically I have 2 main tables: one holds Revenues, the other Costs.
The Revenues table has these fields: P&L (string), Category (string), Products (string), Sold (int), Invoiced (int), Delivered (int), Date (date).
The Costs table has: P&L (string), Category (string), Products (string), Costs (int), Date (date).
I'd like to use the tables together to perform various calculations like margin, at any level: total margin (total revenues - total costs), or at Category level, where I should be able to filter any category I have and perform the calculation, and so on.
The problem is, every attempt I've made to use relationships or joins has resulted in duplication.
The only workaround I've found so far is to leave the Revenues table as it is and create several Costs tables, basically one per field (table1 with Category, Costs and date; table2 with Products, Costs and date; etc.). Joining Revenues with one of these tables seems to work, but this way I'm not able to create a wider view (one goal is to make a big table in the viz where we could read all the data at once). Another problem that appeared with this workaround: if I want to split costs by date but use the date column from the Revenues table, Tableau doesn't recognize the date correctly even though the dates are identical (I basically did a copy/paste between tables). So to split costs I have to use the Costs table's date column, and to split revenues the Revenues table's date column, which is frankly a pain...
So my question: how could I merge the 2 tables into one, or at least put all the data together in a single working table so I can perform any kind of calculation, and how could I use just one date column that works for all the data?
I've uploaded a file here to help show what I'm trying to combine. Thank you guys
Data file
PS: it seems that Tableau uses SQL behind the scenes for these tasks, so someone skilled with this kind of problem in SQL could probably help as well; that's why I've tagged sql too, thanks

You need to UNION those 2 tables together. But are they really in Google, or did you just do that to demo it here?
If you're using Excel, both Revenue & Cost must be different sheets in the same XLS file.
If you're using CSV, both Revenue & Cost must be different files (hopefully in the same folder).
I would really hope that you're using a database (some form of SQL), but with either of the above options, UNION the data and it will work the way you expect :)
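A minimal sketch of that union, using the table and column names from the question; the zero "filler" columns are my own convention so both sides line up:

SELECT `P&L`, Category, Products, `Date`,
       Sold, Invoiced, Delivered,
       0 AS Costs
FROM Revenues
UNION ALL
SELECT `P&L`, Category, Products, `Date`,
       0, 0, 0,
       Costs
FROM Costs;

With the rows stacked like this there is a single date column for everything, and margin at any level is just the SUM of whichever revenue measure applies minus SUM(Costs), sliced by P&L, Category or Products as needed.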

Related

How to store recent usage frequency in MySQL

I'm working on the Product Catalog module of an Invoicing application.
When the user creates a new invoice the product name field should be an autocomplete field which shows the most recently used products from the product catalog.
How can I store this "usage recency/frequency" in the database?
I'm thinking about adding a new field, recency, which would be increased by 1 every time the product is used, and decreased by 1/(count of all products) when another product is used. Then I'd use this recency field for ordering, but it doesn't seem like the best solution to me.
Can you tell me what the best practice is for this kind of problem?
Solution for the recency calculation:
Create a new column in the products table, named last_used_on for example. Its data type should be TIMESTAMP (MySQL's representation of Unix time).
Advantages:
Timestamps contain both date and time parts.
They make possible very precise calculations and comparisons of dates and times.
They let you format the saved values in the date-time format of your choice.
You can convert from any date-time format into a timestamp.
In regard to your autocomplete fields, a timestamp lets you filter the products list as you wish: for example, to display all products used since [date-time], to fetch all products used between [date-time-1] and [date-time-2], or to get the products used only on Mondays, at 1:37:12 PM, in the last two years, two months and three days (that's how flexible timestamps are).
Resources:
Unix-Time
The DATE, DATETIME, and TIMESTAMP Types
How should unix timestamps be stored in int columns?
How to convert human date to unix timestamp in Mysql?
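A minimal sketch of this solution; the `products` table and its `id`/`name` columns are assumptions:

ALTER TABLE products
    ADD COLUMN last_used_on TIMESTAMP NULL DEFAULT NULL;

-- touch the timestamp each time a product is used
UPDATE products SET last_used_on = NOW() WHERE id = ?;

-- autocomplete: matching products, most recently used first
SELECT id, name
FROM products
WHERE name LIKE CONCAT(?, '%')
ORDER BY last_used_on DESC
LIMIT 10;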
Solution for the usage rate calculation:
Well, actually, you are not speaking about a frequency calculation, but about a rate - even though one can argue that frequency is a rate, too.
Frequency implies using the time as the reference unit and it's measured in Hertz (Hz = [1/second]). For example, let's say you want to query how many times a product was used in the last year.
A rate, on the other hand, is a comparison, a relation between two related units. Like for example the exchange rate USD/EUR - they are both currencies. If the comparison takes place between two terms of the same type, then the result is a number without measurement units: a percentage. Like: 50 apples / 273 apples = 0.1832 = 18.32%
That said, I suppose you tried to calculate the usage rate: the number of usages of a product in relation with the number of usages of all products. Like, for a product: usage rate of the product = 17 usages of the product / 112 total usages = 0.1517... = 15.17%. And in the autocomplete you'd want to display the products with a usage rate bigger than a given percentage (like 9% for example).
This is easy to implement. In the products table add a column usages of type int or bigint and simply increment its value each time a product is used. And then, when you want to fetch the most used products, just apply a filter like in this sql statement:
SELECT
    id,
    name,
    (usages * 100) / (SELECT SUM(usages) FROM products) AS usage_rate
FROM products
GROUP BY id
HAVING usage_rate > 9
ORDER BY usage_rate DESC;
In the end, recency, frequency and rate are three different things.
Good luck.
To allow for future flexibility, I'd suggest the following additional (*) table to store the entire history of product usage by all users:
Name: product_usage
Columns:
id - internal surrogate auto-incrementing primary key
product_id (int) - foreign key to product identifier
user_id (int) - foreign key to user identifier
timestamp (datetime) - date/time the product was used
This would allow the query to be fine-tuned as necessary. E.g. you may decide to only order by past usage for the logged-in user. Or perhaps total usage within a particular timeframe would be more relevant. Such a table may also have a dual purpose of auditing - e.g. to report on the most popular or unpopular products amongst all users.
(*) assuming something similar doesn't already exist in your database schema
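A sketch of that table; the index choices are my own assumptions:

CREATE TABLE product_usage (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    product_id  INT NOT NULL,
    user_id     INT NOT NULL,
    `timestamp` DATETIME NOT NULL,
    KEY idx_product (product_id),
    KEY idx_user_time (user_id, `timestamp`)
);

-- e.g. the logged-in user's most recently used products
SELECT product_id, MAX(`timestamp`) AS last_used
FROM product_usage
WHERE user_id = ?
GROUP BY product_id
ORDER BY last_used DESC
LIMIT 10;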
Your problem is related to many other web-scale search applications, such as e.g. showing spell corrections, related searches, or "trending" topics. You recognized correctly that both recency and frequency are important criteria in determining "popular" suggestions. In practice, it is desirable to compromise between the two: Recency alone will suffer from random fluctuations; but you also don't want to use only frequency, since some products might have been purchased a lot in the past, but their popularity is declining (or they might have gone out of stock or replaced by successor models).
A very simple but effective implementation that is typically used in these scenarios is exponential smoothing. First of all, most of the time it suffices to update popularities at fixed intervals (say, once each day). Set a decay parameter α (say, 0.95) that tells you how much yesterday's orders count compared to today's. Similarly, orders from two days ago will be worth α·α ≈ 0.9 times as much as today's, and so on. To estimate this parameter, note that the value decays to one half after log(0.5)/log(α) days (about 14 days for α = 0.95).
The implementation only requires a single additional field per product, orders_decayed. Then, all you have to do is update this value each night with the total daily orders:
orders_decayed = α * orders_decayed + (1-α) * orders_today.
You can sort your applicable suggestions according to this value.
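A sketch of the nightly job in SQL, with α = 0.95 as above; the `order_lines` table and its columns are assumptions standing in for wherever daily order counts live:

UPDATE products p
LEFT JOIN (
    SELECT product_id, COUNT(*) AS orders_today
    FROM order_lines
    WHERE DATE(ordered_at) = CURDATE()
    GROUP BY product_id
) t ON t.product_id = p.id
SET p.orders_decayed = 0.95 * p.orders_decayed
                     + 0.05 * COALESCE(t.orders_today, 0);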
To have an individual user experience, you should not rely on a field in the product table, but rather on the history of the user.
The occurrences of the product in past invoices created by the user would be a good starting point. The advantage is that you don't need to add fields or tables for this functionality. You simply rely on data that is already present anyway.
Since it is an autocomplete field, maybe past usage is not really relevant. Display n search results as the user types. If you feel the results are better when you include recency in the ordering, go with it.
Now, the implementation may differ depending on how and when products should be displayed, and on whether it has to be user-specific usage frequency or application-wide (overall). In both cases, I would suggest having a history table, which you can later use for other analysis as well.
You could design your history table with at least the columns below:
Id | ProductId | LastUsed (timestamp) | UserId
And now you can create a view which queries this table for a specific time range (something like product frequency over the last week, last month or last year) and gives you the highest-used products for that range; see the sketch below.
The same can be used for user-specific frequency by adding a condition to filter by UserId.
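A sketch of such a view; the `history` table name is an assumption matching the columns above:

CREATE VIEW frequent_products_last_month AS
SELECT ProductId, COUNT(*) AS usage_count
FROM history
WHERE LastUsed >= NOW() - INTERVAL 1 MONTH
GROUP BY ProductId;

-- highest-used products in the last month
SELECT * FROM frequent_products_last_month
ORDER BY usage_count DESC
LIMIT 10;

For the user-specific variant, add a UserId filter to the view's underlying query.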
I'm thinking about adding a new field, recency, which would be increased by 1 every time the product is used, and decreased by 1/(count of all products) when another product is used. Then I'd use this recency field for ordering, but it doesn't seem like the best solution to me.
Yes, it is not good practice to add a column for this and update it on every use. Imagine this product is highly anticipated and people love to buy it. Now 1000 people (or maybe more) request this product at the same time, and for every request you would update the same row. To keep the updates consistent, the database has to lock that specific row for each request, which is definitely going to hurt your database and application performance. Instead, you can simply insert a new row.
The other possible solution is to use your existing invoice table, as it will definitely have all the product- and user-specific information, and create a view to get frequently used products as I mentioned above.
Please note that this is another option to achieve what you are expecting, but I would personally recommend the history table instead.
The scenario
When the user creates a new invoice the product name field should be an autocomplete field which shows the most recently used products from the product catalogue.
your suggested solution
How can I store this "usage recency/frequency" in the database?
If it is a web application, don't store it in a database on your server; each user has different choices.
Store it in the user's browser as a cookie or in localStorage, because it will improve the user experience.
If you still want to store it in MySQL table,
Do the following
Create a column recency as said in the question.
Each time the item is used, increase the count by 1 as said in the question.
Don't decrease it when other items are used.
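A minimal sketch of those steps, assuming the catalog lives in a `products` table:

ALTER TABLE products ADD COLUMN recency INT NOT NULL DEFAULT 0;

-- each time the item is used:
UPDATE products SET recency = recency + 1 WHERE id = ?;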
To get the most used item, query:
SELECT * FROM products WHERE recency = (SELECT MAX(recency) FROM products);
Side note
Go for the database option only if you want to show the most used products independent of the user.
As you aren't certain which measure to choose, and it's rather a user-experience-related problem, I advise you to compute a number of measures and give the user an option to choose the one he/she prefers. For example, the set of available measures could include most popular product last week, last month, last 3 months, last year, and overall total. For the sake of performance I'd prefer to store those statistics in a separate table which is refreshed by a scheduled job running every 3 hours, for example; see the sketch below.
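A sketch of that statistics table and the refresh the scheduled job would run; the names, and the `history` table as the usage source, are assumptions:

CREATE TABLE product_popularity (
    product_id  INT NOT NULL,
    period      VARCHAR(16) NOT NULL,   -- 'week', 'month', '3months', ...
    usage_count INT NOT NULL,
    PRIMARY KEY (product_id, period)
);

-- refresh one period; run from cron or a MySQL EVENT every 3 hours
-- (REPLACE leaves stale rows for products with no recent usage; a real
-- job would delete the period's rows first)
REPLACE INTO product_popularity (product_id, period, usage_count)
SELECT ProductId, 'week', COUNT(*)
FROM history
WHERE LastUsed >= NOW() - INTERVAL 7 DAY
GROUP BY ProductId;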

Efficient cross-table mySQL queries in Visual Basic

I am currently working on a report-generating application (in Visual Basic) for a pre-existing database (in mySQL), in which I need to submit queries across multiple tables to access all of the information needed; however, my experience with this sort of project is limited. I am hoping someone can point me toward an efficient method of achieving this.
In the initial report, I need data from 3 tables.
Table 1) 'invoice' - table structure contains a date, an invoice number, and a customer number (and other non-pertinent columns)
Table 2) 'transaction_data' - table structure contains an invoice number, a billing code, and an item description (and other non-pertinent columns). Each row of the table contains a single line-item from a single invoice (so there can be several rows in this table containing the same invoice number).
Table 3) 'customers' - table structure contains customer number, name, address, phone (and other non-pertinent columns).
I need to be able to search 'invoice' based on dates, to get a list of all invoice numbers within the specified time frame (and their corresponding customer number). I then need to take that list of invoice numbers, and search 'transaction_data' for each row that contains one of the invoice numbers, and check for a specific billing code. If the billing code does not exist, I need to use the customer number (obtained during the invoice search) to put together a list of invoice number, customer name, address, phone number.
This can be accomplished fairly easily by populating an array variable utilizing for/while loops, but will require multiple queries across separate tables... of which 'invoice' and 'customers' have 20k+ entries and 'transaction_data' table has over 100k entries. Surely, this is not the most efficient manner of compiling said data.
Can someone please direct me as to how the query SHOULD be structured efficiently? Thanks in advance for helping a database noobie!
Look up JOINs in the MySQL manual. You would join the three tables together and, with a WHERE clause, get the specific rows that match the criteria you are searching for. No loops needed; see the sketch below.
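As a sketch, the whole report can be one query; the date and billing-code column names here are guesses at the actual schema:

SELECT i.invoice_number, c.name, c.address, c.phone
FROM invoice i
JOIN customers c ON c.customer_number = i.customer_number
WHERE i.invoice_date BETWEEN ? AND ?
  AND NOT EXISTS (
      SELECT 1
      FROM transaction_data t
      WHERE t.invoice_number = i.invoice_number
        AND t.billing_code = ?   -- the specific billing code
  );

The NOT EXISTS subquery handles the "billing code does not exist on the invoice" condition, and MySQL resolves everything in one pass instead of thousands of round-trips from Visual Basic.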

Time Dimension in Data Warehouse

I have a fact table that stores multiple date fields in its rows. I would like to keep the design flexible and link all of these fields with the time dimension. However, the problem is that my reports end up having too many joins in their queries (one for each date field). How do I mitigate this problem?
I have one idea of storing both the time dimension references (fast searching) and date fields (efficient retrieval). What would be the possible problems in doing so ?
Generalizing this idea, should we do it for other fields in the fact table as well ?
The table structure
acc_num | acc_approved_date| acc_rejected_date| file_gen_date
Proposed changes while linking to the date dimension
acc_num | acc_approved_date_id| acc_rejected_date_id| file_gen_date_id
However, this creates the problem of too many joins to the date dimension table when creating reports that capture all of these dates. I'm proposing a hybrid of the two, where I store both the dates and the ids for these fields.
You'd only have joins to the date dimension table if you wanted to find out something about the date (the name of the month and year, for example) or wanted to filter on the date.
Doing it with multiple date keys is the correct approach: for every dimension you want to filter by or include in your query results, you need a join.
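As a sketch of how those joins look, the single date dimension is simply aliased once per role, and only joined when its attributes are needed; table and column names here are assumptions:

SELECT f.acc_num,
       d_app.month_name AS approved_month,
       d_rej.month_name AS rejected_month,
       d_gen.month_name AS file_gen_month
FROM account_fact f
JOIN date_dim d_app ON d_app.date_id = f.acc_approved_date_id
JOIN date_dim d_rej ON d_rej.date_id = f.acc_rejected_date_id
JOIN date_dim d_gen ON d_gen.date_id = f.file_gen_date_id;

(Use LEFT JOIN where a date can legitimately be absent, e.g. an account that was never rejected.)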

Is it typical to have the same value for FKs in my fact table across all FK columns?

I'm new to multidimensional data warehousing and have been tasked by my workplace with developing a data warehousing solution for reporting purposes, so this might be a stupid question, but here it goes...
Each record in my fact table have FK columns that link out to their respective dimension tables (ex. dimCustomer, dimGeography, dimProduct).
When loading the data warehouse during the ETL process, I first loaded up the dimension tables with the details, then I loaded the fact table and did lookup transformations to find the FK values to put in the fact table. In doing so, it seems each row in the fact table has FKs of the same value (e.g. row 1 has an FK of 1 in each column across the board, row 2 has value 2, etc.).
I'm just wondering if this is typical or if I need to rethink the design of the warehouse and ETL process.
Any suggestions would be greatly appreciated.
Thanks
Based on your comments, it sounds like there's a missed step in your ETL process.
For a call center / contact center, I might start out with a fact table like this:
CallFactID - unique key just for ETL purposes only
AssociateID - call center associate who initially took the call
ProductID - product that the user is calling about
CallTypeID - General, Complaint, Misc, etc
ClientID - company / individual that is calling
CallDateID - linked to your Date (by day) Dimension
CallTimeOfDayID - bucketed id for call time based on business rules
CallStartTimestamp - ANSI timestamp of start time
CallEndTimestamp - ANSI timestamp of end time
CallDurationTimestamp - INTERVAL data type, or integer in seconds, call duration
Your dimension tables would then be:
AssociateDim
ProductDim
CallTypeDim
ClientDim
DateDim
TimeOfDayDim
Your ETL will need to build the dimensions first. If you have a relational model in your source system, you would typically just go to the "lookup" tables for various things, such as the "Products" table or "Associates" table, and denormalize any relationships that make sense to be included as attributes. For example, a relational product table might look like:
PRODUCTS: ProductKey,
ProductName,
ProductTypeKey,
ProductManufacturerKey,
SKU,
UPC
You'd denormalize this into a general product dimension by looking up the product types and manufacturer to end up with something like:
PRODUCTDIM: PRODUCTID (DW surrogate key),
ProductKey,
ProductName,
ProductTypeDesc,
ManufacturerDesc,
ManufacturerCountry,
SKU,
UPC
For attributes that are only on your transaction (call record) tables but are low cardinality, you can create dimensions by doing SELECT DISTINCT on these tables.
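For example, a minimal sketch of deriving such a dimension; the `call_records` source table and its `call_type` column are assumptions:

INSERT INTO CallTypeDim (CallTypeDesc)
SELECT DISTINCT call_type
FROM call_records
WHERE call_type IS NOT NULL;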
Once you have loaded all the dimensions, you then load the fact by doing a lookup against each of the dimensions based on the natural keys (which you've preserved in the dimension), and then assign that key to the fact row.
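A sketch of that lookup step, assuming a staging table that still carries the natural keys:

INSERT INTO CallFact (AssociateID, ProductID, CallTypeID, ClientID, CallDateID)
SELECT a.AssociateID,
       p.ProductID,
       ct.CallTypeID,
       cl.ClientID,
       d.DateID
FROM staging_calls s
JOIN AssociateDim a  ON a.AssociateKey  = s.associate_key
JOIN ProductDim   p  ON p.ProductKey    = s.product_key
JOIN CallTypeDim  ct ON ct.CallTypeDesc = s.call_type
JOIN ClientDim    cl ON cl.ClientKey    = s.client_key
JOIN DateDim      d  ON d.CalendarDate  = DATE(s.call_start_timestamp);

A row that fails one of these joins (an "early-arriving fact") silently drops out here; production ETL usually routes such rows to an error table or inserts a placeholder dimension member instead.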
For a more detailed guide on ETL with DW Star Schemas, I highly recommend Ralph Kimball's book The Data Warehouse ETL Toolkit.

MySQL Database Design Questions

I am currently working on a web service that stores and displays money currency data.
I have two MySQL tables, CurrencyTable and CurrencyValueTable.
The CurrencyTable holds the names of the currencies as well as their description and so forth, like so:
CREATE TABLE CurrencyTable ( name VARCHAR(20), description TEXT, .... );
The CurrencyValueTable holds the values of the currencies during the day - a new value is inserted every 2 minutes when the market is open. The table looks like this:
CREATE TABLE CurrencyValueTable ( currency_name VARCHAR(20), value FLOAT, `datetime` DATETIME, ....);
I have two questions regarding this design:
1) I have more than 200 currencies. Is it better to have a separate CurrencyValueTable for each currency or hold them all in one table?
2) I need to be able to show the current (latest) value of the currency. Is it better to just insert such a field to the CurrencyTable and update it every two minutes or is it better to use a statement like:
SELECT value FROM CurrencyValueTable ORDER BY `datetime` DESC LIMIT 1
The second option seems slower... I am leaning towards the first one (which is also easier to implement).
Any input would be greatly appreciated!!
p.s. - please ignore SQL syntax / other errors, I typed it off the top of my head..
Thanks!
To your questions:
I would use one table. Especially if you need to report on or compare data from multiple currencies, that will be far easier with everything in one table.
If you don't have a need to track the history of each currency's value, then go ahead and just update a single value -- but in that case, why even have a separate table? You can just add "latest value" as a field in the currency table and update it there. If you do need to track history, then you will need the two tables and the SQL you posted will work.
As an aside, instead of FLOAT I would use DECIMAL(10,2). As of MySQL 5.0, DECIMAL is stored as an exact fixed-point value, which avoids the rounding problems FLOAT causes with currency.
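As a sketch, the value table with an exact type and a composite primary key that makes the latest-value lookup cheap; names follow the question, and you may need a wider scale for your rates:

CREATE TABLE CurrencyValueTable (
    currency_name VARCHAR(20)   NOT NULL,
    value         DECIMAL(10,2) NOT NULL,
    `datetime`    DATETIME      NOT NULL,
    PRIMARY KEY (currency_name, `datetime`)
);

-- latest value for one currency, served straight from the index
SELECT value
FROM CurrencyValueTable
WHERE currency_name = 'USD'
ORDER BY `datetime` DESC
LIMIT 1;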
It is better to have one table holding all currencies
If there is need for historical prices, then the table needs to hold them. A reasonable compromise in many situations is to split the price table into a full list of historical prices and another table which only has the current prices.
Using data type float can be troublesome. Please be sure you know what you are doing. If not, use a database currency data type.
As your web service is transactional, it is better to access fewer tables at the same time. Since you will be reading and writing a lot, I would suggest having a single table.
It's better to add a field to the CurrencyTable and update it than to hit two tables for a single request.