Read vs. write tables database design - MySQL

I have a user activity tracking log table that records all user activity as it occurs. It is an extremely write-heavy table because of the in-depth, click-by-click tracking. Up to this point the database design is fine. The problem is the next step.
I need to output the data for the business folks, and these people can also query it to fetch past activity data, so there is medium-to-high read traffic as well. I do not like the idea of reading and writing from the same high-traffic table.
So ideally I want to split the tables: the first one for quick writes (few to no FKs), then copy that data over, fully formatted and with all the labels for the IDs pulled in, into a read table for reading use.
So questions:
1) Is this the best approach for me?
2) If I do keep two tables, how do I keep them in sync? I can't copy the data to the read table the instant it is written to the write table (that would defeat the whole purpose of having separate tables), nor can I let the read table go stale, because the tracked activity data links to other user data such as session_id, so if those IDs are not ready when their use case calls for them, the writes will fail.
I am using MySQL for user data and HBase for some large tables, with PHP/CodeIgniter for my app.
Thanks.

Yes, having two separate tables is the best approach. I had the same problem to solve a few months ago, though for a daemon-type application rather than a website.
Eventually I ended up with one MEMORY table holding the "live" data, which is inserted/updated/deleted on almost every event, and another table holding duplicates of the live rows but without the unnecessary system columns: my history table, which was used only for reading, per request.
The live table is only relevant to the running process, so I don't care if its contents are lost in a server failure; whatever data needs to be read later is already stored in the history table. So there is no problem in duplicating the data across the two tables: your goal is performance, not normalization.
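For illustration, here is a minimal sketch of that pattern in MySQL, with hypothetical table and column names throughout; the users and actions lookup tables in the copy step are placeholders for whatever reference tables you actually have:
```sql
-- "Live" table: MEMORY engine, written on almost every event, no FKs.
CREATE TABLE activity_live (
    id          BIGINT UNSIGNED   NOT NULL AUTO_INCREMENT,
    user_id     INT UNSIGNED      NOT NULL,
    session_id  INT UNSIGNED      NOT NULL,
    action_code SMALLINT UNSIGNED NOT NULL,
    created_at  DATETIME          NOT NULL,
    PRIMARY KEY (id)
) ENGINE=MEMORY;

-- "History" table: denormalized for reads; ID labels are resolved at copy time.
CREATE TABLE activity_history (
    id           BIGINT UNSIGNED NOT NULL,
    user_id      INT UNSIGNED    NOT NULL,
    user_name    VARCHAR(64)     NOT NULL,
    session_id   INT UNSIGNED    NOT NULL,
    action_label VARCHAR(64)     NOT NULL,
    created_at   DATETIME        NOT NULL,
    PRIMARY KEY (id),
    KEY idx_user_time (user_id, created_at)
) ENGINE=InnoDB;

-- Periodic copy job (e.g. run every few minutes from cron):
INSERT INTO activity_history
SELECT a.id, a.user_id, u.name, a.session_id, c.label, a.created_at
FROM activity_live a
JOIN users u   ON u.id = a.user_id
JOIN actions c ON c.code = a.action_code
WHERE a.id > (SELECT COALESCE(MAX(id), 0) FROM activity_history);

-- Purge live rows that have already been copied.
DELETE FROM activity_live
WHERE id <= (SELECT COALESCE(MAX(id), 0) FROM activity_history);
```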

Related

One database or multiple databases for statistical architecture

I currently have a website running on CodeIgniter and MySQL. The MySQL database is around 110 tables and contains mainly website-specific data, like user data, vacancy data, etc.
Now I want to extend this website to include a complete statistical module as well. We would capture a lot of user actions and other aggregations from the data gathered on our own website, and would also pull in some data from the Google Analytics API to use in our statistics (we will generate a report in Excel but also show statistical graphs and numbers on a page, using Chart.js).
We are not planning (for the foreseeable future) to use this data in other programs, but we do need to be able to open some of the data to the public through an API.
We expect to start with about 300,000-350,000 data points gathered per day, and this amount will of course keep growing as we gain more users.
Using multiple databases in CodeIgniter seems to not be an issue, so the main problem I am left with is how I should create the architecture for this statistical module.
I have a couple of ideas on how to start, but I am not aware whether one solution has a performance impact over another, or whether there are other things to take into consideration.
My main idea boils down to having a table containing all "events": we simply insert into that table every time an action is performed, e.g. "user registered", "user put account on private", "user clicked on X", and so on.
Then once a day (probably around midnight), a cron job would run over that table for the past day and aggregate all the values into a format usable for our statistical metrics. The aggregated values would be stored in a new table. This way we can clean up the "events" table quite regularly, since it will become very big very fast. A sketch of what I have in mind follows below.
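All table and column names below are placeholders, not a real schema:
```sql
-- Raw events table: insert-only, cleaned up after aggregation.
CREATE TABLE events (
    id         BIGINT UNSIGNED   NOT NULL AUTO_INCREMENT,
    event_type SMALLINT UNSIGNED NOT NULL,
    user_id    INT UNSIGNED      NOT NULL,
    created_at DATETIME          NOT NULL,
    PRIMARY KEY (id),
    KEY idx_created (created_at)
);

-- Aggregated values, one row per day and event type.
CREATE TABLE daily_stats (
    stat_date   DATE              NOT NULL,
    event_type  SMALLINT UNSIGNED NOT NULL,
    event_count INT UNSIGNED      NOT NULL,
    PRIMARY KEY (stat_date, event_type)
);

-- Midnight cron job: aggregate yesterday's events, then purge them.
INSERT INTO daily_stats (stat_date, event_type, event_count)
SELECT DATE(created_at), event_type, COUNT(*)
FROM events
WHERE created_at >= CURDATE() - INTERVAL 1 DAY
  AND created_at <  CURDATE()
GROUP BY DATE(created_at), event_type;

DELETE FROM events WHERE created_at < CURDATE();
```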
Idea 1: Extend the current MySQL database architecture with new tables for the statistics. I would keep the current database architecture and add two new tables, one for the events and one for the aggregated values.
Idea 2: Create a new database, separate from the existing one, and insert all the events into a table there, with the aggregated values in another table in that same database.
Note: we already have quite a few crons running on our current database, updating statuses and dates, sending emails, and so on.
Note 2: sync issues between databases are not a concern, since we will never store statistics on a per-user level.
MySQL does not care whether tables are in the same database or in separate databases; databases are just a convenience for the user. Some things to note:
You might need db1.tbla JOIN db2.tblb syntax to query across databases.
It is convenient to have different GRANTs for different databases, but clumsy to have different GRANTs for 110 tables.
I can't think of any performance differences.
Nightly aggregation is a middle-of-the-road approach. Using IODKU (INSERT ... ON DUPLICATE KEY UPDATE) gives you 'immediate' aggregation, but is probably more of a burden on the system; see the sketch below.
My blog on Summary Tables.
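A hedged sketch of the IODKU alternative, reusing the hypothetical daily_stats table sketched in the question above; this runs per event instead of waiting for the nightly batch:
```sql
-- '17' is a hypothetical event_type code.
INSERT INTO daily_stats (stat_date, event_type, event_count)
VALUES (CURDATE(), 17, 1)
ON DUPLICATE KEY UPDATE event_count = event_count + 1;
```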
350K rows inserted per day is about 4 per second, which is comfortably low, so I don't think we need to discuss insert performance further.
"Summarize and toss" (for events) -- Yes. I like that approach. (Most people fail to think of this option.)
Do the math. Which table will be the largest after a year? How many GB will it be? Then think about whether you can shrink any of its columns: SMALLINT instead of INT, normalizing long, oft-repeated strings, etc.
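As a rough worked example (the per-row figure is an assumption, not a measurement): 350K rows/day is about 128 million rows after a year; at, say, 50 bytes per row that is on the order of 6-7 GB including indexes. A standard way to check the actual sizes as you grow:
```sql
-- Table sizes from the data dictionary; 'your_database' is a placeholder.
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.TABLES
WHERE table_schema = 'your_database'
ORDER BY (data_length + index_length) DESC;
```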

Access a large database table from multiple threads

I need expert advice on my database. We have hundreds of sensors around the world. We collect data from the sensors and store it in the database for future use.
Currently I create a separate database table for each customer, i.e. when a customer registers with the application, I create a separate table for them, and the data from all of this customer's sensors goes into their own table.
Now the number of customers is increasing, and so is the number of tables, and this approach is not looking good anymore (maybe it wasn't right in the first place).
I now want to keep all the data in one table, so I copied all the data from the customers' tables into a new table. The new table is now over 5 GB with more than 34 million rows (and growing).
If I insert new rows into this new table simultaneously, from multiple threads (one per sensor), it takes too long. Reading data from the same table takes a long time too.
How can I resolve this? Is there another solution? Should I use some external cloud service to store the data?
Thanks in advance!
EDIT:
I am using indexes. Here is the relevant part of the table schema:
UNIQUE INDEX idx_userInsDate (userID, instrumentID, utcDateTime)
I have also looked into database sharding, but my main issue is that inserting rows into the same table from multiple threads, and reading data from multiple threads, is taking too long.
With this limited information, here's my advice.
When collecting millions of rows from many different customers, unless the data has to be kept together for easy reporting, a customer-specific table, or even a customer-specific database, can definitely be used, and that is absolutely fine.
This actually has several benefits, including protecting you from accidentally exposing one customer's information to another, since each customer's data lives in its own table.
As your number of customers goes up, you get either a new database or a new table for each customer; that is fine, and it is probably something you would want to automate in your software, so that when a customer signs up, the table is created automatically (see the sketch at the end of this answer).
Both scenarios and designs are common and perfectly fine, depending on your situation. For instance, I once owned a product company where every customer had their own entire database, so as my customer count went up, the number of databases went up. That is no different, really, from you having a database or table per customer, and if you choose that route, that's okay.
Whatever you choose, you must consider your SQL backups, the size of your database versus the disk space available, and so on. If the number of tables keeps growing, maybe each customer should get their own database, but consider how hard it would then be to back up all of those databases and relate them to a central DB if you ever needed to. Just weigh everything like this, including security, your reporting needs, and how much data you will need to keep.
Here's another article I wrote some time ago on multi-tenant data architecture.
https://stackoverflow.com/a/38555345/671343
Check it out; hopefully it helps. You're not the only one to struggle with this kind of design decision. Just weigh all your options, considering reporting, security, backups, and more.
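If you stay with the table-per-customer route, provisioning can be a templated CREATE TABLE that your signup code runs. A minimal sketch, with hypothetical names and columns; your application would substitute the real customer ID into the table name:
```sql
-- Created automatically when, say, customer 1042 signs up.
CREATE TABLE sensor_data_cust_1042 (
    instrumentID INT UNSIGNED NOT NULL,
    utcDateTime  DATETIME     NOT NULL,
    reading      DOUBLE       NOT NULL,
    PRIMARY KEY (instrumentID, utcDateTime)
);
```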
Hope that's helpful.
Use MongoDB or a similar DB for your scenario; this is exactly the kind of scenario Mongo is meant for.
Inserting multiple records at once is very fast and isolated from other records, hence faster.
Reading is faster if you have a proper data-structure tree formed for your data.
Proper structuring will further help reduce the need to create a new table for each customer.
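If you would rather stay in MySQL with the single shared table, two things that commonly help concurrent writers are partitioning and multi-row batched inserts. A hedged sketch: the column names follow the index shown in the question, while the reading column and the partition count are assumptions:
```sql
CREATE TABLE sensor_data (
    userID       INT UNSIGNED NOT NULL,
    instrumentID INT UNSIGNED NOT NULL,
    utcDateTime  DATETIME     NOT NULL,
    reading      DOUBLE       NOT NULL,
    PRIMARY KEY (userID, instrumentID, utcDateTime)
)
PARTITION BY HASH (userID)
PARTITIONS 32;

-- Each thread should batch many rows per INSERT rather than inserting
-- one row at a time:
INSERT INTO sensor_data (userID, instrumentID, utcDateTime, reading)
VALUES (7, 301, '2016-01-01 00:00:00', 21.4),
       (7, 301, '2016-01-01 00:00:10', 21.5),
       (7, 302, '2016-01-01 00:00:00', 19.8);
```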

MySQL: what's better for speed, one table with millions of rows or managing multiple tables?

I'm reworking an existing PHP/MySQL/JS/Ajax web app that processes a LARGE number of table rows for users. Here's how the page currently works.
A user uploads a LARGE CSV file. The test file I'm working with has 400,000 rows (each row has 5 columns).
PHP creates a brand-new table for this data and inserts the hundreds of thousands of rows.
The page then sorts/processes/displays this data back to the user in a useful way. Processing includes searching, sorting by date and other fields, and redisplaying them without a huge load time (that's where the JS/Ajax comes in).
My question is: should this app place the data into a new table for each upload, or into one large table with an ID for each file? I think the original developer was adding separate tables for speed purposes. Speed is very important here.
Is there a faster way? Is there a better mousetrap? Has anyone ever dealt with this?
Remember, every .csv can contain hundreds of thousands of rows, and hundreds of .csv files can be uploaded daily. They can be deleted about 24 hours after they were last used (I'm thinking a cron job; any opinions?).
Thank you all!
A few notes based on comments:
All data is unique to each user and changes, so the user won't be re-accessing the data after a couple of hours. Only if they accidentally close the window and then come right back would they really revisit the same .csv.
No foreign keys required; all CSVs are private to each user and don't need to be cross-referenced.
I would shy away from putting all the data into a single table for the simple reason that you cannot change the data structure.
Since the data is being deleted anyway and you don't have a requirement to combine data from different loads, there isn't an obvious reason to put the data into a single table. The other argument is that the application works now. Do you really want to discover some requirement down the road that implies separate tables, after you've done the work?
If you do decide on a single table, then use table partitioning. Since each user is using their own data, you can use partitions to separate each user's load into its own partition. Although there are limits on partitions (such as no foreign keys), this will make accessing the data in a single table as fast as accessing the original separate tables.
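A hedged sketch of that partitioned single-table option; all names are hypothetical, and note that KEY partitioning spreads uploads across a fixed set of partitions rather than giving strictly one partition per load:
```sql
CREATE TABLE csv_rows (
    upload_id INT UNSIGNED NOT NULL,
    row_num   INT UNSIGNED NOT NULL,
    col1 VARCHAR(255),
    col2 VARCHAR(255),
    col3 VARCHAR(255),
    col4 VARCHAR(255),
    col5 VARCHAR(255),
    PRIMARY KEY (upload_id, row_num)
)
PARTITION BY KEY (upload_id)
PARTITIONS 64;

-- The 24-hour cleanup cron then reduces to one delete per expired upload:
DELETE FROM csv_rows WHERE upload_id = 123;
```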
Given 10^5 rows per CSV and 10^2 CSVs per day, you're looking at 10^7 (10 million) rows per day (and you say you'll clear that data down regularly). That doesn't look like a scary figure for a decent DB, especially given that you can index within tables, and not across multiple tables.
Obviously the most regularly used CSVs could very easily be held in memory for speed of access, perhaps even all of them: a very simple calculation based on next to no data gives me a figure of about 1 GB if you flush everything over 24 hours old (10^7 rows at roughly 100 bytes per row is about 10^9 bytes, i.e. 1 GB). 1 GB is not an unreasonable amount of memory these days.

Medium-term temporary tables - creating tables on the fly to last 15-30 days?

Context
I'm currently developing a tool for managing orders and communication between technicians and services. The industry context is broadcast and TV. Multiple clients expecting media files, each made to their own specs, imply widely varying workflows, even within the restricted scope of a single client's orders.
One client can ask one day for a single SD file and the next day for a full-blown HD package containing up to fourteen files... In a MySQL DB I am trying to store accurate information about all the small tasks composing the workflow, in multiple forms:
DATETIME values every time a task is accomplished, for accurate tracking
paths to the newly created files in the company's file system in VARCHARs
archiving background info in TEXT values (info such as user comments; e.g. when an incident happens and prevents moving forward, users can comment about it in this feed)
Multiply that by 30 different file types and this is way too much for a single table. So I thought I'd break it up by client: one table per client, so that any order only ever uses that one table, which never manipulates more than 15 fields. Still, this is a pretty rigid solution when a client has 9 different transcoding specs and a particular order requires only one of them. I figure I'd need to add a flag field for each transcoding field to indicate which ones are required for that particular order.
Concept
I then had this crazy idea: maybe I could create a temporary table that lasts while the order is running (which can range from about 1 day to 1 month). We rarely have more than 25 orders running simultaneously, so it wouldn't get too crowded.
The idea is to make a table tailored to each order, eliminating the need for flags and forever-empty fields. Once the order is complete, the table would be flushed, JSON-encoded, into a TEXT or BLOB column so it can be restored later if changes need to be made.
Do you have experience with DBMSs (MySQL in particular) struggling with such practices, wherever they have been tried? Does this sound like a viable option? I am happy to try (and have already started), but I am seeking advice on whether to keep going or to stop right here.
Thanks for your input!
Well, of course that is possible to do. However, you cannot use MySQL temporary tables for such long-term storage; you will have to use "normal" tables and have some clean-up routine...
However, I do not see why that amount of data would be too much for a single table. If your queries start to run slowly because of the amount of data, you should add some indexes. I also see another con: it will be much harder to build reports later on. When you have 25 tables with the same kind of data, you have to run 25 queries and merge the results.
I do not see the point, really. The same kinds of data should be in the same table.
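To make that concrete, here is a minimal sketch of the single-table design, with all names hypothetical: one row per task per order, so a task an order does not need simply has no row, which removes the need for flag fields.
```sql
CREATE TABLE order_tasks (
    order_id     INT UNSIGNED NOT NULL,
    task_type    VARCHAR(32)  NOT NULL,  -- e.g. 'transcode_hd', 'qc_check'
    completed_at DATETIME     NULL,      -- NULL until the task is accomplished
    file_path    VARCHAR(255) NULL,      -- path in the company's file system
    comments     TEXT         NULL,      -- incident feed / user comments
    PRIMARY KEY (order_id, task_type),
    KEY idx_completed (completed_at)
);
```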

How to reduce the growing size of an Access file?

So, at my workplace, they have a huge Access file (used with MS Access 2003 and 2007). The file is about 1.2 GB, so it takes a while to open. We cannot delete any of the records, and we have 100+ tables (each month we create 4 more, don't ask!). How do I improve this, i.e. downsize the file?
You can do two things:
use linked tables
"compact" the database(s) every once in a while
Linked tables will not, in and of themselves, limit the overall size of the database, but they "package" it into smaller, more manageable files. To look into this:
'File' menu + 'Get External data' + 'Linked tables'
Linked tables also have other advantages, such as letting you keep multiple versions of a data subset and select a particular set via the Linked Table Manager.
Compacting the database reclaims space otherwise lost as various CRUD operations (insert, delete, update...) fragment the storage. It also regroups tables and indexes, making searches more efficient. This is done with
'Tools' menu + 'Database Utilities' + 'Compact and Repair Database...'
You're really pushing up against the limits of MS Access there. Are you aware that the file can't grow any larger than 2 GB?
I presume you've already examined the data for possible space savings through additional normalization? You can "archive" some of the previous months' tables into separate MDB files and then link them (permanently or as needed) to your "current" database, in which case you'd actually be benefiting from what was probably an otherwise bad decision to start new tables each month.
But with that amount of data, it's probably time to start planning a move to a more capacious platform.
You should really think about your DB architecture. If there aren't any links between the tables, you could try moving some of them to another database (one DB per year :) as a short-term solution.
A couple of “Grasping at straws” ideas
Look at the data types of each column; you might be able to store some numbers as bytes, saving a small amount per record.
Look at the indexes and get rid of the ones you don’t use. On big tables unnecessary indexes can add a large amount of overhead.
I would +2^64 the suggestions about the database design being a bit odd, but nothing that hasn't already been said, so I won't labour the point.
Well... listen to @Larry, and keep in mind that, in the long term, you'll have to find another database to hold your data!
But in the short term, I am quite disturbed by this "4 new tables per month" thing. Four tables per month is almost 50 per year... That surely sounds strange to every "database manager" here. So please tell us: how many rows do they hold, how are they built, what are they for, and why do you have to build new tables every month?
Depending on what you are doing with your data, you could also think about archiving some tables as XML files (or even XLS?). This could make sense for "historic" data that does not have to be accessed through relations, views, etc. One good example would be the phone-call list collected from a PABX. Data can be saved to and loaded from XML or XLS files through ADODB recordsets or the TransferDatabase method.
Adding more tables every month is already a questionable practice, and it looks suspicious from a data normalisation standpoint.
If you do that, I suspect your database structure is also sub-optimal with regard to field sizes, data types, and indexes. I would really start by double-checking those.
If you really have a justification for monthly tables (which, again, I cannot imagine), why not have one back-end per month?
You could also have one main back-end holding, say, 3 months of data online, plus an archive DB to which you transfer your older records.
I use that for transactions, with the main table holding about 650,000 records, and Access is very responsive.
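A hedged sketch of that rolling-archive step in Access SQL; the table and column names are hypothetical, and the archive table would be a linked table in the archive back-end:
```sql
-- Move records older than 3 months to the archive, then remove them
-- from the main table.
INSERT INTO transactions_archive
SELECT * FROM transactions
WHERE trans_date < DateAdd('m', -3, Date());

DELETE FROM transactions
WHERE trans_date < DateAdd('m', -3, Date());
```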