Suggestion for a write-frequently, read-rarely database - MySQL

I have an application server which writes frequently to a database and reads that data back in the near future, but after that a given entry is read only very rarely.
What are some good databases optimised for this kind of access? I am currently using MongoDB, but I think it probably isn't the best choice in this case.
I am open to relational DBs (e.g. MySQL), MongoDB, Redis, etc.
P.S. It seems easy to answer this question for read-frequently DB access, but hard to find information on this specific case.

This is a very generic question; we need to know more details:
Size of the database
Data growth - how much? 10 GB per day / 200 GB per month?
Is it an OLTP application or an OLAP application?
What is the maximum number of concurrent transactions / users?
Apart from that, since you have mentioned the data is rarely read beyond a certain point:
You can always look at options for archival (cleaning up based on age - on a monthly or yearly basis)
Partitioning is another option, for faster retrieval - a sketch follows below
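For illustration, here's a minimal sketch of range partitioning by month in MySQL, using a hypothetical events table (the table and column names are assumptions); dropping a whole partition is a much cheaper way to archive or purge old data than a bulk DELETE:

    -- Hypothetical events table, RANGE-partitioned by month so that old data
    -- can be archived/purged cheaply by dropping whole partitions.
    CREATE TABLE events (
        id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        created_at DATETIME NOT NULL,
        payload    VARCHAR(255),
        PRIMARY KEY (id, created_at)   -- the partitioning column must be part of the key
    )
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2023_01 VALUES LESS THAN (TO_DAYS('2023-02-01')),
        PARTITION p2023_02 VALUES LESS THAN (TO_DAYS('2023-03-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );

    -- Monthly archival job: dropping a partition is far faster than DELETE.
    ALTER TABLE events DROP PARTITION p2023_01;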
Again, the choice of going for SQL or NoSQL is based on:
Consistency - if you have a fixed schema, I would suggest you go for a relational DB
Concurrency aspects - based on your needs you have to decide between SQL and NoSQL (for example, for online banking I would suggest an RDBMS; for storing product reviews/comments for a site, I am OK with NoSQL as this does not need strict concurrency handling)
You need to provide more details on your database needs in terms of functionality, data volumes, data usage and growth aspects.
Hope it helps...

Since you mention MySQL, you might want to look at the ARCHIVE storage engine.
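For illustration, a minimal sketch of an ARCHIVE table (the log table here is hypothetical). The engine compresses rows on disk and supports only INSERT and SELECT (no UPDATE or DELETE), which matches a write-often, read-rarely workload:

    -- Hypothetical append-only log table using the ARCHIVE engine.
    CREATE TABLE request_log (
        id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        logged_at DATETIME NOT NULL,
        message   TEXT,
        PRIMARY KEY (id)   -- ARCHIVE allows an index only on the AUTO_INCREMENT column
    ) ENGINE = ARCHIVE;

    INSERT INTO request_log (logged_at, message)
    VALUES (NOW(), 'user 42 logged in');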

Why do NoSQL databases scale better than relational databases? How should I choose between them?

By NoSQL databases I mean something like MongoDB or DynamoDB.
I've been trying to find out why NoSQL DBs are usually better at horizontal scaling than relational DBs, and how to choose between them.
I have looked into many videos and posts on "SQL vs NoSQL". Most of them end up talking about "Normalization vs Denormalization".
Here are some questions I am still confused about.
1.
Many people say that relational DBs have to follow ACID, so they are bad at horizontal scaling. But ACID is about transactions, and we can always choose not to use any transactions, right? I know not many people do this, but if we denormalized tables enough, would it be like NoSQL DBs, where we almost never use transactions? And many NoSQL DBs now have transactions too.
2.
I know denormalization is probably good for horizontal scaling, because if data are spread across many nodes (machines), it will be hard to do table joins (or transactions).
But as with transactions, we can choose not to use any table joins.
The only thing I can think of is that NoSQL DBs are schema-free, so it is easier to add new fields (columns) than in an RDB.
What I am trying to ask is:
why is a "Denormalized NoSQL db" better than a "Denormalized relational db"?
why is a "Normalized NoSQL db" worse than a "Normalized relational db"?
what is the real thing that prevents relational databases from denormalizing?
I've read this post
https://softwareengineering.stackexchange.com/questions/194340/why-are-nosql-databases-more-scalable-than-sql
It says
""The SQL API lacks a mechanism to describe queries where ACID's requirements are relaxed. This is why the BASE databases are all NoSQL.""
Could anyone give me an example of this?
Sorry for not being specific.
By NoSQL databases I mean something like MongoDB.
A blog like https://neo4j.com/blog/acid-vs-base-consistency-models-explained/ explains BASE this way:
Basic Availability
The database appears to work most of the time.
Soft-state
Stores don’t have to be write-consistent, nor do different replicas have to be mutually consistent all the time.
Eventual consistency
Stores exhibit consistency at some later point (e.g., lazily at read time).
This level of equivocation doesn't sound very reliable, does it? They trade off availability and consistency to gain performance and scalability.
This is fine if you're running a service that is tolerant of mismatched data or stale data, or which is okay with some minor amount of data loss once in a while. If those issues are an uncommon occurrence, but you get superior performance nearly all the time, it's very attractive. And more importantly, it demos well.
But if you have to run a service with strict requirements for data integrity, it's no good. If losing even one record of data gets you in trouble with auditors, or if you can't reliably read data you just committed a moment before because that commit takes time to propagate to all nodes of your cluster, it could be a deal-breaker.
So which data store to choose depends on the requirements of your app. Only you can judge if the relaxed availability and consistency of a BASE data store is sufficient for the needs of your app.
NoSQL is a term that covers lots of types of storage/query engines e.g. document stores, Graph Databases, etc. - basically anything that looks something like a database but doesn’t use the standard tables/rows/columns structure that a SQL database does.
NoSQL databases were developed to support use cases that relational databases don’t handle well - so while you might be able to use either a SQL or a NoSQL database in any given scenario, the choice between the 2 is normally a no-brainer; they would very rarely both be viable options.
Just to clarify, your questions about types of DB being better or worse are meaningless without context. Without knowing precisely what your requirements are, it’s impossible to say whether a NoSQL DB is better or worse than a SQL one - and that’s before you start looking at specific products in each category.
Also, that post you reference is about 8 years old and much of the information is out of date, as one of the contributors acknowledges in an update made in 2019.

Database for handling large data

We have started a new project using MySQL, Spring Boot, and AngularJS. Initially, we did not realize our DB was going to handle large data.
The number of tables will not be large (<130); only 10 to 20 tables will contain most of the data, which is constantly inserted/read/updated.
The estimated amount of data in those 10 tables is going to grow at about 1,200,000 records a month, and we must not delete that data so we are able to do various reports.
There needs to be a (read-only) replicated database as a backup/failover, and maybe for offloading reports at peak time.
I don't have first-hand experience with databases that large, so I'm asking those that have: which DB is the best choice in this situation? We have completed 100% of coding and development, but only now realize this. I have doubts whether MySQL is going to handle large data. I know that Oracle is the safe bet, but I am interested in whether MySQL with a similar setup would cope. We are not bound to MySQL; I am OK with any DB, and based on your feedback I can take a call.
An open-source DB is preferable, but it's not mandatory; we can go for a paid DB as well.
Handling Large Data
MySQL is more than capable of handling such loads. In fact, it is capable of handling much more load than what you are talking about. You just have to create the right kind of tables. You can do that by choosing the following (a sketch follows this list):
the correct storage engine for your use-case
the correct character set
the optimal data type for your column
the right indexing strategy - creating indexes thoughtfully
the right partitioning strategy (if the data in the table exceeds tens of millions of records)
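As a rough sketch of what those choices look like together (all table and column names here are assumptions, not a prescription):

    -- Hypothetical OLTP table illustrating the choices above.
    CREATE TABLE orders (
        id          BIGINT UNSIGNED  NOT NULL AUTO_INCREMENT,  -- compact surrogate key
        customer_id INT UNSIGNED     NOT NULL,                 -- smallest type that fits the data
        status      TINYINT UNSIGNED NOT NULL DEFAULT 0,       -- enum-like flag in one byte
        amount      DECIMAL(10,2)    NOT NULL,                 -- exact type for money, not FLOAT
        created_at  DATETIME         NOT NULL,
        PRIMARY KEY (id),
        KEY idx_customer_created (customer_id, created_at)     -- covers the common lookup pattern
    ) ENGINE = InnoDB              -- transactional engine suited to OLTP
      DEFAULT CHARSET = utf8mb4;   -- full Unicode support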
EDIT: You've also got to choose the right kind of data modelling and normalization strategy for your use-case. Most OLTP applications require some level of normalization. But if you want to do analytics and aggregates on heavy tables, you should either have a data warehouse, or have highly denormalized tables to avoid joins, and/or have a column-oriented database to support such queries.
MySQL is open-source and has a very strong community support so you will find a lot of literature around any issue that you face. You can also find all the filed bugs (resolved and unresolved) here.
As far as the number of tables is concerned, there's really no practical cap on that. See here: MySQL permits 4 billion tables if you're using InnoDB as the engine.
A lot of very big companies with scale use MySQL in some capacity. Facebook is one of them.
Native JSON Support
With the growing popularity of JSON as the de facto data exchange format across the internet, MySQL has also provided native JSON support in 5.7, so now you can store and query JSON from your APIs, if required.
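A minimal sketch of what that looks like (the table and field names are made up for illustration):

    -- Hypothetical table with a native JSON column (MySQL 5.7+).
    CREATE TABLE api_events (
        id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        doc JSON NOT NULL
    ) ENGINE = InnoDB;

    INSERT INTO api_events (doc)
    VALUES ('{"user": 42, "action": "login"}');

    -- ->> is shorthand for JSON_UNQUOTE(JSON_EXTRACT(...)).
    SELECT doc->>'$.action' AS action
    FROM api_events
    WHERE doc->>'$.user' = '42';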
HA and Replication
MySQL Replication works! Earlier, MySQL supported only binlog-coordinate-based replication, but it now supports GTID replication, which makes it easier to maintain and fix replication issues. There are also third-party replicators available in the market. For instance, Continuent's Tungsten is a replicator written in Java and is a replacement for native replication. It comes with a lot of configuration options which are not available with native MySQL replication.
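For reference, pointing a replica at a source with GTID auto-positioning looks roughly like this (MySQL 5.6/5.7 syntax; the host and credentials are placeholders, and it assumes GTIDs are already enabled on both servers):

    -- On the replica; assumes gtid_mode=ON and enforce_gtid_consistency=ON
    -- are configured on both servers.
    CHANGE MASTER TO
        MASTER_HOST = 'source.example.com',  -- placeholder host
        MASTER_USER = 'repl',
        MASTER_PASSWORD = '...',             -- placeholder credentials
        MASTER_AUTO_POSITION = 1;            -- GTID-based: no binlog file/position needed
    START SLAVE;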
I agree with MontyPython: MySQL can do it, and the design is critical. Fortunately, MySQL allows you to be flexible over time as needed.
I've had history tables used in daily reporting that grew to over a billion records in plain MySQL, with no problems.
I've also used MySQL MERGE tables to divide up tables with big-ish rows (100 KB+) to speed things up - basically keeping the individual underlying table file sizes under 30 GB each (a sketch of the layout follows below). However, that solution increases the open file count (in the system) per client, which might be a bigger deal on a clustered system; that one was not clustered.
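For anyone unfamiliar with MERGE tables, the layout looks roughly like this (table names are hypothetical; the underlying tables must be identical MyISAM tables):

    -- Identical MyISAM tables, e.g. one per year.
    CREATE TABLE history_2023 (
        id  BIGINT NOT NULL,
        val VARCHAR(100),
        KEY (id)
    ) ENGINE = MyISAM;

    CREATE TABLE history_2024 LIKE history_2023;

    -- One MERGE table exposes them all for unified querying.
    CREATE TABLE history_all (
        id  BIGINT NOT NULL,
        val VARCHAR(100),
        KEY (id)
    ) ENGINE = MERGE
      UNION = (history_2023, history_2024)
      INSERT_METHOD = LAST;   -- new rows go into the last underlying table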
That said, I'd like to give an honorable mention to:
MariaDB - MySQL, but with contributions from Facebook, Alibaba, Google, and more.
I've moved most of my MySQL community edition projects over to MariaDB and have been very happy. It's an almost transparent upgrade.
They offer an interesting enterprise big-data analytics package (MariaDB AX), but with your current requirements it's probably overkill, and the standard community edition will fulfill your needs.
For example, here's an informative tutorial on how to set up a scalable Cluster (Galera) and adding MaxScale for High Availability:
https://mariadb.com/resources/blog/getting-started-mariadb-galera-and-mariadb-maxscale-centos
Another interesting option is Vitess, developed at YouTube, which allows for sharded MySQL through a (mostly) driver-based solution. It solves the problem of needing highly available access to huge amounts of data while always yielding good performance. As such, it goes beyond high availability and focuses on a solution wherein no single query (e.g. a report against millions of rows of historical data) can negatively impact the other queries that need to be performed.

Limits for moving from a SQL to a NoSQL database

We are facing performance-related issues in our current MySQL DB. Our application is pretty heavy on a few tables (~20). We run a lot of aggregation queries on these tables, as well as writes. Most of our team are developers, and we don't have access to a DBA who might help retune our current DB and make things work faster.
Moving to NoSQL is an option. But we are seriously wondering what the upper limits are in terms of:
Volume (current volume per day ~50 GB)
Structured or raw data? (structured data)
IO stats on the DB (current rate is 60 KB/sec)
Record writes (now 3,000 rows/sec)
Questions that arise:
Is 50 GB per day high enough to consider NoSQL? Some documentation recommends more than a TB.
The data would be raw data, which can be further processed into structured form for use in the application.
MySQL tops out for us at 3,000 rows/sec; we are not sure whether it can be tuned further.
HBase seems promising for an analytics application.
We would like some guidelines on the limits of an RDBMS beyond which one should think of moving to NoSQL.
This is such a broad topic that I don't believe there are any "right" answers, but maybe a few general recommendations will help:
I think you should think of this challenge in terms of picking the right tool for the problem. All databases have their pros and cons, and for some challenges the best approach is to use an entire toolbox to get the job done.
Note that moving your data, or even just parts of it, to a different datastore is rarely a trivial effort. Use this chance to rethink your data model before implementing it.
Getting this job done should also take further requirements into account, such as your growth plans. It looks like you're at this crossroads because your original assumptions and choices are no longer on par with reality. If you want to delay the next time you find yourself in the same place, use this opportunity to do so.
Lastly, keep in mind that the job is really done only after you do something with all that captured data - or else I'd recommend the infinitely-scalable write-to-/dev/null design pattern ;) Put differently, unless your data is write-only, you'll want to make sure that whatever SQL/NoSQL/NewSQL/other datastore you choose can also get you the data/information/knowledge inside your use case's acceptable time frames.
It will probably be worth it given your current infrastructure, but keep in mind that it's going to be a huge task, since you're going to need to redesign the whole process. HBase can help you, as it has some neat features, like realtime counters (which in some cases eliminate the need for periodic rollups) and per-client buffering (which can allow you to scale to >100k writes per second). But be warned: it cannot be queried the same way you query a relational database, so you're going to need to plan carefully to make it work for you.
It seems that your main issue is with the raw data writes. Sure, you can definitely rely on HBase for that, and then do the rollups every X minutes to store the data in your RDBMS so it can be queried as usual. But given that you're doing them every minute, which is a very short gap, why don't you keep the data in memory and flush it to the rolled-up tables every minute? Sure, you could lose data, but I don't know how critical losing one minute of data is for you, and that alone could help you a lot. A sketch of such a rollup follows below.
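Something along these lines, assuming hypothetical raw_events and events_per_minute tables (and a unique key on the summary table's grouping columns):

    -- Per-minute rollup: aggregate the last minute of raw writes into a
    -- summary table that reporting queries hit instead of the raw table.
    -- Assumes UNIQUE KEY (minute_start, event_type) on events_per_minute.
    INSERT INTO events_per_minute (minute_start, event_type, event_count)
    SELECT
        DATE_FORMAT(created_at, '%Y-%m-%d %H:%i:00') AS minute_start,
        event_type,
        COUNT(*)
    FROM raw_events
    WHERE created_at >= NOW() - INTERVAL 1 MINUTE
    GROUP BY minute_start, event_type
    ON DUPLICATE KEY UPDATE event_count = event_count + VALUES(event_count);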
Anyway, the best advice I can think of: read a book, understand how HBase works first, dig into the pros & cons, and think about how it can suit your specific needs. This is crucial, because a good implementation is what will determine whether it's a success or a total failure.
Some resources:
HBase: The Definitive Guide
HBase Administration Cookbook
HBase Reference guide (free)

MongoDB vs MySQL storage space comparison

I am building a data warehouse in the range of 15+ TB. While storage is cheap, due to a limited budget we have to squeeze as much data as possible into that space while maintaining performance and flexibility, since the data format changes quite frequently.
I tried Infobright (community edition) as a SQL solution, and it works wonderfully in terms of storage and performance, but the limitations on data/table alteration make it almost a no-go, and Infobright's pricing on the enterprise version is quite steep.
After checking out MongoDB, it seems promising except for one thing. I was in a chat with a 10gen guy, and he stated that they don't really give much thought to storage space, since they flatten out the data to achieve performance and flexibility, and in their opinion storage is too cheap nowadays to be bothered with.
So can any experienced Mongo user out there comment on its storage space vs MySQL (as that is the standard we are comparing against right now)? If it's larger or smaller, can you give a rough ratio? I know it's very dependent on what sort of data you put in SQL and how you define the fields, indexing and such... but I am just trying to get a general idea.
Thanks for the help in advance!
MongoDB is not optimized for small disk space - as you've said, "disk is cheap".
From what I've seen and read, it's pretty difficult to estimate the required disk space due to:
Padding of documents to allow in-place updates
Attribute names are stored in each document, so you might save quite a bit by using abbreviations
No built in compression (at the moment)
...
IMHO the general approach is to build a prototype, insert data and see how much disk space your specific use case requires. The more realistically you can model your queries (inserts and updates), the better your results will be.
For more details see http://www.mongodb.org/display/DOCS/Excessive+Disk+Space as well.
Pros and Cons of MongoDB
For the most part, users seem to like MongoDB. Reviews on TrustRadius give the document-oriented database 8.3 out of 10 stars.
Some of the things that authenticated MongoDB users say they like about the database include its:
Scalability.
Readable queries.
NoSQL.
Change streams and graph queries.
A flexible schema for altering data elements.
Quick query times.
Schema-less data models.
Easy installation.
Users also have negative things to say about MongoDB. Some cons reported by authenticated users include:
User interface, which has a fairly steep learning curve.
Lack of joins, which can make some data retrieval projects difficult.
Occasional slowness in the cloud environment.
High memory consumption.
Poorly structured documentation.
Lack of built-in analytics.
Pros and Cons of MySQL
MySQL gets a slightly higher rating (8.6 out of 10 stars) on TrustRadius than MongoDB. Despite the higher rating, authenticated users still mention plenty of pros and cons of choosing MySQL.
Some of the positive features that users mention frequently include MySQL’s:
Portability that lets it connect to secondary databases easily.
Ability to store relational data.
Fast speed.
Excellent reliability.
Exceptional data security standards.
User-friendly interface that helps beginners complete projects.
Easy configuration and management.
Quick processing.
Of course, even people who enjoy using MySQL find features that they don’t like. Some of their complaints include:
Reliance on SQL, which creates a steeper learning curve for users who do not know the language.
Lack of support for full-text searches in InnoDB tables (addressed in MySQL 5.6 and later).
Occasional stability issues.
Dependence on add-on features.
Limitations on fine-tuning and common table expressions.
Difficulties with some complex data types.
MongoDB vs MySQL Performance
When comparing the performance of MongoDB and MySQL, you must consider how each database will affect your projects on a case-by-case basis. While some performance features may appear to be objectively promising, your team members may never use the features that drew you to a database in the first place.
MongoDB Performance
Many people claim that MongoDB outperforms MySQL because it allows them to create queries in multiple ways. To put it another way, MongoDB can be used without knowing SQL. While the flexibility improves MongoDB's performance for some organizations, SQL queries will suffice for others.
MongoDB is also praised for its ability to handle large amounts of unstructured data. Depending on the types of data you collect, this feature could be extremely useful.
MongoDB does not bind you to a single vendor, giving you the freedom to improve its performance. If a vendor fails to provide you with excellent customer service, look for another vendor.
MySQL Performance
MySQL performs extremely well for teams that want an open-source relational database that can store information in multiple tables. The performance that you get, however, depends on how well you configure the MySQL database. Configurations should differ depending on the intended use. An e-commerce site, for example, might need a different MySQL configuration than a team of research scientists.
No matter how you plan to use MySQL, the database’s performance gets a boost from full-text indexes, a high-speed transactional system, and memory caches that prevent you from losing crucial information or work.
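As a small illustration of the full-text feature mentioned above (the table here is hypothetical; InnoDB has supported FULLTEXT indexes since MySQL 5.6):

    -- Hypothetical articles table with a full-text index.
    CREATE TABLE articles (
        id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(200),
        body  TEXT,
        FULLTEXT KEY ft_title_body (title, body)
    ) ENGINE = InnoDB;

    -- Full-text search instead of a slow LIKE '%...%' table scan.
    SELECT id, title
    FROM articles
    WHERE MATCH(title, body) AGAINST ('replication performance' IN NATURAL LANGUAGE MODE);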
If you don’t get the performance that you expect from MySQL data warehouses and databases, you can improve performance by integrating them with an excellent ETL tool that makes data storage and manipulation easier than ever.
MySQL vs MongoDB Speed
In most speed comparisons between MySQL and MongoDB, MongoDB is the clear winner. MongoDB is much faster than MySQL at accepting large amounts of unstructured data. When dealing with large projects, it's difficult to say how much faster MongoDB is than MySQL. The speed you get depends on a number of factors, including the bandwidth of your internet connection, the distance between your location and the database server, and how well you organise your data.
If all else is equal, MongoDB should be able to handle large data projects much faster than MySQL.
Choosing Between MySQL and MongoDB
Whether you choose MySQL or MongoDB probably depends on how you plan to use your database.
Choosing MySQL
For projects that require a strong relational database management system, such as storing data in a table format, MySQL is likely to be the better choice. MySQL is also a great choice for cases requiring data security and fault tolerance. MySQL is a good choice if you have high-quality data that you've been collecting for a long time.
Keep in mind that to use MySQL, your team members will need to know SQL. You'll need to provide training to get them up to speed if they don't already know the language.
Choosing MongoDB
When you want to use data clusters and search languages other than SQL, MongoDB may be a better option. Anyone who knows how to code in a modern language will be able to get started with MongoDB. MongoDB is also good at scaling quickly, allowing multiple teams to collaborate, and storing data in a variety of formats.
Because MongoDB does not use data tables to make browsing easy, some people may struggle to understand the information stored there. Users can grow accustomed to MongoDB's document-oriented storage system over time.

Database design with millions of entries

Suppose there is a messaging system. This system has millions of entries to be sent and reported, and the count grows by 100K every hour. Two services access the DB: one is the sender, one is the reporter. So what would you suggest in order to get maximum performance? How should the DB be designed?
Also, what open-source DBMS would you suggest among MySQL, PostgreSQL, MongoDB, etc. to fulfil this high-volume DB?
Thanks
You've not really provided much information on your requirements other than a few comments about expected data volumes. Simple storage of large volumes of data has no real intrinsic value; it's the ability to access that data which gives it real value. So knowing how you expect to retrieve information from the database is more important than how much data you want to store.
Do these messages really require a document DB like MongoDB, or are they structured enough to use a straight RDBMS like PostgreSQL or MySQL? Do you need full-text search capability? How often, and what type of, queries are executed against this message data? Are you trying to write your own Twitter?
If those are your current data volumes, look to using DB replication for resilience. Consider partitioning your message table, perhaps by date posted. Use master/slave (or even multi-master/multi-slave) replication, as Konerak has suggested. Look at the possibility of an archive table for older messages that are less likely to be queried but are still available (a sketch follows below). Look at what a commercial database like Oracle can offer you. Get in a professional to help tune the DB for performance, rather than simply asking for free advice on sites like SO.
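A sketch of the archive-table idea, with hypothetical messages/messages_archive tables (in production you'd batch the DELETE to limit lock time):

    -- Compute one fixed cutoff so the INSERT and DELETE see the same rows.
    SET @cutoff = NOW() - INTERVAL 90 DAY;

    START TRANSACTION;

    -- Copy old messages into the archive table (assumed identical schema).
    INSERT INTO messages_archive
    SELECT * FROM messages WHERE posted_at < @cutoff;

    -- Then remove them from the hot table.
    DELETE FROM messages WHERE posted_at < @cutoff;

    COMMIT;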
Consider your hardware as well... multiple load-balanced servers to help with the volumes (we have 14 dedicated servers purely for accepting new messages, and three high-performance servers tuned for querying the data).