graph database for social network - mysql

I am building a system similar to a social network. The maximum number of users will eventually be around 50,000, or 70,000 at best.
At the moment I am using mysqli + prepared statements. The ERD currently has 30 tables and may eventually grow to about 40.
So, my question is: I have never used a graph database. I already have the ERD done in MySQL Workbench and some code developed. For the number of users expected in this project, is it recommended to change from MySQL to a graph database? Can my SQL code and database model be reused? Is there any advantage to making this change?
What do you think?
Thanks

Graphs are nice and fast when stored in SQL, if you have access to recursive queries (which is not the case in MySQL, but which are available in PostgreSQL) and your queries involve a max-depth criterion (which is probably your case on a social network), or if they're indexed properly.
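For the recursive-query route, here is a minimal sketch with a max-depth criterion (PostgreSQL-flavoured, since stock MySQL lacks recursive CTEs; the friendships(user_id, friend_id) edge table, the user id 42 and the depth limit of 3 are assumptions for illustration, not part of your schema):
-- Everyone reachable from user 42 within 3 hops:
WITH RECURSIVE reachable (friend_id, depth) AS (
    SELECT friend_id, 1 FROM friendships WHERE user_id = 42
    UNION ALL
    SELECT f.friend_id, r.depth + 1
    FROM friendships f
    JOIN reachable r ON f.user_id = r.friend_id
    WHERE r.depth < 3          -- the max-depth criterion keeps the recursion bounded
)
SELECT DISTINCT friend_id FROM reachable;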
There are multiple methods to index graphs. In your case your graph probably isn't dense, as in you're dealing with multiple forests which are nearly independent (you'll usually be dealing with tightly clustered groups of users), so you have plenty of options.
The easiest to implement is a transitive closure (which is, basically, what pre-calculating all of the potential paths is called). In your case it may very well be partial (say, depth-2 or depth-3). This lets you fully index related nodes in a separate table, for very fast graph queries. Use triggers or stored procedures to keep it in sync.
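As a rough illustration of the partial-closure idea (the friendships edge table and the depth cap of 2 are assumptions, not part of your ERD):
-- Pre-computed "who is reachable from whom, and at what depth" table.
CREATE TABLE friend_closure (
    user_id   INT NOT NULL,
    friend_id INT NOT NULL,
    depth     TINYINT NOT NULL,           -- 1 = direct friend, 2 = friend-of-friend
    PRIMARY KEY (user_id, friend_id),
    KEY idx_friend (friend_id)
);

-- One-off build of a depth-2 partial closure; afterwards keep it in sync with
-- triggers or stored procedures on the friendships table.
INSERT INTO friend_closure (user_id, friend_id, depth)
SELECT user_id, friend_id, MIN(depth)
FROM (
    SELECT user_id, friend_id, 1 AS depth
    FROM friendships
    UNION ALL
    SELECT f1.user_id, f2.friend_id, 2
    FROM friendships f1
    JOIN friendships f2 ON f2.user_id = f1.friend_id
    WHERE f2.friend_id <> f1.user_id
) AS paths
GROUP BY user_id, friend_id;

-- A graph query then becomes a single indexed lookup:
SELECT friend_id FROM friend_closure WHERE user_id = 42 AND depth <= 2;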
If your graph is denser than that, you may want to look into using a GRIPP index. Much like with nested sets, the latter works best (as in, updates fastest) if you drop the (rgt - lft - 1) / 2 = number of children property and use float values for lft/rgt instead of integers. (Doing so avoids reindexing entire chunks of the graph when you insert or move nodes.)
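To illustrate the float trick on a plain nested-set index (GRIPP applies the same idea to graphs; the table, ids and interval values below are hypothetical):
CREATE TABLE node_index (
    node_id INT PRIMARY KEY,
    lft DOUBLE NOT NULL,
    rgt DOUBLE NOT NULL,
    KEY idx_bounds (lft, rgt)
);

-- Adding a child means picking lft/rgt strictly inside the parent's interval,
-- to the right of its existing children; no other row has to be renumbered.
-- Here the parent spans 1.0 .. 2.0 and its last child ends at 1.5:
INSERT INTO node_index (node_id, lft, rgt) VALUES (1001, 1.6, 1.9);

-- Descendants of a node are still a single range scan:
SELECT c.node_id
FROM node_index p
JOIN node_index c ON c.lft > p.lft AND c.rgt < p.rgt
WHERE p.node_id = 1;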

Related

Why relational databases cannot be scaled horizontally

I know this question has been asked quite a few times, but I have not got any satisfying answer.
I have read many blogs and most of them say that RDBMS cannot be scaled horizontally. The only way to deal with it is by buying bigger machines.
Then I read about why they can't be scaled horizontally. People say it is because they provide solid, mature guarantees in the form of the ACID properties. My counter-argument is: can't we simply relax the requirement that an RDBMS provide ACID properties for specific tables? Is that the only reason they can't be scaled horizontally and we have to consider NoSQL databases?
The second argument that is put up is that NoSQL databases store data as a single unit whereas an RDBMS stores data across multiple tables. Thus one piece of data may be in one system while another piece of data it refers to may be in another system, so scaling an RDBMS across machines becomes difficult. My question to them is: why can't we store all the related data in a single table rather than scattering it across multiple tables if the situation demands it? If NoSQL can store data as a single unit in a single collection, why can't an RDBMS store data as a single unit in a single table? (For example, why does an order have to be split into an order table, a customer table and a payment table? Why can't they be clubbed into a single table, the way a NoSQL database would have stored them?)
This also allows developers to develop without having to convert in-memory structures to relational structures.
In short, can we make an RDBMS behave like a NoSQL database and make it scale horizontally?
First - what do you mean by 'scaling horizontally'?
To me, scaling horizontally is what we all do in MPP (Massively Parallel Processing) databases like Vertica, Teradata, DB2 Parallel Edition, NonStop SQL, etc.: you have a very big table, which you distribute evenly across all nodes of your MPP cluster, usually based on the hash value of the primary key or something similar. This is what Hadoop and all other Map-Reduce architectures do, too (while often being less effective, at least as of now).
(Just editing to clarify): If you have 10 nodes in your cluster, your big tables are all distributed so that each node holds one tenth of their data. Scaling, now, would mean adding, for example, another 10 nodes and re-distributing the data so that each table has 1/20 of its data on each node. And MPP databases scale linearly; this means that by doubling the number of nodes, with the same data volume, the queries will run twice as fast.
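As a concrete example of hash distribution (Vertica-flavoured syntax, purely illustrative; Teradata, DB2 PE and the others express the same idea with their own DDL):
-- Rows are spread across the cluster by the hash of the primary key, so with
-- 10 nodes each node stores and scans roughly 1/10 of the table in parallel.
CREATE TABLE big_orders (
    order_id    BIGINT NOT NULL,
    customer_id BIGINT,
    amount      NUMERIC(12,2),
    PRIMARY KEY (order_id)
)
SEGMENTED BY HASH(order_id) ALL NODES;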
You seem to mean something different, and I'm curious about what you mean.
As to RDBMS having to split everything into several tables:
The 'R' in RDBMS stands for 'Relational'. Before entering a discussion of all this, you should read a basic tutorial on relational algebra. A relation is simply a set of objects that can all be described with the same attributes. With that, all objects have the same attributes/columns/fields. As soon as this rule is violated, it is not a relation / table anymore.
I strongly suggest that you take a training on relational theory and relational databases, even before starting to play with SQL.
It's a big, big world of its own that you will have the opportunity to explore. And it all boils down to set theory and Boolean and relational algebra. And you can do so many things with it ...
Your question here is just like asking why a bicycle has two wheels.
Or am I missing something?
Marco the Sane

Best practices for creating a huge SQL table

I want to create a table about "users" for each of the 50 states. Each state has about 2GB worth of data. Which option sounds better?
Create one table called "users" that will be 100GB large OR
Create 50 separate tables called "users_{state}", each which will be 2GB large
I'm looking at two things: performance, and style (best practices)
I'm also running RDS on AWS, and I have enough storage space. Any thoughts?
EDIT: From the looks of it, I will not need info from multiples states at the same time (i.e. won't need to frequently join tables if I go with Option 2). Here is a common use case: The front-end passes a state id to the back-end, and based on that id, I need to query data from the db regarding the specified state, and return data back to front-end.
Are the 50 states truly independent in your business logic? Meaning your queries would only need to run over one given state most of the time? If so, splitting by state is probably a good choice. In this case you would only need joining in relatively rarer queries like reporting queries and such.
EDIT: Based on your recent edit, splitting by state (the first scenario above) is the route I would recommend. You will get better performance from the table partitioning when no joining is required, and there are multiple other benefits to having smaller partitioned tables like this.
If your queries would commonly require joining across a majority of the states, then you should definitely not partition like this. You'd be better off with one large table and just build the appropriate indices needed for performance. Most modern enterprise DB solutions are capable of handling the marginal performance impact going from 2GB to 100GB just fine (with proper indexing).
But if your queries on average would need to join results from only a handful of states (say no more than 5-10 or so), the optimal solution is a more complex gray area. You will likely be able to extract better performance from the partitioned tables with joining, but it may make the code and/or queries (and all coming maintenance) noticeably more complex.
Note that my answer assumes the more common access frequency breakdowns: high reads, moderate updates, low creates/deletes. Also, if performance on big data is your primary concern, you may want to check out NoSQL (for example, Amazon AWS DynamoDB), but this would be an invasive and fundamental departure from the relational system. But the NoSQL performance benefits can be absolutely dramatic.
Without knowing more of your model, it will be difficult for anyone to make judgement calls about performance, etc. However, from a data modelling point of view, when thinking about a normalized model I would expect to see a User table with a column (or columns, in the case of a compound key) which hold the foreign key to a State table. If a User could be associated with more than one state, I would expect another table (UserState) to be created instead, and this would hold the foreign keys to both User and State, with any other information about that relationship (for instance, start and end dates for time slicing, showing the timespan during which the User and the State were associated).
Rather than splitting the data into separate tables, if you find that you have performance issues you could use partitioning to split the User data by state while leaving it within a single table. I don't use MySQL, but a quick Google turned up plenty of reference information on how to implement partitioning within MySQL.
Until you try building and running this, I don't think you know whether you have a performance problem or not. If you do, following the above design you can apply partitioning after the fact and not need to change your front-end queries. Also, this solution won't be problematic if it turns out you do need information for multiple states at the same time, and won't cause you anywhere near as much grief if you need to look at User by some aspect other than State.
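For the partitioning route, here is a minimal MySQL sketch (the column, value and partition names are illustrative; note that MySQL requires the partitioning column to be part of every unique key, hence the composite primary key):
CREATE TABLE users (
    user_id  BIGINT NOT NULL,
    state_id TINYINT NOT NULL,
    name     VARCHAR(100),
    PRIMARY KEY (user_id, state_id)
)
PARTITION BY LIST (state_id) (
    PARTITION p_alabama VALUES IN (1),
    PARTITION p_alaska  VALUES IN (2),
    PARTITION p_arizona VALUES IN (3)
    -- ... one partition per state
);

-- Queries that filter on state_id are pruned to a single partition, while the
-- front-end still sees one users table:
SELECT * FROM users WHERE state_id = 3;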

One SQL table for all or multiple tables per regression record?

I am moving a design flow, in which a regression consisting of multiple simulations is run on a server farm, from using files over NFS to using a MySQL DB for extra speed. (We have an associated flow that already has this optimisation, so we know it can work.)
We will probably run on the order of 1000 regressions over one year, each of approximately 100K simulations, with each simulation storing a small record of its results/runtime/...
In the current flow, each regression's results are stored in a separate (CSV) file. In the DB, each regression is stored in the same regressions table, and the simulation results from every regression are all stored in the one sim_results table.
To minimise changes from the current flow, I would like to consider creating a separate sim_results table for each regression, but
I don't know how to create a separate table for an individual regression record (which has ID as its primary index).
I don't know if I should do it this way, to better mimic the current flow, or go with the one sim_results table because it may be "the SQL way".
Help appreciated!
The SQL way is typically that you don't create multiple tables that each correspond to a different series of rows, except in the case where you're breaking those tables out for the purpose of sharding the data among multiple nodes (i.e. horizontal sharding). Horizontal sharding is generally a complex task that has lots of caveats.
But overall, the way you design your schema has to do with the use cases you need to suit. Particularly if you want to run queries over many simulations at once, storing all the data in a single series of tables is how you'd do that. If OTOH you don't really have any querying plans, then you probably don't need a relational DB in the first place.
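For the single-table route, a minimal sketch (the column names are guesses based on your description, so adjust to your actual record):
CREATE TABLE sim_results (
    regression_id INT NOT NULL,          -- FK to the regressions table
    sim_id        INT NOT NULL,
    status        VARCHAR(16),
    runtime_s     DOUBLE,
    PRIMARY KEY (regression_id, sim_id)
);

-- Mimics "one file per regression": everything for regression 123 is one indexed scan.
SELECT * FROM sim_results WHERE regression_id = 123;

-- And the cross-regression queries the single-table design makes possible:
SELECT regression_id, AVG(runtime_s)
FROM sim_results
GROUP BY regression_id;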
I'm not sure of the format of your data, but one schema design that is common for large amounts of data to be "analyzed" is the star schema. The wikipedia page is a good read.
If you are heading towards creating many tables, SQLAlchemy's Table() construct is a Python data structure which you can build programmatically. Build a function that creates new Table() objects as needed and then calls create() on them. I've worked with companies that have had to work hard to get off of this particular design, though, so I'd really consider whether this scheme is worth it. Relational tables, properly configured, can store billions of rows without issue.

Which DB to choose for finding best matching records?

I'm storing objects described by a lot of integer attributes in a database. The real object is a little bit more complex, but for now let's assume that I'm storing cars in my database. Each car has a lot of integer attributes that describe it (i.e. maximum speed, wheelbase, maximum power, etc.) and these are searchable by the user. The user defines a preferred range for each attribute, and since there are a lot of attributes there most likely won't be any car matching all the attribute ranges. Therefore the query has to return a number of cars sorted by best match.
At the moment I implemented this in MySQL using the following query:
SELECT *, SQRT( POW((a < min_a)*(min_a - a) + (a > max_a)*(a - max_a), 2) +
POW((b < min_b)*(min_b - b) + (b > max_b)*(b - max_b), 2) +
... ) AS `match`
FROM cars
WHERE a > (min_a - max_allowable_deviation) AND a < (max_a + max_allowable_deviation) AND ...
ORDER BY `match` ASC
where a and b are attributes of the object and min_a, max_a, min_b and max_b are user-defined values. Basically the match is the square root of the sum of the squared differences between the desired range and the real value of the attribute; a value of 0 means a perfect match.
The table contains a couple of million records and the WHERE clause is only there to limit the number of records the calculation is performed on. An index is placed on all of the queryable columns and the query takes about 500 ms. I'd like to improve this number and I'm looking into ways to improve this query.
Furthermore I am wondering whether there is a different database better suited to perform this job. Moreover I'd very much like to change to a NoSQL database, because of its more flexible schema options. I've been looking into MongoDB, but couldn't find a way to solve this problem efficiently (fast).
Is there any database better suited for this job than MySQL?
Take a look at R-trees. (The pages on specific variants go into a lot more detail and present pseudocode.) These data structures allow you to query by a bounding rectangle, which is exactly what your problem of searching by ranges on each attribute is.
Consider your cars as points in n-dimensional space, where n is the number of attributes that describe your car. Then, given n ranges, each describing an attribute, the problem is to find all the points contained in that n-dimensional hyperrectangle. R-trees support this query efficiently. MySQL implements R-trees for its spatial data types, but it only supports two-dimensional space, which is insufficient for you. I'm not aware of any common databases that support n-dimensional R-trees off the shelf, but you can take some database with good support for user-defined tree data structures and implement R-trees yourself on top of that. For example, you can define a structure for an R-tree node in MongoDB, with child pointers. You would then implement the R-tree algorithms in your own code while letting MongoDB take care of storing the data.
Also, there's this C++ header file implementing an R-tree, but currently it's only an in-memory structure. Though if your data set is only a few million rows, it seems feasible to just load this memory structure on startup and update it whenever new cars are added (which I assume is infrequent).
Text search engines, such as Lucene, meet your requirements very well. They allow you to "boost" hits depending on how they were matched, e.g. you can define engine size to be considered a "better match" than wheelbase. Using Lucene is really easy and, above all, it's SUPER FAST. Way faster than MySQL.
MySQL offers a plugin to provide text-based searching, but I prefer to use Lucene separately; that way it's easily scalable (being read-only, you can have multiple Lucene engines) and easily manageable.
Also check out Solr, which sits on top of Lucene and allows you to store, retrieve and search simple Java objects (lists, arrays, etc.).
Likely, your indexes aren't helping much, and I can't think of another database technology that's going to be significantly better. A few things to try with MySQL....
I'd try putting a copy of the data in a memory table. At least the table scans will be in memory....
http://dev.mysql.com/doc/refman/5.0/en/memory-storage-engine.html
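A hedged sketch of that approach, assuming the hypothetical cars table from the question (MEMORY tables live in RAM only and don't support TEXT/BLOB columns, so treat this as a disposable copy, not the system of record):
-- Allow the copy to grow large enough (the value is illustrative; size it to your data):
SET max_heap_table_size = 2147483648;

CREATE TABLE cars_memory ENGINE=MEMORY AS SELECT * FROM cars;

-- Point the ranking query at cars_memory; rebuild the copy after bulk changes
-- and after any server restart, since MEMORY tables start out empty.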
If that doesn't work for you or help much, you could also try a User Defined Function to optimize the calculation of the matching. Basically, this means executing the range testing in a C library you provide:
http://dev.mysql.com/doc/refman/5.0/en/adding-functions.html

handling large dataset using MySQL

I am applying for a job which asks for experience handling large-scale data sets using a relational database, like MySQL.
I would like to know which specific skill sets are required for handling large scale data using MySQL.
Handling large scale data with MySQL isn't just a specific set of skills, as there are a bazillion ways to deal with a large data set. Some basic things to understand are:
Column Indexes, how, why, and when they're used, and the pros and cons of using them.
Good database structure to balance between fast writes and easy reads.
Caching, leveraging several layers of caching and different caching technologies (memcached, redis, etc)
Examining MySQL queries to identify bottlenecks, and understanding MySQL internals to see how queries get planned and executed by the database server, in order to increase query performance (a small EXPLAIN sketch follows this list)
Configuring the MySQL server to be able to handle a lot of concurrent connections and access its data quickly. Hardware bottlenecks, and the advantages of using different technologies to speed up your hardware (for example, storing your MySQL data on a RAID5 array to increase IO performance)
Leveraging built-in MySQL technology (like Replication) to off-load read traffic
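On the query-examination point, a tiny illustration of what that looks like in practice (the table and column names are hypothetical):
EXPLAIN
SELECT o.order_id, c.name
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.created_at >= '2024-01-01';
-- In the output, watch the type, key and rows columns: type = ALL with a large
-- rows estimate means a full table scan, which usually points to a missing or
-- unusable index.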
These are just a few things that get thought about in regards to big data in MySQL. There's a TON more, which is why the company is looking for experience in the area. Knowing what to do, or having experience with things that have worked or failed for you is an absolutely invaluable asset to bring to a company that deals with high traffic, high availability, and high volume services.
edit
I would be remiss if I didn't mention a source for more information. Check out High Performance MySQL. This is an incredible book, and has a plethora of information on how to make MySQL perform in all scenarios. Definitely worth the money, and the time spent reading it.
edit -- good structure for balanced writes and reads
With this point, I was referring to the topic of normalization / de-normalization. If you're familiar with DB design, you know that normalization is the separation of data as to reduce (eliminate) the amount of duplicate data you have about any single record. This is generally a fantastic idea, as it makes tables smaller, faster to query, easier to index (individually) and reduces the number of writes you have to do in order to create/update a new record.
There are different levels of normalization (as @Adam Robinson pointed out in the comments below), which are referred to as normal forms. Almost every web application I've worked with hasn't had much benefit beyond 3NF (Third Normal Form), whose formal definition will probably make your head hurt if you go and read it. So in layman's terms (at the risk of dumbing it down too far...), a 3NF structure satisfies the following rules:
No duplicate columns within the same table.
Create different tables for each set of related data. (Example: a Companies table which has a list of companies, and an Employees table which has a list of each company's employees.)
No sub-sets of columns which apply to multiple rows in a table. (Example: zip_code, state, and city are a sub-set of data that can be identified uniquely by zip_code. These 3 columns could be put in their own table and referenced from the Employees table (in the previous example) by zip_code.) This eliminates large sets of duplication within your tables, so any change to the city/state for a zip code is a single write operation instead of 1 write for every employee who lives in that zip code.
Each sub-set of data is moved to its own table and is identified by its own primary key (this is touched on in the example for rule 3).
Remove columns which are not fully dependent on the primary key. (An example here might be if your Employees table has start_date, end_date, and years_employed columns. The start_date and end_date are both unique and dependent on any single employee row, but years_employed can be derived by subtracting start_date from end_date. This is important because as end_date increases, so does years_employed, so if you were to update end_date you'd also have to update years_employed: 2 writes instead of 1.)
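Putting the Companies / Employees / zip-code examples above together, a minimal 3NF sketch (the table and column names are illustrative):
CREATE TABLE zip_codes (
    zip_code VARCHAR(10) PRIMARY KEY,
    city     VARCHAR(80) NOT NULL,
    state    CHAR(2)     NOT NULL
);

CREATE TABLE companies (
    company_id INT PRIMARY KEY,
    name       VARCHAR(120) NOT NULL
);

CREATE TABLE employees (
    employee_id INT PRIMARY KEY,
    company_id  INT NOT NULL,
    zip_code    VARCHAR(10) NOT NULL,
    start_date  DATE,
    end_date    DATE,              -- years_employed is derived, not stored
    FOREIGN KEY (company_id) REFERENCES companies (company_id),
    FOREIGN KEY (zip_code)   REFERENCES zip_codes (zip_code)
);

-- Fixing a city name is now one write to zip_codes instead of one write per employee.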
A fully normalized (3NF) database table structure is great if you've got a very heavy write load. If your server is doing a lot of writes, it's very easy to write small bits of data, especially when you're running fewer of them. The drawback is that all your reads become much more expensive, because you (typically) have to run a lot of JOIN queries when you're pulling data out. JOINs are typically expensive, and it is harder to create proper indexes for them when your WHERE clauses span the relationship or when you're sorting the result sets.
If you have to perform a lot of reads (SELECTs) on your data set, a 3NF structure can cause performance problems. As your tables grow, you're asking MySQL to cram more and more table data (and indexes) into memory. Ideally this is what you want, but with big data sets you're just not going to have enough memory to fit all of it at once. This is when MySQL starts to create temporary tables and has to use the disk to load and manipulate data. Once MySQL becomes reliant on the hard disk to serve up query results, you're going to see a significant performance drop. This is less so the case with solid-state disks, but they are super expensive and (imo) not mature enough to use on mission-critical data sets yet (I mean, unless you're prepared for them to fail and have a very fast backup-recovery system in place... then use them and go nuts!).
This is the balancing part. You have to decide what kind of traffic the data you're reading/writing is going to be serving more of, and design that to be fast. In some instances, people don't mind writes being slow because they happen less frequently. In other cases, writes have to be very fast, and the reads don't have to be fast because the data isn't accessed that often (or at all, or even in real time).
Workloads that require a lot of reads benefit the most from a middle-tier caching layer. The idea is that your writes are still fast (because your schema is still normalized) and your reads can be slower because you're going to cache the results (in memcached or something comparable), so you don't hit the database very frequently. The drawback here is that if your cache gets invalidated quickly, then the cache is not reducing the read load by a meaningful amount, and that results in no added performance (and possibly even more overhead to check/invalidate the caches).
With workloads that require high write throughput, on data that is read frequently and can't be cached (because it constantly changes), you have to come up with another strategy. This could mean that you start to de-normalize your tables by relaxing some of the normalization rules you chose to satisfy, or something else. Instead of making smaller tables with less repetitive data, you make larger tables with more repetitive / redundant data. The advantage here is that your data is all in the same table, so you don't have to perform as many (or any) JOINs to pull the data out. The drawback: writes are more expensive, because you have to write the same data in multiple places.
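A small, hypothetical illustration of that trade-off: a de-normalized read table that repeats company and location data on every row, so the hot read path needs no JOINs, at the cost of wider writes:
CREATE TABLE employees_read (
    employee_id  INT PRIMARY KEY,
    company_name VARCHAR(120),
    city         VARCHAR(80),
    state        CHAR(2),
    zip_code     VARCHAR(10),
    start_date   DATE,
    end_date     DATE
);

-- The read is trivial...
SELECT * FROM employees_read WHERE employee_id = 42;

-- ...but a company rename now touches every matching row instead of one:
UPDATE employees_read SET company_name = 'NewCo' WHERE company_name = 'OldCo';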
So with any given situation the developer(s) have to identify what kind of use the data structure is going to have to serve, and balance between any number of technologies and paradigms to achieve an acceptable solution that meets their needs. No two systems or solutions are the same which is why the employer is looking for someone with experience on how to deal with these large datasets. Finding these solutions is not something that can really be learned out of a book, it typically takes some experience in the field and experience with how different solutions performed.
I hope that helps. I know I rambled a bit, but it's really a lot of information. This is why DBAs make the big dollars (:
You need to know how to process the data in "chunks". That means instead of simply trying to manipulate the entire data set, you need to break it into smaller, more manageable pieces. For example, if you had a table with 1 billion records, a single update statement against the entire table would likely take a long time to complete, and might bring the server to its knees.
You could, however, issue a series of update statements within a loop that would update 20,000 records at a time. On each iteration of the loop you would increment your range/counters/whatever to identify the next set of records.
Also, you commit your changes at the end of each loop, thereby allowing you to stop the process and continue where you left off.
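A minimal sketch of that pattern, assuming a hypothetical big_table with an auto-increment id and a processed flag; the loop itself would live in application code or a stored procedure:
SET @start_id := 0;

-- Repeat until no rows are updated, advancing @start_id by 20000 each pass:
UPDATE big_table
SET    processed = 1
WHERE  id > @start_id AND id <= @start_id + 20000;
COMMIT;
SET @start_id := @start_id + 20000;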
This is just one aspect of managing large data sets. You still need to know:
how to perform backups
proper indexing
database maintenance
You can read and learn how to handle large data sets with MySQL, but it is not equivalent to having actual experience.
Straight and simple answer: study partitioned databases and find the appropriate MySQL data structures for large-scale data sets, in line with a partitioned database architecture.