I'm writing an app that needs to handle more than 15,000 photos, and I want to store their EXIF and IPTC attributes in a database.
My initial approach is to use MySQL and create a table to store all the attributes, as suggested here.
However, most of the photos have up to 250 attributes each. With 15k photos, that already means almost 4 million rows, and this is only the beginning (I expect more photos in the future).
I wonder whether MySQL would cope with this scenario, or whether I should move to a NoSQL approach like MongoDB.
Please also note that I need to make the database searchable.
Thanks in advance.
If you're a .Net developer, RavenDB is ideal for your scenario. It can easily handle that volume on very modest hardware, and has outstanding search capabilities provided by its internal use of the Lucene search engine.
The photos themselves would be stored as attachments, while the attributes would be part of the document.
Even if you're not a .Net developer, RavenDB can be used over HTTP/REST from any language; it's just much easier with the native .Net client.
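For example, all of a photo's attributes could live on a single document rather than one row per attribute. A rough sketch of such a document as a plain JSON object (the field names here are illustrative, not anything prescribed by RavenDB):

    const photoDocument = {
      id: 'photos/1',
      fileName: 'IMG_0042.jpg',
      // EXIF and IPTC attributes as nested objects; only the attributes a
      // given photo actually has need to be present
      exif: {
        Make: 'Canon',
        Model: 'EOS 5D Mark II',
        ISOSpeedRatings: 200,
        FNumber: 2.8
      },
      iptc: {
        Keywords: ['landscape', 'sunset'],
        Caption: 'Sunset over the bay'
      }
    };

Lucene-backed indexes over fields like exif.Make or iptc.Keywords are what would make this searchable.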
I want to create a website that will have an AJAX search. It will fetch the data either from a JSON file or from a database. I do not know which technology to use to store the data: a JSON file or MySQL. Based on some quick research, it will be about 60,000 entries, so the file size with JSON will be around 30-50 MB, while MySQL will have 60,000 rows. What are the limitations of each technique and what are the benefits?
Thank you
I can't comment since I need 50 reputation to do so, so I will give this as an answer:
MySQL will be preferable for many reasons, not the least of which being that you do not want your web server process to have write access to the filesystem (except possibly for logging), because that is an easy way to get exploited.
Also, the MySQL team has put a lot of engineering effort into things such as replication, concurrent access to data, ACID compliance, and data integrity.
Imagine, for instance, that you add a new required field to whatever data structure you are storing. If you store JSON files, you will need some process that opens each file, adds the field, and saves it again. Compare this to a single ALTER TABLE with a DEFAULT value for the field. (A bit of a contrived example, but how many hacks do you want to leave in your codebase for dealing with old data? See the sketch below.)
To be really blunt about it: MySQL is a database while JSON is not, so the correct answer is MySQL, without hesitation. JSON is just a data format, and barely even that. It was never designed to handle anything like concurrent connections or any sort of data manipulation, since its only function is to represent data, not to manage it.
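To make that concrete, here is a minimal sketch of the migration difference (the table, column, and directory names are all made up for illustration):

    // In MySQL, one statement back-fills every existing row:
    //   ALTER TABLE entries
    //     ADD COLUMN category VARCHAR(50) NOT NULL DEFAULT 'general';
    //
    // With JSON files, you have to rewrite every file yourself:
    const fs = require('fs');
    const path = require('path');

    const dir = './data'; // hypothetical directory of one JSON file per entry
    for (const name of fs.readdirSync(dir)) {
      const file = path.join(dir, name);
      const entry = JSON.parse(fs.readFileSync(file, 'utf8'));
      if (entry.category === undefined) entry.category = 'general'; // the new field
      fs.writeFileSync(file, JSON.stringify(entry));
    }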
So go with MySQL for storing the data. Then use a server-side language to read that database and send the results out as JSON, rather than actually storing anything in JSON.
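As a minimal sketch of that, assuming Express and the mysql2 driver (the table and column names are hypothetical):

    const express = require('express');
    const mysql = require('mysql2/promise');

    const app = express();
    const pool = mysql.createPool({
      host: 'localhost',
      user: 'app',
      password: 'secret',
      database: 'site'
    });

    app.get('/search', async (req, res) => {
      // MySQL does the filtering; the client only ever sees a small JSON result
      const [rows] = await pool.execute(
        'SELECT id, title FROM entries WHERE title LIKE ? LIMIT 50',
        [`%${req.query.q}%`]
      );
      res.json(rows);
    });

    app.listen(3000);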
If you store the data in files, whether in JSON format or anything else, you will run into all sorts of problems that people stopped worrying about once databases took over that job: size limitations, locking, you name it. It's good enough when you have one user, but the moment you add more, you'll be solving so many problems that you would probably end up writing an entire database engine just to handle the files for you, when all along you could simply have used an actual database.
Do note: don't take my word for it, as I am not an expert in this field, so let others post their answers and judge by those. Enough people here on Stack Overflow have more experience than I do. These are not entirely my own words; I have kept the parts I knew to be true and added some knowledge of my own. Have a great time making your website!
For MySQL: you can select specific rows or specific columns with queries, filter data based on a key, and order results alphabetically.
Downside: you need a REST API to fetch the data, because the database can't be accessed directly from the browser; you have to write backend code in PHP, Python, or whatever language you prefer.
For a JSON file: no backend code is needed; the file can be fetched directly with a GET request.
Downside: no filtering, ordering, or queries of any kind; you have to do all of that yourself in the client (see the sketch below).
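A sketch of what that manual work looks like in the browser (the file and field names are hypothetical):

    // The whole 30-50 MB file has to come over the wire before anything
    // can be filtered:
    async function search(query) {
      const response = await fetch('/data/entries.json'); // plain GET, no backend
      const entries = await response.json();              // all 60,000 entries
      return entries.filter(e =>
        e.title.toLowerCase().includes(query.toLowerCase())
      );
    }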
I want to build an application that will serve a lot of people (more than 2 million), so I think I should use Google Cloud Datastore. However, I also know there is the option of using Google Cloud SQL and still serving a lot of people with MySQL (like Facebook and YouTube do).
Is it a correct assumption that I should use Datastore rather than the relational Cloud SQL with this many users? Thank you in advance
To give an intelligent answer, I would need to know a lot more about your app. But... I'll outline the biggest gotchas I've found...
Google Datastore is effectively a distributed hierarchical data store. To get the scalability they wanted, there had to be some compromises. As a developer, you will find that these range from easy to work around, through difficult, to impossible to work around. The latter is far more likely than you would ever assume.
If you are accustomed to relational databases and to manipulating data across multiple tables within the same transaction, you are likely to pull your hair out with Datastore. The biggest gotcha is that transactions are only supported across a limited number of entity groups (5 at the time of writing). To give a simple example: say you have a simple parent-child relationship and you need to update child records under more than 5 parents within one transaction... it can't be done (yes, really). If you reorganize your data structures and put all of the former child records under a single entity so they can be updated in a single transaction, you hit another limitation: you can't reliably update the same entity group more than once per second (yes, really). And if you query an entity type across parents without specifying the root entity of each, you get what is euphemistically referred to as "eventual consistency"... which means it isn't consistent (yes, really).
The above is all in Google's documentation, but you are likely to gloss over it if you are just getting started (of course it can handle it!).
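For illustration, this is roughly what the cross-group constraint looks like with the Node.js client (@google-cloud/datastore); the kind and key names here are made up:

    const { Datastore } = require('@google-cloud/datastore');
    const datastore = new Datastore();

    async function touchParents() {
      const tx = datastore.transaction();
      await tx.run();
      // Each distinct root key below is a separate entity group; a single
      // transaction may only span a limited number of them (5 at the time
      // of writing), so adding a sixth parent would make commit() fail.
      for (const parent of ['p1', 'p2', 'p3', 'p4', 'p5']) {
        const key = datastore.key(['Parent', parent, 'Child', `c-${parent}`]);
        tx.save({ key, data: { updated: new Date() } });
      }
      await tx.commit();
    }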
It is not strictly true that Facebook and YouTube use MySQL to serve the majority of their content to the majority of their users. Both rely mainly on very large NoSQL stores (Cassandra and Bigtable) for scalability, and probably use MySQL for smaller-scale work that demands more complex relational storage. Try to use Datastore if you can, because you can start for free, and it will also save money when handling large volumes of data.
It depends on what you mean by 'a lot of people', what sort of data you have, and what you want to do with it.
Cloud SQL is designed for applications that need a SQL database: it can handle any query you can write in SQL, and it ensures your data is always in a consistent state.
Cloud SQL can serve up to 3200 concurrent queries, depending on the tier. If the queries are simple and can be served from RAM, they should take just a few milliseconds, and assuming your users each issue about one request per second, it could support tens of thousands of simultaneously active users. If, however, they run more complex queries such as searches, or write a lot of data, the number will be lower.
If you have a simple set of queries, are less concerned about immediate consistency, or expect much more traffic, then you should look at Datastore.
I want to make an application like docs.google.com (without its API, completely on my own server) using
frontend: Backbone
backend: Node
Which database do you think is better, MySQL or MongoDB? It should support good scalability.
I am familiar with MySQL from PHP, and I will be happy if the answer is MySQL.
But many tutorials I saw used MongoDB. Why did they use MongoDB rather than MySQL?
What should I use?
Can anyone give me a link to a sample application (with source) built using Backbone, Node, and MySQL (or Mongo)? Or at least an app with Node and MySQL?
Thanks
With MongoDB, you can just store JSON objects and retrieve them fully formed, so you don't really need an ORM layer, and you spend less CPU time translating your data back and forth. The developers behind MongoDB have also made horizontal scaling a higher priority, and they let you run arbitrary JavaScript code to pre-process data on the DB side (allowing map-reduce style filtering of data).
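A minimal sketch of that with the official Node.js driver (the database, collection, and field names are made up):

    const { MongoClient } = require('mongodb');

    async function demo() {
      const client = new MongoClient('mongodb://localhost:27017');
      await client.connect();
      const docs = client.db('app').collection('documents');

      // The JSON object goes in as-is...
      const { insertedId } = await docs.insertOne({
        name: 'Quarterly report',
        ownerId: 42,
        body: { paragraphs: ['...'] }
      });

      // ...and comes back out fully formed; no ORM translation step.
      return docs.findOne({ _id: insertedId });
    }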
But you lose some things for these gains: you can't join records. Actually, the JSON structure you store could only be produced via joins in SQL, but in MongoDB that one structure is the only shape your data has, while in SQL you can query differently and get your data represented in alternate ways far more easily. So if you need to do a lot of analytics on your database, MongoDB will make that harder.
The query language in MongoDB is "rougher", in my opinion, than SQL's: partly because it's less familiar, and partly because the querying features feel haphazardly put together, constrained to be valid JSON, with often several ways of doing the same thing, some of them older and less useful or regularly formatted than the others. And there's the added complexity of array and sub-object types over SQL's simple row-based design: the syntax has to distinguish between arrays that contain some of the values you specify, all of them, only them, or none of them. The same distinctions apply to object keys and their values, which makes the query syntax harder to grasp. (And while I can see the need for edge cases, the $where query parameter, which takes a JavaScript function that is run on every record and returns a boolean, is a siren song: it makes it easy to define which objects you want back or not, but it has to run against every record in the database, and no indexes can be used.)
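For example, assuming a hypothetical tags array field and the docs collection handle from the sketch above:

    async function compare(docs) {
      // An operator query: can use an index on tags
      const withAll = await docs
        .find({ tags: { $all: ['landscape', 'sunset'] } })
        .toArray();

      // A $where query: runs JavaScript against every record, no index helps
      const withWhere = await docs
        .find({ $where: 'this.tags.indexOf("landscape") !== -1' })
        .toArray();

      return { withAll, withWhere };
    }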
So it depends on what you want to do, but since you say it's for a Google Docs clone, you probably don't care about any representation other than the document itself, and you will probably only query by document ID, document name, or the owner's ID/name; nothing too complex.
Then I'd say that being able to take the JSON representation of the document your user is editing, throw it straight into the database, and index those few important fields is worth the price of learning a new database.
I was also struggling with this choice, looking at the hype around using MongoDB for tasks it was not built for. So my 2 cents:
Storing and retrieving the hierarchical objects that your documents probably are is easier in MongoDB, as David says. It becomes more complicated if you want to store documents bigger than 16 MB, though; MongoDB's answer to that is GridFS.
Organising documents into folders and groups, and keeping track of which user owns which documents and who they have granted access to, is definitely easier with MySQL: you have the advantage of powerful SQL queries with joins, the built-in EXPLAIN tool for analysing query plans, triggers, functions, stored procedures, and so on. MongoDB is nowhere near that.
So what prevents you from using both: MySQL to organize the documents, and MongoDB to store one collection of documents identified by ID (or several collections, one per document type)? It seems to me the best choice, and using two databases in one application is really not a problem.
MySQL will store users, groups, folders, permissions, whatever you fancy, and for each document a reference to the collection and the document ID (MongoDB even has a special format for such references: DBRefs). MongoDB will store the documents themselves in collections if they are all under 16 MB, or the previews and metadata in collections and the full documents in GridFS.
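A sketch of that split, assuming the mysql2 and mongodb drivers and hypothetical table and field names:

    async function createDocument(mysqlPool, mongoDb, ownerId, title, body) {
      // 1. The document body itself goes into MongoDB...
      const { insertedId } = await mongoDb
        .collection('documents')
        .insertOne({ title, body });

      // 2. ...while MySQL keeps the relational side (owner, folders,
      //    permissions) plus a reference to the Mongo _id:
      await mysqlPool.execute(
        'INSERT INTO documents (owner_id, title, mongo_id) VALUES (?, ?, ?)',
        [ownerId, title, insertedId.toString()]
      );
      return insertedId;
    }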
David provided a good answer. A few things to add to it.
MongoDB's flexible nature permits easy agile/iterative development.
MongoDB's drivers, like Node.js itself, are asynchronous in nature and work very well within asynchronous environments.
Mongoose is a good ODM (object document mapper) that makes working with MongoDB from Node.js feel very natural. Unlike most ORMs, it is a very thin layer.
For Google Docs-like functionality, the flexibility and very rich data structures provided by MongoDB feel like a much better fit.
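A minimal Mongoose sketch (the field names are illustrative, not anything the question prescribes):

    const mongoose = require('mongoose');

    const documentSchema = new mongoose.Schema({
      title:   { type: String, index: true },
      ownerId: { type: mongoose.Schema.Types.ObjectId, index: true },
      body:    mongoose.Schema.Types.Mixed // free-form document content
    });

    const Doc = mongoose.model('Doc', documentSchema);

    // Usage: await Doc.create({ title: 'Untitled', ownerId, body: {} });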
You can find some good example posts by searching for mongoose, node and MongoDB.
Here's one that also uses Backbone.js and looks good: http://mattkopala.com/blog/2012/02/12/getting-started-with-nodejs/
Some parts of my web app would work very well with an RDBMS, such as user and URL handling: I want to normalize users, emails, hosts (e.g. stackoverflow.com), and URLs (e.g. https://stackoverflow.com/questions/ask) so that updating a value in one place updates it everywhere, and to minimize redundancy.
But some parts of my web app would work very well with a document-based database like Mongo, because they have a lot of components that would be handled more efficiently as embedded objects.
Would it make sense to use MySQL for the relational objects and Mongo for the document objects, or would it not be worth the hassle of managing two types of database? I know that Mongo has references, but I get the impression that it is not really designed or optimized for them.
Thanks!
PS: I read this: Using combination of MySQL and MongoDB. It scratches the edge of what I am asking, but it is really a completely different question.
We use Mongo and MySQL in unison. Yes, there is additional maintenance involved, but it is about using the right tool for the job. We use Mongo for more real-time scenarios where we need fast reads and writes and can do without persisting data for long periods of time; MySQL for everything else.
That being said, your needs may be unique and you need to figure out the right tool for the job.
I recently built a system using MySQL as the RDBMS managing users and blogging, and MongoDB for searchable attributes. It works well, although keeping data in sync, especially user IDs, requires a bit of work. It is basically a case of choosing the right tool for the job.
My question is similar to one another member posted here... we are trying to develop an application based on a land registry in Paraguay that may hold terabytes of information, with images as well as ordinary data.
The problem is that we want to reduce the cost of operation to the minimum possible, because it's effectively a competition between companies, and for that reason we want to use a free database... I have read a lot about this but I am still confused. Bear in mind that the people who will use it are government staff, so the database also has to be easy to manage.
What would you recommend?
Thank you very much
MySQL and even SQLite already have spatial indexes, so no problem there.
To store the image data you could use a BLOB column, but it's usually much better (and easier to optimise) to store the images as files. To keep the files related to the DB records, you can either put the full path (or URL) in a VARCHAR column, or store each image at a path calculated from the record's ID.
To scale smoothly into multi-terabyte territory, plan from the start on using several servers. If the data is read-mostly, an easy approach is to store the images on different hosts, each with a static HTTP server, and record in the database where each image lives; then put a web-app frontend on the database, where the URL for each image points directly at the appropriate storage server. That way you can keep adding storage without creating a bottleneck on the 'central' server.
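A sketch of deriving each image's location from its record ID instead of storing blobs (the host list and directory layout are made up):

    const hosts = ['img1.example.com', 'img2.example.com', 'img3.example.com'];

    function imageUrl(id) {
      const host = hosts[id % hosts.length];              // spread across servers
      const bucket = String(id % 1000).padStart(3, '0');  // avoid huge directories
      return `http://${host}/images/${bucket}/${id}.jpg`;
    }

    // imageUrl(15734) -> 'http://img3.example.com/images/734/15734.jpg'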
PostgreSQL, SQL Server 2008, and any recent version of Oracle all have spatial indexing, table partitioning, and BLOBs, and are capable of acting as the back end of a large geographic database. You might also want to check out two open-source GIS applications, GRASS and QGIS, which might let you do what you want with less work than writing a bespoke application. Both can use PostgreSQL and other database back ends.
As for support, any commercial or open-source database is going to need the attention of a competent DBA if you want it to work well at terabyte size. I don't think you will get away with a model of pure end-user support; attempts to do so are unlikely to work.
It sounds like the image files will account for a considerable amount of your storage. Don't store them in the database; just store the file location details there.
(If you want access via the internet, try Amazon Storage. It isn't free, but it is very cheap, and they handle the scalability for you.)
Another cautionary note on using BLOBs/CLOBs: I've been bitten by runaway database growth from storing them inside the DB.
What about storing the GIS maps on a separate server and keeping just the lat/long "shape" of each area within the DB? The GIS data can then be updated separately, without the cost of storing the images in the main database.
The database stays smaller to administer and cheaper to back up.
Whilst it doesn't meet your criterion of being free, I would strongly recommend you consider SQL Server 2008, because two features in this version could help:
FILESTREAM - allows you to store your binary images in the filesystem rather than in the database itself. This will make your database much more manageable whilst still allowing you to query the data in the usual way.
GEOGRAPHY DATA TYPE - support for geospatial (lat/long) data types is likely to be very valuable to your solution.
Good luck!
Use ESRI's Image Server. You won't need a database to serve the images, and it's very easy to use. It works directly off files, it's fast, and it handles many image formats. Plus, it does image processing on the fly and supports many clients: AutoCAD, Microstation, ArcMap, ArcIMS, ArcServer, etc.