I have a JavaScript library (SincIt) that I would like to use to synchronise my web app with a MySQL database. However, SincIt only works with MongoDB at the moment.
I could probably write a MySQL adapter for SincIt, since the library is modular, but I wonder if there is an adapter that translates MongoDB instructions to SQL.
MySQL is a relational database manipulated with SQL, which is grounded in relational algebra. MongoDB is a document database that doesn't support relations or joins.
It does allow hierarchical nesting of documents, but it's simply an entirely different paradigm.
The most important distinction in this case is that MongoDB is schemaless. With a Mongo collection you never need to do the equivalent of a CREATE TABLE statement. Furthermore, Mongo has no problem with you starting a collection with one document of a specific JSON structure and then adding documents with entirely different structures to the same collection.
Because Mongo works with JSON data, with a relational database you would also have the problem of converting table data to JSON documents and vice versa, which really isn't possible in any generic way.
With MySQL you of course have table structures maintained in the data dictionary, and if anything changes you need to ALTER the table. You could probably implement a generic structure of rows where the entire data store lives in a blob, stored in the same JSON format that SincIt expects, but at that point you might as well just use Mongo.
With that said, if there's some business rule that necessitates it, the fastest way to get it working with MySQL would probably be to do what I just suggested and have a generic row structure with something like:
id
optype (set, update, delete?)
data (blob storing the json payload)
parent_id
Just from a quick perusal of the SincIt docs it appears you'd need something to support the "linked list" aspect of the system.
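A minimal sketch of what such a table might look like, purely as an illustration (the table name, column types and the ENUM values are my assumptions, not anything SincIt prescribes):

    CREATE TABLE sync_entries (
        id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        optype    ENUM('set', 'update', 'delete') NOT NULL,  -- operation type
        data      BLOB NOT NULL,                             -- JSON payload in the format SincIt expects
        parent_id BIGINT UNSIGNED NULL,                      -- previous entry, to support the linked-list aspect
        FOREIGN KEY (parent_id) REFERENCES sync_entries (id)
    ) ENGINE=InnoDB;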
I have a mobile app backend written by someone else in Node (Express). It manages all data in a MySQL database, but stores notifications (for new customer signups, etc.) in MongoDB. Is there a performance gain from this, or should I use one database throughout the project?
MySQL is well suited where you need flexibility, high performance, reliable data protection and straightforward data management. Proper indexing will usually resolve performance issues, ease interaction with the data and keep things robust.
But if your data is unstructured and complex to handle, or if you can't easily define your schema up front, you are better off opting for MongoDB. What's more, if you need to handle a large volume of data and store it as documents, MongoDB will help you a lot.
The upshot: one isn't necessarily better than the other. MongoDB and MySQL serve different niches.
MongoDB is good for handling large volumes of unstructured data. The best thing about MongoDB is that it is not bound to a schema design.
You can use MongoDB to store notifications even though they may number in the billions; MongoDB can store and retrieve that data, and for this kind of unstructured workload retrieval tends to be faster in MongoDB than in MySQL.
Check out a MongoDB vs. MySQL comparison for more detail.
I have a program with a priority queue (PQ) so huge that it does not fit in memory. It was decided to move some of the data to a MySQL database (DB) in the following way: new elements are put into the DB instead of the PQ, and when the PQ is emptied, it is refilled from the entries in the DB. But this approach turned out to break the priority ordering. Is there a solution that combines the PQ with the DB without corrupting the priority ordering?
For reasons outside my control, I cannot get rid of the PQ and use only the DB.
Your question is rather vague on the functionality, but I think the idea is wrong.
Someone seems to have had the idea of using the database as secondary storage for an in-memory application. That doesn't really make much sense; normally you would use a simple file for this. Although you can use a database to manage secondary/tertiary storage, a database does many other things, so it is like using a smartphone only as a clock.
If you are going to use a database, then store the entire structure in the database and develop an API for it that meets your needs.
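As a rough sketch only (table and column names here are my assumptions), a table-backed queue and an atomic "pop" could look like this:

    CREATE TABLE priority_queue (
        id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        priority INT NOT NULL,
        payload  BLOB NOT NULL,
        KEY idx_priority (priority)
    ) ENGINE=InnoDB;

    -- push an element
    INSERT INTO priority_queue (priority, payload) VALUES (5, 'some payload');

    -- pop the highest-priority element inside a transaction
    START TRANSACTION;
    SELECT id, payload
        FROM priority_queue
        ORDER BY priority DESC, id ASC
        LIMIT 1
        FOR UPDATE;
    DELETE FROM priority_queue WHERE id = 42;  -- 42 stands in for the id returned by the SELECT
    COMMIT;

That keeps the ordering in one place (the database) instead of splitting it between the in-memory queue and the table, which is where the corruption described in the question comes from.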
If you want help with how to structure the data, then write another question and include:
sample data
how the priority queue will be used
any ideas you have on the data structure
I need to create a database for dealing with clickstream data (from ~240 subdomains). I use a JavaScript snippet to grab information such as Host, Page, Date, userID, Referer, HostName, RefererPath and uniqueUserID for each click, and then insert the data into the database through a Java dynamic web application. There are about 9 million new records each day, and I have to insert new records every minute. Another application needs to be able to retrieve information about pageviews/unique visitors/etc. for a certain article/subdomain over the last 10 min, 20 min, 30 min, 1 hour ... 24 hours. I only need to keep records for the last 3 months.
Initially I thought about using MySQL, as I'm only interested in open source. But I'm now considering NoSQL solutions. The problem is that I've only had experience with relational databases and am not really able to tell whether NoSQL would be a better solution here. Also, which database should I use if I choose to go with NoSQL, and would a key-value store be the best way to go?
I'm guessing consistency isn't critical for this data (statistics?), so you could indeed give up a bit of it. NoSQL seems a good choice, and a key-value store would also be my pick. Now the real question is: which one is the best fit?
I'd give consideration to Redis and Riak (basically the most well-known ones):
Riak (AP system):
Fault-tolerant (masterless with partitioning and replication)
Map reduce
Full text search
BASE
Redis (CP system):
Really fast
In-memory: you need RAM! That also means you want replication so you don't lose everything on a crash. Redis also takes disk snapshots, I believe.
Master/slave with re-election
BASE
Both have a lot more features; you should go read the documentation for gotchas. Redis is primarily used as a cache since it's fast, whereas Riak focuses on fault tolerance. Given your scalability requirements, both can satisfy your needs, so you must choose according to the points above.
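For comparison with the relational route mentioned in the question, the time-window lookups described there (pageviews and unique visitors for the last N minutes) boil down to a query along these lines in MySQL; the table and column names below are assumptions based on the fields listed in the question:

    SELECT page,
           COUNT(*)                AS pageviews,
           COUNT(DISTINCT user_id) AS unique_visitors
    FROM clicks
    WHERE host = 'some.subdomain.example.com'
      AND clicked_at >= NOW() - INTERVAL 30 MINUTE
    GROUP BY page;

In a key-value store such as Redis you would typically precompute these figures as clicks arrive (per-minute counters, per-window sets of user IDs) rather than scanning rows at query time.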
As Google isn't really helpful here: what's the best way of migrating a MongoDB database to a MySQL database? Any best-practice examples?
Thanks!
After you have completed scarpacci's exercise and have an idea of the mappings, I would then look at mongoexport. Be careful about type fidelity, though, and you will then have to import the resulting CSV/TSV into MySQL in a sane manner as well.
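For the mongoexport route, a minimal sketch of the MySQL side (the file name, table and column list are assumptions for illustration) would be something like:

    LOAD DATA LOCAL INFILE 'users.csv'
        INTO TABLE users
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
        (name, email, created_at);

Watch the type fidelity issues mentioned above: dates, ObjectIds and nested documents exported to CSV arrive as plain strings and may need cleaning up afterwards.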
The other option, especially if you run into typing issues, is to simply pull all of your data out programmatically using your language and driver of choice and insert it directly into MySQL, again using your favorite driver - this gives the most control, but is also the most work.
This might be a bit of a non-starter, because people usually implement things in Mongo or other NoSQL databases in order to have a schemaless design. That is diametrically opposed to the concept of a relational database.
Without looking at the sort of data structures you have in Mongo, this would be impossible to answer.
@Tronic: I would start by using the mappings MongoDB provides on its site:
Mapping Guide
I would then take the documents in each of your collections and try to break them out into proper database entities (tables). The entities/attributes could be difficult to design given Mongo's schemaless design (as @Mike Brant indicates).
Hope this helps.
--S
I need to document a legacy database schema for a new employee, and as there's no design document I'd like to generate one from the existing schema. As the tables are MyISAM, the foreign key relationships won't produce a nice graph. I'm interested in producing a document showing the important tables, their columns, types and remarks.
Are there any tools available to produce a nice document (PDF, DOC, HTML or RTF, say) from the database schema's metadata? Or am I better off writing a utility to export this myself (I was thinking of dumping it to XML and then transforming it with XSLT)? The schema is running on MySQL 5.
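If I ended up writing it myself, I assume I could pull most of what I need straight out of information_schema with something along these lines and feed the result into the XSLT step, but a ready-made tool would save me the effort:

    SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_COMMENT
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'my_database'   -- replace with the schema to document
    ORDER BY TABLE_NAME, ORDINAL_POSITION;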
After some research and looking at the options available, I've decided to use SchemaSpy, which does pretty much what I wanted.
It produced the results in a reasonable format, and also provided an XML dump of the metadata which I was able to use to write an XSLT transformation matching what I wanted in the first place.
Tip came from answer to question 1869.
You can use MySQL Workbench or its predecessor DBDesigner 4 (open source):
Capture the entire database diagram graphically with the reverse-engineering tool.
Adjust and comment anything you need.
Use the HTML report plugin included in the plugins menu.
Done!
DeZign for Databases can do that for you. Auto-layout is applied after importing your database. You can then rearrange the objects in the diagram and generate an HTML (or PDF) report from your database, including a clickable diagram.
If you're using phpMyAdmin, you can flip over to the Designer tab to get a visual schema. You could print or screenshot that if you just want to show table relationships.
Sybase PowerDesigner is one of the best tools for reverse-engineering a DB, producing nice diagrams and nice exports.