I've started investigating Couchbase 2.0 features. I know CouchDB exposed only an HTTP API, but I'm a bit surprised that Couchbase 2.0 (evolving from 1.8, where you used the memcached protocol exclusively) exposes views in the same manner.
What are the considerations here? Isn't it possible to query a view using the memcached protocol?
With Couchbase Server 2.0, it's not possible to query the view using the Memcached API. However, keep in mind that the view is primarily a secondary index that will be used either to retrieve some projection of the original document or to retrieve the original document from the ID returned by the view query.
In other words, the pattern (Python in this case) would be:
view = bucket.view("_design/beer/_view/by_name")
Then as you iterate over the view, you'd take the ID from each row and retrieve the original document using the Memcached API. Again, in Python:
for row in view:
    doc_id = str(row["id"])
    # get() returns a tuple; the document value is the third element
    original_doc = bucket.get(doc_id)[2]
Related
I'm not even sure exactly what constitutes an endpoint and how to turn it into an ORM. I'm new to Angular, but from my understanding an endpoint is where you get data from a server, be it an online API or a database query using SQL.
From my research, ORM basically makes the code higher-level and gets rid of the need for SQL, but how do you do it? And what if there is no SQL, and it's a request to an online API using a URL?
For clarification, I am working on a website that uses Angular. There is a database, an API, and a UI, and the API pulls data from online APIs and from the database. So my question is about how to refactor both types of endpoints.
An ORM is a way to work with objects and let a framework take care of converting those objects into the form the DB requires, and converting data from the DB back into objects. The UI has nothing to do with it; it is purely a concern of the data layer or data tier of your backend. We could even go further and say that the controllers and other parts of your backend need not be aware of the ORM.
So the abstraction you have is:
Irrespective of the type of DB, you talk in terms of your model classes.
Whether the data comes from the DB or from another online API call made by the backend, it ultimately becomes a model object and can be sent to the UI.
The UI layer need not (and should not) know how your backend obtains the details the UI requires, so the structure should be driven by common sense, the domain, your requirements, etc., and not by how the backend acquires data.
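To make the object <-> row conversion concrete, here is a minimal hand-rolled sketch in Python (not a real ORM such as SQLAlchemy; the `Product` model and `products` table are invented for illustration). Everything outside `fetch_products` talks only in terms of model objects:

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical model class: callers deal in Product objects, never in rows.
@dataclass
class Product:
    id: int
    name: str
    category: str

def fetch_products(conn, category):
    """The 'ORM' part: the object <-> row translation lives in one place."""
    rows = conn.execute(
        "SELECT id, name, category FROM products WHERE category = ?",
        (category,),
    )
    return [Product(*row) for row in rows]

# Demo with an in-memory database standing in for the real DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT, category TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'IPA', 'beer')")
beers = fetch_products(conn, "beer")
```

A real ORM generates this mapping layer for you; the point is that it lives below the controllers, in the data tier.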
And the same applies to API endpoints too. Create them in a meaningful way for your application. Example:
// these might be some of the endpoints required for an e-commerce site
/product/recommendations // returns recommended products for a user
/product/categories // returns categories of products
/user/getCartItems // returns the cart items of a user
/orders/cancel // cancels an order
See how the API endpoints are completely unaware of how the backend handles/retrieves the data.
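As a framework-agnostic sketch of the same point (all names invented): the route table exposes meaningful URLs, while whether a handler hits the DB or an online API stays invisible to the caller:

```python
# Hypothetical handlers: one would call an external API, one would query the DB.
def recommendations_from_external_api(user_id):
    # imagine an HTTP call to an online API here
    return [{"id": 7, "name": "Stout"}]

def categories_from_db(user_id):
    # imagine a SQL query (or ORM call) here
    return [{"id": 1, "name": "beer"}]

# The endpoints say *what* the client gets, not *where* it comes from.
ROUTES = {
    "/product/recommendations": recommendations_from_external_api,
    "/product/categories": categories_from_db,
}

def handle(path, user_id):
    return ROUTES[path](user_id)
```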
I want to implement a method that receives nested objects (around 40 nested objects in JSON format), applies some business logic to them, and then inserts them into some tables.
Is it better to get all the data objects in one web service method, or to write separate web service methods and break up the nested input?
Are there any standards for implementing web services?
Can anyone recommend books or articles that explain web service implementation standards?
Thanks.
You really need to decide for yourself whether the infrastructure (e.g. network connection, server capacity) can handle the "large" dataset, and whether it is useful for your consuming clients to receive all of it (they might need only a part of it most of the time).
If, say, 80% of the time a subset suffices, you can return that subset and expose the rest via HATEOAS.
Those are just links the client can follow if they need data that was not sent in the initial call to your API.
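For example, a response that inlines the common subset and links to the rest might look like this (the field names and link relations are invented; only the pattern matters):

```python
# A HATEOAS-style response: the frequently needed fields inline,
# the rarely needed data reachable via links the client can follow.
order = {
    "id": 42,
    "status": "shipped",
    "_links": {
        "self": {"href": "/orders/42"},
        "items": {"href": "/orders/42/items"},      # not sent inline
        "invoice": {"href": "/orders/42/invoice"},  # fetched only on demand
    },
}
```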
There are similar posts like this on the internet, but they seem to be targeted towards lower level languages like Java. NetBeans for example seems to have this kind of functionality.
Here is what I want to do:
I have a large dataset of items. I want to create a RESTful API that would enable my users to perform complex queries to retrieve data from the MySQL database on my backend.
The API needs to be able to:
SELECT a table to retrieve values from
Be able to use common MySQL aggregate functions such as COUNT, SUM, and AVG on the results
Create WHERE conditions
Security is not an issue, as this is simply an MVP for now; I will take security into consideration in a future iteration. Are there any Ruby gems that provide a framework for constructing this kind of system?
I am open to using either Sinatra or Rails for this system.
Maybe this can help you: rails-api
Rails::API is a subset of a normal Rails application, created for applications that don't require all the functionality that a complete Rails application provides. It is a bit more lightweight, and consequently a bit faster, than a normal Rails application. The main example of its usage is in API-only applications, where you usually don't need the entire Rails middleware stack nor template generation.
Or you can use the grape gem.
I'm developing a mobile app wrapped in Cordova that runs alongside our web-based application, based on PHP & MySQL. The mobile app uses local-storage & gets data via a layer of services that have been written to exchange data between the mobile app & the MySQL database. The mobile app only uses a subset of data stored in the main MySQL DB.
I am looking to replace my mobile app's local-storage solution with PouchDB, and I see that it requires CouchDB... which got me thinking of a potential configuration/solution, and I'd like to find out whether it would be advisable and feasible.
Would it be feasible to set up a CouchDB database that acts as a mediator/slave between the main MySQL database and the mobile app's PouchDB? The mobile service layer would use this database (as well as the main MySQL DB if necessary), and data updates between the main MySQL DB and CouchDB would be pushed periodically via cron. The CouchDB instance would only store the subset of data from the MySQL DB that is relevant to the mobile app.
Does this solution sound like overkill, or like a good idea? Is there a better way of approaching the setup described above? I do like the idea of PouchDB + CouchDB, but I don't want to rewrite my entire web app to use CouchDB, while an additional level of abstraction providing a subset of mobile-specific data seems useful.
Thanks
Trace
PouchDB running on Node can actually use any LevelDOWN-based adapter, and there is one for MySQL. I haven't tested it. More info here: http://pouchdb.com/adapters.html#pouchdb_in_node_js.
However, this is probably not a good fit for your use case, because the data that PouchDB will store in MySQL will be totally different from the data your app is currently using in MySQL. In order to support replication, PouchDB keeps the revision history of every document stored (think git), which is different from a traditional database like MySQL, which just stores tables and rows that can be deleted/inserted/updated. Databases like CouchDB and PouchDB were built from the ground up to support replication, which is why this versioning system exists.
That being said, if you write your own sync layer between MySQL and CouchDB it could work in theory, but it would probably be so much work that you would lose the benefits of CouchDB and PouchDB's built-in replication.
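To illustrate why, here is a rough Python sketch of only the easy part of such a sync layer: the row-to-document mapping and a one-way push (the table name, the `product:` prefix, and the URL shape are assumptions). Everything it leaves out, `_rev` tracking, conflicts, and deletions, is exactly the work CouchDB's replication normally does for you:

```python
import json
import urllib.request

def row_to_doc(row):
    """Map a MySQL row (as a dict) to a CouchDB document.
    Deriving _id from the primary key keeps repeated pushes targeting
    the same document, but real sync also needs _rev and delete handling."""
    doc = dict(row)
    doc["_id"] = "product:%s" % doc.pop("id")
    return doc

def push_doc(couch_url, doc):
    """PUT one document to CouchDB (no _rev/conflict handling here)."""
    req = urllib.request.Request(
        "%s/%s" % (couch_url, doc["_id"]),
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)

doc = row_to_doc({"id": 3, "name": "Lager", "price": 4.5})
# push_doc("http://localhost:5984/mobile_subset", doc) would upload it
```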
I'm trying to achieve the very same schema with our ERP (SQL Server based).
Now I'm just trying to figure out whether PouchDB on the mobile side would be sufficient for the requirements, for example:
To be able to filter a given "price list" by "product description". Think of a LIKE in SQL, as in:
SELECT * FROM Prices WHERE Description LIKE '%text%'
To be able to filter a given "price list" by "product category", OR by "product vendor"
Also, the mobile app would just need a subset of the full SQL schema/data. My idea was to keep the mobile PouchDB <-> CouchDB replication part easy (which can be challenging with just WebSQL <-> SQL Server), and then later "replicate" the data added to CouchDB back to SQL Server with a process, think a cron task.
So far I've found:
Building a PouchDB view on the client side can take ages, just for the first point of being able to do that LIKE operation. To solve this, I've built an auxiliary WebSQL DB which simply contains (pouchdb_id, pouchdb_text), which I rebuild after replication by inserting the PouchDB keys and the concatenated text fields of each object. Then when I need a LIKE, I run it on WebSQL and fetch the docs from PouchDB using db.allDocs({ keys: [sql returned keys array] })
The second point is under analysis right now; any ideas would be nice to share.
You can use the following Node.js package: https://www.npmjs.com/package/couchdb-to-mysql.
The package listens for CouchDB changes and reflects them on MySQL.
Example
var converter = require('couchdb-to-mysql');
var cvr = converter();
cvr.connect();
cvr.on('created', function (change) {
// replicate changes on mysql
});
Methods
var converter = require('couchdb-to-mysql')
var cvr = converter(config={})
Optionally pass in a config:
config.couch.host
config.couch.port
config.couch.database
config.mySQL.host
config.mySQL.port
config.mySQL.user
config.mySQL.password
config.mySQL.database
Events
cvr.on('created', function (change) {})
Every time a document is created, a created event fires.
cvr.on('updated', function (change) {})
Every time a document is updated, an updated event fires.
cvr.on('deleted', function (change) {})
Every time a document is deleted, a deleted event fires.
We would like to hook up an iPhone app directly to a database in the cloud. From the iPhone app, we would like to use some REST API to retrieve data from the cloud database. We prefer using MySQL or Mongo as the data store.
We prefer not setting up our database system on AWS or another hosted provider. We would like a cloud service that offers the database layer out of the box.
What database-as-a-service companies support this, and what are the pros/cons of each?
We looked around, and it seems like MongoLab is the only service offering a REST API.
Mongo is a great database for API interaction, as the query language uses JavaScript expressions. Here are open-source API libraries implemented in Python:
Sleepy Mongoose
Eve
Primary advantage
JavaScript can handle the querying part of data retrieval
Primary disadvantage
The API libraries above have difficulty capturing complex queries
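As a small illustration of the query-expression point: with MongoLab's REST API, the Mongo query document travels as JSON in a q query-string parameter (treat the exact URL shape below as an assumption based on its documented style):

```python
import json
import urllib.parse

def mongolab_query_url(base, collection, query, api_key):
    """Build a REST query URL: the Mongo query document is JSON-encoded
    and passed in the 'q' parameter."""
    qs = urllib.parse.urlencode({"q": json.dumps(query), "apiKey": api_key})
    return "%s/collections/%s?%s" % (base, collection, qs)

url = mongolab_query_url(
    "https://api.mongolab.com/api/1/databases/mydb",
    "users",
    {"age": {"$gt": 21}},  # the same expression you would write in the Mongo shell
    "MY-KEY",              # placeholder API key
)
```

It is exactly the complex queries (joins, multi-stage aggregations) that get awkward when squeezed through a URL like this, which is the disadvantage noted above.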