Using CouchDB to serve HTML

I'm trying to use CouchDB with an HTML/standalone REST architecture. That is, no app server other than CouchDB, with Ajax-style JavaScript calling CouchDB directly.
It looks like cross-domain scripting is a problem. I was using CloudKit/Tokyo Cabinet before, and the callback function needed in the URL seemed to be screwing it up.
Now I'm trying CouchDB and getting the same problem.
Here are my questions:
1) Are these problems happening because a REST/JSON store like CouchDB or CloudKit is running on a different port from my web page? They're both running locally and called from "localhost".
2) Should I let CouchDB host my page and serve the HTML?
3) How do I do this? The documentation didn't seem so clear...
Thanks,
Alex

There is a simple answer: store static HTML as attachments to CouchDB documents. That way you can serve the HTML directly from CouchDB.
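For example, here is a rough sketch using curl, assuming a database named mydb and a document named site (the rev placeholder comes from the first response; CouchDB won't accept the attachment without it):

curl -X PUT http://localhost:5984/mydb/site -d '{}'
curl -X PUT "http://localhost:5984/mydb/site/index.html?rev=<rev-from-first-response>" -H "Content-Type: text/html" --data-binary @index.html

CouchDB will then serve the page at http://localhost:5984/mydb/site/index.html.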
There is a command-line tool to help you do this, called CouchApp.
The book Mikeal linked to also has a chapter (Managing Design Documents) on how to use CouchApp to do this.

3) You can use CouchDB show functions to generate HTML (or any other content type). For example:
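Here is a minimal show function; it lives in a design document, and the title and body fields are assumptions about your documents:

function(doc, req) {
  // served with a text/html content type;
  // requested at /mydb/_design/app/_show/page/<docid>
  return {
    headers: { "Content-Type": "text/html" },
    body: "<html><body><h1>" + doc.title + "</h1>" + doc.body + "</body></html>"
  };
}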

There are huge advantages to having CouchDB serve/generate your HTML.
For one thing, the pages (which are HTTP resources) are tied to the data or to queries on the data, and CouchDB knows to update the ETag when a page has changed. This means that if you stick nginx in front of CouchDB and say "cache stuff", you get for free all the caching you would normally have to build yourself.
I would push for nginx over Apache in front of CouchDB, because Apache isn't all that great at handling concurrent connections, while nginx and Erlang (CouchDB) are great at it.
Also, you can write these views in JavaScript, which is documented well in the CouchDB book (http://books.couchdb.org/relax/), or in Python using my view server (http://github.com/mikeal/couchdb-pythonviews), which isn't really documented at all yet, but I'll be getting to it soon :)
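A JavaScript map function is tiny. This is a generic sketch, assuming documents with type, date and title fields:

function(doc) {
  // index published posts by date
  if (doc.type === "post") {
    emit(doc.date, doc.title);
  }
}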
I hope that view servers in other languages start implementing the new features in the view server protocol as well so that anyone can write standalone apps in CouchDB.

I think one way is through mod_proxy in Apache. It forwards requests from Apache to CouchDB, so the browser only ever talks to one origin, which may solve the cross-scripting issue.
# Configuration file for proxy
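# Requires mod_proxy and mod_proxy_http to be enabled.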
ProxyVia ON
ProxyPass /couchdb http://<<couchdb host>>:5984/sampleDB
ProxyPassReverse /couchdb http://<<couchdb host>>:5984/sampleDB

I can't help thinking you need some layer between the presentation layer (HTML) and the model (CouchDB).
That way you can mediate requests and provide additional facilities and functionality. At the moment you seem to be rendering persisted objects directly to the presentation layer, and you'll have no facility to change or extend the behaviour of your system going forward.
Adopting a model-view-controller architecture will insulate your model from the presentation layer and give you some flexibility going forwards.
(I confess I can't advise on your cross-site-scripting issues)

Related

Connect Sproutcore App to MySQL Database

I'm trying to build my first SproutCore app and I'm struggling to connect it to a MySQL database, or any data source other than fixtures. I can't seem to find ANY tutorial except this one from 2009, which is marked as deprecated: http://wiki.sproutcore.com/w/page/12413058/Todos%2007-Hooking%20Up%20to%20the%20Backend
Do people usually not connect SproutCore apps to a database? If they do, how do they find out how? Or does the above-mentioned tutorial still work? A lot of the gem commands in the introduction already seem to differ from the official SproutCore getting-started guide.
SproutCore apps, as client-side "in-browser" apps, cannot connect directly to a MySQL or any other non-browser database. The application itself runs only within the user's browser (it's just HTML, CSS & JavaScript once built and deployed) and typically accesses any external data via XHR requests to an API or APIs. Therefore, you will need to create a service wrapper around your MySQL database in order for your client-side app to be able to load and update data.
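As a rough sketch of what that data loading looks like on the SproutCore side (based on SproutCore 1.x's SC.DataSource API; MyApp.Post, the /api/posts URL, and the response format are assumptions, not anything from the question):

MyApp.ApiDataSource = SC.DataSource.extend({
  fetch: function(store, query) {
    // ask the service wrapper for raw JSON via XHR
    SC.Request.getUrl('/api/posts').json()
      .notify(this, 'didFetchPosts', store, query)
      .send();
    return YES; // we handled this query
  },
  didFetchPosts: function(response, store, query) {
    if (SC.ok(response)) {
      // load the raw JSON records into the store
      store.loadRecords(MyApp.Post, response.get('body'));
      store.dataSourceDidFetchQuery(query);
    } else {
      store.dataSourceDidErrorQuery(query, response);
    }
  }
});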
There are two things worth mentioning. The first is that, since the SproutCore app contains all of your user interface and a great deal of business logic, your API can be quite simple and should only return raw data (such as JSON). The second is that the client-server design, while more tedious to implement, is absolutely necessary in practice, because you can never trust client-side code, which is in the hands of a possibly nefarious user. Therefore, your API should also act as the final gatekeeper that validates all requests from the client.
This tutorial I found helped me a lot. It's very brief and demonstrates how to implement a very simple login app, how to send POST requests (triggered by the login button action) to the backend server, and how to asynchronously process the response inside the SproutCore app:
http://hawkins.io/2011/04/sproutcore_login_tutorial/

How Would I Go About Using Node.js For Frontend And Wordpress As The Backend?

I've had the thought of using WordPress as a CMS backend, because a lot of people know it and it's easy to use, and then using Node.js as the front-end. You're probably thinking: why would I want to do that in the first place? What is the advantage?
I want to use WebSockets, and the wonderful Socket.IO library for Node.js provides beautiful cross-browser WebSocket support. Essentially I want a user to come to the site, a WebSocket to be created, and content to be fed to the frontend asynchronously as JSON and decoded there, all without page refreshes.
Effectively I am making WordPress into a real-time CMS. You visit a site, but every link you click fetches the page as JSON and returns it via a WebSocket, to save multiple requests and, of course, page size.
How do I go about getting Node.JS talking to a MySQL database, pulling out info and then showing it? Any tutorials, resources and other useful tips would be gratefully appreciated. A few of my colleagues have wondered the same thing, so I think the answers will be a big help to everyone.
To be exact, you can't use Node.js as a front-end solution, since it runs on the server, not in the browser (think of it like any other server-side language such as PHP, JSP, etc.).
You can, however, create the described solution with jQuery or any other JavaScript library; you just have to implement the data transfer with Socket.IO. On the server side you'd need something to handle WebSockets, so the most natural way would be to use Node.js. But since you want to use WordPress, it gets really complicated: WordPress is not meant to be used in the way you described, so I'm afraid you'd have to write your CMS from the ground up in Node.
Also, the approach you described has a huge flaw. Search engine crawlers are still unable to parse and run JavaScript, so if all of your content is loaded dynamically, it would seem empty to Google and others, making it impossible to ever make it into the search results and rendering your site pretty much useless.
For MySQL and other modules for Node, you should check NPM registry and the Node modules page.
EDIT
After Dwayne explained his solution in comments, this is how I'd do it:
I'd use jQuery for the front-end: binding the document with .on(), and setting the selector to 'a', so that every anchor on the page fires the handler (see the sketch after these steps).
The handler parses the a.href attribute and figures out whether it's an external link, which shouldn't be handled by JavaScript, or a link to the next page, to an article, etc. You can prevent the default action by calling e.preventDefault() in the handler, which stops the browser from redirecting to the location.
Then the handler would get the content as JSON by calling .getJSON() with a URL based on the article. The easiest way would be to have a certain URL pattern (such as all URLs under www.domain.com/api) redirect to the Node service via .htaccess, to prevent cross-domain problems.
Node would then see the request, extract the parameters, and figure out what the user wants. It would then connect to the MySQL database with this module (it's as simple as it can get) and return the corresponding content formatted as JSON. Don't forget to set the Content-Type header to 'application/json'.
jQuery gets the response, figures out the type of the request, and updates the content accordingly. Profit.
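A minimal sketch of the client side of this flow (the /api prefix and the {title, html} response shape are assumptions, not anything from the question):

$(document).on('click', 'a', function (e) {
  var href = $(this).attr('href');
  // let full URLs (external links) go through to the browser normally
  if (/^https?:\/\//.test(href)) return;
  e.preventDefault();
  // fetch the page content as JSON from the Node service
  $.getJSON('/api' + href, function (data) {
    document.title = data.title;
    $('#content').html(data.html);
  });
});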
As you can see, I wouldn't use WebSockets in this case, since you wouldn't really benefit much from them. They are mostly meant for small real-time updates (no huge HTTP headers, to reduce bandwidth) that flow both ways, meaning the server can also push data into the browser without the browser asking for it. In a blog context this is not required, and you won't have too many requests, so the difference in bandwidth wouldn't be noticeable anyway. If, however, you would like to use them for educational purposes, basically just replace the getJSON part with Socket.IO; I'm not sure whether Apache supports proxying WebSockets, though. Extra information about Socket.IO basics is here.
Edit: I overlooked the part with 'using Node.js on the front-end'. As Vahur Roosimaa said, Node.js is on the server-side (think of it as Nginx / Apache + PHP combination). Node isn't a frontend library like jQuery.
If you want you can use it just for the websockets functionality (I suggest using Socket.IO).
Nice tutorials about Node.js and MySQL:
http://www.giantflyingsaucer.com/blog/?p=2596
http://mclear.co.uk/2011/01/26/very-simple-nodejs-mysql-select-query-example/
http://www.hacksparrow.com/using-mysql-with-node-js.html
This SO question might also help: MySQL with Node.js
Also check the examples from the github repo of node-mysql (see the sketch after these links).
If you want something more advanced like an ORM, I recommend Sequelize.
Another good question from SO: Which ORM should I use for Node.js and MySQL?
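To make the Node side concrete, here is a minimal sketch using the node-mysql module; the connection settings are placeholders, and wp_posts / post_status are the default WordPress table and column names:

var http = require('http');
var mysql = require('mysql');

var connection = mysql.createConnection({
  host: 'localhost',
  user: 'wordpress',
  password: 'secret',
  database: 'wordpress'
});

http.createServer(function (req, res) {
  connection.query(
    "SELECT post_title, post_content FROM wp_posts WHERE post_status = 'publish'",
    function (err, rows) {
      if (err) { res.writeHead(500); res.end(); return; }
      // don't forget the Content-Type header mentioned above
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(rows));
    }
  );
}).listen(3000);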
You should check out Wordscript, to which I recently added a Node.js example that can act as a simple front-end for doing basic post retrieval from a WordPress database.
It uses a common MySQL library for Node, generates MySQL queries from GET parameters, and renders data as it is retrieved from the database, including tags.
Wordscript aims to free backend/frontend developers from being forced to work with the WordPress PHP codebase, but still allows WordPress's administrative interface to be used when needed (and prudent to do so). APIs have been written in Ruby and PHP that both return JSON feeds and function generally the same way the Node version does, so that's an additional option where a scripting language is available.
One option you have, if you want to keep WordPress as the CMS and keep its admin UI, is to write your WordPress templates to output JSON instead of HTML.
In contrast to Wordscript, this is more solution-specific, since you will need to write your JSON output for every template/data set you want. The upside is that you can create the JSON specifically for your needs.
On the Node side, you write a small server that will consume the JSON, letting you use whatever JavaScript template language you want. Node.js will also help with performance, since you can cache the rendered content and/or the JSON output in memory, saving you roundtrips to the WordPress templates (see the sketch below).
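A rough sketch of that small server; the wordpress.local host and the naive in-memory cache are assumptions for illustration:

var http = require('http');
var cache = {};

http.createServer(function (req, res) {
  if (cache[req.url]) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(cache[req.url]);
    return;
  }
  // fetch the JSON output of the corresponding WordPress template
  http.get({ host: 'wordpress.local', path: req.url }, function (wpRes) {
    var body = '';
    wpRes.on('data', function (chunk) { body += chunk; });
    wpRes.on('end', function () {
      cache[req.url] = body; // saves roundtrips to the WordPress templates
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(body);
    });
  });
}).listen(3000);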
I wrote a blog about this, which describes more of the benefits of using nodejs and wordpress together.
http://www.1001.io/improve-wordpress-with-nodejs/

http push django comet

I want to make a Django server that refreshes content as it arrives in the database. The idea is that the user first sees the current contents of the database, and as new content arrives, it is placed above the previous content without reloading the page. In another part of the site, the current content should be replaced by the new content as it reaches the database. How do I do this?
EvServer is currently my choice, but I really don't know how to do this, or what would be the simplest and most efficient approach.
I think you should avoid HTTP Polling. Here's why:
The frequency of the setInterval calls, combined with the number of users on your web app, is going to lead to a big resource drain. If you go through slides 9 to 19 in this presentation you'll see some quite dramatic figures for using Push instead (note: this example uses a hosted service, but hosting your own realtime server and using Push has similar benefits).
Between setInterval calls, the data displayed in your app is potentially out of date. Using a Push technology means that the instant new data is available, it can be pushed and displayed in your app. You don't want users looking at an app and thinking they are seeing correct information when they are not.
You should take a look at the following StackOverflow questions:
Django / Comet (Push): Least of all evils?
Need help understanding Comet in Python (with Django)
For Python/Comet see:
Python Comet Server
The latest recommendation for Comet in Python?
I'd recommend you also start considering "WebSockets" as well as "Comet". Most Comet servers now prefer to use a WebSocket connection when possible.
If you'd prefer to avoid installing and managing your own Comet/WebSocket solution, then you could use a realtime hosted service, which will let you push data through them using a REST API; your clients can receive events by embedding a JavaScript library and writing a small amount of code to subscribe to and receive the events.
The steps are quite straightforward:
Write a model to store the data in the DB.
Write a view that will generate JSON-serialized data upon a POST request.
Write a template that contains JavaScript with setInterval() that will perform AJAX requests to the view and render the received data (I'd suggest using jQuery as it's well documented and widespread; see the sketch below).
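A rough sketch of that last step; the /updates/ URL and the {id, text} item shape are assumptions, and a real Django setup would also need to handle the CSRF token for POST requests:

var lastSeenId = 0;
setInterval(function () {
  $.post('/updates/', { since: lastSeenId }, function (items) {
    $.each(items, function (i, item) {
      lastSeenId = item.id;
      // new content goes above the previous content, as the question describes
      $('#content').prepend('<div>' + item.text + '</div>');
    });
  }, 'json');
}, 5000); // poll every 5 seconds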

Is there any "RSS hosting" service with an API for creating feeds?

I am creating a desktop app that will create some reports. I want to export these reports as RSS or Atom feeds. I can easily create feeds with the Rome library for Java, but I have no idea how to distribute them. I thought about embedding httpd in my app, but it's a bad idea, because the computer can be behind NAT or turned off.
I need some kind of "proxy" server, where I can push my feeds, and clients will be able to pull content from that server.
I could probably write a server-side app for this, but first I'd like to find out whether some dedicated solution is available for problems like this.
I was also thinking about using some blogging platform and using its API. What do you think about this approach?
One more thing I have to consider when choosing a platform is the ability to handle a lot of updates. Sometimes the desktop app will be shut down, but when it is running, it generates quite a lot of updates.
Check out Google's FeedBurner.
EDIT
Here's a better link for their help/FAQ. You'll still need to use some service to generate your feed, but it won't have to handle a heavy load. FeedBurner will poll your feed every 30 minutes, and their servers will act as a proxy for your feed. As for how to publish the feed for FeedBurner to read, I would recommend writing a service to handle this, all the more considering that you're getting the data for the feeds from a number of desktop applications; it'll probably be easier to write a custom service to interface with them, store your data in a DB, and publish feeds than it would be to try to modify a blogging service for this purpose.
I don't know why I didn't think of this when I first answered your question, but Yahoo has a service called Yahoo Pipes which you could use to generate feeds from various kinds of inputs. I'm not sure how well it would scale, but it might work for you.

XML-RPC versus HTML Form

Can anyone explain to me the benefits of using XML-RPC over a straight HTML Form ?
On first glance, they seem to accomplish the same thing.
The XML-RPC is "formatted" using XML, but you can do the same in a form (think textbox).
I have an app that takes data from a script that runs on the client PC. The output from the script is XML. Currently it is submitted to the app (PHP using CodeIgniter) via a form POST. I have been told to look at using XML-RPC, but I am trying to understand "why"...
The primary benefit of XML-RPC is that you don't have to write any (or hardly any) glue code to get remote processes to communicate. There is a wide variety of XML-RPC client libraries available for many languages. In the case that you have a rich API of functions, XML-RPC can be a very easy way to connect remote processes to that API. Performing the same task with a plain www-form-encoded POST will require you to convert the API to a form and dispatch requests into API calls. There are a few systems that can help with that, but it is certain to be more difficult than just exposing the API through XML-RPC.
On the other hand, if (as it sounds) you already have a rich API exposed through plain form encoded requests, It's hard to justify the work of porting both client and server to another interface.
If you mean encoding data in the www-form-encoded (or "query string style") format vs POSTing XML-RPC, it's really a matter of taste. Some people prefer to use XML for everything.
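For illustration, here is roughly what the same submission looks like on the wire in each style; the reports.submit method name and the paths are made up for this sketch:

POST /submit HTTP/1.1
Content-Type: application/x-www-form-urlencoded

xml=%3Creport%3E...%3C%2Freport%3E

versus:

POST /RPC2 HTTP/1.1
Content-Type: text/xml

<?xml version="1.0"?>
<methodCall>
  <methodName>reports.submit</methodName>
  <params>
    <param><value><string>&lt;report&gt;...&lt;/report&gt;</string></value></param>
  </params>
</methodCall>

With XML-RPC, a client library builds and parses that envelope for you and exposes reports.submit as an ordinary function call.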
XML-RPC is a bit dated, in my opinion. Unless you're talking about a different protocol, this is the one that preceded SOAP. I wouldn't use it at all. Instead, I'd use either a SOAP-based service or a REST-based one.
HTML forms are not comparable to XML-RPC.
You use HTML forms to get input data through a web page, while you use XML-RPC to execute remote procedures using XML.
Usually you will use XML-RPC when a process without a UI needs to execute some procedure remotely.