Client Side Moxi case? - couchbase

I was wondering if I could see any example source code (Language: C) of a client that uses client-side Moxi.
I've seen the architecture, but I have no idea how to write it in code.
Also, from the get_callback function, if I need to return the CAS value and the Data received, is there any suggested way to do this?
And what is this vbucketmap thing? What does it represent, and how do I configure it?

Client-side Moxi means that you set up a Moxi server on your client machine and then tell the client to connect to Moxi on localhost. This means that if Moxi is running on localhost port 11211, you tell your client to connect to localhost port 11211 and Moxi will handle communication with the server. You don't need to write any special code to do this.
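For illustration, here is a minimal C sketch of what "no special code" means in practice: the client simply speaks the plain memcached text protocol to 127.0.0.1:11211 and lets Moxi worry about routing. The key name "mykey" is only an example.

    /* Minimal sketch: talk to a client-side Moxi as if it were a plain
     * memcached server on localhost:11211. Moxi forwards the request to
     * whichever Couchbase node actually owns the key. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(11211);                 /* moxi listening locally */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("connect");
            return 1;
        }

        /* Plain memcached text protocol; "mykey" is only an example key. */
        const char *req = "get mykey\r\n";
        send(fd, req, strlen(req), 0);

        char buf[4096];
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s", buf);   /* "VALUE mykey <flags> <bytes>\r\n...END\r\n" */
        }
        close(fd);
        return 0;
    }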
Also, from the get_callback function, if I need to return the CAS value and the Data received, is there any suggested way to do this?
I'm not very familiar with the C API, but there is probably a gets-style function call that returns the CAS ID in the callback.
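If it helps, the usual way to get both the CAS and the document body out of a get callback is the "cookie" pattern: pass a pointer to your own result struct when you schedule the get and fill it in inside the callback. The sketch below assumes the older libcouchbase 2.x C API (lcb_t, lcb_get_resp_t, and the v.v0 fields); treat the exact names as an assumption and check them against the headers of the SDK version you build with.

    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <libcouchbase/couchbase.h>   /* assumes the 2.x C SDK */

    /* Our own result holder, passed as the operation's cookie. */
    struct get_result {
        uint64_t cas;     /* CAS value returned by the server */
        char    *data;    /* copy of the document body        */
        size_t   ndata;
    };

    static void get_callback(lcb_t instance, const void *cookie,
                             lcb_error_t err, const lcb_get_resp_t *resp)
    {
        struct get_result *out = (struct get_result *)cookie;
        (void)instance;
        if (err != LCB_SUCCESS) {
            return;                       /* leave the result empty on error */
        }
        out->cas   = resp->v.v0.cas;      /* CAS comes back in the response  */
        out->ndata = resp->v.v0.nbytes;
        out->data  = malloc(out->ndata);
        memcpy(out->data, resp->v.v0.bytes, out->ndata);
    }

    /* Usage sketch: register the callback with lcb_set_get_callback(),
     * call lcb_get(instance, &result, ...), then lcb_wait(instance);
     * afterwards result.cas and result.data hold the CAS and the value. */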
And what is this vbucketmap thing? What does it represent, and how do I configure it?
A vBucket map is a map of vBuckets to servers. In Couchbase Server there are 1024 vBuckets that your data can hash into. vBuckets are spread around the cluster, and the map tells the client which server to send a request to. With that said, you shouldn't ever touch the vBucket map in your code: the map is obtained from the cluster and managed by either the client-side SDK or, in your case, Moxi.
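Purely to illustrate what the map represents (again, you should never write this yourself), here is a toy C sketch: hash the key to one of the 1024 vBuckets, then look up which server owns that vBucket. The hash below is a stand-in (Couchbase actually uses a CRC32-based hash), and the server list and map contents are made up.

    #include <stdio.h>

    #define NUM_VBUCKETS 1024

    /* vbucket_map[v] is the index of the server that owns vBucket v.
     * In reality the cluster supplies this map; here it is faked. */
    static int vbucket_map[NUM_VBUCKETS];
    static const char *servers[] = { "nodeA:11210", "nodeB:11210", "nodeC:11210" };

    static unsigned toy_hash(const char *key)
    {
        unsigned h = 5381;                      /* djb2, for illustration only */
        while (*key) h = h * 33 + (unsigned char)*key++;
        return h;
    }

    int main(void)
    {
        for (int v = 0; v < NUM_VBUCKETS; v++)   /* fake an even spread */
            vbucket_map[v] = v % 3;

        const char *key = "user::1234";          /* example key */
        unsigned vb = toy_hash(key) % NUM_VBUCKETS;
        printf("key %s -> vBucket %u -> %s\n", key, vb, servers[vbucket_map[vb]]);
        return 0;
    }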

Related

Fetching set of keys?

When fetching multiple keys, I can see that the client puts the request into one long string and sends it to the connected Couchbase server (the protocol seems to include the vBucket ID of each key as well).
So: one network call from the client, with all the keys and their vBucket IDs.
How does server respond to this request?
If the connected server has all the values requested, then I expect the connected server to just give the values requested.
However, if the cluster has several servers, there is a chance that the connected server might not have the requested key. What does the server do in this situation? Since the request includes the vBucket ID of each key, I would expect the connected server to ask the key's master server for its value. This is just my guess; I would like to know how the server responds in this situation.
Also, what happens if the key exists, but the server fails to return the value due to "server busy" or some other error?
Your help is always appreciated.
There are two different ways this can happen, either with moxi or without moxi.
Without Moxi (Smart Client)
When the client makes a connection with Couchbase, it will first get a list of all of the servers in the cluster and the vBucket map. It then makes a connection to each server in the cluster. When you do a multi-operation, the client consults the vBucket map it holds and figures out which server each key's vBucket lives on. If we have three servers, the client puts together up to three multi-operations and sends each one to the server that owns all of the keys in that multi-operation. Each server responds to the client, and the client puts all of the results together into one set of results.
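Conceptually, the splitting step looks something like the following C sketch: group the keys by the server that owns each key's vBucket, send one batched request per server, and merge the replies. lookup_server() is a toy stand-in for the real hash-plus-vBucket-map lookup.

    #include <stdio.h>
    #include <string.h>

    #define NUM_SERVERS 3
    #define MAX_KEYS    16

    /* Toy stand-in: a real client hashes the key to a vBucket and consults
     * the vBucket map to find the owning server. */
    static int lookup_server(const char *key)
    {
        return (int)(strlen(key) % NUM_SERVERS);
    }

    static void multi_get(const char **keys, int nkeys)
    {
        const char *batches[NUM_SERVERS][MAX_KEYS];
        int counts[NUM_SERVERS] = { 0 };

        for (int i = 0; i < nkeys; i++) {
            int s = lookup_server(keys[i]);
            batches[s][counts[s]++] = keys[i];       /* group keys per server */
        }

        for (int s = 0; s < NUM_SERVERS; s++) {
            if (counts[s] == 0)
                continue;
            printf("server %d gets one multi-get with %d key(s), starting with %s\n",
                   s, counts[s], batches[s][0]);
            /* ...send the batch, collect the replies, and merge them into
             * one result set for the caller. */
        }
    }

    int main(void)
    {
        const char *keys[] = { "user::1", "user::22", "session::999" };
        multi_get(keys, 3);
        return 0;
    }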
With Moxi
In this case the client doesn't know about the cluster or the vbucket map, but moxi does. The client will send all keys to moxi and then moxi will take care of splitting them up and sending them to the appropriate servers.
Server Down Scenario:
If a server is down or busy, then all keys in that server-specific multi-operation will fail. The client should return the keys that it could get from the other servers and alert you of the error.
Rebalancing Scenario:
During a rebalance there is a small chance that a request will be sent to the wrong server. In this case the client should retry the operation on the correct server. During rebalance each client should receive a "fast-forward" vbucket map that says where all of the vbuckets will be after the rebalance. It will use the server in this vbucket map for the retry.

Get client to act as server with websocket?

I am basically writing an almost purely client-side application (there is a webserver which can be used to store some persistent data, but it's easier to forget about it), but as part of this I was looking to add some functionality akin to hosting a game.
The scenario would be 1 person would host the game via their browser (open a TCP socket awaiting connections), then X other people would connect to that server and join. The server would be in charge of receiving and sending data between clients.
So in this scenario is it possible to host a websocket server within a webpage?
I was looking at trying to do something peer-to-peer style, but I don't think it is currently supported. It's not a major problem, though, as it's only going to be used for sending small amounts of text and some update messages between clients.
The WebSocket browser API is client-only (for the foreseeable future).
In some sense, WebRTC is peer-to-peer, but even if the WebRTC API adds the ability to send arbitrary data, you still need a STUN/TURN server to establish the initial connection.

How do stock market data feeds work?

or any other type of real-time data feed from server to client... I'm talking about a bunch of real-time data going from server to client, i.e. an informational update every second.
Does the server magically push the data to the client, or does the client need to continuously poll the server for updates? And what protocol does this usually work over (HTTP, socket communication, etc.)?
In server-side financial applications used by brokers, banks, etc., market data (quotes, trades, etc.) is transmitted over TCP via some application-level protocol, which most probably won't be HTTP. Of course, there's no polling: the client establishes a TCP connection with the server, and the server pushes data to the client. One common approach to distributing market data is FIX.
Thomson Reuters has a bunch of cryptic proprietary protocols dating from mainframe days for distributing such data.
HTTP can be used with SOAP/REST to transmit or request data of not-so-large volume, like business news.
UPDATE: Actually, even FIX is not enough in some cases, as it has a big overhead because of its "text" nature. Most brokers and exchanges transmit high-volume streams, such as quotes, using binary-format protocols (FAST or something proprietary).
In a simple case:
Create a server with a listening socket.
On the client, connect to the server's socket.
Have the client do a while(data = recv(socket)) (pseudocode)
When the server has something exciting to tell the client, it simply send(...)s on the socket.
You can even implement this pattern over HTTP (there is no real upper time limit on an HTTP socket). The server need not even read from the socket - it can just keep writing to the firehose.
Usually a TCP socket is employed: messages arrive in order and lost packets are retransmitted. If latency is more important and dropped or out-of-order messages are not an issue, UDP can be used.
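As a concrete version of the client side of those steps, here is a small C sketch: connect once, then block in recv() until the server pushes something. The host and port are placeholders.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv;
        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port = htons(9000);                       /* example port */
        inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* example feed server */

        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) != 0) {
            perror("connect");
            return 1;
        }

        char buf[1024];
        ssize_t n;
        /* The while(data = recv(socket)) loop from the steps above. */
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
            fwrite(buf, 1, (size_t)n, stdout);   /* handle each pushed update */
        }
        close(fd);          /* n == 0 means the server closed the connection */
        return 0;
    }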

Bi-directional communication with 1 socket - how to deal with collisions?

I have one app that consists of a "Manager" and a "Worker". Currently, the worker always initiates the connection, says something to the manager, and the manager sends back the response.
Since there is a LOT of communication between the Manager and the Worker, I'm considering keeping a socket open between the two and doing the communication over it. I'm also hoping to initiate the interaction from both sides - enabling the manager to say something to the worker whenever it wants.
However, I'm a little confused as to how to deal with "collisions". Say the manager decides to say something to the worker, and at the same time the worker decides to say something to the manager. What will happen? How should such a situation be handled?
P.S. I plan to use Netty for the actual implementation.
"I'm also hoping to initiate the interaction from both sides - enabling the manager to say something to the worker whenever it wants."
Simple answer. Don't.
Learn from existing protocols: have a client and a server, and things will work out nicely. The Worker can be the server and the Manager can be the client. The Manager can make numerous requests; the Worker responds to the requests as they arrive.
Peer-to-peer can be complex, with no real payoff for the added complexity.
I'd go for a persistent bi-directional channel between server and client.
If all you'll have is one server and one client, then there's no collision issue... If the server accepts a connection, it knows it's the client and vice versa. Both can read and write on the same socket.
Now, if you have multiple clients and your server needs to send a request specifically to client X, then you need handshaking!
When a client boots, it connects to the server. Once this connection is established, the client identifies itself as being client X (the handshake message). The server now knows it has a socket open to client X and every time it needs to send a message to client X, it reuses that socket.
Lucky you, I've just written a tutorial (sample project included) on this precise problem. Using Netty! :)
Here's the link: http://bruno.linker45.eu/2010/07/15/handshaking-tutorial-with-netty/
Notice that in this solution, the server does not attempt to connect to the client. It's always the client who connects to the server.
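The tutorial does this with Netty, but the handshake bookkeeping itself is language-agnostic. Here is a rough C sketch of the same idea: the first message a client sends on a new connection is its ID, and the server keeps a table mapping IDs to open sockets so it can later push to a specific client. Names, sizes, and framing are purely illustrative.

    #include <string.h>
    #include <sys/socket.h>

    #define MAX_CLIENTS 64

    struct client { char id[32]; int fd; };
    static struct client clients[MAX_CLIENTS];
    static int nclients;

    /* Called once per accepted connection, right after accept(). */
    void do_handshake(int fd)
    {
        char id[32] = { 0 };
        recv(fd, id, sizeof(id) - 1, 0);          /* first message = client ID */
        strncpy(clients[nclients].id, id, sizeof(clients[nclients].id) - 1);
        clients[nclients].fd = fd;
        nclients++;
    }

    /* Later, when the server wants to push a message to client X,
     * it reuses the socket recorded during the handshake. */
    void send_to(const char *id, const char *msg)
    {
        for (int i = 0; i < nclients; i++) {
            if (strcmp(clients[i].id, id) == 0) {
                send(clients[i].fd, msg, strlen(msg), 0);
                return;
            }
        }
    }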
If you were thinking about opening a new socket every time you wanted to send a message, reconsider: persistent connections avoid the overhead of connection establishment, consequently speeding up the data transfer rate N-fold.
I think you need to read up on sockets... you don't really get these kinds of problems. The only real question is how to responsively handle both receiving and sending; generally this is done by threading your communications, and depending on the app you can take a number of approaches to this.
The correct link to the Handshake/Netty tutorial mentioned in brunodecarvalho's response is http://bruno.factor45.org/blag/2010/07/15/handshaking-tutorial-with-netty/
I would add this as a comment to his question but I don't have the minimum required reputation to do so.
If you feel like reinventing the wheel and don't want to use middleware...
Design your protocol so that the other peer's answers to your requests are always easily distinguishable from requests initiated by the other peer. Then choose your network I/O strategy carefully. Whatever code is responsible for reading from the socket must first determine whether the incoming data is a response to something you sent or a new request from the peer (by looking at the data's header and at whether you have a request outstanding). You also need to maintain proper queueing so that the responses you send to the peer's requests are kept properly separate from the new requests you issue.
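One simple way to make that distinction is to prefix every message with a small header carrying a type and a correlation ID, along the lines of this C sketch (field names and sizes are only an illustration):

    #include <stdint.h>

    enum msg_type {
        MSG_REQUEST  = 0,    /* the peer is asking us something                 */
        MSG_RESPONSE = 1     /* the peer is answering a request we sent earlier */
    };

    struct msg_header {
        uint8_t  type;        /* MSG_REQUEST or MSG_RESPONSE                    */
        uint32_t request_id;  /* echoes the ID of the request being answered    */
        uint32_t body_len;    /* number of payload bytes that follow            */
    };

    /* Reader logic: on each incoming header, branch on type.
     * MSG_RESPONSE: match request_id against the table of requests we have
     *               in flight and complete that pending operation.
     * MSG_REQUEST:  dispatch to the request handler and queue a response
     *               (carrying the same request_id) on the outgoing side. */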

JSON Asynchronous Application server?

First let me explain the data flow I need
Client connects and registers with server
Server sends initialization JSON to client
Client listens for JSON messages sent from the server
Now all of this is easy and straightforward to do manually, but I would like to leverage a server of some sort to handle all of the connection stuff: keep-alives, dead clients, etc.
Is there some precedent for doing this kind of thing, where a client connects and receives JSON messages asynchronously from a server, without doing manual socket programming?
A possible solution is known as Comet, which involves the client opening a connection to the server that stays open for a long time. Then the server can push data to the client as soon as it's available, and the client gets it almost instantly. Eventually the Comet connection times out, and another is created.
Not sure what language you're using, but I've seen several of these for Java and Scala. Search Google for "comet framework" plus your language name, and you should find something.
In the "good old times" this would be easy, since at the first connection the server gets the IP address of the client, so it could call back. So easy, in fact, that this is how FTP does it, for no good reason... But now we can be almost certain that the client is behind some NAT, so you can't "call back".
Then you could just keep the TCP connection open, since it's bidirectional: just make the client wait for data to appear, and the server sends whatever it wants whenever it can... But now everybody wants every application to run on top of a web browser, and that means HTTP, which is a strictly request/response protocol initiated by the client.
So, the current answer is Comet. Simply put, a JavaScript client sends a request, but the server doesn't answer for a looooong time. If the connection times out, the client immediately reopens it, so there's always one open pipe waiting for the server's response. That response will contain whatever message the server wants to send to the client, and only when it's pertinent. The client receives it, and immediately sends a new request to keep the channel open.
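Outside the browser, the same long-poll loop can be sketched in a few lines of C with libcurl (a browser client would do the equivalent with XMLHttpRequest). The URL and timeout below are placeholders.

    #include <stdio.h>
    #include <curl/curl.h>

    static size_t on_data(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        (void)userdata;
        fwrite(ptr, size, nmemb, stdout);      /* the server's pushed message */
        return size * nmemb;
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/events"); /* placeholder */
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 60L);  /* give up and re-poll after 60s */

        for (;;) {
            /* Each perform() is one long poll: it returns when the server
             * finally answers (new data arrived) or when the request times
             * out, and then we immediately open the next one. */
            curl_easy_perform(curl);
        }

        /* not reached */
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }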
The problem is that HTTP is a request/response protocol. The server cannot send any data unless a request is submitted by the client.
Trying to circumvent this by making a request and then continuously sending back responses on the same original request is flawed, as the behavior does not conform to HTTP and does not play well with all sorts of intermediaries (proxies, routers, etc.) or with browser behavior (Ajax completion). It also doesn't scale well: keeping a socket open on the server is very resource intensive, and sockets are a very precious resource (ordinarily only a few thousand are available).
Trying to circumvent this by reversing the flow (i.e. the server connects to the client when it has something to push) is even more flawed because of the security/authentication problems that come with it (the response can easily be hijacked, repudiated or spoofed) and also because the client is often unreachable (it sits behind proxies or NAT devices).
AFAIK most RIA clients just poll on a timer. Not ideal, but this is how HTTP works.
GWT provides a framework for this kind of stuff & has integration with Comet (at least for Jetty). If you don't mind writing at least part of your JavaScript in Java, it might be the easier approach.