Sync Gateway "Channels" in PouchDB

Is there any support for Couchbase Sync Gateway's "Channels" in PouchDB?
I'd like users to see only a subset of the overall data, and when they create new data, to be able to control whom they share it with.
Is that possible with PouchDB? Or would I have to interact with the server directly, or use Couchbase Lite for mobile devices?

Just a little update: this is now possible. PouchDB (since version 3.4.0) is compatible with Sync Gateway.
See the tutorial here: http://blog.couchbase.com/first-steps-with-pouchdb--sync-gateway-todomvc-todolite

Here is the solution to make the PouchDB client work with Couchbase Sync Gateway over a user's channels:
var sync = function () {
    var opts = {
        live: true,
        retry: true,
        //-- from here
        filter: "sync_gateway/bychannel",
        query_params: {
            "channels": channels // the channels this user may access
        }
        //-- to here
    };
    // database is the local PouchDB instance, syncServer the Sync Gateway URL
    database.sync(syncServer, opts);
}
The key here is that you pass the filter and query_params as-is; Sync Gateway already knows how to interpret this filter.
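If you don't already have the user's channel list client-side, one possible way to obtain it is to ask Sync Gateway's _session endpoint, which reports the channels of the authenticated user. This is only a sketch: the exact response shape varies between Sync Gateway versions, so treat the userCtx.channels handling below as an assumption to verify against your version's docs.
// Sketch: fetch the authenticated user's channels from Sync Gateway's
// _session endpoint. The response shape (userCtx.channels as an array
// or as an object keyed by channel name) is an assumption - check your
// Sync Gateway version's documentation.
function getUserChannels(syncServer) {
    return fetch(syncServer + "/_session", { credentials: "include" })
        .then(function (res) { return res.json(); })
        .then(function (session) {
            var ch = session.userCtx.channels;
            return Array.isArray(ch) ? ch : Object.keys(ch);
        });
}
// Usage: getUserChannels(syncServer).then(function (channels) { /* start sync with opts above */ });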

PouchDB is modeled after CouchDB, which doesn't have the concept of channels, so there are no plans to implement it in PouchDB.
However, one easy way to solve your problem is to sync PouchDB to a CouchDB, and then sync that CouchDB to Couchbase Sync Gateway. The reason you need CouchDB as an intermediary is that there are a few issues with direct PouchDB <-> Couchbase Sync Gateway syncing, although hopefully they will be resolved soon (see e.g. this and this).
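For illustration, the browser side of that setup might look like the sketch below; the CouchDB URL and database name are placeholders, and the CouchDB <-> Sync Gateway leg would be configured separately on the server, not in PouchDB.
// Sketch: keep the local PouchDB in sync with an intermediary CouchDB
// (URL and database name are placeholders).
var local = new PouchDB("myapp");
var remote = new PouchDB("http://couchdb.example.com:5984/myapp");
local.sync(remote, { live: true, retry: true })
    .on("error", function (err) { console.error("replication error", err); });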

PouchDB syncing through channels / filtered replication
Here's a concrete example of using channels.
var db = new PouchDB("yep");
db.sync(new PouchDB("http://localhost:4984/beer-sample/"), {
    live: true,
    retry: true,
    filter: "sync_gateway/bychannel",
    query_params: {
        channels: "channel-1,channel-2,channel-3,bar"
    }
})
filter: "sync_gateway/bychannel"
This passes the name of the filter to apply to the source documents. Currently the only supported filter is "sync_gateway/bychannel"; it replicates documents only from the set of named channels.
query_params.channels
Instead of passing an array, the channel names are separated by commas.
Sync Function Example
And in Sync Gateway your sync function might look like this (I intentionally kept the sync function as simple as possible, so that at a glance you can see how the channels above are used from PouchDB):
function sync(doc, oldDoc) {
    if (doc.type == "beer") {
        channel("channel-1");
    } else if (doc.type == "soap") {
        channel("channel-2");
    } else if (doc.type == "sweets") {
        channel("channel-3");
    } else if (doc.type == "bar") {
        channel(doc.type);
    }
}
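To see the routing in action, writing a document whose type matches one of the branches above lands it in the corresponding channel; the document below is made up for illustration, using the db from the concrete example:
// Hypothetical document: type "beer" routes it to "channel-1", so it
// replicates to any client whose query_params.channels includes it.
db.put({
    _id: "beer:imperial-stout",
    type: "beer",
    name: "Imperial Extra Double Stout"
});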
6 years too late though... But it's better late than never!

Related

How to cache all requests in browser?

My web app is React -> .NET -> SQL.
I want to cache all POST requests so that each .NET call is made just once; the next time, the response is served from the cache. For every small UI change in React I want to use the cache and save development time.
As it's just for development, I'm looking for something in Chrome, maybe; an extension for such a task, or any pointer to what I should look into, would be helpful.
How about using the chrome.storage.local API, where you can store, retrieve, and track changes to user data?
You can use it like this, based on this SO post:
function fetchLive(callback) {
    doSomething(function(data) {
        chrome.storage.local.set({cache: data, cacheTime: Date.now()}, function() {
            callback(data);
        });
    });
}
function fetch(callback) {
    chrome.storage.local.get(['cache', 'cacheTime'], function(items) {
        if (items.cache && items.cacheTime) {
            if (items.cacheTime > Date.now() - 3600*1000) {
                return callback(items.cache); // Serialization is automatic, so nested objects are no problem
            }
        }
        fetchLive(callback);
    });
}
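Here, doSomething is a placeholder for the actual request. A minimal sketch, assuming the .NET backend is reachable at a hypothetical /api/data endpoint, could look like this:
// Hypothetical implementation of the doSomething placeholder above:
// POST to the .NET backend and hand the parsed JSON to the callback.
// The endpoint URL is an assumption. XMLHttpRequest is used because
// the snippet above defines its own fetch() function.
function doSomething(callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/data', true);
    xhr.onload = function () {
        if (xhr.status === 200) {
            callback(JSON.parse(xhr.responseText));
        }
    };
    xhr.send();
}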
Just remember that:
Chrome employs two caches — an on-disk cache and a very fast in-memory cache. The lifetime of an in-memory cache is attached to the lifetime of a render process, which roughly corresponds to a tab. Requests that are answered from the in-memory cache are invisible to the web request API.

Can I use Feathers for a real-time site that uses data updated from external sources?

In the docs it states that Services only emit events when a Service method modifies data. This is the case in all the examples I have seen, where a client modifies the data from the browser itself and it gets automatically updated in other clients (like a chat web app). But what if my data is modified externally, outside of Feathers? Will I be able to use Feathers so that the data is updated in all clients?
In my specific case the data is actually stored in a MongoDB database which gets updated externally and autonomously. I want my web application to use MongoDB Change Streams to listen for changes on the MongoDB database (I already know how to do this), and then I want Feathers to take care of sending updates to all my clients in real time.
In the example chat app, this would be equivalent to having a bot that also writes messages directly to the database outside of Feathers, and these messages should also be broadcast to clients in real time.
Is my use case a good fit for Feathers? Any hint on how I should approach it?
Watching changefeeds has been done for feathers-rethinkdb here. Something similar could be done for MongoDB but there are several challenges discussed in this issue.
If your MongoDB collection only gets updated externally you could also create a simple pass-through service like this:
app.use('/feed/messages', {
    async create(data) {
        return data;
    },
    async remove(id) {
        return { id };
    },
    async update(id, data) {
        return data;
    },
    async patch(id, data) {
        return data;
    }
});
This service is then called by the changefeed watcher and will automatically take care of updating all the clients through its events.
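A minimal sketch of such a watcher, assuming the standard MongoDB driver's change stream API and a collection named messages (the collection name and the event-to-method mapping are assumptions):
// Sketch: forward MongoDB change stream events to the pass-through
// service, so Feathers emits realtime events to connected clients.
// 'updateLookup' makes update events include the full document.
const changeStream = db.collection('messages').watch([], { fullDocument: 'updateLookup' });
changeStream.on('change', change => {
    const service = app.service('feed/messages');
    switch (change.operationType) {
        case 'insert':
            service.create(change.fullDocument);
            break;
        case 'update':
        case 'replace':
            service.update(change.documentKey._id, change.fullDocument);
            break;
        case 'delete':
            service.remove(change.documentKey._id);
            break;
    }
});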

What's the proper way to store data for an AngularJS-driven learning page

I am building a web page for learning. Actually, building the page is the main goal; if it works well, that is only a bonus, since I will most likely be the only person using it.
That being said, I am using Angular objects that hold a lot of information, like:
Semester - Subcategory - Question - List of answers as objects with "true"/"false" properties for multiple choice, the answer itself, etc.
Since I will be doing all the sorting / filtering with Angular, I wonder if I really need SQL or if an XML file would be superior.
With SQL, saving is my main issue. PHP seems to butcher arrays into a string with the value "array". If I use json_encode it saves correctly, but on GET it stops working, since I have to rebuild the whole data structure with " and ' just about everywhere.
With XML, it really looks like Angular just is not built for that. I have found some outdated tutorials that did not even have a working example.
So I guess my question here is:
Do I go for SQL, putting up with multiple tables, splitting my objects into several columns with optional values all over the place, while also rebuilding the whole thing on load?
Or do I use XML, since I would only use the DB to GET the whole thing anyway?
I have tested both approaches and both work, somewhat. Both would need quite a lot of further digging, reading and trying, and I don't have the spare time to go down both routes. Which one is better for this particular use case?
This is of course a personal preference, but I always try to avoid XML. The JSON format is a lot leaner and meaner, and it's far easier to work with in web applications.
In fact, I would suggest starting with some static JSON files until you're done giving your website some structure. You can write them manually, use a generator tool (like http://www.mockaroo.com/) or build them with some simple JavaScript (JSON.stringify is your friend). You can then consume this data quite easily using the $http service:
$http.get('my-data.json')
    .then(function(response) {
        $scope.myData = response.data;
    });
This is actually the approach my teams take when building large enterprise applications. We mock all data resources and replace them with the real thing when we (or the customer) are happy with the progress.
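As a sketch of what such a mock file could contain, matching the Semester / Subcategory / Question structure from the question (all field names here are made up), you can build the object in plain JavaScript and dump it with JSON.stringify:
// Sketch: build a mock data structure (field names are hypothetical)
// and print it as JSON, ready to paste into my-data.json.
var mock = {
    semesters: [{
        name: "Semester 1",
        subcategories: [{
            name: "Databases",
            questions: [{
                text: "Is XML a relational database?",
                answers: [
                    { text: "Yes", correct: false },
                    { text: "No", correct: true }
                ]
            }]
        }]
    }]
};
console.log(JSON.stringify(mock, null, 2));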
Using a JSON file should be sufficient. You can store all the needed objects in it and change it easily. With the following code you can load the data within JavaScript:
function loadJSON(path, success, error) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState === XMLHttpRequest.DONE) {
            if (xhr.status === 200) {
                if (success)
                    success(JSON.parse(xhr.responseText));
            } else {
                if (error)
                    error(xhr);
            }
        }
    };
    xhr.open("GET", path, true);
    xhr.send();
}
Usage:
loadJSON('data.json', // relative path
    function (data) { // success callback
        $scope.questions = data;
        $scope.$apply();
    },
    function (xhr) { // error callback
        console.error(xhr);
    }
);

How to sync conflicting changes with Google Drive API

I have a Google Drive app which auto-saves changes. If you have two active sessions, they will overwrite each other. The app supports merging changes, but I can't see how to safely integrate this with the Drive API. Some options I have considered:
Version-safe commits
Use Google Drive to "only update if the current revision in Drive == X, otherwise fail"
If it failed, fetch the latest version, merge and retry
Problem: I don't think Drive supports this. Previous API versions used ETags, but I see no mention of them in the current documentation.
Pre-commit check
Check the current saved version; if still current, save
Otherwise download, merge and update
Problem: obvious race condition between clients
Post-commit check
Save the new version
If the new version is as expected: done
If the new version is higher than expected: download the previous version, merge and update
Problem: I don't have much faith this is safe. I can see multiple clients getting into edit loops.
Google Real-time API - field binding
Replace the file format with a Google RT data model
Problem: it would require redesigning just for Google RT
Google Real-time API - document support
Use the Google RT API's external document support
Problem: I don't think this solves the problem
I would really like a way to achieve #1 but any suggestions would be helpful. I would be happy enough with a basic locking / handover scheme between clients but I don't think Drive supports that either.
According to here, "If-Match" using ETags still works. Not sure if it applies to data, but at least it applies to metadata.
To follow up on user1828559's answer, the following Java code seems to work well:
private File updateDriveFile(Drive drive, File file, byte[] data) throws IOException {
    try {
        ByteArrayContent mediaContent = new ByteArrayContent(MIME_TYPE, data);
        Drive.Files.Update update = drive.files().update(file.getId(), file, mediaContent);
        update.getRequestHeaders().setIfMatch(file.getEtag());
        return update.execute();
    }
    catch (GoogleJsonResponseException e) {
        if (isConflictError(e.getDetails())) {
            logger.warn("ETag precondition failed, concurrent modification detected!");
            return null;
        }
        throw e;
    }
}
private boolean isConflictError(GoogleJsonError error) {
    if (error.getCode() == 412) {
        final List<GoogleJsonError.ErrorInfo> errors = error.getErrors();
        if (errors != null && errors.size() == 1) {
            final GoogleJsonError.ErrorInfo errorInfo = errors.get(0);
            if ("header".equals(errorInfo.getLocationType()) &&
                    "If-Match".equals(errorInfo.getLocation()) &&
                    "conditionNotMet".equals(errorInfo.getReason()))
                return true;
        }
    }
    return false;
}

Sync IndexedDB with MySQL database

I am about to develop an application where employees go out to service and repair machines at customer premises. They need to fill out a service card using a tablet or other mobile device.
In case there is no Internet connection, I am thinking about using HTML5 offline storage, mainly IndexedDB, to store the service card (web form) data locally, and then syncing at the office, where there is an Internet connection. The sync target is a MySQL database.
So the question: is it possible to sync IndexedDB with MySQL? I have never worked with IndexedDB; I am only doing research and saw that it has potential.
Web SQL is deprecated; otherwise it could have been a closer fit.
Any other alternatives in case the above is difficult or outside the standard?
Your opinions are highly appreciated.
Thanks.
This is definitely doable. I have only just started learning IndexedDB over the last couple of days, so this is how I would see it working; sorry, I don't have code to give you. As a rough outline (see the sketch after this list):
The website knows it's in offline mode somehow
Clicking submit on the form saves the data into IndexedDB
Later, when the laptop (or whatever) is back online or on the intranet and can talk to the main server, it sends all the IndexedDB rows to the server via an Ajax call, to be stored in MySQL
IndexedDB is cleared
Repeat
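A rough sketch of that flow with the raw IndexedDB API, where the object store name, the endpoint, and the openDatabase() helper are all made up for illustration:
// Sketch: when connectivity returns, read all locally saved service
// cards, POST them to the server, and clear the store on success.
// "cards", "/api/cards" and openDatabase() are hypothetical.
function flushCards(db) {
    var getAll = db.transaction("cards", "readonly")
        .objectStore("cards").getAll();
    getAll.onsuccess = function () {
        fetch("/api/cards", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(getAll.result)
        }).then(function (res) {
            if (res.ok) {
                // Server stored the rows in MySQL; clear the local copy.
                db.transaction("cards", "readwrite").objectStore("cards").clear();
            }
        });
    };
}
window.addEventListener("online", function () {
    openDatabase().then(flushCards); // openDatabase() is your IDB open/upgrade logic
});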
A little bit late, but I hope it helps.
This is possible, though I am not sure it is the best choice. I can tell you that I am building a web app where I have a MySQL database and the app must work offline and keep track of the data. I tried using IndexedDB directly and it was very confusing for me, so I implemented Dexie.js, a minimalistic and straightforward API for communicating with IndexedDB in an easy way.
Now the app works online, and if the internet connection goes down, it works offline until the connection comes back, and then uploads the data to the MySQL database. One of the solutions I read about for saving the data was to store the JSON object, passed through JSON.stringify(), in a TEXT field, and JSON.parse() it once you need the data back.
This was my motivation to build the app that way (and also the fact that we couldn't change databases):
IndexedDB Tutorial
Sync IndexedDB with MySQL
Connect node to mysql
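For reference, a minimal Dexie.js sketch of that approach; the table name, fields, and sync endpoint are made up:
// Sketch with Dexie.js: one table for service cards with an
// auto-incrementing primary key. Names are hypothetical.
const db = new Dexie("serviceCards");
db.version(1).stores({ cards: "++id" });

async function saveOffline(card) {
    await db.cards.add(card); // e.g. { customer: "ACME", savedAt: Date.now() }
}

async function syncToMySql() {
    const cards = await db.cards.toArray();
    await fetch("/api/sync", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(cards)
    });
    await db.cards.clear(); // uploaded, safe to remove locally
}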
[Update for 2021]
For anyone reading this, I can recommend checking out AceBase.
AceBase is a realtime database that enables easy storage and synchronization between browser and server databases. It uses IndexedDB in the browser, and its own binary db format or SQL Server / SQLite storage on the server side. MySQL storage is also on the roadmap. Offline edits are synced upon reconnecting, and clients are notified of remote database changes in realtime through a websocket (FAST!).
On top of this, AceBase has a unique feature called "live data proxies" that allows all changes to in-memory objects to be persisted and synced to local and server databases, so you can forget about database coding altogether and program as if you're only using local objects, no matter if you're online or offline.
The following example shows how to create a local IndexedDB database in the browser, how to connect to a remote database server that syncs with the local database, and how to create a live data proxy that eliminates further database coding altogether.
const { AceBaseClient } = require('acebase-client');
const { AceBase } = require('acebase');

// Create local database with IndexedDB storage:
const cacheDb = AceBase.WithIndexedDB('mydb-local');

// Connect to server database, use local db for offline storage:
const db = new AceBaseClient({ dbname: 'mydb', host: 'db.myproject.com', port: 443, https: true, cache: { db: cacheDb } });

// Wait for remote database to be connected, or ready to use when offline:
db.ready(async () => {
    // Create live data proxy for a chat:
    const emptyChat = { title: 'New chat', messages: {} };
    const proxy = await db.ref('chats/chatid1').proxy(emptyChat); // Use emptyChat if chat node doesn't exist

    // Get object reference containing live data:
    const chat = proxy.value;

    // Update chat's properties to save to local database,
    // sync to server AND all other clients monitoring this chat in realtime:
    chat.title = `Changing the title`;
    chat.messages.push({
        from: 'ewout',
        sent: new Date(),
        text: `Sending a message that is stored in the database and synced automatically was never this easy!` +
            ` This message might have been sent while we were offline. Who knows!`
    });

    // To monitor realtime changes to the chat:
    chat.onChanged((val, prev, isRemoteChange, context) => {
        if (val.title !== prev.title) {
            console.log(`Chat title changed to ${val.title} by ${isRemoteChange ? 'someone else' : 'us'}`);
        }
    });
});
For more examples and documentation, see AceBase realtime database engine at npmjs.com