I'm collecting some analytics data on my client device, and it doesn't require any initial data from the server database.
Is it possible to start with an empty database, add some analytics documents, and then, when I'm ready, use push replication to add those documents to my server database via Sync Gateway?
I'm going to have an analytics channel, but I don't want to pull EVERYTHING from that channel into my client database, since the client doesn't care about what's there already; it only wants to add to it.
I would be asking this question on the Couchbase forums, but they are currently down.
Sure, push and pull replications are entirely separate, so as long as you do not create a pull replication, you won't receive any data from Sync Gateway.
Use the following API from CBLDatabase to upload data to the server.
/** Creates a replication that will 'push' this database to a remote database at the given URL.
This always creates a new replication, even if there is already one to the given URL.
You must call -start on the replication to start it. */
- (CBLReplication*) createPushReplication: (NSURL*)url;
Here's an example of how you can set up push replication.
NSURL* url = [NSURL URLWithString: @"https://example.com/mydatabase/"];
CBLReplication *push = [database createPushReplication: url];
push.continuous = YES; // NO for One-shot replication
//After authenticating and adding progress observers here, call -start
[push start];
You can set up pull replication (if needed) in a similar way using -createPullReplication:. Read more in the docs here - Replication.
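If you do eventually need a pull, the replication API also lets you restrict a pull to specific channels, so you never download everything. A minimal sketch (assuming Couchbase Lite 1.x; the channel name is hypothetical):
NSURL* url = [NSURL URLWithString: @"https://example.com/mydatabase/"];
CBLReplication *pull = [database createPullReplication: url];
pull.channels = @[@"analytics"]; // pull only this channel, not the whole database
pull.continuous = NO; // one-shot
[pull start];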
I have been reading the documentation for the last 2 days. I'm new to feathersjs.
First issue: any link related to feathersjs is not accessible, such as this one.
Giving the following error:
This page isn’t working
legacy.docs.feathersjs.com redirected you too many times.
Hence I'm unable to trace back to similar (or any) previously asked threads.
Second issue: it's a great framework to start with for real-time applications. But not all real-time applications require DB access alone; there might also be access required to something like Amazon S3, Microsoft Azure, etc. In my case it's the same, and it's more of a problem with setting up routes.
I have executed the following commands:
feathers generate app
feathers generate service (service name: upload, REST, DB: Mongoose)
feathers generate authentication (username and password)
I have the setup ready, but how do I add another custom service?
The granularity of the service starts in the following way (use case is upload only):
Conventional way of doing it: router.post('/upload', (req, res, next) => {});
Assume I'm sending a file as form data, plus some extra param like { storage: "s3" } in the request.
Postman --> POST (only) to /upload --> process the request (does the request specify a storage?) --> perform the actual upload to the specific storage named in the request and log the details in the local DB as well --> send the response (success or failure)
Another thread on Stack Overflow where you answered with this:
app.use('/Category/ExclusiveContents/:categoryId', {
create(data, params) {
// do complex stuff here
params.categoryId // the id of the category
data // -> additional data from the POST request
}
});
The solution could be viewed this way as well: since feathersjs supports a microservice approach, it would be great to have sub-routes like:
/upload_s3 -- uploads to s3
/upload_azure -- uploads to azure and so on.
/upload -- the main route exposed to users. The user sends a request, the request is processed, and the respective sub-route is called. (Authentication and authorization to be included as well.)
How to solve these types of problems using existing setup of feathersjs?
1) This is a deployment issue that Netlify is looking into. The current documentation is not on the legacy domain, though; what you are looking for can be found at docs.feathersjs.com/api/databases/querying.html.
2) A custom service can be added by running feathers generate service and choosing the custom service option. The functionality can then be implemented in src/services/<service-name>/<service-name>.class.js according to the service interface. For file uploads, an example of how to customize the parameters for feathers-blob (which is used in the file uploading guide) can be found in this issue.
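To make that concrete, here is a minimal sketch of such a custom service; the storage helpers are hypothetical placeholders for real AWS S3 / Azure SDK calls, and hooks can add authentication and logging on top:
// src/services/upload/upload.class.js -- a sketch, not generated code
// Hypothetical storage helpers; replace with real S3/Azure SDK calls.
const uploadToS3 = async data => ({ storage: 's3', ok: true });
const uploadToAzure = async data => ({ storage: 'azure', ok: true });

class UploadService {
  // POST /upload ends up here; `data` is the parsed request body
  async create (data, params) {
    if (data.storage === 's3') return uploadToS3(data);
    if (data.storage === 'azure') return uploadToAzure(data);
    throw new Error(`Unknown storage target: ${data.storage}`);
  }
}

// Registered like any other service:
// app.use('/upload', new UploadService());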
We have a system with a master and multiple slaves.
Currently everything happens on the master, and the slaves are just there for backup.
We use CodeIgniter as a development platform.
Now we have decided to use the slaves for the read queries and the master for the write queries.
I have been told that this is not doable without modifying the source code, because a proxy can't know the type of the query.
Any idea how to proceed with this without causing too much damage to a perfectly working system?
We will use this: http://dev.mysql.com/downloads/mysql-proxy/
It does exactly what we want.
More info here:
http://jan.kneschke.de/2007/8/1/mysql-proxy-learns-r-w-splitting/
http://www.infoq.com/news/2007/10/mysqlproxyrwsplitting
http://archive.oreilly.com/pub/a/databases/2007/07/12/getting-started-with-mysql-proxy.html
Something I was also looking for. A few months back I did something similar, but with 3 web servers in front of master/slave MySQL servers. The first web server has mod_proxy enabled and redirects requests to the read and write servers; all requests come to this server first. If a POST, PUT, or DELETE request comes in, it goes to the write server; all GET and other requests go to the read server.
Here you can find the mod_proxy settings I used:
http://pastebin.com/a30BRHFq
Here you can read about load balancing:
http://www.rackspace.com/knowledge_center/article/simple-load-balancing-with-apache
I'm still looking for a better solution with less hardware involved.
I figured out another solution through CI: create two database connection groups in the database.php file. Keep the slave MySQL server as the default database connection, and add another connection group for the write-only (master) server.
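A minimal sketch of what the two connection groups in application/config/database.php might look like (hostnames and credentials are placeholders):
$active_group = 'default';

// Default group: the slave, used for all normal (read) queries
$db['default']['hostname'] = 'slave.example.com';
$db['default']['username'] = 'app_read';
$db['default']['password'] = 'secret';
$db['default']['database'] = 'mydb';
$db['default']['dbdriver'] = 'mysqli';

// Write-only group: the master, used for insert/update/delete
$db['writedb']['hostname'] = 'master.example.com';
$db['writedb']['username'] = 'app_write';
$db['writedb']['password'] = 'secret';
$db['writedb']['database'] = 'mydb';
$db['writedb']['dbdriver'] = 'mysqli';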
You can use this base model to extend:
https://github.com/jamierumbelow/codeigniter-base-model
You need to extend your models with this model. It has functionality for callbacks before and after insert, update, delete, and get queries; you only need to add one custom callback method, change_db_group:
// this method goes in MY_Model
function change_db_group()
{
    $this->_database = $this->load->database('writedb', TRUE);
}
Now your example model:
class Example_Model extends MY_Model{
protected $_table = 'example_table';
protected $before_create = array('change_db_group');
protected $before_update = array('change_db_group');
protected $before_delete = array('change_db_group');
}
Your database connection will be changed before insert, update, or delete queries are executed.
I am about to develop an application where employees go to customer premises to service and repair machines. They need to fill out a service card using a tablet or other mobile device.
In case there is no Internet connection, I am thinking about using HTML5 offline storage, mainly IndexedDB, to store the service card (web form) data locally and then sync at the office, where Internet exists. The sync would be with a MySQL database.
So the question: is it possible to sync IndexedDB with MySQL? I have never worked with IndexedDB; I am only doing research and see that it has potential.
Web SQL is deprecated; otherwise, it could have been the closer solution.
Any other alternatives in case the above is difficult or outside the standard?
Your opinions are highly appreciated.
Thanks.
This is definitely doable. I am only just starting to learn IndexedDB in the last couple of days, but this is how I would see it working (a sketch follows the list below):
The website knows it's in offline mode somehow.
Clicking submit on the form saves the data into IndexedDB.
Later, when the laptop (or whatever) is back online or on the intranet and can talk to the main server again, it sends all the IndexedDB rows to the server via an AJAX call, to be stored in MySQL.
IndexedDB is cleared.
Repeat.
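A minimal sketch of that flow in plain browser JavaScript (the database name, store name, and /sync endpoint are all hypothetical):
const openDb = () => new Promise((resolve, reject) => {
  const req = indexedDB.open('serviceCards', 1);
  req.onupgradeneeded = () => req.result.createObjectStore('queue', { autoIncrement: true });
  req.onsuccess = () => resolve(req.result);
  req.onerror = () => reject(req.error);
});

// Called on form submit while offline: queue the record locally
async function saveOffline(record) {
  const db = await openDb();
  db.transaction('queue', 'readwrite').objectStore('queue').add(record);
}

// When connectivity returns: upload the queue, then clear it
window.addEventListener('online', async () => {
  const db = await openDb();
  const rows = await new Promise((resolve, reject) => {
    const req = db.transaction('queue').objectStore('queue').getAll();
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
  if (!rows.length) return;
  const res = await fetch('/sync', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(rows)
  });
  // Only clear the local queue once the server has confirmed the upload
  if (res.ok) db.transaction('queue', 'readwrite').objectStore('queue').clear();
});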
A little bit late, but I hope it helps.
This is possible, though I'm not sure it is the best choice. I can tell you that I'm building a web app where I have a MySQL database and the app must work offline and keep track of the data. I tried using IndexedDB directly and it was very confusing for me, so I implemented DexieJs, a minimalistic and straightforward API to communicate with IndexedDB in an easy way.
Now the app works online; if the internet goes down, it works offline until it gets internet back, and then it uploads the data to the MySQL database. One of the solutions I read about for saving the data was to store the object passed through JSON.stringify() in a TEXT field, and once you need the data back, use JSON.parse().
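For illustration, that round trip is just (the endpoint is hypothetical):
// Serialize before uploading; the server stores the string in a TEXT column
const payload = JSON.stringify({ machineId: 42, notes: 'replaced belt' });
fetch('/api/cards', { method: 'POST', body: payload });

// Later, after reading the TEXT column back from MySQL:
const card = JSON.parse(payload); // -> { machineId: 42, notes: 'replaced belt' }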
This was my motivation to build the app that way, along with the fact that we couldn't change the database:
IndexedDB Tutorial
Sync IndexedDB with MySQL
Connect node to mysql
[Update for 2021]
For anyone reading this, I can recommend to check out AceBase.
AceBase is a realtime database that enables easy storage and synchronization between browser and server databases. It uses IndexedDB in the browser, and its own binary db format or SQL Server / SQLite storage on the server side. MySQL storage is also on the roadmap. Offline edits are synced upon reconnecting and clients are notified of remote database changes in realtime through a websocket (FAST!).
On top of this, AceBase has a unique feature called "live data proxies" that allows all changes to in-memory objects to be persisted and synced to local and server databases, so you can forget about database coding altogether and program as if you're only using local objects, no matter if you're online or offline.
The following example shows how to create a local IndexedDB database in the browser, how to connect to a remote database server that syncs with the local database, and how to create a live data proxy that eliminates further database coding altogether.
const { AceBaseClient } = require('acebase-client');
const { AceBase } = require('acebase');
// Create local database with IndexedDB storage:
const cacheDb = AceBase.WithIndexedDB('mydb-local');
// Connect to server database, use local db for offline storage:
const db = new AceBaseClient({ dbname: 'mydb', host: 'db.myproject.com', port: 443, https: true, cache: { db: cacheDb } });
// Wait for remote database to be connected, or ready to use when offline:
db.ready(async () => {
// Create live data proxy for a chat:
const emptyChat = { title: 'New chat', messages: {} };
const proxy = await db.ref('chats/chatid1').proxy(emptyChat); // Use emptyChat if chat node doesn't exist
// Get object reference containing live data:
const chat = proxy.value;
// Update chat's properties to save to local database,
// sync to server AND all other clients monitoring this chat in realtime:
chat.title = `Changing the title`;
chat.messages.push({
from: 'ewout',
sent: new Date(),
text: `Sending a message that is stored in the database and synced automatically was never this easy!` +
`This message might have been sent while we were offline. Who knows!`
});
// To monitor realtime changes to the chat:
chat.onChanged((val, prev, isRemoteChange, context) => {
if (val.title !== prev.title) {
console.log(`Chat title changed to ${val.title} by ${isRemoteChange ? 'someone else' : 'us'}`);
}
});
});
For more examples and documentation, see AceBase realtime database engine at npmjs.com
I have recently started development on a relatively simple WCF REST service which returns JSON formatted results. At first everything worked great, and the service was quickly up and running.
The main function of the service is to return a large chunk of data extracted from a database. This data rarely changes, so I decided to try and setup a caching mechanism to speed things up. To do this I planned to set InstanceContextMode.Single and ConcurrencyMode.Multiple, and then with some thread locks, safely return a static cached result. Every 5 minutes or so, or whenever IIS decides to clear everything, the data would be re-fetched from the database.
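For reference, the combination described above is usually declared with a ServiceBehavior attribute; a sketch under those assumptions (this is not the asker's actual code, and the names are made up):
[ServiceContract]
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ReportService
{
    private static readonly object CacheLock = new object();
    private static string cachedJson; // refreshed every few minutes

    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json)]
    public string GetData()
    {
        lock (CacheLock)
        {
            if (cachedJson == null)
                cachedJson = LoadFromDatabase();
            return cachedJson;
        }
    }

    private static string LoadFromDatabase()
    {
        return "{}"; // placeholder for the real database query
    }
}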
My issue is that InstanceContextMode.Single does not behave as expected. My understanding is that a single instance of my WCF service class should be created and maintained. However, the behaviour I see is that a completely new instance of my class is created per call. This includes re-initialising all static variables.
I tried changing the web service from webHttpBinding (used for REST) to wsHttpBinding, using the service with a SOAP config, but this results in exactly the same behaviour.
What am I doing wrong?! I have spent way too long trying to figure this out.
Any help would be great!
Strange. Can you try this and tell me what happens?
ServiceThrottlingBehavior ThrottleBehavior = new ServiceThrottlingBehavior();
ThrottleBehavior.MaxConcurrentSessions = 1;
ThrottleBehavior.MaxConcurrentCalls = 1;
ThrottleBehavior.MaxConcurrentInstances = 1;
ServiceHost Host = ...
Host.Description.Behaviors.Add(ThrottleBehavior);
And how do you know your single service instance isn't "Single"? Did you see multiple database connections in the profiler? Is that what suggested to you that your service isn't a single instance? In your service operation implementation, do you do some of the work on a separate thread?
I have a MySQL database hosted on my web site, with a table named UsrLic,
where anyone who wants to buy my software must register and enter his/her generated machine key (+ username, email, etc.).
So my question is:
I want to automate this process from my software. How should this process work?
Should I connect to and update my database directly from my software? This means I must save all my database connection parameters in it (my database username, password, server) and then use ADO or MyDAC to connect to this database. If yes, how secure is this process?
Or any other suggestions?
I recommend creating an API on your web site in PHP and calling the API from Delphi.
That way, the database is only available to your web server and not to the client application, ever. In fact, you should run your database on localhost or with a private IP so that only machines on the same physical network can reach it.
I have implemented this and am implementing it again as we speak.
PHP
Create a new file named register_config.php. In this file, set up your MySQL connection information.
Create a file named register.php. In this file, put your registration functions. From this file, include 'register_config.php'. You will pass parameters to the functions you create here, and they will do the reading and writing to your database.
Create a file named register_api.php. From this file, include 'register.php'. Here, you will process POST or GET variables that are sent from your client application, call functions in register.php, and return results back to the client, all via HTTP.
You will have to research connecting to and querying a MySQL database. The W3Schools tutorials will have you doing this very quickly.
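As an illustration (a sketch only; the column names and config variables are assumptions), registerBuyer() in register.php might use mysqli with a prepared statement:
<?php
// register.php -- sketch; register_config.php defines the connection values
include 'register_config.php';

function registerBuyer($name, $email) {
    global $dbHost, $dbUser, $dbPass, $dbName; // hypothetical config variables
    $db = new mysqli($dbHost, $dbUser, $dbPass, $dbName);
    if ($db->connect_error) {
        return false;
    }
    // Prepared statement keeps user input out of the SQL string
    $stmt = $db->prepare('INSERT INTO UsrLic (name, email) VALUES (?, ?)');
    $stmt->bind_param('ss', $name, $email);
    $ok = $stmt->execute();
    $stmt->close();
    $db->close();
    return $ok;
}
?>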
For example:
Your Delphi program calls https://mysite/register_api.php with Post() and sends the following values:
name=Marcus
email=marcus@gmail.com
Here's how the beginning of register_api.php might look:
// Our actual database and registration functions are in this library
include 'register.php';
// These are the name value pairs sent via POST from the client
$name = $_POST['name'];
$email = $_POST['email'];
// Sanitize and validate the input here...
// Register them in the DB by calling my function in register.php
if (registerBuyer($name, $email)) {
// Let them know we succeeded
echo "OK";
} else {
// Let them know we failed
echo "ERROR";
}
Delphi
Use Indy's TIdHTTP component and its Post() or Get() method to post data to register_api.php on the website.
You will get the response back as text from your API.
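A minimal sketch of that client call (assuming Indy 10; the URL and values are the ones from the example above):
uses
  Classes, Dialogs, IdHTTP, IdSSLOpenSSL;

procedure RegisterBuyer;
var
  Http: TIdHTTP;
  Ssl: TIdSSLIOHandlerSocketOpenSSL;
  Params: TStringList;
  Response: string;
begin
  Http := TIdHTTP.Create(nil);
  Ssl := TIdSSLIOHandlerSocketOpenSSL.Create(nil);
  Params := TStringList.Create;
  try
    Http.IOHandler := Ssl; // HTTPS, as recommended below
    Params.Add('name=Marcus');
    Params.Add('email=marcus@gmail.com');
    Response := Http.Post('https://mysite/register_api.php', Params);
    if Response = 'OK' then
      ShowMessage('Registered!');
  finally
    Params.Free;
    Ssl.Free;
    Http.Free;
  end;
end;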
Keep it simple.
Security
All validation should be done on the server (API). The server must be the gatekeeper.
Sanitize all input to the API from the user (the client) before you call any functions, especially queries.
If you are using shared web hosting, make sure that register.php and register_config.php are not world readable.
If you are passing sensitive information, and it sounds like you are, you should call the registration API function from Delphi over HTTPS. HTTPS provides end-to-end protection so that nobody can sniff the data off the wire.
Simply hook up a TIdSSLIOHandlerSocketOpenSSL component to your TIdHTTP component, and you're good to go, minus any certificate verification.
Use the SSL component's OnVerifyPeer event to write your own certificate verification method. This is important: if you don't verify the server-side certificate, other sites can impersonate yours via DNS poisoning and collect the data from your users instead of you. That said, don't let this hold you up, since it requires a bit more understanding; add it in a future version.
Why don't you use e.g. share*it? They also handle the buying process (I don't see how you would do this yourself...) and let you create a reg key through a Delphi app.