How to dynamically change the sync function in Couchbase Sync Gateway

Is there a way I can dynamically change the sync function? Let's say my documents have a field ID and I want to get the documents belonging to a particular ID, so ID is my variable here. For example, below is a sync function for ID=4:
"sync":
function (doc) {
if(doc.ID==4){
channel (doc.channels);
}
else{
throw({forbidden: "Missing required properties"});
}
},
Now this will only work for ID=4. How can I make my sync function dynamic? Is there a way I can supply arguments to my sync function?
EDIT 1: Added use case
OK, so my use case is like this. I have an app in which, when a user logs in, I need to get user-specific data from Couchbase Server into Couchbase Lite. My Couchbase Server has 20,000 documents, and each user has 5 documents, so there are 20,000 / 5 = 4,000 users. When a user logs in to my app, Couchbase Server should send only the 5 documents related to that user, not all 20,000 documents.
EDIT 2
This is how I have implemented the replication:
private URL createSyncURL(boolean isEncrypted) {
    URL syncURL = null;
    String host = "http://172.16.25.108";
    String port = "4986";
    String dbName = "sync_gateway";
    try {
        //syncURL = new URL("http://127.0.0.1 :4986/sync_gateway");
        syncURL = new URL(host + ":" + port + "/" + dbName);
    } catch (Exception me) {
        me.printStackTrace();
    }
    Log.d(syncURL.toString(), "URL");
    return syncURL;
}

private void startReplications() throws CouchbaseLiteException {
    Log.d(TAG, "");
    Replication pull = database.createPullReplication(this.createSyncURL(false));
    Replication push = database.createPushReplication(this.createSyncURL(false));
    Authenticator authenticator = AuthenticatorFactory.createBasicAuthenticator("an", "1234");
    pull.setAuthenticator(authenticator);
    //push.setAuthenticator(authenticator);
    List<String> channels1 = new ArrayList<String>();
    channels1.add("u1");
    pull.setChannels(channels1);
    pull.setContinuous(true);
    // push.setContinuous(true);
    pull.start();
    //push.start();
    if (!push.isRunning()) {
        Log.d(TAG, "MyBad");
    }
    /*if(!push.isRunning()) {
        Log.d(TAG, "Replication is not running due to " + push.getLastError().getMessage());
        Log.d(TAG, "Replication is not running due to " + push.getLastError().getCause());
        Log.d(TAG, "Replication is not running due to " + push.getLastError().getStackTrace());
        Log.d(TAG, "Replication is not running due to " + push.getLastError().toString());
    }*/
}

The easiest way to achieve this is to assign each user to one channel, named after the user, and to give each document the channel (= user) names of all the users for whom that document is relevant (maybe just one channel name per document, but that's completely up to you).
So with the standard sync function (without any if condition), if your config.json contains
"users": {
"u1": {
"admin_channels": ["u1"],
"password": "abracadabra"
},
"u2": {
"admin_channels": ["u2"],
"password": "simsalabim"
...
and you have documents having
{"channels": "u1",...
{"channels": "u2",...
{"channels": ["u1", "u2"],...
then the first will be transferred to u1, the second to u2, and the third to both of them. You don't need to make your channel names identical to the user name, but for this scenario it's the easiest way to go.
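For reference, the "standard sync function" mentioned above (no if condition at all) simply routes each document into whatever channels its channels field names; a minimal sketch:
function (doc) {
  // Route the document into the channels it lists; per-user access then comes from admin_channels.
  channel(doc.channels);
}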
The programmatic assignment of channels to users can be done via the Sync Gateway Admin REST API; see http://developer.couchbase.com/documentation/mobile/1.2/develop/references/sync-gateway/admin-rest-api/user-admin/post-user/index.html. (Note that the Admin API should run on a port that is open only to the local server where Couchbase runs, not to the public.)
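As a rough sketch of such a call (assuming the default admin port 4985, the sync_gateway database name from the question, and a made-up user u3), a small Node.js snippet using node-fetch could look like this; run it only from the server itself, never from a publicly reachable client:
const fetch = require('node-fetch');
// Create user "u3" with a channel named after the user via the Admin REST API.
fetch('http://localhost:4985/sync_gateway/_user/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'u3',
    password: 'opensesame',
    admin_channels: ['u3']
  })
}).then(res => console.log('User created, status:', res.status));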

Related

Download SFTP file using SSIS package

I want to create an SSIS package which needs to download a file automatically and place it in our local directory.
Note: using the Execute Process task and batch script files only.
In a new SSIS project, create a new package. Navigate to the Parameters tab, where we’ll create a handful of runtime values that will make the DownloadSFTP package more reusable.
pFilename: This is the file name to download from the server. Note that we can also use wildcards (assuming they are supported by the target server) – in the example above, we'll be downloading all files ending in ".TXT".
pServerHostKey: This is used to satisfy a security mechanism built into the WinSCP process. By default, WinSCP will prompt the user to verify and add to local cache the host key when connecting to an SFTP server for the first time. Because this will be done in an automated, non-interactive process, getting that prompt would cause an error in our script. To prevent this, the script is built to supply the server host key to avoid the error, and also has the added benefit of ensuring we're actually connecting to the correct SFTP server. This brief article on the WinSCP documentation site describes how to retrieve the server host key for the target server.
pServerUserPassword: This is marked as sensitive to mask the password. As part of the script logic, this password will be decrypted before it is sent to the server.
Create a new script task in the control flow, and add all 7 of the parameters shown above to the list of ReadOnlyVariables.
Using the Main() function (which is created automatically in a new script task), create the Process object and configure a few of the runtime options, including the name of the executable and the download directory.
public void Main()
{
    // Create a new Process object to execute WinSCP
    Process winscp = new Process();

    // Set the executable path and download directory
    winscp.StartInfo.FileName = Dts.Variables["$Package::pWinSCPLocation"].Value.ToString();
    winscp.StartInfo.WorkingDirectory = Dts.Variables["$Package::pDownloadDir"].Value.ToString();

    // Set static execution options (these should not need to change)
    winscp.StartInfo.UseShellExecute = false;
    winscp.StartInfo.RedirectStandardInput = true;
    winscp.StartInfo.RedirectStandardOutput = true;
    winscp.StartInfo.CreateNoWindow = true;

    // Set session options
    string sessionOptionString = "option batch abort" + System.Environment.NewLine + "option confirm off";
The next step is to create the input strings that will make the connection and download the file. At the bottom of this snippet, there are 3 variables that will capture output messages, error messages, and the return value, all of which will be used to log runtime information.
    // Build the connect string (<user>:<password>#<hostname>)
    string connectString = "open " + Dts.Variables["$Package::pServerUserName"].Value.ToString()
        + ":"
        + Dts.Variables["$Package::pServerUserPassword"].GetSensitiveValue().ToString()
        + "#"
        + Dts.Variables["$Package::pServerName"].Value.ToString();

    // Supplying the host key adds an extra level of security, and avoids getting the prompt to trust the server.
    string hostKeyString = Dts.Variables["$Package::pServerHostKey"].Value.ToString();

    // If a host key was specified, include it
    if (hostKeyString != null && hostKeyString.Length > 0)
        connectString += " -hostkey=\"" + hostKeyString + "\"";

    // Build the get command string
    string getString = "get " + Dts.Variables["$Package::pFilename"].Value.ToString();

    // Create output variables to capture execution info
    string outStr = "", errStr = "";
    int returnVal = 1;
With all of the options configured, it’s time to invoke WinSCP.com. The try/catch block below will attempt to connect and download the specified file from the server.
    // This try/catch block will capture catastrophic failures (such as specifying the wrong path to winscp).
    try
    {
        winscp.Start();
        winscp.StandardInput.WriteLine(sessionOptionString);
        winscp.StandardInput.WriteLine(connectString);
        winscp.StandardInput.WriteLine(getString);
        winscp.StandardInput.Close();
        winscp.WaitForExit();

        // Set the outStr to the output value, obfuscating the password
        outStr = winscp.StandardOutput.ReadToEnd().Replace(":" + Dts.Variables["$Package::pServerUserPassword"].GetSensitiveValue().ToString() + "#", ":*******#");
        returnVal = winscp.ExitCode;
    }
    catch (Exception ex)
    {
        errStr = "An error occurred when attempting to execute winscp.com: " + ex.Message.Replace("'", "\"").Replace("--", " - ");
    }
The package is ready to be executed. Assuming everything is configured properly, running the package on the system should download exactly two text files (remember, we used the wildcard “*.txt” to get all text files).

Firebase cloud functions / One works, the other (same function) does not

I am getting very frustrated. I can't test Firebase cloud functions normally because a lot of things don't work. I tested it: I copied the same function with a different name, and the new function doesn't work.
Why?
Why does helloWorld work and tryHello not?
cloud functions nodejs index.js:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

exports.tryHello = functions.https.onCall((data, context) => {
  let dataexample = {
    name: 'examplename',
    state: 'examplestate',
    country: 'examplecountry'
  };
  let setDoc = db.collection('newexample').doc(data.text).set(dataexample);
  return { text: "success. uid:" + context.auth.uid };
});

exports.helloWorld = functions.https.onCall((data, context) => {
  let dataexample = {
    name: 'examplename',
    state: 'examplestate',
    country: 'examplecountry'
  };
  let setDoc = db.collection('newexample').doc(data.text).set(dataexample);
  return { text: "success. uid:" + context.auth.uid };
});
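As a side note that is unrelated to the naming difference: in both functions the Firestore write is started but never awaited, so the callable can report "success" before the document is actually written, which would explain data showing up only much later. A hedged sketch of returning the promise chain instead:
exports.tryHello = functions.https.onCall((data, context) => {
  const dataexample = {
    name: 'examplename',
    state: 'examplestate',
    country: 'examplecountry'
  };
  // Returning the promise keeps the function alive until the Firestore write completes.
  return db.collection('newexample').doc(data.text).set(dataexample)
    .then(() => ({ text: 'success. uid:' + context.auth.uid }));
});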
Unity C#:
public void testbutton()
{
    var data = new Dictionary<string, object>();
    data["text"] = "example";

    // I tested "tryHello" and "helloWorld"
    FirebaseFunctions.DefaultInstance.GetHttpsCallable("tryHello")
        .CallAsync(data).ContinueWith((task) =>
        {
            if (task.IsFaulted)
            {
                // Handle the error...
                print("error");
            }
            else if (task.IsCompleted)
            {
                IDictionary snapshot = (IDictionary)task.Result.Data;
                print("Result: " + snapshot["text"]);
            }
        });
}
Result:
1. First, in Unity I write GetHttpsCallable("helloWorld") and save.
I start the game, log in, then I click testbutton.
Result: in the Firebase console the example collection and example document are created successfully, with country: examplecountry, name: examplename, state: examplestate. OK, good.
unity log:
1. User signed in successfully: Jane Q. User (qQC3wEOU95eDFw8UuBjb0O1o20G2)
UnityEngine.Debug:LogFormat(String, Object[])
loginfire:b__10_0(Task1) (at Assets/loginfire.cs:155)
2. Result: success. uid:qQC3wEOU95eDFw8UuBjb0O1o20G2
UnityEngine.MonoBehaviour:print(Object)
<>c:<testbutton>b__16_0(Task1) (at Assets/loginfire.cs:411)
System.Threading._ThreadPoolWaitCallback:PerformWaitCallback()
cloud functions "helloWorld" log:
Function execution started
Function execution took 3020 ms, finished with status code: 200
OK. I delete the "example" collection in the Firebase console.
2. Second, in Unity I write GetHttpsCallable("tryHello") and save.
I start the game, log in, then I click testbutton.
Result: the collection is not created.
unity log:
1. User signed in successfully: Jane Q. User (qQC3wEOU95eDFw8UuBjb0O1o20G2)
UnityEngine.Debug:LogFormat(String, Object[])
loginfire:b__10_0(Task`1) (at Assets/loginfire.cs:155)
System.Threading._ThreadPoolWaitCallback:PerformWaitCallback()
error
UnityEngine.MonoBehaviour:print(Object)
<>c:b__16_0(Task`1) (at Assets/loginfire.cs:396)
System.Threading._ThreadPoolWaitCallback:PerformWaitCallback()
cloud functions "tryHello" log:
nothing...
Why?
I don't understand. They are the same, only the name is different!
Also, in many cases it shows success but still does not update the database, or only much later. Why? Lately, "helloWorld" also often returns an error: if I don't press testbutton immediately after logging in, it can't read the uid.
I'm starting to get tired of the system right from the start.
Thanks..
Solved!
I needed to configure the "tryHello" permission settings in the Cloud console (they were not the same as the helloWorld settings).
Lately, "helloWorld" also often returns an error: if I don't press testbutton immediately after logging in, it can't read the uid.
-> I needed to declare FirebaseAuth within testbutton().
Sorry for the question, thanks.

Feathersjs - Add custom field to hook context object

When using feathersjs on both client and server side, in the app hooks (in the client) we receive an object with several fields, like the service, the method, path, etc.
I would like, with Socket.io, to add a custom field to that object. Would that be possible? To be more precise, I would like to send to the client the current version of the frontend app, to be able to force or suggest a refresh when the frontend is outdated (it is a PWA).
Thanks!
For security reasons, only params.query and data (for create, update and patch) are passed between the client and the server. Query parameters can be pulled from the query into the context with a simple hook like this (where you can pass the version as the __v query parameter):
const setVersion = context => {
  const { __v, ...query } = context.params.query || {};
  context.version = __v;
  // Update `query` with the data without the __v parameter
  context.params.query = query;
  return context;
}
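For completeness, a hook like that is registered the same way as any other Feathers hook; a minimal sketch (the 'messages' service name is just an example):
// Strip __v from the query (and keep it on the context) for every call to this service.
app.service('messages').hooks({
  before: {
    all: [setVersion]
  }
});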
You can also send additional parameters like the version number as extraHeaders, which are then available as params.headers.
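On the client, such a header could be set when the Socket.io connection is created; a rough sketch, assuming a Feathers client built with @feathersjs/socketio-client (the x-app-version header name is made up for illustration, and whether extraHeaders is sent depends on the transport):
const io = require('socket.io-client');
const feathers = require('@feathersjs/feathers');
const socketio = require('@feathersjs/socketio-client');
// Send the frontend version with the connection; on the server it is available as params.headers.
const socket = io('http://localhost:3030', {
  extraHeaders: { 'x-app-version': '1.2.3' }
});
const client = feathers();
client.configure(socketio(socket));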
Going the other way around (sending the version information from the server) can be done by modifying context.result in an application hook:
const { version } = require('./package.json');
app.hooks({
  after: {
    all (context) {
      context.result = {
        ...context.result,
        __v: version
      }
    }
  }
});
It needs to be added to the returned data since websockets do not have any response headers.
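On the receiving side, the client could then compare that __v field against its own build version and suggest a refresh; a small sketch reusing the client from the earlier snippet (CURRENT_VERSION is a placeholder for however the frontend knows its own version):
const CURRENT_VERSION = '1.2.3'; // placeholder: usually injected at build time
client.service('messages').find().then(result => {
  if (result.__v && result.__v !== CURRENT_VERSION) {
    // The frontend is outdated; suggest (or force) a reload of the PWA here.
    console.log('A new version is available, please refresh.');
  }
});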

Is it possible to integrate Amazon QuickSight dashboard graphs to a web application?

I need to display live interactive graphs based on customer data present in MySQL. For generating the graphs, I am planning to use Amazon QuickSight, but I would like to know whether the generated graphs can be integrated with my web application UI.
The MySQL data source is hosted in AWS.
Any other better design solution is also most welcome :)
I don't think so. Even if you want to share the dashboard with someone, you need to create a user in QuickSight, and any more than 1 user will be charged by AWS.
The dashboard cannot be public and you need to log in to view it. If it were public, you could have embedded it in your webpage as an iframe. But you cannot.
So I think you have limited options here when it comes to QuickSight.
You can always use D3 or Google Charts to display the data by exposing REST services for your data in MySQL.
If you have a huge database, you may want to consider indexing the data into Elasticsearch and performing queries on it.
Check if Kibana + Elasticsearch works out of the box for you.
Good luck!
Update: Dec 28 2018
Amazon announced in Nov 2018, that Amazon QuickSight dashboards can now be embedded in applications. Read more here at this AWS QuickSight Update.
AWS has enabled the embedding of the Dashboards into web apps. The feature was released on 27th Nov 2018. Here are a few helpful links:
1. https://aws.amazon.com/blogs/big-data/embed-interactive-dashboards-in-your-application-with-amazon-quicksight/
2. https://docs.aws.amazon.com/quicksight/latest/user/embedded-dashboards-setup.html
Note: This answer is applicable only if you are using AWS Cognito
In order to generate Quicksight secure dashboard URL, follow the below steps:
Step 1: Create a new Identity Pool. Go to https://console.aws.amazon.com/cognito/home?region=u-east-1 and click ‘Create new Identity Pool’.
Give it an appropriate name.
Go to the Authentication Providers section and select Cognito.
Give the User Pool ID (your user pool ID) and App Client ID (go to App Clients in the user pool and copy the ID).
Click ‘Create Pool’, then click ‘Allow’ to create the identity pool's roles in IAM.
Step 2: Assign a custom policy to the Identity Pool role
Create a custom policy with the JSON below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "quicksight:RegisterUser",
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": "quicksight:GetDashboardEmbedUrl",
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": "sts:AssumeRole",
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
Note: if you want to restrict the user to only one dashboard, replace the * with the dashboard ARN in quicksight:GetDashboardEmbedUrl.
Then go to Roles in IAM, select the IAM role of the identity pool, and assign the custom policy to that role.
Step 3: Configuration for generating the temporary IAM (STS) user
Log in to your application with the user credentials. For creating the temporary IAM user, we use Cognito credentials. When the user logs in, Cognito generates 3 tokens - IdToken, AccessToken and RefreshToken - which are sent to your application server.
For creating the temporary IAM user, we use the Cognito AccessToken, and the credentials will look like below.
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: "Identity pool ID",
    Logins: {
        'cognito-idp.us-east-1.amazonaws.com/UserPoolID': AccessToken
    }
});
For generating the temporary IAM credentials, we call the sts.assumeRole method with the below parameters.
var params = {
    RoleArn: "Cognito Identity role arn",
    RoleSessionName: "Session name"
};
sts.assumeRole(params, function (err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else {
        console.log(data);
    }
});
You can add additional parameters, like the duration (in seconds) for the user.
Now we will get the AccessKeyId, SecretAccessKey and SessionToken of the temporary user.
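Those values come back inside data.Credentials in the assumeRole callback shown above; a small sketch of pulling them out (the variable names are just for illustration):
sts.assumeRole(params, function (err, data) {
    if (err) return console.log(err, err.stack);
    // The temporary credentials for the embed flow live under data.Credentials.
    var accessKeyId = data.Credentials.AccessKeyId;
    var secretAccessKey = data.Credentials.SecretAccessKey;
    var sessionToken = data.Credentials.SessionToken;
    // These are the values Step 5 feeds into AWS.config.update().
});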
Step 4: Register the user in QuickSight
With the same Cognito credentials used in Step 3, we register the user in QuickSight by calling the quicksight.registerUser method with the below parameters.
var params = {
    AwsAccountId: "account id",
    Email: 'email',
    IdentityType: 'IAM',
    Namespace: 'default',
    UserRole: 'READER', // or ADMIN | AUTHOR | RESTRICTED_AUTHOR | RESTRICTED_READER
    IamArn: 'Cognito Identity role arn',
    SessionName: 'session name given in the assume role creation',
};
quicksight.registerUser(params, function (err, data1) {
    if (err) console.log("err register user"); // an error occurred
    else {
        // console.log("Register User1");
    }
});
Now the user will be registered in QuickSight.
Step 5: Update the AWS configuration with the new credentials
The code below shows how to configure AWS.config() with the new credentials generated in Step 3.
AWS.config.update({
    accessKeyId: AccessKeyId,
    secretAccessKey: SecretAccessKey,
    sessionToken: SessionToken,
    region: Region
});
Step 6: Generate the embed URL for the dashboard
Using the credentials generated in Step 3, we call quicksight.getDashboardEmbedUrl with the below parameters.
var params = {
    AwsAccountId: "account ID",
    DashboardId: "dashboard Id",
    IdentityType: "IAM",
    ResetDisabled: true,
    SessionLifetimeInMinutes: 600, // any value between 15 and 600 minutes
    UndoRedoDisabled: false        // true | false
};
quicksight.getDashboardEmbedUrl(params, function (err, data) {
    if (!err) {
        console.log(data);
    } else {
        console.log(err);
    }
});
Now we will get the embed URL for the dashboard.
Call QuickSightEmbedding.embedDashboard from the front end with the generated URL.
The result will be the dashboard embedded in your application, with filter controls.
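For that last front-end step, the amazon-quicksight-embedding-sdk exposes an embedDashboard helper; a rough sketch (containerDiv and embedUrl are placeholders for your page element and the URL returned in Step 6):
// Browser-side sketch using the amazon-quicksight-embedding-sdk (v1-style API).
var containerDiv = document.getElementById('dashboardContainer'); // placeholder element in your page
var options = {
    url: embedUrl,            // the EmbedUrl returned by getDashboardEmbedUrl in Step 6
    container: containerDiv,
    scrolling: 'no',
    height: '700px',
    width: '100%'
};
var dashboard = QuickSightEmbedding.embedDashboard(options);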
I know this is a very late reply, but just in case someone else stumbles across this question... We use periscopedata.com to embed BI dashboards in our SaaS app. All that's needed is knowledge of SQL (to create the charts/dashboards) and enough dev knowledge to call their API endpoint to display the dash in your own app.

Caching json response from server and automatically detect when the data is changed

I am working on an Angular app which makes calls to a server and gets JSON data. I am looking at caching this JSON data.
Approaches tried:
1) I tried using HTML5 localStorage to save the data locally. But the problem with this is that I have to set an expiration time manually, and there is no way of knowing how often the data will change.
2) I have tried using $cacheFactory, however this does not cache data across refreshes or page navigations.
The solution I am looking for:
I want the data to be saved locally or cached, and to use some mechanism to detect whether the JSON data returned by the server has changed and only then make a call.
Is this possible in any way?
I assume you are getting the data from the HTML, calling the web service, and storing the data in local storage.
In the $http success callback, call a function which stores the data in local storage.
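For context, a rough sketch of what that $http call might look like (the /api/user URL is hypothetical), handing the response to the setUserDetails helper shown next:
$http.get('/api/user').then(function (response) {
    // On success, store the returned data in local storage via the helper below.
    setUserDetails(response.data);
});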
setUserDetails = function(userData) {
    var username = "";
    if (userData != null) {
        app.setInLocalStorage("loginName", userData.userName);
    }
}
where userData is an object that holds your HTML data:
var userData = {
    userName: $scope.userName,
};
Get the stored value back from local storage:
var userLoggedIn = app.retrieveFromLocalStorage("loginName");
Then compare whether the data from the server has changed:
if (userLoggedIn != null) {
    // Call the service which gives the dynamic response.
    // Assuming the success data is saved in userData and userName is the property which changes dynamically.
    var newUser = userData.userName;
    if (angular.equals(userLoggedIn, newUser)) {
        // The data has not changed: keep the same data in local storage,
        // no need to call anything else.
    } else {
        // Clear the old data from local storage
        app.clearLocalStorage();
        // Store the new dynamic data in local storage
        // (or call any new functions you need)
        app.setInLocalStorage("loginName", newUser);
    }
}
In the app.js file, include the following functions to store, retrieve, and clear the local storage:
setInLocalStorage: function(key, value) {
    // Check browser support
    if (typeof(Storage) != "undefined") {
        // Store
        localStorage.setItem(key, value);
    } else {
        alert("Sorry, your browser does not support Web Storage...");
    }
},
retrieveFromLocalStorage: function(key) {
    return localStorage.getItem(key);
},
clearLocalStorage: function() {
    localStorage.clear();
}