sync only one latest updated document in pouchdb through couchbase sync gateway - couchbase

In my AngularJS application I am fetching data from PouchDB synced with Couchbase Sync Gateway. I have set rev_limit to 5 in the Sync Gateway config file. While syncing into PouchDB through Sync Gateway, all revisions of each document are getting synced, so the size of the PouchDB database keeps growing. I want to sync only the latest revision of each document into PouchDB rather than all revisions.
Can you please help with this, or suggest any improvement I need to make?
Thanks

There are two ways of doing this.
Enable auto-compaction when you create the database:
var db = new PouchDB('mydb', {auto_compaction: true});
Or compact on demand:
db.compact();
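For illustration, here is a minimal sketch that combines both options around a pull replication from Sync Gateway (the remote URL and database name below are placeholders):
// Pull from Sync Gateway into a locally auto-compacted database.
var db = new PouchDB('mydb', {auto_compaction: true});
var remote = 'http://sync-gateway.example.com:4984/mydb';

db.replicate.from(remote).on('complete', function () {
  // With auto_compaction enabled this is usually redundant, but you can
  // also compact explicitly after a one-off replication finishes.
  db.compact().then(function (info) {
    console.log('compaction done', info);
  });
});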

Related

disable generation of generatedSample attribute in Response Representations of api configuration

Is there a mechanism within Azure API Management to disable the generation of the sample data that is injected into the api specification by the APIM processes?
I have noticed in APIs that have significantly large/complex models and a large number of operations that the sampleGenerated attribute creates an extremely large overhead in the configuration of the API. For example, we have an API whose Swagger file is ~260 KB on original import, and by the time it ends up in the APIM repository the configuration file has expanded to over 13 MB of data. This sample data doesn't appear to be used in the admin or developer portal, so I am not sure of the value of storing it in the primary configuration file. I have attempted to update via the repository to clear these values; however, they appear to be recreated after the repository update.
The only way to do so is to provide your own samples.

Is it possible for firebase to update itself?

I need to provide every user with data from different sites. Each of the sites provides data in JSON format, but some of them restrict the maximum number of requests.
My idea for a solution is to download the data into Firebase periodically; users would then access just the Firebase database.
From the docs it seems to me that Firebase can somehow use HTTP requests.
Can I use Firebase to periodically update itself via HTTP requests?
Or should I set up a server which will do the task?
I am pretty new to these topics, so any tip on where to look for information will be appreciated.
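For what it's worth, the "separate server" variant of your idea could look roughly like the sketch below: a small script, run from cron or any scheduler, that pulls JSON from a source site and writes it into the Firebase Realtime Database over its REST API. All URLs are placeholders, and a real setup would also need authentication and error handling.
// Hypothetical refresh script (Node.js); the source site and Firebase URLs are made up.
var SOURCE_URL = 'https://api.example.com/data.json';
var FIREBASE_URL = 'https://your-project.firebaseio.com/cachedData.json';

async function refresh() {
  var res = await fetch(SOURCE_URL);            // stay within the source's request limits
  var data = await res.json();
  await fetch(FIREBASE_URL, {                   // overwrite the cached copy in Firebase
    method: 'PUT',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(data)
  });
}

refresh().catch(console.error);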

Is it possible to delete or segment a bucket in the forge API

I'm building an app where users will add collections of CAD files to an engineering project.
My plan was to have one transient and temporary bucket for the whole app to use for temp storage, then create a persistent bucket for each project to hold that project's CAD files for the life of the project.
I've written the functions to create the new buckets for each project as they are created. I started to write the function to delete the bucket when the project is deleted and realised there is no API function to delete a bucket!
Now I'm wondering if I'm thinking about it wrong.
Rather than creating/deleting buckets with projects, would it be better to have one persistent bucket, segmented in some way to hold each project's files in its own segment, and delete that segment with the project?
How would I go about this? Or should I do something else altogether?
Yes it is. It is simply not documented yet.
The API works like this when using OSS v2:
DELETE https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey
It requires the 'bucket:delete' scope, and the action cannot be undone.
It deletes the bucket and all files in it, but viewables will be preserved.
You can test it using the sample here. Check out the bucketDelete command.
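As a rough sketch, the call described above could be made like this from JavaScript (the token and bucket key are placeholders, and the token must be obtained with the 'bucket:delete' scope):
// Delete a bucket and everything in it; this cannot be undone.
var ACCESS_TOKEN = '<2-legged token with bucket:delete scope>';
var bucketKey = 'my-project-bucket';

fetch('https://developer.api.autodesk.com/oss/v2/buckets/' + bucketKey, {
  method: 'DELETE',
  headers: {Authorization: 'Bearer ' + ACCESS_TOKEN}
}).then(function (res) {
  console.log('delete bucket status:', res.status);
});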
There is an API to delete buckets, but I'm not sure it's exposed to public API keys. It uses the DELETE verb and requires the 'bucket:delete' scope.
On the other hand, as you mentioned, there is not really a need for a per-project bucket; it's really up to you to manage how you create your buckets and place the files in them. To give you an example, the Autodesk A360 cloud infrastructure uses a single bucket to hold the files of all its customers!
You could get away with just 3 buckets (one of each type) and manage the project/file relationship using a third-party database or a prefix naming mechanism.
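If you go the prefix route, a sketch of what that could look like is below: object keys are namespaced by project, so removing a project means listing its objects by prefix and deleting each one instead of deleting a bucket. The bucket key, prefix and token are assumptions, and the beginsWith filter on the objects listing is used here to narrow the list.
// List one project's objects in a shared persistent bucket.
var ACCESS_TOKEN = '<token with data:read scope>';
var bucketKey = 'myapp-projects';
var projectPrefix = 'project-42/';            // e.g. object keys like 'project-42/bracket.dwg'

fetch('https://developer.api.autodesk.com/oss/v2/buckets/' + bucketKey +
      '/objects?beginsWith=' + encodeURIComponent(projectPrefix), {
  headers: {Authorization: 'Bearer ' + ACCESS_TOKEN}
}).then(function (res) { return res.json(); })
  .then(function (body) {
    // Each item could then be removed with DELETE .../buckets/:bucketKey/objects/:objectKey
    console.log((body.items || []).map(function (o) { return o.objectKey; }));
  });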

Implementing IoT PowerBI table schema

I'm currently implementing an IoT solution that has a bunch of sensors sending information in JSON format through a gateway.
I was reading about doing this on Azure but couldn't quite figure out how the JSON schema and Event Hubs work together to display the info on PowerBI.
Can I create a schema and upload it to PowerBI then connect it to my device?
There are multiple sides to this. To start with, IoT ingestion in Azure is done through Event Hubs, as you've mentioned. If your gateway is able to make a RESTful call to the Event Hubs entry point, Event Hubs will receive this data and store it temporarily for the retention period specified. Stream Analytics will then consume the data from Event Hubs and enable you to do further processing and divert the data to different outputs. In your case, you can set one of the outputs to be a PowerBI dashboard, which you can authorize with an organizational account (more on that later), and the output will automatically be tied to PowerBI.
The data schema part is interesting: the JSON itself defines the data table schema to be used on the PowerBI side and will propagate from Event Hubs to Stream Analytics to PowerBI with the first JSON package sent. Once the schema is there it is fixed, and the rest of the data being streamed in should be in the same format.
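As an illustration of the gateway-side call and of that first JSON package, here is a rough sketch (the namespace, hub name and SAS token are placeholders); the shape of this payload is what ends up as the table schema on the PowerBI side, so keep it flat and consistent:
// Send one sensor reading to an Event Hubs endpoint over REST.
var EVENT_HUB_URL = 'https://my-iot-ns.servicebus.windows.net/sensor-hub/messages';
var SAS_TOKEN = '<SharedAccessSignature sr=...&sig=...&se=...&skn=...>';

var reading = {
  deviceId: 'sensor-01',
  temperature: 21.7,
  humidity: 48,
  timestamp: new Date().toISOString()
};

fetch(EVENT_HUB_URL, {
  method: 'POST',
  headers: {Authorization: SAS_TOKEN, 'Content-Type': 'application/json'},
  body: JSON.stringify(reading)
}).then(function (res) {
  console.log('Event Hubs responded with', res.status);   // 201 on success
});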
If you don't have an organizational account at hand to use with PowerBI, you can register your domain under Azure Active Directory and use that account since it is considered within your org.
There may be a way of altering the schema afterwards using the PowerBI REST API. Kindly find the links below; I haven't tried it myself though.
https://msdn.microsoft.com/en-us/library/mt203557.aspx
Stream analytics with powerbi
Hope this helps, let me know if you need further info.
One way to achieve this is to send your data to Azure Event Hubs, then read it and send it to PowerBI with Stream Analytics. Listing all the steps here would be too long. I suggest that you take a look at a series of blog posts I wrote describing how I built a demo similar to what you are trying to achieve. That should give you enough info to get started.
http://guyb.ca/IoTAzureDemo

WSO2 API Manager 1.6.0 Published API does not show up in the store

I have a distributed Publisher (port 9446) and Store (port 9447). I'm starting them with the -Dprofile options per: http://docs.wso2.org/display/AM160/Running+the+Product+on+a+Preferred+Profile
and both components are configured as follows:
CarbonDB = wso2reg
User = wso2user
API = wso2API
Reg = wso2SharedRegistry (for governance and config).
When I create a new API on the Publisher and then publish to the gateway I see in the logs that it gets published:
INFO - API Initializing API: admin--CleanPhoneVerify:v1.0.0
But when I log into the Store on port 9447 (https://StorePubServer.domain.ext:9447/Store) I don't see the API.
However, when I log into the address (https://StorePubServer.domain.ext:9446/Store) I see it.
Question 1: Shouldn't the preferred profile start options prevent the Store from working on port 9446?
Question 2: Why don't I see the api on the Store running on port 9447 that I started with my -Dprofile option?
Answer 1
At the moment profiles don't remove the web applications, i.e. the Store and Publisher apps. They only remove features that come through jars, by eliminating the jars which are not related to the given profile.
Answer 2
Please enable clustering in the Store and Publisher by setting them to the same clustering domain. To do that, make the changes below on both the Store and Publisher.
1. Open AM_HOME/repository/conf/axis2/axis2.xml and locate the clustering configuration.
2. Enable clustering:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
enable="true">
3. Set a clustering domain value. This should be the same for both the Store and Publisher.
<parameter name="domain">storepub.domain</parameter>
Restart the servers and try with a new API.
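Putting the two edits together, the relevant part of axis2.xml on both nodes should end up looking roughly like this (other clustering parameters omitted):
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- must be the same value on the Store and the Publisher -->
    <parameter name="domain">storepub.domain</parameter>
</clustering>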