Managing Couchbase buckets dynamically through Java SDK

I would like to know whether there is a way to perform operations such as listing the existing buckets in a Couchbase cluster, creating a new bucket, retrieving cluster information, etc., using the Couchbase Java SDK.
I know this can be done through the REST API, but I'm trying to manage the cluster dynamically using Java.

Yes, there is a ClusterManager class, accessible through the Cluster object's clusterManager() method. You'll need the administrative credentials.
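For example, listing the existing buckets and pulling basic cluster information could look roughly like this (a minimal sketch against the 2.x Java SDK; the node address and the "Administrator"/"password" credentials are placeholders for your own):

import java.util.List;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.cluster.BucketSettings;
import com.couchbase.client.java.cluster.ClusterInfo;
import com.couchbase.client.java.cluster.ClusterManager;

// Connect to a node and obtain the manager with the administrative credentials.
Cluster cluster = CouchbaseCluster.create("127.0.0.1");
ClusterManager clusterManager = cluster.clusterManager("Administrator", "password");

// List every bucket currently defined on the cluster.
List<BucketSettings> buckets = clusterManager.getBuckets();
for (BucketSettings settings : buckets) {
    System.out.println(settings.name());
}

// Basic cluster information, roughly what the REST API exposes under /pools/default.
ClusterInfo info = clusterManager.info();
System.out.println(info.raw());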

To create a new bucket you can use the ClusterManager's insertBucket() method, which takes a BucketSettings object. For example, you can define a bucket like this:
BucketSettings PrashantSampleBucket = new DefaultBucketSettings.Builder()
    .type(BucketType.COUCHBASE)
    .name("PrashantSampleBucket")
    .password("")
    .quota(2048) // megabytes
    .replicas(1)
    .indexReplicas(true)
    .enableFlush(true)
    .build();
Now insert the bucket into the cluster:
cluster.clusterManager().insertBucket(PrashantSampleBucket);
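If you also need to manage the bucket's lifecycle dynamically, the same ClusterManager (again, in the 2.x SDK) exposes hasBucket() and removeBucket(), for example:

if (cluster.clusterManager().hasBucket("PrashantSampleBucket")) {
    cluster.clusterManager().removeBucket("PrashantSampleBucket");
}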

Related

How do I create a lot of sample data for firestore?

Let's say I need to create a lot of different documents/collections in Firestore, and I need to add them quickly, like copying and pasting JSON. I can't do that with the standard Firebase console, because adding 100 documents would take me forever. Are there any solutions for bulk-creating mock data with a given structure in a Firestore db?
If you switch to the Cloud Console (rather than Firebase Console) for your project, you can use Cloud Shell as a starting point.
From the Cloud Shell environment you'll find tools like node and python installed and available. Using whichever one you prefer, you can write a script using the Server Client libraries.
For example in Python:
from google.cloud import firestore
import random
MAX_DOCUMENTS = 100
SAMPLE_COLLECTION_ID = u'users'
SAMPLE_COLORS = [u'Blue', u'Red', u'Green', u'Yellow', u'White', u'Black']
# Project ID is determined by the GCLOUD_PROJECT environment variable
db = firestore.Client()
collection_ref = db.collection(SAMPLE_COLLECTION_ID)
for _ in range(MAX_DOCUMENTS):
    collection_ref.add({
        u'primary': random.choice(SAMPLE_COLORS),
        u'secondary': random.choice(SAMPLE_COLORS),
        u'trim': random.choice(SAMPLE_COLORS),
        u'accent': random.choice(SAMPLE_COLORS)
    })
While this is the easiest way to get up and running with a static dataset, it leaves a little to be desired. Namely, with Firestore, live dynamic data is needed to exercise its functionality, such as real-time queries. For this task, using Cloud Scheduler and Cloud Functions is a relatively easy way to regularly update sample data.
In addition to the sample generation code, you'll specify the update frequency in Cloud Scheduler. For instance, */10 * * * * defines a frequency of every 10 minutes using the standard unix-cron format.
For non-static data, often a timestamp is useful. Firestore provides a way to have a timestamp from the database server added at write-time as one of the fields:
u'timestamp': firestore.SERVER_TIMESTAMP
It is worth noting that timestamps like this will hotspot in production systems if not sharded correctly. Typically 500 writes/second to the same collection is the maximum you will want, so that the index doesn't hotspot. Sharding can be as simple as each user having their own collection (500 writes/second per user). However, for this example, writing 100 documents every minute via a scheduled Cloud Function is definitely not an issue.
FireKit is a good resource to use for this purpose. It even allows sub-collections.
https://retroportalstudio.gumroad.com/l/firekit_free

Feathers.js and background jobs, how to trigger service event (or realtime update clients)

Looking a bit through the Feathers docs, I understood that it is based on services and hooks, and that services emit events that also help offer real-time sync between server and client.
As long as things are simple, as in the docs, I understand that basically generating a service and then adding/saving/updating through the service methods will trigger the events.
My scenario is a bit different:
The API endpoint does not return info from a single table but the results of complex queries based on multiple tables.
I need to have background workers that do operations on the database, probably using Kue (if there's no better way inside Feathers). When a worker finishes its job, I need a way to trigger the API service so it updates the clients with the new data.
How can I do this in feathers?
Both scenarios can be handled with Feathers like this:
Feathers services do not have to be tied to a table. You can implement a custom service just like you would in any other framework (controller). It is not uncommon to create Dashboard services that aggregate different service calls or use service.Model to access the ORM you are using directly:
class MyService {
  find(params) {
    const userModel = this.app.service('users').Model;
    const invoiceModel = this.app.service('invoices').Model;

    return userModel.doSomething()
      .then(data => invoiceModel.doSomethingElse());
  }

  setup(app, path) {
    this.app = app;
  }
}
Background workers should also use the Feathers API (in Node this can be done either by using the application directly via const app = require('./src/app') or by connecting transparently through Feathers as a client), so that all connected clients will get updates automatically. Then there should be no need to trigger events manually (which comes with caveats, like having to also run your raw data through any hooks that change the data).

Feathers service with multiple endpoints. Single service or multiple services?

I am in the process of mapping an existing API (the eBay REST API) to a Feathers service and am trying to reason about the best way to design the service. The API has multiple endpoints, each with their own GET, POST, etc.:
/ebay/inventory/item
/ebay/inventory/location
/ebay/inventory/offer
/ebay/account/paymentPolicy
/ebay/account/returnPolicy
/ebay/account/fulfillmentPolicy
etc...
I would like to avoid having to create a service for each endpoint and wanted to know if it's possible to have a single service with CRUD for each endpoint. Something like below, where each subdirectory would have get(), create(), etc. for its corresponding endpoint:
services
  ebay
    inventory
      item
      location
      ...
    account
      ...
The Feathers scaffolding creates the following, so it seems like all I would need to do is add additional app.use('/ebay/inventory/xxx') calls to the base ebay service. Does this look like a good way to go about this? If so, how does one add multiple endpoints to a single service?
...
module.exports = function () {
  const app = this;

  // Initialize our service with any options it requires
  app.use('/ebay/inventory', new Service());

  // Get our initialized service so that we can bind hooks
  const ebayService = app.service('/ebay/inventory');
...
module.exports.Service = Service

What is the url for Couchbase UI?

I used CouchDB before and really liked the UI, because I could create views, test them directly in the UI, and view documents.
Because I need to scale, I started using Couchbase. But after installing Couchbase, I don't know the URL of the Couchbase client-side UI.
Thank you
Just use the public IP of any of the nodes in the cluster and connect to port 8091, like this: http://example.com:8091
Yes, it's served via port 8091. But when it comes to IPs, I would read through this section of the docs: http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-bestpractice-cloud-ip.html
CouchDB's GUI is available at http://127.0.0.1:5984/_utils

Accessing JBoss JMX data via JSON

Is there a way to access the JBoss JMX data via JSON?
I am trying to pull a management console together using data from a number of different servers. I can achieve this using screen scraping, but I would prefer to use a JSON object or XML response if one exists, but I have not been able to find one.
You should have a look at Jolokia, a full-featured JSON/HTTP adapter for JMX. It supports and has been tested on JBoss as well as on many other platforms. Jolokia is an agent that is deployed as a normal Java EE war, so you simply drop it into the deploy directory of your JBoss installation. There are also some client libraries available, e.g. jmx4perl, which allows programmatic access to the agent. There is much more to discover, and it is actively developed.
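Once the agent is deployed, every MBean is reachable over plain HTTP. As a rough illustration (assuming the war is mounted under the default /jolokia context on localhost:8080; adjust host, port, and path to your installation), a plain JDK client can fetch heap usage as JSON like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JolokiaReadExample {
    public static void main(String[] args) throws Exception {
        // Jolokia's "read" operation returns the requested MBean attribute as JSON.
        URL url = new URL("http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        StringBuilder json = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                json.append(line);
            }
        }
        System.out.println(json); // a JSON document with value, timestamp, status, ...
    }
}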
If you are using Java, you can write a small program that makes JMX requests to the JBoss server and transforms the response into XML/JSON.
The following small code snippet may help you.
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Use the MX4J MBean server builder (as in the original setup).
System.setProperty("javax.management.builder.initial", "mx4j.server.MX4JMBeanServerBuilder");
// The exact service URL depends on your JBoss version and configuration.
String urlForJMX = "jnp://localhost:1099"; // for JBoss
ObjectName objAll = ObjectName.getInstance("*:*");
JMXServiceURL jmxUrl = new JMXServiceURL(urlForJMX);
MBeanServerConnection jmxServerConnection = JMXConnectorFactory.connect(jmxUrl).getMBeanServerConnection();
System.out.println("Total MBeans :: " + jmxServerConnection.getMBeanCount());
Set<ObjectName> mBeanSet = jmxServerConnection.queryNames(objAll, null);
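The snippet above only collects the MBean names; the "transform the response into XML/JSON" step is still up to you. A naive, purely illustrative way (in practice you would use a JSON library and serialize attribute values, not just names) would be:

StringBuilder json = new StringBuilder("[");
for (ObjectName name : mBeanSet) {
    if (json.length() > 1) {
        json.append(",");
    }
    json.append("\"").append(name.getCanonicalName()).append("\"");
}
json.append("]");
System.out.println(json); // e.g. ["JMImplementation:type=MBeanServerDelegate", ...]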
There are some jmx-rest bridges available that internally talk JMX to the MBeans and expose the result over REST calls (which can deliver JSON as the data format).
See e.g. polarrose or jmx-rest-access. There are a few others out there.