Cassandra configuration - is native transport necessary on all cluster nodes?

Does Cassandra require both of the following options to be enabled?
start_native_transport: true
start_rpc: true
Are these required on all Cassandra nodes?
As far as I can tell, the purpose of each is:
* native transport - for servicing CQL clients
* rpc - for cluster inter-node communication
Are these correct?
If they are, I guess I should enable rpc on all nodes, and perhaps native transport on only one node. Is this correct?

The native transport is the CQL native protocol (as opposed to the Thrift protocol) and is the way all modern Cassandra drivers communicate with the server. This includes all reads/writes/schema changes/etc. Hence you cannot set start_native_transport to false, and since drivers connect to many nodes (any node can act as coordinator for a request), it should be enabled on every node, not just one. Note also that your second assumption is incorrect: start_rpc controls the legacy Thrift client interface, which is another client protocol, not inter-node communication. Nodes talk to each other over the separate storage_port, which neither of these settings affects.
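As a sketch, a cassandra.yaml fragment consistent with this (the port numbers shown are the defaults):

```yaml
# cassandra.yaml (fragment) - set on every node that serves clients
start_native_transport: true   # CQL native protocol (default port 9042)
native_transport_port: 9042
start_rpc: false               # legacy Thrift client protocol (default port 9160);
                               # only enable if you still have Thrift-based clients
```

Inter-node traffic uses storage_port (7000 by default) regardless of either setting above.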

Is it mandatory to use Couchbase at both client and server end for seamless sync operations?

I want to know how to sync Couchbase with other databases seamlessly. Can we use different databases with Couchbase in the same project?
As you haven't specified which databases you have in mind, I will give you a broad answer:
Mobile: Couchbase Server can be synced with Couchbase Lite (https://www.couchbase.com/products/lite) via Sync Gateway, the middleware between Couchbase Lite and Couchbase Server. Sync Gateway is mandatory in this case for security reasons, as you should not simply expose your database on the web.
Xamarin: https://blog.couchbase.com/synchronized-drawing-apps-with-couchbase-mobile/
Android: https://docs.couchbase.com/couchbase-lite/current/java-android.html
Swift: https://docs.couchbase.com/couchbase-lite/current/swift.html
Java: https://docs.couchbase.com/couchbase-lite/current/java-platform.html
Others: https://docs.couchbase.com/couchbase-lite/current/index.html
Couchbase Lite 1.x could also be synced with PouchDB, but this support was dropped in Couchbase Lite 2.x, which was a complete rewrite; that feature has yet to return.
Server: One of the most common ways to sync Couchbase Server with another database is through the Kafka Connector https://docs.couchbase.com/kafka-connector/current/index.html
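As a sketch, a source-connector configuration for streaming changes from a bucket into a Kafka topic might look like the following (property names follow the 4.x connector quickstart and may differ in other versions; node address, bucket, and credentials are placeholders):

```properties
name=couchbase-source
connector.class=com.couchbase.connect.kafka.CouchbaseSourceConnector
couchbase.seed.nodes=127.0.0.1
couchbase.bucket=travel-sample
couchbase.username=Administrator
couchbase.password=password
```

Check the connector documentation linked above for the exact property set for your version.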

com.webMethods.jms.naming.keystore vs com.webMethods.jms.ssl.keystore

We are trying to secure and encrypt the communication between our application and webMethods using CA-signed certificates. During analysis, we found the following parameters to be set as system properties in JBoss 6.4.
Could you please explain the difference between these parameters and which one should be used?
com.webMethods.jms.naming.keystore
com.webMethods.jms.ssl.keystore
JNDI (which is used to look up arbitrary objects like JMS connection factories, JDBC connection factories, EJBs, etc.) is 100% independent from JMS (which is a messaging API). The JNDI specification and JMS specification are completely different and are implemented in different ways. Therefore, JNDI and JMS each need their own way to secure their communication. The com.webMethods.jms.naming.keystore property is used for securing JNDI communication, and com.webMethods.jms.ssl.keystore is used for securing JMS communication.
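A minimal sketch of setting both properties before creating the JNDI context and JMS connection. The two property names come from the question; the keystore paths are placeholders, and in JBoss 6.4 you would typically pass these as -D options in the startup configuration rather than in code:

```java
public class WmJmsSslSetup {
    public static void configure() {
        // Keystore used when the JNDI lookup itself goes over SSL
        // (placeholder path - use your own CA-signed client keystore)
        System.setProperty("com.webMethods.jms.naming.keystore",
                "/opt/certs/jndi-client.jks");
        // Keystore used for the JMS connection itself; JNDI and JMS are
        // secured independently, so each gets its own property
        System.setProperty("com.webMethods.jms.ssl.keystore",
                "/opt/certs/jms-client.jks");
    }

    public static void main(String[] args) {
        configure();
        // After this point you would create the InitialContext (secured by the
        // naming.* keystore) and then the JMS Connection (secured by ssl.*).
    }
}
```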

How does the intracluster replication on couchbase work?

How does the intracluster replication on couchbase work?
I understand that the buckets containing the documents are subdivided into vBuckets.
Replicas of the vBuckets are also created to provide high availability, and the active vBucket and its replicas are stored on different servers throughout the cluster. Now I want to understand how the copies are sent to the replicas. With MongoDB we have the oplog; what is the equivalent in Couchbase?
Couchbase Server uses the Database Change Protocol (DCP) for intracluster and intercluster replication.
From Couchbase Distributed Data Management documentation:
[DCP is] a high-performance streaming protocol that communicates the state of the data using an ordered change log with sequence numbers.
The Couchbase Forums have some commentary on the replication process in the face of node failures.
DCP facilitates many Couchbase integrations such as the Kafka Connector. See the Connector Guides for more examples.
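To illustrate the idea of "an ordered change log with sequence numbers", here is a toy model (this is NOT the Couchbase API, just the concept): each vBucket keeps mutations in sequence order, so a replica can ask for everything after the last sequence number it acknowledged and resume exactly where it left off after a disconnect.

```java
import java.util.ArrayList;
import java.util.List;

public class ChangeLogDemo {
    static class Mutation {
        final long seqno; final String key; final String value;
        Mutation(long seqno, String key, String value) {
            this.seqno = seqno; this.key = key; this.value = value;
        }
    }

    static class VBucket {
        private final List<Mutation> log = new ArrayList<>();
        private long nextSeqno = 1;

        // Every write is appended with a monotonically increasing seqno.
        void set(String key, String value) {
            log.add(new Mutation(nextSeqno++, key, value));
        }

        // A replica streams only the changes it has not yet seen.
        List<Mutation> changesSince(long lastSeenSeqno) {
            List<Mutation> out = new ArrayList<>();
            for (Mutation m : log) {
                if (m.seqno > lastSeenSeqno) out.add(m);
            }
            return out;
        }
    }

    public static void main(String[] args) {
        VBucket active = new VBucket();
        active.set("user::1", "alice");
        active.set("user::2", "bob");
        // Replica applied up to seqno 1 before disconnecting; it resumes there.
        List<Mutation> delta = active.changesSince(1);
        System.out.println(delta.size() + " change(s) to replay, first key=" + delta.get(0).key);
    }
}
```

The real protocol adds failure handling (vBucket UUIDs, rollback points) on top of this basic resume-by-seqno idea.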

MySQL JDBC SSL Client Certificates without keystore (SSLSocketFactory)

Google Cloud SQL (MySQL) supports SSL Client Certificates for securing connections. I have gotten these working with the mysql CLI client and with MySQL-python without any drama, but Java's JDBC driver model seems to be significantly less flexible. I was able to get it working by importing the necessary keys and certificates into my keystore, but it does not appear that I can easily provide a specific certificate to use for a particular connection at runtime.
For my use case, storing all the certificates in a single keystore per JVM won't work; we have a multi-tenant environment with dozens of isolated client certificates. The PostgreSQL JDBC documentation offhandedly mentions that it should be possible by implementing your own SSLSocketFactory (source):
Information on how to actually implement such a class is beyond the scope of this documentation. Places to look for help are the JSSE Reference Guide and the source to the NonValidatingFactory provided by the JDBC driver.
The Java SSL API is not very well known to the JDBC driver developers and we would be interested in any interesting and generally useful extensions that you have implemented using this mechanism. Specifically it would be nice to be able to provide client certificates to be validated by the server.
The only implementation I have seen is GoogleCloudPlatform/cloud-sql-mysql-socket-factory, which queries the Google Cloud APIs on the fly to retrieve ephemeral SSL client certificates. This is what I'm starting with, but I'm disheartened by the fact that some basic socket properties (notably connectTimeout and socketTimeout) are not currently implemented.
Are there other SSLSocketFactory implementations I should be aware of? It seems like a generic implementation would be useful across multiple JDBC drivers (MySQL Connector/J, PostgreSQL pgJDBC, and Oracle all offer some client-cert support). That way JDBC connection strings could support client certificates as standardized parameters (base64-encoded?), just as usernames and passwords are included today.
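As a sketch of the approach the PostgreSQL docs hint at: a per-tenant factory that loads one tenant's client certificate into its own SSLContext (instead of the JVM-wide keystore) and delegates every createSocket() call to that context. The class and file names are hypothetical; a driver that accepts a socket-factory class name could be pointed at a subclass with the tenant's keystore baked in.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.Socket;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

public class TenantSslSocketFactory extends SSLSocketFactory {
    private final SSLSocketFactory delegate;

    public TenantSslSocketFactory(String keystorePath, char[] password) throws Exception {
        // Load only this tenant's client certificate and private key
        KeyStore ks = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream(keystorePath)) {
            ks.load(in, password);
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, password);
        // Tenant-specific key managers; null trust managers fall back to the
        // JVM default trust store for validating the server's certificate
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null);
        this.delegate = ctx.getSocketFactory();
    }

    @Override public String[] getDefaultCipherSuites() { return delegate.getDefaultCipherSuites(); }
    @Override public String[] getSupportedCipherSuites() { return delegate.getSupportedCipherSuites(); }
    @Override public Socket createSocket(Socket s, String host, int port, boolean autoClose)
            throws IOException { return delegate.createSocket(s, host, port, autoClose); }
    @Override public Socket createSocket(String host, int port)
            throws IOException { return delegate.createSocket(host, port); }
    @Override public Socket createSocket(String host, int port, InetAddress local, int localPort)
            throws IOException { return delegate.createSocket(host, port, local, localPort); }
    @Override public Socket createSocket(InetAddress host, int port)
            throws IOException { return delegate.createSocket(host, port); }
    @Override public Socket createSocket(InetAddress host, int port, InetAddress local, int localPort)
            throws IOException { return delegate.createSocket(host, port, local, localPort); }
}
```

This handles the multi-tenant isolation; per-connection timeouts would still need driver support, since the factory only controls TLS setup, not the driver's socket options.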

Couchbase 1.8.0 concurrency (number of concurrent requests supported in the Java client/server): scalability

Is there any limit on the server for the number of requests served per second, or the number of requests served simultaneously (in configuration, not due to RAM, CPU, or other hardware limitations)?
Is there any limit on the number of simultaneous requests on an instance of CouchbaseClient in a Java servlet?
Is it best to create only one instance of CouchbaseClient and keep it open, or to create and destroy multiple instances?
Is Moxi helpful with Couchbase 1.8.0 server / Couchbase Java client 1.0.2?
I need this info to set up an application in production.
Thank you.
The memcached instance that runs behind Couchbase has a hard connection limit of 10,000 connections. Couchbase generally recommends increasing the number of nodes to address the distribution of traffic at that level.
The client itself does not have a hardcoded limit on how many connections it makes to a Couchbase cluster.
Couchbase generally recommends that you create a connection pool from your application to the cluster and re-use those connections, rather than creating and destroying them over and over. In heavily loaded applications, the repeated creation and destruction of these connections can get very expensive from a resource perspective.
Moxi is an integrated piece of Couchbase. However, it is generally in place as an adapter layer for client developers who specifically want to use it, or to give legacy access to applications designed to talk directly to a memcached interface. If you are using the Couchbase client driver, you won't need the Moxi interface.
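A sketch of the "create once, re-use everywhere" advice. The CouchbaseClient from the 1.x Java SDK is stood in for by a Supplier so the sketch stays self-contained; in a webapp you would call get() from your servlets and build/shut down the real client in a ServletContextListener:

```java
import java.util.function.Supplier;

// Thread-safe, create-once holder: the expensive client is built on first
// use and every caller afterwards shares the same instance, avoiding the
// per-request setup/teardown cost the answer warns about.
public final class SharedClientHolder<T> {
    private final Supplier<T> factory;
    private volatile T instance;

    public SharedClientHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    public T get() {
        T local = instance;               // one volatile read on the fast path
        if (local == null) {
            synchronized (this) {         // double-checked lazy initialization
                local = instance;
                if (local == null) {
                    instance = local = factory.get();
                }
            }
        }
        return local;
    }
}
```

With the real 1.x SDK the factory body would be something like new CouchbaseClient(Arrays.asList(URI.create("http://node1:8091/pools")), "default", "") (constructor from the 1.x client), with shutdown() called once when the application stops.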