Couchbase says: No valid node found to bootstrap from

What does this error message mean?
com.couchbase.client.core.config.ConfigurationException: No valid node found to bootstrap from. Please check your network configuration.
From the source code:
https://github.com/couchbase/couchbase-jvm-core/blob/master/src/main/java/com/couchbase/client/core/message/cluster/SeedNodesRequest.java
it looks like my node host is found, but is not considered valid.

If memory serves, it means that the Couchbase SDK cannot connect to the cluster named in your connection string. On bootstrap, the SDK tries to connect and fetch the cluster map so it knows the cluster topology: which services are available and where in the cluster they run.
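For a quick sanity check, a minimal bootstrap with the Java SDK 2.x looks roughly like this (a sketch; the seed node address and bucket name below are placeholders):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;

public class BootstrapCheck {
    public static void main(String[] args) {
        // Seed node(s) from the connection string; at least one must be reachable
        Cluster cluster = CouchbaseCluster.create("10.0.0.40");
        // Opening a bucket forces the SDK to fetch the cluster map;
        // the ConfigurationException above is raised when that fails
        Bucket bucket = cluster.openBucket("default");
        System.out.println("Connected to bucket: " + bucket.name());
        cluster.disconnect();
    }
}

If this fails from the machine running your app but works from elsewhere, the network configuration (firewall, DNS, ports 8091/11210) is the usual suspect.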
In the future, please add the code you are using to your question, both so that people can answer it and so that the community here benefits.

Related

Unable to connect to the binlog client in NiFi

I'm building a NiFi dataflow, and I need to capture the data changes from a MySQL database, so I want to use the CaptureChangeMySQL processor to do that.
I get the following error when I run the CaptureChangeMySQL processor, and I don't see what's causing it:
Failed to process session due to Could not connect binlog client to any of the specified hosts due to: BinaryLogClient was unable to connect in 10000ms: org.apache.nifi.processor.exception.ProcessException: Could not connect binlog client to any of the specified hosts due to: BinaryLogClient was unable to connect in 10000ms
I have the following controller services enabled:
DistributedMapCacheClientService
DistributedMapCacheServer
But I'm not sure if they are properly configured:
(screenshots of the DistributedMapCacheServer and DistributedMapCacheClientService properties omitted)
In MySQL, I have enabled the log_bin variable (it was off by default). I checked, and binlog files are indeed created when data changes.
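For reference, a quick way to double-check those binlog settings from Java (a sketch; the connection URL and credentials are placeholders, and as far as I know CaptureChangeMySQL also needs row-based logging, i.e. binlog_format=ROW):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BinlogCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; use the same host and port as in the processor
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SHOW VARIABLES WHERE Variable_name IN ('log_bin', 'binlog_format')")) {
            while (rs.next()) {
                // Expect log_bin = ON and binlog_format = ROW for CDC
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}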
So I think the issue is with the controller services and how they connect, but it's not clear to me.
I searched for tutorials on how to use this NiFi processor but could not find how to fix this error. I looked mainly at this one: https://community.hortonworks.com/articles/113941/change-data-capture-cdc-with-apache-nifi-version-1-1.html but it did not help.
Has anyone already used this processor to do CDC?
Thank you in advance.
I found what was wrong: I was trying to connect to the wrong port in the MySQL Host setting of the CaptureChangeMySQL processor :x
For others still facing similar issues, check whether the server's firewall is blocking the connection. Allow the MySQL port (3306 by default) in your firewall rules.
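A quick reachability test from the NiFi host can also rule the firewall in or out (a sketch; host and port are placeholders for the values in the processor configuration):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Placeholder host/port; 5-second connect timeout
            socket.connect(new InetSocketAddress("mysql-host", 3306), 5000);
            System.out.println("Port is reachable");
        }
        // A timeout or "connection refused" here points at the network
        // or firewall rather than the NiFi configuration
    }
}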

After Aurora Cluster DB failover, unable to write to DB

Right now I am connecting to a cluster endpoint that I have set up for an Aurora (MySQL-compatible) cluster, and after I trigger a failover from the AWS console, my web application is unable to connect properly to the DB that should be writable.
My setup is like this:
A Java web app (Tomcat 8) with HikariCP as the connection pool and Connector/J as the MySQL driver. I am evaluating Aurora MySQL to see if it will satisfy some of the application's needs. The web app sits in an EC2 instance in the same VPC and security group as the Aurora MySQL cluster. I connect through the cluster endpoint to reach the database.
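Roughly, the pool is wired up like this (a sketch of a standard HikariCP setup; the endpoint and credentials below are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        // Cluster endpoint: a CNAME that RDS repoints to the new writer on failover
        config.setJdbcUrl("jdbc:mysql://my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com:3306/mydb");
        config.setUsername("appuser");
        config.setPassword("secret");
        config.setMaximumPoolSize(10);
        return new HikariDataSource(config);
    }
}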
After a failover, I would expect HikariCP to break connections (it does) and then attempt to reconnect (it does); however, the application must be connecting to the wrong server, because whenever a write hits the database, a SQLException is thrown that says:
The MySQL server is running with the --read-only option so it cannot execute this statement
What is the solution here? Should I rework my code to flush DNS after all connections go down, or after I start receiving this error, and then try to re-initiate connections after that? That doesn't seem right...
I don't know why I keep asking questions if I just answer them (I should really be more patient), but here's an answer in case anyone stumbles upon this in a Google search:
RDS uses DNS changes when working with the cluster endpoint to make failover look "seamless". Since the IP behind the hostname can change, if there is any sort of caching going on you can see pretty quickly how a change won't be reflected. Here's a page from AWS' docs that goes into it a bit more: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html
To resolve my issue, I went into the JVM's java.security file and changed the DNS cache TTL (networkaddress.cache.ttl) to 0, just to verify that this was what was happening. It was. Now I just need to figure out how to do it properly...
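For the "do it properly" part, the AWS page linked above recommends capping the JVM's DNS cache TTL programmatically rather than hand-editing the security file; a sketch (60 seconds is an example value):

import java.security.Security;

public class DnsTtlConfig {
    public static void main(String[] args) {
        // Must run before any DNS lookup happens in the JVM;
        // a short TTL lets the pool resolve the new writer after failover
        Security.setProperty("networkaddress.cache.ttl", "60");
        // ... initialize the connection pool / application here
    }
}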

Kafka Connect with MySQL Source

Before I begin, I'd like to start by saying I am completely new to Kafka and am fairly new to Linux, so if this ends up being a ridiculously simple answer, please be kind! :)
The high level idea of what I'm trying to do is use Confluent's Kafka Connect to read from a MySQL database that is having sensor data streamed to it on a minute or sub-minute basis and then use Kafka as an "ETL pipeline" to instantly route that data to a Data Warehouse and/or MongoDB for reporting or even tie in directly to Kafka from our web-app.
I am using Robin Moffatt's series as well as Confluent's JDBC Source Connector Quickstart as my initial guide. As far as where these are hosted, I am using an Amazon RDS MySQL database and a separate AWS EC2 t2.large instance with Ubuntu 16.04.2 to run Kafka Connect.
Using Robin's workflow, I am at the point where I have created the configuration file, but I am not using the JSON format he uses; I am using the format from the quickstart article.
name=jdbc_source_mysql_4427_Data
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
connection.url=jdbc:mysql://lndbtest.cdveaddpnevv.us-east-2.rds.amazonaws.com:3306/LNDBv1?user=adminRDS&password=*****
table.whitelist=4427_Data
mode=timestamp
timestamp.column.name=TmStamp
validate.non.null=false
topic.prefix=mysql-
And that is saved at:
/etc/kafka-connect-jdbc/kafka-connect-jdbc-source.properties
I then run:
/usr/bin/confluent load jdbc_source_mysql_4427_Data -d /etc/kafka-connect-jdbc/kafka-connect-jdbc-source.properties
and get this error:
{
"error_code": 400,
"message": "Connector configuration is invalid and contains the following 2 error(s):\nInvalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://lndbtest.cdveaddpnevv.us-east-2.rds.amazonaws.com:3306/LNDBv1?user=adminRDS&password=*** for configuration Couldn't open connection to jdbc:mysql://lndbtest.cdveaddpnevv.us-east-2.rds.amazonaws.com:3306/LNDBv1?user=adminRDS&password=***\nInvalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://lndbtest.cdveaddpnevv.us-east-2.rds.amazonaws.com:3306/LNDBv1?user=adminRDS&password=*** for configuration Couldn't open connection to jdbc:mysql://lndbtest.cdveaddpnevv.us-east-2.rds.amazonaws.com:3306/LNDBv1?user=adminRDS&password=***\nYou can also find the above list of errors at the endpoint `/{connectorType}/config/validate`"
}
It seems to be a driver issue. My question at this point is, "Do I need to download the MySQL JDBC driver to my EC2 instance, or should that have been included in the Confluent Platform package?"
Also, does my overall idea sound like a good fit for Kafka Connect?
As I mentioned earlier, I am new to these technologies, but have found the best way to learn something is to jump right in and try to solve a problem. Any ideas and suggestions would be more than welcome. Thank you!
The overall concept makes sense to me. You do need to download the driver and add it to your worker classpath. It isn't packaged with Confluent, I assume for licensing reasons.
As @dawsaw says, you do need to make the MySQL JDBC driver available to the connector.
My observation here would be that, given a free hand in the application and architecture you describe, it would be best to stream from the sensors into Kafka, and then from Kafka into MySQL, Mongo, the webapp, etc.
Streaming into a DB only to stream back out of it is not an ideal choice, if you have the option to avoid it.
It's because there's no MySQL driver in the Confluent distribution. You can solve the problem by downloading the MySQL driver jar (Connector/J), putting it in the confluent/share/java/kafka-connect-jdbc folder, and re-running the program.
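Once the jar is in place and the worker is restarted, a quick way to confirm the driver is visible on the classpath (a sketch; com.mysql.jdbc.Driver is the Connector/J 5.x class name, while 8.x uses com.mysql.cj.jdbc.Driver):

public class DriverCheck {
    public static void main(String[] args) {
        try {
            Class.forName("com.mysql.jdbc.Driver");
            System.out.println("MySQL JDBC driver found on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("Driver jar is not on the classpath: " + e);
        }
    }
}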

Spymemcached client auto-reconnect to another server in Couchbase Cluster?

I read the Couchbase rebalancing document (http://blog.couchbase.com/rebalancing-couchbase-part-i), which says: "A client losing its connection to the cluster will attempt to reestablish (configurable). Anytime it reconnects (first time or not) it gets the latest map that the cluster has. Ironically, a flaky network in theory might just help here to keep the map constantly updated during a rebalance, but that's for a different discussion."
I use spymemcached 2.7.3; how can I achieve that?
An example: my Java client adds two servers (10.0.0.40 and 10.0.0.15, by URL) to connect to the Couchbase cluster. But in reality, when 10.0.0.40 goes down, the persistent connection is not kept, and I have to restart my client to switch to 10.0.0.15. How can my client reconnect to 10.0.0.15 when 10.0.0.40 goes down, without restarting my application?
Updated:
I use the code below to connect to the Couchbase cluster:
import java.net.URI;
import java.util.ArrayList;
import net.spy.memcached.BinaryConnectionFactory;
import net.spy.memcached.MemcachedClient;

ArrayList<URI> listAddr = new ArrayList<>();
listAddr.add(new URI("http://10.0.0.40:8091/pools"));
listAddr.add(new URI("http://10.0.0.15:8091/pools"));
listAddr.add(new URI("http://10.0.0.16:8091/pools"));
client = new MemcachedClient(new BinaryConnectionFactory(), listAddr, "test", "test", "");
I want my Java client to automatically reconnect to another server in the pool (.40, .15, .16) to fetch the topology (while the client is still running) if the first server in the pool (.40) fails.
Can I achieve this with spymemcached, or do I have to move to the Couchbase Java SDK?
The spymemcached Java client does not handle Membase/Couchbase failover for a particular node.
You can check here.
If you update your Java client to the Couchbase Java client, you can handle failover by removing the failed node from the cluster.
For more information you can check here or here.
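For reference, a minimal sketch of the same bootstrap with the legacy couchbase-client (1.x), which subscribes to cluster topology updates and so can fail over without a restart (constructor assumed from the 1.x API; addresses and bucket credentials are the ones from the question):

import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class CouchbaseFailover {
    public static void main(String[] args) throws Exception {
        List<URI> nodes = new ArrayList<URI>();
        nodes.add(new URI("http://10.0.0.40:8091/pools"));
        nodes.add(new URI("http://10.0.0.15:8091/pools"));
        nodes.add(new URI("http://10.0.0.16:8091/pools"));
        // Unlike raw spymemcached, this client keeps the cluster map
        // updated, so a failed node does not require an application restart
        CouchbaseClient client = new CouchbaseClient(nodes, "test", "");
        client.set("key", 0, "value");
        client.shutdown();
    }
}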

DB2 Connect issue using Native OLE DB\MS OLEDB Provider for DB2

I downloaded and installed the driver setup file, DB2OLEDB.exe, from here:
http://download.microsoft.com/mwg-internal/de5fs23hu73ds/progress?id=HYLbKUfGNl
Using the connection string that worked on another PC, I tried to create a Connection Object in an SSIS package. When I tested the connection I got this error:
Test connection failed because of an error in initializing provider. A TCPIP socket error has occurred (10057): A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied.
Any suggestions on what the cause of this error is and how I might resolve this issue?
By the way, when I use the DB2 configuration setup utility and test a connection from within it, I can connect successfully.
What other info can I provide to help you answer this question?
Thank you
Could this be related to a blocked port?
If you follow all the steps illustrated here: http://www.bidn.com/blogs/PatrickLeBlanc/ssis/700/connecting-to-db2-using-ssis do you still get the same result?
Maybe a silly question, but did you restart the computer after the installation?
Are you an admin user on one machine and not on the other?
You could try to verify the port connectivity with a quick telnet command:
telnet your-db-host your-db-listening-port
If it connects, that one is off the list.
Doing some research I've found two possible fixes.
The first link suggests calling BeginReceive after the EndAccept logic is complete. Are you using script code, or just using the GUI without any scripting?
TCP async sockets throwing 10057
The second link points to drivers/software on the PC. It could be that you are missing a Windows update or have faulty hardware or drivers.
I think this is less likely, since the same connection string worked on a different machine(?). Can you verify that this is the case?
http://social.msdn.microsoft.com/Forums/en-US/1bc3df95-c86d-4d25-aa20-30f61ed00c63/odd-socket-errors
If you could show the connection strings used on both the working and non-working machines, and give a little more detail about the "other PC" in comparison to the non-working PC, that would be helpful =]
If neither of the posts I've linked is the solution, this specific Google search has proven to yield some seemingly helpful results:
"socket" "10057" "no address was supplied."