Connecting a node to a cluster in Couchbase

I have created a bucket on my local system and I am trying to connect to another node located on a remote server. I am able to work with the nodes separately, but I need to join these two nodes to form a cluster. Is there a way to add the remote node to my local cluster using the web UI?
When I try to add the remote server's IP address by clicking "Add Server", I get the following error:
"Attention - Prepare join failed. Authentication failed. Verify username and password. Got HTTP status 401 from REST call post to http://XXX.XXX.XXX.XXX:8091/engageCluster2. Body was: []"
I used my local server's username and password. If I give the remote server's username and password instead, I get this error:
Attention - This node cannot add another node ('ns_1#XXX.XXX.XXX.XXX') because of cluster version compatibility mismatch. Cluster works in [4, 1] mode and node only supports [2, 0].
Is there a way to link them using the Java API? Can someone please help me with this?
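For what it's worth, the web console's "Add Server" action is a front end for the cluster REST API, so the same operation can be scripted; the compatibility error above, though, suggests the two machines run different Couchbase versions, and the join will not succeed through the UI, REST, or an SDK until the versions are compatible. A minimal sketch of the REST call the UI performs (Node.js; the addresses and credentials below are placeholders, and /controller/addNode is the documented add-node endpoint on port 8091):
// Sketch only (not from the question): the same "Add Server" action driven
// through Couchbase's cluster REST API. Addresses and credentials are placeholders.
const http = require('http');
const querystring = require('querystring');

const body = querystring.stringify({
    hostname: 'XXX.XXX.XXX.XXX',      // remote node to add
    user: 'Administrator',            // remote node's admin user
    password: 'remote-password',
});

const req = http.request({
    host: '127.0.0.1',                // the node whose cluster the remote node should join
    port: 8091,
    path: '/controller/addNode',
    method: 'POST',
    auth: 'Administrator:local-password',   // this cluster's credentials
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body),
    },
}, (res) => {
    res.setEncoding('utf8');
    res.on('data', (chunk) => console.log(res.statusCode, chunk));
});

req.on('error', console.error);
req.write(body);
req.end();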

Related

How do I connect to a MySQL database hosted on PythonAnywhere?

I have bought a basic $6 PythonAnywhere plan which allows SCP and SSH connections to their servers. I have deployed my Flask application (a REST API) to the server and set up a local environment (installed packages, set up path and environment variables). I ran the app, and it gives me the 404 page on the / route, which is actually a sign that it is working.
However, when I try to hit a route like /api/users/3, it gives me an error 500 (internal server error). I dug around some log files on the server and found one which is basically the output of the WSGI server hosting my Flask application. It said there is an issue with the database communication. From what I understand, it connected successfully but couldn't query the data from the table:
(1044, "Access denied for user 'secret_username'#'%' to database 'test'")
I have tried to fix this through the web MySQL console by giving my secret_username@secret_username.pythonanywhere.com account all the privileges on test.*, but it gives me an access error once again. I tried to SSH into the machine hosting the MySQL server, secret_username.mysql.pythonanywhere-services.com, but it doesn't allow me to SSH into that server.
Has anyone experienced this issue? I am almost sure that my connection is set up correctly, because if it didn't establish a connection it would give me an access denied error containing using password: YES. I've read a bit on forums; they suggest installing a MySQL server on the same server as the Flask application, but I don't have access to sudo for some reason, probably due to my cheap plan. Any ideas?
If you are using the MySQL database provided by PythonAnywhere, with the hostname secret_username.mysql.pythonanywhere-services.com, then your database will be called secret_username$test, not just test -- see the "Databases" page on the PythonAnywhere website.
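The same fix applies whatever client library is in play: the connection settings have to name the prefixed database. A sketch of what that looks like, shown with the Node mysql2 client used elsewhere on this page rather than Flask (the hostname, user, and password are placeholders based on the question's redacted values):
// Sketch only: on PythonAnywhere the database name carries the "username$" prefix.
const mysql = require('mysql2');

const connection = mysql.createConnection({
    host: 'secret_username.mysql.pythonanywhere-services.com',
    user: 'secret_username',
    password: 'your-database-password',   // placeholder
    database: 'secret_username$test',     // not just 'test'
});

connection.query('SELECT 1 AS ok', (err, rows) => {
    if (err) throw err;
    console.log(rows);
    connection.end();
});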

MySQL Domo AWS RDS Connector

I'm having issues connecting Domo to a MySQL database hosted with AWS RDS. Whenever I try to authenticate I get this error:
"Failed to authenticate. Verify the credentials and try again. Domo is ready, but the credentials you entered are invalid. Verify your account credentials and try again. Error setting up SQL connection. Could not create connection to database server. Attempted reconnect 3 times. Giving up."
It's not the security group settings. Someone suggested in this post:
https://dojo.domo.com/t5/Data-Sources-and-Connectors/MySQL-connector-issues/td-p/15462
that I should enable SSL on the AWS database, but I'm not sure how to do that.
I'll assume you're using the MySQL connector, not the MySQL SSH connector.
It sounds like you need to whitelist Domo's IP addresses in your AWS RDS instance's security groups.
Aside from that, make sure you're populating the credentials in Domo with the right pieces of information. Hostname should be the server's public IP address.
This connector follows the same general process as described in AWS's documentation here, with the exception that steps 5 and 6 are optional since SSH is not required for this connector.
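If you prefer to script that rule rather than click through the console, here is a sketch using the AWS SDK for JavaScript (v3); the security group ID and CIDR below are placeholders, and the actual addresses to allow are whichever IP ranges Domo documents for its connectors:
// Sketch only: add an inbound MySQL rule for one Domo address to the security
// group attached to the RDS instance. The group ID and CIDR are placeholders.
const {
    EC2Client,
    AuthorizeSecurityGroupIngressCommand,
} = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'us-east-1' });    // the RDS instance's region

async function allowDomoIngress() {
    await ec2.send(new AuthorizeSecurityGroupIngressCommand({
        GroupId: 'sg-0123456789abcdef0',               // RDS security group
        IpPermissions: [{
            IpProtocol: 'tcp',
            FromPort: 3306,                            // MySQL port
            ToPort: 3306,
            IpRanges: [{ CidrIp: '203.0.113.10/32', Description: 'Domo connector' }],
        }],
    }));
}

allowDomoIngress().catch(console.error);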

Containerized server application failing to connect to MySQL databases

I'm trying to connect my server code running as a Docker container in our Kubernetes cluster (hosted on Google Container Engine) to a Google Cloud SQL managed MySQL 5.7 instance. The issue I'm running into is that every connection is being rejected by the database server with Access denied for user 'USER'@'IP' (using password: YES). The database credentials (username, password, database name, and SSL certificates) are all correct and work when connecting via other MySQL clients or the same application running as a container on a local instance.
I've verified that all credentials are the same on the local and the server-hosted versions of the app and that the user I'm connecting with has the wildcard % host specified. Not really sure what to check next here, to be honest...
An edited version of the connection code is below:
const fs = require('fs');
const MySQL = require('mysql2');    // mysql2 client mentioned below

// Credentials come from the application's own Config module
let connectionCreds = {
    host: Config.SQL.HOST,
    user: Config.SQL.USER,
    password: Config.SQL.PASSWORD,
    database: Config.SQL.DATABASE,
    charset: 'utf8mb4',
};

// Attach the client key/cert and server CA only when SSL is enabled
if (Config.SQL.SSL_ENABLE) {
    connectionCreds['ssl'] = {
        key: fs.readFileSync(Config.SQL.SSL_CLIENT_KEY_PATH),
        cert: fs.readFileSync(Config.SQL.SSL_CLIENT_CERT_PATH),
        ca: fs.readFileSync(Config.SQL.SSL_SERVER_CA_PATH),
    };
}

this.connection = MySQL.createConnection(connectionCreds);
Additional information: the server application is built in Node using the mysql2 library to connect to the database. There are no special firewall rules in place that are causing network issues, and that's confirmed by the fact that the library IS connecting, but failing to authenticate.
After setting up the Cloud SQL Proxy I managed to figure out what the actual error was: somewhere between the secret and the pod configuration an extra newline was being added to the database name, causing any connection attempt to fail. With the proxy set up this became clear because an actual error message to that effect was displayed.
(Notably, none of the logging I had added around the credentials to validate them showed the newline explicitly: the console wrapped long lines to fit the display, and the wrap happened to fall exactly where the database name ended, which disguised it.)
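If anyone hits the same thing, a cheap guard is to strip whitespace from every secret-sourced value before building the connection config. A sketch against the snippet above (the Config.SQL names are the same assumptions as in the question):
// Sketch only: trim stray whitespace/newlines that Kubernetes secrets or
// environment variables can carry before the values reach mysql2.
function cleanCredential(value) {
    return typeof value === 'string' ? value.trim() : value;
}

let connectionCreds = {
    host: cleanCredential(Config.SQL.HOST),
    user: cleanCredential(Config.SQL.USER),
    password: cleanCredential(Config.SQL.PASSWORD),
    database: cleanCredential(Config.SQL.DATABASE),   // the field that carried the stray newline
    charset: 'utf8mb4',
};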
Have you read the documentation on https://cloud.google.com/sql/docs/mysql/connect-container-engine ?
In Container Engine, you need to set up a Cloud SQL Proxy container alongside your application pod and talk to it. The Cloud SQL Proxy will then make the actual call to Cloud SQL service.
If the container worked locally, I assume you have Application Default Credentials set on your development machine. It could be failing because those credentials are not on your container as a service account file. Try configuring a service account file, or create your GKE cluster with the --scopes argument to give your instances access to Cloud SQL.
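Once the proxy sidecar is in the pod, the application simply points at it on localhost and the proxy handles authentication and encryption to Cloud SQL. A sketch of the changed connection settings (same Config.SQL assumptions as the question's snippet; 3306 is the proxy's default MySQL port):
// Sketch only: with the Cloud SQL Proxy running as a sidecar in the same pod,
// connect to it over localhost instead of the Cloud SQL instance's IP.
let proxyCreds = {
    host: '127.0.0.1',               // Cloud SQL Proxy sidecar
    port: 3306,
    user: Config.SQL.USER,
    password: Config.SQL.PASSWORD,
    database: Config.SQL.DATABASE,
    charset: 'utf8mb4',
    // No ssl block needed here: the proxy itself encrypts the hop to Cloud SQL.
};

this.connection = MySQL.createConnection(proxyCreds);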

Failed to add server in Couchbase

When I try to add a server to Couchbase (Server Nodes -> Add Server),
I enter the server IP address XX.XXX.X.XXX:port and the username/password,
but when I click Add Server I get a warning like the one below.
I have tried this with several servers, but I always hit the same error:
Attention - Failed to reach erlang port mapper. Timeout connecting to "10.107.2.237" on port "4369". This could be due to an incorrect host/port combination or a firewall in place between the servers.
Warning - Adding a server to this cluster means all data on that server will be removed.
Thanks for your help.
Port 4369 is used by the Erlang port mapper (epmd) for node interconnection and must be accessible from all of your nodes. So, as described in the error message, you must check whether that port is reachable from the other nodes. Also, are you sure Couchbase is started on that new node?
You can read more about that port at http://erlang.org/doc/man/epmd.html
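A quick way to verify reachability from the node doing the adding is to open a plain TCP connection to port 4369 on the target; here is a sketch with Node's built-in net module (the IP is the one from the error message above):
// Sketch only: check whether epmd's port 4369 on the target node accepts TCP
// connections from this machine; a firewall typically shows up as a timeout.
const net = require('net');

const socket = net.connect({ host: '10.107.2.237', port: 4369, timeout: 5000 });

socket.on('connect', () => {
    console.log('Port 4369 is reachable');
    socket.end();
});
socket.on('timeout', () => {
    console.log('Timed out: port 4369 is probably blocked by a firewall');
    socket.destroy();
});
socket.on('error', (err) => console.log('Connection failed:', err.message));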

Connect a Centura client application to SQL Server

I am new to Centura application configuration.
When I try to open the Windows client application, which uses the Centura sql.ini configuration file, I get the error below.
Can anyone please help me understand the issue?
Error code: 401
Reason: FOR SQLBASE: The specified database cannot be found. SQLBase cannot find the file named "x:\dbdir\dbname\dbname.DBS" where x:\dbdir is either the default, c:\SQLBASE, or modified with the DBDIR SQL.INI configuration keyword. In a multiuser network configuration, this error indicates that your network is working correctly, but the database system was unable to locate the specified database filename.
FOR NON-SQLBASE DATABASES: This problem can also occur with a SQLGateway when leaving out the protocol type in the SERVERNAME parameter that the client uses to communicate with the gateway (like SQLNBIOS).
For example, SERVERNAME=SERVER33,SQLQUEUE DBNAME=DB2DBMS, SQLQUEUE, SQLNBIOS
will not allow a remote client process (using SQLNBIOS on the LAN to communicate with the SQLGateway machine) to connect to the SQLGateway machine.
For SPX connectivity from DOS or MS Windows to a Unixware SQLBase Server check for the omission of the "serverpath=..." parameter in the SQL.INI file under the client section.
Remedy: Verify the database file exists. The default drive letter and dbdirname is c:\SQLBASE unless overridden with a DBDIR SQL.INI configuration keyword parameter. Verify the DBDIR keyword is not missing or pointing to the wrong database directory. Verify the DBNAME keyword is specified for the named database. Verify the SERVER keyword is not missing or conflicting with other network server names. In your CONFIG.SYS file, verify that at least 40 files are allowed with the FILES=40 parameter. If the server was being initialized while the connection was tried, retry the connection after the server has initialized. If all of the above fails, try using a different database name or try connecting to the database in single-user mode on the same machine. If you can connect with a local engine, it probably indicates a network configuration error. If you can connect with a new database name, it probably indicates a previously named database was never properly initialized.