All I want to do is an upsert operation. I have a JsonDocument, a Couchbase server "123.456.789.1011", and a bucket inside it called "testbucket". When I open the server using the IP address on port 8091, it asks me for a username and password, say "uname"/"pwd", and after I enter them it opens. There is no password for my bucket.
cluster = CouchbaseCluster.create("123.456.789.101");
cluster.clusterManager("testuser", "testuser123");
bucket = cluster.openBucket("testbucket");
jsonObject = JsonObject.create()
        .put("Order", map); // "map" is built elsewhere in my code
jsonDocument = JsonDocument.create("Hello", jsonObject);
jsonDocumentResponse = bucket.upsert(jsonDocument);
This is my code, but the problem is that whenever I run it I get an error saying:
ERROR spark.webserver.MatcherFilter -
com.couchbase.client.java.error.InvalidPasswordException: Passwords for bucket "testbucket" do not match.
at com.couchbase.client.java.CouchbaseAsyncCluster$1.call(CouchbaseAsyncCluster.java:156)
at com.couchbase.client.java.CouchbaseAsyncCluster$1.call(CouchbaseAsyncCluster.java:146)
at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$1.onError(OperatorOnErrorResumeNextViaFunction.java:77)
at rx.internal.operators.OperatorMap$1.onError(OperatorMap.java:49)
at rx.internal.operators.NotificationLite.accept(NotificationLite.java:147)
at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.pollQueue(OperatorObserveOn.java:177)
at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.access$000(OperatorObserveOn.java:65)
at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber$2.call(OperatorObserveOn.java:153)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I am new to Couchbase and I really don't know what to do. I Googled it, but there is nothing on the web, and even the documentation doesn't suggest anything. I hope someone on Stack Overflow will have an answer for me. Thanks.
It would seem you need to pass a bucket password (which is different from the cluster password) in the openBucket method: http://docs.couchbase.com/sdk-api/couchbase-java-client-2.0.0/com/couchbase/client/java/Cluster.html#openBucket%28java.lang.String,%20java.lang.String%29
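For example, if the bucket did have a password set, the call would look something like the line below ("bucketPassword" is just a placeholder, not a value from the question):

bucket = cluster.openBucket("testbucket", "bucketPassword"); // the bucket's own password, not the cluster admin password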
It looks like you're trying to connect to the bucket using the cluster credentials. Instead, try connecting to the bucket with the bucket name and an empty password:
cluster = CouchbaseCluster.create("123.456.789.101");
bucket = cluster.openBucket("testbucket", "");
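Putting that together with the upsert from the question, a minimal end-to-end sketch could look like this (it assumes the bucket really has no password, and replaces the original map payload with a plain string just to keep the example self-contained):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class UpsertExample {
    public static void main(String[] args) {
        // Cluster admin credentials are only needed for ClusterManager operations;
        // opening a bucket takes the bucket name and its (possibly empty) bucket password.
        Cluster cluster = CouchbaseCluster.create("123.456.789.101");
        Bucket bucket = cluster.openBucket("testbucket", "");

        JsonObject content = JsonObject.create().put("Order", "sample value");
        JsonDocument doc = JsonDocument.create("Hello", content);
        JsonDocument response = bucket.upsert(doc);
        System.out.println("Upserted document with id: " + response.id());

        cluster.disconnect();
    }
}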
I want to connect to my Google Cloud MySQL DB via Jdbc.getConnection() and use SSL.
Within GAS, I have made exactly the same setup as described in this old answer, but I get the error message: "Exception: We're sorry, a server error occurred. Please wait a bit and try again."
conn = Jdbc.getConnection('jdbc:mysql://xxx.xxx.xxx.xxx/myDBname?useSSL=true', {
  'user': settings.user,
  'password': settings.userPwd,
  '_serverSslCertificate': '-----BEGIN CERTIFICATE-----super_secret_1-----END CERTIFICATE-----',
  '_clientSslCertificate': '-----BEGIN CERTIFICATE-----super_secret_2-----END CERTIFICATE-----',
  '_clientSslKey': '-----BEGIN RSA PRIVATE KEY-----super_secret_3-----END RSA PRIVATE KEY-----'
});
Did something change over the years?
What I have tried so far:
The user and password seem to be correct, because without "?useSSL=true" everything works
I have also created new SSL certificates within GCP
Unfortunately Jdbc.getCloudSqlConnection() is not an option to use instead of Jdbc.getConnection()
Runtime V8 and Stable/Rhino throw the same error
Cause of the issue: the "\n" line breaks are missing from the certificate and key strings.
They are not included when you copy the values from the GCP Cloud SQL dialog after creating a new SSL certificate. So you need to download client-key.pem, client-cert.pem and server-ca.pem and replace each line break with a literal "\n".
I have an app that uses the bigtable-hbase API to create a Bigtable Connection using a service account file.
This works fine locally and sometimes also on the WebLogic server.
But after some requests through the API, I am getting the following error:
io.grpc.StatusRuntimeException: UNAUTHENTICATED: Unexpected failure get auth token
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:365)
at com.google.cloud.bigtable.grpc.io.RefreshingOAuth2CredentialsInterceptor.refreshCredentials(RefreshingOAuth2CredentialsInterceptor.java:379)
at com.google.cloud.bigtable.grpc.io.RefreshingOAuth2CredentialsInterceptor.access$100(RefreshingOAuth2CredentialsInterceptor.java:60)
at com.google.cloud.bigtable.grpc.io.RefreshingOAuth2CredentialsInterceptor$2.call(RefreshingOAuth2CredentialsInterceptor.java:328)
at com.google.cloud.bigtable.grpc.io.RefreshingOAuth2CredentialsInterceptor$2.call(RefreshingOAuth2CredentialsInterceptor.java:325)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I am using the following configuration to create the connection:
Configuration config = BigtableConfiguration.configure(projectId, instanceId);
config.set(BigtableOptionsFactory.BIGTABLE_SERVICE_ACCOUNT_JSON_KEYFILE_LOCATION_KEY,
        new File(filePath).toString());
Connection btConnection = ConnectionFactory.createConnection(config);
// ... then the code to read from and write into the table
This code works fine sometimes, and then after some requests it throws the above error.
I need to know why this is happening and how I can resolve it, so that the application works fine when we send bulk requests.
I think the issue might be the access token refresh, but how can I solve it?
I am having trouble setting up a GitLab account to manage tasks (issues) inside PhpStorm from TOOLS > TASKS & CONTEXTS > CONFIGURE SERVERS.
What is the TOKEN field? Where do I find it? I've searched in my profile on the GitLab server but found nothing.
The only thing I have found and tried is a Personal Access Token, located here: https://gitlab.com/profile/personal_access_tokens
A Personal Access Token was already generated and used, but it does not work.
=== UPDATED ===
Error log (I have replaced the real URL path with asterisks for privacy):
2017-09-25 19:59:41,023 [7154630] WARN - lij.tasks.impl.TaskManagerImpl - Cannot connect to GitlabRepository(URL='https://gitlab.com/***/***/issues')
com.intellij.tasks.impl.RequestFailedException: Request failed with HTTP error: 404 Not Found.
at com.intellij.tasks.impl.RequestFailedException.forStatusCode(RequestFailedException.java:16)
at com.intellij.tasks.impl.httpclient.TaskResponseUtil$GsonMultipleObjectsDeserializer.handleResponse(TaskResponseUtil.java:173)
at com.intellij.tasks.impl.httpclient.TaskResponseUtil$GsonMultipleObjectsDeserializer.handleResponse(TaskResponseUtil.java:151)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:222)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:164)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:139)
at com.intellij.tasks.gitlab.GitlabRepository.fetchProjects(GitlabRepository.java:139)
at com.intellij.tasks.gitlab.GitlabRepository.ensureProjectsDiscovered(GitlabRepository.java:254)
at com.intellij.tasks.gitlab.GitlabRepository.fetchIssues(GitlabRepository.java:160)
at com.intellij.tasks.gitlab.GitlabRepository.getIssues(GitlabRepository.java:107)
at com.intellij.tasks.TaskRepository.getIssues(TaskRepository.java:168)
at com.intellij.tasks.impl.TaskManagerImpl.a(TaskManagerImpl.java:783)
at com.intellij.tasks.impl.TaskManagerImpl.b(TaskManagerImpl.java:742)
at com.intellij.tasks.impl.TaskManagerImpl.a(TaskManagerImpl.java:736)
at com.intellij.openapi.application.impl.ApplicationImpl$2.run(ApplicationImpl.java:342)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
From what I gather, if you generated the token correctly on the GitLab side (SETTINGS > ACCESS TOKENS > CREATE PERSONAL ACCESS TOKEN) and inserted it into the token field at TOOLS > TASKS & CONTEXTS > CONFIGURE SERVERS in PhpStorm, it could be a timeout problem.
If you go to SETTINGS (Ctrl+Alt+S) > TASKS and change the CONNECTION TIMEOUT parameter to, say, 20000 ms, it should work.
Please refer to this post if you have any doubts.
You need to create a new Personal Access Token here https://gitlab.com/profile/personal_access_tokens
and put it in that field.
Have you checked this:
https://confluence.jetbrains.com/display/PhpStorm/Integration+with+an+Issue+Tracking+System+in+PhpStorm
I am trying to insert values into Cassandra when I come across this error:
15/08/14 10:21:54 INFO Cluster: New Cassandra host /a.b.c.d:9042 added
15/08/14 10:21:54 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/08/14 10:21:54 ERROR Session: Error creating pool to /127.0.0.1:9042
com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect
at com.datastax.driver.core.Connection.<init>(Connection.java:109)
at com.datastax.driver.core.PooledConnection.<init>(PooledConnection.java:32)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:586)
at com.datastax.driver.core.SingleConnectionPool.<init>(SingleConnectionPool.java:76)
at com.datastax.driver.core.HostConnectionPool.newInstance(HostConnectionPool.java:35)
at com.datastax.driver.core.SessionManager.replacePool(SessionManager.java:271)
at com.datastax.driver.core.SessionManager.access$400(SessionManager.java:40)
at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:308)
at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:300)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:9042
My replication factor is 1. There are 5 nodes in the Cassandra cluster (they are all up), with rpc_address: 0.0.0.0 and broadcast_rpc_address: 127.0.0.1.
I would think that I should see one of those "INFO Cluster: New Cassandra host..." lines for each of the 5 nodes, but instead I see 127.0.0.1, and I am not sure why.
I also noticed that in the cassandra.yaml file, all 5 nodes are listed under seeds (which I know is not advised, but I did not set up this cluster):
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "ip1, ip2, ip3, ip4, ip5"
where ipx is the IP address of node x.
And cassandra-topology.properties just says the following and does not mention any of the 5 nodes:
# default for unknown nodes
default=DC1:r1
Can someone explain why I am seeing the "ERROR Session: Error creating pool to /127.0.0.1:9042" error?
I'm kind of new to Cassandra... thanks in advance!
I think the problem is that your broadcast_rpc_address is set to 127.0.0.1. Is there a particular reason you are doing this?
The Java driver uses the system.peers table to look up the IP address to use when connecting to hosts. If broadcast_rpc_address is set, that is what will be present in system.peers and the driver will try to use it; if broadcast_rpc_address is not set, rpc_address will be used. In either case, you'll want to set one of these addresses to an address that is reachable by your client. If you set rpc_address, you will want to remove broadcast_rpc_address.
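As a quick sanity check (a minimal sketch, not from the original answer; it assumes the DataStax Java driver 2.x on the classpath and that "a.b.c.d" stands in for one of the real, reachable node IPs), you can query system.peers yourself and print the addresses the driver will try to open pools to:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class PeerAddressCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("a.b.c.d").build();
        Session session = cluster.connect();
        // These are the addresses the driver will dial for each peer;
        // if they all come back as 127.0.0.1, broadcast_rpc_address is the culprit.
        for (Row row : session.execute("SELECT peer, rpc_address FROM system.peers")) {
            System.out.println(row.getInet("peer") + " -> " + row.getInet("rpc_address"));
        }
        cluster.close();
    }
}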
API Store is throwing errors when I try to create or edit an application
java.sql.SQLException: Can't call commit when autocommit=true
I've added the setting of
init-command='set autocommit=0'
to the my.cnf file
I've also added the flag:
?relaxAutoCommit=true
to the connection string but to no avail. I continue to get this error.
I am using the same MySQL database for both the WSO2_CARBON_DB and the WSO2AM_DB, plus I have a single publisher node and two separate store nodes, all pointing to the same MySQL datasource.
I notice the application edit is saved (or the new application is created) but the exception is still thrown in the console and an error message appears in the user interface (as per the error at the top of this question).
Is there some other setting, within the WSO2 conf files that I have to tweak in order to get this to work properly?
Add both the autoReconnect and relaxAutoCommit flags to the JDBC URL of your defined "WSO2AM_DB" datasource in the master-datasources.xml file. This will resolve your issue:
<configuration>
    <url>jdbc:mysql://localhost:3306/AM_DB?autoReconnect=true&amp;relaxAutoCommit=true</url>
    <username>xxxx</username>
    <password>xxxxx</password>
</configuration>
EDIT: I updated the url to reflect the correct syntax for escaping the ampersand.
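To illustrate what the flag actually changes, here is a standalone sketch (hypothetical URL and credentials, mirroring the placeholders above; it is not part of the WSO2 configuration itself): with autocommit still enabled, MySQL Connector/J rejects an explicit commit() with exactly the exception API Store reports, unless relaxAutoCommit=true is set.

import java.sql.Connection;
import java.sql.DriverManager;

public class RelaxAutoCommitDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials, matching the datasource entry above.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/AM_DB?autoReconnect=true&relaxAutoCommit=true",
                "xxxx", "xxxxx");
        // Autocommit is still on here; without relaxAutoCommit=true this call would throw
        // "java.sql.SQLException: Can't call commit when autocommit=true".
        conn.commit();
        conn.close();
    }
}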
Just for the sake of completeness, the JDBC URL should be:
jdbc:mysql://localhost:3306/WSO2CARBON_DB?autoReconnect=true&relaxAutoCommit=true