I used the online N1QL tutorial to practice writing queries. Now that I have a Couchbase server of my own, I want to query my own data.
My question is:
Where in the Couchbase server can I write my queries?
Thanks
Remember that N1QL is still in beta.
The way it works is that you have to run the Couchbase Query Server (aka CBQ). It listens on port 8093 by default (see N1QL).
The query server will connect to the specified Couchbase instance/cluster.
e.g.
cbq-engine -couchbase <CB-location>
Once the CB Query Engine is up and running, you can start the command-line client and issue your N1QL statements at the prompt, e.g.:
cbq -engine http://your-cb-host:8093/
cbq> SELECT 'Hello World' AS Greeting
{
    "resultset": [
        {
            "Greeting": "Hello World"
        }
    ],
    "info": [
        {
            "caller": "http_response:160",
            "code": 100,
            "key": "total_rows",
            "message": "1"
        },
        {
            "caller": "http_response:162",
            "code": 101,
            "key": "total_elapsed_time",
            "message": "4.0002ms"
        }
    ]
}
N1QL is released and available as part of Couchbase Server.
Please download Couchbase Server 4.1
http://www.couchbase.com/nosql-databases/downloads
Learn more at: http://www.couchbase.com/n1ql
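For example, once 4.1 is installed you can open the cbq shell and query a bucket directly. A minimal sketch, assuming the default port on localhost and the beer-sample sample bucket (a primary index is required before ad-hoc queries on a bucket):
cbq -engine http://localhost:8093/
cbq> CREATE PRIMARY INDEX ON `beer-sample`;
cbq> SELECT name FROM `beer-sample` WHERE type = 'beer' LIMIT 1;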
Just to note, there is a new developer preview of N1QL out now (http://docs.couchbase.com/developer/n1ql-dp3/n1ql-intro.html) and the way to connect to a Couchbase cluster has changed from the answer given by user1697575; it's now:
cbq-engine -datastore <CB-location>
The Couchbase query engine can also serve N1QL queries from the file system, and a sample data directory is included in the download that can be queried:
cbq-engine -datastore=dir:./data
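Each subdirectory of ./data then acts as a keyspace you can query from the cbq shell. A hedged sketch (the tutorial keyspace name is an assumption; substitute whatever directories your download actually contains):
cbq -engine http://localhost:8093/
cbq> SELECT * FROM tutorial LIMIT 1;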
I am running Couchbase Enterprise Edition version 6.6.2 on Windows Server 2016 Standard Edition.
I have two buckets called A and B. Bucket A is configured with enable_shared_bucket_access = true; my Sync Gateway creates new documents in bucket A, and a bunch of services change and delete these documents.
XDCR replicates documents from bucket A to bucket B. All changes to documents in bucket A are replicated to bucket B, except deletions, which are not. When documents in bucket B get older than 62 days, they are deleted by an external service.
Over time I noticed that 93% of the documents in bucket B are binary documents! My own documents are JSON; I don't use any kind of binary documents in my solution. This leads me to the conclusion that these binary documents are internal Couchbase documents.
Here is an example of these binary documents:
{
    "$1": {
        "cas": 1667520921496387584,
        "expiration": 0,
        "flags": 50331648,
        "id": "_sync:rev:00001abd-1f99-4b4e-a695-d11574ea9ed8:0:",
        "type": "base64"
    },
    "pa": "<binary (1 b)>"
},
{
    "$1": {
        "cas": 1667484959445614592,
        "expiration": 0,
        "flags": 50331648,
        "id": "_sync:rev:00001abd-1f99-4b4e-a695-d11574ea9ed8:34:2-d3fb2d58672f853d98ce343d3ae84c1d",
        "type": "base64"
    },
    "pa": "<binary (1129 b)>"
}
My issue with these documents is that they increase dramatically over time, and they don't get cleaned up automatically! So they just grow and consume resources.
What are these documents used for?
Why aren’t these documents cleaned automatically?
Is it safe to simply delete these documents?
Is this a bug or a feature? :-)
Regards,
Siraf
The issue was solved by adding AND NOT REGEXP_CONTAINS(META().id, "^_sync:rev") to the XDCR replication filter expression. This stopped the binary documents from being replicated from bucket A to B.
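For reference, a sketch of how such a filter can be supplied when creating the replication with couchbase-cli (the cluster address, credentials, and remote cluster name are placeholders; adjust the expression to match any filter you already have):
couchbase-cli xdcr-replicate -c localhost:8091 -u Administrator -p password \
  --create --xdcr-cluster-name remote --xdcr-from-bucket A --xdcr-to-bucket B \
  --filter-expression 'NOT REGEXP_CONTAINS(META().id, "^_sync:rev")'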
I'm setting up an InnoDB Cluster using mysqlsh. This is in Kubernetes, but I think this question applies more generally.
When I use cluster.configureInstance() I see messages that includes:
This instance reports its own address as node-2:3306
However, the nodes can only find each other through DNS at an address like node-2.cluster:3306. The problem comes when adding instances to the cluster; they try to find the other nodes without the qualified name. Errors are of the form:
[GCS] Error on opening a connection to peer node node-0:33061 when joining a group. My local port is: 33061.
It is using node-n:33061 rather than node-n.cluster:33061.
If it matters, the "DNS" is set up as a headless service in Kubernetes that provides consistent addresses as pods come and go. It's very simple, and I named it "cluster" to create addresses of the form node-n.cluster. I don't want to cloud this question with detail I don't think matters, however, as surely other configurations require the instances in the cluster to use DNS as well.
I thought that setting localAddress when creating the cluster and adding the nodes would solve the problem. Indeed, after I added that to the createCluster options, I can look in the database and see
| group_replication_local_address | node-0.cluster:33061 |
After I create the cluster and look at the topology, it seems that the local address setting has no effect whatsoever:
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "node-0:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "node-0:3306": {
                "address": "node-0:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.29"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "node-0:3306"
}
And adding more instances continues to fail with the same communication errors.
How do I convince each instance that the address it needs to advertise is different? I will try other permutations of the localAddress setting, but it doesn't look like it's intended to fix the problem I'm having. How do I reconcile the address the instance reports for itself with the address that's actually useful for other instances to find it?
Edit to add: Maybe it is a Kubernetes thing? Or a Docker thing at any rate. There is an environment variable set in the container:
HOSTNAME=node-0
Does the containerized MySQL use that? If so, how do I override it?
Apparently this value has to be set at startup. For my setup, adding the option
--report-host=${HOSTNAME}.cluster
when starting the MySQL instances resolved the issue.
Specifically for Kubernetes, an example is at https://github.com/adamelliotfields/kubernetes/blob/master/mysql/mysql.yaml
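For anyone hitting the same thing, a minimal sketch of the idea as a container start command (the cluster suffix is the headless service name from the question; adjust it to yours):
# Advertise the DNS name from the headless service instead of the bare
# pod hostname; $HOSTNAME is set by Kubernetes in the container.
mysqld --report-host="${HOSTNAME}.cluster"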
I would like to get the details/usage of network traffic used by each application (not by host).
I tried, and I was able to get the list of applications running on a host by using:
{
    "jsonrpc": "2.0",
    "method": "application.get",
    "params": {
        "output": "extend",
        "hostids": "10107"
    },
    "auth": "02axxxxxxx6e1023exxx252cd2xx70",
    "id": 1
}
but I need the network traffic consumption details.
There is no native way to do it.
The application.get API retrieves the application list from the template/host; Zabbix uses it as a grouping mechanism.
See Configuration -> Templates -> Pick one -> Applications
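The closest native data is the per-interface traffic items on the host (the net.if.in / net.if.out keys), which you can pull with item.get. A hedged sketch using curl (the endpoint URL is a placeholder, and the traffic is per network interface, not per application):
curl -s -H 'Content-Type: application/json-rpc' \
  -d '{
        "jsonrpc": "2.0",
        "method": "item.get",
        "params": {
          "output": ["name", "key_", "lastvalue"],
          "hostids": "10107",
          "search": {"key_": "net.if"}
        },
        "auth": "02axxxxxxx6e1023exxx252cd2xx70",
        "id": 1
      }' \
  http://zabbix.example.com/api_jsonrpc.php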
I've just started experimenting with Azure Functions and I'm trying to understand how to control the app settings depending on the environment.
In .NET Core you could have appsettings.json, appsettings.Development.json, etc., and as you moved between different environments the config would change.
However, from looking at the Azure Functions documentation, all I can find is that you can set up config in the Azure portal, but I can't see anything about setting up config in the solution.
So what is the best way to manage settings per build environment?
Thanks in advance :-)
The best way, in my opinion, is using a proper build and release system, like VSTS.
What I've done in one of my solutions is creating an ARM template of my Function App and deploy this using a release pipeline with VSTS RM.
This way you can just add a value to the template.json, like the one below.
"appSettings": [
// other entries
{
"name": "MyValue",
"value": "[parameters('myValue')]"
}
You will need another file, called parameters.json, which will hold the values. This file looks like this (at the moment):
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "name": {},
        "storageName": {},
        "location": {},
        "subscriptionId": {}
    }
}
Back in VSTS you can just change/override the values of these parameters in the portal.
By using such a workflow you will get a professional CI/CD implementation where no one has to bother themselves with the actual secrets. They are only known to the system administrators.
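If you want to try the same template outside a release pipeline first, a minimal sketch with the Azure CLI (the resource group and parameter value are placeholders, and parameters.json needs its values filled in):
# Deploy the ARM template, overriding the myValue parameter inline.
az deployment group create \
  --resource-group my-functions-rg \
  --template-file template.json \
  --parameters @parameters.json \
  --parameters myValue="SomeSetting"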
I am trying to restore a previous backup through the Google Cloud console and the JSON API as well, and I am getting the error "The instance or operation is not in an appropriate state to handle the request". What does it mean? I can't find it in the docs, and the instance is in RUNNABLE status.
I want to restore a backup because there was some data loss, probably due to the fact I am using MyISAM instead of InnoDB; not sure if that matters with this issue.
Any idea?
Thank you
ps:
{
    "error": {
        "errors": [
            {
                "domain": "global",
                "reason": "invalidState",
                "message": "The instance or operation is not in an appropriate state to handle the request."
            }
        ],
        "code": 409,
        "message": "The instance or operation is not in an appropriate state to handle the request."
    }
}
As Juan Enrique pointed out, Cloud SQL stores the last seven backups, and it will allow you to restore any of them.
When using the API you may list all the backups, including those older than the seven-backup limit, but it won't allow you to restore from any of those older ones.
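You can check which backups are still restorable, and restore one of them, with gcloud (the instance name is a placeholder):
# List the backups Cloud SQL still retains for the instance.
gcloud sql backups list --instance=my-instance

# Restore one of the retained backups by its ID.
gcloud sql backups restore BACKUP_ID --restore-instance=my-instance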