While creating index I get this error:
[
{
"code": 3000,
"msg": "syntax error - at -",
"query_from_user": "create primary index on sample-partner"
}
]
If I change the bucket name to sample_partner, then it works. Using Couchbase 4.5 Enterprise edition.
Yeah, that's because N1QL interprets the - as a minus sign. You simply need to escape the bucket name using backticks:
CREATE PRIMARY INDEX ON `sample-partner`;
It should work that way. Remember to escape that bucket name in every N1QL query and you should be fine. Or use an underscore in the bucket name, as an alternative :)
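For example (bucket name taken from the question), the same escaping is needed in every statement that references the bucket, not just the index creation:

```sql
-- Without backticks, sample-partner parses as (sample - partner)
SELECT COUNT(*) AS cnt FROM `sample-partner`;
```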
I'm trying to use the JDBC source connector in incrementing mode to produce a message to a topic on update of a table in MySQL. It works using timestamp mode but doesn't seem to work using incrementing column mode. When I insert a new row into the table, I do not see any message published to the topic.
{
"_comment": " --- JDBC-specific configuration below here --- ",
"_comment": "JDBC connection URL. This will vary by RDBMS. Consult your manufacturer's handbook for more information",
"connection.url": "jdbc:mysql://localhost:3306/lte?user=root&password=tiger",
"_comment": "Which table(s) to include",
"table.whitelist": "candidate_score",
"_comment": "Pull all rows based on an incrementing column. You can also do bulk or timestamp-based extracts. For more information, see http://docs.confluent.io/current/connect/connect-jdbc/docs/source_config_options.html#mode",
"mode": "incrementing",
"_comment": "Which column has the incrementing value to use? ",
"incrementing.column.name": "attempt_id",
"_comment": "Whether to validate that the incrementing column is defined as NOT NULL ",
"validate.non.null": "true",
"_comment": "The Kafka topic will be made up of this prefix, plus the table name ",
"topic.prefix": "mysql-"
}
attempt_id is an auto-incrementing, non-null column which is also the primary key.
Actually, it's my fault. I was listening to the wrong topic.
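For anyone debugging a similar setup: incrementing mode boils down to the connector remembering the highest column value it has seen and polling only for rows above it. A rough, self-contained simulation of that loop (using sqlite3 as a stand-in for MySQL; the table and column names are taken from the config above, the rest is illustrative):

```python
import sqlite3

# Stand-in for the MySQL table from the connector config
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE candidate_score (attempt_id INTEGER PRIMARY KEY, score INTEGER)")
conn.executemany("INSERT INTO candidate_score (score) VALUES (?)", [(10,), (20,)])

def poll(last_seen):
    """Roughly what the connector runs on each poll in incrementing mode."""
    rows = conn.execute(
        "SELECT attempt_id, score FROM candidate_score "
        "WHERE attempt_id > ? ORDER BY attempt_id", (last_seen,)
    ).fetchall()
    # The stored offset becomes the highest incrementing value seen so far
    return rows, (rows[-1][0] if rows else last_seen)

rows, offset = poll(0)           # first poll picks up the existing rows
conn.execute("INSERT INTO candidate_score (score) VALUES (30)")
new_rows, offset = poll(offset)  # later polls only see rows with a higher attempt_id
```

The resulting messages land on the topic named topic.prefix plus the table name, i.e. mysql-candidate_score here, which is exactly the topic the asker was not watching.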
I have documents with this schema in a bucket:
{
"status": "done",
"id": 1
}
I want to select all documents that have status as done.
Assuming you're using Couchbase Server 4.x or greater, you can use a N1QL query to do this. For instance:
SELECT d.*
FROM mydocuments d
WHERE d.status == 'done'
You also need to create an index on status (at a minimum; index design is a bigger topic than a StackOverflow answer can cover), like this:
CREATE INDEX ix_status ON mydocuments (status);
For more information, check out the N1QL documentation and the interactive N1QL tutorial.
We have the following documents in couchbase:
Doc1:
{
  "property1": "someval",
  "name": "DOC_OF_TYPE1"
}
Doc2:
{
  "property1": "someval2",
  "name": "DOC_OF_TYPE1"
}
Doc3:
{
  "property1": "someval2",
  "name": "DOC_OF_TYPE2"
}
Is there a way to view only the documents of type "DOC_OF_TYPE1"? And is there a way to delete all documents of that type from Couchbase?
From Couchbase Server 4.1 onwards this is made easy through N1QL queries and DML (data manipulation language).
First, create a primary index on your data using N1QL. This can be done via a Couchbase SDK, the Query Workbench (integrated in the upcoming Couchbase 4.5 release), or the cbq tool located in the Couchbase bin directory (/opt/couchbase/bin on Linux, inside the .app bundle on OS X, and in the install directory on Windows).
The following query creates the primary index on a bucket named 'mybucket', this allows you to perform any kind of N1QL query on a bucket:
CREATE PRIMARY INDEX ON `mybucket`;
For performance and production purposes you should create a secondary index:
CREATE INDEX `document_name` ON `mybucket`(name);
This creates an index on every document's name field. You can now efficiently select documents by their name field (this also works with just the primary index, but more slowly):
SELECT *, meta().id FROM `mybucket` WHERE name = 'DOC_OF_TYPE1';
Or delete them based on their name field:
DELETE FROM `mybucket` WHERE name = 'DOC_OF_TYPE2';
You can find more information about N1QL in the Couchbase Server documentation
I want to find similar location names using Couchbase Server. I created a view with the following map function:
function (doc, meta) {
emit(doc.loc_name, doc);
}
This is how I query the data:
http://IP Address:8092/dev-locations/_design/dev_test_view/_view/searchByLocationName?full_set=true&inclusive_end=true&stale=false&connection_timeout=60000&key=%22Joh%22
But this returns results only if an exact match is found. What I am looking for is: when I send the key joh, it should return johenaskirchen and johenasberg (like LIKE in MySQL).
Any help will be highly appreciated.
Note: I already tried N1QL; I am looking for ways to implement this without N1QL.
The key parameter is an exact match. What you want is a combination of startkey and endkey:
?startkey=%22joh%22&endkey=%22joh\uefff%22
The \uefff is a trick: this Unicode character can be seen as "the biggest character", so it ensures that a key like johzzzzzz is still considered under the upper bound of joh\uefff (endkey is inclusive).
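You can convince yourself of the range with plain string comparison, which approximates how the view orders keys (view collation is Unicode-based, so codepoint comparison is only an approximation, but it shows the idea; sample keys from the question):

```python
prefix = "joh"
endkey = prefix + "\uefff"  # the "biggest character" trick from above

keys = ["johenaskirchen", "johenasberg", "johzzzzzz", "berlin"]
# Every key starting with "joh" falls inside the [startkey, endkey] range
matches = [k for k in keys if prefix <= k <= endkey]
```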
I have a requirement wherein I have to delete an entry from a Couchbase bucket. I use the delete method of the CouchbaseClient from my Java application, to which I pass the key. But in one particular case I don't have the entire key name, only a part of it. So I thought there would be a method that takes a matcher, but I could not find one. The following is the actual key stored in the bucket:
123_xyz_havefun
and the part of the key that I have is xyz. I am not sure whether this can be done. Can anyone help?
The DELETE operation of Couchbase supports neither wildcards nor regular expressions, so you have to obtain the list of keys somehow and pass each one to the function. For example, you might use Couchbase views, or maintain your own list of keys via the APPEND command: create a key such as xyz and append every matching key to its value during the application's lifetime, flushing this key after the real delete request.
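The "maintain your own list of keys" idea can be sketched as follows (an in-memory dict stands in for the bucket, and all helper names are hypothetical; a real implementation would use the SDK's append and delete operations):

```python
bucket = {}  # in-memory stand-in for the Couchbase bucket

def insert(key, value, index_key="xyz_keys"):
    """Store a value and, APPEND-style, record matching keys under one index key."""
    bucket[key] = value
    if "_xyz_" in key:
        bucket[index_key] = bucket.get(index_key, "") + key + ";"

def delete_tracked(index_key="xyz_keys"):
    """Delete every key recorded in the index entry, then flush the entry itself."""
    for key in bucket.pop(index_key, "").split(";"):
        bucket.pop(key, None)

insert("123_xyz_havefun", "a")
insert("456_abc_other", "b")
delete_tracked()  # removes only the tracked _xyz_ keys
```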
Well, I think you can achieve a delete using a wildcard or regex-like expression.
The answers above basically say:
- query the data from Couchbase,
- iterate over the result set,
- and fire a delete for each key of interest.
However, I believe a delete on the server should be a delete on the server, rather than requiring the three steps above.
In this regard, old-fashioned RDBMSs were easier: all you need to do is fire a SQL query like DELETE FROM table WHERE key LIKE 'match%'.
Fortunately, something similar to SQL is available in Couchbase, called N1QL (pronounced "nickel"). I am not aware of the JavaScript (and other language) syntax, but this is how I did it in Python.
Query to be used: DELETE FROM `bucket` b WHERE META(b).id LIKE "match%"
from couchbase.n1ql import N1QLQuery
from couchbase.exceptions import CouchbaseError

# cb is an already-connected Bucket instance; cb_layer_key is the key prefix to delete
layer_name_prefix = cb_layer_key + "|" + "%"
try:
    query = N1QLQuery('DELETE FROM `test-feature` b WHERE META(b).id LIKE $1', layer_name_prefix)
    cb.n1ql_query(query).execute()
except CouchbaseError as e:
    logger.exception(e)
To achieve the same thing, an alternative query could be the one below, if you are storing a type and/or other metadata like parent_id:
DELETE FROM `test-feature` WHERE type='Feature' AND parent_id=8;
But I prefer the first version of the query, as it operates on the key, and I believe Couchbase has internal indexes that make querying on keys (and other metadata) faster.
Although it is true that you cannot iterate over documents with a regex directly, you could create a new view whose map function only emits keys that match your regex.
An example map function (with an obviously contrived and awful regex) could be:
function(doc, meta) {
if (meta.id.match(/_xyz_/)) {
emit(meta.id, null);
}
}
An alternative idea would be to extract that portion of the key from each document and then emit that. That would allow you to use the same index to match different documents by that particular key form.
function(doc, meta) {
var match = meta.id.match(/^.*_(...)_.*$/);
if (match) {
emit(match[1], null);
}
}
In your case, this would emit the key xyz (or the corresponding component of each key) for every document. You could then use startkey and endkey to limit results based on your criteria.
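The extraction step of that second map function, translated to Python so you can see what gets emitted (same regex as above; the function name is just for illustration):

```python
import re

def emitted_key(doc_id):
    # Same pattern as the map function: capture the three-character
    # component between underscores
    match = re.match(r"^.*_(...)_.*$", doc_id)
    return match.group(1) if match else None

emitted_key("123_xyz_havefun")  # the view would emit "xyz" for this document
```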
Lastly, there are a ton of options from the information retrieval research space for building text indexes that could apply here. I'll refer you to this doc on permuterm indexes to get you started.