How to execute a query in Couchbase Server

I want to execute this query: select path_id, usr_id from pth, so that I can get all the data from my bucket "path" and use it to generate data for my bucket "step".
I tried to create an index using this query:
CREATE INDEX idx_xref ON pth(path_id,usr_id);
And then I executed this query:
select path_id, usr_id from pth
I'm expecting JSON results, but I always get this error:
[
  {
    "code": 4000,
    "msg": "No index available on keyspace pth that matches your query. Use CREATE INDEX or CREATE PRIMARY INDEX to create an index, or check that your expected index is online.",
    "query_from_user": "select path_id , usr_id from pth;"
  }
]

select path_id, usr_id from pth where path_id IS NOT MISSING
The leading key of the index needs to be present in the WHERE clause for index selection.
You can find more details here:
https://blog.couchbase.com/n1ql-practical-guide-second-edition/

I solved the problem by creating a primary index:
CREATE PRIMARY INDEX ON pth
Thank you all for your help :))
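For completeness: as the error message suggests, it is also worth checking that an index is actually online before querying. A hedged way to do that from N1QL, assuming the system:indexes keyspace available in recent Couchbase versions:
-- state should read 'online' for a usable index
SELECT name, state FROM system:indexes WHERE keyspace_id = 'pth';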

Related

CONTAINS doesn't return any result

I know LIKE can be used instead of CONTAINS, but CONTAINS is relatively fast compared to LIKE. However, the following query doesn't return any results. Why?
Query:
SELECT CustomerName FROM members WHERE CONTAINS(Country, 'Mexico');
DATABASE:
MySQL Solution
select customername
from members
where match(country) against ('Mexico')
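Note that MATCH ... AGAINST requires a FULLTEXT index on the column, or MySQL raises an error. A minimal sketch, assuming the members table from the question (the index name is hypothetical):
-- MATCH(country) needs a FULLTEXT index to work
ALTER TABLE members ADD FULLTEXT INDEX ft_country (Country);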
MS SQL Server Solution
Full text indexes aren't necessarily always populated after creation.
Use the following code to ensure the index updates:
ALTER FULLTEXT INDEX ON members SET CHANGE_TRACKING AUTO;
More info: https://msdn.microsoft.com/en-us/library/ms142575.aspx
Full example (including change tracking option on index creation rather than in a later alter statement):
use StackOverflow
go

create table myTable
(
    id bigint not null identity(1,1)
        constraint myTable_pk primary key clustered
    , name nvarchar(100)
);

create fulltext catalog StackOverflow_Catalog;

create fulltext index on myTable(name)
    key index myTable_pk
    on StackOverflow_Catalog
    with change_tracking auto;

insert myTable
select 'Mexico'
union select 'London';

select *
from myTable
where contains(name,'Mexico');
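One caveat worth hedging: even with change tracking set to auto, the initial population runs asynchronously, so the final SELECT can come back empty if it runs immediately after the index is created. The population status can be checked like this (catalog name from the example above):
-- 0 means idle, i.e. the population has finished; other codes are listed in the MSDN link above
select FULLTEXTCATALOGPROPERTY('StackOverflow_Catalog', 'PopulateStatus');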
Have you tried using IN? It might be even faster if it works in your case:
SELECT CustomerName
FROM members
WHERE Country IN ('Mexico')
But in your case you can use a prefix term in CONTAINS, which is faster than LIKE with a wildcard. Note that SQL Server full-text only honors the wildcard at the end of a prefix term (a leading * is ignored), so use:
SELECT CustomerName
FROM members
WHERE CONTAINS(Country, '"Mexico*"');
Try this if there is a full-text search index on the column Country:
SELECT CustomerName FROM members
WHERE CONTAINS(Country, 'Mexico');
Otherwise, just do:
SELECT CustomerName FROM members
WHERE Country LIKE N'%Mexico%';
Note the N prefix: if the data contains a Unicode character such as é, you need to prefix the string literal with N.

unique index on embedded json object

I am testing PostgreSQL 9.4 beta2 right now. I am wondering: is it possible to create a unique index on an embedded JSON object?
I created a table named products:
CREATE TABLE products (oid serial primary key, data jsonb)
Now, I try to insert this JSON object into the data column:
{
    "id": "12345",
    "bags": [
        {
            "sku": "abc123",
            "price": 0
        },
        {
            "sku": "abc123",
            "price": 0
        }
    ]
}
However, I want the sku of bags to be unique, which means this JSON can't be inserted into the products table, because sku is not unique in this case.
I tried to create a unique index like the one below, but it failed:
CREATE UNIQUE INDEX product_sku_index ON products( (data->'bags'->'sku') )
Any suggestions?
Your attempt to create a UNIQUE INDEX on the expression was bound to fail for multiple reasons.
CREATE UNIQUE INDEX product_sku_index ON products( (data->'bags'->'sku') )
The first and most trivial being that ...
data->'bags'->'sku'
does not reference anything. You could reference the first element of the array with
data->'bags'->0->>'sku'
or shorter:
data#>>'{bags,0,sku}'
But that expression only returns the first value of the array.
Your requirement "I want sku of bags to be unique" is unclear. Do you want the value of sku to be unique within one JSON object, or among all JSON objects in the column data? Or do you want to restrict the array to a single element with an sku?
Either way, neither of these goals can be implemented with a simple UNIQUE index.
Possible solution
If you want sku values to be unique across all json arrays in data->'bags', there is a way. Unnest the array and write all individual sku values to separate rows in a simple auxiliary table with a unique (or PK) constraint:
CREATE TABLE prod_sku(sku text PRIMARY KEY); -- PK enforces uniqueness
This table may be useful for additional purposes.
Here is a complete code example for a very similar problem with plain Postgres arrays:
Can PostgreSQL have a uniqueness constraint on array elements?
Only adapt the unnesting technique. Instead of:
DELETE FROM hostname h
USING unnest(OLD.hostnames) d(x)
WHERE h.hostname = d.x;
...
INSERT INTO hostname(hostname)
SELECT h
FROM unnest(NEW.hostnames) h;
Use:
DELETE FROM prod_sku p
USING jsonb_array_elements(OLD.data->'bags') d(x)
WHERE p.sku = d.x->>'sku';
...
INSERT INTO prod_sku(sku)
SELECT b->>'sku'
FROM jsonb_array_elements(NEW.data->'bags') b
Details for that:
PostgreSQL joining using JSONB
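To tie the pieces together, here is a minimal trigger sketch under the assumptions above (function and trigger names are hypothetical, and DELETEs on products are not handled). A duplicate sku then raises a primary key violation on prod_sku, which aborts the offending write to products:
CREATE OR REPLACE FUNCTION trg_products_sku()
  RETURNS trigger AS
$$
BEGIN
   -- on UPDATE, first remove the sku values of the old row version
   IF TG_OP = 'UPDATE' THEN
      DELETE FROM prod_sku p
      USING  jsonb_array_elements(OLD.data->'bags') d(x)
      WHERE  p.sku = d.x->>'sku';
   END IF;

   -- register all sku values of the new row version;
   -- duplicates (within the array or across rows) violate the PK
   INSERT INTO prod_sku(sku)
   SELECT b->>'sku'
   FROM   jsonb_array_elements(NEW.data->'bags') b;

   RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER products_sku
BEFORE INSERT OR UPDATE ON products
FOR EACH ROW EXECUTE PROCEDURE trg_products_sku();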

"order by" kills performance

I'm not a MySQL guy; actually, I'm doing this to help a friend.
I have these tables in a MySQL database:
create table post (ID bigint, p text);
create table user (ID bigint, user_id bigint);
and I'm querying them by this script:
select * from post
where ID in (select user_id from user where ID = 50)
order by ID DESC -- this line kills performance
limit 0,20;
As I mentioned in a comment, when there is no order by ID DESC, the query executes very fast. But when I add it, the query gets very, very slow, with huge CPU usage. Do you have any idea what I am doing wrong?
You should define ID as the primary key for your table. This will add an index and increase performance. At least as a first step, it's a good one.
This statement should do the trick:
create table post (
ID bigint,
p text,
PRIMARY KEY (ID));
Thanks to @frlan, the problem got solved by adding indexes:
CREATE INDEX IDX_POST_ID ON post (ID);
CREATE INDEX IDX_USER_ID ON user (ID);
CREATE INDEX IDX_USER_USERID ON user (user_id);
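A hedged way to verify the fix: once the ORDER BY column is backed by an index, EXPLAIN should no longer report a full sort of the result ("Using filesort") for this query:
-- run before and after creating the indexes and compare the plans
EXPLAIN
SELECT * FROM post
WHERE ID IN (SELECT user_id FROM user WHERE ID = 50)
ORDER BY ID DESC
LIMIT 0,20;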

Is there a keyword to identify a PRIMARY column in a MySQL WHERE clause?

I have a situation where the column names "field1" and "field3" are not given to me, but all the other data is. The request comes in via a URL like /table1/1 or /table2/3, and it is assumed that 1 or 3 represents the primary key. However, the column name may be different. Consider the following 2 queries:
SELECT * FROM table1 where field1 = 1 and field2 =2;
SELECT * FROM table2 where field3 = 3 and field4 =4;
Ideally, I'd like to perform a search like the following:
SELECT * FROM table1 where MYSQL_PRIMARY_COLUMN = 1 and field2 =2;
SELECT * FROM table2 where MYSQL_PRIMARY_COLUMN = 3 and field4 =4;
Is there a keyword to identify MYSQL_PRIMARY_COLUMN in a MySQL WHERE clause?
No, there's no pseudocolumn you can use to map to the primary key column. One reason this is complicated is that a given table may have a multi-column primary key. This is a totally ordinary way to design a table:
CREATE TABLE BooksAuthors (
book_id INT NOT NULL,
author_id INT NOT NULL,
PRIMARY KEY (book_id, author_id)
);
When I implemented the table data gateway class in Zend Framework 1.0, I had to write code to "discover" the table's primary key column (or columns), as @doublesharp describes. Then the table object instance retained this information so that searches and updates knew which column (or columns) to use when generating queries.
I understand you're looking for a solution that doesn't require this "2 pass process" but no such solution exists for the general case.
Some application framework environments attempt to simplify the problem by encouraging developers to give every table a single column primary key, and name the column "id" by convention. But this convention is an oversimplification that fails to cover many legitimate table designs.
You can use DESCRIBE (which is a synonym for EXPLAIN) to get information about the table, which will include all the column information.
DESCRIBE `table`;
You can also use SHOW INDEX to just get information about the PRIMARY key.
SHOW INDEX FROM `table` WHERE Key_name = 'PRIMARY'
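Alternatively, the primary key column (or columns) can be discovered from information_schema; a minimal sketch, with the table name assumed:
-- returns one row per primary key column, in key order
SELECT COLUMN_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'table1'
  AND CONSTRAINT_NAME = 'PRIMARY'
ORDER BY ORDINAL_POSITION;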

Inserting unique records into MS access 2007

So I have been looking around and not finding much. I apologize ahead of time because this is probably the wrong way to do this, but it is what it is.
I have to track classes that co-workers have completed. This is done through an Excel sheet that feeds the MS Access database. There are 3 fields supplied to me:
Full name, Course Name, and Completion Date.
I know that I don't have a primary key here, so I am trying to create a query that will only append the unique records pulled from the Excel sheet. I can do it based on a single field, but I need help making my query append a record only when both the Full name and Course Name are not already present. For example:
Joe Somebody, Course#1, 14feb13
Joe Somebody, Course#2, 15feb13
Joe Somebody, Course#1, 15feb13
I need a query that will append the first 2 rows to a table but ignore the third one, because the person has already completed Course#1. This is what I have so far; it basically turns my name field into a primary key:
INSERT INTO [table] ([Full name], [Course], [Date])
SELECT excel_table.[Full name], excel_table.[Course], excel_table.[Date]
FROM excel_table
WHERE excel_table.[Full name] NOT IN (SELECT [table].[Full Name] FROM [table])
I also have some Is Not Null criteria at the end, but I didn't think they would be relevant to the question.
The easiest way to do this so you do not get duplicates is to add an index. In this case, a composite primary key would seem to be the answer. Just select all of the fields you want included in the composite key and click the Primary Key button in the table design view.
You will not be allowed nulls in any of the fields comprising the primary key, but as long as the combination of the fields is not matched, data in each of the fields can be repeated. So:
Joe Somebody, Course#1, 14feb13 <-- good
Joe Somebody, Course#2, 15feb13 <-- good
Joe Somebody, Course#1, 15feb13 <-- fails
Joe SomebodyElse, Course#1, 14feb13 <-- good
Now, if you run an ordinary append query built with the query design window, you will get an error if the record exists twice in the Excel import table or already exists in Access.
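For the append itself, an unmatched-records query filters on both fields without relying on key violations; a sketch, assuming the table names from the question:
INSERT INTO [table] ([Full name], [Course], [Date])
SELECT e.[Full name], e.[Course], e.[Date]
FROM excel_table AS e
LEFT JOIN [table] AS t
  ON (t.[Full name] = e.[Full name] AND t.[Course] = e.[Course])
WHERE t.[Full name] IS NULL;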
You don't actually need a composite primary key. In fact there are a few places in Access where you are encouraged to not use a composite primary key. You can create your Access table with a simple integer primary key:
create table CourseCompletions (
ID autoincrement primary key
, FullName varchar(100)
, CourseName varchar(100)
, CompletionDate date
);
Then you can gulp in all the data from the Excel file:
insert into CourseCompletions (
    FullName
    , CourseName
    , CompletionDate
)
select
    [Full name]
    , [Course]
    , [Date]
from excel_table;
This will give each row of your input Excel table a unique number and stash it in the Access table. Now you need to decide how you want to reject conflicting rows from your CourseCompletions table. (The following queries show only the records that you decide to not reject.) If you want to reject completions by the same person of the same course at a later date:
select
    FullName
    , CourseName
    , min(CompletionDate) as CompletionDate
from CourseCompletions
group by
    FullName
    , CourseName;
If you want to reject completions at an earlier date simply change the MIN to MAX.
If you want to reject any course completion that appeared earlier in the Excel table:
select
    cc1.ID
    , cc1.FullName
    , cc1.CourseName
    , cc1.CompletionDate
from CourseCompletions as cc1
inner join (
    select
        max(ID) as WantedID
        , FullName
        , CourseName
    from CourseCompletions
    group by FullName, CourseName
) as cc2
    on cc1.ID = cc2.WantedID;
And to reject course completions that appeared later in the Excel table, simply replace MAX with MIN.
So using an integer primary key gives you some options.