Slow performance on view index - Couchbase

I am trying to find the maximum value of an indexed field but am getting very slow performance (over 15 minutes; bucket size >42M items, 100% resident). The index was created with:
CREATE INDEX by_name ON sample(name) USING VIEW;
And queried by:
SELECT MAX(name) AS device FROM sample USE INDEX (by_name USING VIEW);
What can I do to increase performance?

Related

Aurora MySQL 5.7 ignores primary key index [duplicate]

I am observing weird behaviour which I am trying to understand.
MySQL version: 5.7.33
I have the below query:
select * from a_table where time>='2022-05-10' and guid in (102,512,11,35,623,6,21,673);
a_table has a primary key on (time, guid) and an index on guid.
The query above performs very well; per the EXPLAIN plan it is Using index condition; Using where; Using MRR.
As I increase the number of values in my IN clause, performance is impacted significantly.
After some dry runs, I was able to get a rough number. For fewer than ~14,500 values the EXPLAIN plan is the same as above. For more values than that, the plan shows only Using where and the query takes forever to run.
For example, if I put 14,000 values in my IN clause, the EXPLAIN plan estimates 14,000 rows, as expected. However, with 15,000 values, the EXPLAIN estimates 221,200,324 rows. I don't even have that many rows in my whole table.
I am trying to understand this behaviour and to know if there is any way to fix this.
Thank you
Read about Limiting Memory Use for Range Optimization.
When you have a large list of values in an IN() predicate, it uses more memory during the query optimization step. This was considered a problem in some cases, so recent versions of MySQL set a max memory limit (it's 8MB by default).
If the optimizer finds that it would need more memory than the limit, and there is no other condition in your query it can use to optimize, it gives up trying to optimize and resorts to a table scan. I infer that your table statistics actually show the table as having ~221 million rows (though table statistics are inexact estimates).
I can't say I know the exact formula for how much memory a given list of values needs, but given your observed behavior we can guess it averages roughly 600 bytes per item, since ~14,500 items fit under the 8MB default and more than that do not.
You can set range_optimizer_max_mem_size = 0 to disable the memory limit. This creates a risk of excessive memory use, but it avoids the optimizer "giving up." We set this value on all MySQL instances at my last job, because we couldn't educate the developers to avoid creating huge lists of values in their queries.
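For reference, a sketch of how the limit can be inspected and disabled at runtime (a persistent change would go in the server configuration file instead):
SHOW VARIABLES LIKE 'range_optimizer_max_mem_size';  -- default is 8388608 (8MB)
SET GLOBAL range_optimizer_max_mem_size = 0;         -- 0 disables the limit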

Database table size and MySQL query execution time

I have a database table with about 500,000 rows. When I run a MySQL SELECT query, the execution time is quite long, about 0.4 seconds. The same query on a smaller table takes about 0.0004 seconds.
Are there any solutions to make this query faster?
Most important thing: use an index suitable for your WHERE clause.
Better still, use a covering index: one that covers not only the WHERE clause but also all selected columns. That way the result can be returned using only the index, without loading the data from the actual rows identified by the index (see the sketch below).
Reduce the number of returned columns to the columns you really need. Don't select all columns if you are not using every one of them.
Use data types appropriate to the stored data, and choose the smallest data types possible. E.g. when you have to store a number that will never exceed 100, you can use a TINYINT, which consumes only 1 byte, instead of a BIGINT, which uses 8 bytes in every row (see the integer types documentation).
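As a sketch of the covering-index idea, assuming a hypothetical orders table that is filtered by customer_id and only needs two other columns:
CREATE INDEX idx_orders_covering ON orders (customer_id, order_date, total);
-- This query can now be answered from the index alone ("Using index" in EXPLAIN):
SELECT order_date, total FROM orders WHERE customer_id = 42;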

How to improve the performance of a MySQL query

I have a MySQL query:
select stu_photo, type, sign, type1, fname,regno1
from stadm
where regno1 = XXXXXX
LIMIT 1000;
The stadm table has 67,063 rows. The execution time of the above query is 5-6 minutes.
I am unable to add an index on the stu_photo and sign columns (their data types are BLOB and LONGBLOB). The table engine is InnoDB. How can I improve the performance (i.e., reduce the execution time)?
One improvement I can see for your query would be to add an index on the regno1 column. This would potentially speed up the WHERE clause.
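A minimal sketch of that suggestion (the index name idx_regno1 is just a placeholder):
CREATE INDEX idx_regno1 ON stadm (regno1);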
"I am unable to add an index for the stu_photo and sign columns"
These columns should not impact the performance of the query you showed us, since they appear only in the SELECT list, not in the WHERE clause.
Another factor influencing the performance of the query is the time it takes to send the result set back to your console or application. If each record is very large, things may appear slow, even for only 1,000 records.
Create a new VARCHAR column md5_regno1 and store the MD5 hash of regno1 in it. Then you can create an index on the new column and search like this:
select stu_photo, type, sign, type1, fname,regno1
from stadm
where md5_regno1 = MD5('XXXXXX')
LIMIT 1000;
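For illustration, that setup might look like this (the column and index names are assumptions):
ALTER TABLE stadm ADD COLUMN md5_regno1 VARCHAR(32);
UPDATE stadm SET md5_regno1 = MD5(regno1);
CREATE INDEX idx_md5_regno1 ON stadm (md5_regno1);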

MySQL fetch time too long

My query has a small duration, but the fetch time is quite large. What exactly is the fetch time? Can it be reduced? Does it depend on the network, since the server is a remote one?
My query is a simple one: SELECT * FROM Table WHERE id = a primary_Key; Usually, the query returns between 5 and 50k rows.
But the table has 9 million rows. A COUNT(*) takes 82 seconds (duration) and 0 for fetch time.
Fetch time does depend on network speed. Your SELECT Count(*) ... query only returns a single number so network overhead is minimal.
To improve the speed, I suggest you only fetch the columns and rows you need (i.e. replace SELECT * ... with the exact columns you need).
Also, enabling compression between client and server will reduce time (but will slightly increase CPU usage for compressing/decompressing). See this related question.
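As an example, compression can be enabled from the mysql command-line client with the --compress option (the host, user, and database names here are placeholders):
mysql --compress -h remote-host -u app_user -p app_db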
Have you looked into using MySQL indexes?

MySQL query slow because of separate indexes?

Here is my situation. I have a MySQL MyISAM table containing about 4 million records and a total of 13.3 GB of data. The table contains messages received from an external system. Two of the columns in the table keep track of a timestamp and a boolean indicating whether the message has been handled or not.
When using this query:
SELECT MIN(timestampCB) FROM webshop_cb_onx_message
The result shows up almost instantly.
However, I need to find the earliest timestamp of unhandled messages, like this:
SELECT MIN(timestampCB ) FROM webshop_cb_onx_message WHERE handled = 0
The results of this query show up after about 3 minutes, which is way too slow for the script I'm writing.
Both columns are individually indexed, not together. However, adding an index to the table would take incredibly long considering the amount of data that is in there already.
Does my problem originate from the fact that both columns are separately indexed, and if so, does anyone have a solution to my issue other than adding another index?
It is commonly recommended that if the selectivity of an index is over 20%, a full table scan is preferable to an index access. Given that selectivity, your index on handled will most likely not be used and the query will fall back to a full table scan.
A composite index on (handled, timestampCB) may actually improve performance. Even if the selectivity isn't great, MySQL would most likely still use it, and even if it didn't, you could force its use (see the sketch below).
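A sketch of that composite index and a forced-index query (the index name idx_handled_ts is an assumption):
ALTER TABLE webshop_cb_onx_message ADD INDEX idx_handled_ts (handled, timestampCB);
SELECT MIN(timestampCB) FROM webshop_cb_onx_message FORCE INDEX (idx_handled_ts) WHERE handled = 0;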