Reverse column contents to utilize index? - mysql

Based on the query I'm running now I assume this is a pipe dream:
I have an index on a column that contains a string ID. Those IDs have an identifier on the end, so to capture the data I need, I have to pattern match like so:
key LIKE '%racecar'
Since you can't take advantage of an index with the wildcard starting the string, I was hoping I could do this:
reverse(key) LIKE 'racecar%'
But this would mean MySQL has to look at, and perform a function on, every single row anyway. Is that correct? Are there any other ways to get around this issue, short of changing the naming conventions?

This smells like bad DB design. Split the string and the id into two separate columns and the problem (and many other problems) will be solved automatically.
I also doubt the order of the string and the id will make a difference to MySQL with respect to performance.
Also keep in mind that you have an index over key, but this does not mean you have an index over the reverse of key, which is why you get no speedup at all. I believe that, given the situation, the performance is beyond salvation.
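For what it's worth, MySQL 5.7 and later can index a generated column, which does make the reversed lookup indexable. A minimal sketch, assuming a table named t and keeping the question's column name `key` (all names here are placeholders):

-- Sketch only: store the reversed value once and index it (MySQL 5.7+).
ALTER TABLE t
  ADD COLUMN key_rev VARCHAR(64)
    GENERATED ALWAYS AS (REVERSE(`key`)) STORED,
  ADD INDEX idx_key_rev (key_rev);

-- A left-anchored LIKE on the indexed reversed column can use idx_key_rev
-- instead of scanning and reversing every row.
SELECT *
FROM t
WHERE key_rev LIKE CONCAT(REVERSE('racecar'), '%');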

Related

MySQL- INDEX(): How to Create a Functional Key Part Using Last nth Characters?

How would I write the INDEX() statement to use the last N characters of a functional key part? I'm brand new to SQL/MySQL, and believe that's the proper verbiage of my question. An explanation of what I'm looking for is below.
The MySQL 8.0 Reference Manual explains how to use the first N characters, showing a secondary index that uses col2's first 10 characters, via this example:
CREATE TABLE t1 (
col1 VARCHAR(40),
col2 VARCHAR(30),
INDEX (col1, col2(10))
);
However, I would like to know how one could form this using the ending characters. Perhaps something like:
...
INDEX ((RIGHT(col2, 3)))
);
However, I think that says to index over a column called 'xyz' instead of "put an index on each column value using the last 3 of 30 potential characters"? That's what I'm really trying to figure out.
For some context: it'd be helpful to be able to index data that is smooshed/mixed together, and I'm playing around with how such a thing could be accomplished. The example below is a simplified, adjusted version of exported data from an inventory/billing manager that hails from the 90s, which I had to endure some years back:
Col1                                     | Col2
-----------------------------------------+------------------------------
GP6500012_SALES_FY2023_SBucks_503_Thurs  | R-DK_Sumat__SKU-503-20230174
GP6500012_SALES_FY2023_SBucks_607_Mon    | R-MD_Columb__SKU-607-2023035
GP6500012_SALES_FY2023_SBucks_627_Mon-pm | R-BLD_House__SKU-503-20230024
GP6500012_SALES_FY2023_SBucks_929_Wed    | R-FR_Ethp__SKU-929-20230324
Undoubtedly, better options exist that bypass this question altogether, and I'll presumably learn those techniques with time in my data analytics coursework. For now, I'm just curious whether it's possible to somehow index the rows by suffix instead of prefix, and what a code example to accomplish that would look like. TIA.
Proposed solution (INDEX ((RIGHT (col2,3)))):
Not available.
Case 1:
When you need to split apart a column to search it, you have probably designed the schema wrong. In particular, that part of the column needs to be in its own column. That being said, it is possible to use a 'virtual' (or 'generated') column that is a function of the original column, then INDEX that.
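As a sketch of that generated-column idea (MySQL 5.7+; using the t1/col2 names from the question, with col2_id invented), the trailing id can be exposed as its own indexed column:

-- Sketch: expose the trailing id of values like 'R-DK_Sumat__SKU-503-20230174'.
ALTER TABLE t1
  ADD COLUMN col2_id VARCHAR(20)
    GENERATED ALWAYS AS (SUBSTRING_INDEX(col2, '-', -1)) VIRTUAL,
  ADD INDEX idx_col2_id (col2_id);

-- Equality lookups on the extracted id can now use the index.
SELECT * FROM t1 WHERE col2_id = '20230174';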
Case 2:
If you are suggesting that the last 3 characters are the most selective and that might speed up any lookup, don't bother. Simply index the entire column.
That data:
I would consider splitting up the stuff that is concatenated together by _. Do it as you INSERT the rows. If it needs to be put back together, do so during subsequent SELECTs.
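For example (the staging and target table/column names below are invented), the Col1 values could be split once at INSERT time and glued back together only when needed:

-- Sketch: split 'GP6500012_SALES_FY2023_SBucks_503_Thurs' into parts on the way in.
INSERT INTO sales_fact (acct, category, fiscal_year, customer, store, day_label)
SELECT
  SUBSTRING_INDEX(col1, '_', 1),                             -- GP6500012
  SUBSTRING_INDEX(SUBSTRING_INDEX(col1, '_', 2), '_', -1),   -- SALES
  SUBSTRING_INDEX(SUBSTRING_INDEX(col1, '_', 3), '_', -1),   -- FY2023
  SUBSTRING_INDEX(SUBSTRING_INDEX(col1, '_', 4), '_', -1),   -- SBucks
  SUBSTRING_INDEX(SUBSTRING_INDEX(col1, '_', 5), '_', -1),   -- 503
  SUBSTRING_INDEX(col1, '_', -1)                             -- Thurs
FROM staging;

-- Reassemble only when the original string is actually needed.
SELECT CONCAT_WS('_', acct, category, fiscal_year, customer, store, day_label)
FROM sales_fact;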
DATEs:
Do not, on the other hand, split up dates (into year, month, etc). Keep them together. (That's another discussion.) Always go to the effort to convert dates (and datetimes) to the MySQL format (year-first) when storing. That way, you can properly use indexes and use the many date functions.
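For instance, if a feed delivered dates as text in a month-first format (the '%m/%d/%Y' pattern and the column names here are assumptions), convert them once when storing:

-- Sketch: store real DATE values by converting the incoming text at INSERT time.
INSERT INTO sales_fact (sold_on)
SELECT STR_TO_DATE(raw_date, '%m/%d/%Y')
FROM staging;

-- An indexed DATE column then supports range scans and the date functions.
SELECT * FROM sales_fact
WHERE sold_on >= '2023-01-01' AND sold_on < '2024-01-01';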
MySQL's Prefix indexing:
In general it is a "bad idea" to use the INDEX(col(10)) construct. It rarely is of any benefit; it often fails to use the index as much as you would expect. This is especially deceptive: UNIQUE(col(10)) -- It declares that the first 10 chars are unique, not the entire col!
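A quick illustration of that UNIQUE pitfall (table invented):

-- Sketch: the UNIQUE constraint covers only the 10-character prefix, not the whole column.
CREATE TABLE demo (
  code VARCHAR(30),
  UNIQUE KEY uk_code (code(10))
);

INSERT INTO demo VALUES ('GP6500012_SALES');    -- ok
INSERT INTO demo VALUES ('GP6500012_RETURNS');  -- rejected: first 10 chars collide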
CAST:
If the data is the wrong datatype (string vs int, wrong collation, etc), then I argue that it is a bad schema design. This is a common problem with EAV (Entity-Attribute-Value) schemas. When a number is stored as a string, CAST is needed to sort (ORDER BY) it.
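For example (table and column names invented), sorting a quantity that was stored as a string:

-- Sketch: as strings, '9' sorts after '10'; CAST restores numeric order.
SELECT item, qty_str
FROM eav_values
ORDER BY CAST(qty_str AS UNSIGNED);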
Functional indexes:
Your proposed solution is not a "prefix"; it is something more complicated. I suspect any expression, even on non-string columns, will work. This is when it became available:
---- 2018-10-22 8.0.13 General Availability ----
MySQL now supports creation of functional index key parts that index
expression values rather than column values. Functional key parts
enable indexing of values that cannot be indexed otherwise, such as
JSON values. For details, see CREATE INDEX Syntax.
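So on 8.0.13 or later, the question's expression should work essentially as proposed, against the t1 table from the question (the index name here is invented); note the extra parentheses that mark a functional key part, and that a query must repeat the same expression to use the index:

-- Functional index on the last 3 characters of col2 (MySQL 8.0.13+).
CREATE INDEX idx_col2_last3 ON t1 ((RIGHT(col2, 3)));

-- This lookup repeats the indexed expression, so it can use the index.
SELECT * FROM t1 WHERE RIGHT(col2, 3) = '174';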

SQL Index on Strings Helpful?

So I have used MySQL a lot in small projects for school; however, I'm now taking over an enterprise-ish scale project, and now speed matters, not just getting the right information back. I have Googled around a lot trying to learn how indexes might make my website faster, and I am hoping to further understand how they work, not just when to use them.
So, I find myself doing a lot of SELECT DISTINCTs in order to get all the distinct values, so I can populate my dropdowns. I have heard that this would be faster if the column were indexed; however, I don't completely understand why. If the values in this column were ints, I would totally understand: basically, a data structure like a BST would be created, and search times could be log(n). However, if my column holds strings, how can it put a string in a BST? This doesn't seem possible, since there is no metric to compare a string against another string (like there is with numbers). It seems like an index would just create a list of all the possible values for that column, and the search would still require the database to go through every single row, making it linear, just like if the database scanned a regular table.
My second question is what the database does once it finds the right value in the index data structure. For example, let's say I'm doing a WHERE age = 42. The database goes through the data structure until it finds 42, but how does it map that lookup to the whole row? Does the index have some sort of row number associated with it?
Lastly, if I am doing these frequent SELECT DISTINCT statements, is adding an index going to help? I feel like this must be a common task for websites, as many sites have dropdowns where you can filter results; I'm just trying to figure out if I'm approaching it the right way.
Thanks in advance.
Your logic is good; however, your assumption that there is no metric to compare strings to other strings is incorrect. Strings can simply be compared in alphabetical order, giving them a perfectly usable comparison metric that can be used to build the index.
It takes a tiny bit longer to compare strings than it does ints, but having an index still speeds things up, regardless of the comparison cost.
I would like to mention, however, that if you are using SELECT DISTINCT as much as you say, there are probably problems with your database schema.
You should learn about normalizing your database. I recommend starting with this link: http://databases.about.com/od/specificproducts/a/normalization.htm
Normalization will provide you with a querying mechanism that can vastly outweigh the benefits received from indexing.
If your strings are something small like categories, then an index will help. If you have large chunks of random text, then you will likely want a full-text index. If you are having to use SELECT DISTINCT a lot, your database may not be properly normalized for what you are doing. You could also put the distinct values in a separate table (one that only holds the distinct values), but this only helps if the content does not change a lot. Indexing strategies are particular to your application's access patterns, the data itself, and how the tables are normalized (or not).
HTH
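A rough sketch of the two suggestions above (all names invented): index the string column so the DISTINCT no longer needs a full scan, or keep the distinct values in their own small table:

-- Sketch: an ordinary secondary index on the string column.
CREATE INDEX idx_products_category ON products (category);

-- With the index in place, MySQL can read the distinct values
-- from the index rather than scanning the whole table.
SELECT DISTINCT category FROM products;

-- Alternative: a small table holding only the distinct values,
-- refreshed whenever the set of categories changes.
CREATE TABLE categories (
  name VARCHAR(50) PRIMARY KEY
);
SELECT name FROM categories ORDER BY name;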

MYSQL - int or short string?

I'm going to create a table that will have between 1000 and 20000 rows, and it has fields whose values repeat a lot: about 60% of the rows will have such a value, with roughly every 50-100 rows sharing the same one.
I've been concerned about efficiency lately, and I'm wondering whether it would be better to store this string on each row (it would be between 8-20 characters) or to create another table and link to it by a representative ID instead, i.e. having ~1-50 rows in that table and replacing about 300-5000 strings with ints.
Is this a good approach, or even necessary at all?
Yes, it's a good approach in most circumstances. It's called normalisation, and is mainly done for two reasons:
Removing repeated data
Avoiding repeating entities
I can't tell from your question what the reason would be in your case.
The difference between the two is that the first reuses values that just happen to look the same, while the second connects values that have the same meaning. The practical difference is in what should happen if a value changes, i.e. if the value changes for one record, should the value itself change so that it changes for all other records also using it, or should that record be connected to a new value so that the other records are left unchanged.
If it's for the first reason then you will save space in the database, but it will be more complicated to update records. If it's for the second reason you will not only save space, but you will also reduce the risk of inconsistency, as a value is only stored in one place.
It is a good approach to have a look-up table for the strings. That way you can build more efficient indexes on the integer values. It isn't absolutely necessary, but as a good practice I would do it.
I would recommend using an int with a foreign key to a lookup table (like you describe in your second scenario). This will cause the index to be much smaller than indexing a VARCHAR so the storage required would be smaller. It should perform better, too.
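A minimal sketch of that lookup-table layout (all names invented):

-- Sketch: the repeated 8-20 character string lives once in a lookup table;
-- the big table stores only a small integer foreign key.
CREATE TABLE labels (
  label_id SMALLINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  label    VARCHAR(20) NOT NULL UNIQUE
);

CREATE TABLE items (
  item_id  INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  label_id SMALLINT UNSIGNED NOT NULL,
  INDEX idx_items_label (label_id),
  CONSTRAINT fk_items_label FOREIGN KEY (label_id) REFERENCES labels (label_id)
);

-- Join only when the text itself is needed.
SELECT i.item_id, l.label
FROM items i
JOIN labels l ON l.label_id = i.label_id;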
Avitus is right that it's generally good practice to create lookups.
Think about the JOINs you will use this table in. 1000-20000 rows are not a lot for MySQL to handle. If you don't have any joins, I would not bother with the lookups; just index the column.
BUT as soon as you start joining the table with others of the same size, that's where the performance loss comes in, which you can (most likely) compensate for by introducing lookups.

MySQL more than one INDEX key for the same column

Is it right to have more than one INDEX key for the same column in MySQL database?
For example, the id field is indexed twice with different key names, and phpMyAdmin gives me a warning message:
More than one INDEX key was created
for column id
I would like to know if that is OK, and whether there are any effects or side-effects on my script or the server from using this method.
I use this method for grouping columns for each index.
Indexing a single column twice is useless, slows down inserts and updates because now you have to (uselessly) maintain two indexes, and probably confuses the optimizer (if it doesn't actually break something). On the other hand, it's fine (and often useful) to have a column indexed alone and then also as part of one or more compound keys.
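For instance (table invented), this is the useful pattern: the same column indexed once on its own and once inside a compound key, rather than two identical single-column indexes:

-- Sketch: customer_id indexed alone, and again as part of a compound key.
CREATE TABLE orders (
  order_id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  customer_id INT UNSIGNED NOT NULL,
  status      VARCHAR(20) NOT NULL,
  INDEX idx_customer (customer_id),                 -- serves WHERE customer_id = ?
  INDEX idx_status_customer (status, customer_id)   -- serves WHERE status = ? AND customer_id = ?
);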
Theoretically it can be a good idea to have a reverse index on a column as well as the normal index. I'm not sure if it's supported by MySQL, though.
It depends on what you are searching for. If you expect the user to search for last names, and you store first and last names in the same column, then many searches will be of the form
LIKE '%lastname'
In that case, a normal index will not help much, because it builds the index from the beginning of the string; it would need to look through every record to see whether it contains the search string somewhere. A reverse index would be useful, because it indexes from the back forward. Using both indexes would speed up this particular kind of search.
With wildcards at both the beginning and the end.

How does a hash table work? Is it faster than "SELECT * from .."

Let's say, I have :
Key | Indexes | Key-values
----+---------+------------
001 | 100001 | Alex
002 | 100002 | Micheal
003 | 100003 | Daniel
Let's say we want to search for 001. How does the fast search work using a hash table?
Isn't it the same as using "SELECT * FROM ..." in MySQL? I've read a lot; people say "SELECT *" searches from beginning to end, but a hash table does not. Why, and how?
By using a hash table, are we reducing the number of records we search? How?
Can anyone demonstrate how to insert and retrieve hash table process in mysql query code? e.g.,
SELECT * from table1 where hash_value="bla" ...
Another scenario:
If the indexes are like S0001, S0002, T0001, T0002, etc., in MySQL I could use:
SELECT * FROM table WHERE value LIKE 'S%'
Isn't that the same, and faster?
A simple hash table works by keeping the items on several lists, instead of just one. It uses a very fast and repeatable (i.e. non-random) method to choose which list to keep each item on. So when it is time to find the item again, it repeats that method to discover which list to look in, and then does a normal (slow) linear search in that list.
By dividing the items up into 17 lists, the search becomes 17 times faster, which is a good improvement.
Although of course this is only true if the lists are roughly the same length, so it is important to choose a good method of distributing the items between the lists.
In your example table, the first column is the key, the thing we need to find the item. And let's suppose we will maintain 17 lists. To insert something, we perform an operation on the key called hashing. This just turns the key into a number. It doesn't return a random number, because it must always return the same number for the same key. But at the same time, the numbers must be "spread out" widely.
Then we take the resulting number and use modulus to shrink it down to the size of our list:
Hash(key) % 17
This all happens extremely fast. Our lists are in an array, so:
_lists[Hash(key) % 17].Add(record);
And then later, to find the item using that key:
Record found = _lists[Hash(key) % 17].Find(key);
Note that each list can just be any container type, or a linked list class that you write by hand. When we execute a Find in that list, it works the slow way (examine the key of each record).
Do not worry about what MySQL is doing internally to locate records quickly. The job of a database is to do that sort of thing for you. Just run a SELECT [columns] FROM table WHERE [condition]; query and let the database generate a query plan for you. Note that you don't want to use SELECT *, since if you ever add a column to the table, that will break all your old queries that relied on there being a certain number of columns in a certain order.
If you really want to know what's going on under the hood (it's good to know, but do not implement it yourself: that is the purpose of a database!), you need to know what indexes are and how they work. If a table has no index on the columns involved in the WHERE clause, then, as you say, the database will have to search through every row in the table to find the ones matching your condition. But if there is an index, the database will search the index to find the exact location of the rows you want, and jump directly to them. Indexes are usually implemented as B+-trees, a type of search tree that uses very few comparisons to locate a specific element. Searching a B-tree for a specific key is very fast. MySQL is also capable of using hash indexes, but these tend to be slower for database uses. Hash indexes usually only perform well on long keys (character strings especially), since they reduce the size of the key to a fixed hash size. For data types like integers and real numbers, which have a well-defined ordering and fixed length, the easy searchability of a B-tree usually provides better performance.
You might like to look at the chapters in the MySQL manual and PostgreSQL manual on indexing.
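If you want to see a MySQL hash index explicitly, the MEMORY engine supports one (InnoDB quietly falls back to a B-tree); the names below are invented:

-- Sketch: an in-memory table with an explicit hash index.
CREATE TABLE session_cache (
  session_key CHAR(32) NOT NULL,
  user_name   VARCHAR(50),
  INDEX idx_session_hash (session_key) USING HASH
) ENGINE = MEMORY;

-- An equality lookup like this can use the hash index ...
SELECT * FROM session_cache WHERE session_key = 'abc123';

-- ... but a range or prefix predicate cannot, because hashing destroys
-- ordering; that kind of query needs a B-tree index instead.
SELECT * FROM session_cache WHERE session_key LIKE 'abc%';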
http://en.wikipedia.org/wiki/Hash_table
Hash tables may be used as in-memory data structures. Hash tables may also be adopted for use with persistent data structures; database indices sometimes use disk-based data structures based on hash tables, although balanced trees are more popular.
I guess you could use a hash function to get the ID you want to select from. Like
SELECT * FROM table WHERE value = hash_fn(whatever_input_you_build_your_hash_value_from)
Then you don't need to know the id of the row you want to select, and you can do an exact query, since you know the row will always have the same id because of the input you build the hash value from, and you can always recreate that id through the hash function.
However, this isn't always true, depending on the size of the table and the maximum number of hash values (you often have "X mod hash-table-size" somewhere in your hash). To take care of this, you should have a deterministic strategy to use each time you get two values with the same id. You should check Wikipedia for more info on this strategy; it's called collision handling and should be mentioned in the same article as hash tables.
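In MySQL terms, that idea usually looks like storing the hash in its own indexed column and re-checking the original value to handle collisions (names invented; MD5 is used only as an illustrative hash function):

-- Sketch: a generated hash column kept alongside the original value (MySQL 5.7+).
CREATE TABLE lookups (
  id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  long_val TEXT NOT NULL,
  val_hash CHAR(32) AS (MD5(long_val)) STORED,
  INDEX idx_val_hash (val_hash)
);

-- The index narrows the search to rows with a matching hash;
-- re-comparing long_val handles any hash collisions.
SELECT * FROM lookups
WHERE val_hash = MD5('some long value')
  AND long_val = 'some long value';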
MySQL probably uses hash tables somewhere because of the O(1) lookup that norheim.se mentioned above.
Hash tables are great for locating entries at O(1) cost where the key (that is used for hashing) is already known. They are in widespread use both in collection libraries and in database engines. You should be able to find plenty of information about them on the internet. Why don't you start with Wikipedia or just do a Google search?
I don't know the details of MySQL. If there is a structure in there called a "hash table", it is probably a kind of table that uses hashing for locating the keys. I'm sure someone else will tell you about that. =)
EDIT: (in response to comment)
OK, I'll try to make a grossly simplified explanation: a hash table is a table where the entries are located based on a function of the key. For instance, say that you want to store info about a set of persons. If you store it in a plain unsorted array, you would need to iterate over the elements in sequence in order to find the entry you are looking for. On average, this will need N/2 comparisons.
If, instead, you put all entries at indexes based on the first character of the person's first name (A=0, B=1, C=2, etc.), you will immediately be able to find the correct entry as long as you know the first name. This is the basic idea. You probably realize that some special handling (rehashing, or allowing lists of entries) is required in order to support multiple entries having the same first letter. If you have a well-dimensioned hash table, you should be able to get straight to the item you are searching for. This means approximately one comparison, with the disclaimer of the special handling I just mentioned.