Reverse string with leading wildcard scan in Postgres vs MySQL

SQL Server: Index columns used in like?
I've tried the query method from the link above with Postgres (only a 0.3ms improvement), but it seems to only really work with MySQL (10x faster).
MySQL
User Load (0.4ms) SELECT * FROM users WHERE reverse_name LIKE REVERSE('%Anderson PhD')
User Load (5.8ms) SELECT * FROM users WHERE name LIKE '%Anderson PhD'
Postgres
User Load (2.1ms) SELECT * FROM users WHERE reverse_name LIKE REVERSE('%Scot Monahan')
User Load (2.5ms) SELECT * FROM users WHERE name LIKE '%Scot Monahan'
I did some googling but couldn't quite understand it, as I'm quite new to databases. Could anyone explain why this is happening?

To support a prefix match in Postgres on character-type columns, you either need an index with a suitable operator class: text_pattern_ops for text, varchar_pattern_ops for varchar, etc. (unless you work with the "C" locale).
Or you use a trigram index, which supports any pattern. Then you don't need the "reverse" trick at all.
See:
PostgreSQL LIKE query performance variations
You'll see a massive performance improvement for tables of non-trivial size as soon as an index can be used for the query. Have a look at the query plan with EXPLAIN.
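A minimal sketch of both options, assuming the users table from the question (the index names are made up; reverse_name is a column holding the pre-reversed name):

-- Option 1: text_pattern_ops on the reversed column turns the
-- leading-wildcard search into an indexable prefix match
CREATE INDEX users_reverse_name_idx ON users (reverse_name text_pattern_ops);
SELECT * FROM users WHERE reverse_name LIKE REVERSE('%Anderson PhD');

-- Option 2: a trigram index supports any LIKE pattern directly,
-- no reversed column needed
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX users_name_trgm_idx ON users USING gin (name gin_trgm_ops);
SELECT * FROM users WHERE name LIKE '%Anderson PhD';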

Related

MySQL/PostgreSQL: querying without the public schema prefix and without quoting the table name

My client requested a change from MySQL to PostgreSQL. The database migration went well, and my code uses DevArt dotConnect Universal. Things are looking good except for the actual SQL statements.
In my C# code I used,
"SELECT * FROM users WHERE user_name LIKE '%abc%';"
and it worked with MySQL, but when connected to PostgreSQL I have to change the SQL statement to
"SELECT * FROM public.\"users\" WHERE user_name LIKE '%abc%';"
and the searched text is case-sensitive!
How do I,
(a) make the searched text case insensitive?
(b) avoid needing to add [public.] in front of the table name and to double-quote the table name?
I have seen someone posted something here, Accessing a table without specifying the schema name
but I have 120 tables and that would be time-consuming. Is there a faster approach to solve the two issues described above?
EDIT:
Oh, I realized these two statements yield the same result.
SELECT * FROM Public.user;
and
SELECT * FROM \"user\";
(a) make the searched text case insensitive?
The ILIKE keyword in PostgreSQL provides case-insensitive matching:
SELECT * FROM users WHERE user_name ILIKE '%abc%';
or
SELECT * FROM users WHERE LOWER(user_name) LIKE '%abc%';
Note: if your table was created with the mixed-case name Users, you must double-quote it: SELECT * FROM "Users"
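For (b), a sketch of the approach from the linked question: rather than editing anything per-table, you can usually set the schema search path once, so unqualified table names resolve without the public. prefix (the role name below is hypothetical; double quotes remain necessary only for tables created with quoted, mixed-case names):

-- for the current session only
SET search_path TO public;
-- or persistently, for the connecting role
ALTER ROLE app_user SET search_path TO public;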

SQL 'LIKE BINARY' any slower than plain 'LIKE'?

I'm using a Django application that does some 'startswith' ORM operations, comparing longtext columns against a unicode string. This results in a LIKE BINARY comparison with a u'mystring' unicode string. Is LIKE BINARY likely to be any slower than a plain LIKE?
I know the general answer is benchmarking, but I would like to get a general idea for databases in general rather than just my application as I'd never seen a LIKE BINARY query before.
I happen to be using MySQL but I'm interested in the answer for SQL databases in general.
If performance becomes a problem, it might be a good idea to keep a copy of, say, the first 255 characters of the longtext in a separate column, add an index on that, and run the startswith queries against it (a sketch follows below).
BTW, this page says: "if you need to do case-sensitive matching, declare your column as BINARY; don't use LIKE BINARY in your queries to cast a non-binary column. If you do, MySQL won't use any indexes on that column." It's an old tip, but I think it's still valid.
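A sketch of that prefix-copy idea, with hypothetical table and column names (an articles table with a longtext body); note the copy has to be kept in sync on writes, e.g. by the application or a trigger:

-- keep an indexed copy of the first 255 characters of the longtext
ALTER TABLE articles ADD COLUMN body_prefix VARCHAR(255);
UPDATE articles SET body_prefix = LEFT(body, 255);
CREATE INDEX articles_body_prefix_idx ON articles (body_prefix);

-- 'startswith' queries then hit the indexed copy, not the longtext
SELECT * FROM articles WHERE body_prefix LIKE 'search prefix%';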
For the next person who runs across this - in our relatively small database, the query:
SELECT * FROM table_name WHERE field LIKE 'some-field-search-value';
returns 1 row in set (0.00 sec).
Compared to:
SELECT * FROM table_name WHERE field LIKE BINARY 'some-field-search-value';
which returns 1 row in set (0.32 sec).
Long story short, at least for our database (MySQL 5.5 / InnoDB) there is a very significant difference in performance between the two lookups.
Apparently, though, this is a bug in MySQL 5.5: http://bugs.mysql.com/bug.php?id=63563. In my testing against the same database on MySQL 5.1, the LIKE BINARY query still uses the index (while on 5.5 it does a full table scan).
A trick: if you don't want to change the type of your column to binary, try writing your WHERE clause like this:
WHERE field = 'yourstring' AND field LIKE BINARY 'yourstring'
instead of:
WHERE field LIKE BINARY 'yourstring'
MySQL will check the first, index-friendly condition very quickly, and evaluate the second one only for rows where the first is true.
It worked well on my project for this test of equality, and I think you can adapt this to the "starts with" test.
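Adapting the trick to a "starts with" test might look like this (a sketch; the column name and pattern are placeholders):

-- the plain LIKE can use an index for the prefix scan; LIKE BINARY
-- then re-checks case-sensitively only on the rows that matched
SELECT * FROM table_name
WHERE field LIKE 'abc%'
  AND field LIKE BINARY 'abc%';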

Some questions related to SphinxSE and RT indexes

I consider using Sphinx search in one of my projects so I have a few questions related to it.
When using SphinxSE and RT index, every UPDATE or INSERT in the SphinxSE table will update the index, right? No need to call indexer or anything?
Can I search on both the tags (user-entered keywords for a document) and the content, and give more relevance to tag matches? If that's possible, how do I implement the tag search? (Right now I keep tags in separate tables, like an inverted index.)
For the filter attributes, is it better to keep duplicates of them in the SphinxSE table, or to filter with MySQL on the regular documents table I have?
Thanks in advance!
OK, I finally understand how things work with Sphinx.
You cannot INSERT into or UPDATE the SphinxSE table directly. Instead, you use INSERT/REPLACE while connected via SphinxQL (directly to the Sphinx daemon).
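For example (a sketch, assuming an RT index named sphinx_docs with these fields; run it against the daemon's SphinxQL port, typically 9306):

REPLACE INTO sphinx_docs (id, title, tags, content)
VALUES (42, 'Some title', 'tag1 tag2', 'full document text here');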
With Sphinx 1.10 you can have multiple full-text searchable fields. I added title, tags and content. The query that gives more weight to the title, then the tags, then the content looks like this:
SELECT SQL_NO_CACHE * FROM sphinx_docs WHERE query = 'a lot of keywords;weights=3,2,1;';
I use SQL_NO_CACHE to tell MySQL not to cache the result, because otherwise on subsequent calls I can't get the number of rows returned from Sphinx (SHOW STATUS LIKE 'sphinx_total_found').
It's better to let Sphinx do all the sorting and filtering, and use MySQL only to JOIN in the table you need more info from.
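A sketch of that pattern, with a hypothetical documents table holding the full records (SphinxSE exposes the matched document ids through the id column):

SELECT d.*
FROM sphinx_docs s
JOIN documents d ON d.id = s.id
WHERE s.query = 'a lot of keywords;weights=3,2,1;limit=20;';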
In addition, I have to say that I tried many times to add the SphinxSE plugin to MySQL without success (an endless make that I waited on for hours), so I switched to MariaDB 5.2.4, which includes the SphinxSE storage engine.

Querying multiple MySQL tables

What is the best way to approach something like:
select * from (show tables like "T_DATA___") // Invalid
There are over 600 tables named T_DATAxy, where x and y are letters.
Something went seriously wrong with this design. Accessing 600 tables at once means touching as many as 1800 files on disk. You should have partitioned this data instead.
As far as the question goes, I'm afraid you will need a stored procedure or an external application to build a multiple-UNION query statement. Still, I seem to remember there's a limit of 32 tables merged in a UNION.
You could get the list of tables whose data you want (SHOW TABLES LIKE __) and then use mysqldump, passing in that list.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
If you are determined to do it with SQL queries, you could generate the appropriate SQL using macros and execute it all at once: get the list of tables, replace each newline with "; (newline) select * from ", and execute all the queries. (The Emacs mysql mode makes this super easy.)
As the other commenter says, you won't be able to do it in a single query due to the limit on the number of tables.
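One way to generate such a statement from inside MySQL itself (a sketch; it assumes the tables share the same structure, otherwise the UNION will fail):

-- raise the limit so 600 generated SELECTs don't get truncated
SET SESSION group_concat_max_len = 1000000;

SELECT GROUP_CONCAT(CONCAT('SELECT * FROM `', table_name, '`')
                    SEPARATOR ' UNION ALL ')
INTO @sql
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name LIKE 'T\_DATA__';

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;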

Saving commands for later re-use in MySQL?

What would be the equivalent in MySQL of:
Saving a command for later reuse.
e.g.: alias command1='select count(*) from sometable;'
where I then simply type command1 to get the count for sometable.
Saving just a string, or rather part of a command.
e.g.: select * from sometable where $complex_where_logic$ order by attr1 desc;
where $complex_where_logic$ is something I wish to save and not have to keep writing out.
Another approach is to create a view with your $complex_where_logic$ and query the view instead of the tables:
CREATE VIEW my_view AS SELECT * FROM sometable WHERE $complex_where_logic$
SELECT my_column FROM my_view ORDER BY some_column
Whenever you query a view, you always get up-to-date data. Internally, MySQL runs the SELECT given in the CREATE VIEW statement and queries its result to obtain the result of your current SELECT. A view therefore does not improve performance compared to a single query. There are two main advantages to using views:
you get simpler SELECT statements, since you do not have to type complex WHERE or JOIN syntax again and again
you can use them to control user privileges, e.g. give a user access to a view but not to the underlying tables; this is not useful in your example, but you can, for instance, think of views containing aggregate data only (a sketch follows below)
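A brief sketch of that second point, with hypothetical names (the underlying users table stays private; the reporting account sees only the aggregate):

CREATE VIEW user_stats AS
  SELECT department, COUNT(*) AS user_count FROM users GROUP BY department;
GRANT SELECT ON mydb.user_stats TO 'report_user'@'%';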
This "template" feature could be part of client tools. Basically I use Toad. It has record macros feature. I think it is possible to do.
I take it the answer you are looking for isn't 'stored procedures'...?
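In case it is: a minimal sketch of the stored-procedure version of the alias example from the question:

DELIMITER //
CREATE PROCEDURE command1()
BEGIN
  SELECT COUNT(*) FROM sometable;
END //
DELIMITER ;

CALL command1();  -- replays the saved command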
I found that the best solution for this is just any rich GUI for SQL queries (TOAD, MySQL Query Browser, etc.). They offer the ability to save commands and browse them, and of course much more.