MySQL ORDER BY FIELD equivalent in RediSearch

How do I accomplish in RediSearch the custom sort-by-field feature available in MySQL?
select * from product ORDER BY FIELD(id,3,2,1,4)
For business reasons, I need to enforce a custom order.

There is no equivalent of the FIELD function in RediSearch.
With FT.SEARCH / SORTBY you can sort results using a field.
In your hash, you can create a new field (indexed as NUMERIC SORTABLE) holding the value that will be used to sort the results. Of course, that only works if you don't need to specify a different order on each query.
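A minimal sketch of that first option, assuming RediSearch 2.x indexing hashes (the idx:product index, the product: key prefix and the custom_rank field are illustrative, not taken from your data model):

FT.CREATE idx:product ON HASH PREFIX 1 product: SCHEMA name TEXT custom_rank NUMERIC SORTABLE
HSET product:3 name "Product 3" custom_rank 1
HSET product:2 name "Product 2" custom_rank 2
HSET product:1 name "Product 1" custom_rank 3
HSET product:4 name "Product 4" custom_rank 4
FT.SEARCH idx:product "*" SORTBY custom_rank ASC

This returns the products in the order 3, 2, 1, 4, mirroring ORDER BY FIELD(id,3,2,1,4).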
Second option: this could be handled by using FT.AGGREGATE with an appropriate function. You can have a look at the existing functions and see if one could be used for that. If no suitable function exists, you can file a feature request.
A third option is to implement your own scoring function using the extension API (but that may be over-engineering...).

Related

Is There a Way to Combine Similar Rows in SQL Based on a Value That Isn't Exactly the Same?

I have an SQL table that brand names can be added to; when a new brand name is added, it either increments the existing brand's active count or creates a new brand row.
The problem is that if someone adds a brand with different spelling (like adding Toyota but spelled toyota), it creates a new brand with a new active count and a new brand id. Now that the table has a few instances of this, is there a way I can sort through it with SQL and merge the similar brands? I know this would end up deleting a few rows, and I'm not sure SQL can do all of that at once.
I'm still fairly new to SQL, so any advice on this is appreciated. I heard that using Python pandas would be easier, so I am currently looking into that as another way to do this.
For simple case differences, you can use a function like LOWER() to convert all the names to lower case and then group the results by brand_name.
However, your question says "similar" records, and "similar" is not well defined. SQL expects you to define clearly what you need.
If you are looking to fix one or a few characters, you can use the LIKE operator with the percent (%) and/or underscore (_) wildcards. You can cover the permutations of errors you would like to catch by placing % and _ at various positions. Alternatively, you can explore the SOUNDEX() function (or SOUNDS LIKE in MySQL) and see if you can merge brand names based on their SOUNDEX value.
If the data is not huge, I suggest creating another table or a temporary table to perform such an operation. That way, you can always refer back to the original data.
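A rough sketch of the case-only cleanup into a separate table (the brands table and its brand_id / brand_name / active_count columns are assumptions here, not taken from your schema):

CREATE TABLE brands_cleaned AS
SELECT MIN(brand_id)     AS brand_id,      -- keep the lowest id as the surviving row
       LOWER(brand_name) AS brand_name,    -- 'Toyota' and 'toyota' collapse into one group
       SUM(active_count) AS active_count   -- combine the counts of the duplicates
FROM brands
GROUP BY LOWER(brand_name);

Once you are happy with brands_cleaned, you can swap it in for the original table, or use the same GROUP BY logic to drive UPDATE/DELETE statements against the original.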

Sorting ResultSet obtained from SpringJpa ExampleMatcher from most matching to least to be used in an Advanced Search

I am in the process of writing an advanced search function using Spring boot and MySQL for a Book Management system.
My Book object contains various information such as material id, book name, author, publisher, description, and product type (e.g. a story book, a reference material, etc.).
I managed to write an ExampleMatcher as follows:
ExampleMatcher exampleMatcher = ExampleMatcher.matchingAny()
        .withIgnoreCase()
        .withIgnorePaths("material_id")
        .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING)
        .withStringMatcher(ExampleMatcher.StringMatcher.STARTING) // overrides the CONTAINING matcher set on the previous line
        .withIgnoreNullValues();
Example<Book> example = Example.of(book, exampleMatcher);
List<Book> all = bookRepository.findAll(example);
But when I get the result set, the results are sorted according to the material id. Records whose attributes match almost all of the fields are in the list too, but they are also ordered by id.
Is there a way for me to sort the results so that the best-matching records appear in the first few positions of the list, followed by the rest? That is, to sort from most matching to least matching?
As far as I understand, JpaSort allows ascending and descending sorting, and we can also apply specific sorting to specific attributes.
But in the advanced search, the search is performed dynamically according to the attributes the user fills in. Therefore, I cannot hard-code which fields of the table to sort on, right? For example, if I program the book name field to be sorted in ascending order but the user did not specify any value for that particular field, then sorting on that field is useless, right?
That is why I want to know whether there is any way to dynamically sort the results from most matching to least matching. Any way of achieving this is much appreciated. Thank you.
After two whole days of reading more than 50-70 articles and posts on the Internet, I was able to implement the advanced search in a more optimized manner.
I was not able to find out how to sort the results from most matching to least matching, as I originally asked in the question. So if someone can still answer my original question, I am happy to accept it.
The workaround I used is as follows.
Starting from the idea of dynamically generating the SQL query, I found a lead and referred to articles on that.
In Dynamic Query in Spring Boot, the author uses the Java Reflection API to go through the non-null fields of the entity class manually and generate the SQL query. But when you are using Spring Boot and all the configuration is done for you, I don't think it is really effective to pull in the Hibernate dependency explicitly just to manage sessions and run your SQL query. The HibernateJpaSessionFactoryBean used in that article is now deprecated. I referred to various articles and the Spring Data JPA documentation but could not resolve the error I kept getting, which said that Spring Boot cannot find the entityManagerFactory bean.
Therefore, I searched for ways to generate queries dynamically using Spring Data JPA itself, without using Hibernate directly and dealing with the hassle of session management. Dynamic Queries with Spring Data JPA Specifications and Using Spring Data JPA Specification have enough information on how to implement a JPA Specification in order to generate queries dynamically in Spring Boot.
So in the end, I used information from all three articles cited here to come up with my implementation. I used Java reflection to create a Specification based on the class type of the non-null fields in my entity object.
The new part I added myself was grouping all the separate Specifications into a List and writing a loop to dynamically generate the final Specification used to retrieve the data. It looks like this:
List<BookSpecification> bookSpecifications = createDynamicQuery(book);
if (bookSpecifications.size() != 0) {
    Specification<Book> dynamicQuery = Specification.where(bookSpecifications.get(0));
    for (int i = 1; i < bookSpecifications.size(); i++) {
        dynamicQuery = dynamicQuery.or(bookSpecifications.get(i));
    }
    List<Book> all = bookRepository.findAll(dynamicQuery);
    all.forEach(System.out::println);
    return all;
}
The createDynamicQuery() method above, which I wrote in my own way, is inspired by the information in the cited articles.
This way, I was able to obtain much more accurate advanced search results than with ExampleMatcher for the same search criteria. And since I am searching by specific field names, the search results were also ordered in a more meaningful way.
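For reference, a rough sketch of how such a createDynamicQuery() method could look. This is not the author's exact code; it assumes a BookSpecification class that wraps a field name and value and implements Specification<Book>, and that the entity's id field is called materialId:

// assumed imports: java.lang.reflect.Field, java.util.ArrayList, java.util.List
private List<BookSpecification> createDynamicQuery(Book book) throws IllegalAccessException {
    List<BookSpecification> specifications = new ArrayList<>();
    for (Field field : Book.class.getDeclaredFields()) {
        field.setAccessible(true);
        Object value = field.get(book);
        // build one Specification per non-null field, skipping the id field
        if (value != null && !"materialId".equals(field.getName())) {
            specifications.add(new BookSpecification(field.getName(), value));
        }
    }
    return specifications;
}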

Neutral value in SELECT mySql query

I have a simple MySQL query in my PHP code:
query(sprintf("SELECT * FROM customers WHERE city='%s' AND state='%s' AND age='%s' (...)", ...))
This query is used by the search feature in my application. I want the user to be able to search for, say, customers from New York, but for now they must also specify 'state' and 'age'.
The user can filter the search by more than one criterion but doesn't have to specify all of them.
Is there any way to bypass the values that are not used in the current search?
I believe that you are looking for the CASE statement.
https://dev.mysql.com/doc/refman/5.7/en/case.html
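A rough sketch of how that could look with the query above, assuming an empty string means "not specified" (the '%s' placeholders are the same values passed to sprintf):

SELECT *
FROM customers
WHERE city  = CASE WHEN '%s' = '' THEN city  ELSE '%s' END
  AND state = CASE WHEN '%s' = '' THEN state ELSE '%s' END
  AND age   = CASE WHEN '%s' = '' THEN age   ELSE '%s' END

When a filter is left empty, its condition degenerates to city = city (true for every non-NULL value), so that criterion is effectively skipped. Note that each value now appears twice, so it must be passed to sprintf twice (or use positional arguments like %1$s).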

Tridion 2009 embedded metadata storage format in the broker

I'm fairly new to Tridion and I have to implement functionality that will allow a content editor to create a component and assign multiple date ranges (available dates) to it. These will need to be queried from the broker to provide a search functionality.
Originally, this only required a single start and end date, so these were implemented as individual metadata fields.
I am proposing to use an embedded schema within the schema's 'available dates' metadata field to allow multiple start and end dates to be assigned.
However, as the field now allows multiple values, the data is stored in the broker as comma-separated values in the 'KEY_STRING_VALUE' column, rather than as date values in the 'KEY_DATE_VALUE' column as it was when only a single start and end value was allowed.
eg.
KEY_NAME | KEY_STRING_VALUE
end_date | 2012-04-30T13:41:00, 2012-06-30T13:41:00
start_date | 2012-04-21T13:41:00, 2012-06-01T13:41:00
This is now causing issues with my broker querying as I can no longer use simple query logic to retrieve the items I require for the search based on the dates.
Before I start to write C# logic to parse these comma separated dates and search based on those, I was wondering if anyone had had similar requirements/experiences in the past and had implemented this in a different way to reduce the amount of code parsing required and to use the broker querying to complete the search.
I'm developing this on Tridion 2009 but using the 5.3 Broker (for legacy reasons) so the query currently looks like this (for the single start/end dates):
query.SetCustomMetaQuery("(KEY_NAME='end_date' AND KEY_DATE_VALUE>'" + startDateStr + "') AND (ITEM_ID IN (SELECT ITEM_ID FROM CUSTOM_META WHERE KEY_NAME='start_date' AND KEY_DATE_VALUE<'" + endDateStr + "'))");
Any help is greatly appreciated.
Just wanted to come back and give some details on how I finally approached this should anyone else face the same scenario.
I proposed the set number of fields to the client (as suggested by Miguel) but the client wasn't happy with that level of restriction.
Therefore, I ended up implementing the embeddable schema containing the start and end dates which gave most flexibility. However, limitations in the Broker API meant that I had to access the Broker DB directly - not ideal, but the client has agreed to the approach to get the functionality required. Obviously this would need to be revisited should any upgrades be made in the future.
All the processing of dates and the available periods were done in C# which means the performance of the solution is actually pretty good.
One thing I did discover that caused some issues: if you have multiple values for the field using the embedded schema (i.e. in this case, multiple start and end dates), then the metadata is stored in the KEY_STRING_VALUE column of the CUSTOM_META table. However, if you only have a single value in the field (i.e. one start and end date), then these are stored as dates in the KEY_DATE_VALUE column, just as if you had used single fields rather than an embeddable schema. It seems a sensible approach for Tridion to take, but it makes the queries and the parsing code slightly more complicated!
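For anyone curious, a rough sketch of the kind of C# parsing involved (this is not the original code; the overlap check and the searchStart/searchEnd window are assumptions):

// turn the comma-separated broker values into date ranges and test them against a search window
static bool IsAvailable(string startRaw, string endRaw, DateTime searchStart, DateTime searchEnd)
{
    DateTime[] starts = startRaw.Split(',').Select(s => DateTime.Parse(s.Trim())).ToArray();
    DateTime[] ends = endRaw.Split(',').Select(s => DateTime.Parse(s.Trim())).ToArray();

    for (int i = 0; i < Math.Min(starts.Length, ends.Length); i++)
    {
        // the component matches if any of its available periods overlaps the search window
        if (starts[i] <= searchEnd && ends[i] >= searchStart)
        {
            return true;
        }
    }
    return false;
}

(Requires using System; and using System.Linq;.)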
This is a complex scenario, as you would have to go through all the DCPs and parse those strings to determine whether they match the search criteria.
There is a way you could convert that comma-separated metadata into single values in the broker, but the field names would need to be different: Range1, Range2, ..., RangeN.
You can do that with a deployer extension, where you change the XML structure of the package and convert each of those strings into separate values (1, 2, ..., n).
Writing this extension can take some time if you are not familiar with deployer extensions, and it doesn't solve your scenario 100%.
The problem with this is that you still have to apply several conditions to retrieve those values, and there is always a limit you have to set (versus the user, who can add as many values as they want).
Sample:
query.SetCustomMetaQuery("(KEY_NAME='end_date1' ...
query.SetCustomMetaQuery("(KEY_NAME='end_date2' ...
query.SetCustomMetaQuery("(KEY_NAME='end_date3' ...
query.SetCustomMetaQuery("(KEY_NAME='end_date4' ...
Probably the fastest and easiest way to achieve this is, instead of using a multi-value field, to use separate fields. I understand that is not the most generic scenario and there are business-requirement implications, but it can simplify the development.
My previous comments are in the context of using only the Broker API, but you can take advantage of a search engine if one is part of your architecture.
You can index the broker database and massage the data.
Using the search engine API, you can extract the IDs of the Components/Component Templates and then use the Broker API to retrieve the proper information.

Reverse hash lookup query

I have a web service, and one of the parameters our clients need to use is a custom key. This key is a SHA-1 hash,
e.g.:
bce700635afccfd8690836f37e4b4e9cf46d9c08
Then, when the client calls our web service, I have to check a few things:
is the client active
can the client submit via the web service and the service
Now my problem is this:
I have a query:
$sql = "SELECT permission, is_active FROM clients WHERE sha1(concat(id,key)) = '" . mysql_real_escape_string($key) . "'";
Am I doing the right thing, or is there a better way?
Thanks.
This approach is expensive, since, every time you run this query, MySQL will have to examine every single record in clients and compute the SHA-1 hash of its id and key. (I'm assuming here that clients has more than a few rows, or at least, that you'd like to use an approach that supports the case where clients has more than a few rows.)
Why don't you add a new field called (say) id_key_sha1? You can use a trigger to keep the field populated, and add an index on it. This approach should perform much better.
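A sketch of that approach, assuming a clients table with id and `key` columns (the column, index and trigger names are illustrative):

ALTER TABLE clients ADD COLUMN id_key_sha1 CHAR(40);
UPDATE clients SET id_key_sha1 = SHA1(CONCAT(id, `key`));
CREATE INDEX idx_clients_id_key_sha1 ON clients (id_key_sha1);

-- keep the column populated on insert; this assumes id is assigned by the application
-- (with AUTO_INCREMENT the id is not yet known inside a BEFORE INSERT trigger)
CREATE TRIGGER clients_id_key_sha1_bi BEFORE INSERT ON clients
FOR EACH ROW SET NEW.id_key_sha1 = SHA1(CONCAT(NEW.id, NEW.`key`));

The lookup then becomes a plain indexed equality check:

SELECT permission, is_active FROM clients WHERE id_key_sha1 = '<submitted key>';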
Edited to add: You mention that the client, in addition to passing in this SHA-1 hash, also has to submit a username and password? I don't know what your table structure looks like, but I'm guessing that it would make more sense to find the client record based on the username first, and then comparing the SHA-1 hash for that specific record, rather than trying to find the record by the SHA-1 hash.
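If so, a sketch of that lookup might be (the username column is an assumption, since the table structure isn't shown):

SELECT permission, is_active, SHA1(CONCAT(id, `key`)) AS expected_key
FROM clients
WHERE username = '<submitted username>';
-- then compare expected_key against the submitted hash in application code;
-- the SHA-1 is now computed for one row instead of the whole table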
You should not apply a function to the column on the left-hand side of a comparison when filtering in MySQL;
that makes it impossible for MySQL to use an index for the comparison.
An example that allows an index to be used:
where key = SHA1(CONCAT(id, :key))
-- where :key is the user-submitted API key
-- in this case MySQL is able to fetch the matching rows via an index