I have tables with location relationships (country, region, city, and so on).
I have cascading drop-down boxes linked to each other, i.e. when you select a country, the regions under that country are loaded into the regions drop-down. Now I want to change the drop-downs to Ajax-based auto-complete text boxes.
My question is: how many text boxes should I have? Should I have one text box for everything, like "search by location" (which would require changing the table design), or one text box for each level, like country, region, city, etc.?
With separate text boxes, users may not know whether some places are a region or a city; for example, Auckland, New Zealand is a region, not a city.
They may search for regions in the city text box and for cities in the region text box. With the current drop-downs they can at least see their region listed, so "Auckland" will definitely be there under regions.
With individual text boxes, users may not find what they are looking for.
I need some suggestions on redesigning this, from both the database and the interface point of view.
Your schema is fine. But it sounds like what the user wants at a minimum is:
1. A Google-style free-form text field into which they can just type words, but...
2. Which brings up a subset of matching results in a combo-style fashion.
So here's the deal: Search-like capability isn't what relational databases are designed for, and that's basically the problem you're running into. That said, MySQL, while not my domain of expertise, does seem to have reasonable full-text search support (MySQL Full Text Search).
Perhaps you could have FULLTEXT indices on each of the description fields and issue five different queries. Or if you're willing to go with a dirty solution, have a separate BUSINESS_SEARCH(business_id, concat_description) where concat_description is just all of the related "description" fields munged together; though you'll need to account for description updates.
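As a rough sketch of that second option, assuming a MySQL version whose storage engine supports FULLTEXT indexes (the table and column names are just the ones invented above):

```sql
-- Hypothetical denormalized search table; names are illustrative only.
CREATE TABLE business_search (
  business_id        INT NOT NULL PRIMARY KEY,
  concat_description TEXT NOT NULL,
  FULLTEXT KEY ft_desc (concat_description)
);

-- Autocomplete-style lookup against the munged descriptions.
SELECT business_id
FROM business_search
WHERE MATCH(concat_description) AGAINST ('auckland' IN NATURAL LANGUAGE MODE)
LIMIT 10;
```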
But I have no idea what the performance implications are with FULLTEXT. If it's non-trivial, I'd offload these queries to a separate copy of the server.
My personal feeling--completely without evidence to back it up--is that you'll run into performance problems down the road. Have you considered an add-on? A quick Google search turns up a "Google-like Search Engine in PHP/MySQL" project. The big downside is that you're introducing all of the pitfalls of yet another unproven/unfamiliar technology.
For either approach, I think you have some research cut out for you.
Good luck!
I would like to know how to decide between different database-design solutions.
I guess the best way to describe my question is to give an example.
Let's say we want to create a database for cars. Every car has a number of properties we want to save.
There are a lot of properties that every car has, like:
Producer, Model, Color, Age, ...
But there are also properties that are only found in a subcategory or in a small group of cars, like:
Draw Bar, Roof Rack, Cargo Area, 4-Wheel Drive, ...
Some properties may even be relevant for less than 5% of the cars. There are different solutions for this.
- The first is to dump everything into one table. Of course normalized! (not mentioned below)
- The second solution would be to create a table with the properties that every car has, then add a CartoDrawbar ... table to establish an m:m connection between the rare properties and the cars (a rough sketch follows below this list).
- The third possibility I can imagine would be to create tables for car groups like SUV, Notchback, Truck, Compact, Pickup ... to group cars with similar properties (my rare properties above were not the best choice to illustrate this).
- The last idea is to create a table with all shared properties and add a CHAR or TEXT column to fill in everything special.
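To make the second option concrete, here is a rough sketch of what I mean (table and column names are just examples):

```sql
-- The properties every car has:
CREATE TABLE car (
  car_id   INT PRIMARY KEY,
  producer VARCHAR(100),
  model    VARCHAR(100),
  color    VARCHAR(50),
  age      INT
);

-- One small table per rare property (here the draw bar), linked m:m to cars:
CREATE TABLE drawbar (
  drawbar_id  INT PRIMARY KEY,
  description VARCHAR(200)
);

CREATE TABLE car_to_drawbar (
  car_id     INT NOT NULL,
  drawbar_id INT NOT NULL,
  PRIMARY KEY (car_id, drawbar_id),
  FOREIGN KEY (car_id) REFERENCES car(car_id),
  FOREIGN KEY (drawbar_id) REFERENCES drawbar(drawbar_id)
);
```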
But which is the best or most fitting solution? Did I forget an important one? Are there differences in speed, file size, or anything else to consider? Or are there thresholds for when to choose one solution over another? I have a personal favorite, but I don't want to influence you, and I don't have enough knowledge about relational databases and their management software to judge the speed or file size of a table.
There is no "best" solution. In fact, most of your "rare" columns look more like flags -- a car has 4-wheel drive or it does not, a car has a roof-rack or it does not.
My suggestion is to put these into one table, with separate columns, of the appropriate type.
Then, if you really do have optional features, say the number of gears in a manual transmission, you can think about how to represent them. Nowadays, most databases support JSON, and that would be a natural choice for such elements.
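A minimal sketch of that, with invented column names (the JSON column assumes MySQL 5.7 or newer):

```sql
CREATE TABLE car (
  car_id         INT PRIMARY KEY,
  producer       VARCHAR(100),
  model          VARCHAR(100),
  color          VARCHAR(50),
  age            INT,
  -- "rare" properties kept as plain flags:
  has_draw_bar   BOOLEAN NOT NULL DEFAULT FALSE,
  has_roof_rack  BOOLEAN NOT NULL DEFAULT FALSE,
  has_4wd        BOOLEAN NOT NULL DEFAULT FALSE,
  -- open-ended optional features:
  extra_features JSON NULL
);

-- Example row with an optional feature stored in the JSON column:
INSERT INTO car (car_id, producer, model, color, age, has_4wd, extra_features)
VALUES (1, 'Example Motors', 'Roadster', 'blue', 3, TRUE, '{"manual_gears": 6}');
```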
I'm building a real estate website and I'm a little bit confused about how to filter apartment search results. The user can filter the search by clicking check boxes and by entering keywords into a text box.
My problem is that I have many filtering options (by city and/or location within the city and/or apartment size and/or number of bedrooms and/or ...). So my problem is how to write a MySQL stored procedure that is dynamic enough to accept different inputs and give back filtered results with pagination. For example, someone can choose the number of bedrooms to be 2 or 3 and require a certain city, and simply not care about the other conditions. The user might also add a keyword to search for along with those conditions. I'm using Spring MVC and MySQL, but I guess the help I need is more about the concept than about the languages and relational DB I'm using.
At first, I thought of passing key-value pairs, but I guess this would complicate the procedure a lot and would depend on enum tables. So can you please suggest a proper way to implement this kind of search, based on best practices and your expertise?
Many thanks.
Faceted search is really an analytic problem, meaning you need an analytic schema to do it properly.
This means a dimensional design. It also means OLAP-style querying.
So, you should read up on those first.
Basically, you want one big table (where each row is a house for sale), with all applicable columns. This doesn't have to be a real table, it could be a view or materialized view.
I'd pass on using sprocs for this. I don't see how it would help.
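As a rough illustration, with invented table and column names, the application would then query that one wide listing table, adding only the filters the user actually chose, plus LIMIT/OFFSET for pagination:

```sql
-- One wide row per listing (could be a view over the normalized tables).
SELECT listing_id, city, district, size_sqm, bedrooms, price
FROM   apartment_listing
WHERE  city = 'SomeCity'              -- included only if the user picked a city
  AND  bedrooms IN (2, 3)             -- included only if bedroom counts were chosen
  AND  description LIKE '%balcony%'   -- included only if a keyword was entered
ORDER  BY price
LIMIT  20 OFFSET 40;                  -- page 3 with 20 results per page
```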
I'm currently working on blog software which should support content in multiple languages.
I'm thinking of a way to design my database (MySQL). My first thought was the following:
Every entry is stored in a table (let's call it entries). This table holds information which doesn't change (like the unique ID, whether it's published or not, and the post type).
Another table (let's call it content) contains the language-specific strings (like the content, the headline, the date, and the author).
They are then joined by the unique entry ID.
The idea of this is that one article can be translated into multiple other languages, but it doesn't need to be. If there is no translation in the native language of the user (determined by his IP or something), he sees the standard language (which would be English).
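Roughly, this is what I have in mind (the names are just placeholders):

```sql
CREATE TABLE entries (
  entry_id  INT PRIMARY KEY,
  post_type VARCHAR(30),
  published BOOLEAN NOT NULL DEFAULT FALSE
);

CREATE TABLE content (
  entry_id   INT NOT NULL,
  language   CHAR(2) NOT NULL,           -- e.g. 'en', 'de'
  headline   VARCHAR(255),
  body       TEXT,
  author     VARCHAR(100),
  created_at DATETIME,
  PRIMARY KEY (entry_id, language),
  FOREIGN KEY (entry_id) REFERENCES entries(entry_id)
);
```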
For me this sounds like a simple multilingual database and I'm sure there is a design pattern for this. Sadly, I didn't find any.
If there is no pattern, how would you go about realizing this? Any input is greatly appreciated.
Your approach is what I've seen in most applications with this kind of capability. The only piece that varies is that some places put the "default" values into the base table (Entry), while others treat the default language as just another Content row.
That design will also give you the ability to search (or restrict the search to) all languages easily. From a DB design perspective, it's IMHO the best design you can use.
With small amounts of text and a simple application this would work. At larger scale, you might be bitten by the extra joins needed, especially when your database is larger than RAM. Presenting things in the right order (sorting) might also need solving.
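To illustrate where those extra joins come from, a common way to fall back to the default language (reusing the placeholder tables sketched above) looks roughly like this:

```sql
-- Prefer the visitor's language, fall back to English where no translation exists.
SELECT e.entry_id,
       COALESCE(c_user.headline, c_en.headline) AS headline,
       COALESCE(c_user.body,     c_en.body)     AS body
FROM entries e
LEFT JOIN content c_user
       ON c_user.entry_id = e.entry_id AND c_user.language = 'de'
LEFT JOIN content c_en
       ON c_en.entry_id = e.entry_id AND c_en.language = 'en'
WHERE e.published = TRUE
ORDER BY COALESCE(c_user.created_at, c_en.created_at) DESC;
```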
I have approx. 2TB of text that I want to turn into a searchable database, where I will usually be searching to see if a 2-4 word expression exists in the database (for instance, I might search to see whether the phrase "these are four words" or "three consecutive words" appears anywhere in the text).
These searches will happen very often, so it is very important that I set up the database to use as little processing as possible. I'd also want to minimize the overhead as much as possible so I can lower the number of database servers I'll need.
Does anybody have any suggestions as to how I should setup this database?
For instance, I was thinking of doing a linked list organized as |id|word1|word2| (with all three being keys), so for the expression "these are four words" I'd first search for "these are", then search for "are four", check whether any match for "these are" is 1 id lower than "are four", and then do the same thing for "four words". But I think there has to be a more efficient way of doing it.
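Roughly, what I mean is something like this (just a sketch; names made up):

```sql
-- One row per overlapping word pair: row n holds (word n, word n+1).
CREATE TABLE word_pair (
  id    BIGINT NOT NULL PRIMARY KEY,   -- running position of word1 across the text
  word1 VARCHAR(64) NOT NULL,
  word2 VARCHAR(64) NOT NULL,
  KEY idx_pair (word1, word2)
);

-- "these are four words": "are four" must sit exactly one id after "these are",
-- and "four words" one id after that.
SELECT a.id
FROM word_pair a
JOIN word_pair b ON b.id = a.id + 1
JOIN word_pair c ON c.id = a.id + 2
WHERE a.word1 = 'these' AND a.word2 = 'are'
  AND b.word1 = 'are'   AND b.word2 = 'four'
  AND c.word1 = 'four'  AND c.word2 = 'words';
```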
EDIT: The ONLY thing I will be using this database for is doing these 2-4 word exact match searches, and it is meant for internal use. All I want this database to be able to do is let me know if a 2-4 word expression exists somewhere in all of my files of information, and nothing more.
Does anybody have any suggestions as to how I should setup this database?
Personally, I'd first rule out the possibility of using MySQL's full-text search and every open-source full-text search engine. There's a list of open-source search engines on Wikipedia. I'd also rule out using Google Custom Search. Heck, I'd even consider a commercial product before I'd try rolling my own.
At the very least, studying their code might give you some ideas about index structure.
If you're thinking of building a linked list in SQL, well, you might want to build a tiny test before you get too far into it. I don't think it will be practical, but I could be wrong.
It takes a lot of work to do full-text search really well. (Think about proximity searches: find "there are" within 3 words of "many ways to fail".) Reinventing this wheel might not be the best use of your time.
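For comparison, a rough sketch of what the exact-phrase lookup would look like with MySQL's built-in full-text search (table and column names are made up; exact-phrase matching needs BOOLEAN MODE, and very short or stop words may require tweaking the minimum-token-length and stopword settings):

```sql
-- Documents table with a full-text index over the raw text.
CREATE TABLE document (
  doc_id INT PRIMARY KEY,
  body   LONGTEXT NOT NULL,
  FULLTEXT KEY ft_body (body)
);

-- Does the exact phrase occur anywhere? Double quotes request phrase matching.
SELECT doc_id
FROM document
WHERE MATCH(body) AGAINST ('"these are four words"' IN BOOLEAN MODE)
LIMIT 1;
```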
In my application, I want to create a "universal search" box that will allow the users to perform a general search on any of the 'informational' data within the database. The system happens to be an account management system, so ideally they'd be able to do searches for e-mail addresses, usernames, IDs, etc.
I've been searching around the web for a solution but I haven't come to a conclusion yet so I figured I'd ask the question on SO.
What's the best way to perform a 'search' query on the database and return potential results from multiple tables?
My initial thought was to perform a SELECT query on each individual table using a wildcard for each 'searchable' column. Would this be a correct approach?
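Something along these lines for each table, combined with UNION (the table and column names are just illustrations):

```sql
-- Naive version of the idea: wildcard-match each searchable column, table by table.
SELECT 'account' AS source, account_id AS id, username AS matched_value
FROM   account
WHERE  username LIKE '%smith%' OR email LIKE '%smith%'
UNION ALL
SELECT 'profile' AS source, profile_id AS id, display_name AS matched_value
FROM   profile
WHERE  display_name LIKE '%smith%';
```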
I would use a dedicated search engine for this kind of "universal search", for example Sphinx, a free, open-source SQL full-text search engine.
A SELECT query on each table will perform very poorly if the database is large enough.