Efficient read and write in a MySQL database

I have a MySQL table which stores a set of queries (strings).
The table with its contents looks like this:
query_id | queryString
-------- | -----------
1        | Query 1
2        | Query 2
3        | Query 3
The results, which are related to the queries above, are stored in a separate MySQL table in the form shown:
result_id | query_id | resultString
--------- | -------- | ------------
1         | 1        | Result 1
2         | 1        | Result 2
3         | 2        | Result 3
4         | 2        | Result 4
5         | 2        | Result 1
6         | 3        | Result 3
7         | 3        | Result 4
8         | 3        | Result 5
Clearly, the model above has redundancy: I have to store Result 1, Result 3, and Result 4 more than once. This redundancy grows as the number of similar queries grows. So, let's say I have to do some processing on the results; I would end up processing several duplicate values.
An alternative I can think of is to store the results uniquely in a table, and store the result_ids they refer to along with the queries in the query table. But in that case, while reading the results for a query, I would have to issue a number of MySQL queries, one for every result_id I have. That seems inefficient (w.r.t. reads) to me.
What other possible solutions could help me remove the redundancy with minimal increase in read/write load?
Please comment if anything is unclear in my question.
Thanks!

It seems this is an N:N relationship between queries and result strings, so:
Keep a table for query strings like the one you already have.
Create another table for result strings, and create a third table to link query strings and result strings. Don't forget foreign keys.
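A minimal sketch of that three-table layout (all table and column names here are illustrative, not taken from the question):

```sql
-- Query strings, as in the existing table
CREATE TABLE queries (
    query_id INT AUTO_INCREMENT PRIMARY KEY,
    queryString VARCHAR(255) NOT NULL
);

-- Each result string stored exactly once
CREATE TABLE results (
    result_id INT AUTO_INCREMENT PRIMARY KEY,
    resultString VARCHAR(255) NOT NULL UNIQUE
);

-- Junction table for the N:N relationship
CREATE TABLE query_results (
    query_id INT NOT NULL,
    result_id INT NOT NULL,
    PRIMARY KEY (query_id, result_id),
    FOREIGN KEY (query_id) REFERENCES queries(query_id),
    FOREIGN KEY (result_id) REFERENCES results(result_id)
);
```

Reading all results for a query is then a single JOIN rather than one lookup per result_id:

```sql
SELECT r.resultString
FROM results r
JOIN query_results qr ON qr.result_id = r.result_id
WHERE qr.query_id = 2;
```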

Related

Generate a query that shows how many times each question was answered wrong

I have a table named countwronganswer with columns cwa_id and question_num. How can I write a query that generates a table with two columns: one listing every question_num, and a second listing the number of cwa_id values related to each question_num?
Question Number | Total # of Mistakes
--------------- | -------------------
1               | 12
2               | 22
...etc
ATTENTION: This question was asked before I was aware of COUNT() or GROUP BY, given my knowledge level at the time. COUNT() combined with GROUP BY is the key to generating the second column of totals, which I was not aware of at all, so any attempt at that point to write the code myself would have been close to meaningless. Vote up if you think this is useful or if it resolved your issue.
Probably something like this
SELECT question_num, COUNT(cwa_id) total_mistakes
FROM countwronganswer
GROUP BY question_num
select question_num, count(cwa_id)
from tableName
group by question_num

Concatenating multiple MySQL tables for search. Is this a good use of a MySQL View?

I'm trying to make it quick and easy to perform a keyword search on a set of MySQL tables which are linked to each other.
There's a table of items with a unique "itemID" and associated data is spread out amongst other tables, all linked to via the itemID.
I've created a view which concatenates much of this information into one usable form. This makes searching really easy, but hasn't helped with performance. It's my first use of a view, and perhaps wasn't the right use. If anyone could give me some pointers I'd be very grateful.
A simplified example is:
ITEMS TABLE:
itemID | name
------ | -------
1      | "James"
2      | "Bob"
3      | "Mary"
KEYWORDS TABLE:
keywordID | itemID | keyword
--------- | ------ | ---------
1         | 2      | "rabbit"
2         | 2      | "dog"
3         | 3      | "chicken"
plus many more relations...
MY VIEW: (created using CONCAT_WS, GROUP_CONCAT and a fair few JOINs)
itemID | important_search_terms
------ | ----------------------
1      | "James ..."
2      | "Bob, rabbit, dog ..."
3      | "Mary, chicken ..."
I can then search the view for "mary" and "chicken" and easily find that itemID=3 matches. Brilliant!
The problem is, it seems to be doing all the work of the CONCATs and JOINs for each and every search which is not efficient. With my current test data searches are taking approx 2 seconds, which is not practical.
I was hoping that the view would be cached in some way, but perhaps I'm not using it in the right way.
I could have an actual table with this search info which I update periodically, but it doesn't seem as neat as I had hoped.
If anyone has any suggestions I'd be very grateful. Many Thanks
Well, a view is nothing more than a convenience for reading: underneath, it runs the SQL statement it is defined on every time.
So it's no wonder it is as slow as (or even slower than) running that statement directly.
Usually this kind of search is handled by indexing jobs (run at night, when they won't annoy anyone), or by indexed inserts (when new data is inserted, a check decides whether to add the interesting words to the index).
Doing it at runtime is really hard and requires a well-designed database structure and, most of the time, powerful hardware for the SQL server (depending on the amount of data).
A MySQL view is not the same as a materialized view in other SQL databases. All it's really doing is caching the query itself, not the data needed for the query.
The primary use for a MySQL view is to eliminate repetitive queries that you have to write over and over again.
You've made it easy, but not made it quick. I think if you look at the EXPLAIN for your query you are going to see that MySQL is materializing that view (writing out a copy of the result set from the view query as a "derived table") each time you run the query, and then running a query from that "derived table".
You would get better performance if you can have the "search" predicate run against each table separately, something like this:
SELECT 'items' AS source, itemID, name AS found_term
FROM items WHERE name LIKE 'foo'
UNION ALL
SELECT 'keywords', itemID, keyword
FROM keywords WHERE keyword LIKE 'foo'
UNION ALL
SELECT 'others', itemID, other
FROM others WHERE other LIKE 'foo'
-or-
if you don't care what the matched term is, or which table it was found in, and you just want to return a distinct list of itemID that were matched
SELECT itemID
FROM items WHERE name LIKE 'foo'
UNION
SELECT itemID
FROM keywords WHERE keyword LIKE 'foo'
UNION
SELECT itemID
FROM others WHERE other LIKE 'foo'
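If the pre-concatenated form is still wanted, the "actual table which I update periodically" idea from the question can be sketched as a real summary table with a FULLTEXT index, so searches hit the index instead of re-running the joins every time. The table name item_search is illustrative, and FULLTEXT on InnoDB requires MySQL 5.6 or later:

```sql
-- Materialized search table, refreshed periodically (e.g. from a cron job)
CREATE TABLE item_search (
    itemID INT PRIMARY KEY,
    important_search_terms TEXT,
    FULLTEXT KEY ft_terms (important_search_terms)
);

TRUNCATE item_search;
INSERT INTO item_search (itemID, important_search_terms)
SELECT i.itemID,
       CONCAT_WS(', ', i.name, GROUP_CONCAT(k.keyword SEPARATOR ', '))
FROM items i
LEFT JOIN keywords k ON k.itemID = i.itemID
GROUP BY i.itemID, i.name;

-- Searching then uses the index rather than the CONCATs and JOINs
SELECT itemID
FROM item_search
WHERE MATCH(important_search_terms) AGAINST('mary chicken');
```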

How to collapse MS Access Table rows by matching IDs

I have a table similar to the Example Source Table shown below that I would like to collapse based on the ID field (see Example Collapsed Table). I can do this with code, but it inflates my Access database beyond the 2 GB maximum size, so I'm hoping there is a way to do it with a query. I should probably note that for any given ID value I don't need to worry about more than one record having a value in field One, Two, Three, or Four.
Example Source Table:
ID | One | Two  | Three | Four
-- | --- | ---- | ----- | ----
1  | My  |      | Is    |
1  |     |      |       | Matt
1  |     | Name |       |
2  | My  |      | Is    | Matt
2  |     | Name |       |
3  | My  | Name | Is    | Matt
Example Collapsed Table:
ID | One | Two  | Three | Four
-- | --- | ---- | ----- | ----
1  | My  | Name | Is    | Matt
2  | My  | Name | Is    | Matt
3  | My  | Name | Is    | Matt
You can use an aggregate query which groups by ID and returns the Max() for each of those other 4 columns within each ID grouping.
SELECT
ID,
Max(One),
Max(Two),
Max(Three),
Max(Four)
FROM tblSource
GROUP BY ID;
If you want to store the results in a new table, convert the query to a "make table query". If you already have your destination table created and want to add those results to it, convert the query to an "append query".
If you're approaching the 2 GB db file size limit, first use Compact & Repair to discard unused space. If compact doesn't give you enough working room, create another db file and store the new (collapsed) table there. You can link to it from your original database.
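As a sketch, the make-table form of that aggregate query in Access SQL would look like this (the destination table name tblCollapsed is illustrative):

```sql
SELECT ID,
       Max(One) AS MaxOfOne,
       Max(Two) AS MaxOfTwo,
       Max(Three) AS MaxOfThree,
       Max(Four) AS MaxOfFour
INTO tblCollapsed
FROM tblSource
GROUP BY ID;
```

The append-query form would use INSERT INTO tblDestination (...) SELECT ... instead of the INTO clause.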

How can I add a "group" of rows and increment their "group id" in MySQL?

I have the following table my_table with primary key id set to AUTO_INCREMENT.
id | group_id | data_column
-- | -------- | -----------
1  | 1        | 'data_1a'
2  | 2        | 'data_2a'
3  | 2        | 'data_2b'
I am stuck trying to build a query that will take an array of data, say ['data_3a', 'data_3b'], and appropriately increment the group_id to yield:
id | group_id | data_column
-- | -------- | -----------
1  | 1        | 'data_1a'
2  | 2        | 'data_2a'
3  | 2        | 'data_2b'
4  | 3        | 'data_3a'
5  | 3        | 'data_3b'
I think it would be easy to do using a WITH clause, but that is not supported in MySQL. I am very new to SQL, so maybe I am organizing my data the wrong way? (A group is supposed to represent a group of files that were uploaded together via a form. Each row is a single file, and the data column stores its path.)
The "pseudo-SQL" I had in mind was:
INSERT INTO my_table (group_id, data_column)
VALUES ($NEXT_GROUP_ID, 'data_3a'), ($NEXT_GROUP_ID, 'data_3b')
LETTING $NEXT_GROUP_ID = (SELECT MAX(group_id) + 1 FROM my_table)
where the made up 'LETTING' clause would only evaluate once at the beginning of the query.
You can start a transaction, do a SELECT MAX(group_id) + 1, and then do the inserts. Or you could even lock the table so others can't change (insert into) it in the meantime.
I would rather have a separate table for the groups if a group represents files that belong together, especially if you may want to save metadata about the group (like the uploading user, the date, etc.). Otherwise (in this case) you would get redundant data, which is bad most of the time.
Alternatively, MySQL does have something like variables. Check out http://dev.mysql.com/doc/refman/5.1/en/set-statement.html
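A sketch of the variable-based approach, combined with the table-locking suggestion from the first answer so two uploads can't grab the same group_id (my_table as in the question):

```sql
LOCK TABLES my_table WRITE;

-- Compute the next group id once, then reuse it for every inserted row
SET @next_group_id = (SELECT COALESCE(MAX(group_id), 0) + 1 FROM my_table);

INSERT INTO my_table (group_id, data_column)
VALUES (@next_group_id, 'data_3a'),
       (@next_group_id, 'data_3b');

UNLOCK TABLES;
```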

Merge columns in MySQL SELECT

I have a table that stores a default configuration and a table that stores a user configuration. I can join the two tables and get all the info I need however I was hoping there might be a cleaner way to overwrite one column with the other when a value exists in the second column.
Example:
Current query result:
id | defaultValue | userValue
-- | ------------ | ---------
1  | one          | ONE
2  | two          |
3  | three        | THREE
4  | four         |
Desired query result:
id | value
-- | -----
1  | ONE
2  | two
3  | THREE
4  | four
Maybe there isn't a good way to do this... Thought I'd ask though as it's probably faster to do it in MySQL if a method exists than to do it in PHP.
You can use COALESCE() for this:
SELECT id, COALESCE(userValue, defaultValue) AS value
FROM your_table
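Putting that together with the join between the two tables from the question (the table names default_config and user_config are illustrative; note that COALESCE() falls back to the default only when userValue is NULL, not when it is an empty string, so an empty-string value would need NULLIF(u.userValue, '') instead):

```sql
SELECT d.id,
       COALESCE(u.userValue, d.defaultValue) AS value
FROM default_config d
LEFT JOIN user_config u ON u.id = d.id;
```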