Is there a way to have one column equal the count of all the rows in a different table where the id = a column in the current row?
For example.
table1 has the columns: id, count
table2 has the columns: id, table1_id
I want table1.count to automatically be filled with the number of rows in table2 where table2.table1_id = table1.id
(it's a parent-child has-one relationship).
Obviously I can just do this with PHP; I just figured this would be faster than having to recount constantly in PHP every time the page loads. Thanks.
You can accomplish this with a view. It will probably recalculate the number each time, though.
To update a regular table each time another one is changed, use a trigger.
Don't worry too much about recalculation if the table is seldom changed, because the result will usually be in the query cache.
Using a view and a table table1_data, you could do this:
CREATE VIEW table1 AS
SELECT table1_data.*, COUNT(table2.table1_id) AS count
FROM table1_data
-- LEFT JOIN keeps parents that have no children (their count comes out 0)
LEFT JOIN table2 ON table1_data.id = table2.table1_id
GROUP BY table1_data.id;
This will take all the data from table1_data, add a column count as you described it, and present the result as a view which can be used just like a table in most situations.
Assuming table1.id is a unique key, you can do the following:
REPLACE INTO table1 (id, count)
SELECT table1_id, COUNT(*)
FROM table2
GROUP BY table1_id
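To see the recount in action, here is a minimal sketch in Python using the stdlib sqlite3 module (SQLite also supports REPLACE INTO); the schema mirrors the question:

```python
import sqlite3

# Minimal sketch of the REPLACE INTO recount, run against an in-memory
# SQLite database; table1.id is the unique key, table2.table1_id the FK.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, count INTEGER);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, table1_id INTEGER);
INSERT INTO table2 (table1_id) VALUES (1), (1), (2);

-- recompute every parent's child count in one statement
REPLACE INTO table1 (id, count)
SELECT table1_id, COUNT(*)
FROM table2
GROUP BY table1_id;
""")
print(conn.execute("SELECT id, count FROM table1 ORDER BY id").fetchall())
# [(1, 2), (2, 1)]
```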
You can achieve this "automatic" effect using triggers, provided your DB engine supports them.
You would need two triggers on table2: one AFTER INSERT and one AFTER DELETE, each adjusting table1.count.
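A minimal sketch of the trigger approach, in Python with the stdlib sqlite3 module (SQLite's trigger syntax differs slightly from MySQL's, but the idea carries over): two triggers on table2 keep table1.count in sync.

```python
import sqlite3

# Two triggers on table2 maintain the denormalized count in table1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, count INTEGER DEFAULT 0);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, table1_id INTEGER);

CREATE TRIGGER table2_ins AFTER INSERT ON table2
BEGIN
    UPDATE table1 SET count = count + 1 WHERE id = NEW.table1_id;
END;

CREATE TRIGGER table2_del AFTER DELETE ON table2
BEGIN
    UPDATE table1 SET count = count - 1 WHERE id = OLD.table1_id;
END;
""")

conn.execute("INSERT INTO table1 (id) VALUES (1)")
conn.executemany("INSERT INTO table2 (table1_id) VALUES (?)", [(1,), (1,)])
print(conn.execute("SELECT count FROM table1 WHERE id = 1").fetchone()[0])  # 2
conn.execute("DELETE FROM table2 WHERE id = 1")
print(conn.execute("SELECT count FROM table1 WHERE id = 1").fetchone()[0])  # 1
```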
I have about 20 tables. These tables have only an id (primary key) and a description (varchar), with up to about 400 rows per table.
Right now I have to fetch data from at least 15 of these tables at a time.
I am currently querying them one by one, which means a single session issues 15 calls. This is making my process slow.
Can anyone suggest a better way to get the results from the database?
I am using a MySQL database with Java Spring on the server side. Would making a combined view of them all help me?
The application is becoming slow because of this issue and I need a solution that will make my process faster.
It sounds like your schema isn't so great. 20 tables of id/varchar sounds like a fragmented EAV, a pattern that is generally considered broken to begin with. Just the same, I think a UNION query will help out. This would be the "view" to create in the database so you can just SELECT * FROM thisviewyoumade and let it worry about hitting all the tables.
A UNION query works by having multiple SELECT statements "stacked" on top of one another. It's important that each SELECT statement has the same number, order, and types of fields, so that when the results are stacked, everything matches up.
In your case, it makes sense to manufacture an extra field so you know which table each row came from. Something like the following:
SELECT 'table1' as tablename, id, col2 FROM table1
UNION ALL
SELECT 'table2', id, col2 FROM table2
UNION ALL
SELECT 'table3', id, col2 FROM table3
... and on and on
The names or aliases of the fields in the first SELECT statement are the field names used in the result set that is returned, so there is no need for a bunch of AS blahblahblah clauses in the subsequent SELECT statements.
The real question is whether this union query will perform faster than 15 individual calls on such a tiny tiny tiny amount of data. I think the better option would be to change your schema so this stuff is already stored in one table just like this UNION query outputs. Then you would need a single select statement against a single table. And 400x20=8000 is still a dinky little table to query.
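To make the stacking concrete, here is a small sketch in Python with the stdlib sqlite3 module; the lookup-table names (colors, sizes, shapes) are made up for illustration:

```python
import sqlite3

# Build the UNION ALL query over several id/description lookup tables,
# tagging each row with the table it came from.
conn = sqlite3.connect(":memory:")
for name in ("colors", "sizes", "shapes"):
    conn.execute(f"CREATE TABLE {name} (id INTEGER PRIMARY KEY, description TEXT)")
conn.execute("INSERT INTO colors VALUES (1, 'red'), (2, 'blue')")
conn.execute("INSERT INTO sizes  VALUES (1, 'small')")
conn.execute("INSERT INTO shapes VALUES (1, 'round')")

union_sql = "\nUNION ALL\n".join(
    f"SELECT '{name}' AS tablename, id, description FROM {name}"
    for name in ("colors", "sizes", "shapes")
)
rows = conn.execute(union_sql).fetchall()
print(rows)  # one result set, each row tagged with its source table
```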
To get a row of all the descriptions into app code in a single round trip, send a query along the lines of:
select t1.description, ..., t15.description
from t -- this should contain all the needed ids
join table1 t1 on t1.id = t.t1id
...
join table15 t15 on t15.id = t.t15id
This may not be exactly what you need, but here is how to merge all those table values into a single table:
CREATE TABLE table_name AS
SELECT t1.ID, t1.description AS description1, t2.description AS description2, ...
FROM table1 t1
LEFT JOIN table2 t2 ON t2.ID = t1.ID
...
LEFT JOIN tableN tN ON tN.ID = t1.ID;
(Anchoring every join on t1.ID avoids losing rows when an intermediate table is missing an ID, and the explicit column aliases avoid duplicate column names in the new table.)
I have two tables:
1) is a list of all parameter-ids, plus the info about which set of parameters each parameter-id belongs to
2) is data that includes some of the parameter-ids, along with additional data such as a timestamp and values.
I'm designing a data-warehouse-like system. But instead of a summary table storing precalculated values (that doesn't really make sense in my case), I am trying to decrease the amount of data the different reporting scripts have to look through to get their results.
I want to transfer every row in table2 into a separate table for each set of parameters, so that in the end I have "summary tables", one for each set of parameters. Which parameter belongs to which set is stored in table1.
Is there a faster way than looping over every entry in table1, getting #param_id = ... and #tablename = ..., and doing an INSERT INTO #tablename SELECT * FROM table2 WHERE parameter_id = #param_id? I read that a "set-based approach" would be faster (and better) than the procedural approach, but I don't quite get how that would work in my case.
Any help is appreciated!
Don't do it. Your 3rd table would be redundant with the original two tables. Instead do a JOIN between the two tables whenever you need pieces from both.
SELECT t1.foo, t2.bar
FROM t1
JOIN t2 ON t1.x = t2.x
WHERE ...;
I have 3 access tables with information from the past 3 years. There are tons of the same records in each but there are also unique records in each.
2 of the tables have the same unique primary key (ID), while the 3rd table has a different set of unique IDs.
How do I combine and select all the unique ID's into one master table? Thanks
Not 100% sure I understand where the overlaps occur and not, but try this:
select ID
into All_Id
from (
select ID from Table1
union
select ID from Table2
union all
select ID from Table3
) as t
This presupposes that Table1 and Table2 might share some IDs, and you only want them listed once, but Table3 doesn't have any overlaps.
Truth be told, there is no harm in making them all union, other than maybe having the query run slower.
If you want unique IDs, use a UNION query. If you want everything, use a UNION ALL.
UNION = no dupes
UNION ALL = returns all records including dupes
The Access engine supports union queries, but you have to write the union query manually in SQL view; Design view is not available for them.
Depending on how much data you have from the past three years, the UNION might take some time and may even blow up a few times. I'd make a back up copy first just in case.
If you want purely unique IDs and a new table, here's what I would do:
1.) Write your union query.
SELECT ID FROM Table1
UNION
SELECT ID FROM Table2
...
2.) Save the query.
3.) Create a make table query (to select and combine all unique IDs into a presumably new master table).
4.) Run the make table query. The new table will be created.
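The same steps can be sketched outside Access, e.g. in Python with the stdlib sqlite3 module (SQLite uses CREATE TABLE ... AS in place of Access's SELECT ... INTO for the make-table step); the sample IDs below are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (ID INTEGER);
CREATE TABLE Table2 (ID INTEGER);
CREATE TABLE Table3 (ID INTEGER);
INSERT INTO Table1 VALUES (1), (2);
INSERT INTO Table2 VALUES (2), (3);
INSERT INTO Table3 VALUES (4);

-- the "make table" step: materialize the de-duplicated IDs into a new table
CREATE TABLE All_Id AS
SELECT ID FROM Table1
UNION
SELECT ID FROM Table2
UNION
SELECT ID FROM Table3;
""")
print(conn.execute("SELECT ID FROM All_Id ORDER BY ID").fetchall())
# [(1,), (2,), (3,), (4,)] -- the shared ID 2 appears only once
```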
Hope that helps. Let us know how you make out!
I'm trying to grab the latest ID from a duplicate record within my table, without using a timestamp to check.
SELECT *
FROM `table`
WHERE `title` = "bananas"
-
table
id title
-- -----
1 bananas
2 apples
3 bananas
Ideally, I want to grab the ID 3
I'm slightly confused by the SELECT in your example, but hopefully you will be able to piece this out from my example.
If you want to return the latest row for a given title, you can simply use the MAX() function:
SELECT MAX(id) FROM `table` WHERE title = 'bananas'
Though I definitely recommend trying to determine what makes that row the "latest". If it's just because it has the highest [id] column, you may want to consider what happens down the road. What if you want to combine two databases that use the same data? Going off the [id] column might not be the best decision. If you can, I suggest adding a [LastUpdated] or [Added] datestamp column to your design.
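As a quick sanity check, here is the MAX() approach run against the sample data from the question, sketched in Python with the stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO t (id, title) VALUES (?, ?)",
                 [(1, 'bananas'), (2, 'apples'), (3, 'bananas')])
# highest id among the rows whose title is 'bananas'
latest = conn.execute("SELECT MAX(id) FROM t WHERE title = 'bananas'").fetchone()[0]
print(latest)  # 3
```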
I'm assuming the IDs are auto-incremented.
You could count how many rows you have, store that in a variable, and then have the WHERE clause check against that variable.
BUT this is a hack solution, because if you delete a row, the remaining IDs are not renumbered, so the row count can end up pointing at the wrong id.
select max(a.id)
from mydb.myTable a
join mydb.myTable b
  on a.id <> b.id and a.title = b.title
where a.title = 'bananas';
table1:
columns: id, name
table2:
columns: id, name
assoc_table1_table2:
columns: id_table1, id_table2
I need to select all rows from table1 where at least one row in table2 is associated with this row.
What would be an efficient way to do it? Or, more correct in some way?
I'm thinking of:
SELECT DISTINCT t.id, t.name
FROM table1 t
JOIN assoc_table1_table2 a ON t.id=a.id_table1;
or:
SELECT id, name
FROM table1 t WHERE EXISTS (
SELECT *
FROM assoc_table1_table2 a
WHERE t.id=a.id_table1
);
Any ideas on which of the above is generally faster?
(the obvious indices are in place)
Neither is guaranteed to be faster, but I'd recommend the "WHERE EXISTS" version, as it gives the optimizer more freedom.
Using "WHERE COUNT(*)" or DISTINCT can force a full table scan to compute.
You only want to know whether at least one row exists. On, say, a billion-row table, "WHERE EXISTS" can be satisfied as soon as the database finds the first matching row. On databases with reasonable optimizers, you should find it works well.
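A small sketch of the EXISTS version in Python with the stdlib sqlite3 module, using the question's table names and made-up sample rows; each qualifying parent comes back exactly once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE assoc_table1_table2 (id_table1 INTEGER, id_table2 INTEGER);
INSERT INTO table1 VALUES (1, 'a'), (2, 'b'), (3, 'c');
-- row 1 has two associations, row 2 has one, row 3 has none
INSERT INTO assoc_table1_table2 VALUES (1, 10), (1, 11), (2, 12);
""")
rows = conn.execute("""
    SELECT id, name FROM table1 t
    WHERE EXISTS (SELECT 1 FROM assoc_table1_table2 a
                  WHERE a.id_table1 = t.id)
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')] -- no duplicates, row 3 excluded
```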