I have a database table with 10000 rows in it and I'd like to select a few thousand items using something like the following:
SELECT id FROM models WHERE category_id = 2
EDIT: The id column is the primary index of the table. Also, the table has another index on category_id.
My question is what would be the impact on performance? Would the query run slow? Should I consider splitting my models table into separate tables (one table for each category)?
This is what database engines are designed for. Just make sure you have an index on the column that you're using in the WHERE clause.
You can try this to get the first 100 records:
SELECT id FROM models WHERE category_id = 2 LIMIT 100
Also, you can create an index on that column for faster retrieval of the results:
ALTER TABLE `models` ADD INDEX `category_id` (`category_id`)
EDIT:
If you have an index created on your column, then you don't have to worry about performance; database engines are smart enough to take care of it.
My question is what would be the impact on performance? Would the query run slow? Should I consider splitting my models table into separate tables
No, you don't have to split your tables, as that would not help you gain any performance.
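Once the index exists, a quick way to check that it is actually being used is EXPLAIN (a sketch against the models table from the question; the exact output depends on your MySQL version):
EXPLAIN SELECT id FROM models WHERE category_id = 2;
-- In the output, the `key` column should show the category_id index,
-- and `rows` should be close to the number of matching rows rather than all 10000.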
I'm fairly new to SQL; however, I would first index the column.
I agree with R.T.'s solution. In addition, I can recommend the link below:
https://indexanalysis.codeplex.com/
Download the SQL code. It's a stored procedure that helps me a lot when I want to analyze the impact of indexes or check what status they have in my database.
Please check it out.
I've got a table with almost 10 million rows, and it contains reference (varchar(20)) and date_created (datetime) columns. The table has two indexes, one on each of these columns.
By checking the slow query log, we've detected a very slow query that is used very often:
...WHERE reference LIKE '%SOME_TEXT%' ORDER BY date_created
Now it is mandatory to improve the query time.
I've read that the best way would be a multiple-column index on these columns, something like this:
create index ref_date on my_table (reference, date_created)
But I would like to know if there's a better way to create this index, or a specific index type to set, in order to make it perfect for the aforementioned query.
Thank you very much, guys!
As you suggested, we removed the starting wildcard and searches now go faster.
We also added some indexes in numeric columns, improving the overall performance. Thank you all!
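For reference, a sketch of how the pieces fit together (table and column names are taken from the question; whether MySQL actually uses the index still depends on the optimizer's estimates):
create index ref_date on my_table (reference, date_created);
-- A leading wildcard cannot use a B-tree index on reference, so the whole table is scanned:
SELECT * FROM my_table WHERE reference LIKE '%SOME_TEXT%' ORDER BY date_created;
-- Without the leading wildcard, the index can narrow the scan to the matching reference range:
SELECT * FROM my_table WHERE reference LIKE 'SOME_TEXT%' ORDER BY date_created;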
I'm using a MySQL database and have to perform some select queries on large/huge tables (e.g. 267,736 rows and 30 columns).
Query details:
Only select queries (the data in the table is fixed, never an update, insert or delete)
Select query on all the columns (business requirement)
Mostly limit the number of rows (LIMIT 10 to all rows -> user can choose)
Could be ordered by one or multiple columns (creation of indexes here will not help since the user can order by any column he likes)
Could be filtered by a value the user chooses (where filter on one or more columns)
Currently the queries take up to 2 seconds, which is too long.
Is there a way to speed them up?
Which storage engine should I use: InnoDB/MyISAM/...
Should I have a primary key, even if I will never use it?
...?
You should (must actually) use indexes.
Create indexes on all columns with which WHERE or ORDER BY is going to be used. Also study and use EXPLAIN to see the impact of the indexes and to optimize your queries.
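For example (a minimal sketch with made-up table and column names), an index on a frequently filtered column plus EXPLAIN to confirm it is picked up:
CREATE INDEX idx_status ON big_table (status);
EXPLAIN SELECT * FROM big_table WHERE status = 'active' ORDER BY created_at LIMIT 10;
-- If the `key` column of the EXPLAIN output is NULL, the query is doing a full table scan.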
You don't have to create a primary key if there is no column with unique data in your table, but it is very likely that you do have such a column (id, time...). In this case, you should use the primary key to filter your queries.
Number of columns in the query has close to no impact on SELECT speed.
As long as you run "only select queries", the storage engine does not matter much either. MyISAM might be a bit faster, but InnoDB has many features you will need when you decide that your "only select queries" rule must be broken.
I am currently using MySQL. I have two tables called person and zim_list_id; both tables have over 2 million rows. I want to update the person table using the zim_list_id table. The query I am using is:
update person p JOIN zim_list_id z on p.person_id = z.person_id
set p.office_name = z.`Office Name`;
I have also created indexes on the zim_list_id table and the person table; the queries I executed were:
create index idx_person_office_name on person(`Office_name`);
create index idx_zim_list_id_office_name on zim_list_id(`Office name`);
The query execution is taking very long. Is there any way to reduce the execution time?
The indexes on Office Name do nothing at all for this query. All you've done with those indexes is make inserts and updates slower, as now the database has to update the index any time that column changes.
What you really need, if you don't already have them, are indexes on the person_id field in those tables, to make the join more efficient.
You might also consider adding Office_Name as a second column on the zim_list_id table's index, as this will allow the database to fulfill that part of the query entirely from the index. But I wouldn't do that until I had checked the results after setting the plain person_id indexes first.
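Put differently, something along these lines (a sketch, assuming person_id is not already the primary key or otherwise indexed in these tables; the index names are arbitrary):
-- Make the join itself efficient:
CREATE INDEX idx_zim_person_id ON zim_list_id (person_id);
CREATE INDEX idx_person_person_id ON person (person_id);
-- Optional covering variant, so `Office Name` can be read straight from the index:
CREATE INDEX idx_zim_person_office ON zim_list_id (person_id, `Office Name`);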
Finally, I'm curious how much memory is in that server (especially relative to the total size of the database), how much of it is available via your MySQL innodb_buffer_pool_size setting, and what other work that server might be doing... there could always be an environmental factor as well.
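If you want to check that last point, the InnoDB buffer pool size can be read directly:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- The value is in bytes; compare it to the combined size of your data and indexes.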
I've got a three-column table. It has a unique index, and another two (on two different columns) for faster queries.
+-------------+-------------+----------+
| category_id | related_id | position |
+-------------+-------------+----------+
Sometimes the query is
SELECT * FROM table WHERE category_id = foo
and sometimes it's
SELECT * FROM table WHERE related_id = foo
So I decided to make both category_id and related_id an index for better performance. Is this bad practice? What are the downsides of this approach?
In the case where I already have 100,000 rows in that table and am inserting another 100,000, will it be overkill, having to refresh the index with every new insert? Would that operation then take too long? Thanks
There are no downsides if it's doing exactly what you want: you query on a specific column a lot, so you index that column; that's the whole point. Now, if you have a 60-column table and you're adding indexes to columns you never query on, then you are wasting resources, because those indexes need to be maintained on INSERT/UPDATE/DELETE operations.
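For the two query patterns in the question, that just means two single-column indexes (a sketch; the index names are arbitrary and the table name is the placeholder from the question):
CREATE INDEX idx_category_id ON `table` (category_id);
CREATE INDEX idx_related_id ON `table` (related_id);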
If you have created an index for each of those columns, then you will definitely get a benefit out of it.
Don't go for composite indexes (multiple-column indexes) here.
You can see the advantage of an index in your query for yourself by using EXPLAIN (a statement that provides information about how MySQL executes statements).
EXAMPLE:
EXPLAIN SELECT * FROM table WHERE category_id = foo;
Hope this will help.
~K
It's good to have indexes. Just understand that indexes take more disk space, but give faster searches.
It is in your best interest to index those fields which have fewer repeated values. For example, indexing a field that contains a Boolean flag might not be a good idea.
Since in your case you have an id, I think you won't have any problem keeping the indexes that you have created.
Also, the inserts would be slower, but since you are storing ids, there won't be much of a difference in the time required to insert. Go ahead and do the insert.
My personal advice:
When you are inserting a large number of rows into a single table in one go, don't insert them using a single query, unless it is mandatory. This will prevent your table from getting locked and being inaccessible for a long time.
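To illustrate that last point (a sketch; the batch size is arbitrary and the column names are taken from the question's table): split the bulk load into several smaller INSERT statements so no single statement holds locks for the whole duration.
-- Instead of one statement carrying all 100,000 rows:
-- INSERT INTO `table` (category_id, related_id, position) VALUES (...), (...), ...;
-- Insert in batches of, say, 1,000 rows per statement:
INSERT INTO `table` (category_id, related_id, position) VALUES (2, 10, 1), (2, 11, 2) /* ... up to 1,000 rows ... */;
INSERT INTO `table` (category_id, related_id, position) VALUES (2, 12, 3), (2, 13, 4) /* ... next 1,000 rows ... */;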
Let us consider that I have a table with 60 columns. I need to perform all kinds of queries on that table and need to join it with other tables as well. I am using almost all rows for searching data in that table, including through the other tables. This table is like a primary table (like a primary key) in the database, so all tables are in relation with this table.
Considering the above scenario, can I create an index on each column of the table (60 columns)? Is it good practice?
In a single sentence:
Is it best practice to create an index on each column in a table?
What might happen if I create an index on each column in a table?
Here "index" might mean a primary key, a unique key, or a plain index.
Please comment if this question is unclear; I will try to improve it.
MySQL's documentation is pretty clear on this (in summary: use indices on columns you will use in WHERE, JOIN, and aggregation functions).
Therefore there is nothing inherently wrong with creating an index on all columns in a table, even if it is 60 columns. The more indices there are, the slower inserts and some updates will be, because MySQL has to create the keys; but if you don't create the indices, MySQL has to scan the entire table whenever only non-indexed columns are used in comparisons and joins.
I have to say that I'm astonished that you would
Have a table with 60 columns
Have all of those columns used either in a JOIN or WHERE clause without dependency on any other column in the same table
...but that's a separate issue.
It is not best practice to create index on each column in a table.
Indexes are most commonly used to improve query performance when the column is used in a where clause.
Suppose you use this query a lot:
select * from tablewith60cols where col10 = 'xx';
then it would be useful to have an index on col10.
Note that primary keys by default have an index on them, so when you join the table with other tables you should use the primary key to join.
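For example (a sketch; the second table and its columns are made up), the join condition should use the other table's primary key:
EXPLAIN SELECT t.*, o.name
FROM tablewith60cols t
JOIN othertable o ON o.id = t.othertable_id;
-- o.id is othertable's primary key, so each joined row is looked up via the PRIMARY index.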
Adding an index means that the database has to maintain it, that means that it has to be updated, so the more writes you have, the more the index will be updated.
Creating indexes out of the box is not a good idea; create an index only when you need it (or when you can see the need in the future... only if it is pretty obvious).
Creating more indexes in SQL will only increase search speed, while inserts and updates will get slower, and the indexes will also take more storage.