I am using CodeIgniter version 3.1.6 and XAMPP Control Panel v3.2.2, listing customer details by joining two tables, 'tbl_customer' and 'tbl_additional_details'; tbl_customer's "cus_id" is set as an index and as the primary key.
My query looks like this:
$this->db->select('*');
$this->db->from('tbl_additional_details');
$this->db->join('tbl_customer', 'tbl_additional_details.customer_id = tbl_customer.cus_id');
$this->db->where('tbl_additional_details.branch_id',$_SESSION['branchs']);
$this->db->order_by('tbl_additional_details.customer_id',"desc");
$query = $this->db->get();
return $query->result();
Why does the listing load so slowly? Is there any way to improve the response time?
Query optimisation is a complex topic... you would do well to run the SQL in MySQL directly and do some profiling there, but start with:
Make sure you create an index on any field that you might select or join on (see the sketch after this list).
Run "optimize" on all your tables... you can do that from phpmyadmin. Do this AFTER creating your indexes.
Consider "denormalization", i.e. merge the tables or duplicate certain fields. For example if you are joining 2 tables for the sake of getting ONE extra field, you might want to copy that extra field over to the first table and eliminate the join altogether. This produces drastic increases in speed on large tables. Look at example here: http://courses.ischool.berkeley.edu/i257/f99/Lecture9_257/sld018.htm
All the best!
I use a MySQL trigger to update a column called comments_count in one of my DB tables, but I want to know which is better and faster:
using MySQL triggers, or a select query like the one below?
select count(*) from comments where discussion_id=something
These are different types of overhead:
With the trigger, you will have extra time during insert, and the count may get out of sync over time for some unforeseen reason (see the sketch after this list).
With the query, you will always get the right answer, but you will need to calculate it at runtime. Usually this should be very fast, especially with an index on discussion_id.
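As a rough sketch of the trigger approach, assuming a discussions table with a comments_count column (the table, column and trigger names here are illustrative):
CREATE TRIGGER comments_after_insert
AFTER INSERT ON comments
FOR EACH ROW
  UPDATE discussions
  SET comments_count = comments_count + 1
  WHERE id = NEW.discussion_id;
-- (a matching AFTER DELETE trigger would be needed to decrement the count)

-- For the query approach, this index is what keeps the COUNT(*) fast:
CREATE INDEX idx_comments_discussion ON comments (discussion_id);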
I'm experiencing a huge performance problem in a legacy application.
There is a search form where the user can search for records with a given value.
A result row contains 10 columns. A stored procedure (SP) then returns any row that contains that value in any column.
This SP uses 8 tables, and some of them have about a million records. Every minute I get a new record. The SP handles paging as well.
Executing this SP sometimes takes around 40 seconds.
What I did was create a new table and populate it with all the records, using the query from this SP but without the conditions.
When there is a new insert or update in one of the source tables, I use a trigger to update this new "cache" table.
Now waiting for results from this new table takes only 1-3 seconds.
Has someone experience with something like this?
One of my colleagues said I would be better off using a view, but then I would be making JOINs every time.
What do you think? Is there another way?
Oftentimes temporary tables can help you resolve performance issues. One approach might be to collect only the records that you need to consider into temporary tables, and then build your final select statement from the temporary tables joined to any other tables that you're not filtering.
As an example, let's say one of the fields you are searching for is field1 in table1. Start by inserting into table #table1 only records that have the value of field1 you are looking for:
select PrimaryKeyTable1, Field1, Field2, Field3, etc...
into #table1
from table1
where Field1 = 'Whatever you are looking for'
This should be pretty fast even for big tables, especially if you have an index on Field1. You do this for every table with search fields, to collect all the records that are relevant to your search.
Then you also need to be sure to insert into your temporary tables any records that have foreign key references to any of your other temporary tables. So let's say you also built a table #table2 with the above method, and it has a foreign key to table1 called PrimaryKeyTable1. You would insert those records like:
Insert into #table1
(PrimaryKeyTable1, Field1, Field2, Field3, etc...)
select table1.PrimaryKeyTable1, table1.Field1, table1.Field2, table1.Field3, etc...
from table1
join #table2
on table1.PrimaryKeyTable1 = #table2.PrimaryKeyTable1
where table1.PrimaryKeyTable1 not in
(Select PrimaryKeyTable1 from #table1)
Now #table1 will also contain any records that match a record in #table2 that itself satisfies the search criteria. You do this for all your temporary tables that have relevant foreign keys. The order in which you do the inserts matters; while collecting the foreign-key-referenced records, be sure that you don't reference any temporary table until after its last insert statement.
Then you can simply do your final select statement, replacing the actual tables with the temporary tables you have built and eliminating all the filters that search your field data. Depending on the structure of your query there might be other optimizations, but that is the general idea.
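For instance, if only #table1 and #table2 carried search filters and OtherTable is joined unfiltered, the final statement might look something like this (all names here are illustrative):
select t1.Field1, t1.Field2, t2.Field1, o.SomeField
from #table1 t1
join #table2 t2
on t2.PrimaryKeyTable1 = t1.PrimaryKeyTable1
join OtherTable o
on o.PrimaryKeyTable2 = t2.PrimaryKeyTable2
order by t1.PrimaryKeyTable1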
If you've already explored all of your indexing options and this still doesn't help, MS SQL Server has a "Change Tracking" feature that may be of use to you in building your cache table. You enable the database for change tracking and configure which tables you wish to track. SQL Server then creates change records on every update, insert and delete on a table, and lets you query for changes to records that have been made since the last time you checked. This is very useful for syncing changes and is more efficient than using triggers. It's also easier to manage than making your own tracking tables. This has been a feature since SQL Server 2005.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tables and lets you query which fields might have been modified. Then you can query the tables, joined on those keys, to get the current data. If you want it to capture the data as well, you can use Change Data Capture, but it requires more overhead and at least SQL Server 2008 Enterprise Edition.
Change Data Capture
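To illustrate the change tracking approach, here is a minimal sketch (the database name MyDb, the table dbo.Customers, its key CustomerID and the @last_sync_version variable are all placeholders):
ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Customers
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Later, fetch the keys of rows changed since the last sync and join back to the
-- base table for the current data:
SELECT c.*, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync_version) AS ct
LEFT JOIN dbo.Customers AS c ON c.CustomerID = ct.CustomerID;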
Your solution is a robust way of doing what is called an "indexed view" in Microsoft SQL Server, or a "materialized view" in Oracle.
Basically you are correct - it's faster to navigate a single indexed table than a dozen tables that are updated constantly.
You should really try creating an indexed view (start here: https://technet.microsoft.com/en-us/library/dd171921(v=sql.100).aspx); it will probably solve all your performance issues.
You can create a schema-bound view and build a clustered index on that view. This stores the view's data physically, but once the schema-bound view exists you cannot alter the underlying tables in ways that affect it.
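A minimal sketch of such a schema-bound (indexed) view, with illustrative table and column names, assuming the join keeps one row per PrimaryKeyTable1:
CREATE VIEW dbo.SearchCache
WITH SCHEMABINDING
AS
SELECT t1.PrimaryKeyTable1, t1.Field1, t2.Field2
FROM dbo.table1 AS t1
INNER JOIN dbo.table2 AS t2
ON t2.PrimaryKeyTable1 = t1.PrimaryKeyTable1;
GO
-- The unique clustered index is what materializes the view's data on disk:
CREATE UNIQUE CLUSTERED INDEX IX_SearchCache ON dbo.SearchCache (PrimaryKeyTable1);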
Here is a chunk of the SQL I'm using for a Perl-based web application. I have a number of requests, each request has a number of accessions, and each accession has a status. This chunk of code updates the table for every accession_analysis that shares all these fields, for each accession in a request.
UPDATE accession_analysis
SET analysis_id = ? ,
reference_id = ? ,
status = ? ,
extra_parameters = ?
WHERE analysis_id = ?
AND reference_id = ?
AND status = ?
AND extra_parameters = ?
AND accession_id IN (
SELECT accession_id
FROM accessions
WHERE request_id = ?
)
I have changed the tables so that there's a status table for accession_analysis, so when I update, I now update both accession_analysis and accession_analysis_status, which has status, status_text and the id of the accession_analysis, which is a NOT NULL auto_increment column.
I have no strong idea of how to modify this code to allow that. My first pass grabbed all the accessions and looped through them, then filtered on all the fields, then updated. I didn't like that because it meant many connections with short SQL commands, which I understood to be bad, but I can't help thinking the only way to really do this is to go back to a loop in Perl holding two simpler SQL statements.
Is there a way to do this in SQL that, with my relative SQL inexperience, I'm just not seeing?
The answer depends on which DBMS you're using. The easiest way is to create a trigger on one table that provides the logic of updating the other table. (For any DB newbies -- a trigger is procedural code attached to a table at the DBMS (not application) layer that runs in response to an insert, update or delete on the table.) A similar, slightly less desirable method is to put the logic in a stored procedure and execute that instead of the update statement you're now using.
If the DBMS you're using doesn't support either of these mechanisms, then there isn't a good way to do what you're after while guaranteeing transactional integrity. However, if the problem you're solving can tolerate a timing difference between the two tables' updates (i.e. the data in one of the tables is only used at predetermined times, like reporting or some type of batched operation), you could write to one table (live) and create a separate process that runs when needed (later) to update the second table using data from the first table. The correctness of allowing data to be updated at different times becomes a large and immovable design assumption, however.
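For example, on MySQL the trigger approach might look roughly like this (the column names accession_analysis.id and accession_analysis_status.accession_analysis_id are assumptions about your schema, not taken from the question):
CREATE TRIGGER accession_analysis_after_update
AFTER UPDATE ON accession_analysis
FOR EACH ROW
  UPDATE accession_analysis_status
  SET status = NEW.status
  WHERE accession_analysis_id = NEW.id;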
If this is mostly about connection speed, then one option you have is to write a stored procedure that handles the "double update or insert" transparently. See the manual for stored procedures:
http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html
Otherwise, you probably cannot do it in one statement; see the MySQL INSERT syntax:
http://dev.mysql.com/doc/refman/5.5/en/insert.html
The UPDATE syntax allows for multi-table updates (not in combination with INSERT, though):
http://dev.mysql.com/doc/refman/5.5/en/update.html
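A rough sketch of what the stored procedure suggested above might look like (the procedure name, parameters and join columns are illustrative, not taken from your schema):
DELIMITER //
CREATE PROCEDURE update_analysis_and_status(
    IN p_request_id INT,
    IN p_new_status VARCHAR(64)
)
BEGIN
    -- Update the main table for every accession in the request...
    UPDATE accession_analysis aa
    JOIN accessions a ON a.accession_id = aa.accession_id
    SET aa.status = p_new_status
    WHERE a.request_id = p_request_id;

    -- ...and keep the status table in step inside the same call.
    UPDATE accession_analysis_status s
    JOIN accession_analysis aa ON aa.id = s.accession_analysis_id
    JOIN accessions a ON a.accession_id = aa.accession_id
    SET s.status = p_new_status
    WHERE a.request_id = p_request_id;
END //
DELIMITER ;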
Each table needs its own INSERT / UPDATE in the query.
In fact, even if you create a view by JOINing multiple tables, when you INSERT into the view you can only INSERT fields belonging to one of the tables at a time.
The modifications made by the INSERT statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For example, an INSERT into a multitable view must use a column_list that references only columns from one base table. For more information about updatable views, see CREATE VIEW.
Inserting data into multiple tables through an sql view (MySQL)
INSERT (SQL Server)
The same is true of UPDATE:
The modifications made by the UPDATE statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For more information on updatable views, see CREATE VIEW.
However, you can have multiple INSERTs or UPDATEs per query or stored procedure.
I have a table with over 3,000,000 entries, and I need to delete 500,000 of them with given IDs.
My idea is to create a query like:
DELETE FROM TableName WHERE ID IN (id1, id2, ...........)
which I generate with simple C# code.
The question is:
is there a limit to how many values I can put in the list of IDs?
And if someone has a better way to achieve this delete more efficiently, I'm open to ideas.
If your IDs can't be selected with some comparison (as in WHERE ID < 1000000), you could
INSERT them into a temp table with multiple inserts, and then
JOIN this temp table to yours.
But the inserts may become problematic. You should check that. How could you speed this thing up? (A sketch follows the list below.)
make the deletes in several bulks
insert the IDs into the temp table in bulks
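A minimal sketch of the temp-table variant, assuming SQL Server (the table and column names are illustrative):
CREATE TABLE #ids_to_delete (ID int PRIMARY KEY);

-- Fill the temp table in bulks generated by the C# code, e.g. 1000 values per INSERT:
INSERT INTO #ids_to_delete (ID) VALUES (101), (102), (103) /* , ... */;

-- Then delete via the join, optionally in batches to keep transactions short:
DELETE t
FROM TableName AS t
JOIN #ids_to_delete AS d ON d.ID = t.ID;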
In the end, here is my solution, which works reasonably well:
1. Sorted the IDs (to save server paging).
2. Created queries with C# code, each containing 500 IDs.
3. Sent the queries one by one.
I assume that when I worked with queries having 1000+ IDs, the SQL Server time needed to process each query was slowing me down (after all, any query you run in SQL Server is processed and optimized).
I hope this helps someone.
I am creating a HUGE query with 5 or 6 joins, and I want to turn it into a view. But I need to be able to update this view. Is this possible?
By update, I mean, run an SQL UPDATE command, alter a value, and then let the changed values filter through to the appropriate tables.
Certain views can be updatable in MySQL, but there are constraints on them. At the very least, there must be a 1-to-1 relationship between the rows in the view and the rows in the underlying table.
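A minimal sketch with illustrative tables: a join view stays updatable as long as each UPDATE touches columns from only one underlying table and each view row maps to exactly one row in that table.
CREATE VIEW order_details AS
SELECT o.order_id, o.order_status, c.customer_name
FROM orders AS o
JOIN customers AS c ON c.customer_id = o.customer_id;

-- This works, because only columns from orders are modified:
UPDATE order_details SET order_status = 'shipped' WHERE order_id = 42;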