MySQL triggers vs. SELECT query - mysql

I use a MySQL trigger to update a column called comments_count in one of my DB tables, but I want to know which is better and faster:
using a MySQL trigger, or a SELECT query like the one below?
select count(*) from comments where discussion_id=something

These are two different types of overhead:
With the trigger, you pay extra time on every insert, and the counter may drift out of sync over time for some unforeseen reason.
With the query, you always get the right answer, but you have to compute it at runtime. Usually this is very fast, especially with an index on discussion_id.
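For reference, a minimal sketch of the trigger side, assuming a discussions table with an id primary key and a comments_count column (the table and column names are placeholders, not taken from the question):
DELIMITER //
CREATE TRIGGER comments_ai AFTER INSERT ON comments
FOR EACH ROW
BEGIN
  -- keep the denormalized counter in step with the comments table
  UPDATE discussions SET comments_count = comments_count + 1 WHERE id = NEW.discussion_id;
END//
CREATE TRIGGER comments_ad AFTER DELETE ON comments
FOR EACH ROW
BEGIN
  UPDATE discussions SET comments_count = comments_count - 1 WHERE id = OLD.discussion_id;
END//
DELIMITER ;
For the query-at-runtime side, the index mentioned above would be along the lines of CREATE INDEX idx_comments_discussion ON comments (discussion_id); so the COUNT(*) can be answered from the index.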

Related

MySQL - Trigger or Replication is better?

I want to replicate a certain table from one database into another database on the same server. These tables contain exactly the same fields.
I was considering using MySQL replication to replicate that table, but some people said it would increase I/O, so I found another way: creating three triggers (INSERT, UPDATE and DELETE) that do exactly what I expect.
My question is, which way is better? Is using MySQL replication better even though it's on the same server, or is using triggers to replicate the data better?
Thanks.
I don't know what your goal is, but I achieved mine using the VIEW functionality.
I had two different applications with separate databases on the same MySQL server. Application2 needed a few pieces of data from Application1. In general this is a trivial situation that you can handle with USE DB1; or USE DB2; as needed, but my programming framework does not work very well with multiple DBs.
So, let's see my solution...
Here is my select query to retrieve this data:
SELECT id, name FROM DB1.customers;
So, using DB2 as default schema, I've created a VIEW:
USE DB2;
CREATE VIEW app1_customers AS SELECT id, name FROM DB1.customers;
Now I can retrieve this data in DB2 as a regular table with a regular SELECT statement.
SELECT * FROM DB2.app1_customers;
Hope it's useful. BR
Assuming you have two databases on the same server, i.e. DB1 and DB2, and the table is called tbl1 and lives in DB1, you can query it like this:
USE DB1;
SELECT * FROM tbl1;
USE DB2;
SELECT * FROM DB1.tbl1;
This way you won't need to copy the data or worry about extra space and extra code. You can query a table in another database on the same server, so replication and triggers are not the answer here. You could also create a view to encapsulate the SQL statement.
Triggers are definitely the way to go here. Running another server (a slave) would cost several MB for the installation and logs, plus CPU and memory usage.
I'd use triggers to keep both tables equal. If you want to create a table with the same column definitions and data, use:
USE db2;
CREATE TABLE t1 AS SELECT * FROM db1.t1;
After that, go ahead and create the triggers for the INSERT, UPDATE and DELETE statements.
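For example, the three triggers might look roughly like this; it assumes t1 has just an id primary key and a val column, which is purely an assumption for illustration (adjust the column lists to the real table definition):
USE db1;
DELIMITER //
CREATE TRIGGER t1_ai AFTER INSERT ON t1
FOR EACH ROW
BEGIN
  -- mirror new rows into the copy in db2; the column list (id, val) is assumed
  INSERT INTO db2.t1 (id, val) VALUES (NEW.id, NEW.val);
END//
CREATE TRIGGER t1_au AFTER UPDATE ON t1
FOR EACH ROW
BEGIN
  UPDATE db2.t1 SET val = NEW.val WHERE id = NEW.id;
END//
CREATE TRIGGER t1_ad AFTER DELETE ON t1
FOR EACH ROW
BEGIN
  DELETE FROM db2.t1 WHERE id = OLD.id;
END//
DELIMITER ;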
Also, you could ALTER the new table to a different engine like MEMORY, or add indexes, to see if you can improve anything.

Perl - DBI - MySQL - Easy way to get row id after update?

Is there an easy way to get the id of the row that was affected by an UPDATE statement from DBI? In this particular case it will always be either 0 or 1 rows. I didn't want the expense of redoing the SELECT part of the query just to get the data, as it is fairly costly.
I have to do the UPDATE first, because otherwise I introduce the possibility of a race condition between the SELECT and the UPDATE.
You might want to read this related SO topic (I've linked to the answer by #Erwin Brandstetter) -- this is the way I've always handled it.
Depending on your database engine, you are likely to have a SELECT ... FOR UPDATE facility. You can use it to:
SELECT ... FOR UPDATE the record you want to update, then
save the ID from that record and do the UPDATE using the ID instead of the original criteria.
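In SQL terms the pattern looks roughly like this (the table and column names are placeholders; from DBI you would run the same statements inside a single transaction):
START TRANSACTION;
-- lock the row that matches the original criteria and remember its id
SELECT id INTO @row_id
FROM items
WHERE some_column = 'some_value'
LIMIT 1
FOR UPDATE;
-- now update by primary key, so no other session can sneak in between
UPDATE items SET status = 'processed' WHERE id = @row_id;
COMMIT;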
The MySQL documentation on SELECT ... FOR UPDATE, used together with transactions, may help you.

Can I INSERT/UPDATE into two tables with one query?

Here is a chunk of the SQL I'm using for a Perl-based web application. I have a number of requests, each request has a number of accessions, and each of those has a status. This chunk of code updates every accession_analysis row that shares all these fields, for each accession in a request.
UPDATE accession_analysis
SET analysis_id = ?,
    reference_id = ?,
    status = ?,
    extra_parameters = ?
WHERE analysis_id = ?
  AND reference_id = ?
  AND status = ?
  AND extra_parameters = ?
  AND accession_id IN (
      SELECT accession_id
      FROM accessions
      WHERE request_id = ?
  )
I have changed the tables so that there's a status table for accession_analysis, so when I update, I update both accession_analysis and accession_analysis_status, which has status, status_text and the id of the accession_analysis (a NOT NULL AUTO_INCREMENT column).
I have no strong idea how to modify this code to allow that. My first pass grabbed all the accessions and looped through them, then filtered on all the fields, then updated. I didn't like that because it meant many connections with short SQL commands, which I understood to be bad, but I can't help thinking the only real way to do this is to go back to a Perl loop holding two simpler SQL statements.
Is there a way to do this in SQL that, with my relative SQL inexperience, I'm just not seeing?
The answer depends on which DBMS you're using. The easiest way is to create a trigger on one table that provides the logic of updating the other table. (For any DB newbies: a trigger is procedural code attached to a table at the DBMS layer, not the application layer, that runs in response to an insert, update or delete on the table.) A similar, slightly less desirable method is to put the logic in a stored procedure and execute that instead of the UPDATE statement you're now using.
If the DBMS you're using doesn't support either of these mechanisms, then there isn't a good way to do what you're after while guaranteeing transactional integrity. However, if the problem you're solving can tolerate a timing difference between the two tables' updates (i.e. the data in one of the tables is only used at predetermined times, like reporting or some type of batched operation), you could write to one table (live) and create a separate process that runs when needed (later) to update the second table using data from the first. The correctness of allowing data to be updated at different times becomes a large and immovable design assumption, however.
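As a rough illustration of the trigger route for this case, assuming accession_analysis has an id primary key (the accession_analysis_status column names below, such as accession_analysis_id, are guesses based on the description rather than the real schema):
DELIMITER //
CREATE TRIGGER accession_analysis_au
AFTER UPDATE ON accession_analysis
FOR EACH ROW
BEGIN
  -- record each status change in the status table automatically
  INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
  VALUES (NEW.id, NEW.status, CONCAT('status set to ', NEW.status));
END//
DELIMITER ;
With this in place the application only issues the original UPDATE, and the status row is written as part of the same statement.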
If this is mostly about connection speed, then one option you have is to write a stored procedure that handles the "double update or insert" transparently. See the manual for stored procedures:
http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html
Otherwise, you probably cannot do it in one statement; see the MySQL INSERT syntax:
http://dev.mysql.com/doc/refman/5.5/en/insert.html
The UPDATE syntax allows for multi-table updates (not in combination with INSERT, though):
http://dev.mysql.com/doc/refman/5.5/en/update.html
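For what it's worth, a multi-table UPDATE looks roughly like this (t1 and t2 are hypothetical tables joined on a shared key):
-- updates columns in both tables in a single statement
UPDATE t1
JOIN t2 ON t2.t1_id = t1.id
SET t1.status = 'done',
    t2.status_text = 'done'
WHERE t1.id = 42;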
Each table needs its own INSERT / UPDATE in the query.
In fact, even if you create a view by JOINing multiple tables, when you INSERT into the view, you can only INSERT with fields belonging to one of the tables at a time.
The modifications made by the INSERT statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For example, an INSERT into a multitable view must use a column_list that references only columns from one base table. For more information about updatable views, see CREATE VIEW.
Inserting data into multiple tables through an sql view (MySQL)
INSERT (SQL Server)
The same is true of UPDATE:
The modifications made by the UPDATE statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For more information on updatable views, see CREATE VIEW.
However, you can have multiple INSERTs or UPDATEs per query or stored procedure.
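A sketch of the stored-procedure route suggested above; the procedure name and the status-table columns are made up for illustration, while p_analysis_id corresponds to the analysis_id used in the question's WHERE clause:
DELIMITER //
CREATE PROCEDURE update_analysis_and_status(
  IN p_analysis_id INT,
  IN p_status VARCHAR(64),
  IN p_status_text VARCHAR(255)
)
BEGIN
  START TRANSACTION;
  -- both writes happen server-side in one round trip
  UPDATE accession_analysis
     SET status = p_status
   WHERE analysis_id = p_analysis_id;
  INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
  VALUES (p_analysis_id, p_status, p_status_text);
  COMMIT;
END//
DELIMITER ;
From Perl this becomes a single CALL update_analysis_and_status(?, ?, ?) per accession instead of two separate statements.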

Maximum values possible in a WHERE IN query

I have a table with over 3,000,000 entries, and I need to delete 500,000 of them with given IDs.
My idea is to create a query like:
DELETE FROM TableName WHERE ID IN (id1, id2, ...........)
which I generate with simple C# code.
The question is:
is there a limit to how many values I can put in the list of IDs?
And if someone has a better way to achieve this delete more efficiently, I'm open to ideas.
If your IDs can't be selected with some comparison (as in WHERE ID < 1000000), you could
INSERT them into a temp table with multiple inserts, and then
JOIN this temp table to yours (see the sketch below).
But the inserts themselves may become problematic. You should check that. How could you speed this thing up?
Run the deletes in several bulks.
Insert the IDs into the temp table in bulks.
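A rough sketch of the temp-table variant (TableName and ID come from the question; the temp table name is made up):
-- collect the IDs once, in bulk batches of e.g. 1000 values per INSERT
CREATE TEMPORARY TABLE ids_to_delete (id INT PRIMARY KEY);
INSERT INTO ids_to_delete (id) VALUES (1), (2), (3);  -- ...repeat for each batch
-- delete via a join instead of a huge IN (...) list
DELETE t
FROM TableName AS t
JOIN ids_to_delete AS d ON d.id = t.ID;
DROP TEMPORARY TABLE ids_to_delete;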
In the end, here is my solution, which works reasonably well:
1. Sorted the IDs (to save the server paging).
2. Generated queries with 500 IDs each from the C# code.
3. Sent the queries one by one.
I assume that when I used queries with 1000+ IDs, the time the SQL server spent parsing and optimizing each query was slowing me down (after all, every query you run gets parsed and optimized).
I hope this helps someone.

Is it possible a trigger on a select statement with MySQL?

I know that triggers can be used on INSERT, UPDATE and DELETE, but what about a trigger (or something like it) on a SELECT statement? I want to use a trigger to insert data into a table B whenever an existing record in a table A is selected. Is that possible?
Thanks in advance.
You should design your application so that database access occurs only through certain methods, and in those methods, add the monitoring you need.
Not exactly a trigger, but you can:
CREATE FUNCTION myFunc(...) RETURNS INT
BEGIN
  INSERT INTO myTable VALUES (...);
  RETURN 1;
END;
And then:
SELECT myFunc(...), ... FROM otherTable WHERE id = 1;
Not an elegant solution, though.
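Spelled out with made-up names (tableA, tableB and the log_select function are all placeholders), the trick looks like this:
DELIMITER //
CREATE FUNCTION log_select(p_a_id INT) RETURNS INT
MODIFIES SQL DATA
BEGIN
  -- side effect: record that the row in tableA was read
  INSERT INTO tableB (a_id, selected_at) VALUES (p_a_id, NOW());
  RETURN p_a_id;
END//
DELIMITER ;
-- every row returned by this SELECT also writes a row into tableB
SELECT log_select(a.id), a.* FROM tableA AS a WHERE a.id = 1;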
It is not possible in the database itself.
However there are monitoring/instrumentation products for databases (e.g. for Sybase - not sure about MySQL) which track every query executed by the server, and can do anything based on that - usually store the query log into a data warehouse for later analysis, but they can just as well insert a record into table B for you, I would guess.
You could also write an application that monitors the query log and does something when a SELECT occurs. A pretty crude way to solve the problem, though...