How to execute multiple SQL queries at the same time - MySQL

I want to know if we can execute multiple queries at the same time in MySQL/SQL. Let me describe a scenario to elaborate.
Let's assume we have to create and load two tables: create table tbl1(col,col,col,col...); insert into tbl1 (val,val,val,val...); and, as a second pair, create table tbl2(col,col,col,col...); insert into tbl2 (val,val,val,val...). Now, when I execute these statements the flow will be:
Create Table1
Insert Into Table1
Create Table2
Insert Into Table2
Is there any method we can use to reduce these 4 steps to a single step, similar to how threads run in parallel?

You can use two different instances of SSMS, or maybe different tabs within SSMS.
Another option is to run the 2 queries at the same time with a maintenance plan. Here is the link for more details.

You can chain multiple queries by separating them with ";"; see here for further details: How to run multiple SQL queries?.
In your setup, step 1 needs to be executed before step 2 (and likewise step 3 before step 4), because you cannot insert data into a table that does not exist. So running all 4 queries in parallel is not possible. However, running 1+2 and 3+4 in parallel is possible.
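As a sketch of what that could look like (the column names and values below are made up, and this assumes your client or connector allows several statements in one call, e.g. the C API's CLIENT_MULTI_STATEMENTS flag or an equivalent multi-statement option), you would send each table's pair of statements as one chained batch and run the two batches on two separate connections:
-- connection/session 1: create and load the first table in one chained call
CREATE TABLE tbl1 (id INT, name VARCHAR(50));
INSERT INTO tbl1 (id, name) VALUES (1, 'a'), (2, 'b');
-- connection/session 2 (run at the same time): create and load the second table
CREATE TABLE tbl2 (id INT, name VARCHAR(50));
INSERT INTO tbl2 (id, name) VALUES (3, 'c'), (4, 'd');
Within each batch the CREATE still runs before the INSERT, but the two batches are independent of each other and can execute in parallel.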


Select Into/Insert Into SQL Server query duplicates

Sorry for asking this question, but I am a beginner in SQL. A colleague at work built a view, which I need as a data source for a report; however, since this view is based on several other views, it takes about 45 minutes to execute the query. This is way too long. Therefore I created a table from that view; the initial execution time is the same, but once the table is in place it executes in seconds.
In Microsoft SQL Server 2014 I used the following query:
select *
into [dbo].[MAT_v_demnew_daily_am_all_data]
from [dbo].[v_demnew_daily_am]
This works fine, but since the view is updated daily I also need to refresh the table everyday. When I now execute the above mentioned query I get the message that the table already exists.
That's why I tried to use 'insert' in this case:
insert into [dbo].[MAT_v_demnew_daily_am_all_data]
select *
from [dbo].[v_demnew_daily_am]
Here I have the problem that it not only inserts the additional data but also the already existing data, so in the end I have duplicates.
As a workaround, I now manually delete the [dbo].[MAT_v_demnew_daily_am_all_data] table and then execute the select * into query.
Now I am looking for an easier solution: is it possible to have the table dropped by a query and, in the same query, created anew with select * into? Or is it possible to insert only the new data from the view into the table, so that I don't get duplicates?
Moreover, is it possible to have such a SQL statement executed automatically on a daily basis, maybe by a .bat file and the Windows Task Scheduler?
I know that the source of all these problems is the view and that we should improve it, but I am looking for a short-term solution first.
Thanks so much.
Mathias
Try this:
IF OBJECT_ID('dbo.MAT_v_demnew_daily_am_all_data', 'U') IS NOT NULL
DROP TABLE dbo.MAT_v_demnew_daily_am_all_data
SELECT * INTO dbo.MAT_v_demnew_daily_am_all_data FROM dbo.v_demnew_daily_am
This query can be reused on a daily basis.
You can create a stored procedure containing this query.
Then you only need to execute the stored procedure.
Updated
Before you create the stored procedure, please check that you have the necessary permissions.
Then try:
create procedure [procedure_name]
as
IF OBJECT_ID('dbo.MAT_v_demnew_daily_am_all_data', 'U') IS NOT NULL
DROP TABLE dbo.MAT_v_demnew_daily_am_all_data
SELECT * INTO dbo.MAT_v_demnew_daily_am_all_data FROM dbo.v_demnew_daily_am;
After you create it:
EXEC [procedure_name];

Using MySQL triggers vs. a select query

I use a MySQL trigger to update a column called comments_count in one of my DB tables, but I want to know which is better and faster:
using a MySQL trigger, or a select query like the one below?
select count(*) from comments where discussion_id=something
These are different types of overhead:
With the trigger, you pay extra time during every insert, and the counter may drift out of sync over time for some unforeseen reason.
With the query, you will always get the right answer, but you have to calculate it at runtime. Usually this should be very fast, especially with an index on discussion_id.
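As a rough sketch of both options (the discussions table and its comments_count column are assumptions here, not taken from your schema):
-- Option 1: keep a counter column up to date with triggers (costs a little on every insert/delete)
CREATE TRIGGER comments_after_insert AFTER INSERT ON comments
FOR EACH ROW
UPDATE discussions SET comments_count = comments_count + 1 WHERE id = NEW.discussion_id;

CREATE TRIGGER comments_after_delete AFTER DELETE ON comments
FOR EACH ROW
UPDATE discussions SET comments_count = comments_count - 1 WHERE id = OLD.discussion_id;

-- Option 2: compute the count at read time; fast if discussion_id is indexed
CREATE INDEX idx_comments_discussion_id ON comments (discussion_id);
SELECT COUNT(*) FROM comments WHERE discussion_id = ?;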

How to combine 8 queries into one to create a procedure?

I have a table I created to match an online Template for uploading inventory to Amazon. In total, it has 440 columns. I'm not worried about that, and neither are they, it is mostly necessary. It pulls from two other tables that I'll call table1 and table2. I'll call the other one templateTable.
Basically, I'm starting with a TRUNCATE to completely wipe the information on the templateTable. I want it empty when it gets filled, for no reason other than it makes me feel comfortable. No other table gets truncated, just this table every time the query is run.
After that, there is a massive INSERT query that takes info from table1 and table2 and puts all of that into templateTable's specific columns.
Queries 3 through 8 are all UPDATE queries. I did them separately from the second query, where everything gets populated, because each update has a CASE and different requirements.
I wanted to create a procedure for these queries so they could just run the one procedure and call it a day. But I'm uncertain how to combine the 8 queries that fill and correct the information in this templateTable. I should mention I'm not just taking info from one table and sticking it in the templateTable- it is more like "case when table1.modelNum = 1234 then templateTable.modelNum = 5678".
You can wrap all of your 8 SQL statements inside a stored procedure in MySQL like this (note the DELIMITER change, so that the semicolons inside the body do not end the CREATE PROCEDURE statement early):
DELIMITER //
CREATE PROCEDURE MyProcedure()
BEGIN
<SQL STATEMENT 1>;
<SQL STATEMENT 2>;
<SQL STATEMENT 3>;
<SQL STATEMENT ...>;
END//
DELIMITER ;
Then to call it you submit:
CALL MyProcedure();
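As a more concrete sketch of what the body could contain (sku and title are invented columns; only table1, table2, templateTable and modelNum come from your description, so adjust to your real schema):
DELIMITER //
CREATE PROCEDURE RefreshTemplateTable()
BEGIN
    -- 1. wipe the template table
    TRUNCATE TABLE templateTable;

    -- 2. populate it from table1 and table2
    INSERT INTO templateTable (sku, title, modelNum)
    SELECT t1.sku, t2.title, t1.modelNum
    FROM table1 t1
    JOIN table2 t2 ON t2.sku = t1.sku;

    -- 3..8. the corrective UPDATEs, each with its own CASE logic
    UPDATE templateTable
    SET modelNum = CASE WHEN modelNum = '1234' THEN '5678' ELSE modelNum END;
END//
DELIMITER ;

CALL RefreshTemplateTable();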

Can I INSERT/UPDATE into two tables with one query?

Here is a chunk of the SQL I'm using for a Perl-based web application. I have a number of requests and each has a number of accessions, and each has a status. This chunk of code is there to update the table for every accession_analysis that shares all these fields for each accession in a request.
UPDATE accession_analysis
SET analysis_id = ? ,
reference_id = ? ,
status = ? ,
extra_parameters = ?
WHERE analysis_id = ?
AND reference_id = ?
AND status = ?
AND extra_parameters = ?
AND accession_id IN (
SELECT accession_id
FROM accessions
WHERE request_id = ?
)
I have changed the tables so that there is now a status table for accession_analysis, so when I update, I update both accession_analysis and accession_analysis_status, which has status, status_text and the id of the accession_analysis (a NOT NULL AUTO_INCREMENT column).
I have no strong idea about how to modify this code to allow this. My first pass grabbed all the accessions and looped through them, then filtered for all the fields, then updated. I didn't like that because I had many connections with short SQL commands, which I understood to be bad, but I can't help but think the only way to really do this is to go back to the loop in Perl holding two simpler SQL statements.
Is there a way to do this in SQL that, with my relative SQL inexperience, I'm just not seeing?
The answer depends on which DBMS you're using. The easiest way is to create a trigger on one table that provides the logic of updating the other table. (For any DB newbies -- a trigger is procedural code attached to a table at the DBMS (not application) layer that runs in response to an insert, update or delete on the table.). A similar, slightly less desirable method is to put the logic in a stored procedure and execute that instead of the update statement you're now using.
If the DBMS you're using doesn't support either of these mechanisms, then there isn't a good way to do what you're after while guaranteeing transactional integrity. However if the problem you're solving can tolerate a timing difference in the two tables' updates (i.e. The data in one of the tables is only used at predetermined times, like reporting or some type of batched operation) you could write to one table (live) and create a separate process that runs when needed (later) to update the second table using data from the first table. The correctness of allowing data to be updated at different times becomes a large and immovable design assumption, however.
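For example, a sketch of the trigger approach in MySQL (the column names on accession_analysis_status, and the assumption that accession_analysis has an id column, are guesses based on the question, not confirmed):
DELIMITER //
CREATE TRIGGER accession_analysis_after_update
AFTER UPDATE ON accession_analysis
FOR EACH ROW
BEGIN
    -- write a status row whenever an accession_analysis row changes
    INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
    VALUES (NEW.id, NEW.status, CONCAT('status set to ', NEW.status));
END//
DELIMITER ;
After this, your existing UPDATE on accession_analysis is enough on its own; the status table is populated as a side effect.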
If this is mostly about connection speed, then one option you have is to write a stored procedure that handles the "double update or insert" transparently. See the manual for stored procedures:
http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html
Otherwise, you probably cannot do it in one statement; see the MySQL INSERT syntax:
http://dev.mysql.com/doc/refman/5.5/en/insert.html
The UPDATE syntax allows for multi-table updates (not in combination with INSERT, though):
http://dev.mysql.com/doc/refman/5.5/en/update.html
Each table needs its own INSERT / UPDATE in the query.
In fact, even if you create a view by JOINing multiple tables, when you INSERT into the view, you can only INSERT with fields belonging to one of the tables at a time.
The modifications made by the INSERT statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For example, an INSERT into a multitable view must use a column_list that references only columns from one base table. For more information about updatable views, see CREATE VIEW.
Inserting data into multiple tables through an sql view (MySQL)
INSERT (SQL Server)
The same is true of UPDATE:
The modifications made by the UPDATE statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For more information on updatable views, see CREATE VIEW.
However, you can have multiple INSERTs or UPDATEs per query or stored procedure.
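As a minimal sketch of that (the column names are only illustrative, and this assumes the tables use InnoDB so the statements can be grouped in one transaction):
START TRANSACTION;
UPDATE accession_analysis
SET status = ?
WHERE analysis_id = ? AND reference_id = ?;
UPDATE accession_analysis_status
SET status = ?, status_text = ?
WHERE accession_analysis_id = ?;
COMMIT;
Both UPDATEs still have to be written out, but wrapping them in one transaction means either both take effect or neither does; putting them in a stored procedure additionally reduces the number of round trips from your Perl code.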

Can I launch a trigger on select statement in mysql?

I am trying to run an INSERT statement on table X each time I SELECT any record from table Y. Is there any way that I can accomplish that using MySQL only?
Something like triggers?
The short answer is no. Triggers are fired by INSERT, UPDATE or DELETE.
A possible solution for this rather rare scenario:
First, write some stored procedures that do the SELECTs you want on table X.
Then, restrict all users to use only these stored procedures and do not allow them to SELECT directly from table X.
Then alter the stored procedures to also call a stored procedure that performs the action you want (INSERT or whatever).
Nope - you can't trigger on SELECT. You'll have to create a stored procedure (or some other logging facility, like a log file or whatever) that you call alongside every query statement; it's easier if you create a wrapper that calls your query, calls the logging, and returns the query results.
If you're trying to use table X to log the order of SELECT queries on table Y (a fairly common query-logging setup), you can simply reverse the order of operations and run the INSERT query first, then run your SELECT query.
That way, you don't need to worry about linking the two statements with a TRIGGER: if your server crashes between the two statements then you already logged what you care about with your first statement, and whether the SELECT query runs or fails has no impact on the underlying database.
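A minimal illustration of that ordering (the log columns on X are placeholders; foo and task come from the example further down):
-- 1. log first, so the access is recorded even if the SELECT later fails
INSERT INTO X (logged_at, note) VALUES (NOW(), 'reading new tasks from Y');
-- 2. then run the SELECT you actually wanted
SELECT foo FROM Y WHERE task = 'new';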
If you're not logging queries, perhaps you're trying to use table Y as a task queue -- the situation I was struggling with that led me to this thread -- and you want whichever session queries Y first to lock all other sessions out of the rows returned, so you can perform some operations on the results and insert the output into table X. In that case, simply add some logging capabilities to table Y.
For example, you could add an "owner" column to Y, then tack the WHERE part of your SELECT query onto an UPDATE statement, run it, and then modify your SELECT query to only show the results that were claimed by your UPDATE:
UPDATE Y SET owner = 'me' WHERE task = 'new' AND owner IS NULL;
SELECT foo FROM Y WHERE task = 'new' AND owner = 'me';
...do some work on foo, then...
INSERT INTO X (output) VALUES ('awesomeness');
Again, the key is to log first, then query.