Speeding up a SQL Query - sql-server-2008

I have been running this query for the past 5 days and it still hasn't completed.
I wanted to see if someone could look at my query and let me know if there is a way to speed it up.
A little background: this is a query to purge threat event 1095 from the SQL database for our ePO server. The events are clogging up the database; they currently account for 165GB of it. I have been running the query to chip away at the data in blocks. The current query purges data between '2016/03/20' and '2016/05/20'. I need to keep 120 days of data for security reasons.
SET ROWCOUNT 10000
DELETE FROM EPOEvents
WHERE ([EPOEvents].[ThreatEventID] = '1095')
  AND ([EPOEvents].[ReceivedUTC] BETWEEN '2016/03/20' AND '2016/05/20')
WHILE @@ROWCOUNT > 0
BEGIN
    DELETE FROM EPOEvents
    WHERE ([EPOEvents].[ThreatEventID] = '1095')
      AND ([EPOEvents].[ReceivedUTC] BETWEEN '2016/03/20' AND '2016/05/20')
END
SET ROWCOUNT 0
GO
Any help is appreciated.
Thank you
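One common reshaping of the batched delete above, sketched here under the assumption that an index exists on (ThreatEventID, ReceivedUTC): replace the deprecated SET ROWCOUNT with DELETE TOP (n), so each batch is its own short transaction and the log can be reclaimed between batches.

```sql
-- Sketch only: assumes an index on (ThreatEventID, ReceivedUTC);
-- without one, every batch re-scans the 165GB table.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM EPOEvents
    WHERE ThreatEventID = '1095'
      AND ReceivedUTC BETWEEN '2016/03/20' AND '2016/05/20';

    SET @rows = @@ROWCOUNT;
END
```

Under the SIMPLE recovery model each batch's implicit commit lets log space be reused; under FULL recovery, transaction log backups between batches serve the same purpose.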

Related

How to prevent Lock Timeout in Mysql Update Table with millions request

I have a problem with one table in MySQL.
The table structure is like this:
Every table_detail gets an insert every minute, and users need to see all of the detail data on one page, so I created table_header to collect the latest row from every table_detail.
The problem is that table_header locks up because millions of update requests hit this table.
How can I resolve this problem?
I have already set
innodb_lock_wait_timeout 100
innodb_write_io_threads 24
Thanks in advance.
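One way to cut the lock contention on table_header, sketched below with a hypothetical schema (the actual structure isn't shown in the question): collapse the read-then-update round trip into a single upsert, so each writer holds one row-level lock for only the duration of one statement.

```sql
-- Hypothetical schema: table_header(detail_id PRIMARY KEY, last_value, last_seen)
INSERT INTO table_header (detail_id, last_value, last_seen)
VALUES (42, 'latest reading', NOW())
ON DUPLICATE KEY UPDATE
    last_value = VALUES(last_value),
    last_seen  = VALUES(last_seen);
```

Because the statement is atomic, there is no window between reading the current header row and writing the new one, which is where long lock waits usually accumulate.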
Thanks to @shadow.
After moving to a SaaS MySQL server and, on Google Cloud SQL's recommendation, setting up replication (one server for INSERT/UPDATE and another for SELECT), the locking stopped, but the server cost was very high.
After further research we realized that not all of the data has to be stored in MySQL, so we now use ScyllaDB to handle the high request rate.
Thanks all.

MySQL + EF update very slow?

I want to update about a million (or half a million) records in my database, but it is going very slowly. It took a couple of hours to update only 100,000. Do you have any ideas?
Basically, I have a process that encrypts a particular column value and then updates it back to the database. I can't do it at the database level because of a code-integration dependency.
sample code:
dbContext.Configuration.AutoDetectChangesEnabled = false;
List<Users> usersLst = dbContext.Users.AsNoTracking().Take(500000).ToList();
foreach (var usr in usersLst) {
    usr.Password = this.Encrypt(usr.Password);
    dbContext.Entry(usr).State = System.Data.Entity.EntityState.Modified;
}
dbContext.SaveChanges();
Note: I tried the same thing with SQL Server and it is much faster; 1M records update in 15-20 minutes.
An UPDATE of that size will take too much time. From my personal experience: don't update in place if you have more than 10k rows. A better way is:
1. Write a subquery for that field that pairs the values you need to write with the old data from your table.
2. Convert this SELECT query into a view (if you are new to creating views, just check the structure here).
3. Write an INSERT query that takes its values from the view you just created.
The INSERT won't take too much time; for me it took 10 minutes to insert 10 million (1 crore) rows into a table from a view. If that still feels slow, you can do the insert via a trigger in MySQL, which took only 3 or 4 minutes.
Hope I have answered your question. Just try it and give me feedback.
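The view-then-insert approach in the steps above could look roughly like this. All table, column, and key names are assumptions, and AES_ENCRYPT only stands in for the idea of the transformed value; in the poster's case the new values come from application code, so they would be loaded into a staging table first.

```sql
-- Step 1: a view pairing each row with its replacement value (sketch).
CREATE VIEW users_encrypted AS
SELECT u.id, u.name, HEX(AES_ENCRYPT(u.password, 'key')) AS password
FROM users u;

-- Step 2: bulk-insert into a fresh table instead of updating in place.
INSERT INTO users_new (id, name, password)
SELECT id, name, password FROM users_encrypted;

-- Step 3: swap the tables once the copy is verified.
RENAME TABLE users TO users_old, users_new TO users;
```

The win comes from replacing millions of single-row UPDATEs (each with its own index maintenance and round trip) with one set-based INSERT ... SELECT.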

Simple UPDATE sometimes crashes MySQL when I execute it

I have a very strange problem on my server. My site is on a dedicated host and runs very well. But sometimes, when I run a very simple query like:
UPDATE product SET active = 1 WHERE id_product = 99
MySQL literally crashes, often on the first such query of the day (?!). The MySQL service then becomes impossible to restart and I have to reboot the server. If I re-execute the same query afterwards, it takes 0.001 seconds.
This can happen with any UPDATE query. I have tried repairing all the tables; I even dropped and re-created the database, and the problem remains.
I don't know what the problem could be. Does anyone have an idea?

How to log query and its execution time of each query which runs on a db?

I have a huge database with more than 250 tables, and different types of queries run against it. The database has grown over the years, and I now need to optimise it and its queries. I have already applied optimisation techniques such as indexing.
My problem is: how do I log each query that runs on the database, along with its execution time? Then I can analyse which queries take how many seconds and optimise them.
I know a MySQL trigger might seem ideal for this, but I don't know how to write a trigger for the whole database that logs each query to a table with its execution time. I want to capture all the CRUD operations that occur in the database.
How can I get it done ?
Use the MySQL slow query log. This will help you capture only the queries that are slow, instead of logging and analysing every query. You just need to set the long_query_time parameter to something like 1 or 2 seconds.
You can also set long_query_time = 0 to see all SQL queries!
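The slow query log mentioned in the answer can be enabled at runtime, without a server restart; a minimal sketch (defaults and privileges vary by installation):

```sql
-- Enable the slow query log dynamically.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;     -- log statements slower than 1 second
SET GLOBAL log_output = 'TABLE';    -- write to mysql.slow_log instead of a file

-- Inspect the captured queries and their execution times:
SELECT start_time, query_time, sql_text
FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 20;
```

Setting long_query_time = 0 captures every statement, which is useful for a short audit window but too heavy to leave on permanently.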

application setup to avoid timeouts and deadlocks in SQL server DB table

My application accesses a local DB where it inserts records into a table (roughly 30-40 million a day). I have processes that run, process data, and perform these inserts. Part of the process involves selecting an id from an IDs table, where each id is unique, and this is done using a simple:
BEGIN TRANSACTION
    DECLARE @id INT
    SELECT TOP 1 @id = siteid FROM siteids WITH (UPDLOCK, HOLDLOCK)
    DELETE siteids WHERE siteid = @id
COMMIT TRANSACTION
I then immediately delete that id from that very table with a separate statement, so that no other process grabs it. This is causing tremendous timeout issues, which surprises me with only 4 processes accessing it. I also get timeouts when checking my main post table to see whether a record was inserted using the id obtained above. Everything runs fast individually, but with all the deadlocks and timeouts I think this indicates poor design and is a recipe for disaster.
Any advice?
EDIT
This is the actual statement that someone else here helped with. I then removed the DELETE and run it from my code as a separately executed statement. Will an ORDER BY clause really help here?
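One pattern that often reduces this kind of contention, sketched against the siteids table from the question: claim and remove the id in a single atomic DELETE with an OUTPUT clause, and add READPAST so concurrent processes skip rows that are already locked instead of queuing behind them.

```sql
DECLARE @id TABLE (siteid INT);

-- The "select" and the delete are one statement, so there is no window
-- in which another process can grab the same row.
DELETE TOP (1) FROM siteids WITH (ROWLOCK, READPAST)
OUTPUT deleted.siteid INTO @id;

SELECT siteid FROM @id;
```

Because no transaction spans two statements, the UPDLOCK/HOLDLOCK range locks from the original version (a common deadlock source) are avoided entirely.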