How to increase SQL Server database performance? - linq-to-sql

I have a table in a SQL Server database with only 900 records and 4 columns.
I am using LINQ to SQL. I wrote a select query to retrieve data from that table, but it is not returning any data; instead it throws a timeout error.
Please give me some ideas: first, how can I increase the timeout, and second, how can I improve the query's performance so the data is retrieved quickly?
Thanks

That is a tiny table; there is something very wrong with either your database or your application.
Try seeing what is happening in the database with SQL Server Profiler.

If you have just 900 records and four columns then, unless you are storing many megabytes of data in each field, the query should be very fast. I think your problem is that the connection is failing, possibly due to a firewall or other networking problem.
To debug, I'd suggest running a simpler query and seeing whether you can get any data at all. Also try running the same query from SQL Server Management Studio to see if it works there.
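If the query does reach the server but never completes, blocking is another common cause of timeouts. While the query is hanging, you can run a quick check from Management Studio (a sketch; sys.dm_exec_requests is available in SQL Server 2005 and later):
-- List requests that are currently blocked and who is blocking them;
-- no rows means nothing on the server is waiting on a lock
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;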

Related

Self-hosted MySQL server on Azure VM hangs on SELECT where DB is ~100MB

I'm doing a SELECT from 3 joined tables on a MySQL 5.6 server running on an Azure instance with the InnoDB buffer pool set to 2GB. I used to have a 14GB RAM, 2-core server and I just doubled the RAM and cores hoping this would help my SELECT, but it didn't.
The 3 tables I'm selecting from are 90MB, 15MB and 3MB.
I don't believe I'm doing anything crazy in my query, where I select a few booleans, yet I'm seeing this SELECT hang the server pretty badly and I can't get my data. I do see traffic increasing to around 500MB/s via MySQL Workbench, but I can't figure out what to do with this.
Is there anything I can do to get my SQL queries working? I don't mind waiting 5 minutes for that data, but I need to figure out how to get it.
==================== UPDATE ===============================
I was able to get it done by cloning the 90MB table and filling the clone with a filtered copy of the original table. The clone ended up being ~15MB, and then I just did the select over all 3 tables, joining them via ids. Now the request completes in 1/10 of a second.
What did I do wrong in the first place? I feel like there is a way to increase the sizes of some packets to get such queries to work? Any suggestions on what I should google?
Just FYI, my select query looked like this:
SELECT
    text_field1,
    text_field2,
    text_field3, ...
    text_field12
FROM
    db.major_links, db.businesses, db.emails
WHERE bool1 = 1
  AND bool2 = 1
  AND text_field IS NOT NULL OR text_field != ''
  AND db.businesses.major_id = major_links.id
  AND db.businesses.id = emails.biz_id;
So bool1, bool2 and the text fields I'm filtering on are the fields from that 90MB table.
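For reference, the cloning workaround from the update above might look something like this in MySQL (a sketch; the name businesses_filtered is made up, and I'm assuming db.businesses is the 90MB table, which the question doesn't actually say):
-- Materialize the filtered 90MB table into a smaller working copy,
-- then run the three-way join against the copy instead
CREATE TABLE db.businesses_filtered AS
SELECT *
FROM db.businesses
WHERE bool1 = 1
  AND bool2 = 1
  AND text_field IS NOT NULL
  AND text_field != '';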
I know this might be a bit late, but I have some suggestions.
First, take a look at the max_allowed_packet setting in your my.ini file. On Windows it is usually found here:
C:\ProgramData\MySQL\MySQL Server 5.6
This controls the maximum packet size, and it usually causes errors on large queries if it isn't set correctly. I have mine set to 100M.
Here is some documentation for you:
Official documentation
In addition, I've seen slow queries when there are a lot of conditions in the WHERE clause, and here you have several. Make sure you have indexes, and compound indexes, on the columns in your WHERE clause, especially the ones involved in the joins.
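For example (a sketch; the index names are made up, and I'm guessing from the query above which tables the filter columns live on):
-- Check the current packet limit, then raise it to 100MB for new
-- connections (set max_allowed_packet in my.ini to make it permanent)
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 100 * 1024 * 1024;

-- Index the join columns and the filtered columns from the query above
CREATE INDEX idx_businesses_major_id ON db.businesses (major_id);
CREATE INDEX idx_emails_biz_id ON db.emails (biz_id);
CREATE INDEX idx_businesses_flags ON db.businesses (bool1, bool2);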

How to increase query performance on SQL Server 2000 compared to SQL Server 2008?

I have a complex query and a big database. When I execute my query on SQL Server 2008 it takes 8-10 minutes, but when I execute it on SQL Server 2000 it takes 1-2 hours. Why? I used indexes and looked at the execution plan, but I didn't solve this problem. Can anybody help me, or does anyone have a suggestion?
You could possibly create an auxiliary table for this query. That way the query would run faster, since the heavy work is done in the background before the query runs. Note: this usually only works if the data being retrieved doesn't have to be in sync with the live database.
Also, depending on how you want to use the data, you might be able to cache or precompute the results.
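A minimal sketch of the auxiliary-table idea (the table and column names here are hypothetical; SELECT INTO works on both SQL Server 2000 and 2008):
-- Precompute the expensive aggregation into a summary table on a
-- schedule; the application then queries the small, indexed table
IF OBJECT_ID('dbo.OrderSummary') IS NOT NULL
    DROP TABLE dbo.OrderSummary;

SELECT customer_id,
       COUNT(*) AS order_count,
       SUM(total) AS total_spent
INTO dbo.OrderSummary
FROM dbo.Orders
GROUP BY customer_id;

CREATE INDEX IX_OrderSummary_customer ON dbo.OrderSummary (customer_id);

-- Reads are now a cheap indexed lookup instead of the complex query
SELECT order_count, total_spent
FROM dbo.OrderSummary
WHERE customer_id = 42;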

MySQL is so slow on Amazon EC2 m1.large

I'm migrating my .NET/MSSQL stack to a RoR/MySQL/EC2/Ubuntu platform. After I transferred all my existing data into MySQL, I found MySQL's query speed incredibly slow, even for a super-basic query like SELECT COUNT(*) FROM countries; it's just a country table containing only around 200 records, but the query takes 0.124ms. That's obviously not normal.
I'm a newbie to MySQL. Can anyone tell me what the possible problem might be, or which initial optimization settings I should turn on after installing MySQL?
A COUNT(*) operation cannot really be optimized, since it has to either do a full table scan (O(n)) or read the cached table count (O(1)), depending on the storage engine you are using (MyISAM keeps an exact row count; InnoDB has to scan). Either way, your query should not be that slow. You might want to get in touch with AWS support; it's possible the box is being choked by some other process running on it.
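As a quick check (a sketch; the table name comes from the question), you can see which engine the table uses and time the query on the server itself, away from any client or network overhead:
-- MyISAM tables cache an exact row count; InnoDB tables do not
SHOW TABLE STATUS LIKE 'countries';

-- Profile the count on the server to rule out client overhead
SET profiling = 1;
SELECT COUNT(*) FROM countries;
SHOW PROFILES;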

Error code 2013: Lost connection to MySQL server during the query. How can I improve this query?

I am getting the error stated in my post title. I have two tables. The first, large one has more than 4,000,000 records; the second, small one has around 7,000 records. I want to search for a value in the small table and, if it is found, extract the whole matching record from the large table. The command never finishes and always loses the connection to the database. I tried limiting the output to only 50 records; the same thing happens. Please help me. If I need something like indexing (I read this might solve such performance problems), please clarify how, as I'm not a DBA.
SELECT * FROM db.large, db.small
WHERE large.value = small.value;
EDIT: I use MySQL Workbench 5.2.41 CE.
At one point on a previous project, I could reproducibly crash the MySQL server with a pretty simple query. In the code that called the database, I saw the same error message. Can you verify that the MySQL server's process ID is the same before and after the query? Chances are that your OS restarts the MySQL server immediately after the crash, and the MySQL command-line client automatically reconnects (though it emits a notice when it does).
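On the indexing part of the question: an index on the join column lets the server look up the matching rows instead of scanning all 4 million records for each comparison. A sketch, using the tables from the query above:
-- Index the join column on both tables so the join becomes a lookup
CREATE INDEX idx_large_value ON db.large (value);
CREATE INDEX idx_small_value ON db.small (value);

-- The same query, written as an explicit join
SELECT large.*
FROM db.small
JOIN db.large ON large.value = small.value;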

How to select large data from SQL Server 2008 R2?

I have run into a problem selecting large data from SQL Server. I have a view with 200 columns and 200,000 rows, and I am using the view for Solr indexing.
I tried to select the data with paging, but it took a lot of time (more than 6 hours). Now I am selecting it without paging and it takes 1 hour, but SQL Server uses a lot of memory.
What is the best method or approach to selecting large data in such situations from SQL Server 2008 R2?
Thanks in advance.
200k rows is not that much and definitely shouldn't take 6 hours, not even 1 hour.
I couldn't tell whether the problem is in the actual SELECT or in bringing the result to the application.
I would recommend running the SELECT with NOLOCK to ignore blocking; maybe your table is being accessed by other processes while you are running the query:
SELECT * FROM TABLE WITH (NOLOCK)
If the problem is in bringing the data to the application, you'll need to provide more details on how you are doing it.
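If paging was slow because each page re-read the whole view, keyset (seek-based) paging is usually far cheaper on 2008 R2 than numbering all the rows for every page. A sketch, assuming the view exposes a unique key column (both the view name dbo.YourView and the column id are hypothetical):
-- Export in batches: remember the highest id from the previous batch
-- and seek past it, instead of re-scanning the view for every page
DECLARE @last_id INT = 0;

SELECT TOP (10000) *
FROM dbo.YourView
WHERE id > @last_id
ORDER BY id;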
I'd suggest taking a look at your execution plan. Look at the properties of the very first operator and see whether you're getting "Good Enough Plan Found" or a timeout. If it's the latter, you may have a very complicated view, or you may be nesting views (a view calling a view). See what you can do to simplify the query in order to give the optimizer a chance to create a good execution plan.
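To see that property without running the hour-long query, you can ask for the estimated plan as XML (a sketch; the view name is hypothetical, SET SHOWPLAN_XML is standard T-SQL):
-- Return the estimated plan instead of executing the statement; look
-- for StatementOptmEarlyAbortReason="TimeOut" in the resulting XML
SET SHOWPLAN_XML ON;
GO
SELECT * FROM dbo.YourView;
GO
SET SHOWPLAN_XML OFF;
GO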