I'm wondering what could possibly cause this? I am stumped; I have been searching for an answer for two days now.
I have a big table, around 390,000 rows, so there's no problem with the 50% threshold (MyISAM fulltext ignores words that appear in more than half of the rows). I built this site on my test server using XAMPP and have now moved everything, website files plus MySQL tables, to my live server running Ubuntu.
I have also set the only two settings I can think of in my my.cnf: ft_min_word_len (the minimum indexed word length) and ft_stopword_file. On the test server I get perfect results from my searches. On my live server I get barely any results (although some, sometimes).
I am just wondering which settings I could have forgotten that would get my live server working properly.
I know this is a fuzzy question but I think it could be useful for many people in the same situation in the future.
Thank you in advance!
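One thing that is easy to forget here: those two settings only affect FULLTEXT indexes that are built after the change, so the index has to be rebuilt on the live server once my.cnf is updated and MySQL restarted. A minimal sketch of the relevant config (the values and stopword path are assumptions, not your actual settings):

    [mysqld]
    # index words of 3+ characters instead of the default 4 (assumed value)
    ft_min_word_len = 3
    # assumed path; pointing at an empty file disables stopwords entirely
    ft_stopword_file = /etc/mysql/stopwords.txt

Then, for a MyISAM table, rebuild the index so the new settings take effect:

    REPAIR TABLE your_table QUICK;

An index that was imported or created under the old settings silently keeps the old behavior, which would match the "barely any results" symptom.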
I have just moved a WP installation from one hosting provider to another. Everything went fine except for one problem with the new installation. Please note that I have moved from a regular VPS to a fairly powerful and fast dedicated machine.
The thing is that the website is now slower than it was on the previous server. It takes 6-7 seconds to load a page and, according to Chrome's Dev Tools network panel, there is a 3-4 second wait to get the first response byte (TTFB), which is insane.
I have tried the following with no success:
Review database for anomalies
Disable all plugins (and delete them)
Disable template (and delete it)
With those last two actions I lowered the loading time to 5-6 seconds, which is still a lot for a small site (a few hundred posts and 50-60 pages) with no comments enabled. I still have the 3-4 second TTFB.
After that, I installed the Query Monitor plugin and found out that, on every page load, WP performs hundreds of database queries (ranging from 400 to 800) and, in some cases, even 1500. OMG!
Honestly, I am quite lost here. I mean, on the one hand I have this strange database behavior I cannot really understand, and on the other I cannot help wondering how it was faster on the previous, slower server.
By the way, I have moved from MySQL to MariaDB, which should if anything be even faster. Indexes are kept when dumping and importing the file. I am lost. :(
Any help is greatly appreciated. Apologies for my English (not my first language), and please let me know if there is some important information missing. I will be glad to provide whatever is needed to help troubleshoot this.
Thanks in advance!
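Before tuning anything, it is worth double-checking the claim that the indexes survived the dump and import, since a single dropped index would explain exactly this kind of slowdown. A quick comparison on both servers (wp_posts is just an example table):

    -- list all indexes on a table; run on old and new servers and compare
    SHOW INDEX FROM wp_posts;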
I think you should optimize your MySQL config (my.cnf on Linux or my.ini on Windows). To spot problems in MySQL you can try running the MySQLTuner script: https://github.com/major/MySQLTuner-perl.
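Running it is just a download plus a Perl invocation; the raw URL below is my assumption of the script's current location in that repo:

    # fetch the script and run it against the local MySQL instance
    wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
    perl mysqltuner.pl

It prompts for admin credentials, then prints recommendations based on the server's runtime statistics, which is a quick way to spot a my.cnf that was never tuned after a migration.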
A little bit stumped here; I can't seem to find anyone with the same question. I feel like my question is a lot simpler than it probably sounds. Basically, I want to have an exact copy of my Rails database on a different server, kept up to date as it is being populated. Let me explain.
The bottom line is that I have a production website that needs to be up at all times.
Currently, if the website goes down, I have to fail over to the latest copy I have of the database (because the server is down), and when the original server comes back up I have to manually import anything new back into it. So I am looking for a way to keep the MySQL servers on both machines in sync with each other, so that if one goes down they both still have the same information.
I understand that this can add a lot of overhead in the Rails app, which I am not that concerned about, as I can find ways to defer the MySQL queries. Unless someone knows a better way to do this?
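What you are describing sounds like standard MySQL replication, which works at the database layer and needs no changes in the Rails app at all. A minimal primary/replica sketch, where host names, user, and password are placeholders (if you need writes to survive a failover in either direction, look at master-master replication instead):

    # my.cnf on the primary
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # my.cnf on the replica
    [mysqld]
    server-id = 2

Then, on the replica, point it at the primary using the coordinates reported by SHOW MASTER STATUS on the primary (the replication user must exist there with the REPLICATION SLAVE privilege):

    CHANGE MASTER TO
      MASTER_HOST='primary.example.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000001',
      MASTER_LOG_POS=107;
    START SLAVE;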
I have been working on a requirement for our apache2 logs to be recorded to a MySQL database, instead of the usual text log file.
I had no difficulty in accomplishing the setup and config, and it works as expected; however, there seems to be a bit of information that I cannot find (or it may very well be that I am searching for the wrong thing).
Is there anyone out there who uses (or has used) libapache2-mod-log-sql who can tell me more about its connection to MySQL? Is it persistent? What kind of resource impact should I expect?
These two points are central to my research, yet information on them is rare.
Thanks in advance.
Hey guys, I posted this question in another thread but didn't get a helpful response, partly because I don't think I explained myself properly. So I'm going to try again.
I've been programming a back end server in VB.NET, and it's been using a MySQL database to store information. Up until a couple of days ago I was using a webhost's MySQL server for this.
I did not care to renew my webhost, so I've moved everything to a home server to continue work on my program. I've got MySQL 5.5 installed (which is a newer version than the one on my previous webhost) and everything is working perfectly except for one thing so far.
When starting up for the first time, this program sends a query containing about a million table inserts. The query looks something like "INSERT into blah VALUES(1,1,1,1,1);INSERT into...." and so on. This used to take about 5-10 minutes on my webhost (my server program ran on my home machine and sent the data over the net to the webhost's MySQL).
Now that everything is local I was hoping this would be faster, but it doesn't really matter; I just need it to work. When I send this query it just locks up for a minute or so and then returns a timeout. When I check the table in the database, it has loaded exactly 1000 rows every time I try this.
Now I'm assuming this is some sort of settings issue, but I've played with my my.ini to see if it would help, and it didn't. I tried switching to some of the pre-packaged configs (my-huge, my-innodb, etc.) and those did not help either. I would assume that if anything it would just take longer, not time out immediately after 1000 inserts.
And just for some background info, the server machine I'm using has a quad-core Core i7, 8 GB of RAM, and a 1 TB hard drive, running Windows 7.
Any help would be great. Thank you.
Your query is probably too large; you can tune MySQL's limit via the max_allowed_packet option.
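For example, either in the config file (the 64M figure is an assumption; size it to fit your largest batch comfortably):

    [mysqld]
    max_allowed_packet = 64M

or at runtime, which affects new connections and requires the SUPER privilege:

    SET GLOBAL max_allowed_packet = 67108864;  -- 64 MB

Note that the client side may enforce its own packet limit as well, so it can need raising in both places.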
You can also save some bytes by combining several inserts into a single multi-row statement, like this: INSERT INTO blah VALUES (1,1,1,1,1),(2,2,2,2,2),(3,3,3,3,3); But keep in mind that if such a combined statement fails, every row in it fails with it.
But in my opinion that's not the smartest way to do it. If your application has to import a huge SQL dump on start-up, it could simply invoke the mysql executable, like this: mysql -uroot -ppassword db_name < dump.sql, and you're done. That is probably the most efficient way to accomplish this task.
Our relatively high traffic website just screeched to a halt, and we're totally stumped. We run on Django and Mysql (InnoDB), and we're trying to figure out why it's all of a sudden totally slow.
Here's what we know so far:
On our mysql server, a simple query (from django shell) runs fast.
On our app server, a simple query (from django shell) runs very slow.
Without any details on the query or the tables involved, it is quite difficult to answer this question.
Most likely it is because of a lot of data in the table and a missing index on the field you are querying.
This would explain why it is slow on the production box, but fast on the dev box (since there's less data).
To answer the question better, could you provide us with more details? Table structure, query, number of rows in the table, etc. ?
More guesses: disk I/O on the app server could be a problem; maybe the MySQL log files are not properly configured (especially with InnoDB this can cause trouble). Maybe a load-heavy query is running too often? Table locks when multiple users write to/read from the same tables?
As I said, without more details it is quite difficult to guess. But I hope I could at least point you in the right direction.
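If a missing index does turn out to be the cause, the fix itself is a one-liner; the table and column names below are hypothetical:

    -- index the column the slow query filters on (hypothetical names)
    CREATE INDEX idx_orders_customer_id ON orders (customer_id);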
Run EXPLAIN on the SELECT.
Study this page carefully:
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
Understanding the concepts on that page is key to properly indexing your tables.
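For example, with a hypothetical query:

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

In the output, look at the key and rows columns: key = NULL combined with a rows estimate near the table size means MySQL is doing a full table scan instead of using an index.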
Thanks for the responses everyone.
Turns out it was a DNS issue (which was a regression). MySQL is really stupid in that, by default, it does a reverse DNS lookup on every incoming connection (to match hostnames in the grant tables). Those lookups got really slow, which killed all the traffic between the app server and the DB server. It was as simple as adding "skip-name-resolve" to our my.cnf.
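For anyone hitting the same thing, the change is just this (restart mysqld afterwards, and note that accounts in the grant tables must then be defined by IP address, since hostnames can no longer be resolved):

    [mysqld]
    skip-name-resolve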
Are the 'mysql server' and 'app server' on the same box and talking to the same DB instance?
Your question suggests not, so I'd look for a problem on the network - start by pinging the database server from each box and compare the results.
Once you've done that you'll need to be a little more specific about the problem: were the ping times the same, are you running the same query, etc.?
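For example, from each box (the hostname is a placeholder):

    ping -c 10 db.example.com

Consistently higher round-trip times, or any packet loss, from the app server would point at the network rather than at MySQL itself.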