Error while installing Drupal 7 - MySQL

I know that many people get the same kind of error when they try to install Drupal 7. But after trying out the solutions that I read about, I still didn't manage to install Drupal properly.
After installing 27 modules (of 28) I get the following message:
An AJAX HTTP error occurred.
HTTP Result Code: 500 Debugging information follows.
Path: `http://localhost/drupal/install.php?profile=standard&locale=en&id=1&op=do`
StatusText: Service unavailable (with message)
ResponseText: PDOException: SQLSTATE[HY000]:
General error: 2006 MySQL server has gone away:
SELECT expire, value FROM {semaphore}
WHERE name = :name; Array ( [:name] => menu_rebuild )
in lock_may_be_available()
(line 167 of C:\wamp\www\drupal\includes\lock.inc).
Uncaught exception thrown in shutdown function. PDOException:
SQLSTATE[HY000]: General error: 2006 MySQL server has gone away:
DELETE FROM {semaphore} WHERE (value = :db_condition_placeholder_0) ;
Array ( [:db_condition_placeholder_0] => 802228002541876118e8773.14607894 )
in lock_release_all()
(line 269 of C:\wamp\www\drupal\includes\lock.inc).
Uncaught exception thrown in session handler. PDOException: SQLSTATE[HY000]:
General error: 2006 MySQL server has gone away:
SELECT 1 AS expression FROM {sessions} sessions
WHERE ( (sid = :db_condition_placeholder_0) AND
(ssid = :db_condition_placeholder_1) );
Array ( [:db_condition_placeholder_0] => ZLNqcOjZv5_OY8Y_fNwE0Il6hHmlJCLVL9qK5XUBTIo
[:db_condition_placeholder_1] => ) in _drupal_session_write()
(line 209 of C:\wamp\www\drupal\includes\session.inc).
When I restart the WAMP server after it has stopped, I can finish installing Drupal anyway, but then it does not work properly.
I have raised max_allowed_packet to various high values.
I have raised other values in my my.ini.
I have reinstalled WAMP from scratch.
Each time I try again, I delete Drupal, my database and my history in Chrome.
My PHP version is 5.5.12.
I am working on Windows 8.
I work on my localhost.
What else can I try?
I tried to increase the numbers in my.ini and php.ini, but that didn't help. I drop my database after each error.
I have a new idea myself:
Could it have something to do with my settings.php?
Before I start, I change $update_free_access = FALSE; to TRUE (as administrator, in Notepad++). After I close the file, I reopen it to check whether it has really changed. After I get the error, I reopen settings.php and it says $update_free_access = FALSE; again, but I did not change it back myself.
Could this be the source of my problem? How can I avoid this?
And should I do something with this in the same file?
'pdo' => array(
  PDO::ATTR_TIMEOUT => 5,
),
Is there something else I should change in the settings.php?
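For reference, that PDO fragment belongs inside the $databases array in settings.php; a minimal sketch (the database name and credentials below are placeholders, not taken from the question) would look roughly like this:
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'drupal',     // placeholder database name
  'username' => 'drupaluser', // placeholder credentials
  'password' => 'secret',
  'host' => 'localhost',
  'prefix' => '',
  // Optional PDO driver options; a longer timeout can help on a slow local stack.
  'pdo' => array(
    PDO::ATTR_TIMEOUT => 5,
  ),
);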

Perhaps there is not enough MySQL memory. Increasing the MySQL memory settings might solve this issue (my.cnf).
See here: https://www.drupal.org/node/1014172
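A rough sketch of the my.cnf values people usually raise for this (the exact numbers depend on your machine and are only illustrative, not taken from the linked page):
[mysqld]
max_allowed_packet = 64M
innodb_buffer_pool_size = 256M
key_buffer_size = 64M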

Try raising max_allowed_packet in your my.ini (it is a MySQL server setting rather than a PHP one):
max_allowed_packet=100M
I think that will fix your problem. Don't forget to restart WAMP.
Regards.

I think the first time you tried installing Drupal, the installation must have hit some unexpected termination. The above error also appears when the tables and variables are not all configured properly at install time.
I suggest dropping your database and attempting a fresh install. Also, do ensure your installation is not timing out.
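If the installer is timing out on the PHP side, the relevant php.ini limits are worth checking as well; a sketch with commonly suggested values (the exact numbers are only illustrative, not from this answer):
max_execution_time = 300
memory_limit = 256M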

RE: I have raised other numbers in my my.ini.
If you are using the WAMPServer 64-bit install, please check that the my.ini file contains this section header: [wampmysqld64]
i.e.
replace [wampmysqld] with [wampmysqld64].
This section name should match the service name that MySQL runs under, which in the 64-bit install of WAMPServer is wampmysqld64. Unfortunately, the 2.5 (64-bit) release has this error in it.
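In other words, the MySQL section of my.ini should look something like this (max_allowed_packet is shown underneath the corrected header purely as an example setting):
[wampmysqld64]
max_allowed_packet = 64M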

Please check if your environment matches System requirements for Drupal.
Drupal 7:
MySQL 5.0.15/MariaDB 5.1.44/Percona Server 5.1.70 or higher with PDO,
PostgreSQL 8.3 or higher with PDO,
SQLite 3.3.7 or higher
and the database requirements, which say:
It may be necessary to set the system variable max_allowed_packet to at least 16M. Some inexpensive hosting plans set this value too low (the MySQL default is only 1M). In that case, you may need to choose a better hosting plan.
So basically you need to edit your my.cnf (e.g. ~/.my.cnf) and set or increase the value of max_allowed_packet under your [mysqld] section, e.g.
[mysqld]
max_allowed_packet=64M
If it's still failing, don't hesitate to increase it further, e.g. to 256M, in case Drupal needs to parse a huge amount of data.
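After restarting MySQL you can confirm that the new value actually took effect from the MySQL prompt:
SHOW VARIABLES LIKE 'max_allowed_packet';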
See: B.5.2.9 MySQL server has gone away for more detailed information.

Related

max_allowed_packet, I don't have MySQL

I'm trying to run sonar-runner.bat; when it has almost finished analyzing, it reports something about max_allowed_packet being more than something-something, so it fails.
From searching around, everyone says that I should configure the my.ini file inside the MySQL folder. But:
I don't have MySQL installed.
Log:
Error: unable to execute sonar
error: caused by: unable to save file sources
error: caused by:
Error updating database. cause: com.mysql.jdbc.packettoobigexception: packet for query is too large (3215747 >1048576). you can change this..bla..bla
the error may involve org.sonar.core.source.db.filesourcemapper.insert-inline
the error occurred while setting parameters
How can I change it?
Help!
As @Fabrice - SonarQube Team suggested, you are running the SonarQube server on top of MySQL. If you want to check, you can look inside the sonar.properties file.
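If it points at MySQL, you will typically see JDBC settings along these lines (host, port and credentials here are placeholders):
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar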
To remove this issue you have to modify my.cnf (Linux) or my.ini (Windows):
[mysqld]
max_allowed_packet=256M
If you want to set the same thing globally, log in to MySQL and run the following command (note that a value set with SET GLOBAL does not survive a server restart):
SET GLOBAL max_allowed_packet=1073741824;
Once you have made these settings, please restart the MySQL server.
I found the answer myself.
It looks like I didn't realize how the database works in SonarQube.
By default, SonarQube uses H2. That is a good one, and I believe this problem would not have happened with it.
It turned out that someone from my company was actually using his own MySQL server. So I found the MySQL folder, changed the .ini/.cfg file, and set max_allowed_packet to a bigger number.
Voila!
Thanks for your help!

MySQL: wait_timeout & the "MySQL server has gone away" error message

I have a page which selects data from a table based on the table's primary key,
e.g. SELECT * FROM table_name WHERE id=primary_key_value
As soon as I execute this from my CI page, I get the following error message:
"MySQL server has gone away"
I searched the MySQL reference manual; it says the cause is wait_timeout in the configuration file. My configuration file has the following setting:
wait_timeout = 10
Could someone help me resolve this issue?
See: MySQL Server Has Gone Away - MySQL 5.5 (link), and another one.
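A common suggestion for this class of error is to raise wait_timeout from the very low 10 seconds shown above; a minimal sketch (the value is only illustrative) in my.cnf would be:
[mysqld]
wait_timeout = 28800
or, at runtime from the MySQL prompt (this does not survive a server restart):
SET GLOBAL wait_timeout = 28800;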

Difference between the phpMyAdmin MySQL web client and the terminal client

I get a problem (#2006 MySQL server has gone away) with MySQL while connecting and performing some operations through the web browser.
The operations are listed below:
Executing a big procedure
Importing a database dump
Accessing some particular tables; it immediately throws "server has gone away".
Refer to this question for the scenarios: Record Not Inserted - #2006 Mysql server gone away
Note: the above operations work fine when I perform them through the terminal.
I tried some configuration changes as Googling suggested, that is, setting wait_timeout and max_allowed_packet. I checked for the bin_log, but it is not available.
But the issue is still not fixed.
What is the problem, and how can I figure out and fix the issue?
What is the difference between accessing the phpMyAdmin MySQL server from a web browser and from the terminal?
Where can I find the MySQL server log file?
Note: if you know about any one of the above questions, please post here. It would be helpful for tracing this down.
Please help me figure this out.
Thanks in advance.
Basically nothing, except that phpMyAdmin is limited by PHP's timeout and resource limits (limits that keep a runaway script from bogging down your entire machine for all eternity); see the docs for details of those values. In some cases, you might be authenticating through a different user account (for instance, root@localhost and root@127.0.0.1 aren't the same user), but as long as you're using a user with the same permissions, the differences are minimal.
You can read more about logs in the MySQL manual, note that "By default, no logs are enabled (except the error log on Windows)".
Below are answers to the questions:
1. From my research, the problem is that the browser side has a time limit after which the connection is disconnected, i.e. a connection timeout. That is why the above problem is raised.
To resolve this problem:
Go to /opt/lampp/phpmyadmin and open config.inc.php
Add the line $cfg['ExecTimeLimit'] = 0;
Restart the XAMPP server. Now you can perform any operation.
2. The web client differs from the terminal because the terminal client does not time out; it maintains the connection until the process has completed. I recommend using the command prompt to import/export/run processes, as that is the safe way.
3. Basically, phpMyAdmin does not have its own log file. If you want to see warnings and errors, you should configure the log file.
Configuration steps:
Go to /opt/lampp/etc/my.cnf
Add log_bin = /opt/lampp/var/mysql/filename.log
Restart the XAMPP server. You will now get the log information.

MySQL unknown table engine innodb

I am trying to migrate a test Drupal website onto a live server on Amazon EC2. I migrated the database using phpMyAdmin and tried to access the site. I got this error:
PDOException: SQLSTATE[42000]: Syntax error or access violation: 1286 Unknown table engine 'InnoDB': SELECT expire, value FROM {semaphore} WHERE name = :name; Array ( [:name] => variable_init ) in lock_may_be_available() (line 167 of /var/www/includes/lock.inc).
I believe the problem here is that MySQL doesn't have InnoDB. I have looked through the my.cnf file and there is no line that says skip-innodb.
I have tried SHOW ENGINES and it showed a bunch of engines, but not InnoDB.
I have tried restarting my server and deleting the log file, as has been suggested previously, but that didn't work.
Maybe what needs to be done is to somehow install the InnoDB engine. Could you tell me how I might do that?
On Debian 6 this can also happen when you change the innodb_log_file_size parameter... sometimes MySQL does not start at all, but other times it starts up but disables the InnoDB engine. The solution is to remove the ib_logfiles from /var/lib/mysql and restart MySQL.
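On a typical Debian layout that boils down to something like the following (the paths and service name may differ on your system; moving the files aside rather than deleting them is the safer option):
sudo service mysql stop
sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
sudo service mysql start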
Look in your MySQL error log. Run:
SELECT @@log_error;
to see where exactly that is.
There is probably something in there telling you why it failed on startup. (Perhaps it is trying to allocate more buffer pool than you have memory?)

"MySQL server has gone away" with Ruby on Rails

After our Ruby on Rails application has run for a while, it starts throwing 500s with "MySQL server has gone away". Often this happens overnight. It's started doing this recently, with no obvious change in our server configuration.
Mysql::Error: MySQL server has gone away: SELECT * FROM `widgets`
Restarting the mongrels (not the MySQL server) fixes it.
How can we fix this?
Ruby on Rails 2.3 has a reconnect option for your database connection:
production:
  # Your settings
  reconnect: true
See:
Ruby on Rails 2.3 Release Notes, sub section 4.8 Reconnecting MySQL Connections.
MySQL auto-reconnect revisited
Good luck!
This is probably caused by the persistent connections to MySQL going away (a timeout is likely if it's happening overnight) and Ruby on Rails failing to restore the connection, which it should be doing by default:
In the file vendor/rails/actionpack/lib/action_controller/dispatcher.rb is the code:
if defined?(ActiveRecord)
  before_dispatch { ActiveRecord::Base.verify_active_connections! }
  to_prepare(:activerecord_instantiate_observers) { ActiveRecord::Base.instantiate_observers }
end
The method verify_active_connections! performs several actions, one of which is to recreate any expired connections.
The most likely cause of this error is that a monkey patch has redefined the dispatcher to not call verify_active_connections!, or verify_active_connections! has been changed, etc.
Try ActiveRecord::Base.connection.verify! in Ruby on Rails 4. Verify pings the server and reconnects if it is not connected.
I had this problem when sending really large statements to MySQL. MySQL limits the size of statements and will close the connection if you go over the limit.
set global max_allowed_packet = 1048576; # 2^20 bytes (1 MB) was enough in my case
As the other contributors to this thread have said, it is most likely that MySQL server has closed the connection to your Ruby on Rails application because of inactivity. The default timeout is 28800 seconds, or 8 hours.
set-variable = wait_timeout=86400
Adding this line to your /etc/my.cnf will raise the timeout to 24 hours
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#option_mysqld_wait_timeout.
Although the documentation doesn't indicate it, a value of 0 may disable the timeout completely, but you would need to experiment as this is just speculation.
There are, however, three other situations that I know of that can generate that error. The first is the MySQL server being restarted. This will obviously drop all the connections, but as the MySQL client is passive, this won't be noticed until you make the next query.
The second is someone killing your query from the MySQL command line; this also drops the connection, because it could leave the client in an undefined state.
The last is the MySQL server restarting itself due to a fatal internal error. That is, if you are doing a simple query against a table and instantly see 'MySQL has gone away', I'd take a close look at your server's logs to check for hardware errors or database corruption.
First, determine the max_connections in MySQL:
show variables like "max_connections";
You need to make sure that the number of connections you're making in your Ruby on Rails application is less than the maximum allowed number of connections. Note that extra connections can be coming from your cron jobs, delayed_job processes (each would have the same pool size in your database.yml), etc.
Monitor the SQL connections as you go through your application, run processes, etc. by doing the following in MySQL:
show status where variable_name = 'Threads_connected';
You might want to consider closing connections after a Thread finishes execution, as database connections do not get closed automatically (I think this is less of an issue with Ruby on Rails 4 applications thanks to the connection Reaper):
Thread.new do
  begin
    # Thread work here
  ensure
    begin
      if (ActiveRecord::Base.connection && ActiveRecord::Base.connection.active?)
        ActiveRecord::Base.connection.close
      end
    rescue
    end
  end
end
The connection to the MySQL server is probably timing out.
You should be able to increase the timeout in MySQL, but for a proper fix, have your code check that the database connection is still alive, and re-connect if it's not.
Using reconnect: true in the database.yml will cause the database connection to be re-established AFTER the ActiveRecord::StatementInvalid error is raised (As Dave Cheney mentioned).
Unfortunately adding a retry on the database operation seemed necessary to guard against the connection timeout:
begin
  do_some_active_record_operation
rescue ActiveRecord::StatementInvalid => e
  Rails.logger.debug("Got statement invalid #{e.message} ... trying again")
  # Second attempt, now that db connection is re-established
  do_some_active_record_operation
end
Do you monitor the number of open MySQL connections or threads? What are your mysql.ini settings for max_connections?
mysql> show status;
Look at Connections, Max_used_connections, Threads_connected, and Threads_created.
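Or, to narrow the output down to just those counters (a standard SHOW STATUS filter):
SHOW GLOBAL STATUS WHERE Variable_name IN ('Connections', 'Max_used_connections', 'Threads_connected', 'Threads_created');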
You may need to increase the limits in your MySQL configuration, or perhaps Rails is not closing the connections properly.*
* Note: I've only used Ruby on Rails briefly...
The MySQL documentation for server status is in http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html.
Something else to check is that your Unicorn config is correct. See the before_fork and after_fork handling of the ActiveRecord connection here: https://gist.github.com/nebiros/2776085#file-unicorn-rb
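The usual pattern in that kind of Unicorn config is to drop the master process's connection before forking and reconnect in each worker; a sketch of the relevant part of config/unicorn.rb (adapt it to your own setup) looks roughly like this:
before_fork do |server, worker|
  # Disconnect the master's ActiveRecord connection so workers don't inherit it.
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker establishes its own fresh connection after forking.
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end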
I had this problem in a Ruby on Rails 3 application, using the mysql2 gem. I copied out the offending query and tried running it in MySQL directly, and I got the same error, "MySQL server has gone away".
The query in question was a very, very large insert (over 1 MB). The field I was trying to insert into was a TEXT column, whose maximum size is 64 KB. Rather than throwing an error, the connection went away.
I increased the size of the field and got the same thing, so I'm still not sure what the exact issue was. The point is that it was something in the database related to a strange query. Anyway!
While forking in Rails.
For anyone running into this while forking in Rails, try clearing the existing connections before forking and then establish a new connection for each fork, like this:
# Clear existing connections before forking to ensure they do not get inherited.
::ActiveRecord::Base.clear_all_connections!

fork do
  # Establish a new connection for each fork.
  ::ActiveRecord::Base.establish_connection

  # The rest of the code for each fork...
end
See this StackOverflow answer here: https://stackoverflow.com/a/8915353/293280