I want to use foreign keys in MySQL. For that I need to enable the InnoDB engine. I have tried downloading the latest version of MySQL Server from its official site.
I went through similar questions on Stack Overflow, but they addressed different issues.
I have tried editing all the .ini files and enabling the InnoDB properties by removing the # in front of the corresponding properties.
Then I restarted MySQL and checked the status of InnoDB from the MySQL client using the query:
show engines;
It still shows that InnoDB is disabled.
I want to know the steps for enabling the built-in InnoDB engine in MySQL.
Here are the links to questions I visited:
Ques1
Ques2
Official MySQL forum
I am a newbie in MySQL.
I will be very thankful for any help :-)
Check the MySQL error log file. There could be some messages that explain why InnoDB does not start. I suppose you don't have important InnoDB data. If so, try deleting the ib_logfile0.xxx files and ibdata located in the MySQL data dir, then restart MySQL to force those files to be recreated. Also, check whether the InnoDB variables in my.cnf are properly configured (for example, I once mistakenly set the memory for innodb_pool... to 1024G instead of 1024M).
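For reference, a rough sequence on a Debian/Ubuntu-style install (the log path, data dir, and service name are assumptions; adjust for your system), and only if you really have no InnoDB data worth keeping:
sudo tail -n 50 /var/log/mysql/error.log
sudo service mysql stop
sudo rm /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /var/lib/mysql/ibdata1
sudo service mysql start
mysql -u root -p -e "SHOW ENGINES;"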
I'm debugging a MySQL 5.1.61 database, and I have long_query_time set to 10 in the my.cnf file. Slow queries are being logged to a database table.
However, it's logging queries that take a fraction of a second.
In fact, the queries being logged are so fast that the query_time field in MySQL shows "00:00:00" for every query logged. Even when I had them logging to a file, they showed query times in the range of "Query_time: 0.004763".
I know that my configuration file is being read, because all my other changes have worked.
From all the documentation I've read, long_query_time should be seconds. Is there something else I need to do for that setting to stick?
Do those queries have indexes? If not, then that's probably why they're being logged.
Before MySQL 4.1, if you also use --log-long-format when logging slow queries, queries that are not using indexes are logged as well. Starting with MySQL 4.1, logging of queries not using indexes for row lookups is enabled using the --log-queries-not-using-indexes option instead. The --log-long-format is deprecated as of MySQL 4.1, when --log-short-format was introduced, which causes less information to be logged. (The long log format is the default setting since version 4.1.) (>>)
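If you want to confirm which of these settings are actually in effect, a quick check from the MySQL client looks roughly like this (the last statement turns off logging of index-less queries and typically requires the SUPER privilege; whether you want it off is up to you):
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'log_queries_not_using_indexes';
SET GLOBAL log_queries_not_using_indexes = OFF;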
I believe you just forgot to restart MySQL after editing the my.cnf file?
sudo /etc/init.d/mysqld restart
or
sudo service mysql restart
I got the Error Code: 2013. Lost connection to MySQL server during query error when I tried to add an index to a table using MySQL Workbench.
I also noticed that it appears whenever I run a long query.
Is there a way to increase the timeout value?
New versions of MySQL Workbench have an option to change specific timeouts.
For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600
I changed the value to 6000.
I also unchecked Limit Rows, since putting in a limit every time I want to search the whole data set gets tiresome.
If your query has blob data, this issue can be fixed by applying a my.ini change as proposed in this answer:
[mysqld]
max_allowed_packet=16M
By default, this will be 1M (the allowed maximum value is 1024M). If the supplied value is not a multiple of 1024K, it will automatically be rounded to the nearest multiple of 1024K.
While the referenced thread is about the MySQL error 2006, setting the max_allowed_packet from 1M to 16M did fix the 2013 error that showed up for me when running a long query.
For WAMP users: you'll find the flag in the [wampmysqld] section.
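You can also check and raise it at runtime without editing the file; note that a SET GLOBAL change only affects new connections and is lost on server restart:
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;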
Start the DB server with the command-line option net_read_timeout / wait_timeout and a suitable value (in seconds), for example: --net_read_timeout=100.
For reference see here and here.
SET @@local.net_read_timeout=360;
Warning: the following will not work when you are applying it in a remote connection:
SET @@global.net_read_timeout=360;
Edit: 360 is the number of seconds.
Add the following to the /etc/mysql/my.cnf file:
innodb_buffer_pool_size = 64M
example:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
innodb_buffer_pool_size = 64M
In my case, setting the connection timeout interval to 6000 or something higher didn't work.
I just did what Workbench says I can do.
The maximum amount of time the query can take to return data from the DBMS. Set 0 to skip the read timeout.
On Mac
Preferences -> SQL Editor -> Go to MySQL Session -> set connection read timeout interval to 0.
And it works 😄
There are three likely causes for this error message:
1. Usually it indicates network connectivity trouble; check the condition of your network if this error occurs frequently.
2. Sometimes the "during query" form happens when millions of rows are being sent as part of one or more queries.
3. More rarely, it can happen when the client is attempting the initial connection to the server.
For more detail read >>
For cause 2, increase the timeout from its default to 60 seconds or longer:
SET GLOBAL interactive_timeout=60;
For cause 3:
SET GLOBAL connect_timeout=60;
You should set the 'interactive_timeout' and 'wait_timeout' properties in the mysql config file to the values you need.
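For example, a my.cnf fragment along these lines (the values are illustrative and in seconds; pick whatever fits your workload, then verify with SHOW VARIABLES LIKE '%timeout%';):
[mysqld]
wait_timeout = 28800
interactive_timeout = 28800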
Just perform a MySQL upgrade, which will rebuild the InnoDB engine along with many tables required for the proper functioning of MySQL, such as performance_schema and information_schema.
Issue the below command from your shell:
sudo mysql_upgrade -u root -p
If you experience this problem while restoring a big dump file and can rule out a network problem (e.g. execution on localhost), then my solution could be helpful.
My mysqldump held at least one INSERT that was too big for MySQL to process. You can view this variable by typing show variables like "net_buffer_length"; inside your MySQL CLI.
You have three possibilities:
increase net_buffer_length inside MySQL -> this would need a server restart
create the dump with --skip-extended-insert, so one line is used per INSERT -> although these dumps are much nicer to read, this is not suitable for big dumps > 1 GB because it tends to be very slow
create the dump with extended inserts (which is the default) but limit the net-buffer_length, e.g. with --net-buffer_length NR_OF_BYTES where NR_OF_BYTES is smaller than the server's net_buffer_length -> I think this is the best solution; although slower, no server restart is needed
I used the following mysqldump command:
mysqldump --skip-comments --set-charset --default-character-set=utf8 --single-transaction --net-buffer_length 4096 DBX > dumpfile
Based on what I have understood, this error is caused by the read timeout, and the default max_allowed_packet is 4M. If your query file is more than 4 MB, you get this error. This worked for me:
1. Change the read timeout. Go to Workbench Edit → Preferences → SQL Editor.
2. Change max_allowed_packet manually by editing the file my.ini. Go to "C:\ProgramData\MySQL\MySQL Server 8.0\my.ini". The ProgramData folder is hidden, so if you don't see it, select "Show hidden files" in the View settings. Set max_allowed_packet = 16M in the my.ini file.
3. Restart MySQL. Press Win+R, run services.msc, and restart the MySQL service.
I know it's old, but on Mac:
1. Control-click your connection and choose Connection Properties.
2. Under Advanced tab, set the Socket Timeout (sec) to a larger value.
Sometimes your SQL server gets into deadlocks; I've run into this problem like 100 times. You can either restart your computer/laptop to restart the server (the easy way), or you can go to Task Manager > Services > YOUR-SERVER-NAME (for me it was MySQL785, something like that), right-click, and restart.
Then try executing the query again.
Please try unchecking Limit Rows in Edit → Preferences → SQL Queries.
You should also set the 'interactive_timeout' and 'wait_timeout' properties in the MySQL config file to the values you need.
Change "read time out" time in Edit->Preferences->SQL editor->MySQL session
I got the same issue when loading a .csv file.
I converted the file to .sql.
Using the command below, I managed to work around this issue:
mysql -u <user> -p -D <DB name> < file.sql
Hope this helps.
If all the other solutions here fail, check your syslog (/var/log/syslog or similar) to see if your server is running out of memory during the query.
I had this issue when innodb_buffer_pool_size was set too close to physical memory without a swap file configured. For a database-specific server, MySQL recommends setting innodb_buffer_pool_size to a maximum of around 80% of physical memory. I had it set to around 90%, and the kernel was killing the mysql process. Moving innodb_buffer_pool_size back down to around 80% fixed the issue.
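As a rough illustration only: on a dedicated box with 16 GB of RAM, the my.cnf line might look like the following (the figure is simply ~80% of that hypothetical machine, not a recommendation for yours):
[mysqld]
innodb_buffer_pool_size = 12G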
Go to Workbench Edit → Preferences → SQL Editor → DBMS connection read timeout: up to 3000.
The error no longer occurred.
I faced this same issue. I believe it happens when you have foreign keys to larger tables (which takes time).
I tried to run the create table statement again without the foreign key declarations and found it worked.
Then, after creating the table, I added the foreign key constraints using an ALTER TABLE query.
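A minimal sketch of that approach, with hypothetical parent/child tables (names and columns are placeholders):
CREATE TABLE child (
  id INT NOT NULL PRIMARY KEY,
  parent_id INT NOT NULL
) ENGINE=InnoDB;
ALTER TABLE child
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent (id);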
Hope this will help someone.
This happened to me because my innodb_buffer_pool_size was set larger than the RAM available on the server. Things were getting interrupted because of this, and MySQL issued this error. The fix is to update my.cnf with a correct setting for innodb_buffer_pool_size.
Go to:
Edit -> Preferences -> SQL Editor
In there you can see three fields in the "MySQL Session" group, where you can now set the new connection intervals (in seconds).
It turned out our firewall rule was blocking my connection to MySQL. After the firewall policy was lifted to allow the connection, I was able to import the schema successfully.
I had the same problem, but for me the solution was a DB user with too strict permissions.
I had to allow the Execute privilege on the mysql table. After allowing that, I had no dropping connections anymore.
Check if the indexes are in place first.
SELECT *
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<schema>'
I ran into this while running a stored procedure which was creating lots of rows in a table in the database.
I could see the error come right after the time crossed the 30-second boundary.
I tried all the suggestions in the other answers. I am sure some of them helped; however, what really made it work for me was switching from Workbench to Sequel Pro.
I am guessing it was some client-side connection issue that I could not spot in Workbench.
Maybe this will help someone else as well?
If you are using SQL Workbench, you can try adding an index to your tables. To add an index, click on the wrench (spanner) symbol on the table; it should open up the setup for the table. Below, click on the index view, type an index name, and set the type to INDEX. In the index columns, select the primary column in your table.
Do the same step for other primary keys on other tables.
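If you prefer to do the same thing in plain SQL instead of the Workbench UI, the statement is roughly this (the index, table, and column names are placeholders):
CREATE INDEX idx_orders_customer_id ON orders (customer_id);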
There seems to be an answer missing here for those using SSH to connect to their MySQL database. You need to check two places, not one as suggested by other answers:
Workbench Edit → Preferences → SQL Editor → DBMS
Workbench Edit → Preferences → SSH → Timeouts
My default SSH timeouts were set very low and were causing some (but apparently not all) of my timeout issues. Afterwards, don't forget to restart MySQL Workbench!
Lastly, it may be worth contacting your DB admin and asking them to increase the wait_timeout and interactive_timeout properties in MySQL itself via my.cnf plus a MySQL restart, or doing a global SET if restarting MySQL is not an option.
Hope this helps!
Three things to check and make sure of:
Do multiple queries show the lost connection?
How do you use the SET query in MySQL?
How do you run DELETE and UPDATE queries simultaneously?
Answers:
Always try to remove the definer, as MySQL creates its own definer, and if multiple tables are involved in the update, try to make it a single query, as sometimes multiple queries show a lost connection.
Always SET the value at the top, but after the DELETE if its condition doesn't involve the SET value.
Use DELETE first, then UPDATE, if both operations are performed on different tables.
I had this error message due to a problem after upgrading MySQL. The error appeared immediately after I tried to do any query.
Check the MySQL error log files in /var/log/mysql (Linux).
In my case, reassigning the mysql owner to the MySQL system folder worked for me:
chown -R mysql:mysql /var/lib/mysql
Establish the connection first:
mysql --host=host.com --port=3306 -u username -p
then select your DB: use dbname
then source the dump: source C:\dumpfile.sql
After it's done, quit with \q.
I am trying to change the table engine from MyISAM to INNODB. I am using the
alter table tablename ENGINE=INNODB
command. I am not getting any errors or warnings on the MySQL side. I also commented out the
skip-innodb
line in the my.cnf file. So when I do a
show variables like 'have_innodb%'
it gives me a "YES". Also just to be on the safe side, I also deleted my ib_logfile0 and ib_logfile1 and restarted my mysql server.
But it still does not change the engine. I also did a show engines, and it shows innodb as one of the available engines.
Also these tables are full of data and have around 5000 rows, so is changing the engine type when a table has data, would that be the problem??
What could the missing link be??
Are you able to restart the server? If so, the error log will tell you if it had problems initialising the InnoDB engine.
Is this the first InnoDB table in your db? If so, you may have forgotten to create your ibdata files.
Does the table use fulltext indexing or other InnoDB-incompatible features?
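To double-check whether the ALTER actually took effect, you can query the data dictionary; the schema and table names below are placeholders:
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_db' AND TABLE_NAME = 'tablename';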
I'm trying to run a rather large query that is supposed to run nightly to populate a table. I'm getting an error saying Incorrect key file for table '/var/tmp/#sql_201e_0.MYI'; try to repair it but the storage engine I'm using (whatever the default is, I guess?) doesn't support repairing tables.
How do I fix this so I can run the query?
You must change the location of MySQL's temporary folder, which is '/tmp' in most cases, to a location with more disk space. Change it in MySQL's config file.
Basically your server is running out of disk space where /tmp is located.
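To confirm the diagnosis, you can check where the temporary files go and how much room that filesystem has; the df path should match whatever tmpdir reports on your server:
SHOW VARIABLES LIKE 'tmpdir';
and from the shell:
df -h /var/tmp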
You'll need to run this command from the MySQL prompt:
REPAIR TABLE tbl_name USE_FRM;
From MySQL's documentation on the Repair command:
The USE_FRM option is available for use if the .MYI index file is missing or if its header is corrupted. This option tells MySQL not to trust the information in the .MYI file header and to re-create it using information from the .frm file. This kind of repair cannot be done with myisamchk.
Your query is generating a result set so large that it needs to build a temporary table either to hold some of the results or some intermediate product used in generating the result.
The temporary table is being generated in /var/tmp. This temporary table would appear to have been corrupted. Perhaps the device the temporary table was being built on ran out of space; however, that would normally result in an "out of space" error. Perhaps something else running on your machine has clobbered the temporary table.
Try reworking your query to use less space, or try reconfiguring your database so that a larger or safer partition is used for temporary tables.
MySQL Manual - B.5.4.4. Where MySQL Stores Temporary Files
The storage engine (MyISAM) DOES support repair table. You should be able to repair it.
If the repair fails then it's a sign that the table is very corrupted, you have no choice but to restore it from backups.
If you have other systems (e.g. non-production, with the same software versions and schema) with an identical table, then you might be able to fix it with some hackery (copying the .frm and .MYI files, followed by a repair).
In essence, the trick is to avoid getting broken tables in the first place. This means always shutting your db down cleanly, never having it crash and never having hardware or power problems. In practice this isn't very likely, so if durability matters you may want to consider a more crash-safe storage engine.
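If you do decide to move a table off MyISAM to a crash-safe engine, the conversion itself is a one-liner (the table name is a placeholder; try it on a copy of the table first):
ALTER TABLE tbl_name ENGINE=InnoDB;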
Simple "REPAIR the table" from PHPMYADMIN solved this problem for me.
go to phpmyadmin
open problematic table
go to Operations tab (in my version of PMA)
at the bottom you will find "Repair table" link
In my case, there was a disk space issue. I deleted some unwanted WAR files from my server and it worked after that.
REPAIR TABLE tbl_name USE_FRM;
This command only works when the MySQL storage engine type is MyISAM.
Hope this helps
This issue is because of low storage space on a particular drive (C:\ or D:\, etc.); free up some space and then it will work.
This might happen because you ran out of disk storage and the MySQL data files and startup files got corrupted.
The solution to try is as follows.
First, we will move the tmp directory somewhere with more space.
Step 1: Copy your existing /etc/my.cnf file to make a backup
cp /etc/my.cnf{,.back-`date +%Y%m%d`}
Step 2: Create your new directory, and set the correct permissions
mkdir /home/mysqltmpdir
chmod 1777 /home/mysqltmpdir
Step 3: Open your /etc/my.cnf file
nano /etc/my.cnf
Step 4: Add below line under the [mysqld] section and save the file
tmpdir=/home/mysqltmpdir
Secondly, you need to remove the corrupted files and logs matching /var/lib/mysql/ib_*; that means removing anything that starts with "ib":
rm /var/lib/mysql/ibdata1 and rm /var/lib/mysql/ibda.... and so on
Thirdly, you will need to make sure that there is a pid file available for the database to write to.
Step 1: edit /etc/my.cnf and add
pid-file= /var/run/mysqld/mysqld.pid
Step 2: create the directory and the file to point to
mkdir /var/run/mysqld
touch /var/run/mysqld/mysqld.pid
chown -R mysql:mysql /var/run/mysqld
Last step: restart the MySQL server
/etc/init.d/mysql restart
I just resolved a similar issue: "Incorrect key file: '\bonga_process\alarms.MYI'; try to repair it".
How I resolved it:
I started the server configuration application, ServerConfiguration.exe.
Maintenance tab.
Click on the Repair Tables command button.
Prompts to delete files to free disk space appear, with a Manage Storage command button.
Click on the Manage Storage command; this will take you to some files you need to delete to free more disk space. Delete any useless files.
Restart the system auditor application.
Apply the proper charset and collation to the database, tables, and columns/fields.
I created the database and table structure using SQL queries taken from one server and run on another.
It created the database structure as follows:
database with a charset of "utf8" and collation of "utf8_general_ci"
tables with a charset of "utf8" and collation of "utf8_bin"
table columns/fields with a charset of "utf8" and collation of "utf8_bin"
I changed the collation of the tables and columns to utf8_general_ci, and it resolved the error.
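Statements along these lines do the conversion (mydb and mytable are placeholder names):
ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE mytable CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;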
Change to the MyISAM engine and run this command:
REPAIR TABLE tbl_name USE_FRM;