I'm running MySQL 8.0.23 on Windows Server 2019.
Two tables:
CREATE TABLE `tblp` (
`id` int unsigned NOT NULL AUTO_INCREMENT,
`datum` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`),
KEY `index_dat` (`datum`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
and
CREATE TABLE `tblpdet` (
`id` int unsigned NOT NULL,
`katbr` varchar(45) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
`redid` int unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`redid`),
KEY `Index_2` (`id`),
KEY `idx_katbr` (`katbr`),
CONSTRAINT `FK_tblpdet_1` FOREIGN KEY (`id`) REFERENCES `tblp` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Now, if I execute:
select katbr, min(date(datum))
from tblp p
join tblpdet d on p.id = d.id
group by katbr;
I get error
Error Code: 1114. The table 'd:\tmp#sql1e5c_18_1eb' is full
If I execute:
select katbr, min(redid)
from tblpdet
group by katbr;
then it works fine.
The result should return some 120,000 rows.
Here are global settings relevant to this issue:
> innodb_data_file_path=ibdata1:12M:autoextend
> innodb_buffer_pool_size=51539607552
Table tblp has some 5,800,000 rows, and tblpdet has some 43,000,000 rows.
Data folder of MySQL is on SSD (mirrored) drive with 800GB of free space.
Total RAM is 128GB;
Machine has 2 processors with total of 20 cores, running only MySQL (at the moment).
Everything I've read ends up at 'not enough disk space' or a wrong configuration of innodb_data_file_path. Can anybody help?
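For what it's worth, on 8.0.23 a likely culprit is not the ibdata file at all but the TempTable engine used for internal temporary tables: as of 8.0.23 its memory-mapped overflow files are capped by temptable_max_mmap (default 1 GiB), and hitting that cap raises error 1114 even with plenty of free disk. A sketch of what could be checked and adjusted (the 4 GiB figures are examples, not recommendations):

```sql
-- Inspect the current internal-temp-table limits (values are in bytes)
SHOW VARIABLES LIKE 'temptable%';
SHOW VARIABLES LIKE 'internal_tmp_mem_storage_engine';

-- Example: raise both TempTable ceilings to 4 GiB
SET GLOBAL temptable_max_ram  = 4294967296;
SET GLOBAL temptable_max_mmap = 4294967296;

-- Alternative: switch to the MEMORY engine, so overflow goes to
-- on-disk InnoDB temporary tables instead of mmap'd TempTable files
SET GLOBAL internal_tmp_mem_storage_engine = MEMORY;
```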
First post, so be kind.
I have a similar issue (Error Code: 1114. The table '/tmp/#sql2cc_b_3e' is full) with selecting and joining large datasets with sort and group by. The issue is present in 8.0.24 but not 8.0.22 or prior.
I tested this by exporting the database out of 8.0.24 where I received the error and importing into 8.0.20 where the same query runs successfully. I then updated without changing any other settings to 8.0.24 and the same query fails. I have also done a fresh install of 8.0.24 and also receive the error there.
Related
I have an InnoDB MySQL table, and this query returns zero rows:
SELECT id, display FROM ra_table WHERE parent_id=7266 AND display=1;
However, there are actually 17 rows that should match:
SELECT id, display FROM ra_table WHERE parent_id=7266;
ID display
------------------
1748 1
5645 1
...
There is an index on display (int 1), and ID is the primary key. The table also has several other fields which I'm not pulling in this query.
After noticing this query wasn't working, I defragmented the table and then the first query started working correctly, but only for a time. It seems after a few days, the query stops working again and I have to defragment to fix it.
My question is, why does the fragmented table break this query?
Additional info: MySQL 5.6.27 on Amazon RDS.
CREATE TABLE `ra_table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`parent_id` int(6) NOT NULL,
`display` int(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `parent_id` (`parent_id`),
KEY `display` (`display`)
) ENGINE=InnoDB AUTO_INCREMENT=13302 DEFAULT CHARSET=latin1
ROW_FORMAT=DYNAMIC
There may be a bug in the version you are running.
Meanwhile, change
INDEX(parent_id),
INDEX(display)
to
INDEX(parent_id, display)
By combining them, the query should run faster (and hopefully correctly). An index on a flag column such as display alone is unlikely ever to be used.
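In case it helps, that swap could look like this (the new index name is illustrative):

```sql
ALTER TABLE ra_table
    DROP INDEX parent_id,
    DROP INDEX display,
    ADD INDEX parent_display (parent_id, display);
```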
I am exporting a MySQL database from one server to another. My export file contains all the table definitions, data, structure, etc. All tables use the InnoDB engine and the utf8 charset. I am importing with 'enable foreign key checks' switched off; my export file also contains the line 'SET FOREIGN_KEY_CHECKS=0;'.
However, when I import the data, I get the error '#1215 - Cannot add foreign key constraint'
Here is the table definition in the input file:
CREATE TABLE IF NOT EXISTS UserTable (
Index_i int(13) NOT NULL,
UserUUID_vc varchar(36) DEFAULT NULL,
AccountID_i int(13) DEFAULT NULL,
FirstName_vc varchar(30) DEFAULT NULL,
LastName_vc varchar(40) DEFAULT NULL,
Password_vc varchar(255) DEFAULT NULL,
Country_vc varchar(2) DEFAULT NULL,
DateRegistered_dt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
DateAccountTypeChanged_dt timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
DateAccountStatusChanged_dt timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
DateAcceptedTandC_dt timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
) ENGINE=InnoDB AUTO_INCREMENT=165 DEFAULT CHARSET=utf8;
It hits this problem before it reads any of the statements that apply constraints such as FKs to the table. In effect, it doesn't yet know whether any of these columns are foreign keys when it triggers the error message.
Any ideas?
If you're using a terminal, log in to MySQL with your credentials using the following command:
mysql -u[your_username] -p[your_password]
Set MySQL's foreign key constraint checks to 0 using the following command:
SET FOREIGN_KEY_CHECKS = 0;
Import your database using the SOURCE command, like below:
SOURCE /path/to/your/sql/file.sql
And set the foreign key constraint checks back to 1 using the following command:
SET FOREIGN_KEY_CHECKS = 1;
Hurrrrrraaaaah. Works like a charm. :)
P.S.: Tested on Ubuntu 14.04, 16.04, 18.04.
In phpMyAdmin, you can also tick the 'Disable foreign key checks' checkbox for the query to allow it to make changes to the database.
I created two tables in phpMyAdmin like this:
CREATE TABLE customers (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(245) DEFAULT NULL,
place varchar(245) DEFAULT NULL,
email varchar(245) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
and another one like this
CREATE TABLE `orders` (
id int(11) NOT NULL AUTO_INCREMENT,
menu_name varchar(245) DEFAULT NULL,
menu_id int(11) DEFAULT NULL,
date_of_order date DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `FK orders menu_id customer id_idx` (`menu_id`),
CONSTRAINT `FK orders menu_id customer id` FOREIGN KEY (`menu_id`)
REFERENCES `customers` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
After this I inserted a row into the 'customers' table. Then, when I insert values into the 'orders' table, the phpMyAdmin linter displays an error.
However, strangely, when I click 'Go', the query works fine. It also works fine through the command line. So is it a bug, or do I have to write it in a different way?
It's a bug in the phpMyAdmin SQL query parser's handling of subqueries. The issue has been opened but not yet addressed.
You have some alternatives here:
Adminer
Or you can try a different MySQL client:
MySQL Workbench
HeidiSQL
Yes, phpMyAdmin version 4.5.1 had the bug that @Shaharyar mentioned above. I apologize for not posting the version earlier. Updating to version 4.6.3 fixed the issue. Thank you.
I created a table using SQLyog. When I insert values into it, the following error message pops up:
Operation not allowed when innodb_forced_recovery > 0.
My table consists of only four columns, including one primary key.
Here are my CREATE and INSERT queries:
CREATE TABLE `news` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`title` varchar(100) NOT NULL,
`slug` varchar(100) NOT NULL,
`descr` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1
insert into `test`.`news` (`title`, `slug`, `descr`)
values ('titleOne', 'slugOne', 'descOne')
This error comes up when MySQL is in read-only mode.
Edit the file /etc/my.cnf and comment out the following line:
# innodb_force_recovery = 1
Apparently this setting causes InnoDB to become read-only. If you don't have access to /etc/my.cnf on shared hosting, ask your host to fix it for you. When the line is commented out or absent from /etc/my.cnf, the setting reverts to its default of 0.
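Assuming you can still run queries, a quick way to confirm this is the cause is to check the live value; anything above 0 blocks INSERT, UPDATE, and DELETE:

```sql
-- 0 means writes are allowed; any higher value means recovery mode is active
SHOW VARIABLES LIKE 'innodb_force_recovery';
```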
This happened to me too, but what I did was change the storage engine from InnoDB to MyISAM when creating the table,
as in
ENGINE=InnoDB to ENGINE=MyISAM
So if you have a database dump you want to upload, open it in any editor and change the engine at the end of each CREATE TABLE statement from InnoDB to MyISAM.
That resolved the problem.
Need some advice working with EF4 and MySQL.
I have a table with lots of data items. Each item belongs to a module and a zone, and also has a timestamp (ticks). The most common usage is for the app to query for data after a specified time for a given module and zone, with the results sorted.
The problem is that the query selects too many rows, so the database server runs low on memory, making the query very slow. I tried to limit the query to 100 items, but the generated SQL only applies the limit after all the rows have been selected and sorted.
dataRepository.GetData()
    .WithModuleId(ModuleId)
    .InZone(ZoneId)
    .After(ztime)
    .OrderBy(p => p.Timestamp)
    .Take(100)
    .ToList();
SQL generated by the MySQL .NET Connector 6.3.6:
SELECT
`Project1`.`Id`,
`Project1`.`Data`,
`Project1`.`Timestamp`,
`Project1`.`ModuleId`,
`Project1`.`ZoneId`,
`Project1`.`Version`,
`Project1`.`Type`
FROM (SELECT
`Extent1`.`Id`,
`Extent1`.`Data`,
`Extent1`.`Timestamp`,
`Extent1`.`ModuleId`,
`Extent1`.`ZoneId`,
`Extent1`.`Version`,
`Extent1`.`Type`
FROM `DataItems` AS `Extent1`
WHERE ((`Extent1`.`ModuleId` = 1) AND (`Extent1`.`ZoneId` = 1)) AND
(`Extent1`.`Timestamp` > 634376753657189002)) AS `Project1`
ORDER BY
`Timestamp` ASC LIMIT 100
Table definition
CREATE TABLE `mydb`.`DataItems` (
`Id` bigint(20) NOT NULL AUTO_INCREMENT,
`Data` mediumblob NOT NULL,
`Timestamp` bigint(20) NOT NULL,
`ModuleId` bigint(20) NOT NULL,
`ZoneId` bigint(20) NOT NULL,
`Version` int(11) NOT NULL,
`Type` varchar(1000) NOT NULL,
PRIMARY KEY (`Id`),
KEY `IX_FK_ModuleDataItem` (`ModuleId`),
KEY `IX_FK_ZoneDataItem` (`ZoneId`),
KEY `Index_4` (`Timestamp`),
KEY `Index_5` (`ModuleId`,`ZoneId`),
CONSTRAINT `FK_ModuleDataItem` FOREIGN KEY (`ModuleId`) REFERENCES
`Modules` (`Id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `FK_ZoneDataItem` FOREIGN KEY (`ZoneId`) REFERENCES `Zones`
(`Id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=22904 DEFAULT CHARSET=utf8;
All suggestions on how to solve this are welcome.
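Independent of EF, one server-side change worth trying: neither Index_4 nor Index_5 covers the filter and the sort together, so MySQL cannot walk a single index and stop after 100 rows. A composite index on (ModuleId, ZoneId, Timestamp) would let the generated query (which does end in LIMIT 100) short-circuit. A sketch, with an illustrative index name:

```sql
-- With this index, the WHERE clause and the ORDER BY `Timestamp` are both
-- satisfied by one index range scan, so LIMIT 100 can stop the scan early.
ALTER TABLE DataItems
    ADD INDEX idx_module_zone_ts (ModuleId, ZoneId, Timestamp);
```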
What's your GetData() method doing? I'd bet it's executing a query on the entire table. And that's why your Take(100) at the end isn't doing anything.
I solved this by using the Table Splitting method described here:
Table splitting in entity framework