I have a MySQL database on my Windows machine and a similar setup on a Linux server.
While I had no issues executing very basic update/delete/drop queries on the Windows setup, my queries hang when run on the Linux server.
The MySQL version on the Windows machine is 5.1.42, while on Linux it is 5.6.5-m8.
Also, the hanging is not limited to a single table: there are 4-5 cross-referencing tables in my database for which update/delete/drop hangs.
Update/delete/drop works for other, unrelated tables, though.
On Linux, I first simply source a dump (with no inserts) generated from my Windows machine. Second, I run an insert statement (INSERT INTO flat (FLATID, BLOCKNO, FLATNO) VALUES (1, 'B1', 'F1');).
Third, I run an update on the table (UPDATE FLAT SET FLATNO='F2';) and it hangs.
The flat table description is below:
+----------------------+--------------+------+-----+---------+----------------+
| Field                | Type         | Null | Key | Default | Extra          |
+----------------------+--------------+------+-----+---------+----------------+
| FLATID               | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| BLOCKNO              | varchar(20)  | NO   |     | NULL    |                |
| FLATNO               | varchar(10)  | NO   |     | NULL    |                |
| col_COLID            | int(11)      | YES  | MUL | NULL    |                |
| famID_FAMID          | bigint(20)   | YES  | MUL | NULL    |                |
| allotID_ALLOTID      | bigint(20)   | YES  | MUL | NULL    |                |
+----------------------+--------------+------+-----+---------+----------------+
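A standard first diagnostic while the UPDATE hangs (not something the question mentions trying) is to check from a second session whether the statement is waiting on an InnoDB lock:
SHOW FULL PROCESSLIST;                                -- is the query in a lock-wait state?
SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX\G         -- currently running transactions
SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS\G  -- which transaction blocks which (MySQL 5.6)
SHOW ENGINE INNODB STATUS\G                           -- detailed lock and transaction info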
After having tried almost everything, including playing around with InnoDB variables, I eventually tried removing the allotID_ALLOTID reference from the flat table.
The ALLOTMENT table referenced the FAMILY table, which the FLAT table referenced as well.
Removing allotID_ALLOTID from both the flat and family tables worked.
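For reference, the workaround amounts to something like the sketch below; the constraint name fk_flat_allotment is hypothetical (SHOW CREATE TABLE flat reveals the real one):
ALTER TABLE flat DROP FOREIGN KEY fk_flat_allotment;  -- hypothetical constraint name
ALTER TABLE flat DROP COLUMN allotID_ALLOTID;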
I'm trying to add an index to an existing table as part of the upgrade process for Invision Community forums. The database is hosted in AWS Aurora Serverless, which has MySQL 5.6 compatibility. However, every time I try, I receive the error:
ERROR 1709 (HY000): Index column size too large. The maximum column size is 767 bytes.
Here are the details about the table and the schema:
+---------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+-------------+------------+--------------------+----------+--------------------+---------+
| Name          | Engine | Version | Row_format | Rows  | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation          | Checksum | Create_options     | Comment |
+---------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+-------------+------------+--------------------+----------+--------------------+---------+
| ibf_core_tags | InnoDB |      10 | Dynamic    | 36862 |            299 |    11026432 |               0 |     13189120 |   4194304 |          95183 | NULL        | NULL        | NULL       | utf8mb4_unicode_ci | NULL     | row_format=DYNAMIC |         |
+---------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+-------------+------------+--------------------+----------+--------------------+---------+
+--------------------+--------------+------+-----+---------+----------------+
| Field              | Type         | Null | Key | Default | Extra          |
+--------------------+--------------+------+-----+---------+----------------+
| tag_id             | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| tag_aai_lookup     | char(32)     | NO   | MUL |         |                |
| tag_aap_lookup     | char(32)     | NO   | MUL |         |                |
| tag_meta_app       | varchar(200) | NO   | MUL |         |                |
| tag_meta_area      | varchar(200) | NO   |     |         |                |
| tag_meta_id        | int(10)      | NO   |     | 0       |                |
| tag_meta_parent_id | int(10)      | NO   |     | 0       |                |
| tag_member_id      | int(10)      | NO   | MUL | 0       |                |
| tag_added          | int(10)      | NO   | MUL | 0       |                |
| tag_prefix         | int(1)       | NO   |     | 0       |                |
| tag_text           | varchar(255) | YES  |     | NULL    |                |
+--------------------+--------------+------+-----+---------+----------------+
The default charset for the table is utf8mb4, and the innodb_large_prefix setting is ON.
The operation I'm trying to do is:
ALTER TABLE `ibf_core_tags` ADD KEY `tag_text` (`tag_text`(191));
I would think 191 * 4 = 764, which is less than the 767-byte limit it says I'm exceeding. Is this a bug in Aurora Serverless? Is there a way around this issue? I've tried changing the table to MyISAM to add the index, but I get the same error when I try that.
Using a local install of MySQL 5.6, I was able to run this ALTER TABLE query on the same database, so I'm not sure why Aurora Serverless is any different.
I ended up trying this on a non-serverless Aurora instance and had that same issue.
I had this same issue with a Slim PHP project using Laravel for the DB connections. By default, AWS Aurora Serverless uses the Antelope file format, which defaults to the COMPACT row format. We need the Barracuda file format and the DYNAMIC row format in order to allow large index key prefixes (reference).
I created a custom Parameter Group and explicitly set the following parameters:
innodb_file_format = Barracuda
innodb_file_per_table = 1
innodb_large_prefix = 1
These params are allowed to be set according to AWS's Aurora Serverless documentation.
Setting these params didn't fix the issue by itself, though. Tables were still being created with the COMPACT row format. In the DB connection params I also had to set 'engine' => 'InnoDB ROW_FORMAT=DYNAMIC' (reference). This syntax is for Laravel, but hopefully it points others in the right direction, as I just spent a whole afternoon figuring it out :)
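With those parameters in place, the plain SQL equivalent for a table that already exists would be a sketch like this (assuming the parameter group above is active):
ALTER TABLE `ibf_core_tags` ROW_FORMAT=DYNAMIC;                     -- rebuild with the DYNAMIC row format
ALTER TABLE `ibf_core_tags` ADD KEY `tag_text` (`tag_text`(191));   -- then the prefix index fits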
I'm still learning MySQL, and while working on a new project that requires multi-language content, I have stumbled upon a question about the most practical way to design a database that supports this functionality while remaining efficient.
Table content_quote:
+--------------+-----------------------+------+-----+---------------------+-----------------------------+
| Field        | Type                  | Null | Key | Default             | Extra                       |
+--------------+-----------------------+------+-----+---------------------+-----------------------------+
| quote_id     | int(11) unsigned      | NO   | PRI | NULL                | auto_increment              |
| url_slug     | varchar(255)          | NO   |     | NULL                |                             |
| author_id    | mediumint(8) unsigned | NO   |     | NULL                |                             |
| quote        | mediumtext            | NO   |     | NULL                |                             |
| category     | varchar(15)           | NO   |     | NULL                |                             |
| likes        | int(11) unsigned      | NO   |     | 0                   |                             |
| publish_time | datetime              | NO   |     | 0000-00-00 00:00:00 | on update CURRENT_TIMESTAMP |
| locale       | char(5)               | NO   |     | NULL                |                             |
+--------------+-----------------------+------+-----+---------------------+-----------------------------+
Now, here I can just store a standard locale value like en_US in the locale field. But I have quite a few tables like this, and I'm not sure which is the correct path: either leave it as is, OR create a locale table that stores all the locales and change the current locale field to a tinyint(2) with a foreign key pointing to the new table.
Example:
+-----------+---------------------+------+-----+---------+----------------+
| Field     | Type                | Null | Key | Default | Extra          |
+-----------+---------------------+------+-----+---------+----------------+
| locale_id | tinyint(2) unsigned | NO   | PRI | NULL    | auto_increment |
| locale    | char(50)            | NO   |     | NULL    |                |
+-----------+---------------------+------+-----+---------+----------------+
More than the answer itself, I'm interested in the advantages and disadvantages of both approaches.
Advantages and disadvantages of a separate locales table (they are reversed when a locales table is NOT used):
Advantages
You can keep a list of all available locales, even ones not used yet; this lets you build, for example, a dropdown list of available locales in a form.
It prevents typos, since there will be only one en_US value available.
Disadvantages
You have to JOIN on the new table all the time just to get a string like en_US (a JOIN like the one sketched below).
Keep in mind that space will not be an issue. Don't try to make the decision based on 5 chars vs. the size of a tinyint.
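For illustration, a minimal sketch of the lookup-table approach and the JOIN it forces; it assumes content_quote's locale column has been replaced by a locale_id foreign key:
CREATE TABLE locales (
    locale_id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    locale    CHAR(5) NOT NULL UNIQUE   -- e.g. 'en_US'
) ENGINE=InnoDB;

-- Every query that needs the locale string now has to join:
SELECT q.quote, l.locale
FROM content_quote q
JOIN locales l ON l.locale_id = q.locale_id;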
I have two tables here. One is Items and the other is Parts.
Items has a part_id and Parts has an item_id.
When a user presses the submit button in the ItemDetail view, data is sent to the server and inserted into those two tables.
Here is how my code works:
Insert into the Items table first and get the id of the new Item row
Insert into the Parts table with this item_id and the other Part data
Update the Items table with the new part_id
But can I write those three SQL requests as just one request?
Here is the structure of my tables:
Items
+---------+------------------+------+-----+---------+----------------+
| Field   | Type             | Null | Key | Default | Extra          |
+---------+------------------+------+-----+---------+----------------+
| id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| name    | varchar(255)     | YES  |     | NULL    |                |
| price   | int(11)          | YES  |     | NULL    |                |
| part_id | int(10) unsigned | YES  |     | NULL    |                |
| type    | varchar(255)     | YES  |     | NULL    |                |
+---------+------------------+------+-----+---------+----------------+
Parts
+---------+------------------+------+-----+---------+----------------+
| Field   | Type             | Null | Key | Default | Extra          |
+---------+------------------+------+-----+---------+----------------+
| id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| item_id | int(10) unsigned | NO   |     | NULL    |                |
| name    | varchar(255)     | NO   |     | NULL    |                |
| number  | varchar(255)     | YES  |     | NULL    |                |
+---------+------------------+------+-----+---------+----------------+
You shouldn't have two tables pointing at each other like this; only one of the tables should have a foreign key, not both.
Then what you are looking for is this: http://dev.mysql.com/doc/refman/5.7/en/commit.html
Transactions make sure that either all queries are executed, or if there is an error somewhere all changes will be reverted.
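A minimal sketch of the three steps from the question wrapped in a single transaction (the column values are illustrative):
START TRANSACTION;

INSERT INTO Items (name, price, type) VALUES ('Widget', 100, 'tool');
SET @item_id = LAST_INSERT_ID();                 -- id of the new Item row

INSERT INTO Parts (item_id, name, number) VALUES (@item_id, 'Bolt', 'P-01');
SET @part_id = LAST_INSERT_ID();                 -- id of the new Part row

UPDATE Items SET part_id = @part_id WHERE id = @item_id;

COMMIT;   -- either all three statements take effect, or none do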
Looking at the logic you are using, you are doing it correctly.
As they are two separate tables, you will need to do two separate INSERT statements in SQL. Of course, you can use a stored procedure so that you only need to make one call from your code, and the SP will do the two inserts.
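If you go the stored-procedure route, a sketch could look like this (the procedure name and parameters are illustrative, not from the question):
DELIMITER //
CREATE PROCEDURE add_item_with_part(
    IN p_item_name VARCHAR(255), IN p_price INT, IN p_type VARCHAR(255),
    IN p_part_name VARCHAR(255), IN p_part_number VARCHAR(255))
BEGIN
    START TRANSACTION;
    INSERT INTO Items (name, price, type) VALUES (p_item_name, p_price, p_type);
    SET @item_id = LAST_INSERT_ID();
    INSERT INTO Parts (item_id, name, number) VALUES (@item_id, p_part_name, p_part_number);
    UPDATE Items SET part_id = LAST_INSERT_ID() WHERE id = @item_id;
    COMMIT;
END //
DELIMITER ;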
A question here is: what code are you using? If you are using something like Entity Framework, and your relationships are defined between your elements such as
Items
-Field 1
-Parts (FK) List<Parts>
then that would work. But looking at what you have tagged, I am guessing you're not using an ASP language? If you are, let me know and I may have a better solution for you.
The table is an InnoDB table. Here is some information that might be helpful.
EXPLAIN SELECT COUNT(*) AS y0_ FROM db.table this_ WHERE this_.id IS NOT NULL;
+----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref  | rows    | Extra                    |
+----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
|  1 | SIMPLE      | this_ | index | PRIMARY       | PRIMARY | 8       | NULL | 4711235 | Using where; Using index |
+----+-------------+-------+-------+---------------+---------+---------+------+---------+--------------------------+
1 row in set (0.00 sec)
mysql> DESCRIBE db.table;
+--------------+--------------+------+-----+---------+-------+
| Field        | Type         | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+-------+
| id           | bigint(20)   | NO   | PRI | NULL    |       |
| id2          | varchar(28)  | YES  |     | NULL    |       |
| photo        | longblob     | YES  |     | NULL    |       |
| source       | varchar(10)  | YES  |     | NULL    |       |
| file_name    | varchar(120) | YES  |     | NULL    |       |
| file_type    | char(1)      | YES  |     | NULL    |       |
| created_date | datetime     | YES  |     | NULL    |       |
| updated_date | datetime     | YES  |     | NULL    |       |
| createdby    | varchar(50)  | YES  |     | NULL    |       |
| updatedby    | varchar(50)  | YES  |     | NULL    |       |
+--------------+--------------+------+-----+---------+-------+
10 rows in set (0.05 sec)
The EXPLAIN query gives me the result right there, but the actual query has been running for quite a while. How can I fix this? What am I doing wrong?
I basically need to figure out how many photos there are in this table. Initially the original coder had a query which checked WHERE photo IS NOT NULL (which took 3+ hours), but I changed the query to check the id column, as it is the primary key. I expected a huge performance gain and an answer in under a second, but that seems not to be the case.
What sort of optimizations on the database do I need to do? I think the query is fine but feel free to correct me if I am wrong.
Edit: mysql Ver 14.14 Distrib 5.1.52, for redhat-linux-gnu (x86_64) using readline 5.1
P.S.: I renamed the tables for this post. I don't actually have a database named db with the table in question named table.
How long is 'long'? How many rows are there in this table?
A MyISAM table keeps track of how many rows it has, so a simple COUNT(*) will always return almost instantly.
InnoDB, on the other hand, works differently: an InnoDB table doesn't keep track of how many rows it has, so when you COUNT(*), it literally has to go and count each row. If you have a large table, this can take a number of seconds.
EDIT: Try COUNT(ID) instead of COUNT(*), where ID is an indexed column that has no NULLs in it. That may run faster.
EDIT2: If you're storing the binary data of the files in the longblob column, your table will be massive, which will slow things down.
Possible solutions:
Use MyISAM instead of InnoDB.
Maintain your own count, perhaps using triggers on inserts and deletes (see the sketch after this list).
Strip out the binary data into another table, or preferably regular files.
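A sketch of the trigger-maintained counter from the second option; the photo_counts table and trigger names are hypothetical:
CREATE TABLE photo_counts (n BIGINT NOT NULL) ENGINE=InnoDB;
INSERT INTO photo_counts SELECT COUNT(*) FROM `db`.`table`;   -- seed the counter once

CREATE TRIGGER photos_ai AFTER INSERT ON `db`.`table`
FOR EACH ROW UPDATE photo_counts SET n = n + 1;

CREATE TRIGGER photos_ad AFTER DELETE ON `db`.`table`
FOR EACH ROW UPDATE photo_counts SET n = n - 1;

-- Reading the count is now a one-row lookup:
SELECT n FROM photo_counts;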
I'm relatively confused about this...
I've got a table like:
+----------------+--------------------------------------------------+------+-----+-------------------+-----------------------------+
| Field          | Type                                             | Null | Key | Default           | Extra                       |
+----------------+--------------------------------------------------+------+-----+-------------------+-----------------------------+
| dhcp           | int(10) unsigned                                 | NO   | PRI | 0                 |                             |
| ip             | int(10) unsigned                                 | NO   | PRI | 0                 |                             |
| mac            | varchar(17)                                      | YES  | MUL | NULL              |                             |
| lease          | enum('fr','a','fi','a','u')                      | NO   | MUL | free              |                             |
| date           | timestamp                                        | NO   | MUL | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| uid            | varchar(255)                                     | YES  |     | NULL              |                             |
| starts_date    | datetime                                         | YES  |     | NULL              |                             |
| starts_weekday | tinyint(1)                                       | YES  |     | NULL              |                             |
| ends_date      | datetime                                         | YES  |     | NULL              |                             |
| ends_weekday   | tinyint(1)                                       | YES  |     | NULL              |                             |
| first_seen     | datetime                                         | YES  |     | NULL              |                             |
+----------------+--------------------------------------------------+------+-----+-------------------+-----------------------------+
I just added the first_seen column. The idea is that I will use INSERT ... ON DUPLICATE KEY UPDATE. dhcp and ip rightly form the (composite) primary key, as I want to have only one record for each combination at a time. So if mac changes, it should update the existing row if one exists for that (dhcp, ip) combination.
However, I want to have first_seen updated every time a new (ip, dhcp, mac) combination is seen, i.e. if the value of mac changes, I want to update first_seen. If the value of mac stays the same, I want to leave first_seen unchanged.
Is there any simple way to do this in SQL, i.e. with an IF() or something? Or do I have to handle this with another SELECT in the PHP script (keeping in mind that we're parsing a file to get this data and inserting about 10-16k rows, so time is a factor)?
Thanks,
Jason
Have you considered using triggers? A trigger is MySQL server-side code that runs when some other event, such as an update of the mac column, happens.
You could use a MySQL trigger for this.
I think you might want to use a TIMESTAMP column in MySQL; this will change when the record is updated or inserted. However, it will change when any value in the row changes. If you want something else to happen, you might consider a trigger that fires after update and validates the other business logic you referenced.
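If the trigger route were taken, a sketch might look like this; the table name leases is hypothetical, and <=> (MySQL's NULL-safe equality operator) is used because mac is nullable:
CREATE TRIGGER leases_bu BEFORE UPDATE ON leases
FOR EACH ROW
SET NEW.first_seen = IF(NEW.mac <=> OLD.mac, OLD.first_seen, CURRENT_TIMESTAMP);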
Thanks for all the suggestions, guys. It turns out that I don't need a trigger for this, just an IF() at the beginning of the update list.
In the INSERT INTO ... I have first_seen=CURRENT_TIMESTAMP.
Then at the beginning of the UPDATE list I have
first_seen = IF(mac != VALUES(mac), CURRENT_TIMESTAMP, first_seen)
This only updates first_seen if the mac changes; otherwise it sets first_seen to itself (its current value).
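Putting that together, a full statement might look like the sketch below; the table name leases and the inserted values are hypothetical, since the question doesn't name the table. Note that first_seen is assigned before mac: MySQL applies the update assignments left to right, so mac still holds its stored value when the IF() runs.
INSERT INTO leases (dhcp, ip, mac, first_seen)
VALUES (1, 3232235777, '00:11:22:33:44:55', CURRENT_TIMESTAMP)
ON DUPLICATE KEY UPDATE
    -- compare old vs. new mac while `mac` is still the stored value
    first_seen = IF(mac != VALUES(mac), CURRENT_TIMESTAMP, first_seen),
    mac        = VALUES(mac);
Since mac is nullable, NOT (mac <=> VALUES(mac)) would be the NULL-safe variant of the comparison.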