I have a MySQL table called config with 5 columns; the structure is as follows:
config_id | product_id | color_id | preview_front | preview_back
-----------+------------+----------+---------------+--------------
int(11) | int(11) | int(11) | BLOB | BLOB
The 2 BLOB columns have the attribute "BINARY" set. They contain 150x150 pixel preview images, each with a file size of roughly 6 KB.
My question is simple: How can I delete / remove the BLOBs without deleting the entire row?
Please note: Deleting the row is not an option. We need the data from the first 3 columns and are legally obliged to keep them. But we'd like to free up some space on our servers and approximately 1 GB of old and unused images seems like a good place to start.
I've already tried changing the column properties in PHPMyAdmin (version 3.4.10.1) to allow NULL values but I got an Internal Server Error.
Simply run an UPDATE query and set the BLOB fields to '':
UPDATE config SET preview_front='', preview_back=''
WHERE config_id = 1;
or, if the columns allow NULL:
UPDATE config
SET preview_front=NULL, preview_back=NULL
WHERE config_id = 1;
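One caveat worth adding (the question doesn't state the storage engine, so treat this as a hedged note): clearing the BLOBs frees space inside the table, but the data file on disk usually does not shrink by itself. To actually reclaim the ~1 GB, rebuild the table afterwards, for example:
-- Clear the previews for every row (use NULL instead of '' if the columns allow it),
-- then rebuild the table so the freed space can be returned to the filesystem:
UPDATE config SET preview_front = '', preview_back = '';
OPTIMIZE TABLE config;
If the preview columns are never coming back at all, ALTER TABLE config DROP COLUMN preview_front, DROP COLUMN preview_back; would free that space for good as well.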
The requirement
I am currently building a permissions system. One of the requirements is that it is horizontally scalable.
To achieve this, we have done the following:
There is a single "compiled resource permission" table that looks something like this:
| user_id | resource_id | reason |
| 1 | 1 | 1 |
| 1 | 2 | 3 |
| 2 | 1 | 2 |
The structure of this table denotes that user 1 has access to resources 1 & 2, and user 2 has access only to resource 1.
The "reason" column is a bit field with bits switched on depending on why the user has that permission. Bit 1 (value 1) denotes they are an admin, and bit 2 (value 2) denotes they created the resource.
So user 1 has access to resource 1 because they are an admin, and to resource 2 because they are an admin and they created that resource. If they were no longer an admin, they would still have access to resource 2 but not resource 1.
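As a concrete illustration of the bit arithmetic (table name and values taken from the example above; the constants 1 = admin and 2 = creator follow the bit assignments just described):
-- Rows where the user has access at least because they are an admin:
SELECT user_id, resource_id FROM compiled_permissions WHERE (reason & 1) != 0;
-- Simulate "user 1 loses the admin bit": resource 2 keeps reason 2 (creator),
-- while resource 1 drops to 0, i.e. no remaining reason for access.
SELECT resource_id, reason & ~1 AS remaining_reason
FROM compiled_permissions WHERE user_id = 1;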
To figure out what needs to go into this table, we use a "patcher" class that loops over the users & resources passed to it and inspects all the DB tables necessary to figure out which rows need adding to and which need removing from the table.
How we are trying to scale and the problem
To horizontally scale this, we split the logic into chunks and hand the chunks to a number of "workers" on an async queue.
This only scales so far: beyond a certain point adding workers no longer speeds things up, and sometimes row locking kicks in and actually slows it down.
Is there a particular type of row lock we can use to allow it to scale indefinitely?
Are we approaching this from completely the wrong angle? We have a lot of "Reasons" and a lot of complex permission logic that we need to be able to recompile fairly quickly.
SQL Queries that run concurrently, for reference
When we are "adding" reasons:
INSERT INTO `compiled_permissions` (`user_id`, `resource_id`, `reason`) VALUES (1,1,1), (1,2,3), (2,1,2) ON DUPLICATE KEY UPDATE `reason` = `reason` | VALUES(`reason`);
When we are "removing" reasons:
UPDATE `compiled_permissions` SET `reason` = `reason` & ~ (CASE
WHEN user_id = 1 AND resource_id = 1 THEN 2 ... A WHEN BRANCH FOR EVERY "REASON REMOVAL" ...
ELSE `reason`
END)
WHERE (`user_id`, `resource_id`) IN ((1,1),(1,2) .. ETC )
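A possible follow-up (an assumption about the workflow, not something stated above): after the removal pass, any row whose reason has dropped to 0 no longer grants access for any reason, so a cleanup statement such as
DELETE FROM `compiled_permissions` WHERE `reason` = 0;
would keep the table from accumulating rows that grant nothing.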
In a MySQL database, I have a table License with a few example rows as presented below:
ID | Key | Location
1  | 25  | C:/Public/lics/1885-0001.lic
3  | 21  | C:/Public/lics/1885-0006.lic
There are many such rows, which I would like to modify as given below:
ID | Key | Location
1  | 25  | C:/Licenses/1885-0001.lic
3  | 21  | C:/Licenses/1885-0006.lic
One of the columns in all the rows gets modified. How do I update the table to make this change across all rows?
Judging from the docs I posted in my comment, I think you should do something like this:
UPDATE License SET Location = REPLACE(Location, 'C:/Public/lics', 'C:/Licenses');
UPDATE License
SET Location = REPLACE(Location, 'Public/lics', 'Licenses');
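If you prefer to touch only the rows that actually contain the old prefix (REPLACE() leaves non-matching rows unchanged anyway, but the WHERE clause keeps MySQL from rewriting every row), a variant of the same query:
UPDATE License
SET Location = REPLACE(Location, 'C:/Public/lics', 'C:/Licenses')
WHERE Location LIKE 'C:/Public/lics/%';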
I dumped a working production database from a django app and am trying to migrate it to my local development environment. The production server runs MySQL 5.1, and locally I have 5.6.
When migrating the django-mailer's "messagelog" table, I'm running into the dreaded Error 1118:
ERROR 1118 (42000) at line 2226: Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
I've read lots of stuff online about this error, but none of it has solved my problem.
N.B. This error is not coming from the creation of the table, but rather the insertion of a row with pretty large data.
Notes:
The innodb_file_format and innodb_file_format_max variables are set to Barracuda.
The ROW_FORMAT is set to DYNAMIC on table creation.
The table does not have very many columns. Schema below:
+----------------+------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| message_data | longtext | NO | | NULL | |
| when_added | datetime | NO | | NULL | |
| priority | varchar(1) | NO | | NULL | |
| when_attempted | datetime | NO | | NULL | |
| result | varchar(1) | NO | | NULL | |
| log_message | longtext | NO | | NULL | |
+----------------+------------+------+-----+---------+----------------+
Again, the error happens ONLY when I try to insert a quite large (message_data is about 5 megabytes) row; creating the table works fine, and about 500,000 rows are added just fine before the failure.
I'm out of ideas; I've tried DYNAMIC and COMPRESSED row formats, and I've triple-checked the values of the relevant innodb variables:
mysql> show variables like "%innodb_file%";
+--------------------------+-----------+
| Variable_name | Value |
+--------------------------+-----------+
| innodb_file_format | Barracuda |
| innodb_file_format_check | ON |
| innodb_file_format_max | Barracuda |
| innodb_file_per_table | ON |
+--------------------------+-----------+
The creation code (from SHOW CREATE TABLE) looks like:
CREATE TABLE `mailer_messagelog` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`message_data` longtext NOT NULL,
`when_added` datetime NOT NULL,
`priority` varchar(1) NOT NULL,
`when_attempted` datetime NOT NULL,
`result` varchar(1) NOT NULL,
`log_message` longtext NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=869906 DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC
According to one of the answers to this question, your problem might be caused by changes in MySQL 5.6 (see the InnoDB Notes on http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html):
InnoDB Notes
Important Change: Redo log writes for large, externally stored BLOB
fields could overwrite the most recent checkpoint. The 5.6.20 patch
limits the size of redo log BLOB writes to 10% of the redo log file
size. The 5.7.5 patch addresses the bug without imposing a limitation.
For MySQL 5.5, the bug remains a known limitation.
As a result of the redo log BLOB write limit introduced for MySQL 5.6,
the innodb_log_file_size setting should be 10 times larger than the
largest BLOB data size found in the rows of your tables plus the
length of other variable length fields (VARCHAR, VARBINARY, and TEXT
type fields). No action is required if your innodb_log_file_size
setting is already sufficiently large or your tables contain no BLOB
data.
Note In MySQL 5.6.22, the redo log BLOB write limit is relaxed to 10%
of the total redo log size (innodb_log_file_size *
innodb_log_files_in_group).
(Bug #16963396, Bug #19030353, Bug #69477)
Does it help if you change innodb_log_file_size to something bigger than 50M? (Changing that variable needs some steps to work correctly:
https://dba.stackexchange.com/questions/1261/how-to-safely-change-mysql-innodb-variable-innodb-log-file-size ).
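For reference, a quick way to check whether this limit is what you are hitting (the numbers below are the MySQL 5.6 defaults, not values taken from the question):
-- Show the current redo log settings (sizes are in bytes):
SHOW VARIABLES LIKE 'innodb_log_file%';
-- With the 5.6 default innodb_log_file_size of 48M (50331648 bytes), a single
-- BLOB/TEXT value larger than roughly 4.8 MB trips the 10% limit described in
-- the release notes above, which matches the ~5 MB message_data rows failing.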
In case this is useful for anybody: @klasske's solution did not work for me, but writing this line in my.cnf did:
innodb_file_format=Barracuda
I encountered the same error in my project. I tried a lot of suggestions, such as increasing innodb_log_file_size or innodb_buffer_pool_size, or even disabling strict mode (innodb_strict_mode=0) in the my.cnf file, but nothing worked for me.
What worked for me was the following:
Changing the offending CharFields with a big max_length to TextFields. For example, models.CharField(max_length=4000) to models.TextField(max_length=4000)
Splitting the table into multiple tables after the first solution wasn't enough on its own.
Only after doing both did I get rid of the error.
Recently, the same error haunted me again on the same project, this time when I was running python manage.py test. I was confused because I had already split the tables and changed the CharFields to TextFields.
So I created another dummy Django project with a different database from my main project. I copied the models.py from the main project into the dummy project and ran migrate. To my surprise, everything went fine.
It dawned on me that something could be wrong with my main project migrations. Perhaps running manage.py test uses my earlier migrations with the offending CharFields? I don't know for sure.
So I disabled the migrations when running tests by editing settings.py and adding the following snippet at the end of the file. It disables the migrations when testing and solves the error.
import sys

class DisableMigrations(object):
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None

# Skip applying migrations when running the test suite:
if 'test' in sys.argv[1:]:
    MIGRATION_MODULES = DisableMigrations()
Doing that solved the problem for me when testing. I hope someone else finds it useful.
Source for the snippet settings_test_snippet.py
I got the same error:
ERROR 1118 (42000) at line 1852: Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
Adding the following to the MySQL configuration fixed it for me:
[mysqld]
innodb_log_file_size = 512M
innodb_strict_mode = 0
On Ubuntu 16.04 the file to edit is /etc/mysql/mysql.conf.d/mysqld.cnf (e.g. nano /etc/mysql/mysql.conf.d/mysqld.cnf); restart MySQL afterwards.
It works!
Source: http://dn59-kmutnb.blogspot.com/2017/06/error-1118-42000-at-line-1852-row-size.html
I'm having a problem with a column (VARCHAR(513) NOT NULL) on a MySQL table. During an import from a CSV file, a bunch of rows got filled with some weird stuff coming from I don't know where.
This stuff is not visible from Workbench, but if I query the DBMS with:
SELECT * FROM MyTable;
I get:
ID | Drive | Directory        | URI          | Type
1  | Z:    | \Users\Data\     | \server\dati | 1      <- correct row
...
32 | NULL  | \Users\OtherDir\ |              | 0
While row 1 is correct, row 32 shows a URI filled with something. Now, if I query the DBMS with:
SELECT length(URI) FROM MyTable WHERE ID = 32;
I get 32. Whereas:
SELECT URI FROM MyTable WHERE ID = 32;
run inside an MFC application returns a string with length 0.
Inside this program I have a tool for handling this table, but it cannot work because I cannot build queries against the rows with the bugged URI. How can I fix this? Where does this problem come from? If you need more information, please ask.
Thanks.
It looks like you have whitespace in the data, which is causing the issue; this often happens when importing data from a CSV.
To fix it, you may need to run the following update statement:
update MyTable set URI = trim(URI);
The above will remove leading and trailing spaces from the column.
Also, while importing data from a CSV, it's better to apply TRIM() to the values before inserting them into the database; this will avoid this kind of issue.
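A hedged side note: TRIM() with no extra arguments strips only space characters. If the invisible junk is actually carriage returns or line feeds (common when a CSV produced on Windows is imported), something along these lines would be needed instead (same table and column as above):
-- Strip CR/LF characters in addition to leading/trailing spaces:
UPDATE MyTable SET URI = TRIM(REPLACE(REPLACE(URI, '\r', ''), '\n', ''));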
Say I have a CSV file with 3 columns and a destination table with 5 columns (3 identical to the CSV columns, and 2 more). All rows have data for the same number of columns.
CSV:
id | name | rating
---|-----------|-------
1 | radiohead | 10
2 | bjork | 9
3 | aqua | 2
SQL table:
id | name | rating | biggest_fan | next_concert
Right now, in order to import the CSV file, I create a temporary table with 3 columns, then copy the imported data into the real table. But this seems silly, and I can't seem to find any more efficient solution.
Isn't there a way to import the file directly into the destination table, while generating NULL / default values in the columns that appear in the table but not in the file?
I'm looking for a SQL / phpMyAdmin solution
No, I don't think there's a better way. A different way would be to use a text manipulating program (sed, awk, perl, python,...) to add two commas to the end of each line; even if your column order didn't match, phpMyAdmin has a field for changing the order when importing a CSV. However, it seems to still require the proper number of columns. Whether that's more or less work than what you're already doing is up to you, though.
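One alternative that may be worth checking, if you can run SQL directly (for example from phpMyAdmin's SQL tab) rather than the CSV import dialog: LOAD DATA INFILE accepts an explicit column list, and any table column not named in that list gets its default value. A sketch, with an illustrative file path and a hypothetical table name bands:
LOAD DATA INFILE '/path/to/artists.csv'  -- hypothetical path
INTO TABLE bands                         -- hypothetical table name
FIELDS TERMINATED BY ','
IGNORE 1 LINES                           -- skip the header row
(id, name, rating);                      -- biggest_fan and next_concert fall back to their defaults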