Calculating total data size of a BLOB column in a table - MySQL

I have a table with large amounts of BLOB data in a column. I am writing a utility to dump the data to file system. But before dumping, I need to check if necessary space is available on the disk to export all the blob fields throughout the table.
Please suggest an efficient approach to get size of all the blob fields in the table.

You can use the MySQL function OCTET_LENGTH(your_column_name); see the MySQL string functions documentation for more details.

select sum(length(blob_column)) as total_size
from your_table
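If you want the total in megabytes, a minimal variant using the same placeholder names (your_table and blob_column are stand-ins, not real names):
select sum(octet_length(blob_column)) / 1024 / 1024 as total_mb
from your_table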

select sum(length(blob_column_name)) from desired_tablename;

Sadly this is DB-specific at best.
To get the total size of a table with blobs in Oracle I use the following:
https://blog.voina.org/?p=374
Sadly this does not work in DB2; I still have to find an alternative.
The simple
select sum(length(blob_column)) as total_size
from your_table
is not a correct query, as it will not correctly estimate the BLOB size from the reference that is stored in your BLOB column. You have to get the actual allocated size on disk for the BLOBs from the BLOB repository.
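For MySQL/InnoDB specifically, an allocated-on-disk figure (which includes the overflow pages where long BLOB values live) can be read from information_schema instead of summing column lengths; a minimal sketch, where your_schema is a placeholder:
-- approximate bytes allocated on disk for the whole table, including BLOB overflow pages
select data_length + index_length as allocated_bytes, data_free
from information_schema.tables
where table_schema = 'your_schema' and table_name = 'your_table';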

Related

Converting varbinary to longblob in MySQL

We are storing data in an InnoDB table with a varbinary column. However, our data size requirement has grown to over 1 MB, so I converted the column to longblob.
alter table mytable modify column d longblob;
Everything seems to be working as expected after I converted the column. However, I would like to know from people who have done this before whether anything more is required beyond converting the column as shown above, especially:
Are there any MySQL / MariaDB version-specific issues with longblob that I should take care of? There is no index on the column.
We use mysqldump to take regular backups. Do we need to change anything, since the blob storage mechanism seems to be different from varbinary?
Any other precautions/suggestions?
Thank you for your guidance
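One server setting that often matters once individual values exceed 1 MB (both for normal inserts and for mysqldump) is max_allowed_packet; a minimal check, where the 64 MB value is only an illustrative assumption:
-- a single packet must be larger than the biggest row you read or write
show variables like 'max_allowed_packet';
set global max_allowed_packet = 64 * 1024 * 1024; -- illustrative 64 MB; requires SUPER or SYSTEM_VARIABLES_ADMIN
For dumps, mysqldump's --hex-blob option is also commonly recommended for binary columns, so the blob bytes are written as hex literals.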

Export blob column from mysql dB to disk and replace it with new file name

So I'm working on a legacy database, and unfortunately its performance is very slow. A simple select query can take up to 10 seconds on tables with fewer than 10000 records.
So I tried to investigate the problem and found out that deleting the column used to store files (mostly videos and images) fixes the problem and improves performance a lot.
Along with adding proper indexes, I was able to run the exact same query that used to take 10-15 sec in under 1 sec.
So my question is: is there any existing tool or script I can use to export those blobs (videos) from the database, save them to disk, and update each row with the new file name/path on the file system?
If not, is there any proper way to optimize the database so that those blobs would not impact performance that much?
Hint: some clients consuming this database use high-level ORMs, so we don't have much control over the queries the ORM uses to fetch rows and their relations. So I cannot optimize the queries directly.
SELECT column FROM table1 WHERE id = 1 INTO DUMPFILE 'name.png';
How about this way?
There is also INTO OUTFILE instead of INTO DUMPFILE.
From the MySQL manual, 13.2.10.1 SELECT ... INTO Statement: The SELECT ... INTO form of SELECT enables a query result to be stored in variables or written to a file:
SELECT ... INTO var_list selects column values and stores them into variables.
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.
SELECT ... INTO DUMPFILE writes a single row to a file without any formatting.
Link: https://dev.mysql.com/doc/refman/8.0/en/select-into.html
Link: https://dev.mysql.com/doc/refman/8.0/en/select.html
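Putting the pieces together per row, a rough sketch might look like this; file_path is a hypothetical new column, the directory must be writable by the MySQL server and allowed by secure_file_priv, and DUMPFILE writes on the database server's file system, not the client's:
-- export one blob to a file on the server, then record the path and free the blob
SELECT blob_column FROM table1 WHERE id = 1 INTO DUMPFILE '/var/lib/mysql-files/1.bin';
UPDATE table1 SET file_path = '/var/lib/mysql-files/1.bin', blob_column = NULL WHERE id = 1;
Since DUMPFILE writes exactly one row per statement, looping over ids would normally be driven from a small external script.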

mysql MEDIUMTEXT performance

I want to save very long text, like a base64-encoded image string, into a MySQL table.
In this case, is it slow to execute queries (select, insert, update, delete)?
select * from A where index = x
table A
column index
column base64String <-- MEDIUMTEXT type
No, not at all; it depends on how you are fetching the data, not on the size or type of the data. If you store only the file name of the image and fetch the image from a path, it might be faster because you can cache those files. But when you store the file base64-encoded, please use a blob data type in MySQL.
I don't have any performance issues with storing files in base64; I am using blob as the MySQL data type for the encoded image data. Slow or fast again depends on the complexity of your query and on how your DB consumer is going to consume the data. There are different optimization mechanisms for consuming data from the DB, but whenever I store a user's profile image in the DB I use blob as the data type.
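What usually hurts is pulling the large column back on every query; if the base64 string is only needed occasionally, select just the columns you need so the big value stays out of the result set. A minimal sketch using the hypothetical names from the question (index is quoted because it is a reserved word):
-- fast path: do not touch the MEDIUMTEXT column at all
select `index` from A where `index` = x;
-- fetch the large value only when it is actually needed
select base64String from A where `index` = x;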

PostgreSQL: compress JSON column

I have a table with a JSON type column and I have 1 row in the table.
The following query shows a result of 20761 bytes:
SELECT pg_column_size(test_column) FROM test_table;
The value in test_column has a size of 45888 bytes, so PostgreSQL compressed this data, but only by about 45888/20761 ≈ 2.2 times. How can I get more compression of the JSON type than that?
Changing the type to jsonb does not make it use less disk space; it might in some cases even use more. Take a look at ZSON. It is a PostgreSQL extension that compresses JSON data by creating a lookup table for the most common data, most likely the JSON keys, and it claims to be able to save up to half of the needed disk space.
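To see how much a stored value is actually being compressed, before and after any change, you can compare the on-disk size with the raw serialized length; a small check using the names from the question:
-- stored (possibly TOAST-compressed) size vs. raw serialized size of the JSON value
SELECT pg_column_size(test_column) AS stored_bytes,
       octet_length(test_column::text) AS raw_bytes
FROM test_table;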

Importing a CSV file into mysql. (Specifically about create table command)

I have a text file full of values.
The first line is a list of column names like this:
col_name_1, col_name_2, col_name_3 ......(600 columns)
and all the following lines have values like this:
1101,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1101,1,3.86,65,0.46418,65,0.57151...
What is the best way to import this into mysql?
Specifically, how do I come up with the proper CREATE TABLE command so that the data will load properly? What is the best generic data type that would take in all the above values, like 1101 or 3.86 or 0.57151? I am not worried about the table being inefficient in terms of storage, as I need this for one-time usage.
I have tried some of the suggestions in other related questions, like using phpMyAdmin (it crashes, I am guessing due to the large amount of data).
Please help!
Data in CSV files is not normalized; those 600 columns should probably be spread across a couple of related tables, which is the recommended way of treating such data. You can then use fgetcsv() to read the CSV file line by line in PHP and insert the rows yourself.
To make MySQL process the CSV directly, you can create a 600-column table (I think) and issue a LOAD DATA LOCAL INFILE statement, as sketched below (or perhaps use mysqlimport, not sure about that).
The most generic data type would have to be VARCHAR or TEXT for bigger values, but of course you would lose semantics when used on numbers, dates, etc.
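A minimal sketch of the LOAD DATA approach mentioned above, assuming the 600-column table (here called mytable) has already been created with generic VARCHAR columns and the file is comma-separated with a header line:
-- load the CSV, skipping the header row of column names
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES;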
I noticed that you included the phpmyadmin tag.
phpMyAdmin can handle this out of the box. It will decide "magically" which type to use for each column, and will CREATE the table for you, as well as INSERT all the data. There is no need to worry about LOAD DATA INFILE, though that method can be safer if you want to know exactly what's going on without relying on phpMyAdmin's magic tooling.
Try convertcsvtomysql: just upload your CSV file and then you can download and/or copy the MySQL statement to create the table and insert the rows.