I have a table with a JSON-type column and one row in the table.
The following query shows a result of 20761 bytes:
SELECT pg_column_size(test_column) FROM test_table;
The value in test_column is 45888 bytes, so PostgreSQL compressed this data, but only by roughly 45888/20761 ≈ 2.2 times. How can I compress the JSON data more than that?
Changing the type to jsonb does not make it use less disk space; in some cases it may even use more. Take a look at ZSON. It is a PostgreSQL extension that compresses JSON data by building a lookup table of the most common strings, most likely the JSON keys, and it claims to be able to save up to half of the needed disk space.
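As a quick sanity check (a minimal sketch, assuming the same test_table and test_column as in the question), you can compare the stored size of the text, json, and jsonb representations of the same value side by side:

-- compare on-disk sizes of the same value in different representations
SELECT pg_column_size(test_column::text)  AS text_size,
       pg_column_size(test_column)        AS json_size,
       pg_column_size(test_column::jsonb) AS jsonb_size
FROM test_table;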
Related
We are storing data in an InnoDB table that has a varbinary column. However, our data size requirement has grown to over 1 MB, so I converted the column to LONGBLOB.
alter table mytable modify column d longblob;
Everything seems to be working as expected after I converted the column. However, I would like to know from people who have done this before whether anything more is required beyond the conversion shown above, in particular:
Are there any MySQL / MariaDB version-specific issues with LONGBLOB that I should take care of? There is no index on the column.
We use mysqldump to take regular backups. Do we need to change anything, since the BLOB storage mechanism seems to be different from varbinary?
Any other precautions/suggestions.
Thank you for your guidance.
I have a DB2 11 database with a large table that has JSON data stored in a CLOB column. Given that I'd like to query it using the JSON_VAL function, I always need to convert it first with JSON2BSON, which I assume adds significant overhead. I would like to move the data into another table that has exactly the same structure, except that the CLOB column is replaced with a BLOB one so the JSON is stored directly in converted form, hoping that this will speed up my queries.
My approach to this was to run:
insert into newtable (ID, BLOBDATA) select ID, SYSTOOLS.JSON2BSON(CLOBDATA) from oldtable;
After doing this I realized that long JSON objects got truncated. I googled this and learned that such SELECTs tend to truncate large objects.
I am reaching out here to see if there is any simple way for me to do this exercise without having to write a program that reads out and writes back all the data. (I have been burnt by similar truncation before when I used the DB2 CSV export features.)
Thanks.
Starting with Db2 11.1.4.4 there are new JSON functions based on the ISO technical paper. I would advise using them; they are the strategic functionality going forward.
You could use JSON_VALUE to perform the equivalent of what you planned to do with JSON_VAL.
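As a rough sketch against the oldtable from the question (the JSON path and RETURNING types are made-up illustrations, not taken from your schema), JSON_VALUE operates directly on the character data, so no JSON2BSON conversion step is needed:

-- extract scalar values straight from the CLOB column
SELECT ID,
       JSON_VALUE(CLOBDATA, '$.customer.name' RETURNING VARCHAR(100)) AS customer_name
FROM oldtable
WHERE JSON_VALUE(CLOBDATA, '$.customer.id' RETURNING INTEGER) = 42;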
I want to save very long text, such as a base64-encoded image string, into a MySQL table.
In this case, will queries (SELECT, INSERT, UPDATE, DELETE) be slow to execute?
select * from A where index = x
table A
column index
column base64String <-- MEDIUMTEXT type
No, not at all; it depends on how you are fetching the data, not on the size or type of the data. If you store only the file name of the image and fetch the image from a path, it might be faster because you can cache those files. But when you store the file base64-encoded, please use a BLOB data type in MySQL.
I don't have any performance issues storing files this way; I use BLOB as the MySQL data type for the encoded image data. Slow or fast again depends on the complexity of your query and on how your DB consumer is going to consume the data. There are different optimization mechanisms for consuming data from the DB, but whenever I store a user's profile image in the DB I use BLOB as the data type.
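For reference, a minimal sketch of the kind of table described above (all names are hypothetical); storing the decoded bytes in a BLOB column avoids the roughly 33% size overhead of base64 text:

CREATE TABLE user_profile (
  id            INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  profile_image MEDIUMBLOB   -- raw image bytes, up to 16 MB
);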
I'd like to know if you can query a BLOB column (large/medium, any kind) and retrieve only the bytes from N to M, so you can query a huge BLOB and get just small chunks of it in your result set. If this is possible in MySQL, how can you do it (an example, please!)?
I found this question for plain text, but what about doing the same for bytes?
You can find the answer right here: MySQL blob: how to get just a subset of the stored data
MySQL treats blobs the same as strings (more or less):
BLOB values are treated as binary strings (byte strings). They have no character set, and sorting and comparison are based on the numeric values of the bytes in column values.
So all the usual string functions work on BLOBs. In particular, you can use SUBSTRING to grab just part of a blob.
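For instance, a hedged example (table and column names are placeholders): SUBSTRING takes a 1-based start position and a length, so this fetches a 64 KiB chunk starting one mebibyte into the blob:

-- read a 65536-byte chunk starting at byte offset 1048577
SELECT SUBSTRING(blob_column, 1048577, 65536) AS chunk
FROM my_blobs
WHERE id = 1;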
That said, storing a multi-gigabyte data file in a relational database as a BLOB isn't the best thing to do. You'd be better off storing the file's metadata in the database and leaving the file itself in the file system; file systems are pretty good at managing files, relational databases are good at handling structured data.
I have a table with large amounts of BLOB data in a column. I am writing a utility to dump the data to the file system. But before dumping, I need to check whether the necessary disk space is available to export all the BLOB fields throughout the table.
Please suggest an efficient approach to get the size of all the BLOB fields in the table.
You can use the MySQL function OCTET_LENGTH(your_column_name). See here for more details.
select sum(length(blob_column)) as total_size
from your_table
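The same thing written with OCTET_LENGTH, which in MySQL is a synonym for LENGTH and returns the size in bytes (table and column names are placeholders):

-- total bytes stored across all rows of the blob column
SELECT SUM(OCTET_LENGTH(blob_column)) AS total_size_bytes
FROM your_table;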
select sum(length(blob_column_name)) from desired_tablename;
Sadly, this is DB-specific at best.
To get the total size of a table with blobs in Oracle I use the following:
https://blog.voina.org/?p=374
Sadly, this does not work in DB2; I still have to find an alternative.
The simple
select sum(length(blob_column)) as total_size
from your_table
is not a correct query, as it does not correctly estimate the blob size: what is stored in your blob column is only a reference to the blob. You have to get the actual allocated size on disk for the blobs from the blob repository.
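For Oracle, a hedged sketch of one way to get the actual allocated size (the dictionary views are standard Oracle, but the table name is a placeholder): out-of-line LOB data lives in its own segment, so you sum the segment sizes rather than the column lengths:

-- allocated on-disk size of all LOB segments belonging to one table
SELECT SUM(s.bytes) AS allocated_bytes
FROM   user_lobs l
JOIN   user_segments s ON s.segment_name = l.segment_name
WHERE  l.table_name = 'YOUR_TABLE';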