MySQL MEDIUMTEXT performance

I want to store very long text, such as a base64-encoded image string, in a MySQL table.
In this case, will queries (SELECT, INSERT, UPDATE, DELETE) be slow?
select * from A where index = x
Table A:
column index
column base64String <-- MEDIUMTEXT type

No, not at all; it depends on how you fetch the data, not on the size or type of the data. If you store only the file name of the image and fetch the image from a path, it may be faster because you can cache those files. But when you do store the file base64-encoded, please use the BLOB data type in MySQL.
I don't have any performance issues storing files as base64; I use BLOB as the MySQL data type for the encoded image data. Again, slow or fast depends on the complexity of your query and on how your DB consumer will consume the data. There are different optimization mechanisms for consuming data from a DB, but whenever I store a user's profile image in the DB, I use BLOB as the data type.
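As a minimal sketch of that advice (the table and column names here are hypothetical, not from the question), a BLOB-based schema could look like this:

-- Hypothetical schema: store the raw bytes in a MEDIUMBLOB
-- instead of a base64 string in MEDIUMTEXT.
CREATE TABLE user_images (
    id INT AUTO_INCREMENT PRIMARY KEY,
    image_data MEDIUMBLOB NOT NULL
);

-- Insert the raw bytes directly (x'...' is a MySQL hex literal);
-- base64 encoding/decoding is then left to the application.
INSERT INTO user_images (image_data) VALUES (x'FFD8FFE0');
SELECT image_data FROM user_images WHERE id = 1;

Storing raw bytes in a BLOB also avoids the roughly 33% size overhead that base64 encoding adds to the stored data.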

Related

How to work with BLOB contents in MySQL?

I'm using a BLOB data type to store a big array of bytes. The arrays are produced and consumed by a C# application, but now I need to edit my BLOB in SQL.
The question is:
How can I update just one byte in a BLOB field using SQL?
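For illustration, one way to do this in MySQL uses the INSERT() string function, which also operates on binary strings; the table and column names below are hypothetical:

-- INSERT(str, pos, len, newstr) returns str with len bytes
-- replaced starting at (1-based) position pos.
-- Here: overwrite the single byte at position 10 with 0xFF.
UPDATE blob_table
SET blob_col = INSERT(blob_col, 10, 1, x'FF')
WHERE id = 1;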

PostgreSQL: compress JSON column

I have a table with a JSON type column and one row in the table.
The following query reports 20761 bytes:
SELECT pg_column_size(test_column) FROM test_table;
The raw value in test_column is 45888 bytes, so PostgreSQL has compressed this data, but only by a factor of 45888/20761 ≈ 2.2. How can I compress the JSON column further than that?
Changing the type to jsonb does not make it use less disk space; in some cases it may even use more. Take a look at ZSON, a PostgreSQL extension that compresses JSON data by building a lookup table of the most common data (most likely the JSON keys); it claims to be able to save up to half of the needed disk space.
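If you want to check the jsonb point yourself, a quick sketch using the table from the question compares the reported sizes of the same value under both types. Note that pg_column_size() on the stored column gives the on-disk (possibly TOAST-compressed) size, while on the casted expression it gives the in-memory size of the converted value, so treat the comparison as a rough indication only:

SELECT pg_column_size(test_column)        AS json_size_on_disk,
       pg_column_size(test_column::jsonb) AS jsonb_size_in_memory
FROM test_table;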

MySQL get blob N bytes

I'd like to know whether you can query a BLOB column (large/medium, any kind) and retrieve only bytes N to M, so you can query a huge BLOB and get just small chunks of it in your result set. If this is possible in MySQL, how can you do it (an example, please)?
I found this question for plain text, but what about doing the same for bytes?
You can find the answer right here: MySQL blob: how to get just a subset of the stored data

Pictures using Postgres and Xojo

I have converted from a MySQL database to Postgres. During the conversion, the picture column in Postgres was created as bytea.
This Xojo code works in MySQL but not Postgres.
Dim mImage as Picture
mImage = rs.Field("Picture").PictureValue
Any ideas?
I don't know about this particular issue, but here's what you might do to find out yourself:
Pictures are stored as BLOBs in the database. This means the column must also be declared as BLOB (or a similar binary type). If it was accidentally declared as TEXT, things would still work as long as the database never gets exported by other means; i.e., as long as only your Xojo code reads and writes the record using the PictureValue functions, the data is kept in BLOB form. But if you then convert to another database, the BLOB data would be read as text, and in that process it might get mangled.
So it may be relevant to let us know how you converted the DB. Did you export it as SQL commands and then import it into Postgres by running those commands? Do you still have the export file? If so, find a record with picture data in it and see whether that data starts with x' followed by hex byte codes, e.g. x'45FE1200... and so on. If it doesn't, that's another indicator for my suspicion.
So, check the type of the Picture column in your old DB first. If it specifies a binary data type, then the above probably does not apply.
Next, you can look at the actual binary data that Xojo reads. To do that, get the BlobValue instead of the PictureValue and store it in a MemoryBlock. Do the same for a single picture with both the old and the new database. The MemoryBlocks should contain the same bytes. If not, that suggests the data was not transferred correctly. Why? Well, that depends on how you converted it.
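On the Postgres side you can also peek at the stored bytes directly in SQL. This sketch assumes the bytea column is named Picture as in the question and lives in a table called pictures (the table name is hypothetical):

-- Show the first 8 bytes of each stored picture as hex, so you can
-- check for a sane image header (e.g. JPEG data starts with ffd8).
SELECT encode(substring("Picture" from 1 for 8), 'hex')
FROM pictures;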

MySQL blob: how to get just a subset of the stored data

I would like to use MySQL as a storage system for a huge number of files.
I would like to read/write just a portion of the data stored in a column (the data is stored as bytes), so I don't have to load the entire file into the application (because it can be larger than a GB).
So, in brief, I would like to have random read/write access in a blob column without loading the entire data into memory.
Are there functions available to perform these operations? Thank you.
MySQL treats blobs the same as strings (more or less):
BLOB values are treated as binary strings (byte strings). They have no character set, and sorting and comparison are based on the numeric values of the bytes in column values.
So all the usual string functions work on blobs. In particular, you can use SUBSTRING to grab just part of a blob.
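For example (a sketch; the table and column names are hypothetical), fetching a 4 KB chunk starting at byte 1000 looks like this:

-- SUBSTRING(str, pos, len) works on binary strings too;
-- positions are 1-based.
SELECT SUBSTRING(file_data, 1000, 4096) AS chunk
FROM files
WHERE id = 42;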
That said, storing a multi-gigabyte data file in a relational database as a BLOB isn't the best thing to do. You'd be better off storing the file's metadata in the database and leaving the file itself in the file system; file systems are pretty good at managing files, relational databases are good at handling structured data.
You can try this approach: store the metadata of your files (path, name, etc.) in the database and store the files themselves under a directory.
From the database you can fetch the file path and then read the file in random-access mode; using the file offset you can get the required subset of the stored data.
You could use e.g. MID() [1] to cut portions of the BLOB, though I would prefer to store files in the file system, not in a database; MySQL performs rather poorly on BLOBs.
[1] http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_mid
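MID() is simply a synonym for SUBSTRING(), so the chunked read sketched above can be written equivalently (same hypothetical names) as:

-- Read 4 KB starting at byte offset 1000 using MID().
SELECT MID(file_data, 1000, 4096) AS chunk
FROM files
WHERE id = 42;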