How can I insert a file into a table in SQL? - mysql

I haven't seen any data type that can store a file in SQL. Is there such a thing? Specifically, I want to insert source code into my table. What is the best way to do this? It could be stored in my database as nicely formatted text, or better (what I actually want), stored as a single file. Please note that I'm using MySQL.

It is best not to store a file in your SQL database, but to store a path to the file on the server (or any other UNC path) that your application can retrieve by itself and do with whatever is necessary.
see this: https://softwareengineering.stackexchange.com/questions/150669/is-it-a-bad-practice-to-store-large-files-10-mb-in-a-database
and this:
Better way to store large files in a MySQL database?
And if you still want to store the file in the DB, here is an example:
http://mirificampress.com/permalink/saving_a_file_into_mysql

If you can serialize the file, you can store it as binary and then deserialize it when needed:
http://dev.mysql.com/doc/refman/5.0/en/binary-varbinary.html

You can also use a BLOB (http://dev.mysql.com/doc/refman/5.0/en/blob.html), which has some differences. Normally I just store the file in the filesystem and a pointer in the DB, which makes serving it back via something like HTTP a bit easier and doesn't bloat the database.

Storing the file in a table only makes sense if you need to do searches in that code. In other cases, you should only store a file's URL.
If you want to store a text file, use the TEXT data type. Since it is source code, you may consider using the ASCII character set to save space, but be aware that this will cause character set conversions during your queries, which affects performance. Also, if it is ASCII you can use REGEXP for searches (that operator doesn't work with multi-byte charsets).
To load the file, if it is on the same server as MySQL, you can use the LOAD_FILE() function within an INSERT, for example:
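A minimal sketch, assuming a hypothetical table snippets with a TEXT column source; LOAD_FILE() needs the FILE privilege and a path the server is allowed to read (subject to secure_file_priv), and it returns NULL if the file cannot be read:

-- Read the file server-side and insert its contents as text.
INSERT INTO snippets (filename, source)
VALUES ('main.c', LOAD_FILE('/var/lib/mysql-files/main.c'));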

Related

What is the best way to store a pretty large JSON object in MySQL

I'm building a Laravel app whose core features are driven by rather large JSON objects (the largest ones are between 1,000 and 1,500 lines).
I know there are better database choices than MySQL for storing files and blocks of data, but for various reasons I will need to use MySQL for this application.
So my question is: how do I store my JSON objects most effectively in MySQL? I won't need to run any queries on the column that holds the data; there will be other columns for identifying it. Something like this:
id, title, created-at, updated-at, JSON-blobthingy
Any ideas?
You could use the JSON data type if you have MySQL version 5.7.8 or above.
You could store the JSON file on the server, and simply reference its location via MySQL.
You could also use one of the TEXT types.
The best answer I can give is to use MySQL 5.7. In this version the new JSON column type is supported, which handles large JSON documents very well (obviously).
https://dev.mysql.com/doc/refman/5.7/en/json.html
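A minimal sketch of that approach, assuming MySQL 5.7.8+ and hypothetical table/column names; the JSON type validates documents on insert and stores them in an optimized binary format:

CREATE TABLE documents (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  title      VARCHAR(255) NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  payload    JSON NOT NULL
);

-- Invalid JSON is rejected at insert time.
INSERT INTO documents (title, payload)
VALUES ('example', '{"type": "demo", "items": [1, 2, 3]}');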
You could compress the data before inserting it if you don't need it to be searchable. I'm using the 'zlib' library for that.
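If you would rather compress inside the database than in application code, MySQL's built-in COMPRESS()/UNCOMPRESS() functions (zlib-based) are one option; a sketch assuming a hypothetical table documents_gz with a BLOB column payload_gz:

-- Store the JSON compressed; it is no longer directly searchable.
INSERT INTO documents_gz (title, payload_gz)
VALUES ('example', COMPRESS('{"type": "demo", "items": [1, 2, 3]}'));

-- Decompress on the way out.
SELECT UNCOMPRESS(payload_gz) AS payload FROM documents_gz WHERE title = 'example';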
Simply, you can use the LONGBLOB type, which can hold up to 4 GB of data, for the column holding the large JSON object; you can insert, update, and read this column normally as if it were text or anything else!
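A sketch of that alternative with hypothetical names; note that a blob column stores raw bytes, so the server neither validates nor indexes the JSON:

CREATE TABLE documents_raw (
  id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  title   VARCHAR(255) NOT NULL,
  payload LONGBLOB NOT NULL  -- LONGBLOB holds up to 4 GB per value
);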

What's the best way to access a MySQL BLOB in CakePHP 3.0?

The manual says:
Likewise, binary columns will accept file handles, and generate file handles when reading data.
I'm seeing it do this. I have a table users with a column my_blob. I ran bake all, so now I have a Users Table with a User Entity. If I fetch $user, then $user->my_blob is a file handle (or stream) resource, not a string or something I can access directly. Reading the data is easy enough: fread($user->my_blob, 256). But writing to it would be extremely cumbersome; I think I'd have to create a file to hold my data and then write to that, or something like that.
How do I write to a BLOB field?

Pictures using Postgres and Xojo

I have converted from a MySQL database to Postgres. During the conversion, the picture column in Postgres was created as bytea.
This Xojo code works in MySQL but not Postgres.
Dim mImage as Picture
mImage = rs.Field("Picture").PictureValue
Any ideas?
I don't know about this particular issue, but here's what you can do to find out yourself, perhaps:
Pictures are stored as BLOBs in the database. This means that the column must also be declared as BLOB (or a similar binary type). If it was accidentally declared as TEXT, this would still work as long as the database is not exported by other means, i.e. as long as only your Xojo code reads and writes the record using the PictureValue functions, which keeps the data in BLOB form. But if you then convert to another database, the BLOB data would be read as text, and in that process it might get mangled.
So it may be relevant to let us know how you converted the DB. Did you perform an export as SQL commands and then import it into Postgres by running those commands again? Do you still have the export file? If so, find a record with picture data in it and see whether that data starts with x' and then contains hex byte code, e.g. x'45FE1200... and so on. If it doesn't, that's another indicator for my suspicion.
So, check the type of the Picture column in your old DB first. If that specifies a binary data type, then the above probably does not apply.
Next, you can look at the actual binary data that Xojo reads. To do that, get the BlobValue instead of the PictureValue, and store that in a MemoryBlock. Do the same for a single picture, both with the old and the new database. The MemoryBlock should contain the same bytes. If not, that would suggest that the data was not transferred correctly. Why? Well, that depends on how you converted it.

Importing a CSV file into MySQL (specifically about the CREATE TABLE command)

I have a text file full of values.
The first line is a list of column names, like this:
col_name_1, col_name_2, col_name_3 ......(600 columns)
and all the following lines have values like this:
1101,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1101,1,3.86,65,0.46418,65,0.57151...
What is the best way to import this into MySQL?
Specifically, how do I come up with the proper CREATE TABLE command so that the data loads properly? What is the best generic data type that would accept all of the above values, like 1101 or 3.86 or 0.57151? I am not worried about the table being inefficient in terms of storage, as I only need this for one-time use.
I have tried some of the suggestions in other related questions, like using phpMyAdmin (it crashes, I'm guessing due to the large amount of data).
Please help!
Data in CSV files is not normalized; those 600 columns could probably be spread across a few related tables, and that is the recommended way of treating such data. You can then use fgetcsv() to read the CSV file line by line in PHP.
To make MySQL process the CSV directly, you can create a 600-column table and issue a LOAD DATA LOCAL INFILE statement (or perhaps use mysqlimport; I'm not sure about that).
The most generic data type would be VARCHAR, or TEXT for bigger values, but of course you lose semantics when it is used for numbers, dates, etc. For example:
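A rough sketch of that combination, trimmed to three of the 600 columns and using hypothetical names; the real statement would repeat the pattern for every column in the header row:

CREATE TABLE csv_import (
  col_name_1 VARCHAR(64),
  col_name_2 VARCHAR(64),
  col_name_3 VARCHAR(64)
  -- ...and so on for the remaining columns
);

-- Load the file from the client machine, skipping the header line;
-- empty fields arrive as empty strings in the VARCHAR columns.
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE csv_import
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES;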
I noticed that you included the phpmyadmin tag.
phpMyAdmin can handle this out of the box. It will decide "magically" which type to give each column and will CREATE the table for you, as well as INSERT all the data. There is no need to worry about LOAD DATA INFILE, though that method can be safer if you want to know exactly what's going on without relying on phpMyAdmin's magic tooling.
Try convertcsvtomysql: just upload your CSV file, and then you can download and/or copy the MySQL statement that creates the table and inserts the rows.

MySQL blob: how to get just a subset of the stored data

I would like to use MySQL as a storage system for a huge number of files.
I would like to read/write just a portion of the data stored in a column (the data is stored as bytes) so I don't have to load the entire file into the application (because it can be larger than a GB).
So, in brief, I would like to have random read/write access in a blob column without loading the entire data into memory.
Are there functions available to perform these operations? Thank you.
MySQL treats blobs the same as strings (more or less):
BLOB values are treated as binary strings (byte strings). They have no character set, and sorting and comparison are based on the numeric values of the bytes in column values.
So all the usual string functions work on blobs. In particular, you can use SUBSTRING() to grab just part of a blob, for example:
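A sketch with hypothetical table/column names, reading a 4 KB slice starting one mebibyte into the blob (SUBSTRING positions are 1-based):

SELECT SUBSTRING(file_data, 1048577, 4096) AS chunk
FROM stored_blobs
WHERE id = 42;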
That said, storing a multi-gigabyte data file in a relational database as a BLOB isn't the best thing to do. You'd be better off storing the file's metadata in the database and leaving the file itself in the file system; file systems are pretty good at managing files, relational databases are good at handling structured data.
You can try this approach: store the metadata of your files (path, name, etc.) in the database and store the files themselves under a directory.
From the database you can fetch the file path and then read the file in random-access mode. Using the file offset, you can get the required subset of the stored data. For example:
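A sketch of that layout with hypothetical table and column names; the database holds only the location, and the application opens the file and seeks to whatever byte range it needs:

CREATE TABLE stored_files (
  id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  name       VARCHAR(255) NOT NULL,
  path       VARCHAR(1024) NOT NULL,   -- filesystem location of the actual file
  size_bytes BIGINT UNSIGNED NOT NULL
);

-- Fetch just the path and size; the file contents never pass through MySQL.
SELECT path, size_bytes FROM stored_files WHERE id = 42;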
You could use e.g. MID() [1] (a synonym for SUBSTRING()) to cut out portions of the BLOB, though I would prefer to store files in the file system, not in a database; MySQL performs rather poorly on BLOBs.
[1]
http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_mid