I have a table that contains an image BLOB field.
I want to be able to submit a query to the database and have the BLOBs written to the windows file system.
Is this possible??
Yes, it is possible. You can use the SELECT statement with the INTO DUMPFILE clause. For example:
SELECT data_column
FROM table1
WHERE id = 1
INTO DUMPFILE 'image.png';
From the reference: If you use INTO DUMPFILE instead of INTO OUTFILE, MySQL writes only one row into the file, without any column or line termination and without performing any escape processing. This is useful if you want to store a BLOB value in a file.
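Note that the file is written by the MySQL server process on the server host (not by your client), and the server's secure_file_priv setting can restrict, or entirely disable, where it may write. A quick check, plus a Windows-path variant of the query (the path is only an example):

SHOW VARIABLES LIKE 'secure_file_priv';

SELECT data_column
FROM table1
WHERE id = 1
INTO DUMPFILE 'C:/exports/image.png';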
I'm working on a legacy database, and unfortunately its performance is very slow. A simple select query can take up to 10 seconds on tables with fewer than 10,000 records.
I tried to investigate the problem and found out that deleting the column they used to store files (mostly videos and images) fixes the problem and improves performance a lot.
Along with adding proper indexes, I was able to run the exact same query that used to take 10-15 seconds in under 1 second.
So my question is: is there any existing tool or script I can use to export those blobs (videos) from the database, save them to disk, and update each row with the new file name/path on the file system?
If not, is there a proper way to optimize the database so that those blobs don't impact performance that much?
Hint: some of the clients consuming this database use high-level ORMs, so we don't have much control over the queries the ORM uses to fetch rows and their relations. So I cannot optimize the queries directly.
How about this way?

SELECT column FROM table1 WHERE id = 1 INTO DUMPFILE 'name.png';

There is also INTO OUTFILE instead of INTO DUMPFILE.
13.2.10.1 SELECT ... INTO Statement
The SELECT ... INTO form of SELECT enables a query result to be stored in variables or written to a file:
SELECT ... INTO var_list selects column values and stores them into variables.
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.
SELECT ... INTO DUMPFILE writes a single row to a file without any formatting.
Link: https://dev.mysql.com/doc/refman/8.0/en/select-into.html
Link: https://dev.mysql.com/doc/refman/8.0/en/select.html
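For illustration, a minimal sketch of the three forms side by side (table and column names are placeholders):

-- store a scalar result in a user variable
SELECT COUNT(*) INTO @total FROM table1;

-- write many rows, with formatting, to a text file
SELECT id, name FROM table1
INTO OUTFILE '/tmp/table1.csv'
FIELDS TERMINATED BY ',';

-- write one row, raw and unformatted, to a file (useful for BLOBs)
SELECT data_column FROM table1 WHERE id = 1
INTO DUMPFILE '/tmp/blob.bin';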
I have a mysql database table with audio data in MEDIUMBLOB fields. The thing is, one audio file is spread across multiple rows. So I want to join them. When I do this:
select data from audio where id=1 into outfile "output" fields escaped by '';
.. I get audio. When I do the same thing for id=2, I get audio. When I put them together:
select data from audio order by date, time into outfile "output" fields escaped by '';
.. I get audio for a while, then high-amplitude noise. The noise starts where id=2 would have been. If I select more than two rows to put together, sometimes the output for a particular id is noise, sometimes it's the correct audio. It's not exactly interleaved.
So, how can I extract and concatenate blobs from multiple rows into a coherent binary output file?
Edit: this is raw audio data. E.g. to read it into Audacity you would go to import->raw.
SELECT INTO OUTFILE is intended mainly to let you quickly dump a table to a text file, and also to complement LOAD DATA INFILE.
I strongly suspect that the SELECT INTO OUTFILE is either adding an ASCII NULL, or other formatting between columns so that it can be read back.
You could compare binary output to determine if this is true, and also pick up upon any other formatting, encoding or shaping that may also be present.
Do you have to use INTO OUTFILE? Can you not get the binary data and create a file directly with a script or other middle-tier layer, rather than relying on the database?
Update
After writing this and reading the comment below, I thought about CONCAT: MySQL treats BLOB values as binary strings (byte strings), so in theory a simple concatenation of multiple columns into a single one might work.
If it doesn't, it wouldn't take much to write a simple Perl, PHP, C, bash, or other script to query the database, join two or more binary columns, and write the output to a file on the system.
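For example, a minimal sketch of the CONCAT idea in pure SQL, assuming the two chunks are the rows with id 1 and 2 from the question's audio table:

SELECT CONCAT(
    (SELECT data FROM audio WHERE id = 1),
    (SELECT data FROM audio WHERE id = 2)
)
INTO DUMPFILE '/tmp/joined.raw';
-- CONCAT on BLOBs is a plain byte-string concatenation, and DUMPFILE
-- writes the single resulting row without any escaping or terminators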
What you need is to run the mysql command from the command line, using -e to pass a SQL statement to execute, and the appropriate flags for binary output.
You then use GROUP_CONCAT with an empty separator to concatenate all the data.
Finally, you redirect the output to your file. The only thing you need to be aware of is the built-in limit on GROUP_CONCAT (the group_concat_max_len system variable), which you can change in your .ini file or by setting a global variable.
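For example, raising the limit before running the export could look like this (the value is just an example; size it to your data):

SET GLOBAL group_concat_max_len = 1024 * 1024 * 1024; -- 1 GiB; new sessions pick this up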
Here is how this would look:
mysql --binary-mode -e "select group_concat(data separator '') from audio order by date, time;" -A -B -r -L -N YOUR_DATABASE_NAME > output
I just tried it with binary data (not audio) in a database and it works.
I have tables that are on different mysql instances. I want to export some data as csv from a mysql instance, and perform a left join on a table with the exported csv data. How can I achieve this?
Quite surprisingly, that is possible with MySQL, though there are several steps you need to go through.
1. First create a template table using the CSV engine and the desired table layout. This is the table into which you will import your CSV file. Use CREATE TABLE yourcsvtable (field1 INT NOT NULL, field2 INT NOT NULL) ENGINE=CSV, for example. Please note that NULL values are not supported by the CSV engine.
2. Perform your SELECT to extract the CSV file, e.g. SELECT * FROM anothertable INTO OUTFILE 'temp.csv' FIELDS TERMINATED BY ',';
3. Copy temp.csv into your target MySQL data directory as yourcsvtable.CSV. The location and exact name of this file depend on your MySQL setup. You cannot perform the SELECT in step 2 directly into this file, as it is already open - you need to handle this in your script.
4. Use FLUSH TABLE yourcsvtable; to reload/import the CSV table.
5. Now you can execute your query against the CSV file as expected.
Depending on your data you need to ensure that the data is correctly enclosed by quotation marks or escaped - this needs to be taken into account in step 2.
CSV file can be created by MySQL on some another server or by some other application as long as it is well-formed.
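Put together, a sketch of the whole sequence, reusing the placeholder names from the steps above (localtable stands in for whatever table you want to join against):

-- step 1, on the target instance: template table with the CSV engine
CREATE TABLE yourcsvtable (field1 INT NOT NULL, field2 INT NOT NULL) ENGINE=CSV;

-- step 2, on the source instance: extract the data
SELECT * FROM anothertable INTO OUTFILE 'temp.csv' FIELDS TERMINATED BY ',';

-- step 3 happens outside SQL: copy temp.csv into the target data
-- directory as yourcsvtable.CSV

-- step 4: reload the CSV table, then join against it
FLUSH TABLE yourcsvtable;
SELECT t.*, c.field2
FROM localtable t
LEFT JOIN yourcsvtable c ON c.field1 = t.id;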
If you export it as CSV, it's no longer SQL, it's just plain row data. Suggest you export as SQL, and import into the second database.
I have a text file to be imported in a MySQL table. The columns of the files are comma delimited. I set up an appropriate table and I used the command:
LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable FIELDS TERMINATED BY ',';
The problem is, there are several spaces in the text file, before and after the data in each column, and it seems that the spaces are all imported into the table (and that is not what I want). Is there a way to load the file without the empty spaces (other than processing each row of the text file before importing into MySQL)?
As far as I understand, there's no way to do this during the actual load of the data file dynamically (I've looked, as well).
It seems the best way to handle this is to either use the SET clause with the TRIM() function ("SET column2 = TRIM(column2)"), or run an UPDATE on the string columns after loading, using TRIM().
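A minimal sketch of the SET-with-TRIM approach, assuming a two-column table: the file values are read into user variables and trimmed on the way in:

LOAD DATA LOCAL INFILE 'myfile.txt'
INTO TABLE mytable
FIELDS TERMINATED BY ','
(@col1, @col2)
SET column1 = TRIM(@col1),
    column2 = TRIM(@col2);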
You can also create a stored procedure using prepared statements to run the TRIM function on all columns in a specified table, immediately after loading it.
You would essentially pass in the table name as a variable, and the stored procedure would use the information_schema database to determine which columns to update.
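A hypothetical sketch of that idea, building one UPDATE statement from information_schema and running it as a prepared statement (the table name and the set of string types covered are assumptions):

SELECT CONCAT(
    'UPDATE mytable SET ',
    GROUP_CONCAT(CONCAT(COLUMN_NAME, ' = TRIM(', COLUMN_NAME, ')'))
)
INTO @sql
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'mytable'
  AND DATA_TYPE IN ('char', 'varchar', 'text');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;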
If you can use .NET, CsvReader is a great option (http://www.codeproject.com/KB/database/CsvReader.aspx). You can read data from a CSV and specify the delimiter, trimming options, etc. In your case, you could choose to trim left and right spaces from each value. You can then either save the result to a new text file and import it into the database, or loop through the CsvReader object and insert each row into the database directly. The performance of CsvReader is impressive. Hope this helps.
According to the MySQL Certification Guide, when the --tab option is used,
a SELECT ... INTO OUTFILE is used to generate a tab-delimited file in the specified directory; a file containing SQL (CREATE TABLE) will also be generated
and that
using --tab to produce tab-delimited dump files is much faster
but how can it be faster if both generate a SQL file and the --tab one just adds an extra tab-delimited file?
I would say that:
- without using --tab, a single file is generated, which contains both:
  - the create statements,
  - and the data, as lots of insert statements;
- with --tab, two files are generated:
  - one with the create statements,
  - one with the data, in a tab-delimited format, instead of insert statements.
The difference is the second part: inserts vs tab-delimited.
I'm guessing that creating lots of insert statements takes more time than just dumping the data in a tab-delimited format -- maybe it's the same when importing the data back, too?
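To make the difference concrete, the two invocations look roughly like this (database, table, and directory names are placeholders; with --tab, the .txt file is written by the server itself, so the directory must be writable by it):

mysqldump mydb mytable > mytable.sql    # one .sql file: CREATE TABLE + INSERTs

mysqldump --tab=/tmp/dump mydb mytable  # /tmp/dump/mytable.sql (CREATE TABLE)
                                        # + /tmp/dump/mytable.txt (tab-delimited data)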