I have a bunch of images stored as blob data in a database, and I would like to retrieve these images from MySQL using MATLAB and then do further image processing. The problem is that the retrieved data is an N×1 uint8 vector, which cannot be used for image-feature extraction (such as SIFT). The following is my code:
conn = database.ODBCConnection('test','username','password');
curs = exec(conn,'select image from roomimage'); % image is the column of saved blob data, and roomimage is the table in the database
curs = fetch(curs);
img = curs.Data{1,1}; % as an example, read the first image
Then I get class(img) = uint8, with size 111365×1.
Then if I write and save it as an image file:
imwrite(img,'test.jpg')
imwrite(img,'test.png') % alternatively, I also tried to save it as .png file
Here comes the problem: the saved file cannot be opened! :(
Could someone show me how, using MATLAB, to read blob data from MySQL and save it as a .jpg file that can be opened as an image? Thank you very much!
Added: if I export and save an image from MySQL beforehand, then load the image from a local file into MATLAB, I get something like 683×1024×3 uint8 data, which is NOT like the N×1 vector read from MySQL...
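In case it helps, here is a minimal sketch of one common workaround, assuming the blob column holds the raw encoded JPEG byte stream (which the 111365×1 uint8 size suggests). The key point is that imwrite treats the vector as a one-pixel-wide grayscale image and re-encodes it, whereas fwrite copies the bytes to disk verbatim:

% Sketch: treat the N-by-1 uint8 vector as a raw JPEG byte stream and
% write the bytes unchanged; imwrite would re-encode it as a 111365x1
% grayscale image, which is why the saved file cannot be opened.
fid = fopen('test.jpg', 'w');  % MATLAB opens in binary mode by default
fwrite(fid, img, 'uint8');     % dump the blob bytes verbatim
fclose(fid);
rgb = imread('test.jpg');      % should now give e.g. 683x1024x3 uint8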
I am running a big set of simulations in Dymola using a script, so far, it works well.
However, it remains incomplete because all the results are still in .mat files and I have not found a way to automatically save them as .csv.
I found the DataFiles.convertMATtoCSV() function, but it requires me to specify a list of variables to export. I would like it to export all the variables without writing them out one by one; is that possible?
In the Dymola Manual, there is a section "Saving all values into a CSV file".
It contains the following example code:
// Define name of trajectory file (fileName) and CSV file
// (CSVfile)
fileName="PID_Controller.mat";
CSVfile="AllVariables.csv";
// Read the size of the trajectories in the result file and
// store in 'n'
n=readTrajectorySize(fileName);
// Read the names of the trajectories
names = readTrajectoryNames(fileName);
// Read the trajectories 'names' (and store in 'traj')
traj=readTrajectory(fileName,names,n);
// transpose traj
traj_transposed=transpose(traj);
// write the .csv file using the package 'DataFiles'
DataFiles.writeCSVmatrix(CSVfile, names, traj_transposed);
This should do what you want, and it leaves room for customization later if necessary...
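Since you are running a big set of simulations, the same recipe can also be wrapped in a loop. A hedged sketch for the Dymola command window (the result-file names below are placeholders; adjust them to your script's output):

// Hypothetical batch version: export every result file in a list
resultFiles = {"run1.mat", "run2.mat", "run3.mat"};
for f in resultFiles loop
  n = readTrajectorySize(f);
  names = readTrajectoryNames(f);
  traj = readTrajectory(f, names, n);
  // writes e.g. run1.mat.csv next to each result file
  DataFiles.writeCSVmatrix(f + ".csv", names, transpose(traj));
end for;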
I use Stata 12.
I want to add some country code identifiers from file df_all_cities.csv onto my working data.
However, this line of code:
merge 1:1 city country using "df_all_cities.csv", nogen keep(1 3)
Gives me the error:
. run "/var/folders/jg/k6r503pd64bf15kcf394w5mr0000gn/T//SD44694.000000"
file df_all_cities.csv not Stata format
r(610);
This is an attempted solution to my previous problem: the file was a .dta file that did not work on this version of Stata, so I used R to convert it to .csv, but that doesn't work either. I assume it's because using itself doesn't work with .csv files, but how should I write it instead?
Your intuition is right. The command merge cannot read a .csv file directly. (using is technically not a command here; it is a common syntax element indicating that a file path follows.)
You need to read the .csv file with the command insheet. You can use it like this:
* Preserve saves a snapshot of your data which is brought back at "restore"
preserve
* Read the csv file. clear can safely be used as data is preserved
insheet using "df_all_cities.csv", clear
* Create a tempfile where the data can be saved in .dta format
tempfile country_codes
save `country_codes'
* Bring back into working memory the snapshot saved at "preserve"
restore
* Merge your country codes from the tempfile to the data now back in working memory
merge 1:1 city country using `country_codes', nogen keep(1 3)
Note how insheet also takes using, and that command does accept .csv files.
I have been trying to convert a set of GeoTIFF files into MBTiles using gdal_translate (GDAL 3.0.4). My command looks as follows:
gdal_translate -of MBTiles -ot Byte -strict -scale 0 255 bogota.tif bogota.mbtiles
The GeoTIFF image is successfully converted to MBTiles, and I am able to render it using QGIS. However, it appears that the result is somewhat compressed, or that the new image has lost some resolution. I have been experimenting with the -outsize option and trying to force it to 100% of the original size of the image, but with no success.
Is there a way to make sure that the result maintains the full resolution in the output?
(Before/after screenshots comparing the results are omitted here.)
Note: the GeoTIFF image is taken from the following link:
https://download.osgeo.org/geotiff/samples/made_up/bogota.tif
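One possibility worth trying (a hedged sketch, not a confirmed fix): the MBTiles driver snaps the raster to web-mercator tile zoom levels, and with the default ZOOM_LEVEL_STRATEGY=AUTO it may round to a zoom level coarser than the source resolution. Forcing the next higher zoom level, and then building overviews so the lower zooms still render, might preserve the detail:

# Assumption: the apparent resolution loss comes from the driver
# rounding to a lower zoom level; UPPER forces the finer one.
gdal_translate -of MBTiles -ot Byte -strict -scale 0 255 \
    -co ZOOM_LEVEL_STRATEGY=UPPER \
    bogota.tif bogota.mbtiles
# Build overview levels so QGIS can render the lower zooms quickly.
gdaladdo -r average bogota.mbtiles 2 4 8 16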
I am relatively new to machine learning/python/ubuntu.
I have a set of images in .jpg format where half contain a feature I want Caffe to learn and half don't. I'm having trouble finding a way to convert them to the required LMDB format.
I have the necessary text input files.
My question is can anyone provide a step by step guide on how to use convert_imageset.cpp in the ubuntu terminal?
Thanks
A quick guide to Caffe's convert_imageset
Build
The first thing you must do is build Caffe and Caffe's tools (convert_imageset is one of these tools).
After installing Caffe and running make, make sure you ran make tools as well.
Verify that a binary file convert_imageset is created in $CAFFE_ROOT/build/tools.
Prepare your data
Images: put all images in a folder (I'll call it here /path/to/jpegs/).
Labels: create a text file (e.g., /path/to/labels/train.txt) with a line per input image. For example:
img_0000.jpeg 1
img_0001.jpeg 0
img_0002.jpeg 0
In this example the first image is labeled 1 while the other two are labeled 0.
Convert the dataset
Run the binary in shell
~$ GLOG_logtostderr=1 $CAFFE_ROOT/build/tools/convert_imageset \
--resize_height=200 --resize_width=200 --shuffle \
/path/to/jpegs/ \
/path/to/labels/train.txt \
/path/to/lmdb/train_lmdb
Command line explained:
Setting the GLOG_logtostderr flag to 1 before calling convert_imageset tells the logging mechanism to redirect log messages to stderr.
--resize_height and --resize_width resize all input images to the same size, 200×200.
--shuffle randomly changes the order of images rather than preserving the order in the /path/to/labels/train.txt file.
The remaining arguments are the path to the images folder, the labels text file, and the output name. Note that the output should not exist prior to calling convert_imageset, otherwise you'll get a scary error message.
Other flags that might be useful (combined in the sketch after this list):
--backend - lets you choose between an LMDB or a LevelDB dataset.
--gray - converts all images to grayscale.
--encoded and --encoded_type - keep image data in encoded (jpg/png) compressed form in the database.
--help - shows some help; see all relevant flags under Flags from tools/convert_imageset.cpp
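For instance, a hypothetical invocation that builds a shuffled grayscale LevelDB instead (untested sketch; the paths are the same placeholders as above):

~$ GLOG_logtostderr=1 $CAFFE_ROOT/build/tools/convert_imageset \
    --backend=leveldb --gray --shuffle \
    /path/to/jpegs/ \
    /path/to/labels/train.txt \
    /path/to/leveldb/train_leveldb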
You can check out $CAFFE_ROOT/examples/imagenet/convert_imagenet.sh for an example of how to use convert_imageset.
There is a MySQL backup file which is huge - about 3 GB. One table has a LONGBLOB column that stores JPEG image data.
The file imports successfully if done from MySQL Workbench - Data Import/Restore.
I need to open this file and extract the first few lines (about two rows of INSERTs of the table with the image data) so that I can test if another program can import this data into another MySQL database.
I tried opening the file with EmEditor (which is good at opening large files), selecting everything up to the first INSERT statement of the script (up to about line 25, since the table in question is the first table in the backup script), and then pasting the selection into a new file.
Here comes the problem:
However, this messes up the encoding (even though I save as UTF-8). I realize this when I try to import (restore) the new file (again using MySQL Workbench) into a MySQL database: the restore goes ahead without errors, but the JPEG images in the blob column are destroyed/corrupted.
My guess is that the encoding is different between the original file and new file.
EmEditor does not show the encoding of the original file; there is an option to detect it, and it detects 'UTF8 Unsigned'. When saving, I save as UTF-8. I also tried saving as ANSI, ISO-8859 (the Windows default), etc., but I get the same result every time.
Do you have any solution for this particular problem? That is, I want to cut only the first few lines of the huge backup file and save them to a new file while keeping the encoding the same, so that the images (blobs) are not changed. Can this be done with EmEditor (or is the cut-and-paste approach itself wrong)? Is there any specialized software that can do this? How can I diagnose what is going wrong here?
Thanks for any responses.
this messes up the encoding (even though I save as utf8)
UTF-8 is not a good choice for arbitrary binary data. There are many sequences of high bytes which are not valid in UTF-8, so you will mangle them at some point during the load-alter-save process.
If you load the file using an encoding that maps every single byte to a unique character, and re-save the file using that same encoding, you should preserve the original content(*). ISO-8859-1 is the encoding usually chosen for this purpose, since it simply maps each byte 0..0xFF to the Unicode code point with the same number.
(*: assuming the editor is binary-safe with regard to other tricky points like nulls, \n/\r and other control characters... I believe EmEditor can be.)
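If you'd rather avoid the editor round-trip entirely, a byte-level cut sidesteps encoding altogether. A minimal sketch (the file name is a placeholder, and it assumes the statements you need really do end around line 25; a raw newline byte inside a blob would also count as a line break, so verify the cut point):

# head copies raw bytes up to the 25th newline and never reinterprets
# them as text, so the blob data survives unchanged.
head -n 25 huge_backup.sql > first_rows.sql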
When opening the original file in EmEditor, try selecting the encoding Binary (ASCII View). Binary (ASCII View) will, as bobince said, map each byte to a unique character and preserve it when you save the file. I think this should fix your problem.