Failed to close tables within MySQL UDF (v5.6)

I've spent three days trying to figure this out; any help would be appreciated. I'm trying to open a few tables (stored in InnoDB format) from within a UDF in MySQL. I'm able to open them if I create a new THD instance and set it as the "current thd". However, I'm having problems properly closing these tables. The code I use to open the tables looks like this:
THD *thd = new THD;
thd->thread_stack = (char*) &thd;
thd->set_db(db_name, strlen(db_name));
my_pthread_setspecific_ptr(THR_THD, thd);

const unsigned int NUMBER_OF_TABLES = 5;
char* tableNames[NUMBER_OF_TABLES];
... set table names here ...

TABLE_LIST tables[NUMBER_OF_TABLES];
for (unsigned int i = 0; i < NUMBER_OF_TABLES; i++)
{
    tables[i].init_one_table((char*)db_name, strlen(db_name), tableNames[i],
                             strlen(tableNames[i]), tableNames[i], TL_WRITE_ALLOW_WRITE);
    // Chain the entries so open_and_lock_tables() sees the whole list.
    if (i != NUMBER_OF_TABLES-1)
        tables[i].next_local = tables[i].next_global = tables + 1 + i;
}

if (open_and_lock_tables(thd, tables, false, MYSQL_LOCK_IGNORE_TIMEOUT))
{
    goto end;
}
I can open all tables using the above code block. However, when I finish using them, I am not able to close them properly because an assertion fails. I use the following code block to close the tables:
thd->cleanup_after_query();
close_thread_tables(thd);
delete thd;
The failing assertion has to do with thd->transaction.stmt.is_empty(); the is_empty() method simply checks whether ha_list == NULL.
I would like to understand how I can properly close these tables.
I'm developing on Windows 7 with MySQL 5.6.

Related

Is there any way to tell if the TADOTable I am looking for is in the database (MS Access)?

I use C++Builder (Delphi 10.2 / C++Builder 10.2 Update 2) and I need a method that creates a particular table if it does not already exist, using TADO objects (ADODB),
I mean TADOQuery, TADOTable, TADOConnection, etc.
How can I do this?
I tried looking at the methods of TADOConnection and TADOTable, but none of them seem useful.
I also tried this route (https://docwiki.embarcadero.com/Libraries/Alexandria/en/Bde.DBTables.TTable.Exists) but there are compatibility problems.
Does this help?
TADOConnection *YOUR_TADOCONNECTION; // your connection, defined earlier in your code
TStringList *TableList = new TStringList;
bool WithSystemTables = true; // or false according to your requirements
YOUR_TADOCONNECTION->GetTableNames(TableList, WithSystemTables);
for (int i = 0; i < TableList->Count; i++) {
    String NextTableName = TableList->Strings[i];
    /* .... your check for the table name being the one you want goes here .... */
}
delete TableList;

How to ensure that database updates happen only at a fixed interval

I am working on a Spring/Java 8 application.
I have a function that generates labels (PDFs) asynchronously.
It contains a loop that usually runs more than 1000 times, generating more than 1000 PDF labels.
After each iteration we update the database to record the status: initially we save numberOfgeneratedCount = 0, and after each label we increment the variable and update the table.
It is not necessary to persist the incremented count at the end of every iteration; we only need to update the database at fixed intervals, to reduce the load from database writes.
Currently my code looks like this:
// Label is a database model class; labeldb is an instance of it.
// commonDao.saveLabelToDb(...) persists a Label object.
int numberOfgeneratedCount = 0;
labeldb.setProcessedOrderCount(numberOfgeneratedCount);
commonDao.saveLabelToDb(labeldb);
for (Order order : orders) {
    boolean generated = true;
    try {
        // pdf generation code
    } catch (Exception e) {
        // catch block here
        generated = false;
    }
    if (generated) {
        numberOfgeneratedCount++;
        labeldb.setProcessedOrderCount(numberOfgeneratedCount);
        commonDao.saveLabelToDb(labeldb);
    }
}
To improve performance we want to update the database only every 10 seconds. Any help would be appreciated.
I have done this using the following code. I am not sure whether it is a good solution; could someone improve it, perhaps using built-in functionality?
int numberOfgeneratedCount = 0;
labeldb.setProcessedOrderCount(numberOfgeneratedCount);
commonDao.saveLabelToDb(labeldb);
int nowSecs = LocalTime.now().toSecondOfDay();
int lastSecs = nowSecs;
for (Order order : orders) {
    nowSecs = LocalTime.now().toSecondOfDay();
    boolean generated = true;
    try {
        // pdf generation code
    } catch (Exception e) {
        // catch block here
        generated = false;
    }
    if (generated) {
        numberOfgeneratedCount++;
        labeldb.setProcessedOrderCount(numberOfgeneratedCount);
        // Persist at most once every 10 seconds.
        if (nowSecs - lastSecs > 10) {
            lastSecs = nowSecs;
            commonDao.saveLabelToDb(labeldb);
        }
    }
}
// Note: counts accumulated after the last flush are never persisted,
// and toSecondOfDay() wraps at midnight, making the difference negative.
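For what it's worth, here is a sketch of a slightly more robust variant, reusing commonDao, labeldb and orders from the question (TimeUnit is java.util.concurrent.TimeUnit). It uses the monotonic System.nanoTime() instead of the time of day, so the interval check keeps working across midnight, and it flushes once more after the loop so the count from the last partial interval is not lost:
// Requires: import java.util.concurrent.TimeUnit;
final long flushIntervalNanos = TimeUnit.SECONDS.toNanos(10);
long lastFlushNanos = System.nanoTime();

int numberOfgeneratedCount = 0;
labeldb.setProcessedOrderCount(numberOfgeneratedCount);
commonDao.saveLabelToDb(labeldb);
for (Order order : orders) {
    boolean generated = true;
    try {
        // pdf generation code
    } catch (Exception e) {
        generated = false;
    }
    if (generated) {
        numberOfgeneratedCount++;
        labeldb.setProcessedOrderCount(numberOfgeneratedCount);
        // Flush at most once per interval; nanoTime() is monotonic.
        long now = System.nanoTime();
        if (now - lastFlushNanos >= flushIntervalNanos) {
            lastFlushNanos = now;
            commonDao.saveLabelToDb(labeldb);
        }
    }
}
// Final flush so the tail of the batch is persisted as well.
commonDao.saveLabelToDb(labeldb);
As for built-in alternatives: a ScheduledExecutorService that periodically persists an AtomicInteger counter would also work and keeps the timing concern out of the loop, at the cost of coordinating shutdown.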

jOOQ batchInsert().execute()

My process looks like:
select some data, 50 rows per select
do something with the data (set some values)
transform each row into an object of another table
call batchInsert(myListOfRecords).execute()
My problem is how to decide when the data should be inserted. In my current setup the data is only inserted at the end of my loop. This is a problem for me because I want to process much more data than I do in my tests, and if I keep it this way the process will end with an exception (OutOfMemory). Where should I define the maximum amount of data per batch insert call?
The important thing here is to not fetch all the rows you want to process into memory in one go. When using jOOQ, this is done using ResultQuery.fetchLazy() (possibly along with ResultQuery.fetchSize(int)). You can then fetch the next 50 rows using Cursor.fetchNext(50) and proceed with your insertion as follows:
try (Cursor<?> cursor = ctx
    .select(...)
    .from(...)
    .fetchSize(50)
    .fetchLazy()) {
    Result<?> batch;
    for (;;) {
        batch = cursor.fetchNext(50);
        if (batch.isEmpty())
            break;

        // Do your stuff here
        // Do your insertion here
        ctx.batchInsert(...).execute();
    }
}
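To make the elided parts concrete, here is a sketch under stated assumptions: SOURCE and TARGET are jOOQ-generated tables with record types SourceRecord and TargetRecord, and setName()/getName() stand in for whatever per-row transformation you actually do (all of these names are hypothetical):
try (Cursor<SourceRecord> cursor = ctx
    .selectFrom(SOURCE)
    .fetchSize(50)
    .fetchLazy()) {
    for (;;) {
        Result<SourceRecord> batch = cursor.fetchNext(50);
        if (batch.isEmpty())
            break;

        // Transform each fetched row into a record of the other table.
        List<TargetRecord> converted = new ArrayList<>();
        for (SourceRecord src : batch) {
            TargetRecord tgt = ctx.newRecord(TARGET);
            tgt.setName(src.getName()); // set your values here
            converted.add(tgt);
        }

        // Insert the chunk; only ~50 records are ever held in memory.
        ctx.batchInsert(converted).execute();
    }
}
This way the batch size is simply the argument you pass to fetchNext(), and memory use stays flat no matter how many rows the query matches.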

Storing a Matlab array in MySQL. Again

I have a 3D uint16 array in Matlab (basically it is just a 1080x1920x3 image). I want to store it in MySQL. Here is what I'm doing:
MySQL:
create table imgtest(img longblob);
Matlab:
% image_data - is my image as described before
raw_im = reshape(image_data,1,[]);
conn = database('test','root','root','Vendor','MySQL','Server','localhost');
x = conn.Handle;
insertcommand = 'INSERT INTO imgtest (img) VALUES (?)';
StatementObject = x.prepareStatement(insertcommand);
StatementObject.setObject(1,raw_im)
StatementObject.execute
The problem is that I'm writing about 600k uint16 values into this blob field, but when I read the field back from the DB, I always get about 1.2 million uint8 elements (exactly twice as many).
So, is there a way to read this byte field as a set of uint16 rather than uint8?
Thank you.
I have been doing something similar for one of my projects; basically there was one difference, but maybe it will clarify something for you.
I was loading the image into the DB directly from a file with this command:
INSERT INTO BaseImage(Image)
SELECT * FROM OPENROWSET(BULK N'C:\co.jpg', SINGLE_BLOB) as image
and getting it back into Matlab required typecasting (just like @sebastian mentioned):
SQL_query = 'select TOP 1 pk_BaseImage,Image from BaseImage order by pk_BaseImage desc';
[data] = SQL_query_exec(SQL_query);
pk_BaseImage = data.Data.pk_BaseImage;
out = typecast(data.Data.Image{1,1},'uint8');
BUT...
that was not enough: I had to do a trick to use 'out' as an image. I was forced to write it to a temporary file and read it back into Matlab (I know it's strange, but it worked very well and I could, for example, calculate the DWT, DFT and so on):
image_matrix = get_image_matrix( out );
The get_image_matrix function looks like:
function [ out ] = get_image_matrix( input )
    targetfilename = 'temp.jpg';
    % result
    fid = fopen(targetfilename, 'w');
    if fid ~= -1
        fwrite(fid, input, 'uint8');
        fclose(fid);
    end
    out = imread(targetfilename);
    delete(targetfilename);
end
I hope it helps you :)
One important note: I used gray-scale images (uint8 type).
You can most probably typecast the uint8s into uint16s to get back your original image data:
uint16_result = typecast(uint8_result, 'uint16');
I'm not familiar with the database toolbox - so there might well be a way to tell Matlab to do this on its own.
OK, thank you both. I've summarized your answers and this is what I've got:
Since a blob field is nothing more than a byte array, we should typecast our data in Matlab before writing it to the DB, and typecast it back after reading it.
A minimal working example is:
MySQL
create table imgtest(img longblob);
Matlab
% image_data - is my image as described before
raw_im = typecast(reshape(image_data,1,[]),'uint8'); % the key line
conn = database('test','root','root','Vendor','MySQL','Server','localhost');
x = conn.Handle;
insertcommand = 'INSERT INTO imgtest (img) VALUES (?)';
StatementObject = x.prepareStatement(insertcommand);
StatementObject.setObject(1,raw_im)
StatementObject.execute
Afterwards we can read it back:
res = exec(conn,'Select * from imgtest');
array_uint8 = fetch(res);
array_uint8 = array_uint8{1};
array_uint16 = typecast(array_uint8,'uint16');
Hope this will help someone.
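Since the Matlab code above already talks to MySQL through a JDBC connection handle, the same round trip can be shown in plain Java, where ByteBuffer plays the role of Matlab's typecast. A minimal sketch, assuming the imgtest table from above, a local MySQL server with the same credentials as in the question, and the MySQL JDBC driver on the classpath:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.sql.*;

public class BlobRoundTrip {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "root", "root")) {

            // Write: view the 16-bit samples as raw bytes before storing,
            // just like typecast(..., 'uint8') in the Matlab code above.
            short[] pixels = new short[1080 * 1920 * 3];
            ByteBuffer buf = ByteBuffer.allocate(pixels.length * 2)
                                       .order(ByteOrder.LITTLE_ENDIAN);
            buf.asShortBuffer().put(pixels);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO imgtest (img) VALUES (?)")) {
                ps.setBytes(1, buf.array());
                ps.executeUpdate();
            }

            // Read: fetch the raw bytes and view them as 16-bit values again,
            // the equivalent of typecast(..., 'uint16').
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT img FROM imgtest")) {
                if (rs.next()) {
                    byte[] raw = rs.getBytes(1);
                    short[] back = new short[raw.length / 2];
                    ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN)
                              .asShortBuffer().get(back);
                    // Java shorts are signed: mask with 0xFFFF for uint16 values.
                }
            }
        }
    }
}
LITTLE_ENDIAN is used here because Matlab's typecast works in the machine's native byte order, which is little-endian on x86; on other hardware the order would need to match.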

LINQ-to-SQL oddity with multiple where clause arguments on the same field

My problem requires me to dynamically add where clauses to an IQueryable based on user input. The problem I'm having is that LINQ-to-SQL doesn't seem to like multiple where clauses on the same field: it duplicates the search argument of the last clause across all parameters. I verified this behavior through a SQL trace. Here is what I'm seeing:
WHERE ([t22].[OpenText] LIKE @p11) AND ([t22].[OpenText] LIKE @p12)
-- @p11: Input NVarChar (Size = 10; Prec = 0; Scale = 0) [%classify%] // Should be 2da57652-dcdf-4cc8-99db-436c15e5ef50
-- @p12: Input NVarChar (Size = 10; Prec = 0; Scale = 0) [%classify%]
My code uses a loop to dynamically add the where clauses, as you can see below. My question is: how do I work around this? This pretty much seems like a bug in the tool, no?
// Add dynamic where clauses based on user input.
MatchCollection searchTokens = Helper.ExtractTokensWithinBracePairs(filterText);
if (searchTokens.Count > 0)
{
    foreach (Match searchToken in searchTokens)
        query = query.Where(material => material.OpenText.Contains(searchToken.Value));
}
else
{
    query = query.Where(material => material.OpenText.Contains(filterText));
}
Closing over the loop variable considered harmful! Change
foreach (Match searchToken in searchTokens) {
    query = query.Where(
        material => material.OpenText.Contains(searchToken.Value)
    );
}
to
foreach (Match searchToken in searchTokens) {
    Match token = searchToken;
    query = query.Where(
        material => material.OpenText.Contains(token.Value)
    );
}
You are closing over the loop variable, which is considered harmful. To fix it, do this:
foreach (Match searchToken in searchTokens)
{
    // Copy the reference to a fresh local variable...
    Match searchToken2 = searchToken;
    // ...and use the copy inside the lambda.
    query = query.Where(material => material.OpenText.Contains(searchToken2.Value));
}
The reason why your version doesn't work is that the query refers to the variable searchToken, not the value it had when the query was created. When the variable's value changes, all your queries see the new value.
I don't have enough rep to leave comments yet (or this would be a comment and not an answer), but the answers listed here worked for me.
However, I had to turn off compiler optimizations for it to work. If you do not turn off compiler optimizations (at least at the method level), the compiler sees you assigning a loop variable to a local variable and throws the local variable away.