Error when SQL tries to write to a table: value larger than specified precision - Oracle

So I work for a retail company and was involved in looking at an incident where I systemically had to 'load' some stock onto a trailer to be dispatched to a store.
I have an issue with some code that executed; the process is trying to do the following to a SQL table:
UPDATE RESV_LOCN_HDR SET RESV_LOCN_HDR.USER_ID = :1, RESV_LOCN_HDR.CURR_WT = :2, RESV_LOCN_HDR.CURR_VOL = :3, RESV_LOCN_HDR.CURR_UOM_QTY = :4, RESV_LOCN_HDR.MOD_DATE_TIME = :5 WHERE ( RESV_LOCN_HDR.LOCN_ID = :6 )
The input variables, for reference, are:
input variables
1: Address(309adbdc) Length(0) Type(8) "RFUSER" - Indicator # 309ad7d4 = 0
2: Address(309ae02c) Length(0) Type(4) "11047.4" - Indicator # 309ad9b8 = 0
3: Address(309ae1ac) Length(0) Type(4) "-1.01e+09" - Indicator # 309ada3c = 0
4: Address(309ae32c) Length(0) Type(4) "-57" - Indicator # 309adac0 = 0
5: Address(309ae55c) Length(0) Type(7) "10/27/2016 2:57:50 PM" - Indicator # 309adb70 = 0
6: Address(309ade0c) Length(0) Type(8) "600062508" - Indicator # 309ad8d0 = 0
The columns I am writing to (USER_ID, CURR_WT, CURR_VOL, CURR_UOM_QTY, MOD_DATE_TIME, LOCN_ID) have the following data types:
USER_ID = VARCHAR2(15 CHAR)
CURR_WT, CURR_VOL = NUMBER(13,4)
CURR_UOM_QTY = NUMBER(9,2)
MOD_DATE_TIME = DATE
LOCN_ID = VARCHAR2(10 CHAR)
From my understanding (and I'm sure you will all see it in the above as well), all the column data types are satisfied apart from input 3, which is trying to write -1.01e+09. Is it just me, or is this the reason the error is being returned? Or is that value a macro or something similar?
I have just not seen SQL try to write 1.01e+09 before, as it should only be writing numeric data, not a mixture of digits and letters.
Using Oracle 11g.

Yes, the value of -1.01e+09 is why you are getting an error, but the issue is the digits before the decimal point rather than the decimals. Bind variable :3 goes into CURR_VOL, a NUMBER(13,4) column, which allows at most 13 - 4 = 9 digits to the left of the decimal point, while -1.01e+09 (i.e. -1,010,000,000) has 10. Oracle silently rounds excess decimal digits down to the declared scale, but it cannot drop excess integer digits, so it raises ORA-01438 ("value larger than specified precision allowed for this column").
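For illustration, here is a minimal Python sketch of the NUMBER(p, s) rule described above (plain arithmetic only, not Oracle itself):
# A value fits an Oracle NUMBER(p, s) column only if it needs at most
# p - s digits to the left of the decimal point; extra decimal digits
# are rounded rather than rejected.
def fits_number(value, precision, scale):
    return abs(value) < 10 ** (precision - scale)

print(fits_number(-1.01e9, 13, 4))  # False -> CURR_VOL NUMBER(13,4) raises ORA-01438
print(fits_number(-57, 9, 2))       # True  -> CURR_UOM_QTY NUMBER(9,2) accepts -57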


Meta-model of a field function in OpenTURNS 1.16rc1

After updating OpenTURNS from 1.15 to 1.16rc1 I have the following issue when building the meta-model of the field function. To reduce the computational burden I was using:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-KolmogorovSamplingSize", 1)
algo = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos)
algo.run()
metaModel = ot.PointToFieldConnection(postProcessing, algo.getResult().getMetaModel())
The "FittingTest-KolmogorovSamplingSize" was removed from OpenTurns 1.16rc1 and when I try to replace the fitting test with:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMaximumSamplingSize", 10)
Or with
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMinimumSamplingSize", 1)
The code is freezing. Is there any solution for this?
The previously proposed solution was simply to use another distribution to model your data; you could have used any other multivariate continuous distribution of the proper dimension. IMO it is not a valid answer, as that distribution has no link to your data.
After inspection, it appears that the problem has nothing to do with Lilliefors's test. In OT 1.15 we were using this test (under the wrong name of Kolmogorov) to automatically select a distribution suited to the input sample, but we switched to a more sophisticated selection algorithm (see MetaModelAlgorithm::BuildDistribution). It is based on a first pass using the raw Kolmogorov test (thus ignoring the fact that parameters have been estimated), then an information-based criterion is used to select the most relevant model (AIC, AICC or BIC, depending on the value of the "MetaModelAlgorithm-ModelSelectionCriterion" key in ResourceMap). The problem is caused by the TrapezoidalFactory class during the Kolmogorov phase. I will provide a fix ASAP in the OpenTURNS master branch. In the meantime, I have adapted the proposed solution into something better suited to your data:
degree = 6
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.HyperbolicAnisotropicEnumerateFunction(dimension_xi_X, 0.8)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.StandardDistributionPolynomialFactory(ot.HistogramFactory().build(sample_X[:, i])) for i in range(dimension_xi_X)],
    enumerateFunction)
basisSize = enumerateFunction.getStrataCumulatedCardinal(degree)
#basis = ot.OrthogonalProductPolynomialFactory(
#    [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
#basisSize = 450  # enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos,
                                         basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel = ot.PointToFieldConnection(postProcessing,
                                      algo_chaos.getResult().getMetaModel())
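As a side note, the information criterion mentioned above can be switched through the same ResourceMap mechanism. A minimal sketch, assuming the key accepts the values listed above ("AIC", "AICC", "BIC"):
import openturns as ot

# Choose the criterion used by MetaModelAlgorithm::BuildDistribution when it
# ranks candidate input distributions (assumed accepted values: AIC, AICC, BIC).
ot.ResourceMap.SetAsString("MetaModelAlgorithm-ModelSelectionCriterion", "BIC")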
I also implemented a quick and dirty estimator of the L2-error:
# Meta-model validation
iMax = 5
# Input values
sample_X_validation = ot.Sample(np.array(month_1_parameters_MSE.iloc[:iMax, 0:3]))
print("sample size=", sample_X_validation.getSize())
# sample_X = ot.Sample(month_1_parameters_MSE[['Rseries','Rsh','Isc']])
# Output values
# month_1_simulated.iloc[0:1].transpose()
Field = ot.Field(mesh, np.array(month_1_simulated.iloc[0:1]).transpose())
sample_Y_validation = ot.ProcessSample(1, Field)
for k in range(1, iMax):
    sample_Y_validation.add(np.array(month_1_simulated.iloc[k:k+1]).transpose())

graph = sample_Y_validation.drawMarginal(0)
graph.setColors(['red'])
drawables = graph.getDrawables()
graph2 = metaModel(sample_X_validation).drawMarginal(0)
graph2.setColors(['blue'])
drawables = graph2.getDrawables()
graph.add(graph2)
graph.setTitle('Model/Metamodel Validation')
graph.setXTitle(r'$t$')
graph.setYTitle(r'$z$')
drawables = graph.getDrawables()
L2_error = 0.0
for i in range(iMax):
    L2_error = (drawables[i].getData()[:, 1] - drawables[iMax+i].getData()[:, 1]).computeRawMoment(2)[0]
print("L2_error=", L2_error)
You get an error of 79.488 with the previous answer and 1.3994 with the new proposal. Here is a graphical comparison.
Comparison between test data & previous answer
Comparison between test data & new proposal
The solution is to use:
degree = 1
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.LinearEnumerateFunction(dimension_xi_X)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
basisSize = 450  # enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos,
                                         basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel1 = ot.PointToFieldConnection(postProcessing,
                                       algo_chaos.getResult().getMetaModel())

How to get dataset into array

I have worked through all the tutorials and searched for "load csv tensorflow" but just can't get the logic of it all. I'm not a total beginner, but I don't have much time to complete this, and I've been suddenly thrown into TensorFlow, which is unexpectedly difficult.
Let me lay it out:
A very simple CSV file of 184 columns that are all float numbers. A row is simply today's price, three buy signals, and the previous 180 days' prices:
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
This article, https://www.tensorflow.org/guide/datasets, covers how to load data pretty well. It even has a section on converting to NumPy arrays, which is what I need to train and test the net. However, as the author says in the article leading to this web page, it is pretty complex. It seems like everything is geared toward doing data manipulation, whereas we have already normalized our data (nothing has really changed in AI since 1983 in terms of inputs, outputs, and layers).
Here is a way to load it, but not into NumPy, and with no example of using the data without manipulating it.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1
I need to know how to get the CSV file into the
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
so that I can follow the tutorials to train and test the net.
Your question is not that clear to me. You are asking, tell me if I'm wrong, how to feed data into your model? There are several ways to do so.
Use placeholders with feed_dict during the session. This is the basic and easiest one, but it often suffers from training performance issues. For further explanation, check this post.
Use queues. Hard to implement and badly documented; I don't suggest it, because it has been superseded by the third method.
Use the tf.data API.
...
So, to answer your question with the first method:
# get your array outside the session
with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)  # csv.reader yields strings, so convert to floats
close_col = dataset[:, 0]
signal_cols = dataset[:, 1:4]
previous_cols = dataset[:, 4:]
# let's say you load 100 rows each time for training
batch_size = 100
# define placeholders as you did
...
with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]})
With the third method:
# retrieve your columns like before
...
# let's say you load 100 rows each time for training
batch_size = 100
# construct your input pipeline
c_col, s_col, p_col = wrapper(filename)
batch = tf.data.Dataset.from_tensor_slices((c_col, s_col, p_col))
batch = batch.shuffle(c_col.shape[0]).batch(batch_size)  # mix data --> assemble batches --> prefetch to RAM, ready to feed the model
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
c_it, s_it, p_it = iterator.get_next()  # get-next-batch operation, automatically called at each iteration within the session
# replace your close, signals, previous placeholders by c_it, s_it, p_it when you define your model
...
with tf.Session() as sess:
    # you need to initialize the iterator
    sess.run([tf.global_variables_initializer(), iter_init_operation])
    ...
    for i in range(number_iter):
        sess.run(train_operation)  # batching is handled by the iterator, no feed_dict needed
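If you prefer to let tf.data read the CSV file directly instead of going through NumPy first, a minimal sketch with the TF 1.x API (assuming the 184-column all-float layout described in the question and the same '/BTC1.csv' path) could look like this:
import tensorflow as tf

def parse_line(line):
    # 184 float columns: today's price, 3 buy signals, 180 previous prices (assumed layout)
    defaults = [[0.0]] * 184
    fields = tf.decode_csv(line, record_defaults=defaults)
    close = fields[0]
    signals = tf.stack(fields[1:4])
    previous = tf.stack(fields[4:])
    return close, signals, previous

dataset = (tf.data.TextLineDataset('/BTC1.csv')
           .map(parse_line)
           .shuffle(1000)
           .batch(100))
iterator = dataset.make_initializable_iterator()
close_b, signals_b, previous_b = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    print(sess.run(close_b).shape)  # (100,) for each full batch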
Good luck!

Writing a circuit in ZoKrates to prove age is over 21 years

I am trying to see if I can use ZoKrates in a scenario where a user can prove to a verifier that their age is over 21 years without revealing the date of birth. I think it's a good use case for a zero-knowledge proof, but I would like to understand the best way to implement it.
The circuit code (sample) takes the name of the user as a public input (name attestation is done by a trusted authority like the DMV, most likely through a combination of offline/online mechanisms), then the date of birth, which is a private input.
// 8297122105 = "Razi" in decimal
def main(pubName, private yearOfBirth, private centuryOfBirth):
    x = 0
    y = 0
    z = 0
    x = if centuryOfBirth == 19 then 1 else 0 fi
    y = if yearOfBirth < 98 then 1 else 0 fi
    z = if pubName == 8297122105 then 1 else 0 fi
    total = x + y + z
    result = if total == 3 then 1 else 0 fi
    return result
Now, using the ./target/release/zokrates generate-proof command, I get the output that can be used as an input to verifier.sol.
A = Pairing.G1Point(0x24cdd31f8e07e854e859aa92c6e7f761bab31b4a871054a82dc01c143bc424d, 0x1eaed5314007d283486826e9e6b369b0f1218d7930cced0dd0e735d3702877ac);
A_p = Pairing.G1Point(0x1d5c046b83c204766f7d7343c76aa882309e6663b0563e43b622d0509ac8e96e, 0x180834d1ec2cd88613384076e953cfd88448920eb9a965ba9ca2a5ec90713dbc);
B = Pairing.G2Point([0x1b51d6b5c411ec0306580277720a9c02aafc9197edbceea5de1079283f6b09dc, 0x294757db1d0614aae0e857df2af60a252aa7b2c6f50b1d0a651c28c4da4a618e], [0x218241f97a8ff1f6f90698ad0a4d11d68956a19410e7d64d4ff8362aa6506bd4, 0x2ddd84d44c16d893800ab5cc05a8d636b84cf9d59499023c6002316851ea5bae]);
B_p = Pairing.G1Point(0x7647a9bf2b6b2fe40f6f0c0670cdb82dc0f42ab6b94fd8a89cf71f6220ce34a, 0x15c5e69bafe69b4a4b50be9adb2d72d23d1aa747d81f4f7835479f79e25dc31c);
C = Pairing.G1Point(0x2dc212a0e81658a83137a1c73ac56d94cb003d05fd63ae8fc4c63c4a369f411c, 0x26dca803604ccc9e24a1af3f9525575e4cc7fbbc3af1697acfc82b534f695a58);
C_p = Pairing.G1Point(0x7eb9c5a93b528559c9b98b1a91724462d07ca5fadbef4a48a36b56affa6489e, 0x1c4e24d15c3e2152284a2042e06cbbff91d3abc71ad82a38b8f3324e7e31f00);
H = Pairing.G1Point(0x1dbeb10800f01c2ad849b3eeb4ee3a69113bc8988130827f1f5c7cf5316960c5, 0xc935d173d13a253478b0a5d7b5e232abc787a4a66a72439cd80c2041c7d18e8);
K = Pairing.G1Point(0x28a0c6fff79ce221fccd5b9a5be9af7d82398efa779692297de974513d2b6ed1, 0x15b807eedf551b366a5a63aad5ab6f2ec47b2e26c4210fe67687f26dbcc7434d);
Question
Consider a scenario where a user (say Razi) can take the proof above (probably in the form of a QR code) and scan it on a machine (which confirms the age is over 21) that will run the verifierTx method on the contract. Since the proof explicitly has "Razi" inside it, and the contract can verify the age without knowing the actual date of birth, we get better privacy. However, the challenge is that now anyone else can reuse the proof, since it was used within the transaction. One way to mitigate this issue is to make sure that the proof is either valid for a limited time or just good for one-time use. Another way is to ensure proof of the user's identity ("Razi") in a way that is satisfied beyond doubt (e.g. by confirming identity on the blockchain, etc.).
Are there ways to make sure the proof can be used by the user more than once?
I hope the question and explanation make sense. Happy to elaborate more on this, so let me know.
What you will need is:
Razi owning an Ethereum public/private key pair
a (salted) fact fingerprint (e.g. the birthday as a Unix timestamp) associated with Razi's public Ethereum address and endorsed on-chain by an authority
Now you can write a ZoKrates program like this:
def main(private field salt, private field birthdayAsUnixTs, field pubFactHashA, field pubFactHashB, field ts) -> (field):
    // check that the fact corresponds to the salted fact fingerprint endorsed on-chain
    h0, h1 = sha256packed(0, 0, salt, birthdayAsUnixTs)
    h0 == pubFactHashA
    h1 == pubFactHashB
    // "18 years" is pseudo-code only!
    field ok = if birthdayAsUnixTs + 18 years <= ts then 1 else 0 fi
    return ok
Now in your contract you can (see the sketch after this list):
check that msg.sender is the owner of the endorsed fact
require(ts <= now)
call the verifier with the proof and the public input: (factHash, ts, 1)
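For illustration only, those three checks expressed as a Python-style sketch (verify_proof, endorsed_owner and pub_fact_hash are hypothetical placeholders, not ZoKrates or Solidity APIs):
import time

# Off-chain mirror of the contract-side checks described above.
def accept(sender, endorsed_owner, pub_fact_hash, ts, proof, verify_proof):
    if sender != endorsed_owner:   # msg.sender must own the endorsed fact
        return False
    if ts > time.time():           # require(ts <= now)
        return False
    # public inputs: fact hash (two field elements), timestamp, expected output 1
    return verify_proof(proof, [pub_fact_hash[0], pub_fact_hash[1], ts, 1])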
You can do that by hashing the proof and adding that hash to a list of "used proofs", so no one can use it again.
Now, ZoKrates adds randomness in the generation of the proof in order to prevent revealing that the same witness has been used, since zk-proofs do not reveal anything about the witness. So, if you want to prevent the person from using his credential (attesting that he is over 21 years old) more than once, you have to use a nullifier (see the ZCash approach in the "How zk-SNARKs are applied to create a shielded transaction" part).
Basically, you build a string from Razi's data, nullifier_string = centuryOfBirth + yearOfBirth + pubName, and then you publish its hash, nullifier = H(nullifier_string), in a table of revealed nullifiers. In the ZoKrates scheme you have to add the nullifier as a public input and then verify that the nullifier corresponds to the data provided. Something like this:
import "utils/pack/unpack128.code" as unpack
import "hashes/sha256/256bitPadded.code" as hash
import "utils/pack/nonStrictUnpack256.code" as unpack256
def main(pubName,private yearOfBirth, private centuryOfBirth, [2]field nullifier):
field x = if centuryOfBirth == 19 then 1 else 0 fi
field y = if yearOfBirth < 98 then 1 else 0 fi
field z = if pubName == 8297122105 then 1 else 0 fi
total = x + y + z
result = if total == 3 then 1 else 0 fi
null0 = unpack(nullifier[0])
null1 = unpack(nullifier[1])
nullbits = [...null0,...null1]
nullString = centuryOfBirth+yearOfBirth+pubName
unpackNullString = unpack256(nullString)
nullbits == hash(unpackNullString)
return result
This has to be done in order to prevent Razi from providing a random nullifier unrelated to his data.
Once you have done this, you can check whether the provided nullifier has already been used by looking it up in the revealed-nullifier table.
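For the bookkeeping on the verifier's side, a minimal Python sketch of such a revealed-nullifier table (the helper is hypothetical and independent of ZoKrates):
revealed_nullifiers = set()

def accept_nullifier(nullifier_hex, proof_is_valid):
    # Reject if the proof is invalid or the nullifier has already been revealed.
    if not proof_is_valid or nullifier_hex in revealed_nullifiers:
        return False
    revealed_nullifiers.add(nullifier_hex)  # mark this credential as spent
    return True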
The problem with this in your case is that the year of birth is a weak number to hash. Someone could brute-force the nullifier and reveal Razi's year of birth. You have to add a strong number to the verification (Razi's secret ID? a digital signature?) to prevent this attack.
Note 1: I have an old version of ZoKrates, so double-check the import paths.
Note 2: Check the ZoKrates hash function implementation; you may have problems with the padding of the inputs. The unpack256 function should prevent this, I suppose, but double-check to avoid bugs.

Update planned order - two committed modifications, only one saved

I need to update two pieces of information on one object: the quantity (PLAF-GSMNG), and then refresh the planned order via the function module 'MD_SET_ACTION_PLAF'.
I successfully found a way to update each piece of data separately, but when I execute both solutions together, the second modification is not saved to the database.
Do you know how I can change the quantity and set the action on the PLAF (planned order) table?
Do you know another function module to update only the quantity?
Is a parameter perhaps missing?
It is as if the second object were locked (SM12 is empty, and no sy-subrc indicates a lock)... and the modification is not committed.
I tried to:
change the order of the algorithm (refresh first, then change PLAF)
add, remove and move COMMIT WORK and COMMIT WORK AND WAIT
add DEQUEUE_ALL or DEQUEUE_EMPLAFE
This is the current code:
1) Read the data
lv_plannedorder = '00000000001'.
"Read PLAF data
SELECT SINGLE * FROM PLAF INTO ls_plaf WHERE plnum = lv_plannedorder.
2) Update Quantity data
" Standard configuration for FM MD_PLANNED_ORDER_CHANGE
CLEAR ls_610.
ls_610-nodia = 'X'. " No dialog display
ls_610-bapco = space. " BAPI type. Do not use mode 2 -> Action PLAF-MDACC will be autmatically set up to APCH by the FM
ls_610-bapix = 'X'. " Run BAPI
ls_610-unlox = 'X'. " Update PLAF
" Customize values
MOVE p_gsmng TO ls_plaf-gsmng. " Change quantity value
MOVE sy-datlo TO ls_plaf-mdacd. " Change by/datetime, because ls_610-bapco <> 2.
MOVE sy-uzeit TO ls_plaf-mdact.
CALL FUNCTION 'MD_PLANNED_ORDER_CHANGE'
EXPORTING
ecm61o = ls_610
eplaf = ls_plaf
EXCEPTIONS
locked = 1
locking_error = 2
OTHERS = 3.
" Already committed on the module function
" sy-subrc = 0
If I look at the PLAF table, I can see that the quantity has been edited. It's working :)
3) Refresh BOM & change action (MDACC) and other fields
CLEAR ls_imdcd.
ls_imdcd-pafxl = 'X'.

CALL FUNCTION 'MD_SET_ACTION_PLAF'
  EXPORTING
    iplnum            = lv_plannedorder
    iaccto            = 'BOME'
    iaenkz            = 'X'
    imdcd             = ls_imdcd
  EXCEPTIONS
    illegal_interface = 1
    system_failure    = 2
    error_message     = 3
    OTHERS            = 4.

IF sy-subrc = 0.
  COMMIT WORK.
ENDIF.
If I look at the table, there is no modification (only the change from part 2 can be found).
Any idea?
Maybe because ls_610-bapco = space?
It should be possible to update the planned order quantity with MD_SET_ACTION_PLAF too, at least SAP Help tells us so. Why don't you use it like that?
Its call for changing the quantity could possibly look like this:
DATA: lt_accto LIKE TABLE OF mdaccto,
      ls_accto LIKE LINE OF lt_accto.

ls_accto-accto = 'BOME'.
APPEND ls_accto TO lt_accto.
ls_accto-accto = 'CPOD'.
APPEND ls_accto TO lt_accto.

is_mdcd-gsmng = 'value'. "updated quantity value

CALL FUNCTION 'MD_SET_ACTION_PLAF'
  EXPORTING
    iplnum            = iv_plnum
    iaenkz            = 'X'
    ivbkz             = 'X'
    imdcd             = is_mdcd "filled with your BOME-related data + new quantity
  TABLES
    tmdaccto          = lt_accto
  EXCEPTIONS
    illegal_interface = 1
    system_failure    = 2
    error_message     = 3.
So there is no need for a separate call of MD_PLANNED_ORDER_CHANGE anymore, and no more problems with the update.
I used the word possibly because I didn't find any example of this FM call on the Web (and the SAP documentation is quite ambiguous), so I propose this solution just as is, without verification.
P.S. Possible actions are listed in the T46AS table, and the possible impact of the imdcd fields on the order can be checked in transaction MDAC. It is somewhat of a GUI equivalent of this FM for a single order.

Storing a MATLAB array in MySQL. Again

I have a 3D uint16 array in MATLAB (basically it is just a 1080x1920x3 image). I want to store it in MySQL. Here is what I'm doing:
MySQL:
create table imgtest(img longblob);
Matlab:
% image_data - is my image as described before
raw_im = reshape(image_data, 1, []);
conn = database('test', 'root', 'root', 'Vendor', 'MySQL', 'Server', 'localhost');
x = conn.Handle;
insertcommand = ['INSERT INTO imgtest (img) VALUES (?)'];
StatementObject = x.prepareStatement(insertcommand);
StatementObject.setObject(1, raw_im)
StatementObject.execute
The problem is that I'm writing about 600k uint16 values into this blob field, but when I read the field back from the DB, I always get about 1.2 million uint8 elements (exactly twice as many).
So, is there a way to read this byte field as a set of uint16 values rather than uint8?
Thank you.
I have been doing something similar for one of my projects.
Basically there was one difference, but maybe it will clarify something for you.
I was loading the image directly into the DB from a file with this command:
INSERT INTO BaseImage(Image)
SELECT * FROM OPENROWSET(BULK N'C:\co.jpg', SINGLE_BLOB) as image
and getting it back into MATLAB required typecasting (just like @sebastian mentioned):
SQL_query = 'select TOP 1 pk_BaseImage,Image from BaseImage order by pk_BaseImage desc';
[data] = SQL_query_exec(SQL_query);
pk_BaseImage = data.Data.pk_BaseImage;
out = typecast(data.Data.Image{1,1},'uint8');
BUT...
it was not enough; I had to do a trick to use 'out' as an image.
I was forced to write it to a temporary file and read it back into MATLAB (I know it's strange, but it worked very well, and I could, for example, calculate the DWT, DFT and so on).
image_matrix = get_image_matrix( out );
get_image_matrix function looks like:
function [ out ] = get_image_matrix( input )
    targetfilename = 'temp.jpg';
    % result
    fid = fopen(targetfilename, 'w');
    if fid ~= -1
        fwrite(fid, input, 'uint8');
        fclose(fid);
    end
    out = imread(targetfilename);
    delete(targetfilename);
end
I hope it will help you :)
One important notice: I used gray-scale images (uint8 type).
You can most probably typecast the uint8s into uint16s to get back your original image data:
uint16_result = typecast(uint8_result, 'uint16');
I'm not familiar with the Database Toolbox, so there might well be a way to tell MATLAB to do this on its own.
OK, thank you both. I've summarized your answers and this is what I've got:
Since a blob field is nothing more than a byte array, we should typecast our data in MATLAB before writing it to the DB. After reading it back from the DB, we should cast it back.
A minimal working example is:
MySQL
create table imgtest(img longblob);
Matlab
% image_data - is my image as described before
raw_im = typecast(reshape(image_data, 1, []), 'uint8'); % ! the key line
conn = database('test', 'root', 'root', 'Vendor', 'MySQL', 'Server', 'localhost');
x = conn.Handle;
insertcommand = ['INSERT INTO imgtest (img) VALUES (?)'];
StatementObject = x.prepareStatement(insertcommand);
StatementObject.setObject(1, raw_im)
StatementObject.execute
Afterwards we can read it back:
res = exec(conn, 'SELECT * FROM imgtest');
res = fetch(res);
array_uint8 = res.Data{1};
array_uint16 = typecast(array_uint8, 'uint16'); % reshape to 1080x1920x3 to restore the original dimensions
Hope this will help someone.