Reading variable-length cell arrays into a matrix - Octave

I am using Octave 4.2 with xlsread in a for loop to import data from several different RTDs. The import code is:
for i = rtdmin:rtdmax
  filnum = num2str(i);
  fid = strcat(pre, filnum, filtyp);  % build the file name from prefix, number, extension
  if exist(fid) == 2                  % exist() returns 2 when the name is a file on disk
    [num{i}, txt{i}, raw{i}, lim{i}] = xlsread(fid);
    time{i} = num{i}(:,2);            % column 2: time stamps
    temp{i} = num{i}(:,3);            % column 3: temperature readings
  endif
endfor
The problem is that none of the RTDs have exactly the same number of readings (30,000 ±200), and they do not stop and start at exactly the same time, although the readings overlap. Because the data in each cell has a different length, I cannot simply pull it out into a matrix in order to process it. Can anyone suggest a way to get the data into a matrix, or a change to the existing code so that the data is read into a matrix to begin with? Thank you in advance.
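One common workaround (a sketch, assuming temp{i} are the column vectors filled by the loop above) is to pad every series with NaN to the length of the longest one and place them side by side as matrix columns:

% pad each series with NaN to a common length, then concatenate as columns
n = max(cellfun(@numel, temp(rtdmin:rtdmax)));   % length of the longest series
tempmat = NaN(n, rtdmax - rtdmin + 1);           % preallocate; NaN marks missing readings
for i = rtdmin:rtdmax
  if ~isempty(temp{i})
    tempmat(1:numel(temp{i}), i - rtdmin + 1) = temp{i};
  endif
endfor

The same padding works for time{i}. Note that this aligns the series by sample index; since the RTDs start at different times, aligning them properly in time would mean matching rows on the time column instead.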

Related

anova_test not returning Mauchly's test for three-way within-subject ANOVA

I am using a data set called sleep (found here: https://drive.google.com/file/d/15ZnsWtzbPpUBQN9qr-KZCnyX-0CYJHL5/view) to run a three-way within-subject ANOVA comparing Performance based on Stimulation, Deprivation, and Time. I have successfully done this before using anova_test from rstatix. I want to look at the sphericity output, but it doesn't appear in the output. I have gotten it to come up with other three-way within-subject datasets, so I'm not sure why this is happening. Here is my code:
anova_test(data = sleep, dv = Performance, wid = Subject, within = c(Stimulation, Deprivation, Time))
I also tried to save it to an object and use get_anova_table, but that didn't look any different.
sleep_aov <- anova_test(data = sleep, dv = Performance, wid = Subject, within = c(Stimulation, Deprivation, Time))
get_anova_table(sleep_aov, correction = "GG")
This is an ideal dataset I pulled from the internet, so I'm starting to think the data had a W of 1 (perfect sphericity) and so rstatix is skipping this output. Is this something anova_test does?
Here also is my code using a dataset that does return Mauchly's:
weight_loss_long <- pivot_longer(data = weightloss, cols = c(t1, t2, t3), names_to = "time", values_to = "loss")
weight_loss_long$time <- factor(weight_loss_long$time)
anova_test(data = weight_loss_long, dv = loss, wid = id, within = c(diet, exercises, time))
Not an expert at all, but it might be because your factors have only two levels.
From anova_summary() help:
"Value
return an object of class anova_test a data frame containing the ANOVA table for independent measures ANOVA. However, for repeated/mixed measures ANOVA, it is a list containing the following components are returned:
ANOVA: a data frame containing ANOVA results
Mauchly's Test for Sphericity: If any within-Ss variables with more than 2 levels are present, a data frame containing the results of Mauchly's test for Sphericity. Only reported for effects that have more than 2 levels because sphericity necessarily holds for effects with only 2 levels.
Sphericity Corrections: If any within-Ss variables are present, a data frame containing the Greenhouse-Geisser and Huynh-Feldt epsilon values, and corresponding corrected p-values. "
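You can reproduce this behavior with a small sketch. This assumes the selfesteem data from the datarium package (a hypothetical choice here; any repeated-measures data with a 3-level within factor works the same way):

library(rstatix)
library(tidyr)
data("selfesteem", package = "datarium")
selfesteem_long <- pivot_longer(selfesteem, cols = c(t1, t2, t3), names_to = "time", values_to = "score")
selfesteem_long$time <- factor(selfesteem_long$time)
# time has three levels: the result includes a Mauchly's Test for Sphericity table
anova_test(data = selfesteem_long, dv = score, wid = id, within = time)
# drop one level so time has only two: the Mauchly's table is omitted,
# because sphericity necessarily holds for effects with two levels
two_levels <- droplevels(subset(selfesteem_long, time != "t3"))
anova_test(data = two_levels, dv = score, wid = id, within = time)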

How to get dataset into array

I have worked through all the tutorials and searched for "load csv tensorflow", but I just can't get the logic of it all. I'm not a total beginner, but I don't have much time to complete this, and I've suddenly been thrown into TensorFlow, which is unexpectedly difficult.
Let me lay it out:
A very simple CSV file of 184 columns that are all float numbers. A row is simply today's price, three buy signals, and the previous 180 days' prices:
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
This article covers loading pretty well: https://www.tensorflow.org/guide/datasets. It even has a section on converting to numpy arrays, which is what I need to train and test the 'net. However, as the author says in the article leading to this web page, it is pretty complex. It seems like everything is geared toward doing data manipulation, whereas we have already normalized our data (nothing has really changed in AI since 1983 in terms of inputs, outputs, and layers).
Here is a way to load it, but it does not get the data into NumPy, and there is no example that skips the data manipulation.
import csv

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1
I need to know how to get the CSV file into the
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
so that I can follow the tutorials to train and test the net.
Your question is not entirely clear to me. Are you asking (tell me if I'm wrong) how to feed data into your model? There are several ways to do so:
1. Use placeholders with feed_dict during the session. This is the basic and easiest one, but it often suffers from training performance issues. For further explanation, check this post.
2. Use queues. Hard to implement and poorly documented; I don't suggest it, because it has been superseded by the third method.
3. The tf.data API.
...
So, to answer your question using the first method:
import csv
import numpy as np
import tensorflow as tf

# get your array outside the session
with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)
close_col = dataset[:, 0]        # column 0: today's price
signal_cols = dataset[:, 1:4]    # columns 1-3: the three buy signals
previous_cols = dataset[:, 4:]   # columns 4-183: the previous 180 days
# let's say you load 100 rows each time for training
batch_size = 100
# define placeholders as you did, but with a leading batch dimension,
# e.g. shape=[None, 3] for signals and shape=[None, 180] for previous
...
with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]})
With the third method (tf.data):
# retrieve your columns like before; here `wrapper` stands for the
# CSV-loading code above, packaged as a function
c_col, s_col, p_col = wrapper(filename)
# let's say you load 100 rows each time for training
batch_size = 100
# construct your input pipeline:
# shuffle the data --> assemble batches --> prefetch to RAM, ready to inject into the model
batch = tf.data.Dataset.from_tensor_slices((c_col, s_col, p_col))
batch = batch.shuffle(c_col.shape[0]).batch(batch_size).prefetch(1)
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
# the get-next-batch operation is advanced automatically at each run within the session
c_it, s_it, p_it = iterator.get_next()
# replace the close, signals, previous placeholders in your model
# by c_it, s_it, p_it when you define your model
...
with tf.Session() as sess:
    # you need to initialize the variables and the iterator
    sess.run([tf.global_variables_initializer(), iter_init_operation])
    ...
    for i in range(number_iter):
        sess.run(train_operation)
Good luck!

Why do I get odd 0,0 point in Octave trisurf

I am trying to draw a surface from a file on disk (shown below), but I get an odd additional point at coordinates (0,0).
The file appears to be in the correct shape to me.
I draw the chart from my C# application with a call to Octave .Net. Here is the Octave part of the script:
figure(1, 'name', 'Map');
colormap('hot');
t = dlmread('C:\Map3D.csv');
# idx = find(t(:,4) == 4.0); t2 = t(idx,:);
tx = t(:,1); ty = t(:,2); tz = t(:,3);
tri = delaunay(tx, ty);
handle = trisurf(tri, tx, ty, tz);
xlabel('Floor'); ylabel('HurdleF'); zlabel('Sharpe');
waitfor(handle);
This script is called from my C# app with the following, very simple, code snippet:
using (var octave = new OctaveContext())
{
    octave.Execute(script, int.MaxValue);
}
Can anyone explain whether my Octave script is wrong, or whether it is the way I have structured the file?
Floor,HurdleF,Sharpe,Model
1.40000000,15.00000000,-0.44,xxx1.40_Hrd_15.00
1.40000000,14.00000000,-0.49,xxx1.40_Hrd_14.00
1.40000000,13.00000000,-0.19,xxx1.40_Hrd_13.00
1.40000000,12.00000000,-0.41,xxx1.40_Hrd_12.00
1.40000000,11.00000000,0.42,xxx1.40_Hrd_11.00
1.40000000,10.00000000,0.17,xxx1.40_Hrd_10.00
1.40000000,9.00000000,0.28,xxx1.40_Hrd_9.00
1.40000000,8.00000000,0.49,xxx1.40_Hrd_8.00
1.40000000,7.00000000,0.45,xxx1.40_Hrd_7.00
1.40000000,6.00000000,0.79,xxx1.40_Hrd_6.00
1.40000000,5.00000000,0.56,xxx1.40_Hrd_5.00
1.40000000,4.00000000,1.76,xxx1.40_Hrd_4.00
1.30000000,15.00000000,-0.46,xxx1.30_Hrd_15.00
1.30000000,14.00000000,-0.55,xxx1.30_Hrd_14.00
1.30000000,13.00000000,-0.24,xxx1.30_Hrd_13.00
1.30000000,12.00000000,0.35,xxx1.30_Hrd_12.00
1.30000000,11.00000000,0.08,xxx1.30_Hrd_11.00
1.30000000,10.00000000,0.63,xxx1.30_Hrd_10.00
1.30000000,9.00000000,0.83,xxx1.30_Hrd_9.00
1.30000000,8.00000000,0.21,xxx1.30_Hrd_8.00
1.30000000,7.00000000,0.55,xxx1.30_Hrd_7.00
1.30000000,6.00000000,0.63,xxx1.30_Hrd_6.00
1.30000000,5.00000000,0.93,xxx1.30_Hrd_5.00
1.30000000,4.00000000,2.50,xxx1.30_Hrd_4.00
1.20000000,15.00000000,-0.40,xxx1.20_Hrd_15.00
1.20000000,14.00000000,-0.69,xxx1.20_Hrd_14.00
1.20000000,13.00000000,0.23,xxx1.20_Hrd_13.00
1.20000000,12.00000000,0.56,xxx1.20_Hrd_12.00
1.20000000,11.00000000,0.22,xxx1.20_Hrd_11.00
1.20000000,10.00000000,0.56,xxx1.20_Hrd_10.00
1.20000000,9.00000000,0.79,xxx1.20_Hrd_9.00
1.20000000,8.00000000,0.20,xxx1.20_Hrd_8.00
1.20000000,7.00000000,1.09,xxx1.20_Hrd_7.00
1.20000000,6.00000000,0.99,xxx1.20_Hrd_6.00
1.20000000,5.00000000,1.66,xxx1.20_Hrd_5.00
1.20000000,4.00000000,2.23,xxx1.20_Hrd_4.00
1.10000000,15.00000000,-0.31,xxx1.10_Hrd_15.00
1.10000000,14.00000000,-0.18,xxx1.10_Hrd_14.00
1.10000000,13.00000000,0.24,xxx1.10_Hrd_13.00
1.10000000,12.00000000,0.70,xxx1.10_Hrd_12.00
1.10000000,11.00000000,0.31,xxx1.10_Hrd_11.00
1.10000000,10.00000000,0.76,xxx1.10_Hrd_10.00
1.10000000,9.00000000,1.24,xxx1.10_Hrd_9.00
1.10000000,8.00000000,0.94,xxx1.10_Hrd_8.00
1.10000000,7.00000000,1.09,xxx1.10_Hrd_7.00
1.10000000,6.00000000,1.53,xxx1.10_Hrd_6.00
1.10000000,5.00000000,2.41,xxx1.10_Hrd_5.00
1.10000000,4.00000000,2.16,xxx1.10_Hrd_4.00
1.00000000,15.00000000,-0.41,xxx1.00_Hrd_15.00
1.00000000,14.00000000,-0.24,xxx1.00_Hrd_14.00
1.00000000,13.00000000,0.33,xxx1.00_Hrd_13.00
1.00000000,12.00000000,0.18,xxx1.00_Hrd_12.00
1.00000000,11.00000000,0.61,xxx1.00_Hrd_11.00
1.00000000,10.00000000,0.96,xxx1.00_Hrd_10.00
1.00000000,9.00000000,1.75,xxx1.00_Hrd_9.00
1.00000000,8.00000000,0.74,xxx1.00_Hrd_8.00
1.00000000,7.00000000,1.63,xxx1.00_Hrd_7.00
1.00000000,6.00000000,2.12,xxx1.00_Hrd_6.00
1.00000000,5.00000000,2.73,xxx1.00_Hrd_5.00
1.00000000,4.00000000,2.03,xxx1.00_Hrd_4.00
0.90000000,15.00000000,-0.42,xxx0.90_Hrd_15.00
0.90000000,14.00000000,-0.37,xxx0.90_Hrd_14.00
0.90000000,13.00000000,0.58,xxx0.90_Hrd_13.00
0.90000000,12.00000000,0.03,xxx0.90_Hrd_12.00
0.90000000,11.00000000,0.68,xxx0.90_Hrd_11.00
0.90000000,10.00000000,0.79,xxx0.90_Hrd_10.00
0.90000000,9.00000000,1.54,xxx0.90_Hrd_9.00
0.90000000,8.00000000,0.82,xxx0.90_Hrd_8.00
0.90000000,7.00000000,1.81,xxx0.90_Hrd_7.00
0.90000000,6.00000000,2.33,xxx0.90_Hrd_6.00
0.90000000,5.00000000,2.99,xxx0.90_Hrd_5.00
0.90000000,4.00000000,1.71,xxx0.90_Hrd_4.00
0.80000000,15.00000000,-0.46,xxx0.80_Hrd_15.00
0.80000000,14.00000000,-0.26,xxx0.80_Hrd_14.00
0.80000000,13.00000000,0.55,xxx0.80_Hrd_13.00
0.80000000,12.00000000,0.07,xxx0.80_Hrd_12.00
0.80000000,11.00000000,0.65,xxx0.80_Hrd_11.00
0.80000000,10.00000000,1.08,xxx0.80_Hrd_10.00
0.80000000,9.00000000,1.27,xxx0.80_Hrd_9.00
0.80000000,8.00000000,1.12,xxx0.80_Hrd_8.00
0.80000000,7.00000000,1.98,xxx0.80_Hrd_7.00
0.80000000,6.00000000,2.62,xxx0.80_Hrd_6.00
0.80000000,5.00000000,3.35,xxx0.80_Hrd_5.00
0.80000000,4.00000000,1.27,xxx0.80_Hrd_4.00
0.70000000,15.00000000,-0.56,xxx0.70_Hrd_15.00
0.70000000,14.00000000,-0.33,xxx0.70_Hrd_14.00
0.70000000,13.00000000,0.24,xxx0.70_Hrd_13.00
0.70000000,12.00000000,-0.22,xxx0.70_Hrd_12.00
0.70000000,11.00000000,0.74,xxx0.70_Hrd_11.00
0.70000000,10.00000000,1.19,xxx0.70_Hrd_10.00
0.70000000,9.00000000,1.24,xxx0.70_Hrd_9.00
0.70000000,8.00000000,1.14,xxx0.70_Hrd_8.00
0.70000000,7.00000000,2.26,xxx0.70_Hrd_7.00
0.70000000,6.00000000,2.70,xxx0.70_Hrd_6.00
0.70000000,5.00000000,3.52,xxx0.70_Hrd_5.00
0.70000000,4.00000000,1.05,xxx0.70_Hrd_4.00
0.60000000,15.00000000,-0.50,xxx0.60_Hrd_15.00
0.60000000,14.00000000,-0.60,xxx0.60_Hrd_14.00
0.60000000,13.00000000,0.11,xxx0.60_Hrd_13.00
0.60000000,12.00000000,-0.16,xxx0.60_Hrd_12.00
0.60000000,11.00000000,0.73,xxx0.60_Hrd_11.00
0.60000000,10.00000000,1.08,xxx0.60_Hrd_10.00
0.60000000,9.00000000,1.31,xxx0.60_Hrd_9.00
0.60000000,8.00000000,1.38,xxx0.60_Hrd_8.00
0.60000000,7.00000000,2.24,xxx0.60_Hrd_7.00
0.60000000,6.00000000,2.89,xxx0.60_Hrd_6.00
0.60000000,5.00000000,3.50,xxx0.60_Hrd_5.00
0.60000000,4.00000000,1.11,xxx0.60_Hrd_4.00
0.50000000,15.00000000,-0.40,xxx0.50_Hrd_15.00
0.50000000,14.00000000,-0.37,xxx0.50_Hrd_14.00
0.50000000,13.00000000,0.13,xxx0.50_Hrd_13.00
0.50000000,12.00000000,-0.11,xxx0.50_Hrd_12.00
0.50000000,11.00000000,0.61,xxx0.50_Hrd_11.00
0.50000000,10.00000000,0.92,xxx0.50_Hrd_10.00
0.50000000,9.00000000,1.41,xxx0.50_Hrd_9.00
0.50000000,8.00000000,1.39,xxx0.50_Hrd_8.00
0.50000000,7.00000000,2.19,xxx0.50_Hrd_7.00
0.50000000,6.00000000,2.80,xxx0.50_Hrd_6.00
0.50000000,5.00000000,3.41,xxx0.50_Hrd_5.00
0.50000000,4.00000000,1.05,xxx0.50_Hrd_4.00
0.40000000,15.00000000,-0.25,xxx0.40_Hrd_15.00
0.40000000,14.00000000,-0.44,xxx0.40_Hrd_14.00
0.40000000,13.00000000,0.02,xxx0.40_Hrd_13.00
0.40000000,12.00000000,0.00,xxx0.40_Hrd_12.00
0.40000000,11.00000000,0.69,xxx0.40_Hrd_11.00
0.40000000,10.00000000,0.67,xxx0.40_Hrd_10.00
0.40000000,9.00000000,1.02,xxx0.40_Hrd_9.00
0.40000000,8.00000000,1.29,xxx0.40_Hrd_8.00
0.40000000,7.00000000,2.17,xxx0.40_Hrd_7.00
0.40000000,6.00000000,2.88,xxx0.40_Hrd_6.00
0.40000000,5.00000000,3.19,xxx0.40_Hrd_5.00
0.40000000,4.00000000,0.98,xxx0.40_Hrd_4.00
0.30000000,15.00000000,-0.02,xxx0.30_Hrd_15.00
0.30000000,14.00000000,-0.36,xxx0.30_Hrd_14.00
0.30000000,13.00000000,-0.26,xxx0.30_Hrd_13.00
0.30000000,12.00000000,-0.11,xxx0.30_Hrd_12.00
0.30000000,11.00000000,0.50,xxx0.30_Hrd_11.00
0.30000000,10.00000000,0.50,xxx0.30_Hrd_10.00
0.30000000,9.00000000,1.01,xxx0.30_Hrd_9.00
0.30000000,8.00000000,1.28,xxx0.30_Hrd_8.00
0.30000000,7.00000000,2.11,xxx0.30_Hrd_7.00
0.30000000,6.00000000,2.89,xxx0.30_Hrd_6.00
0.30000000,5.00000000,3.16,xxx0.30_Hrd_5.00
0.30000000,4.00000000,0.95,xxx0.30_Hrd_4.00
What's happening
dlmread() is reading the file in as numeric data and returning a numeric matrix. It doesn't recognize the text in your header line, so it silently converts that row to all zeros. (IMHO this is a design flaw in dlmread.) Remove the header line.
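For example, dlmread takes an optional start row and start column (zero-based in this form), so you can skip the header while reading:

t = dlmread('C:\Map3D.csv', ',', 1, 0);  # start at row 1, skipping the header row 0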
How to debug this
So, you've got some zeros in your plot that you didn't expect to be there? Check for zeros in your input data:
ixZerosX = find(tx == 0)
ixZerosY = find(ty == 0)
ixZerosZ = find(tz == 0)
The semicolons are omitted intentionally there to get Octave to automatically display the results.
Better yet, since doubles are an approximate type, and the values might be close to but not actually zero, do a "near zero" search:
threshold = 0.1;
ixZerosX = find(abs(tx) < threshold)
ixZerosY = find(abs(ty) < threshold)
ixZerosZ = find(abs(tz) < threshold)

Storing matlab array in MySQL. Again

I have a 3D array of uint16 in Matlab (basically it is just a 1080x1920x3 image). I want to store it in MySQL. Here is what I'm doing:
MySQL:
create table imgtest(img longblob);
Matlab:
% image_data - is my image as described before
raw_im = reshape(image_data, 1, []);   % flatten to a row vector
conn = database('test', 'root', 'root', 'Vendor', 'MySQL', 'Server', 'localhost');
x = conn.Handle;
insertcommand = 'INSERT INTO imgtest (img) VALUES (?)';
StatementObject = x.prepareStatement(insertcommand);
StatementObject.setObject(1, raw_im)
StatementObject.execute
The problem is that I'm writing about 600k uint16 values into this blob field, but when I read the field back from the DB, I always get about 1.2 million uint8 elements (exactly twice as many).
So, is there a way to read this byte field as a set of uint16 rather than uint8?
Thank you.
I have been doing something similar for one of my projects. There was basically one difference, but maybe it will clarify something for you.
I was loading the image directly into the DB from a file with this command:
INSERT INTO BaseImage(Image)
SELECT * FROM OPENROWSET(BULK N'C:\co.jpg', SINGLE_BLOB) as image
and getting it back into Matlab required typecasting (just like @sebastian mentioned):
SQL_query = 'select TOP 1 pk_BaseImage,Image from BaseImage order by pk_BaseImage desc';
[data] = SQL_query_exec(SQL_query);
pk_BaseImage = data.Data.pk_BaseImage;
out = typecast(data.Data.Image{1,1},'uint8');
BUT...
that was not enough; I had to do a trick to use 'out' as an image. I was forced to write it to a temporary file and read it back into Matlab (I know it's strange, but it worked very well, and I could, for example, calculate the DWT, DFT, and so on):
image_matrix = get_image_matrix( out );
The get_image_matrix function looks like this:
function [ out ] = get_image_matrix( input )
  targetfilename = 'temp.jpg';
  % write the raw bytes to a temporary file...
  fid = fopen(targetfilename, 'w');
  if fid ~= -1   % fopen returns -1 on failure
    fwrite(fid, input, 'uint8');
    fclose(fid);
  end
  % ...then let imread decode it back into an image matrix
  out = imread(targetfilename);
  delete(targetfilename);
end
I hope it will help you :)
One important note: I used gray-scale images (uint8 type).
You can most probably typecast the uint8 values into uint16 to get back your original image data:
uint16_result = typecast(uint8_result, 'uint16');
I'm not familiar with the database toolbox - so there might well be a way to tell Matlab to do this on its own.
OK, thank you both. I've summarized your answers, and this is what I've got:
Since a blob field is nothing more than a byte array, we should typecast our data in Matlab before writing it to the DB, and typecast it back after reading it out.
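For instance, typecast reinterprets the same bytes without changing them (the byte order shown assumes a little-endian machine):

b = typecast(uint16([500 1000]), 'uint8')   % -> [244 1 232 3], four bytes
u = typecast(b, 'uint16')                   % -> [500 1000], the original values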
A minimal working example:
MySQL
create table imgtest(img longblob);
Matlab
% image_data - is my image as described before
raw_im = typecast(reshape(image_data, 1, []), 'uint8');   % the key line: view the uint16 data as raw bytes
conn = database('test', 'root', 'root', 'Vendor', 'MySQL', 'Server', 'localhost');
x = conn.Handle;
insertcommand = 'INSERT INTO imgtest (img) VALUES (?)';
StatementObject = x.prepareStatement(insertcommand);
StatementObject.setObject(1, raw_im)
StatementObject.execute
Afterwards, we can read it back:
res = exec(conn, 'SELECT * FROM imgtest');
array_uint8 = fetch(res);
array_uint8 = array_uint8{1};
array_uint16 = typecast(array_uint8, 'uint16');
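Note that typecast returns a flat vector; to get the image back, reshape it to the original dimensions (using the 1080x1920x3 size from the question):

image_back = reshape(array_uint16, 1080, 1920, 3);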
Hope this will help someone.

Octave and multiple Bode plots

I'm teaching myself Octave and as a motivational exercise am attempting to create some Bode plots. I'd like to create a plot that has multiple curves for different values of a parameter in a transfer function, for example the time constant of a simple RC filter. I'm trying to do it as follows:
tau = [1,2,3]
for i = tau
g(i) = tf(1,[tau(i),1])
endfor
bode(g(1),g(2),g(3))
But it doesn't work; I get the error
error: octave_base_value::imag (): wrong type argument `struct'
However, it works fine if there are not multiple arguments to the bode command and the last line is simply:
bode(g(1))
Any advice as to where I've gone wrong would be appreciated - is there a better way to do what I want to do?
I was able to do it with the following sequence (with Octave 3.2.4 on Debian):
bode(g(1))
set (findobj (gcf, "type", "axes"), "nextplot", "add")
bode(g(2))
bode(g(3))
The second command is similar to hold on but it works when there are subplots; I found it here.
Using your own code:
subplot(211), hold on
subplot(212), hold on
tau = [1,2,3]
for i = 1:length(tau)
  g(i) = tf(1, [tau(i), 1]);
  bode(g(i))
endfor
The problem with this solution is that you cannot identify a specific plot; you cannot access figure properties through the bode() function directly.
Here, then, is a plausible solution that gives you colored plots:
colorsplot = ["b", "m", "g"]
tau = [1,2,3]
g = tf(1, [tau(1), 1]);
[mag, ph, w] = bode(g);
subplot(211), semilogx(w, 20*log10(mag)), hold on   % magnitude in dB (note log10, not log)
subplot(212), semilogx(w, ph), hold on              % phase
for i = 2:length(tau)
  g = tf(1, [tau(i), 1]);
  [mag, ph, waux] = bode(g, w);   % evaluate at the same frequency vector w
  subplot(211), semilogx(w, 20*log10(mag), colorsplot(i))
  subplot(212), semilogx(w, ph, colorsplot(i))
endfor
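To tell the curves apart, you can add a legend after the loop (a small addition; the labels assume tau = [1,2,3] as above):

subplot(211), legend('tau = 1', 'tau = 2', 'tau = 3')
subplot(212), legend('tau = 1', 'tau = 2', 'tau = 3')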