Scilab plot shows incorrect ODE values

I'm trying to create a model of an asynchronous electrical motor in Scilab and display graphs of how the rpm, currents, and torque change over time. The code looks quite long, but you don't need to read it all.
fHz = 50;
Um = 230;
p = 3;
we = 2*%pi*fHz/p;
wb = 2*%pi*50;
Rs = 0.435;
Rr = 0.64;
Ls = 0.0477;
Xls = wb*Ls; // [Ohm]
Lr = 0.0577;
Xlr = wb*Lr; // [Ohm]
Lm = 0.012;
Xm = wb*Lm; // [Ohm]
Xml = 1/(1/Xls + 1/Xm + 1/Xlr); // [Ohm]
D = 0.0002;
J = 0.28;
Mt = 0.0;
function [xdot]=AszinkronGep(t, x, Um, fHz)
xdot = zeros(12, 1);
Fsq = x(1);
Fsd = x(2);
Frq = x(3);
Frd = x(4);
wr = x(5);
isabc(1) = x(6);
isabc(2) = x(7);
isabc(3) = x(8);
irabc(1) = x(9);
irabc(2) = x(10);
irabc(3) = x(11);
Ua = Um*sin(2*%pi*fHz*t);
Ub = Um*sin(2*%pi*fHz*t - 2*%pi/3);
Uc = Um*sin(2*%pi*fHz*t + 2*%pi/3);
Uab = 2/3*[1, -0.5, -0.5; 0, sqrt(3)/2, -sqrt(3)/2]*[Ua;Ub;Uc];
phi = 2*%pi*fHz*t;
Udq = [cos(phi), sin(phi); -sin(phi), cos(phi)]*Uab;
Usd = Udq(1);
Usq = Udq(2);
Urd = 0;
Urq = 0;
isd = ( Fsd-Xml*(Fsd/Xls + Frd/Xlr) )/Xls;
isq = ( Fsq-Xml*(Fsq/Xls + Frq/Xlr) )/Xls;
ird = ( Frd-Xml*(Fsd/Xls + Frd/Xlr) )/Xlr;
irq = ( Frq-Xml*(Fsq/Xls + Frq/Xlr) )/Xlr;
isdq = [isd; isq];
isalphabeta = [cos(phi), -sin(phi); sin(phi), cos(phi)]*isdq;
isabc = [1, 0; -0.5, sqrt(3)/2; -0.5, -sqrt(3)/2]*isalphabeta;
irdq = [ird; irq];
iralphabeta = [cos(phi), -sin(phi); sin(phi), cos(phi)]*irdq;
irabc = [1, 0; -0.5, sqrt(3)/2; -0.5, -sqrt(3)/2]*iralphabeta;
//TORQUE
Me = (3/2)*p*(Fsd*isq - Fsq*isd)/wb;
Fmq = Xml*( Fsq/Xls + Frq /Xlr );
Fmd = Xml*( Fsd/Xls + Frd /Xlr );
//Differential equations
xdot(1) = wb*( Usq - we/wb*Fsd + Rs/Xls*(Fmq - Fsq) );
xdot(2) = wb*( Usd + we/wb*Fsq + Rs/Xls*(Fmd - Fsd) );
xdot(3) = wb*( Urq - (we - wr)/wb*Frd + Rr/Xlr *(Fmq - Frq) );
xdot(4) = wb*( Urd + (we - wr)/wb*Frq + Rr/Xlr *(Fmd - Frd ) );
xdot(5) = p*(Me - D*wr - Mt)/J;
xdot(6) = isabc(1);
xdot(7) = isabc(2);
xdot(8) = isabc(3);
xdot(9) = irabc(1);
xdot(10) = irabc(2);
xdot(11) = irabc(3);
xdot(12) = Me;
if t <= 5 then
disp(Me);
end
endfunction
//Simulation parameter
t = 0:0.001:5;
t0 = 0;
//Starting parameters
y0 = [0;0;0;0;0;0;0;0;0;0;0;0]
y = ode(y0,t0,t,list(AszinkronGep,Um,fHz));
//Graphs
figure(1)
plot(t,y(5,:), "linewidth", 3);
xlabel("time [s]", "fontsize", 3, "color", "blue");
ylabel("rpm [rpm]", "fontsize", 3, "color", "blue");
figure(4)
plot(t,y(12,:), "linewidth", 3);
xlabel("time [s]", "fontsize", 3, "color", "blue");
ylabel("torque [Nm]", "fontsize", 3, "color", "blue");
I want a graph that shows 'Me' as a function of time. So I write xdot(12) = Me, then plot that, but it doesn't look at all like it should. Just to check, I added 'disp(Me)' at the end of the function, to see if the calculations are correct at all. And yes, those are the right values. Why does it give me different values when I plot it?

As noted in the comments, y(12) is the integral of Me over t.
If you want Me, you just need to differentiate it:
//after you run ode() and have y values
h = t(2) - t(1);
Me = diff(y(12,:)) ./ h;
//plotting
scf(); clf();
subplot(2,1,1);
xtitle("y(12,:)");
plot2d(t,y(12,:));
subplot(2,1,2);
xtitle("Me");
plot2d(t(1:$-1),Me);
Here is the output (a figure with y(12,:) in the top panel and the recovered Me in the bottom panel).
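Alternatively, since all the flux states are already in y, you can recompute Me pointwise from them instead of differentiating the integrated torque. A sketch, assuming the parameters Xls, Xlr, Xml, p and wb defined above are still in scope:
Fsq = y(1,:); Fsd = y(2,:); Frq = y(3,:); Frd = y(4,:);
isd = ( Fsd - Xml*(Fsd/Xls + Frd/Xlr) )/Xls; // same algebra as inside AszinkronGep
isq = ( Fsq - Xml*(Fsq/Xls + Frq/Xlr) )/Xls;
Me2 = (3/2)*p*(Fsd.*isq - Fsq.*isd)/wb;
plot2d(t, Me2);
This avoids the one-sample shift and the noise amplification of the numerical derivative.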

Pine Script: create a linear regression fixed-interval line with standard error, r-squared, and angle-of-line readouts

I want to be able to plot a linear regression line (a fixed-interval straight trend line, not a continuous curve) from any two start and end price bars on a stock chart. TradingView has a regression TREND tool that allows you to draw it on the chart instead of entering the dates manually. This tool is easier, but I can't find the code for it, so a manual date-entry box is also fine. I would like that regression trendline to appear on the chart, then state the standard error of each bar, making it easy to note the most extreme outliers. If possible, having r-squared added to this line as a readout would also be an excellent comparative tool (to help determine the weakest r-squared). Lastly, I want some type of ANGLE measurement of the linear regression trendline: angle of line, or slope? Attached is TradingView's built-in Linear Regression Channel indicator code as a starting point. Any help with this would be greatly appreciated. Thank you.
```
//@version=5
indicator("Linear Regression Channel", shorttitle="LinReg", overlay=true)
lengthInput = input.int(100, title="Length", minval = 1, maxval = 5000)
sourceInput = input.source(close, title="Source")
group1 = "Channel Settings"
useUpperDevInput = input.bool(true, title="Upper Deviation", inline = "Upper Deviation", group = group1)
upperMultInput = input.float(2.0, title="", inline = "Upper Deviation", group = group1)
useLowerDevInput = input.bool(true, title="Lower Deviation", inline = "Lower Deviation", group = group1)
lowerMultInput = input.float(2.0, title="", inline = "Lower Deviation", group = group1)
group2 = "Display Settings"
showPearsonInput = input.bool(true, "Show Pearson's R", group = group2)
extendLeftInput = input.bool(false, "Extend Lines Left", group = group2)
extendRightInput = input.bool(true, "Extend Lines Right", group = group2)
extendStyle = switch
    extendLeftInput and extendRightInput => extend.both
    extendLeftInput => extend.left
    extendRightInput => extend.right
    => extend.none
group3 = "Color Settings"
colorUpper = input.color(color.new(color.blue, 85), "", inline = group3, group = group3)
colorLower = input.color(color.new(color.red, 85), "", inline = group3, group = group3)
calcSlope(source, length) =>
    max_bars_back(source, 5000)
    if not barstate.islast or length <= 1
        [float(na), float(na), float(na)]
    else
        sumX = 0.0
        sumY = 0.0
        sumXSqr = 0.0
        sumXY = 0.0
        for i = 0 to length - 1 by 1
            val = source[i]
            per = i + 1.0
            sumX += per
            sumY += val
            sumXSqr += per * per
            sumXY += val * per
        slope = (length * sumXY - sumX * sumY) / (length * sumXSqr - sumX * sumX)
        average = sumY / length
        intercept = average - slope * sumX / length + slope
        [slope, average, intercept]
[s, a, i] = calcSlope(sourceInput, lengthInput)
startPrice = i + s * (lengthInput - 1)
endPrice = i
var line baseLine = na
if na(baseLine) and not na(startPrice)
    baseLine := line.new(bar_index - lengthInput + 1, startPrice, bar_index, endPrice, width=1, extend=extendStyle, color=color.new(colorLower, 0))
else
    line.set_xy1(baseLine, bar_index - lengthInput + 1, startPrice)
    line.set_xy2(baseLine, bar_index, endPrice)
    na
calcDev(source, length, slope, average, intercept) =>
    upDev = 0.0
    dnDev = 0.0
    stdDevAcc = 0.0
    dsxx = 0.0
    dsyy = 0.0
    dsxy = 0.0
    periods = length - 1
    daY = intercept + slope * periods / 2
    val = intercept
    for j = 0 to periods by 1
        price = high[j] - val
        if price > upDev
            upDev := price
        price := val - low[j]
        if price > dnDev
            dnDev := price
        price := source[j]
        dxt = price - average
        dyt = val - daY
        price -= val
        stdDevAcc += price * price
        dsxx += dxt * dxt
        dsyy += dyt * dyt
        dsxy += dxt * dyt
        val += slope
    stdDev = math.sqrt(stdDevAcc / (periods == 0 ? 1 : periods))
    pearsonR = dsxx == 0 or dsyy == 0 ? 0 : dsxy / math.sqrt(dsxx * dsyy)
    [stdDev, pearsonR, upDev, dnDev]
[stdDev, pearsonR, upDev, dnDev] = calcDev(sourceInput, lengthInput, s, a, i)
upperStartPrice = startPrice + (useUpperDevInput ? upperMultInput * stdDev : upDev)
upperEndPrice = endPrice + (useUpperDevInput ? upperMultInput * stdDev : upDev)
var line upper = na
lowerStartPrice = startPrice + (useLowerDevInput ? -lowerMultInput * stdDev : -dnDev)
lowerEndPrice = endPrice + (useLowerDevInput ? -lowerMultInput * stdDev : -dnDev)
var line lower = na
if na(upper) and not na(upperStartPrice)
    upper := line.new(bar_index - lengthInput + 1, upperStartPrice, bar_index, upperEndPrice, width=1, extend=extendStyle, color=color.new(colorUpper, 0))
else
    line.set_xy1(upper, bar_index - lengthInput + 1, upperStartPrice)
    line.set_xy2(upper, bar_index, upperEndPrice)
    na
if na(lower) and not na(lowerStartPrice)
    lower := line.new(bar_index - lengthInput + 1, lowerStartPrice, bar_index, lowerEndPrice, width=1, extend=extendStyle, color=color.new(colorUpper, 0))
else
    line.set_xy1(lower, bar_index - lengthInput + 1, lowerStartPrice)
    line.set_xy2(lower, bar_index, lowerEndPrice)
    na
linefill.new(upper, baseLine, color = colorUpper)
linefill.new(baseLine, lower, color = colorLower)
// Pearson's R
var label r = na
label.delete(r[1])
if showPearsonInput and not na(pearsonR)
    r := label.new(bar_index - lengthInput + 1, lowerStartPrice, str.tostring(pearsonR, "#.################"), color = color.new(color.white, 100), textcolor=color.new(colorUpper, 0), size=size.normal, style=label.style_label_up)
```
You can use the confirm = true argument with input.time() in order to manually click the start and end points on the chart.
Since we have to render something like the individual errors historically, we have to use the label functions to do so. There is a 500-label limit, so if the regression contains more than 500 bars, you will have to define "extreme outliers" in this context so that 500 bars or fewer get labelled; one possible filter is sketched below.
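For example, a sketch (not part of the original script) that would sit inside the `if is_last_bar` block of the full script below, after dev has been computed; it labels only the bars whose absolute error exceeds some multiple of dev, and the threshold of 2 is an arbitrary choice:
```
    // sketch: label only "extreme" residuals, e.g. |error| > 2 * dev,
    // to stay under the 500-label limit
    for i = 0 to array.size(time_vals) - 1
        err = array.get(price_vals, i) - array.get(f, i)
        if math.abs(err) > 2 * dev
            label.new(array.get(time_vals, i), array.get(price_vals, i), str.tostring(err, "#.##"), xloc = xloc.bar_time, style = label.style_label_down, size = size.tiny)
```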
```
//@version=5
indicator("point to point linreg", overlay = true, max_lines_count = 500, max_labels_count = 500)
start_time = input.time(timestamp("20 Jul 2022 00:00 +000"), title = "Start time", confirm = true)
end_time = input.time(timestamp("21 Jul 2022 00:00 +000"), title = "End time", confirm = true)
src = input.source(close, title = "Source")
devmult = input.float(1.000, title = "Dev mult")
reg_line_col = input.color(color.blue, title = "Reg line color")
up_line_col = input.color(color.green, title = "Upper Dev line color")
dn_line_col = input.color(color.red, title = "Lower Dev line color")
reg_line_width = input.int(2, title = "Line width")
f_linreg_from_arrays(_x_array, _y_array) =>
    _size_x = array.size(_x_array)
    _size_y = array.size(_y_array)
    float _sum_x = array.sum(_x_array)
    float _sum_y = array.sum(_y_array)
    float _sum_xy = 0.0
    float _sum_x2 = 0.0
    float _sum_y2 = 0.0
    if _size_y == _size_x
        for _i = 0 to _size_y - 1
            float _x_i = nz(array.get(_x_array, _i))
            float _y_i = nz(array.get(_y_array, _i))
            _sum_xy := _sum_xy + _x_i * _y_i
            _sum_x2 := _sum_x2 + math.pow(_x_i, 2)
            _sum_y2 := _sum_y2 + math.pow(_y_i, 2)
            _sum_y2
    float _a = (_sum_y * _sum_x2 - _sum_x * _sum_xy) / (_size_x * _sum_x2 - math.pow(_sum_x, 2))
    float _b = (_size_x * _sum_xy - _sum_x * _sum_y) / (_size_x * _sum_x2 - math.pow(_sum_x, 2))
    float[] _f = array.new_float()
    for _i = 0 to _size_y - 1
        float _vector = _a + _b * array.get(_x_array, _i)
        array.push(_f, _vector)
    _slope = (array.get(_f, 0) - array.get(_f, _size_y - 1)) / (array.get(_x_array, 0) - array.get(_x_array, _size_x - 1))
    _y_mean = array.avg(_y_array)
    float _SS_res = 0.0
    float _SS_tot = 0.0
    for _i = 0 to _size_y - 1
        float _f_i = array.get(_f, _i)
        float _y_i = array.get(_y_array, _i)
        _SS_res := _SS_res + math.pow(_f_i - _y_i, 2)
        _SS_tot := _SS_tot + math.pow(_y_mean - _y_i, 2)
        _SS_tot
    _r_sq = 1 - _SS_res / _SS_tot
    float _sq_err_sum = 0
    for _i = 0 to _size_y - 1
        _sq_err_sum += math.pow(array.get(_f, _i) - array.get(_y_array, _i), 2)
    _dev = math.sqrt(_sq_err_sum / _size_y)
    [_f, _slope, _r_sq, _dev]
var int[] time_vals = array.new_int()
var float[] price_vals = array.new_float()
var line[] reg_lines = array.new_line()
var line[] up_lines = array.new_line()
var line[] dn_lines = array.new_line()
var label reg_label = label.new(x = na, y = na, xloc = xloc.bar_time, style = label.style_label_upper_left, textcolor = color.white, textalign = text.align_left)
is_last_bar = time >= end_time and time[1] < end_time
is_in_time_range = time >= start_time and time <= end_time
if is_in_time_range
    array.unshift(time_vals, time)
    array.unshift(price_vals, src)
if barstate.isfirst
    for i = 0 to 165
        array.push(reg_lines, line.new(x1 = na, y1 = na, x2 = na, y2 = na, xloc = xloc.bar_time, color = reg_line_col, width = reg_line_width))
        array.push(up_lines, line.new(x1 = na, y1 = na, x2 = na, y2 = na, xloc = xloc.bar_time, color = up_line_col, width = reg_line_width))
        array.push(dn_lines, line.new(x1 = na, y1 = na, x2 = na, y2 = na, xloc = xloc.bar_time, color = dn_line_col, width = reg_line_width))
if is_last_bar
    [f, slope, r_sq, dev] = f_linreg_from_arrays(time_vals, price_vals)
    size = array.size(time_vals)
    if size > 0
        if size <= 167
            for i = 0 to size - 2
                start_x = array.get(time_vals, i)
                start_y = array.get(f, i)
                end_x = array.get(time_vals, i + 1)
                end_y = array.get(f, i + 1)
                reg_line = array.get(reg_lines, i)
                up_line = array.get(up_lines, i)
                dn_line = array.get(dn_lines, i)
                line.set_xy1(reg_line, x = start_x, y = start_y)
                line.set_xy2(reg_line, x = end_x, y = end_y)
        else
            interval = math.ceil(size / 166)
            line_index = 0
            for i = 0 to size - math.floor(interval / 2) - 2 by interval
                index2 = i + math.floor(interval / 2)
                start_x = array.get(time_vals, i)
                start_y = array.get(f, i)
                end_x = array.get(time_vals, index2)
                end_y = array.get(f, index2)
                reg_line = array.get(reg_lines, line_index)
                up_line = array.get(up_lines, line_index)
                dn_line = array.get(dn_lines, line_index)
                line.set_xy1(reg_line, x = start_x, y = start_y)
                line.set_xy2(reg_line, x = end_x, y = end_y)
                line.set_xy1(up_line, x = start_x, y = start_y + devmult * dev)
                line.set_xy2(up_line, x = end_x, y = end_y + devmult * dev)
                line.set_xy1(dn_line, x = start_x, y = start_y - devmult * dev)
                line.set_xy2(dn_line, x = end_x, y = end_y - devmult * dev)
                line_index += 1
    reg_info = "Slope : " + str.tostring(slope) + "\nR² : " + str.tostring(r_sq) + "\ndev : " + str.tostring(dev)
    label_x = array.get(time_vals, 0)
    label_y = array.get(f, 0) - devmult * dev
    label.set_xy(reg_label, x = label_x, y = label_y)
    label.set_text(reg_label, text = reg_info)
```

The dependency graph in MATLAB

I've got the following function:
phi0 = [0; 0]; %initial values
[T,PHI] = ode45(@eqn,[0, 10], phi0,odeset('RelTol',2e-13,'AbsTol',1e-100));
plot(T, PHI(:,1),'-b',T, PHI(:,2),'-g');
title('\it d = 0.1')
w1 = PHI(end,1)/(10000*2*pi) %the limit frequency for phi1
w2 = PHI(end,2)/(10000*2*pi) %the limit frequency for phi2
delta_w = w2 - w1
phi1_at_t_10k = PHI(end,1) %the value phi1(t=10000)
phi2_at_t_10k = PHI(end,2)
function dy_dt = eqn(t,phi)
d = 0.1; %synchronization parameter
n = 3;
g = [ 1.01; 1.02];
f = g-sin(phi/n);
exch = [d;-d]*sin(phi(2)-phi(1));
dy_dt = f+exch;
end
The w is calculated by the formula w_i = (1/(2*pi)) * lim (phi_i(t) - phi_i(0))/t as t -> infinity (here t is taken to be 10000).
The question is how to plot the dependence of delta_w on different values of d (from d=0 to d=5 with step = 0.1)?
To summarize my comments:
First make the parameter d explicit in the ODE function
function dy_dt = eqn(t,phi,d)
n = 3;
g = [ 1.01; 1.02];
f = g-sin(phi/n);
exch = [d;-d]*sin(phi(2)-phi(1));
dy_dt = f+exch;
end
Then put the ODE integration and evaluation of the result in its own procedure
function delta_w = f(d)
phi0 = [0; 0]; %initial values
opts = odeset('RelTol',2e-13,'AbsTol',1e-100);
[T,PHI] = ode45(@(t,y)eqn(t,y,d), [0, 10], phi0, opts);
w1 = PHI(end,1)/(10000*2*pi); %the limit frequency for phi1
w2 = PHI(end,2)/(10000*2*pi); %the limit frequency for phi2
delta_w = w2 - w1;
end
And finally evaluate for the list of d values under consideration
d = [0:0.1:5];
delta_w = arrayfun(@(x)f(x),d);
plot(d,delta_w);
This should give a result. If it is not the expected one, further research into assumptions, equations and code is necessary.
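One more remark: the formula above takes the limit at t = 10000, while the integration interval in the code is [0, 10]. If the long-time limit is really what you want, a sketch along these lines (f_long is a hypothetical name to avoid clashing with f above, and it reuses the same eqn) integrates to t = 10000 and normalizes by the actual end time:
function delta_w = f_long(d) %hypothetical variant of f
phi0 = [0; 0];
opts = odeset('RelTol',2e-13,'AbsTol',1e-100);
[T,PHI] = ode45(@(t,y)eqn(t,y,d), [0, 10000], phi0, opts);
w = (PHI(end,:) - phi0')/(T(end)*2*pi); %limit frequencies [w1, w2]
delta_w = w(2) - w(1);
end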

Handwriting synthesis (Alex Graves)

I have been trying to replicate the Alex Graves handwriting synthesis model with TensorFlow and Python on a 1080 Ti GPU with CUDA.
I replicated all of the features explained in the paper exactly, and even clipped the respective gradient values in place, but I have real difficulty training it.
I also preprocessed the data in the way explained in the paper, including normalizing the x and y offsets, but the problem is that the training usually can't lower the negative log likelihood below 1000, whereas in the paper it reaches -1000, and after that I see NaN weights.
The only extra thing I did was to add 0.0000001 to the conditional probability of every stroke to prevent NaN values in the log likelihood.
Any tips or suggestions or experience with such a task?
This is the cell code I use:
from tensorflow.contrib.rnn import RNNCell
from tensorflow.contrib.rnn import LSTMCell
from tensorflow.contrib.rnn import LSTMStateTuple
import tensorflow as tf
import numpy as np

class Custom_Cell(RNNCell):
    def __init__(self,forget_bias,bias,one_hot_vector, hidden_layer_nums=[700,700,700], mixture_num=10, attention_num=4):
        self.bias = bias
        self.lstms = []
        for i in hidden_layer_nums:
            self.lstms.append(LSTMCell(num_units=i, initializer=tf.truncated_normal_initializer(0.075), dtype=tf.float32, forget_bias=forget_bias))
        self.attention_num = attention_num
        self.mixture_num = mixture_num
        self.state_size = 2*sum(hidden_layer_nums) + 3*self.attention_num
        self.attention_var_num = 3*self.attention_num
        self.output_size = 6*self.mixture_num + 1 + 1
        self.one_hot_vector = one_hot_vector
        self.lstm_num = len(hidden_layer_nums)
        self.hidden_layer_nums = hidden_layer_nums
        temp_shape = self.one_hot_vector.shape
        self.char_num = temp_shape[2]
        self.i_to_h = []
        self.w_to_h = []
        self.h_to_h = []
        self.prev_h_to_h = []
        self.lstm_bias = []
        self.lstm_to_attention_weights = tf.get_variable("lstms/first_to_attention_mtrx",shape=[hidden_layer_nums[0],self.attention_var_num],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True)
        self.lstm_to_attention_bias = tf.get_variable("lstms/first_to_attention_bias",shape=[self.attention_var_num],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True)
        self.all_to_output_mtrx = []
        for i in range(self.lstm_num):
            self.all_to_output_mtrx.append( tf.get_variable("lstms/to_output_mtrx_" + str(i), shape=[hidden_layer_nums[i],self.output_size-1],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
        self.all_to_output_bias = tf.get_variable("lstms/output_bias",shape=[self.output_size-1],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True)
        for i in range(self.lstm_num):
            self.i_to_h.append(tf.get_variable("lstms/i_to_h_"+str(i),shape=[3,hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            self.w_to_h.append(tf.get_variable("lstms/w_to_h_"+str(i),shape=[self.char_num,hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            self.h_to_h.append(tf.get_variable("lstms/h_to_h_"+str(i),shape=[hidden_layer_nums[i],hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            self.lstm_bias.append(tf.get_variable("lstms/bias_" + str(i),shape=[hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            if not i == 0:
                self.prev_h_to_h.append(
                    tf.get_variable("lstms/prev_h_to_h_" + str(i), shape=[hidden_layer_nums[i-1], hidden_layer_nums[i]],
                                    dtype=tf.float32, initializer=tf.truncated_normal_initializer(stddev=0.075),
                                    trainable=True))

    def __call__(self, inputs, state, scope=None):
        # Extracting previous configuration and vectors
        splitarray = []
        for i in self.hidden_layer_nums:
            splitarray.append(i)
            splitarray.append(i)
        splitarray.append(3*self.attention_num)
        splitted = tf.split(state,splitarray,axis=1)
        prev_tuples = []
        for i in range(self.lstm_num):
            newtuple = LSTMStateTuple(splitted[2*i],splitted[2*i + 1])
            prev_tuples.append(newtuple)
        prev_attention_vec = splitted[2*self.lstm_num]
        new_attention_vec = 0
        next_states = []
        most_attended = 0
        last_output = 0
        for i in range(self.lstm_num):
            prev_c, prev_h = prev_tuples[i]
            cell = self.lstms[i]
            if i == 0:
                with tf.name_scope("layer_1"):
                    w, most_attended = self.gaussian_attention(self.one_hot_vector,prev_attention_vec)
                    input_vec = tf.matmul(inputs,self.i_to_h[0]) + tf.matmul(prev_h,self.h_to_h[0]) + tf.matmul(w,self.w_to_h[0]) + self.lstm_bias[0]
                    _, new_state = cell(input_vec, prev_tuples[0])
                    new_c, new_h = new_state
                    next_states.append(new_c)
                    next_states.append(new_h)
                    last_output = tf.matmul(new_h,self.all_to_output_mtrx[0])
                with tf.name_scope("attention_layer"):
                    temp_attention = tf.matmul(new_h,self.lstm_to_attention_weights) + self.lstm_to_attention_bias
                    new_alpha, new_beta, new_kappa = tf.split(temp_attention,[self.attention_num,self.attention_num,self.attention_num],axis=1)
                    old_alpha, old_beta, old_kappa = tf.split(prev_attention_vec,[self.attention_num,self.attention_num,self.attention_num], axis=1)
                    new_alpha = tf.exp(new_alpha)
                    new_beta = tf.exp(new_beta)
                    new_kappa = tf.exp(new_kappa) + old_kappa
                    new_attention_vec = tf.concat([new_alpha,new_beta,new_kappa],axis=1)
            else:
                with tf.name_scope("layer_" + str(i)):
                    w, most_attended = self.gaussian_attention(self.one_hot_vector,new_attention_vec)
                    input_vec = tf.matmul(inputs,self.i_to_h[i]) + tf.matmul(next_states[-1],self.prev_h_to_h[i-1]) + tf.matmul(prev_h,self.h_to_h[i]) + tf.matmul(w,self.w_to_h[i]) + self.lstm_bias[i]
                    _,new_state = cell(input_vec,prev_tuples[i])
                    new_c, new_h = new_state
                    next_states.append(new_c)
                    next_states.append(new_h)
                    last_output = last_output + tf.matmul(new_h, self.all_to_output_mtrx[i])
        with tf.name_scope("output"):
            last_output = last_output + self.all_to_output_bias
        next_states.append(new_attention_vec)
        state_to_return = tf.concat(next_states,axis=1)
        output_split_param = [1,self.mixture_num,2*self.mixture_num,2*self.mixture_num,self.mixture_num]
        binomial_param, pi, mu, sigma, rho = tf.split(last_output,output_split_param,axis=1)
        binomial_param = tf.divide(1.,1.+tf.exp(binomial_param))
        pi = tf.nn.softmax(tf.multiply(pi,1.+self.bias),axis=1)
        mu = mu
        sigma = tf.exp(sigma-self.bias)
        rho = tf.tanh(rho)
        output_to_return = tf.concat([most_attended, binomial_param, pi, mu, sigma, rho],axis=1)
        return output_to_return, state_to_return

    def state_size(self):
        return self.state_size

    def output_size(self):
        return self.output_size

    def gaussian_attention(self,sequence,params):
        with tf.name_scope("attention_calculation"):
            alpha, beta, kappa = tf.split(params,[self.attention_num,self.attention_num,self.attention_num],axis=1)
            seq_shape = sequence.shape
            seq_length = seq_shape[1]
            temp_vec = 20*np.asarray(range(seq_length),dtype=float)
            final_result = 0
            alpha = tf.split(alpha,self.attention_num,1)
            beta = tf.split(beta,self.attention_num,1)
            kappa = tf.split(kappa,self.attention_num,1)
            for i in range(self.attention_num):
                alpha_now = alpha[i]
                beta_now = beta[i]
                kappa_now = kappa[i]
                result = kappa_now - temp_vec
                result = tf.multiply(tf.square(result),tf.negative(beta_now))
                result = tf.multiply(tf.exp(result),alpha_now)
                final_result = final_result+result
            most_attended = tf.argmax(final_result,axis=1)
            most_attended = tf.reshape(tf.cast(most_attended,dtype=tf.float32),shape=[-1,1])
            final_result = tf.tile(tf.reshape(final_result,[-1,seq_shape[1],1]),[1,1,seq_shape[2]])
            to_return = tf.reduce_sum(tf.multiply(final_result,sequence),axis=1)
            return to_return, most_attended
And this is the RNN with the loss network:
to_write_one_hot = tf.placeholder(dtype=tf.float32,shape=(None,line_length,dict_length))
sequence = tf.placeholder(dtype=tf.float32,shape=(None,None,3))
sequence_shift = tf.placeholder(dtype=tf.float32,shape=(None,None,3))
bias = tf.placeholder(shape=[1],dtype=tf.float32)
sequence_length = tf.placeholder(shape=(None),dtype=tf.int32)
forget_bias_placeholder = tf.placeholder(shape=(None),dtype=tf.float32)
graves_cell = Custom_Cell(forget_bias=1,one_hot_vector=to_write_one_hot,hidden_layer_nums=hidden_layer_nums,mixture_num=mixture_num,bias=bias,attention_num=attention_num)
output, state = tf.nn.dynamic_rnn(graves_cell,sequence,dtype=tf.float32,sequence_length=sequence_length)
with tf.name_scope("loss_layer"):
    mask = tf.sign(tf.reduce_max(tf.abs(output), 2))
    most_attended, binomial_param, pi, mu, sigma, rho = tf.split(output,[1,1,mixture_num,2*mixture_num,2*mixture_num,mixture_num], axis=2)
    pi = tf.split(pi,mixture_num,axis=2)
    mu = tf.split(mu,mixture_num,axis=2)
    sigma = tf.split(sigma,mixture_num,axis=2)
    rho = tf.split(rho,mixture_num,axis=2)
    negative_log_likelihood = 0
    probability = 0
    x1, x2, e = tf.split(sequence_shift,3,axis=2)
    for i in range(mixture_num):
        pi_now = pi[i]
        mu_now = tf.split(mu[i],2,axis=2)
        mu_1 = mu_now[0]
        mu_2 = mu_now[1]
        sigma_now = tf.split(sigma[i],2,axis=2)
        sigma_1 = sigma_now[0] + (1-tf.reshape(mask, [-1,max_len,1]))
        sigma_2 = sigma_now[1] + (1-tf.reshape(mask, [-1,max_len,1]))
        rho_now = rho[i]
        Z = tf.divide(tf.square(x1-mu_1),tf.square(sigma_1)) + tf.divide(tf.square(x2-mu_2),tf.square(sigma_2)) - tf.divide(tf.multiply(tf.multiply(x1-mu_1,x2-mu_2),2*rho_now),tf.multiply(sigma_1,sigma_2))
        prob = tf.exp(tf.div(tf.negative(Z),2*(1-tf.square(rho_now))))
        Normalizing_factor = 2*np.pi*tf.multiply(sigma_1,sigma_2)
        Normalizing_factor = tf.multiply(Normalizing_factor,tf.sqrt(1-tf.square(rho_now)))
        prob = tf.divide(prob,Normalizing_factor)
        prob = tf.multiply(pi_now,prob)
        probability = probability + prob
    binomial_likelihood = tf.multiply(binomial_param,e) + tf.multiply(1-binomial_param,1-e)
    probability = tf.multiply(probability,binomial_likelihood)
    probability = probability + (1-tf.reshape(mask,[-1,max_len,1]))
    temp_tensor = tf.multiply(mask, tf.log(tf.reshape(probability,[-1,max_len]) + mask*0.00001))
    negative_log_likelihood_0 = tf.negative(tf.reduce_sum(temp_tensor,axis=1))
    negative_log_likelihood_1 = tf.divide(negative_log_likelihood_0,tf.reshape(tf.cast(sequence_length, dtype=tf.float32), shape=[-1,1]))
    negative_log_likelihood_1 = tf.reduce_mean(negative_log_likelihood_1)
    tf.summary.scalar("average_per_timestamp_log_likelihood", negative_log_likelihood_1)
    negative_log_likelihood = tf.reduce_mean(negative_log_likelihood_0)
with tf.name_scope("train_op"):
    optimizer = tf.train.RMSPropOptimizer(learning_rate=0.0001,momentum=0.9, decay=0.95,epsilon=0.0001)
    gvs = optimizer.compute_gradients(negative_log_likelihood)
    capped_gvs = []
    for grad, var in gvs:
        if var.name.__contains__("rnn"):
            capped_gvs.append((tf.clip_by_value(grad,-10,10),var))
        else:
            capped_gvs.append((tf.clip_by_value(grad,-100,100),var))
    train_op = optimizer.apply_gradients(capped_gvs)
Edit 1. I discovered that I was clipping gradients in the wrong way; the correct way is to introduce a new 'op', as explained in https://github.com/tensorflow/tensorflow/issues/2793, that clips only the output gradients of the whole network and of the lstm cells.
@tf.custom_gradient
def clip_gradient(x, clip):
    def grad(dresult):
        return [tf.clip_by_norm(dresult, clip)]
    return x, grad
Add the lines above to your code and use the function on any variable whose gradient you want to clip during backpropagation!
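For example, a hypothetical use on a hidden-state tensor h, mirroring the ±10 clipping used for the lstm outputs:
h = clip_gradient(h, 10.0)  # forward pass returns h unchanged; the backward pass clips the incoming gradient's norm to 10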
I still have to wait and see my results.
Edit 2.
The changed model code is:
from tensorflow.contrib.rnn import RNNCell
from tensorflow.contrib.rnn import LSTMCell
from tensorflow.contrib.rnn import LSTMStateTuple
import tensorflow as tf
import numpy as np

@tf.custom_gradient
def clip_gradient_lstm(x):
    def grad(dresult):
        return [tf.clip_by_value(dresult,-10,10)]
    return x, grad

@tf.custom_gradient
def clip_gradient_output(x):
    def grad(dresult):
        return [tf.clip_by_value(dresult,-100,100)]
    return x, grad

def length_of(seq):
    used = tf.sign(tf.reduce_max(tf.abs(seq),axis=2))
    length = tf.reduce_sum(used,1)
    length = tf.cast(length,tf.int32)
    return length

class Custom_Cell(RNNCell):
    def __init__(self,forget_bias,bias,one_hot_vector, hidden_layer_nums=[700,700,700], mixture_num=10, attention_num=4):
        self.bias = bias
        self.lstms = []
        for i in hidden_layer_nums:
            self.lstms.append(LSTMCell(num_units=i, initializer=tf.truncated_normal_initializer(0.075), dtype=tf.float32, forget_bias=forget_bias))
        self.attention_num = attention_num
        self.mixture_num = mixture_num
        self.state_size = 2*sum(hidden_layer_nums) + 3*self.attention_num
        self.attention_var_num = 3*self.attention_num
        self.output_size = 6*self.mixture_num + 1 + 1
        self.one_hot_vector = one_hot_vector
        self.lstm_num = len(hidden_layer_nums)
        self.hidden_layer_nums = hidden_layer_nums
        temp_shape = self.one_hot_vector.shape
        self.char_num = temp_shape[2]
        self.i_to_h = []
        self.w_to_h = []
        self.h_to_h = []
        self.prev_h_to_h = []
        self.lstm_bias = []
        self.lstm_to_attention_weights = tf.get_variable("lstms/first_to_attention_mtrx",shape=[hidden_layer_nums[0],self.attention_var_num],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True)
        self.lstm_to_attention_bias = tf.get_variable("lstms/first_to_attention_bias",shape=[self.attention_var_num],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True)
        self.all_to_output_mtrx = []
        for i in range(self.lstm_num):
            self.all_to_output_mtrx.append( tf.get_variable("lstms/to_output_mtrx_" + str(i), shape=[hidden_layer_nums[i],self.output_size-1],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
        self.all_to_output_bias = tf.get_variable("lstms/output_bias",shape=[self.output_size-1],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True)
        for i in range(self.lstm_num):
            self.i_to_h.append(tf.get_variable("lstms/i_to_h_"+str(i),shape=[3,hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            self.w_to_h.append(tf.get_variable("lstms/w_to_h_"+str(i),shape=[self.char_num,hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            self.h_to_h.append(tf.get_variable("lstms/h_to_h_"+str(i),shape=[hidden_layer_nums[i],hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            self.lstm_bias.append(tf.get_variable("lstms/bias_" + str(i),shape=[hidden_layer_nums[i]],dtype=tf.float32,initializer=tf.truncated_normal_initializer(stddev=0.075),trainable=True))
            if not i == 0:
                self.prev_h_to_h.append(
                    tf.get_variable("lstms/prev_h_to_h_" + str(i), shape=[hidden_layer_nums[i-1], hidden_layer_nums[i]],
                                    dtype=tf.float32, initializer=tf.truncated_normal_initializer(stddev=0.075),
                                    trainable=True))

    def __call__(self, inputs, state, scope=None):
        # Extracting previous configuration and vectors
        splitarray = []
        for i in self.hidden_layer_nums:
            splitarray.append(i)
            splitarray.append(i)
        splitarray.append(3*self.attention_num)
        splitted = tf.split(state,splitarray,axis=1)
        prev_tuples = []
        for i in range(self.lstm_num):
            newtuple = LSTMStateTuple(splitted[2*i],splitted[2*i + 1])
            prev_tuples.append(newtuple)
        prev_attention_vec = splitted[2*self.lstm_num]
        new_attention_vec = 0
        next_states = []
        most_attended = 0
        last_output = 0
        for i in range(self.lstm_num):
            prev_c, prev_h = prev_tuples[i]
            cell = self.lstms[i]
            if i == 0:
                with tf.name_scope("layer_1"):
                    w, most_attended = self.gaussian_attention(self.one_hot_vector,prev_attention_vec)
                    input_vec = tf.matmul(inputs,self.i_to_h[0]) + tf.matmul(prev_h,self.h_to_h[0]) + tf.matmul(w,self.w_to_h[0]) + self.lstm_bias[0]
                    _, new_state = cell(input_vec, prev_tuples[0])
                    new_c, new_h = new_state
                    new_h = clip_gradient_lstm(new_h)
                    next_states.append(new_c)
                    next_states.append(new_h)
                    last_output = tf.matmul(new_h,self.all_to_output_mtrx[0])
                with tf.name_scope("attention_layer"):
                    temp_attention = tf.matmul(new_h,self.lstm_to_attention_weights) + self.lstm_to_attention_bias
                    new_alpha, new_beta, new_kappa = tf.split(temp_attention,[self.attention_num,self.attention_num,self.attention_num],axis=1)
                    old_alpha, old_beta, old_kappa = tf.split(prev_attention_vec,[self.attention_num,self.attention_num,self.attention_num], axis=1)
                    new_alpha = tf.exp(new_alpha)
                    new_beta = tf.exp(new_beta)
                    new_kappa = tf.exp(new_kappa) + old_kappa
                    new_attention_vec = tf.concat([new_alpha,new_beta,new_kappa],axis=1)
            else:
                with tf.name_scope("layer_" + str(i)):
                    w, most_attended = self.gaussian_attention(self.one_hot_vector,new_attention_vec)
                    input_vec = tf.matmul(inputs,self.i_to_h[i]) + tf.matmul(next_states[-1],self.prev_h_to_h[i-1]) + tf.matmul(prev_h,self.h_to_h[i]) + tf.matmul(w,self.w_to_h[i]) + self.lstm_bias[i]
                    _,new_state = cell(input_vec,prev_tuples[i])
                    new_c, new_h = new_state
                    new_h = clip_gradient_lstm(new_h)
                    next_states.append(new_c)
                    next_states.append(new_h)
                    last_output = last_output + tf.matmul(new_h, self.all_to_output_mtrx[i])
        with tf.name_scope("output"):
            last_output = last_output + self.all_to_output_bias
            last_output = clip_gradient_output(last_output)
        next_states.append(new_attention_vec)
        state_to_return = tf.concat(next_states,axis=1)
        output_split_param = [1,self.mixture_num,2*self.mixture_num,2*self.mixture_num,self.mixture_num]
        binomial_param, pi, mu, sigma, rho = tf.split(last_output,output_split_param,axis=1)
        binomial_param = tf.divide(1.,1.+tf.exp(binomial_param))
        pi = tf.nn.softmax(tf.multiply(pi,1.+self.bias),axis=1)
        mu = mu
        sigma = tf.exp(sigma-self.bias)
        rho = tf.tanh(rho)
        output_to_return = tf.concat([most_attended, binomial_param, pi, mu, sigma, rho],axis=1)
        return output_to_return, state_to_return

    def state_size(self):
        return self.state_size

    def output_size(self):
        return self.output_size

    def gaussian_attention(self,sequence,params):
        with tf.name_scope("attention_calculation"):
            alpha, beta, kappa = tf.split(params,[self.attention_num,self.attention_num,self.attention_num],axis=1)
            seq_shape = sequence.shape
            seq_length = seq_shape[1]
            temp_vec = np.asarray(range(seq_length),dtype=float)
            final_result = 0
            alpha = tf.split(alpha,self.attention_num,1)
            beta = tf.split(beta,self.attention_num,1)
            kappa = tf.split(kappa,self.attention_num,1)
            for i in range(self.attention_num):
                alpha_now = alpha[i]
                beta_now = beta[i]
                kappa_now = kappa[i]
                result = kappa_now - temp_vec
                result = tf.multiply(tf.square(result),tf.negative(beta_now))
                result = tf.multiply(tf.exp(result),alpha_now)
                final_result = final_result+result
            most_attended = tf.argmax(final_result,axis=1)
            most_attended = tf.reshape(tf.cast(most_attended,dtype=tf.float32),shape=[-1,1])
            final_result = tf.tile(tf.reshape(final_result,[-1,seq_shape[1],1]),[1,1,seq_shape[2]])
            to_return = tf.reduce_sum(tf.multiply(final_result,sequence),axis=1)
            return to_return, most_attended
And the training is done by:
with tf.name_scope("train_op"):
optimizer =
tf.train.RMSPropOptimizer(learning_rate=0.0001,momentum=0.9, decay=0.95,epsilon=0.0001,centered=True)
train_op = optimizer.minimize(negative_log_likelihood)
Right now it is still training, but the loss is already as low as -10.

Keras Input layer (None, 200, 3): why is there None? Expected input to have 3 dimensions, but got array with shape (200, 3)

The acc and gyro arrays passed to model.fit are 200 × 3, and in the Input layer the shape is (200, 3). Why is there such a problem? Error when checking input: expected acc_input to have 3 dimensions, but got array with shape (200, 3).
Here's my code:
WIDE = 20
FEATURE_DIM = 30
CHANNEL = 1
CONV_NUM = 64
CONV_LEN = 3
CONV_LEN_INTE = 3#4
CONV_LEN_LAST = 3#5
CONV_NUM2 = 64
CONV_MERGE_LEN = 8
CONV_MERGE_LEN2 = 6
CONV_MERGE_LEN3 = 4
rnn_size=128
acc_input_tensor = Input(shape=(200,3),name = 'acc_input')
gyro_input_tensor = Input(shape=(200,3),name= 'gyro_input')
Acc_input_tensor = Reshape(target_shape=(20,30,1))(acc_input_tensor)
Gyro_input_tensor = Reshape(target_shape=(20,30,1))(gyro_input_tensor)
acc_conv1 = Conv2D(CONV_NUM,(1, 1*3*CONV_LEN),strides= (1,1*3),padding='valid',activation=None)(Acc_input_tensor)
acc_conv1 = BatchNormalization(axis=1)(acc_conv1)
acc_conv1 = Activation('relu')(acc_conv1)
acc_conv1 = Dropout(0.2)(acc_conv1)
acc_conv2 = Conv2D(CONV_NUM,(1,CONV_LEN_INTE),strides= (1,1),padding='valid',activation=None)(acc_conv1)
acc_conv2 = BatchNormalization(axis=1)(acc_conv2)
acc_conv2 = Activation('relu')(acc_conv2)
acc_conv2 = Dropout(0.2)(acc_conv2)
acc_conv3 = Conv2D(CONV_NUM,(1,CONV_LEN_LAST),strides=(1,1),padding='valid',activation=None)(acc_conv2)
acc_conv3 = BatchNormalization(axis=1)(acc_conv3)
acc_conv3 = Activation('relu')(acc_conv3)
acc_conv3 = Dropout(0.2)(acc_conv3)
gyro_conv1 = Conv2D(CONV_NUM,(1, 1*3*CONV_LEN),strides=(1,1*3),padding='valid',activation=None)(Gyro_input_tensor)
gyro_conv1 = BatchNormalization(axis=1)(gyro_conv1)
gyro_conv1 = Activation('relu')(gyro_conv1)
gyro_conv1 = Dropout(0.2)(gyro_conv1)
gyro_conv2 = Conv2D(CONV_NUM,(1, CONV_LEN_INTE),strides=(1,1),padding='valid',activation=None)(gyro_conv1)
gyro_conv2 = BatchNormalization(axis=1)(gyro_conv2)
gyro_conv2 = Activation('relu')(gyro_conv2)
gyro_conv2 = Dropout(0.2)(gyro_conv2)
gyro_conv3 = Conv2D(CONV_NUM,(1, CONV_LEN_LAST),strides=(1,1),padding='valid',activation=None)(gyro_conv2)
gyro_conv3 = BatchNormalization(axis=1)(gyro_conv3)
gyro_conv3 = Activation('relu')(gyro_conv3)
gyro_conv3 = Dropout(0.2)(gyro_conv3)
sensor_conv_in = concatenate([acc_conv3, gyro_conv3], 2)
sensor_conv_in = Dropout(0.2)(sensor_conv_in)
sensor_conv1 = Conv2D(CONV_NUM2,kernel_size=(2, CONV_MERGE_LEN),padding='SAME')(sensor_conv_in)
sensor_conv1 = BatchNormalization(axis=1)(sensor_conv1)
sensor_conv1 = Activation('relu')(sensor_conv1)
sensor_conv1 = Dropout(0.2)(sensor_conv1)
sensor_conv2 = Conv2D(CONV_NUM2,kernel_size=(2, CONV_MERGE_LEN2),padding='SAME')(sensor_conv1)
sensor_conv2 = BatchNormalization(axis=1)(sensor_conv2)
sensor_conv2 = Activation('relu')(sensor_conv2)
sensor_conv2 = Dropout(0.2)(sensor_conv2)
sensor_conv3 = Conv2D(CONV_NUM2,kernel_size=(2, CONV_MERGE_LEN3),padding='SAME')(sensor_conv2)
sensor_conv3 = BatchNormalization(axis=1)(sensor_conv3)
sensor_conv3 = Activation('relu')(sensor_conv3)
conv_shape = sensor_conv3.get_shape()
print conv_shape
x1 = Reshape(target_shape=(int(conv_shape[1]), int(conv_shape[2]*conv_shape[3])))(sensor_conv3)
x1 = Dense(64, activation='relu')(x1)
gru_1 = GRU(rnn_size, return_sequences=True, init='he_normal', name='gru1')(x1)
gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, init='he_normal', name='gru1_b')(x1)
gru1_merged = merge([gru_1, gru_1b], mode='sum')
gru_2 = GRU(rnn_size, return_sequences=True, init='he_normal', name='gru2')(gru1_merged)
gru_2b = GRU(rnn_size, return_sequences=True, go_backwards=True, init='he_normal', name='gru2_b')(gru1_merged)
x = merge([gru_2, gru_2b], mode='concat')
x = Dropout(0.25)(x)
n_class=2
x = Dense(n_class)(x)
model = Model(input=[acc_input_tensor,gyro_input_tensor], output=x)
model.compile(loss='mean_squared_error',optimizer='adam')
model.fit(x=[acc, gyro], y=labels, batch_size=1, validation_split=0.2, epochs=2, verbose=1, shuffle=False)
Shape (None, 200, 3) is used in Keras for batches: None stands for the batch size, because at the time of creating or reshaping the input arrays the batch size might be unknown. If you use batch_size = 128, your batch input matrix will have shape (128, 200, 3).
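So the arrays passed to fit() need a leading sample axis. A minimal sketch (assuming acc, gyro and labels are NumPy arrays where each sample is one 200 × 3 recording):
import numpy as np
acc = np.asarray(acc).reshape(-1, 200, 3)   # (num_samples, 200, 3)
gyro = np.asarray(gyro).reshape(-1, 200, 3)
# a single (200, 3) recording gets its batch axis like this:
one_batch = acc[0][np.newaxis, ...]         # shape (1, 200, 3)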

HTML5 canvas drawing, allow multiple touch inputs

I would like to create an web based guestbook for an 84" multi-touch device. I know how to create a canvas which allows a user to draw, but I wonder if the same is possible for multiple users? I have found some information on html5 multi-touch input, but all of it is related to gestures.
Since the surface is large enough and the device supports it, ideally I would like for several users to draw something onto the canvas simultaneously.
Here is the whole program. Nothing extra is needed to allow multiple touch inputs: you only need to access event.changedTouches.
Navigate to the demo with a mobile device and test it.
I also made a button click detection example (HUB BUTTON).
Source: https://github.com/zlatnaspirala/multi-touch-canvas-handler/blob/master/index.html
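Before the full program, here is a minimal sketch of simultaneous drawing, assuming a full-page <canvas id="pad"> element: each active finger is tracked by its touch.identifier, so several users can draw at the same time:
var pad = document.getElementById('pad');
var c = pad.getContext('2d');
var last = {}; // touch.identifier -> previous point for that finger
pad.addEventListener('touchstart', function (e) {
  for (var i = 0; i < e.changedTouches.length; i++) {
    var t = e.changedTouches[i];
    last[t.identifier] = { x: t.pageX, y: t.pageY };
  }
  e.preventDefault();
}, false);
pad.addEventListener('touchmove', function (e) {
  for (var i = 0; i < e.changedTouches.length; i++) {
    var t = e.changedTouches[i];
    var p = last[t.identifier];
    c.beginPath();             // draw one segment per finger per move event
    c.moveTo(p.x, p.y);
    c.lineTo(t.pageX, t.pageY);
    c.stroke();
    last[t.identifier] = { x: t.pageX, y: t.pageY };
  }
  e.preventDefault();
}, false);
pad.addEventListener('touchend', function (e) {
  for (var i = 0; i < e.changedTouches.length; i++) {
    delete last[e.changedTouches[i].identifier];
  }
}, false);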
function MOBILE_CONTROL () {
this.X = null;
this.Y = null;
this.LAST_X_POSITION = null;
this.LAST_Y_POSITION = null;
this.MULTI_TOUCH = 'NO';
this.MULTI_TOUCH_X1 = null;
this.MULTI_TOUCH_X2 = null;
this.MULTI_TOUCH_X3 = null;
this.MULTI_TOUCH_X4 = null;
this.MULTI_TOUCH_X5 = null;
this.MULTI_TOUCH_Y1 = null;
this.MULTI_TOUCH_Y2 = null;
this.MULTI_TOUCH_Y3 = null;
this.MULTI_TOUCH_Y4 = null;
this.MULTI_TOUCH_Y5 = null;
this.MULTI_TOUCH_X6 = null;
this.MULTI_TOUCH_X7 = null;
this.MULTI_TOUCH_X8 = null;
this.MULTI_TOUCH_X9 = null;
this.MULTI_TOUCH_X10 = null;
this.MULTI_TOUCH_Y6 = null;
this.MULTI_TOUCH_Y7 = null;
this.MULTI_TOUCH_Y8 = null;
this.MULTI_TOUCH_Y9 = null;
this.MULTI_TOUCH_Y10 = null;
this.MULTI_TOUCH_INDEX = 1;
this.SCREEN = [window.innerWidth , window.innerHeight];
this.SCREEN.W = this.SCREEN[0];
this.SCREEN.H = this.SCREEN[1];
//Application general
this.AUTOR = "Nikola Lukic";
this.APPLICATION_NAME = "TestApplication";
this.SET_APPLICATION_NAME = function (value) {
if (typeof value != 'string')
throw 'APPLICATION_NAME must be a string!';
if (value.length < 2 || value.length > 20)
throw 'APPLICATION_NAME must be 2-20 characters long!';
this.APPLICATION_NAME = value;
};
this.APP = function(){/*construct*/};
this.APP.BODY = document.getElementsByTagName('body')[0];
this.APP.BODY.SET_STYLE = function (string) {this.style = string;}
this.APP.BODY.SET_BACKGROUND_COLOR = function (color) {this.style.backgroundColor = color;}
this.APP.BODY.SET_COLOR = function (color) {this.style.Color = color;}
this.APP.BODY.ADD_DIV = function (id , style , innerHTML) {
var n = document.createElement('div');
var divIdName = id;
n.setAttribute('id',divIdName);
n.setAttribute('style',style);
n.innerHTML = innerHTML;
this.appendChild(n);
};
this.APP.BODY.ADD_2DCANVAS = function (id,width_,height_) {
var n = document.createElement('canvas');
var divIdName = id;
n.setAttribute('id',divIdName);
n.setAttribute('width',width_);
n.setAttribute('height',height_);
//n.innerHTML = 'Element is here';
this.appendChild(n);
};
console.log('<MOBILE_CONTROL JavaScript class>');
console.log('status:MOBILE_CONTROL FINISH LOADING');
console.log('EASY_IMPLEMENTATION!');
}
function CANVAS_DRAW() {
var run = true;
var timer = null;
window.addEventListener('touchstart', MDPORT, false);
function MDPORT(){
if (run) {
clearInterval(timer);
run = false;
} else {
timer = setInterval(makeHarmonograph, 1);
run = true;
}
}
var A1 = 200, f1 = 2, p1 = 1/16, d1 = 0.02;
var A2 = 10, f2 = 4, p2 = 3 / 2, d2 = 0.0315;
var A3 = 200, f3 = 4, p3 = 13 / 15, d3 = 0.00012;
var A4 = 10, f4 = 4, p4 = 1, d4 = 0.012;
var r = 10, g =10, b = 0;
var UPDOWN = 1,CONTROL_WIDTH = 0;
var ctx = document.getElementById("canvas").getContext("2d");
setInterval(randomColor, 5000);
timer = setInterval(makeHarmonograph, 1);
function randomColor() {
r = Math.floor(Math.random() * 256);
g = Math.floor(Math.random() * 256);
b = Math.floor(Math.random() * 256);
}
function makeHarmonograph() {
f1 = (f1 / 10) % 10;
f2 = (f2 / 40) % 10;
f3 = (f3 + Math.random() / 80) % 10;
f4 = (f4 + Math.random() / 411) % 10;
p1 += 0.5 % (Math.PI*2)
drawHarmonograph();
}
function drawHarmonograph() {
ctx.clearRect(0, 0, 850, 450);
ctx.save();
ctx.fillStyle = "#000000";
ctx.strokeStyle = "rgb(" + r + "," + g + "," + b + ")";
ctx.fillRect(0, 0, 1100, 400);
ctx.translate(511, 0);
ctx.rotate(1.57);
///////////
// Draw guides
ctx.strokeStyle = '#09f';
ctx.lineWidth = A1 ;
ctx.translate(111, 1);
ctx.rotate(0.01);
if (CONTROL_WIDTH == 0) { UPDOWN=UPDOWN+1;}
if (UPDOWN>51){CONTROL_WIDTH=1; }
if (CONTROL_WIDTH == 0) { UPDOWN=UPDOWN-0.1;}
if (UPDOWN<1){CONTROL_WIDTH=0; }
// Set line styles
ctx.strokeStyle = '#099909';
ctx.lineWidth = UPDOWN;
// check input
ctx.miterLimit = UPDOWN;
// Draw lines
ctx.beginPath();
ctx.moveTo(111,100);
for (i=0;i<121;i++){
var dy = i%2==0 ? 25 : -25 ;
ctx.lineTo(Math.pow(i,1.5)*2,75+dy);
}
ctx.stroke();
return false;
ctx.translate(444, 333);
ctx.fillStyle='lime';
ctx.font="30px Arial";
ctx.fillText("Overlapping harmonics with JavaScript, wait secund.",5,25);
ctx.stroke();
}
}
function CANVAS_DRAW_1(){
var run = true;
var timer = null;
timer = setInterval(makeHarmonograph, 1);
run = true;
var A1 = 200, f1 = 2, p1 = 1/16, d1 = 0.02;
var A2 = 10, f2 = 4, p2 = 3 / 2, d2 = 0.0315;
var A3 = 200, f3 = 4, p3 = 13 / 15, d3 = 0.00012;
var A4 = 10, f4 = 4, p4 = 1, d4 = 0.012;
var r = 10, g =10, b = 0;
var UPDOWN = 1,CONTROL_WIDTH = 0;
var ctx = document.getElementById("canvas_1").getContext("2d");
setInterval(randomColor, 5000);
timer = setInterval(makeHarmonograph, 1);
function randomColor() {
r = Math.floor(Math.random() * 256);
g = Math.floor(Math.random() * 256);
b = Math.floor(Math.random() * 256);
}
function makeHarmonograph() {
f1 = (f1 / 10) % 10;
f2 = (f2 / 40) % 10;
f3 = (f3 + Math.random() / 80) % 10;
f4 = (f4 + Math.random() / 411) % 10;
p1 += 0.5 % (Math.PI*2)
drawHarmonograph();
}
function drawHarmonograph() {
ctx.clearRect(0, 0, 850, 450);
ctx.save();
ctx.fillStyle = "#000000";
ctx.strokeStyle = "rgb(" + r + "," + g + "," + b + ")";
ctx.fillRect(0, 0, 800, 400);
ctx.translate(0, 250);
ctx.beginPath();
if (A1 > 100) {}
for (var t = 0; t < 100; t+=0.1) {
var x = A1 * Math.sin(f1 * t + Math.PI * p1) * Math.exp(-d1 * t) + A2 * Math.sin(f2 * t + Math.PI * p2) * Math.exp(-d2 * t);
var y = A3 * Math.sin(f3 * t + Math.PI * p1) * Math.exp(-d3 * t) + A4 * Math.sin(f4 * t + Math.PI * p4) * Math.exp(-d4 * t);
//ctx.lineTo(x*x, y/x);
ctx.lineTo(x*x+1, y+1/x);
}
ctx.stroke();
ctx.translate(A1, 0);
ctx.rotate(1.57);
ctx.beginPath();
for (var t = 0; t < 100; t+=0.1) {
var x = A1 * A3* Math.sin(f1 * t + Math.PI * p1) * Math.exp(-d1 * t) + A2 * Math.sin(f2 * t + Math.PI * p2) * Math.exp(-d2 * t);
var y = A3 * Math.sin(f3 * t + Math.PI * p1) * Math.exp(-d3 * t) + A4 * Math.sin(f4 * t + Math.PI * p4) * Math.exp(-d4 * t);
ctx.lineTo(x*x+1, y+1/x);
}
ctx.stroke();
ctx.restore();
// Draw guides
ctx.strokeStyle = '#09f';
ctx.lineWidth = A1;
if (CONTROL_WIDTH == 0) { UPDOWN=UPDOWN+1;}
if (UPDOWN>51){CONTROL_WIDTH=1; }
if (CONTROL_WIDTH == 0) { UPDOWN=UPDOWN-0.1;}
if (UPDOWN<1){CONTROL_WIDTH=0; }
// Set line styles
ctx.strokeStyle = '#099909';
ctx.lineWidth = UPDOWN;
// check input
ctx.miterLimit = 5;
// Draw lines
ctx.beginPath();
ctx.moveTo(0,100);
for (i=0;i<124;i++){
var dy = i%2==0 ? 25 : -25 ;
ctx.lineTo(Math.pow(i,1.5)*2,75+dy);
}
ctx.stroke();
return false;
ctx.translate(A1, 210);
ctx.fillStyle='lime';
ctx.font="30px Arial";
ctx.fillText("Overlapping harmonics with JavaScript, wait secund.",5,25);
}
}
function CANVAS_DRAW_2(){
var timer = null;
var A1 = 200, f1 = 2, p1 = 1/16, d1 = 0.02;
var A2 = 10, f2 = 4, p2 = 3 / 2, d2 = 0.0315;
var A3 = 200, f3 = 4, p3 = 13 / 15, d3 = 0.00012;
var A4 = 10, f4 = 4, p4 = 1, d4 = 0.012;
var r = 10, g =10, b = 0;
var ctx = document.getElementById("canvas_2").getContext("2d");
setInterval(randomColor, 5000);
timer = setInterval(t, 1);
function randomColor() {
r = Math.floor(Math.random() * 256);
g = Math.floor(Math.random() * 256);
b = Math.floor(Math.random() * 256);
}
function t() {
r1();
}
function r1() {
ctx.clearRect(0, 0, CONTROL.SCREEN.W, CONTROL.SCREEN.H);
ctx.save();
ctx.strokeStyle = "rgb(" + r + "," + g + "," + b + ")";
ctx.fillStyle = "#000000";
ctx.fillRect(0, 0, CONTROL.SCREEN.W, CONTROL.SCREEN.H);
ctx.beginPath();
var x = CONTROL.X;
var y = CONTROL.Y;
ctx.fillStyle = "rgb(" + r + "," + g + "," + b + ")";
ctx.fillRect(x, y-400, 1, 2500);
ctx.fillRect(x-400, y, 2500, 1);
ctx.fillText(" ( targetX:" + CONTROL.X + " ( targetY: "+ CONTROL.Y + " ) ",x,y);
ctx.fillStyle = "#00FF00";
ctx.font="10px Arial";
ctx.fillText(" JavaScript ",x- CONTROL.SCREEN.W/3,y - CONTROL.SCREEN.H/3.4);
ctx.fillText(" welcome here , canvas example with MOBILE_TOUCH() ",x - CONTROL.SCREEN.W/3,y - CONTROL.SCREEN.H/3.2);
ctx.fillText(" no css files need, pure js ",x - CONTROL.SCREEN.W/3,y - CONTROL.SCREEN.H/3);
if (CONTROL.X > CONTROL.SCREEN.W/2.2 && CONTROL.X < CONTROL.SCREEN.W/2.2 + 300 && CONTROL.Y > CONTROL.SCREEN.H/2.2 && CONTROL.Y < CONTROL.SCREEN.H/2.2 + 100) {
ctx.strokeStyle = "lime";
}
else{
ctx.strokeStyle = "red";
}
ctx.strokeRect( CONTROL.SCREEN.W/2.2, CONTROL.SCREEN.H/2.2, 300, 100);
ctx.fillText(" HUB DETECT ", CONTROL.SCREEN.W/2, CONTROL.SCREEN.H/2);
if (CONTROL.MULTI_TOUCH_X1 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X1 , CONTROL.MULTI_TOUCH_Y1-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X1 -400 , CONTROL.MULTI_TOUCH_Y1 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X2 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X2 , CONTROL.MULTI_TOUCH_Y2-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X2 -400 , CONTROL.MULTI_TOUCH_Y2 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X3 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X3 , CONTROL.MULTI_TOUCH_Y3-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X3 -400 , CONTROL.MULTI_TOUCH_Y3 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X4 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X4 , CONTROL.MULTI_TOUCH_Y4-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X4 -400 , CONTROL.MULTI_TOUCH_Y4 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X5 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X5 , CONTROL.MULTI_TOUCH_Y5-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X5 -400 , CONTROL.MULTI_TOUCH_Y5 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X6 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X6 , CONTROL.MULTI_TOUCH_Y6-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X6 -400 , CONTROL.MULTI_TOUCH_Y6 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X7 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X7 , CONTROL.MULTI_TOUCH_Y7-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X7 -400 , CONTROL.MULTI_TOUCH_Y7 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X8 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X8 , CONTROL.MULTI_TOUCH_Y8-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X8 -400 , CONTROL.MULTI_TOUCH_Y8 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X9 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X9 , CONTROL.MULTI_TOUCH_Y9-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X9 -400 , CONTROL.MULTI_TOUCH_Y9 , 2500, 1);
}
if (CONTROL.MULTI_TOUCH_X10 !== null){
ctx.fillRect(CONTROL.MULTI_TOUCH_X10 , CONTROL.MULTI_TOUCH_Y10-400 , 1, 2500);
ctx.fillRect(CONTROL.MULTI_TOUCH_X10 -400 , CONTROL.MULTI_TOUCH_Y10 , 2500, 1);
}
// Draw lines
ctx.fillStyle='lime';
ctx.font="30px Arial";
ctx.fillText("MOBILE_TOUCH example ",5,25);
}
}
//definition
var CONTROL = new MOBILE_CONTROL();
//CONTROL.APP.BODY.ADD_2DCANVAS("canvas");
//CONTROL.APP.BODY.ADD_2DCANVAS("canvas_1");
CONTROL.APP.BODY.ADD_2DCANVAS("canvas_2",CONTROL.SCREEN.W,CONTROL.SCREEN.H);
CONTROL.APP.BODY.SET_STYLE('margin-left:-10px;padding:0;border:none;');
//CANVAS_DRAW();
//CANVAS_DRAW_1();
CANVAS_DRAW_2();
//<!-- SCREEN.prototype.sayHello = function(){alert ('hello' + "sss" );}; -->
//###################################################################
//EVENTS
//###################################################################
document.addEventListener('touchstart', function(event)
{
if (CONTROL.MULTI_TOUCH == 'NO') {
var touch = event.touches[0];
CONTROL.X = touch.pageX;
CONTROL.Y = touch.pageY;
console.log('TOUCH START AT:(X' + CONTROL.X + '),(' + CONTROL.Y + ')' );
}
else if (CONTROL.MULTI_TOUCH == 'YES') {
var touches_changed = event.changedTouches;
for (var i=0; i<touches_changed.length;i++) {
//CONTROL.MULTI_TOUCH_X1
console.log("multi touch : x" + CONTROL.MULTI_TOUCH_INDEX + ":(" +touches_changed[i].pageX + ")");
switch(CONTROL.MULTI_TOUCH_INDEX)
{
case 1:
CONTROL.MULTI_TOUCH_X1=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y1=touches_changed[i].pageY;
break;
case 2:
CONTROL.MULTI_TOUCH_X2=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y2=touches_changed[i].pageY;
break;
case 3:
CONTROL.MULTI_TOUCH_X3=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y3=touches_changed[i].pageY;
break;
case 4:
CONTROL.MULTI_TOUCH_X4=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y4=touches_changed[i].pageY;
break;
case 5:
CONTROL.MULTI_TOUCH_X5=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y5=touches_changed[i].pageY;
break;
case 6:
CONTROL.MULTI_TOUCH_X6=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y6=touches_changed[i].pageY;
break;
case 7:
CONTROL.MULTI_TOUCH_X7=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y7=touches_changed[i].pageY;
break;
case 8:
CONTROL.MULTI_TOUCH_X8=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y8=touches_changed[i].pageY;
break;
case 9:
CONTROL.MULTI_TOUCH_X9=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y9=touches_changed[i].pageY;
break;
case 10:
CONTROL.MULTI_TOUCH_X10=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y10=touches_changed[i].pageY;
break;
default:
//code to be executed if n is different from case 1 and 2
}
CONTROL.MULTI_TOUCH_INDEX = CONTROL.MULTI_TOUCH_INDEX + 1;
}
}
CONTROL.MULTI_TOUCH = 'YES';
},false);
////////////////////////////////////////////////////////
document.addEventListener('touchmove', function(event)
{
var touch = event.touches[0];
CONTROL.X = touch.pageX;
CONTROL.Y = touch.pageY;
console.log('TOUCH MOVE AT:(X' + CONTROL.X + '),(' + CONTROL.Y + ')' );
//#############
if (CONTROL.MULTI_TOUCH == 'YES') {
var touches_changed = event.changedTouches;
for (var i=0; i<touches_changed.length;i++) {
//CONTROL.MULTI_TOUCH_X1
console.log("multi touch : x" + CONTROL.MULTI_TOUCH_INDEX + ":(" +touches_changed[i].pageX + ")");
switch(i + 1)
{
case 1:
CONTROL.MULTI_TOUCH_X1=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y1=touches_changed[i].pageY;
break;
case 2:
CONTROL.MULTI_TOUCH_X2=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y2=touches_changed[i].pageY;
break;
case 3:
CONTROL.MULTI_TOUCH_X3=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y3=touches_changed[i].pageY;
break;
case 4:
CONTROL.MULTI_TOUCH_X4=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y4=touches_changed[i].pageY;
break;
case 5:
CONTROL.MULTI_TOUCH_X5=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y5=touches_changed[i].pageY;
break;
case 6:
CONTROL.MULTI_TOUCH_X6=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y6=touches_changed[i].pageY;
break;
case 7:
CONTROL.MULTI_TOUCH_X7=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y7=touches_changed[i].pageY;
break;
case 8:
CONTROL.MULTI_TOUCH_X8=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y8=touches_changed[i].pageY;
break;
case 9:
CONTROL.MULTI_TOUCH_X9=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y9=touches_changed[i].pageY;
break;
case 10:
CONTROL.MULTI_TOUCH_X10=touches_changed[i].pageX;
CONTROL.MULTI_TOUCH_Y10=touches_changed[i].pageY;
break;
default:
//code to be executed if n is different from case 1 and 2
}
}}
//#############
event.preventDefault();
},false);
////////////////////////////////////////////////////////
document.addEventListener('touchend', function(event)
{
CONTROL.LAST_X_POSITION = CONTROL.X;
CONTROL.LAST_Y_POSITION = CONTROL.Y;
CONTROL.MULTI_TOUCH = 'NO';
CONTROL.MULTI_TOUCH_INDEX = 1;
CONTROL.MULTI_TOUCH_X1 = null;
CONTROL.MULTI_TOUCH_X2 = null;
CONTROL.MULTI_TOUCH_X3 = null;
CONTROL.MULTI_TOUCH_X4 = null;
CONTROL.MULTI_TOUCH_X5 = null;
CONTROL.MULTI_TOUCH_X6 = null;
CONTROL.MULTI_TOUCH_X7 = null;
CONTROL.MULTI_TOUCH_X8 = null;
CONTROL.MULTI_TOUCH_X9 = null;
CONTROL.MULTI_TOUCH_X10 = null;
CONTROL.MULTI_TOUCH_Y1 = null;
CONTROL.MULTI_TOUCH_Y2 = null;
CONTROL.MULTI_TOUCH_Y3 = null;
CONTROL.MULTI_TOUCH_Y4 = null;
CONTROL.MULTI_TOUCH_Y5 = null;
CONTROL.MULTI_TOUCH_Y6 = null;
CONTROL.MULTI_TOUCH_Y7 = null;
CONTROL.MULTI_TOUCH_Y8 = null;
CONTROL.MULTI_TOUCH_Y9 = null;
CONTROL.MULTI_TOUCH_Y10 = null;
console.log('LAST TOUCH POSITION AT:(X' + CONTROL.X + '),(' + CONTROL.Y + ')' );
},false);
////////////////////////////////////////////////////////
document.addEventListener("touchcancel", function(event)
{
console.log('BREAK - LAST TOUCH POSITION AT:(X' + CONTROL.X + '),(' + CONTROL.Y + ')' );
}, false);
////////////////////////////////////////////////////////