I have some simple code that uses the min-max algorithm to locate birds. Everything works, but I don't find my programming very good and I believe there is a better solution. I'm not that experienced in RoR, but if somebody knows a better way to achieve the same result I'd be grateful ;).
There are two parts I hate: the four lists I had to create to determine the max or min value for the different combinations (the core of the min-max algorithm), and the very ugly SQL hack.
Thanks!
def index
  # fetch all our birds
  @birds = Bird.all
  # loop over the birds
  @birds.each do |bird|
    @fixed = Node.where("d7type = 'f'")
    xminmax = []
    xmaxmin = []
    yminmax = []
    ymaxmin = []
    @fixed.each do |fixed|
      rss = Log.find_by_sql("SELECT logs.fixed_mac, AVG(logs.blinker_rss) AS avg_rss FROM logs
            WHERE logs.blinker_mac = '#{bird.d7_mac}' AND logs.fixed_mac = '#{fixed.d7_mac}' ORDER BY logs.id DESC LIMIT 30")
      converted_rss = calculate_distance_rss(rss[0].attributes["avg_rss"])
      xminmax.push(fixed.xpos + converted_rss)
      xmaxmin.push(fixed.xpos - converted_rss)
      yminmax.push(fixed.ypos + converted_rss)
      ymaxmin.push(fixed.ypos - converted_rss)
    end
    pos = {x: (xminmax.min + xmaxmin.max) / 2, y: (yminmax.min + ymaxmin.max) / 2}
    puts pos
  end
end
A few things you could do to start with. First (assuming birds could be a large table): change Bird.all to

Bird.find_each do |bird|
  ... code ...
end

It's a more efficient way to loop over many table records, because it fetches them in batches.
Second: take @fixed = Node.where("d7type = 'f'") out of the each loop, since the query doesn't depend on any loop variables. Put it above the loop so it doesn't execute on every iteration.
Third (not so much an optimization, just safer code): your Log.find_by_sql looks simple enough to express in ActiveRecord. You can change it to:

rss = Log.select('fixed_mac, AVG(logs.blinker_rss) AS avg_rss, blinker_mac').
          where(blinker_mac: bird.d7_mac, fixed_mac: fixed.d7_mac).
          order('id DESC').limit(30)
converted_rss = calculate_distance_rss(rss.first.avg_rss)
Everything else looks fine.
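Putting all three suggestions together, the whole action might look roughly like this (an untested sketch; calculate_distance_rss and the column names come from your code, and note that average combined with limit builds a subquery in recent Rails versions, which is worth verifying on yours):

def index
  # The fixed-node query doesn't depend on the bird, so run it once.
  fixed_nodes = Node.where(d7type: 'f')

  # find_each fetches birds in batches instead of loading the whole table.
  Bird.find_each do |bird|
    xminmax, xmaxmin, yminmax, ymaxmin = [], [], [], []

    fixed_nodes.each do |fixed|
      # Average RSS over the 30 most recent log rows for this bird/node pair.
      avg_rss = Log.where(blinker_mac: bird.d7_mac, fixed_mac: fixed.d7_mac)
                   .order(id: :desc)
                   .limit(30)
                   .average(:blinker_rss)
      converted_rss = calculate_distance_rss(avg_rss)

      xminmax << fixed.xpos + converted_rss
      xmaxmin << fixed.xpos - converted_rss
      yminmax << fixed.ypos + converted_rss
      ymaxmin << fixed.ypos - converted_rss
    end

    pos = { x: (xminmax.min + xmaxmin.max) / 2,
            y: (yminmax.min + ymaxmin.max) / 2 }
    puts pos
  end
end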
I have worked through all the tutorials and searched for "load csv tensorflow" but just can't get the logic of it all. I'm not a total beginner, but I don't have much time to complete this, and I've been suddenly thrown into TensorFlow, which is unexpectedly difficult.
Let me lay it out:
Very simple CSV file of 184 columns that are all float numbers. A row is simply today's price, three buy signals, and the previous 180 days' prices.
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
This article covers how to load it pretty well: https://www.tensorflow.org/guide/datasets
It even has a section on converting to NumPy arrays, which is what I need to train and test the net. However, as the author says in the article leading to this page, it is pretty complex. It seems like everything is geared toward doing data manipulation, whereas we have already normalized our data (nothing has really changed in AI since 1983 in terms of inputs, outputs, and layers).
Here is a way to load it, but it doesn't get the data into NumPy, and there is no example of simply reading the data without manipulating it.
import csv

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1
I need to know how to get the CSV file into the
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
so that I can follow the tutorials to train and test the net.
Your question isn't entirely clear to me. If I understand correctly (tell me if I'm wrong), you're asking how to feed data into your model? There are several ways to do so.
Use placeholders with feed_dict during the session. This is the basic and easiest one, but it often suffers from training performance issues. For further explanation, check this post.
Use queues. Hard to implement and badly documented; I don't suggest it, because it has been superseded by the third method.
Use the tf.data API.
...
So, to answer your question with the first method:
# get your arrays outside the session
import csv
import numpy as np

with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)

close_col = dataset[:, 0]       # column 0: today's price
signal_cols = dataset[:, 1:4]   # columns 1-3: the three buy signals
previous_cols = dataset[:, 4:]  # columns 4-183: the previous 180 days

# let's say you load 100 rows each time for training
batch_size = 100

# define placeholders like you did
# (note: to feed batches, the placeholders need a batch dimension,
#  e.g. shape=[None, 3] instead of shape=[3])
...

with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]})
By the third method:

# retrieve your columns like before (wrap the loading code above in a function)
close_col, signal_cols, previous_cols = wrapper(filename)
# let's say you load 100 rows each time for training
batch_size = 100
# construct your input pipeline
batch = tf.data.Dataset.from_tensor_slices((close_col, signal_cols, previous_cols))
batch = batch.shuffle(close_col.shape[0]).batch(batch_size)  # mix data --> assemble batches --> ready to inject into the model
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
c_it, s_it, p_it = iterator.get_next()  # get-next-batch operation, evaluated automatically at each iteration within the session
# replace your close, signals, previous placeholders by c_it, s_it, p_it when you define your model
...
with tf.Session() as sess:
    # you need to run the variable and iterator initializers
    sess.run([tf.global_variables_initializer(), iter_init_operation])
    ...
    for i in range(number_iter):
        sess.run(train_operation)
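If you'd rather not go through NumPy at all, tf.data can also parse the CSV file directly. A rough sketch with the TF 1.x API (assuming the file has no header row and all 184 columns are floats):

import tensorflow as tf

record_defaults = [[0.0]] * 184  # one float default per column
dataset = tf.data.TextLineDataset('/BTC1.csv')
dataset = dataset.map(lambda line: tf.decode_csv(line, record_defaults))
# regroup the 184 parsed columns into (close, signals, previous)
dataset = dataset.map(lambda *cols: (cols[0], tf.stack(cols[1:4]), tf.stack(cols[4:])))
dataset = dataset.shuffle(1000).batch(batch_size)
iterator = dataset.make_initializable_iterator()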
Good luck!
I am trying to create my own discrete-time integrator in Simulink using the Bogacki-Shampine rule. The general formula for the rule (when the integrand x is only a function of time, with step size h) is:
y(n+1) = y(n) + (h/9)*(2*s1 + 3*s2 + 4*s3)
where:
s1 = x(t_n)
s2 = x(t_n + h/2)
s3 = x(t_n + 3*h/4)
which, shifted back one step, is equivalent to:
y(n) = y(n-1) + (h/9)*(2*s1 + 3*s2 + 4*s3)
where:
s1 = x(t_n - h)
s2 = x(t_n - h/2)
s3 = x(t_n - h/4)
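To sanity-check those weights outside Simulink, I integrated a signal with a known antiderivative (a quick script; it's Python rather than MATLAB, and the test signal cos(t) is just my arbitrary choice):

import math

h = 0.1  # step size
t = 0.0
y = 0.0  # running integral of x(t) = cos(t); the exact value is sin(t)

for n in range(100):
    s1 = math.cos(t)              # x(t_n)
    s2 = math.cos(t + h / 2)      # x(t_n + h/2)
    s3 = math.cos(t + 3 * h / 4)  # x(t_n + 3*h/4)
    y += (h / 9) * (2 * s1 + 3 * s2 + 4 * s3)
    t += h

print(y, math.sin(t))  # the two agree to about 1e-5, as expected for a 3rd-order rule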
Then I compared the results with the standard integrator block using the ode3 (Bogacki-Shampine) solver. The results were close to each other, but not as close as I expected.
Also, I am not sure that I built this integrator correctly. Since Bogacki-Shampine is third order, I thought I would need three unit delays, but two were enough.
How can I improve this, or create another one, to get more accurate results?
I want to avoid importing extra modules, as that is mostly what I have found while looking online. I am stuck on this bit of code and don't really know how to fix it or improve it. Here's what I've got so far.
def avg(lst):
    '''lst is a list that contains lists of numbers; the
    function prints, one per line, the average of each list'''
    for i[0:-1] in lst:
        return (sum(i[0:-1]))//len(i)
Again, I'm quite new and this for-loop jargon is quite confusing to me, so I'd appreciate help getting the output right: for a list of grade lists, it should print separate lines, each containing the average of one sublist. So if for lst I inserted grades = [[95,92,86,87], [66,54], [89,72,100], [33,0,0]], it would print 4 lines with the averages of those sublists. The sublists can contain any number of grades, but I can assume they are non-empty.
Edit 1: @jramirez, could you explain what your version does differently from mine? I don't doubt that it is better or that it works, but I still don't really understand how to recreate it myself... regardless, thank you.
I think this is what you want:
def grade_average(grades):
    for grade in grades:
        avg = 0
        for num in grade:
            avg += num
        avg = avg / len(grade)
        print("Average for " + str(grade) + " is = " + str(avg))

if __name__ == '__main__':
    grades = [[95,92,86,87], [66,54], [89,72,100], [33,0,0]]
    grade_average(grades)
Result:
Average for [95, 92, 86, 87] is = 90.0
Average for [66, 54] is = 60.0
Average for [89, 72, 100] is = 87.0
Average for [33, 0, 0] is = 11.0
Problems with your code: the extraneous indexing of i; the use of //, which truncates the average (use round if you want to round it); and the use of return inside the loop, which makes the function stop after the first average. Your docstring says 'print', but you return instead. That is actually a good thing: functions should not print the result they calculate, as that makes the answer inaccessible to further computation. Here is how I would write this, as a generator function.
def averages(gradelists):
    '''Yield the average of each gradelist.'''
    for glist in gradelists:
        yield sum(glist) / len(glist)

print(list(averages(
    [[95,92,86,87], [66,54], [89,72,100], [33,0,0]])))
[90.0, 60.0, 87.0, 11.0]
To return a list instead, change the body of the function to (beginner version)

ret = []
for glist in gradelists:
    ret.append(sum(glist) / len(glist))
return ret

or (more advanced, using a list comprehension)

return [sum(glist) / len(glist) for glist in gradelists]
However, I really recommend learning about iterators, generators, and generator functions (defined with yield).
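For example, printing one average per line, as your docstring describes, is then a one-line loop over the generator:

for avg in averages([[95,92,86,87], [66,54], [89,72,100], [33,0,0]]):
    print(avg)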
I have a problem with a function in MATLAB. This specific function is for filtering light signals. Below I've added the code I use in the function and in the while loop itself. The code is written for an NXT Lego robot.
Is there any tip on how to get the count variable (i = i + 1) to work with the function, so we can plot light(i)? We get a bunch of error messages whenever we try different ways to make it work.
function [light] = filter_func(i)
    lightI(i) = GetLight(SENSOR_3);
    if i == 1
        light(i) = lightI(i)
    elseif i == 2
        light(i) = 0.55*lightI(i) + 0.45*lightI(i-1)
    else
        light(i) = 0.4*lightI(i) + 0.3*lightI(i-1) + 0.3*lightI(i-2);
    end
end
i = 1
while true
    lightI(i) = GetLight(SENSOR_3); % gets a light value between 0 and 1024
    if i > 2
        light = filter_func(i)
        light = round(light);
    else
        light(i) = GetLight(SENSOR_3);
    end
    i = 1 + i
    plot(light(end-90:end), 'r-');
    title('Lightvalue')
    axis([0 100 0 1023]);
end
You probably mainly get errors because you are not allowed to mix scripts and functions like this in MATLAB (the way you can in Python).
Also, your filter function is only called when i > 2, so why does it handle the first two cases at all? And it seems like you want lightI to be a global variable, but that is not what you have done: the lightI inside the function is not the same as the one in the while loop.
Since your while loop runs forever, maybe you don't need to worry about updating the plot the first two times. In that case you can do this:
filter = [0.3 0.3 0.4]'; % weights oldest-to-newest, matching your 0.3, 0.3, 0.4
latest_filtered_light = nan(90,1);
lightI = [];
p = plot(latest_filtered_light, 'r-');
title('Lightvalue')
axis([0 100 0 1023]);
while true
    lightI(end+1,1) = rand*1024; % replace with GetLight(SENSOR_3) on the robot
    if numel(lightI) >= 3
        new_val = lightI(end-2:end,1)'*filter; % weighted sum of the 3 newest samples
        latest_filtered_light = [latest_filtered_light(2:end); ...
                                 new_val];
        set(p, 'ydata', latest_filtered_light) % update the existing line
        drawnow
    end
end
An important point is to not call plot on every iteration; update the existing line's data instead, at least if you are at all concerned about performance.
I'm new to SQLAlchemy and could use some help.
I'm trying to write a small application for which I have to build a select statement dynamically. So I do s = select([files]) and then add filters with s = s.where(files.c.createtime.between(val1, val2)).
This works great, but only with an AND conjunction.
So when I want all entries with createtime between 1 Jan 2009 and 1 Feb 2009, OR createtime == 5 Feb 2009, the problem is that I don't know how to achieve this with separate filter calls. Because of the program's logic it is not possible to write it in one call like s = s.where(or_(files.c.createtime.between(val1, val2), files.c.createtime == datetime(2009, 2, 5))).
Thanks in advance,
Christof
You can build OR clauses dynamically from lists:

from sqlalchemy import or_

clauses = []
if cond1:
    clauses.append(files.c.createtime.between(val1, val2))
if cond2:
    clauses.append(files.c.createtime == datetime(2009, 2, 1))
if clauses:
    s = s.where(or_(*clauses))
If you're willing to "cheat" by making use of the undocumented _whereclause attribute on Select objects, you can incrementally specify a series of OR terms by building a new query each time based on the previous query's where clause:
s = select([files]).where(literal(False)) # Start with an empty query.
s = select(s.froms).where(or_(s._whereclause,
files.c.createtime.between(val1, val2)))
s = select(s.froms).where(or_(s._whereclause,
files.c.createtime == datetime(2009, 2, 1)))
Building up a union is another option. This is a bit clunkier, but doesn't rely on undocumented attributes:
s = select([files]).where(literal(False)) # Start with an empty query.
s = s.select().union(
select([files]).where(files.c.createtime.between(val1, val2)))
s = s.select().union(
select([files]).where(files.c.createtime == datetime(2009, 2, 1)))
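For completeness, here is the list-based approach wired up end to end as a self-contained sketch (1.x-style select(); this toy table definition stands in for your real files table):

from datetime import datetime
from sqlalchemy import MetaData, Table, Column, Integer, DateTime, select, or_

metadata = MetaData()
files = Table('files', metadata,
              Column('id', Integer, primary_key=True),
              Column('createtime', DateTime))

val1, val2 = datetime(2009, 1, 1), datetime(2009, 2, 1)

# accumulate OR terms wherever the program logic decides to add them
clauses = []
clauses.append(files.c.createtime.between(val1, val2))
clauses.append(files.c.createtime == datetime(2009, 2, 5))

s = select([files])
if clauses:
    s = s.where(or_(*clauses))

print(s)  # ... WHERE createtime BETWEEN :createtime_1 AND :createtime_2 OR createtime = :createtime_3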