I would like to access the raw pixels in the OpenAI gym CartPole-v0 environment without opening a render window. How do I do this?
Example code:
import gym
env = gym.make("CartPole-v0")
env.reset()
img = env.render(mode='rgb_array', close=True) # Returns None
print(img)
img = env.render(mode='rgb_array', close=False)
# Opens annoying window, but gives me the array that I want
print(img.shape)
PS. I am having a hard time finding good documentation for OpenAI gym. Is it just me, or does it simply not exist?
Edit: I don't ever need to open the render window.
I was curious about the same thing, so I started looking into the source code, and this is what I found.
OpenAI Gym uses pyglet for displaying the window and animations.
To show the animation, everything is drawn onto the window and then rendered, and pyglet stores what is being displayed in a buffer.
Here is a dummy version of how the code is written in OpenAI Gym:
import pyglet
from pyglet.gl import *
import numpy as np

display = pyglet.canvas.get_display()
screens = display.get_screens()
config = screens[0].get_best_config()
window = pyglet.window.Window(width=500, height=500, display=display, config=config)

# draw whatever you want here

# get the image back from the color buffer
buffer = pyglet.image.get_buffer_manager().get_color_buffer()
image_data = buffer.get_image_data()
arr = np.frombuffer(image_data.get_data(), dtype=np.uint8)
print(arr)
print(arr.shape)
output:
[0 0 0 ... 0 0 0]
(1000000,)
So basically, every image we get comes from the buffer of what is being displayed on the window. If we don't draw anything on the window, we get no image, so the window is required to get the image. That means you need to find a way for the window not to be displayed while its contents are still stored in the buffer; one possible direction is sketched below.
I know it's not what you wanted, but I hope it might lead you to a solution.
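For example, here is a minimal sketch of that idea (my own untested assumption, not code from the gym source): pyglet's Window accepts visible=False, so the GL context and its color buffer exist without anything appearing on screen.

import pyglet
import numpy as np

window = pyglet.window.Window(width=500, height=500, visible=False)
window.switch_to()  # make this window's GL context current
window.clear()      # draw whatever you want here instead of just clearing

# read the pixels back from the color buffer, as above
buffer = pyglet.image.get_buffer_manager().get_color_buffer()
image_data = buffer.get_image_data()
arr = np.frombuffer(image_data.get_data(), dtype=np.uint8)
arr = arr.reshape(buffer.height, buffer.width, 4)  # RGBA, bottom row first
print(arr.shape)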
I've just gone through half of the gym source code line by line, and I can tell you that (1) the observation space of CartPole is four numbers fed to the AI, not pixels. E.g., from their cartpole env .py file:
Observation:
    Type: Box(4)
    Num   Observation              Min                    Max
    0     Cart Position            -2.4                   2.4
    1     Cart Velocity            -Inf                   Inf
    2     Pole Angle               -0.209 rad (-12 deg)   0.209 rad (12 deg)
    3     Pole Angular Velocity    -Inf                   Inf
So the pixels are for you at this point, not the agent. And (2), if your goal is to teach the AI on pixels, you will need to render images from your data-in array and then pass them through the observation space as a pixel array, like Maunish Dave shows (a sketch of such a wrapper follows below). OpenAI's Atari version does this.
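Here is a rough sketch of that wrapping idea (my own illustration, assuming the old gym 0.x API where render(mode='rgb_array') returns an ndarray; the class name is hypothetical):

import gym

class PixelObservationWrapper(gym.ObservationWrapper):
    # swap the 4-number CartPole state for the rendered frame
    def observation(self, observation):
        return self.env.render(mode='rgb_array')

env = PixelObservationWrapper(gym.make("CartPole-v0"))
obs = env.reset()  # now an (H, W, 3) uint8 array rather than Box(4)

Note that observation_space should also be redefined to a matching Box if your learning library inspects it, and this still opens a window unless you combine it with one of the invisible-window fixes in the other answers.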
If you want a better guide, don't read the OpenAI docs; read the Stable Baselines docs here: https://stable-baselines.readthedocs.io/
Someone offers an answer here:
https://github.com/openai/gym/issues/374
"The atari and doom environments give pixels in their observations (ie, the return value from step). I don't think any other ones do.
render produces different results on different OSes, so they're not part of any official environment for benchmarking purposes. But if you want to create a new environment where the observation is in pixels, you could implement it by wrapping an existing environment and calling render."
I'm also working on getting the raw pixels, and I'm trying to find a way to verify that what is returned is what I expect it to be.
The documentation can be found at:
https://gym.openai.com/docs
And there is a forum for discussing OpenAI:
discuss.openai.com
Although it's not very lively.
I have faced a similar problem. This is how I fixed it: in the rendering.py file at /gym/envs/classic_control, find the following line in the Viewer class:
self.window = pyglet.window.Window(width=width, height=height, display=display)
Change this line to:
self.window = pyglet.window.Window(width=width, height=height, display=display, visible=False)
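If you'd rather not edit the gym source, a similar effect can probably be achieved by hiding the window right after it is created (my own untested variation; it assumes the classic-control envs expose their viewer and relies on pyglet's set_visible, so the window may still flash once):

import gym

env = gym.make("CartPole-v0")
env.reset()
env.render(mode='rgb_array')  # first call creates the viewer and its window
env.unwrapped.viewer.window.set_visible(False)  # hide the pyglet window
img = env.render(mode='rgb_array')  # later frames render without a visible window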
Hope it helps!!
Related
I am involved in a project that involves feeding a combination of text embeddings and image vectors into a DNN to arrive at the result. For the word-embedding part I am using TFHub's Electra, while for the image part I am using a NASNet Mobile network.
However, the issue I am facing is that the word-embedding part, using the code shown below, just keeps running nonstop. It has been over 2 hours now, and my training dataset has just 14,900 rows of tweets.
Note: the input to the function is just a list of 14,900 tweets.
import tensorflow as tf
import tensorflow_hub as hub

tfhub_handle_encoder = "https://tfhub.dev/google/electra_small/2"
tfhub_handle_preprocess = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"

# Load models
bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)
bert_model = hub.KerasLayer(tfhub_handle_encoder)

def get_text_embedding(text):
    preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='Preprocessor')
    encoder_inputs = preprocessing_layer(text)
    encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='Embeddings')
    outputs = encoder(encoder_inputs)
    text_repr = outputs['pooled_output']
    text_repr = tf.keras.layers.Dense(128, activation='relu')(text_repr)
    return text_repr

text_repr = get_text_embedding(train_text)
Is there a faster way to get text representation using these models?
Thanks for the help!
The operation performed in the code is quadratic in nature. While I managed to execute your snippet with 10,000 samples within a few minutes, a 14,900-long input ran out of memory on a 32 GB RAM runtime. Is it possible that your runtime is swapping?
It is not clear what the snippet is trying to achieve. Do you intend to train a model? In that case you can define the text input as an Input layer and use fit to train. Here is an example: https://www.tensorflow.org/text/tutorials/classify_text_with_bert#define_your_model. A minimal sketch of that pattern follows below.
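A minimal sketch of that approach (my own assumption about your setup; train_text is taken to be a list or array of strings): build the graph once as a Keras model and let predict() stream the tweets through in batches, which keeps memory bounded instead of pushing all 14,900 strings through in a single call.

import tensorflow as tf
import tensorflow_hub as hub

tfhub_handle_encoder = "https://tfhub.dev/google/electra_small/2"
tfhub_handle_preprocess = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"

# build the embedding graph once
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
encoder_inputs = hub.KerasLayer(tfhub_handle_preprocess, name='Preprocessor')(text_input)
outputs = hub.KerasLayer(tfhub_handle_encoder, trainable=False, name='Embeddings')(encoder_inputs)
embedding_model = tf.keras.Model(text_input, outputs['pooled_output'])

# predict() runs the data through in batches, so memory use stays bounded
text_repr = embedding_model.predict(train_text, batch_size=32)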
I am wondering how to go about setting up ONLY a test phase in Caffe for an LMDB file. I have already trained my model, everything seems good, my loss has decreased, and the output I am getting on images loaded in one by one also seem good.
Now I would like to see how my model performs on a separate LMDB test set, but seem to be unable to do so successfully. It would not be ideal for me to do a loop by loading images one at a time since my loss function is already defined in caffe and this would require me to redefine it.
This is what I have so far, but the results don't make sense: when I compare the loss I get on the train set to the loss I get from this, they don't match (they are orders of magnitude apart). Does anyone have any idea what my problem could be?
import caffe
import numpy as np

caffe.set_device(0)
caffe.set_mode_gpu()
net = caffe.Net('/home/jeremy/Desktop/caffestuff/JP_Kitti/all_proto/mirror_shuffle/deploy_JP.prototxt',
                '/home/jeremy/Desktop/caffestuff/JP_Kitti/all_proto/mirror_shuffle/snapshot_iter_10000.caffemodel',
                caffe.TEST)
solver = None  # workaround for lmdb data (can't instantiate two solvers on the same data)
solver = caffe.SGDSolver('/home/jeremy/Desktop/caffestuff/JP_Kitti/all_proto/mirror_shuffle/lenet_auto_solverJP_test.prototxt')
niter = 100
test_loss = np.zeros(niter)
count = 0
for it in range(niter):
    solver.test_nets[0].forward()  # one forward pass over a test batch
    # store the test loss (note the .data; the blob itself is not a number)
    test_loss[count] = solver.test_nets[0].blobs['loss'].data
    print(solver.test_nets[0].blobs['loss'].data)
    count = count + 1
See my answer here. Do not forget to subtract the mean, otherwise you'll get low accuracy. The code linked above takes care of that.
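If you end up feeding images manually instead of through the LMDB layer, the mean subtraction can be done with caffe's Transformer. Here is a minimal sketch, reusing the net defined in the question (the mean-file and image paths are placeholders):

import numpy as np
import caffe

# per-channel mean from a .npy mean file (path is a placeholder)
mu = np.load('/path/to/mean.npy').mean(1).mean(1)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))  # HxWxC -> CxHxW
transformer.set_mean('data', mu)              # subtract the per-channel mean

image = caffe.io.load_image('/path/to/image.png')  # placeholder path
net.blobs['data'].data[...] = transformer.preprocess('data', image)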
I'm having a problem generating simulations from a 3 level glmer model when conditioning on the random effects (I'm actually using predict via bootMer but the problem is the same).
This works:
library(lme4)
fit1 = glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
data = cbpp, family = binomial)
simulate(fit1, re.form=NULL)
This fails:
cbpp$bigherd = rep(1:7, 8)
fit2 = glmer(cbind(incidence, size - incidence) ~ period + (1 | bigherd / herd),
data = cbpp, family = binomial)
simulate(fit2, re.form=NULL)
Error: No random effects terms specified in formula
Many thanks for any ideas.
Update
Ben, many thanks for your help below, really appreciate it. I wonder if I can impose on you again.
What I want to do is simulate predictions on the response scale, and I'm not sure if I can use your workaround, or whether there is an alternative to what I'm doing. Thank you!
This works as expected, but is not conditional on random effects:
FUN = function(.){
predict(., type="response")
}
bootMer(fit2, FUN, nsim=3)$t
This doesn't work, as would be expected given the problem above:
bootMer(fit2, FUN, nsim=3, use.u=TRUE)$t
As far as I can see, I can't pass re.form to bootMer.
Does the alternative below result in simulated predictions conditional on random effects without passing use.u to bootMer?
FUN = function(.){
predict(., type="response", re.form=~(1|herd:bigherd) + (1|bigherd))
}
bootMer(fit2, FUN, nsim=10)$t
I'm not sure what's going on yet, but here are two workarounds that do work:
simulate(fit2, re.form=lme4:::reOnly(formula(fit2)))
simulate(fit2, re.form=~(1|herd:bigherd) + (1|bigherd))
There must be something going wrong with the expansion of the "slash" term, because this doesn't work:
simulate(fit2, re.form=~(1|bigherd/herd))
I've posted this as an lme4 issue
These workarounds don't work for bootMer (which only takes the use.u argument, not re.form) in the current CRAN release (1.1-9).
It is fixed in the development version on Github (1.1-10): devtools::install_github("lme4/lme4") will install it, if you have compilation tools installed.
In the meantime you could just go ahead and implement your own parametric bootstrap; for parametric bootstrapping, bootMer is actually a very thin wrapper around simulate() / [refit() or update()] / FUN. Much of the complication has to do with parallel computation (you'd have to add some of it back in if you want parallel computation in your own PB implementation).
This is the outline of a hand-rolled parametric bootstrap:
nboot <- 10
nresp <- length(FUN(orig_fit))
res <- matrix(NA, nboot, nresp)
for (i in 1:nboot) {
    res[i, ] <- FUN(update(orig_fit, data = simulate(orig_fit, ...)))
    ## or use refit() for LMMs
    ## ... are options passed to simulate()
}
t(apply(res, 2, quantile, c(0.025, 0.975)))
I have a project where I have to recognize the frequency in an audio file. For this I use a single 10 kHz tone to see if I can get it working.
Since I am pretty new to Octave, I tried this example with my own audio file.
I tried to understand what happens by doing some research on all the functions.
My question is: if I let specgram plot the figure itself, when I do not specify its outputs:
specgram(y,fftn,Fs,hanning(window),step);
it draws a line at 10 kHz, which is what I want.
But if I specify the outputs of the specgram function
[S,f,t]= specgram(y,fftn,Fs,hanning(window),step);
and plot them myself, the line ends up at 18 kHz.
I figured the problem has to be in the inputs to the figure, and I tried modifying them a bit, but every time I do that Octave gives an error.
I need the frequency as an output, since I have to do some calculations with it, so I figured I have to capture specgram's outputs.
This is the part of the code that specifies the plot for the spectrogram:
step = fix(5*Fs/1000);      % step size of the window
window = fix(90*Fs/1000);   % window size
fftn = 2^nextpow2(window);  % size of the FFT block
[S,f,t] = specgram(y, fftn, Fs, hanning(window), step);
S = abs(S(2:fftn*12000/Fs,:)); % take the magnitude, keeping only bins up to 12 kHz
S = S/max(S(:));               % normalize the energy
S = max(S, 10^(-40/10));       % throw out values below -40 dB
S = min(S, 10^(-3/10));        % and clip above -3 dB
figure
imagesc(t, f, log(S));
Can anyone help me with how to obtain the frequency data from the audio file so I can use it in some calculations?
I have already searched for answers in the Octave manual and tried various Matlab sites. I have also checked many posts here, such as:
How does Octave spectrogram 'specgram' from signal work?
Methodology of FFT for Matlab spectrogram / short time Fourier transform functions
P.S. Sorry for my bad English; it's not my native language.
I found the answer myself. It turns out it is in this line of code:
S= abs(S(2:fftn*12000/Fs,:));
If I delete this line, the lines are placed at the right frequency in the figure. What happens is that this line keeps only a subset of the FFT bins (those up to 12 kHz), while the frequency vector f passed to imagesc still spans the full range, so the truncated data gets stretched over the full frequency axis and the line shows up at the wrong frequency. Either delete the line, or truncate f the same way (keeping f(2:fftn*12000/Fs) in sync with S).
Hi there,
I am using Octave 2.3.4 with a plot command. I am new to Octave. The plot does not display for some reason. Here is a sample of my M-file:
1;
clear all;
%%%%%%%%% parameters setting %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
r=0.01; %risk free rate
S0=50; %underlying price 1
%create an implied volatiltiy surface using below parameters:
basevol=0.25; %implied volatility at time t=0 and in center of strike axis
skewT=-0.001; %increase in vol for one unit increase in maturity
v1=0.1; %defines how much a smile is raised at left end from base vol
v3=0.2; %defines how much a smile is raised at right end from base vol
nK=100; %no. of strike steps
nT=10; %no. of time steps
Tmax=1; %maximum value in time axis
Kmin=1; %minimum value in strike price axis
Kmax=150; %maximum value of strike price axis
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
dt=Tmax/(nT-1);
Tvec=0:dt:1;
dk=(Kmax-Kmin)/(nK-1);
Kvec=Kmin:dk:Kmax;
Tvec=Tvec';
Kvec=Kvec';
nK=size(Kvec,1);
nT=size(Tvec,1);
dvolT=ones(nK,nT)*(skewT*dt);
dvolT=cumsum(dvolT,2);
SmileVec=GetSmile(Kvec,v1,0,v3);
dvolK=ones(nK,nT);
dvolK=repmat(SmileVec,1,nT);
ImpliedVolSurface=ones(nK,nT)*basevol+dvolT+dvolK;
%use formula mentioned by John Elder in "Hedging for Financial Derivatives"
%this formula gives local volatility using implied volatility
function ret=GetLocalVolSurface(ImpliedVolSurface, S, r, Kvec, Tvec)
  [m,n]=size(ImpliedVolSurface);
  LocalVolSurface=zeros(m,n);
  dk=Kvec(2)-Kvec(1);
  dt=Tvec(2)-Tvec(1);
  x=ImpliedVolSurface;
  for i=3:m-2, %loop over strikes
    for j=1:n-1, %loop over time steps
      dv_dk=(x(i+1,j)-x(i-1,j))/(2*dk);
      dv2_dk2=(x(i-1,j)-2*x(i,j)+x(i+1,j))/(dk*dk);
      dv_dt=(x(i,j+1)-x(i,j))/dt;
      K=Kvec(i);
      T=Tvec(j+1);
      rT=T^0.5;
      sig=x(i,j);
      h1=(log(S/K)+r*T+0.5*sig*sig*T)/(sig*rT);
      numer=sig*sig + 2*T*sig*dv_dt + 2*r*K*T*sig*dv_dk;
      denom=(1+K*h1*rT*dv_dk)^2 + K*K*T*sig*sig*(dv2_dk2-h1*dv_dk*dv_dk*rT);
      LocalVolSurface(i,j)=(numer/denom)^0.5;
    end
  end
  ret=LocalVolSurface;
endfunction
LocalVol_Surface=GetLocalVolSurface(ImpliedVolSurface,S0,r,Kvec,Tvec);
AsyImplVols=zeros(nK,1);
T=Tvec(nT-1);
F=S0*exp(r*T);
for i=3:nK-2,
  % use the formula sigBS(F,K) = sigLoc((F+K)/2)
  K=Kvec(i);
  lookupK=(F+K)/2;
  kdiff=abs(Kvec-lookupK); % find the nearest point in the grid
  kidx=min(find(kdiff==min(kdiff)));
  if ( (kidx > 3) && (kidx < nK-2) ),
    AsyImplVols(i)=LocalVol_Surface(kidx);
  else
    AsyImplVols(i) = NaN;
  end
end
figure(1);
plot(Kvec(3:nK-2),[ImpliedVolSurface(3:nK-2,nT-1) LocalVol_Surface(3:nK-2,nT-1) AsyImplVols(3:nK-2)]);
When I run it in Octave there are no errors, but the plot is never displayed. The install does include gnuplot 1.0.1, which I understand draws the graph. Is there something I am not doing or missing? I am running this on Windows 2003 Server.
Thanks
I got the answer here. Octave by default uses fltk for plotting etc., which is failing to work; using gnuplot works here. Just add the line below to the .octaverc file in your home directory:
graphics_toolkit("gnuplot")
so that every time Octave starts, it will set the default plotting toolkit to gnuplot.
I know it's an old question, but since I ran into much the same error yesterday, maybe this can help someone else too:
According to the Octave wiki pages, there seems to be a problem with plotting and the "oct2mat" library. For me, the problem was solved after I ran this at the Octave command prompt:
pkg rebuild -noauto oct2mat
and restarted Octave. When you need to use "oct2mat", type:
pkg load oct2mat
Hope that helps!
I get the same problem installing on a Windows 7 64-bit laptop (HP 630) with Intel graphics. Every time you plot, it fails to do anything, but if you plot again, it shows up. It's some kind of refresh bug. It's annoying, but if you plot twice, the second time it works.
I am wondering if it is some kind of double-buffering bug, because it works correctly on my own laptop running Windows 7 with a dedicated graphics card.
In any case, try plotting twice in a row; I'll bet it works. And please let me know what the machine and video card are, because I've reported this to Octave development.