How can pygame know when I am in the X range and Y range?

I am making a game and I want pygame to detect when my player is within a certain x and y range.
I have to break into a safe; to do that I must stand where the safe is and stay there for 3 seconds.
The problem is that even though I managed to tell pygame when I am in the area where the safe is,
I don't know how pygame can tell that I have stayed there for 3 seconds.
This is non-optimal:
import time

now = time.time()
future = now + 3
while future > now:
    # This busy-wait doesn't work well: it blocks everything else while it runs
    now = time.time()
Is there a better solution?

What you could do to "alert" pygame/your code is to use if statements if you're not using OOP and all you have is a list of coordinates for your player's location; or, if you do have an object for your player, you could use pygame.sprite.Sprite as a superclass. Here's the solution if you're using a coordinate list and not a big class.
loc = [x, y]  # Don't run this as-is; it won't know what x and y are. This is just an example
safe_rect = (x, y, w, h)  # Using a tuple because I assume the safe doesn't move

while True:
    if (loc[0] >= safe_rect[0] and          # The parentheses let the condition span several lines
            loc[0] < safe_rect[0] + safe_rect[2] and
            loc[1] >= safe_rect[1] and
            loc[1] < safe_rect[1] + safe_rect[3]):
        not_yet_existent_do_something_function()  # Runs once the player is inside the safe area
If you want to include a timer, do this:
import time

loc = [x, y]
safe_rect = (x, y, w, h)

# Time variables
entered_at = None  # set when the player first steps into the safe area

while True:
    if (loc[0] >= safe_rect[0] and
            loc[0] < safe_rect[0] + safe_rect[2] and
            loc[1] >= safe_rect[1] and
            loc[1] < safe_rect[1] + safe_rect[3]):
        if entered_at is None:
            entered_at = time.time()          # start the 3 second countdown
        elif time.time() - entered_at >= 3:   # 3 seconds after entering the area
            not_yet_existent_do_something_function()
    else:
        entered_at = None                     # left the area, so reset the timer
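If you are running a normal pygame loop anyway, you don't need the time module at all; here is a minimal sketch of the same idea using pygame.Rect.collidepoint() and pygame.time.get_ticks() (the positions, sizes and the print are placeholders, not values from your game):
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()

safe_rect = pygame.Rect(300, 200, 60, 60)   # placeholder position/size for the safe
player_pos = [100, 100]                     # placeholder player coordinates
entered_at = None                           # tick count when the player stepped onto the safe

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # ... move player_pos here ...

    if safe_rect.collidepoint(player_pos):
        if entered_at is None:
            entered_at = pygame.time.get_ticks()             # start counting
        elif pygame.time.get_ticks() - entered_at >= 3000:   # 3000 ms = 3 s in the area
            print("safe cracked")                            # placeholder for the real action
            entered_at = None
    else:
        entered_at = None                                    # left the area, reset the timer

    clock.tick(60)

pygame.quit()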
I hope this works for you; if you have a question about what I did, please ask.


Predicting a simple autoregressive model with a fully connected network

The question is at the end, so you can jump straight to it; I just wanted to share my process in case someone wants to give me general advice.
I started learning how to use LSTM layers and tried to build a simple predictor for the following AR model:
import random

class AR_model:
    def __init__(self, length=100):
        self.time = 0
        self.first_value = 0
        self.a1 = 0.6
        self.a2 = -0.5
        self.a3 = -0.2
        self.Xt = self.first_value
        self.Xt_minus_1 = 0
        self.Xt_minus_2 = 0
        self.length = length

    def __iter__(self):
        return self

    def __next__(self):  # raises StopIteration when the series is exhausted
        if self.time == self.length:
            raise StopIteration
        new_value = self.a1 * self.Xt + \
                    self.a2 * self.Xt_minus_1 + \
                    self.a3 * self.Xt_minus_2 + \
                    random.uniform(0, 0.1)
        self.Xt_minus_2 = self.Xt_minus_1
        self.Xt_minus_1 = self.Xt
        self.Xt = new_value
        self.time += 1
        return new_value
which basically means the following series:
Xt = a1 * Xt−1 + a2 * Xt−2 + a3 * Xt−3 + Ut
where a1 = 0.6, a2 = −0.5, a3 = −0.2, and Ut (i.i.d.) ∼ Uniform(0, 0.1)
using the following forward method:
def forward(self, input):
    # input: [Batch x seq_length x input_size]
    x, _ = self.lstm(input)
    # x: [Batch x seq_length x hidden_state]
    x = x[:, -1, :]
    # take only the last step: x: [Batch x hidden_state]
    x = self.linear(x)
    # x: [Batch x 1]
    return x
The best result seems OK:
[picture of results, 91 steps]
with the following hyper-parameters:
signal_count = 50
signal_length = 200
hidden_state = 200
learning_rate = 0.1
I also tried it on sin and triangle waves:
[sin wave, 20 steps]
[tri wave, 75 steps]
The triangle wave might have worked with a deeper network, but I didn't bother to try.
Question 1
It makes sense that for a simple AR model, such as:
Xt = a1 * Xt−1 + a2 * Xt−2 + a3 * Xt−3 + Ut
where a1 = 0.6, a2 = −0.5, a3 = −0.2, and Ut (i.i.d.) ∼ Uniform(0, 0.1),
it should be possible to get a good prediction with a simple three-input, single-layer fully connected network, where the inputs are the last three values of the AR series.
But I just get terrible results. Even when I remove the noise from the AR model I still get bad results. Am I wrong to think this?
I didn't post the code because I think it's a conceptual problem. If someone asks, I will post it.
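For concreteness, this is roughly the kind of predictor I mean; the windowing and training loop here are only a sketch, not my actual code (it reuses the AR_model class above):
import torch
import torch.nn as nn

# Reuse the AR_model iterator above to generate a training series
series = torch.tensor(list(AR_model(length=1000)), dtype=torch.float32)

# Inputs are the last three values, the target is the next value
inputs = torch.stack([series[i:i + 3] for i in range(len(series) - 3)])
targets = series[3:].unsqueeze(1)

# A single fully connected layer: three lagged values in, one prediction out
model = nn.Linear(3, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# The learned weights should end up close to (a3, a2, a1), up to the noise
print(model.weight, model.bias)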
Question 2
For the above AR model, what simple predictor would you recommend, not necessarily based on deep learning?
Asking friends, I was recommended a Kalman filter and Markov-based methods, but I haven't really looked into them yet.
Thank you for reading

Function approximator and q-learning

I am trying to implement Q-learning with an action-value approximation function. I am using openai-gym and the "MountainCar-v0" environment to test my algorithm. My problem is that it does not converge or find the goal at all.
Basically, the approximator works as follows: you feed in the 2 features, position and velocity, plus one of the 3 actions in a one-hot encoding: 0 -> [1,0,0], 1 -> [0,1,0] and 2 -> [0,0,1]. The output is the action-value approximation Q_approx(s,a) for that one specific action.
I know that usually the input is the state (2 features) and the output layer contains 1 output for each action. The big difference that I see is that I have to run the feed-forward pass 3 times (once for each action) and take the max, while in the standard implementation you run it once and take the max over the outputs.
Maybe my implementation is just completely wrong, or my reasoning is off. I'll paste the code here; it is a mess, but I am just experimenting a bit:
import gym
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation

env = gym.make('MountainCar-v0')

# The mean reward over 20 episodes
mean_rewards = np.zeros(20)
# Feature numpy holder
features = np.zeros(5)
# Q_a value holder
qa_vals = np.zeros(3)

one_hot = {
    0: np.asarray([1, 0, 0]),
    1: np.asarray([0, 1, 0]),
    2: np.asarray([0, 0, 1])
}

model = Sequential()
model.add(Dense(20, activation="relu", input_dim=5))
model.add(Dense(10, activation="relu"))
model.add(Dense(1))
model.compile(optimizer='rmsprop',
              loss='mse',
              metrics=['accuracy'])

epsilon_greedy = 0.1
discount = 0.9
batch_size = 16

# Experience replay containing features and target
experience = np.ones((10*300, 5+1))
fill_index = 0        # next write position in the experience buffer
filled_once = False   # set True by add_exp once the buffer index wraps

# Ring buffer
def add_exp(features, target, index):
    if index % experience.shape[0] == 0:
        index = 0
        global filled_once
        filled_once = True
    experience[index, 0:5] = features
    experience[index, 5] = target
    index += 1
    return index

for e in range(0, 100000):
    obs = env.reset()
    old_obs = None
    new_obs = obs
    rewards = 0
    loss = 0
    for i in range(0, 300):
        if old_obs is not None:
            # Find q_a max for s_(t+1)
            features[0:2] = new_obs
            for i, pa in enumerate([0, 1, 2]):
                features[2:5] = one_hot[pa]
                qa_vals[i] = model.predict(features.reshape(-1, 5))
            rewards += reward
            target = reward + discount*np.max(qa_vals)
            features[0:2] = old_obs
            features[2:5] = one_hot[a]
            fill_index = add_exp(features, target, fill_index)
            # Find new action
            if np.random.random() < epsilon_greedy:
                a = env.action_space.sample()
            else:
                a = np.argmax(qa_vals)
        else:
            a = env.action_space.sample()
        obs, reward, done, info = env.step(a)
        old_obs = new_obs
        new_obs = obs
        if done:
            break
        if filled_once:
            samples_ids = np.random.choice(experience.shape[0], batch_size)
            loss += model.train_on_batch(experience[samples_ids, 0:5],
                                         experience[samples_ids, 5].reshape(-1))[0]
    mean_rewards[e % 20] = rewards
    print("e = {} and loss = {}".format(e, loss))
    if e % 50 == 0:
        print("e = {} and mean = {}".format(e, mean_rewards.mean()))
Thanks in advance!
There shouldn't be much difference between feeding the actions as inputs to your network and having one output per action. It does make a huge difference if your states are images, for example, because conv nets work very well with images and there would be no obvious way of integrating the actions into the input.
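Just to illustrate, here is a minimal sketch of that standard layout using the same Keras API you are already using (the layer sizes and the example state are arbitrary):
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

# State-only input (position, velocity), one Q-value per action as output
q_net = Sequential()
q_net.add(Dense(20, activation="relu", input_dim=2))
q_net.add(Dense(10, activation="relu"))
q_net.add(Dense(3))                       # Q(s, a) for actions 0, 1, 2 in a single pass
q_net.compile(optimizer="rmsprop", loss="mse")

state = np.array([[-0.5, 0.0]])           # an example MountainCar observation
q_values = q_net.predict(state)           # shape (1, 3)
greedy_action = int(np.argmax(q_values))  # one forward pass instead of three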
Have you tried the CartPole balancing environment? It is better for testing whether your model is working correctly.
MountainCar is pretty hard. It has no reward until you reach the top, which often doesn't happen at all. The model will only start learning something useful once you have reached the top at least once. If you are never getting to the top, you should probably spend more time exploring; in other words, take more random actions, a lot more...
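One simple way to do that (just a sketch, the numbers are made up): start with epsilon close to 1 and decay it slowly, so the early episodes are almost entirely random:
epsilon = 1.0          # start almost fully random
epsilon_min = 0.1
epsilon_decay = 0.999  # applied once per episode

for e in range(100000):
    # ... run the episode, picking a random action with probability epsilon ...
    epsilon = max(epsilon_min, epsilon * epsilon_decay)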

Pygame breaking code, creating a wall of bricks [duplicate]

This question already has answers here:
How do I detect collision in pygame?
(5 answers)
How to detect collisions between two rectangular objects or images in pygame
(1 answer)
Closed 2 years ago.
This is my first time using pygame and I apologize for possible misunderstandings and 'silly' questions. I am developing a brick-breaking game consisting of a ball, a paddle and a wall of bricks. So far I have the ball, and that's fine. I am trying to extend my code to get the wall, but I am missing something and it doesn't show up on the screen when I run it. Could anyone help me? The bricks are 50x60, in 3 rows. Thank you in advance. (If anyone could also recommend something about creating the paddle and making the ball stick to it, that would be highly appreciated.)
Here is my code so far:
""""BREAKING GAME"""
# Import and initialize pygame
import pygame as pg
pg.init()

# Set the colour of the background and store the ball data
backgroundcolour = (50, 50, 50)
randomcolour = (205, 55, 0)
ballimg = pg.image.load("ball.gif")
ballimage = ballimg.get_rect()

# Set the frame
xmax = 800
ymax = 800
screen = pg.display.set_mode((xmax, ymax))

# Starting values
horizontalposition = 400.  # pixels
verticalposition = 400.    # pixels
v_x = 400.  # pixels per second
v_y = 300.  # pixels per second

# Set the clock of the computer
t0 = float(pg.time.get_ticks())/1000.

# Create wall of bricks
position_Bricks_x = [0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750]
position_Bricks_y = [16, 32, 48]

# Infinite loop
running = True
while running:
    t = float(pg.time.get_ticks())/1000.
    dt = min(t-t0, 0.1)
    t0 = t

    # Motion of the ball
    horizontalposition = horizontalposition + v_x*dt
    verticalposition = verticalposition + v_y*dt

    # Bounce the ball on the edges of the screen
    if horizontalposition > xmax:
        v_x = -abs(v_x)
    elif horizontalposition < 0:
        v_x = abs(v_x)
    if verticalposition > ymax:
        v_y = -abs(v_y)
    elif verticalposition < 0:
        v_y = abs(v_y)

    # Draw the frame
    screen.fill(backgroundcolour)
    ballimage.centerx = int(horizontalposition)
    ballimage.centery = int(verticalposition)
    screen.blit(ballimg, ballimage)

    # Redraw the wall of bricks every frame; the indices must be reset here,
    # otherwise the wall is only drawn on the very first frame
    i = 0
    j = 0
    while i < len(position_Bricks_x):
        if j < len(position_Bricks_y):
            pg.draw.rect(screen, randomcolour, [position_Bricks_x[i], position_Bricks_y[j], 50, 16])
            j = j + 1
        else:
            j = 0
            i = i + 1

    pg.display.flip()

    # Event handling
    pg.event.pump()
    for event in pg.event.get():
        if event.type == pg.QUIT:
            running = False

# Quit pygame
pg.quit()
print("Ready")
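Since the duplicates linked above are about collision detection, here is a sketch of how the same wall could be kept as pygame.Rect objects, which makes later collision checks with the ball straightforward (the 50x16 size matches the draw call above; the removal logic is only illustrative, not part of the original code):
import pygame as pg

BRICK_W, BRICK_H = 50, 16   # same size as the draw call above

# Build the wall once, before the main loop, as Rect objects
bricks = [pg.Rect(x, y, BRICK_W, BRICK_H)
          for x in range(0, 800, BRICK_W)
          for y in (16, 32, 48)]

# Then, inside the main loop, draw the bricks and drop the ones the ball hits;
# ball_rect would be the ball's Rect from get_rect() in the code above
def draw_bricks(screen, colour, ball_rect):
    for brick in bricks[:]:
        pg.draw.rect(screen, colour, brick)
        if ball_rect.colliderect(brick):
            bricks.remove(brick)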

Python3: Continuously update datetime value from inside while loop

I'm having trouble with the code below, which asks the user to enter a 'wakeup' time that the Python script then uses to calculate how long remains until the countdown reaches zero. From what I have below, I think I need to place the tdelta calculation inside the while loop so that it constantly checks the current time.
At the moment, it seems that tdelta checks the current time in seconds once and then reuses that same value on every pass of the while loop, so the loop never ends because it uses the same time over and over again. Should I be using a function inside the while loop to continuously check for the new value and then evaluate true or false?
from datetime import datetime
import time

now = datetime.now()
hms = "%s:%s:%s" % (now.hour, now.minute, now.second)

ans = input('Enter hour:minute:seconds')

s1 = hms
s2 = ans  # for example
FMT = '%H:%M:%S'
tdelta = datetime.strptime(s2, FMT) - datetime.strptime(s1, FMT)
if tdelta.days < 0:
    tdelta = timedelta(days=0,
                       seconds=tdelta.seconds, microseconds=tdelta.microseconds)

while tdelta.seconds != 0:
    # Use this to see what is happening inside the loop. Here it continuously
    # prints the same time in seconds, so it is an endless loop. I need to
    # somehow update the tdelta time.
    if tdelta.seconds != 0:
        print(tdelta.seconds)
        time.sleep(1)
    else:
        print('time up...do something')
I've tried dozens of variations of the code above but with no luck. I appreciate any tips. Thanks.
Thanks to Padraic Cunningham, who provided the missing code to get this working.
Step one: import timedelta from datetime.
Step two: add tdelta -= timedelta(seconds=1) as the first line under the while loop.
from datetime import datetime
from datetime import timedelta
import time

now = datetime.now()
hms = "%s:%s:%s" % (now.hour, now.minute, now.second)

ans = input('Enter hour:minute:seconds')

s1 = hms
s2 = ans  # for example
FMT = '%H:%M:%S'
tdelta = datetime.strptime(s2, FMT) - datetime.strptime(s1, FMT)
if tdelta.days < 0:
    tdelta = timedelta(days=0,
                       seconds=tdelta.seconds, microseconds=tdelta.microseconds)

while tdelta.seconds != 0:
    tdelta -= timedelta(seconds=1)  # decrement every pass so the countdown actually advances
    if tdelta.seconds != 0:
        print(tdelta.seconds)
        time.sleep(1)
    else:
        print('time up...do something')
The end result is that the command line will count down to zero and once tdelta IS equal to 0, it will print from the else statement.
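Another way to do what I originally asked about is to recompute the remaining time from the real clock on every pass instead of subtracting one second; a minimal sketch (it assumes the entered time is later the same day):
from datetime import datetime
import time

target = input('Enter hour:minute:seconds ')
FMT = '%H:%M:%S'

while True:
    # Recompute the remaining time from the real clock on every pass
    now = datetime.now().strftime(FMT)
    remaining = datetime.strptime(target, FMT) - datetime.strptime(now, FMT)
    if remaining.total_seconds() <= 0:
        print('time up...do something')
        break
    print(int(remaining.total_seconds()))
    time.sleep(1)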

FiPy Simple Convection

I am trying to understand how FiPy works by working through an example; in particular, I would like to solve the following simple convection equation with a periodic boundary:
$$\partial_t u + \partial_x u = 0$$
If initial data is given by $u(x, 0) = F(x)$, then the analytical solution is $u(x, t) = F(x - t)$. I do get a solution, but it is not correct.
What am I missing? Is there a better resource for understanding FiPy than the documentation? It is very sparse...
Here is my attempt
from fipy import *
import numpy as np

# Generate mesh
nx = 20
dx = 2*np.pi/nx
mesh = PeriodicGrid1D(nx=nx, dx=dx)

# Generate solution object with initial discontinuity
phi = CellVariable(name="solution variable", mesh=mesh)
phiAnalytical = CellVariable(name="analytical value", mesh=mesh)
phi.setValue(1.)
phi.setValue(0., where=mesh.x > 1.)

# Define the pde
D = [[-1.]]
eq = TransientTerm() == ConvectionTerm(coeff=D)

# Set discretization so analytical solution is exactly one cell translation
dt = 0.01*dx
steps = 2*int(dx/dt)

# Set the analytical value at the end of simulation
phiAnalytical.setValue(np.roll(phi.value, 1))

for step in range(steps):
    eq.solve(var=phi, dt=dt)

print(phi.allclose(phiAnalytical, atol=1e-1))
As addressed on the FiPy mailing list, FiPy is not great at handling convection-only PDEs (absent diffusion, pure hyperbolic) as it's missing higher-order convection schemes. It is better to use CLAWPACK for this class of problem.
FiPy does have one second-order scheme that might help with this problem, the VanLeerConvectionTerm; see an example.
If the VanLeerConvectionTerm is used in the above problem, it does do a better job of preserving the shock.
import numpy as np
import fipy

# Generate mesh
nx = 20
dx = 2*np.pi/nx
mesh = fipy.PeriodicGrid1D(nx=nx, dx=dx)

# Generate solution object with initial discontinuity
phi = fipy.CellVariable(name="solution variable", mesh=mesh)
phiAnalytical = fipy.CellVariable(name="analytical value", mesh=mesh)
phi.setValue(1.)
phi.setValue(0., where=mesh.x > 1.)

# Define the pde
D = [[-1.]]
eq = fipy.TransientTerm() == fipy.VanLeerConvectionTerm(coeff=D)

# Set discretization so analytical solution is exactly one cell translation
dt = 0.01*dx
steps = 2*int(dx/dt)

# Set the analytical value at the end of simulation
phiAnalytical.setValue(np.roll(phi.value, 1))

viewer = fipy.Viewer(vars=phi)

for step in range(steps):
    eq.solve(var=phi, dt=dt)
    viewer.plot()

input('stopped')

print(phi.allclose(phiAnalytical, atol=1e-1))