When using the frame skipping wrapper for OpenAI Gym, what is the purpose of the np.max line?

I'm implementing the following wrapper, commonly used with OpenAI Gym for frame skipping. It can be found in dqn/atari_wrappers.py.
I'm very confused about the following line:
max_frame = np.max(np.stack(self._obs_buffer), axis=0)
I have added comments throughout the code for the parts I understand and to aid anyone who may be able to help.
np.stack(self._obs_buffer) stacks the two states in _obs_buffer.
np.max returns the maximum along axis 0.
But what I don't understand is why we're doing this or what it's really doing.
import gym
import numpy as np
from collections import deque

class MaxAndSkipEnv(gym.Wrapper):
    """Return only every 4th frame"""
    def __init__(self, env=None, skip=4):
        super(MaxAndSkipEnv, self).__init__(env)
        # Initialise a double-ended queue that can store a maximum of two states
        self._obs_buffer = deque(maxlen=2)
        # _skip = 4
        self._skip = skip

    def _step(self, action):
        total_reward = 0.0
        done = None
        for _ in range(self._skip):
            # Take a step
            obs, reward, done, info = self.env.step(action)
            # Append the new state to the double-ended queue buffer
            self._obs_buffer.append(obs)
            # Add the reward from this step to the running total
            total_reward += reward
            # If the game ends, break the for loop
            if done:
                break
        max_frame = np.max(np.stack(self._obs_buffer), axis=0)
        return max_frame, total_reward, done, info

At the end of the for loop, self._obs_buffer holds the last two frames.
Those two frames are then max-pooled pixel-wise (that is what the np.max(..., axis=0) line does), producing a single observation. The reason is that some Atari games render certain sprites only in every other frame (a flicker artifact of the original hardware), so taking the element-wise maximum over two consecutive frames prevents objects from disappearing from the returned observation; it also retains some temporal information across the skipped frames.
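A minimal numpy sketch (with toy 1-D "frames" and made-up pixel values) of what that line computes:

import numpy as np

# Two consecutive toy "frames": the sprite (value 200) flickers,
# appearing in only one of the two frames.
frame_a = np.array([0, 200, 0, 35])
frame_b = np.array([0,   0, 0, 35])

stacked = np.stack([frame_a, frame_b])  # shape (2, 4)
max_frame = np.max(stacked, axis=0)     # pixel-wise maximum over the two frames
print(max_frame)                        # [  0 200   0  35] -> the sprite survives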

Related

Problem while implementing the “Continual Learning Through Synaptic Intelligence” paper

I am trying to reproduce the results of the “Continual Learning Through Synaptic Intelligence” paper [1]. I tried implementing the algorithm as best I could after going through the paper many times. I also looked at its official implementation on GitHub, which is in TensorFlow 1.0, but could not understand much, as I don't have much familiarity with it.
I got some results, but they are not as good as the paper's. I wanted to ask if anyone can help me find out where I am going wrong. Before going into coding details, I want to discuss the pseudocode, so that I understand what is going wrong with my implementation.
Here is the pseudocode of what I have implemented. Please help me.
lambda = 1
xi = 1e-3
total_tasks = 5

model = NN(total_tasks)
## multiheaded linear model: 784 (input) --> 256 --> 256 --> 2 (output), with 5 separate heads
## the output layer is a 2-neuron head (one separate head per task, 5 tasks in total)
## the output is a vector of size 2 (for 2 classes)

prev_theta = model.theta(copy=True) # updated at end of task
## model.theta() returns the list of shared parameters (i.e. layer1 and layer2, excluding the output layer)
## copy=True gives a copy of the parameters,
## so it doesn't affect the original params connected to the computational graph

omega_total = zero_like(prev_theta) ## capital Omega in the paper (per-parameter regularization strength)
omega = zero_like(prev_theta)       ## small omega in the paper (per-parameter contribution to loss)

for task_num in range(total_tasks):
    optimizer = ADAM() # created before every task (or reset)
    prev_theta_step = model.theta(copy=True) # updated at end of each step
    ## training for the task starts
    for epoch in range(10):
        for steps in range(steps_per_epoch):
            X, Y = train_dataset[task_num].sample()
            ## X is a flattened image of size 784
            ## Y is a binary vector of size 2 ([0,1] or [1,0])
            Y_pred = model(X, task_num) # model is multiheaded, task_num selects the head
            loss = CROSS_ENTROPY(Y_pred, Y)
            if task_num > 0: ## reg_loss starts from the second task
                theta = model.theta()
                ## here copy is not True, so it returns the params connected to the computational graph
                reg_loss = torch.sum(omega_total * torch.square(theta - prev_theta))
                loss = loss + lambda * reg_loss
            optimizer.zero_grad()
            loss.backward()
            theta = model.theta(copy=True)
            grads = model.theta_grads() ## grads of the shared parameters only
            omega = omega - grads * (theta - prev_theta_step)
            prev_theta_step = theta
            optimizer.step()
    ## training for the task complete, update the importance parameters
    theta = model.theta(copy=True)
    omega_total += relu( omega / ((theta - prev_theta)**2 + xi) )
    prev_theta = theta
    omega = torch.zeros(theta_shape)
    ## evaluation code
    ...
    ## evaluation done
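For concreteness, a minimal runnable PyTorch version of the per-step small-omega update above could look like this (the helper and argument names are mine, not from the official code):

import torch

# Per-step small-omega accumulation, as in the pseudocode above.
# Call after loss.backward() and before optimizer.step().
# shared_params: the shared parameters (with .grad populated)
# omega, prev_theta_step: lists of tensors shaped like shared_params
def accumulate_omega(shared_params, omega, prev_theta_step):
    for i, p in enumerate(shared_params):
        theta = p.detach().clone()
        # omega = omega - grad * (theta - prev_theta_step)
        omega[i] -= p.grad.detach() * (theta - prev_theta_step[i])
        prev_theta_step[i] = theta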
I am also attaching the results I got. In the results, ‘one’ (blue) represents the run without the regularization loss (lambda=0), and ‘two’ (green) the run with the regularization loss (lambda=1).
Thank you for reading so far. Kindly help me out.

Q values overshoot in Double Deep Q Learning

I am trying to teach an agent to play the ATARI Space Invaders video game, but my Q values overshoot.
I have clipped positive rewards to 1 (agent also receives -1 for losing a life), so the maximum expected return should be around 36 (maybe I am wrong about this). I have also implemented the Huber loss.
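(For reference: with GAMMA = 0.99 and a clipped reward of at most 1 per step, the discounted return is bounded by the geometric series sum 1/(1 - 0.99) = 100, so that is a hard ceiling on any correct Q value; my estimate of 36 is well below it.)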
I have noticed that when my Q values start overshooting, the agent stops improving (the reward stops increasing).
Code can be found here
Plots can be found here
Note:
I have binarized frames so that I can use a bigger replay buffer (my replay buffer size is 300 000, which is 3 times smaller than in the original paper)
EDIT:
I have binarized the frames so I can use 1 bit (instead of 8 bits) to store one pixel of the image in the replay buffer, using the numpy.packbits function. That way I can use an 8 times bigger replay buffer. I have checked whether the image is distorted after packing it with packbits, and it is NOT, so sampling from the replay buffer works fine.
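For reference, this is the kind of round-trip check I did to confirm the packing is lossless (a minimal sketch, not my exact code):

import numpy as np

# A binarized 84x84 frame should survive a packbits/unpackbits
# round trip unchanged, at 1/8 of the storage.
frame = (np.random.rand(84, 84) > 0.5).astype(np.uint8)   # binarized frame
packed = np.packbits(frame)                                # 882 bytes instead of 7056
restored = np.unpackbits(packed).reshape(84, 84)           # 84*84 is divisible by 8
assert np.array_equal(frame, restored)                     # lossless

This is the main loop of the code (maybe the problem is in there):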
frame_count = 0
LIFE_CHECKPOINT = 3
for episode in range(EPISODE, EPISODES):
    # reset the environment and init variables
    frames, _, _ = space_invaders.resetEnv(NUM_OF_FRAMES)
    state = stackFrames(frames)
    done = False
    episode_reward = 0
    episode_reward_clipped = 0
    frames_buffer = frames  # contains preprocessed frames (not stacked)
    while not done:
        if episode % REPORT_EPISODE_FREQ == 0:
            space_invaders.render()
        # select an action from the behaviour policy
        action, Q_value, is_greedy_action = self.EGreedyPolicy(Q, state, epsilon, len(ACTIONS))
        # perform the action in the environment
        observation, reward, done, info = space_invaders.step(action)
        episode_reward += reward  # update episode reward
        reward, LIFE_CHECKPOINT = self.getCustomReward(reward, info, LIFE_CHECKPOINT)
        episode_reward_clipped += reward
        frame = preprocessFrame(observation, RESOLUTION)
        # pop the first frame from the buffer, and add the new one at the end
        # (s1=[f1,f2,f3,f4], s2=[f2,f3,f4,f5])
        frames_buffer.append(frame)
        frames_buffer.pop(0)
        new_state = stackFrames(frames_buffer)
        # add (s,a,r,s') tuple to the replay buffer
        replay_buffer.add(packState(state), action, reward, packState(new_state), done)
        state = new_state  # new state becomes current state
        frame_count += 1
        if replay_buffer.size() > MIN_OBSERVATIONS:  # if there is enough data in the replay buffer
            Q_values.append(Q_value)
            if frame_count % TRAINING_FREQUENCY == 0:
                batch = replay_buffer.sample(BATCH_SIZE)
                loss = Q.train_network(batch, BATCH_SIZE, GAMMA, len(ACTIONS))
                losses.append(loss)
                num_of_weight_updates += 1
            if epsilon > EPSILON_END:
                epsilon = self.decayEpsilon(epsilon, EPSILON_START, EPSILON_END, FINAL_EXPLORATION_STATE)
            if (num_of_weight_updates % TARGET_NETWORK_UPDATE_FREQ == 0) and (num_of_weight_updates != 0):
                # update weights of the target network
                Q.update_target_network()
                print("Target_network is updated!")
    episode_rewards.append(episode_reward)
I have also checked the Q.train_network and Q.update_target_network functions, and they work fine.
I was wondering if the problem could be in the hyperparameters:
ACTIONS = {"NOOP":0,"FIRE":1,"RIGHT":2,"LEFT":3,"RIGHTFIRE":4,"LEFTFIRE":5}
NUM_OF_FRAMES = 4 # number of frames that make 1 state
EPISODES = 10000 # number of episodes
BUFFER_SIZE = 300000 # size of the replay buffer (cannot use a bigger size due to RAM)
MIN_OBSERVATIONS = 30000
RESOLUTION = 84 # resolution of frames
BATCH_SIZE = 32
EPSILON_START = 1 # starting value for the exploration probability
EPSILON_END = 0.1
FINAL_EXPLORATION_STATE = 300000 # final frame for which epsilon is decayed
GAMMA = 0.99 # discount factor
TARGET_NETWORK_UPDATE_FREQ = 10000
REPORT_EPISODE_FREQ = 100
TRAINING_FREQUENCY = 4
OPTIMIZER = RMSprop(lr=0.00025,rho=0.95,epsilon=0.01)

How to automatically crop an .OBJ 3D model to a bounding box?

In the now obsoleted Autodesk ReCap API it was possible to specify a "bounding box" around the scene to be generated from images.
In the resulting models, any vertices outside the bounding box were discarded, and any volumes that extended beyond the bounding box were truncated to have faces at the box boundaries.
I am now using Autodesk's Forge Reality Capture API, which replaced ReCap. Apparently, this new API does not allow the user to specify a bounding box.
So I am now searching for a program that takes an .OBJ file and a specified bounding box as input, and outputs a file of just the vertices and faces within this bounding box.
Given that there is no way to specify the bounding box in the Reality Capture API, I created the Python program below. It is crude, in that it only discards faces that have vertices outside the bounding box. And it actually discards them nondestructively, by commenting them out in the output OBJ file. This allows you to uncomment them later and use a different bounding box.
This may not be what you need if you truly want to remove all relevant v, vn, vt, vp and f lines that are outside the bounding box, because the OBJ file size remains mostly unchanged. But for my particular needs, keeping all the records and just using comments was preferable.
# obj3Dcrop.py
# (c) Scott L. McGregor, Dec 2019
# License: free for all non-commercial uses. Contact author for any other uses.
# Changes and enhancements must be shared with the author, and be subject to the same use terms.

# TL;DR: This program uses a bounding box, and "crops" faces and vertices from a
# Wavefront .OBJ format file (created by the Autodesk Forge Reality Capture API)
# if one of the vertices in a face is not within the bounds of the box.
#
# METHOD:
# 1) All lines other than "v" vertex definitions and "f" face definitions
#    are copied UNCHANGED from the input .OBJ file to an output .OBJ file.
# 2) All "v" vertex definition lines have their (x, y, z) positions tested to see if
#    minX < x < maxX and minY < y < maxY and minZ < z < maxZ.
#    If TRUE, we want to keep this vertex in the new OBJ, so we store its
#    IMPLICIT ORDINAL position in the file in a dictionary called v_keepers.
#    If FALSE, we will use its absence from the v_keepers dictionary as a way to
#    identify faces that contain it and drop them. All "v" lines are also copied
#    unchanged to the output file.
# 3) All "f" lines (face definitions) are inspected to verify that all 3 vertices in
#    the face are in the v_keepers list. If they are, the "f" line is output unchanged.
# 4) Any "f" line that refers to a vertex that was cropped is prefixed by "# CROPPED "
#    in the output file. Lines beginning with # are treated as comments, and ignored
#    in future processing.
#
# KNOWN LIMITATIONS: This program generates models in which the out-of-bounds faces
# have been removed. The vertices that were found outside the bounding box are still
# in the OBJ file, but they are now disconnected and therefore ignored in later
# processing. The "f" lines for faces with vertices outside the bounding box are also
# still in the output file, but now commented out, so they aren't processed. Because
# this is non-destructive, we can easily change our bounding box later, uncomment the
# cropped lines and reprocess.
#
# This might be an incomplete solution for some potential users. For such users,
# a more complete program would delete unneeded v, vn, vt and vp lines when the v
# vertex that they refer to is dropped. But note that this requires renumbering all
# references to these vertex definitions in the "f" face definition lines. Such a
# more complete solution would also DISCARD all "f" lines with any vertices that are
# out of bounds, instead of just commenting them out. Such a rewritten .OBJ file
# would be far more compact, but changing the bounding box would require saving the
# pre-cropped original.
#
# QUIRK: The OBJ file format defines v, vn, vt, vp and f elements by their IMPLICIT
# ordinal occurrence in the file, with each element type maintaining its OWN separate
# sequence. It then references those definitions EXPLICITLY in "f" face definitions.
# So deleting (or commenting out) element references requires appropriate rewriting
# of all the "f" lines, tracking all the new implicit positions. Such rewriting is
# not particularly hard to do, but it is one more place to make a mistake, and could
# make the algorithm more complicated to understand. This program doesn't bother,
# because all further processing of the output OBJ file ignores unreferenced
# v, vn, vt and vp elements.
#
# Saving all lines rather than deleting them to save space is a tradeoff involving
# undo capability, compute cycles, storage space (unreferenced lines) and maintenance
# complexity. It is left to the motivated programmer to add this complexity if needed.

import sys

# bounding_box = sys.argv[1]  # should be the only string passed: (maxX, maxY, maxZ, minX, minY, minZ)
bounding_box = [10, 10, 10, -10, -10, 1]
maxX = bounding_box[0]
maxY = bounding_box[1]
maxZ = bounding_box[2]
minX = bounding_box[3]
minY = bounding_box[4]
minZ = bounding_box[5]

v_keepers = dict()  # keeps track of which vertices are within the bounding box

kept_vertices = 0
discarded_vertices = 0
kept_faces = 0
discarded_faces = 0
discarded_lines = 0
kept_lines = 0

obj_file = open('sample.obj', 'r')
new_obj_file = open('cropped.obj', 'w')

original_v_number = 1  # the ordinal position of the next "v" vertex line to process
new_v_number = 1       # the new ordinal position of this vertex if out-of-bounds vertices were discarded

for line in obj_file:
    line_elements = line.split()
    if not line_elements:  # copy blank lines through unchanged, so the indexing below can't fail
        new_obj_file.write(line)
        kept_lines = kept_lines + 1
        continue
    # Python doesn't have a SWITCH statement, but we only have three cases, so we'll just use cascading ifs
    if line_elements[0] != "f":      # if it isn't an "f" type line (face definition)
        if line_elements[0] != "v":  # and it isn't a "v" type line either (vertex definition)
            # ************************ PROCESS ALL NON-V AND NON-F LINE TYPES ******************
            # then we just copy it unchanged from the input OBJ to the output OBJ
            new_obj_file.write(line)
            kept_lines = kept_lines + 1
        else:  # line_elements[0] == "v"
            # ************************ PROCESS VERTICES ****************************************
            # a "v" line looks like this:
            # v x y z ...
            x = float(line_elements[1])
            y = float(line_elements[2])
            z = float(line_elements[3])
            if minX < x < maxX and minY < y < maxY and minZ < z < maxZ:
                # if the vertex is within the bounding box, we include it in the new OBJ file
                new_obj_file.write(line)
                v_keepers[str(original_v_number)] = str(new_v_number)
                new_v_number = new_v_number + 1
                kept_vertices = kept_vertices + 1
                kept_lines = kept_lines + 1
            else:  # if the vertex is NOT in the bounding box
                new_obj_file.write(line)
                discarded_vertices = discarded_vertices + 1
                discarded_lines = discarded_lines + 1
            original_v_number = original_v_number + 1
    else:  # line_elements[0] == "f"
        # ************************ PROCESS FACES ****************************************
        # an "f" line looks like this:
        # f v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3 ...
        # We need to drop any face line where ANY of the 3 vertices v1, v2 or v3 is NOT in v_keepers.
        # Note that v1, v2 and v3 are the first "/"-separated elements within each line element.
        v = ["", "", ""]
        for i in range(0, 3):
            v[i] = line_elements[i + 1].split('/')[0]
        # now check whether EACH of these 3 vertices is in v_keepers
        if v[0] in v_keepers and v[1] in v_keepers and v[2] in v_keepers:
            new_obj_file.write(line)
            kept_lines = kept_lines + 1
            kept_faces = kept_faces + 1
        else:  # at least one of the vertices in this face has been cropped, so drop the face too
            new_obj_file.write("# CROPPED " + line)
            discarded_lines = discarded_lines + 1
            discarded_faces = discarded_faces + 1
# end of line processing loop
obj_file.close()
new_obj_file.close()

print("kept vertices: ", kept_vertices, "discarded vertices: ", discarded_vertices)
print("kept faces: ", kept_faces, "discarded faces: ", discarded_faces)
print("kept lines: ", kept_lines, "discarded lines: ", discarded_lines)
Unfortunately, (at least for now) there is no way to specify the bounding box in Reality Capture API.

Custom Gym environment for step function processing with a DDPG Agent

I'm new to reinforcement learning, and I would like to process an audio signal using this technique. I built a basic step signal that I wish to flatten, to get my hands on OpenAI Gym and reinforcement learning in general.
To do so, I am using the GoalEnv provided by OpenAI, since I know what the target is: the flat signal.
(An image of the input and desired signals was attached here.)
The step function calls _set_action, which performs achieved_signal = convolution(input_signal, low_pass_filter) - offset; low_pass_filter takes a cutoff frequency as input as well.
The cutoff frequency and the offset are the parameters that act on the observation to get the output signal.
The designed reward function returns the frame-to-frame L2 norm between the achieved signal and the desired signal, negated, to penalize a large norm.
Following is the environment I created:
import numpy as np
import gym
from gym import spaces
from scipy import signal

def butter_lowpass(cutoff, nyq_freq, order=4):
    normal_cutoff = float(cutoff) / nyq_freq
    b, a = signal.butter(order, normal_cutoff, btype='lowpass')
    return b, a

def butter_lowpass_filter(data, cutoff_freq, nyq_freq, order=4):
    b, a = butter_lowpass(cutoff_freq, nyq_freq, order=order)
    y = signal.filtfilt(b, a, data)
    return y

class StepSignal(gym.GoalEnv):

    def __init__(self, input_signal, sample_rate, desired_signal):
        super(StepSignal, self).__init__()
        self.initial_signal = input_signal
        self.signal = self.initial_signal.copy()
        self.sample_rate = sample_rate
        self.desired_signal = desired_signal
        self.distance_threshold = 10e-1

        max_offset = abs(max(max(self.desired_signal), max(self.signal))
                         - min(min(self.desired_signal), min(self.signal)))

        self.action_space = spaces.Box(low=np.array([10e-4, -max_offset]),
                                       high=np.array([self.sample_rate/2 - 0.1, max_offset]),
                                       dtype=np.float16)
        obs = self._get_obs()
        self.observation_space = spaces.Dict(dict(
            desired_goal=spaces.Box(-np.inf, np.inf, shape=obs['achieved_goal'].shape, dtype='float32'),
            achieved_goal=spaces.Box(-np.inf, np.inf, shape=obs['achieved_goal'].shape, dtype='float32'),
            observation=spaces.Box(-np.inf, np.inf, shape=obs['observation'].shape, dtype='float32'),
        ))

    def step(self, action):
        action_range = self.action_space.high - self.action_space.low
        action = action_range / 2 * (action + 1)  # rescale action from [-1, 1]
        self._set_action(action)
        obs = self._get_obs()
        done = False
        info = {
            'is_success': self._is_success(obs['achieved_goal'], self.desired_signal),
        }
        reward = -self.compute_reward(obs['achieved_goal'], self.desired_signal)
        return obs, reward, done, info

    def reset(self):
        self.signal = self.initial_signal.copy()
        return self._get_obs()

    def _set_action(self, actions):
        actions = np.clip(actions, a_max=self.action_space.high, a_min=self.action_space.low)
        cutoff = actions[0]
        offset = actions[1]
        print(cutoff, offset)
        self.signal = butter_lowpass_filter(self.signal, cutoff, self.sample_rate/2) - offset

    def _get_obs(self):
        obs = self.signal
        achieved_goal = self.signal
        return {
            'observation': obs.copy(),
            'achieved_goal': achieved_goal.copy(),
            'desired_goal': self.desired_signal.copy(),
        }

    def compute_reward(self, goal_achieved, goal_desired):
        d = np.linalg.norm(goal_desired - goal_achieved)
        return d

    def _is_success(self, achieved_goal, desired_goal):
        d = self.compute_reward(achieved_goal, desired_goal)
        return (d < self.distance_threshold).astype(np.float32)
The environment can then be instantiated into a variable and flattened through the FlattenDictWrapper, as advised at https://openai.com/blog/ingredients-for-robotics-research/ (end of the page).
length = 20
sample_rate = 30 # 30 Hz
in_signal_length = 20*sample_rate # 20sec signal
x = np.linspace(0, length, in_signal_length)
# Desired output
y = 3*np.ones(in_signal_length)
# Step signal
in_signal = 0.5*(np.sign(x-5)+9)
env = gym.make('stepsignal-v0', input_signal=in_signal, sample_rate=sample_rate, desired_signal=y)
env = gym.wrappers.FlattenDictWrapper(env, dict_keys=['observation','desired_goal'])
env.reset()
The agent is a DDPG agent from keras-rl, since the actions can take any value in the continuous action_space described in the environment.
I wonder why the actor and critic nets need an input with an additional dimension, in input_shape=(1,) + env.observation_space.shape
nb_actions = env.action_space.shape[0]
# Building Actor agent (Policy-net)
actor = Sequential()
actor.add(Flatten(input_shape=(1,) + env.observation_space.shape, name='flatten'))
actor.add(Dense(128))
actor.add(Activation('relu'))
actor.add(Dense(64))
actor.add(Activation('relu'))
actor.add(Dense(nb_actions))
actor.add(Activation('linear'))
actor.summary()
# Building Critic net (Q-net)
action_input = Input(shape=(nb_actions,), name='action_input')
observation_input = Input(shape=(1,) + env.observation_space.shape, name='observation_input')
flattened_observation = Flatten()(observation_input)
x = Concatenate()([action_input, flattened_observation])
x = Dense(128)(x)
x = Activation('relu')(x)
x = Dense(64)(x)
x = Activation('relu')(x)
x = Dense(1)(x)
x = Activation('linear')(x)
critic = Model(inputs=[action_input, observation_input], outputs=x)
critic.summary()
# Building Keras agent
memory = SequentialMemory(limit=2000, window_length=1)
policy = BoltzmannQPolicy()
random_process = OrnsteinUhlenbeckProcess(size=nb_actions, theta=0.6, mu=0, sigma=0.3)
agent = DDPGAgent(nb_actions=nb_actions, actor=actor, critic=critic, critic_action_input=action_input,
                  memory=memory, nb_steps_warmup_critic=2000, nb_steps_warmup_actor=10000,
                  random_process=random_process, gamma=.99, target_model_update=1e-3)
agent.compile(Adam(lr=1e-3, clipnorm=1.), metrics=['mae'])
Finally, the agent is trained:
filename = 'mem20k_heaviside_flattening'
hist = agent.fit(env, nb_steps=10, visualize=False, verbose=2, nb_max_episode_steps=5)
with open('./history_dqn_test_' + filename + '.pickle', 'wb') as handle:
    pickle.dump(hist.history, handle, protocol=pickle.HIGHEST_PROTOCOL)
agent.save_weights('h5f_files/dqn_{}_weights.h5f'.format(filename), overwrite=True)
Now here is the catch: the agent always seems to be stuck in the same neighborhood of output values across all episodes for the same instance of my env.
The cumulated reward is negative, since I only allow the agent to get negative rewards. I took this from https://github.com/openai/gym/blob/master/gym/envs/robotics/fetch_env.py, which is part of OpenAI's code, as an example.
Across one episode, I should get varying sets of actions converging towards a (cutoff_final, offset_final) pair that brings my input step signal close to my flat desired signal, which is clearly not the case. In addition, I thought that for successive episodes I should get different actions.
I wonder why the actor and critic nets need an input with an additional dimension, in input_shape=(1,) + env.observation_space.shape
I think the GoalEnv is designed with HER (Hindsight Experience Replay) in mind, since it will use the "sub-spaces" inside the observation_space to learn from sparse reward signals (there is a paper on the OpenAI website that explains how HER works). I haven't looked at the implementation, but my guess is that there needs to be an additional input since HER also processes the "goal" parameter.
Since it seems you are not using HER (it works with any off-policy algorithm, including DQN, DDPG, etc.), you should handcraft an informative reward function (rewards that are not binary, e.g., 1 if the objective is achieved, 0 otherwise) and use the base Env class. The reward should be calculated inside the step method; since rewards in MDPs are functions like r(s, a, s'), you will probably have all the information you need there. Hope it helps.
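A minimal sketch of that suggestion (the class name and the signal setup are illustrative, not taken from the original environment): a plain gym.Env whose dense reward is computed inside step().

import numpy as np
import gym
from gym import spaces

class DenseStepSignal(gym.Env):
    # Plain Env (no goal dict): dense, informative reward computed inside step()
    def __init__(self, input_signal, desired_signal):
        super(DenseStepSignal, self).__init__()
        self.initial_signal = input_signal
        self.signal = input_signal.copy()
        self.desired_signal = desired_signal
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=input_signal.shape, dtype=np.float32)

    def step(self, action):
        # ... apply the action (cutoff, offset) to self.signal here ...
        # Dense reward: negative distance to the target, computed in step()
        reward = -np.linalg.norm(self.desired_signal - self.signal)
        done = False
        return self.signal.copy(), reward, done, {}

    def reset(self):
        self.signal = self.initial_signal.copy()
        return self.signal.copy()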

Python: how to define a function that only accepts an integer input and can be used for multiple lines

I am trying to make a program where the user inputs numbers at multiple different points in the code, and if the user inputs something other than a number, the program should ask them again to input the number correctly. I tried defining a single function that I could use for all of these inputs, but every time I run the program, it crashes. Any help would be much appreciated, thank you.
My code:
def error():
    global m1
    global m2
    global w1
    global w2
    while True:
        try:
            int(m1 or m2 or w1 or w2)
        except ValueError:
            try:
                float(m1 or m2 or w1 or w2)
            except ValueError:
                m1 or m2 or w1 or w2=input("please input your response correctly...")
        break

m1=input("\nWhat was your first marking period percentage?")
error()
w1=input("\nWhat is the weighting of the first marking period? (in decimal)")
error()
m2=input("\nWhat was your second marking period percentage?")
error()
w2=input("\nWhat is the weighting of the second marking period? (in decimal)")
error()
def user_input(msg):
    inp = input(msg)
    try:
        return int(inp) if inp.isnumeric() else float(inp)
    except ValueError as e:
        return user_input("Please enter a numeric value")

m1 = user_input("\nWhat was your first marking period percentage?")
w1 = user_input("\nWhat is the weighting of the first marking period? (in decimal)")
m2 = user_input("\nWhat was your second marking period percentage?")
w2 = user_input("\nWhat is the weighting of the second marking period? (in decimal)")
You should write your function to get one number at a time. If an exception is triggered somewhere, it should be handled. Note how the get_number function shown below keeps asking for a number while showing the prompt specified by its caller. If you are not running Python 3.6 or higher, you will need to comment out the call to print in the main function.
#! /usr/bin/env python3

def main():
    p1 = get_number('What is your 1st marking period percentage? ')
    w1 = get_number('What is the weighting of the 1st marking period? ')
    p2 = get_number('What is your 2nd marking period percentage? ')
    w2 = get_number('What is the weighting of the 2nd marking period? ')
    score = calculate_score((p1, p2), (w1, w2))
    print(f'Your score is {score:.2f}%.')

def get_number(prompt):
    while True:
        try:
            text = input(prompt)
        except EOFError:
            raise SystemExit()
        else:
            try:
                number = float(text)
            except ValueError:
                print('Please enter a number.')
            else:
                break
    return number

def calculate_score(percentages, weights):
    if len(percentages) != len(weights):
        raise ValueError('percentages and weights must have same length')
    return sum(p * w for p, w in zip(percentages, weights)) / sum(weights)

if __name__ == '__main__':
    main()
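For example, a session with this program might look like this (the inputs are hypothetical):

What is your 1st marking period percentage? 90
What is the weighting of the 1st marking period? 0.4
What is your 2nd marking period percentage? 80
What is the weighting of the 2nd marking period? 0.6
Your score is 84.00%.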
With the following code you can make a function that only accepts an integer value:
def input_type(a):
    if type(10) == type(a):
        print("integer")
    else:
        print("not integer")

a = int(input())
input_type(a)