What is the best way to model an environment to force an agent to select `x out of n` choices? - reinforcement-learning

I have an RL problem where I want the agent to make a selection of x out of an array of size n.
I.e. if I have [0, 1, 2, 3, 4, 5] then n = 6 and if x = 3 a valid action could be
[2, 3, 5].
So far I have tried two approaches. The first is to output n continuous scores and select the x highest ones; this works quite OK. The second is a MultiDiscrete action space with x values, each ranging from 0 to n-1, where I iteratively replace duplicate choices.
Is there some other optimal action space I am missing that would force the agent to make unique choices?
Many thanks for your valuable insights and tips in advance! I am happy to try all!

Since reinforcement learning is mostly about interacting with an environment, you can approach it like this:
Let the agent pick its x choices one at a time. After each choice, you can either update the set of possible choices by removing the one just taken (using a temporary action list), or keep the action space fixed and punish the agent with a negative reward whenever it picks something it has already chosen. I think this could solve your problem.
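For the first variant, here is a minimal sketch of such a sequential-selection environment (the class name SelectXofNEnv and the placeholder reward are my own inventions for illustration, not a standard API):

import numpy as np

class SelectXofNEnv:
    """Pick x distinct indices out of n, one per step; already-taken
    indices are masked out, so choices are unique by construction."""

    def __init__(self, n=6, x=3):
        self.n, self.x = n, x
        self.reset()

    def reset(self):
        self.mask = np.ones(self.n, dtype=bool)  # True = still selectable
        self.chosen = []
        return self.mask.copy()

    def step(self, action):
        assert self.mask[action], "the policy must respect the mask"
        self.mask[action] = False
        self.chosen.append(action)
        done = len(self.chosen) == self.x
        reward = self._score(self.chosen) if done else 0.0
        return self.mask.copy(), reward, done, {}

    def _score(self, chosen):
        return 0.0  # placeholder: plug your task's actual reward in here

The policy network then receives the mask (plus any task state) as part of the observation and sets the logits of already-chosen indices to a large negative value before sampling, so duplicates become impossible rather than merely discouraged.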

Related

Lasso Regression

For the lasso (linear regression with L1 regularization) with a fixed value of λ, it is necessary to use cross-validation to select the best optimization algorithm.
I know for a fact that we can use cross-validation to find the optimal value of λ, but is it necessary to use cross-validation when λ is fixed?
Any thoughts, please?
Cross-validation isn't about whether your regularization parameter is fixed or not; it's more related to estimating a metric such as R^2.
Let's say you have 100 records and divide your data into 5 sub-datasets, so each sub-dataset contains 20 records.
Out of these 5 sub-datasets, there are 5 different ways to assign one of them as the cross-validation (CV) data.
For each of these 5 scenarios we can compute R^2, and then take the average R^2.
This way, you can compare your model's R^2 score against the average R^2 score.
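As a concrete sketch with scikit-learn (synthetic data standing in for a real dataset; λ is called alpha here and stays fixed while CV estimates the average R^2):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

# 100 records, 5 folds of 20, exactly as in the description above
X, y = make_regression(n_samples=100, n_features=20, noise=5.0, random_state=0)

lasso = Lasso(alpha=0.1)  # lambda is fixed; no tuning happens here
scores = cross_val_score(lasso, X, y, cv=5, scoring='r2')
print(scores)         # one R^2 per fold
print(scores.mean())  # the average R^2 to compare against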

Find marginal effects in multiple equation model with ordered probit - cmp

I am really new to Stata, so my question might be trivial.
I am using package cmp to estimate a bivariate model that goes as follows:
cmp(d_ln_jobs = d_layer) (d_layer = d_tariff), vce(robust) ind($cmp_cont $cmp_probit) nolr quietly difficult
d_layer is an ordered variable that assumes -4, -3, ... 4.
How could I obtain the marginal effect of d_tariff on both dependent variables, evaluated at d_tariff's median?
Here is what I've tried:
margins, dydx(d_tariff) at((median)) force
I don't think this is correct since, in the output, the entry for dy/dx is 0, and the header of the output shows:
"Expression: linear prediction, predict()"
Does this last part mean that it would show predicted probabilities rather than marginal effects? Besides, shouldn't I get a value different from 0? In my mind, d_tariff changes d_layer, which in turn changes d_ln_jobs. And why don't I get two values, one showing the marginal effect on d_layer and the other on d_ln_jobs?

Is it possible to specify "episodes_this_iter" with the Ray Tune search algorithm?

I'm new to programming/Ray and have a simple question about which parameters can be specified when using Ray Tune. In particular, the Ray Tune documentation says that all of the auto-filled fields (steps_this_iter, episodes_this_iter, etc.) can be used as stopping conditions or in the Scheduler/Search Algorithm specification.
However, the following only works once I remove the "episodes_this_iter" specification. Does this work only as part of the stopping criteria?
ray.init()
tune.run(
    PPOTrainer,
    stop={"training_iteration": 1000},
    config={
        "env": qsdm.QSDEnv,
        "env_config": defaultconfig,
        "num_gpus": 0,
        "num_workers": 1,
        "lr": tune.grid_search([0.00005, 0.00001, 0.0001]),
        "episodes_this_iter": 2500,
    },
)
tune.run() is what fills in those fields so they can be used elsewhere, and the stopping criterion is just one of the places where they can be used.
To see why the example doesn't work, consider a simpler analogue:
episodes_total: 100
The trainer itself is the one incrementing the episode count so the rest of the system knows how far along we are. These fields are outputs of training; it doesn't work if we try to set one or fix it to a particular value. The same reasoning applies to the other fields in the list.
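As a sketch of the supported use, reusing the setup from your snippet (so it assumes the same qsdm.QSDEnv and defaultconfig): the auto-filled field goes in the stop dict, and each trial ends once the trainer has reported that many episodes in total:

tune.run(
    PPOTrainer,
    stop={"episodes_total": 100},  # read from the auto-filled result dict
    config={"env": qsdm.QSDEnv, "env_config": defaultconfig},
)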
I have no experience with the schedulers and search algorithms. But what we want to do is put those conditions inside the scheduler or search algorithm itself, not in the trainer config directly.
Here's an example with Bayesian optimisation search, although I don't know what it would mean to do this:

from ray.tune.suggest.bayesopt import BayesOptSearch

tune.run(
    # ...
    # 10 trials
    num_samples=10,
    search_alg=BayesOptSearch(
        # look for learning rates within this range:
        {'lr': (0.00001, 0.1)},
        # optimise for this metric:
        metric='episodes_this_iter',  # <------- auto-filled field here
        mode='max',
        utility_kwargs={
            'kind': 'ucb',
            'kappa': 2.5,
            'xi': 0.0,
        },
    ),
)

reinforcement learning model design - how to add up to 5

I am experimenting with reinforcement learning in Python using Keras. Most of the available tutorials use the OpenAI Gym library to create the environment, state, and action sets.
After practicing with many good examples written by others, I decided that I want to create my own reinforcement learning environment, state, and action sets.
This is what I think will be fun to teach the machine to do.
An array of integers from 1 to 4. I will call these targets.
targets = [[1, 2, 3, 4]]
Additional numbers (drawn at random) from 1 to 4. I will call these bullets.
bullets = [1, 2, 3, 4]
When I shoot a bullet at a target, the target's number becomes the sum of the original target number and the bullet number.
I want to shoot bullets (one at a time) at one of the targets to make its value exactly 5.
For example, given targets [1 2 3 4] and bullet 1, I want the machine to predict the correct index to shoot at.
In this case, it should be index 3, because 4 + 1 = 5.
curr_state = [[1, 2, 3, 4]]
bullet = 1
action = 3  # <-- index into curr_state
next_state = [[1, 2, 3, 5]]
I have been racking my brain to think of the best way to frame this as a reinforcement learning design. I tried a few variants, but the model's results are not very good (meaning it usually fails to make the number 5).
This is mostly because the state is two-dimensional: (1) the targets and (2) the bullet at that time. The encoding I have employed so far is:
State = 5 - targets - bullet
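For concreteness, that encoding collapses the 2D state into one vector whose zero entry marks the index to shoot (a tiny sketch of the formula above):

import numpy as np

targets = np.array([1, 2, 3, 4])
bullet = 1
state = 5 - targets - bullet
print(state)  # [3 2 1 0] -- the 0 at index 3 marks the correct target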
I was wondering if anyone can think of a better way to design this model?
Thanks in advance!
Alright, it looks like no one else is helping you out, so I wrote a Python environment file for you as you described, and made it as OpenAI-Gym-style as possible. Here is the link to it in my GitHub repository; you can copy the code or fork it. I will explain it below:
https://github.com/RuiNian7319/Miscellaneous/blob/master/ShootingRange.py
States = [0, 1, 2, ..., 10]
Actions = [-2, -1, 0, 1, 2]
So the game starts at a random number between 0 and 10 (you can change this easily if you want), and that random number is the "target" you described above. Given this target, your AI agent can fire the gun, shooting bullets corresponding to the actions above. The objective is for the bullet and the target to add up to 5. There are negative actions in case your agent overshoots 5, or in case the target starts above 5.
To get a positive reward, the agent has to reach 5. So if the current value is 3 and the agent shoots 2, the agent gets a reward of 1 since it reached the total value of 5, and that episode ends.
There are 3 ways for the game to end:
1) Agent gets 5
2) Agent fails to get 5 in 15 tries
3) The number is above 10. In this case, we say the target is too far
Sometimes, you need to shoot multiple times to get 5. So, if your agent shoots, its current bullet will be added to the state, and the agent tries again from that new state.
Example:
Current state = 2. Agent shoots 2. New state is 4, and the agent starts from 4 at the next time step. This "sequential decision making" is what creates a reinforcement learning environment rather than a contextual bandit.
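If you'd rather not follow the link, here is a minimal sketch of the environment as described (my own reconstruction, not the repository's exact code; the -1 rewards for the two failure cases are an assumption, since the answer only says negatives exist):

import random

class ShootingRangeEnv:
    """States 0..10, bullets -2..2, goal: make the target equal 5."""

    def __init__(self, max_tries=15):
        self.max_tries = max_tries
        self.reset()

    def reset(self):
        self.state = random.randint(0, 10)  # random starting target
        self.tries = 0
        return self.state

    def step(self, bullet):  # bullet is one of [-2, -1, 0, 1, 2]
        self.state += bullet
        self.tries += 1
        if self.state == 5:                # 1) agent gets 5: reward and end
            return self.state, 1.0, True, {}
        if self.state > 10:                # 3) target too far: end the episode
            return self.state, -1.0, True, {}
        if self.tries >= self.max_tries:   # 2) out of tries: end the episode
            return self.state, -1.0, True, {}
        return self.state, 0.0, False, {}  # keep shooting from the new state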
I hope this makes sense, let me know if you have any questions.

Determining edge weights given a list of walks in a graph

These questions regard a set of data with lists of tasks performed in succession and the total time required to complete them. I've been wondering whether it would be possible to determine useful things about the tasks' lengths, either as they are or with some initial guesstimation based on appropriate domain knowledge. I've come to think graph theory would be the way to approach this problem in the abstract, and have a decent basic grasp of the stuff, but I'm unable to know for certain whether I'm on the right track. Furthermore, I think it's a pretty interesting question to crack. So here we go:
Is it possible to determine the weights of edges in a directed weighted graph, given a list of walks in that graph with the lengths (summed weights) of said walks? I recognize the amount and quality of permutations on the routes taken by the walks will dictate the quality of any possible answer, but let's assume all possible walks and their lengths are given. If a definite answer isn't possible, what kind of things can be concluded about the graph? How would you arrive at those conclusions?
What if there were several similar walks with possibly differing lengths given? Can you calculate a decent average (or other illustrative measure) for each edge, given enough permutations on different routes to take? How will discounting some permutations from the available data set affect the calculation's accuracy?
Finally, what if you had a set of initial guesses as to the weights and had to refine those using the walks given? Would that improve upon your guesstimation ability, and how could you apply the extra information?
EDIT: Clarification on the difficulties of a plain linear algebraic approach. Consider the following set of walks:
a = 5
b = 4
b + c = 5
a + b + c = 8
A matrix equation with these values is unsolvable, but we'd still like to estimate the terms. There might be some helpful initial data available, such as in scenario 3, and in any case we can apply knowledge of the real world, such as that the length of a task can't be negative. I'd like to know if you have ideas on how to ensure we get reasonable estimates, and that we also know what we don't know, e.g. when there's not enough data to tell a from b.
This seems like an application of linear algebra.
You have a set of linear equations to solve, the variables being the lengths of the tasks (or edge weights).
For instance, suppose the task lengths were t1, t2, t3 for 3 tasks, and you are given:
t1 + t2 = 2 (task 1 and 2 take 2 hours)
t1 + t2 + t3 = 7 (all 3 tasks take 7 hours)
t2 + t3 = 6 (tasks 2 and 3 take 6 hours)
Solving gives t1 = 1, t2 = 1, t3 = 5.
You can use any linear algebra technique (e.g. Gaussian elimination, http://en.wikipedia.org/wiki/Gaussian_elimination) to solve these, which will tell you whether there is a unique solution, no solution, or an infinite number of solutions (there are no other possibilities).
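As a sketch of that determined case with NumPy:

import numpy as np

# t1 + t2 = 2, t1 + t2 + t3 = 7, t2 + t3 = 6
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
b = np.array([2, 7, 6], dtype=float)
print(np.linalg.solve(A, b))  # -> [1. 1. 5.], i.e. t1=1, t2=1, t3=5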
If you find that the linear equations do not have a solution, you can try adding a very small random number to some of the coefficients of the matrix and solving again (I believe this falls under perturbation theory). Matrices are notorious for radically changing behavior with small changes in their values, so this will likely give you an approximate answer reasonably quickly.
Or you can try introducing a 'slack' task in each walk (i.e. adding more variables) and picking the solution to the new equations where the slack tasks satisfy some linear constraints (like 0 < s_i < 0.0001, minimizing the sum of the s_i), using linear programming techniques.
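In the same spirit, a non-negative least-squares fit handles the inconsistent system from the question's EDIT directly, without hand-tuned perturbations (a sketch using SciPy):

import numpy as np
from scipy.optimize import nnls

# Walks as rows over edges (a, b, c): a=5, b=4, b+c=5, a+b+c=8
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
lengths = np.array([5, 4, 5, 8], dtype=float)

# Minimizes ||A w - lengths|| subject to w >= 0, so task lengths
# can't come out negative; the residual measures the inconsistency.
weights, residual = nnls(A, lengths)
print(weights, residual)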
Assume you have an unlimited number of arbitrary characters to represent each edge (a, b, c, d, etc.).
w is a list of all the walks, each in the form 0,a,b,c,d,e followed by its length (the leading 0 will be explained later).
The procedure: take the first walk whose symbols are not yet all resolved, express its first unresolved symbol as the LENGTH of that walk minus all the other values in the walk, and substitute that expression for the symbol in every walk. Repeat until nothing changes.
Example:
0,a,b,c,d,e 50
0,a,c,b,e 20
0,c,e 10
a is the first symbol. Replace all instances of "a" with 50,-b,-c,-d,-e.
New data:
50, 50
50,-d, 20 (the b, c, and e terms cancel)
0,c,e 10
Repeat until one value is left in each walk, and you are finished! Alternatively, the leading number can simply be subtracted from the length of each walk.
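This substitution is exactly symbolic elimination, so a sketch with SymPy both does the bookkeeping and makes the remaining ambiguity explicit:

import sympy as sp

a, b, c, d, e = sp.symbols('a b c d e')
eqs = [sp.Eq(a + b + c + d + e, 50),  # walk 0,a,b,c,d,e of length 50
       sp.Eq(a + c + b + e, 20),      # walk 0,a,c,b,e of length 20
       sp.Eq(c + e, 10)]              # walk 0,c,e of length 10

# solve() pins down what it can (d = 30) and leaves the rest in terms
# of free symbols (b and e), i.e. it tells us what we don't know.
print(sp.solve(eqs, [a, b, c, d, e], dict=True))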
I'd forget about graphs and treat the lists of tasks as vectors: every task is a component whose value equals its cost (time to complete, in this case).
If tasks appear in different orders initially, that's where to use domain knowledge to bring them to a canonical form, and to assign multipliers if domain knowledge tells you that the ratio of costs is substantially influenced by ordering/timing. Timing is implicit in the initial ordering, but you may have to make it a function of time just for adjustment factors (say, driving at lunch time vs. driving at midnight); the function might be tabular/discrete. In general it's always much easier to evaluate ratios and relative biases (the hardness of doing something). You may need a functional language to do repeated rewrites of your vectors until there's nothing more that domain knowledge and rules can change.
With canonical vectors, consider just the presence or absence of each task (just 0|1 for this iteration) and look for minimal diffs, single-task diffs first; these will provide estimates involving a small number of variables, as sketched below. Keep doing this recursively, be ready to backtrack, and have a heuristic rule for the goodness or quality of the estimates so far. Keep track of good "rounds" that you backtracked from.
When you reach a minimal irreducible state (you can't make any more diffs; all vectors have the same remaining tasks), you can do some basic statistics like variance, mean, and median, look for big outliers, and look for ways to improve the initial domain-knowledge-based estimates that led to the canonical form. If you find a lot of them and can infer new rules, take them in and start the whole process from the start.
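As a tiny sketch of that minimal-diff step on hypothetical data (each row is a 0|1 task-presence vector with its observed total time):

import numpy as np

X = np.array([[1, 1, 0],   # walk containing tasks 1 and 2
              [1, 1, 1]])  # same walk plus task 3
y = np.array([2.0, 7.0])   # observed total times

diff = X[1] - X[0]
if diff.sum() == 1:        # single-task diff: the cost falls right out
    task = int(diff.argmax())
    print("estimated cost of task", task, ":", y[1] - y[0])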
Yes, this can cost a lot :-)