Why are the parameters calc_cv_statistics and search_by_train_test_split set to True by default in CatBoost's grid search?
I don't get the point: we choose the best values by the mean score over the folds in cross-validation, so why do we then also need to choose them on a test split?
I tried the default parameters and different values for cv, but I still don't understand how the parameters in grid search are actually chosen.
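For reference, a minimal sketch of the kind of call in question (the grid values and training data are placeholders); as far as I understand the documented behaviour, with search_by_train_test_split=True each parameter combination is scored on a train/test split of the input data, and with calc_cv_statistics=True cross-validation statistics are then computed for the best combination found:

from catboost import CatBoostClassifier

model = CatBoostClassifier()
grid = {
    "learning_rate": [0.03, 0.1],   # placeholder grid
    "depth": [4, 6, 8],
}
result = model.grid_search(
    grid,
    X=X_train,                      # placeholder training data
    y=y_train,
    cv=3,
    # defaults shown explicitly: candidates are scored on a holdout split,
    # and CV statistics are reported for the best candidate found
    search_by_train_test_split=True,
    calc_cv_statistics=True,
)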
If I calculate a two-way ANOVA with aov(), what should I use to interpret the data: the summary() of the ANOVA shown in the console, or apa.aov.table(), which produces an APA-style table in Word? The p-values differ a lot.
I suspect that the reason the p-values differ is that the aov() function defaults to type I sums of squares, whereas the apa.aov.table() function defaults to type III sums of squares unless specified otherwise (documentation here: https://www.rdocumentation.org/packages/apaTables/versions/2.0.8/topics/apa.aov.table).
At least in my area, type III sums of squares are more or less the standard at the moment, so if that is also the case for you, I would recommend the Anova() function from the "car" package, which lets you specify which type you would like to use.
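A minimal sketch of what that would look like (the data frame and variable names are placeholders); note that type III sums of squares in R are usually only meaningful with sum-to-zero contrasts:

library(car)

options(contrasts = c("contr.sum", "contr.poly"))  # sum-to-zero contrasts for type III
fit <- aov(y ~ A * B, data = mydata)               # the two-way ANOVA, as with aov() before
Anova(fit, type = "III")                           # type III sums of squares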
I use a seq2seq model and it can compute the BLEU score (an NMT metric) every epoch. However, I cannot set the BLEU score as the validation metric, so early stopping doesn't work during training. I read the source code, but there are no hints as to what kind of string could be used for the validation metric other than "+loss". Please save me, thanks!
The default validation_metric is actually "-loss", not "+loss". The "-" means this is a metric that should be minimized, not maximized.
So to use BLEU score instead, set the validation_metric to "+BLEU".
In general, you can use any metric that's returned by your model's .get_metrics() method. The name you use for validation_metric just has to match the corresponding key in the dictionary returned by .get_metrics().
In your case, presumably your model's .get_metrics() method returns something like {"BLEU": ...}, which is why validation_metric should be set to "+BLEU".
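A minimal sketch of how the pieces fit together, assuming an AllenNLP-style model (the class name and the way the BLEU value is tracked are illustrative):

from typing import Dict
from allennlp.models import Model

class MySeq2Seq(Model):  # hypothetical model class
    def get_metrics(self, reset: bool = False) -> Dict[str, float]:
        # "BLEU" is the key the trainer looks up when
        # validation_metric is set to "+BLEU";
        # how the score itself is computed and stored is up to the model
        return {"BLEU": self._bleu_score}

The trainer configuration then refers to that key, e.g. "validation_metric": "+BLEU" (together with "patience" if you want early stopping).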
I'm new to programming/ray and have a simple question about which parameters can be specified when using Ray Tune. In particular, the ray tune documentation says that all of the auto-filled fields (steps_this_iter, episodes_this_iter, etc.) can be used as stopping conditions or in the Scheduler/Search Algorithm specification.
However, the following only works once I remove the "episodes_this_iter" specification. Does this work only as part of the stopping criteria?
import ray
from ray import tune
from ray.rllib.agents.ppo import PPOTrainer

ray.init()
tune.run(
    PPOTrainer,
    stop={"training_iteration": 1000},
    config={
        "env": qsdm.QSDEnv,            # my own environment and config
        "env_config": defaultconfig,
        "num_gpus": 0,
        "num_workers": 1,
        "lr": tune.grid_search([0.00005, 0.00001, 0.0001]),
        "episodes_this_iter": 2500,
    },
)
tune.run() is the one filling in those fields so that we can use them elsewhere, and the stopping criterion is just one of the places where we can use them.
To see why the example doesn't work, consider a simpler analogue:
episodes_total: 100
The trainer itself is the one incrementing the episode count so that the rest of the system knows how far along a trial is. It makes no sense for us to set such a field, or to fix it to a particular value, from the outside, and the same reasoning applies to the other auto-filled fields in the list.
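A minimal sketch, reusing the names from the question above: the place where an auto-filled field like episodes_this_iter can legitimately appear is the stop argument (whether that particular threshold is useful is a separate question), while config only holds inputs to the trainer:

tune.run(
    PPOTrainer,
    # valid: stop each trial once the auto-filled counters reach these thresholds
    stop={"training_iteration": 1000, "episodes_this_iter": 2500},
    # config holds inputs to the trainer; auto-filled result fields don't belong here
    config={
        "env": qsdm.QSDEnv,
        "env_config": defaultconfig,
        "lr": tune.grid_search([0.00005, 0.00001, 0.0001]),
    },
)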
As for schedulers and search algorithms, I have no experience with them.
But what we want to do is put those conditions inside the scheduler or search algorithm itself, not in the trainer config directly.
Here's an example with Bayesian optimisation search, although I don't know what it would mean to do this:
from ray import tune
from ray.tune.suggest.bayesopt import BayesOptSearch

tune.run(
    # ...
    # 10 trials
    num_samples=10,
    search_alg=BayesOptSearch(
        # look for learning rates within this range:
        {'lr': (0.00001, 0.1)},
        # optimise for this metric:
        metric='episodes_this_iter',  # <------- auto-filled field here
        mode='max',
        utility_kwargs={
            'kind': 'ucb',
            'kappa': 2.5,
            'xi': 0.0,
        },
    ),
)
I have an agent which should react to different inputs I give it. Let 'A->B' stand for the agent's reaction B to an input A.
I want my agent to learn to react differently depending on the history of inputs.
For example, let each 'episode' consist of: 1. Me giving an input. 2. Agent reacting. 3. Me giving another input. 4. Agent reacting. 5. End of episode.
If there are two possible inputs i1 and i2, and two possible actions a1 and a2, I want my agent to react as follows in all possible episodes (values not so important):
i1->a2, i1->a1; i1->a2, i2->a1; i2->a2, i1->a2; i2->a2, i2->a1;
i.e. I want my agent to react differently to an input in the second step depending on the inputs of both the first and second step.
Question: What would be an appropriate RL algorithm to learn this? In the beginning I wanted to use Q-Learning, but the problem is that my state transitions do not depend on the agent's action. I.e. if it reacts with a1 to i1, the agent doesn't at that point know whether the next 'state' will be i1 or i2.
Help would be greatly appreciated.
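To make the setup concrete, here is a minimal sketch (names are illustrative) of the episode structure described above as a Gym-style environment, where the observation is the history of inputs so far, so the agent can condition its second reaction on both inputs; the reward scheme is left open, as in the question:

import random

class TwoStepEnv:
    """Sketch of the two-step episodes described above."""

    INPUTS = ["i1", "i2"]
    ACTIONS = ["a1", "a2"]

    def reset(self):
        self.t = 0
        self.history = [random.choice(self.INPUTS)]   # first input
        return tuple(self.history)                    # observation = inputs seen so far

    def step(self, action):
        self.t += 1
        done = self.t == 2
        reward = 0.0                                  # reward scheme left open
        if not done:
            # the next input does not depend on the agent's action
            self.history.append(random.choice(self.INPUTS))
        return tuple(self.history), reward, done, {}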
Here is a sample dataviz:
Heatmap of linear order quantity (region vs quantity).
I created a calculated field logarithmic = int(log([Order Quantity])), and later on logarithmic = int(log([Order Quantity], 10)).
Then I built a heatmap where size is based on logarithmic.
The size doesn't change and the number is incorrect. Please guide me.
tl;dr Sum the order quantities before taking the logarithm.
int(log(SUM([Order Quantity])))
Otherwise you are taking the logarithm of each individual order item and then adding up the logarithms. The aggregation function (SUM() in your case) is applied when you place the field on the shelf, unless you make it explicit in the calculated field.
Here are a couple of ways to use the log field, dual- or triple-encoding the log by size, color and shape. A custom legend works better than the default legends when symbols carry multiple encodings.