Is it possible to access the A2C total loss and whether the environment truncated or terminated within a custom callback?
I'd like to access truncated and terminated in _on_step. That would allow me to terminate training when the environment truncates, and also to record training episode durations. I'd also like to be able to record the total loss, which I assume happens in _on_rollout_end?
You need to attach a callback that implements the _on_step method and returns a bool based on your env's variables. Something like this (I always check whether my env is a VecEnv, since it exposes its variables a bit differently than a non-vectorized one):
from stable_baselines3.common.callbacks import BaseCallback
from stable_baselines3.common.vec_env import VecEnv

class StopOnTruncCallback(BaseCallback):
    def __init__(self, verbose: int = 0):
        super().__init__(verbose)

    def _on_step(self) -> bool:
        # Returning False from _on_step aborts training, so stop once the env reports truncation.
        return not self._is_trunc()

    def _is_trunc(self) -> bool:
        if isinstance(self.training_env, VecEnv):
            return self.training_env.get_attr("truncated")[0]
        else:
            return self.training_env.truncated
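For the loss half of the question, one option (a sketch only, not verified against every SB3 release) is to read back what the algorithm itself recorded to its Logger from _on_rollout_end. Note that those numbers belong to the previous gradient update, because the update for the rollout that just finished has not run yet, and the exact key names can differ between algorithms, so check the names printed in your console table:

from stable_baselines3.common.callbacks import BaseCallback

class RolloutLossLogger(BaseCallback):
    """Sketch: read the loss values the algorithm last wrote to its Logger.
    The key names below ("train/policy_loss", "train/value_loss") match what
    recent SB3 versions print for A2C/PPO; adjust them to your own output."""

    def _on_step(self) -> bool:
        return True  # keep training running

    def _on_rollout_end(self) -> None:
        logged = self.model.logger.name_to_value  # most recently recorded scalars
        policy_loss = logged.get("train/policy_loss")
        value_loss = logged.get("train/value_loss")
        # These values come from the *previous* update, since the gradient step
        # for the rollout that just ended has not happened yet.
        if self.verbose and policy_loss is not None and value_loss is not None:
            print(f"policy_loss={policy_loss:.4f} value_loss={value_loss:.4f}")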
I am trying to train a custom environment using PPO via Stable-Baselines3 and OpenAI Gym. For some reason the rollout statistics are not being reported for this custom environment when I try to train the PPO model.
The code that I am using is below (I have not included the code for the CustomEnv for brevity):
env = CustomEnv(mode = "discrete")
env = Monitor(env, log_dir)
model = PPO("MlpPolicy", env, verbose=1, tensorboard_log = log_dir)
timesteps = 5000
for i in range(3):
model.learn(total_timesteps = timesteps, reset_num_timesteps = False, tb_log_name = "PPO")
model.save(f"{models_dir}/car_model_{timesteps * i}")
Below is an image showing the output from the above code (on the right of the image); the left side of the image shows the usual output from a dummy environment that I use for debugging.
I have already tried adding the line of code:
env = Monitor(env, log_dir)
But that doesn't change the output.
SOLVED: There was an edge case where the environment never ended, so the done variable remained False indefinitely. After fixing this bug, the rollout statistics reappeared.
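For anyone hitting the same symptom: Monitor only emits the rollout/ep_* statistics once an episode actually ends, so if your environment has no natural terminal state, a quick safeguard (a sketch; the 500-step cap is arbitrary) is gym's TimeLimit wrapper:

from gym.wrappers import TimeLimit
from stable_baselines3.common.monitor import Monitor

env = CustomEnv(mode="discrete")
env = TimeLimit(env, max_episode_steps=500)  # guarantees done=True eventually
env = Monitor(env, log_dir)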
I'm building a python app around an existing (mysql) database and am using automap to infer tables and relationships:
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

base = automap_base()
self.engine = create_engine(
    'mysql://%s:%s@%s/%s?charset=utf8mb4' % (
        config.DB_USER, config.DB_PASSWD, config.DB_HOST, config.DB_NAME
    ), echo=False
)

# reflect the tables
base.prepare(self.engine, reflect=True)

self.TableName = base.classes.table_name
Using this I can do things like session.query(TableName) etc...
However, I'm worried about performance, because every time the app runs it will do the whole inference again.
Is this a legitimate concern?
If so, is there a possibility to 'cache' the output of Automap?
I think that "reflecting" the structure of your database is not the way to go. Unless your app tries to "infer" things from the structure, like static code analysis would for source files, then it is unnecessary. The other reason for reflecting it at run-time would be the reduced time to begin "using" the database using SQLAlchemy. However:
Another option would be to use something like SQLACodegen (https://pypi.python.org/pypi/sqlacodegen):
It will "reflect" your database once and create a 99.5% accurate set of declarative SQLAlchemy models for you to work with. However, this does require that you keep the model subsequently in-sync with the structure of the database. I would assume that this is not a big concern seeing as the tables you're already-working with are stable-enough such that run-time reflection of their structure does not impact your program much.
Generating the declarative models is essentially a "cache" of the reflection; it's just that SQLACodegen saves it as a very readable set of classes and fields instead of data in memory. Even with a changing structure, and my own "changes" to the generated declarative models, I still use SQLACodegen later in a project whenever I make database changes. It means that my models are generally consistent with one another and that I don't have things such as typos and data mismatches caused by copy-pasting.
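To give a sense of what that "cache" looks like, the file SQLACodegen emits is an ordinary declarative module, roughly of this shape (the table and column names here are invented for illustration):

# models.py as emitted by sqlacodegen (illustrative shape only)
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
metadata = Base.metadata

class TableName(Base):
    __tablename__ = 'table_name'

    id = Column(Integer, primary_key=True)
    name = Column(String(255))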
Performance can be a legitimate concern. If the database schema is not changing, it can be time consuming to reflect the database every time a script is run. This is more of an issue during development than when starting up a long-running application. It's also a significant time saver if your database is on a remote server (again, particularly during development).
I use code that is similar to the answer here (as noted by @ACV). The general plan is to perform the reflection the first time, then pickle the metadata object. The next time the script runs, it looks for the pickle file and uses that. The file can be anywhere, but I place mine in ~/.sqlalchemy_cache. This is an example based on your code.
import os
import pickle

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.ext.declarative import declarative_base

self.engine = create_engine(
    'mysql://%s:%s@%s/%s?charset=utf8mb4' % (
        config.DB_USER, config.DB_PASSWD, config.DB_HOST, config.DB_NAME
    ), echo=False
)

metadata_pickle_filename = "mydb_metadata"
cache_path = os.path.join(os.path.expanduser("~"), ".sqlalchemy_cache")
cached_metadata = None

if os.path.exists(cache_path):
    try:
        with open(os.path.join(cache_path, metadata_pickle_filename), 'rb') as cache_file:
            cached_metadata = pickle.load(cache_file)
    except IOError:
        # cache file not found - no problem, reflect as usual
        pass

if cached_metadata:
    base = declarative_base(bind=self.engine, metadata=cached_metadata)
else:
    base = automap_base()
    base.prepare(self.engine, reflect=True)  # reflect the tables

    # save the metadata for future runs
    try:
        if not os.path.exists(cache_path):
            os.makedirs(cache_path)
        # make sure to open in binary mode - we're writing bytes, not str
        with open(os.path.join(cache_path, metadata_pickle_filename), 'wb') as cache_file:
            pickle.dump(base.metadata, cache_file)
    except OSError:
        # couldn't write the file for some reason
        pass

self.TableName = base.classes.table_name
For anyone using declarative table class definitions, assuming a Base object defined as e.g.
Base = declarative_base(bind=engine)
metadata_pickle_filename = "ModelClasses_trilliandb_trillian.pickle"
# ------------------------------------------
# Load the cached metadata if it's available
# ------------------------------------------
# NOTE: delete the cached file if the database schema changes!!
cache_path = os.path.join(os.path.expanduser("~"), ".sqlalchemy_cache")
cached_metadata = None
if os.path.exists(cache_path):
try:
with open(os.path.join(cache_path, metadata_pickle_filename), 'rb') as cache_file:
cached_metadata = pickle.load(file=cache_file)
except IOError:
# cache file not found - no problem
pass
# ------------------------------------------
# define all tables
#
class MyTable(Base):
if cached_metadata:
__table__ = cached_metadata.tables['my_schema.my_table']
else:
__tablename__ = 'my_table'
__table_args__ = {'autoload':True, 'schema':'my_schema'}
...
# ----------------------------------------
# If no cached metadata was found, save it
# ----------------------------------------
if cached_metadata is None:
# cache the metadata for future loading
# - MUST DELETE IF THE DATABASE SCHEMA HAS CHANGED
try:
if not os.path.exists(cache_path):
os.makedirs(cache_path)
# make sure to open in binary mode - we're writing bytes, not str
with open(os.path.join(cache_path, metadata_pickle_filename), 'wb') as cache_file:
pickle.dump(Base.metadata, cache_file)
except:
# couldn't write the file for some reason
pass
Important note: if the database schema changes, you must delete the cached file to force the code to autoload and create a new cache. If you don't, the changes will not be reflected in the code. It's an easy thing to forget.
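If forgetting is a worry, one small safeguard (my own addition, not part of the original recipe) is to expire the pickle after a fixed age, placed just before the cache-loading block above, so a stale cache cannot linger forever:

import os
import time

MAX_CACHE_AGE_SECONDS = 7 * 24 * 3600  # one week; pick whatever suits your schema churn
cache_file_path = os.path.join(cache_path, metadata_pickle_filename)
if os.path.exists(cache_file_path):
    if time.time() - os.path.getmtime(cache_file_path) > MAX_CACHE_AGE_SECONDS:
        os.remove(cache_file_path)  # force a fresh reflection on the next run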
The answer to your first question is largely subjective. You are adding database queries to fetch the reflection metadata to the application load time. Whether or not that overhead is significant depends on your project requirements.
For reference, I have an internal tool at work that uses a reflection pattern because the load time is acceptable for our team. That might not be the case if it were an externally facing product. My hunch is that for most applications the reflection overhead will not dominate the total application load time.
If you decide it is significant for your purposes, this question has an interesting answer where the user pickles the database metadata in order to locally cache it.
Adding to this: what @Demitri answered is close to correct, but (at least in SQLAlchemy 1.4.29) the example will fail on the last line, self.TableName = base.classes.table_name, when building from the cached file, because in that case declarative_base() has no classes attribute.
The fix is as simple as altering:
if cached_metadata:
    base = declarative_base(bind=self.engine, metadata=cached_metadata)
else:
    base = automap_base()
    base.prepare(self.engine, reflect=True)  # reflect the tables
to
if cached_metadata:
    base = automap_base(declarative_base(bind=self.engine, metadata=cached_metadata))
    base.prepare()
else:
    base = automap_base()
    base.prepare(self.engine, reflect=True)  # reflect the tables
This will create your automap object with the proper attributes.
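With that change in place, base.classes.table_name resolves in both branches. A quick smoke test (a sketch only; Session comes from sqlalchemy.orm, and table_name stands in for whatever table you actually map):

from sqlalchemy.orm import Session

TableName = base.classes.table_name
with Session(self.engine) as session:
    # touch the mapped class to confirm the cached metadata round-tripped correctly
    first_rows = session.query(TableName).limit(5).all()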
I am having trouble accessing the coefficients of a support vector regression model (SVR) in scikit-learn when the model is embedded in a pipeline and a grid search.
Consider the following example:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in newer releases
from sklearn.pipeline import Pipeline
from sklearn.svm import SVR

iris = load_iris()
X_train = iris.data
y_train = iris.target

clf = SVR(kernel='linear')
select = SelectKBest(k=2)
steps = [('feature_selection', select), ('svr', clf)]
pipeline = Pipeline(steps)

grid = GridSearchCV(pipeline, param_grid={"svr__C": [10, 10, 100], "svr__gamma": np.logspace(-2, 2)})
grid.fit(X_train, y_train)
This seems to work fine but when I try to access the coefficient of the best fitting model
grid.best_estimator_.coef_
I get an error message: AttributeError: 'Pipeline' object has no attribute 'coef_'.
I also tried to access the individual steps of the pipeline:
pipeline.named_steps['svr']
but could not find the coefficients there.
Just happened to come across the same problem, and this post had the answer:
grid.best_estimator_ contains an instance of the pipeline, which consists of steps. The last step should always be the estimator, so you should always find the coefficients at:
grid.best_estimator_.steps[-1][1].coef_
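Equivalently, since grid.best_estimator_ is the refit pipeline itself, you can look the estimator up by the name it was registered under ('svr' in the code above), which can read more clearly than indexing steps:

# same fitted SVR, addressed by its step name instead of by position
best_svr = grid.best_estimator_.named_steps['svr']
print(best_svr.coef_)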
Our web app is based on SQLAlchemy in the Pyramid framework, and we are looking to use Alembic for managing database migrations. The web application consists of various packages that operate on one database, which consequently means we have multiple models.py files that need to be migrated. I am confused about how to handle this. I got reasonably far using the following in my env.py:
from sqlalchemy import MetaData

from pkg_a.app.models import Base as pkg_a_base
from pkg_b.app.models import Base as pkg_b_base
from pkg_c.app.models import Base as pkg_c_base

def combine_metadata(*args):
    m = MetaData()
    for metadata in args:
        for t in metadata.tables.values():
            t.tometadata(m)
    return m

target_metadata = combine_metadata(
    pkg_a_base.metadata, pkg_b_base.metadata, pkg_c_base.metadata
)
This works great the first time. However, if I add one more model later, just adding it to this list doesn't do much. I was expecting that running
alembic revision -m "added a new model pkg_d.models" --version-path=migrations/versions --autogenerate
would generate a new version file containing the code for adding the tables from pkg_d.models, but it doesn't. What am I doing wrong here?
If your packages are completely independent and separate, then each of them should have a separate migration history, either stored inside each package (pkg_a.migrations, pkg_b.migrations, etc.) or at least in a separate top-level migrations directory, achieved by having a separate section in alembic.ini and using the -n parameter of the alembic command to specify which section to use:
[pkg_a]
# path to migration scripts
script_location = migrations_a
sqlalchemy.url = xxx
[pkg_b]
script_location = migrations_b
sqlalchemy.url = xxx
[pkg_c]
script_location = migrations_c
sqlalchemy.url = xxx
And then you'll be able to use alembic revision -n pkg_a -m "added a new model pkg_a.models"
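Each of those migration directories then gets its own env.py whose target_metadata points at just that package's models (a sketch; adjust the import paths to your layout):

# migrations_a/env.py (only the relevant lines)
from pkg_a.app.models import Base as pkg_a_base

target_metadata = pkg_a_base.metadata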
If, however, your models are dependent on one another in any way, then they should use a common Base. You do realize you don't have to keep all your SQLAlchemy stuff in a single models.py file, don't you? I would create a separate "base" package containing a common MetaData, Base, and other SQLAlchemy configuration, which would then be imported by your other packages.
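A minimal sketch of that layout (the pkg_base package name and the Widget model are invented for illustration; Alembic's env.py would then simply use target_metadata = Base.metadata imported from the same place):

# pkg_base/meta.py - the single shared place that defines Base
from sqlalchemy import MetaData
from sqlalchemy.ext.declarative import declarative_base

metadata = MetaData()
Base = declarative_base(metadata=metadata)

# pkg_a/app/models.py - every package imports the same Base
from sqlalchemy import Column, Integer, String

from pkg_base.meta import Base

class Widget(Base):
    __tablename__ = 'widgets'

    id = Column(Integer, primary_key=True)
    name = Column(String(100))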