nose test class with generator for multiple tests but only one instance of the class - generator

I am trying to find a way to use nose to run multiple test cases within a class, but I need nose to create only one instance of that class. The class tests a network, and setting up the network takes a few minutes, hence the need to run all of the tests through a single instance of the class. Here is a basic example of what I am trying to do:
class TestUmbrella(object):
    def __init__(self):
        log.info('__init__ called')

    def run_A(self):
        log.info('Test A is running')

    def run_B(self):
        log.info('Test B is running')

    def run_C(self):
        log.info('Test C is running')

    def run_test(self):
        for x in (self.run_A, self.run_B, self.run_C):
            yield x
This produces:
2015-03-19 12:22:31,330: INFO: tests.l3.FooTest2: __init__ called
2015-03-19 12:22:31,331: INFO: tests.l3.FooTest2: __init__ called
2015-03-19 12:22:31,331: INFO: tests.l3.FooTest2: Test A is running
.2015-03-19 12:22:31,331: INFO: tests.l3.FooTest2: __init__ called
2015-03-19 12:22:31,332: INFO: tests.l3.FooTest2: Test B is running
.2015-03-19 12:22:31,332: INFO: tests.l3.FooTest2: __init__ called
2015-03-19 12:22:31,332: INFO: tests.l3.FooTest2: Test C is running
.
----------------------------------------------------------------------
Ran 3 tests in 0.002s
OK
What I would like to see is:
2015-03-19 12:22:31,330: INFO: tests.l3.FooTest2: __init__ called
2015-03-19 12:22:31,331: INFO: tests.l3.FooTest2: Test A is running
2015-03-19 12:22:31,332: INFO: tests.l3.FooTest2: Test B is running
2015-03-19 12:22:31,332: INFO: tests.l3.FooTest2: Test C is running
Any ideas on how to get nose to do this?

Two ways to get what you want:
Use a unittest.TestCase subclass with setUpClass for your TestUmbrella:
from unittest import TestCase
import logging as log

class TestUmbrella(TestCase):
    @classmethod
    def setUpClass(cls):
        log.info('__init__ called')

    def run_A_test(self):
        log.info('Test A is running')

    def run_B_test(self):
        log.info('Test B is running')

    def run_C_test(self):
        log.info('Test C is running')
Note that you will no longer be able to yield tests on the fly, and you would have to rename the methods to match nose's test pattern. That will give you:
$ nosetests cls_test.py -v
INFO:root:__init__ called
run_A_test (cls_test.TestUmbrella) ... INFO:root:Test A is running
ok
run_B_test (cls_test.TestUmbrella) ... INFO:root:Test B is running
ok
run_C_test (cls_test.TestUmbrella) ... INFO:root:Test C is running
ok
----------------------------------------------------------------------
Ran 3 tests in 0.007s
OK
Alternatively, you can just inject your setup method into the class, but not as part of class initialization:
import logging as log

class TestUmbrella(object):
    def my_setup(self):
        log.info('__init__ called')

    def run_A(self):
        log.info('Test A is running')

    def run_B(self):
        log.info('Test B is running')

    def run_C(self):
        log.info('Test C is running')

    def run_test(self):
        self.my_setup()
        for x in (self.run_A, self.run_B, self.run_C):
            yield x
Finally, if you really cannot offload heavy logic from the constructor, you can run your tests from a standalone function like this:
def run_test():
    tu = TestUmbrella()
    for x in (tu.run_A, tu.run_B, tu.run_C):
        yield x

Related

connection in a different thread can't read tables created in another thread?

In a testing suite I have a fixture that drops all the tables in an engine and then creates all the tables from scratch. After this fixture logic, my test case runs, using the newly created tables.
The fixture and the test case run in the MainThread, while the database consumer is a web application server running in another thread.
However, I keep getting: sqlite3.OperationalError: no such table: ***
I've checked that they are using the same in-memory engine but different connections (this is correct), and I've checked that the fixture does run before the consumer thread starts running.
What could be the possible cause?
My code is as below:
import os
import pytest
import cherrypy
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# SchemaBase (the declarative base) and the model classes are defined elsewhere in the project.
class DAL:
    def __init__(self,
                 path="database",
                 filename=None,
                 conn_string=None,
                 echo=False):
        if filename is None and conn_string is None:
            conn_string = "sqlite:///:memory:"
        elif conn_string is not None:
            conn_string = conn_string
        else:
            conn_string = f'sqlite:///{os.path.abspath(path)}/{filename}'
        self.conn_string = conn_string
        engine = create_engine(conn_string, echo=echo)
        Session_Factory = sessionmaker(bind=engine)
        self.Session = sqlalchemy.orm.scoped_session(Session_Factory)

    def __str__(self):
        return f"<DAL>object: {self.conn_string}, at {hex(id(self))}"

    def get_a_dbsession(self):
        opened_db = self.Session()
        return opened_db

    def __enter__(self):
        return self.get_a_dbsession()
    def __exit__(self, exception_type, exception_value, exception_traceback):
        opened_db = self.Session()
        try:
            opened_db.commit()
        except:
            opened_db.rollback()
            raise
        else:
            opened_db.close()
        finally:
            self.Session.remove()
        if exception_type:
            # re-raise any exception that occurred inside the 'with' block
            raise exception_type

    def create_schema(self):
        SchemaBase.metadata.create_all(self.Session().connection().engine)
class SAEnginePlugin(cherrypy.process.plugins.SimplePlugin):
    def __init__(self, bus, dal):
        """
        The plugin is registered to the CherryPy engine.
        """
        cherrypy.process.plugins.SimplePlugin.__init__(self, bus)
        self.dal = dal

    def start(self):
        self.bus.subscribe("bind-session", self.bind)

    def stop(self):
        self.bus.unsubscribe("bind-session", self.bind)
        if self.dal:
            del self.dal

    def bind(self):
        """
        Whenever this plugin receives the 'bind-session' message, it applies
        this method and binds the received session to the engine.
        """
        # self.dal.Session.configure(bind=self.dal.engine)
        session = self.dal.get_a_dbsession()
        return session
class SATool(cherrypy.Tool):
    def __init__(self):
        """
        This tool binds a session to the engine each time
        a request starts and commits/rolls back whenever
        the request terminates.
        """
        cherrypy.Tool.__init__(self,
                               'on_start_resource',
                               self.bind_session,
                               priority=20)

    def _setup(self):
        cherrypy.Tool._setup(self)
        cherrypy.request.hooks.attach('on_end_resource',
                                      self.close_session,
                                      priority=80)

    def bind_session(self):
        """
        Attaches a session to the request's scope by requesting
        the SA plugin to bind a session to the SA engine.
        """
        session = cherrypy.engine.publish('bind-session').pop()
        cherrypy.request.db = session

    def close_session(self):
        """
        Commits the current transaction or rolls back if an error occurs.
        In all cases, the current session is unbound and therefore
        no longer usable.
        """
        if not hasattr(cherrypy.request, 'db'):
            return
        try:
            cherrypy.request.db.commit()
        except:
            cherrypy.request.db.rollback()
            raise
        finally:
            cherrypy.request.db.close()
            cherrypy.request.db = None

# Register the SQLAlchemy tool
cherrypy.tools.db = SATool()
class UnitServer:
    ...

    @cherrypy.expose
    @cherrypy.tools.json_in()
    def list_filtered_entries(self):
        ...
        queryOBJ = cherrypy.request.db.query(classmodel_obj)
        ...

############# main module code below ############:

# mocking 'db':
dal = database.DAL()

# configure cherrypy:
SAEnginePlugin(cherrypy.engine, dal).subscribe()

@pytest.fixture(autouse=True)  # automatically run before every test case
def mocked_dal(request):
    # first, clean the database by dropping all tables:
    database.SchemaBase.metadata.drop_all(dal.Session().connection().engine)
    # second, create the schema from blank:
    dal.create_schema()
    # third, insert some dummy data records:
    ...
    db.commit()

class TestMyUnitServer(cherrypy.test.helper.CPWebCase):
    @staticmethod
    def setup_server():
        ...
        server_app = UnitServer()
        cherrypy.tree.mount(server_app, '', {'/': {'tools.db.on': True}})

    def test_list_filtered_entries_allentries(self):
        ...
        self.getPage('/list_filtered_entries',
                     headers=[("Accept", "application/json"),
                              ('Content-type', 'application/json'),
                              ('Content-Length',
                               str(len(json.dumps(query_params)))),
                              ("Connection", "keep-alive"),
                              ("Cache-Control", "max-age=0")],
                     body=serialized_query_params,
                     method="POST")
        self.assertStatus('200 OK')
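One thing worth checking (an assumption, not something stated above): an SQLite :memory: database is private to the connection that created it, so a session used by the consumer thread on a different pooled connection sees an empty database even though the engine object is shared. A common workaround is to force SQLAlchemy to reuse a single connection across threads via StaticPool; a minimal sketch:

from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

# Share one in-memory SQLite connection across all threads.
# check_same_thread=False allows threads other than the creator to use it.
engine = create_engine(
    "sqlite://",
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,
)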

How do I Force db commit of a single save in django-orm in a celery task

I am using django and celery. I have a long running celery task and I would like it to report progress. I am doing this:
@shared_task
def do_the_job(tracker_id, *args, **kwargs):
    while condition:
        # Do a long operation
        tracker = ProgressTracker.objects.get(pk=tracker_id)
        tracker.task_progress = F('task_progress') + 1
        tracker.last_update = timezone.now()
        tracker.save(update_fields=['task_progress', 'last_update'])
The problem is that the view that is supposed to show the progress to the user cannot see the updates until the task finishes. Is there a way to get the django orm to ignore transactions for just this one table? Or just this one write?
You can use bound tasks to define custom states for your tasks and set/update the state during execution:
@celery.task(bind=True)
def show_progress(self, n):
    for i in range(n):
        self.update_state(state='PROGRESS', meta={'current': i, 'total': n})
You can dump the state of currently executing tasks to get the progress:
>>> from celery import Celery
>>> app = Celery('proj')
>>> i = app.control.inspect()
>>> i.active()
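On the Django side, a view can poll that custom state through the result backend; here is a sketch (the view name and URL wiring are assumptions, while AsyncResult, .state and .info are standard Celery API):

from celery.result import AsyncResult
from django.http import JsonResponse

def task_progress(request, task_id):
    # Look up the task's current state and the meta dict set by update_state().
    result = AsyncResult(task_id)
    payload = {'state': result.state}
    if isinstance(result.info, dict):
        payload.update(result.info)  # e.g. {'current': 42, 'total': 100}
    return JsonResponse(payload)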

pytest: skip addfinalizer if exception in fixture

I have a function that should produce a report if the test function succeeds.
But I don't want to produce the report if there is an exception inside the test function.
I tried pytest.fixture and pytest.yield_fixture, but both of them always call the finalizers. How can I tell that an exception was raised in the test function?
test.py StatisticClass: start
FStatisticClass: stop
finalizer
content of test.py:
@pytest.mark.usefixtures("statistic_maker")
def test_dummy():
    raise Exception()
content of conftest.py:
import pytest

class StatisticClass():
    def __init__(self, req):
        self.req = req
        pass

    def start(self):
        print "StatisticClass: start"

    def stop(self):
        print "StatisticClass: stop"

    def if_not_exception(self):
        """
        I don't want to call this if Exception inside yield.
        Maybe, there is any info in request object?
        """
        print "finalizer"

@pytest.yield_fixture(scope="function")
def statistic_maker(request):
    ds = StatisticClass(request)
    ds.start()
    request.addfinalizer(ds.if_not_exception)
    yield
    ds.stop()
P.S. I can't use a decorator because I use a fixture.
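One way to detect from inside the fixture whether the test body raised is pytest's documented hookwrapper pattern: record each test's report on the item in conftest.py and check it after the yield. A sketch (the rep_call attribute name is just a convention, and StatisticClass is the class from the question):

# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Attach each phase's report (setup/call/teardown) to the test item.
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)

@pytest.yield_fixture(scope="function")
def statistic_maker(request):
    ds = StatisticClass(request)
    ds.start()
    yield
    ds.stop()
    # Only report when the test body actually passed.
    rep = getattr(request.node, "rep_call", None)
    if rep is not None and rep.passed:
        ds.if_not_exception()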

How to count sqlalchemy queries in unit tests

In Django I often assert the number of queries that should be made so that unit tests catch new N+1 query problems
from django import db
from django.conf import settings
settings.DEBUG=True
class SendData(TestCase):
    def test_send(self):
        db.connection.queries = []
        event = Events.objects.all()[1:]
        s = str(event)  # QuerySet is lazy, force retrieval
        self.assertEquals(len(db.connection.queries), 2)
In SQLAlchemy, tracing to STDOUT is enabled by setting the echo flag on the engine:
engine.echo = True
What is the best way to write tests that count the number of queries made by SQLAlchemy?
class SendData(TestCase):
    def test_send(self):
        event = session.query(Events).first()
        s = str(event)
        self.assertEquals( ... , 2)
I've created a context manager class for this purpose:
import sqlalchemy

class DBStatementCounter(object):
    """
    Use as a context manager to count the number of execute()'s performed
    against the given sqlalchemy connection.

    Usage:
        with DBStatementCounter(conn) as ctr:
            conn.execute("SELECT 1")
            conn.execute("SELECT 1")
        assert ctr.get_count() == 2
    """
    def __init__(self, conn):
        self.conn = conn
        self.count = 0
        # Will have to rely on this since sqlalchemy 0.8 does not support
        # removing event listeners
        self.do_count = False
        sqlalchemy.event.listen(conn, 'after_execute', self.callback)

    def __enter__(self):
        self.do_count = True
        return self

    def __exit__(self, *_):
        self.do_count = False

    def get_count(self):
        return self.count

    def callback(self, *_):
        if self.do_count:
            self.count += 1
Use SQLAlchemy Core Events to log/track the queries executed (you can attach them from your unit tests so they don't impact performance of the actual application):
event.listen(engine, "before_cursor_execute", catch_queries)
Now you write the function catch_queries; how you do that depends on how you test. For example, you could define the function inside your test:
def test_something(self):
    stmts = []
    def catch_queries(conn, cursor, statement, ...):
        stmts.append(statement)
    # Now attach it as a listener and work with the collected events after running your test
The above method is just an inspiration. For more extensive cases you would probably want a global cache of events that you empty after each test. The reason is that prior to 0.9 (current dev) there is no API to remove event listeners, so make one global listener that appends to a global list, for example as sketched below.
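A minimal sketch of that global-listener idea (the engine, session, Events model and TestCase base are assumed to exist elsewhere in your test setup; the before_cursor_execute signature is the one SQLAlchemy documents):

import sqlalchemy.event

captured_statements = []  # global cache, emptied between tests

def catch_queries(conn, cursor, statement, parameters, context, executemany):
    captured_statements.append(statement)

sqlalchemy.event.listen(engine, "before_cursor_execute", catch_queries)

class SendData(TestCase):
    def setUp(self):
        del captured_statements[:]  # reset the cache before each test

    def test_send(self):
        event = session.query(Events).first()
        str(event)
        self.assertEqual(len(captured_statements), 2)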
What about the approach of using flask_sqlalchemy.get_debug_queries()? This is the methodology used internally by the Flask Debug Toolbar; check its source.
from flask_sqlalchemy import get_debug_queries
def test_list_with_assuring_queries_count(app, client):
    with app.app_context():
        # here generating some test data
        for _ in range(10):
            notebook = create_test_scheduled_notebook_based_on_notebook_file(
                db.session, owner='testing_user',
                schedule={"kind": SCHEDULE_FREQUENCY_DAILY}
            )
        for _ in range(100):
            create_test_scheduled_notebook_run(db.session, notebook_id=notebook.id)

    with app.app_context():
        # after resetting the context, call the actual view whose query count we want to assert
        client.get(url_for('notebooks.personal_notebooks'))
        assert len(get_debug_queries()) == 3
Keep in mind that to reset the context and the query count, you have to enter with app.app_context() right before the exact code you want to measure.
Slightly modified version of @omar-tarabai's solution that removes the event listener when exiting the context:
from sqlalchemy import event
class QueryCounter(object):
    """Context manager to count SQLAlchemy queries."""
    def __init__(self, connection):
        self.connection = connection.engine
        self.count = 0

    def __enter__(self):
        event.listen(self.connection, "before_cursor_execute", self.callback)
        return self

    def __exit__(self, *args, **kwargs):
        event.remove(self.connection, "before_cursor_execute", self.callback)

    def callback(self, *args, **kwargs):
        self.count += 1
Usage:
with QueryCounter(session.connection()) as counter:
    session.query(XXX).all()
    session.query(YYY).all()
    print(counter.count)  # 2

WSGI application middleware to handle SQLAlchemy session

My WSGI application uses SQLAlchemy. I want to start a session when the request starts, commit it if it's dirty and request processing finished successfully, and roll it back otherwise. So, I need to implement the behavior of Django's TransactionMiddleware.
So, I suppose that I should create a WSGI middleware and do the following:
1. Create a DB session and add it to environ during pre-processing.
2. Get the DB session from environ and call commit() during post-processing, if no errors occurred.
3. Get the DB session from environ and call rollback() during post-processing, if errors occurred.
Step 1 is obvious for me:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['db_session'] = create_session()
        return self.app(environ, start_response)
Steps 2 and 3 are not. I found this example of a post-processing task:
class Caseless:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        for chunk in self.app(environ, start_response):
            yield chunk.lower()
It comes with this comment:
Note that the __call__ function is a Python generator, which is typical for this sort of “post-processing” task.
Could you please clarify how this works and how I can solve my issue similarly?
Thanks,
Boris.
For step 1 I use SQLAlchemy scoped sessions:
engine = create_engine(settings.DB_URL, echo=settings.DEBUG, client_encoding='utf8')
Base = declarative_base()
sm = sessionmaker(bind=engine)
get_session = scoped_session(sm)
They return the same thread-local session for each get_session() call.
Steps 2 and 3 currently look like this:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            db.get_session().begin_nested()
            return self.app(environ, start_response)
        except BaseException:
            db.get_session().rollback()
            raise
        finally:
            db.get_session().commit()
As you can see, I start a nested transaction on the session so that I can roll back even queries that were already committed in the views.
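Wiring it up then amounts to wrapping the WSGI callable; a sketch, where the application and view names are only illustrations:

# my_wsgi_app is whatever WSGI callable your framework exposes (name assumed).
def my_wsgi_app(environ, start_response):
    session = get_session()  # same thread-local session as above
    ...                      # do work, possibly writing to the DB
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

# Every request now runs inside the commit/rollback logic of the middleware.
application = DbSessionMiddleware(my_wsgi_app)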