Bottle + gunicorn + gevent not sharing a global dict

from gevent import monkey
monkey.patch_all()

import sys
import bottle

app = bottle.Bottle()
COUNT = 0

@app.route('/inc', method='GET')
def count():
    global COUNT
    COUNT += 1
    return str(COUNT)  # Bottle cannot return an int directly

def main(argv):
    port = int(argv[0]) if argv else 8080  # port was undefined in the original
    app.run(
        server="gunicorn",
        host="0.0.0.0",
        port=port,
        workers=50,
        worker_class="gevent",
        debug=False,
        reloader=False,
    )

if __name__ == '__main__':
    main(sys.argv[1:])
When I refresh 0.0.0.0/inc, the value does not increase linearly; it goes 1, 0, 1, 0, 2, 1, 0, etc.
That tells me each thread (gevent worker) maintains its own COUNT value.
How do I share a global variable between all gevent workers?
--
I've confirmed that a separate greenlet is indeed keeping track of its own COUNT:
INFO:__main__:(<Greenlet at 0x7fcae40bbd00: _handle_and_close_when_done(functools.partial(<bound method GeventWorker.handl, <bound method StreamServer.do_close of <StreamServ, (<gevent._socket3.socket at 0x7fcae256ebe0 object,)>) COUNT 1
INFO:__main__:(<Greenlet at 0x7fcae40bbd00: _handle_and_close_when_done(functools.partial(<bound method GeventWorker.handl, <bound method StreamServer.do_close of <StreamServ, (<gevent._socket3.socket at 0x7fcae256ebe0 object,)>) COUNT 2
INFO:__main__:(<Greenlet at 0x7fcae40bbd00: _handle_and_close_when_done(functools.partial(<bound method GeventWorker.handl, <bound method StreamServer.do_close of <StreamServ, (<gevent._socket3.socket at 0x7fcae256ebe0 object,)>) COUNT 3
INFO:__main__:(<Greenlet at 0x7fcae40bbd00: _handle_and_close_when_done(functools.partial(<bound method GeventWorker.handl, <bound method StreamServer.do_close of <StreamServ, (<gevent._socket3.socket at 0x7fcae256ebe0 object,)>) COUNT 4
INFO:__main__:(<Greenlet at 0x7fcae40bbd00: _handle_and_close_when_done(functools.partial(<bound method GeventWorker.handl, <bound method StreamServer.do_close of <StreamServ, (<gevent._socket3.socket at 0x7fcae409fa00 object,)>) COUNT 1

app.run(
    server='gevent',
    host="0.0.0.0",
    port=port,
)
Removing gunicorn worked for me: with a single gevent server process, all greenlets share the same COUNT.
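If you do need multiple gunicorn workers, the counter has to live outside the worker processes, for example in Redis. A minimal sketch, assuming a local Redis server and the redis-py package (the key name 'hits' is arbitrary):

import bottle
import redis

app = bottle.Bottle()
store = redis.Redis(host="localhost", port=6379)

@app.route('/inc', method='GET')
def count():
    # INCR is atomic on the Redis server, so every worker process
    # (and every greenlet within it) sees the same shared counter.
    return str(store.incr('hits'))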

Related

Python Mock Patch Two Functions that are similar

Let's say I have a function that has two similar function calls:
def foo():
    test = one_func("SELECT * from Users;")
    test1 = one_func("SELECT * from Addresses;")
    return test, test1
How do I patch each of these calls separately? Here's my attempt:
@patch('one_func')
def test_foo(self, mock_one_func):
    mock_one_func.return_value = one_func("SELECT * from TestUsers;")
    mock_one_func.return_value = one_func("SELECT * from TestAddresses;")
But I think this patches one_func as a whole, so a single return_value applies to every call. After the first assignment the effect is:
def foo():
    test = one_func("SELECT * from TestUsers;")
    test1 = one_func("SELECT * from TestUsers;")
    return test, test1
and then, after the second assignment:
def foo():
    test = one_func("SELECT * from TestAddresses;")
    test1 = one_func("SELECT * from TestAddresses;")
    return test, test1
What I want to happen in the patched function is:
def foo():
    test = one_func("SELECT * from TestUsers;")
    test1 = one_func("SELECT * from TestAddresses;")
    return test, test1
The way to achieve what you need is to use side_effect instead of return_value. side_effect can be many things. If it is an exception class or instance, that exception will be raised whenever the patched method is called. If it is a list of values, each call will return the next value in sequence. If it is a function, it will be called with the arguments of each mocked call and its return value will be used.
Here is a working example showing both a list of values and a side_effect function. What's nice about using a function is that it can return specific values depending on the arguments of the patched call.
from mock import patch
import unittest

class MyClass(object):
    def one_func(self, query):
        return ''

    def foo(self):
        test = self.one_func("SELECT * from Users;")
        test1 = self.one_func("SELECT * from Addresses;")
        return test, test1

class Test(unittest.TestCase):
    @patch.object(MyClass, 'one_func')
    def test_foo(self, one_func_mock):
        # side_effect can be a list of responses that will be returned in
        # subsequent calls
        one_func_mock.side_effect = ['users', 'addresses']
        self.assertEqual(('users', 'addresses'), MyClass().foo())

        # side_effect can also be a function which will return different
        # mock responses depending on the call arguments:
        def side_effect(query):
            if query == "SELECT * from Users;":
                return 'other users'
            if query == "SELECT * from Addresses;":
                return 'other addresses'

        one_func_mock.side_effect = side_effect
        self.assertEqual(('other users', 'other addresses'), MyClass().foo())

unittest.main()
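A side note: on Python 3 the third-party mock package is not needed; the example above runs unchanged with the standard library, only the import differs:

from unittest.mock import patch  # standard-library replacement for: from mock import patch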

Compile and deploy an Ethereum Viper Smart Contract automatically

Is there any way to compile and deploy a Viper smart contract automatically to some custom chain (not the tester chain from ethereum.tools)?
According to the GitHub issue and two posts that I found (this one and that one), the best option is to compile the contract and then insert it into geth manually.
Can anyone share their solutions?
As mentioned in the GitHub issue you provided, you can achieve this by using the web3.py library and the Viper library itself.
Here is an example of a script which probably covers your needs:
from web3 import Web3, HTTPProvider
from viper import compiler
from web3.contract import ConciseContract
from time import sleep

# Read and compile the contract source
with open('./path/to/contract.v.py', 'r') as example_contract:
    contract_code = example_contract.read()

cmp = compiler.Compiler()
contract_bytecode = cmp.compile(contract_code).hex()
contract_abi = cmp.mk_full_signature(contract_code)

web3 = Web3(HTTPProvider('http://localhost:8545'))
web3.personal.unlockAccount('account_addr', 'account_pwd', 120)

# Instantiate and deploy the contract
contract_factory = web3.eth.contract(contract_abi, bytecode=contract_bytecode)
# Get the transaction hash of the deployment
tx_hash = contract_factory.deploy(transaction={'from': 'account_addr', 'gas': 410000})

# Wait for the contract to be deployed
i = 0
while i < 5:
    try:
        # Get the tx receipt to read the contract address
        tx_receipt = web3.eth.getTransactionReceipt(tx_hash)
        contract_address = tx_receipt['contractAddress']
        break  # success, exit the loop
    except Exception:
        print("Reading failure for {} time(s)".format(i + 1))
        sleep(5 + i)
        i = i + 1
if i >= 5:
    raise Exception("Cannot wait for contract to be deployed")

# Contract instance in concise mode
contract_instance = web3.eth.contract(contract_abi, contract_address,
                                      ContractFactoryClass=ConciseContract)

# Calling a contract method
print('Contract value: {}'.format(contract_instance.some_method()))
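Note that deploy() and the manual polling loop belong to the older web3.py API. On web3.py v4 or newer the deployment step would look roughly like this instead (a sketch under that assumption, reusing contract_abi and contract_bytecode from above):

contract_factory = web3.eth.contract(abi=contract_abi, bytecode=contract_bytecode)
tx_hash = contract_factory.constructor().transact({'from': 'account_addr', 'gas': 410000})
# waitForTransactionReceipt blocks until the transaction is mined,
# replacing the manual polling loop above
tx_receipt = web3.eth.waitForTransactionReceipt(tx_hash)
contract_address = tx_receipt['contractAddress']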

pytest: skip addfinalizer if exception in fixture

I have a function that should produce a report if the test function succeeds.
But I don't want to produce the report if there is an exception inside the test function.
I tried pytest.fixture and pytest.yield_fixture, but both always call the finalizers. How can I tell that an exception was raised in the test function? Here is the output:
test.py StatisticClass: start
FStatisticClass: stop
finalizer
Content of test.py:
import pytest

@pytest.mark.usefixtures("statistic_maker")
def test_dummy():
    raise Exception()
Content of conftest.py:
import pytest

class StatisticClass():
    def __init__(self, req):
        self.req = req

    def start(self):
        print("StatisticClass: start")

    def stop(self):
        print("StatisticClass: stop")

    def if_not_exception(self):
        """
        I don't want to call this if there is an Exception inside the yield.
        Maybe there is some info in the request object?
        """
        print("finalizer")

@pytest.yield_fixture(scope="function")
def statistic_maker(request):
    ds = StatisticClass(request)
    ds.start()
    request.addfinalizer(ds.if_not_exception)
    yield
    ds.stop()
P.S. I can't use a decorator because I use a fixture.
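The pytest docs describe a pattern for exactly this ("making test result information available in fixtures"): record each test's outcome on the test item from a pytest_runtest_makereport hookwrapper, then check it during fixture teardown instead of relying on addfinalizer. A sketch adapted to the StatisticClass above:

# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # store each phase's report ("setup", "call", "teardown") on the item
    setattr(item, "rep_" + rep.when, rep)

@pytest.yield_fixture(scope="function")
def statistic_maker(request):
    ds = StatisticClass(request)
    ds.start()
    yield
    ds.stop()
    rep = getattr(request.node, "rep_call", None)
    if rep is not None and rep.passed:
        ds.if_not_exception()  # runs only if the test body did not raise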

How to count sqlalchemy queries in unit tests

In Django I often assert the number of queries that should be made, so that unit tests catch new N+1 query problems:
from django import db
from django.conf import settings
settings.DEBUG = True

class SendData(TestCase):
    def test_send(self):
        db.connection.queries = []
        event = Events.objects.all()[1:]
        s = str(event)  # QuerySet is lazy, force retrieval
        self.assertEquals(len(db.connection.queries), 2)
In SQLAlchemy, tracing to STDOUT is enabled by setting the echo flag on the engine:
engine.echo = True
What is the best way to write tests that count the number of queries made by SQLAlchemy?
class SendData(TestCase):
    def test_send(self):
        event = session.query(Events).first()
        s = str(event)
        self.assertEquals( ... , 2)
I've created a context manager class for this purpose:
import sqlalchemy.event

class DBStatementCounter(object):
    """
    Use as a context manager to count the number of execute()'s performed
    against the given sqlalchemy connection.

    Usage:
        with DBStatementCounter(conn) as ctr:
            conn.execute("SELECT 1")
            conn.execute("SELECT 1")
        assert ctr.get_count() == 2
    """
    def __init__(self, conn):
        self.conn = conn
        self.count = 0
        # Will have to rely on this flag since sqlalchemy 0.8 does not
        # support removing event listeners
        self.do_count = False
        sqlalchemy.event.listen(conn, 'after_execute', self.callback)

    def __enter__(self):
        self.do_count = True
        return self

    def __exit__(self, *_):
        self.do_count = False

    def get_count(self):
        return self.count

    def callback(self, *_):
        if self.do_count:
            self.count += 1
Use SQLAlchemy Core Events to log/track the queries executed (you can attach the listener from your unit tests so it doesn't impact the performance of the actual application):
event.listen(engine, "before_cursor_execute", catch_queries)
Now you write the catch_queries function; how exactly depends on how you test. For example, you could define it inside your test:
def test_something(self):
    stmts = []

    def catch_queries(conn, cursor, statement, *args):
        stmts.append(statement)

    # Now attach it as a listener and work with the collected
    # statements after running your test
    event.listen(engine, "before_cursor_execute", catch_queries)
The above is just an inspiration. For extended cases you'd probably want a global cache of statements that you empty after each test. The reason is that prior to 0.9 (the current development version at the time) there is no API to remove event listeners, so you make one global listener that appends to a global list.
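For illustration, a minimal sketch of that global-listener pattern (assuming the engine, session, and Events model from the question; the other names are arbitrary):

from sqlalchemy import event

captured_statements = []  # global cache, emptied between tests

def record_statement(conn, cursor, statement, *args):
    captured_statements.append(statement)

# Attach once at import time; listeners cannot be removed before SQLAlchemy 0.9.
event.listen(engine, "before_cursor_execute", record_statement)

class SendData(TestCase):
    def setUp(self):
        del captured_statements[:]  # reset the global list for each test

    def test_send(self):
        ev = session.query(Events).first()
        str(ev)  # force retrieval
        self.assertEqual(len(captured_statements), 1)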
What about the approach of using flask_sqlalchemy.get_debug_queries()? By the way, this is the methodology used internally by Flask-DebugToolbar; check its source.
from flask_sqlalchemy import get_debug_queries

def test_list_with_assuring_queries_count(app, client):
    with app.app_context():
        # here generating some test data
        for _ in range(10):
            notebook = create_test_scheduled_notebook_based_on_notebook_file(
                db.session, owner='testing_user',
                schedule={"kind": SCHEDULE_FREQUENCY_DAILY}
            )
        for _ in range(100):
            create_test_scheduled_notebook_run(db.session, notebook_id=notebook.id)

    with app.app_context():
        # after resetting the context, call the actual view we want to
        # assert the number of queries for
        client.get(url_for('notebooks.personal_notebooks'))
        assert len(get_debug_queries()) == 3
Keep in mind that to reset the context and the query count, you have to enter a fresh with app.app_context() block right before the exact code you want to measure.
A slightly modified version of @omar-tarabai's solution that removes the event listener when exiting the context:
from sqlalchemy import event

class QueryCounter(object):
    """Context manager to count SQLAlchemy queries."""

    def __init__(self, connection):
        self.connection = connection.engine
        self.count = 0

    def __enter__(self):
        event.listen(self.connection, "before_cursor_execute", self.callback)
        return self

    def __exit__(self, *args, **kwargs):
        event.remove(self.connection, "before_cursor_execute", self.callback)

    def callback(self, *args, **kwargs):
        self.count += 1
Usage:
with QueryCounter(session.connection()) as counter:
    session.query(XXX).all()
    session.query(YYY).all()

print(counter.count)  # 2

WSGI application middleware to handle SQLAlchemy session

My WSGI application uses SQLAlchemy. I want to start a session when a request starts, commit it if it's dirty and request processing finished successfully, and roll it back otherwise. So I need to implement the behavior of Django's TransactionMiddleware.
So I suppose I should create a WSGI middleware that does the following:
1. Create a DB session and add it to environ on pre-processing.
2. Get the DB session from environ and call commit() on post-processing, if no errors occurred.
3. Get the DB session from environ and call rollback() on post-processing, if some errors occurred.
Step 1 is obvious for me:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['db_session'] = create_session()
        return self.app(environ, start_response)
Steps 2 and 3 are not. I found this example of a post-processing task:
class Caseless:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        for chunk in self.app(environ, start_response):
            yield chunk.lower()
It contains this comment:
Note that the __call__ function is a Python generator, which is typical for this sort of “post-processing” task.
Could you please clarify how this works and how I can solve my issue in a similar way?
Thanks,
Boris.
For step 1 I use SQLAlchemy scoped sessions:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine(settings.DB_URL, echo=settings.DEBUG, client_encoding='utf8')
Base = declarative_base()
sm = sessionmaker(bind=engine)
get_session = scoped_session(sm)
They return the same thread-local session for each get_session() call.
Steps 2 and 3 currently look like this:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            db.get_session().begin_nested()
            return self.app(environ, start_response)
        except BaseException:
            db.get_session().rollback()
            raise
        finally:
            db.get_session().commit()
As you can see, I start a nested transaction on the session so I can roll back even queries that were already committed in the views.
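For completeness, a minimal sketch of how such a middleware gets wired in front of the application (the make_app factory here is hypothetical):

app = make_app()  # hypothetical factory returning the real WSGI app
application = DbSessionMiddleware(app)
# point the WSGI server (e.g. gunicorn module:application) at `application`,
# so every request runs inside the try/except/finally above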