Compile and deploy an Ethereum Viper Smart Contract automatically

Is there any way to compile and deploy a Viper smart contract automatically to some custom chain (not the tester chain from ethereum.tools)?
According to the GitHub issue and two posts that I found (this one and that one), the best option is to compile the contract and then insert it into geth manually.
Can anyone share their solutions?

As mentioned in the GitHub issue you provided, you can achieve this by using the web3.py library and the Viper library itself.
Here is an example of a script which probably covers your needs:
from web3 import Web3, HTTPProvider
from viper import compiler
from web3.contract import ConciseContract
from time import sleep

# Read the Viper source code
with open('./path/to/contract.v.py', 'r') as example_contract:
    contract_code = example_contract.read()

# Compile the source to bytecode and generate the ABI
cmp = compiler.Compiler()
contract_bytecode = cmp.compile(contract_code).hex()
contract_abi = cmp.mk_full_signature(contract_code)

web3 = Web3(HTTPProvider('http://localhost:8545'))
web3.personal.unlockAccount('account_addr', 'account_pwd', 120)

# Instantiate and deploy the contract
contract_factory = web3.eth.contract(contract_abi, bytecode=contract_bytecode)

# Get the transaction hash of the deployment transaction
tx_hash = contract_factory.deploy(transaction={'from': 'account_addr', 'gas': 410000})

# Wait for the contract to be deployed
i = 0
while i < 5:
    try:
        # Get the tx receipt to read the contract address
        tx_receipt = web3.eth.getTransactionReceipt(tx_hash)
        contract_address = tx_receipt['contractAddress']
        break  # on success, exit the loop
    except Exception:
        print("Reading failure for {} time(s)".format(i + 1))
        sleep(5 + i)
        i = i + 1
if i >= 5:
    raise Exception("Cannot wait for contract to be deployed")

# Contract instance in concise mode
contract_instance = web3.eth.contract(contract_abi, contract_address, ContractFactoryClass=ConciseContract)

# Call a contract method
print('Contract value: {}'.format(contract_instance.some_method()))
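As an aside, newer web3.py releases (v4+) can replace the manual polling loop above. A minimal sketch, assuming the v4-style contract API and the same contract_abi/contract_bytecode variables from the script:

# Hedged sketch for web3.py v4+: deploy via constructor() and block on the
# receipt instead of polling getTransactionReceipt() in a loop.
contract_factory = web3.eth.contract(abi=contract_abi, bytecode=contract_bytecode)
tx_hash = contract_factory.constructor().transact({'from': 'account_addr', 'gas': 410000})
tx_receipt = web3.eth.waitForTransactionReceipt(tx_hash, timeout=120)
contract_address = tx_receipt['contractAddress']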

Related

'InterfaceContainer' object has no attribute 'WethInterface'

from brownie import accounts, config, network, interface

def main():
    """
    Runs the get_weth function to get WETH.
    """
    get_weth()

def get_weth(account=None):
    """
    Mints WETH by depositing ETH.
    """
    account = (
        account if account else accounts.add(config["wallets"]["from_key"])
    )  # add your keystore ID as an argument to this call
    weth = interface.WethInterface(
        config["networks"][network.show_active()]["weth_token"]
    )
    tx = weth.deposit({"from": account, "value": 0.1 * 1e18})
    print("Received 0.1 WETH")
    return tx
I ran into the same issue a few days ago. I changed the name of the interface file from "IWeth.sol" to "WethInterface.sol".
I don't know why it worked, but you can give it a try.
You can name the file whatever you want, but when you call the interface you need to use the name of the contract exactly, for example:
contract WethInterface {}
wethContract = interface.WethInterface(wethAddress)
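In other words, Brownie's interface container is keyed by the name declared inside the source file, not by the filename. A minimal sketch of the idea, assuming a file interfaces/WethInterface.sol that declares interface WethInterface { ... } (the address below is the mainnet WETH contract, used purely as an example):

from brownie import interface

# The attribute on `interface` must match the name declared in the .sol file,
# regardless of what the file itself is called.
weth = interface.WethInterface("0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2")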

Working with coroutines in Python Tornado Web Server

I am working on an autonomous car implementation for a web browser game with Python 2.x. I use the Tornado web server to run the game on localhost. I post and receive data from the game in JSON format in the function called "FrameHandler", and I determine what the act of the car should be in the "to_dict_faster()" function.
My problem is that I can write the data held in the speed_data variable to a text file at a specific time interval with the help of a coroutine. However, I can't dump JSON data to the function at that interval, because "FrameHandler" acts like a while True loop and always requests data to dump. What I am trying to do is send the desired acts, writing to the text file at a specific interval, while not changing the flow of the frame handler, because that affects the FPS of the game.
I have been trying to figure out how to do this for a long time; any help would be great:
import base64
import json
import os
import time

import numpy as np
import tornado.ioloop
import tornado.web
from tornado import gen

# hp, Action, agent, static_path and period come from the game's own modules.

@gen.coroutine
def sampler():
    io_loop = tornado.ioloop.IOLoop.current()
    start = time.time()
    while True:
        with open("Sampled_Speed.txt", "a") as text_file:
            text_file.write("%d,%.2f\n" % (speed_data, time.time() - start))
        yield gen.Task(io_loop.add_timeout, io_loop.time() + period)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.redirect("/static/v2.curves.html")

class FrameHandler(tornado.web.RequestHandler):
    def post(self):
        global speed_data
        data = json.loads(self.get_arguments("telemetry")[0])
        ar = np.fromstring(base64.decodestring(self.request.body), dtype=np.uint8)
        image = ar.reshape(hp.INPUT_SIZE, hp.INPUT_SIZE, hp.NUM_CHANNELS)
        left, right, faster, slower = data["action"]
        terminal, action, all_data, was_start = (
            data["terminal"],
            Action(left=left, right=right, faster=faster, slower=slower),
            data["all_data"],
            data["was_start"],
        )
        for i in range(len(all_data)):
            data_dict = all_data[i]
            speed_data = data_dict[u'speed']
            position_data = data_dict[u'position']
        result_action = agent.steps(image, 0.1, terminal, was_start, action, all_data)
        if speed_data < 4000:
            self.write(json.dumps(result_action.to_dict_faster()))
        else:
            self.write(json.dumps(result_action.to_dict_constant()))

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
        (r"/frame", FrameHandler),
        (r"/static/(.*)", tornado.web.StaticFileHandler, {"path": static_path}),
    ], debug=True)

if __name__ == "__main__":
    app = make_app()
    if "SERVER_PORT" in os.environ:
        port = int(os.environ["SERVER_PORT"])
    else:
        port = 8880
    print "LISTENING ON PORT: %d" % port
    app.listen(port)
    tornado.ioloop.IOLoop.current().run_sync(sampler)
    tornado.ioloop.IOLoop.current().start()
You can move the file writing to a different thread (using Tornado's run_on_executor, for example), so the Python interpreter will automatically switch from the Sampler to the main thread with FrameHandler on write. But you have to make the speed_data variable thread-safe; I've used the stdlib Queue.Queue as an example:
import Queue
import concurrent.futures

import tornado.web
from tornado import gen
from tornado.concurrent import run_on_executor
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application

class Handler(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        global speed_data
        speed_data.put("REALLY BIG TEST DATA\n")
        self.finish("OK")

class Sampler(object):
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def __init__(self, queue):
        self._q = queue

    @run_on_executor
    def write_sample(self):
        with open("foobar.txt", "w") as f:
            while True:
                # Queue.get() blocks, but only on the executor thread,
                # so the IOLoop stays responsive.
                data = self._q.get()
                f.write(data)

if __name__ == '__main__':
    application = Application(
        [("/status", Handler)]
    )
    server = HTTPServer(application)
    server.listen(8888)
    speed_data = Queue.Queue()
    smp = Sampler(speed_data)
    IOLoop.current().add_callback(smp.write_sample)
    IOLoop.current().start()
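To adapt this to the game code from the question, FrameHandler.post() would push each sample onto the queue instead of reassigning a global. A hypothetical adaptation, with the rest of the handler unchanged:

# Hypothetical change inside the original FrameHandler.post():
# hand samples to the Sampler via the thread-safe queue.
for data_dict in all_data:
    speed_data.put("%d\n" % data_dict[u'speed'])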

AWS Boto3 and Classic ELBs

I'm trying to get the active TLS policy on a classic load balancer (ELB, not ELBv2), and I'm having trouble identifying what is going wrong here:
import boto3
from botocore.exceptions import ClientError

# Declare constants
EXPECTED_POLICY = 'ELBSecurityPolicy-TLS-1-1-2017-01'
IAMID = '518031149234'

def set_session(awsprofile, awsregion):
    try:
        session = boto3.Session(profile_name=awsprofile, region_name=awsregion)
        return session
    except ClientError as e:
        print("Failed to run session setter for profile: {0} %s" % e).format(awsprofile)

def assume_role_into_account(profileId, assumeId, sessionName, assetType, regionName):
    try:
        setSession = set_session(profileId, regionName)
        stsSession = setSession.client('sts')
        response = stsSession.assume_role(RoleArn=("arn:aws:iam::{0}:role/security").format(assumeId), RoleSessionName=sessionName)
        credentials = response['Credentials']
        session = setSession.client(assetType, aws_access_key_id=credentials['AccessKeyId'], aws_secret_access_key=credentials['SecretAccessKey'], aws_session_token=credentials['SessionToken'])
        return session
    except ClientError as e:
        print("AssumeRole exception for profile: {0} %s" % e).format(profileId)

def main():
    try:
        srev2 = assume_role_into_account('sre', IAMID, 'Security-Audit-AssumeRole-Session2', 'elb', 'us-east-1')
        print("AssumeRole into Account: {0} for Region: {1} .").format(IAMID, 'us-east-1')
        elbs = srev2.describe_load_balancers()
        for elb in elbs:
            policy = session.describe_load_balancer_policies(LoadBalancerName=elb)
    except ClientError as e:
        print("AssumeRole: Cannot assumerole for id: {0}." % e).format(IAMID)

if __name__ == '__main__':
    main()
When I look at the policy returned by describe_load_balancer_policies(), there is no way to distinguish which policy is active.
Any help?
TIA!
It is hard to help if you don't paste the related error message.
From a quick look, I guess you define the local variable session in assume_role_into_account, which can't be accessed in main().
If this is the problem, you can change it to:
def assume_role_into_account(profileId, assumeId, sessionName, assetType, regionName):
    global session
    ....
Refer:
Python - Global, Local and nonlocal Variables
OK, after a long discussion with the API and ELB team folks at Amazon... here is what we came up with. Note this is only for classic ELBs. This will indeed return the ELB policy you see in the AWS web console, every time.
I spent a lot of time on this and I hope it benefits someone else who has also looked into this time-suck, near-fruitless endeavor:
import jmespath

def get_console_policy(client):
    # `client` is the classic-ELB boto3 client from assume_role_into_account().
    elbs = client.describe_load_balancers()
    for elb in elbs['LoadBalancerDescriptions']:
        # Get the named policy to pass to describe_load_balancer_policies().
        # -1 denotes the last one in the list.
        policy_name = jmespath.search('ListenerDescriptions[].PolicyNames[] | [-1]', elb)
        policy_description = client.describe_load_balancer_policies(LoadBalancerName=elb['LoadBalancerName'], PolicyNames=[policy_name])
        console_policy = jmespath.search('PolicyDescriptions[?PolicyName==`{0}`] | [0].PolicyAttributeDescriptions[0].AttributeValue'.format(policy_name), policy_description)
        return console_policy
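For context, a hypothetical way to wire the helper above into the question's constants and assume-role function:

# Hypothetical usage, reusing the question's IAMID / EXPECTED_POLICY constants.
client = assume_role_into_account('sre', IAMID, 'Security-Audit-AssumeRole-Session2', 'elb', 'us-east-1')
active_policy = get_console_policy(client)
print("Active policy: {0}, expected: {1}".format(active_policy, EXPECTED_POLICY))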

Pyudev's MonitorObserver locks up the GIL when monkey patched

Whenever I use eventlet's monkey patching (necessary for Flask-SocketIO), disk_monitor_thread() prevents other threads from firing. Eventlet and monkey patching are a must for me. Is there a way to get pyudev's MonitorObserver to play nice and release the GIL with monkey patching?
import time
from threading import Thread
from pyudev import Context, Monitor, MonitorObserver
import eventlet
eventlet.monkey_patch()

def useless_thread():
    while True:
        print 'sleep thread 1'
        time.sleep(2)

# Monitor UDEV for drive insertion / removal
def disk_monitor_thread():
    context = Context()
    monitor = Monitor.from_netlink(context)
    monitor.filter_by('block')

    def print_device_event(action, device):
        if 'ID_FS_TYPE' in device and device.get('ID_FS_UUID') == '123-UUIDEXAMPLE':
            print('{0}, {1}'.format(device.action, device.get('ID_FS_UUID')))

    print 'Starting Disk Monitor...'
    observer = MonitorObserver(monitor, print_device_event, name='monitor-observer')
    print 'Disk Monitor Started'
    observer.start()

t1 = Thread(name='uselessthread', target=useless_thread)
t1.start()
disk_monitor_thread()
This results in:
sleep thread 1
Starting Disk Monitor...
and it never moves forward from there.

WSGI application middleware to handle SQLAlchemy session

My WSGI application uses SQLAlchemy. I want to start a session when a request starts, commit it if it's dirty and request processing finished successfully, and roll it back otherwise. So, I need to implement the behavior of Django's TransactionMiddleware.
So, I suppose that I should create a WSGI middleware and do the following:
Create a DB session and add it to environ during pre-processing.
Get the DB session from environ and call commit() during post-processing, if no errors occurred.
Get the DB session from environ and call rollback() during post-processing, if errors occurred.
Step 1 is obvious to me:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['db_session'] = create_session()
        return self.app(environ, start_response)
Steps 2 and 3 are not. I found this example of a post-processing task:
class Caseless:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        for chunk in self.app(environ, start_response):
            yield chunk.lower()
It contains the comment:
Note that the __call__ function is a Python generator, which is typical for this sort of “post-processing” task.
Could you please clarify how it works and how I can solve my issue similarly?
Thanks,
Boris.
For step 1 I use SQLAlchemy scoped sessions:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker

# `settings` is the application's own configuration module.
engine = create_engine(settings.DB_URL, echo=settings.DEBUG, client_encoding='utf8')
Base = declarative_base()
sm = sessionmaker(bind=engine)
get_session = scoped_session(sm)
They return the same thread-local session for each get_session() call.
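A quick illustration of that thread-local behavior, assuming the setup above:

s1 = get_session()
s2 = get_session()
assert s1 is s2  # same thread, so scoped_session returns the same object
get_session.remove()  # discard the thread-local session when the request ends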
Steps 2 and 3 currently look like this:
class DbSessionMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            db.get_session().begin_nested()
            return self.app(environ, start_response)
        except BaseException:
            db.get_session().rollback()
            raise
        finally:
            db.get_session().commit()
As you can see, I start a nested transaction on the session so that I can roll back even queries that were already committed in views.
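And, regarding the generator style asked about in the question: a minimal sketch of the same commit/rollback logic written as a generator-based middleware (an illustration only, assuming the get_session() factory above; note that WSGI servers iterate the returned generator lazily, so the body runs as the response is consumed):

class DbSessionGeneratorMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        session = get_session()
        try:
            # Post-processing happens as the server pulls each chunk.
            for chunk in self.app(environ, start_response):
                yield chunk
        except BaseException:
            session.rollback()
            raise
        else:
            session.commit()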