Brownie Errors when attempting to compile - ethereum

When I type "brownie compile" it doesn't work and I get this error. Anybody know why?
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\_cli\compile.py", line 50, in main
proj = project.load()
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\project\main.py", line 750, in load
return Project(name, project_path)
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\project\main.py", line 182, in __init__
self.load()
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\project\main.py", line 237, in load
self._compile(changed, self._compiler_config, False)
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\project\main.py", line 89, in _compile
_install_dependencies(self._path)
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\project\main.py", line 756, in _install_dependencies
install_package(package_id)
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\project\main.py", line 778, in install_package
return _install_from_github(package_id)
File "c:\users\sircr\appdata\local\programs\python\python39\lib\site-packages\brownie\project\main.py", line 851, in _install_from_github
raise ConnectionError(msg)
ConnectionError: Status 404 when getting package versions from Github: 'Not Found'

Check your config file; this error is usually caused by a typo. In my file I had written the dependency right after the hyphen (-) without any space, and when I fixed it the contract compiled properly.
brownie-config.yaml file before solving the error:
dependencies:
-smartcontractkit/chainlink-brownie-contracts#1.1.1
brownie-config.yaml file after solving the error:
dependencies:
- smartcontractkit/chainlink-brownie-contracts#1.1.1
Do the same whenever you use a hyphen (-) to start a list item.

It seems you didn't write the right link. Also, when you paste the import for the V3 aggregator, check the version; I changed v0.8 to v0.6 to match Solidity version 0.6.6.
For example, in my case I just had to add an s to contract to solve the issue:
smartcontractkit/chainlink-brownie-contracts#1.1.1 is correct, instead of the smartcontractkit/chainlink-brownie-contract#1.1.1 I had (wrongly) written at first.
After those changes, it worked perfectly.
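For reference, a minimal sketch of the relevant brownie-config.yaml sections. The dependency line is the one from this thread; the remappings entry is a common companion setting for resolving @chainlink imports and is an assumption, not something confirmed by the asker:
dependencies:
  # note the space after the hyphen
  - smartcontractkit/chainlink-brownie-contracts#1.1.1
compiler:
  solc:
    remappings:
      # assumed remapping so that import "@chainlink/..." resolves to the package
      - '@chainlink=smartcontractkit/chainlink-brownie-contracts#1.1.1'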

I am getting an error when training yolov5

I am trying to train a YOLOv5 model, but I get an exception when I execute the training module. The error occurs after the model is loaded, when it tries to read the training images. Below is my code and an excerpt of the error. Any help would be appreciated.
!python train.py --img 640 --batch 16 --epochs 150 --data pollen_data.yaml --weights yolov5x.pt
Model summary: 567 layers, 86217814 parameters, 86217814 gradients, 204.2 GFLOPs
Transferred 739/745 items from yolov5x.pt
Scaled weight_decay = 0.0005
optimizer: SGD with parameter groups 123 weight (no decay), 126 weight, 126 bias
albumentations: version 1.0.3 required by YOLOv5, but version 0.1.12 is currently installed
Traceback (most recent call last):
File "/content/yolov5/utils/datasets.py", line 405, in __init__
t = t.read().strip().splitlines()
File "/usr/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 643, in <module>
main(opt)
File "train.py", line 539, in main
train(opt.hyp, opt, device, callbacks)
File "train.py", line 227, in train
prefix=colorstr('train: '), shuffle=True)
File "/content/yolov5/utils/datasets.py", line 110, in create_dataloader
prefix=prefix)
File "/content/yolov5/utils/datasets.py", line 415, in __init__
raise Exception(f'{prefix}Error loading data from {path}: {e}\nSee {HELP_URL}')
Exception: train: Error loading data from /content/datasets/images/training/im0.jpg: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
The training images I have (im0.jpg and im1.jpg) are two large files. The first has dimensions of 9058 x 11185, and the second file is 13385 x 12832. I realize they are not square but I'm assuming that the train.py module will make them square, so it's okay. Is that right?
Or could the non-square dimensions be causing the choke?
Also, what is the meaning of the exception "error loading data from /content/datasets/images/training/im0.jpg: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte"?
Thank you.
I've been using YOLOv5 for the past month, and I must say your error is weird.
Also, you can't train your model with an image size of 12000. By default it should be 640. In your case it might change based on your dataset, but I'm quite sure it won't be 12000.
There is also a mistake in your --data path:
--data /content/datasets/annotations/dataset.yaml.txt
The data file shouldn't have a '.txt' extension; it should be a '.yaml' file. So change that to:
--data /content/datasets/annotations/dataset.yaml
It should start training after these changes. If not, close this question, provide additional information, and ask a new one.
The error
'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
is raised when you use an image format that is not in the default list:
IMG_FORMATS = 'bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp' # include image suffixes
But you have mentioned that it is a jpg, so I'm confused now. If it helps, please try the solution provided in this issue: link
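A side note on what the traceback itself suggests: the failure happens in datasets.py at t.read().strip().splitlines(), meaning YOLOv5 opened /content/datasets/images/training/im0.jpg and tried to read it as a text file listing image paths. A JPEG begins with the byte 0xFF (the Start-of-Image marker), which is not valid UTF-8, hence this exact error. So the likely fix is to point the train: entry of the data yaml at the training directory (or at a .txt list of image paths) rather than at an image file. A minimal Python sketch reproducing the decode failure (the filename is the one from the question and assumed to exist locally):
# Reading a JPEG's raw bytes and decoding them as UTF-8 fails on the very
# first byte: JPEG files start with the Start-of-Image marker 0xFF 0xD8.
with open("im0.jpg", "rb") as f:
    header = f.read(2)

print(header)  # b'\xff\xd8' for a JPEG

# Raises: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff
# in position 0: invalid start byte
header.decode("utf-8")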

Are Explicit Tags in mkdocs config supported in readthedocs?

I've been working on writing some mkdocs documentation which includes Mermaid diagrams that I'd like to keep in the markdown files instead of turning into images and embedding them.
I came across this great solution here: https://github.com/squidfunk/mkdocs-material/issues/693#issuecomment-411885426
It uses the SuperFences feature of the pymdown-extensions plugin to create a custom code block which renders the Mermaid diagrams inside the code block.
It works in mkdocs running locally, but when I submit the configuration file to readthedocs it fails the YAML validation:
Your mkdocs.yml could not be loaded, possibly due to a syntax error (line 18, column 19)
Line 18 in the mkdocs.yml config file is the line that references the SuperFences Python function:
format: !!python/name:pymdownx.superfences.fence_div_format
Looking at the YAML specification (https://yaml.org/spec/1.2/spec.html) shows that !! is for an explicit tag, and it seems to have been part of the spec for quite some time (back to version 1). I've tried making the value a string, but this then causes issues because Python reads it as a plain string instead of resolving it to the function.
Does anyone know if readthedocs supports this or have you been able to get this working some other way?
ReadTheDocs parses the mkdocs.yml file using pyyaml, and pyyaml's default loader fails on tags it has no constructor for.
For example:
>>> import yaml
>>> document = """
a: 1
b:
    c: 3
    d: !!4
"""
>>> print(yaml.dump(yaml.load(document)))
<stdin>:1: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/site-packages/yaml/__init__.py", line 114, in load
return loader.get_single_data()
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 51, in get_single_data
return self.construct_document(node)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 60, in construct_document
for dummy in generator:
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 413, in construct_yaml_map
value = self.construct_mapping(node)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 218, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 143, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 100, in construct_object
data = constructor(self, node)
File "/usr/lib/python3.8/site-packages/yaml/constructor.py", line 427, in construct_undefined
raise ConstructorError(None, None,
yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:4'
in "<unicode string>", line 5, column 8:
    d: !!4
       ^
>>>
See: https://github.com/readthedocs/readthedocs.org/issues/6889
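For illustration, the tag itself is valid YAML, and pyyaml can resolve it, but only with a loader that permits arbitrary Python tags. A minimal sketch (it assumes pyyaml >= 5.1 and pymdown-extensions installed locally; it demonstrates the loader difference, not how ReadTheDocs actually loads the file):
import yaml

doc = "format: !!python/name:pymdownx.superfences.fence_div_format"

# SafeLoader refuses the python/name tag outright:
try:
    yaml.safe_load(doc)
except yaml.constructor.ConstructorError as e:
    print("safe_load failed:", e.problem)

# UnsafeLoader resolves the tag to the actual Python object:
obj = yaml.load(doc, Loader=yaml.UnsafeLoader)
print(obj["format"])  # <function fence_div_format at 0x...>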

Can't import csv to neo4j

My code:
LOAD CSV FROM "C:\Users\Elmar\Desktop\tmp-raise.csv" AS line
WITH line
RETURN line
The error that it gives:
Invalid input ':': expected 'o/O' (line 1, column 18 (offset: 17))
"LOAD CSV FROM "C:\Users\Elmar\Desktop\tmp-raise.csv" AS line"
                 ^
I have also tried:
USING PERIODIC COMMIT 10000
LOAD CSV FROM ""C:\Users\Elmar\Desktop\tmp-raise.csv" AS line
WITH line
RETURN line
What is the problem? Can anyone help me?
According to the CSV import guide, your path should be prefixed with file: and should use forward slashes. The example path given in the guide for Windows is file:c:/path/to/data.csv (though I have seen example paths starting with file://). Give this a try:
USING PERIODIC COMMIT 10000
LOAD CSV FROM 'file:c:/Users/Elmar/Desktop/tmp-raise.csv' AS line
WITH line
RETURN line
If that doesn't work, give it a try with file:// as the path prefix.
EDIT: It looks like CSV loads use a path relative to the default.graphdb/import folder. I had thought that was for Mac/Unix only, but it looks like Windows does the same. If you move the CSVs you want to import into the import folder, you should be able to load them using file:///theFileName.csv
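For example, with the file from the question copied into default.graphdb/import, the query would look like this (a sketch, assuming the default import configuration):
USING PERIODIC COMMIT 10000
LOAD CSV FROM 'file:///tmp-raise.csv' AS line
WITH line
RETURN line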
LOAD CSV FROM "file:///C:/xyz.csv" AS line
RETURN line
The above code works well, but you have to comment out the configuration line
dbms.directories.import=import
in the settings.
Another solution is to drop a .txt, .cyp, or .cql file into the drag-to-import box.

NLTK POS tagset help not working

I downloaded nltk, but the tagset help is not working.
Whenever I try to access the tagset meanings with:
nltk.help.upenn_tagset('NN')
I get this result:
Traceback (most recent call last):
File "<pyshell#30>", line 1, in <module>
nltk.help.upenn_tagset('NN')
File "C:\Python34\lib\site-packages\nltk\help.py", line 25, in upenn_tagset
_format_tagset("upenn_tagset", tagpattern)
File "C:\Python34\lib\site-packages\nltk\help.py", line 39, in _format_tagset
tagdict = load("help/tagsets/" + tagset + ".pickle")
File "C:\Python34\lib\site-packages\nltk\data.py", line 774, in load
opened_resource = _open(resource_url)
File "C:\Python34\lib\site-packages\nltk\data.py", line 888, in _open
return find(path_, path + ['']).open()
File "C:\Python34\lib\site-packages\nltk\data.py", line 618, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource 'help/tagsets/upenn_tagset.pickle' not found. Please
use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- 'C:\\Users\\aarushi/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Python34\\nltk_data'
- 'C:\\Python34\\lib\\nltk_data'
- 'C:\\Users\\aarushi\\AppData\\Roaming\\nltk_data'
But I have already downloaded the tagsets from the Models tab via nltk.download().
So what am I doing wrong here?
As nltk is telling you, it searched for the file help/tagsets/upenn_tagset.pickle in the directories:
- 'C:\\Users\\aarushi/nltk_data'
- 'C:\\nltk_data'
- 'D:\\nltk_data'
- 'E:\\nltk_data'
- 'C:\\Python34\\nltk_data'
- 'C:\\Python34\\lib\\nltk_data'
- 'C:\\Users\\aarushi\\AppData\\Roaming\\nltk_data'
and could not find it.
Is it there? If not, use nltk.download() to get it, and make sure it ends up in one of those directories.
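If you'd rather skip the downloader GUI, here is a minimal sketch that fetches the missing resource directly; the package id tagsets matches the missing path help/tagsets/upenn_tagset.pickle:
import nltk

# Download just the tagset help data rather than the full models collection.
nltk.download('tagsets')

# This should now print the definition and examples for the NN tag.
nltk.help.upenn_tagset('NN')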
Reference screenshot: https://i.stack.imgur.com/Ri7E2.png
Check the image above for a summary of what exactly needs to be downloaded.
One of the fastest and easiest ways to resolve this issue is to download the tagsets package via nltk.download(). Follow these steps:
Open a Jupyter notebook or a Python shell from your OS terminal.
Call nltk.download() (from the shell or Jupyter); it will open the GUI.
Search for tagsets in the All Packages column.
Download it, and you're done.

Configure Plone to use Relstorage as blobstorage

I have an installation of Plone 4.3.3 with one site. Initially the buildout was configured to use the Data.fs file in var/filestorage and a shared blob storage in var/blobstorage. Then I added RelStorage to the buildout and converted the content of the Data.fs file to the underlying MySQL database. Now Plone is using RelStorage instead of Data.fs.
But now I also want to use RelStorage instead of the blobstorage. Because I am relatively new to Plone, and especially to RelStorage, my idea was to first set up a new empty Plone. I copied the buildout.cfg and base.cfg from the first installation to the new one, created a new database userZodb, changed the base.cfg to use the new database, and also changed the ports for the zeoserver and the clients. The next step was to reconfigure RelStorage so that it no longer uses the file-based blobstorage:
rel-storage =
    type mysql
    db userZodb
    user zodbuser
    passwd innzop
    blob-dir ${buildout:var-dir}/blobstorage
    shared-blob-dir false
    # shared blobs are much faster if we're on the same server.
    # if not, turn it off.
    shared-blob = off
Then I ran the buildout. All was built successfully. After starting the zeoserver, I got this error from the client:
user@server:~/Plone433-dev/zeocluster3$ ./bin/zeoserver start
.
daemon process started, pid=35136
user@server:~/Plone433-dev/zeocluster3$ ./bin/client1 fg
2014-12-17 14:50:31 INFO ZServer HTTP server started at Wed Dec 17 14:50:31 2014
Hostname: 0.0.0.0
Port: 9180
2014-12-17 14:50:32 INFO Products.PloneFormGen gpg_subprocess initialized, using /usr/bin/gpg
Traceback (most recent call last):
File "/home/user/Plone433-dev/zeocluster3/parts/client1/bin/interpreter", line 289, in <module>
exec(compile(__file__f.read(), __file__, "exec"))
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 76, in <module>
run()
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 22, in run
starter.prepare()
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 86, in prepare
self.startZope()
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 262, in startZope
Zope2.startup()
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/__init__.py", line 47, in startup
_startup()
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/App/startup.py", line 81, in startup
DB = dbtab.getDatabase('/', is_root=1)
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/datatypes.py", line 287, in getDatabase
db = factory.open(name, self.databases)
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/datatypes.py", line 185, in open
DB = self.createDB(database_name, databases)
File "/home/user/Plone433-dev/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/datatypes.py", line 182, in createDB
return ZODBDatabase.open(self, databases)
File "/home/user/Plone433-dev/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZODB/config.py", line 101, in open
storage = section.storage.open()
File "/home/user/Plone433-dev/buildout-cache/eggs/RelStorage-1.6.0b2-py2.7.egg/relstorage/config.py", line 33, in open
return RelStorage(adapter, name=config.name, options=options)
File "/home/user/Plone433-dev/buildout-cache/eggs/RelStorage-1.6.0b2-py2.7.egg/relstorage/storage.py", line 212, in __init__
self.blobhelper = BlobHelper(options=options, adapter=adapter)
File "/home/user/Plone433-dev/buildout-cache/eggs/RelStorage-1.6.0b2-py2.7.egg/relstorage/blobhelper.py", line 118, in __init__
fshelper.create()
File "/home/user/Plone433-dev/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZODB/blob.py", line 359, in create
(self.layout_name, self.base_dir, layout))
ValueError: Directory layout `zeocache` selected for blob directory /home/user/Plone433-dev/zeocluster3/var/blobstorage/, but marker found for layout `bushy`
Unfortunately I have no idea where the problem could be. Does anyone have a suggestion?
Thank you!
The solution was to use zodbconvert again. Correctly configured, it can convert from one storage to another, for example from blobstorage to RelStorage. In my case the configuration looks like this:
<filestorage source>
    path /home/user/Plone433-dev/zeocluster/var/filestorage/Data20141230.fs
    blob-dir /home/user/Plone433-dev/zeocluster/var/blobstorage
</filestorage>
<relstorage destination>
    shared-blob-dir false
    # ZODB cache dir
    blob-dir ./var/cacheblob
    blob-cache-size 10mb
    <mysql>
        host localhost
        db Zodb
        user zodbuser
        passwd XXXXXXXXX
    </mysql>
</relstorage>
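With that configuration saved, zodbconvert is run against the file; a sketch, assuming the script was generated by buildout and the config was saved as zodbconvert.cfg (both names are assumptions):
./bin/zodbconvert zodbconvert.cfg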
After that you have to change your base.cfg and buildout.cfg to use only RelStorage. You can find more information about how exactly this works here: https://www.techidiots.net/notes/plone-1/plone-4-3-3-relstorage
If you're using RelStorage, the blob directory is still used for caching (check the RelStorage documentation for more info). In your case there's a problem with your directory layout.
You can remove the whole var directory in ${buildout:directory} and rerun buildout.
This will create a new var directory. After starting the instance you should have a new blob directory with the correct layout.
OR
You can modify the .layout file in ${buildout:directory}/var/blobstorage and change the value from bushy to zeocache, as sketched below.
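The marker is a plain text file containing just the layout name, so a one-liner like this should do it (the path is taken from the error message above):
echo zeocache > /home/user/Plone433-dev/zeocluster3/var/blobstorage/.layout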
Explanation:
The first time you started the Plone instance, it created the blobstorage directory with the given layout, in your case the default bushy. Since you changed the storage configuration, it now expects zeocache. But the marker file .layout is not changed automatically.
If this doesn't help, please post your full buildout.cfg.