How do you overcome "RuntimeError: Gamma ramp size is reported as 0." without changing the window type when using PsychoPy?

I have developed a program to play videos in PsychoPy on a machine with Ubuntu 16.04, an NVIDIA GPU, and the associated driver. The program works perfectly fine on this machine. The program is rather large but, of note, it uses visual.Window(fullscr=True), which by default uses the pyglet backend, and it also uses visual.MovieStim3. I am now trying to run this program on a different machine with Ubuntu 18.04.1 LTS and integrated Intel graphics (HD Graphics 620 (Kaby Lake GT2)), and I am having problems.
Driver info for the new machine is below:
*-display
description: VGA compatible controller
product: Intel Corporation
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 02
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:128 memory:ee000000-eeffffff memory:d0000000-dfffffff ioport:f000(size=64) memory:c0000-dffff
Here is the traceback when I run the program:
Traceback (most recent call last):
File "/home/adf/mxj719/experiments/video_sorting/video_sorting.py", line 456, in <module>
start_sorting(av_original_csv, user, usr_csv, bonus, last_video)
File "/home/adf/mxj719/experiments/video_sorting/video_sorting.py", line 357, in start_sorting
win = visual.Window(fullscr=True)
File "/home/adf/mxj719/.conda/envs/conda_psychopy/lib/python2.7/site-packages/psychopy/visual/window.py", line 375, in __init__
self.backend = backends.getBackend(win=self, *args, **kwargs)
File "/home/adf/mxj719/.conda/envs/conda_psychopy/lib/python2.7/site-packages/psychopy/visual/backends/__init__.py", line 32, in getBackend
return Backend(win, *args, **kwargs)
File "/home/adf/mxj719/.conda/envs/conda_psychopy/lib/python2.7/site-packages/psychopy/visual/backends/pygletbackend.py", line 227, in __init__
self._origGammaRamp = self.getGammaRamp()
File "/home/adf/mxj719/.conda/envs/conda_psychopy/lib/python2.7/site-packages/psychopy/visual/backends/pygletbackend.py", line 326, in getGammaRamp
return getGammaRamp(self.screenID, self.xDisplay)
File "/home/adf/mxj719/.conda/envs/conda_psychopy/lib/python2.7/site-packages/psychopy/visual/backends/gamma.py", line 120, in getGammaRamp
rampSize = getGammaRampSize(screenID, xDisplay=xDisplay)
File "/home/adf/mxj719/.conda/envs/conda_psychopy/lib/python2.7/site-packages/psychopy/visual/backends/gamma.py", line 285, in getGammaRampSize
raise RuntimeError("Gamma ramp size is reported as 0.")
RuntimeError: Gamma ramp size is reported as 0.
Segmentation fault
This seems to be a common problem:
https://discourse.psychopy.org/t/gamma-problem-in-v1-90-2/4549
But each solution involves switching the visual.Window backend to either pygame (which is now archaic for PsychoPy) or glfw (which is not simple to set up, is very new, and is not well documented). I would like a solution that allows me to continue using pyglet.
Another suggested solution involves ensuring that xf86-video-intel is installed so that a sensible LUT size is reported. I have tried this, and it seems that this driver is installed by default on Ubuntu 18, so this does not work for me.
Someone else mentions that I could comment out the RuntimeError in the source code, but I could not find those lines at the location they pointed to (/usr/lib/python2.7/dist-packages/psychopy/visual/backends/gamma.py).
Please also note that I have tried both the Python 2.7 and Python 3.5 PsychoPy install instructions with Anaconda, as outlined here:
http://psychopy.org/installation.html

You might need to set the driver in a configuration file. To do this, edit your /etc/X11/xorg.conf.d/20-intel.conf file (or create it if it does not exist) to contain the following lines:
Section "Device"
Identifier "Intel Graphics"
Driver "intel"
EndSection
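If editing the X configuration does not help, the "comment out the RuntimeError" idea mentioned in the question can be tried without touching the installed sources, by monkey-patching PsychoPy's gamma helper before the window is created. This is only a hedged sketch based on the module layout visible in the traceback; the 256-entry fallback is an assumption (a common 8-bit LUT size), and it papers over the driver issue rather than fixing it, so later gamma calls may still fail if the driver genuinely exposes no LUT:

# Hypothetical workaround: fall back to a guessed LUT size of 256 when
# X reports a gamma ramp size of 0, instead of raising RuntimeError.
from psychopy.visual.backends import gamma

_orig_getGammaRampSize = gamma.getGammaRampSize

def _getGammaRampSizeWithFallback(screenID, xDisplay=None):
    try:
        return _orig_getGammaRampSize(screenID, xDisplay=xDisplay)
    except RuntimeError:
        return 256  # assumption: not actually queried from the hardware

gamma.getGammaRampSize = _getGammaRampSizeWithFallback

# ...then create the window as usual:
# win = visual.Window(fullscr=True)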

Related

Problems running Street pyghter - 1.2

I'm trying to run this project
on my Raspberry Pi 4, but there are some errors, like missing files and undefined variables.
I tried to change things and fix the problems, but it still does not work.
Does someone have a tutorial to build one from zero, or a newer version?
Every project, whether created by me or cloned, gives me this error in the terminal:
Hello from the pygame community. https://www.pygame.org/contribute.html
/home/pi/Desktop/pygame/main.py:3: RuntimeWarning: use font: libSDL2_ttf-2.0.so.0: cannot open shared object file: No such file or directory
(ImportError: libSDL2_ttf-2.0.so.0: cannot open shared object file: No such file or directory)
pygame.font.init()
Traceback (most recent call last):
File "/home/pi/Desktop/pygame/main.py", line 3, in <module>
pygame.font.init()
File "/home/pi/.local/lib/python3.7/site-packages/pygame/__init__.py", line 59, in __getattr__
raise NotImplementedError(missing_msg)
I resolved it on the Raspberry Pi by starting from zero: deleting everything on the micro SD card and installing Raspberry Pi OS again. The problem was that I had tried to install pygame from the terminal, but it is already installed on Raspberry Pi OS, so the error was probably introduced during that installation.
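Before wiping the SD card, a quick check from Python can confirm that the failure is the missing SDL2_ttf library: per the traceback above, merely touching pygame.font raises NotImplementedError when libSDL2_ttf-2.0.so.0 cannot be loaded. A minimal probe sketch:

# Probe whether pygame's font module is usable; when SDL2_ttf is missing,
# pygame replaces pygame.font with a stub whose attribute access raises.
import pygame

pygame.init()
try:
    pygame.font.init()
    print("pygame.font OK")
except (NotImplementedError, ImportError) as exc:
    # libSDL2_ttf-2.0.so.0 is absent or broken; reinstall pygame or the
    # OS-level SDL2_ttf package before using fonts.
    print("pygame.font unavailable:", exc)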

Assertion Error while using stanford-sentiment-treebank-roberta.2021-03-11.tar.gz in ALLENNLP

I have created a virtual Ubuntu machine, installed AllenNLP,
and tried an example from the AllenNLP demo website.
I executed the code below:
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path("myLocalPath/stanford-sentiment-treebank-roberta.2021-03-11.tar.gz")
predictor.predict("a very well-made, funny and entertaining picture.")
which gave me the error below.
>>> predictor.predict("a very well-made, funny and entertaining picture.")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/.local/lib/python3.8/site-packages/allennlp/predictors/text_classifier.py", line 24, in predict
return self.predict_json({"sentence": sentence})
File "/home/.local/lib/python3.8/site-packages/allennlp/predictors/predictor.py", line 54, in predict_json
instance = self._json_to_instance(inputs)
File "/home/.local/lib/python3.8/site-packages/allennlp/predictors/text_classifier.py", line 40, in _json_to_instance
return self._dataset_reader.text_to_instance(sentence)
File "/home/.local/lib/python3.8/site-packages/allennlp_models/classification/dataset_readers/stanford_sentiment_tree_bank.py", line 114, in text_to_instance
assert isinstance(
AssertionError
But when I executed the code below,
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path("myLocalPath/sst-roberta-large-2020.06.08.tar.gz")
predictor.predict("a very well-made, funny and entertaining picture.")
It worked.
The only difference between the two snippets above is the version of the roberta-large model archive.
I have installed the latest version of AllenNLP in my virtual machine.
I don't have an NVIDIA graphics card in my virtual machine; could this be the reason?
But then how come the other version works?
Please help.
The AllenNLP version was the issue: it was 2.1.0, and it worked with version 2.4.0.
Use the command "pip install allennlp"
and then "pip install allennlp-models"
instead of "pip install allennlp==[some version] allennlp-models==[some version]",
or make sure you have a version greater than or equal to 2.4.0.
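A quick way to confirm which version is actually installed in the virtual machine (a trivial sketch; 2.4.0 is the threshold that worked above, and the question's paths show Python 3.8, so the stdlib importlib.metadata is available):

# Print the installed AllenNLP version without importing the package itself.
import importlib.metadata

print(importlib.metadata.version("allennlp"))  # should be >= 2.4.0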

Windows 10 Rtree installation successful from .whl file, but error when running code

I am running Python 3.7, 64bit on Windows 10 and trying desperately to get Rtree running. I use the package Rtree-0.9.1-cp37-cp37m-win_amd64.whl from Christoph Gohlke (https://www.lfd.uci.edu/~gohlke/pythonlibs/).
I have tried for a very long time to get it to work, but I keep getting the following error message when running a script that uses geopandas.
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\rtree\core.py", line 90, in <module>
rt = ctypes.CDLL(os.path.join(here, 'spatialindex_c.dll'))
File "C:\Python37\lib\ctypes\__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
The installation of the whl package should include the libspatialindex files, but they are not found when running the code. I tried Python 2.7 first, then installed Python 3.7. I've checked all the dependencies and checked whether the spatialindex_c.dll file is in the right place, but nothing helps. It would be great to get an answer on this.
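A hedged diagnostic sketch, based only on the traceback above (rtree/core.py loads spatialindex_c.dll from its own package directory): it locates the DLL without importing rtree (which would itself crash here) and re-attempts the load. Note that WinError 126 can also mean a dependency of the DLL is missing (such as a Visual C++ runtime), even when the DLL itself is present.

# Locate the DLL the way rtree/core.py does, without importing rtree.
import ctypes
import os
import sysconfig

site_packages = sysconfig.get_paths()['purelib']
dll = os.path.join(site_packages, 'rtree', 'spatialindex_c.dll')
print(dll, 'exists:', os.path.exists(dll))

if os.path.exists(dll):
    ctypes.CDLL(dll)  # re-raises WinError 126 if a dependency is missing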

Selenium python library via docker, Chrome error failed to start: exited abnormally

I am trying to run some Python scripts with the selenium library from within a Docker container based on miniconda/anaconda, but I keep getting this error: selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally. I am also using a Python wrapper for Xvfb to avoid opening a real Chrome window.
To reproduce this (from a running docker container):
root@304ccd3bae83:/opt# python
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
>>> from selenium import webdriver
>>> from xvfbwrapper import Xvfb
>>>
>>> with Xvfb(width=1366, height=768) as xvfb:
... my_driver = webdriver.Chrome('/opt/chromedriver/2.33/chromedriver')
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__
desired_capabilities=desired_capabilities)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 151, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 240, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 308, in execute
self.error_handler.check_response(response)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4),platform=Linux 4.4.0-116-generic x86_64)
According to this: https://sites.google.com/a/chromium.org/chromedriver/help/chrome-doesn-t-start it seems one may need to use a stand-alone version of Chrome that works for all users, but I am not sure how the Docker build works. I guess the Docker image is built as root and all the code inside it is executed as root, so there should not be any issue with different users controlling Chrome.
This Python code works fine on a normal Ubuntu laptop with X windows. I need to carefully pick matching versions of Chrome and chromedriver; at the moment, checking from within the running Docker container:
root@304ccd3bae83:/opt# /opt/chromedriver/2.33/chromedriver --version
ChromeDriver 2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4)
root@304ccd3bae83:/opt# google-chrome-stable --version
Google Chrome 62.0.3202.75
These options helped solve the issue (shown with the surrounding lines needed to make the snippet runnable):
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-setuid-sandbox')
my_driver = webdriver.Chrome('/opt/chromedriver/2.33/chromedriver',
                             chrome_options=chrome_options)
At least one of them is needed when seeing Chrome failed to start: crashed.
Also: make sure there are no zombie chromedriver processes left over from previous executions; use ps aux | grep chromedriver to find the PIDs to kill.
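The same check can be done from Python, which is handy inside a container (a small sketch equivalent to the ps pipeline above):

# List leftover chromedriver processes; the second column of each ps line
# is the PID to kill.
import subprocess

ps = subprocess.run(['ps', 'aux'], stdout=subprocess.PIPE, universal_newlines=True)
for line in ps.stdout.splitlines():
    if 'chromedriver' in line:
        print(line)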
Bear in mind that if you are using the Python multiprocessing library to spawn many processes, each with its own instance of the Chrome browser, then you cannot use Docker this way (a container is supposed to run just one Python process, unless you use something like supervisor); if you try anyway, you may see: selenium.common.exceptions.WebDriverException: Message: chrome not reachable.

Stanford Tagger in nltk not working due to JVM parameters

I am having a weird error while running the following example code snippet (import added for completeness; in NLTK 2.x the tagger lives in nltk.tag.stanford):
from nltk.tag.stanford import StanfordTagger
st = StanfordTagger('bidirectional-distsim-wsj-0-18.tagger')
st.tag('What is the airspeed of an unladen swallow ?'.split())
The first line worked properly, but the second line gives the following error:
Could not create the Java virtual machine.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/nltk-2.0.1rc1- py2.6.egg/nltk/tag/stanford.py", line 51, in tag
return self.batch_tag([tokens])[0]
File "/usr/local/lib/python2.6/dist-packages/nltk-2.0.1rc1-py2.6.egg/nltk/tag/stanford.py", line 77, in batch_tag
stdout=PIPE, stderr=PIPE)
File "/usr/local/lib/python2.6/dist-packages/nltk-2.0.1rc1-py2.6.egg/nltk/internals.py", line 166, in java
raise OSError('Java command failed!')
OSError: Java command failed!
I have tried adding /usr/lib/jvm to the path, but it is still not working.
It wasn't working for me either, so I tried the following and it's working perfectly:
st = POSTagger('path-to/stanford-postagger-full-2012-07-09/models/wsj-0-18-left3words.tagger',
'path-to/stanford-postagger-full-2012-07-09/stanford-postagger.jar')
and use nltk's tokenize method instead of Python's split():
taggedSentence = st.tag(nltk.word_tokenize(sentence))
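Putting the pieces together, a minimal runnable sketch of this answer (the path-to placeholders stand for wherever you unpacked the Stanford POS tagger, and the nltk.tag.stanford import path is the NLTK 2.x location implied by the traceback):

# Tag a sentence with the Stanford POS tagger via NLTK 2.x.
import nltk
from nltk.tag.stanford import POSTagger

st = POSTagger(
    'path-to/stanford-postagger-full-2012-07-09/models/wsj-0-18-left3words.tagger',
    'path-to/stanford-postagger-full-2012-07-09/stanford-postagger.jar')

sentence = 'What is the airspeed of an unladen swallow ?'
# May require nltk.download('punkt') depending on the NLTK version.
taggedSentence = st.tag(nltk.word_tokenize(sentence))
print(taggedSentence)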
I see that this question is very outdated, but recently I got the same error for an unknown reason. It gave me a lot of headache, but I found a solution.
First, I installed Oracle Java (instructions here: How To Manually Install Oracle Java on a Debian or Ubuntu VPS).
Now my Python script gave me more information about the error. It output something like:
Forking JVM: error=12, Cannot allocate memory or error=12, Not enough space
Here you can read more about this kind of problem: Forking the JVM
To avoid that annoying error, I needed to edit /etc/sysctl.conf and add the following:
vm.overcommit_memory = 1
Then restart the system for the change to take effect (or apply it immediately with sysctl -w vm.overcommit_memory=1).