How do I improve the performance of my Code Assist in Code Repository? - palantir-foundry

Code Assist is very slow and takes up to an hour to resolve the environment. How would I improve the performance here?

Short answer: If no commits have been made in the past 7 days, the environment cache (used by Code Assist) may have expired. Try making a commit so the CI recomputes the environment cache.
Long answer: When working with Python transforms, a valid Python environment needs to be computed that satisfies all required packages and their dependencies. The more Python packages the environment requires, and the more specific their versions, the harder (and therefore slower) this problem becomes to solve. For this particular code repository, here are all the Python packages and versions set as required in the meta.yaml file:
# If you need to modify the runtime requirements for your package,
# update the 'requirements.run' section in this file

package:
  name: "{{ PACKAGE_NAME }}"
  version: "{{ PACKAGE_VERSION }}"

source:
  path: ../src

requirements:
  build:
    - python 3.6.*
    - setuptools
    - unionutils 0.0.39
    - palantirutils 0.1.100

  # Any extra packages required to run your package.
  run:
    - python 3.6.*
    - transforms {{ PYTHON_TRANSFORMS_VERSION }}
    - unionutils 0.0.39
    - palantirutils 0.1.100
    - transforms-expectations

build:
  script: python setup.py install --single-version-externally-managed --record=record.txt
In practice, though, these packages have their own dependencies, so the resolved environment for this repo actually requires many more Python packages with specific versions (you can see them in the conda lock file, which is a hidden file in the repo). In this case there are 123 additional dependencies.
# This file is autogenerated and supposed to be committed to the git repository.
# Changing this file is not recommended and will likely break the build.
# Run `./gradlew --write-locks` to regenerate.
_libgcc_mutex=0.1=conda_forge # linux-64
_openmp_mutex=4.5=2_gnu # linux-64
alsa-lib=1.2.7.2=h166bdaf_0 # linux-64
brotlipy=0.7.0=py36h27cfd23_1003 # linux-64
ca-certificates=2022.12.7=ha878542_0 # linux-64
certifi=2021.5.30=py36h5fab9bb_0 # linux-64
cffi=1.14.6=py36hd8eec40_1 # linux-64
charset-normalizer=2.1.1=pyhd8ed1ab_0 # noarch
cryptography=35.0.0=py36hd23ed53_0 # linux-64
cycler=0.11.0=pyhd8ed1ab_0 # noarch
dbus=1.13.18=hb2f20db_0 # linux-64
decorator=5.1.1=pyhd3eb1b0_0 # noarch
expat=2.5.0=h27087fc_0 # linux-64
font-ttf-dejavu-sans-mono=2.37=h6964260_0 # noarch
font-ttf-inconsolata=3.000=h77eed37_0 # noarch
font-ttf-source-code-pro=2.038=h77eed37_0 # noarch
font-ttf-ubuntu=0.83=h8b1ccd4_0 # noarch
fontconfig=2.14.1=hc2a2eb6_0 # linux-64
fonts-anaconda=1=h8fa9717_0 # noarch
fonts-conda-ecosystem=1=hd3eb1b0_0 # noarch
freetype=2.12.1=hca18f0e_1 # linux-64
freezegun=1.2.2=pyhd8ed1ab_0 # noarch
future=0.18.2=py36h5fab9bb_3 # linux-64
gettext=0.21.1=h27087fc_0 # linux-64
glib-tools=2.72.1=h6239696_0 # linux-64
glib=2.72.1=h6239696_0 # linux-64
gst-plugins-base=1.20.3=h57caac4_2 # linux-64
gstreamer=1.20.3=hd4edc92_2 # linux-64
icu=69.1=h9c3ff4c_0 # linux-64
idna=3.4=pyhd8ed1ab_0 # noarch
jpeg=9e=h166bdaf_2 # linux-64
keyutils=1.6.1=h166bdaf_0 # linux-64
kiwisolver=1.3.1=py36h605e78d_1 # linux-64
krb5=1.20.1=hf9c8cef_0 # linux-64
lcms2=2.14=hfd0df8a_1 # linux-64
ld_impl_linux-64=2.39=hcc3a1bd_1 # linux-64
lerc=4.0.0=h27087fc_0 # linux-64
libblas=3.9.0=16_linux64_openblas # linux-64
libcblas=3.9.0=16_linux64_openblas # linux-64
libclang=13.0.1=default_hc23dcda_0 # linux-64
libdeflate=1.14=h166bdaf_0 # linux-64
libedit=3.1.20210714=h7f8727e_0 # linux-64
libevent=2.1.10=h9b69904_4 # linux-64
libffi=3.4.2=h6a678d5_6 # linux-64
libgcc-ng=12.1.0=h8d9b700_17 # linux-64
libgfortran-ng=12.2.0=h69a702a_19 # linux-64
libgfortran5=12.2.0=h337968e_19 # linux-64
libglib=2.72.1=h2d90d5f_0 # linux-64
libgomp=12.1.0=h8d9b700_17 # linux-64
libiconv=1.16=h516909a_0 # linux-64
liblapack=3.9.0=16_linux64_openblas # linux-64
libllvm13=13.0.1=hf817b99_2 # linux-64
libnsl=2.0.0=h7f98852_0 # linux-64
libogg=1.3.5=h27cfd23_1 # linux-64
libopenblas=0.3.21=pthreads_h78a6416_3 # linux-64
libopus=1.3.1=h7f98852_1 # linux-64
libpng=1.6.39=h753d276_0 # linux-64
libpq=14.5=h2baec63_4 # linux-64
libsqlite=3.40.0=h753d276_0 # linux-64
libstdcxx-ng=12.1.0=ha89aaad_17 # linux-64
libtiff=4.5.0=h82bc61c_0 # linux-64
libuuid=2.32.1=h14c3975_1000 # linux-64
libvorbis=1.3.7=he1b5a44_0 # linux-64
libwebp-base=1.2.4=h166bdaf_0 # linux-64
libxcb=1.13=h7f98852_1004 # linux-64
libxkbcommon=1.0.3=he3ba5ed_0 # linux-64
libxml2=2.9.12=h885dcf4_1 # linux-64
libzlib=1.2.13=h166bdaf_4 # linux-64
lz4-c=1.9.3=h295c915_1 # linux-64
matplotlib-base=3.3.4=py36hd391965_0 # linux-64
matplotlib=3.3.4=py36h5fab9bb_0 # linux-64
mysql-common=8.0.31=haf5c9bc_0 # linux-64
mysql-libs=8.0.31=h28c427c_0 # linux-64
ncurses=6.2=h58526e2_4 # linux-64
nspr=4.35=h27087fc_0 # linux-64
nss=3.82=he02c5a1_0 # linux-64
numpy=1.19.5=py36hfc0c790_2 # linux-64
olefile=0.46=pyh9f0ad1d_1 # noarch
openjpeg=2.4.0=hb52868f_1 # linux-64
openssl=1.1.1=h7b6447c_0 # linux-64
palantir-spark-time=3.15.0=py_0 # noarch
palantirutils=0.1.100=py36_0 # linux-64
pandas=1.1.5=py36ha9443f7_0 # linux-64
pcre=8.45=h9c3ff4c_0 # linux-64
pillow=8.3.2=py36h676a545_0 # linux-64
pip=22.0.2=pyhd8ed1ab_0 # noarch
pthread-stubs=0.4=h36c2ea0_1001 # linux-64
py4j=0.10.9.7=pyhd8ed1ab_0 # noarch
pycparser=2.21=pyhd3eb1b0_0 # noarch
pydicom=2.3.1=pyh1a96a4e_0 # noarch
pyopenssl=22.0.0=pyhd8ed1ab_1 # noarch
pyparsing=3.0.9=pyhd8ed1ab_0 # noarch
pyqt-impl=5.12.3=py36h7ec31b9_7 # linux-64
pyqt5-sip=4.19.18=py36hc4f0c31_7 # linux-64
pyqt=5.12.3=py36h5fab9bb_7 # linux-64
pyqtchart=5.12=py36h7ec31b9_7 # linux-64
pyqtwebengine=5.12.1=py36h7ec31b9_7 # linux-64
pysocks=1.7.1=py36h5fab9bb_3 # linux-64
pyspark-src=3.2.1_palantir.30=py_0 # noarch
pyspark=3.2.1_palantir.30=py_0 # noarch
python-dateutil=2.8.2=pyhd3eb1b0_0 # noarch
python=3.6.15=hb7a2778_0_cpython # linux-64
python_abi=3.6=2_cp36m # linux-64
pytz=2022.7=pyhd8ed1ab_0 # noarch
qt=5.12.9=h1304e3e_6 # linux-64
readline=8.1=h27cfd23_0 # linux-64
requests=2.28.1=pyhd8ed1ab_0 # noarch
sas7bdat=2.2.3=pyhd8ed1ab_0 # noarch
setuptools=58.0.4=py36h5fab9bb_2 # linux-64
six=1.16.0=pyhd3eb1b0_1 # noarch
sqlite=3.40.0=h5082296_0 # linux-64
tk=8.6.12=h27826a3_0 # linux-64
tornado=6.1=py36h8f6f2f9_1 # linux-64
transforms-expectations=0.153.0=py_0 # noarch
transforms=1.575.0=py_0 # noarch
unionutils=0.0.39=py36_0 # linux-64
urllib3=1.26.13=pyhd8ed1ab_0 # noarch
wheel=0.37.1=pyhd3eb1b0_0 # noarch
xorg-libxau=1.0.9=h7f98852_0 # linux-64
xorg-libxdmcp=1.1.3=h516909a_0 # linux-64
xz=5.2.9=h166bdaf_0 # linux-64
zlib=1.2.13=h166bdaf_4 # linux-64
zstd=1.5.2=h8a70e8d_4 # linux-64
[Conda lock version]
v2 - fc1db1a3046b71820e26b48a2910180bd17d802db0610c30351282a31083fe9b
When the first commit is made to a new Python repo, two relevant tasks take place during the continuous integration (CI) checks. The first task solves the Python environment by calculating a mix of packages that satisfies all the dependencies and requirements of the repo, and the second downloads those specific package versions. The resolved environment is recorded in the hidden conda lock file and is valid for 7 days (essentially, it is cached).
Any time a new commit is made within the 7 days following an environment solve, the CI checks can use the cached Python environment and skip the two tasks of resolving the environment and downloading the packages, which dramatically speeds up the CI check time.
If, however, a commit is made after the cache is more than 7 days old, or if a commit changes the required Python packages or versions in the meta.yaml file, then the environment needs to be solved again, which leads to a longer CI time.
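As the header of the conda lock file above notes, the lock can also be regenerated explicitly. A minimal sketch, assuming you have a local checkout of the repository and can run its Gradle wrapper:
# Force a fresh environment solve and rewrite the hidden conda lock file
$ ./gradlew --write-locks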
So how does this relate to Code Assist?
Code Assist essentially needs to run the same two tasks, but it can also read from that cached environment. So if you notice that Code Assist is taking a long time to start, it is because it cannot read from the cache (either because the packages changed or because the cache expired) and instead has to solve the environment itself. However, since Code Assist is not allowed to write to your repository, it cannot update the cache after it has finished its own solve.
So in this case I would recommend making a commit to the repo to refresh the cache; you should then notice that Code Assist starts a lot faster than 40 minutes (until the 7-day cache expires or the packages change again).
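If there is nothing meaningful to change in the code, one low-effort way to trigger the CI checks (and therefore refresh the cached environment) is an empty commit. A minimal sketch, assuming you have the repository cloned locally with git; a trivial edit through the Code Repository UI achieves the same thing:
# Create a commit with no file changes, purely to kick off CI and re-cache the environment
$ git commit --allow-empty -m "Refresh environment cache"
$ git push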

Related

Why does GitKraken crash after my last Fedora 36 update?

I enjoyed using GitKraken (v8.9) until I updated my Fedora 36 installation.
After the update, GitKraken crashed every time I tried to open a repository tab or the "File/Settings..." view.
I've updated GitKraken to the latest version and also looked at what the Fedora update included that may have caused the problem.
Additionally, I collected the coredump of the crash (via coredumpctl gdb) and the crash log (via journalctl -xf):
Command Line: $'/usr/share/gitkraken/gitkraken --type=renderer --enable-crashpad --crashpad-handler-pid=3178 --enable-crash-reporter=dde15cd3-eccc-4db0-abbf-4626ba853f5f,no_channel --user-data-dir=/home/hugo/.config/GitKraken --standard-schemes --secure-schemes --bypasscsp-schemes=sentry-ipc --cors-schemes=sentry-ipc --fetch-schemes=sentry-ipc --service-worker-schemes --streaming-schemes --app-path=/usr/share/gitkraken/resources/app.asar --no-sandbox --no-zygote --node-integration-in-worker --disable-gpu-compositing --lang=en-US --num-raster-threads=4 --enable-main-frame-before-activation --renderer-client-id=9 --launch-time-ticks=54958457852 --shared-files=v8_context_snapshot_data:100 --field-trial-handle=0,890488252484140780,4810308880598401477,131072 --disable-features=PlzServiceWorker,SpareRendererForSitePerProcess --enable-crashpad'
Executable: /usr/share/gitkraken/gitkraken
Control Group: /user.slice/user-1000.slice/user@1000.service/app.slice/app-gnome-gitkraken-2553.scope
Unit: user@1000.service
User Unit: app-gnome-gitkraken-2553.scope
Slice: user-1000.slice
Owner UID: 1000 (hugo)
Boot ID: 8bdd7f9dd7194682b2286579c96c8ec7
Machine ID: 70b436a43d284e9f9e7f1073bc0cf15b
Hostname: tag009442760151
Storage: /var/lib/systemd/coredump/core.gitkraken.1000.8bdd7f9dd7194682b2286579c96c8ec7.56008.1667909034000000.zst (present)
Disk Size: 77.8M
Message: Process 56008 (gitkraken) of user 1000 dumped core.
Module linux-vdso.so.1 with build-id 28de22885e5a5f761e8b05fe6d610d65bb875b04
Module nsfw.node without build-id.
Module libxkbfile.so.1 with build-id e9e99444598d67ff8514b6adfb390156611311d6
Stack trace of thread 56072:
#0 0x00007f26c336cffc __strlen_evex (libc.so.6 + 0x16cffc)
#1 0x00007f26bf3d351e _ZN14FontDescriptorC2EPKcS1_S1_S1_10FontWeight9FontWidthbb (fontmanager.node + 0xb51e)
#2 0x00007f26bf3d8f19 _Z20createFontDescriptorP10_FcPattern (fontmanager.node + 0x10f19)
#3 0x00007f26bf3d8fb9 _Z12getResultSetP10_FcFontSet (fontmanager.node + 0x10fb9)
#4 0x00007f26bf3d9182 _ZN15FontManagerImpl17getAvailableFontsEPP9ResultSet (fontmanager.node + 0x11182)
#5 0x00007f26bf3d7c7e _ZN18PromiseAsyncWorkerI9ResultSetE7ExecuteEv (fontmanager.node + 0xfc7e)
#6 0x00005602f9af87c1 n/a (gitkraken + 0x20257c1)
#7 0x00007f26c328cded start_thread (libc.so.6 + 0x8cded)
#8 0x00007f26c3312370 __clone3 (libc.so.6 + 0x112370)
With this information I contacted GitKraken support. They told me it is a known issue that GitKraken can't handle some custom fonts, and the most common culprits are the JetBrains fonts.
I also got a list of fonts that are known to cause problems, viz.:
RedHatFont
JetBrains Mono font on Fedora 34 and Ubuntu 20.04 (see comments on this issue).
Vazir
Rubik
I didn't have any of these fonts installed, so it wasn't applicable to me.
Does someone face the same problem and has a solution to that?
I played around a bit with reverting the packages from the buggy update, taking into account the information that GitKraken has issues with some custom fonts.
Finally, I found a bunch of fonts related to the wine package that fixed the problem on rollback.
I didn't actually do the rollback, but completely uninstalled all wine-related fonts because I don't need them anyway.
Removing the following fonts solved my problem:
$ sudo dnf remove wine-\*-fonts
Dependencies resolved.
=====================================================================================================================================================
Package Architecture Version Repository Size
=====================================================================================================================================================
Removing:
wine-arial-fonts noarch 7.19-1.fc36 @updates 157 k
wine-courier-fonts noarch 7.19-1.fc36 @updates 170 k
wine-fixedsys-fonts noarch 7.19-1.fc36 @updates 37 k
wine-marlett-fonts noarch 7.19-1.fc36 @updates 32 k
wine-ms-sans-serif-fonts noarch 7.19-1.fc36 @updates 4.6 M
wine-small-fonts noarch 7.19-1.fc36 @updates 65 k
wine-symbol-fonts noarch 7.19-1.fc36 @updates 51 k
wine-system-fonts noarch 7.19-1.fc36 @updates 121 k
wine-tahoma-fonts noarch 7.19-1.fc36 @updates 300 k
wine-times-new-roman-fonts noarch 7.19-1.fc36 @updates 170 k
wine-webdings-fonts noarch 7.19-1.fc36 @updates 30 k
wine-wingdings-fonts noarch 7.19-1.fc36 @updates 35 k
Removing dependent packages:
wine-fonts noarch 7.19-1.fc36 @updates 0
Removing unused dependencies:
liberation-narrow-fonts noarch 2:1.07.6-8.fc36 @fedora 504 k
Transaction Summary
=====================================================================================================================================================
Remove 14 Packages
BTW, the 'old' (rollback) version of these wine fonts, with which GitKraken still worked, was 7.12-2, but I haven't explicitly tried this rollback alone.
After searching around, I found that the problem might be related to font loading.
There is a support request (it might already be resolved): https://feedback.gitkraken.com/suggestions/218000/add-ligature-font-support
With that hint I installed fira-code-fonts from the repository, and this solved my problem opening the preferences.
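A minimal sketch of that install, assuming the package is named fira-code-fonts in the Fedora repositories:
# Install the Fira Code fonts from the Fedora repositories
$ sudo dnf install fira-code-fonts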
My System:
GitKraken 8.10.3
Fedora 36

python setup.py egg_info did not run successfully. WARNING: Ignoring invalid distribution -yaudio

I need some help please... I tried to install PyDictionary and it gives me the same errors and warnings. For anything else I install, the warning is the same, and I cannot understand why. I went to that directory to see what is in there, but it is empty.
C:\WINDOWS\system32>pip install PyDictionary
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
Collecting PyDictionary
Using cached PyDictionary-2.0.1-py3-none-any.whl (6.1 kB)
Collecting goslate
Using cached goslate-1.5.4.tar.gz (14 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: requests in c:\users\robi\appdata\local\programs\python\python310\lib\site-packages (from PyDictionary) (2.27.1)
Requirement already satisfied: click in c:\users\robi\appdata\local\programs\python\python310\lib\site-packages (from PyDictionary) (8.1.3)
Collecting bs4
Using cached bs4-0.0.1.tar.gz (1.1 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: beautifulsoup4 in c:\users\robi\appdata\local\programs\python\python310\lib\site-packages (from bs4->PyDictionary) (4.11.1)
Requirement already satisfied: colorama in c:\users\robi\appdata\local\programs\python\python310\lib\site-packages (from click->PyDictionary) (0.4.4)
Collecting futures
Using cached futures-3.0.5.tar.gz (25 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [27 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 14, in <module>
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\__init__.py", line 247, in <module>
monkey.patch_all()
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\monkey.py", line 97, in patch_all
patch_for_msvc_specialized_compiler()
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\monkey.py", line 157, in patch_for_msvc_specialized_compiler
patch_func(*msvc14('_get_vc_env'))
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\monkey.py", line 147, in patch_params
mod = import_module(mod_name)
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 20, in <module>
import unittest.mock as mock
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\unittest\mock.py", line 26, in <module>
import asyncio
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\asyncio\__init__.py", line 8, in <module>
from .base_events import *
File "C:\Users\Robi\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 18, in <module>
import concurrent.futures
File "C:\Users\Robi\AppData\Local\Temp\pip-install-jx1giu6v\futures_b2c696095539418c98c7048813756f80\concurrent\futures\__init__.py", line 8, in <module>
from concurrent.futures._base import (FIRST_COMPLETED,
File "C:\Users\Robi\AppData\Local\Temp\pip-install-jx1giu6v\futures_b2c696095539418c98c7048813756f80\concurrent\futures\_base.py", line 357
raise type(self._exception), self._exception, self._traceback
^
SyntaxError: invalid syntax
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
I also tried to check my installed packages and upgrade setuptools, but...
C:\WINDOWS\system32>pip list
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
Package Version
beautifulsoup4 4.11.1
cachetools 5.1.0
calculator1 1.0.0
certifi 2021.10.8
cffi 1.15.0
charset-normalizer 2.0.12
click 8.1.3
colorama 0.4.4
comtypes 1.1.11
DateTime 4.4
distlib 0.3.4
ez-setup 0.9
filelock 3.7.0
Flask 2.1.2
future 0.18.2
google-api-core 2.8.0
google-api-python-client 2.48.0
google-auth 2.6.6
google-auth-httplib2 0.1.0
google-auth-oauthlib 0.5.1
googleapis-common-protos 1.56.1
gTTS 2.2.4
httplib2 0.20.4
idna 3.3
iso8601 1.0.2
itsdangerous 2.1.2
Jinja2 3.1.2
keyboard 0.13.5
MarkupSafe 2.1.1
MouseInfo 0.1.3
numpy 1.23.4
oauthlib 3.2.0
Pillow 9.1.1
pip 22.3.1
platformdirs 2.5.2
playsound 1.3.0
protobuf 3.20.1
pyAlarm 1.0.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
PyAudio 0.2.12
PyAutoGUI 0.9.53
pycparser 2.21
PyGetWindow 0.0.9
pyjokes 0.6.0
PyMsgBox 1.0.9
pyparsing 3.0.9
pyperclip 1.8.2
pypiwin32 223
PyRect 0.2.0
PyScreeze 0.1.28
pyserial 3.5
pyttsx3 2.90
pytweening 1.0.4
pytz 2022.1
pywhatkit 5.3
pywin32 304
PyYAML 6.0
requests 2.27.1
requests-oauthlib 1.3.1
rsa 4.8
scipy 1.9.3
serial 0.0.97
setuptools 65.5.1
six 1.16.0
sounddevice 0.4.4
soupsieve 2.3.2.post1
SpeechRecognition 3.8.1
uritemplate 4.1.1
urllib3 1.26.9
virtualenv 20.14.1
Werkzeug 2.1.2
wheel 0.38.4
wikipedia 1.4.0
xgboost 1.7.1
zope.interface 5.4.0
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
WARNING: Ignoring invalid distribution -yaudio (c:\users\robi\appdata\local\programs\python\python310\lib\site-packages)
I tried to upgrade pip and setuptools in cmd, but that warning is still there. I uninstalled PyAudio to see if that was the problem, but I still get the same errors. I hope someone can help me with this problem.
I appreciate your help.

Segfault triggered by multiple GPUs

I am running a training script with Caffe on an 8-GPU (1080 Ti) server.
If I train on 6 or fewer GPUs (using CUDA_VISIBLE_DEVICES), everything is fine.
(I set export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 and specify these GPUs in the training script.)
But if I train on 7 or 8 GPUs, I consistently see this error at the start of training:
Error: (unix time) try if you are using GNU date
SIGSEGV (@0x70) received by PID 17206 (TID 0x7fc678ffd700) from PID 112; stack trace:
@ 0x7fc86186b4b0 (unknown)
@ 0x7fc861983f75 (unknown)
@ 0x7fc863c4b4c7 std::__cxx11::basic_string<>::_M_construct<>()
@ 0x7fc863c4c60b _ZN5caffe2db10LMDBCursor5valueB5cxx11Ev
@ 0x7fc863ace3e7 caffe::AnnotatedDataLayer<>::DataLayerSetUp()
@ 0x7fc863a6e4d5 caffe::BasePrefetchingDataLayer<>::LayerSetUp()
@ 0x7fc863cbf2b4 caffe::Net<>::Init()
@ 0x7fc863cc11ae caffe::Net<>::Net()
@ 0x7fc863bb9c9a caffe::Solver<>::InitTestNets()
@ 0x7fc863bbb84d caffe::Solver<>::Init()
@ 0x7fc863bbbb3f caffe::Solver<>::Solver()
@ 0x7fc863ba7d61 caffe::Creator_SGDSolver<>()
@ 0x7fc863ccc1c2 caffe::Worker<>::InternalThreadEntry()
@ 0x7fc863cf94c5 caffe::InternalThread::entry()
@ 0x7fc863cfa38e boost::detail::thread_data<>::run()
@ 0x7fc85350d5d5 (unknown)
@ 0x7fc83fee56ba start_thread
@ 0x7fc86193d41d clone
@ 0x0 (unknown)
The Error: (unix time) ... at the start of the trace is apparently emitted by glog.
It appears to be emitted when a general failure happens.
This thread shows many different issues triggering Error: (unix time) ... and similar traces.
In the thread, it is noted that multiple GPUs may trigger this error.
That appears to be the root cause in my case.
Are there things I can look into further to understand what is happening?

What to do with 'Bus error' in caffe while training?

I am using an NVIDIA Jetson TX1 and Caffe to train AlexNet on my own data.
I have 104,000 training and 20,000 validation images fed to my model, with a batch size of 16 for both testing and training.
I run the solver for training and get this Bus error after 1300 iterations:
.
.
.
I0923 12:08:37.121116 2341 sgd_solver.cpp:106] Iteration 1300, lr = 0.01
*** Aborted at 1474628919 (unix time) try "date -d @1474628919" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGBUS (@0x7ddea45000) received by PID 2341 (TID 0x7faa9fdf70) from PID 18446744073149894656; stack trace: ***
@ 0x7fb4b014e0 (unknown)
@ 0x7fb3ebe8b0 (unknown)
@ 0x7fb4057248 (unknown)
@ 0x7fb40572b4 (unknown)
@ 0x7fb446e120 caffe::db::LMDBCursor::value()
@ 0x7fb4587624 caffe::DataReader::Body::read_one()
@ 0x7fb4587a90 caffe::DataReader::Body::InternalThreadEntry()
@ 0x7fb458a870 caffe::InternalThread::entry()
@ 0x7fb458b0d4 boost::detail::thread_data<>::run()
@ 0x7fafdf7ef0 (unknown)
@ 0x7fafcfde48 start_thread
Bus error
I use Ubuntu 14, NVIDIA Tegra X1, 3.8 GB RAM.
As I understand it, this is a memory issue. Could you please explain more about it and help me figure out how to solve this problem?
If any other information is needed, please let me know.

Asking for the installation of Caffe

I was installing the Caffe library on Mac OS, but when I typed 'make runtest', I encountered the following problem. What should I do? Thanks in advance. My MacBook doesn't have a CUDA-capable GPU; does this affect the installation?
.build_release/test/test_all.testbin 0 --gtest_shuffle
Cuda number of devices: 32767
Setting to use device 0
Current device id: 0
Current device name:
Note: Randomizing tests' orders with a seed of 14037 .
[==========] Running 1927 tests from 259 test cases.
[----------] Global test environment set-up.
[----------] 4 tests from BlobSimpleTest/0, where TypeParam = f
[ RUN ] BlobSimpleTest/0.TestPointersCPUGPU
E0306 11:45:15.035683 2126779136 common.cpp:104] Cannot create Cublas handle. Cublas won't be available.
E0306 11:45:15.114891 2126779136 common.cpp:111] Cannot create Curand generator. Curand won't be available.
F0306 11:45:15.115012 2126779136 syncedmem.cpp:55] Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version
*** Check failure stack trace: ***
@ 0x10d2c976a google::LogMessage::Fail()
@ 0x10d2c8f14 google::LogMessage::SendToLog()
@ 0x10d2c93c7 google::LogMessage::Flush()
@ 0x10d2cc679 google::LogMessageFatal::~LogMessageFatal()
@ 0x10d2c9a4f google::LogMessageFatal::~LogMessageFatal()
@ 0x10e023406 caffe::SyncedMemory::to_gpu()
@ 0x10e022c5e caffe::SyncedMemory::gpu_data()
@ 0x108021d9c caffe::BlobSimpleTest_TestPointersCPUGPU_Test<>::TestBody()
@ 0x10849ba5c testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x10848a1ba testing::Test::Run()
@ 0x10848b0e2 testing::TestInfo::Run()
@ 0x10848b7d0 testing::TestCase::Run()
@ 0x108491f86 testing::internal::UnitTestImpl::RunAllTests()
@ 0x10849c264 testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x108491c99 testing::UnitTest::Run()
@ 0x107f8c89a main
@ 0x7fff903e15c9 start
@ 0x3 (unknown)
make: *** [runtest] Abort trap: 6
I had the same issue. But I have a graphics card specifically to run Caffe on, so CPU_ONLY was not an option ;-)
To check whether it's the same cause as mine, try running the CUDA Samples deviceQuery example.
I fixed it using the CUDA Guide runfile verification steps:
sudo chmod 0666 /dev/nvidia*
Finally, I found a solution by setting CPU_ONLY := 1 in Makefile.config (uncomment the original line by removing the '#' in front of "CPU_ONLY := 1" in Makefile.config) and rerunning "make clean", "make all", "make test", then "make runtest", referring to this link - https://github.com/BVLC/caffe/issues/736
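A minimal sketch of that CPU-only rebuild, assuming you are in the Caffe source directory and have already copied Makefile.config.example to Makefile.config:
# In Makefile.config, uncomment the CPU-only switch so it reads:
# CPU_ONLY := 1
$ make clean
$ make all
$ make test
$ make runtest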