COLMAP error on a remote server while running Non-Rigid NeRF

I was checking the GitHub code of LLFF (https://github.com/Fyusion/LLFF) and Non-Rigid NeRF (https://github.com/facebookresearch/nonrigid_nerf) and followed the suggested steps to install the requirements. While running the preprocessing script, which recovers poses from images via SfM using COLMAP, I got the following error on a remote server. Can anyone please help me solve this?
python preprocess.py --input data/example_sequence1/
Need to run COLMAP
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, minimal, minimalegl, offscreen, vnc, webgl, xcb.
*** Aborted at 1660905461 (unix time) try "date -d @1660905461" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGABRT (@0x3e900138a9f) received by PID 1280671 (TID 0x7f5740d49000) from PID 1280671; stack trace: ***
@ 0x7f57463a2197 google::(anonymous namespace)::FailureSignalHandler()
@ 0x7f574421f420 (unknown)
@ 0x7f5743bf300b gsignal
@ 0x7f5743bd2859 abort
@ 0x7f57442be35b QMessageLogger::fatal()
@ 0x7f574477c799 QGuiApplicationPrivate::createPlatformIntegration()
@ 0x7f574477cb6f QGuiApplicationPrivate::createEventDispatcher()
@ 0x7f57443dbb62 QCoreApplicationPrivate::init()
@ 0x7f574477d1e1 QGuiApplicationPrivate::init()
@ 0x7f5744c03bc5 QApplicationPrivate::init()
@ 0x562bbb634975 colmap::RunFeatureExtractor()
@ 0x562bbb61d1a0 main
@ 0x7f5743bd4083 __libc_start_main
@ 0x562bbb620e39 (unknown)
Traceback (most recent call last):
File "imgs2poses.py", line 18, in <module>
gen_poses(args.scenedir, args.match_type)
File "/data1/user_data/ashish/NeRF/LLFF/llff/poses/pose_utils.py", line 268, in gen_poses
run_colmap(basedir, match_type)
File "/data1/user_data/ashish/NeRF/LLFF/llff/poses/colmap_wrapper.py", line 35, in run_colmap
feat_output = ( subprocess.check_output(feature_extractor_args, universal_newlines=True) )
File "/home/ashish/anaconda3/envs/nrnerf/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/home/ashish/anaconda3/envs/nrnerf/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['colmap', 'feature_extractor', '--database_path', 'scenedir/database.db', '--image_path', 'scenedir/images', '--ImageReader.single_camera', '1']' died with <Signals.SIGABRT: 6>.
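A possible workaround to try, offered only as a hedged sketch: the Qt error above lists "offscreen" among the available platform plugins, so on a headless server you can force that plugin when COLMAP is launched. The snippet below mirrors the feature_extractor subprocess call from llff/poses/colmap_wrapper.py shown in the traceback; the scenedir path is a placeholder, and this is an illustration rather than a confirmed fix for Non-Rigid NeRF.

import os
import subprocess

scenedir = "scenedir"  # placeholder: the scene directory passed to the wrapper

# Same feature_extractor invocation as in the traceback above.
feature_extractor_args = [
    "colmap", "feature_extractor",
    "--database_path", os.path.join(scenedir, "database.db"),
    "--image_path", os.path.join(scenedir, "images"),
    "--ImageReader.single_camera", "1",
]

# Force Qt's offscreen platform so COLMAP does not try to open an X display.
env = os.environ.copy()
env["QT_QPA_PLATFORM"] = "offscreen"

feat_output = subprocess.check_output(
    feature_extractor_args, universal_newlines=True, env=env
)
print(feat_output)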

Related

How should I import aws-sdk?

I tried to import aws-sdk/client-personalize-import, but it didn't work; it showed the error described below.
Should I import other modules or something, or does anybody have an idea whether it is related to this being built on Vue.js?
ERROR Failed to compile with 15 errors friendly-errors 15:24:07
ERROR in ./node_modules/@aws-sdk/client-personalize-runtime/node_modules/@aws-sdk/config-resolver/dist-es/index.js friendly-errors 15:24:07
Module build failed: Error: ENOENT: no such file or directory, open 'C:\Users\sara.yamashita\project\ec-front\node_modules\@aws-sdk\client-personalize-runtime\node_modules\@aws-sdk\config-resolver\dist-es\index.js'
friendly-errors 15:24:07
@ ./node_modules/@aws-sdk/client-personalize-runtime/dist-es/PersonalizeRuntimeClient.js 1:0-63 17:26-45
@ ./node_modules/@aws-sdk/client-personalize-runtime/dist-es/index.js
@ ./node_modules/babel-loader/lib??ref--2-0!./node_modules/vue-loader/lib??vue-loader-options!./node_modules/string-replace-loader??ref--12!./pages/item/_code/index.vue?vue&type=script&lang=js&
@ ./pages/item/_code/index.vue?vue&type=script&lang=js&
@ ./pages/item/_code/index.vue
@ ./.nuxt/router.js
@ ./.nuxt/index.js
@ ./.nuxt/client.js
@ multi ./node_modules/eventsource-polyfill/dist/browserify-eventsource.js (webpack)-hot-middleware/client.js?reload=true&timeout=30000&ansiColors=&overlayStyles=&path=%2F__webpack_hmr%2Fclient&name=client ./.nuxt/client.js
friendly-errors 15:24:07
ERROR in ./node_modules/@aws-sdk/client-personalize-runtime/node_modules/@aws-sdk/middleware-content-length/dist-es/index.js friendly-errors 15:24:07
Module build failed: Error: ENOENT: no such file or directory, open 'C:\Users\sara.yamashita\project\ec-front\node_modules\@aws-sdk\client-personalize-runtime\node_modules\@aws-sdk\middleware-content-length\dist-es\index.js'
friendly-errors 15:24:07
@ ./node_modules/@aws-sdk/client-personalize-runtime/dist-es/PersonalizeRuntimeClient.js 2:0-76 26:33-55
@ ./node_modules/@aws-sdk/client-personalize-runtime/dist-es/index.js
@ ./node_modules/babel-loader/lib??ref--2-0!./node_modules/vue-loader/lib??vue-loader-options!./node_modules/string-replace-loader??ref--12!./pages/item/_code/index.vue?vue&type=script&lang=js&
@ ./pages/item/_code/index.vue?vue&type=script&lang=js&
@ ./pages/item/_code/index.vue
@ ./.nuxt/router.js
@ ./.nuxt/index.js
@ ./.nuxt/client.js
@ multi ./node_modules/eventsource-polyfill/dist/browserify-eventsource.js (webpack)-hot-middleware/client.js?reload=true&timeout=30000&ansiColors=&overlayStyles=&path=%2F__webpack_hmr%2Fclient&name=client ./.nuxt/client.js
friendly-errors 15:24:07

OpenPose issue when running the example: Check failed: error == cudaSuccess (2 vs. 0) out of memory, results in core dump

Has anyone encountered this issue when using OpenPose 1.7 under Ubuntu 20.04?
I cannot run the provided example; it simply core dumps. CUDA version 11.3, NVIDIA driver version 465.19.01, GPU: GeForce RTX 3070.
dys@dys:~/Desktop/openpose$ ./build/examples/openpose/openpose.bin --video examples/media/video.avi
Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
F0610 18:34:51.300406 28248 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
@ 0x7f63e2bc51c3 google::LogMessage::Fail()
@ 0x7f63e2bca25b google::LogMessage::SendToLog()
@ 0x7f63e2bc4ebf google::LogMessage::Flush()
@ 0x7f63e2bc56ef google::LogMessageFatal::~LogMessageFatal()
@ 0x7f63e28ffe2a caffe::SyncedMemory::mutable_gpu_data()
@ 0x7f63e27796a6 caffe::Blob<>::mutable_gpu_data()
@ 0x7f63e293a9ee caffe::CuDNNConvolutionLayer<>::Forward_gpu()
@ 0x7f63e28bfb62 caffe::Net<>::ForwardFromTo()
@ 0x7f63e327a25e op::NetCaffe::forwardPass()
@ 0x7f63e32971ea op::PoseExtractorCaffe::forwardPass()
@ 0x7f63e329228b op::PoseExtractor::forwardPass()
@ 0x7f63e328fd80 op::WPoseExtractor<>::work()
@ 0x7f63e32c0c7f op::Worker<>::checkAndWork()
@ 0x7f63e32c0e0b op::SubThread<>::workTWorkers()
@ 0x7f63e32ce8ed op::SubThreadQueueInOut<>::work()
@ 0x7f63e32c5981 op::Thread<>::threadFunction()
@ 0x7f63e2f04d84 (unknown)
@ 0x7f63e2c07609 start_thread
@ 0x7f63e2d43293 clone
Aborted (core dumped)
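One thing worth trying, as a hedged sketch only: the RTX 3070 has 8 GB of memory, and OpenPose's GPU usage scales with the network input resolution, so lowering --net_resolution is a common way to make the demo fit. The "-1x256" value below is an example rather than a recommendation from the OpenPose authors, and the command is wrapped in Python purely for illustration.

import subprocess

# Run the same demo as above, but with a smaller network resolution to reduce
# GPU memory usage; "-1" keeps the aspect ratio while the height is shrunk.
cmd = [
    "./build/examples/openpose/openpose.bin",
    "--video", "examples/media/video.avi",
    "--net_resolution", "-1x256",
]
subprocess.run(cmd, check=True)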

Why is Celery not working on Elastic Beanstalk?

I have an application that runs well with Celery locally, but when I deploy it to Elastic Beanstalk, Celery seems to shut down or not run my tasks. I am using Supervisor to run Celery.
My configuration for supervisord is shown below, after the error output.
I also set a global environment variable C_FORCE_ROOT=true.
Error:
2020-12-21 04:49:56,076 INFO waiting for app, celery-worker to die
[2020-12-21 04:49:57,732: DEBUG/MainProcess] removing tasks from inqueue until task handler finished
Unrecoverable error: WorkerLostError('Could not start worker processes')
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/celery/worker/worker.py", line 208, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.8/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.8/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/usr/local/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/usr/local/lib/python3.8/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 599, in start
c.loop(*c.loop_args())
File "/usr/local/lib/python3.8/site-packages/celery/worker/loops.py", line 59, in asynloop
raise WorkerLostError('Could not start worker processes')
billiard.exceptions.WorkerLostError: Could not start worker processes
[supervisord]
nodaemon=true
[program:app]
command = gunicorn -b 0.0.0.0:5000 --worker-class gevent application.app:app
user=root
directory = /usr/src/app/restful
priority = 900
autostart=true
autorestart = true
stopsignal = TERM
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdin_open = true
tty=true
[program:celery-worker]
command= python -m celery worker -A application.libs.celery_config.celery --loglevel=DEBUG --uid=nobody --gid=nogroup
user=root
directory = /usr/src/app/restful
autostart=true
autorestart = false
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdin_open = true
tty=true
[program:celery-beat]
command= python -m celery beat -A application.libs.celery_config.celery --schedule=/tmp/celerybeat-schedule --loglevel=DEBUG
user=root
directory = /usr/src/app/restful
autostart=true
autorestart = false
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdin_open = true
tty=true
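For reference, here is a minimal, hypothetical sketch of what application/libs/celery_config.py could look like, since both the celery-worker and celery-beat programs point at application.libs.celery_config.celery. The broker/backend URLs and the task are assumptions for illustration only and are not taken from the question.

from celery import Celery

# The "celery" attribute name must match what is referenced by
# "-A application.libs.celery_config.celery" in the supervisord commands.
celery = Celery(
    "application",
    broker="redis://localhost:6379/0",   # assumed broker URL; replace with yours
    backend="redis://localhost:6379/1",  # assumed result backend; optional
)

@celery.task
def ping():
    # Trivial task to confirm the worker actually starts and consumes from the queue.
    return "pong"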

Google Cloud Function error "OperationError: code=3, message=Function failed on loading user code"

I get an error from time to time when deploying nodejs10 Cloud Functions to GCP. The error seems to go away on its own; I just redeploy the same thing a few times. Does anyone know what causes it? Here's the log:
command: gcloud beta functions deploy exchangeIcon --verbosity debug --runtime nodejs10 --memory 128 --region europe-west1 --timeout 5 --trigger-http --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z --entry-point app
DEBUG: Running [gcloud.beta.functions.deploy] with arguments: [--entry-point: "app", --memory: "134217728", --region: "europe-west1", --runtime: "nodejs10", --set-env-vars: "OrderedDict([(u'FUNCTION_REGION', u'europe-west1'), (u'BUILD_DATE', u'2019-05-09T10:01:05.497Z')])", --timeout: "5", --trigger-http: "True", --verbosity: "debug", NAME: "exchangeIcon"]
INFO: Not using a .gcloudignore file.
INFO: Not using a .gcloudignore file.
Deploying function (may take a while - up to 2 minutes)...
..........................................................................failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message:
Traceback (most recent call last):
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 985, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 795, in Run
resources = command_instance.Run(args)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 231, in Run
enable_vpc_connector=True)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 175, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 300, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 356, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=3, message=Function failed on loading user code. Error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code.
In my Stackdriver Logging I just see INVALID_ARGUMENT, but nothing else.
The problem stems from your terminal command not being properly formatted.
--verbosity=debug
is the proper way to write this flag. The same applies to your runtime flag.
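Following that suggestion, here is a hedged sketch of the same deploy command with every flag written in --flag=value form, using the values from the question; whether this alone resolves the OperationError is not guaranteed, and the command is assembled in Python only for illustration.

import subprocess

# gcloud deploy command from the question, reformatted with --flag=value pairs.
cmd = [
    "gcloud", "beta", "functions", "deploy", "exchangeIcon",
    "--verbosity=debug",
    "--runtime=nodejs10",
    "--memory=128",
    "--region=europe-west1",
    "--timeout=5",
    "--trigger-http",
    "--set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z",
    "--entry-point=app",
]
subprocess.run(cmd, check=True)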

3D caffe make runtest error

When building 3D-Caffe, make all and make test are fine, but make runtest fails as shown here.
It looks like it is related to the GPU setup, but I am not sure.
[----------] 4 tests from SoftmaxWithLossLayerTest/3, where TypeParam = caffe::GPUDevice<double>
[ RUN ] SoftmaxWithLossLayerTest/3.TestGradient
*** Aborted at 1493416676 (unix time) try "date -d @1493416676" if you are using GNU date ***
PC: @ 0x7f4ddfd59a05 caffe::Blob<>::gpu_data()
*** SIGSEGV (@0x17ec) received by PID 15580 (TID 0x7f4de5f8fac0) from PID 6124; stack trace: ***
@ 0x7f4ddf3a1390 (unknown)
@ 0x7f4ddfd59a05 caffe::Blob<>::gpu_data()
@ 0x7f4ddfd93ad0 caffe::SoftmaxWithLossLayer<>::Forward_gpu()
@ 0x45ba59 caffe::Layer<>::Forward()
@ 0x4844a0 caffe::GradientChecker<>::CheckGradientSingle()
@ 0x487603 caffe::GradientChecker<>::CheckGradientExhaustive()
@ 0x5d44c7 caffe::SoftmaxWithLossLayerTest_TestGradient_Test<>::TestBody()
@ 0x8ac7d3 testing::internal::HandleExceptionsInMethodIfSupported<>()
@ 0x8a5dea testing::Test::Run()
@ 0x8a5f38 testing::TestInfo::Run()
@ 0x8a6015 testing::TestCase::Run()
@ 0x8a72ef testing::internal::UnitTestImpl::RunAllTests()
@ 0x8a7613 testing::UnitTest::Run()
@ 0x4512a9 main
@ 0x7f4ddefe7830 __libc_start_main
@ 0x4577c9 _start
@ 0x0 (unknown)
Makefile:468: recipe for target 'runtest' failed
make: *** [runtest] Segmentation fault (core dumped)
I would love to help you, but your message is unclear. I would suggest checking your grammar and word choice next time.
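One hedged way to narrow this down is to rerun only the failing GPU test through the gtest binary that make test builds, so the segfault can be reproduced outside the full suite. The binary path below assumes the usual Caffe Makefile build layout (build/test/test_all.testbin) and that device 0 is the GPU under test; both are assumptions to adjust for your setup, and the command is wrapped in Python only for illustration.

import subprocess

cmd = [
    "./build/test/test_all.testbin",
    "--gtest_filter=SoftmaxWithLossLayerTest/3.TestGradient",  # the test that crashed
    "0",  # CUDA device id passed to Caffe's test runner
]
subprocess.run(cmd, check=True)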