I was trying to build the pjsip library according to the instructions given here:
http://trac.pjsip.org/repos/wiki/Getting-Started/Windows-Phone
I followed each step, but the following error occurs:
error CS0006: Metadata file 'F:\Windows-Phone-Wordspace\Pjsip\pjsip-apps\src\pjsua\wp\lib\PjsuaWP.BackEnd.winmd' could not be found
I went to the pjsip-apps\src\pjsua\wp\lib directory and found that the folder is empty.
What can be done to properly build the sample on Windows Phone 8?
Set your config_site.h to the following:
#define PJMEDIA_AUDIO_DEV_HAS_PORTAUDIO 0
#define PJMEDIA_AUDIO_DEV_HAS_WMME 0
#define PJMEDIA_AUDIO_DEV_HAS_WASAPI 1
Go through the solution and also set
PJMEDIA_AUDIO_DEV_HAS_WMME 0
in ..\pjmedia\include\pjmedia-audiodev\config.h (at line 112).
OR
add ;!PJMEDIA_AUDIO_DEV_HAS_PORTAUDIO;!PJMEDIA_AUDIO_DEV_HAS_WMME;PJMEDIA_AUDIO_DEV_HAS_WASAPI
to your project build options (for pjsua_wp or your own project).
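For reference, a config_site.h along these lines (just a sketch; the exact layout of your config_site.h is an assumption, so merge it with whatever is already there) captures the settings above:
/* pjlib/include/pj/config_site.h -- sketch for the Windows Phone 8 build.
 * Disable PortAudio and WMME, enable the WASAPI audio device backend. */
#undef  PJMEDIA_AUDIO_DEV_HAS_PORTAUDIO
#define PJMEDIA_AUDIO_DEV_HAS_PORTAUDIO 0
#undef  PJMEDIA_AUDIO_DEV_HAS_WMME
#define PJMEDIA_AUDIO_DEV_HAS_WMME 0
#undef  PJMEDIA_AUDIO_DEV_HAS_WASAPI
#define PJMEDIA_AUDIO_DEV_HAS_WASAPI 1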
Related
I'm trying to update the manifest included in a Visual FoxPro app with some registry-free COM entries. A compiled FoxPro exe appears to contain a VFP runtime along with some string resources and a default manifest, as well as the pre-compiled app code appended to the end of the exe. When using mt.exe -manifest app.manifest -outputresource:app.exe;#1 the resulting exe is truncated. The manifest is placed at the end of the exe and all the pre-compiled app code is simply removed. Is there a way to update the embedded manifest using mt.exe without removing the app code from the exe, which is normally appended after the manifest?
I've found two alternatives that do NOT work for me. I'm forced to compile the exe with VFP 8, due to code incompatibility with VFP 9.
An article written by Rick Strahl https://www.west-wind.com/wconnect/weblog/ShowEntry.blog?id=890 that assumes the app is compiled using FoxPro 9 SP 2, which isn't an option for me.
A project hook class that assumes the app is compiled in VFP 9: https://www.sweetpotatosoftware.com/blog/index.php/2009/08/03/apply-application-manifest-at-compile-time-with-projecthook/ This is kind of close, but compiling with VFP 9 is not an option for me.
I'm hoping mt.exe provides a better alternative than building my own app to update the manifest in a VFP 8 exe.
I am compiling cocos2d-x (version 3.6) using Visual Studio 2015 and the following error occurs:
fatal error C1189: #error: Macro definition of snprintf conflicts with Standard Library function declaration
It is almost the same question as this one:
here
I tried to follow the first answer and then searched most of the results on the cocos forum, but it still failed; I'm a noob and really have no idea now.
And here is the part of the header file stdio.h where the snprintf macro is checked:
#if defined snprintf
// This definition of snprintf will generate "warning C4005: 'snprintf': macro
// redefinition" with a subsequent line indicating where the previous definition
// of snprintf was. This makes it easier to find where snprintf was defined.
#pragma warning(push, 1)
#pragma warning(1: 4005)
#define snprintf Do not define snprintf as a macro
#pragma warning(pop)
#error Macro definition of snprintf conflicts with Standard Library function declaration
#endif
Could someone help me? Thanks!
I am getting the same error trying to build libsndfile-1. I solved it by building using VS2013 instead of VS2015. (I think it should be possible to simply install VS2013 Build Tools and build from VS2015).
Edit: to install the VS2013 build toolset, run the VS2015 installer and select 'Windows 8.1 and Windows Phone 8.0/8.1 Tools'.
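If switching toolsets is not an option, the usual root cause is a project header that unconditionally defines snprintf as a macro for older compilers. A guard along these lines (a sketch, not the exact cocos2d-x code, and where it lives in your tree is an assumption) avoids the clash, because VS2015 (_MSC_VER >= 1900) ships a conforming snprintf:
/* Map snprintf to _snprintf only on compilers that lack C99 snprintf.
 * VS2015 and later provide snprintf, so defining the macro there triggers
 * the C1189 error quoted above. */
#if defined(_MSC_VER) && _MSC_VER < 1900 && !defined(snprintf)
#define snprintf _snprintf
#endif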
I am trying to compile a CUDA 5.5 application in Nsight on Ubuntu 12.04.
At first I was getting an issue about missing header files such as #include <helper_cuda_drvapi.h>.
To fix this I added the path /usr/include/samples/common/inc to my includes list.
This solved the missing header file issue but caused a new issue.
When trying to compile the program in Nsight, I get the following errors:
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:278: undefined reference to `cuInit'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:279: undefined reference to `cuDeviceGetCount'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:290: undefined reference to `cuDeviceGetName'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:291: undefined reference to `cuDeviceComputeCapability'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:294: undefined reference to `cuDeviceGetAttribute'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:327: undefined reference to `cuDeviceGetAttribute'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:330: undefined reference to `cuDeviceGetAttribute'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:333: undefined reference to `cuDeviceComputeCapability'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:336: undefined reference to `cuDeviceGetAttribute'
Any suggestions?
Thanks in advance.
UPDATE:
What it basically comes down to is that I am trying to compile the "CUDA Video Decoder GL API" sample program on Linux and it is not working because of some error with the header files. Does anyone know why this is?
The undefined references are to CUDA driver API methods. helper_cuda_drvapi.h has the following comment near the top:
Helper functions for CUDA Driver API error handling (make sure that CUDA_H is included in your projects)
So, in your .cu and .cpp files, before the #include <helper_cuda_drvapi.h>, include cuda.h:
#include "cuda.h"
#include <helper_cuda_drvapi.h>
See this question for more information about the CUDA headers.
You need to manually link with libcuda (Nsight projects use the Runtime API, so they do not link the Driver API library by default).
To link with this library:
Go to Properties for your project and open General / Paths and Symbols.
On the Libraries tab add cuda (without prefix or suffix; in theory this keeps your project more cross-platform). You may also want to check "Add to all configurations" when adding the library; otherwise it will apply only to your current build configuration (e.g. "Debug" or "Release").
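Outside of Nsight, the equivalent is just adding -lcuda at link time. A minimal check along these lines (the file name and the include path are assumptions; adjust them to your install) will only link once libcuda is on the link line:
/* drv_query.c -- build with something like:
 *     gcc drv_query.c -I/usr/local/cuda-5.5/include -lcuda -o drv_query
 * The -lcuda flag is the command-line equivalent of adding "cuda" on the
 * Libraries tab above; without it you get the same
 * "undefined reference to `cuInit'" errors. */
#include <stdio.h>
#include <cuda.h>   /* CUDA driver API: cuInit, cuDeviceGetCount, ... */

int main(void)
{
    int count = 0;
    if (cuInit(0) != CUDA_SUCCESS) {
        fprintf(stderr, "cuInit failed -- is the NVIDIA driver installed?\n");
        return 1;
    }
    cuDeviceGetCount(&count);
    printf("CUDA devices: %d\n", count);
    return 0;
}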
In C, main is not really where the program starts. Sure, C programmers begin with int main(int argc, char *argv[]), but this only works because there is startup code that arranges for the function named main to be called first.
I can't seem to find this routine in MinGW, though. Where is it defined? I just searched because I wanted to change it (only as a test) and play around with it a bit. Can someone link me to the correct file in the MinGW folders?
The ld linker will look for a match of one of several symbols to use as the entry point when linking a PE file:
entry point              subsystem
-----------------------  ---------------------
NtProcessStartup         native
WinMainCRTStartup        Windows GUI
mainCRTStartup           Windows CUI (console)
__PosixProcessStartup    POSIX CUI
WinMainCRTStartup        WinCE GUI
mainCRTStartup           Xbox
mainCRTStartup           other
DllMainCRTStartup@12     (or possibly DllMainCRTStartup) for DLLs
MinGW has an object file, linked in automatically, that contains the actual PE entry point. You can see which object files are being linked in automatically by using gcc's -v option.
In a quick test using MinGW 4.6.1 building a console subsystem "hello world" program, the object file containing the entry point is crt2.o and it has a symbol mainCRTStartup that is picked up by the linker as the entry point.
The source file containing the entrypoint code is crtexe.c (or crtdll.c).
You can override the entry point using the --entry option to the linker (-Wl,--entry=whatever when used on the gcc command line).
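As a quick experiment (just a sketch; the file name and the mystart symbol are invented here, and skipping the CRT startup means anything that relies on CRT initialization will not work), you can point the linker at your own entry symbol:
/* custom_entry.c -- build with something like:
 *     gcc custom_entry.c -nostartfiles -Wl,--entry=_mystart -o custom_entry.exe
 * 32-bit MinGW prefixes C symbols with an underscore; on 64-bit MinGW-w64
 * use --entry=mystart instead. */
#include <windows.h>

void mystart(void)
{
    /* The CRT has not been initialized, so avoid printf/malloc and
       call Win32 directly. */
    static const char msg[] = "hello from a custom entry point\r\n";
    DWORD written;
    WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), msg, sizeof(msg) - 1, &written, NULL);
    ExitProcess(0);  /* terminate explicitly; there is no CRT to do it for us */
}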
I am trying to set up Point Cloud Library trunk build with CUDA options enabled.
I believe I have installed CUDA correctly, following these instructions.
In the cmake options for the PCL build, some options are unrecognised.
Is there something I can manually set CUDA_SDK_ROOT_DIR to? Likewise for the other unfound options.
CUDA_SDK_ROOT_DIR should be set to the directory in which you installed NVIDIA's GPU Computing SDK. The GPU Computing SDK is downloadable from the same page at NVIDIA where you downloaded CUDA. By default, this SDK will install to $HOME/NVIDIA_GPU_Computing_SDK. Set it appropriately and then rerun cmake.
Edit:
The CUDA_SDK_ROOT_DIR variable is actually looking for the sub-directory beneath $HOME/NVIDIA_GPU_Computing_SDK that contains the version of CUDA you're using. For me, this is $HOME/NVIDIA_GPU_Computing_SDK/CUDA/v4.1.
The source code for FindCUDA.cmake gives some hints on how this path is found:
########################
# Look for the SDK stuff. As of CUDA 3.0 NVSDKCUDA_ROOT has been replaced with
# NVSDKCOMPUTE_ROOT with the old CUDA C contents moved into the C subdirectory
find_path(CUDA_SDK_ROOT_DIR common/inc/cutil.h
"$ENV{NVSDKCOMPUTE_ROOT}/C"
"$ENV{NVSDKCUDA_ROOT}"
"[HKEY_LOCAL_MACHINE\\SOFTWARE\\NVIDIA Corporation\\Installed Products\\NVIDIA SDK 10\\Compute;InstallDir]"
"/Developer/GPU\ Computing/C"
)
I.e., check that the NVSDKCOMPUTE_ROOT or NVSDKCUDA_ROOT environment variables are set correctly.
On a Linux machine:
Add "$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C" to the 'find_path' options in the FindCUDA.cmake module (/usr/share/cmake-2.8/Modules/FindCUDA.cmake):
########################
# Look for the SDK stuff. As of CUDA 3.0 NVSDKCUDA_ROOT has been replaced with
# NVSDKCOMPUTE_ROOT with the old CUDA C contents moved into the C subdirectory
find_path(CUDA_SDK_ROOT_DIR common/inc/cutil.h
"$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C"
"$ENV{NVSDKCOMPUTE_ROOT}/C"
"$ENV{NVSDKCUDA_ROOT}"
"[HKEY_LOCAL_MACHINE\\SOFTWARE\\NVIDIA Corporation\\Installed Products\\NVIDIA SDK 10\\Compute;InstallDir]"
"/Developer/GPU\ Computing/C"
)
cmake now finds my 4.0 SDK automatically.
But my build still fails to find cutil.h, even though it is there ($HOME/NVIDIA_GPU_Computing_SDK/C/common/inc/cutil.h). I had to add an include flag to the project to finally get it to work: CUDA_NVCC_FLAGS : -I/home/bill/NVIDIA_GPU_Computing_SDK/C/common/inc
Note: -I/$HOME/NVIDIA_GPU_Computing_SDK/C/common/inc does NOT work. (The environment variable $HOME is set correctly.)