What misconfiguration might cause Boost Filesystem to fail with an access violation when compiled in Debug mode under Visual Studio 2019? - boost-locale

I am struggling to understand why some of my code using Boost, which worked fine under Visual Studio 2017, now results in an access violation under Visual Studio 2019. I only encounter this failure in a Debug build; a Release build works fine, with no issues.
What could I have set up incorrectly in my build, environment, or code to cause such a failure?
My environment:
Windows 10
Boost 1.74 (dynamic link)
Visual Studio 2019 v16.7.6
Compiling for C++ x64
The failing line of my code is this:
boost::filesystem::path dir = (boost::filesystem::temp_directory_path() / boost::filesystem::unique_path("%%%%-%%%%-%%%%-%%%%"));
The failing code in Boost Filesystem is here, in boost/filesystem/path.hpp:
namespace path_traits
{ // without codecvt
  inline
  void convert(const char* from,
               const char* from_end,    // 0 for null terminated MBCS
               std::wstring & to)
  {
    convert(from, from_end, to, path::codecvt());
  }
The failure message reported by Visual Studio is as follows:
Exception thrown at 0x00007FF9164F1399 (vcruntime140d.dll) in ezv8.tests.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF.
The call stack looks like this:
vcruntime140d.dll!00007ff9164f1550() Unknown
> boost_filesystem-vc142-mt-gd-x64-1_74.dll!wmemmove(wchar_t * _S1, const wchar_t * _S2, unsigned __int64 _N) Line 248 C++
boost_filesystem-vc142-mt-gd-x64-1_74.dll!std::_WChar_traits<wchar_t>::move(wchar_t * const _First1, const wchar_t * const _First2, const unsigned __int64 _Count) Line 204 C++
boost_filesystem-vc142-mt-gd-x64-1_74.dll!std::wstring::append(const wchar_t * const _Ptr, const unsigned __int64 _Count) Line 2864 C++
boost_filesystem-vc142-mt-gd-x64-1_74.dll!std::wstring::append<wchar_t *,0>(wchar_t * const _First, wchar_t * const _Last) Line 2916 C++
boost_filesystem-vc142-mt-gd-x64-1_74.dll!`anonymous namespace'::convert_aux(const char * from, const char * from_end, wchar_t * to, wchar_t * to_end, std::wstring & target, const std::codecvt<wchar_t,char,_Mbstatet> & cvt) Line 77 C++
boost_filesystem-vc142-mt-gd-x64-1_74.dll!boost::filesystem::path_traits::convert(const char * from, const char * from_end, std::wstring & to, const std::codecvt<wchar_t,char,_Mbstatet> & cvt) Line 153 C++
appsvcs.dll!boost::filesystem::path_traits::convert(const char * from, const char * from_end, std::wstring & to) Line 1006 C++
appsvcs.dll!boost::filesystem::path_traits::dispatch<std::wstring>(const std::string & c, std::wstring & to) Line 257 C++
appsvcs.dll!boost::filesystem::path::path<char [20]>(const char[20] & source, void * __formal) Line 168 C++
I use UTF-8 strings throughout my code, so I have configured boost::filesystem to expect UTF-8 strings as follows:
boost::nowide::nowide_filesystem();

The cause of this issue turned out to be inconsistent use of _ITERATOR_DEBUG_LEVEL. This setting affects ABI compatibility. I was setting this macro (to 0) in my own code, but it was not set in the Boost build. The solution is either to remove the macro from one's own code, or to add it to the Boost build by passing define=_ITERATOR_DEBUG_LEVEL=0 to the b2 arguments (from another Stack Overflow answer), as sketched below.
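For example, a Boost rebuild along these lines (a sketch; the toolset and other feature values depend on your installation):

b2 toolset=msvc-14.2 address-model=64 link=shared variant=debug define=_ITERATOR_DEBUG_LEVEL=0 stage

Whichever side you change, every binary that passes standard-library objects across a DLL boundary must agree on the value of _ITERATOR_DEBUG_LEVEL.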

Related

interfacing g77 and free pascal

I have forced a Free Pascal object to almost work with a g77 main. The problem is that the g77 loader does not load the Pascal library, so any attempt to insert writeln in the Pascal code fails because FPC_IOCHECK, fpc_get_output, etc. are not found. Without I/O it works. Does anyone know a fix of the form -lPASCAL? There is a 30-year history that justifies this step; I understand perfectly that the request may seem foolish.
Thanks.
I work with Ubuntu 20.04.
g77 -v says:
Configured with: ../src/configure -v --enable-languages=c,c++,f77,pascal --prefix=/usr --libexecdir=/usr/lib --with-gxx-include-dir=/usr/include/c++/3.4 --enable-shared --with-system-zlib --enable-nls --without-included-gettext --program-suffix=-3.4 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug x86_64-linux-gnu
Thread model: posix
gcc version 3.4.6 (Ubuntu 3.4.6-6ubuntu5)
There are three units:
main.f:
   10 WRITE (6,'(A,$)') '0 to stop>'
      READ (5,*) I
      IF (I.EQ.0) STOP
      CALL INFACE (I)
      GOTO 10
      STOP
      END

      SUBROUTINE PRINTER (I,R)
      INTEGER*4 I
      REAL*8 R
      WRITE (6,*) 'received',I,R
      RETURN
      END
inter.c:
#include <stdio.h>

extern void SUB_$$_PROC1_$SMALLINT();
extern void SUB_$$_PRINTNOW_$SMALLINT$REAL();
extern void printer_();
extern void printnow();
extern void inface_(int*);

void inface_(i)
int *i;
{ SUB_$$_PROC1_$SMALLINT(i); }

void SUB_$$_PRINTNOW_$SMALLINT$REAL(i,r)
int i; double r;
{ printf("route 3.0.4 "); printer_(&i,&r); }

void printnow_(i,r)
int i; double r;
{ printf("route 3.2.0 "); printer_(&i,&r); }
sub.p:
unit sub;

interface
procedure PROC1_(var i:integer);
procedure printnow_(i:integer; r:real); external;

implementation

procedure PROC1_(var i:integer);
var r:real;
begin
  r := 2*sqrt(i);
  printnow_(i,r);
  {writeln('a');}
end;

end.
I have compiled with
fpc -Un sub.p (with either version 3.0.4 or 3.2.0)
g77 main.f inter.c sub.o
Everything works with the writeln commented out; the link fails otherwise.
Note that the C routine actually called differs between fpc 3.0.4 and fpc 3.2.0.

How to develop a tool in C/C++ whose command interface is a Tcl shell?

Suppose a tool X needs to be developed which is written in C/C++ and has a Tcl command-line interface. What are the steps, or the general way to go about it?
I know about Tcl C API which can be used to extend Tcl by writing C extension for it.
What you're looking to do is to embed Tcl (totally a supported use case; Tcl remembers that it is a C library) while still making something tclsh-like. The simplest way of doing this is:
Grab a copy of tclAppInit.c (e.g., this is the current one in the Tcl 8.6 source tree as I write this) and adapt it, probably by putting the code to register your extra commands, linked variables, etc. in the Tcl_AppInit() function; you can probably trim a bunch of stuff out easily enough. Then build and link directly against the Tcl library (without stubs) to get what is effectively your own custom tclsh with your extra functionality, as sketched below.
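As an illustration only (a minimal sketch rather than the stock tclAppInit.c; the greet command is a made-up example):

#include <tcl.h>

/* A trivial custom command: greet name */
static int
GreetCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
{
    (void) cd;
    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "name");
        return TCL_ERROR;
    }
    Tcl_SetObjResult(interp,
            Tcl_ObjPrintf("hello, %s", Tcl_GetString(objv[1])));
    return TCL_OK;
}

/* Called by Tcl_Main once the interpreter exists */
static int
AppInit(Tcl_Interp *interp)
{
    if (Tcl_Init(interp) == TCL_ERROR) {
        return TCL_ERROR;
    }
    Tcl_CreateObjCommand(interp, "greet", GreetCmd, NULL, NULL);
    return TCL_OK;
}

int
main(int argc, char *argv[])
{
    Tcl_Main(argc, argv, AppInit);   /* handles the REPL; does not return */
    return 0;
}

Built and linked against the Tcl library, this behaves like tclsh with one extra command.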
You can use Tcl's API more extensively than that if you're not interested in interactive use. The core for non-interactive use is:
#include <tcl.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    // IMPORTANT: Initialises the Tcl library internals!
    Tcl_FindExecutable(argv[0]);
    Tcl_Interp *interp = Tcl_CreateInterp();
    // Register your custom stuff here
    int code = Tcl_Eval(interp, "your script");
    // Or Tcl_EvalFile(interp, "yourScriptFile.tcl");
    const char *result = Tcl_GetStringResult(interp);
    if (code == TCL_ERROR) {
        // Really good idea to print out error messages
        fprintf(stderr, "ERROR: %s\n", result);
        // Probably a good idea to print error traces too; easier from in Tcl
        Tcl_Eval(interp, "puts stderr $errorInfo");
        exit(1);
    }
    // Print a non-empty result
    if (result[0]) {
        printf("%s\n", result);
    }
    return 0;
}
That's about all you need unless you're doing interactive use, and that's when Tcl_Main() becomes really useful (it handles quite a few extra fiddly details), which the sample tclAppInit.c (mentioned above) shows how to use.
Usually, SWIG (Simplified Wrapper and Interface Generator) is the way to go.
SWIG HOMEPAGE
This way, you can write code in C/C++ and define which interface you want to expose.
Suppose you have some C functions you want added to Tcl:
/* File : example.c */
#include <time.h>

double My_variable = 3.0;

int fact(int n) {
    if (n <= 1) return 1;
    else return n*fact(n-1);
}

int my_mod(int x, int y) {
    return (x%y);
}

char *get_time()
{
    time_t ltime;
    time(&ltime);
    return ctime(&ltime);
}
Now, in order to add these functions to your favorite language, you need to write an "interface file", which is the input to SWIG. An interface file for these C functions might look like this:
/* example.i */
%module example
%{
/* Put header files here or function declarations like below */
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
%}
extern double My_variable;
extern int fact(int n);
extern int my_mod(int x, int y);
extern char *get_time();
At the UNIX prompt, type the following:
unix % swig -tcl example.i
unix % gcc -fpic -c example.c example_wrap.c \
-I/usr/local/include
unix % gcc -shared example.o example_wrap.o -o example.so
unix % tclsh
% load ./example.so example
% puts $My_variable
3.0
% fact 5
120
% my_mod 7 3
1
% get_time
Sun Feb 11 23:01:07 2018
The swig command produces a file example_wrap.c that should be compiled and linked with the rest of the program. In this case, we have built a dynamically loadable extension that can be loaded into the Tcl interpreter using the 'load' command.
Taken from http://www.swig.org/tutorial.html

package require with static lib

I am working on an app which uses a Tcl package implemented in C++ and linked as a static library (the app was developed a long time ago). It does the following:
// Library code
#include <tcl.h>

extern "C" int testlib_SafeInit _ANSI_ARGS_((Tcl_Interp *interp))
{
    return Tcl_PkgProvide(interp, "testlib", "1.6");
}

extern "C" int testlib_Init _ANSI_ARGS_((Tcl_Interp *interp))
{
    return testlib_SafeInit(interp);
}

// Application code
#include <tcl.h>
#include <iostream>

extern "C" int testlib_SafeInit _ANSI_ARGS_((Tcl_Interp *interp));
extern "C" int testlib_Init _ANSI_ARGS_((Tcl_Interp *interp));

int main()
{
    Tcl_Interp* interp = Tcl_CreateInterp();
    Tcl_Init(interp);
    Tcl_PkgProvide(interp, "testlib", "1.6");
    Tcl_StaticPackage(interp, "testlib", testlib_Init, testlib_SafeInit);
    Tcl_Eval(interp, "package require testlib");
    std::cout << "Res = " << Tcl_GetStringResult(interp);
    return 0;
}
When I remove the line Tcl_PkgProvide(interp, "testlib", "1.6"); from main, the package becomes invisible. I have also noticed that testlib_Init and testlib_SafeInit are not called; I expected them to be called by package require testlib. As I understand from the docs, each package must have a pkgIndex.tcl on auto_path or tcl_pkgPath containing the line
package ifneeded testlib 1.6 {load {} testlib}
but here neither variable points to such an index file.
Is this a correct way of providing packages? Is there a documentation related with providing packages using static libraries?
Well, the simplest technique for statically providing a package is to just install it directly. The package init code should be the one calling Tcl_PkgProvide — you don't do so from main() usually — and you probably don't need Tcl_StaticPackage at all unless you're wanting to install the code into sub-interpreters.
int main(int argc, char *argv[])
{
    Tcl_FindExecutable(argv[0]);
    Tcl_Interp* interp = Tcl_CreateInterp();
    Tcl_Init(interp);
    testlib_Init(interp);
    // OK, setup is now done
    Tcl_Eval(interp, "package require testlib");
    std::cout << "Res = " << Tcl_GetStringResult(interp) << "\n";
    return 0;
}
However, we can move to using Tcl_StaticPackage. That allows code to say “instead of loading a DLL with this sort of name, I already know that code: here are its entry points”. If you are doing that, you need to also install a package ifneeded script; those are done through the script API only.
int main(int argc, char *argv[])
{
    Tcl_FindExecutable(argv[0]);
    Tcl_Interp* interp = Tcl_CreateInterp();
    Tcl_Init(interp);
    Tcl_StaticPackage(interp, "testlib", testlib_Init, testlib_SafeInit);
    Tcl_Eval(interp, "package ifneeded testlib 1.6 {load {} testlib}");
    // OK, setup is now done
    Tcl_Eval(interp, "package require testlib");
    std::cout << "Res = " << Tcl_GetStringResult(interp) << "\n";
    return 0;
}
The testlib in the load call needs to match the testlib in the Tcl_StaticPackage call. The testlib in the package require, package ifneeded and Tcl_PkgProvide also need to all match (as do the occurrences of 1.6, the version number).
Other minor issues
Also, you don't need to use the _ANSI_ARGS_ wrapper macro. That's utterly obsolete, for really ancient and crappy compilers that we don't support any more. Just replace _ANSI_ARGS_((Tcl_Interp *interp)) with (Tcl_Interp *interp). And remember to call Tcl_FindExecutable first to initialise the static parts of the Tcl library. If you don't have argv[0] available to pass into it, use NULL instead; it affects a couple of more obscure introspection systems on some platforms, but you probably don't care about them. However, initialising the library overall is very useful: for example, it lets you make sure that the filesystem's filename encoding scheme is correctly understood! That can be a little important to code…

How do I identify a STATUS_INVALID_CRUNTIME_PARAMETER exception?

Platform is Windows 7 SP1.
I recently spent some time debugging an issue that was caused by code passing an invalid parameter to one of the "safe" CRT functions. As a result my application was aborted right away with no warning or anything -- not even a crash dialog.
At first, I tried to figure this out by attaching Windbg to my application. However, when the crash happened, by the time the code broke into Windbg pretty much every thread had been killed, save for the ONE thread that Windbg broke in on. There was no clue as to what was wrong. So I attached Visual Studio as a debugger instead, and when my application terminated, I saw every thread exiting with error code 0xc0000417. That is what gave me the clue that there was an invalid parameter issue somewhere.
Next, I once again attached Windbg to my application, but this time placed breakpoints randomly (by trial & error) in various places like kernel32!TerminateThread, kernel32!UnhandledExceptionFilter and kernel32!SetUnhandledExceptionFilter.
Of the lot, placing a breakpoint at SetUnhandledExceptionFilter immediately showed the callstack of the offending thread when the crash occurred, and the CRT function that we were calling incorrectly.
Question: Is there anything intuitive that should have told me to place a bp on SUEF right away? I would like to understand this a bit better and not do it by trial and error. My second question is w.r.t. the error code I determined via Visual Studio: without resorting to VS, how do I determine thread exit codes in Windbg?
I was going to just comment, but this became bigger, so: an answer.
Setting windbg as the postmortem debugger using windbg -I will also route all unhandled exceptions to windbg.
windbg -I registers windbg as the postmortem debugger.
By default, Auto is set to 1 in the AeDebug registry key.
If you don't want to debug every program, you can edit this to 0, which gives you an additional DoYouWantToDebug option in the WER dialog:
reg query "hklm\software\microsoft\windows nt\currentversion\aedebug"
HKEY_LOCAL_MACHINE\software\microsoft\windows nt\currentversion\aedebug
Debugger REG_SZ "xxxxxxxxxx\windbg.exe" -p %ld -e %ld -g
Auto REG_SZ 0
Assuming you have registered a postmortem debugger and you run this code:
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    unsigned long input[] = {1,45,0xf001,0xffffffff};
    int i = 0;
    char buf[5] = {0};
    for(i=0;i<_countof(input);i++)
    {
        _ultoa_s(input[i],buf,sizeof(buf),16);
        printf("%s\n",buf);
    }
    return 1;
}
On the exception you will see a WER dialog like this.
You can now choose to debug this program.
Windows also writes the exit code on an unhandled exception to the event log.
You can use PowerShell to retrieve the most recent such event like this:
PS C:\> Get-EventLog -LogName Application -Source "Application Error" -newest 1| format-list
Index : 577102
EntryType : Error
InstanceId : 1000
Message : Faulting application name:
ultos.exe, version: 0.0.0.0, time stamp: 0x577680f1
Faulting module name: ultos.exe, version:
0.0.0.0, time stamp: 0x577680f1
Exception code: 0xc0000417
Fault offset: 0x000211c2
Faulting process id: 0x4a8
Faulting application start time: 0x01d1d3aaf61c8aaa
Faulting application path: E:\test\ulto\ultos.exe
Faulting module path: E:\test\ulto\ultos.exe
Report Id: 348d86fc-3f9e-11e6-ade2-005056c00008
Category : Application Crashing Events
CategoryNumber : 100
ReplacementStrings : {ultos.exe, 0.0.0.0, 577680f1, ultos.exe...}
Source : Application Error
TimeGenerated : 7/1/2016 8:42:21 PM
TimeWritten : 7/1/2016 8:42:21 PM
UserName :
And if you choose to debug, you can view the call stack:
0:000> kPL
# ChildEBP RetAddr
00 001ffdc8 77cf68d4 ntdll!KiFastSystemCallRet
01 001ffdcc 75e91fdb ntdll!NtTerminateProcess+0xc
02 001ffddc 012911d3 KERNELBASE!TerminateProcess+0x2c
03 001ffdec 01291174 ultos!_invoke_watson(
wchar_t * expression = 0x00000000 "",
wchar_t * function_name = 0x00000000 "",
wchar_t * file_name = 0x00000000 "",
unsigned int line_number = 0,
unsigned int reserved = 0)+0x31
04 001ffe10 01291181 ultos!_invalid_parameter(
wchar_t * expression = <Value unavailable error>,
wchar_t * function_name = <Value unavailable error>,
wchar_t * file_name = <Value unavailable error>,
unsigned int line_number = <Value unavailable error>,
unsigned int reserved = <Value unavailable error>)+0x7a
05 001ffe28 0128ad96 ultos!_invalid_parameter_noinfo(void)+0xc
06 001ffe3c 0128affa ultos!common_xtox<unsigned long,char>(
unsigned long original_value = 0xffffffff,
char * buffer = 0x001ffea4 "",
unsigned int buffer_count = 5,
unsigned int radix = 0x10,
bool is_negative = false)+0x58
07 001ffe5c 0128b496 ultos!common_xtox_s<unsigned long,char>(
unsigned long value = 0xffffffff,
char * buffer = 0x001ffea4 "",
unsigned int buffer_count = 5,
unsigned int radix = 0x10,
bool is_negative = false)+0x59
08 001ffe78 012712b2 ultos!_ultoa_s(
unsigned long value = 0xffffffff,
char * buffer = 0x001ffea4 "",
unsigned int buffer_count = 5,
int radix = 0n16)+0x18
09 001ffeac 0127151b ultos!main(void)+0x52
0a (Inline) -------- ultos!invoke_main+0x1d
0b 001ffef8 76403c45 ultos!__scrt_common_main_seh(void)+0xff
0c 001fff04 77d137f5 kernel32!BaseThreadInitThunk+0xe
0d 001fff44 77d137c8 ntdll!__RtlUserThreadStart+0x70
0e 001fff5c 00000000 ntdll!_RtlUserThreadStart+0x1b
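An in-process alternative, not used above (a sketch relying on the CRT's documented _set_invalid_parameter_handler hook; the handler name here is made up): install your own invalid-parameter handler, so you stop at the offending call under the debugger instead of being terminated.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <intrin.h>

/* Signature required by _set_invalid_parameter_handler.
   In release CRTs all of these arguments are NULL/0. */
static void on_invalid_parameter(const wchar_t *expression,
                                 const wchar_t *function,
                                 const wchar_t *file,
                                 unsigned int line,
                                 uintptr_t reserved)
{
    (void) reserved;
    fwprintf(stderr, L"Invalid CRT parameter: %ls in %ls (%ls:%u)\n",
             expression ? expression : L"?",
             function ? function : L"?",
             file ? file : L"?", line);
    __debugbreak();   /* only sensible with a debugger attached */
}

int main(void)
{
    _set_invalid_parameter_handler(on_invalid_parameter);

    char buf[5] = {0};
    /* Same bad call as above: 0xffffffff needs 8 hex digits plus the
       terminator, but buf is only 5 bytes, so the handler fires. */
    _ultoa_s(0xffffffffUL, buf, sizeof(buf), 16);
    return 0;
}

(Debug CRTs may additionally raise an assertion dialog before the handler is called.)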

How can I discover whether my CPU is 32 or 64 bits?

How do I find out if my processor is 32 bit or 64 bit (in your language of choice)? I want to know this for both Intel and AMD processors.
Windows, C/C++:
#include <windows.h>

SYSTEM_INFO sysInfo, *lpInfo;
lpInfo = &sysInfo;
// Note: in a 32-bit process on 64-bit Windows, GetSystemInfo reports the
// emulated (WOW64) environment; use GetNativeSystemInfo for the real machine.
::GetSystemInfo(lpInfo);
switch (lpInfo->wProcessorArchitecture) {
case PROCESSOR_ARCHITECTURE_AMD64:
case PROCESSOR_ARCHITECTURE_IA64:
    // 64 bit
    break;
case PROCESSOR_ARCHITECTURE_INTEL:
    // 32 bit
    break;
case PROCESSOR_ARCHITECTURE_UNKNOWN:
default:
    // something else
    break;
}
C#, OS agnostic
sizeof(IntPtr) == 4 ? "32-bit" : "64-bit"
This is somewhat crude but basically tells you whether the CLR is running as 32-bit or 64-bit, which is more likely what you would need to know. The CLR can run as 32-bit on a 64-bit processor, for example.
For more information, see here: How to detect Windows 64-bit platform with .NET?
The tricky bit here is that you might have a 64-bit CPU but a 32-bit OS. If you care about that case, it is going to require an asm stub to interrogate the CPU, as sketched below. If not, you can ask the OS easily.
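For instance (a sketch for GCC/Clang on x86 using <cpuid.h>): the "long mode" capability is EDX bit 29 of extended CPUID leaf 0x80000001, the same "lm" flag mentioned in the /proc/cpuinfo answer further down.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    /* __get_cpuid returns 0 if leaf 0x80000001 is unsupported. */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)
            && (edx & (1u << 29))) {
        puts("64-bit capable CPU");
    } else {
        puts("32-bit CPU (or extended leaf unsupported)");
    }
    return 0;
}

This reports the processor's capability regardless of whether the running OS is 32-bit.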
In .NET you can differentiate x86 from x64 by looking at the Size property of the IntPtr structure. The IntPtr.Size property is measured in bytes, 8 bits per byte, so it is equal to 4 in a 32-bit process and 8 in a 64-bit process. Since we talk about 32-bit and 64-bit processors rather than 4-byte or 8-byte processors, I like to do the comparison in bits, which makes it clearer what is going on.
C#
if( IntPtr.Size * 8 == 64 )
{
    //x64 code
}
PowerShell
if( [IntPtr]::Size * 8 -eq 64 )
{
    #x64 code
}
In Python:
In [10]: import platform
In [11]: platform.architecture()
Out[11]: ('32bit', 'ELF')
As usual, pretty neat. But I'm pretty sure these functions return the platform the executable was built for, not the platform it's running on. There is still a small chance that some geek is running a 32-bit version on a 64-bit computer.
You can get some more info, like:
In [13]: platform.system()
Out[13]: 'Linux'
In [19]: platform.uname()
Out[19]:
('Linux',
'asus-u6',
'2.6.28-11-generic',
'#42-Ubuntu SMP Fri Apr 17 01:57:59 UTC 2009',
'i686',
'')
etc.
This looks more like live data :-)
VBScript, Windows:
Const PROCESSOR_ARCHITECTURE_X86 = 0
Const PROCESSOR_ARCHITECTURE_IA64 = 6
Const PROCESSOR_ARCHITECTURE_X64 = 9

strComputer = "."
Set oWMIService = GetObject("winmgmts:" & _
    "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colProcessors = oWMIService.ExecQuery("SELECT * FROM Win32_Processor")
For Each oProcessor In colProcessors
    Select Case oProcessor.Architecture
        Case PROCESSOR_ARCHITECTURE_X86
            ' 32-bit
        Case PROCESSOR_ARCHITECTURE_X64, PROCESSOR_ARCHITECTURE_IA64
            ' 64-bit
        Case Else
            ' other
    End Select
Next
Another possible solution for Windows Script Host, this time in JScript and using the PROCESSOR_ARCHITECTURE environment variable:
var oShell = WScript.CreateObject("WScript.Shell");
var oEnv = oShell.Environment("System");
switch (oEnv("PROCESSOR_ARCHITECTURE").toLowerCase())
{
case "x86":
    // 32-bit
    break;
case "amd64":
    // 64-bit
    break;
default:
    // other
    break;
}
I was thinking: on a 64-bit processor, pointers are 64-bit. So, instead of checking processor features, it may be possible to use pointers to 'test' it programmatically. It could be as simple as creating a structure with two contiguous pointers and then checking their 'stride' (see the C sketch after the C# code below).
C# Code:
int size = Marshal.SizeOf(typeof(IntPtr));
if (size == 8)
{
    Text = "64 bit";
}
else if (size == 4)
{
    Text = "32 bit";
}
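The struct idea above can be tried directly in C (a sketch; offsetof measures the 'stride' between the two pointers):

#include <stdio.h>
#include <stddef.h>

struct two_ptrs {
    void *a;
    void *b;   /* sits sizeof(void *) bytes after a */
};

int main(void)
{
    size_t stride = offsetof(struct two_ptrs, b);
    printf("%zu-bit pointers\n", stride * 8);   /* prints 32 or 64 */
    return 0;
}

As with the C# version, this reports how the binary was compiled, not what the CPU is capable of; a 32-bit build on a 64-bit machine still prints 32.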
In Linux you can determine the "bitness" by reading /proc/cpuinfo, e.g.:
cat /proc/cpuinfo | grep flags
If the flags contain
lm
it's an x86-64 CPU (even if you have 32-bit Linux installed).
Not sure if this works for non-x86 CPUs as well, such as PPC or ARM.