Use Boost.Locale together with FireBreath

I created a Chrome extension using FireBreath: http://slimtext.org And I have run into a problem: the extension does not support Chinese characters well on Windows. After a lot of research I found this: http://www.boost.org/doc/libs/1_50_0/libs/locale/doc/html/default_encoding_under_windows.html
I think the solution is to use boost/locale, but the https://github.com/firebreath/firebreath-boost project does not seem to contain it. The 1.50.0 branch contains a newer Boost than the master branch, but neither of them includes boost/locale.
I tried to use an external Boost, and also to copy the locale code from an external Boost, but failed (the locale library couldn't be linked when running make).
What's your suggestion? How can I use Boost.Locale together with FireBreath?

firebreath-boost is just a subset of the full Boost. To use all of Boost, install it manually and build with system Boost. See http://www.firebreath.org/display/documentation/Prep+Scripts
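For example, the prep scripts pass quoted CMake options straight through, so on Windows the invocation looks something like the line below (the WITH_SYSTEM_BOOST flag and the Boost path are from memory and may differ for your FireBreath version, so treat them as an assumption and check the page above):

prep2010.cmd projects build "-DWITH_SYSTEM_BOOST=1" "-DBOOST_ROOT=C:\boost_1_50_0"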

I failed to compile my FireBreath project with an external Boost on Windows, and after a lot of investigation I began to doubt that boost/locale was really the key to my original problem, the Chinese character encoding issue.
Finally I resolved it without boost/locale:
Use std::wstring instead of std::string whenever possible.
You might have to write the code separately for Windows and for other operating systems, for example:
#ifdef _WIN32
// On Windows, MSVC's fstream overloads accept a wide-character path directly
file.open(path.c_str()); // path is a std::wstring
#else
// Elsewhere, go through boost::filesystem (aliased as fs) to get a narrow path
fs::path the_path(path);
file.open(the_path.generic_string().c_str());
#endif
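The other half of the fix is producing the std::wstring in the first place. Here is a minimal sketch for the Windows side, assuming the incoming std::string is UTF-8 encoded, using the Win32 MultiByteToWideChar API:

#include <windows.h>
#include <string>

// Convert a UTF-8 std::string to a UTF-16 std::wstring (Windows only).
std::wstring utf8_to_wstring(const std::string& utf8)
{
    if (utf8.empty()) return std::wstring();
    // First call computes the required length; second call does the conversion.
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                  static_cast<int>(utf8.size()), NULL, 0);
    std::wstring result(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        static_cast<int>(utf8.size()), &result[0], len);
    return result;
}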

Related

Wii Broadway disassembly with libopcodes

I want to disassemble Wii game executable binaries in C. They target the Broadway microprocessor, and unfortunately the only disassembler I am aware of that I can use is libopcodes.
Documentation about this library is scarce, and I'm using this tutorial https://blog.yossarian.net/2019/05/18/Basic-disassembly-with-libopcodes to get a basic disassembler, from which (after reading it) I copy-pasted the last complete code snippet. I initially used the default binutils version of Ubuntu 20, which worked for the x86 architecture but immediately segfaulted with no output for my architecture of interest (bfd_arch_powerpc and bfd_mach_ppc_750). I have now built the latest binutils version (2.39.50) from source, which demands an fprintf_styled argument (I provided a very simple one which vprintfs to stdout). Now I am getting a floating point exception in buffer_read_memory when disassembling the tutorial's architecture and a segfault when disassembling mine.
I am not familiar at all with libopcodes and am pretty much blindly following the only tutorial I could find for it on the internet. If anyone could help me set up a basic PowerPC disassembler with libopcodes that disassembles a void* buffer (or at least point me to any resource), it would be greatly appreciated.
A PowerPC example of libbfd/libopcodes usage can be seen in the disasm() function of qtrace-tools/qtdis, which is used to disassemble a buffer of powerpc64 instructions.
I solved my issue. I had to install binutils-multiarch-dev to support bfd_arch_powerpc and bfd_mach_ppc_750. In my case I also had to remove my custom installation of binutils, because a custom build with no configure flags apparently does not support PowerPC, and the dis-asm.h in /usr/local/include was taking priority over the one in /usr/include.
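For anyone landing here later, below is a minimal sketch of the call sequence, written against a binutils 2.39-era dis-asm.h (field and function signatures differ on older versions, and the instruction bytes are just hand-assembled examples):

#define PACKAGE "ppc-disasm-example" /* dis-asm.h refuses to compile without this */
#include <stdio.h>
#include <stdarg.h>
#include <stdbool.h>
#include <dis-asm.h>

/* binutils >= 2.39 requires a styled-output callback; this one ignores styling. */
static int styled_printf(void *stream, enum disassembler_style style,
                         const char *fmt, ...)
{
    (void)style;
    va_list args;
    va_start(args, fmt);
    int n = vfprintf((FILE *)stream, fmt, args);
    va_end(args);
    return n;
}

int main(void)
{
    /* Two hand-assembled big-endian PowerPC instructions: add r3,r3,r4 ; blr */
    unsigned char buf[] = { 0x7c, 0x63, 0x22, 0x14,
                            0x4e, 0x80, 0x00, 0x20 };

    struct disassemble_info info;
    init_disassemble_info(&info, stdout, (fprintf_ftype)fprintf, styled_printf);
    info.arch = bfd_arch_powerpc;
    info.mach = bfd_mach_ppc_750;
    info.endian = BFD_ENDIAN_BIG;
    info.buffer = buf;
    info.buffer_vma = 0;
    info.buffer_length = sizeof(buf);
    disassemble_init_for_target(&info); /* fills in arch-specific defaults */

    disassembler_ftype disasm =
        disassembler(bfd_arch_powerpc, true /* big-endian */,
                     bfd_mach_ppc_750, NULL);
    if (!disasm) {
        fprintf(stderr, "this libopcodes has no powerpc support\n");
        return 1;
    }

    bfd_vma pc = 0;
    while (pc < sizeof(buf)) {
        printf("0x%08llx:  ", (unsigned long long)pc);
        pc += disasm(pc, &info); /* returns the size of the decoded insn */
        printf("\n");
    }
    return 0;
}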

Is there a list of headers that can be used in a string to compile with NVRTC? [duplicate]

Specifically, my issue is that I have CUDA code that needs <curand_kernel.h> to run. This isn't included by default in NVRTC. Presumably, then, when creating the program context (i.e. the call to nvrtcCreateProgram), I have to send in the name of the file (curand_kernel.h) and also the source code of curand_kernel.h? I feel like I shouldn't have to do that.
It's hard to tell; I haven't managed to find an example from NVIDIA of someone needing standard CUDA files like this as a source, so I really don't understand what the syntax is. Some issues: curand_kernel.h has includes of its own... do I have to do the same for each of those? I am not even sure the NVRTC compiler will run correctly on curand_kernel.h, because there are some language features it doesn't support, aren't there?
Next: if you've sent the source code of a header file to nvrtcCreateProgram, do I still have to #include it in the code to be executed, and will it cause an error if I do so?
A link to example code that does this or something like it would be appreciated much more than a straightforward answer; I really haven't managed to find any.
You have to send the "filename" and the source of each header separately.
When the preprocessor does its thing, it'll use any #include filenames as a key to find the source for the header, based on the collection that you provide.
I suspect that, in this case, the compiler (driver) doesn't have file system access, so you have to give it the source in much the same way that you would for shader includes in OpenGL.
So:
Include your header's name when calling nvrtcCreateProgram. Internally, the compiler will generate the equivalent of a std::map<string,string> containing the source of each header, indexed by the given name.
In your kernel source, use #include "foo.cuh" as usual.
The compiler will use foo.cuh as a key into its internal map (created when you called nvrtcCreateProgram) and will retrieve the header source from that collection.
Compilation proceeds as normal.
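A minimal sketch of that flow (the header name foo.cuh and its contents are invented for illustration; error handling is abbreviated):

#include <nvrtc.h>
#include <cstdio>

int main()
{
    // Header source, and the name the kernel's #include directive will use.
    const char* header_sources[] = {
        "__device__ int twice(int x) { return 2 * x; }\n"
    };
    const char* header_names[] = { "foo.cuh" };

    const char* kernel_src =
        "#include \"foo.cuh\"\n"
        "__global__ void k(int* out) { *out = twice(21); }\n";

    nvrtcProgram prog;
    // The last three arguments are the header count, sources, and names.
    nvrtcResult res = nvrtcCreateProgram(&prog, kernel_src, "k.cu",
                                         1, header_sources, header_names);
    if (res != NVRTC_SUCCESS) {
        std::printf("%s\n", nvrtcGetErrorString(res));
        return 1;
    }

    res = nvrtcCompileProgram(prog, 0, nullptr);
    // On success, fetch the PTX with nvrtcGetPTXSize / nvrtcGetPTX;
    // on failure, fetch the log with nvrtcGetProgramLogSize / nvrtcGetProgramLog.
    nvrtcDestroyProgram(&prog);
    return 0;
}

Note that, per the steps above, you still write the #include in the kernel source; registering the header just tells NVRTC what source text to substitute for that name.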
One of the reasons that NVRTC provides only a "subset" of features is that the compiler runs in a somewhat sandboxed environment, without necessarily having all of the supporting tools and utilities lying around that you have with offline compilation. So you have to handle manually a lot of the stuff that the normal nvcc + (gcc | MSVC | clang) combination provides.
A possible, but non-ideal, solution would be to preprocess the file you need in your IDE, save the result, and then #include that. However, I bet there is a better way to do it. If you just want curand, consider diving into the library and extracting the part you need (blech), or using another GPU-friendly rand implementation. On older CUDA versions, I just generated a big array of random floats on the host, uploaded it to the GPU, and sampled it in the kernels.
This related link may be helpful.
You do not need to load curand_kernel.h yourself and add it to the include "aliases" mechanism.
Instead, you can simply add the CUDA include directory to your (set of) include paths, e.g. by adding --include-path=/usr/local/cuda/include to your NVRTC compiler options.
(I do this in my GPU-kernel-runner test harness, by default, to be on the safe side.)
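In code that just means passing the option when compiling; continuing the earlier sketch:

// Hypothetical continuation: prog is the nvrtcProgram created earlier.
const char* opts[] = { "--include-path=/usr/local/cuda/include" };
nvrtcCompileProgram(prog, 1, opts);
// The kernel source can now #include <curand_kernel.h> directly.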

Delphi - Unit x was compiled with a different version of x, when fixing a VCL bug

I am using Delphi XE6, with DataSnap and JSON in my project. There is a bug I want to correct in the VCL unit System.JSON.pas (in the TJSONString.ToString function): it should be escaping backslash characters as well as quotes. In order to fix this I carried out the following:
Copied System.JSON.pas from the standard VCL source folder to my project source folder
Added System.JSON.pas to my project (using the newly copied file)
Fixed the bug and attempted to compile
I get the error 'Unit Data.DBXCommon was compiled with a different version of System.JSON.TJSONObject'
I can see that the Data.DBXCommon unit references System.JSON, so I guess the compiler is now seeing two versions: my fixed version and the standard VCL version.
What is the correct way to implement VCL changes to avoid this problem?
There are two common reasons for this issue:
You made changes to the interface section of the unit. You cannot do this without also re-compiling all units that use the unit you are modifying.
You re-compiled the unit with different compiler options from those used to build it originally. Deal with that by ensuring that the compiler options used to compile the unit you modify are the same as those used by Embarcadero. Typically Embarcadero compiles with default options. Impose these directly in the source file being modified, right at the very top of the file.
Having said this, a recent question here on a similar topic could not be resolved using option 2 above. In that question, under XE6 only, the unmodified Classes unit could not be re-compiled and linked at all, which makes me wonder whether this particular technique has had its day. Perhaps it's not even possible any more. Before you give up, see if you can compile and link the unmodified unit.
More broadly, a detour is generally an easier way to solve problems like this. Using a detour rather than re-compiling makes managing the fix cleaner and simpler.
Update 1
I cannot get the unmodified System.JSON unit to re-compile and link, which I think means that the issue raised in that other question is broader than just the Classes unit. I think you will find this a tricky hurdle to overcome, and I recommend the use of a detour.
Update 2
The problem that appears to have been introduced in XE6, seems to have been resolved by the release of XE7. The unmodified System.JSON unit will compile and link in XE7.
What if Delphi XE6's original System.JSON.dcu wasn't compiled with Delphi XE6, but with one of the previous versions of Delphi?
You claim that you managed to implement your fix in Delphi XE2 using the same approach, by changing the source and then recompiling System.JSON. So I suggest you first compare the original System.JSON files that ship with Delphi XE2 and Delphi XE6.
If they are the same, then the changed System.JSON.dcu that you managed to recompile with Delphi XE2 might also work with Delphi XE6.
I resolved a similar issue by:
Deleting the .dcu files which were on different versions (i.e. the conflicting files).
Re-building the project to create new .dcu files.

Can I write a program in binary directly? How can I get the computer to execute it?

I know that may seem weird, like looking for trouble, but I think experiencing what the ancient programmers experienced is something interesting. So how can I execute a program written only in binary? (Suppose that I know what I am doing and am not using assembly, of course.)
I just want to write a series of bits like 111010111010101010101 and execute it. So how can I do that?
Use a hex editor. You'll need to find out the relevant executable format for your operating system, of course - assuming you want to use an operating system... I suppose you could always write your own bootloader and just run the code directly that way, if you want to get all hardcore.
I don't think you'll really be experiencing what programmers experienced back then though - for one thing, you won't be using punch cards, paper tape etc. For another, your context is completely different - you know what computers are like now, so it'll feel painfully primitive to you... whereas back then, it would have been bleeding edge and exciting just on those grounds.
Use a hex editor, write your bits, and save it as an executable file (either with the file extension .exe on Windows, or with chmod a+x filename on Linux).
The problem is: you'd also have to write all the OS-specific stuff in binary format, and you'd need a table that translates from assembler code to the binary encodings.
Why not, if you want to experience low-level programming, give D. E. Knuth's assembler MMIX a try?
It really depends on the platform you are using, but that's sort of irrelevant given your proposed purpose: the earliest programmers of modern computers as you think of them did not program in binary -- they programmed in assembly.
You will learn nothing trying to program in binary for a specific Operating System and specific CPU type using a hex editor.
If you want to find out how pre-assembly programmers worked (with plain binary data), look up Punch Cards.
Use a hex editor to create your file, be sure to use a format that the loader of your respective OS understands, and then double-click it.
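If you'd rather skip the executable-format step entirely, another way to watch raw bytes execute is to copy them into an executable page from a tiny harness program. A POSIX-only sketch (the bytes here are x86-64 machine code for mov eax,42 ; ret):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code: mov eax, 42 ; ret */
    unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* Map a page we are allowed to write to and execute from. */
    void *mem = mmap(NULL, sizeof(code), PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(mem, code, sizeof(code));

    int (*fn)(void) = (int (*)(void))mem;
    printf("returned %d\n", fn()); /* prints: returned 42 */
    return 0;
}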
Most assemblers (the MMIX assembler, for instance; see www.mmix.cs.hm.edu) don't care whether you write instructions or data.
So instead of writing
Main  ADD $0,$0,3
      SUB $1,$0,4
      ...
you can write
Main  TETRA #21000003
      TETRA #25010004
      ...
This way you can assemble your program by hand and then have the assembler transform it into a form the loader needs; then you execute it. Normally you use hex notation, not binary, because keeping track of so many digits is difficult. You can also use decimal, but the charts that tell you which instructions have which codes are typically in hex notation.
Good luck! I had to do things like this when I started programming computers. Everybody was glad to have an assembler or even a compiler then.
Martin
Or he is just writing some malicious code.
I've seen some funny methods that use an AVR as a keyboard emulator: it opens a simple text editor, types out the code stored in the AVR's EEPROM, pipes it to "debug" (on Windows systems), and runs it. It's a good way to escape some restrictions too ;)
I imagine that by interacting directly with hardware you could write in binary. To flip the proper binary bits, you could use a magnetized needle on your disk drive. Or butterflies.

How To Distribute a Project Built in an Interpreted Language?

I've started a project (a developer text editor) in an interpreted language (Tcl/Tk), and another in Perl (both are open source). In time, when it reaches a beta version, I will need to distribute it to users (developers, of course), but first I want to know some things:
Is it possible to compile it to an executable?
How?
Can I compile for other platforms?
Or, in this case, is it better to use a compiled language than an interpreted one?
Is this sort of thing usual?
Will users need to have Tcl/Tk or Perl on their machines?
Both Tcl and Perl can be compiled into executables. For Windows there's perl2exe, and perlcc for systems running UNIX-style operating systems. As for Tcl, there are freewrap and starpacks.
If you're just doing this for the benefit of a single executable, eliminating the need to install Perl and other dependencies, then there's no real reason you can't. It's quite a nice method for testing your application without having to constantly compile, though it defeats the point of using an interpreted language in the first place.
Also take a look at The Simplest Steps to Converting TCL TK to a Stand Alone Application; this page is also useful: How can I compile Tcl type scripts into binary code
The usual and common way for such scripts is to distribute the source. A binary would only work on some very specific systems, while Tcl/Tk/Perl runs on so many, so a binary would be a really big restriction for no real reason. It also helps other developers to reuse your scripts in a good way. In most cases, even when somebody could execute your binary, it wouldn't be of much help without the source.