Using hunspell in C

I would like to use hunspell in my C program. I'm working in a Unix shell environment that already has hunspell installed.
I know I can use it by typing
hunspell filename
at the command line, but I want to know how to use it inside my program. Ultimately I want to store every dictionary word, but first I just want to know how to import it into my program.
Is there an #include type of thing?

Yes, there's a hunspell API you can use through #include <hunspell/hunspell.h>. The API details can be found in the hunspell documentation (hunspell3.pdf). Examples are a bit scarce, but this or this should get you started.
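To make that concrete, here is a minimal sketch using the C API from hunspell.h. The dictionary paths are assumptions; point them at your own .aff/.dic files, and link against the hunspell library (e.g. -lhunspell-1.7, where the suffix depends on your installed version).

#include <stdio.h>
#include <hunspell/hunspell.h>

int main(void) {
    /* Paths are assumptions -- adjust to your dictionary location. */
    Hunhandle *h = Hunspell_create("/usr/share/hunspell/en_US.aff",
                                   "/usr/share/hunspell/en_US.dic");
    const char *word = "helo";
    if (Hunspell_spell(h, word)) {
        printf("'%s' is spelled correctly\n", word);
    } else {
        char **suggestions;
        int n = Hunspell_suggest(h, &suggestions, word);
        for (int i = 0; i < n; i++)
            printf("suggestion: %s\n", suggestions[i]);
        Hunspell_free_list(h, &suggestions, n);
    }
    Hunspell_destroy(h);
    return 0;
}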

Related

Where are the predefined functions in Lua's files?

I'm looking to create my own scripting language, heavily based on Lua (I'm planning on it being Lua, but easier for me to understand), so I need to know where the file with the predefined functions/variables is, because I would like to edit it. If you have a solution, please comment and let me know!
Every table in the standard library is defined in its own C file. In the src directory of the Lua source package, look for the file whose name is similar to the library name. Global functions are defined in lbaselib.c.
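If it helps to see the mechanism, here is a minimal sketch of how a global function gets exposed from C to Lua, mirroring the idea behind lbaselib.c; the name double_it is made up for illustration.

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* A C function exposed to Lua: takes a number, returns twice its value. */
static int l_double(lua_State *L) {
    lua_Number n = luaL_checknumber(L, 1);
    lua_pushnumber(L, 2 * n);
    return 1;  /* number of results left on the Lua stack */
}

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);                         /* load the standard libraries */
    lua_register(L, "double_it", l_double);   /* register it as a global */
    luaL_dostring(L, "print(double_it(21))"); /* prints 42 */
    lua_close(L);
    return 0;
}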

passing c++ variables to python via gdb

I am developing/debugging C++ code which extensively uses C++ STL vectors and Blitz++ arrays
(the vectors/arrays are multidimensional, up to 4D/5D).
I am currently using cout/print to log the inputs/outputs of functions, but it is getting very tedious. Can you suggest any options for printing the vectors/arrays while debugging?
I thought of a couple of options:
(a) Write template functions in C++ to print them and use GDB's "call" feature. But I am unable to use GDB's "call" functionality for C++ template functions, though it works for normal functions.
(b) Is it possible to pass C++ variables to the Python interface of GDB and print them? Any examples of this?
I googled before posting this question, but did not find any useful thread.
Any help is highly appreciated (even if only some links can be provided).
Thanks a lot in advance!
Writing code in C++ to print the array and calling it from gdb is certainly an option, but it might be unreliable because the print function you write might not be accessible (the linker might have dropped it because it was not used in your C++ code, for instance). Also, remember that templates are just "recipes": you actually need to use them in order for the compiler to generate a class/function from them.
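If you go that route, a common workaround (sketched below, assuming GCC/Clang) is to explicitly instantiate the template and wrap it in a plain function that the linker is told to keep:

#include <cstdio>
#include <vector>

template <typename T>
void print_vec(const std::vector<T>& v) {
    for (const T& x : v) std::printf("%g ", static_cast<double>(x));
    std::printf("\n");
}

// Explicit instantiation, so the compiler actually emits code for T = double:
template void print_vec<double>(const std::vector<double>&);

// A plain, non-template wrapper is easiest to invoke from gdb; the
// "used" attribute (GCC/Clang syntax) keeps the linker from dropping it.
__attribute__((used))
void debug_print_vec_d(const std::vector<double>& v) { print_vec(v); }

Then, inside gdb, call debug_print_vec_d(my_vector) should work for a std::vector<double> variable.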
Is it possible to pass C++ variables to the Python interface of GDB and print them? Any examples of this?
A simple answer to this is "yes". You can use the parse_and_eval function in the gdb module when you use gdb's Python API. Something such as
py print(gdb.parse_and_eval('your_variable'))
would print the value of a variable called your_variable using gdb's Python API. But that alone is no different from a plain p your_variable at gdb's regular prompt. The real power comes when you use gdb's Python API to write pretty-printers for the types you want to debug.
A pretty-printer is basically just some code that you or someone else wrote to tell gdb how to print some type in a nice way. With a pretty-printer defined for a type, a plain p your_variable at gdb's prompt prints the variable in the nice way your pretty-printer defines.
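As a minimal sketch of what that looks like (assuming a hypothetical C++ type struct Point { double x, y; }; all names here are made up):

import gdb

class PointPrinter:
    """Tell gdb how to render a Point value."""
    def __init__(self, val):
        self.val = val

    def to_string(self):
        return "Point(x={}, y={})".format(self.val['x'], self.val['y'])

def lookup(val):
    # Return a printer for Point values, None for everything else.
    if str(val.type.strip_typedefs()) == 'Point':
        return PointPrinter(val)
    return None

gdb.pretty_printers.append(lookup)

After loading this with source point_printer.py inside gdb, a plain p my_point uses the printer automatically.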
I couldn't find a pretty-printer for Blitz++ with a quick Google search, and I haven't used Blitz++ before. However, I have used another library for vectors and matrices in scientific computing, called armadillo, and have faced similar problems. I have therefore written some pretty-printers for armadillo here that might help you, in case you decide to write pretty-printers for Blitz++.
As an illustration, below you can see how the arma::mat type (a matrix of doubles) from armadillo is printed in gdb without a pretty-printer (the m1 variable, a 6x3 matrix of doubles).
Notice that we can't even see the matrix elements. They are stored in a contiguous memory region pointed to by the mem attribute of the arma::mat object.
Now the same matrix with the pretty printer available here.
That makes debugging code a lot easier.
Note: You can also write pretty-printers in the Guile language, but I bet Python is a much more common choice.

How do I set up a custom target triplet for Rust, using JSON?

I have done cross-compilation with Rust before, but the place where I got the JSON didn't explain anything about creating one, beyond what needs to change from the x86_64 Linux target, and I need an avr8 target. That requires rewriting most of the file.
That blog post is all I know about cross-compilation with Rust, though I have set up GCC cross-compilers before.
You can find a JSON file where you can put the various configuration fields to create your custom target triple here: https://book.avr-rust.com/005.1-the-target-specification-json-file.html
And if you still need to understand some basic details, you can refer to this tutorial as well: https://os.phil-opp.com/minimal-rust-kernel/#target-specification
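For a rough idea of the shape of such a file, here is an illustrative skeleton; the field values are assumptions patterned on common AVR (atmega328p) setups, so take the authoritative field list and values from the avr-rust book linked above:

{
  "llvm-target": "avr-unknown-unknown",
  "arch": "avr",
  "cpu": "atmega328p",
  "data-layout": "e-P1-p:16:8-i8:8-i16:8-i32:8-i64:8-f32:8-f64:8-n8-a:8",
  "target-endian": "little",
  "target-pointer-width": "16",
  "target-c-int-width": "16",
  "os": "unknown",
  "executables": true,
  "linker": "avr-gcc",
  "linker-flavor": "gcc",
  "pre-link-args": { "gcc": ["-mmcu=atmega328p"] },
  "exe-suffix": ".elf"
}

You would then build against it with something like cargo build -Z build-std=core --target avr-atmega328p.json on a nightly toolchain (flags from memory; check the linked pages for the current invocation).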

Is there a list of headers that can be used in an string to compile with NVRTC? [duplicate]

Specifically, my issue is that I have CUDA code that needs <curand_kernel.h> to run. This isn't included by default in NVRTC. Presumably then when creating the program context (i.e. the call to nvrtcCreateProgram), I have to send in the name of the file (curand_kernel.h) and also the source code of curand_kernel.h? I feel like I shouldn't have to do that.
It's hard to tell; I haven't managed to find an example from NVIDIA of someone needing standard CUDA files like this as a source, so I really don't understand what the syntax is. Some issues: curand_kernel.h also has includes... Do I have to do the same for each of these? I am not even sure the NVRTC compiler will even run correctly on curand_kernel.h, because there are some language features it doesn't support, aren't there?
Next: if I've sent the source code of a header file to nvrtcCreateProgram, do I still have to #include it in the code to be executed, and will it cause an error if I do so?
A link to example code that does this or something like it would be appreciated much more than a straightforward answer; I really haven't managed to find any.
You have to send the "filename" and the source of each header separately.
When the preprocessor does its thing, it'll use any #include filenames as a key to find the source for the header, based on the collection that you provide.
I suspect that, in this case, the compiler (driver) doesn't have file system access, so you have to give it the source in much the same way that you would for shader includes in OpenGL.
So:
1. Include your header's name when calling nvrtcCreateProgram. The compiler will, internally, generate the equivalent of a std::map<string,string> containing the source of each header indexed by the given name.
2. In your kernel source, use #include "foo.cuh" as usual.
3. The compiler will use foo.cuh as an index or key into its internal map (created when you called nvrtcCreateProgram), and will retrieve the header source from that collection.
4. Compilation proceeds as normal.
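In host code, that sequence looks roughly like the sketch below (the header name foo.cuh and its contents are made up; error checking is abbreviated):

#include <nvrtc.h>
#include <cstdio>

int main() {
    const char* kernel_src =
        "#include \"foo.cuh\"\n"
        "__global__ void k(float* out) { out[0] = foo(); }\n";
    const char* foo_src = "__device__ float foo() { return 42.0f; }\n";

    const char* header_sources[] = { foo_src };   // source text of each header
    const char* header_names[]   = { "foo.cuh" }; // names as used in #include

    nvrtcProgram prog;
    nvrtcResult r = nvrtcCreateProgram(&prog, kernel_src, "k.cu",
                                       1, header_sources, header_names);
    std::printf("create:  %s\n", nvrtcGetErrorString(r));
    r = nvrtcCompileProgram(prog, 0, nullptr);
    std::printf("compile: %s\n", nvrtcGetErrorString(r));
    nvrtcDestroyProgram(&prog);
    return 0;
}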
One of the reasons that nvrtc provides only a "subset" of features is that the compiler plays in a somewhat sandboxed environment, without necessarily having all of the supporting tools and utilities lying around that you have with offline compilation. So, you have to manually handle a lot of the stuff that the normal nvcc + (gcc | MSVC| clang) combination provides.
A possible, but non-ideal, solution would be to preprocess the file that you need in your IDE, save the result, and then #include that. However, I bet there is a better way to do it. If you just want curand, consider diving into the library and extracting the part you need (blech), or using another GPU-friendly rand implementation. On older CUDA versions, I just generated a big array of random floats on the host, uploaded it to the GPU, and sampled it in the kernels.
This related link may be helpful.
You do not need to load curand_kernel.h yourself and add it to the include "aliases" mechanism.
Instead, you can simply add the CUDA include directory to your (set of) include paths, e.g. by adding --include-path=/usr/local/cuda/include to your NVRTC compiler options.
(I do this in my GPU-kernel-runner test harness, by default, to be on the safe side.)
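A sketch of that approach (the include path and the kernel contents are assumptions; adjust the path to your CUDA install):

#include <nvrtc.h>

int main() {
    const char* src =
        "#include <curand_kernel.h>\n"
        "__global__ void k() { }\n";
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, src, "k.cu", 0, nullptr, nullptr);
    // Let NVRTC find the real headers on disk instead of passing their sources:
    const char* opts[] = { "--include-path=/usr/local/cuda/include" };
    nvrtcCompileProgram(prog, 1, opts);
    nvrtcDestroyProgram(&prog);
    return 0;
}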

How to get voidptr out of capsule using python cffi?

Is there any way to use CFFI to extract the contents of a capsule and convert it into a void pointer which I can send into C code?
Background info: numpy arrays can give you a capsule containing a very handy struct, namely the PyArrayInterface. I don't think capsules exist for PyPy yet, so the answer is probably no, but I believe the future holds capsules for all Python versions, so I'm hoping the answer is yes :).
I don't think so. Capsules are a way for some CPython C extension modules to pass around pointers, typically between two different C extension modules. If you replace one of these modules with a CFFI version, you lose: there is no official way to get the void * value from Python, with or without CFFI. It looks like it would be a valid enhancement. Feel free to open a feature request here:
https://bitbucket.org/cffi/cffi/issues?status=new&status=open
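For reference, the extraction the question asks about is straightforward on the CPython C-API side; it is only from pure Python/CFFI that there is no sanctioned route. A minimal sketch (passing NULL as the name matches unnamed capsules; adjust it if the producer named the capsule):

#include <Python.h>

/* Hypothetical helper: pull the raw pointer back out of a capsule.
   The name argument must match the name the capsule was created with
   (NULL for unnamed capsules). */
void* capsule_to_voidptr(PyObject* capsule) {
    return PyCapsule_GetPointer(capsule, NULL);
}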