Should we place C code in Static library or Runtime component? - windows-phone-8

We're moving to Windows Phone 8, but many good libraries out there are written in pure C. What is the best way for a Windows Phone C# application to consume such a C library?
1. Place the C code in a WP static library, then reference it from a WP Runtime component.
2. Place the C code directly in a WP Runtime component.
What is the best practice?

There isn't any real difference between the two approaches. A static library is nothing but a collection of .obj files, the exact same kind of .obj files that you'll get from approach #2. After the linker is done, there won't be any difference in the result.
That's when everything is perfect, an ideal that can be very difficult to achieve when you use open-source C code. An advantage of a static .lib is that it improves build time, since the .obj files don't have to be regenerated. But that's also its disadvantage: you'll shoot yourself in the foot if you use a .lib that was created by somebody else who didn't use the same compiler version or compile options. The simplest example of such a trap is building your Debug version against a .lib that was built for Release, or a .lib that uses winapi functions that are verboten in a Phone app, which is pretty common. So the best way to avoid problems is to build the .lib yourself, so you control all the compile and link settings. Do beware, however, that it can be very difficult to get open-source C code to build; it often comes with a very extensive configuration script designed to deal with the differences between the many architectures and Unix variants.
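Either way, the C code ends up behind a WinRT wrapper that C# can see. As a rough sketch (the header some_c_library.h and the function some_c_function are hypothetical stand-ins for whatever your library actually exports), the Runtime component side could look like this:

```cpp
// CWrapper.h in the WP Runtime component project (C++/CX).
// some_c_library.h / some_c_function are placeholders for the real C API.
#pragma once

extern "C" {
#include "some_c_library.h"
}

namespace MyRuntimeComponent
{
    // Only WinRT-friendly types (int, double, Platform::String^, arrays, ...)
    // can cross into C#, so the wrapper translates between those and the
    // plain C types the library uses.
    public ref class CWrapper sealed
    {
    public:
        int Compute(int input)
        {
            return some_c_function(input);
        }
    };
}
```

From the C# project you then reference the Runtime component and call new CWrapper().Compute(...); whether the C code was compiled directly into the component or linked in from a static .lib makes no difference at this level.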

Related

How to enforce module boundaries in Java <= 8

I want to modularize a monolithic application by using Java modules (introduced by Project Jigsaw).
Unfortunately, I'm currently stuck on Java 8. I would like to build those modules (jar files) anyway, just without the ability to use a module-info.java file to declare the dependencies and the exported API.
Some weeks ago I stumbled upon an API/framework that lets you define nearly the same things you can express in a module-info.java file. It can be used in JUnit tests to enforce that modules do not use the internals of other modules and that modules only access the modules they explicitly depend on.
Of course you do not get any assistance from the compiler or the IDE, but you can enforce the access rules through JUnit. Unfortunately, I cannot find the project that provides this API any more.
Can anyone help?
By the way, if there is another good approach besides a unit test, please let me know, too!
https://www.archunit.org/ is what I have been looking for.

How do you use existing C libraries with the Chrome Native Client?

I'm new to Chrome Application development and the Native Client/PNaCl pipeline. I'm a bit confused about the process of using existing C code/libraries in a Chrome application. The FAQs and the official NaCl docs suggest that using existing code is easy and is one of the advantages of developing Chrome applications.
However, there is also mention of naclports, an official collection of C libraries ported for use in Native Client apps.
If I have some random C library I've used, what is the process for actually using it in my Chrome application? This question feels silly, but I'm quite confused about the process. Do I have to recompile the source with a NaCl compiler? Where in the SDK can I find this?
The FAQ also mentions that things like fork and file I/O are not allowed in the library, so will I have to rewrite any code that does these things? Is that correct?
Bottom line: I have an existing C library. What is the process for using it correctly, and making calls to it, in a Chrome application?
The short answer is that you'll have to recompile your library with a NaCl C compiler.
You may want to take a look at naclports: this repository contains ports of many common C libraries to Native Client. You can browse the source more easily here.
As for file I/O, we often suggest using the nacl_io library when porting existing code. It provides a POSIX-style interface (fopen/fclose/fread, etc.), and many ported libraries require no modifications when using nacl_io.
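As a rough sketch of what that setup can look like (assuming the nacl_io library shipped with the NaCl SDK and a standard ppapi C++ pp::Instance; error handling omitted):

```cpp
// Sketch only: initialize nacl_io so that ordinary stdio calls made by a
// ported C library work inside the Native Client sandbox.
#include "nacl_io/nacl_io.h"
#include "ppapi/cpp/instance.h"
#include "ppapi/cpp/module.h"
#include <cstdio>

class MyInstance : public pp::Instance {
 public:
  explicit MyInstance(PP_Instance instance) : pp::Instance(instance) {}

  virtual bool Init(uint32_t, const char* [], const char* []) {
    // Wire nacl_io up to the browser's Pepper interfaces.
    nacl_io_init_ppapi(pp_instance(),
                       pp::Module::Get()->get_browser_interface());

    // nacl_io mounts an in-memory filesystem at "/" by default; a persistent
    // HTML5 filesystem can also be mounted (its I/O must then happen off the
    // main Pepper thread).
    mount("", "/persistent", "html5fs", 0, "type=PERSISTENT");

    // Existing code that uses plain stdio can now run largely unchanged;
    // this example writes to the in-memory filesystem.
    FILE* f = fopen("/hello.txt", "w");
    if (f) {
      fputs("hello from nacl_io\n", f);
      fclose(f);
    }
    return true;
  }
};

class MyModule : public pp::Module {
 public:
  virtual pp::Instance* CreateInstance(PP_Instance instance) {
    return new MyInstance(instance);
  }
};

namespace pp {
Module* CreateModule() { return new MyModule(); }
}
```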

Coming from a Flash Builder background, how can I import a library into IntelliJ IDEA?

The only IDE I've used for many years was Flash Builder. Sadly, the 4.7 version is in a poor state, so I started looking at other IDEs lately.
I'm trying IDEA, but I don't know how to add a library to my project.
In FB it was simple: I would go to the compiler settings and just add a folder or SWC. But here it seems there are two options: one under the "Libraries" tab, and the other is creating a new module. However, a new module requires a main app, an SDK, and a lot of other stuff that doesn't make sense for a library.
So, what is the equivalent of adding a library path here?
Thanks.
"Libraries" is the correct way to do it. There are two kinds of libraries: libraries and global libraries. Global libraries can configured once and used in every project. Libraries are configured for each project. See http://www.jetbrains.com/idea/webhelp/configuring-project-and-global-libraries.html

How to use libraries of other languages in ActionScript 3?

Is there any way to use a library written in another language from ActionScript 3?
For C/C++
I learned that we can use Alchemy: http://labs.adobe.com/technologies/alchemy/
But for other languages (here I mean Ruby or Java), can I use their libraries via AS3?
Alchemy is a cross-compiler from LLVM to ABC; if the language you are interested in can compile to LLVM, then, in theory, it can be compiled to ABC. So Alchemy also supports (with some tweaks) C# or Java, but you will have to spend some time getting them to work together; there is no out-of-the-box solution.
AIR (the desktop variant of Flash) can use something called NativeProcess, which means that, provided the operating system can execute the code in the library, you can use it by creating a new process that calls the function you need from the library.
Also in AIR, there is a newer feature called ANE (AIR Native Extensions), which lets you bind AIR more tightly to native code. The code doesn't need to be native in the sense of object files only; it just has to be something the underlying system can execute. But I have to confess I have never tried creating such an extension, so my knowledge of it is theoretical.
No, you cannot use Ruby or Java code/libraries in Flash. Alchemy only allows you to use C and C++ libraries. The only options are to port those libraries to ActionScript, or to hope that the library you want has already been ported to AS3.
It is also possible that something similar already exists in AS3. You could ask another question here or on a Flash forum to see whether anyone knows of an equivalent of the library you need in Flash.

Using CUDA Kernels

I'm interested in using the CUSP library for CUDA (available here). However, I'm having trouble getting this library to work with my application, which links against the CUDA and/or CUBLAS static libraries. From glancing through the header and source files, I'm assuming I either use the kernels by building the related files into a static library (using the nvcc compiler) that my application (built with the MS Visual Studio compiler) then links against, or use the kernels directly in my application (and I don't know how that would work out). The CUSP library also uses the METIS library, which I'm also having trouble figuring out how to install on Windows. What would be your suggestions on the best way of using CUSP features in my application? Thanks in advance.
After a quick look through the CUSP source, it seems that CUSP follows the same model as (and even makes use of) Thrust. These are template-based libraries that consist only of header files (with some #included inline code), like most of the STL and Boost libraries. Take dia_matrix.h, for example: the 'implementation' is in dia_matrix.inl, which is #included at the bottom of dia_matrix.h.
Take a look at the Thrust and CUSP examples for how to use these libraries in your own code. It should be nothing more than a matter of including the correct header files and working with the data types they provide. The CUDA kernels will be generated at compile time for you and you shouldn't need to worry about those details.
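To make that concrete, here is a minimal sketch along the lines of CUSP's own quick-start examples (assuming the CUSP and Thrust headers are on the include path and the file is compiled with nvcc; exact names such as default_monitor can vary a bit between CUSP versions). It builds a small sparse Poisson matrix on the device and solves it with conjugate gradients, with all the needed CUDA kernels instantiated from the headers at compile time:

```cpp
// cusp_sketch.cu -- build with nvcc, e.g. nvcc -I/path/to/cusplibrary cusp_sketch.cu
#include <cusp/csr_matrix.h>
#include <cusp/gallery/poisson.h>
#include <cusp/krylov/cg.h>
#include <cusp/monitor.h>

int main()
{
    // Sparse matrix in CSR format, stored in device (GPU) memory.
    cusp::csr_matrix<int, float, cusp::device_memory> A;

    // Fill A with the 5-point Poisson stencil on a 256x256 grid.
    cusp::gallery::poisson5pt(A, 256, 256);

    // Right-hand side b = 1 and initial guess x = 0, also on the device.
    cusp::array1d<float, cusp::device_memory> x(A.num_rows, 0.0f);
    cusp::array1d<float, cusp::device_memory> b(A.num_rows, 1.0f);

    // Solve A x = b with conjugate gradients (at most 100 iterations,
    // relative tolerance 1e-6).
    cusp::default_monitor<float> monitor(b, 100, 1e-6f);
    cusp::krylov::cg(A, x, b, monitor);

    return 0;
}
```

No separate static library or explicit kernel launches are involved; the solver and its kernels come entirely from the headers, the same way Thrust works.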