Is ClojureScript suited for use with Sencha/ExtJS?

There is a trivial sample gist of using ClojureScript with Sencha. I thought ClojureScript was designed with first-class interop with JavaScript libraries in mind, but the more I read, the more it seems that only Google Closure is a first-class citizen to ClojureScript, and interop with other JavaScript frameworks isn't important to them.
I see no reason why it can't work; am I missing something? I don't want to be two or three weeks into a prototype before giving up due to problems I can't foresee.

You can use any external JavaScript library. The main issue is that if the library doesn't provide an externs.js file, you'll have trouble compiling your ClojureScript with the external library under advanced compilation. That may or may not matter for your use case.


Did they use the ClojureScript transpiler to transpile the transpiler?

I don't know how the "self-hosted" ClojureScript implementations like this and this are implemented.
However, given that the ClojureScript compiler is written in Clojure and it compiles Clojure to JavaScript, I can reason that the ClojureScript transpiler could theoretically transpile its own source code to JavaScript, producing a ClojureScript transpiler on the browser/Node platform. I was just curious: is that feasible, and is that actually how it's done?
Yes, your description sounds fairly accurate.
Here is a post that provides some explanation:
https://blog.fikesfarm.com/posts/2015-07-17-what-is-bootstrapped-clojurescript.html
and a talk that covers some of the same subject, especially near the beginning:
https://youtu.be/HnQ89r_dKEM

PyPy - SWIG - QuickFix mix

PyPy has some compatibility limitations, especially regarding the CPython C API.
I use the QuickFix package, which comes with precompiled SWIG bindings, and I'm considering using it with PyPy. As I am not fluent in the C API or SWIG, my questions are:
Do PyPy's C API compatibility limitations hinder work with SWIG? Could you explain why?
Do I need to recompile the SWIG bindings to work specifically with PyPy? Is that possible? How?
PyPy's C API compatibility layer would not work with SWIG. The main reason is that SWIG uses internal APIs and pokes into C structures without going through the official APIs. I guess SWIG could be fixed, but so far it has not been.
You would have to recompile the bindings even if they did work, but they will not work anyway.
Just stumbled across this. These days SWIG 4.0.2 and PyPy 7.3.7 or higher should play well together; it is worth a try.

Extending embedded Python in C++ - Design to interact with C++ instances

There are several packages out there that help automate the task of writing bindings between C/C++ and other languages.
In my case, I'd like to bind Python; some options for such packages are SWIG, Boost.Python and Robin.
It seems that the straightforward process is to use these packages to create C/C++ linkable libraries (with mostly static functions) and have the higher-level language be extended using them.
However, my situation is that I already have a developed, working system in C++, so I plan to embed Python into it so that future development will be in Python.
It's not clear to me how, and whether it is at all possible, to use these packages to extend embedded Python in such a way that the Python code would be able to interact with the various singleton instances already running in the system, and instantiate C++ classes and interact with them.
What I'm looking for is insight into the design best suited to this situation.
Boost.Python lets you do a lot of those things right out of the box, especially if you use smart pointers. You can even inherit from C++ classes in Python, then pass instances of those back to your C++ code and have everything still work. My favorite resource on how to do various things is this (especially check out the "How To" section): http://wiki.python.org/moin/boost.python/.
Boost.Python is especially good if you're using smart pointers or intrusive pointers, as those translate transparently into PyObject reference counting. Also, it's very good at making factory functions look like Python constructors, which makes for very clean Python APIs.
If you're not using smart pointers, it's still possible to do all the things you want, but you have to mess with various return and lifetime policies, which can give you a headache.
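For illustration, here is a minimal sketch of the kind of binding described above, assuming a reasonably recent Boost with std::shared_ptr holder support; the Engine class, the make_engine factory and the module name are all invented:

```cpp
// Hypothetical Boost.Python module: the Engine class and make_engine factory are
// made up for illustration.
#include <boost/python.hpp>
#include <memory>
#include <string>

class Engine {
public:
    explicit Engine(std::string name) : name_(std::move(name)) {}
    std::string name() const { return name_; }
private:
    std::string name_;
};

// A factory returning a smart pointer; Boost.Python converts the result into a
// Python-side Engine because shared_ptr<Engine> is the registered holder type.
std::shared_ptr<Engine> make_engine(const std::string& name) {
    return std::make_shared<Engine>(name);
}

BOOST_PYTHON_MODULE(engine_module) {
    namespace py = boost::python;
    // Holding instances in a shared_ptr ties C++ lifetime to Python reference counting.
    py::class_<Engine, std::shared_ptr<Engine>>("Engine", py::init<std::string>())
        .def("name", &Engine::name);
    py::def("make_engine", &make_engine);
}
```

Because the holder is a shared_ptr, instances created on either side can be passed back and forth without manual lifetime management.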
To make it short: there is the modern alternative pybind11.
Long version: I also had to embed Python. The interface between my C++ code and Python is small, so I decided to use the C API. That turned out to be a nightmare: exposing classes requires tons of complicated boilerplate code. Boost.Python largely avoids this by using readable interface definitions. However, I found that Boost.Python lacks thorough documentation, and for some things you still have to call the Python API directly. Furthermore, its build system seems to give people trouble; I can't tell, since I use packages provided by my system. Finally I tried the Boost.Python-inspired pybind11 and have to say that it is really convenient and fixes some of Boost.Python's shortcomings: the need to fall back to the Python API, the inability to use lambdas, the lack of easily comprehensible documentation, and the missing automatic exception translation. Furthermore, it is header-only and does not pull in the huge Boost dependency at deployment, so I can definitely recommend it.
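As a hedged sketch of the embedding scenario from the question, this is roughly what exposing an existing singleton to embedded Python can look like with pybind11; the System class, the app module name and its functions are made up for illustration:

```cpp
// Hypothetical sketch: embedding Python with pybind11 and exposing an existing
// C++ singleton to scripts. The System class and its members are invented.
#include <pybind11/embed.h>
#include <string>

namespace py = pybind11;

// Stand-in for a singleton already running in the C++ system.
class System {
public:
    static System& instance() {
        static System s;
        return s;
    }
    void log(const std::string& msg) { messages_ += msg + "\n"; }
    int job_count() const { return 3; }
private:
    std::string messages_;
};

// An embedded module: scripts can `import app` without any separate extension file.
PYBIND11_EMBEDDED_MODULE(app, m) {
    py::class_<System, std::unique_ptr<System, py::nodelete>>(m, "System")
        .def("log", &System::log)
        .def("job_count", &System::job_count);
    // Expose the singleton by reference so Python never tries to destroy it.
    m.def("system", []() -> System& { return System::instance(); },
          py::return_value_policy::reference);
}

int main() {
    py::scoped_interpreter guard{};  // starts the interpreter; RAII shuts it down
    py::exec(R"(
import app
s = app.system()
s.log("hello from embedded Python, jobs = %d" % s.job_count())
)");
    return 0;
}
```

py::return_value_policy::reference hands the singleton to Python without transferring ownership, which matches a situation where the object already lives inside the running C++ system.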

How do they write different language wrappers for same library?

Generally a library will be released in a single language (for example, C). If the library turns out to be useful, then many language wrappers for that library will be written. How exactly do they do it?
Could someone shed a little light on this topic? If it is too language-dependent, pick a language of your choice and explain it.
There are a few options that come to mind:
Port the original C library to the language/platform of your choice
Compile the C library into something (like a DLL) that can be invoked from other components
Put the library on the web, expose an API over HTTP and wrap that on the client
If I wanted to wrap a C library with a managed (.NET) layer, I'd compile the library into a DLL, exposing the APIs I wanted. Then, I'd use P/Invoke to call those APIs from my C# code.
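To make the DLL option concrete, here is a rough sketch of what the native side of such a wrapper might look like; the mathlib name and both functions are invented, and the matching C# declaration appears only as a comment:

```cpp
// Hypothetical sketch: exposing a small C-style API from a C/C++ library so it can
// be built as a DLL/shared library and called via P/Invoke (or any other FFI).
#ifdef _WIN32
#  define MATHLIB_API extern "C" __declspec(dllexport)
#else
#  define MATHLIB_API extern "C"
#endif

// Keep the exported surface plain C: fundamental types, no C++ classes or
// exceptions, so that any language's FFI can call it.
MATHLIB_API int mathlib_add(int a, int b) {
    return a + b;
}

MATHLIB_API double mathlib_mean(const double* values, int count) {
    if (count <= 0) return 0.0;
    double sum = 0.0;
    for (int i = 0; i < count; ++i) sum += values[i];
    return sum / count;
}

// A matching C# declaration (shown only as a comment) would be something like:
//   [DllImport("mathlib")] static extern int mathlib_add(int a, int b);
```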

MooTools vs Prototype & script.aculo.us

Can we use MooTools AND Prototype & script.aculo.us, both in a single project?
Will any problems occur if we use both frameworks in a single project?
Is there any adapter which would help us use both frameworks in a single project?
No, because both MooTools and Prototype extend native JavaScript objects like String and Array.
The script loaded last will override the extensions the first script made. Both frameworks have an Array.each function, so if a script.aculo.us script tries to use each but MooTools was inserted after Prototype, script.aculo.us will use MooTools' each. Maybe it works, but you can't trust anything.
I don't know of any adapter.
By the way, it isn't a good idea to mix frameworks. First, there is an overhead in script loading. Second, every framework is built with a specific goal in mind: jQuery is more DOM-related and easy to use, while MooTools and Prototype are more in the OOP business, yet they all cover the most common tasks. So there is no need to have more than one solution for Array.each.
You can try using Prototype and MooTools side by side, because recent MooTools versions include a mechanism which detects the existence of the $ function.
However, there is very little point in using both frameworks in one project - both have similar capabilities (extensive DOM manipulation and some goodies which extend the JS core, like OOP).
I found both frameworks quite similar, and honestly, if one of them can do something, the other one can surely do it as well.