It seems that the ClojureScript compiler compiles files in src in alphabetical order. Is there a way to make it start with the main target instead? I currently include a file aaa_init.cljs in my projects which just happens to allow me to ensure certain things happen first... but this feels like an awkward solution.
It is useful to control the order in which files are processed so that I can ensure (enable-console-print!) happens before any printing, and so I can use conveniences like defonce and re-frame's dispatch to set initial values.
Your question talks about two phases: compilation, and runtime. As far as I know, the order that you compile namespaces in will have no effect on runtime behaviour.
If you want (enable-console-print!) to be called before printing, or to dispatch initial values to re-frame, then those should happen in the :main namespace that you specify to the compiler. ClojureScript also has the ability to set :preloads, which run before your main namespace. These are typically meant for development-time tooling that you want compiled out of your production build, but they could possibly be used for what you're asking.
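For example, here is a minimal sketch assuming a browser build; the namespace and file names are made up, but :main and :preloads are real compiler options, and :initialize is a hypothetical re-frame event:

;; compiler options (e.g. the options map passed to your build tool)
{:main      my-app.core
 :preloads  [my-app.dev-setup]   ;; loaded before :main; typically dev-only tooling
 :output-to "out/app.js"}

;; src/my_app/core.cljs
(ns my-app.core
  (:require [re-frame.core :as rf]))

(enable-console-print!)                               ;; runs as soon as this namespace loads
(defonce init-done (rf/dispatch-sync [:initialize]))  ;; set initial app-db exactly once

Either way the namespaces that my-app.core requires are loaded before its own top-level forms run, regardless of the alphabetical order of the source files.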
Related
I'm using reagent to build several alternate root components, only one of which will be mounted on any given page; definitely either/or. These have a degree of commonality in their makeup, so it would be convenient to move what is common among them into a common namespace.
What would be ideal is if, in the file for each of these components, I had the option to switch namespace into common, add defs particular to the component, then switch back, thus avoiding circular dependencies without needing any kind of inheritance.
I recall this being possible in Common Lisp, and how wonderful it was, and it also seems possible in Clojure.
From the ClojureScript docs: ns must be the first form and can only be used once, and in-ns is only usable from the REPL.
I'm wondering if there's a way to achieve this kind of thing in clojurescript which is still eluding me.
If not I may need to reconsider my assumptions behind multiple alternate root components; the "many builds within one build" kind of idea, if that makes sense.
Update after some further experimentation and confusion:
Another option might be to split a single namespace across multiple files (is this possible?). I'm not sure what direction to turn in here.
The fact that in reagent I am using atoms in the global namespace is what's creating the need for circular dependencies if I use a separate namespace for common. Hence I wonder about one global namespace, in which case multiple files might help. Or is the way forward one giant file and one namespace?
Update: I've realised there is a great tension between keeping all app state globally (in my current case, multiple atoms), and passing app state around. My pattern currently is everything global, don't pass any of it around. Passing the necessary state as parameters to fns in the common namespace would solve the problem here (duh!), but then there's the question of what principles are being followed here regarding app state. If I just added a param whenever I needed one, but started with the idea that everything was global, there'd be no real principle to it...
In ClojureScript, everything is pre-compiled into a single static JavaScript "executable", so there is nothing like the REPL you are used to in Clojure. Indeed, in CLJS the "Var" concept doesn't really survive past the compiler; they are just static (constant) variables and cannot be rebound.
Having said that, CLJS does emulate the behavior of Clojure dynamic variables via the binding form, so that may help you to reach your goal. As in CLJ, it creates what amounts to a (thread-local) global variable. This is a degenerate case in CLJS since there is only one thread. However, the source code looks identical to the CLJ case.
Another way to accomplish this is to just use a plain atom as a global variable so you don't have to pass a parameter around.
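Here is a minimal sketch of both approaches, with made-up names, just to show the shape of the code:

;; 1. A dynamic var, rebound with `binding` (same source as in Clojure):
(def ^:dynamic *debug?* false)

(defn log [msg]
  (when *debug?* (println msg)))

(binding [*debug?* true]
  (log "printed only inside this binding block"))

;; 2. A plain atom used as a global, e.g. in a shared namespace that the
;;    alternate root components all require:
(defonce app-state (atom {:user nil}))

(swap! app-state assoc :user "alice")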
As always, a global variable reduces the number of parameters in function call trees, but it creates invisible dependencies between different parts of the code. Sometimes convenient, but usually a bad tradeoff.
Specifically, my issue is that I have CUDA code that needs <curand_kernel.h> to run. This isn't included by default in NVRTC. Presumably then when creating the program context (i.e. the call to nvrtcCreateProgram), I have to send in the name of the file (curand_kernel.h) and also the source code of curand_kernel.h? I feel like I shouldn't have to do that.
It's hard to tell; I haven't managed to find an example from NVIDIA of someone needing standard CUDA files like this as a source, so I really don't understand what the syntax is. Some issues: curand_kernel.h also has includes... Do I have to do the same for each of these? I'm not even sure the NVRTC compiler will run correctly on curand_kernel.h, because there are some language features it doesn't support, aren't there?
Next: if you've sent in the source code of a header file to nvrtcCreateProgram, do I still have to #include it in the code to be executed / will it cause an error if I do so?
A link to example code that does this or something like it would be appreciated much more than a straightforward answer; I really haven't managed to find any.
You have to send the "filename" and the source of each header separately.
When the preprocessor does its thing, it'll use any #include filenames as keys to find the source for the headers, based on the collection that you provide.
I suspect that, in this case, the compiler (driver) doesn't have file system access, so you have to give it the source in much the same way that you would for shader includes in OpenGL.
So:
Include your header's name when calling nvrtcCreateProgram. The compiler will, internally, generate the equivalent of a std::map<string,string> containing the source of each header indexed by the given name.
In your kernel source, use #include "foo.cuh" as usual.
The compiler will use foo.cuh as an index or key into its internal map (created when you called nvrtcCreateProgram), and will retrieve the header source from that collection.
Compilation then proceeds as normal; a minimal sketch of these steps follows below.
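Here is a sketch of those steps; error checking is mostly omitted, and the kernel and header contents are made-up stand-ins for whatever you actually need:

#include <nvrtc.h>
#include <cstdio>

// Kernel source that includes a header by name.
const char *kernelSrc = R"(
#include "foo.cuh"
extern "C" __global__ void kern(float *out) { out[0] = magic(); }
)";

// Source text for that header, supplied directly instead of from disk.
const char *fooHeader = R"(
__device__ float magic() { return 42.0f; }
)";

int main() {
    const char *headerSources[] = { fooHeader };
    const char *headerNames[]   = { "foo.cuh" };  // key matched against #include "foo.cuh"

    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, kernelSrc, "kern.cu",
                       1, headerSources, headerNames);

    const char *opts[] = { "--gpu-architecture=compute_60" };
    nvrtcResult res = nvrtcCompileProgram(prog, 1, opts);

    size_t logSize = 0;
    nvrtcGetProgramLogSize(prog, &logSize);
    // ...fetch and print the log here, then nvrtcGetPTX on success...

    std::printf("compile %s\n", res == NVRTC_SUCCESS ? "ok" : "failed");
    nvrtcDestroyProgram(&prog);
    return 0;
}

(Link against the NVRTC library, e.g. -lnvrtc, and add the CUDA include and library paths for your install.)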
One of the reasons that nvrtc provides only a "subset" of features is that the compiler plays in a somewhat sandboxed environment, without necessarily having all of the supporting tools and utilities lying around that you have with offline compilation. So, you have to manually handle a lot of the stuff that the normal nvcc + (gcc | MSVC | clang) combination provides.
A possible, but non-ideal, solution would be to preprocess the file that you need in your IDE, save the result and then #include that. However, I bet there is a better way to do that. If you just want curand, consider diving into the library and extracting the part you need (blech), or using another GPU-friendly rand implementation. On older CUDA versions, I just generated a big array of random floats on the host, uploaded it to the GPU, and sampled it in the kernels.
This related link may be helpful.
You do not need to load curand_kernel.h yourself and add it to the include "aliases" mechanism.
Instead, you can simply add the CUDA include directory to your (set of) include paths, e.g. by adding --include-path=/usr/local/cuda/include to your NVRTC compiler options.
(I do this in my GPU-kernel-runner test harness, by default, to be on the safe side.)
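For instance, continuing from a program created with nvrtcCreateProgram (the path here is an assumption for a typical Linux install; adjust it for yours):

// Let NVRTC resolve #include <curand_kernel.h> itself via an include path.
const char *opts[] = {
    "--include-path=/usr/local/cuda/include",
    "--gpu-architecture=compute_60"
};
nvrtcResult res = nvrtcCompileProgram(prog, 2, opts);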
I am building a module that exports a cmdlet that I would like to make available through my profile. The implementation of this cmdlet is spread across multiple implementation files that contain implementation functions I don't want to make publicly available. So I use Export-ModuleMember to hide them.
File: get_something.psm1

Import-Module .\get_something_impl.psm1

function Get-Something {
    [CmdletBinding()]
    param()
    Get-SomethingImplementation
}

Export-ModuleMember -Function Get-Something
I then add get_something.psm1 to my profile. By exporting only Get-Something, all of my implementation functions remain "private".
The issue I'm experiencing is that when using the Export-ModuleMember command, I have to import a module in my implementation files every time I need a function inside of it. For example, assume I have a module, person.psm1, with a function, Get-Person, that I need to call throughout all of my implementation files. Now I must import person.psm1 in every single file in which I need to call Get-Person. This is a result of using Export-ModuleMember -Function Get-Something. Without it, I would only need to import person.psm1 once and it would be available.
In essence, Export-ModuleMember is not only blocking my implementation to the outside. It's blocking it from my own implementation.
Is this expected and considered a normal aspect of designing PowerShell modules?
This was actually a bit of a debate during the development of modules. Originally, Export-ModuleMember was required to export any function. This became tedious and limiting. So, by default, all functions from a module are visible, but variables and aliases are not, as long as you've never used Export-ModuleMember within the .PSM1.
If you use Export-ModuleMember, it begins to restrict that list. It may not be a bad idea to export a smaller number of functions, but you have to use it somewhat carefully.
You can either write:
Export-ModuleMember -Function a,b,c
which exports a few functions.
or
Export-ModuleMember -Function *
The latter one is equivalent to omitting Export-ModuleMember altogether.
You can use more restrictive wildcards if you'd like, but I find that 99% of the time, you don't need to bother with it at all.
The other thing you seem to be asking is how best to handle module dependencies. Nowadays, it's fairly common to import a module or two when writing a script, just like it's fairly common to include an assembly or two in a C# project. If you're doing this inside of a module, you can use the -Global flag on Import-Module, and avoid using -Force (which will reload the module). This makes it a notch more efficient to reuse the module in different functions. It also makes it less likely to have problems with "cycling" (unloading and reloading) the module, which, unfortunately, many modules do not do well.
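For example, inside one of the implementation files (person is the module name from your question):

# Import person once, into the global session state, so its commands are
# visible everywhere; no -Force, so an already-loaded copy is simply reused.
Import-Module person -Global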
The alternative to referencing the module in each function is using a module manifest (Get-Help New-ModuleManifest). Module manifests are very interesting, and required learning for many parts of module development. If you include a module in the RequiredModules list of the Module manifest, it will be automatically loaded before the module is imported (at least in PowerShell 3 and greater). If you include a module in the NestedModules list of the module manifest, it will be loaded as part of the module, and the commands exported by the module will be exported by your module instead.
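As a rough sketch, a manifest for your example might look like this (generate a real one with New-ModuleManifest; the module names are taken from your question):

# get_something.psd1
@{
    RootModule        = 'get_something.psm1'
    ModuleVersion     = '1.0.0'
    RequiredModules   = @('person')                    # loaded before this module imports
    NestedModules     = @('get_something_impl.psm1')   # loaded as part of this module
    FunctionsToExport = @('Get-Something')             # the public surface; everything else stays private
}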
Module design is a tricky beast, but it's very rewarding to do right. Best of luck.
When I compile the same project with Ant many times, it produces a different size for each compilation.
I added the rsl option and more to the mxml task; it works fine, but the size still changes.
Any idea how to make the size consistent?
This is not really possible, unless you do some quite involved post-processing. Below is a list of things I know of that change the size between compiles, but it may not be exhaustive.
When you compile a pure AS3 project or a project that uses the framework:
Resources you embed on variables have their class names generated using the current date as part of the name.
The Flex compiler generates a tag of obscure purpose (it can be removed manually, but persists in release builds), which seems to contain a GUID and a compilation time or something similar. It is usually found at the very beginning of the SWF file, somewhere after the rectangle of the SWF dimensions.
In a project that uses the framework:
All code generation is liable to produce inconsistent names; in particular, all bindings will certainly produce different assembly upon each compile. Styles and some other rarely used metadata will cause this too.
Specifically for Spark skins, which come as sources rather than compiled libraries: some of them embed resources in a bad way, so you would probably need to compile them into a library and plug that into the project, removing the sources from the source path.
All in all, if you are using a pure AS3 project, your task is difficult but doable (it will require following certain conventions and a post-build script that unzips the SWF, purges the compiler-added extra tag, and zips the SWF back). But if it is a SWF based on the framework, I'd say the effort isn't worth it; just accept that it cannot be done.
I've been looking at some Lua source code, and I often see things like this at the beginning of the file:
local setmetatable, getmetatable, etc.. = setmetatable, getmetatable, etc..
Do they only make the functions local to let Lua access them faster when often used?
Local variables live on the stack, so access to them is indeed faster. However, I seriously doubt that the time it takes to call setmetatable is actually a significant issue for any program.
Here are the possible explanations for this:
Prevention of polluting the global environment. The modern Lua convention for modules is to not have them register themselves directly in the global table. They should build a local table of functions and return it, so the only way to access the module is through a local variable (see the sketch after this list). This forces a number of things:
One module cannot accidentally overwrite another module's functions.
If a module does accidentally do this, the original functions in the table returned by the module will still be accessible. Only by using local modname = require "modname" will you be guaranteed to get exactly and only what that module exposed.
Modules that include other modules can't interfere with one another. The table you get back from require is always what the module stores.
A premature optimization by someone who read "local variables are accessed faster" and then decided to make everything local.
In general, this is good practice. Well, unless it's because of #2.
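Here is a minimal sketch of that local-table module pattern; the module and function names are made up:

-- mymodule.lua
local setmetatable = setmetatable   -- capture the global as a local up front

local M = {}

function M.new(fields)
  return setmetatable(fields or {}, { __index = M })
end

return M

-- in a consumer:
-- local mymodule = require "mymodule"
-- local obj = mymodule.new{ name = "x" }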
In addition to Nicol Bolas's answer, I'd add on to the 3rd point:
It allows your code to be run from within a sandbox after it's been loaded.
If the functions have been excluded from the sandbox and the code is loaded from within the sandbox, then it won't work. But if the code is loaded first, the sandbox can then call the loaded code and be able to exclude setmetatable, etc, from the sandbox.
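A small sketch of that idea, using the Lua 5.2+ load function and made-up names; the module captures setmetatable as a local when it is loaded, so sandboxed code can still call it through the module even though the sandbox table omits it:

local mymodule = require "mymodule"   -- loaded outside the sandbox; locals captured here

local sandbox_env = { mymodule = mymodule, print = print }   -- no setmetatable exposed
local chunk = assert(load("mymodule.new{}", "user code", "t", sandbox_env))
chunk()   -- works: mymodule's local alias still points at the real setmetatable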
I do it because it allows me to see the functions used by each of my modules.
Additionally, it protects you from others changing the functions in the global environment.
That it is a free (premature) optimisation is a bonus.
Another subtle benefit: It clearly documents which variables (functions, modules) are imported by the module. And if you are using the module statement, it enforces such declarations, because the global environment is replaced (so globals are not available).