CMake: Exporting targets with the same name in the build tree - namespaces

Let's say I have two separate CMake projects, namely:
.../mylib1/ (CMakeLists.txt)
project(mylib_1)
add_library(mylib STATIC ${SOURCES})
...
.../mylib2/ (CMakeLists.txt)
project(mylib_2)
add_library(mylib STATIC ${SOURCES})
Now if my app project had:
.../app/: (CMakeLists.txt)
add_subdirectory(mylib1)
add_subdirectory(mylib2)
CMake would complain about there being two library targets with the same name. To get around this problem, I use the export functionality:
export(EXPORT <export-name> [NAMESPACE <namespace>] [FILE <filename>])
so I have something like:
project(mylib_1)
add_library(mylib STATIC ${SOURCES})
install(TARGETS mylib EXPORT myExports ...)
export(EXPORT myExports NAMESPACE One:: FILE myExports.cmake)
And the root project accessing these projects would:
execute_process(COMMAND cmake path/to/lib1
    WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/lib1
)
include(${CMAKE_BINARY_DIR}/lib1/myExports.cmake)
after that, one would use for example:
target_link_libraries(app One::mylib)
target_link_libraries(app Two::mylib)
so there is no name clash. This works fine, but it requires me to modify the packages that I am trying to include. So I was wondering if there is any way of processing these export files and assigning/changing their namespaces at the parent level.
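For context, the export file that export() generates has the namespace baked in at generation time; simplified, it looks roughly like this (the path is illustrative):
add_library(One::mylib STATIC IMPORTED)
set_target_properties(One::mylib PROPERTIES
    IMPORTED_LOCATION "/path/to/build/lib1/libmylib.a"
)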
The reason I am interested in this setup is that, for example, I might want to use two versions of the same library, or I might have project A depending on projects B and C, which both depend on project D. I could leave the namespace in D as something that is evaluated dynamically based on its place in the project hierarchy, i.e. A-B or A-C, but I don't think that's a nice way of doing it.
So is there any way of importing/processing an export file and give it a new namespace?

Related

PhpStorm: multiple projects with common core

For security reasons, I have to split one project into divisions (client, admin, ...) and deploy them to different web servers. These divisions share one common script base, but each division has its own functions. As an IDE I use PhpStorm.
The question: what is the best way to organize the project's structure and settings so that the common core part is visible for IDE indexing in all of the project's divisions, while at the same time being maintained from a single (perhaps standalone) project?
In Java you can build library jar files and link them into various projects.
But how can this be done in PHP?
There are multiple ways to reference extra PHP code in a project.
If you plan to actively edit such extra code in the same project (and want to see its TODOs, code inspection warnings, have it included in code refactorings, etc.):
You can just add it as an additional Content Root: Settings/Preferences | Directories. A folder added this way is treated as part of the project itself and is shown as a separate node in the Project View panel (just like the main code, which is a Content Root as well).
Or you can open a 2nd project while the 1st one is already open and, when asked, just choose "Attach".
It's not going to be two full projects in one frame; it's more like something in between attaching a Content Root and having two projects opened in separate frames.
https://www.jetbrains.com/help/phpstorm/opening-multiple-projects.html
A simple symlink will also do the job (but you need to place it somewhere inside the project, e.g. PROJECT_ROOT/libs/my_symlinked_code). You will then need to provide a path mapping for that folder for the debugger (if you debug it, of course), as PHP/Xdebug works with the "final/resolved path" while the IDE works with the path as is.
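For example (paths here are placeholders):
ln -s /path/to/common-core PROJECT_ROOT/libs/my_symlinked_code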
If you do not need to actively edit that extra code in the same window (and can ignore any TODOs, code inspection warnings, and other inspection results, etc.):
Why not make it a Composer package? Composer can use custom sources (e.g. a GitHub repo or a folder on a local filesystem).
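A minimal sketch of such a composer.json (the package name mycompany/common-core and the ../common-core path are placeholders):
{
    "repositories": [
        { "type": "path", "url": "../common-core" }
    ],
    "require": {
        "mycompany/common-core": "*"
    }
}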
Just add the path to that folder as an "Include Path" at Settings/Preferences | Languages & Frameworks | PHP --> Include Path tab. Code referenced this way is meant for 3rd-party libraries (code that you just use but do not edit, e.g. framework code, your mailer/ORM library, etc.). Composer packages will also be included here by default.
https://www.jetbrains.com/help/phpstorm/php.html#include-path-tab

Compile file with two separate libraries in Cython

I wrote a library in Cython that has two different "modes":
If rendering, I compile using GLFW.
If not rendering, I compile using EGL, which is faster, but I have not figured out how to render with it.
What is the recommended way to handle this situation?
Right now, I have the following directory structure:
mujoco
├── __init__.py
├── simEgl.pyx
├── simGlfw.pyx
├── sim.pxd
└── sim.pyx
simEgl.pyx contains EGL code and simGlfw.pyx contains GLFW code. setup.py uses an environment variable to choose one or the other for the build.
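For illustration, a setup.py along these lines (a sketch only; the RENDER variable and the library names are placeholders, not the actual ones):
import os
from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize

# Pick the graphics backend at build time via an environment variable.
use_glfw = os.environ.get("RENDER", "0") == "1"
graphics_lib = "glfw" if use_glfw else "EGL"
backend_pyx = "mujoco/simGlfw.pyx" if use_glfw else "mujoco/simEgl.pyx"

extensions = [
    # shared wrapper around libmujoco
    Extension("mujoco.sim", sources=["mujoco/sim.pyx"],
              libraries=["mujoco150", graphics_lib]),
    # backend-specific module
    Extension("mujoco.simGlfw" if use_glfw else "mujoco.simEgl",
              sources=[backend_pyx],
              libraries=["mujoco150", graphics_lib]),
]

setup(name="mujoco", ext_modules=cythonize(extensions))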
This works ok, except that I need to recompile the code every time I want to switch between modes. There must be a better way.
Update
I agree that the best approach is to compile two different libraries side by side and use a toggle to choose which one to import. I do already have a base class in sim.pyx with shared functionality. However, this base class must itself be compiled against the separate libraries. Specifically, sim.pyx depends on libmujoco.so, which in turn depends on either GLFW or EGL.
Here is my exhaustive search of possible approaches:
If I do not compile an extension for sim.pyx, I get ImportError: No module named 'mujoco.sim'
If I compile an extension for sim.pyx without including graphics libraries in the extension, I get ImportError: /home/ethanbro/.mujoco/mjpro150/bin/libmujoco150.so: undefined symbol: __glewBlitFramebuffer
If I compile an extension for sim.pyx and choose one set of graphics libraries (GLFW), then, unsurprisingly, trying to use the other set of graphics libraries (EGL) does not work either:
ERROR: GLEW initalization error: Missing GL version
If I compile two different versions of the sim.pyx library, one with each set of libraries, I get TypeError: unorderable types: dict() < dict(), which is not a very helpful error message but appears to result from trying to share a source file between two different extensions.
Something like option 4 should be possible. In fact, if I were working in raw C, I would simply build two shared objects side by side using the different libraries. Any advice on how to get around this Cython limitation would be very welcome.
(This answer is just a summary of the comments with a bit more explanation.)
My initial suggestion was to create two extension modules defining a common interface. That way you pick which one to import in Python but can use them in the same way once imported:
if rendering:
    import simGlfw as s
else:
    import simEgl as s
s.do_something()  # doesn't matter which one you imported
It appears from the comments that the two modules also share a large chunk of their code, and it's really just the library they're linked with that defines how they behave. Trying to re-use the same sources with
Extension(name='sim1', sources=["sim.pyx", ...]),
Extension(name='sim2', sources=["sim.pyx", ...])
fails. This is because Cython assumes that the module name will be the same as the filename, and so creates a function PyInit_sim (on Python 3; Python 2 names it slightly differently, but the idea is the same). However, when you import sim1.so it looks for the function PyInit_sim1, fails to find it, and gives an error.
An easy way round it is to put the common code in "sim.pxi" and use Cython's largely obsolete include mechanism to textually include that code in sim1.pyx and sim2.pyx
include "sim.pxi"
Although include is generally no longer recommended and cimport is preferred since it provides more "Python-like" behaviour, include is a simple solution to this particular problem.
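A sketch of how the two wrappers could then be built side by side (library names here are assumptions; adjust them to your environment):
# sim1.pyx and sim2.pyx each contain just:  include "sim.pxi"
from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize

extensions = [
    Extension("sim1", sources=["sim1.pyx"], libraries=["mujoco150", "glfw"]),
    Extension("sim2", sources=["sim2.pyx"], libraries=["mujoco150", "EGL"]),
]

setup(ext_modules=cythonize(extensions))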

What do the square brackets mean in a bundle source pattern?

The aurelia.json file has a bundles.source property. It appears to use the glob syntax that minimatch supports. The out-of-the-box au new template, though, includes square brackets around some patterns. E.g.
"[**/*.js]"
In my experience, square brackets have meant ranges, such as [a-z] mapping to abcdefg...wxyz. That is also what minimatch respects.
> match = require("minimatch");
> match("q", "[a-z]");
true
What do square brackets mean to the Aurelia CLI when processing the bundles.source property?
The brackets actually define whether or not we trace the dependencies of what we find based on the glob pattern. The double-star pattern (**/*) is what defines the "search sub-folders too" part of the pattern.
While this is documented in the section on configuring JSPM, it also applies when configuring with the CLI (see the documentation).
Our goal is to create a bundle of our application code only. We have to somehow instruct the bundler not to recursively trace the dependencies. Guess what? [*.js] is how we do it.
[*.js] will exclude the dependencies of each module that the glob pattern *.js yields. In the above case it will exclude aurelia-framework, aurelia-fetch-client and so on.
For example, with a pattern like [src/**/*.js], you are asking for every JavaScript file in the src folder and every sub-folder of src, without tracing any dependencies. This means that if module A in src requires module B in test, then module B won't be included, because the brackets indicate that we're not tracing dependencies.
Conversely, if you use a pattern like src/**/*.js, you are asking for every JavaScript file in the src folder and every sub-folder of src, including any dependencies of those files. This means that if module A in src requires module B in test, then module B will be included, because we are including dependencies.
It is important to note that this is how Aurelia defines its dependency tracing. While it uses glob patterns and minimatch, the bracket syntax (as far as I know) is not part of those libraries, but rather a way for Aurelia to quickly and easily define whether or not we're tracing.
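For illustration, a simplified aurelia.json fragment using the bracketed form might look like this (a sketch, not a complete configuration); dropping the brackets ("**/*.js") would also pull each matched module's dependencies into the bundle:
"bundles": [
  {
    "name": "app-bundle.js",
    "source": ["[**/*.js]"]
  }
]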

Organizing a large number of classes under a Namespace

We are converting Windows applications to browser-based applications in .NET using Visual Studio 2010. While I'm not new to .NET, I am new to creating applications based on a 3-tier architecture. I want to create a class for each table in our database and put them under a namespace. The class name will be the same as the table name, so each class in the namespace would be referenced as something like CompanyName.ProductName.Entity.{table name}. Now for my questions:
Is this feasible?
Would I create a separate project for each table class? Note that new table classes will be created as more programs are converted.
Does this create one huge .dll?
How do other developers access the classes I created, and where are they stored so I can reference the classes with the Using directive?
Always create each class in its own .cs file, so it is easy to version the files. Whether you keep the classes in a single file or in multiple files has no bearing on the size of the DLL.
Create the folder structure like Project > ProductName > Classes in your solution.
Would I create a separate project for each table class?
No, don't create a separate project for each table class. That's too granular.
Does this create one huge .dll?
Each project creates a separate DLL by default (I believe you can use ILMerge to change that). However, namespaces do not have a direct relationship to DLLs; that is, you can have multiple namespaces in a single DLL.
What we've typically done is create a DAL library. This would be its own project, usually named something like ProductName.Data. Within that we might have namespaces like ProductName.Data.Models or ProductName.Data.Repositories.
Namespaces are largely used to help YOU organize the code. They also help the compiler: for example, if you have a database class named Users in XYZ.Data, you can still have a view model named Users as long as it's in a separate namespace, e.g. XYZ.ViewModels.
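A minimal illustration of that point (using the hypothetical XYZ namespaces from above):
namespace XYZ.Data
{
    // maps to the Users table
    public class Users
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}

namespace XYZ.ViewModels
{
    // shaped for the UI; no clash with XYZ.Data.Users
    public class Users
    {
        public string DisplayName { get; set; }
    }
}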
Another thing we've done is keep the root namespace the same across DLLs for the same product. So we recently had our database code in XYZ.Data, put our application-specific logic in a separate DLL named XYZ.AppLogic, and kept view models in the XYZ.ViewModels namespace.
I don't believe there are any hard-and-fast rules limiting the number of namespaces you can have. By default, Visual Studio will try to create a new namespace for each folder in your project. That said, I often try to avoid namespace overload, because I don't want to see something like this at the top of my files:
using XYZ.Data.Models.Accounts;
using XYZ.Data.Models.Users;
using XYZ.AppLogic.Authentication;
using XYZ.AppLogic.Users;
using XYZ.AppLogic.Settings;
using XYZ.ViewModels.UserPreferences;
However, that's more of a personal preference than anything else.
EDIT: Solution view
MySolution
  MyProj.Data
    Models
      User.cs
      Account.cs
      Settings.cs
    Repository
      UserRepository.cs
      AccountRepository.cs
User.cs is my POCO (Plain Ol' CLR Object) that defines the table.
The Repository folder has things specific to my ORM (I'm using PetaPoco) which let me actually access my user data.
For example, my UserRepository might have a method
public User GetById(int id)
{
    var db = new Database(<myConnectionStringName>);
    return db.SingleOrDefault<User>(id);
}
That syntax is specific to PetaPoco, but it is how I separate the data object from the actual DB connection.
The simple answer that I was looking for is to create one solution with my namespace as the default namespace. For each file, create a new class, and specify the namespace in each. Thanks to all who responded.
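For illustration, an entity class under that scheme might look like this (Customer stands in for an actual table name):
namespace CompanyName.ProductName.Entity
{
    // one class per database table, named after the table
    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }
}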

AS3 alternatives to *_exclude.xml files, loading class definitions dynamically

I've been banging my head against a wall over this for almost a couple of days now, and I'm hoping that someone can point me in the right direction.
Working in a very large Flash application, previously in AS2/CS3 I would have a setup like the following:
root.swf
-- modules
---- code_a.swf
---- code_b.swf
-- views
---- view_a.swf
---- view_b.swf
Using _exclude.xml files, I could exclude the classes defined in code_a and code_b from the output .swf of view_a and view_b. root.swf would be responsible for loading the code modules before view_a or view_b, ensuring that the class definitions view_a and view_b depended on existed.
The Problem
We've recently migrated to ActionScript 3/CS5, and *_exclude.xml files no longer exist. To get the same functionality as above, I've tried the following.
My setup now looks something like this:
root.swf
-- modules
---- class_a.as
---- class_b.as
-- views
---- view_a.swf
---- view_b.swf
Use mxmlc to compile root.swf, view_a.swf and view_b.swf, passing the -externs option to specify classes that will be loaded externally (the two classes in modules). This ensures that those classes are excluded from the compiled swf.
Use compc to compile class_a.as and class_b.as into classes.swf, using -directory=true to access library.swf for external loading.
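For illustration, the invocations would be roughly along these lines (a sketch; option spellings and fully qualified class names are assumptions and may need adjusting for your SDK):
mxmlc -externs modules.class_a modules.class_b -output views/view_a.swf views/view_a.as
compc -include-classes modules.class_a modules.class_b -source-path . -directory=true -output classes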
However, when I try running one of the two view files which depend on classes.swf, I get runtime errors telling me that a class definition is not present.
Current Workaround
I've devised a workaround which I'm not happy with, as it runs counter to the modular approach I was previously using:
Rather than loading the code modules, I statically link all class definitions required by the child movies into root.swf. When building root.swf, I use the -link-report option of mxmlc to produce a list of included classes. When building the child swfs, I can then use -load-externs to ensure that class definitions that already exist are not included in the compiled output.
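Roughly (a sketch; file names are illustrative):
mxmlc -link-report=report.xml -output root.swf root.as
mxmlc -load-externs=report.xml -output views/view_a.swf views/view_a.as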
Is there a way that anyone is aware of to replicate the AS2/_exclude.xml solution that I had using AS3/CS5?
I'd recommend compiling shared libraries to SWCs.
There are other options such as RSLs:
http://help.adobe.com/en_US/flex/using/WS2db454920e96a9e51e63e3d11c0bf674ba-7fff.html#WS2db454920e96a9e51e63e3d11c0bf69084-7f1e
Hope that helps.