Managing different project configurations in Dynamics AX

We have trouble keeping our different configurations apart. Allow me to explain with a little scenario:
Let's say you have two AX projects, e.g. P and M, that both alter the ProdRouteJob table, calling methods in their own project-specific classes.
You have all these classes on your developer machine, of course, and ProdRouteJob compiles fine, but for an installation on a new server, you don't want to add stub classes for every non-installed project, do you? So you wrap these calls to project classes in something like
if (Global::isConfigurationkeyEnabled(configurationKeyNum(<projectPConfKey>)))
{
    // call project P stuff
}
to encapsulate them cleanly. You declare configuration keys for all your projects, and you're ready to go, right?
However, this produces a compile error if you haven't installed project P on this machine, because its projectPConfKey is unknown. Now you could install configuration keys for all your projects at every installation, just to tell the server that there is such a thing as a projectPConfKey, but then all these ifs would evaluate to true...
In short: how do you include configuration keys in your project XPOs so that your code compiles, but some configuration keys remain disabled from the start?
Or is there something totally basic that I'm missing here?
EDIT:
Consensus among the answers (thank you, demas; thank you, Mr. Kjeldsen) is that it's impractical to attempt a more or less automatic client-side configuration via macros or configuration keys. Thus, when we are installing at the client, we will have to import the standard tables with our changes and manually take out all changes not pertinent to the current installation.
I am a bit at a loss which answer to accept, because demas answered the question I asked, while Mr. Kjeldsen answered the questions that arose in the comment conversation under demas' answer. I shall, with apologies to Mr. Kjeldsen, mark demas' answer as accepted.

if (Global::isConfigurationkeyEnabled(configurationKeyNum(<projectPConfKey>)))
This check works only at execution time; when you compile the code, you still get the error.
So you either need to create stub classes, or compare objects when you import them on the production server and import only the changes belonging to project M.
There is one other approach, but I don't have Axapta at hand to check it. Try:
#if.never
someMissingCall();
#endif
But I'm not sure that it still works.

Big solution centers (VAR) typically have separate applications for each module.
If you have 2 modules, you have 2 applications.
Good modules have very small collision areas with the standard application; these are kept in a separate project.
Collisions with the standard application and other modules are then handled manually at install time. As the collision area is small and rarely changes (it should not contain business logic), the problem is minimal.

Related

How to manage JSON-Schemas for multiple projects?

Suppose you have a schema that is used in a UI app (e.g. Vue), in a Node.js or Spring Boot server, and maybe in some microservices running on whatever, and that has to validate against databases (e.g. SQL, MongoDB, ...).
How and where do I manage this JSON Schema, so that if I have to change the schema for whatever reason, every architectural component can handle the new JSON Schema(s)?
Otherwise I need to update the schema in up to 10 projects so that none becomes incompatible.
Is it really as simple as having a git project full with just JSON-Schemas or do I need specific loaders for each language/environment?
Are there best practices that I am unaware of?
PS: I don't really think I need schemas automatically synchronized at runtime, so I don't think I need another microservice to achieve that.
That being said, if a microservice is the best way to go, then a microservice it is.
If you keep them in a git project, how do you load them? Clone the project each time the app starts? It may work, but I would go with a more flexible approach that shouldn't take too much effort to implement:
Build a JSON schema repository accessible via a REST API
When the app starts, it makes a request to grab the schema (latest, or a specific version)
That way you get a uniform (and scalable) way of working with the schemas. Even if you think about hot-reloading sometime in the future, you can leverage this approach to do that.
Here is an old project in this direction; you may give it a shot to see if it works (or use it for some inspiration, at least).
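As a minimal sketch of the fetch-at-startup step, assuming a hypothetical schema endpoint (schemas.example.com) and the Ajv validator library; the URL, schema name, and payload are placeholders, not part of the answer above:

import Ajv, { ValidateFunction } from "ajv";

// Fetch a schema from the (hypothetical) schema repository and compile it.
async function loadValidator(name: string, version = "latest"): Promise<ValidateFunction> {
  const res = await fetch(`https://schemas.example.com/schemas/${name}/${version}`);
  if (!res.ok) {
    throw new Error(`Failed to load schema ${name}: HTTP ${res.status}`);
  }
  return new Ajv().compile(await res.json());
}

// Usage: every component validates against the same centrally managed schema.
async function main() {
  const validateUser = await loadValidator("user");
  if (!validateUser({ id: 1, name: "Ada" })) {
    console.error(validateUser.errors);
  }
}
main().catch(console.error);

Each app (the Vue UI, the Node server, the microservices) can reuse the same small loader, which is what makes the approach uniform across components.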

NativeScript, Code Sharing and different environments

Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code sharing templates (see: @nativescript/schematics).
Now I am doing some exploration / PoC work on how different "build configurations" are supported by the framework. To be clear, I am searching for a simple -and hopefully official- way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/ios/android) and environment (development/production/staging?).
Doing the first part is obviously trivial - after all, that is the prime purpose of the code sharing schematics. So, different versions of the same file are identified by different extensions. This page explains things pretty simply.
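For illustration, the platform extension convention from the code-sharing template looks roughly like this (the component name is made up):

// home.component.ts        -> web build (ng serve)
// home.component.tns.ts    -> mobile build (tns run), selected by the .tns extension
// home.component.html      -> web template
// home.component.tns.html  -> mobile template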
What I don't get as easily is if the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or even better development/staging/production) versions of a file. Think for example of a config.ts file that contains different parameters based on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters during the call to tns build / tns run and then fetching them via a webpack env variable... See here. This may work, but seems oddly convoluted
third option that gets mentioned is to use hooks to customize the build (or use a plugin that should do the same)
lastly, for some odd reason, @nativescript/schematics seems to generate a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve) - I wasn't able to get the mobile compiler to recognize files that end with debug.ts, prod.ts or release.ts
While it may be possible that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something... somewhere.
In case this IS somehow supported, I also wonder how it may integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only options available are switching between debug and release mode), but this is probably better left for another question.
Environment files are not yet supported; passing environment variables from the build command could be the viable solution for now.
But of course, you may write your own schematics if you'd like immediate support for environment files.
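A rough sketch of what that could look like, assuming a hypothetical --env.environment flag and per-environment files next to environment.ts; this fragment would have to be merged into the project's generated webpack.config.js and is not an official NativeScript convention:

// webpack.config.js (fragment)
const webpack = require("webpack");

module.exports = (env = {}) => {
  // e.g. tns build android --bundle --env.environment=staging
  const environment = env.environment || "dev";
  return {
    plugins: [
      // Rewrite every import of environment.ts to environment.<environment>.ts
      new webpack.NormalModuleReplacementPlugin(/environment\.ts$/, (resource) => {
        resource.request = resource.request.replace(
          /environment\.ts$/,
          `environment.${environment}.ts`
        );
      }),
    ],
  };
};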
I did not look into sharing environment files between web and mobile yet - I do like Manoj's suggestion regarding modifying the schematics, but I'll have to cross that bridge when I get there, I guess. I might have an answer to your second question regarding Sidekick. The latest version does support a "Webpack" build option, which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid some incompatibility between it and the version of TypeScript that Sidekick's cloud solution is using. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!

What is "vendoring"?

What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer, where it is defined for Go as:
Vendoring is the act of making your own copy of the 3rd party packages
your project is using. Those copies are traditionally placed inside
each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
If your app depends on certain third-party code to be available you could declare a dependency and let your build system install the dependency for you.
If, however, the source of the third-party code is not very stable, you could "vendor" that code. You take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but you want to change it a little bit (a fork in other words). You can copy the code, change it, release it internally and then let your build system install this piece of code.
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for Node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The committing act is important. As an example, node_modules are already installed in your project but only committing them makes them "vendored". Almost nobody does that for node_modules but e.g. PnP + Zero Installs of Yarn 2 are actually built around vendoring – you commit .yarn/cache with many ZIP files into the repo.
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding the often forked version of a dependency.
This typically involves static linking or some other copy but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on err the blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between own code and untouched snapshots of dependencies.
The goal of vendoring is more reproducibility, better deployment, the kind of things people currently use containers for; as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtlety in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.

How to prevent tclIndex collisions?

If multiple tcl scripts are running in the same directory, they can crash if one tries to auto_mkindex at the same exact time as another.
How can I prevent this properly? I do not want to just place catch around auto_mkindex, nor do I want to implement a semaphore system for this simple problem.
Why would you be building the tclIndex files at the same time in the first place? That's a step that I would expect as part of installation (i.e., something done once as a special action) and not as part of operation (i.e., many times, in parallel potentially). If it's part of installation, it's entirely your own problem if you try to run the code while you're installing it.
I also wouldn't tend to use tclIndex for anything shared between applications, as that's optimized for simple scripts. Shared components are better off made into packages, especially as they're versioned entities, and they have their own indexing mechanism (the pkgIndex.tcl). (Having the same version of the same package installed twice in such a way that things interfere… well, that wouldn't be sensible, would it?)
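To make the package route concrete, here is a minimal Tcl sketch (the path /opt/mylib and the package name mylib are placeholders): build the package index once at install time, then have each script require the versioned package instead of sharing a tclIndex:

# At install time (once, as a special action);
# each .tcl file in /opt/mylib declares: package provide mylib 1.0
pkg_mkIndex /opt/mylib *.tcl    ;# writes /opt/mylib/pkgIndex.tcl

# In each application script:
lappend auto_path /opt/mylib
package require mylib 1.0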

How does one make a self-contained package / library of functions

I am writing some support code in the common subset of Matlab/Octave, which comes in the form of a bunch of functions. Let's call it a package.
I want to be able to organize the package, i.e.,
put all the relevant function files in a single place, where users
are not supposed to store their code;
have some internal organization ('subpackages');
prevent namespace pollution;
have some mechanism for user code to 'import' parts of the package;
I don't necessarily want all functions I provide to be
visible from user clients.
On the Matlab side of things, this functionality is pretty much provided by package directories and the 'import' mechanism. This functionality doesn't appear to be available in Octave though (as of 3.6.1).
Given that, I wonder what options remain for organizing my support code package in Octave.
The option of putting everything in a directory and just having the user code do an ADDPATH feels rather unrefined, and doesn't give the level of control I want -- it only addresses point #1 of the list above.
There is plenty of documentation here and there are examples in Octave-Forge. Just browse the SVN.
There are also personal packages all around. For example, this one.
Happy coding!
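As a hedged sketch of the Octave pkg mechanism those packages use (all names below are placeholders): a package bundles a DESCRIPTION file, an INDEX file for internal grouping, public functions under inst/, and helpers under inst/private/, which are callable only from functions in the directory above them; that covers the single-place, subpackage, and visibility points from the question:

% Layout of the source tree, packed as mytools-1.0.0.tar.gz:
%   mytools/DESCRIPTION    % name, version, dependencies
%   mytools/INDEX          % groups functions into categories
%   mytools/inst/*.m       % public functions
%   mytools/inst/private/  % internal helpers, invisible to user code
pkg install mytools-1.0.0.tar.gz   % one-time install, kept off the default path
pkg load mytools                   % user code 'imports' the package
pkg unload mytools                 % and can drop it again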