NativeScript, Code Sharing and different environments - configuration

Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code-sharing templates (see: @nativescript/schematics).
Now I am doing some exploration / PoC work on how different "build configurations" are supported by the framework. To be clear, I am looking for a simple (and hopefully official) way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/iOS/Android) and environment (development/production/staging?).
Doing the first part is obviously trivial - after all, that is the prime purpose of the code-sharing schematics. So, different versions of the same file are identified by different extensions. This page explains things pretty simply.
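To make the convention concrete, a platform-split config could look like this (the file contents are just placeholders; the right file is resolved at build time when you import './config'):

    // config.ts - picked up by the web (Angular CLI) build
    export const config = { apiUrl: 'https://example.com/api' };

    // config.tns.ts - picked up instead by the NativeScript (tns) build
    export const config = { apiUrl: 'https://example.com/api' };

    // config.android.ts / config.ios.ts - optional platform-specific variants
    export const config = { apiUrl: 'https://example.com/api' };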
What I can't figure out as easily is whether the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or, even better, development/staging/production) versions of a file. Think for example of a config.ts file that contains different parameters depending on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters during the call to tns build / tns run and then fetching them via a webpack env variable (see here). This may work, but seems oddly convoluted.
a third option that gets mentioned is to use hooks to customize the build (or to use a plugin that should do the same)
lastly, for some odd reason, @nativescript/schematics seems to generate a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve) - I wasn't able to get the mobile compiler to recognize files ending in .debug.ts, .prod.ts or .release.ts.
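For what it's worth, those two files seem to be wired together by the Angular CLI's fileReplacements mechanism, which only applies to the web build: ng build --prod swaps the file, while the mobile webpack build doesn't read angular.json at all, which would explain the behaviour above. Assuming a default angular.json, the relevant bit looks roughly like this:

    "configurations": {
      "production": {
        "fileReplacements": [
          {
            "replace": "src/environments/environment.ts",
            "with": "src/environments/environment.prod.ts"
          }
        ]
      }
    }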
While it may be possible that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something... somewhere.
In case this IS somehow supported, I also wonder how it may integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only options available are switching between debug/release mode), but that is probably better left for another question.

Environment files are not yet supported; passing environment variables from the build command could be a viable solution for now.
But of course, you may write your own schematics if you would like immediate support for environment files.
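If it helps, here is a rough sketch of the environment-variable route (the staging flag and file names are made up for the example, and this assumes the webpack.config.js generated by nativescript-dev-webpack, which exports a function receiving env):

    // webpack.config.js - invoked e.g. via: tns run android --bundle --env.staging
    const webpack = require('webpack');

    module.exports = (env) => {
      const config = buildGeneratedConfig(env); // stand-in for the configuration the template generates

      if (env && env.staging) {
        config.plugins.push(
          // swap environment.ts for a hypothetical environment.staging.ts at build time
          new webpack.NormalModuleReplacementPlugin(
            /environment\.ts$/,
            './environment.staging.ts'
          )
        );
      }

      return config;
    };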

I haven't looked into sharing environment files between web and mobile yet - I do like Manoj's suggestion about modifying the schematics, but I'll have to cross that bridge when I get there, I guess. I might have an answer to your second question regarding Sidekick. The latest version does support a "Webpack" build option, which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid an incompatibility between it and the version of TypeScript that Sidekick's cloud solution is using. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!

Related

What is "vendoring"?

What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer
Defined here for Go as:
Vendoring is the act of making your own copy of the 3rd party packages
your project is using. Those copies are traditionally placed inside
each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
If your app depends on certain third-party code being available, you could declare a dependency and let your build system install it for you.
If, however, the source of the third-party code is not very stable, you could "vendor" that code: you take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but you want to change it a little bit (a fork in other words). You can copy the code, change it, release it internally and then let your build system install this piece of code.
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary - all devs on the project would use this exact version. This is not commonly done for Node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The committing act is important. As an example, node_modules are already installed in your project, but only committing them makes them "vendored". Almost nobody does that for node_modules, but e.g. PnP + Zero-Installs in Yarn 2 are actually built around vendoring - you commit .yarn/cache, with many ZIP files, into the repo.
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding the often forked version of a dependency.
This typically involves static linking or some other copy but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on err the blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between own code and untouched snapshots of dependencies.
The goal of vendoring is more reproducibility and better deployment (the kind of things people currently use containers for), as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtlety in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.

Configure applications using environment variables

12-Factor Apps suggest that you configure your application using environment variables. So far, so good. I can easily imagine that this is a good way to do it if you need to set, e.g., a connection string.
But what if you have more complex configuration with lots and lots of values? I for sure do not want to have 50+ environment variables, do I?
How could I solve this and still be compliant with the idea of 12-Factor Apps?
From a quick read of the "configure" link you provided, I agree with the author's claim that there is a widespread problem, but I am not convinced that their proposed solution will always be best. Like you, I don't relish the idea of having to define dozens of environment variables to configure an application. So here are some alternative ideas.
First, read Chapter 2 of the Config4* Getting Started Guide (disclaimer: I am the main author of that software). In particular, notice that its support for what I call adaptive configuration can go a long way towards addressing the concern that you ask about. Is Config4* the ultimate solution? Possibly not, but I think it is a good step in the right direction.
Second, the chances are that whatever application you are developing/maintaining has already settled on a particular configuration technology, such as XML files or Java property files, and it won't be feasible to migrate to using Config4*. This raises the question: is there anything you can do to avoid having a proliferation of, say, XML-based configuration files when you have multiple environments (such as dev, UAT, staging and production) in which the application will be deployed? I have outlined an approach for dealing with this issue in another StackOverflow article.
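As an aside, a middle ground that is sometimes used (a minimal sketch with made-up file and variable names, not specific to Config4*) is to keep the structured defaults in a checked-in file and override only the handful of deployment-specific values from the environment, which keeps the variable count low while staying close to the 12-factor idea:

    import { readFileSync } from 'fs';

    // Structured defaults live in a checked-in file (hypothetical name);
    // only deployment-specific values come from the environment.
    const defaults = JSON.parse(readFileSync('config.defaults.json', 'utf8'));

    export const config = {
      ...defaults,
      dbConnectionString: process.env.DB_CONNECTION_STRING ?? defaults.dbConnectionString,
      logLevel: process.env.LOG_LEVEL ?? defaults.logLevel,
    };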

Why config files shouldn't be changed line-by-line with Chef / Puppet?

Why is changing lines in a configuration file considered an anti-pattern in Chef or Puppet?
As I understand it, it's considered something of a bad habit. I assume that this file editing is done in some idempotent way and with advanced tools (Augeas, for example).
Why is deploying whole files, with ERB templates, considered the preferred method?
You can find a lot of examples where DevOps folks suggest using templates instead of file editing - for example here, here, here, etc.
Actually, a large part of the DevOps community sees accepting system/package defaults for config files and only modifying what you need through Augeas as the preferred method; GitHub's ops team would be one of them (if you happened to catch them at PuppetConf 2012).
I think having a default pattern of always using templates creates too high a maintenance load, and it almost always requires you to lock in specific versions for everything across your stack, or you risk having a template that is incompatible with a newer version of that resource.
There are use cases for both options, but in general I favor the "own as little as possible" practice over the "own everything even if you don't have to" practice.
In terms of setting your system to a known state, deploying whole files is better than editing, because you are sure the file is exactly as intended when you are done.
If you are tinkering around looking for potential solutions to a problem and hand-edit some configuration file, you don't have to worry about that hand edit staying around as an uncontrolled part of your environment. The next time you run chef-client, you know that the state will be exactly as specified in the Chef recipe and won't include your edit.
Also, it is in general just harder and more complicated to robustly edit a file than to generate one. You might write something that is idempotent in the basic case, but if the file contains a syntax error or something invalid, then your editing no longer works.
As always though, sometimes you don't have a choice, and editing is the only way to go.

Simplest way to add XML doc to a WinRT project

We have a group of developers moving from C++ to C# and WinRT. We used Doxygen as part of our C++ developer builds, and I'd like to continue to have documentation generation as part of the developer build in C#/WinRT.
It's easy to turn on XML Doc generation, and I believe that will provide warnings for malformed tags, but without actual HTML output, I think our developers will be missing valuable feedback.
Looks like NDoc is now defunct, and I took a quick look at Sandcastle, but found it rather complex. Ideally, I'm looking for something that doesn't unduly burden developers, or require them to remember extra steps as they edit, build, test, and commit. In other words, the best solution would be something that "just happens", like a post-build step, and doesn't add significantly to each developer's build time.
If anyone has had some experience doing this in C#/WinRT, I'd sure like some advice.
Thanks in advance!
Get Sandcastle Help File Builder.
Create a help project for your library in the Visual Studio solution.
Remove the Build check mark from the Debug solution configuration so that the documentation project is built only in Release configurations, since Debug is the one most often used during development. For release-build testing or performance testing, you can either create another solution configuration or simply switch the option back and forth.
Build the documentation once
Include the documentation file in the solution so it shows up in the Pending Changes window when the file changes.
Kindly ask your developers to build with the release configuration (which updates the documentation) before check-in, or use any other policy to require that the documentation be updated.
I don't think it makes sense to build the documentation all the time, but it helps to make it easy to do so that when you actually need an updated version - you can build it really quickly.
You can also make sure to use FxCop or StyleCop (I forget which) and configure it to treat missing XML documentation warnings as errors - at least in release builds. Doing it for debug configurations might slow down development and make changes difficult, since developers often want to try things out before committing to a final implementation worth documenting.
EDIT: Sandcastle provides various output formats, as shown in the project properties.
I would like to mention ForgeDoc (of which I'm the developer); it could be what you are looking for. It is designed to be fast and simple, and it generates proper MSDN-like HTML output. It also has a command-line interface, so you can just call it from a post-build event command in Visual Studio.
I think you should give it a try, as I would really like to hear about your thoughts.

How to display credits

I want to give credit to all the open-source libraries we use in our (commercial) application. I thought of showing an HTML page in our about dialog. Our build process uses Ant, and the third-party libs are committed in SVN.
What do you think is the best way of generating the HTML-Page?
Hard-code the HTML page?
Switch dependency management to Apache Ivy and write some Ant task to generate the HTML
Use maven-ant-tasks and write some Ant task to generate the HTML
Use Maven only once, to handle the dependencies and the HTML: download them and commit them; the rest is done by the unchanged Ant scripts
Switch to Maven 2 (Hey boss, I want to switch to Maven; in 1 month the build may work again...)
...
What elements should the about-dialog show?
Library name
Version
License
Author
Homepage
Changes made, with a link to the source archive
...
Is there any best-practice advice? Any good examples (applications with a nice about dialog showing the dependencies)?
There are two different things you need to consider.
First, you may need to identify the licenses of the third-party code. This is often done with a THIRDPARTYLICENSE file. Sun Microsystems does this a lot. Look in the install directory for OpenOffice.org, for example. There are examples of .txt and .html versions of such files around.
Secondly, you may want to identify your dependencies in the About box in a brief way (and also refer to the file of license information). I would make sure the versions appear in the About box. One thing people want to quickly check for is an indication of whether the copy of your code they have needs to be replaced or updated because one of your library dependencies has a recently-disclosed bug or security vulnerability.
So I guess the other thing you want to include in the about box is a way for people to find your support site and any notices of importance to users of the particular version (whether or not you have a provision in your app for checking on-line for updates).
An Ant task seems to be the best way. We do a similar thing in one of our projects. All the open-source libraries are present in a specified folder. An Ant task reads the manifests of these libraries, versions and so on, generates an HTML page, and copies it into another specified folder, from where it is picked up by the web container.
Generating the page with each build would be wasteful if the libraries are not going to change often - library versions may change, but the actual libraries don't. Hand-writing an HTML page would be the easiest way out, but that's one more maintenance headache. Generate it once and include it with the package. The script can always be run again in case changes are made to the libraries (updating versions, adding new libraries).