Separate author & public Maven projects, or the standard Magnolia bundle?

What are the pros and cons of a custom bundle with separate author and public webapps versus the standard bundle with a single complete webapp (magnoliaAuthor) that is then partially copied to create the second webapp (magnoliaPublic)?
Assume a typical setup, for example a web shop with many integrations. Characteristics to consider: security, clarity/neatness of the project structure, and ease of development (code reuse).

These are the best practices from the Magnolia documentation.

PRO: Separate webapps are more appropriate if you want to install some modules on the author instance only, either because they are relevant only to authors or because they are perceived as security-critical and not desired on the public instance.
CON: If different WARs are genuinely required for the two environments then you have to use that approach, but it is not ideal for a typical setup. You can also prevent a module from starting on a public instance by checking the instance-type flag: every module has a module class in which you control how and when the module is started.
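As a rough sketch of that flag check (assuming Magnolia's ModuleLifecycle API and ServerConfiguration class as found in Magnolia 4.x/5.x; adjust to your version):

import info.magnolia.cms.beans.config.ServerConfiguration;
import info.magnolia.module.ModuleLifecycle;
import info.magnolia.module.ModuleLifecycleContext;

public class MyShopModule implements ModuleLifecycle {

    @Override
    public void start(ModuleLifecycleContext ctx) {
        // isAdmin() reflects the magnolia.bootstrap.authorInstance flag
        // set per instance in magnolia.properties.
        if (!ServerConfiguration.getInstance().isAdmin()) {
            return; // public instance: skip author-only wiring
        }
        // author instance: register author-only services here
    }

    @Override
    public void stop(ModuleLifecycleContext ctx) {
        // tear down whatever start() registered
    }
}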
It's not necessary to bloat the package to get author and public instances. The standard is a single WAR with multiple configurations. Starting from scratch with an empty Magnolia bundle, you get this setup out of the box; to get separate author and public Maven projects, you would have to duplicate the empty webapp project and customize each copy.
Relevant docs:
War file with multiple configurations
Magnolia best practices from a SysAdmin point of view "should resist the temptation of using multiple war deployables, use one war with more config."
Example for separate maven modules


Bundle assets with libGDX dependency

I'm making a card game engine on top of libGDX for many similar games I plan to make. Here's how I plan to structure this: each game is a separate project and the engine is a dependency added to the core module. The engine itself will have a lot of assets like card sprites and other UI elements, and they need to be included too.
How can I make that structure work? Is there any way to make a dependency include its assets? The alternative is to duplicate all assets for each game, which doesn't seem very efficient. Also, the assets live in the android module by default, which the engine dependency doesn't have (the engine is a single module). Where do I put the assets in the engine module?
We have a setup that seems similar to what you've outlined above with a "many to one" relationship of projects to assets. Here's a potential way to go about it.
The basic idea is:
Have a single, authoritative assets folder
Have individual projects copy this folder to their build output at build time
Accomplish this by having a project's compile task dependsOn or be finalizedBy a copy task.
Ensure that the Android and other projects are happy by copying the assets to the place where libgdx's internal file APIs look for that particular type of project. (For example, android projects automatically get an assets/ prepended to the URI passed to Gdx.files.internal().) This step depends on your particular file structure, so it may take a little tweaking to get the paths right for all projects, but don't get discouraged!
Side-note: Gradle should automatically track whether or not the assets dir actually changed. If nothing's been updated, then the copy tasks will effectively become no-ops, which speeds up the build quite a bit for non-first runs. Obviously if you do a cleanAssets like I mention below, then this won't apply.
The advantage of this approach (to me anyway) is that it no longer relies on cross-project links or funky classpath manipulation. It's just real files in real directories. The downside is that it increases the disk space used because there can be multiple physical copies of the assets in the various projects.
The following is not a complete example, but should hopefully give you enough to go by.
Example of a copy task in action. This particular one takes an assets dir from a "core" project and copies it into an android project.
android/build.gradle
task copyAssets(type: Copy) {
from "../core/assets"
into "./assets"
}
Example of how to make the android project's build depend on this task:
android/build.gradle
afterEvaluate { project ->
project.tasks.preDebugBuild {
dependsOn copyAssets
}
project.tasks.preReleaseBuild {
dependsOn cleanAssets
finalizedBy copyAssets
}
}
You'll notice that in preReleaseBuild I added a cleanAssets task as well. It's always a good idea to clean out any junk and do a fresh copy during a production build. cleanAssets is just a basic Delete task, sketched below.
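A minimal version might look like this (assuming the same android/assets target directory as above):
android/build.gradle
task cleanAssets(type: Delete) {
    // remove previously copied assets so a release build starts clean
    delete "./assets"
}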
Example of a copy task dependency for a non-android project:
build {
finalizedBy copyAssets
}
If you're still stuck, let me know where and I'll try to help.
Do it like libgdx does itself. There are assets included on the classpath, like arial-15.fnt, which lives in the core project at gdx/src/com/badlogic/gdx/utils/. Take a look at BitmapFont's no-argument constructor to see how it is referenced.
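A minimal sketch of that approach (the path and file name are hypothetical; the point is that Gdx.files.classpath() reads resources bundled inside the engine's JAR, e.g. placed under src/main/resources in the engine module):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;

public class EngineAssets {
    // Loads a sprite that ships inside the engine JAR. Call this after
    // libGDX is initialized, e.g. from ApplicationAdapter.create().
    public static Texture loadCardBack() {
        return new Texture(Gdx.files.classpath("engine/assets/card_back.png"));
    }
}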

Can Grav's asset manager fully replace the need for gulp.js?

GravCMS has its own asset manager: https://learn.getgrav.org/themes/asset-manager
It can minify and concatenate JS/SCSS files.
Is it better to do such tasks via gulp (gulp-sass, gulp-concat, gulp-uglify) rather than using the built-in asset manager in Grav?
Well, I don't have any experience with the Grav asset manager, but since you want to do Grav-related tasks, I'd recommend using the built-in manager as long as it covers all your needs.
I don't see any benefit in adding an extra dependency on gulp/grunt or any other build tool.
The only time I'd consider using gulp is if you already have an existing project with a gulp configuration that you can reuse, and it involves complex logic.
Most likely, though, you just want to minify/concatenate files, so the built-in asset manager is the way to go.

Correct design using dependency inversion principle across modules?

I understand dependency inversion when working inside a single module, but I would like to also apply it when I have a cross-module dependency. In the following diagrams I have an existing application, and I need to implement some new requirements for reference data services. I thought I would create a new jar (potentially a stand-alone service in the future). The first figure shows the normal way I have approached such things in the past: the referencedataservices jar has an interface which the app uses to invoke it.
The second figure shows my attempt to use DIP: the app now owns its abstraction, so it is not subject to change just because the reference data service changes. This seems to be a wrong design, though, because it creates a circular dependency: MyApp depends on the referencedataservices jar, and the referencedataservices jar depends on MyApp.
So the third figure gets back to the more normal dependency direction by creating an extra layer of abstraction. Am I right? Or is this really not what DIP was intended for? I'm interested in hearing about other approaches or advice.
The second example is on the right track by separating the implementation from its abstraction. To achieve modularity, a concrete class should not be in the same package (module) as its abstract interface.
The fault in the second example is that the client owns the abstraction, while the service owns the implementation. These two roles must be reversed: services own interfaces; clients own implementations. In this way, the service presents a contract (API) for the client to implement. The service guarantees interaction with any client that adheres to its API. In terms of dependency inversion, the client injects a dependency into the service.
Kirk K. is something of an authority on modularity in Java. He had a blog that eventually turned into a book on the subject. His blog seems to be missing at the moment, but I was able to find it in the Wayback Machine. I think you would be particularly interested in the four-part series titled Applied Modularity. In terms of other approaches or alternatives to DIP, take a look at Fun With Modules, which covers three of them.
In the second approach you presented, if you move the RefDataSvc abstraction to a separate package, you break the cycle: the referencedataservices package then depends only on the package containing the RefDataSvc abstraction.
Code in the MyApp package other than the Composition Root should also depend only on RefDataSvc. In the Composition Root of your application you then compose all the dependencies your app needs.
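A minimal sketch of that layout (module, package, and class names other than RefDataSvc are illustrative, not from the original diagrams):

// refdata-api module: owns only the abstraction.
package com.example.refdata.api;

public interface RefDataSvc {
    String lookup(String code);
}

// refdata-impl module: depends only on refdata-api.
package com.example.refdata.impl;

import com.example.refdata.api.RefDataSvc;

public class DefaultRefDataSvc implements RefDataSvc {
    @Override
    public String lookup(String code) {
        return "value-for-" + code; // real lookup would go here
    }
}

// myapp module, Composition Root: the only place that sees the implementation.
package com.example.myapp;

import com.example.refdata.api.RefDataSvc;
import com.example.refdata.impl.DefaultRefDataSvc;

public class Main {
    public static void main(String[] args) {
        RefDataSvc refData = new DefaultRefDataSvc();
        // everything else in MyApp is written against RefDataSvc only
    }
}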

What is "vendoring"?

What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer, defined for Go as:
Vendoring is the act of making your own copy of the 3rd party packages your project is using. Those copies are traditionally placed inside each project and then saved in the project repository.
The context of that answer is the Go language, but the concept still applies.
If your app depends on certain third-party code being available, you can declare a dependency and let your build system install that dependency for you.
If, however, the source of the third-party code is not very stable, you can "vendor" it: you take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is that you want to use certain third-party code but change it a little (a fork, in other words). You can copy the code, change it, release it internally, and then let your build system install this piece of code.
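To make this concrete in Go terms (assuming a project using Go modules, where the go mod vendor command performs exactly this copying; the dependency path is hypothetical), a vendored project looks roughly like this:

myproject/
    go.mod
    main.go
    vendor/
        modules.txt
        github.com/some/dependency/...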
Vendoring means putting a dependency into your project folder (as opposed to depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing the result would "vendor" the Node.js binary: all devs on the project would use this exact version. This is not commonly done for Node itself, but Yarn 2 ("Berry"), for example, is used like this (and only like this; they don't even install the binary globally).
The act of committing is important. node_modules, for example, are already installed inside your project, but only committing them makes them "vendored". Almost nobody does that for node_modules, but PnP + Zero-Installs in Yarn 2 are actually built around vendoring: you commit .yarn/cache, with many ZIP files, into the repo.
"Vendoring" inherently trades repo size (longer clone times, more data transferred, local storage requirements, etc.) against reliability and reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding an (often forked) version of a dependency.
This typically involves static linking or some other form of copying, but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that vendor openly; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb that I found is the vendor everything post on Err the Blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between own code and untouched snapshots of dependencies.
The goal of vendoring is reproducibility and better deployment (the kind of thing people currently use containers for), as well as better transparency through source control.
Other languages seem to have picked up the concept as-is. One related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and to detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
From the vendor everything post:
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtly in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.

Best practices for version information?

I am currently working on automating/improving the release process for packaging my shop's entire product. Currently the product is a combination of:
Java server-side codebase
XML configuration and application files
Shell and batch scripts for administrators
Statically served HTML pages
and some other stuff, but that's most of it
All or most of which have various versioning information contained in them, used for varying purposes. Part of the release packaging process involves doing a lot of finding, grep'ing and sed'ing (in scripts) to update the information. This glue that packages the product seems to have been cobbled together in an organic, just-in-time manner, and is pretty horrible to maintain. For example, some Java methods create Date objects for the time of release, the arguments for which are updated by a textual replacement, without compiler validation... just, urgh.
I'm trying to avoid giving examples of the actual software used (i.e. CVS, SVN, ant, etc.) because I'd like to avoid "use xyz's feature to do this" answers and concentrate on general practices. I'd like to blame shoddy design for the problem, but if I had to start again, still using varying technologies, I'd be unsure how best to go about handling this beyond laying down conventions.
My question is, are there any best practices or hints and tips for maintaining and updating version information across different technologies, file types, platforms and version control systems?
Create a properties file that contains the version number and have all of the different components reference the properties file:
Java code can read the properties through java.util.Properties or a ResourceBundle (see the sketch after this list)
XML can presumably pull the value in via an include mechanism (e.g. an external entity or XInclude)
HTML pages can use JavaScript to write the version number from the properties into the page
Shell scripts can read the file directly
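A minimal sketch of the Java side (the file name version.properties and the key product.version are hypothetical):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class VersionInfo {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // version.properties is the single authoritative source, e.g.:
        // product.version=1.2.3
        try (FileInputStream in = new FileInputStream("version.properties")) {
            props.load(in);
        }
        System.out.println("Version: " + props.getProperty("product.version"));
    }
}

If the keys are chosen to be valid shell identifiers (e.g. PRODUCT_VERSION rather than product.version), shell scripts can even source the same file directly.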
Indeed, to complete Craig Angus's answer, the rule of thumb here should be to not embed meta-information in your normal delivery files, but to record that metadata (version number, release date, and so on) in one special file included in the release.
That helps when you use one VCS (version control system) tool from development through homologation (acceptance testing) to pre-production.
That means that whenever you load a workspace (whether for developing, testing, or preparing a release for production), it is the versioning tool that gives you all the details.
When you prepare a delivery (a set of packaged files), you should ask that VCS tool for every piece of meta-information you want to keep, and write it into the special file that is itself included in the delivered set of files.
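For instance, that special file might look like this (purely illustrative key names and values):

product.name=MyShopServer
product.version=2.3.1
release.date=2010-06-15
vcs.revision=4711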
That delivery should be packaged in an external directory (outside any workspace) and:
copied to a shared directory (or a Maven repository) if it is a non-official release (just a quick package to help the team next door who is waiting for your delivery). That way you can make 10 or 20 deliveries a day; it does not matter, since they are easily disposable.
imported into the VCS in order to serve as an official delivery, and in order to be deployed easily, since all you need is to ask the versioning tool for the right version of the right delivery, and you can begin to deploy it.
Note: I just described a release management process mostly used for many inter-dependent projects. For one small single project, you can skip the import into the VCS tool and store your deliveries elsewhere.
In addition to Craig Angus's points, include the versions of the tools used.