ActionScript 3.0 cross-project folder/package structure best practice

I'm currently looking at structuring my team's projects in a consistent manner that properly utilises packages and is easily version-controlled (via SVN).
I'm interested in any 'best practice' advice with regards to project structuring, and how to use consistent packaging without lumping everything into a gigantic com.domainname.projects folder structure whilst still maintaining that package structure. I'm also keen to use the src/bin/lib folder structure within each project.
I guess I'm asking 'how do you do it?' and 'why?'. Sorry if this is a bit abstract for Stack Overflow, but you guys give the best answers I've found.

You might need to provide more information for a "complete" answer, like your shop size, number of active projects, how you build them, and how you reuse code.
Here are some insights anyway... hope I understood the direction of your question :)
If your projects are coupled and you release new versions of each reusable artifact frequently, a single SVN repo with a folder for each project should do it. Tags/branches should remain at the project level.
Having separate repos for code that is linked may become a maintainability headache.
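For illustration, a single repo with tags/branches kept at the project level might be laid out like this (project names are made up):

repo/
    ProjectA/
        trunk/
        branches/
        tags/
    ProjectB/
        trunk/
        branches/
        tags/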

Package1\
    src\com\domainname\Package1\
        (source files)
Package2\
    src\com\domainname\Package2\
        (source files)
Package3\
    src\com\domainname\Package3\
        (source files)
Lib\
    (all packages export SWCs into this folder)
The problem with re-usability is backwards compatibility -- sometimes changing something in the core lib will break all sorts of dependent code, so I suggest running builds against it to create one large SWC with all the packages in it.
I don't know why you're keen to use the bin\lib folder -- it makes more sense to have projects point at things explicitly rather than having parent projects export into children ...

Related

Bundle assets with libGDX dependency

I'm making a card game engine on top of libGDX for many similar games I plan to make. Here's how I plan to structure this: each game is a separate project and the engine is a dependency added to the core module. The engine itself will have a lot of assets like card sprites and other UI elements, and they need to be included too.
How can I make that structure work? Is there any way to make a dependency include its assets? The alternative is to duplicate all assets for each game which I don't think is very efficient. Also the assets are in the android module by default, which the engine dependency doesn't have (the engine is a single module). Where do I put the assets in the engine module?
We have a setup that seems similar to what you've outlined above with a "many to one" relationship of projects to assets. Here's a potential way to go about it.
The basic idea is:
Have a single, authoritative assets folder
Have individual projects copy this folder to their build output at build time
Accomplish this by having a project's compile task dependsOn or be finalizedBy a copy task.
Ensure that the Android and other projects are happy by copying the assets to the place that libGDX's internal File APIs look for that particular type of project (for example, Android projects automatically get an assets/ prepended to the URI provided to Gdx.files.internal()). This step is more dependent on your personal file structure, so it may take a little tweaking to get the pathing right for all projects, but don't get discouraged!
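To make the pathing concrete, here is roughly how an internal path resolves (the file name is made up):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;

// on Android this resolves against the android project's assets/ folder;
// on desktop it resolves against the working directory, which is why the
// copy step from the list above has to put the assets where each backend expects them
Texture cardBack = new Texture(Gdx.files.internal("cards/back.png"));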
Side-note: Gradle should automatically track whether or not the assets dir actually changed. If nothing's been updated, then the copy tasks will effectively become no-ops, which speeds up the build quite a bit for non-first runs. Obviously if you do a cleanAssets like I mention below, then this won't apply.
The advantage of this approach (to me anyway) is that it no longer relies on cross-project links or funky classpath manipulation. It's just real files in real directories. The downside is that it increases the disk space used because there can be multiple physical copies of the assets in the various projects.
The following is not a complete example, but should hopefully give you enough to go by.
Example of a copy task in action. This particular one takes an assets dir from a "core" project and copies it into an android project.
android/build.gradle
// copy the authoritative assets folder from the core project
// into the android project's local assets directory
task copyAssets(type: Copy) {
    from "../core/assets"
    into "./assets"
}
Example of how to make the android project's build depend on this task:
android/build.gradle
afterEvaluate { project ->
    // debug builds only need the assets copied into place
    project.tasks.preDebugBuild {
        dependsOn copyAssets
    }
    // release builds wipe the old copy first, then copy fresh
    project.tasks.preReleaseBuild {
        dependsOn cleanAssets
        finalizedBy copyAssets
    }
}
You'll notice in the preReleaseBuild I added a cleanAssets task as well. It's always a good idea to clean up any junk and do a fresh copy during a production build. cleanAssets is just a basic Delete task.
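A minimal sketch of that task, assuming the same paths as above:

// remove the previously copied assets so the release build starts clean
task cleanAssets(type: Delete) {
    delete "./assets"
}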
Example of a copy task dependency for a non-android project:
// whenever the build task runs, follow it up with a fresh asset copy
build {
    finalizedBy copyAssets
}
If you're still stuck, let me know where and I'll try to help.
Do it like libGDX does itself. There are assets included in the classpath, like arial-15.fnt, which is located in the core project at gdx/src/com/badlogic/gdx/utils/. Take a look at BitmapFont's no-arg constructor to see how it is referenced.
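For illustration, referencing a classpath asset bundled inside a dependency jar looks roughly like this (the arial-15 files really do ship inside the gdx jar; the rest is a sketch):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.graphics.g2d.BitmapFont;

// classpath files live inside the jar itself, not in a project's assets/ folder
FileHandle fontFile = Gdx.files.classpath("com/badlogic/gdx/utils/arial-15.fnt");
FileHandle imageFile = Gdx.files.classpath("com/badlogic/gdx/utils/arial-15.png");
BitmapFont font = new BitmapFont(fontFile, imageFile, false);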

Composer dependency with minor differences in composer.json file

I'm creating an application that is going to work on two separate versions of the same software. These frameworks will have completely different modules with a shared dependency on a JavaScript framework I've built.
Let's say
dave/version1
dave/version2
Which both share the dependency via a require of
dave/framework
I want to maintain this framework (dave/framework) in one repository that both of the parent modules can require. However, the location these framework files need to be placed in is slightly different between the two modules, along with slightly different requirements for the composer.json files to ensure everything gets moved around correctly (the two versions of the software implement Composer rather differently).
With my limited knowledge of composer and Git I've formulated a couple of solutions:
Create three repositories: two wrapper repositories with specific composer.json files to support each different version of the software, each with a dependency on a third repository which contains the actual framework. I'm unsure if this would ever work outside of theory. It also ends up being a little messy.
Use some form of clever tagging and have version1 and version2 depend on separate versions of the framework, which would in turn have a slightly different makeup. Composer would then struggle with pulling the latest version of the module, as we'd be running two ever-so-slightly different code bases in weird versions.
However both of these seem potentially messy and the incorrect way of structuring what I'm trying to achieve.
Is there a nice way to achieve this? Or am I better off maintaining two separate repositories for the framework?
With some correct planning, your first option is going to be easier than you think.
Git has a system called Submodules built right in.
The Version1 and Version2 repos would include the framework repo within them. Then when you update the framework, you can pull those updates into Version1/2 without a problem. You control when and how, so you can make sure you don't introduce a bug down the road either.
Here is some documentation for submodules: https://git-scm.com/book/en/v2/Git-Tools-Submodules
https://git-scm.com/docs/git-submodule
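As a rough sketch (the URL and path are illustrative), the wiring would look like:

# inside the version1 (or version2) wrapper repository
git submodule add https://example.com/dave/framework.git framework
git commit -m "Add dave/framework as a submodule"

# later, pull the latest framework changes into the wrapper
git submodule update --remote framework
git commit -am "Update framework submodule"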

What is "vendoring"?

What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer, where it is defined for Go as:
Vendoring is the act of making your own copy of the 3rd party packages your project is using. Those copies are traditionally placed inside each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
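In Go, for instance, the toolchain supports this directly:

# copy all of the build's dependencies into ./vendor
go mod vendor

# build against ./vendor instead of the module cache
go build -mod=vendor ./...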
If your app depends on certain third-party code to be available you could declare a dependency and let your build system install the dependency for you.
If however the source of the third-party code is not very stable you could "vendor" that code. You take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but you want to change it a little bit (a fork in other words). You can copy the code, change it, release it internally and then let your build system install this piece of code.
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for Node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The committing act is important. As an example, node_modules are already installed in your project, but only committing them makes them "vendored". Almost nobody does that for node_modules, but e.g. PnP + Zero-Installs of Yarn 2 are actually built around vendoring – you commit .yarn/cache with many ZIP files into the repo.
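As a sketch, the .gitignore for a Yarn 2 zero-installs setup keeps the cache (and the .pnp.* loader files) in the repo while ignoring the rest of .yarn/; this follows Yarn's documented recommendation, but check the current docs for the exact list:

.yarn/*
!.yarn/cache
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/sdks
!.yarn/versions
# .pnp.* files are deliberately NOT ignored – they get committed too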
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding an (often forked) version of a dependency.
This typically involves static linking or some other form of copying, but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on Err the Blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between own code and untouched snapshots of dependencies.
The goal of vendoring is better reproducibility and deployment (the kind of things people currently use containers for), as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtly in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.

How can multiple developers use the same vcproj files?

I'm working on a project with two other developers that's built on FireBreath. So far, I've been able to get things working perfectly on my machine, but we need to coordinate our development via Mercurial. So I pushed my files to the repository and thought all was well.
Unfortunately, that doesn't work.
The various .vcproj files that make up the solution all contain hard-coded references to my local file system. This works fine for me, because I'm not moving the project around. But when you try to build the solution on another machine with a different file structure (different drive letter, different folder location, etc.) everything breaks.
I used FireBreath's standard project generation script (Python) and then the Visual Studio CMake script (prep2008.cmd) to generate the solution files. What can I do to tweak things so that other developers can use the same code base?
If your developers are not using the same build/make/project files, this could quickly become a maintenance nightmare. So you should definitely all use the same .vcproj files. (An exception to this would be if the project files were generated from some other files. In that case, treat those other files in the way described above.)
There are two ways to deal with the problem of differing setups on different machines. One is to make all paths relative to the project's path. The other is to use environment variables to refer to files/tools/libraries/whatever. IME it's best to use relative paths for everything that can be checked out with the project, and environment variables for the rest. Add a script that checks for the existence of all necessary environment variables, pointing out the meaning of any missing ones, and run it as a build prerequisite, so whoever tries to get a new build machine up and running gets hints at what to do.
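A tiny sketch of such a prerequisite check, in Python (the variable names are invented for illustration):

import os
import sys

# map each required variable to a hint about what it should point to
REQUIRED = {
    "MYLIB_ROOT": "the root of the shared library checkout",
    "BOOST_ROOT": "the root of the Boost installation",
}

missing = [name for name in REQUIRED if name not in os.environ]
if missing:
    for name in missing:
        print("%s is not set; it should point to %s" % (name, REQUIRED[name]))
    sys.exit(1)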
To make sure that everyone caught the updated comments from sbi's answer, let me give you the "definitive" answer from the FireBreath devs.
Your build directory is disposable; you should never share .vcproj files. Instead, you should regenerate your build/ directory any time you change the project and on each new computer, just like any project that uses CMake.
For more information, see http://colonelpanic.net/2010/11/firebreath-tips-working-with-source-control/
For reference, I am the primary author of FireBreath and I wrote the article.
I'm not familiar with FireBreath, but you need to make the references relative, and then recreate that relative structure on every machine. That is, if your project sits in "c:\myprojects\thisproject" and has an additional include directory "c:\mydir\mylib\include", then the latter path needs to be replaced with "..\..\mydir\mylib\include".
EDIT: I rewrote my answer to make it clearer. If I understood you correctly, your problem is that FireBreath generates those .vcproj files with absolute paths in them, and you want to use these .vcproj files on a different developer machine.
I see 3 options:
Live with it. That means: make sure every team member has the same file structure / view of the file system, with tools installed in the same place.
Ask the authors of FireBreath to change their .vcproj generator to allow relative paths, use of environment variables etc.
If 1 or 2 does not work, write a program or script that changes the absolute paths to relative ones in those .vcproj files (see the sketch after this list). Run it whenever you have to regenerate your FireBreath project.
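A rough sketch of option 3 in Python (the build directory, file encoding, and project-root assumption are all illustrative; FireBreath itself ships nothing like this):

import glob
import os

# absolute prefix that the generated project files contain on this machine
PROJECT_ROOT = os.path.abspath(".")

for path in glob.glob("build/**/*.vcproj", recursive=True):
    with open(path, encoding="utf-8") as f:
        content = f.read()
    # rewrite absolute references to the checkout as relative ones
    relative = os.path.relpath(PROJECT_ROOT, os.path.dirname(path))
    patched = content.replace(PROJECT_ROOT, relative)
    if patched != content:
        with open(path, "w", encoding="utf-8") as f:
            f.write(patched)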
What you should not do, according to the FireBreath FAQ: don't change the .vcproj files manually; those changes will be lost the next time the project is regenerated.
EDIT: seems that "option 4" turned out to be the best solution: generating those .vcproj files for each developer individually. Hope my suggestions were helpful anyway.

How to display credits

I want to give credit to all the open source libraries we use in our (commercial) application. I thought of showing an HTML page in our about dialog. Our build process uses Ant, and the third-party libs are committed in SVN.
What do you think is the best way of generating the HTML-Page?
Hard-code the HTML page?
Switch dependency management to Apache Ivy and write an Ant task to generate the HTML
Use maven-ant-tasks and write an Ant task to generate the HTML
Use Maven only to handle the dependencies and generate the HTML once, then download and commit them; the rest is done by the unchanged Ant scripts
Switch to Maven 2 (Hey boss, I want to switch to Maven; in a month the build might work again...)
...
What elements should the about-dialog show?
Library name
Version
License
Author
Homepage
Changes made, with a link to the source archive
...
Is there some best-practice advice? Some good examples (applications with a nice about dialog showing their dependencies)?
There are two different things you need to consider.
First, you may need to identify the licenses of the third-party code. This is often done with a THIRDPARTYLICENSE file. Sun Microsystems does this a lot. Look in the install directory for OpenOffice.org, for example. There are examples of .txt and .html versions of such files around.
Secondly, you may want to identify your dependencies in the About box in a brief way (and also refer to the file of license information). I would make sure the versions appear in the About box. One thing people want to quickly check for is an indication of whether the copy of your code they have needs to be replaced or updated because one of your library dependencies has a recently-disclosed bug or security vulnerability.
So I guess the other thing you want to include in the about box is a way for people to find your support site and any notices of importance to users of the particular version (whether or not you have a provision in your app for checking on-line for updates).
An Ant task seems to be the best way. We do a similar thing in one of our projects. All the open source libraries are present in a specified folder. An Ant task reads the manifests of these libraries, their versions and so on, and generates an HTML page, copying it into another specified folder from where it is picked up by the web container.
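A minimal sketch of such a target (the folder names and output path are made up, and pulling version info out of each jar's manifest would need a bit more work, e.g. a custom task):

<target name="generate-credits">
    <!-- collect the file names of all third-party jars into one property -->
    <pathconvert property="libs.html" pathsep="&lt;/li&gt;${line.separator}&lt;li&gt;">
        <fileset dir="lib" includes="*.jar"/>
        <flattenmapper/>
    </pathconvert>
    <!-- write them out as a simple HTML list -->
    <echo file="build/credits.html">&lt;html&gt;&lt;body&gt;&lt;ul&gt;
&lt;li&gt;${libs.html}&lt;/li&gt;
&lt;/ul&gt;&lt;/body&gt;&lt;/html&gt;</echo>
</target>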
Generating the page with each build would be wasteful if the libraries are not going to change often. Library versions may change, but the set of libraries usually doesn't. Just creating the HTML page by hand would be the easiest way out, but that's one more maintenance headache. Generate it once and include it with the package; the script can always be run again if the libraries change (updating versions, adding new libraries).