Got an email from a buddy about versioning and source control tagging. Thought I’d share:
I was just wondering, in writing code deployment scripts, is there a compelling reason to use a separate or proprietary “tagging” system rather than rely on source control tags? For example, creating release versions that are independent of source control tags, and using those release versions when specifying what code to deploy.
I’m curious because my old company did this and I wonder if that abstraction is useful or necessary with more complicated code deployment schemes.
Interesting question. As with anything like this, the answer is “it depends”.
If I get what you’re asking, you’re wondering about the usefulness/necessity of separating out “versioned” artifacts for deployment – e.g., having a versioning scheme for “deployable” artifacts that deviates from the “tagging” convention you use in your SVN.
This sounds like something that you see a lot with Maven – the “maven way” almost requires the shoving off of artifacts to a shared location, to be picked up and deployed at a later time (“snapshots”, “releases”, etc), which sort of mandates a way of managing/naming these artifacts separately from SVN tags. The very concept of an artifact repository is central to the “maven way”.
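To make the snapshot/release distinction concrete, here’s a small Python sketch (illustrative only, not Maven itself) of the standard Maven repository layout – group directories, then artifactId, then version – and the “-SNAPSHOT” suffix convention that separates in-flight builds from fixed releases. The `com.example`/`webapp` coordinates are made up for the example.

```python
# Illustrative sketch of Maven-style artifact coordinates -> repository path,
# plus the "-SNAPSHOT" suffix convention. Names are hypothetical.

def artifact_path(group_id: str, artifact_id: str, version: str, ext: str = "jar") -> str:
    """Standard Maven repo layout: group dirs / artifactId / version / file."""
    group_dirs = group_id.replace(".", "/")
    return f"{group_dirs}/{artifact_id}/{version}/{artifact_id}-{version}.{ext}"

def is_snapshot(version: str) -> bool:
    """Snapshots are mutable, in-progress builds; releases are fixed."""
    return version.endswith("-SNAPSHOT")

# artifact_path("com.example", "webapp", "1.4.2")
#   -> "com/example/webapp/1.4.2/webapp-1.4.2.jar"
# is_snapshot("1.5.0-SNAPSHOT")  -> True
```

The point is just that the artifact’s name/path is derived from its own version, not from whatever the SVN tag happened to be called.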
In general, yes it is useful, though maybe not always necessary. One compelling reason is the “rollback” scenario – it’s really handy to have an archive of “certified” deployable artifacts readily available when you gotta abort a deployment and roll back to a previous version (rather than having to wait to re-build/package off of a tag, which in large systems could take a long time).
Obviously, there are lots of approaches to dealing with this scenario, but this seems to work pretty well. A side benefit is that you can readily deploy any particular historical version of the app for, say, QA developers to identify/isolate a particular bug in a particular version.
Also, the concept of a “build pipeline” is very powerful, and is most useful when the different stages of a build/package/test/deploy are performed using the exact same artifacts – so you may have a build step that creates a war, then another step picks up that specific war and deploys it, tests against it, etc, and then further down the line you take that same exact artifact and deploy that where it needs to go (staging, prod, whatever). This helps minimize the risk of inadvertently introducing unknown/undesired code and/or property changes as the code moves through its lifecycle.
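One common way to enforce that “same exact artifact” guarantee is a checksum recorded at build time and verified at every later stage. A sketch of that idea (the stage names are just examples):

```python
# Sketch: fingerprint an artifact once at build time, then verify the
# fingerprint before every pipeline stage so nothing swaps in different bits.
import hashlib
from pathlib import Path

def fingerprint(artifact: Path) -> str:
    """SHA-256 of the artifact's bytes, recorded when it's first built."""
    return hashlib.sha256(artifact.read_bytes()).hexdigest()

def verify_before_stage(artifact: Path, expected: str, stage: str) -> None:
    """Refuse to run a stage (test/staging/prod) against a changed artifact."""
    actual = fingerprint(artifact)
    if actual != expected:
        raise RuntimeError(f"{stage}: artifact changed since build "
                           f"({actual[:12]} != {expected[:12]})")
```

If the war tested in QA and the war shipped to prod have the same fingerprint, you know you tested what you shipped.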
One more thing this helps with is speeding up build times (both locally and on a build server) for large complex systems in a shared/distributed development environment through better management of dependencies. Say, for example, you’re working on a module in a project with dozens (or hundreds perhaps) of other, shared modules. As a best practice, you should be compiling and running unit tests several times a day against the stuff you’re changing. If you need to build the entire stack, on every change, prior to every commit, that could get a little out of control and may discourage frequent local builds.
However, if you toss in an “artifact repository”, where you can keep fixed versions of all sorts of shared modules (your project dependencies), then you don’t need to compile (or even keep that source locally) every single thing in order to get a full project. You can just grab the pre-compiled, “versioned” binaries from the shared repo, and you’re set. The tradeoff of developer time for a little storage and network traffic is usually a no-brainer.
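The dependency shortcut described above boils down to a check like this – before compiling a shared module from source, see whether a pre-built binary for that fixed version already sits in the shared repo. A sketch with a hypothetical `common-utils` module and repo layout:

```python
# Sketch: resolve a fixed-version dependency from a shared artifact repo.
# If the pre-built binary exists, use it; only build from source as a
# fallback. Module name and repo layout are hypothetical.
from pathlib import Path
from typing import Optional

def resolve_dependency(repo: Path, name: str, version: str) -> Optional[Path]:
    """Return the pre-built artifact if the shared repo has it, else None
    (meaning the caller has to check out and compile that module itself)."""
    candidate = repo / name / version / f"{name}-{version}.jar"
    return candidate if candidate.exists() else None
```

Every dependency resolved this way is a module you neither check out nor compile locally, which is where the build-time savings come from.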