Archive for the ‘Uncategorized’ Category

Speeding up a build

November 18, 2010

Recently, I was asked how one could speed up a build. In this particular scenario, cost wasn’t a constraint (ahh, to dream), but the content is still pretty relevant.

One thing to note – oftentimes a long-running build is not necessarily a problem in itself, but may be more of an architecture issue. Meaning: if some particular component used to build pretty quickly, and now it’s taking forever, then it’s likely time to refactor and decouple some stuff into smaller components or services or whatever. Note that this is a different kind of “breaking things into smaller pieces” than the build pipelining described below…

That being said, here are some tips for speeding up a build:

1. Hardware investment
2. Dependency management
3. Establish a build pipeline, or “chained” builds

Hardware – assuming budget isn’t a concern, the first thing I’d do is provide more processing power for these builds, and invest in faster disks. This is one of those rare cases where “throwing money at the problem” can actually help. I’d also purchase tons of RAM, and set up our builds to run on a RAM disk – which would really speed things up. With 1500 builds in a day, that’s just over one build per minute (so under a minute per build, on average); we’d need to evaluate what hardware it’s running on now to assess/estimate how much we’d gain from more processing power (how much of that ~1 minute can we realistically shave off?), but it’s one of the simplest, quickest things we could consider. How fast would the build finish on a Cray?
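For a sense of what the RAM disk trick looks like in practice, here’s a minimal sketch in Python – it assumes a Linux box where /dev/shm is tmpfs (i.e., RAM-backed), and the source path and build command are placeholders, not anything from the scenario above:

import shutil
import subprocess
import tempfile

SOURCE_DIR = "/home/build/myproject"  # hypothetical checkout location
BUILD_CMD = ["ant", "clean", "dist"]  # hypothetical build command

def build_on_ramdisk():
    # Copy the working tree into RAM, run the build there, and copy
    # the resulting artifacts back out to durable storage.
    workdir = tempfile.mkdtemp(dir="/dev/shm")  # tmpfs on most Linux distros
    try:
        ram_tree = shutil.copytree(SOURCE_DIR, workdir + "/src")
        subprocess.run(BUILD_CMD, cwd=ram_tree, check=True)
        shutil.copytree(ram_tree + "/dist", SOURCE_DIR + "/dist", dirs_exist_ok=True)
    finally:
        shutil.rmtree(workdir)  # RAM is precious; clean up after ourselves

if __name__ == "__main__":
    build_on_ramdisk()

The win comes from all the compiler’s intermediate file I/O happening in memory – only the final artifacts ever touch the disk.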

Dependency management – Introduce some sort of dependency management so that we don’t need to build each and every artifact for each and every build. Now, this one may not be ideal for you, as you didn’t state “we build only those components that have changed” as an assumption, but I’m going to go out on a limb and assume that you don’t need to rebuild a component/artifact to which no changes have been made. If that’s a safe assumption, you can make huge gains by using a dependency management tool and a shared artifact repository (like Artifactory or Nexus) to manage and publish versioned artifacts. Tools that provide this ability include Ivy (used with Ant) and Maven, though Maven has other features/uses too.

So, for example, say we have 2 components/projects: A and B, where A depends on B. When initiating a build, rather than simply building B, then A, we define the dependency between them and let the build tool resolve it for us. Additionally, we could specify the version of the B module that A depends on (essentially pegging A to a fixed version of B – say, 1.0), and then we just run a build of A. The build script now knows that A depends on v1.0 of B, and checks for its existence in the shared repository. If it finds it, it’ll simply grab that artifact and use it to compile A rather than rebuilding B. Alternatively, we could tell A to always use the “latest” version of B, in which case it’ll just grab the most recently built version of B.
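The resolution logic boils down to something like the following conceptual sketch in Python – this is not how Ivy or Maven are actually implemented, and the repo layout and helper names are made up:

import os

SHARED_REPO = "/mnt/artifacts"  # hypothetical shared artifact repository

def build_from_source(name, version):
    # Stand-in for a real compile step; a real setup would invoke the
    # build tool here and publish the result back to the shared repo.
    print(f"Building {name} {version} from source...")
    return f"/tmp/{name}-{version}.jar"

def resolve(name, version):
    # If the requested artifact already exists in the shared repo, reuse it;
    # otherwise fall back to building it. Components that haven't changed
    # (and are already published) never get rebuilt.
    path = os.path.join(SHARED_REPO, name, version, f"{name}-{version}.jar")
    if os.path.exists(path):
        print(f"Found {name} {version} in the repo, skipping rebuild")
        return path
    return build_from_source(name, version)

# A is pegged to v1.0 of B: building A just asks the resolver for B's jar.
b_jar = resolve("B", "1.0")
print("Compiling A against", b_jar)

The point is that the “reuse or rebuild?” decision is made per component, per version – which is exactly what lets unchanged components drop out of the build entirely.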

Ivy and Maven make this possible by publishing metadata about the resulting artifacts along with the actual artifact itself. When a build is initiated, it first maps out its dependencies, and attempts to resolve them against existing (pre-built) artifacts in the shared repository.
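Publication is the flip side of resolution: after a successful build, the artifact gets copied into the shared repository along with a small metadata file describing it and its own dependencies. A toy sketch (the JSON layout here is invented – Ivy and Maven each have their own metadata formats):

import json
import os
import shutil

SHARED_REPO = "/mnt/artifacts"  # hypothetical shared artifact repository

def publish(jar_path, name, version, dependencies):
    # Copy the built jar into the shared repo and write a metadata file
    # next to it, so later builds can resolve against it without rebuilding.
    dest_dir = os.path.join(SHARED_REPO, name, version)
    os.makedirs(dest_dir, exist_ok=True)
    shutil.copy(jar_path, os.path.join(dest_dir, f"{name}-{version}.jar"))
    metadata = {"name": name, "version": version, "dependencies": dependencies}
    with open(os.path.join(dest_dir, "metadata.json"), "w") as f:
        json.dump(metadata, f, indent=2)

# After a successful build of B v1.0 (no upstream dependencies of its own):
publish("build/B.jar", "B", "1.0", dependencies=[])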

This approach will reduce overall build times by simply skipping those components that haven’t changed. If we assume that all 1500 projects have been changed since the last build and need to be rebuilt, then I’m afraid that this approach wouldn’t help much.

Build Pipeline – break the build into smaller discrete chunks, and run them in a “pipeline” or a “build chain”. Again, I’m not sure this is an assumption I can make, as you stated a “build” consists of only compiling and linking, but there are often things you can do to “move around the load” if not outright reduce it. So, for example, you may have a “quick” build that simply checks out the source, compiles/links, and runs unit tests, whereas running a “full, clean build” will do much more (delete local source, check out, compile/link, run unit tests, package, run integration tests, deploy, regress, etc).

We could define a “pipeline” that code moves through on its way from source to deployment/release, where each stage of the pipeline is its own discrete activity, providing different types of feedback (e.g., if compilation passes, move on to assembly/packaging; if assembly/packaging passes, move on to deployment; and so on).
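At its core, a pipeline is just an ordered list of stages where each stage only runs if the previous one passed. A bare-bones sketch (the stage commands are placeholders):

import subprocess

# Each stage is a name plus the command that implements it; the commands
# below are placeholders for whatever your build actually runs.
PIPELINE = [
    ("compile",   ["ant", "compile"]),
    ("unit-test", ["ant", "test"]),
    ("package",   ["ant", "dist"]),
    ("deploy",    ["./deploy.sh", "staging"]),
]

def run_pipeline():
    # Run stages in order, stopping at the first failure so developers
    # get the earliest (and cheapest) feedback available.
    for name, cmd in PIPELINE:
        if subprocess.run(cmd).returncode != 0:
            print(f"Pipeline failed at stage '{name}'")
            return False
        print(f"Stage '{name}' passed, moving on")
    return True

if __name__ == "__main__":
    run_pipeline()

Most CI servers provide this kind of chaining (plus parallelism and reporting) out of the box; the sketch is just to show the model.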

If the goal is to provide more rapid feedback to developers, we might look at what’s most useful to a developer, isolate that feedback (maybe something like “your code won’t compile as is!”), and provide it faster – e.g., make that the “quick build” that gives developers faster feedback.

Interesting article on CI

June 23, 2010

Excellent article to help evaluate your CI fu:

http://www.cmcrossroads.com/cm-journal-articles/13530-an-evaluation-framework-for-continuous-integration-tools

Versioning Question from a friend

May 19, 2010

Got an email from a buddy about versions and source control tagging… thought I’d share:

Hey Dood,

I was just wondering: in writing code deployment scripts, is there a compelling reason to use a separate or proprietary “tagging” system rather than rely on source control tags?  For example, creating code release versioning that is independent of source control tagged versions, and using the release versions when specifying what code to deploy.

I’m curious because my old company did this and I wonder if that abstraction is useful or necessary with more complicated code deployment schemes.

– Developer

My reply:

Interesting question.  As with anything like this, the answer is “it depends”.
🙂

If I get what you’re asking, you’re wondering about the usefulness/necessity of separating out “versioned” artifacts for deployment – e.g., having a versioning scheme for “deployable” artifacts that deviates from the “tagging” convention you use in your svn.

This sounds like something that you see a lot with Maven – the “maven way” almost requires the shoving off of artifacts to a shared location, to be picked up and deployed at a later time (“snapshots”, “releases”, etc), which sort of mandates a way of managing/naming these artifacts separately from svn tags.  The very concept of an artifact repository is central to the “maven way”.

In general, yes it is useful, though maybe not always necessary.  One compelling reason is the “rollback” scenario – it’s really handy to have an archive of “certified” deployable artifacts readily available when you gotta abort a deployment and roll back to a previous version (rather than having to wait to re-build/package off of a tag, which in large systems could take a long time).

Obviously, there are lots of approaches to dealing with this scenario, but this seems to work pretty well.  A side benefit is that you can readily deploy any particular historical version of the app for, say, QA developers to identify/isolate a particular bug in a particular version.
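As a quick sketch of that rollback scenario: if every certified release lands in an archive keyed by version, aborting a bad deploy is just deploying an older key. (The paths and the deploy script below are hypothetical.)

import subprocess

RELEASE_ARCHIVE = "/mnt/releases"  # hypothetical archive of certified artifacts

def deploy(version):
    # No rebuild, no repackaging: push an already-certified artifact as-is.
    artifact = f"{RELEASE_ARCHIVE}/myapp-{version}.war"
    subprocess.run(["./deploy.sh", artifact], check=True)  # hypothetical script

deploy("2.3.1")  # the bad release goes out...
deploy("2.3.0")  # ...and rollback is just deploying the previous version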

Also, the concept of a “build pipeline” is very powerful, and is most useful when the different stages of build/package/test/deploy are performed using the exact same artifacts – so you may have a build step that creates a war, then another step picks up that specific war and deploys it, tests against it, etc, and then further down the line you take that same exact artifact and deploy it where it needs to go (staging, prod, whatever).  This helps minimize the risk of inadvertently introducing unknown/undesired code and/or property changes as the code moves through its lifecycle.
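One cheap way to enforce “same exact artifact” through the pipeline is to fingerprint the war when it’s built and verify that fingerprint at every later stage. A sketch (the build output path and promotion step are stand-ins):

import hashlib

def sha256_of(path):
    # Fingerprint the artifact so later stages can prove they're deploying
    # the exact bytes that were built and tested earlier in the pipeline.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

built_war = "build/myapp.war"    # hypothetical build output
expected = sha256_of(built_war)  # recorded at build time

def promote(war_path, environment):
    # Verify before every deploy: test, staging, prod all get the same bytes.
    if sha256_of(war_path) != expected:
        raise RuntimeError(f"Artifact changed on its way to {environment}!")
    print(f"Deploying verified artifact to {environment}")

promote(built_war, "staging")
promote(built_war, "prod")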

One more thing this helps with is speeding up build times (both locally and on a build server) for large complex systems in a shared/distributed development environment through better management of dependencies.  Say, for example, you’re working on a module in a project with dozens (or hundreds perhaps) of other, shared modules.  As a best practice, you should be compiling and running unit tests several times a day against the stuff you’re changing.  If you need to build the entire stack, on every change, prior to every commit, that could get a little out of control and may discourage frequent local builds.  However, if you toss in an “artifact repository”, where you can keep fixed versions of all sorts of shared modules (your project dependencies), then you don’t need to compile (or even keep that source locally) every single thing in order to get a full project.  You can just grab the pre-compiled, “versioned” binaries from the shared repo, and you’re set.  The tradeoff of developer time for a little storage and network traffic is usually a no-brainer.
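In practice that just means your local build downloads pinned, pre-built jars instead of checking out and compiling sibling modules. A rough sketch of the fetch-and-cache step (the repo URL layout and module names are made up – real tools like Ivy and Maven handle all of this for you):

import os
import urllib.request

REPO_URL = "http://repo.example.com/artifacts"  # hypothetical shared repo
CACHE_DIR = os.path.expanduser("~/.artifact-cache")

def fetch(name, version):
    # Download a pinned, pre-built dependency once and cache it locally,
    # so you never check out or compile that module's source yourself.
    os.makedirs(CACHE_DIR, exist_ok=True)
    local = os.path.join(CACHE_DIR, f"{name}-{version}.jar")
    if not os.path.exists(local):
        url = f"{REPO_URL}/{name}/{version}/{name}-{version}.jar"
        urllib.request.urlretrieve(url, local)
    return local

# Build your module against fixed versions of the shared modules:
classpath = [fetch("shared-core", "2.1"), fetch("shared-web", "1.4")]
print("Compiling against:", classpath)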

Make sense?

Nice article on build automation

July 4, 2009

Check out this nice write-up about build automation.  In particular, note the bit about keeping tabs on code quality.

I think it’s often overlooked that “quality injection” is a huge benefit of CI.  Yes, it’s all well and good that your code compiles, but that doesn’t really tell you much about the quality or give you any useful metrics you can act on.

There’s a handful of utilities out there that you can tie into your build to collect info about your codebase (Checkstyle, Coverity, Simian, and FindBugs, to name a few).

Point is – when you start thinking about how you can leverage your automated build to inject quality into your process, things can get really interesting.
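As a taste of what that can look like: most of these tools emit machine-readable reports that a build can gate on. Here’s a small sketch in Python that fails the build when a Checkstyle XML report contains too many errors – the report location and threshold are made up, and the error/severity layout assumes Checkstyle’s usual XML output:

import sys
import xml.etree.ElementTree as ET

MAX_ERRORS = 0  # made-up threshold; tune to taste

def quality_gate(report_path):
    # Count <error severity="error"> entries in the report and fail the
    # build if we're over the threshold.
    tree = ET.parse(report_path)
    errors = [e for e in tree.iter("error") if e.get("severity") == "error"]
    if len(errors) > MAX_ERRORS:
        print(f"Quality gate failed: {len(errors)} errors (max {MAX_ERRORS})")
        sys.exit(1)
    print("Quality gate passed")

if __name__ == "__main__":
    quality_gate("build/checkstyle-report.xml")  # hypothetical report location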

How does your CM fu stack up?

July 3, 2009

One of the challenges of investing the time and effort into pimping out your build and CI setup (or, more generally, your CM processes) is how to measure success. Where’s the ROI in having your top dude spend days writing Ant scripts?

Check out this great post about how to measure your success with change management.

justinlittle.com redesign

June 30, 2009

We just released a redesign of justinlittle.com, check it out and let me know what you think!

As always, it’s a work in progress, but at least I’ve got a decent base to work with now…

If you like the design/layout, check in with Sleepless Media out of Santa Cruz, CA.  They did the design for me, and they’re a great team to work with.  I hacked up the HTML/CSS a bit (hey, what can I say, I’m not a design guy); their original stuff was even tighter.

They do really, really nice stuff, check out their portfolio of work.

Electric Cloud and Coverity?

June 29, 2009

This sounds like a pretty good match-up:

http://www.sqazone.net/modules/news/article.php?storyid=426

My new favorite YouTube vid

June 19, 2009

This is hilarious.  Agile Hitler.

Software as an organism

January 21, 2009

I hadn’t realized quite how fitting my tagline was when I started this blog.  The “care and feeding” of software is one of the main roles of a release manager, or anyone involved in managing change to software, for that matter.  It occurred to me today that software has a lot in common with living things – more and more every year.

“What?!” you ask.  How is a web-based app like a living organism, you wonder?  Well, there are a lot of similarities, and it makes for a good analogy.  We’re decades into software engineering now, and we’ve discovered a lot of stuff along the way.

One key development I’ve noticed is that more and more, organizations (both software consumers and creators) are beginning to realize that developing apps is less like mechanical engineering and more like giving birth – I’ve even heard people use that analogy outright, and we’ve all heard some kind of app referred to as “my baby”.  It’s no longer an exercise of “identify requirements, design, code, test, release” performed once, in isolation – it’s a nearly never-ending repetition of this cycle.  Agile methodologies are compelling because they acknowledge this at the outset of a project, and cater to this reality.  Software is never “done”.  If it is, so is the company that made it…

Mechanical or even electrical engineering approaches are not really suited to software.  Sure, they provide a framework for getting things done, but there’s a key difference between what’s typically been called “engineering” and the engineering of software solutions.

Engineering: the art or science of making practical application of the knowledge of pure sciences, as physics or chemistry, as in the construction of engines, bridges, buildings, mines, ships, and chemical plants.

Rarely do people set out to develop an app thinking they’ll “nail it” in version 1.0 and never need to spend more time/energy on it.  Certainly, there are phases in the development of an app, as there are in the life of an organism.  So while it is an ongoing cycle (the SDLC), there is a sort of linear path that an app goes through, made up of each of these iterations.

In this 5 part series, I’ll ponder the following topics:

1. Definition of an “application”

2. Comparison of an app with an organism

3. Trends in software design, development

4. Release manager, build engineer as doctor, triage nurse

5. What does it all mean?

Hyperlink Legal goes live!

January 21, 2009

Hyperlink Legal has just launched their website. They’re a small company, specializing in creating hyperlinks in PDF docs for the legal industry. Check em out!

www.hyperlinklegal.com