Wednesday, May 05, 2010

Agile RTP (ARTp) meetup

Last night I attended my first meetup with the group Agile RTP. This is a group of Agile enthusiasts, practitioners, and those who just want to learn more about Agile.  They meet about once a month and have various presentations and discussions around the Agile movement and community. If you are unfamiliar with Agile, see my last post.

This month's presentation was entitled "How 'Bottom Up' is Failing the Agile Community." It was presented by Brad Murphy, CEO of Gearstream, Inc. Brad is a veteran of software development who started out as a Smalltalk programmer way back when and went on to found several successful startups.

So how is “Bottom Up” failing the Agile community? Well, the premise of Brad’s presentation is pretty straightforward, and one that I have talked about before in this space. Agile is not just a practice that should be isolated to one group in an organization, though my experience has been that it is in these silos that Agile has its roots. In order for Agile to be as successful as it can be, the inclination, the encouragement, and the participation need to start at the top. Executives, directors, and senior-level managers need to be actively involved in making and shaping the Agile effort in their organizations. As Brad indicated in his presentation, without commitment and participation from this level, the effort is doomed.

One thing that I would have liked to hear more about is how small companies can break through that executive barrier. Brad started his presentation with the disclaimer that his experience is primarily with very large companies and their executives. There was a little bit of discussion around this topic after the presentation, and what I took from it is that small companies and large companies have the same goal: create value to succeed.

The Agile community needs to address that goal, not just show that Agile saves the company money and gets the product out the door quicker (which are valuable outcomes). Brad showed in his presentation that in some ways these quick fixes can diminish the movement if that's all that is presented. He gave the example of Apple and Dell in 1997. Apple was a smallish company with somewhere around $700 million in revenue, and Dell’s revenue was around $3 billion (I can’t remember the exact figures). Today Dell is a $30 billion company and Apple is around a $250 billion company. The difference is that Dell focused on the idea that consumers wanted cheap PCs, while Apple focused on providing value that consumers didn’t even know they needed. It’s all about providing value.

All in all, it was a well-received presentation with some very interesting questions, answers, and discussion afterward. If you are at all interested in Agile, I would encourage you to find your nearest Agile community get-together. If you are in the Triangle area of NC, I hope to see you at the next meetup.

till next time…

Friday, April 23, 2010

Are you Agile?

I recently went to a seminar in Durham on Agile Development Methodologies. This seminar was sponsored by AccuRev and AnthillPro (UrbanCode). I learned a little bit, some of my core beliefs were reinforced, and I was a bit surprised by the level of Agile development my company was at compared to other companies represented at the seminar.

So, first of all, for those not acquainted with the term Agile, here is a link to Wikipedia’s description, and below is the Agile Manifesto. I have been told that any presentation on Agile should include the Agile Manifesto. So, while this is not a presentation, I believe it is valuable to see it in black and white.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

There is also the related “Twelve Principles behind the Agile Manifesto”.

So what did I learn in this brief seminar? One of the main takeaways was directly related to the last line of the Manifesto. This was probably skewed a bit because the seminar was sponsored by a tools company, but I learned that the items on the right are important so long as they support the items on the left. That is, if someone tells you that because they are doing Agile they do not use tools, write documentation, or plan their design, they are not doing Agile. Tools, documentation, and design are very important, so long as they support the items on the left.

What were some of the things that helped reinforce my core beliefs? First, Agile requires management and executive buy-in and participation. Too often a team that practices Agile development falls flat. One reason for this is that the management and executives either 1) do not buy in to the process and do not promote it, or 2) do not participate in or own the process, which I believe to be the worse of the two scenarios. The success of any development methodology hinges on management and executive participation.

Second, and related: the development team's priorities need to be clearly represented and talked about daily. The Product Owners are key to making this happen. If a Product Owner is successful at transforming executive Value Story Cards into Story Cards for the development team and moving them through the swim lanes, then the project has a great chance for success. If this is difficult for the Product Owner and priorities are not clear, the team is doomed. If an executive wants a specific feature to be part of a sprint, then the Product Owner needs to be adept enough to help the executive decide what feature will NOT be a part of that sprint. This is a give and take. I believe this to be one of the most challenging aspects of Agile to overcome.

What surprised me? Well, in short, I was surprised by how much further along my company is in practicing Agile methodologies than most of the organizations represented at the seminar, and yet how far away we are from implementing some of the key principles. I am excited that we are moving closer to adopting these principles, but the realist in me knows that we have a ways to go, especially in the area of priority setting. We continue to learn and move closer, like most teams. And I am encouraged.

The Agile Manifesto is an important step in the evolution of Software Engineering.  When an organization follows the principles associated with the Manifesto there is no limit to what they can accomplish.

Till next time…

Thursday, April 08, 2010

Managers Getting Value from Their Team

One of the blogs I read is by Scott Berkun. He writes about management and public speaking. One of his latest entries is entitled “Should managers know how to code?”. He basically puts managers into two categories: A) managers of software development teams, and B) project managers and team leads. Then he makes a series of points that a manager in any role should strive toward. I’ll let you read the blog to get all the details.

What I would like to comment on is the statement he makes in one of his points. Scott says that “Managers don’t need to be experts – they need to be great at getting functional value out of experts of any kind.” I believe this to be a pivotal statement regarding a manager’s success or failure.

A few years back, when I was a Programmer Analyst with the Mashantucket Pequot Tribal Nation in Connecticut, I had the opportunity to listen to Sir Richard Branson speak about being successful. One of the most important things I learned that day, and something that has stuck with me throughout my career, is that in order to be successful you have to surround yourself with people who know what they are doing and let them do it. In my eyes, this is exactly what Scott is talking about. But Scott goes a little further.

Letting the experts do what they are experts at is only part of the equation. The other part is the latter half of Scott’s statement. If the experts you surround yourself with do not add value to your particular function in an organization, then their worth as experts is diminished. Being able to direct an expert’s knowledge and know-how toward a particular function to add value is key to the success of any manager.

I have had the privilege of having managers throughout my career in software engineering who are exceptional at this key concept.  I have also had a couple who were very bad at it and it showed. Aside from these two specific cases, one from early in my career and one from later in my career, the managers I have had have allowed me to do what it was I did best and helped me focus on adding value to the company while I was doing it.

The first manager I had who demonstrated this ability, after I really understood its value and looked for it, was Fritz Kade at MPTN. He had the innate ability to get his group to not only accomplish the task at hand, but also learn from it and apply that learning to future tasks. He helped me through some very troubling times early in my career, and I thank him for that, because I would not be who I am today without those learning opportunities. Though this has nothing to do with the subject at hand, he will also forever be memorialized by his statement to me at dinner one night with my wife: “The more you eat, the more you can eat.” This from a man who weighed only about 150 lbs.

One of my managers who did not demonstrate this ability very well (not to be named, to protect the guilty) was, fortunately for me, not my direct manager but my manager’s manager. He was guilty of one of the most egregious managerial mistakes I have seen: Drive-by Management. We spent 45 minutes in a working meeting, one he should have attended, working through the details of a pretty complex insurance policy task. He arrived at the end of the meeting and stated, “this is the way we are going to do it.” He had no input from his staff and offered no chance to poke holes in “his way.” We all left that meeting drained. We had wasted our time and effort and, worse yet, he had lost the respect of almost everyone in that room. This is a classic case of mismanaging your team and your project.

So as a manager I strive to see the value my staff can give in any given situation and help them focus on that value. I believe I have one of the most talented IT staffs available, so I consider myself lucky. They are helping me be successful, and hopefully I am returning the favor. Of course, if you have a staff that is not as good as you would like them to be, then you have to have a different ability: the ability to get the most out of what you’ve got. But that is another story, and one of mine that I might tell you about in a later post.

till next time…

Wednesday, March 24, 2010

Interesting CCNet behavior

We have a utility that generates our build scripts when it is passed a directory path as an argument. The utility finds all the .sln files in the directory and its subdirectories, interrogates them to find out what the project dependencies are, and then generates a build script. We use this utility in all of our CCNet projects: CCNet calls the utility, and then the next CCNet task calls the resulting batch script.
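
For illustration, that pair of tasks looks something like this in a project's configuration (the utility name, paths, and generated script name below are placeholders, not our actual config):

    <tasks>
      <!-- First task: run the script-generation utility against the checkout -->
      <exec>
        <executable>C:\BuildTools\BuildScriptGen.exe</executable>
        <buildArgs>C:\Builds\Mainline</buildArgs>
      </exec>
      <!-- Second task: run the batch script the utility just generated -->
      <exec>
        <executable>C:\Builds\Mainline\build_mainline.bat</executable>
      </exec>
    </tasks>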

The engineers wanted to use this utility for their daily local builds, so I created a single batch file that calls the utility against a known build location that contains all our .sln files and then calls the batch files that were just generated. Unfortunately, if one of the generated batch files failed, it would exit the entire script with that exit code, not allowing the rest of the engineer's build to continue. This is expected behavior, given that none of the generated scripts' EXIT statements had the /B argument. So I added it.
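
To make the difference concrete, here is a sketch of such a wrapper (the tool and script names are made up for illustration, not our actual scripts):

    @echo off
    rem local_build_all.bat -- hypothetical wrapper the engineers run locally.
    C:\BuildTools\BuildScriptGen.exe C:\Builds\Mainline

    rem Each generated script ends by propagating its build result:
    rem   EXIT %ERRORLEVEL%    -> terminates the calling CMD session too,
    rem                           so a failure stops everything below it.
    rem   EXIT /B %ERRORLEVEL% -> exits only that script; control returns
    rem                           here with ERRORLEVEL set, and the next
    rem                           build still runs.
    call build_foundation.bat
    if errorlevel 1 echo build_foundation failed, continuing with the rest...

    call build_samples.bat
    if errorlevel 1 echo build_samples failed, continuing with the rest...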

Everything seemed fine when I ran my first tests. These were positive tests to make sure existing green CCNet projects would stay green. But then, on my negative tests, something unexpected came to light: my red build returned green. I wanted to validate that this was the case, so I removed the /B from the batch script and ran the test again. The build turned red.

I am not sure why this is the case. I looked through the CCNet documentation but could not find the answer. I am not sure if this is a CCNet issue or a CMD.exe issue (which is what CCNet uses when shelling out to batch files).

My solution was to add an argument to our script-building utility. With the new argument present, the utility generates the scripts with the /B on the EXIT command; if the argument is not present, it leaves the /B off. I then added the new argument to the batch script for the engineers, and now we are all happy.

Well, sort of. I still want to know why CCNet behaves this way. I created a thread on the ccnet-user Google Group discussion board. I'll update you on any responses I get.

till next time…

Thursday, March 18, 2010

New Feature in CCNet 1.5

So I was looking at the CCNet documentation this morning because I needed to refresh my memory regarding some syntax. As I was searching I found the Parallel Task feature. As the name indicates, this feature allows you to run several tasks at the same time. This is all well and good, and I will use the Parallel Task feature. But what was even more exciting to me (I get excited easily) was the Sequential Task feature. I know, I know. You say CCNet has always had sequential tasks; in fact, tasks run sequentially automatically. Yes, this is true.

But what I could never do is have CCNet move on to the next task if the first task failed. Now I can. See, the Sequential Task has an attribute called continueOnFailure. This is a boolean attribute that, if set to “true”, allows the set of tasks inside the sequence to keep building even if a previous one failed. I love it!

I love it because I am using a CCNet applet tool, which I have written about in a previous post, that allows me to force-build a CruiseControl.Net project from the command line. This comes in very handy, and I use it in all of our builds here at Emergent. But sometimes, because of the nature of our reusable config files, I want to force a build that may not exist yet on some build machines, say, if I were just testing it on one machine. So I can put these force-build steps inside a sequential task with continueOnFailure set to true, and I am golden.
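
Going by the 1.5 documentation, the block I have in mind would look roughly like this (the force-build applet executable, its arguments, and the project names are placeholders, not our real config):

    <sequential continueOnFailure="true">
      <tasks>
        <!-- Force a build on each build machine; a project that does not
             exist yet on a given machine fails that step, but the sequence
             carries on to the next one. -->
        <exec>
          <executable>C:\BuildTools\CCNetForceBuild.exe</executable>
          <buildArgs>BuildMachine01 Gamebryo-Win32-CI</buildArgs>
        </exec>
        <exec>
          <executable>C:\BuildTools\CCNetForceBuild.exe</executable>
          <buildArgs>BuildMachine02 Gamebryo-PS3-CI</buildArgs>
        </exec>
      </tasks>
    </sequential>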

I haven’t put this into practice yet because I just found out about it.  But, I’ll let you know if it doesn’t work.  I’m thinking it will.

Till next time…

The Evolution of a Build System Part 2

In my last post I talked about how Continuous Integration was where I started when faced with rebuilding an entire build process from scratch.  I talked about CruiseControl.Net and some of the challenges I faced at a high level.  Today I am going to talk about taking the Continuous Integration system and moving to a nightly Full Build system.

I began here:

Nightly Build and Test

  • Only built a portion of the code base
  • Built from the tip
  • VC80 and VC90 builds ran on alternate nights
  • All platforms were built and tested serially
  • Packaging was done as a separate process, only when deemed necessary
  • Close to a 24-hour turnaround

This is where we are now:

  • Builds all solutions and all configurations of the entire mainline
  • Builds from the latest green CI change-list #
  • Builds all platforms and compilers in the same night
  • Takes advantage of multi-core hardware and RAID drives by building concurrently where possible
  • Build and integrated test time is down to 14 hours total
  • Deployable ISOs for all platforms are ready before 10am daily

I built a couple of tools along the way and made use of my staff (thanks Scott and David) and our IT infrastructure and staff.  Being the IT Manager helped considerably.  The IT staff (thanks Jean and Luke) was extremely helpful in automating the process and continues to play a part. 

Here are some next steps:

  • Automate the deployment to a QA environment
  • Automate tests against the QA deployment
  • Continue to bring down the build time

All of this was made possible because we were able to reproduce the build environment and standardize on a process.

Till next time…

The Evolution of a Build System Part 1

A couple of weeks ago I began setting up a new set of build servers for our west coast office. I’ve pretty much got it set up so that it is almost a push-button operation. Not quite there with all the DCC tools yet, but almost. We typically stand up one machine with the OS and run a single installation script, which installs Visual Studio 2005 and 2008 plus their SPs. It also automatically installs Perforce, CruiseControl.Net, and the latest console SDKs, along with a few other minor tools. I began to think about how far we had come. It wasn’t always like that…
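
As a rough outline only (the share path, script names, and Perforce port below are placeholders, not our actual setup script), it boils down to something like this:

    @echo off
    rem setup-buildmachine.bat -- illustrative sketch only; each install-*.bat
    rem wraps a product's own unattended installer with whatever silent
    rem switches that installer requires.
    set INSTALLERS=\\fileserver\buildmachine-installers

    call %INSTALLERS%\install-vs2005-and-sp1.bat
    call %INSTALLERS%\install-vs2008-and-sp1.bat
    call %INSTALLERS%\install-perforce-client.bat
    call %INSTALLERS%\install-ccnet.bat
    call %INSTALLERS%\install-console-sdks.bat
    call %INSTALLERS%\install-misc-tools.bat

    rem Environment the builds expect, set once here instead of by hand
    rem (setx /M needs an elevated prompt):
    setx P4PORT perforce:1666 /M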

This is a far cry from where we were when I started working at Emergent about a year and a half ago. Back then there was a Twiki page that listed all of the various items required for each build machine. Your fingers ached after scrolling through the instructions: add this set of environment variables, make sure this path is added to the system path, install this tool at this path. It was a very manually intensive process to set up a build machine. Since the first thing I was challenged with was streamlining the build process, I had to have a reproducible build environment.

So, a combination of organic build scripts and manually set up machines made for a very unstable and, in my eyes, unreliable build system. I had to first address the reproducibility of a build machine. But where to start? I knew we would be finishing a release soon (I think it was Gamebryo 2.5 at the time), so I would start preparing for the next release cycle.

The build was taking close to 24 hours.  That was just too long.  Don’t get me wrong.  It was doing a lot of stuff.  It was building all console versions (Win32, PS3, Xbox360 and Wii) of the product and every solution configuration for each.  Then it was executing a rigorous automated testing suite.  So I chose to let that be for now.

Continuous Integration was where I started. I had experience implementing CruiseControl.Net before, so it seemed natural that I gravitated to it. My budget was not large, so I had to make choices. I reviewed ElectricCloud and AnthillPro; both would have done what I wanted, but where did I want to spend the little money I had? I decided to forgo the expensive build systems and spend my money elsewhere.

I started small. I took a piece of the technology, the base framework, and created a CCNet project on my own box just to prove that I could do it with the C++ code. Then I incorporated each compiler: VS2005 using devenv, VS2008 using MSBuild, Sony's VSI for the PS3, and GCC and make for the Wii builds. It took a while, but the proof of concept was working within a week. I had some issues with MSBuild and C++, which I outlined in a previous post, so I decided to move to devenv for all the Win32 builds. I will spare you the further details.
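
A stripped-down CCNet project along these lines captures the idea (the depot view, client name, and solution path are placeholders, not the actual config; the elements used are CCNet's p4 source-control block, interval trigger, and devenv task):

    <project name="Foundation-VC80-CI">
      <sourcecontrol type="p4">
        <view>//depot/Mainline/Foundation/...</view>
        <executable>p4.exe</executable>
        <client>buildbox01-ci</client>
      </sourcecontrol>
      <triggers>
        <!-- Poll Perforce every couple of minutes; build when new changes appear -->
        <intervalTrigger seconds="120" />
      </triggers>
      <tasks>
        <devenv>
          <solutionfile>C:\Builds\Mainline\Foundation\Foundation_VC80.sln</solutionfile>
          <configuration>Release</configuration>
          <buildtype>Build</buildtype>
        </devenv>
      </tasks>
      <publishers>
        <xmllogger />
      </publishers>
    </project>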

As my team was piecing the CI process and hardware together, we slowly started putting together a script for build machine setup that evolved through the process. I made sure my team knew I wanted as little manual intervention in a new build setup as possible. We refactored existing scripts and combined them into a series of consistently named configuration scripts in a well-known location. The team slowly began to change their way of thinking. My mantra was: as soon as you have to do something twice, evaluate and automate. I tried to lead by example. I engineered a lot of the initial setup and brought the team up to speed.

Continuous Integration as a culture was fairly new to Emergent's product engineers. Yes, they understood the concept and knew it would help them be more efficient. They understood that bringing feedback closer to the time of development was crucial to limiting the time it took to fix an issue. They understood all of the reasons. But it took a while for them to actually catch on to the whole “Red means dead” culture. Finally, after a few months of CI in full operation, the engineers were on board and even worked with each other to promote the whole methodology. After all, I believe these engineers to be the smartest people on the planet.

I was, indeed, pleased to see the fruits of my team’s labors playing out and helping the engineering team as a whole be more efficient.

In Part 2 I’ll talk about the transformation of a nightly build system “pieced together with duct tape” into a streamlined, reliable, reproducible build system and process.

Till next time…

Tuesday, February 02, 2010

Emergent Announces 2009 Results

I’m extremely pleased to let everyone know that the company I work for, Emergent Game Technologies, had a better 2009 than 2008 in terms of revenue year over year. Here is an excerpt from the press release:

CALABASAS, Calif. – February 2, 2010 – Emergent Game Technologies, a worldwide leader in 3D videogame engines, today reported a 35% increase in revenues year over year and a 58% increase in profitability for the year ending 2009.  Strong licensing deals and reduced expenses led the fourth quarter, ending the year with over 120 deals closed. Growth was strongest in North America, Japan and China, with Emergent’s Gamebryo LightSpeed gaining rapid adoption and driving the strong 2009 results.

“Emergent has made changes to weather the tough economic climate and is poised to become stronger than ever as a result of those changes,” says Scott M. Johnson, Emergent’s CEO. “While the Asian market and the US visual simulation market never slowed for us, we are now seeing signs that game development is starting to pick up in North America and Europe.  Our core message of reducing redundancies by using Gamebryo LightSpeed is resonating with publishers and developers.”

Based in Calabasas, CA, Emergent creates technologies that are driving professional videogame development. Emergent’s revolutionary game engine, Gamebryo LightSpeed, was released in May 2009 and provides both established and newly emerging studios with a flexible engine that continues to expand and grow with their evolving development needs.  LightSpeed empowers developers to create their games in any genre and continues Emergent’s dedication to providing a one-stop cross-platform toolset for PlayStation®3 computer entertainment system, Xbox®360, Wii and PC.

I’d like to think that my organizations had a lot to do with Emergent’s success. But then, doesn’t every manager hope that they have a direct impact on their company’s bottom line? In the last year the Development Technologies Group had some significant accomplishments, not the least of which are outlined below:

  • completed a comprehensive Continuous Integration System and process for all platforms (Win32, Xbox360, PS3, and Wii)
  • redesigned and implemented new nightly build process
  • redesigned and implemented new packaging process
  • redesigned SCM depot to better accommodate automation
  • significantly expanded build and test infrastructure to accommodate multiple branch builds
  • created a new PC Test Farm for expanded platform testing

The IT Group, which I also manage, had some significant accomplishments this past year as well.

  • upgraded IT infrastructure
  • implemented comprehensive backup solution using EMC’s Avamar
  • upgraded internet bandwidth at our Calabasas offices
  • consolidated all our web assets to a single provider
  • implemented Inventory Management system
  • implemented standard PC and laptop policy
  • successfully addressed more than 10,000 internal Help Desk Tickets

So where do we go from here? Well, we don’t stop. We keep moving toward helping make our company successful! Some things we are working on in 2010 include:

  • a new SVN repository for our clients to download the latest source
  • a one-button build for all our products
  • consolidating our IT infrastructure
  • continuing to provide the best service possible for our engineers!

All in all, it was a good 2009 for my groups and for Emergent. There were bumps along the way, but all organizations hit bumps. It's how you deal with them and how you learn from them that sets you apart from your competition!

till next time…

Tuesday, January 12, 2010

This Will Make Someone Happy

Today I read a blog post from Sean McCown that encourages developers to be professionals when it comes to interacting with a database. In a nutshell (go read his blog for more), he says that coders should not write code a specific way just to make the DBA happy. They should write the code that way because it is the right way to access a database.

I appreciate Sean’s sentiment and would like to take it a bit further. This paradigm should be applied to all of the disciplines surrounding an application developer, whether it be the DBA, the QA analyst, the build engineer, or the automation engineer. They all have a “right” way to access, to test, or to automate. This is not because they want to make things hard for the app dev; it is because each discipline has specific knowledge about how their systems work best.

Just as a DBA can say that coding one way will retrieve the data you want faster and more reliably than another, an automation engineer can and should say that coding a certain way will make the automation work more efficiently and return higher-quality results.

A coder will have a greater understanding of where efficiencies can be gained in his application if he can reach out to these disciplines and understand the “whys” instead of just making someone happy.

Till next time…

Monday, January 11, 2010

Fan of Deduplication!

It has been about four months now since we implemented our new backup solution, and I must say that I am a fan! Deduplication eliminates the redundant data in a given set of data being backed up. For instance, we have several VMs with Windows on them; these VMs take up very little space because the Windows bits are only stored once.

We purchased EMC’s Avamar backup solution. We have two replicating nodes, one here in Chapel Hill and one at our data center at Peak10 in Charlotte. These nodes are currently backing up more than 2.7 TB of data while using only about 1.5 TB of space. Now you might say “that's not that great,” but wait: that includes four months of additions, changes, and restore points. Our retention policy is fairly conservative, and the backup times are amazing.

Avamar works seamlessly with our VMs, Exchange, SQL Server, and our users’ directories. Backing up in both locations gives us a very easy restore scenario, one we had to act on just recently when our web server had issues.

I use Microsoft’s Windows Home Server at home, which has its own form of deduplication as well. And if you read this blog from time to time, you know what a fan of Windows Home Server I am.

Till next time…