Set up a 100% non-Windows build-deploy-test flow from TFS 2015

TFS 2015 comes with many new build management features. One of my favorites is the new cross-platform build agent they introduced, which is open-sourced on GitHub. This capability feels like it unlocks huge opportunities for those who have invested in the VS ALM stack. VS ALM already offers the best suite of tools for the Windows/Microsoft world of things; this new cross-platform build capability, combined with the revamped in-browser build authoring experience, makes the job much easier.

Let's assume I have a web app and want to deploy it to Nginx on an Ubuntu server. To coordinate the build-deploy-test flow, I'm going to use the vso-agent. The rest of the stack is 100% non-Microsoft, so let's see how it all works together.

Set up a brand new Ubuntu box

I use Vagrant to spin up a new base Ubuntu box from HashiCorp.

It doesn't have Node.js, Nginx, or npm installed.

(Screenshot: the fresh Ubuntu box, with no Node.js, Nginx, or npm installed)

Set up an Ansible playbook for app deployment

I'm assuming you are familiar with Ansible. I'm not an expert in this cool technology, but it wasn't too hard to pick up, and it's a very interesting tool for orchestrating a DevOps flow all the way from environment provisioning through monitoring. I have a simple playbook that installs Node.js and Nginx, copies my static content over to the Nginx box, and runs a post-deployment XML/JSON transformation.

Below is the snippet; it's a pretty simple, self-explanatory .yml file:

  • the highlighted tasks are the modules I use to install Node.js and other prerequisites
  • a task to copy the index.html
  • a task to transform the config file using existing Node modules

(Screenshot: the deployment playbook .yml)

The prepare_env module just installs Nginx and the set of Node modules required for my app.

(Screenshot: the prepare_env tasks)

There's also a simple run.sh to kick off the playbook. (I've set up SSH keys behind the scenes; the sudo password for the remote installation is encrypted using the Ansible Vault module, and the vault password is supplied at execution time via --vault-password-file.)

(Screenshot: run.sh)

Set up the TFS cross-platform build agent

Just follow the instructions here and the walkthrough video, and you should be all set. Run the agent interactively.

I've installed the agent on my Mac, and it's up and running.

(Screenshot: vso-agent up and running on the Mac)

Set up Ansible on the build agent

To run the deployments on the Ubuntu box, I'm going to use Ansible. The vso-agent will kick off the playbook, so Ansible needs to be installed on the agent machine; in my case that's the MacBook. Run "brew install ansible" to install it.

Connect the dots

Now that we have the fundamentals covered, the idea is to create a build that pulls my source code from the TFS Git repo, runs the Mocha tests (unit and integration tests), and, if they pass, deploys through Ansible.

I use built-in tasks provided by the new build system.

Npm Install – to install dependencies

(Screenshot: the npm install build step)

A shell script that kicks off the Mocha tests (really it's one line, "npm test"; I don't know if there is a better way to kick off npm tests)

(Screenshot: the shell script build step that runs "npm test")

A shell script that kicks off the Ansible playbook mentioned earlier (run.sh above)

(Screenshot: the shell script build step that runs run.sh)

Queue the build ..

Make sure to select the correct agent queue; in this case it's not Hosted but Default, because I added my Mac vso-agent to the Default pool.

(Screenshot: queuing the build against the Default queue)

Running the build in my local vso-agent

(Screenshot: the build running on the local vso-agent)

Viewing the build status in VSO… nice rolling log.

(Screenshots: build status and the rolling log in VSO)

The output below shows the Node.js and Nginx installation, the file copy, the sample .xml/.json config transformation, and finally Nginx starting up.

(Screenshots: Ansible output for the install, copy, transform, and Nginx startup steps)

So far, the cross-platform build agent is a very impressive capability. We may run into some shortcomings in terms of the existing out-of-the-box tasks; for example, it comes with a Gulp task, but if we want to run Grunt, which is still widely used, we need to wrap the Grunt execution in a shell script and run that script instead. Still, it's a step in the right direction toward enabling everyone to use the VS ALM stack.


Classic Build Maturity path

Management often raises questions about the build management space, especially when code promotions/releases take too long, late-night firefighting drags on over a period, or a build promotion flunks and the QA team raises concerns:

1. What is my build maturity?

2. Where does it stand against industry best practices?

3. My build engineer says we have the best process, but it still fails often and takes too long, and I feel there is a gap…

I tried putting together a classic build maturity path that could help us comprehend both the current state and the best state. The maturity level can certainly vary based on the application/product architecture, and there could be genuine reasons behind some of the current facts; however, I trust this can be a starting point when we stand clueless.

I categorize the stages from Stage 0 through Stage 4, as shown below.

If you are following an Agile methodology, I swear anything less than Stage 4 is affecting your delivery, quality, and productivity. There are plenty of tools on the market that can help you move up from wherever you are today and reach Stage 4 in about six months.

Hope this post is useful.

Thanks!!

Find and replace content in Web.config using MSBuild…

If you are wondering how to read and update a specific node in Web.config (or any other XML file) using MSBuild, here is a sample…

This is very handy for automatic, unattended deployments…

You need to have the Tigris MSBuild Community Tasks installed; this script uses the XmlRead and XmlUpdate tasks from that MSBuild extension.

You also need .NET Framework 3.5 installed.

——————————————————————————————————————————————————————

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5" InitialTargets="FindReplace">
  <!-- Required Import to use MSBuild Community Tasks -->
  <UsingTask AssemblyFile="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.dll"
             TaskName="MSBuild.Community.Tasks.XmlRead"/>
  <UsingTask AssemblyFile="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.dll"
             TaskName="MSBuild.Community.Tasks.XmlUpdate"/>
  <!-- ********************* Target to read the xml file ******************************** -->
  <Target Name="Read">
    <!-- Read Test Service end point -->
    <XmlRead
      XPath="//configuration/appSettings/add[@key='TestService']/@value"
      XmlFileName="Web.config">
      <Output TaskParameter="Value" PropertyName="TestServiceEndPoint" />
    </XmlRead>
    <Message Text="$(TestServiceEndPoint)"/>
    <!-- Read ConnectionString -->
    <XmlRead
      XPath="//configuration/connectionStrings/add[@name='DevConnection']/@connectionString"
      XmlFileName="Web.config">
      <Output TaskParameter="Value" PropertyName="ConnectionString" />
    </XmlRead>
    <Message Text="$(ConnectionString)"/>
  </Target>
  <!-- ********************* Target to find and replace a value within the xml file ************* -->
  <Target Name="FindReplace">
    <XmlUpdate
      XmlFileName="Web.config"
      XPath="//configuration/connectionStrings/add[@name='DevConnection']/@connectionString"
      Value="CrapCrapCrap123456" />
  </Target>
</Project>
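
In case it's useful, here is one way you might invoke it from PowerShell; the project file name (FindReplace.proj) and the .NET 3.5 MSBuild path are just assumptions for this example. Note that because FindReplace is listed in InitialTargets, it runs on every invocation.

# Illustrative invocation; assumes the project above is saved as FindReplace.proj
# next to Web.config, and that the .NET Framework 3.5 MSBuild is being used.
$msbuild = "$env:windir\Microsoft.NET\Framework\v3.5\MSBuild.exe"

# Default invocation: runs the FindReplace initial target and updates Web.config
& $msbuild FindReplace.proj

# Run the Read target to print the current values (the initial target still runs first)
& $msbuild FindReplace.proj /t:Read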

Essential build management for modern day software development

Here are some of the things we should be watchful of in order to make configuration and build management more productive and useful for the whole SDLC, along with the common challenges and after-effects if we ignore them.

Repeatability & reliability

Repeatability
-Is about being able to do the same thing over and over
-Is something still repeatable after, say, 6 months?
-Do developers have the ability to pull out the exact code and assemblies that are in live production today?

Reliability
-Does the process produce correct and accurate results every time?
-Are you confident that when a defect shows up, it is in the code delivered into the build and not in the build/package/deploy process itself?

Lack of ‘Repeatability & reliability’

Low defect fix rate
-Not able to repeat the build that is already in production
-Do we have the ability to reconstruct the development environment with the PROD code, fix a defect, and ship a patch quickly?
-How long does it take to reconstruct matching code and assemblies on a developer box? It should be minutes…
-Inaccurate fixes

Nonstop issues with the reliability of build execution
-When a defect is found, are you sure the problem is with the code, or could it be the build process?
-Management loses confidence in the build process

Traceability & completeness

Traceability
-Ways to understand the complete life cycle of each defect/feature that goes into a build

Completeness
-Ability to trace a build back completely and figure out whether it contains all of what was intended
-Does it add value towards the program goals and objectives?
-Why and what are we delivering in this build?

Lack of ‘traceability and completeness’

–Not being able to say exactly what is in the build
    ->What new features, enhancements, and defects have been added, and why?
–Incomplete builds and missing builds
    ->Missing some artifacts
    ->Post-deploy manual hacks/changes made on the environments outside the build process
    ->Sometimes builds are missing from source control
–Not being able to say where the build is being used
    ->Where has the build been deployed? Is it being tested on different environments? What version do I have currently on each environment?
–Not being able to say how the build was carried out
    ->Did the source get baselined?
    ->With or without third-party assemblies? What versions of third-party assemblies were used?
    ->Was it an environment-specific build? Do any configuration items change based on the environment?
    ->Were there any special compiler/packaging options or instructions followed?

Speed, Agility 

Speed
-Is about how quickly a developer can integrate fixes and test their changes
-How fast and integrated is the build process?
-Is the process efficient, with only the absolutely essential steps?
-Is the build/package/deploy process unattended? If a manual step is essential, that's a big bottleneck

Agility
-Is about having a build/deploy process into which changes can be integrated
  ->quickly
  ->efficiently
  ->independently, as and when needed
 

Lack of ‘speed, agility’

Late integration, long builds…. late night firefighting…

–High possibility of incomplete builds
    ->Features/enhancements/defects may not meet the entry criteria
–Uncertain defect quality
–Possibility of high defect re-open rates
–Not being able to integrate changes quickly
    ->Does the build process take so long that it results in weekly builds rather than CI builds? What if that build fails?
    ->Travelling with a hidden tiger until the next build
–Deferred testing puts the milestone at risk
–Lack of confidence in the scheduled build
    ->Not sure what can or can't go in
–Risk of missing milestones in the wake of late integration testing
–Risk of late-night firefighting towards the end of the development cycle; team morale will go down

So how can the process be improved?

The following high-level goals could make you better:
–Implement continuous integration
  •Define a build once every 30-45 minutes to make sure the source code is syntactically correct and produces binaries
–Write lightweight tools to encourage teams to work in source control every single minute
  •Create some tools to quickly refresh source and assemblies based on a build number and improve developer productivity (a rough sketch follows this list)
–Envision an efficient check-in policy and make the developer's life easy
  •For example, gated check-ins ensure a change is built along with the latest content from the source tree automatically prior to check-in
–Automate the build/package/deploy/sanity-testing flow and make it unattended
  •Use the best and most proven tools to ensure maximum benefit from automation
–Ensure the build process is simple, transparent, fast, and easy
  •Anyone should be able to initiate and manage the build process with no or negligible training
–Try to adopt an Application Lifecycle Management model: integrate the tool suite right from requirements through testing
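
As a rough illustration of the "refresh source based on a build number" idea above, here is a minimal sketch using the same TFS client API and Power Tools snap-in that show up in the PowerShell post later in this blog. The server URL, project, definition, build number, and workspace path are all made up, and it assumes tf.exe is on the PATH.

# Hypothetical helper: sync a developer workspace to the exact source version
# a given TFS build was produced from. All names and paths below are made up.
param (
    [string]$TfsServerUri = "http://tfsserver:8080/tfs",
    [string]$ProjectName = "MyProject",
    [string]$BuildDefinitionName = "MyApp-Nightly",
    [string]$BuildNumber = "MyApp-Nightly_20120115.3",
    [string]$WorkspacePath = "C:\src\MyProject"
)

[void][System.Reflection.Assembly]::Load("Microsoft.TeamFoundation.Build.Client, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")
Add-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue

$TfsServer = Get-TfsServer -Name $TfsServerUri
$BuildServer = $TfsServer.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])
$DetailSpec = $BuildServer.CreateBuildDetailSpec($ProjectName, $BuildDefinitionName)
$DetailSpec.QueryOrder = "FinishTimeDescending"
$Build = $BuildServer.QueryBuilds($DetailSpec).Builds | Where-Object { $_.BuildNumber -eq $BuildNumber }
if ($Build -eq $null) { throw "Build $BuildNumber not found" }

# SourceGetVersion records the version spec (e.g. C12345) the build was compiled from,
# so a plain "tf get" to that version reproduces the sources exactly.
Set-Location $WorkspacePath
& tf.exe get . /version:$($Build.SourceGetVersion) /recursive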

PowerShell+update TFS build properties

If you are wondering how to update some build properties like 'build quality', override the retention policy for a build, or change some other TFS build property dynamically from a script, PowerShell is a good option…

After I deploy a build to a test environment, I want to update its 'build quality' and 'lock' that specific build for reference, so that my retention policy doesn't wipe it off. Naturally, the Visual Studio Team Foundation client provides a tiny 'lock' symbol, and it does provide an interface to manage build quality too; they appear right in the Build Explorer window and are simply perfect. However, I tend to forget that operation post-deploy, since my deploys are completely unattended, and my retention policy retains only 5 builds. It's embarrassing when a developer asks for a specific build in order to reproduce a defect… 😦

When I was seeking ways to automate this task, there certainly were options like a custom MSBuild task, but I hate having to write C# code, compile it, and reference it in MSBuild. PowerShell was an elegant and neat option for my need: it loads the assembly at runtime, with no compilation, MSBuild, etc.

So the following script can be used to set build properties automatically post-deploy; simply call powershell.exe with the appropriate parameters (see the usage example after the script).

here we go..

# Must pass inputs in the following order:
# 1. TFS Uri  2. project name  3. build definition  4. build number  5. build quality  6. retain (true/false)
param (
    [string]$TfsServerUri = $args[0],
    [string]$ProjectName = $args[1],
    [string]$BuildDefinitionName = $args[2],
    [string]$MyBuildNumber = $args[3],
    [string]$MyQuality = $args[4],
    [bool]$RetainBuild = $args[5]
)

$ErrorActionPreference = "Stop" ;
if ($Verbose) { $VerbosePreference = "Continue" ; }

Write-Verbose -Message "Loading TFS Build assembly..."
[void][System.Reflection.Assembly]::Load("Microsoft.TeamFoundation.Build.Client, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")

Write-Verbose -Message "Loading TFS PowerShell snapin..."
$TfsSnapin = Get-PSSnapin -Name Microsoft.TeamFoundation.PowerShell -ErrorAction SilentlyContinue ;
if ($TfsSnapin -eq $null) {
    Add-PSSnapin -Name Microsoft.TeamFoundation.PowerShell ;
}

Write-Verbose -Message "Getting TFS server instance..."
if ([string]::IsNullOrEmpty($TfsServerUri)) {
    $TfsServer = Get-TfsServer -Path (Get-Location) ;
} else {
    $TfsServer = Get-TfsServer -Name $TfsServerUri ;
}

Write-Verbose -Message "Querying builds..."
$BuildServer = $TfsServer.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer]) ;
$DetailSpec = $BuildServer.CreateBuildDetailSpec($ProjectName, $BuildDefinitionName) ;
$DetailSpec.QueryOrder = "FinishTimeDescending" ;
$DetailSpec.MaxBuildsPerDefinition = 10 ;
$Builds = $BuildServer.QueryBuilds($DetailSpec).Builds ;

# Find the matching build, then set its quality and keep-forever (retain) flag
foreach ($build in $Builds) {
    if ($build.BuildNumber -match $MyBuildNumber)
    {
        $build.Quality = $MyQuality ;
        $build.KeepForever = $RetainBuild ;
        $BuildServer.SaveBuilds($build) ;
    }
}
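
For example, an unattended deploy step could finish with something like the lines below; the script file name, collection URL, and build values are purely illustrative.

# Illustrative only: the values below are made up. From a PowerShell deploy step,
# call the script (saved here as Set-BuildProperties.ps1) right after deployment:
.\Set-BuildProperties.ps1 "http://tfsserver:8080/tfs" "MyProject" "MyApp-Nightly" "MyApp-Nightly_20120115.3" "Deployed to QA" $true

# From a non-PowerShell deploy job (e.g. a batch step), the equivalent would be:
# powershell.exe -Command "& '.\Set-BuildProperties.ps1' 'http://tfsserver:8080/tfs' 'MyProject' 'MyApp-Nightly' 'MyApp-Nightly_20120115.3' 'Deployed to QA' $true"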
Hope this helps someone…

Configuration management metrics….performance indicators

How an integrated build, package, and deploy process helps increase overall SDLC productivity…

I often see questions like: how does good config management improve productivity (or how does bad config management kill it)? Are there any metrics around that? Where do I stand? What should I watch out for?

Here are some common key performance indicators that can help you review the health of software configuration, build, and release management in an integrated development environment.

Problem 1:

High build cycle time. Under ideal conditions, a 900 KLOC C# codebase can be completely compiled in less than 15 minutes on an ordinary build machine, with another 20 minutes to package and deploy to a few boxes.

Problem indicators:

Going one level finer, how long does it take you to:

  • Generate a candidate build from the development environment? Are you seeing frequent compilation failures, especially while integrating code?
  • Get it packaged according to the product deployment plan?
  • Get those bits deployed on the development environment for initial testing?

Problem 2:

High build rollout time to QA. Typically, how long does it take you to promote a build to different environments? A day or more?

Considering candidate build generation takes 15 minutes, an efficient build rollout can happen in as little as 1 hour to any testing environment, and to as many as 50 tester workstations.

Problem indicators:

  • Where do you get stuck? Environment maintenance? Too many cooks in your kitchen? Do people directly manipulate the deployed environment rather than maintaining a source-controlled version?
  • Do you manually double-click the .msi and install it on every box?
  • When you promote the same build to higher environments, do you have to manually touch up the application configuration parameters for each environment? Meaning, for dev x=50, for test x=900; how is that change handled? Manually?
  • Do you manually configure your application?
  • Does release coordination consume a lot of time to get a candidate build promoted to the next level? Possibly due to late integration; check problem #1.
  • Have you defined entry and exit criteria for a build?

Problem 3:

How many $$$ do you spend on configuration/build/release management? 100+ hours for a build rollout? (Possibly because of a manual build/package/deploy process, which consumes a lot of time from different roles: release coordinator, build engineer, DBA, etc.)
In an ideal world, it need not be more than 2 hours per build rollout, maybe 5.

If not…. Here are the problem indicators:

  • Manual build/package/deploy
  • Manual coordination with external teams like DBA…
  • No defined entry/exit criteria..

If your shop is experiencing a few or some of the above-mentioned challenges, there are certainly ways to improve your overall config, build, and release management process for an integrated development environment. Get ready with the following solutions.

Implement Continuous Integration

Many shops still integrate code the old-fashioned way: say, on a 6-week development cycle, 4 weeks of independent development (only local builds) and the last 2 weeks for integration. Late integration typically fakes the progress; a lot of hidden tigers appear when you start to integrate.

If your SCM tool doesn't support frequent integration, it's worth considering a switch to a tool like TFS, which supports CI builds out of the box.

Do you integrate continuously? No? Do you panic about frequent changes to your foundational code base?

Please consider implementing continuous integration: it's a great way to make sure a code change works well with everyone else's, and with efficient build/package/deploy automation it lets you test the change within the next 30 minutes.

Unattended Build/package/deploy process

Build:

  • Efficiently automate the build
  • Coach the team in such a way that “build break is a crime” – it should get immediate attention and action…

If the build is manual, that certainly hinders your progress. Automate the builds; almost all the tools available in the market support continuous integration and an automated build process. Make sure a compiled assembly is ready for testing as soon as possible after a change.

Package:

  • Automate the packaging process
  • Tightly integrate it with the build process; the build should automatically trigger packaging
  • Maintain centralized packaging scripts, otherwise you will end up packaging some MS Windows assemblies along with your package.. 😉
  • Keep it simple and fast. Say, if .vdproj takes too long to package something, consider a PoC with WiX; in my experience WiX has helped us reduce package size and time, and it's very flexible

An automated and integrated packaging process can really boost your dev team's productivity.
Consider a packaging model that is manual: post-build, the build server sends out an e-mail, and a build engineer has to manually copy the assemblies to some location and kick-start the packaging process. They might accidentally forget an assembly while copying, or xcopy might simply fail for an assembly without anyone noticing; the .msi generated from that manual process fakes the testing, and the manual process is more time-consuming as well.
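
Purely as an illustration of "the build triggers the packaging", a post-build step could hand the drop folder straight to WiX. The tool path, the .wxs file, and the drop location below are assumptions, not a recommendation of any particular layout.

# Hypothetical post-build packaging step: compile and link a WiX setup directly
# from the build drop so nobody copies assemblies by hand. All paths are made up.
$wixBin = 'C:\Program Files (x86)\Windows Installer XML v3\bin'
$drop   = '\\buildserver\drops\MyApp\latest'

& "$wixBin\candle.exe" Product.wxs -dDropFolder=$drop -out Product.wixobj
& "$wixBin\light.exe" Product.wixobj -out MyApp.msi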

Deploy and post deploy:

  • Automate the deploy process; there are plenty of easy tools available like psExec, WinRS, and PowerShell scripts (a rough PowerShell remoting sketch follows this list)
  • Integrate deploy with the build/package process. This is awesome to have: if a developer wants to test a code change, they just have to click a button to queue the build on the build server, and it then builds, packages, and deploys under one central identity. This also lets you restrict everyone else from accessing the application servers, so the environment stays clean
  • Establish a process to have an "operational application with bare minimum functionality" at the end of each business day; create another automated build for the end of each day which deploys the latest bits to the environment
  • Establish a build verification test (BVT) process
  • If possible, automate the BVT too; sometimes a manual BVT is a killer, and executing test cases by hand for hours every day irritates the development community
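
To make the first bullet concrete, here is a minimal PowerShell remoting sketch, not a drop-in deploy process; the server names, share path, and .msi name are all made up, and it assumes WinRM is enabled on the targets and the account has admin rights.

# Hypothetical remote-deploy sketch using PowerShell remoting. Server names,
# paths, and the package name are made up; WinRM must already be enabled.
$servers = 'webapp01', 'webapp02'
$package = '\\buildserver\drops\MyApp\latest\MyApp.msi'

foreach ($server in $servers) {
    # Make sure the staging folder exists, then copy the package over the admin share
    Invoke-Command -ComputerName $server -ScriptBlock { New-Item -Path 'C:\deploy' -ItemType Directory -Force | Out-Null }
    Copy-Item -Path $package -Destination "\\$server\c$\deploy\MyApp.msi"

    # Install it silently on the remote machine
    Invoke-Command -ComputerName $server -ScriptBlock {
        Start-Process msiexec.exe -ArgumentList '/i C:\deploy\MyApp.msi /qn' -Wait
    }
}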

Configuration Management

Finally, make some changes to source control too:

  • Envision the source tree structure in a way that helps a rapid development cycle
  • Define efficient check-in policies to avoid code duplication, confirm unit testing, enforce the review process, etc.
  • If your team often breaks builds, consider creating some tiny tools that mimic exactly what happens on the build server, like taking the latest sources and building everything locally the same way the build server does (a rough sketch follows). Otherwise teams might not be working in source control, and code integration will often end in failure
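
For illustration only, such a tool could be as small as the PowerShell wrapper below; the workspace path, solution name, configuration, and MSBuild location are assumptions, and the point is simply to run the same steps the build server runs.

# Hypothetical "build it like the build server does" helper for developers.
# Workspace path, solution, and configuration are assumptions; adjust to your tree.
$workspace = 'C:\src\MyProduct'
$solution  = 'MyProduct.sln'
$msbuild   = "$env:windir\Microsoft.NET\Framework\v3.5\MSBuild.exe"

Set-Location $workspace

# Pull the latest sources, exactly as the build server would
& tf.exe get . /recursive

# Build with the same configuration and targets the server build definition uses
& $msbuild $solution /t:Rebuild /p:Configuration=Release
if ($LASTEXITCODE -ne 0) { Write-Error "Local build failed; fix it before checking in." }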

Largely, continuous integration and an integrated build, package, and deploy process can really boost your overall SDLC productivity.