Set up a 100% non-Windows build-deploy-test flow from TFS 2015

TFS 2015 comes with many new build management features. One of my favorites is the new cross-platform build agent, which is open sourced on GitHub. This capability unlocks huge opportunities for those who have invested in the VS ALM stack. VS ALM already offers the best suite of tools for the Windows/Microsoft world of things; this new cross-platform build capability, together with the revamped build authoring experience in the web browser, makes the job much easier.

Let's say I have a web app and want to deploy it to Nginx on an Ubuntu server. To coordinate the Build-Deploy-Test flow, I'm going to use the vso-agent. The rest of the stack is 100% non-Microsoft, so let's see how it all works together.

Set up a brand new Ubuntu box

I use Vagrant to spin up a new base Ubuntu box from HashiCorp.

I don't have Node.js, Nginx, or npm installed.

[Screenshot: fresh Ubuntu box without Node.js, Nginx, or npm]

Set up an Ansible playbook for app deployment

I'm assuming you are familiar with Ansible. I'm not an expert in this cool technology, but it wasn't too hard to pick up; it's a very interesting one for orchestrating a DevOps flow from environment provisioning through monitoring. I have a simple playbook that installs Node.js and Nginx, copies my static content over to the Nginx box, and runs a post-deployment XML/JSON transformation.

Below is the snippet, a pretty simple, self-explanatory .yml file:

  • the highlighted tasks are the modules I use to install Node.js and other prerequisites
  • a task to copy index.html
  • a task to transform the config file using existing Node modules

[Screenshot: deployment playbook .yml]
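
Since the playbook itself only appears in the screenshot above, here is a minimal sketch of what such a playbook might look like. It is my own reconstruction under assumptions: the host group, role name (prepare_env, described next), and file paths are illustrative, not the original file.

---
- hosts: webservers
  become: yes
  roles:
    # installs Node.js, Nginx and the other prerequisites
    - prepare_env
  tasks:
    # copy the static content over to the Nginx web root
    - name: Copy index.html
      copy:
        src: app/index.html
        dest: /usr/share/nginx/html/index.html

    # run the post-deployment xml/json transformation using an existing Node module
    - name: Transform config file
      command: node transform-config.js
      args:
        chdir: /usr/share/nginx/html

    - name: Ensure Nginx is running
      service:
        name: nginx
        state: started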

The prepare_env module just installs Nginx and the set of Node modules required for my app.

[Screenshot: prepare_env tasks]

Another simple run.sh kicks off the playbook (I've set up SSH keys behind the scenes; the sudo password for remote installation is encrypted with Ansible Vault, and the vault password is supplied at execution time via --vault-password-file).

[Screenshot: run.sh]
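
The script is only shown as a screenshot, so here is a minimal sketch of an equivalent run.sh; the inventory and vault password file names are assumptions on my part:

#!/bin/bash
# Kick off the deployment playbook against the Ubuntu box.
# The vault password file unlocks the sudo password encrypted with Ansible Vault.
ansible-playbook -i hosts deploy.yml --vault-password-file ~/.vault_pass.txt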

Set up the TFS cross-platform build agent

Just follow the instructions here and the walkthrough video and you should be all set. Run the agent interactively.

I've installed the agent on my Mac, and it's up and running.

[Screenshot: vso-agent up and running on the Mac]

Set up Ansible on the build agent

In order to run the deployments on the Ubuntu box, I'm going to use Ansible. The vso-agent will kick off the playbook, so Ansible needs to be installed on the agent machine; in my case that's the MacBook. Run "brew install ansible" to install it.

Connect the dots

Now that we have the fundamentals covered, the idea is to create a build that pulls my source code from the TFS Git repo, runs the Mocha tests (unit and integration tests), and, if successful, deploys through Ansible.

I use the built-in tasks provided by the new build system.

npm install – to install dependencies

[Screenshot: npm install build step]

A shell script that kicks off the Mocha tests (really it's one line, "npm test"; I don't know if there is a better way to kick off npm tests).

[Screenshot: Mocha test build step]

A shell script that kicks off the Ansible playbook mentioned earlier (run.sh above).

[Screenshot: Ansible deployment build step]

Queue the build ..

Make sure to select the correct agent queue: in this case not Hosted but Default, because I added my Mac vso-agent to the Default pool.

[Screenshot: queue build dialog with the Default queue selected]

Running the build on my local vso-agent

[Screenshot: build running on the local vso-agent]

Viewing the build status in VSO – a nice rolling log.

[Screenshots: rolling build log in VSO]

The output below shows the Node.js and Nginx installation, copying files, transforming the sample .xml/.json config, and finally starting Nginx.

[Screenshots: Ansible output for the installation, file copy, config transformation, and Nginx startup]

So far, the cross-platform build agent is a very impressive capability. There are some shortcomings in terms of the existing out-of-the-box tasks; for example, it ships with a Gulp task, but to run Grunt, which is still widely used, we need to wrap the Grunt execution in a shell script and run that script instead. Still, it is a step in the right direction to enable everyone to use the VS ALM stack.


What to focus on in the API testing effort

The Test Automation Pyramid and related discussions offer enough education as to why GUI tests can become a burden over time. They can put your neck at risk if they are written without considering the user's intentions. I have heard of situations where regression tests were passing but the system wasn't fit for use because it didn't fulfil a single meaningful workflow; the tests were barely checking whether UI elements were present and populated with values. Isn't that the job of UI unit tests? There is plenty of guidance around BDD and designing and writing meaningful GUI tests. In addition to that, what I think adds more value and reduces the burden of brittle GUI tests is more emphasis on API/service tests. These days most products are built around a service-oriented architecture and offer RESTful or some other kind of web services. Most UIs are modern, developed for the web, decoupled from the business logic, and talk to a web service to perform that logic. In these situations, API tests are absolutely necessary. Even when the web services are private and known only to my GUI, testing through them adds a lot of value and removes considerable overhead and heavy lifting that the GUI tests would otherwise carry.

While there are plenty of tools around to help with API testing, I'm very impressed with Visual Studio Web Testing. I'm also exploring other tools like FrisbyJS and Yadda. A more tooling-specific write-up will come later, but for now I'm staying away from tooling and stressing the intentions. Tests with good intentions yield better results, so what needs to be tested at the API layer to offer more confidence?

Assume that for every button click, tab-out, or equivalent GUI action, your application calls an API/web service to perform some business logic, and the GUI is updated based on the response. The GUI then needs to be tested for user interactions and presentation, while data-driven verification and thorough business-logic verification can be offloaded to API testing. API tests can also be written while the API code is being written and need not wait for the GUI to be ready. A short list of questions, in my opinion (of course, there are more):

  • How do we send data – headers, query string parameters, payloads?
  • Which headers are mandatory?
  • Which parameters are mandatory? Which are optional?
  • What if the expected header information is not sent?
  • How is the payload validated?
  • What is the content type for the payload?
  • Is the payload validated for types, nulls, and correct values?

A recent experience of mine illustrates the last point. One of the steps in an API test workflow made a call to an endpoint with appropriate headers and payload. In the business logic, that call serializes the data and calls out to a rules engine. We observed the rules engine crash, and after two hours of debugging with the developer we found that data serialization had returned null, and a call to the rules engine with null caused all the trouble. The root cause was that the API call carried the payload but not the content type for it. Although this may not happen in practice because our GUI always calls with appropriate headers, it leaves an easy hole for intruders to crash our system. It also surfaces real code-quality issues: it was evident that the entire class/controller implementation had no null or type check whatsoever. The forgotten hero, "defensive coding".
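
To make the point concrete, below is a minimal defensive-coding sketch for an ASP.NET Web API controller. The controller name, payload shape, and messages are my own illustrations, not the actual code from that project.

using System.Web.Http;

namespace My.Api.Controllers
{
    // Hypothetical request contract; the real payload shape is not shown in this post.
    public class RuleRequest
    {
        public string PolicyId { get; set; }
    }

    // If the caller omits the Content-Type header, model binding yields a null request;
    // checking for that here returns a 400 instead of passing null on to the rules engine.
    public class RulesController : ApiController
    {
        public IHttpActionResult Post([FromBody] RuleRequest request)
        {
            if (request == null)
            {
                return BadRequest("Request payload is missing or could not be deserialized.");
            }

            if (string.IsNullOrWhiteSpace(request.PolicyId))
            {
                return BadRequest("PolicyId is required.");
            }

            // Call the rules engine only after the input has been validated (omitted here).
            return Ok();
        }
    }
}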

(See the Pluralsight course "Defensive Coding in C#".)

In particular, API testing helps raise confidence around validation and exception handling (a sample validation rule sketch follows the list below):

  • Does input validation work?
  • Do we evaluate the input data schema?
  • How do we handle missing parameters?
  • How do we handle wrong input?
  • Is the API giving the right error codes?
  • Does it issue an appropriate error if the wrong content type is requested?
  • Are these errors logged?
  • What if the whole system or some part of it is unavailable – how would that affect the user?
  • What if the system crashes during a transaction – how would it recover, and what error would it give?
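
As a sketch of how some of these checks could be automated with Visual Studio Web Testing (mentioned above), here is a hypothetical custom validation rule that asserts the HTTP status code an API returns, for example a 400 when the wrong content type is sent. The class and property names are mine, not part of the framework.

using System.ComponentModel;
using Microsoft.VisualStudio.TestTools.WebTesting;

namespace My.TestingComponents.WebTestExtensions
{
    [DisplayName("Validate expected status code")]
    [Description("Fails the request if the response status code does not match the expected value")]
    public class ValidateExpectedStatusCode : ValidationRule
    {
        [Description("The HTTP status code the API is expected to return")]
        public int ExpectedStatusCode { get; set; }

        public override void Validate(object sender, ValidationEventArgs e)
        {
            // Compare the actual response status code against the expected one.
            int actual = (int)e.Response.StatusCode;
            e.IsValid = actual == ExpectedStatusCode;
            e.Message = string.Format("Expected status code {0}, got {1}", ExpectedStatusCode, actual);
        }
    }
}

Note that for non-2xx checks, the request's ExpectedHttpStatusCode also needs to be set so the web test itself does not treat the response as a failure.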

Invariably, if these checks are left to GUI-driven transactions, the issues only surface during some transaction from the GUI, and we end up spending hours analyzing and tracing them back to the API.

The time spent on proper API testing versus fixing bugs later may be about the same; however,

It’s about bug prevention vs bug fix,

It's about the developer mindset: addressing issues while developing vs bug fixing under the gun,

It’s about more stable API tests vs brittle GUI tests,

It's about testing within the sprint and gaining confidence vs deferring GUI tests to subsequent sprints and building up tech debt.

API testing is certainly a valuable and worthy investment.

Extend the Visual Studio Web Test framework to extract a JSON value

The previous post hinted at how the Visual Studio Web Test framework can be used to perform service/API-level tests. These types of tests come in handy for gaining more coverage, hitting the endpoints your GUI hits without depending on the GUI, driving end-to-end or system tests with different data combinations, and so on.

The Visual Studio Web Performance Test capability has been around for a while, mainly used in performance and load testing. However, with some tweaks, we can use the same framework to test our APIs.

The first requirement: VS web tests handle HTML responses out of the box, but API calls typically return XML or JSON responses. How do we handle this? Pretty simple: the VS web test framework is extensible, and here is the API documentation.

Below is a sample of how I created a custom extraction rule to extract a JSON node value for my API test. It uses the Newtonsoft Json.NET library and JToken.Parse to parse the JSON response; there is not much to explain.

using System.ComponentModel;
using Microsoft.VisualStudio.TestTools.WebTesting;
using Newtonsoft.Json.Linq;

namespace My.TestingComponents.WebTestExtensions
{
    [DisplayName("Extract Json node value")]
    [Description("Extracts the value of a Json node from the response")]
    public class CustomExtractJsonNodeValue : ExtractionRule
    {
        [Description("Json path to extract value with")]
        public string JsonNodeName { get; set; }

        // The Extract method. The parameter e contains the web performance test context.
        public override void Extract(object sender, ExtractionEventArgs e)
        {
            var o = JToken.Parse(e.Response.BodyString);
            string propertyValue = (string)o.SelectToken(JsonNodeName);

            if (propertyValue != null)
            {
                e.WebTest.Context.Add(ContextParameterName, propertyValue);
                e.Message = "Successfully added " + propertyValue + " to the context parameter " + ContextParameterName;
                e.Success = true;
            }
            else
            {
                e.Message = "Couldn't add " + JsonNodeName + " to the context parameter " + ContextParameterName;
                e.Success = false;
            }
        }
    }
}
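
As a usage illustration, the rule can be attached to a request in a coded web test. The sketch below is my own, with a hypothetical endpoint and an assumed "orderId" node in the response:

using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;
using My.TestingComponents.WebTestExtensions;

namespace My.TestingComponents.ApiTests
{
    // Hypothetical coded web test wiring up the custom extraction rule.
    public class CreateOrderApiTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // First request: assumed to return a Json body containing an "orderId" node.
            var createRequest = new WebTestRequest("http://localhost/api/orders");
            createRequest.Method = "POST";

            var body = new StringHttpBody();
            body.ContentType = "application/json";
            body.BodyString = "{ \"item\": \"sample\" }";
            createRequest.Body = body;

            var extractOrderId = new CustomExtractJsonNodeValue();
            extractOrderId.JsonNodeName = "orderId";
            extractOrderId.ContextParameterName = "OrderId";
            createRequest.ExtractValues += new EventHandler<ExtractionEventArgs>(extractOrderId.Extract);

            yield return createRequest;

            // Second request: the extracted value is available as the {{OrderId}} context parameter.
            yield return new WebTestRequest("http://localhost/api/orders/{{OrderId}}");
        }
    }
}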

Test Automation Pyramid and TFS

In the last post, there was a little bit of grousing about test automation, the need to approach it strategically, and the side effects when Test Automation == GUI automation.

There was also another post talking about quality practices for Agile teams.

My intent here is to connect them and share how I would approach a test automation strategy for a .NET project using the Visual Studio stack of tools.

If I approach quality engineering with the different perspectives that Janet Gregory & Lisa Crispin explain in their book, the diagram below would be my goal.

[Diagram: the goal, mapped across the Agile Testing quadrants]

In that picture, test automation plays a significant role in Quadrant 2, and the hypothesis is that for a .NET team, VS ALM will help them be up and running in no time. Beyond coverage for test automation, the tight integration with other ALM capabilities is a great plus.

[Diagram]

Additionally, for the Quadrant 4 tests such as performance and load testing, Visual Studio offers native support for local execution, and any of the tests already written can be reused. Visual Studio Online also supports executing load tests from the cloud.

Test Automation Pyramid in reality

Test Automation Pyramid?

The book Succeeding with Agile: Software Development Using Scrum established the importance of approaching test automation in a systematic way, with its do's and don'ts.

In essence, the idea is to approach test automation in a meaningful way that offers ROI.

Sadly, in many places, tools steal the attention when it comes to automation. More precisely, GUI automation tools steal much of the attention, because those places have testers who prefer to sit on top of the application and automate just to test the functionality. Those who hate record-and-replay tools get obsessed with another set of tools that programmatically talk to web browsers and other GUI technologies to help automate GUI tests.

While these approaches offer great value compared to manual testing, they carry potential risk and raise the need to look at test automation strategically.

For example, awareness of unit testing, which tests the lowest-level unit of the product, does exist; many frameworks are available, and teams invest time in writing unit tests. While the quality is relative, tools like Sonar can help inject thinking around continuous inspection and emphasize continuous quality inspection over a one-time quality audit. In reality, knowingly or unknowingly, many teams extend their unit tests into functional tests and suffer the side effects.

On top of unit tests, many teams jump right into GUI automation. There are a zillion tools out there to help with record and replay, efficient techniques to spy on the GUI, and automatically spinning up different browsers to run tests.

The pyramid becomes a cone; adding the manual test effort on top, it looks like an ice-cream cone.

So what do we do? How do we get back to basics and align with the pyramid? First of all, is the pyramid important and meaningful, and does one size fit all? The book offers answers for most of these questions. In my opinion, aligning with the pyramid increases ROI, reliability, and trust, and helps hit go-to-market timelines. Approaching test automation strategically is mission critical to getting the best out of automation, which in turn increases the possibility of delivering value to our customers.

Heavy GUI automation might work initially. Over time, as the product grows and functionalities change for business reasons, GUI automation requires a lot of attention and rework to make sure test failures make sense. Because these automations depend on the GUI, they tend to break when a simple change occurs in the GUI and need careful coordination between team members to deliver value. Industry-standard GUI automation tools don't offer automated detection and correction when something changes in the GUI. GUI automation needs a completed GUI, so it always lags behind, sometimes two or three sprints behind the code. When automation of functionality lags behind, late feedback to the development team becomes the biggest challenge. Another critical side effect is that end-to-end testing is at great risk; the picture below says the rest.

[Picture: the ice-cream cone anti-pattern]

Quality practices for Agile projects, a different perspective

Almost everything in software development has evolved, been reshaped, and been taken to a different level. It's highly likely that we are no longer approaching product and software development the same way we would have ten years back. A lot of enterprises have embraced the Agile movement, which guided and emphasized the items on the left over those on the right.

At a high level, we are developing in short cycles, adding value continuously, demonstrating that value to the stakeholders, and being receptive to their candid feedback.

With that being the drive at a high level, internal engineering practices have gone through vivid changes, notably:

Siloed functional team members vs cross-functional team members

Managers planning vs whole-team planning games

Weekly status reports vs daily standups

PUSH vs PULL work units

Large big bang release vs shorter incremental releases and supporting practices

UAT testing at the end vs continuous usability testing

One performance test at the end vs continuous performance testing

and so on..

Plenty of changes have been driven at the engineering level; these are not mere changes in terminology and ceremonies but require a significant shift in mindset.

While there is plenty of motivation to transform teams and help them adopt Agile practices, most of it tends to focus on developer practices such as whole-team planning, Test-Driven Development, simple design, pair programming, Continuous Integration, Continuous Delivery, and so on. Where do the quality engineering practices fit in? If there are no more specialist testers and testers learn to develop code, do developers learn the needed testing skills? What do we do to prevent bugs and detect them early in the cycle? How do we transform traditional quality engineering practices into Agile ones, and what is the role of a tester on an Agile team? Many more questions...

If you have questions along these lines, please read Agile Testing by Janet Gregory and Lisa Crispin. They talk about a lot of good practices that Agile teams can adopt to build quality in.

Amongst many great thoughts, one of the most interesting points is that Quality != Testing.

They remind us of the underlying goal: ultimately we need to make sure our implementation meets our intention, which meets the business need, on a continuous basis.


This is a wonderful goal; how do we accomplish it? It looks like we need a 360-degree check on everything we do. How can we do this every sprint?

Further reading of the book will guide us towards achieving this goal. Here are some highlights that I like.

Quality is about making sure that the team shares a common understanding of the problem space: team collaboration, asking the right questions of the PO, and understanding before committing. This is nothing new, so what does quality have to do here? Fair question, but the emphasis is to think beyond a single story during planning. Think of the ecosystem and how this little story could affect it; think of the story as one piece of a bigger puzzle board and the importance of getting it right; think about the chain reactions this little change could trigger; and try to identify the end-user workflows that this story contributes to or affects, and derive the testing scenarios from them.

So at the end of the planning meeting, the entire team has talked about the story beyond design and coding: it has understood how an end-user's actions would cross this functionality and what an end-user would do before and after, which helps the whole team understand the problem space. When the time comes, it's likely that someone will write code and someone else might write tests, but towards the same goal. Potentially, tests can be designed and written even before the code is ready; upfront test design and test case development can help the developer and prevent bugs. The chance of finishing the story with the desired quality within the sprint is high, rather than deferring the test automation to the subsequent sprint. Of course, automation plays a role here, but automation with the appropriate intent plays a greater one.

These are partly my takeaways from the book's Quadrant 2. There are other quadrants:

Quad 3 – talks about the importance of applying quality practices from the customer/end-user perspective and asking hard questions of your product

Quad 4 – talks about how tools and technologies help apply quality practices from the customer/end-user perspective

We started practicing Quadrant 2 on some of our projects and are seeing great value.