Thursday, April 7, 2016

Approaches to Distributed Development of a Software Product

Note: This article will be published as a series of installments. See the change history at the end of the article to track changes.

Introduction

Just recently I came to think about a proper setup for a product developed by distributed teams. As it happens, the use of git was a prerequisite. When working with Distributed Version Control Systems (DVCS) like git, teams see themselves faced with the task of figuring out how to organize the source code they contribute to a larger system. As git is a powerful tool with loads of features and means to do things one way or another, it offers both simple and rather complex solutions. In this post I want to explore some major approaches and compare them with each other. I admit that I am biased by concepts like Continuous Integration (CI) and Continuous Delivery (CD), which may influence my conclusions.

I consider this an experiment and will define an example product to set some constraints for the exploration. Some findings may be restricted to this setup; others may be more commonly valid. However, none of the conclusions claim to be universal. The approaches investigated are taken from daily life. Every single one of them crossed my way, and I consider all of them worth looking at. Even an idea as far-fetched or remote as possible would be worth examining, at least to provide reasons why it wouldn't be a good one.

Sample Product

Let's assume a product of considerable size, say 1M LoC. Let's further assume the product consists of a number of large components, say 5-10, which themselves may be made of smaller components. Team setup follows the top-level component structure by and large, although an individual may see the need for changes in several components. The development team consists of 50-500 people actually touching code. Finally, the product ships as one. There are no independent releases or patches of parts of it.
None of the components is intended to be reused by other products. Components are a reflection of the current architecture of the product.
The current dependency structure of our product looks like this:


Components A, B and E are top-level components forming the collection of services the product consists of. Component F is a UI framework components A and B plug into when present; F itself does not care about any components plugging into it. Components BA and BB are backends to A and B respectively. Component E is an extension to B.
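To make this easier to refer back to, the same structure can be written down as a plain dependency map. This is just a TypeScript sketch of my reading of the diagram, nothing the product itself defines:

```typescript
// Sketch of the product's dependency structure as a plain map.
// A value lists the components the key component depends on.
const dependencies: Record<string, string[]> = {
  A:  ["BA", "F"], // service A: own backend BA, plugs into UI framework F
  B:  ["BB", "F"], // service B: own backend BB, plugs into UI framework F
  E:  ["B"],       // extension to B
  F:  [],          // UI framework, unaware of what plugs into it
  BA: [],          // backend to A
  BB: [],          // backend to B
};
```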
Components are not necessarily identical with libraries, archives or any other sort of distributable artifact. Basically they are a logical structuring of the source code.
The product has a maintenance obligation with respect to already released versions.
What approaches could be applied to organize the source code of the given product? These are the ones that come to my mind:
  • Component Repositories
  • Topic Branch
  • Feature Branch
  • Trunk Based Development

Approaches

When exploring the different approaches I will try to shed some light on a bunch of questions that far too often do not get considered. These questions touch several aspects of the software development life cycle. Think of questions like:
  • How do we get access to the component we depend on to make use of it in our component?
  • How do we make sure we get information about public API changes soon enough for us to incorporate them?
  • Should there be orchestrated schedules for component releases?
  • How do we handle splits/merges of components?
  • How do new top level components come to life?
  • How do top level components cease to exist?
  • How often do components ship new versions?
  • How do we make sure there will only be one version of each component used inside the product? Or how do we make sure components A and B are developed against the same version of component F?
  • When will integration testing be done?
  • How will it be done?
  • How does a component test itself in the context of the product? Or in the subcontext of its dependency tree?
  • How is the product being assembled?
  • Which component versions should be used?
  • How do component versions get managed?
The list may not be complete. It already holds some tough issues, though. What would be the answers when we work with separate Component Repositories?

Component Repositories

Our development team values highly decoupled components which interact via public APIs only. Any use of non-public APIs is prohibited. The developers understand the temptation of using non-public APIs for the sake of re-use and want to prevent that by hiding the sources of their components from other components as much as possible.
Our development team has learned that a git repository for a large product worked on by many developers tends to become large, which increases the time to clone and fetch. I've seen such repositories exceed 4GB.
Idea: A small repository containing only one top-level component worked on by 5-10 people seems to be a fair trade. The team could work in isolation. Their repository would not be littered with source code they do not own or are likely to never touch. Things are simple when it comes to developing their component.

Development

For a developer working on component F, which has no dependencies or only third-party dependencies, life would be rather easy in this world. Only things one has to deal with directly are present. One could build the whole component, including tests, in a quite focused way. As developers are free to add or remove subcomponents of their component rather freely, they would not feel much of a downside.
Not all of the teams are happy with that setup, though. While the team providing the top level component F is quite happy with this approach, the teams depending on them are not. Why is that?
In order to plug into component F, components A and B need to know about, and have access to, the currently valid public API, at the very least to be able to mock the dependency away in their tests and to use the right calls in their production code. However this may be done in the particular language the product is built with, there has to be some sort of communication: either interfaces are provided as files or as API documentation. Depending on the language, these files have to be present during the build. At the latest when running integration tests of component A or B with component F, a real component F is required.
This adds an obligation to any top-level component development team's responsibilities: there has to be some sort of release of the component, in a form other teams can rely on for their development. They have to maintain a release schedule, and they have to actually release the component and make it available for the other teams to consume. Usually there will be some stable and some development version available. These versions could be used by components A and B for their development.
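To illustrate, a consuming team might record the released version of F it develops against in a manifest of some sort. The shape below is entirely hypothetical; nothing in this setup prescribes a format:

```typescript
// Hypothetical dependency manifest of component A (a sketch).
// "stable" and "development" mirror the two release lines mentioned above.
const componentAManifest = {
  name: "A",
  version: "2.4.0-dev",
  dependsOn: {
    F:  { version: "3.1.0", line: "stable" },      // released version of F
    BA: { version: "2.3.1", line: "development" }, // A's own backend
  },
};

console.log(`A builds against F ${componentAManifest.dependsOn.F.version}`);
```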

Integration

As long as component F publishes new versions on a regular basis, there will be some sort of "continuous" integration available. Components A and B could make use of the latest component F version and report bugs found in component F, or fix their component's usage of F accordingly. Depending on the release cycle of F, the length of the feedback loop would stretch from rather short to pretty long. During the development phase this may not be a problem, but when the release date closes in it rather certainly will turn into an issue.
Real continuous integration would be hard to achieve. Even if component F published release candidates with every pipeline run, they would have to be verified by the depending components before they could turn into released versions. Thus component F depends on the pipelines of each component up the dependency tree to verify successful usage of F, and of any component that uses F, and so forth. The verification pipeline for F will become pretty long, and in case of bugs found it would have to start all over again.
What's more, if component A uses the development version of F to stay close to the newest features of F, it relies on these development versions actually being released before the release date of the product, as no development versions of F will be shipped with the released version of the product.
Another complication would be the possible divergence of the F version used by components A and B. Just to make sure they are actually using the very same version of F, there needs to be some governance enforcing this constraint.
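Such governance could be automated as a small check across the component manifests. A minimal sketch, assuming the hypothetical manifest shape from above:

```typescript
// Sketch: report dependencies that are pinned to diverging versions.
type Manifest = {
  name: string;
  dependsOn: Record<string, { version: string }>;
};

function findVersionConflicts(manifests: Manifest[]): string[] {
  const seen = new Map<string, { version: string; usedBy: string }>();
  const conflicts: string[] = [];
  for (const m of manifests) {
    for (const [dep, { version }] of Object.entries(m.dependsOn)) {
      const prior = seen.get(dep);
      if (prior === undefined) {
        seen.set(dep, { version, usedBy: m.name });
      } else if (prior.version !== version) {
        conflicts.push(
          `${dep}: ${prior.usedBy} pins ${prior.version}, ${m.name} pins ${version}`
        );
      }
    }
  }
  return conflicts; // empty list means A and B converge, e.g. on the same F
}
```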

Integration Testing

When teams are focused on developing their components, they tend to consider any usage of their component the business of someone else. The integration of component A or B with component F will probably be tested, but the product as a whole will not. Who would be responsible for performing this assembly task with all its required testing?
The product in question is an assembly of just the right versions of all its components. Thus the product would be represented by a bill of material (BOM) only. The product assembly would pull in all the named versions of the components and perform the required packaging. What about the testing, then? There would have to be a team taking care of this assembly and the integration testing, to make sure the BOM holds a valid and working combination of component versions. The assembly pipeline would have to run the integration tests of all components and eventually provide for additional integration tests on product level. This team would not develop anything in terms of production code, which bears the risk of them not knowing about the features implemented. Dedicated communication would be required to make sure the assembly (or testing) team knows what to test for.
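In its simplest form such a BOM is nothing but a mapping from component to version, and the assembly a function over it. A sketch with made-up version numbers and stand-in helpers:

```typescript
// Hypothetical bill of material (BOM): the product is just this mapping.
const bom: Record<string, string> = {
  A: "2.3.1", B: "1.9.0", E: "0.8.5",
  BA: "2.3.0", BB: "1.4.2", F: "3.1.0",
};

// Stand-in for fetching a released artifact from whatever store is in use.
function fetchReleasedArtifact(component: string, version: string): string {
  return `${component}-${version}.zip`; // placeholder for a real download
}

// Sketch of the assembly step: resolve every pinned component, then package.
function assembleProduct(bom: Record<string, string>): string[] {
  return Object.entries(bom).map(([c, v]) => fetchReleasedArtifact(c, v));
}

console.log(assembleProduct(bom)); // [ 'A-2.3.1.zip', 'B-1.9.0.zip', ... ]
```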
Another risk would be product-level tests breaking due to changes in components. As the product-level tests run in the product assembly pipeline, no component pipeline will run them and thus will not get feedback from them. It is the same as component A running integration tests with component F, which could find bugs in F outside the pipeline of F. At any of these points there will be feedback someone has to communicate to the depending component. This feedback would have to trickle down the dependency tree, with all the communication that comes along with that.
It would be best if a component could test itself in the context of the product within its own pipeline. To do that, it would have to get access to the current BOM describing the product and to the product-level tests. In order to run the product-based tests, it would have to build the product based on this BOM and replace itself with the component version under test. Component A's build and test process suddenly needs to know about the product and its assembly, thus duplicating knowledge.
Another way would be to trigger the product assembly pipeline, replacing the version of component A in the BOM with the latest release candidate of A. If the product assembly pipeline succeeds, the release candidate could be considered verified; it could be released and the BOM of the product changed accordingly. In this case the knowledge would not be duplicated, but we would need a feedback loop from the product assembly pipeline back to the pipeline of component A. In order to get close to continuous integration, any pipeline run of component A would include and wait for a pipeline run of the product assembly.
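Sketched in code, this feedback loop could look as follows; the pipeline trigger is reduced to a plain function call, and all names are made up:

```typescript
type Bom = Record<string, string>;

// Replace a single component's version in the BOM, leaving the rest pinned.
function withCandidate(bom: Bom, component: string, candidate: string): Bom {
  return { ...bom, [component]: candidate };
}

// Stand-in for the product assembly pipeline including its tests.
function runProductAssemblyPipeline(bom: Bom): boolean {
  console.log("assembling and testing product from", bom);
  return true; // placeholder: true means all product-level tests passed
}

// Called from component A's pipeline with its latest release candidate.
function verifyCandidate(bom: Bom, component: string, candidate: string): boolean {
  const verified = runProductAssemblyPipeline(
    withCandidate(bom, component, candidate)
  );
  if (verified) {
    bom[component] = candidate; // release it and update the product BOM
  }
  return verified;
}
```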

Release

As said before, the product is represented as a bill of material (BOM) containing the proper components and their versions. At this point uniqueness of components could be enforced, i.e. the version of component F to be used. Following the one-repository-per-top-level-component rule, the product, as the topmost component, would reside in its own repository along with the product-level tests.
Releasing would mean collecting all released versions of the components making up the product and running the product-level tests, as there would be no other tests available. If the versions of, say, component B and component F do not fit together, because B used a different version of F in its integration testing, there is a risk that the product-level tests will not discover this mismatch, as they do not repeat the level of testing done at component B's level. To avoid this, mitigations would have to be provided, such as: a component's integration tests are made available to the consuming components as well, or a component gets hold of the product's BOM to make sure it uses the proper versions of all components it depends on (a sketch of such a check follows below), and so on.
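Letting a component compare the versions it tested against with the product BOM is again a small automated check, reusing the hypothetical manifest shape from before:

```typescript
// Sketch: report where a component's tested-against versions differ from the BOM.
function checkAgainstBom(
  component: string,
  dependsOn: Record<string, { version: string }>,
  bom: Record<string, string>
): string[] {
  const mismatches: string[] = [];
  for (const [dep, { version }] of Object.entries(dependsOn)) {
    if (dep in bom && bom[dep] !== version) {
      mismatches.push(
        `${component} tested against ${dep} ${version}, but the BOM ships ${bom[dep]}`
      );
    }
  }
  return mismatches; // e.g. B tested against F 3.0.0 while the BOM ships F 3.1.0
}
```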
This would introduce yet another set of communications required to mitigate issues induced by the general approach of Component Repositories.

Refactoring

As long as refactoring takes place within the boundaries of a top-level component, things are fine. When it comes to structural refactorings of the product, i.e. the introduction or removal of top-level components, things become cumbersome.
If there is to be a new service C, we could just create a new component repository and start working on C, adding it to the integration like the other components.
If an existing component ceases to exist in a newer product version, we just cannot get rid of it as long as the maintenance obligation exists. There will be a legacy component around in a repository no one works on full-time anymore. This usually causes the component to rot, for no one will take on responsibility for it. It is just too far out of sight.
Component Repositories make it hard to factor out new components. Consider a part of component A that would be useful for component B as well. How would B get hold of it? The cost of introducing a new component repository for the new reusable component would be quite high. So copying the code and adding it to component B's repository seems reasonable, especially as long as one is still wondering whether this part really is reusable by B. And if it were reusable, and if someone really did avoid the code duplication and open up a new component repository, who would be responsible for it? The newfound component would not be a top-level component, so there would be no dedicated team apart from the one for component A. Would this team be responsible for two repositories now?

Summary

The Component Repositories approach has its advantages when it comes to the development of a leaf component. As soon as interaction with other components due to dependencies is involved, things get messy. Components suddenly need release management and version governance to make sure every component is on the same page. Especially the product assembly part will become a matter of discussion, for no component development team will take responsibility for this integration level. A product assembly team would have to deal with that and would have to take care of product-level testing itself.
Communication would be key in this approach. Whether it is done by introducing additional automatisms to connect component repository pipelines with each other or by human interaction, it adds complexity and the "one has to think of it" sort of things which tend not to be thought of.
As long as a component is not a real deliverable in its own right, i.e. is not used outside of the product and not patched individually, I would consider this approach not practicable.


Approach: Component Repositories. Assessment dimensions: Development (leaf component), Development (non-leaf component), Integration, Release, Refactoring, Organizational Complexity.

Conclusion

As I have considered only one approach yet, the only conclusion I can offer now is that I would not like to go for Component Repositories, even though I do not know a better alternative for now.

Change History

This is the first installment.

Wednesday, November 11, 2015

Agile Testing Days 2015: There Are No Agile Testers - There Are Testing Facilitators

I had the opportunity to attend Agile Testing Days 2015 in Potsdam. It's been the second time. And again, there have been great sessions, inspiring talks, eye-opening chats. But still there is something that bothers me. If you follow my blog you might have noticed the "There Are No Agile Testers" blog posts:


To say the least, they were received a bit controversially. Many testers felt personally offended. I could understand this to some extent. A year has passed since then, and the idea circulated in my thoughts. I was trying to understand what really bothers me about the agile tester thing. I'm not sure I'm done with it yet. But new aspects revealed themselves.

The thing that bothers me the most is that just bringing testers into agile teams does not solve the issues we've had with development and QA departments. Testing still comes last in the development cycle and tends to be skipped or blamed for being a bottleneck. This is nothing I made up, but something that has been said by testers at the conference. In a way, the testers in agile teams establish a silo just as the developers do. Developers rely on testers to get quality into the product and complain if testers do not manage to handle the amount of user stories done, for coding stories tends to be faster than thoroughly testing them.

Over the years I became more and more opposed to silo thinking in teams. This discomfort still grows. I try to find ways that could help to overcome this dangerous tendency. My experience from many years shows that whenever a team starts to separate into silos, team performance, quality, and outcome drop dramatically. Teams start to dissolve, and I've even seen teams disintegrate.

A second aspect that I grow ever more uncomfortable with is how developers are pictured as guys not willing to test, not willing to care for customer needs, not willing to care for quality. A great many developers may fit this description. But another great many developers care for concepts like Continuous Delivery, Lean Startup and DevOps. All of these rely heavily on being responsible and accountable for one's quality. Developers show that they are willing to produce stable, high-quality code that covers actual customer needs. That they are willing to measure customer acceptance and to act accordingly. That they are willing to ship to production as often as possible. I reckon (a new generation of) developers understand(s) pretty well that they are no longer sitting in the basement coding all day long without ever bothering themselves with any consequences their work might have for the world around them.

For quite some time developers proved to be no good at testing. Whether they are just testing agnostics, arrogant my-code-will-not-break guys or anything else you might think they are doesn't count. Testing did not take place in a way and amount that would have been desirable. Because of that, QA people had to be hired to clean up the mess left behind. But no one bothered to tackle the root cause: improving the quality of the code from line one. This would have meant dealing with these strange guys in the basement. So an opportunity has been missed. The mere existence of a QA department that made up for the mistakes developers made encouraged them to code even more carelessly. There simply has been no need to do otherwise.

It is time to reverse this development. Now, as developers develop a sense of responsibility, testers are urgently needed to share the knowledge and experience gained over so many years of testing. This knowledge has to be shared with developers. Testers are urgently needed to challenge developers to take testing beyond unit testing seriously. There is far more to testing than that, as one can learn from "Agile Testing" by Janet Gregory and Lisa Crispin.

Having been a developer for most of my professional life, I would wish, if not expect, that testers make testing an integral part of a developer's daily business. Testers in agile teams have to become test facilitators. There is no way around that. If an agile team were staffed with as many testers as you would need to make sure all user stories are covered with acceptance tests, all new code is covered by unit and component tests, and all security, usability, performance and you-name-it tests are performed up to thorough exploratory testing, one could easily end up with maybe 3 (or more?) testers per developer. Would this be the way you would like to go?

In my humble and honest opinion we would need to tackle the problem from two sides.

1. Testers Coach Developers


Testers have gained insights into lots of different aspects of testing, including experience with typical hot spots, especially when it comes to integration testing. Testers would need to pair with developers to support them when writing these kinds of tests, figuring out what test cases are needed and how to best test them. It may be that while doing so, testers become familiar with development itself and eventually cease calling themselves testers. I would consider it a great achievement for our industry if testers did their coaching job so well that developers are able to do the necessary testing and former testers turn to writing code themselves. In a way, we would have to call them all Agile Engineers. Then Continuous Delivery and DevOps would fully unfold their potential.

2. Testers Provide Automation Frameworks


Not all testers would like to turn to bare development. I would propose another direction for them. Many developers do not care for security or performance testing because there are no frameworks and no infrastructure available that would perform these tests reliably and make them easy to write and evaluate. Any of these frameworks needs to be made available in build and test pipelines. If they are not available that way, these tests will hardly be done. Developers need to be urged into writing these kinds of tests by making them unavoidably easy. Whether these frameworks have to be implemented from scratch or can be bought: someone has to set them up for use in pipelines, and someone has to educate developers on how to utilize them.

3. Testers as Integrators


Developers tend to be off by one, and so am I. There is a third category of testers: the ones that never wrote a line of code and are not fond of doing so at all. There are businesses that have to fulfill legal requirements with respect to quality assurance. There are businesses that build huge products with millions of lines of code, contributed by teams not at all co-located with each other and often enough not well connected. Products like that tend to have integration issues and no one feeling responsible for them. These are areas where testers in a more classical sense would still be needed, without the urge to turn into developers.

Conclusion


Testers in agile teams should try to see themselves as coaches and facilitators to spread the art of testing. Developers need to be educated and enabled to do lots of testing on their own while and before writing any code. Developers need to learn to look at what they do from a user's side, in order to be able to decide in favor of a user's needs.

Testers could provide frameworks for automated security, performance, product life-cycle testing and the like. These frameworks have to be made available to developers in their daily work to make these kinds of testing an integral part of coding, well before a user story is labeled DONE.


The tasks testers will face in the future might change. For some, this change may even be dramatic. But I think we cannot afford to move on as we did before. It is time for Developers 2.0.






Read also:
On Coaching Agile - What I've learned from being an agile coach
On Sporadics - How to deal with intermittent tests in continuous delivery
The opinions expressed in this blog are my own views and not those of SAP

Thursday, September 24, 2015

On Coaching Agile: A Matter of Gray - Don't let prejudice guide you

Just recently I received a mail containing a quote which went like this:

"I get paid for features not for tests"

It has been put in a context where we were discussing something like code coverage or test coverage issues. That's the setting. What happened next?

To me this quote, in this very moment and this very context, was proof that something was totally wrong. Based on this quote we would not need to talk about the application of agile methodologies with these guys any longer, for they are not prepared for it. And I went off into my standard narrative about ignorant developers and ignorant management, who most of the time complain about what went wrong and that everything would have to change, but no one would ever start doing anything, least of all question themselves. And on and on it went.

"I told them many times and they just don't understand. They just ignore the evidence", is what I could tell myself over and over again.

That very same day while taking a shower in the evening it struck me: What did I know about the quote?

Well, basically nothing! Who said this? I don't know. What was the context it was said in? I don't know. Was anything else said to put it into relation? I don't know. How was it said? Angrily? Regretfully? Complainingly? Resignedly? Ironically? I don't know! Anything? I JUST DON'T KNOW!

What I did know: it felt perfect to me. I could be the good guy knowing about all the "right" stuff, doing all the "right" stuff - well, most of the time. And they (as anonymous as could be), they are the guys doing it wrong. Again. And over and over again. With that, two things are for sure:
1. My work as an agile coach would never ever come to an end, and
2. I would always be the good guy. I could always feel superior to "them"

A question arises here: Could I really be a coach to "them" when I consider myself superior? Would I ever be able to bring my message across? Would I really like to bring my message across to help "them" get better?

To be honest, the answers would be something like: I don't think so. No. To some extent, yes.

At least in retrospect I would need to admit that "being better than them" gave me a feeling of comfort, a feeling of importance, somehow even a feeling of doing the right thing, which supported me in trying to move on despite the little impact one achieves from time to time. So, in a way, this attitude supported my will to strive for a change.

Thinking about this, I figured that I would have to deal differently with the quote. First of all: listen. In this case I wasn't able to listen, for I got this second-hand. So it would have been asking. Asking all those questions to fill in the obvious gaps, to understand what the speaker really wanted to say. Only then would I have been able to form an opinion about it. Only then would I be able to decide on which steps to take to mitigate a possible issue behind what has been said. - You see, all purely subjunctive.

What did I learn from this?

Instead of speculating about the manifold of possible reasons, try to get hold of the real reason. If you can't, just let it go.
Do not exploit a quote to make a point. Try that with the very person the quote came from and everything is spoiled right away. Nothing could reestablish the lost trust.
Do not preach your solutions. Try to understand the issues your coachees have and help them solve those issues, no matter whether your prepared solutions or methodologies apply or not.
Don't exaggerate your position. You are not the guy anyone is supposed to follow, not the guy that knows. You are the servant. You should do what many blogs, articles and books tell about coaches:

You are a facilitator, enabler, mentor, partner whatever the coachee needs at this very point.

After all the most important thing you have to be is being

humble.




The opinions expressed in this blog are my own views and not those of SAP

Tuesday, August 11, 2015

Agile Methodology Breast Feeding

How come many of us developers struggle with agile methodologies like test-driven development or the componentization/modularization of software?

I have been asking myself this for a long time now. Being an agile coach, I frequently see developers fail to apply these techniques in their daily work. Intelligent people who are able to write code that takes care of the most complex problems around. And yet an apparently simple technique like write-your-test-first doesn't get a grip on them. Why?

In the past I tended to blame the workload, the pressure of management, and the urgent need to put out the next feature and QA it afterwards, which somehow cave people into this treadmill of assembly-line-like feature production. In this treadmill they are not able to find the time or freedom or even will to learn something new, to question what they are doing, or how they could do better.

It's easier to just do what one has always been doing, for this one knows; it requires less effort and less energy. Change, on the other hand, comes with inconvenience, with insecurity, with the need to work more consciously, which slows one down compared to working with a routine.

So, changing the approach developers use to get their things done becomes a long hard process. Sometimes it feels like a behavior therapy with me being the therapist. 

But I get carried away - a bit.

Just recently I came up with another source of evil, though: Textbooks!

I was curious about JavaScript and node.js recently. What would you do to get concise guidance on a new language? Well, me, I would buy a good book and work through it - or google a good tutorial on the web or something.

This time, though, I wanted something else. Me being an agile coach, I wanted to do what I am preaching: do TDD right from the start. So I skimmed through the pages, and in the index I found "Testing" at page 10-something. Wow! I thought. There is something about testing and it's quite early in the book. Promising!

Well, what a disappointment. It simply suggested creating an index.html file the JavaScript would be written into or linked from, and opening it in a browser to witness the effects of my changes.

That's all there was to it.

After this the whole thing was like every time: Gradually introduce the language and write some code to learn it.

That way I learned some other languages as well. It never ever crossed my mind that I should write some tests first. Instead I wrote a few tests afterwards. Just like the others did. Fine!

No!

Thousands of young people learned their first programming language that way. They did not learn about testing. They got the notion that programming means just writing code that accomplishes the task at hand. Then they discover that there are bugs in what they wrote, and they learn how to debug even nasty code. Some of them become rather skillful debuggers and learn about seldom-used features of their tools. They feel like the best programmers around, for they can nail down even the strangest bugs. Bug fixed. Fine. No test needed. The bug has gone, hasn't it?

I've seen many of these guys entering corporate software development where they would move along their way and would just hack on a bigger scale. 

Try to tell these people that there are other ways of doing it. They imbibed programming without testing from their infancy. It's hard to change anything about that. They lack the awareness of the potential problems with that approach.

I'd plead for a new style of textbooks. 

It came to my mind that the textbook I would like to read (or even write, for that matter) would be different. I would like to introduce a language TDD-style, explaining testing first. Why not write a HelloWorldTest first? It's a program as well, isn't it? It's a minor change of perspective but a huge change in mind.
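Just to sketch the change of perspective I mean, and nothing more: a HelloWorldTest written before the hello function itself, using plain Node assertions and no framework at all:

```typescript
import assert from "assert";

// Step 1: the test, written first. It states what hello() must return
// before a single line of hello() exists.
function helloWorldTest(): void {
  assert.strictEqual(hello(), "Hello, World!");
}

// Step 2: the simplest implementation that makes the test pass.
function hello(): string {
  return "Hello, World!";
}

helloWorldTest();
console.log("HelloWorldTest passed.");
```

It is a toy, of course, but it puts the test into the reader's hands on page one instead of page 10-something.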

Tell you what, while thinking about this idea I stumbled across this book on node.js:
http://leanpub.com/nodecraftsman by Manuel Kiessling.

To my big surprise it did exactly what I would like to do: it started introducing node.js with TDD. Thanks for providing this enlightening little book!




The opinions expressed in this blog are my own views and not those of SAP

Friday, December 19, 2014

Agile Testing Days 2014: There are no agile testers - and no agile developers either

Just recently I wrote a blog post in response to the Agile Testing Days 2014, which I was glad to have attended. This post was received controversially, to say the least. There were many in favor of what I was saying, and there were many that didn't like it at all, for several reasons. I got involved in interesting discussions with many of my readers, most of which led to the conclusion that there has been some misconception due to the fact that some of the arguments I presented in the conversations were not mentioned in the post. I promised to write a follow-up to get to the point I was trying to make with the first post, but didn't manage to until now.

Here it is.

My Background

As there have been some not-so-polite responses questioning my right to express my thoughts regarding testers and testing, I will explain where I come from and what my premises are. Currently I'm in QA for a rich client application of 2M lines of code. As QA we define the processes for the development teams to follow to ensure a high quality of the product, including everything from continuous integration and automated testing up to E2E and exploratory testing, as well as static code checks and security scans. We follow the goal of establishing a continuous delivery process for this application.

My background over the years, however, was in development. As a developer I've come a long road from being a typical developer (from a tester's perspective) to becoming a coach for agile methodologies (including agile testing). At certain times I did user acceptance testing and scripted testing. So I've done my fair share of almost everything a tester would usually do. But my upbringing as a developer certainly means I'm not completely familiar with the terminology a tester would use. Please forgive me this lack of knowledge, but rest assured it is not due to ignorance.

Hypothesis

The existence of agile testers (aka testers as members of agile teams) prevents the urgently needed mind shift development has to undergo, and the existence of agile testers prevents agile teams from becoming what they are supposed to be: small agile units that can build and ship the highest-rated new features with nearly constant velocity.

Waterfall

Development departments have a long record of software products not shipped in time and not having the expected quality. There have been many ideas around how this could be improved. The most successful of these has been the introduction of a QA department. Just like in any other industrial production, QA would check the product after it has been assembled and before it would be shipped to customers. For cars or machines this worked out pretty well. So, why not adopt this approach?

Development was known to deliver bad quality. QA would establish a Q-gate for the product to pass. Testers would test the product thoroughly before they would turn the thumb up. Fixes for issues would be requested and delivered by development. Sounds great. But it virtually never worked out. Why? There always is the temptation to squeeze yet another feature in, and yet one more, thus stretching the time for development beyond any deadline. QA would suffer, for they are required to accomplish their tasks in a much shorter time frame than originally planned. Issues they found did not get fixed anymore, for the release date approached too fast.

In such environments the guild of testers had to develop methods, techniques, and tools to somehow manage the increasing workload in an ever-shortening time frame. They had to mitigate the lack of information, the lack of involvement, and the fact that they were forced to test at the wrong end of the cycle. They sure would develop a pride in their work and their abilities. As they acted closer to the customers and received complaints and issue reports, their understanding of customer needs was better by far compared to the understanding a typical developer would have. They sure took pride in that as well. And rightly so.

Turning an eye on the developers. Before QA came around, they at least had to do some testing. Now there would be QA, testing just enough quality in. Fine! More time to develop the features development was forced to squeeze in. It wouldn't come as a surprise if a developer even felt relieved of the burden to spend valuable time on testing instead of walking further down the list of features he was supposed to finish. Once QA came into existence, developers were disconnected from customers by yet another layer. On the other side of the process there were architects or system analysts that would lay out the whole product before even the first line of code had been written. They would talk to customers to collect requirements. Then they would put down all their design documents and contracts and stuff. And the developer had to walk down the list. Completely disconnected from the source of the requirements and the consumer of his work. They got cushioned in between these two layers. What would you expect them to become within this sandwich? No time to really change the architecture if they found a flaw. No time for that. Just keep working around it. What connection to your work would you develop if you had no means to influence major parts of it? A certain archetype was required to survive in such environments. Nowadays we call them developers (at least when we stick to the cliché). This type gathered in development departments. On the one hand cushioned in and pampered, kept away from customers. On the other, in splendid isolation, just doing geeky stuff, feeling proud of their ability to work around every problem in the shortest possible time.

To be sure: Not every developer fits the cliché. Just like not every tester fits the cliché built by many articles and books, the one that has been presented to me at Agile Testing Days this year and in replies to my postings. Things are not black or white, as usual.

Agile Teams

Fortunately there have been developers that did not fit into the cliché. Guys who wanted to do things differently, as well as people in other roles in this industry. And over the years ideas like XP, agile, scrum, and lean were invented. Now we are faced with agile teams. Suddenly things like backlogs, priorities, and short cycles matter. Frequent shipment of high-quality product increments is expected. Customer feedback which leads to major adjustments in the product is welcome. Imagine a developer from the safe haven of the good old days, when they were supposed to just code and a whole organization built around them would take care of the rest. What would he make of this?

Imagine a tester with her safe haven, the QA department. Far away from developers. Now she is supposed to join teams with the developers. Two parties were put together that felt comfortable with the prejudices they had about each other. What would you expect to happen? Well, the safest thing would be to keep the respective specialization and organize the agile team like the departments they came from. The developers in the teams would, well, develop, and the testers would just test. In fact the agile team would turn into a micro version of the old organization. I have seen this happen several times. And this does not seem to be special to me. Many of the sessions at Agile Testing Days shared the same experience to some extent.

Even the bible for agile testing, the book by Lisa Crispin and Janet Gregory, elaborates on this. I like this book; it has got a lot of knowledge and experience in it. I certainly like the insights from their experience over the years they share in there. And it has the right title for sure: Agile Testing. I will come back to this just a little bit later.

Part I of the book elaborates on testers and what they could bring into an agile team. In what sense they are special and would yield a great benefit for the team. Yes, testers have their unique specialties. Yes, these are badly needed close to development and even before the first line of code has been written. We definitely would need more agile testing in agile teams or in software development in general. 

At some point in this part the "Whole team approach" was mentioned and welcomed. No distinction between testers and developers. Just a team working on the top priority tasks doing whatever has to be done by anyone available when a new task has to be worked on. The agile team as a bunch of generalizing specialists. I very much like this idea. In fact that would be the Holy Grail. An agile team that would be able to really work according to priorities.

Unfortunately the remainder of this part very much keeps the distinction between testers and developers and tries to explain why a tester would be required to write acceptance tests, to talk to customers to understand what they really need, and similar stuff. One question pops up in my mind: Why would I manifest the old DEV - QA pattern in an agile team? Why would I still keep developers away from activities they urgently need to do to improve their understanding of the product or the customer needs? If they are kept away from these experiences, then they are just cushioned in and pampered as before, and we are all no better off.

We would still play this DEV & QA game. Testing happens after implementation. I have seen this happening: testing gets squeezed in between the end of implementation and sprint closing, often resulting in improper testing for lack of time, letting bugs slip through, and so on. Where would be the difference to the old days? Testing needs to be first priority. To get this done, anyone in the team needs testing skills to some extent.

If you really want developers to change, to develop new capabilities, you need to let them do those things they never did before. A tester would be a great teacher to them if he shares his knowledge with them. A developer needs to feel the pain a bad-quality product gives the customer. A developer needs to write acceptance tests herself to understand what a user story requires her to produce. These first-hand experiences are what would trigger insights for them.

Are developers afraid of this? Yes. Are developers not prepared to do something other than develop? Yes, for sure. We are talking about change. Change leads to a feeling of uncertainty, leads to occurrences of not knowing what is expected of them. Change brings about fear. The same goes for testers. Are they really required to code? Are they really required to do architecture? Bug fixing? Yes, they are. In an agile team. And this brings about fear, uncertainty, and a feeling of loss for them too.

So, what happens most of the time? Developers and testers form a general non-aggression pact, and each sticks with what she can do best. Keeping specialism alive. And this specialism yields problems itself. Specialists are only able to work on a limited range of tasks; those not covered by their specialization they will not take on. Specialists tend to become bottlenecks: if their special knowledge is required by two concurrent tasks, one will be blocked. Specialism leads to situations where user stories do not get worked on in order of decreasing priority but by the availability of some specialist. Thus the next increment of the product will not contain what the customer needed the most, but the features the team in its combination of specializations was able to get done. This does not comply with the ideas of agile anymore. A team like that wouldn't really be an agile team.

Agile Engineer

What would that be? She would be a generalizing specialist. Willing to learn new methods, techniques, new skills. A member of the team who is able to take on almost any task belonging to the most important user story on the board. If it is writing some code, she will code. If it is testing, she will test. If it is something else, she will do this as well. She would not be afraid of asking when she lacks knowledge or experience. She would be willing and open to share her knowledge.

Okay, let's go with agile testers and agile developers. We need to start somewhere. But support them in becoming agile engineers. This will be a tough transition for many of them. They will need help and guidance on the way. And they need the right environment. Just because an organization decides to have agile teams and to go agile doesn't mean that there is anything more than a new label put on the stuff they already have. An agile engineer cannot yield the effects he could if the organization around him is not transitioning too. Fixed schedules, fixed resources, and fixed scope do not work with agile. If the organization is not going agile, everything else will remain as it has been.

Agile engineers would build great agile teams working in great agile organizations. They would DO agile testing, would DO agile development, or agile planning. They would cease to be an agile tester, an agile developer, or an agile someone. This would be the time when there would be 

No agile tester - and no agile developers either




Read also:

On Coaching Agile
Part 1: Killer Questions - What do you want to test?
Part 2: Techniques - Make a Bold Statement
Part 3: Techniques - Show and Tell

On Sporadics - How to deal with intermittent tests in continuous delivery

The opinions expressed in this blog are my own views and not those of SAP