In this lecture I'm going to talk about software process improvement techniques. We'll look at software process evolution generally - the reason why we want to monitor our software process and identify opportunities for improvement - and we'll look at some of the standards that have been developed to support that, but we'll focus in particular during the lecture on agile process improvement.

From now on, you should conduct a retrospective at the end of each project iteration as a means of improving your software process practices. There is some reading associated with this video lecture; it's a very long document describing what's known as the Capability Maturity Model. I don't expect you to read it all, but I would like you to dip into it to get a feel for the level of detail involved in that model.

So what do we mean by software process improvement? You've probably seen, in various parts of your software development career, that there are different quality assurance practices we can adopt in order to ensure the quality of the software product itself. We might engage in source configuration management - hopefully as a fundamental quality assurance activity - and in planning and management of user stories and tasks, progress monitoring, that kind of thing. We should have a pretty good idea of our project schedule, so we deliver the right features to the customer at the right time. We use all sorts of other techniques, such as software testing and inspections, to maintain the quality of a software product. We'll engage in refactoring and re-engineering activities as opportunities arise, restructuring the architecture of our software to support long-term maintainability.

These are all very useful and valuable practices, but they're all focused on quality assurance of the software product itself. Software process improvement, in contrast, is concerned with enhancing the quality and performance of the software process - the workflow you're engaged in, individually as software developers but also within your wider team. It looks at the activities you're engaged in and attempts to identify opportunities for improving how you perform software testing, configuration management, and so on. That's software process improvement: it's concerned not with improving the product directly, but with improving the software process that produces it.

Any software process improvement activity goes through the same basic workflow: we gather data from the existing project - either from ongoing efforts or, more usually, at the end of an iteration, using the data available from that previous iteration. We analyze and evaluate the data we've gathered, identify things that we could do differently within the software project, and then implement actions in the next iteration; and we monitor the effect of those actions as part of the next process improvement activity.

There are various standards and models that have been developed to help guide and structure that practice of process monitoring and improvement. There are characterization standards, which are used to assess how well a team is performing, and there are management standards, which describe the methods and practices that can be adopted in order to improve the metrics and measurements gathered from a software project. Examples include the ISO 9001 quality management standard and Six Sigma, a model which is quite popular in management. The Capability Maturity Model, which came out of Carnegie Mellon's Software Engineering Institute, is a very in-depth description of the different aspects of software process quality and of mechanisms for transitioning a software team between different stages of process maturity.

Before turning to agile process improvement - which is a lighter-weight, project-specific set of practices for identifying opportunities for improvement - we'll look briefly at the maturity model. The Capability Maturity Model (CMM) characterizes software projects in terms of different stages of maturity, and provides mechanisms for transitioning between those stages. At the most basic level, a software project can be characterized as being in the initial stage of maturity, in which the software process is very ad hoc: workflows are poorly defined, and there are ill-defined processes for, say, quality assurance, inspection, or testing. A software project can then progress through increasing stages of repeatable and managed software processes, as the team adopts an increasingly professional, repeatable, and methodical approach to the development of software. At the very top level, optimizing software processes are ones that are continually monitored for opportunities to improve, and we'll see how that feeds into the agile approach to software process improvement shortly.

Agile process improvement is intended to be domain-specific, which means that:

  1. Each project team identifies opportunities for improvement based on the specific context of their software problem/project, and on the participants in the project and their capabilities and expectations.
  2. It involves continual reassessment of new information as it becomes available.
  3. A final principle of agile process improvement is that the whole team is required to participate in seeking opportunities to improve the software process.

One of the most popular ways of conducting process improvement in agile teams is something called a Scrum retrospective, which comes out of the Scrum software development practice. Retrospectives occur at the end of a software project iteration, before the start of the next; they're usually timed to occur immediately after the demonstration of the current release of the software product to the customer. A retrospective shouldn't last too long; typically about an hour is acceptable.

The participants in a retrospective can vary, but at a minimum they include the development team itself. Sometimes the team also employs an external Scrum Master to coordinate the retrospective, and it may invite the customer representative to participate as well, to review how the team performed during the previous iteration and provide their own insights.

The materials used for a retrospective can vary depending on the particular practices employed, but often sticky notes on a whiteboard are used to document the team's perspectives and each individual team member's point of view, so that those can be collated and then discussed during the retrospective.

We can see here a team of software developers engaged in a retrospective; they're in the early stages of data gathering. The point of a retrospective is to try and answer three different questions:

  1. What did the team do well in the previous iteration, and what should they keep doing or perhaps do more of? That might, for example, be a particular change made to the team's process that worked particularly well in the previous iteration. Having experimented with it, the team identifies it as something that should be increased or enhanced in the following iteration.
  2. The team should also use a retrospective to identify what did not go well: where was effort expended on activities that didn't provide any value to the software development team? For example, were people engaging in excessive documentation of code that wasn't particularly useful in the long term, because the documentation became out of date very quickly? Perhaps less effort should be applied to that activity, with the focus placed instead on activities that generate value.
  3. The team should also attempt to identify things that they should be doing, and aren't doing yet. Are there particular practices the team could attempt to experiment with in order to improve the software process?

Notice that none of these questions revolve around blame. The goal of a project retrospective is not to identify who was at fault or who should be blamed, in order to allocate responsibility for something that went wrong; nor, for that matter, to identify somebody who should be praised for something going right. It's also not about finding, if you like, get-out clauses for not improving. The goal of a retrospective is instead to identify the practical things you can do to improve your software process performance in the next iteration; people are not looking for excuses or praise.

So where do these sources of improvement come from? There are two main sources, both of which can be highly valuable, and they're often used together; we don't use either source of information in isolation, we use them collectively.

The first source is the team members themselves; everybody who participated in the project will have an insight into what went well and what didn't go so well, and they'll be able to contribute that insight to the retrospective. The disadvantage of using the team members as the sole source of information is that their views may be subjective, or they may be biased by particular viewpoints, preferences, or political agendas.

There needs to be a way of balancing the different viewpoints of the team members in order to determine the right action to take. That doesn't mean that any particular team member's viewpoint should be immediately discounted. Every team member should be entitled to participate in the retrospective and put their views across without being ignored or denigrated, but it does mean that each individual team member's views have to be balanced and taken into account within the context of the whole team.

One way of supplementing the different viewpoints of team members is to use evidence from the project artifacts as a source of information about how the team performed in the previous iteration (and perhaps in earlier iterations as well). For example, the project management tool might give an indication of the project's progress during the previous iteration by way of a burn-down chart, or the continuous integration environment might indicate how long the team left a broken build before repairing it.
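As a rough illustration of the kind of objective evidence a continuous integration environment can provide - not tied to any particular CI tool, and with invented timestamps - the sketch below assumes build results have been exported as a chronological list of timestamped pass/fail records, and computes how long each broken build was left before being repaired:

```python
from datetime import datetime

# Hypothetical CI build records: (timestamp, passed) in chronological order.
builds = [
    (datetime(2024, 3, 4, 9, 0), True),
    (datetime(2024, 3, 4, 14, 30), False),  # build goes red here
    (datetime(2024, 3, 5, 10, 15), False),  # still broken
    (datetime(2024, 3, 5, 16, 45), True),   # repaired
]

def broken_build_durations(builds):
    """Return the length of each period during which the build stayed broken."""
    durations = []
    broken_since = None
    for timestamp, passed in builds:
        if not passed and broken_since is None:
            broken_since = timestamp              # start of a broken period
        elif passed and broken_since is not None:
            durations.append(timestamp - broken_since)
            broken_since = None                   # build is green again
    return durations

for duration in broken_build_durations(builds):
    print(f"Build remained broken for {duration}")
```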

These metrics can provide objective information about the team's performance and opportunities for improvement; but again, these sources need to be contextualised with an understanding of why the metrics are the way they are, and that can only come from the team members themselves.

We need both sources of data: the objective project artifacts, and the understanding and context of those artifacts, which is best provided by the team members themselves - they're the ones who participated in the project. Eliciting that data and information from team members can be done in lots of different ways.

If a team is very experienced, it might be sufficient simply to ask the team to take a fairly unstructured approach and propose ideas for discussion. However, it's often more useful - particularly when we're trying to keep within the time and scope of a retrospective - to use a structured technique with particular timings.

Theme boards are one particular way of generating ideas from team members under different categories and topics; we'll go over an example of that in a second. Once initial ideas have been generated, it can also be useful to conduct a root cause analysis - a deeper diagnosis of particular themes that emerge. The "5 Whys" technique is a good example of that, which we'll also look at in a second. Here's an example of a theme board; in this particular case the team have gone for themes called Liked - Learned - Lacked - Longed For. Each of the team members is asked to propose one or more ideas or suggestions under each of these categories:

  1. What did they like in the previous iteration?
  2. What did they learn from the previous iteration?
  3. What did they lack during the previous iteration, which would have helped them do their job better?
  4. What did they long for? What was needed in the project that was missing?

This simple structuring mechanism provides a means of organising the data into different themes and categories that can then be discussed. Once the team have generated their data independently and posted the stickies containing their ideas on the board, the team can then review, categorize, and organize the different themes as they choose. Often it's useful to have a single Scrum Master or retrospective owner, if you like, who's responsible for managing that process of:

  1. Identifying themes
  2. Proposing suggestions for discussion
  3. Identifying opportunities to improve as a result of the things discovered during the retrospective.
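As a minimal sketch of that collation step - with invented names and notes, and no particular retrospective tool assumed - the stickies from a Liked - Learned - Lacked - Longed For board might be grouped by theme like this:

```python
from collections import defaultdict

# Hypothetical sticky notes gathered independently from each team member:
# (author, category, text), where the categories follow the 4Ls theme board.
notes = [
    ("Asha",  "Liked",      "Pairing on the payment module"),
    ("Ben",   "Lacked",     "Time for code review before the demo"),
    ("Chloe", "Longed For", "A staging environment that matches production"),
    ("Asha",  "Learned",    "The burn-down chart flags scope creep early"),
]

def collate(notes):
    """Group the notes by theme so the retrospective owner can walk through them."""
    board = defaultdict(list)
    for author, category, text in notes:
        board[category].append(f"{text} ({author})")
    return board

board = collate(notes)
for category in ("Liked", "Learned", "Lacked", "Longed For"):
    print(category)
    for note in board.get(category, []):
        print(f"  - {note}")
```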

Sometimes it can be useful to go beyond the initial ideas identified on a theme board, and that can involve performing what's called a Root Cause Analysis (RCA) of a particular theme. For example, somebody might have posted on the theme board the note:

"We didn't deliver any functionality in the last iteration."

It's something that they lacked or didn't like about the previous iteration, but on its own it doesn't provide a means of discovering what the team should do next as a result of that issue being raised. So we can use RCA to investigate the issue more deeply.

The "5 Whys" is a simple technique for conducting RCA, in which we keep asking the question "Why?" until a satisfactory answer - one that can actually be directly addressed by a process improvement - is identified. In this case:

  1. Why wasn't any new functionality delivered in the previous iteration? Because the team spent more time fixing defects from the previous iteration than they did implementing new features.
  2. Why did that happen? Because six critical defects were released into production at the end of the last iteration, which meant that in the iteration being reviewed the team spent most of its time fixing those defects.
  3. Why did those defects make it into production? Because there was pressure to implement a final feature - which was completed late but was still included in the release - and the quality assurance process was bypassed.

The team has learned at this point that allowing the quality assurance process to be bypassed, because of pressure to deliver a particular feature, was a mistake. They can review their process for making management decisions about which features are included in a particular release, to prevent incomplete or immature features being included in future releases, or they can set a more realistic schedule for their software project.
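One lightweight way to capture such a chain during the retrospective is simply to record each question and answer until an actionable root cause emerges. The sketch below mirrors the example just discussed, with the exact wording invented for illustration:

```python
# The "5 Whys" chain from the example above, recorded as (question, answer)
# pairs and stopped once the answer points at something the team can change.
five_whys = [
    ("Why was no new functionality delivered?",
     "The team spent most of the iteration fixing defects."),
    ("Why were there so many defects to fix?",
     "Six critical defects reached production at the end of the last iteration."),
    ("Why did those defects reach production?",
     "A late feature was forced into the release and the QA process was bypassed."),
]

root_cause = five_whys[-1][1]
proposed_action = "Don't bypass QA for late features; defer them to the next release."

for depth, (question, answer) in enumerate(five_whys, start=1):
    print(f"{depth}. {question}\n   -> {answer}")
print(f"Root cause: {root_cause}")
print(f"Proposed improvement action: {proposed_action}")
```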

Once particular improvements have been identified, the team needs to decide which of those improvement actions to implement in the next iteration, and it may not be possible to implement them all at once. Instead, at the end of a retrospective, the team needs to prioritize which improvement actions will be chosen; a rough scoring sketch follows the list below. Prioritization can be based on:

  1. Criticality: how important is it to start doing a particular action, or to enhance the use of a particular practice in the project team?
  2. Feasibility: the team may identify actions that could be taken in an ideal world, but that aren't currently practical. For example, perhaps a team decides that they need to change the version control system (VCS) used for their source code. That might be an attractive option, but it may not be feasible within the constraints of the project: the amount of effort required to move from, say, Subversion (SVN) to Mercurial or Git isn't realistic within a single iteration, so that might be something that's left to a longer-term review.
  3. Impact: another option is to identify the improvement actions that are most likely to have a significant effect on return on investment (ROI), in terms of value delivered to the customer.
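As a sketch of how that prioritization might be made explicit, the example below scores some invented candidate actions against the three criteria just listed and sorts them. The weighted sum is just one possible scoring scheme a team might agree on, not a prescribed formula:

```python
# Hypothetical candidate improvement actions, each scored 1-5 by the team on
# criticality, feasibility within the next iteration, and expected impact on value.
actions = [
    {"action": "Require QA sign-off before any feature enters a release",
     "criticality": 5, "feasibility": 4, "impact": 5},
    {"action": "Migrate the repository from Subversion to Git",
     "criticality": 2, "feasibility": 1, "impact": 3},
    {"action": "Automate deployment to the staging environment",
     "criticality": 3, "feasibility": 3, "impact": 4},
]

def score(action):
    # Weight criticality more heavily; the weights are a team choice.
    return 2 * action["criticality"] + action["feasibility"] + action["impact"]

for action in sorted(actions, key=score, reverse=True):
    print(f"{score(action):>2}  {action['action']}")
```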

Once improvement actions are selected, it's important to monitor that they actually get implemented. There's very little value in conducting a retrospective if the actions identified are then simply pushed to one side and ignored before the next iteration. To ensure this happens, it's necessary to record the improvement actions - typically in a ticket management system, as tickets allocated to somebody who is responsible for making sure the action takes place. It's also useful to review process improvement decisions in future retrospectives, to see what effect they had on the software development team's performance. Were they actually as effective as intended; and if not, why not, and how could they be altered in order to achieve the desired outcome?
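The bookkeeping itself can be very light. Here is a minimal sketch - with an invented ticket shape, not any particular tool's API - of recording owned improvement actions and flagging the ones still open at the next retrospective:

```python
from dataclasses import dataclass

@dataclass
class ImprovementTicket:
    """An improvement action recorded in the team's ticket system (illustrative shape)."""
    title: str
    owner: str
    raised_in: str       # the retrospective that produced the action
    status: str = "open"

tickets = [
    ImprovementTicket("Require QA sign-off before release", "Asha", "Retrospective 7"),
    ImprovementTicket("Automate staging deployment", "Ben", "Retrospective 7", status="done"),
]

# At the next retrospective, list the agreed actions that were never implemented.
for ticket in (t for t in tickets if t.status != "done"):
    print(f"Carried over from {ticket.raised_in}: {ticket.title} (owner: {ticket.owner})")
```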

It's perfectly acceptable for a proposed improvement action not to work; the key thing is for the team to then work out why it didn't work, and again adjust their software process in an attempt to achieve the desired outcome.

Finally, it can be valuable to review the retrospective process itself periodically:

  1. Are you using the best practices for identifying themes in the software team?
  2. Are you using the most appropriate root cause analysis techniques?
  3. Is the person conducting the retrospective the most suitable person for that role? Could the role be rotated to somebody else for a particular retrospective, to identify different improvement actions?

There are lots of opportunities for improving the review process itself. The key point here is that process improvement is essential for assuring the quality of the software process, and that then feeds directly into the quality assurance of the underlying software product.
