
Business and Technology: Bridging the Gap with Data


We often hear people talking about “the gap between business and software development”.

But what does this actually mean? Can we better characterize this broad statement and make it more concrete with examples? And with this understanding, can we sketch out solutions to improve the situation?

In this article, our goal is to discuss these questions by looking at data-driven software engineering approaches. We examine the particular case of software assessments, where the goal is to understand the current state of a software organization (technology, processes and people). We share our experience with the Goal-Question-Metric method, which has proven to be very effective in these situations.

Why is it so hard to bridge the gap between business and technical domains?

One reason is that it is difficult to establish measurable causal relationships between the two domains. Will we increase revenue if we do more automated testing? Why should we invest resources to reduce technical debt? Are we going to reduce costs if we move towards continuous delivery? 

Even when business stakeholders understand this jargon, even when they understand the rationale of software engineering practices, they have a hard time quantifying their impact. As a result, investment decisions are not easy to make.


[Figure: The challenge of connecting the dots between business and technology]


Models, Conversations and Shared Understanding

To improve the situation, we argue for data-driven approaches that make explicit connections between the technical and business domains. The general strategy is to capture relationships in a model, to feed data into the model and to collaboratively analyze the results. Often overlooked, the last step is fundamental: the conversations enabled by data-driven models produce rich insights, which are the most valuable outcome of the process.

We have applied this approach in different types of organizations (startups, large companies), across geographies and in various situations (M&A due diligences, internal assessments). We have seen consistent benefits in every one of these projects. Data-driven software assessments are effective in triggering and framing conversations. They do a lot to improve mutual understanding and decision making.


How does it improve a technical due diligence?

The stakeholders who drive an overall due diligence process are familiar with formulating questions and identifying risks in the commercial, financial and legal domains. They are familiar with measurement models that are specific to these domains.

However, they usually have a hard time specifying what to assess in the technical domain. Yes, they know that “scalability”, “security”, “quality” and “the dev team” are dimensions that need to be evaluated. But how? And what is even more difficult for them is to grasp how the technical domain impacts the business domain.

For example, the links between specific DevOps practices, the frequency of software releases, the ability to respond quickly to market changes, and an increase in revenue are not obvious for non-technical stakeholders.

When we deliver due diligence projects, we address this problem by applying the data-driven approach described below:

  1. we first create a model that logically connects business concerns with technical practices,

  2. we then collect data from different sources, feed the data into the model and create visual representations of the results,

  3. we finally use this material to provide context for interviews and deep-dive sessions with various stakeholders (management, product owners, engineers, etc.).
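
As a minimal illustration of steps 1 and 2, the sketch below (in Python) encodes a toy model that links business concerns to the technical practices that influence them, and aggregates practice scores into a score per concern. The concerns, practices and numbers are hypothetical placeholders, not data from a real assessment.

```python
# A toy model for steps 1 and 2: business concerns linked to the
# technical practices that influence them. All names and scores are
# hypothetical placeholders for illustration.

# Step 1: the model, as a mapping from business concerns to practices.
MODEL = {
    "increase revenue": ["release frequency", "automated testing"],
    "reduce costs": ["continuous delivery", "technical debt management"],
    "delight customers": ["automated testing", "incident response"],
}

# Step 2: data collected from tools and interviews, normalized here to
# a 0-10 scale (the values are made up for the example).
PRACTICE_SCORES = {
    "release frequency": 4.0,
    "automated testing": 6.5,
    "continuous delivery": 3.0,
    "technical debt management": 5.0,
    "incident response": 7.5,
}

def concern_score(practices):
    """Aggregate the practice scores behind one business concern."""
    return sum(PRACTICE_SCORES[p] for p in practices) / len(practices)

# A textual stand-in for the visual representations used in step 3.
for concern, practices in MODEL.items():
    print(f"{concern:18s} -> {concern_score(practices):4.1f}")
```

In a real engagement, the scores would come from tool-based measurements and interviews, and the visual representations would be interactive rather than printed tables.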

Compared to traditional methods, this allows us to get a deeper and more insightful understanding of the situation. We can look at it from various angles and better grasp the strengths of the company under evaluation, as well as the areas that need improvement and where the acquirer or investor can bring value. This is a learning experience for everyone involved in the process, and the feedback from both sides has always been very positive.


Research in Empirical Software Engineering

Now that we have a sense for the general data-driven approach, we can look at some techniques that help guide the process. There are numerous examples in the empirical software engineering literature, with foundational work dating back to the seventies. Research results are now making their way into commercial products and services at an increasing pace. The term software analytics is often used to describe this broad topic.

Software analytics is the analysis of heterogeneous software engineering data, with the goal of understanding a situation and making better decisions.

The premise is that software engineering is an activity that generates huge amounts of data, and that analyzing this data can help make organizations more efficient. Think about what goes on in version control systems, bug trackers and collaboration spaces. The analysis of traces left by people in these systems can reveal insights about the code, the processes and even the culture of an organization. This feedback is most helpful for organizations that have a continuous improvement mindset, in line with core agile values.
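
As a concrete, deliberately simple example of this kind of mining, the sketch below counts commits per author in a local Git repository, a first (crude) signal about collaboration patterns. It assumes a local Git checkout and the git command on the PATH; a real analysis would combine many such signals.

```python
import subprocess
from collections import Counter

def commits_per_author(repo_path="."):
    """Count commits per author in a local Git repository.

    A deliberately simple example of mining version control data;
    real software analytics combines many such signals.
    """
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(log.splitlines())

if __name__ == "__main__":
    for author, count in commits_per_author().most_common(5):
        print(f"{count:5d}  {author}")
```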

The idea of analyzing software engineering data to understand a given context is not new. Decades ago, Meir Manny Lehman formulated the "laws of software evolution" based on quantitative studies of software data.

Among other things, he looked at the frequency of releases and system complexity over time. He derived general principles that apply to the evolution of any large-scale software system. Many of the observations he formalized at the time resonate with the rationale of agile approaches.


The Goal-Question-Metric Method

One notable outcome of this research is the Goal-Question-Metric (GQM) method. The result of pioneering work by Victor Basili, GQM is a structured approach that can be used to characterize a situation, to make predictions and to guide improvements.

When applying the method, one starts by defining a set of high-level goals. Every goal is then split into a set of more concrete questions. Finally, one identifies metrics that make it possible to give quantitative answers to the questions. For instance, our goal might be to understand how well we serve our customers, which is fairly abstract. We might then come up with more concrete questions such as:

How quickly do we respond to their needs? How often do we cause issues that prevent them from working? Is the frequency of such situations decreasing or increasing? How happy are they with our products and services?

To answer each of these questions, we might look at very precise metrics such as average response time, lead time for bug fixes per severity, number of new critical bugs per week, Net Promoter Score, churn rate, etc. For every question, one needs to define a formula that combines multiple metrics and produces a measure.
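
As a sketch of what such a model can look like in code, the snippet below encodes the customer-service goal above, two of its questions, and a simple weighted-sum formula per question. The metric values, weights and sign conventions are illustrative assumptions, not something prescribed by the GQM method itself.

```python
# A minimal GQM model for the customer-service goal discussed above.
# Metric values, weights and sign conventions are illustrative only.

GOAL = "Understand how well we serve our customers"

# Each question maps to the metrics that answer it; negative weights
# mark metrics where lower values are better.
QUESTIONS = {
    "How quickly do we respond to their needs?": {
        "average_response_time_hours": -0.6,
        "bugfix_lead_time_days": -0.4,
    },
    "How happy are they with our products and services?": {
        "net_promoter_score": 0.7,
        "churn_rate_percent": -0.3,
    },
}

# Values collected from ticketing systems, surveys, etc. (made up here).
METRICS = {
    "average_response_time_hours": 6.0,
    "bugfix_lead_time_days": 4.5,
    "net_promoter_score": 42.0,
    "churn_rate_percent": 3.2,
}

def answer(question):
    """Combine a question's metrics into a single measure.

    A plain weighted sum; in practice each metric would first be
    normalized against an agreed target or baseline.
    """
    return sum(w * METRICS[m] for m, w in QUESTIONS[question].items())

print(GOAL)
for q in QUESTIONS:
    print(f"  {q} -> measure = {answer(q):+.1f}")
```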

The Goal-Question-Metric method is one way to build a model that connects high-level, qualitative outcomes with concrete, measurable elements. It enables informed conversations about business impact based on facts, some of which are measured in the technical domain.


Data-Driven Due Diligence in Practice

We have seen that data-driven models spanning business and technical boundaries provide valuable insights, especially when they are used in conversations and interviews. We have also seen that GQM is one method to build these models, and that it helps connect abstract outcomes (goals) with concrete outputs (metrics).

But what effort is required to apply this technique during a software assessment? What are the skills, time and resources required to get the job done?

First of all, the team leading the process must have deep expertise in both the business and technical domains. This is necessary to build sound models, but more importantly to moderate and facilitate the discussions between parties. The team must be able to deal with strategic questions, but also to drill down to a very concrete operational level. The team must have hands-on experience with software engineering practices, ideally accumulated over several decades (many assessments are done when transforming legacy systems into modern cloud-based systems).

Then, the use of tools can make the process time-efficient. We have seen that the process involves 1) the collection of raw data from heterogeneous systems, 2) the analysis of the raw data (this sometimes relies on advanced techniques, such as machine learning), 3) the creation of visual and interactive models, 4) the documentation of insights and 5) the review and presentation of findings. There is no shortage of tools that address one or another of these steps. However, it is not trivial to build a cohesive platform that integrates some of these tools, augments them with advanced capabilities and provides a streamlined, end-to-end experience to the due diligence team.
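
To give a feel for what such an integrated platform has to stitch together, here is a skeleton of the five steps as composable functions. Every function is a stub standing in for a dedicated tool or service, and all names are hypothetical.

```python
# A skeleton of the five-step assessment pipeline described above.
# Each function is a stub standing in for a dedicated tool or service.

def collect_raw_data(sources):
    """Step 1: pull raw data from heterogeneous systems."""
    return {s: f"raw data from {s}" for s in sources}

def analyze(raw_data):
    """Step 2: derive indicators from the raw data (possibly with ML)."""
    return {source: len(blob) for source, blob in raw_data.items()}

def build_views(indicators):
    """Step 3: turn indicators into visual, interactive models."""
    return [f"view of {name} (indicator={value})"
            for name, value in indicators.items()]

def document_insights(views):
    """Step 4: capture the insights discussed around each view."""
    return [f"insight recorded while reviewing: {v}" for v in views]

def present_findings(insights):
    """Step 5: review and present the findings to stakeholders."""
    for insight in insights:
        print(insight)

sources = ["version control", "bug tracker", "collaboration space"]
present_findings(document_insights(build_views(analyze(
    collect_raw_data(sources)))))
```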


Key Takeaways

  1. Data-driven models can help bridge the gap between business and technical domains. These models should make explicit connections between technical outputs and business outcomes. They should strive to make these connections measurable.

  2. The Goal-Question-Metric method comes from research in empirical software engineering. It is one method for building such data-driven models. It is well suited to the task because it helps link broad qualitative outcomes with concrete measurable elements.

  3. Software assessments are one situation where the approach is effective, whether they are done as part of a due diligence or for internal purposes. Our experience is that these models are very useful to provide context in conversations and interviews. They have repeatedly helped us gain deeper insights and a complete picture of the situation.

  4. Applying this approach is not trivial and requires both expertise and time. An integrated toolset can address the second concern, providing a seamless experience during the entire due diligence process.

  5. Reach out at info@avalia.systems for feedback and more information about our data-driven due diligence method.


References

Avalia Systems. "Red Hat's hidden treasure: the people that are worth billions to IBM." 2018.

Basili, Victor R. "Software modeling and measurement: the Goal/Question/Metric paradigm." University of Maryland, 1992.

Lehman, Meir M. "Programs, life cycles, and laws of software evolution." Proceedings of the IEEE 68.9 (1980): 1060–1076.

Liechti, Olivier, Jacques Pasquier, and Rodney Reis. "Beyond dashboards: on the many facets of metrics and feedback in agile organizations." Proceedings of the 10th International Workshop on Cooperative and Human Aspects of Software Engineering. IEEE Press, 2017.

Menzies, Tim, and Thomas Zimmermann. "Software analytics: so what?" IEEE Software 30.4 (2013): 31–37.
