Which type of (non-) innovator are you?

When creating software, innovation is often taken for granted as a desirable objective and mentality. Looking more closely, however, there are entire classes of software systems for which innovation is not even on the table. In 2020 you rarely see a COBOL, Ada, Fortran, or Delphi developer position advertised as an opportunity to innovate. This is because any attempt to fundamentally change an old enough software system has a good chance of exposing software rot, so it’s best for all parties involved not to try.

However, “Legacy systems are not innovative!” is not breaking news, and it’s also not the point of this post. The point is that almost everyone talks about innovative software, but not everyone means the same thing. My hypothesis is that there are several types of software-centered innovation.

Fast innovation

I propose the term fast innovation to identify the currently prevailing practices of consumer-facing software companies, and especially startups. This is a set of practices loosely aligned with the famous “move fast and break things” motto formerly employed by Facebook:

  • The single most important activity is gathering feedback as fast as possible;
  • The second most important activity is generating ideas and features on which to gather said feedback;
  • Introducing bugs is not catastrophic because a fast release cycle mitigates their effects;
  • Setting and meeting deadlines is not seen as a productivity driver;
  • Some concerns don’t matter at all – deploying hidden features, for instance, is not avoided but embraced, since it is useful for A/B testing.

This philosophy has different incarnations, notably the Spotify Squad framework and a very large palette of Agile methodologies (The Agile Manifesto in fact predates the Facebook motto by several years). Nowadays most organizations that produce software attempt to adopt or adapt one of these processes. Sadly, some end up with a very expensive Agile cargo cult.

Slow innovation

Another type of useful and legitimate innovation is slow innovation. This term applies when software is used to solve hard but well-defined problems, often in conjunction with some type of research or standardization effort.

For example, a company contributing to a hypothetical new version of the Geography Markup Language and then implementing it would be right to see itself as an innovative organization, even though the entire process of delivering the new features may take years. Most levels of safety-critical software also fit in this category, because the extensive testing and certification involved all but prevent moving fast.

The following are some example practices aligned with slow innovation:

  • Extensive (but not unlimited) time is allocated to research with no immediate measure of success;
  • It’s important or even critical to avoid introducing bugs;
  • Project deadlines are important in order to align with validation and certification timelines, and also because this type of work often follows a business-to-business sales model;
  • The feedback loop between generating and validating an idea can be long, and it’s not always feasible to shorten it;
  • Aspects such as hidden features and A/B testing can violate regulations, and may be avoided entirely.

You should know which type of innovation you are aiming for

When picking or creating a software development process and culture for your organization, a wise first step is to identify as precisely as possible which type of innovation your business is engaged in: fast innovation, slow innovation, or no innovation. In case you are somewhere in between, try to identify which parts of the mentality and processes you want to adopt from the communities and trend-setters in each of these categories.

How long until the cloud community re-invents MDE?

A blog post by Satnam Singh, former member of the Kubernetes team, caught my attention recently. Among other ideas, the post laments the lack of an abstraction layer isolating application developers from the low-level details of adapting their implementation to a particular deployment configuration. To my ears, this sounds like a textbook application for Model-Driven Engineering (MDE), a by now mature software development paradigm with a reasonable but not stellar level of industry adoption.

What really intrigued me was how the post ended up literally stating the definition of MDE in an effort to describe a desirable solution to the problem of cloud deployment. Here are some choice quotes, together with my translations to MDE-speak.

“I think we should develop cloud computing applications by writing a program in a first class programming language like Go or Haskell that denotes the desired deployment of our system; specifying an executable model of the desired system rather than its explicit implementation.”

In MDE, this would traditionally be understood as behavioral modeling, and accomplished via state machines and process languages such as UML Activity Diagrams or BPMN, rather than general-purpose programming languages. But the main idea remains: specify the behavior, not its implementation.
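
To make the parallel concrete, here is a toy sketch in plain Python – every class and name is hypothetical, not part of any real framework – of what “an executable model of the desired system rather than its explicit implementation” might look like for a small deployment:

```python
# A toy sketch (all classes and names hypothetical) of specifying an executable
# model of the desired system: the model says which services exist and how they
# depend on each other, and can be validated and "simulated" locally without
# deploying anything.
from dataclasses import dataclass, field


@dataclass
class Service:
    name: str
    replicas: int
    depends_on: list = field(default_factory=list)


@dataclass
class DeploymentModel:
    services: list

    def validate(self):
        """Analyze the model itself, e.g. for dangling dependencies."""
        known = {s.name for s in self.services}
        for s in self.services:
            for dep in s.depends_on:
                if dep not in known:
                    raise ValueError(f"{s.name} depends on unknown service {dep!r}")

    def simulate_startup_order(self):
        """Derive a start-up order from the model: behavior, not implementation."""
        started, order = set(), []
        remaining = list(self.services)
        while remaining:
            ready = [s for s in remaining if set(s.depends_on) <= started]
            if not ready:
                raise ValueError("dependency cycle detected")
            for s in ready:
                started.add(s.name)
                order.append(s.name)
            remaining = [s for s in remaining if s.name not in started]
        return order


model = DeploymentModel(services=[
    Service("db", replicas=1),
    Service("api", replicas=3, depends_on=["db"]),
    Service("web", replicas=2, depends_on=["api"]),
])
model.validate()
print(model.simulate_startup_order())  # ['db', 'api', 'web']
```

Whether such a model is written in Go, Haskell, Python, or a dedicated modeling language matters less than the fact that the model, rather than a pile of deployment scripts, is the primary artifact.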

“Using tools on a local computer, this code could then be analyzed for bugs in its functionality, formal techniques could scrutinize it for security issues and aspects of the behaviour of the desired system could be simulated.”

Indeed, behavioral models can be formally analyzed and executed using an action language. The quality of tool support for these tasks may vary depending on modeling language, but these aspects of behavioral models are relatively well understood.

“When we are ready we could run something other than a compiler on this code to ‘synthesize’ the actual implementation that should be deployed.”

In MDE terms, this would be called code generation. As the post goes on to describe the generation of deployment configuration specifications, this “synthesis” step could more accurately be described as a model-to-text (M2T) transformation. Luckily, there are several production-ready M2T tools out there, such as Acceleo and Xpand.
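
To illustrate the idea (and only the idea – this is not how Acceleo or Xpand work internally), here is a minimal Python sketch of a model-to-text transformation that renders Kubernetes-style Deployment manifests from a trivial in-memory model; the registry and image names are invented:

```python
# Toy model-to-text (M2T) transformation: model in, text out. The "model" here
# is just a dict of service names and replica counts; the registry is made up.
DEPLOYMENT_TEMPLATE = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
spec:
  replicas: {replicas}
  selector:
    matchLabels:
      app: {name}
  template:
    metadata:
      labels:
        app: {name}
    spec:
      containers:
      - name: {name}
        image: registry.example.com/{name}:latest
"""

services = {"db": 1, "api": 3, "web": 2}  # minimal stand-in for a real model

for name, replicas in services.items():
    print(DEPLOYMENT_TEMPLATE.format(name=name, replicas=replicas))
```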

“Alternatively, you could think of some other technique that tries to raise the abstraction level at which we design, develop and deploy cloud computing applications and close the semantic gap between what’s in the developer’s head and the low level nuts and bolts we have today for actually creating cloud computing applications.”

My point is that people have been thinking of such techniques for decades. Some even claim that the history of software engineering is one of ever-rising levels of abstraction. MDE is certainly not the first attempt – the fourth-generation programming language (4GL) effort of the 1980s comes to mind, but there were probably even earlier proposals for programming at the level of “what’s in the developer’s head” (a timeline of these would be really interesting).

Interestingly, the natural fit between cloud computing and MDE has already been identified by the research community. There are even annual workshops dedicated to this topic. Here’s hoping that this time around we don’t end up re-inventing our past in an effort to create the future, as so often seems to be the case in software engineering.

Conducting an SLR: tips and pitfalls

I recently had the pleasure of giving a guest talk at the 2015 Empirical Research Methods in Informatics (ERMI) Summer School, hosted by Harald Störrle at the Technical University of Denmark. The purpose of my talk was to share some of the insights I had gathered as a PhD student conducting his first Systematic Literature Review (SLR). Or, in other words, to hopefully spare attendees the pain of making the same mistakes I had made. As SLRs are quickly gaining acceptance as a research method in Software Engineering, this might be a topic of more general interest – so I decided to make it the topic of my very first blog post.

If, like me in 2013, you have no clue what an SLR is, feel free to think of it as a regular literature review “with a twist”. The twist is that every step of the review (deciding on a motivation and scope, searching for publications, selecting publications to include, extracting relevant data from the selected publications, and reporting on the findings) is clearly described in a review protocol, a document one prepares before embarking on the review proper. An SLR has several advantages over a regular literature review, the chief of which is that it produces reliable, science-grade results, as opposed to an expert opinion. Other advantages are repeatability and a much higher confidence that all relevant literature is covered. Of course, these advantages come at a price: an SLR is usually much more time consuming than a regular literature review. The de facto standard guidelines for conducting an SLR in Software Engineering were published a few years ago by Kitchenham et al., and I strongly recommend reading them for a more thorough introduction to the topic.

SLR or SMS?

One of the first things to decide when embarking on an SLR is its scope – the precise topic under review. This was also the first sticking point for me, as I started out with the rather misguided idea that I could write an SLR on model transformation languages, the broad area of my PhD work. I estimate that there are currently over a hundred such languages in existence, with many hundreds or even thousands of relevant publications addressing them. Covering all of these languages in an SLR, which is generally understood to also address qualitative aspects (i.e., reading the publications in some detail), is very likely a tall order.

Pitfall #1: Adopting an excessively wide scope for the SLR.

Faced with this challenge, my decision was to convert the SLR into a Systematic Mapping Study (SMS). Unlike an SLR, a mapping study does not address qualitative aspects of the reviewed papers. As its name suggests, it simply aims to map a field of research by finding out what has been published and where the unaddressed gaps lie. In this light, an SMS is more of an exploratory endeavor.

An alternative course of action is, of course, to keep the SLR methodology but narrow the scope of the review. In my example regarding model transformation languages, suitable narrower scopes could be debugging support in model transformations or formal verification of model transformations (by the way, someone should actually write these).

Where to search for primary studies

There are two options for performing a search for primary studies: manually searching relevant outlets (journals, conference proceedings), and automatically searching digital libraries (DLs) and indexing databases. I have only ever performed the second type of search, so I cannot give advice on the first.

There are many relevant DLs for Software Engineering research. Apart from publishers’ own libraries, indexing databases such as Inspec, Scopus, and Compendex contain comprehensive bibliographic data. Prior to starting an SLR, I had never heard of these. They are, however, not free to consult.

Tip #1: Consider including indexing databases in your automated search.

Your institution may provide access to a metasearch tool allowing a single search to be executed across many DLs. If such a tool is available, I recommend using it, as it will save a lot of duplicate effort and allow you to circumvent the quirks of individual digital libraries (and there are plenty of quirks, some of which are listed in what follows).

Tip #2: If you have access to one, use a metasearch tool.

The search string

Kitchenham et al. recommend formulating search strings in conjunctive normal form. This essentially means enumerating the search terms in a logical conjunction, while using disjunctions to specify synonyms for each term. As most DLs support these logical operators, this is a good way to systematically build a search string. However, all the DLs I have used place an upper limit on the number of terms that can be included in a search string – around 20 terms, with slight variations between DLs. If your search string exceeds this limit, you will have to split it into shorter ones and execute several searches.

Tip #3: Long search strings might force you to perform more than one search.
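
For illustration, here is a small sketch of assembling such a conjunctive-normal-form string programmatically; the concepts and synonyms are invented examples from the model transformation domain:

```python
# Conjunctive normal form: the outer list ANDs concepts together, each inner
# list ORs the synonyms of one concept. All terms below are invented examples.
concepts = [
    ["model transformation", "graph transformation"],
    ["language", "tool", "framework"],
    ["debugging", "debugger", "fault localization"],
]


def to_search_string(concepts):
    """Render the CNF structure in the boolean syntax most DLs accept."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in synonyms) + ")"
        for synonyms in concepts
    )


total_terms = sum(len(synonyms) for synonyms in concepts)  # watch the ~20-term limit
print(total_terms, to_search_string(concepts))
# 8 ("model transformation" OR "graph transformation") AND ("language" OR ...) AND ...
```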

Using wildcards is a tempting method to shorten search strings. For instance, the “*” wildcard in “transformation*” will be matched by most DLs to any string starting with the prefix “transformation”. My experience, however, is that wildcards can considerably increase the number of false positive search results, so I try to avoid them.

One aspect to keep in mind is search term stemming, especially if it is performed by default. Stemming means reducing search terms to their elemental root, such that, for example, a search for the term “testing” will also retrieve matches for the terms “test” and “tested”. The metasearch tool provided by my university performs stemming by default, which adds a large number of false positive hits to my searches (to make matters worse, stemming cannot be turned off).

Pitfall #2: Beware of search term stemming – it might be performed by default.

Exporting search results

Once you have performed your DL search, you will want to export the results in a convenient format (e.g. RIS, BibTeX, CSV) for further processing in a reference management or spreadsheet tool.

The length of search strings is not the only limit imposed by DLs. Every DL I have worked with enforces a cap on the number of search results that can be exported at a time. This cap is usually set around 1000 results. The only way to export all results is to break down the search into several smaller ones.

Pitfall #3: If your DL search returns more than 1000 results, you will likely not be able to export them in one go.

Even more notable is some DLs’ lack of support for bulk export of search results – I’m looking at you, ACM Digital Library. The lack of this feature makes life very hard, if not downright impossible, for researchers conducting SLRs. One workaround I have found is good old web scraping: tools such as Zotero and Mendeley provide browser plug-ins that do their best to extract lists of references from web pages (in my experience, the Zotero plug-in is more accurate). This method is, unfortunately, not bulletproof. Some references may not be exported properly and will need further manual processing. Furthermore, the paging used by DLs means that you will have to go through each page of search results manually to run the scraping process.

Tip #4: Reference scraping browser plug-ins can be used to export search results from DLs that don’t provide an export feature.

Study selection criteria

After exporting your search results, it’s time to apply the study selection criteria in order to decide which of the original results are truly relevant for your SLR. One aspect I found confusing is the suggested use of both inclusion criteria and exclusion criteria. Isn’t inclusion simply the dual of exclusion? I ended up using both, while assigning them different roles: I first used exclusion criteria as a fast filter for eliminating irrelevant studies based on their title, abstract, and metadata, and then used inclusion criteria for making the final call on whether a paper is included, taking into account the full paper contents as well as any quality conditions specified in the protocol.

Tip #5: Decide if you want to use inclusion criteria, exclusion criteria, or both. If you use both, have a clear definition of their respective roles.

The more general suggestion I would make here is to apply the selection criteria in increasing order of the amount of time it takes to evaluate them. The goal is to eliminate irrelevant studies quickly and with as little effort as possible, while avoiding the elimination of relevant studies. To further speed things up, I recommend extracting the data of interest from a study immediately after deciding it should be included in the SLR. Coming back to it at a later time for data extraction will impose an additional time penalty, as you will have to read it again to refresh your memory.

Tip #6: Streamline study processing so that you avoid the time-consuming task of “refreshing your memory” regarding a primary study.
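
A minimal sketch of this cheap-to-expensive ordering is given below – the Study fields and the criteria themselves are purely hypothetical placeholders:

```python
# Sketch of applying selection criteria in increasing order of evaluation cost:
# exclusion criteria run on metadata first, inclusion criteria on the full text
# last, and data is extracted immediately after a paper is accepted.
from dataclasses import dataclass


@dataclass
class Study:
    title: str
    abstract: str
    year: int
    full_text: str = ""


def excluded_on_metadata(study: Study) -> bool:
    """Fast filter: title, abstract, and metadata only."""
    off_topic = "model transformation" not in (study.title + " " + study.abstract).lower()
    return off_topic or study.year < 2000


def included_on_full_text(study: Study) -> bool:
    """Slow filter: requires reading the paper."""
    return "evaluation" in study.full_text.lower()


def extract_data(study: Study) -> dict:
    """Extract the data of interest while the paper is fresh in mind."""
    return {"title": study.title, "year": study.year}


def select(studies):
    selected = []
    for s in studies:
        if excluded_on_metadata(s):      # cheap check first
            continue
        if included_on_full_text(s):     # expensive check only for survivors
            selected.append(extract_data(s))
    return selected
```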

Filling in the gaps

Many of the SLRs I have read complement the systematic search process by manually adding primary studies known to be of interest that were not returned by the search, as well as by performing reference snowballing. One of the ERMI participants expressed his concern that this might undermine the value of the original search. My view is that filling in the (sometimes inevitable) gaps in the DL search results is beneficial, as long as the “gaps” don’t turn out to be larger than the search results themselves – that would indicate that an inaccurate search term was used.

Tip #7: Reference snowballing and even manually adding relevant studies to the SLR is not only acceptable, but recommended.

Quality assessment criteria

When assessing the quality of a primary study, it’s important to have a predefined list of quality assessment criteria to evaluate. I found that the criteria suggested by Kitchenham et al. are a good starting point, although their level of detail might not correspond to the often insufficient amount of study design information presented in Software Engineering papers. Conducting a pilot study will help calibrate the quality assessment criteria with the level of study design details presented in the studies of interest.

Tip #8: Calibrate your quality assessment criteria with the amount of study design details presented in the primary studies – in Software Engineering, this amount is unfortunately rather low.

Data visualization

It is often difficult to find the most expressive type of visualization for a given data set. At the same time, practically all SLRs report on the same kinds of data, such as included/excluded studies and numbers of studies in different categories. Adopting a set of visualization best practices could therefore be beneficial to readers. For instance, Sankey diagrams are a great way of visualizing the outcomes of a study selection process, and bubble charts offer an expressive visualization of study categories.
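
For example, a bubble chart takes only a few lines with matplotlib; every number and category name below is made up purely for illustration:

```python
# Bubble chart of study counts per category; bubble area encodes the count.
import matplotlib.pyplot as plt

# (contribution type, topic, number of studies) -- invented data
points = [
    ("technique", "debugging", 12),
    ("tool", "debugging", 7),
    ("technique", "verification", 9),
    ("case study", "verification", 4),
    ("survey", "testing", 2),
]

x_labels = sorted({p[0] for p in points})
y_labels = sorted({p[1] for p in points})
xs = [x_labels.index(p[0]) for p in points]
ys = [y_labels.index(p[1]) for p in points]
counts = [p[2] for p in points]

fig, ax = plt.subplots()
ax.scatter(xs, ys, s=[60 * c for c in counts], alpha=0.4)  # area ~ study count
for x, y, c in zip(xs, ys, counts):
    ax.text(x, y, str(c), ha="center", va="center")
ax.set_xticks(range(len(x_labels)))
ax.set_xticklabels(x_labels)
ax.set_yticks(range(len(y_labels)))
ax.set_yticklabels(y_labels)
ax.set_xlabel("contribution type")
ax.set_ylabel("topic")
plt.show()
```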

When searching for an appropriate visualization, I often find myself consulting The Data Visualization Catalog, a useful online collection of common types of plots and graphs.

Tip #9: Choose your data visualization methods wisely, possibly consulting a visualization catalog beforehand.

Final thoughts

The nuggets of advice presented above may be obvious for those with even a moderate level of experience regarding SLRs. However, they would have collectively saved me several weeks of work had I known about them when starting out. I shared them here in the hope that someone reading this post will be able to save some time. After all, time seems to be one of the most valuable resources needed for an SLR.

As a side note, the SLR-turned-SMS on model transformation languages I started two years ago is now shelved, possibly indefinitely. In the end I found the scope simply too large. I have since focused on a narrower scope, which in my view is also more interesting – but that is a topic for another post.