How much documentation is enough documentation? What does the definition of done require regarding documentation? Can't all code simply be self-documenting? These are common questions in DevOps, Scrum and other types of software development teams. They are typically raised bottom-up, among software developers, at the time the software is being built. That is not the best moment to have this discussion – when all focus is on deliverable or delivered functionality – nor are those necessarily the best participants to have it. And before you know it, someone quotes the Agile Manifesto – “working software over comprehensive documentation” – to kill the debate and dodge the bullet that having to produce documentation seems to be to many in our profession.
If you do not need documentation, you should not create it. But if you do need it, you should. So “do you need documentation?” becomes the relevant question. Maybe not you personally, but the organization represented by your product owner.
It seems sensible to take a step back from this discussion and think about documentation from the top down, realizing that documentation is not an end in itself. We do not need documentation per se, after all. Our real objective with our software could perhaps be described as follows: provide required business functionality according to stipulated non-functional characteristics. Additionally we could add: allow evolution of the business functionality (without too much effort, risk or cost) and adaptation to changing non-functional requirements.
Documentation is an element that can contribute to this high-level objective – and should be regarded in that way. Documentation helps provide insight into the workings of software – what the software does and how it has been implemented to do that. We should ask: when, for whom and in what way should documentation support this top objective?
- When: upon fixing a problem, evolving and extending functionality, changing non-functional characteristics and analyzing software (for root cause, regarding security or compliance)
- Who: the DevOps engineer charged with analyzing and fixing a problem, the QA officer reviewing software for quality, security, compliance and other purposes, the architect assessing the impact of non-functional changes, the software engineer determining how to implement a new or changed functional requirement, the architect or software engineer trying to assess if and how to (re)use a software component; in short: someone who needs to understand how [and why] the software works – typically not the person who designed or programmed the code (or has a vivid memory of doing so). Note: end users of software are also consumers of documentation; I am not including that type of documentation in this discussion
- How: by providing quicker and better understanding of the working and structure of software – both at the code-line-by-code-line level and at the higher component and application architecture level; insight into the flow of control (call stack) and of data through the software, and insight into dependencies (compile/build time and run time)
It is important to realize that documentation is not created at the same time it is needed, nor is it created by the people for whom it is primarily intended. It is created ahead of time, by people who are probably not its consumers (or who, when they do need documentation, will be in a very different state of mind and memory than at the time of creating it). In order to create documentation that will satisfy the requirements of the eventual consumers, we should involve those consumers: find out about their requirements, have them participate when the documentation is put together and have them accept the documentation deliverables. At the very least we should describe each persona for which we have to provide documentation, describe the objectives of each persona and derive the purpose of and requirements for documentation for each persona in each relevant situation – in a way very similar to how we produce software to satisfy the requirements of the business and of end users.
What-If Scenarios
Some agile proponents suggest that you should only create documentation when you need it and only change it when it [the current state of documentation] hurts. They will typically acknowledge that this may cause problems when the documentation is needed in a hurry, or when the people with the knowledge to produce it are no longer available. In order to have the required documentation (and no more) created while the team is still around and the knowledge is still fresh in everyone’s mind, we could go through a thought experiment: what are the scenarios we want to be prepared for once the application has gone live and we are a few months down the road? What could reasonably happen, what would the response of the then-responsible team be, and what would that team need in terms of tools, skills and information? Some of the obvious scenarios include:
- there is a bug in some part of the system that requires an urgent fix (the team needs to be able to analyze the problem, find the root cause using logging and monitoring information from the live system, design and implement a change that patches the issue without undesirable side effects, and test and deploy the change with confidence)
- a new functional requirement is expressed by the product owner (the team needs to be able to determine how this feature can be implemented in the existing system, efficiently and without impacting existing functionality and non-functional behavior; it needs to be able to implement, test and deploy the change with confidence; and it needs to know how to apply logging, tracing and monitoring mechanisms to the new code it introduces)
- one of the components (versions) that the system relies on – for example a library, database, application server or home-grown framework – can no longer be used [because of licensing | security | enterprise architecture | regulatory considerations …]
- a new team member joins the team and needs to be brought up to speed with [certain areas of] the system; note: the new member may join the team in a remote location or in a period of high work pressure
- the product owner considers replacing part of the system with a SaaS service and wants to know what parts of the system can be replaced by said SaaS product (because of functional parity)…
- the product owner considers outsourcing DevOps responsibility to an external vendor for strategic reasons; this vendor needs to be able to first assess the state and scope of the system and calculate a quote
By walking through the scenarios our product owner thinks we should be able to support – and determining what would be required to handle each situation (for the software assets the product owner considers relevant) – we can derive requirements for documentation for real-world situations without waiting for those situations to actually occur. It gives us the best of both worlds: we only create relevant documentation, and we create it while we can still do so efficiently.
What can we expect from the professionals involved in interpreting software? We can assume they have general expertise regarding the programming languages, frameworks and tools used for the software. They are familiar with mainstream features and mechanisms. We should not expect them to be familiar with obscure, exotic, specialized constructs, mechanisms and functionality. Nor can we assume they know about niche frameworks and libraries off the beaten track, including homegrown frameworks.
Note: a periodic review of our software and its dependencies should bring to light dependencies on frameworks and libraries that were once considered mainstream and broadly known but now have been relegated to the category of obscure and exotic that we cannot assume IT professionals have expertise with or can easily find reliable resources for. An obvious example of such a technology is Apache Struts.
When a review indicates that some of our software assets rely on technology that should now be considered not mainstream, we need to decide on appropriate action. If we do nothing, we may find ourselves in a situation where evolving or even fixing the software, and operating it under evolving non-functional requirements (for example regarding security), becomes increasingly hard. That may be acceptable, given the rate of change, the roadmap and the relevance of the software assets involved. Or it may not be. An extreme action would be to eradicate the problem by modifying the software to completely remove all dependencies on the offending technology. A milder alternative could involve extending the documentation regarding the dependencies on the components under scrutiny (to explain explicitly what used to be understood implicitly) and making sure that relevant resources on the technology (such as a documentation set and the sources) are available in our own environment, so that we do not rely on external sources whose availability is not guaranteed.
The Agile (Project) View
As we are moving to a DevOps style of software life cycle management, where a team assumes responsibility for the continuous life cycle of software products, the notion of projects becomes less relevant. Those of us who still work in [temporary] projects to produce new or substantially changed software assets have to deal with the inherent friction between the short-term, explicit objectives of the project – produce working software – and the typically longer-term, less explicit objectives of the IT organization of having software assets that can be operated and evolved over time. Frequently the members of the project team will not be involved in that long term and therefore have no skin in the game. No one in the project team represents these longer-term interests, unless very clear acceptance criteria have been specified and are enforced.
It is not uncommon for Scrum teams to start pointing at the Agile Manifesto when documentation comes up as a topic. The manifesto states “working software over comprehensive documentation”. Note the word comprehensive – it is crucial. No one suggests that there should be no documentation. In fact, “Documentation is as much a part of the system as the source code”. However, documentation should be there for a reason: it has a purpose and should be created to support that purpose. Content is more important than representation – documentation can take many forms, including diagrams, audio and video and of course annotated source code. The effort of producing documentation is justified only by the value it will have in supporting the true objectives. The core agile principles state that after producing working software as the primary objective, “Enabling the Next Effort is Your Secondary Goal”. That next effort can be the next sprint or the next project – undertaken by the current team or by a completely new group of people.
The article Agile/Lean Documentation: Strategies for Agile Software Development on AgileModeling.com provides a good overview of documentation and how it is an intrinsic part of an agile software methodology. Some of its guidelines are:
- You should understand the total cost of ownership (TCO) for any piece of documentation, and someone must explicitly choose to make that investment. Note: the TCO of not having a given section of documentation should also be clear – although it will include a risk factor
- Well-written documentation supports organizational memory effectively
- Documentation should be concise: overviews/roadmaps are generally preferred over detailed documentation.
- With high quality source code and a test suite to back it up you need a lot less system documentation.
- Put the documentation in the most appropriate place – as much as possible in a single place (for each type of documentation) and at least in a consistent way
- Document with a purpose: every aspect you document needs to be relevant for at least some identified persona in a well defined situation or with a specific task
- Do not document (manually) what can be extracted automatically; use automation (as part of the build process) to produce documentation from source code, fully in sync with that source code
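The “extract automatically” guideline can be made concrete with a minimal sketch. Assuming a Python codebase, the script below walks a module and emits a Markdown overview of its public functions from their docstrings and signatures; the `billing` module and `compute_vat` function are made up for illustration. Because the output is derived entirely from the source, it can never drift out of sync with it.

```python
import inspect
import types

def generate_markdown_docs(module):
    """Emit a Markdown overview of a module's public functions,
    built entirely from source-code docstrings and signatures."""
    lines = [f"# {module.__name__}", ""]
    for name, func in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private helpers
        signature = inspect.signature(func)
        doc = inspect.getdoc(func) or "(undocumented)"
        lines.append(f"## `{name}{signature}`")
        lines.append(doc)
        lines.append("")
    return "\n".join(lines)

# Example: document a tiny throwaway module defined in-line.
demo = types.ModuleType("billing")
def compute_vat(amount: float, rate: float = 0.21) -> float:
    """Return the VAT due over `amount` at the given `rate`."""
    return amount * rate
demo.compute_vat = compute_vat

print(generate_markdown_docs(demo))
```

In a real build, a step like this (or an off-the-shelf generator such as Sphinx or Javadoc) would run on every commit and publish the result alongside the code.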
What type of documentation?
Documentation is used to provide insight into the working and implementation of software assets to IT professionals with specific tasks to perform – such as analyzing an incident, fixing a bug, changing functionality or non-functional behavior, or performing some type of QA. Depending on the task, different information is required. We typically discern different types of documentation, each providing different information for different tasks. We will ignore process-oriented documentation and documentation intended for end users. Instead, we focus on system documentation – documentation produced during software design and development and intended for later in the life cycle (after the current sprint or project): documentation that describes the system itself and its parts, including requirements documents, design decisions, architecture descriptions, program source code and help guides. This article from the Agile Modeling site contains a detailed list of potential documents to consider producing.
I have identified three main categories – but I realize this is fairly arbitrary.
1. What the System does: The Functionality
An overview of what the system does [or is supposed to do] is of course quite relevant. Rather than guessing and reverse engineering by inspecting code and trying to work through all user interfaces and APIs – as I have seen happen on several occasions when systems were to be replaced – it is crucial that documentation is available to clarify the functionality and individual features of the system. Ideally, this documentation also clarifies the reasoning behind features: what is the business value, the purpose, the origin, the proponents and key stakeholders. This documentation can be constructed from requirements artifacts such as business rule definitions, use cases, user stories, or essential user interface prototypes (to name a few).
Note: simply holding on to all [Jira] user stories is not good enough. User stories are process documents, used to build functionality in iterative steps. The current state of the system in terms of functionality and features is an aggregation of many user stories – some of which would have to be subtracted. User stories primarily support the agile process; they are temporary, relevant during that process, and should not be regarded as the persisted truth about the present functionality of the system. Of course completed user stories provide input to the functional documentation of the system, and it would make perfect sense to refer from this documentation to user stories – to provide background to the what and why of specific features. Given that code commits usually refer to user stories as well, this may help find links between functionality and software assets.
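The link between commits and user stories can often be mined mechanically, provided the team puts story keys in commit messages. The sketch below assumes Jira-style keys such as PROJ-101; the commit data is invented for illustration and would, in a real repository, come from something like `git log`.

```python
import re
from collections import defaultdict

# Hypothetical commit log: (hash, message) pairs. In practice these
# would be extracted from the version control system, e.g. `git log`.
COMMITS = [
    ("a1b2c3", "PROJ-101 add VAT calculation to invoicing"),
    ("d4e5f6", "PROJ-101 fix rounding in VAT calculation"),
    ("0a9b8c", "PROJ-207 expose invoice API endpoint"),
    ("77ffee", "housekeeping: bump dependencies"),
]

STORY_ID = re.compile(r"\b([A-Z]+-\d+)\b")  # Jira-style keys like PROJ-101

def commits_per_story(commits):
    """Group commit hashes by the user-story key found in their message."""
    index = defaultdict(list)
    for sha, message in commits:
        for story in STORY_ID.findall(message):
            index[story].append(sha)
    return dict(index)

print(commits_per_story(COMMITS))
# Commits without a story key (like 77ffee) simply do not appear.
```

An index like this gives reviewers and documenters a starting map from features (stories) to the software assets that implement them.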
The documentation that describes functionality of the system is typically organized by business process, end user task, feature and end-to-end chain. It regards the system from the perspective of its business or functional stakeholders.
Note: Test cases, test scenarios and other test artifacts – especially when Test Driven Development is employed – can be part of the requirements documentation. These artifacts capture in structured form the functional requirements on the system. While perhaps not easily accessible to human readers, these artifacts can certainly help to understand the intended behavior of the system.
The non-functional characteristics of the system should be documented as well: what are the requirements regarding, for example, availability, scalability, response times and security that apply to the system – and what is the origin of these requirements? The non-functional constraints help us understand design decisions and solution details, and they are quite relevant input for future modifications and extensions.
2. How the System is Implemented
(overarching) Architecture & Design Principles – the rules that apply – guiding and constraining the system, including high level technology choices
Design decisions – A summary of critical decisions pertaining to design and architecture that the team made throughout development – to understand the current state of the application and to avoid needless refactoring at some point in the future or rehashing a previously made decision. The design decisions include the choice of [versions of] technology, 3rd party components, tools and platform.
System Documentation, Software design & Solution Details – to provide an overview of the system and to help people understand the system. Common information includes an overview of the technical architecture, the business architecture and detailed architecture and design models, or references to them. Note: Agile Documentation refers to this kind of documentation as ‘truck insurance’: stuff you write down in case someone on the team is hit by a truck (or decides on a new career path as a truck driver); it provides memory to those who do not remember.
Just as StreetView is not enough to travel through a city, let alone a country, and detailed models of individual organs need to be complemented with a body atlas to understand anatomy, function and interdependencies, it is not enough to have details on individual software assets. Self-documenting code is great – and helps with understanding the logic within a single asset. But it is not enough to understand flow and dependencies across assets – to gain insight into the bigger picture of how assets hang together and how functional features have been implemented across multiple assets. Please read this article on [the myth of] self-documenting code.
Anyone tasked with fixing currently flawed system behavior or adding a new feature should be able to use System Documentation as map or guide to find the [locations in] software assets where the current behavior has been implemented. References should be available between the System Documentation and the Requirements & Features documentation to indicate which features are implemented in which software assets.
Some of the information offered in system documentation can be compiled automatically, using tools for analyzing code – statically (from the sources) and dynamically (at run time). Additionally, document generators can be used to extract information from source code to produce not just asset-level documentation but system overview information as well. Wherever we can automate the compilation of documentation, we should do so, to reduce manual effort and to increase accuracy and timeliness. Machine-learning-assisted tooling is expected to bring fully automated documentation generation, or just-in-time code analysis, within reach.
For the foreseeable future, a human touch will be required to lay down in system documentation what tools cannot extract: considerations, decisions, known limitations, special conditions, behavior under extreme situations, alternatives evaluated. Anything that went into designing and implementing the system that makes understanding the system easier, or that will help prevent developers from introducing tried-and-rejected implementations, should be recorded. If your team pondered several approaches, perhaps even tried out a few and finally, with good reason, settled on a specific approach – shell sort over bubble sort, or Arrays over LinkedList, or MongoDB over Cassandra – you should record what you evaluated, what your findings were, what you decided and what was rejected. If only to prevent your successors from having to go through the same debates and investigations.
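One lightweight way to capture such decisions is an Architecture Decision Record (ADR): one short, numbered file per decision, stored in version control next to the code. The template below is a common minimal shape; all content in it is invented for illustration (it happens to reuse the MongoDB-over-Cassandra example from above):

```
# ADR 007: Use MongoDB instead of Cassandra for the catalog store

Status: accepted (2019-03-12)

Context: the catalog service needs flexible, document-shaped records
and modest write volume; the team has MongoDB operating experience.

Decision: store catalog data in MongoDB.

Alternatives considered:
- Cassandra: rejected; its write-optimized model did not fit our query patterns.
- Relational schema: rejected; frequent schema evolution is expected.

Consequences: queries stay simple; we accept weaker multi-document
transaction guarantees than a relational database would give.
```

Because each record is small, dated and immutable once accepted, a future team can reconstruct why the system looks the way it does without rehashing the original debate.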
Source Code – the ultimate source of truth regarding the implementation of the system is of course the source code. Having the source code readily accessible – organized in commits, branches, releases and tags – is a necessity in many cases. Ideally we can search through the source code, follow calls using hyperjumps and get human readable renditions of comments and inline documentation.
In order for human consumers to make sense of source code – it should be readable (and not just to the compiler). “Readable code is code that clearly communicates its intention to the reader. Code that is not readable takes longer to understand and increases the likelihood of defects.” Read this interesting article on what makes code readable – or not.
Some elements that contribute to readability:
- Layout (for example indentation, consistent use of spacing)
- Naming (parameters, variables, functions, modules|classes|services) in a consistent and meaningful way
- Consistent structure and ordering of units
- Use of established design patterns
- Stay away from exotic language features (or comment their use)
- DRY (Don’t Repeat Yourself): for example, avoid duplicated code
- KISS (Keep It Simple, Stupid): for example, avoid premature or unnecessary optimization that complicates code
- Appropriately comment code inline. Do not comment the obvious (this getFirstName method returns …). However, do explain the not-so-trivial, such as the intention of a regular expression, a recursive function call, or an invocation of an unclearly named function or service. Provide references to Stack Overflow or other resources consulted for special code constructs
- Use “footnotes” in inline comments: explicit references to sections in the separate accompanying documentation that provide background information – such as design decisions, R&D resulting in the current implementation, suggestions for improving the implementation, or non-functional findings (memory usage, performance); this allows associating relevant information directly with source code without creating bloated source code assets
- Do not commit code with commented out sections of code (at the very least include a comment that explains the meaning of the commented out code block)
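Several of these guidelines can be shown in one small contrast. The validation rule and all names below are made up for illustration: the first version works but hides its intent; the second uses intention-revealing names, a named constant, and comments on the one genuinely non-trivial construct, the regular expression.

```python
import re

# Hard to read: what does this check, and why 8?
def chk(s):
    return bool(re.match(
        r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$", s)) and len(s) >= 8

# Readable: intention-revealing name, named constant, commented regex.
MIN_EMAIL_LENGTH = 8  # shortest address we accept, e.g. "a@bc.com"

# local-part @ domain . top-level-domain (2+ letters); a deliberately
# simple approximation -- full RFC 5322 validation is far more complex.
EMAIL_PATTERN = re.compile(
    r"^[A-Za-z0-9._%+-]+"   # local part
    r"@[A-Za-z0-9.-]+"      # domain
    r"\.[A-Za-z]{2,}$"      # top-level domain
)

def is_valid_email(address: str) -> bool:
    """Cheap syntactic check; does not prove the mailbox exists."""
    return len(address) >= MIN_EMAIL_LENGTH and bool(EMAIL_PATTERN.match(address))

print(is_valid_email("dev@example.com"))  # True
print(is_valid_email("a@b.c"))            # False: shorter than MIN_EMAIL_LENGTH
```

Both functions behave identically; only the second one tells a future reader what is being checked, why, and where the approximation ends.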
The objective is to have an IT professional – well versed in the technologies, programming language and mainstream libraries used in implementing the software asset – be able to quickly understand what the code does, why it is structured the way it is and how it could be modified in a controlled fashion. We could create an overview of the prerequisites we assume of all readers of the source code – for example, listing the specific libraries we use in our code and the not-so-common language constructs we have adopted – and provide references to resources our readers may want to consult when (or before) trying to interpret our sources.
3. How the System should be Operated
This type of documentation is intended for the daily operation of the system and its software assets. We assume the assets are available, functionally acceptable, and ready to be deployed, configured and run; this documentation should help us with that. It describes how to deploy, start & stop, scale, relocate and configure the system. It describes how logging, tracing, health checks & monitoring are done, how to spot errors and how to perform periodic clean-up. It explains how the system behaves in different environments and how it interacts with other systems. It should also provide instructions on how to deal with malfunctions.
Documentation Twin – compare digital twin
As Wikipedia puts it: “Digital twin refers to a digital replica of physical assets (physical twin), processes, people, places, systems and devices that can be used for various purposes”. The digital twin represents a physical asset in a digital way. It is used to visualize metrics that have been collected from the physical world (for example through IoT), from business applications and technical systems, and from human observation. By inspecting the digital twin, interested parties gain most of the insight they require about the physical asset, in a much easier way than by having to inspect the physical asset itself. A challenge with digital twins is of course ensuring they closely follow any relevant changes in their physical counterpart.
Suggestion: create ‘documentation twins’ for (critical) software components, key business processes and crucial end-to-end chains. A documentation twin represents a software asset. It provides insight into the inner workings and dependencies of the software asset, including its logging, health check and monitoring hooks and the associated test sets – and it could even represent the actual state of deployed instances of the software asset. Using pseudo code, [links to] generated documentation, visualization/overview of flow logic and dependencies (possibly extracted using code analysis tools) and (references to) designs, design decisions, implementation considerations and known limitations, it is the go-to place for stakeholders to learn about software assets.
A documentation twin will commonly be exposed as a website – in browsers, using hyperlinks and other HTML features. A wiki seems like a suitable platform for a documentation twin, although perhaps GitHub and ReadTheDocs are options too.
Wikipedia on Software Documentation – https://en.wikipedia.org/wiki/Software_documentation
Tools to support Production and Management of Software Documentation – https://www.process.st/software-documentation/
Read the Docs – open platform for creating, managing and publishing software documentation – https://readthedocs.org/
Wikipedia – Comparison of Documentation Generators – https://en.wikipedia.org/wiki/Comparison_of_documentation_generators
Agile Modeling (Scott Ambler and partners) on Documentation: Agile/Lean Documentation Strategies – http://www.agilemodeling.com/essays/agileDocumentation.htm
Software Documentation Types and Best Practices – https://www.altexsoft.com/blog/business/software-documentation-types-and-best-practices/
Quora Discussion Thread – What are the best practices for documenting a (software development) Agile/Scrum project? – https://www.quora.com/What-are-the-best-practices-for-documenting-a-software-development-Agile-Scrum-project
The Self-Documenting Code Myth – short blog article arguing the need for overview documentation complementing well-documented source code – https://arpytoth.com/2015/12/23/the-self-documenting-code-myth/
Thread on readability of source code – https://softwareengineering.stackexchange.com/questions/162923/what-defines-code-readability
Musings on code readability: What Makes Readable Code: Not What You Think – https://simpleprogrammer.com/what-makes-code-readable-not-what-you-think/