BOOK: SOFTWARE ENGINEERING FOR SELF-DIRECTED LEARNERS
SECTION: INTRODUCTION
Software Engineering
Introduction
The following description of the Joys of the Programming Craft was taken (and emphasis added) from Chapter 1 of the famous book The Mythical Man-Month, by Frederick P. Brooks.
Why is programming fun? What delights may its practitioner expect as his reward?
First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God's delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.
Second is the pleasure of making things that are useful to other people. Deep within, you want others to use your work and to find it helpful. In this respect the programming system is not essentially different from the child's first clay pencil holder "for Daddy's office."
Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. The programmed computer has all the fascination of the pinball machine or the jukebox mechanism, carried to the ultimate.
Fourth is the joy of always learning, which springs from the nonrepeating nature of the task. In one way or another the problem is ever new, and its solver learns something: sometimes practical, sometimes theoretical, and sometimes both.
Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by the exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures....
Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. It prints results, draws pictures, produces sounds, moves arms. The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.
Programming then is fun because it gratifies creative longings built deep within us and delights sensibilities we have in common with all men.
Not all is delight, however, and knowing the inherent woes makes it easier to bear them when they appear.
First, one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn't work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program.
Next, other people set one's objectives, provide one's resources, and furnish one's information. One rarely controls the circumstances of his work, or even its goal. In management terms, one's authority is not sufficient for his responsibility. It seems that in all fields, however, the jobs where things get done never have formal authority commensurate with responsibility. In practice, actual (as opposed to formal) authority is acquired from the very momentum of accomplishment.
The dependence upon others has a particular case that is especially painful for the system programmer. He depends upon other people's programs. These are often maldesigned, poorly implemented, incompletely delivered (no source code or test cases), and poorly documented. So he must spend hours studying and fixing things that in an ideal world would be complete, available, and usable.
The next woe is that designing grand concepts is fun; finding nitty little bugs is just work. With any creative activity come dreary hours of tedious, painstaking labor, and programming is no exception.
Next, one finds that debugging has a linear convergence, or worse, where one somehow expects a quadratic sort of approach to the end. So testing drags on and on, the last difficult bugs taking more time to find than the first.
The last woe, and sometimes the last straw, is that the product over which one has labored so long appears to be obsolete upon (or before) completion. Already colleagues and competitors are in hot pursuit of new and better ideas. Already the displacement of one's thought-child is not only conceived, but scheduled.
This always seems worse than it really is. The new and better product is generally not available when one completes his own; it is only talked about. It, too, will require months of development. The real tiger is never a match for the paper one, unless actual use is wanted. Then the virtues of reality have a satisfaction all their own.
Of course the technological base on which one builds is always advancing. As soon as one freezes a design, it becomes obsolete in terms of its concepts. But implementation of real products demands phasing and quantizing. The obsolescence of an implementation must be measured against other existing implementations, not against unrealized concepts. The challenge and the mission are to find real solutions to real problems on actual schedules with available resources.
This then is programming, both a tar pit in which many efforts have floundered and a creative activity with joys and woes all its own. For many, the joys far outweigh the woes....
SECTION: PROGRAMMING PARADIGMS
Object-Oriented Programming
Introduction
Object-Oriented Programming (OOP) is a programming paradigm. A programming paradigm guides programmers to analyze programming problems, and structure programming solutions, in a specific way.
Programming languages have traditionally divided the world into two parts—data and operations on data. Data is static and immutable, except as the operations may change it. The procedures and functions that operate on data have no lasting state of their own; they’re useful only in their ability to affect data.
This division is, of course, grounded in the way computers work, so it’s not one that you can easily ignore or push aside. Like the equally pervasive distinctions between matter and energy and between nouns and verbs, it forms the background against which you work. At some point, all programmers—even object-oriented programmers—must lay out the data structures that their programs will use and define the functions that will act on the data.
With a procedural programming language like C, that’s about all there is to it. The language may offer various kinds of support for organizing data and functions, but it won’t divide the world any differently. Functions and data structures are the basic elements of design.
Object-oriented programming doesn’t so much dispute this view of the world as restructure it at a higher level. It groups operations and data into modular units called objects and lets you combine objects into structured networks to form a complete program. In an object-oriented programming language, objects and object interactions are the basic elements of design.
Some other examples of programming paradigms are:
Paradigm | Programming Languages |
---|---|
Procedural Programming paradigm | C |
Functional Programming paradigm | F#, Haskell, Scala |
Logic Programming paradigm | Prolog |
Some programming languages support multiple paradigms.
Java is primarily an OOP language, but it supports limited forms of functional programming and can be used to write procedural code (although that is not recommended), e.g., se-edu/addressbook-level1.
JavaScript and Python support functional, procedural, and object-oriented programming.
Objects
An object in Object-Oriented Programming (OOP) has state and behavior, similar to objects in the real world.
Every object has both state (data) and behavior (operations on data). In that, they’re not much different from ordinary physical objects. It’s easy to see how a mechanical device, such as a pocket watch or a piano, embodies both state and behavior. But almost anything that’s designed to do a job does, too. Even simple things with no moving parts such as an ordinary bottle combine state (how full the bottle is, whether or not it’s open, how warm its contents are) with behavior (the ability to dispense its contents at various flow rates, to be opened or closed, to withstand high or low temperatures).
It’s this resemblance to real things that gives objects much of their power and appeal. They can not only model components of real systems, but equally as well fulfill assigned roles as components in software systems.
OOP views the world as a network of interacting objects.
A real world scenario viewed as a network of interacting objects:
You are asked to find out the average age of a group of people Adam, Beth, Charlie, and Daisy. You take a piece of paper and a pen, go to each person, ask for their age, and note it down. After collecting the ages of all four, you enter them into a calculator to find the total. Then you use the same calculator to divide the total by four to get the average age. This can be viewed as the objects You, Pen, Paper, Calculator, Adam, Beth, Charlie, and Daisy interacting to accomplish the end result of calculating the average age of the four persons. These objects are connected in a network with a certain structure that dictates how they can interact. For example, the You object is connected to the Pen object, and hence You can use the Pen object to write.
OOP solutions try to create a similar object network inside the computer’s memory – a sort of virtual simulation of the corresponding real world scenario – so that a similar result can be achieved programmatically.
OOP does not demand that the virtual world object network follow the real world exactly.
Our previous example can be tweaked a bit as follows:
- Use an object called Main to represent your role in the scenario.
- As there is no physical writing involved, you can replace the Pen and Paper with an object called AgeList that is able to keep a list of ages.
Every object has both state (data) and behavior (operations on data).
The state and behavior of our running example are as follows:
Object | Real World? | Virtual World? | Example of State (i.e. Data) | Examples of Behavior (i.e. Operations) |
---|---|---|---|---|
Adam | ✓ | ✓ | Name, Date of Birth | Calculate age based on birthday |
Pen | ✓ | - | Ink color, Amount of ink remaining | Write |
AgeList | - | ✓ | Recorded ages | Give the number of entries, Accept an entry to record |
Calculator | ✓ | ✓ | Numbers already entered | Calculate the sum, divide |
You/Main | ✓ | ✓ | Average age, Sum of ages | Use other objects to calculate |
Every object has an interface and an implementation.
Every real world object has,
- an interface through which other objects can interact with it, and,
- an implementation that supports the interface but may not be accessible to the other object.
The interface and implementation of some real-world objects in our example:
- Calculator: the buttons and the display are part of the interface; circuits are part of the implementation.
- Adam: In the context of our 'calculate average age' example,
- the interface of Adam consists of requests that Adam will respond to, e.g. "Give age to the nearest year, as at Jan 1st of this year" "State your name".
- the implementation includes the mental calculation Adam uses to calculate the age which is not visible to other objects.
Similarly, every object in the virtual world has an interface and an implementation.
The interface and implementation of some virtual-world objects in our example:
- Adam: the interface might have a method getAge(Date asAt); the implementation of that method is not visible to other objects.
Objects interact by sending messages. Both real world and virtual world object interactions can be viewed as objects sending messages to each other. The message can result in the sender object receiving a response and/or the receiver object’s state being changed. Furthermore, the result can vary based on which object received the message, even if the message is identical (see rows 1 and 2 in the example below).
Same messages and responses from our running example:
World | Sender | Receiver | Message | Response | State Change |
---|---|---|---|---|---|
Real | You | Adam | "What is your name?" | "Adam" | - |
Real | as above | Beth | as above | "Beth" | - |
Real | You | Pen | Put nib on paper and apply pressure | Makes a mark on your paper | Ink level goes down |
Virtual | Main | Calculator (current total is 50) | add(int i): int i = 23 | 73 | total = total + 23 |
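The last row of the table above can be sketched in Java as follows. This is a minimal illustration only (the field and method details are assumed, not prescribed by the text): the Main object sends the add message to the Calculator object, receives a response, and causes a state change in the receiver.

class Calculator {
    private int total = 0;

    // Receiving the "add" message changes this object's state and returns a response.
    int add(int i) {
        total = total + i;
        return total;
    }
}

class Main {
    public static void main(String[] args) {
        Calculator calculator = new Calculator();
        calculator.add(50);                 // state change: total becomes 50
        int response = calculator.add(23);  // response: 73; state change: total becomes 73
        System.out.println(response);       // prints 73
    }
}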
The concept of Objects in OOP is an abstraction mechanism because it allows us to abstract away the lower level details and work with bigger granularity entities i.e. ignore details of data formats and the method implementation details and work at the level of objects.
You can deal with a Person object that represents the person Adam and query the object for Adam's age instead of dealing with details such as Adam's date of birth (DoB), in what format the DoB is stored, the algorithm used to calculate the age from the DoB, etc.
Encapsulation protects an implementation from unintended actions and from inadvertent access.
-- Object-Oriented Programming with Objective-C, Apple
An object is an encapsulation of some data and related behavior in terms of two aspects:
1. The packaging aspect: An object packages data and related behavior together into one self-contained unit.
2. The information hiding aspect: The data in an object is hidden from the outside world and is only accessible using the object's interface.
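For example, here is a minimal Java sketch of the AgeList object from the running example (the method names are assumed for illustration), showing both aspects:

import java.util.ArrayList;
import java.util.List;

class AgeList {
    // Packaging aspect: the data (the recorded ages) and the related behavior
    // (recording an entry, counting entries) are bundled into one unit.
    // Information hiding aspect: the internal list is private and is accessible
    // only through the methods below, i.e., the object's interface.
    private List<Integer> ages = new ArrayList<>();

    void recordAge(int age) {
        ages.add(age);
    }

    int getEntryCount() {
        return ages.size();
    }
}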
Classes
Writing an OOP program is essentially writing instructions that the computer will use to,
- create the virtual world of the object network, and
- provide it the inputs to produce the outcome you want.
A class contains instructions for creating a specific kind of object. It turns out that sometimes multiple objects keep the same type of data and have the same behavior because they are of the same kind. The instructions for creating a 'kind' (or 'class') of objects can be written once, and those same instructions can be used to instantiate objects of that kind. We call such instructions a Class.
Classes and objects in an example scenario
Consider the example of writing an OOP program to calculate the average age of Adam, Beth, Charlie, and Daisy.
Instructions for creating objects Adam, Beth, Charlie, and Daisy will be very similar because they are all of the same kind: they all represent ‘persons’ with the same interface, the same kind of data (i.e. name, dateOfBirth, etc.), and the same kind of behavior (i.e. getAge(Date), getName(), etc.). Therefore, you can have a class called Person containing instructions on how to create Person objects and use that class to instantiate objects Adam, Beth, Charlie, and Daisy.
Similarly, you need AgeList, Calculator, and Main classes to instantiate one each of AgeList, Calculator, and Main objects.
Class | Objects |
---|---|
Person | objects representing Adam, Beth, Charlie, Daisy |
AgeList | an object to represent the age list |
Calculator | an object to do the calculations |
Main | an object to represent you (i.e., the one who manages the whole operation) |
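A minimal Java sketch of such a Person class is given below (the exact members are illustrative, and the dates of birth are made up). Note how the same class is used to instantiate multiple objects.

import java.time.LocalDate;
import java.time.Period;

class Person {
    private String name;
    private LocalDate dateOfBirth;

    Person(String name, LocalDate dateOfBirth) {
        this.name = name;
        this.dateOfBirth = dateOfBirth;
    }

    String getName() {
        return name;
    }

    int getAge(LocalDate asAt) {
        // implementation detail, hidden from other objects
        return Period.between(dateOfBirth, asAt).getYears();
    }
}

class Main {
    public static void main(String[] args) {
        // The same instructions (the Person class) create distinct objects.
        Person adam = new Person("Adam", LocalDate.of(2000, 1, 1));
        Person beth = new Person("Beth", LocalDate.of(2001, 2, 2));
        System.out.println(adam.getName() + " " + adam.getAge(LocalDate.now()));
        System.out.println(beth.getName() + " " + beth.getAge(LocalDate.now()));
    }
}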
While all objects of a class have the same attributes, each object has its own copy of the attribute value.
All Person objects have the name attribute, but the value of that attribute varies between Person objects.
However, some attributes are not suitable to be maintained by individual objects. Instead, they should be maintained centrally, shared by all objects of the class. They are like ‘global variables’ but attached to a specific class. Such variables whose value is shared by all instances of a class are called class-level attributes.
The attribute totalPersons should be maintained centrally and shared by all Person objects rather than copied at each Person object.
Similarly, when a normal method is being called, a message is being sent to the receiving object and the result may depend on the receiving object.
Sending the getName() message to the Adam object results in the response "Adam", while sending the same message to the Beth object results in the response "Beth".
However, there can be methods related to a specific class but not suitable for sending messages to a specific object of that class. Such methods that are called using the class instead of a specific instance are called class-level methods.
The method getTotalPersons() is not suitable to send to a specific Person object because a specific object of the Person class should not have to know about the total number of Person objects.
Class-level attributes and methods are collectively called class-level members (also called static members sometimes, because some programming languages use the keyword static to identify class-level members). They are to be accessed using the class name rather than an instance of the class.
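A minimal Java sketch of the totalPersons / getTotalPersons() example above (the static keyword marks class-level members; the rest of the class is illustrative):

class Person {
    // Class-level (static) attribute: a single copy shared by all Person objects.
    private static int totalPersons = 0;

    // Instance-level attribute: each Person object has its own copy.
    private String name;

    Person(String name) {
        this.name = name;
        totalPersons++;
    }

    // Class-level (static) method: called using the class, not a specific object.
    static int getTotalPersons() {
        return totalPersons;
    }
}

A caller would access the member via the class name, e.g., Person.getTotalPersons(), rather than via a specific Person object.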
An Enumeration is a fixed set of values that can be considered as a data type. An enumeration is often useful when using a regular data type such as int or String would allow invalid values to be assigned to a variable.
Suppose you want a variable called priority to store the priority of something. There are only three priority levels: high, medium, and low. You can declare the variable priority as of type int and use only the values 2, 1, and 0 to indicate the three priority levels. However, this opens the possibility of an invalid value such as 9 being assigned to it. But if you define an enumeration type called Priority that has three values HIGH, MEDIUM, and LOW only, a variable of type Priority will never be assigned an invalid value because the compiler is able to catch such an error.

Priority: HIGH, MEDIUM, LOW
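A minimal Java sketch of the Priority enumeration (the Task class is an assumed usage example):

enum Priority {
    HIGH, MEDIUM, LOW
}

class Task {
    Priority priority = Priority.MEDIUM;
    // A statement like priority = 9; would be rejected by the compiler,
    // because 9 is not a valid Priority value.
}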
Associations
Objects in an OO solution need to be connected to each other to form a network so that they can interact with each other. Such connections between objects are called associations.
Suppose an OOP program for managing a learning management system creates an object structure to represent the related objects. In that object structure you can expect to have associations between a Course object that represents a specific course and Student objects that represent students taking that course.
Associations in an object structure can change over time.
To continue the previous example, the associations between a Course object and Student objects can change as students enroll in the course or drop the course over time.
Associations among objects can be generalized as associations between the corresponding classes too.
In our example, as some Course objects can have associations with some Student objects, you can view it as an association between the Course class and the Student class.
Implementing associations
You use instance level variables to implement associations.
When two classes are linked by an association, it does not necessarily mean the two objects taking part in an instance of the association know about (i.e., have a reference to) each other. The concept of which object in the association knows about the other object is called navigability.
Navigability can be unidirectional or bidirectional. Suppose there is an association between the classes Box and Rope, and the Box object b and the Rope object r are taking part in one instance of that association.
- Unidirectional: If the navigability is from Box to Rope, b will have a reference to r but r will not have a reference to b. Similarly, if the navigability is in the other direction, r will have a reference to b but b will not have a reference to r.
- Bidirectional: b will have a reference to r and r will have a reference to b i.e., the two objects will be pointing to each other for the same single instance of the association.
Note that two unidirectional associations in opposite directions do not add up to a single bidirectional association.
In the code below, there is a bidirectional association between the Person class and the Cat class i.e., if a Person p is the owner of the Cat c, it will result in p and c having references to each other.
class Person {
Cat pet;
//...
}
class Cat{
Person owner;
//...
}
class Person:
def __init__(self):
self.pet = None # a Cat object
class Cat:
def __init__(self):
self.owner = None # a Person object
The code below has two unidirectional associations between the Person class and the Cat class (in opposite directions), because the breeder is not necessarily the same person keeping the cat as a pet i.e., there are two separate associations here, which rules out it being a bidirectional association.
class Person {
Cat pet;
//...
}
class Cat{
Person breeder;
//...
}
class Person:
def __init__(self):
self.pet = None # a Cat object
class Cat:
def __init__(self):
self.breeder = None # a Person object
Multiplicity is the aspect of an OOP solution that dictates how many objects take part in each association.
The multiplicity of the association between Course objects and Student objects tells you how many Course objects can be associated with one Student object and vice versa.
Implementing multiplicity
A normal instance-level variable gives us a 0..1 multiplicity (also called optional associations) because a variable can hold a reference to a single object or null.
In the code below, the Logic class has a variable that can hold 0..1 i.e., zero or one Minefield objects.
class Logic {
Minefield minefield;
// ...
}
class Minefield {
//...
}
class Logic:
def __init__(self):
self.minefield = None
# ...
class Minefield:
# ...
A variable can be used to implement a 1 multiplicity too (also called compulsory associations).
In the code below, the Logic class will always have a ConfigGenerator object, provided the variable is not set to null at some point.
class Logic {
ConfigGenerator cg = new ConfigGenerator();
...
}
class Logic:
def __init__(self):
self.config_gen = ConfigGenerator()
Bidirectional associations require matching variables in both classes.
In the code below, the Foo class has a variable to hold a Bar object and vice versa i.e., each object can have an association with an object of the other type.
class Foo {
Bar bar;
//...
}
class Bar {
Foo foo;
//...
}
class Foo:
    def __init__(self, bar):
        self.bar = bar
class Bar:
    def __init__(self, foo):
        self.foo = foo
To implement other multiplicities, choose a suitable data structure such as Arrays, ArrayLists, HashMaps, Sets, etc.
This code uses a two-dimensional array to implement a 1-to-many association from the Minefield to Cell.
class Minefield {
Cell[][] cell;
...
}
class Minefield:
def __init__(self):
self.cells = {1:[], 2:[], 3:[]}
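As a further illustration, here is a hedged sketch (the Course/Student classes and the enroll method are assumed, not given in the text) of using an ArrayList to implement a 1-to-many association from Course to Student:

import java.util.ArrayList;
import java.util.List;

class Course {
    // One Course object can be associated with many Student objects.
    private List<Student> students = new ArrayList<>();

    void enroll(Student student) {
        students.add(student);
    }
}

class Student {
    // ...
}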
In the context of OOP associations, a dependency is a need for one class to depend on another without having a direct association in the same direction. Reason for the exclusion: if there is an association from class Foo to class Bar (i.e., navigable from Foo to Bar), that means Foo is obviously dependent on Bar and hence there is no point in mentioning the dependency specifically. In other words, we are only concerned about non-obvious dependencies here. One cause of such dependencies is interactions between objects that do not have a long-term link between them.
A Course class can have a dependency on a Registrar class because the Course class needs to refer to the Registrar class to obtain the maximum number of students it can support (e.g., Registrar.MAX_COURSE_CAPACITY).
In the code below, Foo has a dependency on Bar but it is not an association, because it is only an interaction and there is no long-term relationship between a Foo object and a Bar object. i.e. the Foo object does not keep the Bar object it receives as a parameter.
class Foo {
int calculate(Bar bar) {
return bar.getValue();
}
}
class Bar {
int value;
int getValue() {
return value;
}
}
class Foo:
    def calculate(self, bar):
        return bar.value
class Bar:
def __init__(self, value):
self.value = value
A composition is an association that represents a strong whole-part relationship. When the whole is destroyed, parts are destroyed too i.e., the part should not exist without being attached to a whole.
A Board (used for playing board games) consists of Square objects.
Composition also implies that there cannot be cyclical links.
The ‘sub-folder’ association between Folder objects is a composition type association. That means if the Folder object foo is a sub-folder of the Folder object bar, bar cannot be a sub-folder of foo.
Whether a relationship is a composition can depend on the context.
Is the relationship between Email and EmailSubject composition? That is, is the email subject part of an email to the extent that an email subject cannot exist without an email?
- When modeling an application that sends emails, the answer is 'yes'.
- When modeling an application that gathers analytics about email traffic, the answer may be 'no' (e.g., the application might collect just the email subjects for text analysis).
A common use of composition is when parts of a big class are carved out as smaller classes for the ease of managing the internal design. In such cases, the classes extracted out still act as parts of the bigger class and the outside world has no business knowing about them.
Cascading deletion alone is not sufficient for composition. Suppose there is a design in which Person objects are attached to Task objects and the former get deleted whenever the latter is deleted. This fact alone does not mean there is a composition relationship between the two classes. For it to be composition, a Person must be an integral part of a Task in the context of that association, at the concept level (not simply at the implementation level).
Identifying and keeping track of composition relationships in the design has benefits such as helping to maintain the data integrity of the system. For example, when you know that a certain relationship is a composition, you can take extra care in your implementation to ensure that when the whole object is deleted, all its parts are deleted too.
Implementing composition
Composition is implemented using a normal variable. If correctly implemented, the ‘part’ object will be deleted when the ‘whole’ object is deleted. Ideally, the ‘part’ object may not even be visible to clients of the ‘whole’ object.
class Email {
private Subject subject;
...
}
class Email:
def __init__(self):
self.__subject = Subject()
In this code, the Email has a composition type relationship with the Subject class, in the sense that the subject is part of the email.
Aggregation represents a container-contained relationship. It is a weaker relationship than composition.
SportsClub can act as a container for Person objects who are members of the club. Person objects can survive without a SportsClub object.
Implementing aggregation
Implementation is similar to that of composition except the containee object can exist even after the container object is deleted.
In the code below, there is an aggregation association between the Team class and the Person class in that a Team contains a Person object who is the leader of the team.
class Team {
Person leader;
...
void setLeader(Person p) {
leader = p;
}
}
class Team:
def __init__(self):
self.__leader = None
def set_leader(self, person):
self.__leader = person
An association class represents additional information about an association. It is a normal class but plays a special role from a design point of view.
A Man class and a Woman class are linked with a ‘married to’ association and there is a need to store the date of marriage. However, that data is related to the association rather than specifically owned by either the Man object or the Woman object. In such situations, an additional association class can be introduced, e.g. a Marriage class, to store such information.
Implementing association classes
There is no special way to implement an association class. It can be implemented as a normal class that has variables to represent the endpoints of the association it represents.
In the code below, the Transaction class is an association class that represents a transaction between a Person who is the seller and another Person who is the buyer.
class Transaction {
//all fields are compulsory
Person seller;
Person buyer;
Date date;
String receiptNumber;
Transaction(Person seller, Person buyer, Date date, String receiptNumber) {
//set fields
}
}
Inheritance
The OOP concept Inheritance allows you to define a new class based on an existing class.
For example, you can use inheritance to define an EvaluationReport class based on an existing Report class so that the EvaluationReport class does not have to duplicate data/behaviors that are already implemented in the Report class. The EvaluationReport can inherit the wordCount attribute and the print() method from the base class Report.
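A minimal Java sketch of that example (the member details are illustrative):

class Report {
    int wordCount;

    void print() {
        // ... print the report ...
    }
}

// EvaluationReport inherits the wordCount attribute and the print() method from Report.
class EvaluationReport extends Report {
    // ... members specific to evaluation reports ...
}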
- Other names for Base class: Parent class, Superclass
- Other names for Derived class: Child class, Subclass, Extended class
A superclass is said to be more general than the subclass. Conversely, a subclass is said to be more specialized than the superclass.
Applying inheritance on a group of similar classes can result in the common parts among classes being extracted into more general classes.
Man and Woman behave the same way for certain things. However, the two classes cannot be simply replaced with a more general class Person because of the need to distinguish between Man and Woman for certain other things. A solution is to add the Person class as a superclass (to contain the code common to men and women) and let Man and Woman inherit from the Person class.
Inheritance implies the derived class can be considered as a sub-type of the base class (and the base class is a super-type of the derived class), resulting in an is-a relationship.
Inheritance does not necessarily mean a sub-type relationship exists. However, the two often go hand-in-hand. For simplicity, at this point let us assume inheritance implies a sub-type relationship.
To continue the previous example,
- Woman is a Person
- Man is a Person
Inheritance relationships through a chain of classes can result in inheritance hierarchies (aka inheritance trees).
Two inheritance hierarchies/trees are given below. Note that the triangle points to the parent class. Observe how Parrot is a Bird as well as an Animal.

Multiple Inheritance is when a class inherits directly from multiple classes. Multiple inheritance among classes is allowed in some languages (e.g., Python, C++) but not in other languages (e.g., Java, C#).
The Honey class inherits from the Food class and the Medicine class because honey can be consumed as a food as well as a medicine (in some oriental medicine practices). Similarly, a Car is a Vehicle, an Asset, and a Liability.

Method overriding is when a sub-class changes the behavior inherited from the parent class by re-implementing the method. Overridden methods have the same name, same type signature, and same return type.
Consider the following case of the EvaluationReport class inheriting the Report class:
Report methods | EvaluationReport methods | Overrides? |
---|---|---|
print() | print() | Yes |
write(String) | write(String) | Yes |
read():String | read(int):String | No. Reason: the two methods have different signatures; This is a case of overloading (rather than overriding). |
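A minimal Java sketch of the print() row in the table above (the method bodies are illustrative):

class Report {
    void print() {
        System.out.println("Printing a report");
    }

    void write(String text) {
        // ...
    }
}

class EvaluationReport extends Report {
    @Override
    void print() {
        // Overrides the inherited print(): same name, same type signature, same return type.
        System.out.println("Printing an evaluation report");
    }
}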
Method overloading is when there are multiple methods with the same name but different type signatures. Overloading is used to indicate that multiple operations do similar things but take different parameters.
Type signature: The type signature of an operation is the type sequence of the parameters. The return type and parameter names are not part of the type signature. However, the parameter order is significant.
Example:
Method | Type Signature |
---|---|
int add(int X, int Y) | (int, int) |
void add(int A, int B) | (int, int) |
void m(int X, double Y) | (int, double) |
void m(double X, int Y) | (double, int) |
In the case below, the calculate method is overloaded because the two methods have the same name but different type signatures, (String) and (int).
calculate(String): void
calculate(int): void
An interface is a behavior specification i.e. a collection of method specifications. If a class implements the interface, it means the class is able to support the behaviors specified by the said interface.
There are a number of situations in software engineering when it is important for disparate groups of programmers to agree to a "contract" that spells out how their software interacts. Each group should be able to write their code without any knowledge of how the other group's code is written. Generally speaking, interfaces are such contracts. --Oracle Docs on Java
Suppose SalariedStaff is an interface that contains two methods, setSalary(int) and getSalary(). AcademicStaff can declare itself as implementing the SalariedStaff interface, which means the AcademicStaff class must implement all the methods specified by the SalariedStaff interface i.e., setSalary(int) and getSalary().
A class implementing an interface results in an is-a relationship, just like in class inheritance.
In the example above, AcademicStaff is a SalariedStaff. An AcademicStaff object can be used anywhere a SalariedStaff object is expected e.g. SalariedStaff ss = new AcademicStaff().
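A minimal Java sketch of the above (the salary field and the method bodies are illustrative):

interface SalariedStaff {
    void setSalary(int salary);
    int getSalary();
}

class AcademicStaff implements SalariedStaff {
    private int salary;

    @Override
    public void setSalary(int salary) {
        this.salary = salary;
    }

    @Override
    public int getSalary() {
        return salary;
    }
}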
Abstract class: A class declared as an abstract class cannot be instantiated, but it can be subclassed.
You can declare a class as abstract when a class is merely a representation of commonalities among its subclasses in which case it does not make sense to instantiate objects of that class.
The Animal class that exists as a generalization of its subclasses Cat, Dog, Horse, Tiger, etc. can be declared as abstract because it does not make sense to instantiate an Animal object.
Abstract method: An abstract method is a method signature without a method implementation.
The move method of the Animal class is likely to be an abstract method, as it is not possible to implement a move method at the Animal class level to fit all subclasses because each animal type can move in a different way.
A class that has an abstract method becomes an abstract class because the class definition is incomplete (due to the missing method body) and it is not possible to create objects using an incomplete class definition.
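A minimal Java sketch of the Animal / move example (the Cat subclass and its implementation are illustrative):

abstract class Animal {
    // Abstract method: a signature without an implementation.
    abstract void move();
}

class Cat extends Animal {
    @Override
    void move() {
        // Each subclass provides its own implementation.
        System.out.println("walks on four legs");
    }
}

Here, new Animal() would be rejected by the compiler, while new Cat() is fine.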
Dynamic binding (aka late binding): a mechanism where method calls in code are resolved at runtime, rather than at compile time.
Overridden methods are resolved using dynamic binding, and therefore a method call resolves to the implementation in the actual type of the object.
Consider the code below. The declared type of s is Staff and it appears as if the adjustSalary(int) operation of the Staff class is invoked.
void adjustSalary(int byPercent) {
for (Staff s: staff) {
s.adjustSalary(byPercent);
}
}
However, at runtime s can receive an object of any subclass of Staff. That means the adjustSalary(int) operation of the actual subclass object will be called. If the subclass does not override that operation, the operation defined in the superclass (in this case, the Staff class) will be called.
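For instance, here is a hedged sketch of what such a subclass might look like (the method bodies are illustrative, not from the text). The call s.adjustSalary(byPercent) in the loop above is bound to one of these implementations at runtime, based on the actual type of the object s refers to.

class Staff {
    void adjustSalary(int byPercent) {
        System.out.println("Staff: adjust salary by " + byPercent + "%");
    }
}

class AcademicStaff extends Staff {
    @Override
    void adjustSalary(int byPercent) {
        // Chosen at runtime when s refers to an AcademicStaff object.
        System.out.println("AcademicStaff: adjust salary by " + byPercent + "%");
    }
}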
Static binding (aka early binding): When a method call is resolved at compile time.
In contrast, overloaded methods are resolved using static binding.
Note how the constructor is overloaded in the class below. The method call new Account() is bound to the first constructor at compile time.
class Account {
Account() {
// Signature: ()
...
}
Account(String name, String number, double balance) {
// Signature: (String, String, double)
...
}
}
Similarly, the calculateGrade method is overloaded in the code below, and a method call calculateGrade("A1213232") is bound to the second implementation, at compile time.
void calculateGrade(int[] averages) { ... }
void calculateGrade(String matric) { ... }
Every instance of a subclass is an instance of the superclass, but not vice-versa. As a result, inheritance allows substitutability: the ability to substitute a child class object where a parent class object is expected.

An AcademicStaff is an instance of a Staff, but a Staff is not necessarily an instance of an AcademicStaff. i.e. wherever an object of the superclass is expected, it can be substituted by an object of any of its subclasses.
The following code is valid because an AcademicStaff object is substitutable as a Staff object.
Staff staff = new AcademicStaff(); // OK
But the following code is not valid because staff is declared as a Staff type and therefore its value may or may not be of the type AcademicStaff, which is the type expected by the variable academicStaff.
Staff staff;
...
AcademicStaff academicStaff = staff; // Not OK
Polymorphism
Polymorphism:
The ability of different objects to respond, each in its own way, to identical messages is called polymorphism. -- Object-Oriented Programming with Objective-C, Apple
Polymorphism allows you to write code targeting superclass objects, use that code on subclass objects, and achieve possibly different results based on the actual class of the object.
Assume classes Cat and Dog are both subclasses of the Animal class. You can write code targeting Animal objects and use that code on Cat and Dog objects, achieving possibly different results based on whether it is a Cat object or a Dog object. Some examples (see the sketch after this list):
- Declare an array of type Animal and still be able to store Dog and Cat objects in it.
- Define a method that takes an Animal object as a parameter and yet be able to pass Dog and Cat objects to it.
- Call a method on a Dog or a Cat object as if it is an Animal object (i.e., without knowing whether it is a Dog object or a Cat object) and get a different response from it based on its actual class e.g., call the Animal class's method speak() on object a and get "Meow" as the return value if a is a Cat object and "Woof" if it is a Dog object.
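A minimal Java sketch of the speak() example in the last bullet above (the return value of the base Animal class is an assumption):

class Animal {
    String speak() {
        return "...";
    }
}

class Cat extends Animal {
    @Override
    String speak() {
        return "Meow";
    }
}

class Dog extends Animal {
    @Override
    String speak() {
        return "Woof";
    }
}

class Main {
    public static void main(String[] args) {
        // Code written against Animal works on Cat and Dog objects,
        // giving different results based on the actual class of each object.
        Animal[] animals = {new Cat(), new Dog()};
        for (Animal a : animals) {
            System.out.println(a.speak()); // prints "Meow" then "Woof"
        }
    }
}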
Polymorphism literally means "ability to take many forms".
Three concepts combine to achieve polymorphism: substitutability, operation overriding, and dynamic binding.
- Substitutability: Because of substitutability, you can write code that expects objects of a parent class and yet use that code with objects of child classes. That is how polymorphism is able to treat objects of different types as one type.
- Overriding: To get polymorphic behavior from an operation, the operation in the superclass needs to be overridden in each of the subclasses. That is how overriding allows objects of different subclasses to display different behaviors in response to the same method call.
- Dynamic binding: Calls to overridden methods are bound to the implementation of the actual object's class dynamically during the runtime. That is how the polymorphic code can call the method of the parent class and yet execute the implementation of the child class.
More
What is the difference between a Class, an Abstract Class, and an Interface?
- An interface is a behavior specification with no implementation.
- A class is a behavior specification + implementation.
- An abstract class is a behavior specification + a possibly incomplete implementation.
How does overriding differ from overloading?
Overloading is used to indicate that multiple operations do similar things but take different parameters. Overloaded methods have the same method name but different method signatures and possibly different return types.
Overriding is when a sub-class redefines an operation using the same method name and the same type signature. Overridden methods have the same name, same method signature, and same return type.
SECTION: REQUIREMENTS
Requirements
A software requirement specifies a need to be fulfilled by the software product.
A software project may be,
- a brownfield project i.e., develop a product to replace/update an existing software product
- a greenfield project i.e., develop a totally new system with no precedent
In either case, requirements need to be gathered, analyzed, specified, and managed.
Requirements come from stakeholders.
Stakeholder: A party that is potentially affected by the software project. e.g. users, sponsors, developers, interest groups, government agencies, etc.
Identifying requirements is often not easy. For example, stakeholders may not be aware of their precise needs, may not know how to communicate their requirements correctly, may not be willing to spend effort in identifying requirements, etc.
Requirements can be divided into two in the following way:
- Functional requirements specify what the system should do.
- Non-functional requirements specify the constraints under which the system is developed and operated.
Some examples of non-functional requirement categories:
- Data requirements e.g. size, volatility, etc.,
- Environment requirements e.g. the technical environment in which the system would operate or needs to be compatible with.
- Accessibility, Capacity, Compliance with regulations, Documentation, Disaster recovery, Efficiency, Extensibility, Fault tolerance, Interoperability, Maintainability, Privacy, Portability, Quality, Reliability, Response time, Robustness, Scalability, Security, Stability, Testability, and more ...
You may have to spend extra effort in digging NFRs out as early as possible because,
- NFRs are easier to miss e.g., stakeholders tend to think of functional requirements first
- sometimes NFRs are critical to the success of the software. E.g. A web application that is too slow or that has low security is unlikely to succeed even if it has all the right functionality.
Requirements can be prioritized based on the importance and urgency, while keeping in mind the constraints of schedule, budget, staff resources, quality goals, and other constraints.
A common approach is to group requirements into priority categories. Note that all such scales are subjective, and stakeholders define the meaning of each level in the scale for the project at hand.
An example scheme for categorizing requirements:
- Essential: The product must have this requirement fulfilled or else it does not get user acceptance.
- Typical: Most similar systems have this feature although the product can survive without it.
- Novel: New features that could differentiate this product from the rest.
Other schemes:
- High, Medium, Low
- Must-have, Nice-to-have, Unlikely-to-have
- Level 0, Level 1, Level 2, ...
Some requirements can be discarded if they are considered ‘out of scope’.
The requirement given below is for a Calendar application. Stakeholders of the software (e.g. product designers) might decide the following requirement is not in the scope of the software.
The software records the actual time taken by each task and shows the difference between the actual and scheduled time for the task.
Here are some characteristics of well-defined requirements [📖 zielczynski]:
- Unambiguous
- Testable (verifiable)
- Clear (concise, terse, simple, precise)
- Correct
- Understandable
- Feasible (realistic, possible)
- Independent
- Necessary
- Implementation-free (i.e. abstract)
Besides these criteria for individual requirements, the set of requirements as a whole should be
- Consistent
- Non-redundant
- Complete
Gathering Requirements
Brainstorming: A group activity designed to generate a large number of diverse and creative ideas for the solution of a problem.
In a brainstorming session there are no "bad" ideas. The aim is to generate ideas; not to validate them. Brainstorming encourages you to "think outside the box" and put "crazy" ideas on the table without fear of rejection.
Surveys can be used to solicit responses and opinions from a large number of stakeholders regarding a current product or a new product.
Observing users in their natural work environment can uncover product requirements. Usage data of an existing system can also be used to gather information about how an existing system is being used, which can help in building a better replacement e.g. to find the situations where the user makes mistakes when using the current system.
Interviewing stakeholders and domain experts can produce useful information about project requirements.
Focus groups are a kind of informal interview within an interactive group setting. A group of people (e.g. potential users, beta testers) are asked about their understanding of a specific issue, process, product, advertisement, etc.
Prototype: A prototype is a mock up, a scaled down version, or a partial system constructed
- to get users’ feedback.
- to validate a technical concept (a "proof-of-concept" prototype).
- to give a preview of what is to come, or to compare multiple alternatives on a small scale before committing fully to one alternative.
- for early field-testing under controlled conditions.
Prototyping can uncover requirements, in particular, those related to how users interact with the system. UI prototypes or mock ups are often used in brainstorming sessions, or in meetings with the users to get quick feedback from them.
A mock up (also called a wireframe diagram) of a dialog box:
[source: plantuml.com]
Prototyping can be used for discovering as well as specifying requirements e.g. a UI prototype can serve as a specification of what to build.
Studying existing products can unearth shortcomings of existing solutions that can be addressed by a new product. Product manuals and other forms of documentation of an existing system can tell us how the existing solutions work.
When developing a game for a mobile device, a look at a similar PC game can give insight into the kind of features and interactions the mobile game can offer.
Specifying Requirements
Prose
A textual description (i.e. prose) can be used to describe requirements. Prose is especially useful when describing abstract ideas such as the vision of a product.
The product vision of the TEAMMATES Project given below is described using prose.
TEAMMATES aims to become the biggest student project in the world (biggest here refers to 'many contributors, many users, large code base, evolving over a long period'). Furthermore, it aims to serve as a training tool for Software Engineering students who want to learn SE skills in the context of a non-trivial real software product.
Avoid using lengthy prose to describe requirements, as it can be hard to follow.
Feature List
Feature list: A list of features of a product grouped according to some criteria such as aspect, priority, order of delivery, etc.
A sample feature list from a simple Minesweeper game (only a brief description has been provided to save space):
- Basic play – Single player play.
- Difficulty levels
- Medium levels
- Advanced levels
- Versus play – Two players can play against each other.
- Timer – Additional fixed time restriction on the player.
- ...
User Stories
User story: User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system. [Mike Cohn]
A common format for writing user stories is:
User story format: As a {user type/role} I can {function} so that {benefit}
Examples (from a Learning Management System):
- As a student, I can download files uploaded by lecturers, so that I can get my own copy of the files
- As a lecturer, I can create discussion forums, so that students can discuss things online
- As a tutor, I can print attendance sheets, so that I can take attendance during the class
You can write user stories on index cards or sticky notes, and arrange them on walls or tables, to facilitate planning and discussion. Alternatively, you can use software (e.g., GitHub Project Boards, Trello, Google Docs, ...) to manage user stories digitally.
The {benefit} can be omitted if it is obvious.
Example: As a user, I can login to the system (the benefit 'so that I can access my data' is obvious enough to be omitted)
It is recommended to confirm there is a concrete benefit even if you omit it from the user story. If not, you could end up adding features that have no real benefit.
You can add more characteristics to the {user role} to provide more context to the user story.
- As a forgetful user, I can view a password hint, so that I can recall my password.
- As an expert user, I can tweak the underlying formatting tags of the document, so that I can format the document exactly as I need.
You can write user stories at various levels. High-level user stories, called epics (or themes) cover bigger functionality. You can then break down these epics to multiple user stories of normal size.
[Epic] As a lecturer, I can monitor student participation levels
- As a lecturer, I can view the forum post count of each student so that I can identify the activity level of students in the forum
- As a lecturer, I can view webcast view records of each student so that I can identify the students who did not view webcasts
- As a lecturer, I can view file download statistics of each student so that I can identify the students who did not download lecture materials
You can add conditions of satisfaction to a user story to specify things that need to be true for the user story implementation to be accepted as ‘done’.
As a lecturer, I can view the forum post count of each student so that I can identify the activity level of students in the forum.
Conditions:
- Separate post count for each forum should be shown
- Total post count of a student should be shown
- The list should be sortable by student name and post count
Other useful info that can be added to a user story includes (but is not limited to):
- Priority: how important the user story is
- Size: the estimated effort to implement the user story
- Urgency: how soon the feature is needed
User stories capture user requirements in a way that is convenient for scoping, estimation, and prioritization.
[User stories] strongly shift the focus from writing about features to discussing them. In fact, these discussions are more important than whatever text is written. [Mike Cohn, MountainGoat Software 🔗]
User stories differ from traditional requirements specifications mainly in the level of detail. User stories should only provide enough details to make a reasonably low risk estimate of how long the user story will take to implement. When the time comes to implement the user story, the developers will meet with the customer face-to-face to work out a more detailed description of the requirements.
User stories can capture non-functional requirements too because even NFRs must benefit some stakeholder.
An example of an NFR captured as a user story:
As a | I want to | so that |
---|---|---|
impatient user | be able to experience a reasonable response time from the website while up to 1000 concurrent users are using it | I can use the app even when the traffic is at the maximum expected level |
Given their lightweight nature, user stories are quite handy for recording requirements during early stages of requirements gathering.
A recipe for brainstorming user stories
Given below is a possible recipe you can take when using user stories for early stages of requirement gathering.
Step 0: Clear your mind of preconceived product ideas
Even if you already have some idea of what your product will look/behave like in the end, clear your mind of those ideas. The product is the solution. At this point, we are still at the stage of figuring out the problem (i.e., user requirements). Let's try to get from the problem to the solution in a systematic way, one step at a time.
Step 1: Define the target user as a persona:
Decide your target user's profile (e.g. a student, office worker, programmer, salesperson) and work patterns (e.g. Does he work in groups or alone? Does he share his computer with others?). A clear understanding of the target user will help when deciding the importance of a user story. You can even narrow it down to a persona. Here is an example:
Jean is a university student studying in a non-IT field. She interacts with a lot of people due to her involvement in university clubs/societies. ...
Step 2: Define the problem scope:
Decide the exact problem you are going to solve for the target user. It is also useful to specify what related problems it will not solve so that the exact scope is clear.
ProductX helps Jean keep track of all her school contacts. It does not cover communicating with contacts.
Step 3: List scenarios to form a narrative:
Think of the various scenarios your target user is likely to go through as she uses your app. Following a chronological sequence as if you are telling a story might be helpful.
A. First use:
- Jean gets to know about ProductX. She downloads it and launches it to check out what it can do.
- After playing around with the product for a bit, Jean wants to start using it for real.
- ...
B. Second use: (Jean is still a beginner)
- Jean launches ProductX. She wants to find ...
- ...
C. 10th use: (Jean is a little bit familiar with the app)
- ...
D. 100th use: (Jean is an expert user)
- Jean launches the app and does ... and ... followed by ... as per her usual habit.
- Jean feels some of the data in the app are no longer needed. She wants to get rid of them to reduce clutter.
More examples that might apply to some products:
- Jean uses the app at the start of the day to ...
- Jean uses the app before going to sleep to ...
- Jean hasn't used the app for a while because she was on a three-month training programme. She is now back at work and wants to resume her daily use of the app.
- Jean moves to another company. Some of her clients come with her but some don't.
- Jean starts freelancing in her spare time. She wants to keep her freelancing clients separate from her other clients.
Step 4: List the user stories to support the scenarios:
Based on the scenarios, decide on the user stories you need to support. For example, based on the scenario 'A. First use', you might have user stories such as these:
- As a potential user exploring the app, I can see the app populated with sample data, so that I can easily see what the app will look like when it is in use.
- As a user ready to start using the app, I can purge all current data, so that I can get rid of sample/experimental data I used for exploring the app.
To give another example, based on the scenario 'D. 100th use', you might have user stories such as these:
- As an expert user, I can create shortcuts for tasks, so that I can save time on frequently performed tasks.
- As a long-time user, I can archive/hide unused data, so that I am not distracted by irrelevant data.
Do not 'evaluate' the value of user stories while brainstorming. Reason: an important aspect of brainstorming is not judging the ideas generated.
Other tips:
- Don't be too hasty to discard 'unusual' user stories: Those might make your product unique and stand out from the rest, at least for the target users.
- Don't go into too much detail: For example, consider this user story: As a user, I want to see a list of tasks that need my attention most at the present time, so that I pay attention to them first. When discussing this user story, don't worry about what tasks should be considered 'needs my attention most at the present time'. Those details can be worked out later.
- Don't be biased by preconceived product ideas: When you are at the stage of identifying user needs, clear your mind of ideas you have about what your end product will look like. That is, don't try to reverse-engineer a preconceived product idea into user stories.
- Don't discuss implementation details or whether you are actually going to implement it: When gathering requirements, your decision is whether the user's need is important enough for you to want to fulfil it. Implementation details can be discussed later. If a user story turns out to be too difficult to implement later, you can always omit it from the implementation plan.
While user stories can be recorded on physical paper in the initial stages, an online tool is more suitable for longer-term management of user stories, especially if the team is not co-located.
Use Cases
Use case: A description of a set of sequences of actions, including variants, that a system performs to yield an observable result of value to an actor.
A use case describes an interaction between the user and the system for a specific functionality of the system.

UML includes a diagram type called use case diagrams that can illustrate use cases of a system visually, providing a visual ‘table of contents’ of the use cases of a system.
In the example diagram, note how use cases are shown as ovals and user roles relevant to each use case are shown as stick figures connected to the corresponding ovals.
Use cases capture the functional requirements of a system.
A use case is an interaction between a system and its actors.
Actors in Use Cases
Actor: An actor (in a use case) is a role played by a user. An actor can be a human or another system. Actors are not part of the system; they reside outside the system.
Some example actors for a Learning Management System:
- Actors: Guest, Student, Staff, Admin.
A use case can involve multiple actors.
- Software System: LearnSys
- Use case: UC01 Conduct Survey
- Actors: Staff, Student
An actor can be involved in many use cases.
- Software System: LearnSys
- Actor: Staff
- Use cases: UC01 Conduct Survey, UC02 Set Up Course Schedule, UC03 Email Class, ...
A single person/system can play many roles.
- Software System: LearnSys
- Person: a student
- Actors (or Roles): Student, Guest, Tutor
Many persons/systems can play a single role.
- Software System: LearnSys
- Actor (or role): Student
- Persons that can play this role: undergraduate student, graduate student, a staff member doing a part-time course, exchange student
Use cases can be specified at various levels of detail.
Consider the three use cases given below. Clearly, (a) is at a higher level than (b) and (b) is at a higher level than (c).
- System: LearnSys
- Use cases:
a. Conduct a survey
b. Take the survey
c. Answer survey question
While modeling user-system interactions,
- Start with high level use cases and progressively work toward lower level use cases.
- Be mindful of which level of detail you are working at, and take care not to mix use cases of different levels.
Writing use case steps
The main body of the use case is a sequence of steps that describes the interaction between the system and the actors. Each step is given as a simple statement describing who does what.
An example of the main body of a use case.
- Student requests to upload file
- LMS requests for the file location
- Student specifies the file location
- LMS uploads the file
A use case describes only the externally visible behavior, not internal details, of a system, i.e. it should minimize details that are not part of the interaction between the user and the system.
This example use case step refers to behaviors not externally visible.
- LMS saves the file into the cache and indicates success.
A step gives the intention of the actor (not the mechanics). That means UI details are usually omitted. The idea is to leave as much flexibility to the UI designer as possible. That is, the use case specification should be as general as possible (less specific) about the UI.
The first example below is not a good use case step because it contains UI-specific details. The second one is better because it omits UI-specific details.
Bad : User right-clicks the text box and chooses ‘clear’
Good : User clears the input
A use case description can show loops too.
An example of how you can show a loop:
Software System: SquareGame
Use case: Play a Game
Actors: Player (multiple players)
MSS:
- A Player starts the game.
- SquareGame asks for player names.
- Each Player enters his own name.
- SquareGame shows the order of play.
- SquareGame prompts for the current Player to throw a die.
- Current Player adjusts the throw speed.
- Current Player triggers the die throw.
- SquareGame shows the face value of the die.
- SquareGame moves the Player's piece accordingly.
Steps 5-9 are repeated for each Player, and for as many rounds as required until a Player reaches the 100th square.
- SquareGame shows the Winner.
Use case ends.
The Main Success Scenario (MSS) describes the most straightforward interaction for a given use case, which assumes that nothing goes wrong. This is also called the Basic Course of Action or the Main Flow of Events of a use case.
Note how the MSS in the example below assumes that all entered details are correct and ignores problems such as timeouts, network outages etc. For example, the MSS does not tell us what happens if the user enters incorrect data.
System: Online Banking System (OBS)
Use case: UC23 - Transfer Money
Actor: User
MSS:
- User chooses to transfer money.
- OBS requests for details of the transfer.
- User enters the requested details.
- OBS requests for confirmation.
- User confirms.
- OBS transfers the money and displays the new account balance.
Use case ends.
Extensions are "add-on"s to the MSS that describe exceptional/alternative flow of events. They describe variations of the scenario that can happen if certain things are not as expected by the MSS. Extensions appear below the MSS.
This example adds some extensions to the use case in the previous example.
System: Online Banking System (OBS)
Use case: UC23 - Transfer Money
Actor: User
MSS:
1. User chooses to transfer money.
2. OBS requests for details of the transfer.
3. User enters the requested details.
4. OBS requests for confirmation.
5. User confirms.
6. OBS transfers the money and displays the new account balance.
Use case ends.
Extensions:
3a. OBS detects an error in the entered data.
  3a1. OBS requests for the correct data.
  3a2. User enters new data.
  Steps 3a1-3a2 are repeated until the data entered are correct.
  Use case resumes from step 4.
3b. User requests to effect the transfer on a future date.
  3b1. OBS requests for confirmation.
  3b2. User confirms future transfer.
  Use case ends.
*a. At any time, User chooses to cancel the transfer.
  *a1. OBS requests to confirm the cancellation.
  *a2. User confirms the cancellation.
  Use case ends.
*b. At any time, 120 seconds lapse without any input from the User.
  *b1. OBS cancels the transfer.
  *b2. OBS informs the User of the cancellation.
  Use case ends.
Note that the numbering style is not a universal rule but a widely used convention. Based on that convention,
- either of the extensions marked 3a. and 3b. can happen just after step 3 of the MSS.
- the extension marked as *a. can happen at any step (hence, the *).
When separating extensions from the MSS, keep in mind that the MSS should be self-contained. That is, the MSS should give us a complete usage scenario.
Also note that it is not useful to mention events such as power failures or system crashes as extensions because the system cannot function beyond such catastrophic failures.

In use case diagrams you can use the <<extend>> arrows to show extensions. Note the direction of the arrow is from the extension to the use case it extends, and the arrow uses a dashed line.
A use case can include another use case. Underlined text is used to show an inclusion of a use case.
This use case includes two other use cases, one in step 1 and one in step 2.
- Software System: LearnSys
- Use case: UC01 - Conduct Survey
- Actors: Staff, Student
- MSS:
- Staff creates the survey (UC44).
- Student completes the survey (UC50).
- Staff views the survey results.
Use case ends.
Inclusions are useful,
- when you don't want to clutter a use case with too many low-level steps.
- when a set of steps is repeated in multiple use cases.
You use a dotted arrow and an <<include>> annotation to show use case inclusions in a use case diagram. Note how the arrow direction is different from the <<extend>> arrows.
Preconditions specify the specific state you expect the system to be in before the use case starts.
Software System: Online Banking System
Use case: UC23 - Transfer Money
Actor: User
Preconditions: User is logged in
MSS:
- User chooses to transfer money.
- OBS requests for details for the transfer.
...
Guarantees specify what the use case promises to give us at the end of its operation.
Software System: Online Banking System
Use case: UC23 - Transfer Money
Actor: User
Preconditions: User is logged in.
Guarantees:
- Money will be deducted from the source account only if the transfer to the destination account is successful.
- The transfer will not result in the account balance going below the minimum balance required.
MSS:
- User chooses to transfer money.
- OBS requests for details for the transfer.
...
You can use actor generalization in use case diagrams using a symbol similar to that of UML notation for inheritance.
In this example, actor Blogger can do all the use cases the actor Guest can do, as a result of the actor generalization relationship given in the diagram.

Do not over-complicate use case diagrams by trying to include everything possible. A use case diagram is a brief summary of the use cases that is used as a starting point. Details of the use cases can be given in the use case descriptions.
Some modelers include ‘System’ as an actor to indicate that something is done by the system itself without being initiated by a user or an external system.
The diagram below can be used to indicate that the system generates daily reports at midnight.

However, others argue that only use cases providing value to an external user/system should be shown in the use case diagram. For example, they argue that view daily report should be the use case, and generate daily report should not be shown in the use case diagram because it is simply something the system has to do to support the view daily report use case.
You are recommended to follow the latter view (i.e. not to use System as a user). Limit use cases to modeling behaviors that involve an external actor.
UML is not very specific about the text contents of a use case. Hence, there are many styles for writing use cases. For example, the steps can be written as a continuous paragraph.
Use cases should be easy to read. Note that there is no strict rule about writing all details of all steps or a need to use all the elements of a use case.
There are some advantages of documenting system requirements as use cases:
- Because they use a simple notation and plain English descriptions, they are easy for users to understand and give feedback.
- They decouple user intention from mechanism (note that use cases should not include UI-specific details), allowing the system designers more freedom to optimize how a functionality is provided to a user.
- Identifying all possible extensions encourages us to consider all situations that a software product might face during its operation.
- Separating typical scenarios from special cases encourages us to optimize the typical scenarios.
One of the main disadvantages of use cases is that they are not good for capturing requirements that do not involve a user interacting with the system. Hence, they should not be used as the sole means to specify requirements.
Glossary
Glossary: A glossary serves to ensure that all stakeholders have a common understanding of the noteworthy terms, abbreviations, acronyms etc.
Here is a partial glossary from a variant of the Snakes and Ladders game:
- Conditional square: A square that specifies a specific face value which a player has to throw before his/her piece can leave the square.
- Normal square: A square that does not have any conditions, snakes, or ladders in it.
Supplementary Requirements
SECTION: DESIGN
Design
Introduction
Design is the creative process of transforming the problem into a solution; the solution is also called design. -- 📖 Software Engineering: Theory and Practice, Shari Lawrence Pfleeger and Joanne M. Atlee
Software design has two main aspects:
- Product/external design: designing the external behavior of the product to meet the users' requirements. This is usually done by product designers with input from business analysts, user experience experts, user representatives, etc.
- Implementation/internal design: designing how the product will be implemented to meet the required external behavior. This is usually done by software architects and software engineers.
Design Fundamentals
Abstraction
Abstraction is a technique for dealing with complexity. It works by establishing a level of complexity we are interested in, and suppressing the more complex details below that level.
The guiding principle of abstraction is that only details that are relevant to the current perspective or the task at hand need to be considered. As most programs are written to solve complex problems involving large amounts of intricate details, it is impossible to deal with all these details at the same time. That is where abstraction can help.
Data abstraction: abstracting away the lower level data items and thinking in terms of bigger entities
Within a certain software component, you might deal with a user data type, while ignoring the details contained in the user data item such as name and date of birth. These details have been ‘abstracted away’ as they do not affect the task of that software component.
Control abstraction: abstracting away details of the actual control flow to focus on tasks at a higher level
print(“Hello”) is an abstraction of the actual output mechanism within the computer.
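To make the two kinds of abstraction concrete, here is a small, hypothetical Java sketch (the User class and greet method are invented purely for illustration): the caller works with a User as a single entity (data abstraction) and calls greet without caring how the output actually reaches the screen (control abstraction).
// Hypothetical sketch: data abstraction and control abstraction in Java.
class User {
    private final String name;
    private final String dateOfBirth;

    User(String name, String dateOfBirth) {
        this.name = name;
        this.dateOfBirth = dateOfBirth;
    }

    String getName() {
        return name;
    }
}

class AbstractionDemo {
    // Control abstraction: callers invoke greet(user) without knowing how
    // the text is actually written to the output device.
    static void greet(User user) {
        System.out.println("Hello, " + user.getName());
    }

    public static void main(String[] args) {
        // Data abstraction: this code treats a User as a single entity,
        // ignoring lower-level details such as name and date of birth.
        User user = new User("Alice", "2000-01-01");
        greet(user);
    }
}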
Abstraction can be applied repeatedly to obtain progressively higher levels of abstraction.
An example of different levels of data abstraction: a File is a data item that is at a higher level than an array, and an array is at a higher level than a bit.
An example of different levels of control abstraction: execute(Game) is at a higher level than print(Char), which is at a higher level than an Assembly language instruction MOV.
Abstraction is a general concept that is not limited to just data or control abstractions.
Some more general examples of abstraction:
- An OOP class is an abstraction over related data and behaviors.
- An architecture is a higher-level abstraction of the design of a software.
- Models (e.g., UML models) are abstractions of some aspect of reality.
Coupling
Coupling is a measure of the degree of dependence between components, classes, methods, etc. Low coupling indicates that a component is less dependent on other components. High coupling (aka tight coupling or strong coupling) is discouraged due to the following disadvantages:
- Maintenance is harder because a change in one module could cause changes in other modules coupled to it (i.e. a ripple effect).
- Integration is harder because multiple components coupled with each other have to be integrated at the same time.
- Testing and reuse of the module is harder due to its dependence on other modules.
In the example below, design A appears to have more coupling between the components than design B.

X is coupled to Y if a change to Y can potentially require a change in X.
If the Foo class calls the method Bar#read(), Foo is coupled to Bar because a change to Bar can potentially (but not always) require a change in the Foo class, e.g. if the signature of Bar#read() is changed, Foo needs to change as well; but a change to the Bar#write() method may not require a change in the Foo class because Foo does not call Bar#write().
Some examples of coupling: A is coupled to B if,
- A has access to the internal structure of B (this results in a very high level of coupling)
- A and B depend on the same global variable
- A calls B
- A receives an object of B as a parameter or a return value
- A inherits from B
- A and B are required to follow the same data format or communication protocol
Some examples of different coupling types:
- Content coupling: one module modifies or relies on the internal workings of another module e.g., accessing local data of another module
- Common/Global coupling: two modules share the same global data
- Control coupling: one module controlling the flow of another, by passing it information on what to do e.g., passing a flag
- Data coupling: one module sharing data with another module e.g. via passing parameters
- External coupling: two modules share an externally imposed convention e.g., data formats, communication protocols, device interfaces.
- Subclass coupling: a class inherits from another class. Note that a child class is coupled to the parent class but not the other way around.
- Temporal coupling: two actions are bundled together just because they happen to occur at the same time e.g. extracting a contiguous block of code as a method although the code block contains statements unrelated to each other
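To contrast two of the coupling types in the list above, here is a small, hypothetical Java sketch (the class and method names are invented): the first print method exhibits control coupling because the caller passes a flag that steers the callee's internal flow, while the second exhibits only data coupling because the caller passes just the data to be printed.
// Hypothetical illustration of control coupling vs data coupling.
class ReportPrinter {

    // Control coupling: the caller passes a flag that controls
    // the internal flow of this method.
    void print(String report, boolean asSummary) {
        if (asSummary) {
            System.out.println(summarize(report));
        } else {
            System.out.println(report);
        }
    }

    // Data coupling: the caller passes only the data to be printed;
    // choosing between summary and full text is the caller's job
    // (e.g. by calling summarize(report) first).
    void print(String report) {
        System.out.println(report);
    }

    String summarize(String report) {
        return report.substring(0, Math.min(80, report.length()));
    }
}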
Cohesion
Cohesion is a measure of how strongly-related and focused the various responsibilities of a component are. A highly-cohesive component keeps related functionalities together while keeping out all other unrelated things.
Higher cohesion is better. Disadvantages of low cohesion (aka weak cohesion):
- Lowers the understandability of modules as it is difficult to express module functionalities at a higher level.
- Lowers maintainability because a module can be modified due to unrelated causes (reason: the module contains code unrelated to each other) or many modules may need to be modified to achieve a small change in behavior (reason: because the code related to that change is not localized to a single module).
- Lowers reusability of modules because they do not represent logical units of functionality.
Cohesion can be present in many forms. Some examples:
- Code related to a single concept is kept together, e.g. the Student component handles everything related to students.
- Code that is invoked close together in time is kept together, e.g. all code related to initializing the system is kept together.
- Code that manipulates the same data structure is kept together, e.g. the GameArchive component handles everything related to the storage and retrieval of game sessions.
Suppose a Payroll application contains a class that deals with writing data to the database. If the class includes some code to show an error dialog to the user if the database is unreachable, that class is not cohesive because it seems to be interacting with the user as well as the database.
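One possible way to restore cohesion in that hypothetical Payroll example is sketched below (the class and method names are invented): the database-writing class reports the problem by throwing an exception, and a separate UI-level class decides how to inform the user.
// Hypothetical sketch: separating database access from user interaction.
class StorageException extends Exception {
    StorageException(String message) {
        super(message);
    }
}

class PayrollStorage {
    // Cohesive: deals only with persisting data.
    void save(String payrollData) throws StorageException {
        boolean databaseReachable = false; // placeholder for a real connectivity check
        if (!databaseReachable) {
            throw new StorageException("Database is unreachable");
        }
        // ... write payrollData to the database ...
    }
}

class PayrollUi {
    // Cohesive: deals only with interacting with the user.
    void savePayroll(PayrollStorage storage, String payrollData) {
        try {
            storage.save(payrollData);
        } catch (StorageException e) {
            showErrorDialog(e.getMessage());
        }
    }

    void showErrorDialog(String message) {
        System.out.println("[ERROR] " + message); // stand-in for a real dialog
    }
}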
Modeling
Introduction
A model is a representation of something else.
A class diagram is a model that represents a software design.
A model provides a simpler view of a complex entity because a model captures only a selected aspect. This omission of some aspects implies models are abstractions.
A class diagram captures the structure of the software design but not the behavior.
Multiple models of the same entity may be needed to capture it fully.
In addition to a class diagram (or even multiple class diagrams), a number of other diagrams may be needed to capture various interesting aspects of the software.
In software development, models are useful in several ways:
a) To analyze a complex entity related to software development.
Some examples of using models for analysis:
- Models of the problem domain can be built to aid the understanding of the problem to be solved.
- When planning a software solution, models can be created to figure out how the solution is to be built. An architecture diagram is such a model.
b) To communicate information among stakeholders. Models can be used as a visual aid in discussions and documentation.
Some examples of using models to communicate:
- You can use an architecture diagram to explain the high-level design of the software to developers.
- A business analyst can use a use case diagram to explain to the customer the functionality of the system.
- A class diagram can be reverse-engineered from code so as to help explain the design of a component to a new developer.
c) As a blueprint for creating software. Models can be used as instructions for building software.
Some examples of using models as blueprints:
- A senior developer draws a class diagram to propose a design for an OOP software and passes it to a junior programmer to implement.
- A software tool allows users to draw UML models using its interface and the tool automatically generates the code based on the model.
The following diagram uses the class diagram notation to show the different types of UML diagrams.
source:https://en.wikipedia.org/
Modeling Structures
Contents of the panels given below belong to a different chapter; they have been embedded here for convenience and are collapsed by default to avoid content duplication in the printed version.
Classes form the basis of class diagrams.
Associations are the main connections among the classes in a class diagram.
The most basic class diagram is a bunch of classes with some solid lines among them to represent associations, such as this one.
An example class diagram showing associations between classes.

In addition, associations can show additional decorations such as association labels, association roles, multiplicity and navigability to add more information to a class diagram.
Here is the same class diagram shown earlier but with some additional information included:

A class diagram can also show different types of relationships between classes: inheritance, compositions, aggregations, dependencies.
A class diagram can also show different types of class-like entities:
Object diagrams can be used to complement class diagrams. For example, you can use object diagrams to model different object structures that can result from a design represented by a given class diagram.
The analysis process for identifying objects and object classes is recognized as one of the most difficult areas of object-oriented development. --Ian Sommerville, in the book Software Engineering
Class diagrams can also be used to model objects in the problem domain (i.e. to model how objects actually interact in the real world, before emulating them in the solution). Class diagrams that are used to model the problem domain are called conceptual class diagrams or OO domain models (OODMs).
The OO domain model of a snakes and ladders game is given below.
Description: The snakes and ladders game is played by two or more players using a board and a die. The board has 100 squares marked 1 to 100. Each player owns one piece. Players take turns to throw the die and advance their piece by the number of squares they earned from the die throw. The board has a number of snakes. If a player’s piece lands on a square with a snake head, the piece is automatically moved to the square containing the snake’s tail. Similarly, a piece can automatically move from a ladder foot to the ladder top. The player whose piece is the first to reach the 100th square wins.

OODMs do not contain solution-specific classes (i.e. classes that are used in the solution domain but do not exist in the problem domain). For example, a class called DatabaseConnection could appear in a class diagram but not usually in an OO domain model because DatabaseConnection is something related to a software solution but not an entity in the problem domain.
OODMs represent the class structure of the problem domain and not their behavior, just like class diagrams. To show behavior, use other diagrams such as sequence diagrams.
OODM notation is similar to class diagram notation but omits methods and navigability.
A deployment diagram shows a system's physical layout, revealing which pieces of software run on which pieces of hardware.
A component diagram is used to show how a system is divided into components and how they are connected to each other through interfaces.
A package diagram shows packages and their dependencies. A package is a grouping construct for grouping UML elements (classes, use cases, etc.).
Here is an example package diagram:
A composite structure diagram hierarchically decomposes a class into its internal structure.
Modeling Behaviors
Software projects often involve workflows. Workflows define the flow in which a process or a set of tasks is executed. Understanding such workflows is important for the success of the software project.
Some examples in which a certain workflow is relevant to software project:
A software that automates the work of an insurance company needs to take into account the workflow of processing an insurance claim.
The algorithm of a piece of code represents the workflow (i.e. the execution flow) of the code.
Sequence diagrams model the interactions between various entities in a system, in a specific scenario. Modelling such scenarios is useful, for example, to verify that the design of the internal interactions is able to provide the expected outcomes.
Some examples where a sequence diagram can be used:
To model how components of a system interact with each other to respond to a user action.
To model how objects inside a component interact with each other to respond to a method call it received from another component.
Use case diagrams model the mapping between features of a system and its user roles i.e., which user roles can perform which tasks using the software.
A simple use case diagram:

A timing diagram focuses on timing constraints.
Here is an example timing diagram:

Adapted from: UML Distilled by Martin Fowler
Interaction overview diagrams are a combination of activity diagrams and sequence diagrams.
Communication diagrams are like sequence diagrams but emphasize the data links between the various participants in the interaction rather than the sequence of interactions.
An example:

Adapted from: UML Distilled by Martin Fowler
A State Machine Diagram models state-dependent behavior.
Consider how a CD player responds when the “eject CD” button is pushed:
- If the CD tray is already open, it does nothing.
- If the CD tray is already in the process of opening (opened half-way), it continues to open the CD tray.
- If the CD tray is closed and the CD is being played, it stops playing and opens the CD tray.
- If the CD tray is closed and CD is not being played, it simply opens the CD tray.
- If the CD tray is already in the process of closing (closed half-way), it waits until the CD tray is fully closed and opens it immediately afterwards.
What this means is that the CD player’s response to pushing the “eject CD” button depends on what it was doing at the time of the event. More generally, the CD player’s response to the event received depends on its internal state. Such a behavior is called a state-dependent behavior.
Often, state-dependent behavior displayed by an object in a system is simple enough that it needs no extra attention; such a behavior can be as simple as a conditional behavior like if x > y, then x = x - y.
Occasionally, objects may exhibit state-dependent behavior that is complex enough such that it needs to be captured in a separate model. Such state-dependent behavior can be modeled using UML state machine diagrams (SMD for short, sometimes also called ‘state charts’, ‘state diagrams’ or ‘state machines’).
An SMD views the life-cycle of an object as consisting of a finite number of states where each state displays a unique behavior pattern. SMDs capture information such as the states an object can be in during its lifetime, how the object responds to various events while in each state, and how the object transits from one state to another. In contrast to sequence diagrams that capture object behavior one scenario at a time, SMDs capture the object’s behavior over its full life-cycle.
An SMD for the Minesweeper game.

Modeling a Solution
You can use models to analyze and design software before you start coding.
Suppose you are planning to implement a simple minesweeper game that has a text based UI and a GUI. Given below is a possible OOP design for the game.

Before jumping into coding, you may want to find out things such as,
- Is this class structure able to produce the behavior you want?
- What API should each class have?
- Do you need more classes?
To answer these questions, you can analyze how the objects of these classes will interact with each other to produce the behavior you want.
As mentioned in [Design → Modeling → Modeling a Solution → Introduction], this is the Minesweeper design you have come up with so far. Our objective is to analyze, evaluate, and refine that design.

Let us start by modeling a sample interaction between the person playing the game and the TextUi object.

newgame and clear x y represent commands typed by the Player on the TextUi.
How does the TextUi object carry out the requests it has received from the player? It would need to interact with other objects of the system. Because the Logic class is the one that controls the game logic, the TextUi needs to collaborate with Logic to fulfill the newgame request. Let us extend the model to capture that interaction.

W = Width of the minefield; H = Height of the minefield
The above diagram assumes that W and H are the only information TextUi requires to display the minefield to the Player. Note that there could be other ways of doing this.
The Logic methods you conceptualized in our modeling so far are:

Now, let us look at what other objects and interactions are needed to support the newGame() operation. It is likely that a new Minefield object is created when the newGame() method is called.

Note that the behavior of the Minefield constructor has been abstracted away. It can be designed at a later stage.
Given below are the interactions between the player and the TextUi for the whole game.

Note that sequence diagrams can be used when discovering/defining the architecture-level APIs.
Defining the architecture-level APIs for a small Tic-Tac-Toe game:
Continuing with the example in [Design → Modeling → Modeling a Solution → Basic], next let us model how the TextUi interacts with the Logic to support the mark and clear operations until the game is won or lost.

This interaction adds the following methods to the Logic class:
- clearCellAt(int x, int y)
- markCellAt(int x, int y)
- getGameState(): GAME_STATE (GAME_STATE: READY, IN_PLAY, WON, LOST, …)
And it adds the following operation to the Logic API:
- getAppearanceOfCellAt(int, int): CELL_APPEARANCE (CELL_APPEARANCE: HIDDEN, ZERO, ONE, TWO, THREE, …, MARKED, INCORRECTLY_MARKED, INCORRECTLY_CLEARED)
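Purely as an illustration, here is one possible Java rendering of the Logic API discussed so far; the interface shape follows the operations listed above and earlier in this walkthrough, the elided enum values are kept elided, and any other details should be treated as assumptions rather than part of the original design.
// A possible (illustrative) Java interface for the Logic API discussed above.
interface Logic {

    enum GameState { READY, IN_PLAY, WON, LOST /* ... */ }

    enum CellAppearance {
        HIDDEN, ZERO, ONE, TWO, THREE, /* ... remaining number values elided ... */
        MARKED, INCORRECTLY_MARKED, INCORRECTLY_CLEARED
    }

    void newGame();                    // start a new game
    int getWidth();                    // W: width of the minefield
    int getHeight();                   // H: height of the minefield
    void markCellAt(int x, int y);     // mark a cell as a suspected mine
    void clearCellAt(int x, int y);    // clear (dig) a cell
    GameState getGameState();          // current state of the game
    CellAppearance getAppearanceOfCellAt(int x, int y); // what the UI should show for a cell
}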
In the above design, TextUi does not access Cell objects directly. Instead, it gets values of type CELL_APPEARANCE from Logic to be displayed as a minefield to the player. Alternatively, each cell or the entire minefield can be passed directly to TextUi.
Here is the updated class diagram:

The above is for the case when the actor Player interacts with the system using a text UI. Additional operations (if any) required for the GUI can be discovered similarly.
Suppose Logic supports a reset() operation. You can model it like this:
Our current model assumes that the Minefield object has enough information (i.e. H, W, and mine locations) to create itself.

An alternative is to have a ConfigGenerator object that generates a string containing the minefield information as shown below.
In addition, getWidth(), getHeight(), markCellAt(x,y) and clearCellAt(x,y) can be handled like this.

The updated class diagram:

How is the getGameState() operation supported? Given below are two ways (there could be other ways):
- The Minefield class knows the state of the game at any time. The Logic class retrieves it from the Minefield class as and when required.
- The Logic class maintains the state of the game at all times.
Here’s the SD for option 1.

Here’s the SD for option 2. Assume that the game state is updated after every mark/clear action.

It is now time to explore what happens inside the Minefield constructor. One way is to design it as follows.

Now let us assume that Minesweeper supports a ‘timing’ feature.

Updated class diagram:

When designing components, it is not necessary to draw elaborate UML diagrams capturing all details of the design. They can be done as rough sketches. For example, draw sequence diagrams only when you are not sure which operations are required by each class, or when you want to verify that your class structure can indeed support the required operations.
Software Architecture
Introduction
The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them. Architecture is concerned with the public side of interfaces; private details of elements—details having to do solely with internal implementation—are not architectural. -- Software Architecture in Practice (2nd edition), Bass, Clements, and Kazman
The software architecture shows the overall organization of the system and can be viewed as a very high-level design. It usually consists of a set of interacting components that fit together to achieve the required functionality. It should be a simple and technically viable structure that is well-understood and agreed-upon by everyone in the development team, and it forms the basis for the implementation.
A possible architecture for a Minesweeper game:
Main components:
- GUI: Graphical user interface
- TextUi: Textual user interface
- ATD: An automated test driver used for testing the game logic
- Logic: Computation and logic of the game
- Store: Storage and retrieval of game data (high scores etc.)
The architecture is typically designed by the software architect, who provides the technical vision of the system and makes high-level (i.e. architecture-level) technical decisions about the project.
Architecture Diagrams
Architecture diagrams are free-form diagrams. There is no universally adopted standard notation for architecture diagrams. Any symbols that reasonably describe the architecture may be used.
Some example architecture diagrams:
While architecture diagrams have no standard notation, try to follow these basic guidelines when drawing them.
Minimize the variety of symbols. If the symbols you choose do not have widely-understood meanings, explain them (a drum symbol, for example, is widely understood as representing a database and would need no explanation).
Avoid the indiscriminate use of double-headed arrows to show interactions between components.
Consider the two architecture diagrams of the same software given below. Because Diagram 2 uses double-headed arrows, the important fact that GUI has a bidirectional dependency with the Logic component is no longer captured.

Architectural Styles
Introduction
Software architectures follow various high-level styles (aka architectural patterns), just like how building architectures follow various architecture styles.
n-tier style, client-server style, event-driven style, transaction processing style, service-oriented style, pipes-and-filters style, message-driven style, broker style, ...
N-tier Architectural Style
In the n-tier style, higher layers make use of services provided by lower layers. Lower layers are independent of higher layers. Other names: multi-layered, layered.

Operating systems and network communication software often use n-tier style.
Client-Server Architectural Style
The client-server style has at least one component playing the role of a server and at least one client component accessing the services of the server. This is an architectural style used often in distributed applications.

The online game and the web application below use the client-server style.

Transaction Processing Architectural Style
The transaction processing style divides the workload of the system down to a number of transactions which are then given to a dispatcher that controls the execution of each transaction. Task queuing, ordering, undo etc. are handled by the dispatcher.

In this example from a banking system, transactions generated at the terminals are sent to a central dispatching unit, which in turn dispatches the transactions to various other units to execute.

Service-oriented Architectural Style
The service-oriented architecture (SOA) style builds applications by combining functionalities packaged as programmatically accessible services. SOA aims to achieve interoperability between distributed services, which may not even be implemented using the same programming language. A common way to implement SOA is through the use of XML web services where the web is used as the medium for the services to interact, and XML is used as the language of communication between service providers and service users.
Suppose that Amazon.com provides a web service for customers to browse and buy merchandise, while HSBC provides a web service for merchants to charge HSBC credit cards. Using these web services, an ‘eBookShop’ web application can be developed that allows HSBC customers to buy merchandise from Amazon and pay for them using HSBC credit cards. Because both Amazon and HSBC services follow the SOA architecture, their web services can be reused by the web application, even if all three systems use different programming platforms.

Event-driven Architectural Style
The event-driven style controls the flow of the application by detecting events from event emitters and communicating those events to interested event consumers. This architectural style is often used in GUIs.

When the ‘button clicked’ event occurs in a GUI, that event can be transmitted to components that are interested in reacting to that event. Similarly, events detected at a printer port can be transmitted to components related to operating the printer. The same event can be sent to multiple consumers too.
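A minimal, hypothetical Java sketch of the idea (the class and method names are invented; real GUI frameworks provide their own listener APIs): an emitter keeps a list of consumers and notifies each of them when an event occurs, so the same event can reach multiple consumers.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the event-driven style: an emitter notifies consumers.
class Button {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void addClickListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    void click() {
        // The same event is delivered to every interested consumer.
        for (Consumer<String> listener : listeners) {
            listener.accept("button clicked");
        }
    }
}

class EventDemo {
    public static void main(String[] args) {
        Button saveButton = new Button();
        saveButton.addClickListener(event -> System.out.println("Logic: handling " + event));
        saveButton.addClickListener(event -> System.out.println("Audit log: " + event));
        saveButton.click();
    }
}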

More
Other well-known architectural styles include the pipes-and-filters architecture, the broker architecture, the peer-to-peer architecture, and the message-oriented architecture.
Software Design Patterns
Introduction
Design pattern: An elegant reusable solution to a commonly recurring problem within a given context in software design.
In software development, there are certain problems that recur in a certain context.
Some examples of recurring design problems:
Design Context | Recurring Problem |
---|---|
Assembling a system that makes use of other existing systems implemented using different technologies | What is the best architecture? |
UI needs to be updated when the data in the application backend changes | How to initiate an update to the UI when data changes without coupling the backend to the UI? |
After repeated attempts at solving such problems, better solutions are discovered and refined over time. These solutions are known as design patterns, a term popularized by the seminal book Design Patterns: Elements of Reusable Object-Oriented Software, written by the so-called "Gang of Four" (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.
The common format to describe a pattern consists of the following components:
- Context: The situation or scenario where the design problem is encountered.
- Problem: The main difficulty to be resolved.
- Solution: The core of the solution. It is important to note that the solution presented only includes the most general details, which may need further refinement for a specific context.
- Anti-patterns (optional): Commonly used solutions, which are usually incorrect and/or inferior to the Design Pattern.
- Consequences (optional): Identifying the pros and cons of applying the pattern.
- Other useful information (optional): Code examples, known uses, other related patterns, etc.
Singleton Pattern
Context
Certain classes should have no more than just one instance (e.g. the main controller class of the system). These single instances are commonly known as singletons.
Problem
A normal class can be instantiated multiple times by invoking the constructor.
Solution
Make the constructor of the singleton class private, because a public constructor will allow others to instantiate the class at will. Provide a public class-level method to access the single instance.
Example:

The <<Singleton>> in the class above uses the UML stereotype notation, which is used to (optionally) indicate the purpose or the role played by a UML element. In this example, the class Logic is playing the role of a Singleton class. The general format is <<role/purpose>>.
Here is the typical implementation of how the Singleton pattern is applied to a class:
class Logic {
    private static Logic theOne = null;

    private Logic() {
        ...
    }

    public static Logic getInstance() {
        if (theOne == null) {
            theOne = new Logic();
        }
        return theOne;
    }
}
Notes:
- The constructor is private, which prevents instantiation from outside the class.
- The single instance of the singleton class is maintained by a private class-level variable.
- Access to this object is provided by a public class-level operation getInstance(), which instantiates a single copy of the singleton class when it is executed for the first time. Subsequent calls to this operation return the single instance of the class.
If Logic were not a Singleton class, an object would be created like this:
Logic m = new Logic();
But now, the Logic object needs to be accessed like this:
Logic m = Logic.getInstance();
Pros:
- easy to apply
- effective in achieving its goal with minimal extra work
- provides an easy way to access the singleton object from anywhere in the code base
Cons:
- The singleton object acts like a global variable that increases coupling across the code base.
- In testing, it is difficult to replace Singleton objects with stubs (static methods cannot be overridden).
- In testing, singleton objects carry data from one test to another even when you want each test to be independent of the others.
Given that there are some significant cons, it is recommended that you apply the Singleton pattern when, in addition to requiring only one instance of a class, there is a risk of creating multiple objects by mistake, and creating such multiple objects has real negative consequences.
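To see the testing-related con concretely, the small sketch below reuses the Logic class from the code above: two pieces of test code that each call getInstance() receive the very same object, so any state left behind by the first 'test' is still visible to the second (the setDifficulty call is hypothetical and therefore shown only as a comment).
// Illustrative sketch: singleton state persists across tests.
class SingletonTestDemo {
    public static void main(String[] args) {
        Logic first = Logic.getInstance();
        // testOne(): imagine a test mutating state inside the singleton,
        // e.g. first.setDifficulty("HARD");   // hypothetical method
        Logic second = Logic.getInstance();
        // testTwo(): 'second' is the very same object, so it still carries
        // whatever state testOne() left behind.
        System.out.println(first == second);   // prints: true
    }
}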
Abstraction Occurrence Pattern
Context
There is a group of similar entities that appear to be ‘occurrences’ (or ‘copies’) of the same thing, sharing lots of common information, but also differing in significant ways.
In a library, there can be multiple copies of the same book title. Each copy shares common information such as book title, author, ISBN, etc. However, there are also significant differences like purchase date and barcode number (assumed to be unique for each copy of the book).
Other examples:
- Episodes of the same TV series
- Stock items of the same product model (e.g. TV sets of the same model)
Problem
Representing the objects mentioned previously as a single class would be problematic because it results in duplication of data which can lead to inconsistencies in data (if some of the duplicates are not updated consistently).
Take for example the problem of representing books in a library. Assume that there could be multiple copies of the same title, bearing the same ISBN number, but different serial numbers.

The above solution requires common information to be duplicated by all instances. This will not only waste storage space, but also creates a consistency problem. Suppose that after creating several copies of the same title, the librarian realized that the author name was wrongly spelt. To correct this mistake, the system needs to go through every copy of the same title to make the correction. Also, if a new copy of the title is added later on, the librarian (or the system) has to make sure that all information entered is the same as the existing copies to avoid inconsistency.
Anti-pattern
Refer to the same Library example given above.

The design above segregates the common and unique information into a class hierarchy. Each book title is represented by a separate class with common data (i.e. Name, Author, ISBN) hard-coded in the class itself. This solution is problematic because each book title is represented as a class, resulting in thousands of classes (one for each title). Every time the library buys new books, the source code of the system will have to be updated with new classes.
Solution
Let a copy of an entity (e.g. a copy of a book) be represented by two objects instead of one, separating the common and unique information into two classes to avoid duplication.
Given below is how the pattern is applied to the Library example:

Here's a more generic example:

The general solution:

The <<Abstraction>> class should hold all common information, and the unique information should be kept by the <<Occurrence>> class. Note that ‘Abstraction’ and ‘Occurrence’ are not class names, but roles played by each class. Think of this diagram as a meta-model (i.e. a ‘model of a model’) of the BookTitle-BookCopy class diagram given above.
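A minimal Java sketch of how the BookTitle and BookCopy classes from the library example might look (the specific fields are illustrative choices, not prescribed by the pattern): each BookCopy holds only copy-specific data plus a reference to the shared BookTitle, so common data is never duplicated.
// Illustrative sketch of the abstraction occurrence pattern for the library example.
class BookTitle {                      // the <<Abstraction>>: common information
    private final String title;
    private final String author;
    private final String isbn;

    BookTitle(String title, String author, String isbn) {
        this.title = title;
        this.author = author;
        this.isbn = isbn;
    }

    String getAuthor() {
        return author;
    }
}

class BookCopy {                       // the <<Occurrence>>: copy-specific information
    private final BookTitle title;     // shared common information
    private final String barcode;
    private final String purchaseDate;

    BookCopy(BookTitle title, String barcode, String purchaseDate) {
        this.title = title;
        this.barcode = barcode;
        this.purchaseDate = purchaseDate;
    }

    String getAuthor() {
        return title.getAuthor();      // correcting the author name in BookTitle
    }                                  // fixes it for every copy at once
}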
Facade Pattern
Context
Components need to access functionality deep inside other components.
The UI component of a Library system might want to access functionality of the Book class contained inside the Logic component.

Problem
Access to the component should be allowed without exposing its internal details, e.g. the UI component should access the functionality of the Logic component without knowing that it contains a Book class within it.
Solution
Include a class that sits between the component internals and users of the component such that all access to the component happens through the Facade class.
The following class diagram applies the Facade pattern to the Library System example. The LibraryLogic class is the Facade class.
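A rough Java sketch of the idea (the Catalogue class and the method names are invented for illustration; only LibraryLogic and Book come from the example above): the UI talks only to the facade, which delegates to classes hidden inside the Logic component.
// Illustrative sketch of the Facade pattern for the library example.
class Book {
    private final String title;

    Book(String title) {
        this.title = title;
    }

    String getTitle() {
        return title;
    }
}

class Catalogue {                      // internal to the Logic component
    Book findBook(String isbn) {
        return new Book("Some Title"); // placeholder lookup
    }
}

class LibraryLogic {                   // the Facade class
    private final Catalogue catalogue = new Catalogue();

    // The UI calls this; it never needs to know about Catalogue or Book internals.
    String getBookTitle(String isbn) {
        return catalogue.findBook(isbn).getTitle();
    }
}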

Command Pattern
Context
A system is required to execute a number of commands, each doing a different task. For example, a system might have to support Sort, List, and Reset commands.
Problem
It is preferable that some part of the code executes these commands without having to know each command type, e.g., there can be a CommandQueue object that is responsible for queuing commands and executing them without knowledge of what each command does.
Solution
The essential element of this pattern is to have a general <<Command>> object that can be passed around, stored, executed, etc. without knowing the type of command (i.e. via polymorphism).
Let us examine an example application of the pattern first:
In the example solution below, the CommandCreator creates List, Sort, and Reset Command objects and adds them to the CommandQueue object. The CommandQueue object treats them all as Command objects and performs the execute/undo operation on each of them without knowledge of the specific Command type. When executed, each Command object will access the DataStore object to carry out its task. The Command class can also be an abstract class or an interface.

The general form of the solution is as follows.

The <<Client>> creates a <<ConcreteCommand>> object and passes it to the <<Invoker>>. The <<Invoker>> object treats all commands as a general <<Command>> type. The <<Invoker>> issues a request by calling execute() on the command. If a command is undoable, the <<ConcreteCommand>> will store the state for undoing the command prior to invoking execute(). In addition, the <<ConcreteCommand>> object may have to be linked to any <<Receiver>> of the command before it is passed to the <<Invoker>>. Note that an application of the command pattern does not have to follow the structure given above.
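Here is one possible minimal Java sketch of that general structure (the names follow the roles above; undo support and receiver wiring are deliberately simplified): the invoker works only with the Command interface and never sees the concrete command types.
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of the Command pattern roles described above.
interface Command {                       // <<Command>>
    void execute();
}

class DataStore {                         // <<Receiver>>
    void sort() { System.out.println("data sorted"); }
    void list() { System.out.println("data listed"); }
}

class SortCommand implements Command {    // <<ConcreteCommand>>
    private final DataStore dataStore;

    SortCommand(DataStore dataStore) {
        this.dataStore = dataStore;       // linked to its receiver
    }

    public void execute() {
        dataStore.sort();
    }
}

class CommandQueue {                      // <<Invoker>>
    private final Queue<Command> pending = new ArrayDeque<>();

    void add(Command command) {
        pending.add(command);
    }

    void executeAll() {
        // The invoker has no knowledge of the specific command types.
        while (!pending.isEmpty()) {
            pending.poll().execute();
        }
    }
}
For example, a client could create a DataStore, wrap it in a SortCommand, add that to a CommandQueue, and later call executeAll().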
Model View Controller (MVC) Pattern
Context
Most applications support storage/retrieval of information, displaying of information to the user (often via multiple UIs having different formats), and changing stored information based on external inputs.
Problem
The high coupling that can result from the interlinked nature of the features described above.
Solution
Decouple data, presentation, and control logic of an application by separating them into three different components: Model, View and Controller.
- View: Displays data, interacts with the user, and pulls data from the model if necessary.
- Controller: Detects UI events such as mouse clicks and button pushes, and takes follow up action. Updates/changes the model/view when necessary.
- Model: Stores and maintains data. Updates the view if necessary.
The relationship between the components can be observed in the diagram below. Typically, the UI is the combination of View and Controller.

Given below is a concrete example of MVC applied to a student management system. In this scenario, the user is retrieving the data of a student.

In the diagram above, when the user clicks on a button using the UI, the ‘click’ event is caught and handled by the UiController. The ref frame indicates that the interactions within that frame have been extracted out to another separate sequence diagram.
Note that in a simple UI where there’s only one view, Controller and View can be combined as one class.
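A bare-bones Java sketch of the three roles, using invented class names for a student management example; this is only one way to wire the roles together, and real MVC frameworks differ in details such as who triggers view updates.
// Illustrative sketch of the Model-View-Controller roles.
class StudentModel {                           // Model: stores and maintains data
    private String studentName = "";

    void setStudentName(String name) {
        this.studentName = name;
    }

    String getStudentName() {
        return studentName;
    }
}

class StudentView {                            // View: displays data pulled from the model
    void render(StudentModel model) {
        System.out.println("Student: " + model.getStudentName());
    }
}

class StudentController {                      // Controller: reacts to UI events
    private final StudentModel model;
    private final StudentView view;

    StudentController(StudentModel model, StudentView view) {
        this.model = model;
        this.view = view;
    }

    // Imagine this being called when the user clicks a 'save' button.
    void onNameEntered(String name) {
        model.setStudentName(name);
        view.render(model);
    }
}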
There are many variations of the MVC model used in different domains. For example, the one used in a desktop GUI could be different from the one used in a web application.
Observer Pattern
Context
An object (possibly more than one) is interested in being notified when a change happens to another object. That is, some objects want to ‘observe’ another object.
Consider this scenario from a student management system where the user is adding a new student to the system.

Now, assume the system has two additional views used in parallel by different users:
- StudentListUi: accesses a list of students
- StudentStatsUi: generates statistics of current students
When a student is added to the database using NewStudentUi shown above, both StudentListUi and StudentStatsUi should get updated automatically, as shown below.

However, the StudentList object has no knowledge about StudentListUi and StudentStatsUi (note the direction of the navigability) and has no way to inform those objects. This is an example of the type of problem addressed by the Observer pattern.
Problem
The ‘observed’ object does not want to be coupled to objects that are ‘observing’ it.
Solution
Force the communication through an interface known to both parties.

Here is the Observer pattern applied to the student management system.
During the initialization of the system,
First, create the relevant objects.
StudentList studentList = new StudentList();
StudentListUi listUi = new StudentListUi();
StudentStatsUi statsUi = new StudentStatsUi();
Next, the two UIs indicate to the StudentList that they are interested in being updated whenever StudentList changes. This is also known as ‘subscribing for updates’.
studentList.addUi(listUi);
studentList.addUi(statsUi);
Within the addUi operation of StudentList, all Observer object subscribers are added to an internal data structure called observerList.
// StudentList class
public void addUi(Observer o) {
    observerList.add(o);
}
Now, whenever the data in StudentList changes (e.g. when a new student is added to the StudentList),
All interested observers are updated by calling the notifyUIs operation.
// StudentList class
public void notifyUIs() {
    // for each observer in the list
    for (Observer o : observerList) {
        o.update();
    }
}
UIs can then pull data from the StudentList whenever the update operation is called.
// StudentListUi class
public void update() {
    // refresh UI by pulling data from StudentList
}
Note that StudentList is unaware of the exact nature of the two UIs but still manages to communicate with them via an intermediary.
Here is the generic description of the observer pattern:

- <<Observer>> is an interface: any class that implements it can observe an <<Observable>>. Any number of <<Observer>> objects can observe (i.e. listen to changes of) the <<Observable>> object.
- The <<Observable>> maintains a list of <<Observer>> objects. The addObserver(Observer) operation adds a new <<Observer>> to the list of <<Observer>>s.
- Whenever there is a change in the <<Observable>>, the notifyObservers() operation is called, which will call the update() operation of all <<Observer>>s in the list.
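Below is a minimal Java sketch of that generic structure; an Observable base class is used here for brevity, though it could equally be an interface, and these types are our own illustrative ones, unrelated to the deprecated java.util.Observer/Observable.
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the generic Observer structure described above.
interface Observer {
    void update();
}

class Observable {
    private final List<Observer> observers = new ArrayList<>();

    void addObserver(Observer observer) {
        observers.add(observer);
    }

    void notifyObservers() {
        // Called whenever there is a change in the observable.
        for (Observer observer : observers) {
            observer.update();
        }
    }
}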
In a GUI application, how is the Controller notified when the “save” button is clicked? UI frameworks such as JavaFX have inbuilt support for the Observer pattern.
More
Design patterns are usually embedded in a larger design and sometimes applied in combination with other design patterns.
Let us look at a case study that shows how design patterns are used in the design of a class structure for a Stock Inventory System (SIS) for a shop. The shop sells appliances and accessories for the appliances. SIS simply stores information about each item in the store.
Use Cases:
- Create a new item
- View information about an item
- Modify information about an item
- View all available accessories for a given appliance
- List all items in the store
SIS can be accessed using multiple terminals. Shop assistants use their own terminals to access SIS, while the shop manager’s terminal continuously displays a list of all items in the store. In the future, it is expected that suppliers of items use their own applications to connect to SIS to get real-time information about the current stock status. User authentication is not required for the current version, but may be required in the future.
A step by step explanation of the design is given below. Note that this is one out of many possible designs. Design patterns are also applied where appropriate.
A StockItem can be an Appliance or an Accessory.

To track that each Accessory is associated with the correct Appliance, consider the following alternative class structures.

The third one seems more appropriate (the second one is suitable if accessories can have accessories). Next, consider whether to keep a list of Appliances or a list of StockItems. Which is more appropriate?

The latter seems more suitable because it can handle both appliances and accessories the same way. Next, an abstraction occurrence pattern is applied to keep track of StockItems.

Note the inclusion of navigabilities. Here’s a sample object diagram based on the class model created thus far.

Next, apply the façade pattern to shield the SIS internals from the UI.

As UI consists of multiple views, the MVC pattern is applied here.

Some views need to be updated when the data changes; apply the Observer pattern here.

In addition, the Singleton pattern can be applied to the façade class.
The most famous source of design patterns is the "Gang of Four" (GoF) book which contains 23 design patterns divided into three categories:
- Creational: About object creation. They separate the operation of an application from how its objects are created.
- Abstract Factory, Builder, Factory Method, Prototype, Singleton
- Structural: About the composition of objects into larger structures while catering for future extension in structure.
- Adapter, Bridge, Composite, Decorator, Façade, Flyweight, Proxy
- Behavioral: Defining how objects interact and how responsibility is distributed among them.
- Chain of Responsibility, Command, Interpreter, Template Method, Iterator, Mediator, Memento, Observer, State, Strategy, Visitor
Design patterns provide a high-level vocabulary to talk about design.
Someone can say 'apply Observer pattern here' instead of having to describe the mechanics of the solution in detail.
Knowing more patterns is a way to become more ‘experienced’. Aim to learn at least the context and the problem of patterns so that when you encounter those problems you know where to look for a solution.
Some patterns are domain-specific e.g. patterns for distributed applications, some are created in-house e.g. patterns in the company/project and some can be self-created e.g. from past experience.
Be careful not to overuse patterns. Do not throw patterns at a problem at every opportunity. Patterns come with overhead such as adding more classes or increasing the levels of abstraction. Use them only when they are needed. Before applying a pattern, make sure that:
- there is substantial improvement in the design, not just superficial.
- the associated tradeoffs are carefully considered. There are times when a design pattern is not appropriate (or an overkill).
The notion of capturing design ideas as "patterns" is usually attributed to Christopher Alexander. He is a building architect noted for his theories about design. His book The Timeless Way of Building talks about "design patterns" for constructing buildings.
Here is a sample pattern from that book:
When a room has a window with a view, the window becomes a focal point: people are attracted to the window and want to look through it. The furniture in the room creates a second focal point: everyone is attracted toward whatever point the furniture aims them at (usually the center of the room or a TV). This makes people feel uncomfortable. They want to look out the window, and toward the other focus at the same time. If you rearrange the furniture, so that its focal point becomes the window, then everyone will suddenly notice that the room is much more “comfortable”.
Apparently, patterns and anti-patterns are found in the field of building architecture. This is because they are general concepts applicable to any domain, not just software design. In software engineering, there are many general types of patterns: Analysis patterns, Design patterns, Testing patterns, Architectural patterns, Project management patterns, and so on.
In fact, the abstraction occurrence pattern is more of an analysis pattern than a design pattern, while MVC is more of an architectural pattern.
New patterns can be created too. If a common problem that needs to be solved frequently leads to a non-obvious and better solution, it can be formulated as a pattern so that it can be reused by others. However, don’t reinvent the wheel; the pattern might already exist.
Design Approaches
In a smaller system, the design of the entire system can be shown in one place.
This class diagram of se-edu/addressbook-level2 depicts the design of the entire software.

The design of bigger systems needs to be done/shown at multiple levels.
This architecture diagram of se-edu/addressbook-level3 depicts the high-level design of the software.

Here are examples of lower level designs of some components of the same software:



Multi-level design can be done in a top-down manner, bottom-up manner, or as a mix.
- Top-down: Design the high-level design first and flesh out the lower levels later. This is especially useful when designing big and novel systems where the high-level design needs to be stable before lower levels can be designed.
- Bottom-up: Design lower level components first and put them together to create the higher-level systems later. This is not usually scalable for bigger systems. One instance where this approach might work is when designing a variation of an existing system or re-purposing existing components to build a new system.
- Mix: Design the top levels using the top-down approach but switch to a bottom-up approach when designing the bottom levels.
Agile design can be contrasted with full upfront design in the following way:
Agile designs are emergent, they’re not defined up front. Your overall system design will emerge over time, evolving to fulfill new requirements and take advantage of new technologies as appropriate. Although you will often do some initial architectural modeling at the very beginning of a project, this will be just enough to get your team going. This approach does not produce a fully documented set of models in place before you may begin coding. -- adapted from agilemodeling.com
SECTION: IMPLEMENTATION
IDEs
Introduction
Professional software engineers often write code using Integrated Development Environments (IDEs). IDEs support most development-related work within the same tool (hence, the term integrated).
An IDE generally consists of:
- A source code editor that includes features such as syntax coloring, auto-completion, easy code navigation, error highlighting, and code-snippet generation.
- A compiler and/or an interpreter (together with other build automation support) that facilitates the compilation/linking/running/deployment of a program.
- A debugger that allows the developer to execute the program one step at a time to observe the run-time behavior in order to locate bugs.
- Other tools that aid various aspects of coding e.g. support for automated testing, drag-and-drop construction of UI components, version management support, simulation of the target runtime platform, and modeling support.
Examples of popular IDEs:
- Java: Eclipse, Intellij IDEA, NetBeans
- C#, C++: Visual Studio
- Swift: XCode
- Python: PyCharm
Some web-based IDEs have appeared in recent times too e.g., Amazon's Cloud9 IDE.
Some experienced developers, in particular those with a UNIX background, prefer lightweight yet powerful text editors with scripting capabilities (e.g. Emacs) over heavier IDEs.
Debugging
Debugging is the process of discovering defects in the program. Here are some approaches to debugging:
- Bad -- By inserting temporary print statements: This is an ad-hoc approach in which print statements are inserted in the program to print information relevant to debugging, such as variable values, e.g. Exiting process() method, x is 5.347. This approach is not recommended due to these reasons:
- It incurs extra effort when inserting and removing the print statements.
- These extraneous program modifications increase the risk of introducing errors into the program.
- These print statements, if not removed promptly after the debugging, may even appear unexpectedly in the production version.
- Bad -- By manually tracing through the code: Otherwise known as ‘eye-balling’, this approach doesn't have the cons of the previous approach, but it too is not recommended (other than as a 'quick try') due to these reasons:
- It is a difficult, time consuming, and error-prone technique.
- If you didn't spot the error while writing the code, you might not spot the error when reading the code either.
- Good -- Using a debugger: A debugger tool allows you to pause the execution, then step through the code one statement at a time while examining the internal state if necessary. Most IDEs come with an inbuilt debugger. This is the recommended approach for debugging.
Code Quality
Introduction
Always code as if the person who ends up maintaining your code will be a violent psychopath who knows where you live. -- Martin Golding
Production code needs to be of high quality. Given how the world is becoming increasingly dependent on software, poor quality code is something no one can afford to tolerate.
Guideline: Maximize Readability
Programs should be written and polished until they acquire publication quality. --Niklaus Wirth
Among various dimensions of code quality, such as run-time efficiency, security, and robustness, one of the most important is understandability. This is because in any non-trivial software project, code needs to be read, understood, and modified by other developers later on. Even if you do not intend to pass the code to someone else, code quality is still important because you will become a 'stranger' to your own code someday.
The two code samples given below achieve the same functionality, but one is easier to read.
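The following pair is an illustrative sketch (the names and the formula are invented for this guide); both methods compute the same result, but the second is easier to understand at a glance.

// Bad: cryptic names and a 'magic' expression
double calc(double p, double r, int n) {
    return p * r / 100 / 12 * n;
}

// Good: descriptive names and a named intermediate value
double calculateInterest(double principal, double annualRatePercent, int months) {
    double monthlyRate = annualRatePercent / 100 / 12;
    return principal * monthlyRate * months;
}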
Guideline: Follow a Standard
One essential way to improve code quality is to follow a consistent style. That is why software engineers usually follow a strict coding standard (aka style guide).
The aim of a coding standard is to make the entire code base look like it was written by one person. A coding standard is usually specific to a programming language and specifies guidelines such as the locations of opening and closing braces, indentation styles and naming styles (e.g. whether to use Hungarian style, Pascal casing, Camel casing, etc.). It is important that the whole team/company uses the same coding standard and that the standard is generally not inconsistent with typical industry practices. If a company's coding standard is very different from what is typically used in the industry, new recruits will take longer to get used to the company's coding style.
IDEs can help to enforce some parts of a coding standard e.g. indentation rules.
Go through the Java coding standard at @SE-EDU and learn the basic style rules.
Sample coding standard: PEP 8 Python Style Guide -- by Python.org
Go through the Java coding standard at @SE-EDU and learn the intermediate style rules.
Guideline: Name Well
Proper naming improves the readability of code. It also reduces bugs caused by ambiguities regarding the intent of a variable or a method.
There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton
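For example (the names below are invented for illustration), compare the following Java declarations; the second makes the intent and the unit of measurement obvious.

// unclear: what does 'd' mean? days? distance?
int d = getD();

// clear: the name states both the intent and the unit
int elapsedTimeInDays = getElapsedTimeInDays();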
Guideline: Avoid Unsafe Shortcuts
It is safer to use language constructs in the way they are meant to be used, even if the language allows shortcuts. Such coding practices are common sources of bugs. Know them and avoid them.
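A common Java example (a minimal sketch; the method names are invented) is omitting the braces of an if statement because the body is 'only one line':

// Unsafe shortcut: braces omitted
if (isValid(input))
    process(input);
    save(input); // looks like part of the if block, but always executes

// Safer: always use braces, even for single-statement bodies
if (isValid(input)) {
    process(input);
    save(input);
}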
Guideline: Comment Minimally, But Sufficiently
Good code is its own best documentation. As you’re about to add a comment, ask yourself, ‘How can I improve the code so that this comment isn’t needed?’ Improve the code and then document it to make it even clearer. -- Steve McConnell, author of Code Complete
Some think commenting heavily increases the 'code quality'. That is not so. Avoid writing comments to explain bad code. Improve the code to make it self-explanatory.
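For example (an illustrative sketch with invented names), instead of propping up unclear code with a comment, rewrite the code so that the comment becomes unnecessary:

// Before: a comment compensates for an unclear condition
// check whether the customer qualifies for a discount
if (c.getAge() > 60 || c.getTotalPurchases() > 100) {
    applyDiscount(c);
}

// After: the code explains itself; no comment needed
if (customer.isEligibleForDiscount()) {
    applyDiscount(customer);
}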
Refactoring
The first version of the code you write may not be of production quality. It is OK to first concentrate on making the code work, rather than worry over the quality of the code, as long as you improve the quality later. This process of improving a program's internal structure in small steps without modifying its external behavior is called refactoring.
- Refactoring is not rewriting: Discarding poorly-written code entirely and re-writing it from scratch is not refactoring because refactoring needs to be done in small steps.
- Refactoring is not bug fixing: By definition, refactoring is different from bug fixing or any other modifications that alter the external behavior (e.g. adding a feature) of the component in concern.
Improving code structure can have many secondary benefits: e.g.
- hidden bugs become easier to spot
- improve performance (sometimes, simpler code runs faster than complex code because simpler code is easier for the compiler to optimize).
Given below are two common refactorings (more).
Refactoring Name: Consolidate Duplicate Conditional Fragments
Situation: The same fragment of code is in all branches of a conditional expression.
Method: Move it outside of the expression.
Example:
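Here is a minimal sketch of this refactoring (the method and variable names are invented for illustration), in which the duplicated sendReceipt() call is moved out of the conditional:

// Before: sendReceipt() is duplicated in both branches
if (isSpecialDeal()) {
    total = price * 0.95;
    sendReceipt();
} else {
    total = price * 0.98;
    sendReceipt();
}

// After: the duplicated fragment appears only once, outside the conditional
if (isSpecialDeal()) {
    total = price * 0.95;
} else {
    total = price * 0.98;
}
sendReceipt();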
Refactoring Name: Extract Method
Situation: You have a code fragment that can be grouped together.
Method: Turn the fragment into a method whose name explains the purpose of the method.
Example:
Before (Java):

void printOwing() {
    printBanner();

    // print details
    System.out.println("name: " + name);
    System.out.println("amount " + getOutstanding());
}

After (Java):

void printOwing() {
    printBanner();
    printDetails(getOutstanding());
}

void printDetails(double outstanding) {
    System.out.println("name: " + name);
    System.out.println("amount " + outstanding);
}

Before (Python):

def print_owing():
    print_banner()

    # print details
    print("name: " + name)
    print("amount " + get_outstanding())

After (Python):

def print_owing():
    print_banner()
    print_details(get_outstanding())

def print_details(amount):
    print("name: " + name)
    print("amount " + amount)
Some IDEs have built-in support for basic refactorings such as automatically renaming a variable/method/class in all places it has been used.
Refactoring, even if done with the aid of an IDE, may still result in regressions. Therefore, each small refactoring should be followed by regression testing.
Given below are some more commonly used refactorings. A more comprehensive list is available at refactoring-catalog.
One way to identify refactoring opportunities is by code smells.
A code smell is a surface indication that usually corresponds to a deeper problem in the system. First, a smell is by definition something that's quick to spot. Second, smells don't always indicate a problem.
--adapted from https://martinfowler.com/bliki/CodeSmell.html
An example (from the same source as above) is the code smell data class i.e., a class with all data and no behavior. When you encounter such a class, you can explore whether moving the corresponding behavior into that class is an appropriate refactoring.
Periodic refactoring is a good way to pay off the technical debt a code base has accumulated.
Software systems are prone to the build-up of cruft - deficiencies in internal quality that make it harder than it would ideally be to modify and extend the system further. Technical Debt is a metaphor, coined by Ward Cunningham, that frames how to think about dealing with this cruft, thinking of it like a financial debt. The extra effort that it takes to add new features is the interest paid on the debt.
--https://martinfowler.com/bliki/TechnicalDebt.html
While it is important to refactor frequently so as to avoid the accumulation of ‘messy’ code (aka technical debt), an important question is how much refactoring is too much refactoring? It is too much refactoring when the benefits no longer justify the cost. The costs and the benefits depend on the context. That is why some refactorings are ‘opposites’ of each other (e.g. extract method vs inline method).
Documentation
Introduction
Developer-to-developer documentation can be in one of two forms:
- Documentation for developer-as-user: Software components are written by developers and reused by other developers, which means there is a need to document how such components are to be used. Such documentation can take several forms:
- API documentation: APIs expose functionality in small-sized, independent and easy-to-use chunks, each of which can be documented systematically.
- Tutorial-style instructional documentation: In addition to explaining functions/methods independently, some higher-level explanations of how to use an API can be useful.
- Example of API Documentation: String API.
- Example of tutorial-style documentation: Java Internationalization Tutorial.
- Example of API Documentation: string API.
- Example of tutorial-style documentation: How to use Regular Expressions in Python
- Documentation for developer-as-maintainer: There is a need to document how a system or a component is designed, implemented and tested so that other developers can maintain and evolve the code. Writing documentation of this type is harder because of the need to explain complex internal details. However, given that readers of this type of documentation usually have access to the source code itself, only some information needs to be included in the documentation, as code (and code comments) can also serve as a complementary source of information.
- An example: se-edu/addressbook-level4 Developer Guide.
Another view proposed by Daniele Procida in this article is as follows:
There is a secret that needs to be understood in order to write good software documentation: there isn’t one thing called documentation, there are four. They are: tutorials, how-to guides, explanation and technical reference. They represent four different purposes or functions, and require four different approaches to their creation. Understanding the implications of this will help improve most software documentation - often immensely. ...
TUTORIALS
A tutorial:
- is learning-oriented
- allows the newcomer to get started
- is a lesson
Analogy: teaching a small child how to cook
HOW-TO GUIDES
A how-to guide:
- is goal-oriented
- shows how to solve a specific problem
- is a series of steps
Analogy: a recipe in a cookery book
EXPLANATION
An explanation:
- is understanding-oriented
- explains
- provides background and context
Analogy: an article on culinary social history
REFERENCE
A reference guide:
- is information-oriented
- describes the machinery
- is accurate and complete
Analogy: a reference encyclopedia article
Software documentation (applies to both user-facing and developer-facing) is best kept in a text format for ease of version tracking. A writer-friendly source format is also desirable as non-programmers (e.g., technical writers) may need to author/edit such documents. As a result, formats such as Markdown, AsciiDoc, and PlantUML are often used for software documentation.
Guidelines
Guideline: Go Top-down, Not Bottom-up
When writing project documents, a top-down breadth-first explanation is easier to understand than a bottom-up one.
The main advantage of the top-down approach is that the document is structured like an upside down tree (root at the top) and the reader can travel down a path she is interested in until she reaches the component she is interested to learn in-depth, without having to read the entire document or understand the whole system.
To explain a system called SystemFoo with two sub-systems, FrontEnd and BackEnd, start by describing the system at the highest level of abstraction, and progressively drill down to lower level details. An outline for such a description is given below.
[First, explain what the system is, in a black-box fashion (no internal details, only the external view).]
SystemFoo is a ....
[Next, explain the high-level architecture of SystemFoo, referring to its major components only.]
SystemFoo consists of two major components: FrontEnd and BackEnd.
The job of FrontEnd is to ... while the job of BackEnd is to ...
And this is how FrontEnd and BackEnd work together ...
[Now you can drill down to FrontEnd's details.]
FrontEnd consists of three major components: A, B, C.
A's job is to ...
B's job is to ...
C's job is to ...
And this is how the three components work together ...
[At this point, further drill down to the internal workings of each component. A reader who is not interested in knowing the nitty-gritty details can skip ahead to the section on BackEnd.]
In-depth description of A
In-depth description of B
...
[At this point drill down to the details of the BackEnd.]
...
Guideline: Aim for Comprehensibility
Technical documents exist to help others understand technical details. Therefore, it is not enough for the documentation to be accurate and comprehensive; it should also be comprehensible.
Here are some tips on writing effective documentation.
- Use plenty of diagrams: It is not enough to explain something in words; complement it with visual illustrations (e.g. a UML diagram).
- Use plenty of examples: When explaining algorithms, show a running example to illustrate each step of the algorithm, in parallel to worded explanations.
- Use simple and direct explanations: Convoluted explanations and fancy words will annoy readers. Avoid long sentences.
- Get rid of statements that do not add value: For example, 'We made sure our system works perfectly' (who didn't?), 'Component X has its own responsibilities' (of course it has!).
- It is not a good idea to have separate sections for each type of artifact, such as 'use cases', 'sequence diagrams', 'activity diagrams', etc. Such a structure, coupled with the indiscriminate inclusion of diagrams without justifying their need, indicates a failure to understand the purpose of documentation. Include diagrams when they are needed to explain something. If you want to provide additional diagrams for completeness' sake, include them in the appendix as a reference.
Guideline: Document Minimally, but Sufficiently
Aim for 'just enough' developer documentation.
- Writing and maintaining developer documents is an overhead. You should try to minimize that overhead.
- If the readers are developers who will eventually read the code, the documentation should complement the code and should provide only just enough guidance to get started.
Anything that is already clear in the code need not be described in words. Instead, focus on providing higher level information that is not readily visible in the code or comments.
Refrain from duplicating chunks of text. When describing several similar algorithms/designs/APIs, etc., do not simply duplicate large chunks of text. Instead, describe the similarities in one place and emphasize only the differences in other places. It is very annoying to see pages and pages of similar text without any indication as to how they differ from each other.
Tools
JavaDoc
JavaDoc is a tool for generating API documentation in HTML format from comments in the source code. In addition, modern IDEs use JavaDoc comments to generate explanatory tooltips.
An example method header comment in JavaDoc format:
/**
* Returns an Image object that can then be painted on the screen.
* The url argument must specify an absolute {@link URL}. The name
* argument is a specifier that is relative to the url argument.
* <p>
* This method always returns immediately, whether or not the
* image exists. When this applet attempts to draw the image on
* the screen, the data will be loaded. The graphics primitives
* that draw the image will incrementally paint on the screen.
*
* @param url an absolute URL giving the base location of the image
* @param name the location of the image, relative to the url argument
* @return the image at the specified URL
* @see Image
*/
public Image getImage(URL url, String name) {
    try {
        return getImage(new URL(url, name));
    } catch (MalformedURLException e) {
        return null;
    }
}
Generated HTML documentation:
Tooltip generated by Intellij IDE:
Markdown
AsciiDoc
AsciiDoc is similar to Markdown but has more powerful (and also more complex) syntax.
- AsciiDoc Writers Guide -- from Asciidoctor.org
Error Handling
Introduction
Well-written applications include error-handling code that allows them to recover gracefully from unexpected errors. When an error occurs, the application may need to request user intervention, or it may be able to recover on its own. In extreme cases, the application may log the user off or shut down the system. -- Microsoft
Exceptions
Exceptions are used to deal with 'unusual' but not entirely unexpected situations that the program might encounter at runtime.
Exception:
The term exception is shorthand for the phrase "exceptional event." An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions. –- Java Tutorial (Oracle Inc.)
Examples:
- A network connection encounters a timeout due to a slow server.
- The code tries to read a file from the hard disk but the file is corrupted and cannot be read.
Most languages allow code that encountered an "exceptional" situation to encapsulate details of the situation in an Exception object and throw/raise that object so that another piece of code can catch it and deal with it. This is especially useful when the code that encountered the unusual situation does not know how to deal with it.
The extract below from the Java Tutorial (with slight adaptations) explains how exceptions are typically handled.
When an error occurs at some point in the execution, the code being executed creates an exception object and hands it off to the runtime system. The exception object contains information about the error, including its type and the state of the program when the error occurred. Creating an exception object and handing it to the runtime system is called throwing an exception.
After a method throws an exception, the runtime system attempts to find something to handle it in the call stack. The runtime system searches the call stack for a method that contains a block of code that can handle the exception. This block of code is called an exception handler. The search begins with the method in which the error occurred and proceeds through the call stack in the reverse order in which the methods were called. When an appropriate handler is found, the runtime system passes the exception to the handler. An exception handler is considered appropriate if the type of the exception object thrown matches the type that can be handled by the handler.
The exception handler chosen is said to catch the exception. If the runtime system exhaustively searches all the methods on the call stack without finding an appropriate exception handler, the program terminates.
Advantages of exception handling in this way:
- The ability to propagate error information through the call stack.
- The separation of code that deals with 'unusual' situations from the code that does the 'usual' work.
In general, use exceptions only for 'unusual' conditions. Use normal return statements to pass control to the caller for conditions that are 'normal'.
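Here is a minimal Java sketch of the mechanism described above (the method names, the config file name, and the DEFAULT_TIMEOUT constant are invented for illustration; File and FileNotFoundException are from java.io):

double getTimeout(String fileName) throws FileNotFoundException {
    File configFile = new File(fileName);
    if (!configFile.exists()) {
        // the code that detects the 'unusual' situation throws an exception ...
        throw new FileNotFoundException("Missing config file: " + fileName);
    }
    return readTimeoutFrom(configFile);
}

void startServer() {
    double timeout;
    try {
        timeout = getTimeout("config.json");
    } catch (FileNotFoundException e) {
        // ... and a caller further up the call stack catches it and recovers
        timeout = DEFAULT_TIMEOUT;
    }
    // ... continue with the 'usual' work, using timeout
}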
Assertions
Assertions are used to define assumptions about the program state so that the runtime can verify them. An assertion failure indicates a possible bug in the code because the code has resulted in a program state that violates an assumption about how the code should behave.
An assertion can be used to express something like: when the execution comes to this point, the variable v cannot be null.
If the runtime detects an assertion failure, it typically takes some drastic action such as terminating the execution with an error message. This is because an assertion failure indicates a possible bug and the sooner the execution stops, the safer it is.
In the Java code below, suppose you set an assertion that the timeout returned by Config.getTimeout() is greater than 0. Now, if Config.getTimeout() returns -1 in a specific execution of this line, the runtime can detect it as an assertion failure -- i.e. an assumption about the expected behavior of the code turned out to be wrong, which could potentially be the result of a bug -- and take some drastic action such as terminating the execution.
int timeout = Config.getTimeout();
Use the assert keyword to define assertions.
This assertion will fail with the message x should be 0 if x is not 0 at this point.
x = getX();
assert x == 0 : "x should be 0";
...
Assertions can be disabled without modifying the code.
java -enableassertions HelloWorld (or java -ea HelloWorld) will run HelloWorld with assertions enabled, while java -disableassertions HelloWorld will run it without verifying assertions.
Java disables assertions by default. This could create a situation where you think all assertions are being verified as true while in fact they are not being verified at all. Therefore, remember to enable assertions when you run the program if you want them to be in effect.
Enable assertions in Intellij (how?) and get an assertion to fail temporarily (e.g. insert an assert false into the code temporarily) to confirm assertions are being verified.
Java assert vs JUnit assertions: They are similar in purpose but JUnit assertions are more powerful and customized for testing. In addition, JUnit assertions are not disabled by default. We recommend you use JUnit assertions in test code and Java assert in functional code.
It is recommended that assertions be used liberally in the code. Their impact on performance is considered low and worth the additional safety they provide.
Do not use assertions to do work, because assertions can be disabled. If you do, your program will not work correctly when assertions are not enabled.
The code below will not invoke the writeFile() method when assertions are disabled. If that method is performing some work that is necessary for your program, your program will not work correctly when assertions are disabled.
...
assert writeFile() : "File writing is supposed to return true";
Assertions are suitable for verifying assumptions about Internal Invariants, Control-Flow Invariants, Preconditions, Postconditions, and Class Invariants. Refer to [Programming with Assertions (second half)] to learn more.
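For example, a control-flow invariant can be asserted as follows (a minimal sketch; the surrounding logic and method names are hypothetical): execution is never supposed to reach the final else branch because zero values are rejected earlier.

if (x > 0) {
    processPositiveValue(x);
} else if (x < 0) {
    processNegativeValue(x);
} else {
    // control-flow invariant: this branch should be unreachable
    assert false : "x should never be zero at this point";
}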
Exceptions and assertions are two complementary ways of handling errors in software but they serve different purposes. Therefore, both assertions and exceptions should be used in code.
- The raising of an exception indicates an unusual condition created by the user (e.g. user inputs an unacceptable input) or the environment (e.g., a file needed for the program is missing).
- An assertion failure indicates the programmer made a mistake in the code (e.g., a null value is returned from a method that is not supposed to return null under any circumstances).
Logging
Logging is the deliberate recording of certain information during a program execution for future reference. Logs are typically written to a log file but it is also possible to log information in other ways e.g. into a database or a remote server.
Logging can be useful for troubleshooting problems. A good logging system records some system information regularly. When bad things happen to a system e.g. an unanticipated failure, their associated log files may provide indications of what went wrong and actions can then be taken to prevent it from happening again.
A log file is like the flight data recorder of an airplane; it doesn't prevent problems, but it can be helpful in understanding what went wrong after the fact.
Most programming environments come with logging systems that allow sophisticated forms of logging. They have features such as the ability to enable and disable logging easily or to change the logging level.
This sample Java code uses Java’s default logging mechanism.
First, import the relevant Java package:
import java.util.logging.*;
Next, create a Logger:
private static Logger logger = Logger.getLogger("Foo");
Now, you can use the Logger object to log information. Note the use of a logging level for each message. When running the code, the logging level can be set to WARNING so that log messages specified as having INFO level (which is a lower level than WARNING) will not be written to the log file at all.
// log a message at INFO level
logger.log(Level.INFO, "going to start processing");
// ...
processInput();
if (error) {
// log a message at WARNING level
logger.log(Level.WARNING, "processing error", ex);
}
// ...
logger.log(Level.INFO, "end of processing");
Defensive Programming
A defensive programmer codes under the assumption "if you leave room for things to go wrong, they will go wrong". Therefore, a defensive programmer proactively tries to eliminate any room for things to go wrong.
Consider a method MainApp#getConfig() that returns a Config object containing configuration data. A typical implementation is given below:
class MainApp {
    Config config;

    /** Returns the config object */
    Config getConfig() {
        return config;
    }
}
If the returned Config object is not meant to be modified, a defensive programmer might use the more defensive implementation given below. This is more defensive because even if the returned Config object is modified (although it is not meant to be), it will not affect the config object inside the MainApp object.
/** Returns a copy of the config object */
Config getConfig() {
    return config.copy(); // return a defensive copy
}
Consider two classes, Account and Guarantor, with an association as shown in the following diagram:
Example:

Here, the association is compulsory i.e. an Account object should always be linked to a Guarantor. One way to implement this is to simply use a reference variable, like this:
class Account {
    Guarantor guarantor;

    void setGuarantor(Guarantor g) {
        guarantor = g;
    }
}
However, what if someone else used the Account class like this?
Account a = new Account();
a.setGuarantor(null);
This results in an Account without a Guarantor! In a real banking system, this could have serious consequences! The code here did not try to prevent such a thing from happening. You can make the code more defensive by proactively enforcing the multiplicity constraint, like this:
class Account {
    private Guarantor guarantor;

    public Account(Guarantor g) {
        if (g == null) {
            stopSystemWithMessage(
                    "multiplicity violated. Null Guarantor");
        }
        guarantor = g;
    }

    public void setGuarantor(Guarantor g) {
        if (g == null) {
            stopSystemWithMessage(
                    "multiplicity violated. Null Guarantor");
        }
        guarantor = g;
    }

    // ...
}
Consider the association given below. A defensive implementation requires us to ensure that a MinedCell cannot exist without a Mine and vice versa, which requires simultaneous object creation. However, Java can only create one object at a time. Given below are two alternative implementations, both of which violate the multiplicity for a short period of time.

Option 1:
class MinedCell {
    private Mine mine;

    public MinedCell(Mine m) {
        if (m == null) {
            showError();
        }
        mine = m;
    }
    // ...
}
Option 1 forces us to keep a Mine without a MinedCell (until the MinedCell is created).
Option 2:
class MinedCell {
    private Mine mine;

    public MinedCell() {
        mine = new Mine();
    }
    // ...
}
Option 2 is more defensive because the Mine is immediately linked to a MinedCell.
A bidirectional association in the design (shown in (a)) is usually emulated at code level using two variables (as shown in (b)).

class Man {
    Woman girlfriend;

    void setGirlfriend(Woman w) {
        girlfriend = w;
    }
    // ...
}

class Woman {
    Man boyfriend;

    void setBoyfriend(Man m) {
        boyfriend = m;
    }
}
The two classes are meant to be used as follows:
Woman jean;
Man james;
// ...
james.setGirlfriend(jean);
jean.setBoyfriend(james);
Suppose the two classes were used like this instead:
Woman jean;
Man james, yong;
// ...
james.setGirlfriend(jean);
jean.setBoyfriend(yong);
Now James' girlfriend is Jean, while Jean's boyfriend is not James. This situation is a result of the code not being defensive enough to stop this "love triangle". In such a situation, you could say that the referential integrity has been violated. This means that there is an inconsistency in object references.

One way to prevent this situation is to implement the two classes as shown below. Note how the referential integrity is maintained.
public class Woman {
    private Man boyfriend;

    public void setBoyfriend(Man m) {
        if (boyfriend == m) {
            return;
        }
        if (boyfriend != null) {
            boyfriend.breakUp();
        }
        boyfriend = m;
        m.setGirlfriend(this);
    }

    public void breakUp() {
        boyfriend = null;
    }
    // ...
}

public class Man {
    private Woman girlfriend;

    public void setGirlfriend(Woman w) {
        if (girlfriend == w) {
            return;
        }
        if (girlfriend != null) {
            girlfriend.breakUp();
        }
        girlfriend = w;
        w.setBoyfriend(this);
    }

    public void breakUp() {
        girlfriend = null;
    }
    // ...
}
When james.setGirlfriend(jean) is executed, the code ensures that james breaks up with any current girlfriend before he accepts jean as his girlfriend. Furthermore, the code ensures that jean breaks up with any existing boyfriends before accepting james as her boyfriend.
It is not necessary to be 100% defensive all the time. While defensive code may be less prone to be misused or abused, such code can also be more complicated and slower to run.
The suitable degree of defensiveness depends on many factors such as:
- How critical is the system?
- Will the code be used by programmers other than the author?
- The level of programming language support for defensive programming
- The overhead of being defensive
Design-by-Contract Approach
Design by contract (DbC) is an approach for designing software that requires defining formal, precise and verifiable interface specifications for software components.
Suppose an operation is implemented with the behavior specified precisely in the API (preconditions, post conditions, exceptions etc.). When following the defensive approach, the code should first check if the preconditions have been met. Typically, exceptions are thrown if preconditions are violated. In contrast, the Design-by-Contract (DbC) approach to coding assumes that it is the responsibility of the caller to ensure all preconditions are met. The operation will honor the contract only if the preconditions have been met. If any of them have not been met, the behavior of the operation is "unspecified".
Languages such as Eiffel have native support for DbC. For example, preconditions of an operation can be specified in Eiffel and the language runtime will check precondition violations without the need to do it explicitly in the code. To follow the DbC approach in languages such as Java and C++ where there is no built-in DbC support, assertions can be used to confirm pre-conditions.
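A minimal sketch of the DbC style in Java (the method and the assertion message are invented for illustration): the method states its precondition and merely asserts it, rather than defensively validating the input and throwing an exception to the caller.

/** Returns the positive square root of x. Precondition: x >= 0. */
double squareRoot(double x) {
    // DbC: the caller is responsible for ensuring x >= 0;
    // the assertion documents and (when enabled) verifies that assumption
    assert x >= 0 : "precondition violated: x must be non-negative";
    return Math.sqrt(x);
}

A defensive version of the same method would instead check x explicitly and throw something like an IllegalArgumentException when the precondition is violated.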
Integration
Introduction
Combining parts of a software product to form a whole is called integration. It is also one of the most troublesome tasks and it rarely goes smoothly.
Approaches
In terms of timing and frequency, there are two general approaches to integration: late and one-time, early and frequent.
Late and one-time: wait till all components are completed and integrate all finished components near the end of the project.
This approach is not recommended because integration often causes many component incompatibilities (due to previous miscommunications and misunderstandings) to surface which can lead to delivery delays i.e. Late integration → incompatibilities found → major rework required → cannot meet the delivery date.
Early and frequent: integrate early and evolve each part in parallel, in small steps, re-integrating frequently.
A skeleton of the system can be written first. This can be done by one developer, possibly the one in charge of integration. After that, all developers can flesh out the skeleton in parallel, adding one feature at a time. After each feature is done, simply integrate the new code into the main system.
Here is an animation that compares the two approaches:
Big-bang integration: integrate all components at the same time.
Big-bang is not recommended because it will uncover too many problems at the same time which could make debugging and bug-fixing more complex than when problems are uncovered incrementally.
Incremental integration: integrate a few components at a time. This approach is better than big-bang integration because it surfaces integration problems in a more manageable way.
Here is an animation that compares the two approaches:

Based on the order in which components are integrated, incremental integration can be done in three ways.
Top-down integration: higher-level components are integrated before bringing in the lower-level components. One advantage of this approach is that higher-level problems can be discovered early. One disadvantage is that this requires the use of stubs in place of lower level components until the real lower-level components are integrated into the system. Otherwise, higher-level components cannot function as they depend on lower level ones.
Bottom-up integration: the reverse of top-down integration. Note that when integrating lower level components, drivers may be needed to test the integrated components because the UI may not be integrated yet, just like how top-down integration needs stubs.
Sandwich integration: a mix of the top-down and bottom-up approaches. The idea is to do both top-down and bottom-up so as to 'meet' in the middle.
Here is an animation that compares the three approaches:
Build Automation
Build automation tools automate the steps of the build process, usually by means of build scripts.
In a non-trivial project, building a product from its source code can be a complex multi-step process. For example, it can include steps such as: pull code from the revision control system, compile, link, run automated tests, automatically update release documents (e.g. build number), package into a distributable, push to repo, deploy to a server, delete temporary files created during building/testing, email developers of the new build, and so on. Furthermore, this build process can be done ‘on demand’, it can be scheduled (e.g. every day at midnight) or it can be triggered by various events (e.g. triggered by a code push to the revision control system).
Some of these build steps such as compiling, linking and packaging, are already automated in most modern IDEs. For example, several steps happen automatically when the ‘build’ button of the IDE is clicked. Some IDEs even allow customization of this build process to some extent.
However, most big projects use specialized build tools to automate complex build processes.
Some popular build tools relevant to Java developers: Gradle, Maven, Apache Ant, GNU Make
Some other build tools: Grunt (JavaScript), Rake (Ruby)
Some build tools also serve as dependency management tools. Modern software projects often depend on third party libraries that evolve constantly. That means developers need to download the correct version of the required libraries and update them regularly. Therefore, dependency management is an important part of build automation. Dependency management tools can automate that aspect of a project.
Maven and Gradle, in addition to managing the build process, can play the role of dependency management tools too.
An extreme application of build automation is called continuous integration (CI) in which integration, building, and testing happens automatically after each code change.
A natural extension of CI is Continuous Deployment (CD) where the changes are not only integrated continuously, but also deployed to end-users at the same time.
Some examples of CI/CD tools: Travis, Jenkins, Appveyor, CircleCI, GitHub Actions
Reuse
Introduction
Reuse is a major theme in software engineering practices. By reusing tried-and-tested components, the robustness of a new software system can be enhanced while reducing the manpower and time requirement. Reusable components come in many forms: a piece of code, a subsystem, or even a whole software product.
While you may be tempted to use many libraries/frameworks/platforms that seem to crop up on a regular basis and promise to bring great benefits, note that there are costs associated with reuse. Here are some:
- The reused code may be an overkill (think using a sledgehammer to crack a nut), increasing the size of, and/or degrading the performance of, your software.
- The reused software may not be mature/stable enough to be used in an important product. That means the software can change drastically and rapidly, possibly in ways that break your software.
- Non-mature software runs the risk of dying off as fast as it emerged, leaving you with a dependency that is no longer maintained.
- The license of the reused software (or its dependencies) may restrict how you can use/develop your software.
- The reused software might have bugs, missing features, or security vulnerabilities that are important to your product, but not so important to the maintainers of that software, which means those flaws will not get fixed as fast as you need them to.
- Malicious code can sneak into your product via compromised dependencies.
APIs
An Application Programming Interface (API) specifies the interface through which other programs can interact with a software component. It is a contract between the component and its clients.
A class has an API (e.g., API of the Java String class, API of the Python str class) which is a collection of public methods that you can invoke to make use of the class.
The GitHub API is a collection of web request formats that the GitHub server accepts and their corresponding responses. You can write a program that interacts with GitHub through that API.
When developing large systems, if you define the API of each component early, the development team can develop the components in parallel because the future behavior of the other components is now more predictable.
An API should be well-designed (i.e. should cater for the needs of its users) and well-documented.
When you write software consisting of multiple components, you need to define the API of each component.
One approach is to let the API emerge and evolve over time as you write code.
Another approach is to define the API up-front. Doing so allows us to develop the components in parallel.
You can use UML sequence diagrams to analyze the required interactions between components in order to discover the required API. Given below is an example.
Example:

As you analyze the interactions between components using sequence diagrams, you discover the API of those components. For example, the diagram above tells us that the MSLogic component API should have the methods:
new()
getWidth(): int
getHeight(): int
getRemainingMineCount(): int
More details can be included to increase the precision of the method definitions before coding. Such precision is important to avoid misunderstandings between the developer of the class and developers of other classes that interact with the class.
- Operation: newGame(): void
- Description: Generates a new WxH minefield with M mines. Any existing minefield will be overwritten.
- Preconditions: None
- Postconditions: A new minefield is created. Game state is READY.
Preconditions are the conditions that must be true before calling this operation. Postconditions describe the system after the operation is complete. Note that postconditions do not say what happens during the operation. Here is another example:
- Operation: clearCellAt(int x, int y): void
- Description: Records the cell at x, y as cleared.
- Parameters: x, y coordinates of the cell
- Preconditions: game state is READY or IN_PLAY. x and y are in 0..(W-1) and 0..(H-1), respectively.
- Postconditions: Cell at x, y changes state to ZERO, ONE, TWO, THREE, …, EIGHT, or INCORRECTLY_CLEARED. Game state changes to IN_PLAY, WON or LOST as appropriate.
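Such a specification can also be captured in the code itself, e.g. as a header comment on the corresponding method (a sketch; the exact format and placement are up to the project):

/**
 * Records the cell at (x, y) as cleared.
 * Preconditions: game state is READY or IN_PLAY; x and y are within the minefield.
 * Postconditions: the cell changes to ZERO..EIGHT or INCORRECTLY_CLEARED;
 *     the game state changes to IN_PLAY, WON or LOST as appropriate.
 */
public void clearCellAt(int x, int y) {
    // ...
}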
Libraries
A library is a collection of modular code that is general and can be used by other programs.
Java classes you get with the JDK (such as String, ArrayList, HashMap, etc.) are library classes that are provided in the default Java distribution.
Natty is a Java library that can be used for parsing strings that represent dates e.g. The 31st of April in the year 2008
Built-in modules you get with Python (such as csv, random, sys, etc.) are libraries that are provided in the default Python distribution. Classes such as list, str, and dict are built-in library classes that you get with Python.
Colorama is a Python library that can be used for colorizing text in a CLI.
These are the typical steps required to use a library:
- Read the documentation to confirm that its functionality fits your needs.
- Check the license to confirm that it allows reuse in the way you plan to reuse it. For example, some libraries might allow non-commercial use only.
- Download the library and make it accessible to your project. Alternatively, you can configure your dependency management tool to do it for you.
- Call the library API from your code where you need to use the library's functionality.
Frameworks
The overall structure and execution flow of a specific category of software systems can be very similar. The similarity is an opportunity to reuse at a high scale.
Running example:
IDEs for different programming languages are similar in how they support editing code, organizing project files, debugging, etc.
A software framework is a reusable implementation of a software (or part thereof) providing generic functionality that can be selectively customized to produce a specific application.
Running example:
Eclipse is an IDE framework that can be used to create IDEs for different programming languages.
Some frameworks provide a complete implementation of a default behavior which makes them immediately usable.
Running example:
Eclipse is a fully functional Java IDE out-of-the-box.
A framework facilitates the adaptation and customization of some desired functionality.
Running example:
The Eclipse plugin system can be used to create an IDE for different programming languages while reusing most of the existing IDE features of Eclipse.
E.g. https://marketplace.eclipse.org/content/pydev-python-ide-eclipse
Some frameworks cover only a specific component or an aspect.
JavaFX is a framework for creating Java GUIs. Tkinter is a GUI framework for Python.
More examples of frameworks
- Frameworks for web-based applications: Drupal (PHP), Django (Python), Ruby on Rails (Ruby), Spring (Java)
- Frameworks for testing: JUnit (Java), unittest (Python), Jest (JavaScript)
Although both frameworks and libraries are reuse mechanisms, there are notable differences:
Libraries are meant to be used ‘as is’ while frameworks are meant to be customized/extended. e.g., writing plugins for Eclipse so that it can be used as an IDE for different languages (C++, PHP, etc.), adding modules and themes to Drupal, and adding test cases to JUnit.
Your code calls the library code while the framework code calls your code. Frameworks use a technique called inversion of control, aka the “Hollywood principle” (i.e. don’t call us, we’ll call you!). That is, you write code that will be called by the framework, e.g. writing test methods that will be called by the JUnit framework. In the case of libraries, your code calls libraries.
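For example (a minimal sketch using JUnit; the Calculator class is hypothetical), you only write and annotate the test method; the JUnit framework discovers the method and calls it for you:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculatorTest {

    @Test
    void add_twoPositiveNumbers_correctSum() {
        // you never call this method yourself -- the JUnit framework does
        assertEquals(5, new Calculator().add(2, 3));
    }
}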
Platforms
A platform provides a runtime environment for applications. A platform is often bundled with various libraries, tools, frameworks, and technologies in addition to a runtime environment but the defining characteristic of a software platform is the presence of a runtime environment.
Technically, an operating system can be called a platform. For example, Windows PC is a platform for desktop applications while iOS is a platform for mobile applications.
Two well-known examples of platforms are JavaEE and .NET, both of which sit above the operating systems layer, and are used to develop enterprise applications. Infrastructure services such as connection pooling, load balancing, remote code execution, transaction management, authentication, security, messaging etc. are done similarly in most enterprise applications. Both JavaEE and .NET provide these services to applications in a customizable way without developers having to implement them from scratch every time.
- JavaEE (Java Enterprise Edition) is both a framework and a platform for writing enterprise applications. The runtime used by JavaEE applications is the JVM (Java Virtual Machine) that can run on different Operating Systems.
- .NET is a similar platform and framework. Its runtime is called CLR (Common Language Runtime) and it is usually used on Windows machines.
Cloud Computing
Cloud computing is the delivery of computing as a service over the network, rather than a product running on a local machine. This means the actual hardware and software is located at a remote location, typically, at a large server farm, while users access them over the network. Maintenance of the hardware and software is managed by the cloud provider while users typically pay for only the amount of services they use. This model is similar to the consumption of electricity; the power company manages the power plant, while the consumers pay them only for the electricity used. The cloud computing model optimizes hardware and software utilization and reduces the cost to consumers. Furthermore, users can scale up/down their utilization at will without having to upgrade their hardware and software. The traditional non-cloud model of computing is similar to everyone buying their own generators to create electricity for their own use.
Cloud computing can deliver computing services at three levels:
Infrastructure as a service (IaaS) delivers computer infrastructure as a service. For example, a user can deploy virtual servers on the cloud instead of buying physical hardware and installing server software on them. Another example would be a customer using storage space on the cloud for off-site storage of data. Rackspace is an example of an IaaS cloud provider. Amazon Elastic Compute Cloud (Amazon EC2) is another one.
Platform as a service (PaaS) provides a platform on which developers can build applications. Developers do not have to worry about infrastructure issues such as deploying servers or load balancing as is required when using IaaS. Those aspects are automatically taken care of by the platform. The price to pay is reduced flexibility; applications written on PaaS are limited to facilities provided by the platform. A PaaS example is the Google App Engine where developers can build applications using Java, Python, PHP, or Go whereas Amazon EC2 allows users to deploy applications written in any language on their virtual servers.
Software as a service (SaaS) allows applications to be accessed over the network instead of installing them on a local machine. For example, Google Docs is a SaaS word processing software, while Microsoft Word is a traditional word processing software.
SECTION: QUALITY ASSURANCE
Quality Assurance
Introduction
Software Quality Assurance (QA) is the process of ensuring that the software being built has the required levels of quality.
While testing is the most common activity used in QA, there are other complementary techniques such as static analysis, code reviews, and formal verification.
Quality Assurance = Validation + Verification
QA involves checking two aspects:
- Validation: are you building the right system i.e., are the requirements correct?
- Verification: are you building the system right i.e., are the requirements implemented correctly?
Whether something belongs under validation or verification is not that important. What is more important is that both are done, instead of limiting to only verification (i.e., remember that the requirements can be wrong too).
Code Reviews
Code review is the systematic examination of code with the intention of finding where the code can be improved.
Reviews can be done in various forms. Some examples below:
Pull Request reviews
- Project Management Platforms such as GitHub and BitBucket allow the new code to be proposed as Pull Requests and provide the ability for others to review the code in the PR.
In pair programming
- As pair programming involves two programmers working on the same code at the same time, there is an implicit review of the code by the other member of the pair.
Formal inspections
Inspections involve a group of people systematically examining project artifacts to discover defects. Members of the inspection team play various roles during the process, such as:
- the author - the creator of the artifact
- the moderator - the planner and executor of the inspection meeting
- the secretary - the recorder of the findings of the inspection
- the inspector/reviewer - the one who inspects/reviews the artifact
Advantages of code review over testing:
- It can detect functionality defects as well as other problems such as coding standard violations.
- It can verify non-code artifacts and incomplete code.
- It does not require test drivers or stubs.
Disadvantages:
- It is a manual process and therefore, error prone.
Static Analysis
Static analysis: Static analysis is the analysis of code without actually executing the code.
Static analysis of code can find useful information such as unused variables, unhandled exceptions, style errors, and statistics. Most modern IDEs come with some inbuilt static analysis capabilities. For example, an IDE can highlight unused variables as you type the code into the editor.
The term static in static analysis refers to the fact that the code is analyzed without executing the code. In contrast, dynamic analysis requires the code to be executed to gather additional information about the code e.g., performance characteristics.
Higher-end static analysis tools (static analyzers) can perform more complex analysis such as locating potential bugs, memory leaks, inefficient code structures, etc.
Some example static analyzers for Java: CheckStyle, PMD, FindBugs
Linters are a subset of static analyzers that specifically aim to locate areas where the code can be made 'cleaner'.
Formal Verification
Formal verification uses mathematical techniques to prove the correctness of a program.
Advantages:
- Formal verification can be used to prove the absence of errors. In contrast, testing can only prove the presence of errors, not their absence.
Disadvantages:
- It only proves the compliance with the specification, but not the actual utility of the software.
- It requires highly specialized notations and knowledge which makes it an expensive technique to administer. Therefore, formal verification is more commonly used in safety-critical software such as flight control systems.
Testing
Introduction
Testing: Operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. –- source: IEEE

When testing, you execute a set of test cases. A test case specifies how to perform a test. At a minimum, it specifies the input to the software under test (SUT) and the expected behavior.
Example: A minimal test case for testing a browser:
- Input – Start the browser using a blank page (vertical scrollbar disabled). Then, load longfile.html located in the test data folder.
- Expected behavior – The scrollbar should be automatically enabled upon loading longfile.html.
Test cases can be determined based on the specification, reviewing similar existing systems, or comparing to the past behavior of the SUT.
For each test case you should do the following:
- Feed the input to the SUT
- Observe the actual output
- Compare actual output with the expected output
A test case failure is a mismatch between the expected behavior and the actual behavior. A failure indicates a potential defect (or a bug), unless the error is in the test case itself.
Example: In the browser example above, a test case failure is implied if the scrollbar remains disabled after loading longfile.html. The defect/bug causing that failure could be an uninitialized variable.
Testability is an indication of how easy it is to test an SUT. As testability depends a lot on the design and implementation, you should try to increase the testability when you design and implement software. The higher the testability, the easier it is to achieve better quality software.
Testing Types
Unit Testing
Unit testing: testing individual units (methods, classes, subsystems, ...) to ensure each piece works correctly.
In OOP code, it is common to write one or more unit tests for each public method of a class.
Here are the code skeletons for a Foo class containing two methods and a FooTest class that contains unit tests for those two methods.
class Foo {
    String read() {
        // ...
    }

    void write(String input) {
        // ...
    }
}

class FooTest {

    @Test
    void read() {
        // a unit test for Foo#read() method
    }

    @Test
    void write_emptyInput_exceptionThrown() {
        // a unit test for Foo#write(String) method
    }

    @Test
    void write_normalInput_writtenCorrectly() {
        // another unit test for Foo#write(String) method
    }
}
import unittest

class Foo:
    def read(self):
        # ...
        pass

    def write(self, input):
        # ...
        pass

class FooTest(unittest.TestCase):
    def test_read(self):
        # a unit test for read() method
        pass

    def test_write_emptyInput_ignored(self):
        # a unit test for write(string) method
        pass

    def test_write_normalInput_writtenCorrectly(self):
        # another unit test for write(string) method
        pass
A proper unit test requires the unit to be tested in isolation so that bugs in its dependencies cannot influence the test i.e. bugs outside of the unit should not affect the unit tests.
If a Logic class depends on a Storage class, unit testing the Logic class requires isolating the Logic class from the Storage class.
Stubs can isolate the SUT from its dependencies.
Stub: A stub has the same interface as the component it replaces, but its implementation is so simple that it is unlikely to have any bugs. It mimics the responses of the component, but only for a limited set of predetermined inputs. That is, it does not know how to respond to any other inputs. Typically, these mimicked responses are hard-coded in the stub rather than computed or retrieved from elsewhere, e.g. from a database.
Consider the code below:
class Logic {
    Storage s;

    Logic(Storage s) {
        this.s = s;
    }

    String getName(int index) {
        return "Name: " + s.getName(index);
    }
}

interface Storage {
    String getName(int index);
}

class DatabaseStorage implements Storage {

    @Override
    public String getName(int index) {
        return readValueFromDatabase(index);
    }

    private String readValueFromDatabase(int index) {
        // retrieve name from the database
    }
}
Normally, you would use the Logic class as follows (note how the Logic object depends on a DatabaseStorage object to perform the getName() operation):
Logic logic = new Logic(new DatabaseStorage());
String name = logic.getName(23);
You can test it like this:
@Test
void getName() {
    Logic logic = new Logic(new DatabaseStorage());
    assertEquals("Name: John", logic.getName(5));
}
However, this logic object being tested is making use of a DatabaseStorage object, which means a bug in the DatabaseStorage class can affect the test. Therefore, this test is not testing Logic in isolation from its dependencies and hence is not a pure unit test.
Here is a stub class you can use in place of DatabaseStorage:
class StorageStub implements Storage {

    @Override
    public String getName(int index) {
        if (index == 5) {
            return "Adam";
        } else {
            throw new UnsupportedOperationException();
        }
    }
}
Note how the StorageStub has the same interface as DatabaseStorage, but is so simple that it is unlikely to contain bugs, and is pre-configured to respond with a hard-coded response, presumably, the correct response DatabaseStorage is expected to return for the given test input.
Here is how you can use the stub to write a unit test. This test is not affected by any bugs in the DatabaseStorage class and hence is a pure unit test.
@Test
void getName() {
    Logic logic = new Logic(new StorageStub());
    assertEquals("Name: Adam", logic.getName(5));
}
In addition to Stubs, there are other types of replacements you can use during testing, e.g. Mocks, Fakes, Dummies, and Spies.
Integration Testing
Integration testing: testing whether different parts of the software work together (i.e. integrate) as expected. Integration tests aim to discover bugs in the 'glue code' related to how components interact with each other. These bugs are often the result of a misunderstanding of what the parts are supposed to do versus what the parts actually do.
Suppose a class Car uses classes Engine and Wheel. If the Car class assumed a Wheel can support a speed of up to 200 mph but the actual Wheel can only support a speed of up to 150 mph, it is the integration test that is supposed to uncover this discrepancy. A sketch of such a test is given below.
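Here is a minimal sketch of an integration test for that scenario. The Car, Engine, and Wheel classes below are invented stand-ins (they are only described, not shown, above), and the speed-capping behavior inside Car is an assumption made purely to keep the sketch self-contained and runnable.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Minimal hypothetical classes, included only to keep the sketch self-contained.
class Wheel {
    int maxSupportedSpeed() {
        return 150; // the actual Wheel only supports up to 150 mph
    }
}

class Engine {
}

class Car {
    private final Engine engine; // not exercised in this sketch
    private final Wheel wheel;
    private int speed;

    Car(Engine engine, Wheel wheel) {
        this.engine = engine;
        this.wheel = wheel;
    }

    void accelerateTo(int targetSpeed) {
        // A Car that wrongly assumed the Wheel supports 200 mph would skip this capping.
        speed = Math.min(targetSpeed, wheel.maxSupportedSpeed());
    }

    int getSpeed() {
        return speed;
    }
}

class CarIntegrationTest {

    // Uses the real Engine and Wheel (no stubs), because the goal is to check that
    // the parts work together correctly, not to test Car in isolation.
    @Test
    void accelerateTo_beyondWheelLimit_speedCappedAtWheelLimit() {
        Car car = new Car(new Engine(), new Wheel());
        car.accelerateTo(200);
        assertEquals(150, car.getSpeed()); // fails if Car assumes the Wheel can do 200 mph
    }
}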
System Testing
System testing: take the whole system and test it against the system specification.
System testing is typically done by a testing team (also called a QA team).
System test cases are based on the specified external behavior of the system. Sometimes, system tests go beyond the bounds defined in the specification. This is useful when testing that the system fails 'gracefully' when pushed beyond its limits.
Suppose the SUT is a browser that is supposedly capable of handling web pages containing up to 5000 characters. Given below is a test case to test if the SUT fails gracefully if pushed beyond its limits.
Test case: load a web page that is too big
* Input: load a web page containing more than 5000 characters.
* Expected behavior: the browser aborts the loading of the page and shows a meaningful error message.
This test case would fail if the browser attempted to load the large file anyway and crashed.
System testing includes testing against non-functional requirements too. Here are some examples:
- Performance testing – to ensure the system responds quickly.
- Load testing (also called stress testing or scalability testing) – to ensure the system can work under heavy load.
- Security testing – to test how secure the system is.
- Compatibility testing, interoperability testing – to check whether the system can work with other systems.
- Usability testing – to test how easy it is to use the system.
- Portability testing – to test whether the system works on different platforms.
Alpha-Beta Testing
Alpha testing is performed by the users, under controlled conditions set by the software development team.
Beta testing is performed by a selected subset of target users of the system in their natural work setting.
An open beta release is the release of not-yet-production-quality-but-almost-there software to the general population. For example, Google’s Gmail was in 'beta' for many years before the label was finally removed.
Dogfooding
Eating your own dog food (aka dogfooding) is when creators use their own product so as to test the product.
Developer Testing
Developer testing is the testing done by the developers themselves as opposed to dedicated testers or end-users.
Delaying testing until the full product is complete has a number of disadvantages:
- Locating the cause of a test case failure is difficult due to the larger search space; in a large system, the search space could be millions of lines of code, written by hundreds of developers! The failure may also be due to multiple inter-related bugs.
- Fixing a bug found during such testing could result in major rework, especially if the bug originated from the design or during requirements specification i.e. a faulty design or faulty requirements.
- One bug might 'hide' other bugs, which could emerge only after the first bug is fixed.
- The delivery may have to be delayed if too many bugs are found during testing.
Therefore, it is better to do early testing, as hinted by the popular rule of thumb given below, also illustrated by the graph below it.
The earlier a bug is found, the easier and cheaper it is to fix.

Such early testing of software is usually, and often by necessity, done by the developers themselves, i.e., developer testing.
Exploratory vs Scripted Testing
Here are two alternative approaches to testing software: scripted testing and exploratory testing.
Scripted testing: First write a set of test cases based on the expected behavior of the SUT, and then perform testing based on that set of test cases.
Exploratory testing: Devise test cases on-the-fly, creating new test cases based on the results of the past test cases.
Exploratory testing is ‘the simultaneous learning, test design, and test execution’ [source: bach-et-explained], whereby the nature of the follow-up test case is decided based on the behavior of the previous test cases. In other words, it involves running the system and trying out various operations. It is called exploratory testing because testing is driven by observations made during testing. Exploratory testing usually starts with areas identified as error-prone, based on the tester’s past experience with similar systems. One tends to conduct more tests for those operations where more faults are found.
Here is an example thought process behind a segment of an exploratory testing session:
“Hmm... looks like feature x is broken. This usually means feature n and k could be broken too; you need to look at them soon. But before that, you should give a good test run to feature y because users can still use the product if feature y works, even if x doesn’t work. Now, if feature y doesn’t work 100%, you have a major problem and this has to be made known to the development team sooner rather than later...”
Exploratory testing is also known as reactive testing, error guessing technique, attack-based testing, and bug hunting.
Which approach is better – scripted or exploratory? A mix is better.
The success of exploratory testing depends on the tester’s prior experience and intuition. Exploratory testing should be done by experienced testers, using a clear strategy/plan/framework. Ad-hoc exploratory testing by unskilled or inexperienced testers without a clear strategy is not recommended for real-world non-trivial systems. While exploratory testing may allow us to detect some problems in a relatively short time, it is not prudent to use exploratory testing as the sole means of testing a critical system.
Scripted testing is more systematic, and hence, likely to discover more bugs given sufficient time, while exploratory testing would aid in quick error discovery, especially if the tester has a lot of experience in testing similar systems.
In some contexts, you will achieve your testing mission better through a more scripted approach; in other contexts, your mission will benefit more from the ability to create and improve tests as you execute them. I find that most situations benefit from a mix of scripted and exploratory approaches. --[source: bach-et-explained]
Acceptance Testing
Acceptance testing (aka User Acceptance Testing (UAT)): test the system to ensure it meets the user requirements.
Acceptance tests give an assurance to the customer that the system does what it is intended to do. Acceptance test cases are often defined at the beginning of the project, usually based on the use case specification. Successful completion of UAT is often a prerequisite to the project sign-off.
Acceptance testing comes after system testing. Similar to system testing, acceptance testing involves testing the whole system.
Some differences between system testing and acceptance testing:
System Testing | Acceptance Testing |
---|---|
Done against the system specification | Done against the requirements specification |
Done by testers of the project team | Done by a team that represents the customer |
Done on the development environment or a test bed | Done on the deployment site or on a close simulation of the deployment site |
Both negative and positive test cases | More focus on positive test cases |
Note: negative test cases: cases where the SUT is not expected to work normally e.g. incorrect inputs; positive test cases: cases where the SUT is expected to work normally
Requirement specification versus system specification
The requirement specification need not be the same as the system specification. Some example differences:
Requirements specification | System specification |
---|---|
limited to how the system behaves in normal working conditions | can also include details on how it will fail gracefully when pushed beyond limits, how to recover, etc. |
written in terms of problems that need to be solved (e.g. provide a method to locate an email quickly) | written in terms of how the system solves those problems (e.g. explain the email search feature) |
specifies the interface available for intended end-users | could contain additional APIs not available for end-users (for the use of developers/testers) |
However, in many cases one document serves as both a requirement specification and a system specification.
Passing system tests does not necessarily mean passing acceptance testing. Some examples:
- The system might work on the testbed environments but might not work the same way in the deployment environment, due to subtle differences between the two environments.
- The system might conform to the system specification but could fail to solve the problem it was supposed to solve for the user, due to flaws in the system design.
Regression Testing
When you modify a system, the modification may result in some unintended and undesirable effects on the system. Such an effect is called a regression.
Regression testing is the re-testing of the software to detect regressions. The typical way to detect regressions is retesting all related components, even if they had been tested before.
Regression testing is more effective when it is done frequently, after each small change. However, doing so can be prohibitively expensive if testing is done manually. Hence, regression testing is more practical when it is automated.
Test Automation
An automated test case can be run programmatically and the result of the test case (pass or fail) is determined programmatically. Compared to manual testing, automated testing reduces the effort required to run tests repeatedly and increases precision of testing (because manual testing is susceptible to human errors).
A simple way to semi-automate testing of a CLI (Command Line Interface) app is by using input/output re-direction. Here are the high-level steps:
- First, you feed the app with a sequence of test inputs that is stored in a file while redirecting the output to another file.
- Next, you compare the actual output file with another file containing the expected output.
Let's assume you are testing a CLI app called AddressBook. Here are the detailed steps:
- Store the test input in the text file input.txt.
- Store the output you expect from the SUT in another text file expected.txt.
- Run the program as given below, which will redirect the text in input.txt as the input to AddressBook and similarly, will redirect the output of AddressBook to a text file output.txt. Note that this does not require any changes in the AddressBook code.
java AddressBook < input.txt > output.txt
The way to run a CLI program differs based on the language. e.g., in Python, assuming the code is in the AddressBook.py file, use the command
python AddressBook.py < input.txt > output.txt
If you are using Windows, use a normal MS-DOS terminal (i.e., cmd.exe) to run the app, not a PowerShell window.
- Next, compare output.txt with expected.txt. This can be done using a utility such as Windows' FC (i.e. File Compare) command, Unix's diff command, or a GUI tool such as WinMerge.
FC output.txt expected.txt
Note that the above technique is only suitable when testing CLI apps, and only if the exact output can be predetermined. If the output varies from one run to the other (e.g. it contains a time stamp), this technique will not work. In those cases, you need more sophisticated ways of automating tests.
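If the exact output can be predetermined and you would rather do the final comparison programmatically as well (so the whole run can report pass/fail on its own), a few lines of Java can stand in for the FC/diff step. This is only a sketch; the file names simply match the example above.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class OutputComparator {
    public static void main(String[] args) throws Exception {
        // Read both files into memory; assumes they are reasonably small text files.
        List<String> actual = Files.readAllLines(Paths.get("output.txt"));
        List<String> expected = Files.readAllLines(Paths.get("expected.txt"));

        if (actual.equals(expected)) {
            System.out.println("Test passed: output matches the expected output");
        } else {
            System.out.println("Test failed: output differs from the expected output");
        }
    }
}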
A test driver is the code that ‘drives’ the SUT for the purpose of testing, i.e. invoking the SUT with test inputs and verifying if the behavior is as expected.
PayrollTest ‘drives’ the Payroll class by sending it test inputs and verifies if the output is as expected.
public class PayrollTest {
    public static void main(String[] args) throws Exception {
        // test setup
        Payroll p = new Payroll();

        // test case 1
        p.setEmployees(new String[]{"E001", "E002"});
        // automatically verify the response
        if (p.totalSalary() != 6400) {
            throw new Error("case 1 failed");
        }

        // test case 2
        p.setEmployees(new String[]{"E001"});
        if (p.totalSalary() != 2300) {
            throw new Error("case 2 failed");
        }

        // more tests...

        System.out.println("All tests passed");
    }
}
JUnit is a tool for automated testing of Java programs. Similar tools are available for other languages and for automating different types of testing.
This is an automated test for a Payroll class, written using JUnit libraries.
// other test methods

@Test
public void testTotalSalary() {
    Payroll p = new Payroll();

    // test case 1
    p.setEmployees(new String[]{"E001", "E002"});
    assertEquals(6400, p.totalSalary());

    // test case 2
    p.setEmployees(new String[]{"E001"});
    assertEquals(2300, p.totalSalary());

    // more tests...
}
Most modern IDEs have integrated support for testing tools. The figure below shows the JUnit output when running some JUnit tests using the Eclipse IDE.

If a software product has a GUI (Graphical User Interface) component, all product-level testing (i.e. the types of testing mentioned above) need to be done using the GUI. However, testing the GUI is much harder than testing the CLI (Command Line Interface) or API, for the following reasons:
- Most GUIs can support a large number of different operations, many of which can be performed in any arbitrary order.
- GUI operations are more difficult to automate than API testing. Reliably automating GUI operations and automatically verifying whether the GUI behaves as expected is harder than calling an operation and comparing its return value with an expected value. Therefore, automated regression testing of GUIs is rather difficult.
- The appearance of a GUI (and sometimes even behavior) can be different across platforms and even environments. For example, a GUI can behave differently based on whether it is minimized or maximized, in focus or out of focus, and in a high resolution display or a low resolution display.

Moving as much logic as possible out of the GUI can make GUI testing easier. That way, you can bypass the GUI to test the rest of the system using automated API testing. While this still requires the GUI to be tested, the number of such test cases can be reduced as most of the system will have been tested using automated API testing.
There are testing tools that can automate GUI testing.
Some tools used for automated GUI testing:
- TestFX can do automated testing of JavaFX GUIs.
- Visual Studio supports the ‘record replay’ type of GUI test automation.
- Selenium can be used to automate testing of web application UIs.
Test Coverage
Test coverage is a metric used to measure the extent to which testing exercises the code i.e., how much of the code is 'covered' by the tests.
Here are some examples of different coverage criteria:
- Function/method coverage : based on functions executed e.g., testing executed 90 out of 100 functions.
- Statement coverage : based on the number of lines of code executed e.g., testing executed 23k out of 25k LOC.
- Decision/branch coverage: based on the decision points exercised, e.g., an if statement evaluated to both true and false with separate test cases during testing is considered 'covered'.
- Condition coverage: based on the boolean sub-expressions, each evaluated to both true and false with different test cases. Condition coverage is not the same as decision coverage, e.g., if(x > 2 && x < 44) is considered one decision point but two conditions.
For 100% branch or decision coverage, two test cases are required:
- (x > 2 && x < 44) == true: e.g., x == 4
- (x > 2 && x < 44) == false: e.g., x == 100

For 100% condition coverage, three test cases are required (a code sketch covering these cases is given after this list of coverage criteria):
- (x > 2) == true, (x < 44) == true: e.g., x == 4 [see note 1]
- (x < 44) == false: e.g., x == 100
- (x > 2) == false: e.g., x == 0

Note 1: A case where both conditions are true is needed because most execution environments use a short-circuiting behavior for compound boolean expressions, e.g., given an expression c1 && c2, c2 will not be evaluated if c1 is false (as the final result is going to be false anyway).
- Path coverage measures coverage in terms of possible paths through a given part of the code executed. 100% path coverage means all possible paths have been executed. A commonly used notation for path analysis is called the Control Flow Graph (CFG).
- Entry/exit coverage measures coverage in terms of possible calls to and exits from the operations in the SUT.
Entry points refer to all places from which the method is called from the rest of the code i.e., all places where the control is handed over to the method in concern.
Exit points refer to points at which the control is returned to the caller e.g., return statements, throwing of exceptions.
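Here is the sketch referenced above: the three test cases needed for 100% condition coverage of the expression x > 2 && x < 44, written against a hypothetical foo(int x) method that simply returns the value of that expression (the method is included only to keep the sketch self-contained). Note that the first two of these test cases alone would already give 100% decision coverage.
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class ConditionCoverageTest {

    // (x > 2) == true, (x < 44) == true -> overall decision is true
    @Test
    void foo_bothConditionsTrue_returnsTrue() {
        assertTrue(foo(4));
    }

    // (x < 44) == false -> overall decision is false
    @Test
    void foo_secondConditionFalse_returnsFalse() {
        assertFalse(foo(100));
    }

    // (x > 2) == false -> overall decision is false (second condition short-circuited)
    @Test
    void foo_firstConditionFalse_returnsFalse() {
        assertFalse(foo(0));
    }

    // Hypothetical SUT, shown here only to make the example self-contained.
    private boolean foo(int x) {
        return x > 2 && x < 44;
    }
}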
Measuring coverage is often done using coverage analysis tools. Most IDEs have inbuilt support for measuring test coverage, or at least have plugins that can measure test coverage.
Coverage analysis can be useful in improving the quality of testing e.g., if a set of test cases does not achieve 100% branch coverage, more test cases can be added to cover missed branches.
Measuring code coverage in Intellij IDEA (watch from the 4 minutes 50 seconds mark)
Dependency Injection
Dependency injection is the process of 'injecting' objects to replace current dependencies with different objects. This is often used to inject stubs to isolate the SUT from its dependencies so that it can be tested in isolation.
A Foo object normally depends on a Bar object, but you can inject a BarStub object so that the Foo object no longer depends on a Bar object. Now you can test the Foo object in isolation from the Bar object.

Polymorphism can be used to implement dependency injection, as can be seen in the example given in [Quality Assurance → Testing → Unit Testing → Stubs] where a stub is injected to replace a dependency.
Here is another example of using polymorphism to implement dependency injection:
Suppose you want to unit test Payroll#totalSalary() given below. The method depends on the SalaryManager object to calculate the return value. Note how setSalaryManager(SalaryManager) can be used to inject a SalaryManager object to replace the current SalaryManager object.
class Payroll {
    private SalaryManager manager = new SalaryManager();
    private String[] employees;

    void setEmployees(String[] employees) {
        this.employees = employees;
    }

    void setSalaryManager(SalaryManager sm) {
        this.manager = sm;
    }

    double totalSalary() {
        double total = 0;
        for (int i = 0; i < employees.length; i++) {
            total += manager.getSalaryForEmployee(employees[i]);
        }
        return total;
    }
}

class SalaryManager {
    double getSalaryForEmployee(String empID) {
        // code to access employee’s salary history
        // code to calculate total salary paid and return it
    }
}
During testing, you can inject a SalaryManagerStub object to replace the SalaryManager object.
class PayrollTest {
    public static void main(String[] args) {
        // test setup
        Payroll p = new Payroll();
        // dependency injection
        p.setSalaryManager(new SalaryManagerStub());

        // test case 1
        p.setEmployees(new String[]{"E001", "E002"});
        assertEquals(2500.0, p.totalSalary());

        // test case 2
        p.setEmployees(new String[]{"E001"});
        assertEquals(1000.0, p.totalSalary());

        // more tests ...
    }
}
class SalaryManagerStub extends SalaryManager {

    /** Returns hard-coded values used for testing */
    double getSalaryForEmployee(String empID) {
        if (empID.equals("E001")) {
            return 1000.0;
        } else if (empID.equals("E002")) {
            return 1500.0;
        } else {
            throw new Error("unknown id");
        }
    }
}
TDD
Test-Driven Development (TDD) advocates writing the tests before writing the SUT, while evolving functionality and tests in small increments. In TDD you first define the precise behavior of the SUT using test code, and then update the SUT to match the specified behavior. While TDD has its fair share of detractors, there are many who consider it a good way to reduce defects. One big advantage of TDD is that it guarantees the code is testable.
Note that TDD does not imply writing all the test cases first before writing functional code. Rather, proceed in small steps:
- Decide what behavior to implement.
- Write/modify a test case to test that behavior.
- Run the test cases and watch them fail.
- Implement the behavior.
- Run the test cases.
- Keep modifying the code and rerunning test cases until they all pass.
- Refactor code to improve quality.
- Repeat the cycle for each small unit of behavior that needs to be implemented.
Some TDD proponents cite the following as the three rules of TDD, which once again emphasize the need to proceed in very small steps:
- You are not allowed to write any production code unless it is to make a failing unit test pass.
- You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
- You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
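To make the cycle concrete, here is a minimal sketch of a single TDD micro-cycle, using a hypothetical WordCounter class invented purely for this illustration: the test is written first (and fails because the behavior does not exist yet), and only then is just enough production code written to make it pass.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 1: write a failing test that pins down the behavior to be implemented.
class WordCounterTest {
    @Test
    void countWords_twoWords_returnsTwo() {
        assertEquals(2, new WordCounter().countWords("hello world"));
    }
}

// Step 2: write just enough production code to make the test pass, then refactor.
class WordCounter {
    int countWords(String sentence) {
        // Deliberately minimal: only as much logic as the current test requires.
        return sentence.trim().split("\\s+").length;
    }
}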
Test Case Design
Introduction
Except for trivial SUTs, exhaustive testing is not practical because such testing often requires a massive/infinite number of test cases.
Consider the test cases for adding a string object to a collection:
- Add an item to an empty collection.
- Add an item when there is one item in the collection.
- Add an item when there are 2, 3, .... n items in the collection.
- Add an item that has an English, a French, a Spanish, ... word.
- Add an item that is the same as an existing item.
- Add an item immediately after adding another item.
- Add an item immediately after system startup.
- ...
Exhaustive testing of this operation can take many more test cases.
Program testing can be used to show the presence of bugs, but never to show their absence!
--Edsger Dijkstra
Every test case adds to the cost of testing. In some systems, a single test case can cost thousands of dollars e.g. on-field testing of flight-control software. Therefore, test cases need to be designed to make the best use of testing resources. In particular:
Testing should be effective i.e., it finds a high percentage of existing bugs e.g., a set of test cases that finds 60 defects is more effective than a set that finds only 30 defects in the same system.
Testing should be efficient i.e., it has a high rate of success (bugs found/test cases), e.g., a set of 20 test cases that finds 8 defects is more efficient than another set of 40 test cases that finds the same 8 defects.
For testing to be E&E (both effective and efficient), each new test you add should be targeting a potential fault that is not already targeted by existing test cases. There are test case design techniques that can help us improve the E&E of testing.
A positive test case is when the test is designed to produce an expected/valid behavior. On the other hand, a negative test case is designed to produce a behavior that indicates an invalid/unexpected situation, such as an error message.
Consider the testing of the method print(Integer i), which prints the value of i.
- A positive test case: i == new Integer(50);
- A negative test case: i == null;
Test case design can be of three types, based on how much of the SUT's internal details are considered when designing test cases:
Black-box (aka specification-based or responsibility-based) approach: test cases are designed exclusively based on the SUT’s specified external behavior.
White-box (aka glass-box or structured or implementation-based) approach: test cases are designed based on what is known about the SUT’s implementation, i.e. the code.
Gray-box approach: test case design uses some important information about the implementation. For example, if the implementation of a sort operation uses different algorithms to sort lists shorter than 1000 items and lists longer than 1000 items, more meaningful test cases can then be added to verify the correctness of both algorithms.
Equivalence Partitions
Consider the testing of the following operation.
isValidMonth(m): returns true if m (an int) is in the range [1..12]
It is inefficient and impractical to test this method for all integer values [MIN_INT to MAX_INT]. Fortunately, there is no need to test all possible input values. For example, if the input value 233 fails to produce the correct result, the input 234 is likely to fail too; there is no need to test both.
In general, most SUTs do not treat each input in a unique way. Instead, they process all possible inputs in a small number of distinct ways. That means a range of inputs is treated the same way inside the SUT. Equivalence partitioning (EP) is a test case design technique that uses the above observation to improve the E&E of testing.
Equivalence partition (aka equivalence class): A group of test inputs that are likely to be processed by the SUT in the same way.
By dividing possible inputs into equivalence partitions you can,
- avoid testing too many inputs from one partition. Testing too many inputs from the same partition is unlikely to find new bugs. This increases the efficiency of testing by reducing redundant test cases.
- ensure all partitions are tested. Missing partitions can result in bugs going unnoticed. This increases the effectiveness of testing by increasing the chance of finding bugs.
Equivalence partitions (EPs) are usually derived from the specifications of the SUT.
These could be EPs for the isValidMonth example (a test sketch based on these partitions is given below):
- [MIN_INT ... 0]: below the range that produces true (produces false)
- [1 ... 12]: the range that produces true
- [13 ... MAX_INT]: above the range that produces true (produces false)
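Here is a hedged sketch of how those partitions could translate into unit tests, picking one representative value from each partition. The isValidMonth implementation shown is hypothetical and is included only to keep the sketch self-contained.
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class IsValidMonthTest {

    @Test
    void isValidMonth_belowValidRange_false() {
        assertFalse(isValidMonth(-5)); // representative of [MIN_INT ... 0]
    }

    @Test
    void isValidMonth_withinValidRange_true() {
        assertTrue(isValidMonth(7));   // representative of [1 ... 12]
    }

    @Test
    void isValidMonth_aboveValidRange_false() {
        assertFalse(isValidMonth(233)); // representative of [13 ... MAX_INT]
    }

    // Hypothetical SUT, included only to keep the sketch self-contained.
    private boolean isValidMonth(int m) {
        return m >= 1 && m <= 12;
    }
}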
When the SUT has multiple inputs, you should identify EPs for each input.
Consider the method duplicate(String s, int n): String, which returns a String that contains s repeated n times.
Example EPs for s:
- zero-length strings
- strings containing whitespace
- ...
Example EPs for n:
- 0
- negative values
- ...
An EP may not have adjacent values.
Consider the method isPrime(int i): boolean, which returns true if i is a prime number.
EPs for i:
- prime numbers
- non-prime numbers
Some inputs have only a small number of possible values and a potentially unique behavior for each value. In those cases, you have to consider each value as a partition by itself.
Consider the method showStatusMessage(GameStatus s): String, which returns a unique String for each of the possible values of s (GameStatus is an enum). In this case, each possible value of s will have to be considered as a partition. A sketch of how such partitions can be covered with a parameterized test is given below.
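When each enum value is its own partition, a parameterized test can cover every partition without writing one test method per value. The sketch below is hypothetical: the GameStatus values and the showStatusMessage implementation are invented stand-ins for the ones described above (and the test only checks that each message is non-empty, not that it is unique); JUnit 5's @EnumSource supplies each value in turn.
import static org.junit.jupiter.api.Assertions.assertFalse;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.EnumSource;

class ShowStatusMessageTest {

    // Hypothetical enum, standing in for the one described above.
    enum GameStatus { READY, IN_PLAY, WON, LOST }

    @ParameterizedTest
    @EnumSource(GameStatus.class) // runs the test once per enum value, i.e., once per partition
    void showStatusMessage_everyStatus_nonEmptyMessage(GameStatus s) {
        assertFalse(showStatusMessage(s).isEmpty());
    }

    // Hypothetical SUT, included only to keep the sketch self-contained.
    private String showStatusMessage(GameStatus s) {
        switch (s) {
        case READY: return "Ready to start";
        case IN_PLAY: return "Game in progress";
        case WON: return "You won!";
        default: return "You lost";
        }
    }
}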
Note that the EP technique is merely a heuristic and not an exact science, especially when applied manually (as opposed to using an automated program analysis tool to derive EPs). The partitions derived depend on how one ‘speculates’ the SUT to behave internally. Applying EP under a glass-box or gray-box approach can yield more precise partitions.
Consider the EPs given above for the method isValidMonth. A different tester might use these EPs instead:
- [1 ... 12]: the range that produces true
- [all other integers]: the range that produces false
When deciding EPs of OOP methods, you need to identify the EPs of all data participants that can potentially influence the behavior of the method, such as,
- the target object of the method call
- input parameters of the method call
- other data/objects accessed by the method such as global variables. This category may not be applicable if using the black box approach (because the test case designer using the black box approach will not know how the method is implemented).
Consider this method in the DataStack class:
push(Object o): boolean
- Adds o to the top of the stack if the stack is not full.
- Returns true if the push operation was a success.
- Throws MutabilityException if the global flag FREEZE == true, and InvalidValueException if o is null.
EPs:
- DataStack object: [full] [not full]
- o: [null] [not null]
- FREEZE: [true] [false]
A sketch of unit tests covering these partitions is given below.
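Below is a hedged sketch of unit tests that cover each of those partitions at least once. DataStack, MutabilityException, InvalidValueException, and the FREEZE flag are only described (not shown) above, so the minimal implementations included here, the capacity-based constructor, and the assumption that push returns false when the stack is full are all invented for illustration.
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class MutabilityException extends RuntimeException {
}

class InvalidValueException extends RuntimeException {
}

// Minimal hypothetical DataStack, included only to keep the sketch self-contained.
class DataStack {
    static boolean FREEZE = false; // stand-in for the global flag described above
    private final Object[] items;
    private int size = 0;

    DataStack(int capacity) { // assumed constructor
        items = new Object[capacity];
    }

    boolean push(Object o) {
        if (FREEZE) {
            throw new MutabilityException();
        }
        if (o == null) {
            throw new InvalidValueException();
        }
        if (size == items.length) {
            return false; // assumed behavior when the stack is full
        }
        items[size++] = o;
        return true;
    }
}

class DataStackTest {

    @Test
    void push_notFullNotNullNoFreeze_success() {
        assertTrue(new DataStack(10).push("item"));
    }

    @Test
    void push_stackFull_pushRejected() {
        DataStack stack = new DataStack(1);
        stack.push("first");
        assertFalse(stack.push("second")); // stack is already full
    }

    @Test
    void push_nullObject_exceptionThrown() {
        assertThrows(InvalidValueException.class, () -> new DataStack(10).push(null));
    }

    @Test
    void push_freezeFlagSet_exceptionThrown() {
        DataStack.FREEZE = true;
        assertThrows(MutabilityException.class, () -> new DataStack(10).push("item"));
        DataStack.FREEZE = false; // reset so other tests are unaffected
    }
}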
Consider a simple Minesweeper app. What are the EPs for the newGame() method of the Logic component?
As newGame() does not have any parameters, the only obvious participant is the Logic object itself.
Note that if the glass-box or the grey-box approach is used, other associated objects that are involved in the method might also be included as participants. For example, the Minefield object can be considered as another participant of the newGame() method. Here, the black-box approach is assumed.
Next, let us identify equivalence partitions for each participant. Will the newGame() method behave differently for different Logic objects? If yes, how will it differ? In this case, yes, it might behave differently based on the game state. Therefore, the equivalence partitions are:
- PRE_GAME: before the game starts, the minefield does not exist yet
- READY: a new minefield has been created and the app is waiting for the player's first move
- IN_PLAY: the current minefield is already in use
- WON, LOST: let us assume that newGame() behaves the same way for these two values
Consider the Logic component of the Minesweeper application. What are the EPs for the markCellAt(int x, int y) method? The partitions in bold represent valid inputs.
- Logic: PRE_GAME, READY, IN_PLAY, WON, LOST
- x: [MIN_INT..-1] [0..(W-1)] [W..MAX_INT] (assuming a minefield size of WxH)
- y: [MIN_INT..-1] [0..(H-1)] [H..MAX_INT]
- Cell at (x,y): HIDDEN, MARKED, CLEARED
Boundary Value Analysis
Boundary Value Analysis (BVA) is a test case design heuristic that is based on the observation that bugs often result from incorrect handling of boundaries of equivalence partitions. This is not surprising, as the end points of boundaries are often used in branching instructions, etc., where the programmer can make mistakes.
The markCellAt(int x, int y) operation could contain code such as if (x > 0 && x <= (W-1)), which involves the boundaries of x's equivalence partitions.
BVA suggests that when picking test inputs from an equivalence partition, values near boundaries (i.e. boundary values) are more likely to find bugs.
Boundary values are sometimes called corner cases.
Typically, you should choose three values around the boundary to test: one value from the boundary, one value just below the boundary, and one value just above the boundary. The number of values to pick depends on other factors, such as the cost of each test case.
Some examples:
Equivalence partition | Some possible test values (boundaries are in bold) |
---|---|
[1-12] | 0,1,2, 11,12,13 |
[MIN_INT, 0] | MIN_INT, MIN_INT+1, -1, 0 , 1 |
[any non-null String] | Empty String, a String of maximum possible length |
[prime numbers] | No specific boundary |
[non-empty Stack] | Stack with: no elements, one element, two elements, no empty spaces, only one empty space |
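As a concrete illustration of the three-values-around-each-boundary rule, here is a hedged sketch of boundary value tests for the [1..12] partition of the earlier isValidMonth example, using the boundary values from the table above; the isValidMonth implementation shown is hypothetical and included only to keep the sketch self-contained.
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class IsValidMonthBoundaryTest {

    @Test
    void isValidMonth_aroundLowerBoundary() {
        assertFalse(isValidMonth(0));  // just below the boundary
        assertTrue(isValidMonth(1));   // on the boundary
        assertTrue(isValidMonth(2));   // just above the boundary
    }

    @Test
    void isValidMonth_aroundUpperBoundary() {
        assertTrue(isValidMonth(11));  // just below the boundary
        assertTrue(isValidMonth(12));  // on the boundary
        assertFalse(isValidMonth(13)); // just above the boundary
    }

    // Hypothetical SUT, included only to keep the sketch self-contained.
    private boolean isValidMonth(int m) {
        return m >= 1 && m <= 12;
    }
}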
Combining Test Inputs
An SUT can take multiple inputs. You can select values for each input (using equivalence partitioning, boundary value analysis, or some other technique).
An SUT that takes multiple inputs and some values chosen for each input:
- Method to test: calculateGrade(participation, projectGrade, isAbsent, examScore)
- Values to test:

Input | Valid values to test | Invalid values to test |
---|---|---|
participation | 0, 1, 19, 20 | 21, 22 |
projectGrade | A, B, C, D, F | |
isAbsent | true, false | |
examScore | 0, 1, 69, 70 | 71, 72 |
Testing all possible combinations is effective but not efficient. If you test all possible combinations for the above example, you need to test 6x5x2x6=360 cases. Doing so has a higher chance of discovering bugs (i.e. effective) but the number of test cases will be too high (i.e. not efficient). Therefore, you need smarter ways to combine test inputs that are both effective and efficient.
Given below are some basic strategies for generating a set of test cases by combining multiple test inputs.
Let's assume the SUT has the following three inputs and you have selected the given values for testing:
SUT: foo(char p1, int p2, boolean p3)
Values to test:
Input | Values |
---|---|
p1 | a, b, c |
p2 | 1, 2, 3 |
p3 | T, F |
The all combinations strategy generates test cases for each unique combination of test inputs.
This strategy generates 3x3x2=18 test cases.
Test Case | p1 | p2 | p3 |
---|---|---|---|
1 | a | 1 | T |
2 | a | 1 | F |
3 | a | 2 | T |
... | ... | ... | ... |
18 | c | 3 | F |
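To see how the all combinations strategy scales, here is a small sketch (not from the text above) that enumerates the 18 combinations for foo(char p1, int p2, boolean p3) programmatically; in practice, a parameterized test or a testing tool would typically generate and run these combinations.
public class AllCombinations {
    public static void main(String[] args) {
        char[] p1Values = {'a', 'b', 'c'};
        int[] p2Values = {1, 2, 3};
        boolean[] p3Values = {true, false};

        int testCase = 1;
        // Nested loops produce every unique combination: 3 x 3 x 2 = 18 test cases.
        for (char p1 : p1Values) {
            for (int p2 : p2Values) {
                for (boolean p3 : p3Values) {
                    System.out.printf("Test case %d: p1=%s, p2=%d, p3=%b%n",
                            testCase++, p1, p2, p3);
                }
            }
        }
    }
}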
The at least once strategy includes each test input at least once.
This strategy generates 3 test cases.
Test Case | p1 | p2 | p3 |
---|---|---|---|
1 | a | 1 | T |
2 | b | 2 | F |
3 | c | 3 | VV/IV |
VV/IV = Any Valid Value / Any Invalid Value
The all pairs strategy creates test cases so that for any given pair of inputs, all combinations between them are tested. It is based on the observation that a bug is rarely the result of more than two interacting factors. The resulting number of test cases is lower than the all combinations strategy, but higher than the at least once approach.
This strategy generates 9 test cases:
Test Case | p1 | p2 | p3 |
---|---|---|---|
1 | a | 1 | T |
2 | a | 2 | T |
3 | a | 3 | F |
4 | b | 1 | F |
5 | b | 2 | T |
6 | b | 3 | F |
7 | c | 1 | T |
8 | c | 2 | F |
9 | c | 3 | T |
A variation of this strategy is to test all pairs of inputs but only for inputs that could influence each other.
Testing all pairs between p1 and p3 only while ensuring all p2 values are tested at least once:
Test Case | p1 | p2 | p3 |
---|---|---|---|
1 | a | 1 | T |
2 | a | 2 | F |
3 | b | 3 | T |
4 | b | VV/IV | F |
5 | c | VV/IV | T |
6 | c | VV/IV | F |
The random strategy generates test cases using one of the other strategies and then picks a subset randomly (presumably because the original set of test cases is too big).
There are other strategies that can be used too.
Consider the following scenario.
SUT: printLabel(String fruitName, int unitPrice)
Selected values for fruitName (invalid values are underlined):
Values | Explanation |
---|---|
Apple | Label format is round |
Banana | Label format is oval |
Cherry | Label format is square |
Dog | Not a valid fruit |
Selected values for unitPrice:
Values | Explanation |
---|---|
1 | Only one digit |
20 | Two digits |
0 | Invalid because 0 is not a valid price |
-1 | Invalid because negative prices are not allowed |
Suppose these are the test cases being considered.
Case | fruitName | unitPrice | Expected |
---|---|---|---|
1 | Apple | 1 | Print label |
2 | Banana | 20 | Print label |
3 | Cherry | 0 | Error message “invalid price” |
4 | Dog | -1 | Error message “invalid fruit" |
It looks like the test cases were created using the at least once strategy. After running these tests, can you confirm that the square-format label printing is done correctly?
- Answer: No.
- Reason: Cherry -- the only input that can produce a square-format label -- is in a negative test case, which produces an error message instead of a label. If there is a bug in the code that prints labels in square format, these test cases will not trigger that bug.
In this case, a useful heuristic to apply is each valid input must appear at least once in a positive test case. Cherry is a valid test input and you must ensure that it appears at least once in a positive test case. Here are the updated test cases after applying that heuristic.
Case | fruitName | unitPrice | Expected |
---|---|---|---|
1 | Apple | 1 | Print round label |
2 | Banana | 20 | Print oval label |
2.1 | Cherry | VV | Print square label |
3 | VV | 0 | Error message “invalid price” |
4 | Dog | -1 | Error message “invalid fruit" |
VV/IV = Any Valid or Invalid Value; VV = Any Valid Value
Consider the test cases designed in [Heuristic: each valid input at least once in a positive test case].
After running these test cases, can you be sure that the error message “invalid price” is shown for negative prices?
- Answer: No.
- Reason: -1 -- the only input that is a negative price -- is in a test case that produces the error message “invalid fruit”.
In this case, a useful heuristic to apply is no more than one invalid input in a test case. After applying that, you get the following test cases.
Case | fruitName | unitPrice | Expected |
---|---|---|---|
1 | Apple | 1 | Print round label |
2 | Banana | 20 | Print oval label |
2.1 | Cherry | VV | Print square label |
3 | VV | 0 | Error message “invalid price” |
4 | VV | -1 | Error message “invalid price" |
4.1 | Dog | VV | Error message “invalid fruit" |
VV/IV = Any Valid or Invalid Value; VV = Any Valid Value
Consider the calculateGrade scenario given below:
- SUT: calculateGrade(participation, projectGrade, isAbsent, examScore)
- Values to test (the invalid values are 21 and 22 for participation, and 71 and 72 for examScore):
  - participation: 0, 1, 19, 20, 21, 22
  - projectGrade: A, B, C, D, F
  - isAbsent: true, false
  - examScore: 0, 1, 69, 70, 71, 72
To get the first cut of test cases, let’s apply the at least once strategy.
Test cases for calculateGrade V1
Case No. | partici- pation | projectGrade | isAbsent | examScore | Expected |
---|---|---|---|---|---|
1 | 0 | A | true | 0 | ... |
2 | 1 | B | false | 1 | ... |
3 | 19 | C | VV/IV | 69 | ... |
4 | 20 | D | VV/IV | 70 | ... |
5 | 21 | F | VV/IV | 71 | Err Msg |
6 | 22 | VV/IV | VV/IV | 72 | Err Msg |
VV/IV = Any Valid or Invalid Value, Err Msg = Error Message
Next, let’s apply the each valid input at least once in a positive test case heuristic. Test case 5 has a valid value for projectGrade=F that doesn't appear in any other positive test case. Let's replace test case 5 with 5.1 and 5.2 to rectify that.
Test cases for calculateGrade V2
Case No. | partici- pation | projectGrade | isAbsent | examScore | Expected |
---|---|---|---|---|---|
1 | 0 | A | true | 0 | ... |
2 | 1 | B | false | 1 | ... |
3 | 19 | C | VV | 69 | ... |
4 | 20 | D | VV | 70 | ... |
5.1 | VV | F | VV | VV | ... |
5.2 | 21 | VV/IV | VV/IV | 71 | Err Msg |
6 | 22 | VV/IV | VV/IV | 72 | Err Msg |
VV = Any Valid Value; VV/IV = Any Valid or Invalid Value
Next, you have to apply the no more than one invalid input in a test case heuristic. Test cases 5.2 and 6 don't follow that heuristic. Let's rectify the situation as follows:
Test cases for calculateGrade V3
Case No. | partici- pation | projectGrade | isAbsent | examScore | Expected |
---|---|---|---|---|---|
1 | 0 | A | true | 0 | ... |
2 | 1 | B | false | 1 | ... |
3 | 19 | C | VV | 69 | ... |
4 | 20 | D | VV | 70 | ... |
5.1 | VV | F | VV | VV | ... |
5.2 | 21 | VV | VV | VV | Err Msg |
5.3 | 22 | VV | VV | VV | Err Msg |
6.1 | VV | VV | VV | 71 | Err Msg |
6.2 | VV | VV | VV | 72 | Err Msg |
Next, you can assume that there is a dependency between the inputs examScore and isAbsent such that an absent student can only have examScore=0. To cater for the hidden invalid case arising from this, you can add a new test case where isAbsent=true and examScore!=0. In addition, test cases 3-6.2 should have isAbsent=false so that the input remains valid.
Test cases for calculateGrade V4
Case No. | partici- pation | projectGrade | isAbsent | examScore | Expected |
---|---|---|---|---|---|
1 | 0 | A | true | 0 | ... |
2 | 1 | B | false | 1 | ... |
3 | 19 | C | false | 69 | ... |
4 | 20 | D | false | 70 | ... |
5.1 | VV | F | false | VV | ... |
5.2 | 21 | VV | false | VV | Err Msg |
5.3 | 22 | VV | false | VV | Err Msg |
6.1 | VV | VV | false | 71 | Err Msg |
6.2 | VV | VV | false | 72 | Err Msg |
7 | VV | VV | true | !=0 | Err Msg |
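Once a test case table such as V4 is finalized, the rows can be translated fairly mechanically into automated tests. The sketch below shows one possible translation of the error-message rows using JUnit 5's @CsvSource. The calculateGrade implementation, the choice of IllegalArgumentException to represent "Err Msg", the concrete "any valid value" picks, and the assumed valid ranges (participation 0-20, examScore 0-70) are all illustrative assumptions, not from the text.
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class CalculateGradeTest {

    // Rows 5.2, 5.3, 6.1, 6.2 and 7 of table V4: exactly one invalid input per row.
    @ParameterizedTest
    @CsvSource({
        "21,A,false,50",  // participation above the assumed valid range (case 5.2)
        "22,A,false,50",  // participation above the assumed valid range (case 5.3)
        "10,A,false,71",  // examScore above the assumed valid range (case 6.1)
        "10,A,false,72",  // examScore above the assumed valid range (case 6.2)
        "10,A,true,50"    // absent student with a non-zero examScore (case 7)
    })
    void calculateGrade_oneInvalidInput_errorSignalled(
            int participation, String projectGrade, boolean isAbsent, int examScore) {
        assertThrows(IllegalArgumentException.class,
                () -> calculateGrade(participation, projectGrade, isAbsent, examScore));
    }

    // Hypothetical SUT: only the input validation is sketched; the grading logic is omitted.
    private String calculateGrade(int participation, String projectGrade,
                                  boolean isAbsent, int examScore) {
        if (participation < 0 || participation > 20
                || examScore < 0 || examScore > 70
                || (isAbsent && examScore != 0)) {
            throw new IllegalArgumentException("invalid input");
        }
        return "grade not computed in this sketch";
    }
}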
More
Use cases can be used for system testing and acceptance testing. For example, the main success scenario can be one test case while each variation (due to extensions) can form another test case. However, note that use cases do not specify the exact data entered into the system. Instead, it might say something like user enters his personal data into the system. Therefore, the tester has to choose data by considering equivalence partitions and boundary values. The combinations of these could result in one use case producing many test cases.
To increase the E&E of testing, high-priority use cases are given more attention. For example, a scripted approach can be used to test high-priority test cases, while an exploratory approach is used to test other areas of concern that could emerge during testing.
Summary
SECTION: PROJECT MANAGEMENT
Revision Control
Revision control is the process of managing multiple versions of a piece of information. In its simplest form, this is something that many people do by hand: every time you modify a file, save it under a new name that contains a number, each one higher than the number of the preceding version.
Manually managing multiple versions of even a single file is an error-prone task, though, so software tools to help automate this process have long been available. The earliest automated revision control tools were intended to help a single user to manage revisions of a single file. Over the past few decades, the scope of revision control tools has expanded greatly; they now manage multiple files, and help multiple people to work together. The best modern revision control tools have no problem coping with thousands of people working together on projects that consist of hundreds of thousands of files.
Revision control software will track the history and evolution of your project, so you don't have to. For every change, you'll have a log of who made it; why they made it; when they made it; and what the change was.
Revision control software makes it easier for you to collaborate when you're working with other people. For example, when people more or less simultaneously make potentially incompatible changes, the software will help you to identify and resolve those conflicts.
It can help you to recover from mistakes. If you make a change that later turns out to be an error, you can revert to an earlier version of one or more files. In fact, a really good revision control tool will even help you to efficiently figure out exactly when a problem was introduced.
It will help you to work simultaneously on, and manage the drift between, multiple versions of your project. Most of these reasons are equally valid, at least in theory, whether you're working on a project by yourself, or with a hundred other people.
-- [adapted from bryan-mercurial-guide]
Revision: A revision (some seem to use it interchangeably with version while others seem to distinguish the two -- here, let us treat them as the same, for simplicity) is a state of a piece of information at a specific time that is a result of some changes to it e.g., if you modify the code and save the file, you have a new revision (or a new version) of that file.
RCS: Revision control software (RCS) refers to the software tools that automate the process of revision control, i.e. managing revisions of software artifacts.
Revision control software is also known as Version Control Software (VCS), and by a few other names.
Git is the most widely used RCS today. Other RCS tools include Mercurial, Subversion (SVN), Perforce, CVS (Concurrent Versions System), Bazaar, TFS (Team Foundation Server), and Clearcase.
GitHub is a web-based project hosting platform for projects using Git for revision control. Other similar services include GitLab, BitBucket, and SourceForge.
The repository is the database that stores the revision history. Suppose you want to apply revision control on files in a directory called ProjectFoo. In that case, you need to set up a repo (short for repository) in the ProjectFoo directory, which is referred to as the working directory of the repo. For example, Git uses a hidden folder named .git inside the working directory to store the database of the working directory's revision history.
Repository (repo for short): The database of the history of a directory being tracked by an RCS software (e.g. Git).
Working directory: the root directory revision-controlled by Git (e.g., the directory in which the repo was initialized).
You can have multiple repos in your computer, each repo revision-controlling files of a different working directory, for example, files of different projects.
Tracking and ignoring
In a repo, you can specify which files to track and which files to ignore. Some files such as temporary log files created during the build/test process should not be revision-controlled.
Staging and committing
Committing saves a snapshot of the current state of the tracked files in the revision control history. Such a snapshot is also called a commit (i.e. the noun).
Commit (noun): a change (aka a revision) saved in the Git revision history.
(verb): the act of creating a commit i.e., saving a change in the working directory into the Git revision history.
When ready to commit, you first add the specific changes you want to commit to a staging area. This intermediate step allows you to commit only some changes while saving other changes for a later commit.
Stage (verb): Instructing Git to prepare a file for committing.
RCS tools store the history of the working directory as a series of commits. This means you should commit after each change that you want the RCS to 'remember'.
Each commit in a repo is a recorded point in the history of the project that is uniquely identified by an auto-generated hash, e.g. a16043703f28e5b3dab95915f5c5e5bf4fdc5fc1.
You can tag a specific commit with a more easily identifiable name, e.g. v1.0.2.
To see what changed between two points of the history, you can ask the RCS tool to diff the two commits in concern.
To restore the state of the working directory at a point in the past, you can check out the commit in concern, i.e., you can traverse the history of the working directory simply by checking out the commits you are interested in.
Remote repositories are repos that are hosted on remote computers and allow remote access. They are especially useful for sharing the revision history of a codebase among team members of a multi-person project. They can also serve as a remote backup of your codebase.
It is possible to set up your own remote repo on a server, but the easier option is to use a remote repo hosting service such as GitHub or BitBucket.
You can clone a repo to create a copy of that repo in another location on your computer. The copy will even have the revision history of the original repo i.e., identical to the original repo. For example, you can clone a remote repo onto your computer to create a local copy of the remote repo.
When you clone from a repo, the original repo is commonly referred to as the upstream repo. A repo can have multiple upstream repos. For example, let's say a repo repo1 was cloned as repo2, which was then cloned as repo3. In this case, repo1 and repo2 are upstream repos of repo3.
You can pull from one repo to another, to receive new commits in the second repo, but only if the repos have a shared history. Let's say some new commits were added to the upstream repo after you cloned it and you would like to copy over those new commits to your own clone, i.e., sync your clone with the upstream repo. In that case, you pull from the upstream repo to your clone.
You can push new commits in one repo to another repo which will copy the new commits onto the destination repo. Note that pushing to a repo requires you to have write-access to it. Furthermore, you can push between repos only if those repos have a shared history among them (i.e., one was created by copying the other at some point in the past).
Cloning, pushing, and pulling can be done between two local repos too, although it is more common for them to involve a remote repo.
A repo can work with any number of other repositories as long as they have a shared history, e.g., repo1 can pull from (or push to) repo2 and repo3 if they have a shared history between them.
A fork is a remote copy of a remote repo. As you know, cloning creates a local copy of a repo. In contrast, forking creates a remote copy of a Git repo hosted on GitHub. This is particularly useful if you want to play around with a GitHub repo but you don't have write permissions to it; you can simply fork the repo and do whatever you want with the fork as you are the owner of the fork.
A pull request (PR for short) is a mechanism for contributing code to a remote repo, i.e., "I'm requesting you to pull my proposed changes to your repo". For this to work, the two repos must have a shared history. The most common case is sending PRs from a fork to its upstream repo.
Branching is the process of evolving multiple versions of the software in parallel. For example, one team member can create a new branch and add an experimental feature to it while the rest of the team keeps working on another branch. Branches can be given names, e.g. master, release, dev.
A branch can be merged into another branch. Merging usually results in a new commit that represents the changes done in the branch being merged.

Merge conflicts happen when you try to merge two branches that had changed the same part of the code and the RCS cannot decide which changes to keep. In those cases, you have to ‘resolve’ the conflicts manually.
RCS can be done in two ways: the centralized way and the distributed way.
Centralized RCS (CRCS for short) uses a central remote repo that is shared by the team. Team members download (‘pull’) and upload (‘push’) changes between their own local repositories and the central repository. Older RCS tools such as CVS and SVN support only this model. Note that these older RCS do not support the notion of a local repo either. Instead, they force users to do all the versioning with the remote repo.

The centralized RCS approach without any local repos (e.g., CVS, SVN)
Distributed RCS (DRCS for short, also known as Decentralized RCS) allows multiple remote repos, and pulling and pushing can be done among them in arbitrary ways. The workflow can vary from team to team. For example, every team member can have his/her own remote repository in addition to their own local repository, as shown in the diagram below. Git and Mercurial are some prominent RCS tools that support the distributed approach.

The decentralized RCS approach

In the forking workflow, the 'official' version of the software is kept in a remote repo designated as the 'main repo'. All team members fork the main repo and create pull requests from their fork to the main repo.
To illustrate how the workflow goes, let’s assume Jean wants to fix a bug in the code. Here are the steps:
- Jean creates a separate branch in her local repo and fixes the bug in that branch.
  Common mistake: doing the proposed changes in the master branch -- if Jean does that, she will not be able to have more than one PR open at any time, because any changes to the master branch will be reflected in all open PRs.
- Jean pushes the branch to her fork.
- Jean creates a pull request from that branch in her fork to the main repo.
- Other members review Jean’s pull request.
- If reviewers suggested any changes, Jean updates the PR accordingly.
- When reviewers are satisfied with the PR, one of the members (usually the team lead or a designated 'maintainer' of the main repo) merges the PR, which brings Jean’s code to the main repo.
- Other members, realizing there is new code in the upstream repo, sync their forks with the new upstream repo (i.e. the main repo). This is done by pulling the new code to their own local repo and pushing the updated code to their own fork.
Possible mistake: Creating another 'reverse' PR from the team repo to the team member's fork to sync the member's fork with the merged code. PRs are meant to go from downstream repos to upstream repos, not in the other direction.
One main benefit of this workflow is that it does not require most contributors to have write permissions to the main repository. Only those who are merging PRs need write permissions. The main drawback of this workflow is the extra overhead of sending everything through forks.
The feature branch workflow is similar to the forking workflow, except there are no forks. Everyone is pushing/pulling from the same remote repo. The phrase feature branch is used because each new feature (or bug fix, or any other modification) is done in a separate branch and merged to the master branch when ready. Pull requests can still be created within the central repository, from the feature branch to the main branch.
As this workflow requires all team members to have write access to the repository,
- it is better to protect the main branch using some mechanism, to reduce the risk of accidental undesirable changes to it.
- it is not suitable for situations where the code contributors are not 'trusted' enough to be given write permission.

Project Planning
A Work Breakdown Structure (WBS) depicts information about tasks and their details in terms of subtasks. When managing projects, it is useful to divide the total work into smaller, well-defined units. Relatively complex tasks can be further split into subtasks. In complex projects, a WBS can also include prerequisite tasks and effort estimates for each task.
The high level tasks for a single iteration of a small project could look like the following:
Task ID | Task | Estimated Effort | Prerequisite Task |
---|---|---|---|
A | Analysis | 1 man day | - |
B | Design | 2 man day | A |
C | Implementation | 4.5 man day | B |
D | Testing | 1 man day | C |
E | Planning for next version | 1 man day | D |
The effort is traditionally measured in man hour/day/month i.e. work that can be done by one person in one hour/day/month. The Task ID is a label for easy reference to a task. Simple labeling is suitable for a small project, while a more informative labeling system can be adopted for bigger projects.
An example WBS for a game development project.
Task ID | Task | Estimated Effort | Prerequisite Task |
---|---|---|---|
A | High level design | 1 man day | - |
B | Detail design | 2 man day | A |
C | Implementation | 4.5 man day | |
D | System Testing | 1 man day | C |
E | Planning for next version | 1 man day | D |
All tasks should be well-defined. In particular, it should be clear as to when the task will be considered done.
Some examples of ill-defined tasks and their better-defined counterparts:
Bad | Better |
---|---|
more coding | implement component X |
do research on UI testing | find a suitable tool for testing the UI |
A milestone is the end of a stage which indicates significant progress. You should take into account dependencies and priorities when deciding on the features to be delivered at a certain milestone.
Each intermediate product release is a milestone.
In some projects, it is not practical to have a very detailed plan for the whole project due to the uncertainty and unavailability of required information. In such cases, you can use a high-level plan for the whole project and a detailed plan for the next few milestones.
Milestones for the Minesweeper project, iteration 1
Day | Milestones |
---|---|
Day 1 | Architecture skeleton completed |
Day 3 | ‘new game’ feature implemented |
Day 4 | ‘new game’ feature tested |
A buffer is time set aside to absorb any unforeseen delays. It is very important to include buffers in a software project schedule because effort/time estimations for software development are notoriously hard. However, do not inflate task estimates to create hidden buffers; have explicit buffers instead. Reason: With explicit buffers, it is easier to detect incorrect effort estimates which can serve as feedback to improve future effort estimates.

Keeping track of project tasks (who is doing what, which tasks are ongoing, which tasks are done etc.) is an essential part of project management. In small projects, it may be possible to keep track of tasks using simple tools such as online spreadsheets or general-purpose/light-weight task tracking tools such as Trello. Bigger projects need more sophisticated task tracking tools.
Issue trackers (sometimes called bug trackers) are commonly used to track task assignment and progress. Most online project hosting platforms such as GitHub, SourceForge, and Bitbucket come with an integrated issue tracker.
A screenshot from the Jira issue tracker software (Jira is part of the Atlassian suite of project management tools):

A Gantt chart is a 2-D bar-chart, drawn as time vs tasks (represented by horizontal bars).
A sample Gantt chart:

In a Gantt chart, a solid bar represents the main task, which is generally composed of a number of subtasks, shown as grey bars. The diamond shape indicates an important deadline/deliverable/milestone.
A PERT (Program Evaluation Review Technique) chart uses a graphical technique to show the order/sequence of tasks. It is based on the simple idea of drawing a directed graph in which:
- Nodes or vertices capture the effort estimations of tasks, and
- Arrows depict the precedence between tasks
An example PERT chart for a simple software project
(md = man-days)
A PERT chart can help determine the following important information:
- The order of tasks. In the example above, Final Testing cannot begin until all coding of individual subsystems has been completed.
- Which tasks can be done concurrently. In the example above, the various subsystem designs can start independently once the High level design is completed.
- The shortest possible completion time. In the example above, there is a path (indicated by the shaded boxes) from start to end that determines the shortest possible completion time.
- The Critical Path. In the example above, the shortest possible path is also the critical path.
Critical path is the path in which any delay can directly affect the project duration. It is important to ensure tasks on the critical path are completed on time.
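To make the idea of the shortest possible completion time concrete, here is a minimal Java sketch (not from any standard tool; the class name and task data are hypothetical, in the spirit of the WBS tables above) that computes the earliest possible finish time from effort estimates and prerequisite tasks. The longest chain of prerequisites found this way corresponds to the critical path.

import java.util.*;

public class CompletionTimeCalculator {

    // A task with an effort estimate (in man-days) and the IDs of its prerequisite tasks.
    static class Task {
        final double effortInManDays;
        final List<String> prerequisites;

        Task(double effortInManDays, List<String> prerequisites) {
            this.effortInManDays = effortInManDays;
            this.prerequisites = prerequisites;
        }
    }

    // Earliest finish time of a task = its own effort + the latest finish time among its prerequisites.
    static double earliestFinish(String id, Map<String, Task> tasks, Map<String, Double> cache) {
        if (cache.containsKey(id)) {
            return cache.get(id);
        }
        Task task = tasks.get(id);
        double latestPrerequisiteFinish = 0;
        for (String prerequisite : task.prerequisites) {
            latestPrerequisiteFinish = Math.max(latestPrerequisiteFinish, earliestFinish(prerequisite, tasks, cache));
        }
        double finish = latestPrerequisiteFinish + task.effortInManDays;
        cache.put(id, finish);
        return finish;
    }

    public static void main(String[] args) {
        // Hypothetical tasks, similar in spirit to the WBS tables above.
        Map<String, Task> tasks = new HashMap<>();
        tasks.put("A", new Task(1, Collections.emptyList()));   // High level design
        tasks.put("B", new Task(2, Arrays.asList("A")));        // Detail design
        tasks.put("C", new Task(4.5, Arrays.asList("B")));      // Implementation
        tasks.put("D", new Task(1, Arrays.asList("C")));        // System testing

        Map<String, Double> cache = new HashMap<>();
        double shortestCompletionTime = 0;
        for (String id : tasks.keySet()) {
            shortestCompletionTime = Math.max(shortestCompletionTime, earliestFinish(id, tasks, cache));
        }
        // Prints 8.5 man-days for the chain A -> B -> C -> D.
        System.out.println("Shortest possible completion time: " + shortestCompletionTime + " man-days");
    }
}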
Teamwork
Given below are three commonly used team structures in software development. Irrespective of the team structure, it is a good practice to assign roles and responsibilities to different team members so that someone is clearly in charge of each aspect of the project. In comparison, the ‘everybody is responsible for everything’ approach can result in more chaos and hence slower progress.

Egoless team
In this structure, every team member is equal in terms of responsibility and accountability. When any decision is required, consensus must be reached. This team structure is also known as a democratic team structure. This team structure usually finds a good solution to a relatively hard problem as all team members contribute ideas.
However, the democratic nature of the team structure bears a higher risk of falling apart due to the absence of an authority figure to manage the team and resolve conflicts.
Chief programmer team
Frederick Brooks proposed that software engineers learn from the medical surgical team in an operating room. In such a team, there is always a chief surgeon, assisted by experts in other areas. Similarly, in a chief programmer team structure, there is a single authoritative figure, the chief programmer. Major decisions, e.g. system architecture, are made solely by him/her and obeyed by all other team members. The chief programmer directs and coordinates the effort of other team members. When necessary, the chief will be assisted by domain specialists e.g. business specialists, database experts, network technology experts, etc. This allows individual group members to concentrate solely on the areas in which they have sound knowledge and expertise.
The success of such a team structure relies heavily on the chief programmer. Not only must he/she be a superb technical hand, he/she also needs good managerial skills. Under a suitably qualified leader, such a team structure is known to produce successful work.
Strict hierarchy team
At the opposite extreme of an egoless team, a strict hierarchy team has a strictly defined organization among the team members, reminiscent of the military or a bureaucratic government. Each team member only works on his/her assigned tasks and reports to a single “boss”.
In a large, resource-intensive, complex project, this could be a good team structure to reduce communication overhead.
SDLC Process Models
Introduction
Software development goes through different stages such as requirements, analysis, design, implementation and testing. These stages are collectively known as the software development life cycle (SDLC). There are several approaches, known as software development life cycle models (also called software process models), that describe different ways to go through the SDLC. Each process model prescribes a 'roadmap' for the software developers to manage the development effort. The roadmap describes the aims of the development stages, the outcome of each stage, and the workflow i.e. the relationship between stages.
The sequential model, also called the waterfall model, views software development as a linear process, in which the project is seen as progressing through the development stages. The name waterfall stems from how the model is drawn to look like a waterfall (see below).

When one stage of the process is completed, it produces some artifacts to be used in the next stage. For example, the requirements stage produces a comprehensive list of requirements, to be used in the design phase.
A strict sequential model project moves only in the forward direction i.e., each stage is completed before starting the next. For example, once the requirements stage is over, there is no provision for revising the requirements later.
This model can work well for a project that produces software to solve a well-understood problem, in which case the requirements can remain stable and the effort can be estimated accurately. Furthermore, as each stage has a well-defined outcome, it is easy to track the progress of the project because one can gauge the project progress by monitoring which stage the project is in.
However, real-world projects often tackle problems that are not well-understood at the beginning, making them unsuitable for this model. For example, target users of a software product may not be able to state their requirements accurately at the start of the project, if they have not used a similar product before.
The iterative model advocates producing the software by going through several iterations. Each of the iterations could potentially go through all the stages of the SDLC, from requirements gathering to deployment.

Each iteration produces a new version of the product, building upon the version produced in the previous iteration. Feedback from each iteration is factored into the subsequent iterations. For example, if an implementation task took longer than expected, the effort estimate for similar tasks in future iterations can be adjusted accordingly. Similarly, if a feature introduced in the current iteration was not well-received by target users, it can be removed or tweaked in the next iteration.
The iterative model can follow a breadth-first or a depth-first approach.
- In the breadth-first approach, an iteration evolves all major components and all functionality areas in parallel, i.e., most features and most components will be updated in each iteration, producing a working product at the end of each iteration.
- In the depth-first approach, an iteration focuses on fleshing out only some components or some functionality areas. Accordingly, early depth-first iterations might not produce a working product.
Taking a Minesweeper game as an example,
- breadth-first iterations will deliver a fully playable version early. These early versions may have primitive functionality, for example, a rudimentary text based UI, fixed board size, limited minefield layouts, etc. These functionalities (and corresponding components) will then be improved in later releases.
- an early depth-first iteration could deliver the full user interface (UI) but with no game logic at all. Alternatively, an early iteration could focus on just the logic for generating initial layouts of the minefield. Neither will be a playable version of the game but both can be used to collect early feedback (about the UI, and the initial minefield layouts, respectively) which can then be used to guide later iterations.
A project can be done as a mixture of breadth-first and depth-first iterations i.e., an iteration can contain some breadth-first work as well as some depth-first work, or, some iterations can be breadth-first while others are depth-first.
In 2001, a group of prominent software engineering practitioners met and brainstormed for an alternative to documentation-driven, heavyweight software development processes that were used in most large projects at the time. This resulted in something called the agile manifesto (a vision statement of what they were looking to do).
You are uncovering better ways of developing software by doing it and helping others do it.
Through this work you have come to value:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
That is, while there is value in the items on the right, you value the items on the left more.
-- Extract from the Agile Manifesto
Subsequently, some of the signatories of the manifesto went on to create process models that try to follow it. These processes are collectively called agile processes. Some of the key features of agile approaches are:
- Requirements are prioritized based on the needs of the user, are clarified regularly (at times almost on a daily basis) with the entire project team, and are factored into the development schedule as appropriate.
- Instead of doing a very elaborate and detailed design and a project plan for the whole project, the team works based on a rough project plan and a high level design that evolves as the project goes on.
- There is a strong emphasis on complete transparency and responsibility sharing among the team members. The team is responsible together for the delivery of the product. Team members are accountable, and regularly and openly share progress with each other and with the user.
There are a number of agile processes in the development world today. eXtreme Programming (XP) and Scrum are two of the well-known ones.
Example Process Models
The following description was adapted from the XP home page, emphasis added:
Extreme Programming (XP) stresses customer satisfaction. Instead of delivering everything you could possibly want on some date far in the future, this process delivers the software you need as you need it.
XP aims to empower developers to confidently respond to changing customer requirements, even late in the life cycle.
XP emphasizes teamwork. Managers, customers, and developers are all equal partners in a collaborative team. XP implements a simple, yet effective environment enabling teams to become highly productive. The team self-organizes around the problem to solve it as efficiently as possible.
XP aims to improve a software project in five essential ways: communication, simplicity, feedback, respect, and courage. Extreme Programmers constantly communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. Every small success deepens their respect for the unique contributions of each and every team member. With this foundation, Extreme Programmers are able to courageously respond to changing requirements and technology.
XP has a set of simple rules. XP is a lot like a jigsaw puzzle with many small pieces. Individually the pieces make no sense, but when combined together a complete picture can be seen. This flow chart shows how Extreme Programming's rules work together.

Pair programming, CRC cards, project velocity, and standup meetings are some interesting topics related to XP. Refer to extremeprogramming.org to find out more about XP.
This description of Scrum was adapted from Wikipedia [retrieved on 18/10/2011], emphasis added:
Scrum is a process skeleton that contains sets of practices and predefined roles. The main roles in Scrum are:
- The Scrum Master, who maintains the processes (typically in lieu of a project manager)
- The Product Owner, who represents the stakeholders and the business
- The Team, a cross-functional group who do the actual analysis, design, implementation, testing, etc.
A Scrum project is divided into iterations called Sprints. A sprint is the basic unit of development in Scrum. Sprints tend to last between one week and one month, and are a timeboxed (i.e. restricted to a specific duration) effort of a constant length.
Each sprint is preceded by a planning meeting, where the tasks for the sprint are identified and an estimated commitment for the sprint goal is made, and followed by a review or retrospective meeting, where the progress is reviewed and lessons for the next sprint are identified.
During each sprint, the team creates a potentially deliverable product increment (for example, working and tested software). The set of features that go into a sprint come from the product backlog, which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint, and records this in the sprint backlog. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. Development is timeboxed such that the sprint must end on time; if requirements are not completed for any reason they are left out and returned to the product backlog. After a sprint is completed, the team demonstrates the use of the software.
Scrum enables the creation of self-organizing teams by encouraging co-location of all team members, and verbal communication between all team members and disciplines in the project.
A key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements.

Daily Scrum is another key scrum practice. The description below was adapted from https://www.mountaingoatsoftware.com (emphasis added):
In Scrum, on each day of a sprint, the team holds a daily scrum meeting called the "daily scrum". Meetings are typically held in the same location and at the same time each day. Ideally, a daily scrum meeting is held in the morning, as it helps set the context for the coming day's work. These scrum meetings are strictly time-boxed to 15 minutes. This keeps the discussion brisk but relevant.
...
During the daily scrum, each team member answers the following three questions:
- What did you do yesterday?
- What will you do today?
- Are there any impediments in your way?
...
The daily scrum meeting is not used as a problem-solving or issue resolution meeting. Issues that are raised are taken offline and usually dealt with by the relevant subgroup immediately after the meeting.
The unified process was developed by the Three Amigos: Ivar Jacobson, Grady Booch and James Rumbaugh (the creators of UML).
The unified process consists of four phases: inception, elaboration, construction and transition. The main purpose of each phase can be summarized as follows:
Phase | Activities | Typical Artifacts |
---|---|---|
Inception | Understand the problem and requirements; communicate with the customer; plan the development effort | Basic use case model; rough project plan; project vision and scope |
Elaboration | Refine and expand the requirements; determine a high-level design, e.g. the system architecture | System architecture; various design models; prototype |
Construction | Carry out the major implementation effort to support the use cases identified; refine and flesh out the design; carry out testing; produce multiple releases of the system | Test cases; system releases |
Transition | Ready the system for actual production use; familiarize end users with the system | Final user manual; final system release |

Given above is a visualization of a project done using the Unified process (source: Wikipedia). As the diagram shows, a phase can consist of several iterations. Each vertical column (labeled “I1”, “E1”, “E2”, “C1”, etc.) represents a single iteration. Each of the iterations consists of a set of ‘workflows’ such as ‘Business modeling’, ‘Requirements’, ‘Analysis & Design’, etc. The shaded region indicates the amount of resources and effort spent on a particular workflow in a particular iteration.
The Unified process is a flexible and customizable process model framework rather than a single fixed process. For example, the number of iterations in each phase, the definition of workflows, and the intensity of a given workflow in a given iteration can be adjusted according to the nature of the project. Take the Construction Phase: to develop a simple system, one or two iterations would be sufficient; for a more complicated system, multiple iterations would be more helpful. Therefore, the diagram above simply records a particular application of the UP rather than prescribing how the UP is to be applied. However, this record can be refined and reused for similar future projects.
More
CMMI (Capability Maturity Model Integration) is a process improvement approach defined by the Software Engineering Institute at Carnegie Mellon University. CMMI provides organizations with the essential elements of effective processes, which will improve their performance. -- adapted from http://www.sei.cmu.edu/cmmi/
CMMI defines five maturity levels for a process and provides criteria to determine if the process of an organization is at a certain maturity level. The diagram below [taken from Wikipedia] gives an overview of the five levels.

SECTION: TOOLS
UML
Class Diagrams
Introduction
UML class diagrams describe the structure (but not the behavior) of an OOP solution. These are possibly the most often used diagrams in the industry and are an indispensable tool for an OO programmer.
An example class diagram:

Classes
The basic UML notations used to represent a class:

A Table class shown in UML notation:


The 'Operations' compartment and/or the 'Attributes' compartment may be omitted if such details are not important for the task at hand. Similarly, some attributes/operations can be omitted if not relevant. 'Attributes' always appear above the 'Operations' compartment. All operations should be in one compartment rather than each operation in a separate compartment. Same goes for attributes.

The visibility of attributes and operations is used to indicate the level of access allowed for each attribute or operation. The types of visibility and their exact meanings depend on the programming language used. Here are some common visibilities and how they are indicated in a class diagram:
+ : public
- : private
# : protected
~ : package private
The Table class with visibilities shown:


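As an illustration, the four visibility levels map onto Java access modifiers roughly as follows (a minimal sketch; the members shown are illustrative, not taken from the diagram above):

public class Table {
    private int rowCount;        // '-' : private
    protected String caption;    // '#' : protected
    String internalId;           // '~' : package private (no modifier in Java)

    public int getRowCount() {   // '+' : public
        return rowCount;
    }
}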
Generic classes can be shown as given below. The notation format is shown on the left, followed by two examples.

Associations
You should use a solid line to show an association between two classes.

This example shows an association between the Admin class and the Student class:
Use arrowheads to indicate the navigability of an association.
In this example, the navigability is unidirectional, from the Logic class to the Minefield class. That means if a Logic object L is associated with a Minefield object M, L has a reference to M but M doesn't have a reference to L.

class Logic {
    Minefield minefield;
    // ...
}

class Minefield {
    // ...
}

class Logic:
    def __init__(self):
        self.minefield = None
        # ...

class Minefield:
    # ...
    pass
Here is an example of bidirectional navigability; i.e., if a Dog object d is associated with a Man object m, d has a reference to m and m has a reference to d.

class Dog {
    Man man;
    // ...
}

class Man {
    Dog dog;
    // ...
}

class Dog:
    def __init__(self):
        self.man = None
        # ...

class Man:
    def __init__(self):
        self.dog = None
        # ...
Navigability can be shown in class diagrams as well as object diagrams.
According to this object diagram, the given Logic object is associated with and aware of two MineField objects.

Association roles are used to indicate the role played by each class in the association.

This association represents a marriage between a Man object and a Woman object. The respective roles played by objects of these two classes are husband and wife.

Note how the variable names match closely with the association roles.
class Man {
    Woman wife;
}

class Woman {
    Man husband;
}

class Man:
    def __init__(self):
        self.wife = None  # a Woman object

class Woman:
    def __init__(self):
        self.husband = None  # a Man object
The role of Student objects in this association is charges (i.e. Admin is in charge of students).

class Admin {
    List<Student> charges;
}

class Admin:
    def __init__(self):
        self.charges = []  # list of Student objects
Association roles are optional to show. They are particularly useful for differentiating among multiple associations between the same two classes.
In each of the three associations between the Flight class and the Airport class given below, the Airport class plays a different role.

Association labels describe the meaning of the association. The arrow head indicates the direction in which the label is to be read.

In this example, the same association is described using two different labels.

- Diagram on the left: the Admin class is associated with the Student class because an Admin object uses a Student object.
- Diagram on the right: the Admin class is associated with the Student class because a Student object is used by an Admin object.

Commonly used multiplicities:
- 0..1 : optional, can be linked to 0 or 1 objects.
- 1 : compulsory, must be linked to one object at all times.
- * : can be linked to 0 or more objects.
- n..m : the number of linked objects must be within n to m, inclusive.
In the diagram below, an Admin object administers (is in charge of) any number of students, but a Student object must always be under the charge of exactly one Admin object.

In the diagram below,
- Each student must be supervised by exactly one professor. i.e. There cannot be a student who doesn't have a supervisor or has multiple supervisors.
- A professor cannot supervise more than 5 students but can have no students to supervise.
- An admin can handle any number of professors and any number of students, including none.
- A professor/student can be handled by any number of admins, including none.

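One possible way these multiplicities could translate into Java fields is sketched below (an illustration only, assuming navigability from Admin to the others and from Student to Professor; note that the upper bound of 5 is not enforced by the type system and would need a runtime check):

import java.util.ArrayList;
import java.util.List;

class Professor {
    // 0..5 students: a collection; the upper bound (5) must be enforced in code, if at all.
    List<Student> supervisees = new ArrayList<>();
}

class Student {
    // exactly 1 professor: a compulsory single reference.
    Professor supervisor;
}

class Admin {
    // '*' : any number of professors and students, including none.
    List<Professor> professors = new ArrayList<>();
    List<Student> students = new ArrayList<>();
}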
Associations as Attributes
An association can be shown as an attribute instead of a line.
Association multiplicities and the default value can be shown as part of the attribute using the following notation. Both are optional.
name: type [multiplicity] = default value
The diagram below depicts a multi-player Square Game being played on a board comprising 100 squares. Each of the squares may be occupied by any number of pieces, each belonging to a certain player.
A Piece may or may not be on a Square. Note how that association can be replaced by an isOn attribute of the Piece class. The isOn attribute can either be null or hold a reference to a Square object, matching the 0..1 multiplicity of the association it replaces. The default value is null.

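In code, the isOn attribute could look like this (a minimal Java sketch, assuming the class names used in the example above):

class Square {
    // ...
}

class Piece {
    Square isOn = null; // 0..1 multiplicity: either null or a reference to a Square
}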
The association that a Board has 100 Squares can be shown in either of these two ways:

Show each association as either an attribute or a line but not both. A line is preferred as it is easier to spot.
Diagram (a) given below shows the 'author' association between the Book class and the Person class as a line, while (b) shows the same association as an attribute in the Book class. Both are correct and the two are equivalent. But (c) is not correct, as it uses both a line and an attribute to show the same association.
(a)
(b)
(c)
Enumerations
An enumeration is shown similar to a class, but with the keyword <<enumeration>> above the enumeration name and its possible values listed in a compartment.
Class-Level Members
In UML class diagrams, underlines denote class-level attributes and methods.
In the class diagram below, the totalStudents attribute and the getTotalStudents method are class-level.

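In Java, class-level members correspond to static members. A minimal sketch matching the diagram above (the constructor and the name attribute are illustrative additions):

class Student {
    private static int totalStudents = 0; // class-level attribute (underlined in UML)
    private String name;                  // instance-level attribute

    Student(String name) {
        this.name = name;
        totalStudents++;
    }

    static int getTotalStudents() {       // class-level method (underlined in UML)
        return totalStudents;
    }
}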
Association Classes
Association classes are denoted as a connection to an association link using a dashed line as shown below.

In this example, Loan is an association class because it stores information about the borrows association between the User and the Book.

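A possible Java rendering of the Loan association class is sketched below (the exact attributes of Loan are not specified in the diagram, so the dueDate attribute here is hypothetical):

import java.time.LocalDate;

class User {
    // ...
}

class Book {
    // ...
}

// The association class stores data about a specific 'borrows' link
// between one User and one Book.
class Loan {
    User borrower;
    Book book;
    LocalDate dueDate; // hypothetical attribute of the association
}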
Composition
UML uses a solid diamond symbol to denote composition.
Notation:

A Book consists of Chapter objects. When the Book object is destroyed, its Chapter objects are destroyed too.

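One way composition is often reflected in code is that the whole creates and exclusively owns its parts, so the parts do not outlive it. A minimal Java sketch, assuming the Book/Chapter classes in the example (the addChapter method is illustrative):

import java.util.ArrayList;
import java.util.List;

class Chapter {
    // ...
}

class Book {
    // Chapters are created and owned by the Book and are not shared with other
    // objects, so they become unreachable when the Book itself does.
    private final List<Chapter> chapters = new ArrayList<>();

    void addChapter() {
        chapters.add(new Chapter());
    }
}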
Aggregation
UML uses a hollow diamond to indicate an aggregation.
Notation:

Example:

Aggregation vs Composition
The distinction between composition (◆) and aggregation (◇) is rather blurred. Martin Fowler’s famous book UML Distilled advocates omitting the aggregation symbol altogether because using it adds more confusion than clarity.
Class Inheritance
You can use a triangle and a solid line (not to be confused with an arrow) to indicate class inheritance.
Notation:

Examples: The Car class inherits from the Vehicle class. The Cat and Dog classes inherit from the Pet class.

Interfaces
An interface is shown similar to a class, with an additional keyword <<interface>>. When a class implements an interface, it is shown similar to class inheritance except that a dashed line is used instead of a solid line.
The AcademicStaff and the AdminStaff classes implement the SalariedStaff interface.

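In Java, the dashed 'implements' line corresponds to the implements keyword, as in this sketch (the getSalary method is illustrative, not taken from the diagram):

interface SalariedStaff {
    double getSalary(); // illustrative method
}

class AcademicStaff implements SalariedStaff {
    public double getSalary() {
        return 0; // ...
    }
}

class AdminStaff implements SalariedStaff {
    public double getSalary() {
        return 0; // ...
    }
}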
Abstract Classes
You can use italics or the {abstract} keyword (preferred) to denote abstract classes/methods.

Example:

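In Java, such classes/methods are declared with the abstract keyword. A minimal sketch with hypothetical class and method names:

abstract class Account {                  // an {abstract} class
    abstract void calculateInterest();    // an {abstract} method: no body here

    void printSummary() {                 // a concrete method
        // ...
    }
}

class SavingsAccount extends Account {
    void calculateInterest() {
        // ...
    }
}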
Object Diagrams
An object diagram shows an object structure at a given point of time.
An example object diagram:

Notation:

Notes:
- The class name and object name, e.g. car1:Car, are underlined. objectName:ClassName is meant to say 'an instance of ClassName identified as objectName'.
- Unlike classes, there is no compartment for methods.
- The attributes compartment can be omitted if it is not relevant to the task at hand.
- The object name can be omitted too, e.g. :Car, which is meant to say 'an unnamed instance of a Car object'.
Some example objects:

A solid line indicates an association between two objects.

An example object diagram showing two associations:
Sequence Diagrams
A UML sequence diagram captures the interactions between multiple objects for a given scenario.
Consider the code below.
class Machine {
    Unit producePrototype() {
        Unit prototype = new Unit();
        for (int i = 0; i < 5; i++) {
            prototype.stressTest();
        }
        return prototype;
    }
}

class Unit {
    public void stressTest() {
    }
}
Here is the sequence diagram to model the interactions for the method call producePrototype() on a Machine object.

Notation:

This sequence diagram shows some interactions between a human user and the Text UI of a Minesweeper game.

The player runs the newgame action on the TextUi object, which results in the TextUi showing the minefield to the player. Then, the player runs the clear x y command; in response, the TextUi object shows the updated minefield.
The :TextUi in the above example denotes an unnamed instance of the class TextUi. If there were two instances of TextUi in the diagram, they can be distinguished by naming them, e.g. TextUi1:TextUi and TextUi2:TextUi.
Arrows representing method calls should be solid arrows while those representing method returns should be dashed arrows.
Note that unlike in object diagrams, the class/object name is not underlined in sequence diagrams.
[Common notation error] Activation bar too long: The activation bar of a method cannot start before the method call arrives and a method cannot remain active after the method has returned. In the two sequence diagrams below, the one on the left commits this error because the activation bar starts before the method Foo#xyz() is called and remains active after the method returns.

[Common notation error] Broken activation bar: The activation bar should remain unbroken from the point the method is called until the method returns. In the two sequence diagrams below, the one on the left commits this error because the activation bar for the method Foo#abc() is not contiguous, but appears as two pieces instead.

Notation:

- The arrow that represents the constructor arrives at the side of the box representing the instance.
- The activation bar represents the period the constructor is active.
The Logic object creates a Minefield object.

UML uses an X at the end of the lifeline of an object to show its deletion.
Although object deletion is not that important in languages such as Java that support automatic memory management, you can still show object deletion in UML diagrams to indicate the point at which the object ceases to be used.
Notation:

Note how the below diagram shows the deletion of the Minefield object.

UML uses loop frames to indicate a repeated sequence of interactions.
Notation:

The Player calls the mark x,y command or clear x y command repeatedly until the game is won or lost.

UML can show a method of an object calling another of its own methods.
Notation:

The markCellAt(...) method of a Logic object is calling its own updateState(...) method.

In this variation, the Book#write() method is calling the Chapter#getText() method, which in turn does a call back by calling the getAuthor() method of the calling object.

UML uses alt frames to indicate alternative paths.
Notation:

Minefield calls the Cell#setMine() method if the cell is supposed to be a mined cell, and calls the Cell#setMineCount(...) method otherwise.

No more than one of the alternative partitions can be executed in an alt frame. That is, it is acceptable for none of the alternative partitions to be executed, but it is not acceptable for multiple partitions to be executed.
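In code, an alt frame typically corresponds to an if-else statement in the caller. A rough Java sketch of the Minefield example above (the called method names follow the diagram; the initCell method and its parameters are assumed for illustration):

class Cell {
    void setMine() {
        // ...
    }

    void setMineCount(int count) {
        // ...
    }
}

class Minefield {
    // Hypothetical method showing the two alternative partitions of the alt frame.
    void initCell(Cell cell, boolean isMined, int mineCount) {
        if (isMined) {
            cell.setMine();               // first partition
        } else {
            cell.setMineCount(mineCount); // second ([else]) partition
        }
    }
}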
UML uses opt frames to indicate optional paths.
Notation:

Logic#markCellAt(...) calls Timer#start() only if it is the first move of the player.

UML uses par frames to indicate parallel paths.
Notation:

Logic is calling the methods CloudServer#poll() and LocalData#poll() in parallel.

If you show parallel paths in a sequence diagram, the corresponding Java implementation is likely to be multi-threaded, because a single-threaded Java program cannot do multiple things at the same time.
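For example, the two parallel poll() calls above could be realized with threads, as in this rough sketch (the CloudServer and LocalData classes here are stand-ins for whatever the diagram refers to, and pollAll is an assumed method name):

class CloudServer {
    void poll() {
        // ...
    }
}

class LocalData {
    void poll() {
        // ...
    }
}

class Logic {
    void pollAll() throws InterruptedException {
        CloudServer cloudServer = new CloudServer();
        LocalData localData = new LocalData();

        // Each poll() call runs on its own thread, so the two calls proceed in parallel.
        Thread cloudThread = new Thread(cloudServer::poll);
        Thread localThread = new Thread(localData::poll);
        cloudThread.start();
        localThread.start();

        // Wait for both parallel paths to finish (analogous to leaving the par frame).
        cloudThread.join();
        localThread.join();
    }
}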
UML uses ref frames to allow a segment of the interaction to be omitted and shown as a separate sequence diagram. Reference frames help you break complicated sequence diagrams into multiple parts, or simply omit details you are not interested in showing.
Notation:

The details of the get minefield appearance interactions have been omitted from the diagram.

Those details are shown in a separate sequence diagram given below.
To reduce clutter, optional elements (e.g., activation bars, return arrows) may be omitted if the omission does not result in ambiguities or loss of relevant information. Informal operation descriptions, such as those given in the example below, can be used if more precise details are not required for the task at hand.
A minimal sequence diagram

Activity Diagrams
Introduction
UML activity diagrams (AD) can model workflows. Flow charts are another type of diagram that can model workflows. Activity diagrams are the UML equivalent of flow charts.
An example activity diagram:
[source: Wikipedia]
Basic Notations
An activity diagram (AD) captures an activity through the actions and control flows that make up the activity.
- An action is a single step in an activity. It is shown as a rectangle with rounded corners.
- A control flow shows the flow of control from one action to the next. It is shown by drawing a line with an arrow-head to show the direction of the flow.

Note the slight difference between the start node and the end node which represent the start and the end of the activity, respectively.
This activity diagram shows the action sequence of the activity a passenger rides the bus:

A branch node shows the start of alternate paths. Each control flow exiting a branch node has a guard condition: a boolean condition that should be true for execution to take that path. Exactly one of the guard conditions should be true at any given branch node.
A merge node shows the end of alternate paths.
Both branch nodes and merge nodes are diamond shapes. Guard conditions must be in square brackets.

The AD below shows alternate paths involved in the workflow of the activity shop for product:

Some acceptable simplifications (by convention):
- Omitting the merge node if it doesn't cause any ambiguities.
- Multiple arrows can start from the same corner of a branch node.
- Omitting the [Else] condition.
The AD below illustrates the simplifications mentioned above:

Fork nodes indicate the start of parallel flows of control.
Join nodes indicate the end of parallel paths.
Both have the same notation: a bar.
At a join, execution along all parallel paths should be complete before the execution can start on the outgoing control flow of the join.

In this activity diagram (from an online shop website) the actions User browses products and System records browsing data happen in parallel. Both of them need to finish before the log out action can take place.

The rake notation is used to indicate that a part of the activity is given as a separate diagram.
Here is the AD for a game of ‘Snakes and Ladders’.

The rake symbol (in the Move piece action above) is used to show that the action is described in another subsidiary activity diagram elsewhere. That diagram is given below.

It is possible to partition an activity diagram to show who is doing which action. Such partitioned activity diagrams are sometimes called swimlane diagrams.
A simple example of a swimlane diagram:

Notes
UML notes can augment UML diagrams with additional information. These notes can be shown connected to a particular element in the diagram or can be shown without a connection. The diagram below shows examples of both.
Example:

A constraint can be given inside a note, within curly braces. Natural language or a formal notation such as OCL (Object Constraint Language) may be used to specify constraints.
Example:

Miscellaneous
Compared to the notation for class diagrams, object diagrams differ in the following ways:
- Show objects instead of classes:
  - The instance name may be shown
  - There is a : before the class name
  - Instance and class names are underlined
  - Methods are omitted
- Multiplicities are omitted. Reason: an association line in an object diagram represents a connection to exactly one object (i.e., the multiplicity is always 1).
Furthermore, multiple object diagrams can correspond to a single class diagram.
Both object diagrams are derived from the same class diagram shown earlier. In other words, each of these object diagrams shows ‘an instance of’ the same class diagram.


When the class diagram has an inheritance relationship, the object diagram should show either an object of the parent class or the child class, but not both.
Suppose Employee is a child class of the Person class. The class diagram will be as follows:

Now, how do you show an Employee object named jake?
This is not correct, as there should be only one object.
This is OK.
This is OK, as jake is a Person too. That is, we can show the parent class instead of the child class if the child class doesn't matter to the purpose of the diagram (i.e., the reader of this diagram will not need to know that jake is in fact an Employee).
Association labels/roles can be omitted unless they add value (e.g., showing them is useful if there are multiple associations between the two classes in concern -- otherwise you wouldn't know which association the object diagram is showing)
Consider this class diagram and the object diagram:


We can clearly see that both Adam and Eve live in hall h1 (i.e., it is OK to omit the association label lives in), but we can't see if History is Adam's major or his minor (i.e., the diagram should have included either an association label or a role there). In contrast, we can see that Eve is an English major.
Intellij IDEA
Some useful navigation shortcuts:
- Quickly locate a file by name.
- Go to the definition of a method from where it is used.
- Go back to the previous location.
- View the documentation of a method from where the method is being used, without navigating to the method itself.
- Find where a method/field is being used.
This video (from LaunchCode) gives a pretty good explanation of how to use the IntelliJ IDEA debugger.
- IntelliJ IDEA Documentation: Debugging Basics - Can be used as a reference document when you want to recall how to use a debugging feature.