Agasti Development Strategy


This document outlines and describes the CUNY School of Professional Studies Agasti Team's internal development strategy. Wherever possible, each section of this document adheres to and emphasizes industry best practices and standards for software development processes. The intended audience for this document is primarily the CUNY SPS Agasti Team, though a potential secondary audience may include the greater Sahana development community. It is expected that this document will evolve into a set of common standards and guidelines for the workflow of the Agasti project going forward.

Requirements

Requirements for Agasti generally come from the needs and wants of the customers, standards bodies, competing institutions in the emergency management space, end users, developers, or from a general consensus among the entire Sahana community. The purpose of the Requirements phase of development is to produce a more specific understanding of, and sufficient documentation on, the desired functionality and needs of the product or feature to begin the Design phase of the current sprint. This ensures the explicit communication of the goals and motivations of the project, as well as the generation of metrics by which to measure progress toward these goals.

Gathering and Analysis

A project can only move forward if it has a clear direction, a well-defined end goal, and the means to get there. The gathering and analysis of requirements ensures that each of these is specific, accurate, realistic, attainable, and complete. Due to the nature of our incremental development cycle, requirements gathering and analysis will also be a periodic, incremental process that is revisited at the beginning of each sprint.

Customer Requirements

The customer or user might start with a vague idea of what they want, expressed with only a few words, or they may have specific needs that they provide as a lengthy, detailed specification. Many of these requirements are discovered during the maintenance, monitoring, and support of previous versions of the feature or module. It is vital that these requirements can be shared, interpreted, clarified, documented, and ultimately agreed upon by the customer, the Release Manager, and the Review Team.

Below are some example requirements for a fictional project:

  ''The Time Machine MUST be able to travel through time; controls for the time
  travel behavior SHOULD be easy for the operator to learn and use; time travel
  within the Time Machine MUST be safe for the operator, the Machine, and its
  surroundings. The Time Machine SHOULD be capable of interplanetary space travel.
  The Time Machine MAY look like a London police box from the outside.''

Note that the keywords MUST, SHOULD, and MAY are used here with the interpretation described in the Internet Engineering Task Force's RFC 2119, which outlines standard best current practices for indicating requirement levels in such documents. This might not be the way customer requirements are received, but it is a good practice and should be employed as often as possible when writing requirements and specifications for Agasti.
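
Since requirements will not always arrive with RFC 2119 keywords, it can help to tag them during analysis. Below is a minimal, purely illustrative sketch (not an existing Agasti tool) that extracts the requirement level from a statement:

```python
import re

# Illustrative sketch (not an Agasti tool): tag each requirement statement
# with its RFC 2119 level so requirement levels can be tracked explicitly.
# Longer phrases come first so "MUST NOT" matches before "MUST".
LEVELS = ["MUST NOT", "SHALL NOT", "SHOULD NOT", "NOT RECOMMENDED",
          "MUST", "SHALL", "SHOULD", "REQUIRED", "RECOMMENDED", "MAY", "OPTIONAL"]

def requirement_level(sentence):
    """Return the first RFC 2119 keyword found in the sentence, or None."""
    for keyword in LEVELS:
        if re.search(r"\b" + keyword + r"\b", sentence):
            return keyword
    return None

print(requirement_level("The Time Machine MUST be able to travel through time."))
# prints: MUST
```

A helper like this could flag requirement statements that carry no explicit level at all, prompting the team to classify them before the Requirements Document is finalized.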

Measurable Goals

Every requirement stems from a goal. Unless the goal is measurable, there is no way to verify that it has been attained. Every requirement should be thought of both in terms of customer needs as well as how to unambiguously measure and verify at each step that these needs have been met or are on the way to being met.

Here are some examples of measurable goals related to the above customer requirements:

  ''The Time Machine MUST be able to travel both forwards and backwards through
  time at a sustainable rate of 5 years per relative second and with at least
  two live human operators/passengers. A new user SHOULD be capable of using the
  Time Machine for unsupervised time travel within 30 minutes of operator
  training. The Time Machine SHOULD be capable of maintaining an operating speed
  of at least 90% of the speed of light in deep space; the Time Machine SHOULD
  be capable of successful liftoff from and touchdown onto planets with at least
  85% similar atmospheric composition to that of Earth in 2010 CE and anywhere
  between 40-400% of its size. The Time Machine MAY disguise itself within its
  surroundings to become undetectable by anyone but its operators/passengers.''

By the end of the Design phase, each requirement should be associated with a quantifiable, testable goal. Progress toward a goal can then be measured in terms of time, accuracy, overall task success (e.g., a binary “succeed” or “fail”), user satisfaction based on quantifiable feedback, and other metrics. Many of the details of these goals will end up in the Functional Specification rather than the Requirements Document, both of which are described in detail later in this section.

Requirements Complexity

In addition to metrics of progress and success, each requirement should have associated with it some notion or measure of complexity and work required. That is, for each task, the development team should come up with a numerical estimate of the level of effort required to complete it. In our agile process, we will provide work estimates in the form of story points, which are described later in this section.

Resource Analysis and Allocation

Before a set of requirements can be agreed upon, it needs to be determined whether they are attainable within the confines of whatever limitations exist. These limitations are usually in terms of available resources, including personnel, equipment, budget, time, and related assets. Only after the available and needed resources are identified can a realistic schedule be developed.

The following should be taken into account during resource analysis and allocation:

  • Staff: The people required to complete the project or feature, including number, needed skill sets, and roles throughout the process.
  • Equipment and Tools: Including the hardware, software, and other material requirements.
  • Time and Cost Estimates: These early estimates should reflect the scope and size of the project, as well as the above listed resources, in order to draft a preliminary budget and project schedule.

It should be noted that staff and equipment will generally remain fixed resources, but time and cost requirements are apt to be more radically adjusted between development cycles in order to reflect changes in requirements complexity.

Requirements Document

The Requirements Document is a formalized document that describes in (non-technical) detail the desired functionality of each aspect of the system or feature, as well as the specific goals and motivations behind them, and any known assumptions regarding the system's requirements for the current sprint. Before this document can be finalized, there are generally many rounds of feedback, clarification, and revision. The final document is reviewed and agreed upon by each party involved in the project.

In an agile setting, such as Scrum, the product backlog functions as an extended requirements document, and required features are decided and pulled from this backlog at the beginning of each sprint. Within the context of the Agasti Development cycle, requirements will essentially be written as an organized set of long-form user stories and scenarios, described later in this section, with examples as necessary. Sample user stories can be found in the User Stories Primer. Much of the requirements document may also be included as part of the Release Planning Document, which is described in the Agasti Release Strategy.

Functional Specification Draft

The Functional Specification outlines and describes in technical detail the functional requirements of each individual component of the software system or feature to be built by the core developers. These software requirements follow from and must satisfy all of the customer and user requirements set forth in the requirements document. Much of the functional specification may also be included as part of the Release Planning Document, which is described in the Agasti Release Strategy.

User Stories and Scenarios

A user story is a short, informal statement of a technical requirement traditionally used in Agile software development planning, implementation, and testing. Throughout the Requirements phase of the project, user stories should be written in straightforward language to demonstrate simple, specific actions and capabilities that the system or feature should support. Most user stories should be written after the requirements document is finalized, though more can be added or modified based on the functional specification. Just as with the requirements document and the functional specification draft, the comprehensive set of user stories should be reviewed and approved by all parties before final designs, implementation, or test writing for the described functionality are attempted.

Though user stories will be added and revised throughout the process, it is extremely helpful to have a comprehensive, prioritized set of user stories and scenarios (longer, multi-step stories) before moving onto the Design phase.

Story Points

As mentioned previously in the section on Requirements Complexity, our agile process will collect work estimates for each user story in the form of story points, which are estimated by developers. Story points are arbitrary units that are used to express the relative difficulty of implementing a user story. The higher the number of points, the greater the task complexity or difficulty. There are different scales that can be used for story points, such as the Cohn scale, though exact semantics behind each number evolve and stabilize over time as actual team output and progress “velocity” is monitored from sprint to sprint. These observations allow teams to adjust expectations, provide more accurate work estimates, and gain other useful insights that will allow them to better predict future progress.

It is worth noting that, based on the average amount of work completed in a sprint, one can easily convert story points into time estimates if necessary. It is more important, however, to know how many points the team can handle per sprint than how much time an individual feature or story might take to implement.
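
The conversion described above can be sketched in a few lines; the numbers below are hypothetical:

```python
import math

# Hypothetical illustration: convert remaining story points into a sprint
# count using the team's observed velocity (points completed per sprint).
def sprints_needed(backlog_points, velocity_history):
    """Estimate remaining sprints from the average observed velocity."""
    average_velocity = sum(velocity_history) / len(velocity_history)
    # A partially filled final sprint still occupies a whole sprint.
    return math.ceil(backlog_points / average_velocity)

# Three past sprints averaged 20 points; 56 points remain in the backlog.
print(sprints_needed(56, [18, 20, 22]))  # prints: 3
```

Multiplying the resulting sprint count by the sprint length then yields a calendar estimate, though the point-per-sprint figure remains the primary planning number.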


Here are some examples of user stories with associated story points assigned:

  • A passenger can enter the Time Machine.
    • Story points: 3
  • An operator and passenger can travel 10 absolute years into the past within two experiential seconds.
    • Story points: 40
  • An operator can set the Time Machine to camouflage mode.
    • Story points: 13

More on user stories, scenarios, and story points can be found in the Agasti Test Strategy.

Design

After the requirements are decided, reviewed, and agreed upon, the Design phase begins. In this phase, the representation of data in the system is modeled, and all functional and visual details are described and documented in detail. The input of the Design phase is the approved Requirements documentation, and its output is a detailed description of the system or module down to the component and sub-component level. The goal of this output is to allow the core developers to develop the software with minimal need for additional input.

It is also important to note that, in addition to and in parallel with the software design, this phase also includes the creation of the Test Plan and initial test designs, which are described in the Agasti Test Strategy. Much of the Testing phase is very tightly integrated and interdependent with both the Design and Development phases of development. The rest of this document will attempt to highlight these interdependencies as they arise.

Data Modeling

During the data modeling process, the business and software requirements documentation are further analyzed, and the data requirements needed to support them are defined in the resulting data model. The data model describes how each individual entity in the software module or system is represented and accessed, as well as any dependencies among these entities. Special care must be taken to address intra-module data design concerns and also to ensure inter-module and application-level data conformity. No software application can be designed or tested without a well-defined, well-documented data model.
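
As a sketch of how entity definitions and their dependencies might be captured, consider the following; the entity names are hypothetical stand-ins, not Agasti's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical entities (not Agasti's actual schema): the model makes each
# entity and its dependencies explicit, and leaves room for extension.
@dataclass
class Organization:
    name: str

@dataclass
class Facility:
    name: str
    operator: Organization                    # inter-entity dependency
    capacity: int = 0
    tags: list = field(default_factory=list)  # extensible without schema changes

shelter = Facility("Midtown Shelter", Organization("Example Relief Org"), capacity=120)
```

Keeping dependencies explicit like this makes it easier to see which other entities a change will touch, which is the balance between robustness and extensibility described above.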

The Agasti software is used internationally and has been adapted for several deployments and organizations with various needs, expectations, and semantics for their specific situations. As such, it is not only a good design practice, but it is also imperative that the data model is as extensible, modular, reusable, and maintainable as possible so that any future changes to the system are minimal, straightforward, and painless. It can be a great challenge to attain and maintain a good balance between a model that is powerful and robust, as well as general and extensible, but it is also of the utmost importance. All subsequent phases of the development process depend on the data model.

For additional data modeling guidelines, refer to the Agasti Information Architecture Strategy.

Application Design

The application design follows primarily from the functional specification, user stories, and the data model. Just as with the data model, the application design must be extensible, modular, reusable, and maintainable. The application design is the responsibility of the development team.

System Architecture and Interfaces

Before a new module can be built, the architecture of the entire system must be defined and agreed upon by all parties. This involves defining how the components of the software will interact with one another, how the module will interact with other modules, including any third-party tools, and how these are expected to interact with the system as a whole. Once the interfaces are explicitly defined and clearly documented, implementation can begin.

Wireframes and Mockups [OPTIONAL]

Wireframes and mockups are, respectively, conceptual sketches and more detailed images of the application front-end. These are useful not only to developers, but also to the quality assurance engineers that are designing and writing tests in parallel throughout the Design phase. Once sufficiently clear wireframes and mockups are generated, they may also be incorporated into any design documentation. All mockups and visual components must comply with Agasti User Interface Standards.

The table below illustrates some cases when wireframes and/or mockups should be used.

Module Component Properties                    Use Wireframes?  Use Mockups?
module component follows an existing template  no               no
text-only modifications to existing component  no               no
introduces new page layout or structure        yes              no
heavy use of new visual elements               yes              yes
complex information presentation requirements  yes              usually
complex interaction requirements               yes              sometimes
user interface design overhaul                 yes              yes
specific to a non-standard display device      yes              yes
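
The decision table above could also be encoded directly, for example in a planning checklist tool; this is purely illustrative, with property labels shortened from the table rows:

```python
# Illustrative encoding of the decision table above; the keys are
# shortened forms of the module component properties.
ARTIFACT_TABLE = {
    "follows an existing template":     ("no",  "no"),
    "text-only modifications":          ("no",  "no"),
    "new page layout or structure":     ("yes", "no"),
    "heavy use of new visual elements": ("yes", "yes"),
    "complex information presentation": ("yes", "usually"),
    "complex interaction requirements": ("yes", "sometimes"),
    "user interface design overhaul":   ("yes", "yes"),
    "non-standard display device":      ("yes", "yes"),
}

def design_artifacts(component_property):
    """Look up whether a component warrants wireframes and/or mockups."""
    wireframes, mockups = ARTIFACT_TABLE[component_property]
    return {"wireframes": wireframes, "mockups": mockups}
```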

Prototyping

In order to test the design specifications before diving into writing actual application code, developers should periodically build prototypes of the software. These prototypes serve as proofs-of-concept that can surface ambiguities, expose elusive design flaws early in the process, and save time for everyone.

Development

After designs are sufficiently well-defined, the Development process can begin. The SPS Agasti team adapts an Extreme Programming (XP) methodology for module development, which means that all phases of development are done in iterative cycles with much feedback between each phase and cycle. These cycles include implementation as well as testing. This method is described in more detail in the Agasti Software Development Policy.

Unit Tests

Writing unit tests is the first phase of Test-Driven Development (TDD). This is an ongoing process in which, before a single line of application code is written, the developers use the application design documents and functional specifications to write code that tests the smallest, most basic and independent components of the system. Ideally, most of these tests are written before the application code, though they will be continually adjusted and more tests added as the codebase expands and as bugs are found.

A key property of unit tests is that they must fail initially, before the tested component is completed. Without a test that fails first, a test that passes is meaningless, and false positives can become an issue. No feature is released if it does not pass all of its unit tests.

The following example demonstrates one instance where unit tests should be written:

  • Story: An operator can set the Time Machine to return to her or his original timespace with the press of a button.
  • Unit test:
    • Precondition: The operator is not in her or his original timespace.
    • Action: The operator presses the “Reset” button.
    • Expected result: The “Destination” display is set to the operator's original timespace.

Implementation

After unit tests have been written for the feature to be developed, the same core developer(s) will begin writing the code that implements the feature. After the code is written to the specification, team Coding Standards, and the developer's satisfaction, the developer will then run unit tests on the code to test for compliance. When all unit tests pass, the code may then be submitted for further testing, integration, and deployment.

Code Generation

The core developers will write the software from its component requirements, functional specification, interface designs, wireframes (if provided), user stories, unit tests, and all other documentation that is available. It is each developer's responsibility to follow team coding standards, run and pass all unit tests, and run static code analysis on the new code (described in the Testing section of this document) from her or his local machine before integrating it into a common, team-shared repository (see the Agasti Release Strategy). Only then will the feature be deployed to the appropriate environment and further tested.

Build and Deploy

Due to the continuous and cyclical nature of the XP method of development, the build process will also be continuous. As each development task is completed and new code is added to a repository, a new build is generated and a limited series of tests and sanity checks will be run on the code before it is deployed to an environment, where it will be further tested. The SPS Agasti team's build processes are discussed in more detail in the Agasti Build Strategy.

Technical Documentation

Though the Extreme Programming development method favors constant team communication over extensive documentation, the SPS Agasti team will generate and provide some form of technical documentation for external developers, described below, in order to facilitate maintainability and to promote openness.

Generated Documentation

Developers are expected to include comments in their code as it is developed. In addition to making the code more readable and maintainable, this will allow third-party tools, such as Doxygen, to extract this information from the code and generate organized documentation that is more accessible than reading comments directly from the raw code.
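
As an illustrative Python analogue, docstrings serve as the extractable comments that generators such as Doxygen, pydoc, or Sphinx turn into browsable documentation; the function and its parameters below are invented for the example:

```python
# Illustrative: structured docstrings let documentation generators build
# organized, browsable documentation straight from the source code.
def promote_series(series, target):
    """Promote a release series to a target environment.

    Args:
        series: identifier of the release series.
        target: environment name, e.g. "qa" or "staging".

    Returns:
        A human-readable status string.
    """
    return f"{series} promoted to {target}"

# The docstring is available to tools (and at runtime):
print(promote_series.__doc__.splitlines()[0])
# prints: Promote a release series to a target environment.
```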

Organized API and Notes

Due to limitations inherent in generated documentation, there will be a cohesive document that presents an organized view of the API with any notes and high-level comments that weren't included or covered in the code comments.

Testing

As the XP principles and practices dictate, test planning, design, and execution all occur early and often throughout the development process, and integrate tightly with each phase. There are different types of testing that integrate with or follow different phases of development. Tests are coded, committed, and stored alongside application code. Testing must not be considered a standalone phase of a linear process, but must be seen as a closely coupled part of a cyclical, iterative development process. As a result, tests and test data may be rewritten and refactored as application specifications and code are being developed. Finally, all tests should be written with automation in mind, and any test plan should explicitly note if a test is difficult or impossible to automate.

Testing is covered in more detail in the Agasti Test Strategy.

Types of Testing

There are many different types of tests, which vary in size, scope, and complexity. Each type of test serves a different purpose and verifies that the software meets its requirements at different phases of the development process.

Unit Testing

Unit tests are covered in the Development section. Writing unit tests is the first phase of code development. These small, specific tests are written by the same core developers who write the application code, and they are run by the developer before code is submitted to a repository. Unit tests will also be run automatically, under supervision by the quality assurance team, after integration and before code deployment.

See the Unit Tests subsection above for an example of when a unit test is appropriate.

Integration Testing

Integration tests are designed and written from the user stories by the Quality Assurance Engineering (QAE) team iteratively throughout the Design and Development phases. These are generally small or medium in size and test individual components or atomic sub-features of a module. They may test the application through both the front-end and the back-end of the application. Integration tests are run by the QAE post-deployment as part of the build-and-deploy process and should be designed and written to be run as part of an automated process.

The following example demonstrates an instance in which integration tests should be written:

''Story: An operator can avoid potential paradoxes that would result directly
from the Time Machine's normal operation.''
  • Test:
    • Precondition: The operator is in her or his original timespace.
    • Actions:
      • The operator enables the Time Machine's Paradox Avoidance Drive.
      • The operator sets the destination to an exact timespace that his or her grandfather is known to occupy.
      • The operator activates the Time Machine.
    • Expected results:
      • The Time Machine appears in the destination time and approximate space, avoiding harming the operator's grandfather.
      • The operator continues to exist in the same universe.
      • The universe continues to exist.
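
Unlike a unit test, which isolates a single component, an integration-style check exercises several components together; the classes below are hypothetical stand-ins for real module interfaces:

```python
# Hypothetical stand-ins: the navigation unit and the paradox-avoidance
# drive are exercised together, rather than one unit in isolation.
class ParadoxAvoidanceDrive:
    def adjust(self, timespace):
        # Nudge the arrival point to an approximate, paradox-safe location.
        return timespace + " (approximate)"

class Navigator:
    def __init__(self, drive):
        self.drive = drive

    def arrive(self, destination):
        # Route every arrival through the drive before materializing.
        return self.drive.adjust(destination)

nav = Navigator(ParadoxAvoidanceDrive())
arrival = nav.arrive("1935-CE/Earth")
print(arrival)  # prints: 1935-CE/Earth (approximate)
```

A failure here can implicate the interaction between the two components even when each passes its own unit tests, which is precisely what integration tests are meant to expose.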

Functional Testing

Sometimes referred to as acceptance testing or system testing, functional testing involves running tests that are written by the QAE from the Requirements Document, user stories and scenarios, and mock-ups during both the Design and Development phases. These tests validate the overall behavior and design of a module's features as expected by users, and they are often greater in size, scope, and execution time than integration tests. Functional tests are run by the QAE after all related integration tests have passed successfully and are largely executed through the application front-end in an automated manner, if possible.

The following example demonstrates an instance in which functional tests should be written:

  ''Story: A passenger can travel safely to the past, future, and original
  present spacetime.''
  • Test:
    • Precondition: The healthy passenger is in her or his original timespace.
    • Actions:
      • The passenger boards the Time Machine.
      • The operator activates travel to 100 years prior.
      • The passenger attempts to move freely inside and outside of the Time Machine.
      • These steps are repeated for future and original target spacetimes.
    • Expected results:
      • The passenger and operator arrive successfully to each destination spacetime.
      • The passenger can move freely inside and outside of the Time Machine for each destination spacetime.
      • The passenger can find and return to the Time Machine.
      • The passenger remains unharmed.

System Integration Testing

Modules may interact with each other in the application or system; these interactions must also be tested once the target module has passed functional testing. Interactions between the target core module and different versions of third-party libraries or other external dependencies should also be tested. System Integration Testing (SIT) is performed by the QAE, including the release team's Package Managers, during and after the Development phase.

The following example demonstrates an instance in which system integration tests should be written:

''Story: An operator may not set the Time Machine to return to an exact
timespace that has been previously visited by another Time Machine.''

  • Test:
    • Precondition: A Time Machine's previous timespace destination is known.
    • Actions:
      • The operator attempts to set the destination to the above timespace.
    • Expected results:
      • The Time Machine "Destination" display shows an error.

This example tests interactions between two different Time Machines.

User Acceptance Testing

User Acceptance Testing (UAT) involves having target users of the new software use a pre-release version of it while being observed. These tests ensure the requirements are all met from a user or customer perspective and so are heavily based on the user stories and scenarios from the Requirements phase.

The following example demonstrates an instance of user acceptance testing:

''Story: A passenger can operate the Time Machine within 30 minutes of
operator training.''

  • Test:
    • Precondition: The passenger has no prior Time Machine training.
    • Actions:
      • The passenger activates and follows the Time Machine's Training module for 30 minutes.
    • Expected results:
      • The newly trained passenger should be comfortable operating the Time Machine.

Note that these results are both qualitative and quantitative: the passenger must rate or otherwise express their comfort level and other feedback along with the recorded timing results.

Other Types of Testing

There are other ways to categorize tests that overlap with or consist of some of the above defined test types. There are also tests with specific purposes not covered by the aforementioned tests, some of which require specialized teams, processes, and tools. These other types of tests are covered in the following list.

  • Smoke Testing: A subset of all unit, integration, and functional tests must be run in order to ensure that basic functionality is preserved after each build of the system and that no new critical defects have been introduced. These so-called “smoke tests” must be run automatically, and all failures must be addressed before any further testing is possible.
  • Regression Testing: Similar to but more exhaustive than smoke tests, regression tests consist of the unit, integration, and functional tests that are run automatically after a fix or other code modification is introduced into the build. These tests detect any new failures that result from code changes which address previous failures.
  • Static Code Analysis and Standards Compliance Validation: Special code inspection and analysis tools are to be run on the application source code to ensure that common semantic errors and coding standards violations are caught; separate validation tools should be run on generated front-end components to ensure full compliance with current web standards and browsers.
  • Performance Testing: Load tests simulate expected user loads and examine application performance under these conditions, and stress tests try to determine the point at which the system becomes dysfunctional or unavailable. These tests are run to gain further insight into the characteristic behavior of the overall system under different workloads.
  • Security Testing: A security team will perform new and ongoing security analysis and audits of the source code as well as on active systems running released and preliminary versions of the application software. Any security vulnerabilities that are found must be immediately prioritized, patched, tested, and released upstream.
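
As a toy illustration of the static-analysis idea above, Python's standard ast module can inspect source code without executing it; the rule checked here (flagging bare print calls) is a stand-in for real coding-standards rules, which dedicated linters would enforce:

```python
import ast

# Toy static check: parse source without running it and flag bare `print`
# calls, a stand-in for a real coding-standards rule.
SOURCE = """
def f():
    print("debug leftovers")
"""

violations = [
    node.lineno
    for node in ast.walk(ast.parse(SOURCE))
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "print"
]
print("print() calls at lines:", violations)
# prints: print() calls at lines: [3]
```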

Multiple Environments

Each environment will have different testing requirements and performance evaluation metrics before code may be deployed to the next environment for testing.

Development Environment

The Development environment must run the same software stack as that which is running in the production environment and have sufficient hardware to meet development needs. It is the working environment for developers, runs the latest development application code, is populated with test accounts and sufficient test data, and is generally only available to the development team. Packages and third-party software versions in this environment can be newer than those in other environments. Most testing in the development environment focuses on the unit tests, daily continuous smoke tests, and regression tests.

QA Environment

The QA environment should run an identical technology stack and the same software versions as the production environment whenever possible. (In cases where this is not possible, testing should be done on notable platform targets as defined in the Agasti Release Planning Document.) The system should run the target module code from the most recent build that has passed all tests in the development environment, and all other application modules should run stable versions. All tests except performance, security, and user acceptance tests must be run in the QA environment, though performance and security tests may also be run here.

Staging Environment

Testing in a staging environment is optional for Agasti purposes. If a team so chooses and is capable of setting up such an environment, it should be as identical as possible to the production environment in terms of hardware, software, versions, data, network conditions, and other variables. The results of testing on this system will thus give the most accurate picture of how the system will perform in the production environment. If such a system is available for testing, the target module should run the latest code that has passed testing in the QA environment, and all tests must be run, including performance and security tests.

Test Automation

As previously stated, all tests should be written with automation in mind, and any test plan should explicitly note if a test is difficult or impossible to automate. The build process should also be automatically started at least once daily, and generally after any major code changes or additions to the repositories. The execution of automated tests should be part of the continuous build and deployment process; additionally, there should exist an easy way for the execution of selected individual tests or test suites to be triggered manually, both locally and in deployed environments.
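
A manual trigger for selected suites might be sketched as follows using Python's unittest discovery; the file pattern below is an example only:

```python
import unittest

# Sketch of a manual trigger for selected test suites; the default file
# pattern is an example, and a continuous build could call the same entry
# point that developers use locally.
def run_selected(start_dir=".", pattern="test_smoke*.py"):
    """Discover and run all tests whose filenames match the pattern."""
    suite = unittest.TestLoader().discover(start_dir, pattern=pattern)
    return unittest.TextTestRunner(verbosity=1).run(suite)
```

From a developer machine this might be invoked as `run_selected(pattern="test_timemachine*.py")`; wiring the same function into the continuous build keeps local and automated runs consistent.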

Reporting and Analysis

After each phase of testing, test results will be generated, recorded, gathered, analyzed, and reported by the QAE. All automated test results should be stored in an organized, trackable database as soon as they are available. This may be an issue tracker that is maintained alongside the development and support teams' issue trackers. At the end of a release, the results of the QAE's analysis will be compiled into a document, called the Test Report. Details about the Test Report can be found in the Agasti Test Strategy.

Documentation

Throughout each release cycle, various documentation will be compiled, maintained, and packaged by the Doc Team in accordance with the SPS OIT Documentation Strategy and the Agasti Release Strategy. Total documentation includes the following:

  • Release Planning Documentation: an overview of the release, as described in the Agasti Release Strategy; parts of this document may appear in or evolve into the README user document.
  • Developer Documentation: including finalized design and technical documentation.
  • User Documentation: including README, installation and configuration notes, quick-start guide and tutorial for new users, user manual for in-depth reference, an FAQ for quick reference, a “What's New” description of major changes and additions, and a more detailed Changelog or Release Notes for the more technical users that list all new features, improvements, changes, and bug fixes, and reference any related bug tracker IDs when they exist.

Release

Each release cycle will include a packaging process, series promotion (migration), a code push to public repositories for each series, and notification to the community. After each release begins a new cycle of monitoring, maintenance, and support, outlined in the Agasti Support Strategy. Details about the release process can be found in the Agasti Release Strategy.
