
SAKAI: A Capabilities Engineering Perspective

Speaker(s): Ramya Ravichandar, James Arthur, Aaron Zeckoski
Date: Thursday 1:15 pm - 1:55 pm
Room: INTL 3

Session Abstract

We present an alternative approach to traditional requirements-based system design called Capabilities Engineering, which mathematically exploits
the structural semantics of the Function Decomposition graph (a representation of user needs) to formulate Capabilities. We analyze SAKAI because
it exhibits characteristics associated with complex emergent systems and has a comprehensive change history. These features assist the
statistical examination of the Capabilities-based approach for developing change-tolerant systems.

Presentation Materials

Software Engineering Functional Analysis of Sakai is part of the research being conducted by Ramya Ravichandar, Ph.D. candidate, and her adviser,
Dr. James D. Arthur, Dept. of Computer Science, Virginia Tech, in coordination with Aaron Zeckoski, Lead Developer - Learning Technologies. In
particular, Sakai is being used for the purpose of validating Capabilities Engineering, an alternative methodology for developing complex
emergent systems. The following sections provide a brief overview of Capabilities Engineering, the Function Decomposition graph
that is used to capture Sakai's functionality, and a pictorial representation of the graph itself.

Slides of the Presentation at Sakai Conference, Atlanta, 2006

Presentation Slides
http://issues.sakaiproject.org/confluence/download/attachments/31149/Sakai+Presentation+Dec+7%2C+Atlanta+2006.pdf

How is our research important for the SAKAI community?

Helps answer questions such as:

  • Is the current set of components/modules in SAKAI change-tolerant?
  • Which sets of modules are highly coupled?
  • Is there another set of Capabilities that is more change-tolerant?
  • What is the optimal order of implementation of functionality?
  • What features should constitute the next release?
  • How do we classify the different types of requirements?

Benefits:

  • A more structured requirements process
  • Reduced impact when user needs or requirements change
  • Ability to use the latest technology when implementing a Capability
  • Accommodation of schedule constraints

The research analysis depends on the validity of the graph, so we request your feedback on its accuracy. Please email ramyar.at.vt.edu or leave your comments here.

What is a Function Decomposition Graph?

The Function Decomposition (FD) graph, G = (V, E), strives to capture the basic functionality of a system from the users' perspective, i.e., their needs.

Vertex Set (V)
  • Root Node: The root represents the highest level mission or need of the system. There is exactly one overall system mission and hence, only one root node in an FD graph.
  • Leaves: A leaf node represents a directive of the system. A system has a finite number of directives, and hence its FD graph has the same number of leaves. Directives are
    similar to low-level requirements but are stated in the language of the problem domain.
  • Internal Nodes: An internal node represents a functionality of the system. The level of abstraction of the functionality is inversely proportional to the length of the directed path
    from the root to the internal node representing the concerned functionality.
Edge Set (E)
  • Decomposition: The partitioning of a functionality into its constituent components is depicted by a decomposition edge. A directed edge between a parent
    and its child node represents functional decomposition and implies that the functionality of the child is a proper subset of the parent's functionality.
    Only non-leaf nodes (i.e., internal nodes) with an outdegree of at least two can have valid decomposition edges with their children.
  • Refinement: The refinement relationship is used when there is a need to express a node's functionality with more clarity, say, by furnishing additional details.
  • Intersection: The intersection edge indicates commonalities between functions defined at the same level of abstraction. Hence, a child node with an
    indegree greater than one represents a functionality common to all its parent nodes.
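The node and edge kinds above can be sketched in code. This is a minimal illustration only, not the authors' actual tooling; all node names and the class itself are hypothetical:

```python
# Minimal sketch of an FD graph with the three edge kinds described above.
# Node names ("collaboration-platform", "messaging", ...) are illustrative.
from collections import defaultdict

class FDGraph:
    def __init__(self, root):
        self.root = root                   # exactly one root: the system mission
        self.parents = defaultdict(list)   # child -> [parent, ...]
        self.children = defaultdict(list)  # parent -> [(child, edge_kind), ...]

    def add_edge(self, parent, child, kind):
        assert kind in ("decomposition", "refinement", "intersection")
        self.children[parent].append((child, kind))
        self.parents[child].append(parent)

    def is_leaf(self, node):
        # Leaves (directives) have no outgoing edges.
        return len(self.children[node]) == 0

    def abstraction_depth(self, node):
        # Abstraction is inversely related to path length from the root;
        # here we report the shortest directed path length via BFS.
        frontier, depth, seen = [self.root], 0, {self.root}
        while frontier:
            if node in frontier:
                return depth
            nxt = []
            for p in frontier:
                for c, _ in self.children[p]:
                    if c not in seen:
                        seen.add(c)
                        nxt.append(c)
            frontier, depth = nxt, depth + 1
        return None

g = FDGraph("collaboration-platform")
g.add_edge("collaboration-platform", "messaging", "decomposition")
g.add_edge("collaboration-platform", "assessment", "decomposition")
g.add_edge("messaging", "send-announcement", "decomposition")   # a directive
g.add_edge("assessment", "send-announcement", "intersection")   # shared child, indegree 2
```

Note how "send-announcement" has indegree two: per the intersection rule, it represents functionality common to both "messaging" and "assessment".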
Relevance Values

The failure to implement a directive can be interpreted as a risk. Therefore, we use the impact categories
Catastrophic, Critical, Marginal, and Negligible to guide the assignment of relevance values. Each impact category is
well-defined and has an associated description, which is used to estimate the relevance of a directive on the basis of its potential impact. The assignment of
relevance values is described in the table below:

Impact        Description                 Relevance Value
Catastrophic  Task failure                10
Critical      Task success questionable   7
Marginal      Reduction in performance    3
Negligible    Inconvenience               1

NOTE: Relevance values are assigned only to an edge connecting a parent node and a directive. In some sense, this indicates to what extent a directive (requirement) is important
to the implementation. This is similar to the categories (critical, essential, desirable, not applicable) used in the requirements poll https://www.sakaiproject.org/requirements/results.php.

Request for your Feedback

Do you think the SAKAI Function Decomposition graph (in the following section) is correct? For example:

  • Is the functionality of the module you are developing present in the graph?
  • Is it decomposed correctly?
  • Are there missing functions that have not been captured?
  • Can you suggest additional levels of hierarchy?
  • Are the edge weights of leaves (relevance values) accurate?

Please email me at ramyar.at.vt.edu (Ramya Ravichandar) with your suggestions, comments, critiques, or corrections. You are also welcome to comment here.
Please use the key (#xxx), present in the label of each node, to identify what your feedback refers to.

Function Decomposition Graph of SAKAI

Software Engineering Research - Capabilities Engineering

Complex emergent systems need to be change-tolerant because they have lengthy development cycles. Requirements and technology often evolve during such development periods,
thereby inhibiting a comprehensive up-front solution specification. Failing to accommodate changed requirements or to incorporate the latest technology results in an unsatisfactory
system and invalidates the huge investments of time and money; the recent history of system failures provides ample evidence of this. We propose an alternative
approach to development, termed Capabilities Engineering (CE), for building change-tolerant systems. It is a scientific, disciplined, and deliberate process for defining Capabilities as
functional abstractions, which are the building blocks of the system. Capabilities are designed to exhibit high cohesion and low coupling, properties that are also desirable from
a software engineering perspective, in order to promote change-tolerance. In addition, the CE process uses a multi-disciplinary optimization approach for selecting an optimal set
of Capabilities that accommodates the constraints of technology advancement and development schedule.

We chose to validate our research using SAKAI, which displays the characteristics of a complex emergent system. In particular, SAKAI is being incrementally developed and is expected
to have an extended lifetime. Furthermore, the inherent complexity of SAKAI as a software system is compounded by the need to coordinate global development efforts. We capture
the functionality of SAKAI in a Function Decomposition (FD) graph. The FD graph illustrates user needs in terms of desired functionalities and captures their associated levels
of abstraction. From this graph we determine slices (sets of nodes) that can be considered as Capabilities, and compute their cohesion and coupling values. We then identify
an optimal set based on scheduling needs, balanced abstraction levels, and technology constraints. We utilize the change history associated with SAKAI and perform impact analysis
to determine whether the set of Capabilities suggested by CE is more change-tolerant than the existing implementation. CE is a recursive process of selection, optimization, and
reorganization, leading to the stabilization of Capabilities. We envision that the Capabilities-based approach provides a high-level development framework for complex emergent
systems, accommodating change and facilitating evolution with minimal impact.
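The CE papers define their own cohesion and coupling measures over the FD graph; as a rough stand-in only, a generic graph heuristic counts edges internal to a slice (cohesion) versus edges crossing its boundary (coupling). Everything below, including the node names, is an illustrative assumption rather than the authors' actual metric:

```python
# Rough illustration only: cohesion ~ edges internal to a slice,
# coupling ~ edges crossing the slice boundary. This generic heuristic
# stands in for the CE papers' own measures, which are defined elsewhere.
def slice_metrics(edges, slice_nodes):
    """edges: iterable of (u, v) pairs; slice_nodes: set of nodes in the slice."""
    internal = sum(1 for u, v in edges if u in slice_nodes and v in slice_nodes)
    crossing = sum(1 for u, v in edges if (u in slice_nodes) != (v in slice_nodes))
    return internal, crossing

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "e")]
cohesive = slice_metrics(edges, {"a", "b", "c"})  # 2 internal, 2 crossing
```

Under such a heuristic, a candidate slice with many internal edges and few crossing edges would be preferred as a Capability, echoing the high-cohesion/low-coupling goal stated above.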

Research Papers about Capabilities Engineering:

  • Ravichandar, R., Arthur, J.D. and S.A Bohner. "Capabilities Engineering:
    Constructing Change-Tolerant Systems" ACM: Computing Research Repository
    (CoRR), Technical Report, CS.SE/0611071, 2006. http://arxiv.org/abs/cs.SE/0611071
  • Ravichandar, R., Arthur, J.D. and R.P. Broadwater. "Reconciling
    Synthesis and Decomposition: A Composite Approach to Capability
    Identification" ACM: Computing Research Repository (CoRR), Technical
    Report, CS.SE/0611072, 2006. http://arxiv.org/abs/cs.SE/0611072

Podcasts

  • Session leaders are encouraged to post their podcasts on the main Atlanta Podcasts page (a central repository of podcasts) and may also choose to link to them from their session page. See the main Atlanta conference wiki page for more details.

Additional Information

  • Session leaders are also encouraged to appoint a session convener, a podcast recorder and/or a note-taker and post the minutes of their session on a Page (see Add Page link near top-right.)
  • Participants and Session Leaders are encouraged to post Comments (see Comment form below) or create additional Pages as needed to facilitate collaboration (see Add Page link near top-right.)