Sandeep (Marist) and Aaron Zeckoski (Unicon) discussed some possible improvements to the current OAAI code and mechanisms. This is meant to document a possible route forward. Please feel free to comment and adjust.

Goals

  1. Attract more adopters
    1. Easier installation and more configurable setup
    2. More use of standards
  2. Attract contributors (developers)
    1. Easier to check out (a single source checkout)
    2. Able to contribute (use of GitHub or a similar open repository)
  3. Move the code aspects of the project from Marist to Apereo
  4. Make the processing more automatic

Recommendations/Implementation

Data extraction (part 1)

Output(s): 4 CSV files (formats defined on the main page)

Issues

  1. Currently this is handled via a set of scripts for Sakai (which produce CSV files) plus manual extraction from an SIS or other data sources; this is acceptable for the short term.

Solutions

  1. Create a mechanism that will interface with an LRS via the Tin Can API (xAPI) and extract the data into CSV files (a rough extraction sketch follows this list)
  2. For people not using the Tin Can API, maintain a community-managed set of instructions or scripts that can be executed to extract the data from other sources (Banner, PeopleSoft, Sakai, Moodle, etc.)
    1. Make contributions easy (include them in the project source code and/or on a wiki page)
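
Below is a rough sketch (plain Java, no extra libraries) of what the LRS extraction in item 1 could look like. The endpoint URL, credentials, and output file name are placeholders, and a real extractor would map the returned statements onto the 4 CSV formats defined on the main page rather than just dumping the JSON.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.xml.bind.DatatypeConverter;

    public class LrsExtractor {
        public static void main(String[] args) throws Exception {
            // Hypothetical LRS statements endpoint and credentials
            String endpoint = "https://lrs.example.edu/xapi/statements";
            String auth = DatatypeConverter.printBase64Binary("user:password".getBytes("UTF-8"));

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("X-Experience-API-Version", "1.0.0");

            // Read the raw statement JSON returned by the LRS
            StringBuilder body = new StringBuilder();
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            in.close();

            // Placeholder output: a real extractor would parse this JSON and
            // write the 4 OAAI CSV files instead of the raw payload
            PrintWriter out = new PrintWriter("statements.json", "UTF-8");
            out.print(body.toString());
            out.close();
        }
    }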

Data processing (part 2)

Output(s): JSON file, SQL file (identifying the at-risk and near-risk students)
  Optionally: Excel/CSV file

Issues

  1. Installation of the current processor is difficult
  2. Requires SQL Server (or another separate database)
  3. Processing must be manually triggered and outputs manually selected

Solutions

  1. Move all related code and PMML files into an open source code repository
    1. Initial code is in public GitHub here: https://github.com/Unicon/OpenAcademicAnalytics
  2. Add a README to the code repo that references Confluence and/or includes a straightforward set of installation instructions
  3. Replace the use of SQL Server with H2 (http://www.h2database.com/); see the embedded database sketch after this list
  4. Use Maven for the source build
  5. Include an embedded version of Pentaho Kettle with the source
    1. Possible tips here: http://labs.consol.de/lang/en/blog/kettle/pentaho-kettle-within-a-web-application/
    2. http://wiki.pentaho.com/display/EAI/PDI+Integration
  6. Add support for the processor to read the 4 CSV files from an input directory and send the JSON to an output directory
  7. Set up basic configuration processing that reads from a properties file, so that configuration does not have to be managed in the source code (see the configuration sketch after this list)
  8. Add support for the processor to output CSV, SQL, and JSON of the results
    1. Probably would be good to make this configurable
  9. Initially we will package this as a simple war file and trigger the processor via REST (a minimal servlet sketch follows this list)
    1. A POST request to /{app}/process will trigger the execution
    2. A GET request to /{app}/results.{type} will return the output of the most recent run
  10. We will use Maven 3 to build and manage the artifacts
    1. The code should be compatible with Java 6 or newer
  11. Unit tests should be added to verify that the parts of the processor work as expected (an example test follows this list)
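
A minimal sketch of item 3 (replacing SQL Server with H2): the processor opens a file-backed embedded H2 database over plain JDBC, so adopters do not need to install a separate database server. The file path, table, and columns are illustrative only.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class EmbeddedDatabase {
        public static Connection open() throws Exception {
            // H2 creates the ./data/oaai.* files on first use (path is an assumption)
            Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection("jdbc:h2:./data/oaai", "sa", "");
            Statement st = conn.createStatement();
            st.execute("CREATE TABLE IF NOT EXISTS risk_scores (student_id VARCHAR(64), score DOUBLE)");
            st.close();
            return conn;
        }
    }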
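
Items 6 and 7 could be handled with a small properties-driven setup like the following sketch; the property names, file names, and directory layout are assumptions, not the final configuration.

    import java.io.File;
    import java.io.FileInputStream;
    import java.util.Properties;

    public class ProcessorConfig {
        public static void main(String[] args) throws Exception {
            // Load configuration from an external file so nothing is hard-coded
            Properties props = new Properties();
            FileInputStream in = new FileInputStream("oaai.properties");
            props.load(in);
            in.close();

            File inputDir = new File(props.getProperty("input.dir", "input"));
            File outputDir = new File(props.getProperty("output.dir", "output"));

            // Check for the 4 CSV extracts in the configured input directory
            // (file names here are placeholders for the formats on the main page)
            String[] expected = {"students.csv", "courses.csv", "enrollments.csv", "activity.csv"};
            for (String name : expected) {
                File csv = new File(inputDir, name);
                System.out.println(csv.getName() + (csv.exists() ? " found" : " missing"));
            }

            // The processor would write results.json (and optionally CSV/SQL) here
            if (!outputDir.exists()) {
                outputDir.mkdirs();
            }
        }
    }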
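
For item 9, a plain servlet (Java 6 compatible, packaged in the war) would be enough to start with; the static field standing in for the processor output is a placeholder for the real Kettle-driven run.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ProcessorServlet extends HttpServlet {
        // Stand-in for the most recent processor output (real code would run Kettle)
        private static volatile String latestResults = "{}";

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // POST /{app}/process -> trigger a processing run
            latestResults = "{\"status\":\"complete\"}"; // placeholder result
            resp.setStatus(HttpServletResponse.SC_ACCEPTED);
            resp.getWriter().write("processing started");
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // GET /{app}/results.json (or .csv / .sql) -> most recent output
            String path = req.getPathInfo(); // e.g. "/results.json"
            String type = (path == null) ? "json" : path.substring(path.lastIndexOf('.') + 1);
            resp.setContentType("json".equals(type) ? "application/json" : "text/plain");
            resp.getWriter().write(latestResults);
        }
    }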
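
Finally, a small JUnit 4 example of the kind of unit test item 11 calls for; the CsvLineParser helper is purely illustrative and not existing OAAI code.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CsvLineParserTest {
        // Hypothetical helper under test: splits a CSV line into trimmed fields
        static class CsvLineParser {
            String[] parse(String line) {
                String[] parts = line.split(",");
                for (int i = 0; i < parts.length; i++) {
                    parts[i] = parts[i].trim();
                }
                return parts;
            }
        }

        @Test
        public void parsesStudentLine() {
            String[] fields = new CsvLineParser().parse("student123, COURSE-101, 0.85");
            assertEquals(3, fields.length);
            assertEquals("student123", fields[0]);
            assertEquals("0.85", fields[2]);
        }
    }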

More can be done here, but this gets us started nicely.

 
