Forum .LRN Q&A: Response to Anyone working on Scorm ?

Posted by Andrew Grumet on
The SCORM runtime stuff is a good read. It defines a data model and an object-centric API that maps, roughly, onto OACS. They wrote it from the point of view of a client-resident JavaScript object communicating back to a server-resident LMS, but the ideas might carry over to, e.g., a server-resident survey package reporting a student's score back to the server-resident LMS. Instead of
// Javascript sample code from SCORM runtime spec.
LMSSetValue("cmi.core.score.raw","85");
we would have something like
# Possible OACS/Tcl equivalent.
# Here object_id uniquely identifies the
# student being scored.
lms::set_value \
  -object_id $object_id \
  -element "cmi.core.score.raw" \
  -value 85
The only substantial difference is that the object reference is left implicit in the JavaScript example.
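To make the implicit-object point concrete, here is a sketch of the SCORM 1.2 runtime conversation. The stub `API` object below is an assumption standing in for the one a real LMS injects into the page (a real implementation would persist the values server-side); the call names (`LMSInitialize`, `LMSSetValue`, `LMSCommit`, `LMSFinish`) are from the SCORM 1.2 runtime spec. The SCO never passes an object id because the API object it finds in its frame hierarchy is already bound to the student's launch context.

```javascript
// Stub standing in for the window.API object an LMS provides.
// Assumption: a real LMS would persist these values server-side.
const API = {
  data: {},
  LMSInitialize(arg) { return "true"; },
  LMSSetValue(element, value) { this.data[element] = value; return "true"; },
  LMSGetValue(element) { return this.data[element] ?? ""; },
  LMSCommit(arg) { return "true"; },
  LMSFinish(arg) { return "true"; },
};

// The conversation a SCO has with the LMS -- note no object id anywhere:
API.LMSInitialize("");
API.LMSSetValue("cmi.core.score.raw", "85");
API.LMSCommit("");
console.log(API.LMSGetValue("cmi.core.score.raw")); // "85"
API.LMSFinish("");
```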

Given that, unlike SCORM, we're not assuming that our learning object is client-side, it's not clear to me yet what the right abstractions are (the Tcl package example above is just for illustrative purposes and not a proposed design).

Also, SCORM doesn't really address use cases that live above individual learning objects. This includes Malte's case where you want to assign relative weights to different learning objects. Perhaps that would just be an add-on (or maybe IMS defines it).
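As a sketch of what such an add-on layer might look like, Malte's weighting case could be as simple as a weighted average over per-object scores (each on the 0-100 scale the SCORM score fields use). None of this is from SCORM itself; the function and its shape are hypothetical.

```javascript
// Hypothetical add-on above individual learning objects: combine
// per-object scores into a course grade using relative weights.
// results: array of { score: 0-100, weight: relative weight }
function weightedCourseScore(results) {
  const totalWeight = results.reduce((sum, r) => sum + r.weight, 0);
  if (totalWeight === 0) return 0;
  return results.reduce((sum, r) => sum + r.score * r.weight, 0) / totalWeight;
}

// Two quizzes at weight 1 each, a final exam at weight 2:
weightedCourseScore([
  { score: 80, weight: 1 },
  { score: 90, weight: 1 },
  { score: 70, weight: 2 },
]); // → 77.5
```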

One piece of good news is that the SCORM scoring stuff adds only marginally to the simplest possible case. They propose three fields: raw score, max possible score, and min possible score, where all three numbers are assumed to be between 0 and 100. That at least lends support to the notion that the grading system need not be terribly complicated.
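How those three fields compose is a one-liner: the raw score as a fraction of the min-to-max range. (The defaults of 0 and 100 here are an assumption about what an LMS would do when the learning object doesn't set min and max.)

```javascript
// Turn SCORM's (raw, min, max) score triple into a 0-1 fraction.
// Assumption: min defaults to 0 and max to 100 when unset.
function normalizedScore(raw, min = 0, max = 100) {
  return (raw - min) / (max - min);
}

normalizedScore(85);          // → 0.85
normalizedScore(85, 50, 100); // → 0.7
```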