Forum .LRN Q&A: Re: Request for advice from the OCT

Posted by Matthias Melcher on
Malte,
<blockquote>And this might be the case where I run head-on against
the "standards" wall, but unless I see a real use case
where you have an assessment's display governed by
external conditions, I'm not keen on designing it that
way from the beginning.
</blockquote>

1. I don't think we should run against standards.

2. The effort you might save by designing assessments in a less open manner would have to be paid back as additional or duplicated effort on the SS engine.

3. A use case where assessment-internal entities (sections and items) should be addressable from sequencing is simply reusability:

If a test question was carefully crafted for a formative test at the end of a chapter, it should be reusable in a summative test at the end of the term as well.

Posted by Malte Sussdorff on
Matthias, as Ola pointed out, nowhere does the IMS QTI specification say that we need to implement sequencing internally using the IMS SS specification. Furthermore, reusability is easy, as you can reuse questions, sections, and even assessments wherever you want. But you define the reuse within the assessment system.
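To illustrate what I mean by reuse within the assessment system, here is a minimal sketch in Python (all names are invented, not the actual assessment data model): items live in one catalogue, and sections and assessments only hold references, so the same question can appear in any number of tests.

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        item_id: str
        prompt: str

    @dataclass
    class Section:
        section_id: str
        item_ids: list[str] = field(default_factory=list)   # references, not copies

    @dataclass
    class Assessment:
        assessment_id: str
        section_ids: list[str] = field(default_factory=list)

    # one shared catalogue of items
    catalogue = {"q1": Item("q1", "What is 2 + 2?")}

    formative = Section("chapter1_quiz", item_ids=["q1"])
    summative = Section("final_exam", item_ids=["q1"])      # the same item, reused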

Let's try to clarify the steps for the Professor:

- Upload a chapter (chapter1) to LORSm.
- Define a test (test1) using assessment that depends on students having read the uploaded chapter. At this stage you just create an assessment.
- Go to the IMS SS package. Define that "test1" is only available to students if they have read "chapter1".
- Do a lot of other things until the end of the term.
- End of term: create a summative test (test2) using assessment. As assessment allows the reuse of items and sections, pick the questions from the question catalogue or the section catalogue, or say "copy test1 and create a new assessment".
- Go to the IMS SS package. Define that "test2" is only available after everything else has been done (end of term). (A sketch of this gating follows the list.)
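A hedged Python sketch of the division of labour in these steps (the names are illustrative, not the real LORSm or SS API): the SS engine only sees opaque activities such as "chapter1" and "test1" and gates them on completion; it never looks inside an assessment.

    completed: set[str] = set()          # activity ids the student has finished
    prerequisites = {
        "test1": {"chapter1"},           # test1 only after chapter1 has been read
        "test2": {"chapter1", "test1"},  # test2 only at the end of the term
    }

    def is_available(activity_id: str) -> bool:
        """An activity is available once all its prerequisites are completed."""
        return prerequisites.get(activity_id, set()) <= completed

    completed.add("chapter1")
    assert is_available("test1")
    assert not is_available("test2")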

There is no need to use a simple sequencing engine within assessment to achieve *branching*, *randomizing*, or assigning questions to sections and sections to an assessment.

Obviously you could think about using the SS package API to do this, but then you would make the package considerably more complex than it needs to be and stall development on assessment for ages. The reasons:

- In assessment I can easily say within the data model: these 15 items belong to my section1, and only 10 of them are displayed, in a random fashion.
- In assessment I can easily say: if the answer to question1 is "foo", then go to question2.
- In assessment I can easily say: if an answer to question1 is given, question4 has to be answered as well. (A sketch of all three rules follows this list.)
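A rough Python sketch, with invented names, of how all three rules stay inside the assessment data model, with no external sequencing engine involved:

    import random

    def pick_items(item_ids: list[str], display_count: int) -> list[str]:
        """Display only `display_count` of a section's items, chosen at random."""
        return random.sample(item_ids, display_count)

    section1 = [f"item{i}" for i in range(1, 16)]      # 15 items in section1
    shown = pick_items(section1, 10)                   # display 10 of them

    # branching: (question, answer) -> next question
    branch_rules = {("question1", "foo"): "question2"}
    # dependency: if question1 is answered at all, question4 becomes required
    required_if_answered = {"question1": "question4"}

    answer = "foo"
    next_question = branch_rules.get(("question1", answer))    # "question2"
    also_required = required_if_answered.get("question1")      # "question4"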

If I were to mirror these use cases through an external API to another package, I'd have a hell of a time. And there is no benefit for the user, as the steps would suddenly become:

- Go to assessment
- Create a question
- Create multiple choice answers to the question
- Go to SS package
- Select a relationship between your question and the answers.
- Link multiple choice answers to the question using the SS package
- Store additional information like "10 out of 15" in the SS package
- Go back to assessment (and so on).

I'm not sure what you want to gain from defining all these steps in the SS package, as the same can just as well be achieved by the assessment system.

I totally disagree with your second statement, as the order is wrong. If there were a cleverly designed SS engine that took care of all the sequencing needs of assessment (and the above examples are only the tip of the iceberg), then we could save assessment the effort of implementing things on its own. *BUT* this will take considerable resources, and the gain is more than questionable, as I don't see a performance boost or additional functionality given to assessment by an overly complex SS package.

Don't get me wrong, I have thought about using the SS package, e.g. for branching, but to be totally honest, unless we have a well-specified SS package and API it does not make sense to go down that road. Furthermore, we need a very flexible system internally anyway for checks on items (is an answer in a valid range, and so on). It is very easy to extend this flexible system to branching as well. So from a strictly resource point of view it does not make much sense, and as I said, I don't see any use case where it would be beneficial to use the IMS SS package to control the *internals* of assessment that could not be solved just as well with the current thinking.
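To make that last point concrete, a speculative Python sketch (all names invented): if item checks are just predicates over the given answer, the very same mechanism can drive branching, so no separate SS machinery is needed for it.

    from typing import Callable

    Check = Callable[[str], bool]

    def in_range(lo: float, hi: float) -> Check:
        """Validity check: is the numeric answer within [lo, hi]?"""
        return lambda answer: lo <= float(answer) <= hi

    # the same predicate type reused for a branch condition
    branch_rules: list[tuple[Check, str]] = [
        (lambda a: a == "foo", "question2"),   # branch: answer "foo" -> question2
    ]
    validity: dict[str, Check] = {
        "question3": in_range(0, 100),         # range check on an item
    }

    answer = "foo"
    next_question = next((q for c, q in branch_rules if c(answer)), None)
    assert validity["question3"]("42")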

Hope this clarifies my standpoint better, and also the statement you picked up on.

Posted by Malte Sussdorff on
Okay, I guess we *urgently* need to meet on IRC, or this discussion will get totally out of hand. Furthermore, let's sit down and talk about what an SS engine would need to do in the following scenarios:
  • Limited to creating sequences between learning objects.
  • Opened up to create sequences within other packages.
Especially in the latter case, we need to think of:
  • What is the data model?
  • What are the API functions that are needed? (I can surely come up with a couple from assessment, but I assume Ernie has some ideas on this as well; a guessed-at sketch follows this list.)
  • What is the user interface, and how does this user interface integrate with the existing packages? Some options:
    • The existing package (e.g. assessment) only uses the API and data model.
    • The existing package includes some ADP library code from the SS package.
    • There is no UI in the existing package (assessment); the sequences are only handled through the SS package.
  • Last but not least: what is there to gain from opening up the SS package for use in other packages, and at what degree of integration is the ROI (both money and user experience) positive?
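Purely to seed that discussion, here is a guessed-at minimal API surface in Python that an SS package limited to the first scenario might expose; every name here is hypothetical, not an existing interface.

    from typing import Protocol

    class SequencingService(Protocol):
        def register_activity(self, activity_id: str) -> None:
            """Make a learning object (chapter, assessment, ...) known to SS."""

        def add_precondition(self, activity_id: str, requires: str) -> None:
            """activity_id only becomes available once `requires` is completed."""

        def mark_completed(self, learner_id: str, activity_id: str) -> None:
            """Record that a learner has finished an activity."""

        def is_available(self, learner_id: str, activity_id: str) -> bool:
            """Ask SS whether a learner may access an activity."""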