Forum .LRN Q&A: Request for advice from the OCT

Posted by Ola Hansson on
There has been a very long and fruitless discussion about the implementation of IMS Simple Sequencing last seen here:
https://openacs.org/forums/message-view?message_id=188985

I would very much like to get some (impartial) comments or guidance from members of the OCT. If you can help us determine how it is going to be done conceptually, that would help immensely. We are just wasting time and energy and need to have this issue resolved one way or another.

Thanks!

/Ola

Posted by Jeff Davis on
To be honest, it's not all that clear to me which technical questions are actually in dispute. If I read things right, you want the IMS SS engine as its own package without a public interface, and Ernie wants it somewhere (he doesn't much care where) with a public interface.

Anyway, maybe you can post a summary of where you and Ernie disagree and then we would be better able to adjudicate.

Posted by Ola Hansson on
Thanks Jeff, that is a good point!

I think you read things right, if by "public interface" you mean "public TCL and/or SC API". (We obviously want a public UI for the Simple Sequencing package.) Ernie thinks a public API is absolutely necessary, and we don't. The reason we reach different conclusions is that we have different understandings of IMS SS and envision different end products.

Here's our (Polyxena's) partial interpretation of the main differences in opinion:

- We think that SS is specifically designed for the traversal of sequences of learning activities.

- Ernie thinks that SS is a general service which could be used for many types of sequencing.

*

- To us, SS is about sequencing "learning activities" which map to learning objects and other content (a test, for instance).

- To Ernie, SS seems to be about sequencing the actual learning objects in LORSm themselves.

*

- We envision one central front-end for creating learning sequences throughout the toolkit.

- Ernie envisions many front-ends for creating learning sequences in many packages (among them LORSm).

*

- We think of LORSm as file storage (on steroids) for learning objects (a course library, if you will).

- Ernie thinks of LORSm as an LMS internally providing sequenced delivery of its learning objects.

*

- We want the whole learning sequence user experience to be server-side.

- Ernie wants the SCORM solution with a client-side UI and an adapter demanding strictly defined content.

Posted by Malte Sussdorff on
Ola, where is the problem? Design it in a way that supports both your and Ernie's interests (and vice versa, if you do not have the funding to do it right away and Ernie needs it).

I do agree with Ernie's vision as well as yours, as we need both. We need sequences within LORS, but we should also have the capability to create sequences in general between objects within a community. My understanding is:

- I want to read a paper, and once I have read it I want to take a test to see if I learned something.

- I read a couple of papers in a certain order.

- I finish one class (mathematics 1) and am immediately signed up for the next class.

- I finish a class and get a broader choice of classes to choose from, which have the prerequisite "finished class 1".

- Depending on the grade in the test, I get to see different papers; depending on the grade in the class, I am allowed to attend different classes.

Posted by Matthias Melcher on
Thank you, Ola, for the useful effort to clarify and contrast the two views, although I think Ernie's view was not captured completely. Perhaps the complicated concepts could be further illustrated if the assessment people could specify how, for instance, a chapter-completion assessment would plug into the sequencing, and which terminology applies from each of the relevant standards: QTI ASI (assessment, section, item), CP (organization, item, resource, file), and SS (activity, attempt, objective, satisfaction).

Posted by Ola Hansson on
There needn't be a problem if we can clarify conceptually how things should be implemented before we do it. Isn't it rather the case that if we implement the capability to create sequences in general, there would be no need to implement corresponding capabilities specific to LORS[m]? As a parallel, we might consider how the generic Categories package lets various other packages have their objects mapped to centrally defined categories. Just a thought ...

There is a problem, however, because Ernie won't clarify the need for a separate SS front-end within LORSm, although we've explained there will be a general one, but instead presents us with an ultimatum and threatens to cowboy in on our SS initiative. We have no problem with going Ernie's route (public API, several front-ends) if he could just convince us and the OCT that his approach represents best practice.

Posted by Jeff Davis on
As you stated it, I like Ernie's approach better; I think defining a clean API that is well suited to reuse is better software engineering. Take the CMS/CR, for example. At the time they were written it was not obvious that the CR would really need to be exposed to anything else, but subsequently the CMS ended up being discarded, and now there are a number of other packages which use that API with very few changes.

I think most things (SS included) should be done with a service layer and a presentation layer (with the possibility of creating other vehicles for presentation), and even though you think the whole thing should be server-side, I think forcing that decision into the design is a bad one.

If you are saying you don't want to force that decision into the design, then I don't see why there is any conflict at all, since you are in effect saying you want an ss-service + an ss-presentation-layer package, where Ernie says he wants ss-service + LORSm (== another ss-presentation-layer). I.e. all he wants is to use the service in a different presentation engine. And if ss-service is well designed, then I just don't see why it should make a difference to you that he wants to use it.

Another thing I take issue with is the tone of the conversation. Ernie has delivered code (that I think leverages the existing infrastructure well), seems to be pretty responsive and thoughtful about how things are done, and is moving things forward. I don't think it's at all accurate to characterize what he is doing as "cowboying in on our SS initiative" or to say he has been amazingly insulting towards you.

As Roc said, Ernie has a problem he needs to solve; he is going to do what he needs to do to get it solved, and even if you don't agree with his priorities, I would hope you could recognize that your goals and his are sufficiently aligned that the work you do can be complementary.

Posted by Ola Hansson on
Jeff, thanks for your point of view. I realize now that I have no right to expect that a general consensus be reached about feature implementations within the toolkit. Nor can I expect people to justify their need for this or that feature. In this light, a public API provides people with the necessary freedom to do what they want to do. Therefore I'm now all for a public API.

Again, thanks for resolving this (non-)issue in such a constructive way.

/Ola

Posted by Malte Sussdorff on
The assumption being made with assessment is that packages like curriculum, LORS, and evaluation provide an API to which assessment can deliver a user_id, an object_id (of the assessment), and a percentage value, so they can process the result of the assessment for a user further into, e.g., a final grade. Furthermore, we expect SS to modify the permissions of a user on an assessment in such a way that the user only gets "read" permission once the mandatory activities (defined in SS) have been executed with satisfaction.
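
To sketch the shape of the API I have in mind (the proc and table names are invented; only the calling convention matters):

    # Hypothetical callback a receiving package (here evaluation) would
    # implement; assessment pushes the result of one assessment for one user.
    ad_proc -public evaluation::process_assessment_result {
        -user_id:required
        -object_id:required
        -percentage:required
    } {
        Process the result of an assessment for a user, e.g. into a final grade.
    } {
        db_dml record_result {
            insert into evaluation_results (user_id, object_id, percentage)
            values (:user_id, :object_id, :percentage)
        }
    }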

I hope this very straightforward technical explanation answers your question on the general interaction between assessment and SS.

Now, internally the assessment system is flexible enough to enable sequencing on its own. This sequencing, though, has nothing to do with IMS Simple Sequencing. The assessment sequencing is used for branching, and for displaying multiple questions on a page and then going to the next one. We do this on the one hand by checking items internally (is the answer in a valid range, and how many percentage points is this answer worth) as well as externally (if the answer to question a is "b", then display question c and make an answer to question d mandatory).

If someone sees the need for an even tighter integration in the SS, then we would have to talk about the exact details.

Posted by Ola Hansson on
Malte, your description of the integration between Assessment and Curriculum (well, the SS engine, anyway) goes pretty much hand in hand with the way I picture it.

The expectation that SS modifies the user's permissions on an assessment is interesting, and something I hadn't thought about. It sounds a bit tricky, but I'm sure we'll work something out.

Posted by Malte Sussdorff on
The reason I bring up the modification of permissions is my understanding that there is no need for packages to have scheduling functionality, as this can be solved with permissions.

Take assessment. The assessment "bar" is available to user "foo" from 10am to 5pm. The scheduler will open the "write" permission for user "foo" on assessment "bar" at 10am and remove it again at 5pm.

The same is true for SS granting access to an assessment. Or take files. Let's assume a student shall only see the document on advanced mathematics if he passed the introduction test. SS would then take the grade from assessment (either pull or push) and then give the permission "read" on the file "advanced mathematics".

The beauty of this approach is its simplicity and the lack of any need to modify most existing packages to know of a sequencing module.
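
In OpenACS terms, this could be as simple as the following sketch (permission::grant, permission::revoke, and ad_schedule_proc are the standard calls; the wrapper procs and the way the times are obtained are made up):

    # Open and close "write" on assessment "bar" for user "foo".
    proc assessment_open {user_id assessment_id} {
        permission::grant -party_id $user_id -object_id $assessment_id \
            -privilege write
    }
    proc assessment_close {user_id assessment_id} {
        permission::revoke -party_id $user_id -object_id $assessment_id \
            -privilege write
    }
    # Run each proc once, at 10am and at 5pm (times held as epoch seconds).
    ad_schedule_proc -once t [expr {$open_at - [clock seconds]}] \
        assessment_open $user_id $assessment_id
    ad_schedule_proc -once t [expr {$close_at - [clock seconds]}] \
        assessment_close $user_id $assessment_id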

Posted by Ernie Ghiglione on
Hey Malte,

<blockquote> The same is true for SS granting access to an assessment. Or take files.
Let's assume a student shall only see the document on advanced
mathematics if he passed the introduction test. SS would then take
the grade from assessment (either pull or push) and then give the
permission "read" on the file "advanced mathematics".

The beauty of this approach is its simplicity and the lack of any need
to modify most existing packages to know of a sequencing module.
</blockquote>

Well, in most cases that would give the SS engine responsibility that might not be part of its job (it might instead fall to the Assessment package to do so).

[IMPORTANT: note that this example only covers how things should work if the Assessment package is meant (or is coded) to be compliant with IMS SS. If that's not the case, then the sequencing can be resolved simply by the Assessment package's internal mechanisms/functions.]

Taking your example, let me see if I can modify it a bit so the boundaries of both systems are clear and neatly defined:

When the assessment is created, the author defines certain rules that set the behaviours (and/or branching) a student will follow according to, for instance, the values he/she scores or the content previously viewed.

Once the assessment gets uploaded, the fun part begins:

The assessment package is responsible for managing and delivering the questions/tests/assessments to the students, gathering results, and some other admin tasks.

The SS engine, instead, sets the sequences and behaviour for any IMS SS information it is given.

Therefore, if the assessment comes with IMS SS information, the assessment package passes it to the SS engine, which stores the sequencing information (only).

Now, when a student/learner goes about taking that particular assessment, the Assessment package asks the SS engine to deliver the correct sequence for this user. Once the SS engine returns the appropriate sequence, the Assessment package renders it accordingly to the user. However, there are going to be cases when the answer of the user will have to be passed to the SS engine so the Assessment package can get a new sequence according to the results.

Notice that the SS engine bears no responsibility for recording the scores of the assessment; it is only the fellow with a bunch of rules set up which, according to what the student answers (and the sequence originally set by the author), determines what comes next. So the responsibility for recording the student's results lies with the Assessment package.

Otherwise the SS engine would have to record all results for all random questions and tests (not to mention page views for all sorts of courses and learning objects)... which, at the end of the day, is really not part of its job.

Now, the question you might be asking: how is the sequencing engine going to know where the student left off last time? Well, when you request a sequence: since the Assessment package is the one that tracked where the guy left off, it is the Assessment package that says "Hey SS mate, random striker is back. He left off in section 2, question 4, and the answer was 'The Beatles'... what should I show him next?"... and the IMS SS engine will give you a set of questions to render.

Does that make sense?

This way, you are very clearly separating the sequencing job from the rendering and tracking, the latter two being the responsibility of the Assessment/LORSm packages. Determining the appropriate sequence of activities/questions/learning objects falls in the SS engine's lap... and that's the way you want to do it, so that neither your Assessment package nor LORSm has to understand anything about sequencing and behaviours. In addition, if IMS SS changes in the future and new behaviours are added, you keep using the same API, and the SS engine will tell you what to render to the user and how.
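
In pseudo-Tcl, the call pattern would look roughly like this (every ss:: proc here is invented, just to show who calls whom):

    # 1. On upload: assessment hands any IMS SS information to the engine,
    #    which stores the sequencing information (only).
    ss::store_sequence -object_id $assessment_id -sequencing_xml $ss_node

    # 2. At runtime: assessment reports where the learner is and what he
    #    answered, and asks what to render next.
    set next [ss::next_activity \
                  -user_id $user_id \
                  -object_id $assessment_id \
                  -current $activity_id \
                  -result $answer]

    # 3. Assessment (not the SS engine) records the answer itself and then
    #    renders whatever $next points to.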

I hope that helps,

Ernie

PS: Once again, this applies if and only if the sequence you want to work with is of the IMS SS type... if not, you are free to use whatever you want, from creating your own sequencer and behaviours to even using the workflow package, if you find it useful.

PS2: Sorry to keep pushing the Carnegie Mellon paper on simple sequencing, but it does cover *all* relevant aspects of IMS SS and it is a really good and robust design.

http://www.lsal.cmu.edu/lsal/resources/standards/ssservices/services-v02.pdf

Posted by Malte Sussdorff on
Hi Ernie, I mentioned earlier:

<blockquote> Now, internally the assessment system is flexible enough to enable sequencing on its own. This sequencing, though, has nothing to do with IMS Simple Sequencing. The assessment sequencing is used for branching, and for displaying multiple questions on a page and then going to the next one.
</blockquote>

I see a clear distinction between what an assessment does *internally* and how it is called in a learning context. If you talk about IMS sequences within an assessment, they have to be controlled by the assessment system using the functions provided by the assessment.

But this is not what the assessment is all about in a learning context. In a learning context, an assessment is only *part* of the learning experience. And this learning experience includes other objects as well (e.g. LORSm content, grades given in oral exams, ...). The SS package will deal with the conditions and rules that govern how the sequence between these learning objects is created.

Let's try to get the distinction utterly clear, as I think this is the reason for the confusion.

  • Question four follows question two if answer to question one was "bar". Otherwise display question three. Strictly assessment package internal.
  • Display assessment higher mathematics if paper on mathematics has been read. SS functionality
  • Display questions a,b,f,g if paper on mathematics has been read, display question b,d,g,j otherwise. SS functionality. Footnote: a,b,f,g is one assessment, b,d,g,j is another assessment.
You asked where we store the grades. The assessment system *internally* stores percentages. The results of an assessment will be *pushed* to the Evaluation package. The SS system has to query the Evaluation package if it wants to create a rule based on grades, *not* the assessment system (though it might query the latter if it so pleases, but I don't think that would make sense).

<blockquote> Now, when a student/learner goes about taking that particular assessment, the Assessment package asks the SS engine to deliver the correct sequence for this user. Once the SS engine returns the appropriate sequence, the Assessment package renders it accordingly to the user.
</blockquote>
No. This is not the case. The assessment knows on its own which sequence to use, as an assessment *internally* does not differentiate between items and sections depending on external conditions. If you want to modify an assessment based on external conditions, you should create two assessments; otherwise the results of *one* assessment are no longer comparable within the assessment. And this might be the case where I run head-on against the "standards" wall, but unless I see a real use case where you have an assessment's display governed by external conditions, I'm not keen on designing it that way from the beginning (you can always exchange the *internal* sequencing engine at a later stage, if utterly necessary).

If a student leaves an assessment in the middle, the assessment system knows where to continue. No need for the SS system to give the next questions. This is something the assessment does all by itself *internally*.

My whole point is that there is a clear distinction between how sequencing is done *internally* in a package and *externally*. You are not going to make the SS package responsible for the sequence of paragraphs in a document. Neither do you have to make it responsible for knowing the sequence *within* an assessment. But it is *very much* responsible for providing the sequence between the document and the assessment.

Can you see this distinction, and does it make sense to you?

P.S.: I do agree that it would be nice to use the API and storage capabilities of an SS package for handling sequences internally in an assessment. But until we have such a generic API and storage capabilities, we are stuck with the engine currently described in the design specification. If someone (Ola, Ernie 😊) wants to take a look at it and modify it in a way that we could split this out and make it into an SS API, that's fine with me. Please look at https://openacs.org/projects/openacs/packages/assessment/design/sequencing.

Posted by Stan Kaufman on
Malte, I think your explanation of the distinction between Assessment's internal sequencing functions and those external to the package is exactly right and very helpful. Your examples of how this will work in education illustrate that it is a generic mechanism that can be used in other comparable vertical apps.

Posted by Ola Hansson on
Malte, your latest posting is the best posting I have ever read on this subject. Well said indeed!

I haven't checked out Evaluation yet. Is it a backend to Assessment/Survey? Will Assessment/Survey have a dependency (in the .info file) on Evaluation?

The SS engine will want to deliver the "test" to a certain user (or party) when the sequence and the sequencing rules say that activity is next in line in the given sequence. Then, as you said earlier, Malte, the "score" will either be pushed back to the SS engine by the Evaluation package (or Assessment, perhaps?), or pulled back to the SS engine by having it poll Evaluation at the proper point(s) in time. The "score" will have to be normalized to a value between 0 and 1 (I think) according to the SS spec, but a percentage value is a good start 😊

Posted by Ernie Ghiglione on
Hi Malte,

Thanks for taking the time to explain this a bit more clearly. It has been really good. We should have more of these discussions, as it really helps to put everyone on the same page.

<blockquote> No. This is not the case. The assessment knows on its own
which sequence to use, as an assessment *internally* does not
differentiate between items and sections depending on external
conditions.
</blockquote>

But then, is it possible to say that the sequencing of QTI has nothing to do with IMS SS? For instance, a sequence of activities can't reach (for lack of a better word) one single and individual question in an assessment? If not, then we might need to figure out what we can do, as Simple Sequencing does not place any restrictions on what can be sequenced in such a tree. (http://www.imsglobal.org/simplesequencing/ssv1p0/imsss_bestv1p0.html#1500831)

Moreover, can a QTI assessment be sequenced using IMS SS?

<blockquote> My whole point is that there is a clear distinction between how
sequencing is done *internally* in a package and *externally*. You are
not going to make the SS package responsible for the sequence of
paragraphs in a document.
</blockquote>

That is true. However, that deals with the granularity of the activities defined in the sequence. It won't be able to sequence paragraphs, as they are part of a learning object, which I believe is the smallest unit that can be part of an activity, right?

However, I was under the assumption (from reading the specs) that it was possible to sequence individual questions, as they are the smallest atoms of QTI that can be sequenced. But I'm not so sure any more 😊

<blockquote> Can you see this distinction, and does it make sense to you?
</blockquote>

Yes, I believe I do. Summarizing:

IMS SS = sequencing of activities (learning objects, entire assessments, etc)
IMS QTI (assessment) = internal sequence of questions given by the assessment creator before it was uploaded into the system

Right?

Ernie

Posted by Malte Sussdorff on
<blockquote> I haven't checked out Evaluation yet. Is it a backend to Assessment/Survey? Will Assessment/Survey have a dependency (in the .info file) on Evaluation?
</blockquote>

If I have my way, no :). I'd like to support callbacks to do the actual "push", if needed at all, but rely on a "pull" mechanism otherwise. To be honest, I think the score should be made available on request, but I have not looked into the exact integration with Evaluation.

To avoid confusion: a "score" is a percentage value describing the relative success in an assessment. A grade is what the evaluation system makes out of it. Internally, the assessment system does allow negative percentage values to reflect the punishment for answering really stupidly/dangerously (e.g. in medical exams). For packages calling the assessment system with "as_score -user_id -assessment_id or -item_id or -section_id", the result will be calculated to be between 0 and 100 (percent), thereby helping your SS engine. We could add a switch "-normalized" so the value would be between 0 and 1 :).
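
Sketched out, with an illustrative body and table name (the interface is what I described above):

    ad_proc -public as_score {
        -user_id:required
        {-assessment_id ""}
        {-item_id ""}
        {-section_id ""}
        -normalized:boolean
    } {
        Return a user's score as a percentage between 0 and 100,
        or between 0 and 1 if -normalized is given.
    } {
        # Illustrative query; -item_id / -section_id handling is elided.
        # Internally the raw value may be negative.
        set raw [db_string get_score {
            select sum(percentage) from as_results
            where user_id = :user_id and assessment_id = :assessment_id
        } -default 0]
        # Clamp to 0..100 for external callers.
        if {$raw < 0} { set raw 0 } elseif {$raw > 100} { set raw 100 }
        return [expr {$normalized_p ? $raw / 100.0 : $raw}]
    }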

Posted by Ola Hansson on
"But then, is it possible to say that the sequencing of QTI has nothing to do with IMS SS?"

Yes. It says nowhere in the IMS QTI spec that IMS SS shall be used internally in an IMS QTI implementation. (And there is nothing in the IMS SS spec that indicates that simple sequencing should be used for anything besides sequencing of its own learning activities.)

"For instance, a sequence of activities can't reach (for lack of a better word) one single and individual question in an assessment?"

True. (No it can't.)

"If no, then we might need to figure out what we can do as Simple Sequencing does not place any restrictions on what can be sequenced in such a tree."

If you want an activity in an SS sequence to "reach" just one question, you map that activity to an *assessment* with just one single question, no?

To clarify: SS is restricted to sequencing its own "learning activities" only, but you are right in the sense that these learning activities can map to any type of object, including a question in an assessment. However, there is a practical (and very sensible) restriction placed by Malte et al., who don't want SS to run the internal works of Assessment, which Assessment is best suited to do itself.

"More over, can an QTI assessment be sequenced using IMS SS?"

Malte made it clear that it can't. Not internally within Assessment.

I think it's very clear by now that the sequencing of questions in Assessment *internally* does *not* have anything to do with IMS SS, which is specific only to how "learning activities" are to be sequenced, or traversed, in a learning activity tree (a learning sequence, or branched curriculum).

Further, the fact that SS should not act as the internal sequencer in Assessment pretty strongly indicates, IMO, that the same might hold true for other packages as well, such as LORSm, File Storage, CR, Forums, etc. I repeat, SS is in fact very limited to sequencing learning activities, and is not a general sequencer of any type of objects.

Posted by Ola Hansson on
Malte, thanks for clarifying Evaluation and score.

SS will probably be interested in score rather than grade. Let's work out the details when (if) the time comes.

Posted by Matthias Melcher on
Malte,
<blockquote> And this might be the case where I run head-on against
the "standards" wall, but unless I see a real use case
where you have an assessment's display governed by
external conditions, I'm not keen on designing it that
way from the beginning
</blockquote>

1. I don't think we should run against standards.

2. The efforts you might save by designing assessments in a less open manner would have to be paid back as additional or duplicate efforts on the SS engine.

3. A use case where assessment-internal entities (sections and items) should be addressable from sequencing is simply reusability:

If a test question was carefully crafted for a formative test at the end of a chapter, it should be reusable in a summative test at the end of the term as well.

Posted by Ernie Ghiglione on
<blockquote> Yes. It says nowhere in the IMS QTI spec that IMS SS shall be used internally
in an IMS QTI implementation. (And there is nothing in the IMS SS spec that
indicates that simple sequencing should be used for anything besides sequencing
of its own learning activities.)
</blockquote>

Ola,

IMS specs aren't necessarily (I wish they were, though) pieces of a puzzle that you can just put together. As a matter of fact, there are several committees attending to different issues that very often overlap slightly. In such cases, they usually point out in their specifications work that is or has been carried out by other committees, and possible relations.

For instance, this is from IMS SS:

IMS Simple Sequencing Best Practice and Implementation Guide
Version 1.0 Public Draft Specification

2. Relationship to Other Specifications
2.1 IMS Specifications

The IMS Simple Sequencing Specification is related to other IMS specifications, both complete and in-progress. This specification is intended to be consistent with these other initiatives wherever possible, in order to reduce redundancy and confusion between specifications. The related specifications are:

...
* IMS Question and Test Interoperability Specification - the IMS QTI Specification defines the structures used to support the exchange of question and test data [QTI, 01a], [QTI, 01b], [QTI, 01c].

2.1.2 IMS Question and Test Interoperability

Several potential areas of harmonization with QTI have been identified during the development of Simple Sequencing. These include:

    * randomization, selection, and ordering
    * assessments as learning activities
    * using assessment to affect sequencing behavior

--
http://www.imsproject.org/simplesequencing/v1p0pd/imsss_bestv1p0pd.html

Although it doesn't say specifically how they "fit together", it does acknowledge that potential areas have been identified.

And some of the issues of sequencing are part of them, as shown in the last bullet point above.

So, for now, there's no decision in terms of how these gray areas are going to be resolved, but they might (or might not) become part of sequencing in the future.

<blockquote> To clarify: SS is restricted to sequencing its own "learning activities" only,
but you are right in the sense that these learning activities can map to any
type of object, including a question in an assessment. However, there is a
practical (and very sensible) restriction placed by Malte et al., who don't
want SS to run the internal works of Assessment, which Assessment is best
suited to do itself.
</blockquote>

I think it wouldn't take much effort to leave the door open for an option in the assessment package that allows another package's input for sequences, in case IMS SS version 2 does extend its rules to QTI.

For instance, at the moment at runtime, LORSm asks LORS for the sequence it needs to display objects and renders the index page accordingly. However, if we had an SS engine, a SCORM (version 1.3) course could send the appropriate sequence of SCOs on the fly to the delivery environment.

It might be good if the assessment package could accommodate this now as well, as that will keep it useful if things change in the near future.

<blockquote> Further, the fact that SS should not act as the internal sequencer in
Assessment pretty strongly indicates, IMO, that the same might hold true for
other packages as well, such as LORSm, File Storage, CR, Forums, etc. I repeat,
SS is in fact very limited to sequencing learning activities, and is not a
general sequencer of any type of objects.
</blockquote>

So, Ola, now that the US Department of Defense has spent about 40 billion bucks on setting up SCORM, which includes IMS CP and MD, and in its latest version adds IMS SS, are you gonna tell them that they got it all wrong?... man, I will be an angry taxpayer! 😊

Ola, you said it yourself: "learning activities can map to any
type of object, including a question in an assessment."

Why would you limit yourself to your package? Would it seriously be much of an effort to open it up? Even from an engineering point of view, implementing a cool and neat IMS SS engine would be worth the challenge and make much more sense. I could make good use of it, and anyone willing to deliver SCORM 1.3 courses would be delighted by it.

I even volunteer to code if required...

Is there any point in going back and forth on this any longer? I really don't think so.

The truth is: the IMS specs are evolving functional specs (needless to say, far from perfect), not implementation specifications. They tell you how things should work, but they leave you in the dark in terms of how to go about making them happen. However, they do make sense. And they have some really smart people with more experience in this area than all of us put together, and that's worth considering.

If you still think you should keep it for your own, fine. That's your decision and I'm not planning to keep pushing you. You lose your chance of making something worthwhile for others...

Ernie

Posted by Malte Sussdorff on
Matthias, as Ola pointed out, it says nowhere in the IMS QTI specification that we need to implement sequencing internally using the IMS SS specification. Furthermore, reusability is easy, as you can reuse questions, sections, even assessments wherever you want. But you define the reuse within the assessment system.

Let's try to clarify the steps for the professor:

- Upload a chapter (chapter1) to LORSm.
- Define a test (test1) using assessment that depends on students having read the uploaded chapter. You just create an assessment at this stage.
- Go to the IMS SS package. Define that "test1" is only available to students if they have read "chapter1".
- Do a lot of other things until the end of the term.
- End of term: create a summative test (test2) using assessment. As assessment allows the reuse of items and sections, pick the questions from the question catalogue or the section catalogue or say "copy test1 and create a new assessment".
- Go to the IMS SS package. Define that "test2" is only available after everything else has been done (end of term).

There is no need to use a simple sequencing engine within assessment to achieve *branching*, *randomizing*, or assigning questions to sections and sections to an assessment.

Obviously you could think about using the SS package API to do this, but then you would make a package considerably more complex than it needs to be and stall development on assessment for ages. Reason:

- In assessment I can easily say within the data model: these 15 items belong to my section1, and only display 10 of these items in a random fashion.
- In assessment I can easily say: If answer to question1 is "foo" then go to question2
- In assessment I can easily say: If answer to question1 is given, question4 has to be answered as well
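
To illustrate how cheap the branching case is internally, a sketch (the table and column names are invented):

    # Which item comes next, given the answer just stored for question1?
    set next_item_id [db_string branch_rule {
        select next_item_id from as_branch_rules
        where item_id = :item_id and answer_value = :answer
    } -default $default_next_item_id]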

If I were to mirror these use cases through an external API to another package, I'd have a hell of a time. And there would be no benefit for the user, as the steps would suddenly be:

- Go to assessment
- Create a question
- Create multiple choice answers to the question
- Go to SS package
- Select a relationship between your question and the answers.
- Link multiple choice answers to the question using the SS package
- Store additional information like "10 out of 15" in the SS package
- Go back to assessment (and so on)

I'm not sure what you hope to gain from defining all these steps with the SS package, as this can just as well be achieved by the assessment system.

I totally disagree with your second statement, as the order is wrong. If there were a cleverly designed SS engine that would take care of all the sequencing needs of assessment (and the above examples are only the tip of the iceberg), then we could save assessment the effort of implementing things on its own. *BUT* this would take considerable resources, and the gain is more than questionable, as I don't see a performance boost or additional functionality given to assessment by using an overly complex SS package.

Don't get me wrong, I have thought about using the SS package, e.g. for branching, but to be totally honest, unless we have a well-specified SS package and API it does not make sense to go down that road. Furthermore, we need a very flexible system internally anyway for checks on items (is an answer in a valid range, and so on). It is very easy to extend this flexible system to branching as well. So from a strictly resource point of view it does not make much sense, and as I said, I don't see any use case where it would be beneficial to use the IMS SS package to control the *internals* of assessment that cannot just as well be solved with the current thinking.

I hope this clarifies my standpoint better and also clarifies the statement you picked up on.

Posted by Dave Bauer on
Malte,

I am not arguing, but your example is wrong. If assessment used an SS API, it would not be apparent to the creator of the assessment.

Posted by Malte Sussdorff on
Okay, I guess we *urgently* need to meet on IRC, or this discussion will get totally out of hand. Furthermore, let's sit down and talk about what an SS engine would need to do in the following scenarios:
  • Limited to creating Sequences between Learning Objects
  • Opened up to create sequences within other packages.
Especially in the latter case, we need to think of:
  • What is the data model?
  • What are the API functions that are needed? (I can surely come up with a couple from assessment, but I assume Ernie has some ideas on this as well.)
  • What is the user interface, and how does this user interface integrate with the existing packages? Some options:
    • The existing package (e.g. assessment) only uses the API and data model.
    • The existing package includes some ADP library code from the SS package.
    • There is no UI for the existing package (assessment); the sequences will only be handled by the SS package.
  • Last but not least: what is to be gained from opening up the SS package to be used in other packages, and at what degree of integration is the ROI (both money and user experience) positive?

Posted by Malte Sussdorff on
Dave, I was admittedly painting the devil on the wall, assuming that we were talking not only about the API but also about the user interface. Guilty as charged ... :)

Posted by Ola Hansson on
Malte, I will think about your items and hopefully have some comments, but why don't we keep the discussion in this forum... I'd like to have an entirely open discussion. Besides, I need some time to think between the answers... 🤔

Also, let's look into the opposite end of the kaleidoscope for a moment and see what we have there ...

We somehow need to create and populate the activity trees which make up the learning sequences that we later want users to take or traverse.

I envision something like this:

1) Go to the SS package
2) Create a "sequence"
3) Enter metadata which (partly) defines the rules and conditions for the sequence (or skip this part and let the defaults - as specified by the IMS SS spec - kick in as a fallback.)
4) Create "activities" in a hierarchical (branched) tree (much like you add nodes to the sitemap in OpenACS.)
5) Enter metadata which defines the rules and conditions of the activity (or skip this part and let the defaults - as specified by the IMS SS spec - kick in as a fallback.)

All of the sequences, activities, and the metadata which define the sequences are stored in the data model of the SS package, as specified in the IMS SS spec.

6) To each activity in the activity tree, map objects (or URLs) from throughout the toolkit (you may add more than one object and also so-called "auxiliary resources").

This is probably going to be a bit tricky, and we might want to engage some kind of object browser and/or provide an API which lets other packages link to the SS package with a provided object id that you want to map to an activity. We basically want to go to great lengths to make it as convenient as possible for a sequence designer to compose a sequence.

(The objects which we map the activities to can of course be any kind of object, forum message, LORSm object, etc.)

Now we let the user take the sequence, that is, traverse its activities, by using a navigation bar (which persists across package borders, as well as off-site if need be). The navigation bar will either contain a link (or several links) to the next available activity/activities in the current sequence, depending on the conditions/rules for the activity/sequence, OR (if just a single activity is available) the content associated with the next activity (according to the defined rules/conditions) will be *delivered* automatically by a redirect to the corresponding URL.

Finding out which *one* activity is "the next" in the sequence to be delivered is the main responsibility of the SS engine. That is basically what it *does*. It doesn't necessarily have to be the one which delivers the content, but it makes much sense, IMO, that a service within the SS domain does this.

It is impossible for the SS engine to know in advance which activities in the sequence will be traversed, or in what order. The activity to deliver next is determined by the SS engine "as time goes by", depending, for instance, on whether or not the Assessment package "reported" that the user received more or less than the number of points needed to pass a test (the number being defined by the sequence designer).
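
To make the delivery part concrete, this is roughly the loop I picture, sketched in Tcl (all the ss:: names are invented):

    # Ask the engine for the one activity to deliver next; an empty result
    # means the sequence has ended.
    set activity_id [ss::engine::next -user_id $user_id -sequence_id $sequence_id]
    if {$activity_id eq ""} {
        ad_returnredirect "sequence-finished"
    } else {
        # Deliver by redirecting to the URL of the content mapped to the
        # activity.
        ad_returnredirect [ss::activity::url -activity_id $activity_id]
    }
    ad_script_abort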

Can we steer this discussion over to how sequences will be created and how they are navigated/delivered for a moment?

Thoughts?

Posted by Matthias Melcher on
Malte,

<blockquote> stall development on assessment for ages.
</blockquote>

I don't dispute that the first installment of assessment already needs branching. But I think that once it starts to need sequencing, it should utilize the same mechanisms that are used for sequencing plain content items.

Also, once we offer reuse, this should be done with the mechanisms offered by the central, IMS CP-compliant repository rather than using the package-internal means that your answer suggests:

<blockquote> pick the questions from the question catalogue
or the section catalogue or say "copy test1 and
create a new assessment".
</blockquote>

My idea about what might be the focus of each respective application is the following:

- Assessment: select question items from QTI types, render the ASI, define and process answers, report rollup to SS

- LORS: import, export, reuse resources into items

- SS: track the rollup values, determine the sequence

- don't know if SS or LORSm: curriculum functions such as rendering simple web content

Posted by Malte Sussdorff on
<blockquote> Also, once we offer reuse, this should be done with the mechanisms offered by the central, IMS CP-compliant repository rather than using the package-internal means that your answer suggests
</blockquote>

What is reuse for you?

The assessment system has an item repository where all items are stored (item = question = smallest entity of an assessment). This repository stores *all* items, so there is no "reuse" in the traditional sense, as *all* items are reused the moment an assessment is generated. Now, I assume that your intention is to store these items in an IMS CP-compliant repository, handled through LORS. I'm not sure what the benefit is, taking into account that all packages use a common content repository to store the content, and IMS CP seems to be a way to distribute courses and course elements as a package for interchange between applications. It should therefore be the main goal to be able to import/export content from IMS CP. And if I'm not utterly mistaken, the concrete packaging of assessments is defined in IMS QTI, which we import and export already (thanks to Eduardo and Alvaro).

Your idea for the focus is a description I can agree to. The main question, though, remains how we are going to solve the issues technically. Taking the OpenACS approach, we store the content in the content repository, regardless of the packaging described in the standards, and offer an API that allows us to import/export the data according to these standards. Taking into account that assessment is used in a variety of use cases which have nothing to do with the IMS specifications (see the use cases at http://cvs.openacs.org/cvs/*checkout*/openacs-4/packages/assessment/www/doc/requirements.html), my understanding is that it is sufficient to have import/export functionality.

But I have the slight feeling I'm missing something that others read in the specifications that makes it more useful to use IMS-compliant packages than the ones provided by OpenACS. Can someone maybe give me the data model of a LORS resource which I could make into an item for assessment? The data model for items is based on the CR and split up into multiple objects, closely linked to each other for a number of (technical) reasons. You can see the latest version at http://cvs.openacs.org/cvs/*checkout*/openacs-4/packages/assessment/www/doc/as_items.html.

After writing this all up, can someone help me out with some questions:

  1. How does an IMS CP repository differ from the Content Repository used by OpenACS?
  2. Would it make sense to extend the OpenACS content repository with the functionality in which IMS CP differs from the CR?
  3. Looking at ims_cp_items (from the LORS package), a couple of other questions come to mind (against the background of evaluating whether assessment should extend ims_cp_items instead of cr_items, which would be implied by using the IMS CP repository rather than the pure OpenACS Content Repository).
    1. How are additional attributes to ims_cp_items handled? I assume through the metadata definition.
    2. Should the approach taken there be generalized to allow handling metadata of items in general (in the content repository), thereby making the extensions to cr_items and the usage of your own package tables obsolete?
    3. If the answer to the above is no, under which circumstances should a package store its content in the LORS CP, and what is the benefit compared to using its own tables / the plain content repository?
  4. Technical questions (Ernie, please help):
    • What is the parameter attribute for?
    • How do you store prerequisites? Wouldn't it make sense to use relationships for this?
    • Some varchars suggest you would store XML content in these; is this true (e.g. maxtimeallowed, timelimitaction)?
    • Is there a specific reason for storing isvisible, as this could be handled through the permission system?
    • You seem hesitant to use acs_rels in your system. Is there a reason? (I see a couple of mapping tables.)

Posted by Ola Hansson on
How should the SS package (or other packages) create sequences?

Hey guys! There is a slight chance that what I'm going to rant about below is one step closer to Ernie's "open SS API" idea (or maybe not):

I am thinking about possible ways that we could author, import, and edit sequences since we most definitely should be able to do all three. Of course, it will have to be possible to export them too but let's not complicate things.

If I understand this correctly, IMS Content Packaging (CP) has an XML file called a "manifest" which describes the content items within a course, and the actual content files are packed together with the manifest in an "IMS content package".

It is also my understanding that IMS CP can be extended with an SS "binding" which extends the XML schema so it is possible to define sequencing rules, either together with the items in the course manifest, or standalone.

I think it is true to state that LORSm imports meta-data and content (and places it in a back-end), and that SS will want to import just the SS rules part of the meta-data and not the content. In addition to the meta-data about each item (activity), it must know the object id that the "item" gets in LORSm, I believe, so that the SS package knows which object to map that particular activity to.

Now, one question is how this ought to be handled in the LORSm case and similar cases (which already have the ability to parse manifest files). For instance, if you import a course into LORSm from a content package, and you want to be able to deliver the "items" of that course not by browsing the links in LORSm but in the order determined by the SS engine from the user's behaviour and the rules/conditions of the sequence, I guess you have to create the sequence first by dumping the manifest data into the SS data model (after the course has been uploaded into LORSm). Maybe there should be an *open API* in the SS package for this, which LORSm could hook up to while it is in the process of parsing the manifest in the first place (if SS is installed, that is).
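
Concretely, I imagine the hook looking something like this (everything here, the package key, proc name and arguments, is invented just to show the shape):

    # Hypothetically called by LORSm for each <item> while parsing the
    # manifest, but only if the SS package happens to be installed.
    if {[apm_package_installed_p simple-sequencing]} {
        ss::import::activity \
            -item_identifier $identifier \
            -object_id $lorsm_object_id \
            -sequencing_node $ss_xml_node
    }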

Does that make any sense?

Posted by Stan Kaufman on
One of the key requirements for Assessment is that it provide a generic data-collection mechanism for use in any vertical app an OpenACS developer wants to create. It should be good for financial apps, clinical trial apps, engineering apps, etc. All these domains have needs for managing conditional display and processing (i.e. branching, range-checking, error detection) of an Assessment based on user responses. The processing logic is similar in all cases but is more or less orthogonal to the larger set of issues invoked by the SS spec.

I think the schema for handling conditional processing within Assessment that we defined here makes lots of sense and provides a relatively elegant solution within the Assessment package. I thought Malte's post of 13 June made the internal/external distinctions clear and compelling.

It's important for Assessment not to be tightly bound to other education-specific packages (or any other vertical app packages for that matter). Maybe I'm misunderstanding, but it sounds as if that's what is being suggested in the most recent posts in this thread.

Posted by Ernie Ghiglione on
Hi Malte,

It is important to acknowledge a couple of things before addressing some of these questions. We tend to think way too much about the tech aspects of the implementation, and that's cool. But it is just as important to understand the problem these specs are trying to tackle.

I'd strongly suggest having a good, thorough read of the specs, as it would clarify some of the questions you ask here and also give you that "ahhh, that's what it is..." sort of feeling.

I know, they are an absolute bi#ch to follow and they are as dry as a pommie's towel. But nevertheless, they are the best we've got. The best practice and implementation guides are probably the best start.

http://www.imsglobal.org/content/packaging/cpv1p1p3/imscp_bestv1p1p3.html

<blockquote> - How does an IMS CP repository differ from the Content Repository
used by OpenACS?
</blockquote>

This is a big and open question. But let me see if we can summarize it a bit:

OACS CR = stores content
IMS CP = organizes and structures content

So, LORSm basically uses the OACS CR to store all the content, and the IMS CP implementation maps all the organization and structure of the content to the actual content.

For instance, an IMS CP item (an entity that could describe a learning unit/object/chapter/etc) is basically an aggregation of content put together to fulfil a specific learning objective (of course without getting very heavily into technicalities and details).

So manifest/organizations/organization/items, and to some extent resources, are entities defined by IMS CP that hold structure and organize content in a particular way.

I hope this answers/clarifies the question a bit.

<blockquote> - Would it make sense to extend the OpenACS content repository with the
functionality in which IMS CP differs from the CR?
</blockquote>

No. The CR is pretty cool the way it is, and that's all it needs to do (store content).

The IMS CP implementation sits on top, mapping the organization and structure onto that content.

<blockquote> - Looking at ims_cp_items (from the LORS package), a couple of other
questions come to mind (against the background of evaluating whether
assessment should extend ims_cp_items instead of cr_items, which would
be implied by using the IMS CP repository rather than the pure OpenACS
Content Repository).
</blockquote>

??

<blockquote> - How are additional attributes to ims_cp_items handled? I assume
through the metadata definition.
</blockquote>

Metadata is a different set altogether. If, for instance, an item contains a metadata node a la:

<item identifier="here_item">
  <metadata>
    ...
  </metadata>
</item>

... when I parse the item, I grab the metadata node and pass it to the IMS MD implementation, which parses the node, extracts the whole 100+ fields of metadata, and maps them into the DB (yeah, it was a hell of a mapping).

And this might be *just* the same way as it might work with IMS SS extensions.
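
With tDOM, that dispatch is roughly the following (the selectNodes call is real tDOM; the ims_md:: proc name is an approximation):

    # Grab the <metadata> child of the item node and hand it to the IMS MD
    # parser, which extracts the 100+ fields and maps them into the DB.
    foreach md_node [$item_node selectNodes {*[local-name()='metadata']}] {
        ims_md::parse -node $md_node -object_id $item_object_id
    }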

<blockquote> - Should the approach taken there be generalized to allow handling
metadata of items in general (in the content repository), thereby
making the extensions to cr_items and the usage of your own package
tables obsolete?
</blockquote>

Although I think that IMS MD has waaay more metadata than any normal human can think of, I wouldn't try to map all sorts of metadata into this schema. Why? Well, because you might not need that much. In the implementation of IMS MD in LORS, you can specify metadata for any acs_object. So in theory, yes, it is possible. However, practically, it might not make a lot of sense.

<blockquote> - If the answer to the above is no, under which circumstances should a
package store its content in the LORS CP, and what is the benefit
compared to using its own tables / the plain content repository?
</blockquote>

The simple answer is: it's done; you don't have to generate the 50+ tables of metadata and content packaging entities. It has integration with the OACS CR, all IMS CP entities are acs_objects, it exposes the metadata to the search packages (tsearch/tsearch2), and it uses the file-storage for storage and delivery (taking advantage of versioning, permissioning, WebDAV, etc).

Your other best bet would be to manually put your content in the file-storage, creating the folders, then putting all your files in them and linking them together. But then you would miss out on the main purpose of putting this content into packages (the idea of having learning objects).

<blockquote> - What is the parameter attribute for?
</blockquote>

It is for passing parameters to the LMS at runtime. So for instance, if your item points to a resource and this resource has href="/index", then you can specify parameters to pass to this page when it's requested. For instance, if the parameter is "msg=hello", then the LMS should render '/index?msg=hello' as the URL.
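
In other words, a trivial sketch:

    # href="/index", parameters="msg=hello"  =>  "/index?msg=hello"
    set url $href
    if {$parameters ne ""} {
        append url "?" $parameters
    }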

<blockquote> - How do you store prerequisites? Wouldn't it make sense to use
relationships for this?
</blockquote>

??

<blockquote> - Some varchars suggest you would store XML content in these; is this
true (e.g. maxtimeallowed, timelimitaction)?
</blockquote>

No, I parse XML, get the value for the attribute and put it in the DB. It's all XML, but you just need to extract what you need.

<blockquote> - Is there a specific reason for storing isvisible, as this could be
handled through the permission system?
</blockquote>

Yes, there is: it doesn't only determine whether you display the item, but also how it is positioned in the sequence (have a look at the best practices doc).

<blockquote> - You seem hesitant to use acs_rels in your system. Is there a reason?
(I see a couple of mapping tables).
</blockquote>

It's not that I've been hesitant; I haven't used them at all. As I mentioned in my previous posting, I still need to play with permissions.

I hope this helps. Again, I strongly suggest having a look at the best practice and implementation guidelines, as they might help a lot.

Ernie

Posted by Malte Sussdorff on
Stan, this is my fear as well, which is why I'm trying to understand the benefits this would give to an assessment system *in general*. If it is *in general* wiser to store questions and sections in a LORS environment, then we should do so, but then we should also think about other packages like forums (which needs to move to the CR in the long run).

The only reason I got into this argument was the fact that I had been reading up on the IMS specs and reading Ernie's statement about LORS being an API for storing content in a structured manner. And I wanted to understand whether there would be a compelling reason to move as_items and such into LORS, especially as Matthias has been suggesting this. And this should be decided *now*, before we start moving assessment to be based on the CR. To be honest, I don't see how the information-storage needs of assessment (general as_item information, display information, and type-specific information) could be remodelled more easily in LORS, so I'm still waiting for the intrinsic benefit of doing it anyway.

Last but not least, the idea of having an SS API is a good one, which I don't mind using. But we'd have to see when and where to use it. And as it is not out yet (not even specified), I think we should leave assessment out of this discussion for the time being and just keep in mind that sometime an SS implementation might come around and could replace some code that is residing in assessment.

Posted by Ernie Ghiglione on
Malte, Stan,

<blockquote> It's important for Assessment not to be tightly bound to other
education-specific packages (or any other vertical app packages for
that matter). Maybe I'm misunderstanding, but it sounds as if that's
what is being suggested in the most recent posts in this thread.
</blockquote>

Yes, I think this is a misunderstanding.

In all the time that I've been dealing with the IMS/SCORM specs, I haven't been able to find anything that says they are tailored to tackle issues in the education/academia industry; if you find something, please let me know.

As a matter of fact, the biggest adopters/pushers of e-learning specifications aren't necessarily the universities but the massive e-learning software giants (Saba, SAP, Docent, Click2Learn, etc). Moreover, in the past two or so years that I've been working on this, I haven't been able to find one academic (grad or undergrad) course that complies with the specs. As you can see from the examples I have gathered on the demo sites, most of them *are* in fact corporate training packages.

Additionally, the examples used as best practices, which basically are aimed at supporting the usage of the specs, have little to do with the academic and education realms; they are heavily corporate training (see the Boeing & NETg examples for IMS SS, for instance).

In terms of assessment, I really can't see what part of what you call 'generic' assessment logic IMS QTI fails to fulfil. If you have an example that is not covered by QTI, please do mention it and we can see how to go about addressing it. But once you have a closer look at it, you'll see that it covers basically most of what you could do with an assessment. Whether the assessment is directed at engineers, psychologists, environmentalists or financial brokers, the underlying logic of putting together an assessment and its metrics is what IMS QTI addresses here, not the angle or the industry it is targeted at.

By no means do I want every other specification-compliant package in .LRN to revolve around LORSm. That would be idiotic. All I'm saying is that all these packages should take care of what they are best at and share information with others when they have to. It is also perfectly fine for packages to implement their own functionality if they think that's best. However, if there is functionality that can be reused, why not take advantage of it? I'm trying to push for leaving doors open for future integration with other packages.

For instance, I have no doubt that in the future IMS QTI will use IMS sequencing for delivering questions (see here: http://www.imsglobal.org/question/qtiv1p2/imsqti_asi_bestv1p2.html#1495764). So while we are investing resources in it, we should take this into account. Future thinking is another good software engineering practice 😉.

Stan, I do understand your concern, but as you get more acquainted with these specs you will see that they don't focus on one particular industry; they are meant to be specifications for interoperability.

<blockquote> I think we should leave assessment out of this
discussion for the time being and just keep in mind that sometime an SS
implementation might come around and could replace some code that is residing
in assessment.
</blockquote>

Malte, that is a good idea. Although I would urge you to think about how your assessment implementation deals with sequencing at the moment, and how it needs to be designed so that in the future, when we have an IMS SS engine, it can easily be adapted to use sequences that come from other sources.
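
To make that concrete, here is one (entirely hypothetical) shape such a seam could take; every name below is invented, and the point is only that Assessment's callers shouldn't need to know where the sequencing decision comes from:

    ad_proc as::sequencing::next_item {
        -session_id:required
    } {
        Single choke point for "what comes next" decisions in
        Assessment. If a sequencing engine is installed, delegate
        to it; otherwise fall back to internal branching rules.
    } {
        if { [apm_package_installed_p simple-sequencing] } {
            # hypothetical call into a future IMS SS engine
            return [ss::next_activity -session_id $session_id]
        }
        # hypothetical internal fallback
        return [as::sequencing::internal_next -session_id $session_id]
    }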

Ernie

Collapse
Posted by Malte Sussdorff on
Did I mention we should meet online to talk things out immediately to avoid misunderstandings :) ?

Let's try to make some clear statements:

  • Support for IMS QTI was never up for debate. If you got the idea that we are not following IMS QTI with the assessment specs, then that is a misunderstanding. Your estimation that IMS QTI supports all the use cases specified in the Functional Specifications is doubtful in my eyes, but it does not matter in the least, as assessment supports the QTI specifications.
  • The main concern was not that the IMS specifications are not reusable in other sectors as well (we use .LRN more outside universities than inside...). It is just that the cry for supporting packages that are not even there yet scares me.
  • In a previous posting I made clear how assessment should be developed and how it shall, in the long term, interact with packages that are yet to be developed. If there is a flaw in this, please suggest how to circumvent it, given budget and time constraints.
  • Simple Sequencing (the reason this whole thread started) is not out at the moment and might not be for some time. Once an API is out, it sounds like a good idea to evaluate it and then implement (parts of) it in assessment (if someone is willing to put money into this).
  • I'm not concerned about assessment relying on other packages, if these packages are reliable and out :).
  • Question: Why does everyone say assessment has to follow the SS model, but no one talks about the other way round? The SS package could just as well make use of the functionality written within assessment and add other SS-related things on top of it. After all, the sequencing specs (technical) for assessment are already out in the open, so anyone who has an interest in SS, wants to see standards supported and wants to prevent a lot of money being invested in sequencing twice: PLEASE take a look at the specs and give feedback. Or, even better, write up a specification for SS that is more readable than the IMS specifications and tailored to the OpenACS realities.
  • If I understood Matthias correctly, he was suggesting that assessment use LORS for questions, sections and assessments. I don't see the value in doing this at the moment, but maybe I just do not understand the CP specifications and their goals.
Collapse
Posted by Ola Hansson on
<blockquote> Simple Sequencing (the reason this whole thread started) is not out at the moment and might not be for some time. Once an API is out, it sounds like a good idea to evaluate it and then implement (parts of) it in assessment (if someone is willing to put money into this).
</blockquote>
I agree it would be worth evaluating when that day comes, and let's be optimistic, Malte - I suspect that the .LRN funders are beginning to see that there is a huge need for this missing link.
<blockquote> Question: Why does everyone say assessment has to follow the SS model, but no one talks about the other way round? The SS package could just as well make use of the functionality written within assessment and add other SS-related things on top of it. After all, the sequencing specs (technical) for assessment are already out in the open, so anyone who has an interest in SS, wants to see standards supported and wants to prevent a lot of money being invested in sequencing twice: PLEASE take a look at the specs and give feedback. Or, even better, write up a specification for SS that is more readable than the IMS specifications and tailored to the OpenACS realities.
</blockquote>
I don't say so 😊. SS can't use the code in Assessment for its sequencing for the simple reason that Assessment is not specified/coded to cater for the same needs. To the extent that they *will* eventually share parts of the code or certain sub-processes, it makes a lot of sense (easier, quicker, less expensive) to first make sure the SS package gets funded and implemented; then we can think about refactoring and breaking out some parts of it if we find that it makes sense at that time. I would honestly find it quite difficult to predict, at this early stage, when there isn't even an SS package, whether reuse of functionality such as what we are talking about now makes sense or not. It makes sense to keep that kind of reuse in mind, though. (Malte, the SS spec is pretty clear as it is with its pseudo code; it just doesn't say what programming language to use. If that is too long to follow, the Carnegie Mellon paper is a nice summary of the SS spec. If I specified it in any more OpenACS-centric terms than that, it would practically be implemented already.)
Collapse
Posted by Matthias Melcher on
I don't understand why we must think in terms of "packages" that are arbitrarily put together according to incidental funders' alliances, instead of trying to achieve more modularity, distinguishing between functionalities that are commonly needed and reused (such as some core SS, QTI, and LOR/CP logic and concepts) and extensions into various directions (of both learning and non-learning focus).

If assessment used LORS for questions, sections, and assessments, they would be addressable throughout all modules that share the common infrastructure, and hence more easily reusable.

Collapse
Posted by Ola Hansson on
What APIs does the Simple Sequencing package need to expose?

- Navigation and Delivery

Simple Sequencing controls (for lack of a better word) how activities in an activity tree are to be delivered. There are numerous processes and sub-processes specified within the SS spec which are jointly responsible for a successful result, that is, either delivering an activity in the sequence or refusing to deliver any.

People who are not directly involved in developing the SS engine will probably primarily be interested in the API for two of the processes, namely the "Overall Sequencing Process" and the "Content Delivery Environment Process". The overall process takes a "navigation request" like, for instance, "Start", "Resume All" and "Choice" (there are twelve of them), and passes it to a process called the "Navigation Request Process" which in turn calls other processes ... The "Overall Sequencing Process" should have an open API that other packages can call.

Perhaps something along the lines of:

ss::osp::navigation_request -request "Resume All" -sequence_id 12345
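
To flesh that out a little, the entry point might be declared along these lines; this is purely a sketch (the namespace and validation are my own invention), with the twelve request names taken from the SS spec:

    ad_proc -public ss::osp::navigation_request {
        -request:required
        -sequence_id:required
    } {
        Entry point for the Overall Sequencing Process: validates
        the navigation request and hands it to the Navigation
        Request Process, which in turn calls the other processes.
    } {
        # the twelve navigation requests named in the SS spec
        set valid_requests [list "Start" "Resume All" "Continue" \
            "Previous" "Forward" "Backward" "Choice" "Abandon" \
            "Abandon All" "Suspend All" "Exit" "Exit All"]
        if { [lsearch -exact $valid_requests $request] == -1 } {
            error "unknown navigation request: $request"
        }
        # ... Navigation Request Process and friends go here ...
    }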

Let's pretend that such a call results in the successful delivery of an activity (it's always either one or nothing at all). Now, the delivery of an activity doesn't imply that the user is presented with an actual learning object or assessment. In this case it just means that the *activity* has been validated for delivery and that the learning object(s) associated with it have been identified. The SS spec leaves it to the individual implementation to decide how to return the content to the user. Therefore, it might make sense for the "Content Delivery Environment Process" of SS to use some kind of service contract mechanism that "dispatches" (not sure this is a good word) the actual delivery of the identified items to the user, instead of always doing it all by itself. (I'm having a hard time finding a good use case, though.)
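
For illustration, the dispatch through acs-service-contract might look roughly like this; acs_sc::invoke is the standard dispatcher, but the contract, operation and helper names are all made up:

    # look up which implementation handles this activity's content
    # (hypothetical helper)
    set impl [ss::delivery::impl_for_activity -activity_id $activity_id]

    # hand the validated activity over for actual delivery
    acs_sc::invoke \
        -contract SSContentDelivery \
        -operation Deliver \
        -impl $impl \
        -call_args [list $activity_id $user_id]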

- Tests

When the SS engine has determined that an activity which represents a test is to be delivered, it is going to delegate the task of determining the user's score on the test to the Assessment package. Assessment then needs to be able to report the score, party_id and sequence_id (or root activity_id) back to the SS package. It would be interesting to hear how others think this communication ought to work. I'm inclined to believe that service contracts are the way to go here, since there might not be a hardwired dependency between these two packages from either end (or perhaps there must be ...)
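
If service contracts do turn out to be the way to go, the reverse channel could be as simple as the sketch below; again, every contract, operation and implementation name here is invented:

    # Assessment reports a finished test back through a service
    # contract, so it needs no hardwired dependency on SS
    acs_sc::invoke \
        -contract ScoreListener \
        -operation ReportScore \
        -impl simple_sequencing \
        -call_args [list $party_id $sequence_id $score]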

Collapse
Posted by Malte Sussdorff on
<blockquote> If assessment used LORS for questions, sections, and assessments, they would be addressable throughout all modules that share the common infrastructure, and hence more easily reusable.
</blockquote>

Matthias, can you give me a concrete example and use case to work with? Furthermore, can you state why this would not be the case in the current situation, where the specifications of the assessment system define an API for third-party modules to access and store information within the assessment system?

Collapse
Posted by Malte Sussdorff on
I just read through the Carnegie Mellon paper *again* and tried to understand how an implementation like this would help us with our internal branching needs and at the same time solve our generic "condition results in action" problem. I asked Timo to read through it as well. We did not come up with a solution for the moment, especially taking into account that our generic condition-results-in-action approach kills two birds with one stone, and there is not much sense in using SS for branching if branching already works smoothly with an internal approach, unless I'm given a use case which mandates it (all the use cases given so far can be solved in the way described a couple of days back).

Therefore I will, for my part, drop the issue of how assessment should handle the use of SS and LORS internally, as I need to follow up on other things as well. If people are interested in putting SS into assessment, I'm open to detailed suggestions that provide an enhancement over the current approach. If people are interested in storing items, sections and assessments in LORS, then I suggest taking a look at how assessment currently defines items, sections and assessments itself, and coming up with an OpenACS solution for doing this with LORS.

Collapse
Posted by Ernie Ghiglione on
<blockquote> Matthias, can you give me a concrete example and use case to work
with? Furthermore, can you state why this would not be the case in the
current situation, where the specifications of the assessment system
define an API for third-party modules to access and store information
within the assessment system?
</blockquote>

Malte, I'll give you the example you require in just a few days. I'm coding it, so you'll see it implemented. Just bear with me for a few days.

Collapse
Posted by Matthias Melcher on
<blockquote> Matthias, can you give me a concrete example and use case
</blockquote>

As far as I understand the IMS repository, the question item used as above, first in a formative test at the end of a chapter and then in a summative test at the end of the term, would be such an example. A third usage might occur if the question item is only conditionally accessed, say when the student's age is > 90 (well, perhaps the question is asked in a neurology department, after all).

If the items are in the same repository, and on the same level, as the webcontent items used for chapter content, this would IMO simplify at least the user interface, and I would guess that the actual coding could also be simplified if a clean division of labor and a simple concept of "item" are applied.

Collapse
Posted by Ola Hansson on
I just wanted to quickly mention that the Carnegie Mellon paper we keep referring to is not an *implementation* of SS. It's a summary of a couple of parts of the IMS SS spec: what the arguments to the processes are, what they return, etc. It is not complete. (If it is an implementation, then where is the code?)
Collapse
Posted by Stan Kaufman on
Way back in November 2003, when we first posted our discussion of how to handle conditional processing aka "sequencing" in Assessment (see https://openacs.org/projects/openacs/packages/assessment/specs/sequencing), we explicitly referred to the IMS SS and QTI specs, summarized their basic components, and discussed how our schema basically implements the appropriate parts of their concepts. Our subsequent refinement (see https://openacs.org/projects/openacs/packages/assessment/design/sequencing) has been up since March (judging by the version history).

What I can't discern from the above debate is where this schema fails with respect to the IMS specs -- which are themselves simultaneously incomplete and obtuse. We didn't set out to try to slavishly implement the IMS specs; we wanted to create a robust, generic mechanism to handle conditional processing within a data collection package. That still seems to me to be the appropriate goal here.

What I'm looking for is a clear description of where our schema falls short, and more importantly, concrete suggestions for how to improve it. Frankly, we've been hoping for that kind of constructive input for months.