How can we improve Talis Aspire Reading Lists?

Allow students to rate resources within a list.

Students can already note whether or not they have read an item, but it would also be useful for academics and library staff to see how students evaluated individual items on a list.

3 votes

Anselm Nye shared this idea

1 comment

  • Ian Corns (Talis), Admin, commented:

    Anselm, nice to see this idea surface. It was something we discussed with early customers, but it wasn't taken forward due to a variety of factors: priorities at the time, as well as complexity.

    Complexity? Well, yes. Thinking back to the discussions (I was the analyst at the time), several areas stand out.

    The first is parameterisation. Every institution MAY want to implement this differently, and it would be interesting to know if that is still the case. For example, the feature could be configured at a tenancy level (e.g. any resource can be rated on any list), at a node level (e.g. school A wants ratings, school B doesn't), or at a list level (e.g. the academic can choose to enable/disable it).
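
    To make that concrete, here is a minimal sketch (in Python, with hypothetical names - this is not how Talis Aspire is actually configured) of a setting that cascades tenancy -> node -> list, with the most specific level winning:

        # Illustrative only: resolve a cascading "ratings enabled" setting.
        # None means "not set at this level"; the most specific level wins.
        def ratings_enabled(tenancy_setting, node_setting, list_setting, default=False):
            for setting in (list_setting, node_setting, tenancy_setting):
                if setting is not None:
                    return setting
            return default

        # School A (a node) has switched ratings on; the academic has not overridden it on their list.
        print(ratings_enabled(tenancy_setting=None, node_setting=True, list_setting=None))  # True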

    Secondly, there is reporting. How are these ratings reflected back, and to whom? Is the primary user the academic? Is the rating of an individual resource the primary focus, or is it more a birds-eye overview of all resource ratings across a list (indicating whether it's a 'good' list or not)? Or, one step further, is it about the library evaluating resources within a subject area?
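
    Again purely as a sketch (hypothetical field names, not a Talis data model), the same raw ratings could feed both views - per-resource averages for an academic, and a single birds-eye figure for the whole list:

        # Illustrative only: per-resource and whole-list views of the same ratings.
        from collections import defaultdict
        from statistics import mean

        ratings = [
            {"list_id": "L1", "resource_id": "R1", "score": 4},
            {"list_id": "L1", "resource_id": "R1", "score": 5},
            {"list_id": "L1", "resource_id": "R2", "score": 2},
        ]

        by_resource = defaultdict(list)
        for r in ratings:
            by_resource[r["resource_id"]].append(r["score"])

        per_resource = {rid: mean(scores) for rid, scores in by_resource.items()}  # academic's per-item view
        list_average = mean(r["score"] for r in ratings)                           # birds-eye view of the list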

    However, perhaps the trickiest area is the semantics of a rating. For Amazon this is fairly easy: is the thing being rated any good or not? But with something on a list, is the student rating the work (resource) as a whole? Are they rating just the chapters referenced by the academic in that context (e.g. 'read chapters 5-7')? Are they rating it in the context of the learning goal within the list (e.g. this resource teaches you about x)? Or maybe they are using it in a different context, say to revise: for that purpose it's useless and gets a low rating, but when they used it to write essay X it provided some exceptional citations and gets a high rating.
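
    One way to keep those different meanings apart (again just a sketch with made-up names, not a proposed schema) is to record the scope and purpose alongside the score, so the same resource can legitimately carry both a low and a high rating:

        # Illustrative only: a rating that records what is being rated, not just a score.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Rating:
            resource_id: str
            list_id: str
            score: int                     # e.g. 1-5
            scope: str                     # "whole_work", "referenced_chapters", or "learning_goal"
            purpose: Optional[str] = None  # e.g. "revision", "essay"

        # The same book: poor for revision, excellent for essay research.
        revision = Rating("R1", "L1", score=2, scope="whole_work", purpose="revision")
        essay = Rating("R1", "L1", score=5, scope="referenced_chapters", purpose="essay")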

    I thought I'd throw some of these points out there, as they (and many more) cropped up at the time, and it would be interesting to hear from customers which of these issues are no longer relevant and which still are.

    What are your thoughts?
