Screens would contain one of:
- Animation
- Interactivity
- Video
- Simple text and graphics
- Anti-pedagogy: It is the confluence of a combination of these elements, chosen per the subject matter, that defines the learning. Pre-configuring levels and fitting content into them is merely a dump of content into predetermined slots.
- Anti-pattern for user experience: When you define levels, you say, in effect: the course I am giving you will contain simple text and graphics with a page turner, or screens with a given interactivity, and so on. This is not user experience. Where did we care for users and ask them what they would like to see and how they want to experience e-Learning?
- Opposed to the development team's real efforts: Levels define output-based estimates. But a delivery team takes on so many constraints, loose ends, and additional allied tasks that these are not truly reflected in the estimates, and hence in the value of the entire project.
Still, within my own team, I tend to join the chorus discussing numbers with this model. So deeply entrenched is this culture that the alternatives are nowhere near mainstream adoption.
Some alternatives I have seen are:
1. Points-based estimation: Define a unit for estimation and assign weighted points to the various tasks. These points are summed and multiplied by a least-effort unit value to give the total effort on which the quote is based. This mimics function point estimation in IT projects. However, the unit figure is typically an abstract estimate, which needs more science to become an effective measurement. It is a good bet for approximating effort, but does it give a framework for tracking effort?
2. Task-based estimation: This technique is to list the tasks in chronological order and rate the effort against each. It is a safe bet and a good estimation technique that allows for tracking as well. But its effectiveness depends on the granularity of the defined tasks. Most of the time, we do so many out-of-the-box tasks to move the project along that they seep through the cracks of this estimation, resulting in more variance.
3. Resource-loading-based estimation: This at times really works. Though very crude and without scientific value, this gut estimate of resource loading over the project duration usually comes close to the actuals. Yet it measures the effort and cost of creating e-Learning. Does it measure the output and value of the e-Learning package?
4. Course-hour (seat-time) based estimation: This suffers from the same pitfalls as level-based estimation and is again the antithesis of pedagogy and andragogy. How can someone calculate the time I will remain in my seat for learning or studying? Maybe for reading, yes. But is it e-Learning? What we talk about here is only a notional time, and I usually argue that it should be reported in a well-defined notation format, never as a bare value.
For example: SeatTime(Click-throughs): 10 hours; SeatTime(Audio length): 10 hours; SeatTime(Voice reader): 10 hours.
These at least make explicit the various ways we arrive at the course hour, and NEVER BY JUDGING LEARNING TIME.
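The arithmetic of the points-based model above can be sketched in a few lines. The task types, point weights, and effort-per-point value here are purely illustrative assumptions, not figures from any real project:

```python
# Points-based estimation sketch: each task type carries a weighted point
# value; total effort = (sum of points) * (effort per unit point).
# All weights and the unit effort below are illustrative assumptions.

TASK_POINTS = {
    "simple_text_screen": 1.0,
    "graphic_screen": 2.0,
    "interactivity": 5.0,
    "animation": 8.0,
    "video": 13.0,
}

EFFORT_PER_POINT_HOURS = 1.5  # hypothetical least-effort unit value

def estimate_effort(task_counts):
    """Return total estimated effort in hours for a dict of task -> count."""
    points = sum(TASK_POINTS[task] * count for task, count in task_counts.items())
    return points * EFFORT_PER_POINT_HOURS

# A course with 20 text screens, 5 interactivities and 2 animations:
course = {"simple_text_screen": 20, "interactivity": 5, "animation": 2}
print(estimate_effort(course))  # 20*1 + 5*5 + 2*8 = 61 points -> 91.5 hours
```

The weakness noted above shows up directly in the code: the whole quote hinges on `EFFORT_PER_POINT_HOURS`, an abstract number with no built-in way to validate or track it.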
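The task-based model's tracking advantage can be pictured the same way: estimates sit against tasks, so actuals can be logged per task and variance computed. Task names and hours are hypothetical:

```python
# Task-based estimation sketch: tasks listed in chronological order, each
# with an estimated effort; actuals are logged later to track variance.
# Task names and hour figures are illustrative assumptions.

tasks = [
    ("storyboarding",    {"estimate": 16.0, "actual": 20.0}),
    ("media_production", {"estimate": 40.0, "actual": 38.0}),
    ("authoring",        {"estimate": 24.0, "actual": 30.0}),
    ("qa_review",        {"estimate": 8.0,  "actual": 12.0}),
]

estimated = sum(t["estimate"] for _, t in tasks)
actual = sum(t["actual"] for _, t in tasks)
variance_pct = (actual - estimated) / estimated * 100

print(f"Estimated {estimated}h, actual {actual}h, variance {variance_pct:.1f}%")
```

The granularity problem is equally visible: any out-of-the-box work that never makes it into the `tasks` list inflates the actuals with nothing to book it against.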
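The seat-time notation argued for above could also be carried as structured data rather than a bare number. The method labels are the ones from the example; the data structure itself is my own sketch:

```python
from dataclasses import dataclass

@dataclass
class SeatTime:
    """Seat time tagged with the method used to derive it, never a bare value."""
    method: str   # e.g. "Click-throughs", "Audio length", "Voice reader"
    hours: float

    def __str__(self):
        return f"SeatTime({self.method}): {self.hours} hours"

print(SeatTime("Click-throughs", 10))  # SeatTime(Click-throughs): 10 hours
```

Keeping the derivation method attached makes it impossible to quote a course hour without also disclosing how it was arrived at.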
Do you use any other models that effectively measure and track task time, user feedback, and value aligned with e-Learning, and that improve on the list above? Please leave your comments.