Thursday, May 22, 2014

10 DOs & DON’Ts in MOBILE LEARNING - Game of Phones

Mobile learning is confusing. In theory it sounds great; in practice it’s often misplaced hype. Different devices have different patterns of use. Making e-learning ‘responsive’ simply means that it can be delivered on different devices, NOT that it will be used on those devices. M-learning is, therefore, often more fiction than fact.
1. DON’T expect people to play tennis in a cupboard
People don’t do long e-learning courses on mobile phones. It’s a device for short, episodic activity, not long, deep, reflective learning experiences. Large e-learning courses on mobiles are a waste of time. It’s like playing tennis in a cupboard.
2. DO use for informal learning
Formal stuff is hard on mobiles, so focus on informal, such as fast facts, flashcards, quick quizzes, comms and support for students, social media and so on. People use mobiles informally, so deliver informal learning. 
3. DO think about social media
Mobiles have become the primary means of accessing social media. Twitter has become a great form of CPD for all sorts of professionals. Indeed, all forms of social media have some relevance in learning. So think about this aspect of informal learning.
4. DO use for projects
Mobiles have long been used for gathering material and evidence for assessment in vocational learning. They are great at gathering data, whether notes, images, audio or video. This gives impetus to learning by doing.
5. DO use for performance support
OK, you’re stuck and only have a mobile to hand. That’s when you need the right answer to a question or solution to a problem. This is short, sharp and useful. Learning at the point of need.
6. DO use for spaced practice
The most ignored piece of theory in the psychology of learning is the Ebbinghaus forgetting curve – we forget most of what people suppose we learn – fact. The solution is spaced practice, and mobiles, as powerful, portable and personal devices in our pockets, let us deliver cues from any learning experience across a defined period. See the sketch below.
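To make the spacing idea concrete, here is a minimal Python sketch. The decay constant and the doubling schedule are illustrative assumptions, not a validated model: an exponential forgetting curve plus an expanding review schedule that a mobile cue system might follow.

```python
import math

def retention(days_since_learning: float, stability: float = 5.0) -> float:
    """Ebbinghaus-style forgetting curve: retention decays exponentially.
    'stability' (in days) is an assumed memory-strength parameter."""
    return math.exp(-days_since_learning / stability)

def review_days(n_reviews: int = 5, first_gap: float = 1.0, factor: float = 2.0) -> list:
    """Expanding-interval schedule: cues on days 1, 3, 7, 15, 31 after learning."""
    days, day, gap = [], 0.0, first_gap
    for _ in range(n_reviews):
        day += gap
        days.append(day)
        gap *= factor
    return days

for day in review_days():
    print(f"Day {day:.0f}: send a mobile cue; predicted retention "
          f"without review is {retention(day):.0%}")
```

Each cue interrupts the decay just as predicted retention is dropping, which is the whole rationale for pushing short prompts to a device that is always in the learner’s pocket.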
7. DO use for assisted learning
Do use apps that assist learning. A good example is the AI app Photomath. Teachers hate it, learners love it. You point your camera at a maths problem and it gives you the answer. Not only that, it gives you the breakdown of the steps between the question and the answer. That's useful.
8. DO think about context
One of the advantages of mobile is that it is personal, powerful and portable. Think about recommending its use in context: in the store in retail, on the ward in medicine, and so on. With the Internet of Things we may even see beacon-driven learning, where learning is triggered by beacons in the workplace.
9. DON’T expect people to pay
Yes, non-wifi use costs money. Don’t expect learners to pay unless they have agreed to this approach. This sounds obvious but it’s a fact that is too often forgotten in designing m-learning.
10. DON’T cheat on mobile metrics
First, there’s confusion about what ‘mobile’ devices are, a confusion that is confusing the hell out of everyone. When I say “he’s using a mobile”, I don’t mean he’s using a ‘tablet’. Otherwise, I’d say “he’s using a tablet”. Yet people are reporting mobile use as phones plus tablets. If the defence is that tablets are carried around by people and used as second screens, then those two criteria also apply to my laptop. This is sleight of hand. Don’t cheat on mobile metrics.
Conclusion

In the same way that tablets have been hyped in schools, despite being limited for complex learning tasks such as long-form writing, coding and tool use, mobiles are hyped in formal learning. Don’t treat all devices as interchangeable delivery devices. Different devices have different learning attributes.

Friday, May 02, 2014

Educational research largely useless - no costs, no bite

A question that is often asked is why education never seems to change in relation to technology, compared, for example, to health. One answer is that there are no real consequences of failure of delivery in education, whereas there are in health. A more extreme example is the universal use of very expensive flight simulators for pilots. Why? Pilots go down with the plane! But another problem is the lack of a culture of cost-effectiveness analysis (CEA).
A wonderful paper by Levin and Belfield, ‘Guiding the development and use of cost-effectiveness analysis in education’ (2013), shows how education, unlike other areas of massive expenditure, fails to apply CEA to research projects. In practice, this leaves a massive vacuum at the centre of arguments around educational interventions and explains why progress is glacial.
Given the massive costs to society of education, this is a puzzle. Surely, say the authors, “alternatives that show greater productivity relative to their costs, i.e., that are more cost-effective and more efficient in the use of social resources, should be preferred for adoption and implemented more intensively.” But how often do you see costs mentioned? And when they are, the figures are often of such low quality as to be meaningless.
The problem, as the authors of the report claim, is endemic. There’s little understanding of basic economic concepts among educational researchers. No real distinction is made between Cost Benefit Analysis (CBA) and Cost Effectiveness Analysis (CEA). There is rarely any conceptual understanding of opportunity costs or realistic sensitivity analysis. This leaves research incomplete and hanging. A good CEA can even shift marginal or negative research results into positive territory: one delivery method may produce marginal, or even slightly worse, learning outcomes, but at a much lower cost, freeing up resources for other interventions and making the overall system better.
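As a rough illustration, with entirely made-up numbers, a basic cost-effectiveness comparison in Python shows the point: the ratio of cost to effect, not effect alone, decides which intervention is preferable.

```python
# Hypothetical figures for two interventions; purely illustrative.
interventions = {
    "Face-to-face tutoring": {"cost_per_learner": 600.0, "effect_size": 0.40},
    "Online practice tool":  {"cost_per_learner":  60.0, "effect_size": 0.25},
}

for name, d in interventions.items():
    # Cost-effectiveness ratio: cost per unit of effect (lower is better).
    cer = d["cost_per_learner"] / d["effect_size"]
    print(f"{name}: £{cer:,.0f} per unit of effect")
```

Here the online tool has the smaller effect, yet at £240 per unit of effect versus £1,500 it is far more cost-effective, and the savings could fund additional interventions.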
Examples - MOOCs, Blended, Tablets
Let me give you an example. In debates around MOOCs, many claim that MOOCs are no substitute for face-to-face campus courses. But that is not the point. If they result in the same, or even somewhat lower, measurable academic outcomes, the real win is that MOOCs can deliver a scalable solution at a tiny fraction of the ‘cost per learner’ of traditional campus courses. We’re not talking about shaving percentages off the costs but coming down to a tiny fraction of the original costs. This, in turn, frees up resources for better teaching and even more research.
One more example is blended learning, where the costs of the components of a blended solution are rarely, if ever, truly calculated in relation to the effectiveness of the interventions. Blended learning often turns out to be just variations on blended ‘teaching’, with no real appreciation of the true costs of delivery.
Yet another example is the purchase of tablets in schools. I doubt that there is a single exhaustive cost-effectiveness analysis behind any of these large-scale purchases in the UK. In fact, disasters due to inadequate procurement have already been reported from the US.
Rarity of true costs
An additional problem is resource-based costing. In practice, estimates of costs based on resources are often hopelessly inadequate, as institutions often don’t actually know the true costs of delivery. The actual costs of personnel, and the all-important use of buildings and accompanying resources, are rarely calculated. Yet we know that many educational institutions, schools, colleges and especially universities, have woeful occupancy rates, making this a substantial figure. This is not easy, as ‘rental’ rates are not applicable in the way they are in many other areas of the economy, but with a little effort one can calculate the cost of a building along with its amortisation and maintenance costs. There may also be other hidden funding costs from Government or other sources. It is important to be exhaustive here. The true costs of any control group must be clear if cost-effectiveness is to be measured.
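For example, a crude annualised building cost can be approximated as straight-line amortisation plus maintenance, scaled by occupancy. All figures below are hypothetical.

```python
def annual_building_cost(capital_cost: float, lifetime_years: float,
                         annual_maintenance: float, occupancy_rate: float) -> float:
    """Crude annualised cost of a teaching building: straight-line amortisation
    plus maintenance, inflated to reflect under-use (low occupancy)."""
    amortisation = capital_cost / lifetime_years
    return (amortisation + annual_maintenance) / occupancy_rate

# Hypothetical: a £20m building, 40-year life, £300k/yr upkeep, 35% occupancy.
print(f"Effective annual cost: £{annual_building_cost(20e6, 40, 3e5, 0.35):,.0f}")
```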
Once you have the true costs you really can determine a ‘cost per learner’, a metric I like. The next step is to include the marginal cost per learner, as further participants in online courses normally come in at marginal cost. You can go further and look at the distribution of these costs, but few get as far as this first metric.
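As a sketch with assumed figures: if a course has fixed costs F and a marginal cost m per extra online learner, the average cost per learner, (F + mN) / N, falls towards m as enrolment N grows, which is exactly why scale matters so much for online delivery.

```python
def cost_per_learner(fixed_cost: float, marginal_cost: float, learners: int) -> float:
    """Average cost per learner: (F + m*N) / N, tending to m as N grows."""
    return (fixed_cost + marginal_cost * learners) / learners

# Hypothetical: £250k to build and run an online course, £2 per extra learner.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} learners: £{cost_per_learner(250_000, 2.0, n):,.2f} each")
```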
Conclusion

The truth may be hard to bear, but much educational research is meaningless in the sense of having no real chance of impact and change, as it does not carry through to Cost Effectiveness Analysis (CEA). The anti-corporate attitudes in our universities are one problem; the lack of actual fiscal skills among researchers is another. But the main problem is the lack of commissioned research that demands rigour on costs. Without truly rigorous Cost Effectiveness Analysis in education we will continue to spend huge amounts of money on fruitless research. The lesson is clear: link effectiveness to costs. If you don’t, the research will fall into the category of ‘inconclusive’, and no evidence-based change will happen.