Sunday, April 19, 2015

Does gamification play Pavlov with learners? DOs & DON'Ts

The massive success of online games led many to suggest that games and gamification could be used to turbo-charge online learning. Take a little magic dust from gaming, sprinkle generously and we’ll all find it more fun, be more motivated and learn to love learning. But there are pros and cons here, as gamification can both help and hinder learning. If gamification is simply scoring, bonuses and badges – the 21st century version of Pavlov's dogs – that would be a disappointment. Simple stimuli, scores and rewards may keep learners moving forward, but they can be a distracting, disappointing and shallow form of engagement, skating across the surface of the content. They may also demand more cognitive effort for not much gain. The danger is in taking learning back to the behaviourist era, with simple Pavlovian conditioned responses, or S-O-R theory. The learning game still carries far too much behaviourist theory, most obviously through learning objectives.
On the other hand, many proven, evidence-based pieces of learning theory seem congruent with games techniques, such as chunking, constructive failure, practice, doing and performance. I've given a detailed analysis of a real example here - Angry Birds.
Learning is gamelike
All teaching and learning is a bit of a game - gamification just makes it more obvious. It’s often said that in education we tend to blame the players, not the game. Students quickly learn how to play the ‘game’: minimise attendance at lectures, get hold of past exam papers, cram for the exam.
There’s also a sense in which social media is game-like. Twitter is really one long, never-ending 'game', with plays (Tweets), rewards (Favourites & Retweets) and wins (Followers). Similarly with Facebook, where the race for friends, comments and likes has a game-like feel. Going back to check your comments, notifications and likes can be as addictive as any game. The huge success of social media, measured in the billions who use it daily, even hourly, may be down to these game-like, addictive qualities.
What’s in a game?
Wittgenstein saw the word ‘game’ as an example of a word that defies definition, with a spread of related meanings that resemble one another, like family relations. In fact, he saw all use of language as ‘language games’, in the sense of being a way of speaking within a context – idle chat, teaching, scientific discourse, flirting and so on. So let’s not play the dictionary definition game but look at the DOs and DON’Ts.
DON’Ts
1. Distract
Games can distract from true learning. Learning often requires contemplation, steady progress and cognitive calm – not the cognitive distraction of cheap gamification. In this sense needless gamification can hinder learning. As Merrill said, “there’s too much ‘-tainment’ and not enough ‘edu-’ in edutainment products”. Game players can be less interested in learning, knowledge and content, and become obsessed with simply winning. That’s a real danger. You see this in leaderboards, where a few at the top get obsessed and battle it out for days, while the rest get demoralised, knowing they’ll never reach the top. This is why many leaderboards are regularly reset, to thwart the obsessives.
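The reset mechanic can be sketched in a few lines. This is a minimal, hypothetical Python version – the class, field names and seven-day reset period are my assumptions, not a description of any real system:

```python
# Sketch of a leaderboard that is wiped periodically, so a few obsessives
# at the top cannot demoralise everyone else forever.
from collections import defaultdict

class Leaderboard:
    def __init__(self, reset_every=7):
        self.scores = defaultdict(int)
        self.reset_every = reset_every   # days between resets (assumed)
        self.day = 0

    def add(self, player, points):
        self.scores[player] += points

    def tick(self):
        """Advance one day; wipe all scores when the reset period elapses."""
        self.day += 1
        if self.day % self.reset_every == 0:
            self.scores.clear()          # everyone gets a fresh chance

    def top(self, n=3):
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:n]

board = Leaderboard(reset_every=7)
board.add("team_leeds", 40)
board.add("team_glasgow", 25)
```

Shorter reset periods trade persistence of achievement against fairness; the right balance depends on how obsessive the top players are.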
I built a bar-quiz kiosk for product knowledge (for a bank), which was incredibly popular but encouraged fierce rivalry between teams at different locations around the country. It worked, but a side-effect was that it encouraged competitive, rather than collaborative, behaviour.
2. Disappoint
Poor efforts at games and gamification can disappoint. The problem with games is that, although they seem exciting and fun, they are actually fiendishly difficult to design and make. It’s easy to try, not so easy to succeed. So I’ve seen lots of half-baked, condescending or childish attempts at gamification, with cheap cartoons, crass special effects and dire sound effects. If it doesn’t work, seems condescending or superfluous, it may be worse than nothing at all and can demotivate learners. The lesson here is that learning need not always be ‘fun’. It sometimes needs to be taken slowly and seriously, with intense focus and persistence, not pimped up like a teenager’s car.
3. Put off
‘Game’ is a pejorative word for some. Not all older learners appreciate the idea of games in learning, and some may find it faddish, even condescending. Games of a certain type may also exclude girls. It may be difficult to get gamified learning experiences accepted by the people who have to implement them, or by older, more conservative audiences.
4. Overload
Gamification may well introduce extra cognitive effort that outweighs any planned advantage. It can result not in cognitive gain but in cognitive overload. This is counterproductive and can hinder rather than help learning. Don’t imagine that games techniques can be inserted into learning experiences without extra cognitive effort. Multitasking is a myth.
5. Assume
To assume that a game or gamification is always good for learning is a mistake. There are plenty of instances where games and gaming would be superfluous, even inappropriate. I’m not sure I appreciate games in sensitive medical subjects such as chronic diseases. In academia many would regard gamification as a cheapening of their subject.
6. Over-spend
Let’s not forget the expense. Games and gamification usually involve extra costs, such as design, writing, coding and testing. The extra time and expense must be justified by gains in speed of learning, impact or retention.
DOs
1. Control progress
Games and gamification are very clear in terms of self-awareness of progress. There are tasks, levels and clear points where achievement is recognised and rewarded. Levels are a good example. High-end games with levels force the gamer to stay within a level until they prove they are competent at that level and can move on. This is comparable to Vygotsky’s Zone of Proximal Development – personally, I think games developers know far more about this than Vygotsky. Within a level you are subjected to repeated failure until you show clear competence or make fast progress. This is so different from most one-size-fits-all online learning.
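The mastery-gating logic above can be sketched in a few lines of Python. All the names, the 0.8 pass mark and the level list are illustrative assumptions, not a real game engine:

```python
# Sketch of mastery-gated level progression, as used in high-end games:
# a learner stays at a level until they demonstrate competence.

class MasteryGate:
    def __init__(self, levels, pass_mark=0.8):
        self.levels = levels          # ordered list of level names (assumed)
        self.pass_mark = pass_mark    # competence threshold to advance (assumed)
        self.current = 0              # index of the learner's current level

    def attempt(self, score):
        """Record one attempt (score in 0..1); advance only on mastery."""
        if score >= self.pass_mark and self.current < len(self.levels) - 1:
            self.current += 1
            return "advanced"
        return "retry"                # repeated failure keeps you at this level

    @property
    def level(self):
        return self.levels[self.current]

gate = MasteryGate(["basics", "applied", "expert"])
print(gate.attempt(0.5), gate.level)   # retry basics
print(gate.attempt(0.9), gate.level)   # advanced applied
```

The point of the gate is that progression is earned, not time-served – the opposite of the one-size-fits-all click-through course.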
2. Chunk
Games are sensitive to chunking, breaking learning down into manageable and meaningful tasks. This can mean short videos (certainly shorter than the six minutes recommended by data from large-scale video-based courses), sprints (see Duolingo) and the all-important levels.
3. Allow failure
Games thrive on failure. You lose, you die, but you live to play again. This catastrophic failure often results in being thrown back to the start, or at least back to a checkpoint, to come at the task again. This has two powerful learning effects: 1) you repeat the experience, which is no bad thing in learning; 2) you are motivated to avoid failure the second time around, as the consequences are severe. Both combine to push the learner towards competence.
I have used these techniques in gamified soft-skills simulations in subjects like interviewing skills and conflict resolution.
4. Learn by doing
Games are rarely about digesting large amounts of ‘knowledge’. They come into their own in domains where you have to learn processes, procedures or real-world tasks and competences. Scenario-based learning is a great candidate for games techniques. It is here that levels of competence, learning from failure and the assessment of competences come into their own.
5. Practice
Games are largely about repeat cycles. You do something, fail, go back and do it again. There’s much more opportunity for repeated practice, and therefore reinforcement in long-term memory, better retention and recall. This is what games are all about, but learning rarely faces up to the truth that it, above all, requires repeated practice – hence the massive inefficiency of sheep-dip training and lectures.
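This fail-and-repeat cycle is the basis of spaced repetition systems such as the Leitner method: a correct answer promotes an item to a box that is reviewed less often, while a failure demotes it straight back to box 1 for more practice. A minimal sketch – the box intervals and item names are illustrative assumptions:

```python
# Minimal Leitner-style spaced repetition sketch. Box numbers map to
# review intervals in days; both are illustrative assumptions.
INTERVALS = {1: 1, 2: 3, 3: 7}   # box -> days between reviews

def review(boxes, item, correct):
    """Move an item up a box on success, back to box 1 on failure."""
    box = boxes.get(item, 1)
    boxes[item] = min(box + 1, max(INTERVALS)) if correct else 1
    return INTERVALS[boxes[item]]   # days until the next review

boxes = {}
review(boxes, "passé composé", True)    # promoted to box 2
review(boxes, "passé composé", True)    # promoted to box 3
review(boxes, "passé composé", False)   # failure: back to box 1
```

Items you keep getting right fade into the background; items you fail keep coming back – exactly the repeat cycle games impose naturally.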
6. Time
Military simulations sometimes introduce progressively faster expected completion times – faster than would be expected in the real world – because this instils higher levels of competence. Timers create a slightly more intense expectation and raise attention and focus in learners, which may, for certain tasks, be useful. They can push towards automaticity in recall, as there’s a world of difference between a competence measured by immediate action and recall and one that takes some time to recall. I’ve seen this work well in drill-and-practice tasks, recognition of aircraft and so on.
7. Make game and learning congruent
This is perhaps the best rule of all. The game must be congruent with, or as close as possible in rules and structure to, the learning experience. If they feel like two separate layers, it will all have been in vain. I once produced an interactive version of The Joy of Sex book. We included a Mr & Mrs game, where, as a couple, you were asked questions separately, then compared your answers. It was fun and it lived up to its intended purpose: to allow couples to have some informative (at times edgy) fun in their relationship.

I like the light-touch gamification in Duolingo – sprints, adaptive and spaced practice – but I'm not so keen on all the bonus symbols. When gamification is congruent with good principles in learning theory, it has the power to increase the effectiveness of the learning. I'd call it ‘gamish’, as I’m fond of the chunking, repeats and mastery but less keen on superficial scores, bonuses and badges. That’s not to say they don’t have a place, but they often seem like a superfluous layer of complexity.


Thursday, April 16, 2015

Myers-Briggs – a useless ponzi scheme?

Myers-Briggs is a multi-million dollar business, with over 2 million people taking the ‘test’ every year. Yet it may well be a complete waste of time, based on a discarded, 70-year-old theory, with no evidence that it is of any real practical use. In fact, there is evidence that it is plainly wrong and misleading, with no real predictive value.
Ponzi scheme?
No serious psychologist would use or recommend Myers-Briggs, but HR has a tendency to go for astrology-like fads based on long-discarded theory, as it gives them ‘results’ – some supposedly ‘real’ data to work with – despite the fact that this data is plainly unreliable. It’s a form of groupthink, as with NLP, Maslow, Kirkpatrick and learning styles (Honey & Mumford, VAK).
We have an explanation for the popularity of these tests – flattery. Measured through self-reporting, always a dangerous source of data, the test takes advantage of the Forer Effect, where people yearn for answers to such a degree that they accept descriptions that seem accurate but are actually vague and general, applying to many people. Astrology, Tarot card reading, NLP, learning styles and most other fake diagnostic tools exploit this effect, but it’s a trick, a con. To say the test has the same gullible appeal and zero predictive power as astrology turns out not to be just an idle quip: Case & Phillipson (2004) trace the origins of the test back beyond Jung to astrology during the Renaissance and earlier.
Another driver in HR is the fact that HR professionals can make easy money. This is typical of schemes, such as NLP and the many 'learning styles' tests, that rely on selling the product by selling a personal franchise. In the case of Myers-Briggs you pay $1,700 to become a certified tester and then charge yourself out to administer the tests. Like NLP, you’re buying into the brand – a franchised Ponzi scheme.
Discarded Jungian theory
As usual, there is long-discarded theory behind the test, developed in the 1940s from the work on psychological types by Carl Jung. The 16 ‘types’ used in the test are largely fictions. Jung’s ‘types’ are primitive, binary distinctions, starting with one he stole from Nietzsche – the ‘Apollonian’ and ‘Dionysian’, redefined as ‘rational’ (split into thinking & feeling) and ‘irrational’ (split into sensation & intuition). Below this is a fundamental split into ‘introverts’ and ‘extroverts’. It’s a conceptual game that Marxists used in dialectical materialism and that amateur HR therapists use when they bandy about mindfulness v mindlessness, wellness v illness, happiness v unhappiness. These binaries don’t exist in psychology, where traits are commonly exhibited along an axis, as contraries (like hot and cold, related to temperature), as opposed to contradictories (true and false). This is a fundamental conceptual error: treating a gradable spectrum as a pair of mutually exclusive opposites. Even Jung thought that these dualistic definitions were wrong, but HR likes clear-cut ‘yes and no’ categories. It makes false theories seem much more convincing.
The evidence is in no way ambiguous. There is no serious, peer-reviewed, controlled-study evidence that supports Myers-Briggs. In fact, what evidence there is shows the theory and test to be wrong. People who take the test repeatedly get different results (up to 50% get a different result on a second attempt), which destroys its reliability. In a clever study by Carskadon & Cook (1982), where people were asked to compare profiles – their preferred profile and their actual MBTI test profile – only half picked the same profile. Its predictive power is also unproven, in fact misleading. Alternative models, such as the OCEAN model (openness, conscientiousness, extraversion, agreeableness and neuroticism), have far more empirical evidence to support their fit and predictive power, but they are owned by no one and haven’t attracted any marketing muscle. The Hogan Personality Inventory and DiSC tests draw on this Five Factor research.
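The retest instability follows almost mechanically from forcing a continuous trait through a midpoint cutoff. A small simulation makes the point – the trait and noise distributions here are my illustrative assumptions, not MBTI data:

```python
# Sketch: why cutting a continuous trait at a midpoint produces unstable 'types'.
# Trait distribution and measurement noise are illustrative assumptions.
import random

random.seed(42)

def flip_rate(n=10_000, noise=0.5):
    """Fraction of people whose dichotomised 'letter' flips between two sittings."""
    flips = 0
    for _ in range(n):
        trait = random.gauss(0, 1)              # true position on the spectrum
        test1 = trait + random.gauss(0, noise)  # two noisy measurements
        test2 = trait + random.gauss(0, noise)
        # dichotomise at the midpoint, e.g. 'introvert' vs 'extrovert'
        if (test1 >= 0) != (test2 >= 0):
            flips += 1
    return flips / n

p = flip_rate()
print(f"one-letter flip rate: {p:.0%}")
# With four independently dichotomised letters, the chance that the
# whole four-letter type changes between sittings is far higher:
print(f"whole-type flip rate: {1 - (1 - p) ** 4:.0%}")
```

People near the cutoff – the majority, under any roughly bell-shaped distribution – flip sides on tiny measurement noise, which is exactly the instability the retest data shows.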
Pittenger’s study (1993) put the test to the test, and the lack of validation led him to conclude that “a review of the available literature suggests that there is insufficient evidence to support the tenets of and claims about the utility of the test”.
On a practical level, Kuipers et al. (2009) explored the relationship between MBTI profiles and team processes in 1,630 people working in 156 teams and found that MBTI profiles do not seem to predict team development very well. A general review by the National Academy of Sciences (1991) scrutinised MBTI research and concluded that there is “not sufficient, well-designed research to justify the use of the MBTI in career counselling programs”.
Myers & Briggs
So how did all this happen? Katherine Briggs and her daughter Isabel Briggs Myers, neither of whom had any training in psychology, set out to design the test and worked with Edward Hay, the HR manager of a local bank. They took the basic Jungian distinctions, invented some new, more sellable terminology, played this four-way splitting game, and so was born not so much a theory as a business.
Like Maslow’s hierarchy of needs, NLP’s eye movements, Honey & Mumford’s four fictional learning styles, VAK learning styles or Kirkpatrick’s four levels of evaluation, there is no evidential support, no data, no controlled studies – but plenty of marketing.
There are dangerous consequences here: first, the reputational damage for HR, which seems forever stuck in a series of time-warp practices; second, the wasted time and costs; third, the misleading recommendations, whether for recruitment, job roles or promotion. To shape people’s lives with such a blunt instrument is simply to pigeon-hole and stereotype them into false positions or, worse, to prop up the progress of some, arbitrarily, at the expense of others. It's part of a wider therapeutic approach to HR that allows amateurs, who give little or no thought to the validity of tools and techniques, to become 'certified' to look into the minds of others and make amateur diagnoses. The mindless behaviour here is in the unthinking process that allows so-called professionals to assume roles they are not qualified to hold. HR should look at its own idiosyncratic behaviour before pronouncing shoddy and unprofessional judgements on others.
Case, P. & Phillipson, G. (2004) Astrology, Alchemy and Retro-Organization Theory: An Astro-Genealogical Critique of the MBTI. Organization, 11(4), 473–495.
Carskadon, T. G. & Cook, D. D. (1982) Validity of MBTI descriptions as perceived by recipients unfamiliar with type. Research in Psychological Type, 5, 89–94.
Jung, C. G. (1923) Psychological Types; or, The Psychology of Individuation. London: Kegan Paul, Trench, Trubner.
Kuipers, B. S. et al. (2009) The Influence of Myers-Briggs Type Indicator Profiles on Team Development Processes. Small Group Research, 40(4), 436–464.
Pittenger, D. J. (1993) The Utility of the Myers-Briggs Type Indicator. Review of Educational Research, 63(4), 467–488.
Appendix 1: Chart


Tuesday, April 14, 2015

VR and consciousness – some truly freakish ideas

The Hard Problem
For the last year I’ve been playing around with Virtual Reality, using the Oculus Rift’s DK1 then DK2, demoing it to hundreds of people all over the world. I’ve seen them scream, shake, fall and have their mind blown, almost replaced by these experiences. Almost to the last person, the reaction has been ‘that’s awesome’. It’s made me think – think hard.
The fact that it can, in seconds, replace consciousness of the world you know with another, completely different world – floating around the International Space Station, walking across the bottom of the ocean, getting your head cut off on a scaffold during the French Revolution, bungee jumping, whatever – led me to a renewed interest in consciousness. Philosophy has long seen consciousness as an intractable problem. What is it? Does it even exist? Theoretically, we seem to have been wandering around in a cul-de-sac of dead-end irreducibility. PS – if you don’t think this is a problem, think again, as it may mean the death of the human soul, a belief that keeps the three major Abrahamic religions, and others, alive.
Chalmers’ zombie hypothesis
Consciousness gave Descartes his anchor, the irreducible ‘I’ in ‘I think therefore I am’, but science, and solid philosophical debate about how separate minds and bodies could possibly interact, ate away at dualism. Some, like Daniel Dennett, now think that consciousness is a superfluous epiphenomenon. David Chalmers, a philosopher, rocked the philosophical world when he named this age-old puzzle the ‘hard problem’ of consciousness: why consciousness exists at all. Why do we feel anything? Why are we not zombie automatons? It is the subject of Tom Stoppard’s new play The Hard Problem, which received lukewarm reviews – then again, theatre has never been good at deep philosophical analysis. Chalmers asks us to think of a doppelganger of our own selves but without consciousness – just a zombie or Cartesian machine. He then poses a possible solution: that all networked systems are, to some degree, conscious.
Age of Algorithms
Renewed interest in the problem of consciousness has come from the recent rise of AI in our Age of Algorithms. Many are now practically engaged in replicating human abilities, but the dividing line between soft and hard AI is still the notion of deep intelligence and consciousness. Not that AI is always about ‘replicating’ human abilities and consciousness – we didn’t conquer the problem of flying by copying birds but by designing a different technology that did it better. Nevertheless, the issue of consciousness remains. In what way do sophisticated ‘thinking’ machines have consciousness? This has sparked renewed interest in the problem. But another medium is also contributing.
VR as medium of the mind
Chalmers was moved by a childhood experience that corrected an abnormality in his left eye, where the world suddenly popped into 3D. I have the same disorder, and know exactly what he means, but it was my extended experience with VR that blew my mind. Rather than a qualitatively improved experience of perception, my entire mind was put in another place, through involuntary ‘presence’. My reptile brain forced me into thinking I was somewhere else, doing something I wasn’t actually doing but simply experiencing – doing a real bungee jump. I can only explain it by reference to another experience in my life that was similarly revealing – taking LSD. Stephen Downes and I had an interesting talk in a bar last month about the revelation that act had on us both in terms of the arbitrariness of perception and consciousness. We both agreed that it was a life-changing experience that influenced our philosophical view of the world. VR was similar, if more controlled!
When I first tried VR with a bungee jump, using a $350 VR headset and headphones, I was immediately transported to another place. I could look around, saw people behind me waving, waved back, walked to the edge of a platform, looked down, and it felt real. I jumped and felt myself falling. Then, hanging on the end of an elastic, I looked up and saw the water, down and saw the sky. I was upside down. But I wasn’t – it was all in my mind.
The bold move
Christof Koch has argued that the line has shifted over the years, as consciousness has been granted to dogs and higher animals, even insects – anything with a network of neurons. In a bold thought experiment, he extends this further, to include any communicating network. Couldn’t the internet, our computers, our phones be conscious? The internet has the same number of connections as the synapses of about 10,000 human brains. Is it in any way conscious?
We have evidence that consciousness is related to networked activity. The obvious examples are sleep and anaesthetic states, where one can measure the actual decline in networked activity as we lose ‘consciousness’. Could it be that consciousness is simply a function of this networking, and that all networked entities are, to some degree, conscious? What Chalmers, Koch and others posit is an explanation that keeps the physics, neuroscience and philosophy in place. It is an intriguing idea, stimulated for me by my experiments in VR.
They have their critics, such as Daniel Dennett and Patricia Churchland, who simply dismiss consciousness as an illusion. Some even think the problem is insoluble, that our brains are not equipped to solve it. But the issue keeps nagging away at me – every time I try VR, which is getting better and better, more ‘real’, more ‘extreme’, more ‘revelatory’.
VR and consciousness
Some experiments in VR may be instructive here. You can easily experience ‘presence’ in VR – the belief that you’re somewhere you’re not. That is commonplace. But consciousness-swap experiments show that you can experience something more – the consciousness of being someone or something else. 
There are gender-swap experiments, where you see your body and the external world from a female or male perspective; racial swaps; disability swaps; even living the life of someone else for a long period. I’ve been involved in a social care VR programme where you become an elderly person and see the world from their perspective, with blurred vision and a touch of induced memory loss. I’ve also been working on VR ideas that put you in the position of driving while under the influence of alcohol, drugs or distractions, like using your mobile.

We may be no more than super-evolved, networked operating systems – literally androids. Technology, the internet, AI, philosophical analysis and neuroscience may be coming together to crack the problem of consciousness. I think VR, as a medium, will accelerate this analysis, as it creates a window on consciousness and opportunities for experimentation that have never been possible before. Indeed, we could be on the verge of seeing consciousness as massively manipulable. I could be lifted from a depressive experience in seconds, see myself as others see me, be someone else. The only limit is the imagination of our own conscious thought to explore these new worlds and new ideas.


Saturday, April 11, 2015

Will Apple be the biggest watch company in the world?

Scott Galloway, of NYU Stern, thinks that Apple will be the biggest watch company in the world. He may be right – they have, surprisingly, sold out on release.
But a watch is no longer a watch; it’s an accessory, a piece of jewellery. It long ago lost its role as a timepiece, thanks to top-end vendors who market watches as unnaturally large, shiny status objects. They’re not stupid, these vendors. Neither are the people who research and study this stuff. They know exactly what a watch IS and what it is NOT.
Not a timepiece
Wristwatches were developed for military use, to synchronise actions, and moved to mass consumer status at the end of the First World War. But they quickly became status symbols. A watch is NOT a timepiece (that is merely a trivial piece of functionality). A watch is an ACCESSORY, a piece of jewellery, something that speaks to others about you. My son has an expensive watch that shows the mechanical workings of the timepiece. He has to wind it up to make it work! He doesn’t care. It’s the look that counts. It’s unnaturally big, glitzy and looks expensive. According to Scott Galloway and Geoffrey Miller, he’s saying, “mate with me, I can afford this, even though it’s functionally useless”.
Binary choices
Scott Galloway of NYU Stern studies this stuff and has some answers (watch this video – it's worth it). Why do young people queue round the block to buy Apple products? Teenagers don’t have big incomes, but they still buy luxury goods such as phones and watches. They want the best, and the middle stuff won’t do. He thinks that consumers are over-messaged, so they make binary choices: low-end or high-end. The middle is not the place to be. So we see luxury brands soar and cheap supermarkets thrive. “Beautiful, ornate items make us feel Godlike,” he says. Rich people are boring and predictable – they all aspire to, and all buy, the same things: the same few high-end brands. But there’s another, bigger game in play…
Buy this – get laid
At the instinctive level, we think it makes us attractive to the other sex, says Galloway: “these are basic and eternal human instincts – these have not gone away.” What does a watch brand say about you? Watches and phones have long been indicators of that basic instinct – procreation. The top watch brands say, “If you mate with me, I’m more likely to look after your offspring than someone wearing a Swatch.”
With phones, Blackberry failed because it said, “you’re a middle-manager going nowhere”.  Similarly with Nokia, which lost its early mojo. Being bought by boring Microsoft made it worse.
And it’s not just watches and phones, it’s cars and shoes. Men buy fast sports cars that can’t legally be driven at their designed speed to appear sexually attractive. Women, says Galloway, buy “ergonomically impossible shoes to solicit inbound offers from those same men”.
This is the position elaborated by Geoffrey Miller in ‘Spent: Sex, Evolution and the Secrets of Consumerism’, which uses evolutionary psychology to explain the allure of the luxury brand. Luxury is about propagation. Miller makes a good point about all those luxury ads in glossy mags and at airports, showing all-too-good-looking folk holding their wrists in surprising positions, bearing a ridiculously expensive watch. He claims that these ads are not aimed at making you buy the watch. They’re aimed at the people who already have the car, the bag and the watch, to say ‘you’re above everyone else’. It’s not about aspiration, it’s about confirmation. Until the Apple Watch gets that smart in its advertising, it will remain a gadget.
Apple Watch as luxury brand?
Galloway, therefore, believes that Apple will be the biggest watch company in the world. Why? They’re a luxury brand, they own the value chain and they can redefine the market. I am not so sure. Here’s why…
1. Glasshole effect
Google Glass, according to Galloway, screams, “you will not conceive a child, as no one will get near you”. I’ve tried it, and you get a feeling in your gut that says “this is just wrong”. Apple’s Watch is in that territory. It may recover, but after the initial lukewarm reviews and worrying pictures of geeks wearing them, you’ve lost momentum, and momentum is hard to recover.
2. Still a gadget
A technology has to transcend gadgetry. It has to be more than one-upmanship in the gadgetry game; it has to become a new medium. VR is not a gadget, it’s a medium. The Apple Watch still looks like a gadget.
3. Difficult to use
Apple used to excel at ‘buy, open box, use’ products. This one takes too much time to learn – witness the steep learning curve mentioned in this New York Times review. That means it will frustrate all but the technically inclined.
4. Speak to your watch?
It uses Siri as input, but no one likes speaking to their watch in public. Even in private, it’s a geekish stretch.
5. Rotten Apple
Apple has played the status card by launching the $18,000 gold version. This sends a bad message: that tech is not a decentralised, disintermediated, democratic movement but a status brand. They shouldn’t push this too far or the fruit will rot.
6. Mobile still king
The problem with the watch as a wearable is its rival, the mobile. Anything the watch can do, the mobile does better. And if you need your mobile to make your watch work, what’s the point? You’re using the same hand for your mobile as for your watch, and one interferes, in terms of signals, with the other.
7. Reverse a trend
You have to convince a generation who either don’t wear a watch (because they have a mobile) or wear a watch as jewellery. Why bother? This means reversing two trends that are well embedded.

The classic error is to stay within the paradigm of the ‘wristwatch’. Wearables are likely to break free from the old concepts. Apple have successfully positioned themselves as a luxury brand, and their stores and marketing ooze class. But this doesn’t seem right. Then again, they have sold out, and a sure sign of success is the fact that fakes are already on sale in China…
