One of the most overlooked factors in learning design – at least in a technical sense – is how closely its evolution is bound up with various theoretical approaches to learning. Even its predecessor, instructional design – although I dispute that term, preferring to consider it an earlier version of the same thing – was founded on very clear principles about how people learn. The earliest iterations (and I am referring to the post-World War II military training that moved into the corporate sector in the 1950s and 1960s) were based on behaviourist approaches to learning and teaching. That training was deliberately standardised, incremental and technical. After all, everyone needs to know how to load and fire a weapon in exactly the same way if people are to be kept safe and maximum efficiency is to be achieved. And, based on my own experiences in the defence force, perhaps not a great deal has changed at that level. Certainly, the formulaic repetition of learning objectives and the hierarchical nature of development are throwbacks to that era.
However, educational thought has developed since that time: there is now less of a focus on simply learning how to do something, and more emphasis placed on what is happening in the brain during the process – or at least as best we understand it. This led to the development of cognition-based models of learning, including cognitivism, constructivism and constructionism. While they are often presented as being on opposite sides of current educational debate, I find it interesting that the current favour for cognitive load theory and project-based learning developed from the same interest in how people learn best in a cognitive sense. Nevertheless, both models posit an idea of how we learn, and suggest that instruction should be tailored to best approximate that idea. Thus, in cognitive load theory, learning designers need to consider dual coding and the constraints of working memory. In constructivism, learning designers might spend more time encouraging the development of a student's own mental models through shared experiences and reflection.
The next change (and I’m conscious I am being incredibly reductive here) was the movement to locate knowledge beyond the student; in the form of socio-cultural learning, this meant that learning took place via experiences and exposure to the culture of a profession, an environment or similar. This holds particular interest for me in terms of online environments, but I also think it has applications to teacher education. I’ve lost count of the number of times I’ve tried explaining to preservice teachers that you don’t really learn to be a teacher until you’re in a school, watching what other teachers do. Also within this is George Siemens’s idea of connectivism – that is, the notion of learning when knowledge exists in a form external to our brains. For example, what does learning look like (if it is different at all) when we can access the resources of Wikipedia or a company knowledge database at any time?
Some have suggested that there really should be no change – knowledge can’t be used in any meaningful way until it has been internalised. That is, there’s no point having a YouTube video explain how to do mathematics until you’ve learnt how to do double-digit multiplication. While I acknowledge the point, I don’t entirely agree – we’re perfectly capable of using a calculator to get an answer without needing to know the mathematics behind it. Is this what connectivism might look like? And what does this mean for the generation of new knowledge? Again, is there a parallel with big data and learning analytics here? If the machine can process much of the data that is required, does that relieve our brains of the need to do it? And hence free up our capacity for other things? But isn’t that what long-term memory is meant to do in cognitivist models? Worth thinking about.