I’ve kind of let my blogging go for about a year, what with everything else going on. But I recently attended an Apple Education Community Summit (an excellent event – for the first time, we had Apple Learning Coaches, Apple Distinguished Educators, and teachers from Apple Distinguished Schools all together) and that inspired me to get back on the blogging bandwagon. I’m probably going to cross-post some of these – perhaps the most interesting – to the Apple Education Community, too. So, just to remind myself of the principles of this: I will try to blog once per week, on a topic that is of interest to me, broadly related to education. Each post should be about 300-400 words. It’s not going to be rigorously defended or referenced – rather, these are just some of the thoughts that I have over the course of the week. I’ll also post some of my conference presentations, but they won’t count towards my once-per-week ideal.
So, to begin, then. I’ve been thinking a lot about artificial intelligence and especially its effect on education. I’m adopting a school-based approach here, simply because I think that’s more interesting – although I note that some of what I say might be of interest to higher education providers too. One of the differences I’ve noticed is that many school educators are less concerned than their higher education counterparts about the impact of GenAI on in-class assessment. When I asked them why, I got the same kind of answer: “We know our students’ writing, and we’d be able to spot any influence of GenAI.” Leaving aside whether that’s actually true, I think it does make a good point: generally, school educators spend a lot more time working with their students, and read a lot more of their work, than their counterparts in higher education do. In fact, as an HE educator, often the first time I see my students’ work is in their first assessment task – although I should note that there are always discussion boards and the like.
School teachers, by contrast, often see students’ work three or four times a week, and often examine it more thoroughly, too, so I can see how that would position them better to judge whether the work has been influenced by GenAI. Of course, I should note that much of this is predicated on some pretty basic efforts at using GenAI – I wonder whether teachers would be able to spot a more sophisticated effort, in which a student trains a tool on previous samples of their own writing…