Elevating Human Insights in an Age of AI
- Critical thinking and creativity are essential human differentiators in an era when AI can answer nearly any question.
- Positioning AI as a cognitive partner elevates learning by prompting evidence-based reasoning and sharper critical analysis.
- Experimentation with AI invites students to test, adapt, and refine ideas in ways that deepen engagement and relevance.
Transcript
Tom Chatfield: [00:13] I think in the age of AI, critical thinking and creativity are extremely important as human differentiators. We have at our disposal tools that can answer almost any question, that can do almost anything you ask.
[00:29] But that makes it incredibly important to ensure you're asking the right questions in the first place and that you are adding value above and beyond these systems.
[00:40] In a way, we need humans to point these tools in the right direction, test theories, and communicate richly with one another to draw out human insights to complement the world of data.
[00:53] One of the ways I love to use AI and get others to do so is to make it a questioner rather than an answer machine. I will ask it to play devil's advocate, pick holes in my arguments, adopt the persona of a critical reader or a critical friend.
[01:14] I find there's a huge difference between, on the one hand outsourcing our judgment to AI, which we know is associated with poorer educational and business outcomes, and on the other hand, asking what it means to use AI as a kind of cognitive prosthesis, to lift our judgments by challenging us to refine them, to rethink them, to evidence things by testing rather than confirming the ideas that we ourselves have.
[01:43] I deal with students of all ages, from 12-year-olds all the way up to postdoctoral students, and I find that I get an amazing response when I use AI as a context for learning. But crucially, I don't just try to get them to use AI.
[02:00] We have a conversation about what we're going to do; we debate, without technology in the room, what might be interesting and why, and what they've seen. Then we plan, we prototype, we experiment. We go away, play, and then come back, anatomize, evaluate, and come up with a plan for the next time.
[02:21] Exploring with students what it means for AI to bring their learning to life, empowering them to experiment, and learning from the practices they co-develop is a really exciting way of finding your way toward engaging practices and of addressing real needs, rather than imagined needs that you think are a great idea but that they feel don't speak to their circumstances at all.
[02:49] The opportunities for assessment are particularly rich in the context of business students and AI because we can increasingly use AIs to simulate business scenarios and get students interacting with incredibly realistic simulations of data, companies, and problems.
[03:11] This creates a challenge, of course, because AI can answer simple questions. And in a way, I think we need to build AI into the assessment process so that we can measure mastery and progress, as well as just performance in an exam.
[03:27] It also potentially allows us to assess people in groups and collectively, as well as individually, to look at team performance and thus to look at leadership, followership, and collaboration in the context of an AI-facilitated task.
[03:43] Paradoxically, it places a greater emphasis on the value of skills like oracy, like empathy and listening, whereby students can leverage these uniquely human capabilities to show how they are offering value above and beyond what technology can offer.
[04:01] I think most importantly, this is going to be a process of experimentation and iteration, which means we need codes of conduct and trust between faculty and students.
[04:13] We don't want there to be a kind of arms race where every student is a potential cheat and every AI is an investigative system monitoring them.
[04:22] We need to find a way out of that and toward something where students and faculty are experimenting together with what it means to validate and improve the kind of skills that employers and businesses most want, which will equip them to thrive alongside intelligent machines.