What follows are the remarks I presented today to a working group of British Columbia’s Ministry of Post-Secondary Education and Future Skills. This group brings together the Vice-Presidents Academic (or designates) for the colleges and universities across BC.
Thank you for the invitation to speak today. For our team, this is wonderful timing, as we’ve just come off our eight-week Digital Detox program, which this year focused on artificial intelligence in education. The Learning Technologies and Innovation team at TRU prides itself not only on providing excellent technical and pedagogical support to our faculty, but also on leading conversations about ethical, care-centred, and accessible approaches to teaching and learning with technology across the sector, within and beyond BC. Indeed, we’ve had participation in our Digital Detox and other programming from faculty and staff at almost every institution present here.
I think that we can’t talk meaningfully about Generative AI in our institutions without three key concepts at the centre:
- Algorithmic bias: the recognition that datasets reproduce the same biases that already exist in society.
- Automation bias: the tech-utopian imaginary that machine-led decisions are inherently less biased than human-led decisions.
- Accountability gap: the distance between the entity making the decision and the human (hopefully) responsible for it.
There is no good data to suggest that machine-led decisions are more equitable, but there is plenty to suggest we believe they are. As we approach discussions of Generative AI in our communities, these three concepts are central – especially how we take accountability (or fail to) for the work produced by AI.
How do we intend to use Generative AI? All users (students, researchers, writers) need more education about what generative AI can and cannot do. Remember that when it doesn’t “know,” it fabricates. (A really fun way to test this out is to ask one of these services about your own career and publication record.) The potential for misinformation is very high. Consider last week’s social media storm after a right-wing agitator generated a response from ChatGPT asserting that Pierre Poilievre’s wife was a drug company executive. When that story couldn’t be corroborated, but ChatGPT repeatedly offered “citations” to prove its accuracy, conspiracy theorists asserted that the story had been scrubbed from reputable news sites.
We have an opportunity to get ahead of how our communities understand generative AI. If we are thinking about training future citizens, this work is critical and urgent.
You invited me here today, though, to talk about academic integrity, and the truth is that widespread social adoption of Generative AI dramatically changes the term “academic integrity.” Machine learning data sets are built on scraped data used either non-consensually or with a limited understanding of informed consent. Generative AI does not “cite” its sources (indeed, it actively suppresses them and jealously guards the contents of the data set). Dr. Sarah Eaton calls this the “postplagiarism” moment, and this suggests we need to reimagine what we mean by plagiarism as these tools become acceptable components of working life.
We also need to consider our values and ethics when we look at the kinds of labour practices that have been accepted as a cost of doing business in bringing tools like ChatGPT to market. ChatGPT is less overtly racist than GPT-3 was because of underpaid, traumatized labour in the Global South, exposed to the worst the internet has to offer in the name of sanitizing the data sets for the Global North. Where do institutional values of equity and decolonization fit here?
This isn’t an argument for a ban; it’s an argument for urgent conversations about what “integrity” means.
I beg you not to pursue an arms race here. If I say nothing else to you today, I want to say that. Just as Turnitin begat contract cheating and created a massive new set of problems and risks, there is no detector tool that will resolve these questions or enable us to ignore Generative AI in our teaching, learning, and research practice.
It’s about assessment integrity, not academic integrity. This is a moment for reassessing assessment and for considering how we can centre process, and not only the final product, in our assessment practice if we really wish to evaluate learning. It’s a good time for asking where and when essays are the most appropriate assessment and devoting time to writing instruction when we say it’s valuable. And it’s a time for frank conversations about what is and is not possible to achieve at scale.
This is a rare opportunity to start from a position of radical transparency, not fear. Rather than revising academic integrity policies, I think we need to establish disclosure policies. Outline the expectations of disclosing AI use for faculty, staff, and students. We talk about “cultures” of academic integrity in our institutions, but will we hold faculty, staff, policy writers, and in-house communications to the same standard as we hold students? Is it acceptable for faculty to use AI to write assessments or to draft syllabi, for example, and will disclosure be required?
We can look to Nature and other Springer journals for policy direction here. Nature’s policy is twofold: an AI cannot be an author, because it cannot be held accountable; and anyone using AI in a research or writing capacity must disclose that use in the acknowledgements, methodology, or introduction. We also need to consider the data privacy implications of using AI in the classroom, especially as faculty experiment with these tools. I think laying out what ethical use of AI looks like in our institutions needs to come first, before deciding how AI integrates with academic integrity policy and practice.
Living and learning ethically alongside AI will not be seamless or easy, but it begins with understanding how definitions of academic integrity will necessarily shift with competing understandings of intellectual property, plagiarism, and ownership. And it begins with transparency and an honest conversation about institutional values, ethics, and integrity.
Thank you so much for your time today.