AI in Academia: A Partner in Progress
- Lubna Siddiqi
Welcome to the Age of AI: Not the Apocalypse, Just a Partnership
Generative AI has arrived in academia with a bang, sparking debates across campuses, conference halls, and staffrooms. Is it a disruption? A revelation? Or just another trend? For me, it’s been something more personal and practical. Over time—and through a fair amount of trial and error—I’ve come to view AI not as a threat, but as a colleague. A digital partner that can challenge, support, and occasionally argue with me (politely) about grades.
Tested, Trialled, and Slightly Confused: My AI Toolkit Adventure
To find what truly works, I experimented with several AI tools. ChatGPT and GitHub Copilot stood out as consistent performers—adaptive, thoughtful, and useful for both content creation and reflective feedback. Claude impressed me with its depth of analysis, even if it sometimes took creative liberties. QuillBot and a few others, while functional, didn’t quite match the level of responsiveness or contextual awareness I needed for academic tasks. Interestingly, my journey with AI didn’t start recently. I first used Grammarly nearly a decade ago in Australia, where AI-assisted writing was already quietly becoming part of academic life. That early exposure planted the seed for my current, more deliberate use of AI in teaching, research, and assessment.
From Grammarly to Greatness: AI and My Students' Learning Journey
Before AI entered my marking workflow, it first entered my classroom. I conducted action research with my students, exploring how AI could support their learning and assessment preparation. With guidance, I allowed them to use generative AI tools to refine drafts, deepen their understanding of assignment briefs, and reflect on their writing choices. The results were promising: students became more engaged, produced stronger work, and, most importantly, began to understand feedback in a more tangible way. I documented those findings in earlier blogs, "Integrating AI in Education: A Learning Curve for All" and "Integrating AI in Assessments (part 2): Lessons from My Action-Based Research", and they gave me a solid foundation. That experience raised an important question: if AI could support students in becoming more effective learners, why couldn't it support educators in becoming more effective markers?
Marking, Minus the Migraine: When AI Steps Into Assessment
That question led me to experiment more purposefully with AI in marking—not just as a one-off shortcut, but as an informal research practice. I used different generative AI models to assist with evaluating student submissions based on a detailed rubric. The setup wasn’t instantaneous. I had to invest time upfront: crafting prompts, feeding in the rubric, aligning expectations, and providing clear context. It was a bit like training a new teaching assistant—initially demanding, but ultimately rewarding.
Once configured, the AI became a remarkably effective partner. I would provide my initial score and brief feedback, and ask the AI to expand on it, using the rubric and assignment brief. The feedback it generated was not only more detailed but often more insightful than I expected. It caught subtleties I missed—connections to theory, logic flaws, structure issues. In some cases, it suggested marks I hadn’t considered. And yes, we disagreed. But those disagreements made me think harder, justify my decisions more clearly, and ultimately become a more reflective assessor.
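The workflow described above, supplying the brief, the rubric, and a provisional score, then asking the model to expand the feedback, can be sketched as a prompt-building step. The function, rubric criteria, and wording below are illustrative assumptions, not my actual materials:

```python
# A minimal sketch of the prompt this workflow might assemble.
# All names, rubric criteria, and wording are illustrative, not the actual materials used.

def build_feedback_prompt(brief, rubric, submission_excerpt, initial_score, initial_comments):
    """Combine the assignment context with the marker's first-pass judgement."""
    rubric_lines = "\n".join(f"- {criterion}: {descriptor}"
                             for criterion, descriptor in rubric.items())
    return (
        "You are assisting a university marker.\n"
        f"Assignment brief: {brief}\n"
        f"Rubric:\n{rubric_lines}\n"
        f"Student submission (excerpt): {submission_excerpt}\n"
        f"My provisional score: {initial_score}\n"
        f"My brief comments: {initial_comments}\n"
        "Expand my comments into detailed, rubric-aligned feedback, "
        "and flag anything in the submission I may have missed."
    )

prompt = build_feedback_prompt(
    brief="Critically evaluate one change-management model.",
    rubric={
        "Argument": "Claims are supported by theory and evidence",
        "Structure": "Ideas progress logically with clear signposting",
    },
    submission_excerpt="Lewin's model remains useful because...",
    initial_score="62/100",
    initial_comments="Good grasp of theory; analysis stays descriptive.",
)
print(prompt)
```

The point of the upfront investment is exactly this: once the context travels with every request, the model's feedback stays anchored to the rubric rather than to its own preferences.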
Conversations with a Bot: The Joy of Academic Debate (Seriously)
Perhaps the most unexpectedly rewarding part of using AI in assessment was the dialogue. When the AI proposed a higher or lower grade, I found myself debating with it—questioning its rationale, challenging its assumptions, and sometimes learning from its observations. It was like having a second pair of eyes—sharp, consistent, and gloriously unjudgmental. For once, marking didn’t feel solitary. It felt collaborative. And while I still made the final call, I did so with a stronger sense of clarity and purpose.
Prompting Ain’t Easy, But It’s Worth It
Let’s be honest—getting AI to deliver good feedback is not just a matter of typing a question and waiting for magic. Prompting is an art form. It takes time to figure out how to communicate your expectations clearly. But once you’ve got the prompts right—once the AI "gets you"—the process becomes smoother, faster, and incredibly efficient. There’s a steep learning curve at first, but the payoff is significant. Once trained, the AI can scale your feedback efforts across multiple assignments, maintain consistency, and even adapt tone and examples depending on student performance.
When AI Gets Tired (Yes, That’s a Thing)
Strangely, I noticed that after reviewing multiple submissions, the AI would begin to wander. Feedback became vaguer, tone less precise, and responses inconsistent. It was as if the AI was getting mentally exhausted. Oddly enough, I was too. That was my cue—it’s break time for both of us. This reinforced something important: AI doesn’t replace the human dimension. It complements it. And it sometimes mirrors it more than we expect.
AI as a Mirror: Smarter Feedback, Sharper Thinking
Using AI didn’t just help me mark faster—it helped me mark better. It made my assessments more structured, more objective, and—surprisingly—more transparent. I began to notice patterns in my own feedback, spot biases I hadn’t previously questioned, and bring greater consistency to my marking. The AI didn’t just grade. It held up a mirror to my own practices and helped me refine them.
That said, the mirror wasn’t flawless. AI made mistakes. Sometimes it misinterpreted the criteria, over- or under-scored a submission, or introduced feedback that wasn’t relevant to the brief. These missteps were useful, too. They reminded me that AI is not a final authority but a tool that still needs human interpretation, correction, and, at times, a firm nudge in the right direction. But those very errors also prompted me to think more critically about the feedback process—and, in some cases, to strengthen my own reasoning in response.
It’s a Tool, Not a Takeover: Why Humans Still Matter
Let’s be clear: AI isn’t here to replace educators. It can’t understand emotional tone, cultural nuance, or the ethical implications of a comment. It doesn’t know your students, their challenges, or their progress over time. But when used with intention, AI can take on some of the cognitive and emotional load of academic work. It helps you deliver better feedback, make more consistent judgments, and, perhaps most importantly, spend more time doing what we love—teaching, mentoring, and engaging with students meaningfully.
Mark Smarter, Not Harder: Using AI in Assessment
For educators considering AI in their workflow, start small: experiment with generating feedback or summarising content before moving into marking. Invest time in learning how to craft effective prompts, as clear input is key to quality output. Always review AI-generated responses critically, treating them as drafts, not decisions. Sharing insights with colleagues and staying updated on evolving tools will help refine your approach. Above all, maintain human oversight: AI can assist, but your academic judgement remains irreplaceable.
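The "drafts, not decisions" principle above can be made concrete with a simple human-in-the-loop gate: AI output is held as a draft and nothing reaches the student until the marker has edited or approved it. This is a sketch under assumed names and structure, not a prescribed tool:

```python
# Illustrative only: AI feedback stays a draft until the marker signs it off.
# Class and field names here are assumptions for the sketch.

from dataclasses import dataclass

@dataclass
class FeedbackDraft:
    student_id: str
    ai_text: str            # what the model produced
    approved: bool = False  # released only after human review
    final_text: str = ""

    def review(self, edited_text=None):
        """The marker edits or accepts the AI draft; only then is it final."""
        self.final_text = edited_text if edited_text is not None else self.ai_text
        self.approved = True
        return self.final_text

draft = FeedbackDraft("s1024", "Strong argument; cite more recent sources.")
final = draft.review(edited_text="Strong argument; cite sources from the last five years.")
```

The design choice is deliberate: the approved flag and the separate final_text field keep the machine's suggestion and the marker's judgement distinct, which mirrors the oversight this section argues for.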
Closing Thoughts: Less Paperwork, More Purpose
My ongoing exploration of AI in teaching and assessment has been both practical and reflective. It’s reshaped how I approach feedback, challenged my assumptions, and given me a quiet kind of support that I didn’t know I needed.
We often talk about technology as either saviour or saboteur. I think generative AI is neither. It's a tool—an evolving, complex, sometimes frustrating, often brilliant tool. Used wisely, it doesn't distance us from the core of our work. It brings us closer.