Stop Using AI as a Hammer When It’s a Screwdriver: My AI Odyssey Through the Classroom

This article traces a teacher’s journey (mine) out of the AI shadows and into classroom transformation. It is a companion to a recorded lecture I gave on how I use AI in the classroom. I recommend watching the video in addition to reading this post, as it offers a deeper dive and helps contextualize the experiments and perspectives summarized here.

We’ve successfully scared the hell out of ourselves about AI. That’s the truth. Despite the helpful WALL-Es and Rosie the Robots, the cultural imagination has been fed a steady diet of dystopian dread, from HAL 9000 locking astronauts out in space to the death machines of The Terminator. And now, with the hype and hysteria churned out by the media and social media, we’ve triggered a collective fight, flight, or freeze response. So it’s no surprise that when AI entered the classroom, a lot of educators felt like they were witnessing the start of an apocalypse, as if each of us were our own John Connor watching the dreaded Skynet come online for the first time.

But I’m here to tell you that’s not what’s happening. At least not in my classroom.

In fact, this post is about how I crawled out of the AI shadows and learned to see it not as a threat but as a tool. Not a hammer, but a screwdriver. Not something that does my job for me, but something that helps me do my job better. Especially the parts that grind me down or eat away at my time.

If you’re skeptical, hesitant, angry, or just plain confused about what AI is doing to education, pull up a chair. I’ve been there. But I’ve also experimented, adjusted, and seen both the light and the darkness. I can’t resolve every implication of AI use, but I can share what I’ve learned so you don’t have to build the spaceship from scratch.

We Owe It to Our Students to Model Bravery

Students are already using AI. They’re exploring it in secret, often at night, often with shame. They’re wondering if they’re doing something wrong. And if we meet them with fear, avoidance, or silence, we’re sending the message that they’re on their own. In a 2023 talk at ASU+GSV, Ethan Mollick noted that nearly all of his students had already used ChatGPT, often without disclosure. He emphasized that faculty need to assume AI is already in the room and should focus on teaching students how to use it wisely, ethically, and with reflection. That means our job isn’t to police usage—it’s to guide it.

I don’t want my students wandering through this new terrain without a map. So I model what I want them to do: ask questions, explore ethically, think critically, and most of all—reflect. I also model the discipline of not using AI output as a final product, but only as inspiration. If I use AI to brainstorm or generate language, I always rewrite it into something that reflects my own thinking and voice. That’s how we teach students to be creators, not copy machines, and how we help them map where they’ve been and where they’re going.

And when I don’t know the answer? I tell them. Then we look it up together. I use this ChatGPT cheatsheet often. Check it out.

That’s what it means to teach AI literacy. It’s not about having all the answers. It’s about being brave enough to stay in the conversation. I was also wandering aimlessly with AI—unsure how to use it, uncertain about what was ethical—until I took this course from Wharton on Leveraging ChatGPT for Teaching. That course changed my mindset, my emotional state, and my entire classroom practice. It gave me a framework for using AI ethically, strategically, and with care for student development. If you’re looking for a place to start, that’s a great one.

AI Isn’t a Hammer. It’s a Screwdriver.

Here’s a metaphor I use a lot: AI is not a hammer. It’s a screwdriver.

Too many people try to use AI for the wrong task. They ask it to be a mind reader or a miracle worker. When it fails, they call it dumb. But that’s like trying to hammer in a screw and then blaming the hammer.

When you learn what AI actually does well, like pattern recognition, remixing ideas, filtering, and translating formats, you start to use AI for its actual strengths. As Bender et al. (2021) explain in their paper On the Dangers of Stochastic Parrots, large language models are fundamentally pattern-matching systems. They can generate fluent, creative-sounding language, but they do not possess understanding, emotional awareness, or genuine creativity. They remix what already exists. That is why we must use these tools to support our thinking, not replace it. It becomes a tool in your toolkit. Not a black box. Not a crutch. A screwdriver.

As Joanna Maciejewska put it: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” No, AI won’t literally do your dishes. But it might give you time back so you can do something that matters more.

How I Actually Use AI in Class With Students

I teach graphic design, motion, UX, and interactive design. AI is already a mainstay in each of these disciplines—from tools that enhance layout and animation to systems that evaluate accessibility and automate UX testing. But even though AI has become part of the professional design landscape, I was still skeptical. I wasn’t sure how to bring it into my classroom in a meaningful way. So I started small.

Using AI for minor efficiencies—generating rubrics, reformatting documents, cleaning up language—felt good. It felt safe. And it gave me just enough momentum to try it on bigger, more impactful tasks. What made the difference was a mindset shift. I stopped seeing myself as a single musician trying to play every part of a complex score and started seeing myself as the conductor of the orchestra. I didn’t need to play every part; I just needed to know how the parts worked together. That gave me the confidence to use AI—and to teach with it.

Here’s how I integrate AI into our learning:

  • Students design chatbots that simulate clients, so they can roleplay conversations. I used to pretend to be clients and interact with students through Canvas discussion boards. Now I can read their chat logs and have conversations with them about their questions and intentions.
  • In Motion Graphics, students use “vibe coding”—a form of sketching in code with the help of GPT to simulate motion, like moons orbiting planets.
  • In Interactive Design, they use Copilot to debug code in HTML, CSS, and JavaScript.
  • They learn to generate placeholder images for mockups, not final artwork.
  • We create custom Copilot agents, like “RUX”—a UX-focused bot trained to give scaffolded feedback based on accessibility standards.
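
To make the “vibe coding” idea concrete, here is a minimal sketch in plain JavaScript of the orbiting-moons exercise. The function names and numbers are illustrative, not from my actual assignments; in class, a drawing layer (canvas, p5.js, or similar) would sit on top of math like this.

```javascript
// Compute a moon's position on a circular orbit around a planet.
// `angle` is in radians; `radius` is the orbit radius in pixels.
function moonPosition(planet, radius, angle) {
  return {
    x: planet.x + radius * Math.cos(angle),
    y: planet.y + radius * Math.sin(angle),
  };
}

// Generate one full orbit as a list of frames. A render loop would
// draw the moon at each of these positions in turn.
function orbitFrames(planet, radius, steps) {
  const frames = [];
  for (let i = 0; i < steps; i++) {
    const angle = (2 * Math.PI * i) / steps;
    frames.push(moonPosition(planet, radius, angle));
  }
  return frames;
}
```

Students prompt GPT for a starting point like this, then tweak the numbers and watch what changes. The learning is in the tweaking.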

I’m not handing them shortcuts. I’m handing them power tools and asking them to build something that’s still theirs.

The Creative Process Needs Scaffolding—AI Can Help

I believe in the creative process. I’ve studied models like the Double Diamond and the 4C Model. I’ve seen how students get stuck during the early stages, especially when self-doubt creeps in.

That’s where AI shines.

AI helps my students generate more ideas in the divergent phase. This echoes research by Mollick and Terwiesch (2024) showing that structured AI prompting increases idea variance and originality during the creative process. It helps them compare, sort, and edit during the convergent phase. And when I ask them to submit their chat logs as part of their final deliverable, I can see their thinking. It’s like watching a time-lapse of the creative process.

We’re not just assessing artifacts anymore. We’re assessing growth. And that includes how students use AI as part of their process. I make it clear that AI-generated outputs are not to be submitted as final work. Instead, we treat those outputs as inspiration or scaffolding—starting points that must be reshaped, edited, or reimagined by the human at the center of the learning. That’s a critical behavior we need to model as teachers. If we want students to be creative thinkers, not copy-paste artists, then we have to show them how that transformation happens.

Accessibility and AI Should Be Friends

I also use AI to make my course materials more accessible. I format assignments to follow TILT and UDL principles. For example, I asked GPT to act as a TILT and UDL expert and reformat a complex assignment brief. It returned a clean layout with clear learning objectives, task instructions, and evaluation criteria. I pasted this directly into a Canvas Page to ensure full screen reader compatibility and ease of access.

For rubrics, I asked GPT to generate a Canvas rubric using a CSV file template. I specified category names, point scales, and descriptors, and GPT returned a rubric that I could tweak and upload into Canvas. No more building from scratch in the Canvas UI.

To generate quizzes, I use OCR with my phone’s Notes app to scan printed textbook pages. I paste that text into GPT and ask it to write multiple-choice questions with answer keys. GPT can even generate QTI files, which I import directly into Canvas. This process saves me hours of manual quiz-writing and makes use of printed texts that don’t have digital versions.

AI helps me build ramps, not walls.

Faculty are also legally required to build those ramps. Under Section 504 of the Rehabilitation Act and the Americans with Disabilities Act (ADA), course content in learning management systems like Canvas must meet accessibility standards. But let’s be honest—retrofitting dozens or even hundreds of old documents, PDFs, and slide decks into fully accessible formats is a monumental task. It often gets pushed to the bottom of the to-do list, which leaves institutions vulnerable to non-compliance. Check out the WCAG standards for more details.

AI can help. It can reformat documents for screen reader compatibility, generate alt text, simplify layout structure, and audit for contrast and clarity. And it can do it in a fraction of the time it would take any one of us. By using AI thoughtfully here, we not only make our content better, we also help our institutions become more equitable and compliant faster.

When I use local LLMs to analyze student writing using tools like LM Studio, I keep student data safe, FERPA compliant, and private. This aligns with concerns raised by Liang et al. (2023) about how commercial LLMs may compromise the privacy of non-native English speakers and their content. It is ethical. It is efficient. And it respects the trust students place in me.
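
As one illustration of the local setup: LM Studio can run an OpenAI-compatible server on your own machine (by default at http://localhost:1234/v1), so requests like the sketch below never leave your computer. The helper name, model name, and system prompt here are my own placeholders, not a prescribed workflow.

```javascript
// Build a request for a locally hosted model. Because the URL points
// at localhost, student text stays on your machine.
function buildLocalRequest(studentText) {
  return {
    url: "http://localhost:1234/v1/chat/completions",
    body: {
      model: "local-model", // placeholder; use whatever model you loaded in LM Studio
      messages: [
        { role: "system", content: "You are a writing tutor. Give formative, constructive feedback only." },
        { role: "user", content: studentText },
      ],
    },
  };
}

// Usage (requires LM Studio running with its local server enabled):
// const req = buildLocalRequest(essayText);
// fetch(req.url, { method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(req.body) });
```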

Let Students Build Their Own Tools

One of the best things I’ve done is empower students to create their own AI agents.

Yes, students can train their own Copilot bots. And when they do, they stop seeing AI as some alien threat. They start seeing it as a co-creator. A partner. A lab assistant. ChatGPT has a feature called Custom GPTs, which allows similar personalization, but it’s locked behind a paywall. That creates real inequity for students who can’t afford a subscription. Copilot, on the other hand, is free to students and provides the necessary capabilities to build custom agents or chatbots. Here’s a guide to get started building your own agents with Copilot.

As a way to model this behavior for students, I created a Copilot agent myself called RUX, short for “Rex UX,” honoring Rex, our beloved university mascot. I built it using Microsoft’s Copilot Studio, which lets you define an agent’s knowledge base, tone, and purpose. For RUX, I gave it specific documentation to pull from, including core sources like WCAG, UDL, and UX heuristics, and trained it to act as a guide and feedback coach for my UX students. It doesn’t give away answers. It asks questions, gives feedback, and helps students reflect.

Setting up an agent starts with defining your intent. I decided I wanted RUX to act like a mentor who knew the standards for accessibility and good UX practices, but also had the patience and tone of a coach. I uploaded key resources as reference material, wrote prompt examples, and added instructions to prevent the agent from simply giving away answers. This ensures students use it to reflect and improve rather than shortcut their learning.

The great part is that it took me about 30 minutes. And now my students use it to get feedback in between critiques, to check their work against accessibility standards, and to build their confidence.

And the students slowly start to ask better questions.

Final Thoughts: Be the Conductor, Not the Consumer

I tell my students this all the time: don’t just be a user. Be the conductor. That’s the heart of this whole article. I started this journey skeptical and unsure about how to use AI in my teaching, but I kept experimenting. And the more I leaned in, the more I realized I could use these tools to orchestrate the learning experience. I didn’t need to master every note, just guide the ensemble. Once I felt that shift, I was able to build my own practice and share it with students in ways that felt grounded and empowering.

Here are two simple but powerful GPT exercises from the UPenn AI in the Classroom course that I recommend to get you started:

1. Role Playing (Assigning the AI a Persona)

This method helps shape AI responses by giving it a clear role.

Steps:

  • Tell the AI, “You are an expert in [topic].”
  • Provide a specific task, like “explain X to a 19-year-old art student” or “give feedback on a beginner-level UX portfolio.”
  • Refine the prompt with context about the student’s needs or your learning objectives.

Outcome: The AI behaves like a thoughtful tutor instead of a know-it-all. Students can use it as a low-stakes, judgment-free practice partner.
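
For students ready to go beyond the chat window, the same steps translate directly into the messages array used by OpenAI-style chat APIs. A minimal sketch, with an illustrative helper name of my own:

```javascript
// Turn the role-playing steps into a reusable message builder.
function buildPersonaMessages(topic, task, context) {
  const messages = [
    // Step 1: assign the AI a clear role.
    { role: "system", content: `You are an expert in ${topic}.` },
    // Step 2: give it a specific task.
    { role: "user", content: task },
  ];
  // Step 3: refine with context about the student or learning objectives.
  if (context) {
    messages.push({ role: "user", content: `Context: ${context}` });
  }
  return messages;
}
```

The array this returns can be passed to any OpenAI-style chat endpoint as the `messages` field.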

2. Chain of Thought Prompting

This is useful for step-by-step thinking and collaborative problem solving.

Steps:

  • Ask the AI to help you develop a lesson plan, solve a design challenge, or draft a workflow.
  • Break the task into steps: “What’s the first thing I should consider?” Then “What comes next?”
  • Let the AI ask you questions in return. Keep the conversation going.

Outcome: You model metacognition, and students learn how to refine ideas through iterative feedback. It supports both ideation and strategic planning.
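
The step-by-step exchange above can also be scripted as a conversation loop. A sketch, assuming `askModel` is any function that sends the running message list to a chat model and returns its reply (the name is illustrative):

```javascript
// Chain-of-thought prompting as a running conversation. Each reply is
// appended before the next step question, so the model always sees the
// full history of the exchange.
async function stepThrough(task, steps, askModel) {
  const messages = [{ role: "user", content: `Help me with: ${task}` }];
  for (const step of steps) {
    messages.push({ role: "user", content: step });
    const reply = await askModel(messages); // e.g. a fetch to a chat API
    messages.push({ role: "assistant", content: reply });
  }
  return messages; // the full transcript, useful for reflection
}
```

Saving the returned transcript is what makes the process assessable: you can see each step of the thinking, not just the final answer.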

Try these as warm-ups, homework tools, or reflection exercises. They’re simple, ethical, and illuminating ways to integrate AI in any classroom.

That’s what I want for my colleagues, too. You don’t have to know everything about AI. You just have to be curious. You have to be willing to ask: “What can this help me or my students do better?”

So here’s your first experiment:

  1. Have students brainstorm ideas for a project.
  2. Have them ask GPT the same question.
  3. Compare the lists.
  4. Reflect: What worked? What didn’t? How will you approach brainstorming next time? Then repeat.

Then decide what to keep, what to toss, and what to remix. Just like we always have. Let’s stop building walls. Let’s start building labs. And let’s do it together.