Tag: LLM

  • Vibe Coding for 8 Crazy Nights


    During my semester-long sabbatical, I set out to experiment with new ways to tell Jewish stories, and I kept coming back to the immersive feeling of games. While I stayed focused on my main objective, completing my book Hiddur Olam: Bereshit – Genesis and telling new Jewish stories through art and writing, this Hanukkah I also felt a pull to extend that immersive storytelling into video games, where players could step inside the work rather than only view or read it. Framing the game projects as interactive midrash let me treat code, mechanics, and level design as another layer of commentary on the same questions that animate the book: how to re-engage with foundational Jewish narratives, how to honor tradition while playing with form, and how to imagine Jewish futures that feel both grounded and newly alive in digital space.

    Vibe coding and my AI toolbox

    For all of these projects, I leaned heavily on what I think of as vibe coding. By vibe coding, I mean describing in natural language how I want something to feel, look, or behave, then using AI coding tools to generate or refactor code until the game’s behavior matches that feeling. I used ChatGPT, Gemini, and GitHub’s coding assistants as a rotating team, asking for everything from small bug fixes and refactors to full systems like player controllers or state machines. I have 20 years of front-end and back-end web development experience; as part of a wave of student designer-artist-coders in NY in the late 90s and early 00s, I made websites by day and net-art by night, and for me vibe coding is a great method for making code sketches of ideas and experiments. In this project, I would move the same block of code from one model to another when I got stuck, wanted new insight, or wanted to shift from quick procedural hacks into a more object-oriented structure. Each of the different LLM code “voices” helped me see new paths through the same problem. These tools gave me a sense of freedom to soar with code, where in the past I would have been creeping along, slowly teaching myself new methods and getting bogged down in syntax rather than in the Jewish and ludic questions that actually interested me.

    Research questions that guided me

    A cluster of questions ran through everything I made:

    • How can I evolve dreidel gameplay beyond a single spin and four letters?
    • With only four sides, can a dreidel still function as a rich, reusable dice object in a larger game system?
    • Can the dreidel be used more effectively to tell the story of Hanukkah, not just reference it visually?
    • What are better ways to tell the story of Hanukkah using the immersiveness of games?
    • How can I tell new digital Jewish stories that feel both grounded in tradition and native to contemporary game culture?
    • Is this creative act, moving ritual objects into speculative, interactive worlds, an example of Jewish futurism in practice?
    • How will Jewish people play dreidel in the future?

    Each experiment became a different argument or provisional answer to these questions.

    So, over eight nights, I played with various game and interaction experiments. Here are my best of the best, in no particular order.

    Dreidel Run: Neon Grid

    Best for dreidel kinetics

    With Dreidel Run, I leaned into the question of how to evolve dreidel gameplay at a purely kinetic level. Here, I made the case that the dreidel can succeed as a contemporary, arguably futuristic game mechanic when it is allowed to be fast, flashy, and even a little mindless, while still anchored in Hanukkah imagery like gelt and glowing colors. Using Temple Run’s game mechanics, the experiment argues that not every Jewish game needs an explicit narrative lesson, and that embodied fun, quick reflexes, and the pleasure of catching coins and dodging hazards can themselves be a form of connection, a way of feeling Hanukkah as energy and rhythm rather than only as a story told in words.

    Dreidel x Katamari mashup

    Best for dreidel physics

    In the dreidel and Katamari Damacy inspired mashup, I took seriously the question of whether a small, four-sided object could scale up into a world-building tool. The design argues that as the spinning dreidel absorbs gelt and grows, it enacts a kind of visual and mechanical midrash on Hanukkah’s themes of accumulation, excess, and the tension between material things and spiritual light. By exaggerating the physics, I could show how a simple ritual object might literally reshape its environment, and in doing so, I tested how far dreidel-based mechanics can stretch before they stop feeling like dreidel play and become something new. It was another fun way to play with dreidel kinetics.

    Dreidel Physics Sandbox

    Best Holiday Stress Reliever

    The smaller dreidel physics sandbox experiments addressed the quieter research question of how players might encounter Jewish content without a fixed goal at all. The spinning battle-top game transforms the dreidel into a tornado-like object tasked with destroying the Seleucid idols in the Temple. Its instant gameplay makes the argument that open-ended, low-stakes experimentation can be a valid form of digital Jewish learning, where the “lesson” is not a moral but a felt sense of spin, friction, wobble, and collapse. In the second experiment, I used Marble Madness-style gameplay, making the dreidel a tiny lab for thinking about stability and risk, which echoes Hanukkah’s precariousness and invites players to linger, tinker, and waste time in a way that is still charged with symbolic possibility. These were worthwhile explorations of the exciting, kinetic nature of the dreidel game.

    Dreidel Catan prototype

    Most conceptual

    In my Catan-style prototype, I explored whether a four-sided dreidel could act as a meaningful dice object inside a complex resource-and-territory game, one that could tell the story of Hanukkah in terms of the Maccabees, Hellenized Jews, and Seleucids as groups competing for resources and domination in Jerusalem. The design argues that it can, because each side of the dreidel already carries narrative weight, and that weight can be elevated when paired with a card, tableau, and board system like Catan’s, where resource bonuses, penalties, and events shape a shared board.

    I was curious to replace the dice with two dreidels, letting the dreidels drive different outcomes for each player and pushing the dreidel’s game narrative from a closed loop into a network of context-specific effects. While buggy and complicated, this was one way that Hanukkah themes of scarcity, risk, and negotiation might live inside a modern strategy game.

    Hanukkah Quest 1: The Temple of Gloom

    Best for Hanukkah story

    Hanukkah Quest 1: The Temple of Gloom tackles the question of how to better tell the story of Hanukkah with the immersiveness of a game. Here, I argue that interactive midrash is possible when puzzles, jokes, and spatial navigation all serve as commentary on the holiday’s themes, such as hiddenness, illumination, desecration, and rededication. Instead of retelling the miracle in a linear script, the game invites players to stumble through a gloomy, playful temple and slowly piece together meaning from their own actions, which models a Jewish way of learning that is iterative, interpretive, and grounded in wandering and return.

    Jewish futurist wisdom

    These experiments do not just gesture toward Jewish futurism, they enact it and point toward where it might go next. They show that Jewish futurism means keeping ritual objects and stories in play, while re-staging them inside interactive systems where players can touch, bend, and argue with them in real time, like a digital beit midrash that anyone can enter. By dropping the dreidel and Hanukkah into arcade runners, resource economies, absurd physics toys, and point-and-click temples, the work suggests that the future of Jewish storytelling may live in responsive systems rather than fixed scripts, and in shared worlds that generate many valid readings instead of a single correct answer. My vibe coding practice, using AI to rapidly prototype and reconfigure these systems around a felt sense of Jewish meaning and play, is a clear example of Jewish futurism in practice, and it opens hopeful paths forward: networked Jewish game spaces, collaborative “midrash servers,” classroom rituals that unfold as playable worlds, and future projects where new holidays, communities, and speculative texts are first tested as games before they are written down. In that sense, these games are not an endpoint but a launch pad, a sign that Jewish life will keep unfolding inside new technologies, still circling the same core questions of memory, risk, light, and communal responsibility, while inviting the next generation to help code what comes next.

  • Stop Using AI As A Hammer, When It’s A Screwdriver: My AI Odyssey Through The Classroom


    This article is a teacher’s journey (mine) out of the AI shadows and into classroom transformation. It is a companion to a recorded lecture I gave on how I use AI in the classroom. I recommend watching the video in addition to reading this post, as it offers a deeper dive and helps contextualize the experiments and perspectives summarized here.


    We’ve successfully scared the hell out of ourselves about AI. That’s the truth. Despite the helpful WALL-Es and Rosie the Robots, the cultural imagination has been fed a steady diet of dystopian dread, from HAL 9000 locking astronauts out in space to the death machines of The Terminator. And now, with the hype and hysteria churned out by the media and social media, we’ve triggered a collective fight, flight, or freeze response. So it’s no surprise that when AI entered the classroom, a lot of educators felt like they were witnessing the start of an apocalypse, as if each of us were our own John Connor watching the dreaded Skynet come online for the first time.

    But I’m here to tell you that’s not what’s happening. At least not in my classroom.

    In fact, this post is about how I crawled out of the AI shadows and learned to see it not as a threat but as a tool. Not a hammer, but a screwdriver. Not something that does my job for me, but something that helps me do my job better. Especially the parts that grind me down or eat away at my time.

    If you’re skeptical, hesitant, angry, or just plain confused about what AI is doing to education, pull up a chair. I’ve been there. But I’ve also experimented, adjusted, and seen the light and the darkness. I cannot dispel all of the implications of AI use, but I want to share what I’ve learned so you don’t have to build the spaceship from scratch.

    We Owe It to Our Students to Model Bravery

    Students are already using AI. They’re exploring it in secret, often at night, often with shame. They’re wondering if they’re doing something wrong. And if we meet them with fear, avoidance, or silence, we’re sending the message that they’re on their own. In a 2023 talk at ASU+GSV, Ethan Mollick noted that nearly all of his students had already used ChatGPT, often without disclosure. He emphasized that faculty need to assume AI is already in the room and should focus on teaching students how to use it wisely, ethically, and with reflection. That means our job isn’t to police usage—it’s to guide it.

    I don’t want my students wandering through this new terrain without a map. So I model what I want them to do: ask questions, explore ethically, think critically, and most of all—reflect. I also model the discipline of not using AI output as a final product, but only as inspiration. If I use AI to brainstorm or generate language, I always make sure to rewrite it into something that reflects my own thinking and voice. That’s how we teach students to be creators, not copy machines. Map out where you have been and where you are going in your journey. 

    And when I don’t know the answer? I tell them. Then we look it up together. I use this ChatGPT cheatsheet often. Check it out.

    That’s what it means to teach AI literacy. It’s not about having all the answers. It’s about being brave enough to stay in the conversation. I was also wandering aimlessly with AI—unsure how to use it, uncertain about what was ethical—until I took this course from Wharton on Leveraging ChatGPT for Teaching. That course changed my mindset, my emotional state, and my entire classroom practice. It gave me a framework for using AI ethically, strategically, and with care for student development. If you’re looking for a place to start, that’s a great one.

    AI Isn’t a Hammer. It’s a Screwdriver.

    Here’s a metaphor I use a lot: AI is not a hammer. It’s a screwdriver.

    Too many people try to use AI for the wrong task. They ask it to be a mindreader or a miracle worker. When it fails, they say it’s dumb. But that’s like trying to hammer in a screw and then blaming the hammer.

    When you learn what AI actually does well, like pattern recognition, remixing ideas, filtering, and translating formats, you start to use AI for its actual strengths. As Bender et al. (2021) explain in their paper On the Dangers of Stochastic Parrots, large language models are fundamentally pattern-matching systems. They can generate fluent, creative-sounding language, but they do not possess understanding, emotional awareness, or genuine creativity. They remix what already exists. That is why we must use these tools to support our thinking, not replace it. It becomes a tool in your toolkit. Not a black box. Not a crutch. A screwdriver.

    I don’t want AI to do my art and writing so I can do dishes. I want AI to do my dishes so I can do art and writing. As Joanna Maciejewska put it: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” It won’t do your dishes. But it might give you time back so you can do something that matters more.

    How I Actually Use AI in Class With Students

    I teach graphic design, motion, UX, and interactive design. AI is already a mainstay in each of these disciplines—from tools that enhance layout and animation to systems that evaluate accessibility and automate UX testing. But even though AI had become part of the professional design landscape, I was still skeptical. I wasn’t sure how to bring it into my classroom in a meaningful way. So I started small.

    Using AI for minor efficiencies—generating rubrics, reformatting documents, cleaning up language—felt good. It felt safe. And it gave me just enough momentum to try it on bigger, more impactful tasks. What made the difference was a mindset shift. I stopped seeing myself as a single musician trying to play every part of a complex score and started seeing myself as the conductor of the orchestra. I didn’t need to play every part, I just needed to know how the parts worked together. That gave me the confidence to use AI—and to teach with it.

    Here’s how I integrate AI into our learning:

    • Students design chatbots that simulate clients, so they can roleplay conversations. I used to pretend to be clients and interact with students through Canvas discussion boards. Now I can read their chat logs and have conversations with them about their questions and intentions.
    • In Motion Graphics, students use “vibe coding”—a form of sketching in code with the help of GPT to simulate motion, like moons orbiting planets.
    • In Interactive Design, they use Copilot to debug code in HTML, CSS, and JavaScript.
    • They learn to generate placeholder images for mockups, not final artwork.
    • We create custom Copilot agents, like “RUX”—a UX-focused bot trained to give scaffolded feedback based on accessibility standards.
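    The motion-sketching item above can be made concrete. This is a minimal sketch of the kind of “vibe coded” study a student might arrive at with GPT, a moon orbiting a planet, reduced here to the pure orbital math so the idea is visible; in class the positions would feed a canvas or p5.js draw loop, and the function names are my own illustrative choices, not the actual class code.

```javascript
// Compute a moon's position on a circular orbit around a planet,
// using polar coordinates centered on the planet.
function moonPosition(planetX, planetY, radius, angle) {
  return {
    x: planetX + radius * Math.cos(angle),
    y: planetY + radius * Math.sin(angle),
  };
}

// Advance the angle by a fixed step each frame to animate the orbit.
function orbitFrames(frames, radius, speed) {
  const positions = [];
  for (let f = 0; f < frames; f++) {
    positions.push(moonPosition(0, 0, radius, f * speed));
  }
  return positions;
}

// Four frames of a quarter-turn-per-frame orbit.
const frames = orbitFrames(4, 10, Math.PI / 2);
```

    In a real sketch, each frame would simply redraw the planet and then the moon at the current position, which is exactly the kind of loop students can ask GPT to scaffold and then tweak by feel.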

    I’m not handing them shortcuts. I’m handing them power tools and asking them to build something that’s still theirs.

    The Creative Process Needs Scaffolding—AI Can Help

    I believe in the creative process. I’ve studied models like the Double Diamond and the 4C Model. I’ve seen how students get stuck during the early stages, especially when self-doubt creeps in.

    That’s where AI shines.

    AI helps my students generate more ideas in the divergent phase. This echoes research by Mollick and Terwiesch (2024) showing that structured AI prompting increases idea variance and originality during the creative process. It helps them compare, sort, and edit during the convergent phase. And when I ask them to submit their chat logs as part of their final deliverable, I can see their thinking. It’s like watching a time-lapse of the creative process.

    We’re not assessing just artifacts anymore. We’re assessing growth. And that includes how students use AI as part of their process. I make it clear that AI-generated outputs are not to be submitted as final work. Instead, we treat those outputs as inspiration or scaffolding—starting points that must be reshaped, edited, or reimagined by the human at the center of the learning. That’s a critical behavior we need to model as teachers. If we want students to be creative thinkers, not copy-paste artists, then we have to show them how that transformation happens.

    Accessibility and AI Should Be Friends

    I also use AI to make my course materials more accessible. I format assignments to follow TILT and UDL principles. For example, I asked GPT to act as a TILT and UDL expert and reformat a complex assignment brief. It returned a clean layout with clear learning objectives, task instructions, and evaluation criteria. I pasted this directly into a Canvas Page to ensure full screen reader compatibility and ease of access.

    For rubrics, I asked GPT to generate a Canvas rubric using a CSV file template. I specified category names, point scales, and descriptors, and GPT returned a rubric that I could tweak and upload into Canvas. No more building from scratch in the Canvas UI.
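    To illustrate the rubric-as-CSV idea, here is a small sketch that assembles rubric rows into a CSV string from category names, descriptors, and point values. The column headers here are placeholders of my own; the exact columns Canvas expects depend on the template you hand GPT and the import tool your institution uses.

```javascript
// Build a CSV string from an array of rubric criteria.
// Descriptions are quoted so embedded commas don't break columns.
function rubricToCsv(criteria) {
  const header = "Criterion,Description,Points";
  const rows = criteria.map(
    (c) => `${c.name},"${c.description}",${c.points}`
  );
  return [header, ...rows].join("\n");
}

const csv = rubricToCsv([
  { name: "Hierarchy", description: "Clear visual hierarchy guides the eye", points: 10 },
  { name: "Accessibility", description: "Meets WCAG contrast standards", points: 10 },
]);
```

    The point is less the code than the shape of the data: once a rubric lives as structured rows instead of hand-built UI entries, GPT can generate, tweak, and regenerate it in seconds.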

    To generate quizzes, I use OCR with my phone’s Notes app to scan printed textbook pages. I paste that text into GPT and ask it to write multiple-choice questions with answer keys. GPT can even generate QTI files, which I import directly into Canvas. This process saves me hours of manual quiz-writing and makes use of printed texts that don’t have digital versions.

    AI helps me build ramps, not walls.

    Faculty are also legally required to build those ramps. Under Section 504 of the Rehabilitation Act and the Americans with Disabilities Act (ADA), course content in learning management systems like Canvas must meet accessibility standards. But let’s be honest—retrofitting dozens or even hundreds of old documents, PDFs, and slide decks into fully accessible formats is a monumental task. It often gets pushed to the bottom of the to-do list, which leaves institutions vulnerable to non-compliance. Check out the WCAG standards for more details.

    AI can help. It can reformat documents for screen reader compatibility, generate alt text, simplify layout structure, and audit for contrast and clarity. And it can do it in a fraction of the time it would take any one of us. By using AI thoughtfully here, we not only make our content better, we also help our institutions become more equitable and compliant faster.

    When I use local LLMs to analyze student writing using tools like LM Studio, I keep student data safe, FERPA compliant, and private. This aligns with concerns raised by Liang et al. (2023) about how commercial LLMs may compromise the privacy of non-native English speakers and their content. It is ethical. It is efficient. And it respects the trust students place in me.

    Let Students Build Their Own Tools

    One of the best things I’ve done is empower students to create their own AI agents.

    Yes, students can train their own Copilot bots. And when they do, they stop seeing AI as some alien threat. They start seeing it as a co-creator. A partner. A lab assistant. ChatGPT has a feature called Custom GPTs, which allows similar personalization, but it’s locked behind a paywall. That creates real inequity for students who can’t afford a subscription. Copilot, on the other hand, is free to students and provides the necessary capabilities to build custom agents or chatbots. Here’s a guide to get started building your own agents with Copilot.

    As a way to model this behavior for students, I created a Copilot agent myself called RUX, short for “Rex UX,” honoring Rex, our beloved university mascot. I built it using Microsoft’s Copilot Studio, which lets you define an agent’s knowledge base, tone, and purpose. For RUX, I gave it specific documentation to pull from, including core sources like WCAG, UDL, and UX heuristics, and trained it to act as a guide and feedback coach for my UX students. It doesn’t give away answers. It asks questions, gives feedback, and helps students reflect.

    Setting up an agent starts with defining your intent. I decided I wanted RUX to act like a mentor who knew the standards for accessibility and good UX practices, but also had the patience and tone of a coach. I uploaded key resources as reference material, wrote prompt examples, and added instructions to prevent the agent from simply giving away answers. This ensures students use it to reflect and improve rather than shortcut their learning.

    The great part is that it took me about 30 minutes. And now my students use it to get feedback in between critiques, to check their work against accessibility standards, and to build their confidence.

    And the students slowly start to ask better questions.

    Final Thoughts: Be the Conductor, Not the Consumer

    I tell my students this all the time: don’t just be a user. Be the conductor. That’s the heart of this whole article. I started this journey skeptical and unsure about how to use AI in my teaching, but I kept experimenting. And the more I leaned in, the more I realized I could use these tools to orchestrate the learning experience. I didn’t need to master every note, just guide the ensemble. Once I felt that shift, I was able to build my own practice and share it with students in ways that felt grounded and empowering.

    Here are two simple but powerful GPT exercises that are from the UPenn AI in the Classroom course that I recommend for you to get started:

    1. Role Playing (Assigning the AI a Persona)

    This method helps shape AI responses by giving it a clear role.

    Steps:

    • Tell the AI, “You are an expert in [topic].”
    • Provide a specific task, like “explain X to a 19-year-old art student” or “give feedback on a beginner-level UX portfolio.”
    • Refine the prompt with context about the student’s needs or your learning objectives.

    Outcome: The AI behaves like a thoughtful tutor instead of a know-it-all. Students can use it as a low-stakes, judgment-free practice partner.
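    The role-playing steps above can be sketched as a small prompt-building function. The message shape follows the common chat-API format (system and user roles); the function name and example strings are my own illustrations, not a specific tool’s API.

```javascript
// Build a role-play prompt: a system message assigns the persona and
// context, and a user message carries the specific task.
function buildRolePlayPrompt(topic, task, context) {
  return [
    { role: "system", content: `You are an expert in ${topic}. ${context}` },
    { role: "user", content: task },
  ];
}

const messages = buildRolePlayPrompt(
  "UX design",
  "Give feedback on a beginner-level UX portfolio.",
  "The student is a 19-year-old art major; be encouraging and specific."
);
```

    Refining the prompt, step three above, usually means editing that context string until the persona’s tone and scope match what the student actually needs.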

    2. Chain of Thought Prompting

    This is useful for step-by-step thinking and collaborative problem solving.

    Steps:

    • Ask the AI to help you develop a lesson plan, solve a design challenge, or draft a workflow.
    • Break the task into steps: “What’s the first thing I should consider?” Then “What comes next?”
    • Let the AI ask you questions in return. Keep the conversation going.

    Outcome: You model metacognition, and students learn how to refine ideas through iterative feedback. It supports both ideation and strategic planning.
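    Chain-of-thought prompting can likewise be pictured as a growing message history: each exchange is appended, so every new question builds on the steps before it. The message format mirrors common chat APIs; the helper names and the lesson-planning strings are my own illustrative stand-ins, with the model’s replies hand-written here rather than generated.

```javascript
// Append the teacher's next question to the conversation history.
function ask(history, question) {
  return [...history, { role: "user", content: question }];
}

// Append the model's reply so later questions can build on it.
function record(history, answer) {
  return [...history, { role: "assistant", content: answer }];
}

let history = [
  { role: "system", content: "Help me develop a lesson plan step by step. Ask me questions as we go." },
];
history = ask(history, "What's the first thing I should consider?");
history = record(history, "Start with your learning objectives. What should students be able to do?");
history = ask(history, "They should be able to critique a UX wireframe. What comes next?");
```

    Because the whole history rides along with each turn, the model can ask questions back and the plan develops iteratively rather than arriving all at once.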

    Try these as warm-ups, homework tools, or reflection exercises. They’re simple, ethical, and illuminating ways to integrate AI in any classroom.

    That’s what I want for my colleagues, too. You don’t have to know everything about AI. You just have to be curious. You have to be willing to ask: “What can this help me or my students do better?”

    So here’s your first experiment:

    1. Have students brainstorm ideas for a project.
    2. Have them ask GPT the same question.
    3. Compare the lists.
    4. Reflect, then repeat. (What worked? What didn’t? How will you approach brainstorming next time?)
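    Step 3, comparing the lists, can even be done mechanically. A small sketch, with function and field names of my own invention, that sorts ideas into shared, student-only, and AI-only buckets:

```javascript
// Compare two brainstorm lists case-insensitively and report
// which ideas overlap and which are unique to each source.
function compareLists(studentIdeas, aiIdeas) {
  const student = new Set(studentIdeas.map((s) => s.toLowerCase()));
  const ai = new Set(aiIdeas.map((s) => s.toLowerCase()));
  return {
    shared: [...student].filter((i) => ai.has(i)),
    studentOnly: [...student].filter((i) => !ai.has(i)),
    aiOnly: [...ai].filter((i) => !student.has(i)),
  };
}

const report = compareLists(
  ["Poster series", "Zine", "Interactive map"],
  ["zine", "Podcast", "Interactive map"]
);
// report.shared holds the overlap; studentOnly and aiOnly hold each side's unique ideas.
```

    The studentOnly bucket is usually the most interesting one to discuss: it is where the human ideas that the model did not reach for become visible.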

    Then decide what to keep, what to toss, and what to remix. Just like we always have. Let’s stop building walls. Let’s start building labs. And let’s do it together.

  • From Prompt to Practice: How Artists Can Rethink and Reclaim AI Tools


    Yes, there’s a problem, but it’s not just about AI.

    I’ve heard passionate arguments against AI usage from fellow artists. I’ve also read in detail about the lawsuits filed by creators against companies who used their work without permission. I agree that this is wrong and that it has hurt the legitimacy of these tools. These concerns are real, and they’ve shaped how I approach the technology.

    A central question remains: if we could make the source imagery for AI training completely copyright-free and ethical, would that actually end the argument over the use of these tools in art making? Or is the real issue an underlying belief in purity in the creative process? As a graphic designer, I know that purity in creation was disrupted long before AI ever existed.

    From Generative Art to Generative AI

    I was exhilarated the first time I started using AI in my art. I’ve been working as a generative artist since the late 90s, so I’ve seen a lot of shifts in how tech intersects with creativity. Back in the early 2000s, I was building generative art installations that used text, images, and sound. I was dreaming in code, digital sensors, and databases, the core elements of my art-making robots. I was even invited to exhibit my projects in the US and at international media arts biennials in Split, Croatia, and Wroclaw, Poland. But this new wave of AI tools? It felt like a leap. A serious one.

    When the most recent wave of AI tools burst onto the scene in 2022, I found myself re-reading Walter Benjamin’s essay The Work of Art in the Age of Mechanical Reproduction and picking up Lev Manovich’s article Who is an Author in the Age of AI? for the first time. They both helped give me clarity on the developing debate. The first was written nearly a century ago, and the second in the present moment, but both challenged me to think beyond the surface of the debate.

    That said, I didn’t jump in without questions. I was skeptical about where the image data was coming from. Most of the image models were trained on LAION-400M, a huge dataset scraped from the internet that was meant for research, not commercial use. As an artist, I care deeply about copyright and creative ownership. That part bothered me.

    But the power of the tool was undeniable. AI helps me iterate quickly. It pushes my image-making forward and challenges me to try things I wouldn’t have done on my own. New poses, wild color combinations, unusual compositions. Sometimes, I don’t even recognize what I’m capable of until I see what AI reflects back at me.

    Through my experiments and explorations, it became clear that this isn’t about replacing my work at all. It’s about expanding it.

    AI as a Catalyst and Sharpener for Creativity

    One of the things I love most about AI is how it helps me start. Sometimes, it’s like having an oracle. I might not know exactly where a project is headed, but I can toss in a few ideas and see what comes back. That response helps me clarify what I want. Or what I don’t want. And that’s part of the creative process, too. The ability to think divergently and then convergently is the true poetry of creative work, and having a “helper” along the way allows for greater tracking of, and reflection on, the process.

    I use Stable Diffusion most often because it’s open-source and highly controllable. I like that I can run it locally on my machine without paying for credits or cloud storage. Not having a paywall gives me the freedom to really dig deep. I can generate a hundred versions of an idea, explore unexpected paths, and move fast without overthinking costs.

    There was one recent project that brought this all into focus. A client in Houston wanted a mural with about 25 different visual elements. Honestly, it was overwhelming. I started by asking AI to look for patterns in the list, riffing with the chatbot to explore visual ways to combine the many items into one space. Not all of the ideas the chat suggested were usable, but it did mention the word surrealism, which made me think of Salvador Dali’s haunting landscapes. That was it! A landscape like Dali’s is a place where all the elements could exist together logically. A dreamscape, so to speak. That unlocked the whole thing. Without AI, it might’ve taken me much longer to get there.

    Most AI images aren’t good. I’d say maybe 10 percent are worth a second look. I know I’ve hit something useful when it meets my “GE” standard: Good Enough to move forward. I’m looking for strong composition and clear visual hierarchy. Everything else can be worked on later. But if the image holds space in a striking way, I’ll keep going.

    AI for Process, Not Just Product: An Ethical Approach

    People often think of AI as a tool for generating a final image. That’s not how I use it. For me, AI is most powerful when it supports the process. It helps me evaluate, brainstorm, and reflect. Sometimes, I upload a rough idea or a brain dump and use a chatbot to ask questions or poke holes in it. That outside perspective—fast, responsive, and nonjudgmental—is gold.

    If I’m stuck, I might ask the AI to generate some moodboards or rough compositions. I’m not expecting polished work. I’m looking for sparks. A direction to follow or a problem to solve. It’s the same way I’d sketch a dozen thumbnails on paper.

    And no, I don’t fall in love with the first cool thing AI spits out. That novelty wore off fast. I’ve trained myself to be curatorial. Most results don’t hit the mark. But knowing I can always make more helps me stay loose. I push ideas until they’re solid.

    One piece that stands out was an illustration I made for a Torah portion about Pharaoh’s dream. I had drawn a grim-looking cow skull and was thinking of placing it in a field of wheat. Then, the AI surprised me. It created a cow skull made out of wheat. That twist was mysterious and totally unexpected—perfect for the surreal nature of a dream. I never would have gone there on my own. But once I saw it, I knew exactly where to take it.

    AI Won’t Replace Me. It Will Refine Me.

    As an educator, I’ve brought AI into the classroom not just as a tool but as a way to help students understand how creativity works. I show them how to use it to ideate, test ideas, and refine their thinking. We do in-class exercises where students generate images or prompts and then share their chatlogs with me. It gives me a real window into how they’re thinking—and how they’re growing.

    Some students are skeptical at first. Others dive in headfirst and sometimes know more than I do about certain tools. The ones who get it quickly start using AI not to shortcut the work but to deepen it. They realize that it’s not about letting AI do the thinking. It’s about using AI to push your thinking further.

    I’ve even used AI as part of a critique. One time, I had students feed their near-final projects into ChatGPT and ask for feedback. With the right prompts, the feedback was surprisingly thoughtful. Not perfect. But useful. It opened a door for them to reflect and iterate in ways they hadn’t before and to be critical of comments that didn’t in fact, help improve their work.

    What AI reveals, I think, is that creativity isn’t just about generating something new. It’s about discovering connections, asking better questions, and recognizing what’s missing. AI isn’t great at originality on its own, but it’s fantastic at remixing and showing what’s possible. It’s like a mirror that reflects potential back at you. I’ve worked closely with creative-process frameworks like the Double Diamond and SCAMPER, and I can say with confidence: AI can support both divergent and convergent thinking, especially when used intentionally.

    Originality: The Collage Conversation

    This comes up a lot: “Isn’t AI just stealing?” And my response usually starts with this: what about collage?

    We’ve accepted collage as a legitimate art form for over a century. Artists like Hannah Höch, Romare Bearden, and Robert Rauschenberg all used found images, many of them copyrighted. They cut, glued, layered, and remixed to create something new. If we call that art, why are we drawing the line at AI?

    To me, AI-generated images are collage-like. The human prompts the model with intention. The AI recombines things based on patterns it has learned. The process is digital, but the creative act is still there. Cutting and pasting by hand doesn’t make something inherently more authentic. It’s the idea behind the work that matters.

    Now, I don’t ignore the legal and ethical side of this. Most major AI image models are trained on datasets built from scraped web images, and that’s a problem. I’ve been exploring more ethically sourced options. For example, Adobe’s Firefly and Shutterstock’s model are trained on licensed stock images. Even better, I recently started working with a model called PixelDust. It’s a rebuild of Stable Diffusion, but trained only on public domain and Creative Commons Zero (CC0) images—think Wikipedia, museum archives, and open repositories. While it’s the closest thing to a public-domain model out there, even it can’t be guaranteed 100% copyright-free.

    I fine-tuned that model using 380 of my own original works. That means when I prompt it now, it generates images in my style using my visual language. It’s still collaborative, but it feels more personal. And the results have seriously improved my ideation speed and image quality.

    There’s a difference between copying and remixing. Collage artists have done it forever. Musicians sample. Writers quote. AI might be new, but it fits within a long tradition of borrowing, blending, and transforming. What complicates the conversation is “style.” People think style is protected by copyright, but it’s not. Only specific works are. So, while artists may be known for their style, that alone doesn’t make it off-limits.

    Yes, people have questioned the validity of my AI-assisted work. When that happens, I explain. I describe how I use AI for ideation, how I fine-tune models on my own work, and how that affects the output. Once people understand that I’m building on my own images and ideas, they usually come around.

    Co-Creating with the Machine: How AI Refines My Process

    Over time, I’ve started experimenting with creating a kind of AI version of myself. Not in a sci-fi clone kind of way, but as a tool trained to think and see more like me. I fine-tuned a model using 360 of my own artworks, each paired with carefully written prompts. That way, when I generate new images, they come out in my visual language, not someone else’s.
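
    The essay doesn’t say how those artwork-and-prompt pairs were organized, but as a rough sketch: many Stable Diffusion fine-tuning scripts (for example, the Hugging Face diffusers training examples) expect a `metadata.jsonl` file that pairs each image file with its caption. A minimal, hypothetical version of that prep step, with made-up file names and prompts:

    ```python
    import json
    import tempfile
    from pathlib import Path

    def build_metadata(image_dir, captions, out_name="metadata.jsonl"):
        """Write one JSON object per line pairing each image file with its
        hand-written caption, the format many diffusers fine-tuning
        scripts expect alongside the training images."""
        out_path = Path(image_dir) / out_name
        with out_path.open("w", encoding="utf-8") as f:
            for file_name, caption in sorted(captions.items()):
                f.write(json.dumps({"file_name": file_name, "text": caption}) + "\n")
        return out_path

    # Tiny demo with one hypothetical artwork and its prompt.
    demo_dir = tempfile.mkdtemp()
    demo_captions = {
        "cow_skull.png": "a cow skull woven from stalks of wheat, surreal, dreamlike",
    }
    metadata_path = build_metadata(demo_dir, demo_captions)
    records = [json.loads(line) for line in metadata_path.read_text(encoding="utf-8").splitlines()]
    ```

    The point of writing the captions by hand, as described above, is that each one doubles as a prompt in the artist’s own vocabulary, so the fine-tuned model learns to associate that vocabulary with the visual style.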

    I also use a tool called ControlNet. It lets me upload sketches or basic compositions, and then the AI fills in the style and detail. This setup allows me to keep control over layout and flow while still tapping into the speed and surprise of the AI. It doesn’t always work the first time, and it can be a long back-and-forth, but the results are worth it.

    Eventually, I’d love to have a copyright-safe, fully custom model that supports my entire process. The goal isn’t automation for the sake of ease. I want to hand off the repetitive, procedural stuff so I can stay focused on creativity, strategy, and ideas.

    And no, I don’t want my AI self to be autonomous. That would defeat the point. I’m the creative leader here. The AI is my partner. It helps me explore, test, and refine, but I make the final call.

    I’ve also made peace with the idea of my style being encoded. I’ve been an illustrator long enough to know that you don’t really “own” a style. My style is a blend of influences I’ve absorbed over the years, and it’s always evolving. As a professional, I’ve had to learn multiple styles just to stay competitive. So, no, I don’t see style as sacred. It’s the ideas and the content that matter most to me.

    Rethinking and Remixing Creativity with AI

    My relationship with AI has changed a lot since I started. At first, I believed the hype. I thought it would be a job killer that could replace me. But as I worked with it more, I realized its limitations. It isn’t a one-click creative solution. It’s a tool that depends on my input, my ideas, and my vision. It helps me move faster and reflect more deeply, but it doesn’t do the thinking or the feeling for me.

    I’ve come to believe that AI isn’t replacing creativity. It’s revealing it. It shows us how we think, where we hesitate, and what we ignore. It challenges the old myths that artists work in isolation, drawing purely from inspiration or talent. That myth never held true for working designers and educators like me. And it definitely doesn’t reflect how creativity works in the real world.

    Still, I respect the artists who are hesitant or resistant. I’ve listened to powerful critiques and concerns. The lawsuits over unauthorized dataset usage raise important ethical and legal questions. And they should. If we can’t build these tools on ethically sourced, copyright-free content, then we have no foundation to build from. But if we can create models trained on ethically gathered images, then we should be having different conversations. One would be about practice. Another about process. We’d also be talking about expanding what it means to be creative. Instead, we’re stuck in echo chamber-like debates with half-truths and misunderstandings.

    AI is not a threat to purity in art because that purity never really existed. From collage to sampling to appropriation, art has always thrived on remix. This is what Walter Benjamin meant when he wrote about the “aura” of artworks nearly a century ago. Reproduction changes the way we relate to art, but it doesn’t remove its meaning. It shifts the space where meaning happens.

    So I use AI not because it replaces me but because it helps me be more of who I already am. A generative artist. A question asker. A teacher. A remix thinker. A designer trained in collaboration, systems, and complexity. AI is now a part of that system. And I welcome it, carefully and critically, into my process.

    The tech will keep evolving. But the core of creativity has not changed: curiosity, play, rigor, surprise, and reflection. AI just gives us more ways to explore it.