Tag: model-training

  • Judaism Has No Ready‑Made Answer for AI, and That’s the Point


    by Mike Wirth

    Judaism has no halakhic precedent, no formal theology, and no inherited best practices for artificial intelligence. There is no daf of Talmud that tells us what to do when our creations begin to imagine, write, and decide alongside us. That absence is not a weakness of tradition; it is a feature of its design.

    Across history, Jews have not inherited perfect systems; we have built them and evolved them. The Mishnah transformed memory into a network, medieval commentaries became the first hyperlinked texts, and the printing press democratized Torah (Scholem 207–10). Today, Sefaria, an open‑source library connecting millennia of commentary, extends that same impulse into the digital realm (“Sefaria: A Living Library”). Each technological revolution has become a new revelation of Torah’s possibilities.

    These questions are not abstract for me. As a muralist, UX designer, and Jewish futurist, I spend most days sketching ideas for speculative ritual objects, teaching with digital tools, and experimenting with AI‑assisted imagery that asks what Torah might look and feel like in a world of holograms, networks, and neural nets (“Jewish futurism”). The ideas in this essay emerge as much from the studio and classroom as from the beit midrash (Jewish houses of study).

    So the question before us is not “What does Judaism say about AI?” but “How might Judaism create with AI?” What might revelation look like when it learns to code?

    From Fear to Framework

The Jewish conversation about AI often begins with fear. Questions like “Can a machine issue psak?”, “Will it erode human authority?”, and “What remains sacred when language itself is synthetic?” appear frequently in contemporary halakhic and communal discussions (Grossman; “AI Meets Halachah”).

    Those are vital questions, but they treat Judaism as if its primary task were to regulate technology. In truth, Judaism’s genius has always been to design with it. The halakhic mind guards boundaries, while the artistic mind builds bridges. Both sustain covenant.

    In my own work, I see this tension every time I bring AI into a Jewish classroom or community workshop. Some participants arrive worried that a model might replace rabbis, artists, or teachers; others are excited and want to use it as a shortcut for everything. Holding both responses at once has become part of the practice.

    AI does not threaten Torah; it extends Torah’s medium. The question is not whether AI can write a responsum, but whether it can help us see Torah more deeply, teach more inclusively, and create more beautifully (Freeman and Mayse).

    Judaism as a Metamodern Design System

    Theorists of metamodernism describe our age as one that “oscillates between a modern enthusiasm and a postmodern irony” (Vermeulen and van den Akker). Judaism has been oscillating like this for three thousand years. It holds paradox as pedagogy. Every midrash begins with faith that truth exists and ends with humility that no single voice can hold it.

    Modernism believed in rational progress, while postmodernism dismantled it. Judaism, like the metamodern imagination, lives between those poles and moves between faith and doubt, reverence and critique, permanence and change (Scholem 5–9). The beit midrash is built on this oscillation, with generations of sages arguing in the margins and preserving even rejected views as part of Torah’s living archive (Kol HaMevaser; Sacks).

    Design thinking names this same dynamic: empathy, iteration, and purpose (Brown). Revelation, too, is iterative. Sinai was not just a single event but a recurring dialogue in which each generation prototypes new vessels for holiness such as scroll, page, press, and screen (Kaplan; “A Jewish Theological Perspective on Technology”). To be Jewish in the age of AI is to practice metamodern design and to make meaning through contradiction with sincerity and skepticism in equal measure.

    Jewish tradition has long trained us to live with this kind of paradox. In the Talmud, opposing positions can both be affirmed as elu v’elu divrei Elohim chayim, “these and those are the words of the living God,” even when only one becomes binding law (Kol HaMevaser). A machloket l’shem shamayim, an argument for the sake of heaven, is praised precisely because it keeps contradictory truths in productive tension (Sacks). Designing Jewishly with AI means treating its many outputs less as threats to certainty and more as invitations into this older discipline of holding multiple, sincere possibilities at once.

    When I teach with AI tools, the classroom becomes a small beit midrash (house of study) that includes the system as a noisy study partner. The goal is not to crown the model as an authority, but to use its strange suggestions to sharpen our questions and clarify what feels authentically Jewish (Freeman and Mayse).

    The Missing Dimension in the Jewish AI Debate

    Most Jewish writing on AI focuses on halakhah or philosophy, on rules, limits, and fears of replacement (Grossman; “Artificial Intelligence and Us”). What is often missing is the creative and embodied dimension of Jewish life: the building, singing, making, and designing through which Torah becomes lived experience. A growing cohort of Jewish artists and educators is already experimenting with AI in grounded and thoughtful ways, and their practice should shape the wider conversation (Jewish Creative Sensibilities).

    What is missing is a language for Jewish Design Thinking, a covenantal process that insists we think, act, and then think again before acting again (Prizmah; Adat Ari El). Jewish Design Thinking uses the raw materials of Torah, halakhah, story, and ritual to prototype futures in which technology serves covenant rather than the other way around. In my own projects, that rhythm looks like sketching speculative altars and merkavot in Procreate, feeding fragments of those images into fine‑tuned Stable Diffusion models trained on my work, and then painting or compositing the outputs back into finished pieces that can live in community spaces (“Jewish futurism”).

    Jewish life has always realized its deepest ideas through concrete forms, from the engineered choreography of Shabbat to the legal and spatial design of the eruv (Prizmah; Adat Ari El). My practice simply extends that logic into neon, pixels, and code.

    Judaism is not only a religion of interpretation; it is a culture of creation. The Mishkan was not explained. It was constructed. Bezalel, “filled with the spirit of God,” designed holiness in metal, fabric, and light (Exod. 31.1–5). Art is not ornament to Torah; it is one of Torah’s oldest dialects.

    To respond to AI in a Jewish way, we cannot only interpret it. We have to create with it. This is how Judaism answers itself, through making.

    The Library, the Aura, and the Algorithm

    To locate AI inside this longer story, it helps to notice how modern thinkers have imagined libraries, images, and code. Their work forms a kind of shadow commentary on Torah in the age of algorithms.

    In The Library of Babel, Jorge Luis Borges imagined an infinite library of all possible books, an uncanny prophecy of both divine omniscience and algorithmic excess (Borges). His librarians wander an endless text in search of coherence, much like today’s AI systems that spin out countless variations of meaning from their training data.

    Walter Benjamin, in The Work of Art in the Age of Mechanical Reproduction, warned that technology could dissolve the “aura” of the artwork, yet he also saw its democratizing power and observed that “the technique of reproduction detaches the object from tradition” (Benjamin 221). Judaism, too, detaches and reattaches tradition each time it is rewritten. Every new edition of the Talmud and every digital platform like Sefaria relocates ancient words into new communities of readers (“Sefaria: A Living Library”).

    Lev Manovich later described digital media as infinitely variable and “not fixed once and for all” (Manovich 36), while Ray Kurzweil imagined humanity and technology eventually merging in The Age of Spiritual Machines, a secular echo of Kabbalistic visions of unity (Kurzweil 3–6; Scholem 254–60). Torah, like code, thrives through iteration, versioning, and unexpected recombination.

    AI, in this view, is not heresy but a kind of midrashic engine. It recombines the infinite library and tests new relationships between language and light. Classical halakhah is clear that only a human sage, embedded in community and covenant, can issue binding psak; no machine can acquire the da’at and relational responsibility that Jewish law demands (“AI Meets Halachah”; “Not in Heaven”). Yet nonbinding interpretation, or midrash, has always welcomed imaginative recombination, playful juxtaposition, and speculative voices that never become law. In that sense, AI resembles a hyperactive study partner. It cannot decide halakhah, but it can surface unlikely parallels, draft parables, and map conceptual constellations that human learners then sift, critique, and sanctify (Freeman and Mayse).

    I see this most clearly in a piece that grew out of Ezekiel’s visions of angels. I used my fine‑tuned model to generate non‑angelic, almost alien interpretations of the prophetic descriptions and then collaged them into a single spiritual mass, a kind of living landscape of eyes, light, and motion (“Jewish Futurism”).

Communing with the Angels, collage of human and AI‑generated elements. Mike Wirth, 2022.

    The glowing figure in the foreground is my own silhouette, walking and dancing through that terrain like a meditative avatar. The AI outputs gave me dozens of unsettling textures, but the real work was deciding which fragments felt true to the terror and beauty of Ezekiel’s language and which were just spectacle.

    Another work explores the myth of the Sambatyon river, said to rage six days a week and rest only on Shabbat. For that piece, I fine‑tuned Stable Diffusion on my existing style and then asked it for impossible rivers: streams of light, shattered planets, and planetary eyes that watched the water (“Jewish Futurism”). I layered those textures with hand‑painted elements to create a scene where a lone human figure stands at the edge of a cosmic torrent that briefly calms. The model could hallucinate a thousand strange rivers, but only a human choice could decide which one carried the emotional weight of a world that is always almost at rest and never quite there.

    Readiness Before Revelation: The Sar HaTorah Framework

    The Zohar’s parable of the Sar HaTorah, the angelic teacher summoned by a rabbi for instant wisdom, warns that revelation demands readiness (Zohar, Introduction). The rabbi gains divine knowledge but nearly dies from overload. The story is not opposed to knowledge. It is about integration.

    This tale offers a design ethic for AI. The Sar HaTorah Framework structures engagement in three stages:

    • Hachanah (Preparation): set intention, purify data, and ask why we are creating.
    • Hishtatfut (Participation): collaborate consciously with the machine, using its speed and scale while maintaining human authorship, accountability, and empathy.
    • Teshuvah (Reflection): review consequences, biases, and impacts; take responsibility for harms and repair what was overlooked.

    In the classroom, this often looks like taking a breath before anyone opens a laptop, naming aloud what we hope the tool will help us do, and agreeing on red lines for its use (Freeman and Mayse). After a project, it means debriefing not just the final image or app, but the process and its ethical ripples.

    Approached this way, AI becomes not a shortcut to wisdom but a partner in its disciplined pursuit. It enacts a metamodern humility in which we build with awe and awareness at the same time.

    Hiddur Olam: Beautifying and Repairing

    Hiddur Olam, “to beautify the world,” fuses Hiddur Mitzvah (beautifying ritual) with Tikkun Olam (repairing the world). It reframes creativity itself as spiritual service and as a design system where beauty and ethics co‑produce meaning (Wirth, “Hiddur Olam”).

    Rooted in Dewey’s experiential learning, Kolb’s learning cycle, and Mussar’s ethical traits (Dewey; Kolb; Wirth, “Hiddur Olam”), Hiddur Olam unfolds in six stages: Study, Envision, Ground, Co‑Create, Reflect, and Carry Forward. When joined with AI, it turns technology into sacred process:

    • Study: AI can surface patterns across commentary and reveal connections that human readers might miss (“Torah Study and the Digital Revolution”).
    • Envision: it can visualize text, sound, and symbolism and map Torah as a constellation of interlinked ideas (“Torah Study and the Digital Revolution”).
    • Ground: it can prompt ethical reflection by modeling dilemmas, bias, or moral consequences (“Judaism and AI Design Ethics Part 1”).
    • Co‑Create: it can amplify creative collaboration and scaffold group art or music rooted in Torah themes (Adat Ari El).
    • Reflect: it can archive process transparently and support cheshbon hanefesh, or ethical accounting.
    • Carry Forward: it can translate insights into accessible formats such as AR, VR, and multiple languages and expand the covenant of learning (Prizmah).

    Over the past few years, I have been testing Hiddur Olam through a multi‑volume art book project on the Torah portions, beginning with Bereshit (“Hiddur Olam”). I created one image for each parasha, always starting from a single word, line, or moment in the text that echoed something I recognized from creative life. A character’s hesitation might become a blurred stroke; a moment of cosmic expansion might turn into layered spheres and ripples of color. Sometimes I used AI for ideation or textures, often running newer versions of my own trained model, and then refining by hand until the image felt like an honest parallel to both the Torah story and the inner drama of making anything at all (Wirth, “Spiritual Creativity”). Sharing these works with students and communities has turned the cycle itself into a practice, where the art becomes a mirror for their own struggles with beginning, failing, revising, and starting again.

    Each use becomes holy when guided by middot: kavannah (intention), emet (transparency), tzedek (justice), hiddur (beauty), and teshuvah (reflection) (“A Jewish Theological Perspective on Technology”). Hiddur Olam transforms design into devotion and code into covenant (Wirth, “Hiddur Olam”).

    Taken together, the Sar HaTorah stages and Hiddur Olam’s six steps form a kind of Jewish Design Thinking cycle. It begins with study and intention, moves through collaborative making, and returns in reflection and repair. This is not generic human‑centered design. It is mitzvah‑centered and community‑centered design, measured by tzedek, emet, and hiddur rather than by engagement metrics alone (Prizmah; Adat Ari El).

    Creative Practice as Torah

    In the classroom and studio, creative collaboration becomes a form of Torah she’bema’aseh, Torah of action. When communities co‑paint a mural, code a generative landscape, or build an interactive ritual, they perform theology (Jewish Creative Sensibilities).

    One workshop on Shabbat and technology at Providence Country Day stays with me. I asked the Jewish students club to design speculative Shabbat devices that would honor the spirit of rest, with one constraint: each idea had to use AI as an ingredient, not a loophole. Their first concepts included a “pre‑Shabbat planner,” an AI that would work only during the week to help organize meals, divrei Torah sources, and guest logistics so that by candle‑lighting every screen could shut down and people could actually exhale into the day of rest. Another group sketched a “story seed” tool that would generate just the first paragraph of a midrashic bedtime tale from a few spoken prompts, leaving the rest of the story to be finished aloud at the table without any devices. As they presented, the students argued, like a pop‑up beit midrash, about which designs genuinely deepened Shabbat and which quietly pulled them back toward constant convenience. The room shifted when one quiet student finally said, “Maybe the most Jewish thing AI can do on Shabbat is remind us to stop using it,” and everyone recognized that their “coolest” ideas were often the ones that erased the need to slow down at all. That shared moment of realization, more than any prototype, was the Torah we made together.

    AI enhances this work when it supports, rather than replaces, human imagination:

    • It can model interpretive possibilities and expand midrashic dialogue (Freeman and Mayse).
    • It can generate interactive visualizations of text structure and help learners see commentary as relational networks (“Torah Study and the Digital Revolution”).
    • It can simulate moral scenarios and invite learners to wrestle with empathy in digital form (“A.I., Halakhic Decision Making”).

    In these settings, authority dissolves into participation. Knowledge becomes co‑created, ethical, and embodied (Jewish Creative Sensibilities). This is a powerful expression of metamodern faith that is sincere, self‑aware, and alive to paradox.

    Judaism Answering Itself

    Judaism has always been metamodern. It believes and doubts at once, reveres and revises, and guards and reinvents (Scholem 1–10). Its survival has never depended on static answers but on the courage to redesign its questions.

    AI now becomes the next instrument of that redesign. It allows us to test what covenant means in a world of mirrors. It can trace interpretive lineages across millennia, simulate voices of rabbis and philosophers, or visualize the evolution of a single idea through time (“Torah Study and the Digital Revolution”; “A Jewish Theological Perspective on Technology”).

    Jewish futurism will not succeed on imagination alone. It needs Jewish Design Thinking, a disciplined way to dream, build, and then review our creations against tikkun olam, emet, and kavannah before we release them into the world (Prizmah; Adat Ari El). My Jewish futurism projects, from neon speculative self‑portraits to AI‑integrated ritual prototypes, are small attempts to practice this in public (“Jewish futurism”; Wirth, “Spiritual Creativity”). They are betas for a future Judaism in which our tools are strange and luminous, but our commitments to repair and responsibility remain non‑negotiable.

    AI cannot choose why we study, create, or repair. That remains human work. The Sar HaTorah teaches readiness, and Hiddur Olam teaches responsibility. Together, they suggest a metamodern theology of technology that is reverent, experimental, ethical, and open‑ended (“A Jewish Theological Perspective on Technology”).



    Works Cited

    Adat Ari El. “The Intersection of Design Thinking and Jewish Education.” Adat Ari El, 29 July 2025.

    Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” Illuminations, translated by Harry Zohn, Schocken, 1969, pp. 217–51.

    Borges, Jorge Luis. “The Library of Babel.” Labyrinths, New Directions, 1964.

    Brown, Tim. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. Harper Business, 2009.

    Dewey, John. Experience and Education. Kappa Delta Pi, 1938.

    “AI Meets Halachah.” Jewish Action, 7 June 2023.

    “Artificial Intelligence and Us.” jewishideas.org.

    Freeman, Molly, and Ariel Mayse. “AI and Judaism.” New Lehrhaus, 2024.

    Grossman, Guy. “Jewish Perspectives on Artificial Intelligence and Synthetic Biology.” Hakirah, vol. 35, 2023.

    Jewish Creative Sensibilities: Framing a New Aspiration for Jewish Education. The Lippman Kanfer Foundation, 2019.

    Kaplan, Mordecai. “Religion of Human Techno‑Genesis.” Jewish Philosophy Place, 2014.

    Kol HaMevaser. “Elu Va‑Elu Divrei Elohim Hayyim and the Question of Multiple Truths.” 2015.

    Kolb, David. Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, 1984.

    Kurzweil, Ray. The Age of Spiritual Machines. Penguin, 1999.

    Manovich, Lev. The Language of New Media. MIT Press, 2001.

    “Not in Heaven: The Major Challenge to Artificial Halakhic Decisions.” Times of Israel Blogs, 2025.

    Prizmah. “Design Thinking for Jewish Day Schools.” Prizmah Center for Jewish Day Schools, 2019.

    Sacks, Jonathan. “Argument for the Sake of Heaven.” Covenant & Conversation, The Rabbi Sacks Legacy, 19 June 2022.

    Scholem, Gershom. Major Trends in Jewish Mysticism. Schocken, 1941.

    “Sefaria: A Living Library of Jewish Texts.” Sefaria.org.

    “Torah Study and the Digital Revolution: A Glimpse of the Future.” The Lehrhaus, 28 Jan. 2020.

    Vermeulen, Timotheus, and Robin van den Akker. “Notes on Metamodernism.” Journal of Aesthetics & Culture, vol. 2, no. 1, 2010.

    Wirth, Mike. “Hiddur Olam: Creativity, Community, and the Future of Religious Education.” 2024.

  • Stop Using AI As A Hammer, When It’s A Screwdriver: My AI Odyssey Through The Classroom


This article traces my journey as a teacher out of the AI shadows and into classroom transformation. It is a companion to a recorded lecture I gave on how I use AI in the classroom; I recommend watching the video in addition to reading this post, as it offers a deeper dive and helps contextualize the experiments and perspectives summarized here.


We’ve successfully scared the hell out of ourselves about AI. That’s the truth. Despite a few helpful counterexamples like WALL‑E and Rosie the Robot, the cultural imagination has been fed a steady diet of dystopian dread, from HAL 9000 locking an astronaut out in space to the death machines of The Terminator. And now, with the hype and hysteria churned out by the media and social media, we’ve triggered a collective fight, flight, or freeze response. So it’s no surprise that when AI entered the classroom, a lot of educators felt like they were witnessing the start of an apocalypse, as if each of us were our own John Connor watching the dreaded Skynet come online for the first time.

    But I’m here to tell you that’s not what’s happening. At least not in my classroom.

    In fact, this post is about how I crawled out of the AI shadows and learned to see it not as a threat but as a tool. Not a hammer, but a screwdriver. Not something that does my job for me, but something that helps me do my job better. Especially the parts that grind me down or eat away at my time.

If you’re skeptical, hesitant, angry, or just plain confused about what AI is doing to education, pull up a chair. I’ve been there. But I’ve also experimented, adjusted, and seen both the light and the darkness. I can’t resolve every implication of AI use, but I want to share what I’ve learned so you don’t have to build the spaceship from scratch.

    We Owe It to Our Students to Model Bravery

    Students are already using AI. They’re exploring it in secret, often at night, often with shame. They’re wondering if they’re doing something wrong. And if we meet them with fear, avoidance, or silence, we’re sending the message that they’re on their own. In a 2023 talk at ASU+GSV, Ethan Mollick noted that nearly all of his students had already used ChatGPT, often without disclosure. He emphasized that faculty need to assume AI is already in the room and should focus on teaching students how to use it wisely, ethically, and with reflection. That means our job isn’t to police usage—it’s to guide it.

I don’t want my students wandering through this new terrain without a map. So I model what I want them to do: ask questions, explore ethically, think critically, and most of all, reflect. I also model the discipline of not using AI output as a final product, but only as inspiration. If I use AI to brainstorm or generate language, I always make sure to rewrite it into something that reflects my own thinking and voice. That’s how we teach students to be creators, not copy machines. Part of that modeling is mapping out where you have been and where you are going in your journey.

    And when I don’t know the answer? I tell them. Then we look it up together. I use this ChatGPT cheatsheet often. Check it out.

    That’s what it means to teach AI literacy. It’s not about having all the answers. It’s about being brave enough to stay in the conversation. I was also wandering aimlessly with AI—unsure how to use it, uncertain about what was ethical—until I took this course from Wharton on Leveraging ChatGPT for Teaching. That course changed my mindset, my emotional state, and my entire classroom practice. It gave me a framework for using AI ethically, strategically, and with care for student development. If you’re looking for a place to start, that’s a great one.

    AI Isn’t a Hammer. It’s a Screwdriver.

    Here’s a metaphor I use a lot: AI is not a hammer. It’s a screwdriver.

Too many people try to use AI for the wrong task. They ask it to be a mind reader or a miracle worker. When it fails, they say it’s dumb. But that’s like trying to hammer in a screw and then blaming the hammer.

    When you learn what AI actually does well, like pattern recognition, remixing ideas, filtering, and translating formats, you start to use AI for its actual strengths. As Bender et al. (2021) explain in their paper On the Dangers of Stochastic Parrots, large language models are fundamentally pattern-matching systems. They can generate fluent, creative-sounding language, but they do not possess understanding, emotional awareness, or genuine creativity. They remix what already exists. That is why we must use these tools to support our thinking, not replace it. It becomes a tool in your toolkit. Not a black box. Not a crutch. A screwdriver.

As Joanna Maciejewska put it: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” AI won’t literally do your dishes. But it can give you time back so you can do the things that matter more.

    How I Actually Use AI in Class With Students

    I teach graphic design, motion, UX, and interactive design. AI is already a mainstay in each of these disciplines—from tools that enhance layout and animation to systems that evaluate accessibility and automate UX testing. But even though AI had become part of the professional design landscape, I was still skeptical. I wasn’t sure how to bring it into my classroom in a meaningful way. So I started small.

Using AI for minor efficiencies—generating rubrics, reformatting documents, cleaning up language—felt good. It felt safe. And it gave me just enough momentum to try it on bigger, more impactful tasks. What made the difference was a mindset shift. I stopped seeing myself as a single musician trying to play every part of a complex score and started seeing myself as the conductor of the orchestra. I didn’t need to play every part; I just needed to know how the parts worked together. That gave me the confidence to use AI—and to teach with it.

    Here’s how I integrate AI into our learning:

    • Students design chatbots that simulate clients, so they can roleplay conversations. I used to pretend to be clients and interact with students through Canvas discussion boards. Now I can read their chat logs and have conversations with them about their questions and intentions.
    • In Motion Graphics, students use “vibe coding”—a form of sketching in code with the help of GPT to simulate motion, like moons orbiting planets.
    • In Interactive Design, they use Copilot to debug code in HTML, CSS, and JavaScript.
    • They learn to generate placeholder images for mockups, not final artwork.
    • We create custom Copilot agents, like “RUX”—a UX-focused bot trained to give scaffolded feedback based on accessibility standards.
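To make “vibe coding” concrete: here is a minimal, hypothetical Python version of the moons-orbiting-planets exercise. Students usually sketch in p5.js or a similar environment, and the function name here is mine, but the underlying frame-by-frame math is the same.

```python
import math

def moon_position(frame, cx=0.0, cy=0.0, radius=100.0, period=120):
    """Return the (x, y) position of a moon orbiting a planet at (cx, cy).

    `frame` is the animation frame number; the moon completes one full
    orbit every `period` frames.
    """
    angle = 2 * math.pi * (frame % period) / period
    return (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

# At frame 0 the moon sits at (radius, 0); a quarter period later it has
# swept 90 degrees, landing at roughly (0, radius).
print(moon_position(0))
print(moon_position(30))
```

In class, GPT helps students get to a sketch like this quickly; the learning happens when they tweak the radius, period, and center to choreograph motion they actually want.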

    I’m not handing them shortcuts. I’m handing them power tools and asking them to build something that’s still theirs.

    The Creative Process Needs Scaffolding—AI Can Help

    I believe in the creative process. I’ve studied models like the Double Diamond and the 4C Model. I’ve seen how students get stuck during the early stages, especially when self-doubt creeps in.

    That’s where AI shines.

    AI helps my students generate more ideas in the divergent phase. This echoes research by Mollick and Terwiesch (2024) showing that structured AI prompting increases idea variance and originality during the creative process. It helps them compare, sort, and edit during the convergent phase. And when I ask them to submit their chat logs as part of their final deliverable, I can see their thinking. It’s like watching a time-lapse of the creative process.

    We’re not assessing just artifacts anymore. We’re assessing growth. And that includes how students use AI as part of their process. I make it clear that AI-generated outputs are not to be submitted as final work. Instead, we treat those outputs as inspiration or scaffolding—starting points that must be reshaped, edited, or reimagined by the human at the center of the learning. That’s a critical behavior we need to model as teachers. If we want students to be creative thinkers, not copy-paste artists, then we have to show them how that transformation happens.

    Accessibility and AI Should Be Friends

    I also use AI to make my course materials more accessible. I format assignments to follow TILT and UDL principles. For example, I asked GPT to act as a TILT and UDL expert and reformat a complex assignment brief. It returned a clean layout with clear learning objectives, task instructions, and evaluation criteria. I pasted this directly into a Canvas Page to ensure full screen reader compatibility and ease of access.

    For rubrics, I asked GPT to generate a Canvas rubric using a CSV file template. I specified category names, point scales, and descriptors, and GPT returned a rubric that I could tweak and upload into Canvas. No more building from scratch in the Canvas UI.
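For context, the CSV I ask GPT to produce follows a simple layout like the sketch below. Canvas does not define one official rubric CSV format, so treat the column names and criteria here as a hypothetical template you would adapt to whatever import tool you use.

```python
import csv

# Hypothetical rubric layout: one row per criterion. The criteria and
# point values are illustrative, not an official Canvas import format.
criteria = [
    ("Concept", "Original, well-researched idea", 20),
    ("Craft", "Clean execution of the design", 20),
    ("Process", "Documented iterations and AI chat logs", 10),
]

with open("rubric.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Criterion", "Description", "Points"])
    for name, description, points in criteria:
        writer.writerow([name, description, points])
```

Once GPT returns something in this shape, tweaking a row takes seconds compared to rebuilding a rubric in the Canvas UI.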

    To generate quizzes, I use OCR with my phone’s Notes app to scan printed textbook pages. I paste that text into GPT and ask it to write multiple-choice questions with answer keys. GPT can even generate QTI files, which I import directly into Canvas. This process saves me hours of manual quiz-writing and makes use of printed texts that don’t have digital versions.
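The middle step, turning GPT’s plain-text questions into structured data, can be sketched like this. The Q/choices/Answer layout is a convention you would specify in your prompt, not a guaranteed GPT output format, and a real QTI exporter would sit downstream of a parser like this one.

```python
import re

def parse_mc(text):
    """Parse plain-text multiple-choice questions of the assumed form:

        Q: question text
        A) choice
        B) choice
        Answer: B

    Returns a list of dicts ready for a downstream QTI converter.
    """
    questions = []
    for block in text.strip().split("\n\n"):
        lines = block.strip().split("\n")
        question = lines[0].removeprefix("Q: ")
        # Strip the "A) " / "B) " prefixes from each choice line.
        choices = [re.sub(r"^[A-D]\)\s*", "", ln) for ln in lines[1:-1]]
        answer = lines[-1].removeprefix("Answer: ").strip()
        questions.append(
            {"question": question, "choices": choices, "answer": answer})
    return questions

sample = """Q: Which principle does TILT emphasize?
A) Transparency in assignments
B) Timed exams
Answer: A"""
print(parse_mc(sample))
```

The payoff is that the scanned textbook text only has to pass through GPT once; everything after that is deterministic and checkable.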

    AI helps me build ramps, not walls.

Faculty are also legally required to build those ramps. Under Section 504 of the Rehabilitation Act and the Americans with Disabilities Act (ADA), course content in learning management systems like Canvas must meet accessibility standards. But let’s be honest: retrofitting dozens or even hundreds of old documents, PDFs, and slide decks into fully accessible formats is a monumental task. It often gets pushed to the bottom of the to-do list, which leaves institutions vulnerable to non-compliance. Check out the WCAG standards for more details.

    AI can help. It can reformat documents for screen reader compatibility, generate alt text, simplify layout structure, and audit for contrast and clarity. And it can do it in a fraction of the time it would take any one of us. By using AI thoughtfully here, we not only make our content better, we also help our institutions become more equitable and compliant faster.
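One of those audit steps, contrast checking, is concrete enough to compute yourself. This sketch implements the WCAG 2.x contrast-ratio formula in Python; the function names are mine, but the luminance math comes straight from the WCAG definition of relative luminance.

```python
def _linearize(channel):
    """Convert an 8-bit sRGB channel to its linear-light value (WCAG 2.x)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio between two sRGB colors, from 1:1 up to 21:1."""
    def luminance(rgb):
        r, g, b = (_linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    lighter, darker = sorted(
        (luminance(rgb1), luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

WCAG AA asks for at least 4.5:1 for normal body text, so a helper like this makes “audit for contrast” a yes/no check rather than a squint test.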

    When I use local LLMs, run through tools like LM Studio, to analyze student writing, student data stays safe, FERPA compliant, and private. This aligns with concerns raised by Liang et al. (2023) about how commercial LLMs may compromise the privacy of non-native English speakers and their content. It is ethical. It is efficient. And it respects the trust students place in me.
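    The mechanics are simple: LM Studio exposes a local, OpenAI-compatible chat endpoint, so student text never leaves the machine. The sketch below only builds the request; the default port (1234) and the model name are assumptions you should check against your own LM Studio settings.

```python
import json
import urllib.request

# LM Studio's local server speaks an OpenAI-compatible API.
# The port and model name below are assumptions -- verify them
# in your own LM Studio "Local Server" settings.
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def build_feedback_request(student_text, model="local-model"):
    """Assemble a chat request; the student's work stays on this machine."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a writing coach. Give formative, specific feedback."},
            {"role": "user", "content": student_text},
        ],
        "temperature": 0.3,
    }
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_feedback_request("Draft: My UX case study begins...")
print(req.full_url, len(payload["messages"]))
```

    With the local server running, `urllib.request.urlopen(req)` returns the completion; no cloud provider ever sees the draft.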

    Let Students Build Their Own Tools

    One of the best things I’ve done is empower students to create their own AI agents.

    Yes, students can build their own Copilot agents. And when they do, they stop seeing AI as some alien threat. They start seeing it as a co-creator. A partner. A lab assistant. ChatGPT has a similar personalization feature called Custom GPTs, but it’s locked behind a paywall. That creates real inequity for students who can’t afford a subscription. Copilot, on the other hand, is free to students and provides the necessary capabilities to build custom agents or chatbots. Here’s a guide to get started building your own agents with Copilot.

    As a way to model this behavior for students, I created a Copilot agent myself called RUX, short for “Rex UX,” honoring Rex, our beloved university mascot. I built it using Microsoft’s Copilot Studio, which lets you define an agent’s knowledge base, tone, and purpose. For RUX, I gave it specific documentation to pull from, including core sources like WCAG, UDL, and UX heuristics, and trained it to act as a guide and feedback coach for my UX students. It doesn’t give away answers. It asks questions, gives feedback, and helps students reflect.

    Setting up an agent starts with defining your intent. I decided I wanted RUX to act like a mentor who knew the standards for accessibility and good UX practices, but also had the patience and tone of a coach. I uploaded key resources as reference material, wrote prompt examples, and added instructions to prevent the agent from simply giving away answers. This ensures students use it to reflect and improve rather than shortcut their learning.

    The great part is that it took me about 30 minutes. And now my students use it to get feedback in between critiques, to check their work against accessibility standards, and to build their confidence.

    And the students slowly start to ask better questions.

    Final Thoughts: Be the Conductor, Not the Consumer

    I tell my students this all the time: don’t just be a user. Be the conductor. That’s the heart of this whole article. I started this journey skeptical and unsure about how to use AI in my teaching, but I kept experimenting. And the more I leaned in, the more I realized I could use these tools to orchestrate the learning experience. I didn’t need to master every note, just guide the ensemble. Once I felt that shift, I was able to build my own practice and share it with students in ways that felt grounded and empowering.

    Here are two simple but powerful GPT exercises, drawn from the UPenn AI in the Classroom course, that I recommend to get started:

    1. Role Playing (Assigning the AI a Persona)

    This method helps shape AI responses by giving it a clear role.

    Steps:

    • Tell the AI, “You are an expert in [topic].”
    • Provide a specific task, like “explain X to a 19-year-old art student” or “give feedback on a beginner-level UX portfolio.”
    • Refine the prompt with context about the student’s needs or your learning objectives.

    Outcome: The AI behaves like a thoughtful tutor instead of a know-it-all. Students can use it as a low-stakes, judgment-free practice partner.
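    The steps above can be sketched as a small prompt builder. The role names follow the common chat-API convention (system plus user messages); the topic, task, and context strings are illustrative.

```python
def persona_prompt(topic, task, context=""):
    """Build role-playing messages: expert persona, then a specific task."""
    user_content = task + (f"\nContext: {context}" if context else "")
    return [
        {"role": "system", "content": f"You are an expert in {topic}."},
        {"role": "user", "content": user_content},
    ]

msgs = persona_prompt(
    topic="UX design",
    task="Give feedback on a beginner-level UX portfolio.",
    context="The student is a 19-year-old art major preparing a first internship application.",
)
print(msgs[0]["content"])  # You are an expert in UX design.
```

    Refining the prompt just means editing the `context` string and resending; the persona stays fixed.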

    2. Chain of Thought Prompting

    This is useful for step-by-step thinking and collaborative problem solving.

    Steps:

    • Ask the AI to help you develop a lesson plan, solve a design challenge, or draft a workflow.
    • Break the task into steps: “What’s the first thing I should consider?” Then “What comes next?”
    • Let the AI ask you questions in return. Keep the conversation going.

    Outcome: You model metacognition, and students learn how to refine ideas through iterative feedback. It supports both ideation and strategic planning.
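    One way to picture this exercise is as a running transcript: each step gets appended, so the model (and the student) always sees the full reasoning history. This is a minimal sketch using generic chat-message dicts; no specific vendor API is assumed, and the example turns are invented.

```python
# Seed the dialogue with an instruction that invites back-and-forth.
conversation = [
    {"role": "system",
     "content": "Help me plan step by step. After each answer, ask me one clarifying question."},
]

def add_step(conversation, user_turn, assistant_turn):
    """Record one exchange in the step-by-step dialogue."""
    conversation.append({"role": "user", "content": user_turn})
    conversation.append({"role": "assistant", "content": assistant_turn})
    return conversation

add_step(conversation,
         "I want a lesson plan on design critique. What's the first thing I should consider?",
         "Start with the learning objective. What should students be able to do afterward?")
add_step(conversation,
         "They should give specific, actionable feedback. What comes next?",
         "Choose a critique format. Will they work in pairs or small groups?")

print(len(conversation))  # 5 messages: 1 system + 2 full exchanges
```

    Because every turn stays in the transcript, students can scroll back and see how each answer was built on the previous step, which is the metacognitive payoff.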

    Try these as warm-ups, homework tools, or reflection exercises. They’re simple, ethical, and illuminating ways to integrate AI in any classroom.

    That’s what I want for my colleagues, too. You don’t have to know everything about AI. You just have to be curious. You have to be willing to ask: “What can this help me or my students do better?”

    So here’s your first experiment:

    1. Have students brainstorm ideas for a project.
    2. Have them ask GPT the same question.
    3. Compare the lists.
    4. Reflect. What worked? What didn’t? How will you approach brainstorming next time?
    5. Repeat.

    Then decide what to keep, what to toss, and what to remix. Just like we always have. Let’s stop building walls. Let’s start building labs. And let’s do it together.

  • From Prompt to Practice: How Artists Can Rethink and Reclaim AI Tools

    From Prompt to Practice: How Artists Can Rethink and Reclaim AI Tools

    Yes, there’s a problem, but it’s not just about AI.

    I’ve heard passionate arguments against AI usage from fellow artists. I’ve also read in detail about the lawsuits filed by creators against companies that used their work without permission. I agree that this is wrong and that it has undermined trust in these tools. These concerns are real, and they’ve shaped how I approach the technology.

    A central question remains: if we could make the source imagery for AI training completely copyright-free and ethical, would that actually end the argument over the use of these tools in art making? Or is the real issue an underlying belief in purity in the creative process? As a graphic designer, I know that purity in creation was disrupted long before AI ever existed.

    From Generative Art to Generative AI

    I was exhilarated the first time I started using AI in my art. I’ve been working as a generative artist since the late 90s, so I’ve seen a lot of shifts in how tech intersects with creativity. Back in the early 2000s, I was building generative art installations that used text, images, and sound. I was dreaming in code, digital sensors, and databases, as these were the core elements for creating art-making robots. I was even invited to exhibit my projects in US and international media arts biennales in Split, Croatia, and Wroclaw, Poland. But this new wave of AI tools? It felt like a leap. A serious one.

    When the most recent wave of AI tools burst onto the scene in 2022, I found myself re-reading Walter Benjamin’s essay The Work of Art in the Age of Mechanical Reproduction and picking up Lev Manovich’s article Who is an Author in the Age of AI? for the first time. The first was written nearly a century ago, and the second in the present moment, but both gave me clarity and challenged me to think beyond the surface of the developing debate.

    That said, I didn’t jump in without questions. I was skeptical about where the image data was coming from. Most of the image models were trained on LAION-400M, a huge dataset scraped from the internet. It was meant for research, not commercial use. As an artist, I care deeply about copyright and creative ownership. That part bothered me.

    But the power of the tool was undeniable. AI helps me iterate quickly. It pushes my image-making forward and challenges me to try things I wouldn’t have done on my own. New poses, wild color combinations, unusual compositions. Sometimes, I don’t even recognize what I’m capable of until I see what AI reflects back at me.

    Through my experiments and explorations, it became clear that AI isn’t about replacing my work at all. It’s about expanding it.

    AI as a Catalyst and Sharpener for Creativity

    One of the things I love most about AI is how it helps me start. Sometimes, it’s like having an oracle. I might not know exactly where a project is headed, but I can toss in a few ideas and see what comes back. That response helps me clarify what I want. Or what I don’t want. And that’s part of the creative process, too. The ability to think divergently and then convergently is the true poetry of the creative process, and having a “helper” along the way allows for greater tracking and reflection.

    I use Stable Diffusion most often because it’s open-source and highly controllable. I like that I can run it locally on my machine without paying for credits or cloud storage. Not having a paywall gives me the freedom to really dig deep. I can generate a hundred versions of an idea, explore unexpected paths, and move fast without overthinking costs.
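    When I want a hundred versions, I plan the batch before I touch the model. The sketch below shows the planning half only, not the Stable Diffusion call itself: fixed seeds make every variation reproducible, so I can rerun just the keepers later. The prompt fragments are invented for illustration.

```python
import itertools
import random

# Illustrative prompt fragments -- not from a real project.
subjects = ["desert dreamscape", "floating archway", "wheat-field horizon"]
styles = ["muted pastel palette", "high-contrast ink wash"]

def plan_batch(subjects, styles, per_combo=3, seed=42):
    """Enumerate prompt/seed jobs; a fixed seed makes the list reproducible."""
    rng = random.Random(seed)
    jobs = []
    for subject, style in itertools.product(subjects, styles):
        for _ in range(per_combo):
            jobs.append({"prompt": f"{subject}, {style}",
                         "seed": rng.randrange(2**32)})
    return jobs

jobs = plan_batch(subjects, styles)
print(len(jobs))  # 3 subjects x 2 styles x 3 seeds = 18 jobs
```

    Each job dict then feeds whatever local generation setup you run; because the seeds are recorded, any image worth a second look can be regenerated exactly.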

    There was one recent project that brought this all into focus. A client in Houston wanted a mural with about 25 different visual elements. Honestly, it was overwhelming. I started by asking AI to look for patterns in the list text, riffing with the chatbot to explore visual ways to combine the many items into one space. Not all of the ideas the chatbot suggested were usable, but it did mention the word surrealism, which made me think of Salvador Dali’s haunting landscapes. That was it! A landscape like Dali’s is a place where all the elements could exist together logically. A dreamscape, so to speak. That unlocked the whole thing. Without AI, I might’ve taken much longer to get there.

    Most AI images aren’t good. I’d say maybe 10 percent are worth a second look. I know I’ve hit something useful when it meets my “GE” standard: Good Enough to move forward. I’m looking for strong composition and clear visual hierarchy. Everything else can be worked on later. But if the image holds space in a striking way, I’ll keep going.

    AI for Process, Not Just Product: An Ethical Approach

    People often think of AI as a tool for generating a final image. That’s not how I use it. For me, AI is most powerful when it supports the process. It helps me evaluate, brainstorm, and reflect. Sometimes, I upload a rough idea or a brain dump and use a chatbot to ask questions or poke holes in it. That outside perspective—fast, responsive, and nonjudgmental—is gold.

    If I’m stuck, I might ask the AI to generate some moodboards or rough compositions. I’m not expecting polished work. I’m looking for sparks. A direction to follow or a problem to solve. It’s the same way I’d sketch a dozen thumbnails on paper.

    And no, I don’t fall in love with the first cool thing AI spits out. That novelty wore off fast. I’ve trained myself to be curatorial. Most results don’t hit the mark. But knowing I can always make more helps me stay loose. I push ideas until they’re solid.

    One piece that stands out was an illustration I made for a Torah portion about Pharaoh’s dream. I had drawn a grim-looking cow skull and was thinking of placing it in a field of wheat. Then, the AI surprised me. It created a cow skull made out of wheat. That twist was mysterious and totally unexpected—perfect for the surreal nature of a dream. I never would have gone there on my own. But once I saw it, I knew exactly where to take it.

    AI Won’t Replace Me. It Will Refine Me.

    As an educator, I’ve brought AI into the classroom not just as a tool but as a way to help students understand how creativity works. I show them how to use it to ideate, test ideas, and refine their thinking. We do in-class exercises where students generate images or prompts and then share their chatlogs with me. It gives me a real window into how they’re thinking—and how they’re growing.

    Some students are skeptical at first. Others dive in headfirst and sometimes know more than I do about certain tools. The ones who get it quickly start using AI not to shortcut the work but to deepen it. They realize that it’s not about letting AI do the thinking. It’s about using AI to push your thinking further.

    I’ve even used AI as part of a critique. One time, I had students feed their near-final projects into ChatGPT and ask for feedback. With the right prompts, the feedback was surprisingly thoughtful. Not perfect. But useful. It opened a door for them to reflect and iterate in ways they hadn’t before, and to be critical of comments that didn’t, in fact, help improve their work.

    What AI reveals, I think, is that creativity isn’t just about generating something new. It’s about discovering connections, asking better questions, and recognizing what’s missing. AI isn’t great at originality on its own, but it’s fantastic at remixing and showing what’s possible. It’s like a mirror that reflects potential back at you. I’ve worked closely with a few creative development systems like the Double Diamond and SCAMPER and I can say with confidence: AI can support both divergent and convergent thinking, especially when used intentionally.

    Originality: The Collage Conversation

    This comes up a lot: “Isn’t AI just stealing?” And my response usually starts with this: what about collage?

    We’ve accepted collage as a legitimate art form for over a century. Artists like Hannah Höch, Romare Bearden, and Robert Rauschenberg all used found images, many of them copyrighted. They cut, glued, layered, and remixed to create something new. If we call that art, why are we drawing the line at AI?

    To me, AI-generated images are collage-like. The human prompts them with intention. The AI recombines things based on patterns it has learned. The process is digital, but the creative act is still there. Cutting and pasting by hand doesn’t make something inherently more authentic. It’s the idea behind the work that matters.

    Now, I don’t ignore the legal and ethical side of this. Most major AI image models are trained on datasets built from scraped web images, and that’s a problem. I’ve been exploring more ethically sourced options. For example, Adobe’s Firefly and Shutterstock’s model are trained on licensed stock images. Even better, I recently started working with a model called PixelDust. It’s a rebuild of Stable Diffusion, but trained only on public domain and Creative Commons Zero (CC0) images—think Wikipedia, museum archives, and open repositories. While it’s the closest thing to a public-domain model out there, even it can’t be guaranteed 100% copyright-free.

    I fine-tuned that model using 380 of my own original works. That means when I prompt it now, it generates images in my style using my visual language. It’s still collaborative, but it feels more personal. And the results have seriously improved my ideation speed and image quality.
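    Fine-tuning on your own work starts with a mundane step: pairing each image with a caption. A minimal sketch of that pairing, written to the `metadata.jsonl` layout used by the Hugging Face "imagefolder" convention (`file_name` plus a caption field); the filenames and captions here are invented, and your trainer's expected format may differ.

```python
import json
import pathlib
import tempfile

# Invented example pairs -- in practice, one entry per original artwork.
pairs = [
    ("mural_sketch_01.png", "flat vector mural, layered paper-cut style, warm palette"),
    ("mural_sketch_02.png", "surreal dreamscape, bold outlines, desaturated sky"),
]

out_dir = pathlib.Path(tempfile.mkdtemp())
meta_path = out_dir / "metadata.jsonl"
with meta_path.open("w", encoding="utf-8") as f:
    for file_name, caption in pairs:
        # One JSON object per line: the image file plus its caption.
        f.write(json.dumps({"file_name": file_name, "text": caption}) + "\n")

lines = meta_path.read_text(encoding="utf-8").strip().splitlines()
print(len(lines))  # 2
```

    The quality of these captions matters as much as the images; they are how the model learns to associate your visual language with words you actually use.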

    There’s a difference between copying and remixing. Collage artists have done it forever. Musicians sample. Writers quote. AI might be new, but it fits within a long tradition of borrowing, blending, and transforming. What complicates the conversation is “style”. People think style is protected by copyright, but it’s not. Only specific works are. So, while artists may be known for their style, that alone doesn’t make it off-limits.

    Yes, people have questioned the validity of my AI-assisted work. When that happens, I explain. I describe how I use AI for ideation, how I fine-tune models on my own work, and how that affects the output. Once people understand that I’m building on my own images and ideas, they usually come around.

    Co-Creating with the Machine: How AI Refines My Process

    Over time, I’ve started experimenting with creating a kind of AI version of myself. Not in a sci-fi clone kind of way, but as a tool trained to think and see more like me. I fine-tuned a model using 360 of my own artworks, each paired with carefully written prompts. That way, when I generate new images, they come out in my visual language, not someone else’s.

    I also use a tool called ControlNet. It lets me upload sketches or basic compositions, and then the AI fills in the style and detail. This setup allows me to keep control over layout and flow while still tapping into the speed and surprise of the AI. It doesn’t always work the first time, and it can be a long back-and-forth, but the results are worth it.

    Eventually, I’d love to have a copyright-safe, fully custom model that supports my entire process. The goal isn’t automation for the sake of ease. I want to hand off the repetitive, procedural stuff so I can stay focused on creativity, strategy, and ideas.

    And no, I don’t want my AI self to be autonomous. That would defeat the point. I’m the creative leader here. The AI is my partner. It helps me explore, test, and refine, but I make the final call.

    I’ve also made peace with the idea of my style being encoded. I’ve been an illustrator long enough to know that you don’t really “own” a style. My style is a blend of influences I’ve absorbed over the years, and it’s always evolving. As a professional, I’ve had to learn multiple styles just to stay competitive. So, no, I don’t see style as sacred. It’s the ideas and the content that matter most to me.

    Rethinking and Remixing Creativity with AI

    My relationship with AI has changed a lot since I started. At first, I believed the hype. I thought it would be a job killer that could replace me. But as I worked with it more, I realized its limitations. It isn’t a one-click creative solution. It’s a tool that depends on my input, my ideas, and my vision. It helps me move faster and reflect more deeply, but it doesn’t do the thinking or the feeling for me.

    I’ve come to believe that AI isn’t replacing creativity. It’s revealing it. It shows us how we think, where we hesitate, and what we ignore. It challenges the old myths that artists work in isolation, drawing purely from inspiration or talent. That myth never held true for working designers and educators like me. And it definitely doesn’t reflect how creativity works in the real world.

    Still, I respect the artists who are hesitant or resistant. I’ve listened to powerful critiques and concerns. The lawsuits over unauthorized dataset usage raise important ethical and legal questions. And they should. If we can’t build these tools on ethically sourced, copyright-free content, then we have no foundation to build from. But if we can create models trained on ethically gathered images, then we should be having different conversations. One would be about practice. Another about process. We’d also be talking about expanding what it means to be creative. Instead, we’re stuck in echo chamber-like debates with half-truths and misunderstandings.

    AI is not a threat to purity in art because that purity never really existed. From collage to sampling to appropriation, art has always thrived on remix. This is what Benjamin meant when he spoke of the “aura” of artworks nearly a century ago. Reproduction changes the way we relate to art, but it doesn’t remove its meaning. It shifts the space where meaning happens.

    So I use AI not because it replaces me but because it helps me be more of who I already am. A generative artist. A question asker. A teacher. A remix thinker. A designer trained in collaboration, systems, and complexity. AI is now a part of that system. And I welcome it, carefully and critically, into my process.

    The tech will keep evolving. But the core of creativity has not changed: curiosity, play, rigor, surprise, and reflection. AI just gives us more ways to explore it.

  • AJS Perspectives Journal: The AI Issue

    AJS Perspectives Journal: The AI Issue

    I had the pleasure of contributing both an interview and original artwork to the cover and interior of the AI Issue of AJS Perspectives, published by the Association for Jewish Studies. The issue explores how artificial intelligence is beginning to reshape Jewish scholarship, pedagogy, and creative practice, and it was meaningful to participate in that conversation from both a visual and conceptual standpoint.

    Cover of the AI Issue, Summer 2024

    I especially enjoyed working again with Doug Rosenberg, whose editorial vision I deeply admire and with whom I have collaborated in the past. Doug thoughtfully framed the issue by placing two distinct but complementary approaches into dialogue. He focused on Julie Weitz’s use of the Golem as a performative and robotic avatar alongside my own work around Sar Torah, a model of generative knowledge that treats Torah as a living, evolving system rather than a static archive.

    Julie and I have also worked together previously, and seeing our practices paired in this context was especially rewarding. Her embodied, mythic approach and my systems-based, generative approach ask similar questions from different angles: how Jewish imagination, ethics, and inherited narratives shape our relationship to emerging technologies.

    Feature spread by Doug Rosenberg, AJS Perspectives Journal, Summer 2024

    I also greatly enjoyed working with the editorial team to develop artwork that could serve as a cohesive visual theme for the issue. That collaboration gave me the opportunity to show my Jewish futurism work in action, not as speculation, but as a visual language actively engaging with contemporary Jewish scholarship. It felt meaningful to bring this work into conversation with this part of the Jewish academic world, where ideas, tradition, and future-facing inquiry meet.

    Overall, the experience reaffirmed for me that discussions about AI within Jewish Studies are ultimately about people, values, and responsibility. They ask how we carry tradition forward, how knowledge is generated and shared, and how creativity remains a sacred act even as our tools continue to evolve.