How do you solve a problem like … AI? (Opinion)

Screenshot from The Sound of Music

Previously, I shared a Vox resource on Generative AI in teaching and learning, with the warning that we risk substituting cool new tech for the desirable difficulties of learning. In the video, they use the metaphor of Google Maps: technology may help us find our way, but it can also prevent us from learning to find our own way, constructing our own mental maps. 

What about Generative AI and artmaking? Does generating imagery/ideating give us a starting point, or does it inhibit our ability to visualize, draft, and experiment? Yes. I’ve learned much from Leon Furze’s post The Myth of the AI First Draft, which acknowledges that writing is supposed to be hard work, to a point. We have some limited evidence that, for beginners, exposure to AI samples may marginally improve creative output. What’s challenging about addressing a new development like AI is that implementation has far outpaced the science and education communities’ ability to study it systematically, understand it in any depth, and then recommend practices based on data and experience (not just moral panic).

The question of “How do you solve a problem like AI?” may not even be the right one to ask. In a Substack post on teaching writing, Emily Pitts Donahoe embraces a creative approach to working with Generative AI: using it in different ways, at different stages of writing, and reflecting on the experience (What did the Gen AI give you to work with, how did it help move you forward, and how does it compare to the way you usually write/create?). What’s interesting to me about this approach is that it leaves unanswered the question of whether AI is good or bad and instead asks what the experience is like (and what results it produces).

Teaching in light of generative AI doesn’t necessarily mean adopting a draconian or a permissive policy, but it does require us to be clear about what the tool is doing (and what the student is or isn’t doing). We might need to be clearer about the purposes of our assignments, our goals for student learning, and what students should expect in terms of desirable difficulties. We also might need to engage in some purposeful experimentation. This is not an endorsement of AI (I have my fair share of ethical qualms, such as energy use and the need for regulation), but I acknowledge a need for dialogue, further study, and patience. In the meantime, I’m happy to discuss the ideas you have for addressing AI in your courses, and you can always reach me at: asmith@pcad.edu
