
How do you solve a problem like … AI? (Opinion)

Screenshot from The Sound of Music

Previously, I shared a Vox resource on Generative AI in teaching and learning, with the warning that we risk substituting cool new tech for the desirable difficulties of learning. In the video, they use the metaphor of Google Maps: technology may help us find our way, but it can also prevent us from learning to find our own way, constructing our own mental maps. 

What about Generative AI and artmaking? Does generating imagery or ideating with AI give us a starting point, or does it inhibit our ability to visualize, draft, and experiment? Yes. I’ve learned much from Leon Furze’s post The Myth of the AI First Draft, which acknowledges that writing is supposed to be hard work, to a point. We have some limited evidence that, for beginners, exposure to AI samples may marginally improve creative output. What’s challenging about addressing a new development like AI is that implementation has far outpaced the science and education communities’ ability to study it systematically, understand it in any depth, and then recommend practices based on data and experience (not just moral panic).

The question of “How do you solve a problem like AI?” may not even be the right one to ask. In a Substack post on teaching writing, Emily Pitts Donahoe embraces a creative approach to working with Generative AI: using it in different ways, at different stages of writing, and reflecting on the experience (What did the Gen AI give you to work with? How did it help move you forward? How does it compare to the way you usually write or create?). What interests me about this approach is that it leaves unanswered the question of whether AI is good or bad and instead asks what the experience, and its results, are like.

Teaching in light of generative AI doesn’t necessarily mean adopting a draconian or permissive policy, but it does require us to be clear about what the tool is doing (and what the student is or isn’t doing). We might need to be clearer about the purposes of our assignments, our goals for student learning, and what students might expect in terms of desirable difficulties. We also might need to engage in some purposeful experimentation. This is not an endorsement of AI (I have my fair share of ethical qualms, such as energy use and the need for regulation), but I acknowledge a need for dialogue, further study, and patience. In the meantime, I’m happy to discuss the ideas you have for addressing AI in your courses, and you can always reach me at asmith@pcad.edu.

Previous Posts

What do you REALLY want them to remember?

This is an end-of-semester post. Students have turned in their projects, and I know that many of you are busy grading and/or preparing for this week’s Commencement. Understandably,…

Relationship-building IS evidence-based practice.

Recently, I finished a book called The Last Human Job: The Work of Connecting in a Disconnected World, by Allison Pugh. In it, Pugh focuses on what she calls ‘connective…

Copy That: Reflecting on the Values of Imitation

Copycat isn’t usually meant as a compliment. In the same vein, many of us are familiar with the quote: ‘Imitation is the sincerest form of flattery’ (usually attributed to Oscar…

When learning can be a chore … just do it?

Students are often met with the idea that school is just preparation for the next stage, which is in turn preparation for the next course, job, internship, etc. There’s stuff you…

Looking Back, Art Isn’t Always A Solo Endeavor

Recently, I rewatched one of my favorite anime in recent memory, Look Back (link to trailer here, link to review here). In the movie, Fujino befriends a ‘shut in,’ Kyomoto,…

thank u, next?: on the value(s) of revisiting

“… there is nothing like returning to a place that remains unchanged to see the ways that you yourself have changed.” – Nelson Mandela    For my 10th birthday, I…