Teaching Designers AI Literacy
So… sci-fi authors were mostly wrong about which would come first: AI or robotics.
We have machines now that cannot open doors, but can write business plans*.
Moravec’s Paradox, formulated by Hans Moravec, Rodney Brooks, and others in the 1980s, highlights this counterintuitive aspect of artificial intelligence and robotics. (If you’re triggered by this… you are probably part of a robotics company or Nvidia, and yes, I’m aware of your advancements. Thanks, Dr. Jim Fan; see my addendum at the bottom. Let’s see what you’ve got in 2024!)
High-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.
We now have a new class of intelligence, inspired by our neural networks but operating differently from our own. While we still lack a definition that fully encapsulates this intelligence, we can design with it, and alongside it.
I believe the fruits of AI research can move from the domain of coders into the creative realm of designers: we can see AI as an adaptive material, a class of algorithms capable of operating under uncertain conditions, whether it is perceiving, synthesizing, or remembering.
In this blog, I want to share some ideas and exercises from how I have been teaching AI literacy to students and clients with no programming background.
These exercises are being integrated into the new ‘Generative AI & Design’ Master’s elective here at TU Delft, starting in the 2024 academic year.
Basic AI Literacy
Guiding Motto: Understanding AI as a Design Material
I think a practical definition of AI is a class of algorithms that can perform under uncertainty. Generative AI is a sub-class of those algorithms that can synthesize new content from old. As these models display emergent reasoning abilities, we can use them to design adaptive and self-reflective systems that pursue goals previously achievable only by human minds.
Designers, being trained in a discipline for navigating uncertainty, are best suited to take AI out of the lab and into our lives. The challenges in leveraging this intelligence are no longer technical ones, but design ones. Designers don’t need to understand how the underlying technology is built in order to build more models; they need to understand how it is built so they can treat it as another possible material or tool in their repertoire for creating new products and services, and perhaps new ways of designing.
🔥 Exercise #1: Embodied LLM
This one-word improv exercise is designed to illustrate the concepts of tokens, attention, and the response-generation process in Large Language Models (LLMs), while also demonstrating how hallucination arises in AI (one off-track token throws off everyone else’s tokens after it). A toy code sketch of this process follows the learning objectives.
- Participants: Teams of 3 or more people.
- Task: Answer a simple question, “Why is the sky blue?”
- Method: Without communicating with each other, each team member adds only one word at a time to the team’s answer, until there is nothing more to add.
“AI Twists” on the Improv Exercise:
- Embodying Tokens: This rule mimics the token-based generation in LLMs, where each token (word) is generated one after another.
- Embodying Attention: Participants draw an arrow pointing to the existing word that inspired their new word. This exemplifies the attention mechanism of LLMs, where the model considers the relevance of all previous tokens when generating a new one.
- Embodying Weights: After all teams have presented their answers, the class votes on which answer is the best. This mimics the final weighting in LLMs, where the model evaluates different possible responses to determine the most appropriate one.
Learning Objectives:
- Understanding what tokens are and how the attention mechanism works in LLMs.
- Gaining insight into how LLMs generate responses.
- Exploring why hallucinations (inaccuracies or irrelevant information) can occur in AI-generated responses.
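To make this concrete, here is a toy sketch in Python of the loop the exercise embodies. Everything in it is invented for illustration: random vectors stand in for learned embeddings, and a single attention step stands in for a whole transformer.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "sky", "is", "blue", "because", "light", "scatters", "."]
dim = 16
embed = {w: rng.normal(size=dim) for w in vocab}  # stand-in for learned embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_token(context):
    # "Attention": score every previous word against the latest one, then
    # mix their embeddings by those weights (the arrows in the exercise).
    query = embed[context[-1]]
    keys = np.stack([embed[w] for w in context])
    weights = softmax(keys @ query / np.sqrt(dim))
    summary = weights @ keys
    # Score the whole vocabulary against that summary and sample one word.
    # This sampling step is where one unlucky pick derails everything after it.
    logits = np.stack([embed[w] for w in vocab]) @ summary
    return rng.choice(vocab, p=softmax(logits))

answer = ["the", "sky"]
for _ in range(6):  # one word at a time, just like the improv game
    answer.append(next_token(answer))
print(" ".join(answer))
```

Like the one-word game, each pass sees only what has been said so far; there is no plan for the sentence as a whole.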
Dissecting AI Tools
Guiding Motto: Deconstruction as a Learning Tool
Most students have already experimented with AI tools, particularly AI assistants (e.g. ChatGPT, Claude) and generators (e.g. RunwayML, Midjourney). They will have already encountered the concept of “Prompting”, which has become the most common and intuitive way of controlling and instructing models.
From the first module, they should now understand how prompting as a technique came about from X-shot learning, and should be able to start grasping prompt-engineering techniques for more advanced manipulation of language models.
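For instance, a minimal few-shot prompt might look like the sketch below; the products and taglines are invented, and the point is that the worked examples themselves do the ‘programming’.

```python
# A toy few-shot ("X-shot") prompt. Sent as-is to an LLM, the two worked
# examples steer the model to complete the third line in the same pattern.
prompt = """Turn product names into taglines.

Product: SolarKettle -> Tagline: Boil water with sunshine.
Product: PocketGarden -> Tagline: A farm that fits in your coat.
Product: NightOwl Lamp -> Tagline:"""
```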
What they might not have done yet is unpack and dissect existing AI tools to see how they work: for example, trying to break ChatGPT in order to understand the framework that chains together the many models and pipelines that make the chatbot wrapper work around the core GPT-3.5/4 models.
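As a hypothetical sketch of what such a framework might chain together (none of these function names come from any real product; they are stand-ins for the stages students tend to uncover):

```python
def moderate(text: str) -> bool:
    """Stand-in safety filter: block obviously disallowed content."""
    return "forbidden" not in text.lower()

def retrieve_context(query: str) -> str:
    """Stand-in retrieval step (e.g. a search or vector-store lookup)."""
    return ""  # a real system would return relevant documents here

def call_core_model(prompt: str) -> str:
    """Stand-in for the core LLM call (the GPT-3.5/4 part)."""
    return "model response to: " + prompt

def chat_turn(history: list[str], user_message: str) -> str:
    if not moderate(user_message):          # inputs get filtered
        return "Sorry, I can't help with that."
    system_prompt = "You are a helpful assistant."  # hidden instructions
    context = retrieve_context(user_message)
    # The prompt the model actually sees is much more than what the user typed:
    prompt = "\n".join([system_prompt, context, *history, user_message])
    reply = call_core_model(prompt)
    if not moderate(reply):                 # outputs get filtered too
        return "Sorry, I can't share that."
    history += [user_message, reply]        # "memory" is just replayed text
    return reply

print(chat_turn([], "Why is the sky blue?"))
```

Breaking the wrapper usually means finding the seams between these stages: the hidden system prompt, the filters, or the replayed history.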
🔥 Exercise: AI Safari
This exercise involves exploring the vast landscape of AI tools and critically analyzing one chosen tool, focusing on its functionality, applications, and limitations. The goal is to foster a deeper understanding of the practical and ethical dimensions of AI technology, encouraging students to think critically about the integration of AI into various aspects of society and industry. By researching and presenting an AI tool, students not only learn about specific applications of AI but also develop the skills necessary to evaluate and critique emerging technologies. The class is also exposed to a more diverse set of AI tools than any one student could cover individually.
- Participants: Individual.
- Task: Each student is tasked with finding and selecting one AI tool from the over 10,000 available that they find particularly interesting or relevant. (A couple of archives to get started: Toolify, theresanaiforthat)
- Presentation: Students then present their chosen AI tool to the class.
Unpacking the Tool:
- During their presentation, students must:
- Explain how the AI tool works, detailing its underlying technology, algorithms, and intended use cases. (Making educated guesses if the tool lacks transparency)
- Showcase a practical demonstration of the tool in action, if possible.
Showcase a Failure:
- Students are also required to present a scenario or case where the tool fails or underperforms. This could involve demonstrating a limitation, a flaw in its design, or a situation where the tool provides incorrect or inappropriate results.
Learning Objectives:
- Understanding AI Tool Development: Gaining insight into how AI models are developed into functional tools.
- Critiquing AI Products: Developing the ability to critically assess AI tools, understanding their strengths, weaknesses, and the implications of their use in real-world scenarios.
Rapid Prototyping AI
Guiding Motto: Embodied Prototyping for Uncertainties
One challenge I’ve observed in designing AI products is the difficulty in prototyping. Often, the approach is either overly optimistic, assuming the AI can do anything, or overly pessimistic, assuming it can do nothing. Both perspectives lead to product concepts that are either irrelevant or impossible. However, with the knowledge gained from the previous two modules, students will acquire not only a fundamental understanding of the technology’s capabilities and limitations but also valuable insights from analyzing other tools. This includes learning how others are addressing flaws or repurposing functionalities.
Armed with this intuition, students can then design AI products and prototype them using “Wizard of Oz” techniques. This involves simulating the AI component or function they aim to integrate, allowing them to rapidly test and refine the desired interaction.
🔥Exercise: Wizard of Oz Machine
This exercise encourages students to conceptualize and prototype a new AI-powered service or product, using a hands-on approach to understand the design and functionality of AI systems.
- Participants: Students working in small groups.
- Task: Each group is tasked with ideating a novel AI-powered service or product. The idea should be innovative and feasible, considering current AI capabilities.
Embodying Components with Human Users:
- After ideation, groups are required to simulate their AI service/product by assigning team members to act as different components of the AI system.
- Each component should be encapsulated in simple instructions that an AI model could also follow, without the ability to adapt to changing circumstances.
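As a sketch of what such component instructions might look like, here is a hypothetical breakdown for an imaginary fridge-to-recipe assistant; every name and rule below is invented for illustration.

```python
# In the Wizard of Oz version, a person plays each function, following
# only the instruction in its docstring and nothing else.

def perceive_ingredients(photo_description: str) -> list[str]:
    """Instruction: list only the ingredients you can see. Do not guess."""
    return [item.strip() for item in photo_description.split(",")]

def propose_recipe(ingredients: list[str]) -> str:
    """Instruction: name one dish that uses only the listed ingredients."""
    return "omelette" if "eggs" in ingredients else "salad"

def write_steps(dish: str) -> str:
    """Instruction: write three short cooking steps for the dish."""
    return f"1. Prep the ingredients. 2. Cook the {dish}. 3. Serve."

# The pipeline is a fixed chain; components cannot renegotiate their roles
# mid-interaction, which is exactly what the testing step below probes.
ingredients = perceive_ingredients("eggs, cheese, spinach")
print(write_steps(propose_recipe(ingredients)))
```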
Testing and Critique:
- Other groups test the simulated AI service/product.
- Testers provide feedback on the design and concept of each human-represented module or component.
- NOTE: When a human component has to deviate from its original design in order to complete an interaction, this highlights an issue the AI version would likely not be able to adapt to, thus additional design is necessary for that component in the chain.
Learning Objectives:
- Designing an AI Pipeline: Understanding the complexity and interconnectivity of different components in an AI system. Learning how to design a pipeline that efficiently processes inputs and delivers outputs.
- Prototyping an AI Concept: Gaining skills in quickly prototyping AI concepts using available resources.
Conclusion
As we navigate the rapidly evolving landscape of artificial intelligence, I hope the exercises shared in this blog provide a foundational step towards developing a deeper understanding and practical literacy in AI. From embodying the workings of LLMs to dissecting AI tools and Wizard-of-Oz prototyping of AI concepts, I hope they give you some ideas about how to teach non-coders to work with this technology.
AI as a design material requires creativity, critical thinking, and an intuitive understanding of its capabilities and limitations. I hope these exercises empower students to see AI not as a complex science, but as a tangible part of their repertoire.
A Call to Design Educators
For those in design education, I’d love to hear your thoughts on how these exercises can be adapted or expanded to suit your teaching goals. The journey towards AI literacy is ongoing, and we can shape a future where technology and human creativity coexist in harmony.
The future of work is collaborative human-AI teams!
To get there, let’s demystify AI!
ADDENDUM
*So… 2024 appears to be the year when this (robotics lagging behind) changes. Generative AI is also giving birth to a wealth of methods for improving robotic dexterity and solving physics-based tasks:
- Generative AI, specifically diffusion models, is being used to address complex object-manipulation challenges. Each model focuses on a specific constraint type, such as avoiding collisions or maintaining stability, and the models work together to find global solutions to packing problems.
Apparently, being able to “imagine” what the solution would look like is really helpful…
- Multimodal models are helping with task breakdown and hierarchical planning. These models collaborate to create detailed and feasible plans, aiding robots in household, construction, and manufacturing tasks. https://hierarchical-planning-foundation-model.github.io/
- More to come… but that’s not the point of this blog. Maybe I’ll write another post on physics-based AI and robotics if there is interest…