Students will use Teachable Machine and Scratch to train an AI model to recognize specific gestures. This project fosters an understanding of AI applications, limitations, trustworthiness, and bias.
How does Google Teachable Machine image classification work?
Looks at a single frame of a video
Feeds the data from that image (color and position of pixels) to a neural network
That neural network is trained to find correlations between the inputs (colors, position of pixels) and the outputs (which data class it should belong to)
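The idea above can be sketched in miniature. The example below is a plain-Python toy, not the actual Teachable Machine implementation (which fine-tunes a much larger pretrained network): it trains a tiny perceptron on made-up 4-pixel "images" to show how pixel values flow through learned weights to produce a class label. All data and class names here are invented for illustration.

```python
# Toy sketch of image classification: pixel values in, class label out.
# NOT how Teachable Machine is implemented; it only shows the
# input -> weights -> class idea in the smallest possible form.

# Fake 4-pixel grayscale "images" (0.0 = dark, 1.0 = bright).
# Class 0 = "question" gestures, class 1 = "heart" gestures (made up).
training_data = [
    ([0.9, 0.8, 0.1, 0.2], 0),
    ([0.8, 0.9, 0.2, 0.1], 0),
    ([0.1, 0.2, 0.9, 0.8], 1),
    ([0.2, 0.1, 0.8, 0.9], 1),
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0

def predict(pixels):
    """Weighted sum of the pixels -> class 1 if positive, else class 0."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if score > 0 else 0

# Perceptron training: nudge the weights toward the correct answers,
# which is the "finding correlations" step described above.
for _ in range(20):
    for pixels, label in training_data:
        error = label - predict(pixels)
        for i in range(len(weights)):
            weights[i] += 0.1 * error * pixels[i]
        bias += 0.1 * error

print(predict([0.9, 0.9, 0.1, 0.1]))  # bright-left image -> class 0
print(predict([0.1, 0.1, 0.9, 0.9]))  # bright-right image -> class 1
```

Notice that the model never "understands" the gesture: it only learns which pixel patterns correlated with which label in its training data, which is exactly why the bias issues below arise.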
AI Bias
In this context, bias means that the model treats different groups of people differently, usually in a harmful way.
AI works best when the inputs are similar to the data it was trained on. Because the model we are using today looks at colors and positions of pixels, differences in the colors of the input can affect the result. This can be a problem if two people with different skin tones both want to use the same network that was trained on data from only one of them.
Real world ex: Facial recognition systems used to work much better for lighter-skinned faces than darker-skinned ones, because most of the training data included lighter-skinned people.
Real world ex: Amazon found their resume-screening algorithm was discriminating against female-sounding names because most of their past hires were male.
A video walkthrough of this activity is available on YouTube.
Written Instructions
--- Part 1: Teachable Machine ---
1. Go to bit.ly/teachablescratch and click on "Teachable Machine" at the top of the page.
2. Select an "Image Project" and "Standard Image Model."
3. Choose two gestures for the machine to recognize and enter them as class names (e.g. "question" and "heart").
4. Click on the webcam button, then go into settings and turn "hold to record" off.
5. Click "Record" and provide varied examples of each gesture (ex: question -> use both hands in different positions)
6. Collect data for both classes
7. When done collecting data, click "Train Model." This process can take some time (up to a few minutes). If the page becomes unresponsive, click "Wait."
8. Test the model and note its performance.
9. Go back and add a third class (e.g. "nothing") and train the model on data not representing a gesture.
10. Continue testing and training the model to improve its performance.
11. While waiting for the model to train, discuss AI limitations, trustworthiness, and bias with students. Get them to try to fool their model into giving the wrong answer, and notice when this happens.
It often confidently gives the wrong answer
It is good at only exactly the types of problems it has been trained on (ex: if you only train it for questions with your right hand, it may not work for someone who uses their left hand)
It will often not work for anyone who was not included in the training data. You can try this by getting the students to trade computers with a peer and see if the model still works. Think of how this relates to problems of AI bias (ex: facial recognition used to work much better for lighter-skinned faces than darker-skinned ones).
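These failure modes can be demonstrated in miniature. The sketch below is a plain-Python toy with made-up 4-pixel "images" (not actual Teachable Machine behavior): a nearest-neighbor classifier is trained only on "question" gestures made with the right hand (bright pixels on the left of the frame), and then confidently misclassifies the mirrored version of the same gesture, analogous to a right-hand-only model failing for a left-handed student.

```python
# Toy demo of AI bias: a model trained on narrow data fails on shifted data.
# The "images" are fake 4-pixel lists; this only illustrates the idea.
import math

# Training set: "question" gestures made ONLY with the right hand
# (bright pixels on the left of the frame), plus a "nothing" class.
training_data = [
    ([0.9, 0.8, 0.1, 0.1], "question"),
    ([0.8, 0.9, 0.2, 0.1], "question"),
    ([0.1, 0.1, 0.1, 0.2], "nothing"),
    ([0.2, 0.1, 0.2, 0.1], "nothing"),
]

def classify(pixels):
    """1-nearest-neighbor: return the label of the closest training image."""
    def distance(example):
        return math.dist(example[0], pixels)
    return min(training_data, key=distance)[1]

right_hand = [0.9, 0.9, 0.1, 0.1]       # like the training data
left_hand = list(reversed(right_hand))  # mirrored: bright pixels on the right

print(classify(right_hand))  # "question" - matches the training data
print(classify(left_hand))   # "nothing"  - same gesture, wrong answer
```

The model is not "broken"; it is doing exactly what it was trained to do. The problem is that the training data did not represent all of its users, which is the core of the bias discussion above.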
12. Once satisfied with the model's performance, turn off the webcam input.
13. Export the model and upload it to the cloud, then copy the provided link.
Note that the training images are not uploaded, only the model weights. The images of students never leave the computer, so there are no data-privacy concerns with this step.
--- Part 2: Scratch ---
14. Return to the Scratch page and add the "Google Teachable Machine" extension.
15. Drag in a "when green flag clicked" block and attach a "use model URL" block below it.
16. Paste the copied URL into the "use model URL" block.
17. Run the code and check for a green checkmark in the top left; assist students if needed.
Walk around the classroom to check that everyone has the green checkmark. An orange exclamation mark means the model failed to load.
The most common causes are not running the code (click the green flag) or an improperly copied model URL.
18. Add a loop to the code that displays the current prediction.
19. Add a new sprite (e.g. heart) that will appear when the model predicts a specific gesture.
20. Inside the sprite, add a "when green flag clicked" block, a loop, and an "if-then-else" statement.
21. Use "show" and "hide" blocks to control the sprite's visibility based on the model prediction.
22. Test the code to ensure it works.
23. Add a second sprite (e.g. question mark) and repeat steps 20-22 for it.
You can copy the code to the second sprite by dragging and dropping.
You can make your own sprite by drawing it, or by uploading custom images.
24. Test the code to see if both sprites appear when the associated gestures are made.
25. Celebrate the successful completion of the activity!