This month’s AI podcast is an introduction to the basics of AI chatbots and an overview of how they can be used in post-secondary education.
Please be aware that Microsoft Copilot is the only AI tool supported by Lambton College’s IT department. Using AI with sensitive data, such as student work or proprietary college information, requires using Microsoft Copilot while logged in with your college credentials.
If you are uncertain about how to securely access Copilot to protect sensitive information, consult IT before proceeding. Using any other AI system with sensitive data violates college policy.
This podcast episode was created using NotebookLM, an AI tool from Google that allows you to create an AI-generated podcast conversation between two hosts based on sources that you give it.
Note to Listeners About the “Commercial Break”
At 7:02 in the podcast, you’ll hear the AI say, “Make sure to come back for part two.” If you’re a regular podcast listener, you might recognize this as a common feature of podcast episodes—where a host says something like, “We’ll be right back after this message from our sponsor,” and then the episode immediately resumes without an ad actually playing.
So why did this happen?
This is an interesting example of how bias in an AI’s training data can lead to unexpected outputs. The AI that generated this podcast was trained on a vast amount of text, including real podcasts that often contain these types of break cues. However, the AI doesn’t understand two points that are obvious to us: i) that these commercial breaks are meant to mark a spot where an ad would be inserted later, and ii) that podcasts are often recorded with this space for a commercial but never get a sponsor, so the host says, “We’re going to take a quick break” and then “Welcome back” with no ad played in between. As a result, the AI imitates the structure it has seen in its training data, treating this often-repeated pattern as a necessary feature of a podcast episode rather than recognizing it as something to leave out.
This is an example of bias in an AI’s output resulting from patterns in its training data. While the bias is benign in this case, it does illustrate how AIs mimic the patterns in their training data without being able to understand what those patterns represent.
Training Data For This Podcast Episode
Below is the text of the document that was fed into NotebookLM to create the podcast. The podcast does not reproduce the information in the document in its entirety, so you may want to read it if you’re looking for more detail on any of the points covered in the episode. It has been lightly edited for readability.
AI and Education Blogs
Here is a list of blogs on AI and education you may want to check out. If you know of other blogs worth following, please comment below.