...
Note |
---|
Please be aware that Microsoft CoPilot is the only AI tool supported by Lambton College’s IT department. If you are working with sensitive data, such as student work or proprietary college information, you must use Microsoft CoPilot while logged in with your college credentials. If you are uncertain about how to access CoPilot securely, consult IT before proceeding. Using any other AI system with sensitive data violates college policy. |
Podcast Episode
This podcast episode was created with NotebookLM, an AI tool from Google that generates a podcast-style conversation between two hosts based on sources you provide.
...
The “Commercial Break” and what it can show us about bias in AI
At 7:02 in the podcast, you’ll hear the AI say, “Make sure to come back for part two.” If you’re a regular podcast listener, you might recognize this as a common feature of podcast episodes: a host says something like, “We’ll be right back after this message from our sponsor,” and then the episode immediately resumes without an ad actually playing. So why did this happen?

This is an interesting example of how bias in an AI’s training data can lead to unexpected outputs. The AI that generated this podcast would have been trained on a vast amount of text, including real podcasts that often include these kinds of break cues. However, the AI doesn’t understand two important points that are obvious to us: i) these commercial breaks are meant to signal an ad that would be inserted later, and ii) podcasts are often recorded with space for a commercial but never get a sponsor, so the host says, “We’re going to take a quick break” and then “Welcome back” with no ad played in between. As a result, the AI imitates the structure it has seen in its training data, mistaking this often-repeated pattern for a necessary feature of a podcast episode rather than recognizing it as something to leave out.

This is an example of bias in an AI’s output resulting from its training data. While the bias is benign in this case, it illustrates how AIs mimic the patterns in their training data without being able to understand what those patterns represent. Any biases inherent in an AI’s training data will be reproduced in the AI’s outputs.
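To make this mechanism concrete, here is a toy sketch in Python. This is not NotebookLM’s actual method, and the transcript data is invented for illustration; it simply shows how the most frequent pattern in training data wins out during generation.

```python
# A toy illustration of how a frequent pattern in training data
# reappears in generated output. The transcripts below are invented.
from collections import Counter

# Hypothetical training transcripts, as lists of episode segments.
transcripts = [
    ["intro", "topic", "ad_break", "topic", "outro"],
    ["intro", "topic", "ad_break", "topic", "outro"],
    ["intro", "topic", "topic", "outro"],  # an episode with no sponsor
]

# "Training": count which segment most often follows the first topic block.
after_first_topic = Counter(episode[2] for episode in transcripts)
print(after_first_topic.most_common(1))  # [('ad_break', 2)]

# A generator built on these counts would insert an ad-break cue even when
# there is no ad to play, much like the podcast does at 7:02.
```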
Training Data For This Podcast Episode
...
Introduction to AI in Education

AI is transforming education, reshaping how we teach, learn, and prepare students for the workforce. But what does that mean for faculty? How does AI work, and how can it be integrated responsibly into the classroom? In this post, we’ll explore these questions.
What is AI?

When people hear "AI," they often think of ChatGPT, but artificial intelligence extends far beyond chatbots. Some of the most common types of AI are already embedded in everyday technology, from facial recognition systems to voice assistants like Siri and Alexa.
Large Language Models (LLMs) and Chatbots

The type of AI we focus on in this discussion is the Large Language Model (LLM): the technology behind chatbots like ChatGPT and Microsoft’s CoPilot. These models generate human-like responses based on vast amounts of training data. While ChatGPT is the most widely recognized, several other AI chatbots are making an impact as well.
How Do Large Language Models Work?

Many people think of AI chatbots as a "smarter Google," but they function very differently. Search engines retrieve existing content, whereas LLMs predict the next word in a sequence based on probability. Think of it like predictive text on a smartphone, except on a much larger scale. These models generate responses by drawing on patterns learned from massive datasets.
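To make the prediction idea concrete, here is a minimal sketch of next-word prediction as a toy bigram counter. Real LLMs use neural networks over subword tokens and billions of parameters, so treat this purely as an illustration of the underlying idea.

```python
# A minimal sketch of next-word prediction, the core idea behind LLMs:
# given a word, pick the word that most often followed it in training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count how often each word follows each other word.
following: dict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen during training."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> 'cat' ('cat' and 'mat' tie; first seen wins)
```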
Memory and Context Windows

One key limitation of LLMs is their context window: the amount of recent input they can consider when generating a response. Once that window is exceeded, the AI no longer "remembers" previous interactions. For long or complex discussions, users may need to reintroduce context to keep the conversation on track (a toy sketch of this sliding window appears at the end of this section).

What Happens to Data Entered in an AI Chat?

One of the biggest concerns about AI is data privacy. What happens to the information users input into a chatbot?
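As noted under Memory and Context Windows above, here is a minimal sketch of a sliding context window. It assumes a purely hypothetical word-based budget; real chatbots count subword tokens and have model-specific limits.

```python
# A minimal sketch of a sliding context window over a chat history.
CONTEXT_WINDOW = 50  # illustrative limit, in words, for this sketch

def fit_to_window(messages: list[str], limit: int = CONTEXT_WINDOW) -> list[str]:
    """Keep only the most recent messages that fit inside the window."""
    kept: list[str] = []
    used = 0
    for message in reversed(messages):  # walk backward from the newest
        cost = len(message.split())     # crude stand-in for token counting
        if used + cost > limit:
            break                       # older messages fall out of "memory"
        kept.append(message)
        used += cost
    return list(reversed(kept))         # restore chronological order

# Once a conversation exceeds the window, the earliest turns are simply
# no longer part of the model's input, which is why it "forgets" them.
```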
Institutional vs. Public AI Use

At Lambton College, faculty and staff handling sensitive student data must use Microsoft’s CoPilot and be logged in with their College credentials. Entering student information into ChatGPT or other public AI tools violates privacy policies. For example, using ChatGPT to check a student’s essay for grammar would be a privacy violation, whereas using AI to generate discussion prompts would not. Faculty should consult IT if unsure about compliance.
...
Best Practices for Using AI Tools

Which AI Tools Can I Use?

At Lambton College, anyone working with sensitive information must use the College’s instance of Microsoft’s CoPilot while logged in with their College credentials. Using public AI tools for student data or proprietary College information is not permitted.

Effective AI Prompting and Interaction

Common Pitfalls in AI Use

Many users don’t maximize AI’s potential due to vague inputs and a passive approach.

Best Practices for Prompting AI

✅ Use Context-Rich Prompts – Provide background details, specify constraints, and clarify objectives (a sketch contrasting a vague and a context-rich prompt appears at the end of this section).

📌 Example:

AI for Research and Content Creation

Common Pitfalls

❌ Basic Summaries – Using AI for surface-level definitions rather than deeper analysis.

Best Practices for AI in Research & Writing

✅ Synthesizing Research – Use AI to summarize, contrast, and analyze academic literature.

📌 Example:

AI-Enhanced Teaching & Student Engagement

Common Pitfalls

❌ AI as a Shortcut – Using AI for content generation without promoting deeper learning.

Best Practices for AI in Education

✅ AI as a Learning Tool – Design activities where students use AI for brainstorming, analysis, and skill-building.

📌 Example:

AI for Workflow & Productivity Optimization

Common Pitfalls

❌ Minimal Automation – Doing repetitive tasks manually instead of streamlining with AI.

Best Practices for AI in Productivity

✅ Automating Repetitive Tasks – Use AI for emails, lesson plans, meeting summaries, and curriculum mapping.

📌 Example:

AI Ethics & Policy Awareness

Common Pitfalls

❌ Ignoring AI Bias – Assuming AI-generated content is neutral.

Best Practices for Ethical AI Use

✅ Addressing Bias and Misinformation – Recognize and mitigate biases in AI-generated content.

📌 Example:

Final Takeaways: How Experienced Users Maximize AI’s Potential

✅ They treat AI as a collaborator, not just a tool.

Shaping the Future of AI in Education

AI is not going away; it is becoming part of how we work and learn. Faculty have the opportunity to shape how students engage with AI responsibly. Whether experimenting with AI for personal productivity or integrating it into coursework, the key is to approach AI with curiosity, critical thinking, and ethical awareness.
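As a concrete illustration of the context-rich prompting advice above, compare a vague prompt with a context-rich one. The course details and constraints below are hypothetical placeholders, not examples from the original post.

```python
# An illustrative contrast between a vague prompt and a context-rich one.
vague_prompt = "Write discussion questions."

context_rich_prompt = (
    "You are helping a college instructor plan a seminar.\n"   # background
    "Course: second-year introduction to marketing.\n"
    "Objective: five open-ended discussion questions on "      # objective
    "social media advertising.\n"
    "Constraints: each question should be answerable in 3-5 "  # constraints
    "minutes and require no outside reading."
)
```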
...
Resources
Visit our AI Resources page for more information, including blogs on AI and education you may want to check out. If you know of other blogs or AI resources worth sharing, please comment below.
...
let us know by leaving a comment on the Professional Development Teams channel.