
Note

Please be aware that Microsoft CoPilot is the only AI tool supported by Lambton College’s IT department. Using AI with sensitive data, like student work or proprietary college information, requires using Microsoft CoPilot while logged in with your college credentials.

If you are uncertain about securely accessing CoPilot to protect sensitive information, consult IT before proceeding. Using any other AI system with sensitive data violates college policy.

Podcast Episode


...

Expand: Click here to read about the commercial break and AI bias

At 7:02 in the podcast, you’ll hear the AI say, “Make sure to come back for part two.” If you’re a regular podcast listener, you might recognize this as a common feature of podcast episodes—where a host says something like, “We’ll be right back after this message from our sponsor,” and then the episode immediately resumes without an ad actually playing.

So why did this happen?

This is an interesting example of how bias in an AI’s training data can lead to unexpected outputs. The AI that generated this podcast was trained on a vast amount of text, including transcripts of real podcasts that often contain these break cues. However, the AI doesn’t understand two things that are obvious to us: i) that a break cue signals an ad to be inserted later, and ii) that podcasts are often recorded with space for a commercial that never gets sold, so the host says, “We’re going to take a quick break” and then “Welcome back” with no ad in between. As a result, the AI imitates the structure it has seen in its training data, mistaking this often-repeated pattern for a necessary feature of a podcast episode rather than recognizing it as something to leave out.

This is an example of bias in an AI’s output resulting from patterns in its training data. While the bias is benign in this case, it illustrates how AIs mimic the patterns in their training data without understanding what those patterns represent. Any biases inherent in an AI’s training data will be reproduced in its outputs.

Training Data For This Podcast Episode

...

Expand: Click here to read the Introduction to AI in Education section.

Introduction to AI in Education

AI is transforming education, reshaping how we teach, learn, and prepare students for the workforce. But what does that mean for faculty? How does AI work, and how can it be integrated responsibly into the classroom?

In this post, we’ll explore:

  • What AI is (and what it isn’t)

  • How AI impacts teaching and learning

  • How to prepare students for an AI-driven workforce

  • The ethical and practical considerations faculty should keep in mind

What is AI?

When people hear "AI," they often think of ChatGPT, but artificial intelligence extends far beyond chatbots. AI is already embedded in everyday technology, from facial recognition systems to voice assistants like Siri and Alexa. Some of the most common types of AI include:

  • Computer Vision – Powers applications like facial recognition and self-driving cars.

  • Speech Recognition – Converts spoken language into text, enabling tools like Google Assistant and dictation software.

  • Recommendation Systems – Personalizes content suggestions on platforms like Netflix and Amazon.

  • Generative AI – Produces new content, including text (ChatGPT, CoPilot), images (DALL-E, Midjourney), and even music.

Large Language Models (LLMs) and Chatbots

The type of AI we focus on in this discussion is a Large Language Model (LLM)—the technology behind chatbots like ChatGPT and Microsoft’s CoPilot. These models generate human-like responses based on vast amounts of training data.

While ChatGPT is the most widely recognized, several other AI chatbots are making an impact:

  • ChatGPT (OpenAI)

  • CoPilot (Microsoft) – The only AI tool officially supported by Lambton College IT.

  • Gemini (Google) and NotebookLM (Google)

  • Claude (Anthropic) – A powerful but lesser-known competitor.

  • Llama (Meta) – Integrated into social media and business applications.

  • Perplexity – A research-focused AI that blends chatbot capabilities with web search.

  • AI Tutor Pro and AI Teaching Assistant Pro – Purpose-built educational AI tools.

How Do Large Language Models Work?

Many people think of AI chatbots as a "smarter Google," but they function very differently. Search engines retrieve existing content, whereas LLMs predict the next word in a sequence based on probability.

Think of it like predictive text on a smartphone—except on a much larger scale. These models generate responses by drawing from massive datasets, which means:

  • Their predictions are shaped by patterns in the training data.

  • They can reflect biases present in the information they were trained on.

  • Their accuracy depends on the quality and diversity of their training sources.

  • They sometimes produce hallucinations—plausible-sounding but false information.
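To make the “predictive text at scale” idea concrete, here is a minimal sketch in Python of the core mechanic: count which words follow which in some training text, then repeatedly sample a likely next word. The tiny corpus and the word-level (rather than token-level) model are illustrative simplifications; real LLMs learn these statistics with neural networks over billions of documents, but the generate-the-next-word loop is the same.

```python
import random

# Toy training corpus. Note it contains the podcast-style break-cue pattern;
# a model trained on it will happily reproduce that pattern.
corpus = ("we will be right back after this message . "
          "welcome back . we will be right back soon .").split()

# Build bigram statistics: for each word, every word observed to follow it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=10):
    """Repeatedly sample a plausible next word, as an LLM samples tokens."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Duplicates in the list make frequent continuations more likely.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("we"))  # e.g. "we will be right back after this message ."
```

This is also why the podcast AI inserted a commercial break: a pattern that is frequent in the training data becomes frequent in the output, whether or not it makes sense in context.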

Memory and Context Windows

One key limitation of LLMs is their context window—the amount of recent input they can consider when generating a response. Once that window is exceeded, the AI no longer "remembers" previous interactions.

For long or complex discussions, users may need to reintroduce context to keep conversations on track.
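As a rough illustration of the mechanism, here is a hypothetical sketch: only the most recent turns that fit inside the window reach the model, so earlier details silently drop out. The word-based length measure and the tiny window size are simplifications; real models count tokens, and their windows hold thousands of them.

```python
MAX_CONTEXT_WORDS = 40  # illustrative limit; real windows hold thousands of tokens

def visible_context(conversation):
    """Keep only the most recent turns that fit inside the context window."""
    kept, used = [], 0
    for turn in reversed(conversation):      # walk backward from the newest turn
        words = len(turn.split())
        if used + words > MAX_CONTEXT_WORDS:
            break                            # older turns fall outside the window
        kept.append(turn)
        used += words
    return list(reversed(kept))              # restore chronological order

conversation = [
    "User: My course code is BUS-1003.",     # this detail will be "forgotten"
    "AI: Noted!",
    "User: " + "Here is some filler text. " * 6,
    "User: What was my course code?",
]
print(visible_context(conversation))         # the first turn no longer fits
```

When a detail drops out of the window like this, restating it in your next message is the fix.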

What Happens to Data Entered in an AI Chat?

One of the biggest concerns about AI is data privacy. What happens to the information users input into a chatbot?

  • AI models process inputs in real time and do not keep permanent memory across sessions. For example, OpenAI’s ChatGPT does not retain user conversations permanently but may temporarily store chat inputs to improve model performance, detect abuse, and refine responses. Google’s Gemini, Anthropic’s Claude, and other models typically follow similar data-usage policies.

  • Some platforms temporarily retain chat data for training or quality improvement.

  • Users should assume that public AI models may analyze inputs, even if they don’t store them long-term.

  • Some models, particularly with a paid subscription, offer settings to disable data retention.

Institutional vs. Public AI Use

At Lambton College, faculty and staff handling sensitive student data must use Microsoft’s CoPilot and be logged in with their college credentials. Entering student information into ChatGPT or other public AI tools violates privacy policies.

For example, using ChatGPT to check a student’s essay for grammar would be a privacy violation, whereas using AI to generate discussion prompts, which involves no student data, would not. Faculty should consult IT if unsure about compliance.

...

Expand: Click here to read the AI Best Practices section.

Best Practices for Using AI Tools

Effective AI Prompting and Interaction

Common Pitfalls in AI Use

Many users don’t maximize AI’s potential due to vague inputs and a passive approach. Here are common missteps:
❌ Generic Prompts – Asking broad questions like “Give me ideas for an AI workshop.”
❌ One-and-Done Approach – Accepting the first response without refining or probing deeper.
❌ Lack of Role Framing – Not specifying whether the AI should respond as a researcher, instructional designer, or subject matter expert.
❌ Unstructured Requests – Seeking general information rather than well-organized insights.

Best Practices for Prompting AI

✅ Use Context-Rich Prompts – Provide background details, specify constraints, and clarify objectives.
✅ Refine Iteratively – Engage in a back-and-forth to improve responses rather than settling for the first output.
✅ Guide AI with Multi-Step Thinking – Break complex requests into sequential tasks.
✅ Role-Based Prompting – Frame the AI’s perspective (e.g., “Act as a curriculum designer and suggest a new assessment method.”).
✅ Structured Output Requests – Ask for organized responses (e.g., “Provide a summary with key takeaways, limitations, and next steps.”).

📌 Example:
Instead of asking “How can I improve my course?”, say:
➡️ “I’m revising a syllabus for an introductory business course. It needs to balance technical concepts with practical applications. Can you suggest a week-by-week structure with key learning objectives, real-world case studies, and interactive learning activities?”
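The same principles apply when scripting against an AI service. Below is a hypothetical sketch using the OpenAI Python SDK; the model name and prompt text are illustrative, and the role/message pattern is common to most chatbot APIs. Remember that anything involving sensitive college data belongs in CoPilot, not a script like this.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # Role-based prompting: frame the AI's perspective up front.
        {"role": "system",
         "content": "Act as a curriculum designer for an introductory business course."},
        # Context-rich, structured request instead of a generic question.
        {"role": "user",
         "content": ("I'm revising a syllabus that must balance technical concepts "
                     "with practical applications. Suggest a week-by-week structure "
                     "with learning objectives, real-world case studies, and "
                     "interactive activities, ending with key takeaways.")},
    ],
)
print(response.choices[0].message.content)
```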


AI for Research and Content Creation

Common Pitfalls

❌ Basic Summaries – Using AI for surface-level definitions rather than deeper analysis.
❌ Limited Context Awareness – Copying AI responses verbatim without adapting them to specific needs.
❌ Underutilizing File Uploads – Not using AI to extract insights from research papers or datasets.

Best Practices for AI in Research & Writing

✅ Synthesizing Research – Use AI to summarize, contrast, and analyze academic literature.
✅ Adapting AI Content to Context – Modify AI-generated outputs to align with institutional policies and course goals.
✅ Efficient Data Processing – Upload PDFs, spreadsheets, or reports and ask AI to extract key insights.
✅ AI-Assisted Writing – Draft lesson plans, grant proposals, or research abstracts with AI, then fine-tune them.

📌 Example:
Instead of asking “What is Bloom’s Taxonomy?”, ask:
➡️ “Can you adapt Bloom’s Taxonomy to create an assessment framework for an online nursing simulation course, with example rubrics?”
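For the file-upload workflow, chatbot interfaces handle the extraction for you, but the underlying step looks roughly like this hypothetical sketch using the pypdf library (the file name is made up): pull the text out of the document, then wrap it in a focused prompt.

```python
from pypdf import PdfReader  # pip install pypdf

# Extract the text from an uploaded paper (file name is illustrative).
reader = PdfReader("research_paper.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Wrap the extracted text in a focused, structured request.
prompt = ("Summarize the methodology and key findings of this paper, "
          "then list its main limitations:\n\n"
          + text[:8000])  # truncate to stay within the model's context window
print(prompt[:300])
```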


AI-Enhanced Teaching & Student Engagement

Common Pitfalls

❌ AI as a Shortcut – Using AI for content generation without promoting deeper learning.
❌ No Critical Engagement – Allowing AI use without teaching students to fact-check and refine responses.
❌ Limited Faculty Training – Not equipping instructors with strategies for ethical AI integration.

Best Practices for AI in Education

✅ AI as a Learning Tool – Design activities where students use AI for brainstorming, analysis, and skill-building.
✅ AI-Assisted Feedback – Use AI to generate rubric-based feedback tailored to different proficiency levels.
✅ Course-Specific AI Training – Educate faculty on discipline-specific AI applications.
✅ Interactive AI Use – Encourage students to engage critically, rather than passively, with AI-generated content.

📌 Example:
Instead of saying “You can use AI for your assignments,” say:
➡️ “For this marketing assignment, ask the AI to critique your company’s target audience selection. Compare its insights with your own and refine your strategy based on the discussion.”


AI for Workflow & Productivity Optimization

Common Pitfalls

❌ Minimal Automation – Doing repetitive tasks manually instead of streamlining with AI.
❌ No Data Insights – Using AI only for text generation rather than institutional research or analysis.
❌ Limited Integration – Using AI in isolation rather than embedding it into workflows.

Best Practices for AI in Productivity

✅ Automating Repetitive Tasks – Use AI for emails, lesson plans, meeting summaries, and curriculum mapping.
✅ Data-Driven Decision Making – Leverage AI for institutional research, performance analysis, and predictive modeling.
✅ AI-Integrated Workflows – Combine AI with tools like Notion, Power Automate, or Excel for efficiency.
✅ Cross-Department Collaboration – Encourage AI use across teaching, research, and administration.

📌 Example:
Instead of manually compiling student feedback, ask:
➡️ “Summarize common themes from these course evaluations and suggest three actionable improvements.”
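As a minimal sketch of that workflow, gather the evaluations programmatically, then issue one structured request instead of reading them one by one. The comments below are made up; the assembled prompt can be pasted into CoPilot or sent through an API.

```python
# Hypothetical course-evaluation comments; in practice these might come
# from a CSV export of your survey tool.
comments = [
    "The weekly quizzes helped me keep up with the material.",
    "Lectures moved too fast in weeks 3-5.",
    "More real-world examples would help.",
]

prompt = ("Summarize common themes from these course evaluations and "
          "suggest three actionable improvements:\n\n"
          + "\n".join(f"- {c}" for c in comments))
print(prompt)
```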


AI Ethics & Policy Awareness

Common Pitfalls

❌ Ignoring AI Bias – Assuming AI-generated content is neutral.
❌ Lack of Institutional Strategy – Using AI informally without considering long-term policy impacts.
❌ No AI Literacy Education – Failing to teach faculty and students how AI generates responses and where it may be flawed.

Best Practices for Ethical AI Use

✅ Addressing Bias and Misinformation – Recognize and mitigate biases in AI-generated content.
✅ Aligning with Institutional Policies – Ensure AI use aligns with academic integrity policies.
✅ Transparency in AI Use – Disclose when AI is used in policy-making, research, or instructional design.
✅ Advocating for AI Literacy – Educate peers and students on how AI works and its limitations.

📌 Example:
When discussing AI use in assessments, ask:
➡️ “What are ethical guidelines for allowing AI-generated content in student assignments? Provide examples from other colleges’ policies.”


Final Takeaways: How Experienced Users Maximize AI’s Potential

✅ They treat AI as a collaborator, not just a tool.
✅ They refine AI-generated content to fit specific needs.
✅ They integrate AI into workflows for maximum efficiency.
✅ They approach AI critically, ensuring it enhances learning rather than replaces thinking.
✅ They use AI across teaching, research, and administration, maximizing its impact.

Shaping the Future of AI in Education

AI is not going away—it’s becoming part of how we work and learn. Faculty have the opportunity to shape how students engage with AI responsibly.

Whether experimenting with AI for personal productivity or integrating it into coursework, the key is to approach AI with curiosity, critical thinking, and ethical awareness.


...

Resources

Visit our AI Resources page for more information and for blogs on AI and education you may want to check out. If you know of other AI resources worth sharing, please comment below.

...

let us know by leaving a comment on the Professional Development Teams channel.