Introduction to AI in Education

This month’s AI podcast is an introduction to the basics of AI chatbots and an overview of how they can be used in post-secondary education.

Please be aware that Microsoft CoPilot is the only AI tool supported by Lambton College’s IT department. If you are working with sensitive data, such as student work or proprietary college information, you must use Microsoft CoPilot while logged in with your college credentials.

If you are uncertain about securely accessing CoPilot to protect sensitive information, consult IT before proceeding.

Podcast Episode

Introduction to AI in Education
This podcast episode provides an overview of AI, with a focus on Large Language Models (LLMs) and their applications in education. It stresses the importance of responsible and ethical AI use in the classroom by highlighting how to use AI as a learning tool and not a shortcut. It emphasizes the need for AI literacy among faculty and students to navigate AI tools effectively and critically.

This podcast episode was created using NotebookLM, an AI tool from Google that allows you to create an AI-generated podcast conversation between two hosts based on sources that you give it.

The “Commercial Break” and what it can show us about bias in AI

At 7:02 in the podcast, you’ll hear the AI say, “Make sure to come back for part two.” If you’re a regular podcast listener, you might recognize this as a common feature of podcast episodes—where a host says something like, “We’ll be right back after this message from our sponsor,” and then the episode immediately resumes without an ad actually playing.

So why did this happen?

This is an interesting example of how bias in an AI’s training data can lead to unexpected outputs. The AI that generated this podcast would have been trained on a vast amount of text, including real podcasts that often include these types of break cues. However, the AI doesn’t understand two important points that are obvious to us: i) that these commercial breaks are meant to signal an ad that would be inserted later, and ii) that podcasts are often recorded with this space for a commercial but then do not get a sponsor, resulting in the host saying, “We’re going to take a quick break” and then “Welcome back” with no ad played in between. As a result, the AI imitates the structure it has seen in its training data, mistaking this often-repeated pattern for a necessary feature of a podcast episode that it should include rather than recognizing it as something that should be ignored. 

This is an example of bias in an AI’s output resulting from errors in its training data. While the bias is benign in this case, it illustrates how AIs mimic the patterns in their training data without being able to understand what those patterns represent. Any biases inherent in an AI’s training data will be reproduced in its outputs.

Training Data For This Podcast Episode

Below is the text of the document that was fed into NotebookLM to create the podcast. The podcast does not reproduce the information in the document in its entirety, so you may want to read it if you’re looking for more detail on any of the points covered in the episode. It has been lightly edited for readability.

Introduction to AI in Education

AI is transforming education, reshaping how we teach, learn, and prepare students for the workforce. But what does that mean for faculty? How does AI work, and how can it be integrated responsibly into the classroom?

In this post, we’ll explore:

  • What AI is (and what it isn’t)

  • How AI impacts teaching and learning

  • How to prepare students for an AI-driven workforce

  • The ethical and practical considerations faculty should keep in mind

What is AI?

When people hear "AI," they often think of ChatGPT, but artificial intelligence extends far beyond chatbots. AI is already embedded in everyday technology, from facial recognition systems to voice assistants like Siri and Alexa. Some of the most common types of AI include:

  • Computer Vision – Powers applications like facial recognition and self-driving cars.

  • Speech Recognition – Converts spoken language into text, enabling tools like Google Assistant and dictation software.

  • Recommendation Systems – Personalizes content suggestions on platforms like Netflix and Amazon.

  • Generative AI – Produces new content, including text (ChatGPT, CoPilot), images (DALL-E, Midjourney), and even music.

Large Language Models (LLMs) and Chatbots

The type of AI we focus on in this discussion is a Large Language Model (LLM)—the technology behind chatbots like ChatGPT and Microsoft’s CoPilot. These models generate human-like responses based on vast amounts of training data.

While ChatGPT is the most widely recognized, several other AI chatbots are making an impact:

  • ChatGPT (OpenAI)

  • CoPilot (Microsoft) – The only AI tool officially supported by Lambton College IT.

  • Gemini (Google) and NotebookLM (Google)

  • Claude (Anthropic) – A powerful but lesser-known competitor.

  • Llama (Meta) – Integrated into social media and business applications.

  • Perplexity – A research-focused AI that blends chatbot capabilities with web search.

  • AI Tutor Pro and AI Teaching Assistant Pro – Purpose-built educational AI tools.

How Do Large Language Models Work?

Many people think of AI chatbots as a "smarter Google," but they function very differently. Search engines retrieve existing content, whereas LLMs predict the next word in a sequence based on probability.

Think of it like predictive text on a smartphone—except on a much larger scale. These models generate responses by drawing from massive datasets, which means:

  • Their predictions are shaped by patterns in the training data.

  • They can reflect biases present in the information they were trained on.

  • Their accuracy depends on the quality and diversity of their training sources.

  • They sometimes produce hallucinations—plausible-sounding but false information.
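The next-word-prediction idea can be illustrated with a toy model. The sketch below is purely illustrative (a real LLM uses a neural network over subword tokens, not word counts), but it shows the core mechanism: continuations are chosen by probability learned from training text, so the output can only reflect the patterns in that text.

```python
from collections import Counter, defaultdict

# Toy next-word predictor (NOT a real LLM): count which word follows
# which in a tiny "training corpus", then pick the most probable
# continuation. Biases and gaps in the corpus directly shape the output.
corpus = (
    "the model predicts the next word "
    "the model reflects its training data "
    "the training data shapes the next word"
).split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))       # most common follower of "the"
print(predict_next("training"))  # most common follower of "training"
```

Note that the model has no idea what any of these words mean; it only reproduces frequencies, which is why biased or erroneous training data produces biased or erroneous output.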

Memory and Context Windows

One key limitation of LLMs is their context window—the amount of recent input they can consider when generating a response. Once that window is exceeded, the AI no longer "remembers" previous interactions.

For long or complex discussions, users may need to reintroduce context to keep conversations on track.
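A minimal sketch of the context-window idea, using whitespace-separated words as a stand-in for the subword tokens real models actually count. Once the window is full, the oldest messages simply fall out of view:

```python
# Minimal sketch of a context window: keep only the most recent
# messages whose combined "token" count fits the limit. Real models
# count subword tokens; whitespace words are an illustrative stand-in.
def fit_to_window(messages, max_tokens):
    kept, total = [], 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = len(msg.split())
        if total + cost > max_tokens:
            break                    # older messages are "forgotten"
        kept.append(msg)
        total += cost
    return list(reversed(kept))      # restore chronological order

history = [
    "Explain what a context window is",
    "A context window is the text the model can consider at once",
    "Summarize our conversation so far",
]
print(fit_to_window(history, max_tokens=15))
```

This is why, in a long conversation, users may need to restate earlier details: anything trimmed from the window is invisible to the model when it generates its next response.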

What Happens to Data Entered in an AI Chat?

One of the biggest concerns about AI is data privacy. What happens to the information users input into a chatbot?

  • AI models process inputs in real time and do not retain permanent memory across sessions. For example, OpenAI’s ChatGPT does not retain user conversations permanently but may temporarily store chat inputs to improve model performance, detect abuse, and refine responses. Google’s Gemini, Anthropic’s Claude, and other models typically follow similar data-usage policies.

  • Some platforms temporarily retain chat data for training or quality improvement.

  • Users should assume that public AI models may analyze inputs, even if they don’t store them long-term.

  • Some models, particularly with a paid subscription, offer settings to disable data retention.

Institutional vs. Public AI Use

At Lambton College, faculty and staff handling sensitive student data must use Microsoft’s CoPilot and be logged in with their College credentials. Entering student information into ChatGPT or other public AI tools violates privacy policies.

For example, using ChatGPT to check a student’s essay for grammar would be a privacy violation, whereas using AI to generate discussion prompts would not. Faculty should consult IT if unsure about compliance.

AI Literacy: A Foundational Skill for Educators and Students

Just as digital literacy became essential in the internet era, AI literacy is now a crucial skill.

What is AI Literacy?

AI literacy is the ability to understand, evaluate, and effectively use artificial intelligence across different contexts. As AI becomes more embedded in education, work, and daily life, developing AI literacy is increasingly critical for both faculty and students.

AI Literacy as a Foundational Skill

Just as digital literacy helps individuals navigate search engines, social media, and online resources, AI literacy enables users to engage critically and responsibly with AI-powered tools. It includes:

  • Understanding how AI works, including how models generate responses.

  • Recognizing AI’s capabilities and limitations, such as its predictive nature and potential for error.

  • Using AI effectively and appropriately in various fields, from education and research to business and healthcare.

For faculty, AI literacy means:

  • Evaluating AI’s role in teaching and learning—knowing when AI enhances education and when it may be a shortcut.

  • Helping students develop AI fluency—teaching them to use AI as a tool for learning rather than outsourcing their thinking.

  • Navigating ethical and institutional policies to ensure responsible AI use in coursework and research.

Critical Thinking When Interacting with AI

Verifying Sources

AI-generated content is not inherently reliable. LLMs can hallucinate (i.e., generate false or fabricated information), so responses must be fact-checked. Best practice: Use AI as a research assistant, not a final authority.

Recognizing Bias in AI

AI models are trained on massive datasets, which may contain biases based on historical or societal trends. This means:

  • AI-generated career advice may reinforce gender stereotypes.

  • Historical summaries might reflect cultural biases.

  • Recommendations could favour dominant perspectives while omitting underrepresented voices.

Faculty should encourage students to critically analyze AI outputs, question potential biases, and discuss ways to mitigate them.

Refining Prompts for Better Results

AI’s effectiveness depends on the quality of the prompt. Clear, specific prompts yield more accurate and useful responses.

Example: Instead of asking “Explain World War II,” a more effective prompt would be:

➡️ “Summarize the economic causes of World War II and their impact on global trade.”

Teaching students iterative prompting—adjusting queries to refine results—helps them develop stronger AI literacy skills.

Ethical AI Use in Education and the Workplace

Transparency in AI Use

Faculty and students should disclose when AI is used, especially in assignments, research, or professional work. Institutions should establish clear policies on acceptable AI use to maintain academic integrity.

Avoiding Over-Reliance on AI

AI should support learning and decision-making, not replace human thinking.

In education: AI is best used for brainstorming, summarization, and feedback, not as a tool to complete assignments.

In the workplace: AI can boost productivity, but critical judgment is still necessary.

Privacy and Security Considerations

Faculty must guide students on responsible AI use, ensuring they understand what data AI models collect and how it is stored.

Best practice:

  • Use institution-approved AI tools (e.g., Microsoft CoPilot for faculty at Lambton College) when handling sensitive information.

  • Never enter student names, grades, or personal details into public AI platforms.

To reflect on ethical AI use in the classroom, faculty might consider:

  • If a student asks AI to improve an argument in an essay, is that academic dishonesty or a learning tool?

  • Should students be required to disclose when they use AI in their coursework? Why or why not?

Understanding Bias in AI

Bias in AI occurs when systems produce systematic errors that result in unfair or discriminatory outcomes. This often reflects biases present in training data or model design.

Bias can manifest in many ways, including gender, racial, socioeconomic, and age-based biases.

Example: If you ask an AI model to list famous scientists, it may primarily suggest men. This doesn’t mean women haven’t made major contributions—it reflects historical biases in available data.

Educators play a crucial role in helping students recognize AI bias and develop strategies to critically assess AI-generated content.

The Role of AI Literacy in Post-Secondary Education

Developing AI literacy empowers faculty and students to engage with AI confidently and critically. For students, it’s a foundational skill that will be essential in an AI-augmented workforce. For educators, AI literacy enables them to integrate AI meaningfully into their teaching while upholding academic integrity and ethical responsibility.

By fostering AI literacy, colleges and universities can ensure students graduate with the skills to navigate, evaluate, and responsibly use AI—not just as passive consumers but as critical thinkers prepared for the evolving demands of the workplace.

Bringing AI into the Classroom

Moving Beyond “AI as a Shortcut”

As AI tools become more accessible, one of the biggest concerns in education is their potential misuse—particularly in assignments where students might rely on AI to complete work without genuine learning. Instead of banning AI outright, faculty can guide students toward using it as a tool for learning, critical thinking, and skill development rather than as a shortcut.

Addressing Concerns About AI-Generated Assignments

There is understandable concern that AI makes academic work meaningless by allowing students to generate essays, problem sets, or responses with minimal effort. However, AI is most effective when used as an enhancer of learning, not a replacement for student effort.

If students rely on AI-generated content without engaging with the material, they miss opportunities to develop critical thinking, problem-solving, and analytical skills. Faculty can design assignments that encourage AI use as a support tool rather than a content generator.

Encouraging AI as a Brainstorming and Feedback Tool

Rather than banning AI, faculty can position it as a tool for enhancing creativity, organization, and understanding.

Brainstorming & Idea Generation

AI can help students overcome writer’s block by generating ideas, outlines, or key points.

Example: Instead of asking AI to write an essay, students can prompt it to generate different perspectives on a topic and evaluate them.

Structuring Writing & Organizing Thoughts

AI can assist in outlining essays, creating logical flow, and improving coherence.

Example: A student struggling with structuring a research paper can use AI to generate a suggested outline, then refine it with their own analysis.

Providing Constructive Feedback

AI tools can offer instant feedback on grammar, style, and clarity, helping students iteratively improve their work.

Example: A student can draft an argument, run it through AI for feedback, and revise it based on the AI’s suggestions.

AI Integration in Teaching & Learning

Using AI to Generate Discussion Questions & Lesson Plans

Faculty can use AI to create discussion prompts based on reading assignments or develop lesson plans with structured outlines.

Example: AI can generate debate topics on ethical AI use based on current events.

AI for Formative Assessments: Creating Quiz Questions & Practice Tests

AI can generate multiple-choice, short-answer, or scenario-based questions for formative assessments, which faculty can customize to align with learning objectives.

Example: AI can create adaptive quiz questions that adjust based on student responses.

AI as a Tutor: Personalized Concept Review

AI-powered tutoring can help students review difficult concepts by providing explanations tailored to their level.

Example: A nursing student struggling with anatomy could ask AI to explain a concept in simpler terms or relate it to real-world applications.

AI-Powered Research Assistants: Summarizing Complex Material

AI can summarize journal articles, textbooks, or research papers, helping students grasp key concepts faster.

Example: A student researching climate policy can use AI to compare policies from different countries based on key themes.

⚠️ Caution: Given AI’s penchant for hallucination, it is essential that students be taught to verify AI-generated summaries against original sources for accuracy.

AI and Academic Integrity

Strategies for Designing AI-Inclusive Assignments

To maintain academic integrity while integrating AI, faculty should:

  • Shift away from traditional “AI-completable” assignments (e.g., generic essays).

  • Emphasize process over product by requiring students to submit outlines, drafts, and reflections on their AI use.

  • Use AI detection tools carefully—they are not foolproof and should not be the sole determinant of academic misconduct.

Faculty’s Role in Modelling Ethical AI Use

Faculty can demonstrate responsible AI use by:

  • Using AI for lesson planning while verifying outputs.

  • Being transparent with students about when and how AI is being used.

  • Encouraging discussions on AI’s impact on knowledge creation and academic integrity.

Developing AI-Aware Policies: Balancing Innovation & Accountability

Rather than blanket bans, institutions should develop clear policies on AI use that:

  • Define acceptable AI use for assignments.

  • Encourage disclosure (e.g., requiring students to note when AI assisted in their work).

  • Differentiate between AI-assisted learning and AI plagiarism.

Finally, open and ongoing communication among faculty and administrators will be necessary to ensure consistent, relevant policies across courses and departments, particularly as AI’s capabilities continue to change rapidly.

Best Practices for Using AI Tools

Effective AI Prompting and Interaction

Common Pitfalls in AI Use

Many users don’t maximize AI’s potential due to vague inputs and a passive approach. Here are common missteps:
❌ Generic Prompts – Asking broad questions like “Give me ideas for an AI workshop.”
❌ One-and-Done Approach – Accepting the first response without refining or probing deeper.
❌ Lack of Role Framing – Not specifying whether the AI should respond as a researcher, instructional designer, or subject matter expert.
❌ Unstructured Requests – Seeking general information rather than well-organized insights.

Best Practices for Prompting AI

✅ Use Context-Rich Prompts – Provide background details, specify constraints, and clarify objectives.
✅ Refine Iteratively – Engage in a back-and-forth to improve responses rather than settling for the first output.
✅ Guide AI with Multi-Step Thinking – Break complex requests into sequential tasks.
✅ Role-Based Prompting – Frame the AI’s perspective (e.g., “Act as a curriculum designer and suggest a new assessment method.”).
✅ Structured Output Requests – Ask for organized responses (e.g., “Provide a summary with key takeaways, limitations, and next steps.”).

📌 Example:
Instead of asking “How can I improve my course?”, say:
➡️ “I’m revising a syllabus for an introductory business course. It needs to balance technical concepts with practical applications. Can you suggest a week-by-week structure with key learning objectives, real-world case studies, and interactive learning activities?”
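The practices above (role framing, context, a clear task, and a structured-output request) can be combined in a simple prompt-assembly helper. The function and field names below are illustrative only, not part of any chatbot’s actual API:

```python
# Hypothetical helper that assembles a context-rich, role-framed prompt
# following the best practices above. The structure is illustrative.
def build_prompt(role, context, task, output_format):
    return (
        f"Act as {role}. "
        f"Context: {context} "
        f"Task: {task} "
        f"Format the response as {output_format}."
    )

prompt = build_prompt(
    role="a curriculum designer",
    context="I'm revising an introductory business course syllabus.",
    task="Suggest a week-by-week structure with learning objectives.",
    output_format="a numbered list with key takeaways and next steps",
)
print(prompt)
```

Keeping these four ingredients explicit makes it easier to refine a prompt iteratively: when a response misses the mark, you can adjust one ingredient at a time rather than rewriting the whole request.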


AI for Research and Content Creation

Common Pitfalls

❌ Basic Summaries – Using AI for surface-level definitions rather than deeper analysis.
❌ Limited Context Awareness – Copying AI responses verbatim without adapting them to specific needs.
❌ Underutilizing File Uploads – Not using AI to extract insights from research papers or datasets.

Best Practices for AI in Research & Writing

✅ Synthesizing Research – Use AI to summarize, contrast, and analyze academic literature.
✅ Adapting AI Content to Context – Modify AI-generated outputs to align with institutional policies and course goals.
✅ Efficient Data Processing – Upload PDFs, spreadsheets, or reports and ask AI to extract key insights.
✅ AI-Assisted Writing – Draft lesson plans, grant proposals, or research abstracts with AI, then fine-tune them.

📌 Example:
Instead of asking “What is Bloom’s Taxonomy?”, ask:
➡️ “Can you adapt Bloom’s Taxonomy to create an assessment framework for an online nursing simulation course, with example rubrics?”


AI-Enhanced Teaching & Student Engagement

Common Pitfalls

❌ AI as a Shortcut – Using AI for content generation without promoting deeper learning.
❌ No Critical Engagement – Allowing AI use without teaching students to fact-check and refine responses.
❌ Limited Faculty Training – Not equipping instructors with strategies for ethical AI integration.

Best Practices for AI in Education

✅ AI as a Learning Tool – Design activities where students use AI for brainstorming, analysis, and skill-building.
✅ AI-Assisted Feedback – Use AI to generate rubric-based feedback tailored to different proficiency levels.
✅ Course-Specific AI Training – Educate faculty on discipline-specific AI applications.
✅ Interactive AI Use – Encourage students to engage critically, rather than passively, with AI-generated content.

📌 Example:
Instead of saying “You can use AI for your assignments,” say:
➡️ “For this marketing assignment, ask the AI to critique your company’s target audience selection. Compare its insights with your own and refine your strategy based on the discussion.”


AI for Workflow & Productivity Optimization

Common Pitfalls

❌ Minimal Automation – Doing repetitive tasks manually instead of streamlining with AI.
❌ No Data Insights – Using AI only for text generation rather than institutional research or analysis.
❌ Limited Integration – Using AI in isolation rather than embedding it into workflows.

Best Practices for AI in Productivity

✅ Automating Repetitive Tasks – Use AI for emails, lesson plans, meeting summaries, and curriculum mapping.
✅ Data-Driven Decision Making – Leverage AI for institutional research, performance analysis, and predictive modeling.
✅ AI-Integrated Workflows – Combine AI with tools like Notion, Power Automate, or Excel for efficiency.
✅ Cross-Department Collaboration – Encourage AI use across teaching, research, and administration.

📌 Example:
Instead of manually compiling student feedback, ask:
➡️ “Summarize common themes from these course evaluations and suggest three actionable improvements.”


AI Ethics & Policy Awareness

Common Pitfalls

❌ Ignoring AI Bias – Assuming AI-generated content is neutral.
❌ Lack of Institutional Strategy – Using AI informally without considering long-term policy impacts.
❌ No AI Literacy Education – Failing to teach faculty and students how AI generates responses and where it may be flawed.

Best Practices for Ethical AI Use

✅ Addressing Bias and Misinformation – Recognize and mitigate biases in AI-generated content.
✅ Aligning with Institutional Policies – Ensure AI use aligns with academic integrity policies.
✅ Transparency in AI Use – Disclose when AI is used in policy-making, research, or instructional design.
✅ Advocating for AI Literacy – Educate peers and students on how AI works and its limitations.

📌 Example:
When discussing AI use in assessments, ask:
➡️ “What are ethical guidelines for allowing AI-generated content in student assignments? Provide examples from other colleges’ policies.”


Final Takeaways: How Experienced Users Maximize AI’s Potential

✅ They treat AI as a collaborator, not just a tool.
✅ They refine AI-generated content to fit specific needs.
✅ They integrate AI into workflows for maximum efficiency.
✅ They approach AI critically, ensuring it enhances learning rather than replaces thinking.
✅ They use AI across teaching, research, and administration, maximizing its impact.

Shaping the Future of AI in Education

AI is not going away—it’s becoming part of how we work and learn. Faculty have the opportunity to shape how students engage with AI responsibly.

Whether experimenting with AI for personal productivity or integrating it into coursework, the key is to approach AI with curiosity, critical thinking, and ethical awareness.

AI Resources

Visit our AI Resources page for more information and for blogs on AI and education you may want to check out. If you know of other AI resources worth sharing, please let us know by leaving a comment in the Professional Development Teams channel.
