...


Introduction to AI in Education

AI is transforming education, reshaping how we teach, learn, and prepare students for the workforce. But what does that mean for faculty? How does AI work, and how can it be integrated responsibly into the classroom?

In this post, we’ll explore:

  • What AI is (and what it isn’t)

  • How AI impacts teaching and learning

  • How to prepare students for an AI-driven workforce

  • The ethical and practical considerations faculty should keep in mind

What is AI?

When people hear "AI," they often think of ChatGPT, but artificial intelligence extends far beyond chatbots. AI is already embedded in everyday technology, from facial recognition systems to voice assistants like Siri and Alexa. Some of the most common types of AI include:

  • Computer Vision – Powers applications like facial recognition and self-driving cars.

  • Speech Recognition – Converts spoken language into text, enabling tools like Google Assistant and dictation software.

  • Recommendation Systems – Personalizes content suggestions on platforms like Netflix and Amazon.

  • Generative AI – Produces new content, including text (ChatGPT, Copilot), images (DALL-E, Midjourney), and even music.

Large Language Models (LLMs) and Chatbots

The type of AI we focus on in this discussion is a Large Language Model (LLM)—the technology behind chatbots like ChatGPT and Microsoft’s Copilot. These models generate human-like responses based on vast amounts of training data.

While ChatGPT is the most widely recognized, several other AI chatbots are making an impact:

  • ChatGPT (OpenAI)

  • Copilot (Microsoft) – The only AI tool officially supported by Lambton College IT.

  • Gemini (Google) and NotebookLM (Google)

  • Claude (Anthropic) – A powerful but lesser-known competitor.

  • Llama (Meta) – Integrated into social media and business applications.

  • Perplexity – A research-focused AI that blends chatbot capabilities with web search.

  • AI Tutor Pro and AI Teaching Assistant Pro – Purpose-built educational AI tools.

How Do Large Language Models Work?

Many people think of AI chatbots as a "smarter Google," but they function very differently. Search engines retrieve existing content, whereas LLMs predict the next word in a sequence based on probability.

Think of it like predictive text on a smartphone—except on a much larger scale. These models generate responses by drawing from massive datasets, which means:

  • Their predictions are shaped by patterns in the training data.

  • They can reflect biases present in the information they were trained on.

  • Their accuracy depends on the quality and diversity of their training sources.

  • They sometimes produce hallucinations—plausible-sounding but false information.
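The "predictive text at a much larger scale" idea can be made concrete with a toy sketch. This is not how a real LLM works internally (real models use neural networks trained on billions of examples); it simply illustrates the core idea of predicting the most probable next word from observed patterns. The corpus and words below are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows each word in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" twice, vs. once for "mat" or "fish"
```

Notice that the "model" can only reproduce patterns in its training data: it predicts "cat" after "the" because that pairing was most frequent, which is exactly why biases and gaps in training data shape what an LLM produces.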

Memory and Context Windows

One key limitation of LLMs is their context window—the amount of recent input, measured in tokens, that they can consider when generating a response. Once that window is exceeded, the AI no longer "remembers" previous interactions.

For long or complex discussions, users may need to reintroduce context to keep conversations on track.
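A minimal sketch of why older turns "fall out" of a long conversation. Real systems count tokens rather than words and use much larger budgets; the word budget and messages here are made up purely to illustrate the mechanism:

```python
def fit_context(messages, max_words=20):
    """Keep only the most recent messages that fit within the word budget."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        words = len(msg.split())
        if used + words > max_words:
            break                       # older messages no longer fit the window
        kept.append(msg)
        used += words
    return list(reversed(kept))         # restore chronological order

chat = [
    "Please summarize chapter one of the course textbook for me",
    "Now write three quiz questions about it",
    "Make the second question multiple choice",
]
print(fit_context(chat, max_words=15))  # the oldest request no longer fits
```

Once the first message is dropped, the model has no idea what "it" in the second message refers to—which is why users often need to restate earlier context in long conversations.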

What Happens to Data Entered in an AI Chat?

One of the biggest concerns about AI is data privacy. What happens to the information users input into a chatbot?

  • AI models process inputs in real time and, by default, do not retain memory across sessions. OpenAI’s ChatGPT, for example, does not retain conversations permanently but may temporarily store chat inputs to improve model performance, detect abuse, and refine responses; Google’s Gemini, Anthropic’s Claude, and other models typically follow similar data-usage policies.

  • Some platforms temporarily retain chat data for training or quality improvement.

  • Users should assume that public AI models may analyze inputs, even if they don’t store them long-term.

  • Some models, particularly on paid subscriptions, offer settings to disable data retention or opt out of having inputs used for training.

Institutional vs. Public AI Use

At Lambton College, faculty and staff handling sensitive student data must use Microsoft’s Copilot and be logged in with their College credentials. Entering student information into ChatGPT or other public AI tools violates privacy policies.

For example, using ChatGPT to check a student’s essay for grammar would be a privacy violation, whereas using AI to generate discussion prompts would not. Faculty should consult IT if unsure about compliance.

...