Corporate Implementation of Generative AI: An Inside Look
Education firm integrates AI with deliberate approach, prioritizing user needs, data protection, and intellectual property
Darren Person, Executive Vice President and Chief Digital Officer at Cengage Group, details the company's thoughtful approach to incorporating generative AI into its products and services.
Over the past two years, many students and educators have turned to academic applications of generative AI. Some education companies have hastily incorporated the technology into their offerings, regardless of its applicability. Cengage Group, however, takes a measured and intentional approach.
The company's AI policy centers on serving the needs of its users, protecting their data, and safeguarding intellectual property. This calibrated approach ensures responsible and effective AI adoption.
- Serving user needs
At the ALA Annual Conference last summer, Cengage Group surveyed librarians about their AI-related challenges. Respondents expressed interest in using AI to improve learner outcomes and lighten teachers' workloads, but they were adamantly opposed to AI tools that replace research or undermine students' development of critical thinking.
Armed with this information, Cengage Group convened a workshop involving product, technology, and other teams to brainstorm AI solutions that addressed customers' most pressing problems. Since then, the company has developed several AI applications, prioritizing care and testing over flashy technologies.
Take, for example, the recent task of translating vast amounts of online content into multiple languages. The team investigated using a large language model (LLM), but in testing, traditional machine translation proved more effective than current LLMs for the job.
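To give a sense of how such a head-to-head comparison might be scored, here is a minimal sketch using corpus-level BLEU, a standard automatic translation metric. The example sentences, the choice of the sacrebleu library, and the two-system setup are illustrative assumptions, not a description of Cengage Group's actual evaluation, which would involve far larger test sets and human review alongside automatic metrics.

```python
# Illustrative comparison of two translation systems with corpus-level BLEU.
# The sentences below are invented; a real evaluation would use a large
# held-out test set and human reviewers in addition to automatic scoring.
import sacrebleu

# One human reference translation per source sentence.
references = [["The cell divides into two identical daughter cells."]]

# Hypothetical outputs from the two systems under comparison.
llm_output = ["The cell splits itself in two same daughter cells."]
mt_output = ["The cell divides into two identical daughter cells."]

llm_bleu = sacrebleu.corpus_bleu(llm_output, references)
mt_bleu = sacrebleu.corpus_bleu(mt_output, references)

# Higher BLEU indicates closer n-gram overlap with the references.
print(f"LLM BLEU: {llm_bleu.score:.1f}")
print(f"MT  BLEU: {mt_bleu.score:.1f}")
```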
When new generative AI tools are released, they will debut in beta so that researchers, educators, and students can contribute to their development through feedback.
- Securing data and privacy
Data breaches, like the recent one at PowerSchool, can affect millions of students and instructors. With 29 cybersecurity laws covering schools and learner data protection enacted across 20 states last year, Cengage Group places great importance on data security and user privacy. Its measured approach to integrating AI ensures that any new tools are as secure as possible.
- Upholding academic integrity
Alongside data and privacy, Cengage Group safeguards the intellectual property of scholars. Large language model (LLM) chatbots can deliver factual inaccuracies (known as "hallucinations") alongside helpful information. In contrast, articles published by recognized scholars in peer-reviewed journals meet clear editorial and academic standards and constitute valuable intellectual property that can be cited, reproduced, and corrected if necessary.
Cengage Group is exploring generative AI techniques like retrieval-augmented generation (RAG), which pair the conversational abilities of an LLM with retrieval from a curated set of academically relevant sources, limiting responses to that material rather than to whatever the model absorbed during training. This constraint ensures that the chatbot delivers information only from authoritative articles published with proper bylines and editorial standards.
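As a rough illustration of the RAG pattern, the sketch below embeds a small stand-in corpus, retrieves the passages most relevant to a question, and builds a prompt that confines the model to those sources. The corpus contents, the sentence-transformers embedding model, and the placeholder generate() call are assumptions for the example; Cengage Group's actual pipeline has not been made public.

```python
# Minimal RAG sketch: retrieve vetted passages, then constrain the model's
# answer to them. Corpus, model name, and generate() are illustrative
# assumptions, not Cengage Group's implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

# A tiny stand-in for a curated corpus of peer-reviewed articles.
corpus = [
    "Mitosis is the process by which a cell divides into two identical cells.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q_emb = encoder.encode([question], normalize_embeddings=True)
    scores = corpus_emb @ q_emb[0]  # cosine similarity (unit-norm vectors)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(question: str) -> str:
    """Instruct the LLM to answer only from the retrieved sources."""
    sources = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using ONLY the sources below. If they do not contain the "
        f"answer, say so.\n\nSources:\n{sources}\n\nQuestion: {question}"
    )

# answer = generate(build_prompt("What is mitosis?"))  # generate() stands in
# for whatever LLM API is in use.
print(build_prompt("What is mitosis?"))
```

Because the prompt both supplies the retrieved passages and instructs the model to refuse questions they cannot answer, responses stay grounded in the curated sources rather than the model's open-ended training data.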
This snapshot outlines Cengage Group's AI journey thus far. The technology is continually evolving, and the company's principles will guide its ongoing evaluation of AI to ensure that it continues to provide rich academic opportunities for researchers, educators, and students.