Date and Time
Wednesday, June 18, 2025
12:00 PM - 1:00 PM CDT
Location
Zoom
Fees/Admission
Free
Description
As artificial intelligence continues to evolve, organizations are discovering that bigger isn't always better when it comes to language models. This webinar shows business owners how to distill large language models (LLMs) into smaller, specialized models that deliver targeted performance at a fraction of the cost.
In this session, we'll explore techniques like knowledge distillation and model pruning to create more efficient AI solutions for your business. We'll discuss how these streamlined models can run more effectively on your devices, reducing costs while maintaining the intelligence and accuracy your business requires.
Learning Objectives:
- Understanding the business case for model distillation versus large-scale deployments
- Mastering knowledge distillation and pruning techniques for your specific use cases
- Implementing specialized AI solutions within existing infrastructure constraints
- Building a distributed AI strategy that scales across workstations
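To give a concrete flavor of the knowledge distillation technique covered in the session, here is a minimal sketch in Python using PyTorch. The teacher and student architectures, temperature, and loss weighting below are illustrative assumptions for demonstration only, not material from the webinar.

```python
# Minimal knowledge-distillation sketch (illustrative; models and hyperparameters are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical large "teacher" and small "student" classifiers.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0      # temperature used to soften both output distributions
alpha = 0.5  # weight between distillation loss and hard-label loss

def distill_step(x, y):
    """One training step: the student matches the teacher's softened outputs plus the true labels."""
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened distributions, scaled by T^2 as is standard.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, y)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data, just to show the call shape.
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
print(distill_step(x, y))
```

Model pruning, mentioned alongside distillation, takes a different route: instead of training a smaller model to imitate a larger one, it removes low-importance weights or layers from an existing model to shrink its footprint.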
Funded in part through a Cooperative Agreement with the U.S. Small Business Administration. All opinions, conclusions, and/or recommendations expressed herein are those of the author(s) and do not necessarily reflect the views of the SBA. By registering for this event, you agree to receive email communications from SCORE based on the information collected. Click HERE to view SCORE Terms and Conditions and Privacy Statements.
Presenter
Jerome Gabryszewski
AI & Data Science Business Development Manager, HP
Jerome Gabryszewski is an AI & Data Science Business Development Manager with a decade of experience at HP, driving innovation across multiple roles.