Run A Hands-On AI Agent Workshop That Actually Creates Value
If you want employees to adopt generative AI, put them in a room, give them real data, and require working demos by the end of the sprint.
The fastest path is a hands-on build workshop where small teams ship functional agents on a tight clock, present 5-minute demos to executive judges, and leave with a shortlist of pilots. Publish a simple internal schedule, pre-load a sandbox with governed datasets, and staff expert coaches who can unblock teams in the moment. That is how you convert interest into capability and capability into measurable outcomes. Then point your builders at a credible pattern from the field and make it your own.
The American Society for Nondestructive Testing ran exactly this kind of experience at its 2025 annual conference in Orlando. The dedicated session description outlined the 2-day build-and-compete format, inviting participants to create custom agents to solve nondestructive testing problems and face peer judging.
For them, the contest produced more than applause. It created credible use cases that practitioners and sponsors could carry back to plants, labs, and job sites. That is the kind of narrative your company needs when you ask leaders to fund pilots and harden prototypes.
What Happened at ASNT’s AI Agent Battle
ASNT primed the pump before the in-person contest. A public webinar on AI agents in NDT introduced patterns, build steps, and governance expectations, lowering activation energy for first-time builders. The conference schedule also included adjacent AI-focused sessions, such as an applied automation talk that reinforced practical workflows. Event coverage published 2 weeks after the conference highlighted the AI Agent Battle as one of the week’s memorable moments and noted the broader momentum around AI, data, and demos.
The structure mattered. Attendees did not sit through long lectures. Instead, they built agents tied to real inspection tasks, iterated publicly, and delivered results by a deadline. That format aligns with strong evidence that active learning outperforms lecture-first instruction, including a well-cited meta-analysis that found higher performance and lower failure rates when learners engage directly with problems. Reviews of project-based learning show similar gains, as documented in a recent higher-education review and a science-education meta-analysis. Research on hackathon-style builds also indicates improved teamwork, problem-solving, and persistence when the event is time-bounded and well-coached, as summarized in a 2024 systematic review and a complementary educational evaluation.
As ASNT COO Barry Schieferstein noted after the event: “I was struck by how the AI Agent Challenge transformed what a conference experience can be. Instead of talking about innovation, our members were building it, creating real AI agents that connect directly to nondestructive testing practice. For ASNT, this was more than a workshop; it was a statement about how associations can lead their industries into the future. We proved that hands-on, coached learning not only transfers skills faster but also creates deeper engagement for members and sponsors alike. It showed that associations can be at the forefront of applied technology, not just in what we teach but in how we learn together.”
Editor’s Note: This is part of an ongoing series examining generative AI and its continuing impact on the business world.
Lessons For Leaders And How To Adapt The Model
1. Start with the Outcome You Want Executives to See
Require working demos that reduce cycle time or error rate on real work, not simulated tasks. Borrow the visibility play from ASNT’s public agenda and publish an internal schedule that names the competition, sets start and end times, and defines judging criteria that reward measurable improvement. Anchor the event in a recognizable venue and invite decision-makers to the demo window so they can witness the bar you are setting.
2. Design the Learning Arc End-to-End
Offer a 90-minute orientation one week before the sprint, just as ASNT primed builders with its preparatory webinar. In that session, showcase 3 agent patterns your teams actually need, such as a compliance report generator, a decision-support method selector, or a frontline equipment advisor. Provide a sandbox that mirrors production constraints. Seed it with governed, redacted, or synthetic datasets. Staff an expert facilitator to unblock integration, data access, and model routing. Active learning evidence shows that guided practice beats passive content because people learn while doing the work they will repeat at their desks.
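For concreteness, here is a minimal Python sketch of the first pattern, a compliance report generator. Every name in it is an illustrative assumption: the InspectionRecord fields, the prompt, and the call_model placeholder, which stands in for whatever governed endpoint your sandbox exposes, not any specific vendor API.

```python
from dataclasses import dataclass

@dataclass
class InspectionRecord:
    asset_id: str
    method: str      # e.g., "UT" (ultrasonic), "RT" (radiographic)
    finding: str
    severity: str    # "minor", "major", or "critical"

PROMPT_TEMPLATE = (
    "You are drafting a compliance summary. Using only the records below, "
    "write one paragraph per asset and flag any critical findings first.\n\n{records}"
)

def call_model(prompt: str) -> str:
    """Placeholder: wire this to whatever governed endpoint your sandbox exposes."""
    return f"[model response to a {len(prompt)}-character prompt]"

def generate_compliance_report(records: list[InspectionRecord]) -> str:
    # Ground the prompt in governed data rather than free-form chat.
    rows = [f"{r.asset_id} | {r.method} | {r.severity} | {r.finding}" for r in records]
    return call_model(PROMPT_TEMPLATE.format(records="\n".join(rows)))

if __name__ == "__main__":
    demo = [InspectionRecord("V-101", "UT", "wall thinning near weld seam", "major")]
    print(generate_compliance_report(demo))
```

The value of the pattern is the grounding step: the agent drafts only from the records it is handed, which keeps every demo auditable against the sandbox data.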
3. Treat the Sprint as a Product, Not a Class
Give the program a name, define the publishing rules, and state the deliverables up front. Require 3 artifacts from every team by the final bell: a 1-paragraph problem statement, a must-have capability checklist, and a data plan that names sources and permissions. Record the 5-minute demos and publish them on an internal portal that mirrors the discoverability of ASNT’s events hub. Tag entries by workflow and data domain. Provide a short, standardized request form for productionization. Commit to a 2-week window to stand up the top prototypes as controlled pilots.
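One way to make those deliverables checkable at the final bell is a shared intake schema. Here is a hypothetical Python sketch; every field name is an assumption to adapt to your own portal and request form.

```python
from dataclasses import dataclass

@dataclass
class TeamSubmission:
    team: str
    problem_statement: str           # the 1-paragraph problem statement
    capability_checklist: list[str]  # must-have capabilities
    data_plan: dict[str, str]        # data source -> permission and owner
    workflow_tag: str                # e.g., "inspection-reporting"
    data_domain_tag: str             # e.g., "ultrasonic-scans"
    demo_video_url: str = ""         # filled in once the recording is posted

    def is_complete(self) -> bool:
        # The final-bell gate: all 3 required artifacts must be present.
        return bool(
            self.problem_statement.strip()
            and self.capability_checklist
            and self.data_plan
        )

entry = TeamSubmission(
    team="Team Weldline",
    problem_statement="Inspectors spend hours compiling scan findings into reports.",
    capability_checklist=["ingest scan logs", "draft report", "flag critical findings"],
    data_plan={"ut_scan_logs": "approved; owner: NDT data steward"},
    workflow_tag="inspection-reporting",
    data_domain_tag="ultrasonic-scans",
)
assert entry.is_complete()
```

A structured entry also makes the portal’s workflow and data-domain tags searchable instead of free text.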
4. Invite Partners Without Losing Neutrality
ASNT’s competition sat in a larger marketplace of talks and exhibits. You can replicate the effect by designating a small number of integration slots for ecosystem tools under strict governance. Publish neutral judging criteria, cap the number of slots, and require that all demos show the problem, the workflow, and the measurable result. When stakeholders experience tools within credible workflows rather than slide decks, behaviors change and pipelines form.
5. Close the Loop with Governance and Scale
Use the sprint to surface data dependencies, security risks, and monitoring needs while momentum runs high. Ask every team to submit a 1-page risk register with owners and mitigation steps. Establish a lightweight review process to approve the top prototypes for limited pilots. Tie your next quarterly build to refreshed business priorities and begin with quick updates from prior winners that show movement on cycle time, defect rates, or satisfaction. Over time, you will accumulate a library of approved, reusable agents and a standing competition that sources the next set of candidates.
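The review itself can be mechanical. As a minimal sketch, assuming a simple owner-and-mitigation convention for the risk register, a pilot gate might refuse approval while any entry is incomplete:

```python
# Illustrative risk register entries; field names are assumptions.
risks = [
    {"risk": "PII in inspection notes", "owner": "data steward", "mitigation": "redact at ingest"},
    {"risk": "model drift on new asset types", "owner": "", "mitigation": "monthly eval set"},
]

def ready_for_pilot(register: list[dict]) -> list[str]:
    """Return blocking issues; an empty list means the gate passes."""
    issues = []
    for entry in register:
        if not entry.get("owner"):
            issues.append(f"unowned risk: {entry['risk']}")
        if not entry.get("mitigation"):
            issues.append(f"no mitigation for: {entry['risk']}")
    return issues

print(ready_for_pilot(risks))  # ['unowned risk: model drift on new asset types']
```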
6. Keep Measuring
Track participation rates, number of working demos, percentage of demos promoted to pilots, and time from demo to production. Compare agent outcomes against baselines for throughput, quality, and cost. Publish those deltas alongside the demo videos on your internal portal. When leaders see tangible improvements and a consistent pipeline of vetted agents, the investment conversation gets easier. The ASNT model shows that clarity of framing, hands-on execution, and public visibility drive adoption. Your adaptation turns that energy into durable capability inside the business.
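Those funnel numbers are simple ratios, but they drift unless you compute them the same way each quarter. A minimal sketch with placeholder figures, assuming you log counts at each funnel stage:

```python
from statistics import median

# Placeholder counts for illustration; substitute your program's real data.
invited, participated = 120, 84
working_demos, promoted_to_pilot, in_production = 18, 6, 2
days_demo_to_production = [21, 34]  # one entry per agent now in production

print(f"participation rate: {participated / invited:.0%}")                   # 70%
print(f"demo-to-pilot rate: {promoted_to_pilot / working_demos:.0%}")        # 33%
print(f"pilot-to-production rate: {in_production / promoted_to_pilot:.0%}")  # 33%
print(f"median days, demo to production: {median(days_demo_to_production)}") # 27.5
```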
Say Goodbye to Slide Decks
Run a focused build where employees ship real agents, present fast demos, and leave with pilot candidates. Use ASNT’s AI Agent Battle as a pattern. Combine that playbook with what the active learning research and hackathon reviews already tell us about time-boxed, coached builds. Leaders who adapt this model will not leave with slide decks. They will leave with demos, data plans, and a repeatable engine that turns education into deployment and deployment into measurable revenue.
The information and opinions presented are the author’s own and not those of Vistage Worldwide, Inc.
