TL;DR
VoiceXD allowed teams to design and build AI assistants, but offered no way to test them with real stakeholders before publishing. I owned the design of Sharable Prototypes, a feature that lets teams share assistants as interactive, branded prototypes without pushing them to production. The feature reduced validation time, increased prototype testing, and unlocked a new acquisition channel through unregistered users trying assistants via shared links.
Context and Problems
VoiceXD supports the full lifecycle of conversational assistant creation, from flow design to publishing. However, a critical gap remained: teams had no way to test assistants before they went live.
Teams building assistants repeatedly ran into the same issue:
- Once an assistant was built, the only way to test it realistically was to publish it.
- Sharing screenshots or screen recordings stripped away the conversational context.
- Stakeholders, subject-matter experts, and end users couldn’t interact with assistants meaningfully.
This created three downstream problems:
- Bugs and broken flows surfaced only after publishing
- Conversation Designers hesitated to ship without proper validation
- There was no way for non-registered users to experience assistants early
Goals
Business goals:
- Reduce time from assistant design to publish
- Increase top-of-funnel sign-ups by letting unregistered users try assistants
User goals:
- Validate assistants quickly without publishing
- Share assistants with stakeholders in a realistic and interactive format
- Control what is tested, for how long, and how it appears
Understanding the users
While the end users of VoiceXD assistants would eventually benefit from better testing, the primary users of this feature were internal product teams responsible for building and validating assistants. Early on, I focused on understanding who was involved in testing today, what broke down, and why existing workarounds weren’t scaling.
Core user types
Through conversations and observation, three distinct user types emerged:
- Conversation designers: These users designed the assistant logic and flows. They needed a fast way to validate conversational paths without publishing to production or creating throwaway versions.
- Subject-matter reviewers: Often non-technical stakeholders, these users reviewed accuracy, tone, and coverage. They needed a realistic experience, but without access to the VoiceXD editor or setup overhead.
- Internal stakeholders (PMs, QA, leadership): These users evaluated readiness and risk. They needed clarity on what was being tested, confidence that the experience reflected production behavior, and control over access.
Each group had different needs, but they all intersected at the same failure point: there was no safe, shared testing surface.
Exploration and Design Iterations
Before converging on the final solution, I explored multiple directions to understand how teams wanted to configure, share, and experience prototypes. The goal was to stress-test assumptions and uncover failure points early.
1. Exploring prototype configuration (creator experience)
The first area of exploration focused on how designers configure a prototype before sharing it. For this, I evaluated multiple configuration approaches:
- Lightweight setups that allowed immediate sharing with minimal input
- More expressive configurations where designers could define scope, interaction modes, and context upfront
- Progressive disclosure models that revealed options only when needed
2. Exploring the shareable prototype experience (tester experience)
The second area of exploration focused on what happens after a prototype link is opened. This surface mattered just as much, because the quality of feedback depends heavily on how realistic and focused the testing experience feels. For this, I explored multiple entry experiences:
- Opening the prototype directly inside a simulated chat interface
- A lightweight landing page that framed the test before entering the prototype
- Context-aware pages that adapted based on how the prototype was configured
How these explorations informed the final design
Separating exploration across these two surfaces led to three insights:
- Designers need control, but only when it adds value
- Testers need clarity. Too many configuration options cause confusion
- The system should carry intent from creator to tester without manual explanation
These insights directly shaped the final feature set, ensuring that configuration decisions and the shared testing experience worked together as a cohesive system rather than as isolated features.
Feature: Prototype configuration (inside VoiceXD)
This feature lives within the assistant settings and brings all testing-related decisions together in one place.
Users can:
- Choose the interaction mode (chat, voice, or combined)
- Share a specific version of the assistant for testing
- Set access rules, including temporary or persistent links
- Apply brand styling so the experience feels polished and stakeholder-ready
- Select which scenarios or situations are available in the prototype
Problem solved
How do I control what gets tested, how it behaves, and who it’s for, without publishing to production?
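To make the scope of these decisions concrete, here is a minimal sketch of how a prototype configuration could be modeled. The type and field names (`PrototypeConfig`, `enabledScenarios`, and so on) are illustrative assumptions, not VoiceXD's actual data model.

```ts
// Illustrative sketch only — names and shapes are assumptions, not VoiceXD's data model.

type InteractionMode = "chat" | "voice" | "combined";

interface PrototypeAccess {
  // Temporary links expire; persistent links stay live until revoked.
  kind: "temporary" | "persistent";
  expiresAt?: Date; // only meaningful for temporary links
}

interface PrototypeConfig {
  assistantVersion: string;   // the specific version shared for testing
  mode: InteractionMode;      // chat, voice, or both
  access: PrototypeAccess;    // how long the link stays valid
  branding?: {                // optional styling so the prototype feels stakeholder-ready
    logoUrl?: string;
    primaryColor?: string;
  };
  enabledScenarios: string[]; // which scenarios testers can pick from
}

// Example: a temporary, chat-only prototype scoped to two scenarios.
const config: PrototypeConfig = {
  assistantVersion: "v0.9.2",
  mode: "chat",
  access: { kind: "temporary", expiresAt: new Date("2024-01-31") },
  branding: { primaryColor: "#1a73e8" },
  enabledScenarios: ["order-status", "refund-request"],
};
```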
Feature: Prototype testing experience (shared link)
When someone opens the prototype link, they see a simple testing page. The goal was to make testing effortless. No login, no instructions needed. Just open the link and start interacting.
This page:
- Clearly communicates that the assistant is a test prototype, not a live production bot
- Allows testers to choose a scenario (if enabled by the designer)
- Supports both chat and voice interactions, mirroring real usage
- Applies the creator’s brand styling for context and credibility
Problem solved
What does a tester see when they open the link, and how do we guide them toward giving meaningful feedback?
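As a rough illustration of how the shared link could drive this page, the sketch below resolves a share token into what the tester-facing page renders. All names and shapes (`resolveShareLink`, `SharedPrototype`) are hypothetical assumptions rather than VoiceXD's implementation.

```ts
// Illustrative sketch only — names and shapes are assumptions, not VoiceXD's API.
// Shows one way a share link could be resolved into what the tester page shows.

interface SharedPrototype {
  expiresAt?: Date;           // set for temporary links, absent for persistent ones
  mode: "chat" | "voice" | "combined";
  enabledScenarios: string[]; // scenarios the designer exposed for testing
}

type TesterView =
  | { status: "not-found" | "expired" }
  | { status: "ok"; mode: SharedPrototype["mode"]; showScenarioPicker: boolean };

// Hypothetical resolver: given a share token and a lookup function,
// decide what the tester-facing landing page should show.
function resolveShareLink(
  token: string,
  lookup: (token: string) => SharedPrototype | undefined
): TesterView {
  const proto = lookup(token);
  if (!proto) return { status: "not-found" };
  if (proto.expiresAt && proto.expiresAt.getTime() < Date.now()) {
    return { status: "expired" };
  }
  return {
    status: "ok",
    mode: proto.mode,
    // The scenario picker only appears when the designer enabled scenarios.
    showScenarioPicker: proto.enabledScenarios.length > 0,
  };
}
```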
Validation and Impact
Because this feature was shipped quickly, I focused on early signals to validate it.
We tested the experience with 7 early users, including conversation designers and internal stakeholders who regularly validated assistants before launch. Each participant was asked to:
- Share an assistant prototype
- Validate at least one scenario
- Compare the experience to their previous testing workflow
What we learned:
- 6 out of 7 participants said the feature reduced the time and effort needed to validate an assistant
- 5 out of 7 preferred this approach over screenshots or screen recordings for stakeholder reviews
- Multiple participants highlighted that being able to test without publishing helped them iterate faster
User feedback on the feature:
- "Sharing a link is infinitely better than explaining a flow over a call."
- "The ability to send a link to clients and have them test on their own devices can be a game changer for our approval process."
- "We'll be able to catch edge cases (in the conversation logic) that we would have missed until post-launch."