Voice Recognition
The core engine for converting spoken language into executable text.
Basic Usage
Voice recognition is managed via the `start()`, `stop()`, and `toggle()` methods. JSVoice handles the complex state management of the `SpeechRecognition` API, including auto-restarts and error handling.
```javascript
const voice = new JSVoice();

// Start listening
await voice.start();

// Stop listening
voice.stop();

// Toggle between states
voice.toggle();
```

Configuration Options
Pass these options to the constructor to customize behavior.
| Option | Default | Description |
|---|---|---|
| continuous | true | Keep listening after a result is returned. |
| interimResults | true | Return results that are not yet final (good for real-time feedback). |
| lang | 'en-US' | Language code for recognition. |
| autoRestart | true | Automatically restart if the browser stops the engine. |
| engines | [] | Array of custom engine classes (e.g. WhisperEngine). |
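As a concrete illustration, here is a constructor call that overrides a few of the defaults above. The option names come from the table; the French-dictation use case is just an example:

```javascript
// Hypothetical configuration: final-only results in French.
const voice = new JSVoice({
  continuous: true,       // keep listening across utterances (the default)
  interimResults: false,  // only deliver finalized transcripts
  lang: 'fr-FR',          // recognize French instead of the default 'en-US'
  autoRestart: true,      // recover if the browser stops the engine
});
```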
Pluggable Engines
JSVoice uses a pluggable engine architecture. It ships with a browser-based NativeSpeechEngine by default, but you can swap in higher-accuracy cloud providers by passing custom engine classes via the `engines` option.
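To make the pluggable architecture concrete, here is a minimal sketch of what a custom engine class might look like. The engine interface shown here (a `start()`/`stop()` pair and an `onResult` callback assigned by the host) is an assumption for illustration, not the documented contract:

```javascript
// Hypothetical custom engine. The interface is assumed: JSVoice is
// presumed to assign an onResult callback and drive start()/stop().
class EchoEngine {
  constructor() {
    this.listening = false;
    this.onResult = null; // assumed to be set by the host before start()
  }

  async start() {
    this.listening = true;
    // A real engine would begin capturing audio here. For illustration,
    // immediately emit a fake final transcript.
    if (this.onResult) {
      this.onResult({ transcript: 'hello world', isFinal: true });
    }
  }

  stop() {
    this.listening = false;
  }
}
```

A real engine would stream microphone audio to its backend and invoke `onResult` as transcripts arrive, but the control surface stays the same.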
Using OpenAI Whisper
To use Whisper for improved accuracy (at the cost of latency/usage fees), inject the engine via the constructor.
```javascript
import { WhisperEngine } from 'jsvoice-engines';

const voice = new JSVoice({
  // Using a custom engine completely replaces the browser-native one
  engines: [
    new WhisperEngine({ apiKey: 'YOUR_OPENAI_KEY' })
  ]
});
```