@ysolve/story-visualizer
v0.0.8
A React component for visualizing historical stories with text-to-speech and animated avatars.
Story Visualizer for O-CITY
Description
This project is a React component that transforms cultural heritage information from the O-CITY platform into an interactive audiovisual experience.
Instead of displaying plain text and images, it generates a "simulated video" that includes:
- Voice narration using ElevenLabs TTS
- Automatic translation with Google Translate
- Synchronized subtitles
- An animated avatar that accompanies the narration
The goal is to provide users with a more immersive and engaging way to explore cultural routes, in multiple languages, going beyond static content.
Key Technologies
- React 19
- Vite (development and build)
- TypeScript
- ElevenLabs TTS API (voice synthesis)
- Google Translate API (translation)
- TensorFlow.js (text embeddings and processing)
Installation and Usage (local development)
Clone the repository:
git clone https://github.com/ocity-org/videotts.git
cd videotts
Install dependencies:
npm install
Run in development mode:
npm run dev
Build for production:
npm run build
Roadmap
- Initial prototype as a React application
- Migration to an NPM library for integration into the O-CITY frontend
- Improved audio-subtitle synchronization
- Integration of custom AI models for translation and narration
- Extended avatar customization and animations
Contributing
This project is currently under development as part of the O-CITY platform. In the future, it will be published as an NPM package for standalone usage.
Example usage
import ReactDOM from "react-dom/client";
import { TextToVideo } from "./textToVideo";
...
<TextToVideo
heritageItems={heritageItems}
targetLanguage="es"
descriptionLength="short"
/>
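
The `heritageItems` prop takes a list of heritage entries to narrate. A minimal sketch of a plausible item shape is shown below; the `HeritageItem` interface and its field names (`title`, `description`, `imageUrl`) are assumptions for illustration, and the actual type exported by the package may differ.

```typescript
// Hypothetical shape of a heritage item passed to the component.
// The real interface in @ysolve/story-visualizer may differ.
interface HeritageItem {
  title: string;        // name of the cultural site
  description: string;  // text that will be narrated and subtitled
  imageUrl?: string;    // optional image shown alongside the avatar
}

const heritageItems: HeritageItem[] = [
  {
    title: "Cathedral of Valencia",
    description:
      "A Gothic cathedral built between the 13th and 15th centuries.",
    imageUrl: "https://example.com/cathedral.jpg",
  },
];
```

With data in this shape, the component can translate the description to the `targetLanguage` and generate the narration and subtitles from it.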
