
Playbook
Playbook's virtual reality prototyping platform lets users design 3D interfaces for immersive environments instantly, with no prior software development experience required.
Date | Investors | Amount | Round
---|---|---|---
- | N/A | - | -

Total funding: 000k

USD | 2022 | 2023
---|---|---
Revenues | 0000 | 0000
EBITDA | 0000 | 0000
Profit | 0000 | 0000
EV | 0000 | 0000
EV / revenue | 00.0x | 00.0x
EV / EBITDA | 00.0x | 00.0x
R&D budget | 0000 | 0000

Source: Dealroom estimates
Headquartered in Los Angeles, California, Playbook VR, Inc. is the developer of Playbook, a generative media platform designed to accelerate 3D production for spatial computing (AR/VR) and visual effects (VFX). The company was founded in 2020 by Jean-Daniel LeRoy and Skylar Thomas, who developed the initial concept as a capstone project at the University of Southern California, driven by a shared interest in the future of entertainment. Thomas, the CTO, brings a decade of experience in VFX, animation, and full-stack development, including work with Meta (then Oculus) and Xbox. This background in 3D and AI informs the platform's core functionality.
Playbook operates as a web-based, diffusion-powered rendering engine that integrates with production pipelines through its editor and API. The platform speeds up content creation by letting users build and animate scenes, then apply AI to render high-fidelity images and videos. It offers granular control: artists can prompt individual objects and lighting within a 3D scene while maintaining temporal and spatial consistency across animated sequences. This is achieved by combining 3D scene data with generative AI models such as Stable Diffusion, enabling precise camera control and the ability to re-texture entire scenes or specific elements using masks. The business model appears to be subscription-based, with plans that include shared credit pools for teams, alongside enterprise-level offerings focused on commercial rights, security, and team training.
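The per-object prompting and masked re-texturing workflow described above can be pictured as a structured render request. The function, field names, and payload shape below are hypothetical illustrations of the general pattern, not Playbook's actual API:

```python
import json


def build_render_request(scene_id, object_prompts, global_style, mask_object_ids=None):
    """Assemble a hypothetical render-request payload: one generative
    prompt per scene object, an overall style prompt, and an optional
    list of object IDs whose masks restrict re-texturing."""
    payload = {
        "scene_id": scene_id,
        "style": global_style,
        # One prompt per individual object in the 3D scene.
        "objects": [
            {"object_id": oid, "prompt": prompt}
            for oid, prompt in object_prompts.items()
        ],
        # Masks limit diffusion-based re-texturing to selected objects.
        "masks": mask_object_ids or [],
        # 3D conditioning signals keep frames spatially and temporally
        # consistent across an animated camera path.
        "conditioning": ["depth", "normals", "camera_path"],
    }
    return json.dumps(payload)


request_body = build_render_request(
    scene_id="scene-42",
    object_prompts={
        "chair-1": "weathered oak chair",
        "lamp-1": "brass art-deco lamp",
    },
    global_style="moody film noir lighting",
    mask_object_ids=["chair-1"],
)
```

The design point this sketch captures is that prompts attach to objects rather than to the whole frame, which is what allows re-texturing one element while leaving the rest of the scene untouched.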
The service is designed for collaborative, real-time editing, allowing multiple users in different locations to work on the same scene simultaneously, similar to tools like Figma. Users can start with a basic 3D scene, import models in GLB/glTF formats, and leverage AI to generate textures, styles, or even new 3D objects from text prompts through integrations like Tripo. This workflow merges traditional 3D techniques with generative AI, empowering filmmakers, game developers, and designers to quickly prototype and create visually complex content directly from a web browser.
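Since the workflow starts from GLB/glTF imports, a minimal sketch of the GLB binary container those assets use may help. This follows the public glTF 2.0 specification (12-byte header plus a JSON chunk); it is not Playbook's importer:

```python
import json
import struct


def build_minimal_glb(gltf_json: dict) -> bytes:
    """Pack a glTF document into a minimal GLB container:
    a 12-byte header followed by a single JSON chunk."""
    body = json.dumps(gltf_json).encode("utf-8")
    body += b" " * (-len(body) % 4)  # JSON chunk is space-padded to 4 bytes
    # Chunk = length, type ('JSON' = 0x4E4F534A), then the payload.
    chunk = struct.pack("<II", len(body), 0x4E4F534A) + body
    # Header = magic ('glTF' = 0x46546C67), version 2, total file length.
    header = struct.pack("<III", 0x46546C67, 2, 12 + len(chunk))
    return header + chunk


def read_glb_json(data: bytes) -> dict:
    """Validate the GLB header and return the embedded glTF JSON."""
    magic, version, length = struct.unpack_from("<III", data, 0)
    assert magic == 0x46546C67 and version == 2 and length == len(data)
    chunk_len, chunk_type = struct.unpack_from("<II", data, 12)
    assert chunk_type == 0x4E4F534A  # first chunk must be JSON
    return json.loads(data[20:20 + chunk_len])


glb = build_minimal_glb({"asset": {"version": "2.0"}, "scenes": [{"nodes": []}]})
doc = read_glb_json(glb)
```

GLB bundles the scene description (and optionally binary geometry) into one file, which is why it is a common interchange format for web-based 3D editors.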
Keywords: generative media, AI rendering, 3D production pipeline, spatial computing, VFX, virtual production, diffusion-based rendering, real-time 3D, collaborative design tool, AR/VR development, Jean-Daniel LeRoy, Skylar Thomas, web-based 3D editor, ComfyUI, 3D animation, generative AI, media and entertainment, video game production, AI video generation, scene creation, AI-powered design