---
title: "Luma Launches Luma Agents, Powered by Unified Intelligence, Purpose-Built for Creative Work"
type: "News"
locale: "zh-CN"
url: "https://longbridge.com/zh-CN/news/277974491.md"
description: "Luma has launched Luma Agents, a new class of AI collaborators designed to enhance creative workflows for agencies and enterprises. Built on a Unified Intelligence architecture, these agents execute end-to-end creative tasks across media formats while maintaining context throughout. Luma Agents are already deployed with global partners such as Publicis Groupe and Serviceplan Group, streamlining creative output so teams can focus on strategy and quality rather than tool orchestration. The technology advances multiple creative directions in parallel and refines outputs, improving the efficiency and consistency of creative production."
datetime: "2026-03-05T18:15:00.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/277974491.md)
  - [en](https://longbridge.com/en/news/277974491.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/277974491.md)
---

# Luma Launches Luma Agents, Powered by Unified Intelligence, Purpose-Built for Creative Work

_Built on Luma’s new Unified Intelligence architecture, Luma Agents introduce a new category of AI collaborators and are deployed today with global enterprise partners, including Publicis Groupe and Serviceplan Group_

PALO ALTO, Calif.--(BUSINESS WIRE)-- Luma today announced the launch of Luma Agents, a new class of AI collaborators capable of executing end-to-end creative work across text, image, video, and audio. Designed for agencies, marketing teams, studios, and enterprise organizations that aspire to scale creative output without sacrificing quality, Luma Agents maintain full context from the initial brief to final delivery – coordinating tools, models, and iterations within a single unified system.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260305354123/en/

“Creative work has never lacked ambition; it’s lacked execution capacity,” said Amit Jain, Co-Founder and CEO of Luma. “Creative teams shouldn’t have to spend their time orchestrating tools. They should spend it creating. Agents aren’t shortcuts. They’re collaborators that maintain context, coordinate execution, and advance projects so teams can focus on taste, direction, and strategy.”

For the past several years, most AI systems have been assembled by chaining together separate models for language, vision, video, and reasoning — stitching outputs together through orchestration layers. While powerful in isolation, these systems fragment context and require increasingly complex workflows to produce reliable creative results.

Luma believes intelligence should not be assembled in pieces; it should be built as one coherent system.

**Creative Agents That Make You Prolific**

Luma Agents replace fragmented, multi-model workflows with coordinated execution built on unified reasoning. Instead of switching between disconnected tools and rebuilding context at every step, teams work alongside Agents that:

-   Execute projects end-to-end, from planning through production and delivery
-   Maintain shared context across text, image, video, and audio
-   Advance multiple creative directions in parallel
-   Evaluate and refine outputs instead of generating one-shot results
-   Integrate into enterprise tools and production systems via API

Agents operate inside a collaborative, multiplayer environment where humans direct creative intent and Agents handle orchestration, routing, and execution – resulting in more output, greater consistency, and higher creative velocity.

**Deployed at Global Scale**

Luma Agents are already embedded across global agency operations.

Publicis Groupe and Serviceplan Group are deploying Luma Agents across strategy, creative development, and production workflows to increase throughput while maintaining brand consistency across markets.

“Luma is now part of our broader House of AI ecosystem and integrated directly into our creative workflows. It allows our teams across more than 20 countries to collaborate more smoothly and develop great work faster. For our clients, that means high-quality creative output delivered with greater speed and efficiency – without compromising craft,” says Alexander Schill, Global CCO at Serviceplan Group.

**Built on Unified Intelligence**

Luma Agents are built on Unified Intelligence, a new model architecture designed to move beyond the industry’s prevailing approach of assembling intelligence in pieces. Instead of chaining together separate models for language, vision, and generation, Unified Intelligence trains a single multimodal reasoning system capable of understanding and generating across formats within the same architecture.

For the past several years, most AI systems have been assembled as pipelines: one model writes text, another generates images, another processes video, and orchestration layers attempt to stitch their outputs together. While effective for narrow tasks, these systems fragment reasoning, lose context between steps, and require complex workflows to produce reliable results.
Rather than separating thinking from creation, Unified Intelligence tightly couples reasoning and rendering, allowing the system to plan, imagine, and produce as part of one coherent cognitive process.

When a human architect sketches a building, they are not simply drawing lines – they are simultaneously simulating structure, light, spatial dynamics, and lived experience. Reasoning and imagination happen together. Unified Intelligence is built on the same principle.

The first model built on this architecture is Uni-1.

Uni-1 is a decoder-only autoregressive transformer operating over a shared token space that interleaves language and image tokens, allowing both modalities to function as first-class inputs and outputs in the same sequence. This design enables the model to reason in language while imagining and rendering in pixels within the same forward pass.

Rather than generating outputs step-by-step across disconnected systems, Uni-1 can plan, visualize, and produce creative artifacts as part of a single coherent reasoning process. The result is a foundation where thinking and creation are tightly coupled, much closer to how human intelligence works.
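A minimal sketch of the interleaved token layout described above: language and image tokens share one id space in a single autoregressive sequence. All vocabulary sizes, token ids, and marker tokens here are invented for illustration and are not Uni-1's actual implementation:

```python
# Toy illustration of a shared token space that interleaves language
# and image tokens in one sequence. Sizes and ids are hypothetical.

TEXT_VOCAB = 50_000           # assumed text token ids: 0 .. 49_999
IMAGE_VOCAB = 8_192           # assumed image codebook ids, offset after text
BOI = TEXT_VOCAB + IMAGE_VOCAB      # begin-of-image marker
EOI = TEXT_VOCAB + IMAGE_VOCAB + 1  # end-of-image marker

def image_token(codebook_id: int) -> int:
    """Map an image codebook entry into the shared id space."""
    assert 0 <= codebook_id < IMAGE_VOCAB
    return TEXT_VOCAB + codebook_id

def is_image_token(tok: int) -> bool:
    """True if the shared-space id falls in the image range."""
    return TEXT_VOCAB <= tok < TEXT_VOCAB + IMAGE_VOCAB

# One sequence mixing a text prompt, an "imagined" image, and more text.
sequence = (
    [101, 7_204, 9_310]                                      # fake text ids
    + [BOI] + [image_token(i) for i in (5, 900, 42, 17)] + [EOI]
    + [2_551]                                                # text continuation
)

modalities = ["image" if is_image_token(t) else "text" for t in sequence]
print(modalities)
```

Because both modalities live in the same sequence, a single decoder can attend to the text of a brief while emitting image tokens, which is the "same forward pass" property the release describes.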

Built on top of this unified architecture, Luma Agents can coordinate complex creative workflows that previously required multiple tools and manual orchestration. They can:

-   Coordinate across leading AI models, including Ray3.14, Veo 3, Sora 2, Kling 2.6, Nano Banana Pro, Seedream, GPT Image 1.5, and ElevenLabs
-   Automatically select and route tasks to the best model or capability for each step
-   Maintain persistent context across assets, collaborators, and creative iterations
-   Evaluate and refine outputs, improving results through iterative self-critique
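The model-routing behavior in the list above can be pictured as a capability-based dispatcher. The model names below come from the release, but the capability table, interfaces, and selection logic are purely illustrative, not Luma's actual routing:

```python
# Illustrative task router: pick a backend model by required modality.
# The capability table is an assumption for this sketch.

ROUTES = {
    "video": ["Ray3.14", "Veo 3", "Sora 2", "Kling 2.6"],
    "image": ["Nano Banana Pro", "Seedream", "GPT Image 1.5"],
    "audio": ["ElevenLabs"],
}

def route(task: dict) -> str:
    """Return a model name for the task's modality."""
    candidates = ROUTES.get(task["modality"], [])
    if not candidates:
        raise ValueError(f"no model registered for {task['modality']!r}")
    # A production router would score candidates on cost, latency,
    # and quality for the specific step; this sketch takes the first.
    return candidates[0]

print(route({"modality": "image", "prompt": "hero shot of a red chair"}))
```

The interesting part of such a system is less the dispatch table than the persistent context wrapped around it, which is what lets each routed step see the original brief and prior iterations.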

Together, these capabilities allow Luma Agents to function not as isolated generation tools, but as collaborative AI creatives capable of executing end-to-end creative work.

“Intelligence shouldn’t be fragmented by modality,” added Jain. “Unified systems reason holistically. When the same model can think, imagine, and render, you move closer to intelligence that behaves coherently across the entire creative process.”

**Enterprise-Ready by Design**

Luma Agents are designed for enterprise environments where intellectual property protection, compliance, and operational scale are critical. Key enterprise safeguards include:

-   Full IP ownership retained by customers
-   Automated content review to reduce copyright risk
-   Legal trace documentation demonstrating human involvement
-   Required human review workflows prior to public release
-   Cloud-based infrastructure with enterprise-grade guardrails

**About Luma**

Luma, based in Palo Alto, California, builds unified multimodal AI systems that combine reasoning and generation within a single model architecture. Its Unified Intelligence platform powers agents capable of planning and producing end-to-end creative work across video, imagery, and 3D, serving teams at leading advertising agencies, global enterprises, and entertainment studios. In 2025, Luma launched Ray3, the world’s first reasoning video model, followed by Ray3.14, delivering native 1080p outputs and production-grade stability for professional workflows. The company is backed by HUMAIN, Andreessen Horowitz, AWS, AMD Ventures, NVIDIA, Amplify Partners, Matrix Partners, and leading investors across technology and entertainment. For more information, visit www.lumalabs.ai

View source version on businesswire.com: https://www.businesswire.com/news/home/20260305354123/en/

Source: Luma
