
A Comprehensive Benchmark for Evaluating Text-to-Audio Models
Text-to-Audio (TTA) generation has made rapid progress, but current evaluation methods remain narrow, focusing mainly on perceptual quality while overlooking robustness, generalization, and ethical concerns. We present TTA-Bench, a comprehensive benchmark for evaluating TTA models across functional performance, reliability, and social responsibility. It covers seven dimensions, including accuracy, robustness, fairness, and toxicity, and comprises 2,999 diverse prompts constructed through automated and manual methods. We introduce a unified evaluation protocol that combines objective metrics with over 118,000 human annotations from both experts and general users. Ten state-of-the-art models are benchmarked under this framework, offering detailed insights into their strengths and limitations. TTA-Bench establishes a new standard for holistic and responsible evaluation of TTA systems.
We identify seven aspects that are important for real-world model deployment, including accuracy, robustness, fairness, and toxicity.
Key Features
10 Models
16 Metrics
50 Scenarios

Daily Life Scenes
Covering various sounds from daily life, such as family activities and social occasions

Natural & Outdoor Scenes
Including various sounds in natural environments, such as weather, animals, and plants

Work & Production Scenes
Covering various sounds in workplace and production environments

Art & Cultural Scenes
Including sounds from artistic performances and cultural activities

Transportation & Travel Scenes
Covering various sounds from transportation and travel scenarios
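To make the organization above concrete, the sketch below shows one way a benchmark prompt entry and a per-dimension evaluation loop could be represented in code. This is a minimal illustration under assumed names: the PromptEntry class, its fields (prompt_id, dimension, scenario), and the evaluate_model function are hypothetical and do not reflect the official TTA-Bench format or API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


# Hypothetical schema: field names are illustrative, not the official TTA-Bench format.
@dataclass
class PromptEntry:
    prompt_id: str
    text: str        # the text prompt given to the TTA model
    dimension: str   # one of the seven evaluation dimensions, e.g. "robustness"
    scenario: str    # one of the 50 scenario categories, e.g. "Daily Life Scenes"


def evaluate_model(
    generate_audio: Callable[[str], bytes],
    prompts: List[PromptEntry],
    metrics: Dict[str, Callable[[bytes, PromptEntry], float]],
) -> Dict[str, Dict[str, float]]:
    """Generate audio for every prompt and average each metric within each dimension."""
    totals: Dict[str, Dict[str, float]] = {}
    counts: Dict[str, int] = {}
    for entry in prompts:
        audio = generate_audio(entry.text)
        counts[entry.dimension] = counts.get(entry.dimension, 0) + 1
        dim_scores = totals.setdefault(entry.dimension, {})
        for name, metric_fn in metrics.items():
            dim_scores[name] = dim_scores.get(name, 0.0) + metric_fn(audio, entry)
    # Average accumulated metric scores within each dimension.
    return {
        dim: {name: total / counts[dim] for name, total in scores.items()}
        for dim, scores in totals.items()
    }
```

In a full pipeline, the metric callables would presumably correspond to the benchmark's 16 objective metrics, with the resulting per-dimension averages reported alongside the aggregated human annotations.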