The 1st Workshop on

Embodied4Arts: Robots as Creative Partners for Artistic, Expressive, and Craft Tasks

Creative embodied tasks (painting/drawing, music performance, and deformable craft) provide a high-impact testbed where robots must turn high-level intent into physically executable programs under constraints while optimizing style, expressivity, and human preference.
Date: TBD (RSS 2026 workshop day)
Location: TBD (RSS 2026 venue)
Format: Half-day (4 hours)
Embodied AI · Robot Learning · World Models · Creative Robotics · Deformables · Human Feedback
Embodied4Arts teaser (add assets/img/teaser_rss.png)
Figure. Representative creative embodied domains: (a) robotic painting/drawing, (b) computational craft, (c) fabrication-aware garments, (d) computational origami/program generation, and (e) robotic music performance.

About the Workshop

Robotics has entered an era of foundation models and data-driven policies, yet many benchmarks still emphasize utilitarian goals with binary outcomes. Embodied artistic creation offers a complementary frontier: robots must convert high-level intent into action while balancing hard constraints (contact, tool use, safety, deformable dynamics) with soft objectives (style, expressivity, aesthetics, human preference). Small modeling errors leave visible traces: "good" behavior is not only correct but also expressive, controllable, and interpretable.

Embodied4Arts reframes creative tasks as a driver for next-generation embodied intelligence: from generative models to generative motor programs. We aim to bring together robot learning, manipulation/control, world models, HRI, and creative AI/graphics to develop shared formulations and evaluation protocols.

Note: This website is a living document. Dates, venue details, and the final speaker list will be updated as RSS releases official workshop scheduling.

Topics of Interest

(1) Painting & Drawing as Programs

Transform intent (text/image/sketch) into stroke programs under tool/material constraints. Focus: hierarchical action representations, verification/constraints, and metrics beyond pixel similarity.

  • Stroke planning and macro-actions
  • Verification-aware decoding and constraint checking
  • Evaluation: efficiency, robustness, style/expressivity
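As a purely illustrative sketch of the "painting as programs" framing, a stroke program can be represented as a list of parameterized macro-actions with a verification pass before execution. The `Stroke` container, `verify_program` helper, and the particular constraints (stroke budget, canvas bounds) below are hypothetical, not part of any official tooling:

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list          # (x, y) waypoints in canvas coordinates, pen down
    width: float = 1.0    # brush width as a tool parameter

def verify_program(strokes, max_strokes=64, canvas=(1.0, 1.0)):
    """Reject programs that exceed the stroke budget or leave the canvas."""
    if len(strokes) > max_strokes:
        return False
    w, h = canvas
    return all(0 <= x <= w and 0 <= y <= h
               for s in strokes for (x, y) in s.points)
```

Verification-aware decoding then amounts to only emitting (or resampling) strokes that pass such a check before they reach the robot.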

(2) Embodied Music & Expressive Control

Beyond note correctness: timing, dynamics, articulation, and interaction. Focus: contact/compliance, tempo/phrasing representations, and expressivity metrics.

  • Expressive performance and co-performance
  • Audio-visual perception for control
  • Evaluation aligned with human perception

(3) Deformables & Computational Craft

Folding, weaving, sewing, knotting, draping: partial observability and complex state spaces. Focus: structured planning, learned/hybrid simulators, and standardized benchmarks.

  • World models for deformable dynamics
  • Long-horizon planning under constraints
  • Benchmarking and reproducible evaluation

Cross-cutting questions

  • What makes an artistic/expressive task a useful benchmark for embodied intelligence?
  • How should we evaluate creativity and expressivity beyond success/failure?
  • How can world models enable imagination while maintaining physical validity (verification/constraints)?
  • How do we incorporate human feedback without collapsing authorship/agency?

Embodied4Arts Challenge (Two Painting Tracks)

Track 1: Simulation Painting (Best Quality)

Generate executable stroke trajectories in simulation to reproduce target sketches/paintings under a constrained budget (stroke count, motion cost, time).

  • Visual quality: perceptual similarity proxies (LPIPS/CLIP-like)
  • Embodied efficiency: path length, pen-up travel, strokes, time
  • Robustness (optional): performance under perturbations
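The embodied-efficiency metrics above could be computed from a submitted stroke list along these lines. This is a minimal sketch under assumed conventions (strokes as ordered point lists, pen-up travel measured between consecutive strokes); it is not the official scoring code:

```python
from dataclasses import dataclass
import math

@dataclass
class Stroke:
    points: list  # (x, y) tuples traced with the pen down

def path_length(points):
    # Sum of Euclidean distances between consecutive waypoints
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def efficiency_metrics(strokes):
    draw = sum(path_length(s.points) for s in strokes)
    # Pen-up travel: distance from each stroke's end to the next stroke's start
    pen_up = sum(math.dist(a.points[-1], b.points[0])
                 for a, b in zip(strokes, strokes[1:]))
    return {"strokes": len(strokes),
            "draw_length": draw,
            "pen_up_travel": pen_up}
```

A budget-constrained leaderboard could then combine these with a perceptual-similarity score and wall-clock execution time.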

Track 2: Real-Robot Painting (Equal Time)

Remote submission with organizer execution on a single standardized platform (same robot, tool mount, surface, and time budget). Teams submit executable programs/trajectories; organizers run all submissions under identical safety constraints and record outputs.

  • Final artwork: perceptual metrics + optional human rating
  • Reliability/safety: stable execution, constraint adherence
  • Timing: strict equal time cap for all teams

Sponsorship & Prizes

Sponsored by Axis.ai. Sponsorship funds will be used solely for prizes and workshop/community materials; sponsorship will not influence selection, review, or award decisions, and no commercial promotion will be part of the technical program.

  • Total prize pool: $1,500
  • Grand Prize: $800
  • 2nd Place: $300
  • 3rd Place: $200
  • Best Simulation: $100
  • Best Real-Robot: $100

Timeline (relative)

  • T-8 weeks: starter kit release + opening of submissions
  • T-3 weeks: submission deadline (challenge + optional abstract/video)
  • T-1 week: final leaderboard + winner notification; spotlight schedule finalized

Invited Speakers & Panelists

Status legend: Confirmed = the speaker has explicitly agreed; Invited = response pending. We will verify that each speaker/panelist accepts a speaking/panel slot at no more than one RSS workshop.

Texas A&M University • Robotic painting/drawing: policies and evaluation
Confirmed
UCLA • Expressive control; graphics & embodied creation
Confirmed
NVIDIA Research • Human–AI co-creation systems; creative tools
Confirmed
UC Santa Barbara • Computational craft; interactive fabrication
Confirmed
CMU / Google DeepMind • Generative audio/music models
Confirmed
UC San Diego • Co-creative music systems
Confirmed
Style3D • Physics-based cloth simulation for creation
Confirmed
Ghent Univ. (imec) • Cloth manipulation benchmarks & evaluation
Confirmed
Columbia University • Fabric/deformable manipulation
Invited
Stanford University • World models for long-horizon skills
Invited
UC Berkeley • Model-based RL
Invited

Panel format: invited speakers + organizers; we will add 1–2 panelists from RSS core communities to ensure strong alignment with planning/control/manipulation and evaluation.

Organizers

Texas A&M University • yanjia_0812@tamu.edu
UC Berkeley • phli@berkeley.edu
Texas A&M University • mingyang@tamu.edu
Texas A&M University • xiangbo.gao@tamu.edu
UC Berkeley • ghr@berkeley.edu
Stanford University • yninghong@gmail.com
Texas A&M University • tzz@tamu.edu

Organizing Roles (summary)

  • Primary contact: Yanjia Huang
  • Program/review: Peihao Li; Ying Jiang; Yining Hong
  • Challenge (simulation): Mingyang Wu; Ying Jiang
  • Challenge (real robot): Yanjia Huang; Yunuo Chen; Xiangbo Gao
  • Website/publicity: Yining Hong; Peihao Li
  • Poster/lightning logistics: Ying Jiang; Xiangbo Gao

Tentative Schedule (4 hours)

Half-day (morning or afternoon). Includes a 30-minute coffee break that may overlap with a poster session.

Time        | Activity
00:00–00:10 | Opening remarks: scope, goals, format, and challenge overview
00:10–01:00 | Two invited talks (20–25 min + Q&A each): painting as programs; expressive music control
01:00–01:30 | Coffee break + poster session (posters remain up throughout)
01:30–02:05 | Invited talk: deformables/craft
02:05–02:30 | Contributed lightning talks (8–10 talks, 2–3 min each)
02:30–02:50 | Challenge report + winner spotlights
02:50–03:25 | Panel discussion + audience Q&A (moderated)
03:25–03:55 | Breakout discussions + report-back (structured prompts)
03:55–04:00 | Closing: outcomes, next steps, community actions

Logistics (Resources and Attendance)

Resources: standard A/V (projector, mics) and 20–40 poster boards; optional small tabletop demo area if space permits.
In-person: at least three organizers (including the primary contact) will attend.
Online (subject to RSS policy): we will post materials on the website and host an online Q&A channel.

Submission

Call for Contributions (non-archival)

  • Extended abstract: 1–2 pages (PDF), OR
  • Video demo: 2–3 minutes + 1-page summary
  • Optional: challenge participation (sim/real tracks) + short method description

Lightning talks will be selected from submissions (students and first-time presenters prioritized); all accepted contributions will be presented as posters.

Important Dates (TBD)

  • Submissions open: TBD
  • Submission deadline: TBD
  • Notification: TBD

We will publish the submission portal and detailed instructions on this page once the RSS workshop schedule is finalized.

Contacts

Lead contact: Yanjia Huang • yanjia_0812@tamu.edu
Website: embodied4arts.github.io