Call for Papers

We invite submissions on any aspect of efficient visual computing, including but not limited to:
Data-efficient visual computing:
  • Improving image/video compression
  • Improving point cloud compression
  • New methods for multi-view image and video compression
  • Lossless compression and entropy models
  • Compression for human and machine vision
Label-efficient visual computing:
  • New methods for in-context learning
  • New methods for few-/zero-shot learning
  • New methods for domain adaptation
  • New methods for training models under limited labels
  • Benchmarks for evaluating model generalization
Model-efficient visual computing:
  • Network sparsity, quantization, and distillation
  • Efficient network architecture design
  • Hardware implementation and on-device learning
  • Brain-inspired computing methods
  • Efficient training techniques
Submission Format: Submissions must be anonymized and follow the ICCV 2025 author instructions. Please download the ICCV 2025 Author Kit for detailed formatting instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 7 pages excluding references; (2) Short Paper: limited to 5 pages excluding references.
Peer Review: Paper submissions must conform to the "double-blind" review policy. All papers will be peer-reviewed by experts in the field and will receive at least two reviews from the program committee. Based on the reviewers' recommendations, accepted papers will be assigned either a contributed talk or a poster presentation.
Submission Site: https://openreview.net/group?id=thecvf.com/ICCV/2025/Workshop/ECLR
Submission Deadline: 07 Jun, 2025
Important: Accepted papers will be published in the proceedings alongside the ICCV main conference and indexed in EI Compendex.

Important Dates

Event Date
Paper submission deadline 07 Jun, 2025
Notification of acceptance 21 Jun, 2025
Camera-ready submission deadline 28 Jun, 2025
Workshop date 19 Oct or 20 Oct, 2025 (depending on final arrangement)

Workshop Schedule

Time Event
9:00-9:20 Opening Remarks
9:20-9:50 Invited Talk 1
9:50-10:20 Invited Talk 2
10:20-10:50 Coffee Break
10:50-11:20 Oral Session (3 papers)
11:20-12:00 Poster Session
12:00-12:10 Closing Remarks

Organizers

Jinyang Guo, Beihang University
Zhenghao Chen, The University of Newcastle
Yuqing Ma, Beihang University
Yifu Ding, Beihang University & Nanyang Technological University
Xianglong Liu, Beihang University
Jinman Kim, The University of Sydney
Wanli Ouyang, Shanghai AI Laboratory
Dacheng Tao, Nanyang Technological University

Publication Chairs

Yejun Zeng, Beihang University
Jiacheng Wang, Beihang University

Local Arrangement Chair

Yanan Zhu, Beihang University

Accepted Papers

🎉 Accepted Long Papers

  • Subpixel Placement of Tokens in Vision Transformers
  • SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation
  • Kernel-based Motion Free B-frame Coding for Neural Video Compression
  • Efficient Depth-Varying Optical Simulation for Defocus Deblur
  • Tiny-vGamba: Distilling Large Vision-(Language) Knowledge from CLIP into a Lightweight vGamba Network
  • Low-bit FlashAttention Accelerated Operator Design Based on Triton
  • Pruning by Block Benefit: Exploring the Properties of Vision Transformer Blocks during Domain Adaptation
  • VCMamba: Bridging Convolutions with Multi-Directional Mamba for Efficient Visual Representation
  • Leveraging Learned Image Prior for 3D Gaussian Compression
  • Fisheye image augmentation for overcoming domain gaps with the limited dataset
  • From Binary to Semantic: Utilizing Large-Scale Binary Occupancy Data for 3D Semantic Occupancy Prediction
  • Relevance-Guided Activation Sparsification for Bandwidth-Efficient Collaborative Inference
  • Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
  • Adaptive Compression of Large Vision Models for Efficient Image Quality Assessment of AI-Generated Content
  • Latent Representation of Microstructures using Variational Autoencoders with Spatial Statistics Space Loss
  • FDAL: Leveraging Feature Distillation for Efficient and Task-Aware Active Learning
  • From Coarse to Fine: Learnable Discrete Wavelet Transforms for Efficient 3D Gaussian Splatting
  • Compressed Diffusion: Pruning with Knowledge Distillation for Efficient Text-to-Image Generation

🎉 Accepted Short Papers

  • L-GGSC: Learnable Graph-based Gaussian Splatting Compression
  • Your Super Resolution Model is not Enough for tackling Real-World Scenarios
  • Decay Pruning Method: Smooth Pruning With a Self-Rectifying Procedure