Face Alignment by Explicit Shape Regression, MSRA. [PDF]
Xudong Cao; Yichen Wei; Fang Wen; Jian Sun
Understanding Collective Crowd Behaviors: Learning Mixture Model of Dynamic Pedestrian-Agents
Bolei Zhou; Xiaogang Wang
A-Optimal Non-negative Projection for Image Representation (from Zhejiang University)
Zheng Yang; Haifeng Liu
Nonrigid Structure-from-Motion Factorization Made Easy
Yuchao Dai; Hongdong Li; Mingyi He
What Are We Looking For: Towards Statistical Modeling of Saccadic Eye Movements and Visual Saliency (from Harbin Institute of Technology)
Xiaoshuai Sun; Hongxun Yao; Rongrong Ji; Xianming Liu; Pengfei Xu
Street-to-Shop: Cross-Scenario Clothing Retrieval via Parts Alignment and Auxiliary Set (from the Institute of Automation, CAS)
Si Liu; Zheng Song; Guangcan Liu; Changsheng Xu; Hanqing Lu; Shuicheng Yan
Editor’s note: the following is an anonymized letter from a machine learning researcher who decided to withdraw his submission from CVPR 2012. The submission received ratings of “Definitely Reject,” “Borderline,” and “Weakly Reject.” The letter and the paper reviews are posted here with his permission.
We decided to withdraw our paper #[ID no.], “[Paper Title]” by [Author Name] et al., from CVPR.
We posted it on arXiv: http://arxiv.org/[Paper ID].
We are withdrawing it for three reasons: 1) the scores are so low, and the reviews so ridiculous, that I don’t know how to begin writing a rebuttal without insulting the reviewers; 2) we prefer to submit the paper to ICML where it might be better received; 3) with all the fuss I made, leaving the paper in would have looked like I might have tried to bully the program committee into giving it special treatment.
Getting papers about feature learning accepted at vision conferences has always been a struggle, and I’ve had more than my share of bad reviews over the years. Thankfully, quite a few of my papers were rescued by area chairs.
This time, though, the reviewers were particularly clueless, or negatively biased, or both. I was very sure that this paper was going to get good reviews because: 1) it has two simple and generally applicable ideas for segmentation (“purity tree” and “optimal cover”); 2) it uses no hand-crafted features (it’s all learned, all the way through; incredibly, this was seen as a negative point by the reviewers!); 3) it beats all published results on 3 standard datasets for scene parsing; 4) it’s an order of magnitude faster than the competing methods.
A list of CVPR best papers from 2001 to 2011: