JS.Ye’s Research Hub

Paper Reviews, Course Notes, and Research Progress

A close reading of Geng et al., “Sparsity Conditional Energy Label Distribution Learning for Age Estimation” (2016): starting from a conditional energy model, we derive a closed-form transformation that turns the exponential sum over binary latent variables into a “gated product”; we build a joint objective of KL fitting + sparse gating, derive explicit gradients and SGD updates for \(b_j\), \(u_{jr}\), and \(\omega_r\); and we summarize and analyze the experiments.
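
For reference, the closed-form step is an instance of a standard factorization over binary latent variables; in the schematic identity below, \(a_r\) stands in for the energy contribution of the \(r\)-th gate (the paper's own parametrization is in terms of \(b_j\), \(u_{jr}\), and \(\omega_r\)):

```latex
% Summing the exponential over all 2^K binary latent configurations
% factorizes into a per-gate product (the "gated product"):
\sum_{\mathbf{h}\in\{0,1\}^K} \exp\!\Big(\sum_{r=1}^{K} h_r\, a_r\Big)
  = \prod_{r=1}^{K} \sum_{h_r\in\{0,1\}} e^{h_r a_r}
  = \prod_{r=1}^{K} \bigl(1 + e^{a_r}\bigr)
```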

Read more »

This post offers a close reading of Geng et al.’s “Recurrent Age Estimation” (2019). It lays out the algorithmic pipeline and overall architecture of Recurrent Age Estimation (RAE), summarizes the experimental setups and key results on public datasets, and provides a brief analysis of model performance and computational cost.

Read more »

A close reading of Geng et al.’s “Semi-Supervised Adaptive Label Distribution Learning for Facial Age Estimation” (2017). Building on a recap of LDL/ALDL, this post unifies notation and lays out the minimal closed-loop pipeline of SALDL (conditional distribution prediction → pseudo-age KNN → age-wise σ adaptation). It also makes explicit that pseudo-age estimation for all samples is based on the current model’s predicted distributions, and it summarizes the MORPH experimental protocol and comparative results.
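
A minimal sketch of one round of that closed loop, assuming NumPy/scikit-learn and a generic `predict_distributions` callable; the expectation-based pseudo-age and the particular σ estimate are illustrative choices, not the paper's exact rules:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def saldl_round(predict_distributions, features, ages, k=5):
    """One SALDL-style closed-loop round (schematic sketch).

    predict_distributions: current model, maps features -> (n, m) p(age|x).
    features: (n, d) face features; ages: (m,) candidate ages.
    """
    # Step 1: conditional distribution prediction for ALL samples, and a
    # pseudo-age from each predicted distribution (expectation here).
    P = predict_distributions(features)                       # (n, m)
    pseudo_age = P @ ages                                     # (n,)

    # Step 2: KNN in feature space; smooth each pseudo-age over neighbors.
    idx = NearestNeighbors(n_neighbors=k).fit(features).kneighbors(features)[1]
    pseudo_age = pseudo_age[idx].mean(axis=1)

    # Step 3: age-wise sigma adaptation; one plausible choice is the spread
    # of neighbor pseudo-ages around the samples assigned to each age.
    sigma = np.ones_like(ages, dtype=float)
    for j, a in enumerate(ages):
        near = np.abs(pseudo_age - a) < 0.5
        if near.any():
            sigma[j] = max(pseudo_age[idx[near]].std(), 1e-3)
    return pseudo_age, sigma
```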

Read more »

This post closely reads Geng et al.’s “Age Estimation Using Expectation of Label Distribution Learning” (2018) from four angles: method, theory, engineering, and experiments. We first derive the CDF of the Gaussian label distribution and prove the limit equivalence Ranking ≈ 1 − CDF, unifying the ranking and distribution-learning lines of work; then we present a joint learning framework (KL + L1) and the logits gradient to explain the alignment with MAE; next we analyze Thin/TinyAgeNet and hybrid pooling as lightweight designs; finally we summarize the experimental setup and conclusions, clarifying the robust gains of the “distribution + expectation” approach.
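
To make the joint objective concrete, here is a minimal PyTorch sketch of a KL + L1 loss of the kind the post derives; the weighting `lam` and the tensor shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def kl_plus_l1_loss(logits, target_dist, ages, y_true, lam=1.0):
    """KL between predicted and target label distributions, plus an L1
    penalty on the distribution's expectation (aligning training with MAE)."""
    log_p = F.log_softmax(logits, dim=-1)                   # (batch, n_ages)
    kl = F.kl_div(log_p, target_dist, reduction="batchmean")
    expected_age = (log_p.exp() * ages).sum(dim=-1)         # (batch,)
    l1 = (expected_age - y_true).abs().mean()
    return kl + lam * l1
```

Here `lam` trades distribution fitting against direct expectation regression; the L1 term is what ties the training signal to the MAE metric.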

Read more »

This post is a close reading of Xin Geng et al.’s papers “Deep Label Distribution Learning for Apparent Age Estimation” (2015) and “Practical Age Estimation Using Deep Label Distribution Learning” (2020). Starting from the ICCV Workshops 2015 solution, it walks through the two-stream CNN built for the ChaLearn challenge, the soft labels generated from the annotated mean and variance with KL-based distribution supervision, and the full training/inference pipeline. It then moves to Practical DLDL (2020), which shrinks the “global” label distribution into a neighborhood-truncated distribution centered at the true age and systematically sweeps σ.
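
A minimal sketch of both labeling schemes, assuming NumPy, a fixed 0–100 age grid, and an illustrative truncation radius:

```python
import numpy as np

AGES = np.arange(0, 101)

def soft_label(mu, sigma):
    """Discretized Gaussian soft label built from the annotated
    apparent-age mean and standard deviation (2015-style supervision)."""
    d = np.exp(-(AGES - mu) ** 2 / (2.0 * sigma ** 2))
    return d / d.sum()

def truncated_soft_label(mu, sigma, radius=3):
    """Practical-DLDL-style variant: the global distribution is truncated
    to a neighborhood around the true age and renormalized."""
    d = soft_label(mu, sigma)
    d[np.abs(AGES - mu) > radius] = 0.0
    return d / d.sum()
```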

Read more »

This article presents a close reading of Xin Geng et al.’s paper Facial Age Estimation by Adaptive Label Distribution Learning (2014), systematically reviewing the method framework and providing key mathematical derivations. It walks through the alternating optimization between adaptive label distributions (soft labels built with age-specific variances) and model fitting; the quasi-Newton optimization for updating the parameters of the conditional probability function; and the experimental setup and conclusions.
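
A skeleton of that alternating scheme, assuming hypothetical `kl_objective` and `estimate_sigma` callables and SciPy's L-BFGS-B as a stand-in quasi-Newton solver:

```python
import numpy as np
from scipy.optimize import minimize

def aldl_alternating(kl_objective, estimate_sigma, theta0, sigma0, rounds=10):
    """ALDL-style alternating optimization (schematic).

    kl_objective(theta, sigma): KL fit of the model to soft labels built
    with age-specific variances sigma.
    estimate_sigma(theta): re-estimates per-age variance under the model.
    """
    theta, sigma = np.asarray(theta0, dtype=float), sigma0
    for _ in range(rounds):
        # Step 1: with sigma fixed, quasi-Newton update of the conditional
        # probability function's parameters.
        theta = minimize(kl_objective, theta, args=(sigma,),
                         method="L-BFGS-B").x
        # Step 2: with theta fixed, adapt the age-specific variances,
        # which re-shapes the soft labels for the next round.
        sigma = estimate_sigma(theta)
    return theta, sigma
```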

Read more »

This post presents a thorough reading of the 2013 paper Facial Age Estimation by Learning from Label Distributions by Xin Geng et al. The work extends the 2011 label distribution learning (LDL) framework and the IIS-LLD optimization method by proposing the Conditional Probability Neural Network (CPNN), which directly models the conditional distribution in an end-to-end manner and breaks the limitation of fixed functional-form assumptions. This article focuses on the CPNN modeling pipeline and summarizes its experimental results on the FG-NET and MORPH datasets.
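
The key structural idea, that the network scores a (feature, candidate-age) pair jointly and normalizes over ages, can be sketched as follows; this is a minimal PyTorch rendition, and the layer sizes and activations are assumptions rather than the paper's exact three-layer design:

```python
import torch
import torch.nn as nn

class CPNNSketch(nn.Module):
    """Schematic CPNN: score each (feature, candidate age) pair jointly,
    then normalize across ages to obtain p(age | x)."""
    def __init__(self, d_feat, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_feat + 1, hidden),
            nn.Sigmoid(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, ages):
        # x: (n, d_feat) features; ages: (m,) candidate ages
        n, m = x.size(0), ages.size(0)
        pairs = torch.cat(
            [x.unsqueeze(1).expand(n, m, x.size(1)),
             ages.view(1, m, 1).expand(n, m, 1).to(x.dtype)], dim=-1)
        scores = self.net(pairs).squeeze(-1)    # (n, m) unnormalized
        return torch.softmax(scores, dim=-1)    # conditional distribution
```

Feeding the label into the input layer is what frees CPNN from a fixed functional form for the distribution: the network itself shapes p(age | x).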

Read more »

This article offers a thorough analysis of Xin Geng et al.’s paper Facial Age Estimation by Learning from Label Distributions (2011), systematically organizing its research framework and deriving the key mathematical models in detail. The analysis clarifies the definition of label distributions, examines the label distribution learning model that uses KL divergence as its objective, derives the conditional probability function based on the maximum entropy model along with its optimization algorithm IIS-LLD, and finally summarizes the paper’s experimental design and results.
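
For concreteness, here is a NumPy sketch of that objective: the KL divergence between the target label distributions and a maximum-entropy conditional model, which IIS-LLD iteratively minimizes. The vectorized form below is an illustration, not the paper's update rule:

```python
import numpy as np

def maxent_kl_objective(theta, X, D):
    """KL objective of maximum-entropy LDL (constant entropy of D dropped).

    theta: (n_labels, d) per-label weights; X: (n, d) features;
    D: (n, n_labels) target label distributions.
    Model: p(y | x) = exp(theta_y . x) / sum_y' exp(theta_y' . x).
    """
    scores = X @ theta.T                         # (n, n_labels)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)
    # Cross-entropy of D against P; equals KL(D || P) up to a constant.
    return -(D * np.log(P + 1e-12)).sum()
```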

Read more »