Review: PR-108-MobileNetV2: Inverted Residuals and Linear Bottlenecks

Joonsu Oh
2 min read · Jul 31, 2021

Depthwise Separable Convolution

  • The core building block carried over from MobileNetV1: a standard convolution is factorized into a 3×3 depthwise convolution (one filter per input channel) followed by a 1×1 pointwise convolution that mixes channels, cutting computation by roughly a factor of the kernel area (sketched below).
Figure 1
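To make the factorization concrete, here is a minimal PyTorch sketch of a depthwise separable convolution. The class name, channel sizes, and the BatchNorm/ReLU placement are my own illustrative choices, not taken from the post.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv (one filter per channel) + 1x1 pointwise conv (channel mixing)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch -> each input channel is filtered independently (depthwise step)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # 1x1 pointwise conv recombines the per-channel outputs into out_ch channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

# quick shape check
x = torch.randn(1, 32, 56, 56)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```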

Linear Bottlenecks

  • For an input set of real images, we say that the set of layer activations forms a “manifold of interest”.
  • It has long been assumed that manifolds of interest in neural networks can be embedded in low-dimensional subspaces.
  • The authors highlight two properties indicating that the manifold of interest should lie in a low-dimensional subspace of the higher-dimensional activation space:
  1. If the manifold of interest remains non-zero volume after the ReLU transformation, it corresponds to a linear transformation (on the points that survive, ReLU acts as the identity).
  2. ReLU is capable of preserving complete information about the input manifold only if the manifold lies in a low-dimensional subspace of the input space.
  • Assuming the manifold of interest is low-dimensional, we can capture this by inserting linear bottleneck layers into the convolutional blocks: the narrow 1×1 projection at the end of each block omits the non-linearity, so ReLU cannot destroy information in the bottleneck (see the sketch after this list).
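The paper illustrates property 2 with a toy experiment (its Figure 1): a 2-D spiral is embedded into n dimensions by a random matrix T, passed through ReLU, and projected back to 2-D. Below is a minimal NumPy sketch in that spirit; note that I recover the 2-D points with a least-squares fit rather than the paper's pseudo-inverse of T, and the spiral parameters are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D manifold of interest (a spiral) living in 2-D input space.
t = np.linspace(0.5, 4 * np.pi, 500)
X = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)   # shape (500, 2)

def relu_roundtrip_error(X, n, rng):
    """Embed X into n dims with a random matrix T, apply ReLU,
    then recover X as well as possible with a single linear map."""
    T = rng.standard_normal((X.shape[1], n))
    Y = np.maximum(X @ T, 0.0)                 # ReLU in the n-dim activation space
    A, *_ = np.linalg.lstsq(Y, X, rcond=None)  # best linear "inverse" back to 2-D
    X_rec = Y @ A
    return np.linalg.norm(X - X_rec) / np.linalg.norm(X)

for n in (2, 3, 5, 15, 30):
    print(f"n = {n:>2}: relative reconstruction error = {relu_roundtrip_error(X, n, rng):.3f}")
```

With small n much of the spiral collapses (ReLU zeroes out a whole region of the embedded space), while for larger n the manifold survives far better, which is the motivation for keeping the narrow bottleneck layers linear.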

Inverted Residuals

  • Inspired by the intuition that the bottlenecks actually contain all the necessary information, while the expansion layer is merely an implementation detail, the authors place shortcuts directly between the bottleneck layers.
  • Narrow → Wide → Narrow approach: the block is the opposite of a classical residual block (wide → narrow → wide), hence “inverted” residuals (see the sketch below).
Figure 2
Figure 3
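Here is a minimal PyTorch sketch of one inverted residual block following the structure described in the paper (1×1 expansion, 3×3 depthwise, linear 1×1 projection, shortcut between bottlenecks); the expansion ratio of 6 and ReLU6 follow the paper's defaults, while the class and variable names are my own.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Narrow -> wide -> narrow block with a shortcut between the narrow bottlenecks."""
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        # residual connection only when the spatial size and width are unchanged
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 expansion (narrow -> wide)
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise conv in the expanded space
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection (wide -> narrow): no non-linearity here
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_shortcut else out

x = torch.randn(1, 24, 56, 56)
print(InvertedResidual(24, 24)(x).shape)  # torch.Size([1, 24, 56, 56])
```

Because the shortcut is only used when the stride is 1 and the input and output widths match, the addition always happens between the narrow bottleneck tensors, never in the expanded space.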
