Review: PR-108-MobileNetV2: Inverted Residuals and Linear Bottlenecks
Depthwise Separable Convolution
- For an input set of real images, the set of layer activations forms a “manifold of interest”.
- It has long been assumed that manifolds of interest in neural networks can be embedded in low-dimensional subspaces.
- The authors highlight two properties indicating that the manifold of interest should lie in a low-dimensional subspace of the higher-dimensional activation space:
- If the manifold of interest retains non-zero volume after the ReLU transformation, ReLU acts on that part of the space as a linear transformation.
- ReLU can preserve complete information about the input manifold only if the manifold lies in a low-dimensional subspace of the input space.
- Assuming the manifold of interest is low-dimensional, we can capture it by inserting linear bottleneck layers into the convolutional blocks.
- Inspired by the intuition that the bottlenecks actually contain all the necessary information, the authors place shortcut connections directly between the bottlenecks.
- Narrow → Wide → Narrow approach: each inverted residual block expands a narrow bottleneck to a wide representation, filters it with a depthwise convolution, then projects back down with a linear 1×1 convolution.
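The second property above (ReLU only preserves complete information when the manifold sits in a low-dimensional subspace of a wider space) can be seen with a deliberately tiny NumPy construction. This is my own toy illustration, not the paper's Figure 1 experiment: a 1-D manifold collapses under ReLU applied directly, but survives ReLU exactly after being embedded into 2-D with a hand-picked matrix `T`.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 9)         # a 1-D "manifold of interest"

# Applied directly in 1-D, ReLU destroys every negative coordinate:
collapsed = np.maximum(x, 0.0)        # all x < 0 map to 0 — not invertible

# Embed the same 1-D manifold into 2-D first (a wider activation space):
T = np.array([[1.0], [-1.0]])
y = np.maximum(T @ x[None, :], 0.0)   # ReLU applied in the wide space
recovered = y[0] - y[1]               # linear map back: x = y1 - y2 exactly

print(np.allclose(recovered, x))      # True — no information lost
```

The embedding places the manifold in a 1-D subspace of 2-D space where, for every point, at least one coordinate stays non-negative, so a linear map can undo the ReLU.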
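The Narrow → Wide → Narrow structure can be sketched in plain NumPy. This is a minimal illustration under my own assumptions (single block, stride 1, random weights, expansion factor `t = 6` as in the paper's default); it is not the paper's trained implementation. Note the projection back to the narrow bottleneck is linear (no ReLU6), and the shortcut connects the two narrow ends:

```python
import numpy as np

def relu6(x):
    return np.minimum(np.maximum(x, 0.0), 6.0)

def pointwise(x, w):
    # x: (C_in, H, W), w: (C_out, C_in); a 1x1 conv is a per-pixel linear map
    c, h, w_ = x.shape
    return (w @ x.reshape(c, h * w_)).reshape(w.shape[0], h, w_)

def depthwise3x3(x, w):
    # x: (C, H, W), w: (C, 3, 3); one 3x3 filter per channel, stride 1, zero pad
    c, h, w_ = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j][:, None, None] * xp[:, i:i + h, j:j + w_]
    return out

def inverted_residual(x, w_expand, w_dw, w_project):
    # Narrow -> Wide -> Narrow: expand with ReLU6, depthwise-filter with ReLU6,
    # then project back with a LINEAR 1x1 conv; shortcut joins the bottlenecks.
    h = relu6(pointwise(x, w_expand))    # narrow -> wide
    h = relu6(depthwise3x3(h, w_dw))     # per-channel spatial filtering
    h = pointwise(h, w_project)          # wide -> narrow, no activation
    return x + h                         # shortcut between bottlenecks

rng = np.random.default_rng(0)
c_in, t = 4, 6                           # bottleneck width, expansion factor
x = rng.standard_normal((c_in, 8, 8))
y = inverted_residual(
    x,
    rng.standard_normal((c_in * t, c_in)) * 0.1,
    rng.standard_normal((c_in * t, 3, 3)) * 0.1,
    rng.standard_normal((c_in, c_in * t)) * 0.1,
)
print(y.shape)                           # same narrow shape as the input
```

Because the expensive depthwise filtering happens in the wide space while the block's input and output stay narrow, memory and parameters are dominated by the cheap bottleneck tensors.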