- Define a neural network architecture by composing layers in a `Chain` block, e.g. stacking dense or convolutional layers.
- Compute the loss with a standard function such as `Flux.Losses.mse(y_hat, y)`, which takes arrays of predictions and targets.
- Obtain gradients with respect to the model's parameters via the automatic differentiation engine, e.g. `Flux.gradient(loss, params)`.
- Update the model weights by constructing an explicit optimizer such as `Descent` and passing it to `Flux.update!` (see the sketch after this list).
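
Taken together, these four steps form one gradient-descent iteration. The sketch below is a minimal illustration using Flux's explicit-gradient style (Flux ≥ 0.13) rather than the older implicit `Flux.params` form hinted at by `Flux.gradient(loss, params)` above; the layer sizes, toy data, and learning rate are illustrative placeholders, not values from this document.

```julia
using Flux

# Step 1: a small illustrative architecture — two dense layers in a Chain.
model = Chain(Dense(4 => 8, relu), Dense(8 => 1))

# Toy batch: 16 column-major samples with 4 features each (placeholder data).
x = rand(Float32, 4, 16)
y = rand(Float32, 1, 16)

# Step 2: mean squared error between predictions and targets.
loss(m, x, y) = Flux.Losses.mse(m(x), y)

# Step 3: gradients of the loss with respect to the model's parameters.
grads = Flux.gradient(m -> loss(m, x, y), model)

# Step 4: plain gradient descent; `setup` attaches the rule to the model's
# parameters, and `update!` mutates the model in place.
opt_state = Flux.setup(Descent(0.01), model)
Flux.update!(opt_state, model, grads[1])
```

Repeating steps 3 and 4 over batches of data gives the usual training loop; `Flux.train!` packages the same sequence for convenience.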