
Fine-tuning can cripple your foundation model; preserving features may be the solution

We analyze concept forgetting when fine-tuning foundation models and propose a simple, feature-preserving fix for this phenomenon.
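Reading "preserving features" as a concrete regularizer, a minimal sketch is given below. It assumes the fix penalizes the fine-tuned backbone's features for drifting away from those of a frozen copy of the pre-trained model; the `features`/`head` methods and the weighting `lam` are hypothetical, and the paper's exact formulation may differ.

```python
import torch

def feature_preserving_loss(model, frozen_model, inputs, targets, task_loss_fn, lam=1.0):
    """
    Fine-tuning loss with a feature-preservation term: the fine-tuned
    backbone's features are penalized for drifting away from the frozen
    pre-trained backbone's features on the same inputs.
    Assumes both models expose a .features(x) method returning embeddings
    and the fine-tuned model exposes a .head(feats) classifier (hypothetical API).
    """
    feats = model.features(inputs)
    with torch.no_grad():
        frozen_feats = frozen_model.features(inputs)  # frozen copy made before fine-tuning
    task_loss = task_loss_fn(model.head(feats), targets)
    preserve_loss = (feats - frozen_feats).pow(2).sum(dim=-1).mean()
    return task_loss + lam * preserve_loss
```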

Raising the Bar on the Evaluation of Out-of-Distribution Detection

We propose a new benchmark for generating and evaluating different types of out-of-distribution samples given an in-distribution dataset.

Deep Deterministic Uncertainty: A New Simple Baseline

A deterministic deep neural network with sensitivity and smoothness (bi-Lipschitz) constraints on its feature space can quantify epistemic uncertainty from a density estimate in feature space and aleatoric uncertainty from the entropy of its softmax distribution.
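A minimal sketch of the two uncertainty estimates, assuming features have already been extracted by such a network; the per-class Gaussian density fit below is illustrative rather than the paper's exact implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_feature_density(features, labels, num_classes):
    """Fit one Gaussian per class in feature space (a simple GDA-style density)."""
    gaussians, priors = [], []
    for c in range(num_classes):
        feats_c = features[labels == c]
        mean = feats_c.mean(axis=0)
        cov = np.cov(feats_c, rowvar=False) + 1e-4 * np.eye(features.shape[1])  # jitter for stability
        gaussians.append(multivariate_normal(mean, cov))
        priors.append(len(feats_c) / len(features))
    return gaussians, np.array(priors)

def epistemic_uncertainty(feature, gaussians, priors):
    """Low feature-space density (high negative log marginal likelihood) => high epistemic uncertainty."""
    density = sum(p * g.pdf(feature) for g, p in zip(gaussians, priors))
    return -np.log(density + 1e-30)

def aleatoric_uncertainty(logits):
    """Entropy of the softmax distribution => aleatoric uncertainty."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -(probs * np.log(probs + 1e-12)).sum()
```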

Open Vocabulary Semantic Segmentation with Patch Aligned Contrastive Learning

We propose a modified contrastive loss that trains an alignment between the patch tokens of a vision encoder and the text CLS token of CLIP-like models. This alignment enables seamless transfer to semantic segmentation without requiring additional annotations.
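A minimal PyTorch sketch of such a patch-to-text alignment loss, under assumed tensor shapes and temperature; the similarity-weighted patch pooling below is one plausible instantiation, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def patch_aligned_contrastive_loss(patch_tokens, text_cls, temperature=0.07):
    """
    patch_tokens: (B, P, D) patch embeddings from the vision encoder
    text_cls:     (B, D) text CLS embeddings from the text encoder
    Pools patches with weights given by their similarity to the paired text
    embedding, then applies a symmetric InfoNCE loss over the batch.
    """
    patch_tokens = F.normalize(patch_tokens, dim=-1)
    text_cls = F.normalize(text_cls, dim=-1)

    # Patch-to-text similarity for the paired caption: (B, P)
    patch_text_sim = torch.einsum('bpd,bd->bp', patch_tokens, text_cls)
    weights = patch_text_sim.softmax(dim=-1)

    # Similarity-weighted image embedding: (B, D)
    image_emb = F.normalize(torch.einsum('bp,bpd->bd', weights, patch_tokens), dim=-1)

    # Symmetric image-text InfoNCE over the batch
    logits = image_emb @ text_cls.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```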

Calibrating Deep Neural Networks using Focal Loss

We propose focal loss as an alternative to cross-entropy loss for training well-calibrated, confident, and accurate neural networks.
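For reference, focal loss down-weights well-classified examples via FL(p_t) = -(1 - p_t)^γ log p_t, where p_t is the probability assigned to the true class. A minimal PyTorch sketch follows; the default γ is an illustrative choice, not the paper's recommended schedule.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    """
    Focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).
    With gamma = 0 this reduces to standard cross-entropy.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the true class
    pt = log_pt.exp()
    return ((1.0 - pt) ** gamma * (-log_pt)).mean()
```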