We analyze concept forgetting during fine-tuning of foundation models and propose a simple fix for this phenomenon.
We propose a new benchmark for generating and evaluating different types of out-of-distribution samples given an in-distribution dataset.
A deterministic deep neural network with sensitivity and smoothness (bi-Lipschitz) constraints on its feature space can quantify epistemic uncertainty from a density estimate in feature space and aleatoric uncertainty from the entropy of its softmax distribution.
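A minimal sketch of how the two uncertainties could be computed, assuming a hypothetical `encoder` whose features are bi-Lipschitz-regularised (e.g. via spectral normalisation and residual connections), a linear `head` on top of it, and a Gaussian mixture fitted to training features as the density model; the names and the choice of sklearn's GaussianMixture are assumptions for illustration, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def fit_feature_density(encoder, train_loader, num_classes):
    # Fit a Gaussian mixture to training-set features; one component
    # per class is a common choice in this line of work.
    feats = torch.cat([encoder(x).detach().cpu() for x, _ in train_loader])
    return GaussianMixture(n_components=num_classes).fit(feats.numpy())

def uncertainties(encoder, head, gmm, x):
    with torch.no_grad():
        z = encoder(x)                          # (N, D) feature vectors
        probs = F.softmax(head(z), dim=-1)      # (N, C) class probabilities
    # Aleatoric uncertainty: entropy of the softmax distribution.
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    # Epistemic uncertainty: low feature-space density => high uncertainty,
    # so we take the negative log-likelihood under the fitted mixture.
    epistemic = -torch.from_numpy(gmm.score_samples(z.cpu().numpy()))
    return epistemic, aleatoric
```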
We propose a modified contrastive loss that trains an alignment between the patch tokens of a vision encoder and the text CLS token of CLIP-like models. This loss enables seamless transfer to semantic segmentation without requiring additional annotations.
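One plausible way to instantiate such a patch-aligned contrastive loss, sketched below under assumptions: patch tokens are pooled with attention-like weights derived from their similarity to each caption's CLS embedding, and the pooled embeddings enter a standard symmetric InfoNCE objective. The pooling and projection details here are illustrative, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def patch_aligned_contrastive_loss(patch_emb, text_emb, tau=0.07):
    # patch_emb: (B, P, D) projected patch tokens of the vision encoder
    # text_emb:  (B, D)    text CLS embeddings, one caption per image
    patch_emb = F.normalize(patch_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Patch-to-text similarity for every image/caption pair: (B, B, P).
    sim = torch.einsum("ipd,jd->ijp", patch_emb, text_emb)
    # Attention-like weights over patches, computed per caption.
    w = sim.softmax(dim=-1)
    # Weighted patch pooling yields one image embedding per caption: (B, B, D).
    pooled = F.normalize(torch.einsum("ijp,ipd->ijd", w, patch_emb), dim=-1)
    # Image-text logits and the usual symmetric contrastive objective.
    logits = torch.einsum("ijd,jd->ij", pooled, text_emb) / tau
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Because the alignment is learned at the patch level, per-patch similarities to class-name embeddings can be read off directly at test time, which is what makes annotation-free transfer to segmentation possible.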
We propose focal loss as an alternative to cross-entropy loss for training well-calibrated, confident, and accurate neural networks.
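For reference, the standard multi-class focal loss (Lin et al.): it down-weights well-classified examples by the factor (1 - p_t)^gamma, and gamma = 0 recovers cross-entropy. A fixed gamma is used in this sketch; the choice of gamma is a hyperparameter:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # logits: (N, C) unnormalised scores; targets: (N,) class indices.
    log_p = F.log_softmax(logits, dim=-1)
    # log-probability of the true class for each sample: (N,).
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    # (1 - p_t)^gamma modulating factor on top of cross-entropy.
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```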