torch.nn.functional.margin_ranking_loss
function margin_ranking_loss(input1: Tensor, input2: Tensor, target: Tensor): Tensor
function margin_ranking_loss(input1: Tensor, input2: Tensor, target: Tensor, margin: number, size_average: boolean | null, reduce: boolean | null, reduction: 'none' | 'mean' | 'sum', options: MarginRankingLossFunctionalOptions): Tensor
Margin ranking loss for learning relative ordering between pairs of samples.
Learns to rank two inputs such that one is preferred over the other by a margin. Given two inputs x₁ and x₂ and a target y ∈ {-1, +1}, encourages x₁ to be ranked higher than x₂ (when y=+1) or vice versa (when y=-1). Essential for:
- Learning-to-rank / information retrieval (ranking documents by relevance)
- Recommendation systems (user prefers item A over item B)
- Metric learning with pairwise preferences
- Siamese networks for similarity learning
- Biometric verification (verifying pairs of similar/dissimilar samples)
Core idea: Penalize when the higher-ranked input doesn't have a sufficiently larger score than the lower-ranked input. Loss = 0 when correct ranking margin is satisfied.
Loss formula:
- When target = +1 (x₁ should rank higher): loss = max(0, -(x₁ - x₂) + margin) = max(0, x₂ - x₁ + margin)
- When target = -1 (x₂ should rank higher): loss = max(0, (x₁ - x₂) + margin)
- Combined: loss = max(0, -y·(x₁ - x₂) + margin)
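The combined formula can be sketched in plain TypeScript. This is a minimal scalar implementation over number arrays with 'mean' reduction, for illustration only; the name `marginRankingLoss` is a hypothetical helper, not part of the torch API:

```typescript
// Minimal sketch of margin ranking loss with 'mean' reduction.
// Operates on plain number arrays rather than Tensors.
function marginRankingLoss(
  x1: number[],
  x2: number[],
  y: number[],           // each entry must be +1 or -1
  margin: number = 0,
): number {
  // Per-pair loss: max(0, -y·(x1 - x2) + margin)
  const losses = x1.map((v, i) => Math.max(0, -y[i] * (v - x2[i]) + margin));
  return losses.reduce((a, b) => a + b, 0) / losses.length;
}

// y = +1 and x1 exceeds x2 by more than the margin → zero loss
marginRankingLoss([2.0], [1.0], [1], 0.5); // → 0
// y = +1 but the gap (1.0) is below margin 1.5 → loss 0.5
marginRankingLoss([2.0], [1.0], [1], 1.5); // → 0.5
```

Note how only the difference x₁ - x₂ enters the formula, matching the pairwise nature described below.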
Margin interpretation: The gap that must exist between the preferred and non-preferred scores for zero loss. Larger margin → more robust ranking preference.
- Unified formulation: the single expression -y·(x₁-x₂) covers both preference directions
- Zero loss when: y·(x₁-x₂) ≥ margin (correct ranking with a sufficient gap)
- Target values: Must be exactly +1 or -1; other values still compute but yield a meaningless loss
- Margin effect: Larger margin → stricter ranking requirement
- Pairwise nature: Loss depends only on difference x₁-x₂, not absolute values
- Target validity: values other than +1/-1 are not checked at runtime; the loss is computed anyway and is silently wrong
- Margin too large: Can cause training instability
- Shape mismatch: input1, input2, target must have same shape
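Because invalid targets and shape mismatches fail silently, a quick pre-flight check can catch both pitfalls before computing the loss. A hedged sketch over plain number arrays; `validateRankingInputs` is a hypothetical helper, not a library function:

```typescript
// Hypothetical pre-flight checks for the pitfalls above:
// matching shapes, and targets restricted to {+1, -1}.
function validateRankingInputs(
  x1: number[],
  x2: number[],
  y: number[],
): void {
  if (x1.length !== x2.length || x1.length !== y.length) {
    throw new Error("input1, input2, and target must have the same shape");
  }
  for (const t of y) {
    if (t !== 1 && t !== -1) {
      throw new Error(`target values must be +1 or -1, got ${t}`);
    }
  }
}
```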
Parameters
input1 (Tensor) – First input scores; any shape, e.g. [batch_size]. Pairwise comparison scores, typically raw model outputs.
input2 (Tensor) – Second input scores; shape must match input1. Compared against input1 to determine ranking preference.
target (Tensor) – Target ranking preference; shape must match the inputs. Values must be +1 (prefer x₁) or -1 (prefer x₂).
margin (number) – Minimum required gap between the preferred and non-preferred scores; defaults to 0.
reduction ('none' | 'mean' | 'sum') – How to reduce the per-pair losses; defaults to 'mean'. size_average and reduce are deprecated in favor of reduction.
Returns
Tensor – Loss tensor: shape [] (scalar) if reduction='mean' or 'sum', otherwise the same shape as the inputs.
Examples
// Learning-to-rank: document relevance scoring
const batch_size = 32;
const doc1_scores = torch.randn([batch_size]); // Relevant documents
const doc2_scores = torch.randn([batch_size]); // Irrelevant documents
const targets = torch.ones([batch_size]); // Prefer doc1
const loss = torch.nn.functional.margin_ranking_loss(doc1_scores, doc2_scores, targets, 1.0);

// Recommendation system: user preference
const preferred_scores = torch.randn([64]);
const non_preferred_scores = torch.randn([64]);
const preferences = torch.ones([64]);
const loss = torch.nn.functional.margin_ranking_loss(preferred_scores, non_preferred_scores, preferences, 0.5);

// Siamese network for similarity
const sim_pos = torch.randn([32]); // Positive pair similarity
const sim_neg = torch.randn([32]); // Negative pair similarity
const targets = torch.ones([32]); // Positive should be higher
const loss = torch.nn.functional.margin_ranking_loss(sim_pos, sim_neg, targets, 0.5);
See Also
- PyTorch torch.nn.functional.margin_ranking_loss
- torch.nn.functional.hinge_embedding_loss - Similar margin loss for single input
- torch.nn.functional.triplet_margin_loss - Margin loss for triplets
- torch.nn.functional.multi_margin_loss - Margin loss for multi-class classification