torch.nn.MarginRankingLoss
class MarginRankingLoss extends Module

new MarginRankingLoss(options?: MarginRankingLossOptions)

- margin (number) - readonly
- reduction (Reduction) - readonly
Margin Ranking Loss: ranking loss for pairs of inputs.
Computes a ranking-based margin loss between two inputs (input1 and input2) using a binary target. The loss encourages input1 to score higher than input2 by at least the margin when target = 1, and input2 to score higher than input1 when target = -1. Essential for:
- Ranking problems (learning to rank items)
- Recommender systems (preferring one item over another)
- Siamese networks with pair-wise ranking
- Natural Language Processing (matching queries to documents)
- Metric learning (enforcing separation between positive and negative pairs)
For each sample in the batch, computes:

loss = max(0, -target * (input1 - input2) + margin)

The target is 1 if input1 should be ranked higher, and -1 if input2 should be ranked higher.
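As a point of reference, the per-sample formula above can be sketched in plain TypeScript over number arrays (an illustrative helper, not part of this library's API):

```typescript
// Illustrative only: mean margin ranking loss over plain number arrays,
// mirroring loss = max(0, -target * (input1 - input2) + margin).
function marginRankingLoss(
  input1: number[],
  input2: number[],
  target: number[], // each entry is 1 or -1
  margin = 0,
): number {
  // Per-sample hinge term
  const losses = input1.map((x1, i) =>
    Math.max(0, -target[i] * (x1 - input2[i]) + margin),
  );
  // Mean reduction (the usual default)
  return losses.reduce((a, b) => a + b, 0) / losses.length;
}

// target = 1 for both pairs: input1 should rank higher than input2.
// Pair (2.0, 1.0): max(0, -1*(1.0) + 1.0) = 0;  pair (1.5, 2.0): max(0, 0.5 + 1.0) = 1.5
console.log(marginRankingLoss([2.0, 1.5], [1.0, 2.0], [1, 1], 1.0)); // → 0.75
```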
When to use MarginRankingLoss:
- Ranking tasks where relative order matters more than absolute scores
- Pair-wise comparisons (is A better than B?)
- Learning embedding spaces with relative distance constraints
- Recommendation tasks (learning user preferences)
- Information retrieval (ranking relevant documents above irrelevant ones)
Trade-offs:
- vs TripletMarginLoss: MarginRanking works on pairs; Triplet works on triplets (anchor/pos/neg)
- vs CrossEntropy: MarginRanking learns relative ranking; CrossEntropy learns absolute classification
- Sensitivity: Highly sensitive to the choice of margin
- Scalability: Requires careful selection of pairs (hard negative mining often helpful)
- Simple and flexible: Can be used with any scalar scores from neural networks
Algorithm: For each pair (x1, x2) and target y ∈ {1, -1}:
- loss = max(0, -y * (x1 - x2) + margin)
  If y = 1, the loss is zero when x1 ≥ x2 + margin. If y = -1, the loss is zero when x2 ≥ x1 + margin.
- Pair-wise ranking: Compares two items, not triplets
- Simple outputs: Works with any scalar predictions (not just embeddings)
- Flexible: Can be used for any ranking problem
- Margin interpretation: Positive margin creates separation
- Target flexibility: target=1 means input1 higher, target=-1 means input2 higher
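To see how a positive margin creates separation, compare equal scores under margin 0 and margin 0.5 (a plain-number sketch of the single-pair formula, not library API):

```typescript
// Illustrative only: per-pair loss max(0, -y * (x1 - x2) + margin).
const pairLoss = (x1: number, x2: number, y: number, margin: number) =>
  Math.max(0, -y * (x1 - x2) + margin);

// margin = 0: equal scores already give zero loss, so nothing pushes them apart.
console.log(pairLoss(0.5, 0.5, 1, 0.0)); // → 0

// margin = 0.5: equal scores are penalized until x1 exceeds x2 by at least 0.5.
console.log(pairLoss(0.5, 0.5, 1, 0.5)); // → 0.5
console.log(pairLoss(1.0, 0.5, 1, 0.5)); // → 0
```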
Examples
// Simple ranking: item A should rank higher than item B
const margin_loss = new torch.nn.MarginRankingLoss({ margin: 1.0 });
const scores_A = torch.tensor([2.0, 1.5, 3.0]);
const scores_B = torch.tensor([1.0, 2.0, 1.0]);
const target = torch.tensor([1.0, 1.0, 1.0]); // A > B
const loss = margin_loss.forward(scores_A, scores_B, target);
// For [2.0, 1.0]: max(0, -1*(2.0-1.0)+1.0) = 0 (satisfied)
// For [1.5, 2.0]: max(0, -1*(1.5-2.0)+1.0) = 1.5 (violated)

// Recommender system: prefer liked items over disliked
class PreferenceScorer extends torch.nn.Module {
  fc1: torch.nn.Linear;
  fc2: torch.nn.Linear;

  constructor() {
    super();
    this.fc1 = new torch.nn.Linear(128, 64);
    this.fc2 = new torch.nn.Linear(64, 1);
  }

  forward(x: torch.Tensor): torch.Tensor {
    const h = torch.nn.functional.relu(this.fc1.forward(x));
    return this.fc2.forward(h);
  }
}
const scorer = new PreferenceScorer();
const margin_loss = new torch.nn.MarginRankingLoss({ margin: 0.2 });
// For each user, score items they liked vs disliked
const liked_item = torch.randn([32, 128]);
const disliked_item = torch.randn([32, 128]);
const liked_score = scorer.forward(liked_item);
const disliked_score = scorer.forward(disliked_item);
const target = torch.ones([32, 1]); // User prefers liked over disliked; shape matches the [32, 1] scores
const loss = margin_loss.forward(liked_score, disliked_score, target);
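Pair selection matters in practice: randomly sampled negatives quickly become easy and contribute zero loss. A minimal hard-negative selection sketch in plain TypeScript (hypothetical helper, independent of the library) picks the highest-scoring negative, i.e. the pair most likely to violate the margin:

```typescript
// Illustrative only: given candidate negative scores, return the index of
// the "hardest" one -- the negative with the highest score.
function hardestNegativeIndex(negativeScores: number[]): number {
  let best = 0;
  for (let i = 1; i < negativeScores.length; i++) {
    if (negativeScores[i] > negativeScores[best]) best = i;
  }
  return best;
}

const positiveScore = 1.2;
const negativeScores = [0.3, 1.1, -0.4, 0.9];
const hard = hardestNegativeIndex(negativeScores); // → 1 (score 1.1)

// Training on the pair (positiveScore, negativeScores[hard]) with target = 1
// gives the largest hinge loss among the candidates:
console.log(Math.max(0, -(positiveScore - negativeScores[hard]) + 0.2));
```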