spark.client.requestHotReload
function requestHotReload(code: string): void

Request a hot reload of the torch function code without restarting the worker.
Sends new code to the worker to be applied at the next checkpoint(). The worker's persisted state (models, optimizers) remains unchanged, allowing you to update logic while keeping training progress.
Common use case: Fix a bug in your training loop without losing the model.
The worker continues executing the old code until the next checkpoint() is reached, at which point the new code takes over with all persisted values intact. Make sure your code calls checkpoint() regularly so a reload request is picked up promptly.
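The swap-at-checkpoint behavior described above can be sketched generically. This is a hypothetical illustration, not the real worker internals: a pending update is staged by the reload request and only applied when the running loop reaches a checkpoint, while persisted state survives the swap.

```typescript
// Sketch of swap-at-checkpoint (hypothetical names, not the spark internals).
type Step = (state: { count: number }) => void;

let current: Step = (s) => { s.count += 1; };   // "old code" currently running
let pending: Step | null = null;                // staged by a reload request

// Analogue of requestHotReload: stage new code, do not apply it yet.
function requestSwap(next: Step) { pending = next; }

// Analogue of checkpoint(): the only place the staged code takes over.
function checkpoint() {
  if (pending) { current = pending; pending = null; }
}

const state = { count: 0 };                     // persisted state, never reset
current(state); current(state);                 // old logic runs: count = 2
requestSwap((s) => { s.count += 10; });         // stage a fix mid-run
current(state);                                 // still old code: count = 3
checkpoint();                                   // swap applied here
current(state);                                 // new logic: count = 13
```

The key property shown: code staged between checkpoints has no effect until checkpoint() runs, and the state object is never recreated, mirroring how persisted models and optimizers survive a reload.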
Parameters
- code (string) - Complete torch function code to execute
Examples
```typescript
// Original code
function torch() {
  const model = spark.persist('model', () => nn.Sequential(...));
  async function train() { ... }
  spark.expose({ train });
}

const s = spark.use(torch);
await s.train(); // Starts training...

// User notices a bug in the training loop and fixes it.
// Send the updated code without restarting:
const fixedCode = `
function torch() {
  const model = spark.persist('model', () => nn.Sequential(...));
  async function train() {
    // Fixed training logic
  }
  spark.expose({ train });
}
`;
requestHotReload(fixedCode);
// Model weights are preserved! Training continues with the fixed code.
```

See Also
- useSpark - Returns the hook to call exposed functions