Optimizers

The optimizers in MLX can be used both with mlx.nn and with pure mlx.core functions. A typical workflow is to call Optimizer.update() to update a model's parameters based on the loss gradients, and then to call mlx.core.eval() to evaluate both the model's parameters and the optimizer state.

# Create a model
model = MLP(num_layers, train_images.shape[-1], hidden_dim, num_classes)
mx.eval(model.parameters())

# Create the gradient function and the optimizer
loss_and_grad_fn = nn.value_and_grad(model, loss_fn)
optimizer = optim.SGD(learning_rate=learning_rate)

for e in range(num_epochs):
    for X, y in batch_iterate(batch_size, train_images, train_labels):
        loss, grads = loss_and_grad_fn(model, X, y)

        # Update the model with the gradients. So far no computation has happened.
        optimizer.update(model, grads)

        # Compute the new parameters but also the optimizer state.
        mx.eval(model.parameters(), optimizer.state)
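
The example above drives an mlx.nn model, but the same pattern works with pure mlx.core functions. Below is a minimal sketch, assuming a plain dict of parameters and a made-up linear least-squares loss (the parameter names, toy data, and hyperparameters are illustrative, not part of the original example):

import mlx.core as mx
import mlx.optimizers as optim

# Hypothetical parameter tree for a tiny linear model (no nn.Module)
params = {"w": mx.zeros((4,)), "b": mx.array(0.0)}

def loss_fn(params, X, y):
    pred = X @ params["w"] + params["b"]
    return mx.mean((pred - y) ** 2)

# Toy data
X = mx.random.normal((64, 4))
y = X @ mx.array([1.0, -2.0, 0.5, 3.0]) + 0.1

optimizer = optim.SGD(learning_rate=0.1)
loss_and_grad_fn = mx.value_and_grad(loss_fn)

for _ in range(100):
    loss, grads = loss_and_grad_fn(params, X, y)
    # update() writes the new parameter values back into the dict
    optimizer.update(params, grads)
    # Evaluate the new parameters and the optimizer state
    mx.eval(params, optimizer.state)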

Saving and Loading

To serialize an optimizer, save its state. To load an optimizer, load and set the saved state. Here is a simple example:

import mlx.core as mx
from mlx.utils import tree_flatten, tree_unflatten
import mlx.optimizers as optim

optimizer = optim.Adam(learning_rate=1e-2)

# Perform some updates with the optimizer
model = {"w" : mx.zeros((5, 5))}
grads = {"w" : mx.ones((5, 5))}
optimizer.update(model, grads)

# Save the state
state = tree_flatten(optimizer.state, destination={})
mx.save_safetensors("optimizer.safetensors", state)

# Later on, for example when loading from a checkpoint,
# recreate the optimizer and load the state
optimizer = optim.Adam(learning_rate=1e-2)

state = tree_unflatten(mx.load("optimizer.safetensors"))
optimizer.state = state

Note that not every optimizer configuration parameter is saved in the state. For example, for Adam the learning rate is saved but the betas and eps parameters are not. A good rule of thumb is that if a parameter can be scheduled, it will be included in the optimizer state.
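
One way to check which configuration values actually end up in the state is to flatten it and print the keys after a first update. A small sketch, using a placeholder parameter and gradient:

import mlx.core as mx
import mlx.optimizers as optim
from mlx.utils import tree_flatten

optimizer = optim.Adam(learning_rate=1e-2)

# A placeholder parameter/gradient pair, just to initialize the state
optimizer.update({"w": mx.zeros((2,))}, {"w": mx.ones((2,))})

# The flattened keys include the learning rate (it can be scheduled),
# but not betas or eps
print([k for k, _ in tree_flatten(optimizer.state)])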

clip_grad_norm(grads, max_norm)

Clips the global norm of the gradients.
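
As a usage sketch, a gradient tree can be rescaled before calling Optimizer.update(); the toy gradients below are arbitrary, and the return value is assumed to be the rescaled gradients together with the pre-clipping global norm:

import mlx.core as mx
import mlx.optimizers as optim

grads = {"w": mx.full((3,), 10.0), "b": mx.array(10.0)}

# Rescale the gradients so that their global (L2) norm is at most 1.0
clipped_grads, total_norm = optim.clip_grad_norm(grads, max_norm=1.0)
print(total_norm)  # the global norm before clipping

The clipped gradients can then be passed to Optimizer.update() in place of the raw ones.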