LibMTL.weighting.MoCo

class MoCo[source]

Bases: LibMTL.weighting.abstract_weighting.AbsWeighting

MoCo.

This method is proposed in Mitigating Gradient Bias in Multi-objective Learning: A Provably Convergent Approach (ICLR 2023) and implemented based on the authors' shared code (Heshan Fernando: fernah@rpi.edu).

Parameters
  • MoCo_beta (float, default=0.5) – The learning rate of the gradient-tracking variable y.

  • MoCo_beta_sigma (float, default=0.5) – The decay rate of MoCo_beta.

  • MoCo_gamma (float, default=0.1) – The learning rate of the task-weight vector lambda.

  • MoCo_gamma_sigma (float, default=0.5) – The decay rate of MoCo_gamma.

  • MoCo_rho (float, default=0) – The L2 regularization coefficient for lambda's update.

Warning

MoCo does not support representation gradients, i.e., rep_grad must be False.
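To make the roles of these hyperparameters concrete, the sketch below illustrates the style of update described in the paper: each task gradient is tracked by a momentum-like variable y (step size MoCo_beta), and the weight vector lambda is updated (step size MoCo_gamma, regularizer MoCo_rho) and projected back onto the probability simplex. This is an illustrative NumPy sketch, not the library's implementation; the function names `proj_simplex` and `moco_step` are assumptions introduced here.

```python
import numpy as np

def proj_simplex(v):
    # Euclidean projection onto the probability simplex, via the
    # standard sort-based algorithm (an assumption; the authors' code
    # may use a different projection routine).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[idx] - 1.0) / (idx + 1)
    return np.maximum(v - theta, 0.0)

def moco_step(Y, grads, lambd, beta, gamma, rho):
    """One MoCo-style update (illustrative sketch, not LibMTL's code).

    Y     : (m, d) tracked per-task gradient estimates (the 'y' variables)
    grads : (m, d) fresh stochastic task gradients
    lambd : (m,)   task weights on the probability simplex
    Returns updated (Y, lambd) and the combined direction d = Y^T lambd.
    """
    # Momentum-like tracking of each task gradient, step size beta.
    Y = Y - beta * (Y - grads)
    # Weight update with L2 regularization rho, projected to the simplex.
    lambd = proj_simplex(lambd - gamma * (Y @ (Y.T @ lambd) + rho * lambd))
    # Combined update direction applied to the shared parameters.
    d = Y.T @ lambd
    return Y, lambd, d
```

In LibMTL, the beta and gamma step sizes additionally decay over iterations according to MoCo_beta_sigma and MoCo_gamma_sigma; that schedule is omitted from this single-step sketch.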

init_param(self)[source]

Define and initialize some trainable parameters required by specific weighting methods.

backward(self, losses, **kwargs)[source]
Parameters
  • losses (list) – A list of losses of each task.

  • kwargs (dict) – A dictionary of hyperparameters of weighting methods.
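As a hypothetical illustration of the kwargs dictionary, the method's hyperparameters could be collected as follows (in practice LibMTL assembles and passes these internally, e.g. from command-line arguments):

```python
# Hypothetical hyperparameter dictionary for MoCo; the keys match the
# documented parameter names above, the values are the documented defaults.
moco_kwargs = {
    'MoCo_beta': 0.5,
    'MoCo_beta_sigma': 0.5,
    'MoCo_gamma': 0.1,
    'MoCo_gamma_sigma': 0.5,
    'MoCo_rho': 0.0,
}
```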