I have been trying to implement a research paper that uses differential transformer block attention ([https://arxiv.org/abs/2502.13189](https://arxiv.org/abs/2502.13189)) to remove background noise from biological sounds. While training the model I keep running into numeric instability (NaN loss), specifically at this step:

lambda_val = torch.exp(lambda_q1_dot_k1) - torch.exp(lambda_q2_dot_k2) + self.lambda_init

This is most probably because the exponential terms take on very large values. I tried clamping the lambda values to avoid the overflow, but doing that makes the loss diverge after a few epochs. Has anybody who has tried this block found a fix, or can you say whether clamping is even the right approach from a loss-optimization point of view? (I know clamping is not ideal for optimization.)
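For context, here is a rough sketch of the lambda computation I am working with, together with the two mitigations I have been considering instead of clamping the output: upcasting the dot products to float32 (exp overflows fp16 at an argument of roughly 11) and clamping the exponent rather than the resulting lambda. Names and values like `head_dim`, `lambda_init = 0.8`, the `0.1` init std and `max_exponent = 10.0` are my own assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

class DiffAttnLambda(nn.Module):
    """Sketch of the lambda re-parameterisation, with two guards against
    overflow: compute the dot products in float32 and clamp the exponent
    argument (not the resulting lambda)."""

    def __init__(self, head_dim: int, lambda_init: float = 0.8, max_exponent: float = 10.0):
        super().__init__()
        self.lambda_init = lambda_init
        self.max_exponent = max_exponent
        # Small-std init keeps exp(q . k) close to 1 at the start of training.
        self.lambda_q1 = nn.Parameter(torch.randn(head_dim) * 0.1)
        self.lambda_k1 = nn.Parameter(torch.randn(head_dim) * 0.1)
        self.lambda_q2 = nn.Parameter(torch.randn(head_dim) * 0.1)
        self.lambda_k2 = nn.Parameter(torch.randn(head_dim) * 0.1)

    def forward(self) -> torch.Tensor:
        # Upcast so the dot products and exp run in float32 even under AMP/bf16.
        dot1 = torch.sum(self.lambda_q1.float() * self.lambda_k1.float())
        dot2 = torch.sum(self.lambda_q2.float() * self.lambda_k2.float())
        # Clamping the exponent bounds exp() smoothly instead of letting it overflow;
        # this is different from clamping lambda_val itself, which is what diverged for me.
        lam = (torch.exp(dot1.clamp(max=self.max_exponent))
               - torch.exp(dot2.clamp(max=self.max_exponent))
               + self.lambda_init)
        return lam
```

Is this kind of exponent-side clamp (or the float32 upcast alone) a saner way to keep the block stable, or does it still distort the gradients too much?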
