
Cross attention should be over the whole seq and smaller seq

In your code, you split the sequence into a prefix and a smaller window, and the cross-attention keys/values appear to be computed from the prefix only...

https://github.com/lucidrains/perceiver-ar-pytorch/blob/685d77d152c55ef7210336566b952de7da631f68/perceiver_ar_pytorch/perceiver_ar_pytorch.py#L284

However, in the diagram of the method, the whole sequence is used for the V and K... Can you kindly confirm?
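To make sure I'm asking about the right thing, here is a minimal sketch (not your code; the shapes, names, and projections are my own, and causal masking is omitted) contrasting the two readings:

```python
import torch
import torch.nn as nn

def cross_attend(x, latent_len, to_q, to_kv, kv_from_whole_seq=True):
    # queries always come from the smaller (latent) window at the end of the sequence
    prefix, latents = x[:, :-latent_len], x[:, -latent_len:]
    q = to_q(latents)

    # reading 1 (the diagram, as I understand it): K/V over the whole sequence
    # reading 2 (what the linked line seems to do): K/V over the prefix only
    context = x if kv_from_whole_seq else prefix
    k, v = to_kv(context).chunk(2, dim=-1)

    sim = torch.einsum('b i d, b j d -> b i j', q, k) * (q.shape[-1] ** -0.5)
    attn = sim.softmax(dim=-1)  # causal masking over latent positions omitted for brevity
    return torch.einsum('b i j, b j d -> b i d', attn, v)

dim, latent_len = 64, 16
to_q = nn.Linear(dim, dim, bias=False)
to_kv = nn.Linear(dim, dim * 2, bias=False)
x = torch.randn(2, 128, dim)

out_whole  = cross_attend(x, latent_len, to_q, to_kv, kv_from_whole_seq=True)   # K/V: all 128 positions
out_prefix = cross_attend(x, latent_len, to_q, to_kv, kv_from_whole_seq=False)  # K/V: first 112 positions
```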

Thank you!