How would you encode a set of sequences into a (bidirectional) transformer? For a single sequence we have positional encodings. For a set of tokens we can simply drop them. But what about a set of sequences {s\_1, …, s\_n}, where each s\_i is itself a sequence, yet the relative order of the sequences does not matter?
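To make the question concrete, here is a minimal sketch of one natural option (my own illustration, not an established recipe, assuming PyTorch): apply standard sinusoidal positional encodings *within* each sequence independently, restarting positions at 0 for every s\_i. Every sequence then receives the same position indices, so permuting the sequences only permutes the resulting token representations, and a bidirectional encoder with full self-attention treats the concatenated stream symmetrically. The function names `sinusoidal_pe` and `encode_sequence_set` are hypothetical.

```python
import torch

def sinusoidal_pe(length: int, d_model: int) -> torch.Tensor:
    # Standard sinusoidal positional-encoding table of shape (length, d_model).
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-torch.log(torch.tensor(10000.0)) / d_model))
    pe = torch.zeros(length, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

def encode_sequence_set(seq_embeddings: list[torch.Tensor], d_model: int) -> torch.Tensor:
    """seq_embeddings: list of (len_i, d_model) token-embedding tensors, one per s_i.

    Positions restart at 0 for every sequence, so permuting the list
    permutes the same token representations: the encoder input is
    order-invariant across sequences but order-aware within each one.
    """
    encoded = [emb + sinusoidal_pe(emb.size(0), d_model) for emb in seq_embeddings]
    # Concatenate into one token stream for a bidirectional encoder;
    # full (non-causal) self-attention then treats the stream symmetrically.
    return torch.cat(encoded, dim=0)
```

One caveat with this sketch: after concatenation, nothing marks which tokens belong to which sequence, and ordinary segment embeddings would reintroduce an ordering. Is there a standard way to mark sequence membership while keeping permutation invariance (e.g. block-diagonal attention masks)?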