Attention-based transformers have become the standard architecture in many deep learning fields, primarily due to their ability to model long-range dependencies and to handle variable-length input sequences. However, the attention mechanism, with its quadratic complexity, is a significant bottleneck in the transformer architecture. In the decoder, attention is only uni-directional, and it converges to a static pattern in over-parametrized decoder-only models. I address this issue by developing a generative function as a replacement for attention or activation. It retains the auto-regressive character by comparing each token with the previous one. In my test setting with nanoGPT, this yields a lower loss with a smaller model. The loss drops further when an average context vector is incorporated. This concept of attention replacement is distributed under the GNU AGPL v3 license at https://gitlab.com/Bachstelze/causalgeneration.
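The abstract only sketches the replacement at a high level: each token is compared with its predecessor, and an average context vector is mixed in. Below is a minimal sketch of how such a causal block could look in PyTorch. The class name `CausalGeneration`, the linear mixing layer, and the zero-padding at position 0 are illustrative assumptions on my part, not the author's actual implementation; see the linked repository for that.

```python
import torch
import torch.nn as nn

class CausalGeneration(nn.Module):
    """Hypothetical sketch of an attention replacement: each token is fused
    with its previous token (strictly causal, auto-regressive) and with a
    cumulative mean of the causal prefix (the "average context vector").
    Runs in linear time in sequence length, unlike quadratic attention."""

    def __init__(self, n_embd: int):
        super().__init__()
        # Fuse current token, previous token, and average context (assumed design).
        self.mix = nn.Linear(3 * n_embd, n_embd)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_embd)
        # Previous token: shift right by one position, pad position 0 with zeros.
        prev = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        # Average context vector: cumulative mean over all tokens seen so far.
        steps = torch.arange(1, x.size(1) + 1, device=x.device).view(1, -1, 1)
        context = x.cumsum(dim=1) / steps
        # Generative mixing of the three causal signals.
        return self.mix(torch.cat([x, prev, context], dim=-1))

# Usage: drop-in for an attention sub-layer in a small decoder block.
block = CausalGeneration(n_embd=384)
y = block(torch.randn(2, 16, 384))  # -> (2, 16, 384)
```

Because no pairwise token scores are computed, this block needs no causal mask and no key/value cache; causality follows directly from the shift and the cumulative mean.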
Kalle Hilsenbek
DOI: https://doi.org/10.48550/arxiv.2406.10906