Commonly called a KAN. A KAN is a neural network where you learn the activation functions themselves: each edge carries its own learnable univariate function (typically a spline), in place of the usual fixed activation plus learned weights.
So far they have not proven very useful in practice.
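A minimal sketch of the idea, assuming piecewise-linear (shifted-ReLU) edge activations as a crude stand-in for the B-splines used in pykan — the class name and parameterization here are illustrative, not pykan's API:

```python
import numpy as np

class KANLayer:
    """Toy KAN-style layer: every edge (i, j) carries its own learnable
    1-D activation, parameterized as a sum of shifted ReLUs on a fixed
    knot grid (a crude stand-in for B-splines)."""

    def __init__(self, in_dim, out_dim, n_knots=8, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed knot grid on [-1, 1]; only the per-edge coefficients are learned.
        self.knots = np.linspace(-1.0, 1.0, n_knots)                 # (K,)
        self.coef = rng.normal(0, 0.1, (out_dim, in_dim, n_knots))   # (O, I, K)

    def forward(self, x):
        # x: (batch, in_dim). Evaluate each edge's activation, sum over inputs.
        basis = np.maximum(x[:, :, None] - self.knots, 0.0)          # (B, I, K)
        # For each output j: sum_i sum_k coef[j, i, k] * basis[b, i, k]
        return np.einsum('oik,bik->bo', self.coef, basis)

layer = KANLayer(in_dim=2, out_dim=3)
y = layer.forward(np.array([[0.5, -0.2]]))
print(y.shape)  # (1, 3)
```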
KAN vs MLP
It turns out that you can rewrite a Kolmogorov-Arnold Network as an ordinary MLP, using some repeats and shifts of the input before the ReLU.
- https://colab.research.google.com/drive/1v3AHz5J3gk-vu4biESubJdOsUheycJNz
- https://archive.md/Wg8P7
- https://www.reddit.com/r/MachineLearning/comments/1clcu5i/d_kolmogorovarnold_network_is_just_an_mlp/
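The trick from the links above, sketched for a single edge activation: a piecewise-linear function written as a sum of shifted ReLUs is exactly "repeat the input, shift, ReLU, then a linear layer" — i.e. standard MLP operations. A small numerical check (variable names are my own):

```python
import numpy as np

# A piecewise-linear edge activation f(x) = sum_k c_k * relu(x - t_k).
rng = np.random.default_rng(1)
K = 6
shifts = np.linspace(-1, 1, K)   # knot positions t_k (fixed)
coefs = rng.normal(size=K)       # coefficients c_k (the learned part)

def kan_edge(x):
    # Direct evaluation of the edge activation.
    return sum(c * max(x - t, 0.0) for c, t in zip(coefs, shifts))

def mlp_form(x):
    # Same computation written as MLP ops: repeat -> shift -> ReLU -> linear.
    repeated = np.full(K, x)                      # "some repeats"
    hidden = np.maximum(repeated - shifts, 0.0)   # "shift before ReLU"
    return coefs @ hidden                         # output linear layer

x = 0.37
print(np.isclose(kan_edge(x), mlp_form(x)))  # True
```

The two functions are the same computation; the only difference is whether you read it as "learnable activation on an edge" or "one hidden MLP layer per edge".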
Link dump
- Paper: https://arxiv.org/abs/2404.19756
- https://cprimozic.net/blog/trying-out-kans/
- https://github.com/Ameobea/kan/blob/main/tiny_kan.py
- https://github.com/KindXiaoming/pykan
- https://github.com/Blealtan/efficient-kan