Bytes Are All You Need

Published June 27, 2023


1 min read

Damian Jaspar

Modern deep learning approaches usually transform inputs into a modality-specific form before training. Researchers from Apple have developed a new approach that bypasses this step and trains transformer-based models directly on raw file bytes, enabling a single architecture to operate on multiple input modalities. The presented ByteFormer model shows strong performance in the image domain and has applications in privacy-preserving inference.

Link to paper: