View Submission - EcoSta2023
A0494
Title: Proformer: A hybrid macaron transformer model predicts expression values from promoter sequences
Authors: Il-Youp Kwak - Chung-Ang University (Korea, South) [presenting]
Abstract: The breakthrough high-throughput measurement of the cis-regulatory activity of millions of randomly generated promoters provides an unprecedented opportunity to systematically decode the cis-regulatory logic that determines expression values. An end-to-end transformer encoder architecture named Proformer is developed to predict expression values from DNA sequences. Proformer uses a Macaron-like Transformer encoder architecture, in which two half-step feed-forward network (FFN) layers are placed at the beginning and the end of each encoder block, and a separable 1D convolution layer is inserted after the first FFN layer and in front of the multi-head attention layer. Sliding k-mers from the one-hot encoded sequence are mapped onto a continuous embedding and combined with a learned positional embedding and a strand embedding (forward strand vs. reverse-complemented strand) to form the sequence input. Moreover, Proformer introduces multiple expression heads with mask filling to prevent the transformer model from collapsing when trained on a relatively small amount of data. It is empirically determined that this design performs significantly better than conventional designs, such as using a global pooling layer as the output layer for the regression task. These analyses support the notion that Proformer provides a novel learning method and enhances our understanding of how cis-regulatory sequences determine expression values.
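The Macaron-like encoder block described in the abstract can be sketched as follows. This is a minimal illustration in PyTorch, assuming the usual Macaron-Net conventions (pre-layer-norm, residual connections, and a 0.5 scaling on the half-step FFNs); the hidden sizes, number of attention heads, and convolution kernel size are placeholder values, not hyperparameters reported here.

```python
# Minimal sketch of a Macaron-style encoder block:
# half-step FFN -> separable 1D conv -> multi-head attention -> half-step FFN.
# Layer norm placement, residual connections, the 0.5 scaling, and all sizes
# are assumptions for illustration, not details taken from the abstract.
import torch.nn as nn


class HalfStepFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(),
            nn.Dropout(dropout), nn.Linear(d_ff, d_model),
        )

    def forward(self, x):
        # Half-step: the FFN output is scaled by 0.5 before the residual add.
        return x + 0.5 * self.net(self.norm(x))


class SeparableConv1d(nn.Module):
    """Depthwise + pointwise 1D convolution over the sequence dimension."""
    def __init__(self, d_model: int, kernel_size: int = 9):
        super().__init__()
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, 1)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        y = self.pointwise(self.depthwise(x.transpose(1, 2)))
        return x + y.transpose(1, 2)


class MacaronEncoderBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=8, d_ff=1024, dropout=0.1):
        super().__init__()
        self.ffn_in = HalfStepFFN(d_model, d_ff, dropout)   # first half-step FFN
        self.conv = SeparableConv1d(d_model)                 # separable conv after it
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.ffn_out = HalfStepFFN(d_model, d_ff, dropout)  # second half-step FFN

    def forward(self, x):
        x = self.ffn_in(x)
        x = self.conv(x)
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = self.ffn_out(x)
        return x
```

In this arrangement the two half-step FFNs sandwich the convolution and attention sublayers, so the separable convolution gives each position a view of nearby nucleotide context before global self-attention is applied.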