CT-CAD: Context-Aware Transformers for End-to-End Chest Abnormality Detection on X-Rays

Abstract

Supervised deep learning methods have achieved great success in the medical image analysis domain. Most of them could nevertheless be further improved by exploring and embedding context knowledge to boost accuracy. Moreover, they generally suffer from slow convergence and high computing cost, which prevents their use in practical scenarios. To tackle these problems, we present CT-CAD, a context-aware transformer for end-to-end chest abnormality detection on X-ray images. The proposed method first constructs a context-aware feature extractor, which enlarges receptive fields to encode multi-scale context information via an iterative feature fusion scheme and dilated context encoding blocks. Afterwards, a deformable transformer detector is built for category classification and location regression, where its deformable attention blocks attend to a small set of key sampling points, thus allowing the transformer to focus on a feature subspace and accelerating convergence. Through comparative experiments on the Vinbig Chest and Chest Det10 datasets, the proposed CT-CAD demonstrates its effectiveness and outperforms existing methods in mAP and training epochs.
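The key efficiency idea in the deformable attention blocks described above is that each query attends to only K learned sampling points around a reference location, instead of all H×W positions of the feature map. A minimal NumPy sketch of that sampling-and-weighting step is shown below; it is an illustrative simplification (single head, nearest-neighbor sampling instead of bilinear interpolation), and all function and variable names here are our own, not from the paper's code.

```python
import numpy as np

def deformable_attention(feature_map, ref_points, offsets, attn_logits):
    """Single-head deformable attention sketch (simplified illustration):
    each query attends to only K sampled points near its reference
    location rather than the full feature map.

    feature_map: (H, W, C) encoder features
    ref_points:  (Q, 2) reference (y, x) per query
    offsets:     (Q, K, 2) learned sampling offsets per query
    attn_logits: (Q, K) learned attention logits per query
    returns:     (Q, C) aggregated features per query
    """
    H, W, C = feature_map.shape
    # Sampling locations = reference point + learned offset, clamped to the map.
    locs = ref_points[:, None, :] + offsets               # (Q, K, 2)
    ys = np.clip(np.round(locs[..., 0]).astype(int), 0, H - 1)
    xs = np.clip(np.round(locs[..., 1]).astype(int), 0, W - 1)
    sampled = feature_map[ys, xs]                         # (Q, K, C)
    # Softmax over only the K sampled points: cost O(Q*K), not O(Q*H*W).
    w = np.exp(attn_logits - attn_logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                  # (Q, K)
    return (w[..., None] * sampled).sum(axis=1)           # (Q, C)
```

Because the softmax and the weighted sum run over only K points per query, the attention cost no longer scales with the feature-map size, which is the property the abstract credits for the faster convergence.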

Publication
IEEE International Conference on Bioinformatics and Biomedicine
Yirui Wu
Young Professor, CCF Senior Member

My research interests include Computer Vision, Artificial Intelligence, Multimedia Computing and Intelligent Water Conservancy.