ResNeSt: Split-Attention Networks
While image classification models have recently continued to advance, most downstream applications such as object detection and semantic segmentation still employ ResNet variants as the backbone network due to their simple and modular structure. We present a simple and modular Split-Attention block that enables attention across feature-map groups. By stacking these Split-Attention blocks ResNet-style, we obtain a new ResNet variant which we call ResNeSt. Our network preserves the overall ResNet structure, so it can be used in downstream tasks straightforwardly without introducing additional computational costs. ResNeSt models outperform other networks with similar model complexities. For example, ResNeSt-50 achieves 81.13% top-1 accuracy on ImageNet using a single crop size of 224x224, outperforming the previous best ResNet variant by more than 1% in accuracy. This improvement also helps downstream tasks including object detection, instance segmentation, and semantic segmentation. For example, by simply replacing the ResNet-50 backbone with ResNeSt-50, we improve the mAP of Faster-RCNN on MS-COCO from 39.3% to 42.1%.
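As a rough illustration of the idea, here is a minimal PyTorch sketch of split attention for a single cardinal group: a grouped convolution produces `radix` feature-map splits, the splits are fused by summation and globally pooled, and two small dense layers compute per-split (r-softmax) attention weights that reweight and recombine the splits. The class name and hyperparameters (`SplitAttention`, `radix`, `reduction`) are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a Split-Attention block for a single cardinal group.
# Names and hyperparameters are illustrative, not the paper's exact
# configuration; assumes `channels` is divisible by `radix`.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    def __init__(self, channels: int, radix: int = 2, reduction: int = 4):
        super().__init__()
        self.radix, self.channels = radix, channels
        inter = max(channels // reduction, 32)
        # One grouped convolution emits all `radix` feature-map splits at once.
        self.conv = nn.Conv2d(channels, channels * radix, kernel_size=3,
                              padding=1, groups=radix, bias=False)
        self.bn = nn.BatchNorm2d(channels * radix)
        # Two 1x1 convs act as the dense layers producing attention logits.
        self.fc1 = nn.Conv2d(channels, inter, kernel_size=1)
        self.fc2 = nn.Conv2d(inter, channels * radix, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = x.size(0)
        x = F.relu(self.bn(self.conv(x)))              # (B, C*R, H, W)
        splits = x.view(b, self.radix, self.channels, *x.shape[2:])
        fused = splits.sum(dim=1)                      # fuse splits: (B, C, H, W)
        gap = F.adaptive_avg_pool2d(fused, 1)          # global context: (B, C, 1, 1)
        logits = self.fc2(F.relu(self.fc1(gap)))       # (B, C*R, 1, 1)
        # r-softmax: normalize attention across the radix splits per channel.
        att = F.softmax(logits.view(b, self.radix, self.channels), dim=1)
        return (att.view(b, self.radix, self.channels, 1, 1) * splits).sum(dim=1)
```

For instance, `SplitAttention(64, radix=2)` maps a `(2, 64, 32, 32)` tensor to a tensor of the same shape, which is what lets the block stand in for a standard 3x3 convolution inside a ResNet bottleneck without disturbing the surrounding structure.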
Code repositories:
- ResNeSt: Split-Attention Networks for Tensorflow2
- PyTorch implementation of ResNeSt: Split-Attention Networks
- A Streamlit-powered web app (capable of running on an AWS `t2.micro` EC2 instance), backed by a CNN fine-tuned on data from the SIIM-ISIC Melanoma Classification competition.