Keyword search (4,164 papers available)

"Zunair H" Authored Publications:

Title — Authors — PubMed ID — Dept.
1. CosSIF: Cosine similarity-based image filtering to overcome low inter-class variation in synthetic medical image datasets. Islam M; Zunair H; Mohammed N. PMID: 38492455 (ENCS)
2. Quantifying imbalanced classification methods for leukemia detection. Depto DS; Rizvee MM; Rahman A; Zunair H; Rahman MS; Mahdy MRC. PMID: 36516574 (ENCS)
3. Knowledge distillation approach towards melanoma detection. Khan MS; Alam KN; Dhruba AR; Zunair H; Mohammed N. PMID: 35594685 (CONCORDIA)
4. Sharp U-Net: Depthwise convolutional network for biomedical image segmentation. Zunair H; Ben Hamza A. PMID: 34348214 (ENCS)
5. A comparative analysis of deep learning architectures on high variation malaria parasite classification dataset. Rahman A; Zunair H; Reme TR; Rahman MS; Mahdy MRC. PMID: 33465520 (ENCS)
6. Melanoma detection using adversarial training and deep transfer learning. Zunair H; Ben Hamza A. PMID: 32252036 (CONCORDIA)


Title: Sharp U-Net: Depthwise convolutional network for biomedical image segmentation
Authors: Zunair H; Ben Hamza A
Link: https://pubmed.ncbi.nlm.nih.gov/34348214/
DOI: 10.1016/j.compbiomed.2021.104699
Publication: Computers in Biology and Medicine
Keywords: Fully convolutional network; Semantic segmentation; Sharpening filter; Skip connections; U-Net
PMID: 34348214
Date Added: 2021-08-05
Dept Affiliation: ENCS
Author affiliations:
1. Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada.
2. Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada. Electronic address: hamza@ciise.concordia.ca.

Description:

The U-Net architecture, built upon the fully convolutional network, has proven to be effective in biomedical image segmentation. However, U-Net applies skip connections to merge semantically different low- and high-level convolutional features, resulting in not only blurred feature maps, but also over- and under-segmented target regions. To address these limitations, we propose a simple, yet effective end-to-end depthwise encoder-decoder fully convolutional network architecture, called Sharp U-Net, for binary and multi-class biomedical image segmentation. The key rationale of Sharp U-Net is that instead of applying a plain skip connection, a depthwise convolution of the encoder feature map with a sharpening kernel filter is employed prior to merging the encoder and decoder features, thereby producing a sharpened intermediate feature map of the same size as the encoder map. Using this sharpening filter layer, we are able to not only fuse semantically less dissimilar features, but also to smooth out artifacts throughout the network layers during the early stages of training. Our extensive experiments on six datasets show that the proposed Sharp U-Net model consistently outperforms or matches the recent state-of-the-art baselines in both binary and multi-class segmentation tasks, while adding no extra learnable parameters. Furthermore, Sharp U-Net outperforms baselines that have more than three times the number of learnable parameters.
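The core idea in the abstract — convolving each encoder channel independently with a fixed sharpening kernel before the skip-connection merge, adding no learnable parameters — can be sketched as follows. This is a minimal illustration assuming a (channels, height, width) array and the classic 3x3 sharpening kernel; the actual kernel values and padding scheme used in the paper may differ.

```python
import numpy as np

# Classic 3x3 sharpening kernel (illustrative; not taken from the paper).
SHARPEN_3X3 = np.array([[ 0., -1.,  0.],
                        [-1.,  5., -1.],
                        [ 0., -1.,  0.]])

def sharp_skip(encoder_feat):
    """Depthwise 3x3 sharpening of an encoder feature map (C, H, W).

    Each channel is filtered independently (no cross-channel mixing,
    i.e. a depthwise convolution with a fixed, non-learnable kernel),
    and zero padding preserves the spatial size, so the output can be
    concatenated with decoder features exactly like a plain skip
    connection.
    """
    c, h, w = encoder_feat.shape
    padded = np.pad(encoder_feat, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(encoder_feat)
    for i in range(3):
        for j in range(3):
            # Accumulate each kernel tap over all channels at once;
            # the kernel is symmetric, so correlation == convolution.
            out += SHARPEN_3X3[i, j] * padded[:, i:i + h, j:j + w]
    return out
```

Because the output has the same shape as the encoder map, the merge step stays the usual channel-wise concatenation, e.g. `merged = concat(sharp_skip(enc), dec)`.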





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026