Architecture¶
Backbone¶
Backbone modules.
- class alonet.detr.backbone.Backbone(name, train_backbone, return_interm_layers, dilation, **kwargs)¶
Bases:
alonet.detr.backbone.BackboneBase
ResNet backbone with frozen BatchNorm.
- training: bool¶
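A minimal instantiation sketch; parameter semantics are assumed from the original DETR implementation this class copies, and resnet50 is only an example choice:

    from alonet.detr.backbone import Backbone

    # Hypothetical usage: a ResNet-50 backbone with frozen BatchNorm.
    backbone = Backbone(
        name="resnet50",             # torchvision ResNet variant (assumed)
        train_backbone=True,         # train the later ResNet stages
        return_interm_layers=False,  # only return the last feature map
        dilation=False,              # no dilated convolution in the last stage
    )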
- class alonet.detr.backbone.BackboneBase(backbone, train_backbone, num_channels, return_interm_layers, aug_tensor_compatible=True)¶
Bases:
torch.nn.modules.module.Module
Base class defining the shared behavior of backbones.
- forward(frames, **kwargs)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class alonet.detr.backbone.FrozenBatchNorm2d(n)¶
Bases:
torch.nn.modules.module.Module
BatchNorm2d where the batch statistics and the affine parameters are fixed.
Copy-paste from torchvision.misc.ops with an added eps before rsqrt, without which any model other than torchvision.models.resnet[18,34,50,101] produces NaNs.
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
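A sketch of the frozen-BN computation, following the torchvision/DETR version this class copies; eps is added inside rsqrt, which is the fix mentioned above:

    import torch

    def frozen_bn_forward(x, weight, bias, running_mean, running_var, eps=1e-5):
        # All statistics are fixed buffers; reshape for NCHW broadcasting.
        w = weight.reshape(1, -1, 1, 1)
        b = bias.reshape(1, -1, 1, 1)
        rv = running_var.reshape(1, -1, 1, 1)
        rm = running_mean.reshape(1, -1, 1, 1)
        scale = w * (rv + eps).rsqrt()  # eps before rsqrt avoids NaNs
        return x * scale + (b - rm * scale)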
- class alonet.detr.backbone.Joiner(backbone, position_embedding)¶
Bases:
torch.nn.modules.container.Sequential
A sequential wrapper for the backbone and position embedding.
- self.forward returns a tuple:
a list of feature maps from the backbone
a list of position-encoded feature maps
- forward(frames, **kwargs)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
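A wiring sketch showing the tuple output; the PositionEmbeddingSine import path and signature are assumptions based on the original DETR code, not confirmed by this page:

    from alonet.detr.backbone import Backbone, Joiner
    # Assumed import path for the sine position embedding used by DETR:
    from alonet.detr.position_encoding import PositionEmbeddingSine

    backbone = Backbone("resnet50", train_backbone=False,
                        return_interm_layers=False, dilation=False)
    pos_embedding = PositionEmbeddingSine(num_pos_feats=128)  # d_model // 2
    joiner = Joiner(backbone, pos_embedding)
    # joiner(frames) -> (list of feature maps, list of position encodings)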
- alonet.detr.backbone.build_backbone()¶
- alonet.detr.backbone.get_rank()¶
- alonet.detr.backbone.get_world_size()¶
- alonet.detr.backbone.is_dist_avail_and_initialized()¶
- alonet.detr.backbone.is_main_process()¶
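These helpers mirror the distributed utilities of the original DETR repository; a sketch of their expected behavior:

    import torch.distributed as dist

    def is_dist_avail_and_initialized():
        # True only when torch.distributed is both built and initialized.
        return dist.is_available() and dist.is_initialized()

    def get_world_size():
        return dist.get_world_size() if is_dist_avail_and_initialized() else 1

    def get_rank():
        return dist.get_rank() if is_dist_avail_and_initialized() else 0

    def is_main_process():
        # Rank 0 is conventionally the main process.
        return get_rank() == 0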
Transformer¶
DETR Transformer class.
- Copy-paste from torch.nn.Transformer with modifications:
positional encodings are passed into multi-head attention
the extra LayerNorm at the end of the encoder is removed
the decoder returns a stack of activations from all decoding layers
Then copy-pasted from the official DETR repository, with modifications to make it usable inside aloception alongside Deformable DETR.
- class alonet.detr.transformer.Transformer(d_model=512, nhead=8, encoder=None, decoder=None, decoder_layer=None, encoder_layer=None, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation='relu', normalize_before=False, return_intermediate_dec=False)¶
Bases:
torch.nn.modules.module.Module
- forward(src, mask, query_embed, pos_embed, **kwargs)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
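A hypothetical call sketch; tensor layouts are assumed from the original DETR transformer (batch-first feature maps, flattened internally), and all sizes are examples:

    import torch
    from alonet.detr.transformer import Transformer

    model = Transformer(d_model=256, nhead=8, return_intermediate_dec=True)

    bs, c, h, w = 2, 256, 25, 34                    # example sizes
    src = torch.rand(bs, c, h, w)                   # projected backbone features
    mask = torch.zeros(bs, h, w, dtype=torch.bool)  # padding mask (no padding here)
    query_embed = torch.rand(100, c)                # learned object queries
    pos_embed = torch.rand(bs, c, h, w)             # position encodings

    out = model(src, mask, query_embed, pos_embed)
    # With return_intermediate_dec=True, the decoder output stacks the
    # activations of all decoding layers.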
- class alonet.detr.transformer.TransformerDecoder(decoder_layer, num_layers, norm=None, return_intermediate=False)¶
Bases:
torch.nn.modules.module.Module
- decoder_forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, pos=None, query_pos=None, **kwargs)¶
- forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, pos=None, query_pos=None, decoder_outputs=None, **kwargs)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- pre_process_tgt(tgt, query_pos, tgt_key_padding_mask, **kwargs)¶
Pre-process the decoder inputs.
- training: bool¶
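Assembling a decoder from the TransformerDecoderLayer documented below, assumed to mirror how the original DETR builds it:

    import torch.nn as nn
    from alonet.detr.transformer import TransformerDecoder, TransformerDecoderLayer

    decoder_layer = TransformerDecoderLayer(d_model=256, n_heads=8)
    decoder = TransformerDecoder(decoder_layer, num_layers=6,
                                 norm=nn.LayerNorm(256),
                                 return_intermediate=True)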
- class alonet.detr.transformer.TransformerDecoderLayer(d_model, n_heads=8, dim_feedforward=2048, dropout=0.1, activation='relu', normalize_before=False)¶
Bases:
torch.nn.modules.module.Module
- decoder_layer_forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, pos=None, query_pos=None, **kwargs)¶
- forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, pos=None, query_pos=None, **kwargs)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- forward_post(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, pos=None, query_pos=None)¶
- forward_pre(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, pos=None, query_pos=None)¶
- pre_process_tgt(tgt, query_pos, tgt_key_padding_mask, **kwargs)¶
Pre-process the decoder inputs.
- training: bool¶
- with_pos_embed(tensor, pos)¶
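with_pos_embed adds the positional encoding to a tensor when one is given; in the original DETR layer this class copies, it is applied to queries and keys but not to values. A sketch:

    def with_pos_embed(tensor, pos):
        # No-op when no positional encoding is provided.
        return tensor if pos is None else tensor + pos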
- class alonet.detr.transformer.TransformerEncoder(encoder_layer, num_layers, norm=None)¶
Bases:
torch.nn.modules.module.Module
- forward(src, mask=None, src_key_padding_mask=None, pos=None, **kwargs)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class alonet.detr.transformer.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='relu', normalize_before=False)¶
Bases:
torch.nn.modules.module.Module
- forward(src, src_mask=None, src_key_padding_mask=None, pos=None, **kwargs)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- forward_post(src, src_mask=None, src_key_padding_mask=None, pos=None)¶
- forward_pre(src, src_mask=None, src_key_padding_mask=None, pos=None)¶
- training: bool¶
- with_pos_embed(tensor, pos)¶
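Assembling an encoder from its layer, assumed to mirror the original DETR construction:

    from alonet.detr.transformer import TransformerEncoder, TransformerEncoderLayer

    encoder_layer = TransformerEncoderLayer(d_model=256, nhead=8,
                                            dim_feedforward=2048, dropout=0.1)
    encoder = TransformerEncoder(encoder_layer, num_layers=6)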
- alonet.detr.transformer.build_transformer()¶
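The builders take no arguments here and presumably return the components documented above configured with the DETR defaults used across alonet; a hypothetical end-to-end wiring:

    from alonet.detr.backbone import build_backbone
    from alonet.detr.transformer import build_transformer

    backbone = build_backbone()        # a Joiner: backbone + position embedding
    transformer = build_transformer()  # the Transformer documented above
    # A DETR head would project the backbone features to d_model channels,
    # then call transformer(src, mask, query_embed, pos_embed).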