Transformer¶
MLP¶
- class alonet.transformers.mlp.MLP(input_dim, hidden_dim, output_dim, num_layers)¶
Bases: torch.nn.modules.module.Module
Very simple multi-layer perceptron (also called FFN)
- forward(x)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
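A minimal usage sketch, assuming the constructor signature documented above and that the MLP is applied over the last dimension of its input; the dimensions and batch shape below are purely illustrative:

```python
import torch
from alonet.transformers.mlp import MLP

# Map 256-d transformer features to 4 box coordinates through 3 linear layers
# (values chosen only for illustration).
mlp = MLP(input_dim=256, hidden_dim=256, output_dim=4, num_layers=3)

x = torch.rand(2, 100, 256)  # (batch, num_queries, input_dim)
out = mlp(x)                 # call the Module instance, not forward(), so hooks run
print(out.shape)             # expected: torch.Size([2, 100, 4])
```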
Positional Encoding¶
Various positional encodings for the transformer.
- class alonet.transformers.position_encoding.PositionEmbeddingSine(num_pos_feats=64, temperature=10000, normalize=False, scale=None, center=False)¶
Bases: torch.nn.modules.module.Module
This is a more standard version of the position embedding, very similar to the one used by the Attention is all you need paper, generalized to work on images.
- forward(ftmap_mask)¶
TODO. We must make one global position encoding in the transformer class and one specific class in Detr that inherits from the main position embedding method.
- Parameters
- ftmap_mask: tuple
Tuple made of one torch.Tensor (the feature map) and one mask for the transformer.
- training: bool¶
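A minimal usage sketch, assuming the forward pass takes the (feature map, mask) tuple described above and that the mask follows the DETR convention (True marks padded pixels); all shapes and parameter values below are illustrative:

```python
import torch
from alonet.transformers.position_encoding import PositionEmbeddingSine

# With num_pos_feats=128, a DETR-style sine encoding concatenates sine and cosine
# halves, giving 256 output channels (assumption based on the original DETR code).
pos_embed = PositionEmbeddingSine(num_pos_feats=128, normalize=True)

feature_map = torch.rand(2, 256, 32, 32)         # (batch, channels, H, W)
mask = torch.zeros(2, 32, 32, dtype=torch.bool)  # no padded pixels in this toy example
pos = pos_embed((feature_map, mask))             # one positional vector per spatial location
print(pos.shape)                                 # expected: torch.Size([2, 256, 32, 32])
```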
- alonet.transformers.position_encoding.build_position_encoding()¶