Dual RNN Models

PyTorch Models for Sequential Data

dls = create_dls_test(prediction=True)
init_sz = 50
dls.show_batch(max_n=1)

Model


source

Diag_RNN

 Diag_RNN (input_size, output_size, output_layer=1, hidden_size=100,
           rnn_layer=1, linear_layer=1, stateful=False, hidden_p=0.0,
           input_p=0.0, weight_p=0.0, rnn_type='gru',
           ret_full_hidden=False, normalization='', **kwargs)

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and their parameters will also be converted when you call .to(), etc.

Note: as per the example above, an __init__() call to the parent class must be made before assignment on the child.

training (bool): whether this module is in training or evaluation mode.

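Judging from the examples further down this page, a diagnosis module receives the concatenated input and output channels (input_size = n_u + n_y) and estimates an initial hidden state, so output_size and output_layer have to match hidden_size and rnn_layer of the prognosis model. A hypothetical instantiation under that assumption, mirroring the Diag_RNN_raw and Diag_TCN calls in the FranSys examples below:

# assumed values: 2 channels in (1 input + 1 output signal), hidden state for a 2-layer, 50-unit prognosis RNN
diag_rnn = Diag_RNN(2, 50, 2, hidden_size=100, rnn_type='gru')

Like the other diagnosis modules, it can then be handed to FranSys via the diag_model argument (see the examples below).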

source

Diag_RNN_raw

 Diag_RNN_raw (input_size, output_size, output_layer=1, hidden_size=100,
               rnn_layer=1, linear_layer=1, stateful=False)



source

DiagLSTM

 DiagLSTM (input_size, output_size, output_layer=1, hidden_size=100,
           rnn_layer=1, linear_layer=1, **kwargs)


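Since an LSTM carries a cell state in addition to the hidden state, the sketch below assumes DiagLSTM is paired with an LSTM-based prognosis model via rnn_type='lstm'; both the pairing and the values are assumptions, mirroring the diag_model examples in the FranSys section below.

# assumed pairing: LSTM diagnosis module with an LSTM-based FranSys model
diag_lstm = DiagLSTM(2, 50, 2)
model = FranSys(1, 1, init_sz=init_sz, rnn_layer=2, hidden_size=50, rnn_type='lstm', diag_model=diag_lstm)
# lrn = Learner(dls, model, loss_func=nn.MSELoss()); lrn.fit(1, lr=3e-3)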

source

Diag_TCN

 Diag_TCN (input_size, output_size, output_layer, hl_width, mlp_layers=0,
           hl_depth=1, act=Mish, bn=False, stateful=False, **kwargs)



source

ARProg_Init

 ARProg_Init (n_u, n_y, init_sz, n_x=0, hidden_size=100, rnn_layer=1,
              diag_model=None, linear_layer=1, final_layer=0,
              hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru',
              ret_full_hidden=False, stateful=False, normalization='',
              **kwargs)


model = ARProg_Init(1,1,init_sz=init_sz,rnn_layer=1,hidden_size=50)
lrn = Learner(dls,model,loss_func=SkipNLoss(mse,init_sz))
# lrn.fit(1,lr=3e-3)
lrn.fit_flat_cos(1,3e-3,pct_start=0.2)
epoch  train_loss  valid_loss  time
0      0.054181    0.059382    00:01
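Since the first init_sz steps are only used to initialize the model, SkipNLoss(mse, init_sz) excludes them from the training loss. The same masking can be applied when inspecting predictions after training; a sketch that assumes get_preds returns (batch, time, channels) tensors:

# evaluate only on the part of the sequence the model is actually asked to predict
preds, targs = lrn.get_preds()
rmse = ((preds[:, init_sz:] - targs[:, init_sz:]) ** 2).mean().sqrt()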

FranSys


source

FranSys

 FranSys (n_u, n_y, init_sz, n_x=0, hidden_size=100, rnn_layer=1,
          diag_model=None, linear_layer=1, init_diag_only=False,
          final_layer=0, hidden_p=0.0, input_p=0.0, weight_p=0.0,
          rnn_type='gru', ret_full_hidden=False, stateful=False,
          normalization='', **kwargs)


model = FranSys(1,1,init_sz=init_sz,linear_layer=1,rnn_layer=2,hidden_size=50)
lrn = Learner(dls,model,loss_func=nn.MSELoss())
lrn.fit(1,lr=3e-3)
epoch  train_loss  valid_loss  time
0      0.041958    0.032479    00:02

Alternative diagnosis modules: TCN, and RNN without linear layer

# TCN as diagnosis module
diag_tcn = Diag_TCN(2,50,2,hl_depth=6,hl_width=20,mlp_layers=3)
model = FranSys(1,1,init_sz=init_sz,linear_layer=1,rnn_layer=2,hidden_size=50,diag_model=diag_tcn)
lrn = Learner(dls,model,loss_func=nn.MSELoss())
lrn.add_cb(TbpttResetCB())
lrn.fit(1,lr=3e-3)
epoch  train_loss  valid_loss  time
0      0.037339    0.029248    00:01
diag_rnn = Diag_RNN_raw(2,50,2,stateful=False)
model = FranSys(1,1,init_sz=init_sz,linear_layer=1,rnn_layer=2,hidden_size=50,diag_model=diag_rnn)
lrn = Learner(dls,model,loss_func=nn.MSELoss())
lrn.add_cb(TbpttResetCB())
lrn.fit(1,lr=3e-3)
epoch  train_loss  valid_loss  time
0      0.036428    0.023112    00:02

Fast variant with diagnosis on the first init_sz steps only

model = FranSys(1,1,init_sz=init_sz,linear_layer=1,rnn_layer=2,hidden_size=50,init_diag_only=True)
lrn = Learner(dls,model,loss_func=nn.MSELoss(),opt_func=ranger)
lrn.add_cb(TbpttResetCB())
lrn.fit(1,lr=3e-3)
epoch  train_loss  valid_loss  time
0      0.075652    0.069702    00:01

Callbacks


source

FranSysCallback

 FranSysCallback (modules, p_state_sync=10000000.0, p_diag_loss=0.0,
                  p_osp_sync=0, p_osp_loss=0, p_tar_loss=0,
                  sync_type='mse', targ_loss_func=mae,
                  osp_n_skip=None, FranSys_model=None, detach=False,
                  **kwargs)

Callback that regularizes the output of the FranSys model.

                Type         Default     Details
modules
p_state_sync    float        10000000.0  scaling factor for the regularization of the hidden-state deviation between the diagnosis and prognosis modules
p_diag_loss     float        0.0         scaling factor for the loss computed on the diagnosis hidden state passed through the final layer
p_osp_sync      int          0           scaling factor for the regularization of the hidden-state deviation between the one-step prediction and the diagnosis hidden states
p_osp_loss      int          0           scaling factor for the one-step-prediction loss of the prognosis module
p_tar_loss      int          0           scaling factor for the time activation regularization of the combined diagnosis and prognosis hidden state over the target sequence length
sync_type       str          mse
targ_loss_func  function     mae
osp_n_skip      NoneType     None        number of elements to skip before the one-step prediction is applied; defaults to model.init_sz
FranSys_model   NoneType     None
detach          bool         False
kwargs          VAR_KEYWORD
model = FranSys(1,1,init_sz=init_sz,linear_layer=1,rnn_layer=2,hidden_size=50)
cb = FranSysCallback([model.rnn_diagnosis,model.rnn_prognosis],
                        p_state_sync=1e-1, 
                        p_diag_loss=0.0,
                        p_osp_sync=0,
                        p_osp_loss=0.1,
                        sync_type='cos_pow')
lrn = Learner(dls,model,loss_func=nn.MSELoss(),cbs=cb,opt_func=ranger)
lrn.fit(1,lr=3e-3)
epoch  train_loss  valid_loss  time
0      0.164724    0.062946    00:02

source

FranSysCallback_variable_init

 FranSysCallback_variable_init (init_sz_min, init_sz_max, **kwargs)

Callback reports progress after every epoch to the ray tune logger
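
Judging by its name and parameters, this callback presumably varies the size of the initialization window between init_sz_min and init_sz_max during training, passing the remaining keyword arguments on to FranSysCallback. The sketch below rests on that assumption; the window sizes are made up.

# assumed usage: randomly varied initialization window, same regularization as above
model = FranSys(1, 1, init_sz=init_sz, linear_layer=1, rnn_layer=2, hidden_size=50)
cb = FranSysCallback_variable_init(init_sz_min=20, init_sz_max=80,
                                   modules=[model.rnn_diagnosis, model.rnn_prognosis],
                                   p_state_sync=1e-1)
lrn = Learner(dls, model, loss_func=nn.MSELoss(), cbs=cb, opt_func=ranger)
# lrn.fit(1, lr=3e-3)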

Learner


source

FranSysLearner

 FranSysLearner (dls, init_sz, attach_output=False, loss_func=L1Loss(),
                 metrics=[fun_rmse], opt_func=Adam, lr=0.003, cbs=[], n_x=0,
                 hidden_size=100, rnn_layer=1, diag_model=None,
                 linear_layer=1, init_diag_only=False, final_layer=0,
                 hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru',
                 ret_full_hidden=False, stateful=False, normalization='',
                 **kwargs)
lrn = FranSysLearner(dls,init_sz=50)
lrn.fit(1,lr=3e-3)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.168541    0.126720    0.159639  00:01
dls = create_dls_test(prediction=False)
lrn = FranSysLearner(dls,init_sz=50,attach_output=True)
lrn.fit(1,lr=3e-3)
epoch  train_loss  valid_loss  fun_rmse  time
0      0.179930    0.183227    0.232352  00:01