Quantiacs Community
    Q17 Neural Networks Algo Template; is there an error in train_model()?

    Strategy help
    • EDDIEE

      In the function "train_model()", the for-loop "for asset_name in asset_name_all:" ends before the model optimization. So the optimized LSTM network is effectively trained only on the last asset. This can't be on purpose, can it?

      best
      eduard

      def train_model(data):
          """
          train the LSTM network
          """

          asset_name_all = data.coords['asset'].values

          features_all = get_features(data)
          target_all = get_target_classes(data)

          model = get_model()

          for asset_name in asset_name_all:

              # drop missing values:
              target_cur = target_all.sel(asset=asset_name).dropna('time', 'any')
              features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')

              # align features and targets:
              target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')

          # NOTE: the for-loop body ends above; everything from here on runs
          # once and only sees the last asset's feature/target data

          criterion = nn.MSELoss()  # define loss function

          optimiser = optim.LBFGS(model.parameters(), lr=0.08)  # we use an LBFGS solver as optimiser
          epochs = 1  # how many epochs
          for i in range(epochs):
              def closure():  # reevaluates the model and returns the loss (forward pass)
                  optimiser.zero_grad()

                  # input tensor
                  in_ = torch.zeros(1, len(feature_for_learn_df.values))
                  in_[0, :] = torch.tensor(np.array(feature_for_learn_df.values))

                  # output
                  out = model(in_)

                  # target tensor
                  target = torch.zeros(1, len(target_for_learn_df.values))
                  target[0, :] = torch.tensor(np.array(target_for_learn_df.values))

                  # evaluate loss
                  loss = criterion(out, target)
                  loss.backward()

                  return loss

              optimiser.step(closure)  # updates weights

          return model
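
      For reference, a minimal sketch of the presumable intent, with the optimization block moved inside the per-asset loop so that every asset contributes a training step (the helpers get_model, get_features and get_target_classes, and the imports, are those of the original template):

      def train_model_fixed(data):
          """
          Sketch: train the LSTM network on every asset, not only the last one.
          """
          asset_name_all = data.coords['asset'].values
          features_all = get_features(data)
          target_all = get_target_classes(data)

          model = get_model()
          criterion = nn.MSELoss()
          optimiser = optim.LBFGS(model.parameters(), lr=0.08)

          for asset_name in asset_name_all:
              # drop missing values and align features with targets:
              target_cur = target_all.sel(asset=asset_name).dropna('time', 'any')
              features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')
              target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')

              def closure():  # forward pass and loss for the current asset
                  optimiser.zero_grad()
                  in_ = torch.zeros(1, len(feature_for_learn_df.values))
                  in_[0, :] = torch.tensor(np.array(feature_for_learn_df.values))
                  out = model(in_)
                  target = torch.zeros(1, len(target_for_learn_df.values))
                  target[0, :] = torch.tensor(np.array(target_for_learn_df.values))
                  loss = criterion(out, target)
                  loss.backward()
                  return loss

              optimiser.step(closure)  # one optimization step per asset

          return model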
    • antinomy

      Yes, I noticed that too. And after fixing it, the backtest takes forever...
      Another thing to consider is that it redefines the model with each training, but I believe you can retrain already trained NNs with new data, so they learn based on what they previously learned.
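
      A minimal sketch of that idea, assuming backtest_ml keeps the interpreter alive between retrains so that module-level state survives (the cache variable below is hypothetical, not part of the template):

      _model_cache = None  # hypothetical module-level cache, survives across retrain calls

      def get_or_restore_model():
          """Reuse the previously trained network instead of redefining it."""
          global _model_cache
          if _model_cache is None:
              _model_cache = get_model()  # template helper; only built once
          return _model_cache

      train_model() would then call get_or_restore_model() in place of get_model(), so each retrain continues from the weights learned on the previous window instead of starting from scratch.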

    • support @EDDIEE

      @eddiee We are sorry, you are right and we are fixing it.

    • support @support

            @support thank you @antinomy

    • Vyacheslav_B

      Hello colleagues.

      In the case of predicting a single financial instrument, the solution can be the following (train_period changed):

              
      def load_data(period):
          return qndata.cryptodaily_load_data(tail=period, assets=['BTC'])


      def train_model(data):
          """
          train the LSTM network
          """

          asset_name = 'BTC'

          features_all = get_features(data)
          target_all = get_target_classes(data)

          model = get_model()

          # drop missing values:
          target_cur = target_all.sel(asset=asset_name).dropna('time', 'any')
          features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')

          # align features and targets:
          target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')

          criterion = nn.MSELoss()  # define loss function

          optimiser = optim.LBFGS(model.parameters(), lr=0.08)  # we use an LBFGS solver as optimiser
          epochs = 1  # how many epochs
          for i in range(epochs):
              def closure():  # reevaluates the model and returns the loss (forward pass)
                  optimiser.zero_grad()

                  # input tensor
                  in_ = torch.zeros(1, len(feature_for_learn_df.values))
                  in_[0, :] = torch.tensor(np.array(feature_for_learn_df.values))

                  # output
                  out = model(in_)

                  # target tensor
                  target = torch.zeros(1, len(target_for_learn_df.values))
                  target[0, :] = torch.tensor(np.array(target_for_learn_df.values))

                  # evaluate loss
                  loss = criterion(out, target)
                  loss.backward()

                  return loss

              optimiser.step(closure)  # updates weights

          return model


      weights = qnbt.backtest_ml(
          load_data=load_data,
          train=train_model,
          predict=predict,
          train_period=1 * 365,  # the data length for training in calendar days
          retrain_interval=365,  # how often we have to retrain models (calendar days)
          retrain_interval_after_submit=1,  # how often to retrain models after submission during evaluation (calendar days)
          predict_each_day=False,  # is it necessary to call prediction for every day during backtesting?
          # Set it to True if you suspect that get_features is looking forward.
          competition_type='crypto_daily_long_short',  # competition type
          lookback_period=365,  # how many calendar days the predict function needs to generate the output
          start_date='2014-01-01',  # backtest start date
          build_plots=True  # do you need the chart?
      )
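
      The call above also relies on a predict function that is not shown in this thread. A hypothetical minimal sketch for this single-asset setup, assuming backtest_ml invokes it as predict(model, data) and expects an xarray of weights over time and asset, as in the template:

      def predict(model, data):
          """
          Hypothetical sketch: run the trained network over the latest
          features and use its outputs as the weights for BTC.
          """
          asset_name = 'BTC'
          features_all = get_features(data)
          features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')

          weights = xr.zeros_like(data.sel(field='close'))
          with torch.no_grad():
              in_ = torch.zeros(1, len(features_cur.values))
              in_[0, :] = torch.tensor(np.array(features_cur.values))
              out = model(in_)
          weights.loc[dict(asset=asset_name, time=features_cur.time.values)] = out[0].numpy()
          return weights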
              
              