The reason is in the "train_model" function:
def train_model(data):
    asset_name_all = data.coords['asset'].values
    features_all = get_features(data)
    target_all = get_target_classes(data)
    models = dict()
    for asset_name in asset_name_all:
        # drop missing values:
        target_cur = target_all.sel(asset=asset_name).dropna('time', how='any')
        features_cur = features_all.sel(asset=asset_name).dropna('time', how='any')
        target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')
        # skip assets with too little history to train on:
        if len(features_cur.time) < 10:
            continue
        model = get_model()
        try:
            # fit on the aligned features/targets and store the model; log failures
            model.fit(feature_for_learn_df.values, target_for_learn_df)
            models[asset_name] = model
        except Exception:
            logging.exception('model training failed')
    return models
If an asset has fewer than 10 time points with valid features, no model is created for it (the "if len(features_cur.time) < 10" check). This condition makes sense; I would not remove it.
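To see which assets would be skipped by this check, you can count the time points that survive dropna per asset. The snippet below is a standalone sketch with a synthetic xarray DataArray (the asset names and values are made up); in the real strategy, features_all would come from get_features(data):

```python
import numpy as np
import xarray as xr

# Synthetic features: 20 time points, 2 assets; 'XYZ' has only 5 valid points,
# simulating a coin that was listed late.
vals = np.ones((20, 2))
vals[:15, 1] = np.nan
features_all = xr.DataArray(
    vals,
    dims=('time', 'asset'),
    coords={'time': np.arange(20), 'asset': ['BTC', 'XYZ']},
)

valid_counts = {}
for asset_name in features_all.coords['asset'].values:
    # same dropna as in train_model:
    n = len(features_all.sel(asset=asset_name).dropna('time', how='any').time)
    valid_counts[asset_name] = n
    print(asset_name, n, 'skipped' if n < 10 else 'trained')
```

Here 'BTC' keeps all 20 points and gets a model, while 'XYZ' keeps only 5 and is skipped, which is exactly what the "< 10" guard does.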
The second thing that can affect the result is the model retraining interval ("retrain_interval"):
weights = qnbt.backtest_ml(
    train_period=2 * 365,            # the data length for training, in calendar days
    retrain_interval=10 * 365,       # how often we have to retrain models (calendar days)
    retrain_interval_after_submit=1, # how often to retrain models after submission, during evaluation (calendar days)
    predict_each_day=False,          # is it necessary to call prediction for every day during backtesting?
                                     # Set it to True if you suspect that get_features is looking forward.
    competition_type='crypto_daily_long_short',  # competition type
    lookback_period=365,             # how many calendar days the predict function needs to generate the output
    start_date='2014-01-01',         # backtest start date
    analyze=True,
    build_plots=True                 # do you need the chart?
)
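With retrain_interval=10 * 365, the models are retrained only once every ten years, so over a typical backtest window they are effectively trained a single time on old data. The small standalone sketch below (retrain_dates is a hypothetical helper, not part of qnbt) just illustrates the calendar arithmetic, assuming retraining happens every retrain_interval calendar days from the start date:

```python
from datetime import date, timedelta

def retrain_dates(start, end, retrain_interval_days):
    """Return the calendar dates on which a model would be (re)trained,
    assuming training at `start` and every `retrain_interval_days` after."""
    dates = []
    cur = start
    while cur <= end:
        dates.append(cur)
        cur += timedelta(days=retrain_interval_days)
    return dates

# A 2014-2024 backtest with retrain_interval = 10 * 365:
ds = retrain_dates(date(2014, 1, 1), date(2024, 1, 1), 10 * 365)
print(len(ds))  # → 2 (initial training plus one retrain near the very end)
```

Lowering retrain_interval (say, to 365) would make the models follow the market more closely, at the cost of a slower backtest.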