    Quantiacs Community

    EDDIEE
    • Following 0
    • Followers 0
    • Topics 13
    • Posts 26
    • Best 6
    • Groups 0

    Topics created by EDDIEE

    • Q18_ML_Strategy2
      News and Feature Releases • EDDIEE
      0 Votes · 2 Posts · 284 Views

      support

      @eddiee Dear Eddiee, the issue is not the low number of assets (although it is better to trade more assets), but the static vs. dynamic selection.

      Please note that the website shows the performance of the strategies independently of whether they are still currently traded or not, and it uses the original volatility (in other words, it does not reflect any scaling of the volatility).
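
      A minimal sketch of the difference (illustration only: the load_ndx_data loader, the qnt.ta helpers, and the "is_liquid" field are taken from the Quantiacs documentation, and the moving-average signal is an arbitrary placeholder, not the user's strategy):

      import qnt.data as qndata
      import qnt.ta as qnta

      data = qndata.stocks.load_ndx_data(min_date="2006-01-01")

      # any allocation logic; here a simple trend signal
      close = data.sel(field="close")
      weights = qnta.sma(close, 20) / qnta.sma(close, 200) - 1

      # dynamic selection: zero out assets that are not liquid on each day,
      # instead of fixing a static list of tickers once at the start
      weights = weights * data.sel(field="is_liquid")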

    • Checking of strategies for Q20 takes two weeks
      Strategy help • EDDIEE
      0 Votes · 8 Posts · 563 Views

      support

      @algotime Dear Algotime, all three strategies will participate in the contest. We will update the contest leaderboard once all eligible strategies have finished processing. Thank you for your patience.

    • Q20: Where can I see the complete list of all available fundamental indicators?
      Support • EDDIEE
      0 Votes · 2 Posts · 211 Views

      support

      @EDDIEE
      Hello, sorry for the late answer. For now it is best to use the indicators shown in the table at https://quantiacs.com/documentation/en/data/fundamental.html. The recent qnt library version has obsolete methods for getting the list of all available fundamentals; this will be fixed in the next qnt update.

    • Improving Quantiacs: Aligning Developer Objectives with the ones of Quantiacs
      General Discussion • developers improvement quantiacs rankings risk • EDDIEE
      3 Votes · 4 Posts · 447 Views

      N

      @eddiee Hi, Mr. Eddie.

      I am new to building strategies using ML/DL on Quantiacs and am very impressed with the out-of-sample performance of your ML strategies. I hope you can give me your contact details (email, LinkedIn, ...) so I can learn from your experience in building an ML/DL strategy.

      Sincere thanks.

    • Q19 Contest
      General Discussion • EDDIEE
      0 Votes · 3 Posts · 314 Views

      support

      @eddiee Dear Eddiee, yes, the rules and the universe are the same. We will need some more time to extend the universe and the data set, so we decided to run a new contest with the same rules.

      Please note that according to the rules at https://quantiacs.com/contest/19

      A Trading System will be deemed to be a “unique” Trading System if it was not submitted by the same user to a previous Contest and it was not published by the Sponsor itself and it was not submitted by another user to a previous Contest or to the current Contest. The Sponsor will run a correlation filter on submissions and will have the right to disqualify submissions which are not deemed to be unique.

      So re-submitting the same system will result in a system which is not eligible for a prize.
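
      As a rough, hypothetical illustration of what a correlation check between two submissions could look like (this is not the Sponsor's actual filter, and the 0.9 threshold is an arbitrary example):

      import numpy as np

      def too_correlated(returns_a, returns_b, threshold=0.9):
          """Flag two equal-length daily-returns series as too similar via Pearson correlation."""
          corr = np.corrcoef(returns_a, returns_b)[0, 1]
          return corr > threshold

      # example: a strategy compared with a slightly perturbed copy of itself
      rng = np.random.default_rng(0)
      base = rng.normal(0, 0.01, 250)
      print(too_correlated(base, base + rng.normal(0, 0.001, 250)))  # True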

    • Why is the "is_liquid" dataset flawed?
      Strategy help • EDDIEE
      0 Votes · 3 Posts · 293 Views

      support

      @eddiee It is fixed, sorry for the problem.

    • How does "qnbt.backtest_ml" really work?
      Strategy help • EDDIEE
      0 Votes · 2 Posts · 251 Views

      support

      Dear Eddie, the training takes place on a rolling basis. The prediction at time "t" uses the defined training period (until "t-1"). If you choose the backtest to start at "2006-01-01", then this will be the first "predicted" date.

      As the training can be computationally expensive, the retraining option makes it possible to freeze the model and retrain it only every "retrain_interval" days. As you correctly say, the rolling window is still the one defined by "train_period".

      If by "expanding window" you mean a retraining which uses more and more data as time goes on, no, this is not currently implemented; we use a fixed-size rolling window.
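
      A toy sketch of the schedule described above (an illustration of the logic only, not the actual qnbt.backtest_ml internals):

      import pandas as pd

      def rolling_retrain_schedule(dates, train_period=4 * 365, retrain_interval=365):
          """For each prediction date, return the fixed-size training window that would be used."""
          schedule = {}
          last_fit = None
          for t in dates:
              # refit only every `retrain_interval` days; otherwise the model stays frozen
              if last_fit is None or (t - last_fit).days >= retrain_interval:
                  last_fit = t
              # fixed-size rolling window of `train_period` days ending the day before the fit date
              start = last_fit - pd.Timedelta(days=train_period)
              schedule[t] = (start, last_fit - pd.Timedelta(days=1))
          return schedule

      dates = pd.date_range("2006-01-01", periods=3 * 365, freq="D")
      windows = rolling_retrain_schedule(dates)
      print(windows[dates[0]], windows[dates[-1]])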

    • Q17 Contest: When will you update the performance of the strategies?
      Support • EDDIEE
      1 Vote · 4 Posts · 320 Views

      support

      @theflyingdutchman Hello, the update will be ready before the end of the week; sorry for the delay.

    • Strategy Optimization in local development environment is not working
      Support • EDDIEE
      0 Votes · 5 Posts · 389 Views

      V

      @eddiee

      This code works for me. I can give you some ideas on what to try.

      Update the qnt library or reinstall it.

      If that doesn't help, clone the repository https://github.com/quantiacs/toolbox:

      git clone https://github.com/quantiacs/toolbox.git

      Then run qnt/examples/005-01-optimizer.py and the other examples. You may need to specify your API_KEY.

      That way you might be able to see exactly where the error occurs in the code, and you can modify the library code by adding logging to optimize_strategy.
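
      A small, generic sketch of the kind of instrumentation meant here: turn on verbose logging before running the optimizer example, then add your own logging calls inside the cloned toolbox around optimize_strategy (the configuration below is a hypothetical illustration using the standard library, not part of the qnt API):

      import logging

      # emit DEBUG-level messages from every module, including qnt, to the console,
      # so the last message before a failure shows where the optimizer stops
      logging.basicConfig(
          level=logging.DEBUG,
          format="%(asctime)s %(name)s %(levelname)s %(message)s",
      )

      logging.getLogger(__name__).debug("logging enabled before running 005-01-optimizer.py")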

    • Local Development Error "No module named 'qnt'"
      Support • EDDIEE
      1 Vote · 9 Posts · 724 Views

      support

      @eddiee Hello! Please check here:

      https://quantiacs.com/documentation/en/user_guide/local_development.html#updating-the-conda-environment

    • Q17 Contest
      General Discussion • EDDIEE
      0 Votes · 5 Posts · 397 Views

      support

      @theflyingdutchman Yes, we are integrating new data sources for a new asset class, once we are done (next week) the data and leaderboard updates will start again.

    • Q17 Machine learning - RidgeRegression (Long/Short); there is an error in the code
      Strategy help • EDDIEE
      1 Vote · 4 Posts · 485 Views

      E

      @support

      This is a possible fix, but no guarantee. You also have to adjust the prediction function; a hypothetical sketch of that follows below the code.

      def train_model(data):
          """Create and train the models working on an asset-by-asset basis."""

          models = dict()

          asset_name_all = data.coords['asset'].values

          data = data.sel(time=slice('2013-05-01', None))  # cut the noisy data head before 2013-05-01

          features_all = get_features(data)
          target_all = get_target_classes(data)

          for asset_name in asset_name_all:

              # create a fresh model for each asset so the entries in `models` stay independent
              model = create_model()

              # drop missing values:
              target_cur = target_all.sel(asset=asset_name).dropna('time', 'any')
              features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')

              # align features and targets:
              target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')

              if len(features_cur.time) < 10:
                  # not enough points for training
                  continue

              try:
                  model.fit(feature_for_learn_df.values, target_for_learn_df)
                  models[asset_name] = model
              except KeyboardInterrupt as e:
                  raise e
              except:
                  logging.exception('model training failed')

          return models
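
      A hypothetical sketch of the matching prediction function (assuming the same get_features helper and the per-asset models dictionary returned above; adapt it to your own template):

      def predict(models, data):
          """Build daily weights by applying each asset's trained model to that asset's features."""
          asset_name_all = data.coords['asset'].values
          weights = xr.zeros_like(data.sel(field='close'))

          features_all = get_features(data)

          for asset_name in asset_name_all:
              if asset_name not in models:
                  continue
              features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')
              if len(features_cur.time) < 1:
                  continue
              try:
                  preds = models[asset_name].predict(features_cur.values)
                  weights.loc[dict(asset=asset_name, time=features_cur.time.values)] = preds
              except KeyboardInterrupt as e:
                  raise e
              except:
                  logging.exception('model prediction failed')

          return weights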

    • Q17 Neural Networks Algo Template; is there an error in train_model()?
      Strategy help • EDDIEE
      1 Vote · 6 Posts · 569 Views

      V

      Hello colleagues.

      The solution in the case of predicting one financial instrument can be the following (train_period changed):

      def load_data(period):
          return qndata.cryptodaily_load_data(tail=period, assets=['BTC'])

      def train_model(data):
          """ train the LSTM network """

          asset_name = 'BTC'
          features_all = get_features(data)
          target_all = get_target_classes(data)
          model = get_model()

          # drop missing values:
          target_cur = target_all.sel(asset=asset_name).dropna('time', 'any')
          features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')

          # align features and targets:
          target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')

          criterion = nn.MSELoss()                              # define loss function
          optimiser = optim.LBFGS(model.parameters(), lr=0.08)  # we use an LBFGS solver as optimiser
          epochs = 1                                            # how many epochs

          for i in range(epochs):

              def closure():
                  # reevaluates the model and returns the loss (forward pass)
                  optimiser.zero_grad()

                  # input tensor
                  in_ = torch.zeros(1, len(feature_for_learn_df.values))
                  in_[0, :] = torch.tensor(np.array(feature_for_learn_df.values))

                  # output
                  out = model(in_)

                  # target tensor
                  target = torch.zeros(1, len(target_for_learn_df.values))
                  target[0, :] = torch.tensor(np.array(target_for_learn_df.values))

                  # evaluate loss
                  loss = criterion(out, target)
                  loss.backward()
                  return loss

              optimiser.step(closure)  # updates weights

          return model

      weights = qnbt.backtest_ml(
          load_data=load_data,
          train=train_model,
          predict=predict,
          train_period=1 * 365,                        # the data length for training in calendar days
          retrain_interval=365,                        # how often we have to retrain models (calendar days)
          retrain_interval_after_submit=1,             # how often to retrain models after submission during evaluation (calendar days)
          predict_each_day=False,                      # is it necessary to call prediction for every day during backtesting?
                                                       # set it to True if you suspect that get_features is looking forward
          competition_type='crypto_daily_long_short',  # competition type
          lookback_period=365,                         # how many calendar days are needed by the predict function to generate the output
          start_date='2014-01-01',                     # backtest start date
          build_plots=True                             # do you need the chart?
      )
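
      For completeness, a hypothetical stand-in for the get_model() used above (the real template defines its own LSTM; this minimal nn.Module only illustrates the expected input/output shape of (1, sequence_length)):

      import torch
      import torch.nn as nn

      class SimpleLSTM(nn.Module):
          """Toy LSTM mapping a (1, seq_len) feature tensor to a (1, seq_len) prediction."""

          def __init__(self, hidden_size=32):
              super().__init__()
              self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
              self.linear = nn.Linear(hidden_size, 1)

          def forward(self, x):
              seq = x.unsqueeze(-1)                 # (batch, seq_len) -> (batch, seq_len, 1)
              out, _ = self.lstm(seq)               # (batch, seq_len, hidden_size)
              return self.linear(out).squeeze(-1)   # back to (batch, seq_len)

      def get_model():
          return SimpleLSTM()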