Quantiacs Community
    • Q17 Neural Networks Algo Template; is there an error in train_model()?
      Strategy help • EDDIEE
      Votes: 1 • Posts: 6 • Views: 1342

      V:

      Hello colleagues.

      In the case of predicting a single financial instrument, the solution can be the following (train_period changed):

      def load_data(period):
          return qndata.cryptodaily_load_data(tail=period, assets=['BTC'])

      def train_model(data):
          """Train the LSTM network."""
          asset_name = 'BTC'
          features_all = get_features(data)
          target_all = get_target_classes(data)
          model = get_model()

          # drop missing values:
          target_cur = target_all.sel(asset=asset_name).dropna('time', 'any')
          features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')

          # align features and targets:
          target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')

          criterion = nn.MSELoss()  # define loss function
          optimiser = optim.LBFGS(model.parameters(), lr=0.08)  # we use an LBFGS solver as optimiser
          epochs = 1  # how many epochs

          for i in range(epochs):
              def closure():
                  # reevaluates the model and returns the loss (forward pass)
                  optimiser.zero_grad()
                  # input tensor
                  in_ = torch.zeros(1, len(feature_for_learn_df.values))
                  in_[0, :] = torch.tensor(np.array(feature_for_learn_df.values))
                  # output
                  out = model(in_)
                  # target tensor
                  target = torch.zeros(1, len(target_for_learn_df.values))
                  target[0, :] = torch.tensor(np.array(target_for_learn_df.values))
                  # evaluate loss
                  loss = criterion(out, target)
                  loss.backward()
                  return loss
              optimiser.step(closure)  # updates weights

          return model

      weights = qnbt.backtest_ml(
          load_data=load_data,
          train=train_model,
          predict=predict,
          train_period=1 * 365,             # the data length for training, in calendar days
          retrain_interval=365,             # how often we have to retrain models (calendar days)
          retrain_interval_after_submit=1,  # how often to retrain models after submission during evaluation (calendar days)
          predict_each_day=False,           # call prediction for every day during backtesting?
                                            # set it to True if you suspect that get_features is looking forward
          competition_type='crypto_daily_long_short',  # competition type
          lookback_period=365,              # how many calendar days the predict function needs to generate the output
          start_date='2014-01-01',          # backtest start date
          build_plots=True                  # do you need the chart?
      )
    • Balance, order size, stop loss, open and close position price
      Support • ScalpingAF
      Votes: 0 • Posts: 6 • Views: 576

      support:

      @scalpingaf Correct: all trades (buy or sell) are executed at the open of the day after you take the decision.
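      This execution model can be sketched in a few lines of plain Python (the function and variable names here are illustrative only, not the Quantiacs API):

      ```python
      # Toy sketch of "next-open" execution, assuming daily bars.
      # A decision taken at day t's close is filled at day t+1's open.

      def next_open_fills(opens, decisions):
          """Return (fill_day, fill_price, target_position) tuples."""
          fills = []
          for t, target_position in enumerate(decisions):
              if t + 1 < len(opens):  # the last decision has no next open yet
                  fills.append((t + 1, opens[t + 1], target_position))
          return fills

      # Example: go long at day 0's close, flat at day 1's close.
      fills = next_open_fills(opens=[100.0, 101.5, 99.0], decisions=[1, 0, 1])
      # fills → [(1, 101.5, 1), (2, 99.0, 0)]
      ```

      The point of the sketch is only the one-day lag between decision and fill; the final decision is never filled inside the sample, matching the forum answer.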

    • Why .interpolate_na doesn't work well?
      Support • cyan.gloom
      Votes: 0 • Posts: 6 • Views: 1317

      C:

      @antinomy

      I got it!
      Thanks a lot!!

    • sliding 3d array
      Strategy help • wool.dewgong
      Votes: 0 • Posts: 6 • Views: 995

      support:

      @wool-dewgong Hello! We added a template which should address your issue and allow you to perform rolling fast ML training with retraining. It is available in your user space in the Examples section, and you can also read it in the public docs:

      https://quantiacs.com/documentation/en/examples/machine_learning_with_a_voting_classifier.html
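      The rolling-retraining idea behind that template can be sketched in plain Python. This is a toy stand-in (a mean predictor, with illustrative names), not the voting-classifier template itself; swap in any real learner:

      ```python
      # Walk-forward (rolling) retraining sketch: a model is fit on a trailing
      # window and reused until the next scheduled retrain.

      def fit_mean_model(train):
          """Toy 'model': always predicts the mean of its training window."""
          mu = sum(train) / len(train)
          return lambda _features: mu

      def walk_forward(series, train_window, retrain_interval):
          preds = {}
          model = None
          for t in range(train_window, len(series)):
              if model is None or (t - train_window) % retrain_interval == 0:
                  # retrain on the most recent `train_window` observations only
                  model = fit_mean_model(series[t - train_window:t])
              preds[t] = model(series[t])
          return preds

      preds = walk_forward(list(range(10)), train_window=4, retrain_interval=2)
      # preds[4] → 1.5 (mean of 0..3); the model is reused on day 5, refit on day 6
      ```

      Only the trailing window ever reaches the fit call, so no future data leaks into training; the retrain interval trades accuracy against training cost.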

    • Q21 contest results
      News and Feature Releases • neural.exeggutor
      Votes: 0 • Posts: 6 • Views: 10010

      support:

      @theflyingdutchman Hi, sorry for the delay. Yes, all is fine; more details by e-mail.

    • Q16 where to put is_liquid in ML template
      Strategy help • Sheikh
      Votes: 0 • Posts: 6 • Views: 1382

      S:

      Hi @support,
      Thanks for getting back. No worries, I was able to get 6 strategies into the Q16 competition so far.
      [screenshot attachment: qnt3.PNG]

    • Kernel Dies
      Support • magenta.kabuto
      Votes: 0 • Posts: 6 • Views: 599

      M:

      @vyacheslav_b Perfect. It wasn't obvious to me that single-pass mode was meant by that. Thank you!

    • Stocks data
      Support • Sun-73
      Votes: 0 • Posts: 6 • Views: 1591

      S:

      @support Yes, I can now load the stocks data. Thank you once again!

    • Sharpe decreases when submitting strategy
      Support • illustrious.felice
      Votes: 0 • Posts: 6 • Views: 470

      illustrious.felice:

      @vyacheslav_b Thank you so much

    • Getting started with local dev.
      Support • iron.tentacruel
      Votes: 0 • Posts: 6 • Views: 611

      support:

      @iron-tentacruel Sorry for the delayed answer. We recommend conda, as we can better track dependencies. With conda you can create a local environment which mirrors the one on the Quantiacs server, so you can work locally as you would on the server. If you need a specific version of a package, please let us know.

    • Difference between relative_return & mean_return
      Support • illustrious.felice
      Votes: 1 • Posts: 6 • Views: 664

      illustrious.felice:

      @vyacheslav_b Thank you so much

    • Optimize the Trend Following strategy with custom args
      Strategy help • magenta.grimer
      Votes: 0 • Posts: 6 • Views: 879

      support:

      Hello.

      I checked this problem. The script which cuts the "###DEBUG###" cells was incorrect. I fixed it and resent your strategies (those filtered out by timeout) for checking.

      Regards.

    • Machine Learning - LSTM strategy seems to be forward-looking
      General Discussion • black.magmar
      Votes: 1 • Posts: 6 • Views: 3401

      support:

      @black-magmar You are correct, but this kind of forward-looking is always present when you have all the data at your disposal. The important point is that there is no forward-looking in the live results, and there it cannot happen, since the prediction is made for a day for which data are not yet available.
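      A minimal numeric illustration of why in-sample forward-looking inflates a backtest (toy numbers and illustrative names, not the Quantiacs backtester):

      ```python
      # A "leaky" signal that peeks at the same day's return always looks great
      # in a backtest; a signal using only prior data gets no such free lift.

      def backtest(returns, signal):
          # signal[t] is the exposure held over day t's return
          return sum(r * s for r, s in zip(returns, signal))

      returns = [0.01, -0.02, 0.03, -0.01]
      leaky = [1 if r > 0 else -1 for r in returns]              # uses day t's own return
      honest = [0] + [1 if r > 0 else -1 for r in returns[:-1]]  # uses day t-1 only

      # backtest(returns, leaky) → 0.07, a perfect score by construction
      ```

      In live trading only the "honest" construction is even possible, which is exactly why live results cannot contain this bias.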

    • Leaderboard not updating?
      Support • competition leaderboard q16 • cespadilla
      Votes: 1 • Posts: 5 • Views: 1729

      cespadilla:

      @support Hi again guys, I think the leaderboard is not updating again 😳

    • Colab new error 'EntryPoints' object has no attribute 'get'
      Support • gjhernandezp
      Votes: 0 • Posts: 5 • Views: 976

      support:

      @gjhernandezp Thank you for sharing your solution!

    • Q23 should be running now, but not able to join, right?
      Support • angusslq
      Votes: 0 • Posts: 5 • Views: 1432

      support:

      @green-flareon Thanks. The live phase of Q23 is running. Quants can join any contest during its submission phase; Q24 is on.

    • Submission failed: what's wrong?
      Support • buyers_are_back
      Votes: 0 • Posts: 5 • Views: 533

      support:

      @buyers_are_back We reprocessed the submission; it is formally correct and passes all the filters. Sorry for the issue, which was evidently on our side.

    • Strategy Optimization in local development environment is not working
      Support • EDDIEE
      Votes: 0 • Posts: 5 • Views: 770

      V:

      @eddiee

      This code works for me, so I can give you some ideas on what to try.

      Update the qnt library, or reinstall it.

      If that doesn't help, clone the repository https://github.com/quantiacs/toolbox:

      git clone https://github.com/quantiacs/toolbox.git

      then run qnt/examples/005-01-optimizer.py and the other examples. You may need to specify API_KEY.

      This way you might see exactly where the error occurs in the code, and you can modify the library code by adding logging to optimize_strategy.

    • I've just lost a notebook that contains my entire algorithm
      Support • aybber
      Votes: 0 • Posts: 5 • Views: 806

      A:

      @support No worries, I've been able to recover the strategy, thank you!

    • How to fix this error
      Support • cyan.gloom
      Votes: 0 • Posts: 5 • Views: 1739

      C:

      @antinomy
      Thanks for your advice!

    Copyright © 2014 - 2021 Quantiacs LLC.