Quantiacs Community
    multi_byte.wildebeest (@multi_byte.wildebeest)

    Reputation: 5 · Profile views: 5 · Posts: 19 · Followers: 0 · Following: 0

    Best posts made by multi_byte.wildebeest

    • How can we get an estimate of the Sharpe before submitting?

      Hi, how can we know what the in-sample (IS) Sharpe will be while developing a strategy?

      For example, backtest_ml prints a Sharpe at the end, but as far as I know it is the Sharpe as of the last trading day.

      Many strategies developed locally have a Sharpe > 1.0 but are filtered out by the IS Sharpe check when submitted.

      posted in Support
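      For a local estimate, one rough approach is to compute an annualized Sharpe from the strategy's daily relative returns. A minimal numpy sketch, assuming the usual 252-day annualization (the toy return series is made up, and this is not necessarily the exact formula Quantiacs applies):

```python
import numpy as np

def annualized_sharpe(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio of a series of daily relative returns."""
    daily_returns = np.asarray(daily_returns, dtype=float)
    mean = daily_returns.mean() * periods_per_year                # annualized mean return
    vol = daily_returns.std(ddof=0) * np.sqrt(periods_per_year)  # annualized volatility
    return mean / vol

# toy series: small positive drift plus noise
rng = np.random.default_rng(0)
returns = 0.0005 + 0.01 * rng.standard_normal(1000)
print(round(annualized_sharpe(returns), 2))
```

      Running this on the weights-implied returns of the full backtest period should approximate the IS figure better than the single number printed on the last day.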
    • Printing training performance of neural network models

      I am starting from the LSTM example model (in the Examples section). I want to print the model's Sharpe on the training set, to compare it with its Sharpe on the test set (which is printed by default). To do that, I am editing backtester.py (specifically the backtest_ml function); you can check lines 172-226, which I added to print the training-set Sharpe, in the backtester.py file attached to this post.

      (Three screenshots of the edited backtester.py were attached here.)

      But I got an unexpected result, so I think I was doing something wrong: the Sharpe on the training set is 0.89 and on the test set is -0.04, whereas with an unmodified backtester.py the Sharpe printed by default is 0.89.

      posted in Support
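      A way to sanity-check such an edit without modifying backtester.py is to compute the Sharpe on the training and test date ranges separately from one return series. A hedged numpy sketch (the split index and returns are toy data, not the backtest_ml internals); if the "train" number always equals the full-series number, the split is probably not being applied:

```python
import numpy as np

def sharpe(returns, periods_per_year=252):
    """Annualized Sharpe of a series of daily relative returns."""
    returns = np.asarray(returns, dtype=float)
    return returns.mean() / returns.std(ddof=0) * np.sqrt(periods_per_year)

rng = np.random.default_rng(42)
returns = 0.0004 + 0.01 * rng.standard_normal(2000)  # stand-in for strategy returns

split = 1500  # boundary between training and test periods
train_sharpe = sharpe(returns[:split])
test_sharpe = sharpe(returns[split:])
print(f"train Sharpe: {train_sharpe:.2f}, test Sharpe: {test_sharpe:.2f}")
```

      Comparing both numbers against the unmodified backtester's output makes it easy to spot whether the edit is slicing the periods as intended.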
    • Submitting a deep learning model: filtered by "Calculation time exceeded"

      My DL models are filtered with "Calculation time exceeded".

      I have tested the notebook on Google Colab, and running qnt.backtest (the last cell) takes about 2-3 minutes (not counting the pip installs of pandas, the qnt library, etc.).
      (All other cells take about 0 s to run.)

      Meanwhile, the time limit for futures is 10 minutes.

      My submission is made as follows: click Jupyter, replace "strategy.ipynb" with my "strategy.ipynb" (deleting the first three cells that install pandas, the qnt library, etc., as suggested), and add a new cell to "init.ipynb": pip install torch.

      Can you figure out the root of the problem and how to tackle it? (You can use the LSTM example, which is analogous to my model.)

      @vyacheslav_b

      posted in Support
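      Before submitting, it helps to time each stage locally to see which one actually consumes the budget. A minimal stdlib sketch (the stage bodies are placeholders standing in for data loading, training, and qnt.backtest):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, log):
    """Record the wall-clock time of a code block under `label`."""
    start = time.perf_counter()
    yield
    log[label] = time.perf_counter() - start

timings = {}
with timed("load_data", timings):
    data = list(range(1_000_000))     # placeholder for loading market data
with timed("train", timings):
    total = sum(data)                 # placeholder for model training
with timed("backtest", timings):
    _ = [x * 2 for x in data[:1000]]  # placeholder for running the backtest

for label, secondsds in ():
    pass
for label, seconds in timings.items():
    print(f"{label}: {seconds:.2f}s")
```

      Keep in mind the submission server may be slower than Colab, so a run that takes 2-3 minutes locally can still exceed the limit remotely.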

    Latest posts made by multi_byte.wildebeest

    • RE: Why do we need to limit the time to process the strategy?

      @support please let me know. I do want to hear from excellent builders like you guys.

      posted in Support
    • Why do we need to limit the time to process the strategy?

      Hi, I wonder why we need to limit the processing time of our models (e.g. ML/DL strategies must not exceed 10 minutes for both training and evaluation, crypto-trading models must not exceed 5 minutes, ...).

      Can you give me the justification for why this is necessary when deploying these models in real-world trading scenarios?

      Thanks,

      posted in Support
    • RE: WARNING: some dates are missed in the portfolio_history

      @support Hi, but what about the dates that are missed in the portfolio history when running the precheck?

      posted in Support
    • RE: Differences between Sharpe in Precheck and Sharpe in strategy.ipynb

      @support Thank you !

      posted in Support
    • RE: WARNING: some dates are missed in the portfolio_history

      @support thank you so much for your very clear explanation!

      posted in Support
    • WARNING: some dates are missed in the portfolio_history

      Hi, I am starting from the example Q18: Supervised Learning (Ridge Classifier).

      I encountered the error "some dates are missed in the portfolio_history" when running the precheck.ipynb file, whereas running the strategy.ipynb file is fine.

      As a result, the Sharpe in precheck is 0.25, far below the Sharpe in strategy.ipynb.

      I think there is no forward looking, because: weights = qnbt.backtest_ml.

      1. Can you give me the possible reasons?
      2. Can you give me a temporary way to resolve it? (e.g. assign some weights to the missed dates)
      posted in Support
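      As a stop-gap in the spirit of point 2, the weights can be reindexed onto the full trading calendar and forward-filled, so every expected date carries the last known allocation. A pandas sketch with toy dates and weights (whether forward-filled weights satisfy the checker is an assumption to verify):

```python
import pandas as pd

# toy weights series with two missing trading days (Jan 6 and Jan 7)
dates = pd.to_datetime(["2021-01-04", "2021-01-05", "2021-01-08"])
weights = pd.Series([0.2, 0.5, 0.3], index=dates)

# full expected calendar (business days here, standing in for the real one)
full_calendar = pd.bdate_range("2021-01-04", "2021-01-08")

# reindex onto the calendar and carry the last known weight forward
filled = weights.reindex(full_calendar).ffill()
print(filled)
```

      For a real multi-asset weights array the same reindex-and-ffill idea applies along the time dimension.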
    • RE: Differences between Sharpe in Precheck and Sharpe in strategy.ipynb

      @support Yes, I just use qnt.backtest_ml, which as far as I know helps avoid forward looking.

      You can check out my model and see that the Sharpe in strategy, precheck, and IS are all different, although there is no forward looking.

      Id :
      dlsdcexp_4

      16765536

      Thanks,

      posted in Support
    • Differences between Sharpe in Precheck and Sharpe in strategy.ipynb

      Hi, I am designing a deep learning model, and I want to get an estimated Sharpe.

      I got different results when running Precheck.ipynb and strategy.ipynb.

      I have tried get_sharpe(data, weights) in strategy.ipynb as guided and got the same result as using backtest_ml.

      So, which Sharpe will be closer to the IS score (>= 1.0 required for submission)?

      Thank you.

      posted in Support
    Copyright © 2014 - 2021 Quantiacs LLC.
    Powered by NodeBB | Contributors