Different Sharpe ratios in backtest and competition filter
@antinomy We decided to open an issue on point 2) as we never mentioned this point in the documentation: https://quantiacs.com/community/topic/15/share-the-state-between-iterations?_=1614767260575
Thank you for the answer!
if you use for example global variables and update state between passes
Yep, that's exactly what I did.
I managed to implement it without global variables, now the sharpe ratio matches and it got accepted.
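One stateless approach (a hedged sketch only; the actual strategy is not shown in the thread, and the `ema` helper and its span parameter are hypothetical) is to recompute any running state from the full price history inside each pass instead of accumulating it in a global variable between passes, so the backtest and the competition filter see identical values:

```python
import numpy as np

def ema(prices, span=20):
    """Exponential moving average recomputed from scratch on every pass.

    Because nothing is carried over between iterations, the result depends
    only on the data passed in, not on how many passes ran before.
    """
    alpha = 2.0 / (span + 1)
    out = np.empty(len(prices), dtype=float)
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1 - alpha) * out[i - 1]
    return out
```

This trades a little CPU time per pass for determinism: recomputing from history is slower than updating an accumulator, but it cannot drift out of sync with the filter.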
@antinomy Ok, that's great. But we put in our roadmap the option to update state between passes also.
I have a question about the sharpe ratio shown in the global leaderboard...
If we take the out-of-sample value indicated in the main interface, and then try to replicate it by accessing the specific system and filtering its OOS period, the value obtained differs from the one shown in the main interface of the global leaderboard.
Why is this discrepancy generated? And which of the two sharpe ratio values is correct?
Thanks in advance!
@captain-nidoran Hi, first of all, two considerations:
1. For systems with a very short track record (a few days) there are numerical instabilities in the computation (especially for BTC Futures systems) because of the strong fluctuations in the equity curves.
2. The result indicated in the main interface is obtained using the evaluator which runs on the Quantiacs server; the result is then pushed to the front end. If instead you access a specific system and filter the OOS period using the graphical interface, the computation uses a quick front-end script. Numerical discrepancies can therefore arise, but they should shrink as the OOS period becomes larger (see also point 1).
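Point 1 is easy to see in a simulation (a sketch with arbitrary return parameters, not Quantiacs data): Sharpe ratio estimates computed on a handful of days scatter far more widely than estimates computed on a long sample, even when the underlying return process is identical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sharpe(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a daily relative-return series."""
    return np.sqrt(periods_per_year) * returns.mean() / returns.std()

# Draw many track records from the same daily return process
# (mean 0.05%, volatility 2% -- illustrative values only) and compare
# the spread of the Sharpe estimates for 5-day vs. 250-day samples.
short_estimates = [sharpe(rng.normal(0.0005, 0.02, 5)) for _ in range(1000)]
long_estimates = [sharpe(rng.normal(0.0005, 0.02, 250)) for _ in range(1000)]

print("5-day spread:", np.std(short_estimates))
print("250-day spread:", np.std(long_estimates))
```

The short-sample spread is several times the long-sample one, which is why two slightly different implementations can disagree badly on a days-old system and converge later.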
The result indicated in the main interface is more precise.
However, could you show an example of the discrepancy? We will then dig deeper into the issue and see whether it is only a numerical issue or a bug that has to be fixed.
@support Hi again,
As an example, let's consider this one:
As you can see, the SR shown for the OOS period in the global leaderboard is 2.044, but if we access this specific system and filter the OOS period we can see this:
In this case the SR shown inside the specific system is 30.69.
I hope this example can help you.
Thanks one more time!
@captain-nidoran Ok, we tracked down the issue, sorry for the delay. The chart uses the fast calculation of the Sharpe ratio based on precalculated relative returns, which are cropped:
chart_sr = calc_sharpe_ratio(crop(relative_returns))
But the rating page calculates it in another way. It crops the output (equity chart), it calculates the relative returns, and then it calculates the Sharpe Ratio.
rating_sr = calc_sharpe_ratio(calc_relative_returns(crop(output)))
The starting point of the two computations (the first relative return) is different: the chart series includes a big 18% leap on day 1.
We modified the results displayed in the charts so that the y axis is rescaled and the very first point carries "zero" return.
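The rescaling amounts to dividing the displayed OOS equity slice by its first value (a sketch with made-up numbers), so the chart starts at 1 and the day-1 leap no longer appears as a return inside the window:

```python
import numpy as np

# Toy OOS equity slice starting right after the 18% leap.
equity_oos = np.array([1.18, 1.181, 1.183, 1.186])

# Normalize by the first value: the chart now starts at 1 and the
# first displayed point contributes zero return.
chart_equity = equity_oos / equity_oos[0]
```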
Now it became as follows (note that the y axis used to start at 1.18 and now starts at 1):
@captain-nidoran The remaining difference you see now is due to the approximate implementation on the front end; for the contest the relevant result is the one displayed in the OOS column. As time passes, this difference will vanish.
@support Thank you very much for the clarification, and once again congratulations on the great job you are doing!