Hi @support,
no problem.
I didn't check until now; the accepted strategies do not use machine learning.
I will try out some machine learning strategies in the upcoming days and let you know.
Thanks again and regards
Posts made by magenta.kabuto
RE: Access previous weights
Hi @vyacheslav_b,
I just took a quick look at the template and it seems very helpful.
Thanks a lot for the update!
Regards -
RE: Access previous weights
Hello again to all,
I hope everyone is doing well.
Another question came to me, which should have occurred to me earlier: when we use a stateful machine learning strategy for submission, how can we pass the state along without using the ml_backtester, given that the notebook is rerun at each point in time?
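For illustration, this is roughly the kind of workaround I have in mind, assuming the notebook can write to its working directory between reruns (the file name and state structure here are made up, not a Quantiacs feature):

```python
import os
import pickle

STATE_FILE = "strategy_state.pickle"  # hypothetical file name

def load_state(default=None):
    """Load the state saved by the previous run, if any."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    return default

def save_state(state):
    """Persist the state so the next rerun of the notebook can pick it up."""
    with open(STATE_FILE, "wb") as f:
        pickle.dump(state, f)

# Example: carry yesterday's allocations over to today's run.
state = load_state(default={"prev_weights": {}})
state["prev_weights"]["NAS:AAPL"] = 0.04
save_state(state)
```

Whether the evaluation environment actually preserves files between reruns is exactly what I am unsure about.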
Thank you.
Regards -
RE: Access previous weights
@vyacheslav_b the problem with not using states, as I understand it, is the following: say the model estimated at t (single pass) gives an estimate for NAS:AAPL of 0.04 (weight allocation). That is the position assigned to the stock at t for t+1.
At t+1 the model is re-estimated, now including the information about NAS:AAPL at t, and assigns weights of 0.03 for t+1 and 0.035 for t+2. If I do not use states and apply the get_lower_slippage function, I end up with a weight allocation of 0.035 for t+2 at t+1, whereas with states I keep 0.04 for t+2 at t+1 and avoid the impact of transaction costs.
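To make the example concrete, a stripped-down, single-asset version of the behaviour I mean might look like this (the actual get_lower_slippage in the templates works on full weight arrays, so this is only an illustration, and the threshold value is assumed):

```python
def lower_slippage_single(prev_weight, new_weight, threshold=0.01):
    """Keep the previous allocation unless the new target differs enough.

    Skipping small rebalances avoids paying transaction costs for them.
    """
    if abs(new_weight - prev_weight) < threshold:
        return prev_weight
    return new_weight

# With state: the allocation 0.04 from t is known, so the small move to
# 0.035 is skipped and no transaction costs are incurred.
print(lower_slippage_single(0.04, 0.035))  # stays at 0.04

# Without state, the previous allocation is unknown, so the strategy has
# no choice but to trade to the freshly estimated 0.035.
```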
Thank you.
Regards -
RE: Access previous weights
Hi @vyacheslav_b ,
thank you very much for the solution.
I did not know that the ML backtester can handle two outputs (weights and state), but I will use that from now on.
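For anyone reading later, the two-output pattern can be sketched as below; the exact signature the Quantiacs backtester expects may differ, so treat the names here as assumptions:

```python
def strategy(data, state=None):
    """Stateful strategy: return both the allocations and the state
    that the backtester passes back in on the next call."""
    if state is None:
        state = {"prev_weights": {}}  # first call: no history yet

    # ... compute model-based target weights from `data` ...
    weights = {"NAS:AAPL": 0.04}  # placeholder allocation

    # Remember today's allocations for the next time step.
    state["prev_weights"] = weights
    return weights, state

# Minimal simulation of how a stateful backtester would drive it:
state = None
for day_data in [{"day": 1}, {"day": 2}]:
    weights, state = strategy(day_data, state)
```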
Regards -
RE: Access previous weights
Hi @support,
just wanted to thank you for suggesting the stateful backtester, as it solves the issue.
I incorporated the DL model into the stateful backtester, which seems to work (backtesting right now).
Also, the get_lower_slippage function in the ML templates is subject to forward looking, overfitting the holding period.
Regards -
RE: Access previous weights
@support thanks for your reply.
I also saw some discussion related to a similar topic in the general discussion.
I will try to explain what I mean; if it doesn't make sense, just ignore it, and if I get something wrong, please let me know.
The point I am making: imagine the evaluation period starts on 01.06.2024 and my deep learning model meets the criteria for evaluation.
Since 01.06.2024 is the first day, the model is trained one day before, predicts for this day, and assigns the weights for assets traded on 01.06.2024.
To my understanding, the model is retrained every day for the single-pass submission of the DL model. It will therefore have different weights (model weights, not allocation weights) after the next day's training on 01.06.2024 and will predict one step ahead, the allocations for 02.06.2024, and so on.
So my question is: under this framework I do not have access to the previous weights, right? (For the model training on 01.06.2024, I do not know what allocation weights I assigned on 31.05.2024, and I don't know whether the stateful model gives me access to, say, up to 60 days of previous allocations.)
The weights assigned are presumably saved somewhere by Quantiacs, since these allocations were made in the past, but they are not accessible to me on, say, 03.06.2024.
So if I want to reduce slippage, so that on 03.06.2024 I change allocations only if the predictions have grown since the beginning of the evaluation, how can I do that?
I hope what I am trying to say makes sense.
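As a workaround sketch for the 60-day question: if each run can append its own allocations to a local file, the notebook could rebuild a rolling history itself. Everything here, including the file name, is assumed, not an actual Quantiacs feature:

```python
import csv
import os

HISTORY_FILE = "allocation_history.csv"  # hypothetical file name
MAX_DAYS = 60

def append_allocation(date, weights):
    """Append today's allocations; one row per (date, asset, weight)."""
    with open(HISTORY_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        for asset, w in weights.items():
            writer.writerow([date, asset, w])

def load_history():
    """Return the last MAX_DAYS distinct dates of saved allocations."""
    if not os.path.exists(HISTORY_FILE):
        return {}
    history = {}
    with open(HISTORY_FILE, newline="") as f:
        for date, asset, w in csv.reader(f):
            history.setdefault(date, {})[asset] = float(w)
    last_dates = sorted(history)[-MAX_DAYS:]
    return {d: history[d] for d in last_dates}

# Example: two consecutive runs each record their allocation.
append_allocation("2024-05-31", {"NAS:AAPL": 0.04})
append_allocation("2024-06-01", {"NAS:AAPL": 0.03})
```

Again, this only works if files written by one run survive until the next, which is the part I cannot verify.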
Regards -
RE: Access previous weights
Hey @support,
can you maybe help?
Is there a way to download or access weights the way it is done for data (which is updated at each time step)?
Thank you.
Regards -
RE: Access previous weights
Hi @vyacheslav_b, thanks for your reply.
Sorry, I expressed myself badly. What I mean is this: as I understand it, if I predict at each time step, my machine learning model, for example, takes a position at t (say 0.5). At t+1, when the notebook is run again for prediction, that information is lost, isn't it? So if I want to apply the lower-slippage logic, how can I do that?
An example is the screenshot I posted above, which makes a one-step prediction by assigning a weight to the selected assets at the latest index at time step t. Tomorrow, at t+1, it will assign a new weight to the selected assets without knowledge of what was assigned at t. At t+1 I could take the value of the prediction for t (which is part of the batch), but it will differ from the weights I actually assigned at t because of forward looking.
I hope I didn't overcomplicate the explanation.
Regards -
RE: Access previous weights
Or, if not: is the code of the competition backtest available, so that I can figure out a way?
Thanks -
RE: Access previous weights
Hi @vyacheslav_b,
thanks for the solution.
Are you aware of a way to access previously taken positions when using single pass, as the qnbt backtester leads to a runtime error?
Regards -
RE: KeyError: "cannot represent labeled-based slice indexer for coordinate 'time' with a slice over integer positions; the index is unsorted or non-unique"
@newbiequant96 no problem.
I think this issue is unrelated to the previous one. If you can show what is written above "return code 1", I may be able to help.
It seems to be an issue in the code.
Regards -
RE: KeyError: "cannot represent labeled-based slice indexer for coordinate 'time' with a slice over integer positions; the index is unsorted or non-unique"
Hi @newbiequant96,
In the init notebook you need to write: !pip install pandas==1.2.5
I recently ran into that problem again; in my case, I think, statsmodels is not compatible with the default pandas version and upgrades it.
It then needs to be downgraded again.
Hope this will work for you.
Regards -
RE: Data loading in online Env
Does anyone know whether the Quantiacs competition backtester, using the following code: stats = qnstats.calc_stat(data, weights.sel(time=slice("2006-01-01", None))), would return all weights since inception or only the weights assigned at the current time step?
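As far as I understand the xarray semantics, weights.sel(time=slice("2006-01-01", None)) keeps every time step from that date onward, i.e. the whole history since inception rather than only the latest step. A plain-Python analogue of that label-based slice, just to illustrate what I mean:

```python
def slice_from(series, start):
    """Keep all (date, weight) pairs with date >= start, mimicking
    xarray's label-based .sel(time=slice(start, None))."""
    return [(d, w) for d, w in series if d >= start]

weights = [
    ("2005-12-30", 0.02),
    ("2006-01-01", 0.04),
    ("2024-06-01", 0.03),
]
print(slice_from(weights, "2006-01-01"))
# -> every entry from 2006-01-01 onward, not only the last one
```

(ISO date strings compare correctly as plain strings, which is why the >= works here.)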
Thank you -
RE: Data loading in online Env
OK @support, thanks a lot for the info.
I used backtest_ml in order to access previous weights for my strategy. Is there a way to access past weights without using the ML backtester?
Thank you.
Regards -
RE: Data loading in online Env
Hi @support,
thanks for the information.
Does this cell limit include the ml_backtester?
Thank you.
Regards -
Data loading in online Env
Hello @support,
could you please check why strategies # 16876206 and # 16875625 were filtered, as they passed the precheck and there are no error warnings.
Thanks a lot.
Regards -
RE: Runtime Error?
Hi @support ,
are there any updates on the previously mentioned issue? I am aware that, with the deadline close, the systems are backtesting slowly; however, I am unable to figure out what the issue is.
Thank you
Regards -
RE: Runtime Error?
Hello @support,
I checked the strategy and refined it.
When it runs in the online environment there is no issue, but when I precheck, the "exceeded 1800 seconds for a cell" issue arises, which should be expected: when I backtest the strategy from 2006, it takes longer than that.
Now that I have submitted it, it has been filtered without any error and there are no failed logs. Could you please have a look: # 16818788. Thank you.
Regards -
RE: Runtime Error?
@support alright, thank you very much.
I will check on Google Colab; hopefully I can figure out the mistake, otherwise I will be back.
Regards