Missed Call to write output
-
@magenta-kabuto Hello. If you are developing your strategies in a local environment, I recommend running your strategy code in an online environment before submitting it to the competition.
There may be errors related to the absence of certain Python libraries in the online environment, the use of local files, and the application of variables or settings from the local environment.
It is important that the lines

import qnt.output as qnout
qnout.write(weights)

are placed in a separate cell.
-
@vyacheslav_b Thx for your answer. You are right, it seems to have been a local dependency I forgot to comment out.
-
@magenta-kabuto I get the error '"strategy.ipynb" does not compile'. Can this error be thought of as a runtime error? I am currently encountering it. Thx in advance
-
@magenta-kabuto Hi, sorry for the delay, we need more input.
-
@support thx for the reply. I ran the strategy in the online environment and got the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[54], line 1
----> 1 weights = qnbk.backtest_ml(
      2     train=train_model,
      3     predict=predict,
      4     train_period=365*5,                  # the data length for training in calendar days
      5     retrain_interval=125,                # how often we have to retrain models (calendar days)
      6     retrain_interval_after_submit=125,   # how often to retrain models after submission during evaluation (calendar days)
      7     predict_each_day=True,               # is it necessary to call prediction for every day during backtesting?
      8                                          # set it to True if you suspect that get_features is looking forward
      9     competition_type='stocks_nasdaq100', # competition type
     10     lookback_period=375,                 # how many calendar days are needed by the predict function to generate the output
     11     start_date='2006-01-01',             # backtest start date
     12     build_plots=False                    # do you need the chart?
     13 )

File ~/book/qnt/backtester.py:119, in backtest_ml(train, predict, train_period, retrain_interval, predict_each_day, retrain_interval_after_submit, competition_type, load_data, lookback_period, test_period, start_date, end_date, window, analyze, build_plots, collect_all_states)
    116     qnout.write(result)
    118 if need_retrain and retrain_interval_cur > 1 or state is not None:
--> 119     qnstate.write((created, model, state))
    121 if is_submitted():
    122     if state is not None:

File ~/book/qnt/state.py:12, in write(state)
     10 path = get_env("OUT_STATE_PATH", "state.out.pickle.gz")
     11 with gzip.open(path, 'wb') as gz:
---> 12     pickle.dump(state, gz)
     13 log_info("State saved.")

AttributeError: Can't pickle local object 'Layer._initializer_tracker.<locals>.<lambda>'

Unfortunately I don't get what is meant by that exactly. Thx a lot. Regards
-
@support I couldn't find any local dependency. I thought the problem might be that the model returned by the train function is not output as a dictionary and therefore cannot be pickled, but this doesn't seem to be the case. Can you work with this information?
Regards -
@magenta-kabuto Hello. Check the version of Python in your local environment. The problem might be due to incompatibility of the pickle module across different Python versions.
import sys
import platform

print("Python version (simple):", sys.version)
print("Python version (detailed):", platform.python_version())
-
@vyacheslav_b said in Missed Call to write output:
import sys
import platform
print("Python version (simple):", sys.version)
print("Python version (detailed):", platform.python_version())

Hi @Vyacheslav_B thx a lot for your suggestion. The versions are the same (3.10.13), so unfortunately that isn't the reason for the error.
regards -
@magenta-kabuto Append

print(state)

before saving it. What are you trying to save?
You may need to restart the kernel.
ChatGPT's answer:
The error message you're encountering, AttributeError: Can't pickle local object 'Layer._initializer_tracker.<locals>.<lambda>', indicates that the pickle module is unable to serialize a lambda function (or possibly another local object) that is part of the object state you're attempting to dump to a file. This is a common limitation of pickle: it cannot serialize lambda functions, local functions, classes defined within functions, or instances of such classes, among other things.
Avoid Using Lambda Functions in Serializable Objects
If possible, replace lambda functions with named functions (even if they're one-liners). Named module-level functions can be pickled because they are not considered local objects. For example, if you have:

lambda x: x + 1

Replace it with:

def increment(x):
    return x + 1
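To make the difference concrete, here is a minimal runnable sketch (not from the thread) showing that a named module-level function round-trips through pickle while a lambda does not:

```python
import pickle

def increment(x):
    # A named, module-level function: pickle stores it by reference
    # (module name + qualified name), so it serializes fine.
    return x + 1

restored = pickle.loads(pickle.dumps(increment))
assert restored(1) == 2

# A lambda has no importable qualified name, so pickle refuses it.
try:
    pickle.dumps(lambda x: x + 1)
except (pickle.PicklingError, AttributeError) as e:
    print("pickling the lambda failed:", e)
```

Note that pickle serializes functions by reference, not by value, which is why only objects reachable under an importable name can be saved.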
-
@vyacheslav_b hmm.. The state is saved by the backtester; it is not something I configured. I guess it is related to the retraining of the model, but I am not sure. Regarding lambda: I only used a lambda once in the entire notebook, and it was unrelated to the layers of the neural network. I removed it anyway, and strangely, since this morning the same code that worked yesterday fails in the online environment (using only 1 epoch, whereas yesterday I used up to 10 epochs, which should be more expensive) with the following error: 2024-03-19 11:55:08.657960: W external/local_tsl/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 10035200000 exceeds 10% of free system memory. So I cannot check whether the original error disappeared.
I am sorry to bother you with this, but it's getting really confusing.
Regards -
@magenta-kabuto Hello. I meant that you can modify the code in the file ~/book/qnt/state.py and see what gets saved there.
It was:
import gzip, pickle
from qnt.data import get_env
from qnt.log import log_err, log_info


def write(state):
    if state is None:
        return
    path = get_env("OUT_STATE_PATH", "state.out.pickle.gz")
    with gzip.open(path, 'wb') as gz:
        pickle.dump(state, gz)
    log_info("State saved.")


def read(path=None):
    if path is None:
        path = get_env("IN_STATE_PATH", "state.in.pickle.gz")
    try:
        with gzip.open(path, 'rb') as gz:
            res = pickle.load(gz)
            log_info("State loaded.")
            return res
    except Exception as e:
        log_err("Can't load state.", e)
        return None
It became:
import gzip, pickle
from qnt.data import get_env
from qnt.log import log_err, log_info


def write(state):
    if state is None:
        return
    print("state")
    print(state)
    path = get_env("OUT_STATE_PATH", "state.out.pickle.gz")
    with gzip.open(path, 'wb') as gz:
        pickle.dump(state, gz)
    log_info("State saved.")


def read(path=None):
    if path is None:
        path = get_env("IN_STATE_PATH", "state.in.pickle.gz")
    try:
        with gzip.open(path, 'rb') as gz:
            res = pickle.load(gz)
            log_info("State loaded.")
            return res
    except Exception as e:
        log_err("Can't load state.", e)
        return None
Save the file and restart the kernel.
-
@vyacheslav_b oh I see. Thx a lot for the solution. I will try it out later
-
Hi @vyacheslav_b,
I implemented your suggestion in my local environment and this is the result:
fetched chunk 1/1 3s
Data loaded 3s
Output cleaning...
fix uniq
ffill if the current price is None...
Check liquidity...
Ok.
Check missed dates...
Ok.
Normalization...
Output cleaning is complete.
NOTICE: The environment variable OUTPUT_PATH was not specified. The default value is 'fractions.nc.gz'
Write output: fractions.nc.gz
NOTICE: The environment variable OUT_STATE_PATH was not specified. The default value is 'state.out.pickle.gz'
state
(numpy.datetime64('2024-04-02T00:00:00.000000000'), {'arch': <__main__.LSTM_Encoder object at 0x7fc4be530bb0>}, None)
State saved.

I don't know why the date is 2024-04-02. The second element is the model, and the third element (the state) is None. The error is:

File ~/book/qnt/state.py:12, in write(state)
     10 path = get_env("OUT_STATE_PATH", "state.out.pickle.gz")
     11 with gzip.open(path, 'wb') as gz:
---> 12     pickle.dump(state, gz)
     13 log_info("State saved.")

AttributeError: Can't pickle local object 'Layer._initializer_tracker.<locals>.<lambda>'

The error occurs while dumping this tuple, so beyond pickle being unable to serialize the object, the message is unclear to me. ChatGPT suggests using dill instead of pickle. Do you maybe know more? Thx a lot. Regards
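For what it's worth, the dill suggestion can be checked locally like this (a hedged sketch: dill is a third-party package that may not be installed in the online environment, so this is an illustration rather than a guaranteed fix):

```python
import pickle
import dill  # third-party: pip install dill; may be absent online

f = lambda x: x + 1  # the kind of object plain pickle rejects

# Plain pickle refuses the lambda...
try:
    pickle.dumps(f)
except Exception as e:
    print("plain pickle failed:", e)

# ...while dill serializes the function by value (its bytecode),
# so lambdas and locally defined objects round-trip.
g = dill.loads(dill.dumps(f))
assert g(2) == 3
```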
-
I finally resolved the issue, after a lot of struggle. The custom layers, the custom loss function, and the associated functions had to be serialized and deserialized correctly in order to save the architecture and weights as JSON, rather than in a dictionary as is suggested for PyTorch in the neural network template.
It seems PyTorch is far more user friendly when it comes to saving and loading models.
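For anyone hitting the same wall, the pattern looks roughly like this (a hedged sketch with a made-up MyDense layer, not the actual architecture from this thread; assumes TensorFlow/Keras is installed): a custom layer implements get_config, the architecture is saved via to_json, the weights are saved separately, and model_from_json restores everything given custom_objects.

```python
import numpy as np
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    """Toy custom layer (hypothetical): y = x @ w."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w)

    def get_config(self):
        # Without this, the layer cannot be rebuilt from JSON.
        config = super().get_config()
        config.update({"units": self.units})
        return config

model = tf.keras.Sequential([MyDense(2)])
model.build((None, 4))

arch_json = model.to_json()    # architecture as a JSON string
weights = model.get_weights()  # plain numpy arrays, easy to store

# Restore: the custom class must be passed in explicitly.
restored = tf.keras.models.model_from_json(
    arch_json, custom_objects={"MyDense": MyDense}
)
restored.set_weights(weights)

x = np.ones((1, 4), dtype="float32")
assert np.allclose(model(x).numpy(), restored(x).numpy())
```

This sidesteps pickle entirely: nothing in the saved artifacts is a live Python object, so lambdas tracked inside Keras layers never reach the serializer.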