@theflyingdutchman
You may have too many parameters in your strategy, resulting in overfitting. This often happens when optimizing on an asset-by-asset basis with too many indicators. Try generalizing first.
Posts made by alfredaita
-
RE: Q18 testing
-
RE: Error found while running analysis
100% (119812 of 119812) |################| Elapsed Time: 0:00:00 Time: 0:00:00
Output cleaning...
fix uniq
ffill if the current price is None...
Check liquidity...
Ok.
Check missed dates...
Ok.
Normalization...
Output cleaning is complete.
Write output: /root/fractions.nc.gz
WARNING:absl:Found untraced functions such as lstm_cell_layer_call_fn, lstm_cell_layer_call_and_return_conditional_losses, lstm_cell_1_layer_call_fn, lstm_cell_1_layer_call_and_return_conditional_losses while saving (showing 4 of 4). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: ram://fbf49b7a-6c3a-490c-94cb-b27fb3e70aff/assets
INFO:tensorflow:Assets written to: ram://fbf49b7a-6c3a-490c-94cb-b27fb3e70aff/assets
WARNING:absl:<keras.layers.recurrent.LSTMCell object at 0x7f651d7123d0> has the same name 'LSTMCell' as a built-in Keras object. Consider renaming <class 'keras.layers.recurrent.LSTMCell'> to avoid naming conflicts when loading with tf.keras.models.load_model. If renaming is not possible, pass the object in the custom_objects parameter of the load function.
WARNING:absl:<keras.layers.recurrent.LSTMCell object at 0x7f651d716c10> has the same name 'LSTMCell' as a built-in Keras object. Consider renaming <class 'keras.layers.recurrent.LSTMCell'> to avoid naming conflicts when loading with tf.keras.models.load_model. If renaming is not possible, pass the object in the custom_objects parameter of the load function.
State saved.
Run First Iteration...
100% (533136 of 533136) |################| Elapsed Time: 0:00:00 Time: 0:00:00

ValueError                                Traceback (most recent call last)
<ipython-input-13-1448c3f28c90> in <module>
     11     start_date='2014-01-01', # backtest start date
     12     analyze=True,
---> 13     build_plots=True # do you need the chart?
     14 )

~/book/qnt/backtester.py in backtest_ml(train, predict, train_period, retrain_interval, predict_each_day, retrain_interval_after_submit, competition_type, load_data, lookback_period, test_period, start_date, end_date, window, analyze, build_plots, collect_all_states)
    143
    144     train_data_slice = copy_window(data, data_ts[-1], train_period)
--> 145     model = train(train_data_slice)
    146
    147     test_data_slice = copy_window(data, data_ts[-1], lookback_period)

<ipython-input-11-b02ff8641cb5> in train_model(data)
     22
     23     current_size = current_asset.shape[0]
---> 24     current_asset = sc.fit_transform(current_asset.values.reshape(-1,1))
     25
     26

/usr/local/lib/python3.7/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
    697         if y is None:
    698             # fit method of arity 1 (unsupervised transformation)
--> 699             return self.fit(X, **fit_params).transform(X)
    700         else:
    701             # fit method of arity 2 (supervised transformation)

/usr/local/lib/python3.7/site-packages/sklearn/preprocessing/_data.py in fit(self, X, y)
    361         # Reset internal state before fitting
    362         self._reset()
--> 363         return self.partial_fit(X, y)
    364
    365     def partial_fit(self, X, y=None):

/usr/local/lib/python3.7/site-packages/sklearn/preprocessing/_data.py in partial_fit(self, X, y)
    396         X = self._validate_data(X, reset=first_pass,
    397                                 estimator=self, dtype=FLOAT_DTYPES,
--> 398                                 force_all_finite="allow-nan")
    399
    400         data_min = np.nanmin(X, axis=0)

/usr/local/lib/python3.7/site-packages/sklearn/base.py in _validate_data(self, X, y, reset, validate_separately, **check_params)
    419             out = X
    420         elif isinstance(y, str) and y == 'no_validation':
--> 421             X = check_array(X, **check_params)
    422             out = X
    423         else:

/usr/local/lib/python3.7/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
     61             extra_args = len(args) - len(all_args)
     62             if extra_args <= 0:
---> 63                 return f(*args, **kwargs)
     64
     65             # extra_args > 0

/usr/local/lib/python3.7/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator)
    727                     " minimum of %d is required%s."
    728                     % (n_samples, array.shape, ensure_min_samples,
--> 729                        context))
    730
    731         if ensure_min_features > 0 and array.ndim == 2:

ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required by MinMaxScaler.
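The last line means an empty array reached MinMaxScaler.fit: backtest_ml re-runs the train function on sliding windows, so an asset with no data inside the current training window yields a (0, 1) array after the reshape on line 24. A minimal guard sketch, assuming current_asset is a pandas Series for one asset (the scale_asset helper is hypothetical, not part of the toolbox):

from sklearn.preprocessing import MinMaxScaler

def scale_asset(current_asset):
    # Scale one asset's series to [0, 1], skipping empty window slices.
    values = current_asset.dropna().values.reshape(-1, 1)
    if values.shape[0] == 0:
        # No observations in this window (e.g. the asset listed later):
        # exactly the case that raises the ValueError above.
        return None
    sc = MinMaxScaler()
    return sc.fit_transform(values)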
-
RE: Error found while running analysis
@support
My original post might be somewhat misleading.
First: I built the function with Keras 2.8 and TensorFlow 2.8 using the functional API rather than Sequential().
Second: The error occurs on both the server and Colab when using backtest_ml. It seems to appear after passing through my code, at some point in the analytic stage.
Third: No error occurred when I ran the "Train_Model" and "Predict" functions independently on either the server or Colab.
I.e.: Train_Model(Crypto_daily_Data) returns a dict [history] of separate models for each asset, all with the same number of features, and predict(history, Crypto_Daily_Data) returns weights, a grid of assets containing either -1, 1, or 0 for each asset.
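To make that calling convention concrete, here is a minimal sketch of how backtest_ml drives the two functions (per the traceback above, train receives the windowed data slice, and predict receives the trained models plus data). The per-asset mean threshold is a trivial stand-in for the real LSTM, and the competition_type string, field name, and period values are assumptions rather than my actual settings:

import xarray as xr
import qnt.backtester as qnbt

def train_model(data):
    # data: xarray.DataArray with dims (field, time, asset)
    models = {}
    for asset in data.asset.values.tolist():
        closes = data.sel(field="close", asset=asset).to_pandas().dropna()
        if closes.empty:
            continue  # empty slices are what trip MinMaxScaler
        models[asset] = closes.mean()  # stand-in for a per-asset LSTM
    return models

def predict(models, data):
    # Return one weight in {-1, 0, 1} per asset and day.
    close = data.sel(field="close")
    weights = xr.zeros_like(close)
    for asset, threshold in models.items():
        weights.loc[dict(asset=asset)] = xr.where(
            close.sel(asset=asset) > threshold, 1, -1
        )
    return weights

weights = qnbt.backtest_ml(
    train=train_model,
    predict=predict,
    train_period=2 * 365,  # days in each training window (assumed)
    retrain_interval=365,  # retrain once a year (assumed)
    competition_type="crypto_daily_long_short",  # assumed
    lookback_period=365,
    start_date="2014-01-01",
    analyze=True,
    build_plots=True,
)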
The toolbox version used in Colab is the one your website refers to ["using HTTPS"]. I was unable to determine the actual version numbers. Also, I use the Jupyter site you are currently providing.
-
Error found while running analysis
I created an LSTM model for crossover. When testing, i.e. running train and predict separately on Colab, there were no errors.
When running on the Quantiacs server, I get the following error [last line]: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required by MinMaxScaler.
-
Saving and recalling a dictionary of trained models
I know this has been answered before, but I have some questions.
Using Keras 2.8, I have trained a dictionary of models. I'm designing it with the intention of using backtest_ml.
I called each of the functions separately to test. The dictionary of trained models took about an hour to train; this is probably too long for submission purposes at this point.
I saved the models to my area as a CSV by converting them into a Pandas DataFrame.
When I shut down and want to recall it, will it function? I examined the data, and it seems like it should, but I have not tried.
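For reference, rather than round-tripping weights through a CSV, Keras can save and reload each model whole, preserving the architecture and optimizer state. A minimal sketch for a per-asset dictionary (the base_dir path and helper names are hypothetical); custom_objects is the parameter mentioned in the absl warnings from my earlier post:

import tensorflow as tf

def save_models(models, base_dir="/root/models"):
    # Save each per-asset model in Keras's native SavedModel format;
    # a CSV of weights loses the graph and must be rebuilt by hand.
    for asset, model in models.items():
        model.save(f"{base_dir}/{asset}")

def load_models(asset_names, base_dir="/root/models"):
    # Pass custom_objects=... here if any layer shadows a built-in
    # name, as the absl warning suggests.
    return {
        asset: tf.keras.models.load_model(f"{base_dir}/{asset}")
        for asset in asset_names
    }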
I edited the init.ipynb file; running it is no problem, but having to run it each time would not be convenient.
Do I have access to GPUs or TPUs? Thanks in advance.
Alfred
-
notebook for Google Colab not working
Google Colab will not install the package.
It seems to be an SSH issue on the GitHub side.
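A possible workaround, assuming the public toolbox repository is the one at github.com/quantiacs/toolbox: install over HTTPS instead of SSH from a Colab cell.

# Run in a Colab cell: install the toolbox over HTTPS rather than SSH,
# since Colab has no SSH key registered with GitHub.
# The repository URL is an assumption; check the Quantiacs docs.
!pip install git+https://github.com/quantiacs/toolbox.git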