    Quantiacs Community

    Posts made by illustrious.felice

    • Strategy trades illiquid instruments

      Hi, I am getting the "Strategy trades illiquid instruments" warning for the latest date in the log, even though I have multiplied my weights by the is_liquid field. I checked the log and only the latest date shows this error. Please help me. My stock universe is the Magnificent 7. @support @Vyacheslav_B

      My alpha id is #18379797
      (screenshots of the log attached)
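
      For reference, a minimal sketch of how liquidity is usually enforced before writing output, assuming the is_liquid field and the qnout.clean/qnout.check helpers from the Quantiacs examples; the placeholder strategy itself is illustrative:

      import qnt.data as qndata
      import qnt.output as qnout

      data = qndata.stocks.load_ndx_data(min_date="2006-01-01")

      # placeholder strategy: equal long exposure wherever a close price exists
      weights = data.sel(field="close") / data.sel(field="close")
      # zero out weights on assets that are not liquid on a given day
      weights = weights * data.sel(field="is_liquid")

      # clean() fixes common issues in the weights and check() reports warnings
      # such as "Strategy trades illiquid instruments" before write()
      weights = qnout.clean(weights, data, "stocks_nasdaq100")
      qnout.check(weights, data, "stocks_nasdaq100")
      qnout.write(weights)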

      posted in Support
    • RE: I cannot install pandas 1.2.5

      @omohyoid Hi, you can switch to Google Colab.
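
      A minimal sketch of what that looks like in a Colab notebook cell; the version pin matches the thread title, everything else about the environment is an assumption:

      # run in a Google Colab cell to pin the pandas version
      !pip install pandas==1.2.5

      import pandas as pd
      print(pd.__version__)  # should report 1.2.5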

      posted in Support
    • RE: Accessing Quantiacs takes too long

      @support Hello. My strategy has the id #16934018 and was submitted in early May, but its OS PnL has not been updated yet. Please check this issue. Thank you.

      posted in Support
    • RE: Accessing Quantiacs takes too long

      @support Thank you very much. I would like to ask the following: I want to research crypto and futures strategies, but the current contest is about stocks, so how can I submit crypto and futures strategies in order to see their OS PnL? Thank you.

      posted in Support
    • Accessing Quantiacs takes too long

      Hi, I noticed that accessing Quantiacs is very slow: from opening the platform to successfully logging in takes a few minutes, and opening the notebook repo and the init and strategy files takes a few more minutes. I hope Quantiacs can speed up access to files on the platform as well as login.

      Thank you very much.

      @support @Vyacheslav_B

      posted in Support
    • RE: Access previous weights

      @vyacheslav_b Thank you for your response.
      Here is the code I used from your example. I added some other features (e.g. trend = qnta.roc(qnta.lwma(data.sel(field='close'), 40), 1), ...) and noticed that after running ml_backtest all the indicators are NaN and the PnL is a straight line. I have tried several other features but the result is the same: everything is NaN. (See the note after the code below.)

      import xarray as xr
      import qnt.data as qndata
      import qnt.backtester as qnbt
      import qnt.ta as qnta
      import qnt.stats as qns
      import qnt.graph as qngraph
      import qnt.output as qnout
      import numpy as np
      import pandas as pd
      import torch
      from torch import nn, optim
      import random
      
      asset_name_all = ['NAS:AAPL', 'NAS:GOOGL']
      lookback_period = 155
      train_period = 100
      
      
      class LSTM(nn.Module):
          """
          Class to define our LSTM network.
          """
      
          def __init__(self, input_dim=3, hidden_layers=64):
              super(LSTM, self).__init__()
              self.hidden_layers = hidden_layers
              self.lstm1 = nn.LSTMCell(input_dim, self.hidden_layers)
              self.lstm2 = nn.LSTMCell(self.hidden_layers, self.hidden_layers)
              self.linear = nn.Linear(self.hidden_layers, 1)
      
          def forward(self, y):
              outputs = []
              n_samples = y.size(0)
              h_t = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
              c_t = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
              h_t2 = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
              c_t2 = torch.zeros(n_samples, self.hidden_layers, dtype=torch.float32)
      
              for time_step in range(y.size(1)):
                  x_t = y[:, time_step, :]  # Ensure x_t is [batch, input_dim]
      
                  h_t, c_t = self.lstm1(x_t, (h_t, c_t))
                  h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
                  output = self.linear(h_t2)
                  outputs.append(output.unsqueeze(1))
      
              outputs = torch.cat(outputs, dim=1).squeeze(-1)
              return outputs
      
      
      def get_model():
          def set_seed(seed_value=42):
              """Set seed for reproducibility."""
              random.seed(seed_value)
              np.random.seed(seed_value)
              torch.manual_seed(seed_value)
              torch.cuda.manual_seed(seed_value)
              torch.cuda.manual_seed_all(seed_value)  # if you are using multi-GPU.
              torch.backends.cudnn.deterministic = True
              torch.backends.cudnn.benchmark = False
      
          set_seed(42)
          model = LSTM(input_dim=4)  # get_features() returns four features: log_close, log_open, high_price, trend
          return model
      
      
      def get_features(data):
          close_price = data.sel(field="close").ffill('time').bfill('time').fillna(1)
          open_price = data.sel(field="open").ffill('time').bfill('time').fillna(1)
          high_price = data.sel(field="high").ffill('time').bfill('time').fillna(1)
          log_close = np.log(close_price)
          log_open = np.log(open_price)
          trend = qnta.roc(qnta.lwma(close_price, 40), 1)
          features = xr.concat([log_close, log_open, high_price, trend], "feature")
          return features
      
      
      def get_target_classes(data):
          price_current = data.sel(field='open')
          price_future = qnta.shift(price_current, -1)
      
          class_positive = 1  # price goes up
          class_negative = 0  # price goes down
      
          target_price_up = xr.where(price_future > price_current, class_positive, class_negative)
          return target_price_up
      
      
      def load_data(period):
          return qndata.stocks.load_ndx_data(tail=period, assets=asset_name_all)
      
      
      def train_model(data):
          features_all = get_features(data)
          target_all = get_target_classes(data)
          models = dict()
      
          for asset_name in asset_name_all:
              model = get_model()
              target_cur = target_all.sel(asset=asset_name).dropna('time', 'any')
              features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')
              target_for_learn_df, feature_for_learn_df = xr.align(target_cur, features_cur, join='inner')
              criterion = nn.MSELoss()
              optimiser = optim.LBFGS(model.parameters(), lr=0.08)
              epochs = 1
              for i in range(epochs):
                  def closure():
                      optimiser.zero_grad()
                      feature_data = feature_for_learn_df.transpose('time', 'feature').values
                      in_ = torch.tensor(feature_data, dtype=torch.float32).unsqueeze(0)
                      out = model(in_)
                      target = torch.zeros(1, len(target_for_learn_df.values))
                      target[0, :] = torch.tensor(np.array(target_for_learn_df.values))
                      loss = criterion(out, target)
                      loss.backward()
                      return loss
      
                  optimiser.step(closure)
              models[asset_name] = model
          return models
      
      
      def predict(models, data, state):
          last_time = data.time.values[-1]
          data_last = data.sel(time=slice(last_time, None)) 
          
          weights = xr.zeros_like(data_last.sel(field='close'))
          for asset_name in asset_name_all:
              features_all = get_features(data_last)
              features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')
              if len(features_cur.time) < 1:
                  continue
              feature_data = features_cur.transpose('time', 'feature').values
              in_ = torch.tensor(feature_data, dtype=torch.float32).unsqueeze(0)
              out = models[asset_name](in_)
              prediction = out.detach()[0]
              weights.loc[dict(asset=asset_name, time=features_cur.time.values)] = prediction
              
              
          weights = weights * data_last.sel(field="is_liquid")
          
          # state may be null, so define a default value
          if state is None:
              default = xr.zeros_like(data_last.sel(field='close')).isel(time=-1)
              state = {
                  "previus_weights": default,
              }
              
          previus_weights = state['previus_weights']
          
          
          # align the arrays to prevent problems in case the asset list changes
          previus_weights, weights = xr.align(previus_weights, weights, join='right') 
          
      
          weights_avg = (previus_weights + weights) / 2
          
          
          next_state = {
              "previus_weights": weights_avg.isel(time=-1),
          }
          
      #     print(last_time)
      #     print("previus_weights")
      #     print(previus_weights)
      #     print(weights)
      #     print("weights_avg")
      #     print(weights_avg.isel(time=-1))
      
          return weights_avg, next_state
      
      
      weights = qnbt.backtest_ml(
          load_data=load_data,
          train=train_model,
          predict=predict,
          train_period=train_period,
          retrain_interval=360,
          retrain_interval_after_submit=1,
          predict_each_day=True,
          competition_type='stocks_nasdaq100',
          lookback_period=lookback_period,
          start_date='2006-01-01',
          build_plots=True
      )
      

      Screenshot 2024-05-16 085531.png
      Screenshot 2024-05-16 085555.png
      Screenshot 2024-05-16 085716.png
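
      A minimal sketch of one possible cause, assuming the NaNs come from the warm-up period of the rolling indicators: predict() slices the data to the last bar before get_features() is called, so qnta.lwma(..., 40) has no history, the trend feature is NaN, dropna removes every row and the weights stay at zero (which would also make the backtest statistics NaN). Computing the features on the full lookback window first and only then keeping the last bar avoids this. The function below is an illustrative variant of the predict() above (it reuses get_features and asset_name_all from that code), not the toolbox API:

      import xarray as xr
      import torch


      def predict_last_bar(models, data, state):
          # build features on the FULL lookback window so rolling indicators
          # (lwma, roc, ...) have enough history, then keep only the last bar
          features_all = get_features(data)
          last_time = data.time.values[-1]

          weights = xr.zeros_like(data.sel(field='close').sel(time=[last_time]))
          for asset_name in asset_name_all:
              features_cur = features_all.sel(asset=asset_name).dropna('time', 'any')
              if len(features_cur.time) < 1:
                  continue
              feature_data = features_cur.transpose('time', 'feature').values
              in_ = torch.tensor(feature_data, dtype=torch.float32).unsqueeze(0)
              out = models[asset_name](in_)
              # use only the prediction for the most recent bar
              weights.loc[dict(asset=asset_name, time=[last_time])] = out.detach()[0, -1].item()

          weights = weights * data.sel(field="is_liquid").sel(time=[last_time])
          # the state averaging from the original predict() can still be applied on top
          return weights, state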

      posted in Support
    • RE: Access previous weights

      @vyacheslav_b Hello, I was trying the code you gave and noticed that using state with ml_backtest only works when the features are the raw OHLC fields or their logs (open, high, low, close).

      I added some other features (e.g. trend = qnta.roc(qnta.lwma(data.sel(field='close'), 40), 1), ...) and noticed that after running ml_backtest all the indicators are NaN. Looking forward to your help. Thank you.

      @Vyacheslav_B

      posted in Support
    • Test out-of-sample performance

      Hi,

      I know that there is currently a stock contest going on, but if I want to develop a crypto or futures strategy and then submit it so I can test its OS performance, is that allowed?

      Thank you.
      @support @Vyacheslav_B

      posted in Support
    • Extend strategy submission time Q21

      Hi Quantiacs,

      I notice that the Q21 submission window is about to close (May 1), but I have realized that submitting a strategy takes a very long time (3 to 5 days). I hope Quantiacs can provide a faster submission path (I used a single-pass backtest, and running the precheck is also very fast), or it would be great if the Q21 submission deadline could be extended by a few weeks.

      Besides, I would also like to ask: if ML and DL strategies switch from ml_backtest to a single-pass backtest, will they overfit because all of the data is used for training? In ml_backtest it is possible to split train_period and test_period, so how can we make the same split in a single-pass backtest? A single-pass backtest code example for an ML/DL strategy would be very helpful, and would also help limit forward-looking bias.
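
      On the train/test split question, a minimal sketch of holding out a period by hand, assuming nothing beyond xarray time slicing and the train_model() function from the LSTM example above; the cutoff dates are arbitrary examples:

      import qnt.data as qndata

      data = qndata.stocks.load_ndx_data(min_date="2006-01-01")

      # hold out everything after the cutoff for evaluation only
      train_data = data.sel(time=slice(None, "2019-12-31"))
      test_data = data.sel(time=slice("2020-01-01", None))

      models = train_model(train_data)  # fit only on the training window
      # generate and evaluate predictions on test_data only,
      # so the fitted models never see the held-out period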

      Thanks a lot.

      @Vyacheslav_B @support

      posted in Support
    • RE: Missed call to write_output although had included it

      @vyacheslav_b I also encountered this situation. Could you give more specific instructions on how the precheck file works? Thank you.

      posted in Support
    • TypeError: __init__() got an unexpected keyword argument 'max_value'

      Hi,

      I got an error after running load_data in the local environment:

      (screenshot of the traceback attached)
      Please help me fix it.

      Thank you @support

      posted in Strategy help
    • RE: RuntimeError: expand(torch.DoubleTensor{[694, 6]}, size=[694]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)

      @support Thank you so much. I have resolved this error

      posted in Strategy help
    • RE: backtest_ml has too long a run time

      @vyacheslav_b Thank you so much. I would like to ask one more thing: every time I train the LSTM with ml_backtest I get a different Sharpe ratio (sometimes 1.2, 1.1, 0.9, ...). Why is the Sharpe ratio inconsistent between training runs? I would appreciate an answer. Thank you.

      posted in Strategy help
    • RE: backtest_ml has too long a run time

      @illustrious-felice I would like to ask: how can I skip the correlation check each time I run ml_backtest? It is time consuming and I want to eliminate it. Thank you so much.

      posted in Strategy help
    • RE: backtest_ml has too long a run time

      @illustrious-felice If I leave the list of stocks as it is, the backtest runs normally and I get results.
      Screenshot 2024-03-06 220401.png Screenshot 2024-03-06 220354.png
      However, if I switch to other tickers, such as TSLA, FB, META, it errors out.
      Screenshot 2024-03-06 220523.png Screenshot 2024-03-06 220517.png
      I could only find about 6 or 7 tickers that did not produce errors.

      posted in Strategy help
    • RE: backtest_ml has too long a run time

      @vyacheslav_b Thank you so much. I have one more question. When I add stocks (NAS:TSLA, NAS:FB, ...), I get the following error:
      (screenshot of the error attached)
      It seems that these tickers are missing data. I tried other tickers and found some that could be added, like ADBE and GOOGL. Why does this happen, and is there a way to fix it? Thank you very much.
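
      A minimal sketch of how one might check which of the requested tickers actually come back with data, using only the load_ndx_data call already shown above; the ticker list is illustrative:

      import qnt.data as qndata

      requested = ['NAS:AAPL', 'NAS:GOOGL', 'NAS:TSLA', 'NAS:FB']
      data = qndata.stocks.load_ndx_data(tail=5 * 365, assets=requested)

      # tickers missing from this list were not loaded at all
      print("returned assets:", list(data.asset.values))

      # fraction of days with a close price, per returned asset
      close = data.sel(field="close")
      coverage = close.notnull().mean("time")
      for asset in close.asset.values:
          print(asset, float(coverage.sel(asset=asset)))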

      posted in Strategy help
    • RE: backtest_ml has too long a run time

      @vyacheslav_b I hope you can give me a code example of an LSTM that uses more than one feature in the get_features() function and more than three assets (tickers). Currently I get an error when I change the number of features and assets. Thank you so much.

      I also hope you can show me an example of converting ml_backtest to a single-pass backtest in the neural network example.

      posted in Strategy help
    • RE: backtest_ml has too long a run time

      @vyacheslav_b Thank you. Please help me with the issue RuntimeError: expand(torch.DoubleTensor{[694, 6]}, size=[694]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2).

      I have tried adding the following feature:
      Screenshot 2024-03-04 200450.png
      However, I got an error:
      (screenshots of the traceback attached)
      Please help me fix this error. I really need to resolve it so I can train with more than one feature. Thank you very much.
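
      For context, a minimal sketch of the shape rule behind this error, assuming it is raised when a [694, 6] target is assigned into a 1-D slice as in the training loop above; the numbers are illustrative:

      import torch

      # target placeholder shaped [batch, time], as in the training loop above
      target = torch.zeros(1, 694)

      # e.g. labels that accidentally kept a feature axis
      values_2d = torch.rand(694, 6, dtype=torch.float64)

      # target[0, :] has shape [694], so assigning a [694, 6] tensor triggers
      # "expand(torch.DoubleTensor{[694, 6]}, size=[694])"; reduce to 1-D first
      target[0, :] = values_2d[:, 0].float()  # one label per time step, cast to float32

      print(target.shape)  # torch.Size([1, 694])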

      posted in Strategy help