    Quantiacs Community

    antinomy

    @antinomy

    Reputation: 34 · Profile views: 31 · Posts: 65 · Followers: 1 · Following: 0

    Best posts made by antinomy

    • RE: Bollinger Bands

      @anthony_m
      Bollinger Bands are actually quite easy to calculate.
      The middle band is just the simple moving average, the default period is 20.
      For the other bands you need the standard deviation for the same period.
      The upper band is middle + multiplier * std
      The lower band is middle - multiplier * std
      Where the default for the multiplier is 2.

      There's an article on the formula for Bollinger Bands on Investopedia - they use the 'typical price', (high + low + close) / 3, but I think most people just use the close price.
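For the typical-price variant, a minimal sketch could look like this (the high/low/close series here are made up purely for illustration, not real data):

```python
import numpy as np
import pandas as pd

# hypothetical price series, just for illustration
high = pd.Series(np.linspace(101, 120, 25))
low = pd.Series(np.linspace(99, 118, 25))
close = pd.Series(np.linspace(100, 119, 25))

# Investopedia's 'typical price'
typical = (high + low + close) / 3

# same band construction as before, just on the typical price
period, multiplier = 20, 2
middle = typical.rolling(period).mean()
std = typical.rolling(period).std()
upper = middle + multiplier * std
lower = middle - multiplier * std
```

Everything downstream stays the same; only the input series changes.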

      For the code it depends if you only need the latest values or the history.
      Using pandas the code for the first alternative could be:

      def strategy(data):
          close = data.sel(field='close').copy().to_pandas().ffill().bfill().fillna(0) 
      
          # let's just use the default 20 period:
          period = 20
          sma = close.iloc[-period:].mean()
          std = close.iloc[-period:].std()
      
          # and the default multiplier of 2:
          multiplier = 2
          upper = sma + multiplier * std
          lower = sma - multiplier * std
      

      If you need more than the last values you can use pandas.rolling:

      def strategy(data):
          close = data.sel(field='close').copy().to_pandas().ffill().bfill().fillna(0)
      
          # let's just use the default 20 period:
          period = 20
          sma = close.rolling(period).mean()
          std = close.rolling(period).std()
      
          # and the default multiplier of 2:
          multiplier = 2
          upper = sma + multiplier * std
          lower = sma - multiplier * std
      
      posted in Strategy help
    • RE: Announcing the Winners of the Q14 Contest

      Thank you very much, this is awesome!
      And I also congratulate the other 2 winners!
      I got notified by e-mail 2 days ago and have been totally excited since then 🎉

      posted in News and Feature Releases
    • External Libraries

      Hello @support ,

      I've been using cvxpy in the server environment which I installed by running

      !conda install -y -c conda-forge cvxpy
      

      in init.ipynb. But whenever this environment is newly initialized, the module is gone and I have to run this cell again (which takes awfully long).

      Is this normal or is there something wrong with my environment?
      My current workaround is placing these lines before the import

      try:
          import cvxpy as cp
      except ImportError:
          import subprocess
      
          cmd = 'conda install -y -c conda-forge cvxpy'.split()
          rn = subprocess.run(cmd)
      
          import cvxpy as cp
      

      Is there a better way?
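One refinement I've considered (a sketch with a hypothetical ensure_package helper, not something from the qnt toolbox) is to only run the installer when the module is actually missing and to fail loudly if the install doesn't succeed:

```python
import importlib.util
import subprocess

def ensure_package(module_name, install_cmd):
    """Run install_cmd only if module_name isn't importable yet."""
    if importlib.util.find_spec(module_name) is not None:
        return True  # already available, nothing to do
    # check=True raises CalledProcessError instead of failing silently
    subprocess.run(install_cmd.split(), check=True)
    return False  # had to install it
```

Calling ensure_package('cvxpy', 'conda install -y -c conda-forge cvxpy') before the import would then keep the long install to the cases where it's really needed.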

      Best regards.

      posted in Support
    • Different Sharpe ratios in backtest and competition filter

      When I run my futures strategy in a notebook on the server starting 2006-01-01 I get this result:

      Check the sharpe ratio...
      Period: 2006-01-01 - 2021-03-01
      Sharpe Ratio = 1.3020322470218595

      However, it gets rejected from the competition because the Sharpe is below 1. When I click on its chart in the "Filtered" tab it shows

      Sharpe Ratio 0.85

      When I copy the code from the HTML preview of the rejected algo and paste it into a notebook, I get exactly the same result as above (Sharpe 1.3), so it doesn't seem to be a saving error.

      Why is there such a big difference? I thought the backtest results from the notebook should indicate if the strategy is eligible for the competition.
      How can I calculate the Sharpe ratio used for the competition in the notebook so I will know beforehand if the algo gets accepted or not?
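For reference, the plain annualized formula I use as a sanity check looks like this (the 252-trading-days multiplier and the toy returns are my assumptions; the platform's exact convention may well differ, which could be part of the discrepancy):

```python
import numpy as np

def sharpe_ratio(relative_returns, periods_per_year=252):
    """Annualized Sharpe: mean return over volatility, scaled to a year."""
    mean = np.mean(relative_returns) * periods_per_year
    vol = np.std(relative_returns) * np.sqrt(periods_per_year)
    return mean / vol

daily = np.array([0.01, -0.005, 0.02, 0.0])  # toy daily relative returns
ratio = sharpe_ratio(daily)
```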

      posted in Support
    • RE: The Q16 Contest is open!

      @support
      First of all, having looked at various sources for crypto data myself, I know this can be a pain in the neck, so I can appreciate the effort you took to provide it.

      I get the method for avoiding lookahead-bias by including delisted symbols. The key point would be what you mean exactly by disappeared and from where.
      Do you mean they were delisted from the exchange where the winning algos will be traded or did the data source you used just not have data?

      To name 2 examples: DASH and XMR don't have recent prices, but I don't know of an exchange they were delisted from. When I look them up on TradingView they do have prices on all the exchanges available there and are still traded with normal volumes.

      Charts for their closing price on quantiacs:

      import qnt.data as qndata
      
      data = qndata.cryptodaily_load_data(min_date='2020')
      close = data.sel(field='close').to_pandas()
      close[['DASH', 'XMR']].plot()
      

      [Figure_1.png: DASH and XMR closing prices on Quantiacs]

      On TradingView:

      [UDGfbmYT.png: DASH and XMR prices on TradingView]

      There are many reasons why we might need prices for symbols that are currently not among the top 10 in terms of market cap. An obvious one would be that they might be included in the filter again any second and algorithms do need historical data. Also, there are many ways to include symbols in computations without trading them: as indicators, to calculate market averages and so on.

      posted in News and Feature Releases
    • RE: Data for Futures

      @support Yes, it's working now. Thanks!

      posted in News and Feature Releases
    • RE: The Q16 Contest is open!

      @support
      I totally agree that the indicator usage is not trivial at all regarding lookahead-bias, still trying to wrap my head around it 😉
      The symbol list alone could already lead to lookahead bias - in theory; I don't have a realistic example.
      Because if a symbol is in the dataset, the algo could know it will be among the top 10 at some point and will thus probably go up in price.
      I guess we really need to be extra careful avoiding these pitfalls, but they might also become apparent after the submission...

      From what I understand this contest is kind of a trial run for the stocks contest, so may I make a suggestion?
      On Quantopian there was data for around 8000 assets, including non-stocks like ETFs but for the daily contest for instance, the symbols had to be in the subset of liquid stocks they defined (around 2000 I think).

      The scenarios

      1. there's a price for that stock, so it will become large some day
      2. it's included in the asset list, so it must go up some day

      were not really a problem because there was no way to infer from prices or the symbol being present that it will be included in the filter some day.

      Maybe you could do something like that, too?
      It doesn't have to be those kind of numbers, but simply providing data for a larger set of assets containing symbols which will never be included in the filter could avoid this problem (plus of course including delisted stocks).

      For this contest I think your suggestion to retroactively fill the data if a symbol makes it on top again is a good idea.

      posted in News and Feature Releases
    • RE: The Q16 Contest is open!

      @support In my posts I was merely thinking about unintentional lookahead bias because when it comes to the intentional kind, there are lots of ways to do that and I believe you never can make all of them impossible.
      But I think that's what the rules are for and the live test is also a good measure to call out intentional or unintentional lookahead bias as well as simple innocent overfitting.

      To clarify the Quantopian example a bit, I don't think what I described was meant to prevent lookahead bias. The 8000-something symbols were simply all they had, and the rules for the tradable universe were publicly available (QTradableStocksUS on archive.org). I just thought providing data for a larger set than what's actually tradable would make the scenarios I mentioned less likely. For that purpose I think both sets could also be openly defined: say the larger one has the top 100 symbols in terms of market cap, dollar volume or whatever, and the tradable ones could be the top 10 out of them by the same measure.

      On the other hand, I still don't know if those scenarios could become a real problem. Because what good is this foreknowledge if you can't trade those symbols yet? And after they're in the top 10 it would be legitimate to use the fact that they just entered, because we would also have known that at the time in real life.

      posted in News and Feature Releases
    • RE: The Quantiacs Referral Program

      @news-quantiacs
      Hello,
      about that link, the important part is the one that starts with the question mark, with utm_medium being our unique identifier, right?
      So, can we change the link to point to the contest description instead of the login-page, like this?
      https://quantiacs.com/contest?utm_source=reference&utm_medium=19014
      Then interested people could first read more details about the contest before signing up...
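Just to check my own assumption about where the identifier lives, the query part of the link can be inspected with the standard library:

```python
from urllib.parse import urlparse, parse_qs

url = 'https://quantiacs.com/contest?utm_source=reference&utm_medium=19014'
parsed = urlparse(url)
params = parse_qs(parsed.query)

# the path can change while the identifier stays intact
print(parsed.path)             # /contest
print(params['utm_medium'])    # ['19014']
```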

      posted in News and Feature Releases
    • RE: Optimizer for simple MA crypto strategy

      @magenta-grimer Your strategy selects only the last MA values but you need them all to get full backtest results (single pass). By changing these lines

          ma_slow= qnta.lwma(close, ma_slow_param)#.isel(time=-1)   
          ma_fast= qnta.lwma(close, ma_fast_param)#.isel(time=-1)
      

      I get

      ---
      Best iteration:

      {'args': {'ma_slow_param': 125, 'ma_fast_param': 10},
      'result': {'equity': 149.87344883417938,
      'relative_return': -0.07047308274265918,
      'volatility': 0.6090118757480503,
      'underwater': -0.07047308274265918,
      'max_drawdown': -0.7355866115241121,
      'sharpe_ratio': 0.9776088443081092,
      'mean_return': 0.5953753960199653,
      'bias': -1.0,
      'instruments': 1.0,
      'avg_turnover': 0.06404024691932551,
      'avg_holding_time': 72.70270270270271},
      'weight': 0.9776088443081092,
      'exception': None}

      posted in Strategy help

    Latest posts made by antinomy

    • RE: Some top S&P 500 companies are not available?

      The symbols are all in there, but if they are listed on NYSE you have to prepend NYS:, not NAS:, to the symbol. Also, I believe by 'BKR.B' you mean 'BRK.B'.

      [sym for sym in data.asset.values if any(map(lambda x: x in sym, ['JPM', 'LLY', 'BRK.B']))]
      

      ['NYS:BRK.B', 'NYS:JPM', 'NYS:LLY']

      You can also search for symbols in qndata.stocks_load_spx_list() and get a little more info, like this:

      syms = qndata.stocks_load_spx_list()
      [sym for sym in syms if sym['symbol'] in ['JPM', 'LLY', 'BRK.B']]
      
      [{'name': 'Berkshire Hathaway Inc',
        'sector': 'Finance',
        'symbol': 'BRK.B',
        'exchange': 'NYS',
        'id': 'NYS:BRK.B',
        'cik': '1067983',
        'FIGI': 'tts-824192'},
       {'name': 'JP Morgan Chase and Co',
        'sector': 'Finance',
        'symbol': 'JPM',
        'exchange': 'NYS',
        'id': 'NYS:JPM',
        'cik': '19617',
        'FIGI': 'tts-825840'},
       {'name': 'Eli Lilly and Co',
        'sector': 'Healthcare',
        'symbol': 'LLY',
        'exchange': 'NYS',
        'id': 'NYS:LLY',
        'cik': '59478',
        'FIGI': 'tts-820450'}]
      

      The value for the key 'id' is what you will find in data.asset

      posted in Support
    • RE: toolbox not working in colab

      I got the same error after installing qnt locally with pip.
      There is indeed a circular import in the current GitHub repo for the toolbox, introduced by this commit:
      https://github.com/quantiacs/toolbox/commit/78beafa93775f33606156169b3e6b8f995804151#diff-89350fe373763b439e4697f9b11cceb811b4a3f0adc7a655707a936ce5646c01R6-R10
      when some of the imports in output.py, which were inside of functions before, were moved to the top level.
      Now output imports from stats and stats imports from output.

      @support Can you please have a look?

      @alexeigor @omohyoid
      The conda version of qnt doesn't seem to be affected, so if that's an option for you, install that one instead.
      Otherwise we can use the git version previous to the commit above:

      pip uninstall qnt
      pip install git+https://github.com/quantiacs/toolbox.git@a1e6351446cd936532af185fb519ef92f5b1ac6d
      
      posted in Support
    • RE: Error for importing quantiacs module

      @steel-camel

      !pip install --force-reinstall python_utils
      

      should fix the issue.
      But I have no idea what would have caused it; the line in converters.py is totally messed up. The only thing that comes to my mind is a cat on the keyboard 😉

      posted in Support
    • RE: Why .interpolate_na dosen't work well ?

      @cyan-gloom
      interpolate_na() only eliminates NaNs between 2 valid data points. Take a look at this example:

      import qnt.data as qndata
      import numpy as np
      
      stocks = qndata.stocks_load_ndx_data()
      sample = stocks[:, -5:, -6:] # The latest 5 dates for the last 6 assets
      
      print(sample.sel(field='close').to_pandas())
      """
      asset       NYS:NCLH  NYS:ORCL  NYS:PRGO  NYS:QGEN  NYS:RHT  NYS:TEVA
      time                                                                 
      2023-05-12     13.24     97.85     35.21     45.09      NaN      8.03
      2023-05-15     13.71     97.26     34.23     45.36      NaN      8.07
      2023-05-16     13.48     98.25     32.84     45.25      NaN      8.13
      2023-05-17     14.35     99.77     32.86     44.95      NaN      8.13
      2023-05-18     14.53    102.34     33.43     44.92      NaN      8.26
      """
      
      # Let's add some more NaN values:
      sample.values[3, (1,3), 0] = np.nan
      sample.values[3, 1:4, 1] = np.nan
      sample.values[3, :2, 2] = np.nan
      sample.values[3, 2:, 3] = np.nan
      sample.values[3, :-1, 5] = np.nan
      print(sample.sel(field='close').to_pandas())
      """
      asset       NYS:NCLH  NYS:ORCL  NYS:PRGO  NYS:QGEN  NYS:RHT  NYS:TEVA
      time                                                                 
      2023-05-12     13.24     97.85       NaN     45.09      NaN       NaN
      2023-05-15       NaN       NaN       NaN     45.36      NaN       NaN
      2023-05-16     13.48       NaN     32.84       NaN      NaN       NaN
      2023-05-17       NaN       NaN     32.86       NaN      NaN       NaN
      2023-05-18     14.53    102.34     33.43       NaN      NaN      8.26
      """
      
      # Interpolate the NaN values:
      print(sample.interpolate_na('time').sel(field='close').to_pandas())
      """
      asset       NYS:NCLH    NYS:ORCL  NYS:PRGO  NYS:QGEN  NYS:RHT  NYS:TEVA
      time                                                                   
      2023-05-12    13.240   97.850000       NaN     45.09      NaN       NaN
      2023-05-15    13.420  100.095000       NaN     45.36      NaN       NaN
      2023-05-16    13.480  100.843333     32.84       NaN      NaN       NaN
      2023-05-17    14.005  101.591667     32.86       NaN      NaN       NaN
      2023-05-18    14.530  102.340000     33.43       NaN      NaN      8.26
      """
      

      As you can see, only the NaNs in the first 2 columns are being replaced. The others remain untouched and might be dropped when you use dropna().

      Another thing you should keep in mind is that you might introduce lookahead bias with interpolation, e.g. in a single-run backtest. In my example for instance (pretend the NaNs I added were already in the data) you would know on 2023-05-15 that ORCL will rise, when in reality you would first know that on 2023-05-18.
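If that matters for your strategy, a forward fill only uses past values. Here's a small pandas illustration of the difference, on my own toy series (not the qnt data):

```python
import numpy as np
import pandas as pd

# the ORCL-style gap: known value, three NaNs, then a later known value
s = pd.Series([97.85, np.nan, np.nan, np.nan, 102.34])

# interpolation fills the gap using the *future* value 102.34 -> lookahead
interpolated = s.interpolate()

# forward fill only repeats the last known past value -> no lookahead
forward_filled = s.ffill()

print(interpolated.tolist())
print(forward_filled.tolist())
```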

      posted in Support
    • RE: How to fix this error

      Assuming whatever train is has a similar structure to the usual stock data, I get the same error as you with:

      import itertools
      import qnt.data as qndata
      
      stocks = qndata.stocks_load_ndx_data(tail=100)
      
      for comb in itertools.combinations(stocks.asset, 2):
          print(stocks.sel(asset=[comb]))
      

      There are 2 things to consider:

      1. comb is a tuple and you can't use tuples as a value for the asset argument. You are putting brackets around it, but that gives you a list with one element which is a tuple, hence the error about setting an array element with a sequence. Using stocks.sel(asset=list(comb)) instead resolves this issue, but then you'll get an index error, which leads to the second point:
      2. each element in comb is a DataArray and cannot be used as an index element to select from the data. You want the string values instead, for this you can iterate over asset.values for instance.

      My example works when the loop looks like this:

      for comb in itertools.combinations(stocks.asset.values, 2):
          print(stocks.sel(asset=list(comb)))
      
      posted in Support
    • RE: Python

      I'm a huge fan of Sentdex, he really taught me a lot about Python in his tutorials.
      Have a look at his website and his YouTube channel; for instance there's a tutorial for Python beginners.

      posted in General Discussion
    • RE: Local Development with Notifications

      It's safe to ignore these notices, but if they bother you, you can set the variables together with your API key using the defaults and the messages will go away:

      import os
      
      os.environ['API_KEY'] = 'YOUR-API-KEY'
      os.environ['DATA_BASE_URL'] = 'https://data-api.quantiacs.io/'
      os.environ['CACHE_RETENTION'] = '7'
      os.environ['CACHE_DIR'] = 'data-cache'
      
      posted in Support
    • Fundamental Data

      Hello @support
      Could you please add CIKs to the NASDAQ100 stock list?
      In order to load fundamental data from secgov we need the CIKs for the stocks but they're currently not in the list we get from qnt.data.stocks_load_ndx_list().
      Although it is still possible to get fundamentals using qnt.data.stocks_load_list(), it takes a bit of acrobatics, like this for instance:

      import pandas as pd
      import qnt.data as qndata
      
      
      stocks = qndata.stocks_load_ndx_data()
      df_ndx = pd.DataFrame(qndata.stocks_load_ndx_list()).set_index('symbol')
      df_all = pd.DataFrame(qndata.stocks_load_list()).set_index('symbol')
      idx = sorted(set(df_ndx.index) & set(df_all.index))
      df = df_ndx.loc[idx]
      df['cik'] = df_all.cik[idx]
      symbols = list(df.reset_index().T.to_dict().values())
      fundamentals = qndata.secgov_load_indicators(symbols, stocks.time)
      
      

      It would be nice if we could get them with just 2 lines like so:

      stocks = qndata.stocks_load_ndx_data()
      fundamentals = qndata.secgov_load_indicators(qndata.stocks_load_ndx_list(), stocks.time)
      
      

      Also, the workaround doesn't work locally because qndata.stocks_load_list() seems to return the same list as qndata.stocks_load_ndx_list().

      Thanks in advance!

      posted in Support
    • RE: Local Development Error "No module named 'qnt'"

      @eddiee Try step 4 without quotes; this should start Jupyter Notebook. And if that's your real API key we see in the image, delete your last post. It's a bad idea to post it in a public forum 😉

      posted in Support
    • RE: Q17 Neural Networks Algo Template; is there an error in train_model()?

      Yes, I noticed that too. And after fixing it the backtest takes forever...
      Another thing to consider is that it redefines the model with each training, but I believe you can retrain already trained NNs with new data so they learn based on what they previously learned.
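The retraining idea can be illustrated with a toy gradient-descent "model" (plain numpy, not the template's actual network): keeping the learned weight between training calls makes each call start from what was already learned instead of from scratch.

```python
import numpy as np

def train(w, x, y, epochs=50, lr=0.05):
    """A few gradient-descent steps on MSE for the toy model y ≈ w * x."""
    for _ in range(epochs):
        grad = np.mean(2 * (w * x - y) * x)  # d/dw of mean((w*x - y)**2)
        w -= lr * grad
    return w

x = np.linspace(0, 1, 50)
y = 2.0 * x  # the 'true' relationship the model should learn

w = 0.0                      # initial weight, like a freshly defined model
w = train(w, x, y)           # first training call
err_first = abs(w - 2.0)
w = train(w, x, y)           # retrain from the learned weight, not from 0.0
err_second = abs(w - 2.0)
```

In the template this would mean defining the model once outside the training function and only running the fit step inside it, so previous weights carry over between retrainings.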

      posted in Strategy help
    Copyright © 2014 - 2021 Quantiacs LLC.