Python Algorithmic Trading Practical Guide: Backtesting Frameworks, Strategy Development, and Risk Management


Introduction

Algorithmic trading is a systematic investment approach that automatically executes trades based on predefined rules. While it offers the advantage of executing consistent strategies without emotional interference, poorly designed strategies can cause significant losses in live markets.

Python has established itself as the primary language for quantitative trading thanks to its rich financial library ecosystem, data analysis tools, and easy prototyping. This guide covers the entire algorithmic trading pipeline from selecting backtesting frameworks to strategy development, risk management, and live trading considerations.

Disclaimer: This article is written for educational purposes only and does not constitute investment advice. Actual trading involves significant risk.

Algorithmic Trading Overview

Systematic vs Discretionary Trading

| Category | Systematic | Discretionary |
| --- | --- | --- |
| Decision Making | Algorithm/rule-based | Human judgment |
| Emotional Impact | None | High |
| Speed | Millisecond execution possible | Seconds to minutes |
| Scalability | Can manage thousands of instruments | Limited |
| Backtesting | Systematic verification possible | Subjective evaluation |
| Adaptability | Requires rule changes | Flexible response |
| Development Cost | High initial investment | Low |

Algorithmic Trading Pipeline

  1. Data Collection: Acquire market data (prices, volumes, financial data)
  2. Strategy Development: Design trade signal logic
  3. Backtesting: Validate strategy with historical data
  4. Optimization: Parameter tuning and walk-forward validation
  5. Risk Management: Position sizing, stop-loss/take-profit setup
  6. Live Deployment: Paper trading followed by live capital deployment
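The first three stages above can be sketched end to end in a few lines. This is purely illustrative: the data is a synthetic random walk, and the function names and stub logic are assumptions, not part of any library.

```python
import numpy as np
import pandas as pd

def collect_data() -> pd.DataFrame:
    """1. Data collection (synthetic random-walk prices for illustration)."""
    rng = np.random.default_rng(42)
    prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))
    return pd.DataFrame({"Close": prices})

def generate_signals(data: pd.DataFrame, fast: int = 10, slow: int = 30) -> pd.Series:
    """2. Strategy: long when the fast MA is above the slow MA, flat otherwise."""
    fast_ma = data["Close"].rolling(fast).mean()
    slow_ma = data["Close"].rolling(slow).mean()
    return (fast_ma > slow_ma).astype(int)

def backtest(data: pd.DataFrame, signals: pd.Series, cost: float = 0.001) -> pd.Series:
    """3. Backtest: next-bar returns while in position, minus costs on position changes."""
    returns = data["Close"].pct_change().shift(-1)
    trades = signals.diff().abs().fillna(0)
    return (signals * returns - trades * cost).dropna()

data = collect_data()
strategy_returns = backtest(data, generate_signals(data))
print(f"Total return: {(1 + strategy_returns).prod() - 1:.2%}")
```

Steps 4-6 (optimization, risk management, deployment) are covered with real frameworks in the sections below.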

Backtesting Framework Comparison

| Framework | Speed | Ease of Use | Feature Range | Live Trading | Community |
| --- | --- | --- | --- | --- | --- |
| Backtesting.py | Fast | Very easy | Basic | Not supported | Moderate |
| Zipline | Moderate | Moderate | Extensive | Limited | Active |
| vectorbt | Very fast | Moderate | Advanced analytics | Not supported | Active |
| Backtrader | Moderate | Moderate | Very extensive | Supported (IB) | Active |
| QuantConnect | Fast | Easy | Very extensive | Full support | Very active |

Framework Selection Guide

  • Rapid prototyping: Backtesting.py (validate strategies in a few lines of code)
  • Large-scale vector operations: vectorbt (NumPy-based high-speed processing)
  • Live trading integration: Backtrader (Interactive Brokers integration)
  • Cloud-based integrated environment: QuantConnect (data + execution integrated)

Data Collection

Market Data Collection with yfinance

import yfinance as yf
import pandas as pd
from datetime import datetime, timedelta

def fetch_market_data(
    tickers: list,
    start_date: str = "2020-01-01",
    end_date: str = None,
    interval: str = "1d",
) -> dict:
    """Collect market data"""
    if end_date is None:
        end_date = datetime.now().strftime("%Y-%m-%d")

    data = {}
    for ticker in tickers:
        try:
            df = yf.download(
                ticker,
                start=start_date,
                end=end_date,
                interval=interval,
                progress=False,
            )
            if not df.empty:
                data[ticker] = df
                print(f"{ticker}: {len(df)} rows loaded ({df.index[0]} ~ {df.index[-1]})")
            else:
                print(f"{ticker}: No data available")
        except Exception as e:
            print(f"{ticker}: Error - {e}")

    return data

# Collect data
tickers = ["AAPL", "MSFT", "GOOGL", "AMZN", "SPY"]
market_data = fetch_market_data(tickers, start_date="2020-01-01")

# Verify data
aapl = market_data["AAPL"]
print(f"\nAAPL Data Summary:")
print(f"  Period: {aapl.index[0]} ~ {aapl.index[-1]}")
print(f"  Data points: {len(aapl)}")
print(f"  Columns: {list(aapl.columns)}")

Using the Alpha Vantage API

import requests
import pandas as pd

class AlphaVantageClient:
    """Alpha Vantage API client"""

    BASE_URL = "https://www.alphavantage.co/query"

    def __init__(self, api_key: str):
        self.api_key = api_key

    def get_daily(self, symbol: str, outputsize: str = "full") -> pd.DataFrame:
        """Fetch daily OHLCV data"""
        params = {
            "function": "TIME_SERIES_DAILY_ADJUSTED",
            "symbol": symbol,
            "outputsize": outputsize,
            "apikey": self.api_key,
        }
        response = requests.get(self.BASE_URL, params=params)
        data = response.json()

        if "Time Series (Daily)" not in data:
            raise ValueError(f"API error: {data.get('Note', data.get('Error Message', 'Unknown'))}")

        df = pd.DataFrame.from_dict(data["Time Series (Daily)"], orient="index")
        df.columns = ["Open", "High", "Low", "Close", "Adj Close", "Volume", "Dividend", "Split"]
        df = df.astype(float)
        df.index = pd.to_datetime(df.index)
        df = df.sort_index()

        return df

    def get_intraday(self, symbol: str, interval: str = "5min") -> pd.DataFrame:
        """Fetch intraday data"""
        params = {
            "function": "TIME_SERIES_INTRADAY",
            "symbol": symbol,
            "interval": interval,
            "outputsize": "full",
            "apikey": self.api_key,
        }
        response = requests.get(self.BASE_URL, params=params)
        data = response.json()

        time_series_key = f"Time Series ({interval})"
        if time_series_key not in data:
            raise ValueError(f"API error: {data}")

        df = pd.DataFrame.from_dict(data[time_series_key], orient="index")
        df.columns = ["Open", "High", "Low", "Close", "Volume"]
        df = df.astype(float)
        df.index = pd.to_datetime(df.index)
        df = df.sort_index()

        return df

# Usage example
# client = AlphaVantageClient(api_key="YOUR_API_KEY")
# daily_data = client.get_daily("AAPL")

Strategy Implementation

Strategy 1: Moving Average Crossover

import pandas as pd
import numpy as np
import yfinance as yf
from backtesting import Backtest, Strategy
from backtesting.lib import crossover

class MovingAverageCrossover(Strategy):
    """Moving Average Crossover Strategy
    - Buy when the fast MA crosses above the slow MA
    - Sell when the fast MA crosses below the slow MA
    """
    fast_period = 10  # Fast MA period
    slow_period = 30  # Slow MA period

    def init(self):
        close = self.data.Close
        self.fast_ma = self.I(lambda x: pd.Series(x).rolling(self.fast_period).mean(), close)
        self.slow_ma = self.I(lambda x: pd.Series(x).rolling(self.slow_period).mean(), close)

    def next(self):
        # Golden Cross: Buy
        if crossover(self.fast_ma, self.slow_ma):
            if not self.position:
                self.buy()

        # Death Cross: Sell
        elif crossover(self.slow_ma, self.fast_ma):
            if self.position:
                self.position.close()

# Prepare data
data = yf.download("AAPL", start="2020-01-01", end="2025-12-31", progress=False)
if isinstance(data.columns, pd.MultiIndex):
    data.columns = data.columns.droplevel(1)  # flatten the MultiIndex columns recent yfinance versions return

# Run backtest
bt = Backtest(
    data,
    MovingAverageCrossover,
    cash=100000,
    commission=0.001,  # 0.1% commission
    exclusive_orders=True,
)

results = bt.run()
print("=== Moving Average Crossover Results ===")
print(f"Total Return: {results['Return [%]']:.2f}%")
print(f"Annual Return: {results['Return (Ann.) [%]']:.2f}%")
print(f"Sharpe Ratio: {results['Sharpe Ratio']:.2f}")
print(f"Max Drawdown: {results['Max. Drawdown [%]']:.2f}%")
print(f"Win Rate: {results['Win Rate [%]']:.2f}%")
print(f"Total Trades: {results['# Trades']}")

# Parameter optimization
optimization_results = bt.optimize(
    fast_period=range(5, 25, 5),
    slow_period=range(20, 60, 10),
    maximize="Sharpe Ratio",
    constraint=lambda p: p.fast_period < p.slow_period,
)
print(f"\nOptimal params: fast={optimization_results._strategy.fast_period}, slow={optimization_results._strategy.slow_period}")

Strategy 2: RSI Mean Reversion

class RSIMeanReversion(Strategy):
    """RSI Mean Reversion Strategy
    - Buy when RSI enters oversold zone (below 30)
    - Sell when RSI enters overbought zone (above 70)
    """
    rsi_period = 14
    rsi_oversold = 30
    rsi_overbought = 70

    def init(self):
        close = pd.Series(self.data.Close)
        delta = close.diff()
        gain = delta.where(delta > 0, 0.0)
        loss = (-delta).where(delta < 0, 0.0)

        avg_gain = gain.rolling(window=self.rsi_period).mean()
        avg_loss = loss.rolling(window=self.rsi_period).mean()

        rs = avg_gain / avg_loss
        rsi = 100 - (100 / (1 + rs))

        self.rsi = self.I(lambda: rsi, name="RSI")

    def next(self):
        if self.rsi[-1] < self.rsi_oversold:
            if not self.position:
                self.buy()
        elif self.rsi[-1] > self.rsi_overbought:
            if self.position:
                self.position.close()

# Run backtest
bt_rsi = Backtest(
    data,
    RSIMeanReversion,
    cash=100000,
    commission=0.001,
    exclusive_orders=True,
)

results_rsi = bt_rsi.run()
print("=== RSI Mean Reversion Results ===")
print(f"Total Return: {results_rsi['Return [%]']:.2f}%")
print(f"Sharpe Ratio: {results_rsi['Sharpe Ratio']:.2f}")
print(f"Max Drawdown: {results_rsi['Max. Drawdown [%]']:.2f}%")
print(f"Win Rate: {results_rsi['Win Rate [%]']:.2f}%")

Strategy 3: Bollinger Bands Breakout

class BollingerBandsBreakout(Strategy):
    """Bollinger Bands Strategy
    Note: despite the "Breakout" class name, this is a mean-reversion setup
    that buys weakness rather than strength:
    - Buy when price touches the lower band (expecting mean reversion)
    - Sell when price touches the upper band
    - Stop-loss: 2% below entry price
    """
    bb_period = 20
    bb_std = 2.0
    stop_loss_pct = 0.02

    def init(self):
        close = pd.Series(self.data.Close)
        sma = close.rolling(self.bb_period).mean()
        std = close.rolling(self.bb_period).std()
        self.sma = self.I(lambda: sma, name="SMA")
        self.upper = self.I(lambda: sma + self.bb_std * std, name="Upper")
        self.lower = self.I(lambda: sma - self.bb_std * std, name="Lower")

    def next(self):
        price = self.data.Close[-1]

        # Touch lower band: Buy
        if price <= self.lower[-1]:
            if not self.position:
                self.buy(sl=price * (1 - self.stop_loss_pct))

        # Touch upper band: Sell
        elif price >= self.upper[-1]:
            if self.position:
                self.position.close()

# Run backtest
bt_bb = Backtest(
    data,
    BollingerBandsBreakout,
    cash=100000,
    commission=0.001,
    exclusive_orders=True,
)

results_bb = bt_bb.run()
print("=== Bollinger Bands Breakout Results ===")
print(f"Total Return: {results_bb['Return [%]']:.2f}%")
print(f"Sharpe Ratio: {results_bb['Sharpe Ratio']:.2f}")
print(f"Max Drawdown: {results_bb['Max. Drawdown [%]']:.2f}%")
print(f"Win Rate: {results_bb['Win Rate [%]']:.2f}%")

Risk Management Metrics

Core Performance Metrics Calculation

import numpy as np
import pandas as pd
import yfinance as yf

class RiskMetrics:
    """Risk management metrics calculator"""

    def __init__(self, returns: pd.Series, risk_free_rate: float = 0.04):
        """
        Args:
            returns: Daily returns series
            risk_free_rate: Risk-free rate (annualized, default 4%)
        """
        self.returns = returns.dropna()
        self.risk_free_rate = risk_free_rate
        self.daily_rf = (1 + risk_free_rate) ** (1/252) - 1

    def sharpe_ratio(self) -> float:
        """Calculate Sharpe Ratio"""
        excess_returns = self.returns - self.daily_rf
        if excess_returns.std() == 0:
            return 0.0
        return np.sqrt(252) * excess_returns.mean() / excess_returns.std()

    def sortino_ratio(self) -> float:
        """Calculate Sortino Ratio (considers only downside risk)"""
        excess_returns = self.returns - self.daily_rf
        downside_returns = excess_returns[excess_returns < 0]
        if len(downside_returns) == 0 or downside_returns.std() == 0:
            return 0.0
        downside_std = downside_returns.std()
        return np.sqrt(252) * excess_returns.mean() / downside_std

    def maximum_drawdown(self) -> float:
        """Calculate Maximum Drawdown (MDD)"""
        cumulative = (1 + self.returns).cumprod()
        peak = cumulative.expanding().max()
        drawdown = (cumulative - peak) / peak
        return drawdown.min()

    def value_at_risk(self, confidence: float = 0.95) -> float:
        """Calculate VaR (Value at Risk) - Historical method"""
        return np.percentile(self.returns, (1 - confidence) * 100)

    def conditional_var(self, confidence: float = 0.95) -> float:
        """Calculate CVaR (Conditional VaR)"""
        var = self.value_at_risk(confidence)
        return self.returns[self.returns <= var].mean()

    def calmar_ratio(self) -> float:
        """Calculate Calmar Ratio (Annual Return / MDD)"""
        annual_return = (1 + self.returns.mean()) ** 252 - 1
        mdd = abs(self.maximum_drawdown())
        if mdd == 0:
            return 0.0
        return annual_return / mdd

    def summary(self) -> dict:
        """Full risk metrics summary"""
        annual_return = (1 + self.returns.mean()) ** 252 - 1
        annual_volatility = self.returns.std() * np.sqrt(252)

        return {
            "Annual Return": f"{annual_return:.2%}",
            "Annual Volatility": f"{annual_volatility:.2%}",
            "Sharpe Ratio": f"{self.sharpe_ratio():.2f}",
            "Sortino Ratio": f"{self.sortino_ratio():.2f}",
            "Maximum Drawdown": f"{self.maximum_drawdown():.2%}",
            "VaR (95%)": f"{self.value_at_risk():.2%}",
            "CVaR (95%)": f"{self.conditional_var():.2%}",
            "Calmar Ratio": f"{self.calmar_ratio():.2f}",
            "Total Trading Days": len(self.returns),
            "Positive Return Days": f"{(self.returns > 0).sum()} ({(self.returns > 0).mean():.1%})",
        }

# Usage example
# Calculate daily returns for SPY
spy = yf.download("SPY", start="2020-01-01", end="2025-12-31", progress=False)
daily_returns = spy["Close"].pct_change().dropna()

metrics = RiskMetrics(daily_returns.squeeze(), risk_free_rate=0.04)
summary = metrics.summary()

print("=== SPY Risk Metrics ===")
for key, value in summary.items():
    print(f"  {key}: {value}")

Position Sizing

Kelly Criterion

class PositionSizer:
    """Position sizing algorithms"""

    @staticmethod
    def kelly_criterion(win_rate: float, avg_win: float, avg_loss: float) -> float:
        """Calculate optimal position size using Kelly Criterion

        Args:
            win_rate: Win rate (0-1)
            avg_win: Average gain rate (positive)
            avg_loss: Average loss rate (positive)

        Returns:
            Optimal betting fraction (0-1)
        """
        if avg_loss == 0:
            return 0.0

        # Kelly Formula: f = (bp - q) / b
        # b = avg_win / avg_loss (odds ratio)
        # p = win_rate, q = 1 - win_rate
        b = avg_win / avg_loss
        p = win_rate
        q = 1 - p

        kelly = (b * p - q) / b

        # No bet if negative
        return max(0.0, kelly)

    @staticmethod
    def half_kelly(win_rate: float, avg_win: float, avg_loss: float) -> float:
        """Half Kelly: Conservative approach using half the Kelly fraction"""
        full_kelly = PositionSizer.kelly_criterion(win_rate, avg_win, avg_loss)
        return full_kelly / 2

    @staticmethod
    def fixed_fractional(equity: float, risk_per_trade: float,
                          entry_price: float, stop_loss_price: float) -> int:
        """Fixed Fractional position sizing

        Args:
            equity: Current capital
            risk_per_trade: Risk per trade (e.g., 0.02 = 2%)
            entry_price: Entry price
            stop_loss_price: Stop-loss price

        Returns:
            Number of shares to buy
        """
        risk_amount = equity * risk_per_trade
        risk_per_share = abs(entry_price - stop_loss_price)

        if risk_per_share == 0:
            return 0

        shares = int(risk_amount / risk_per_share)
        return max(0, shares)

    @staticmethod
    def volatility_based(equity: float, target_volatility: float,
                          asset_volatility: float) -> float:
        """Volatility-based position sizing

        Args:
            equity: Current capital
            target_volatility: Target portfolio volatility (annualized)
            asset_volatility: Asset volatility (annualized)

        Returns:
            Position weight (0-1)
        """
        if asset_volatility == 0:
            return 0.0

        weight = target_volatility / asset_volatility
        return min(weight, 1.0)  # Max 100%

# Usage example
sizer = PositionSizer()

# Kelly Criterion calculation
win_rate = 0.55
avg_win = 0.03   # Average 3% gain
avg_loss = 0.02  # Average 2% loss

kelly = sizer.kelly_criterion(win_rate, avg_win, avg_loss)
half = sizer.half_kelly(win_rate, avg_win, avg_loss)
print(f"Kelly Criterion: {kelly:.2%}")
print(f"Half Kelly: {half:.2%}")

# Fixed fractional position sizing
equity = 100000
entry = 150.0
stop_loss = 147.0
shares = sizer.fixed_fractional(equity, 0.02, entry, stop_loss)
print(f"Shares to buy: {shares} shares (entry: {entry}, stop-loss: {stop_loss})")

Walk-Forward Optimization

Walk-Forward Analysis Implementation

import pandas as pd
import numpy as np
from backtesting import Backtest

class WalkForwardOptimizer:
    """Walk-Forward Optimizer"""

    def __init__(self, data: pd.DataFrame, strategy_class,
                 train_period: int = 252, test_period: int = 63):
        """
        Args:
            data: OHLCV data
            strategy_class: Strategy class
            train_period: Training period (trading days, default 1 year)
            test_period: Testing period (trading days, default 3 months)
        """
        self.data = data
        self.strategy_class = strategy_class
        self.train_period = train_period
        self.test_period = test_period

    def run(self, optimization_params: dict, maximize: str = "Sharpe Ratio") -> list:
        """Execute walk-forward analysis"""
        results = []
        total_days = len(self.data)
        start_idx = 0

        fold = 1
        while start_idx + self.train_period + self.test_period <= total_days:
            train_end = start_idx + self.train_period
            test_end = train_end + self.test_period

            train_data = self.data.iloc[start_idx:train_end]
            test_data = self.data.iloc[train_end:test_end]

            # Optimize parameters on training period
            bt_train = Backtest(
                train_data, self.strategy_class,
                cash=100000, commission=0.001,
            )
            opt_result = bt_train.optimize(
                **optimization_params,
                maximize=maximize,
            )

            # Extract optimized parameters
            best_params = {}
            for param_name in optimization_params:
                best_params[param_name] = getattr(opt_result._strategy, param_name)

            # Validate on test period
            bt_test = Backtest(
                test_data, self.strategy_class,
                cash=100000, commission=0.001,
            )
            # Run test with optimized parameters
            test_result = bt_test.run(**best_params)

            fold_result = {
                "fold": fold,
                "train_start": train_data.index[0],
                "train_end": train_data.index[-1],
                "test_start": test_data.index[0],
                "test_end": test_data.index[-1],
                "best_params": best_params,
                "train_return": opt_result["Return [%]"],
                "test_return": test_result["Return [%]"],
                "test_sharpe": test_result["Sharpe Ratio"],
                "test_mdd": test_result["Max. Drawdown [%]"],
            }
            results.append(fold_result)

            print(f"Fold {fold}: Train Return={fold_result['train_return']:.2f}%, "
                  f"Test Return={fold_result['test_return']:.2f}%, "
                  f"Params={best_params}")

            start_idx += self.test_period
            fold += 1

        return results

    def summary(self, results: list) -> dict:
        """Walk-forward results summary"""
        test_returns = [r["test_return"] for r in results]
        test_sharpes = [r["test_sharpe"] for r in results]

        return {
            "Total Folds": len(results),
            "Avg Test Return": f"{np.mean(test_returns):.2f}%",
            "Test Return Std Dev": f"{np.std(test_returns):.2f}%",
            "Positive Return Fold Ratio": f"{sum(1 for r in test_returns if r > 0) / len(test_returns):.1%}",
            "Avg Test Sharpe": f"{np.mean(test_sharpes):.2f}",
        }

# Usage example
# wfo = WalkForwardOptimizer(data, MovingAverageCrossover)
# results = wfo.run(
#     optimization_params={
#         "fast_period": range(5, 25, 5),
#         "slow_period": range(20, 60, 10),
#     },
# )
# print(wfo.summary(results))

Troubleshooting: Common Pitfalls

Overfitting

Strategies overly optimized on historical data often perform poorly in live trading.

class OverfitDetector:
    """Overfit detector"""

    @staticmethod
    def check_overfit(train_sharpe: float, test_sharpe: float,
                       threshold: float = 0.5) -> dict:
        """Determine whether overfitting has occurred

        Args:
            train_sharpe: Training period Sharpe ratio
            test_sharpe: Testing period Sharpe ratio
            threshold: Allowed performance degradation ratio

        Returns:
            Overfitting diagnosis results
        """
        if train_sharpe <= 0:
            return {"is_overfit": True, "reason": "Training period performance is negative"}

        degradation = 1 - (test_sharpe / train_sharpe)
        is_overfit = degradation > threshold

        return {
            "is_overfit": is_overfit,
            "train_sharpe": train_sharpe,
            "test_sharpe": test_sharpe,
            "performance_degradation": f"{degradation:.1%}",
            "recommendation": (
                "Overfitting suspected: Reduce parameters or extend training period"
                if is_overfit
                else "Performance difference within acceptable range"
            ),
        }

    @staticmethod
    def parameter_sensitivity(results_grid: dict) -> dict:
        """Parameter sensitivity analysis
        If performance drops sharply around optimal parameters, overfitting is likely
        """
        sharpe_values = list(results_grid.values())
        mean_sharpe = np.mean(sharpe_values)
        std_sharpe = np.std(sharpe_values)
        max_sharpe = max(sharpe_values)

        # Overfitting suspected if optimal is more than 2 std above mean
        is_sensitive = (max_sharpe - mean_sharpe) > 2 * std_sharpe

        return {
            "is_sensitive": is_sensitive,
            "max_sharpe": max_sharpe,
            "mean_sharpe": mean_sharpe,
            "std_sharpe": std_sharpe,
            "recommendation": (
                "High parameter sensitivity: overfitting risk"
                if is_sensitive
                else "Stable performance across parameters"
            ),
        }
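To make the degradation rule concrete, here is the same check inlined as a standalone function (simplified to return a boolean rather than the full diagnosis dict from `check_overfit` above):

```python
def is_overfit(train_sharpe: float, test_sharpe: float, threshold: float = 0.5) -> bool:
    """Same rule as OverfitDetector.check_overfit: flag when out-of-sample
    Sharpe degrades by more than `threshold` relative to in-sample Sharpe."""
    if train_sharpe <= 0:
        return True  # strategy never worked in-sample
    return 1 - (test_sharpe / train_sharpe) > threshold

print(is_overfit(2.1, 0.4))  # ~81% degradation: True
print(is_overfit(1.5, 1.2))  # 20% degradation: False
```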

Survivorship Bias Prevention

def check_survivorship_bias(tickers: list, start_date: str) -> dict:
    """Survivorship bias check
    Backtesting only with currently existing stocks creates survivorship bias

    Recommended: Use datasets that include delisted/merged stocks
    """
    warnings = []

    # Rough heuristic: if the first few tickers all resolve to live listings
    # (network call via yfinance), the universe likely excludes delisted names
    if all(yf.Ticker(t).info.get("marketCap", 0) > 0 for t in tickers[:5]):
        warnings.append(
            "Only currently listed stocks are included. "
            "Delisted or merged stocks from the past are missing, "
            "which may overestimate returns"
        )

    return {
        "ticker_count": len(tickers),
        "warnings": warnings,
        "recommendation": "Use point-in-time datasets (e.g., CRSP, Sharadar)",
    }

Look-Ahead Bias Prevention

def validate_no_lookahead(strategy_code: str) -> list:
    """Look-ahead bias check (static code analysis)"""
    warnings = []

    # Check for patterns that reference future data
    dangerous_patterns = [
        ("shift(-", "Future data reference detected via shift(-N)"),
        (".iloc[-1]", "Last row reference - may be a future reference depending on context"),
        ("resample", "Resampling may include future data"),
    ]

    for pattern, description in dangerous_patterns:
        if pattern in strategy_code:
            warnings.append(f"Warning: {description} - '{pattern}' found")

    if not warnings:
        warnings.append("No explicit look-ahead bias patterns detected")

    return warnings
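A quick standalone demonstration of the same pattern scan, run on a deliberately leaky strategy line (the scanned string is a made-up example):

```python
# A deliberately leaky strategy line: tomorrow's close decides today's signal
suspect = "df['signal'] = (df['Close'].shift(-1) > df['Close']).astype(int)"

patterns = {
    "shift(-": "future data reference via shift(-N)",
    ".iloc[-1]": "last-row reference (context-dependent)",
}
hits = [msg for pat, msg in patterns.items() if pat in suspect]
print(hits)
```

Static scans like this only catch explicit patterns; subtler leaks (e.g., normalizing with a full-sample mean) still require manual review.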

Live Trading Considerations

Slippage and Transaction Costs

Backtesting assumes ideal fill prices, but in practice slippage and transaction costs occur.

class RealisticBacktestConfig:
    """Realistic backtest configuration"""

    @staticmethod
    def get_config(asset_type: str = "us_equity") -> dict:
        """Realistic transaction cost settings by asset type"""
        configs = {
            "us_equity": {
                "commission": 0.001,     # 0.1% commission
                "slippage": 0.0005,      # 0.05% slippage
                "spread": 0.0001,        # 0.01% spread (large caps)
                "market_impact": 0.0002, # 0.02% market impact
            },
            "kr_equity": {
                "commission": 0.00015,   # 0.015% (broker commission)
                "tax": 0.0018,           # 0.18% (securities transaction tax, 2026)
                "slippage": 0.001,       # 0.1% slippage
                "spread": 0.0005,        # 0.05% spread
            },
            "crypto": {
                "commission": 0.001,     # 0.1% (maker fee)
                "slippage": 0.002,       # 0.2% slippage
                "spread": 0.001,         # 0.1% spread
            },
        }
        return configs.get(asset_type, configs["us_equity"])

    @staticmethod
    def total_cost_per_trade(config: dict) -> float:
        """Calculate total cost per trade"""
        return sum(config.values())

# Check costs
for asset_type in ["us_equity", "kr_equity", "crypto"]:
    config = RealisticBacktestConfig.get_config(asset_type)
    total = RealisticBacktestConfig.total_cost_per_trade(config)
    print(f"{asset_type}: Total cost per trade approx. {total:.3%}")
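To see why these per-trade costs matter, the snippet below (plain Python, no backtesting library; the figures are illustrative) compounds a fixed cost per trade against a gross annual return:

```python
def net_return(gross_return: float, cost_per_trade: float, n_trades: int) -> float:
    """Gross annual return eroded by a fixed per-trade cost, compounded per trade."""
    return (1 + gross_return) * (1 - cost_per_trade) ** n_trades - 1

gross = 0.20    # assumed 20% gross annual return
cost = 0.0008   # ~0.08% total cost per trade (roughly the us_equity config above)
for trades in (50, 250, 1000):
    print(f"{trades} trades/year: net {net_return(gross, cost, trades):.1%}")
```

At 50 trades a year the strategy stays comfortably profitable; at 1000 trades the same gross edge turns deeply negative, which is why high-turnover strategies are so sensitive to cost assumptions.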

Operational Notes

Pre-Live Trading Checklist

  1. Paper Trading: Validate strategy with simulated trading for at least 3 months
  2. Start Small: Begin with 5-10% of total capital and scale up gradually
  3. Monitoring System: Build real-time dashboard for positions, P&L, and risk metrics
  4. Kill Switch: Automatic trading halt logic when daily loss limit is exceeded
  5. Logging: Detailed logging of all orders, fills, and errors
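Item 4 above, the kill switch, can be sketched as a simple guard checked before every order. The class, thresholds, and halt semantics here are illustrative assumptions, not a production implementation:

```python
class KillSwitch:
    """Halts trading for the session once the daily loss limit is breached (sketch)."""

    def __init__(self, start_equity: float, max_daily_loss_pct: float = 0.03):
        self.start_equity = start_equity
        self.max_daily_loss_pct = max_daily_loss_pct
        self.halted = False

    def check(self, current_equity: float) -> bool:
        """Return True if trading may continue; latch to halted once the limit is hit."""
        daily_loss = (self.start_equity - current_equity) / self.start_equity
        if daily_loss >= self.max_daily_loss_pct:
            self.halted = True  # stays halted even if equity recovers intraday
        return not self.halted

switch = KillSwitch(start_equity=100_000, max_daily_loss_pct=0.03)
print(switch.check(98_000))  # -2.0%: still trading -> True
print(switch.check(96_500))  # -3.5%: halted -> False
print(switch.check(99_000))  # recovered, but remains halted -> False
```

The latch behavior is deliberate: resuming automatically after a partial recovery would defeat the purpose of a daily loss limit.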

Psychological Factor Management

  • Do not manually override the algorithm even when it records losses
  • Anticipate and accept the gap between backtesting and live results
  • Experience the maximum drawdown scenario in advance through simulation
  • Pre-define maximum operating period and retirement criteria for each strategy

Production Checklist

  • [ ] Backtesting completed with at least 5 years of historical data
  • [ ] Walk-forward optimization passed to verify no overfitting
  • [ ] Survivorship bias and look-ahead bias checks completed
  • [ ] Realistic transaction costs applied (commissions, slippage, taxes)
  • [ ] Position sizing algorithm applied (Kelly Criterion or fixed fractional)
  • [ ] Stop-loss/take-profit logic implemented and tested
  • [ ] Paper trading completed for at least 3 months
  • [ ] Kill switch logic implemented
  • [ ] Real-time monitoring dashboard built
  • [ ] Trade logging and performance reports auto-generated
  • [ ] Network failure and API error handling logic implemented
  • [ ] Tax and regulatory requirements verified
