Python Machine Learning 17: XGBoost and LightGBM with Quantile Regression (Combining Machine Learning with Traditional Statistics)


XGBoost recently added support for quantile regression, so I put together a small code example. There is still not much work in the academic literature that combines this kind of newer machine learning method with traditional statistics, so it counts as a fairly novel angle; with a good dataset it could make for a publishable paper.


Code implementation

Import the packages

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, r2_score
    import xgboost as xgb
    import lightgbm as lgb
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

Both xgboost and lightgbm need to be installed separately; they are not part of the sklearn library. For installation instructions, see the XGBoost article in my《实用的机器学习》(Practical Machine Learning) column.
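Note that the quantile objective used below (reg:quantileerror) only exists in newer XGBoost releases (2.0 and later), so it is worth checking the installed versions first. A minimal check, assuming a pip-based environment:

    # reg:quantileerror requires XGBoost >= 2.0; upgrade with: pip install -U xgboost lightgbm
    import xgboost as xgb
    import lightgbm as lgb

    print("xgboost:", xgb.__version__)   # should be 2.0.0 or newer for quantile support
    print("lightgbm:", lgb.__version__)  # recent releases all support objective='quantile'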


Quantile regression on simulated data

First, build a simulated dataset:

    def f(x: np.ndarray) -> np.ndarray:
        return x * np.sin(x)

    rng = np.random.RandomState(2023)
    X = np.atleast_2d(rng.uniform(0, 10.0, size=1000)).T
    expected_y = f(X).ravel()
    # skewed, mean-zero noise whose scale grows with x (heteroscedastic)
    sigma = 0.5 + X.ravel() / 10.0
    noise = rng.lognormal(sigma=sigma) - np.exp(sigma**2.0 / 2.0)
    y = expected_y + noise
    print(X.shape, y.shape)

Then plot it to take a look:

    plt.figure(figsize=(6, 2), dpi=100)
    plt.scatter(X, y, s=1)
    plt.show()

Split into training and test sets:

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=rng)
    print(f"Training data shape: {X_train.shape}, Testing data shape: {X_test.shape}")

Three models are fitted and compared here: linear quantile regression, XGBoost with a quantile objective, and LightGBM with a quantile objective:

    alphas = np.arange(5, 100, 5) / 100.0
    print(alphas)
    mse_qr, mse_xgb, mse_lgb = [], [], []
    r2_qr, r2_xgb, r2_lgb = [], [], []
    qr_pred, xgb_pred, lgb_pred = {}, {}, {}

    # Train and evaluate
    for alpha in alphas:
        # Quantile Regression
        model_qr = QuantReg(y_train, sm.add_constant(X_train)).fit(q=alpha)
        model_pred = model_qr.predict(sm.add_constant(X_test))
        mse_qr.append(mean_squared_error(y_test, model_pred))
        r2_qr.append(r2_score(y_test, model_pred))
        # XGBoost
        model_xgb = xgb.train({"objective": "reg:quantileerror", "quantile_alpha": alpha},
                              xgb.QuantileDMatrix(X_train, y_train), num_boost_round=100)
        model_pred = model_xgb.predict(xgb.DMatrix(X_test))
        mse_xgb.append(mean_squared_error(y_test, model_pred))
        r2_xgb.append(r2_score(y_test, model_pred))
        # LightGBM
        model_lgb = lgb.train({'objective': 'quantile', 'alpha': alpha, 'force_col_wise': True},
                              lgb.Dataset(X_train, y_train), num_boost_round=100)
        model_pred = model_lgb.predict(X_test)
        mse_lgb.append(mean_squared_error(y_test, model_pred))
        r2_lgb.append(r2_score(y_test, model_pred))
        if alpha in [0.1, 0.5, 0.9]:
            qr_pred[alpha] = model_qr.predict(sm.add_constant(X_test))
            xgb_pred[alpha] = model_xgb.predict(xgb.DMatrix(X_test))
            lgb_pred[alpha] = model_lgb.predict(X_test)

The predictions at quantiles 0.1, 0.5, and 0.9 are saved so they can be plotted later.
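For reference, both LightGBM's 'quantile' objective and XGBoost's 'reg:quantileerror' minimize the pinball (quantile) loss rather than squared error. A minimal sketch of that loss, which can also be used as an extra evaluation metric on the saved predictions (variable names follow the loop above):

    def pinball_loss(y_true, y_pred, alpha):
        """Mean pinball (quantile) loss: under-prediction is penalized by alpha,
        over-prediction by (1 - alpha)."""
        diff = np.asarray(y_true) - np.asarray(y_pred)
        return np.mean(np.maximum(alpha * diff, (alpha - 1) * diff))

    # Example: score the saved LightGBM predictions at the three recorded quantiles
    for a in [0.1, 0.5, 0.9]:
        print(a, pinball_loss(y_test, lgb_pred[a], a))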

Then plot the error and goodness of fit of the three models across quantiles:

    plt.figure(figsize=(7, 5), dpi=128)
    plt.subplot(211)
    plt.plot(alphas, mse_qr, label='Quantile Regression')
    plt.plot(alphas, mse_xgb, label='XGBoost')
    plt.plot(alphas, mse_lgb, label='LightGBM')
    plt.legend()
    plt.xlabel('Quantile')
    plt.ylabel('MSE')
    plt.title('MSE across different quantiles')
    plt.subplot(212)
    plt.plot(alphas, r2_qr, label='Quantile Regression')
    plt.plot(alphas, r2_xgb, label='XGBoost')
    plt.plot(alphas, r2_lgb, label='LightGBM')
    plt.legend()
    plt.xlabel('Quantile')
    plt.ylabel('$R^2$')
    plt.title('$R^2$ across different quantiles')
    plt.tight_layout()
    plt.show()

Around the 0.5 quantile, all models have relatively small errors. This makes sense: the scoring metric here is MSE, which favors predictions near the center of the distribution, and this dataset does not contain many extreme outliers. In terms of overall performance, LightGBM > XGBoost > linear QR; a linear model naturally cannot fit a nonlinear relationship like this one well.

Plot the fitted curves:
     

    name = ['QR', 'XGB-QR', 'LGB-QR']
    plt.figure(figsize=(7, 6), dpi=128)
    for k, model in enumerate([qr_pred, xgb_pred, lgb_pred]):
        n = int(str('31') + str(k + 1))
        plt.subplot(n)
        plt.scatter(X_test, y_test, c='k', s=2)
        for i, alpha in enumerate([0.1, 0.5, 0.9]):
            sort_order = np.argsort(X_test, axis=0).ravel()
            X_test_sorted = np.array(X_test)[sort_order]
            predictions_sorted = np.array(model[alpha])[sort_order]
            plt.plot(X_test_sorted, predictions_sorted, label=fr"$\tau$={alpha}", lw=0.8)
        plt.legend()
        plt.title(f'{name[k]}')
    plt.tight_layout()
    plt.show()

The interval structure characteristic of quantile regression is clearly visible: the 0.1 and 0.9 curves bracket most of the observations.

The advantage of the nonparametric, nonlinear methods also shows: XGBoost and LightGBM clearly fit the relationship better than the linear model.
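A quick way to quantify that interval behavior is to check empirical coverage, i.e. how often the true values fall between the 0.1 and 0.9 predictions (roughly 80% is the target). A small sketch, reusing the qr_pred / xgb_pred / lgb_pred dictionaries saved in the loop above:

    # Fraction of test points inside the [0.1, 0.9] prediction interval
    for label, pred in zip(['QR', 'XGB-QR', 'LGB-QR'], [qr_pred, xgb_pred, lgb_pred]):
        lower, upper = np.array(pred[0.1]), np.array(pred[0.9])
        coverage = np.mean((np.ravel(y_test) >= lower) & (np.ravel(y_test) <= upper))
        print(f"{label}: empirical coverage of the [0.1, 0.9] interval = {coverage:.2%}")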


The Boston housing dataset

The above used artificial data; next let's compare the models on a real dataset, using the classic Boston housing dataset for regression:

    data_url = "http://lib.stat.cmu.edu/datasets/boston"
    raw_df = pd.read_csv(data_url, sep="\s+", skiprows=22, header=None)
    data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
    target = raw_df.values[1::2, 2]
    column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
    boston = pd.DataFrame(np.hstack([data, target.reshape(-1, 1)]), columns=column_names)
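A quick sanity check on the assembled DataFrame (the dataset should have 506 rows and 14 columns, with MEDV as the target):

    print(boston.shape)               # expect (506, 14)
    print(boston.head())              # first few rows
    print(boston['MEDV'].describe())  # summary statistics of the target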

Take out X and y, and split into training and test sets:

    X = boston.iloc[:, :-1]
    y = boston.iloc[:, -1]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Fit, predict, and compare:

    alphas = np.arange(0.1, 1, 0.1)
    mse_qr, mse_xgb, mse_lgb = [], [], []
    r2_qr, r2_xgb, r2_lgb = [], [], []
    qr_pred, xgb_pred, lgb_pred = {}, {}, {}

    # Train and evaluate
    for alpha in alphas:
        # Quantile Regression
        model_qr = QuantReg(y_train, sm.add_constant(X_train)).fit(q=alpha)
        model_pred = model_qr.predict(sm.add_constant(X_test))
        mse_qr.append(mean_squared_error(y_test, model_pred))
        r2_qr.append(r2_score(y_test, model_pred))
        # XGBoost
        model_xgb = xgb.train({"objective": "reg:quantileerror", "quantile_alpha": alpha},
                              xgb.QuantileDMatrix(X_train, y_train), num_boost_round=100)
        model_pred = model_xgb.predict(xgb.DMatrix(X_test))
        mse_xgb.append(mean_squared_error(y_test, model_pred))
        r2_xgb.append(r2_score(y_test, model_pred))
        # LightGBM
        model_lgb = lgb.train({'objective': 'quantile', 'alpha': alpha, 'force_col_wise': True},
                              lgb.Dataset(X_train, y_train), num_boost_round=100)
        model_pred = model_lgb.predict(X_test)
        mse_lgb.append(mean_squared_error(y_test, model_pred))
        r2_lgb.append(r2_score(y_test, model_pred))
        if alpha in [0.1, 0.5, 0.9]:
            qr_pred[alpha] = model_qr.predict(sm.add_constant(X_test))
            xgb_pred[alpha] = model_xgb.predict(xgb.DMatrix(X_test))
            lgb_pred[alpha] = model_lgb.predict(X_test)

Plot the error and goodness of fit of the different models across quantiles:

    plt.figure(figsize=(8, 5), dpi=128)
    plt.subplot(211)
    plt.plot(alphas, mse_qr, label='Quantile Regression')
    plt.plot(alphas, mse_xgb, label='XGBoost')
    plt.plot(alphas, mse_lgb, label='LightGBM')
    plt.legend()
    plt.xlabel('Quantile')
    plt.ylabel('MSE')
    plt.title('MSE across different quantiles')
    plt.subplot(212)
    plt.plot(alphas, r2_qr, label='Quantile Regression')
    plt.plot(alphas, r2_xgb, label='XGBoost')
    plt.plot(alphas, r2_lgb, label='LightGBM')
    plt.legend()
    plt.xlabel('Quantile')
    plt.ylabel('$R^2$')
    plt.title('$R^2$ across different quantiles')
    plt.tight_layout()
    plt.show()

Around the 0.6 quantile, all three models perform fairly well. In terms of overall performance, XGBoost > LightGBM > QR; the two machine learning models are again the stronger ones.


Comparing the quantile loss and the squared-error loss

We saw above that the models perform well around the 0.6 quantile. So how do quantile-loss models compare with ordinary models trained on the usual MSE loss? Let's continue the comparison (here the quantile models use the median, alpha = 0.5):

    # Set the quantile level
    alpha = 0.5
    # Linear quantile regression
    model_qr = sm.regression.quantile_regression.QuantReg(y_train, sm.add_constant(X_train)).fit(q=alpha)
    qr_pred = model_qr.predict(sm.add_constant(X_test))
    # XGBoost quantile regression
    model_xgb = xgb.train({"objective": "reg:quantileerror", "quantile_alpha": alpha},
                          xgb.DMatrix(X_train, label=y_train), num_boost_round=100)
    xgb_q_pred = model_xgb.predict(xgb.DMatrix(X_test))
    # LightGBM quantile regression
    model_lgb = lgb.train({'objective': 'quantile', 'alpha': alpha, 'force_col_wise': True},
                          lgb.Dataset(X_train, label=y_train), num_boost_round=100)
    lgb_q_pred = model_lgb.predict(X_test)
    # Ordinary least squares linear regression
    model_lr = LinearRegression()
    model_lr.fit(X_train, y_train)
    lr_pred = model_lr.predict(X_test)
    # Ordinary XGBoost (squared-error loss)
    model_xgb_reg = xgb.train({"objective": "reg:squarederror"}, xgb.DMatrix(X_train, label=y_train), num_boost_round=100)
    xgb_pred = model_xgb_reg.predict(xgb.DMatrix(X_test))
    # Ordinary LightGBM (squared-error loss)
    model_lgb_reg = lgb.train({'objective': 'regression', 'force_col_wise': True}, lgb.Dataset(X_train, label=y_train), num_boost_round=100)
    lgb_pred = model_lgb_reg.predict(X_test)

These are six models in total: XGBoost, LightGBM, and linear quantile regression trained with the quantile loss, plus ordinary XGBoost, LightGBM, and ordinary least squares linear regression trained with the usual MSE loss.

Compute the MSE and $R^2$ of the six models:

    models = ['QR', 'XGB Quantile', 'LightGBM Quantile', 'Linear Reg', 'XGBoost', 'LightGBM']
    preds = [qr_pred, xgb_q_pred, lgb_q_pred, lr_pred, xgb_pred, lgb_pred]
    mse_scores = [mean_squared_error(y_test, pred) for pred in preds]
    r2_scores = [r2_score(y_test, pred) for pred in preds]
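Optionally, the scores can also be collected into a small table for easier reading (a quick sketch using the lists defined above):

    # Summary table of the six models' scores
    scores = pd.DataFrame({'MSE': mse_scores, 'R2': r2_scores}, index=models)
    print(scores.round(3))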

Plot bar charts to compare:

    colors = sns.color_palette("muted", len(models))
    fig, axs = plt.subplots(2, 1, figsize=(9, 7))
    axs[0].bar(models, mse_scores, color=colors)
    axs[0].set_title('MSE Comparison')
    axs[0].set_ylabel('MSE')
    axs[1].bar(models, r2_scores, color=colors)
    axs[1].set_title(r'$R^{2}$ Comparison')
    axs[1].set_ylabel(r'$R^{2}$')
    plt.tight_layout()
    plt.show()

In terms of model performance, XGBoost beats LightGBM, and both beat the linear models. However, the quantile-loss versions do not beat their MSE-loss counterparts, which suggests that on this dataset the classic models trained with the ordinary MSE loss simply work better.

That is often how it goes: many academic innovations and tweaks do not necessarily beat the most classic and common methods.

For data with many outliers or with heteroscedasticity, switching the loss function to a quantile loss may well pay off, as the small experiment below illustrates.
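As a rough illustration of that last point, here is a minimal, self-contained sketch (the synthetic data and parameter choices are my own, not from the experiments above): it adds heavy one-sided outliers to a simple linear signal and compares LightGBM trained with the squared-error loss against LightGBM trained with the median (alpha = 0.5) quantile loss, scoring both against the clean signal.

    import numpy as np
    import lightgbm as lgb
    from sklearn.metrics import mean_absolute_error

    rng = np.random.RandomState(0)
    x = rng.uniform(0, 10, size=2000).reshape(-1, 1)
    signal = 3 * x.ravel()                                        # true underlying relationship
    outliers = rng.binomial(1, 0.1, size=2000) * rng.exponential(50, size=2000)
    y_noisy = signal + rng.normal(0, 1, size=2000) + outliers     # ~10% heavy one-sided outliers

    # Squared-error loss vs. median (quantile 0.5) loss
    m_l2 = lgb.train({'objective': 'regression', 'verbose': -1},
                     lgb.Dataset(x, y_noisy), num_boost_round=100)
    m_q50 = lgb.train({'objective': 'quantile', 'alpha': 0.5, 'verbose': -1},
                      lgb.Dataset(x, y_noisy), num_boost_round=100)

    # Compare against the clean signal: the median objective is typically
    # pulled around far less by the outliers than the squared-error objective
    print("MAE vs clean signal, L2 loss:      ", mean_absolute_error(signal, m_l2.predict(x)))
    print("MAE vs clean signal, quantile loss:", mean_absolute_error(signal, m_q50.predict(x)))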

Original article: https://blog.csdn.net/weixin_46277779/article/details/134015189