Machine Learning · August 20, 2018

6-3 Validation Data and Cross-Validation


Problem: hyperparameters tuned against a fixed test set may end up overfitting that particular test set.

The solution is to split the data into three parts: training data, validation data, and test data. Hyperparameters are tuned on the validation set, and the test set is used only once, for the final evaluation.
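As a minimal sketch of this idea, train_test_split can be applied twice to carve out all three sets (the 0.2/0.25 ratios and random_state below are illustrative assumptions, not from the original post):

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()
X, y = digits.data, digits.target

# First split off the final test set, then split the remainder
# into training and validation sets (ratios are illustrative)
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=42)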

Cross-Validation

import numpy as np
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target

Testing train_test_split

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)
from sklearn.neighbors import KNeighborsClassifier

best_score, best_p, best_k = 0, 0, 0
for k in range(2, 11):
    for p in range(1, 6):
        # Score every (k, p) combination directly on the test set
        knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
        knn_clf.fit(X_train, y_train)
        score = knn_clf.score(X_test, y_test)
        if score > best_score:
            best_score, best_p, best_k = score, p, k
print("Best K = ", best_k)
print("Best P = ", best_p)
print("Best Score = ", best_score)
Best K =  6
Best P =  3
Best Score =  0.9888734353268428

Using Cross-Validation

from sklearn.model_selection import cross_val_score
knn_clf = KNeighborsClassifier()
cross_val_score(knn_clf, X_train, y_train)
array([0.97520661, 0.98055556, 0.98591549])
from sklearn.neighbors import KNeighborsClassifier

best_score, best_p, best_k = 0, 0, 0
for k in range(2, 11):
    for p in range(1, 6):
        knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
        # Use cross-validation on the training set instead of scoring on the test set
        scores = cross_val_score(knn_clf, X_train, y_train)
        score = np.mean(scores)
        if score > best_score:
            best_score, best_p, best_k = score, p, k
print("Best K = ", best_k)
print("Best P = ", best_p)
print("Best Score = ", best_score)
Best K =  2
Best P =  3
Best Score =  0.9833216932448687
# Retrain with the best hyperparameters and evaluate once on the test set
best_knn_clf = KNeighborsClassifier(weights='distance', n_neighbors=2, p=3)
best_knn_clf.fit(X_train, y_train)
best_knn_clf.score(X_test, y_test)
0.9860917941585535

Revisiting Grid Search

from sklearn.model_selection import GridSearchCV
# The "CV" in GridSearchCV stands for cross-validation

param_grid = [
    {
        'weights':['distance'],
        'n_neighbors':[i for i in range(2,11)],
        'p':[i for i in range(1,6)]
    }
]
grid_search = GridSearchCV(knn_clf, param_grid, verbose=1)
grid_search.fit(X_train, y_train)
Fitting 3 folds for each of 45 candidates, totalling 135 fits

[Parallel(n_jobs=1)]: Done 135 out of 135 | elapsed:  1.4min finished

GridSearchCV(cv=None, error_score='raise',
       estimator=KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=10, p=5,
           weights='distance'),
       fit_params=None, iid=True, n_jobs=1,
       param_grid=[{'weights': ['distance'], 'n_neighbors': [2, 3, 4, 5, 6, 7, 8, 9, 10], 'p': [1, 2, 3, 4, 5]}],
       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',
       scoring=None, verbose=1)
grid_search.best_score_
0.9833024118738405
grid_search.best_params_
{'n_neighbors': 2, 'p': 3, 'weights': 'distance'}
best_knn_clf = grid_search.best_estimator_
best_knn_clf.score(X_test,y_test)
0.9860917941585535
# By default, cross-validation trains 3 models (3 folds); the cv parameter sets the number of folds
cross_val_score(knn_clf, X_train, y_train, cv=5)
array([0.97727273, 0.97247706, 0.97209302, 0.98130841, 0.97630332])
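The same cv parameter works for GridSearchCV, so the grid search above could use 5 folds instead of the default 3. A minimal sketch, reusing knn_clf and param_grid from above:

# cv sets the number of folds GridSearchCV uses internally
grid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=1)
grid_search.fit(X_train, y_train)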

k-fold Cross-Validation

Splitting the training set into k folds and validating on each fold in turn is called k-fold cross-validation.

Drawback: every hyperparameter evaluation now trains k models, so the search is roughly k times slower.
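A minimal sketch of what this looks like by hand, using scikit-learn's KFold splitter on the X_train and y_train from above (n_splits=5 and the shuffle seed are illustrative choices):

import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, val_idx in kf.split(X_train):
    # Train on k-1 folds, validate on the held-out fold
    clf = KNeighborsClassifier()
    clf.fit(X_train[train_idx], y_train[train_idx])
    scores.append(clf.score(X_train[val_idx], y_train[val_idx]))
print(np.mean(scores))  # average validation score over the k folds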

Leave-One-Out Cross-Validation (LOO-CV)

Splitting the training set into m folds, where m is the number of training samples (each fold holds a single sample), is called Leave-One-Out Cross-Validation.

It is completely unaffected by the randomness of the split, so it comes closest to the model's true performance.

Drawback: the computational cost is enormous.
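A minimal sketch of LOO-CV with scikit-learn's LeaveOneOut splitter, assuming the X_train and y_train from above (with m training samples this fits m models, so expect it to be slow):

from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

loo = LeaveOneOut()
# One model per training sample: m fits in total
scores = cross_val_score(KNeighborsClassifier(), X_train, y_train, cv=loo)
print(scores.mean())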
