sklearn.inspection.permutation_importance Parameter Summary
<Python>/[Sklearn] 2021. 12. 29. 22:56
permutation_importance
sklearn.inspection.permutation_importance(estimator, X, y, *, scoring=None, n_repeats=5, n_jobs=None, random_state=None, sample_weight=None, max_samples=1.0)

from sklearn.inspection import permutation_importance
permutation_importance Parameters
estimator = object
X = ndarray or DataFrame, shape (n_samples, n_features)
y = array-like or None, shape (n_samples, ) or (n_samples, n_classes)
scoring = str, callable, list, tuple, or dict, default=None
# evaluation metric(s); e.g. 'f1' or 'accuracy' for classification
n_repeats = int, default=5
n_jobs = int or None, default=None
random_state = int, RandomState instance, default=None
sample_weight = array-like of shape (n_samples,), default=None
max_samples = int or float, default=1.0
# number (or fraction) of samples to draw from X, without replacement, to compute feature importance in each repeat
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# load the diabetes dataset
diabetes = load_diabetes()

# split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    diabetes.data, diabetes.target, random_state=0)

# create a random forest regressor
rf = RandomForestRegressor(random_state=0)

# fit the model to the training data
rf.fit(X_train, y_train)

# calculate feature importance scores using permutation importance
result = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=0)

# print the feature importance scores
print(result.importances_mean)