sklearn.linear_model.ElasticNet Parameters
class sklearn.linear_model.ElasticNet(alpha=1.0, *, l1_ratio=0.5, fit_intercept=True, normalize='deprecated', precompute=False, max_iter=1000, copy_X=True, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')
alpha : float, default=1.0
Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter. alpha = 0 is equivalent to ordinary least squares, solved by the LinearRegression object. For numerical reasons, using alpha = 0 with the Lasso object is not advised; use the LinearRegression object instead. (A combined sketch with l1_ratio follows below.)
l1_ratio : float, default=0.5
The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
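A minimal sketch of how alpha and l1_ratio are passed together (the synthetic data is an assumption for illustration): l1_ratio close to 1 makes the penalty mostly L1 (Lasso-like), close to 0 mostly L2 (Ridge-like), and for a plain least-squares fit you would use LinearRegression rather than alpha = 0.

import numpy as np
from sklearn.linear_model import ElasticNet, LinearRegression

rng = np.random.RandomState(0)
X = rng.randn(100, 10)
y = 3.0 * X[:, 0] + rng.randn(100) * 0.1   # synthetic data, assumed for illustration

# Mostly-L1 penalty: many coefficients are driven exactly to zero.
enet = ElasticNet(alpha=0.1, l1_ratio=0.9).fit(X, y)
print(enet.coef_)

# alpha = 0 is discouraged for numerical reasons; use plain OLS instead.
ols = LinearRegression().fit(X, y)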
fit_intercept : bool, default=True
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered.
normalize : bool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
Deprecated since version 1.0: normalize was deprecated in version 1.0 and will be removed in 1.2.
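Since normalize is deprecated, the documentation points to StandardScaler; a sketch of that replacement (the Pipeline wiring here is an assumption, not part of this post):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import ElasticNet

# Standardize X first, then fit ElasticNet with normalize left at its default.
pipe = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1, l1_ratio=0.5))
# pipe.fit(X, y)   # X, y: your training data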
precompute : bool or array-like of shape (n_features, n_features), default=False
Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always False to preserve sparsity.
max_iter : int, default=1000
The maximum number of iterations.
copy_X : bool, default=True
If True, X will be copied; else, it may be overwritten.
tol : float, default=1e-4
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
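A sketch of warm_start along a decreasing sequence of alpha values (the data and the alpha grid are assumptions); each fit is initialized from the coefficients of the previous one.

import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(200, 20)
y = X[:, :3].sum(axis=1) + rng.randn(200) * 0.1   # synthetic data

enet = ElasticNet(warm_start=True, l1_ratio=0.5, max_iter=5000)
for alpha in [1.0, 0.1, 0.01]:
    enet.set_params(alpha=alpha)
    enet.fit(X, y)                     # starts from the previous solution
    print(alpha, np.count_nonzero(enet.coef_))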
positive : bool, default=False
When set to True, forces the coefficients to be positive.
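A sketch of positive=True on synthetic data (assumed here): features whose true effect is negative simply end up with a zero coefficient.

import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = X @ np.array([2.0, -1.5, 0.5, 0.0, 1.0]) + rng.randn(100) * 0.1

enet = ElasticNet(alpha=0.01, positive=True).fit(X, y)
print(enet.coef_)                      # no negative entries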
random_state : int, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == ‘random’. Pass an int for reproducible output across multiple function calls. See Glossary.
selection : {‘cyclic’, ‘random’}, default=’cyclic’
If set to ‘random’, a random coefficient is updated every iteration rather than looping over features sequentially by default. This often leads to significantly faster convergence, especially when tol is higher than 1e-4.
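Putting the last two parameters together, a sketch (data and values assumed) where selection='random' updates a random coefficient each iteration and random_state makes that choice reproducible:

import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(300, 50)
y = X[:, :5].sum(axis=1) + rng.randn(300) * 0.1   # synthetic data

enet = ElasticNet(alpha=0.05, l1_ratio=0.5, selection='random', random_state=42)
enet.fit(X, y)
print(enet.n_iter_)                    # iterations until the tolerance was reached
print(enet.coef_[:5])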