XGBoost Parameters


Author: Admin | 2025-04-27

Parameter for Tweedie Regression (reg:tweedie)

tweedie_variance_power [default = 1.5]
Set closer to 2 to shift towards a gamma distribution. Set closer to 1 to shift towards a Poisson distribution.

Parameter for using Pseudo-Huber (reg:pseudohubererror)

huber_slope [default = 1.0]
A parameter used for Pseudo-Huber loss to define the \(\delta\) term.

Parameter for using Quantile Loss (reg:quantileerror)

quantile_alpha
A scalar or a list of targeted quantiles.

Parameters for using AFT Survival Loss (survival:aft) and Negative Log Likelihood of AFT metric (aft-nloglik)

aft_loss_distribution
Probability Density Function: normal, logistic, or extreme.

Parameters for learning to rank (rank:ndcg, rank:map, rank:pairwise)

These are parameters specific to the learning-to-rank task. See Learning to Rank for an in-depth explanation.

lambdarank_pair_method [default = topk]
How to construct pairs for pair-wise learning.
mean: Sample lambdarank_num_pair_per_sample pairs for each document in the query list.
topk: Focus on the top-lambdarank_num_pair_per_sample documents. Construct \(|query|\) pairs for each document at the top-lambdarank_num_pair_per_sample positions ranked by the model.

lambdarank_num_pair_per_sample [range = \([1, \infty]\)]
Specifies the number of pairs sampled for each document when the pair method is mean, or the truncation level for queries when the pair method is topk. For example, to train with ndcg@6, set lambdarank_num_pair_per_sample to \(6\) and lambdarank_pair_method to topk.

lambdarank_normalization [default = true]
Added in version 2.1.0.
Whether to normalize the leaf value by the lambda gradient. This can sometimes stagnate the training progress.

lambdarank_score_normalization [default = true]
Added in version 3.0.0.
Whether to normalize the delta metric by the difference of prediction scores. This can sometimes stagnate the training progress. With pairwise ranking, the gradient can be normalized by the difference between the two samples in each pair, reducing the influence of pairs that have a large difference in ranking scores. This helps regularize the model to reduce bias and prevent overfitting, but, like other regularization techniques, it might prevent training from converging. There was no normalization before 2.0; in 2.0 and later versions it is applied by default, and in 3.0 it became an option that users can disable.

lambdarank_unbiased [default = false]
Specify whether we need to debias input click data.

lambdarank_bias_norm [default = 2.0]
\(L_p\) normalization for position debiasing; the default is \(L_2\). Only relevant when lambdarank_unbiased is set to true.

ndcg_exp_gain [default = true]
Whether we should use the exponential gain function for NDCG. There are two forms of gain function for NDCG: one uses the relevance value directly, while the other uses \(2^{rel} - 1\) to emphasize retrieving relevant documents. When ndcg_exp_gain is true (the default), the relevance degree cannot be greater than 31.

Command Line Parameters

The following parameters are only used in the console version of XGBoost. The CLI has been deprecated and will be removed in future releases.

num_round
The number of rounds for boosting.

data
The path of training data.
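The short Python sketches below illustrate several of the parameter groups above. The toy data, feature counts, and round counts in them are illustrative assumptions, not values from the documentation. First, the Pseudo-Huber objective with huber_slope:

    import numpy as np
    import xgboost as xgb

    # Toy regression data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + rng.normal(size=200)

    dtrain = xgb.DMatrix(X, label=y)
    params = {
        "objective": "reg:pseudohubererror",
        "huber_slope": 1.0,  # the delta term of the Pseudo-Huber loss
    }
    booster = xgb.train(params, dtrain, num_boost_round=10)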
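quantile_alpha accepts either a scalar or a list; when a list is given, the booster predicts one value per requested quantile. A sketch on the same kind of toy data:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + rng.normal(size=200)
    dtrain = xgb.DMatrix(X, label=y)

    params = {
        "objective": "reg:quantileerror",
        "tree_method": "hist",
        "quantile_alpha": np.array([0.1, 0.5, 0.9]),  # three target quantiles
    }
    booster = xgb.train(params, dtrain, num_boost_round=10)
    preds = booster.predict(dtrain)  # one column per requested quantile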
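For the AFT survival objective, labels are supplied as lower and upper bounds: equal bounds encode an exactly observed time, and an infinite upper bound encodes right-censoring. The aft_loss_distribution_scale parameter used here comes from the full parameter list rather than this excerpt, and the data is synthetic:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y_lower = rng.uniform(1.0, 10.0, size=100)
    # Roughly 20% of rows are right-censored: upper bound is +inf.
    y_upper = np.where(rng.random(100) < 0.2, np.inf, y_lower)

    dtrain = xgb.DMatrix(X)
    dtrain.set_float_info("label_lower_bound", y_lower)
    dtrain.set_float_info("label_upper_bound", y_upper)

    params = {
        "objective": "survival:aft",
        "eval_metric": "aft-nloglik",
        "aft_loss_distribution": "normal",  # or "logistic", "extreme"
        "aft_loss_distribution_scale": 1.2,
    }
    booster = xgb.train(params, dtrain, num_boost_round=10)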
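Putting the learning-to-rank parameters together, a sketch that trains for ndcg@6 as described above; the query grouping and relevance grades are synthetic:

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 4))
    y = rng.integers(0, 4, size=120)    # graded relevance labels 0..3
    qid = np.repeat(np.arange(12), 10)  # 12 queries, 10 documents each

    dtrain = xgb.DMatrix(X, label=y, qid=qid)
    params = {
        "objective": "rank:ndcg",
        "eval_metric": "ndcg@6",
        "lambdarank_pair_method": "topk",
        "lambdarank_num_pair_per_sample": 6,  # matches the ndcg@6 truncation
    }
    booster = xgb.train(params, dtrain, num_boost_round=10)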
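To make the two NDCG gain forms behind ndcg_exp_gain concrete, a small standalone sketch; this dcg helper is hypothetical, written only to illustrate the formulas (the cap of 31 on relevance exists because \(2^{rel}\) grows very quickly):

    import numpy as np

    def dcg(relevance, exp_gain=True):
        """DCG with the exponential gain 2^rel - 1, or raw relevance as the gain."""
        rel = np.asarray(relevance, dtype=float)
        gain = np.power(2.0, rel) - 1.0 if exp_gain else rel
        # Position discount 1 / log2(i + 1) for 1-based rank i.
        discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
        return float(np.sum(gain * discounts))

    ranked = [3, 2, 3, 0, 1]           # relevance of documents in ranked order
    print(dcg(ranked, exp_gain=True))  # exponential gain favors highly relevant docs
    print(dcg(ranked, exp_gain=False)) # linear gain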
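Although the CLI is deprecated, its settings map directly onto the Python package: num_round corresponds to num_boost_round, and data to the input passed to DMatrix. A sketch, with the file name, format, and objective as assumptions:

    import xgboost as xgb

    # Load training data from a path, as the CLI's `data` parameter would.
    dtrain = xgb.DMatrix("train.txt?format=libsvm")
    # The CLI's `num_round` corresponds to num_boost_round here.
    booster = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=10)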
