Enum Class DiscreteNaiveBayes.Model

java.lang.Object
java.lang.Enum<DiscreteNaiveBayes.Model>
smile.classification.DiscreteNaiveBayes.Model
All Implemented Interfaces:
Serializable, Comparable<DiscreteNaiveBayes.Model>, Constable
Enclosing class:
DiscreteNaiveBayes

public static enum DiscreteNaiveBayes.Model extends Enum<DiscreteNaiveBayes.Model>
The generative models of the naive Bayes classifier. For document classification in NLP, a naive Bayes classifier can be set up in two different ways: the multinomial model and the Bernoulli model. The multinomial model generates one term from the vocabulary in each position of the document. The multivariate Bernoulli model (or simply Bernoulli model) generates an indicator for each term of the vocabulary, indicating either the presence or the absence of that term in the document.
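
The model is chosen when constructing the enclosing DiscreteNaiveBayes classifier. Below is a minimal sketch, assuming a DiscreteNaiveBayes(Model, k, p) constructor (number of classes k, vocabulary size p) and update/predict methods on the enclosing class; the term-count vectors and labels are made up for illustration.

    import smile.classification.DiscreteNaiveBayes;

    public class MultinomialExample {
        public static void main(String[] args) {
            // Hypothetical bag-of-words counts: one row per document,
            // one column per vocabulary term.
            int[][] counts = {
                {2, 0, 1, 0, 3},
                {0, 1, 0, 2, 0},
                {1, 0, 2, 0, 1}
            };
            int[] labels = {0, 1, 0};

            // Assumed constructor: (generative model, number of classes, vocabulary size).
            DiscreteNaiveBayes nb =
                new DiscreteNaiveBayes(DiscreteNaiveBayes.Model.MULTINOMIAL, 2, 5);

            nb.update(counts, labels);                      // learn from the labeled documents
            int label = nb.predict(new int[] {1, 0, 1, 0, 2});
            System.out.println("predicted class: " + label);
        }
    }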
  • Enum Constant Details

    • MULTINOMIAL

      public static final DiscreteNaiveBayes.Model MULTINOMIAL
      The document multinomial model generates one term from the vocabulary in each position of the document.
    • BERNOULLI

      public static final DiscreteNaiveBayes.Model BERNOULLI
      The document Bernoulli model generates an indicator for each term of the vocabulary, indicating either the presence or the absence of that term in the document.
    • POLYAURN

      public static final DiscreteNaiveBayes.Model POLYAURN
      The document Polya urn model is similar to MULTINOMIAL but differs in how the conditional probabilities are updated during learning: each term occurrence observed in the training data is added to the counts twice instead of once.
    • CNB

      public static final DiscreteNaiveBayes.Model CNB
      Complement Naive Bayes. To deal with skewed training data, CNB estimates the parameters of a class c using data from all classes except c. CNB's estimates may be more effective because each one uses a more even amount of training data per class, which lessens the bias in the weight estimates.
    • WCNB

      public static final DiscreteNaiveBayes.Model WCNB
      Weight-normalized Complement Naive Bayes. In practice, weights often lean toward one class or another. When the magnitude of the naive Bayes weight vector is larger for one class than for the others, that larger-magnitude class may be unduly preferred. To correct for the fact that some classes have greater dependencies, WCNB normalizes the weight vectors.
    • TWCNB

      public static final DiscreteNaiveBayes.Model TWCNB
      Transformed Weight-normalized Complement Naive Bayes. TWCNB transforms the term frequencies before feeding them into WCNB, applying a TF transform, an IDF transform, and length normalization. Because the IDF transform requires statistics of the whole corpus, TWCNB supports only batch mode (see the sketch below).
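
      A minimal sketch of the batch-only constraint, assuming the DiscreteNaiveBayes(Model, k, p) constructor plus per-document and batch update methods; the data is made up for illustration:

      import smile.classification.DiscreteNaiveBayes;
      import smile.classification.DiscreteNaiveBayes.Model;

      public class BatchVersusOnline {
          public static void main(String[] args) {
              // Hypothetical term-count vectors and labels.
              int[][] docs   = {{2, 0, 1, 0, 3}, {0, 1, 0, 2, 0}, {1, 1, 0, 0, 2}};
              int[]   labels = {0, 1, 0};

              // MULTINOMIAL (like BERNOULLI, POLYAURN, CNB and WCNB) can learn
              // incrementally, assuming a per-document update(int[], int) method.
              DiscreteNaiveBayes online = new DiscreteNaiveBayes(Model.MULTINOMIAL, 2, 5);
              for (int i = 0; i < docs.length; i++) {
                  online.update(docs[i], labels[i]);
              }

              // TWCNB needs IDF statistics of the whole corpus, so its training
              // data must be presented in a single batch call.
              DiscreteNaiveBayes batch = new DiscreteNaiveBayes(Model.TWCNB, 2, 5);
              batch.update(docs, labels);
          }
      }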
  • Method Details

    • values

      public static DiscreteNaiveBayes.Model[] values()
      Returns an array containing the constants of this enum class, in the order they are declared.
      Returns:
      an array containing the constants of this enum class, in the order they are declared
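
      A typical use of values() is to enumerate the available generation models, for example to list them or to train one classifier per model for comparison:

      // List every generation model declared by this enum class.
      for (DiscreteNaiveBayes.Model model : DiscreteNaiveBayes.Model.values()) {
          System.out.println(model.name());
      }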
    • valueOf

      public static DiscreteNaiveBayes.Model valueOf(String name)
      Returns the enum constant of this class with the specified name. The string must match exactly an identifier used to declare an enum constant in this class. (Extraneous whitespace characters are not permitted.)
      Parameters:
      name - the name of the enum constant to be returned.
      Returns:
      the enum constant with the specified name
      Throws:
      IllegalArgumentException - if this enum class has no constant with the specified name
      NullPointerException - if the argument is null
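
      valueOf(String) is convenient for resolving a model from user input or a configuration file; a small sketch (the configuration value and the fallback choice are made up):

      // Resolve a generation model from a hypothetical configuration string.
      String configured = "BERNOULLI";
      DiscreteNaiveBayes.Model model;
      try {
          model = DiscreteNaiveBayes.Model.valueOf(configured.trim());
      } catch (IllegalArgumentException e) {
          model = DiscreteNaiveBayes.Model.MULTINOMIAL;   // fall back to a default
      }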