A 'non-parametric' version of the naive Bayes classifier

Daniele Soria, Jonathan M. Garibaldi, Federico Ambrogi, Elia M. Biganzoli, Ian O. Ellis

Research output: Contribution to journal › Article › peer-review


Many algorithms have been proposed for the machine learning task of classification. One of the simplest methods, the naive Bayes classifier, has often been found to give good performance despite the fact that its underlying assumptions (of independence and a normal distribution of the variables) are perhaps violated. In previous work, we applied naive Bayes and other standard algorithms to a breast cancer database from Nottingham City Hospital in which the variables are highly non-normal and found that the algorithm performed well when predicting a class that had been derived from the same data. However, when we then applied naive Bayes to predict an alternative clinical variable, it performed much worse than other techniques. This motivated us to propose an alternative method, based on naive Bayes, which removes the requirement for the variables to be normally distributed, but retains the essential structure and other underlying assumptions of the method. We tested our novel algorithm on our breast cancer data and on three UCI data sets which also exhibited strong violations of normality. We found our algorithm outperformed naive Bayes in all four cases and outperformed multinomial logistic regression (MLR) in two cases. We conclude that our method offers a competitive alternative to MLR and naive Bayes when dealing with data sets in which non-normal distributions are observed.
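The general idea described in the abstract can be sketched as follows: keep the naive Bayes structure (class priors plus a product of per-feature likelihoods) but replace the Gaussian density assumption with a non-parametric density estimate. This is an illustrative sketch only, using a simple Gaussian kernel density estimate with a fixed bandwidth; the paper's actual estimator and implementation details may differ.

```python
# Sketch of a "non-parametric" naive Bayes: per-class, per-feature
# likelihoods come from a kernel density estimate rather than a fitted
# normal distribution. Illustrative assumption: Gaussian kernel, fixed
# bandwidth; not necessarily the estimator used in the paper.
import math
from collections import defaultdict

def kde_pdf(x, samples, bandwidth):
    """Gaussian-kernel density estimate of the samples, evaluated at x."""
    n = len(samples)
    return sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
    ) / (n * bandwidth * math.sqrt(2 * math.pi))

def fit(X, y):
    """Group training feature values by class and feature index."""
    by_class = defaultdict(lambda: defaultdict(list))
    counts = defaultdict(int)
    for xi, yi in zip(X, y):
        counts[yi] += 1
        for j, v in enumerate(xi):
            by_class[yi][j].append(v)
    priors = {c: counts[c] / len(y) for c in counts}
    return by_class, priors

def predict(x, by_class, priors, bandwidth=0.5):
    """Pick the class maximising log prior + sum of log KDE densities."""
    best, best_score = None, -math.inf
    for c, feats in by_class.items():
        score = math.log(priors[c])
        for j, v in enumerate(x):
            score += math.log(kde_pdf(v, feats[j], bandwidth) + 1e-12)
        if score > best_score:
            best, best_score = c, score
    return best

# Toy data: class 0 has a bimodal (clearly non-normal) feature, so a
# single fitted Gaussian would badly misrepresent it, while a KDE keeps
# both modes.
X = [[0.1], [0.2], [5.0], [5.1], [2.5], [2.6], [2.4]]
y = [0, 0, 0, 0, 1, 1, 1]
model, priors = fit(X, y)
print(predict([5.05], model, priors))  # near class 0's far mode -> 0
print(predict([2.55], model, priors))  # inside class 1's cluster -> 1
```

A point near 5.0 is assigned to class 0 even though the class-0 mean (about 2.6) sits next to the class-1 cluster, which is exactly the situation where a Gaussian likelihood would mislead the classifier.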

Original language: English
Pages (from-to): 775-784
Number of pages: 10
Journal: Knowledge-Based Systems
Issue number: 6
Publication status: Published - Aug 2011


Keywords

  • Breast cancer
  • Logistic regression
  • Naive Bayes
  • Supervised learning
  • UCI data sets

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Management Information Systems
  • Information Systems and Management


