Post-processing Evolved Decision Trees
Johansson, Ulf; König, Rikard; Löfström, Tuve; Sönströd, Cecilia (and others)
University of Borås, School of Business and IT (CSL@BS)
ORCID iD: 0000-0003-0274-9026
2009 (English). In: Foundations of Computational Intelligence / [ed] Ajith Abraham, Springer Verlag, 2009, p. 149-164. Chapter in book (Other academic).
Abstract [en]

Although Genetic Programming (GP) is a very general technique, it is also quite powerful; GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has been applied successfully to most major tasks, e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works iteratively, one node at a time, searching for possible modifications that will result in higher accuracy. More specifically, for each interior test, the algorithm evaluates every possible split for the current attribute and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets or extracted from neural network ensembles. The experimentation, using 22 UCI datasets, shows that the suggested post-processing technique results in higher test set accuracies on a large majority of the datasets. The increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial on two of the other three.

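To make the split-refinement step described in the abstract concrete, the following is a minimal Python sketch under stated assumptions: the Node class, the midpoint candidate-threshold convention, and whole-training-set accuracy as the selection criterion are illustrative choices, not the chapter's actual implementation.

# Minimal sketch of the post-processing idea from the abstract.
# Node structure, candidate thresholds and the accuracy criterion are
# assumptions for illustration, not the authors' implementation.

import numpy as np


class Node:
    """Hypothetical binary decision-tree node: interior nodes test
    X[:, attr] <= threshold; leaves carry a predicted class label."""
    def __init__(self, attr=None, threshold=None, left=None, right=None, label=None):
        self.attr = attr
        self.threshold = threshold
        self.left = left
        self.right = right
        self.label = label  # set only on leaves

    def predict_one(self, x):
        if self.label is not None:
            return self.label
        child = self.left if x[self.attr] <= self.threshold else self.right
        return child.predict_one(x)


def accuracy(root, X, y):
    preds = np.array([root.predict_one(x) for x in X])
    return np.mean(preds == y)


def post_process(root, X, y):
    """One pass over the interior nodes: for each node, try every candidate
    threshold for that node's current attribute and keep the one that
    maximises training-set accuracy. The original threshold is retained
    whenever no candidate beats it, so training accuracy never decreases."""
    def interior_nodes(node):
        if node is None or node.label is not None:
            return []
        return [node] + interior_nodes(node.left) + interior_nodes(node.right)

    for node in interior_nodes(root):
        best_t = node.threshold
        best_acc = accuracy(root, X, y)
        # Candidate splits: midpoints between consecutive unique values of
        # the node's attribute (one common convention; an assumption here).
        values = np.unique(X[:, node.attr])
        candidates = (values[:-1] + values[1:]) / 2.0
        for t in candidates:
            node.threshold = t
            acc = accuracy(root, X, y)
            if acc > best_acc:
                best_acc, best_t = acc, t
        node.threshold = best_t  # restore or adopt the best split found
    return root

Calling post_process(tree, X_train, y_train) on a tree with numeric interior tests therefore leaves training accuracy no lower than before, matching the guarantee stated in the abstract.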
Place, publisher, year, edition, pages
Springer Verlag, 2009, p. 149-164
Keywords [en]
decision trees, genetic programming, machine learning
Keywords [sv]
data mining
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:hb:diva-4926
DOI: 10.1007/978-3-642-01088-0
Local ID: 2320/5721
ISBN: 978-3-642-01087-3 (print)
OAI: oai:DiVA.org:hb-4926
DiVA, id: diva2:884344
Available from: 2015-12-17. Created: 2015-12-17. Last updated: 2020-01-29. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Johansson, Ulf; König, Rikard; Löfström, Tuve; Sönströd, Cecilia
