Post-processing Evolved Decision Trees
Johansson, Ulf; König, Rikard; Löfström, Tuve; Sönströd, Cecilia
Högskolan i Borås, Institutionen Handels- och IT-högskolan (CSL@BS)
ORCID iD: 0000-0003-0274-9026
2009 (English). In: Foundations of Computational Intelligence / [ed] Ajith Abraham, Springer Verlag, 2009, pp. 149-164. Chapter in book, part of anthology (Other academic)
Abstract [en]

Although Genetic Programming (GP) is a very general technique, it is also quite powerful. As a matter of fact, GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has successfully been applied to most major tasks, e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works by iteratively, one node at a time, searching for possible modifications that will result in higher accuracy. More specifically, for each interior test, the algorithm evaluates every possible split for the current attribute and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets or extracted from neural network ensembles. The experimentation, using 22 UCI datasets, shows that the suggested post-processing technique results in higher test set accuracies on a large majority of the datasets. In fact, the increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial on two of the other three.

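The abstract describes a simple greedy procedure: visit the interior tests of the evolved tree one at a time and, for each, try every possible split on that node's current attribute, keeping whichever split gives the highest training accuracy. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the Node class, the midpoint candidate-split heuristic and all names are assumptions made for the example.

```python
# Minimal sketch of the post-processing idea described in the abstract.
# Illustration only: the data structures and the candidate-split heuristic
# (midpoints of observed attribute values) are assumptions, not the paper's code.

from dataclasses import dataclass
from typing import Optional
import numpy as np


@dataclass
class Node:
    attribute: Optional[int] = None    # index of the tested attribute (interior node)
    threshold: Optional[float] = None  # split value for "x[attribute] <= threshold"
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None        # class label (leaf node)


def predict_one(node: Node, x: np.ndarray) -> int:
    # Follow the tests from the root until a leaf is reached.
    while node.label is None:
        node = node.left if x[node.attribute] <= node.threshold else node.right
    return node.label


def accuracy(root: Node, X: np.ndarray, y: np.ndarray) -> float:
    return float(np.mean([predict_one(root, x) == t for x, t in zip(X, y)]))


def post_process(root: Node, X: np.ndarray, y: np.ndarray) -> Node:
    """Visit the interior nodes one at a time; for each, try every candidate
    split value for the node's current attribute and keep the best one."""
    def interior_nodes(node):
        if node is not None and node.label is None:
            yield node
            yield from interior_nodes(node.left)
            yield from interior_nodes(node.right)

    for node in interior_nodes(root):
        values = np.unique(X[:, node.attribute])
        candidates = (values[:-1] + values[1:]) / 2.0  # midpoints between observed values
        best_acc, best_thr = accuracy(root, X, y), node.threshold
        for thr in candidates:
            node.threshold = thr
            acc = accuracy(root, X, y)
            if acc > best_acc:
                best_acc, best_thr = acc, thr
        node.threshold = best_thr  # keep the best split found for this node
    return root
```

Because a node's threshold is only replaced when a candidate strictly improves training accuracy, and otherwise restored, training accuracy is monotonically non-decreasing under this sketch, matching the guarantee stated in the abstract.
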
Place, publisher, year, edition, pages
Springer Verlag, 2009, pp. 149-164
Keywords [en]
decision trees, genetic programming, Machine learning
Keywords [sv]
data mining
National subject category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:hb:diva-4926
DOI: 10.1007/978-3-642-01088-0
Local ID: 2320/5721
ISBN: 978-3-642-01087-3 (print)
OAI: oai:DiVA.org:hb-4926
DiVA, id: diva2:884344
Available from: 2015-12-17  Created: 2015-12-17  Last updated: 2020-01-29  Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text

Authority records BETA

Johansson, Ulf; König, Rikard; Löfström, Tuve; Sönströd, Cecilia

Search in DiVA

By the author/editor
Johansson, Ulf; König, Rikard; Löfström, Tuve; Sönströd, Cecilia
By the organisation
Institutionen Handels- och IT-högskolan
Computer and Information Sciences

Search outside of DiVA

Google / Google Scholar
