Post-processing Evolved Decision Trees
Johansson, Ulf; König, Rikard; Löfström, Tuve; Sönströd, Cecilia
Högskolan i Borås, Institutionen Handels- och IT-högskolan (CSL@BS)
ORCID iD: 0000-0003-0274-9026
2009 (English). In: Foundations of Computational Intelligence / [ed] Ajith Abraham, Springer Verlag, 2009, p. 149-164. Chapter in book, part of anthology (Other academic)
Abstract [en]

Genetic Programming (GP) is a very general technique, yet it is also quite powerful: GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has been applied successfully to most major tasks, e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works iteratively, one node at a time, searching for possible modifications that will result in higher accuracy. More specifically, for each interior test, the algorithm evaluates every possible split on the current attribute and chooses the best one. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets or extracted from neural network ensembles. The experimentation, using 22 UCI datasets, shows that the suggested post-processing technique results in higher test set accuracies on a large majority of the datasets. The increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial on two of the other three.
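The abstract describes the post-processing step only in words. The following is a minimal sketch of that idea in Python, assuming axis-parallel threshold splits and a simple binary node structure; the Node class, the candidate-threshold generation (midpoints between distinct training values) and the helper functions are illustrative assumptions, not the authors' implementation.

# Hedged sketch: iterate over interior nodes and, for each node, try every
# candidate threshold on that node's current attribute, keeping the one that
# maximizes training accuracy. Training accuracy can only go up or stay equal.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Node:
    attribute: Optional[int] = None    # index of the attribute tested (None for leaves)
    threshold: Optional[float] = None  # go left if x[attribute] <= threshold
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None        # class label stored in leaves

def predict(node: Node, x: np.ndarray) -> int:
    # Follow the tests down to a leaf and return its label.
    while node.label is None:
        node = node.left if x[node.attribute] <= node.threshold else node.right
    return node.label

def accuracy(tree: Node, X: np.ndarray, y: np.ndarray) -> float:
    return float(np.mean([predict(tree, x) == t for x, t in zip(X, y)]))

def interior_nodes(node: Optional[Node]):
    # Yield every non-leaf node of the tree.
    if node is None or node.label is not None:
        return
    yield node
    yield from interior_nodes(node.left)
    yield from interior_nodes(node.right)

def post_process(tree: Node, X: np.ndarray, y: np.ndarray) -> Node:
    # For each interior test, evaluate every candidate split on the node's
    # current attribute and keep the threshold giving the best training accuracy.
    for node in list(interior_nodes(tree)):
        best_threshold = node.threshold
        best_acc = accuracy(tree, X, y)
        values = np.unique(X[:, node.attribute])
        for cand in (values[:-1] + values[1:]) / 2.0:  # candidate thresholds (assumption)
            node.threshold = cand
            acc = accuracy(tree, X, y)
            if acc > best_acc:
                best_acc, best_threshold = acc, cand
        node.threshold = best_threshold
    return tree

The greedy, node-by-node design mirrors the property stated in the abstract: since a change is only kept when it improves training accuracy, the procedure is monotone on the training set, although test accuracy may still vary.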

Place, publisher, year, edition, pages
Springer Verlag, 2009. p. 149-164
Keywords [en]
decision trees, genetic programming, Machine learning
Keywords [sv]
data mining
HSV category
Identifiers
URN: urn:nbn:se:hb:diva-4926
DOI: 10.1007/978-3-642-01088-0
Local ID: 2320/5721
ISBN: 978-3-642-01087-3 (print)
OAI: oai:DiVA.org:hb-4926
DiVA, id: diva2:884344
Available from: 2015-12-17 Created: 2015-12-17 Last updated: 2020-01-29 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text
