1 - 50 of 71
  • 1.
    Dahlbom, Anders
    et al.
    Högskolan i Skövde.
    Riveiro, Maria
    Högskolan i Skövde.
    König, Rikard
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Brattberg, Peter
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Supporting Golf Coaching with 3D Modeling of Swings (2014). In: Sportinformatik X: Jahrestagung der dvs-Sektion Sportinformatik, Hamburg: Feldhaus Verlag GmbH & Co. KG, 2014, 10, p. 142-148. Chapter in book (Refereed)
  • 2.
    Gabrielsson, Patrick
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Co-Evolving Online High-Frequency Trading Strategies Using Grammatical Evolution (2014). Conference paper (Refereed)
    Abstract [en]

    Numerous sophisticated algorithms exist for discovering reoccurring patterns in financial time series. However, the most accurate techniques available produce opaque models, from which it is impossible to discern the rationale behind trading decisions. It is therefore desirable to sacrifice some degree of accuracy for transparency. One fairly recent evolutionary computational technology that creates transparent models, using a user-specified grammar, is grammatical evolution (GE). In this paper, we explore the possibility of evolving transparent entry- and exit trading strategies for the E-mini S&P 500 index futures market in a high-frequency trading environment using grammatical evolution. We compare the performance of models incorporating risk into their calculations with models that do not. Our empirical results suggest that profitable, risk-averse, transparent trading strategies for the E-mini S&P 500 can be obtained using grammatical evolution together with technical indicators.
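The genotype-to-phenotype mapping that GE relies on can be sketched in a few lines. Everything below is a hypothetical toy (the grammar, the indicator names `sma_fast`, `sma_slow` and `rsi`, and the codon values), not the grammar or strategy from the paper: each integer codon, taken modulo the number of available productions, selects how to expand the leftmost nonterminal.

```python
# Toy user-specified grammar for trading-rule expressions.
GRAMMAR = {
    "<rule>": [["<cond>"], ["<cond>", " and ", "<cond>"]],
    "<cond>": [["<ind>", " > ", "<ind>"], ["<ind>", " < ", "<ind>"]],
    "<ind>": [["sma_fast"], ["sma_slow"], ["rsi"]],
}

def ge_map(codons, symbol="<rule>"):
    """Expand `symbol` left to right; codon % n_choices picks each production."""
    out, stack, i = [], [symbol], 0
    while stack:
        sym = stack.pop(0)
        if sym not in GRAMMAR:           # terminal symbol: emit as-is
            out.append(sym)
            continue
        choices = GRAMMAR[sym]
        codon = codons[i % len(codons)]  # wrap around if codons run out
        i += 1
        stack = list(choices[codon % len(choices)]) + stack
    return "".join(out)

print(ge_map([1, 0, 2, 0, 1, 1, 0]))
```

A different codon sequence yields a different transparent rule from the same grammar, which is what the evolutionary search exploits.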

  • 3.
    Gabrielsson, Patrick
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Evolving Hierarchical Temporal Memory-Based Trading Models (2013). Conference paper (Refereed)
    Abstract [en]

    We explore the possibility of using the genetic algorithm to optimize trading models based on the Hierarchical Temporal Memory (HTM) machine learning technology. Technical indicators, derived from intraday tick data for the E-mini S&P 500 futures market (ES), were used as feature vectors to the HTM models. All models were configured as binary classifiers, using a simple buy-and-hold trading strategy, and followed a supervised training scheme. The data set was partitioned into multiple folds to enable a modified cross validation scheme. Artificial Neural Networks (ANNs) were used to benchmark HTM performance. The results show that the genetic algorithm succeeded in finding predictive models with good performance and generalization ability. The HTM models outperformed the neural network models on the chosen data set and both technologies yielded profitable results with above average accuracy.

  • 4.
    Gabrielsson, Patrick
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Hierarchical Temporal Memory-based algorithmic trading of financial markets (2012). Conference paper (Refereed)
    Abstract [en]

    This paper explores the possibility of using the Hierarchical Temporal Memory (HTM) machine learning technology to create a profitable software agent for trading financial markets. Technical indicators, derived from intraday tick data for the E-mini S&P 500 futures market (ES), were used as feature vectors to the HTM models. All models were configured as binary classifiers, using a simple buy-and-hold trading strategy, and followed a supervised training scheme. The data set was divided into a training set, a validation set and three test sets: bearish, bullish and horizontal. The best performing model on the validation set was tested on the three test sets. Artificial Neural Networks (ANNs) were subjected to the same data sets in order to benchmark HTM performance. The results suggest that the HTM technology can be used together with a feature vector of technical indicators to create a profitable trading algorithm for financial markets. Results also suggest that HTM performance is, at the very least, comparable to commonly applied neural network models.
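As a rough illustration of the "technical indicators as feature vectors" setup, the sketch below derives a tiny feature vector from a price series. The indicator choices and window lengths are assumptions for illustration, not the ones used in the paper.

```python
def sma(prices, n):
    """Simple moving average over the last n prices."""
    return sum(prices[-n:]) / n

def momentum(prices, n):
    """Price change over the last n steps."""
    return prices[-1] - prices[-1 - n]

def feature_vector(prices):
    """One input vector for a classifier, built from recent price history."""
    return [sma(prices, 3), sma(prices, 5), momentum(prices, 4)]

prices = [100.0, 101.0, 100.5, 102.0, 101.5, 103.0]
print(feature_vector(prices))
```

In the papers' setting, one such vector per tick (paired with a buy/hold label) would form the supervised training data.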

  • 5.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Obtaining Accurate and Comprehensible Data Mining Models: An Evolutionary Approach (2007). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    When performing predictive data mining, the use of ensembles is claimed to virtually guarantee increased accuracy compared to the use of single models. Unfortunately, the problem of how to maximize ensemble accuracy is far from solved. In particular, the relationship between ensemble diversity and accuracy is not completely understood, making it hard to efficiently utilize diversity for ensemble creation. Furthermore, most high-accuracy predictive models are opaque, i.e. it is not possible for a human to follow and understand the logic behind a prediction. For some domains, this is unacceptable, since models need to be comprehensible. To obtain comprehensibility, accuracy is often sacrificed by using simpler but transparent models; a trade-off termed the accuracy vs. comprehensibility trade-off. With this trade-off in mind, several researchers have suggested rule extraction algorithms, where opaque models are transformed into comprehensible models, keeping an acceptable accuracy. In this thesis, two novel algorithms based on Genetic Programming are suggested. The first algorithm (GEMS) is used for ensemble creation, and the second (G-REX) is used for rule extraction from opaque models. The main property of GEMS is the ability to combine smaller ensembles and individual models in an almost arbitrary way. Moreover, GEMS can use base models of any kind and the optimization function is very flexible, easily permitting inclusion of, for instance, diversity measures. In the experimentation, GEMS obtained accuracies higher than both straightforward design choices and published results for Random Forests and AdaBoost. The key quality of G-REX is the inherent ability to explicitly control the accuracy vs. comprehensibility trade-off. Compared to the standard tree inducers C5.0 and CART, and some well-known rule extraction algorithms, rules extracted by G-REX are significantly more accurate and compact. 
Most importantly, G-REX is thoroughly evaluated and found to meet all relevant evaluation criteria for rule extraction algorithms, thus establishing G-REX as the algorithm to benchmark against.
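The rule-extraction setting that G-REX operates in can be illustrated with toy models: a transparent rule is judged by fidelity, i.e. how often it reproduces the opaque model's predictions. Both models below are made-up stand-ins, not G-REX itself or any model from the thesis.

```python
def opaque_model(x):
    """Stand-in for an opaque high-accuracy model (e.g. an ANN ensemble)."""
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

def extracted_rule(x):
    """Stand-in for a comprehensible extracted rule."""
    return 1 if x[1] > 0.5 else 0

def fidelity(data):
    """Fraction of instances where the rule mimics the opaque model."""
    return sum(extracted_rule(x) == opaque_model(x) for x in data) / len(data)

data = [(0.0, 0.9), (1.0, 0.1), (0.4, 0.6), (0.9, 0.4), (0.2, 0.2)]
print(fidelity(data))
```

A rule-extraction algorithm searches the space of transparent rules for one that maximizes this agreement while staying compact.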

  • 6.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Boström, Henrik
    König, Rikard
    University of Borås, School of Business and IT.
    Extending Nearest Neighbor Classification with Spheres of Confidence (2008). Conference paper (Refereed)
  • 7.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Conformal Prediction Using Decision Trees (2013). Conference paper (Refereed)
    Abstract [en]

    Conformal prediction is a relatively new framework in which the predictive models output sets of predictions with a bound on the error rate, i.e., in a classification context, the probability of excluding the correct class label is lower than a predefined significance level. An investigation of the use of decision trees within the conformal prediction framework is presented, with the overall purpose of determining the effect of different algorithmic choices, including split criterion, pruning scheme and way to calculate the probability estimates. Since the error rate is bounded by the framework, the most important property of conformal predictors is efficiency, which concerns minimizing the number of elements in the output prediction sets. Results from one of the largest empirical investigations to date within the conformal prediction framework are presented, showing that in order to optimize efficiency, the decision trees should be induced using no pruning and with smoothed probability estimates. The choice of split criterion to use for the actual induction of the trees did not turn out to have any major impact on the efficiency. Finally, the experimentation also showed that when using decision trees, standard inductive conformal prediction was as efficient as the recently suggested method cross-conformal prediction. This is an encouraging result since cross-conformal prediction uses several decision trees, thus sacrificing the interpretability of a single decision tree.
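A minimal sketch of inductive conformal classification as described above, assuming the common nonconformity score 1 - p(true class): calibration scores yield a p-value per candidate label, and the prediction set keeps every label whose p-value exceeds the significance level. The probability estimates and calibration scores below are invented stand-ins for a decision tree's (smoothed) leaf estimates.

```python
def p_value(cal_scores, score):
    """Conformal p-value: proportion of calibration scores at least as large."""
    return (sum(s >= score for s in cal_scores) + 1) / (len(cal_scores) + 1)

def prediction_set(cal_scores, prob_estimates, significance):
    """Keep every label whose nonconformity 1 - p(label) is not too unusual."""
    return {label for label, p in prob_estimates.items()
            if p_value(cal_scores, 1.0 - p) > significance}

# Calibration nonconformity scores, i.e. 1 - p(true class) on held-out data:
cal = [0.1, 0.15, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
print(prediction_set(cal, {"A": 0.95, "B": 0.05}, significance=0.2))
```

Lowering the significance level (a stricter error bound) can only enlarge the prediction set, which is why efficiency, i.e. small sets, is the quantity to optimize.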

  • 8.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Linusson, Henrik
    University of Borås, School of Business and IT.
    Regression conformal prediction with random forests (2014). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 97, no 1-2, p. 155-176. Article in journal (Refereed)
    Abstract [en]

    Regression conformal prediction produces prediction intervals that are valid, i.e., the probability of excluding the correct target value is bounded by a predefined confidence level. The most important criterion when comparing conformal regressors is efficiency; the prediction intervals should be as tight (informative) as possible. In this study, the use of random forests as the underlying model for regression conformal prediction is investigated and compared to existing state-of-the-art techniques, which are based on neural networks and k-nearest neighbors. In addition to their robust predictive performance, random forests allow for determining the size of the prediction intervals by using out-of-bag estimates instead of requiring a separate calibration set. An extensive empirical investigation, using 33 publicly available data sets, was undertaken to compare the use of random forests to existing state-of-the-art conformal predictors. The results show that the suggested approach, on almost all confidence levels and using both standard and normalized nonconformity functions, produced significantly more efficient conformal predictors than the existing alternatives.
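The interval construction described above can be sketched as follows, assuming absolute residuals as the nonconformity score (a standard choice); the paper's out-of-bag residuals would simply replace the calibration residuals below. The numbers are made up.

```python
import math

def interval(point_pred, residuals, significance):
    """Prediction interval from calibration residuals at the given significance."""
    rs = sorted(residuals)
    # smallest k such that k / (n + 1) >= 1 - significance
    k = math.ceil((1 - significance) * (len(rs) + 1))
    half = rs[min(k, len(rs)) - 1]
    return (point_pred - half, point_pred + half)

# Absolute residuals |y - y_hat| on calibration (or out-of-bag) data:
residuals = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
print(interval(10.0, residuals, significance=0.25))
```

Normalized nonconformity functions refine this by scaling each residual with a per-instance difficulty estimate, so hard instances get wider intervals and easy ones tighter.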

  • 9.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Linusson, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Rule Extraction with Guaranteed Fidelity (2014). Conference paper (Refereed)
    Abstract [en]

    This paper extends the conformal prediction framework to rule extraction, making it possible to extract interpretable models from opaque models in a setting where either the infidelity or the error rate is bounded by a predefined significance level. Experimental results on 27 publicly available data sets show that all three setups evaluated produced valid and rather efficient conformal predictors. The implication is that augmenting rule extraction with conformal prediction allows extraction of models where test set errors or test set infidelities are guaranteed to be lower than a chosen acceptable level. Clearly this is beneficial for both typical rule extraction scenarios, i.e., either when the purpose is to explain an existing opaque model, or when it is to build a predictive model that must be interpretable.

  • 10.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    University of Borås, School of Business and IT.
    Increasing Rule Extraction Accuracy by Post-processing GP Trees (2008). In: Proceedings of the Congress on Evolutionary Computation, IEEE Press, 2008, p. 3010-3015. Conference paper (Refereed)
    Abstract [en]

    Genetic programming (GP) is a very general and efficient technique, often capable of outperforming more specialized techniques on a variety of tasks. In this paper, we suggest a straightforward novel algorithm for post-processing of GP classification trees. The algorithm iteratively, one node at a time, searches for possible modifications that would result in higher accuracy. More specifically, for each split, the algorithm evaluates every possible constant value and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In this study, we apply the suggested algorithm to GP trees, extracted from neural network ensembles. Experimentation, using 22 UCI datasets, shows that the post-processing results in higher test set accuracies on a large majority of datasets. As a matter of fact, for two setups of three evaluated, the increase in accuracy is statistically significant.
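The described post-processing step can be sketched on a one-split tree (a stump): try every candidate constant for the split and keep the best, so training accuracy can only go up. The data and the initial split are illustrative, not from the paper.

```python
def accuracy(threshold, xs, ys):
    """Stump: predict 1 when x > threshold, else 0; return training accuracy."""
    preds = [1 if x > threshold else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

def post_process(threshold, xs, ys):
    """Replace the split constant only if a candidate improves training accuracy."""
    best_t, best_acc = threshold, accuracy(threshold, xs, ys)
    candidates = sorted(set(xs))
    for a, b in zip(candidates, candidates[1:]):
        t = (a + b) / 2                      # midpoints between observed values
        acc = accuracy(t, xs, ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 0, 1, 1, 1]
print(post_process(0.5, xs, ys))   # a poorly placed GP split gets repaired
```

In the full algorithm this local repair is applied to every interior node of the evolved tree in turn.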

  • 11.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Using Imaginary Ensembles to Select GP Classifiers (2010). In: Genetic Programming: 13th European Conference, EuroGP 2010, Istanbul, Turkey, April 7-9, 2010, Proceedings / [ed] A.I. Esparcia-Alcazar et al., Springer-Verlag Berlin Heidelberg, 2010, p. 278-288. Conference paper (Refereed)
    Abstract [en]

    When predictive modeling requires comprehensible models, most data miners will use specialized techniques producing rule sets or decision trees. This study, however, shows that genetically evolved decision trees may very well outperform the more specialized techniques. The proposed approach evolves a number of decision trees and then uses one of several suggested selection strategies to pick one specific tree from that pool. The inherent inconsistency of evolution makes it possible to evolve each tree using all data, and still obtain somewhat different models. The main idea is to use these quite accurate and slightly diverse trees to form an imaginary ensemble, which is then used as a guide when selecting one specific tree. Simply put, the tree classifying the largest number of instances identically to the ensemble is chosen. In the experimentation, using 25 UCI data sets, two selection strategies obtained significantly higher accuracy than the standard rule inducer J48.

  • 12.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Post-processing Evolved Decision Trees (2009). In: Foundations of Computational Intelligence / [ed] Ajith Abraham, Springer Verlag, 2009, p. 149-164. Chapter in book (Other academic)
    Abstract [en]

    Genetic Programming (GP) is a very general technique, yet also quite powerful. As a matter of fact, GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has successfully been applied to most major tasks; e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works by iteratively, one node at a time, searching for possible modifications that will result in higher accuracy. More specifically, the algorithm, for each interior test, evaluates every possible split for the current attribute and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets, or extracted from neural network ensembles. The experimentation, using 22 UCI datasets, shows that the suggested post-processing technique results in higher test set accuracies on a large majority of the datasets. As a matter of fact, the increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial on two out of the other three.

  • 13.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Evolved Decision Trees as Conformal Predictors (2013). Conference paper (Refereed)
    Abstract [en]

    In conformal prediction, predictive models output sets of predictions with a bound on the error rate. In classification, this means that, in the long run, the probability of excluding the correct class is lower than a predefined significance level. Since the error rate is guaranteed, the most important criterion for conformal predictors is efficiency. Efficient conformal predictors minimize the number of elements in the output prediction sets, thus producing more informative predictions. This paper presents one of the first comprehensive studies where evolutionary algorithms are used to build conformal predictors. More specifically, decision trees evolved using genetic programming are evaluated as conformal predictors. In the experiments, the evolved trees are compared to decision trees induced using standard machine learning techniques on 33 publicly available benchmark data sets, with regard to predictive performance and efficiency. The results show that the evolved trees are generally more accurate, and the corresponding conformal predictors more efficient, than their induced counterparts. One important result is that the probability estimates of decision trees when used as conformal predictors should be smoothed, here using the Laplace correction. Finally, using the more discriminating Brier score instead of accuracy as the optimization criterion produced the most efficient conformal predictions.
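The Laplace correction mentioned at the end is a one-liner: a leaf's class probability becomes (n_c + 1) / (N + C) rather than the raw relative frequency n_c / N, which avoids the extreme 0/1 estimates that hurt conformal efficiency. The counts below are invented.

```python
def laplace_estimate(class_counts):
    """Laplace-smoothed class probabilities for one leaf."""
    total, n_classes = sum(class_counts), len(class_counts)
    return [(c + 1) / (total + n_classes) for c in class_counts]

print(laplace_estimate([8, 0]))   # raw estimate would be [1.0, 0.0]
```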

  • 14.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Evolving a Locally Optimized Instance Based Learner (2008). In: Proceedings of the 2008 International Conference on Data Mining, CSREA Press, 2008, p. 124-129. Conference paper (Refereed)
  • 15.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Genetic Rule Extraction Optimizing Brier Score (2010). In: Genetic and Evolutionary Computation Conference, GECCO 2010: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation / [ed] Martin Pelikan, Jürgen Branke, ACM, 2010, p. 1007-1014. Conference paper (Refereed)
    Abstract [en]

    Most highly accurate predictive modeling techniques produce opaque models. When comprehensible models are required, rule extraction is sometimes used to generate a transparent model, based on the opaque one. Naturally, the extracted model should be as similar as possible to the opaque one. This criterion, called fidelity, is therefore a key part of the optimization function in most rule extracting algorithms. To the best of our knowledge, all existing rule extraction algorithms targeting fidelity use 0/1 fidelity, i.e., maximize the number of identical classifications. In this paper, we suggest and evaluate a rule extraction algorithm utilizing a more informed fidelity criterion. More specifically, the novel algorithm, which is based on genetic programming, minimizes the difference in probability estimates between the extracted and the opaque models, by using the generalized Brier score as fitness function. Experimental results from 26 UCI data sets show that the suggested algorithm obtained considerably higher accuracy and significantly better AUC than both the exact same rule extraction algorithm maximizing 0/1 fidelity, and the standard tree inducer J48. Somewhat surprisingly, rule extraction using the more informed fidelity metric normally resulted in less complex models, making sure that the improved predictive performance was not achieved at the expense of comprehensibility.
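The more informed fidelity criterion can be sketched directly: instead of counting identical classifications, average the Brier score between the opaque and extracted models' probability vectors; lower is better. The probability vectors below are invented.

```python
def brier(p, q):
    """Mean squared difference between two probability vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)

def fidelity_fitness(opaque_probs, extracted_probs):
    """Fitness to minimise: average Brier score against the opaque model."""
    pairs = list(zip(opaque_probs, extracted_probs))
    return sum(brier(p, q) for p, q in pairs) / len(pairs)

opaque = [[0.9, 0.1], [0.6, 0.4]]
exact  = [[0.9, 0.1], [0.6, 0.4]]   # matches the opaque estimates exactly
crisp  = [[1.0, 0.0], [1.0, 0.0]]   # same 0/1 classification on instance 1 only
print(fidelity_fitness(opaque, exact), fidelity_fitness(opaque, crisp))
```

Note how `crisp` would score perfectly under 0/1 fidelity on the first instance yet is penalized here for ignoring the opaque model's uncertainty.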

  • 16.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Genetically Evolved Nearest Neighbor Ensembles (2009). In: Data Mining: Special Issue in Annals of Information Systems / [ed] Robert Stahlbock, Stefan Lessmann, Sven F. Crone, Springer Verlag, 2009, p. 299-313. Chapter in book (Refereed)
    Abstract [en]

    Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. For the ensemble approach to work, base classifiers must not only be accurate but also diverse, i.e., they should commit their errors on different instances. Instance based learners are, however, very robust with respect to variations of a dataset, so standard resampling methods will normally produce only limited diversity. Because of this, instance based learners are rarely used as base classifiers in ensembles. In this paper, we introduce a method where Genetic Programming is used to generate kNN base classifiers with optimized k-values and feature weights. Due to the inherent inconsistency in Genetic Programming (i.e. different runs using identical data and parameters will still produce different solutions), a group of independently evolved base classifiers tends to be not only accurate but also diverse. In the experimentation, using 30 datasets from the UCI repository, two slightly different versions of kNN ensembles are shown to significantly outperform both the corresponding base classifiers and standard kNN with optimized k-values, with respect to accuracy and AUC.

  • 17.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Inconsistency: Friend or Foe (2007). In: The International Joint Conference on Neural Networks, IEEE Press, 2007, p. 1383-1388. Chapter in book (Other academic)
    Abstract [en]

    One way of obtaining accurate yet comprehensible models is to extract rules from opaque predictive models. When evaluating rule extraction algorithms, one frequently used criterion is consistency; i.e. the algorithm must produce similar rules every time it is applied to the same problem. Rule extraction algorithms based on evolutionary algorithms are, however, inherently inconsistent, something that is regarded as their main drawback. In this paper, we argue that consistency is an overvalued criterion, and that inconsistency can even be beneficial in some situations. The study contains two experiments, both using publicly available data sets, where rules are extracted from neural network ensembles. In the first experiment, it is shown that it is normally possible to extract several different rule sets from an opaque model, all having high and similar accuracy. The implication is that consistency in that perspective is useless; why should one specific rule set be considered superior? Clearly, it should instead be regarded as an advantage to obtain several accurate and comprehensible descriptions of the relationship. In the second experiment, rule extraction is used for probability estimation. More specifically, an ensemble of extracted trees is used in order to obtain probability estimates. Here, it is exactly the inconsistency of the rule extraction algorithm that makes the suggested approach possible.

  • 18.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Producing Implicit Diversity in ANN Ensembles (2012). Conference paper (Refereed)
    Abstract [en]

    Combining several ANNs into ensembles normally results in very accurate and robust predictive models. Many ANN ensemble techniques are, however, quite complicated and often explicitly optimize some diversity metric. Unfortunately, the lack of solid validation of the explicit algorithms, at least for classification, makes the use of diversity measures as part of an optimization function questionable. The merits of implicit methods, most notably bagging, are on the other hand experimentally established and well-known. This paper evaluates a number of straightforward techniques for introducing implicit diversity in ANN ensembles, including a novel technique producing diversity by using ANNs with different and slightly randomized link structures. The experimental results, comparing altogether 54 setups and two different ensemble sizes on 30 UCI data sets, show that all methods succeeded in producing implicit diversity, but that the effect on ensemble accuracy varied. Still, most setups evaluated did result in more accurate ensembles, compared to the baseline setup, especially for the larger ensemble size. As a matter of fact, several setups even obtained significantly higher ensemble accuracy than bagging. The analysis also identified that diversity was, relatively speaking, more important for the larger ensembles. Looking specifically at the methods used to increase the implicit diversity, setups using the technique that utilizes the randomized link structures generally produced the most accurate ensembles.

  • 19.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Overproduce-and-Select: The Grim Reality (2013). Conference paper (Refereed)
    Abstract [en]

    Overproduce-and-select (OPAS) is a frequently used paradigm for building ensembles. In static OPAS, a large number of base classifiers are trained, before a subset of the available models is selected to be combined into the final ensemble. In general, the selected classifiers are supposed to be accurate and diverse for the OPAS strategy to result in highly accurate ensembles, but exactly how this is enforced in the selection process is not obvious. Most often, either individual models or ensembles are evaluated, using some performance metric, on available and labeled data. Naturally, the underlying assumption is that an observed advantage for the models (or the resulting ensemble) will carry over to test data. In the experimental study, a typical static OPAS scenario, using a pool of artificial neural networks and a number of very natural and frequently used performance measures, is evaluated on 22 publicly available data sets. The discouraging result is that although a fairly large proportion of the ensembles obtained higher test set accuracies, compared to using the entire pool as the ensemble, none of the selection criteria could be used to identify these highly accurate ensembles. Despite only investigating a specific scenario, we argue that the settings used are typical for static OPAS, thus making the results general enough to question the entire paradigm.

  • 20.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Random Brains (2013). Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce and evaluate a novel method, called random brains, for producing neural network ensembles. The suggested method, which is heavily inspired by the random forest technique, produces diversity implicitly by using bootstrap training and randomized architectures. More specifically, for each base classifier multilayer perceptron, a number of randomly selected links between the input layer and the hidden layer are removed prior to training, thus resulting in potentially weaker but more diverse base classifiers. The experimental results on 20 UCI data sets show that random brains obtained significantly higher accuracy and AUC, compared to standard bagging of similar neural networks not utilizing randomized architectures. The analysis shows that the main reason for the increased ensemble performance is the ability to produce effective diversity, as indicated by the increase in the difficulty diversity measure.
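Only the masking step that distinguishes random brains from plain bagging is sketched below; the MLP training itself is omitted. The layer sizes and drop fraction are assumptions for illustration.

```python
import random

def random_link_mask(n_in, n_hidden, drop_fraction, rng):
    """1 = input-to-hidden link kept, 0 = link removed prior to training."""
    mask = [[1] * n_hidden for _ in range(n_in)]
    links = [(i, j) for i in range(n_in) for j in range(n_hidden)]
    for i, j in rng.sample(links, int(drop_fraction * len(links))):
        mask[i][j] = 0
    return mask

mask = random_link_mask(4, 3, drop_fraction=0.25, rng=random.Random(1))
print(sum(row.count(0) for row in mask))   # 3 of the 12 links were removed
```

Each base classifier draws its own mask (and its own bootstrap sample), so the ensemble members differ in architecture as well as in training data.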

  • 21.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    University of Borås, School of Business and IT.
    Empirically Investigating the Importance of Diversity (2007). Conference paper (Refereed)
  • 22.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Evaluating Standard Techniques for Implicit Diversity (2008). In: Advances in Knowledge Discovery and Data Mining, Springer Verlag, 2008, p. 613-622. Conference paper (Refereed)
  • 23.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    The Importance of Diversity in Neural Network Ensembles: An Empirical Investigation (2007). Conference paper (Refereed)
    Abstract [en]

    When designing ensembles, it is almost an axiom that the base classifiers must be diverse in order for the ensemble to generalize well. Unfortunately, there is no clear definition of the key term diversity, leading to several diversity measures and many, more or less ad hoc, methods for diversity creation in ensembles. In addition, no specific diversity measure has shown to have a high correlation with test set accuracy. The purpose of this paper is to empirically evaluate ten different diversity measures, using neural network ensembles and 11 publicly available data sets. The main result is that all diversity measures evaluated, in this study too, show low or very low correlation with test set accuracy. Having said that, two measures, double fault and difficulty, show slightly higher correlations compared to the other measures. The study furthermore shows that the correlation between accuracy measured on training or validation data and test set accuracy is also rather low. These results challenge ensemble design techniques where diversity is explicitly maximized or where ensemble accuracy on a hold-out set is used for optimization.
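One of the evaluated measures, double fault, is simple to state: the proportion of instances that a pair of classifiers both misclassify (lower values indicate more diversity). The predictions below are invented.

```python
def double_fault(preds_a, preds_b, truth):
    """Proportion of instances misclassified by both classifiers."""
    both_wrong = sum(a != t and b != t
                     for a, b, t in zip(preds_a, preds_b, truth))
    return both_wrong / len(truth)

truth = [0, 1, 1, 0, 1]
a     = [0, 1, 0, 1, 1]   # errs on the 3rd and 4th instances
b     = [0, 0, 0, 1, 1]   # errs on the 2nd, 3rd and 4th instances
print(double_fault(a, b, truth))
```

An ensemble-level value is typically obtained by averaging this pairwise measure over all pairs of base classifiers.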

  • 24.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Norinder, Ulf
    Evaluating Ensembles on QSAR Classification, 2009. Conference paper (Refereed)
    Abstract [en]

    Novel, often quite technical, algorithms for ensembling artificial neural networks are constantly suggested. Naturally, when presenting a novel algorithm, the authors, at least implicitly, claim that their algorithm, in some aspect, represents the state of the art. Obviously, the most important criterion is predictive performance, normally measured using either accuracy or area under the ROC curve (AUC). This paper presents a study where the predictive performance of two widely acknowledged ensemble techniques, GASEN and NegBagg, is compared to more straightforward alternatives like bagging. The somewhat surprising result of the experimentation, using in total 32 publicly available data sets from the medical domain, was that both GASEN and NegBagg were clearly outperformed by several of the straightforward techniques. One particularly striking result was that not applying the GASEN technique, i.e., ensembling all available networks instead of using the subset suggested by GASEN, turned out to produce more accurate ensembles.

  • 25.
    Johansson, Ulf
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Löfström, Tuve
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Sundell, Håkan
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Linnusson, Henrik
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Gidenstam, Anders
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Boström, Henrik
    School of Information and Communication Technology, Royal Institute of Technology, Sweden.
    Venn predictors for well-calibrated probability estimation trees, 2018. In: 7th Symposium on Conformal and Probabilistic Prediction and Applications: COPA 2018, 11-13 June 2018, Maastricht, The Netherlands / [ed] Alex J. Gammerman and Vladimir Vovk and Zhiyuan Luo and Evgueni N. Smirnov and Ralf L. M. Peeter, 2018, p. 3-14. Conference paper (Refereed)
    Abstract [en]

    Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. The standard solution is to employ an additional step, transforming the outputs from a classifier into probability estimates. In this paper, Venn predictors are compared to Platt scaling and isotonic regression, for the purpose of producing well-calibrated probabilistic predictions from decision trees. The empirical investigation, using 22 publicly available datasets, showed that the probability estimates from the Venn predictor were extremely well-calibrated. In fact, in a direct comparison using the accepted reliability metric, the Venn predictor estimates were the most exact on every data set.
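    Calibration in the sense used above can be inspected with a simple reliability table. The sketch below is our own minimal illustration (the function name and binning scheme are hypothetical, and it is not the reliability metric from the paper): it groups predictions by predicted probability and compares each bin's mean predicted probability with the observed positive rate; for a well-calibrated model the two columns agree.

```python
def reliability_bins(probs, labels, n_bins=5):
    """Bin predictions by predicted probability of the positive class and
    report (mean predicted probability, observed positive rate) per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    table = []
    for b in bins:
        if b:  # skip empty bins
            mean_p = sum(p for p, _ in b) / len(b)
            pos_rate = sum(y for _, y in b) / len(b)
            table.append((round(mean_p, 3), round(pos_rate, 3)))
    return table

table = reliability_bins([0.1, 0.15, 0.8, 0.9], [0, 0, 1, 1], n_bins=2)
```

    Platt scaling, isotonic regression and Venn prediction are all post-processing steps that try to make these two columns match on unseen data.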

  • 26.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Locally Induced Predictive Models, 2011. Conference paper (Refereed)
    Abstract [en]

    Most predictive modeling techniques utilize all available data to build global models. This is despite the well-known fact that for many problems, the targeted relationship varies greatly over the input space, thus suggesting that localized models may improve predictive performance. In this paper, we suggest and evaluate a technique inducing one predictive model for each test instance, using only neighboring instances. In the experimentation, several different variations of the suggested algorithm, producing localized decision trees and neural network models, are evaluated on 30 UCI data sets. The main result is that the suggested approach generally yields better predictive performance than global models built using all available training data. As a matter of fact, all techniques producing J48 trees obtained significantly higher accuracy and AUC, compared to the global J48 model. For RBF network models, with their inherent ability to use localized information, the suggested approach was only successful with regard to accuracy, while global RBF models had a better ranking ability, as seen by their generally higher AUCs.

  • 27.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Evolving decision trees using oracle guides, 2009. Conference paper (Refereed)
    Abstract [en]

    Some data mining problems require predictive models to be not only accurate but also comprehensible. Comprehensibility enables human inspection and understanding of the model, making it possible to trace why individual predictions are made. Since most high-accuracy techniques produce opaque models, accuracy is, in practice, regularly sacrificed for comprehensibility. One frequently studied technique, often able to reduce this accuracy vs. comprehensibility tradeoff, is rule extraction, i.e., the activity where another, transparent, model is generated from the opaque one. In this paper, it is argued that techniques producing transparent models, either directly from the dataset or from an opaque model, could benefit from using an oracle guide. In the experiments, genetic programming is used to evolve decision trees, and a neural network ensemble is used as the oracle guide. More specifically, the datasets used by the genetic programming when evolving the decision trees consist of several different combinations of the original training data and “oracle data”, i.e., training or test data instances, together with corresponding predictions from the oracle. In total, seven different ways of combining regular training data with oracle data were evaluated, and the results, obtained on 26 UCI datasets, clearly show that the use of an oracle guide improved the performance. As a matter of fact, trees evolved using training data only had the worst test set accuracy of all setups evaluated. Furthermore, statistical tests show that two setups, both using the oracle guide, produced significantly more accurate trees, compared to the setup using training data only.

  • 28.
    Johansson, Ulf
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Sundström, Malin
    University of Borås, Faculty of Textiles, Engineering and Business.
    Sundell, Håkan
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    König, Rikard
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Balkow, Jenny
    University of Borås, Faculty of Textiles, Engineering and Business.
    Dataanalys för ökad kundförståelse [Data analysis for increased customer understanding], 2016. Report (Other (popular science, discussion, etc.))
  • 29.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Fish or Shark: Data Mining Online Poker, 2009. Conference paper (Refereed)
    Abstract [en]

    In this paper, data mining techniques are used to analyze data gathered from online poker. The study focuses on short-handed Texas Hold’em, and the data sets used contain thousands of human players, each having played more than 1000 hands. The study has two complementary goals. First, building predictive models capable of categorizing players into good and bad players, i.e., winners and losers. Second, producing clear and accurate descriptions of what constitutes the difference between winning and losing in poker. In the experimentation, neural network ensembles are shown to be very accurate when categorizing player profiles into winners and losers. Furthermore, decision trees and decision lists used to acquire concept descriptions are shown to be quite comprehensible, and still fairly accurate. Finally, an analysis of obtained concept descriptions discovered several rather unexpected rules, indicating that the suggested approach is potentially valuable for the poker domain.

  • 30.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Boström, Henrik
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Chipper: A Novel Algorithm for Concept Description, 2008. Conference paper (Refereed)
    Abstract [en]

    In this paper, several demands placed on concept description algorithms are identified and discussed. The most important criterion is the ability to produce compact rule sets that, in a natural and accurate way, describe the most important relationships in the underlying domain. An algorithm based on the identified criteria is presented and evaluated. The algorithm, named Chipper, produces decision lists, where each rule covers a maximum number of remaining instances while meeting requested accuracy requirements. In the experiments, Chipper is evaluated on nine UCI data sets. The main result is that Chipper produces compact and understandable rule sets, clearly fulfilling the overall goal of concept description. In the experiments, Chipper's accuracy is similar to standard decision tree and rule induction algorithms, while rule sets have superior comprehensibility.

  • 31.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Accurate and Interpretable Regression Trees using Oracle Coaching, 2014. Conference paper (Refereed)
    Abstract [en]

    In many real-world scenarios, predictive models need to be interpretable, thus ruling out many machine learning techniques known to produce very accurate models, e.g., neural networks, support vector machines and all ensemble schemes. Most often, tree models or rule sets are used instead, typically resulting in significantly lower predictive performance. The overall purpose of oracle coaching is to reduce this accuracy vs. comprehensibility trade-off by producing interpretable models optimized for the specific production set at hand. The method requires production set inputs to be present when generating the predictive model, a demand fulfilled in most, but not all, predictive modeling scenarios. In oracle coaching, a highly accurate, but opaque, model is first induced from the training data. This model (“the oracle”) is then used to label both the training instances and the production instances. Finally, interpretable models are trained using different combinations of the resulting data sets. In this paper, the oracle coaching produces regression trees, using neural networks and random forests as oracles. The experiments, using 32 publicly available data sets, show that the oracle coaching leads to significantly improved predictive performance, compared to standard induction. In addition, it is also shown that a highly accurate opaque model can be successfully used as a preprocessing step to reduce the noise typically present in data, even in situations where production inputs are not available. In fact, just augmenting or replacing training data with another copy of the training set, but with the predictions from the opaque model as targets, produced significantly more accurate and/or more compact regression trees.

  • 32.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Linusson, Henrik
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Regression Trees for Streaming Data with Local Performance Guarantees, 2014. Conference paper (Refereed)
    Abstract [en]

    Online predictive modeling of streaming data is a key task for big data analytics. In this paper, a novel approach for efficient online learning of regression trees is proposed, which continuously updates, rather than retrains, the tree as more labeled data become available. A conformal predictor outputs prediction sets instead of point predictions, which for regression translates into prediction intervals. The key property of a conformal predictor is that it is always valid, i.e., the error rate, on novel data, is bounded by a preset significance level. Here, we suggest applying Mondrian conformal prediction on top of the resulting models, in order to obtain regression trees where not only the tree, but also each and every rule, corresponding to a path from the root node to a leaf, is valid. Using Mondrian conformal prediction, it becomes possible to analyze and explore the different rules separately, knowing that their accuracy, in the long run, will not be below the preset significance level. An empirical investigation, using 17 publicly available data sets, confirms that the resulting rules are independently valid, but also shows that the prediction intervals are smaller, on average, than when only the global model is required to be valid. All-in-all, the suggested method provides a data miner or a decision maker with highly informative predictive models of streaming data.
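    The validity guarantee referred to above comes from conformal calibration. A minimal split-conformal sketch for regression (our own illustration with hypothetical names; the paper builds this on top of online regression trees) takes the interval half-width from held-out calibration residuals. The Mondrian variant would apply the same calibration separately to the instances falling under each rule.

```python
import math

def conformal_interval(calib_residuals, point_pred, eps=0.1):
    """Split-conformal prediction interval: the half-width is the k-th
    smallest absolute calibration residual, k = ceil((1 - eps) * (n + 1)),
    which bounds the long-run error rate by eps."""
    rs = sorted(abs(r) for r in calib_residuals)
    k = math.ceil((1 - eps) * (len(rs) + 1))
    half_width = rs[min(k, len(rs)) - 1]
    return point_pred - half_width, point_pred + half_width

# nine calibration residuals, a point prediction of 10.0, significance 0.1
lo, hi = conformal_interval([1, -2, 3, -4, 5, -6, 7, -8, 9], 10.0, eps=0.1)
```

    Calibrating per rule (Mondrian) keeps each rule individually valid, at the cost of fewer calibration examples, and hence typically wider intervals, per rule.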

  • 33.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    One Tree to Explain Them All, 2011. Conference paper (Refereed)
    Abstract [en]

    Random forest is an often used ensemble technique, renowned for its high predictive performance. Random forest models are, however, due to their sheer complexity, inherently opaque, making human interpretation and analysis impossible. This paper presents a method of approximating the random forest with just one decision tree. The approach uses oracle coaching, a recently suggested technique where a weaker but transparent model is generated using combinations of regular training data and test data initially labeled by a strong classifier, called the oracle. In this study, the random forest plays the part of the oracle, while the transparent models are decision trees generated by either the standard tree inducer J48, or by evolving genetic programs. Evaluation on 30 data sets from the UCI repository shows that oracle coaching significantly improves both accuracy and area under ROC curve, compared to using training data only. As a matter of fact, resulting single tree models are as accurate as the random forest, on the specific test instances. Most importantly, this is not achieved by inducing or evolving huge trees having perfect fidelity; a large majority of all trees are instead rather compact and clearly comprehensible. The experiments also show that the evolution outperformed J48, with regard to accuracy, but that this came at the expense of slightly larger trees.

  • 34.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Oracle Coached Decision Trees and Lists, 2010. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a novel method for obtaining increased predictive performance from transparent models in situations where production input vectors are available when building the model. First, labeled training data is used to build a powerful opaque model, called an oracle. Second, the oracle is applied to production instances, generating predicted target values, which are used as labels. Finally, these newly labeled instances are utilized, in different combinations with normal training data, when inducing a transparent model. Experimental results, on 26 UCI data sets, show that the use of oracle coaches significantly improves predictive performance, compared to standard model induction. Most importantly, both accuracy and AUC results are robust over all combinations of opaque and transparent models evaluated. This study thus implies that the straightforward procedure of using a coaching oracle, which can be used with arbitrary classifiers, yields significantly better predictive performance at a low computational cost.
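    The three steps above can be sketched generically. Everything below, including the toy 1-NN "oracle" and the majority-class "transparent" model, is our own hypothetical illustration of the procedure, not the authors' code; in the paper the oracle is a strong opaque classifier and the transparent model a decision tree or rule list.

```python
class OneNNOracle:
    """Toy stand-in for a powerful opaque model: 1-nearest neighbour on 1-D inputs."""
    def fit(self, X, y):
        self.X, self.y = X, y
    def predict(self, X):
        return [self.y[min(range(len(self.X)),
                           key=lambda i: abs(self.X[i] - x))] for x in X]

class MajorityModel:
    """Toy stand-in for a transparent model: always predicts the majority class."""
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
    def predict(self, X):
        return [self.label] * len(X)

def oracle_coach(oracle, transparent, X_train, y_train, X_prod):
    oracle.fit(X_train, y_train)        # 1. build the opaque oracle
    y_prod = oracle.predict(X_prod)     # 2. let it label the production inputs
    transparent.fit(X_train + X_prod,   # 3. induce the transparent model on
                    y_train + y_prod)   #    training + oracle-labelled data
    return transparent

model = oracle_coach(OneNNOracle(), MajorityModel(),
                     X_train=[0.0, 1.0, 5.0], y_train=[0, 0, 1],
                     X_prod=[4.5, 6.0, 5.5])
```

    The oracle's labels for the production inputs shift the combined training set, which is why the coached transparent model tracks the oracle on exactly the instances that matter.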

  • 35.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Using Genetic Programming to Obtain Implicit Diversity, 2009. Conference paper (Refereed)
    Abstract [en]

    When performing predictive data mining, the use of ensembles is known to increase prediction accuracy, compared to single models. To obtain this higher accuracy, ensembles should be built from base classifiers that are both accurate and diverse. The question of how to balance these two properties in order to maximize ensemble accuracy is, however, far from solved and many different techniques for obtaining ensemble diversity exist. One such technique is bagging, where implicit diversity is introduced by training base classifiers on different subsets of available data instances, thus resulting in less accurate, but diverse base classifiers. In this paper, genetic programming is used as an alternative method to obtain implicit diversity in ensembles by evolving accurate, but different base classifiers in the form of decision trees, thus exploiting the inherent inconsistency of genetic programming. The experiments show that the GP approach outperforms standard bagging of decision trees, obtaining significantly higher ensemble accuracy over 25 UCI datasets. This superior performance stems from base classifiers having both higher average accuracy and more diversity. Implicitly introducing diversity using GP thus works very well, since evolved base classifiers tend to be highly accurate and diverse.

  • 36.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuwe
    University of Borås, School of Business and IT.
    Boström, Henrik
    Obtaining accurate and comprehensible classifiers using oracle coaching, 2012. In: Intelligent Data Analysis, ISSN 1088-467X, E-ISSN 1571-4128, Vol. 16, no. 2, p. 247-263. Article in journal (Refereed)
    Abstract [en]

    While ensemble classifiers often reach high levels of predictive performance, the resulting models are opaque and hence do not allow direct interpretation. When employing methods that do generate transparent models, predictive performance typically has to be sacrificed. This paper presents a method of improving predictive performance of transparent models in the very common situation where instances to be classified, i.e., the production data, are known at the time of model building. This approach, named oracle coaching, employs a strong classifier, called an oracle, to guide the generation of a weaker, but transparent model. This is accomplished by using the oracle to predict class labels for the production data, and then applying the weaker method on this data, possibly in conjunction with the original training set. Evaluation on 30 data sets from the UCI repository shows that oracle coaching significantly improves predictive performance, measured by both accuracy and area under ROC curve, compared to using training data only. This result is shown to be robust for a variety of methods for generating the oracles and transparent models. More specifically, random forests and bagged radial basis function networks are used as oracles, while J48 and JRip are used for generating transparent models. The evaluation further shows that significantly better results are obtained when using the oracle-classified production data together with the original training data, instead of using only oracle data. An analysis of the fidelity of the transparent models to the oracles shows that performance gains can be expected from increasing oracle performance rather than from increasing fidelity. Finally, it is shown that further performance gains can be achieved by adjusting the relative weights of training data and oracle data.

  • 37.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Norinder, Ulf
    Boström, Henrik
    The Trade-Off between Accuracy and Comprehensibility for Predictive In Silico Modeling, 2011. In: Future Medicinal Chemistry, ISSN 1756-8919, E-ISSN 1756-8927, Vol. 3, no. 6, p. 647-663. Article in journal (Refereed)
  • 38.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Norinder, Ulf
    Boström, Henrik
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Using Feature Selection with Bagging and Rule Extraction in Drug Discovery, 2010. Conference paper (Refereed)
    Abstract [en]

    This paper investigates different ways of combining feature selection with bagging and rule extraction in predictive modeling. Experiments on a large number of data sets from the medicinal chemistry domain, using standard algorithms implemented in the Weka data mining workbench, show that feature selection can lead to significantly improved predictive performance. When combining feature selection with bagging, employing the feature selection on each bootstrap obtains the best result. When using decision trees for rule extraction, the effect of feature selection can actually be detrimental, unless the transductive approach oracle coaching is also used. However, employing oracle coaching will lead to significantly improved performance, and the best results are obtained when performing feature selection before training the opaque model. The overall conclusion is that it can make a substantial difference for the predictive performance exactly how feature selection is used in conjunction with other techniques.

  • 39.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Rule Extraction using Genetic Programming for Accurate Sales Forecasting, 2014. Conference paper (Refereed)
    Abstract [en]

    The purpose of this paper is to propose and evaluate a method for reducing the inherent tendency of genetic programming to overfit small and noisy data sets. In addition, the use of different optimization criteria for symbolic regression is demonstrated. The key idea is to reduce the risk of overfitting noise in the training data by introducing an intermediate predictive model in the process. More specifically, instead of directly evolving a genetic regression model based on labeled training data, the first step is to generate a highly accurate ensemble model. Since ensembles are very robust, the resulting predictions will contain less noise than the original data set. In the second step, an interpretable model is evolved, using the ensemble predictions, instead of the true labels, as the target variable. Experiments on 175 sales forecasting data sets, from one of Sweden’s largest wholesale companies, show that the proposed technique obtained significantly better predictive performance, compared to both straightforward use of genetic programming and the standard M5P technique. Naturally, the level of improvement depends critically on the performance of the intermediate ensemble.

  • 40.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Improving GP Classification Performance by Injection of Decision Trees, 2010. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel hybrid method combining genetic programming and decision tree learning. The method starts by estimating a benchmark level of reasonable accuracy, based on decision tree performance on bootstrap samples of the training set. Next, a normal GP evolution is started with the aim of producing an accurate GP. At even intervals, the best GP in the population is evaluated against the accuracy benchmark. If the GP has higher accuracy than the benchmark, the evolution continues normally until the maximum number of generations is reached. If the accuracy is lower than the benchmark, two things happen. First, the fitness function is modified to allow larger GPs, able to represent more complex models. Secondly, a decision tree with increased size and trained on a bootstrap of the training data is injected into the population. The experiments show that the hybrid solution of injecting decision trees into a GP population gives synergetic effects producing results that are better than using either technique separately. The results, from 18 UCI data sets, show that the proposed method clearly outperforms normal GP, and is significantly better than the standard decision tree algorithm.

  • 41.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Finding the Tree in the Forest, 2010. In: Proceeding of IADIS International Conference Applied Computing 2010 / [ed] Hans Weghorn, Pedro Isaías, Radu Vasio, IADIS Press, 2010, p. 135-142. Conference paper (Refereed)
    Abstract [en]

    Decision trees are often used for decision support since they are fast to train, easy to understand and deterministic, i.e., they always create identical trees from the same training data. This property is, however, only inherent in the actual decision tree algorithm; nondeterministic techniques such as genetic programming could very well produce different trees with similar accuracy and complexity for each execution. Clearly, if more than one solution exists, it would be misleading to present a single tree to a decision maker. On the other hand, too many alternatives could not be handled manually, and would only lead to confusion. Hence, we argue for a method aimed at generating a suitable number of alternative decision trees with comparable accuracy and complexity. When too many alternative trees exist, they are grouped and representative accurate solutions are selected from each group. Using domain knowledge, a decision maker could then select a single best tree and, if required, be presented with a small set of similar solutions, in order to further improve his decisions. In this paper, a method for generating alternative decision trees is suggested and evaluated. All in all, four different techniques for selecting accurate representative trees from groups of similar solutions are presented. Experiments on 19 UCI data sets show that there often exist dozens of alternative trees, and that one of the evaluated techniques clearly outperforms all others for selecting accurate and representative models.

  • 42.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Genetic Programming: a Tool for Flexible Rule Extraction, 2007. In: IEEE Congress on Evolutionary Computation, IEEE Press, 2007, p. 1304-1310. Chapter in book (Other academic)
  • 43.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Niklasson, Lars
    G-REX: A Versatile Framework for Evolutionary Data Mining, 2008. Conference paper (Refereed)
  • 44.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Niklasson, Lars
    University of Borås, School of Business and IT.
    Instance Ranking Using Ensemble Spread, 2007. Conference paper (Refereed)
    Abstract [en]

    This paper investigates a technique for predicting ensemble uncertainty originally proposed in the weather forecasting domain. The overall purpose is to find out if the technique can be modified to operate on a wider range of regression problems. The main difference, when moving outside the weather forecasting domain, is the lack of extensive statistical knowledge readily available for weather forecasting. In this study, three different modifications are suggested to the original technique. In the experiments, the modifications are compared to each other and to two straightforward techniques, using ten publicly available regression problems. Three of the techniques show promising results, especially one modification based on genetic algorithms. The suggested modification can accurately determine whether the confidence in ensemble predictions should be high or low.
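    The basic idea, using the spread among ensemble members' predictions as a confidence signal, can be sketched as follows. This is our own minimal illustration (names are hypothetical); the paper's actual modifications, e.g. the genetic-algorithm variant, are not reproduced here.

```python
import statistics

def rank_by_spread(member_preds):
    """member_preds: one list of regression predictions per ensemble member.
    Returns instance indices ordered from smallest spread (highest
    confidence) to largest spread (lowest confidence)."""
    n_instances = len(member_preds[0])
    spreads = [statistics.pstdev([m[i] for m in member_preds])
               for i in range(n_instances)]
    return sorted(range(n_instances), key=lambda i: spreads[i])

# three ensemble members, two test instances: the members agree closely
# on instance 0 but disagree widely on instance 1
order = rank_by_spread([[1.0, 5.0], [1.1, 9.0], [0.9, 2.0]])
```

    When the members agree, the ensemble's point prediction is presumed trustworthy; large spread flags instances where confidence should be low.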

  • 45.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Niklasson, Lars
    The Importance of Representation Languages When Extracting Estimation Rules, 2007. Conference paper (Refereed)
  • 46.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Using Genetic Programming to Increase Rule Quality, 2008. In: Proceedings of the Twenty-First International FLAIRS Conference, AAAI Press, 2008, p. 288-293. Conference paper (Refereed)
    Abstract [en]

    Rule extraction is a technique aimed at transforming highly accurate opaque models like neural networks into comprehensible models without losing accuracy. G-REX is a rule extraction technique based on Genetic Programming that has previously performed well in several studies. This study has two objectives: to evaluate two new fitness functions for G-REX and to show how G-REX can be used as a rule inducer. The fitness functions are designed to optimize two alternative quality measures, area under ROC curves and a new comprehensibility measure called brevity. Rules with good brevity classify typical instances with few and simple tests and use complex conditions only for atypical examples. Experiments using thirteen publicly available data sets show that the two novel fitness functions succeeded in increasing brevity and area under the ROC curve without sacrificing accuracy. When compared to a standard decision tree algorithm, G-REX achieved slightly higher accuracy, but also added additional quality to the rules by increasing their AUC or brevity significantly.

  • 47.
    König, Rikard
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Johansson, Ulf
    Jönköping University.
    Riveiro, Maria
    Högskolan i Skövde.
    Brattberg, Peter
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Modeling Golf Player Skill Using Machine Learning, 2017. In: International Cross-Domain Conference for Machine Learning and Knowledge Extraction: CD-MAKE 2017: Machine Learning and Knowledge Extraction, Calabri, 2017, p. 275-294. Conference paper (Refereed)
    Abstract [en]

    In this study, we apply machine learning techniques to modeling golf player skill, using a dataset consisting of 277 golfers. The dataset includes 28 quantitative metrics, related to the club head at impact and ball flight, captured using a Doppler-radar. For modeling, cost-sensitive decision trees and random forest are used to discern between less skilled players and very good ones, i.e., Hackers and Pros. The results show that both random forest and decision trees achieve high predictive accuracy, with regard to true positive rate, accuracy and area under the ROC-curve. A detailed interpretation of the decision trees shows that they concur with modern swing theory, e.g., consistency is very important, while face angle, club path and dynamic loft are the most important evaluated swing factors, when discerning between Hackers and Pros. Most of the Hackers could be identified by a rather large deviation in one of these values compared to the Pros. Hackers, which had less variation in these aspects of the swing, could instead be identified by a steeper swing plane and a lower club speed. The importance of the swing plane is an interesting finding, since it was not expected and is not easy to explain.

  • 48.
    Linusson, Henrik
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Efficiency Comparison of Unstable Transductive and Inductive Conformal Classifiers2014Conference paper (Refereed)
    Abstract [en]

    In the conformal prediction literature, it appears axiomatic that transductive conformal classifiers possess a higher predictive efficiency than inductive conformal classifiers; however, this depends on whether or not the nonconformity function tends to overfit misclassified test examples. With the conformal prediction framework's increasing popularity, it thus becomes necessary to clarify the settings in which this claim holds true. In this paper, the efficiency of transductive conformal classifiers based on decision tree, random forest and support vector machine classification models is compared to the efficiency of the corresponding inductive conformal classifiers. The results show that the efficiency of conformal classifiers based on standard decision trees or random forests is substantially improved when they are used in the inductive mode, while conformal classifiers based on support vector machines are more efficient in the transductive mode. In addition, an analysis is presented that discusses the effects of calibration set size on inductive conformal classifier efficiency.
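    The inductive mode compared above computes nonconformity scores once on a held-out calibration set and ranks each test example against them, which is what makes it cheap relative to the transductive mode (where the model is refit per test example and label). A minimal sketch of the inductive prediction step, assuming nonconformity scores such as 1 minus the predicted class probability are already available (this is a generic illustration, not the paper's experimental setup):

    ```python
    def p_value(cal_scores, test_score):
        """Conformal p-value: fraction of calibration scores at least as
        nonconforming as the test score, counting the test point itself."""
        ge = sum(1 for s in cal_scores if s >= test_score)
        return (ge + 1) / (len(cal_scores) + 1)

    def predict_region(cal_scores_per_class, test_scores_per_class, epsilon):
        """Return every label whose p-value exceeds the significance level;
        validity guarantees the true label is included with prob. >= 1 - epsilon."""
        return [label for label in cal_scores_per_class
                if p_value(cal_scores_per_class[label],
                           test_scores_per_class[label]) > epsilon]

    # Toy per-class calibration scores (e.g. 1 - predicted probability).
    cal = {"a": [0.1, 0.2, 0.15, 0.3], "b": [0.7, 0.8, 0.9, 0.6]}
    test = {"a": 0.2, "b": 0.95}
    print(predict_region(cal, test, epsilon=0.2))  # → ['a']
    ```

    Efficiency in the paper's sense is then just the tendency of these regions to be small (ideally singletons) at a given epsilon, which is where the choice of underlying model and of inductive versus transductive score computation matters.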

  • 49.
    Linusson, Henrik
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Johansson, Ulf
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Boström, Henrik
    Löfström, Tuwe
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Reliable Confidence Predictions Using Conformal Prediction2016In: Lecture Notes in Computer Science, 2016, Vol. 9651, p. 77-88Conference paper (Refereed)
    Abstract [en]

    Conformal classifiers output confidence prediction regions, i.e., multi-valued predictions that are guaranteed to contain the true output value of each test pattern with some predefined probability. In order to fully utilize the predictions provided by a conformal classifier, it is essential that those predictions are reliable, i.e., that a user is able to assess the quality of the predictions made. Although conformal classifiers are statistically valid by default, the error probability of the output prediction regions depends on their size, in such a way that smaller, and thus potentially more interesting, predictions are more likely to be incorrect. This paper proposes, and evaluates, a method for producing refined error probability estimates of prediction regions that takes their size into account. The end result is a binary conformal confidence predictor that is able to provide accurate error probability estimates for those prediction regions containing only a single class label.

  • 50.
    Linusson, Henrik
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Signed-Error Conformal Regression2014In: Advances in Knowledge Discovery and Data Mining 18th Pacific-Asia Conference, PAKDD 2014 Tainan, Taiwan, May 13-16, 2014 Proceedings, Part I, Springer , 2014, p. 224-236Conference paper (Refereed)
    Abstract [en]

    This paper suggests a modification of the Conformal Prediction framework for regression that will strengthen the associated guarantee of validity. We motivate the need for this modification and argue that our conformal regressors are more closely tied to the actual error distribution of the underlying model, thus allowing for more natural interpretations of the prediction intervals. In our experiments, we provide an empirical comparison of our conformal regressors to traditional conformal regressors and show that the proposed modification results in more robust two-tailed predictions and more efficient one-tailed predictions.
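    The signed-error idea can be sketched in a split-conformal setting: using the signed residual y − ŷ as the nonconformity score, rather than its absolute value, calibrates the two tails of the error distribution separately, so intervals may be asymmetric and a one-tailed bound comes from a single quantile. This is an illustrative reconstruction under split-conformal assumptions, not the authors' exact procedure:

    ```python
    import math

    def conformal_quantile(values, q):
        """Conservative empirical quantile: the ceil(q * (n + 1))-th order
        statistic, clipped to the available range."""
        vals = sorted(values)
        k = min(len(vals) - 1, max(0, math.ceil(q * (len(vals) + 1)) - 1))
        return vals[k]

    def signed_interval(cal_residuals, y_hat, epsilon):
        """Two-tailed interval giving each tail epsilon/2 of the error mass,
        built from signed calibration residuals y - y_hat."""
        lo = conformal_quantile(cal_residuals, epsilon / 2)
        hi = conformal_quantile(cal_residuals, 1 - epsilon / 2)
        return y_hat + lo, y_hat + hi

    # A model that consistently under-predicts (all residuals positive) gets
    # an interval shifted above y_hat; absolute-error conformal regression
    # would instead be forced to a symmetric interval around y_hat.
    residuals = [0.5, 1.0, 1.5, 2.0, 0.8, 1.2, 0.9, 1.1, 1.4]
    print(signed_interval(residuals, y_hat=10.0, epsilon=0.2))  # → (10.5, 12.0)
    ```

    Dropping one of the two quantiles yields the one-tailed predictions the abstract mentions, at the full significance level epsilon rather than epsilon/2.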
