Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. For the ensemble approach to work, the base classifiers must be not only accurate but also diverse, i.e., they should commit their errors on different instances. Instance-based learners are, however, very robust with respect to variations in a dataset, so standard resampling methods will normally produce only limited diversity. Because of this, instance-based learners are rarely used as base classifiers in ensembles. In this paper, we introduce a method in which Genetic Programming is used to generate kNN base classifiers with optimized k-values and feature weights. Due to the inherent inconsistency of Genetic Programming (i.e., different runs using identical data and parameters will still produce different solutions), a group of independently evolved base classifiers tends to be not only accurate but also diverse. In the experiments, using 30 datasets from the UCI repository, two slightly different versions of kNN ensembles are shown to significantly outperform both the corresponding base classifiers and standard kNN with optimized k-values, with respect to both accuracy and AUC.
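The core idea can be illustrated with a minimal Python sketch (this is an illustration only, not the paper's actual Genetic Programming implementation): each ensemble member is a kNN classifier with its own k-value and feature weights, and the ensemble combines member predictions by majority vote. Here, randomly sampled weights stand in for the GP-evolved ones, so that members disagree on some instances even though they share the same training data.

```python
import random
from collections import Counter

def weighted_knn_predict(X_train, y_train, x, k, weights):
    """Predict the label of x by kNN with feature-weighted squared Euclidean distance."""
    dists = []
    for xi, yi in zip(X_train, y_train):
        d = sum(w * (a - b) ** 2 for w, a, b in zip(weights, xi, x))
        dists.append((d, yi))
    dists.sort(key=lambda t: t[0])
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def ensemble_predict(X_train, y_train, x, members):
    """Majority vote over base classifiers; each member is a (k, weights) pair."""
    votes = Counter(weighted_knn_predict(X_train, y_train, x, k, w)
                    for k, w in members)
    return votes.most_common(1)[0][0]

# Toy two-class problem: class 0 clustered near the origin, class 1 near (5, 5).
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]

# Five members with randomly drawn k-values and feature weights
# (a stand-in for the optimized values the paper evolves with GP).
random.seed(0)
members = [(random.choice([1, 3]),
            [random.uniform(0.5, 1.5) for _ in range(2)])
           for _ in range(5)]

print(ensemble_predict(X, y, (0.2, 0.2), members))
print(ensemble_predict(X, y, (5.5, 5.5), members))
```

In the paper's method, the per-member k-values and weights are not random but are optimized by Genetic Programming on the training data; the sketch only shows how such heterogeneous members would be combined at prediction time.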