
Nearest Neighbors Search Algorithm for High Dimensional Data


G. Vasanthi, Dr. S. Artheeswari and Dr. S. Kalaivany
Abstract

Machine learning and computational geometry offer new methods for search, regression, and classification over large amounts of high-dimensional data. The k-nearest neighbours (k-NN) algorithm is a widely used machine learning method for finding nearest neighbours, and several data structures solve the Nearest Neighbour search problem efficiently in higher dimensions. Distinctive image features from scale-invariant keypoints provide a pattern-matching method based on fast Nearest Neighbour search algorithms [2]. In practice, the user is not interested in finding only the single closest point; he or she wants to find the k closest points. The naive way to solve the Nearest Neighbour search problem is linear search; for k Nearest Neighbour search, the naive algorithm consists of sorting the points according to their distance from the query point. For matching high-dimensional features, we find two algorithms to be the most efficient: the randomized k-d tree and the priority search k-means tree. These data structures are used to perform the Nearest Neighbour search [1].
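
The abstract describes the naive k Nearest Neighbour search as sorting all points by their distance from the query point. The following Python sketch illustrates that baseline only; the data set, query point, and value of k are illustrative assumptions, not taken from the paper, and the randomized k-d tree and priority search k-means tree it compares against are not shown here.

import math

def euclidean(p, q):
    # Euclidean distance between two points of equal dimension.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def knn_linear(points, query, k):
    # Naive k-NN by linear search: compute every distance, sort the
    # points by distance to the query, and keep the k closest.
    # Cost is O(n log n) for n points, independent of the dimension.
    return sorted(points, key=lambda p: euclidean(p, query))[:k]

# Illustrative usage with a small 3-dimensional data set (assumed values).
data = [(1.0, 2.0, 0.5), (4.0, 0.0, 1.0), (0.9, 2.1, 0.4), (3.0, 3.0, 3.0)]
print(knn_linear(data, query=(1.0, 2.0, 0.6), k=2))

This baseline scans every point on every query, which is exactly the cost that tree-based structures such as the randomized k-d tree and the priority search k-means tree are designed to reduce for high-dimensional feature matching.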

Volume 12 | 08-Special Issue

Pages: 1215-1218

DOI: 10.5373/JARDCS/V12SP8/20202636