Deduplication of metadata : master's thesis
Martin Chuchurski, 2019, undergraduate thesis
Abstract: Duplicates are redundant data that increase both the required storage space and the cost of serving the data. They also significantly degrade the quality of the database's search results. Detecting and eliminating redundant data is therefore crucial for restoring and maintaining the quality of the stored data and of the database itself. Various methods have been used to detect duplicates; the most widely used are pattern-matching algorithms, in particular phonetic string-matching algorithms. From the wide variety of available algorithms, we chose those that best suited our needs: the Jaccard, Jaro, Jaro-Winkler, and Levenshtein distance algorithms were used in developing our deduplication application. They were combined into a new hybrid approach for detecting duplicates in a metadata database. On a real database, the application showed promising results while maintaining relatively high speed and fairly low memory consumption.
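The abstract describes combining several string-similarity metrics into one hybrid duplicate score. A minimal sketch of that idea is shown below, assuming a simple weighted average of a Levenshtein-based similarity and character-bigram Jaccard similarity; the weights, the 0.85 threshold, and the function names are illustrative assumptions, not the thesis's actual parameters (Jaro and Jaro-Winkler terms would be added to the combination in the same way).

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over character bigrams."""
    sa = {a[i:i + 2] for i in range(len(a) - 1)}
    sb = {b[i:i + 2] for i in range(len(b) - 1)}
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def is_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    """Hybrid score: equal-weight mix of Levenshtein similarity and Jaccard.

    Weights and threshold are illustrative, not taken from the thesis.
    """
    lev_sim = 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
    score = 0.5 * lev_sim + 0.5 * jaccard(a, b)
    return score >= threshold
```

In this sketch, near-identical metadata strings such as "Deduplication of metadata" and "Deduplication of Metadata" score above the threshold and are flagged as duplicates, while unrelated strings fall well below it.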
Keywords: deduplication, metadata, text similarity metrics, duplicate
Published in DKUM: 08.11.2019
Full text (848.73 KB)