Search the digital library catalog

1 - 1 / 1
1.
Deduplication of metadata : master's thesis
Martin Chuchurski, 2019, undergraduate thesis

Abstract: Duplicates are redundant data that increase the required storage space as well as the serving cost. They also have a considerable impact on the quality of the database's search results. Detecting and eliminating redundant data is therefore crucial for restoring and maintaining the quality of the stored data and of the database itself. Different methods have been used to detect duplicates; the most widely used are pattern matching algorithms, more precisely phonetic string matching algorithms. There is a wide variety of algorithms to choose from, and we opted for those that best suited our needs. The Jaccard, Jaro, Jaro-Winkler and Levenshtein distance algorithms were used in the development of our deduplication application and were joined together to create a new hybrid approach for detecting duplicates in a metadata database. On a real database, the application showed promising results while maintaining relatively high speed and fairly low memory consumption. (An illustrative sketch of such a hybrid comparison follows this record.)
Keywords: deduplication, metadata, textual similarity metrics, duplicate
Published in DKUM: 08.11.2019; Views: 456; Downloads: 44
Full text (.pdf, 848.73 KB)
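
The abstract names the four metrics but not how they are combined. The sketch below (Python, not taken from the thesis) illustrates one way such a hybrid check could work: two metadata field values are scored with Jaccard similarity over character trigrams, Jaro, Jaro-Winkler and normalised Levenshtein similarity, and flagged as duplicates when the equal-weight average reaches a threshold. The equal weighting, the 0.85 threshold, the trigram tokenisation and all function names are illustrative assumptions, not the author's implementation.

# Illustrative sketch only, not the thesis code. Combines the four metrics named
# in the abstract with an assumed equal-weight average and an assumed threshold.

def levenshtein_similarity(a: str, b: str) -> float:
    """Normalised Levenshtein similarity: 1 - edit_distance / max(len)."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return 1.0 - prev[-1] / max(len(a), len(b))


def jaro(a: str, b: str) -> float:
    """Jaro similarity: matching characters within a window, minus transpositions."""
    if a == b:
        return 1.0
    if not a or not b:
        return 0.0
    window = max(max(len(a), len(b)) // 2 - 1, 0)
    a_flags, b_flags = [False] * len(a), [False] * len(b)
    matches = 0
    for i, ca in enumerate(a):
        for j in range(max(0, i - window), min(len(b), i + window + 1)):
            if not b_flags[j] and ca == b[j]:
                a_flags[i] = b_flags[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    a_m = [ca for ca, f in zip(a, a_flags) if f]
    b_m = [cb for cb, f in zip(b, b_flags) if f]
    transpositions = sum(x != y for x, y in zip(a_m, b_m)) / 2
    return (matches / len(a) + matches / len(b)
            + (matches - transpositions) / matches) / 3


def jaro_winkler(a: str, b: str, p: float = 0.1) -> float:
    """Jaro-Winkler: boosts the Jaro score for a shared prefix of up to 4 characters."""
    score = jaro(a, b)
    prefix = 0
    for ca, cb in zip(a, b):
        if ca != cb or prefix == 4:
            break
        prefix += 1
    return score + prefix * p * (1.0 - score)


def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity over character n-grams (trigrams by default)."""
    def grams(s: str) -> set:
        return {s[i:i + n] for i in range(len(s) - n + 1)} or {s}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb)


def is_duplicate(field_a: str, field_b: str, threshold: float = 0.85) -> bool:
    """Flag two metadata field values as duplicates when the averaged score
    of the four metrics reaches the (assumed) threshold."""
    a, b = field_a.lower().strip(), field_b.lower().strip()
    score = (jaccard_similarity(a, b) + jaro(a, b)
             + jaro_winkler(a, b) + levenshtein_similarity(a, b)) / 4
    return score >= threshold


if __name__ == "__main__":
    # Near-identical titles are flagged; an unrelated title is not.
    print(is_duplicate("Deduplication of metadata", "Deduplication of meta-data"))  # True
    print(is_duplicate("Deduplication of metadata", "Graph colouring heuristics"))  # False

A check like this would typically be applied per metadata field (e.g. title or author) rather than to whole records, with weights and the threshold tuned on the target database; those choices are also assumptions here.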
