Search the digital library catalog


Results 1 - 5 of 5
1.
An efficient iterative approach to explainable feature learning
Dino Vlahek, Domen Mongus, 2023, original scientific article

Keywords: data classification, explainable artificial intelligence, feature learning, knowledge discovery
Published in DKUM: 13.06.2024; Views: 129; Downloads: 12
.pdf Full text (1,95 MB)

2.
Categorisation of open government data literature
Aljaž Ferencek, Mirjana Kljajić Borštnar, Ajda Pretnar Žagar, 2022, review article

Abstract: Background: Due to the emerging global interest in Open Government Data, research papers on various topics in this area have increased. Objectives: This paper aims to categorise Open government data research. Methods/Approach: A literature review was conducted to provide a complete overview and classification of open government data research. Hierarchical clustering, a cluster analysis method, was used, and a hierarchy of clusters on selected data sets emerged. Results: The results of this study suggest that there are two distinct clusters of research, which either focus on government perspectives and policies on OGD, initiatives, and portals or focus on regional studies, adoption of OGD, platforms, and barriers to implementation. Further findings suggest that research gaps could be segmented into many thematic areas, focusing on success factors, best practices, the impact of open government data, barriers/challenges in implementing open government data, etc. Conclusions: The extension of the paper, which was first presented at the Entrenova conference, provides a comprehensive overview of research to date on the implementation of OGD and points out that this topic has already received research attention, which focuses on specific segments of the phenomenon and signifies in which direction new research should be made.
Keywords: open government data, open government data research, hierarchical clustering, OGD classification, OGD literature overview
Published in DKUM: 12.06.2024; Views: 134; Downloads: 11
.pdf Full text (539,06 KB)
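The clustering method the abstract describes can be sketched compactly: papers represented by feature sets, a pairwise distance, and agglomerative merging until clusters are too far apart. The papers, keyword sets, and threshold below are illustrative assumptions, not data from the study.

```python
# Single-linkage agglomerative clustering over Jaccard distances between
# keyword sets -- a minimal sketch of hierarchical clustering of literature.

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| for two keyword sets."""
    return 1.0 - len(a & b) / len(a | b)

def single_linkage(items, threshold):
    """Repeatedly merge the two closest clusters until the closest pair
    is farther apart than `threshold`. Returns lists of item indices."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(jaccard_distance(items[a], items[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break  # remaining clusters are too dissimilar to merge
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical papers, each represented by its keyword set.
papers = [
    {"ogd", "portal", "policy"},
    {"ogd", "policy", "government"},
    {"adoption", "barriers", "platform"},
    {"adoption", "barriers", "region"},
]
clusters = single_linkage(papers, threshold=0.8)
```

On this toy input the procedure recovers two groups, loosely mirroring the government-perspective vs. adoption-and-barriers split the abstract reports.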

3.
Scoping review on the multimodal classification of depression and experimental study on existing multimodal models
Umut Arioz, Urška Smrke, Nejc Plohl, Izidor Mlakar, 2022, review article

Abstract: Depression is a prevalent comorbidity in patients with severe physical disorders, such as cancer, stroke, and coronary diseases. Although it can significantly impact the course of the primary disease, the signs of depression are often underestimated and overlooked. The aim of this paper was to review algorithms for the automatic, uniform, and multimodal classification of signs of depression from human conversations and to evaluate their accuracy. For the scoping review, the PRISMA guidelines for scoping reviews were followed. In the scoping review, the search yielded 1095 papers, out of which 20 papers (8.26%) included more than two modalities, and 3 of those papers provided codes. Within the scope of this review, support vector machine (SVM), random forest (RF), and long short-term memory network (LSTM; with gated and non-gated recurrent units) models, as well as different combinations of features, were identified as the most widely researched techniques. We tested the models using the DAIC-WOZ dataset (original training dataset) and using the SymptomMedia dataset to further assess their reliability and dependency on the nature of the training datasets. The best performance was obtained by the LSTM with gated recurrent units (F1-score of 0.64 for the DAIC-WOZ dataset). However, with a drop to an F1-score of 0.56 for the SymptomMedia dataset, the method also appears to be the most data-dependent.
Keywords: multimodal depression classification, scoping review, real-world data, mental health
Published in DKUM: 11.08.2023; Views: 529; Downloads: 64
.pdf Full text (1,43 MB)
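The F1-scores reported in the abstract combine precision and recall into a single metric. As a quick reference, it can be computed from binary predictions as follows; the labels below are made up for illustration, not taken from the study's datasets.

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives -> precision/recall both zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative depressed (1) / not depressed (0) labels and predictions.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
score = f1_score(y_true, y_pred)
```

Here 4 true positives, 1 false positive, and 1 false negative give precision = recall = 0.8, so F1 = 0.8.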

4.
K-vertex: a novel model for the cardinality constraints enforcement in graph databases : doctoral dissertation
Martina Šestak, 2022, doctoral dissertation

Abstract: The increasing number of network-shaped domains calls for the use of graph database technology, where there are continuous efforts to develop mechanisms to address domain challenges. Relationships as 'first-class citizens' in graph databases can play an important role in studying the structural and behavioural characteristics of the domain. In this dissertation, we focus on studying the cardinality constraints mechanism, which also exploits the edges of the underlying property graph. The results of our literature review indicate an obvious research gap when it comes to concepts and approaches for specifying and representing complex cardinality constraints for graph databases validated in practice. To address this gap, we present a novel and comprehensive approach called the k-vertex cardinality constraints model for enforcing higher-order cardinality constraints rules on edges, which capture domain-related business rules of varying complexity. In our formal k-vertex cardinality constraint concept definition, we go beyond simple patterns formed between two nodes and employ more complex structures such as hypernodes, which consist of nodes connected by edges. We formally introduce the concept of k-vertex cardinality constraints and their properties as well as the property graph-based model used for their representation. Our k-vertex model includes the k-vertex cardinality constraint specification by following a pre-defined syntax, followed by a visual representation through a property graph-based data model and a set of algorithms for the implementation of basic operations relevant for working with k-vertex cardinality constraints. In the practical part of the dissertation, we evaluate the applicability of the k-vertex model on use cases by carrying out two separate case studies, where we present how the model can be implemented on fraud detection and data classification use cases. We build a set of relevant k-vertex cardinality constraints based on real data and explain how each step of our approach is to be done. The results obtained from the case studies prove that the k-vertex model is entirely suitable to represent complex business rules as cardinality constraints and can be used to enforce these cardinality constraints in real-world business scenarios. Next, we analyze the performance efficiency of our model on inserting new edges into graph databases with a varying number of edges and outgoing node degrees and compare it against the case when there is no cardinality constraints checking. The results of the statistical analysis confirm a stable performance of the k-vertex model on varying datasets when compared against a case with no cardinality constraints checking. The k-vertex model shows no significant performance effect on property graphs with varying complexity, and it is able to serve as a cardinality constraints enforcement mechanism without large effects on the database performance.
Keywords: Graph database, K-vertex cardinality constraint, Cardinality, Business rule, Property graph data model, Property graph schema, Hypernode, Performance analysis, Fraud detection, Data classification
Published in DKUM: 10.08.2022; Views: 771; Downloads: 96
.pdf Full text (3,43 MB)
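The core idea of enforcing a cardinality constraint at edge-insertion time can be illustrated with a deliberately simple sketch. This is an ordinary per-label out-degree limit, far simpler than the hypernode-based k-vertex model the dissertation defines; the node names, labels, and limits are hypothetical.

```python
from collections import defaultdict

class ConstrainedGraph:
    """Property-graph sketch: each node may have at most max_out[label]
    outgoing edges with a given label. Inserts that would exceed the
    limit are rejected instead of being written."""

    def __init__(self, max_out):
        self.max_out = max_out           # label -> max outgoing edges
        self.out = defaultdict(list)     # (node, label) -> target nodes

    def add_edge(self, src, label, dst):
        limit = self.max_out.get(label)
        if limit is not None and len(self.out[(src, label)]) >= limit:
            return False                 # cardinality constraint violated
        self.out[(src, label)].append(dst)
        return True

# A hypothetical business rule: a customer may own at most two accounts.
g = ConstrainedGraph(max_out={"OWNS": 2})
ok1 = g.add_edge("alice", "OWNS", "acct1")
ok2 = g.add_edge("alice", "OWNS", "acct2")
ok3 = g.add_edge("alice", "OWNS", "acct3")  # third insert exceeds the limit
```

Checking the constraint before each write, as here, is what makes constraint enforcement cost something on insertion; the dissertation's performance analysis measures exactly that overhead for its richer constraint model.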

5.
An algorithm for protecting knowledge discovery data
Boštjan Brumen, Izidor Golob, Tatjana Welzer Družovec, Ivan Rozman, Marjan Družovec, Hannu Jaakkola, 2003, original scientific article

Abstract: In the paper, we present an algorithm that can be applied to protect data before a data mining process takes place. The data mining, a part of the knowledge discovery process, is mainly about building models from data. We address the following question: can we protect the data and still allow the data modelling process to take place? We consider the case where the distributions of original data values are preserved while the values themselves change, so that the resulting model is equivalent to the one built with original data. The presented formal approach is especially useful when the knowledge discovery process is outsourced. The application of the algorithm is demonstrated through an example.
Keywords: data protection algorithm, classification algorithm, disclosure control, data mining, knowledge discovery, data security
Published in DKUM: 01.06.2012; Views: 2365; Downloads: 56
URL Link to full text
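The abstract's idea, changing the data values while preserving their distribution so a model built on the protected data is equivalent, can be illustrated with a consistent-substitution sketch. This mapping scheme is an illustration only, not the paper's algorithm; the sample data is invented.

```python
def pseudonymize(values):
    """Replace each distinct value with a consistent token, so the
    frequency distribution is preserved while the values change."""
    mapping = {}
    out = []
    for v in values:
        if v not in mapping:
            mapping[v] = f"tok{len(mapping)}"  # new token per distinct value
        out.append(mapping[v])
    return out

# Hypothetical categorical attribute before outsourcing the mining task.
data = ["red", "blue", "red", "green", "red", "blue"]
protected = pseudonymize(data)
```

Because the substitution is injective and applied consistently, every value's frequency (and hence the distribution a classifier learns from) is unchanged, while the original values never leave the data owner.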
