Search the digital library catalog
Results 1 - 6 of 6
1.
K-vertex: a novel model for the cardinality constraints enforcement in graph databases : doctoral dissertation
Martina Šestak, 2022, doctoral dissertation

Abstract: The increasing number of network-shaped domains calls for the use of graph database technology, where there are continuous efforts to develop mechanisms that address domain challenges. Relationships as 'first-class citizens' in graph databases can play an important role in studying the structural and behavioural characteristics of a domain. In this dissertation, we focus on the cardinality constraints mechanism, which also exploits the edges of the underlying property graph. The results of our literature review indicate a clear research gap when it comes to concepts and approaches, validated in practice, for specifying and representing complex cardinality constraints in graph databases. To address this gap, we present a novel and comprehensive approach, the k-vertex cardinality constraints model, for enforcing higher-order cardinality constraint rules on edges, which capture domain-related business rules of varying complexity. In our formal definition of the k-vertex cardinality constraint concept, we go beyond simple patterns formed between two nodes and employ more complex structures such as hypernodes, which consist of nodes connected by edges. We formally introduce the concept of k-vertex cardinality constraints and their properties, as well as the property graph-based model used for their representation. Our k-vertex model includes the specification of k-vertex cardinality constraints following a pre-defined syntax, their visual representation through a property graph-based data model, and a set of algorithms implementing the basic operations relevant to working with k-vertex cardinality constraints. In the practical part of the dissertation, we evaluate the applicability of the k-vertex model by carrying out two separate case studies, showing how the model can be implemented for fraud detection and data classification use cases. We build a set of relevant k-vertex cardinality constraints based on real data and explain how each step of our approach is carried out. The results obtained from the case studies show that the k-vertex model is well suited to representing complex business rules as cardinality constraints and can be used to enforce these cardinality constraints in real-world business scenarios. Next, we analyze the performance of our model when inserting new edges into graph databases with a varying number of edges and outgoing node degrees, and compare it against the case with no cardinality constraints checking. The results of the statistical analysis confirm the stable performance of the k-vertex model on varying datasets when compared against a case with no cardinality constraints checking. The k-vertex model shows no significant performance effect on property graphs of varying complexity and is able to serve as a cardinality constraints enforcement mechanism without a large effect on database performance.
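
As a rough illustration of the enforcement idea (a hedged sketch only, not the dissertation's actual algorithms; the graph structure and names below are invented, and a single-node bound is a simplification of the k-vertex concept), a cardinality constraint can be checked at edge-insertion time:

```python
# Simplified sketch: the dissertation's k-vertex constraints range over
# hypernodes; this covers only the basic case of bounding how many edges
# of a given type a node may participate in.
from collections import defaultdict

class PropertyGraph:
    def __init__(self):
        # (source node, edge type) -> list of target nodes
        self.edges = defaultdict(list)

    def add_edge(self, source, edge_type, target, max_cardinality):
        """Insert an edge only if the cardinality constraint still holds."""
        existing = self.edges[(source, edge_type)]
        if len(existing) >= max_cardinality:
            raise ValueError(
                f"{source} already has {len(existing)} '{edge_type}' edges "
                f"(max {max_cardinality})")
        existing.append(target)

g = PropertyGraph()
g.add_edge("account:1", "OWNS", "card:A", max_cardinality=2)
g.add_edge("account:1", "OWNS", "card:B", max_cardinality=2)
# A third OWNS edge from account:1 would now raise, mirroring
# constraint checking on insert.
```
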
Keywords: Graph database, K-vertex cardinality constraint, Cardinality, Business rule, Property graph data model, Property graph schema, Hypernode, Performance analysis, Fraud detection, Data classification
Published in DKUM: 10.08.2022; Views: 88; Downloads: 15
.pdf Full text (3,43 MB)

2.
Software based encoder/decoder generation for data exchange optimization in the internet of things : master's thesis
Tjaž Vračko, 2022, master's thesis

Abstract: Efficient encoding of data is an important part of projects in the Internet of Things space. Communication packets must be kept as small as possible in order to minimize the power consumption of devices. In this thesis, an automatic code generation tool, irpack, is proposed to unify the way packets are defined across all future projects at Institute IRNAS. From a schema, the tool generates encoder and decoder source code in target programming languages. A schema evolution system is also defined, through which changes to packets can remain compatible across multiple versions. The tool is then applied to a selection of past projects to gauge its usefulness. It is determined that irpack is able to encode the same data into packets of similar or smaller size, while also providing additional versioning information.
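
A minimal sketch of the schema-driven bit-packing idea the abstract describes (the packet layout, field names, and functions below are hypothetical and do not reproduce irpack's actual generated code):

```python
# Hypothetical packet: 1-byte schema version, 2-byte signed temperature in
# units of 0.01 degC, 1-byte battery percentage -- 4 bytes on the wire.
import struct

SCHEMA_VERSION = 1

def encode_reading(temperature_c: float, battery_pct: int) -> bytes:
    return struct.pack(">BhB", SCHEMA_VERSION,
                       round(temperature_c * 100), battery_pct)

def decode_reading(packet: bytes) -> dict:
    version, temp_raw, battery = struct.unpack(">BhB", packet)
    if version != SCHEMA_VERSION:
        # A real schema evolution system would dispatch to an older decoder here.
        raise ValueError(f"Unsupported schema version: {version}")
    return {"temperature_c": temp_raw / 100, "battery_pct": battery}

packet = encode_reading(21.37, 88)
print(len(packet), decode_reading(packet))
# -> 4 {'temperature_c': 21.37, 'battery_pct': 88}
```
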
Keywords: encoding/decoding, schema, schema evolution, bit packing, code generation
Published in DKUM: 31.01.2022; Views: 269; Downloads: 50
.pdf Full text (2,58 MB)

3.
Crosswalk of most used metadata schemes and guidelines for metadata interoperability (Version 1.0)
Milan Ojsteršek, 2021, complete scientific database or corpus

Abstract: This resource provides crosswalks among the most commonly used metadata schemes and guidelines for describing digital objects in Open Science, including:
- RDA metadata IG recommendation of the metadata element set,
- EOSC Pilot - EDMI metadata set,
- Dublin Core Metadata Terms,
- DataCite 4.3 metadata schema,
- DCAT 2.0 metadata schema and DCAT 2.0 application profile,
- EUDAT B2Find metadata recommendation,
- OpenAIRE Guidelines for Data Archives,
- OpenAIRE Guidelines for Literature Repositories 4.0,
- OpenAIRE Guidelines for Other Research Products,
- OpenAIRE Guidelines for Software Repository Managers,
- OpenAIRE Guidelines for CRIS Managers,
- Crossref 4.4.2 metadata XML schema,
- Harvard Dataverse metadata schema,
- DDI Codebook 2.5 metadata XML schema,
- Europeana EDM metadata schema,
- Schema.org,
- Bioschemas,
- the PROV Ontology.
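
As a rough sketch of how such a crosswalk can be applied programmatically (the mapping entries below are a few well-known Dublin Core-to-DataCite correspondences, not the exact content of the published spreadsheet):

```python
# Hypothetical excerpt of a crosswalk: source field -> target field path.
DC_TO_DATACITE = {
    "dc:title": "titles/title",
    "dc:creator": "creators/creator/creatorName",
    "dc:subject": "subjects/subject",
    "dc:identifier": "identifier",
}

def crosswalk(record: dict, mapping: dict) -> dict:
    """Translate a flat source record into target-schema field paths."""
    return {mapping[key]: value
            for key, value in record.items() if key in mapping}

dc_record = {"dc:title": "Crosswalk of most used metadata schemes",
             "dc:creator": "Milan Ojsteršek"}
print(crosswalk(dc_record, DC_TO_DATACITE))
```
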
Keywords: crosswalk, metadata, EDMI metadata set, Dublin Core, DataCite 4.3 metadata schema, DCAT 2.0 metadata schema, EUDAT B2Find metadata recommendation, OpenAIRE Guidelines for Data Archives, OpenAIRE Guidelines for Literature Repositories 4.0, OpenAIRE Guidelines for Other Research Products, OpenAIRE Guidelines for Software Repository Managers, OpenAIRE Guidelines for CRIS Managers, Crossref 4.4.2 metadata XML schema, Harvard Dataverse metadata schema, DDI Codebook 2.5 metadata XML schema, Europeana EDM metadata schema, Schema.org, Bioschemas, the PROV Ontology
Published in DKUM: 21.09.2021; Views: 685; Downloads: 34
.xlsx Research data (169,58 KB)

4.
Perceived factors and obstacles to cognitive schema change during economic crisis
Ana Arzenšek, 2011, original scientific article

Abstract: The main objective is to present the perceived factors in cognitive schema change as experienced by participants from two Slovenian sectors, and to compare them with factors from schema change theory in order to evaluate the specific circumstances and obstacles to effective cognitive schema change. Thirty-one interviews with participants from six companies were conducted twice during the 2008 economic crisis. The prevalent perceived antecedents of schema change lie within the organisation and in the business environment. Economic and financial crises and personal characteristics also act as stimulating factors. The prevalent obstacles to schema change, as perceived by participants, are the stability of current cognitive schemas, the personal characteristics of management, and rigidity.
Keywords: cognitive schema, change, factors, obstacles, economic crisis
Published in DKUM: 10.01.2018; Views: 775; Downloads: 292
.pdf Full text (544,86 KB)

5.
Case study of data warehouse development for monitoring of energy consumption in public buildings
Goran Kovačić, 2016, master's thesis

Abstract: The goal of this case study was to examine which implementation of a data warehouse yields better results in the observed scenario: monitoring of energy consumption in public buildings. A data warehouse (DW) is a type of database created with the goal of preparing data for analysis and reporting. We implemented two versions of the DW, one based on the star and the other on the snowflake schema model. A series of tests was conducted to evaluate the implemented solutions. Statistical analysis showed that, for the observed scenarios, the implementation based on the snowflake schema performs better, with both shorter ETL execution times and a smaller DW size.
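
For illustration, the two designs compared in the thesis can be sketched as follows (a hedged example; the table and column names are invented here, not taken from the thesis):

```python
# Star schema: the building dimension is denormalised into one table.
# Snowflake schema: the same dimension is normalised into building/city/region.
import sqlite3

STAR = """
CREATE TABLE dim_building (building_id INTEGER PRIMARY KEY,
                           name TEXT, city TEXT, region TEXT);
CREATE TABLE fact_energy  (building_id INTEGER REFERENCES dim_building,
                           measured_at TEXT, kwh REAL);
"""

SNOWFLAKE = """
CREATE TABLE dim_region   (region_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE dim_city     (city_id INTEGER PRIMARY KEY, city TEXT,
                           region_id INTEGER REFERENCES dim_region);
CREATE TABLE dim_building (building_id INTEGER PRIMARY KEY, name TEXT,
                           city_id INTEGER REFERENCES dim_city);
CREATE TABLE fact_energy  (building_id INTEGER REFERENCES dim_building,
                           measured_at TEXT, kwh REAL);
"""

for name, ddl in (("star", STAR), ("snowflake", SNOWFLAKE)):
    con = sqlite3.connect(":memory:")
    con.executescript(ddl)
    print(name, "schema created")
```
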
Keywords: data warehouse, dimensional model, star schema, snowflake schema, ETL process
Published in DKUM: 04.07.2016; Views: 862; Downloads: 94
.pdf Full text (2,11 MB)

6.
Modelling functional requirements for dynamic service selection in business process models
Aleš Frece, 2013, doctoral dissertation

Abstract: In this dissertation, we propose a metamodel, independent of any particular business process modelling language or notation, for the formal specification of functional requirements for dynamic service selection in business process models. The proposed metamodel addresses configurable dynamic service selection based on content and context. With the advantages of this approach, the metamodel provides greater flexibility of the modelled processes while reducing their complexity. We verify the feasibility and effectiveness of the metamodel by incorporating it into Business Process Model and Notation 2.0, the de facto standard for business process modelling, by proposing an extension to that specification. To describe interfaces at dynamic service selection points, we propose a solution supporting XML Schema definitions of structural constraints that depend on individual use cases (i.e., on the candidate services). To this end, we propose extensions to the XML Schema specification that are also more broadly applicable: they enable the definition of types and elements that are used in different use cases with use-case-specific structural constraints, thereby minimising the number of types needed to model a given domain. We demonstrate the effectiveness of the metamodel by measuring the complexity of real business process models with dedicated metrics. We also propose a new method, compatible with service-oriented architecture, for measuring the functional size of business process models; to this end, we adapt the COSMIC method to the business process management domain. We then apply the proposed method to assess the metamodel's contribution to reducing the effort required to implement the modelled business processes, further corroborating our findings on the effectiveness of the proposed solution.
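
As a conceptual sketch of content- and context-based dynamic service selection as the abstract describes it (the rule structure and service names are hypothetical, not the metamodel's notation):

```python
# Each candidate service carries a predicate over message content and
# execution context; the first matching candidate is selected at runtime.
def select_service(message: dict, context: dict, candidates: list) -> str:
    for service, predicate in candidates:
        if predicate(message, context):
            return service
    raise LookupError("No candidate service matches the current content/context")

candidates = [
    ("ExpressShippingService",
     lambda m, c: m["weight_kg"] < 2 and c["priority"] == "high"),
    ("StandardShippingService", lambda m, c: True),  # default fallback
]

print(select_service({"weight_kg": 1.2}, {"priority": "high"}, candidates))
# -> ExpressShippingService
```
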
Keywords: business process modelling, business process flexibility, functional requirements, dynamic service selection, Business Process Model and Notation, XML Schema
Published in DKUM: 08.04.2013; Views: 1590; Downloads: 195
.pdf Full text (4,04 MB)
