51. Application of normalization method to fracture toughness testing of welds with pronounced strength heterogeneity : doctoral dissertation. Primož Štefane, 2022, doctoral dissertation. Description: This doctoral dissertation presents the results of an extensive fracture testing programme of
welds with pronounced strength heterogeneity. The purpose of this programme was to determine
the fracture toughness of heterogeneous welds that contain a midplane crack. Applying
standardized fracture testing methods to heterogeneous welds may lead to overestimation or
underestimation of fracture toughness and, consequently, to an inaccurate assessment of
structural integrity. The reason is that variations in the mechanical properties of the different
material regions in the weld have a significant impact on the development of deformation at
the crack tip, and consequently on the crack driving force. The experimental procedures in the scope
of this research included the fabrication of weld sample plates, welded with the MAG process.
The welds were fabricated using two different electrodes, one with higher and one with lower
mechanical properties with respect to the S690QL base material, in order to replicate extreme
variations of mechanical properties in the weldment. The fabricated welds were then characterized
in detail using metallography, three-point bend impact testing, indentation hardness
measurements, and tensile testing of flat miniature and round-bar standard tensile specimens.
The resistance of the welds to stable tearing was investigated by fracture testing of square, surface-cracked
SE(B) specimens containing a weld midplane notch. The J-integral was estimated from
plastic work using the normalization data reduction method included in standard ASTM E1820.
The advantage of this method is that no special equipment
or complex testing procedure is needed to measure ductile crack growth during fracture testing.
Ductile crack growth is determined directly from the load-displacement record by applying an
appropriate calibration function together with the physical initial and final crack lengths,
measured post-mortem with the nine-point method. Several correction factors had to be
calibrated in order to successfully apply the normalization data reduction method to
fracture testing of welds with pronounced strength heterogeneity. For that reason, parametric
finite element analyses were conducted for several weld configurations. The finite element models
assumed plane strain conditions in order to provide calibrated factors that comply with the
plane strain equations included in ASTM E1820. Additionally, crack tip constraint was
extensively analysed and correlated with the plastic deformation fields, which clarified the altered
deformation behaviour of the modelled welds in comparison with the base material and the
corresponding effect on fracture toughness. Finally, the calibrated factors were applied to the
computation of the J-integral from the data measured during fracture testing. J-R resistance
curves were constructed for the tested heterogeneous welds and compared to those of the
base material, directly showing the effect of variations in mechanical properties on
weld fracture behaviour. Keywords: weld, strength mismatch, fracture, normalization data reduction technique, plastic
correction factors, test fixture, SE(B) specimen, J-R resistance curve. Published in DKUM: 10.01.2023; Views: 778; Downloads: 186. Full text (20,57 MB)
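The J-integral estimation described in this abstract follows the general ASTM E1820 decomposition into an elastic part, computed from the stress intensity factor K, and a plastic part, computed from the area under the load-displacement record (J_pl = η·A_pl/(B_N·b0)). A minimal sketch of that computation is shown below; the η factor and all numerical inputs are illustrative placeholders, not the dissertation's calibrated weld-specific factors.

```python
import numpy as np

def j_integral(load_N, disp_m, K_MPa_sqrt_m, E_GPa, nu, eta, B_N_m, b0_m):
    """Estimate J (kJ/m^2) as the sum of an elastic (K-based) and a plastic
    (area-based) part, in the general form of ASTM E1820. The load-displacement
    record passed in is taken as the plastic component of the test record;
    eta is a placeholder, not a calibrated weld-specific value."""
    E_prime = (E_GPa * 1e3) / (1.0 - nu ** 2)     # plane strain modulus, MPa
    j_el = K_MPa_sqrt_m ** 2 / E_prime * 1e3      # elastic part, kJ/m^2
    # plastic work: trapezoidal area under the load-displacement record, in J
    a_pl = float(np.sum((load_N[1:] + load_N[:-1]) * 0.5 * np.diff(disp_m)))
    j_pl = eta * a_pl / (B_N_m * b0_m) / 1e3      # plastic part, kJ/m^2
    return j_el + j_pl
```

Calibrating η for strength-mismatched welds, rather than using the homogeneous-material values tabulated in the standard, is precisely what the parametric finite element analyses in the dissertation address.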
52. Open Science Summer School : Maribor, Slovenia, 12.–16. September 2022. 2022, other monographs and completed works. Description: Skills and competencies related to data literacy and data management are indispensable in today's data-intensive research environment, in STEM as well as in the humanities and social sciences. In recent years, there has been a significant shift in scientific communication towards open science, and many new approaches, tools and technologies have emerged, enabling a research process that goes beyond the traditional way of doing research. Open science has the potential to lead researchers to an open, collaborative, transparent, reproducible and, therefore, effective research process.
The University of Maribor Summer School offers PhD students and young researchers an opportunity to learn and develop skills in the fields of research data management, open access to research results, the use of open research infrastructure, supercomputing resources and the European Open Science Cloud. With lectures and workshops led by domestic and international experts, the course provides a deeper understanding of these topics and valuable practical experience. Keywords: scientific research, open science, open publications, research data. Published in DKUM: 21.12.2022; Views: 896; Downloads: 11. Link to file (material contains multiple files)
53. Mapping of the emergence of society 5.0 : a bibliometric analysis. Vasja Roblek, Maja Meško, Iztok Podbregar, 2021, original scientific article. Description: Background and purpose: The study aims to answer the research question: which essential cornerstone technological innovations enable the transformation from Society 4.0 and Industry 4.0 to Society 5.0 and Industry 5.0? The study helps practitioners and researchers understand the meaning of Society 5.0 and familiarise themselves with the drivers that will help shape Society 5.0 policies and play an important role in its further development. The authors therefore conducted a quantitative bibliometric study that provides insights into the importance of the topic and incorporates current characteristics and future research trends.
Methodology: The study used algorithmic co-occurrence of keywords to gain a different insight into the evolution of Society 5.0. Thirty-six selected articles from the Web of Science database were analysed with bibliometric analysis and overlay visualisation.
Results: The co-occurrence analysis shows that the terms artificial intelligence, cyber-physical systems, big data, Industry 4.0, Industry 5.0, open innovation, Society 5.0 and super-smart society have been widely used in research over the last three years.
Conclusion: The study presents a bibliometric analysis of the current and future development drivers of Society 5.0. According to the results, the transition from Society 4.0 to Society 5.0 can be achieved by implementing knowledge and technologies in the IoT, robotics, and Big Data to transform society into a smart society (Society 5.0). In particular, the concept would enable the adaptation of services and industrial activities to individuals' real needs. Furthermore, these technologies allow advanced digital service platforms that will eventually be integrated into all areas of life. Keywords: society 5.0, industry 5.0, information society, smart society, data-driven innovations. Published in DKUM: 15.09.2022; Views: 614; Downloads: 30. Full text (988,92 KB) (material contains multiple files)
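The core step behind a keyword co-occurrence map of this kind is counting how often pairs of keywords appear together in the same article. A minimal sketch, using a made-up toy sample rather than the 36 Web of Science records analysed in the study:

```python
from itertools import combinations
from collections import Counter

# toy sample of keyword sets, one set per article (illustrative only)
articles = [
    {"artificial intelligence", "industry 4.0", "big data"},
    {"society 5.0", "industry 5.0", "artificial intelligence"},
    {"big data", "society 5.0", "artificial intelligence"},
]

cooccurrence = Counter()
for keywords in articles:
    # count every unordered keyword pair that appears in the same article
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1
```

Tools such as VOSviewer perform essentially this counting before laying the terms out as an overlay visualisation, with edge weights given by the pair counts.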
54. K-vertex: a novel model for the cardinality constraints enforcement in graph databases : doctoral dissertation. Martina Šestak, 2022, doctoral dissertation. Description: The increasing number of network-shaped domains calls for the use of graph database technology, where there are continuous efforts to develop mechanisms to address domain challenges. Relationships as 'first-class citizens' in graph databases can play an important role in studying the structural and behavioural characteristics of the domain. In this dissertation, we focus on the cardinality constraints mechanism, which also exploits the edges of the underlying property graph. The results of our literature review indicate an obvious research gap when it comes to concepts and approaches for specifying and representing complex cardinality constraints for graph databases validated in practice.
To address this gap, we present a novel and comprehensive approach called the k-vertex cardinality constraints model for enforcing higher-order cardinality constraints rules on edges, which capture domain-related business rules of varying complexity. In our formal k-vertex cardinality constraint concept definition, we go beyond simple patterns formed between two nodes and employ more complex structures such as hypernodes, which consist of nodes connected by edges. We formally introduce the concept of k-vertex cardinality constraints and their properties as well as the property graph-based model used for their representation. Our k-vertex model includes the k-vertex cardinality constraint specification by following a pre-defined syntax followed by a visual representation through a property graph-based data model and a set of algorithms for the implementation of basic operations relevant for working with k-vertex cardinality constraints.
In the practical part of the dissertation, we evaluate the applicability of the k-vertex model in two separate case studies, showing how the model can be implemented for fraud detection and data classification use cases. We build a set of relevant k-vertex cardinality constraints based on real data and explain how each step of our approach is to be carried out. The results obtained from the case studies show that the k-vertex model is well suited to representing complex business rules as cardinality constraints and can be used to enforce these constraints in real-world business scenarios. Next, we analyse the performance of our model when inserting new edges into graph databases with a varying number of edges and outgoing node degree, and compare it against the case with no cardinality constraint checking. The results of the statistical analysis confirm stable performance of the k-vertex model on varying datasets when compared against the case without cardinality constraint checking. The k-vertex model shows no significant performance effect on property graphs of varying complexity, and it is able to serve as a cardinality constraint enforcement mechanism without a large effect on database performance. Keywords: Graph database, K-vertex cardinality constraint, Cardinality, Business rule, Property graph data model, Property graph schema, Hypernode, Performance analysis, Fraud detection, Data classification. Published in DKUM: 10.08.2022; Views: 771; Downloads: 96. Full text (3,43 MB)
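The general idea of checking a cardinality constraint before an edge is committed can be sketched on a minimal in-memory property graph. This toy example only covers a simple per-node outgoing-edge limit; the dissertation's k-vertex model covers far richer constraints over hypernode patterns, so the class and method names here are illustrative, not the dissertation's API.

```python
from collections import defaultdict

class PropertyGraph:
    """Toy property graph that rejects edge inserts violating a simple
    (label, edge type) -> max outgoing edges cardinality rule."""

    def __init__(self):
        self.max_out = {}                    # (label, edge_type) -> limit
        self.labels = {}                     # node_id -> label
        self.out_edges = defaultdict(list)   # node_id -> [(edge_type, dst)]

    def add_node(self, node_id, label):
        self.labels[node_id] = label

    def constrain(self, label, edge_type, max_count):
        self.max_out[(label, edge_type)] = max_count

    def add_edge(self, src, dst, edge_type):
        # check the constraint before committing the edge
        limit = self.max_out.get((self.labels[src], edge_type))
        current = sum(1 for (t, _) in self.out_edges[src] if t == edge_type)
        if limit is not None and current >= limit:
            raise ValueError(
                f"cardinality violated: {src} already has {current} '{edge_type}' edges")
        self.out_edges[src].append((edge_type, dst))
```

The per-insert check above is also the operation whose overhead the dissertation's performance analysis measures against a database with no constraint checking.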
55. 35th Bled eConference Digital Restructuring and Human (Re)action : June 26–29, 2022, Bled, Slovenia, Conference Proceedings. 2022, proceedings. Description: The Bled eConference, organised by the University of Maribor, Faculty of Organizational Sciences, has been shaping electronic interaction since 1988. After two years of the COVID-19 pandemic, when the conference was held online, this year we met again in Bled, Slovenia. The theme of the 35th conference is "Digital Restructuring and Human (Re)action". During the pandemic, we experienced the important role of digital technologies in enabling people and enterprises to interact, collaborate, and find new opportunities and ways to overcome various challenges. The use of digital technologies in these times has accelerated the digital transformation of enterprises and societies. It will be important to leverage this momentum for further implementation and exploitation of digital technologies that will bring positive impacts and solutions for people, enterprises and societies. The need to achieve sustainability goals and sustainable development of society has increased, and digital technologies will continue to play an important role in achieving them. The papers in these proceedings address the digital transformation of enterprises, digital wellness and health solutions, digital ethics challenges, artificial intelligence and data science solutions, new and digital business models, digital consumer behaviour and solutions, including the impact of social media, the restructuring of work due to digital technologies, digital education challenges and examples, and solutions for smart sustainable cities.
Keywords: digital transformation, digital business, digital technologies, innovations, digitalization, sustainable development, smart and sustainable cities and societies, digital health and wellness, artificial intelligence and data science, digital ethics, digital education, restructured work, digital consumer, social media. Published in DKUM: 23.06.2022; Views: 1176; Downloads: 67. Full text (15,08 MB) (material contains multiple files)
56. Analiza uporabe in postavitve podatkovnega jezera (Analysis of the use and deployment of a data lake) : master's thesis. Marcel Koren, 2021, master's thesis. Description: Big data and data lakes are concepts increasingly used in recent years in connection with the growing volume of generated data. This master's thesis presents the properties of data lakes, what they are intended for, how they can be established, and how they relate to big data. We describe in detail the open-source solution Apache Hadoop and the cloud solution Microsoft Azure Data Lake, along with the tools these solutions offer, most notably Apache Spark and Azure Databricks. We then show how to set them up and conduct an experiment in which we identify their advantages and disadvantages based on execution speed and cost. Keywords: big data, data lakes, Hadoop, Spark, Azure Data Lake. Published in DKUM: 16.12.2021; Views: 1204; Downloads: 127. Full text (2,31 MB)
57. A Comparison of Traditional and Modern Data Warehouse Architectures : bachelor's thesis. Rok Virant, 2021, bachelor's thesis. Description: Data has never been as desired or valued as it is today. The value of data and information over the past decade has not only changed trends in business and the IT industry but has also changed the dynamics of work. Enormous amounts of aggregate data offer companies and other organisations the option to explore and study data samples. Data collection and information processing are new dynamic factors, not only for individuals but also for corporations. Companies that are able to process large amounts of data in the shortest possible time can place themselves in a leading position in their field. In this bachelor's thesis we describe the basic concepts and factors that have shaped new, cloud-based data warehouse technologies, and emphasize why and how these technologies are used. We focus on how changing technology influenced users and their consumption of data, the changing dynamics of work, as well as changes to the data itself. In the practical part, we created two data warehouse environments (on-premises and cloud) and compared them with each other. The experiment underlined the fact that cloud data warehouses (CDWH) are, in certain situations, not always faster than traditional ones (TDWH). Keywords: Data Warehouses, Cloud Computing, Outsourcing, Data, Information. Published in DKUM: 18.10.2021; Views: 1121; Downloads: 177. Full text (3,58 MB)
58. Crosswalk of most used metadata schemes and guidelines for metadata interoperability (Version 1.0). Milan Ojsteršek, 2021, completed scientific data collection. Description: This resource provides crosswalks among the most commonly used metadata schemes and guidelines for describing digital objects in Open Science, including:
- RDA metadata IG recommendation of the metadata element set,
- EOSC Pilot - EDMI metadata set,
- Dublin CORE Metadata Terms,
- DataCite 4.3 metadata schema,
- DCAT 2.0 metadata schema and DCAT 2.0 application profile,
- EUDAT B2Find metadata recommendation,
- OpenAIRE Guidelines for Data Archives,
- OpenAIRE Guidelines for literature repositories 4.0,
- OpenAIRE Guidelines for Other Research Products,
- OpenAIRE Guidelines for Software Repository Managers,
- OpenAIRE Guidelines for CRIS Managers,
- Crossref 4.4.2 metadata XML schema,
- Harvard Dataverse metadata schema,
- DDI Codebook 2.5 metadata XML schema,
- Europeana EDM metadata schema,
- Schema.org,
- Bioschemas,
- The PROV Ontology. Keywords: crosswalk, metadata, EDMI metadata set, Dublin CORE, DataCite 4.3 metadata schema, DCAT 2.0 metadata schema, EUDAT B2Find metadata recommendation, OpenAIRE Guidelines for Data Archives, OpenAIRE Guidelines for literature repositories 4.0, OpenAIRE Guidelines for Other Research Products, OpenAIRE Guidelines for Software Repository Managers, OpenAIRE Guidelines for CRIS Managers, Crossref 4.4.2 metadata XML schema, Harvard Dataverse metadata schema, DDI Codebook 2.5 metadata XML schema, Europeana EDM metadata schema, Schema.org, Bioschemas, The PROV Ontology. Published in DKUM: 21.09.2021; Views: 2030; Downloads: 70. Research data (169,58 KB) (material contains multiple files)
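In practice, a crosswalk like the ones listed above is applied as a field-name mapping from one scheme to another. A minimal sketch; the mapping shown is a tiny illustrative excerpt (Dublin Core element names to DataCite-style field names), not the dataset's full crosswalk:

```python
# Illustrative excerpt of a Dublin Core -> DataCite-style crosswalk.
DC_TO_DATACITE = {
    "dc:title": "titles",
    "dc:creator": "creators",
    "dc:publisher": "publisher",
    "dc:identifier": "identifiers",
}

def crosswalk(record, mapping):
    """Rename the keys of a metadata record according to a crosswalk mapping;
    keys without a mapping entry are passed through unchanged."""
    return {mapping.get(key, key): value for key, value in record.items()}
```

A real crosswalk additionally has to handle structural differences (repeatable vs. single-valued elements, nested fields, controlled vocabularies), which is why the dataset documents each mapping pair explicitly rather than as a flat rename.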
59. Proceedings of the 2021 7th Student Computer Science Research Conference (StuCoSReC). 2021, proceedings. Description: The 7th Student Computer Science Research Conference is an answer to the fact that modern PhD and even Master's level Computer Science programmes foster early research activity among students. The prime goal of the conference is to be a place for students to present their research work and thus further encourage early research. The conference also aims to establish an environment where students from different institutions meet, get to know each other, exchange ideas, and make friends and research colleagues. Last but not least, the conference is also meant to be a meeting place for students and senior researchers from institutions other than their own. Keywords: student conference, computer and information science, artificial intelligence, data science, data mining. Published in DKUM: 13.09.2021; Views: 1360; Downloads: 172. Full text (11,87 MB) (material contains multiple files)
60. Implementation of a new reporting process in a Group X. Sara Črešnik, 2021, master's thesis. Description: Reporting is present in every company. Whether the company is small or big, reporting cannot be avoided, and it plays a crucial role in the process and progress of business. The quality of reporting affects the development of the work environment and the company. Since a business report is a document that contains business information supporting future-oriented business decisions, it is very important that it is designed to contain the key information for the recipient and to provide support for business decisions. The reporting process can take place horizontally, or vertically upwards or downwards; content and structure vary depending on the recipient of the report. We live in an age in which our every step is accompanied by digitization, computerization, artificial intelligence, mass data, the Internet of Things, machine learning, and robotics. These changes have affected reporting as well: the processes of data acquisition, processing and sharing have changed, the quantity of data has increased, and the time available to prepare reports has decreased. We can have data without information, but we cannot have information without data. There is never enough time, especially nowadays when we are used to having everything at our fingertips. These are two conflicting factors: more data, and less time to prepare quality reports. Systems are developed to optimize the process, increase efficiency and quality and, most importantly today, to obtain mass data in the shortest possible time. It is therefore important to adopt and implement software that can help with daily tasks, and to know how to process huge amounts of real-time data and deliver the information they contain.
It is crucial for companies to keep up with their environment and implement changes and innovations in their business processes. A company is like a living organism: it must constantly evolve and grow. As soon as it stops growing and evolving, it can fail, because it starts lagging behind and is no longer competitive. To deliver faster feedback, companies need data of better quality. There are tools that can improve the business process and free up the capacity of the human agents involved. The goal is to harness the employees' full potential and knowledge for important tasks, such as analyzing, reviewing, and understanding data and acting upon them, while using information technology to automate repetitive processes and facilitate better communication.
The focus of this master's thesis is the reporting process in Group X, one of the world leaders in the automotive industry, a multinational corporation based in Canada with subsidiaries around the world. The business reporting implemented for the headquarters in Canada has to address the complexity of the multinational corporation in order to support the decision process.
The aim of the thesis is to propose a reporting process for preparing and producing reports with a huge amount of data in a time-efficient manner. We start by examining the existing processes and, building on that, identify the processes required for the reports to reach their final recipients. Our goal is to identify the toolset that would increase efficiency, accuracy and credibility, and reduce errors, in the fastest possible time. We investigate a short-term and a long-term solution. By a short-term solution, we mean a system, program or tool that can help us increase our potential by using digital resources that already exist in the organization. By a long-term solution, we mean one that requires the adoption of specialized future tools in the field of reporting and repetitive processes, identified with current knowledge and expectations for development: machine learning, robotic process automation, and artificial intelligence. Keywords: consolidated reporting, reporting process, robotic process automation, business intelligence, artificial intelligence, machine learning, SharePoint, Big Data, digital transformation, electronic data interchange. Published in DKUM: 01.09.2021; Views: 894; Downloads: 7. Full text (1,71 MB)
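The kind of repetitive consolidation step that such reporting automation targets can be sketched very simply: take a raw export and aggregate it into the figures a consolidated report needs. The column names and figures below are invented sample data, not Group X's actual reporting structure.

```python
import csv
import io
from collections import defaultdict

# invented sample export (illustrative only)
SAMPLE = """subsidiary,month,revenue
Graz,2021-01,120000
Graz,2021-02,135000
Maribor,2021-01,98000
"""

def consolidate(csv_text):
    """Sum revenue per subsidiary from a raw CSV export."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["subsidiary"]] += float(row["revenue"])
    return dict(totals)
```

Automating steps like this one, whether with scripts, business intelligence tooling, or robotic process automation, is what frees analysts' time for interpreting the figures rather than assembling them.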