1 - 10 / 14
1.
The “objective test” and the downstream market presence requirement in Big Data access cases under the essential facilities doctrine - a critical assessment
Rok Dacar, 2024, original scientific article

Abstract: One possible way to gain access to competitively relevant sets of Big Data is to apply the essential facilities doctrine. However, the European Commission and the European Court of Justice have established several different criteria for applying the doctrine. Since neither institution has yet applied the doctrine in Big Data access cases, it is not clear which of the criteria applies in such cases. This paper analyzes the impact of the "objective test" and the requirement that the controlling company be active in the downstream market (which are included in all assessment criteria) in Big Data access cases, with the goal of answering the research question: "Do the application of the 'objective test' and the requirement that the controlling company be active in the downstream market impede the effectiveness of the doctrine in Big Data access cases under EU competition law, and if so, how should they be changed?" The conclusion is that in Big Data access cases, the "objective test" should be mitigated and replaced by the "subjective test" or the "average company test", and the requirement that the controlling company be active in the downstream market should be discarded altogether in order for the doctrine to be an effective tool for accessing competitively relevant sets of Big Data.
Keywords: essential facilities, doctrine, big data, mandated data access, Bronner ruling
Published in DKUM: 29.08.2025; Views: 0; Downloads: 10
.pdf Full text (209,48 KB)

2.
Predicting corn moisture content in continuous drying systems using LSTM neural networks
Marko Simonič, Mirko Ficko, Simon Klančnik, 2025, original scientific article

Abstract: As we move toward Agriculture 4.0, there is increasing attention and pressure on the productivity of food production and processing. Optimizing efficiency in critical food processes such as corn drying is essential for long-term storage and economic viability. Using machine learning, neural networks, and LSTM modeling, a predictive model was built on historical data that include various drying parameters and weather conditions. As the collection of 3826 samples was not originally intended as a dataset for predictive models, various imputation techniques were used to ensure its integrity. The model was trained on the imputed data using a multilayer neural network consisting of an LSTM layer and three dense layers. Its performance was evaluated using four objective metrics, achieving an RMSE of 0.645, an MSE of 0.416, an MAE of 0.352, and a MAPE of 2.555, demonstrating high predictive accuracy. Based on the results and visualization, it was concluded that the proposed model could be a useful tool for predicting the moisture content at the outlets of continuous drying systems. The results contribute to the further development of sustainable continuous drying techniques and demonstrate the potential of a data-driven approach to improve process efficiency. This method focuses on reducing energy consumption, improving product quality, and increasing the economic profitability of food processing.
Keywords: drying, moisture prediction, big data, artificial intelligence, LSTM
Published in DKUM: 21.03.2025; Views: 0; Downloads: 13
.pdf Full text (2,99 MB)
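The abstract above describes an LSTM layer feeding dense layers that map drying parameters to a moisture value. As a rough illustration only (not the authors' implementation; the feature count, hidden size, and single linear output head are assumptions, and the paper used an LSTM followed by three dense layers), the forward pass of one LSTM cell over a sequence of drying-parameter vectors can be sketched in plain NumPy:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: compute input/forget/output gates and the
    candidate cell state from input x and previous hidden state h."""
    z = W @ x + U @ h + b               # stacked pre-activations, shape (4*H,)
    H = h.shape[0]
    i = 1.0 / (1.0 + np.exp(-z[:H]))      # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))   # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H])) # output gate
    g = np.tanh(z[3*H:])                  # candidate cell state
    c = f * c + i * g                     # update cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

def predict_moisture(sequence, W, U, b, w_out, b_out):
    """Run a sequence of feature vectors through the LSTM cell and map
    the final hidden state to a single moisture-content value."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in sequence:
        h, c = lstm_step(x, h, c, W, U, b)
    return float(w_out @ h + b_out)

# Illustrative random weights and inputs (a trained model would learn these).
rng = np.random.default_rng(0)
D, H, T = 5, 8, 12                      # features, hidden units, time steps
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
w_out, b_out = rng.normal(0, 0.1, H), 0.0
seq = rng.normal(size=(T, D))
print(predict_moisture(seq, W, U, b, w_out, b_out))
```

With random weights the output is meaningless; the sketch only shows the gating arithmetic a trained model of this shape would perform at inference time.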

3.
Big data usage in European countries : cluster analysis approach
Mirjana Pejić Bach, Tine Bertoncel, Maja Meško, Daila Suša-Vugec, Lucija Ivančić, 2020, original scientific article

Abstract: The goal of this research was to investigate the level of digital divide among selected European countries according to big data usage among their enterprises. For that purpose, we applied the K-means clustering methodology to Eurostat data on big data usage in European enterprises. The results indicate that there is a significant difference between the selected European countries in the overall usage of big data in their enterprises. Moreover, the enterprises that use internal experts also used diverse big data sources. Since the usage of diverse big data sources allows enterprises to gather more relevant information about their customers and competitors, this indicates that enterprises with stronger internal big data expertise also have a better chance of building strong competitiveness based on big data utilization. Finally, substantial differences were found among industries in their level of big data usage.
Keywords: big data, cluster analysis, digital divide, k-means, enterprise, industry, Europe, quality
Published in DKUM: 14.01.2025; Views: 0; Downloads: 7
.pdf Full text (3,39 MB)
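The abstract above applies K-means clustering to country-level big data usage indicators. As a minimal sketch of that technique (the toy data below are illustrative stand-ins, not the Eurostat indicators the paper used, and the two-cluster split is an assumption), Lloyd's algorithm in plain NumPy:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm): assign each point to the nearest
    centroid, then recompute each centroid as the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]  # random init
    for _ in range(iters):
        # distances: (n_points, k) matrix of point-to-centroid norms
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):                      # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy stand-in for country-level adoption indicators (e.g. share of
# enterprises using each big data source); values are illustrative only.
rng = np.random.default_rng(1)
low  = rng.normal(0.1, 0.02, (10, 4))   # low-adoption group of "countries"
high = rng.normal(0.6, 0.02, (10, 4))   # high-adoption group
X = np.vstack([low, high])
labels, centroids = kmeans(X, 2)
```

On well-separated data like this, the algorithm recovers the low- and high-adoption groups; on real indicators one would also standardize features and choose k with a criterion such as the elbow method.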

4.
Big data in sports : a bibliometric and topic study
Ana Šuštaršič, Mateja Videmšek, Damir Karpljuk, Ivan Miloloža, Maja Meško, 2022, review article

Abstract: Background: The development of the sports industry was impacted by the era of Big Data due to the rapid growth of information technology, which has become an increasingly challenging issue. Objectives: The purpose of the research was to analyze the scientific production on Big Data in sports and sports-related activities in two databases, Web of Science and Scopus. Methods/Approach: Bibliometric analysis and topic mining were done on 51 articles selected after four exclusion criteria (written in English, journal articles, the final stage of publication, and a detailed review of all full texts). The software tool used was Statistica Data Miner. Results: We found that the first articles appeared in Scopus in 2013 and in WoS in 2014. The USA and China produced the most articles. The most common research areas in WoS and Scopus are public, environmental and occupational health, medicine, environmental sciences and ecology, and engineering. Conclusions: We concluded that further research and literature review will be required, as this is a broad and new topic.
Keywords: big data, sport, bibliometric study, topic study, health care management, services, decision making
Published in DKUM: 05.07.2024; Views: 154; Downloads: 12
.pdf Full text (777,24 KB)

5.
The essential facilities doctrine, intellectual property rights, and access to big data
Rok Dacar, 2023, original scientific article

Abstract: This paper analyzes the criteria for applying the essential facilities doctrine to intellectual property rights and the possibility of applying it in cases where Big Data is the alleged essential facility. It aims to answer the research question: ‘‘What are the specifics of the intellectual property criteria in essential facilities cases and are these criteria applicable to Big Data?’’ It points to the semantic openness of the ‘‘new product’’ and ‘‘technical progress’’ conditions that have been developed for assessing whether an intellectual property right constitutes an essential facility. The paper argues that the intellectual property criteria are not applicable in all access to Big Data cases because Big Data is not necessarily protected by copyright. While a set of Big Data could be protected by copyright if certain conditions are met, even in such cases the lack of intrinsic value of Big Data significantly limits the applicability of the intellectual property criteria.
Keywords: essential facilities doctrine, intellectual property rights, big data, new product condition, technical progress condition
Published in DKUM: 11.04.2024; Views: 224; Downloads: 30
.pdf Full text (318,47 KB)

6.
Implementation of a new reporting process in a group x
Sara Črešnik, 2021, master's thesis

Abstract: Reporting is present in every company; whether small or big, it cannot be avoided. It plays a crucial role in the conduct and progress of business, and the quality of reporting affects the development of the work environment and the company. Since a business report is a document containing business information that supports future-oriented business decisions, it is very important that it is designed to contain the key information for the recipient and to provide support for business decisions. The reporting process can flow horizontally or vertically, upwards or downwards, and the content and structure vary depending on the recipient of the report. We live in an age in which our every step is accompanied by digitization, computerization, artificial intelligence, mass data, the Internet of Things, machine learning, and robotics. These changes have affected the reporting process as well: the processes of data acquisition, processing, and sharing have changed, the quantity of data has increased, and the time available to prepare reports has decreased. We can have data without information, but we cannot have information without data. There is never enough time, especially nowadays, when we are used to having everything at our fingertips. These are two conflicting factors: more data, and less time to prepare quality reports. Systems are developed to optimize the process, increase efficiency and quality, and, most importantly nowadays, to obtain mass data in the shortest possible time. It is therefore important to adopt and implement software that helps us accomplish our daily tasks: we must know how to process huge amounts of real-time data and deliver the information they contain. It is crucial for companies to keep up with their environment and implement changes and innovations in their business processes.
A company is like a living organism: it must constantly evolve and grow. As soon as it stops growing and evolving, it can fail, because it starts lagging behind and is no longer competitive with others. To deliver faster feedback, companies need data of better quality. There are tools that can improve the business process and better support the capacity of the human agents. The goal is to harness employees' full potential and knowledge for important tasks, such as analyzing, reviewing, and understanding data and acting upon them, while using information technology to automate repetitive processes and facilitate better communication. The focus of this master's thesis is the reporting process in Group X, one of the world leaders in the automotive industry: a multinational corporation based in Canada with subsidiaries around the world. The business reporting prepared for the headquarters in Canada has to address the complexity of the multinational corporation to support the decision process. The aim of the thesis is to propose a reporting process for preparing and producing reports with a huge amount of data in a very time-efficient manner. We start by examining the existing processes and, on that basis, identifying the processes required for the reports to reach their final recipients. Our goal is to identify a toolset that would increase efficiency, accuracy, and credibility, and reduce errors, in the fastest possible time. We investigate a short-term and a long-term solution. By a short-term solution, we mean a system, program, or tool that can increase our potential by using digital resources already existing in the organization. By a long-term solution, we mean a solution that requires the employment of specialized future tools in the field of reporting and in repetitive processes, which we can identify with current knowledge and expectations for development. This includes machine learning, robotic process automation, and artificial intelligence.
Keywords: Consolidated reporting, reporting process, robotic process automation, business intelligence, artificial intelligence, machine learning, SharePoint, Big Data, digital transformation, electronic data interchange.
Published in DKUM: 01.09.2021; Views: 894; Downloads: 10
.pdf Full text (1,71 MB)

7.
Digitalization trends in a company - Industry 4.0
Sara Vaupotič, 2020, undergraduate thesis

Abstract: The industrial revolution itself dates back to the second half of the 18th century, with the advent of the steam locomotive and spinning machines. Throughout history, people have always sought improvements and worked with what they had available. From the first steam engines, the discovery of electricity, the first telegrams and telephones, automobiles and airplanes, through the development of digital technology, business software, the first computers and supercomputers, communication technology, the first laptops, and industrial robotics, the fourth industrial revolution, known as Industry 4.0, is now unfolding. It is developing in the direction of digitalization and automation: smart factories and devices connected to one another (the Internet of Things), systems for storing large amounts of information and data (Big Data), and production capacities that can store data autonomously, at any time and without human presence. Both production and business operations are now largely digital. Digitalization has already replaced the once traditional company culture: paper archives have given way to various computer solutions and document management systems (DMS) for easier, more transparent, paperless, and faster operations. Faster access to data, easier and more transparent operations, and the consolidation of data in one place are provided by enterprise resource planning systems (ERP), which can also be connected to manufacturing execution systems (MES). The fourth industrial revolution arose precisely from the need for improvements, greater efficiency, and easier oversight of costs and logistics. As a good example of a company that already largely follows the guidelines of the fourth industrial revolution, we present the technology company Xiaomi, which currently holds fourth place in the smartphone market.
Keywords: industrial revolutions, Industry 4.0, IoT, Big Data, ERP, MES, DMS, e-business, Xiaomi.
Published in DKUM: 23.11.2020; Views: 1431; Downloads: 210
.pdf Full text (767,12 KB)

8.
Big data for business ecosystem players
Igor Perko, Peter Ototsky, 2016, original scientific article

Abstract: This research connects some of the most promising Big Data usage domains with distinct player groups found in the business ecosystem. Literature analysis was used to identify the state of the art of Big Data research in the major domains of its use, namely individual marketing, health treatment, work opportunities, financial services, and security enforcement. System theory was used to identify the business ecosystem's major player types disrupted by Big Data: individuals, small and mid-sized enterprises, large organizations, information providers, and regulators. Relationships between the domains and the players were explained through new Big Data opportunities and threats and through the players' responsive strategies. System dynamics was used to visualize the relationships in the resulting model.
Keywords: business ecosystems, Big Data, information providers, system dynamics
Published in DKUM: 03.04.2017; Views: 1279; Downloads: 419
.pdf Full text (544,68 KB)

9.
Legal aspects of big data (Big Data)
Hana Kosi, 2016, undergraduate thesis

Abstract: Every day, by using mobile apps, browsing the web, and even shopping in stores, people share their personal data, usually unknowingly and voluntarily. As more and more online companies and other organizations collect and process personal data in order to operate more easily and effectively, the question of personal data protection is becoming increasingly important. The first legally binding international agreement in the field of personal data protection was drafted by the Council of Europe and presented in 1981 as the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. Over the past 30 years, in response to increasingly advanced information technology, the European Union has developed extensive legislation in this field. Its most important data protection instruments are Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data, and the Charter of Fundamental Rights of the European Union; the most recent changes were introduced by Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, repealing Directive 95/46/EC, and by Directive (EU) 2016/680 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, repealing Council Framework Decision 2008/977/JHA. The collection and processing of data also plays an important role from the perspective of competition, specifically in shaping the market power of individual companies and the entry barriers these companies create for small and not-yet-established firms.
In this field, the French and German competition authorities published a joint report on big data and competition law in May of this year. The report examines the ways in which data can become a source of market power and how access to data can distort competition by increasing market transparency, and it presents data practices that may violate competition law. In pursuit of better and easier access to data, companies engage in a range of anticompetitive practices, such as mergers and exclusionary conduct, including refusal of access, discriminatory access to data, exclusive agreements, tied sales, and cross-usage of data sets; data can also serve as a means of price discrimination.
Keywords: big data, privacy protection, personal data protection, competition, European Union legislation, Big Data
Published in DKUM: 02.12.2016; Views: 19480; Downloads: 244
.pdf Full text (819,05 KB)

10.
Using large amounts of data for customer relationship management information solutions
Jernej Omulec, 2016, master's thesis

Abstract: Many companies already practice customer relationship management and use CRM solutions. CRM can be defined as a management philosophy in which a company's goals are best achieved by identifying and satisfying customers' wishes and needs, both stated and unstated. CRM helps profile potential customers, understand their needs, and build relationships by offering them the most suitable product. On the other hand, there is Big Data, which is not yet as widespread in companies as CRM but is steadily gaining popularity. Big Data refers to data sets whose volume exceeds the capacity of conventional software solutions to capture, store, manage, and analyze them. Introducing a CRM solution and integrating Big Data will not, by themselves, generate profit. To improve a company's performance, financial results, and revenues, we need people who know how to analyze the available tools and data and use them to their advantage. One important point must not be forgotten: we can introduce the best and most expensive solution, yet it will be of no use if we do not know how to exploit it fully. CRM solutions and Big Data are merely aids that make understanding customers considerably easier; on their own, they will accomplish nothing.
Keywords: Big Data, large data sets, CRM, customer relationship management, Salesforce, Datameer.
Published in DKUM: 24.10.2016; Views: 1413; Downloads: 212
.pdf Full text (1,89 MB)
