Search the digital library catalogue

Results 1 - 10 of 132
1.
Prediction-based compression of voxelized tree structures
Matej Slomšek, 2025, bachelor's thesis

Description: The thesis deals with the compression of voxelized tree structures using a prediction model. The aim of the research was to examine the efficiency of lossless compression based on predicting the data and subsequently compressing the prediction errors. We used various algorithms, such as Zip, 7-Zip, WinRAR and FLVC, and compared them with our method NM (prediction method). The results show that FLVC achieves the best compression ratios, while NM proves to be an effective approach for smaller files. (A minimal prediction-and-residual sketch follows this entry.)
Keywords: voxel, tree structure, lossless compression, RLE, compression, voxelization, prediction, prediction error, FLVC.
Published in DKUM: 08.05.2025; Views: 0; Downloads: 7
.pdf Full text (1,23 MB)
This item has multiple files! More...
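
Below is a minimal, hypothetical sketch of the prediction-and-residual idea summarised above. It is not the thesis's NM method: the predictor is a trivial previous-value model and zlib merely stands in for the general-purpose compressors (Zip, 7-Zip) it is compared against.

```python
# Sketch: predict each voxel occupancy value from its predecessor, then compress
# only the residual stream with a general-purpose compressor.
import zlib
import numpy as np

def compress_with_prediction(values: np.ndarray) -> bytes:
    # A correct prediction yields a zero residual; long zero runs compress well.
    predicted = np.concatenate(([0], values[:-1]))
    residuals = np.bitwise_xor(values, predicted).astype(np.uint8)
    return zlib.compress(residuals.tobytes(), level=9)

def decompress_with_prediction(blob: bytes, length: int) -> np.ndarray:
    residuals = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
    values = np.zeros(length, dtype=np.uint8)
    prev = 0
    for i in range(length):
        values[i] = prev ^ residuals[i]   # invert the prediction step
        prev = values[i]
    return values

if __name__ == "__main__":
    occupancy = (np.random.rand(10_000) < 0.1).astype(np.uint8)   # sparse occupancy stream
    blob = compress_with_prediction(occupancy)
    restored = decompress_with_prediction(blob, len(occupancy))
    assert np.array_equal(occupancy, restored)                    # lossless round trip
    print(f"{occupancy.size} voxels -> {len(blob)} bytes")
```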

2.
Interactive rendering of a digital terrain model using a quadtree
Anej Krajnc, 2025, bachelor's thesis

Description: The thesis describes the implementation of rendering a digital terrain model with a quadtree, a triangle mesh, Blinn-Phong shading and greyscale colouring. The goal is to speed up the display of the digital terrain model by using the quadtree. We found that rendering with the quadtree is faster than rendering the entire terrain model, but has higher memory requirements. The rendering speed also depends on the number of vertices, which we established using three digital terrain models with different numbers of vertices. (A minimal quadtree-subdivision sketch follows this entry.)
Keywords: quadtree, digital terrain model, triangle mesh, Blinn-Phong shading model, OpenGL
Published in DKUM: 08.05.2025; Views: 0; Downloads: 3
.pdf Full text (1,61 MB)
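
A minimal sketch of the quadtree subdivision such terrain renderers rely on, assuming a square height grid with a power-of-two side. Rendering, Blinn-Phong shading and OpenGL are omitted, so this only illustrates the data structure, not the thesis's implementation.

```python
# Sketch: a quadtree over a digital elevation model. Each node covers a square
# tile of the height grid and subdivides into four children until a minimum size.
import numpy as np

class QuadNode:
    def __init__(self, heights: np.ndarray, x: int, y: int, size: int, min_size: int = 16):
        self.x, self.y, self.size = x, y, size
        tile = heights[y:y + size, x:x + size]
        self.min_h, self.max_h = float(tile.min()), float(tile.max())  # e.g. for culling
        self.children = []
        if size > min_size:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    self.children.append(QuadNode(heights, x + dx, y + dy, half, min_size))

    def leaves(self):
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

if __name__ == "__main__":
    dem = np.random.rand(256, 256).astype(np.float32)    # stand-in height grid
    root = QuadNode(dem, 0, 0, 256)
    print("leaf tiles:", sum(1 for _ in root.leaves()))  # (256 / 16)^2 = 256
```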

3.
A hierarchical universal algorithm for geometric objects’ reflection symmetry detection
Borut Žalik, Damjan Strnad, Štefan Kohek, Ivana Kolingerová, Andrej Nerat, Niko Lukač, David Podgorelec, 2022, original scientific article

Description: A new algorithm is presented for detecting the global reflection symmetry of geometric objects. The algorithm works for 2D and 3D objects which may be open or closed and may or may not contain holes. The algorithm accepts as input a point cloud obtained by sampling the object's surface. The points are inserted into a uniform grid and so-called boundary cells are identified. The centroid of the boundary cells is determined, and a testing symmetry axis/plane is set through it. In this way, the boundary cells are split into two parts, which are then evaluated by the symmetry estimation function. If the function estimates the symmetric case, the boundary cells are further split until a given threshold is reached or a non-symmetric result is obtained. The new testing axis/plane is then derived by rotation around the centroid and tested. This paper introduces three techniques to accelerate the computation. Competitive results were obtained when the algorithm was compared against the state of the art. (A minimal boundary-cell sketch follows this entry.)
Keywords: computer science, computational geometry, uniform subdivision, centroids
Published in DKUM: 01.04.2025; Views: 0; Downloads: 4
.pdf Full text (2,99 MB)
This item has multiple files! More...
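
A minimal 2D sketch of the first steps described in the abstract: rasterising a sampled point cloud into a uniform grid, extracting the boundary cells, and computing their centroid, through which candidate symmetry axes would then be tested. It is an illustration only, not the authors' code; the symmetry estimation function itself is omitted.

```python
# Sketch: boundary cells of a uniform grid and their centroid (2D case).
import numpy as np

def boundary_cells(points: np.ndarray, resolution: int = 64) -> np.ndarray:
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = ((points - lo) / (hi - lo + 1e-9) * (resolution - 1)).astype(int)
    grid = np.zeros((resolution, resolution), dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True
    cells = []
    for x, y in np.argwhere(grid):
        # A boundary cell is an occupied cell with at least one empty 4-neighbour.
        neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        if any(not (0 <= nx < resolution and 0 <= ny < resolution and grid[nx, ny])
               for nx, ny in neighbours):
            cells.append((x, y))
    return np.array(cells, dtype=float)

if __name__ == "__main__":
    theta = np.linspace(0, 2 * np.pi, 500)
    ellipse = np.c_[np.cos(theta), 0.5 * np.sin(theta)]   # a reflection-symmetric test shape
    cells = boundary_cells(ellipse)
    centroid = cells.mean(axis=0)
    print("boundary cells:", len(cells), "centroid:", centroid.round(2))
```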

4.
Training sparse neural networks with iterative parameter pruning
Nejc Podvratnik, 2025, master's thesis

Description: Modern deep learning often involves neural networks with an unnecessarily large number of parameters, i.e. connections. This results in higher time and space complexity when working with such networks. A possible solution is the lottery ticket hypothesis, which states that within the set of connections of every feed-forward neural network there exists a smaller subset that retains the same, and in some cases even higher, performance. We call it a winning ticket. The master's thesis presents and verifies the lottery ticket hypothesis and implements a custom algorithm for iterative parameter pruning, which was tested and analysed on different architectures, datasets and hyperparameters. (A minimal iterative-pruning sketch follows this entry.)
Keywords: machine learning, sparse neural network, lottery ticket hypothesis, pruning
Published in DKUM: 04.03.2025; Views: 0; Downloads: 31
.pdf Full text (2,88 MB)
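
A minimal sketch of iterative magnitude pruning in the lottery-ticket style that the thesis builds on. The `train_fn` below is a stand-in placeholder rather than a real training loop, so the whole sketch is illustrative and not the thesis's algorithm.

```python
# Sketch: prune the smallest-magnitude surviving weights after each round and
# rewind the remaining weights to their initial values before retraining.
import numpy as np

def prune_step(weights: np.ndarray, mask: np.ndarray, prune_fraction: float) -> np.ndarray:
    surviving = np.abs(weights[mask])
    threshold = np.quantile(surviving, prune_fraction)   # cut the weakest fraction of survivors
    return mask & (np.abs(weights) > threshold)

def iterative_pruning(init_weights, train_fn, rounds=5, prune_fraction=0.2):
    mask = np.ones_like(init_weights, dtype=bool)
    for _ in range(rounds):
        weights = train_fn(init_weights * mask, mask)     # train the current sparse network
        mask = prune_step(weights, mask, prune_fraction)  # prune, then rewind to init_weights
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w0 = rng.normal(size=1000)
    # Stand-in "training": weights just drift slightly; a real run would fit a model.
    fake_train = lambda w, m: w + 0.01 * rng.normal(size=w.shape) * m
    mask = iterative_pruning(w0, fake_train, rounds=5, prune_fraction=0.2)
    print(f"remaining weights: {mask.mean():.1%}")        # roughly 0.8^5 ~ 33%
```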

5.
Lossless compression of audio recordings with neural networks : master's thesis
Luka Železnik, 2025, master's thesis

Description: The master's thesis begins with a brief overview of relevant neural network architectures and existing lossless audio compression methods. A new method for lossless audio compression is then presented, based on predicting the next audio sample with a convolutional neural network. The network is trained separately for each input audio recording. This is followed by optimisation of the hyperparameters and algorithm settings, and a comparison of the proposed method with existing algorithms. (A minimal Golomb-Rice residual-coding sketch follows this entry.)
Keywords: compression, algorithm, entropy, Golomb-Rice coding, machine learning
Published in DKUM: 06.02.2025; Views: 0; Downloads: 31
.pdf Full text (2,74 MB)
This item has multiple files! More...
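
A minimal sketch of Golomb-Rice coding of prediction residuals, the entropy-coding stage named in the keywords. The convolutional predictor from the thesis is replaced here by a trivial previous-sample predictor and the Rice parameter k is fixed by hand, so the numbers are only indicative.

```python
# Sketch: residuals of a simple predictor are mapped to non-negative integers
# and Rice-coded as a unary quotient plus k fixed remainder bits.
import numpy as np

def zigzag(n: int) -> int:
    # Map signed residuals to non-negative integers: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(value: int, k: int) -> str:
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")   # bit string, for clarity only

def encode_residuals(samples: np.ndarray, k: int = 8) -> str:
    predicted = np.concatenate(([0], samples[:-1]))          # stand-in predictor
    residuals = samples - predicted
    return "".join(rice_encode(zigzag(int(r)), k) for r in residuals)

if __name__ == "__main__":
    t = np.arange(2000)
    audio = (3000 * np.sin(2 * np.pi * 440 * t / 44100)).astype(np.int32)
    bits = encode_residuals(audio)
    print(f"{audio.size * 16} raw bits -> {len(bits)} coded bits")
```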

6.
State-of-the-art trends in data compression : COMPROMISE case study
David Podgorelec, Damjan Strnad, Ivana Kolingerová, Borut Žalik, 2024, original scientific article

Description: After a boom that coincided with the advent of the internet, digital cameras, digital video and audio storage and playback devices, the research on data compression has rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of the time, such as JPEG, AVC, MP3 and others, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight. In this article, we try to critically evaluate the most prominent of these trends and to explore their parallels, complementarities, and differences. Digital data restoration mimics the human ability to omit memorising information that is satisfactorily retrievable from the context. Feature-based data compression introduces a two-level data representation with higher-level semantic features and with residuals that correct the feature-restored (predicted) data. The integration of the advantages of individual domain-specific data compression methods into a general approach is also challenging. To the best of our knowledge, a method that addresses all these trends does not exist yet. Our methodology, COMPROMISE, has been developed exactly to make as many solutions to these challenges as possible interoperable. It incorporates features and digital restoration. Furthermore, it is largely domain-independent (general), asymmetric, and universal. The latter refers to the ability to compress data in a common framework in a lossy, lossless, and near-lossless mode. COMPROMISE may also be considered an umbrella that links many existing domain-dependent and independent methods, supports hybrid lossless–lossy techniques, and encourages the development of new data compression algorithms.
Keywords: data compression, data restoration, universal algorithm, feature, residual
Published in DKUM: 04.02.2025; Views: 0; Downloads: 11
.pdf Full text (1,13 MB)

7.
Efficient compressed storage and fast reconstruction of large binary images using chain codes
Damjan Strnad, Danijel Žlaus, Andrej Nerat, Borut Žalik, 2024, original scientific article

Description: Large binary images are used in many modern applications of image processing. For instance, they serve as inputs or target masks for training machine learning (ML) models in computer vision and image segmentation. Storing large binary images in limited memory and loading them repeatedly on demand, which is common in ML, calls for efficient image encoding and decoding mechanisms. In the paper, we propose an encoding scheme for efficient compressed storage of large binary images based on chain codes, and introduce a new single-pass algorithm for fast parallel reconstruction of raster images from the encoded representation. We use three large real-life binary masks, derived from vector layers of single-class objects – a building cadaster, a woody vegetation landscape feature map, and a road network map – to test the efficiency of the proposed method. We show that the masks encoded by the proposed method require significantly less storage space than standard lossless compression formats. We further compared the proposed method for mask reconstruction from chain codes with a recent state-of-the-art algorithm, and achieved faster reconstruction on test data. (A minimal chain-code sketch follows this entry.)
Keywords: binary mask, machine learning, chain code, binary encoding, bitmap reconstruction
Published in DKUM: 29.01.2025; Views: 0; Downloads: 147
.pdf Full text (1,45 MB)
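
A minimal sketch of Freeman 8-direction chain coding, the representation the paper builds on. It is not the paper's encoding scheme or its parallel reconstruction algorithm, and the example boundary is a hand-made square rather than a real mask.

```python
# Sketch: store an ordered boundary pixel sequence as a start point plus one
# 3-bit direction code per step, and reconstruct it by replaying the steps.
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
CODE_OF = {off: c for c, off in enumerate(OFFSETS)}

def chain_encode(boundary):
    codes = [CODE_OF[(x1 - x0, y1 - y0)]
             for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]
    return boundary[0], codes

def chain_decode(start, codes):
    pts = [start]
    for c in codes:
        dx, dy = OFFSETS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

if __name__ == "__main__":
    # Boundary of a 5x5 axis-aligned square, listed counter-clockwise (closed contour).
    square = [(x, 0) for x in range(5)] + [(4, y) for y in range(1, 5)] \
           + [(x, 4) for x in range(3, -1, -1)] + [(0, y) for y in range(3, -1, -1)]
    start, codes = chain_encode(square)
    assert chain_decode(start, codes) == square
    print(f"{len(square)} boundary pixels -> 1 start point + {len(codes)} 3-bit codes")
```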

8.
Efficient encoding and decoding of voxelized models for machine learning-based applications
Damjan Strnad, Štefan Kohek, Borut Žalik, Libor Váša, Andrej Nerat, 2025, original scientific article

Description: Point clouds have become popular training data for many practical applications of machine learning in the fields of environmental modeling and precision agriculture. In order to reduce high space requirements and the effect of noise in the data, point clouds are often transformed to a structured representation such as a voxel grid. Storing, transmitting and consuming voxelized geometry, however, remains a challenging problem for machine learning pipelines running on devices with a limited amount of low-latency on-chip memory. A viable solution is to store the data in a compact encoded format, and perform on-the-fly decoding when it is needed for processing. Such on-demand expansion must be fast in order to avoid introducing substantial additional delay to the pipeline. This can be achieved by parallel decoding, which is particularly suitable for the massively parallel architecture of GPUs on which the majority of machine learning is currently executed. In this paper, we present such a method for efficient and parallelizable encoding/decoding of voxelized geometry. The method employs multi-level context-aware prediction of voxel occupancy based on the extracted binary feature prediction table, and encodes the residual grid with a pointerless sparse voxel octree (PSVO). We particularly focused on encoding the datasets of voxelized trees, obtained from both synthetic tree models and LiDAR point clouds of real trees. The method achieved 15.6% and 12.8% reduction of storage size with respect to plain PSVO on the synthetic and real datasets, respectively. We also tested the method on a general set of diverse voxelized objects, where an average 11% improvement of storage space was achieved. (A minimal PSVO-encoding sketch follows this entry.)
Keywords: voxel grid, feature prediction, tree models, prediction-based encoding, key voxels, residuals, sparse voxel octree
Published in DKUM: 09.01.2025; Views: 0; Downloads: 5
.pdf Full text (20,93 MB)
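
A minimal sketch of a pointerless sparse voxel octree (PSVO) encoder, the container the abstract uses for the residual grid. The prediction stage and the paper's exact byte layout are omitted, and the grid contents are synthetic, so this is an illustration rather than the published method.

```python
# Sketch: each occupied node is stored as a single child-occupancy byte in
# breadth-first order, with no explicit pointers.
import numpy as np
from collections import deque

def psvo_encode(grid: np.ndarray) -> bytes:
    # grid: cubic boolean array whose side length is a power of two.
    out = bytearray()
    queue = deque([(0, 0, 0, grid.shape[0])])
    while queue:
        x, y, z, s = queue.popleft()
        h = s // 2
        mask = 0
        offsets = [(dx, dy, dz) for dx in (0, h) for dy in (0, h) for dz in (0, h)]
        for i, (dx, dy, dz) in enumerate(offsets):
            child = grid[x + dx:x + dx + h, y + dy:y + dy + h, z + dz:z + dz + h]
            if child.any():
                mask |= 1 << i
                if h > 1:                 # leaf voxels are implied by the parent's mask
                    queue.append((x + dx, y + dy, z + dz, h))
        out.append(mask)                  # one occupancy byte per visited node
    return bytes(out)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grid = np.zeros((32, 32, 32), dtype=bool)
    pts = rng.integers(0, 32, size=(200, 3))
    grid[pts[:, 0], pts[:, 1], pts[:, 2]] = True   # a sparse "residual" grid
    blob = psvo_encode(grid)
    print(f"{grid.size} voxels -> {len(blob)} PSVO bytes")
```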

9.
A case study on entropy-aware block-based linear transforms for lossless image compression
Borut Žalik, David Podgorelec, Ivana Kolingerová, Damjan Strnad, Štefan Kohek, 2024, original scientific article

Description: Data compression algorithms tend to reduce information entropy, which is crucial, especially in the case of images, as they are data intensive. In this regard, lossless image data compression is especially challenging. Many popular lossless compression methods incorporate predictions and various types of pixel transformations, in order to reduce the information entropy of an image. In this paper, a block optimisation programming framework is introduced to support various experiments on raster images, divided into blocks of pixels. Eleven methods were implemented within the framework, including prediction methods, string transformation methods, and inverse distance weighting as a representative of interpolation methods. Thirty-two different greyscale raster images with varying resolutions and contents were used in the experiments. It was shown that the framework reduces information entropy better than the popular JPEG LS and CALIC predictors. The additional information associated with each block in the framework is then evaluated. It was confirmed that, despite this additional cost, the estimated size in bytes is smaller in comparison to the sizes achieved by the JPEG LS and CALIC predictors. (A minimal block-entropy sketch follows this entry.)
Keywords: computer science, information entropy, prediction, inverse distance transform, string transformations
Published in DKUM: 07.01.2025; Views: 0; Downloads: 9
.pdf Full text (5,13 MB)
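
A minimal sketch of the entropy criterion such block-based methods optimise: the zeroth-order entropy of a greyscale block is compared with the entropy of its left-neighbour prediction residuals. The block is synthetic and the predictor deliberately simple; this is not the paper's framework or its JPEG LS/CALIC baselines.

```python
# Sketch: a smooth block has high zeroth-order entropy, but its prediction
# residuals concentrate around zero and have much lower entropy.
import numpy as np

def entropy(values: np.ndarray) -> float:
    # Zeroth-order (Shannon) entropy in bits per symbol.
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def left_predict_residuals(block: np.ndarray) -> np.ndarray:
    # Predict each pixel by its left neighbour; wrap residuals to the 8-bit range.
    left = np.concatenate([block[:, :1], block[:, :-1]], axis=1)
    return (block.astype(np.int16) - left.astype(np.int16)) % 256

if __name__ == "__main__":
    x = np.linspace(0, 255, 64)
    block = (np.add.outer(x, x) / 2).astype(np.uint8)   # smooth synthetic gradient block
    print(f"raw entropy:      {entropy(block.ravel()):.2f} bits/pixel")
    print(f"residual entropy: {entropy(left_predict_residuals(block).ravel()):.2f} bits/pixel")
```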

10.
Region segmentation of images based on a raster-scan paradigm
Luka Lukač, Andrej Nerat, Damjan Strnad, Štefan Horvat, Borut Žalik, 2024, original scientific article

Description: This paper introduces a new method for the region segmentation of images. The approach is based on the raster-scan paradigm and builds the segments incrementally. The pixels are processed in the raster-scan order, while the construction of the segments is based on a distance metric in regard to the already segmented pixels in the neighbourhood. The segmentation procedure operates in linear time according to the total number of pixels. The proposed method, named the RSM (raster-scan segmentation method), was tested on selected images from the popular benchmark datasets MS COCO and DIV2K. The experimental results indicate that our method successfully extracts regions with similar pixel values. Furthermore, a comparison with two of the well-known segmentation methods, Watershed and DBSCAN, demonstrates that the proposed approach is superior in regard to efficiency while yielding visually similar results. (A minimal raster-scan labelling sketch follows this entry.)
Keywords: segment, image analysis, distance metric, Watershed, DBSCAN
Published in DKUM: 05.12.2024; Views: 0; Downloads: 4
URL Link to the file
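
A greedy single-pass sketch in the spirit of the abstract: pixels are visited in raster order, and each joins the segment of the already visited left/top neighbour with the closest value if the difference stays under a threshold, otherwise it starts a new segment. Label merging and the authors' actual distance metric are omitted, so this is not the RSM itself.

```python
# Sketch: single-pass, raster-order region labelling with a value-difference threshold.
import numpy as np

def raster_scan_segment(img: np.ndarray, threshold: float = 10.0) -> np.ndarray:
    h, w = img.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 1
    for y in range(h):
        for x in range(w):
            candidates = []
            if x > 0:
                candidates.append((abs(float(img[y, x]) - float(img[y, x - 1])), labels[y, x - 1]))
            if y > 0:
                candidates.append((abs(float(img[y, x]) - float(img[y - 1, x])), labels[y - 1, x]))
            candidates = [c for c in candidates if c[0] <= threshold]
            if candidates:
                labels[y, x] = min(candidates)[1]   # join the closest matching neighbour's segment
            else:
                labels[y, x] = next_label           # start a new segment
                next_label += 1
    return labels

if __name__ == "__main__":
    img = np.zeros((64, 64), dtype=np.uint8)
    img[:, 32:] = 200                               # two flat regions
    seg = raster_scan_segment(img)
    print("segments found:", len(np.unique(seg)))   # expect 2
```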
