2. Using neural networks in the process of calibrating the microsimulation models in the analysis and design of roundabouts in urban areas. Irena Ištoka Otković, 2011, dissertation. Abstract: The thesis investigates the application of neural networks in the computer program calibration of traffic micro-simulation models. The calibration process is designed on the basis of the VISSIM micro-simulation model of local urban roundabouts.
From the five analyzed methods of computer program calibration, Methods I, II and V were selected for more detailed research. The three chosen calibration methods differed in the number of output traffic indicators predicted by neural networks and in the number of neural networks used in the calibration procedure. Within the calibration program, the task of the neural networks was to predict the output of VISSIM simulations for selected functional traffic parameters: travel time between the measurement points and queue parameters (maximum queue length and number of stops at the roundabout entrance). The database for neural network training consisted of 1379 combinations of input parameters, while the number of output indicators of the VISSIM simulations was varied. The 176 neural networks were trained and compared for the calibration process according to training and generalization criteria. The best neural network for each calibration method was chosen using a two-phase validation of the neural networks.
Method I is based on calibration of a single traffic indicator, travel time, and enables validation against the second observed indicator, queue parameters. Methods II and V join the previously described calibration and validation procedures into a single calibration process that calibrates the input parameters according to both traffic indicators.
Validation of the analyzed calibration methods was performed on three new sets of measured data: two sets at the same roundabout and one set at another location. The best validation results were achieved by Method I, which is therefore the recommended method for computer program calibration.
The modeling results for selected traffic parameters obtained with the calibrated VISSIM traffic model were compared with values measured in the field, with existing methods for analyzing the operational characteristics of roundabouts (the Lausanne method, Kimber-Hollis, HCM), and with modeling by the uncalibrated VISSIM model. The calibrated model shows good agreement with values measured in real traffic conditions. The efficiency of the calibration process was confirmed by comparing measured and modeled values of delay, an independent traffic indicator that was not used in the calibration and validation of the traffic micro-simulation models.
The thesis also gives an example of using the calibrated model to analyze the impact of pedestrian flows on the conflicting entry and exit vehicle flows at the roundabout. Different traffic scenarios were analyzed under real and anticipated traffic conditions. Keywords: traffic models, traffic micro-simulation, calibration of the VISSIM model, computer program calibration method, neural networks in the calibration process, micro-simulation of roundabouts, traffic modeling parameters, driving time, queue parameters, delay Published: 02.06.2011; Views: 3697; Downloads: 262 Full text (13,21 MB) |
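The calibration idea in the abstract above (a cheap predictor stands in for full VISSIM runs while a search finds parameters that reproduce a measured travel time) can be sketched as follows. Everything here is a hypothetical stand-in: the surrogate formula, parameter names and ranges are illustrative only, not the thesis's trained neural networks or actual VISSIM parameters.

```python
# Sketch of surrogate-based calibration (the Method I idea): a cheap
# surrogate predicts the simulation output, and we search the calibration
# parameters that minimize the error against a measured travel time.
# The surrogate below is a hypothetical stand-in for a neural network
# trained on VISSIM outputs; parameter names/ranges are illustrative.

def surrogate_travel_time(headway_s, accel_ms2):
    """Hypothetical stand-in for a trained neural network predictor."""
    return 20.0 + 4.0 * headway_s - 2.5 * accel_ms2

def calibrate(measured_travel_time):
    """Grid-search the parameter space for the best match to measurement."""
    best = None
    for headway in [round(0.5 + 0.1 * i, 1) for i in range(21)]:    # 0.5-2.5 s
        for accel in [round(1.0 + 0.1 * j, 1) for j in range(26)]:  # 1.0-3.5 m/s^2
            err = abs(surrogate_travel_time(headway, accel) - measured_travel_time)
            if best is None or err < best[0]:
                best = (err, headway, accel)
    return best

err, headway, accel = calibrate(24.3)
```

Because the surrogate is fast, the whole parameter grid can be scanned instead of running thousands of full simulations; this is what makes neural-network calibration attractive in the thesis.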
3. Natural convection of micropolar fluid in an enclosure with boundary element method. Matej Zadravec, Matjaž Hriberšek, Leopold Škerget, 2009, original scientific article. Abstract: The contribution deals with numerical simulation of natural convection in micropolar fluids, which describe the flow of suspensions of rigid, non-deformable particles with their own rotation. The micropolar fluid flow theory is incorporated into the framework of a velocity-vorticity formulation of the Navier-Stokes equations. The governing equations are derived in differential and integral form, resulting from the application of the boundary element method (BEM). In the integral transformations, the diffusion-convection fundamental solution for flow kinetics, including vorticity transport, heat transport and microrotation transport, is implemented. The test case is the benchmark of natural convection in a square cavity, and computations are performed for Rayleigh number values up to 10^7. The results show that microrotation of the suspended particles in general decreases the overall heat transfer from the heated wall and should therefore not be neglected when computing heat and fluid flow of micropolar fluids. Keywords: natural convection, micropolar fluid, boundary element method Published: 31.05.2012; Views: 1428; Downloads: 67 Link to full text |
4. Comparison between wavelet and fast multipole data sparse approximations for Poisson and kinematics boundary-domain integral equations. Jure Ravnik, Leopold Škerget, Zoran Žunič, 2009, original scientific article. Abstract: The boundary element method applied to non-homogeneous partial differential equations requires calculation of a fully populated matrix of domain integrals. This paper compares two techniques used to reduce the complexity of such domain matrices: the fast multipole method and the fast wavelet transform. The employed fast multipole method utilizes the expansion of integral kernels into series of spherical harmonics. The wavelet transform for vectors of arbitrary length, based on Haar wavelets and a variable thresholding limit, is used. Both methods are tested and compared by solving the scalar Poisson equation and the velocity-vorticity vector kinematics equation. The results show comparable accuracy for both methods at a given data storage size. Wavelets are somewhat better at high and low compression ratios, while the fast multipole method gives better results at moderate compression ratios. Considering implementation, the wavelet transform can easily be adapted to any problem, while the fast multipole method requires a different expansion for each integral kernel. Keywords: wavelets, fast multipole method, Poisson equation, BEM Published: 31.05.2012; Views: 1434; Downloads: 66 Link to full text |
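The Haar-wavelet-with-thresholding idea named in the abstract above can be shown minimally. The paper applies it to BEM domain matrices with a variable threshold; this sketch, with illustrative data and a fixed threshold, shows the transform, the compression step, and the bounded reconstruction error.

```python
# Orthonormal Haar wavelet transform with hard thresholding of small
# coefficients - the compression idea referenced in the abstract, shown
# on a plain vector of length 2^n (illustrative data, fixed threshold).
from math import sqrt

def haar(v):
    """Full orthonormal Haar decomposition of a vector of length 2^n."""
    v = list(v)
    out = []
    while len(v) > 1:
        avg = [(a + b) / sqrt(2) for a, b in zip(v[0::2], v[1::2])]
        det = [(a - b) / sqrt(2) for a, b in zip(v[0::2], v[1::2])]
        out = det + out  # coarser-level details go in front
        v = avg
    return v + out       # [overall average, coarse details, ..., fine details]

def ihaar(c):
    """Inverse of haar(): rebuild the signal level by level."""
    c = list(c)
    v = c[:1]
    k = 1
    while k < len(c):
        det = c[k:2 * k]
        v = [x for a, d in zip(v, det)
             for x in ((a + d) / sqrt(2), (a - d) / sqrt(2))]
        k *= 2
    return v

def threshold(c, eps):
    """Hard thresholding: zero out coefficients below eps (compression)."""
    return [x if abs(x) >= eps else 0.0 for x in c]

data = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
coeffs = haar(data)
compressed = threshold(coeffs, 1.5)  # half the coefficients drop out
```

Because the transform is orthonormal, the L2 reconstruction error equals the norm of the dropped coefficients, which is what makes a variable thresholding limit a controllable accuracy/storage trade-off.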
5. BEM simulation of compressible fluid flow in an enclosure induced by thermoacoustic waves. Leopold Škerget, Jure Ravnik, 2009, original scientific article. Abstract: The problem of unsteady compressible fluid flow in an enclosure induced by thermoacoustic waves is studied numerically. The full compressible set of Navier-Stokes equations is considered and solved numerically by a boundary-domain integral equations approach coupled with wavelet compression and domain decomposition to achieve numerical efficiency. The thermal energy equation is written in its most general form, including the Rayleigh and reversible expansion rate terms. Both the classical Fourier heat flux model and the wave heat conduction model are investigated. The velocity-vorticity formulation of the governing Navier-Stokes equations is employed, while the pressure field is evaluated from the corresponding pressure Poisson equation. Material properties are those of a perfect gas and are assumed to be pressure- and temperature-dependent. Keywords: compressible fluid flow, boundary element method, thermoacoustic waves, velocity-vorticity formulation Published: 31.05.2012; Views: 1355; Downloads: 64 Link to full text |
6. 3D multidomain BEM for a Poisson equation. Matjaž Ramšak, Leopold Škerget, 2009, original scientific article. Abstract: This paper deals with an efficient 3D multidomain boundary element method (BEM) for solving the Poisson equation. The integral boundary equation is discretized using linear mixed boundary elements. Sparse system matrices similar to those of the finite element method are obtained using a multidomain approach, also known as the "subdomain technique". Interface boundary conditions between subdomains lead to an overdetermined system matrix, which is solved using a fast iterative linear least-squares solver. The accuracy, efficiency and robustness of the developed numerical algorithm are demonstrated on cube and sphere geometries, where a comparison with a competitive BEM is performed. The efficiency is demonstrated using a mesh with over 200,000 hexahedral volume elements on a personal computer with 1 GB of memory. Keywords: fluid mechanics, Poisson equation, multidomain boundary element method, boundary element method, mixed boundary elements, multidomain method Published: 31.05.2012; Views: 1558; Downloads: 64 Link to full text |
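The least-squares step named in the abstract above (interface conditions make the coupled system overdetermined, i.e. more equations than unknowns) can be illustrated on a toy dense example. The paper uses a fast iterative solver; this sketch instead solves the normal equations A^T A x = A^T b directly, which is only sensible at toy scale.

```python
# Toy least-squares solve for an overdetermined system, via the normal
# equations A^T A x = A^T b. Illustrative only: the paper's solver is a
# fast iterative one suited to large sparse systems.

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def lstsq(A, b):
    """Least-squares solution of an overdetermined system (m rows > n cols)."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)

# 3 equations, 2 unknowns: best fit of y = x0 + x1*t to three data points.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 2.9]
x = lstsq(A, b)
```

The minimizer satisfies the data "as closely as possible" in the L2 sense, which is exactly what is wanted when interface conditions over-constrain the subdomain coupling.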
7. Fast single domain-subdomain BEM algorithm for 3D incompressible fluid flow and heat transfer. Jure Ravnik, Leopold Škerget, Zoran Žunič, 2009, original scientific article. Abstract: In this paper, the acceleration and computer memory reduction of an algorithm for the simulation of laminar viscous flows and heat transfer are presented. The algorithm solves the velocity-vorticity formulation of the incompressible Navier-Stokes equations in 3D. It is based on a combination of a subdomain boundary element method (BEM) and single domain BEM. The CPU time and storage requirements of the single domain BEM are reduced by implementing a fast multipole expansion method. The Laplace fundamental solution, which is used as a special weighting function in BEM, is expanded in terms of spherical harmonics. The computational domain and its boundary are recursively cut up, forming a tree of clusters of boundary elements and domain cells. A data sparse representation is used in the parts of the matrix that correspond to boundary-domain cluster pairs admissible for expansion. A significant reduction of the complexity is achieved. The paper presents the results of testing the multipole expansion algorithm, exploring its effect on the accuracy of the solution and its influence on the non-linear convergence properties of the solver. Two 3D benchmark numerical examples are used: the lid-driven cavity and the onset of natural convection in a differentially heated enclosure. Keywords: boundary element method, fast multipole method, fluid flow, heat transfer, velocity-vorticity formulation Published: 31.05.2012; Views: 1346; Downloads: 53 Link to full text |
8. MONTE CARLO MODEL FOR NEUTRON PRODUCTION BY THE INTERACTIONS OF LOW ENERGY DEUTERONS IN SOLID TARGETS. Alberto Milocco, 2012, dissertation. Abstract: The construction of the nuclear fusion plant 'ITER' started in 2009 at Cadarache, France. The ITER machine represents a milestone in the civil use of nuclear fusion energy. The physics of ITER is based on the fusion reaction between deuteron and triton nuclei (d-t). The deuteron-deuteron reaction (d-d) is also of interest and is foreseen for the next generation of fusion reactors. The experimental activities carried out in the context of ITER neutronics involve intense fields of neutrons produced with a linear accelerator for deuterons, a target containing tritium or deuterium, and auxiliary structures such as the detector system, cooling system, room walls, etc. Experimental data have been obtained from the FNG (Frascati Neutron Generator, Italy), FNS (Fast Neutron Source, Japan), OKTAVIAN (Osaka University, Japan) and IRMM (Institute for Reference Materials and Measurements, EU). An independent method was developed at FNG for the simulation of the d-t neutron spectra at different angles. The FNG source routine models the Monte Carlo deuteron transport in solid tritiated targets as done in the well-known SRIM code. The neutrons are generated according to the tabulated probability of the d-t reactions, as in the DROSG2000 code. The FNG source routine is implemented in the MCNP distributions. The user defines in the MCNP input file the deuteron energy (up to 10 MeV), the beam width, and the target dimensions and composition. This source routine has been chosen as the starting point for the present thesis. Improvements and extensions were introduced:
- The methodology, originally developed for the d-t neutron source, has been extended to d-d neutron sources.
- Assuming that the SRIM code constitutes the reference calculation for deuteron transport in matter, its implementation in the source routine has been cross-checked by extracting from the latter the same quantities as provided by the original code.
- In the present version of the source routines, the cross sections are internally generated from built-in tables based on modern evaluated nuclear data files, instead of tables obtained from the DROSG2000 code.
- Since the model may be used up to 10 MeV deuteron energy, relativistic kinematics has been implemented to avoid unnecessary approximations.
- Simulations of bare neutron source spectra and angular yield measurements have been carried out to validate the model.
- New editions of the d-t and d-d source routine have been released for the latest versions of the MCNP codes and tested on LINUX and WINDOWS machines. The validation activities with the FNG and IRMM experimental data suggested a possible application of the source routine for the characterisation of neutron spectrometers in the MeV energy region.
The source routine has been used to simulate integral benchmark experiments at FNG, FNS and OKTAVIAN. Brand new MCNP benchmark models have been developed to include all the available experimental information. It is shown that the d-t source routine is an accurate tool for the generation of the source neutrons. It also proves useful for the evaluation of the neutron source term and the associated uncertainties. The accuracy of the analyses is pursued to the point that the quality of the nuclear data employed in the simulation can be assessed. To this end, the case of a new evaluation of the neutron interaction nuclear data for Manganese-55 is tested. A set of integral benchmark experiments has been used in the validation phase of the nuclear data; the computational models rely on the source routine, the object of the thesis. In conclusion, the source routine includes the major features responsible for the experimental resolution associated with the source term. The doctoral thesis explores its usage in the context of the experimental activities for ITER. The future exploitation of the source routine for the simulation of worldwide experiments might become an occasion to compare it with the source models available in other laboratories. Keywords: deuteron-triton reactions, low-energy deuterons, neutron source model, Monte Carlo method, solid tritium target, solid deuterium target, fusion neutronics, benchmark experiments, diamond detectors Published: 07.03.2012; Views: 3033; Downloads: 93 Full text (8,50 MB) |
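The neutron energies the abstract above refers to can be estimated from two-body reaction kinematics. The thesis implements relativistic kinematics; this sketch uses the standard non-relativistic two-body formula with integer mass numbers and Q = 17.589 MeV as an approximation, which already reproduces the familiar ~14 MeV d-t neutron energies at low deuteron energy.

```python
# Non-relativistic two-body kinematics for d + t -> n + alpha, as an
# approximation to the relativistic treatment in the thesis. Integer mass
# numbers (2, 1, 4) and Q = 17.589 MeV are simplifying assumptions.
from math import cos, radians, sqrt

Q_DT = 17.589  # MeV, Q-value of the d-t reaction

def neutron_energy(E_d, theta_deg, m_a=2.0, m_b=1.0, m_B=4.0, Q=Q_DT):
    """Lab-frame neutron energy (MeV) for deuteron energy E_d (MeV) and
    neutron emission angle theta (degrees), standard two-body formula."""
    c = cos(radians(theta_deg))
    term = sqrt(m_a * m_b * E_d) * c
    disc = m_a * m_b * E_d * c * c + (m_B + m_b) * (m_B * Q + (m_B - m_a) * E_d)
    return ((term + sqrt(disc)) / (m_B + m_b)) ** 2

print(neutron_energy(0.1, 0.0))   # forward emission at E_d = 100 keV, ~14.8 MeV
print(neutron_energy(0.1, 90.0))  # at 90 degrees, ~14.1 MeV
```

At vanishing deuteron energy the formula reduces to E_n = m_B Q / (m_B + m_b) ≈ 14.07 MeV, the textbook d-t neutron energy; the angular dependence at finite E_d is what the source routine folds with the deuteron slowing-down in the target.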
9. Two-dimensional velocity-vorticity based LES for the solution of natural convection in a differentially heated enclosure by wavelet transform based BEM and FEM. Jure Ravnik, Leopold Škerget, Matjaž Hriberšek, 2006, original scientific article. Abstract: A wavelet transform based boundary element method (BEM) numerical scheme is proposed for the solution of the kinematics equation of the velocity-vorticity formulation of the Navier-Stokes equations. FEM is used to solve the kinetics equations. The proposed numerical approach is used to perform two-dimensional vorticity transfer based large eddy simulation on grids with 10^5 nodes. Turbulent natural convection in a differentially heated enclosure of aspect ratio 4 is simulated for Rayleigh number values Ra = 10^7-10^9. An unstable boundary layer leads to the formation of eddies in the downstream parts of both vertical walls. At the lowest Rayleigh number value an oscillatory flow regime is observed, while the flow becomes increasingly irregular, non-repeating, asymmetric and chaotic at higher Rayleigh number values. The transition to turbulence is studied with time series plots, temperature-vorticity phase diagrams and power spectra. The enclosure is found to be only partially turbulent, which is qualitatively shown with second order statistics: Reynolds stresses, turbulent kinetic energy, turbulent heat fluxes and temperature variance. Heat transfer is studied via the average Nusselt number value, its time series and its relationship to the Rayleigh number value. Keywords: numerical modelling, boundary element method, discrete wavelet transform, large eddy simulation, velocity-vorticity formulation, natural convection Published: 31.05.2012; Views: 1546; Downloads: 54 Link to full text |
10. Optimization of elastic systems using absolute nodal coordinate finite element formulation. Bojan Vohar, Marko Kegl, Zoran Ren, 2006, short scientific article. Abstract: An approach to shape optimization of elastic dynamic multibody systems is presented. The proposed method combines an appropriate shape parameterization concept and a recently introduced finite element type using the absolute nodal coordinate formulation (ANCF). In ANCF, slopes and displacements are used as the nodal coordinates instead of infinitesimal or finite rotations. This way one avoids the interpolation of rotational coordinates and problems with finite rotations. ANCF elements are able to describe nonlinear deformation accurately; therefore, this method is very useful for simulations of lightweight multibody structures, where large deformations have to be taken into account. The optimization problem is formulated as a nonlinear programming problem and a gradient-based optimization procedure is implemented. The introduced optimization design variables are related to the cross-sectional parameters of the element and to the shape of the whole structure. The shape parameterization is based on the design element technique and a rational Bézier body is used as a design element. A body-like design element makes it possible to unify the shape optimization of both simple beams and beam-like (skeletal) structures. Keywords: mechanics, dynamics of material systems, multibody systems, elastic mechanical systems, manipulators, dynamically loaded beams, optimum shape design, absolute nodal coordinate formulation, design element technique, finite element method Published: 31.05.2012; Views: 1163; Downloads: 75 Link to full text |
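The design-element idea in the last abstract (shape is driven by a few control points of a rational Bézier parameterization, so the optimizer varies control points rather than raw nodal coordinates) can be sketched in one dimension lower. The article uses a rational Bézier body; this illustrative sketch evaluates a rational Bézier curve via de Casteljau's algorithm.

```python
# Sketch of a rational Bezier design element, reduced to a 2D curve:
# the optimizer would move the control points/weights, and every nodal
# position follows from the parameterization. De Casteljau evaluation
# in homogeneous coordinates; the example data are illustrative.

def rational_bezier(ctrl, weights, t):
    """Evaluate a rational Bezier curve at parameter t in [0, 1]."""
    # Lift to homogeneous coordinates (w*x, w*y, w), run de Casteljau,
    # then project back by dividing by the weight component.
    pts = [(w * x, w * y, w) for (x, y), w in zip(ctrl, weights)]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    X, Y, W = pts[0]
    return (X / W, Y / W)

# With unit weights this reduces to an ordinary quadratic Bezier curve,
# whose midpoint is 0.25*P0 + 0.5*P1 + 0.25*P2.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
p = rational_bezier(ctrl, [1.0, 1.0, 1.0], 0.5)
```

Non-unit weights pull the curve toward or away from individual control points, which is why the rational form gives the optimizer extra shape freedom at no cost in control-point count.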