SPMS 2025

Europe/Prague
Nároží 6, Děčín
Jiří Franc (Czech Technical University in Prague), Václav Kůs (KM FJFI CVUT), Antonie Brožová
Description

Stochastic and Physical Monitoring Systems (SPMS) has been an annual international conference since 2010, with the aim of bringing together students and researchers whose interests relate to the following topics:

  • Traffic and pedestrian modeling, microscopic structure of vehicular streams;
  • Elasticity and classification in material defectoscopy;
  • Small area estimation and generalized linear mixed models;
  • Data analysis in particle and experimental physics;
  • Stochastic models, Bayesian analysis, competitive games.


The SPMS 2025 conference will be held at the educational center "Zámecká Sýpka" in Děčín, Czech Republic, from 19 to 22 June 2025.

The meeting is organized by the Group of Applied Mathematics and Stochastics (GAMS), Department of Mathematics, Czech Technical University in Prague.

Registration
SPMS 2025 Registration Form
    • 2:00 PM
      Registration
    • 4:00 PM
      Welcome session
    • 5:00 PM
      Up to the sky session
    • 7:00 PM
      Dinner
    • 8:00 AM
      Breakfast
    • Stochastic Monitoring Systems
      Convener: Tomáš Hobza (FJFI CVUT)
      • 1
        Dispersion of a point set

        This presentation addresses the mathematical concept of dispersion of a point set, defined as the volume of the largest axis-aligned empty box within the unit cube that avoids a given set of points. Dispersion, closely related to discrepancy, serves as a measure of uniformity in point distributions and has applications in numerical integration, optimization, machine learning, and computer graphics. We establish new lower bounds on dispersion for sufficiently large volumes by leveraging a connection with cover-free families from extremal set theory. These bounds are asymptotically optimal up to a logarithmic factor and are derived using a small, well-chosen family of test boxes. The method further generalizes to arbitrary dimensions and volumes, leading to improved bounds and new state-of-the-art results.

        Speaker: Matěj Trödler
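        For readers new to the definition, a minimal brute-force sketch (not the lower-bound construction from the talk) computes the dispersion of a small 2-D point set directly; the function name and the restriction of box edges to point coordinates are illustrative choices:

```python
import itertools

def dispersion(points):
    """Brute-force dispersion of a 2-D point set: the area of the largest
    axis-aligned open box inside [0,1]^2 containing none of the points.
    It suffices to consider boxes whose edges lie on point coordinates
    or on the cube boundary."""
    xs = sorted({0.0, 1.0, *(p[0] for p in points)})
    ys = sorted({0.0, 1.0, *(p[1] for p in points)})
    best = 0.0
    for x0, x1 in itertools.combinations(xs, 2):
        for y0, y1 in itertools.combinations(ys, 2):
            # the open box (x0,x1) x (y0,y1) must avoid every point
            if all(not (x0 < px < x1 and y0 < py < y1) for px, py in points):
                best = max(best, (x1 - x0) * (y1 - y0))
    return best

# A single point at the centre leaves an empty half-square of area 1/2.
print(dispersion([(0.5, 0.5)]))  # 0.5
```

        This exhaustive search is exponential-free but O(n^4); the talk's interest is precisely in avoiding such enumeration via a small family of test boxes.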
      • 2
        Gamma linear mixed models for estimation of small area additive parameters

        This study presents an approach to predicting population means and poverty proportions in small area estimation using gamma linear mixed models. Extending the framework of Hobza et al. (2020), which utilizes an inverse link function, we introduce a model based on a logarithmic link. Parameter estimation is carried out via maximum likelihood, employing the Laplace approximation. To estimate additive parameters, we develop both plug-in and marginal predictors and assess their mean squared errors through a parametric bootstrap procedure. The proposed methodology is evaluated against the original approach using real-world data from the 2013 Spanish Living Conditions Survey for the Valencia region.

        Speaker: Vendula Rusá
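        The log-link gamma fit at the core of such models can be sketched as a plain fixed-effects gamma GLM fitted by Fisher scoring; the random area effects and the Laplace approximation of the full mixed model are omitted, and the data below are invented for illustration:

```python
import numpy as np

def gamma_glm_log_link(X, y, n_iter=50):
    """Fit a gamma GLM with logarithmic link, E[y] = exp(X @ beta), by
    Fisher scoring (IRLS).  With a log link the working weights are
    constant, so each scoring step reduces to an ordinary least-squares
    solve on the working response."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([0.5, -0.3])
shape = 5.0
y = rng.gamma(shape, np.exp(X @ true_beta) / shape)   # E[y] = exp(X beta)
print(gamma_glm_log_link(X, y))  # close to [0.5, -0.3]
```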
      • 3
        Implementation of a Monitoring Dashboard for the AMBER Experiment at CERN

        This presentation introduces the design and implementation of a monitoring dashboard for the AMBER experiment conducted at CERN. The goal of the project was to evaluate existing software solutions for IT infrastructure monitoring, analyze the requirements for a new monitoring system, and most importantly, develop and implement an optimal solution. The resulting system enables aggregation and visualization of key experiment metrics, alert notifications, and historical data storage. The presentation also outlines potential future enhancements, such as a configuration interface or deployment in a Kubernetes environment.

        Speaker: Jan Chrastina
    • 10:10 AM
      Coffee Break
    • Machine Learning Applications and Data Analysis
      Convener: Jiří Franc (Czech Technical University (CZ))
      • 4
        Reconstruction of Powder Diffractograms in 4D-STEM Microscopy

        This presentation addresses the reconstruction of 2D powder diffractograms acquired using 4D-STEM in SEM microscopy. Using image processing techniques, we aim to improve the analysis of crystalline samples through a novel method, 4D-STEM/PNBD, developed in collaboration with UMCH and UTIA. After a brief introduction to the 4D-STEM/PNBD framework, we summarize existing reconstruction approaches and propose two new classical methods for diffractogram reconstruction. We also present two procedures for generating synthetic diffractograms—one based on estimated data characteristics and the other on physical simulation of the diffraction process. These synthetic images are then used to train a segmentation neural network to support the reconstruction process. Finally, all proposed methods are evaluated on three crystalline samples.

        Speaker: David Rendl
      • 5
        Subgroup detection using embedding-based approaches

        We present SDUEBA, a structured and modular pipeline for subgroup discovery that combines embedding-based instance representation, manifold-space clustering, and interpretable rule induction using decision trees. Designed to overcome the limitations of traditional exhaustive search methods, SDUEBA transforms heterogeneous input data—including categorical, numerical, binary, and textual features—into dense, unified vector embeddings. These embeddings capture the latent structure of the data and are clustered using scalable algorithms optimized for high-dimensional spaces.

        To ensure interpretability, cluster membership is translated into human-readable rules using decision tree models trained on the original feature space. This approach allows for clear, actionable subgroup definitions while preserving statistical validity and minimizing redundancy. The framework supports both supervised and unsupervised embedding strategies, making it highly adaptable to diverse use cases and target definitions.

        Comprehensive experimental evaluation on real-world datasets demonstrates that SDUEBA is capable of identifying a wide range of meaningful and diverse subgroups. The resulting patterns exhibit strong alignment with target variables, high coverage, and low overlap, all while maintaining concise and interpretable descriptions. This positions SDUEBA as a powerful tool for practitioners seeking scalable and explainable subgroup discovery in complex data environments.

        Speaker: Matyáš Veselý
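        The three-stage pipeline described above (embed, cluster, explain) can be imitated on toy data; here PCA stands in for SDUEBA's embedding step, which the abstract does not specify, and scikit-learn supplies the clustering and rule-induction stages:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Two well-separated groups of 5-dimensional points as a toy dataset.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])

# Stage 1: embedding (PCA as a stand-in for learned embeddings).
emb = PCA(n_components=2).fit_transform(X)
# Stage 2: clustering in the embedding space.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
# Stage 3: interpretable rules on the ORIGINAL features that mimic
# the cluster membership, giving human-readable subgroup definitions.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)

print(export_text(tree, feature_names=[f"x{i}" for i in range(5)]))
print("rule fidelity:", tree.score(X, labels))  # how well rules match clusters
```

        The fidelity score quantifies how faithfully the shallow rules reproduce the clusters, one simple proxy for the "statistical validity" the abstract mentions.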
      • 6
        Explainability in Deep Learning using AJIVE matrix factorization

        This contribution presents results of an ongoing project on explainable AI in the context of breast cancer risk prediction. The problem is addressed as a multi-view classification task on X-ray mammography images. The two views of a standard mammography screening (CC and MLO) are considered as distinct modalities, and a CNN-based deep neural network is used to predict the cancer risk score (BI-RADS) in a fully supervised setup. The trained network is used to separately extract latent representations from the two views, creating two data blocks that are further analyzed by the Angle-based Joint and Individual Variation Explained (AJIVE) method. This method is designed to analyze multi-block data by decomposing each block into three components:

        • Joint component - variation shared across all blocks,
        • Individual component - variation unique to the block,
        • Residual noise.

        This decomposition is combined with saliency map visualization to highlight the regions in the original mammography images responsible for shared/individual variation.

        Speaker: Jan Zavadil (UiT)
    • 12:30 PM
      Lunch
    • Acoustic Emission and Defectoscopy
      Convener: Václav Kůs (KM FJFI CVUT)
      • 7
        Machine Learning for Ultrasonic Fault Propagation Monitoring

        This paper focuses on using machine learning methods to detect and monitor fatigue crack propagation using ultrasonic signals. The basic principle is the detection of a prolonged transit time of the signal as an indicator of changes in the material caused by growth of the failure. A multilayer perceptron (MLP) network supplemented with simple convolutional layers is employed to analyse the signal parameters. First, synthetic data are generated using basic physical principles to verify the underlying phenomenology and the network's ability to detect material changes based on prolonged elastic wave transit times. The focus is particularly on methods to suppress the over-training phenomenon. Subsequently, the approach is applied to real pipeline fatigue test data. The results demonstrate the potential of even small neural architectures for the early detection of failures in engineering components.

        Speaker: Mr Milan Chlada
      • 8
        Bearing Fault Detection Using Neural Networks

        The research focuses on the classification of bearing damage using acoustic emission data. The methodology involves the collection of experimental data across six categories of bearing faults using three different types of sensors. Several neural network architectures are proposed and evaluated, including Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and InceptionTime. The study includes hyperparameter optimization and selection of a baseline model for performance comparison. To assess practical applicability, two key experiments are conducted: the first evaluating the impact of additional gearbox noise on classification accuracy, and the second testing the models’ ability to generalize across sensors not used during training.

        Speaker: Petr Vojtášek (Student)
      • 9
        Development of an application for long-term acoustic emission measurement

        As part of an experiment measuring acoustic emission in a bearing assembly, to be carried out at the Institute of Thermomechanics of the Czech Academy of Sciences, the need arose to design and implement an application with specific requirements intended for long-term data collection from a TiePie oscilloscope. This is the first of two parts of my bachelor thesis, in which data from the experiment are to be measured and then analysed. The main goal of the first part was to create a practical application, which was achieved by means of a graphical user interface in which the user can adjust various settings of the oscilloscope and directly examine the measured data in the form of a simple graph and an audio recording.

        Speaker: Martin Satranský (CTU FNSPE)
      • 10
        Improved Detection of Continuous AE Sources through Time-Reversed Signal Processing

        Reliable localisation of acoustic emission (AE) sources is one of the most critical inverse problems in non-destructive testing and structural health monitoring of engineering structures. Conventional AE localisation techniques often fail when applied to complex structures exhibiting wave dispersion, velocity variations, or geometric irregularities. Locating continuous AE signals, such as those generated by leakage, remains particularly challenging compared to localising short, burst-type emissions.
        Time reversal (TR) signal processing offers an effective solution to these challenges. By enabling space-time wave focusing and partial reconstruction of the source signal, TR improves localisation accuracy. In this study, we employ the TR method to enhance the signal-to-noise ratio (S/N) and use cross-correlation functions (CCF) to identify the source location of continuous AE signals.
        The proposed TR-based localisation approach was validated using an artificial AE source (a piezoelectric transducer emitting a continuous noise signal) applied to an aluminium plate with circular holes. The simulated leakage signal was recorded at various positions, time-reversed, and retransmitted into the structure. Accurate source localisation required detailed surface scanning around the initially estimated source area.
        Experimental results demonstrate the potential of TR signal processing for practical engineering applications, particularly in the precise localisation of continuous AE sources.

        Speaker: Dr Zuzana Dvořáková (Institute of Thermomechanics of the CAS)
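        The cross-correlation step underlying the approach can be illustrated in one dimension: a continuous noise source reaches two sensors with different propagation delays, and the peak of the CCF recovers the delay difference that the TR-based scan then refines (the toy signal and delays below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d1, d2 = 4096, 30, 75            # samples; delays to sensors 1 and 2
src = rng.normal(size=n)            # continuous noise source (e.g. leakage)
s1 = np.concatenate([np.zeros(d1), src])[:n]   # record at sensor 1
s2 = np.concatenate([np.zeros(d2), src])[:n]   # record at sensor 2

ccf = np.correlate(s2, s1, mode="full")
lag = np.argmax(ccf) - (n - 1)      # estimated delay difference
print(lag)  # 45  (= d2 - d1)
```

        Cross-correlating with a time-reversed copy is the discrete analogue of the physical retransmission used in the experiment: the correlation peaks where the reversed wave refocuses on the source.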
    • 3:40 PM
      Coffee Break
    • 4:30 PM
      Up to the sky - Advanced session
    • 7:00 PM
      Dinner
    • 7:45 AM
      Breakfast
    • 9:00 AM
      Whole day trip
    • 6:00 PM
      Dinner and a Fire in the Sky
    • 8:30 AM
      Breakfast
    • Machine Learning Applications and Data Analysis
      Convener: František Gašpar
      • 11
        Estimation of source term of Chernobyl wildfires using variational deep image prior

        In April 2020, monitoring stations across Europe detected the presence of Cesium-137, released during wildfires in the Chernobyl region. The source of this emission is believed to be both spatially and temporally distributed, with Cesium-137 bound to particles of varying sizes. To estimate the spatio-temporal distribution of the emission, we formulate the task as an inverse problem using a source-receptor sensitivity matrix. However, due to a limited number of measurements, this problem is highly ill-posed. To obtain a meaningful solution without relying on additional data, we incorporate prior information through a neural network. Specifically, we employ the concept of deep image prior, which leverages the inherent structural bias of convolutional U-net architectures toward generating smooth and natural-looking images. In this talk, we will introduce the deep image prior for atmospheric inversion and compare its performance with a traditional inversion method based on Tikhonov regularization and with an estimate derived from satellite images of the 2020 Chernobyl wildfires.

        Speaker: Antonie Brožová
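        The Tikhonov baseline mentioned in the abstract has a closed-form solution; a minimal sketch on an invented toy source-receptor matrix:

```python
import numpy as np

def tikhonov(M, y, lam):
    """Tikhonov-regularised solution of the ill-posed system M x ≈ y:
    x = argmin ||M x - y||^2 + lam * ||x||^2, via the normal equations."""
    k = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(k), M.T @ y)

# Toy setting: 10 measurements, 50 unknown source-term entries.
rng = np.random.default_rng(3)
M = rng.uniform(0, 1, (10, 50))        # toy source-receptor sensitivity matrix
x_true = np.zeros(50)
x_true[20:25] = 1.0                    # localized emission in space-time
y = M @ x_true + rng.normal(0, 0.01, 10)

x_hat = tikhonov(M, y, lam=0.1)
print(np.linalg.norm(M @ x_hat - y))   # small residual despite 10 eqs for 50 unknowns
```

        The deep image prior replaces the explicit penalty `lam * ||x||^2` with the implicit bias of a U-net parameterisation of x; the data-fit term stays the same.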
      • 12
        SpheroSeg: A platform for robust segmentation of tumor spheroids

        Accurate quantitative analysis of 3D multicellular tumor spheroids is crucial in cancer research and drug development. However, manual segmentation of machine-scanned data is time-consuming and represents a significant obstacle to further development in this field. This presentation introduces SpheroSeg, a new AI-based web platform for robust spheroid segmentation. We will describe the process in detail, starting with the development of an optimized pipeline using classical image processing methods, which enabled the creation of a large dataset containing 21,462 images and their annotated masks. This dataset was then used to train a segmentation model based on convolutional neural networks. Our model is a ResUNet architecture enhanced with residual blocks, Squeeze-and-Excitation modules, and attention gates, designed to handle diverse spheroid morphologies and image inconsistencies. We will present the architecture, training paradigm, and performance evaluation of the model, demonstrating its high accuracy and generalization ability. The SpheroSeg online platform aims to make advanced spheroid analysis accessible to the scientific community, thereby accelerating development and reproducibility.

        Speaker: Michal Průšek
    • Dynamic Decision Making
      Convener: František Gašpar
      • 13
        Tools for Adaptive Portfolio Optimization

        This research project presents a combination of techniques to develop a sound mathematical approach to the portfolio optimization problem. The problem is formulated as a Linear Quadratic Regulator and solved using Dynamic Programming. The key contributions include integrating multivariate regression modeling of returns with structure estimation for the regressor subset and employing exponential forgetting with an algorithm for a varying forgetting factor. The optimal allocation is obtained by solving a constrained quadratic programming problem featuring a custom reward function. We highlight the importance of structure estimation and the sequential approach, while also exploring the potential of modeling optimal allocation using the same regression framework as for returns.

        Speaker: Tomáš Procházka
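        One ingredient named above, exponential forgetting in the regression of returns, can be sketched as recursive least squares with a constant forgetting factor; the varying-factor algorithm and the LQR layer are omitted, and the data are illustrative:

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting: at each step
    past data are down-weighted by lam, so the regression of returns can
    track slowly drifting market dynamics."""
    k = phi.shape[1]
    theta = np.zeros(k)          # regression coefficients
    P = delta * np.eye(k)        # (scaled) inverse information matrix
    for x, yt in zip(phi, y):
        Px = P @ x
        gain = Px / (lam + x @ Px)
        theta = theta + gain * (yt - x @ theta)
        P = (P - np.outer(gain, Px)) / lam
    return theta

# Toy returns generated from a fixed linear model of two regressors.
rng = np.random.default_rng(4)
phi = rng.normal(size=(500, 2))
y = phi @ np.array([0.8, -0.2]) + rng.normal(0, 0.05, 500)
print(rls_forgetting(phi, y))  # close to [0.8, -0.2]
```

        With lam = 0.98 the effective data window is roughly 1/(1 - lam) = 50 observations, which is the trade-off the varying-forgetting-factor algorithm adapts online.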
      • 14
        Belief and Reality in Quantum-Like Model: Lessons from a Rat in a Maze

        We present a simple toy model of a rat moving in a maze [1, 2], where the rat’s motion is governed by a quantum-inspired rule. At each time step, the rat samples its next position from a probability distribution that evolves according to a fixed Hamiltonian. This gives rise to the objective probability — the distribution that actually drives the rat’s movement.
        Meanwhile, an observer outside the maze does not know the rat’s objective probability distribution. Instead, the observer updates their subjective probability — a belief about where the rat might be — based on indirect information (the rat’s actual positions over time) and prior knowledge (initial position and evolution rule).
        This setup clearly illustrates the difference between objective and subjective probabilities, a distinction that plays a central role in quantum foundations (e.g., in QBism [3] and related interpretations). Our model demonstrates how the same physical system can give rise to two distinct yet meaningful probability distributions: one tied to the system's internal behavior, and one tied to the observer's knowledge. Their difference can be called mismodelling.
        Although the presented setup is simple, its main contribution is interpretational and connects to areas where real-world systems must be tracked or inferred based on uncertain or partial data.

        [1] A. Gaj, M. Kárný. Quantum Model of the Rat in a Maze. Poster, April 2024. DYNALIFE: Quantum Information and Decision Making in Life Sciences, Prague.
        [2] A. Gaj, M. Kárný. Quantum Rat Vol.2: Out-of-the-Box Thinking in a Boxed Environment. Poster, June 2024. Quantum Information and Probability: from Foundations to Engineering (QIP25), Vaxjo, Sweden.
        [3] C. M. Caves, Ch. A. Fuchs, and R. Schack. Quantum probabilities as Bayesian probabilities. Physical Review A, 65(2), January 2002.

        The authors acknowledge the contribution of the Grant Agency of the CTU in Prague, grant No. SGS25/167/OHK4/3T/14.

        Speaker: Mr Aleksej Gaj
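        A minimal numerical version of the setup, with an invented 4-cell "maze" Hamiltonian and the simplest possible belief update (empirical frequencies), illustrates the two distributions:

```python
import numpy as np

# The wavefunction over 4 maze cells evolves under a fixed Hamiltonian H;
# the OBJECTIVE distribution is |psi_t|^2 (Born rule) and drives the
# rat's sampled positions.  The observer's SUBJECTIVE distribution here
# is just the empirical frequency of observed positions -- one simple
# choice of belief update, not the one from the posters.
rng = np.random.default_rng(5)
H = np.array([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T   # one-step evolution e^{-iH}

psi = np.array([1., 0, 0, 0], dtype=complex)    # rat starts in cell 0
counts = np.zeros(4)
for _ in range(2000):
    psi = U @ psi
    objective = np.abs(psi) ** 2                # drives the rat's move
    objective /= objective.sum()                # guard against rounding drift
    counts[rng.choice(4, p=objective)] += 1
subjective = counts / counts.sum()              # observer's belief
print(np.round(subjective, 2))
```

        The gap between the instantaneous objective distribution and this time-averaged belief is one concrete face of the "mismodelling" discussed in the abstract.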
    • 10:55 AM
      Coffee break
    • Stochastic Monitoring Systems
      Convener: Václav Kůs (KM FJFI CVUT)
      • 15
        Constrained Convolution Schema: A Deterministic Framework for Random Walks on Fractal Sets

        Random walks on fractal sets are central to modelling diffusion in complex systems, but traditional Monte Carlo simulations introduce stochastic noise and rely on averaging, often obscuring fine structural effects. We present a deterministic method, the constrained convolution schema, for modelling random walks on sparse grids. Grounded in a standard Markovian framework, this approach yields an exact and noise-free evolution of the probability distribution over time. We demonstrate its usability through an efficient algorithmic computation of return probabilities and second moments. Numerical experiments on reference fractals with known dimensions confirm the precision and effectiveness of the method. The constrained convolution schema provides a general alternative to Monte Carlo techniques in applications requiring high precision and a complete probabilistic description of diffusion.

        Speaker: František Gašpar
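        The deterministic evolution that replaces Monte Carlo sampling can be sketched as repeated masked convolution-like updates of the probability field; the toy grid below is a one-level Sierpinski-carpet-like mask, not one of the reference fractals from the talk:

```python
import numpy as np

def step(p, mask):
    """One exact step of the nearest-neighbour walk restricted to the
    cells where mask is True: each of the 4 moves is attempted with
    probability 1/4, and a move into a forbidden cell is rejected (the
    walker stays), so probability mass is conserved exactly."""
    q = np.zeros_like(p)
    stay = np.zeros_like(p)
    for axis in (0, 1):
        for shift in (1, -1):
            can_move = np.roll(mask, -shift, axis)          # target allowed?
            q += np.roll(0.25 * p * can_move, shift, axis)  # moved mass
            stay += 0.25 * p * (~can_move)                  # rejected mass
    return q + stay

# 9x9 grid with a blocked border (so nothing wraps around the array
# edges) and a central hole, mimicking one carpet subdivision level.
mask = np.ones((9, 9), dtype=bool)
mask[[0, -1], :] = mask[:, [0, -1]] = False
mask[4, 4] = False
p = np.zeros((9, 9))
p[2, 2] = 1.0                       # initial distribution: point mass

for _ in range(100):
    p = step(p, mask)
print(p.sum())    # conserved: 1.0 up to float rounding
print(p[2, 2])    # exact probability of being back at the start
```

        Unlike a Monte Carlo run, every entry of p is the exact occupation probability, so return probabilities and second moments come out noise-free.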
      • 16
        Dirichlet Forms and Soft Boundary Conditions in Discrete Potential Theory

        This presentation introduces a novel framework of soft boundary conditions for discrete potential theory, where energetic preferences replace rigid constraints. We present a "Quadratic Orbit Theorem" for calculating the minimal energy under these conditions using sums over specifically defined soft orbits. This framework is then applied to a continuous double-well potential model. The main theorem demonstrates that the partition function of this double-well system can be exactly mapped to that of an effective Ising-like model, termed the "Gauss-Ising model." The interactions in this new model are determined by our orbit-based energy calculations, and it exhibits distinct low- and high-temperature behaviours compared to the standard Ising model.

        Speaker: Daniel Khol (Department of Mathematics)
      • 17
        Explainable AI Models for Knowledge Assessment

        This presentation focuses on the use of explainable artificial intelligence methods in the context of knowledge testing, with a particular emphasis on Bayesian networks. Building on a research project originally conducted as part of a master's thesis, the aim was to demonstrate how these models can not only accurately assess students’ skills based on their responses but also provide clear and interpretable explanations of the results. Special attention is given to computational methods within these models, especially the comparison between MPE (Most Probable Explanation) and MAP (Maximum a Posteriori) inference. The findings show that the choice of inference method can lead to significantly different outcomes, which has critical implications for the interpretation of test results.

        Speaker: Barbora Bumbálková
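        The MPE/MAP distinction can be shown numerically with two binary skill variables and an invented posterior: the most probable complete configuration disagrees with the marginally most probable value of a single skill.

```python
import numpy as np

# Invented posterior over two hidden binary skills (S1, S2) after
# observing a student's answers; rows index S1, columns index S2.
joint = np.array([[0.35, 0.05],
                  [0.30, 0.30]])
joint = joint / joint.sum()

# MPE: the single most probable *complete* configuration of (S1, S2).
mpe = tuple(int(i) for i in np.unravel_index(np.argmax(joint), joint.shape))

# MAP for S1 alone: marginalise S2 out first, then maximise.
map_s1 = int(np.argmax(joint.sum(axis=1)))

print("MPE (S1, S2):", mpe)   # (0, 0): jointly most probable explanation
print("MAP for S1:", map_s1)  # 1: marginally, S1 = 1 is more likely
```

        Here the complete explanation says the student lacks both skills, while the marginal says skill S1 is probably present, exactly the kind of divergence the abstract flags as critical for interpreting test results.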
      • 18
        Numerical implementation of Singular Value Decomposition

        While SVD has been a cornerstone of numerical linear algebra for over a century, its efficient computation remains an active area of research: Golub–Kahan bidiagonalisation, Jacobi and QR iterations, divide-and-conquer schemes, and modern randomized or block-Krylov variants now coexist with GPU-accelerated implementations. Each algorithm offers distinct trade-offs among accuracy, stability, memory traffic, and parallel scalability across dense, sparse, and streaming data sets.

        A unified software interface can encapsulate several of these algorithms—whether in C++, Python, or CUDA—allowing practitioners to switch strategies as matrix size, sparsity, or hardware changes. Such flexibility is crucial in data-driven pipelines where SVD underpins dimensionality reduction (PCA), latent semantic indexing, image denoising, and low-rank recommendation. Selecting an appropriate algorithm therefore directly affects downstream performance, numerical robustness, and energy efficiency. The talk outlines how contemporary SVD implementations bring classical theory and modern data processing into productive alignment.

        Speaker: Ruslan Guliev
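        A unified front end of the kind described can be sketched in a few lines, dispatching between an exact LAPACK-backed SVD and a randomized (Halko-Martinsson-Tropp-style) truncated SVD; the interface and names are illustrative, not an existing library:

```python
import numpy as np

def svd(A, method="exact", rank=None, n_iter=4, seed=0):
    """Illustrative unified SVD front end: 'exact' dispatches to LAPACK
    via numpy; 'randomized' builds a rank-`rank` factorisation from an
    oversampled random sketch refined by power iterations."""
    if method == "exact":
        return np.linalg.svd(A, full_matrices=False)
    if method == "randomized":
        rng = np.random.default_rng(seed)
        Q = rng.normal(size=(A.shape[1], rank + 5))   # oversampled sketch
        for _ in range(n_iter):                       # power iterations
            Q, _ = np.linalg.qr(A @ Q)
            Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)                    # basis for range(A)
        U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ U)[:, :rank], s[:rank], Vt[:rank]
    raise ValueError(method)

# A 200x100 matrix of exact rank 30: the randomized path recovers it.
rng = np.random.default_rng(6)
A = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 100))
U, s, Vt = svd(A, method="randomized", rank=30)
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True: rank fully captured
```

        Callers switch strategies by changing one argument, which is the kind of flexibility the abstract argues matters as matrix size, sparsity, or hardware changes.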
    • 12:30 PM
      Lunch
    • 1:30 PM
      Closing remarks and Departure