Stochastic and Physical Monitoring Systems (SPMS) has been an annual international conference since 2010, with the aim of bringing together students and researchers whose areas of interest relate to the following topics:
The meeting is organized by the Group of Applied Mathematics and Stochastics (GAMS), Department of Mathematics, Czech Technical University in Prague.
The SPMS 2019 conference will be held at Sokol Dobřichovice in the Czech Republic from 20 to 24 June 2019.
The paper presents a new theory of invariants to Gaussian blur. Unlike earlier methods, the blur kernel may be arbitrarily oriented, scaled, and elongated. Such blurring is a semi-group action in the image space, where the orbits are classes of blur-equivalent images. We propose a non-linear projection operator which extracts the blur-insensitive component of the image. The invariants are then formally defined as moments of this component but can be computed directly from the blurred image without an explicit construction of the projections. Image description by the new invariants does not require any prior knowledge of the particular blur kernel shape and does not include any deconvolution. Potential applications are in blur-invariant image recognition and in robust template matching.
The variational Bayes method has previously been used for the analysis of scintigraphic image sequences. The aim of the analysis is to extract time-activity curves of a radiotracer in different regions of the scanned body part. This problem may be viewed as nonnegative matrix factorization: the sequence can be decomposed into a matrix of factor curves and a matrix of factor images. Modelling the factor curves as a convolution of a tissue-specific kernel and an input function is inspired by the distribution of the radiotracer in tissue. The proposed convolutional model was further improved by incorporating a smoothness assumption on the convolution kernels in order to meet physiological expectations. The inference is tested on a sequence of MRI images.
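The matrix factorization view of the sequence can be illustrated with a minimal sketch: plain multiplicative-update NMF on synthetic data. The dimensions, the random factors, and the update rule are illustrative assumptions standing in for the variational Bayes inference of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "sequence" of 40 frames with 100 pixels each, generated
# from 3 nonnegative factor images and 3 factor curves.
F_true = rng.random((100, 3))          # factor images (pixels x factors)
C_true = rng.random((3, 40))           # factor curves (factors x frames)
M = F_true @ C_true                    # observed sequence matrix

def nmf(M, k, iters=500, eps=1e-9):
    # Multiplicative-update NMF (Lee-Seung rules): M ~ W @ H, W, H >= 0.
    W = rng.random((M.shape[0], k)) + eps
    H = rng.random((k, M.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(M, 3)
# Relative reconstruction error stays small for an exactly rank-3 matrix.
rel_err = np.linalg.norm(M - W @ H) / np.linalg.norm(M)
```

The nonnegativity constraint is what makes the factor images and curves interpretable as physical quantities.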
A general introduction to the Mechatronics department of Porsche Engineering Services, covering the areas of interest of our organisational units with some examples of finished or ongoing projects. There is growing activity in ADAS development in Prague, which provides opportunities for advanced signal processing, signal fusion, image processing, and so on. We also cooperate with academic partners, mainly CTU and CIIRC, especially in the development of autonomous driving. Students may join the projects in the form of part-time jobs or through their theses.
Independent Component Extraction (ICE) is a novel approach, based on the fundamentals of Independent Component Analysis, that deals with blind source separation. The model assumes a linear mixture of independent sources of interest (SOIs) and aims to restore them from the mixture. In ICE, only one SOI is extracted and the rest is treated as background.
Most applications of ICA/ICE are in acoustic signal processing. In this paper, the ICE model is used for blind detection of a single topic in text documents. The results of known unsupervised methods are used for comparison. Transfer learning is then used to compare ICE also with supervised methods. The pros and cons of all approaches are discussed.
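The idea of extracting a single component and leaving the rest as background can be sketched with a one-unit FastICA-style fixed-point iteration on a synthetic mixture. The mixing setup, the tanh nonlinearity, and all dimensions are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Hypothetical mixture: one non-Gaussian source of interest plus a
# Gaussian background, observed through a random mixing matrix.
soi = rng.laplace(size=n)               # super-Gaussian SOI
bg = rng.normal(size=(2, n))            # Gaussian background
X = rng.normal(size=(3, 3)) @ np.vstack([soi, bg])

# Whiten the observations.
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E / np.sqrt(d)) @ E.T @ X

# One-unit fixed-point iteration (tanh nonlinearity) extracting a
# single independent component; everything else stays as background.
w = rng.normal(size=3)
w /= np.linalg.norm(w)
for _ in range(200):
    wx = w @ Z
    g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
    w = (Z * g).mean(axis=1) - gp.mean() * w
    w /= np.linalg.norm(w)

est = w @ Z
corr = abs(np.corrcoef(est, soi)[0, 1])   # alignment with the true SOI
```

Because the Gaussian background carries no higher-order structure, the fixed point lands on the single non-Gaussian source, up to sign.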
Accurate and effective assessment of individual and collective behaviour, as well as team dynamics, is a key managerial task. Current methods, including annual employee reports and direct observations, depend either on the manager's emotional capacity or on the employee's ability to assess their own overall affective state. Therefore, we present a method providing a continuous stream of information and construct a methodology for assessing both the current mood state and overall well-being. First, we define basic quantities such as mood state, stability, and predictability as probability distributions, which are, among many others, basic characteristics of each employee. Afterwards, measures allowing short-term and long-term assessment of the employee's affective state are introduced. Besides individual state assessment, we turn our attention to collective behaviour. By means of distance measures, we evaluate the alikeness of a team of individuals as the similarity between distributions representing specific characteristics of employees. Finally, we present visualisation possibilities, based on dimensionality reduction methods, allowing comparison of employees based on their position in a two-dimensional plane.
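The distribution-based comparison and the two-dimensional visualisation can be sketched as follows. The mood distributions are hypothetical, and the Jensen-Shannon distance and classical MDS are one possible choice of distance measure and dimensionality reduction; none of these specific choices is claimed by the abstract.

```python
import numpy as np

# Hypothetical mood-state distributions of four employees over five
# discrete affective states (each row sums to one).
P = np.array([
    [0.50, 0.20, 0.10, 0.10, 0.10],
    [0.45, 0.25, 0.10, 0.10, 0.10],   # similar to employee 0
    [0.10, 0.10, 0.20, 0.30, 0.30],   # very different profile
    [0.20, 0.20, 0.20, 0.20, 0.20],
])

def js_distance(p, q, eps=1e-12):
    # Jensen-Shannon distance between two discrete distributions.
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

n = len(P)
D = np.array([[js_distance(P[i], P[j]) for j in range(n)] for i in range(n)])

# Classical MDS: embed the distance matrix into a two-dimensional plane
# for visual comparison of employees.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))
```

Employees with similar mood distributions end up close together in the plane, which is exactly the comparison the abstract proposes to visualise.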
The purpose of this work is to show some aspects of developing higher-order numerical schemes for flow in porous media. A gentle introduction to the mixed-hybrid finite element method (MHFEM) is included. In particular, we show the development of a scheme of second order in both space and time. We also verify the validity of the scheme on a 1D problem consisting of a single porous medium equation by means of the well-known Barenblatt analytical solution.
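The Barenblatt solution used for verification can be evaluated directly; the sketch below checks its mass conservation numerically for the 1D porous medium equation $u_t = (u^m)_{xx}$. The exponent $m$ and the constants are illustrative choices, not the values used in the paper.

```python
import numpy as np

# Barenblatt self-similar solution of the 1D porous medium equation
# u_t = (u^m)_xx; in 1D the similarity exponents satisfy
# alpha = beta = 1/(m+1).
m = 2.0
alpha = 1.0 / (m + 1.0)
k = alpha * (m - 1.0) / (2.0 * m)
C = 1.0                                # free constant fixing the mass

def barenblatt(x, t):
    xi = x / t ** alpha
    return t ** (-alpha) * np.maximum(C - k * xi ** 2, 0.0) ** (1.0 / (m - 1.0))

# The exact solution conserves total mass in time; we check this with a
# simple quadrature at two different times.
x = np.linspace(-20.0, 20.0, 20001)
dx = x[1] - x[0]
mass_t1 = barenblatt(x, 1.0).sum() * dx
mass_t2 = barenblatt(x, 2.0).sum() * dx
```

Mass conservation follows from the scaling $\int u\,dx = t^{-\alpha}\,t^{\alpha}\int f(\xi)\,d\xi$, so it holds independently of the constants, which makes it a convenient sanity check for any numerical scheme tested against this solution.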
A series of evacuation experiments was performed in road tunnels in Poland in order to test how pedestrians react when exposed to reduced visibility, how the decision-making process is carried out, and finally what is the impact of various circumstances, such as different levels of smokiness, competitive behaviour, or a learning effect, on the evacuation process. In some experiments, pedestrians were exposed to artificial, non-toxic smoke. During the evacuation of a group of people gathered in low and moderate levels of smokiness ($Cs < 0.5\,m^{-1}$), we observed multi-line patterns created by pedestrians, with decision making engaged in only by the first group of passengers, while under heavy smokiness ($Cs > 0.7\,m^{-1}$) we observed decision making by small groups and characteristic double-line patterns. In four experiments, the same group of participants was involved, and a learning effect was observed: increasingly shorter pre-movement times and a decreasing time required to leave the main tunnel. We show that movement speed in smoke is influenced by the evacuees' attitude and familiarity with the environment and evacuation procedures, and not only by the visibility level.
In pedestrian dynamics research, enhancing the quality of models and data analysis is the principal goal. The global aim of this research is to show how the individual analysis of data from egress experiments can improve the pedestrian model. Improved rules at all decision-making levels make the model more realistic from a microscopic perspective, and thus its predictive power increases.
This contribution presents the beginning of the whole process: our microscopic decision-based model, continuous in both time and space, is introduced. The model rules determining pedestrian movement in a highly individual way are implemented, and their impact on the model output is shown in a calibration case study.
The contribution presents a generalized approach to the approximation of flow in the cellular floor-field model of pedestrian dynamics. The approximations are successfully applied to models with both the Euclidean and the Manhattan metric, and for various numbers and locations of exit cells. The concept of spatially dependent friction is presented; the flow through a bottleneck can be calibrated by the value of a local friction parameter. The accuracy of the approximation method is also verified for this model variation. The efficiency of the procedures is demonstrated by calibrating the model for a real evacuation experiment.
A double-deck rail car egress experiment, aiming to estimate the effects of exit type, exit width, and the structure of passengers on evacuation time, was organized in cooperation with UCEEB FSv ČVUT. Almost 90 participants were divided into two groups: high school students formed the homogeneous group, while the heterogeneous group comprised young children, adults, and the elderly, respecting the distribution observed in trains in the Czech Republic. The exit width varied from 0.65 m to 1.34 m, and the exit type was changeable as well.
The effects of the investigated parameters are as follows: evacuation time decreases with increasing exit width (a 30% difference between 0.65 m and 1.34 m), the homogeneous group of students evacuated faster than the standardized passenger sample, and egress to the platform is faster than using stairs to the terrain or jumping to the terrain.
A detailed insight into how exit width affects evacuation time is provided by time headways (see the figure below). In the case of the narrow exit, the default time headway between pedestrians was 1 s, which corresponds to a flow of 1 ped/s, i.e. 1.54 ped/s/m. With increasing width, the mean value of the flow increased, but the default headway remained. Surprisingly, the higher flow was caused by an increased frequency of situations in which two participants passed through the door simultaneously.
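The quoted numbers follow from elementary relations between headway, flow, and specific flow; a few lines make the arithmetic explicit.

```python
# A default time headway of 1 s between successive pedestrians implies a
# flow of 1/headway pedestrians per second; dividing by the door width
# gives the specific flow in ped/s/m.
headway = 1.0            # s, default headway observed at the narrow exit
width_narrow = 0.65      # m, narrowest exit width in the experiment

flow = 1.0 / headway                    # ped/s
specific_flow = flow / width_narrow     # ped/s/m, ~1.54 as quoted
```

The same headway at a wider door would predict a lower specific flow, which is why the observed increase in flow must come from simultaneous passages rather than shorter headways.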
This article attempts to analyse the factors affecting pedestrians' velocity. First, the velocity-individual density relation is quantified in terms of correlations and the R2 of a linear model. Then, the distance to the nearest pedestrian in the view angle is introduced as a new microscopic quantity to explain trends in velocity. It appears that velocity and the distance to the nearest pedestrian have a positive linear correlation. The analysis builds on data from the evacuation experiment carried out in 2014 within the GAMS group.
Reliable location of acoustic emission (AE) sources is one of the most important inverse problems in non-destructive testing (NDT) and structural health monitoring (SHM) of engineering structures. Standard AE source location procedures often fail in more complicated structures with wave dispersion, velocity or geometry changes, etc. Localization of continuous AE signals generated by leakage, i.e. random noise, is much more difficult than localization of short burst signal sources. An effective tool in all such situations is time reversal (TR) signal processing [1], which results in space-time wave focusing, partial source signal reconstruction, and more precise source location than other techniques in use (with precision down to 1 mm). After artificial AE source location tests on various plates, reported in [1], we applied the TR technique to a large steam pipeline (length 4140 mm, diameter 245 mm, wall thickness 37.5 mm) and to smaller pressure vessels to prove its practical use under industrial conditions. Leakages were simulated with random noise signals emitted by piezoelectric transducers. These signals had been recorded during real leaks at a power plant. The leak signals were overlapped with other, more intensive signals reproducing the real large background noise at a power plant (water flow in the pipe). The noise mixture was detected by AE transducers mounted on 1 m long waveguides welded to the tested structure. Long signals from two transducers were recorded, time reversed, and rebroadcast into the structure (reciprocal TR method). The maximum of their cross-correlation denotes the leakage source on the structure surface. Detailed surface scanning around the roughly pre-localized source is necessary for precise source location. The scanning can be realized, e.g., by a scanning laser interferometer or numerically in a perfect computer model of the structure.
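The cross-correlation step behind the localization can be illustrated in a simplified 1D setting with two sensors on a pipe and a synthetic noise-like leak signal. The sampling rate, wave speed, and geometry below are assumptions for illustration only, not the values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100_000.0     # Hz, assumed sampling rate
c = 3000.0         # m/s, assumed guided-wave speed in the pipe wall
L = 4.0            # m, assumed distance between the two AE sensors
xs = 1.2           # m, leak position from sensor 1 (to be recovered)

# The leak emits continuous random noise; each sensor sees the same
# signal delayed by the propagation time from the leak.
d1 = int(round(xs / c * fs))          # delay at sensor 1, in samples
d2 = int(round((L - xs) / c * fs))    # delay at sensor 2, in samples
src = rng.normal(size=20_000)
s1 = src[5000 - d1:9000 - d1]
s2 = src[5000 - d2:9000 - d2]

# The cross-correlation maximum estimates the arrival-time difference,
# which fixes the source position between the two sensors.
lag_est = np.argmax(np.correlate(s2, s1, mode="full")) - (len(s1) - 1)
xs_est = (L - c * lag_est / fs) / 2.0
```

Because the leak signal is broadband noise, the correlation peak is sharp even though no individual burst arrival can be picked, which is exactly why correlation-based TR processing suits continuous leak signals.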
Optimal sensor configuration in the case of complex structure shapes is one of the crucial steps and a prerequisite for a precise acoustic emission (AE) source location estimate. This task leads to a numerical analysis of the relations between the signal arrival chronology and the coordinates of the emission source. Local problems of AE source position ambiguities in the case of a continuous 2D body can be demonstrated by the simple method of two hyperbola intersections. For more general cases, using an algorithm for finding the shortest paths in discretely defined bodies, it is possible to design three parallel tools for evaluating problematic areas even for non-continuous or anisotropic materials. Analogically to the Global Positioning System, the location of AE sources meets the geometric dilution of precision (GDOP) phenomenon. Similarly to the GDOP parameter, the recently introduced sensitivity map shows critical regions characterized by strong local sensitivity of the location results to signal arrival time changes or errors. The next two methods (similarity and ambiguity maps) illustrate the topology of the arrival time difference space and possible ambiguities of source location. To check the numerical forecast of the location capabilities of a given sensor configuration, the theoretical results were compared with data measured on a real steam storage vessel.
The aim of this talk is to present round robin tests performed on an aluminum sample containing a calibrated slit and a hole of the same size. Nonlinear acoustics and vibration have become increasingly important during the last forty years due to the increased sensitivity of electronic instrumentation and its associated signal processing algorithms. The nonlinearity of materials results in nonlinear effects, which arise from defects present in the materials at all scales. Applications include nonlinear non-destructive testing (NDT), harmonic medical ultrasound imaging, and the development of new materials such as nanocomposites and memory-based materials. One of the strategic plans of the international NDT community is to define standards for developing nonlinear non-destructive testing for automated set-ups in mass production.
The paper presents the Time Reversal based Nonlinear Elastic Wave Spectroscopy (TR-NEWS) device, which is associated with the development of a phenomenological characterization of local elastic material properties [1], working at 20 MHz and allowing the measurement of degradation and aging of complex structures. The experimental device was tested with the V3 calibration block, improved and specially scaled in order to give access to a wide range of multivalued parameters: mechanical properties, ultrasonic parameters (celerity and attenuation), and local geometric data. Also tested for biomedical applications, the well-known complexity of the sample constitutes a strong advantage for the TR-NEWS efficiency. Linear and nonlinear signatures of material properties are measured locally thanks to optimized signal processing involving time reversal, correlation, and pulse inversion algorithms.
The acoustic emission (AE) method belongs among the non-destructive methods of defectoscopy. By means of AE, we are able to reveal damage in materials or constructions before the fatal destruction of the construction (which is very useful, for example, in the case of bridges or nuclear reactors). Classification of AE consists of two main parts. First, we have to find some significant parameters of the measured signals or their spectra. In this part, we can successfully apply phi-divergences between normalized signal spectra as a parameter for comparing the measured signals. The obtained parameters create a feature space. The second step in the classification is to find clusters in the feature space; each cluster should contain data coming from one type of AE signal. For cluster classification, we use mainly fuzzy classification and model-based clustering, and also a newly designed method called the Divergence Decision Tree, in which we again use the theory of phi-divergences.
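The first classification step can be sketched with the Hellinger distance, one member of the phi-divergence family, applied to normalized spectra of synthetic AE-like bursts. The signal shapes and all parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000
t = np.arange(0, 1, 1 / fs)

def burst(freq):
    # Hypothetical AE-like damped burst with a dominant frequency.
    return np.sin(2 * np.pi * freq * t) * np.exp(-30 * t) \
        + 0.05 * rng.normal(size=t.size)

sig_a = burst(120)      # two bursts of the same "type"...
sig_b = burst(126)
sig_c = burst(300)      # ...and one of a different "type"

def normed_spectrum(x):
    # Power spectrum normalized to a probability distribution.
    p = np.abs(np.fft.rfft(x)) ** 2
    return p / p.sum()

def hellinger(p, q):
    # Squared Hellinger distance, a phi-divergence with
    # phi(u) = (sqrt(u) - 1)^2; it lies in [0, 1].
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

p_a, p_b, p_c = map(normed_spectrum, (sig_a, sig_b, sig_c))
d_same = hellinger(p_a, p_b)
d_diff = hellinger(p_a, p_c)
```

Signals of the same type end up closer in the divergence-based feature space, which is the property the subsequent clustering step relies on.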
In this contribution, we will present a brief introduction to neutrino physics and will focus on two experiments from the Fermi National Accelerator Laboratory (Fermilab), NOvA and DUNE. We will describe how these collaborations work, how they use machine learning, and how these methods can contribute to new discoveries and more accurate measurements.
Convolutional neural networks (CNNs) show outstanding results for problems in the area of image classification. Particle classification, as a part of high energy physics, is no exception. This presentation will cover the basic concepts of artificial neural networks alongside CNN-specific aspects. The application of CNNs to Monte Carlo samples from the protoDUNE neutrino experiment will then be discussed.
The classification task is a crucial step in high energy physics data analysis. As many reconstruction steps in high energy physics are similar to image pattern recognition tasks, we explore the potential of utilising appropriate deep learning techniques. In particular, convolutional neural networks (CNNs) are able to extract characteristic features from image pixel maps at different scales and use these features for particle identification. That is why CNN techniques are used for neutrino interaction classification within the NOvA experiment at Fermilab. Furthermore, we present the concept of residual neural networks, which perform remarkably well in the field of image pattern recognition. We construct several classification models with different CNN architectures and show the results of residual neural networks for the particle classification task using Monte Carlo simulated data from the NOvA experiment.
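The residual connection at the heart of residual networks can be sketched in a few lines. Dense layers stand in here for the convolutional ones, and the dimensions and initialization are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # y = F(x) + x: the block learns only a residual F on top of the
    # identity shortcut, which eases the optimization of deep networks.
    return relu(x @ W1) @ W2 + x

d = 16
x = rng.normal(size=(8, d))          # a batch of 8 feature vectors
W1 = rng.normal(size=(d, d)) * 0.01  # small weights: block starts near
W2 = rng.normal(size=(d, d)) * 0.01  # the identity mapping

y = residual_block(x, W1, W2)
```

With small initial weights the block behaves almost as the identity, so stacking many such blocks does not degrade the signal, which is the usual intuition for why residual networks train well at large depth.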
Binary classification of measured data is a common task in high energy physics (HEP). Precise knowledge of the classification algorithm is an advantage in the attempt to reach the best accuracy. This paper examines the effects of a data transformation tailored to the properties of the classifier. A HEP dataset and the SDDT algorithm were used for this purpose.
In high energy physics analyses, it is necessary to verify whether measured data have the same distribution as a simulated Monte Carlo sample. One of the methods used for this verification is homogeneity testing. Monte Carlo samples are usually weighted, and therefore modifications of homogeneity tests must be employed. In ROOT, a C++ framework used by the high energy physics community, some homogeneity tests for weighted samples are implemented; however, none of them performs well. Therefore, we implemented different tests in ROOT: Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling. The asymptotic properties of these modified tests are compared in a simulation. Afterward, they are applied to samples from the DUNE experiment.
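A minimal sketch of a weighted two-sample Kolmogorov-Smirnov statistic: weighted empirical CDFs compared on the pooled grid. This illustrates only the test statistic, not the modified asymptotic distributions needed for p-values, and the data and weights are synthetic.

```python
import numpy as np

def weighted_ks(x, wx, y, wy):
    # Sup-distance between the weighted empirical CDFs of the two
    # samples, evaluated on the pooled set of observations.
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.array([wx[x <= g].sum() for g in grid]) / wx.sum()
    Fy = np.array([wy[y <= g].sum() for g in grid]) / wy.sum()
    return np.abs(Fx - Fy).max()

rng = np.random.default_rng(5)
x = rng.normal(size=500); wx = rng.uniform(0.5, 1.5, size=500)
y = rng.normal(size=500); wy = rng.uniform(0.5, 1.5, size=500)
z = rng.normal(1.0, 1.0, size=500); wz = rng.uniform(0.5, 1.5, size=500)

d_same = weighted_ks(x, wx, y, wy)   # samples from the same distribution
d_diff = weighted_ks(x, wx, z, wz)   # sample shifted by one sigma
```

For unit weights this reduces to the ordinary two-sample KS statistic; the weighting only changes how much each event contributes to its CDF.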
For a proper description of the products of heavy-nuclei collisions, it is often necessary to solve a binary classification problem using machine learning methods. In this contribution, D meson decay will be analyzed using ensemble learning methods applied to data from the STAR experiment at the Relativistic Heavy Ion Collider in Brookhaven National Laboratory.
We discuss the latest results on electron neutrino appearance and muon neutrino disappearance in the NOvA experiment. Our primary focus will be the processing of the Convolutional Visual Network classifier output.
In most automobile factories, the quality inspection process is mainly performed by human vision, which is often insufficient and unstable. Artificial intelligence can improve some parts of this control process. The aim of our project was to develop a proof of concept of such an automated visual inspection system. We focused on the level of cooling liquid and mismatched wheel rims. The analysis includes data collection and labeling, pre-processing, feature extraction, and classification.
Our main goal is to develop a system detecting faulty parts of assembled products, such as a low level of cooling liquid or mismatched wheel rims, for a large automotive company. We installed two cameras to collect data on the car engine space and wheel rims. So far, we have analyzed the car engine space data with several image recognition methods. In the future, we will recognize a low level of cooling liquid by using the HSV color space.
This presentation will focus on analyzing data about product failures for a company from the automotive industry. The exploratory data analysis shows how different factors of a product can influence the failure rates. Secondly, the failure rates will be analyzed as a time series; we hope to explain the seasonalities and trends of different failures depending on different combinations of factors. Lastly, the focus will be on detecting when failure rates increase significantly, in order to inform the company of a possible error in production. For that, methods such as GMA, CUSUM, or SIC can be used.
Average incomes of small areas can usually be treated as normally distributed random variables and modelled by the Fay-Herriot model. However, the assumption of normality of the average incomes can be misleading for small sample sizes. We assume that every average follows a gamma distribution and investigate a corresponding area-level gamma mixed model. The parameters of the model are estimated by applying the maximum likelihood method to the Laplace approximation of the likelihood, and the empirical best predictor is subsequently derived. Its performance is compared with the plug-in predictor. The mean squared errors of the predictors are estimated by parametric bootstrap.
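The parametric bootstrap step can be sketched for a single area. A method-of-moments gamma fit stands in for the Laplace-approximated MLE, the plug-in mean predictor is used, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical area: incomes follow Gamma(shape, scale); the plug-in
# predictor of the area mean is shape_hat * scale_hat.
shape_true, scale_true, n = 5.0, 2.0, 30

def fit_gamma(sample):
    # Method-of-moments fit, standing in for the MLE of the abstract.
    m, v = sample.mean(), sample.var()
    return m * m / v, v / m

sample = rng.gamma(shape_true, scale_true, size=n)
shape_hat, scale_hat = fit_gamma(sample)
mean_hat = shape_hat * scale_hat

# Parametric bootstrap estimate of the predictor's MSE: resample from
# the fitted model, refit, and average the squared prediction errors.
B = 2000
errs = np.empty(B)
for b in range(B):
    boot = rng.gamma(shape_hat, scale_hat, size=n)
    a_b, s_b = fit_gamma(boot)
    errs[b] = (a_b * s_b - mean_hat) ** 2
mse_boot = errs.mean()
```

In this toy case the bootstrap MSE should approach the variance of the sample mean under the fitted gamma model, which provides a simple consistency check.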
Small area estimation is a field of statistics which deals with the problem of obtaining reliable estimates of characteristics of interest in situations when the sample is divided into domains whose sample sizes are often small. For use in this field, a new unit-level logit mixed model is proposed. The model uses fixed effects for areas with larger sample sizes and models the remaining domains by random effects. In order to predict area means, the empirical best predictor and the plug-in predictor are used and compared via a simulation experiment. Simulation studies are carried out in order to compare the quality of predictions obtained from the proposed model with predictions obtained from a binomial logit mixed model which uses only random effects to model the domains. The two models are applied to the estimation of poverty risks in counties of the region of Valencia, Spain, and their predictions are compared.
Many well-known problems, such as ranking problems or hypothesis testing, fall into the class of tasks in which the goal is to classify only a specific number of the most relevant samples. In our previous work, we investigated the properties of such problems and showed that each of them fits our general formulation.
This paper describes three main problems which fall into our formulation, namely TopPush, Accuracy at the Top, and the Neyman-Pearson problem. Furthermore, the paper presents the main theoretical properties of these problems and also derives an easily solvable approximation of those without good theoretical behavior.
Introductory talk to the ancient civilizations section.
Current data collection techniques often produce massive data sets in which entities are described by a large number of properties/features/attributes; the number of features in many cases reaches thousands. Typically, the features can be divided into relevant, redundant, irrelevant, or noise. To understand such a complex system, it is necessary to identify informative features and their internal dependencies, and to identify patterns or anomalies. The aim is to describe the system using as few abstract dependencies as possible at a suitably chosen level of inaccuracy. Processing data sets containing a large number of binary and ordinal features, e.g., in the social sciences and humanities, using traditional statistical techniques is very limited.
In this contribution, we propose a methodology based on information theory and apply it to data sets with predominantly binary and ordinal features. The process makes it possible to identify key relationships among binarized features and to discover patterns and hierarchies even when many data items are missing or very noisy. While the majority of published methods focus on identifying relevant features, our proposed technique benefits from the robust properties of redundant features. The methods can be used not only for multidimensional data but also for the detection of communities in complex networks. Although direct calculation by definition exhibits cubic complexity, sparse structures can be processed in near-linear time.
The methodology will be demonstrated on data from ancient Egypt. Specifically, it is a data set describing the anthropological features of selected people living in the Old Kingdom (2700–2180 BC, i.e. the time of the pyramid builders): royal family members, high-ranking dignitaries, and middle and low officials. The proposed methodology makes it possible, for example, to evaluate the influence of an individual's social status, connected with supposed specific habitual activity, on the manifestations of degenerative changes in joints (arthrosis) or entheseal changes. Relationships can be examined in both the supervised and unsupervised modes, i.e. clustering. Performance aspects will be briefly demonstrated on the processing of reports from ČTK (the Czech News Agency), aimed at identifying news topics in millions of unique words and millions of ČTK news items written in Czech.
This project focuses on a partial analysis of ancient Egyptian society based on the records retrieved from Egyptological research. These records capture data about more than 3000 individuals, with information about their placement in time, family, job titles, etc. This work aims to find a basic structure in the ancient society. For this purpose, we use the K-medoids clustering method. The first main goal is to group people into clusters based on the titles they held; finding a representative person for each cluster is the second goal.
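The clustering step can be sketched on a synthetic binary person-by-title matrix, using the Jaccard distance and a plain K-medoids iteration; the medoids directly serve as the representative persons of the clusters. The data, the distance choice, and the initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical binary person-by-title matrix: entry 1 means the person
# held the title; two synthetic "professions" generate the patterns.
titles = np.zeros((12, 8), dtype=bool)
titles[:6, 0] = True                            # shared title of group A
titles[:6, 1:4] = rng.random((6, 3)) < 0.7
titles[6:, 4] = True                            # shared title of group B
titles[6:, 5:8] = rng.random((6, 3)) < 0.7

def jaccard(a, b):
    union = np.sum(a | b)
    return 1.0 - np.sum(a & b) / union if union else 0.0

n = len(titles)
D = np.array([[jaccard(titles[i], titles[j]) for j in range(n)]
              for i in range(n)])

def k_medoids(D, k, init, iters=50):
    medoids = list(init)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)   # nearest medoid
        new = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            # New medoid: the member with the smallest total distance
            # to the rest of its cluster.
            new.append(members[np.argmin(
                D[np.ix_(members, members)].sum(axis=0))])
        if list(new) == medoids:
            break
        medoids = list(new)
    return labels, medoids

labels, medoids = k_medoids(D, 2, init=[0, n - 1])
```

K-medoids is a natural fit here because it needs only a pairwise distance matrix, works with binary data where means are meaningless, and returns actual persons as cluster representatives.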
This project focuses on a partial analysis of ancient Egyptian society based on the records retrieved from Egyptological research. These records capture data about more than 3000 individuals, with information about their placement in time, family, job titles, etc. The main contribution of this work is an application of the decision tree algorithm to cluster a specially formed dataset built from the available records according to the possessed job titles. This approach helps to form an idea of the job structure and title distribution of ancient Egypt.