general overhaul, better images, better texts

This commit is contained in:
Steffen Illium 2024-02-05 23:16:26 +01:00
parent fd1d34a85a
commit da72fdcf7f
82 changed files with 149 additions and 188 deletions

.vscode/settings.json vendored Normal file
View File

@ -0,0 +1,5 @@
{
"files.associations": {
"*.yaml": "home-assistant"
}
}

View File

@ -8,21 +8,21 @@ header:
---
![logo](\assets\images\projects\robot.png){: .align-left style="padding:0.1em; width:5em"}
In cooperation with [Fraunhofer IKS](https://www.iks.fraunhofer.de/), this project explored emergent effects in multi-agent reinforcement learning scenarios, such as mixed-vendor autonomous systems. Emergence, defined as complex dynamics arising from interactions among entities and their environment, was a key focus.
![Relation emergence](/assets/images/projects/rel_emergence.png){: .align-center style="padding:0.1em; width:80%"}
<div class="table-right" style="text-align:right">
| ![logo](\assets\images\projects\full_domain.png){: style="margin:0em; padding:0em; width:15em"} |
| [GitHub Repo](https://github.com/illiumst/marl-factory-grid/) |
| [Install via PyPI](https://pypi.org/project/Marl-Factory-Grid/) |
| [Read-the-docs](https://marl-factory-grid.readthedocs.io/en/latest/) |
| Read the Paper (TBA) |
</div>
We developed a high-performance environment in Python, adhering to the [gymnasium](https://gymnasium.farama.org/main/) specifications, to facilitate reinforcement learning algorithm training.
This environment uniquely supports a variety of scenarios through `modules` and `configurations`, with capabilities for per-agent observations and handling of multi-agent and sequential actions.
Additionally, a [Unity demonstrator unit](https://github.com/illiumst/F-IKS_demonstrator) was developed to replay and analyze specific scenarios, aiding in the investigation of emerging dynamics.
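For orientation, the snippet below is a minimal sketch of the standard gymnasium interaction loop that the environment adheres to. The environment used here is only a stand-in; the actual factory scenarios are assembled from the package's modules and configuration files as described in the documentation.

```python
# Generic gymnasium interaction loop; "CartPole-v1" is a stand-in environment,
# not a scenario from the marl-factory-grid package.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for _ in range(200):
    action = env.action_space.sample()  # replace with a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```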

View File

@ -7,19 +7,12 @@ header:
teaser: assets/images/projects/arch.png
---
![logo](\assets\images\projects\arch.png){: .align-left style="padding:0.1em; width:5em"}
During my tenure at the Mobile and Distributed Systems Chair, I played a key role in the setup and maintenance of our technical infrastructure, including workstations, Windows server hypervisors, Linux file servers, and networking. Our approach to managing a diverse ecosystem of operating systems, hardware, and libraries involved extensive use of Ansible for orchestration.
I spearheaded the transition of a significant portion of our services to Kubernetes (K3S), implementing a comprehensive toolchain that included Longhorn, Argo CD, Sealed Secrets, and GitLab. For managing ingress and egress, Traefik served as our automated proxy manager, enabling us to route traffic efficiently within our network and to accommodate external users, such as colleagues in the home office or students, securely through WireGuard.
My experience extended to optimizing machine learning workflows, transitioning from unreliable SLURM-based setups to automated, high-performance workstation runs using Weights & Biases (WandB) for experiment management, leveraging our self-hosted GitLab registry for Docker container orchestration.
This journey enriched my skills in Linux server administration, networking, infrastructure as code, and cloud-native technologies. It fostered a preference for minimalist, microservice-based architectures, and I have applied these principles to my personal projects, including self-hosting this website and other services.
More of the tech stack I encountered on my journey is listed [here](/about).

View File

@ -6,8 +6,9 @@ excerpt: "We propose an approach to annotate trajectories using sequences of spa
header:
teaser: assets/figures/0_trajectory_reconstruction_teaser.png
---
<figure class="half">
<img src="/assets/figures/0_trajectory_isovist.jpg" alt="" style="width:48%">
<img src="/assets/figures/0_trajectory_reconstruction.jpg" alt="" style="width:48%">
</figure>
This work establishes a foundation for enhancing interaction between robots and humans in shared spaces by developing reliable systems for verbal communication. It introduces an unsupervised learning method using neural autoencoding to learn continuous spatial representations from trajectory data, enabling clustering of movements based on spatial context. The approach yields semantically meaningful encodings of spatio-temporal data for creating prototypical representations, setting a promising direction for future applications in robotic-human interaction. {% cite feld2018trajectory %}
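As a rough illustration of the underlying idea (autoencode fixed-length trajectory segments, then cluster the learned encodings), here is a minimal sketch. It does not reproduce the paper's isovist-based input representation or architecture; all sizes and hyperparameters are illustrative.

```python
# Minimal sketch: autoencode trajectory segments, then cluster the latent codes.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

SEG_LEN, LATENT = 32, 8  # illustrative segment length (x, y pairs) and code size

class TrajectoryAE(nn.Module):
    """Small MLP autoencoder over flattened (x, y) trajectory segments."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(SEG_LEN * 2, 64), nn.ReLU(), nn.Linear(64, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, SEG_LEN * 2))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

segments = torch.randn(1024, SEG_LEN * 2)  # stand-in for real trajectory segments
model = TrajectoryAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(20):                         # reconstruction training
    recon, _ = model(segments)
    loss = loss_fn(recon, segments)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                       # cluster the learned encodings
    _, codes = model(segments)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(codes.numpy())
```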

View File

@ -7,9 +7,8 @@ header:
teaser: assets/figures/1_self_replication_pca_space.jpg
---
![Self-Replication Robustness](\assets\figures\1_self_replication_robustness.jpg){:style="display:block; width:40%" .align-right}
This text discusses the fundamental role of self-replication in biological structures and its application to neural networks for developing complex behaviors in computing. It explores different network types for self-replication, highlighting the effectiveness of backpropagation in navigating network weights and fostering the emergence of non-trivial self-replicators. The study further delves into creating an artificial chemistry environment comprising several neural networks, offering a novel approach to understanding and implementing self-replication in computational models. For in-depth insights, refer to the work by {% cite gabor2019self %}.
![Self-replicators in PCA Space (Soup)](\assets\figures\1_self_replication_pca_space.jpg){:style="display:block; width:80%" .align-center}

View File

@ -7,6 +7,5 @@ header:
teaser: assets/figures/3_deep_neural_baselines_teaser.jpg
---
![Deep Neural Baselines](\assets\figures\3_deep_neural_baselines.jpg){:style="display:block; width:30%" .align-right}
The study presents an end-to-end deep learning method to identify sleepiness in spoken language, developed for the Interspeech 2019 ComParE challenge. It relies on a moderately complex deep neural network architecture that analyzes audio data directly, eliminating the need for specific feature engineering. The approach achieves performance comparable to state-of-the-art models while remaining transferable to other audio classification tasks. For more details, refer to the work by {% cite elsner2019deep %}.
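To make the "end-to-end on raw audio" idea concrete, here is a generic sketch of a 1D convolutional classifier that maps waveforms directly to class logits. It is not the published architecture; layer sizes and the input length are illustrative.

```python
# Generic end-to-end 1D CNN: raw waveform in, class logits out (illustrative sizes).
import torch
import torch.nn as nn

class RawAudioCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, wave):                  # wave: (batch, 1, samples)
        x = self.features(wave).squeeze(-1)
        return self.classifier(x)

model = RawAudioCNN()
logits = model(torch.randn(4, 1, 16000))      # four one-second clips at 16 kHz
```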

View File

@ -6,8 +6,6 @@ excerpt: "Team market value estimation, similarity search and rankings."
header:
teaser: assets/figures/2_steve_algo.jpg
---
![STEVE Algorithm](\assets\figures\2_steve_algo.jpg){:style="display:block; width:60%" .align-center}
This study introduces STEVE (Soccer Team Vectors), a novel method for generating real-valued vectors representing soccer teams, organized so that similar teams are proximate in vector space. Utilizing publicly available match data, these vectors facilitate various machine learning applications, notably excelling in team market value estimation and enabling effective similarity search and team ranking. STEVE demonstrates superior performance over competing models in these domains. For further details, please consult the work by {% cite muller2020soccer %}.

View File

@ -7,9 +7,8 @@ header:
teaser: assets/figures/4_point_cloud_segmentation_teaser.jpg
---
![Point Cloud Segmentation Pipeline](\assets\figures\4_point_cloud_pipeline.jpg){:style="display:block; width:100%" .align-center}
This paper introduces a hybrid approach for segmenting and fitting solid primitives to 3D point clouds, overcoming limitations in handling large datasets and diverse primitive shapes. By integrating deep learning with RANSAC for primitive fitting, employing DBSCAN for clustering, and utilizing a specialized Genetic Algorithm for cuboid extraction, the method achieves enhanced stability and robustness. It excels in reconstructing spheres, cylinders, and cuboids from large point sets, with performance metrics and visualizations provided to demonstrate its effectiveness, alongside a discussion of its limitations. For more detailed insights, refer to {% cite friedrich2020hybrid %}.
![Point Cloud Segmentation](\assets\figures\4_point_cloud_segmentation.jpg){:style="display:block; width:80%" .align-center}

View File

@ -7,11 +7,7 @@ header:
teaser: assets/figures/6_ood_pipeline.jpg
---
![PEOC Performance](\assets\figures\6_ood_performance.jpg){:style="display:block; width:45%" .align-right}In this work, the development of PEOC, a policy entropy-based classifier for detecting unencountered states in deep reinforcement learning, is proposed. Utilizing the agent's policy entropy as a score, PEOC effectively identifies out-of-distribution scenarios, which is crucial for ensuring safety in real-world applications. Evaluated against advanced one-class classifiers within procedurally generated environments, PEOC demonstrates competitive performance.
Additionally, a structured benchmarking process for out-of-distribution classification in reinforcement learning is presented, offering a comprehensive approach to evaluating such systems' reliability and effectiveness. {% cite sedlmeier2020policy %}
![PEOC Pipeline](\assets\figures\6_ood_pipeline.jpg){:style="display:block; width:90%" .align-center}
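The core scoring idea can be written in a few lines: use the entropy of the agent's action distribution in a state as an out-of-distribution score and threshold it. The sketch below uses random stand-in policy outputs and a percentile-based threshold for illustration; it is not the paper's exact calibration procedure.

```python
# Policy-entropy-based OOD scoring (illustrative calibration, stand-in data).
import numpy as np

rng = np.random.default_rng(0)

def policy_entropy(action_probs: np.ndarray) -> float:
    """Shannon entropy of a single policy distribution pi(.|s)."""
    p = np.clip(action_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-in for policy outputs collected on in-distribution training states.
train_probs = [softmax(rng.normal(size=4)) for _ in range(1000)]
train_entropies = [policy_entropy(p) for p in train_probs]

# One-class decision rule: calibrate a threshold on training entropies
# (the 95th percentile is illustrative) and flag higher-entropy states.
threshold = np.percentile(train_entropies, 95)

def is_ood(action_probs: np.ndarray) -> bool:
    return policy_entropy(action_probs) > threshold
```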

View File

@ -7,9 +7,9 @@ header:
teaser: assets/figures/5_meantime_coverage.jpg
---
![Estimated Service Coverage](\assets\figures\5_meantime_coverage.jpg){:style="display:block; width:80%" .align-center}
This analysis explores the concept of privately owned shared autonomous vehicles as a transitional phase towards a new transportation paradigm. It proposes two reachability analysis methods to assess the impact of utilizing privately owned cars during their typical long parking intervals, such as during an owner's working hours. By applying these methods to a dataset from the Munich area, the study reveals how time- and location-dependent factors, like rush hours and urban vs. suburban differences, affect service coverage.
{% cite illium2020meantime %}
![Parked Vehicle Availability](\assets\figures\5_meantime_availability.jpg){:style="display:block; width:80%" .align-center}

View File

@ -7,9 +7,8 @@ header:
teaser: assets/figures/7_mask_models.jpg
---
![Mask Mel-Spectrograms](\assets\figures\7_mask_mels.jpg){:style="display:block; width:80%" .align-center}
This study assesses the effectiveness of data augmentation in enhancing neural network models for audio data classification, focusing on mel-spectrogram representations. Specifically, it examines the role of data augmentation in improving the performance of convolutional neural networks for detecting the presence of surgical masks from human voice samples, testing across four different network architectures. The findings indicate a significant enhancement in model performance, surpassing many of the existing benchmarks established by the ComParE challenge. For further details, refer to {% cite illium2020surgical %}.
![Models](\assets\figures\7_mask_models.jpg){:style="display:block; width:80%" .align-center}
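The paper evaluates a range of augmentation techniques; purely as an illustration of the family of spectrogram-level augmentations involved, the sketch below applies SpecAugment-style time and frequency masking to a mel-spectrogram. Mask counts and sizes are illustrative, not the paper's settings.

```python
# SpecAugment-style masking on a (n_mels, n_frames) mel-spectrogram (illustrative).
import numpy as np

def mask_spectrogram(mel, n_freq_masks=1, n_time_masks=1, max_f=8, max_t=20, rng=None):
    rng = rng or np.random.default_rng()
    out = mel.copy()
    n_mels, n_frames = out.shape
    for _ in range(n_freq_masks):            # zero out a band of mel bins
        f = rng.integers(0, max_f + 1)
        f0 = rng.integers(0, max(1, n_mels - f))
        out[f0:f0 + f, :] = 0.0
    for _ in range(n_time_masks):            # zero out a span of time frames
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(1, n_frames - t))
        out[:, t0:t0 + t] = 0.0
    return out

augmented = mask_spectrogram(np.random.rand(64, 300))   # stand-in spectrogram
```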

View File

@ -7,7 +7,6 @@ header:
teaser: assets/figures/8_anomalous_sound_teaser.jpg
---
![Pipeline](\assets\figures\8_anomalous_sound_features.jpg){:style="display:block; width:40%" .align-right}
This study explores the use of pretrained neural networks as feature extractors for detecting anomalous sounds, utilizing these networks to derive semantically rich features for a Gaussian Mixture Model that estimates normality. It examines extractors trained on diverse data domains (images, environmental sounds, and music) applied to industrial noises from machinery. Surprisingly, features based on music data often surpass others, including an autoencoder baseline, suggesting that domain similarity between extractor training and application might not always correlate with performance improvement.
{% cite muller2020analysis %}

View File

@ -6,10 +6,9 @@ excerpt: "Acoustic Anomaly Detection for Machine Sounds based on Image Transfer
header:
teaser: assets/figures/9_image_transfer_sound_teaser.jpg
---
![Workflow](\assets\figures\9_image_transfer_sound_workflow.jpg){:style="display:block; width:45%" .align-right}
This paper explores acoustic malfunction detection in industrial machinery using transfer learning, specifically leveraging neural networks pretrained on image classification to extract features.
These features, when used with anomaly detection models, outperform traditional convolutional autoencoders in noisy conditions across different machine types. The study highlights the superiority of features from ResNet architectures over AlexNet and SqueezeNet, with Gaussian Mixture Models and One-Class Support Vector Machines showing the best performance in detecting anomalies.
{% cite muller2020acoustic %}
![Mels](\assets\figures\9_image_transfer_sound_mels.jpg){:style="display:block; width:85%" .align-center}
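A compact sketch of the general recipe follows: a frozen ImageNet-pretrained ResNet serves as feature extractor over spectrogram "images", and a Gaussian Mixture Model fitted on features of normal recordings provides a log-likelihood-based anomaly score. Preprocessing details, input sizes, and GMM settings are illustrative; the snippet assumes torchvision >= 0.13 for the weights API.

```python
# Pretrained-CNN features + GMM normality model (illustrative sketch).
import torch
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.mixture import GaussianMixture

# Frozen ImageNet-pretrained ResNet-18 with the classification head removed.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

def extract_features(spectrogram_batch: torch.Tensor) -> torch.Tensor:
    """spectrogram_batch: (N, 3, 224, 224) mel-spectrograms replicated to 3 channels."""
    with torch.no_grad():
        return model(spectrogram_batch)

# Fit a GMM on features of normal recordings; low log-likelihood means anomalous.
normal_feats = extract_features(torch.randn(64, 3, 224, 224)).numpy()   # stand-in data
gmm = GaussianMixture(n_components=4, covariance_type="diag").fit(normal_feats)

test_feats = extract_features(torch.randn(8, 3, 224, 224)).numpy()
anomaly_scores = -gmm.score_samples(test_feats)    # higher score = more anomalous
```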

View File

@ -7,9 +7,9 @@ header:
teaser: assets/figures/10_water_networks_teaser.jpg
---
![Approach](\assets\figures\10_water_networks_approach.jpg){:style="display:block; width:40%" .align-right}
This study introduces a method for acoustic leak detection in water networks, focusing on energy efficiency and easy deployment. Utilizing recordings from microphones on a municipal water network, various anomaly detection models, both shallow and deep, were trained. The approach mimics human leak detection methods, allowing intermittent monitoring instead of constant surveillance. While detecting nearby leaks proved easy for most models, neural network-based methods excelled at identifying leaks from a distance, showcasing their effectiveness in practical applications.
{% cite muller2021acoustic %}
![Leak Mel-Spectrograms](\assets\figures\10_water_networks_mel.jpg){:style="display:block; width:85%" .align-center}

View File

@ -7,9 +7,9 @@ header:
teaser: assets/figures/11_recurrent_primate_workflow.jpg
---
![Workflow](\assets\figures\11_recurrent_primate_workflow.jpg){:style="display:block; width:40%" .align-right}
This study introduces a deep, recurrent architecture for classifying primate vocalizations, leveraging bidirectional Long Short-Term Memory networks and techniques such as normalized softmax and focal loss. Bayesian optimization was used to fine-tune hyperparameters, and the model was evaluated on a dataset of primate calls recorded in an African wildlife sanctuary, showcasing the effectiveness of acoustic monitoring in wildlife conservation efforts.
{% cite muller2021deep %}
![Results](\assets\figures\11_recurrent_primate_results.jpg){:style="display:block; width:85%" .align-center}
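For readers unfamiliar with the two named ingredients, the sketch below combines a bidirectional LSTM over spectrogram frames with a simple focal-loss term. It is not the published architecture; all sizes, the pooling choice, and the focal-loss variant are illustrative.

```python
# BiLSTM classifier over spectrogram frames with a focal-loss term (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over (batch, frames, n_mels) spectrogram sequences."""
    def __init__(self, n_mels=64, hidden=128, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)                 # (batch, frames, 2 * hidden)
        return self.head(out.mean(dim=1))     # mean-pool over time, then classify

def focal_loss(logits, targets, gamma=2.0):
    """Cross-entropy down-weighted for easy, confidently classified examples."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                       # probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()

model = BiLSTMClassifier()
logits = model(torch.randn(8, 300, 64))       # 8 clips, 300 frames, 64 mel bins
loss = focal_loss(logits, torch.randint(0, 5, (8,)))
```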

View File

@ -7,9 +7,9 @@ header:
teaser: assets/figures/12_vision_transformer_teaser.jpg
---
![Approach](\assets\figures\12_vision_transformer_models.jpg){:style="display:block; width:80%" .align-center}
This work utilizes the vision transformer model on mel-spectrogram audio data, enhanced by mel-based data augmentation and sample weighting, to achieve notable performance in the ComParE21 challenge, surpassing many single-model baselines. The introduction of overlapping vertical patching and the analysis of parameter configurations further refine the approach, demonstrating the model's adaptability and effectiveness in audio processing tasks.
{% cite illium2021visual %}
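To illustrate only the overlapping vertical-patching idea: the spectrogram is cut along the time axis into full-height patches whose stride is smaller than their width, so neighboring patches overlap. Patch width and stride below are illustrative, and the transformer itself is omitted.

```python
# Overlapping vertical patching of a mel-spectrogram (illustrative sizes).
import numpy as np

def vertical_patches(mel: np.ndarray, width: int = 16, stride: int = 8) -> np.ndarray:
    """Cut a (n_mels, n_frames) spectrogram into overlapping full-height
    vertical patches of `width` frames, taken every `stride` frames."""
    n_mels, n_frames = mel.shape
    starts = range(0, n_frames - width + 1, stride)
    return np.stack([mel[:, s:s + width] for s in starts])  # (n_patches, n_mels, width)

mel = np.random.rand(64, 128)     # stand-in mel-spectrogram
patches = vertical_patches(mel)   # 15 patches of shape (64, 16)
# Each patch would then be flattened and linearly embedded as one transformer token.
```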

View File

@ -7,7 +7,7 @@ header:
teaser: assets/figures/13_sr_teaser.jpg
---
![Self-Replicator Analysis](\assets\figures\13_sr_analysis.jpg){:style="display:block; width:80%" .align-center}
This research delves into the concept of self-replicating neural networks capable of performing secondary tasks alongside their primary replication function. By employing separate input/output vectors for dual-task training, the study demonstrates that additional tasks can complement and even stabilize self-replication. The dynamics within an artificial chemistry environment are explored, examining how varying action parameters affect the collective learning capability and how a specially developed 'guiding particle' can influence peers towards achieving goal-oriented behaviors, illustrating a method for steering network populations towards desired outcomes.
{% cite gabor2021goals %}

View File

@ -7,7 +7,8 @@ header:
teaser: assets/figures/14_ad_rl_teaser.jpg
---
This work investigates anomaly detection (AD) within reinforcement learning (RL), highlighting its importance in safety-critical applications due to the complexity of sequential decision-making in RL. The study criticizes the simplicity of current AD research scenarios in RL, connecting AD to lifelong RL and generalization, discussing their interrelations and potential mutual benefits. It identifies non-stationarity as a crucial area for future AD research in RL, proposing a formal approach through the block contextual Markov decision process and outlining practical requirements for future studies.
{% cite muller2022towards %}
![Formal Definition](\assets\figures\14_ad_rl.jpg){:style="display:block; width:50%" .align-center}

View File

@ -7,9 +7,9 @@ header:
teaser: assets/figures/15_sr_journal_teaser.jpg
---
![Children Evolution](\assets\figures\15_sr_journal_children.jpg){:style="display:block; width:65%" .align-center}
This study extends previous work on self-replicating neural networks, focusing on backpropagation as a mechanism for facilitating non-trivial self-replication. It delves into the robustness of these self-replicators against noise and introduces artificial chemistry environments to observe emergent behaviors. Additionally, it provides a detailed analysis of fixpoint weight configurations and their attractor basins, enhancing the understanding of self-replication dynamics within neural networks.
{% cite gabor2022self %}
![Noise Levels](\assets\figures\15_noise_levels.jpg){:style="display:block; width:65%" .align-center}

View File

@ -7,9 +7,11 @@ header:
teaser: assets/figures/16_on_teaser.jpg
---
![Organism Network Architecture](\assets\figures\16_on_architecture.jpg){:style="display:block; width:65%" .align-center}
This work delves into the concept of self-replicating neural networks, focusing on how backpropagation facilitates the emergence of complex, self-replicating behaviors.
![Dropout](\assets\figures\16_on_dropout.jpg){:style="display:block; width:45%" .align-right}
By evaluating different network types, the study highlights the natural emergence of robust self-replicators and explores their behavior in artificial chemistry environments.
A significant extension of a previous version, this research offers a deep analysis of fixpoint weight configurations and their attractor basins, advancing the understanding of neural network self-replication.
For more detailed insights, refer to {% cite illium2022constructing %}.

View File

@ -7,11 +7,10 @@ header:
teaser: assets/figures/17_vp_teaser.jpg
---
![VoronoiPatches Example](\assets\figures\17_vp_lion.jpg){:style="display:block; width:85%" .align-center}
This study introduces VoronoiPatches (VP), a novel data augmentation algorithm that enhances Convolutional Neural Networks' performance by using non-linear recombination of image information. VP distinguishes itself by utilizing small, convex polygon-shaped patches in random layouts to redistribute information within an image, optionally smoothing the transitions between patches and the original image. The method has been shown to outperform existing data augmentation techniques in reducing model variance and overfitting, thus improving the robustness of CNN models on unseen data. {% cite illium2022voronoipatches %}
:trophy: Our work was awarded the [Best Poster Award](https://icaart.scitevents.org/PreviousAwards.aspx?y=2024#2023) at ICAART 2023 :trophy:
![Results](\assets\figures\17_vp_results.jpg){:style="display:block; width:90%" .align-center}

View File

@ -7,10 +7,9 @@ header:
teaser: assets/figures/18_surprised_soup_teaser.jpg
---
![Social Soup Schematics](\assets\figures\18_surprised_soup_schematic.jpg){:style="display:block; width:40%" .align-right}
This research explores artificial chemistry systems with neural network particles that exhibit self-replication. Introducing interactions that enable these particles to recognize and predict each other's behavior, the study observes emergent behaviors akin to stability patterns previously seen in explicit self-replication training. A unique catalyst particle introduces evolutionary pressure, demonstrating how 'social' interactions among particles can lead to complex, emergent outcomes.
{% cite zorn23surprise %}
![Soup Trajectories](\assets\figures\18_surprised_soup_trajec.jpg){:style="display:block; width:90%" .align-center}

View File

@ -7,11 +7,10 @@ header:
teaser: assets/figures/19_binary_primates_teaser.jpg
---
![Multiclass Training Pipeline](\assets\figures\19_binary_primates_pipeline.jpg){:style="display:block; width:40%" .align-right}
This study advances machine learning applications in wildlife observation by introducing a refined approach to audio classification. By relabeling subsegments of MEL spectrograms, it improves the subsequent multi-class classification needed to identify various primate species from audio recordings. Employing convolutional neural networks alongside data augmentation techniques, the methodology yields clear gains in classification performance. Applied to the challenging ComparE 2021 dataset, the approach achieved substantially higher accuracy and UAR scores than comparably equipped baselines, demonstrating the potential of machine learning to handle datasets with weak labels, varying sample lengths, and poor signal-to-noise ratios.
{% cite koelle23primate %}
![Thresholding](\assets\figures\19_binary_primates_thresholding.jpg){:style="display:block; width:70%" .align-center}
![Results](\assets\figures\19_binary_primates_results.jpg){:style="display:block; width:70%" .align-center}

View File

@ -7,18 +7,16 @@ header:
teaser: assets/images/teaching/computer_gear.png
---
![logo](\assets\images\teaching\computer_gear.png){: .align-left style="padding:0.1em; width:5em"}
During my tenure as a Ph.D. student, I was involved in organizing a bachelor's lecture titled "Rechnerarchitektur" (Computer Architecture) with approximately 600 students per semester.
My responsibilities encompassed managing a team of 10-12 tutors to distribute the workload evenly, designing weekly graded exercise sheets, and overseeing the written examination process. The curriculum introduced students to the fundamental concepts of computer science and architecture, covering a wide range of topics from data representation to the intricacies of machine and assembly language programming, under the leadership of Prof. Dr. Linnhoff-Popien.
### Contents
<div class="table-right">
| [Summer semester 2019](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/rechnerarchitektur-sose19/)| [Summer semester 2018](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/rechnerarchitektur-sose18/)|
</div>
- Representation as bits: (numbers, text, images, audio, video, programs).
- Storage and Transfer of data, error detection and correction
@ -30,5 +28,3 @@ More concrete:
- Machine model
- Machine and assembly language programming
- Introduction to Quantum Computing

View File

@ -10,10 +10,22 @@ header:
![logo](\assets\images\teaching\server.png){: .align-left style="padding:0.1em; width:5em"}
In the context of the lecture [Internet of Things (IoT)](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/iot-ws1819/), my task was to design a practical exercise that could be implemented within the scope of 1-2 classes. We went with a typical [MQTT](https://mqtt.org/)-based communication approach, which incorporated an [InfluxDB](https://www.influxdata.com/) backend, while simulating some high-frequency sensors.
The task was to implement all of this from scratch in [Python](https://www.python.org/), which was taught in a separate [lecture](/teaching/Python/).
![IOT Influx Pipeline](\assets\figures\iot_inflex_pipeline.png){:style="display:block; margin-left:auto; margin-right:auto; padding: 2em;"}
This practical course was held in front of about 200 students in winter 2018.
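A condensed sketch of the exercise's data path (simulated sensor, MQTT broker, InfluxDB) is shown below. It assumes a local Mosquitto broker, InfluxDB 1.x, and the paho-mqtt 1.x client API; topic and measurement names are illustrative, and this is not the original exercise code.

```python
# Sensor -> MQTT -> InfluxDB sketch (assumes local Mosquitto, InfluxDB 1.x, paho-mqtt 1.x).
import json
import random
import time

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient

TOPIC = "sensors/temperature"   # illustrative topic name

def run_sensor(n_samples: int = 100) -> None:
    """Simulated high-frequency sensor publishing JSON readings via MQTT."""
    pub = mqtt.Client()
    pub.connect("localhost", 1883)
    for _ in range(n_samples):
        reading = {"value": 20.0 + random.random(), "ts": time.time()}
        pub.publish(TOPIC, json.dumps(reading))
        time.sleep(0.01)                       # roughly 100 Hz
    pub.disconnect()

def run_backend() -> None:
    """MQTT subscriber that writes every received reading into InfluxDB."""
    db = InfluxDBClient(host="localhost", port=8086, database="iot")

    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)
        db.write_points([{"measurement": "temperature",
                          "fields": {"value": reading["value"]}}])

    sub = mqtt.Client()
    sub.on_message = on_message
    sub.connect("localhost", 1883)
    sub.subscribe(TOPIC)
    sub.loop_forever()
```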
### Contents
The general topics of the lecture included:
- Arduino and Raspberry Pi
- Wearables and ubiquitous computing
- Metaheuristics for optimization problems
- Edge/fog/cloud computing and storage
- Scalable algorithms and approaches
- Spatial data mining
- Information retrieval and mining
- Blockchain and digital consensus
- Combinatorial optimization in practice
- Predictive maintenance systems
- Smart IoT applications
- Cyber security
- Web of Things

View File

@ -9,8 +9,6 @@ header:
---
![logo](\assets\images\teaching\py.png){: .align-left style="padding:0.1em; width:5em"}
During the winter semester of 2018, as part of the [IOT](/teaching/IOT/) lecture series, we conducted a "Python 101" course. This extensive introduction to [`Python`](https://www.python.org/), which I co-developed and co-taught, spanned four classes and reached approximately 200 students.
In addition to theoretical lessons, we incorporated a practical component to enhance students' programming skills in Python.

View File

@ -7,9 +7,7 @@ header:
teaser: assets/images/teaching/computer_os.png
---
![logo](\assets\images\teaching\computer_os.png){: .align-left style="padding:0.1em; width:5em"}In the semesters listed below, I assisted in organizing the "Operating Systems" lecture for 300-400 students, coordinating with a team of 10-12 tutors to manage the workload.
### Content
@ -18,11 +16,4 @@ Also, we created each weeks graded exercise sheets as well as the written exam a
| [Winter semester 2019](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1920/)| | [Winter semester 2019](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1920/)|
| [Summer semester 2018](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1819/)| | [Summer semester 2018](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1819/)|
</div>The lecture `Operating Systems` was a continuation of the lecture [`Computer Architecture`](teaching/computer_achitecture/) held in the summer semester. </div>We developed weekly graded exercises and exams. This lecture, a continuation of [`Computer Architecture`](teaching/computer_achitecture/), focused on system programming concepts such as OS programming, synchronization, process communication, and memory management. Practical exercises used Java, particularly the Thread API, and the course concluded with distributed systems architecture. It was taught by Prof. Dr. Linnhoff-Popien at [LMU Munich](https://www.mobile.ifi.lmu.de/).
The focus of the lecture lay on presenting the concepts of system programming.
This included the programming of the operating system and of service programs such as editors, compilers and interpreters.
The lecture provided an overview of the main tasks and problems around operating systems, with particular emphasis on the areas of synchronization, process communication, kernel and memory management.
Java (in particular the Thread API) was used in the practical exercises to implement the concepts introduced in the lecture; a minimal sketch is shown below.
At the end of the lecture, the architecture of distributed systems, cross-computer communication, and remote procedure calls were also discussed.
This lecture, titled `Betriebssysteme`, was held by Prof. Dr. Linnhoff-Popien at [LMU](https://www.mobile.ifi.lmu.de/).
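To illustrate the kind of exercise the Thread API lends itself to, here is a minimal, hypothetical Java sketch (not taken from the actual course material): two threads increment a shared counter, and `synchronized` enforces mutual exclusion. All class and method names are illustrative only.

```java
// Hypothetical exercise sketch: mutual exclusion with the Java Thread API.
public class SynchronizedCounter {
    private int value = 0;

    // `synchronized` ensures only one thread at a time runs this method
    // on the same object, preventing lost updates.
    public synchronized void increment() {
        value++;
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();

        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();  // wait for both worker threads to finish
        t2.join();

        // Prints 200000; without `synchronized`, updates would be lost.
        System.out.println("Final value: " + counter.get());
    }
}
```

Removing the `synchronized` keyword turns this into the classic race-condition demonstration discussed under process synchronization.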


@ -9,12 +9,9 @@ header:
--- ---
![logo](\assets\images\teaching\ios.png){: .align-left style="padding:0.1em; width:5em"} ![logo](\assets\images\teaching\ios.png){: .align-left style="padding:0.1em; width:5em"}
For one semester, and with the experience from [Android app development](teaching/android), I stepped in to support my colleague in teaching mobile app development at LMU. Leveraging my [Android app development](/teaching/android) experience, I contributed to teaching a mobile app development lab at LMU, focusing on iOS programming with Swift.
The lab was divided into two phases:
**1)** In the introductory phase, the theoretical basics were taught in a weekly preliminary meeting, in addition to practical timeslots.
**2)** During the project phase, students then worked independently in groups on their own projects.
There were individual appointments with the project groups to discuss the respective status of the project work.
The course had an introductory phase for theoretical basics and practical sessions, followed by a project phase where students worked in groups on their projects, with individual guidance provided.
Specifically, the practical course provided an introduction to programming for the Apple iOS operating system. Specifically, the practical course provided an introduction to programming for the Apple iOS operating system.
The focus was on programming with Swift and an introduction to specific concepts of programming on mobile devices. The focus was on programming with Swift and an introduction to specific concepts of programming on mobile devices.
@ -26,4 +23,4 @@ The focus was on programming with Swift and an introduction to specific concepts
- Teamwork and planning of timed projects - Teamwork and planning of timed projects
- Agile feature development and tools - Agile feature development and tools
iOS app development was taught as `Praktikum Mobile und Verteilte Systeme (MSP)` This iOS app development seminar was named `IOS Praktikum (IOS)`


@ -9,11 +9,9 @@ header:
--- ---
![logo](\assets\images\teaching\android.png){: .align-left style="padding:0.1em; width:5em"} ![logo](\assets\images\teaching\android.png){: .align-left style="padding:0.1em; width:5em"}
Over the course of several semesters, my colleagues and I taught mobile app development at [LMU](https://www.mobile.ifi.lmu.de/). Over multiple semesters, my colleagues and I taught mobile app development at LMU.
The lab was divided into two phases: The course was structured into two phases:
**1)** In the introductory phase, the theoretical basics were taught in a weekly preliminary meeting, in addition to practical timeslots. an introductory phase covering theoretical basics and practical skills, followed by a project phase where students worked in groups on their projects, receiving individual guidance.
**2)** During the project phase, students then worked independently in groups on their own projects.
There were individual appointments with the project groups to discuss the respective status of the project work.
### Content ### Content
@ -35,6 +33,4 @@ There were individual appointments with the project groups to discuss the respec
- Teamwork and planning of timed projects - Teamwork and planning of timed projects
- Agile feature development and tools - Agile feature development and tools
&nbsp;
This course was held as `Praktikum Mobile und Verteilte Systeme (MSP)` This course was held as `Praktikum Mobile und Verteilte Systeme (MSP)`


@ -8,7 +8,7 @@ header:
--- ---
![logo](\assets\images\teaching\thesis.png){: .align-left style="padding:0.1em; width:5em"} ![logo](\assets\images\teaching\thesis.png){: .align-left style="padding:0.1em; width:5em"}
This seminar deals with selected topics from the field of mobile and distributed systems, in particular from the main research areas of the chair. In recent semesters, this has led to a focus on topics from the field of machine learning and quantum computing. The seminar focuses on mobile and distributed systems, with recent iterations emphasizing machine learning and quantum computing, reflecting the chair's main research areas.
### Content ### Content
@ -21,6 +21,6 @@ This seminar deals with selected topics from the field of mobile and distributed
| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose21/)| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122-2/) | | [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose21/)| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122-2/) |
| --- |[2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-wise2021/)| | --- |[2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-wise2021/)|
</div>One aim of the seminar is also to learn and practise scientific working techniques. To this end, a course on presentation and working techniques is offered during the semester and supplemented by individual presentation coaching/feedback. </div>The seminar aims to enhance scientific working techniques through a dedicated course on presentation and working methods, complemented by individual presentation coaching and feedback.
The final grade for the seminar is based on the quality of the academic work, the presentation and active participation in the seminars. The final grade reflects the quality of academic work, presentation skills, and active seminar participation.


@ -8,8 +8,7 @@ header:
--- ---
![logo](\assets\images\teaching\thesis_master.png){: .align-left style="padding:0.1em; width:5em"} ![logo](\assets\images\teaching\thesis_master.png){: .align-left style="padding:0.1em; width:5em"}
This seminar deals with selected topics from the field of mobile and distributed systems, in particular from the main research topics of the chair. The seminar explores topics in mobile and distributed systems, especially those aligning with the chair's research interests, recently emphasizing machine learning and quantum computing.
In recent semesters, this has led to a focus on topics from the field of machine learning and quantum computing.
### Content ### Content
<div class="table-right"> <div class="table-right">
@ -21,6 +20,4 @@ In recent semesters, this has led to a focus on topics from the field of machine
| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose21/)| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122/) | | [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose21/)| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122/) |
| --- |[2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2021/)| | --- |[2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2021/)|
</div>One aim of the seminar is also to learn and practise scientific working techniques. To this end, a course on presentation and working techniques is offered during the semester and supplemented by individual presentation coaching/feedback. </div>The seminar aims to teach and practice scientific working techniques, offering a course on presentation and working methods plus individual coaching. Grades are based on academic work, presentation quality, and seminar participation.
The final grade for the seminar is based on the quality of the academic work, the presentation and active participation in the seminars.


@ -9,25 +9,19 @@ canonical_url: "https://steffenillium.de"
permalink: "/about/" permalink: "/about/"
--- ---
<div class="table-right"> <div style="text-align: center;border-collapse: collapse; border: none;" class="table-right">
|![logo](\assets\images\longshot.jpg){: style="margin:0em; padding:0em; width:10em"}| |![Profile Image](\assets\images\longshot.jpg){: style="margin:0em; padding:0em; width:10em"}|
|:--:| | **Steffen Illium**<br>*AI Researcher & Data Scientist*<br>*PhD Student @ LMU Munich*|
| **Steffen Illium**<br>*AI Researcher & Data Scientist*<br>*PhD Student @ LMU Munich*<br>*Living in Augsburg*|
</div>
Working at a university means being a teacher in theoretical classes, an advisor in practical classes, a speaker and an organizer for lectures. In the respective pages, you can learn more about my [teaching](teaching) and [research](research) topics.
Working on [projects](projects) on behalf of or together with industry partners was another task. Here I learned, for example, about audio signal processing and the training of deep neural networks in the context of sequence and image data.
In my final year, I was given the opportunity to study multi-agent reinforcement learning in the context of safety and emergent phenomena in fused industrial environments.
Together with my personal interests, this formed the basis for the [publications](publications) we were fortunate enough to work on, as well as the skills I acquired over the course of time.
Furthermore, my colleagues and I worked on what we called '*hobbies*', which led to me becoming the head organizer of an [open-source conference](https://openmunich.eu).
Soon thereafter, I took over the editorial office of our [online magazine](https://digitaleweltmagazin.de/).
I was fortunate to get the opportunity to work in various roles and with tools I had never imagined before.
[Grab my CV here](\assets\illium_cv_censored.pdf){: .btn .btn--success} [Grab my CV here](\assets\illium_cv_censored.pdf){: .btn .btn--success}
</div>
Working at a university encompasses a broad spectrum of roles including teaching theoretical courses, guiding practical sessions, and contributing as both a speaker and organizer for lectures. For further insights into my academic contributions, explore my [teaching](teaching) and [research](research) pages.
My involvement in [projects](projects) often entailed collaboration with industry partners, where I delved into audio signal processing and honed my skills in training deep neural networks for analyzing sequences and image data. My final year presented a unique opportunity to investigate multi-agent reinforcement learning, focusing on safety and emergent phenomena within integrated industrial settings. This experience, combined with my personal interests, laid the groundwork for the [publications](publications) I've had the privilege to contribute to and the diverse skill set I've developed over time.
Additionally, my colleagues and I pursued what we affectionately termed '*hobbies*', which led to my role as the lead organizer of the [open-source conference](https://openmunich.eu). My journey continued as I assumed responsibility for the editorial office of our [online magazine](https://digitaleweltmagazin.de/), further broadening my professional experience and introducing me to a variety of roles and tools beyond my initial expectations.
--- ---

*(Binary image files changed — previews not shown; numerous site images were re-compressed, substantially reducing their file sizes.)*

cv.md

@ -1,9 +0,0 @@
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
layout: single
author_profile: true
title: "Curriculum vitae"
permalink: /cv/
---


@ -10,10 +10,10 @@ permalink: "/"
entries_layout: grid entries_layout: grid
--- ---
Hey, glad you found me! :wave: Welcome, and thank you for visiting! :wave:
This web page is intended to provide an [overview of my current professional life](/about) as a doctoral student at Ludwig Maximilian University, Munich ([LMU Munich](https://www.lmu.de)). This website is designed to give you a comprehensive [overview of my journey](/about) as a doctoral student at Ludwig Maximilian University, Munich ([LMU Munich](https://www.lmu.de)).
Please find further details on the pages in the menu above.<br> For more detailed information, please explore the options available in the top menu.<br>
<figure class="third"> <figure class="third">
<img src="/assets/images/photo/bike.jpg" alt="Bike in the Garden"> <img src="/assets/images/photo/bike.jpg" alt="Bike in the Garden">
@ -21,4 +21,5 @@ Please find futher details on pages in the upperhand menu.<br>
<img src="/assets/images/photo/azores.jpg" alt="Rough, stormy coastline of the Azores with pink flowers on green gras in foreground"> <img src="/assets/images/photo/azores.jpg" alt="Rough, stormy coastline of the Azores with pink flowers on green gras in foreground">
</figure> </figure>
This site's general structure follows the structure of my last five years of work life, which can basically be divided into [research](/research), [teaching](/teaching), work on [projects](/projects) and the resulting [publications](/publications). Just have a look around and thanks for coming here :blush:
Reflecting the diverse facets of my professional life over the past five years, the structure of this site encompasses my [endeavors in research](/research), [teaching](/teaching), [projects](/projects), and [publications](/publications). Feel free to browse through and discover more about my work. I appreciate your interest! :blush:


@ -10,10 +10,11 @@ author_profile: true
entries_layout: list entries_layout: list
--- ---
Here you will find an overview of the projects I worked on at the [mobile and distributed systems chair](http://www.mobile.ifi.lmu.de/). Here you will find an overview of the projects I worked on at the [mobile and distributed systems chair](http://www.mobile.ifi.lmu.de/).
I had multiple roles within my time, such as technician, researcher, project communicator, conference organizer, and editor-in-chief. Throughout my tenure, I embraced various roles including technician, researcher, project communicator, conference organizer, and editor-in-chief.
Therefore, this list consists of a mix of real industrial projects (in cooperation with SWA and Fraunhofer) and what we call “hobbies” within the chair's reach. As a result, the list below represents a blend of genuine industrial projects, undertaken in collaboration with SWA and Fraunhofer, as well as pursuits we affectionately refer to as “hobbies” within the ambit of the chair.
## List of Projects ## List of Projects
--- ---


@ -8,11 +8,11 @@ title: "publications"
permalink: /publications/ permalink: /publications/
--- ---
This is a list of scientific papers to which I have contributed or which were based on my research and ideas. This section presents a collection of scientific papers to which I have contributed, or that were inspired by my research and ideas.
Due to my interest in the general principles of deep learning and neural networks, the topics range from deep dives into the inner workings to real-world applications of neural networks. My keen interest in the foundational principles of deep learning and neural networks has led me to explore a wide array of topics, ranging from in-depth analyses of their inner mechanisms to practical applications in various domains.
Certainly, the latter were influenced by the [projects](/projects) I was involved in and working on. Many of these endeavors were directly influenced by the [projects](/projects) I participated in.
Moreover, my colleagues and I were full of excitement, pursuing rather exotic concepts. My colleagues and I, driven by curiosity and enthusiasm, also ventured into the exploration of somewhat unconventional concepts. I invite you to explore these works and share in our journey of discovery. 🤗
Please see for yourself. :hugs:
--- ---


@ -11,8 +11,7 @@ author_profile: true
entries_layout: grid entries_layout: grid
--- ---
Here you'll find an overview of the papers in which I am listed either as first author or was involved in the process and listed down the line. Here you'll find a curated overview of the papers where I have played a pivotal role, either as the first author or as a contributing author further down the authorship line. My involvement has spanned a variety of activities, from conceptualizing the initial ideas and developing machine learning models, to providing support and insights to my colleagues, to rigorously reviewing and refining the work.
This ranges from developing the initial idea, to implementing and tuning machine learning models to help my colleagues, or simply discussing and checking on a piece of work.
## List of Papers ## List of Papers


@ -9,8 +9,9 @@ taxonomy: teaching
author_profile: true author_profile: true
entries_layout: list entries_layout: list
--- ---
Being a doctoral student, I was happy to also assume a teaching role, either as a mentor for undergraduate and graduate students' theses, an assistant for arranging larger lectures, or as a facilitator for practical seminars and courses. As a doctoral student, embracing the role of an educator brought me great joy, whether it was mentoring undergraduate and graduate students on their theses, assisting in the organization of larger lectures, or leading practical seminars and courses. Below is a list of subjects where I contributed either as an assistant or as the main instructor.
Below, you'll find a list of subjects in which I played an assisting or leading role.
A comprehensive listing of past thesis topics can be accessed on my [LMU profile page](https://www.mobile.ifi.lmu.de/team/steffen-illium/). For a detailed list of thesis topics I have supervised, please visit my [LMU profile page](https://www.mobile.ifi.lmu.de/team/steffen-illium/).
--- ---