Website overhaul

This commit is contained in:
Steffen Illium 2025-03-27 22:57:31 +01:00
parent 2b75326eac
commit 755fd297bb
70 changed files with 1389 additions and 709 deletions

404.md
View File

@ -14,10 +14,10 @@ author_profile: false
![https://unsplash.com/photos/two-person-standing-on-gray-tile-paving-TamMbr4okv4](/assets/images/404.jpg)
{: .text-center}
Sorry, but the page you were trying to view does not exist.
Sorry, but the page you are trying to view does not exist.
{: .text-center}
 
<a href="/" class="btn btn--primary">Back to the start</a>
<a href="/" class="btn btn--primary">Go back</a>
{: .text-center}

View File

@ -100,10 +100,11 @@ organization={Conference on Artificial Life - Alife 2023},
publisher={Copernicus Publications Göttingen, Germany}
}
@article{illium2020surgical,
title={Surgical mask detection with convolutional neural networks and data augmentations on spectrograms},
author={Illium, Steffen and Müller, Robert and Sedlmeier, Andreas and Linnhoff-Popien, Claudia},
journal={arXiv preprint arXiv:2008.04590},
@inproceedings{illium2020surgical,
title={Surgical Mask Detection with Convolutional Neural Networks and Data Augmentations on Spectrograms},
author={Illium, Steffen and M{\"u}ller, Robert and Sedlmeier, Andreas and Linnhoff-Popien, Claudia},
booktitle={Proc. Interspeech 2020},
pages={2052--2056},
year={2020}
}
@ -193,18 +194,23 @@ organization={Conference on Artificial Life - Alife 2023},
organization={Springer International Publishing Cham}
}
@article{illium2022constructing,
title={Constructing Organism Networks from Collaborative Self-Replicators},
author={Illium, Steffen and Zorn, Maximilian and Kölle, Michael and Linnhoff-Popien, Claudia and Gabor, Thomas},
journal={arXiv preprint arXiv:2212.10078},
year={2022}
@inproceedings{illium2022constructing,
title={Constructing organism networks from collaborative self-replicators},
author={Illium, Steffen and Zorn, Maximilian and Lenta, Cristian and K{\"o}lle, Michael and Linnhoff-Popien, Claudia and Gabor, Thomas},
booktitle={2022 IEEE Symposium Series on Computational Intelligence (SSCI)},
pages={1268--1275},
year={2022},
organization={IEEE}
}
@article{illium2022voronoipatches,
title={VoronoiPatches: Evaluating A New Data Augmentation Method},
author={Illium, Steffen and Griffin, Gretchen and Kölle, Michael and Zorn, Maximilian and Nü{\ss}lein, Jonas and Linnhoff-Popien, Claudia},
journal={arXiv preprint arXiv:2212.10054},
year={2022}
@inproceedings{illium2023voronoipatches,
title={VoronoiPatches: Evaluating a New Data Augmentation Method},
author={Illium, Steffen and Griffin, Gretchen and K{\"o}lle, Michael and Zorn, Maximilian and N{\"u}sslein, Jonas and Linnhoff-Popien, Claudia},
booktitle={International Conference on Agents and Artificial Intelligence},
volume={15},
number={Volume 3},
pages={350--357},
year={2023}
}
@article{kolle2023compression,

View File

@ -38,7 +38,7 @@ layout: default
{% include page__meta.html %}
</header>
{% endunless %}
<br>
<section class="page__content" itemprop="text">
{% if page.toc %}
<aside class="sidebar__right {% if page.toc_sticky %}sticky{% endif %}">

View File

@ -1,18 +1,31 @@
---
layout: single
title: "Mobile Internet Innovations"
title: "InnoMi Project"
categories: projects
excerpt: "Aiming to strengthen Bavaria by transferring innovations from the university to industry at an early stage."
excerpt: "Early-stage mobile/distributed tech transfer between academia and industry (Bavaria)."
header:
teaser: assets/images/projects/innomi.png
teaser: /assets/images/projects/innomi.png
---
![InnoMi Logo](/assets/images/projects/innomi.png){: .align-left style="padding: 0.1em; width: 5em;"}The InnoMi research initiative served as a vital bridge between academic research and industrial application within Bavaria. Funded by the state government and operating under the umbrella of the Zentrum Digitalisierung.Bayern, the project provided crucial resources and a collaborative framework.
---
![logo](/assets/images/projects/innomi.png){: .align-left style="padding:0.1em; width:5em"}
The [Innomi](https:\\innomi.org) research project, part of the [Zentrum Digitalisierung.Bayern (ZDB)](https://www.bayern-innovativ.de/de/unternehmen/zdb), enhances Bavaria's economy by fostering early innovation transfers from academia to industry. Funded by the Bavarian Ministry of Economic Affairs and in collaboration with local companies, it has supported the [Mobile Distributed Systems Chair](https:\\mobile.ifi.lmu.de) since 2016, yielding numerous scientific publications across diverse research areas.
[Innomi](https:\\innomi.org) has also enabled the organization of conferences like [Digicon](https://digitaleweltmagazin.de/digicon/) and [OpenMunich](https://openmunich.eu/), bridging industry and academia. My role extended to editing [Digitale Welt Magazin (DW)](https://digitaleweltmagazin.de), further linking current digitalization trends with industry needs.
**Project:** [InnoMi - Innovations for the Mobile Internet](https://innomi.org)<br>
**Affiliation:** Zentrum Digitalisierung.Bayern (ZDB)<br>
**Funding:** Bavarian Ministry of Economic Affairs, Regional Development and Energy (StMWi)<br>
**Duration:** 2018-2023 (supporting the Chair for Mobile and Distributed Systems)<br>
**Objective:** To strengthen the Bavarian economy by facilitating the early transfer of innovations from university research, specifically at the [Chair for Mobile and Distributed Systems at LMU Munich](https://www.mobile.ifi.lmu.de), to local industry partners.
{% details_link zorn23surprise %}
---
**Key Outcomes & Contributions:**
* **Research Advancement:** The project directly supported foundational and applied research activities at the LMU Chair, leading to numerous scientific [publications](/publications) across various domains within mobile and distributed systems.
* **Knowledge Transfer & Networking:** InnoMi facilitated vital interactions between academia and industry. Within this framework, I contributed to the organization and management of key events designed to foster this exchange, including:
* [OpenMunich Conference](https://openmunich.eu) (2018-2019): Served as lead organizer.
* [DigiCon Conference Series](https://digitaleweltmagazin.de/digicon/) (2018-2019): Provided organizational assistance.
* **Dissemination & Editorial Leadership:** To further bridge the gap between cutting-edge digitalization trends and industry practitioners, I served as the Head of the Online Editorial Team for the associated [Digitale Welt Magazin (DW)](https://digitaleweltmagazin.de/) from 2018 to 2023, a role supported by the InnoMi initiative.
This project provided a platform not only for advancing research but also for developing crucial skills in project communication, event management, and editorial leadership, directly contributing to the technology transfer goals of the Bavarian region.

View File

@ -1,24 +1,42 @@
---
layout: single
title: "Leading an editorial office."
title: "DW Editorial Lead"
categories: projects
excerpt: "A unique line of text to describe this post that will display in an archive listing and meta description with SEO benefits."
excerpt: "Led online editorial team for DIGITALE WELT Magazin (2018-2023)."
header:
teaser: assets/images/projects/dw.png
teaser: /assets/images/projects/dw.png
---
![DIGITALE WELT Logo](/assets/images/projects/dw.png){: .align-left style="padding:0.1em; width:5em"}
**Role:** Head of Online Editorial Team<br>
**Publication:** [DIGITALE WELT Magazin (DW)](https://digitaleweltmagazin.de)<br>
**Affiliation:** [LMU Munich](/projects/innomi/) <br>
**Duration:** 2018 - 2023
---
![logo](\assets\images\projects\dw.png){: .align-left style="padding:0.1em; width:5em"}
During my doctoral studies and research tenure at LMU Munich, I led the online editorial team for *DIGITALE WELT Magazin*. This role, supported by the [InnoMi project](/projects/innomi/), involved managing the publication's digital presence and strategic direction, aiming to effectively bridge scientific research and industry perspectives on digitalization trends.
As Editor in Chief at [DIGITALE WELT Magazin (DW)](https://digitaleweltmagazin.de) during my tenure at LMU, I oversaw online and social media content, aiming to blend scientific and economic discourse on a singular platform. I streamlined content acquisition and publication processes, significantly reducing workload through automation and broadening our content portfolio. My tenure also included overseeing a major website redesign and transition towards a digital-first approach.
---
**Key Responsibilities & Achievements:**
* **Digital Strategy & Content Management:** Oversaw all online content publication and social media channels, defining the editorial calendar and ensuring alignment with the magazine's goal of integrating academic and economic discourse.
* **Process Optimization & Automation:** Developed and implemented streamlined workflows for content acquisition, editing, and publication. Introduced automation solutions that significantly reduced manual workload and improved efficiency.
* **Portfolio Expansion:** Actively broadened the scope and variety of online content to better serve the target audience and reflect emerging digital trends.
* **Website Relaunch Oversight:** Played a key role in managing a major website redesign project, focusing on user experience, modern aesthetics, and facilitating a transition towards a digital-first content strategy.
<center>
<img src="\assets\images\projects\dw_screenshot.png" width="550">
<img src="/assets/images/projects/dw_screenshot.png" alt="Screenshot of the DIGITALE WELT Magazin Website" width="550">
<figcaption>DIGITALE WELT Magazin Website Interface</figcaption>
</center>
<br>
Prior to leading the online team, I contributed to the print editions of the magazine, specifically managing the "Wissen" (Knowledge) section. These earlier contributions are archived and accessible [online](https://digitaleweltmagazin.de/alle-magazine/).
Previously, I managed the "Knowledge" section, contributing to its printed editions, now accessible [online](https://digitaleweltmagazin.de/alle-magazine/).
<center>
<img src="\assets\images\projects\dw_magazin.png" width="550">
<img src="/assets/images/projects/dw_magazin.png" alt="Cover collage of printed DIGITALE WELT Magazin issues" width="550">
<figcaption>Examples of DIGITALE WELT Print Magazine Covers</figcaption>
</center>
<br>

View File

@ -1,17 +1,31 @@
---
layout: single
title: "Detection and localization of leakages in water networks."
title: "ErLoWa Leak Detection"
categories: projects
excerpt: "We researched the possibilities of leakage detection in real-world water networks in Munichs suburbs."
tags: acoustic anomaly-detection
excerpt: "Deep learning detects acoustic water leaks with SWM."
tags: acoustic anomaly-detection deep-learning real-world-data signal-processing
header:
teaser: assets/images/projects/pipe_leak.png
teaser: /assets/images/projects/pipe_leak.png
role: Data Scientist, Machine Learning Expert
skills: Real-world model application
skills: Acoustic Signal Processing, Deep Learning (CNNs), Anomaly Detection, Real-world Data Handling, Sensor Data Analysis, Industry Collaboration
---
![Leaking pipe image](/assets/images/projects/pipe_leak.png){: .align-left style="padding:0.1em; width:5em"}
Collaborating with Munich City Services ([Stadtwerke München (SWM)](https://www.swm.de/)), our project focused on detecting leaks in water networks. We equipped Munich's suburban infrastructure with contact microphones to capture the sounds of potential leaks.
![Leaking pipe icon](/assets/images/projects/pipe_leak.png){: .align-left style="padding:0.1em; width:5em"}
**Project:** ErLoWa (Erkennung von Leckagen in Wasserleitungsnetzen; "detection of leaks in water supply networks")<br>
**Partner:** [Stadtwerke München (SWM)](https://www.swm.de/)<br>
**Duration:** Late 2018 - Early 2020<br>
**Objective:** To investigate and develop methods for the automated detection and localization of leaks in urban water distribution networks using acoustic sensor data.<br>
---
In collaboration with Munich's municipal utility provider, Stadtwerke München (SWM), this project explored the feasibility of using acoustic monitoring for early leak detection in water pipe infrastructure. The primary goal was to develop machine learning models capable of identifying leak-indicating sound patterns within a real-world operational environment.
**Methodology & Activities:**
* **Data Acquisition:** Sensor networks comprising contact microphones were deployed across sections of Munich's suburban water network to capture continuous acoustic data.
* **Signal Processing:** Raw audio signals were pre-processed and transformed into mel spectrograms, converting the time-domain audio data into image-like representations suitable for analysis with computer vision techniques (a minimal code sketch of this step follows this list).
* **Model Development:** Various machine learning approaches were evaluated. Deep neural networks, particularly Convolutional Neural Networks (CNNs), were trained on the spectrogram data to classify segments as containing leak sounds or normal background noise.
* **Analysis & Validation:** The performance of the models was assessed against ground truth data provided by SWM, identifying both the successes and challenges of applying these methods in a complex, noisy, real-world setting.
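As a concrete illustration of the signal-processing step, the following minimal sketch converts an audio clip into a log-scaled mel spectrogram using librosa; the sample rate and mel-band count are assumed values, not the project's exact preprocessing parameters:

```python
import numpy as np
import librosa  # assumed dependency for audio loading and mel transforms

def audio_to_log_mel(path, sr=16000, n_mels=64):
    """Load an audio clip and convert it to a log-scaled mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)                  # time-domain signal
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)                  # shape: (n_mels, frames)
```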
<center>
<figure class="half" style="max-width: 70%; text-align:center;">
@ -23,7 +37,12 @@ Collaborating with Munich City Services ([Stadtwerke München (SWM)](https://www
</figure>
</center><br>
Our study highlighted technical challenges but also provided key insights. By transforming audio into mel spectrograms, we discovered that deep neural networks could identify crucial features more effectively than traditional machine learning methods, leading to further research publications.
**Key Findings & Outcomes:**
* The project demonstrated the potential of deep learning models applied to mel spectrograms for identifying relevant acoustic features indicative of water leaks.
* CNN-based approaches showed advantages over traditional machine learning methods in capturing the complex patterns associated with leak sounds amidst background noise.
* Significant insights were gained regarding the practical challenges of sensor deployment, data quality variability, and noise interference in real-world utility networks.
* The research conducted within this project formed the basis for several scientific [publications](/publications) on acoustic anomaly detection; see the [paper write-up](/research/acoustic-leak-detection/) for {% cite muller2021acoustic %}.
<center>
<figure class="half" style="max-width: 70%; text-align:center;">
@ -35,7 +54,6 @@ Our study highlighted technical challenges but also provided key insights. By tr
</figure>
</center><br>
This project was active from late 2018 to early 2020.
This applied research project provided valuable experience in handling real-world sensor data, adapting machine learning models for specific industrial challenges, and collaborating effectively with industry partners.
{% include reference.html %}

View File

@ -1,18 +1,36 @@
---
layout: single
title: "OpenMunich.eu - Conference Organisation"
categories: acoustic anomaly-detection projects
excerpt: "Organization a Munich based open-souce conference with Red Hat and Accenture"
title: "OpenMunich Lead Organizer"
categories: projects
tags: community-engagement
excerpt: "Led OpenMunich (2018-19) connecting academia, industry, students on open-source."
header:
teaser: assets/images/projects/openmunich.png
role: Head Conference Manager
skills: Real-world model application
teaser: /assets/images/projects/openmunich.png
role: Lead Conference Organizer
skills: Event Management, Stakeholder Coordination (Industry & Academia), Project Planning, Website Management, Communication, Sponsorship Liaison
---
![logo](\assets\images\projects\openmunich.png){: .align-left style="padding:0.1em; width:5em"}
In collaboration with [Accenture](https://www.accenture.com/de-de) and [Red Hat](https://www.redhat.com/en), our chair hosted the [`OpenMunich`](https:\\openmunich.eu) conference, targeting professionals and students with an interest in open-source technologies. This event served as a platform for discussing our research, spanning topics from Machine Learning to Quantum Computing advancements.
![OpenMunich Logo](/assets/images/projects/openmunich.png){: .align-left style="padding:0.1em; width:5em"}
**Event:** [OpenMunich Conference](https://openmunich.eu)<br>
**Partners:** [Accenture](https://www.accenture.com/de-de), [Red Hat](https://www.redhat.com/en)<br>
**Affiliation:** Chair for Mobile and Distributed Systems, LMU Munich<br>
**Role:** Lead Organizer<br>
**Duration:** 2018 - 2019
Accenture and Red Hat not only provided financial backing but also contributed significantly to the program, offering sessions on `Ansible`, ML, and QC.
---
My responsibilities included organizing the infrastructure, coordinating with partners, colleagues, and external speakers, and managing the project's website—overseeing its content, structure, and maintenance.
![OpenMunich Website](\assets\images\projects\openmunich_website.png){: .align-right style="padding:0.1em; width:10em"}
As Lead Organizer, I spearheaded the planning and execution of the OpenMunich conference series during 2018 and 2019. This event, organized by the LMU Chair for Mobile and Distributed Systems in collaboration with industry partners Accenture and Red Hat, aimed to create a forum for professionals, researchers, and students interested in the latest developments within the open-source ecosystem.
The conference provided a platform to showcase research emerging from the university, covering topics from Machine Learning to Quantum Computing, alongside practical insights and technology demonstrations from our industry partners.
![OpenMunich Website Screenshot](/assets/images/projects/openmunich_website.png){: .align-right style="padding:0.1em; width:10em; margin-left: 1em;" alt="Screenshot of the OpenMunich conference website homepage"}
**Key Responsibilities:**
* **Overall Event Management:** Oversaw all logistical aspects of the conference planning and execution, including venue coordination, scheduling, and technical infrastructure setup.
* **Stakeholder Coordination:** Served as the primary point of contact between the university chair, industry partners (Accenture, Red Hat), internal colleagues, external speakers, and attendees.
* **Program Development Support:** Collaborated with partners on defining the conference agenda, ensuring a balanced mix of academic research presentations and industry sessions (e.g., on Ansible, ML applications, QC).
* **Website & Communication:** Managed the official conference website, openmunich.eu (now offline), including content creation, structural design, updates, and maintenance; handled external communications and promotion.
* **Sponsorship Liaison:** Coordinated with Accenture and Red Hat regarding their sponsorship contributions and participation requirements.
This role required comprehensive organizational skills, effective communication across diverse stakeholder groups, and project management to ensure the successful delivery of the conference.

View File

@ -1,28 +1,67 @@
---
layout: single
title: "AI-Fusion: Emergence Detection for Mixed MARL Systems."
categories: acoustic anomaly-detection projects
excerpt: "Bringing together agents can be an inherent safety problem. Building the basis to mix and match."
title: "AI-Fusion Safety"
categories: projects
tags: multi-agent-systems reinforcement-learning safety emergence simulation
excerpt: "Studied MARL emergence and safety, built simulations with Fraunhofer."
header:
teaser: assets/images/projects/robot.png
teaser: /assets/images/projects/robot.png
role: Researcher, Software Developer
skills: Multi-Agent Reinforcement Learning (MARL), Emergence Analysis, AI Safety, Simulation Environment Design, Python, Gymnasium API, Software Engineering, Unity (Visualization), Industry Collaboration
---
![logo](\assets\images\projects\robot.png){: .align-left style="padding:0.1em; width:5em"}
In cooperation with [Fraunhofer IKS](https://www.iks.fraunhofer.de/), this project explored emergent effects in multi-agent reinforcement learning scenarios, such as mixed-vendor autonomous systems. Emergence, defined as complex dynamics arising from interactions among entities and their environment, was a key focus.
<div class="container">
<div class="sidebar" style="float: right; width: 25%; border: 0.5px grey solid; padding: 15px;">
<h4 style="margin-top: 0;">Project Resources</h4>
<ul style="list-style: none; padding-left: 0;">
<li><a href="https://github.com/illiumst/marl-factory-grid/" target="_blank" rel="noopener noreferrer"><i class="fab fa-fw fa-github" aria-hidden="true"></i> GitHub Repo</a></li>
<li><a href="https://pypi.org/project/Marl-Factory-Grid/" target="_blank" rel="noopener noreferrer"><i class="fab fa-fw fa-python" aria-hidden="true"></i> Install via PyPI</a></li>
<li><a href="https://marl-factory-grid.readthedocs.io/en/latest/" target="_blank" rel="noopener noreferrer"><i class="fas fa-fw fa-book" aria-hidden="true"></i> ReadTheDocs</a></li>
<li><i class="fas fa-fw fa-file-alt" aria-hidden="true"></i> {% cite altmann2024emergence %}</li>
</ul>
![Factory grid domain overview](/assets/images/projects/full_domain.png){: style="margin:0em; padding:0em; width:15em"}
</div>
<div class="main-content" style="float: left; width: 75%;">
![Relation emergence](/assets/images/projects/rel_emergence.png){: .align-center style="padding:0.1em; width:80%"}
<div class="table-right" style="text-align:right">
| ![logo](\assets\images\projects\full_domain.png){: style="margin:0em; padding:0em; width:15em"} |
| [GitHub Repo](https://github.com/illiumst/marl-factory-grid/) |
| [Install via PyPI](https://pypi.org/project/Marl-Factory-Grid/) |
| [Read-the-docs](https://marl-factory-grid.readthedocs.io/en/latest/) |
| Read the Paper (TBA) |
![Robot Arm Icon](/assets/images/projects/robot.png){: .align-left style="padding:0.1em; width:5em"}
**Project:** AI-Fusion<br>
**Partner:** [Fraunhofer Institute for Cognitive Systems (IKS)](https://www.iks.fraunhofer.de/)<br>
**Duration:** 2022 - 2023<br>
**Objective:** To investigate the detection and mitigation of potentially unsafe emergent behaviors in complex systems composed of multiple interacting AI agents, particularly in scenarios involving heterogeneous agents (e.g., mixed-vendor autonomous systems).
In collaboration with Fraunhofer IKS, the AI-Fusion project addressed the critical challenge of understanding and ensuring safety in multi-agent reinforcement learning (MARL) systems. Emergence, defined as the arising of complex, often unpredictable, system-level dynamics from local interactions between agents and their environment, was a central focus due to its implications for system safety and reliability.
</div>
</div>
We developed a high-performance environment in Python, adhering to the [gymnasium](https://gymnasium.farama.org/main/) specifications, to facilitate reinforcement learning algorithm training.
---
This environment uniquely supports a variety of scenarios through `modules` and `configurations`, with capabilities for per-agent observations and handling of multi-agent and sequential actions.
To facilitate research into these phenomena, key contributions included the development of specialized simulation tools:
Additionally, a [Unity demonstrator unit](https://github.com/illiumst/F-IKS_demonstrator) was developed to replay and analyze specific scenarios, aiding in the investigation of emerging dynamics.
**1. High-Performance MARL Simulation Environment:**
* A flexible and efficient simulation environment was developed in Python, adhering to the [Gymnasium (formerly Gym) API specification](https://gymnasium.farama.org/main/); a minimal interaction loop is sketched below, after the demonstrator description.
* **Purpose:** Designed specifically for training and evaluating reinforcement learning algorithms in multi-agent contexts prone to emergent behaviors.
* **Features:**
* **Modularity:** Supports diverse scenarios through configurable `modules` and `configurations`.
* **Observation/Action Spaces:** Handles complex agent interactions, including per-agent observations and sequential/multi-agent action coordination.
* **Performance:** Optimized for efficient simulation runs, enabling extensive experimentation.
**2. Unity-Based Demonstrator Unit:**
* A complementary visualization tool was created using the Unity engine.
* **Purpose:** Allows for the replay, inspection, and detailed analysis of specific simulation scenarios and agent interactions.
* **Utility:** Aids researchers in identifying and understanding the mechanisms behind observed emergent dynamics.
* [View Demonstrator on GitHub](https://github.com/illiumst/F-IKS_demonstrator)
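For orientation, the environment plugs into the standard Gymnasium interaction loop. The sketch below demonstrates that loop with the built-in `CartPole-v1` environment as a stand-in, since the project environment's registration names and configuration files are documented on its ReadTheDocs:

```python
import gymnasium as gym

# CartPole-v1 stands in for the factory environment, which exposes the
# same Gymnasium API (reset/step/action_space).
env = gym.make("CartPole-v1")

obs, info = env.reset(seed=0)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # random action as a policy placeholder
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```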
<div style="clear: both;"></div>
<center>
<img src="/assets/images/projects/rel_emergence.png" alt="Diagram illustrating the concept of emergence from interactions between agents and environment" style="padding:0.1em; width:80%">
<figcaption>Conceptual relationship defining emergence in multi-agent systems.</figcaption>
</center>
This project involved close collaboration with industry-focused researchers, software development adhering to modern standards, and deep investigation into the theoretical underpinnings of emergence and safety in MARL systems. The developed tools provide a valuable platform for continued research in this critical area.
{% include reference.html %}

View File

@ -1,18 +1,50 @@
---
layout: single
title: IT Expert Role
categories: projects server_admin unix
excerpt: Linux server (Workstations and Web) and cloud infrastructure administration
title: "LMU DevOps Admin"
categories: projects
tags: devops kubernetes server-administration infrastructure
excerpt: "Managed LMU chair IT: Kubernetes, CI/CD, automation (2018-2023)."
header:
teaser: assets/images/projects/arch.png
teaser: /assets/images/projects/arch.png
role: System Administrator, DevOps Engineer, Network Administrator
skills: Kubernetes (K3S), Ansible, Docker, CI/CD (GitLab CI, Argo CD), GitOps, Linux Server Administration (Debian, Arch), Networking (Traefik, WireGuard), Virtualization (Hyper-V), Storage (ZFS, Longhorn), Monitoring (WandB), Infrastructure as Code (IaC)
---
![logo](\assets\images\projects\arch.png){: .align-left style="padding:0.1em; width:5em"}
During my tenure at the Mobile and Distributed Systems Chair, I played a key role in the setup and maintenance of our technical infrastructure, including workstations, Windows server hypervisors, Linux file servers, and networking. Our approach to managing a diverse ecosystem of operating systems, hardware, and libraries involved extensive use of Ansible for orchestration.
I spearheaded the transition of a significant portion of our services to Kubernetes (K3S), implementing a comprehensive toolchain that included Longhorn, Argo CD, Sealed Secrets, and GitLab. For managing ingress and egress, Traefik served as our automated proxy manager, enabling us to efficiently route traffic within our network and accommodate external users securely through WireGuard.
![Arch Linux Logo](/assets/images/projects/arch.png){: .align-left style="padding:0.1em; width:5em" alt="Arch Linux Logo"}
**Role:** IT Infrastructure & DevOps Lead (Informal)<br>
**Affiliation:** Chair for Mobile and Distributed Systems, LMU Munich<br>
**Duration:** 2018 - 2023 (Concurrent with Research Role)<br>
**Objective:** Continuous maintenance and development of the chair's IT infrastructure
My experience extended to optimizing machine learning workflows, transitioning from unreliable SLURM-based setups to automated, high-performance workstation runs using Weights & Biases (WandB) for experiment management, leveraging our self-hosted GitLab registry for Docker container orchestration.
This journey enriched my skills in Linux server administration, networking, infrastructure as code, and cloud-native technologies. It fostered a preference for minimalist, microservice-based architectures, and I've applied these principles to my personal projects, including self-hosting this website and other services, underscoring my commitment to practical, efficient technology solutions.
During my tenure at the LMU Chair for Mobile and Distributed Systems, alongside my research activities, I assumed responsibility for the ongoing maintenance of the group's IT infrastructure. This encompassed Linux workstations, Windows Server-based hypervisors, Linux file servers (utilizing ZFS), and core network services.
More of the tech stack I encountered on my journey is listed [here](/about).
**Key Initiatives & Achievements:**
* **Infrastructure as Code & Orchestration:**
* Leveraged **Ansible** extensively for automated configuration management and orchestration across a heterogeneous environment, ensuring consistency and reducing manual effort in managing diverse operating systems (Debian, Arch Linux, Windows), hardware configurations, and software libraries.
* **Containerization & Kubernetes Migration:**
* Spearheaded the migration of numerous internal services (including web applications, databases, and research tools) from traditional VMs and bare-metal deployments to a **Kubernetes (K3S)** cluster. This enhanced scalability, resilience, and resource utilization.
* Implemented **Longhorn** for persistent, distributed block storage within the Kubernetes cluster.
* **DevOps & GitOps Implementation:**
* Established a modern DevOps workflow centered around a self-hosted **GitLab** instance, utilizing **GitLab CI** for automated testing and container building.
* Implemented **Argo CD** for GitOps-based continuous deployment to the Kubernetes cluster, ensuring declarative state management and automated synchronization.
* Managed sensitive information using **Sealed Secrets** for secure secret handling within the GitOps workflow.
* **Networking & Security:**
* Configured **Traefik** as the primary reverse proxy and ingress controller for the Kubernetes cluster, automating routing, service discovery, and TLS certificate management.
* Implemented and managed a **WireGuard** VPN server to provide secure remote access for chair members to internal resources.
* **ML Workflow Optimization:**
* Re-architected the execution environment for machine learning experiments. Transitioned from managing dependencies directly on workstations or via a less reliable SLURM setup to a containerized approach using **Docker**.
* Utilized the self-hosted **GitLab Container Registry** for storing ML environment images and integrated **Weights & Biases (WandB)** for robust experiment tracking, visualization, and collaboration, significantly improving reproducibility and simplifying resource management on high-performance workstations.
---
**Outcomes & Philosophy:**
This hands-on role provided deep practical experience in modern system administration, networking, Infrastructure as Code (IaC), and cloud-native technologies within an academic research setting. It fostered my preference for minimalist, reproducible, and microservice-oriented architectures. These principles and skills are actively applied in my personal projects, including the self-hosting and management of this website and various other containerized services.
A more comprehensive list of the technologies I work with can be found on the [About Me](/about/) page.

View File

@ -0,0 +1,24 @@
---
layout: single
title: "Learned Trajectory Annotation"
categories: research
tags: geoinformatics machine-learning unsupervised-learning human-robot-interaction autoencoder
excerpt: "Unsupervised autoencoder learns spatial context from trajectory data for annotation."
header:
teaser: /assets/figures/0_trajectory_reconstruction_teaser.png
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
<center>
<img src="/assets/figures/0_trajectory_isovist.jpg" alt="Visualization of spatial perception field (e.g., isovist) from a point on a trajectory" style="width:48%; display: inline-block; margin: 1%;">
<img src="/assets/figures/0_trajectory_reconstruction.jpg" alt="Clustered or reconstructed trajectories based on learned spatial representations" style="width:48%; display: inline-block; margin: 1%;">
<figcaption>Learning spatial context representations (left) enables clustering and annotation of trajectories (right).</figcaption>
</center><br>
This research addresses the challenge of enabling more intuitive human-robot interaction in shared spaces, particularly focusing on grounding verbal communication in spatial understanding. The work introduces a novel unsupervised learning methodology based on neural autoencoders.
The core contribution is a system that learns continuous, low-dimensional representations of spatial context directly from trajectory data, without requiring explicit environmental maps or predefined regions. By processing sequences of spatial perceptions (analogous to visibility fields or isovists) along a path, the autoencoder captures salient environmental features relevant to movement.
These learned latent representations facilitate the effective clustering of trajectories based on shared spatial experiences. The outcome is a set of semantically meaningful encodings and prototypical representations of movement patterns within an environment. This approach lays essential groundwork for developing robotic systems capable of understanding, interpreting, and potentially describing movement through space in human-comprehensible terms, representing a promising direction for future human-robot collaboration. {% cite feld2018trajectory %}
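To make the approach concrete, here is a minimal autoencoder over fixed-size spatial-perception vectors; the 360-ray isovist encoding and layer sizes are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
from torch import nn

class PerceptionAutoencoder(nn.Module):
    """Compress one spatial perception (e.g., an isovist discretized into
    ray lengths, sampled along a trajectory) into a low-dimensional code."""
    def __init__(self, in_dim: int = 360, code_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = PerceptionAutoencoder()
batch = torch.rand(32, 360)                  # dummy perception vectors
recon, codes = model(batch)
loss = nn.functional.mse_loss(recon, batch)  # unsupervised reconstruction loss
# `codes` can then be clustered (e.g., k-means) to group similar trajectories.
```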

View File

@ -1,14 +0,0 @@
---
layout: single
title: "Trajectory annotation by spatial perception"
categories: research
excerpt: "We propose an approach to annotate trajectories using sequences of spatial perception."
header:
teaser: assets/figures/0_trajectory_reconstruction_teaser.png
---
<figure class="half">
<img src="/assets/figures/0_trajectory_isovist.jpg" alt="" style="width:48%">
<img src="/assets/figures/0_trajectory_reconstruction.jpg" alt="" style="width:48%">
</figure>
This work establishes a foundation for enhancing interaction between robots and humans in shared spaces by developing reliable systems for verbal communication. It introduces an unsupervised learning method using neural autoencoding to learn continuous spatial representations from trajectory data, enabling clustering of movements based on spatial context. The approach yields semantically meaningful encodings of spatio-temporal data for creating prototypical representations, setting a promising direction for future applications in robotic-human interaction. {% cite feld2018trajectory %}

View File

@ -0,0 +1,25 @@
---
layout: single
title: "Neural Self-Replication"
categories: research
tags: neural-networks artificial-life complex-systems self-organization
excerpt: "Neural networks replicating weights, inspired by biology and artificial life."
header:
teaser: /assets/figures/1_self_replication_pca_space.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Robustness analysis of self-replicating networks](/assets/figures/1_self_replication_robustness.jpg)
{:style="display:block; width:45%" .align-right}
Drawing inspiration from the fundamental process of self-replication in biological systems, this research explores the potential for implementing analogous mechanisms within neural networks. The objective is to develop computational models capable of autonomously reproducing their own structure (specifically, their connection weights), potentially leading to the emergence of complex, adaptive behaviors.
The study investigates various neural network architectures and learning paradigms suitable for achieving self-replication. A key finding highlights the efficacy of leveraging backpropagation-like mechanisms, not for a typical supervised task, but for navigating the weight space in a manner conducive to replication. This approach facilitates the development of non-trivial self-replicating networks.
Furthermore, the research extends this concept by proposing an "artificial chemistry" environment. This framework involves populations of interacting neural networks, where self-replication dynamics can lead to emergent properties and complex ecosystem behaviors. This work offers a novel computational perspective on self-replication, providing tools and insights for exploring artificial life and the principles of self-organization in computational systems. For a detailed discussion, please refer to the publication by {% cite gabor2019self %}.
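The self-application idea can be sketched as follows: a network reads a coordinate encoding of each of its own weights and predicts new weight values, and it counts as a (fixpoint) self-replicator when the predictions reproduce those weights. The encoding below is an assumption for illustration, not the paper's exact scheme:

```python
import torch
from torch import nn

net = nn.Linear(3, 1, bias=False)  # input per weight: (layer, index, value)

def apply_to_own_weights(net: nn.Linear) -> torch.Tensor:
    """Feed the network a coordinate encoding of its own weights."""
    with torch.no_grad():
        w = net.weight.flatten()
        coords = torch.stack([torch.zeros_like(w),                  # layer id
                              torch.arange(len(w), dtype=w.dtype),  # weight index
                              w], dim=1)                            # current value
        return net(coords).flatten()

predicted = apply_to_own_weights(net)
divergence = torch.norm(predicted - net.weight.flatten())
print(divergence)  # 0 would indicate a perfect (fixpoint) self-replicator
```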
<div style="clear: both;"></div>
<center>
<img src="/assets/figures/1_self_replication_pca_space.jpg" alt="PCA visualization showing clusters or trajectories of self-replicating networks in a latent space" style="display:block; width:100%">
<figcaption>Visualization of self-replicator populations evolving in a PCA-reduced weight space.</figcaption>
</center>

View File

@ -1,14 +0,0 @@
---
layout: single
title: "Self-Replication in Neural Networks"
categories: research
excerpt: "Introduction of NNs that are able to replicate their own weights."
header:
teaser: assets/figures/1_self_replication_pca_space.jpg
---
![Self-Replication Robustness](\assets\figures\1_self_replication_robustness.jpg){:style="display:block; width:40%" .align-right}
This text discusses the fundamental role of self-replication in biological structures and its application to neural networks for developing complex behaviors in computing. It explores different network types for self-replication, highlighting the effectiveness of backpropagation in navigating network weights and fostering the emergence of non-trivial self-replicators. The study further delves into creating an artificial chemistry environment comprising several neural networks, offering a novel approach to understanding and implementing self-replication in computational models. For in-depth insights, refer to the work by {% cite gabor2019self %}.
![Self-replicators in PCA Space (Soup)](\assets\figures\1_self_replication_pca_space.jpg){:style="display:block; width:80%" .align-center}

View File

@ -0,0 +1,18 @@
---
layout: single
title: "Deep Audio Baselines"
categories: research
tags: deep-learning audio-classification paralinguistics speech-analysis
excerpt: "Deep learning audio baseline for Interspeech 2019 ComParE challenge."
header:
teaser: /assets/figures/3_deep_neural_baselines_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Deep neural baseline model architecture](/assets/figures/3_deep_neural_baselines.jpg)
{:style="display:block; width:30%" .align-right}
This research, presented as part of the Interspeech 2019 Computational Paralinguistics Challenge (ComParE), specifically addresses the Sleepiness Sub-Challenge. We introduced a robust, end-to-end deep learning methodology designed to serve as a strong baseline for audio classification tasks within the paralinguistics domain.
The core innovation lies in utilizing a deep neural network architecture (e.g., CNNs, potentially combined with recurrent layers) that directly processes raw or minimally processed audio data (such as spectrograms). This end-to-end approach bypasses the need for extensive, task-specific manual feature engineering, which is often a complex and time-consuming aspect of traditional audio analysis pipelines.
Our proposed baseline model achieved performance comparable to established state-of-the-art methods on the sleepiness detection task. Furthermore, the architecture was designed with adaptability in mind, demonstrating its potential applicability to a broader range of audio classification challenges beyond sleepiness detection. This work underscores the power of deep learning to automatically extract relevant features from audio signals for complex paralinguistic tasks. For further details, please consult the publication by {% cite elsner2019deep %}.
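A minimal end-to-end classifier in this spirit might look like the sketch below; the input shape, layer sizes, and two-class head are illustrative assumptions, not the actual baseline architecture:

```python
import torch
from torch import nn

model = nn.Sequential(                        # input: (batch, 1, mels, frames)
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                         # e.g., sleepy vs. non-sleepy
)
logits = model(torch.rand(8, 1, 64, 128))     # batch of 64x128 mel spectrograms
```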

View File

@ -1,11 +0,0 @@
---
layout: single
title: "Deep-Neural Baseline"
categories: research
excerpt: "Introduction a deep baseline for audio classification."
header:
teaser: assets/figures/3_deep_neural_baselines_teaser.jpg
---
![Self-Replication Robustness](\assets\figures\3_deep_neural_baselines.jpg){:style="display:block; width:30%" .align-right}
The study presents an innovative end-to-end deep learning method to identify sleepiness in spoken language, as part of the Interspeech 2019 ComParE challenge. This method utilizes a deep neural network architecture to analyze audio data directly, eliminating the need for specific feature engineering. This approach not only achieves performance comparable to state-of-the-art models but is also adaptable to various audio classification tasks. For more details, refer to the work by {% cite elsner2019deep %}.

View File

@ -1,11 +1,23 @@
---
layout: single
title: "Learning Soccer-Team Vecors"
title: "Soccer Team Vectors"
categories: research
excerpt: "Team market value estimation, similarity search and rankings."
tags: machine-learning representation-learning sports-analytics similarity-search
excerpt: "STEVE learns soccer team embeddings from match data for analysis."
header:
teaser: assets/figures/2_steve_algo.jpg
teaser: /assets/figures/2_steve_algo.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![STEVE Algorithm](\assets\figures\2_steve_algo.jpg){:style="display:block; width:60%" .align-center}
This research introduces **STEVE (Soccer Team Vectors)**, a novel methodology for learning meaningful, real-valued vector representations (embeddings) for professional soccer teams. The primary goal is to capture intrinsic team characteristics and relationships within a continuous vector space, such that teams with similar playing styles, strengths, or performance levels are positioned closely together.
This study introduces STEVE (Soccer Team Vectors), a novel method for generating real-valued vectors representing soccer teams, organized so that similar teams are proximate in vector space. Utilizing publicly available match data, these vectors facilitate various machine learning applications, notably excelling in team market value estimation and enabling effective similarity search and team ranking. STEVE demonstrates superior performance over competing models in these domains. For further details, please consult the work by {% cite muller2020soccer %}.
Leveraging widely available public data from soccer matches (e.g., results and, potentially, richer performance statistics), STEVE employs machine learning techniques to generate these low-dimensional team vectors.
The utility of these learned representations is demonstrated through several downstream applications:
![STEVE algorithm overview](/assets/figures/2_steve_algo.jpg){:style="display:block; width:60%" .align-right}
* **Team Market Value Estimation:** The vectors serve as effective features for predicting team market values, outperforming baseline models.
* **Similarity Search:** The vector space allows for efficient identification of teams similar to a given query team based on proximity.
* **Team Ranking:** The embeddings provide a basis for generating data-driven team rankings.
Across these application domains, STEVE demonstrated superior performance compared to competing approaches evaluated in the study. This work provides a valuable tool for quantitative analysis in sports analytics, enabling various machine learning tasks related to team comparison and prediction. For a comprehensive description of the methodology and results, please refer to the publication by {% cite muller2020soccer %}.
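Once team vectors are learned, similarity search reduces to a nearest-neighbor query in the embedding space. The sketch below illustrates this with random stand-in vectors and placeholder team names:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
team_vecs = rng.normal(size=(100, 32))        # stand-in for learned embeddings
teams = [f"team_{i:02d}" for i in range(100)]  # placeholder team names

index = NearestNeighbors(n_neighbors=6, metric="cosine").fit(team_vecs)
_, neighbors = index.kneighbors(team_vecs[[7]])  # query with one team's vector
print([teams[i] for i in neighbors[0][1:]])      # its 5 most similar teams
```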

View File

@ -0,0 +1,31 @@
---
layout: single
title: "3D Primitive Segmentation"
categories: research
tags: computer-vision 3d-processing point-clouds segmentation deep-learning genetic-algorithms
excerpt: "Hybrid method segments/fits primitives in large 3D point clouds."
header:
teaser: /assets/figures/4_point_cloud_segmentation_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
<center>
<img src="/assets/figures/4_point_cloud_pipeline.jpg" alt="Diagram illustrating the hybrid point cloud segmentation pipeline" style="display:block; width:100%">
<figcaption>Overview of the hybrid segmentation and primitive fitting pipeline.</figcaption>
</center><br>
This research addresses challenges in accurately segmenting large-scale 3D point clouds into meaningful geometric primitives, specifically spheres, cylinders, and cuboids. Existing methods often struggle with scalability or robustness when faced with diverse shapes and noisy real-world data.
We propose a novel **hybrid approach** that synergistically combines multiple techniques to overcome these limitations:
1. **Deep Learning Integration:** Supports the initial stages of the pipeline, e.g., pre-segmentation and feature learning, ahead of the fitting steps below.
2. **RANSAC-based Primitive Fitting:** Employs the robust RANSAC algorithm for accurately fitting simpler geometric shapes like spheres and cylinders to subsets of the point cloud.
3. **DBSCAN Clustering:** Applied for grouping remaining points or refining segmentation boundaries, effectively handling noise and varying point densities.
4. **Specialized Genetic Algorithm:** A custom Genetic Algorithm is introduced specifically for the robust detection and fitting of cuboid primitives, which are often challenging for standard fitting methods.
This integrated pipeline demonstrates enhanced stability and robustness compared to methods relying on a single technique. It particularly excels in reconstructing the target primitives from large and complex point sets. The effectiveness of the approach is validated through quantitative performance metrics and qualitative visualizations, with a discussion acknowledging the method's scope and potential limitations. For a detailed technical description and evaluation, please refer to the publication by {% cite friedrich2020hybrid %}.
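To make the clustering stage concrete, the sketch below runs DBSCAN on a synthetic cloud containing a sphere and a well-separated cuboid; the parameters are toy values, and per-cluster primitive fitting would follow:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)  # points on a unit sphere
cuboid = rng.uniform(3.0, 4.0, size=(500, 3))            # offset box of points
points = np.vstack([sphere, cuboid])

labels = DBSCAN(eps=0.4, min_samples=10).fit_predict(points)
print(np.unique(labels))  # two clusters; -1 would mark noise points
# Each cluster would then go to RANSAC / the genetic algorithm for fitting.
```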
<center>
<img src="/assets/figures/4_point_cloud_segmentation.jpg" alt="Example result showing a point cloud segmented into different colored geometric primitives" style="display:block; width:890%">
<figcaption>Example segmentation result demonstrating primitive identification.</figcaption>
</center>

View File

@ -1,14 +0,0 @@
---
layout: single
title: "Point Cloud Segmentation"
categories: research
excerpt: "Segmetation of point clouds into primitive building blocks."
header:
teaser: assets/figures/4_point_cloud_segmentation_teaser.jpg
---
![Point Cloud Segmentation](\assets\figures\4_point_cloud_pipeline.jpg){:style="display:block; width:100%" .align-center}
This paper introduces a hybrid approach for segmenting and fitting solid primitives to 3D point clouds, overcoming limitations in handling large datasets and diverse primitive shapes. By integrating deep learning with RANSAC for primitive fitting, employing DBSCAN for clustering, and utilizing a specialized Genetic Algorithm for cuboid extraction, this method achieves enhanced stability and robustness. It excels in reconstructing spheres, cylinders, and cuboids from large point sets, with performance metrics and visualizations provided to demonstrate its effectiveness, alongside a discussion on its limitations. For more detailed insights, refer to {% cite friedrich2020hybrid %}.
![Point Cloud Segmentation](\assets\figures\4_point_cloud_segmentation.jpg){:style="display:block; width:80%" .align-center}

View File

@ -1,13 +0,0 @@
---
layout: single
title: "Policy Entropy for OOD Classification"
categories: research
excerpt: "PEOC for reliably detecting unencountered states in deep RL"
header:
teaser: assets/figures/6_ood_pipeline.jpg
---
![PEOC Performance](\assets\figures\6_ood_performance.jpg){:style="display:block; width:45%" .align-right}In this work, the development of PEOC, a policy entropy-based classifier for detecting unencountered states in deep reinforcement learning, is proposed. Utilizing the agent's policy entropy as a score, PEOC effectively identifies out-of-distribution scenarios, crucial for ensuring safety in real-world applications. Evaluated against advanced one-class classifiers within procedurally generated environments, PEOC demonstrates competitive performance.
Additionally, a structured benchmarking process for out-of-distribution classification in reinforcement learning is presented, offering a comprehensive approach to evaluating such systems' reliability and effectiveness. {% cite sedlmeier2020policy %}
![PEOC Pipeline](\assets\figures\6_ood_pipeline.jpg){:style="display:block; width:90%" .align-center}

View File

@ -0,0 +1,29 @@
---
layout: single
title: "PEOC OOD Detection"
categories: research
tags: deep-reinforcement-learning out-of-distribution-detection ai-safety anomaly-detection
excerpt: "PEOC uses policy entropy for OOD detection in deep RL."
header:
teaser: /assets/figures/6_ood_pipeline.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Graph comparing PEOC performance against other OOD detection methods](/assets/figures/6_ood_performance.jpg)
{:style="display:block; width:45%" .align-right}
Ensuring the safety and reliability of deep reinforcement learning (RL) agents deployed in real-world environments necessitates the ability to detect when the agent encounters states significantly different from those seen during training (i.e., out-of-distribution or OOD states). This research introduces **PEOC (Policy Entropy-based OOD Classifier)**, a novel and computationally efficient method designed for this purpose.
The core idea behind PEOC is to leverage the entropy of the agent's learned policy as an intrinsic indicator of state familiarity. High policy entropy often correlates with uncertainty, suggesting the agent is in a less familiar or potentially OOD state. PEOC utilizes this readily available metric as a scoring function to distinguish between in-distribution and out-of-distribution inputs.
PEOC's effectiveness was rigorously evaluated within procedurally generated environments, which allow for controlled introduction of novel states. Its performance was benchmarked against several state-of-the-art one-class classification methods adapted for the RL context. The results demonstrate that PEOC achieves competitive performance in identifying OOD states while being simple to implement and integrate into existing deep RL frameworks.
Furthermore, this work contributes a structured benchmarking process specifically designed for evaluating OOD classification methods within the context of reinforcement learning, providing a valuable framework for assessing the reliability of such safety-critical components. For a detailed methodology and evaluation, please refer to the publication by {% cite sedlmeier2020policy %}.
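The scoring function at the heart of PEOC is simply the Shannon entropy of the policy's action distribution; a minimal sketch, with an arbitrary illustrative threshold (in practice calibrated on in-distribution validation data):

```python
import numpy as np

def policy_entropy(action_probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of a policy's action distribution at a state."""
    p = np.clip(action_probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

confident = policy_entropy(np.array([0.90, 0.05, 0.03, 0.02]))  # familiar state
uncertain = policy_entropy(np.array([0.25, 0.25, 0.25, 0.25]))  # uniform policy
threshold = 1.0  # illustrative; would be chosen on validation data
print(confident < threshold, uncertain > threshold)  # True True
```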
<div style="clear: both;"></div>
<figure style="display:block; width:90%; margin: 1em auto; text-align: center;">
<img src="/assets/figures/6_ood_pipeline.jpg" alt="Diagram showing the PEOC pipeline integrated with a deep RL agent" style="display:block; width:90%">
<figcaption>Conceptual pipeline of the PEOC method for OOD detection in deep RL.</figcaption>
</figure>

View File

@ -0,0 +1,31 @@
---
layout: single
title: "AV Meantime Coverage"
categories: research
tags: autonomous-vehicles shared-mobility transportation-systems urban-computing geoinformatics
excerpt: "Analyzing service coverage of parked AVs during downtime ('meantime')."
header:
teaser: /assets/figures/5_meantime_coverage.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
<center>
<img src="/assets/figures/5_meantime_coverage.jpg" alt="Map visualization showing estimated service coverage areas from parked autonomous vehicles" style="display:block; width:80%">
<figcaption>Visualization of estimated service coverage achievable by utilizing parked autonomous vehicles.</figcaption>
</center><br>
This research investigates a potential transitional model towards future transportation systems, focusing on **privately owned shared autonomous vehicles (SAVs)**. The central idea, termed "What to do in the Meantime," explores the feasibility of leveraging these vehicles for ride-sharing services during the significant portions of the day when they are typically parked and idle (e.g., while the owner is at work).
To assess the potential impact and viability of such a model, we developed and applied **two distinct reachability analysis methods**. These methods estimate the geographic area that could be effectively served by SAVs originating from their parking locations within given time constraints.
The analysis was conducted using a real-world dataset representing mobility patterns and parking durations in the greater **Munich metropolitan area**. Key findings reveal the significant influence of spatio-temporal factors on potential service coverage:
* **Time Dependency:** Service potential fluctuates considerably throughout the day, heavily impacted by rush hours which affect travel times and vehicle availability.
* **Location Dependency:** Marked differences in coverage potential were observed between dense urban centers and more dispersed suburban areas.
This study provides quantitative insights into the opportunities and limitations of utilizing the "meantime" of privately owned autonomous vehicles, contributing to the understanding of how future shared mobility systems might evolve. {% cite illium2020meantime %}
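As a toy version of the underlying idea, the reachable area around a parked vehicle can be bounded by a time budget and an assumed average speed; the paper's reachability methods instead account for the road network and time-of-day travel times:

```python
import numpy as np

def coverage_radius_km(budget_min: float, avg_speed_kmh: float = 30.0) -> float:
    """Crude upper bound on the service radius of one parked vehicle."""
    return avg_speed_kmh * budget_min / 60.0

parked = np.array([[48.137, 11.575],   # toy lat/lon positions in Munich
                   [48.150, 11.600]])
radii = [coverage_radius_km(15.0) for _ in parked]  # 15-minute budget
print(radii)  # ~7.5 km per vehicle before road-network constraints
```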
<center>
<img src="/assets/figures/5_meantime_availability.jpg" alt="Graph or map showing the temporal or spatial availability of parked vehicles" style="display:block; width:80%">
<figcaption>Analysis of spatio-temporal availability patterns of potentially shareable parked vehicles.</figcaption>
</center>

View File

@ -1,15 +0,0 @@
---
layout: single
title: "What to do in the Meantime"
categories: research
excerpt: "Service Coverage Analysis for Parked Autonomous Vehicles"
header:
teaser: assets/figures/5_meantime_coverage.jpg
---
![Estimated Service Coverage](\assets\figures\5_meantime_coverage.jpg){:style="display:block; width:80%" .align-center}
This analysis explores the concept of privately owned shared autonomous vehicles as a transitional phase towards a new transportation paradigm. It proposes two reachability analysis methods to assess the impact of utilizing privately owned cars during their typical long parking intervals, such as during an owner's work hours. By applying these methods to a dataset from the Munich area, the study reveals how time and location-dependent factors, like rush hours and urban vs. suburban differences, affect service coverage.
{% cite illium2020meantime %}
![Parked Vehicle Availability](\assets\figures\5_meantime_availability.jpg){:style="display:block; width:80%" .align-center}

View File

@ -1,14 +0,0 @@
---
layout: single
title: "Surgical Mask Detection"
categories: research audio deep-learning
excerpt: "Convolutional Neural Networks and Data Augmentations on Spectrograms"
header:
teaser: assets/figures/7_mask_models.jpg
---
![PEOC Pipeline](\assets\figures\7_mask_mels.jpg){:style="display:block; width:80%" .align-center}
This study assesses the effectiveness of data augmentation in enhancing neural network models for audio data classification, focusing on mel-spectrogram representations. Specifically, it examines the role of data augmentation in improving the performance of convolutional neural networks for detecting the presence of surgical masks from human voice samples, testing across four different network architectures. The findings indicate a significant enhancement in model performance, surpassing many of the existing benchmarks established by the ComParE challenge. For further details, refer to {% cite illium2020surgical %}.
![Models](\assets\figures\7_mask_models.jpg){:style="display:block; width:80%" .align-center}

View File

@ -0,0 +1,26 @@
---
layout: single
title: "Surgical-Mask Detection"
categories: research
tags: audio-classification deep-learning data-augmentation computer-vision paralinguistics
excerpt: "CNN mask detection in speech using augmented spectrograms."
header:
teaser: /assets/figures/7_mask_models.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
This study investigates the efficacy of various **data augmentation techniques** applied directly to **mel-spectrogram representations** of audio data for improving classification performance. The specific task addressed is the detection of surgical mask usage based on human speech signals, a relevant problem in paralinguistics and audio analysis.
We systematically evaluated the impact of data augmentation when training **Convolutional Neural Networks (CNNs)** for this binary classification task. The input to the networks consisted of mel-spectrograms derived from voice samples. The effectiveness of augmentation strategies (such as frequency masking, time masking, or combined approaches like SpecAugment) was assessed across **four different CNN architectures**.
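A SpecAugment-style masking step is straightforward to express directly on the spectrogram; the mask widths below are illustrative hyperparameters:

```python
import numpy as np

def mask_spectrogram(mel, n_freq=8, n_time=16, rng=None):
    """Zero out one random frequency band and one random time span of a
    (freq, time) mel spectrogram (SpecAugment-style masking)."""
    rng = rng or np.random.default_rng()
    out = mel.copy()
    f0 = rng.integers(0, mel.shape[0] - n_freq)
    t0 = rng.integers(0, mel.shape[1] - n_time)
    out[f0:f0 + n_freq, :] = 0.0   # frequency masking
    out[:, t0:t0 + n_time] = 0.0   # time masking
    return out

augmented = mask_spectrogram(np.random.rand(64, 128))
```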
<center>
<img src="/assets/figures/7_mask_mels.jpg" alt="Examples of mel-spectrograms of speech with and without a surgical mask" style="display:block; width:80%">
<figcaption>Mel-spectrogram representations of speech signals used as input for CNNs.</figcaption>
</center><br>
The core finding of this research is that applying appropriate data augmentation directly to the spectrogram inputs significantly enhances the performance and generalization capabilities of the CNN models for surgical mask detection. The augmented models demonstrated improved accuracy and robustness, and notably **surpassed many established benchmark results** from the relevant ComParE (Computational Paralinguistics Challenge) tasks. This highlights the importance of data augmentation as a crucial component in building effective deep learning models for audio classification, particularly when dealing with limited or variable datasets. For a detailed description of the methods and results, please refer to {% cite illium2020surgical %}.
<center>
<img src="/assets/figures/7_mask_models.jpg" alt="Diagrams illustrating the different CNN architectures tested" style="display:block; width:100%">
<figcaption>Overview of the different Convolutional Neural Network architectures evaluated.</figcaption>
</center>

View File

@ -1,12 +0,0 @@
---
layout: single
title: "Anomalous Sound Detection"
categories: research audio deep-learning anomalie-detection
excerpt: "Analysis of Feature Representations for Anomalous Sound Detection"
header:
teaser: assets/figures/8_anomalous_sound_teaser.jpg
---
![Pipeline](\assets\figures\8_anomalous_sound_features.jpg){:style="display:block; width:40%" .align-right}
This study explores the use of pretrained neural networks as feature extractors for detecting anomalous sounds, utilizing these networks to derive semantically rich features for a Gaussian Mixture Model that estimates normality. It examines extractors trained on diverse data domains—images, environmental sounds, and music—applied to industrial noises from machinery. Surprisingly, features based on music data often surpass others, including an autoencoder baseline, suggesting that domain similarity between extractor training and application might not always correlate with performance improvement.
{% cite muller2020analysis %}

View File

@ -0,0 +1,30 @@
---
layout: single
title: "Anomalous Sound Features"
categories: research
tags: anomaly-detection audio-classification deep-learning transfer-learning feature-extraction
excerpt: "Pretrained networks extract features for anomalous industrial sound detection."
header:
teaser: /assets/figures/8_anomalous_sound_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Diagram showing features extracted by different pretrained networks visualized in a latent space](/assets/figures/8_anomalous_sound_features.jpg)
{:style="display:block; width:40%" .align-right}
Detecting anomalous sounds, particularly in industrial settings, is crucial for predictive maintenance and safety. This often involves unsupervised or semi-supervised approaches where models learn a representation of 'normal' sounds. This research explores the effectiveness of leveraging **transfer learning** for this task by using **pretrained deep neural networks** as fixed feature extractors.
The core methodology involves:
1. Taking pretrained networks trained on large datasets from various domains.
2. Using these networks to extract high-level, potentially semantically rich feature vectors from audio signals (specifically, industrial machine noises relevant to challenges like DCASE - Detection and Classification of Acoustic Scenes and Events).
3. Modeling the distribution of features extracted from 'normal' sounds using a **Gaussian Mixture Model (GMM)**.
4. Identifying anomalous sounds as those whose extracted features have low likelihood under the learned normality model.
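Steps 3 and 4 amount to density estimation on the extracted features. A minimal scikit-learn sketch, with random arrays standing in for the pretrained-network features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_feats = rng.normal(size=(500, 128))  # stand-in: features of 'normal' sounds
test_feats = rng.normal(size=(20, 128))     # stand-in: features of sounds to score

# Diagonal covariances keep the normality model well-conditioned in high dimensions.
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(normal_feats)

# Flag samples whose log-likelihood falls below a low percentile of the normal data.
threshold = np.percentile(gmm.score_samples(normal_feats), 1)
is_anomaly = gmm.score_samples(test_feats) < threshold
```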
A key aspect of this study was comparing feature extractors pretrained on distinctly different domains:
* **Images** (e.g., models trained on ImageNet)
* **Environmental Sounds** (e.g., models trained on AudioSet or ESC-50)
* **Music** (e.g., models trained on music tagging datasets)
These were evaluated alongside a baseline autoencoder trained directly on the target machine sound data.
Surprisingly, the results indicated that features derived from networks pretrained on **music data** often yielded the best performance in detecting anomalous industrial sounds, frequently surpassing features from environmental sound models and the autoencoder baseline. This counter-intuitive finding suggests that direct domain similarity between the pretraining dataset and the target application data is not necessarily the primary factor determining the utility of transferred features for anomaly detection. {% cite muller2020analysis %}

View File

@ -1,14 +0,0 @@
---
layout: single
title: "Anomalous Image Transfer"
categories: research audio deep-learning anomalie-detection
excerpt: "Acoustic Anomaly Detection for Machine Sounds based on Image Transfer Learning"
header:
teaser: assets/figures/9_image_transfer_sound_teaser.jpg
---
![Workflow](\assets\figures\9_image_transfer_sound_workflow.jpg){:style="display:block; width:45%" .align-right}
This paper explores acoustic malfunction detection in industrial machinery using transfer learning, specifically leveraging neural networks pretrained on image classification to extract features.
These features, when used with anomaly detection models, outperform traditional convolutional autoencoders in noisy conditions across different machine types. The study highlights the superiority of features from ResNet architectures over AlexNet and Squeezenet, with Gaussian Mixture Models and One-Class Support Vector Machines showing the best performance in detecting anomalies.
{% cite muller2020acoustic %}
![Mels](\assets\figures\9_image_transfer_sound_mels.jpg){:style="display:block; width:85%" .align-center}

View File

@ -0,0 +1,38 @@
---
layout: single
title: "Sound Anomaly Transfer"
categories: research
tags: anomaly-detection audio-classification deep-learning transfer-learning feature-extraction computer-vision
excerpt: "Image nets detect acoustic anomalies in machinery via spectrograms."
header:
teaser: /assets/figures/9_image_transfer_sound_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Workflow diagram showing mel-spectrogram input, feature extraction via image network, and anomaly detection model](/assets/figures/9_image_transfer_sound_workflow.jpg)
{:style="display:block; width:45%" .align-right}
This study investigates an effective approach for **acoustic anomaly detection** in industrial machinery, focusing on identifying malfunctions through sound analysis. The core methodology leverages **transfer learning** by repurposing deep neural networks originally trained for large-scale **image classification** (e.g., on ImageNet) as powerful feature extractors for audio data represented as **mel-spectrograms**.
The process involves:
1. Converting audio signals from machinery into mel-spectrogram images.
2. Feeding these spectrograms into various pretrained image classification networks (specifically comparing **ResNet architectures** against **AlexNet** and **SqueezeNet**) to extract deep feature representations.
3. Training standard anomaly detection models, particularly **Gaussian Mixture Models (GMMs)** and **One-Class Support Vector Machines (OC-SVMs)**, on the features extracted from normal operation sounds.
4. Classifying new sounds as anomalous if their extracted features deviate significantly from the learned normality model.
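A condensed sketch of this pipeline with PyTorch/torchvision and scikit-learn (input sizes and the OC-SVM hyperparameters are illustrative; the paper's exact setup may differ):

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.svm import OneClassSVM

# Pretrained image network, used frozen; dropping the final FC layer
# leaves a (N, 512, 1, 1) pooled feature map. Weights download on first use.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def embed(spectrograms):
    """spectrograms: (N, 1, 224, 224) mel-spectrogram 'images'; the single
    channel is replicated to match the 3-channel ImageNet input."""
    with torch.no_grad():
        feats = extractor(spectrograms.repeat(1, 3, 1, 1))
    return feats.flatten(1).numpy()  # (N, 512)

normal = torch.rand(64, 1, 224, 224)  # stand-ins for normal machine sounds
test = torch.rand(8, 1, 224, 224)

ocsvm = OneClassSVM(nu=0.05, kernel="rbf").fit(embed(normal))
scores = ocsvm.decision_function(embed(test))  # negative scores => anomalous
```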
Key findings from the experiments, conducted across different machine types and noise conditions, include:
* The proposed transfer learning approach significantly **outperforms baseline methods like traditional convolutional autoencoders**, especially in the presence of background noise.
* Features extracted using **ResNet architectures consistently yielded superior anomaly detection performance** compared to those from AlexNet and SqueezeNet.
* **GMMs and OC-SVMs proved highly effective** as anomaly detection classifiers when applied to these transferred features.
<div style="clear: both;"></div>
<center>
<img src="/assets/figures/9_image_transfer_sound_mels.jpg" alt="Examples of mel-spectrograms from normal and anomalous machine sounds" style="display:block; width:85%">
<figcaption>Mel-spectrogram examples of normal vs. anomalous machine sounds.</figcaption>
</center>
This work demonstrates the surprising effectiveness of transferring knowledge from the visual domain to the acoustic domain for anomaly detection, offering a robust and readily implementable method for monitoring industrial equipment. {% cite muller2020acoustic %}

View File

@ -1,15 +0,0 @@
---
layout: single
title: "Acoustic Leak Detection"
categories: research audio deep-learning anomalie-detection
excerpt: "Anomalie based Leak Detection in Water Networks"
header:
teaser: assets/figures/10_water_networks_teaser.jpg
---
![Approach](\assets\figures\10_water_networks_approach.jpg){:style="display:block; width:40%" .align-right}
This study introduces a method for acoustic leak detection in water networks, focusing on energy efficiency and easy deployment. Utilizing recordings from microphones on a municipal water network, various anomaly detection models, both shallow and deep, were trained. The approach mimics human leak detection methods, allowing intermittent monitoring instead of constant surveillance. While detecting nearby leaks proved easy for most models, neural network-based methods excelled at identifying leaks from a distance, showcasing their effectiveness in practical applications.
{% cite muller2021acoustic %}
![Leak-Mels](\assets\figures\10_water_networks_mel.jpg){:style="display:block; width:85%" .align-center}

View File

@ -0,0 +1,30 @@
---
layout: single
title: "Acoustic Leak Detection"
categories: research
tags: anomaly-detection audio-processing deep-learning signal-processing real-world-application
excerpt: "Anomaly detection models for acoustic leak detection in water networks."
header:
teaser: /assets/figures/10_water_networks_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Diagram illustrating the anomaly detection approach for leak detection](/assets/figures/10_water_networks_approach.jpg)
{:style="display:block; width:40%" .align-right}
Detecting leaks in vast municipal water distribution networks is critical for resource conservation and infrastructure maintenance. This study introduces and evaluates an **anomaly detection approach for acoustic leak identification**, specifically designed with **energy efficiency** and **ease of deployment** as key considerations.
The methodology leverages acoustic recordings captured by microphones deployed directly on a section of a real-world **municipal water network**. Instead of requiring continuous monitoring, the proposed system mimics human inspection routines by performing **intermittent checks**, significantly reducing power consumption and data load.
Various **anomaly detection models**, ranging from traditional "shallow" methods (e.g., GMMs, OC-SVMs) to more complex **deep learning architectures** (e.g., autoencoders, potentially CNNs on spectrograms), were trained using data representing normal network operation. These models were then evaluated on their ability to distinguish anomalous sounds indicative of leaks.
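As one representative of the deep end of that spectrum, a reconstruction-based detector can be sketched as follows (layer sizes, training length, and the per-frame features are placeholders, not the paper's configuration):

```python
import torch
from torch import nn

class AE(nn.Module):
    """Tiny fully-connected autoencoder over per-frame spectral features."""
    def __init__(self, d_in=128, d_code=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, 64), nn.ReLU(), nn.Linear(64, d_in))
    def forward(self, x):
        return self.dec(self.enc(x))

normal = torch.rand(1024, 128)  # stand-in for frames from leak-free recordings
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):            # learn to reconstruct 'normal' frames well
    loss = nn.functional.mse_loss(model(normal), normal)
    opt.zero_grad(); loss.backward(); opt.step()

def anomaly_score(frames):
    """High reconstruction error => frame unlike normal operation (possible leak)."""
    with torch.no_grad():
        return ((model(frames) - frames) ** 2).mean(dim=1)
```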
Key findings include:
* Detecting leaks occurring acoustically **nearby** the sensor proved relatively straightforward for most evaluated models.
* **Neural network-based methods demonstrated superior performance** in identifying leaks originating **further away** from the sensor, showcasing their ability to capture more subtle acoustic signatures amidst background noise.
<center>
<img src="/assets/figures/10_water_networks_mel.jpg" alt="Mel-spectrogram examples showing acoustic signatures of normal operation versus leak sounds" style="display:block; width:90%">
<figcaption>Mel-spectrogram visualizations comparing normal sounds and leak-related acoustic patterns.</figcaption>
</center><br>
This research validates the feasibility of using anomaly detection for practical, energy-efficient acoustic leak monitoring in water networks, highlighting the advantages of deep learning techniques for detecting more challenging, distant leaks. {% cite muller2021acoustic %}

View File

@ -1,15 +0,0 @@
---
layout: single
title: "Primate Vocalization Classification"
categories: research audio deep-learning anomalie-detection
excerpt: "A Deep and Recurrent Architecture"
header:
teaser: assets/figures/11_recurrent_primate_workflow.jpg
---
![Leak-Mels](\assets\figures\11_recurrent_primate_workflow.jpg){:style="display:block; width:40%" .align-right}
This study introduces a deep, recurrent architecture for classifying primate vocalizations, leveraging bidirectional Long Short-Term Memory networks and advanced techniques like normalized softmax and focal loss. Bayesian optimization was used to fine-tune hyperparameters, and the model was evaluated on a dataset of primate calls from an African sanctuary, showcasing the effectiveness of acoustic monitoring in wildlife conservation efforts.
{% cite muller2021deep %}
![Approach](\assets\figures\11_recurrent_primate_results.jpg){:style="display:block; width:85%" .align-center}

View File

@ -0,0 +1,30 @@
---
layout: single
title: "Primate Vocalization Classification"
categories: research
tags: deep-learning audio-classification bioacoustics conservation-technology recurrent-neural-networks
excerpt: "Deep BiLSTM classifies primate vocalizations for acoustic wildlife monitoring."
header:
teaser: /assets/figures/11_recurrent_primate_workflow.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Workflow diagram showing audio input, feature extraction, BiLSTM processing, and classification output](/assets/figures/11_recurrent_primate_workflow.jpg)
{:style="display:block; width:40%" .align-right}
Acoustic monitoring offers a powerful, non-invasive tool for wildlife conservation, enabling the study and tracking of animal populations through their vocalizations. This research focuses on improving the automated classification of **primate vocalizations**, a challenging task due to call variability and environmental noise.
We propose a novel **deep, recurrent neural network architecture** specifically designed for this purpose. The core of the model utilizes **bidirectional Long Short-Term Memory (BiLSTM) networks**, which are adept at capturing temporal dependencies within the audio signals (represented, for example, as spectrograms or MFCCs).
To further enhance classification performance, particularly in potentially imbalanced datasets common in bioacoustics, the architecture incorporates advanced techniques:
* **Normalized Softmax:** Improves calibration and potentially robustness.
* **Focal Loss:** Addresses class imbalance by focusing training on hard-to-classify examples.
Hyperparameter tuning, a critical step for optimizing deep learning models, was systematically performed using **Bayesian optimization**.
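The architectural core fits in a short PyTorch sketch; the layer sizes, class count, last-time-step pooling, and the plain focal-loss variant below are illustrative simplifications rather than the paper's exact configuration:

```python
import torch
from torch import nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features=64, hidden=128, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)
    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # logits from the last time step

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: scales cross-entropy by (1 - p_t)^gamma,
    focusing training on hard, misclassified examples."""
    log_p = torch.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    return (-(1 - log_pt.exp()) ** gamma * log_pt).mean()

logits = BiLSTMClassifier()(torch.rand(4, 100, 64))
loss = focal_loss(logits, torch.tensor([0, 1, 2, 3]))
```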
<center>
<img src="/assets/figures/11_recurrent_primate_results.jpg" alt="Graph or table showing classification accuracy or confusion matrix for primate calls" style="display:block; width:90%">
<figcaption>Performance results demonstrating classification accuracy.</figcaption>
</center><br>
The model's effectiveness was evaluated on a challenging real-world dataset comprising diverse primate calls recorded at an **African wildlife sanctuary**. The results demonstrate the capability of the proposed deep recurrent architecture for accurate primate vocalization classification, underscoring the potential of advanced deep learning techniques combined with automated acoustic monitoring for practical wildlife conservation efforts. {% cite muller2021deep %}

View File

@ -1,15 +0,0 @@
---
layout: single
title: "Mel-Vision Transformer"
categories: research audio deep-learning anomalie-detection
excerpt: "Attention based audio classification on Mel-Spektrograms"
header:
teaser: assets/figures/12_vision_transformer_teaser.jpg
---
![Approach](\assets\figures\12_vision_transformer_models.jpg){:style="display:block; width:80%" .align-center}
This work utilizes the vision transformer model on mel-spectrogram audio data, enhanced by mel-based data augmentation and sample weighting, to achieve notable performance in the ComParE21 challenge, surpassing many single model baselines. The introduction of overlapping vertical patching and the analysis of parameter configurations further refine the approach, demonstrating the model's adaptability and effectiveness in audio processing tasks.
{% cite illium2021visual %}

View File

@ -0,0 +1,29 @@
---
layout: single
title: "Audio Vision Transformer"
categories: research
tags: deep-learning audio-classification computer-vision attention-mechanisms transformers
excerpt: "Vision Transformer on spectrograms for audio classification, with data augmentation."
header:
teaser: /assets/figures/12_vision_transformer_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
This research explores the application of the **Vision Transformer (ViT)** architecture, originally designed for image processing, to the domain of audio classification by operating on **mel-spectrogram representations**. The ViT's attention mechanisms offer a potentially powerful alternative to convolutional approaches for capturing relevant patterns in spectrogram data.
<center>
<img src="/assets/figures/12_vision_transformer_models.jpg" alt="Diagram illustrating the Vision Transformer architecture adapted for mel-spectrogram input" style="display:block; width:80%">
<figcaption>Adapting the Vision Transformer architecture for processing mel-spectrograms.</figcaption>
</center><br>
Key aspects of the methodology include:
* **ViT Adaptation:** Applying the ViT model directly to mel-spectrograms treated as images.
* **Data Augmentation:** Employing **mel-based data augmentation** techniques (e.g., SpecAugment variants) to improve model robustness and generalization.
* **Sample Weighting:** Utilizing sample weighting strategies to address potential class imbalances or focus on specific aspects of the dataset.
* **Patching Strategy:** Introducing and evaluating an **overlapping vertical patching** method, potentially better suited for capturing temporal structures in spectrograms compared to standard non-overlapping patches.
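The overlapping vertical patching can be illustrated with `torch.Tensor.unfold`; patch width and stride below are arbitrary example values:

```python
import torch

def vertical_patches(spec, patch_width=16, stride=8):
    """Cut a mel-spectrogram (n_mels, n_frames) into overlapping, full-height
    'vertical' patches; stride < patch_width produces the overlap."""
    cols = spec.unfold(dimension=1, size=patch_width, step=stride)
    # cols: (n_mels, n_patches, patch_width) -> (n_patches, n_mels * patch_width)
    return cols.permute(1, 0, 2).reshape(cols.shape[1], -1)

spec = torch.rand(128, 640)      # 128 mel bins x 640 time frames
tokens = vertical_patches(spec)  # (79, 2048), one flattened token per patch
# Each token would then be linearly projected and fed to the transformer encoder.
```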
The effectiveness of this "Mel-Vision Transformer" approach was demonstrated within the context of the **ComParE 2021 (Computational Paralinguistics Challenge)**. The proposed model achieved notable performance, **surpassing many established single-model baseline results** on the challenge tasks.
Furthermore, the study includes an analysis of different parameter configurations and architectural choices, providing insights into optimizing ViT models for audio processing tasks. This work showcases the adaptability and potential of transformer architectures, particularly ViT, for effectively tackling audio classification challenges. {% cite illium2021visual %}

View File

@ -1,13 +0,0 @@
---
layout: single
title: "Self-Replication Goals"
categories: research audio deep-learning anomalie-detection
excerpt: "Combining replication and auxiliary task for neural networks."
header:
teaser: assets/figures/13_sr_teaser.jpg
---
![Self-Replicator Analysis](\assets\figures\13_sr_analysis.jpg){:style="display:block; width:80%" .align-center}
This research delves into the innovative concept of self-replicating neural networks capable of performing secondary tasks alongside their primary replication function. By employing separate input/output vectors for dual-task training, the study demonstrates that additional tasks can complement and even stabilize self-replication. The dynamics within an artificial chemistry environment are explored, examining how varying action parameters affect the collective learning capability and how a specially developed 'guiding particle' can influence peers towards achieving goal-oriented behaviors, illustrating a method for steering network populations towards desired outcomes.
{% cite gabor2021goals %}

View File

@ -0,0 +1,26 @@
---
layout: single
title: "Tasked Self-Replication"
categories: research
tags: artificial-life complex-systems neural-networks self-organization multi-task-learning
excerpt: "Self-replicating networks perform tasks, exploring stabilization in artificial chemistry."
header:
teaser: /assets/figures/13_sr_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
Building upon the concept of self-replicating neural networks, this research explores the integration of **auxiliary functional goals** alongside the primary objective of self-replication. The aim is to create networks that can not only reproduce their own weights but also perform useful computations or interact meaningfully with an environment simultaneously.
<center>
<img src="/assets/figures/13_sr_analysis.jpg" alt="Analysis graphs or visualizations related to dual-task self-replicating networks" style="display:block; width:80%">
<figcaption>Analysis of networks balancing self-replication and auxiliary tasks.</figcaption>
</center><br>
The study introduces a methodology for **dual-task training**, utilizing distinct input/output vectors to manage both the replication process and the execution of a secondary task. A key finding is that the presence of an auxiliary task does not necessarily hinder self-replication; instead, it can sometimes **complement and even stabilize** the replication dynamics.
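As a toy illustration of that separate input/output arrangement (not the paper's exact procedure), one can give a single small network a weight-value slot plus a task slot on the input side, and a replication output plus a task output on the output side:

```python
import torch
from torch import nn

net = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 2))

def self_application(net):
    """Feed each of the net's own weights (with a positional code) through
    the net itself; output slot 0 predicts the weight's new value."""
    params = torch.cat([p.detach().flatten() for p in net.parameters()])
    pos = torch.linspace(-1, 1, params.numel()).unsqueeze(1)
    inp = torch.cat([pos, pos**2, pos**3, params.unsqueeze(1),
                     torch.zeros_like(pos)], dim=1)   # last column: task slot
    return net(inp)[:, 0], params

opt = torch.optim.SGD(net.parameters(), lr=0.01)
for _ in range(100):
    pred_w, w = self_application(net)
    rep_loss = nn.functional.mse_loss(pred_w, w)      # reproduce own weights
    x = torch.rand(32, 5)                             # toy auxiliary regression
    aux_loss = nn.functional.mse_loss(net(x)[:, 1], x.sum(dim=1))
    opt.zero_grad(); (rep_loss + aux_loss).backward(); opt.step()
```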
Further investigations were conducted within the framework of an **"artificial chemistry" environment**, where populations of these dual-task networks interact:
* The impact of varying **action parameters** (related to the secondary task) on the collective learning or emergent behavior of the network population was examined.
* A concept of a specially designed **"guiding particle"** network was introduced. This network influences its peers, demonstrating a mechanism for potentially steering the population's evolution towards desired goal-oriented behaviors.
This work provides insights into how functional complexity can be integrated with self-replication in computational systems, offering potential pathways for developing more sophisticated artificial life models and exploring guided evolution within network populations. {% cite gabor2021goals %}

View File

@ -1,14 +0,0 @@
---
layout: single
title: "Anomaly Detection in RL"
categories: research audio deep-learning anomalie-detection
excerpt: "Towards Anomaly Detection in Reinforcement Learning"
header:
teaser: assets/figures/14_ad_rl_teaser.jpg
---
This work investigates anomaly detection (AD) within reinforcement learning (RL), highlighting its importance in safety-critical applications due to the complexity of sequential decision-making in RL. The study criticizes the simplicity of current AD research scenarios in RL, connecting AD to lifelong RL and generalization, discussing their interrelations and potential mutual benefits. It identifies non-stationarity as a crucial area for future AD research in RL, proposing a formal approach through the block contextual Markov decision process and outlining practical requirements for future studies.
{% cite muller2022towards %}
![Formal Definition](\assets\figures\14_ad_rl.jpg){:style="display:block; width:50%" .align-center}

View File

@ -0,0 +1,26 @@
---
layout: single
title: "RL Anomaly Detection"
categories: research
tags: reinforcement-learning anomaly-detection ai-safety lifelong-learning generalization
excerpt: "Perspective on anomaly detection challenges and future in reinforcement learning."
header:
teaser: /assets/figures/14_ad_rl_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
Anomaly Detection (AD) is crucial for the safe deployment of Reinforcement Learning (RL) agents, especially in safety-critical applications where encountering unexpected or out-of-distribution situations can lead to catastrophic failures. This work provides a perspective on the state and future directions of AD research specifically tailored for the complexities inherent in RL.
The paper argues that current AD research within RL often relies on overly simplified scenarios that do not fully capture the challenges of sequential decision-making under uncertainty. It establishes important conceptual connections between AD and other critical areas of RL research:
* **Lifelong Reinforcement Learning:** AD is framed as a necessary component for agents that must continually adapt to changing environments and tasks. Detecting anomalies signals the need for adaptation or learning updates.
* **Generalization:** The ability to detect anomalies is closely related to an agent's generalization capabilities; anomalies often represent situations outside the agent's learned experience manifold.
The study highlights **non-stationarity** (i.e., changes in the environment dynamics or reward structure over time) as a particularly critical and under-explored challenge for AD in RL. To address this formally, the paper proposes utilizing the framework of **block contextual Markov decision processes (BCMDPs)** as a suitable model for defining and analyzing non-stationary anomalies.
<center>
<img src="/assets/figures/14_ad_rl.jpg" alt="Mathematical formalism or diagram related to the block contextual MDP framework" style="display:block; width:50%">
<figcaption>Formalizing non-stationary anomalies using the BCMDP framework.</figcaption>
</center>
Finally, it outlines practical requirements and desiderata for future research in this area, advocating for more rigorous evaluation protocols and benchmark environments to advance the development of robust and reliable AD methods for RL agents. {% cite muller2022towards %}

View File

@ -1,15 +0,0 @@
---
layout: single
title: "Self-Replication in NNs"
categories: research audio deep-learning anomalie-detection
excerpt: "Elaboration and journal article of the initial paper"
header:
teaser: assets/figures/15_sr_journal_teaser.jpg
---
![Children Evolution](\assets\figures\15_sr_journal_children.jpg){:style="display:block; width:65%" .align-center}
This study extends previous work on self-replicating neural networks, focusing on backpropagation as a mechanism for facilitating non-trivial self-replication. It delves into the robustness of these self-replicators against noise and introduces artificial chemistry environments to observe emergent behaviors. Additionally, it provides a detailed analysis of fixpoint weight configurations and their attractor basins, enhancing the understanding of self-replication dynamics within neural networks.
{% cite gabor2022self %}
![Noise Levels](\assets\figures\15_noise_levels.jpg){:style="display:block; width:65%" .align-center}

View File

@ -0,0 +1,30 @@
---
layout: single
title: "Extended Self-Replication"
categories: research
tags: artificial-life complex-systems neural-networks self-organization dynamical-systems
excerpt: "Journal extension: self-replication, noise robustness, emergence, dynamical system analysis."
header:
teaser: /assets/figures/15_sr_journal_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
<center>
<img src="/assets/figures/15_sr_journal_children.jpg" alt="Visualization showing the evolution or diversity of 'child' networks generated through self-replication" style="display:block; width:65%">
<figcaption>Analyzing the lineage and diversity in populations of self-replicating networks.</figcaption>
</center><br>
This journal article provides an extended and more in-depth exploration of self-replicating neural networks, building upon earlier foundational work (Gabor et al., 2019). The research further investigates the use of **backpropagation-like mechanisms** not for typical supervised learning, but as an effective means to enable **non-trivial self-replication**, where networks learn to reproduce their own connection weights.
Key extensions and analyses presented in this work include:
* **Robustness Analysis:** A systematic evaluation of the self-replicating networks' resilience and stability when subjected to various levels of **noise** during the replication process.
* **Artificial Chemistry Environments:** Further development and analysis of simulated environments where populations of self-replicating networks interact, leading to observable **emergent collective behaviors** and ecosystem dynamics.
* **Dynamical Systems Perspective:** A detailed theoretical analysis of the self-replication process viewed as a dynamical system. This includes identifying **fixpoint weight configurations** (networks that perfectly replicate themselves) and characterizing their **attractor basins** (the regions in weight space from which networks converge towards a specific fixpoint).
<center>
<img src="/assets/figures/15_noise_levels.jpg" alt="Graph showing the impact of different noise levels on self-replication fidelity or population dynamics" style="display:block; width:65%">
<figcaption>Investigating the influence of noise on the self-replication process.</figcaption>
</center><br>
By delving deeper into the mechanisms, robustness, emergent properties, and underlying dynamics, this study significantly enhances the understanding of how self-replication can be achieved and analyzed within neural network models, contributing valuable insights to the fields of artificial life and complex systems. {% cite gabor2022self %}

View File

@ -0,0 +1,34 @@
---
layout: single
title: "Organism Network Emergence"
categories: research
tags: artificial-life complex-systems neural-networks self-organization emergent-computation
excerpt: "Self-replicating networks collaborate forming higher-level Organism Networks with emergent functionalities."
header:
teaser: /assets/figures/16_on_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
This research investigates the transition from simple self-replication to higher levels of organization by exploring how populations of basic, self-replicating neural network units can form **"Organism Networks" (ONs)** through **collaboration and emergent differentiation**. Moving beyond the replication of individual networks, the focus shifts to the collective dynamics and functional capabilities that arise when these units interact within a shared environment (akin to an "artificial chemistry").
<center>
<img src="/assets/figures/16_on_architecture.jpg" alt="Diagram showing individual self-replicating units interacting to form a larger Organism Network structure" style="display:block; width:65%">
<figcaption>Conceptual architecture of an Organism Network emerging from interacting self-replicators.</figcaption>
</center><br>
The core hypothesis is that through local interactions and potentially shared environmental feedback, initially homogeneous populations of self-replicators can spontaneously develop specialized roles or structures, leading to a collective entity with capabilities exceeding those of individual units.
![Visualization potentially related to network robustness, differentiation, or communication channels.](/assets/figures/16_on_dropout.jpg)
{:style="display:block; width:45%" .align-right}
Key aspects explored in this work include:
* **Mechanisms for Collaboration:** Investigating how communication or resource sharing between self-replicating units can be established and influence collective behavior.
* **Emergent Differentiation:** Analyzing scenarios where units within the population begin to specialize, adopting different internal states (weight configurations) or functions, analogous to cellular differentiation in biological organisms.
* **Formation of Structure:** Studying how interactions lead to stable spatial or functional structures within the population, forming the basis of the Organism Network.
* **Functional Advantages:** Assessing whether these emergent ONs exhibit novel collective functionalities or improved problem-solving capabilities compared to non-interacting populations. (The role of dropout, as suggested by the image, might relate to promoting robustness or specialization within this context).
This study bridges the gap between single-unit self-replication and the emergence of complex, multi-unit systems in artificial life research, offering insights into how collaborative dynamics can lead to higher-order computational structures. For more detailed insights, refer to {% cite illium2022constructing %}.
<div style="clear: both;"></div>

View File

@ -1,17 +0,0 @@
---
layout: single
title: "Organism Networks"
categories: research audio deep-learning anomalie-detection
excerpt: "Constructing ON from Collaborative Self-Replicators"
header:
teaser: assets/figures/16_on_teaser.jpg
---
![Organism Network Architecture](\assets\figures\16_on_architecture.jpg){:style="display:block; width:65%" .align-center}
This work delves into the concept of self-replicating neural networks, focusing on how backpropagation facilitates the emergence of complex, self-replicating behaviors.
![Dropout](\assets\figures\16_on_dropout.jpg){:style="display:block; width:45%" .align-right}
By evaluating different network types, the study highlights the natural emergence of robust self-replicators and explores their behavior in artificial chemistry environments.
A significant extension over a previous version, this research offers a deep analysis of fixpoint weight configurations and their attractor basins, advancing the understanding of neural network self-replication.
For more detailed insights, refer to {% cite illium2022constructing %}.

View File

@ -0,0 +1,40 @@
---
layout: single
title: "Voronoi Data Augmentation"
categories: research
tags: data-augmentation computer-vision deep-learning convolutional-neural-networks
excerpt: "VoronoiPatches improves CNN robustness via non-linear recombination augmentation."
header:
teaser: /assets/figures/17_vp_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
Data augmentation is essential for improving the performance and generalization of Convolutional Neural Networks (CNNs), especially when training data is limited. This research introduces **VoronoiPatches (VP)**, a novel data augmentation algorithm based on the principle of **non-linear recombination** of image information.
<center>
<img src="/assets/figures/17_vp_lion.jpg" alt="Example of an image augmented with VoronoiPatches, showing polygon patches blended onto a lion image" style="display:block; width:85%">
<figcaption>Visual example of the VoronoiPatches augmentation applied to an image.</figcaption>
</center><br>
Unlike traditional methods that often apply uniform transformations or cutout regions, VP operates by:
1. Generating a random layout of points within an image.
2. Creating a Voronoi diagram based on these points, partitioning the image into unique, convex polygon-shaped patches.
3. Redistributing information between these patches or blending information across patch boundaries (specific mechanism detailed in the paper).
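One plausible reading of these steps, sketched in NumPy (the paper's exact redistribution/blending rule may differ; here a randomly chosen cell is refilled from a shifted copy of the image):

```python
import numpy as np

def voronoi_patch(img, n_points=30, rng=None):
    """img: (H, W, C) array. Returns a copy with one Voronoi cell redistributed."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    seeds = np.stack([rng.integers(0, h, n_points),
                      rng.integers(0, w, n_points)], axis=1)

    # Voronoi partition: label every pixel with its nearest seed point.
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy[..., None] - seeds[:, 0]) ** 2 + (xx[..., None] - seeds[:, 1]) ** 2
    labels = d2.argmin(axis=-1)

    out = img.copy()
    mask = labels == rng.integers(n_points)        # pick one random cell
    dy, dx = rng.integers(-h // 4, h // 4), rng.integers(-w // 4, w // 4)
    out[mask] = img[np.roll(mask, (dy, dx), axis=(0, 1))]  # refill from shifted source
    return out

augmented = voronoi_patch(np.random.rand(64, 64, 3))
```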
This approach potentially allows for smoother transitions between augmented regions and the original image compared to sharp cutout methods. The core idea is to encourage the CNN to learn more robust features by exposing it to varied, non-linearly recombined versions of the input data.
---
<div style="text-align: center; margin: 1em 0; font-weight: bold; color: #D4AF37;">
:trophy: Best Poster Award - ICAART 2023 :trophy:<br>
<small>(<a href="https://icaart.scitevents.org/PreviousAwards.aspx?y=2024#2023" target="_blank" rel="noopener noreferrer">Official Link</a>)</small>
</div>
---
Evaluations demonstrate that VoronoiPatches can effectively **reduce model variance and combat overfitting**. Comparative studies indicate that VP **outperforms several existing state-of-the-art data augmentation techniques** in improving the robustness and generalization performance of CNN models on unseen data across various benchmarks. {% cite illium2023voronoipatches %}
<center>
<img src="/assets/figures/17_vp_results.jpg" alt="Graphs showing performance comparison (e.g., accuracy, loss) of VoronoiPatches against other augmentation methods" style="display:block; width:90%">
<figcaption>Comparative results illustrating the performance benefits of VoronoiPatches.</figcaption>
</center><br>

View File

@ -1,16 +0,0 @@
---
layout: single
title: "Voronoi Patches"
categories: research audio deep-learning anomalie-detection
excerpt: "Evaluating A New Data Augmentation Method"
header:
teaser: assets/figures/17_vp_teaser.jpg
---
![Organism Network Architecture](\assets\figures\17_vp_lion.jpg){:style="display:block; width:85%" .align-center}
This study introduces VoronoiPatches (VP), a novel data augmentation algorithm that enhances Convolutional Neural Networks' performance by using non-linear recombination of image information. VP distinguishes itself by utilizing small, convex polygon-shaped patches in random layouts to redistribute information within an image, potentially smoothing transitions between patches and the original image. This method has shown to outperform existing data augmentation techniques in reducing model variance and overfitting, thus improving the robustness of CNN models on unseen data. {% cite illium2022voronoipatches %}
:trophy: Our work was awarded the [Best Poster Award](https://icaart.scitevents.org/PreviousAwards.aspx?y=2024#2023) at ICAART 2023 :trophy:
![Dropout](\assets\figures\17_vp_results.jpg){:style="display:block; width:90%" .align-center}

View File

@ -0,0 +1,30 @@
---
layout: single
title: "Emergent Social Dynamics"
categories: research
tags: artificial-life complex-systems neural-networks self-organization emergent-behavior predictive-coding
excerpt: "Artificial chemistry networks develop predictive models via surprise minimization."
header:
teaser: /assets/figures/18_surprised_soup_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
This research extends the study of **artificial chemistry** systems populated by neural network "particles," focusing on the emergence of complex behaviors driven by **social interaction** rather than explicit programming. Building on systems where particles may exhibit self-replication, we introduce interactions based on principles of **predictive processing and surprise minimization** (akin to the Free Energy Principle).
![Schematic diagram illustrating interacting neural network particles in the 'social soup'](/assets/figures/18_surprised_soup_schematic.jpg)
{:style="display:block; width:40%" .align-right}
Specifically, particles are equipped with mechanisms enabling them to **recognize and build predictive models of their peers' behavior**. The learning process is driven by the minimization of prediction error, or "surprise," incentivizing particles to accurately anticipate the actions or state changes of others within the "soup."
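A toy sketch of this drive between two 'particles' (purely illustrative; the paper's interaction scheme is richer):

```python
import torch
from torch import nn

peer = nn.Linear(8, 8)        # the particle being observed (kept fixed here)
observer = nn.Linear(8, 8)    # learns a predictive model of the peer
opt = torch.optim.SGD(observer.parameters(), lr=0.05)

for _ in range(500):
    probe = torch.rand(16, 8)              # shared stimuli from the 'soup'
    with torch.no_grad():
        peer_out = peer(probe)             # observed peer behavior
    surprise = nn.functional.mse_loss(observer(probe), peer_out)
    opt.zero_grad(); surprise.backward(); opt.step()
# Low residual 'surprise' means the observer now anticipates its peer.
```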
Key observations from this setup include:
* The emergence of **stable behavioral patterns and population dynamics** purely from these local, predictive interactions. Notably, these emergent patterns often resemble the stability observed in systems where self-replication was an explicitly trained objective.
* The introduction of a unique **"catalyst" particle** designed to exert evolutionary pressure on the system, demonstrating how external influences or specialized agents can shape the collective dynamics.
<center>
<img src="/assets/figures/18_surprised_soup_trajec.jpg" alt="Trajectories or state space visualization of the particle population dynamics over time" style="display:block; width:90%">
<figcaption>Visualization of particle trajectories or population dynamics within the 'social soup'.</figcaption>
</center>
This study highlights how complex, seemingly goal-directed social behaviors and stable ecosystem structures can emerge from simple, local rules based on mutual prediction and surprise minimization among interacting agents, offering insights into the self-organization of complex adaptive systems. {% cite zorn23surprise %}

View File

@ -1,15 +0,0 @@
---
layout: single
title: "Social NN-Soup"
categories: research audio deep-learning anomalie-detection
excerpt: "Social interaction based on surprise minimization"
header:
teaser: assets/figures/18_surprised_soup_teaser.jpg
---
![Social Soup Schematics](\assets\figures\18_surprised_soup_schematic.jpg){:style="display:block; width:40%" .align-right}
This research explores artificial chemistry systems with neural network particles that exhibit self-replication. Introducing interactions that enable these particles to recognize and predict each other's behavior, the study observes emergent behaviors akin to stability patterns previously seen in explicit self-replication training. A unique catalyst particle introduces evolutionary pressure, demonstrating how 'social' interactions among particles can lead to complex, emergent outcomes.
{% cite zorn23surprise %}
![Soup Trajectories](\assets\figures\18_surprised_soup_trajec.jpg){:style="display:block; width:90%" .align-center}

View File

@ -1,16 +0,0 @@
---
layout: single
title: "Binary Presorting"
categories: research audio deep-learning anomalie-detection
excerpt: "Improving primate sounds classification by sublabeling"
header:
teaser: assets/figures/19_binary_primates_teaser.jpg
---
![Multiclass Training Pipeline](\assets\figures\19_binary_primates_pipeline.jpg){:style="display:block; width:40%" .align-right}
This study advances machine learning applications in wildlife observation by introducing a sophisticated approach to audio classification. By meticulously relabeling subsegments of MEL spectrograms, it significantly refines the process of multi-class classification, crucial for identifying various primate species from audio recordings. Employing convolutional neural networks alongside innovative data augmentation techniques, the methodology showcases remarkable enhancements in classification performance. When applied to the demanding ComparE 2021 dataset, this approach not only achieved substantially higher accuracy and UAR scores over existing baselines but also marked a significant stride in the field of bioacoustics research, demonstrating the potential of machine learning to overcome challenges presented by datasets with weak labeling, varying lengths, and poor signal-to-noise ratios.
{% cite koelle23primate %}
![Thresholding](\assets\figures\19_binary_primates_thresholding.jpg){:style="display:block; width:70%" .align-center}
![Thresholding](\assets\figures\19_binary_primates_results.jpg){:style="display:block; width:70%" .align-center}

View File

@ -0,0 +1,36 @@
---
layout: single
title: "Primate Subsegment Sorting"
categories: research
tags: bioacoustics audio-classification deep-learning data-labeling signal-processing
excerpt: "Binary subsegment presorting improves noisy primate sound classification."
header:
teaser: /assets/figures/19_binary_primates_teaser.jpg
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Diagram illustrating the multi-class training pipeline incorporating subsegment relabeling](/assets/figures/19_binary_primates_pipeline.jpg)
{:style="display:block; width:40%" .align-right}
Automated acoustic classification plays a vital role in wildlife monitoring and bioacoustics research. This study introduces a sophisticated pre-processing and training strategy to significantly enhance the accuracy of multi-class audio classification, specifically targeting the identification of different primate species from field recordings.
A key challenge in bioacoustics is dealing with datasets containing weak labels (where calls of interest occupy only a portion of a labeled segment), varying segment lengths, and poor signal-to-noise ratios (SNR). Our approach addresses this by:
1. **Subsegment Analysis:** Processing audio recordings represented as **MEL spectrograms**.
2. **Refined Labeling:** Meticulously **relabeling subsegments** within the spectrograms. This "binary presorting" step effectively identifies and isolates the actual vocalizations of interest within longer, weakly labeled recordings.
3. **CNN Training:** Training **Convolutional Neural Networks (CNNs)** on these refined, higher-quality subsegment inputs.
4. **Data Augmentation:** Employing innovative **data augmentation techniques** suitable for spectrogram data to further improve model robustness.
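A sketch of how such presorting might look in code (window width, stride, threshold, and the stand-in scorer are placeholders, not the paper's trained binary model):

```python
import numpy as np

def presort_subsegments(mel, clip_label, score_fn, width=64, stride=32, thr=0.5):
    """mel: (n_mels, n_frames). score_fn maps an (n_mels, width) window to a
    'vocalization present' score; confident windows inherit the clip's label."""
    windows, labels = [], []
    for t0 in range(0, mel.shape[1] - width + 1, stride):
        win = mel[:, t0:t0 + width]
        if score_fn(win) >= thr:           # binary presorting step
            windows.append(win)
            labels.append(clip_label)      # subsegment gets the species label
    return windows, labels

# Stand-in scorer: mean energy as a crude voice-activity proxy.
wins, labs = presort_subsegments(np.random.rand(128, 500), clip_label=3,
                                 score_fn=lambda w: w.mean())
```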
<center>
<img src="/assets/figures/19_binary_primates_thresholding.jpg" alt="Visualization related to the thresholding or selection process for subsegment labeling" style="display:block; width:70%">
<figcaption>Thresholding or selection criteria for subsegment refinement.</figcaption>
</center><br>
The effectiveness of this methodology was evaluated on the challenging **ComParE 2021 Primate dataset**. The results demonstrate remarkable improvements in classification performance, achieving substantially higher accuracy and Unweighted Average Recall (UAR) scores compared to existing baseline methods.
<center>
<img src="/assets/figures/19_binary_primates_results.jpg" alt="Graphs or tables showing improved classification results (accuracy, UAR) compared to baselines" style="display:block; width:70%">
<figcaption>Comparative performance results on the ComParE 2021 dataset.</figcaption>
</center><br>
This work represents a significant advancement in handling difficult, real-world bioacoustic data, showcasing how careful data refinement prior to deep learning model training can dramatically enhance classification outcomes. {% cite koelle23primate %}

View File

@ -0,0 +1,36 @@
---
layout: single
title: "Aquarium MARL Environment"
categories: research
tags: multi-agent-reinforcement-learning MARL simulation emergence complex-systems
excerpt: "Aquarium: Open-source MARL environment for predator-prey studies."
header:
teaser: /assets/figures/20_aquarium.png
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Diagram illustrating the multi-agent reinforcement learning cycle within the Aquarium environment](/assets/figures/20_aquarium.png){:style="display:block; width:40%" .align-right}
The study of complex interactions using Multi-Agent Reinforcement Learning (MARL), particularly **predator-prey dynamics**, often requires specialized simulation environments. To streamline research and avoid redundant development efforts, we introduce **Aquarium**: a versatile, open-source MARL environment specifically designed for investigating predator-prey scenarios and related **emergent behaviors**.
Key Features of Aquarium:
* **Framework Integration:** Built upon and seamlessly integrates with the popular **PettingZoo API**, allowing researchers to readily apply existing MARL algorithm implementations (e.g., from Stable-Baselines3, RLlib).
* **Physics-Based Movement:** Simulates agent movement on a two-dimensional, continuous plane with edge-wrapping boundaries, incorporating basic physics for more realistic interactions.
* **High Customizability:** Offers extensive configuration options for:
* **Agent-Environment Interactions:** Observation spaces, action spaces, and reward functions can be tailored to specific research questions.
* **Environmental Parameters:** Key dynamics like agent speeds, prey reproduction rates, predator starvation mechanisms, sensor ranges, and more are fully adjustable.
* **Visualization & Recording:** Includes a resource-efficient visualizer and supports video recording of simulation runs, facilitating qualitative analysis and understanding of agent behaviors.
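In use, such a PettingZoo-style (AEC) environment is driven roughly as below; the `aquarium` import path, constructor arguments, and agent naming are hypothetical placeholders, and random actions stand in for trained policies:

```python
from aquarium import aquarium_v0  # hypothetical package/module name

env = aquarium_v0.env(n_prey=8, n_predators=1)  # constructor args illustrative
env.reset(seed=42)

def shared_policy(agent, obs):
    # Stand-in for a single PPO policy evaluated for every prey agent.
    return env.action_space(agent).sample()

for agent in env.agent_iter():
    obs, reward, terminated, truncated, info = env.last()
    if terminated or truncated:
        action = None                              # PettingZoo convention
    elif agent.startswith("prey"):
        action = shared_policy(agent, obs)         # one set of weights for all prey
    else:
        action = env.action_space(agent).sample()  # placeholder predator behavior
    env.step(action)
```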
<div style="display: flex; align-items: center; justify-content: center;">
<center>
<img src="/assets/figures/20_observation_vector.png" alt="Diagram detailing the construction of the observation vector for an agent" style="display:inline-table; width:85%">
<figcaption>Construction details of the agent observation vector.</figcaption>
</center>
<center>
<img src="/assets/figures/20_capture_statistics.png" alt="Graphs showing average captures or rewards per prey agent under different training regimes" style="display:inline-table; width:100%">
<figcaption>Performance metrics (e.g., average captures/rewards) comparing training strategies.</figcaption>
</center>
</div>
To demonstrate its capabilities, we conducted preliminary studies using **Proximal Policy Optimization (PPO)** to train multiple prey agents learning to evade a predator within Aquarium. Consistent with findings in existing MARL literature, our results showed that training agents with **individual policies led to suboptimal performance**, whereas utilizing **parameter sharing** among prey agents significantly improved coordination, sample efficiency, and overall evasion success. {% cite kolle2024aquarium %}

View File

@ -1,18 +0,0 @@
---
layout: single
title: "Aquarium"
categories: research MARL reinforcement-learning multi-agent
excerpt: "Exploring Predator-Prey Dynamics in multi-agent reinforcement-learning"
header:
teaser: assets/figures/20_aquarium.png
---
![Multi-Agent Reinforcement Learning Cycle](\assets\figures\20_aquarium.png){:style="display:block; width:40%" .align-right}
Recent advances in multi-agent reinforcement learning have enabled the modeling of complex interactions between agents in simulated environments. In particular, predator-prey dynamics have garnered significant interest, and various simulations have been adapted to meet unique requirements. To avoid further time-intensive development efforts, we introduce *Aquarium*, a versatile multi-agent reinforcement learning environment designed for studying predator-prey interactions and emergent behavior. *Aquarium* is open-source and seamlessly integrates with the PettingZoo framework, allowing for a quick start using established algorithm implementations. It features physics-based agent movement on a two-dimensional, edge-wrapping plane. Both the agent-environment interactions (observations, actions, rewards) and environmental parameters (agent speed, prey reproduction, predator starvation, and more) are fully customizable. In addition to providing a resource-efficient visualization, *Aquarium* supports video recording, facilitating a visual understanding of agent behavior.
To showcase the environment's capabilities, we conducted preliminary studies using proximal policy optimization (PPO) to train multiple prey agents to evade a predator. Consistent with existing literature, we found that individual learning leads to worse performance, while parameter sharing significantly improves coordination and sample efficiency.
{% cite kolle2024aquarium %}
![Construction of the Observation Vector](\assets\figures\20_capture_statistics.png){:style="display:block; width:70%" .align-center}
![Average captures and rewards per prey agent](\assets\figures\20_observation_vector.png){:style="display:block; width:70%" .align-center}

View File

@ -1,18 +0,0 @@
---
layout: single
title: "MAS Emergence"
categories: research multi-agent reinforcement-learning safety emergence
excerpt: "A safety perspective on emergence in multi-agent reinforcement-learning"
header:
teaser: assets/figures/21_coins_teaser.png
---
![Evaluation Environments](\assets\figures\21_envs.png){:style="display:block; width:40%" .align-right}
Emergent effects can occur in multi-agent systems (MAS), where decision-making is decentralized and based on local information. These effects may range from minor deviations in behavior to catastrophic system failures. To formally define these phenomena, we identify misalignments between the global inherent specification (the true specification) and its local approximation (e.g., the configuration of distinct reward components or observations). Leveraging established safety concepts, we develop a framework for understanding these emergent effects. To demonstrate the resulting implications, we examine two highly configurable gridworld scenarios, where inadequate specifications lead to unintended behavior deviations when derived independently. Acknowledging that a global solution may not always be practical, we propose adjusting the underlying parameterizations to mitigate these issues, thereby improving system alignment and reducing the risk of emergent failures.
{% cite altmann2024emergence %}
![Instances of emergent behavior](\assets\figures\21_coins.png){:style="display:block; width:70%" .align-center}
![Blocking behavior](\assets\figures\21_blocking.png){:style="display:block; width:70%" .align-center}

View File

@ -0,0 +1,31 @@
---
layout: single
title: "MAS Emergence Safety"
categories: research
tags: multi-agent-systems MARL AI-safety emergence system-specification
excerpt: "Formalized MAS emergence misalignment; proposed safety mitigation strategies."
header:
teaser: /assets/figures/21_coins_teaser.png
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
![Diagrams of the gridworld environments used for evaluation](/assets/figures/21_envs.png)
{:style="display:block; width:40%" .align-right}
Multi-Agent Systems (MAS), particularly those employing decentralized decision-making based on local information (common in MARL), can exhibit **emergent effects**. These phenomena, arising from complex interactions, range from minor behavioral quirks to potentially catastrophic system failures, posing significant **safety challenges**.
This research provides a framework for understanding and mitigating undesirable emergence from a **safety perspective**. We propose a formal definition: emergent effects arise from **misalignments between the *global inherent specification*** (the intended overall system goal or behavior) **and its *local approximation*** used by individual agents (e.g., distinct reward components, limited observations).
<center>
<img src="/assets/figures/21_coins.png" alt="Visualization showing agents exhibiting emergent coin-collecting behavior" style="display:block; width:70%">
<figcaption>Example of emergent behavior (e.g., coin hoarding) due to specification misalignment.</figcaption>
</center><br>
Leveraging established concepts from system safety engineering, we analyze how such misalignments can lead to deviations from intended global behavior. To illustrate the practical implications, we examine two highly configurable gridworld scenarios. These demonstrate how inadequate or independently derived local specifications (rewards/observations) can predictably result in unintended emergent behaviors, such as resource hoarding or inefficient coordination.
<center>
<img src="/assets/figures/21_blocking.png" alt="Visualization showing agents exhibiting emergent blocking behavior" style="display:block; width:60%">
<figcaption>Example of emergent behavior (e.g., mutual blocking) due to specification misalignment.</figcaption>
</center><br>
Recognizing that achieving a perfectly aligned global specification might be impractical in complex systems, we propose strategies focused on **adjusting the underlying local parameterizations** (e.g., reward shaping, observation design) to mitigate harmful emergence. By carefully tuning these local components, system alignment can be improved, reducing the risk of emergent failures and enhancing overall safety. {% cite altmann2024emergence %}

View File

@ -1,30 +1,51 @@
---
layout: single
title: "Lecture: Computer Architectures"
title: "Computer Architecture TA"
categories: teaching
excerpt: "Assisting to manage a lecture about the technical foundations of computer science."
excerpt: "TA & Coordinator, LMU Computer Architecture course."
header:
teaser: assets/images/teaching/computer_gear.png
teaser: /assets/images/teaching/computer_gear.png
role: Teaching Assistant, Tutorial Coordinator
skills: Team Management, Curriculum Support, Exercise Design, Examination Coordination, Large-Scale Course Organization
duration: Summer Semesters 2018 & 2019
---
![logo](\assets\images\teaching\computer_gear.png){: .align-left style="padding:0.1em; width:5em"}
During my tenure as a Ph.D. student, I was involved in organizing a bachelor's lecture titled "Rechnerarchitektur" with approximately 600 students per semester.
My responsibilities encompassed managing a team of 10-12 tutors to distribute the workload evenly, designing weekly graded exercise sheets, and overseeing the written examination process. The curriculum introduced students to the fundamental concepts of computer science and architecture, covering a wide range of topics from data representation to the intricacies of machine and assembly language programming, under the leadership of Prof. Dr. Linnhoff-Popien.
![Computer Gear Icon](/assets/images/teaching/computer_gear.png){: .align-left style="padding:0.1em; width:5em" alt="Computer Gear Icon"}
During my doctoral studies at LMU Munich, I served as a Teaching Assistant and took on significant organizational responsibilities for the undergraduate lecture **"Rechnerarchitektur" (Computer Architecture)**. This foundational course, led by Prof. Dr. Linnhoff-Popien, catered to approximately 600 students each semester.
### Contents
<div class="table-right">
My primary responsibilities focused on managing the tutorial component and supporting the overall lecture delivery:
| [Summer semester 2019](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/rechnerarchitektur-sose19/)| [Summer semester 2018](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/rechnerarchitektur-sose18/)|
* **Tutorial Coordination:** Managed a team of 10-12 student tutors, including recruitment, training, task assignment, and ensuring equitable workload distribution to effectively support the large student cohort.
* **Curriculum Support:** Designed weekly exercise sheets, including theoretical problems and practical programming tasks (e.g., assembly language), aligned with the lecture content. Coordinated the grading process across the tutor team.
* **Examination Management:** Contributed to the design, organization, and supervision of the final written examinations, ensuring smooth execution for a large number of participants.
The course provided students with a comprehensive introduction to the fundamental principles of computer science and architecture.
---
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 25%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Course Materials</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0;">
<li><a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/rechnerarchitektur-sose19/" target="_blank" rel="noopener noreferrer">Summer Semester 2019</a></li>
<li><a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/rechnerarchitektur-sose18/" target="_blank" rel="noopener noreferrer">Summer Semester 2018</a></li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(75% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Course Content Overview</h4>
Key topics covered included:
<ul>
<li>Data Representation (Numbers, Text, Images, Audio, Video, Programs as Bits)</li>
<li>Data Storage, Transfer, Error Detection, and Correction</li>
<li>Boolean Algebra and Logic Gates</li>
<li>Digital Circuit Design and Switching Networks</li>
<li>Number Representation and Computer Arithmetic</li>
<li>Combinational and Sequential Logic (Switching Functions, Combinational Networks, Sequential Circuits)</li>
<li>The Von Neumann Architecture Model</li>
<li>Abstract Machine Models</li>
<li>Machine and Assembly Language Programming</li>
<li>Introduction to Quantum Computing Concepts</li>
</ul>
</div>
<div style="clear: both;"></div>
</div>
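For flavor, here is a hypothetical mini-exercise in the spirit of the data representation and arithmetic topics above (illustrative Python, not original course material):

```python
# Hypothetical exercise: read one 8-bit pattern as unsigned and as a
# two's-complement signed integer.

def twos_complement(value: int, bits: int = 8) -> int:
    """Decode a raw bit pattern as a signed two's-complement integer."""
    if value & (1 << (bits - 1)):   # sign bit set -> negative number
        return value - (1 << bits)
    return value

pattern = 0b11101100                # raw 8-bit pattern
print(pattern)                      # 236 (unsigned reading)
print(twos_complement(pattern))     # -20 (two's-complement reading)
```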
- Representation as bits: (numbers, text, images, audio, video, programs).
- Storage and Transfer of data, error detection and correction
- Boolean algebra
- Processing of data: circuit design, switching networks
- Number representation and arithmetic
- Switching functions, switching networks, sequential circuits
- Von Neumann model
- Machine model
- Machine and assembly language programming
- Introduction to Quantum Computing

View File

@ -1,31 +1,61 @@
---
layout: single
title: "IOT: Devices & Connectivity"
title: "IoT Practical Exercise"
categories: teaching
tags: teaching iot
excerpt: "Teaching to plan and develope distributed mobile apps for Android as a team."
tags: teaching iot mqtt python influxdb distributed-systems practical-course
excerpt: "Designed/taught IoT practical (MQTT, Python) for ~200 students."
header:
teaser: assets/images/teaching/server.png
teaser: /assets/images/teaching/server.png
role: Practical Course Instructor/Designer
skills: Curriculum Design (Practical Exercise), IoT Protocols (MQTT), Time Series Databases (InfluxDB), Python Programming, Live Coding, Large Group Instruction
duration: Winter Semester 2018/19
---
![logo](\assets\images\teaching\server.png){: .align-left style="padding:0.1em; width:5em"}
In the context of the lecture [Internet of Things (IoT)](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/iot-ws1819/), my task was to come up with a practical exercise that could be implemented within the scope of 1-2 classes. We went with a typical [MQTT](https://mqtt.org/)-based communication approach, which incorporated an [InfluxDB](https://www.influxdata.com/) backend while simulating some high-frequency sensors.
The task was to implement all of this from scratch in [Python](https://www.python.org/), which was taught in a separate [lecture](/teaching/Python/).
![IOT Influx Pipeline](\assets\figures\iot_inflex_pipeline.png){:style="display:block; margin-left:auto; margin-right:auto; padding: 2em;"}
This practical course was held in front of about 200 students in winter 2018.
![Server Icon](/assets/images/teaching/server.png){: .align-left style="padding:0.1em; width:5em" alt="Server/Network Icon"}
As part of the lecture **[Internet of Things (IoT): Devices, Connectivity, and Services](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/iot-ws1819/)**, I was responsible for designing and conducting a practical programming exercise suitable for completion within one to two class sessions. This exercise targeted approximately 200 students during the Winter Semester 2018/19.
### Contents
The goal was to provide hands-on experience with fundamental IoT communication patterns. The chosen approach involved:
- Arduino and Raspberry Pi
- Wearables and ubiquitous computing
- Metaheuristics for optimization problems
- Edge/fog/cloud computing and storage,
- Scalable algorithms and approaches
- Spatial data mining,
- Information retrieval and mining
- Blockchain and digital consensus
- Combinatorial optimization in practice
- Predictive maintenance systems
- Smart IoT applications
- Cyber security
- Web of Things
* **Communication Protocol:** Implementing a typical publish/subscribe system using the **[MQTT](https://mqtt.org/)** protocol.
* **Data Persistence:** Storing simulated sensor data in an **[InfluxDB](https://www.influxdata.com/)** time-series database backend.
* **Sensor Simulation:** Generating high-frequency data streams to mimic real-world sensor behavior.
* **Implementation Language:** Requiring students to implement the entire pipeline from scratch using **[Python](https://www.python.org/)**. Foundational Python skills were covered in a [separate preparatory course](/teaching/Python/).
<center>
<img src="/assets/figures/iot_inflex_pipeline.png" alt="Diagram showing simulated sensors publishing via MQTT to a broker, which is subscribed to by an InfluxDB logger" style="max-width: 80%;">
<figcaption>Conceptual pipeline for the MQTT-InfluxDB practical exercise.</figcaption>
</center><br>
The exercise aimed to solidify theoretical concepts discussed in the main lecture by applying them in a practical, albeit simulated, IoT scenario.
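For illustration, a minimal sketch of the publisher side of such a pipeline, assuming a local Mosquitto broker and the `paho-mqtt` package (1.x-style API); topic and sensor names are hypothetical, not the original exercise code:

```python
# Minimal sensor-simulation publisher (sketch, assumptions as noted above).
import json
import random
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)       # local MQTT broker assumed
client.loop_start()                     # handle network traffic in background

try:
    while True:
        # Simulate a high-frequency temperature sensor.
        reading = {"sensor": "temp-01",
                   "value": 20.0 + random.gauss(0, 0.5),
                   "ts": time.time()}
        client.publish("sensors/temperature", json.dumps(reading))
        time.sleep(0.01)                # roughly 100 messages per second
except KeyboardInterrupt:
    client.loop_stop()
```

A matching subscriber would subscribe to `sensors/#`, decode each message, and persist it to InfluxDB, for example via the Python `influxdb` client's `write_points` method.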
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Associated Lecture Topics</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<li>Arduino and Raspberry Pi</li>
<li>Wearables & Ubiquitous Computing</li>
<li>Edge/Fog/Cloud Computing</li>
<li>Scalable Algorithms</li>
<li>Spatial Data Mining</li>
<li>Blockchain & Digital Consensus</li>
<li>Predictive Maintenance</li>
<li>Smart IoT Applications</li>
<li>Cyber Security</li>
<li>Web of Things</li>
<!-- Note: This list represents the broader lecture, not just the practical exercise -->
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Practical Exercise Focus</h4>
The hands-on session concentrated specifically on:
<ul>
<li>Understanding the MQTT Publish/Subscribe pattern.</li>
<li>Implementing MQTT clients (publishers/subscribers) in Python.</li>
<li>Interfacing with InfluxDB for time-series data storage using Python libraries.</li>
<li>Simulating basic sensor data streams.</li>
<li>Integrating components into a functional pipeline.</li>
</ul>
This practical work provided direct experience related to the broader lecture themes of IoT connectivity, data handling, and application development.
</div>
<div style="clear: both;"></div>
</div>

View File

@ -1,14 +1,44 @@
---
layout: single
title: "Lecture: Python 101"
title: "Python 101 Course"
categories: teaching
tags: teaching coding
excerpt: "Teaching the basics of python."
tags: teaching python programming introductory-course curriculum-development
excerpt: "Co-developed/taught intensive introductory Python course for 200 students."
header:
teaser: assets/images/teaching/py.png
teaser: /assets/images/teaching/py.png
role: Co-Instructor, Course Co-Developer
skills: Python Programming (Fundamentals), Curriculum Development, Teaching, Practical Exercise Design, Large Group Instruction
duration: Winter Semester 2018/19 (4 sessions)
---
![logo](\assets\images\teaching\py.png){: .align-left style="padding:0.1em; width:5em"}
During the winter semester of 2018, as part of the [IOT](/teaching/IOT/) lecture series, we conducted a "Python 101" course. This extensive introduction to [`Python`](https://www.python.org/), which I co-developed and co-taught, spanned four classes and reached approximately 200 students.
![Python Logo](/assets/images/teaching/py.png){: .align-left style="padding:0.1em; width:5em" alt="Python Logo"}
In preparation for the practical exercises within the [Internet of Things (IoT) lecture series](/teaching/IOT/), we identified the need for foundational programming skills among the student cohort. Consequently, during the Winter Semester 2018/19, I **co-developed and co-taught** an intensive introductory course focused on the **[Python programming language](https://www.python.org/)**.
In addition to theoretical lessons, we incorporated a practical component to enhance students' programming skills in Python.
This "Python 101" module, delivered over four dedicated class sessions to approximately 200 students, was designed to equip them with the essential programming concepts required for the subsequent [IoT practical exercise](/teaching/IOT/).
The course balanced theoretical instruction with hands-on practical components, ensuring students possessed the Python skills needed to engage successfully with the more complex programming tasks in the main IoT lecture's practical sessions.
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Key Topics Covered</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<li>Basic Syntax and Operators</li>
<li>Data Types (Integers, Floats, Strings, Lists, Dictionaries)</li>
<li>Control Flow (If/Else, For/While)</li>
<li>Functions and Scope</li>
<li>Basic Input/Output</li>
<li>Modules and Libraries Intro</li>
<li>Debugging Fundamentals</li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Course Structure</h4>
The curriculum included:
<ul>
<li><b>Theoretical Lessons:</b> Covering core Python syntax, data types, control flow, functions, and basic programming principles.</li>
<li><b>Practical Application:</b> Incorporating programming exercises designed to reinforce theoretical knowledge and build practical coding proficiency.</li>
</ul>
The focus was on providing the essential toolkit for tackling subsequent IoT-related programming tasks.
</div>
<div style="clear: both;"></div>
</div>
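To give a sense of the targeted level, a hypothetical warm-up exercise combining several of the covered basics (illustrative, not original course material):

```python
# Hypothetical warm-up: data types, control flow, and a small function.

def summarize(readings):
    """Return count, minimum, and mean of a list of numbers."""
    total = 0.0
    for value in readings:              # loop over a list
        total += value
    return {"count": len(readings),
            "min": min(readings),
            "mean": total / len(readings)}

print(summarize([21.5, 19.8, 20.3, 22.1]))
# {'count': 4, 'min': 19.8, 'mean': 20.925}
```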

View File

@ -1,26 +1,53 @@
---
layout: single
title: "Seminar: TIMS"
title: "TIMS Seminar"
categories: teaching
excerpt: "Teaching bachelor students how to do research scientifically as a team."
excerpt: "Supervised student research, writing, presentation in Mobile/Distributed Systems, ML, QC."
header:
teaser: assets/images/teaching/thesis.png
teaser: /assets/images/teaching/thesis.png
role: Seminar Supervisor / Teaching Assistant
skills: Research Mentoring, Scientific Writing Guidance, Presentation Coaching, Academic Assessment, Topic Curation
duration: 2020 - 2023 (Multiple Semesters)
---
![logo](\assets\images\teaching\thesis.png){: .align-left style="padding:0.1em; width:5em"}
The seminar focuses on mobile and distributed systems, with recent iterations emphasizing machine learning and quantum computing, reflecting the chair's main research areas.
![Thesis Icon](/assets/images/teaching/thesis.png){: .align-left style="padding:0.1em; width:5em" alt="Thesis/Paper Icon"}
As part of my teaching responsibilities at the Chair for Mobile and Distributed Systems (LMU Munich), I regularly supervised the **"Trends in Mobile and Distributed Systems" (TIMS)** seminar series for both Bachelor and Master students.
### Content
The seminar was designed to introduce students to the process of scientific research and academic work. Each semester focused on specific cutting-edge topics within the chair's main research areas, primarily **Mobile and Distributed Systems**, with recent iterations emphasizing **Machine Learning** and **Quantum Computing**.
<div class="align-right">
The core objectives and structure involved guiding students through:
| Summer semester | Winter semester |
| --- | --- |
| [2023](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose23/)| --- |
| [2022](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose22/)| [2022](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122-2/) |
| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose21/)| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122-2/) |
| --- |[2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-wise2021/)|
* **Topic Exploration:** Selecting and defining a research topic within the semester's theme.
* **Literature Review:** Conducting thorough searches and critically analyzing relevant scientific papers.
* **Scientific Writing:** Structuring and writing a formal academic seminar paper summarizing their findings.
* **Presentation Skills:** Preparing and delivering a scientific presentation to their peers and instructors.
* **Academic Discourse:** Actively participating in discussions and providing constructive feedback on others' work.
</div>The seminar aims to enhance scientific working techniques through a dedicated course on presentation and working methods, complemented by individual presentation coaching and feedback.
To support student development, the seminar included dedicated sessions on scientific working methods, presentation techniques, and individual coaching sessions with personalized feedback. The final assessment considered the quality of the written paper, the clarity and delivery of the presentation, and active participation throughout the seminar.
The final grade reflects the quality of academic work, presentation skills, and active seminar participation.
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Past Seminar Iterations</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<li><strong>Summer 2023:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose23/" target="_blank" rel="noopener noreferrer">TIMS</a></li>
<li><strong>Winter 22/23:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-ws2223/" target="_blank" rel="noopener noreferrer">TIMS</a></li>
<li><strong>Summer 2022:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose22/" target="_blank" rel="noopener noreferrer">TIMS</a></li>
<li><strong>Winter 21/22:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122-2/" target="_blank" rel="noopener noreferrer">TIMS</a></li>
<li><strong>Summer 2021:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-sose21/" target="_blank" rel="noopener noreferrer">TIMS</a></li>
<li><strong>Winter 20/21:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-trends-in-mobilen-und-verteilten-systemen-wise2021/" target="_blank" rel="noopener noreferrer">TIMS</a></li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Seminar Objectives & Structure</h4>
Students were guided through the full research lifecycle:
<ul>
<li>Selecting and refining a research question.</li>
<li>Conducting comprehensive literature surveys.</li>
<li>Structuring and writing an academic seminar paper.</li>
<li>Preparing and delivering effective scientific presentations.</li>
<li>Engaging in critical academic discussions.</li>
</ul>
Dedicated coaching on methodology and presentation skills was provided.
</div>
<div style="clear: both;"></div>
</div>

View File

@ -1,23 +1,51 @@
---
layout: single
title: "Seminar: VTIMS"
title: "VTIMS Advanced Seminar"
categories: teaching
excerpt: "Teaching master students how to do research scientifically as a team."
excerpt: "Supervised Master's advanced research/analysis in Mobile/Distributed Systems, ML, QC."
header:
teaser: assets/images/teaching/thesis_master.png
teaser: /assets/images/teaching/thesis_master.png
role: Seminar Supervisor / Teaching Assistant
skills: Advanced Research Mentoring, Critical Literature Analysis, Scientific Writing Supervision, Presentation Coaching, Academic Assessment (Master Level)
duration: 2020 - 2023 (Multiple Semesters)
---
![logo](\assets\images\teaching\thesis_master.png){: .align-left style="padding:0.1em; width:5em"}
The seminar explores topics in mobile and distributed systems, especially those aligning with the chair's research interests, recently emphasizing machine learning and quantum computing.
![Master Thesis Icon](/assets/images/teaching/thesis_master.png){: .align-left style="padding:0.1em; width:5em" alt="Master Thesis/Paper Icon"}
Complementing the Bachelor-level seminar, I also supervised the **"Vertiefte Themen in Mobilen und Verteilten Systemen" (VTIMS)** seminar, designed specifically for **Master's students** at the LMU Chair for Mobile and Distributed Systems.
### Content
<div class="table-right">
This advanced seminar aimed to deepen students' understanding of cutting-edge research topics and further hone their scientific working methodologies. Similar to TIMS, the thematic focus aligned with the chair's research activities, particularly **Mobile and Distributed Systems**, **Machine Learning**, and **Quantum Computing**.
| Summer semester | Winter semester |
| --- | --- |
| [2023](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose23/)| --- |
| [2022](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose22/)| [2022](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2223/) |
| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose21/)| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122/) |
| --- |[2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2021/)|
Compared to the introductory seminar, VTIMS placed greater emphasis on:
</div>The seminar aims to teach and practice scientific working techniques, offering a course on presentation and working methods plus individual coaching. Grades are based on academic work, presentation quality, and seminar participation.
* **In-depth Analysis:** Requiring students to engage more critically with complex, state-of-the-art research papers.
* **Independent Research:** Fostering greater autonomy in topic definition, literature synthesis, and potentially minor novel contributions or critical perspectives.
* **Advanced Presentation:** Expecting higher standards in the structure, content, and delivery of scientific presentations.
The structure involved guiding students through the research process, including topic selection, intensive literature review, rigorous academic writing, and polished presentation delivery. Dedicated support included sessions on advanced scientific methods, presentation refinement, and personalized coaching. Assessment criteria mirrored those of TIMS but with expectations adjusted for the Master's level, focusing on the depth of academic work, presentation quality, and insightful participation.
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Past Seminar Iterations</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<li><strong>SoSe 2023:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose23/" target="_blank" rel="noopener noreferrer">VTIMS</a></li>
<li><strong>WiSe 22/23:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2223/" target="_blank" rel="noopener noreferrer">VTIMS</a></li>
<li><strong>SoSe 2022:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose22/" target="_blank" rel="noopener noreferrer">VTIMS</a></li>
<li><strong>WiSe 21/22:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2122/" target="_blank" rel="noopener noreferrer">VTIMS</a></li>
<li><strong>SoSe 2021:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-sose21/" target="_blank" rel="noopener noreferrer">VTIMS</a></li>
<li><strong>WiSe 20/21:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/seminar-vertiefte-themen-in-mobilen-und-verteilten-systemen-ws2021/" target="_blank" rel="noopener noreferrer">VTIMS</a></li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Seminar Objectives (Master Level)</h4>
Emphasis was placed on:
<ul>
<li>Critical engagement with state-of-the-art research.</li>
<li>Independent literature synthesis and analysis.</li>
<li>Developing rigorous academic writing skills.</li>
<li>Delivering high-quality scientific presentations.</li>
<li>Contributing insightful perspectives during discussions.</li>
</ul>
Coaching focused on advanced research methodologies.
</div>
<div style="clear: both;"></div>
</div>

View File

@ -1,19 +1,49 @@
---
layout: single
title: "Lecture: Operating Systems"
title: "Operating Systems TA"
categories: teaching
excerpt: "Teaching the inner working of bits and bytes."
excerpt: "TA & Coordinator, Operating Systems lecture (~350 students), system programming."
header:
teaser: assets/images/teaching/computer_os.png
teaser: /assets/images/teaching/computer_os.png
role: Teaching Assistant, Tutorial Coordinator
skills: System Programming Concepts (Processes, Threads, Sync, IPC, Memory Mgmt), Java Programming (Threads), Exercise Design, Examination Support, Tutorial Coordination, Large-Scale Course Organization
duration: Winter 2018/19, Winter 2019/20
---
![logo](\assets\images\teaching\computer_os.png){: .align-left style="padding:0.1em; width:5em"}In the semesters listed below, I assisted in organizing the "Operating Systems" lecture for 300-400 students, coordinating with a team of 10-12 tutors to manage workload.
![Operating System Icon](/assets/images/teaching/computer_os.png){: .align-left style="padding:0.1em; width:5em" alt="Operating System Icon"}
Following the introductory course on Computer Architecture, I also served as a Teaching Assistant and Tutorial Coordinator for the subsequent **"Betriebssysteme" (Operating Systems)** lecture at LMU Munich, taught by Prof. Dr. Linnhoff-Popien. This course typically enrolled 300-400 students per semester.
### Content
My role involved supporting the lecture and managing the associated tutorial sessions:
<div class="table-right">
* **Tutorial Coordination:** Led a team of 10-12 student tutors, organizing their schedules and ensuring consistent support for the students' learning process.
* **Exercise & Examination Support:** Contributed to the development of weekly graded exercise sheets, focusing on practical application of theoretical concepts. Assisted in the preparation and administration of final examinations.
| [Winter semester 2019](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1920/)|
| [Summer semester 2018](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1819/)|
The lecture built upon the foundations laid in [Computer Architecture](/teaching/computer_achitecture/), delving into core operating system and system programming concepts. Practical exercises were primarily implemented in **Java**, making extensive use of the **Thread API** to illustrate concurrency concepts.
</div>We developed weekly graded exercises and exams. This lecture, a continuation of [`Computer Architecture`](teaching/computer_achitecture/), focused on system programming concepts like OS programming, synchronization, process communication, and memory management. Practical exercises used Java, particularly the Thread API, and the course concluded with distributed systems architecture. Taught by Prof. Dr. Linnhoff-Popien at [LMU Munich](https://www.mobile.ifi.lmu.de/).
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Course Materials</h4>
<small>(Semesters involved)</small>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<li><a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1920/" target="_blank" rel="noopener noreferrer">Winter 19/20</a></li>
<li><a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/bs-ws1819/" target="_blank" rel="noopener noreferrer">Winter 18/19</a></li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Key Course Topics</h4>
The curriculum focused on:
<ul>
<li>Operating System Structures and Services</li>
<li>Process Management and Scheduling</li>
<li>Thread Management and Concurrency</li>
<li>Synchronization Mechanisms (Mutexes, Semaphores, Monitors)</li>
<li>Inter-Process Communication (IPC)</li>
<li>Memory Management (Paging, Segmentation, Virtual Memory)</li>
<li>File Systems</li>
<li>Introduction to Distributed Systems Architectures</li>
</ul>
</div>
<div style="clear: both;"></div>
</div>
Practical exercises emphasized concurrent programming using Java Threads.
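The course exercises themselves used Java's Thread API; the following compact Python sketch illustrates the same core synchronization idea (a mutex-protected shared counter) and is an analogy, not original course material:

```python
# Analogous sketch: four threads increment a shared counter under a lock.
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:                      # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                          # 400000, deterministic thanks to the lock
```

Without the lock, the read-modify-write on `counter` races and the final value becomes unpredictable, which is exactly the failure mode the course's synchronization units addressed.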

View File

@ -1,26 +1,51 @@
---
layout: single
title: "IOS - App Developement"
title: "iOS App Development"
categories: teaching
tags: app developement
excerpt: "Teaching to plan and develope distributed mobile apps for IOS as a team."
tags: teaching ios swift mobile-development app-development agile teamwork
excerpt: "Supervised iOS Praktikum: student teams built Swift apps using agile."
header:
teaser: assets/images/teaching/ios.png
teaser: /assets/images/teaching/ios.png
role: Practical Course Supervisor / Teaching Assistant
skills: iOS Development (Swift), Mobile Application Architecture, Client-Server Communication, Wireless Technologies (WiFi/Bluetooth), Location Services (GPS), Agile Methodologies, Team Project Supervision, Code Review
duration: Winter Semester 2019/20
---
![logo](\assets\images\teaching\ios.png){: .align-left style="padding:0.1em; width:5em"}
Leveraging my [Android app development](/teaching/android) experience, I contributed to teaching a mobile app development lab at LMU, focusing on iOS programming with Swift.
![iOS Logo](/assets/images/teaching/ios.png){: .align-left style="padding:0.1em; width:5em" alt="Apple iOS Logo"}
Building upon my experience supervising the [Android development practical course](/teaching/android), I also co-supervised the **"iOS Praktikum"** at LMU Munich. This hands-on lab course focused on native mobile application development for the Apple iOS platform using the **Swift** programming language.
The course had an introductory phase for theoretical basics and practical sessions, followed by a project phase where students worked in groups on their projects, with individual guidance provided.
Specifically, the practical course provided an introduction to programming for the Apple iOS operating system.
The focus was on programming with Swift and an introduction to specific concepts of programming on mobile devices.
The course was structured in two main phases:
### Content
1. **Introductory Phase:** Covered foundational theoretical concepts of iOS development and Swift programming, complemented by practical exercises to solidify understanding.
2. **Project Phase:** Students formed small teams to conceptualize, design, develop, and test their own iOS application ideas. Throughout this phase, I provided regular **individual guidance and technical support** to the teams, assisting with architectural decisions, debugging, and project management.
- Client-Server Architecture
- Usage of wireless local networks (WiFi / Bluetooth)
- GPS and outdoor positioning
- Teamwork and planning of timed projects
- Agile feature development and tools
A significant emphasis was placed not only on technical implementation but also on software engineering practices relevant to mobile development.
This iOS app development seminar was named `IOS Praktikum (IOS)`
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Key Practical Topics</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<li>Swift Programming Fundamentals</li>
<li>iOS SDK and Core Frameworks (UIKit/SwiftUI)</li>
<li>Client-Server Architecture & Networking</li>
<li>Wireless Local Networks (WiFi / Bluetooth)</li>
<li>Location Services (GPS & Outdoor Positioning)</li>
<li>User Interface & Experience Design</li>
<li>Data Persistence</li>
<li>Agile Feature Development</li>
<li>Version Control (Git)</li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Course Focus & Structure</h4>
The Praktikum aimed to provide comprehensive experience in:
<ul>
<li>Native iOS development using Swift.</li>
<li>Understanding specific concepts of mobile device programming (e.g., lifecycle, sensors, connectivity).</li>
<li>Planning and executing timed software projects as a team.</li>
<li>Applying agile development principles and utilizing associated tools.</li>
</ul>
Students progressed from guided exercises to independent team-based project realization.
</div>
<div style="clear: both;"></div>
</div>

View File

@ -1,36 +1,58 @@
---
layout: single
title: Android Apps
title: "MSP Android Course"
categories: teaching
tags: app developement
excerpt: Teaching students to plan and develop distributed mobile apps for Android as a team.
tags: teaching android java kotlin mobile-development app-development agile teamwork
excerpt: "Supervised MSP: teams built Android apps (Java/Kotlin) using agile."
header:
teaser: assets/images/teaching/android.png
teaser: /assets/images/teaching/android.png
role: Practical Course Supervisor / Teaching Assistant
skills: Android Development (Java/Kotlin), Mobile Application Architecture, Client-Server Communication, Wireless Technologies (WiFi/Bluetooth), Location Services (GPS), Agile Methodologies, Team Project Supervision, Code Review
duration: 2018 - 2023 (Multiple Semesters)
---
![logo](\assets\images\teaching\android.png){: .align-left style="padding:0.1em; width:5em"}
Over multiple semesters, my colleagues and I taught mobile app development at LMU.
The course was structured into two phases:
an introductory phase covering theoretical basics and practical skills, followed by a project phase where students worked in groups on their projects, receiving individual guidance.
![Android Logo](/assets/images/teaching/android.png){: .align-left style="padding:0.1em; width:5em" alt="Android Logo"}
### Content
Over several semesters during my time at LMU Munich, I co-supervised the **"Praktikum Mobile und Verteilte Systeme" (MSP)**, often referred to as the Android development practical course. This intensive lab course provided students with hands-on experience in designing, developing, and testing native applications for the **Android** platform, primarily using **Java** and later **Kotlin**.
<div class="table-right">
The course consistently followed a two-phase structure:
| Summer semester | Winter semester |
| --- | --- |
| [2022](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose22/) | [2022](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2223/)|
| [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose21/) | [2021](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2122/)|
| [2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose20/) | [2020](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2021/)|
| [2019](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/msp-sose19/) | [2019](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws1920/)|
| --- | [2018](https://www.mobile.ifi.lmu.de/lehrveranstaltungen/msp-ws1819/)|
1. **Introductory Phase:** Focused on imparting fundamental concepts of Android development, relevant APIs, architectural patterns, and necessary tooling through lectures and guided practical exercises.
2. **Project Phase:** Student teams collaborated on developing a complete Android application based on their own concepts or provided themes. My role involved providing continuous technical mentorship, architectural guidance, code review feedback, and support in project planning and agile execution to each team.
Emphasis was placed on applying software engineering best practices within the context of mobile application development.
<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Past Course Iterations</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<!-- Winter Semesters -->
<li><strong>WiSe 22/23:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2223/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 21/22:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2122/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 20/21:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2021/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 19/20:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws1920/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 18/19:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/msp-ws1819/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<!-- Summer Semesters -->
<li><strong>SoSe 2022:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose22/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>SoSe 2021:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose21/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>SoSe 2020:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose20/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>SoSe 2019:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/msp-sose19/" target="_blank" rel="noopener noreferrer">MSP</a></li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Key Learning Areas</h4>
Students gained practical experience in:
<ul>
<li>Native Android App Development (Java/Kotlin)</li>
<li>Android SDK, Activity/Fragment Lifecycle, UI Design (XML Layouts, later Jetpack Compose)</li>
<li>Client-Server Architecture & Networking (e.g., Retrofit, Volley)</li>
<li>Using Wireless Local Networks (WiFi / Bluetooth APIs)</li>
<li>Implementing Location Services (GPS / Fused Location Provider)</li>
<li>Background Processing and Services</li>
<li>Data Persistence (SharedPreferences, SQLite, Room)</li>
<li>Teamwork and Collaborative Software Development (Git)</li>
<li>Agile Methodologies and Project Management Tools</li>
</ul>
</div>
<div style="clear: both;"></div>
</div>
- Development of Android apps
- Client-Server Architecture
- Usage of wireless local networks (WiFi / Bluetooth)
- GPS and outdoor positioning
- Teamwork and planning of timed projects
- Agile feature development and tools
This course was held as `Praktikum Mobile und Verteilte Systeme (MSP)`

View File

@ -130,10 +130,12 @@ margin: auto;
overflow: hidden;
align-content: center;
display: flex;
max-height: 110px;
img {
width: auto;
margin: auto;
max-height: 90px;
overflow: hidden;
max-width: 200px;
}
}
@ -209,9 +211,9 @@ align-content: center;
.entries-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
column-gap: 12px;
row-gap: 8px;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
column-gap: 16px;
row-gap: 16px;
}
.grid__item {

View File

@ -13,7 +13,7 @@ body {
padding: 0;
color: $text-color;
font-family: $global-font-family;
line-height: 1;
line-height: 1.2;
&.overflow--hidden {
/* when primary navigation is visible, the content in the background won't scroll */

192
about.md
View File

@ -1,125 +1,32 @@
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
layout: single
author_profile: false
title: "about me"
canonical_url: "https://steffenillium.de"
title: "About Me"
canonical_url: "https://steffenillium.de/about/"
permalink: "/about/"
---
<div style="text-align: center;border-collapse: collapse; border: none;" class="table-right">
|![Profile Image](/assets/images/longshot.jpg){: style="margin:0em; padding:0em; width:10em"}|
| **Steffen Illium**<br>*AI Researcher & Data Scientist*<br>*PHD Student @ LMU Munich*|
| **Steffen Illium**<br>*AI Consultant & Data Scientist*|
[Grab my CV here](/assets/illium_cv_censored.pdf){: .btn .btn--success}
</div>
Working at a university encompasses a broad spectrum of roles including teaching theoretical courses, guiding practical sessions, and contributing as both a speaker and organizer for lectures. For further insights into my academic contributions, explore my [teaching](/teaching) and [research](/research) pages.
My academic and professional path reflects a deep-seated interest in transforming data into actionable insights, beginning with a foundation in Geography (BSc, JGU Mainz) and Geo-Informatics (MSc, University of Augsburg), and culminating in a PhD in Computer Science from LMU Munich (summa cum laude). During my doctoral studies and subsequent research assistant role at LMU (2018-2023), I focused on advancing machine learning models for sequential data, self-learning systems, and contributing to foundational research in neural network applications.
My involvement in [projects](/projects) often entailed collaboration with industry partners, where I delved into audio signal processing and honed my skills in training deep neural networks for analyzing sequences and image data. My final year presented a unique opportunity to investigate multi-agent reinforcement learning, focusing on safety and emergent phenomena within integrated industrial settings. This experience, combined with my personal interests, laid the groundwork for the [publications](/publications) I've had the privilege to contribute to and the diverse skill set I've developed over time.
My research frequently involved collaborations with industry partners on projects such as "ErLoWa" (leak detection in water networks) and "AI-Fusion" (emergent dysfunction detection in multi-agent reinforcement learning, MARL), providing extensive experience in areas like audio signal processing, deep learning for sequence and image analysis, and MARL safety and emergence in industrial contexts. This blend of theoretical research and practical application forms the basis of my [publications](/publications) and [research](/research) activities.
Additionally, my colleagues and I pursued what we affectionately termed '*hobbies*', which led to my role as the lead organizer of the [open-source conference](https://openmunich.eu). My journey continued as I assumed responsibility for the editorial office of our [online magazine](https://digitaleweltmagazin.de/), further broadening my professional experience and introducing me to a variety of roles and tools beyond my initial expectations.
Beyond core research, I have actively engaged in teaching and academic service. My experience includes lecturing, supervising practical courses (e.g., iOS, Android development), managing seminars (IMAPS), leading Python crash courses, and mentoring numerous Bachelor's (20) and Master's (9) theses. Details can be found on the [teaching](/teaching) page.
Additionally, I have embraced leadership and organizational roles within the academic community. I served as the lead organizer for the [OpenMunich conference](https://openmunich.eu) (2018-2019) and headed the editorial team for the ["DIGITALE WELT Magazin"](https://digitaleweltmagazin.de/) (2018-2023), broadening my experience in project management, communication, and community building.
---
Roles:
### Research Profiles
<br>
![Teacher](https://img.shields.io/badge/Teacher-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Researcher](https://img.shields.io/badge/Researcher-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Data Scientist](https://img.shields.io/badge/Data-Scientist-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Machine Learning Expert](https://img.shields.io/badge/Machine_Learning-Expert-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![System Administrator](https://img.shields.io/badge/System-Administrator-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Cloud Architect](https://img.shields.io/badge/Cloud-Architect-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Project Manager](https://img.shields.io/badge/Project-Manager-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Editor in Chief](https://img.shields.io/badge/Editor_in-Chief-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
Concepts:
<br>
![Machine Learning](https://img.shields.io/badge/machine_learning-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Classification](https://img.shields.io/badge/classification-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Anomaly Detection](https://img.shields.io/badge/anomaly_detection-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Out of Distribution Detection](https://img.shields.io/badge/OOD-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Reinforcement Learning](https://img.shields.io/badge/reinforcement_learning-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Multi-Agent RL](https://img.shields.io/badge/multi--agent_rl-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Emergence](https://img.shields.io/badge/emergence-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Industrial Safety](https://img.shields.io/badge/industrial_safety-orange?style=for-the-badge&logo=microsoft-office&logoColor=white)
Languages:
<br>
![Python](https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)
![Markdown](https://img.shields.io/badge/Markdown-000000?style=for-the-badge&logo=markdown&logoColor=white)
![PHP](https://img.shields.io/badge/php-%23777BB4.svg?style=for-the-badge&logo=php&logoColor=white)
![LaTeX](https://img.shields.io/badge/latex-%23008080.svg?style=for-the-badge&logo=latex&logoColor=white)
![Shell Script](https://img.shields.io/badge/shell_script-%23121011.svg?style=for-the-badge&logo=gnu-bash&logoColor=white)
![Windows Terminal](https://img.shields.io/badge/Windows%20Terminal-%234D4D4D.svg?style=for-the-badge&logo=windows-terminal&logoColor=white)
![Kotlin](https://img.shields.io/badge/kotlin-%237F52FF.svg?style=for-the-badge&logo=kotlin&logoColor=white)
![JavaScript](https://img.shields.io/badge/javascript-%23323330.svg?style=for-the-badge&logo=javascript&logoColor=%23F7DF1E)
![HTML5](https://img.shields.io/badge/html5-%23E34F26.svg?style=for-the-badge&logo=html5&logoColor=white)
![CSS3](https://img.shields.io/badge/css3-%231572B6.svg?style=for-the-badge&logo=css3&logoColor=white)
![GraphQL](https://img.shields.io/badge/-GraphQL-E10098?style=for-the-badge&logo=graphql&logoColor=white)
![Spring](https://img.shields.io/badge/Spring-6DB33F?style=for-the-badge&logo=spring&logoColor=white)
PY-Libraries:
<br>
![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white)
![scikit-learn](https://img.shields.io/badge/scikit--learn-%23F7931E.svg?style=for-the-badge&logo=scikit-learn&logoColor=white)
![NumPy](https://img.shields.io/badge/numpy-%23013243.svg?style=for-the-badge&logo=numpy&logoColor=white)
![Matplotlib](https://img.shields.io/badge/Matplotlib-%23ffffff.svg?style=for-the-badge&logo=Matplotlib&logoColor=black)
![Pandas](https://img.shields.io/badge/pandas-%23150458.svg?style=for-the-badge&logo=pandas&logoColor=white)
![Plotly](https://img.shields.io/badge/Plotly-%233F4F75.svg?style=for-the-badge&logo=plotly&logoColor=white)
![TensorFlow](https://img.shields.io/badge/TensorFlow-%23FF6F00.svg?style=for-the-badge&logo=TensorFlow&logoColor=white)
![Keras](https://img.shields.io/badge/Keras-%23D00000.svg?style=for-the-badge&logo=Keras&logoColor=white)
Operating Systems:
<br>
![Arch](https://img.shields.io/badge/Arch%20Linux-1793D1?logo=arch-linux&logoColor=fff&style=for-the-badge)
![Debian](https://img.shields.io/badge/Debian-D70A53?style=for-the-badge&logo=debian&logoColor=white)
![Manjaro](https://img.shields.io/badge/Manjaro-35BF5C?style=for-the-badge&logo=Manjaro&logoColor=white)
![Android](https://img.shields.io/badge/Android-3DDC84?style=for-the-badge&logo=android&logoColor=white)
![Windows](https://img.shields.io/badge/Windows-0078D6?style=for-the-badge&logo=windows&logoColor=white)
Databases:
<br>
![MongoDB](https://img.shields.io/badge/MongoDB-4EA94B?style=for-the-badge&logo=mongodb&logoColor=white)
![SQLite](https://img.shields.io/badge/SQLite-07405E?style=for-the-badge&logo=sqlite&logoColor=white)
![SUPABASE](https://img.shields.io/badge/Supabase-181818?style=for-the-badge&logo=supabase&logoColor=white)
Tools & Services:
<br>
![Git](https://img.shields.io/badge/git-%23F05033.svg?style=for-the-badge&logo=git&logoColor=white)
![Wireguard](https://img.shields.io/badge/wireguard-%2388171A.svg?style=for-the-badge&logo=wireguard&logoColor=white)
![Traefik](https://img.shields.io/badge/Traefik-red?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Ansible](https://img.shields.io/badge/ansible-%231A1918.svg?style=for-the-badge&logo=ansible&logoColor=white)
![Docker](https://img.shields.io/badge/docker-%230db7ed.svg?style=for-the-badge&logo=docker&logoColor=white)
![Nginx](https://img.shields.io/badge/nginx-%23009639.svg?style=for-the-badge&logo=nginx&logoColor=white)
![Home Assistant](https://img.shields.io/badge/home%20assistant-%2341BDF5.svg?style=for-the-badge&logo=home-assistant&logoColor=white)
![Kubernetes](https://img.shields.io/badge/kubernetes-%23326ce5.svg?style=for-the-badge&logo=kubernetes&logoColor=white)
![Mosquitto](https://img.shields.io/badge/mosquitto-%233C5280.svg?style=for-the-badge&logo=eclipsemosquitto&logoColor=white)
![Rancher](https://img.shields.io/badge/rancher-%230075A8.svg?style=for-the-badge&logo=rancher&logoColor=white)
![Selenium](https://img.shields.io/badge/-selenium-%43B02A?style=for-the-badge&logo=selenium&logoColor=white)
![DigitalOcean](https://img.shields.io/badge/DigitalOcean-%230167ff.svg?style=for-the-badge&logo=digitalOcean&logoColor=white)
![Longhorn](https://img.shields.io/badge/LONGHORN-%23326ce5.svg?style=for-the-badge&logo=kubernetes&logoColor=white)
![Sealed Secrets](https://img.shields.io/badge/SEALED_SECRETS-%23326ce5.svg?style=for-the-badge&logo=kubernetes&logoColor=white)
![Google Analytics](https://img.shields.io/badge/Google%20Analytics-E37400?style=for-the-badge&logo=google%20analytics&logoColor=white)
Programs:
<br>
![Microsoft Office](https://img.shields.io/badge/Microsoft_Office-D83B01?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Android Studio](https://img.shields.io/badge/Android%20Studio-3DDC84.svg?style=for-the-badge&logo=android-studio&logoColor=white)
![Visual Studio Code](https://img.shields.io/badge/Visual%20Studio%20Code-0078d7.svg?style=for-the-badge&logo=visual-studio-code&logoColor=white)
![PyCharm](https://img.shields.io/badge/pycharm-143?style=for-the-badge&logo=pycharm&logoColor=black&color=black&labelColor=green)
![PhpStorm](https://img.shields.io/badge/phpstorm-143?style=for-the-badge&logo=phpstorm&logoColor=black&color=black&labelColor=darkorchid)
![Obsidian](https://img.shields.io/badge/Obsidian-%23483699.svg?style=for-the-badge&logo=obsidian&logoColor=white)
![Notepad++](https://img.shields.io/badge/Notepad++-90E59A.svg?style=for-the-badge&logo=notepad%2b%2b&logoColor=black)
---
Publications:
<br>
<figure class="research_icons" style="max-width: 70%; text-align:center;">
<a href="https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en">
@ -144,4 +51,81 @@ Publications:
---
Thank you for coming here :wave:
### Core Competencies & Technical Skills
**Roles & Expertise:**
<br>
![Teacher](https://img.shields.io/badge/Teacher-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Researcher](https://img.shields.io/badge/Researcher-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Data Scientist](https://img.shields.io/badge/Data_Scientist-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Machine Learning Expert](https://img.shields.io/badge/Machine_Learning_Expert-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![AI Consultant](https://img.shields.io/badge/AI_Consultant-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![System Administrator](https://img.shields.io/badge/System_Administrator-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Project Management](https://img.shields.io/badge/Project_Management-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Editor in Chief](https://img.shields.io/badge/Editor_in_Chief-blue?style=for-the-badge&logo=microsoft-office&logoColor=white)
**Concepts & Methodologies:**
<br>
![Machine Learning](https://img.shields.io/badge/Machine_Learning-orange?style=for-the-badge)
![Deep Learning](https://img.shields.io/badge/Deep_Learning-orange?style=for-the-badge)
![Data Augmentation](https://img.shields.io/badge/Data_Augmentation-orange?style=for-the-badge)
![Classification](https://img.shields.io/badge/Classification-orange?style=for-the-badge)
![Segmentation](https://img.shields.io/badge/Segmentation-orange?style=for-the-badge)
![Anomaly Detection](https://img.shields.io/badge/Anomaly_Detection-orange?style=for-the-badge)
![Out-of-Distribution Detection](https://img.shields.io/badge/OOD_Detection-orange?style=for-the-badge)
![Reinforcement Learning](https://img.shields.io/badge/Reinforcement_Learning-orange?style=for-the-badge)
![Multi-Agent RL](https://img.shields.io/badge/Multi_Agent_RL-orange?style=for-the-badge)
![Emergence](https://img.shields.io/badge/Emergence-orange?style=for-the-badge)
![Industrial Safety (AI)](https://img.shields.io/badge/Industrial_Safety_(AI)-orange?style=for-the-badge)
![Geoinformatics](https://img.shields.io/badge/Geoinformatics-orange?style=for-the-badge)
**Programming Languages:**
<br>
![Python](https://img.shields.io/badge/Python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)
![LaTeX](https://img.shields.io/badge/LaTeX-%23008080.svg?style=for-the-badge&logo=latex&logoColor=white)
![Kotlin](https://img.shields.io/badge/Kotlin-%237F52FF.svg?style=for-the-badge&logo=kotlin&logoColor=white)
![PHP](https://img.shields.io/badge/PHP-%23777BB4.svg?style=for-the-badge&logo=php&logoColor=white)
![Shell Script](https://img.shields.io/badge/Shell_Script-%23121011.svg?style=for-the-badge&logo=gnu-bash&logoColor=white)
![HTML5](https://img.shields.io/badge/HTML5-%23E34F26.svg?style=for-the-badge&logo=html5&logoColor=white)
![CSS3](https://img.shields.io/badge/CSS3-%231572B6.svg?style=for-the-badge&logo=css3&logoColor=white)
![Markdown](https://img.shields.io/badge/Markdown-000000?style=for-the-badge&logo=markdown&logoColor=white)
![JavaScript](https://img.shields.io/badge/JavaScript-%23323330.svg?style=for-the-badge&logo=javascript&logoColor=%23F7DF1E)
![SQL](https://img.shields.io/badge/SQL-black?style=for-the-badge&logo=postgresql&logoColor=white)
![NoSQL](https://img.shields.io/badge/NoSQL-black?style=for-the-badge&logo=mongodb&logoColor=white)
**Libraries & Frameworks (Python Focus):**
<br>
![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white)
![NumPy](https://img.shields.io/badge/NumPy-%23013243.svg?style=for-the-badge&logo=numpy&logoColor=white)
![Pandas](https://img.shields.io/badge/Pandas-%23150458.svg?style=for-the-badge&logo=pandas&logoColor=white)
![Scikit-learn](https://img.shields.io/badge/scikit--learn-%23F7931E.svg?style=for-the-badge&logo=scikit-learn&logoColor=white)
![FastAPI](https://img.shields.io/badge/FastAPI-005571?style=for-the-badge&logo=fastapi)
![Matplotlib](https://img.shields.io/badge/Matplotlib-%23ffffff.svg?style=for-the-badge&logo=Matplotlib&logoColor=black)
![Plotly](https://img.shields.io/badge/Plotly-%233F4F75.svg?style=for-the-badge&logo=plotly&logoColor=white)
**Systems & DevOps:**
<br>
![Linux](https://img.shields.io/badge/Linux_(Arch,_NixOS,_Debian)-FCC624?style=for-the-badge&logo=linux&logoColor=black)
![Docker](https://img.shields.io/badge/Docker_(&_Swarm)-%230db7ed.svg?style=for-the-badge&logo=docker&logoColor=white)
![Kubernetes](https://img.shields.io/badge/Kubernetes-%23326ce5.svg?style=for-the-badge&logo=kubernetes&logoColor=white)
![Git](https://img.shields.io/badge/Git-%23F05033.svg?style=for-the-badge&logo=git&logoColor=white)
![Nginx](https://img.shields.io/badge/Nginx-%23009639.svg?style=for-the-badge&logo=nginx&logoColor=white)
![Traefik Proxy](https://img.shields.io/badge/Traefik_Proxy-%2324a1c1?style=for-the-badge&logo=traefikproxy&logoColor=white)
![WireGuard](https://img.shields.io/badge/WireGuard-%2388171A.svg?style=for-the-badge&logo=wireguard&logoColor=white)
![ZFS](https://img.shields.io/badge/ZFS-0079f2.svg?style=for-the-badge&logo=dependabot&logoColor=white)
**Databases:**
<br>
![SQL (General)](https://img.shields.io/badge/SQL-07405E?style=for-the-badge&logo=sqlite&logoColor=white)
![MongoDB](https://img.shields.io/badge/MongoDB-4EA94B?style=for-the-badge&logo=mongodb&logoColor=white)
**Tools & Software:**
<br>
![VS Code](https://img.shields.io/badge/VS_Code-0078d7.svg?style=for-the-badge&logo=visual-studio-code&logoColor=white)
![IntelliJ IDEA](https://img.shields.io/badge/IntelliJ_IDEA-000000.svg?style=for-the-badge&logo=intellij-idea&logoColor=white)
![Microsoft Office](https://img.shields.io/badge/Microsoft_Office-D83B01?style=for-the-badge&logo=microsoft-office&logoColor=white)
![Obsidian](https://img.shields.io/badge/Obsidian-%23483699.svg?style=for-the-badge&logo=obsidian&logoColor=white)
---
Thank you for your interest in my profile.

View File

@ -1,7 +1,4 @@
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
title: "Blog"
permalink: /blog/
layout: category

View File

@ -1,4 +0,0 @@
---
layout: home
author_profile: true
---

View File

@ -1,25 +1,23 @@
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
layout: home
author_profile: true
# title: "about me"
canonical_url: "https://steffenillium.de"
permalink: "/"
entries_layout: grid
---
Welcome, and thank you for visiting! :wave:
Welcome.
I am a Machine Learning Expert, Data Scientist, and Researcher specializing in areas including Data Augmentation & Synthesis, Classification & Segmentation, Anomaly & OOD Detection, and Multi-Agent Systems.
This portfolio offers a comprehensive overview of my academic background, professional journey, research contributions, and technical expertise.
This is my portfolio page, built to share a comprehensive [overview](/about) of my academic and professional journey.
For more detailed information, please explore the options available in the top menu.<br>
<figure class="third">
<img src="/assets/images/photo/bike.jpg" alt="Bike in the Garden">
<img src="/assets/images/photo/bike.jpg" alt="Cycling equipment leaning against a wall in a garden setting.">
<img src="/assets/images/photo/vulkan_wave.jpg" alt="Waving on top of a Vulcano">
<img src="/assets/images/photo/vulkan_wave.jpg" alt="Waving atop a volcanic peak under a clear sky.">
<img src="/assets/images/photo/azores.jpg" alt="Rough, stormy coastline of the Azores with pink flowers on green gras in foreground">
<img src="/assets/images/photo/azores.jpg" alt="Stormy coastline of the Azores featuring pink flowers on green grass in the foreground.">
</figure>
Reflecting the diverse facets of my professional life over the past years, the structure of this site encompasses my [endeavors in research](/research), [teaching](/teaching), [projects](/projects), and [publications](/publications). Feel free to browse through and discover more about my work. I appreciate your interest! :blush:
Explore the sections detailing my [research](/research), [teaching](/teaching) experience, key [projects](/projects), and [publications](/publications) to gain deeper insights into my work. You can navigate through the site using the top menu for detailed information on specific areas.

View File

@ -1,49 +1,61 @@
# Check if client is capable of handling webp
map $http_accept $webp_suffix {
    default "";
    "~*webp" ".webp";
}

# Map to check if client is capable of handling webp
map $http_accept $webp_suffix {
    default "";
    "~*webp" ".webp"; # Sets suffix to .webp if Accept header contains webp
}

# Capture image path, without the file extension
map $uri $image {
    ~*^/(images)/(.+)\.(jpe?g|png)$ /$1/$2;
}

# Map to capture the image path *without* the file extension
# This regex captures everything before the last dot and jpg/jpeg/png extension
map $uri $image_path_without_ext {
    ~^(?<captured_path>.+)\.(?:jpe?g|png)$ $captured_path;
}
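
# Illustrative walk-through (an example added for clarity; the path is
# hypothetical): for a request to /images/photo.png sent with an
# "Accept: image/webp,*/*" header, the maps above yield
#   $webp_suffix            -> ".webp"         (Accept header contains "webp")
#   $image_path_without_ext -> "/images/photo"
# so the try_files directive in the server block below checks for
# /images/photo.webp first and falls back to /images/photo.png.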
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    server_name localhost; # Replace localhost with your actual domain if needed

    location ~* ^/.+\.(jpg|jpeg|png)$ {
        root /usr/share/nginx/html;
        # BEGIN Browser Caching of WebP

    # Define the root directory for your website files
    root /usr/share/nginx/html;

    # Location block specifically for JPG, JPEG, and PNG images
    location ~* \.(jpe?g|png)$ {
        # The Vary header tells caches that the response depends on the Accept header
        add_header Vary Accept;
        # Set cache expiration headers for images
        expires 180d;
        add_header Pragma "public";
        add_header Cache-Control "public";
        # END Browser Caching of WebP
        add_header Vary Accept;
        try_files $image$webp_suffix $uri =404;

        # Try to serve the .webp file first if browser supports it
        # $image_path_without_ext comes from the map above
        # $webp_suffix comes from the map above (.webp or empty)
        # If $image_path_without_ext$webp_suffix exists (e.g., /path/image.webp), serve it.
        # Otherwise, try the original $uri (e.g., /path/image.png).
        # If neither exists, return 404.
        try_files $image_path_without_ext$webp_suffix $uri =404;
    }

    # Default location block for other requests (HTML, CSS, JS, etc.)
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html; # Common pattern for SPAs/frameworks
    }

    error_page 404 500 502 503 504 /404.html;

    # Error pages
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
        internal; # Prevents direct access to the error page
    }

    ## Browser Caching
    #location ~* \.(css|js|ico|gif|jpeg|jpg|webp|png|svg|eot|otf|woff|woff2|ttf|ogg)$ {
    #    expires 180d;
    #    add_header Pragma "public";
    #    add_header Cache-Control "public";
    #}

    gzip on;
    gzip_comp_level 4;
    gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    error_page 500 502 503 504 /50x.html; # Optional: generic 50x error page
    location = /50x.html {
        internal;
    }

    # Gzip compression settings
    gzip on;
    gzip_comp_level 4;
    gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
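
A quick way to sanity-check the negotiation logic above (a sketch, assuming the server listens locally on port 80, `webp` is registered in nginx's `mime.types`, and `.webp` siblings were pre-generated next to the originals, e.g. with Google's `cwebp` encoder; the image path below is hypothetical):

```sh
# Pre-generate a WebP sibling for an existing PNG (hypothetical path):
cwebp -q 80 /usr/share/nginx/html/images/photo.png \
      -o /usr/share/nginx/html/images/photo.webp

# A client advertising WebP support should receive the WebP variant:
curl -sI -H "Accept: image/webp,*/*" http://localhost/images/photo.png \
  | grep -iE "^(content-type|vary)"
# Expected: Content-Type: image/webp and Vary: Accept

# A client without WebP support falls back to the original file:
curl -sI http://localhost/images/photo.png | grep -i "^content-type"
# Expected: Content-Type: image/png
```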

View File

@ -1,8 +1,5 @@
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
title: "projects"
title: "Projects"
permalink: /projects/
layout: category
taxonomy: projects
@ -10,11 +7,12 @@ author_profile: true
entries_layout: list
---
Here you will find an overview of the projects I worked on at the [mobile and distributed systems chair](http://www.mobile.ifi.lmu.de/).
Throughout my tenure, I embraced various roles including technician, researcher, project communicator, conference organizer, and editor-in-chief.
As a result, the list below represents a blend of genuine industrial projects, undertaken in collaboration with SWA and Fraunhofer, as well as pursuits we affectionately refer to as “hobbies” within the chair.
This section details key projects undertaken during my tenure as a Research Assistant and PhD candidate at the [Chair for Mobile and Distributed Systems](http://www.mobile.ifi.lmu.de/), LMU Munich (2018-2024), as well as subsequent engagements.
My involvement spanned a diverse range of initiatives, encompassing foundational research, applied projects in collaboration with industry partners such as Stadtwerke München (SWM) and the Fraunhofer Institute for Cognitive Systems (IKS), and significant contributions to academic community organization and outreach. Across these projects, I assumed various responsibilities, including researcher, technical lead, project communicator, conference organizer, and editorial lead.
The following list provides an overview of these varied engagements, highlighting the objectives, methodologies, and outcomes associated with each initiative.
## List of Projects
---

View File

@ -1,18 +1,23 @@
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
title: "Publications"
permalink: /publications/
layout: single
author_profile: true
title: "publications"
permalink: /publications/
scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
---
This section presents a collection of scientific papers to which I have contributed, or that were inspired by my research and ideas.
My keen interest in the foundational principles of deep learning and neural networks has led me to explore a wide array of topics, ranging from in-depth analyses of their inner mechanisms to practical applications in various domains.
Many of these endeavors were directly influenced by the [projects](/projects) I participated in.
Together with my colleagues, and driven by curiosity and enthusiasm, we explored somewhat unconventional concepts. I invite you to explore these works and share in our journey of discovery. 🤗
This section highlights my contributions to scientific literature. My research primarily focuses on advancing the understanding and application of machine learning and deep neural networks.
Key areas of investigation reflected in my publications include:
* **Foundational Deep Learning:** Analyses of neural network mechanisms and core principles.
* **Methodological Development:** Novel techniques in Data Augmentation & Synthesis, Classification & Segmentation, Anomaly & Out-of-Distribution (OOD) Detection.
* **Advanced Systems:** Exploration of Multi-Agent Reinforcement Learning (MARL), emergence, and safety considerations.
* **Applied Domains:** Leveraging these techniques in areas such as Geoinformatics, audio analysis, and sequence modeling, often stemming from collaborative [projects](/projects).
The publications listed below represent significant outputs from my doctoral studies at LMU Munich and ongoing research activities, contributing to both foundational knowledge and practical solutions.
For a comprehensive and continuously updated list of my publications, please visit my profile on <a href="{{ page.scholar_link }}" target="_blank" rel="noopener noreferrer">Google Scholar</a>.
---

View File

@ -1,7 +1,4 @@
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
#layout: single
title: "research"
permalink: /research/