<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="http://localhost:4000/feed.xml" rel="self" type="application/atom+xml" /><link href="http://localhost:4000/" rel="alternate" type="text/html" /><updated>2025-07-22T08:56:15+02:00</updated><id>http://localhost:4000/feed.xml</id><title type="html">Steffen Illium</title><subtitle>Personal Website</subtitle><author><name>Steffen Illium</name></author><entry><title type="html">MAS Emergence Safety</title><link href="http://localhost:4000/research/mas-emergence-safety/" rel="alternate" type="text/html" title="MAS Emergence Safety" /><published>2024-10-27T00:00:00+02:00</published><updated>2024-10-27T00:00:00+02:00</updated><id>http://localhost:4000/research/mas-emergence-safety</id><content type="html" xml:base="http://localhost:4000/research/mas-emergence-safety/"><
{:style="display:block; width:40%" .align-right}

Multi-Agent Systems (MAS), particularly those employing decentralized decision-making based on local information (common in MARL), can exhibit **emergent effects**. These phenomena, arising from complex interactions, range from minor behavioral quirks to potentially catastrophic system failures, posing significant **safety challenges**.

This research provides a framework for understanding and mitigating undesirable emergence from a **safety perspective**. We propose a formal definition: emergent effects arise from **misalignments between the *global inherent specification*** (the intended overall system goal or behavior) **and its *local approximation*** used by individual agents (e.g., distinct reward components, limited observations).

<center>
<img src="/assets/figures/21_coins.png" alt="Visualization showing agents exhibiting emergent coin-collecting behavior" style="display:block; width:70%">
<figcaption>Example of emergent behavior (e.g., coin hoarding) due to specification misalignment.</figcaption>
</center><br>

Leveraging established concepts from system safety engineering, we analyze how such misalignments can lead to deviations from intended global behavior. To illustrate the practical implications, we examine two highly configurable gridworld scenarios. These demonstrate how inadequate or independently derived local specifications (rewards/observations) can predictably result in unintended emergent behaviors, such as resource hoarding or inefficient coordination.
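
The gap between a global specification and its local approximations can be made concrete with a toy example. The sketch below is purely illustrative (it is not the paper's gridworld or reward design): each agent greedily optimizes a local coin-collection reward, while a hypothetical global specification also penalizes imbalance between agents, so locally optimal behavior (hoarding) scores poorly against the global intent.

```python
# Minimal, hypothetical illustration of specification misalignment (not the
# paper's environment): each agent optimizes a *local* reward (its own coin
# count), while the *global* specification also penalizes imbalance.

def local_reward(own_coins: int) -> int:
    # Local approximation: "collect as many coins as you can".
    return own_coins

def global_specification(coins_a: int, coins_b: int) -> float:
    # Intended global behavior: maximize total coins while avoiding hoarding
    # (a large imbalance between agents is undesirable).
    return coins_a + coins_b - abs(coins_a - coins_b)

# Greedy agents acting only on their local reward: the faster agent grabs
# every contested coin -> locally optimal, globally misaligned ("hoarding").
coins_a, coins_b = 10, 0            # emergent outcome under local greed
coins_a_fair, coins_b_fair = 5, 5   # outcome intended by the global spec

print(local_reward(coins_a), global_specification(coins_a, coins_b))                  # 10 0
print(local_reward(coins_a_fair), global_specification(coins_a_fair, coins_b_fair))   # 5 10
```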

<center>
<img src="/assets/figures/21_blocking.png" alt="Visualization showing agents exhibiting emergent blocking behavior" style="display:block; width:60%">
<figcaption>Example of emergent behavior (e.g., mutual blocking) due to specification misalignment.</figcaption>
</center><br>

Recognizing that achieving a perfectly aligned global specification might be impractical in complex systems, we propose strategies focused on **adjusting the underlying local parameterizations** (e.g., reward shaping, observation design) to mitigate harmful emergence. By carefully tuning these local components, system alignment can be improved, reducing the risk of emergent failures and enhancing overall safety. {% cite altmann2024emergence %}]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="multi-agent-systems" /><category term="MARL" /><category term="safety" /><category term="emergence" /><category term="system-specification" /><summary type="html"><![CDATA[Formalized MAS emergence misalignment; proposed safety mitigation strategies.]]></summary></entry><entry><title type="html">Aquarium MARL Environment</title><link href="http://localhost:4000/research/aquarium-marl-environment/" rel="alternate" type="text/html" title="Aquarium MARL Environment" /><published>2024-01-13T00:00:00+01:00</published><updated>2024-01-13T00:00:00+01:00</updated><id>http://localhost:4000/research/aquarium-marl-environment</id><content type="html" xml:base="http://localhost:4000/research/aquarium-marl-environment/"><{:style="display:block; width:40%" .align-right}

The study of complex interactions using Multi-Agent Reinforcement Learning (MARL), particularly **predator-prey dynamics**, often requires specialized simulation environments. To streamline research and avoid redundant development efforts, we introduce **Aquarium**: a versatile, open-source MARL environment specifically designed for investigating predator-prey scenarios and related **emergent behaviors**.

Key Features of Aquarium:

* **Framework Integration:** Built upon and seamlessly integrates with the popular **PettingZoo API**, allowing researchers to readily apply existing MARL algorithm implementations (e.g., from Stable-Baselines3, RLlib); see the usage sketch after this list.
* **Physics-Based Movement:** Simulates agent movement on a two-dimensional, continuous plane with edge-wrapping boundaries, incorporating basic physics for more realistic interactions.
* **High Customizability:** Offers extensive configuration options for:
  * **Agent-Environment Interactions:** Observation spaces, action spaces, and reward functions can be tailored to specific research questions.
  * **Environmental Parameters:** Key dynamics like agent speeds, prey reproduction rates, predator starvation mechanisms, sensor ranges, and more are fully adjustable.
* **Visualization & Recording:** Includes a resource-efficient visualizer and supports video recording of simulation runs, facilitating qualitative analysis and understanding of agent behaviors.
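
Because Aquarium follows the PettingZoo Parallel API, driving it looks like driving any other PettingZoo environment. The sketch below is hypothetical: the import path, constructor arguments, and the random stand-in policy are assumptions, not the package's documented interface; only the surrounding loop follows the (recent) PettingZoo Parallel API.

```python
# Hypothetical usage sketch; names marked as assumed are not Aquarium's
# documented interface.
from aquarium import AquariumEnv  # assumed module/class name

env = AquariumEnv(n_prey=5, n_predators=1)   # assumed configuration options
observations, infos = env.reset(seed=42)

def shared_policy(observation, agent):
    # Stand-in for a single trained PPO network queried by *all* prey agents
    # (parameter sharing), which improved coordination in the preliminary study.
    return env.action_space(agent).sample()

while env.agents:
    actions = {agent: shared_policy(observations[agent], agent) for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```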

<div style="display: flex; align-items: center; justify-content: center;">
<center>
<img src="/assets/figures/20_observation_vector.png" alt="Diagram detailing the construction of the observation vector for an agent" style="display:inline-table; width:85%">
<figcaption>Construction details of the agent observation vector.</figcaption>
</center>
<center>
<img src="/assets/figures/20_capture_statistics.png" alt="Graphs showing average captures or rewards per prey agent under different training regimes" style="display:inline-table; width:100%">
<figcaption>Performance metrics (e.g., average captures/rewards) comparing training strategies.</figcaption>
</center>
</div>

To demonstrate its capabilities, we conducted preliminary studies using **Proximal Policy Optimization (PPO)** to train multiple prey agents learning to evade a predator within Aquarium. Consistent with findings in existing MARL literature, our results showed that training agents with **individual policies led to suboptimal performance**, whereas utilizing **parameter sharing** among prey agents significantly improved coordination, sample efficiency, and overall evasion success. {% cite kolle2024aquarium %}]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="multi-agent-reinforcement-learning" /><category term="MARL" /><category term="simulation" /><category term="emergence" /><category term="complex-systems" /><summary type="html"><![CDATA[Aquarium: Open-source MARL environment for predator-prey studies.]]></summary></entry><entry><title type="html">LMU DevOps Admin</title><link href="http://localhost:4000/projects/server-administration/" rel="alternate" type="text/html" title="LMU DevOps Admin" /><published>2023-10-15T00:00:00+02:00</published><updated>2023-10-15T00:00:00+02:00</updated><id>http://localhost:4000/projects/server-administration</id><content type="html" xml:base="http://localhost:4000/projects/server-administration/"><{: .align-left style="padding:0.1em; width:5em" alt="Arch Linux Logo"}

During my tenure at the LMU Chair for Mobile and Distributed Systems, alongside my research activities, I assumed responsibility for the ongoing maintenance of the group's IT infrastructure. This encompassed Linux workstations, Windows Server-based hypervisors, Linux file servers (utilizing ZFS), and core network services.

---

**Role:** IT Infrastructure & DevOps Lead (Informal)<br>
**Affiliation:** Chair for Mobile and Distributed Systems, LMU Munich<br>
**Duration:** 2018 - 2023 (Concurrent with Research Role)<br>
**Objective:** Continuous maintenance of the chair's IT infrastructure

---

**Key Initiatives & Achievements:**

* **Infrastructure as Code & Orchestration:**
  * Leveraged **Ansible** extensively for automated configuration management and orchestration across a heterogeneous environment, ensuring consistency and reducing manual effort in managing diverse operating systems (Debian, Arch Linux, Windows), hardware configurations, and software libraries.

* **Containerization & Kubernetes Migration:**
  * Spearheaded the migration of numerous internal services (including web applications, databases, and research tools) from traditional VMs and bare-metal deployments to a **Kubernetes (K3S)** cluster. This enhanced scalability, resilience, and resource utilization.
  * Implemented **Longhorn** for persistent, distributed block storage within the Kubernetes cluster.

* **DevOps & GitOps Implementation:**
  * Established a modern DevOps workflow centered around a self-hosted **GitLab** instance, utilizing **GitLab CI** for automated testing and container building.
  * Implemented **Argo CD** for GitOps-based continuous deployment to the Kubernetes cluster, ensuring declarative state management and automated synchronization.
  * Managed sensitive information using **Sealed Secrets** for secure secret handling within the GitOps workflow.

* **Networking & Security:**
  * Configured **Traefik** as the primary reverse proxy and ingress controller for the Kubernetes cluster, automating routing, service discovery, and TLS certificate management.
  * Implemented and managed a **WireGuard** VPN server to provide secure remote access for chair members to internal resources.

* **ML Workflow Optimization:**
  * Re-architected the execution environment for machine learning experiments. Transitioned from managing dependencies directly on workstations or via a less reliable SLURM setup to a containerized approach using **Docker**.
  * Utilized the self-hosted **GitLab Container Registry** for storing ML environment images.

---

**Outcomes & Philosophy:**

This hands-on role provided deep practical experience in modern system administration, networking, Infrastructure as Code (IaC), and cloud-native technologies within an academic research setting. It fostered my preference for minimalist, reproducible, and microservice-oriented architectures. These principles and skills are actively applied in my personal projects, including the self-hosting and management of this website and various other containerized services.

A more comprehensive list of the technologies I work with can be found on the [About Me](/about/) page.]]></content><author><name>Steffen Illium</name></author><category term="projects" /><category term="devops" /><category term="kubernetes" /><category term="server-administration" /><category term="infrastructure" /><summary type="html"><![CDATA[Managed LMU chair IT: Kubernetes, CI/CD, automation (2018-2023).]]></summary></entry><entry><title type="html">Primate Subsegment Sorting</title><link href="http://localhost:4000/research/primate-subsegment-sorting/" rel="alternate" type="text/html" title="Primate Subsegment Sorting" /><published>2023-06-25T00:00:00+02:00</published><updated>2023-06-25T00:00:00+02:00</updated><id>http://localhost:4000/research/primate-subsegment-sorting</id><content type="html" xml:base="http://localhost:4000/research/primate-subsegment-sorting/"><

{:style="display:block; width:40%" .align-right}

Automated acoustic classification plays a vital role in wildlife monitoring and bioacoustics research. This study introduces a sophisticated pre-processing and training strategy to significantly enhance the accuracy of multi-class audio classification, specifically targeting the identification of different primate species from field recordings.

A key challenge in bioacoustics is dealing with datasets containing weak labels (where calls of interest occupy only a portion of a labeled segment), varying segment lengths, and poor signal-to-noise ratios (SNR). Our approach addresses this by:

1. **Subsegment Analysis:** Processing audio recordings represented as **MEL spectrograms**.
2. **Refined Labeling:** Meticulously **relabeling subsegments** within the spectrograms. This "binary presorting" step effectively identifies and isolates the actual vocalizations of interest within longer, weakly labeled recordings (sketched after this list).
3. **CNN Training:** Training **Convolutional Neural Networks (CNNs)** on these refined, higher-quality subsegment inputs.
4. **Data Augmentation:** Employing innovative **data augmentation techniques** suitable for spectrogram data to further improve model robustness.
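
A condensed sketch of steps 1–2, assuming librosa for the MEL spectrograms: the file name and segment width are placeholders, and the simple energy threshold merely stands in for the binary presorting classifier used in the paper.

```python
# Sketch of subsegment extraction and a crude "presorting" filter.
import numpy as np
import librosa

y, sr = librosa.load("weakly_labeled_recording.wav", sr=16000)  # placeholder file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

width = 64  # spectrogram frames per subsegment (assumed)
subsegments = [mel_db[:, i:i + width]
               for i in range(0, mel_db.shape[1] - width + 1, width)]

# "Binary presorting": keep only subsegments likely to contain a call.
# Here a crude energy criterion replaces the binary classifier from the paper.
keep = [s for s in subsegments if s.mean() > mel_db.mean() + 3.0]
print(f"{len(keep)}/{len(subsegments)} subsegments retained for CNN training")
```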

<center>
<img src="/assets/figures/19_binary_primates_thresholding.jpg" alt="Visualization related to the thresholding or selection process for subsegment labeling" style="display:block; width:70%">
<figcaption>Thresholding or selection criteria for subsegment refinement.</figcaption>
</center><br>

The effectiveness of this methodology was evaluated on the challenging **ComParE 2021 Primate dataset**. The results demonstrate remarkable improvements in classification performance, achieving substantially higher accuracy and Unweighted Average Recall (UAR) scores compared to existing baseline methods.

<center>
<img src="/assets/figures/19_binary_primates_results.jpg" alt="Graphs or tables showing improved classification results (accuracy, UAR) compared to baselines" style="display:block; width:70%">
<figcaption>Comparative performance results on the ComParE 2021 dataset.</figcaption>
</center><br>

This work represents a significant advancement in handling difficult, real-world bioacoustic data, showcasing how careful data refinement prior to deep learning model training can dramatically enhance classification outcomes. {% cite koelle23primate %}]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="bioacoustics" /><category term="audio-classification" /><category term="deep-learning" /><category term="data-labeling" /><category term="signal-processing" /><summary type="html"><![CDATA[Binary subsegment presorting improves noisy primate sound classification.]]></summary></entry><entry><title type="html">Emergent Social Dynamics</title><link href="http://localhost:4000/research/emergent-social-dynamics/" rel="alternate" type="text/html" title="Emergent Social Dynamics" /><published>2023-05-01T00:00:00+02:00</published><updated>2023-05-01T00:00:00+02:00</updated><id>http://localhost:4000/research/emergent-social-dynamics</id><content type="html" xml:base="http://localhost:4000/research/emergent-social-dynamics/"><

{:style="display:block; width:40%" .align-right}

Specifically, particles are equipped with mechanisms enabling them to **recognize and build predictive models of their peers' behavior**. The learning process is driven by the minimization of prediction error, or "surprise," incentivizing particles to accurately anticipate the actions or state changes of others within the "soup."
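
In spirit, the learning signal is nothing more than a prediction loss between one particle's model and another particle's actual behavior. The toy sketch below illustrates this with two small PyTorch MLPs; the particle sizes, encoding, and training schedule are assumptions for illustration, not the paper's setup.

```python
# Toy sketch of "surprise" minimization between two particles.
import torch
import torch.nn as nn

def particle():
    return nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 4))

peer, observer = particle(), particle()
optimizer = torch.optim.SGD(observer.parameters(), lr=0.1)

for step in range(200):
    x = torch.randn(16, 4)                    # shared input batch
    with torch.no_grad():
        peer_behavior = peer(x)               # what the peer actually does
    surprise = nn.functional.mse_loss(observer(x), peer_behavior)
    optimizer.zero_grad()
    surprise.backward()                       # learning is driven by surprise
    optimizer.step()

print(f"final surprise: {surprise.item():.4f}")
```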

Key observations from this setup include:

* The emergence of **stable behavioral patterns and population dynamics** purely from these local, predictive interactions. Notably, these emergent patterns often resemble the stability observed in systems where self-replication was an explicitly trained objective.
* The introduction of a unique **"catalyst" particle** designed to exert evolutionary pressure on the system, demonstrating how external influences or specialized agents can shape the collective dynamics.

<center>
<img src="/assets/figures/18_surprised_soup_trajec.jpg" alt="Trajectories or state space visualization of the particle population dynamics over time" style="display:block; width:90%">
<figcaption>Visualization of particle trajectories or population dynamics within the 'social soup'.</figcaption>
</center>

This study highlights how complex, seemingly goal-directed social behaviors and stable ecosystem structures can emerge from simple, local rules based on mutual prediction and surprise minimization among interacting agents, offering insights into the self-organization of complex adaptive systems. {% cite zorn23surprise %}]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="artificial-life" /><category term="complex-systems" /><category term="neural-networks" /><category term="self-organization" /><category term="emergent-behavior" /><category term="predictive-coding" /><summary type="html"><![CDATA[Artificial chemistry networks develop predictive models via surprise minimization.]]></summary></entry><entry><title type="html">Autoencoder Trajectory Compression</title><link href="http://localhost:4000/research/autoencoder-trajectory-compression/" rel="alternate" type="text/html" title="Autoencoder Trajectory Compression" /><published>2023-02-25T00:00:00+01:00</published><updated>2023-02-25T00:00:00+01:00</updated><id>http://localhost:4000/research/autoencoder-trajectory-compression</id><content type="html" xml:base="http://localhost:4000/research/autoencoder-trajectory-compression/"><{:style="display:block; width:50%" .align-right}

Our method was evaluated on two distinct datasets: one from a gaming context and another real-world dataset (T-Drive). We assessed performance across a range of compression ratios and trajectory lengths, comparing it against the widely used traditional **Douglas-Peucker algorithm**.

**Key findings:**

* The LSTM autoencoder approach significantly **outperforms Douglas-Peucker** in terms of reconstruction accuracy, as measured by both **discrete Fréchet distance** and **Dynamic Time Warping (DTW)**.
* Unlike point-reduction techniques like Douglas-Peucker, our method performs a **lossy reconstruction at every point** along the trajectory. This offers potential advantages in maintaining temporal resolution and providing greater flexibility for downstream analysis.
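
For readers unfamiliar with the architecture class, here is a minimal PyTorch sketch of an LSTM autoencoder over fixed-length trajectory windows; the latent size, window length, and the choice of using the encoder's final hidden state as the compressed code are assumptions for illustration and may differ from the model evaluated in the paper.

```python
# Minimal LSTM autoencoder sketch for (lat, lon) trajectory windows.
import torch
import torch.nn as nn

class TrajectoryAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=latent_dim, batch_first=True)
        self.decoder = nn.LSTM(input_size=latent_dim, hidden_size=latent_dim, batch_first=True)
        self.head = nn.Linear(latent_dim, 2)    # back to (lat, lon) per step

    def forward(self, traj):                    # traj: (batch, steps, 2)
        _, (h, _) = self.encoder(traj)          # final hidden state = compressed code
        code = h[-1].unsqueeze(1).repeat(1, traj.size(1), 1)
        out, _ = self.decoder(code)             # reconstruct every point (lossy)
        return self.head(out)

model = TrajectoryAutoencoder()
window = torch.randn(8, 32, 2)                  # 8 dummy windows of 32 GPS points
reconstruction = model(window)
loss = nn.functional.mse_loss(reconstruction, window)
print(window.shape, reconstruction.shape, loss.item())
```
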
Experimental results demonstrate the effectiveness and potential benefits of using deep learning, specifically LSTM autoencoders, for GPS trajectory compression, offering improved accuracy over conventional geometric algorithms. {% cite kolle2023compression %}]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="deep-learning" /><category term="recurrent-neural-networks" /><category term="trajectory-analysis" /><category term="data-compression" /><category term="geoinformatics" /><summary type="html"><![CDATA[LSTM autoencoder better DP for trajectory compression (Fréchet/DTW).]]></summary></entry><entry><title type="html">Voronoi Data Augmentation</title><link href="http://localhost:4000/research/voronoi-data-augmentation/" rel="alternate" type="text/html" title="Voronoi Data Augmentation" /><published>2023-02-24T00:00:00+01:00</published><updated>2023-02-24T00:00:00+01:00</updated><id>http://localhost:4000/research/voronoi-data-augmentation</id><content type="html" xml:base="http://localhost:4000/research/voronoi-data-augmentation/"><![CDATA[Data augmentation is essential for improving the performance and generalization of Convolutional Neural Networks (CNNs), especially when training data is limited. This research introduces **VoronoiPatches (VP)**, a novel data augmentation algorithm based on the principle of **non-linear recombination** of image information.

<center>
<img src="/assets/figures/17_vp_lion.jpg" alt="Example of an image augmented with VoronoiPatches, showing polygon patches blended onto a lion image" style="display:block; width:85%">
<figcaption>Visual example of the VoronoiPatches augmentation applied to an image.</figcaption>
</center><br>

Unlike traditional methods that often apply uniform transformations or cutout regions, VP operates by:

1. Generating a random layout of points within an image.
2. Creating a Voronoi diagram based on these points, partitioning the image into unique, convex polygon-shaped patches.
3. Redistributing information between these patches or blending information across patch boundaries (specific mechanism detailed in the paper).
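
A rough NumPy sketch of the partitioning step: random sites, nearest-site assignment (which yields the Voronoi cells), and a simple stand-in redistribution that swaps per-cell statistics. The swap-by-mean-color step is only an illustration, not the redistribution mechanism described in the paper.

```python
# Voronoi partition of an image via nearest-site assignment, plus a toy
# "redistribution" between cells.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                      # dummy RGB image
num_points = 12

points = rng.uniform(0, 64, size=(num_points, 2))    # random Voronoi sites
yy, xx = np.mgrid[0:64, 0:64]
pixels = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)

# Assign every pixel to its nearest site -> convex Voronoi cells.
dists = np.linalg.norm(pixels[:, None, :] - points[None, :, :], axis=2)
cell = dists.argmin(axis=1).reshape(64, 64)

# Toy redistribution: overwrite each cell with another cell's mean color.
augmented = image.copy()
perm = rng.permutation(num_points)
for c in range(num_points):
    augmented[cell == c] = image[cell == perm[c]].mean(axis=0)

print(augmented.shape, len(np.unique(cell)))
```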

This approach potentially allows for smoother transitions between augmented regions and the original image compared to sharp cutout methods. The core idea is to encourage the CNN to learn more robust features by exposing it to varied, non-linearly recombined versions of the input data.

---

<div style="text-align: center; margin: 1em 0; font-weight: bold; color: #D4AF37;">
:trophy: Best Poster Award - ICAART 2023 :trophy:<br>
<small>(<a href="https://icaart.scitevents.org/PreviousAwards.aspx?y=2024#2023" target="_blank" rel="noopener noreferrer">Official Link</a>)</small>
</div>

---

Evaluations demonstrate that VoronoiPatches can effectively **reduce model variance and combat overfitting**. Comparative studies indicate that VP **outperforms several existing state-of-the-art data augmentation techniques** in improving the robustness and generalization performance of CNN models on unseen data across various benchmarks. {% cite illium2023voronoipatches %}

<center>
<img src="/assets/figures/17_vp_results.jpg" alt="Graphs showing performance comparison (e.g., accuracy, loss) of VoronoiPatches against other augmentation methods" style="display:block; width:90%">
<figcaption>Comparative results illustrating the performance benefits of VoronoiPatches.</figcaption>
</center><br>]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="data-augmentation" /><category term="computer-vision" /><category term="deep-learning" /><category term="convolutional-neural-networks" /><summary type="html"><![CDATA[VoronoiPatches improves CNN robustness via non-linear recombination augmentation.]]></summary></entry><entry><title type="html">Organism Network Emergence</title><link href="http://localhost:4000/research/organism-network-emergence/" rel="alternate" type="text/html" title="Organism Network Emergence" /><published>2022-12-01T00:00:00+01:00</published><updated>2022-12-01T00:00:00+01:00</updated><id>http://localhost:4000/research/organism-network-emergence</id><content type="html" xml:base="http://localhost:4000/research/organism-network-emergence/"><

{:style="display:block; width:45%" .align-right}

Key aspects explored in this work include:

* **Mechanisms for Collaboration:** Investigating how communication or resource sharing between self-replicating units can be established and influence collective behavior.
* **Emergent Differentiation:** Analyzing scenarios where units within the population begin to specialize, adopting different internal states (weight configurations) or functions, analogous to cellular differentiation in biological organisms.
* **Formation of Structure:** Studying how interactions lead to stable spatial or functional structures within the population, forming the basis of the Organism Network.
* **Functional Advantages:** Assessing whether these emergent ONs exhibit novel collective functionalities or improved problem-solving capabilities compared to non-interacting populations. (The role of dropout, as suggested by the image, might relate to promoting robustness or specialization within this context).

This study bridges the gap between single-unit self-replication and the emergence of complex, multi-unit systems in artificial life research, offering insights into how collaborative dynamics can lead to higher-order computational structures. For more detailed insights, refer to {% cite illium2022constructing %}.

<!-- Add clearing div after text if float is used -->
<div style="clear: both;"></div>]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="artificial-life" /><category term="complex-systems" /><category term="neural-networks" /><category term="self-organization" /><category term="emergent-computation" /><summary type="html"><![CDATA[Self-replicating networks collaborate forming higher-level Organism Networks with emergent functionalities.]]></summary></entry><entry><title type="html">MSP Android Course</title><link href="http://localhost:4000/teaching/android/" rel="alternate" type="text/html" title="MSP Android Course" /><published>2022-10-15T00:00:00+02:00</published><updated>2022-10-15T00:00:00+02:00</updated><id>http://localhost:4000/teaching/android</id><content type="html" xml:base="http://localhost:4000/teaching/android/"><{: .align-left style="padding:0.1em; width:5em" alt="Android Logo"}

Over several semesters during my time at LMU Munich, I co-supervised the **"Praktikum Mobile und Verteilte Systeme" (MSP)**, often referred to as the Android development practical course. This intensive lab course provided students with hands-on experience in designing, developing, and testing native applications for the **Android** platform, primarily using **Java** and later **Kotlin**.

The course consistently followed a two-phase structure:

1. **Introductory Phase:** Focused on imparting fundamental concepts of Android development, relevant APIs, architectural patterns, and necessary tooling through lectures and guided practical exercises.
2. **Project Phase:** Student teams collaborated on developing a complete Android application based on their own concepts or provided themes. My role involved providing continuous technical mentorship, architectural guidance, code review feedback, and support in project planning and agile execution to each team.

Emphasis was placed on applying software engineering best practices within the context of mobile application development.

<div class="container" style="margin-top: 1.5em;">
<div class="sidebar" style="float: right; width: 30%; border: 0.5px grey solid; padding: 15px; margin-left: 15px; box-sizing: border-box;">
<h4 style="margin-top: 0;">Past Course Iterations</h4>
<ul style="list-style: none; padding-left: 0; margin-bottom: 0; font-size: smaller;">
<!-- Winter Semesters -->
<li><strong>WiSe 22/23:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2223/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 21/22:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2122/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 20/21:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws2021/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 19/20:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-ws1920/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>WiSe 18/19:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/msp-ws1819/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<!-- Summer Semesters -->
<li><strong>SoSe 2022:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose22/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>SoSe 2021:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose21/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>SoSe 2020:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/praktikum-mobile-und-verteilte-systeme-sose20/" target="_blank" rel="noopener noreferrer">MSP</a></li>
<li><strong>SoSe 2019:</strong> <a href="https://www.mobile.ifi.lmu.de/lehrveranstaltungen/msp-sose19/" target="_blank" rel="noopener noreferrer">MSP</a></li>
</ul>
</div>
<div class="main-content" style="float: left; width: calc(70% - 15px); box-sizing: border-box;">
<h4 style="margin-top: 0;">Key Learning Areas</h4>
Students gained practical experience in:
<ul>
<li>Native Android App Development (Java/Kotlin)</li>
<li>Android SDK, Activity/Fragment Lifecycle, UI Design (XML Layouts, Jetpack Compose later)</li>
<li>Client-Server Architecture & Networking (e.g., Retrofit, Volley)</li>
<li>Using Wireless Local Networks (WiFi / Bluetooth APIs)</li>
<li>Implementing Location Services (GPS / Fused Location Provider)</li>
<li>Background Processing and Services</li>
<li>Data Persistence (SharedPreferences, SQLite, Room)</li>
<li>Teamwork and Collaborative Software Development (Git)</li>
<li>Agile Methodologies and Project Management Tools</li>
</ul>
</div>
<div style="clear: both;"></div>
</div>]]></content><author><name>Steffen Illium</name></author><category term="teaching" /><category term="teaching" /><category term="android" /><category term="java" /><category term="kotlin" /><category term="mobile-development" /><category term="app-development" /><category term="agile" /><category term="teamwork" /><summary type="html"><![CDATA[Supervised MSP: teams built Android apps (Java/Kotlin) using agile.]]></summary></entry><entry><title type="html">Extended Self-Replication</title><link href="http://localhost:4000/research/extended-self-replication/" rel="alternate" type="text/html" title="Extended Self-Replication" /><published>2022-08-01T00:00:00+02:00</published><updated>2022-08-01T00:00:00+02:00</updated><id>http://localhost:4000/research/extended-self-replication</id><content type="html" xml:base="http://localhost:4000/research/extended-self-replication/"><). The research further investigates the use of **backpropagation-like mechanisms** not for typical supervised learning, but as an effective means to enable **non-trivial self-replication** – where networks learn to reproduce their own connection weights.
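
As a simplified illustration of what "learning to reproduce one's own weights" can look like, the sketch below trains a tiny PyTorch network to map a positional encoding of each of its own weights to that weight's value. The one-dimensional positional encoding and the network size are simplifications chosen for brevity; the weight encoding used in the papers is richer.

```python
# Simplified sketch of backpropagation-driven self-replication.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(net.parameters(), lr=0.05)

def own_weights():
    # Flattened snapshot of the network's current parameters.
    return torch.cat([p.detach().flatten() for p in net.parameters()])

n_weights = own_weights().numel()
positions = torch.linspace(-1, 1, n_weights).unsqueeze(1)  # one input per weight

for step in range(500):
    target = own_weights()                  # the weights to be reproduced
    prediction = net(positions).squeeze(1)  # the network's guess of its own weights
    loss = nn.functional.mse_loss(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # weights move -> the target moves too

# The final error indicates how close the network is to a fixpoint,
# i.e., a configuration that outputs exactly its own weights.
print(f"replication error after training: {loss.item():.6f}")
```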

Key extensions and analyses presented in this work include:

* **Robustness Analysis:** A systematic evaluation of the self-replicating networks' resilience and stability when subjected to various levels of **noise** during the replication process.
* **Artificial Chemistry Environments:** Further development and analysis of simulated environments where populations of self-replicating networks interact, leading to observable **emergent collective behaviors** and ecosystem dynamics.
* **Dynamical Systems Perspective:** A detailed theoretical analysis of the self-replication process viewed as a dynamical system. This includes identifying **fixpoint weight configurations** (networks that perfectly replicate themselves) and characterizing their **attractor basins** (the regions in weight space from which networks converge towards a specific fixpoint).

<center>
<img src="/assets/figures/15_noise_levels.jpg" alt="Graph showing the impact of different noise levels on self-replication fidelity or population dynamics" style="display:block; width:65%">
<figcaption>Investigating the influence of noise on the self-replication process.</figcaption>
</center><br>

By delving deeper into the mechanisms, robustness, emergent properties, and underlying dynamics, this study significantly enhances the understanding of how self-replication can be achieved and analyzed within neural network models, contributing valuable insights to the fields of artificial life and complex systems. {% cite gabor2022self %}]]></content><author><name>Steffen Illium</name></author><category term="research" /><category term="artificial-life" /><category term="complex-systems" /><category term="neural-networks" /><category term="self-organization" /><category term="dynamical-systems" /><summary type="html"><![CDATA[Journal extension: self-replication, noise robustness, emergence, dynamical system analysis.]]></summary></entry></feed> |