diff --git a/_posts/research/2020-06-01-peoc-ood-detection.md b/_posts/research/2020-06-01-peoc-ood-detection.md
index 76bfd8a4..b7b103ee 100644
--- a/_posts/research/2020-06-01-peoc-ood-detection.md
+++ b/_posts/research/2020-06-01-peoc-ood-detection.md
@@ -2,7 +2,7 @@
layout: single
title: "PEOC OOD Detection"
categories: research
-tags: deep-reinforcement-learning out-of-distribution-detection ai-safety anomaly-detection
+tags: deep-reinforcement-learning out-of-distribution-detection safety anomaly-detection
excerpt: "PEOC uses policy entropy for OOD detection in deep RL."
header:
teaser: /assets/figures/6_ood_pipeline.jpg
diff --git a/_posts/research/2022-02-25-rnn-memory-limits.md b/_posts/research/2022-02-25-rnn-memory-limits.md
new file mode 100644
index 00000000..bbdd242f
--- /dev/null
+++ b/_posts/research/2022-02-25-rnn-memory-limits.md
@@ -0,0 +1,26 @@
+---
+layout: single
+title: "RNN Memory Limits"
+categories: research
+tags: deep-learning recurrent-neural-networks sequence-modeling theoretical-ml
+excerpt: "Investigated the practical limits of RNNs (vanilla, LSTM, GRU) in recalling past inputs from uncorrelated sequences via standard backpropagation."
+header:
+ teaser: /assets/figures/22_rnn_limits.png
+scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
+---
+
+Recurrent Neural Networks (RNNs), including gated variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), are designed to capture temporal dependencies in sequential data: their recurrent state lets information from earlier time steps influence current processing.
+
+This research investigates the fundamental memory capacity of these architectures under challenging conditions: specifically, when processing sequences where data points are generated independently, possessing **no inherent temporal correlation**. In such scenarios, any recall of past inputs relies solely on the network's ability to explicitly memorize information through standard backpropagation training, rather than leveraging statistical patterns in the sequence.
+
+Our empirical analysis demonstrates that while RNNs *can* learn to recall a limited number of past inputs even from uncorrelated sequences, this capability is significantly constrained:
+
+* **Limited Recall Range:** The effective range over which vanilla RNNs, LSTMs, and GRUs can reliably reproduce past inputs from uncorrelated data is substantially shorter than the recall range achievable when even minimal temporal correlations are present.
+* **Architectural Influence:** This limitation is influenced by both the specific RNN architecture (vanilla, LSTM, GRU) and the network size (number of hidden units).
+* **Practical Bound:** The findings suggest a practical upper bound on the temporal memory achievable through standard training in these scenarios, which appears well below theoretical information storage limits.
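The uncorrelated-recall setup described above can be sketched as a simple data-generation routine; `make_recall_task` and the `lag` parameterization are illustrative names, not the paper's exact protocol:

```python
import numpy as np

def make_recall_task(n_steps, dim, lag, seed=0):
    """Build one sequence for the recall task: inputs are i.i.d. draws
    (hence temporally uncorrelated), and the target at step t is the
    input observed `lag` steps earlier. Any recall the network achieves
    must come from explicit memorization, not statistical structure."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_steps, dim))  # uncorrelated inputs
    y = np.zeros_like(x)
    y[lag:] = x[:-lag]                       # target = delayed copy of the input
    return x, y

x, y = make_recall_task(n_steps=100, dim=3, lag=5)
# sanity check: the target at step t equals the input at step t - lag
assert np.allclose(y[10], x[5])
```

Sweeping `lag` while monitoring reconstruction error is then a direct way to probe the effective recall range of a given architecture and hidden-state size.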
+
+![RNN Memory Horizon](/assets/figures/22_rnn_limits.png)
+
+RNN Memory Horizon.
+
+These results point to an inherent constraint of standard RNN architectures: without temporal structure to exploit, their ability to identify and use long-range dependencies is sharply limited, clarifying where they fall short in certain sequence-modeling tasks. {% cite illium2022empirical %}
diff --git a/_posts/research/2022-05-09-rl-anomaly-detection.md b/_posts/research/2022-05-09-rl-anomaly-detection.md
index 4c3faed1..80ac8ce4 100644
--- a/_posts/research/2022-05-09-rl-anomaly-detection.md
+++ b/_posts/research/2022-05-09-rl-anomaly-detection.md
@@ -2,7 +2,7 @@
layout: single
title: "RL Anomaly Detection"
categories: research
-tags: reinforcement-learning anomaly-detection ai-safety lifelong-learning generalization
+tags: reinforcement-learning anomaly-detection safety lifelong-learning generalization
excerpt: "Perspective on anomaly detection challenges and future in reinforcement learning."
header:
teaser: /assets/figures/14_ad_rl_teaser.jpg
diff --git a/_posts/research/2023-02-25-autoencoder-trajectory-compression.md b/_posts/research/2023-02-25-autoencoder-trajectory-compression.md
new file mode 100644
index 00000000..5480eda0
--- /dev/null
+++ b/_posts/research/2023-02-25-autoencoder-trajectory-compression.md
@@ -0,0 +1,31 @@
+---
+layout: single
+title: "Autoencoder Trajectory Compression"
+categories: research
+tags: deep-learning recurrent-neural-networks trajectory-analysis data-compression geoinformatics
+excerpt: "Introduced an LSTM autoencoder approach for GPS trajectory compression, demonstrating superior reconstruction accuracy compared to Douglas-Peucker based on Fréchet distance and DTW."
+header:
+ teaser: /assets/figures/23_trajectory_model.png
+scholar_link: "https://scholar.google.de/citations?user=NODAd94AAAAJ&hl=en"
+---
+
+The proliferation of location-aware mobile devices generates vast amounts of GPS trajectory data, necessitating efficient storage solutions. While various compression techniques aim to reduce data volume, preserving essential spatio-temporal information remains crucial.
+
+![Schematic of the LSTM Decoder Architecture](/assets/figures/23_trajectory_model.png)
+
+Schematic of the LSTM Decoder Architecture.
+
+
+This paper introduces a novel approach for **compressing and reconstructing GPS trajectories** using a **Long Short-Term Memory (LSTM) autoencoder**. The autoencoder learns a compressed latent representation of the trajectory sequence, which can then be decoded to reconstruct the original path.
+
+![Trajectory reconstruction scores](/assets/figures/23_trajectory_scores.png){:style="display:block; width:50%" .align-right}
+
+Our method was evaluated on two distinct datasets: one from a gaming context and another real-world dataset (T-Drive). We assessed performance across a range of compression ratios and trajectory lengths, comparing it against the widely used traditional **Douglas-Peucker algorithm**.
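For context, the Douglas-Peucker baseline in its standard recursive 2-D form can be sketched as follows; the `epsilon` tolerance and function name are ours, and this is a generic textbook implementation rather than the paper's evaluation code:

```python
import numpy as np

def douglas_peucker(points, epsilon):
    """Recursively simplify a polyline: a point is kept only if it deviates
    from the chord between the current endpoints by more than epsilon."""
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    d = points - start
    if norm == 0.0:
        dists = np.linalg.norm(d, axis=1)
    else:
        # perpendicular distance of each point to the start-end chord
        dists = np.abs(d[:, 0] * chord[1] - d[:, 1] * chord[0]) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])  # drop duplicated split point
    return np.vstack([start, end])

traj = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
simplified = douglas_peucker(traj, epsilon=1.0)
```

Note the contrast with the autoencoder: Douglas-Peucker drops points outright, whereas the learned approach reconstructs a (lossy) position at every original time step.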
+
+**Key findings:**
+
+* The LSTM autoencoder approach significantly **outperforms Douglas-Peucker** in terms of reconstruction accuracy, as measured by both **discrete Fréchet distance** and **Dynamic Time Warping (DTW)**.
+* Unlike point-reduction techniques like Douglas-Peucker, our method performs a **lossy reconstruction at every point** along the trajectory. This offers potential advantages in maintaining temporal resolution and providing greater flexibility for downstream analysis.
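One of the two evaluation metrics, the discrete Fréchet distance, admits a compact dynamic-programming sketch; the function name and API below are illustrative:

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines: the smallest
    'leash length' over all monotone couplings of the two point sequences,
    computed by dynamic programming over the coupling table."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(
                ca[i - 1, j] if i > 0 else np.inf,
                ca[i, j - 1] if j > 0 else np.inf,
                ca[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            ca[i, j] = max(best, d[i, j])
    return ca[-1, -1]

line_a = [(0, 0), (1, 0), (2, 0)]
line_b = [(0, 1), (1, 1), (2, 1)]
dist = discrete_frechet(line_a, line_b)  # parallel lines one unit apart
```

Unlike pointwise error, this metric penalizes the worst-case deviation along the best alignment of the two curves, which makes it well suited to judging trajectory reconstructions.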
+
+Experimental results demonstrate the effectiveness and potential benefits of using deep learning, specifically LSTM autoencoders, for GPS trajectory compression, offering improved accuracy over conventional geometric algorithms. {% cite kolle2023compression %}
+
diff --git a/_posts/research/2024-10-27-mas-emergence-safety.md b/_posts/research/2024-10-27-mas-emergence-safety.md
index 3f0ec2ac..f8b87f0a 100644
--- a/_posts/research/2024-10-27-mas-emergence-safety.md
+++ b/_posts/research/2024-10-27-mas-emergence-safety.md
@@ -2,7 +2,7 @@
layout: single
title: "MAS Emergence Safety"
categories: research
-tags: multi-agent-systems MARL AI-safety emergence system-specification
+tags: multi-agent-systems MARL safety emergence system-specification
excerpt: "Formalized MAS emergence misalignment; proposed safety mitigation strategies."
header:
teaser: /assets/figures/21_coins_teaser.png
diff --git a/assets/figures/22_rnn_limits.png b/assets/figures/22_rnn_limits.png
new file mode 100644
index 00000000..1da07679
Binary files /dev/null and b/assets/figures/22_rnn_limits.png differ
diff --git a/assets/figures/23_trajectory_model.png b/assets/figures/23_trajectory_model.png
new file mode 100644
index 00000000..0e11f7ff
Binary files /dev/null and b/assets/figures/23_trajectory_model.png differ
diff --git a/assets/figures/23_trajectory_scores.png b/assets/figures/23_trajectory_scores.png
new file mode 100644
index 00000000..e5625d85
Binary files /dev/null and b/assets/figures/23_trajectory_scores.png differ