some work on the new journal experiments with Cristian's code

This commit is contained in:
Maximilian Zorn
2021-05-14 09:13:42 +02:00
parent 9bf37486a0
commit 22d34d4e75
7 changed files with 310 additions and 82 deletions

View File

@ -1,9 +1,27 @@
# cristian_lenta - BA code
# code ALIFE paper journal edition
Changes made since the last meeting:
- `batch_size` renamed to `log_step_size`
- changed `second order fixpoints`: now comparing the first input with the second output, without changing the network's weights in between
- removed rounding of the weights (line 99, network.py): we need full float precision
- see journal_basins.py for a first draft of the "train -> spawn with noise -> train again and see where they end up" experiment. Applying noise follows the `vary` function used in the paper's robustness test, i.e. `+- prng() * eps`. Change if desired.
- has some interesting results, but maybe due to PCA the newly spawned weights + noise get plotted quite a bit away from the parent particle, even though the weights agree to within ~10e-8?
- see journal_basins.py for an attempt at a distance matrix between the nets / end weight states of an experiment. Also has the position-invariant Manhattan distance as an option (pairwise n^2 checks, neither fast nor elegant ;D). A tiny worked example follows this list.
- I forgot what "sanity check for the beacon" meant, but we leave that out anyway, right?
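The position-invariant idea in one tiny worked example (mirroring `mean_invariate_manhattan_distance` in journal_basins.py; `mim_distance` here is just an illustrative stand-in, not repo code): sorting both weight vectors first makes weight sets that share the same values in different positions come out at distance 0.

```python
import numpy as np

def mim_distance(x, y):
    # Sort both weight vectors, then average the absolute pairwise differences;
    # permuted-but-equal weight sets therefore score exactly 0.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

a = np.array([0.3, -0.1, 0.7])
b = np.array([0.7, 0.3, -0.1])  # same values, different positions
print(mim_distance(a, b))       # -> 0.0
```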
**Important**:
- now that we have full float precision, the robustness test **never** fails for the noise values `10^-6, 10^-7, 10^-8, 10^-9`. May just be a coding mistake, I am not sure yet.
## Some changes from Cristian's code (or suggestions, rather)
This is just my understanding, I might be wrong here. Just a short write-up of what I noticed while trying to implement the new experiments.
EDIT: I also saw that you updated your branch, so some of these things might have already been addressed.
- I think `id_function` is only trained to reproduce the *very first weight* configuration, right? Now I see where the confusion is. According to my understanding, the self-replicating networks are trained to output the *current weights at each training timestep*, i.e. dynamic targets as the weights evolve until they stabilize/converge. I have changed the experiments accordingly to produce one input/target pair **per step** and train on it once (batch_size 1) for e.g. ST_steps many times (not ST_steps many times on the initial single input/target pair); see the per-step loops in the experiment diffs below.
- Not sure about this one, but: training only seems to save the *output* (i.e., the prediction, not the net's weight states)? Semantically this changes the 3D trajectories from the paper:
- from "the trajectory doesn't change anymore because the *weights* are always the same", i.e. the backprop gradient doesn't change anything because the prediction loss is practically nonexistent,
- to "the net has learned to return the input vector 1:1 (id_function, yes) and the *output* prediction is the *same* every time". Eventually weights == output == target_data, but we are interested in the trajectory of the weight states during learning, not really in the output, I guess (we know the output will eventually converge to the right prediction of the weights, but not how the weights develop during training to accommodate this ability). Logging target_data would be better, because that is basically the weights we are aiming for at each step. That's what I am using now, at least.
- the robustness test doesn't seem to self-apply the prediction currently, it only changes the weights (apply_weights ≠ self-application), right? That's why the README has the notice "never fails for the smaller values": it only adds an epsilon small enough not to destroy the fixpoint property (from 10^-6 onwards) and never actually tries to self-apply. If the weights plus sufficiently small noise still form a fixpoint, the property will always hold without change (i.e., without the actual self-application). Also, the noise is *on the input*, which is a robustness test for the id_function, yes, while the paper experiment puts the noise *on the weights*. Semantically, noise on the input asks "can the same net/weights produce the same output even when we change the input?", which of course they cannot. But the change in the output may be small enough to stay within the epsilon degree of change and therefore not lose the fixpoint property.
The robustness experiment in the paper tests self-application resistance, i.e. how much faster the nets lose prediction accuracy under self-application when the weights get x amount of noise. They all lose precision even without noise (see the paper: self-application is by nature a value-degrading operation, pushing predictions closer to 0-values, which are "easier to predict"); it is "just" the visualisation of how much faster a net collapses to the zero-fixpoint with different amounts of noise on the weights (and, since the nets sample their input from within their own weights, on the input as well; weights => input). A sketch of this procedure follows the list below.
- getting a random digit for the path destroys the save order, no? Makes finding stuff tricky. IIRC that's why Steffen used timestamps; they are ordered ascendingly.
- the normalize() is different from the paper, right? It normalizes over len(state_dict) = 14, not over the positional encoding of each layer / cell / weight value?
- test_for_fixpoints doesn't return or set the id_functions array? How does that work? Do you just filter all nets by the matching "string" property somewhere?
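As referenced above, here is a minimal sketch of the paper-style robustness test as I understand it. This is hypothetical glue code, not the repo implementation: the helper `robustness_steps` is made up, and it assumes the Net API from network.py (`input_weight_matrix`, `create_target_weights`, `self_application`) and `is_identity_function` from functionalities_test.py. The point is that the noise goes on the *weights* and the net is then repeatedly self-applied until the fixpoint property breaks.

```python
import torch
from functionalities_test import is_identity_function

def robustness_steps(net, eps, zero_epsilon=1e-5, max_steps=1000):
    # 1) Perturb every weight in place with uniform noise in [-eps, eps]
    #    (a variant of the paper's "+- prng() * eps" scheme).
    for layer_name in net.state_dict():
        tensor = net.state_dict()[layer_name]  # reference to the live parameters
        tensor.add_((torch.rand_like(tensor) * 2 - 1) * eps)
    # 2) Self-apply until the net stops being an identity function;
    #    the surviving step count is the "resistance" at this noise level.
    steps = 0
    while steps < max_steps:
        input_data = net.input_weight_matrix()
        target_data = net.create_target_weights(input_data)
        if not is_identity_function(net, input_data, target_data, zero_epsilon):
            break
        net = net.self_application(input_data, 1, 1)  # one SA step, log_step_size=1
        steps += 1
    return steps
```

Comparing these step counts across eps in {10^-6, ..., 10^-9} (and against eps = 0) would then reproduce the "how much faster does it collapse" comparison from the paper.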

View File

@ -53,10 +53,12 @@ class SelfTrainExperiment:
net_name = f"ST_net_{str(i)}"
net = Net(self.net_input_size, self.net_hidden_size, self.net_out_size, net_name)
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(self.epochs, self.log_step_size, self.net_learning_rate, input_data, target_data)
for _ in range(self.epochs):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
print(f"\nLast weight matrix (epoch: {self.epochs}):\n{net.input_weight_matrix()}\nLossHistory: {net.loss_history[-10:]}")
self.nets.append(net)
def weights_evolution_3d_experiment(self):
@ -122,17 +124,18 @@ class SelfApplicationExperiment:
net = Net(self.net_input_size, self.net_hidden_size, self.net_out_size, net_name
)
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
for _ in range(self.SA_steps):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
if self.train_nets == "before_SA":
net.self_train(self.ST_steps, self.log_step_size, self.net_learning_rate, input_data, target_data)
net.self_application(input_data, self.SA_steps, self.log_step_size)
elif self.train_nets == "after_SA":
net.self_application(input_data, self.SA_steps, self.log_step_size)
net.self_train(self.ST_steps, self.log_step_size, self.net_learning_rate, input_data, target_data)
else:
net.self_application(input_data, self.SA_steps, self.log_step_size)
if self.train_nets == "before_SA":
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
net.self_application(input_data, self.SA_steps, self.log_step_size)
elif self.train_nets == "after_SA":
net.self_application(input_data, self.SA_steps, self.log_step_size)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
else:
net.self_application(input_data, self.SA_steps, self.log_step_size)
self.nets.append(net)
@ -217,10 +220,12 @@ class SoupExperiment:
# Self-training each network in the population
for j in range(self.population_size):
net = self.population[j]
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(self.ST_steps, self.log_step_size, self.net_learning_rate, input_data, target_data)
for _ in range(self.ST_steps):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
# Testing for fixpoints after each batch of ST steps to see relevant data
if i % self.ST_steps == 0:
@ -323,18 +328,23 @@ class MixedSettingExperiment:
for i in loop_population_size:
net = self.nets[i]
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
if self.train_nets == "before_SA":
net.self_train(self.ST_steps_between_SA, self.log_step_size, self.net_learning_rate, input_data,
target_data)
for _ in range(self.ST_steps_between_SA):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
input_data = net.input_weight_matrix()
net.self_application(input_data, self.SA_steps, self.log_step_size)
elif self.train_nets == "after_SA":
net.self_application(input_data, self.SA_steps, self.log_step_size)
net.self_train(self.ST_steps_between_SA, self.log_step_size, self.net_learning_rate, input_data,
target_data)
elif self.train_nets == "after_SA":
input_data = net.input_weight_matrix()
net.self_application(input_data, self.SA_steps, self.log_step_size)
for _ in range(self.ST_steps_between_SA):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
print(f"\nLast weight matrix (epoch: {j}):\n{net.input_weight_matrix()}\nLossHistory: {net.loss_history[-10:]}")
test_for_fixpoints(self.fixpoint_counters, self.nets)
# Round the result so we don't run into problems later with the exact floating-point representation
fixpoints_percentage = round((self.fixpoint_counters["fix_zero"] + self.fixpoint_counters[
@ -411,8 +421,10 @@ class RobustnessExperiment:
self.nets = []
# Create population:
self.populate_environment()
print("Nets:\n", self.nets)
self.count_fixpoints()
[print(net.is_fixpoint) for net in self.nets]
self.test_robustness()
def populate_environment(self):
@ -423,14 +435,15 @@ class RobustnessExperiment:
net_name = f"net_{str(i)}"
net = Net(self.net_input_size, self.net_hidden_size, self.net_out_size, net_name)
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(self.ST_steps, self.log_step_size, self.net_learning_rate, input_data, target_data)
for _ in range(self.ST_steps):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
self.nets.append(net)
def test_robustness(self):
test_for_fixpoints(self.fixpoint_counters, self.nets, self.id_functions)
#test_for_fixpoints(self.fixpoint_counters, self.nets, self.id_functions)
zero_epsilon = pow(10, -5)
data = [[0 for _ in range(10)] for _ in range(len(self.id_functions))]
@ -460,11 +473,16 @@ class RobustnessExperiment:
while still_id_func and data[i][j] <= 1000:
data[i][j] += 1
new_weights = original_net_clone.create_target_weights(changed_weights)
original_net_clone = original_net_clone.apply_weights(original_net_clone, new_weights)
input_data = original_net_clone.input_weight_matrix()
original_net_clone = original_net_clone.self_application(input_data, 1, self.log_step_size)
#new_weights = original_net_clone.create_target_weights(changed_weights)
#original_net_clone = original_net_clone.apply_weights(original_net_clone, new_weights)
still_id_func = is_identity_function(original_net_clone, input_data, target_data, zero_epsilon)
print(f"Data {data}")
if data.count(0) == 10:
print(f"There is no network resisting the robustness test.")
text = f"For this population of \n {self.population_size} networks \n there is no" \
@ -476,7 +494,7 @@ class RobustnessExperiment:
def count_fixpoints(self):
exp_details = f"ST steps: {self.ST_steps}"
test_for_fixpoints(self.fixpoint_counters, self.nets)
self.id_functions = test_for_fixpoints(self.fixpoint_counters, self.nets)
bar_chart_fixpoints(self.fixpoint_counters, self.population_size, self.directory_name, self.net_learning_rate,
exp_details)

View File

@ -98,6 +98,7 @@ def test_for_fixpoints(fixpoint_counter: Dict, nets: List, id_functions=[]):
fixpoint_counter["other_func"] += 1
nets[i].is_fixpoint = "other_func"
return id_functions
def changing_rate(x_new, x_old):
return x_new - x_old

journal_basins.py (new file, 186 additions)
View File

@ -0,0 +1,186 @@
import os
from tqdm import tqdm
import random
import copy
from functionalities_test import is_identity_function
from network import Net
from visualization import plot_3d_self_train, plot_loss
import numpy as np
from sklearn.metrics import mean_absolute_error as MAE
from sklearn.metrics import mean_squared_error as MSE
def prng():
return random.random()
def l1(tup):
a, b = tup
return abs(a-b)
def mean_invariate_manhattan_distance(X, Y):
# One of these one-liners that might be smart or really dumb. Goal is to find pairwise
# distances of ascending values, i.e. mean(abs(min1_X - min1_Y), abs(min2_X - min2_Y), ...).
# Idea was to find weight sets that have the same values but just in different positions, which
# would make this distance 0.
return np.mean(list(map(l1, zip(sorted(X), sorted(Y)))))
def distance_matrix(nets, distance="MIM", print_it=True):
matrix = [[0 for _ in range(len(nets))] for _ in range(len(nets))]
for net in range(len(nets)):
weights = nets[net].input_weight_matrix()[:,0]
for other_net in range(len(nets)):
other_weights = nets[other_net].input_weight_matrix()[:,0]
if distance in ["MSE"]:
matrix[net][other_net] = MSE(weights, other_weights)
elif distance in ["MAE"]:
matrix[net][other_net] = MAE(weights, other_weights)
elif distance in ["MIM"]:
matrix[net][other_net] = mean_invariate_manhattan_distance(weights, other_weights)
if print_it:
print(f"\nDistance matrix [{distance}]:")
[print(row) for row in matrix]
return matrix
class SpawnExperiment:
@staticmethod
def apply_noise(network, noise: float):
""" Changing the weights of a network to values + noise """
for layer_id, layer_name in enumerate(network.state_dict()):
for line_id, line_values in enumerate(network.state_dict()[layer_name]):
for weight_id, weight_value in enumerate(network.state_dict()[layer_name][line_id]):
#network.state_dict()[layer_name][line_id][weight_id] = weight_value + noise
if prng() < 0.5:
network.state_dict()[layer_name][line_id][weight_id] = weight_value + noise
else:
network.state_dict()[layer_name][line_id][weight_id] = weight_value - noise
return network
def __init__(self, population_size, log_step_size, net_input_size, net_hidden_size, net_out_size, net_learning_rate,
epochs, ST_steps, noise, directory_name) -> None:
self.population_size = population_size
self.log_step_size = log_step_size
self.net_input_size = net_input_size
self.net_hidden_size = net_hidden_size
self.net_out_size = net_out_size
self.net_learning_rate = net_learning_rate
self.epochs = epochs
self.ST_steps = ST_steps
self.loss_history = []
self.nets = []
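# nb: 10e-5 is 1e-4 in Python (10 * 10^-5), not 10^-5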
self.noise = noise or 10e-5
print("\nNOISE:", self.noise)
self.directory_name = directory_name
os.mkdir(self.directory_name)
self.populate_environment()
self.spawn_and_continue()
self.weights_evolution_3d_experiment()
#self.visualize_loss()
distance_matrix(self.nets)
def populate_environment(self):
loop_population_size = tqdm(range(self.population_size))
for i in loop_population_size:
loop_population_size.set_description("Populating experiment %s" % i)
net_name = f"ST_net_{str(i)}"
net = Net(self.net_input_size, self.net_hidden_size, self.net_out_size, net_name)
for _ in range(self.ST_steps):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
#print(f"\nLast weight matrix (epoch: {self.epochs}):\n{net.input_weight_matrix()}\nLossHistory: {net.loss_history[-10:]}")
self.nets.append(net)
def spawn_and_continue(self, number_spawns:int = 5):
# For every initial net {i} after populating (that is a fixpoint after the first epoch):
for i in range(self.population_size):
net = self.nets[i]
net_input_data = net.input_weight_matrix()
net_target_data = net.create_target_weights(net_input_data)
if is_identity_function(net, net_input_data, net_target_data):
print(f"\nNet {i} is fixpoint")
#print("\nNet weights before training\n", target_data)
# Clone the fixpoint x times and add (+-)self.noise to weight-sets randomly;
# To plot clones starting after first epoch (z=ST_steps), set that as start_time!
for j in range(number_spawns):
clone = Net(net.input_size, net.hidden_size, net.out_size, f"ST_net_{str(i)}_clone_{str(j)}", start_time=self.ST_steps)
clone.load_state_dict(copy.deepcopy(net.state_dict()))
rand_noise = prng() * self.noise
clone = self.apply_noise(clone, rand_noise)
# Then finish training each clone {j} (for the remaining (epochs - 1) * ST_steps) and add it to nets for plotting;
for _ in range(self.epochs - 1):
for _ in range(self.ST_steps):
input_data = clone.input_weight_matrix()
target_data = clone.create_target_weights(input_data)
clone.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
#print(f"clone {j} last weights: {target_data}, noise {noise}")
if is_identity_function(clone, input_data, target_data):
print(f"Clone {j} (of net_{i}) is fixpoint. \nMSE(j,i): {MSE(net_target_data, target_data)}, \nMAE(j,i): {MAE(net_target_data, target_data)}\n")
self.nets.append(clone)
# Finally, take parent net {i} and finish its training for comparison to the clones' development.
for _ in range(self.epochs - 1):
for _ in range(self.ST_steps):
input_data = net.input_weight_matrix()
target_data = net.create_target_weights(input_data)
net.self_train(1, self.log_step_size, self.net_learning_rate, input_data, target_data)
#print("\nNet weights after training \n", target_data)
else:
print(f"Net {i} is not a fixpoint, skipping spawns.")
def weights_evolution_3d_experiment(self):
exp_name = f"ST_{str(len(self.nets))}_nets_3d_weights_PCA"
return plot_3d_self_train(self.nets, exp_name, self.directory_name, self.log_step_size)
def visualize_loss(self):
for i in range(len(self.nets)):
net_loss_history = self.nets[i].loss_history
self.loss_history.append(net_loss_history)
plot_loss(self.loss_history, self.directory_name)
if __name__ == "__main__":
NET_INPUT_SIZE = 4
NET_OUT_SIZE = 1
# Define number of runs & name:
ST_runs = 1
ST_runs_name = "test-27"
ST_steps = 1500
ST_epochs = 2
ST_log_step_size = 10
# Define number of networks & their architecture
ST_population_size = 1
ST_net_hidden_size = 2
ST_net_learning_rate = 0.04
ST_name_hash = random.getrandbits(32)
print(f"Running the Spawn experiment:")
for noise_factor in range(3, 6):
SpawnExperiment(
population_size=ST_population_size,
log_step_size=ST_log_step_size,
net_input_size=NET_INPUT_SIZE,
net_hidden_size=ST_net_hidden_size,
net_out_size=NET_OUT_SIZE,
net_learning_rate=ST_net_learning_rate,
epochs=ST_epochs,
ST_steps=ST_steps,
noise=pow(10,-noise_factor),
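# nb: pow(10, -noise_factor) equals 1e-noise_factor; the "10e-" directory label below is 10x that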
directory_name=f"./experiments/spawn_basin/{ST_name_hash}_10e-{noise_factor}"
)

main.py (16 changes)
View File

@ -40,16 +40,16 @@ if __name__ == '__main__':
NET_OUT_SIZE = 1
""" ------------------------------------- Self-training (ST) experiment ------------------------------------- """
run_ST_experiment_bool = True
run_ST_experiment_bool = False
# Define number of runs & name:
ST_runs = 3
ST_runs = 1
ST_runs_name = "test-27"
ST_epochs = 500
ST_log_step_size = 5
ST_epochs = 1000
ST_log_step_size = 10
# Define number of networks & their architecture
ST_population_size = 10
ST_population_size = 1
ST_net_hidden_size = 2
ST_net_learning_rate = 0.04
@ -58,7 +58,7 @@ if __name__ == '__main__':
""" ----------------------------------- Self-application (SA) experiment ----------------------------------- """
run_SA_experiment_bool = True
run_SA_experiment_bool = False
# Define number of runs, name, etc.:
SA_runs_name = "test-17"
@ -82,7 +82,7 @@ if __name__ == '__main__':
""" -------------------------------------------- Soup experiment -------------------------------------------- """
run_soup_experiment_bool = True
run_soup_experiment_bool = False
# Define number of runs, name, etc.:
soup_runs = 1
@ -107,7 +107,7 @@ if __name__ == '__main__':
""" ------------------------------------------- Mixed experiment -------------------------------------------- """
run_mixed_experiment_bool = True
run_mixed_experiment_bool = False
# Define number of runs, name, etc.:
mixed_runs_name = "test-17"

View File

@ -1,4 +1,4 @@
from __future__ import annotations
#from __future__ import annotations
import copy
import torch
import torch.nn as nn
@ -33,7 +33,7 @@ class Net(nn.Module):
return False
@staticmethod
def apply_weights(network: Net, new_weights: Tensor) -> Net:
def apply_weights(network, new_weights: Tensor):
""" Changing the weights of a network to new given values. """
i = 0
@ -46,9 +46,12 @@ class Net(nn.Module):
return network
def __init__(self, i_size: int, h_size: int, o_size: int, name=None) -> None:
super().__init__()
def __init__(self, i_size: int, h_size: int, o_size: int, name=None, start_time=1) -> None:
super().__init__()
self.start_time = start_time
self.name = name
self.input_size = i_size
self.hidden_size = h_size
@ -61,6 +64,7 @@ class Net(nn.Module):
self.s_application_weights_history = []
self.loss_history = []
self.trained = False
self.number_trained = 0
self.is_fixpoint = ""
@ -75,15 +79,13 @@ class Net(nn.Module):
return x
def normalize(self, value):
def normalize(self, value, norm):
""" Normalizing the values >= 1 and adding pow(10, -8) to the values equal to 0 """
if value >= 1:
return value/len(self.state_dict())
elif value == 0:
return pow(10, -8)
if norm > 1:
return float(value) / float(norm)
else:
return value
return float(value)
def input_weight_matrix(self) -> Tensor:
""" Calculating the input tensor formed from the weights of the net """
@ -92,11 +94,13 @@ class Net(nn.Module):
weight_matrix = np.arange(self.no_weights * 4).reshape(self.no_weights, 4).astype("f")
i = 0
max_layer_id = len(self.state_dict()) - 1
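# each row of the input matrix encodes one weight as
# (weight value, normalized layer id, normalized cell id, normalized weight id)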
for layer_id, layer_name in enumerate(self.state_dict()):
max_cell_id = len(self.state_dict()[layer_name]) - 1
for line_id, line_values in enumerate(self.state_dict()[layer_name]):
max_weight_id = len(line_values) - 1
for weight_id, weight_value in enumerate(self.state_dict()[layer_name][line_id]):
weight_matrix[i] = weight_value.item(), self.normalize(layer_id), self.normalize(weight_id), self.normalize(line_id)
weight_matrix[i] = weight_value.item(), self.normalize(layer_id, max_layer_id), self.normalize(line_id, max_cell_id), self.normalize(weight_id, max_weight_id)
i += 1
return torch.from_numpy(weight_matrix)
@ -108,9 +112,10 @@ class Net(nn.Module):
self.trained = True
for training_step in range(training_steps):
self.number_trained +=1
optimizer.zero_grad()
output = self(input_data)
loss = F.mse_loss(output, target_data)
optimizer.zero_grad()
loss.backward()
optimizer.step()
@ -118,22 +123,22 @@ class Net(nn.Module):
# If it is a soup/mixed env. save weights only at the end of all training steps (aka a soup/mixed epoch)
if "soup" not in self.name and "mixed" not in self.name:
# If self-training steps are lower than 10, then append weight history after each ST step.
if training_steps < 10:
self.s_train_weights_history.append(output.T.detach().numpy())
self.loss_history.append(round(loss.detach().numpy().item(), 5))
if self.number_trained < 10:
self.s_train_weights_history.append(target_data.T.detach().numpy())
self.loss_history.append(loss.detach().numpy().item())
else:
if training_step % log_step_size == 0:
self.s_train_weights_history.append(output.T.detach().numpy())
self.loss_history.append(round(loss.detach().numpy().item(), 5))
if self.number_trained % log_step_size == 0:
self.s_train_weights_history.append(target_data.T.detach().numpy())
self.loss_history.append(loss.detach().numpy().item())
# Saving weights only at the end of a soup/mixed exp. epoch.
if "soup" in self.name or "mixed" in self.name:
self.s_train_weights_history.append(output.T.detach().numpy())
self.loss_history.append(round(loss.detach().numpy().item(), 5))
self.s_train_weights_history.append(target_data.T.detach().numpy())
self.loss_history.append(loss.detach().numpy().item())
return output.detach().numpy(), loss, self.loss_history
def self_application(self, weights_matrix: Tensor, SA_steps: int, log_step_size: int) -> Net:
def self_application(self, weights_matrix: Tensor, SA_steps: int, log_step_size: int):
""" Inputting the weights of a network to itself for a number of steps, without backpropagation. """
data = copy.deepcopy(weights_matrix)
@ -162,7 +167,7 @@ class Net(nn.Module):
return new_net
def attack(self, other_net: Net) -> Net:
def attack(self, other_net):
other_net_weights = other_net.input_weight_matrix()
SA_steps = 1
log_step_size = 1

View File

@ -34,7 +34,7 @@ def plot_loss(loss_array, directory_name, batch_size=1):
plt.xlabel("Epochs")
plt.ylabel("Loss")
filepath = f"A:/Bachelorarbeit_git/thesis_code/{directory_name}"
filepath = f"./{directory_name}"
filename = f"{filepath}/_nets_loss_function.png"
plt.savefig(f"{filename}")
@ -66,7 +66,7 @@ def bar_chart_fixpoints(fixpoint_counter: Dict, population_size: int, directory_
plt.bar(range(len(fixpoint_counter)), list(fixpoint_counter.values()), align='center')
plt.xticks(range(len(fixpoint_counter)), list(fixpoint_counter.keys()))
filepath = f"A:/Bachelorarbeit_git/thesis_code/{directory_name}"
filepath = f"./{directory_name}"
filename = f"{filepath}/{str(population_size)}_nets_fixpoints_barchart.png"
plt.savefig(f"{filename}")
@ -89,7 +89,7 @@ def plot_3d(matrices_weights_history, folder_name, population_size, z_axis_legen
for i in loop_matrices_weights_history:
loop_matrices_weights_history.set_description("Plotting weights 3D PCA %s" % i)
weight_matrix = matrices_weights_history[i]
weight_matrix, start_time = matrices_weights_history[i]
weight_matrix = np.array(weight_matrix)
n, x, y = weight_matrix.shape
weight_matrix = weight_matrix.reshape(n, x * y)
@ -101,7 +101,7 @@ def plot_3d(matrices_weights_history, folder_name, population_size, z_axis_legen
for j in range(len(weight_matrix_pca)):
xdata.append(weight_matrix_pca[j][0])
ydata.append(weight_matrix_pca[j][1])
zdata = np.arange(1, len(ydata)*batch_size+1, batch_size).tolist()
zdata = np.arange(start_time, len(ydata)*batch_size+start_time, batch_size).tolist()
ax.plot3D(xdata, ydata, zdata)
ax.scatter(np.array(xdata), np.array(ydata), np.array(zdata), s=7)
@ -120,7 +120,7 @@ def plot_3d(matrices_weights_history, folder_name, population_size, z_axis_legen
ax.set_ylabel("PCA Y")
ax.set_zlabel(f"Epochs")
filepath = f"A:/Bachelorarbeit_git/thesis_code/{folder_name}"
filepath = f"./{folder_name}"
filename = f"{filepath}/{exp_name}{is_trained}.png"
if os.path.isfile(filename):
letters = string.ascii_lowercase
@ -129,8 +129,8 @@ def plot_3d(matrices_weights_history, folder_name, population_size, z_axis_legen
else:
plt.savefig(f"{filename}")
# plt.show()
plt.clf()
plt.show()
#plt.clf()
def plot_3d_self_train(nets_array: List, exp_name: String, directory_name: String, batch_size: int):
@ -142,7 +142,7 @@ def plot_3d_self_train(nets_array: List, exp_name: String, directory_name: Strin
for i in loop_nets_array:
loop_nets_array.set_description("Creating ST weights history %s" % i)
matrices_weights_history.append(nets_array[i].s_train_weights_history)
matrices_weights_history.append( (nets_array[i].s_train_weights_history, nets_array[i].start_time) )
z_axis_legend = "epochs"
@ -158,7 +158,7 @@ def plot_3d_self_application(nets_array: List, exp_name: String, directory_name:
for i in loop_nets_array:
loop_nets_array.set_description("Creating SA weights history %s" % i)
matrices_weights_history.append(nets_array[i].s_application_weights_history)
matrices_weights_history.append( (nets_array[i].s_application_weights_history, nets_array[i].start_time) )
if nets_array[i].trained:
is_trained = "_trained"
@ -202,7 +202,7 @@ def line_chart_fixpoints(fixpoint_counters_history: list, epochs: int, ST_steps_
plt.plot(ST_steps_per_SA, fixpoint_counters_history, color="green", marker="o")
filepath = f"A:/Bachelorarbeit_git/thesis_code/{directory_name}"
filepath = f"./{directory_name}"
filename = f"{filepath}/{str(population_size)}_nets_fixpoints_linechart.png"
plt.savefig(f"{filename}")
@ -223,7 +223,7 @@ def box_plot(data, directory_name, population_size):
axs[1].boxplot(data)
axs[1].set_title('Box plot')
filepath = f"A:/Bachelorarbeit_git/thesis_code/{directory_name}"
filepath = f"./{directory_name}"
filename = f"{filepath}/{str(population_size)}_nets_fixpoints_barchart.png"
plt.savefig(f"{filename}")
@ -232,7 +232,7 @@ def box_plot(data, directory_name, population_size):
def write_file(text, directory_name):
filepath = f"A:/Bachelorarbeit_git/thesis_code/{directory_name}"
filepath = f"./{directory_name}"
f = open(f"{filepath}/experiment.txt", "w+")
f.write(text)
f.close()