from unittest import TestCase
import numpy as np
Errors
ErrorResponse - aggregation
BaseErrorType
BaseErrorType
BaseErrorType (flip_error_response=False)
Base class of an error response type. This class is not used directly by developers, but defines the interface common to all.
RootSumSquaredError
RootSumSquaredError
RootSumSquaredError (flip_error_response=False)
The square root of the sum of the squares of the errors.
RootMeanSquareError
RootMeanSquareError
RootMeanSquareError (flip_error_response=False)
The square root of the mean of the squares of the errors.
SummedError
SummedError
SummedError (flip_error_response=False)
Sum of all errors.
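As a rough guide to how these three aggregations differ, here is a standalone NumPy sketch of the definitions above; it uses plain NumPy rather than the library classes, and the values are illustrative only.
import numpy as np

errors = np.array([1.0, 2.0, 2.0])               # illustrative error values
root_sum_squared = np.sqrt(np.sum(errors**2))    # RootSumSquaredError definition: sqrt(1 + 4 + 4) = 3.0
root_mean_square = np.sqrt(np.mean(errors**2))   # RootMeanSquareError definition: sqrt(9 / 3) ~ 1.732
summed = np.sum(errors)                          # SummedError definition: 5.0
print(root_sum_squared, root_mean_square, summed)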
CurrentError
CurrentError
CurrentError (flip_error_response=False)
The current error, rather than a function of the historical values.
CurrentRMSError
CurrentRMSError
CurrentRMSError (flip_error_response=False)
The current RMS error, rather than a function of the historical values.
SmoothError
SmoothError
SmoothError (flip_error_response=False)
The exponentially smoothed value of the error.
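For illustration only, a minimal sketch of exponential smoothing. Whether smooth_factor weights the previous value or the new one is an assumption here; at smooth_factor=0.5, as in the SmoothError example further below, the two conventions give the same result.
def smooth(previous, new_error, smooth_factor):
    # Assumed convention: smooth_factor is the weight on the previous smoothed value.
    return smooth_factor * previous + (1 - smooth_factor) * new_error

smoothed = 100.0                          # initial error response
for value in [100, 101, 102, 103, 104]:
    smoothed = smooth(smoothed, value, 0.5)
print(smoothed)                           # 103.0625, matching the SmoothError example below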
MovingSumError
MovingSumError
MovingSumError (flip_error_response=False)
The moving sum of the error.
MovingAverageError
MovingAverageError
MovingAverageError (flip_error_response=False)
The moving average of the error.
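The moving-window classes presumably keep a history of recent errors; their window-length parameter is not shown in the signatures above, so the sketch below is purely illustrative of a moving sum and moving average over a fixed window.
from collections import deque

window = deque(maxlen=3)                     # illustrative window length
for e in [1, 2, 3, 4, 5]:
    window.append(e)
moving_sum = sum(window)                     # 3 + 4 + 5 = 12
moving_average = moving_sum / len(window)    # 4.0
print(moving_sum, moving_average)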
WelfordVarianceError
WelfordVarianceError
WelfordVarianceError (flip_error_response=False, population_variance=False)
Welford’s online algorithm for computing sample variance.
This numerically stable algorithm computes the running variance without storing all previous values. It’s particularly useful for streaming data and avoids numerical precision issues that can occur with naive variance calculations.
The algorithm maintains:
- count: number of observations
- mean: running mean
- M2: sum of squared differences from the mean
Variance types:
- Sample variance: M2 / (count - 1). Uses Bessel’s correction (N-1) to provide an unbiased estimate when the data represents a sample from a larger population; this accounts for the loss of one degree of freedom from estimating the mean.
- Population variance: M2 / count. Divides by N when the data represents the entire population of interest, not just a sample.
The choice depends on whether your data is:
- a sample from a larger population → use sample variance (default)
- the complete population → use population variance
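A minimal standalone sketch of the update step described above; WelfordSketch is a hypothetical helper for illustration, not the library implementation.
class WelfordSketch:
    """Running variance via Welford's algorithm: keeps only count, mean and M2."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared differences from the current mean

    def add(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def sample_variance(self):
        return self.m2 / (self.count - 1) if self.count > 1 else 0.0

    def population_variance(self):
        return self.m2 / self.count if self.count > 0 else 0.0

w = WelfordSketch()
for x in [2, 4, 4, 4, 5, 5, 7, 9]:
    w.add(x)
print(w.mean, w.sample_variance(), w.population_variance())  # 5.0, ~4.5714, 4.0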
ErrorResponseFactory
ErrorResponseFactory
ErrorResponseFactory ()
Initialize self. See help(type(self)) for accurate signature.
Error collection - from each iteration
ErrorCollectorFactory
ErrorCollectorFactory
ErrorCollectorFactory ()
Initialize self. See help(type(self)) for accurate signature.
BaseErrorCollector
BaseErrorCollector
BaseErrorCollector (limit, error_response, min=True)
Base class of an error collector. This class is not used directly by developers, but defines the interface common to all.
TotalError
TotalError
TotalError (limit=None, error_response=None, min=None, **cargs)
A class to collect all the errors of the control system run.
TopError
TopError
TopError (limit=None, error_response=None, min=None, **cargs)
A class to collect all the errors of the top-level nodes.
InputsError
InputsError
InputsError (limit=None, error_response=None, min=None, **cargs)
A class to collect the values of the inputs.
ReferencedInputsError
ReferencedInputsError
ReferencedInputsError (limit=None, error_response=None, min=None, **cargs)
A class to collect the values of the inputs subtracted from the reference values.
RewardError
RewardError
RewardError (limit=None, error_response=None, min=None, **cargs)
A class that collects the reward value of the control system run.
FitnessError
FitnessError
FitnessError (limit=None, error_response=None, min=None, **cargs)
A class that collects the fitness value of the control system run.
Examples
rms = RootMeanSquareError()
for i in range(10):
    rms([i])
er = rms.get_error_response()
print(er)
TestCase().assertAlmostEqual(er, 5.338539126015656, places=6)
5.338539126015656
rsse = RootSumSquaredError()
te = TotalError(error_response=rsse, limit=250, min=True)
te.add_error_data([1, 2])
print(te)
err = te.error()
print(err)
TestCase().assertAlmostEqual(err, 2.23606797749979, places=6)
TotalError limit:250, limit_exceeded:False, : RootSumSquaredError error_response:2.23606797749979
2.23606797749979
et = ErrorResponseFactory.createErrorResponse('RootSumSquaredError')
et(102)
print(et.get_error_response())
iprms = ErrorCollectorFactory.createErrorCollector('TotalError')
iprms.set_limit(100)
iprms.set_error_response(et)
print(iprms.error())
102.0
102.0
iprms = BaseErrorCollector.collector('RootMeanSquareError', 'InputsError', 10, flip_error_response=False, min=False)
time_series_example = np.array([[1, 2, 3],
                                [4, 5, 6],
                                [7, 8, 9],
                                [10, 11, 12],
                                [13, 14, 15]])
for ts in time_series_example:
    iprms.add_error_data(ts)
erms = iprms.error()
print(erms)
print(iprms)
TestCase().assertAlmostEqual(erms, 9.092121131323903, places=6)
9.092121131323903
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:9.092121131323903
iprms.reset()
print(iprms)
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:None
iprms2 = BaseErrorCollector.collector('RootMeanSquareError', 'InputsError', 10, flip_error_response=False, min=False)
time_series_example2 = np.array([[1, 2, 3],
                                 [4, 5, 6],
                                 [7, 8, 9],
                                 [10, 11, 12],
                                 [13, 14, 15]])
for ts in time_series_example2:
    iprms2.add_error_data_array(ts)
erms2 = iprms2.error()
print(erms2)
print(iprms2)
TestCase().assertAlmostEqual(erms2, 15.748015748023622, places=6)
15.748015748023622
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:15.748015748023622
iprms1 = BaseErrorCollector.collector('RootMeanSquareError', 'InputsError', 10, flip_error_response=False, min=False)
iprms1.add_error_data([3])
iprms1.add_error_data([5])
erms1 = iprms1.error()
print(erms1)
print(iprms1)
TestCase().assertAlmostEqual(erms1, 4.123105625617661, places=6)
4.123105625617661
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:4.123105625617661
ip_curr_rms = BaseErrorCollector.collector('CurrentRMSError', 'InputsError', 10, flip_error_response=False, min=False)
data = [4, 5, 6]
ip_curr_rms.add_error_data_array(data)
rms = ip_curr_rms.error()
print(rms)
print(ip_curr_rms)
TestCase().assertAlmostEqual(rms, 5.066228051190222, places=6)
5.066228051190222
InputsError limit:10, limit_exceeded:False, : CurrentRMSError error_response:5.066228051190222
refins_rms = BaseErrorCollector.collector('RootMeanSquareError', 'ReferencedInputsError', 10, flip_error_response=False, min=False)
time_series1 = np.array([[1, 2, 3],
                         [4, 5, 6],
                         [7, 8, 9],
                         [10, 11, 12],
                         [13, 14, 15]])
for ts in time_series1:
    refins_rms.add_error_data_array(ts)
erms = refins_rms.error()
print(erms)
print(refins_rms)
TestCase().assertAlmostEqual(erms, 15.748015748023622, places=6)
15.748015748023622
ReferencedInputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:15.748015748023622
ins_sm = BaseErrorCollector.collector('SmoothError', 'InputsError', 10, flip_error_response=False, min=False,
                                      properties={'error_response': {'smooth_factor': 0.9}})
time_series1 = np.array([[1, 2, 3],
                         [4, 5, 6],
                         [7, 8, 9],
                         [10, 11, 12],
                         [13, 14, 15]])
for ts in time_series1:
    ins_sm.add_error_data(ts)
ersm = ins_sm.error()
print(ersm)
print(ins_sm)
TestCase().assertAlmostEqual(ersm, 7.853020188851838, places=6)
7.853020188851838
InputsError limit:10, limit_exceeded:False, : SmoothError error_response:7.853020188851838
ins_sm1 = BaseErrorCollector.collector('SmoothError', 'InputsError', 10, flip_error_response=False, min=False,
                                       properties={'error_response': {'smooth_factor': 0.9}})
time_series1 = np.array([[1, 2, 3],
                         [4, 5, 6],
                         [7, 8, 9],
                         [10, 11, 12],
                         [13, 14, 15]])
for ts in time_series1:
    ins_sm1.add_error_data_array(ts)
ersm1 = ins_sm1.error()
print(ersm1)
print(ins_sm1)
TestCase().assertAlmostEqual(ersm1, 6.161823641446112, places=6)
6.161823641446112
InputsError limit:10, limit_exceeded:False, : SmoothError error_response:6.161823641446112
ins_sm2 = BaseErrorCollector.collector('SmoothError', 'InputsError', 10, flip_error_response=False, min=False,
                                       properties={'error_response': {'smooth_factor': 0.5}})
error_response = ins_sm2.get_error_response()
initial = 100
error_response.set_error_response(initial)
for i in range(5):
    ins_sm2.add_error_data_array([initial + i])
ersm1 = ins_sm2.error()
print(ersm1)
print(ins_sm2)
TestCase().assertAlmostEqual(ersm1, 103.0625, places=6)
103.0625
InputsError limit:10, limit_exceeded:False, : SmoothError error_response:103.0625
time_series1 = np.array([[1, 2, 3],
                         [4, 5, 6],
                         [7, 8, 9],
                         [10, 11, 12],
                         [13, 14, 15]])
norms = np.linalg.norm(time_series1, axis=1)
print(norms)
[ 3.74165739 8.77496439 13.92838828 19.10497317 24.2899156 ]
# Example: Welford's Variance Algorithm
print("=== Welford's Variance Examples ===")

# Test with a simple data set
data = [2, 4, 4, 4, 5, 5, 7, 9]
welford = WelfordVarianceError()

print(f"Processing data: {data}")
for value in data:
    welford(value)
    print(f"Value: {value}, Count: {welford.get_sample_size()}, Mean: {welford.get_mean():.4f}, Variance: {welford.get_error_response():.4f}, StdDev: {welford.get_standard_deviation():.4f}")

# Verify against NumPy
np_var = np.var(data, ddof=1)  # Sample variance (ddof=1)
np_mean = np.mean(data)

print(f"\nNumPy verification:")
print(f"NumPy sample variance: {np_var:.4f}")
print(f"Welford sample variance: {welford.get_error_response():.4f}")
print(f"NumPy mean: {np_mean:.4f}")
print(f"Welford mean: {welford.get_mean():.4f}")

TestCase().assertAlmostEqual(welford.get_error_response(), np_var, places=6)
TestCase().assertAlmostEqual(welford.get_mean(), np_mean, places=6)
=== Welford's Variance Examples ===
Processing data: [2, 4, 4, 4, 5, 5, 7, 9]
Value: 2, Count: 1, Mean: 2.0000, Variance: 0.0000, StdDev: 0.0000
Value: 4, Count: 2, Mean: 3.0000, Variance: 2.0000, StdDev: 1.4142
Value: 4, Count: 3, Mean: 3.3333, Variance: 1.3333, StdDev: 1.1547
Value: 4, Count: 4, Mean: 3.5000, Variance: 1.0000, StdDev: 1.0000
Value: 5, Count: 5, Mean: 3.8000, Variance: 1.2000, StdDev: 1.0954
Value: 5, Count: 6, Mean: 4.0000, Variance: 1.2000, StdDev: 1.0954
Value: 7, Count: 7, Mean: 4.4286, Variance: 2.2857, StdDev: 1.5119
Value: 9, Count: 8, Mean: 5.0000, Variance: 4.5714, StdDev: 2.1381
NumPy verification:
NumPy sample variance: 4.5714
Welford sample variance: 4.5714
NumPy mean: 5.0000
Welford mean: 5.0000
# Example: Welford with array data (using Euclidean norm)
print("\n=== Welford with Array Data ===")

welford_array = WelfordVarianceError()

time_series_data = np.array([[1, 2, 3],
                             [4, 5, 6],
                             [7, 8, 9],
                             [10, 11, 12]])

for i, ts in enumerate(time_series_data):
    norm = np.linalg.norm(ts)
    welford_array(ts)
    print(f"Array {i+1}: {ts}, Norm: {norm:.4f}, Variance: {welford_array.get_error_response():.4f}")

print(f"Final statistics: Mean={welford_array.get_mean():.4f}, Variance={welford_array.get_error_response():.4f}, N={welford_array.get_sample_size()}")

# Verify with NumPy on the norms
norms = [np.linalg.norm(ts) for ts in time_series_data]
np_var_norms = np.var(norms, ddof=1)
print(f"NumPy variance of norms: {np_var_norms:.4f}")

TestCase().assertAlmostEqual(welford_array.get_error_response(), np_var_norms, places=6)
=== Welford with Array Data ===
Array 1: [1 2 3], Norm: 3.7417, Variance: 0.0000
Array 2: [4 5 6], Norm: 8.7750, Variance: 12.6671
Array 3: [7 8 9], Norm: 13.9284, Variance: 25.9436
Array 4: [10 11 12], Norm: 19.1050, Variance: 43.7666
Final statistics: Mean=11.3875, Variance=43.7666, N=4
NumPy variance of norms: 43.7666
# Example: Using Welford Variance with Error Collector
print("\n=== Welford Variance with Error Collector ===")

# Create an error collector using Welford variance for input monitoring
welford_collector = BaseErrorCollector.collector(
    'WelfordVarianceError', 'InputsError',
    limit=5.0, min=False,  # Terminate when variance drops below 5.0
    flip_error_response=False
)

# Simulate some input data with decreasing variance
import random
random.seed(42)  # For reproducible results

print("Simulating input data with decreasing variance:")
for epoch in range(5):
    # Generate data with decreasing variance over time
    variance_scale = 10.0 / (epoch + 1)  # Variance decreases each epoch
    data = [random.gauss(0, variance_scale) for _ in range(3)]

    welford_collector.add_error_data_array(data)
    current_variance = welford_collector.error()

    print(f"Epoch {epoch+1}: Data={[f'{x:.2f}' for x in data]}, Variance={current_variance:.4f}")

    if welford_collector.is_terminated():
        print(f"Terminated at epoch {epoch+1} - variance dropped below limit")
        break

print(f"\nFinal collector state: {welford_collector}")

# Demonstrate population vs sample variance
print("\n=== Population vs Sample Variance ===")
data_pop = [1, 2, 3, 4, 5]

welford_sample = WelfordVarianceError(population_variance=False)
welford_population = WelfordVarianceError(population_variance=True)

for value in data_pop:
    welford_sample(value)
    welford_population(value)

print(f"Data: {data_pop}")
print(f"Sample variance (N-1): {welford_sample.get_error_response():.4f}")
print(f"Population variance (N): {welford_population.get_error_response():.4f}")
print(f"NumPy sample (ddof=1): {np.var(data_pop, ddof=1):.4f}")
print(f"NumPy population (ddof=0): {np.var(data_pop, ddof=0):.4f}")

TestCase().assertAlmostEqual(welford_sample.get_error_response(), np.var(data_pop, ddof=1), places=6)
TestCase().assertAlmostEqual(welford_population.get_error_response(), np.var(data_pop, ddof=0), places=6)
=== Welford Variance with Error Collector ===
Simulating input data with decreasing variance:
Epoch 1: Data=['-1.44', '-1.73', '-1.11'], Variance=0.0000
Epoch 2: Data=['3.51', '-0.64', '-7.49'], Variance=16.7175
Epoch 3: Data=['1.11', '-0.89', '-0.72'], Variance=13.1900
Epoch 4: Data=['0.29', '0.58', '2.91'], Variance=9.1255
Epoch 5: Data=['1.31', '0.22', '-1.48'], Variance=7.5334
Final collector state: InputsError limit:5.0, limit_exceeded:False, : WelfordVarianceError error_response:7.5334062673732385
=== Population vs Sample Variance ===
Data: [1, 2, 3, 4, 5]
Sample variance (N-1): 2.5000
Population variance (N): 2.0000
NumPy sample (ddof=1): 2.5000
NumPy population (ddof=0): 2.0000
# Example: Using set_properties with WelfordVarianceError
print("\n=== Welford Variance with set_properties ===")

# Create an error collector with population variance using properties
welford_pop_collector = BaseErrorCollector.collector(
    'WelfordVarianceError', 'InputsError',
    limit=2.0, min=False,
    properties={
        'error_response': {'population_variance': True}
    }
)

# Create an error collector with sample variance using properties (default)
welford_sample_collector = BaseErrorCollector.collector(
    'WelfordVarianceError', 'InputsError',
    limit=2.0, min=False,
    properties={
        'error_response': {'population_variance': False}
    }
)

# Test data
test_data = [1.0, 2.0, 3.0, 4.0, 5.0]

print("Processing the same data with both population and sample variance:")
print(f"Data: {test_data}")

for value in test_data:
    welford_pop_collector.add_error_data_array([value])
    welford_sample_collector.add_error_data_array([value])

pop_variance = welford_pop_collector.error()
sample_variance = welford_sample_collector.error()

print(f"Population variance (via properties): {pop_variance:.4f}")
print(f"Sample variance (via properties): {sample_variance:.4f}")
print(f"Difference: {abs(sample_variance - pop_variance):.4f}")

# Verify with NumPy
np_pop = np.var(test_data, ddof=0)
np_sample = np.var(test_data, ddof=1)
print(f"NumPy population variance: {np_pop:.4f}")
print(f"NumPy sample variance: {np_sample:.4f}")

TestCase().assertAlmostEqual(pop_variance, np_pop, places=6)
TestCase().assertAlmostEqual(sample_variance, np_sample, places=6)
=== Welford Variance with set_properties ===
Processing the same data with both population and sample variance:
Data: [1.0, 2.0, 3.0, 4.0, 5.0]
Population variance (via properties): 2.0000
Sample variance (via properties): 2.5000
Difference: 0.5000
NumPy population variance: 2.0000
NumPy sample variance: 2.5000