from unittest import TestCase
import numpy as np

Errors
ErrorResponse - aggregation
BaseErrorType
BaseErrorType (flip_error_response=False)
Base class of an error response type. This class is not used directly by developers, but defines the interface common to all.
RootSumSquaredError
RootSumSquaredError (flip_error_response=False)
The square root of the sum of the squares of the errors.
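As an illustrative sketch (not the class implementation), the aggregation corresponds to the square root of the sum of squared errors; it reproduces the 2.2361 figure from the RootSumSquaredError example in the Examples section, where the errors are [1, 2].
import numpy as np
errors = np.array([1.0, 2.0])
rss = np.sqrt(np.sum(errors ** 2))  # root of the sum of squared errors
print(rss)  # 2.23606797749979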
RootMeanSquareError
RootMeanSquareError (flip_error_response=False)
The square root of the mean of the sum of the squares of the errors.
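A minimal sketch of the same aggregation with NumPy, assuming each call contributes one error value; it reproduces the 5.3385 figure from the first example in the Examples section (errors 0 through 9).
import numpy as np
errors = np.arange(10, dtype=float)
rms = np.sqrt(np.sum(errors ** 2) / len(errors))  # root of the mean of the summed squares
print(rms)  # 5.338539126015656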
SummedError
SummedError (flip_error_response=False)
Sum of all errors.
CurrentError
CurrentError (flip_error_response=False)
The current error, rather than a function of the historical values.
CurrentRMSError
CurrentRMSError (flip_error_response=False)
The current RMS error, rather than a function of the historical values.
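A hedged sketch of the "current" interpretation, based on the CurrentRMSError example in the Examples section: only the most recent data contributes, aggregated as the root of the mean of its squared elements.
import numpy as np
current = np.array([4.0, 5.0, 6.0])          # the latest data only; history is ignored
current_rms = np.sqrt(np.mean(current ** 2))
print(current_rms)  # 5.066228051190222, as in the CurrentRMSError example below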
SmoothError
SmoothError (flip_error_response=False)
The exponential smoothed value of the error.
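The worked SmoothError examples below are consistent with the standard exponential smoothing recurrence, smoothed = smooth_factor * smoothed + (1 - smooth_factor) * error, starting from zero unless an initial value is supplied via set_error_response. A standalone sketch under that assumption:
smooth_factor = 0.5
smoothed = 100.0  # assumed initial value, mirroring set_error_response(100) in the example below
for error in [100, 101, 102, 103, 104]:
    smoothed = smooth_factor * smoothed + (1 - smooth_factor) * error
print(smoothed)  # 103.0625, matching the SmoothError example with smooth_factor=0.5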
MovingSumError
MovingSumError (flip_error_response=False)
The moving sum of the error.
MovingAverageError
MovingAverageError (flip_error_response=False)
The moving average of the error.
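MovingSumError and MovingAverageError aggregate over a sliding window of recent errors; the window length is not part of the signatures shown above, so the following is a standalone sketch of the concept with an arbitrary window of three values rather than a demonstration of the class API.
from collections import deque
window = deque(maxlen=3)  # illustrative window length; not a parameter of the classes above
for error in [1.0, 2.0, 3.0, 4.0, 5.0]:
    window.append(error)
moving_sum = sum(window)                     # moving sum over the last three errors: 12.0
moving_average = moving_sum / len(window)    # moving average over the same window: 4.0
print(moving_sum, moving_average)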
WelfordVarianceError
WelfordVarianceError (flip_error_response=False, population_variance=False)
Welford’s online algorithm for computing sample variance.
This numerically stable algorithm computes the running variance without storing all previous values. It’s particularly useful for streaming data and avoids numerical precision issues that can occur with naive variance calculations.
The algorithm maintains:
- count: number of observations
- mean: running mean
- M2: sum of squared differences from the mean
Variance types:
- Sample variance: M2 / (count - 1) - uses Bessel’s correction (N-1) to provide an unbiased estimate when the data represents a sample from a larger population. This accounts for the loss of one degree of freedom from estimating the mean.
- Population variance: M2 / count - divides by N when the data represents the entire population of interest, not just a sample.
The choice depends on whether your data is:
- A sample from a larger population → use sample variance (default)
- The complete population → use population variance
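A minimal sketch of the update step described above, maintaining count, mean and M2 independently of the class; it reproduces the figures from the Welford examples in the Examples section.
def welford_update(count, mean, m2, x):
    # Fold a new value x into the running statistics.
    count += 1
    delta = x - mean
    mean += delta / count
    m2 += delta * (x - mean)
    return count, mean, m2

count, mean, m2 = 0, 0.0, 0.0
for x in [2, 4, 4, 4, 5, 5, 7, 9]:
    count, mean, m2 = welford_update(count, mean, m2, x)
print(mean, m2 / (count - 1), m2 / count)  # mean 5.0, sample variance 4.5714, population variance 4.0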
ErrorResponseFactory
ErrorResponseFactory ()
Initialize self. See help(type(self)) for accurate signature.
Error collection - from each iteration
ErrorCollectorFactory
ErrorCollectorFactory ()
Initialize self. See help(type(self)) for accurate signature.
BaseErrorCollector
BaseErrorCollector (limit, error_response, min=True)
Base class of an error collector. This class is not used directly by developers, but defines the interface common to all.
TotalError
TotalError (limit=None, error_response=None, min=None, **cargs)
A class to collect all the errors of the control system run.
TopError
TopError (limit=None, error_response=None, min=None, **cargs)
A class to collect all the errors of the top-level nodes.
InputsError
InputsError (limit=None, error_response=None, min=None, **cargs)
A class to collect the values of the inputs.
ReferencedInputsError
ReferencedInputsError (limit=None, error_response=None, min=None, **cargs)
A class to collect the input values subtracted from the reference values.
RewardError
RewardError (limit=None, error_response=None, min=None, **cargs)
A class that collects the reward value of the control system run.
FitnessError
FitnessError (limit=None, error_response=None, min=None, **cargs)
A class that collects the fitness value of the control system run.
Examples
rms = RootMeanSquareError()
for i in range(10):
    rms([i])
er = rms.get_error_response()
print(er)
TestCase().assertAlmostEqual(er, 5.338539126015656, places=6)
5.338539126015656
rsse = RootSumSquaredError()
te = TotalError(error_response=rsse, limit=250,min=True)
te.add_error_data([1, 2])
print(te)
err=te.error()
print(err)
TestCase().assertAlmostEqual(err, 2.23606797749979, places=6)
TotalError limit:250, limit_exceeded:False, : RootSumSquaredError error_response:2.23606797749979
2.23606797749979
et = ErrorResponseFactory.createErrorResponse('RootSumSquaredError')
et(102)
print(et.get_error_response())
iprms = ErrorCollectorFactory.createErrorCollector('TotalError')
iprms.set_limit(100)
iprms.set_error_response(et)
print(iprms.error())
102.0
102.0
iprms = BaseErrorCollector.collector( 'RootMeanSquareError','InputsError', 10, flip_error_response=False, min=False)
time_series_example = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]])
for ts in time_series_example:
    iprms.add_error_data(ts)
erms = iprms.error()
print(erms)
print(iprms)
TestCase().assertAlmostEqual(erms, 9.092121131323903, places=6)
9.092121131323903
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:9.092121131323903
iprms.reset()
print(iprms)
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:None
iprms2 = BaseErrorCollector.collector( 'RootMeanSquareError','InputsError', 10, flip_error_response=False, min=False)
time_series_example2 = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]])
for ts in time_series_example2:
    iprms2.add_error_data_array(ts)
erms2 = iprms2.error()
print(erms2)
print(iprms2)
TestCase().assertAlmostEqual(erms2, 15.748015748023622, places=6)
15.748015748023622
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:15.748015748023622
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:15.748015748023622
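The two figures above differ because the data reaches RootMeanSquareError in different forms: they are consistent with add_error_data passing each element of the array individually, and add_error_data_array reducing each array to its Euclidean norm first (compare the np.linalg.norm cell further down). A sketch reproducing both numbers under that reading:
import numpy as np
ts = np.arange(1, 16, dtype=float).reshape(5, 3)
print(np.sqrt(np.sum(ts ** 2) / ts.size))        # 9.092121131323903  - element-wise, as with add_error_data
norms = np.linalg.norm(ts, axis=1)
print(np.sqrt(np.sum(norms ** 2) / len(norms)))  # 15.748015748023622 - per-array norms, as with add_error_data_array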
iprms1 = BaseErrorCollector.collector( 'RootMeanSquareError','InputsError', 10, flip_error_response=False, min=False)
iprms1.add_error_data([3])
iprms1.add_error_data([5])
erms1 = iprms1.error()
print(erms1)
print(iprms1)
TestCase().assertAlmostEqual(erms1, 4.123105625617661, places=6)
4.123105625617661
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:4.123105625617661
InputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:4.123105625617661
ip_curr_rms = BaseErrorCollector.collector( 'CurrentRMSError','InputsError', 10, flip_error_response=False, min=False)
data = [4, 5, 6]
ip_curr_rms.add_error_data_array(data)
rms = ip_curr_rms.error()
print(rms)
print(ip_curr_rms)
TestCase().assertAlmostEqual(rms, 5.066228051190222, places=6)
5.066228051190222
InputsError limit:10, limit_exceeded:False, : CurrentRMSError error_response:5.066228051190222
refins_rms = BaseErrorCollector.collector( 'RootMeanSquareError','ReferencedInputsError', 10, flip_error_response=False, min=False)
time_series1 = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]])
for ts in time_series1:
    refins_rms.add_error_data_array(ts)
erms = refins_rms.error()
print(erms)
print(refins_rms)
TestCase().assertAlmostEqual(erms, 15.748015748023622, places=6)
15.748015748023622
ReferencedInputsError limit:10, limit_exceeded:False, : RootMeanSquareError error_response:15.748015748023622
ins_sm = BaseErrorCollector.collector( 'SmoothError','InputsError', 10, flip_error_response=False, min=False, properties={'error_response': {'smooth_factor': 0.9}})
time_series1 = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]])
for ts in time_series1:
    ins_sm.add_error_data(ts)
ersm = ins_sm.error()
print(ersm)
print(ins_sm)
TestCase().assertAlmostEqual(ersm, 7.853020188851838, places=6)
7.853020188851838
InputsError limit:10, limit_exceeded:False, : SmoothError error_response:7.853020188851838
ins_sm1 = BaseErrorCollector.collector( 'SmoothError','InputsError', 10, flip_error_response=False, min=False, properties={'error_response': {'smooth_factor': 0.9}})
time_series1 = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]])
for ts in time_series1:
    ins_sm1.add_error_data_array(ts)
ersm1 = ins_sm1.error()
print(ersm1)
print(ins_sm1)
TestCase().assertAlmostEqual(ersm1, 6.161823641446112, places=6)
6.161823641446112
InputsError limit:10, limit_exceeded:False, : SmoothError error_response:6.161823641446112
ins_sm2 = BaseErrorCollector.collector( 'SmoothError','InputsError', 10, flip_error_response=False, min=False, properties={'error_response': {'smooth_factor': 0.5}})
error_response = ins_sm2.get_error_response()
initial = 100
error_response.set_error_response(initial)
for i in range(5):
    ins_sm2.add_error_data_array([initial+i])
ersm1 = ins_sm2.error()
print(ersm1)
print(ins_sm2)
TestCase().assertAlmostEqual(ersm1, 103.0625, places=6)
103.0625
InputsError limit:10, limit_exceeded:False, : SmoothError error_response:103.0625
InputsError limit:10, limit_exceeded:False, : SmoothError error_response:103.0625
time_series1 = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15]])
norms = np.linalg.norm(time_series1, axis=1)
print(norms)
[ 3.74165739 8.77496439 13.92838828 19.10497317 24.2899156 ]
# Example: Welford's Variance Algorithm
print("=== Welford's Variance Examples ===")
# Test with simple data set
data = [2, 4, 4, 4, 5, 5, 7, 9]
welford = WelfordVarianceError()
print(f"Processing data: {data}")
for value in data:
    welford(value)
    print(f"Value: {value}, Count: {welford.get_sample_size()}, Mean: {welford.get_mean():.4f}, Variance: {welford.get_error_response():.4f}, StdDev: {welford.get_standard_deviation():.4f}")
# Verify against NumPy
np_var = np.var(data, ddof=1) # Sample variance (ddof=1)
np_mean = np.mean(data)
print(f"\nNumPy verification:")
print(f"NumPy sample variance: {np_var:.4f}")
print(f"Welford sample variance: {welford.get_error_response():.4f}")
print(f"NumPy mean: {np_mean:.4f}")
print(f"Welford mean: {welford.get_mean():.4f}")
TestCase().assertAlmostEqual(welford.get_error_response(), np_var, places=6)
TestCase().assertAlmostEqual(welford.get_mean(), np_mean, places=6)
=== Welford's Variance Examples ===
Processing data: [2, 4, 4, 4, 5, 5, 7, 9]
Value: 2, Count: 1, Mean: 2.0000, Variance: 0.0000, StdDev: 0.0000
Value: 4, Count: 2, Mean: 3.0000, Variance: 2.0000, StdDev: 1.4142
Value: 4, Count: 3, Mean: 3.3333, Variance: 1.3333, StdDev: 1.1547
Value: 4, Count: 4, Mean: 3.5000, Variance: 1.0000, StdDev: 1.0000
Value: 5, Count: 5, Mean: 3.8000, Variance: 1.2000, StdDev: 1.0954
Value: 5, Count: 6, Mean: 4.0000, Variance: 1.2000, StdDev: 1.0954
Value: 7, Count: 7, Mean: 4.4286, Variance: 2.2857, StdDev: 1.5119
Value: 9, Count: 8, Mean: 5.0000, Variance: 4.5714, StdDev: 2.1381
NumPy verification:
NumPy sample variance: 4.5714
Welford sample variance: 4.5714
NumPy mean: 5.0000
Welford mean: 5.0000
# Example: Welford with array data (using Euclidean norm)
print("\n=== Welford with Array Data ===")
welford_array = WelfordVarianceError()
time_series_data = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]])
for i, ts in enumerate(time_series_data):
    norm = np.linalg.norm(ts)
    welford_array(ts)
    print(f"Array {i+1}: {ts}, Norm: {norm:.4f}, Variance: {welford_array.get_error_response():.4f}")
print(f"Final statistics: Mean={welford_array.get_mean():.4f}, Variance={welford_array.get_error_response():.4f}, N={welford_array.get_sample_size()}")
# Verify with NumPy on the norms
norms = [np.linalg.norm(ts) for ts in time_series_data]
np_var_norms = np.var(norms, ddof=1)
print(f"NumPy variance of norms: {np_var_norms:.4f}")
TestCase().assertAlmostEqual(welford_array.get_error_response(), np_var_norms, places=6)
=== Welford with Array Data ===
Array 1: [1 2 3], Norm: 3.7417, Variance: 0.0000
Array 2: [4 5 6], Norm: 8.7750, Variance: 12.6671
Array 3: [7 8 9], Norm: 13.9284, Variance: 25.9436
Array 4: [10 11 12], Norm: 19.1050, Variance: 43.7666
Final statistics: Mean=11.3875, Variance=43.7666, N=4
NumPy variance of norms: 43.7666
# Example: Using Welford Variance with Error Collector
print("\n=== Welford Variance with Error Collector ===")
# Create an error collector using Welford variance for input monitoring
welford_collector = BaseErrorCollector.collector(
'WelfordVarianceError', 'InputsError',
limit=5.0, min=False, # Terminate when variance drops below 5.0
flip_error_response=False
)
# Simulate some input data with decreasing variance
import random
random.seed(42) # For reproducible results
print("Simulating input data with decreasing variance:")
for epoch in range(5):
    # Generate data with decreasing variance over time
    variance_scale = 10.0 / (epoch + 1) # Variance decreases each epoch
    data = [random.gauss(0, variance_scale) for _ in range(3)]
    welford_collector.add_error_data_array(data)
    current_variance = welford_collector.error()
    print(f"Epoch {epoch+1}: Data={[f'{x:.2f}' for x in data]}, Variance={current_variance:.4f}")
    if welford_collector.is_terminated():
        print(f"Terminated at epoch {epoch+1} - variance dropped below limit")
        break
print(f"\nFinal collector state: {welford_collector}")
# Demonstrate population vs sample variance
print("\n=== Population vs Sample Variance ===")
data_pop = [1, 2, 3, 4, 5]
welford_sample = WelfordVarianceError(population_variance=False)
welford_population = WelfordVarianceError(population_variance=True)
for value in data_pop:
    welford_sample(value)
    welford_population(value)
print(f"Data: {data_pop}")
print(f"Sample variance (N-1): {welford_sample.get_error_response():.4f}")
print(f"Population variance (N): {welford_population.get_error_response():.4f}")
print(f"NumPy sample (ddof=1): {np.var(data_pop, ddof=1):.4f}")
print(f"NumPy population (ddof=0): {np.var(data_pop, ddof=0):.4f}")
TestCase().assertAlmostEqual(welford_sample.get_error_response(), np.var(data_pop, ddof=1), places=6)
TestCase().assertAlmostEqual(welford_population.get_error_response(), np.var(data_pop, ddof=0), places=6)
=== Welford Variance with Error Collector ===
Simulating input data with decreasing variance:
Epoch 1: Data=['-1.44', '-1.73', '-1.11'], Variance=0.0000
Epoch 2: Data=['3.51', '-0.64', '-7.49'], Variance=16.7175
Epoch 3: Data=['1.11', '-0.89', '-0.72'], Variance=13.1900
Epoch 4: Data=['0.29', '0.58', '2.91'], Variance=9.1255
Epoch 5: Data=['1.31', '0.22', '-1.48'], Variance=7.5334
Final collector state: InputsError limit:5.0, limit_exceeded:False, : WelfordVarianceError error_response:7.5334062673732385
=== Population vs Sample Variance ===
Data: [1, 2, 3, 4, 5]
Sample variance (N-1): 2.5000
Population variance (N): 2.0000
NumPy sample (ddof=1): 2.5000
NumPy population (ddof=0): 2.0000
# Example: Using set_properties with WelfordVarianceError
print("\n=== Welford Variance with set_properties ===")
# Create error collector with population variance using properties
welford_pop_collector = BaseErrorCollector.collector(
'WelfordVarianceError', 'InputsError',
limit=2.0, min=False,
properties={
'error_response': {'population_variance': True}
}
)
# Create error collector with sample variance using properties (default)
welford_sample_collector = BaseErrorCollector.collector(
'WelfordVarianceError', 'InputsError',
limit=2.0, min=False,
properties={
'error_response': {'population_variance': False}
}
)
# Test data
test_data = [1.0, 2.0, 3.0, 4.0, 5.0]
print("Processing the same data with both population and sample variance:")
print(f"Data: {test_data}")
for value in test_data:
    welford_pop_collector.add_error_data_array([value])
    welford_sample_collector.add_error_data_array([value])
pop_variance = welford_pop_collector.error()
sample_variance = welford_sample_collector.error()
print(f"Population variance (via properties): {pop_variance:.4f}")
print(f"Sample variance (via properties): {sample_variance:.4f}")
print(f"Difference: {abs(sample_variance - pop_variance):.4f}")
# Verify with NumPy
np_pop = np.var(test_data, ddof=0)
np_sample = np.var(test_data, ddof=1)
print(f"NumPy population variance: {np_pop:.4f}")
print(f"NumPy sample variance: {np_sample:.4f}")
TestCase().assertAlmostEqual(pop_variance, np_pop, places=6)
TestCase().assertAlmostEqual(sample_variance, np_sample, places=6)
=== Welford Variance with set_properties ===
Processing the same data with both population and sample variance:
Data: [1.0, 2.0, 3.0, 4.0, 5.0]
Population variance (via properties): 2.0000
Sample variance (via properties): 2.5000
Difference: 0.5000
NumPy population variance: 2.0000
NumPy sample variance: 2.5000