Example 2¶
In this example, a transfer learning Gaussian process regression surrogate model (TL-GPRSM) is constructed.
Here, the subject is a beam bending problem: the tip displacement of a cantilever beam under horizontal and vertical loads is to be determined. The formula is as follows.
\[D(\mathbf{x})=\frac{4 L^3}{E w t} \sqrt{\left(\frac{Y}{t^2}\right)^2+\left(\frac{X}{w^2}\right)^2}\]
where \(D\) is the displacement, \(L\) the beam length, \(E\) Young’s modulus, \(w\) the width, \(t\) the height, \(X\) the horizontal load, and \(Y\) the vertical load.
First, this function is defined.
[1]:
import numpy as np
def beam_function(length, width, height, young_modulus, load_horizontal, load_vertical):
    displacement = (4.0 * length**3 / (young_modulus * height * width)) * np.sqrt(
        np.square(load_vertical / height**2) + np.square(load_horizontal / width**2)
    )
    return displacement
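As a quick sanity check, the function can be evaluated at the nominal parameter values used later in this example (the function body is repeated so the snippet is self-contained):

```python
import numpy as np

def beam_function(length, width, height, young_modulus, load_horizontal, load_vertical):
    # D = 4 L^3 / (E w t) * sqrt((Y / t^2)^2 + (X / w^2)^2)
    return (4.0 * length**3 / (young_modulus * height * width)) * np.sqrt(
        np.square(load_vertical / height**2) + np.square(load_horizontal / width**2)
    )

# Nominal values: L = 3 m, w = 0.2 m, t = 0.1 m, E = 2.06e11 Pa, X = 5 kN, Y = 10 kN
d = beam_function(3.0, 0.2, 0.1, 2.06e11, 5000.0, 10000.0)
print(d)  # a displacement on the order of a few centimeters
```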
Create training data¶
Here, we assume that the source data come from analyses of members with a different Young’s modulus. The beam dimensions are fixed, while the loads are treated as uncertain.
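The sampling helpers used below are part of TL_GPRSM. As a rough, numpy-only sketch of what Latin hypercube sampling with linear rescaling does (an assumption about the utilities’ behavior, not their actual implementation):

```python
import numpy as np

def latin_hypercube_sampling(n_samples, n_dims, rng=None):
    # One stratified point per interval [i/n, (i+1)/n) in each dimension,
    # with the strata shuffled independently per dimension.
    rng = np.random.default_rng(rng)
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    return u

def uniform_scaling(x, lower, upper):
    # Map unit-cube samples onto [lower, upper] per dimension.
    return lower + x * (upper - lower)

# Ten 3-D samples scaled to the target parameter ranges used below
x = uniform_scaling(latin_hypercube_sampling(10, 3, rng=0),
                    np.array([7.0e10 * 0.9, 5000.0 * 0.8, 10000.0 * 0.8]),
                    np.array([7.0e10 * 1.1, 5000.0 * 1.2, 10000.0 * 1.2]))
```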
[2]:
import TL_GPRSM.utils.sampling as sampling
length = 3.0    # beam dimensions are fixed [m]
width = 0.2
height = 0.1
# Target: 10 samples, E around 7.0e10 Pa (+/- 10 %), loads +/- 20 %
target_x = sampling.latin_hypercube_sampling(10, 3, False)
target_x = sampling.uniform_scaling(target_x, np.array([7.0e10*0.9, 5000.0*0.8, 10000.0*0.8]), np.array([7.0e10*1.1, 5000.0*1.2, 10000.0*1.2]))
target_y = np.array([beam_function(length, width, height, target_x[i,0], target_x[i,1], target_x[i,2]) for i in range(target_x.shape[0])])[:,np.newaxis]
# Source: 50 samples, E around 2.06e11 Pa (+/- 10 %), same load ranges
source_x = sampling.latin_hypercube_sampling(50, 3, False)
source_x = sampling.uniform_scaling(source_x, np.array([2.06e11*0.9, 5000.0*0.8, 10000.0*0.8]), np.array([2.06e11*1.1, 5000.0*1.2, 10000.0*1.2]))
source_y = np.array([beam_function(length, width, height, source_x[i,0], source_x[i,1], source_x[i,2]) for i in range(source_x.shape[0])])[:,np.newaxis]
print(target_x.shape, target_y.shape, source_x.shape, source_y.shape)
(10, 3) (10, 1) (50, 3) (50, 1)
Construct TL-GPRSM¶
[3]:
from TL_GPRSM.models.GPRSM import GPRSM
gprsm = GPRSM(target_x, target_y, kernel_name="Matern52")
gprsm.set_transfer_learning(source_x, source_y)
gprsm.optimize(max_iter=10000)
c:\Users\saida\Downloads\temp_0330\venv\lib\site-packages\paramz\transformations.py:111: RuntimeWarning: overflow encountered in expm1
Optimization restart 1/10, f = -337.7729887430097
Optimization restart 2/10, f = -337.4309989442386
Optimization restart 3/10, f = -337.91045566764296
Optimization restart 4/10, f = -337.4601961548478
Optimization restart 5/10, f = -335.74369807361484
Optimization restart 6/10, f = -335.7868282723364
Optimization restart 7/10, f = -337.7697880453027
Optimization restart 8/10, f = -337.90583014656556
Optimization restart 9/10, f = -337.9105955093783
Optimization restart 10/10, f = -337.4114574484434
Evaluation¶
First, get the ARD contributions.
[4]:
contributions = gprsm.get_ard_contribution()
print(contributions)
[2.11999180e+01 1.44326074e-01 2.73857023e+00 4.17810230e-01
1.91054986e+00 3.49176740e-02 5.60770130e+01 3.88342730e-02
1.74380606e+01]
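The nine returned values sum to roughly 100, so they can be read directly as percentage contributions (nine values rather than three presumably because the transfer-learning kernel carries separate source, target, and cross terms, though that breakdown is an assumption about the library’s internals). Sorting them with plain numpy highlights the dominant dimensions, using the values printed above:

```python
import numpy as np

contributions = np.array([2.11999180e+01, 1.44326074e-01, 2.73857023e+00,
                          4.17810230e-01, 1.91054986e+00, 3.49176740e-02,
                          5.60770130e+01, 3.88342730e-02, 1.74380606e+01])
order = np.argsort(contributions)[::-1]  # most to least important
for i in order:
    print(f"dimension {i}: {contributions[i]:.2f} %")
```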
Second, get the effect of transfer learning.
[5]:
tl_effect = gprsm.get_transfer_learning_effect()
print(tl_effect)
0.3274171965641457
Finally, evaluate the model with the R² index.
[6]:
import TL_GPRSM.utils.metrics as metrics
test_x = sampling.latin_hypercube_sampling(10000, 3, False)
test_x = sampling.uniform_scaling(test_x, np.array([7.0e10*0.9, 5000.0*0.8, 10000.0*0.8]), np.array([7.0e10*1.1, 5000.0*1.2, 10000.0*1.2]))
test_y = np.array([beam_function(length, width, height, test_x[i,0], test_x[i,1], test_x[i,2]) for i in range(test_x.shape[0])])[:,np.newaxis]
predict_y_mean, predict_y_std = gprsm.predict(test_x)
r2 = metrics.r2_index(test_y, predict_y_mean)
print(r2)
0.9999885656886551
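The r2_index helper presumably computes the standard coefficient of determination; a minimal sketch under that assumption (not the library’s actual implementation):

```python
import numpy as np

def r2_index(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    ss_res = np.sum(np.square(y_true - y_pred))
    ss_tot = np.sum(np.square(y_true - np.mean(y_true)))
    return 1.0 - ss_res / ss_tot
```

An R² close to 1 means the surrogate reproduces the test responses almost exactly, as seen in the output above.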