Here are the first three surge-trained experts. They should cover almost any need if used correctly.
The image line.
The specific image-trained SVAE structures are dubbed:
- SVAE-Fresnel
tiny - 64x64
small - 128x128
base - 256x256 <- cooking, current MSE=0.000181 -> Operating CV: 0.3769
large - 512x512 <- upcoming
xl - 1024x1024 <- upcoming
xxl - 2048x2048 <- upcoming
giant - 4096x4096 <- upcoming
The initial Fresnel runs show the model can reconstruct images far out of scope, at entirely different sizes; entirely unseen images can be fully reconstructed within the same MSE range as the trained images.
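The out-of-scope claim above boils down to a single round-trip measurement. Here is a minimal sketch of that measurement, assuming a model exposing an encode()/decode() pair (hypothetical names; the real SVAE interface is not shown in this post), with an identity stand-in so the snippet runs on its own:

```python
import numpy as np

def reconstruction_mse(model, image):
    """Mean squared error between an image and its SVAE round trip.

    `model` is any object with encode()/decode(); those names are
    assumptions standing in for the actual SVAE interface.
    """
    recon = model.decode(model.encode(image))
    return float(np.mean((image - recon) ** 2))

class IdentityModel:
    """Stand-in for a trained model: a perfect round trip (MSE = 0)."""
    def encode(self, x):
        return x
    def decode(self, z):
        return z

# An "unseen" image at an arbitrary size -- the property being tested
# is that MSE stays in the same range as on trained images.
img = np.random.rand(64, 64, 3).astype(np.float32)
mse = reconstruction_mse(IdentityModel(), img)  # 0.0 for the stand-in
```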
Tests show:
- The Fresnel models can piece images back together piecemeal at higher accuracy and a lower error rate than running the full image through the model at once. Tested up to 1024x1024 with near-perfect reconstruction (MSE = 0.0000029).
- Fresnel CANNOT reconstruct noise directly (MSE ~1.0).
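The piecemeal result can be sketched as a tile-and-stitch loop: a fixed-size model is run over each tile of a larger image and the outputs are stitched back together. `encode()`/`decode()` and `IdentityModel` are assumptions standing in for a trained Fresnel checkpoint, not the actual interface:

```python
import numpy as np

def reconstruct_piecemeal(model, image, tile=256):
    """Run a fixed-size model over a larger image one tile at a time.

    Assumes the image height/width are multiples of `tile`, and that
    `model` exposes encode()/decode() (hypothetical method names).
    """
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = model.decode(model.encode(patch))
    return out

class IdentityModel:
    """Stand-in for a trained tile model (perfect round trip)."""
    def encode(self, x):
        return x
    def decode(self, z):
        return z

# 1024x1024 stitched from 256x256 tiles, mirroring the test above.
big = np.random.rand(1024, 1024, 3).astype(np.float32)
recon = reconstruct_piecemeal(IdentityModel(), big, tile=256)
mse = float(np.mean((big - recon) ** 2))  # 0.0 with the stand-in
```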
The 256x256 variant is cooking right now. The MSE is dropping rapidly, and it is already nearly as accurate as the 128x128 counterpart with only partial cooking.
The noise line. The specific noise-trained SVAE structures:
- SVAE-Johanna
This model is capable of learning and reconstructing noise; the end goal is a noise compressor that can deconstruct/reconstruct any noise automatically.
tiny - 64x64 <- first train faulted; it tried 16 types of noise out of the gate, so it will be restarted with curriculum training.
small - 128x128 <- gaussian prototype ready, MSE = 0.012 <- back in the oven on 16-spectrum noise
small - 128x128 - 16 noise <- MSE=0.053170, CV=0.4450 -> learning 16 noise types
base - 256x256 <- upcoming
large - 512x512 <- upcoming
xl - 1024x1024 <- upcoming, POSSIBLE if large works
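The curriculum-training restart mentioned for the tiny model could look something like the sketch below: noise types are unlocked one at a time as loss falls, rather than showing all 16 spectra at once. The noise list, generators, and threshold here are all illustrative assumptions, not the actual training setup:

```python
import numpy as np

# Illustrative subset of the noise spectrum (not the real 16-type list).
NOISE_TYPES = ["gaussian", "uniform", "salt_pepper"]

def make_noise(kind, rng, shape=(128, 128)):
    """Generate one training sample for the given noise type."""
    if kind == "gaussian":
        return rng.normal(0.5, 0.15, shape)
    if kind == "uniform":
        return rng.uniform(0.0, 1.0, shape)
    if kind == "salt_pepper":
        return (rng.uniform(size=shape) > 0.5).astype(float)
    raise ValueError(f"unknown noise type: {kind}")

def unlocked_types(loss_history, threshold=0.05):
    """Curriculum schedule: start with one noise type and unlock another
    each time a recorded loss drops below `threshold` (illustrative)."""
    unlocked = 1
    for loss in loss_history:
        if loss < threshold and unlocked < len(NOISE_TYPES):
            unlocked += 1
    return NOISE_TYPES[:unlocked]

rng = np.random.default_rng(0)
# Two sub-threshold losses unlock the remaining two types.
stage = unlocked_types([0.20, 0.04, 0.03])
```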
Johanna is being trained on 12 types of noise. The MSE is dropping as expected, and the noise types are in fact being learned and represented well enough to be replicated.
The text line is exactly the same as the others.
- SVAE-Alexandria
Alexandria is meant to encode/decode text with perfect or near-perfect reconstruction.
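Perfect reconstruction here is a strict round-trip equality, which is easy to state as a check. `ByteCodec` and the encode()/decode() names below are stand-ins, not Alexandria's actual API; the byte codec is lossless by construction, so the check passes trivially:

```python
def round_trip_exact(codec, text):
    """The property Alexandria targets: decode(encode(text)) == text.

    `codec` is any object with encode()/decode(); these are hypothetical
    names standing in for the real model interface.
    """
    return codec.decode(codec.encode(text)) == text

class ByteCodec:
    """Stand-in codec: UTF-8 bytes in, text out -- lossless by construction."""
    def encode(self, text):
        return text.encode("utf-8")
    def decode(self, data):
        return data.decode("utf-8")

ok = round_trip_exact(ByteCodec(), "SVAE-Alexandria: ünïcode ✓")  # True
```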