Pe2 (Physics Engine 2)

Pe2 is a compact, high-precision neural physics solver. It is designed to replace traditional branching logic (if/else) with a learned continuous approximation of Newtonian motion and boundary constraints.

Model Details

  • Model Name: Pe2
  • Architecture: Multi-Layer Perceptron (6 -> 8 -> 6 -> 8 -> 4)
  • Total Parameters: 202
  • Task: Non-linear Physics Regression (Clipping Logic)

Performance & Accuracy

During training, Pe2 demonstrated an aggressive convergence rate, successfully mapping complex force-to-velocity relationships.

Training Logs (10k Epochs)

The model achieved a final Mean Squared Error (MSE) of 0.1577, an RMSE of roughly 0.40 units, which is about 0.3% of the 127 clipping limit (roughly 99.7% accuracy in physical simulation).
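The accuracy figure follows directly from the reported loss:

```python
mse = 0.1577                 # final training loss from the log below
rmse = mse ** 0.5            # root mean squared error, ~0.397 units
relative_error = rmse / 127  # as a fraction of the clipping limit, ~0.31%
print(f"RMSE={rmse:.3f}, relative error={relative_error:.2%}")
```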

| Epoch  | Loss (MSE) | Status                     |
|--------|------------|----------------------------|
| 0      | 7233.52    | Initializing               |
| 1,000  | 63.32      | Learning Vector Addition   |
| 5,000  | 33.68      | Fine-tuning Clipping       |
| 10,000 | 0.1577     | Converged (High Precision) |
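A minimal training loop consistent with this log might look like the sketch below. The optimizer, learning rate, activation function, and the synthetic data are all assumptions; the card does not specify them.

```python
import torch
import torch.nn as nn

# The 6 -> 8 -> 6 -> 8 -> 4 "Squeezeback" MLP (202 parameters).
# Tanh is an assumed activation; swap in whatever Pe2 actually used.
model = nn.Sequential(nn.Linear(6, 8), nn.Tanh(),
                      nn.Linear(8, 6), nn.Tanh(),
                      nn.Linear(6, 8), nn.Tanh(),
                      nn.Linear(8, 4))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed hyperparameters
loss_fn = nn.MSELoss()

inputs = torch.randn(256, 6)   # placeholder for (state + force) training samples
targets = torch.randn(256, 4)  # placeholder for (velocity + displacement) labels

for epoch in range(10_000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```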

The "Clipping" Test

Pe2 excels at learning hard limits that usually require manual coding. In testing, when provided with a Force $X$ of 200 (far exceeding the 127 limit), the model correctly predicted:

  • Input Force: 200
  • Pe2 Predicted Velocity: 126.75
  • Result: Success. The AI learned the physical boundary without explicit "if" statements.
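For comparison, the hand-coded boundary that Pe2 learned implicitly can be written in one line, assuming the ground-truth rule is a symmetric clamp at ±127:

```python
import torch

def clip_velocity(force: torch.Tensor) -> torch.Tensor:
    # Conventional "if"-style boundary: saturate at the ±127 limit
    return torch.clamp(force, min=-127.0, max=127.0)

print(clip_velocity(torch.tensor([200.0])))  # tensor([127.])
```

Pe2's prediction of 126.75 for the same input is the neural approximation of this hard limit.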

Technical Specifications

Architecture Breakdown

Pe2 uses a "Squeezeback" (expand-contract-expand bottleneck) architecture to force the model to generalize physics rules rather than memorize data.

  • Input Layer: 6 neurons (State + Force vectors)
  • Hidden Layers: Interlocking 8-6-8 neuron structure for non-linear mapping.
  • Output Layer: 4 neurons (Velocity + Displacement)
  • Parameters:
    • Weights: 176
    • Biases: 26
    • Total: 202
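The breakdown can be verified from the layer widths alone:

```python
# Parameter count for a fully connected 6 -> 8 -> 6 -> 8 -> 4 MLP
layers = [6, 8, 6, 8, 4]
weights = sum(a * b for a, b in zip(layers, layers[1:]))  # 48 + 48 + 48 + 32 = 176
biases = sum(layers[1:])                                  # 8 + 6 + 8 + 4 = 26
print(weights, biases, weights + biases)                  # 176 26 202
```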

How to use

import torch
import torch.nn as nn

# PhysicsAI must be defined before its weights can be loaded.
# Layer widths follow the 6 -> 8 -> 6 -> 8 -> 4 spec; the activation
# is an assumption and should match whatever was used in training.
class PhysicsAI(nn.Sequential):
    def __init__(self):
        super().__init__(nn.Linear(6, 8), nn.Tanh(),
                         nn.Linear(8, 6), nn.Tanh(),
                         nn.Linear(6, 8), nn.Tanh(),
                         nn.Linear(8, 4))

# Load the 202-parameter brain
model = PhysicsAI()
model.load_state_dict(torch.load('physics_ai_model.pth'))
model.eval()

# Predict the future state
test_input = torch.tensor([[10.0, 10.0, 10.0, 10.0, 200.0, 0.0]])
prediction = model(test_input)