Key Takeaways for Neural Networks Driving Joints:

After experimenting with neural networks for joint rotations, here are a few lessons I've learned:

Normalize inputs for stable training, but not outputs: in my tests, leaving outputs un-normalized reduced approximation error.

Avoid Euler angles due to gimbal lock. Quaternions are better but can be inconsistent (q and -q encode the same rotation, so targets can flip sign). Rotation matrices were the most stable.

You don't need all 9 entries of the rotation matrix: predict only 6 (two columns), then orthonormalize them and compute the third column with a cross product. This keeps the network smaller and guarantees the output is a valid rotation.

PReLU works well for large networks, but Tanh gave more stable predictions outside the training data.
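Here's a minimal NumPy sketch of that 6-value trick: Gram-Schmidt on the two predicted columns, then a cross product for the third. The function name and the sample input are mine, just for illustration.

```python
import numpy as np

def rotation_from_6d(x):
    """Rebuild a 3x3 rotation matrix from 6 raw network outputs
    (two unconstrained 3D column vectors). Illustrative sketch,
    not tied to any specific network."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)            # first column: normalize
    b2 = a2 - np.dot(b1, a2) * b1           # second: remove b1 component
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)                   # third column comes for free
    return np.stack([b1, b2, b3], axis=1)

# Even noisy, non-orthogonal predictions map to a valid rotation:
R = rotation_from_6d(np.array([0.9, 0.1, 0.0, -0.2, 1.1, 0.05]))
```

Because the columns are orthonormalized and the cross product of two orthonormal vectors completes a right-handed frame, R always satisfies R.T @ R = I with det(R) = +1.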
Have you tried similar methods? Let’s discuss!
#MachineLearning #DeepLearning #NeuralNetworks #AnimationTech #Rigging