I have been 3D scanning for years and love recreating real-world objects in 3D using photogrammetry. Recently, however, I’ve been experimenting with a relatively new form of 3D recreation: Gaussian Splats.
Gaussian Splats are quite similar to NeRFs (Neural Radiance Fields). From a dataset of images or video, a point cloud of all the still objects in your scene is generated in 3D space. Unlike photogrammetry, where the points are joined to create a 3D mesh, Gaussian Splats create a radiance field composed of "Gaussians." A Gaussian is somewhat like a single brushstroke in a painting—it has a distinct size, position, opacity, shape, and colour. On its own, it’s just a blob, but with enough of them, a detailed image emerges.
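If it helps to think of it in code, here is a rough sketch of the attributes a single Gaussian carries. This is purely my own illustration in Python; the field shapes follow the common 3D Gaussian Splatting convention rather than any particular file format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian:
    position: np.ndarray   # (3,) centre of the blob in world space
    scale: np.ndarray      # (3,) per-axis size; together with rotation, this is the "shape"
    rotation: np.ndarray   # (4,) quaternion orienting the ellipsoid
    opacity: float         # how strongly it covers whatever is behind it
    sh_coeffs: np.ndarray  # (16, 3) spherical-harmonic coefficients for view-dependent colour

# One small, faint blob at the origin (purely illustrative values).
g = Gaussian(
    position=np.zeros(3),
    scale=np.full(3, 0.01),
    rotation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity quaternion
    opacity=0.3,
    sh_coeffs=np.zeros((16, 3)),
)
```

A whole scene is just millions of these records: no triangles, no UVs, no baked textures.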
For this test, I’m using a program called Jawset Postshot. To refine your splats, you train the model on your dataset so it can fine-tune the position, size, and colour of each Gaussian. As the model trains, you can see the image becoming progressively clearer in real time.
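To give a feel for what "training" means here, this is a toy sketch of the same idea in one dimension (my own illustration, not Postshot's internals): a few Gaussians are rendered, compared against the target, and nudged by gradient descent until they reproduce it. The real thing does this in 3D against your photos, with a differentiable rasterizer in the middle.

```python
import torch

# A made-up 1-D "ground truth" signal standing in for a photograph.
x = torch.linspace(0.0, 1.0, 200)
target = torch.exp(-((x - 0.3) ** 2) / 0.002) + 0.5 * torch.exp(-((x - 0.7) ** 2) / 0.01)

# Learnable parameters for 4 Gaussians: position, size (log-scale), and amplitude.
pos = torch.rand(4, requires_grad=True)
log_scale = torch.full((4,), -3.0, requires_grad=True)
amp = torch.rand(4, requires_grad=True)

optimizer = torch.optim.Adam([pos, log_scale, amp], lr=0.05)
for step in range(500):
    scale = torch.exp(log_scale)
    # "Render": sum every Gaussian's contribution at every sample point.
    rendered = (amp[:, None] * torch.exp(-((x[None, :] - pos[:, None]) ** 2) / scale[:, None])).sum(0)
    loss = torch.mean((rendered - target) ** 2)  # photometric-style loss against the target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After a few hundred steps the rendered signal lines up with the target, which is exactly the "image getting clearer" effect you watch in Postshot, just at a vastly smaller scale.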
Gaussian Splats have several advantages over photogrammetry. For example, they allow dynamic surface reflections. Unlike photogrammetry, where textures are baked in, Gaussian Splats use spherical harmonics to represent view-dependent changes in surface colour, specular highlights, and reflections—without needing a neural network. Additionally, training optimizes the point cloud by concentrating Gaussians in areas with more detail while simplifying areas with less detail, like the sky. And because Gaussian Splats don’t create a solid mesh, they achieve much higher fine detail in organic elements, like plants. Gaussian Splats also render and can be edited in real time, unlike NeRFs.
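The spherical-harmonics part sounds more exotic than it is: each Gaussian stores a handful of coefficients, and the viewing direction determines how they combine into a colour. Here is a minimal sketch of evaluating the first two SH degrees, following the convention I understand the reference 3D Gaussian Splatting implementation to use; the function and the example values are my own, purely for illustration.

```python
import numpy as np

# Real spherical-harmonic basis constants, degrees 0-1 (4 terms).
# Full 3DGS typically goes up to degree 3 (16 terms per colour channel).
SH_C0 = 0.28209479177387814   # 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199    # sqrt(3 / (4 * pi))

def sh_color(sh_coeffs, view_dir):
    """sh_coeffs: (4, 3) RGB coefficients per basis term for one Gaussian.
    view_dir: unit vector from the camera towards the Gaussian."""
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    rgb = basis @ sh_coeffs              # weighted sum of the coefficients
    return np.clip(rgb + 0.5, 0.0, 1.0)  # offset into [0, 1], as the reference code does

# A mostly red Gaussian whose blue channel shifts with the horizontal view angle.
coeffs = np.zeros((4, 3))
coeffs[0] = [0.8, 0.1, 0.1]
coeffs[3] = [0.0, 0.0, 0.4]
print(sh_color(coeffs, np.array([0.0, 0.0, 1.0])))   # viewed head-on
print(sh_color(coeffs, np.array([1.0, 0.0, 0.0])))   # viewed from the side
```

Because colour is just this small function of the view direction, highlights and reflections shift as you move the camera without any texture being baked in and without a network being evaluated per ray, which is a big part of why splats render so fast.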
This has been a very broad overview, and I’m just beginning my journey with Gaussian Splats. With tools like Jawset Postshot and new plugins for importing splats into Unreal Engine, Blender, and After Effects, you’ll definitely be seeing more of this technology from me soon!