3D scan
- 3D body scan
- Photogrammetry
- Polycam
- Abound
- Reality Capture
- Tools to aid in photogrammetry
- Gaussian Splatting: online and local
- Core principles of photogrammetry
- LiDAR
- Random tips
3D body scan
A quick collection of options and tips I've come across online; findings still need to be added here, so this book is not complete.
Tips from users in the fashion industry
True to Form
Luma AI
Photogrammetry
Photogrammetry is a technique for extracting 3D information from photographs. By extensively photographing an object from all sides, you can use software like Polycam or Abound to create a 3D mesh from these photos. This mesh can then be imported into 3D software like Blender and rendered using the photos as textures.
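Because the reconstruction depends on sharp source photos, it can help to filter out blurry shots before uploading a capture set. This is only a rough sketch of the idea: images are represented here as plain grayscale pixel rows, and the sharpness measure (mean local contrast) is an illustrative assumption, not part of any of the tools named above.

```python
# Hedged pre-flight sketch: score photos by mean local contrast and keep the
# sharp ones. A real pipeline would load pixel data with an imaging library;
# here the "images" are hand-written grayscale rows to stay self-contained.

def sharpness(gray_rows):
    """Mean absolute difference between horizontal neighbours: higher = sharper."""
    diffs = [abs(row[i + 1] - row[i])
             for row in gray_rows
             for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

sharp = [[0, 255, 0, 255]] * 4       # strong local contrast (in focus)
blurry = [[100, 110, 120, 130]] * 4  # smooth gradient (out of focus)

print(sharpness(sharp) > sharpness(blurry))  # True
```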
Polycam
Polycam is an app that can use photogrammetry, Gaussian splatting and LiDAR (with Apple iPad/iPhone Pro models) to make 3D captures. It has a free and a paid tier. The free version offers 20 object scans with a maximum of 100 images per capture, plus a free Gaussian splatting tool. The paid version has unlimited scans and images per capture. Here's an overview of functions, both free and paid: https://poly.cam/pricing
Unlike other software, the Polycam app helps you make scans. While you are circling an object, the app takes photos for you and tells you when to slow down. It also shows how many photos you have left. In LiDAR mode, it overlays a real-time preview of the mesh it's building. After you're done capturing, it uploads your images to Polycam, which builds your scans. You can view and download them online.
In general, use LiDAR when scanning spaces or environments and photogrammetry for objects. Gaussian splats may work in scenarios where the object you're scanning has specular, fuzzy or translucent qualities; they are harder to convert into a mesh, however. A short description of when to use LiDAR vs photogrammetry can be found here: https://www.youtube.com/watch?v=gZ6AWrzIx6c&list=PLqnRz-4Awhm7MXTpkgq9paJLC3ONJEBUX&index=1
Working with LiDAR, photogrammetry and Gaussian splatting in Polycam:
Here's a tutorial detailing how to work with LiDAR in Polycam: https://learn.poly.cam/hc/en-us/articles/27419935601940-Creating-LiDAR-Captures
For photogrammetry, see this tutorial: https://learn.poly.cam/hc/en-us/articles/27425185907348-Creating-Photogrammetry-Captures-in-Object-Mode
And for Gaussian splatting: https://learn.poly.cam/hc/en-us/articles/27740818315668-How-to-Create-Gaussian-splats-on-Polycam-mobile
Taking your scans into other software
If you want to process your scans in software like Blender or Unity, Polycam has great tutorials on these topics on YouTube.
Blender: https://www.youtube.com/watch?v=1HxJiwihi6g&list=PLqnRz-4Awhm7MXTpkgq9paJLC3ONJEBUX&index=9&t=105s
Unity: https://www.youtube.com/watch?v=DEbDsxETQuE&list=PLqnRz-4Awhm7MXTpkgq9paJLC3ONJEBUX&index=14
Find more software use cases here: https://www.youtube.com/playlist?list=PLqnRz-4Awhm7MXTpkgq9paJLC3ONJEBUX
Abound
Reality Capture
Tools to aid in photogrammetry
Circular polarisation filters
Anti-reflective coating spray
Gaussian Splatting: online and local
Gaussian splatting offers an interesting alternative to photogrammetry for specific use cases, particularly where real-time rendering, photorealistic results, and the ability to capture reflective and transparent surfaces are needed. There are lots of online options for Gaussian splatting, both paid and unpaid. Kiri Engine seems to be a very complete suite and, in its paid version, now offers the option to turn splats into meshes for use in, for instance, Blender.
For all online platforms (paid or unpaid), please be aware of your data and privacy!
Running Gaussian Splatting locally
If you don't want to be reliant on external systems, you can make Gaussian splats (and photogrammetry) locally with a somewhat beefy computer. There are multiple tools available for this, but here we chose Postshot (Gaussian splatting) and RealityCapture (photogrammetry) for a quick comparison.
Workflow in Postshot:
- Install Postshot from https://www.jawset.com/
- Make a video of the object or space. You can import multiple videos into the software; shooting all videos with the same camera gives better results.
- Drag the videos into Postshot
- Render. Postshot mainly runs on the GPU. The render below (a 50-second video) took about 20 minutes.
- After rendering you can crop the capture to exclude the fuzzy blobs. For this, look under Parameters - Edit in the menu on the right.
- After rendering you can export to .ply
- To import into other software you will need a plugin
There are plugins for:
- After Effects (not tested here)
- Unreal (paid plugin, not tested here)
- Blender (lower resolution, slightly more abstract results). Note that Blender does not produce a mesh!
- Unity (not tested here)
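Before pulling an exported splat into one of these plugins, it can be handy to check how large it is. Gaussian splat `.ply` files store each splat as a vertex, so the vertex count in the header tells you the splat count. A minimal sketch, assuming a standard `.ply` header (the demo file written below exists only to keep the sketch self-contained; it is not a Postshot export):

```python
# Sketch: read a .ply header (ASCII or binary body, the header itself is
# always text) and report the vertex/splat count before importing elsewhere.

def ply_vertex_count(path):
    """Return the vertex count declared in a .ply header, or None."""
    count = None
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("element vertex"):
                count = int(line.split()[-1])
            if line == "end_header":
                break
    return count

# Tiny ASCII .ply written here only so the example runs on its own:
with open("demo.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\nelement vertex 3\n"
            "property float x\nproperty float y\nproperty float z\n"
            "end_header\n0 0 0\n1 0 0\n0 1 0\n")

print(ply_vertex_count("demo.ply"))  # 3
```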
Below: the same chair model in Postshot and Blender
Postshot | same model in Blender
Compared to local photogrammetry (RealityCapture)
We used the same source video for a render in RealityCapture. This render took about three minutes and shows one of the problems with photogrammetry: shiny objects become invisible. You can fix this by putting a polarising filter over your lens.
In RealityCapture | .obj imported in Blender | Render with lighting in Blender
Thoughts on using Gaussian Splatting
As long as Gaussian splatting does not easily convert to meshes, its use in real-time 3D engines may be limited. Models can be quite heavy (the chair above is 130 MB as a splat vs 20 MB as an .obj) and can't be reduced easily.
It might be more applicable to pre-rendered work, where you can re-edit the camera relative to the original recording: change angles, change camera movement, etc.
Various tutorials on Gaussian Splatting
Importing a .ply Gaussian splat in Blender
Core principles of photogrammetry
Image quality, Information overlap, Subject coverage
Very good guide: Photogrammetry Basics
Screenshots below were taken from this Unreal Engine YouTube seminar
LiDAR
Light Detection and Ranging (LiDAR) is a technique that uses lasers to measure distance to an object.
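The underlying arithmetic is simple: a laser pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch of that relationship (the example pulse time is illustrative):

```python
# LiDAR ranging sketch: distance from the round-trip time of a laser pulse.
# d = c * t / 2, because the pulse covers the distance twice (out and back).

C = 299_792_458  # speed of light in m/s

def lidar_distance(round_trip_s):
    """Distance in metres for a pulse that returned after round_trip_s seconds."""
    return C * round_trip_s / 2

# A pulse returning after ~66.7 nanoseconds hit something roughly 10 m away:
print(round(lidar_distance(66.7e-9), 2))  # ~10.0
```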
Apple iPad and iPhone Pro models
All iPad Pro and iPhone Pro models have a depth camera.
Zed 2i
ZED 2i is an IP66-rated Rolling Shutter camera built for spatial analytics and immersive experiences, powered by Neural Depth Engine 2. Ready to deploy, it has a robust aluminum enclosure, high-performance IMU and USB 3.1 connection.
All the info on ZED cameras
Works on PC only
How to install: https://www.stereolabs.com/docs/installation/windows
Main Features
- Dual-Lens Stereo Vision: Provides advanced depth perception and 3D mapping capabilities.
- Spatial Understanding: Offers a detailed understanding of the surrounding environment.
- Motion Tracking: Tracks objects and people in real-time with high accuracy.
- High-Resolution Imaging: Captures high-quality images, essential for detailed visual work.
- Robust Build: Designed for a variety of environments, enhancing versatility.
- Integrated Sensors: Includes IMU, barometer, and magnetometer for comprehensive data collection.
- Flexible Connectivity: USB 3.1 connection for easy integration with various systems.
Here are the links to the TouchDesigner documentation regarding ZED TOP, CHOP and SOP:
TOP: https://docs.derivative.ca/ZED_TOP
CHOP: https://docs.derivative.ca/ZED_CHOP
SOP: https://docs.derivative.ca/ZED_SOP
Also, if this topic interests you, it may come in handy to have a look at the official ZED documentation:
https://www.stereolabs.com/docs
More info on https://interactiveimmersive.io/blog/touchdesigner-integrations/updated-zed-camera-features-in-touchdesigner/
Skeleton tracking keypoints (there are various options)
Random tips
You can take photos with your phone or tablet camera and load them into software that then converts these photos into a scan.
For this you can use your camera app, or a 3D scan app such as 3DScannerApp (iOS) or Scaniverse (iOS or Android), which can guide you along the right strategy/route (consecutive photos with overlap) while photographing.
If you want to make a scan with photogrammetry (taking multiple photos and then processing them in software such as Abound):
- Set your white balance and focus to automatic
- Use RAW quality for a good scan
- Use the largest possible depth of field, with the smallest possible aperture, because you don't want a blurred background. The background information is needed to establish the correct depth reference when processing the photos.
- Don't use an empty environment. The background is needed as a reference while processing into the 3D scan.
- Keep 20% overlap between your photos
- Move in arcs around your subject while photographing
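The overlap tip above can be turned into a rough shot-count estimate. This sketch treats each step around the subject as advancing by the camera's field of view minus the overlap; the ~65 degree horizontal field of view is an illustrative assumption for a typical phone camera, and real orbits around an object usually benefit from denser coverage than this lower bound.

```python
import math

# Rough capture-planning sketch: with horizontal field of view fov_deg and a
# fractional overlap between neighbouring photos, each photo adds roughly
# fov_deg * (1 - overlap) degrees of new coverage around the subject.

def photos_per_orbit(fov_deg=65.0, overlap=0.20):
    """Minimum photos for one full circle at the given overlap fraction."""
    step = fov_deg * (1.0 - overlap)  # new angle covered per photo
    return math.ceil(360.0 / step)

print(photos_per_orbit())             # lower bound at 20% overlap
print(photos_per_orbit(overlap=0.8))  # a much denser orbit at 80% overlap
```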
Tools to process your photos into a scan:
Abound (online) of RealityCapture (software)
Depending on what you want to use your 3D scan for, it's good to keep the scan's polygon count in mind. You could think of this as the resolution and level of detail of the scan. The more polygons, the larger the file and the heavier it is to load in some applications. For 3D printing you'd want a high polygon density. For VR you want the opposite, a low one, because VR is rendered live continuously; fewer polygons then make for faster display (low-poly).
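Reducing a dense scan to a real-time budget usually comes down to one number: the ratio between the target and current polygon count, which is what a decimation tool (for instance Blender's Decimate modifier in Collapse mode) takes as input. A small sketch; the triangle counts below are illustrative assumptions, not measurements from the scans in this document.

```python
# Sketch: translate a polygon budget into a decimation ratio.
# E.g. Blender's Decimate modifier (Collapse) takes a 0..1 ratio of faces to keep.

def decimate_ratio(current_tris, target_tris):
    """Fraction of triangles to keep; capped at 1.0 if already under budget."""
    return min(1.0, target_tris / current_tris)

raw_scan = 2_000_000   # a raw photogrammetry mesh can easily be this dense
vr_budget = 50_000     # illustrative low-poly real-time budget

print(decimate_ratio(raw_scan, vr_budget))  # 0.025
```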
This website is very extensive and instructive: https://dev.epicgames.com/community/learning/courses/blA/unreal-engine-capturing-reality-photogrammetry-basics-by-quixel/r222/unreal-engine-capturing-reality-an-introduction-to-photogrammetry