3D scanning ruminations


The above link is a nice little process video for the David scan. It looks like the light is structured in a pattern of alternating striped boxes; I can't tell if it is animated or not. Point trackers really like right angles when trying to find good tracks, so when I get to a machine where I can run Matchmover I will do a test to see what it does. My intuitive response is that the projector needs to cycle between bright uniform light (to record the color map) and the structured light projection to determine the contours of the single point of view from the camera. Multiple snapshots are then fused, and the UV color map is trimmed to fit the patches of mesh. The smaller the turntable angle between snapshots, the better the color map fits the object without stretching. If you notice, the donkey's back got no geometry and was fudged; the camera was set a bit low.
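To make that cycling idea concrete, here is a minimal sketch of the capture loop I am picturing. The project, capture, and rotate callables are hypothetical stand-ins for whatever the scanner software actually does, not its real API.

# Sketch of the capture cycle I imagine the scanner performing.
# All hardware calls (project, capture, rotate) are hypothetical
# stand-ins, not the David software's actual API.

def scan_pass(project, capture, rotate, n_steps=12):
    """Alternate uniform and structured light at each turntable step.

    project(pattern) -- show 'uniform' or 'stripes' from the projector
    capture()        -- grab a frame from the camera
    rotate(degrees)  -- advance the turntable
    """
    step = 360.0 / n_steps
    snapshots = []
    for i in range(n_steps):
        project("uniform")        # flat white light -> color (UV) map
        color = capture()
        project("stripes")        # structured pattern -> contours/depth
        depth = capture()
        snapshots.append({"angle": i * step, "color": color, "depth": depth})
        rotate(step)              # smaller steps = less color-map stretching
    return snapshots              # later fused into one mesh with trimmed UVs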

 
The need for the screen in the back is also interesting. It means the object size is limited by the size of the screen.
 
 
In the meantime I started thinking about Normal Maps.
Normal maps are a means of adding surface detail to low-poly objects. The color channels are read as vector information and "perturb" the surface normals used for shading, so the surface appears to have detail it does not actually have. Since this is done at the rendering stage, game developers like the method because it keeps the poly count, and therefore the rendering load, low.
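As a concrete illustration (my own sketch, not any particular engine's code), here is roughly how a tangent-space normal map texel gets turned back into a normal and used for simple diffuse shading:

import numpy as np

# Minimal sketch of how a tangent-space normal map is read back at render
# time: each texel's RGB is remapped from [0,1] to [-1,1] and treated as a
# normal vector for lighting, instead of the flat geometric normal.

def decode_normal(rgb):
    """rgb: texel values in [0,1] -> unit normal vector."""
    n = np.asarray(rgb, dtype=float) * 2.0 - 1.0      # R,G,B -> X,Y,Z
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def lambert_shade(rgb_texel, light_dir):
    """Diffuse brightness using the perturbed normal."""
    n = decode_normal(rgb_texel)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return max(0.0, float(np.dot(n, l)))

# The "flat" normal-map color (128,128,255) shades exactly like the
# unperturbed surface under a head-on light:
print(lambert_shade([0.5, 0.5, 1.0], [0.0, 0.0, 1.0]))   # -> 1.0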
 
Because all the 3D scanners use the color map to add surface detail, I started wondering whether using three lights (I am thinking panels of red, green, and blue LEDs for very wide, soft sources) might produce a facsimile normal map. That could then be applied as a texture to modify the surface. There would need to be some differencing between the color map and the normal map, which would most likely limit the exercise at first to white objects.
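For what it's worth, this idea is very close to classic photometric stereo. Here is a rough sketch under the assumption of a matte white object and three known light directions; the photometric_stereo function and the light positions are mine, just to show the arithmetic, not a description of any real rig.

import numpy as np

# With three known light directions and three brightness readings per
# pixel you can solve L @ n = i for the surface normal n at that pixel.
# With wide red/green/blue LED panels the three readings could come from
# the R, G, and B channels of a single photo of a white object, which is
# why the color map has to be differenced out before it works on anything
# that isn't white. The light directions below are made-up placeholders.

def photometric_stereo(images, light_dirs):
    """images: three grayscale arrays of shape (H, W), one per light.
    light_dirs: (3, 3) array, each row a unit light direction.
    Returns per-pixel unit normals, shape (H, W, 3)."""
    I = np.stack([im.reshape(-1) for im in images], axis=1)    # (H*W, 3)
    L = np.asarray(light_dirs, dtype=float)                    # (3, 3)
    n = I @ np.linalg.inv(L).T                                 # n = L^-1 @ i, per pixel
    n /= np.maximum(np.linalg.norm(n, axis=1, keepdims=True), 1e-8)
    return n.reshape(images[0].shape + (3,))

# Hypothetical LED panel directions, roughly 30 degrees off-axis and
# 120 degrees apart around the camera:
lights = np.array([[ 0.50,  0.00, 0.87],    # "red" panel
                   [-0.25,  0.43, 0.87],    # "green" panel
                   [-0.25, -0.43, 0.87]])   # "blue" panel

# Usage: normals = photometric_stereo([red_channel, green_channel, blue_channel], lights)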