Maya pummeling test

pummeling test from Dennis Hlynsky on Vimeo.

 

An attempt to create a battleground surface shaped by massive artillery fire.

In Autodesk Maya an nCloth is created and hit with nParticles. The idea was to create a slightly moving surface. Since the polygon surface used to make the nCloth is being used as an attractor to shape the surface, one could get similar results with a series of blend shapes. The advantage of using particles to strike the surface is the option of secondary particles controlled by the collisions.
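A rough sketch of that setup in Maya’s Python flavor, purely as a reconstruction: the object names, attract value, and emitter settings below are placeholders, not the actual scene.

```python
# Sketch (not the actual scene): a plane made into nCloth, attracted back toward its
# original shape, with nParticles fired at it to dent the surface.
import maya.cmds as cmds
import maya.mel as mel

# ground plane that will become the battleground surface
plane = cmds.polyPlane(name="battleground", width=50, height=50,
                       subdivisionsX=100, subdivisionsY=100)[0]

cmds.select(plane)
cloth = mel.eval("createNCloth 0;")[0]          # turn the selected mesh into an nCloth

# pull the cloth back toward its input (rest) mesh so the craters partially hold
cmds.setAttr(cloth + ".inputMeshAttract", 0.3)

# an emitter overhead firing nParticles down at the surface as the "artillery"
cmds.select(clear=True)
cmds.emitter(name="shellEmitter", type="omni", rate=50, speed=20, position=(0, 30, 0))
shells = cmds.nParticle(name="shells")[0]
cmds.connectDynamic(shells, emitters="shellEmitter")
# the nParticles and the nCloth need to share the same nucleus solver to collide;
# depending on the scene this may need to be assigned manually
```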

There are three sections:

The first limits the restitution of the surface.

The second allows the surface to “spring back,” but it could also be a means of producing a series of blend shapes to preserve the cratering.

The third section is an attempt to use a texture map to modify the cloth’s attraction to the original poly topography.

 

First recordings with the Edgertronic Camera – Tests

First off I want to say I am pushing this camera in terms of ISO. At the recommended ISO the camera noise is not an issue. All cameras with fast frame sample rates need light… they take it from f-stops, which results in shallow depth of focus. Since I require depth of focus I am pushing the ISO much higher than one might in situations where the action is at a predictable distance from the camera and bright light is possible.

The Edgertronic cameras are great inexpensive cameras in the slow-motion class. Priced at around $5,000, these cameras do a very good job at a difficult task. The design criterion for these cameras seems to be “a slow motion camera for the rest of us.” When I say the rest of us I mean those of us who don’t have between $15,000 and $80,000 to drop on a camera.

Personally I like inexpensive cameras. One sacrifices a bit of quality but avoids an oppressive overhead. Eliminating that overhead allows one a bit more freedom to respect the creative whim.

flight of a small brown moth from Dennis Hlynsky on Vimeo.

The quality/accessibility question is perpetual – How much quality does one sacrifice for the ability to do? At times this is a no brainer. No one complained about those soft fuzzy video recordings of the moon landing. We use our cell phones to photograph and record video and live with rolling shutter and autoexposure. Deciding when “good enough” is acceptable requires personal judgement.

If the lack of overhead allows one to record something truly wonderful then a bit of quality compromise is acceptable. But it’s a fine line. It really depends upon intended use. A bit of noise is OK in scientific data collection as long as it doesn’t obscure anything significant – but not OK when inserting a slow-motion sequence into a finely crafted narrative film. One could use a low ISO and a slower frame sample rate to record acceptable footage… but for the desire to stop a bumblebee wing.

The Edgertronic camera skirts this line. I immediately took to the Monochromatic version of the Edgertronic.  When I shared the first shots from the Monochromatic camera the comment most heard was “Wow … it’s so sharp! … Wow WOW!”

pollination from Dennis Hlynsky on Vimeo.

I did tint the image a bit but really… the camera does shine.

I was using an ISO of 600. I was shooting outside in the shade of a house on a cloudless day. So I wanted to try the color version…the ideal ISO is 100 but I needed the stops.

butterfly color test from Dennis Hlynsky on Vimeo.

Camera time: Thu Aug 7 18:14:35 2014
Sensitivity: 800 ISO
Shutter: 1/6000 Seconds
Frame Rate: 1000 Frames/Second
Horizontal: 976 Pixels
Vertical: 496 Pixels
Sub-sampling: Off sub-sample
Duration: 13.199 Seconds
Pre-trigger: 10 percent

Technically, the difference between the cameras is an added color grid, which reduces the capability of the color camera by two stops. Recording at high shutter speeds cuts down on the light entering the camera. I tried making up for the two stops by increasing ISO. At higher ISO the signal off the sensor is amplified and one gets more noise. Flashing is also evident. For data collection this is OK… but it imposes itself on the narrative. The noise and the flashing distract from “reading” the story of the image. The standing noise is especially evident in the shadows… meaning the noise doesn’t move as the framing changes. This might be caused by the Bayer grid and the extra-large photon buckets of the sensor. Color noise (grain) in high-ISO chemical film moves from frame to frame. Our eyes blend these consecutive changes into pointillist hues. This doesn’t happen with standing noise. The standing noise in Edgertronic recordings places another “visual layer” on the recording. Again, lowering the ISO will minimize this issue.

The two lost stops become constraining because making them up trades sharp, deep focus for noise.
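For concreteness, the stop arithmetic works out like this (a trivial sketch; ISO 100 is the ideal figure mentioned earlier):

```python
# Two lost stops in numbers: each stop halves the light, so the color grid leaves
# a quarter of the light, and making it up with gain alone means quadrupling the ISO.
stops_lost = 2
light_remaining = 1 / (2 ** stops_lost)            # 0.25 of what the monochrome sensor sees
base_iso = 100                                     # the camera's ideal ISO
iso_to_compensate = base_iso * (2 ** stops_lost)   # 400, before any other compensation
print(f"{light_remaining:.0%} of the light; ISO {base_iso} -> ISO {iso_to_compensate}")
```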

Another issue of concern with these cameras is visible stripes, flashing, and occasional vertical streaking. Again, I’m asking a lot of a $5,000 slow-motion camera. For scientific data collection I don’t believe it is a problem. For aesthetic concerns the flashing distracts the eye. From the few tests I have done I have yet to determine the sweet spot between shutter speed, ISO, and minimizing this artifact. It feels like it gets worse at higher shutter speeds. The flashing might be caused by the background of the shot. More tests are needed.

The “Pollination” example was shot with a fairly slow shutter and shows little flashing artifact. These artifacts are clearly evident in the Smoke Rings clip.

Smoke Rings from Dennis Hlynsky on Vimeo.

Camera time: Tue Aug 5 15:23:52 2014
Sensitivity: 1600 ISO
Shutter: 1/2000 Seconds
Frame Rate: 1000 Frames/Second
Horizontal: 1088 Pixels
Vertical: 576 Pixels
Sub-sampling: Off sub-sample
Duration: 10.13 Seconds
Pre-trigger: 10 percent

Perhaps I should explain what I mean by the narrative of the image.

Often when a process is scaled (made faster, slower, bigger or smaller) the criteria for understanding what is going on shift. We compare what we are seeing to what we have seen and deduce some information. I call this knowledge a narrative. Narrative and story are not the same thing. One of the shifts in reading the image occurs when scaling the ratio of the recorded frame rate to the playback frame rate. For personal clarity I think of the recorded frame rate as the image sample rate. Traditionally this is referred to as “frame rate.” Using “frame rate” for both is confusing because the recorded frame rate and the playback frame rate are not identical. One can record at 1000 frames per second and play back at 30. I found the term sample rate to be much more useful because it eliminates the confusion between recording and playback. “Sampling” always occurs while recording. Sample rate, playback frame rate, image quality, color, and sharpness all influence the sense we make of the image sequence. They play an important part in how we obtain narrative knowledge from the video.
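A trivial sketch of that ratio in numbers (the 1000 fps and 30 fps figures are the ones above; the half-second event is hypothetical):

```python
# Sample-rate vs. playback-rate arithmetic for a hypothetical half-second event.
record_fps = 1000          # image sample rate while recording
playback_fps = 30          # frame rate of the finished video
event_seconds = 0.5        # a hypothetical half-second event in front of the lens

frames_captured = record_fps * event_seconds        # 500 images sampled
playback_seconds = frames_captured / playback_fps   # about 16.7 seconds on screen
slowdown = record_fps / playback_fps                # about 33x slower than real time

print(f"{frames_captured:.0f} frames, {playback_seconds:.1f} s of playback, {slowdown:.1f}x slow motion")
```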

In this next video the sample rate is 1000 images per second but the ratio of those frames to the playback rate is manipulated. The video slides between slow motion and real time. One can get a sense of something big in the first part of this video. Slowed motion can sometimes convey a lumbering giant… when fewer of the recorded samples are used, the narrative shifts: the hummingbird moth appears to nervously shake the flower, a little high on a sugar-nectar rush.

As an artist I don’t want the artifacts introduced by a camera to interfere with my intended narrative.

hummingbird moth 02 from Dennis Hlynsky on Vimeo.

Interface Notes

The new MacBooks don’t have Ethernet ports. I suppose one could convert a Thunderbolt port… shooting outdoors introduces a bunch of other design issues. The camera can be set up with a computer through the HTML controller and then disconnected to record independently. “hummingbird moth 02” was shot handheld with the camera disconnected from the computer. Some small computer… like a Raspberry Pi with a small monitor would be all one would need as a camera controller.

The camera interface should have a brighter exit button. It is almost impossible to see in bright light using the Google Chrome browser. There are other interface issues. It would be nice to have a histogram. Even a readout of the value of the brightest pixel or zebra stripes would be helpful. Judging the visual exposure on the computer screen compared to the H264.mov is a bit disheartening. Often it’s very difficult to judge the whites. Since the standing noise is most evident in the shadows, my tendency was to let as much light in as possible to avoid the noise. My judgement of the live image as being properly exposed often resulted in the recording being too bright.
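The camera doesn’t offer such a readout, but as a rough workaround one could pull a frame from the saved clip and check it with a few lines of Python; a sketch, assuming OpenCV can read the file and using a hypothetical file name:

```python
# Rough exposure check on a frame pulled from the saved recording.
# "clip.mov" is a hypothetical file name; requires the opencv-python package.
import cv2

cap = cv2.VideoCapture("clip.mov")
ok, frame = cap.read()                      # grab the first frame
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightest = int(gray.max())             # value of the brightest pixel (0-255)
    clipped = (gray >= 250).mean() * 100    # percent of pixels at or near pure white
    print(f"brightest pixel: {brightest}, clipped highlights: {clipped:.2f}%")
```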

The camera rig for shooting in the pollination garden.

https://www.flickr.com/photos/dhlynsky/14835401855/

 

3D scanning ruminations


The link above is a nice little process video for the David scan. It looks like the light is structured in a pattern of alternating striped boxes. I can’t tell if it is animated or not. Point trackers really like right angles when trying to find good tracks. When I get to a machine where I can run Matchmover I will do a test to see what it does. My intuitive response is that the projector needs to cycle between bright uniform light (to record the color map) and the structured-light projection to determine the contours of the single point of view from the camera. Multiple snapshots are then fused and the UV color map is trimmed to fit the patches of mesh. The smaller the angle of the turntable, the better the color map fits the object without stretching. If you notice, the donkey’s back got no geometry… and was fudged. The camera was set a bit low.

 
The need for the screen in the back is also interesting. It means the object size is limited by the size of the screen.
In the meantime I started thinking about normal maps.
Normal maps are a means of adding surface detail to low-poly objects. The color channels are used as vector information and “distort” the surface. Since this is done at the rendering stage… gamers like this method because it keeps the poly count low for the CPU.
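As a concrete aside (my own sketch, not part of the scanning workflow): each pixel of a normal map stores a unit vector remapped from the -1..1 range into RGB, so reading one back is just the reverse mapping.

```python
# Minimal sketch: decoding one normal-map pixel back into a surface vector.
# The 8-bit RGB values below are hypothetical.
import math

r, g, b = 128, 128, 255          # the typical "flat" normal-map blue

# map 0..255 back to -1..1
nx = r / 255.0 * 2.0 - 1.0
ny = g / 255.0 * 2.0 - 1.0
nz = b / 255.0 * 2.0 - 1.0

length = math.sqrt(nx * nx + ny * ny + nz * nz)
nx, ny, nz = nx / length, ny / length, nz / length   # renormalize

print(f"surface normal: ({nx:.2f}, {ny:.2f}, {nz:.2f})")  # roughly (0, 0, 1), facing the viewer
```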
 
Because all the 3D scanners use the color map to add surface detail, I started wondering whether three lights (I am thinking panels of red, green, and blue LEDs for very wide lights) might produce a facsimile normal map. That could then be applied as a texture to modify the surface. There would need to be some differencing between the color map and the normal map, which would most likely limit the exercise at first to white objects.
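To make the speculation concrete, here is an untested sketch of how those three channels might be turned into a facsimile normal map, assuming a matte white object, known light directions, and a simple Lambertian model; the file name and light directions are hypothetical.

```python
# Speculative sketch of the three-colored-lights idea above (not a tested pipeline).
# Assumes a white, matte object photographed once under red, green, and blue lights
# from three known directions; each channel then records shading from one light.
# Requires numpy and opencv-python.
import cv2
import numpy as np

img = cv2.imread("rgb_lit_object.png").astype(np.float32) / 255.0
b, g, r = cv2.split(img)                     # OpenCV stores channels as BGR

# unit direction of each light, pointing from the object toward the light (hypothetical)
L = np.array([
    [ 1.0, 0.0, 1.0],    # red light, from the right
    [-1.0, 0.0, 1.0],    # green light, from the left
    [ 0.0, 1.0, 1.0],    # blue light, from above
])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Lambertian model: intensity = L . n, so solve n = pinv(L) @ intensities per pixel
I = np.stack([r, g, b], axis=-1).reshape(-1, 3)
n = I @ np.linalg.pinv(L).T
n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-8     # normalize each normal

# pack the normals into a viewable 0-255 image
normal_map = ((n * 0.5 + 0.5).reshape(img.shape) * 255).astype(np.uint8)
cv2.imwrite("facsimile_normal_map.png", normal_map)
```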

Big City

First renders of Big City. This is the opening shot. I have about a third of the countryside built in Maya. In this shot I am using a ramp shader (the light-angle shader in the toon menu) for all of the buildings. Placing several colored directional lights to light the scene creates new additive colors. Choosing the hue of the lights started with a systematic approach: two lights 180 degrees from each other, and 180 degrees opposite on the color wheel. Then a 30-degree rotation in Z at a hue offset of 180 degrees… With two lights one can mix them to create white light. The more lights in the scene, the dimmer they need to be. In the end I had a “sun” light – a single primary light casting shadows – and five other color “wash” lights. Some of these lights have animated intensity. Remembering an old cheat… a negative-intensity light takes light (and color) away from the scene.
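Purely as an illustration (not necessarily how the scene was built), here is what that two-light starting point might look like scripted with maya.cmds; the hue values, names, and intensities are placeholders.

```python
# Sketch of the complementary two-light setup described above, using maya.cmds.
# Hue and rotation values are illustrative, not the ones used in the actual scene.
import colorsys
import maya.cmds as cmds

def make_wash_light(name, hue_degrees, y_rotation, intensity=0.5):
    """Create a directional light whose color is taken from a hue on the color wheel."""
    r, g, b = colorsys.hsv_to_rgb(hue_degrees / 360.0, 1.0, 1.0)
    light = cmds.directionalLight(name=name, rgb=(r, g, b), intensity=intensity)
    transform = cmds.listRelatives(light, parent=True)[0]
    cmds.setAttr(transform + ".rotateY", y_rotation)
    return light

# two lights 180 degrees apart in space, hues 180 degrees apart on the wheel,
# so together they mix back toward white
make_wash_light("washA", hue_degrees=30,  y_rotation=0)
make_wash_light("washB", hue_degrees=210, y_rotation=180)
```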

The icQTown script was used for the city (a real fast build). When using icQTown I found that the larger the polygon, the taller the building, so a proportional scale of the center of the grid creates a nice “downtown cluster.” The rest of the houses and trees are placed using Level Tools. Like I said… as soon as you get the materials in hand it is a very fast build.

Rendering shadows using the toon shader is a problem I don’t quite know how to resolve. I began using “depth map” shadows but I get weird-looking flashing shadows filled with triangles. I believe this is what is called “unwanted artifacts.” Placing a 3D texture (rock) in the shadow color produced a wonderful grainy soft shadow, but the unwanted artifacts prevented its use. I am planning to use a shadow pass and add the grain in post.

having wonderful time… wish you were here.

 

 

STEMS – V3

Andrew Hlynsky provided the original audio tracks for this work.
1. The tracks were made into keyframes in After Effects using the keyframe assistant.
2. The sets of keyframes were scaled so the max variation was five units.
3. The keyframes were copied and pasted into Microsoft Excel.
4. A simple =(current cell) + (cell directly above) formula produced a running total of progressively increasing values.
5. Additional columns of Maya MEL commands were added, then copied and pasted into the rotation channel of the cylinders (a rough sketch of steps 4–6 appears after this list).
6. Long nHair curves were attached to the cylinders. Gravity and wind were set in opposite directions and balanced to make the nHair curves float.
7. The “chalk” paint effect was added to the nHair curves, as well as “watercolor spatter.mel”.
8. All was processed back in After Effects.
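Purely as an illustration of what steps 4–6 amount to (not the actual workflow, which went through Excel and pasted MEL), a hedged Python sketch with hypothetical amplitude values and object name:

```python
# Sketch of steps 4-6: per-frame audio amplitudes -> running total -> rotation keyframes.
# The amplitude list is hypothetical; in the original workflow these values came from
# After Effects' keyframe assistant, and Excel produced the running total.
import maya.cmds as cmds

amplitudes = [0.2, 1.4, 3.0, 4.8, 2.1, 0.6]     # per-frame audio levels, scaled to a max of ~5

# the "=(current cell) + (cell directly above)" formula is just a running total
running = []
total = 0.0
for value in amplitudes:
    total += value
    running.append(total)

# key the accumulated values onto a cylinder's rotation, one key per frame
cylinder = "stemCylinder1"                      # hypothetical object name
for frame, rotation in enumerate(running, start=1):
    cmds.setKeyframe(cylinder, attribute="rotateY", time=frame, value=rotation)
```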