CameraTracker

From RMIT Visual Effects

The CameraTracker is the glue that sticks live-action footage, compositing and 3D together. Its job is to calculate the camera movement from the footage; at the same time it reconstructs the rough geometry of the scene.

  • Limitations: The CameraTracker does not like any kind of motion other than camera motion, so any movement within the scene (figures, cars, etc.) will need to be masked out.
  • Remember: It is very difficult to get good tracks from low-resolution or poor-quality footage. Also... pan shots (a camera rotating on a tripod) are easier to camera track than dolly shots (where the camera physically moves through the scene). Also... fast-moving cameras are troublesome.
  • Important: Avoid at all costs shiny and reflective surfaces, especially floors and rain puddles; these will confuse the camera track. The same goes for moving water.
The 'CameraTracker' tab of the CameraTracker properties.
The 'AutoTracks' tab of the CameraTracker properties.
The 'Scene' tab of the CameraTracker properties.
The tabs of the CameraTracker
  • CameraTracker: Here the CameraTracker is set up, the track is solved and its output is exported as a Scene. It's possible to use the CameraTracker without ever leaving this tab (though this is not advised).
  • UserTracks: The CameraTracker automatically decides which points to track. If you wish to define your own key points in the scene, this is where you do it. Only use this if you think the CameraTracker has missed obviously useful points.
  • AutoTracks: Here are plotted the tracks that the CameraTracker produces. This is where you can adjust the maximum error value, which is useful if your error value (as shown in the CameraTracker tab) is much greater than 1.
  • Settings: Adjust the values in this tab only if your track is giving you a lot of trouble. The most important values are 'Number of Features' (which should be increased to improve the track) and 'Feature Separation' (which should be lowered to compensate for the increase in features).
  • Scene: Here you can scale the entire scene. This should be done to get the approximate scale of the scene to match Maya's.
  • Output: Here live the animated values that the CameraTracker produces. Best to leave alone.

Once you have your footage, the camera track workflow is as follows:

  1. Check footage. As with all tracking apps, the footage must first be inspected for any problems. It should not be blurry, nor contain in-scene motion (i.e. movement within the scene as opposed to movement of the camera). Ensure that the footage has been converted to an image sequence. A high-quality format is recommended, such as .png or JPEG at 100% quality.
  2. Project Settings. As usual, ensure that your Project Settings match the format and .fps of your footage.
  3. Input camera properties. Input known data about the camera: the focal length and the film back size (i.e. the physical size of the camera sensor). If it is a common camera, the film back can be set via the 'Film Back Preset' drop-down menu. A width and a height are needed. If your camera specs only give something like '1/3', this can be translated using the conversion chart here. A list of common film back sizes is here. The film back size of the Canon 700D is 22.3 mm x 14.9 mm. For 'Lens Distortion' input 'Unknown Lens' and tick 'undistorted footage'.
  4. Start tracking. Press the 'Track' button. This will examine the footage for features and track them. Just as in the Tracker node, a feature is a point or corner. Unlike the Tracker node, the CameraTracker tracks hundreds of features, and instead of each track lasting the entire length of the shot (as in the Tracker node), a track can last for as little as three frames. When the tracking is finished, the screen will be full of tracked points, each with a little tail; the length of the tail indicates the frame length of the track.
  5. Error. This value should ideally be below 1; certainly anything above 2 indicates a very poor track. See troubleshooting below.
  6. Solve. Solving turns the tracked data into 3D data.
  7. Set ground plane. In the Viewer you will see many little points that correspond to features in the 3D scene. Select 'Select Vertices' from the drop-down menu of the 3D selection knob; this will enable you to select these points. Select a group of vertices that correspond to the ground plane, then right-click on them and select 'Set as ground plane'. This will ensure that your 3D scene is in the correct orientation.
  8. Export. From the drop-down menu, select 'Scene+' and press 'Create'. This will automatically output a linked Scene, Camera, ScanlineRender, LensDistortion, and a CameraTracker point cloud.
  9. Make a card. Attach the Viewer to the CameraTracker and ensure that the CameraTracker's properties are open (by double-clicking on the node). Select some ground-plane points again, right-click and choose 'Attach a card'. This will create a Card node aligned to the ground plane. Attach the card to the Scene node that the CameraTracker created.
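The film back and focal length entered in step 3 jointly determine the field of view the solver assumes, so a mistake there skews the whole solve. As a sanity check, the standard pinhole-camera relationship can be computed directly. This is a minimal sketch; the 35 mm lens is an invented example value, not from the article.

```python
import math

def horizontal_fov(focal_length_mm, film_back_width_mm):
    """Horizontal field of view in degrees, from the pinhole-camera model:
    fov = 2 * atan(film_back_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(film_back_width_mm / (2 * focal_length_mm)))

# Example: a 22.3 mm wide APS-C film back with a (hypothetical) 35 mm lens.
fov = horizontal_fov(35.0, 22.3)
print(round(fov, 1))  # roughly 35 degrees
```

If the field of view Nuke reports after solving differs wildly from this figure, the film back or focal length entered in step 3 is probably wrong.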

Troubleshooting:

  • Improving the footage. This might be done by increasing the contrast (thereby making the key features more visible) or by decreasing the noise using a Bilateral, DegrainBlue, DegrainSimple or Denoise node.
  • Lowering error values in the 'AutoTracks' tab. Deleted tracks will show up red in the Viewer. These parameters (on the right-hand side of the tab) should be adjusted in pairs:
    • track len - min + Min Length (raise the 'Min Length' slider to cut off the low values)
    • error - rms + Max Track Error (lower the 'Max Track Error' slider to cut off the high values)
    • error - max + Max Error (lower the 'Max Error' slider to cut off high values)
  • You can restart your track with new settings in the 'Settings' tab. First tick 'Preview Features'; this will make the features show up in the Viewer. The important parameters are 'Number of Features' (which should be increased) and, to compensate, 'Feature Separation' (which should be decreased). When you have done this, start the tracker again. More troubleshooting info is on this page.
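The AutoTracks sliders above all do the same kind of thing: cull tracks whose statistics fall outside a threshold. A toy sketch of that filtering logic, with invented track data purely for illustration:

```python
def filter_tracks(tracks, min_length, max_rms_error):
    """Keep only tracks at least min_length frames long whose rms error is
    at or below max_rms_error (mimics 'Min Length' / 'Max Track Error')."""
    return [t for t in tracks
            if t["length"] >= min_length and t["rms_error"] <= max_rms_error]

# Hypothetical tracks: (frame length, rms solve error in pixels).
tracks = [
    {"length": 3,  "rms_error": 0.4},   # culled: too short
    {"length": 40, "rms_error": 2.5},   # culled: error too high
    {"length": 25, "rms_error": 0.8},   # kept
]
kept = filter_tracks(tracks, min_length=5, max_rms_error=1.0)
print(len(kept))  # 1
```

Raising 'Min Length' or lowering 'Max Track Error' simply tightens these thresholds, discarding the red (rejected) tracks before the scene is re-solved.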

Once you have completed an effective track, you will have a piece of geometry (the Card) and a camera, both of which correspond to the 'reality' of what the camera saw. This can now be exported into a 3D app. The camera movement data can be exported as a .chan file through the file menu parameter of the Camera (the little folder icon in the front tab). By attaching a WriteGeo node to the scene, Alembic (.abc) or .fbx data can be exported, which will include all scene data, including geometry.
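A .chan file is just plain text with one whitespace-separated line per frame; as I understand the format Nuke writes, the columns are frame number, translate x/y/z, rotate x/y/z, and vertical FOV. A minimal sketch of reading one line (the sample values are invented):

```python
def parse_chan_line(line):
    """Parse one line of a .chan camera export, assuming the column order
    frame, tx, ty, tz, rx, ry, rz, vfov (verify against your own export)."""
    frame, tx, ty, tz, rx, ry, rz, vfov = line.split()
    return {
        "frame": int(frame),
        "translate": (float(tx), float(ty), float(tz)),
        "rotate": (float(rx), float(ry), float(rz)),  # degrees
        "vfov": float(vfov),
    }

sample = "1 0.0 1.5 10.0 0.0 -90.0 0.0 24.4"
cam = parse_chan_line(sample)
print(cam["translate"])  # (0.0, 1.5, 10.0)
```

Because the format is this simple, a .chan export is a handy way to move just the camera animation between apps when you don't need the full Alembic/.fbx scene.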

You can now import this stuff into your 3D app, and render out a moving image that can be composited onto your footage.

A tutorial from the Foundry is here. Another one by a bloke with a British accent.