- Nuke's 3D environment
- The camera tracker
- Integrating a 3D object into a moving camera shot
Though primarily a 2D application, Nuke also has a 3D environment. It is not as fully featured as that of 3ds Max or Maya; for example, it is very difficult to create 3D geometry in it. However, it integrates very well with Nuke's 2D environment.
Full lighting and rendering are rarely used in Nuke's 3D world. The most common uses for Nuke's 3D environment are 2.5D compositing and camera projection, which are explained on their respective pages.
More complex than 2.5D compositing and camera projection is camera tracking. So what is camera tracking? Well... we have looked at Nuke's Tracker, which captures translation (movement in x and y), rotation (in degrees) and scale (expressed as a percentage of size). Clearly this is insufficient when true 3D information is required. Consider the following two scenarios:
- The camera is moving a small distance from left to right. The compositor wants to add a set of curtains to a window that faces the camera on a building 50 meters away.
- The camera is rotating around a traffic sign, which is located 2 meters away from the camera.
Scenario 1 requires no more than translation information, for which the Tracker node will be sufficient. Scenario 2 requires a full 3D camera track. This is what the CameraTracker node provides.
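The difference between the two scenarios comes down to parallax. Here is a minimal sketch using a pinhole projection (all numbers are made up for illustration; `f` stands in for a 35 mm focal length) showing why the distant window barely moves on screen while the nearby sign moves a lot:

```python
def project(point, cam_x, f=35.0):
    """Pinhole projection of a 3D point (x, z), in metres, for a camera at
    (cam_x, 0) looking down +z. Returns the horizontal position on the
    film back in millimetres."""
    x, z = point
    return f * (x - cam_x) / z

# Scenario 1: a window 50 m away, camera slides 0.5 m from left to right.
shift_far = project((0.0, 50.0), 0.5) - project((0.0, 50.0), 0.0)

# The same camera move, but an object only 2 m away (scenario 2 territory).
shift_near = project((0.0, 2.0), 0.5) - project((0.0, 2.0), 0.0)

print(shift_far)   # a small, near-uniform shift: a 2D track can cover it
print(shift_near)  # 25x larger: parallax that demands a true 3D solve
```

The far object's screen motion is effectively a simple 2D translation, which is all the Tracker node needs; the near object's motion depends on depth, which only a camera solve can account for.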
All camera trackers work in the following manner: first, 2D information is captured from the footage in the form of features. A feature is a dot, a corner or a blob (just like a pattern in the Tracker node). Unlike the Tracker, the camera tracker will automatically look for features, gathering hundreds of them in one go. Once the camera tracker has this information, it processes it to extract 3D information. This is where the magic happens.
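The "magic" step can be sketched in miniature. A real solver estimates the full camera pose, focal length and the 3D feature positions all at once (bundle adjustment), but the principle is to find the camera that minimises reprojection error: the gap between where features were observed and where a candidate camera would project them. This toy version (invented numbers, one unknown: the camera's x position) illustrates that idea:

```python
# Known 3D points (x, z) and a pinhole projection, as in the parallax sketch.
points = [(-1.0, 4.0), (0.5, 6.0), (2.0, 10.0)]

def project(point, cam_x, f=35.0):
    x, z = point
    return f * (x - cam_x) / z

# 2D features as observed by a camera actually sitting at x = 0.8.
features = [project(p, 0.8) for p in points]

def reprojection_error(cam_x):
    """Sum of squared differences between the observed features and where a
    candidate camera would project the known 3D points."""
    return sum((project(p, cam_x) - obs) ** 2 for p, obs in zip(points, features))

# Brute-force search for the camera position that best explains the features.
best = min((x / 100.0 for x in range(-200, 201)), key=reprojection_error)
print(best)  # recovers ~0.8
```

A production solver does the same search over far more unknowns, with hundreds of features, which is why it wants so many of them.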
The usual outputs of a camera tracker are:
- A camera that matches the original in its focal length, film back size, animation, etc.
- The ground plane, which should correctly relate to the orientation of the camera. This will be needed in order to correctly reproduce cast shadows.
- An item of geometry that relates correctly to the geometry of the scene. This need be no more complex than a simple locator (or a point). It serves to relate items in the 3D app to items in the footage.
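Focal length and film back size matter because together they determine the angle of view, which is what actually has to match the original lens. A small sketch of the relationship (the 24.576 mm default horizontal aperture is, to the best of my knowledge, what Nuke's Camera node ships with; treat it as an assumption):

```python
import math

def horizontal_fov(focal_mm, haperture_mm=24.576):
    """Horizontal angle of view in degrees, from the focal length and the
    horizontal film back (aperture) width, both in millimetres."""
    return math.degrees(2 * math.atan(haperture_mm / (2 * focal_mm)))

print(round(horizontal_fov(50.0), 1))  # a 50 mm lens on this back: ~27.6 degrees
```

This is why a solved camera reports both numbers: a 50 mm lens on a large film back sees the same angle as a shorter lens on a small one, and the solve is only meaningful as a pair.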
Most times a camera tracker is used, some sort of 3D authoring application is also employed. The usual camera tracker to 3D app workflow is:

1. Track the footage in Nuke with the CameraTracker node.
2. Export the solved camera (and any reference geometry) to the 3D application.
3. Build, animate and light the 3D elements against the imported camera.
4. Render the 3D elements and composite them back over the footage in Nuke.
Though a 3D authoring application is usually involved in a camera tracking scenario, it is possible to stay within Nuke entirely. I will discuss such scenarios in class.
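One common route for handing a solved camera to another application is the .chan file, a plain whitespace-delimited format that (as far as I recall) Nuke's Camera node can import and export directly. The values below are hypothetical; a real file would come from the CameraTracker solve:

```python
# One line per frame: frame tx ty tz rx ry rz, with rotations in degrees.
# (hypothetical values for illustration only)
frames = [
    (1, 0.00, 1.50, -10.0, 0.0, 0.0, 0.0),
    (2, 0.12, 1.50, -10.0, 0.0, 1.2, 0.0),
]

with open("tracked_camera.chan", "w") as f:
    for row in frames:
        f.write(" ".join(f"{v:g}" for v in row) + "\n")
```

Because it is just text, the same file can be read by most 3D packages (or by a short script), which is what makes it a convenient bridge between Nuke and a 3D app.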
You have a choice of assignments:
Option 1 (aka the 'easy option'): You are to make a 2.5D composite using cards. You may use camera projection should you wish (in fact it is recommended). It should feature a small camera movement. It may include live action footage, or be sourced entirely from still images.
Option 2 (aka the 'challenging option'): You are to shoot some footage and, using the camera tracker, incorporate 3D output. This may be in the form of a 3D model within Nuke itself, or you may incorporate a 3D authoring app. This is the more challenging option, but I will be available to help you should you need it.
| Example | Description | Link |
| --- | --- | --- |
| A 2.5D scene (01) | A reasonably complex 2.5D scene. Many layers have been used in a jungle scene, and a slight camera move has been emulated. | Download (160 MB) |
| A 2.5D scene (02) | A simple 2.5D scene. Again, a simple camera animation. | Download (30 MB) |
| 2D to 3D integration | A 3D .obj has been incorporated into some footage using a CameraTracker. | Awaiting upload |