During this session, I gained knowledge of the software 3DEqualizer, learning its interface, uses and tracking techniques. We learned how to track a piece of footage (in this instance a video of Camden Lock) and use the tracked points to later convert the scene into 3D space. The process requires stable, key features of the footage to be tracked across the whole timeline of the video; this means high-contrast areas and objects in the scene that do not move around a lot. A key point is to never track water, as it is very unstable and constantly changing. The tracking should cover every corner of the footage in order to create a clean, unbiased set of tracking points, so that the footage can be warped accurately later on when the lens distortion is applied.
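The idea of spreading tracking points across every corner of the frame can be sketched as a simple coverage check. This is a minimal illustration in Python (not part of 3DEqualizer's own tooling, and the grid size is an arbitrary choice): point positions are assumed to be normalised image coordinates between 0 and 1, the frame is split into a coarse grid, and any empty cells flag under-tracked regions that could bias the camera solve.

```python
# Minimal sketch of checking that 2D tracking points cover the frame.
# Points are (x, y) in normalised image coordinates, 0..1.

def coverage_gaps(points, grid=3):
    """Return the (col, row) cells of a grid x grid frame with no points."""
    occupied = set()
    for x, y in points:
        col = min(int(x * grid), grid - 1)
        row = min(int(y * grid), grid - 1)
        occupied.add((col, row))
    all_cells = {(c, r) for c in range(grid) for r in range(grid)}
    return sorted(all_cells - occupied)

# Example: points clustered in one corner leave most of the frame untracked.
clustered = [(0.1, 0.1), (0.2, 0.15), (0.05, 0.3)]
print(coverage_gaps(clustered))  # many empty cells -> a biased solve is likely
```

An empty result would mean every region of the frame contributes at least one point to the solve.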
After the tracking is complete and there is an even spread of points, the lens parameters are changed to match the original camera, and the solve is refined through parameter adjustments to ensure the whole track is smooth and does not have any drastic ‘glitches’ in the playback of the video. The camera and the tracked points can then be exported and placed in Maya, where the process of scene blocking can begin.
Once all of the data has been calculated and cleaned, the points can be exported and processed in Maya. The modelling stage of the matchmove can then be built up around these tracking points.
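To give a sense of what the exported point data carries into Maya, here is a small Python sketch. The one-point-per-line `name x y z` layout is an assumed example format for illustration, not a documented 3DEqualizer export; in Maya each parsed position could drive a locator for scene blocking.

```python
# Minimal sketch of reading solved 3D point data for scene blocking.
# The "name x y z" line layout is an assumed format, not 3DEqualizer's own.

def parse_points(text):
    """Parse lines of 'name x y z' into a dict of name -> (x, y, z)."""
    points = {}
    for line in text.strip().splitlines():
        name, x, y, z = line.split()
        points[name] = (float(x), float(y), float(z))
    return points

sample = """\
railing_01 1.20 0.85 -3.40
window_02 2.05 1.60 -5.10
"""

for name, pos in parse_points(sample).items():
    # In Maya, this position could be turned into a locator, e.g. with
    # maya.cmds.spaceLocator(p=pos, n=name), to block out the scene.
    print(name, pos)
```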
During this process, I built rough models using the locators as spatial reference points to map the real scene into 3D space. Along the way, I added additional details such as a railing and windows to give a clearer indication of the objects available for potential interaction.
Below is the final modelled product of my track, where a 3D character can potentially be animated interacting with the live-action scene in preparation for the compositing process.