This technology pertains to the creation of 4D models of real people, rendered as high-resolution characters in HD environments that appear real. The modeled characteristics include the behavior of the character model; changes in tone of voice and the quality of the voice synthesis; the way the character model moves its facial muscles and the characteristics of its eye blinks; the person's walk, programmed into the character model; and the person's manner of giving speeches, also programmed into the character model. The scenery is programmed as well, including outdoor environmental sounds and effects, such as flowers and trees swaying in the wind, seasonal changes to the plants and the effects of the weather, and the position of the sun and its reflections on the models based on the time of year and the time of day.
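As one minimal sketch of the scenery programming described above, the sun's position for a given time of year and time of day can be approximated with a standard textbook formula for solar declination and elevation. The function below is illustrative only; a production renderer would use a more precise solar ephemeris.

```python
import math

def solar_elevation(day_of_year: int, hour: float, latitude_deg: float) -> float:
    """Approximate solar elevation angle in degrees for a given day of the
    year, local solar hour (0-24), and latitude. A simple approximation,
    sufficient to drive sun placement and reflections in a rendered scene."""
    # Solar declination (Cooper's approximation), in degrees.
    declination = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle: 0 degrees at solar noon, 15 degrees per hour away from it.
    hour_angle = 15.0 * (hour - 12.0)
    lat = math.radians(latitude_deg)
    dec = math.radians(declination)
    ha = math.radians(hour_angle)
    # Standard solar elevation formula.
    elevation = math.asin(
        math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    )
    return math.degrees(elevation)
```

The returned angle can then feed the lighting pass: a high elevation means overhead light and short shadows, a negative elevation means the sun is below the horizon.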
This software would be able to create 4D video animations based on these modeling characteristics and on input transcriptions, including text for speeches. This would allow multiple speeches to be produced at the same time, allow review of how a speech would come across before it is given, and, for security purposes, permit cloning of the speech presenter. The input data for character modeling would be standardized, based on a 3D camera that scans the person three-dimensionally, capturing coordinates of distance and movement while the person goes through a process of calibrating the 4D model. The characteristics of each 4D video scene output can be changed through variables: an interface simulates various pan-and-tilt camera angles, menus expose variables that change the scenery characteristics, and even the mood and tone of the speech can be adjusted. The 4D modeling also supports walk-by scenes: multiple persons can be scanned into the modeling software, and multiple speeches can coexist in the same 4D model, such that each person has their own model in the 4D model system and the models can be programmed to interact in conversation with each other.
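The scene-variable and transcript-input workflow above could be organized roughly as follows. All names here (SceneConfig, RenderJob, and their fields) are hypothetical illustrations of how camera angles, scenery, and speech mood might be exposed as adjustable variables, and of how several speech renders could be queued at once.

```python
from dataclasses import dataclass, field

@dataclass
class SceneConfig:
    """Adjustable variables for one 4D video scene output (illustrative)."""
    camera_pan_deg: float = 0.0    # simulated pan angle of the virtual camera
    camera_tilt_deg: float = 0.0   # simulated tilt angle
    season: str = "summer"         # drives plant appearance and weather effects
    mood: str = "neutral"          # mood/tone applied to the synthesized speech

@dataclass
class RenderJob:
    """One speech rendered for one scanned character model: a transcript
    plus scene variables. Multiple jobs can be submitted together, which is
    how several speeches could be produced at the same time."""
    speaker_id: str
    transcript: str
    scene: SceneConfig = field(default_factory=SceneConfig)

def build_render_queue(jobs):
    """Hypothetical sketch: keep only jobs with a non-empty transcript and
    return their speaker IDs in submission order."""
    return [job.speaker_id for job in jobs if job.transcript.strip()]
```

For example, two scanned speakers could each be given their own transcript and scene settings, and both jobs submitted to the same queue, matching the multi-person, multi-speech capability described above.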
Other characteristics can be simulated as well, including interaction with staff, models of the press in the background of speeches, simulated speech audiences, and unique fly-over camera views that show highly detailed, optionally slow-motion, views of cheering fans with customized clothing, signage, and conversations based on a variety of linguistic patterns that correlate to the timing of the speech. This allows HD crowd modeling in which the crowd models react to the press based on the speech. The detail of the modeling is such that it can be zoomed in to see the rendered details, such as the intricate details of the lawn, what the lawn would look like based on when it was last mowed, and even variables that differ by lawn-mowing pattern. There are modeling engines for the aging of buildings and infrastructure, such that paint eventually shows its age and develops webbed cracks rather than an always-perfect look. This, too, is a variable that can be modified: the infrastructure modeling can treat the buildings and infrastructure as always perfectly maintained, or it can assume that certain amounts of time pass before the infrastructure is repainted, for instance.
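The aging variable for paint described above could be sketched as a single weathering factor. The function name, the 0-to-1 scale, and the 20-year full-aging horizon are all illustrative assumptions, not details taken from the source; they simply show how a maintenance interval could reset the apparent age while an unmaintained building ages toward fully cracked paint.

```python
from typing import Optional

def paint_aging_factor(years_since_painted: float,
                       maintenance_interval_years: Optional[float]) -> float:
    """Return a weathering factor from 0.0 (freshly painted) to 1.0 (fully
    aged, webbed cracks). The linear curve and the 20-year horizon at which
    aging saturates are assumed values for illustration.
    If a maintenance interval is given, the paint is treated as repainted
    every interval, so the effective age wraps around; None means the
    building is never repainted."""
    if maintenance_interval_years is not None:
        years_since_painted = years_since_painted % maintenance_interval_years
    # Aging accumulates linearly and saturates at the assumed 20-year horizon.
    return min(1.0, years_since_painted / 20.0)
```

A renderer could map this factor onto texture blending, so a factor near 0.0 selects the pristine paint texture and a factor near 1.0 selects the cracked, weathered one.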