Digests from CG Articles
Turtle Talk
Prior to the on-set motion capture, the team had the actors perform expressions while being scanned with Disney Research's Medusa system. The Muse team decomposed those scans into components within Fez, ILM's facial animation system, which animators used to re-create the expressions. After the facial motion capture, the layout and R&D teams triangulated the markers in the 2D images into 3D space, frame by frame, and then applied them to controls on those components in Fez.
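The 2D-to-3D marker step is classic multi-view triangulation. Below is a minimal sketch — not ILM's actual pipeline code, just a textbook linear (DLT) triangulation with NumPy, using two made-up toy cameras and one made-up marker:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen by two cameras.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image positions of the same marker in each view.
    Returns the 3D point in world space.
    """
    # Each view contributes two linear constraints A @ X_homogeneous = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null-space vector = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]           # de-homogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])           # a hypothetical marker in 3D
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In production this is done per marker, per frame, with calibrated cameras; the recovered 3D trajectories then drive the facial-rig controls.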
Helping animators create believable dragons and engaging characters was a new generation of animation tools the studio has named Premo, along with a new rigging system designed to take advantage of fast graphics cards. The crew on this film was the first to use the new system.
Helping the lighting team achieve that look was a new-generation lighting system the studio calls Torch. "Lighters can see quick renders of their setups," DeBlois says. "They can manipulate and tweak the light in each shot. It allows more subtlety."
As the animators work, they can see keyframes rendered in real time with a standard lighting setup that gives them a sense of volume, dimension, light, and shadow.
detail-oriented work
"The next step might be having a setup in which the lighters work concurrently with the animators so they can animate to the lighting."
The maturation of the animation and lighting tools, which has opened new ways of working for the animators and lighting artists, parallels the story in the film they worked on.
http://www.cgw.com/Publications/CGW/2013/Volume-36-Issue-7-Nov-Dec-2013-/Winter-Wonderland.aspx
The rigging team devised a tool they named Spaces that gave animators a convenient way to reconfigure the rig. “He has one rig with mechanisms for connecting and disconnecting,” Hanner says. Working in Autodesk’s Maya, an animator could click a button to have Olaf’s head fall off and still animate his body walking away.
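The attach/detach idea can be illustrated in a few lines. This is a toy sketch loosely inspired by the described "Spaces" mechanism, not Disney's rig code; the class and a 1D "position" are purely illustrative:

```python
class Attachment:
    """A rig part that can follow its parent or be disconnected from it."""

    def __init__(self, local_offset):
        self.local_offset = local_offset   # offset from the parent when attached
        self.attached = True
        self.world_pos = None              # frozen world position once detached

    def evaluate(self, parent_world_pos):
        if self.attached:
            return parent_world_pos + self.local_offset
        return self.world_pos              # independent of the parent now

    def detach(self, parent_world_pos):
        # Freeze the current world position, then disconnect from the parent.
        self.world_pos = parent_world_pos + self.local_offset
        self.attached = False

head = Attachment(local_offset=2.0)        # head sits 2 units above the body
assert head.evaluate(parent_world_pos=0.0) == 2.0

head.detach(parent_world_pos=0.0)          # head "falls off" at this spot
body = 5.0                                 # body walks away...
```

After `detach`, evaluating the head ignores the body entirely — the one-click "head falls off while the body walks away" behavior.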
“The most important thing was bringing the nuances and subtleties of the hand-drawn characters to the CG characters.”
“The traditional CG hair interaction techniques, which involve curves, digital brushes, and digital combs, didn’t work well. So we wrote a new software package we call Tonic. It gives our hair artists a sculpture-based tool set.”
Typically the modelers would first create rough proxies that showed shapes or rough directions. Once approved, the hair artists began refining those shapes with Tonic. In Tonic, they could see pipes or tubes that represented hair and could toggle individual strands of hair within to see the flow. “Working with these volumes gives hairstyles complete fullness,” Hanner says. Once groomed and structured with Tonic, the hair moved into Disney’s simulation package called “Dynamic Wires.” “The transition is automatic,” Hanner says. “But, the artists can rearrange and procedurally regenerate subsets of data the simulation works with.”
To create the snow and manage the interaction between snow and characters, the team developed two systems: Snow Batcher for shallow snow, and Matterhorn for deep snow and close-ups. “The standard slight [foot] impressions wouldn’t work, so we created our Snow Batcher pipeline. It could define which characters disturbed the snow and how deep, automatically create foot impressions, add additional snow, and kick it up. We put a lot of information into the database for each shot.”
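The shallow-snow bookkeeping can be pictured as stamping impressions into a heightfield. A hypothetical sketch in the spirit of the description above — not the Snow Batcher code; grid size, depths, and the rim-piling rule are all made up:

```python
import numpy as np

snow = np.full((64, 64), 0.30)             # 30 cm of snow everywhere

def stamp_footprint(snow, center, radius, depth):
    """Press a circular impression into the heightfield and pile the
    displaced snow around the rim (rough mass conservation)."""
    yy, xx = np.mgrid[0:snow.shape[0], 0:snow.shape[1]]
    dist = np.hypot(yy - center[0], xx - center[1])
    inside = dist <= radius
    rim = (dist > radius) & (dist <= radius + 2)
    displaced = np.minimum(snow[inside], depth).sum()
    snow[inside] -= np.minimum(snow[inside], depth)   # press the foot in
    snow[rim] += displaced / max(rim.sum(), 1)        # kick snow up at the edge
    return snow

stamp_footprint(snow, center=(32, 32), radius=4, depth=0.25)
```

A shot database would then record, per character, which feet stamp, how deep, and when — which is the automation the quote describes.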
“We used raytracing for the large ice-palace environments, which were very, very expensive. For the snow, we generated large point clouds for subsurface scattering and used deep shadow maps.” “We ended up shaping shallow and deep subsurface scattering lobes according to real data and then combining the two different effects. It isn’t raytracing through a volume; it’s an approximation. But, we got a nice lighting effect.” For the deep snow and snow that the characters interact with, the lighting team used a completely different shading system that lit the snow as if it were a volume.
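The "shallow plus deep lobe" idea can be shown with a toy falloff model. This is not Disney's shader — just an illustration of blending two exponential scattering lobes, with all mean free paths and weights invented:

```python
import math

def lobe(r, mean_free_path):
    """A single scattering lobe as a simple exponential falloff over
    distance r (in mm) from the point where light enters the snow."""
    return math.exp(-r / mean_free_path)

def snow_diffusion(r, shallow_mfp=0.5, deep_mfp=5.0, deep_weight=0.35):
    """Two-lobe approximation: a tight lobe preserves surface detail,
    a wide lobe carries light bleeding deep through the snow."""
    return (1.0 - deep_weight) * lobe(r, shallow_mfp) + deep_weight * lobe(r, deep_mfp)
```

Shaping the two lobes "according to real data" would mean fitting those parameters to measured snow — the payoff being a volumetric-looking result without raytracing through a volume.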
Disney uses a custom cloth simulator called Fabric, which the team updated to handle the bunad costumes. By the end of the film, the team had created 245 simulation rigs for the clothing, more than double the number used for all their previous films combined.
http://www.cgw.com/Publications/CGW/2013/Volume-36-Issue-7-Nov-Dec-2013-/Gravitational-Pull.aspx
In early 2010, long before production started, Paul Debevec’s group at ICT/USC had demonstrated a light stage system in which light from LEDs surrounding an actor provided changing lighting conditions, while high-speed cameras captured the actor’s face.
Now, it is about the storytelling, the character animation, the personalities, and aspects beyond the technical achievements.
The storytelling credibility that Gravity gives to 3D feels like a breakthrough statement on how the technique can amplify the audience experience in a very visceral, emotionally fulfilling way.
“Look, there are fundamental skills you need that go beyond knowing how to write Maya scripts. You need to know the basics, the fundamental aspects of the creative process. You need to know how to think about different formulas for creating new media. And, you must have the flexibility to adapt. That will allow them to survive in the volatile job market.”
Reel FX Animation Studios' feathering system, Avian: the proprietary system enabled the artists to generate feathers of all sizes and shapes, and then groom the birds, viewing the results in real time within the Autodesk Maya viewport prior to rendering.
“That was one of our goals, to make it artist-friendly, so you didn’t have to be a programmer to use it,”
Avian took approximately a year to develop, and many of the features were devised on the fly, so to speak, as the need arose.
During development, Michalakeas, Pitts, and Sawyer did a tremendous amount of physical and theoretical research, including examining a real turkey wing for the lay of the feathers and studying numerous SIGGRAPH papers.
Indeed, the artists had to contend with the constant colliding and stretching whenever the turkeys moved. Initially, they devised a plan that involved solving for a simulation, but after a long, painful process, they changed directions and instead of trying to fix the problem, developed a system that would prevent the colliding in the first place. Called Aim Mesh, the solution essentially placed a polygon cage over the Avian feather system to drive the orientation of the feathers. Animation would then rig up the Aim Mesh so when there was squashing of the body, such as when a turkey lifted its leg, rather than have penetration, custom deformers on the Aim Mesh would simply pull up the mesh.
“It prevented 99 percent of our collision right off the bat; the remaining 1 percent that was colliding was barely visible, so we just let that go,” says Pitts.
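The Aim Mesh idea — feathers whose orientation is driven by target points on a low-res cage, so deforming the cage reorients feathers instead of letting them interpenetrate — can be sketched in 2D. This is a toy illustration, not Reel FX's implementation; the names and numbers are invented:

```python
import math

def aim_angle(root, target):
    """Angle (radians) a feather rooted at `root` must take to point at
    its target vertex on the driving cage."""
    dx, dy = target[0] - root[0], target[1] - root[1]
    return math.atan2(dy, dx)

root = (0.0, 0.0)
flat_target = (1.0, 0.0)       # cage at rest: the feather lies flat
lifted_target = (1.0, 1.0)     # a deformer pulls the cage up near a lifted leg

assert aim_angle(root, flat_target) == 0.0
```

When the leg squashes the body, a custom deformer pulls the cage target up, the feather rotates up with it, and no penetration occurs — prevention rather than after-the-fact collision solving.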
In Free Birds, the turkeys often use their wings as hands, so the feathers had to act sort of like fingers.
With help from Tom Jordan, modeling supervisor, the crew devised a feather-stacking solution, as bunches of feathers were stacked atop one another, with the thickness controlled by animation.
To keep rendering manageable, the crew employed some tricks to help cut back on the data crunching
A final note on Gravity's influence on filmmaking: the film was primarily a conversion from 2D to 3D. Increasingly, the technique is being used because it can be more manageable to shoot live action in 2D and then convert.
Another efficiency involved a nonlinear approach to constructing a shot. "As soon as layout was launched, we could start animating and lighting, so all three departments could work at the same time, which is unique," Peterson says. "Each department fed the other one cache, but we also had to be careful so that we were in sync, because at any time there could have been an update to a texture, model topology, or UV and the data would change. And that would affect all the shot-centric departments." To help the group avoid those issues, Reel FX developed an asset state system, which alerted artists to any change in the assets and automatically delivered the new pieces and packaged them correctly.
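The core of such an asset-state system is change detection: each department records a fingerprint of the assets it consumed, and a mismatch flags a stale cache. A minimal sketch — not Reel FX's system, with hypothetical asset names and payloads:

```python
import hashlib

def fingerprint(asset_payload: bytes) -> str:
    """Content hash standing in for an asset's 'state'."""
    return hashlib.sha256(asset_payload).hexdigest()

# The currently published state of each asset (hypothetical store).
published = {"turkey_model": fingerprint(b"topology-v1 uvs-v1")}

def is_stale(asset_name, payload_seen_by_department):
    """True if a department's cached copy no longer matches the publish."""
    return published[asset_name] != fingerprint(payload_seen_by_department)

assert not is_stale("turkey_model", b"topology-v1 uvs-v1")

# A modeler updates the UVs; downstream departments now read as stale
# and can be sent the new pieces automatically.
published["turkey_model"] = fingerprint(b"topology-v1 uvs-v2")
```

The real system would also package and deliver the updated pieces; the fingerprint comparison is what makes the alert automatic.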
The group created a simple but elegant crowd system, whereby the animators would store a library of cycles, assign those cycles on the fly, and blend between them.
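Cycle assignment and blending reduces to looping a stored clip per agent and interpolating poses during transitions. A toy sketch (a "pose" here is a single float; cycle names and data are made up):

```python
# A stored library of looping cycles, one float "pose" per frame.
cycles = {
    "walk": [0.0, 1.0, 0.0, -1.0],
    "peck": [0.0, 0.5, 1.0, 0.5],
}

def pose(cycle, frame):
    """Evaluate a cycle at a frame, looping past the end."""
    return cycles[cycle][frame % len(cycles[cycle])]

def blended_pose(a, b, frame, t):
    """Linear blend between two cycles; t in [0, 1], 0 = all a, 1 = all b."""
    return (1.0 - t) * pose(a, frame) + t * pose(b, frame)
```

An agent transitioning from "walk" to "peck" just ramps `t` from 0 to 1 over a few frames.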
The crew also developed an in-house slicing system that let multiple animators work on the same shot - which was particularly useful when several characters were on screen at the same time. "Asking an animator to animate 15 characters would be challenging, so we developed a slicing system that could publish an animator's cache to their counterparts. So if one person works on Jake and one on Reggie, the person working on Reggie would have a Jake guest, that would be fed the cache from the person working on Jake," Esneault explains. "They could continually update each other on the fly all day long, and in the end, we would send and bake it down into a single shot for lighting."
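The publish/guest/bake flow can be modeled as a shared cache store: each animator publishes their character's slice, counterparts read it as a guest, and the shot is baked by merging all slices. A hypothetical sketch (caches here are just lists of numbers):

```python
caches = {}                                   # shared cache store (hypothetical)

def publish(character, frames):
    """An animator pushes their character's animation cache."""
    caches[character] = list(frames)

def guest(character):
    """A counterpart pulls someone else's character as a read-only guest."""
    return caches.get(character, [])

def bake_shot(characters):
    """Merge every published slice into one package handed off to lighting."""
    return {c: guest(c) for c in characters}

publish("Jake", [1, 2, 3])
publish("Reggie", [9, 8, 7])
shot = bake_shot(["Jake", "Reggie"])
```

Re-publishing at any point updates every guest, which is the "continually update each other on the fly" behavior described.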
The mood was also established through lighting, look dev, and the virtual camerawork.
Modelers created the overall sculpture within Autodesk’s Maya and then used Pixologic’s ZBrush for the scales. “We had close to a million individual scales on the dragon,” Saindon says. “We tried to build as many as we could. Sometimes in geometry. Sometimes in displacement. When he bends, the scales fold over and slide on top of one another so he doesn’t look like a big rubber thing.”
Typically, Weta Digital’s creatures with human forms have a common UV layout to easily apply textures from one creature to another. But in this case, the shapes and sizes were different enough that the crew needed to do texture migration. “We set up a version of the transforming creature to have a common place where we could migrate the human textures into a bear-texture space,” Aitken says. “Our fur system already supported the ability to animate the length of fur with texture maps. So, we used the maps to shrink his bear fur and grow his human hair.”
As is typical for creatures that talk, the rigging team at Weta created an articulated model based on the Facial Action Coding System (FACS), which breaks expressions down into individual muscle movements.
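A FACS-driven face is commonly evaluated as a neutral mesh plus a weighted sum of per-action-unit shape deltas. A minimal sketch of that combine — the tiny "mesh" and the AU deltas below are invented, not Weta's rig data:

```python
import numpy as np

neutral = np.zeros(3)                       # tiny 3-vertex "mesh" (1D offsets)

# One delta shape per FACS action unit (values purely illustrative).
deltas = {
    "AU12_lip_corner_puller": np.array([0.0, 0.3, 0.3]),   # smile
    "AU4_brow_lowerer":       np.array([-0.2, 0.0, 0.0]),
}

def evaluate_face(weights):
    """Neutral pose plus the weighted sum of activated muscle shapes."""
    face = neutral.copy()
    for au, w in weights.items():
        face += w * deltas[au]
    return face

smile = evaluate_face({"AU12_lip_corner_puller": 1.0})
```

Animators then work in muscle-weight space, and any expression is a combination of these individually sculpted action units.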
The simulation engine needed to move the coins in ways most rigid-body dynamics solvers don’t address.
beautiful water,” Capogreco says. “You can see all the bubbles from the bottom of the water to the top, and they’re all rendered with a single pass. The particles store what we call ‘primvars,’ variables that the shader looks up and shades according to age, velocity, and vorticity. Because everything is custom and internal, we had complete control over the look.”
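Primvar-driven shading means each particle carries its own variables and the shader is just a function of them. A toy illustration — the ramp below is invented, not Weta's shader:

```python
# Each particle carries its primvars (values purely illustrative).
particles = [
    {"age": 0.1, "speed": 4.0, "vorticity": 0.2},   # young, fast
    {"age": 2.5, "speed": 0.5, "vorticity": 1.8},   # old, swirly
]

def shade(p):
    """Hypothetical ramp: young/fast particles render bright,
    older swirling ones pick up tint from vorticity."""
    brightness = max(0.0, 1.0 - 0.3 * p["age"]) * min(1.0, p["speed"] / 4.0)
    tint = min(1.0, p["vorticity"] / 2.0)
    return brightness, tint

colors = [shade(p) for p in particles]
```

Because both the data and the shader are in-house, any new particle attribute can feed the look without a pipeline change.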
“The two big ones,” he elaborates, “were the skin/scale system and the muscle system our team of character setup artists created for us, which enabled us to show how these amazing creatures were built, moved, and interacted.”
Animal Logic developed a number of tools that helped push the boundary of reality in the film. The artists used a procedural rather than texture-based approach to the scales. Instead of painting them on or modeling them individually, they opted for a technique similar to what they used to create fur and feathers, “where we use a lot of maps to describe the kind of scale in different areas of the body in terms of shape, size, and profile.” To this end, the studio developed a scale system called Reptile.
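Map-driven procedural placement means a per-region map controls scale attributes instead of hand-modeling each scale. A toy sketch of the idea — the region names, sizes, and scatter rule are invented, not Reptile's internals:

```python
# A "map": per-body-region scale size (values purely illustrative).
size_map = {"belly": 2.0, "neck": 1.0, "tail": 0.5}

def scatter_scales(region, count):
    """Return (position, size) pairs for `count` scales evenly spaced
    along a 0-1 parametric strip of the given body region."""
    size = size_map[region]
    return [(i / count, size) for i in range(count)]

belly_scales = scatter_scales("belly", 4)
```

Changing the look of a whole region is then a map edit, not a remodel — the same economy that makes procedural fur and feathers practical.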
The main surfacing challenge for the prehistoric cast resulted from the scale-based characters.
Rigging Supervisor Raffaele Fragapane and R&D Lead Aloys Baillet came up with the brand-new system, which properly managed individual muscles and bones, and provided interaction with the outer skin and internal fat. The process was completely transparent to animators and did not require shot-specific adjustments.
Previs, Techvis, and Postvis on The Avengers
Previs: storyboards --> camera angles, framing the shots, etc.; motion capture or keyframe animation is used for the previs. It gives the director a sense of the locations and identifies issues before shooting.
Techvis: for example, based on the previs camera data, the on-set effects crew calculates the size and speed of the explosives; that data goes to the techvis artists, who build the corresponding CG objects, determine which cameras are usable, and then tell the on-set effects crew the right timing and placement for the explosions, the camera moves, and the character blocking.
Postvis: during the live-action shoot, verify whether the previs data holds up. Postvis also gives the director and editors richer scenes for cutting decisions, especially for sequences with partial sets and ones in which CG effects will drive the story.
Pete's Dragon (from Cinefex 137)
The crew used Weta's Simulcam Slave Mocon setup to shoot interactions between Bilbo and the dwarves and the larger-scale Gandalf and Beorn.
To generate the volume and complexity of water interactions, Weta Digital retooled its physics simulation engine, Odin.
"In this case, everything was volumetric, including all the water surfaces, volumes, iso-surfaces, and bubbles. We produced material models to create foam and aeration in the water, and by applying different properties and densities, we created areas of turbulence in a single system."
The team added controls to give animators the ability to art-direct coin piles while maintaining natural flows that the solver could handle. Effects lead Niall Ryan developed the rigid-body solver in Odin, allowing interactions among up to 20 million treasure pieces arrayed in tiled layouts. As Smaug passed through each tile, those areas were activated. Each tile fed into the rigid-body simulator, which selected treasure assets - coins, cups, plates, all sorts of different treasure pieces - and propagated them through the tiles. Technical directors applied variables to create varieties of treasure textures, color, and reflectivity, and output the simulations in giant renders in Synapse.
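The tiled activation scheme can be sketched simply: treasure sits in a grid of tiles, and only tiles the dragon touches are handed to the rigid-body solver that frame. A toy illustration — tile size, path, and names are all invented, not Odin's code:

```python
TILE = 10.0                                    # tile size in scene units

def tile_of(pos):
    """Which grid tile a world-space (x, y) position falls in."""
    return (int(pos[0] // TILE), int(pos[1] // TILE))

active_tiles = set()

def dragon_moves_through(path):
    for pos in path:
        active_tiles.add(tile_of(pos))         # wake the tiles Smaug touches

dragon_moves_through([(3.0, 4.0), (12.0, 4.0), (25.0, 18.0)])

def tiles_to_simulate():
    """Only these tiles feed the rigid-body simulator this frame."""
    return sorted(active_tiles)
```

Everything outside the active set stays static, which is what makes 20 million pieces tractable.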
Effects artists created Smaug's internal body glow using geometry within the creature's belly to generate a subsurface pass that illuminated bones and muscles, and backlit dragon scales.
Flame effects were generated in Odin, embodying a range of physical properties that allowed the gigantic and highly detailed fireballs to splash and adhere to surfaces.
To create the flood of gold, Weta developed a model capable of handling visco-elastic plastic materials. ... That allowed the team to art-direct the statue's collapse as it fell under its own weight and created a jet of liquid.
Beautiful Dreamer (from Cinefex 137)
Visual effects included environment extensions, using photographic elements projected onto rough geometry to extend city streets into skyscraper canyons, and interactive debris that churned up street surfaces using MPC's dynamic simulation tool, Kali.
Extra-Vehicular Activity (from Cinefex 136)
"The rule was simple textures, complex lighting,"
To help design lighting for various sequences and sets, the artists began testing possibilities in the previs stage.
The lighting team did the previs work in Autodesk's Maya. The sets then moved into PDI/DreamWorks' proprietary animation system and the lighting into the studio's proprietary lighting and rendering software.
Effects artists working with Autodesk's Naiad (formerly from Exotic Matter) would run the simulation and then meet with animators.
Preparing Through Previsualization
For shots that required compositing CG elements into a plate with a camera move, artists would use Vicon's Boujou to matchmove the camera. After importing the tracked camera into the Maya scene, they would animate the effects, set extensions, and actions.
When shots didn't require a 3D camera move, the postvis team would composite elements onto the plates using After Effects.