February 19th, 2008
The Supreme Court rejected a challenge to the Bush administration's domestic spying program. However, the justices' decision Tuesday included no comment explaining why they turned down the appeal from the American Civil Liberties Union.
Oh, I can think of several reasons why. Not the least of which is: how do you think people are going to react when the Supreme Court up and announces it's revoking a well-known Constitutional right?
In other news of the decline of the American empire:
Iran established its first oil products bourse Sunday in a free trade zone on the Persian Gulf island of Kish, the country's oil ministry said. Oil and petrochemical products will be traded in Iranian rials, as well as all other hard currencies, the statement quoted Iranian Oil Minister Gholam Hossein Nozari as saying. About 20 brokers are already active in the market, it said.
Enter motion capture company Mova, whose Contour Reality Capture system uses an array of cameras to create 100,000 polygon facial models that are accurate to within a tenth of a millimeter, no reflective spots required. At this year's GDC, the company is trying to attract the game industry's attention by unveiling examples of their facial modeling running in real-time on the popular Unreal Engine 3.
"People have never had this kind of data available before in a game context ... their heads are spinning," Perlman said. "What you're seeing right there is the result of having time to wrap our heads around this thing and see how we're going to use it, and yes, we can in fact get a face that looks almost photo-real - you know, not quite, but almost photo-real - running in a game engine today."
Believe it or not, though, the Contour system can create even more detailed animation when processing time isn't an issue. Check out the video below, which shows how Reality Capture data can look when pre-rendered for a movie or cut scene. "You can see the difference then between what's achievable in cinema and what's achievable right now in video games," Perlman says. "But next generation game machines, they'll be able to essentially show in real time what we can do currently in non-real-time using renderers. ... Next generation, you're going to have interactive sequences where people think there's a live person in the game."
Watch the video - the level of animation detail is insane. I call that photo-realistic as far as the mesh and motion go. (But we're still screwing up on skin; it still doesn't look right. Possibly sub-surface scattering simulation will fix that.)
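For the curious: full sub-surface scattering simulation is expensive, which is why skin in real-time engines tends to look waxy. One cheap stand-in games were already using around this era is "wrap" lighting, which lets diffuse light bleed past the shadow terminator the way light scattered under the skin softens shadow edges. A minimal sketch of the idea (function names and the `wrap` parameter value here are illustrative, not from any particular engine):

```python
import math

def lambert(n_dot_l):
    """Standard Lambertian diffuse: hard cutoff at the terminator."""
    return max(n_dot_l, 0.0)

def wrap_diffuse(n_dot_l, wrap=0.5):
    """'Wrap' lighting, a cheap approximation of subsurface scattering:
    the diffuse term is shifted and renormalized so surfaces facing
    slightly away from the light still receive some illumination,
    softening the terminator like scattered light does on real skin."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# Compare shading around the terminator (angle between normal and light):
for angle_deg in (0, 60, 90, 110):
    ndl = math.cos(math.radians(angle_deg))
    print(angle_deg, round(lambert(ndl), 3), round(wrap_diffuse(ndl), 3))
```

At 90 degrees and beyond, plain Lambert goes black while the wrapped term still glows faintly, which is most of what your eye reads as "fleshy" instead of "plastic". Proper SSS (diffusion-profile blurs, translucency maps) does much better, but at a real rendering cost.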