Tuesday, January 31, 2012

1/31 Automated Character Locomotion

I thought this paper was really impressive. Being able to automatically adapt a character's walk to the terrain can save a ton of time for animators; instead of animating a different walk for each situation, you only have to do one! However, I'm pretty skeptical of using the data from one character for another, e.g., using data from an adult human for a humanoid animal or a small child. I suppose it could work for an animal doing impossible things, since we don't really know what those actions should look like. Going with the example we talked about in class, no one has seen a real alligator dance, so I think audiences will be forgiving. A dancing alligator doesn't have to look completely realistic because the premise itself isn't realistic, so maybe this type of technology could work using adult human data.

I'm far more skeptical about using this to animate a baby or a toddler. Yes, toddlers are human and have the same joints and physical capabilities as adult humans, but they simply don't walk the same way. We all know what toddlers look like when they run. They waddle because they haven't quite gotten the hang of what they're doing, not to mention they're wearing a diaper. An adult might be able to imitate this, but I think it would look really weird to use data from an adult walking normally to animate a toddler. Because everyone knows what toddlers look like, it would look extremely unrealistic, and honestly probably a bit creepy.

Thursday, January 26, 2012

1/26 Direct Control Interfaces

Motion capture can definitely help animate characters more quickly, but in lecture today I realized how many limitations this technology has. A lot of actions are physically impossible for an actor to perform without building a whole set to perform in. For example, I think someone in class mentioned pushing a car. I have far too much experience pushing cars, and I can tell you that you can't convincingly mime that action. When you push a car, you're leaning way over, throwing your whole body into it; you simply can't do this without something to lean on! Another issue, which we talked about a lot, is showing force. I said "leaning" on the car, but really that's the wrong word: you're pushing it with your whole body. To a motion capture system, leaning and pushing look pretty similar, since the body is in almost exactly the same position, yet people can easily tell the two actions apart. Is there a way to make a motion capture system automatically know when a person is exerting a lot of force rather than just leaning casually? It's a pretty difficult question.

I think what we saw today, though, would be really cool for games. The video mentioned actors interacting on screen even though they were performing miles away from each other, and I immediately thought that would make for cool multiplayer games. Imagine playing tennis online with your friend across the country!

Tuesday, January 24, 2012

1/24 Directing Physically Based Interactions

We saw some examples today of a user moving the mouse to direct a character's head (for the lamp) or center of mass (for the starfish) in order to make it move. I'm sure some artists might not like this because it doesn't allow complete control of the character. The directions given with the mouse are always secondary to physical constraints, which can help the character look more believable, but sometimes, especially with "cartoony" characters, physical constraints aren't what we want. However, I think for the average user this system is intuitive. Especially with the lamp, it reminded me of something you might see in an iPhone game: moving a character through an obstacle course by directing it with your finger on the screen. For most people, I think this system would be pretty easy to use.

Also, people tend to be pretty forgiving of the graphics in games. Things don't look perfectly realistic in games, and people know this. For movies, though, the standards are definitely higher. In a game, we have to pay attention to strategy, whereas in a movie our attention isn't diverted away from the visuals as much. I think that's why some artists might be critical of the center-of-mass method of directing characters. Artists want complete control over the look of their characters, so this system is a good starting point for animation, but I don't think it can work at this point as the entire means of animating a character.
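To make the "mouse directions are secondary to physics" idea concrete, here's a minimal sketch of my own (not the actual system from the paper): the mouse position acts only as a soft spring target on a point-mass center of mass, while gravity and the ground constraint always win. All the parameter names and values here are made up for illustration.

```python
# Hypothetical sketch: a point-mass "center of mass" directed by a mouse
# target, where hard physical constraints override the user's input.

GRAVITY = -9.8  # m/s^2
GROUND = 0.0    # ground height; the character can never go below this

def step(pos, vel, mouse_target, dt=0.016, k=30.0, damping=5.0):
    """Advance one frame with semi-implicit Euler. The mouse exerts
    only a soft spring force, so it suggests a direction rather than
    dictating the pose outright."""
    ax = k * (mouse_target[0] - pos[0]) - damping * vel[0]
    ay = k * (mouse_target[1] - pos[1]) - damping * vel[1] + GRAVITY
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    if pos[1] < GROUND:  # hard constraint: physics beats the mouse
        pos = (pos[0], GROUND)
        vel = (vel[0], 0.0)
    return pos, vel

pos, vel = (0.0, 1.0), (0.0, 0.0)
for _ in range(200):
    pos, vel = step(pos, vel, mouse_target=(2.0, -1.0))
```

Even though the user drags the target to y = -1, below the floor, the character settles on the ground at y = 0 while still following the mouse horizontally, which is exactly the "input is secondary to constraints" behavior described above.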

Monday, January 23, 2012

1/19 Noise, Variability In Movement

I think it's very important that characters are given some additional noise. When they're moving, noise might not make a huge difference, but I'm sure we can all agree that a character standing completely still looks unnatural. When real people stand "still," they shift their weight, touch their clothes, and look around. It's impossible to stand completely still, which is why adding a little bit of noise to an animated character can make all the difference.

What I did notice, though, is that there's definitely such a thing as too much noise. We looked at some of Ken Perlin's interactive characters, such as the Emotive Virtual Actors (the red/green couple), and their level of noise works. Granted, they're pretty simplistic characters, but I think the way they move when left alone looks believable. Then if you look at the Face Demo, the close-up of a woman's face, it's like she's moving nonstop! If you tell someone to look straight ahead, sure, they will move a little, but no one naturally moves as much as this woman does when told to hold still. I learned from these examples that while noise can keep a character from looking robotic, it's easy to overdo it and lose the naturalness and believability.
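The "right amount of noise" really comes down to a gain knob on the perturbation. Here's a toy sketch of my own (not Perlin's actual noise algorithm, just a crude stand-in using summed sines) showing how small gains give subtle weight shifts while large gains give the nonstop-fidgeting look; every function and parameter name here is hypothetical.

```python
import math

def idle_noise(t, amplitude=1.0):
    """Crude stand-in for Perlin-style noise: a sum of sines at
    unrelated frequencies gives smooth drift around zero that
    doesn't visibly repeat."""
    return amplitude * (
        0.5 * math.sin(1.3 * t)
        + 0.3 * math.sin(2.9 * t + 1.7)
        + 0.2 * math.sin(6.1 * t + 4.2)
    )

def idle_pose(t, head_gain=0.02, sway_gain=0.05):
    """Perturb a rest pose over time t (seconds). The gains decide
    'believable shifting' versus 'moving nonstop': multiply them by
    ten and you get the overdone Face Demo effect."""
    return {
        "head_yaw": idle_noise(t) * head_gain,            # radians
        "weight_shift": idle_noise(t + 100.0) * sway_gain,  # meters
    }
```

With these defaults the head never turns more than about a degree, which reads as a person standing "still"; cranking `head_gain` up is exactly how you end up with the woman who can't hold her gaze.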