maxofs2d:

Timings, etc. have already been figured out for gameplay balance, so I have to keep strict to most of them (like firing and melee attacks).

However, thankfully, and unlike other people I’ve had the misfortune to have to collaborate with, I work closely together with the lead programmer of Wrack, and we’ve implemented cool stuff like non-strict blending for weapons. For instance, when you draw out the shotgun, the last 10 frames or so are just recoil and kickback for transitioning, but the engine can skip these or only use a portion under certain conditions (if you want to fire, or if you’re running). 

Although I have to say, the best thing we’ve implemented is the camera bone, which allows me to alter the camera in a nearly 1:1 fashion. I move a bone in 3ds Max, I rotate it, etc. and its animation is transferred over to the in-game camera. It’s a feature that gives so much more depth to first-person animations… I’m a person who hates the “sliding camera” syndrome in first-person games.
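The camera-bone idea can be sketched in a few lines of Python (a toy illustration, not Wrack's actual code; all names and numbers are made up): each frame, the bone's animated offset is composed onto the camera the player is already driving.

```python
# Sketch of the "camera bone" idea: the animated transform of a
# dedicated bone is applied on top of the player's base camera each
# frame. Names and values here are illustrative, not Wrack's code.

def apply_camera_bone(base_pos, base_angles, bone_pos, bone_angles):
    """Compose the camera bone's animated offset onto the base camera.

    base_pos/base_angles: the camera as driven by player input.
    bone_pos/bone_angles: the bone's local translation/rotation,
    sampled from the current animation frame.
    """
    pos = tuple(b + o for b, o in zip(base_pos, bone_pos))
    angles = tuple(b + o for b, o in zip(base_angles, bone_angles))
    return pos, angles

# A recoil frame nudging the camera up and back:
pos, angles = apply_camera_bone(
    base_pos=(0.0, 0.0, 64.0), base_angles=(0.0, 90.0, 0.0),
    bone_pos=(0.0, -1.5, 0.5), bone_angles=(-2.0, 0.0, 0.0),
)
```

Because the offset is sampled from the same keyframes as the weapon animation, the camera shake stays in sync with the hands instead of "sliding" independently.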

I don’t program the features themselves; I’m not a programmer by any means. But I have a large amount of input on how the systems are designed and work. One thing to know is that we use DirectX 9 libraries extensively (our model file format is .X too). Animation blending draws directly from these libraries, and they’re limited to only two blending tracks (as far as I’m aware). This has the annoying side effect of making the animation snap if something changes the sequence while it’s already transitioning. The only case we have is when you’re standing still and the animation transitions to/from the rare idles or the draw: if you start to move during those 0.25 seconds, it will snap.
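Here's a toy illustration of why two tracks aren't enough (a sketch, not Wrack's actual blending code; a "pose" is reduced to a single number, where a real engine blends full bone transforms):

```python
# With only two tracks (from/to), starting a new transition while one
# is in progress has no third track to hold the current mix, so the
# rendered pose jumps. Hypothetical sketch for illustration only.

def mix(a, b, t):
    return a * (1.0 - t) + b * t

class TwoTrackBlender:
    def __init__(self, pose):
        self.frm, self.to, self.t = pose, pose, 1.0

    def start_transition(self, new_pose):
        # Only two tracks: the in-progress mix can't be kept as a
        # third input, so the old "from" is simply replaced.
        self.frm, self.to, self.t = self.to, new_pose, 0.0

    def update(self, dt, duration=0.25):
        self.t = min(1.0, self.t + dt / duration)
        return mix(self.frm, self.to, self.t)

b = TwoTrackBlender(pose=0.0)
b.start_transition(10.0)   # e.g. idle -> draw
before = b.update(0.1)     # mid-transition: blending 0 -> 10 at t=0.4
b.start_transition(20.0)   # player starts moving mid-blend
after = b.update(0.0)      # pose jumps: that's the visible snap
```

A third track (as in Source's 3-way blending, mentioned below) would let the engine keep the half-finished mix as one input while blending toward the new sequence.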

The Source engine has 3-way blending code to prevent this sort of stuff.

I guess restrictions could be coded in for this specific problem but we have more important stuff on our collective plates.

Other collaborative things include procedural animation: for example, the first-person guns “lagging behind” your mouse movements, other things that move automatically/parametrically (like the Hyperblade’s cubes that spin faster depending on your kill chain).
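The "lagging behind" effect can be approximated with simple exponential smoothing (a hypothetical sketch; the stiffness constant is invented for illustration):

```python
# Procedural weapon lag: the gun's yaw chases the camera's yaw by a
# fraction of the remaining gap each frame, so quick mouse flicks
# leave the weapon trailing for a few frames before it catches up.

def lag_behind(weapon_yaw, camera_yaw, dt, stiffness=8.0):
    # Higher stiffness = tighter follow; made-up constant.
    alpha = min(1.0, stiffness * dt)
    return weapon_yaw + (camera_yaw - weapon_yaw) * alpha

yaw = 0.0
for _ in range(3):                    # camera flicked to 90 degrees
    yaw = lag_behind(yaw, 90.0, dt=1 / 60)
# after three frames the gun is partway there, still trailing
```

The kill-chain cube spin works the same way in spirit: a parameter from gameplay (chain length) drives an animation property (spin speed) every frame, with no hand-keyed animation involved.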

Overall it’s a lot of big and small things that come together, and attention to detail! :)


Three months with the Surface Pro 2

image

I bought this device exactly three months ago. I’m going to give you a thorough review of what’s good and what’s bad about it, and why, if you want a tablet that is also a kickass laptop, you might want to buy this… over the newest Surface Pro 3.



H.265 & VP9

You know it if you’re familiar with video compression technologies: in the current generation, you have two big choices for your video streams. The first one is H.264. It’s not patent-free, it’s not royalty-free, but it’s got the best compression out there, most notably thanks to x264, the best open-source encoder that exists for the spec. x264 has notably implemented several schemes for better encoder decisions (such as macroblock trees) and optimized them at the assembly level, making it both efficient and extremely fast.

VP8 is royalty-free and (sort of) patent-free, which is cool, but by comparison it suffers from two problems: the encoder is much slower, and the compression is around 30 to 50% worse than x264’s at the same bitrate (based on SSIM results).
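For reference, SSIM scores how similar the local statistics of an encode are to the source, topping out at 1 for identical images. Here is a single-window toy version of the formula (real SSIM slides a small window across the frame and averages; the sample luma values below are made up):

```python
# Global (single-window) SSIM over raw luma samples -- a
# simplification of the windowed metric used in real comparisons.

def ssim(x, y, L=255):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # stabilizing constants
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n            # means
    vx = sum((a - mx) ** 2 for a in x) / n     # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

original = [52, 55, 61, 66, 70, 61, 64, 73]   # made-up luma samples
degraded = [50, 57, 60, 70, 68, 60, 66, 75]
```

An identical signal scores exactly 1.0; the degraded one lands somewhere below it, which is the number encoder comparisons report averaged per frame.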

The next generation is upon us, and I was expecting the same trend to continue. It turns out that the situation is a lot closer this time around.

I re-encoded some of my videos using the latest available encoder builds for VP9 and H.265, at 512kbps average bitrate, and using the “medium” speed/quality tradeoff. The quality is somewhat similar for both, save for one big thing: H.265 suffers from really awful chroma blocking.

image

This is VP9, and here’s H.265.

image

Yikes. You can sort of understand what they’re trying to do: the human eye is far more receptive to luma changes compared to chroma changes… but the encoder takes that advice and runs away with it.
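That luma-over-chroma tradeoff is already baked into most delivery formats as 4:2:0 chroma subsampling, which a short sketch makes concrete (illustrative code, not any encoder's implementation):

```python
# In 4:2:0 subsampling, each 2x2 block of pixels shares one chroma
# sample, so the chroma planes carry a quarter of the samples the
# luma plane does. H.265's blocking takes the discount much further.

def subsample_420(plane):
    """Average each 2x2 block of a chroma plane (list of rows)."""
    out = []
    for r in range(0, len(plane), 2):
        row = []
        for c in range(0, len(plane[r]), 2):
            block = (plane[r][c] + plane[r][c + 1] +
                     plane[r + 1][c] + plane[r + 1][c + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

cb = [[128, 130, 90, 92],     # made-up 4x4 chroma plane
      [126, 132, 94, 88],
      [128, 128, 128, 128],
      [128, 128, 128, 128]]
half = subsample_420(cb)      # 4x4 plane becomes 2x2
```

Throwing away chroma *resolution* this way is nearly invisible; what H.265 does in the shot above is flatten chroma *detail* over much larger areas, which is where the blocking comes from.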

All things considered, though, this is pretty good for high definition at 512kbps! This is how x264 performs under the same conditions…

image

You can download the H.265 file here, and the VP9 file here.

There is, however, one thing that needs to be said: the VP9 encoder is horrifyingly slow. One of the major reasons for this: it’s not even multi-threaded. This video took nearly 3 hours to encode with VP9, whereas the H.265 encode took mere minutes.

Here’s a bonus comparison between VP8 and VP9, with a high-framerate 480p video that was encoded at 512kbps as well. This gives you a good idea of the generational gap.

Hopefully more psycho-visual optimizations will be implemented in VP9 in the future; it is extremely distracting to see sudden pixelization across a few frames, even if the quality jumps back right after.

YouTube has also been putting the encoder in production for a few months now, which I personally don’t believe is an entirely good decision, considering how slow the encoder AND decoder are (dropped frames ahoy!). And it sometimes does weird stuff, like on the background of this video.
