Technical question about Theora color-corrected videos

Ummm… how was it done? I don’t expect a detailed technical explanation, just maybe a reference to something, or a brief description if it was done without a library function. Thanks.

In short, I revised the entire code, keeping as much precision as possible when handling the variables (for example, there was an unneeded conversion from doubles to ints), and then manually experimented with the coefficients until I settled on the best formula. It turned out to be the first one explained here: http://www.fourcc.org/fccyvrgb.php
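For anyone curious, that first formula on the page is the common BT.601 video-range conversion. Here’s a minimal C sketch of it (my own illustration, not the engine’s actual code):

```c
#include <stdint.h>

/* Clamp an intermediate result into the valid 0..255 range. */
static uint8_t clamp_u8(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (uint8_t)(v + 0.5);
}

/* YUV (video range) to RGB, per the first formula on fourcc.org:
 *   R = 1.164(Y - 16)                  + 1.596(V - 128)
 *   G = 1.164(Y - 16) - 0.391(U - 128) - 0.813(V - 128)
 *   B = 1.164(Y - 16) + 2.018(U - 128)
 * Keeping everything in doubles until the final clamp avoids the kind of
 * premature double-to-int truncation mentioned above.
 */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    const double yd = 1.164 * ((double)y - 16.0);
    const double ud = (double)u - 128.0;
    const double vd = (double)v - 128.0;

    *r = clamp_u8(yd + 1.596 * vd);
    *g = clamp_u8(yd - 0.391 * ud - 0.813 * vd);
    *b = clamp_u8(yd + 2.018 * ud);
}
```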

It’s definitely accurate and fast enough: we can play full 1080p HD videos smoothly at 30 fps on a 2.4 GHz computer. Eventually I might move to a more advanced library with more efficient conversions, perhaps hardware-based :slight_smile:

I’ve encountered an encoder called png2theora, which might be useful to those who want to bypass an intermediate video pass. I haven’t come across a command-line version yet.

I too am trying to get the .ogv videos to match everything else in the nodes exactly.

They seem, at least for me, to be slightly darker than the rest of the panorama. It’s usually not a huge issue; over the last few days I’ve found workarounds that match them nearly perfectly, but it would be nice to have a way to automatically apply the same subtle filtering to the videos that is applied to everything else.
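Just to illustrate the kind of shift I mean (a generic example on my part, not a claim about what the engine or my converter is actually doing): the two common YUV conventions interpret the same stored luma value differently, so if one step of a pipeline assumes full range and another assumes video range, every pixel ends up slightly off.

```c
#include <stdio.h>

/* Compare how the same stored luma byte is interpreted under the two
 * common conventions: "full range" (0..255 used as-is) and "video range"
 * (16..235, rescaled on decode). Mixing the two anywhere in a pipeline
 * shifts brightness and contrast by a small but visible amount.
 */
int main(void)
{
    for (int y = 16; y <= 235; y += 73) {
        double full  = (double)y;                  /* full-range interpretation  */
        double video = 1.164 * ((double)y - 16.0); /* video-range interpretation */
        printf("stored Y=%3d  full-range=%6.1f  video-range=%6.1f\n",
               y, full, video);
    }
    return 0;
}
```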

BTW, we could really use more complete documentation and explanation of the audio and video features in the wiki. The command reference still has a lot of unknowns and question marks.

Matt, are you using the latest version of the engine? That is, from GitHub or Serena? Because I did extensive tests with Theora and couldn’t find any differences in the newer versions (for example, all videos in Serena blended perfectly).

As for documentation, yes, I admit we need to improve it a great deal. At least we’ll be releasing the source code of Serena soon with many comments and details :slight_smile:

I am using the version of DAGON from Serena now, yet the video is still slightly off. After extensive testing I’ve figured out exactly how far off it is and brightened the source video to compensate. It still isn’t quite perfect, but it’s much closer to seamless than it was before.

UPDATE:
I’ve completely solved the video color issue at this point. It was my .ogv converter: it had issues converting from some codecs to .ogv, and the colors would end up slightly off.

So it wasn’t DAGON’s fault after all. The videos were just a little ‘off’; it wasn’t noticeable when playing a video on its own, but once loaded into the node the discrepancy between the video and the surrounding scene became visible.

Now my main concern is synchronizing video. I have animated elements that stretch across several cube faces. This is obviously problematic: although the video elements ‘sort of’ sync with the ‘sync’ option active, there are still times when a seam between the clips becomes noticeably visible. Is there a way to make video synchronization work better?

While I’m asking, a second question: what are the best ways to improve video performance in DAGON generally?

Ah, glad to hear you nailed the problem with the color! I was worried about this one as I spent quite a bit of time fine-tuning the color conversion in the engine :slight_smile:

As for videos synchronizing with each other, the ‘sync’ option actually has nothing to do with that. That value is supposed to tell the engine whether it should wait until the video stops playing. Is that what you’re attempting to do?

If you have several videos that should be synced with each other, playing all of them at the same time should work…
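The usual way to keep several clips in step is to drive them all from one shared clock, so each clip picks its frame from the elapsed time instead of free-running on its own timer. A rough C sketch of the idea (just an illustration, not the engine’s actual code):

```c
#include <math.h>

/* One clip: total frame count, frame rate, and the frame we last showed. */
typedef struct {
    int    frame_count;
    double fps;
    int    current_frame;
} VideoClip;

/* Advance every clip from a single shared clock. All clips started at
 * `start_time`, so the frame to show is derived purely from the elapsed
 * time; if one clip stalls for a moment it snaps back to the correct
 * frame on the next tick instead of drifting behind its neighbours.
 */
void sync_clips(VideoClip *clips, int n, double start_time, double now)
{
    double elapsed = now - start_time;
    for (int i = 0; i < n; ++i) {
        int frame = (int)floor(elapsed * clips[i].fps);
        clips[i].current_frame = frame % clips[i].frame_count; /* loop the clip */
    }
}
```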

I’m trying to synchronize videos on adjacent faces with each other. I am not sure how best to do this.

Playing them all at the same time - the time the node is loaded - gets me close, but not quite there. I am hoping there is some way to keep them in sync.

Would it make any difference to modify the bitrates of the video files? The pixel dimensions?

I see that on one node - the only node that is totally functional - the multiple videos sync perfectly, and they all have the same dimensions and same bitrate.

The video clips on that node were also initialized in the code, in clockwise order, all timed to play when the node is first loaded.

On some of the other nodes that don’t work, the videos are all timed to start when the node is loaded but are initialized in a random order. I don’t know whether that makes any difference.

Any insights you can offer would be appreciated.

This is a great topic! But how about we move it to a new thread, since it has nothing to do with video color? Agustin?

Done. I’ve started a new thread: video synchronization thread

OK, I’ll follow up over there!