Color skewed videos


#1

Earlier, Nige was having problems with his videos being color skewed and now I am encountering the same problem. Nige, did you ever solve the problem of your video patch of the windmill being too blue? I’m starting to add in my video patches and they are also color skewed. In my case they are too green. :frowning:

When adding in my first set of doors, the areas were quite dark so I did not see a color difference, but in this next section you enter from the outdoors and the difference is noticeable. The full-screen video of the door was made in Vue, using the original scene file. The patch was made in HitFilm, using a section cropped from the cube side and then overlaid with a fire effect. Both were rendered out as PNGs in an image sequence and then processed as OGVs in VideoMach with a bitrate of 4800. Not sure where I’m going wrong, as Agustin’s vids blend perfectly.


#2

No, I never found a way to fix it.
I dunno if this could be related, but Mac and Windows use different gamma setups. I think Mac is 1.8 and Windows is 2.2.


#3

I just checked to be sure and the PNG files generated by HitFilm for the fire effect match the cube face. I also tried turning off all effects in Dagon, but they were still green. It must be either something to do with the processing into OGVs in VideoMach or with the playback in Dagon.


#4

I’m afraid the conversion I’m using in Dagon isn’t very accurate. If your ‘patch’ has too many plain or bright colours, the differences are going to be noticeable.

The only workaround for the time being is tweaking the values in the C++ code and recompiling. Sorry, this is something I’ll be looking into in the coming months :frowning:
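
For the curious, the values in question are basically the standard BT.601 ones. The conversion looks more or less like this (illustrative only, not a copy-paste of the engine code):

```cpp
// Standard BT.601 full-range YUV -> RGB, roughly what a software conversion
// does. The constants are the "values" I mean; nudging them changes the tint.
#include <algorithm>

static unsigned char clamp255(float v) {
    return (unsigned char)std::min(255.0f, std::max(0.0f, v));
}

static void yuvToRgb(unsigned char y, unsigned char u, unsigned char v,
                     unsigned char& r, unsigned char& g, unsigned char& b) {
    const float Y = (float)y;
    const float U = (float)u - 128.0f;   // chroma planes are centred on 128
    const float V = (float)v - 128.0f;
    r = clamp255(Y + 1.402f * V);
    g = clamp255(Y - 0.344f * U - 0.714f * V);
    b = clamp255(Y + 1.772f * U);
}
```

Tweaking those constants slightly is what the "edit and recompile" workaround boils down to.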


#5

No problem, Agustin. I’m pretty certain that you will be done with your fixes long before I will be done with my game. :wink:


#6

Same issue as this thread?
http://www.senscape.net/forum/viewtopic.php?f=9&t=1199&start=60
Reading up on the Theora library, it does look like it’s capable of higher-quality settings, but I haven’t seen those abilities exposed in any plugin or app, and the library by itself isn’t useful except by accessing it from code somehow.
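
For reference, driving libtheora directly would look roughly like this for the setup part (based on my reading of the 1.1 headers, so treat it as a sketch; the frame loop, Ogg muxing and error handling are left out):

```cpp
// Encoder setup only. Field names are from the libtheora 1.1 headers as I
// read them; th_encode_alloc returns NULL if the parameters are rejected.
#include <theora/theoraenc.h>

th_enc_ctx* makeEncoder444(int width, int height) {
    th_info ti;
    th_info_init(&ti);
    ti.frame_width  = (width  + 15) & ~15;   // coded size: multiples of 16
    ti.frame_height = (height + 15) & ~15;
    ti.pic_width    = width;                 // visible picture size
    ti.pic_height   = height;
    ti.pic_x = ti.pic_y = 0;
    ti.fps_numerator   = 30;
    ti.fps_denominator = 1;
    ti.pixel_fmt = TH_PF_444;                // no chroma subsampling
    ti.quality   = 48;                       // 0..63, higher = better
    th_enc_ctx* enc = th_encode_alloc(&ti);
    th_info_clear(&ti);
    return enc;
}
```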


#7

Some thoughts about video color issues…
Agustin may improve the video playback but I’ve viewed theora video outside of Dagon and the problem still exists. Video just wasn’t designed to be compatible with computer graphics.
The libtheora docs mention that version 1.1 is capable of 4:4:4 color compression (which is NO color compression), but I can’t find any apps that expose that option for the end user.
__
Even professional video equipment doesn’t record 4:4:4, the files are just too HUGE and traditional full-screen video color shifts are too subtle to perceive EXCEPT in this new-fangled world of computer graphics. (One exception: green-screen effects, which overlay filmed video with computer graphics, specifically run into this problem and fix it by requiring less-color-compressed video formats.)
__
The libtheora doc specifically mentions that 4:4:4 is useful for screencasts, which directly points at the issue of computer graphics vs traditional video, since screencasts usually run at a much slower framerate than regular video.
__
The downside of 4:4:4 is larger files and slower decoding.
There is no problem with full-screen videos, only with mini-videos that have to overlay a computer graphic.
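To put rough numbers on the size hit (uncompressed 8-bit planar frames, just for scale):

```cpp
// Back-of-the-envelope, uncompressed 8-bit planar frames:
// 4:2:0 keeps full-res luma plus quarter-res chroma, 4:4:4 keeps everything.
#include <cstdio>

int main() {
    const int w = 640, h = 480;
    const int yuv420 = w * h + 2 * (w / 2) * (h / 2);   // 460,800 bytes/frame
    const int yuv444 = 3 * w * h;                       // 921,600 bytes/frame
    std::printf("4:2:0 %d vs 4:4:4 %d (x%.1f)\n",
                yuv420, yuv444, (double)yuv444 / yuv420);
    return 0;
}
```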
__
I can think of several options, although none that are easy on Agustin :slight_smile:
Use regular Theora for full-screen videos, since those will be large anyway and genuinely need significant compression.
Spot videos, however, are much smaller and can maybe get away with lower framerates too.
This means filesize and decoding won’t be nearly as much of an issue (as long as the producer doesn’t try to embed a really big video). This means there are other approaches we can try…
Agustin could try to compile a custom app that uses the libtheora codec and forces the 4:4:4 pixel format. This still allows the video file to be compressed but it won’t screw with the color. (If Dagon is already using Theora version 1.1 then decoding 4:4:4 is already implemented.)
He could also implement a custom page-flipping approach that uses native images stored as a sequence (maybe zipped up, stored as raw AVI, CorePNG, HuffYUV, or some other lossless format). Right now a user script that tries to update a spot image is extremely inefficient. Maybe streamlining that ability specifically, with the intent of handling small page-flipped images at rates fast enough to pass for video, would work.
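The kind of loop I’m imagining for the page-flipping approach is something like this (pure sketch, none of it is actual Dagon code, and the PNG decoding is assumed to happen elsewhere):

```cpp
// Decode the whole (short) sequence up front, then each engine tick just
// re-upload the next frame into the spot's texture.
#include <vector>
#include <GL/gl.h>   // <OpenGL/gl.h> on the Mac

struct Frame { int width, height; std::vector<unsigned char> rgba; };

struct SpotSequence {
    GLuint texture = 0;           // already-created RGBA texture of matching size
    std::vector<Frame> frames;    // pre-decoded PNGs (or any lossless source)
    size_t current = 0;

    // Call once per tick (or every Nth tick for a lower "frame rate").
    void flip() {
        if (frames.empty()) return;
        const Frame& f = frames[current];
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, f.width, f.height,
                        GL_RGBA, GL_UNSIGNED_BYTE, f.rgba.data());
        current = (current + 1) % frames.size();   // loop like an animated GIF
    }
};
```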
Agustin, you have nothing else to do, right? :stuck_out_tongue:


#8

I like the idea of playing an image sequence. I wonder if they could also be compiled as TEX files to cut down the size. With TGA, perhaps it would be possible to have alpha. It wouldn’t work for large videos, but for small repetitions, it might work like a GIF file. Could be useful for a candle flame or a blinking light, for instance.

The old Adventure Maker program had a great feature that seems to me to be similar to the pixel pushing abilities of the Squirlz program. It allowed you to draw areas on the frame that would become wavy — you could specify speed and direction. It worked great for rippling water, fountains, candle flames, etc.

I know that Agustin’s swamped with promoting and finishing the Asylum… I’m just sayin’ … :stuck_out_tongue:

As for the color-skewed videos, I tried upping the bitrate to 8000 and it made no difference at all. So, as a temporary workaround, since my videos were consistently playing back with a greenish-yellow cast, I experimented with applying filters to boost the red and blue and diminish the green slightly, and it matched better. The color change works best on a full-screen video, where the screen went noticeably greener as soon as the video started. I’m trying now to crop the patches along edges in the scene to minimize the edges showing…


#9

I guess one advantage is that panoramic stuff like doors etc. can be done face-on, so it’s not that hard to hide the colour difference. Darker scenes aren’t affected as much, either.

Silly idea: would the node work as 6 movie clips, but just a static single frame? I haven’t got the OGV converter at the moment, the PC went into spoilt-brat mode last week so I had a mass clear-out, as offering it sweets wasn’t working :no:


#10

Good alternatives! I have some experience with HuffYUV and it’s a very clean codec, although I think the sequence of PNGs would be the solution that makes most sense.

Yes, the problem is most definitely the YUV->RGB conversion. There’s no choice when it comes to the presentation of video patches: it has to be in RGB format because this is what OpenGL expects. Rather ironically, it would be easy to present YUV videos in fullscreen, but that’s useless :slight_smile:

I’m afraid that any solution has very low priority in our case because the HUE differences in Asylum are barely noticeable. Good news is that we will be open sourcing the Dagon code soon-ish, so it’s always possible other users will chime in to help.

What I can do in the short term is allow you to fine tune the color conversion. It’s far from optimal, but it could be a handy workaround in the meantime.


#11

I found a binary of ffmpeg with built-in Theora. It’s all command line but not that hard to use. The good news is that if I feed it a raw AVI file (no compression), it spits out a 4:4:4 (no color compression) and the size increase isn’t that bad. No hue shifts either!
The bad news (so far) is that all video compression still targets television, which means black isn’t 0 and white isn’t 255, plus some gamma is thrown in. I had to use a picture editor to mess with the contrast and brightness and gamma and was able to get a perfect match to my background. I’m still working out how I might get ffmpeg to do it for me, in which case all we need is somebody to put it in a nice wrapper program so we don’t have to live on the console and we are good to go!
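
For anyone wanting to try the range fix themselves, the usual expansion from TV range back to full range is just this (standard formulas; how a given file is actually tagged may vary):

```cpp
// Expanding studio/TV range back to full 0..255.
#include <algorithm>

static unsigned char expandLuma(unsigned char y) {      // 16..235 -> 0..255
    float v = ((float)y - 16.0f) * 255.0f / 219.0f;     // 219 = 235 - 16
    return (unsigned char)std::min(255.0f, std::max(0.0f, v));
}

static unsigned char expandChroma(unsigned char c) {    // 16..240 -> 0..255
    float v = ((float)c - 128.0f) * 255.0f / 224.0f + 128.0f;   // 224 = 240 - 16
    return (unsigned char)std::min(255.0f, std::max(0.0f, v));
}
```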

Here’s another tech tidbit: modern video sensors arrange the color-sensing pixels in patterns that minimize color-compression artifacts, since even the pro cameras color-compress on the fly. But a CGI rendering doesn’t care about that at all and can produce some really awful-to-compress images.

BTW, I’m not sure how well a hue-shifting approach would work. The hue shift we are seeing comes from the codec grabbing a block of 2x2 pixels and averaging their color information (the brightness of each pixel is kept, but the color part is shared). This is why gray-scale images work well. If, however, you have two red pixels and two green pixels, the result is 4 yellowish pixels. Of all the color mixes, only the red/green combo leads to a color (yellow) that doesn’t look at all like either one. That’s why the worst hue shifts always look yellow. Red/blue yields purple, which still seems to have qualities of red AND blue. Same with blue/green.
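Just to show the numbers, a quick round-trip of that 2x2 red/green case (plain BT.601 math, nothing Theora-specific):

```cpp
// Start with 2 red + 2 green pixels, average their chroma the way 4:2:0 does,
// convert back, and the whole block comes out yellowish.
#include <algorithm>
#include <cstdio>

struct Yuv { float y, cb, cr; };

static Yuv toYuv(float r, float g, float b) {
    float y = 0.299f * r + 0.587f * g + 0.114f * b;
    return { y, (b - y) * 0.564f + 128.0f, (r - y) * 0.713f + 128.0f };
}

static void back(float y, float cb, float cr) {
    auto c = [](float v) { return std::min(255.0f, std::max(0.0f, v)); };
    std::printf("R=%3.0f G=%3.0f B=%3.0f\n",
                c(y + 1.402f * (cr - 128.0f)),
                c(y - 0.344f * (cb - 128.0f) - 0.714f * (cr - 128.0f)),
                c(y + 1.772f * (cb - 128.0f)));
}

int main() {
    Yuv red = toYuv(255, 0, 0), green = toYuv(0, 255, 0);
    float cb = (red.cb + green.cb) / 2.0f;   // one chroma pair for the 2x2 block
    float cr = (red.cr + green.cr) / 2.0f;
    back(red.y, cb, cr);    // ~ (91, 91, 0)   -> olive/yellow instead of red
    back(green.y, cb, cr);  // ~ (164, 164, 37) -> yellowish green
    return 0;
}
```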
Here’s the best example I can find: Notice how 3 of the 4 take on a yellowish cast?

Bottom line…the hue shift depends on the picture.


#12

Thanks, shadowphile. Your explanation of the color shifting answers a number of questions for me. I was wondering just last night why, with just a general color shift, some areas matched up perfectly and others did not. I was using an image sequence of TGAs and color filtering in VideoMach when converting to OGG videos.


#13

Quick question: OpenGL expects RGB, so what video solutions are popular or recommended for OpenGL? We could use that for embedded video and Theora for fullscreen. I had no problem playing 4:4:4 24fps 1920x1020 Theora videos, although my system is fairly robust. 10-year-old laptops might have a problem, but then they always will at this point.


#14

I can’t say for sure… Theora always seemed like the best choice, especially because it’s not encumbered by any patents (at least in theory).

I’d say sequence of PNGs is the most appropriate alternative. Color matching would be perfect.


#15

Sequence would be easy. My Blender spot-video plugin could pump those out with no problem. We would definitely need a way to package them up.


#16

I’m toying with the idea of packing EVERYTHING. That is, one big bundle that includes the script and assets. That way you can simply throw this big file into your Dagon interpreter and voilà, game executed. Maximum portability.

But yes, intermediate files packing PNGs would be necessary as well.
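
The layout I have in mind is nothing fancy, something along these lines (purely illustrative, all the names are made up and nothing is final):

```cpp
// A small header, a table of contents, then the raw asset bytes.
#include <cstdint>

struct BundleHeader {
    char     magic[4];      // e.g. "DGPK"
    uint32_t version;
    uint32_t entryCount;    // how many scripts / assets follow
    uint64_t tocOffset;     // where the table of contents starts
};

struct BundleEntry {
    char     name[64];      // "main.lua", "fire_0001.png", ...
    uint64_t offset;        // byte offset of the asset inside the bundle
    uint64_t size;          // size in bytes
};
// The interpreter would read the header, walk the entries, and resolve every
// asset lookup against this table instead of the filesystem.
```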


#17

Makes sense for distribution purposes, especially if one wants to secure their assets from public access.


#18

Agustin, when, if ever, are we likely to see spot videos implemented as sequenced PNGs? My images are rarely going to be as grainy as Asylum’s, so I’m always going to have artifact-y animations.

BTW, one partial solution for this would be mini-videos with a grey-scale alpha layer so there are no seam issues, which I think would prevent 90% of the problem. But Theora doesn’t do alpha, does it? :frowning:
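
What I mean by the grey-scale alpha layer is basically a feathered blend, so any hue shift fades out instead of ending at a hard seam. Something like this (sketch only, names made up):

```cpp
// Composite an opaque patch over the background using a separate grey-scale
// mask: white = show the patch, black = show the background, grey = blend.
// The mask could be a static PNG or a second grey-scale video.
#include <cstdint>

static void compositeOver(const uint8_t* patchRgb, const uint8_t* maskGrey,
                          uint8_t* backgroundRgb, int pixelCount) {
    for (int i = 0; i < pixelCount; ++i) {
        float a = maskGrey[i] / 255.0f;                // 0 = background, 1 = patch
        for (int c = 0; c < 3; ++c) {
            float fg = patchRgb[i * 3 + c];
            float bg = backgroundRgb[i * 3 + c];
            backgroundRgb[i * 3 + c] = (uint8_t)(fg * a + bg * (1.0f - a) + 0.5f);
        }
    }
}
```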


#19

Not sure about PNG support yet, sorry. We still need to implement many critical features for Asylum (and any commercial game for that matter!): saves, packaging, menu, etc.

But we’re also experiencing bad colors in Asylum itself, so it’s definitely in my todo list to tweak the Theora implementation. Don’t lose your hopes!

AFAIK, transparency on Theora is only possible via hacks, and we honestly want to stay away from that :frowning:

PS: Just to be clear, we ARE going to support sequence of PNGs eventually, just not sure when.


#20

Thanks, Agustin. That would definitely make my life easier as well… :stuck_out_tongue: Right now I’m running each video through filters for color, but it’s trial and error and not very accurate.

I was experimenting with a timer, fading PNGs with alpha in and out for a slowly blinking light, but it seemed easier in the end to make a video. I should go back and try again though. I thought that something along those lines might also work in lieu of video on an overlay.
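
The timer idea really only needs one number per frame, the overlay’s opacity, something like this (just a sketch of the idea, nothing engine-specific):

```cpp
// Smooth 0..1..0 opacity cycle for a slowly blinking light; swap in a square
// wave for a hard on/off blink. 'period' is the blink length in seconds.
#include <cmath>

static float blinkAlpha(float timeSeconds, float period = 2.0f) {
    return 0.5f * (1.0f + std::sin(timeSeconds * 2.0f * 3.14159265f / period));
}
```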

I have too many unfinished tasks…