August 3, 2014 at 11:27 #25535
The Silk Soles set, Valentine’s Vargas Girl, is quite beautiful – both the actual photoset and the “Behind the Scenes” video. May I ask, however, about the apparent difference in the red of Ariel’s dress and the cushions on the still set and the video? Is this an effect of the different cameras, or the lighting – or was it something done in “post production”?
It also looks to me like the stills have shadowless lighting which I understand to be a classic studio setup but on the video Ariel’s face is a little more lit from the right (as viewed by the camera). This fits with what I thought I saw of the studio layout on the “making of” video: constant light units giving light from the right and a flash unit to the left. Have I got that right? If so, was that just how the still setup turned out on the video or was it deliberately designed to create the different effects?
Apologies to bondage purists for mentioning a “Silk Soles” set. There is, of course, no bondage in “Valentine’s Vargas Girl”. But there are many brilliant poses and photographs including frozen action of Ariel swirling her satin ballgown and evocative shots of her cuddling a cushion.
Andrew
August 4, 2014 at 15:58 #25536
I don’t think I am familiar with that set — where is it?
August 4, 2014 at 21:34 #25537
It’s on http://www.silksoles.com, Doug.
The colour differences have several contributing causes.
1) The lighting is different. The video was lit with LED panels. These have discontinuous spectra, and one region they are relatively lacking in is the deep reds, so those colours are hard to reproduce under that lighting without messing up skin tones. I always prioritise the girls’ skin tones, of course. The stills were shot with studio flash, which is much closer to a full spectrum, and a somewhat different colour temperature to boot.
I’d like to use the same lighting for both, but it isn’t practical on my budget. Video can cope with dimmer light because one generally shoots at 1/50th of a second; that isn’t fast enough to freeze motion or camera shake in stills, which I like to shoot at 1/400th of a second. Plus the native ISO of the RED is around 400, whereas the native ISO of the Hasselblad is around 80. So stills need a LOT more light, which fortunately one can generate in very short synchronised bursts using flash. I’d love to be able to use powerful HMI lights for video with similar softboxes etc., but they cost a lot and as yet I’ve not been able to afford them.
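To put rough numbers on that light gap, here’s a back-of-envelope sketch using only the shutter speeds and native ISOs quoted above (exposure is proportional to shutter time × ISO, so the light required scales with the inverse of that product):

```python
import math

# Light budget: video (RED) vs stills (Hasselblad).
# Exposure ∝ shutter_time × ISO, so light needed ∝ 1 / (shutter_time × ISO).
video_shutter, video_iso = 1 / 50, 400    # motion-blur-friendly video settings
still_shutter, still_iso = 1 / 400, 80    # motion-freezing stills settings

ratio = (video_shutter * video_iso) / (still_shutter * still_iso)
stops = math.log2(ratio)

print(f"Stills need ~{ratio:.0f}x the light ({stops:.1f} stops more)")
# Stills need ~40x the light (5.3 stops more)
```

Three stops from the shutter speed plus a bit over two stops from the ISO difference: about forty times as much light, which is why short flash bursts are so much more practical than continuous lighting for the stills.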
The cove is also very differently lit as for stills it has a TON of flash into it from the ceiling mounted units. The house lights do not create the same effect for stills!
The colour temperature of the lights was different too. The stills generally come in around 5000 K whereas I *think* the video would have been around 3200 K to match the house lights and modelling light in the stills flash. So the actual initial light spectrum coming off the scene is different.
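For scale, colour-temperature gaps like that are usually quoted in mireds (the units correction gels are rated in), because equal mired shifts look roughly equally large to the eye; a quick sketch using the two figures above:

```python
# Mired = 1,000,000 / kelvin; the difference in mireds is the standard way
# to quantify how big a colour-temperature gap is (and what gel would fix it).
def mired(kelvin):
    return 1_000_000 / kelvin

# ~5000 K stills flash vs ~3200 K house/modelling lights for video
shift = mired(3200) - mired(5000)
print(f"shift = {shift:.1f} mired")  # shift = 112.5 mired
```

That is a big shift – roughly a full CTO’s worth – so it’s no surprise the raw spectra coming off the scene differ noticeably between the two setups.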
Actually I tried to recreate the shadowless lighting scheme on video by using a barrage of LED panels, but it doesn’t really work, so in the end I just opted for lighting from the up-camera side, which is the usual cinema default.
So you’re right, I’ve gone for the most familiar convention for each medium: shadowless with the cove for stills, up-camera lighting for the video. I wouldn’t have been technically able to do it the other way around, so I might as well go with the more normal convention for each!
An HMI with softboxes is on my shopping list for if the recession ever ends 🙂
2) They were shot through different lenses. The colour rendition of the Hasselblad lenses (stills) and Canon lenses (video, static shoots) and GoPro lenses (video, moving shots) is a little different.
3) The colour dyes on each camera’s sensor are a bit different, and the capabilities of the camera to handle over and under exposure varies a bit too.
4) The post processing chain is very different for each camera, and each manufacturer characterises their camera differently. The best results are usually obtained with the manufacturer’s own “special sauce” colour processing – for example, the Hasselblad shots processed through Phocus look radically different from the default look that Aperture prefers. Either can be morphed towards the other without too much trouble if all you need is a vague resemblance, but getting an exact match can be almost impossible as there are so many variables to play with.
For video, the main task was to match the RED and the GoPro footage, which I did by shooting in a flat profile on the GoPro then using a filter called FilmConvert to interpret that as if it were Fujichrome Provia film, which gives a close rendition to the RED with my preferred settings. It isn’t exact – even having balanced the skin tones, there’s a warm/magenta cast to the RED footage and a green cast to the whites of the cove for the GoPro. There’s a limit to how much colour correction I can do given that we have to produce one video a week for RE and SS 🙁 I could probably also tune it closer to the stills in post.
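The FilmConvert step is a black box to me, but the underlying idea of killing a cast against a patch that should be neutral (like the white cove) can be sketched as simple per-channel gains; the pixel values here are made up for illustration:

```python
# Crude cast removal: pick a patch that *should* be neutral (e.g. the white
# cove), and scale R, G, B so that it actually comes out grey. Every other
# pixel gets the same per-channel gains.
def neutralise(pixel, reference):
    target = sum(reference) / 3                        # the grey we aim for
    gains = [target / channel for channel in reference]
    return tuple(min(255, round(c * g)) for c, g in zip(pixel, gains))

# Hypothetical GoPro cove patch reading slightly green:
cove = (200, 212, 198)
print(neutralise(cove, cove))  # (203, 203, 203) – the reference goes grey
```

Real grading tools do this in a linear or log space with far more sophistication, but the principle – measure a known neutral, derive gains, apply everywhere – is the same.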
5) The output format is different, and again handles over- and underexposure differently. For stills, it’s RAW -> 16 bit TIFFs in ProPhoto RGB -> 8 bit JPEGs. The stills software is very smart at avoiding clipping highlights or creating bad colour artefacts, and very good at getting the colour balance spot on. For video, it’s REDcode RAW -> 10 bit ProRes 422 HQ in REC709 colour space -> highly compressed MP4 (and also in parallel to JPEGs for the framegrabs). On video, the clipping isn’t handled as well, and the footage suffers from being stuffed into a restricted colour space at the first export step. REC709, the HD TV colour space standard, is a smaller colour space than ProPhoto RGB and plays merry hell with highly saturated colours.
It also has something called “superwhites”: values of white greater than 100% intensity. This makes no sense from a stills point of view, or really from a digital imaging point of view. It is a hold-over from analogue TV and from the way video signals are traditionally encoded as luminosity plus two colour signals at lower resolution. Technically one isn’t supposed to exceed 100% white (a “broadcast legal” signal), which doesn’t matter when one views the video in isolation; your eyes adjust to a dimmer white. But by comparison with the stills, where 100% white is likely to be driving your monitor as bright as it can go, the image is likely to look washed out.
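To illustrate the levels issue with the standard 8-bit Rec.709 “studio swing” numbers (these are the spec’s values, not anything specific to my pipeline): video black sits at code 16 and 100% white at 235, with superwhites living in 236–255. A naive remap to the full 0–255 range of a JPEG clips them:

```python
def studio_to_full(y):
    """Map 8-bit studio-swing luma (16 = black, 235 = 100% white) to
    full range 0-255; anything above 235 is a 'superwhite' and clips."""
    scaled = (y - 16) * 255 / (235 - 16)
    return round(min(max(scaled, 0.0), 255.0))

print(studio_to_full(16))   # 0   -> black maps to black
print(studio_to_full(235))  # 255 -> legal white maps to full white
print(studio_to_full(250))  # 255 -> superwhite detail is clipped away
```

Leaving some headroom in the conversion (mapping 235 a little below 255) is one way to keep superwhite detail, at the cost of whites that look slightly dim next to a still.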
So when one exports from this restricted colour space to JPEGs to get the framegrabs, all manner of monkey business has already been applied to the images. I put in a “one size fits all” correction when I make the JPEGs to remove the worst of the effect, but it never quite gets there – it leaves some headroom, which is also a hold-over from the superwhites thing.
I’d love it if we could get rid of these hangovers from crappy analogue SD TV and just use a nice clean RGB full-range wide gamut colour space for video.
The colour differences and different lighting renditions bother me, but generally it is only an issue if you look at the images side by side, comparing stills and video. I’d still love to shoot something in motion that really looks like an RE still. Generally the limiting factor is just the amount of light needed to get there – so I think I’ve probably got closest with golden-hour daylight.
August 12, 2014 at 16:15 #25546
Thank you, Hywel, for the detailed response. It is a reminder of the effort that goes into your productions and the standards you work to. I am hopeful that as I work through my photography learning curve I will actually understand more and more of it.
I write this while still in Florida after FetishCon and as we are about to try filming with our own, recently acquired GoPro camera. Forgive us if for now we just have fun and leave the technical stuff (and kink) till later.