AWS Thinkbox Discussion Forums

LUT support

Hi, I wonder if you could put LUT support on the Draft wishlist? That would be great, because the colors in our videos are not what they are supposed to be.

Thanks !

Fred

Did you check out the Beta 10 build?

See:
viewtopic.php?f=127&t=7491

Just saw it, funny timing! Is there any way to be emailed when a build comes out?

Where do I put the line? I want everything to be output with the Rec709 LUT.

Thanks :slight_smile:

[code]import sys
import os
import re   #used by FrameListToFrames below
import datetime
import copy
import xml.etree.ElementTree as xml

import Draft
from DraftParamParser import *

print "Draft Version: %s" % Draft.LibraryInfo.Version()

def ResizeWithLetterbox(self, width, height):
    if width <= 0:
        raise RuntimeError('width must be a positive number')
    if height <= 0:
        raise RuntimeError('height must be a positive number')
    sourceAR = float(self.width) / self.height
    destAR = float(width) / height
    if sourceAR == destAR:
        self.Resize(width, height)
    else:
        image = copy.deepcopy(self)
        if width <= self.width and height <= self.height:
            self.Crop(0, 0, width, height)
        else:
            self.Resize(width, height)
        self.SetToColor(Draft.ColorRGBA(0, 0, 0, 1.0))
        if sourceAR > destAR:
            image.Resize(width, int(round(width / sourceAR)))
        else:
            image.Resize(int(round(height * sourceAR)), height)
        self.CompositeWithPositionAndGravity(image, 0.5, 0.5, Draft.PositionalGravity.CenterGravity, Draft.CompositeOperator.CopyCompositeOp)

Draft.Image.ResizeWithLetterbox = ResizeWithLetterbox

#Returns a dictionary of a Deadline Job's properties
def getDeadlineJob(job, repository):
    deadlineJobPath = (repository + "\\jobs\\" + job + "\\" + job + ".job")
    jobKeys = (xml.parse(deadlineJobPath)).getroot()
    jobDict = {}
    for o in list(jobKeys):
        if len(o.getchildren()) < 1:
            jobDict[o.tag] = o.text
        else:
            jobDict[o.tag] = []
            for t in list(o):
                (jobDict[o.tag]).append(t.text)
    jobDict['deadlineJobPath'] = deadlineJobPath
    return jobDict

#Returns a list of frames based on the given frameString
def FrameListToFrames( frameString ):
    frames = []
    frameRangeTokens = re.split( '\s+|,+', frameString )

    for token in frameRangeTokens:
        try:
            if ( len(token) > 0 ):
                dashIndex = token.find( '-', 1 )

                if ( dashIndex == -1 ):
                    startFrame = int(token)
                    frames.append( startFrame )
                else:
                    startFrame = int(token[0:dashIndex])

                    m = re.match( "(-?\d+)(?:(x|step|by|every)(\d+))?", token[dashIndex + 1:] )
                    if ( m == None ):
                        raise StandardError( "Second part of Token '" + token[dashIndex + 1:] + "' failed regex match" )
                    else:
                        endFrame = int(m.group(1))

                        if ( m.group(2) == None ):
                            frames.extend( range(startFrame, endFrame + 1) )
                        else:
                            dir = 1
                            if startFrame > endFrame:
                                dir = -1

                            byFrame = int(m.group(3))

                            frame = startFrame
                            while (frame * dir) <= (endFrame * dir):
                                frames.append( frame )
                                frame += byFrame * dir

        except:
            print "ERROR: Frame Range token '" + token + "' is malformed. Skipping this token."
            raise

    frames = list(set(frames))
    frames.sort()

    return frames

#CHANGE ME! Path to the Deadline Repository root
deadlineRepo = "\\\\fx-deadline\\deadline\\"

#CHANGE ME! Path to an image containing the background of the slate frame
slateFrame = "\\\\fx-deadline\\deadline\\Draft\\Slate_Montage5K.png"

#The argument name/types we're expecting from the command line arguments
expectedTypes = dict()
expectedTypes['frameList'] = '<string>'
expectedTypes['inFile'] = '<string>'
expectedTypes['outFile'] = '<string>'
expectedTypes['username'] = '<string>'
expectedTypes['entity'] = '<string>'
expectedTypes['version'] = '<string>'
expectedTypes['deadlineJobID'] = '<string>'

#Parse the command line arguments
params = ParseCommandLine( expectedTypes, sys.argv )

inFilePattern = params['inFile']
frames = FrameListToFrames( params['frameList'] )
(outBase, outExt) = os.path.splitext( params['outFile'] )

#not a huge deal if we can't connect to the repo, we'll just be missing some info
try:
    jobParams = getDeadlineJob( params['deadlineJobID'], deadlineRepo )
except:
    jobParams = {}

outWidth = 1920
outHeight = 1080
slateFrames = 1

for eye in ['l','r']:
    #Build up the encoders
    outBaseEye = outBase.replace( '%v', eye )

    #Appends (#) on the end of the filename until we have a unique name
    increment = 2
    newFileName = outBaseEye + outExt
    while os.path.exists( newFileName ):
        newFileName = "%s (%d)%s" % (outBaseEye, increment, outExt)
        increment += 1

    MJPEGencoder = Draft.VideoEncoder( newFileName, 24.0, outWidth, outHeight, 100000, "MJPEG" )
    h264Encoder = Draft.VideoEncoder( newFileName + "-h264" + outExt, 24, outWidth, outHeight, 16000, "H264" )

    #Annotation info used for burn ins
    annotationInfo = Draft.AnnotationInfo()
    annotationInfo.FontType = "Times-New-Roman"
    annotationInfo.PointSize = int( outHeight * 0.022 )
    annotationInfo.Color = Draft.ColorRGBA( 1.0, 1.0, 1.0, 1.0 )

    #prep the Slate Frame
    try:
        slate = Draft.Image.ReadFromFile( slateFrame )
    except:
        slate = Draft.Image.CreateImage( outWidth, outHeight )
        slate.SetToColor( Draft.ColorRGBA( 0.0, 0.0, 0.0, 1.0 ) )

    if ( slate.width != outWidth or slate.height != outHeight ):
        slate.ResizeWithLetterbox( outWidth, outHeight )

    #sets up the text on the slate frame
    slateAnnotations = [
        ("SHOW", jobParams.get('ExtraInfo1', '<SKIP>')), #This line is skipped if there is no ExtraInfo1
        ("Episode", params.get('episode', '<SKIP>')), #This line is skipped if 'episode' isn't in the extra args
        ("Shot", params['entity']),
        ("Frames", params['frameList']),
        ("Handles", params.get('handles', '<SKIP>')), #This line is skipped if 'handles' isn't in the extra args
        ("Version", params['version']),
        ("", ''),
        ("", ''),
        ("Artist", params['username']),
        ("Date", datetime.datetime.now().strftime("%m/%d/%Y %I:%M %p") )
    ]

    #comp the annotations over top the slate frame
    skipLines = 0
    for i in range( 0, len( slateAnnotations ) ):
        annotationTuple = slateAnnotations[i]

        if ( annotationTuple[1] == "<SKIP>" ):
            skipLines += 1
            continue

        lineNum = i - skipLines
        if ( annotationTuple[0] != "" ):
            annotation = Draft.Image.CreateAnnotation( slateAnnotations[i][0] + ": ", annotationInfo )
            slate.CompositeWithPositionAndGravity( annotation, 0.45, 0.7 - (lineNum * 0.06), Draft.PositionalGravity.SouthEastGravity, Draft.CompositeOperator.OverCompositeOp )

        if ( annotationTuple[1] != "" ):
            annotation = Draft.Image.CreateAnnotation( slateAnnotations[i][1], annotationInfo )
            slate.CompositeWithPositionAndGravity( annotation, 0.46, 0.7 - (lineNum * 0.06), Draft.PositionalGravity.SouthWestGravity, Draft.CompositeOperator.OverCompositeOp )

    #encode the slate frames at the start of the video
    print( "Encoding Slate Frames..." )
    for i in range( 0, slateFrames ):
        MJPEGencoder.EncodeNextFrame( slate )
        h264Encoder.EncodeNextFrame( slate )

    studioAnnotation = Draft.Image.CreateAnnotation( "The Ice Age", annotationInfo )
    entityAnnotation = Draft.Image.CreateAnnotation( "%s %s" % (params['entity'], datetime.datetime.now().strftime("%m/%d/%Y")), annotationInfo )
    annotationInfo.BackgroundColor = Draft.ColorRGBA( 0.0, 0.0, 0.0, 1.0 )

    #Main encoding loop
    for frameNumber in frames:
        print( "Processing Frame: %d...-1" % frameNumber )

        inFile = inFilePattern.replace( '%v', eye )
        inFile = ReplaceFilenameHashesWithNumber( inFile, frameNumber )

        #check if the file exists
        blackFrame = ( not os.path.exists( inFile ) )

        if not blackFrame:
            try:
                #try to read in the frame
                bgFrame = Draft.Image.ReadFromFile( inFile )
            except:
                #failed to read in, encode a black frame instead
                blackFrame = True

        #create a black frame if we weren't able to read it in
        if blackFrame:
            bgFrame = Draft.Image.CreateImage( outWidth, outHeight )
            bgFrame.SetToColor( Draft.ColorRGBA( 0.0, 0.0, 0.0, 1.0 ) )
        elif ( bgFrame.width != outWidth or bgFrame.height != outHeight ):
            bgFrame.ResizeWithLetterbox( outWidth, outHeight )

        #Do the frame burnins
        framesAnnotation = Draft.Image.CreateAnnotation( str( frameNumber ), annotationInfo )
        bgFrame.CompositeWithPositionAndGravity( studioAnnotation, 0.0, 1.0, Draft.PositionalGravity.NorthWestGravity, Draft.CompositeOperator.OverCompositeOp )
        bgFrame.CompositeWithPositionAndGravity( entityAnnotation, 0.0, 0.0, Draft.PositionalGravity.SouthWestGravity, Draft.CompositeOperator.OverCompositeOp )
        bgFrame.CompositeWithPositionAndGravity( framesAnnotation, 1.0, 0.0, Draft.PositionalGravity.SouthEastGravity, Draft.CompositeOperator.OverCompositeOp )

        MJPEGencoder.EncodeNextFrame( bgFrame )
        h264Encoder.EncodeNextFrame( bgFrame )

    #Finalize the encoding process for this eye
    MJPEGencoder.FinalizeEncoding()
    h264Encoder.FinalizeEncoding()
[/code]

Go to the Draft Builds page. Near the bottom of the page, in a blue rectangle, you should see a link that says “Subscribe forum”. Click this link.

Now you should receive an email whenever a new build is posted. But please note that you must click the link in your email. If you don’t click the link, then it will stop sending new notifications.

Add this line after "slateFrames = 1":

[code]outLut = Draft.LUT.CreateRec709()[/code]

And apply the LUT before you encode each frame. Change:

[code]MJPEGencoder.EncodeNextFrame( bgFrame )
h264Encoder.EncodeNextFrame( bgFrame )[/code]

to:

[code]outLut.Apply( bgFrame )

MJPEGencoder.EncodeNextFrame( bgFrame )
h264Encoder.EncodeNextFrame( bgFrame )[/code]
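If you also want the slate frame to go through the same LUT (the fuller scripts later in this thread do this), apply it once before the slate is encoded. A minimal sketch, reusing the outLut, slate and encoder variables from the script above:

[code]#run the slate through the Rec709 LUT before it is encoded
outLut.Apply( slate )

for i in range( 0, slateFrames ):
    MJPEGencoder.EncodeNextFrame( slate )
    h264Encoder.EncodeNextFrame( slate )[/code]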

Here is a new version of your script that includes these changes:
simple_slate_eyes_rec709.zip (3.02 KB)

Thanks, I'll test it right away!

Hi,

I'm experiencing problems with the LUTs. The only LUT that works for me is Cineon; sRGB and Rec709 just give me linear output, like my input EXR sequence. Has anyone used sRGB or Rec709 successfully?
Thanks in advance.

Best regards,
Dziga

I'm not sure if the LUTs are correct, but they do make a difference for me: draft_lut_test.png

Would it be possible for you to please send us a script and an image file that reproduces the problem? You can send us files using our ticket system. Also, could you please tell us what platform you’re using? For example, 64-bit Windows.

Thanks for the info. I just created a test scene in Maya and I'm still experiencing issues. I now see a slight difference between sRGB, Rec709 and no LUT, but they don't look the way I expected.
I generally compare the MOVs to the look of the EXRs in RV Player. I zipped the scene file plus the output and the Draft MOVs. I wonder if there's a problem with non-clamped outputs. I set my render settings to a linear workflow, gamma 2.2 and no clamping. I'm going to test it with clamping next.

specs:
Thinkbox Product: Draft
Product Version: Build 10
Operating System: Windows 7 64bit
Rendering Software: Vray 2.0

lut_comparison.jpg

Best regards,
Dziga
OM_default_h264_scale1280_25p_sRGB.zip (1.58 KB)

Thank you for your detailed report!

EDIT: I believe this post is incorrect. Please see my revised post here: http://forums.thinkboxsoftware.com/viewtopic.php?f=127&t=7540&p=31334#p31334

It seems the difference comes from two things: gamma, and the fact that RV's "Color" menu does the opposite of Draft's LUTs.

To get the same gamma as RV, we first invert the 2.2 gamma:

[code]sourceImg.ApplyGamma( 1.0 / 2.2 )[/code]

(This gamma convention may be the opposite of normal. Please let me know.)

Next, it seems that when you choose a Color in RV, it converts from that color space to linear. Draft's LUTs do the opposite: they convert from linear to that color space. To get the same behaviour as RV, we must apply the LUT's inverse:

[code]lut = Draft.LUT.CreateSRGB().Inverse()
lut.Apply( sourceImg )[/code]

Here’s an image showing the results with gamma and the inverse LUT:

Please find attached a new version of your script that includes the changes described here.
OM_default_h264_scale1280_25p_sRGB_v2.zip (1.56 KB)

Hi Paul,

Do you think you could integrate this into my script as well? We just realized that we have that problem too.

thanks :slight_smile:

Fred

[code]import sys
import os
import re   #used by FrameListToFrames below
import datetime
import copy
import xml.etree.ElementTree as xml

import Draft
from DraftParamParser import *

print "Draft Version: %s" % Draft.LibraryInfo.Version()

def ResizeWithLetterbox(self, width, height):
    if width <= 0:
        raise RuntimeError('width must be a positive number')
    if height <= 0:
        raise RuntimeError('height must be a positive number')
    sourceAR = float(self.width) / self.height
    destAR = float(width) / height
    if sourceAR == destAR:
        self.Resize(width, height)
    else:
        image = copy.deepcopy(self)
        if width <= self.width and height <= self.height:
            self.Crop(0, 0, width, height)
        else:
            self.Resize(width, height)
        self.SetToColor(Draft.ColorRGBA(0, 0, 0, 1.0))
        if sourceAR > destAR:
            image.Resize(width, int(round(width / sourceAR)))
        else:
            image.Resize(int(round(height * sourceAR)), height)
        self.CompositeWithPositionAndGravity(image, 0.5, 0.5, Draft.PositionalGravity.CenterGravity, Draft.CompositeOperator.CopyCompositeOp)

Draft.Image.ResizeWithLetterbox = ResizeWithLetterbox

#Returns a dictionary of a Deadline Job's properties
def getDeadlineJob(job, repository):
    deadlineJobPath = (repository + "\\jobs\\" + job + "\\" + job + ".job")
    jobKeys = (xml.parse(deadlineJobPath)).getroot()
    jobDict = {}
    for o in list(jobKeys):
        if len(o.getchildren()) < 1:
            jobDict[o.tag] = o.text
        else:
            jobDict[o.tag] = []
            for t in list(o):
                (jobDict[o.tag]).append(t.text)
    jobDict['deadlineJobPath'] = deadlineJobPath
    return jobDict

#Returns a list of frames based on the given frameString
def FrameListToFrames( frameString ):
    frames = []
    frameRangeTokens = re.split( '\s+|,+', frameString )

    for token in frameRangeTokens:
        try:
            if ( len(token) > 0 ):
                dashIndex = token.find( '-', 1 )

                if ( dashIndex == -1 ):
                    startFrame = int(token)
                    frames.append( startFrame )
                else:
                    startFrame = int(token[0:dashIndex])

                    m = re.match( "(-?\d+)(?:(x|step|by|every)(\d+))?", token[dashIndex + 1:] )
                    if ( m == None ):
                        raise StandardError( "Second part of Token '" + token[dashIndex + 1:] + "' failed regex match" )
                    else:
                        endFrame = int(m.group(1))

                        if ( m.group(2) == None ):
                            frames.extend( range(startFrame, endFrame + 1) )
                        else:
                            dir = 1
                            if startFrame > endFrame:
                                dir = -1

                            byFrame = int(m.group(3))

                            frame = startFrame
                            while (frame * dir) <= (endFrame * dir):
                                frames.append( frame )
                                frame += byFrame * dir

        except:
            print "ERROR: Frame Range token '" + token + "' is malformed. Skipping this token."
            raise

    frames = list(set(frames))
    frames.sort()

    return frames

#CHANGE ME! Path to the Deadline Repository root
deadlineRepo = "\\\\fx-deadline\\deadline\\"

#CHANGE ME! Path to an image containing the background of the slate frame
slateFrame = "\\\\fx-deadline\\deadline\\Draft\\Slate_Montage5K.png"

#The argument name/types we're expecting from the command line arguments
expectedTypes = dict()
expectedTypes['frameList'] = '<string>'
expectedTypes['inFile'] = '<string>'
expectedTypes['outFile'] = '<string>'
expectedTypes['username'] = '<string>'
expectedTypes['entity'] = '<string>'
expectedTypes['version'] = '<string>'
expectedTypes['deadlineJobID'] = '<string>'

#Parse the command line arguments
params = ParseCommandLine( expectedTypes, sys.argv )

inFilePattern = params['inFile']
frames = FrameListToFrames( params['frameList'] )

if (True):
    (outDir, outFile) = os.path.split(params['outFile'])
    (outBase, outExt) = os.path.splitext(outFile)

    outFolder = os.path.basename(outDir)

    if not os.path.exists(os.path.join(outDir, '1080p')):
        os.makedirs(os.path.join(outDir, '1080p'))
    if not os.path.exists(os.path.join(outDir, 'halfrez')):
        os.makedirs(os.path.join(outDir, 'halfrez'))
    if not os.path.exists(os.path.join(outDir, 'fullrez')):
        os.makedirs(os.path.join(outDir, 'fullrez'))

else:
    (outBase, outExt) = os.path.splitext(params['outFile'])

#not a huge deal if we can't connect to the repo, we'll just be missing some info
try:
    jobParams = getDeadlineJob( params['deadlineJobID'], deadlineRepo )
except:
    jobParams = {}

outWidth = 1920
outHeight = 1080
fullWidth = 5120
fullHeight = 2700
halfWidth = 2560
halfHeight = 1350
slateFrames = 1
outLut = Draft.LUT.CreateRec709()

for eye in ['l','r']:
    #Build up the encoders
    outBaseEye = outBase.replace( '%v', eye )
    outBaseEyeHalf = outBase.replace( '%v', eye ) + "-Hres"
    outBaseEyeFull = outBase.replace( '%v', eye ) + "-Fres"

    #Appends (#) on the end of the filename until we have a unique name
    increment = 2
    newFileName = '%s/1080p/%s%s' % (outDir, outBaseEye, outExt)
    while os.path.exists( newFileName ):
        newFileName = '%s/1080p/%s (%d)%s' % (outDir, outBaseEye, increment, outExt)
        #newFileName = "%s (%d)%s" % (outBaseEye, increment, outExt)
        increment += 1

    #Appends (#) on the end of the filename until we have a unique name
    increment = 2
    newFileNameHalf = '%s/halfrez/%s%s' % (outDir, outBaseEyeHalf, outExt)
    while os.path.exists( newFileNameHalf ):
        newFileNameHalf = '%s/halfrez/%s (%d)%s' % (outDir, outBaseEyeHalf, increment, outExt)
        #newFileNameHalf = "%s (%d)%s" % (outBaseEyeHalf, increment, outExt)
        increment += 1

    #Appends (#) on the end of the filename until we have a unique name
    #increment = 2
    #newFileNameFull = '%s/fullrez/%s%s' % (outDir, outBaseEyeFull, outExt)
    #while os.path.exists( newFileNameFull ):
    #    newFileNameFull = '%s/fullrez/%s (%d)%s' % (outDir, outBaseEyeFull, increment, outExt)
    #    #newFileNameFull = "%s (%d)%s" % (outBaseEyeFull, increment, outExt)
    #    increment += 1

    MJPEGencoder = Draft.VideoEncoder( newFileName, 24.0, outWidth, outHeight, 75000, "MJPEG" )
    MJPEGencoderHRes = Draft.VideoEncoder( newFileNameHalf, 24.0, halfWidth, halfHeight, 225000, "MJPEG" )
    #MJPEGencoderFRes = Draft.VideoEncoder( newFileNameFull, 24.0, fullWidth, fullHeight, 350000, "MJPEG" )

    #Annotation info used for burn ins
    annotationInfo = Draft.AnnotationInfo()
    annotationInfo.FontType = "Times-New-Roman"
    annotationInfo.PointSize = int( outHeight * 0.022 )
    annotationInfo.Color = Draft.ColorRGBA( 1.0, 1.0, 1.0, 1.0 )

    #prep the Slate Frame
    try:
        slate = Draft.Image.ReadFromFile( slateFrame )
    except:
        slate = Draft.Image.CreateImage( outWidth, outHeight )
        slate.SetToColor( Draft.ColorRGBA( 0.0, 0.0, 0.0, 1.0 ) )

    if ( slate.width != outWidth or slate.height != outHeight ):
        slate.ResizeWithLetterbox( outWidth, outHeight )

    #sets up the text on the slate frame
    slateAnnotations = [
        ("SHOW", jobParams.get('ExtraInfo1', '<SKIP>')), #This line is skipped if there is no ExtraInfo1
        ("Episode", params.get('episode', '<SKIP>')), #This line is skipped if 'episode' isn't in the extra args
        ("Shot", params['entity']),
        ("Frames", params['frameList']),
        ("Handles", params.get('handles', '<SKIP>')), #This line is skipped if 'handles' isn't in the extra args
        ("Version", params['version']),
        ("", ''),
        ("", ''),
        ("Artist", params['username']),
        ("Date", datetime.datetime.now().strftime("%m/%d/%Y %I:%M %p") )
    ]

    #comp the annotations over top the slate frame
    skipLines = 0
    for i in range( 0, len( slateAnnotations ) ):
        annotationTuple = slateAnnotations[i]

        if ( annotationTuple[1] == "<SKIP>" ):
            skipLines += 1
            continue

        lineNum = i - skipLines
        if ( annotationTuple[0] != "" ):
            annotation = Draft.Image.CreateAnnotation( slateAnnotations[i][0] + ": ", annotationInfo )
            slate.CompositeWithPositionAndGravity( annotation, 0.45, 0.7 - (lineNum * 0.06), Draft.PositionalGravity.SouthEastGravity, Draft.CompositeOperator.OverCompositeOp )

        if ( annotationTuple[1] != "" ):
            annotation = Draft.Image.CreateAnnotation( slateAnnotations[i][1], annotationInfo )
            slate.CompositeWithPositionAndGravity( annotation, 0.46, 0.7 - (lineNum * 0.06), Draft.PositionalGravity.SouthWestGravity, Draft.CompositeOperator.OverCompositeOp )

    outLut.Apply( slate )

    #encode the slate frames at the start of the video
    print( "Encoding Slate Frames..." )
    for i in range( 0, slateFrames ):
        MJPEGencoder.EncodeNextFrame( slate )
        MJPEGencoderHRes.EncodeNextFrame( slate )
        #MJPEGencoderFRes.EncodeNextFrame( slate )

    studioAnnotation = Draft.Image.CreateAnnotation( "The Ice Age", annotationInfo )
    entityAnnotation = Draft.Image.CreateAnnotation( "%s    %s" % (params['entity'], datetime.datetime.now().strftime("%m/%d/%Y")), annotationInfo )
    annotationInfo.BackgroundColor = Draft.ColorRGBA( 0.0, 0.0, 0.0, 1.0 )

    #Main encoding loop
    for frameNumber in frames:
        print( "Processing Frame: %d...-1" % frameNumber )

        inFile = inFilePattern.replace( '%v', eye )
        inFile = ReplaceFilenameHashesWithNumber( inFile, frameNumber )

        #check if the file exists
        blackFrame = ( not os.path.exists( inFile ) )

        if not blackFrame:
            try:
                #try to read in the frame
                bgFrame = Draft.Image.ReadFromFile( inFile )
            except:
                #failed to read in, encode a placeholder frame instead
                blackFrame = True

        #create a placeholder (green) frame if we weren't able to read the input in
        if blackFrame:
            bgFrame = Draft.Image.CreateImage( outWidth, outHeight )
            bgFrame.SetToColor( Draft.ColorRGBA( 0.0, 1.0, 0.0, 1.0 ) )
        elif ( bgFrame.width != outWidth or bgFrame.height != outHeight ):
            bgFrame.ResizeWithLetterbox( outWidth, outHeight )

        #Do the frame burnins
        framesAnnotation = Draft.Image.CreateAnnotation( str( frameNumber ), annotationInfo )
        bgFrame.CompositeWithPositionAndGravity( studioAnnotation, 0.0, 1.0, Draft.PositionalGravity.NorthWestGravity, Draft.CompositeOperator.OverCompositeOp )
        bgFrame.CompositeWithPositionAndGravity( entityAnnotation, 0.0, 0.0, Draft.PositionalGravity.SouthWestGravity, Draft.CompositeOperator.OverCompositeOp )
        bgFrame.CompositeWithPositionAndGravity( framesAnnotation, 1.0, 0.0, Draft.PositionalGravity.SouthEastGravity, Draft.CompositeOperator.OverCompositeOp )

        outLut.Apply( bgFrame )

        MJPEGencoder.EncodeNextFrame( bgFrame )
        MJPEGencoderHRes.EncodeNextFrame( bgFrame )
        #MJPEGencoderFRes.EncodeNextFrame( bgFrame )

    #Finalize the encoding process for this eye
    MJPEGencoder.FinalizeEncoding()
    MJPEGencoderHRes.FinalizeEncoding()
    #MJPEGencoderFRes.FinalizeEncoding()

[/code]

Sure, you mean both the gamma and inverse LUT?
simple_slate_eyes_inverse_lut.zip (3.36 KB)

I think my original post was wrong. In it I mentioned gamma but I don’t think that has anything to do with it.

Looking at RV I see two sets of controls related to pre-defined LUTs:

  • Under the Color menu, “File Nonlinear to Linear Conversion”, and
  • Under the View menu, “Linear to Display Correction”.

We want to get similar behaviour in Draft. The first, "File Nonlinear to Linear Conversion", is accomplished using an inverse LUT in Draft. For example, let's say you have this set to "Rec709" in RV. In Draft:

[code]lut = Draft.LUT.CreateRec709().Inverse()
lut.Apply( sourceImg )[/code]
If the input image is already linear then you’d do nothing here in Draft.

The second, "Linear to Display Correction", is accomplished using a regular LUT in Draft. For example, let's say you have this set to "sRGB" in RV. In Draft:

[code]displayLut = Draft.LUT.CreateSRGB()
displayLut.Apply( sourceImg )[/code]

We apply the lut followed by the displayLut in Draft. Note that if your lut is the inverse of your displayLut, such as in the sRGB example below, then they cancel each other out, so you can remove them from your script.
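To put the two halves together, here is a rough sketch of that processing order in Draft, assuming (as an example) Rec709 footage with an sRGB display correction; the file path is only a placeholder:

[code]sourceImg = Draft.Image.ReadFromFile( "//server/show/shot/frame_0001.exr" )  #placeholder path

#"File Nonlinear to Linear Conversion" (Rec709 -> linear): inverse LUT
inputLut = Draft.LUT.CreateRec709().Inverse()
inputLut.Apply( sourceImg )

#...do any Draft compositing/annotation work here, in linear space...

#"Linear to Display Correction" (linear -> sRGB): regular LUT
displayLut = Draft.LUT.CreateSRGB()
displayLut.Apply( sourceImg )[/code]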


(Note: your results for Cineon will look different. This difference is due to a couple bugs that will be fixed in the next build.)
OM_default_h264_scale1280_25p_sRGB_v3.zip (1.56 KB)

Aaaha! Thank you for your effort in analyzing this :smiley:. I’ll have a closer look at it.

Best regards,
Dziga

I am pretty confused.

What settings should I use if, in RV, "File Nonlinear to Linear Conversion" is set to Rec709 and "Linear to Display Correction" is set to Rec709 as well?

Thanks !

Fred

Did the output of your script here look wrong?

In your specific case, if you're making a MOV to play back in the QuickTime player, and you want it to look the same as your EXRs look in RV, then I believe you don't need any LUT in Draft. This is because your "File Nonlinear to Linear Conversion" and "Linear to Display Correction" are the same conversion in RV, so they cancel each other out.
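To illustrate with the calls already shown above (for some Draft image img): converting Rec709 to linear and then linear back to Rec709 just returns essentially the original pixels, so both steps can be dropped.

[code]toLinear = Draft.LUT.CreateRec709().Inverse()   #Rec709 -> linear
toLinear.Apply( img )
toDisplay = Draft.LUT.CreateRec709()             #linear -> Rec709
toDisplay.Apply( img )
#the two conversions cancel out, so neither is needed in this case[/code]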

In general the answer depends on the color handling in the rest of your pipeline. I think you'd typically want to do something like the following (see the sketch after this list):

  • Convert from your input's color space to linear. This is normally done using an Inverse() LUT in Draft. If your input EXR is already in linear color space then you don't need to do anything here.
  • Next, do your Draft compositing operations.
  • Finally, convert to the color space you need for your output. For example, this may be LUT.CreateSRGB() for a QuickTime. If you're writing to an EXR with linear color space then you don't need to do anything here.
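As a rough sketch of those three steps for the case discussed in this thread (linear EXRs in, an sRGB QuickTime out; inFile, outWidth/outHeight and the encoder are assumed to be set up as in the scripts above):

[code]frame = Draft.Image.ReadFromFile( inFile )

#1) the input EXR is already linear, so no inverse LUT is needed here

#2) do the Draft compositing operations (resize, slate, burn-ins, etc.)
frame.ResizeWithLetterbox( outWidth, outHeight )

#3) convert to the output color space just before encoding
displayLut = Draft.LUT.CreateSRGB()
displayLut.Apply( frame )
encoder.EncodeNextFrame( frame )[/code]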

Are your results different from what you expect? Some side-by-side screenshots showing what you’re getting vs. what you want may help us figure this out.

Whoops :smiley: ...I did it the wrong way. Of course, I have to invert Cineon first and then apply the viewer LUT. Sorry (:

/*
Hullo,

unfortunately, I have to revive this dead thread. I still come back here from time to time to make use of everything you told me, Paul, and now I'm again at a point where I'm stuck.
I am currently working with Alexa material and write AlexaV3LogC EXRs out of Nuke. While I think I understand how RV and Draft work, I can't get either the Alexa LUT or the Cineon LUT to work correctly.

I'm building my Draft script step by step, constantly rendering H264 QuickTimes and comparing them to the EXRs in RV.

When I have RV set to the sRGB viewer LUT and choose "No Conversion" in the "Color" menu, I get the same look in my QT when using Draft.LUT.CreateSRGB(). As before, I use the sRGB LUT to compensate for RV's viewer LUT.
Setting RV's color conversion to sRGB and applying an inverse sRGB LUT in Draft also results in a correct look. That's why I assumed I could just change my inverse sRGB to an inverse AlexaV3LogC or an inverse Cineon LUT,
but the results differ extremely (way brighter than in RV).

Actually, I felt quite confident about colorspaces :confused: . What am I missing this time?

[code]displayLut = Draft.LUT.CreateSRGB()
displayLut.Apply( sourceImg )
lut = Draft.LUT.CreateSRGB().Inverse()
lut.Apply( sourceImg )[/code]
= perfect

[code]displayLut = Draft.LUT.CreateSRGB()
displayLut.Apply( sourceImg )
lut = Draft.LUT.CreateCineon().Inverse()
lut.Apply( sourceImg )[/code]
= too bright and overexposed

Not sure if it's helpful, but when I want to write an H264 out of Nuke, I read in the EXRs with the AlexaV3LogC colorspace and write my QT as "Gamma 1.8". I don't really understand that either :confused:

Thanks in advance and with best regards,
Dziga
*/

I understand you got this working? If so, I’m happy to hear it! :smiley:

We recently added the AlexaV3LogC LUT directly to Draft, which might give you better results:

[code]lut = Draft.LUT.CreateAlexaV3LogC().Inverse()[/code]
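For example, a minimal sketch for LogC footage viewed with an sRGB display LUT, mirroring the sRGB/Cineon snippets earlier in this thread:

[code]#AlexaV3LogC -> linear
lut = Draft.LUT.CreateAlexaV3LogC().Inverse()
lut.Apply( sourceImg )

#linear -> sRGB for the QuickTime
displayLut = Draft.LUT.CreateSRGB()
displayLut.Apply( sourceImg )[/code]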
