Just now, I added the burnin color fields to set custom colors for the burnin (info about the shot).
I'm experiencing issues with the quality of the text when I change the color, and I wonder whether you or I can improve the look.
As you can see, when setting RGBA to 0,0,0,1 I get nice, sharp text, but when changing the RGB values it seems to get jaggy/aliased.
I don't really want to increase the font size to improve the quality, as it takes up too much space in the overall image, especially when rendering smaller than 1080p.
Or is this an encoding issue? I tested it with h264; I will try a high-quality encoding next.
Any suggestion is greatly appreciated. I am willing to share the new interface once it reaches a solid state, if someone is interested (and if I am allowed to).
Are you drawing a shadow as well, or just the main text? (I think just the main text.)
The pink text was written to different x,y coordinates than the black text (on a separate image, then copied to this image?)… I’m thinking it was placed over the center window?
When I zoom in, neither has terribly good quality, but the pink is definitely worse.
Could you send me the draft script? For the extra arguments it will expect, are you using the ones shown in the screen snapshot? (I’m assuming your custom interface simply sends additional arguments to the script?) I’d like to do some testing with the script (probably tomorrow, if you get it to me by then).
One of the things I’d like to test is how the two annotations compare when placed in identical x,y coordinates in identical frames, since the transparency and contrast with the background will have an effect.
I have an idea that I’m wondering if it might help… what happens if you create the annotation larger, and then resize it to the size you’re currently using? Does that improve the aliasing, or make it worse?
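To be concrete, something along these lines is what I have in mind (I'm writing the Draft calls from memory, so treat the exact names and properties as approximate, and the values as placeholders):

# sketch: rasterize the burnin text at 3x the target point size, then scale
# it back down so the renderer has more pixels to anti-alias with
scale = 3
annotationInfo = Draft.AnnotationInfo()
annotationInfo.PointSize = 24 * scale                          # 24 is just an example size
annotationInfo.Color = Draft.ColorRGBA( 1.0, 0.5, 0.5, 1.0 )   # the pink from your test

bigText = Draft.Image.CreateAnnotation( "SHOT_010 v003", annotationInfo )
bigText.Resize( int( bigText.width / scale ), int( bigText.height / scale ) )   # resize in place, as I recall

# ...then composite bigText the same way you composite the text now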
I am not drawing a shadow, text only. As far as I remember, the shadow was very distracting at smaller text sizes.
The pink and black text are composited in exactly the same way, as you will see in the Draft script. I only changed the RGB values in my submission dialog; the position doesn't change.
I just combined two screenshots of the outputs in Photoshop for comparison (cropped the pink one and put it on top of the other).
You are right about my script just adding more arguments that are read by the encoding script. I've attached both. It's a work in progress, so some areas might not be very efficient or well programmed haha. Learning by doing.
I think even in the same frame the pink looked worse, but it might also be how we perceive the color in conjunction with its surroundings.
The idea with resizing is definitely worth a try. Didn't think of it (:. I am going to try that tomorrow though, as I need to go home now.
Thanks! I’ll play with this tomorrow. (Or sooner if the other stuff that’s urgent happens to get done faster than I expect.)
Just to let you know, one of the things I’m going to look at is the two annotations saved as separate images, without compositing, so that I can see what the raw pixels look like with regards to colour and transparency. The reason I was thinking about the same frame is that the aliased portions are, if I recall correctly, semi-transparent, and so the background will have an effect on their appearance.
I tested your idea with resizing. The text looks sharper but doesn't look good in some areas. It also seems I get an overly sharpened edge (a dark edge), as if the radius of a sharpen filter were too large.
Not sure if one can influence this.
I also tested red text encoded with ProRes, but unfortunately it looks just like the h264 output.
If you find something and change one of the scripts, could you mark exactly where you changed things? I'm still working on some other functions in the meantime, so I can combine both afterwards.
I am going to build a burnin with a half-transparent background now and see how different contrasts affect the look of the text.
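Roughly what I have in mind for that background (the Draft calls are from memory, so SetToColor etc. are assumptions on my part):

# sketch: a 50% black bar behind the burnin so the text always sits on a
# consistent, lower-contrast background
barHeight = 40                                              # example value
bar = Draft.Image.CreateImage( sourceImg.width, barHeight )
bar.SetToColor( Draft.ColorRGBA( 0.0, 0.0, 0.0, 0.5 ) )     # assuming a fill call like SetToColor exists
# composite the bar along the bottom of the frame first, then the text on
# top of it, using the same composite calls as the rest of the script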
You are definitely right about the semi-transparent pixels. It was a little dumb that I captured different frames for debugging.
I have another, completely different question. I am using the ‘Get Artist’ and ‘Get Version’ buttons to get the proper information without using Shotgun, because it takes 2 minutes 36 seconds to open the Shotgun login dialog.
Do you think it's a network issue that this takes so long? I wonder if there might be some kind of 150-second timeout for a failing connection or something similar. Once the dialog is initiated, it establishes the connection pretty fast and without errors.
If you have an idea, please let me know.
Aaaaand, is there a way to start a new line in the Tooltips.ini? I have a new dropdown with resize options like fit, fill, etc. and I want to display a tooltip with multiple lines.
I have no idea what could be causing that huge initial delay. I’ve definitely noticed in some of my testing that the initial connection would sometimes take longer, but never on the order of 2+ minutes (I’m talking like maybe 5-10 seconds). It might be a scale issue that we didn’t detect in testing because our Shotgun DB is pretty simplistic.
Out of curiosity, does this delay occur if you switch users after you’ve initially connected? Or is it only on the first user connection of that session? I’ll definitely have a look to see if we’re doing anything weird.
EDIT: I just noticed you mentioned it took that long to open the dialog. So does that mean it takes that long before it even shows anything at all?
Yes, it takes that long to show the login dialog. Just clicking the “Use Shotgun Data…” button causes the wait time.
As soon as the dialog pops up I can login with whatever username and this takes maybe 2 seconds.
So for the tooltips, if you want newlines in them you’ll have to set them directly in the Python code instead of through the Tooltips.ini file, since the ‘\n’ will just get interpreted as text, unfortunately.
To do this, you’ll have to keep a reference to the controls you’re creating, and call their “SetTooltip” function, like so:
priorityLabel = scriptDialog.AddControl( "PriorityLabel", "LabelControl", "Priority", labelWidth, -1 )
priorityLabel.SetTooltip( "This is the first line.\nThis on a new line!" )
You could also do it Windows-style by adding a \r in front of the \n, but Windows understands the ‘\n’ alone just fine, so I never bother.
You’ll also want to remove any entry in the Tooltips.ini file for controls that you set this way, otherwise it’ll just get over-written by whatever’s still in the file.
In the lines after if burnin != "None":, you create a second image (comp) of the desired height and width, and composite the source image onto it.
Why do you shift the image up one pixel?
Unless you want the image shifted by one pixel, this step is unnecessary (and is slowing you down); simply use sourceImg.Composite… instead of comp.Composite…
We’ve updated the names to use “Anchor” instead of “Gravity”… so “CompositeWithPositionAndGravity” should be “CompositeWithPositionAndAnchor”, and “Draft.PositionalGravity.NorthEastGravity” should be “Draft.Anchor.NorthEast”, etc. You can still use the old names, but you’ll get deprecation warnings.
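For example, a line like this (sketching from memory; the variable names are placeholders, and the argument list should match whatever your script already passes):

# old, deprecated naming (still works, but prints deprecation warnings):
# sourceImg.CompositeWithPositionAndGravity( annotation, x, y,
#     Draft.PositionalGravity.NorthEastGravity, Draft.CompositeOperator.OverCompositeOp )

# new naming:
sourceImg.CompositeWithPositionAndAnchor( annotation, x, y,
    Draft.Anchor.NorthEast, Draft.CompositeOperator.OverCompositeOp )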
Font observations:
For testing, I used an image that had a plain white background where the fonts happened to appear, and tested with black, red, and a pink with RGB of (1, 0.5, 0.5). Then I added a line in the code that saved the frame to a PNG file. Zoomed in to 500%, it looks like each of the three has equivalent quality.
I’m guessing the problem must be with the video encoding. Are you able to reproduce the problem when saving to a lossless format?
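If you want to do the same comparison on your end, the line I added was just a frame save along these lines (the variable and path are placeholders):

# write the composited frame to a lossless PNG so the text pixels can be
# inspected without any video-codec compression getting in the way
frame.WriteToFile( "/tmp/burnin_debug.png" )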
Actually, I just opened up the one-frame movie I made, and this particular pink looks fine on it too. Could you send me one of your input images to test with? (I don’t want to use the one above, since it already has text composited on it.)
Thanks, all of you.
That's a good test, and it proves our assumption that the background affects the semi-transparent pixels.
I think we still use an older Draft version; that's why I used CompositeWithPositionAndGravity. I will have a look into the 1-pixel translation, I don't think that's intended (:.
But the two floats in the compositing operation determine a percentage, don't they? Like position x = 0%, position y = 100%? Or does the anchor operation work in pixels?
I may have missed or overlooked documentation on these two values and found them a bit confusing in some cases.
I've attached one frame.
Thank you, Gavin. ProRes and DNxHD looked the same, though it could still be my encoding settings.
Alright, regarding the initial issue, I think you were totally right. I just rendered DPX with a red burnin and the text looks perfectly sharp. It must be the encoding settings that lose all the detail.
Definitely not a Draft issue.
Thanks again!
Another quick question regarding the text: I want to control the text's opacity by typing different alpha values in my submission dialog.
Also, I changed the resolution to 1920x1200 with ‘fit’ to get a letterbox for my HD material, so I can print the burnin information into the bottom letterbox.
The strings marked with orange have identical properties (textInfo in the script).
Interestingly, the alpha value does not affect the display of the text on black, only on top of the source footage. I wonder why that is?
When I have text with RGB = 1 and alpha = 0.3, it should appear as 30% grey even on a black background, if I'm not wrong.
Hm, currently I am using the sourceImg directly for the compositing. Is there a way to change only the alpha of the whole image to 1 without touching RGB?
Otherwise, I would need to create a canvas first and composite the sourceImg on top, but then I need more code for the different reformat types, as I would have to build them manually with compositing.
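To make sure I'm describing that clearly, the canvas variant would be roughly this (again, the exact Draft calls are written from memory, so treat them as assumptions):

# sketch of the canvas-first approach: an opaque black canvas at the output
# resolution, the source footage composited into the middle, burnin drawn on top
canvas = Draft.Image.CreateImage( 1920, 1200 )
canvas.SetToColor( Draft.ColorRGBA( 0.0, 0.0, 0.0, 1.0 ) )   # assuming a fill call like SetToColor exists

# centre the HD frame on the canvas, leaving the letterbox at top and bottom
# (I believe the position floats are fractions of the canvas size)
canvas.CompositeWithPositionAndAnchor( sourceImg, 0.5, 0.5,
    Draft.Anchor.Center, Draft.CompositeOperator.OverCompositeOp )

# ...then composite the text onto 'canvas' instead of onto sourceImg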
The letterbox/black background I used now was created “automatically” because of the reformat with ‘fit’ enabled.
Can I go deep somewhere in a script and modify the background creation when using built-in operations like .Resize?
I hope I expressed myself understandably.