This is based on a very old and primitive approach, from the long-gone days when libplacebo had no native OpenGL support and only basic HDR/SDR conversion. It is not only unnecessarily verbose and clumsy, but it also prevents the use of modern features; most importantly, it regressed in quality when HDR/SDR tone mapping switched to a LUT-based approach (which this old integration cannot currently support). So we only get the bad fallback logic: no good tone mapping, no peak detection, no inverse tone mapping, etc.
This code should be removed entirely, and HDR/SDR conversion etc. integrated into pl_scale where desired. (Which should possibly be renamed to pl_filter or something)
This code should be removed entirely, and HDR/SDR conversion etc. integrated into pl_scale where desired. (Which should possibly be renamed to pl_filter or something)
It doesn't seem like the correct filter to add HDR/SDR conversion to, tbh.
What would you propose? Having more than one pl_renderer-based filter would be redundant. In theory we could try and integrate something based on pl_opengl more directly into the sampler code, but it seems harder than just adding it to a filter.
How easy/hard is it for filters to access the graphics context (e.g. OpenGL device) from the vout?
If we make such filters, how do they interact with e.g. the libplacebo vout which does all of its filtering/processing directly in the vout?
Having separate filters for separate processing steps is possibly very inefficient as it forces an indirection every time
Better is to have "filters" that only expose/set the pl_render_params parameters and forward the desired filtering configuration to the vout, which the vout can apply (directly in vout_placebo, and via a gl_filter in vout_opengl); a rough sketch of this idea follows below
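A minimal sketch of that forwarding idea, for illustration only: apart from pl_render_params (from <libplacebo/renderer.h>) and the core VLC filter types, everything here is a hypothetical assumption (the placebo_cfg struct and the picture_AttachPlaceboCfg helper do not exist); it only shows the shape of a filter that does no pixel work itself and merely annotates pictures for the vout.

```c
/* Hypothetical sketch: a pass-through "filter" that only attaches the
 * desired libplacebo configuration to each picture. Only pl_render_params
 * and the core VLC types are real; placebo_cfg and
 * picture_AttachPlaceboCfg() are illustrative assumptions. */
#include <vlc_common.h>
#include <vlc_filter.h>
#include <vlc_picture.h>
#include <libplacebo/renderer.h>

struct placebo_cfg {
    struct pl_render_params params;  /* the settings the user asked for */
};

static picture_t *Filter(filter_t *filter, picture_t *pic)
{
    struct placebo_cfg *cfg = filter->p_sys;

    /* Hypothetical helper: stash the settings as picture side data so
     * the vout (vout_placebo directly, or a gl_filter in vout_opengl)
     * can apply them during its single rendering pass. */
    picture_AttachPlaceboCfg(pic, cfg);

    return pic;  /* pass-through: no GPU or CPU processing here */
}
```

The point of this shape is that the actual GPU work happens exactly once, in the vout, so chaining several such filters costs nothing beyond merging their settings.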
How easy/hard is it for filters to access the graphics context (e.g. OpenGL device) from the vout?
You cannot. But you can create additional graphics contexts from the same decoder device.
If we make such filters, how do they interact with e.g. the libplacebo vout which does all of its filtering/processing directly in the vout?
You can have a specific libplacebo vout+converter, and link the metadata + operations from multiple consecutive libplacebo filters so that they are really applied by the converter or the vout (or both, if you can have preprocessing steps independent of the screen dimensions) in the end. However, filters currently cannot link their output to the rendering size.
Having separate filters for separate processing steps is possibly very inefficient as it forces an indirection every time
What do you mean?
Better is to have "filters" that only expose/set the pl_render_params parameters and forward the desired filtering configuration to the vout, which the vout can apply (directly in vout_placebo, and via a gl_filter in vout_opengl)
You cannot. But you can create additional graphics contexts from the same decoder device.
That feels like a major roadblock, because for OpenGL at least you need to create the graphics context from the windowing system, which requires access to the window, which in turn requires access to the vout. Isn't that a major data dependency issue? This is why I think it's better for vouts to do the GPU filtering.
Passing GPU frames between filters also sounds like a nightmare (synchronization? cross-platform? dmabuf only exists on Linux, what do you do on Windows?)
That feels like a major roadblock, because for OpenGL at least you need to create the graphics context from the windowing system, which requires access to the window, which in turn requires access to the vout. Isn't that a major data dependency issue?
You don't. :)
Tying filters with outputs is a big no-no as it breaks transcoding.
Exactly this ^
It motivated the current design. You can have an offscreen OpenGL implementation using vlc_gl_CreateOffscreen, and it will produce a picture_t on swap.
You can also provide your own vlc_gl_t offscreen implementation with libplacebo; that will require adding a converter to serialize the output into a real buffer if you want to queue operations together.
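A rough sketch of that offscreen path, assuming the VLC 4.0 development APIs: vlc_gl_CreateOffscreen is named above, but the exact signature used here is an assumption (check include/vlc_opengl.h for the real one), as is vlc_gl_SwapOffscreen; the point is only the control flow, with no window anywhere in sight.

```c
/* Sketch under assumptions: create an offscreen GL context from the
 * decoder device (no window/vout involved), render into it, and get a
 * picture_t back on swap. Signatures are approximate. */
#include <vlc_common.h>
#include <vlc_opengl.h>
#include <vlc_picture.h>

static picture_t *RenderOneFrame(vlc_object_t *obj,
                                 struct vlc_decoder_device *dec_dev,
                                 unsigned width, unsigned height)
{
    /* Assumed signature; the context comes from the decoder device,
     * not from the windowing system. */
    vlc_gl_t *gl = vlc_gl_CreateOffscreen(obj, dec_dev, width, height,
                                          VLC_OPENGL, NULL);
    if (gl == NULL)
        return NULL;

    if (vlc_gl_MakeCurrent(gl) != VLC_SUCCESS)
        return NULL;  /* context cleanup omitted for brevity */

    /* ... issue OpenGL / libplacebo draw calls here ... */

    vlc_gl_ReleaseCurrent(gl);

    /* On offscreen implementations, swap does not present to a window:
     * it hands back a picture_t that continues down the chain. */
    return vlc_gl_SwapOffscreen(gl);
}
```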
Passing GPU frames between filters also sounds like a nightmare (synchronization? cross-platform? dmabuf only exists on Linux,
It is, but there's not really any other way to support decoding/encoding/filtering use cases at the same time.
what do you do on Windows?
Currently, I don't, because I didn't have the time to write this and it's clearly not urgent for 4.0 compared to, e.g., a proper wgl_dcomp implementation and input decoder fixes. But ANGLE will provide DXGI textures as output to OpenGL filtering. It's somewhat possible to keep using the current windowing plugins as source by writing an indirect rendering implementation, but after all those years I secretly hoped we had moved forward to a more resourceful (pun intended) usage of OpenGL, especially since it still breaks on transcoding scenarios. Trying to get the vout context for filtering is a poor indirect rendering abstraction which leaks onto the public vout API and filter API and ties them together, which is really not great.
Do I understand it correctly that there would be no opposition to the following design?
Libplacebo options moved to video filters that don't perform actual filtering but merely send the desired settings downstream to the vout (similar to how brightness/contrast/etc. are handled currently) (edit: nvm, seems the adjust/mixer stuff is always done on the CPU?)
Write a common helper function to translate from these settings to pl_render_params (see the sketch after this list)
Have the opengl vout apply these settings, if needed, via what's essentially the same pl_scale code as currently (but with the extra pl_render_params plumbed in)
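For the helper in the second point, a minimal sketch against libplacebo's public API (<libplacebo/renderer.h>): the pl_* symbols are real libplacebo defaults, while vlc_placebo_settings and its fields are hypothetical stand-ins for whatever module options get defined.

```c
/* Sketch of a shared settings -> pl_render_params translation helper.
 * vlc_placebo_settings is hypothetical; the pl_* names are real
 * libplacebo API. */
#include <stdbool.h>
#include <libplacebo/renderer.h>

struct vlc_placebo_settings {
    bool deband;       /* hypothetical option values */
    bool dither;
    bool peak_detect;
};

static void
vlc_placebo_GetRenderParams(const struct vlc_placebo_settings *s,
                            struct pl_render_params *out)
{
    /* Start from libplacebo's defaults, then enable or disable the
     * optional stages by pointing at (or clearing) their default
     * parameter structs. */
    *out = pl_render_default_params;
    out->deband_params      = s->deband      ? &pl_deband_default_params      : NULL;
    out->dither_params      = s->dither      ? &pl_dither_default_params      : NULL;
    out->peak_detect_params = s->peak_detect ? &pl_peak_detect_default_params : NULL;
}
```

The vout side would then pass the resulting struct to pl_render_image(), which is exactly where the extra pl_render_params would get plumbed into the existing pl_scale code.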
As I already wrote, you can't tie filters and outputs.
I suppose you're referring to the VDPAU case (or something that took its inspiration from the VDPAU case). Note that the fake filters are all tied to opaque picture formats. This ensures that all modules involved agree on and understand the layout of the filter metadata that's attached to each picture. You can't do this with regular pixel formats or with the existing (non-placebo) opaque formats.
With that said, conceptually, you could introduce an opaque placebo format and do the filtering between a conversion filter to that placebo format and the placebo vout.
But in practice, I don't think that would actually work with the current design assumptions of the video output core. So far, VLC has always filtered then converted pictures, rather than converted then filtered them. This was originally so that filters would operate on planar YUV picture buffers before conversion to whatever the output needs, say packed RGB.
Do I understand it correctly that there would be no opposition to the following design?
Libplacebo options moved to video filters that don't perform actual filtering but merely send the desired settings downstream to the vout (similar to how brightness/contrast/etc. are handled currently) (edit: nvm, seems the adjust/mixer stuff is always done on the CPU?)
Write a common helper function to translate from these settings to pl_render_params
Have the opengl vout apply these settings, if needed, via what's essentially the same pl_scale code as currently (but with the extra pl_render_params plumbed in)
You need two renderers: a vout component somehow (display, gl filter, interop, etc., but interop won't work without !1021 (merged)) and a converter for every other use case where libplacebo is not at the end of the chain (other filters after libplacebo filters, transcode). Apart from this difference, I agree with the design and won't even have anything to say, since it correctly separates libplacebo from the rest.
Instead of removing the code, I opted to fix some of the most blatant issues for the time being, in !2677 (merged). That basically resolves the major issue here; the remainder is redundant with #27283.