- Feb 14, 2024
-
-
We should not assume that the depth equals the bits per pixel. The RFB spec says that bpp must be either 8, 16, or 32 and that depth must be less than or equal to bpp.
-
libvnc does not support decoding 8 bpp + tight encoding, so upgrade the requested color depth to 16 to prevent libvnc from returning an error when we try to decode.
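For illustration, a minimal sketch of how a requested depth could be mapped to a wire bpp under these constraints; `BitsPerPixelForDepth` is a hypothetical helper, not the module's actual code:

```c
#include <stdint.h>

/* Hypothetical helper: RFB allows bpp of 8, 16 or 32 with depth <= bpp,
 * and 8 bpp is bumped to 16 because libvnc cannot decode 8 bpp
 * tight-encoded rectangles. */
static uint8_t BitsPerPixelForDepth(int requested_depth)
{
    if (requested_depth <= 16)
        return 16; /* covers 8 (upgraded) and 15/16-bit depths */
    return 32;     /* 24-bit depth rides in a 32 bpp pixel */
}
```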
-
In the unlikely event that the server's native pixel format is not true color, the true-color flag will be received as zero, so we cannot assume that it has already been set. Since we only support true-color pixel formats, we need to ensure the flag is set ourselves.
-
Outside of keeping bits per pixel consistent, we never made any attempt to match the server's native pixel format in the first place; we always requested our own hardcoded format for any color depth. We might as well therefore also always ask the server for LE byte order (regardless of client endianness) and code around that. This change has been tested with LE and BE clients. Prior to this commit, colors were wrong in the case of (e.g.) LE <-> LE clients and servers.
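Taken together, the requested format could look roughly like this sketch against libvncclient's `rfbClient`/`rfbPixelFormat`; the exact masks and shifts for the 16- and 32-bit cases are assumptions, not necessarily the module's real values:

```c
#include <rfb/rfbclient.h>

/* Sketch only: force a true-color, little-endian pixel format regardless of
 * what the server advertises, then push it with SetFormatAndEncodings(). */
static void RequestTrueColorLEFormat(rfbClient *cl, uint8_t bpp)
{
    cl->format.bitsPerPixel = bpp;              /* 16 or 32, per the rules above */
    cl->format.depth        = (bpp == 16) ? 16 : 24;
    cl->format.bigEndian    = 0;                /* always ask for LE, convert locally */
    cl->format.trueColour   = 1;                /* never rely on the server's flag */

    if (bpp == 16) {                            /* assumed RGB565 layout */
        cl->format.redMax   = 31; cl->format.greenMax   = 63; cl->format.blueMax   = 31;
        cl->format.redShift = 11; cl->format.greenShift = 5;  cl->format.blueShift = 0;
    } else {                                    /* assumed 8:8:8 in 32 bits */
        cl->format.redMax   = cl->format.greenMax   = cl->format.blueMax   = 255;
        cl->format.redShift = 16; cl->format.greenShift = 8;  cl->format.blueShift = 0;
    }
    SetFormatAndEncodings(cl);                  /* send SetPixelFormat + SetEncodings */
}
```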
-
Rewrite the first patch on top of v1.3.1 and simplify it, since we always need the same name for the static zlib target. Remove the second patch, since there is now a ZLIB_BUILD_EXAMPLES CMake switch to disable the examples.
-
- Feb 12, 2024
-
-
Signed-off-by: Claudio Cambra <developer@claudiocambra.com>
-
Signed-off-by: Claudio Cambra <developer@claudiocambra.com>
-
- Feb 11, 2024
-
-
Changelog: https://github.com/taglib/taglib/releases/tag/v2.0
`VLC_PATCHED_TAGLIB_ID3V2_READSTYLE` removed; see https://github.com/taglib/taglib/commit/c13a42021a78038d5a72c75645abf55261ed833e
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
Rather than the attached format used for cropping. Ultimately we should assert that the incoming format is clean.
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
The value was 0 for text regions; no need to force it again.
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
Rather than the attached format used for cropping.
-
Steve Lhomme authored
We already use the picture pitch. We map the picture to a texture that is rendered using the buffer dimensions. The other format is mostly used for positioning.
-
Steve Lhomme authored
We map the picture to a texture that is rendered using the buffer dimensions. The other format is mostly used for positioning.
-
Steve Lhomme authored
The other format is mostly used for positioning.
-
Steve Lhomme authored
We map the picture to a texture that is rendered using the buffer dimensions. The other format is mostly used for positioning.
-
Steve Lhomme authored
We map the picture to a texture that is rendered with the appropriate colorimetry. The other format is mostly used for positioning.
-
Signed-off-by: Claudio Cambra <developer@claudiocambra.com>
-
- Feb 10, 2024
-
-
Holding this reference, which breaks encapsulation, can now easily be avoided by passing the sout context to the extraction function.
-
WebVTT and plain subtitle tracks can now be exposed in HLS. An extra media rendition is created for each subtitle ES, and the HLS server creates max-sized VTT segments following the pace of the other ES. WebVTT segmentation in scenarios where no subtitle is output for a while is handled using the PCR of the stream. Having reliable clock info helps a lot in creating empty subtitle segments at the correct time.
-
-
The subtitle segmenter is a meta-muxer used to create subtitle segments. It handles subtitle frame splitting, re-creation of the subtitle muxer, and empty segment creation when no data is available.
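As a rough sketch of the PCR-driven part of that logic (the `subtitle_segmenter_t` state and `SegmentMustClose` helper below are hypothetical names, only meant to show why a reliable clock matters for emitting empty segments on time):

```c
#include <stdbool.h>
#include <stdint.h>

typedef int64_t vlc_tick_t;     /* assumption: VLC-style microsecond ticks */

/* Hypothetical segmenter state. */
typedef struct {
    vlc_tick_t segment_start;   /* stream time at which the current segment began */
    vlc_tick_t segment_length;  /* target maximum segment duration */
    bool       has_data;        /* did any subtitle frame land in this segment? */
} subtitle_segmenter_t;

/* Called on every PCR update: decide whether the current segment must be
 * closed, and whether it should go out as an empty segment because no
 * subtitle was output during its whole duration. */
static bool SegmentMustClose(const subtitle_segmenter_t *s, vlc_tick_t pcr,
                             bool *out_empty)
{
    if (pcr - s->segment_start < s->segment_length)
        return false;           /* still inside the current segment */
    *out_empty = !s->has_data;  /* a reliable PCR lets empty VTT segments go out on time */
    return true;
}
```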
-
This will be used by HLS to properly segment subtitles.
-
Those will be used as a delimitation hint from muxers. WebVTT, TS and mp4frag will use them eventually.
-
The segmentation was previously done in SetPCR to output segments at precise stream times instead of relying on the quantity of data output by the muxers. This had two main issues:
- SetPCR does not report errors, while the segment creation phase can fail at multiple stages.
- The calculated stream time did not actually reflect the quantity of data that left the muxers. Muxers can be delayed by mux caching and are allowed to have an internal queue. The previous algorithm was outputting smaller segments due to the muxer delay.
These issues were fine with TS as a single muxer, but with the future introduction of VTT/FMP4 segmentation, which both make choices based on segment times, we needed to change the segmenting strategy. Segmentation is now done at muxer output; bigger segments are output when all the muxers have sent the right amount of data.
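The new boundary decision can be illustrated with a simplified sketch; `muxer_progress_t` and `AllMuxersReachedBoundary` are made-up names that only capture the "cut at muxer output, once every muxer has delivered its share" idea:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-muxer accounting, tracked at muxer *output* rather than
 * guessed from SetPCR. */
typedef struct {
    size_t bytes_out;       /* bytes actually written out by this muxer */
    size_t segment_target;  /* bytes expected from it for the current segment */
} muxer_progress_t;

/* A segment boundary is only cut once every muxer has flushed the amount of
 * data expected for the current segment, so mux caching and internal queues
 * no longer produce undersized segments. */
static bool AllMuxersReachedBoundary(const muxer_progress_t *muxers, size_t count)
{
    for (size_t i = 0; i < count; i++)
        if (muxers[i].bytes_out < muxers[i].segment_target)
            return false;
    return true;
}
```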
-