Enums in a response model with no nullability or default value make the API very fragile: each extension to the enum breaks the API for some clients. However, a lot of enums actually do have an unknown value, which should be used as the default. This sets a default on all non-nullable model properties whose enum has an Unknown member in 10.10, except MediaStream.VideoRangeType, which is refactored in #13277.
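A minimal sketch of the pattern, using hypothetical `ExampleKind`/`ExampleResponse` names rather than actual Jellyfin model types:

```csharp
public enum ExampleKind
{
    // Fallback member: a client that does not recognize a newly added
    // value can map it here instead of failing to deserialize.
    Unknown = 0,
    Foo = 1,
    Bar = 2,
}

public class ExampleResponse
{
    // Non-nullable enum property with an explicit default, so extending
    // ExampleKind later does not break existing clients.
    public ExampleKind Kind { get; set; } = ExampleKind.Unknown;
}
```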
(cherry picked from commit 4a4fef830eccf0629d7cf955126f0cd78867e0ee)
The setters of the Track class are not intended for such use cases and have the unwanted side effect of changing valid values. We should never use them; treat all fields from the ATL.Track class as read-only.
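A short sketch of the intended read-only usage, assuming ATL's public `Track` API (the file path is illustrative):

```csharp
using ATL;

var track = new Track("/music/album/01 - song.flac");

// Read-only access is fine:
string title = track.Title;
string artist = track.Artist;

// Avoid the setters entirely (e.g. track.Title = "..."): they mutate
// tag state as a side effect and can clobber valid values.
```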
This seems like a simple and safe (small) win.
Automatically invalidating cache entries after a while would be even better
(or not having a cache at all), but such changes are too big for a point release IMO.
* Fix image encoding concurrency limit
The FFmpeg image extractor is currently configured with a resource pool size that always equals twice the number of CPU cores, which is somewhat excessive. Make the default equal to the core count instead, and respect the `ParallelImageEncodingLimit` option.
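A rough sketch of the sizing rule, assuming a hypothetical `config` object that exposes the option (0 meaning automatic):

```csharp
// Respect ParallelImageEncodingLimit when set; otherwise default to the
// core count rather than twice the core count.
int configuredLimit = config.ParallelImageEncodingLimit; // hypothetical accessor, 0 = auto
int poolSize = configuredLimit > 0
    ? configuredLimit
    : Environment.ProcessorCount;

// Throttle concurrent image encodes to the computed pool size.
using var imageEncodingThrottle = new SemaphoreSlim(poolSize, poolSize);
```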
* Fix code style
* Check null values in unit tests
Although the number type is nullable in ATL's type definitions, the library might still normalize all unknown values to 0, which makes a null check alone insufficient. Fall back to ffprobe results when the number is 0 as well.
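A sketch of the resulting check, with hypothetical `track` and `ffprobeResult` sources:

```csharp
// ATL declares the field as nullable, but unknown values may come back
// as 0, so treat both null and 0 as "unknown" and use ffprobe instead.
int? atlDiscNumber = track.DiscNumber;
int? discNumber = atlDiscNumber is null or 0
    ? ffprobeResult.DiscNumber
    : atlDiscNumber;
```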
We still use `Subnet.Contains` a lot, but it does not handle IPv4-mapped IPv6 addresses at all. This was partially fixed by #12094 for local network checking, but such addresses do not only appear on the LAN.
Also make all local network checking use the IsInLocalNetwork method instead of just performing `Subnet.Contains`, which is not accurate; see the sketch below.
Filter out all link-local addresses for external interface matching.
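A sketch of the normalization involved, assuming a `Subnet` type with a `Contains(IPAddress)` method like the one referenced above (this is not the exact `IsInLocalNetwork` implementation):

```csharp
using System.Net;

static bool ContainsNormalized(Subnet subnet, IPAddress address)
{
    // Unwrap IPv4-mapped IPv6 addresses (::ffff:a.b.c.d) so they match
    // IPv4 subnets; a bare Subnet.Contains call misses these.
    if (address.IsIPv4MappedToIPv6)
    {
        address = address.MapToIPv4();
    }

    // Link-local addresses (169.254.0.0/16, fe80::/10) should be
    // filtered out separately when matching external interfaces.
    return subnet.Contains(address);
}
```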
When writing an image to the disk, we use the completion of the async task as a signal indicating the completion of a write operation. However, this approach may not be entirely accurate, as the operating system can optimize IO operations by writing data to an intermediate cache instead of directly to the disk before completing the operation. This optimization can lead to a data race for our scanner, as subsequent tasks such as blurhash computation may attempt to read a file that has not yet been flushed from the volatile cache. Consequently, the data within the file becomes invalid, causing the blurhash computation task to fail.
Use WriteThrough mode to ensure the data is actually on disk before returning, which resolves this issue.
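A sketch of the write path with write-through enabled (the path, buffer size, and payload are illustrative; `FileOptions.WriteThrough` is the standard .NET flag):

```csharp
// WriteThrough means the OS does not report the write as complete until
// the data has been pushed through the intermediate cache to the device,
// so a subsequent reader (e.g. blurhash computation) sees valid data.
await using var stream = new FileStream(
    imagePath,
    FileMode.Create,
    FileAccess.Write,
    FileShare.None,
    bufferSize: 4096,
    FileOptions.WriteThrough | FileOptions.Asynchronous);

await stream.WriteAsync(imageBytes);
```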