* Updated to net7.0
* Updated GitHub Actions (GA) to .NET 7
* Updated System.IO.Abstractions to use the New() factory methods.
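A minimal sketch of what the factory change looks like, assuming the 19.x-style System.IO.Abstractions API where the older FromFileName-style factory calls are replaced with New(); the paths here are illustrative:

```csharp
using System.IO.Abstractions;

// Resolve file/directory wrappers through the injected IFileSystem, using the New() factories.
IFileSystem fileSystem = new FileSystem();
var fileInfo = fileSystem.FileInfo.New("/manga/series/chapter 1.cbz");
var dirInfo = fileSystem.DirectoryInfo.New("/manga/series");
```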
* Converted Regex usage in the Parser to source generators.
* Updated more regex to source generators.
* Enabled nullability and made more regex changes throughout the codebase.
* Parser is 100% GeneratedRegexified
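For reference, the .NET 7 GeneratedRegex pattern the Parser conversion refers to looks roughly like this; the class name and pattern below are illustrative, not the actual Parser code:

```csharp
using System.Text.RegularExpressions;

public static partial class Parser
{
    // The source generator emits the regex implementation at compile time.
    [GeneratedRegex(@"vol(ume)?\.?\s?(?<Volume>\d+)", RegexOptions.IgnoreCase)]
    private static partial Regex VolumeRegex();

    public static string? ParseVolume(string filename)
    {
        var match = VolumeRegex().Match(filename);
        return match.Success ? match.Groups["Volume"].Value : null;
    }
}
```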
* Lots of nullability code
* Enabled nullability for all repositories.
* Fixed another unit test
* Refactored some code around and took care of some todos.
* Updated code for nullability and cleaned up methods that aren't used anymore. Refactored all uses of Parser.Normalize() to use the new extension
* More nullability exercises. 500 warnings to go.
* Fixed a bug where custom file uploads for entities wouldn't save as WebP.
* Nullability is done for all DTOs
* Fixed all unit tests and nullability for the project. Only OPDS is left which will be done with an upcoming OPDS enhancement.
* Use localization in book service after validating
* Code smells
* Switched to a preview build of Swashbuckle for .NET 7 support
* Fixed up merge issues
* Disable 'emulate comic book' when on the single page reader
* Fixed a regression where the double page renderer wouldn't lay out the images correctly
* Updated to a Swashbuckle build that supports .NET 7
* Fixed a bad GA action
* Some code cleanup
* More code smells
* Took care of most of the nullability issues
* Fixed a broken test caused by more than one test running in parallel
* I'm really not sure why the unit tests are failing or are so extremely slow on .net 7
* Updated all dependencies
* Fixed up the build and removed the hardcoded framework from the build scripts. (This merge removes the Regex source generators.) Unit tests are completely busted.
* Unit tests and code cleanup. Needs shakeout now.
* Adjusted the Series model since a few fields are non-nullable. Removed dead imports in the project.
* Refactored to use Builder pattern for all unit tests.
* Switched nullability down to warnings. It wasn't possible to go further due to constraint issues in the DB migration.
* Introduced a new claim on the token to carry the UserId as well as the Username, allowing reduced DB calls in many places. All users will need to re-authenticate.
Introduced UTC dates throughout the application. They are not exposed in all DTOs yet; that will come later when we fully switch over. For now, UTC dates will be updated alongside timezone-specific dates.
Refactored the get-progress/progress API to be 50% faster by reducing how much data is loaded by the query.
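A minimal sketch (not the actual TokenService) of what carrying both the username and user id in the token looks like; the user object and the claim types chosen here are illustrative:

```csharp
using System.Collections.Generic;
using System.Security.Claims;

var user = new { Id = 42, UserName = "demo" }; // stand-in for the real user entity

var claims = new List<Claim>
{
    new(ClaimTypes.Name, user.UserName),               // existing username claim
    new(ClaimTypes.NameIdentifier, user.Id.ToString()) // new: user id, so handlers can skip a DB lookup
};
var identity = new ClaimsIdentity(claims, "jwt");

// Controllers can then read the id straight off the claims principal,
// e.g. User.FindFirst(ClaimTypes.NameIdentifier)?.Value, instead of resolving the user by name.
```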
* Sped up the following APIs:
collection/search, download/bookmarks, reader/bookmark-info, recommended/quick-reads, recommended/quick-catchup-reads, recommended/highly-rated, recommended/more-in, recommended/rediscover, want-to-read/
* Added a migration to sync all dates with their new UTC counterpart.
* Added LastReadingProgressUtc onto ChapterDto for some browsing APIs, but not all.
Added LastReadingProgressUtc to reading list items.
Refactored the migration to run raw SQL, which is much faster.
* Fixed the unit tests
* Fixed an issue with AutoMapper which was causing the progress page number to not get sent to the UI
* series/volume now includes chapter last reading progress
* Added filesize and library name on reading list item dto for CDisplayEx.
* Some minor code cleanup
* Forgot to fill a field
* Recreated Kavita logging with Serilog instead of the default. This needs to be moved out of appsettings now, to allow the auto updater to patch.
* Refactored the code to be completely configured via code rather than appsettings.json. This is a required step for auto updating.
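A hedged sketch of what code-only Serilog configuration looks like (sink choices and the log path are illustrative, not the exact Kavita setup):

```csharp
using Serilog;
using Serilog.Events;

// Configure Serilog entirely in code so the auto updater never needs to patch appsettings.json.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    .WriteTo.Console()
    .WriteTo.File("config/logs/kavita.log", rollingInterval: RollingInterval.Day)
    .CreateLogger();
```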
* Added the ability to send logs directly to the UI, only for users on the log route. Stopping implementation here, as the Alerts page will handle the rest.
* Fixed up the backup service to not rely on Config from appsettings.json
* Tweaked the Logging levels available
* Moved everything over to File-scoped namespaces
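For clarity, this is the C# 10 file-scoped namespace form the move refers to; the namespace and class names below are illustrative:

```csharp
// A file-scoped namespace applies to the whole file and removes one level of
// indentation compared to the old block-scoped form (namespace API.Services { ... }).
namespace API.Services;

public class BookService
{
    // ...
}
```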
* Code cleanup; removed an old migration and changed debug logging so it doesn't print sensitive DB data
* Removed dead code
* Staging the code for the new scan loop.
* Implemented a basic idea of changes on drives triggering the scan loop. Issues: 1. Scan by folder does not work, 2. The queuing system is very hacky and needs a separate thread, 3. Performance degradation could be very real.
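A rough sketch of the drive-watching idea: one FileSystemWatcher per library folder, with every event funneled into a scan queue. QueueScan and the folder path are illustrative stand-ins; the real LibraryWatcher adds the debouncing/queuing that the "hacky queuing" note above refers to.

```csharp
using System;
using System.IO;

void QueueScan(string path) => Console.WriteLine($"Queue scan for {path}"); // stand-in for the real queue

var watcher = new FileSystemWatcher("/manga")
{
    IncludeSubdirectories = true,
    NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName | NotifyFilters.LastWrite
};
watcher.Changed += (_, e) => QueueScan(e.FullPath);
watcher.Created += (_, e) => QueueScan(e.FullPath);
watcher.Deleted += (_, e) => QueueScan(e.FullPath);
watcher.EnableRaisingEvents = true;
```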
* Started writing unit test for new loop code
* Implemented a basic method to scan a folder path with ignore support (not implemented, code in place)
* Added some code to the parser to build out the idea of processing series in batches based on some top level folder.
* Scan Series now uses the new code (folder-based parsing) and handles the LocalizedSeries issue.
* Got library scan working with the new folder-based scan loop. Updated code to set FolderPath (for improved scan times and partial scan support).
* Wrote some notes on update library scan loop.
* Removed migration for merge
* Reapplied the SeriesFolder migration after merge
* Refactored a check that used multiple db calls into one.
* Made lots of progress on ignore support, but there is some confusion about the underlying library. Ticket created. On hold till then.
* Updated Scan Library and Scan Series to exit early if no changes are on the underlying folders that need to be scanned.
* Implemented the ability to have .kavitaignore files within your directories and Kavita will parse them and ignore files and directories based on rules within them.
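A hedged sketch of the .kavitaignore concept: read the rules, convert simple globs to regexes, and skip matching paths. The actual implementation may delegate to a dedicated ignore-rule library (the "underlying library" mentioned above); this only shows the idea.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

static bool IsIgnored(string relativePath, IEnumerable<string> rules)
{
    var normalized = relativePath.Replace('\\', '/');
    foreach (var rule in rules.Select(r => r.Trim()))
    {
        if (string.IsNullOrEmpty(rule) || rule.StartsWith("#")) continue; // blank lines and comments
        var pattern = "^" + Regex.Escape(rule)
            .Replace(@"\*\*", ".*")    // ** crosses directory boundaries
            .Replace(@"\*", "[^/]*")   // *  stays within one directory
            + "$";
        if (Regex.IsMatch(normalized, pattern)) return true;
    }
    return false;
}

var rules = File.Exists(".kavitaignore") ? File.ReadAllLines(".kavitaignore") : Array.Empty<string>();
Console.WriteLine(IsIgnored("Extras/omake.cbz", rules));
```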
* Fixed an issue where nested ignore files wouldn't stack with higher-level ignores
* Wrote out some basic code that showcases how we can scan a series or library based on file events from the underlying system. Very buggy; needs lots of edge case testing, logging, and duplication checking.
* Things are working kinda. I'm getting lost in my own code and complexity. I'm not sure it's worth it.
* Refactored ScanFiles out to Directory Service.
* Refactored more code out to keep the code clean.
* More unit tests
* Refactored the signature of ParsedSeries to use IList. Started writing unit tests and reworked the UpdateLibrary to work how it used to with new scan loop code (note: using async update library/series does not work).
* Fixed the bug where processSeriesInfos was being invoked twice per series and made the code work very similarly to the old code (except loose leaf files don't work) but with folder-based scanning.
* Prep for unit tests (updating broken ones with new implementations)
* Just some notes. Not sure I want to finish this work.
* Refactored the LibraryWatcher with some comments and state variables.
* Undid the migrations in case I don't move forward with this branch
* Started to clean the code and prepare for finishing this work.
* Fixed a bad merge
* Updated signatures to cleanup the code and commit to the new strategy for scanning.
* Swapped out the code with async processing of series on a small library
* The new scan loop is working in both Sync and Async methods. The code is slow and not optimized. This represents a good point to start polling and applying optimizations.
* Refactored UpdateSeries out of Scanner and into a dedicated file.
* Refactored how ProcessTasks are awaited to allow more async
* Fixed an issue where the side nav item wouldn't show the correct highlight, and migrated it to OnPush
* Moved where we start the stopwatch to encapsulate the full scan
* Cleaned up SignalR events to report correctly (still needs a redesign)
* Remove the "remove" code until I figure it out
* Put in extremely expensive series deletion code for library scan.
* Have Genre and Tag update the DB immediately to avoid dup issues
* Taking a break
* Moving to a lock with People was successful. Need to apply to others.
* Refactored code for series-level, tag, and genre updates with the new locking strategy.
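A simplified sketch of the locking strategy for shared entities (People, Genres, Tags): serialize "look up, then create if missing" so series processed in parallel can't insert duplicates. The cache and create shapes are illustrative stand-ins; per the note above, the real code also commits new Genres/Tags to the DB immediately.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class ParallelSafeCache
{
    private static readonly SemaphoreSlim EntityLock = new(1, 1);

    public static async Task<T> GetOrAddAsync<T>(IDictionary<string, T> cache, string normalizedName, Func<T> create)
    {
        await EntityLock.WaitAsync();
        try
        {
            // Only one series-processing task can run the check-then-insert at a time.
            if (cache.TryGetValue(normalizedName, out var existing)) return existing;
            var created = create();
            cache[normalizedName] = created;
            return created;
        }
        finally
        {
            EntityLock.Release();
        }
    }
}
```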
* New scan loop works. Next up optimization
* Swapped out the Kavita logo for an SVG for a faster load
* Refactored metadata updates to occur when the series are being updated.
* Code cleanup
* Added a new type of generic message (Info) to inform the user.
* Code cleanup
* Implemented an optimization which prevents any I/O (other than an attribute lookup) for Library/Series Scan. This can bring a recently updated library on network storage (650 series) to fully process in 2 seconds.
Fixed a bug where File Analysis was running every time for each non-epub file.
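A minimal sketch of that skip check, assuming NTFS-like semantics where a folder's LastWriteTime changes when its direct contents change (the non-NTFS variant noted further down differs); the series fields in the usage comment are illustrative:

```csharp
using System;
using System.IO;

static bool HasFolderChangedSince(string folderPath, DateTime lastScannedUtc)
{
    // A single attribute lookup: no directory enumeration and no file opens.
    return new DirectoryInfo(folderPath).LastWriteTimeUtc > lastScannedUtc;
}

// Usage: if (!HasFolderChangedSince(seriesFolder, series.LastFolderScanned)) skip the series entirely.
```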
* Fixed ARM x64 builds not being able to view PDF cover images due to a bad update in DocNet.
* Some code cleanup
* Added experimental signalr update code to have a more natural refresh of library-detail page
* Hooked in ability to send new series events to UI
* Moved all scan (file scan only) tasks into the Scan Queue. Scheduled ScanLibraries will now check if any existing task is running and reschedule for 3 hours later (10 minutes for scan series).
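A hedged sketch of the reschedule-on-conflict behavior, assuming the Hangfire-style scheduler the project uses; the ScannerService stub and the running-scan check are stand-ins for the real types:

```csharp
using System;
using Hangfire;

var libraryId = 1;
var scanRunning = false; // stand-in for "is any scan task currently executing?"

if (scanRunning)
{
    // Another scan is in flight: push this request out 3 hours instead of queuing a duplicate.
    BackgroundJob.Schedule<ScannerService>(s => s.ScanLibrary(libraryId), TimeSpan.FromHours(3));
}
else
{
    BackgroundJob.Enqueue<ScannerService>(s => s.ScanLibrary(libraryId));
}

public class ScannerService
{
    public void ScanLibrary(int libraryId) { /* the real scan work */ }
}
```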
* Implemented the info event in the events widget and added a clear all button to dismiss all infos and errors. Added --event-widget-info-bg-color
* Remove --drawer-background-color since it's not used
* When a new series is added, inject it directly into the view.
* Some debug code cleanup
* Fixed up the unit tests
* Ensure all config directories exist on startup
* Disabled Library Watching (that will go in next build)
* Ensure update for series is admin only
* Lots of code changes, scan series kinda works, specials are splitting, optimizations are failing. Demotivated on this work again.
* Removed SeriesFolder migration
* Added the SeriesFolder migration
* Added a new pipe for dates so we can provide some nicer defaults. Added folder path to the series detail.
* The scan optimizations now work for NTFS systems.
* Removed a TODO
* Migrated all the times to use DateTime.Now and not Utc.
* Refactored some repo calls to use the includes flag pattern
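A sketch of the "includes flag" pattern for repository calls: one query method, with callers opting into related data via flags. The entity shapes and flag names below are simplified stand-ins, not the project's actual definitions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

[Flags]
public enum SeriesIncludes
{
    None     = 0,
    Volumes  = 1 << 0,
    Metadata = 1 << 1
}

public class Series
{
    public int Id { get; set; }
    public List<Volume> Volumes { get; set; } = new();
    public SeriesMetadata? Metadata { get; set; }
}
public class Volume { public int Id { get; set; } }
public class SeriesMetadata { public int Id { get; set; } }

public static class SeriesQueryableExtensions
{
    // Applies EF Core Include() calls only for the relations the caller asked for.
    public static IQueryable<Series> Includes(this IQueryable<Series> query, SeriesIncludes includes)
    {
        if (includes.HasFlag(SeriesIncludes.Volumes)) query = query.Include(s => s.Volumes);
        if (includes.HasFlag(SeriesIncludes.Metadata)) query = query.Include(s => s.Metadata);
        return query;
    }
}
```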
* Implemented a check in the library scan optimization to validate whether the library was updated (type change, library rename, folder change, or series deleted) and let the optimization be bypassed.
* Added another optimization which will use just the folder's last write time attribute if the drive is not NTFS.
* Fixed a unit test
* Some code cleanup
* Some performance refactoring around getting Library and avoid a byte[] copy for getting cover images for epubs.
* Initial commit. Rewrote the main series scan loop to use chunks of data at a time. Not fully shaken out.
* Hooked in the ability for the UI to react to series being added or removed from the DB.
* Cleaned up the messaging in the scan loop to be more clear.
* Metadata scan and scan work as expected and populate data to the UI. There is a slowdown in overall operation speed.
Scan series and refresh series metadata do not work fully.
* Fixed a bug where MangaFiles were not having LastModified updated correctly, meaning archives were being opened on every scan.
* Modified the code to be more realistic to the underlying file
* Updated ScanService to properly handle deleted files and not result in a higher-level scan.
* Shuffled volume-related repo APIs into the volume repo rather than having them in series.
* Rewrote scan series to be much cleaner and more concise in its flow. Fixed an issue in UpdateVolumes where the debug code logging removed volumes could throw an exception and break volume updates.
* Refactored the code to set MangaFile last modified timestamp into the MangaFile entity.
* Added Series Name to ScanSeries event
* Added additional checks in ScanSeries to ensure we never go outside the library folder.
Added extra debug messages for when a metadata refresh doesn't actually make changes and for when we regen cover images.
* More logging statements saying where they originate from. Fixed a critical bug which caused only 1 chunk to ever be processed.
* Fixed a concurrency issue with natural sorter which could cause issues in ArchiveService.cs.
* Log cleanups
* Fixed an issue with logging out total time of a scan.
* Only show 'added' toastrs for admins. When kicking off a metadata refresh for a series, make sure we regenerate all cover images.
* Code smells on benchmark despite it being ignored