Kavita/API/Services/DirectoryService.cs
Joe Milazzo a8ee1d2191
v0.7.4 - Kavita+ Launch (#2117)
* Initial Canary Push (#2055)

* Added AniList Token

* Implemented the ability to set your AniList token. License check is not in place.

* Added a check that validates AniList token is still valid. As I build out more support, I will add more checks.

* Refactored the code to validate the license before allowing UI control to be edited.

* Started license server stuff, but may need to change approach.

Hooked up ability to scrobble rating events to KavitaPlus API.

* Hooked in the ability to sync Mark Series as Read/Unread

* Fixed up unit tests and only scrobble when a full chapter is read naturally.

* Fixed up the Scrobbling service

* Tweak one of the queries

* Started an idea for Scrobble History, might rework into generic TaskHistory.

* AniList Token now has a validation check.

* Implemented a mechanism such that events are persisted to the database, processed every X hours to the API layer, then deleted from the database.

* Hooked in code for want to read so we only send what's important. Will migrate these to bulk calls to lessen strain on API server.

* Added some todos. Need to take a break.

* Hooked up the ability to backfill scrobble events after turning it on.

* Started on integrating license key into the server and ability to turn off scrobbling at the library level. Added sync history table for scrobbling and other API based information.

* Started writing to sync table

* Refactored the migrations to flatten them.

Started working on a basic license add flow and added in some of the cache. Lots to do.

* Ensure that when we backfill scrobble events, we respect if a library has scrobbling turned on or not.

* Hooked up the ability to send when the series was started to be read

* Refactored the UI to streamline and group KavitaPlus Account Forms.

* Aligning with API

* Fixed bad merge

* Fixed up inputting a user license.

* Hooked up a cron task that validates licenses every 4 hours and on startup.

* Reworked how the update license code works so that we always update the cache and we handle removing license from user.

* Cleaned up some UI code

* UserDto now indicates whether there is a valid license. The license key itself is not exposed, as there is never a need to expose it.

* Fixed a strange encoding issue with extra ".

Started working on having the UI aware of the license information.

Refactored all code to properly pass the correct license to the API layer.

* There is a circular dependency in the code.

Fixed some theme code which wasn't checking the right variable.

Reworked the JWT interceptor to be better at handling async code.

Lots of misc code changes, DI circular issue is still present.

* Fixed the DI issue and moved all things that need bootstrapping to app.component.

* Hooked up the ability to not have a donation button show up if the server default user/admin has a valid KavitaPlus license.

* Refactored how we extract out ids from weblinks

* Ensure if API fails, we don't delete the record.

* Refactored how rate checks occur for scrobbling processing.

* Lots of testing and ensuring rate limit doesn't get destroyed.

* Ensure the media item is valid for that user's providers set.

* Refactored the loop code into one method to keep things much cleaner

* Lots of code to get the scrobbling streamlined and foolproof. Unknown series are now reported on the UI.

* Prevent duplicates for scrobble errors.

* Ensure we are sending the correct type to the Scrobble Provider

* Ensure we send the date of the scrobble event for upstream to use.

* Replaced the dedicated run backfilling of scrobble events to just trigger when setting the anilist token for the first time.

Streamlined a lot of the code for adding your license to ensure user understands how it works.

* Fixed a bug where scan series wasn't triggering word count or cover generation.

* Started the plumbing for recommendations

* Merge conflicts

* Recommendation plumbing is nearly complete.

* Setup response caching and general cleanup

* Fixed UI not showing the recommendation tab

* Switched to prod url

* Fixed broken unit tests due to Hangfire not being setup for unit tests

* Fixed branch selection (#2056)

* Damn you GA (#2058)

* Bump versions by dotnet-bump-version.

* Fixed GA not pulling the right branch and removed unneeded building from version job (#2060)

* Bump versions by dotnet-bump-version.

* Canary Second (#2071)

* Just started

* Started building the user review card. Fixed Recommendations not having user progress on them.

* Fixed a bug where scrobbling ratings wasn't working.

* Added a temp ability to trigger scrobbling processing for testing.

* Cleaned up the design of review card. Added a temp way to trigger scrobbling.

* Fixed clear scrobbling errors and refactored so reviews now load from DB and is streamlined.

* Refactored so edit review is now a single module component and editable from the series detail page.

* Removed SyncHistory table as it's no longer needed. Refactored read events to properly update to the latest progress information. Refactored to a new way of clearing events, so that users can see their scrobble history.

* Fixed a bug where Anilist token wouldn't show as set due to some state issue

* Added the ability to see your own scrobble events

* Avoid a potential collision with recommendations.

* Fixed an issue where when checking for a license on UI, it wouldn't force the check (in case server was down on first check).

* External reviews are implemented.

* Fixed unit tests

* Bump versions by dotnet-bump-version.

* Made the api url dynamic based on dev mode or not. (#2072)

* Bump versions by dotnet-bump-version.

* Canary Build 3 (#2079)

* Updated reviews to have tagline support to match how Anilist has them.

Cleaned up the KavitaPlus documentation and added a feature list.

Review cards look much better.

* Fixed up an NPE in scrobble event creation

* Removed the ability to have images leak in the read more review card.

Reviews now show whether the reviewer is a local user or an external one.

* Added caching to the reviews and recommendations that come from an external source. Max of 50MB will be used across whole instance. Entries are cached for 1 hour.

* Reviews are looking much better

* Added the ability for users to share their series reviews with other users on the server via a new opt-in mechanism.

Fixed up some cache busting mechanism for reviews.

* More review polish to align with better matching

* Added the extra information for Recommendation matching.

* Preview of the review is much cleaner now and the full body is styled better.

* More anilist specific syntax

* Fixed bad regex

* Added the ability to bust cache.

Spoilers are now implemented for reviews. Introduces:
--review-spoiler-bg-color
--review-spoiler-text-color

* Bump versions by dotnet-bump-version.

* Canary Build 4 (#2086)

* Updated Kavita Plus feature list. Added a hover-over to the progress bars in the app to know exact percentage of reading for a chapter or series.

* Added a button to go to external review. Changed how enums show in the documentation so you can see their string value too.

Limited reviews to top 10 with proper ordering. Drastically cleaned up how we handle preview summary generation

* Cleaned up the margin below review section

* Fixed an issue where a processed scrobble event would get updated instead of a new event created.

* By default, there is now a prompt on series review to add your own, which fills up the space nicely.

Added the backend for Series Holds.

* Scrobble History is now ordered by recent -> latest. Some minor cleanup in other files.

* Added a simple way to see and toggle scrobble service from the series.

* Fixed a bug where updating the user's last active time wasn't writing to database and causing a logout event.

* Tweaked the registration email wording to be more clear for email field.

* Improved OPDS Url generation and included using host name if defined.

* Fixed the issues with choosing the correct series cover image. Added many unit tests to cover the edge cases.

* Small cleanup

* Fixed an issue where urls with , in them would break weblinks.

* Fixed a bug where we weren't trying a png before we hit fallback for favicon parsing.

* Ensure scrobbling tab isn't active without a license.

Changed how updating user last active worked to suppress more concurrency issues.

* Fixed an issue where duplicate series could appear on newly added during a scan.

* Bump versions by dotnet-bump-version.

* Fixed a bad dto (#2087)

* Bump versions by dotnet-bump-version.

* Canary Build 4 (#2089)

* New server-based auth is in place with the ability to register the instance.

* Refactored to single install bound licensing.

* Made the Kavita+ tab gold.

* Changed the JWTs to last 10 days. This is self-hosted software and the usage doesn't need a 2-day expiration.

* Bump versions by dotnet-bump-version.

* Canary Build 4 (#2090)

* By default, a new library will only have scrobbling on if it's of type book or manga given current scrobble providers.

* Started building out external reviews.

* Added the ability to re-enter your license information.

* Fixed side nav not extending enough

* Fixed a bug with info cards

* Integrated rating support, fixed review cards without a tagline, and misc fixes.

* Streamlined where ratings are located on series detail page.

* Aligned with other series lookups

* Bump versions by dotnet-bump-version.

* Canary Build 6 (#2092)

* Cleaned up some messaging

* Fixed up series detail

* Cleanup

* Bump versions by dotnet-bump-version.

* Canary Build 6 (#2093)

* Fixed scrobble token not being visible by default.

* Added a loader for external reviews

* Added the ability to edit series details (weblinks) from Scrobble Issues page.

* Slightly lessened the focus on buttons

* Fixed review cards so whenever you click your own review, it will open the edit modal.

* Need for speed - Updated the Kavita logo to be much smaller and replaced all code ones with a 32x version.

* Optimized a ton of our images to be much smaller and faster to load.

* Added more MIME types for response compression

* Edit Series modal name field should be readonly as it is directly mapped to file metadata or filename parsed. It shouldn't be changeable via the UI.

* Removed the ability to update the Series name via Kavita UI/API as it is no longer editable.

* Moved Image component to be standalone

* Moved ReadMore component to be standalone

* Moved PersonBadge component to be standalone

* Moved IconAndTitle component to be standalone

* Fixed some bugs with standalone.

* Hooked in the ability to scrobble series reviews.

* Refactored everything to use HashUtil token rather than InstallId.

* Swapped over to a generated machine token and fixed an issue where after registering, the license would not say valid.

* Added the missing migration for review scrobble events.

* Clean up some wording around busting cache.

* Fixed a bug where chapters within a volume could be unordered in the UI info screen.

* Refactored to prepare for external series rendering on series detail.

* Implemented external recs

* Bump versions by dotnet-bump-version.

* Canary Build 7 (#2097)

* Aligned ExtractId to extract a long, since MAL id can be just that.

* Fixed external series card not clicking correctly.

Fixed a bug when extracting a Mal link.

Fixed cancel button on license component.

* Renamed user-license to license component given new direction for licensing.

* Implemented card layout for recommendations

* Moved more components over to be standalone and removed pipes module. This is going to take some time for sure.

* Moved Cards, SharedCardsSideNav, and SideNav over to standalone. This has been shaken out.

* Cleaned up a bunch of extra space on reading list detail page.

* Fixed rating popover not having a black triangle.

* When checking license, show a loading indicator for validity icon.

* Cache size can now be changed by admins if they want to give more memory for better browsing.

* Added LastReadTime

* Cleanup the scrobbling control text for Library Settings.

* Fixed yet another edge case for getting series cover image where first volume is higher than 1 and the rest is just loose leaf chapters.

* Changed OPDS Content Type to be application/atom+xml to align better with the spec.

* Fixed unit tests

* Bump versions by dotnet-bump-version.

* Canary Build 7 (#2098)

* Fixed the percentage readout on card item progress bar

* Ensure scrobble control is always visible

* Review card could show person icon in tablet viewport.

* Changed how the ServerToken for node locking works as docker was giving different results each time.

* After we update series metadata, bust cache

* License component cleanup on the styles

* Moved license to admin module and removed feature modal as wiki is much easier to maintain.

* Bump versions by dotnet-bump-version.

* Canary Build 8 (#2100)

* Fixed a very slight amount of the active nav tag bleeding outside the border radius

* Switched how we count words in epub to handle languages that don't have spaces.

* Updated dependencies and fixed a series cover image on list item view for recs.

* Fixed a bug where external recs weren't showing the summary of the series.

* Rewrote the rec loop to be cleaner

* Added the ability to see series summary on series detail page on list view.

Changed Scrobble Event page to show in server time and not utc.

* Added tons of output to identify why unraid generates a new fingerprint each time.

* Refactored scrobble event table to have filtering and pagination support.

Fixed a few bad template issues and fixed loading scrobbling tab on refresh of page.

* Aligned a few apis to use a default pagination rather than a higher level one.

* Undo OPDS change as Chunky/Panels break.

* Moved the holds code around

* Don't show an empty review for the user; it eats up unneeded space and is ugly.

* Cleaned up the review code

* Fixed a bug with arrow on sortable table header.

* More scrobbling debug information to ensure events are being processed correctly.

* Applied a ton of code cleanup build warnings

* Enhanced rec matching by prioritizing matching on weblinks before falling back to name matching.

* Fixed the calculation of word count for epubs.

* Bump versions by dotnet-bump-version.

* Canary Build 9 (#2104)

* Added another unit test

* Changed how we create cover images to force the aspect ratio, which allows for Kavita to do some extra work later down the line. Prevents skewing from comic sources.

* Code cleanup

* Updated signatures to explicitly indicate they return a physical file.

* Refactored the GA to be a bit more streamlined.

* Fixed up how we refresh volume and series image links after cover conversion.

* Undid the PhysicalFileResult stuff.

* Fixed an issue in the epub reader where html tags within an anchor could break the navigation code for inner-links.

* Fixed a bug in GetContinueChapter where a special could appear ahead of a loose leaf chapter.

* Optimized aspect ratios for custom library images to avoid shift layout.

Moved the series detail page down a bit to be inline with first row of actionables.

* Finally fixed the media conversion issue where volumes and series wouldn't get their file links updated.

* Added some new layout for license to allow a user to buy a sub after their last sub expired.

* Added more metrics for fingerprinting to test on docker.

* Tried to fix a bug with GetNextChapter looping incorrectly, but unable to solve.

* Cleanup some UI stuff to reduce bad calls.

* Suppress annoying issues with reaching K+ when it's down (only affects local builds)

* Fixed an edge case bug for picking the correct cover image for a series.

* Fixed a bug where typeahead x wouldn't clear out the input field.

* Renamed Clear -> Reset for metadata filter to be more informative of its function.

* Don't allow duplicates for reading list characters.

* Fixed a bug where when calculating recently updated, series with the same name but different libraries could get grouped.

* Fixed an issue with fit to height where there could still be a small amount of scroll due to a timing issue with the image loading.

* Don't show a loading if the user doesn't have a license for external ratings

* Fixed bad stat url

* Fixed up licensing to make it so you have to email me to get a sub renewed.

* Updated deps

* When scrobbling reading events, recalculate the highest chapter/volume during processing.

* Code cleanup

* Disabled some old test code that is likely not needed as it breaks a lot on netvips updates

* Bump versions by dotnet-bump-version.

* Canary Build 10 (#2105)

* Aligned fingerprint to be unique

* Updated email button to have a template

* Fixed inability to progress to next chapter when last page is a spread and user is using split rendering.

* Attempted fix at the column reader cutting off parts of the words. Can't fully reproduce, but added a bit of padding to help.

* Aligned AniList icon to match that of weblinks.

* Bump versions by dotnet-bump-version.

* Canary Build 11 (#2108)

* Fixed an issue with continuous reader in manga reader.

* Aligned KavitaPlus->Kavita+

* Updated the readme

* Adjusted first time registration messaging.

* Fixed a bug where having just one type of weblink could cause a bad recommendation lookup

* Removed manual invocation of scrobbling as testing is over for that feature.

* Fixed a bad observable for downloading logs from browser.

* Don't get reviews/recs for comic libraries. Override user selection for scrobbling on Comics since there are no places to scrobble to.

* Added a migration so all existing comic libraries will have scrobbling turned off.

* Don't allow the UI to toggle scrobbling on a library with no providers.

* Refactored the code to not throw generic 500 toasts on the UI. Added the ability to clear your license on Kavita side.

* Converted reader settings to new accordion format.

* Converted user preferences to new accordion format.

* I couldn't convert CBL Reading modal to new accordion directives due to some weird bug.

* Migrated the whole application to standalone components. This fixes the download progress bar not showing up.

* Hooked up the ability to have reading list generate random items. Removed the old code as it's no longer needed.

* Added random covers for collections as well.

* Added a speed up to not regenerate merged covers if we've already created them.

* Fixed an issue where tooltips weren't styled correctly after updating a library. Migrated Library access modal to OnPush.

* Fixed broken table styling. Fixed grid breakpoint css variables not using the ones from variables due to a missing import.

* Misc fixes around tables and some api doc cleanup

* Fixed a bug where when switching from webtoon back to a non-webtoon reading mode, if the browser size isn't large enough for double, the reader wouldn't go to single mode.

* When combining external recs, normalize names to filter out differences, like capitalization.

* Finally get to update ExCSS to the latest version! This adds much more css properties for epubs.

* Ensure rejected reviews are saved as errors

* A crap ton of code cleanup

* Cleaned up some equality code in GenreHelper.cs

* Fixed up the table styling after the bootstrap update changed it.

* Bump versions by dotnet-bump-version.

* Canary Build 12 (#2111)

* Aligned GA (#2059)

* Fixed the code around merging images to resize them. This will only look correct if this release's cover generation runs.

* Misc code cleanup

* Fixed an issue with epub column layout cutting off text

* Collection detail page will now default sort by sort name.

* Explicitly lazy load library icon images.

* Make sure the full error message can be passed to the license component/user.

* Use WhereIf in some places

* Changed the hash util code for unraid again

* Fixed up an issue with split render mode where last page wouldn't move into the next chapter.

* Bump versions by dotnet-bump-version.

* Don't ask me how, but I think I fixed the epub cutoff issue (#2112)

* Bump versions by dotnet-bump-version.

* Canary 14 (#2113)

* Switched how we build the unraid fingerprint.

* Fixed a bit of space below the image on fit to height

* Removed some bad code

* Bump versions by dotnet-bump-version.

* Canary Build 15 (#2114)

* When performing a scan series, force a recount of words/pages to ensure read time gets updated.

* Fixed broken download logs button (develop)

* Sped up the query for getting libraries and added caching for that api, which is helpful for users with larger library counts.

* Fixed an issue in directory picker where if you had two folders with the same name, the 2nd to last wouldn't be clickable.

* Added more destroy ref stuff.

* Switched the buy/manage links over to be environment specific.

* Bump versions by dotnet-bump-version.

* Canary Build 16 (#2115)

* Added the promo code for K+ and version bump.

* Don't show see more if there isn't more to see on series detail.

* Bump versions by dotnet-bump-version.

* Last Build (#2116)

* Merge

* Close the view after removing a license key from server.

* Bump versions by dotnet-bump-version.

* Reset version to v0.7.4 for merge.
2023-07-11 11:14:18 -07:00


using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.IO;
using System.IO.Abstractions;
using System.Linq;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using API.DTOs.System;
using API.Entities.Enums;
using API.Extensions;
using Kavita.Common.Helpers;
using Microsoft.Extensions.Logging;
namespace API.Services;
#nullable enable
public interface IDirectoryService
{
IFileSystem FileSystem { get; }
string CacheDirectory { get; }
string CoverImageDirectory { get; }
string LogDirectory { get; }
string TempDirectory { get; }
string ConfigDirectory { get; }
string SiteThemeDirectory { get; }
string FaviconDirectory { get; }
/// <summary>
/// Original BookmarkDirectory. Only used for resetting the directory. Use <see cref="ServerSettingKey.BookmarkDirectory"/> for the actual path.
/// </summary>
string BookmarkDirectory { get; }
/// <summary>
/// Lists out top-level folders for a given directory. Filters out System and Hidden folders.
/// </summary>
/// <param name="rootPath">Absolute path of directory to scan.</param>
/// <returns>List of folder names</returns>
IEnumerable<DirectoryDto> ListDirectory(string rootPath);
Task<byte[]> ReadFileAsync(string path);
bool CopyFilesToDirectory(IEnumerable<string> filePaths, string directoryPath, string prepend = "");
bool CopyFilesToDirectory(IEnumerable<string> filePaths, string directoryPath, IList<string> newFilenames);
bool Exists(string directory);
void CopyFileToDirectory(string fullFilePath, string targetDirectory);
int TraverseTreeParallelForEach(string root, Action<string> action, string searchPattern, ILogger logger);
bool IsDriveMounted(string path);
bool IsDirectoryEmpty(string path);
long GetTotalSize(IEnumerable<string> paths);
void ClearDirectory(string directoryPath);
void ClearAndDeleteDirectory(string directoryPath);
string[] GetFilesWithExtension(string path, string searchPatternExpression = "");
bool CopyDirectoryToDirectory(string? sourceDirName, string destDirName, string searchPattern = "");
Dictionary<string, string> FindHighestDirectoriesFromFiles(IEnumerable<string> libraryFolders,
IList<string> filePaths);
IEnumerable<string> GetFoldersTillRoot(string rootPath, string fullPath);
IEnumerable<string> GetFiles(string path, string fileNameRegex = "", SearchOption searchOption = SearchOption.TopDirectoryOnly);
bool ExistOrCreate(string directoryPath);
void DeleteFiles(IEnumerable<string> files);
void RemoveNonImages(string directoryName);
void Flatten(string directoryName);
Task<bool> CheckWriteAccess(string directoryName);
IEnumerable<string> GetFilesWithCertainExtensions(string path,
string searchPatternExpression = "",
SearchOption searchOption = SearchOption.TopDirectoryOnly);
IEnumerable<string> GetDirectories(string folderPath);
IEnumerable<string> GetDirectories(string folderPath, GlobMatcher? matcher);
string GetParentDirectoryName(string fileOrFolder);
IList<string> ScanFiles(string folderPath, GlobMatcher? matcher = null);
DateTime GetLastWriteTime(string folderPath);
GlobMatcher? CreateMatcherFromFile(string filePath);
}
public class DirectoryService : IDirectoryService
{
public const string KavitaIgnoreFile = ".kavitaignore";
public IFileSystem FileSystem { get; }
public string CacheDirectory { get; }
public string CoverImageDirectory { get; }
public string LogDirectory { get; }
public string TempDirectory { get; }
public string ConfigDirectory { get; }
public string BookmarkDirectory { get; }
public string SiteThemeDirectory { get; }
public string FaviconDirectory { get; }
private readonly ILogger<DirectoryService> _logger;
private const RegexOptions MatchOptions = RegexOptions.Compiled | RegexOptions.IgnoreCase;
private static readonly Regex ExcludeDirectories = new Regex(
@"@eaDir|\.DS_Store|\.qpkg|__MACOSX|@Recently-Snapshot|@recycle|\.@__thumb",
MatchOptions,
Tasks.Scanner.Parser.Parser.RegexTimeout);
private static readonly Regex FileCopyAppend = new Regex(@"\(\d+\)",
MatchOptions,
Tasks.Scanner.Parser.Parser.RegexTimeout);
public static readonly string BackupDirectory = Path.Join(Directory.GetCurrentDirectory(), "config", "backups");
public DirectoryService(ILogger<DirectoryService> logger, IFileSystem fileSystem)
{
_logger = logger;
FileSystem = fileSystem;
CoverImageDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config", "covers");
CacheDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config", "cache");
LogDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config", "logs");
TempDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config", "temp");
ConfigDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config");
BookmarkDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config", "bookmarks");
SiteThemeDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config", "themes");
FaviconDirectory = FileSystem.Path.Join(FileSystem.Directory.GetCurrentDirectory(), "config", "favicons");
ExistOrCreate(SiteThemeDirectory);
ExistOrCreate(CoverImageDirectory);
ExistOrCreate(CacheDirectory);
ExistOrCreate(LogDirectory);
ExistOrCreate(TempDirectory);
ExistOrCreate(BookmarkDirectory);
ExistOrCreate(FaviconDirectory);
}
/// <summary>
/// Given a set of regex search criteria, get files in the given path.
/// </summary>
/// <remarks>This will always exclude <see cref="Tasks.Scanner.Parser.Parser.MacOsMetadataFileStartsWith"/> patterns</remarks>
/// <param name="path">Directory to search</param>
/// <param name="searchPatternExpression">Regex version of search pattern (ie \.mp3|\.mp4). Defaults to * meaning all files.</param>
/// <param name="searchOption">SearchOption to use, defaults to TopDirectoryOnly</param>
/// <returns>List of file paths</returns>
public IEnumerable<string> GetFilesWithCertainExtensions(string path,
string searchPatternExpression = "",
SearchOption searchOption = SearchOption.TopDirectoryOnly)
{
if (!FileSystem.Directory.Exists(path)) return ImmutableList<string>.Empty;
var reSearchPattern = new Regex(searchPatternExpression, RegexOptions.IgnoreCase, Tasks.Scanner.Parser.Parser.RegexTimeout);
return FileSystem.Directory.EnumerateFiles(path, "*", searchOption)
.Where(file =>
reSearchPattern.IsMatch(FileSystem.Path.GetExtension(file)) && !FileSystem.Path.GetFileName(file).StartsWith(Tasks.Scanner.Parser.Parser.MacOsMetadataFileStartsWith));
}
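// Example usage with a hypothetical library path; the pattern is a regex matched
// against each file's extension (including the dot), case-insensitively:
//
//   var archives = GetFilesWithCertainExtensions(@"C:\Manga\Love Hina", @"\.cbz|\.cbr|\.zip");
//
// Files whose names start with the MacOS metadata prefix are always excluded,
// regardless of the pattern.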
/// <summary>
/// Returns a list of folders from end of fullPath to rootPath. If a file is passed at the end of the fullPath, it will be ignored.
///
/// Example) (C:/Manga/, C:/Manga/Love Hina/Specials/Omake/) returns [Omake, Specials, Love Hina]
/// </summary>
/// <param name="rootPath"></param>
/// <param name="fullPath"></param>
/// <returns></returns>
public IEnumerable<string> GetFoldersTillRoot(string rootPath, string fullPath)
{
var separator = FileSystem.Path.AltDirectorySeparatorChar;
if (fullPath.Contains(FileSystem.Path.DirectorySeparatorChar))
{
fullPath = fullPath.Replace(FileSystem.Path.DirectorySeparatorChar, FileSystem.Path.AltDirectorySeparatorChar);
}
if (rootPath.Contains(Path.DirectorySeparatorChar))
{
rootPath = rootPath.Replace(FileSystem.Path.DirectorySeparatorChar, FileSystem.Path.AltDirectorySeparatorChar);
}
var path = fullPath.EndsWith(separator) ? fullPath.Substring(0, fullPath.Length - 1) : fullPath;
var root = rootPath.EndsWith(separator) ? rootPath.Substring(0, rootPath.Length - 1) : rootPath;
var paths = new List<string>();
// If a file is at the end of the path, remove it before we start processing folders
if (FileSystem.Path.GetExtension(path) != string.Empty)
{
path = path.Substring(0, path.LastIndexOf(separator));
}
while (FileSystem.Path.GetDirectoryName(path) != Path.GetDirectoryName(root))
{
var folder = FileSystem.DirectoryInfo.New(path).Name;
paths.Add(folder);
path = path.Substring(0, path.LastIndexOf(separator));
}
return paths;
}
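// Example walk-through with hypothetical paths: separators are normalized to '/',
// a trailing file segment is stripped, then folder names are collected walking up
// until the parent of the root is reached:
//
//   GetFoldersTillRoot("C:/Manga/", "C:/Manga/Love Hina/Specials/Omake/page 1.jpg");
//   // -> ["Omake", "Specials", "Love Hina"]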
/// <summary>
/// Does Directory Exist
/// </summary>
/// <param name="directory"></param>
/// <returns></returns>
public bool Exists(string directory)
{
var di = FileSystem.DirectoryInfo.New(directory);
return di.Exists;
}
/// <summary>
/// Get files given a path.
/// </summary>
/// <remarks>This will automatically filter out restricted files, like MacOsMetadata files</remarks>
/// <param name="path"></param>
/// <param name="fileNameRegex">An optional regex string to search against. Will use file path to match against.</param>
/// <param name="searchOption">Defaults to top level directory only, can be given all to provide recursive searching</param>
/// <returns></returns>
public IEnumerable<string> GetFiles(string path, string fileNameRegex = "", SearchOption searchOption = SearchOption.TopDirectoryOnly)
{
if (!FileSystem.Directory.Exists(path)) return ImmutableList<string>.Empty;
if (fileNameRegex != string.Empty)
{
var reSearchPattern = new Regex(fileNameRegex, RegexOptions.IgnoreCase,
Tasks.Scanner.Parser.Parser.RegexTimeout);
return FileSystem.Directory.EnumerateFiles(path, "*", searchOption)
.Where(file =>
{
var fileName = FileSystem.Path.GetFileName(file);
return reSearchPattern.IsMatch(fileName) &&
!fileName.StartsWith(Tasks.Scanner.Parser.Parser.MacOsMetadataFileStartsWith);
});
}
return FileSystem.Directory.EnumerateFiles(path, "*", searchOption).Where(file =>
!FileSystem.Path.GetFileName(file).StartsWith(Tasks.Scanner.Parser.Parser.MacOsMetadataFileStartsWith));
}
/// <summary>
/// Copies a file into a directory. Does not maintain parent folder of file.
/// Will create target directory if doesn't exist. Automatically overwrites what is there.
/// </summary>
/// <param name="fullFilePath"></param>
/// <param name="targetDirectory"></param>
public void CopyFileToDirectory(string fullFilePath, string targetDirectory)
{
try
{
var fileInfo = FileSystem.FileInfo.New(fullFilePath);
if (!fileInfo.Exists) return;
ExistOrCreate(targetDirectory);
fileInfo.CopyTo(FileSystem.Path.Join(targetDirectory, fileInfo.Name), true);
}
catch (Exception ex)
{
_logger.LogError(ex, "There was a critical error when copying {File} to {Directory}", fullFilePath, targetDirectory);
}
}
/// <summary>
/// Copies all files and subdirectories within a directory to a target location
/// </summary>
/// <param name="sourceDirName">Directory to copy from. Does not copy the parent folder</param>
/// <param name="destDirName">Destination to copy to. Will be created if doesn't exist</param>
/// <param name="searchPattern">Defaults to all files</param>
/// <returns>If was successful</returns>
/// <exception cref="DirectoryNotFoundException">Thrown when source directory does not exist</exception>
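/// <example>
/// An illustrative (hypothetical) call that recursively copies a backup folder:
/// <code>
/// directoryService.CopyDirectoryToDirectory(@"/config/backups", @"/mnt/backup/kavita");
/// </code>
/// </example>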
public bool CopyDirectoryToDirectory(string? sourceDirName, string destDirName, string searchPattern = "")
{
if (string.IsNullOrEmpty(sourceDirName)) return false;
// Get the subdirectories for the specified directory.
var dir = FileSystem.DirectoryInfo.New(sourceDirName);
if (!dir.Exists)
{
throw new DirectoryNotFoundException(
"Source directory does not exist or could not be found: "
+ sourceDirName);
}
var dirs = dir.GetDirectories();
// If the destination directory doesn't exist, create it.
ExistOrCreate(destDirName);
// Get the files in the directory and copy them to the new location.
var files = GetFilesWithExtension(dir.FullName, searchPattern).Select(n => FileSystem.FileInfo.New(n));
foreach (var file in files)
{
var tempPath = FileSystem.Path.Combine(destDirName, file.Name);
file.CopyTo(tempPath, false);
}
// If copying subdirectories, copy them and their contents to new location.
foreach (var subDir in dirs)
{
var tempPath = FileSystem.Path.Combine(destDirName, subDir.Name);
CopyDirectoryToDirectory(subDir.FullName, tempPath);
}
return true;
}
/// <summary>
/// Checks whether the root of a path (e.g. the drive or mount point) exists.
/// </summary>
/// <param name="path"></param>
/// <returns></returns>
public bool IsDriveMounted(string path)
{
return FileSystem.DirectoryInfo.New(FileSystem.Path.GetPathRoot(path) ?? string.Empty).Exists;
}
/// <summary>
/// Checks if a directory exists and contains no files or subdirectories.
/// </summary>
/// <param name="path"></param>
/// <returns></returns>
public bool IsDirectoryEmpty(string path)
{
return FileSystem.Directory.Exists(path) && !FileSystem.Directory.EnumerateFileSystemEntries(path).Any();
}
public string[] GetFilesWithExtension(string path, string searchPatternExpression = "")
{
if (!string.IsNullOrEmpty(searchPatternExpression))
{
return GetFilesWithCertainExtensions(path, searchPatternExpression).ToArray();
}
return !FileSystem.Directory.Exists(path) ? Array.Empty<string>() : FileSystem.Directory.GetFiles(path);
}
/// <summary>
/// Returns the total number of bytes for a given set of full file paths
/// </summary>
/// <param name="paths"></param>
/// <returns>Total bytes</returns>
public long GetTotalSize(IEnumerable<string> paths)
{
return paths.Sum(path => FileSystem.FileInfo.New(path).Length);
}
/// <summary>
/// Returns true if the path exists and is a directory. If path does not exist, this will create it. Returns false in all fail cases.
/// </summary>
/// <param name="directoryPath"></param>
/// <returns></returns>
public bool ExistOrCreate(string directoryPath)
{
var di = FileSystem.DirectoryInfo.New(directoryPath);
if (di.Exists) return true;
try
{
FileSystem.Directory.CreateDirectory(directoryPath);
}
catch (Exception)
{
return false;
}
return true;
}
/// <summary>
/// Deletes all files within the directory, then the directory itself.
/// </summary>
/// <param name="directoryPath"></param>
public void ClearAndDeleteDirectory(string directoryPath)
{
if (!FileSystem.Directory.Exists(directoryPath)) return;
var di = FileSystem.DirectoryInfo.New(directoryPath);
ClearDirectory(directoryPath);
di.Delete(true);
}
/// <summary>
/// Deletes all files and folders within the directory path
/// </summary>
/// <param name="directoryPath"></param>
/// <returns></returns>
public void ClearDirectory(string directoryPath)
{
var di = FileSystem.DirectoryInfo.New(directoryPath);
if (!di.Exists) return;
try
{
foreach (var file in di.EnumerateFiles())
{
file.Delete();
}
foreach (var dir in di.EnumerateDirectories())
{
dir.Delete(true);
}
}
catch (UnauthorizedAccessException ex)
{
_logger.LogError(ex, "[ClearDirectory] Could not delete {DirectoryPath} due to permission issue", directoryPath);
}
}
/// <summary>
/// Copies files to a destination directory. If the destination directory doesn't exist, this will create it.
/// </summary>
/// <remarks>If a file already exists in dest, this will rename it as (1), (2), etc. until a free name is found. Overwriting is not supported.</remarks>
/// <param name="filePaths"></param>
/// <param name="directoryPath"></param>
/// <param name="prepend">An optional string to prepend to the target file's name</param>
/// <returns></returns>
public bool CopyFilesToDirectory(IEnumerable<string> filePaths, string directoryPath, string prepend = "")
{
ExistOrCreate(directoryPath);
string? currentFile = null;
try
{
foreach (var file in filePaths)
{
currentFile = file;
if (!FileSystem.File.Exists(file))
{
_logger.LogError("Unable to copy {File} to {DirectoryPath} as it doesn't exist", file, directoryPath);
continue;
}
var fileInfo = FileSystem.FileInfo.New(file);
var targetFile = FileSystem.FileInfo.New(RenameFileForCopy(file, directoryPath, prepend));
fileInfo.CopyTo(FileSystem.Path.Join(directoryPath, targetFile.Name));
}
}
catch (Exception ex)
{
_logger.LogError(ex, "Unable to copy {File} to {DirectoryPath}", currentFile, directoryPath);
return false;
}
return true;
}
/// <summary>
/// Copies files to a destination directory. If the destination directory doesn't exist, this will create it.
/// </summary>
/// <remarks>If a file already exists in dest, this will rename it as (1), (2), etc. until a free name is found. Overwriting is not supported.</remarks>
/// <param name="filePaths"></param>
/// <param name="directoryPath"></param>
/// <param name="newFilenames">A list that matches one to one with filePaths. Each filepath will be renamed to newFilenames</param>
/// <returns></returns>
public bool CopyFilesToDirectory(IEnumerable<string> filePaths, string directoryPath, IList<string> newFilenames)
{
ExistOrCreate(directoryPath);
string? currentFile = null;
var index = 0;
try
{
foreach (var file in filePaths)
{
currentFile = file;
if (!FileSystem.File.Exists(file))
{
_logger.LogError("Unable to copy {File} to {DirectoryPath} as it doesn't exist", file, directoryPath);
continue;
}
var fileInfo = FileSystem.FileInfo.New(file);
var targetFile = FileSystem.FileInfo.New(RenameFileForCopy(newFilenames[index] + fileInfo.Extension, directoryPath));
fileInfo.CopyTo(FileSystem.Path.Join(directoryPath, targetFile.Name));
index++;
}
}
catch (Exception ex)
{
_logger.LogError(ex, "Unable to copy {File} to {DirectoryPath}", currentFile, directoryPath);
return false;
}
return true;
}
/// <summary>
/// Generates the combined filepath given a prepend (optional), output directory path, and a full input file path.
/// If the output file already exists, will append (1), (2), etc until it can be written out
/// </summary>
/// <param name="fileToCopy"></param>
/// <param name="directoryPath"></param>
/// <param name="prepend"></param>
/// <returns></returns>
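/// <example>
/// Illustrative behavior (hypothetical paths): if "output/file.txt" already exists,
/// <code>RenameFileForCopy("file.txt", "output")</code> returns a path ending in "file (1).txt";
/// if that also exists, the next pass yields "file (2).txt".
/// </example>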
private string RenameFileForCopy(string fileToCopy, string directoryPath, string prepend = "")
{
while (true)
{
var fileInfo = FileSystem.FileInfo.New(fileToCopy);
var filename = prepend + fileInfo.Name;
var targetFile = FileSystem.FileInfo.New(FileSystem.Path.Join(directoryPath, filename));
if (!targetFile.Exists)
{
return targetFile.FullName;
}
var noExtension = FileSystem.Path.GetFileNameWithoutExtension(fileInfo.Name);
if (FileCopyAppend.IsMatch(noExtension))
{
var match = FileCopyAppend.Match(noExtension).Value;
var matchNumber = match.Replace("(", string.Empty).Replace(")", string.Empty);
noExtension = noExtension.Replace(match, $"({int.Parse(matchNumber) + 1})");
}
else
{
noExtension += " (1)";
}
var newFilename = prepend + noExtension + FileSystem.Path.GetExtension(fileInfo.Name);
fileToCopy = FileSystem.Path.Join(directoryPath, newFilename);
}
}
/// <summary>
/// Lists all directories in a root path. Will exclude Hidden or System directories.
/// </summary>
/// <param name="rootPath"></param>
/// <returns></returns>
public IEnumerable<DirectoryDto> ListDirectory(string rootPath)
{
if (!FileSystem.Directory.Exists(rootPath)) return ImmutableList<DirectoryDto>.Empty;
var di = FileSystem.DirectoryInfo.New(rootPath);
var dirs = di.GetDirectories()
.Where(dir => !(dir.Attributes.HasFlag(FileAttributes.Hidden) || dir.Attributes.HasFlag(FileAttributes.System)))
.Select(d => new DirectoryDto()
{
Name = d.Name,
FullPath = d.FullName,
})
.OrderBy(s => s.Name)
.ToImmutableList();
return dirs;
}
/// <summary>
/// Reads a file into a byte[]. Returns an empty array if the file doesn't exist.
/// </summary>
/// <param name="path"></param>
/// <returns></returns>
public async Task<byte[]> ReadFileAsync(string path)
{
if (!FileSystem.File.Exists(path)) return Array.Empty<byte>();
return await FileSystem.File.ReadAllBytesAsync(path);
}
/// <summary>
/// Finds the highest directories from a set of file paths. Does not return the root path, will always select the highest non-root path.
/// </summary>
/// <remarks>If the file paths do not contain anything from libraryFolders, this returns an empty dictionary back</remarks>
/// <param name="libraryFolders">List of top level folders which files belong to</param>
/// <param name="filePaths">List of file paths that belong to libraryFolders</param>
/// <returns></returns>
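/// <example>
/// Illustrative mapping (hypothetical paths): with a library folder of "/library/manga",
/// the file "/library/manga/Series A/Vol 1/ch1.cbz" produces the key "/library/manga/Series A".
/// </example>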
public Dictionary<string, string> FindHighestDirectoriesFromFiles(IEnumerable<string> libraryFolders, IList<string> filePaths)
{
var stopLookingForDirectories = false;
var dirs = new Dictionary<string, string>();
foreach (var folder in libraryFolders.Select(Tasks.Scanner.Parser.Parser.NormalizePath))
{
if (stopLookingForDirectories) break;
foreach (var file in filePaths.Select(Tasks.Scanner.Parser.Parser.NormalizePath))
{
if (!file.Contains(folder)) continue;
var parts = GetFoldersTillRoot(folder, file).ToList();
if (parts.Count == 0)
{
// Break from all loops; we're done, just scan folder.Path (library root)
dirs.Add(folder, string.Empty);
stopLookingForDirectories = true;
break;
}
var fullPath = Tasks.Scanner.Parser.Parser.NormalizePath(Path.Join(folder, parts[parts.Count - 1]));
dirs.TryAdd(fullPath, string.Empty);
}
}
return dirs;
}
/// <summary>
/// Gets a set of directories from the folder path. Automatically excludes directories that shouldn't be in scope.
/// </summary>
/// <param name="folderPath"></param>
/// <returns>List of directory paths, empty if path doesn't exist</returns>
public IEnumerable<string> GetDirectories(string folderPath)
{
if (!FileSystem.Directory.Exists(folderPath)) return ImmutableArray<string>.Empty;
return FileSystem.Directory.GetDirectories(folderPath)
.Where(path => ExcludeDirectories.Matches(path).Count == 0);
}
/// <summary>
/// Gets a set of directories from the folder path. Automatically excludes directories that shouldn't be in scope.
/// </summary>
/// <param name="folderPath"></param>
/// <param name="matcher">A set of glob rules that will filter directories out</param>
/// <returns>List of directory paths, empty if path doesn't exist</returns>
public IEnumerable<string> GetDirectories(string folderPath, GlobMatcher? matcher)
{
if (matcher == null) return GetDirectories(folderPath);
return GetDirectories(folderPath)
.Where(folder => !matcher.ExcludeMatches(
$"{FileSystem.DirectoryInfo.New(folder).Name}{FileSystem.Path.AltDirectorySeparatorChar}"));
}
/// <summary>
/// Returns all directories, including subdirectories. Automatically excludes directories that shouldn't be in scope.
/// </summary>
/// <param name="folderPath"></param>
/// <returns></returns>
public IEnumerable<string> GetAllDirectories(string folderPath)
{
if (!FileSystem.Directory.Exists(folderPath)) return ImmutableArray<string>.Empty;
var directories = new List<string>();
var foundDirs = GetDirectories(folderPath);
foreach (var foundDir in foundDirs)
{
directories.Add(foundDir);
directories.AddRange(GetAllDirectories(foundDir));
}
return directories;
}
/// <summary>
/// Returns the normalized full path of the parent directory for a file or folder. Empty string if the path is not valid.
/// </summary>
/// <param name="fileOrFolder"></param>
/// <returns></returns>
public string GetParentDirectoryName(string fileOrFolder)
{
try
{
return Tasks.Scanner.Parser.Parser.NormalizePath(Directory.GetParent(fileOrFolder)?.FullName);
}
catch (Exception)
{
return string.Empty;
}
}
/// <summary>
/// Scans a directory using a recursive folder search. If a .kavitaignore file is found, files matching its patterns will be ignored
/// </summary>
/// <param name="folderPath"></param>
/// <param name="matcher"></param>
/// <returns></returns>
public IList<string> ScanFiles(string folderPath, GlobMatcher? matcher = null)
{
_logger.LogDebug("[ScanFiles] called on {Path}", folderPath);
var files = new List<string>();
if (!Exists(folderPath)) return files;
var potentialIgnoreFile = FileSystem.Path.Join(folderPath, KavitaIgnoreFile);
if (matcher == null)
{
matcher = CreateMatcherFromFile(potentialIgnoreFile);
}
else
{
matcher.Merge(CreateMatcherFromFile(potentialIgnoreFile));
}
var directories = GetDirectories(folderPath, matcher);
foreach (var directory in directories)
{
files.AddRange(ScanFiles(directory, matcher));
}
// Get the matcher from either ignore or global (default setup)
if (matcher == null)
{
files.AddRange(GetFilesWithCertainExtensions(folderPath, Tasks.Scanner.Parser.Parser.SupportedExtensions));
}
else
{
var foundFiles = GetFilesWithCertainExtensions(folderPath,
Tasks.Scanner.Parser.Parser.SupportedExtensions)
.Where(file => !matcher.ExcludeMatches(FileSystem.FileInfo.New(file).Name));
files.AddRange(foundFiles);
}
return files;
}
/// <summary>
/// Recursively scans a folder and returns the max last write time on any folders and files
/// </summary>
/// <remarks>If the folder is empty, this will return MaxValue for a DateTime</remarks>
/// <param name="folderPath"></param>
/// <returns>Max Last Write Time</returns>
public DateTime GetLastWriteTime(string folderPath)
{
if (!FileSystem.Directory.Exists(folderPath)) throw new IOException($"{folderPath} does not exist");
var fileEntries = FileSystem.Directory.GetFileSystemEntries(folderPath, "*.*", SearchOption.AllDirectories);
if (fileEntries.Length == 0) return DateTime.MaxValue;
return fileEntries.Max(path => FileSystem.File.GetLastWriteTime(path));
}
/// <summary>
/// Generates a GlobMatcher from a .kavitaignore file found at path. Returns null otherwise.
/// </summary>
/// <param name="filePath"></param>
/// <returns></returns>
public GlobMatcher? CreateMatcherFromFile(string filePath)
{
if (!FileSystem.File.Exists(filePath))
{
return null;
}
// Read file in and add each line to Matcher
var lines = FileSystem.File.ReadAllLines(filePath);
if (lines.Length == 0)
{
return null;
}
GlobMatcher matcher = new();
foreach (var line in lines.Where(s => !string.IsNullOrEmpty(s)))
{
matcher.AddExclude(line);
}
return matcher;
}
/// <summary>
/// Recursively scans files and applies an action on them.
/// NOTE: This was previously parallelized across all available cores, but now runs sequentially because parallel processing locked up users' machines.
/// </summary>
/// <param name="root">Directory to scan</param>
/// <param name="action">Action to apply on file path</param>
/// <param name="searchPattern">Regex pattern to search against</param>
/// <param name="logger"></param>
/// <exception cref="ArgumentException"></exception>
public int TraverseTreeParallelForEach(string root, Action<string> action, string searchPattern, ILogger logger)
{
// Count of files traversed for diagnostic output
var fileCount = 0;
// Data structure to hold names of subfolders to be examined for files.
var dirs = new Stack<string>();
if (!FileSystem.Directory.Exists(root)) {
throw new ArgumentException("The directory doesn't exist");
}
dirs.Push(root);
while (dirs.Count > 0) {
var currentDir = dirs.Pop();
IEnumerable<string> subDirs;
string[] files;
try {
subDirs = GetDirectories(currentDir);
}
// Thrown if we do not have discovery permission on the directory.
catch (UnauthorizedAccessException e) {
logger.LogCritical(e, "Unauthorized access on {Directory}", currentDir);
continue;
}
// Thrown if another process has deleted the directory after we retrieved its name.
catch (DirectoryNotFoundException e) {
logger.LogCritical(e, "Directory not found on {Directory}", currentDir);
continue;
}
try {
files = GetFilesWithCertainExtensions(currentDir, searchPattern)
.ToArray();
}
catch (UnauthorizedAccessException e) {
logger.LogCritical(e, "Unauthorized access on a file in {Directory}", currentDir);
continue;
}
catch (DirectoryNotFoundException e) {
logger.LogCritical(e, "Directory not found on a file in {Directory}", currentDir);
continue;
}
catch (IOException e) {
logger.LogCritical(e, "IO exception on a file in {Directory}", currentDir);
continue;
}
// Files are opened and processed sequentially. This loop previously ran in
// parallel, which is why AggregateException is still handled below.
try {
foreach (var file in files) {
action(file);
fileCount++;
}
}
catch (AggregateException ae) {
ae.Handle((ex) => {
if (ex is not UnauthorizedAccessException) return false;
// Here we just output a message and go on.
_logger.LogError(ex, "Unauthorized access on file");
return true;
});
}
// Push the subdirectories onto the stack for traversal.
// This could also be done before handling the files.
foreach (var str in subDirs)
dirs.Push(str);
}
return fileCount;
}
/// <summary>
/// Attempts to delete the files passed to it. Swallows exceptions.
/// </summary>
/// <param name="files">Full path of files to delete</param>
public void DeleteFiles(IEnumerable<string> files)
{
foreach (var file in files)
{
try
{
FileSystem.FileInfo.New(file).Delete();
}
catch (Exception)
{
/* Swallow exception */
}
}
}
/// <summary>
/// Returns the human-readable file size for an arbitrary, 64-bit file size
/// <remarks>The default format is "0.## XB", e.g. "4.2 KB" or "1.43 GB". Adapted from https://www.somacon.com/p576.php</remarks>
/// </summary>
/// <param name="bytes"></param>
/// <returns></returns>
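/// <example>
/// Illustrative outputs, traced through the shift-and-divide logic below:
/// <code>
/// GetHumanReadableBytes(4096);    // "4 KB"
/// GetHumanReadableBytes(1048576); // "1 MB"
/// </code>
/// </example>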
public static string GetHumanReadableBytes(long bytes)
{
// Get absolute value
var absoluteBytes = (bytes < 0 ? -bytes : bytes);
// Determine the suffix and readable value
string suffix;
double readable;
switch (absoluteBytes)
{
// Exabyte
case >= 0x1000000000000000:
suffix = "EB";
readable = (bytes >> 50);
break;
// Petabyte
case >= 0x4000000000000:
suffix = "PB";
readable = (bytes >> 40);
break;
// Terabyte
case >= 0x10000000000:
suffix = "TB";
readable = (bytes >> 30);
break;
// Gigabyte
case >= 0x40000000:
suffix = "GB";
readable = (bytes >> 20);
break;
// Megabyte
case >= 0x100000:
suffix = "MB";
readable = (bytes >> 10);
break;
// Kilobyte
case >= 0x400:
suffix = "KB";
readable = bytes;
break;
default:
return bytes.ToString("0 B"); // Byte
}
// Divide by 1024 to get fractional value
readable = (readable / 1024);
// Return formatted number with suffix
return readable.ToString("0.## ") + suffix;
}
/// <summary>
/// Removes all files except images from the directory. Includes sub directories.
/// </summary>
/// <param name="directoryName">Fully qualified directory</param>
public void RemoveNonImages(string directoryName)
{
DeleteFiles(GetFiles(directoryName, searchOption: SearchOption.AllDirectories).Where(file => !Tasks.Scanner.Parser.Parser.IsImage(file)));
}
/// <summary>
/// Flattens all files in subfolders to the passed directory recursively.
///
///
/// foo<para />
/// ├── 1.txt<para />
/// ├── 2.txt<para />
/// ├── 3.txt<para />
/// ├── 4.txt<para />
/// └── bar<para />
/// ├── 1.txt<para />
/// ├── 2.txt<para />
/// └── 5.txt<para />
///
/// becomes:<para />
/// foo<para />
/// ├── 1.txt<para />
/// ├── 2.txt<para />
/// ├── 3.txt<para />
/// ├── 4.txt<para />
/// ├── bar_1.txt<para />
/// ├── bar_2.txt<para />
/// └── bar_5.txt<para />
/// </summary>
/// <param name="directoryName">Fully qualified Directory name</param>
public void Flatten(string directoryName)
{
if (string.IsNullOrEmpty(directoryName) || !FileSystem.Directory.Exists(directoryName)) return;
var directory = FileSystem.DirectoryInfo.New(directoryName);
var index = 0;
FlattenDirectory(directory, directory, ref index);
}
/// <summary>
/// Checks whether a directory has write permissions by writing a test file into it
/// </summary>
/// <remarks>The directory is created if missing and deleted (with all of its contents) after the check, so only pass temporary/scratch directories</remarks>
/// <param name="directoryName">Fully qualified path</param>
/// <returns></returns>
public async Task<bool> CheckWriteAccess(string directoryName)
{
try
{
ExistOrCreate(directoryName);
await FileSystem.File.WriteAllTextAsync(
FileSystem.Path.Join(directoryName, "test.txt"),
string.Empty);
}
catch (Exception)
{
ClearAndDeleteDirectory(directoryName);
return false;
}
ClearAndDeleteDirectory(directoryName);
return true;
}
private static void FlattenDirectory(IFileSystemInfo root, IDirectoryInfo directory, ref int directoryIndex)
{
if (!root.FullName.Equals(directory.FullName))
{
var fileIndex = 1;
foreach (var file in directory.EnumerateFiles().OrderByNatural(file => file.FullName))
{
if (file.Directory == null) continue;
var paddedIndex = Tasks.Scanner.Parser.Parser.PadZeros(directoryIndex + "");
// We need to rename the files so that after flattening, they are in the order we found them
var newName = $"{paddedIndex}_{Tasks.Scanner.Parser.Parser.PadZeros(fileIndex + "")}{file.Extension}";
var newPath = Path.Join(root.FullName, newName);
if (!File.Exists(newPath)) file.MoveTo(newPath);
fileIndex++;
}
directoryIndex++;
}
foreach (var subDirectory in directory.EnumerateDirectories().OrderByNatural(d => d.FullName))
{
// Check that the directory is not blacklisted (e.g. __MACOSX)
if (Tasks.Scanner.Parser.Parser.HasBlacklistedFolderInPath(subDirectory.FullName)) continue;
FlattenDirectory(root, subDirectory, ref directoryIndex);
}
}
}