Merge branch 'kovidgoyal/master'

This commit is contained in:
Charles Haley 2013-07-07 15:39:58 +02:00
commit 1ad3753e20
22 changed files with 892 additions and 78 deletions

View File

@@ -20,6 +20,56 @@
# new recipes:
# - title:
- version: 0.9.38
date: 2013-07-05
new features:
- title: "Book polishing: Add option to embed all referenced fonts when polishing books using the 'Polish Books' tool."
tickets: [1196038]
- title: "DOCX Input: Add support for clickable (hyperlinked) images"
tickets: [1196728]
- title: "DOCX Input: Insert page breaks at the start of every new section"
tickets: [1196728]
- title: "Drivers for Trekstor Pyrus Maxi and PocketBook Surfpad 2"
tickets: [1196931, 1182850]
- title: "DOCX Input: Add support for horizontal rules created by typing three hyphens and pressing enter."
bug fixes:
- title: "Fix detection of SD Card in some PRS-T2N devices"
tickets: [1197970]
- title: "MOBI Input: Fix a regression that broke parsing of MOBI files with malformed markup that also used entities for apostrophes."
tickets: [1197585]
- title: "Get Books: Update Woblink store plugin"
- title: "Metadata download dialog: Prevent the buttons from being re-ordered when the Next button is clicked."
- title: "PDF Output: Fix links that point to URLs with query parameters being mangled by the conversion process."
tickets: [1197006]
- title: "DOCX Input: Fix links pointing to locations in the same document that contain multiple, redundant bookmarks not working."
- title: "EPUB/AZW3 Output: Fix splitting on page-break-after with plain text immediately following the split point causing the text to be added before rather than after the split point."
tickets: [1196728]
- title: "DOCX Input: handle bookmarks defined at the paragraph level"
tickets: [1196728]
- title: "DOCX Input: Handle hyperlinks created as fields"
tickets: [1196728]
improved recipes:
- iprofessional
new recipes:
- title: Democracy Now
author: Antoine Beaupre
- version: 0.9.37
date: 2013-06-28

View File

@@ -46,17 +46,31 @@ The default values for the tweaks are reproduced below
Overriding icons, templates, et cetera
----------------------------------------
|app| allows you to override the static resources, like icons, templates, javascript, etc. with customized versions that you like.
All static resources are stored in the resources sub-folder of the calibre install location. On Windows, this is usually
:file:`C:/Program Files/Calibre2/resources`. On OS X, :file:`/Applications/calibre.app/Contents/Resources/resources/`. On linux, if you are using the binary installer
from the calibre website it will be :file:`/opt/calibre/resources`. These paths can change depending on where you choose to install |app|.
|app| allows you to override the static resources, like icons, javascript and
templates for the metadata jacket, catalogs, etc. with customized versions that
you like. All static resources are stored in the resources sub-folder of the
calibre install location. On Windows, this is usually :file:`C:/Program Files/Calibre2/resources`.
On OS X, :file:`/Applications/calibre.app/Contents/Resources/resources/`. On linux, if
you are using the binary installer from the calibre website it will be
:file:`/opt/calibre/resources`. These paths can change depending on where you
choose to install |app|.
You should not change the files in this resources folder, as your changes will get overwritten the next time you update |app|. Instead, go to
:guilabel:`Preferences->Advanced->Miscellaneous` and click :guilabel:`Open calibre configuration directory`. In this configuration directory, create a sub-folder called resources and place the files you want to override in it. Place the files in the appropriate sub folders, for example place images in :file:`resources/images`, etc.
|app| will automatically use your custom file in preference to the built-in one the next time it is started.
You should not change the files in this resources folder, as your changes will
get overwritten the next time you update |app|. Instead, go to
:guilabel:`Preferences->Advanced->Miscellaneous` and click
:guilabel:`Open calibre configuration directory`. In this configuration directory, create a
sub-folder called resources and place the files you want to override in it.
Place the files in the appropriate sub folders, for example place images in
:file:`resources/images`, etc. |app| will automatically use your custom file
in preference to the built-in one the next time it is started.
For example, if you wanted to change the icon for the :guilabel:`Remove books` action, you would first look in the built-in resources folder and see that the relevant file is
:file:`resources/images/trash.png`. Assuming you have an alternate icon in PNG format called :file:`mytrash.png` you would save it in the configuration directory as :file:`resources/images/trash.png`. All the icons used by the calibre user interface are in :file:`resources/images` and its sub-folders.
For example, if you wanted to change the icon for the :guilabel:`Remove books`
action, you would first look in the built-in resources folder and see that the
relevant file is :file:`resources/images/trash.png`. Assuming you have an
alternate icon in PNG format called :file:`mytrash.png` you would save it in
the configuration directory as :file:`resources/images/trash.png`. All the
icons used by the calibre user interface are in :file:`resources/images` and
its sub-folders.
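The lookup order described above (user override first, then the built-in copy) can be sketched as follows; this is a minimal illustration with a hypothetical `resolve_resource` helper and directory arguments, not calibre's actual implementation:

```python
import os

def resolve_resource(name, config_dir, install_dir):
    # Prefer the user's override in the configuration directory's
    # resources sub-folder, falling back to the built-in copy
    # shipped in the install location's resources folder.
    override = os.path.join(config_dir, 'resources', name)
    builtin = os.path.join(install_dir, 'resources', name)
    return override if os.path.exists(override) else builtin
```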
Customizing |app| with plugins
--------------------------------

View File

@@ -16,16 +16,13 @@
<div class="body">
{% if not embedded %}
<div id="ad-container" style="text-align:center">
<script type="text/javascript"><!--
google_ad_client = "ca-pub-5939552585043235";
/* User Manual horizontal */
google_ad_slot = "7580893187";
google_ad_width = 728;
google_ad_height = 90;
//-->
</script>
<script type="text/javascript"
src="http://pagead2.googlesyndication.com/pagead/show_ads.js"></script>
<script async="async" src="http://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<ins class="adsbygoogle"
style="display:inline-block;width:728px;height:90px"
data-ad-client="ca-pub-5939552585043235"
data-ad-slot="7580893187"></ins>
<script>
(adsbygoogle = window.adsbygoogle || []).push({});
</script>
</div>
{% endif %}

View File

@@ -0,0 +1,10 @@
from calibre.web.feeds.news import AutomaticNewsRecipe

class BasicUserRecipe1373130920(AutomaticNewsRecipe):
    title = u'Glenn Greenwald | guardian.co.uk'
    language = 'en_GB'
    __author__ = 'anywho'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True
    feeds = [(u'Latest', u'http://www.guardian.co.uk/profile/glenn-greenwald/rss')]

View File

@@ -0,0 +1,14 @@
from calibre.web.feeds.news import AutomaticNewsRecipe

class BasicUserRecipe1373130372(AutomaticNewsRecipe):
    title = u'Ludwig von Mises Institute'
    __author__ = 'anywho'
    language = 'en'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True
    feeds = [(u'Daily Articles (Full text version)',
              u'http://feed.mises.org/MisesFullTextArticles'),
             (u'Mises Blog Posts',
              u'http://mises.org/blog/index.rdf')]

View File

@@ -165,7 +165,7 @@ class Translations(POT): # {{{
subprocess.check_call(['msgfmt', '-o', dest, iso639])
elif locale not in ('en_GB', 'en_CA', 'en_AU', 'si', 'ur', 'sc',
'ltg', 'nds', 'te', 'yi', 'fo', 'sq', 'ast', 'ml', 'ku',
'fr_CA', 'him', 'jv', 'ka', 'fur', 'ber'):
'fr_CA', 'him', 'jv', 'ka', 'fur', 'ber', 'my'):
self.warn('No ISO 639 translations for locale:', locale)
if self.iso639_errors:

View File

@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = u'calibre'
numeric_version = (0, 9, 37)
numeric_version = (0, 9, 38)
__version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"

View File

@@ -9,14 +9,15 @@ __docformat__ = 'restructuredtext en'
SPOOL_SIZE = 30*1024*1024
def _get_next_series_num_for_list(series_indices):
def _get_next_series_num_for_list(series_indices, unwrap=True):
from calibre.utils.config_base import tweaks
from math import ceil, floor
if not series_indices:
if isinstance(tweaks['series_index_auto_increment'], (int, float)):
return float(tweaks['series_index_auto_increment'])
return 1.0
series_indices = [x[0] for x in series_indices]
if unwrap:
series_indices = [x[0] for x in series_indices]
if tweaks['series_index_auto_increment'] == 'next':
return floor(series_indices[-1]) + 1
if tweaks['series_index_auto_increment'] == 'first_free':
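The two auto-increment modes read from the tweak above can be sketched in isolation. This is a simplified stand-in: `next_series_index` is a hypothetical name, and the `'first_free'` branch (truncated in this hunk) is reconstructed on the assumption that it picks the smallest unused positive integer index:

```python
from math import floor

def next_series_index(series_indices, mode='next'):
    # series_indices: sorted indices already used in the series
    if not series_indices:
        return 1.0
    if mode == 'next':
        # One past the highest index currently in the series
        return float(floor(series_indices[-1]) + 1)
    if mode == 'first_free':
        # Assumed behaviour: smallest unused positive integer index
        used = {floor(x) for x in series_indices}
        candidate = 1
        while candidate in used:
            candidate += 1
        return float(candidate)
    return 1.0
```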

View File

@@ -26,10 +26,10 @@ from calibre.utils.date import utcfromtimestamp, parse_date
from calibre.utils.filenames import (is_case_sensitive, samefile, hardlink_file, ascii_filename,
WindowsAtomicFolderMove)
from calibre.utils.magick.draw import save_cover_data_to
from calibre.utils.recycle_bin import delete_tree
from calibre.utils.recycle_bin import delete_tree, delete_file
from calibre.db.tables import (OneToOneTable, ManyToOneTable, ManyToManyTable,
SizeTable, FormatsTable, AuthorsTable, IdentifiersTable, PathTable,
CompositeTable, LanguagesTable, UUIDTable)
CompositeTable, UUIDTable)
# }}}
'''
@@ -711,7 +711,6 @@ class DB(object):
'authors':AuthorsTable,
'formats':FormatsTable,
'identifiers':IdentifiersTable,
'languages':LanguagesTable,
}.get(col, ManyToManyTable)
tables[col] = cls(col, self.field_metadata[col].copy())
@@ -940,6 +939,15 @@ class DB(object):
def has_format(self, book_id, fmt, fname, path):
return self.format_abspath(book_id, fmt, fname, path) is not None
def remove_format(self, book_id, fmt, fname, path):
path = self.format_abspath(book_id, fmt, fname, path)
if path is not None:
try:
delete_file(path)
except:
import traceback
traceback.print_exc()
def copy_cover_to(self, path, dest, windows_atomic_move=None, use_hardlink=False):
path = os.path.abspath(os.path.join(self.library_path, path, 'cover.jpg'))
if windows_atomic_move is not None:
@@ -1059,9 +1067,27 @@ class DB(object):
if wam is not None:
wam.close_handles()
def add_format(self, book_id, fmt, stream, title, author, path):
fname = self.construct_file_name(book_id, title, author)
path = os.path.join(self.library_path, path)
fmt = ('.' + fmt.lower()) if fmt else ''
dest = os.path.join(path, fname + fmt)
if not os.path.exists(path):
os.makedirs(path)
size = 0
if (not getattr(stream, 'name', False) or not samefile(dest, stream.name)):
with lopen(dest, 'wb') as f:
shutil.copyfileobj(stream, f)
size = f.tell()
elif os.path.exists(dest):
size = os.path.getsize(dest)
return size, fname
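The `samefile` guard in the new `add_format` avoids truncating a file that is being "copied" onto itself. A standalone sketch of that guard, with a hypothetical `copy_format_stream` helper:

```python
import os
import shutil

def copy_format_stream(stream, dest):
    # Copy stream to dest, unless the stream already *is* dest
    # (same file on disk), in which case just report its size.
    name = getattr(stream, 'name', None)
    if name is not None and os.path.exists(dest) and os.path.samefile(dest, name):
        return os.path.getsize(dest)
    with open(dest, 'wb') as f:
        shutil.copyfileobj(stream, f)
        return f.tell()
```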
def update_path(self, book_id, title, author, path_field, formats_field):
path = self.construct_path_name(book_id, title, author)
current_path = path_field.for_book(book_id)
current_path = path_field.for_book(book_id, default_value='')
formats = formats_field.for_book(book_id, default_value=())
fname = self.construct_file_name(book_id, title, author)
# Check if the metadata used to construct paths has changed
@@ -1138,5 +1164,16 @@ class DB(object):
with lopen(path, 'rb') as f:
return f.read()
def remove_books(self, path_map, permanent=False):
for book_id, path in path_map.iteritems():
if path:
path = os.path.join(self.library_path, path)
if os.path.exists(path):
self.rmtree(path, permanent=permanent)
parent = os.path.dirname(path)
if len(os.listdir(parent)) == 0:
self.rmtree(parent, permanent=permanent)
self.conn.executemany(
'DELETE FROM books WHERE id=?', [(x,) for x in path_map])
# }}}
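The new backend `remove_books` also prunes the parent (author) directory once its last book folder is gone. A minimal sketch of that cleanup, using a hypothetical `remove_book_directory` helper and `shutil.rmtree` in place of calibre's recycle-bin-aware `rmtree`:

```python
import os
import shutil

def remove_book_directory(path, delete_tree=shutil.rmtree):
    # Delete the book's folder, then prune the parent (author)
    # folder if it is now empty.
    if os.path.exists(path):
        delete_tree(path)
        parent = os.path.dirname(path)
        if os.path.exists(parent) and len(os.listdir(parent)) == 0:
            delete_tree(parent)
```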

View File

@@ -7,13 +7,15 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import os, traceback, random
import os, traceback, random, shutil
from io import BytesIO
from collections import defaultdict
from functools import wraps, partial
from calibre.constants import iswindows
from calibre.db import SPOOL_SIZE
from calibre import isbytestring
from calibre.constants import iswindows, preferred_encoding
from calibre.customize.ui import run_plugins_on_import, run_plugins_on_postimport
from calibre.db import SPOOL_SIZE, _get_next_series_num_for_list
from calibre.db.categories import get_categories
from calibre.db.locking import create_locks
from calibre.db.errors import NoSuchFormat
@@ -22,12 +24,14 @@ from calibre.db.search import Search
from calibre.db.tables import VirtualTable
from calibre.db.write import get_series_values
from calibre.db.lazy import FormatMetadata, FormatsList
from calibre.ebooks.metadata import string_to_authors
from calibre.ebooks import check_ebook_format
from calibre.ebooks.metadata import string_to_authors, author_to_author_sort
from calibre.ebooks.metadata.book.base import Metadata
from calibre.ebooks.metadata.opf2 import metadata_to_opf
from calibre.ptempfile import (base_dir, PersistentTemporaryFile,
SpooledTemporaryFile)
from calibre.utils.date import now as nowf
from calibre.utils.config import prefs
from calibre.utils.date import now as nowf, utcnow, UNDEFINED_DATE
from calibre.utils.icu import sort_key
def api(f):
@@ -51,6 +55,28 @@ def wrap_simple(lock, func):
return func(*args, **kwargs)
return ans
def run_import_plugins(path_or_stream, fmt):
fmt = fmt.lower()
if hasattr(path_or_stream, 'seek'):
path_or_stream.seek(0)
pt = PersistentTemporaryFile('_import_plugin.'+fmt)
shutil.copyfileobj(path_or_stream, pt, 1024**2)
pt.close()
path = pt.name
else:
path = path_or_stream
return run_plugins_on_import(path, fmt)
def _add_newbook_tag(mi):
tags = prefs['new_book_tags']
if tags:
for tag in [t.strip() for t in tags]:
if tag:
if not mi.tags:
mi.tags = [tag]
elif tag not in mi.tags:
mi.tags.append(tag)
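The tag-merging behaviour of `_add_newbook_tag` can be shown as a pure function; `merge_new_book_tags` is a hypothetical standalone equivalent that returns the result instead of mutating a `Metadata` object:

```python
def merge_new_book_tags(existing, new_tags):
    # Return existing tags plus any configured new-book tags that
    # are non-empty after stripping and not already present.
    tags = list(existing or [])
    for tag in (t.strip() for t in new_tags):
        if tag and tag not in tags:
            tags.append(tag)
    return tags
```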
class Cache(object):
@@ -943,6 +969,184 @@ class Cache(object):
if extra is not None or force_changes:
protected_set_field(idx, extra)
@write_api
def add_format(self, book_id, fmt, stream_or_path, replace=True, run_hooks=True, dbapi=None):
if run_hooks:
# Run import plugins
npath = run_import_plugins(stream_or_path, fmt)
fmt = os.path.splitext(npath)[-1].lower().replace('.', '').upper()
stream_or_path = lopen(npath, 'rb')
fmt = check_ebook_format(stream_or_path, fmt)
fmt = (fmt or '').upper()
self.format_metadata_cache[book_id].pop(fmt, None)
try:
name = self.fields['formats'].format_fname(book_id, fmt)
except:
name = None
if name and not replace:
return False
path = self._field_for('path', book_id).replace('/', os.sep)
title = self._field_for('title', book_id, default_value=_('Unknown'))
author = self._field_for('authors', book_id, default_value=(_('Unknown'),))[0]
stream = stream_or_path if hasattr(stream_or_path, 'read') else lopen(stream_or_path, 'rb')
size, fname = self.backend.add_format(book_id, fmt, stream, title, author, path)
del stream
max_size = self.fields['formats'].table.update_fmt(book_id, fmt, fname, size, self.backend)
self.fields['size'].table.update_sizes({book_id: max_size})
self._update_last_modified((book_id,))
if run_hooks:
# Run post import plugins
run_plugins_on_postimport(dbapi or self, book_id, fmt)
stream_or_path.close()
return True
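After the import plugins run, `add_format` re-derives the format key from the (possibly rewritten) file's extension. That normalization step can be sketched on its own; `normalize_fmt` is an illustrative name:

```python
import os

def normalize_fmt(path_or_fmt):
    # Derive the canonical upper-case format key from a file path,
    # falling back to the value itself if there is no extension.
    ext = os.path.splitext(path_or_fmt)[-1]
    return (ext or path_or_fmt).lower().replace('.', '').upper()
```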
@write_api
def remove_formats(self, formats_map, db_only=False):
table = self.fields['formats'].table
formats_map = {book_id:frozenset((f or '').upper() for f in fmts) for book_id, fmts in formats_map.iteritems()}
size_map = table.remove_formats(formats_map, self.backend)
self.fields['size'].table.update_sizes(size_map)
for book_id, fmts in formats_map.iteritems():
for fmt in fmts:
self.format_metadata_cache[book_id].pop(fmt, None)
if not db_only:
for book_id, fmts in formats_map.iteritems():
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
continue
for fmt in fmts:
try:
name = self.fields['formats'].format_fname(book_id, fmt)
except:
continue
if name and path:
self.backend.remove_format(book_id, fmt, name, path)
self._update_last_modified(tuple(formats_map.iterkeys()))
@read_api
def get_next_series_num_for(self, series):
books = ()
sf = self.fields['series']
if series:
q = icu_lower(series)
for val, book_ids in sf.iter_searchable_values(self._get_metadata, frozenset(self.all_book_ids())):
if q == icu_lower(val):
books = book_ids
break
series_indices = sorted(self._field_for('series_index', book_id) for book_id in books)
return _get_next_series_num_for_list(tuple(series_indices), unwrap=False)
@read_api
def author_sort_from_authors(self, authors):
'''Given a list of authors, return the author_sort string for the authors,
preferring the author sort associated with the author over the computed
string. '''
table = self.fields['authors'].table
result = []
rmap = {icu_lower(v):k for k, v in table.id_map.iteritems()}
for aut in authors:
aid = rmap.get(icu_lower(aut), None)
result.append(author_to_author_sort(aut) if aid is None else table.asort_map[aid])
return ' & '.join(result)
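The preference order in `author_sort_from_authors` (stored sort first, computed sort as fallback) can be sketched without the table machinery. `author_sort_string` is hypothetical, and the naive "Last, First" computation stands in for calibre's tweak-driven `author_to_author_sort`:

```python
def author_sort_string(authors, stored_sorts):
    # stored_sorts: {lower-cased author name: stored sort string}
    def computed(name):
        parts = name.split()
        if len(parts) < 2:
            return name
        return parts[-1] + ', ' + ' '.join(parts[:-1])
    return ' & '.join(
        stored_sorts.get(a.lower(), computed(a)) for a in authors)
```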
@read_api
def has_book(self, mi):
title = mi.title
if title:
if isbytestring(title):
title = title.decode(preferred_encoding, 'replace')
q = icu_lower(title)
for title in self.fields['title'].table.book_col_map.itervalues():
if q == icu_lower(title):
return True
return False
@write_api
def create_book_entry(self, mi, cover=None, add_duplicates=True, force_id=None, apply_import_tags=True, preserve_uuid=False):
if mi.tags:
mi.tags = list(mi.tags)
if apply_import_tags:
_add_newbook_tag(mi)
if not add_duplicates and self._has_book(mi):
return
series_index = (self._get_next_series_num_for(mi.series) if mi.series_index is None else mi.series_index)
if not mi.authors:
mi.authors = (_('Unknown'),)
aus = mi.author_sort if mi.author_sort else self._author_sort_from_authors(mi.authors)
mi.title = mi.title or _('Unknown')
if isbytestring(aus):
aus = aus.decode(preferred_encoding, 'replace')
if isbytestring(mi.title):
mi.title = mi.title.decode(preferred_encoding, 'replace')
conn = self.backend.conn
if force_id is None:
conn.execute('INSERT INTO books(title, series_index, author_sort) VALUES (?, ?, ?)',
(mi.title, series_index, aus))
else:
conn.execute('INSERT INTO books(id, title, series_index, author_sort) VALUES (?, ?, ?, ?)',
(force_id, mi.title, series_index, aus))
book_id = conn.last_insert_rowid()
mi.timestamp = utcnow() if mi.timestamp is None else mi.timestamp
mi.pubdate = UNDEFINED_DATE if mi.pubdate is None else mi.pubdate
if cover is not None:
mi.cover, mi.cover_data = None, (None, cover)
self._set_metadata(book_id, mi, ignore_errors=True)
if preserve_uuid and mi.uuid:
self._set_field('uuid', {book_id:mi.uuid})
# Update the caches for fields from the books table
self.fields['size'].table.book_col_map[book_id] = 0
row = next(conn.execute('SELECT sort, series_index, author_sort, uuid, has_cover FROM books WHERE id=?', (book_id,)))
for field, val in zip(('sort', 'series_index', 'author_sort', 'uuid', 'cover'), row):
if field == 'cover':
val = bool(val)
elif field == 'uuid':
self.fields[field].table.uuid_to_id_map[val] = book_id
self.fields[field].table.book_col_map[book_id] = val
return book_id
@write_api
def add_books(self, books, add_duplicates=True, apply_import_tags=True, preserve_uuid=False, dbapi=None):
duplicates, ids = [], []
for mi, format_map in books:
book_id = self._create_book_entry(mi, add_duplicates=add_duplicates, apply_import_tags=apply_import_tags, preserve_uuid=preserve_uuid)
if book_id is None:
duplicates.append((mi, format_map))
else:
ids.append(book_id)
for fmt, stream_or_path in format_map.iteritems():
self._add_format(book_id, fmt, stream_or_path, dbapi=dbapi)
return ids, duplicates
@write_api
def remove_books(self, book_ids, permanent=False):
path_map = {}
for book_id in book_ids:
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
path = None
path_map[book_id] = path
self.backend.remove_books(path_map, permanent=permanent)
for field in self.fields.itervalues():
try:
table = field.table
except AttributeError:
continue # Some fields like ondevice do not have tables
else:
table.remove_books(book_ids, self.backend)
# }}}
class SortKey(object): # {{{
@@ -959,3 +1163,5 @@ class SortKey(object): # {{{
return 0
# }}}

View File

@@ -58,15 +58,20 @@ class LibraryDatabase(object):
setattr(self, prop, partial(self.get_property,
loc=self.FIELD_MAP[fm]))
for meth in ('get_next_series_num_for', 'has_book', 'author_sort_from_authors'):
setattr(self, meth, getattr(self.new_api, meth))
self.last_update_check = self.last_modified()
def close(self):
self.backend.close()
def break_cycles(self):
delattr(self.backend, 'field_metadata')
self.data.cache.backend = None
self.data.cache = None
self.data = self.backend = self.new_api = self.field_metadata = self.prefs = self.listeners = self.refresh_ondevice = None
for x in ('data', 'backend', 'new_api', 'listeners',):
delattr(self, x)
# Library wide properties {{{
@property

View File

@@ -8,6 +8,7 @@ __copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
from datetime import datetime
from collections import defaultdict
from dateutil.tz import tzoffset
@@ -19,6 +20,10 @@ _c_speedup = plugins['speedup'][0]
ONE_ONE, MANY_ONE, MANY_MANY = xrange(3)
class Null:
pass
null = Null()
def _c_convert_timestamp(val):
if not val:
return None
@@ -54,6 +59,9 @@ class Table(object):
self.link_table = (link_table if link_table else
'books_%s_link'%self.metadata['table'])
def remove_books(self, book_ids, db):
return set()
class VirtualTable(Table):
'''
@@ -82,6 +90,14 @@ class OneToOneTable(Table):
self.metadata['column'], self.metadata['table'])):
self.book_col_map[row[0]] = self.unserialize(row[1])
def remove_books(self, book_ids, db):
clean = set()
for book_id in book_ids:
val = self.book_col_map.pop(book_id, null)
if val is not null:
clean.add(val)
return clean
class PathTable(OneToOneTable):
def set_path(self, book_id, path, db):
@@ -98,6 +114,9 @@ class SizeTable(OneToOneTable):
'WHERE data.book=books.id) FROM books'):
self.book_col_map[row[0]] = self.unserialize(row[1])
def update_sizes(self, size_map):
self.book_col_map.update(size_map)
class UUIDTable(OneToOneTable):
def read(self, db):
@@ -106,9 +125,18 @@ class UUIDTable(OneToOneTable):
def update_uuid_cache(self, book_id_val_map):
for book_id, uuid in book_id_val_map.iteritems():
self.uuid_to_id_map.pop(self.book_col_map[book_id], None) # discard old uuid
self.uuid_to_id_map.pop(self.book_col_map.get(book_id, None), None) # discard old uuid
self.uuid_to_id_map[uuid] = book_id
def remove_books(self, book_ids, db):
clean = set()
for book_id in book_ids:
val = self.book_col_map.pop(book_id, null)
if val is not null:
self.uuid_to_id_map.pop(val, None)
clean.add(val)
return clean
class CompositeTable(OneToOneTable):
def read(self, db):
@@ -120,6 +148,9 @@ class CompositeTable(OneToOneTable):
self.composite_sort = d.get('composite_sort', False)
self.use_decorations = d.get('use_decorations', False)
def remove_books(self, book_ids, db):
return set()
class ManyToOneTable(Table):
'''
@@ -152,6 +183,27 @@ class ManyToOneTable(Table):
self.col_book_map[row[1]].add(row[0])
self.book_col_map[row[0]] = row[1]
def remove_books(self, book_ids, db):
clean = set()
for book_id in book_ids:
item_id = self.book_col_map.pop(book_id, None)
if item_id is not None:
try:
self.col_book_map[item_id].discard(book_id)
except KeyError:
if self.id_map.pop(item_id, null) is not null:
clean.add(item_id)
else:
if not self.col_book_map[item_id]:
del self.col_book_map[item_id]
if self.id_map.pop(item_id, null) is not null:
clean.add(item_id)
if clean:
db.conn.executemany(
'DELETE FROM {0} WHERE id=?'.format(self.metadata['table']),
[(x,) for x in clean])
return clean
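The reference-counting pattern in `ManyToOneTable.remove_books` (drop the book-to-item link; when an item loses its last book, delete the item and report it) can be sketched over plain dicts. `remove_books_many_to_one` is a hypothetical standalone version that omits the SQL cleanup:

```python
def remove_books_many_to_one(book_col_map, col_book_map, id_map, book_ids):
    # Returns the set of item ids that became orphaned and were
    # removed from id_map.
    clean = set()
    for book_id in book_ids:
        item_id = book_col_map.pop(book_id, None)
        if item_id is None:
            continue
        books = col_book_map.get(item_id)
        if books is not None:
            books.discard(book_id)
            if books:
                continue  # item still referenced by other books
            del col_book_map[item_id]
        if id_map.pop(item_id, None) is not None:
            clean.add(item_id)
    return clean
```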
class ManyToManyTable(ManyToOneTable):
'''
@@ -162,6 +214,7 @@ class ManyToManyTable(ManyToOneTable):
table_type = MANY_MANY
selectq = 'SELECT book, {0} FROM {1} ORDER BY id'
do_clean_on_remove = True
def read_maps(self, db):
for row in db.conn.execute(
@@ -176,6 +229,27 @@ class ManyToManyTable(ManyToOneTable):
for key in tuple(self.book_col_map.iterkeys()):
self.book_col_map[key] = tuple(self.book_col_map[key])
def remove_books(self, book_ids, db):
clean = set()
for book_id in book_ids:
item_ids = self.book_col_map.pop(book_id, ())
for item_id in item_ids:
try:
self.col_book_map[item_id].discard(book_id)
except KeyError:
if self.id_map.pop(item_id, null) is not null:
clean.add(item_id)
else:
if not self.col_book_map[item_id]:
del self.col_book_map[item_id]
if self.id_map.pop(item_id, null) is not null:
clean.add(item_id)
if clean and self.do_clean_on_remove:
db.conn.executemany(
'DELETE FROM {0} WHERE id=?'.format(self.metadata['table']),
[(x,) for x in clean])
return clean
class AuthorsTable(ManyToManyTable):
def read_id_maps(self, db):
@@ -188,14 +262,29 @@ class AuthorsTable(ManyToManyTable):
author_to_author_sort(row[1]))
self.alink_map[row[0]] = row[3]
def set_sort_names(self, aus_map, db):
self.asort_map.update(aus_map)
db.conn.executemany('UPDATE authors SET sort=? WHERE id=?',
[(v, k) for k, v in aus_map.iteritems()])
def remove_books(self, book_ids, db):
clean = ManyToManyTable.remove_books(self, book_ids, db)
for item_id in clean:
self.alink_map.pop(item_id, None)
self.asort_map.pop(item_id, None)
return clean
class FormatsTable(ManyToManyTable):
do_clean_on_remove = False
def read_id_maps(self, db):
pass
def read_maps(self, db):
self.fname_map = {}
for row in db.conn.execute('SELECT book, format, name FROM data'):
self.fname_map = defaultdict(dict)
self.size_map = defaultdict(dict)
for row in db.conn.execute('SELECT book, format, name, uncompressed_size FROM data'):
if row[1] is not None:
fmt = row[1].upper()
if fmt not in self.col_book_map:
@@ -204,18 +293,64 @@ class FormatsTable(ManyToManyTable):
if row[0] not in self.book_col_map:
self.book_col_map[row[0]] = []
self.book_col_map[row[0]].append(fmt)
if row[0] not in self.fname_map:
self.fname_map[row[0]] = {}
self.fname_map[row[0]][fmt] = row[2]
self.size_map[row[0]][fmt] = row[3]
for key in tuple(self.book_col_map.iterkeys()):
self.book_col_map[key] = tuple(sorted(self.book_col_map[key]))
def remove_books(self, book_ids, db):
clean = ManyToManyTable.remove_books(self, book_ids, db)
for book_id in book_ids:
self.fname_map.pop(book_id, None)
self.size_map.pop(book_id, None)
return clean
def set_fname(self, book_id, fmt, fname, db):
self.fname_map[book_id][fmt] = fname
db.conn.execute('UPDATE data SET name=? WHERE book=? AND format=?',
(fname, book_id, fmt))
def remove_formats(self, formats_map, db):
for book_id, fmts in formats_map.iteritems():
self.book_col_map[book_id] = [fmt for fmt in self.book_col_map.get(book_id, []) if fmt not in fmts]
for m in (self.fname_map, self.size_map):
m[book_id] = {k:v for k, v in m[book_id].iteritems() if k not in fmts}
for fmt in fmts:
try:
self.col_book_map[fmt].discard(book_id)
except KeyError:
pass
db.conn.executemany('DELETE FROM data WHERE book=? AND format=?',
[(book_id, fmt) for book_id, fmts in formats_map.iteritems() for fmt in fmts])
def zero_max(book_id):
try:
return max(self.size_map[book_id].itervalues())
except ValueError:
return 0
return {book_id:zero_max(book_id) for book_id in formats_map}
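The `zero_max` helper recomputes each book's `size` field after formats are removed: the largest remaining format size, or 0 when none remain. A sketch of that recomputation as a standalone function (hypothetical `recompute_max_sizes` name):

```python
def recompute_max_sizes(size_map, book_ids):
    # size_map: {book_id: {fmt: uncompressed_size}}
    def largest(book_id):
        sizes = size_map.get(book_id, {})
        return max(sizes.values()) if sizes else 0
    return {book_id: largest(book_id) for book_id in book_ids}
```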
def update_fmt(self, book_id, fmt, fname, size, db):
fmts = list(self.book_col_map.get(book_id, []))
try:
fmts.remove(fmt)
except ValueError:
pass
fmts.append(fmt)
self.book_col_map[book_id] = tuple(fmts)
try:
self.col_book_map[fmt].add(book_id)
except KeyError:
self.col_book_map[fmt] = {book_id}
self.fname_map[book_id][fmt] = fname
self.size_map[book_id][fmt] = size
db.conn.execute('INSERT OR REPLACE INTO data (book,format,uncompressed_size,name) VALUES (?,?,?,?)',
(book_id, fmt, size, fname))
return max(self.size_map[book_id].itervalues())
class IdentifiersTable(ManyToManyTable):
def read_id_maps(self, db):
@@ -231,7 +366,19 @@ class IdentifiersTable(ManyToManyTable):
self.book_col_map[row[0]] = {}
self.book_col_map[row[0]][row[1]] = row[2]
class LanguagesTable(ManyToManyTable):
def remove_books(self, book_ids, db):
clean = set()
for book_id in book_ids:
item_map = self.book_col_map.pop(book_id, {})
for item_id in item_map:
try:
self.col_book_map[item_id].discard(book_id)
except KeyError:
clean.add(item_id)
else:
if not self.col_book_map[item_id]:
del self.col_book_map[item_id]
clean.add(item_id)
return clean
def read_id_maps(self, db):
ManyToManyTable.read_id_maps(self, db)

View File

@@ -0,0 +1,254 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import os
from io import BytesIO
from tempfile import NamedTemporaryFile
from datetime import timedelta
from calibre.db.tests.base import BaseTest, IMG
from calibre.ptempfile import PersistentTemporaryFile
from calibre.utils.date import now, UNDEFINED_DATE
def import_test(replacement_data, replacement_fmt=None):
def func(path, fmt):
if not path.endswith('.'+fmt.lower()):
raise AssertionError('path extension does not match format')
ext = (replacement_fmt or fmt).lower()
with PersistentTemporaryFile('.'+ext) as f:
f.write(replacement_data)
return f.name
return func
class AddRemoveTest(BaseTest):
def test_add_format(self): # {{{
'Test adding formats to an existing book record'
af, ae, at = self.assertFalse, self.assertEqual, self.assertTrue
cache = self.init_cache()
table = cache.fields['formats'].table
NF = b'test_add_formatxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# Test that replace=False works
previous = cache.format(1, 'FMT1')
af(cache.add_format(1, 'FMT1', BytesIO(NF), replace=False))
ae(previous, cache.format(1, 'FMT1'))
# Test that replace=True works
lm = cache.field_for('last_modified', 1)
at(cache.add_format(1, 'FMT1', BytesIO(NF), replace=True))
ae(NF, cache.format(1, 'FMT1'))
ae(cache.format_metadata(1, 'FMT1')['size'], len(NF))
at(cache.field_for('size', 1) >= len(NF))
at(cache.field_for('last_modified', 1) > lm)
ae(('FMT2','FMT1'), cache.formats(1))
at(1 in table.col_book_map['FMT1'])
# Test adding a format to a record with no formats
at(cache.add_format(3, 'FMT1', BytesIO(NF), replace=True))
ae(NF, cache.format(3, 'FMT1'))
ae(cache.format_metadata(3, 'FMT1')['size'], len(NF))
ae(('FMT1',), cache.formats(3))
at(3 in table.col_book_map['FMT1'])
at(cache.add_format(3, 'FMTX', BytesIO(NF), replace=True))
at(3 in table.col_book_map['FMTX'])
ae(('FMT1','FMTX'), cache.formats(3))
# Test running on import plugins
import calibre.db.cache as c
orig = c.run_plugins_on_import
try:
c.run_plugins_on_import = import_test(b'replacement data')
at(cache.add_format(3, 'REPL', BytesIO(NF)))
ae(b'replacement data', cache.format(3, 'REPL'))
c.run_plugins_on_import = import_test(b'replacement data2', 'REPL2')
with NamedTemporaryFile(suffix='_test_add_format.repl') as f:
f.write(NF)
f.seek(0)
at(cache.add_format(3, 'REPL', BytesIO(NF)))
ae(b'replacement data', cache.format(3, 'REPL'))
ae(b'replacement data2', cache.format(3, 'REPL2'))
finally:
c.run_plugins_on_import = orig
# Test adding FMT with path
with NamedTemporaryFile(suffix='_test_add_format.fmt9') as f:
f.write(NF)
f.seek(0)
at(cache.add_format(2, 'FMT9', f))
ae(NF, cache.format(2, 'FMT9'))
ae(cache.format_metadata(2, 'FMT9')['size'], len(NF))
at(cache.field_for('size', 2) >= len(NF))
at(2 in table.col_book_map['FMT9'])
del cache
# Test that the old interface also shows correct format data
db = self.init_old()
ae(db.formats(3, index_is_id=True), ','.join(['FMT1', 'FMTX', 'REPL', 'REPL2']))
ae(db.format(3, 'FMT1', index_is_id=True), NF)
ae(db.format(1, 'FMT1', index_is_id=True), NF)
db.close()
del db
# }}}
def test_remove_formats(self): # {{{
'Test removal of formats from book records'
af, ae, at = self.assertFalse, self.assertEqual, self.assertTrue
cache = self.init_cache()
# Test removal of non-existing format does nothing
formats = {bid:tuple(cache.formats(bid)) for bid in (1, 2, 3)}
cache.remove_formats({1:{'NF'}, 2:{'NF'}, 3:{'NF'}})
nformats = {bid:tuple(cache.formats(bid)) for bid in (1, 2, 3)}
ae(formats, nformats)
# Test full removal of format
af(cache.format(1, 'FMT1') is None)
at(cache.has_format(1, 'FMT1'))
cache.remove_formats({1:{'FMT1'}})
at(cache.format(1, 'FMT1') is None)
af(bool(cache.format_metadata(1, 'FMT1')))
af(bool(cache.format_metadata(1, 'FMT1', allow_cache=False)))
af('FMT1' in cache.formats(1))
af(cache.has_format(1, 'FMT1'))
# Test db only removal
at(cache.has_format(1, 'FMT2'))
ap = cache.format_abspath(1, 'FMT2')
if ap and os.path.exists(ap):
cache.remove_formats({1:{'FMT2'}}, db_only=True)
af(bool(cache.format_metadata(1, 'FMT2')))
af(cache.has_format(1, 'FMT2'))
at(os.path.exists(ap))
# Test that the old interface agrees
db = self.init_old()
at(db.format(1, 'FMT1', index_is_id=True) is None)
db.close()
del db
# }}}
def test_create_book_entry(self): # {{{
'Test the creation of new book entries'
from calibre.ebooks.metadata.book.base import Metadata
cache = self.init_cache()
mi = Metadata('Created One', authors=('Creator One', 'Creator Two'))
book_id = cache.create_book_entry(mi)
self.assertIsNot(book_id, None)
def do_test(cache, book_id):
for field in ('path', 'uuid', 'author_sort', 'timestamp', 'pubdate', 'title', 'authors', 'series_index', 'sort'):
self.assertTrue(cache.field_for(field, book_id))
for field in ('size', 'cover'):
self.assertFalse(cache.field_for(field, book_id))
self.assertEqual(book_id, cache.fields['uuid'].table.uuid_to_id_map[cache.field_for('uuid', book_id)])
self.assertLess(now() - cache.field_for('timestamp', book_id), timedelta(seconds=30))
self.assertEqual(('Created One', ('Creator One', 'Creator Two')), (cache.field_for('title', book_id), cache.field_for('authors', book_id)))
self.assertEqual(cache.field_for('series_index', book_id), 1.0)
self.assertEqual(cache.field_for('pubdate', book_id), UNDEFINED_DATE)
do_test(cache, book_id)
# Test that the db contains correct data
cache = self.init_cache()
do_test(cache, book_id)
self.assertIs(None, cache.create_book_entry(mi, add_duplicates=False), 'Duplicate added incorrectly')
book_id = cache.create_book_entry(mi, cover=IMG)
self.assertIsNot(book_id, None)
self.assertEqual(IMG, cache.cover(book_id))
import calibre.db.cache as c
orig = c.prefs
c.prefs = {'new_book_tags':('newbook', 'newbook2')}
try:
book_id = cache.create_book_entry(mi)
self.assertEqual(('newbook', 'newbook2'), cache.field_for('tags', book_id))
mi.tags = ('one', 'two')
book_id = cache.create_book_entry(mi)
self.assertEqual(('one', 'two') + ('newbook', 'newbook2'), cache.field_for('tags', book_id))
mi.tags = ()
finally:
c.prefs = orig
mi.uuid = 'a preserved uuid'
book_id = cache.create_book_entry(mi, preserve_uuid=True)
self.assertEqual(mi.uuid, cache.field_for('uuid', book_id))
# }}}
def test_add_books(self): # {{{
'Test the adding of new books'
from calibre.ebooks.metadata.book.base import Metadata
cache = self.init_cache()
mi = Metadata('Created One', authors=('Creator One', 'Creator Two'))
FMT1, FMT2 = b'format1', b'format2'
format_map = {'FMT1':BytesIO(FMT1), 'FMT2':BytesIO(FMT2)}
ids, duplicates = cache.add_books([(mi, format_map)])
self.assertTrue(len(ids) == 1)
self.assertFalse(duplicates)
book_id = ids[0]
self.assertEqual(set(cache.formats(book_id)), {'FMT1', 'FMT2'})
self.assertEqual(cache.format(book_id, 'FMT1'), FMT1)
self.assertEqual(cache.format(book_id, 'FMT2'), FMT2)
# }}}
def test_remove_books(self): # {{{
'Test removal of books'
cache = self.init_cache()
af, ae, at = self.assertFalse, self.assertEqual, self.assertTrue
authors = cache.fields['authors'].table
# Delete a single book, with no formats and check cleaning
self.assertIn(_('Unknown'), set(authors.id_map.itervalues()))
olen = len(authors.id_map)
item_id = {v:k for k, v in authors.id_map.iteritems()}[_('Unknown')]
cache.remove_books((3,))
for c in (cache, self.init_cache()):
table = c.fields['authors'].table
self.assertNotIn(3, c.all_book_ids())
self.assertNotIn(_('Unknown'), set(table.id_map.itervalues()))
self.assertNotIn(item_id, table.asort_map)
self.assertNotIn(item_id, table.alink_map)
ae(len(table.id_map), olen-1)
# Check that files are removed
fmtpath = cache.format_abspath(1, 'FMT1')
bookpath = os.path.dirname(fmtpath)
authorpath = os.path.dirname(bookpath)
item_id = {v:k for k, v in cache.fields['#series'].table.id_map.iteritems()}['My Series Two']
cache.remove_books((1,), permanent=True)
for x in (fmtpath, bookpath, authorpath):
af(os.path.exists(x))
for c in (cache, self.init_cache()):
table = c.fields['authors'].table
self.assertNotIn(1, c.all_book_ids())
self.assertNotIn('Author Two', set(table.id_map.itervalues()))
self.assertNotIn(6, set(c.fields['rating'].table.id_map.itervalues()))
self.assertIn('A Series One', set(c.fields['series'].table.id_map.itervalues()))
self.assertNotIn('My Series Two', set(c.fields['#series'].table.id_map.itervalues()))
self.assertNotIn(item_id, c.fields['#series'].table.col_book_map)
self.assertNotIn(1, c.fields['#series'].table.book_col_map)
# Test emptying the db
cache.remove_books(cache.all_book_ids(), permanent=True)
for f in ('authors', 'series', '#series', 'tags'):
table = cache.fields[f].table
self.assertFalse(table.id_map)
self.assertFalse(table.book_col_map)
self.assertFalse(table.col_book_map)
# }}}


@ -14,6 +14,8 @@ from future_builtins import map
rmtree = partial(shutil.rmtree, ignore_errors=True)
IMG = b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00`\x00`\x00\x00\xff\xe1\x00\x16Exif\x00\x00II*\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xdb\x00C\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\xff\xdb\x00C\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\xff\xc0\x00\x11\x08\x00\x01\x00\x01\x03\x01"\x00\x02\x11\x01\x03\x11\x01\xff\xc4\x00\x15\x00\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\n\xff\xc4\x00\x14\x10\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xc4\x00\x14\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xc4\x00\x14\x11\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xda\x00\x0c\x03\x01\x00\x02\x11\x03\x11\x00?\x00\xbf\x80\x01\xff\xd9' # noqa {{{ }}}
class BaseTest(unittest.TestCase):
longMessage = True


@ -103,6 +103,22 @@ class LegacyTest(BaseTest):
# }}}
def test_legacy_direct(self): # {{{
'Test methods that are directly equivalent in the old and new interface'
from calibre.ebooks.metadata.book.base import Metadata
ndb = self.init_legacy()
db = self.init_old()
for meth, args in {
'get_next_series_num_for': [('A Series One',)],
'author_sort_from_authors': [(['Author One', 'Author Two', 'Unknown'],)],
'has_book':[(Metadata('title one'),), (Metadata('xxxx1111'),)],
}.iteritems():
for a in args:
self.assertEqual(getattr(db, meth)(*a), getattr(ndb, meth)(*a),
'The method: %s() returned different results for argument %s' % (meth, a))
db.close()
# }}}
def test_legacy_coverage(self): # {{{
' Check that the emulation of the legacy interface is (almost) total '
cl = self.cloned_library
@ -117,18 +133,24 @@ class LegacyTest(BaseTest):
'__init__',
}
for attr in dir(db):
if attr in SKIP_ATTRS:
continue
if not hasattr(ndb, attr):
raise AssertionError('The attribute %s is missing' % attr)
obj, nobj = getattr(db, attr), getattr(ndb, attr)
if attr not in SKIP_ARGSPEC:
try:
argspec = inspect.getargspec(obj)
except TypeError:
pass
else:
self.assertEqual(argspec, inspect.getargspec(nobj), 'argspec for %s not the same' % attr)
try:
for attr in dir(db):
if attr in SKIP_ATTRS:
continue
if not hasattr(ndb, attr):
raise AssertionError('The attribute %s is missing' % attr)
obj, nobj = getattr(db, attr), getattr(ndb, attr)
if attr not in SKIP_ARGSPEC:
try:
argspec = inspect.getargspec(obj)
except TypeError:
pass
else:
self.assertEqual(argspec, inspect.getargspec(nobj), 'argspec for %s not the same' % attr)
finally:
for db in (ndb, db):
db.close()
db.break_cycles()
# }}}
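The getargspec comparison in the loop above enforces signature parity between the legacy and new database interfaces. A minimal sketch of the same idea (toy classes, not calibre's; modern Python has dropped ``inspect.getargspec``, so this uses ``getfullargspec``):

```python
import inspect

class OldDB(object):
    # Toy stand-in for the legacy interface
    def format(self, index, fmt, index_is_id=False):
        return b''

class NewDB(object):
    # Toy stand-in for the emulation layer; must expose the same signature
    def format(self, index, fmt, index_is_id=False):
        return b''

# The emulated method's argspec must match the original exactly,
# which is what the test's assertEqual on argspecs checks.
assert inspect.getfullargspec(OldDB.format) == inspect.getfullargspec(NewDB.format)
```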


@ -15,7 +15,7 @@ from calibre.db.tests.base import BaseTest
class ReadingTest(BaseTest):
def test_read(self): # {{{
'Test the reading of data from the database'
cache = self.init_cache(self.library_path)
tests = {
@ -123,7 +123,7 @@ class ReadingTest(BaseTest):
book_id, field, expected_val, val))
# }}}
def test_sorting(self): # {{{
'Test sorting'
cache = self.init_cache(self.library_path)
for field, order in {
@ -165,7 +165,7 @@ class ReadingTest(BaseTest):
('title', True)]), 'Subsort failed')
# }}}
def test_get_metadata(self): # {{{
'Test get_metadata() returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
@ -188,7 +188,7 @@ class ReadingTest(BaseTest):
self.compare_metadata(mi1, mi2)
# }}}
def test_get_cover(self): # {{{
'Test cover() returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
@ -212,7 +212,7 @@ class ReadingTest(BaseTest):
# }}}
def test_searching(self): # {{{
'Test searching returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
@ -267,7 +267,7 @@ class ReadingTest(BaseTest):
# }}}
def test_get_categories(self): # {{{
'Check that get_categories() returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
@ -286,9 +286,9 @@ class ReadingTest(BaseTest):
oval, nval = getattr(old, attr), getattr(new, attr)
if (
(category in {'rating', '#rating'} and attr in {'id_set', 'sort'}) or
(category == 'series' and attr == 'sort') or # Sorting is wrong in old
(category == 'identifiers' and attr == 'id_set') or
(category == '@Good Series') or # Sorting is wrong in old
(category == 'news' and attr in {'count', 'id_set'}) or
(category == 'formats' and attr == 'id_set')
):
@ -306,7 +306,7 @@ class ReadingTest(BaseTest):
# }}}
def test_get_formats(self): # {{{
'Test reading ebook formats using the format() method'
from calibre.library.database2 import LibraryDatabase2
from calibre.db.cache import NoSuchFormat
@ -343,3 +343,47 @@ class ReadingTest(BaseTest):
# }}}
def test_author_sort_for_authors(self): # {{{
'Test getting the author sort for authors from the db'
cache = self.init_cache()
table = cache.fields['authors'].table
table.set_sort_names({next(table.id_map.iterkeys()): 'Fake Sort'}, cache.backend)
authors = tuple(table.id_map.itervalues())
nval = cache.author_sort_from_authors(authors)
self.assertIn('Fake Sort', nval)
db = self.init_old()
self.assertEqual(db.author_sort_from_authors(authors), nval)
db.close()
del db
# }}}
def test_get_next_series_num(self): # {{{
'Test getting the next series number for a series'
cache = self.init_cache()
cache.set_field('series', {3:'test series'})
cache.set_field('series_index', {3:13})
table = cache.fields['series'].table
series = tuple(table.id_map.itervalues())
nvals = {s:cache.get_next_series_num_for(s) for s in series}
db = self.init_old()
self.assertEqual({s:db.get_next_series_num_for(s) for s in series}, nvals)
db.close()
# }}}
def test_has_book(self): # {{{
'Test detecting duplicates'
from calibre.ebooks.metadata.book.base import Metadata
cache = self.init_cache()
db = self.init_old()
for title in cache.fields['title'].table.book_col_map.itervalues():
for x in (db, cache):
self.assertTrue(x.has_book(Metadata(title)))
self.assertTrue(x.has_book(Metadata(title.upper())))
self.assertFalse(x.has_book(Metadata(title + 'XXX')))
self.assertFalse(x.has_book(Metadata(title[:1])))
db.close()
# }}}


@ -13,7 +13,7 @@ from io import BytesIO
from calibre.ebooks.metadata import author_to_author_sort
from calibre.utils.date import UNDEFINED_DATE
from calibre.db.tests.base import BaseTest
from calibre.db.tests.base import BaseTest, IMG
class WritingTest(BaseTest):
@ -364,8 +364,8 @@ class WritingTest(BaseTest):
ae(cache.field_for('cover', 1), 1)
ae(cache.set_cover({1:None}), set([1]))
ae(cache.field_for('cover', 1), 0)
img = IMG
img = b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00`\x00`\x00\x00\xff\xe1\x00\x16Exif\x00\x00II*\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xdb\x00C\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\xff\xdb\x00C\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\xff\xc0\x00\x11\x08\x00\x01\x00\x01\x03\x01"\x00\x02\x11\x01\x03\x11\x01\xff\xc4\x00\x15\x00\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\n\xff\xc4\x00\x14\x10\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xc4\x00\x14\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xc4\x00\x14\x11\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xda\x00\x0c\x03\x01\x00\x02\x11\x03\x11\x00?\x00\xbf\x80\x01\xff\xd9' # noqa {{{ }}}
# Test setting a cover
ae(cache.set_cover({bid:img for bid in (1, 2, 3)}), {1, 2, 3})
old = self.init_old()
@ -374,6 +374,9 @@ class WritingTest(BaseTest):
ae(cache.field_for('cover', book_id), 1)
ae(old.cover(book_id, index_is_id=True), img, 'Cover was not set correctly for book %d' % book_id)
self.assertTrue(old.has_cover(book_id))
old.close()
old.break_cycles()
del old
# }}}
def test_set_metadata(self): # {{{


@ -53,7 +53,7 @@ class PRST1(USBMS):
r'(PRS-T(1|2|2N)&)'
)
WINDOWS_CARD_A_MEM = re.compile(
r'(PRS-T(1|2|2N)__SD&)'
r'(PRS-T(1|2|2N)_{1,2}SD&)'
)
MAIN_MEMORY_VOLUME_LABEL = 'SONY Reader Main Memory'
STORAGE_CARD_VOLUME_LABEL = 'SONY Reader Storage Card'
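The change from `__SD` to `_{1,2}SD` widens the card-memory regex to accept device IDs carrying either one or two underscores before `SD`, which is what the changelog's "Fix detection of SD Card in some PRS-T2N devices" entry (ticket 1197970) refers to. A quick check of the new pattern against sample ID fragments (the IDs here are illustrative):

```python
import re

# The widened pattern from the diff: one or two underscores before "SD"
card_pat = re.compile(r'(PRS-T(1|2|2N)_{1,2}SD&)')

assert card_pat.search('PRS-T2N__SD&')   # old style, two underscores
assert card_pat.search('PRS-T2N_SD&')    # single underscore, now also matched
assert not card_pat.search('PRS-T2N&')   # main-memory ID must not match
```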


@ -203,6 +203,7 @@ class Convert(object):
current.append(p)
if current:
self.section_starts.append(current[0])
last = XPath('./w:body/w:sectPr')(doc)
pr = PageProperties(last)
for x in current:


@ -1019,7 +1019,6 @@ class FullFetch(QDialog): # {{{
self.log_button = self.bb.addButton(_('View log'), self.bb.ActionRole)
self.log_button.clicked.connect(self.view_log)
self.log_button.setIcon(QIcon(I('debug.png')))
self.ok_button.setEnabled(False)
self.prev_button.setVisible(False)
self.identify_widget = IdentifyWidget(self.log, self)
@ -1044,7 +1043,6 @@ class FullFetch(QDialog): # {{{
def book_selected(self, book, caches):
self.next_button.setVisible(False)
self.ok_button.setEnabled(True)
self.prev_button.setVisible(True)
self.book = book
self.stack.setCurrentIndex(1)
@ -1055,7 +1053,6 @@ class FullFetch(QDialog): # {{{
def back_clicked(self):
self.next_button.setVisible(True)
self.ok_button.setEnabled(False)
self.prev_button.setVisible(False)
self.next_button.setFocus()
self.stack.setCurrentIndex(0)
@ -1063,11 +1060,14 @@ class FullFetch(QDialog): # {{{
self.covers_widget.reset_covers()
def accept(self):
gprefs['metadata_single_gui_geom'] = bytearray(self.saveGeometry())
if self.stack.currentIndex() == 1:
return QDialog.accept(self)
# Prevent the usual dialog accept mechanisms from working
pass
gprefs['metadata_single_gui_geom'] = bytearray(self.saveGeometry())
if DEBUG_DIALOG:
if self.stack.currentIndex() == 2:
return QDialog.accept(self)
else:
if self.stack.currentIndex() == 1:
return QDialog.accept(self)
def reject(self):
gprefs['metadata_single_gui_geom'] = bytearray(self.saveGeometry())
@ -1087,6 +1087,9 @@ class FullFetch(QDialog): # {{{
def ok_clicked(self, *args):
self.cover_pixmap = self.covers_widget.cover_pixmap()
if self.stack.currentIndex() == 0:
self.next_clicked()
return
if DEBUG_DIALOG:
if self.cover_pixmap is not None:
self.w = QLabel()


@ -543,13 +543,14 @@ def do_set_metadata(db, id, stream):
def set_metadata_option_parser():
return get_parser(_(
'''
%prog set_metadata [options] id /path/to/metadata.opf
%prog set_metadata [options] id [/path/to/metadata.opf]
Set the metadata stored in the calibre database for the book identified by id
from the OPF file metadata.opf. id is an id number from the list command. You
can get a quick feel for the OPF format by using the --as-opf switch to the
show_metadata command. You can also set the metadata of individual fields with
the --field option.
the --field option. If you use the --field option, there is no need to specify
an OPF file.
'''))
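The amended usage line and help text make the OPF path optional when --field is used. A hedged usage sketch (the book id and paths are examples, not real library contents):

```shell
# Set metadata from an OPF file for the book with id 123
calibredb set_metadata 123 /path/to/metadata.opf

# With --field alone, no OPF file is needed -- the behaviour the new help text describes
calibredb set_metadata --field title:"New Title" 123
```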
def command_set_metadata(args, dbpath):


@ -259,20 +259,23 @@ def samefile(src, dst):
def windows_hardlink(src, dest):
import win32file, pywintypes
msg = u'Creating hardlink from %s to %s failed: %%s' % (src, dest)
try:
win32file.CreateHardLink(dest, src)
except pywintypes.error as e:
msg = u'Creating hardlink from %s to %s failed: %%s' % (src, dest)
raise Exception(msg % e)
# We open and close dest, to ensure its directory entry is updated
# see http://blogs.msdn.com/b/oldnewthing/archive/2011/12/26/10251026.aspx
h = win32file.CreateFile(
dest, 0, win32file.FILE_SHARE_READ | win32file.FILE_SHARE_WRITE | win32file.FILE_SHARE_DELETE,
None, win32file.OPEN_EXISTING, 0, None)
sz = win32file.GetFileSize(h)
win32file.CloseHandle(h)
try:
sz = win32file.GetFileSize(h)
finally:
win32file.CloseHandle(h)
if sz != os.path.getsize(src):
msg = u'Creating hardlink from %s to %s failed: %%s' % (src, dest)
raise Exception(msg % ('hardlink size: %d not the same as source size' % sz))
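The size comparison above guards against a hard link that silently points at stale or truncated data, and the open/close of dest forces Windows to refresh the directory entry first. A POSIX-flavoured sketch of the same verification (``os.link`` standing in for ``win32file.CreateHardLink``; paths are examples):

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'src.bin')
dest = os.path.join(tmpdir, 'dest.bin')
with open(src, 'wb') as f:
    f.write(b'x' * 1024)

os.link(src, dest)  # hard link; the diff uses win32file.CreateHardLink on Windows

# Same check as windows_hardlink(): the link must report the source's size
if os.path.getsize(dest) != os.path.getsize(src):
    raise Exception('hardlink size: %d not the same as source size' % os.path.getsize(dest))
```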
class WindowsAtomicFolderMove(object):