Sync to trunk.

This commit is contained in:
John Schember 2013-02-27 18:40:12 -05:00
commit de813bf302
165 changed files with 59470 additions and 43164 deletions

View File

@ -19,6 +19,74 @@
# new recipes:
#   - title:

- version: 0.9.20
  date: 2013-02-22

  new features:
    - title: "Book polishing: Add an option to smarten punctuation in the book when polishing"

    - title: "Book polishing: Add an option to delete all saved settings to the load saved settings button"

    - title: "Book polishing: Remember the last used settings"

    - title: "Book polishing: Add a checkbox to enable/disable the detailed polishing report"

    - title: "Add a separate tweak in Preferences->Tweaks for saving backups of files when polishing. That way you can have calibre save backups while converting EPUB->EPUB and not while polishing, if you so desire."

    - title: "Content server: Allow clicking on the book cover to download it. Useful on small screen devices where clicking the Get button may be difficult"

    - title: "Driver for Energy Systems C4 Touch."
      tickets: [1127477]

  bug fixes:
    - title: "E-book viewer: Fix a bug that could cause the back button in the viewer to skip a location"

    - title: "When tweaking/polishing an azw3 file that does not have an identified content ToC, do not auto-generate one."
      tickets: [1130729]

    - title: "Book polishing: Use the actual cover image dimensions when creating the svg wrapper for the cover image."
      tickets: [1127273]

    - title: "Book polishing: Do not error out on epub files containing an iTunesMetadata.plist file."
      tickets: [1127308]

    - title: "Book polishing: Fix trying to polish more than 5 books at a time not working"

    - title: "Content server: Add workaround for bug in latest release of Google Chrome that causes it to not work with book lists containing some utf-8 characters"
      tickets: [1130478]

    - title: "E-book viewer: When viewing EPUB files, do not parse html as xhtml even if it has svg tags embedded. This allows malformed XHTML files to still be viewed."

    - title: "Bulk metadata edit Search & replace: Update the sample values when changing the type of identifier to search on"

    - title: "Fix recipes with the / character in their names not usable from the command line"
      tickets: [1127666]

    - title: "News download: Fix regression that broke downloading of images in gif format"

    - title: "EPUB/AZW3 Output: When splitting the output html on page breaks, handle page-break-after rules correctly: the pre-split-point html should contain the full element"

    - title: "Fix stdout/stderr redirection temp files not being deleted when restarting calibre from within calibre on Windows"

    - title: "E-book viewer: When viewing epub files that have their cover marked as non-linear, show the cover at the start of the book instead of the end."
      tickets: [1126030]

    - title: "EPUB Input: Fix handling of cover references with fragments in the urls"

  improved recipes:
    - Fronda
    - Various Polish news sources

  new recipes:
    - title: Pravda
      author: Darko Miletic

    - title: PNN
      author: n.kucklaender

    - title: Various Polish news sources
      author: fenuks

- version: 0.9.19
  date: 2013-02-15

View File

@ -616,7 +616,10 @@ or a Remote Desktop solution.
If you must share the actual library, use a file syncing tool like
DropBox or rsync or Microsoft SkyDrive instead of a networked drive. Even with
these tools there is danger of data corruption/loss, so only do this if you are
willing to live with that risk. In particular, be aware that **Google Drive**
is incompatible with |app|; if you put your |app| library in Google Drive, you
*will* suffer data loss. See
`this thread <http://www.mobileread.com/forums/showthread.php?t=205581>`_ for details.

Content From The Web
---------------------
@ -692,7 +695,7 @@ Post any output you see in a help message on the `Forum <http://www.mobileread.c
|app| freezes/crashes occasionally?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are several things I know of that can cause this:
* You recently connected an external monitor or TV to your computer. In
this case, whenever |app| opens a new window like the edit metadata
@ -700,10 +703,6 @@ There are five possible things I know of, that can cause this:
you don't notice it and so you think |app| has frozen. Disconnect your
second monitor and restart calibre.
* You are using a Wacom branded USB mouse. There is an incompatibility between
Wacom mice and the graphics toolkit |app| uses. Try using a non-Wacom
mouse.
* If you use RoboForm, it is known to cause |app| to crash. Add |app| to
the blacklist of programs inside RoboForm to fix this. Or uninstall
RoboForm.
@ -714,6 +713,17 @@ There are five possible things I know of, that can cause this:
* Constant Guard Protection by Xfinity causes crashes in |app|. You have to
manually allow |app| in it or uninstall Constant Guard Protection.
* Spybot - Search & Destroy blocks |app| from accessing its temporary files,
breaking the viewing and conversion of books.
* You are using a Wacom branded USB mouse. There is an incompatibility between
Wacom mice and the graphics toolkit |app| uses. Try using a non-Wacom
mouse.
* On some 64 bit versions of Windows there are security software/settings
that prevent 64-bit |app| from working properly. If you are using the 64-bit
version of |app| try switching to the 32-bit version.
If none of the above apply to you, then there is some other program on your
computer that is interfering with |app|. First reboot your computer in safe
mode, to have as few running programs as possible, and see if the crashes still

View File

@ -0,0 +1,27 @@
from calibre.web.feeds.news import BasicNewsRecipe
import re

class AdvancedUserRecipe1361743898(BasicNewsRecipe):
    title = u'Democracy Journal'
    description = '''A journal of ideas. Published quarterly.'''
    __author__ = u'David Nye'
    language = 'en'
    oldest_article = 90
    max_articles_per_feed = 30
    no_stylesheets = True
    auto_cleanup = True

    def parse_index(self):
        articles = []
        feeds = []
        soup = self.index_to_soup("http://www.democracyjournal.org")
        for x in soup.findAll(href=re.compile("http://www\.democracyjournal\.org/\d*/.*php$")):
            url = x.get('href')
            title = self.tag_to_string(x)
            articles.append({'title':title, 'url':url, 'description':'', 'date':''})
        feeds.append(('Articles', articles))
        return feeds

    def print_version(self, url):
        return url + '?page=all'
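For reference, `parse_index` must return a list of `(feed_title, article_list)` tuples, with each article a dict carrying the keys the recipe above fills in. A minimal sketch of that shape (the article URL below is a made-up placeholder):

```python
# Shape of the value parse_index() returns; the URL is a made-up placeholder.
articles = [{'title': 'Sample article',
             'url': 'http://www.democracyjournal.org/28/sample.php',
             'description': '', 'date': ''}]
feeds = [('Articles', articles)]

assert isinstance(feeds[0], tuple)
assert set(feeds[0][1][0]) == {'title', 'url', 'description', 'date'}
```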

View File

@ -33,6 +33,21 @@ class DiscoverMagazine(BasicNewsRecipe):
    remove_tags_after = [dict(name='div', attrs={'class':'listingBar'})]

    # Login stuff
    needs_subscription = True
    use_javascript_to_login = True
    requires_version = (0, 9, 20)

    def javascript_login(self, br, username, password):
        br.visit('http://discovermagazine.com', timeout=120)
        f = br.select_form('div.login.section div.form')
        f['username'] = username
        f['password'] = password
        br.submit('input[id="signInButton"]', timeout=120)
        br.run_for_a_time(20)
    # End login stuff

    def append_page(self, soup, appendtag, position):
        pager = soup.find('span', attrs={'class':'next'})
        if pager:

View File

@ -0,0 +1,27 @@
# coding=utf-8
# https://github.com/iemejia/calibrecolombia
'''
http://www.elmalpensante.com/
'''
from calibre.web.feeds.news import BasicNewsRecipe

class ElMalpensante(BasicNewsRecipe):
    title = u'El Malpensante'
    language = 'es_CO'
    __author__ = 'Ismael Mejia <iemejia@gmail.com>'
    cover_url = 'http://elmalpensante.com/img/layout/logo.gif'
    description = 'El Malpensante'
    oldest_article = 30
    simultaneous_downloads = 20
    #tags = 'news, sport, blog'
    use_embedded_content = True
    remove_empty_feeds = True
    max_articles_per_feed = 100
    feeds = [(u'Artículos', u'http://www.elmalpensante.com/articulosRSS.php'),
             (u'Malpensantías', u'http://www.elmalpensante.com/malpensantiasRSS.php'),
             (u'Margaritas', u'http://www.elmalpensante.com/margaritasRSS.php'),
             # This one is almost the same as articulos so we leave articles
             # (u'Noticias', u'http://www.elmalpensante.com/noticiasRSS.php'),
            ]

View File

@ -0,0 +1,182 @@
__license__   = 'GPL v3'
__copyright__ = '2013, Darko Miletic <darko.miletic at gmail.com>'
'''
http://www.ft.com/intl/us-edition
'''
import datetime
from calibre.ptempfile import PersistentTemporaryFile
from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe

class FinancialTimes(BasicNewsRecipe):
    title = 'Financial Times (US) printed edition'
    __author__ = 'Darko Miletic'
    description = "The Financial Times (FT) is one of the world's leading business news and information organisations, recognised internationally for its authority, integrity and accuracy."
    publisher = 'The Financial Times Ltd.'
    category = 'news, finances, politics, UK, World'
    oldest_article = 2
    language = 'en'
    max_articles_per_feed = 250
    no_stylesheets = True
    use_embedded_content = False
    needs_subscription = True
    encoding = 'utf8'
    publication_type = 'newspaper'
    articles_are_obfuscated = True
    temp_files = []
    masthead_url = 'http://im.media.ft.com/m/img/masthead_main.jpg'
    LOGIN = 'https://registration.ft.com/registration/barrier/login'
    LOGIN2 = 'http://media.ft.com/h/subs3.html'
    INDEX = 'http://www.ft.com/intl/us-edition'
    PREFIX = 'http://www.ft.com'

    conversion_options = {
        'comment'          : description,
        'tags'             : category,
        'publisher'        : publisher,
        'language'         : language,
        'linearize_tables' : True
    }

    def get_browser(self):
        br = BasicNewsRecipe.get_browser(self)
        br.open(self.INDEX)
        if self.username is not None and self.password is not None:
            br.open(self.LOGIN2)
            br.select_form(name='loginForm')
            br['username'] = self.username
            br['password'] = self.password
            br.submit()
        return br

    keep_only_tags = [
        dict(name='div', attrs={'class':['fullstory fullstoryHeader', 'ft-story-header']}),
        dict(name='div', attrs={'class':'standfirst'}),
        dict(name='div', attrs={'id':'storyContent'}),
        dict(name='div', attrs={'class':['ft-story-body','index-detail']}),
        dict(name='h2', attrs={'class':'entry-title'}),
        dict(name='span', attrs={'class':lambda x: x and 'posted-on' in x.split()}),
        dict(name='span', attrs={'class':'author_byline'}),
        dict(name='div', attrs={'class':'entry-content'})
    ]
    remove_tags = [
        dict(name='div', attrs={'id':'floating-con'}),
        dict(name=['meta','iframe','base','object','embed','link']),
        dict(attrs={'class':['storyTools','story-package','screen-copy','story-package separator','expandable-image']})
    ]
    remove_attributes = ['width','height','lang']

    extra_css = """
        body{font-family: Georgia,Times,"Times New Roman",serif}
        h2{font-size:large}
        .ft-story-header{font-size: x-small}
        .container{font-size:x-small;}
        h3{font-size:x-small;color:#003399;}
        .copyright{font-size: x-small}
        img{margin-top: 0.8em; display: block}
        .lastUpdated{font-family: Arial,Helvetica,sans-serif; font-size: x-small}
        .byline,.ft-story-body,.ft-story-header{font-family: Arial,Helvetica,sans-serif}
    """

    def get_artlinks(self, elem):
        articles = []
        count = 0
        for item in elem.findAll('a', href=True):
            count = count + 1
            if self.test and count > 2:
                return articles
            rawlink = item['href']
            url = rawlink
            if not rawlink.startswith('http://'):
                url = self.PREFIX + rawlink
            try:
                urlverified = self.browser.open_novisit(url).geturl()  # resolve redirect
            except:
                continue
            title = self.tag_to_string(item)
            date = strftime(self.timefmt)
            articles.append({
                'title'       : title,
                'date'        : date,
                'url'         : urlverified,
                'description' : ''
            })
        return articles

    def parse_index(self):
        feeds = []
        soup = self.index_to_soup(self.INDEX)
        dates = self.tag_to_string(soup.find('div', attrs={'class':'btm-links'}).find('div'))
        self.timefmt = ' [%s]' % dates
        wide = soup.find('div', attrs={'class':'wide'})
        if not wide:
            return feeds
        allsections = wide.findAll(attrs={'class':lambda x: x and 'footwell' in x.split()})
        if not allsections:
            return feeds
        count = 0
        for item in allsections:
            count = count + 1
            if self.test and count > 2:
                return feeds
            fitem = item.h3
            if not fitem:
                fitem = item.h4
            ftitle = self.tag_to_string(fitem)
            self.report_progress(0, _('Fetching feed') + ' %s...' % (ftitle))
            feedarts = self.get_artlinks(item.ul)
            feeds.append((ftitle, feedarts))
        return feeds

    def preprocess_html(self, soup):
        items = ['promo-box','promo-title',
                 'promo-headline','promo-image',
                 'promo-intro','promo-link','subhead']
        for item in items:
            for it in soup.findAll(item):
                it.name = 'div'
                it.attrs = []
        for item in soup.findAll(style=True):
            del item['style']
        for item in soup.findAll('a'):
            limg = item.find('img')
            if item.string is not None:
                str = item.string
                item.replaceWith(str)
            else:
                if limg:
                    item.name = 'div'
                    item.attrs = []
                else:
                    str = self.tag_to_string(item)
                    item.replaceWith(str)
        for item in soup.findAll('img'):
            if not item.has_key('alt'):
                item['alt'] = 'image'
        return soup

    def get_cover_url(self):
        cdate = datetime.date.today()
        if cdate.isoweekday() == 7:
            cdate -= datetime.timedelta(days=1)
        return cdate.strftime('http://specials.ft.com/vtf_pdf/%d%m%y_FRONT1_USA.pdf')

    def get_obfuscated_article(self, url):
        count = 0
        while (count < 10):
            try:
                response = self.browser.open(url)
                html = response.read()
                count = 10
            except:
                print "Retrying download..."
                count += 1
        tfile = PersistentTemporaryFile('_fa.html')
        tfile.write(html)
        tfile.close()
        self.temp_files.append(tfile)
        return tfile.name

    def cleanup(self):
        self.browser.open('https://registration.ft.com/registration/login/logout?location=')
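The `get_cover_url` logic above skips Sundays (isoweekday 7) and falls back to Saturday's front page, since there is no Sunday print edition to fetch. A standalone sketch of that date arithmetic, with a hypothetical helper name:

```python
import datetime

def ft_cover_url(cdate):
    # Hypothetical standalone version of get_cover_url above: on Sundays
    # (isoweekday() == 7) fall back to Saturday's front-page PDF.
    if cdate.isoweekday() == 7:
        cdate -= datetime.timedelta(days=1)
    return cdate.strftime('http://specials.ft.com/vtf_pdf/%d%m%y_FRONT1_USA.pdf')

# 2013-02-24 was a Sunday, so the Saturday (23rd) cover URL is produced
print(ft_cover_url(datetime.date(2013, 2, 24)))
```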

View File

@ -23,7 +23,6 @@ class Fronda(BasicNewsRecipe):
    extra_css = '''
        h1 {font-size:150%}
        .body {text-align:left;}
        div.headline {font-weight:bold}
    '''
    earliest_date = date.today() - timedelta(days=oldest_article)
@ -72,7 +71,7 @@ class Fronda(BasicNewsRecipe):
            feeds.append((genName, articles[genName]))
        return feeds

    keep_only_tags = [
        dict(name='div', attrs={'class':'yui-g'})
    ]
@ -84,5 +83,7 @@ class Fronda(BasicNewsRecipe):
        dict(name='ul', attrs={'class':'comment-list'}),
        dict(name='ul', attrs={'class':'category'}),
        dict(name='p', attrs={'id':'comments-disclaimer'}),
        dict(name='div', attrs={'style':'text-align: left; margin-bottom: 15px;'}),
        dict(name='div', attrs={'style':'text-align: left; margin-top: 15px;'}),
        dict(name='div', attrs={'id':'comment-form'})
    ]

View File

@ -0,0 +1,12 @@
from calibre.web.feeds.news import BasicNewsRecipe

class BasicUserRecipe1361379046(BasicNewsRecipe):
    title = u'Geopolityka.org'
    language = 'pl'
    __author__ = 'chemik111'
    oldest_article = 15
    max_articles_per_feed = 100
    auto_cleanup = True

    feeds = [(u'Rss', u'http://geopolityka.org/index.php?format=feed&type=rss')]

68
recipes/hnonline.recipe Normal file
View File

@ -0,0 +1,68 @@
from calibre.web.feeds.news import BasicNewsRecipe
import re

class HNonlineRecipe(BasicNewsRecipe):
    __license__ = 'GPL v3'
    __author__ = 'lacike'
    language = 'sk'
    version = 1

    title = u'HNonline'
    publisher = u'HNonline'
    category = u'News, Newspaper'
    description = u'News from Slovakia'

    cover_url = u'http://hnonline.sk/img/sk/_relaunch/logo2.png'

    oldest_article = 1
    max_articles_per_feed = 100
    use_embedded_content = False
    remove_empty_feeds = True

    no_stylesheets = True
    remove_javascript = True

    # Feeds from: http://rss.hnonline.sk, for listing see http://rss.hnonline.sk/prehlad
    feeds = []
    feeds.append((u'HNonline|Ekonomika a firmy', u'http://rss.hnonline.sk/?p=kC1000'))
    feeds.append((u'HNonline|Slovensko', u'http://rss.hnonline.sk/?p=kC2000'))
    feeds.append((u'HNonline|Svet', u'http://rss.hnonline.sk/?p=kC3000'))
    feeds.append((u'HNonline|\u0160port', u'http://rss.hnonline.sk/?p=kC4000'))
    feeds.append((u'HNonline|Online rozhovor', u'http://rss.hnonline.sk/?p=kCR000'))
    feeds.append((u'FinWeb|Spr\u00E1vy zo sveta financi\u00ED', u'http://rss.finweb.hnonline.sk/spravodajstvo'))
    feeds.append((u'FinWeb|Koment\u00E1re a anal\u00FDzy', u'http://rss.finweb.hnonline.sk/?p=kPC200'))
    feeds.append((u'FinWeb|Invest\u00EDcie', u'http://rss.finweb.hnonline.sk/?p=kPC300'))
    feeds.append((u'FinWeb|Svet akci\u00ED', u'http://rss.finweb.hnonline.sk/?p=kPC400'))
    feeds.append((u'FinWeb|Rozhovory', u'http://rss.finweb.hnonline.sk/?p=kPC500'))
    feeds.append((u'FinWeb|T\u00E9ma t\u00FD\u017Ed\u0148a', u'http://rss.finweb.hnonline.sk/?p=kPC600'))
    feeds.append((u'FinWeb|Rebr\u00ED\u010Dky', u'http://rss.finweb.hnonline.sk/?p=kPC700'))
    feeds.append((u'HNstyle|Kult\u00FAra', u'http://style.hnonline.sk/?p=kTC100'))
    feeds.append((u'HNstyle|Auto-moto', u'http://style.hnonline.sk/?p=kTC200'))
    feeds.append((u'HNstyle|Digit\u00E1l', u'http://style.hnonline.sk/?p=kTC300'))
    feeds.append((u'HNstyle|Veda', u'http://style.hnonline.sk/?p=kTCV00'))
    feeds.append((u'HNstyle|Dizajn', u'http://style.hnonline.sk/?p=kTC400'))
    feeds.append((u'HNstyle|Cestovanie', u'http://style.hnonline.sk/?p=kTCc00'))
    feeds.append((u'HNstyle|V\u00EDkend', u'http://style.hnonline.sk/?p=kTC800'))
    feeds.append((u'HNstyle|Gastro', u'http://style.hnonline.sk/?p=kTC600'))
    feeds.append((u'HNstyle|M\u00F3da', u'http://style.hnonline.sk/?p=kTC700'))
    feeds.append((u'HNstyle|Modern\u00E1 \u017Eena', u'http://style.hnonline.sk/?p=kTCA00'))
    feeds.append((u'HNstyle|Pre\u010Do nie?!', u'http://style.hnonline.sk/?p=k7C000'))

    keep_only_tags = []
    keep_only_tags.append(dict(name = 'h1', attrs = {'class': 'detail-titulek'}))
    keep_only_tags.append(dict(name = 'div', attrs = {'class': 'detail-podtitulek'}))
    keep_only_tags.append(dict(name = 'div', attrs = {'class': 'detail-perex'}))
    keep_only_tags.append(dict(name = 'div', attrs = {'class': 'detail-text'}))

    remove_tags = []
    #remove_tags.append(dict(name = 'div', attrs = {'id': re.compile('smeplayer.*')}))

    remove_tags_after = []
    #remove_tags_after = [dict(name = 'p', attrs = {'class': 'autor_line'})]

    extra_css = '''
        @font-face {font-family: "serif1";src:url(res:///opt/sony/ebook/FONT/tt0011m_.ttf)}
        @font-face {font-family: "sans1";src:url(res:///opt/sony/ebook/FONT/LiberationSans.ttf)}
        body {font-family: sans1, serif1;}
    '''

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.4 KiB

BIN
recipes/icons/hnonline.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.5 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 454 B

BIN
recipes/icons/pravda_rs.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 606 B

View File

@ -0,0 +1,59 @@
__license__   = 'GPL v3'
__copyright__ = '2013, Darko Miletic <darko.miletic at gmail.com>'
'''
www.nezavisne.com
'''
from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe

class NezavisneNovine(BasicNewsRecipe):
    title = 'Nezavisne novine'
    __author__ = 'Darko Miletic'
    description = 'Nezavisne novine - Najnovije vijesti iz BiH, Srbije, Hrvatske, Crne Gore i svijeta'
    publisher = 'NIGP "DNN"'
    category = 'news, politics, Bosnia, Balcans'
    oldest_article = 2
    max_articles_per_feed = 200
    no_stylesheets = True
    encoding = 'utf8'
    use_embedded_content = False
    language = 'sr'
    remove_empty_feeds = True
    publication_type = 'newspaper'
    cover_url = strftime('http://pdf.nezavisne.com/slika/novina/nezavisne_novine.jpg?v=%Y%m%d')
    masthead_url = 'http://www.nezavisne.com/slika/osnova/nezavisne-novine-logo.gif'
    extra_css = """
        body{font-family: Arial,Helvetica,sans-serif }
        img{margin-bottom: 0.4em; display:block}
    """

    conversion_options = {
        'comment'   : description,
        'tags'      : category,
        'publisher' : publisher,
        'language'  : language
    }

    keep_only_tags = [dict(name='div', attrs={'class':'vijest'})]
    remove_tags_after = dict(name='div', attrs={'id':'wrap'})
    remove_tags = [
        dict(name=['meta','link','iframe','object']),
        dict(name='div', attrs={'id':'wrap'})
    ]
    remove_attributes = ['lang','xmlns:fb','xmlns:og']

    feeds = [
        (u'Novosti'             , u'http://feeds.feedburner.com/Novosti-NezavisneNovine'),
        (u'Posao'               , u'http://feeds.feedburner.com/Posao-NezavisneNovine'),
        (u'Sport'               , u'http://feeds.feedburner.com/Sport-NezavisneNovine'),
        (u'Komentar'            , u'http://feeds.feedburner.com/Komentari-NezavisneNovine'),
        (u'Umjetnost i zabava'  , u'http://feeds.feedburner.com/UmjetnostIZabava-NezavisneNovine'),
        (u'Život i stil'        , u'http://feeds.feedburner.com/ZivotIStil-NezavisneNovine'),
        (u'Auto'                , u'http://feeds.feedburner.com/Auto-NezavisneNovine'),
        (u'Nauka i tehnologija' , u'http://feeds.feedburner.com/NaukaITehnologija-NezavisneNovine')
    ]

    def preprocess_html(self, soup):
        for item in soup.findAll(style=True):
            del item['style']
        return soup

55
recipes/pnn.recipe Normal file
View File

@ -0,0 +1,55 @@
from calibre.web.feeds.recipes import BasicNewsRecipe

'''Calibre recipe to convert the RSS feeds of the PNN to an ebook.'''

class SportsIllustratedRecipe(BasicNewsRecipe) :
    __author__ = 'n.kucklaender'
    __copyright__ = 'a.peter'
    __license__ = 'GPL v3'
    language = 'de'
    description = 'PNN RSS'
    version = 1
    title = u'PNN'
    timefmt = ' [%d.%m.%Y]'

    oldest_article = 7.0
    no_stylesheets = True
    remove_javascript = True
    use_embedded_content = False
    publication_type = 'newspaper'
    remove_empty_feeds = True

    remove_tags = [dict(attrs={'class':['um-weather um-header-weather','um-has-sub um-mainnav','um-box','ts-products','um-meta-nav','um-box um-last','um-footer','um-footer-links','share hidden','um-buttons']}), dict(id=['dinsContainer'])]
    # remove_tags_before = [dict(name='div', attrs={'class':'um-first'})]
    # remove_tags_after = [dict(name='div', attrs={'class':'um-metabar'})]

    feeds = [(u'Titelseite', u'http://www.pnn.de/rss.xml'),
             (u'Dritte Seite', u'http://www.pnn.de/dritte-seite/rss.xml'),
             (u'Politik', u'http://www.pnn.de/politik/rss.xml'),
             (u'Meinung', u'http://www.pnn.de/meinung/rss.xml'),
             (u'Potsdam', u'http://www.pnn.de/potsdam/rss.xml'),
             (u'Havel-Spree', u'http://www.pnn.de/havel-spree/rss.xml'),
             (u'Potsdam-Mittelmark', u'http://www.pnn.de/pm/rss.xml'),
             (u'Berlin-Brandenburg', u'http://www.pnn.de/brandenburg-berlin/rss.xml'),
             (u'Wirtschaft', u'http://www.pnn.de/wirtschaft/rss.xml'),
             (u'Sport', u'http://www.pnn.de/sport/rss.xml'),
             (u'Regionalsport', u'http://www.pnn.de/regionalsport/rss.xml'),
             (u'Kultur', u'http://www.pnn.de/kultur/rss.xml'),
             (u'Potsdam-Kultur', u'http://www.pnn.de/potsdam-kultur/rss.xml'),
             (u'Wissen', u'http://www.pnn.de/wissen/rss.xml'),
             (u'Medien', u'http://www.pnn.de/medien/rss.xml'),
             (u'Weltspiegel', u'http://www.pnn.de/weltspiegel/rss.xml'),
             (u'Wissenschaft', u'http://www.pnn.de/campus/rss.xml'),
             (u'Mobil', u'http://www.pnn.de/mobil/rss.xml'),
             (u'Reise', u'http://www.pnn.de/reise/rss.xml'),
             (u'Ratgeber', u'http://www.pnn.de/ratgeber/rss.xml'),
             (u'Fragen des Tages', u'http://www.pnn.de/fragen-des-tages/rss.xml'),
             # (u'Potsdam bin ich', u'http://www.pnn.de/potsdam-bin-ich/rss.xml'),
             (u'Leserbriefe', u'http://www.pnn.de/leserbriefe/rss.xml')]

    def get_masthead_url(self):
        return 'http://www.pnn.de/app/base/img/pnn_logo.png'

    def print_version(self, url):
        return url.replace('.html', ',view,printVersion.html')
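The `print_version` hook above maps an article URL to PNN's print view with a plain string substitution; sketched standalone (the article URL below is made up):

```python
def pnn_print_version(url):
    # Same substitution as the recipe's print_version above
    return url.replace('.html', ',view,printVersion.html')

# A made-up article URL:
print(pnn_print_version('http://www.pnn.de/politik/123456.html'))
# -> http://www.pnn.de/politik/123456,view,printVersion.html
```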

85
recipes/pravda_rs.recipe Normal file
View File

@ -0,0 +1,85 @@
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
__license__   = 'GPL v3'
__copyright__ = '2013, Darko Miletic <darko.miletic at gmail.com>'
'''
www.pravda.rs
'''
import re
from calibre.web.feeds.recipes import BasicNewsRecipe

class Pravda_rs(BasicNewsRecipe):
    title = 'Dnevne novine Pravda'
    __author__ = 'Darko Miletic'
    description = '24 sata portal vesti iz Srbije'
    publisher = 'Dnevne novine Pravda'
    category = 'news, politics, entertainment, Serbia'
    oldest_article = 2
    max_articles_per_feed = 100
    no_stylesheets = True
    encoding = 'utf-8'
    use_embedded_content = False
    language = 'sr'
    publication_type = 'newspaper'
    remove_empty_feeds = True
    PREFIX = 'http://www.pravda.rs'
    FEEDPR = PREFIX + '/category/'
    LANGLAT = '?lng=lat'
    FEEDSU = '/feed/' + LANGLAT
    INDEX = PREFIX + LANGLAT
    masthead_url = 'http://www.pravda.rs/wp-content/uploads/2012/09/logoof.png'
    extra_css = """
        @font-face {font-family: "serif1";src:url(res:///opt/sony/ebook/FONT/tt0011m_.ttf)}
        body{font-family: Georgia,"Times New Roman",Times,serif1,serif;}
        img{display: block}
    """

    conversion_options = {
        'comment'   : description,
        'tags'      : category,
        'publisher' : publisher,
        'language'  : language
    }

    preprocess_regexps = [(re.compile(u'\u0110'), lambda match: u'\u00D0')]

    keep_only_tags = [dict(name='div', attrs={'class':'post'})]
    remove_tags = [dict(name='h3')]
    remove_tags_after = dict(name='h3')

    feeds = [
        (u'Politika'        , FEEDPR + 'politika/' + FEEDSU),
        (u'Tema Dana'       , FEEDPR + 'tema-dana/' + FEEDSU),
        (u'Hronika'         , FEEDPR + 'hronika/' + FEEDSU),
        (u'Društvo'         , FEEDPR + 'drustvo/' + FEEDSU),
        (u'Ekonomija'       , FEEDPR + 'ekonomija/' + FEEDSU),
        (u'Srbija'          , FEEDPR + 'srbija/' + FEEDSU),
        (u'Beograd'         , FEEDPR + 'beograd/' + FEEDSU),
        (u'Kultura'         , FEEDPR + 'kultura/' + FEEDSU),
        (u'Zabava'          , FEEDPR + 'zabava/' + FEEDSU),
        (u'Sport'           , FEEDPR + 'sport/' + FEEDSU),
        (u'Svet'            , FEEDPR + 'svet/' + FEEDSU),
        (u'Porodica'        , FEEDPR + 'porodica/' + FEEDSU),
        (u'Vremeplov'       , FEEDPR + 'vremeplov/' + FEEDSU),
        (u'IT'              , FEEDPR + 'it/' + FEEDSU),
        (u'Republika Srpska', FEEDPR + 'republika-srpska/' + FEEDSU),
        (u'Crna Gora'       , FEEDPR + 'crna-gora/' + FEEDSU),
        (u'EX YU'           , FEEDPR + 'eks-ju/' + FEEDSU),
        (u'Dijaspora'       , FEEDPR + 'dijaspora/' + FEEDSU),
        (u'Kolumna'         , FEEDPR + 'kolumna/' + FEEDSU),
        (u'Afere'           , FEEDPR + 'afere/' + FEEDSU),
        (u'Feljton'         , FEEDPR + 'feljton/' + FEEDSU),
        (u'Intervju'        , FEEDPR + 'intervju/' + FEEDSU),
        (u'Reportaža'       , FEEDPR + 'reportaza/' + FEEDSU),
        (u'Zanimljivosti'   , FEEDPR + 'zanimljivosti/' + FEEDSU),
        (u'Sa trga'         , FEEDPR + 'sa-trga/' + FEEDSU)
    ]

    def print_version(self, url):
        return url + self.LANGLAT

    def preprocess_raw_html(self, raw, url):
        return '<html><head><title>title</title>'+raw[raw.find('</head>'):]
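The `preprocess_raw_html` hook above throws away everything in the original `<head>` (scripts, styles, meta tags) and substitutes a minimal stub head before the unchanged `</head>` and body. A sketch of the transformation on a made-up page:

```python
def strip_head(raw):
    # Same transformation as preprocess_raw_html above: keep the document
    # from the closing </head> tag onward, prepending a stub head.
    return '<html><head><title>title</title>' + raw[raw.find('</head>'):]

raw = '<html><head><script>junk()</script></head><body>article</body></html>'
print(strip_head(raw))
# -> <html><head><title>title</title></head><body>article</body></html>
```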

View File

@ -0,0 +1,33 @@
# coding=utf-8
# https://github.com/iemejia/calibrecolombia
'''
http://www.cromos.com.co/
'''
from calibre.web.feeds.news import BasicNewsRecipe

class ElMalpensante(BasicNewsRecipe):
    title = u'Revista Cromos'
    language = 'es_CO'
    __author__ = 'Ismael Mejia <iemejia@gmail.com>'
    cover_url = 'http://www.cromos.com.co/sites/cromos.com.co/themes/cromos_theme/images/logo_morado.gif'
    description = 'Revista Cromos'
    oldest_article = 7
    simultaneous_downloads = 20
    #tags = 'news, sport, blog'
    use_embedded_content = True
    remove_empty_feeds = True
    max_articles_per_feed = 100
    feeds = [(u'Cromos', u'http://www.cromos.com.co/rss.xml'),
             (u'Moda', u'http://www.cromos.com.co/moda/feed'),
             (u'Estilo de Vida', u'http://www.cromos.com.co/estilo-de-vida/feed'),
             (u'Cuidado Personal', u'http://www.cromos.com.co/estilo-de-vida/cuidado-personal/feed'),
             (u'Salud y Alimentación', u'http://www.cromos.com.co/estilo-de-vida/salud-y-alimentacion/feed'),
             (u'Personajes', u'http://www.cromos.com.co/personajes/feed'),
             (u'Actualidad', u'http://www.cromos.com.co/personajes/actualidad/feed'),
             (u'Espectáculo', u'http://www.cromos.com.co/personajes/espectaculo/feed'),
             (u'Reportajes', u'http://www.cromos.com.co/reportajes/feed'),
             (u'Eventos', u'http://www.cromos.com.co/eventos/feed'),
             (u'Modelos', u'http://www.cromos.com.co/modelos/feed'),
            ]

View File

@ -1,24 +1,38 @@
#!/usr/bin/env python
__license__   = 'GPL v3'
__copyright__ = '2008, Darko Miletic <darko.miletic at gmail.com>'
'''
sciencenews.org
'''
from calibre.web.feeds.news import BasicNewsRecipe

class Sciencenews(BasicNewsRecipe):
    title = u'ScienceNews'
    __author__ = u'Darko Miletic and Sujata Raman'
    description = u"Science News is an award-winning weekly newsmagazine covering the most important research in all fields of science. Its 16 pages each week are packed with short, accurate articles that appeal to both general readers and scientists. Published since 1922, the magazine now reaches about 150,000 subscribers and more than 1 million readers. These are the latest News Items from Science News."

class ScienceNewsIssue(BasicNewsRecipe):
    title = u'Science News Recent Issues'
    __author__ = u'Darko Miletic, Sujata Raman and Starson17'
    description = u'''Science News is an award-winning weekly
       newsmagazine covering the most important research in all fields of science.
       Its 16 pages each week are packed with short, accurate articles that appeal
       to both general readers and scientists. Published since 1922, the magazine
       now reaches about 150,000 subscribers and more than 1 million readers.
       These are the latest News Items from Science News. This recipe downloads
       the last 30 days worth of articles.'''
    category = u'Science, Technology, News'
    publisher = u'Society for Science & the Public'
    oldest_article = 30
    language = 'en'
    max_articles_per_feed = 100
    no_stylesheets = True
    use_embedded_content = False
    auto_cleanup = True
    timefmt = ' [%A, %d %B, %Y]'
    recursions = 1
    remove_attributes = ['style']

    conversion_options = {'linearize_tables' : True,
                          'comment'          : description,
                          'tags'             : category,
                          'publisher'        : publisher,
                          'language'         : language
                         }

    extra_css = '''
        .content_description{font-family:georgia ;font-size:x-large; color:#646464 ; font-weight:bold;}
@ -27,36 +41,33 @@ class Sciencenews(BasicNewsRecipe):
        .content_edition{font-family:helvetica,arial ;font-size: xx-small ;}
        .exclusive{color:#FF0000 ;}
        .anonymous{color:#14487E ;}
        .content_content{font-family:helvetica,arial ;font-size: x-small ; color:#000000;}
        .description{color:#585858;font-family:helvetica,arial ;font-size: xx-small ;}
        .content_content{font-family:helvetica,arial ;font-size: medium ; color:#000000;}
        .description{color:#585858;font-family:helvetica,arial ;font-size: large ;}
        .credit{color:#A6A6A6;font-family:helvetica,arial ;font-size: xx-small ;}
    '''

    #keep_only_tags = [ dict(name='div', attrs={'id':'column_action'}) ]
    #remove_tags_after = dict(name='ul', attrs={'id':'content_functions_bottom'})
    #remove_tags = [
    #    dict(name='ul', attrs={'id':'content_functions_bottom'})
    #    ,dict(name='div', attrs={'id':['content_functions_top','breadcrumb_content']})
    #    ,dict(name='img', attrs={'class':'icon'})
    #    ,dict(name='div', attrs={'class': 'embiggen'})
    #]

    keep_only_tags = [ dict(name='div', attrs={'class':'content_content'}),
                       dict(name='ul', attrs={'id':'toc'})
                     ]

    feeds = [(u"Science News / News Items", u'http://sciencenews.org/index.php/feed/type/news/name/news.rss/view/feed/name/all.rss')]
    feeds = [(u"Science News Current Issues", u'http://www.sciencenews.org/view/feed/type/edition/name/issues.rss')]

    match_regexps = [
        r'www.sciencenews.org/view/feature/id/',
        r'www.sciencenews.org/view/generic/id'
    ]

    def get_cover_url(self):
        cover_url = None
        index = 'http://www.sciencenews.org/view/home'
        soup = self.index_to_soup(index)
        link_item = soup.find(name='img', alt="issue")
        print link_item
        if link_item:
            cover_url = 'http://www.sciencenews.org' + link_item['src'] + '.jpg'
        return cover_url

    #def preprocess_html(self, soup):
    #    for tag in soup.findAll(name=['span']):
    #        tag.name = 'div'
    #    return soup

    def preprocess_html(self, soup):
        for tag in soup.findAll(name=['span']):
            tag.name = 'div'
        return soup

View File

@ -0,0 +1,22 @@
# -*- coding: utf-8 -*-
# https://github.com/iemejia/calibrecolombia
'''
http://www.unperiodico.unal.edu.co/
'''
from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe

class UNPeriodico(BasicNewsRecipe):
    title = u'UN Periodico'
    language = 'es_CO'
    __author__ = 'Ismael Mejia <iemejia@gmail.com>'
    cover_url = 'http://www.unperiodico.unal.edu.co/fileadmin/templates/periodico/img/logoperiodico.png'
    description = 'UN Periodico'
    oldest_article = 30
    max_articles_per_feed = 100
    publication_type = 'newspaper'
    feeds = [
        (u'UNPeriodico', u'http://www.unperiodico.unal.edu.co/rss/type/rss2/')
    ]

View File

@ -41,17 +41,9 @@ class AdvancedUserRecipe1249039563(BasicNewsRecipe):
    #######################################################################################################

    temp_files = []
    articles_are_obfuscated = True
    use_javascript_to_login = True

    def javascript_login(self, br, username, password):
        'Volksrant wants the user to explicitly allow cookies'
        if not br.visit('http://www.volkskrant.nl'):
            raise Exception('Failed to connect to volksrant website')
        br.click('#pop_cookie_text a[onclick]', wait_for_load=True, timeout=120)

    def get_obfuscated_article(self, url):
        br = self.browser.clone_browser()
        print 'THE CURRENT URL IS: ', url
        br.open(url)
        year = date.today().year

View File

@ -55,20 +55,14 @@ class WallStreetJournal(BasicNewsRecipe):
    ]
    remove_tags_after = [dict(id="article_story_body"), {'class':"article story"},]

    use_javascript_to_login = True

    def get_browser(self):
        br = BasicNewsRecipe.get_browser(self)
        if self.username is not None and self.password is not None:
            br.open('http://commerce.wsj.com/auth/login')
            br.select_form(nr=1)
            br['user'] = self.username
            br['password'] = self.password
            res = br.submit()
            raw = res.read()
            if 'Welcome,' not in raw and '>Logout<' not in raw and '>Log Out<' not in raw:
                raise ValueError('Failed to log in to wsj.com, check your '
                        'username and password')
        return br

    def javascript_login(self, br, username, password):
        br.visit('https://id.wsj.com/access/pages/wsj/us/login_standalone.html?mg=com-wsj', timeout=120)
        f = br.select_form(nr=0)
        f['username'] = username
        f['password'] = password
        br.submit(timeout=120)

    def populate_article_metadata(self, article, soup, first):
        if first and hasattr(self, 'add_toc_thumbnail'):

View File

@ -88,7 +88,7 @@ class ZeitEPUBAbo(BasicNewsRecipe):
(re.compile(u' \u00AB'), lambda match: u'\u00AB '), # before closing quotation
(re.compile(u'\u00BB '), lambda match: u' \u00BB'), # after opening quotation
# filtering for spaces in large numbers for better readability
(re.compile(r'(?<=\d\d)(?=\d\d\d[ ,\.;\)<\?!-])'), lambda match: u'\u2008'), # end of the number with some character following
(re.compile(r'(?<=\d\d)(?=\d\d\d[ ,;\)<\?!-])'), lambda match: u'\u2008'), # end of the number with some character following
(re.compile(r'(?<=\d\d)(?=\d\d\d. )'), lambda match: u'\u2008'), # end of the number with full-stop following, then space is necessary (avoid file names)
(re.compile(u'(?<=\d)(?=\d\d\d\u2008)'), lambda match: u'\u2008'), # next level
(re.compile(u'(?<=\d)(?=\d\d\d\u2008)'), lambda match: u'\u2008'), # next level

Binary file not shown.

View File

@ -356,6 +356,10 @@ h2.library_name {
color: red;
}
#booklist a.summary_thumb img {
border: none
}
#booklist > #pagelist { display: none; }
#goto_page_dialog ul {
@ -474,5 +478,9 @@ h2.library_name {
color: red
}
.details a.details_thumb img {
border: none
}
/* }}} */

View File

@ -1,6 +1,6 @@
<div id="details_{id}" class="details">
<div class="left">
<img alt="Cover of {title}" src="{prefix}/get/cover/{id}" />
<a href="{get_url}" title="Click to read {title} in the {fmt} format" class="details_thumb"><img alt="Cover of {title}" src="{prefix}/get/cover/{id}" /></a>
</div>
<div class="right">
<div class="field formats">{formats}</div>

View File

@ -1,6 +1,6 @@
<div id="summary_{id}" class="summary">
<div class="left">
<img alt="Cover of {title}" src="{prefix}/get/thumb_90_120/{id}" />
<a href="{get_url}" class="summary_thumb" title="Click to read {title} in the {fmt} format"><img alt="Cover of {title}" src="{prefix}/get/thumb_90_120/{id}" /></a>
{get_button}
</div>
<div class="right">

View File

@ -517,3 +517,17 @@ default_tweak_format = None
# your library and your personal editing style.
preselect_first_completion = False
#: Recognize numbers inside text when sorting
# This means that when sorting on text fields like title, the text "Book 2"
# will sort before the text "Book 100". If you want this behavior, set
# numeric_collation = True. Note that doing so will cause problems with text
# that starts with numbers, and is a little slower.
numeric_collation = False

#: Sort the list of libraries alphabetically
# The list of libraries in the Copy to Library and Quick Switch menus is
# normally sorted by most used. However, if there are more than a certain
# number of such libraries, the sorting becomes alphabetic. You can set that
# number here. The default is ten libraries.
many_libraries = 10
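The behavior the `numeric_collation` tweak enables can be approximated with a natural sort key. A minimal sketch in plain Python (calibre itself uses ICU numeric collation; the helper name here is illustrative, not calibre's):

```python
import re

def natural_key(text):
    # Split into runs of digits and non-digits so that embedded numbers
    # compare by numeric value rather than character by character:
    # "Book 2" then sorts before "Book 100".
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', text)]

titles = ['Book 100', 'Book 2', 'Book 20']
print(sorted(titles, key=natural_key))  # ['Book 2', 'Book 20', 'Book 100']
```

This also shows the caveat mentioned in the comment above: keys mixing leading digits and text produce heterogeneous lists, which is where purely lexicographic collation is simpler and faster.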

View File

@ -12,14 +12,14 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2013-01-19 20:28+0000\n"
"PO-Revision-Date: 2013-02-19 18:01+0000\n"
"Last-Translator: Ferran Rius <frius64@hotmail.com>\n"
"Language-Team: Catalan <linux@softcatala.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2013-01-20 04:36+0000\n"
"X-Generator: Launchpad (build 16430)\n"
"X-Launchpad-Export-Date: 2013-02-20 04:50+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: ca\n"
#. name for aaa
@ -1920,7 +1920,7 @@ msgstr "Arára; Mato Grosso"
#. name for axk
msgid "Yaka (Central African Republic)"
msgstr "Yaka (República Centreafricana)"
msgstr "Yaka (República Centrafricana)"
#. name for axm
msgid "Armenian; Middle"
@ -3528,7 +3528,7 @@ msgstr "Buamu"
#. name for boy
msgid "Bodo (Central African Republic)"
msgstr "Bodo (República Centreafricana)"
msgstr "Bodo (República Centrafricana)"
#. name for boz
msgid "Bozo; Tiéyaxo"
@ -7928,7 +7928,7 @@ msgstr "Oromo; occidental"
#. name for gba
msgid "Gbaya (Central African Republic)"
msgstr "Gbaya (República Centreafricana)"
msgstr "Gbaya (República Centrafricana)"
#. name for gbb
msgid "Kaytetye"
@ -11184,7 +11184,7 @@ msgstr ""
#. name for kbn
msgid "Kare (Central African Republic)"
msgstr "Kare (República Centreafricana)"
msgstr "Kare (República Centrafricana)"
#. name for kbo
msgid "Keliko"
@ -20720,7 +20720,7 @@ msgstr "Pitjantjatjara"
#. name for pka
msgid "Prākrit; Ardhamāgadhī"
msgstr ""
msgstr "Pràcrit; Ardhamagadhi"
#. name for pkb
msgid "Pokomo"
@ -20776,31 +20776,31 @@ msgstr "Polonombauk"
#. name for plc
msgid "Palawano; Central"
msgstr ""
msgstr "Palawà; Central"
#. name for pld
msgid "Polari"
msgstr ""
msgstr "Polari"
#. name for ple
msgid "Palu'e"
msgstr ""
msgstr "Palue"
#. name for plg
msgid "Pilagá"
msgstr ""
msgstr "Pilagà"
#. name for plh
msgid "Paulohi"
msgstr ""
msgstr "Paulohi"
#. name for pli
msgid "Pali"
msgstr ""
msgstr "Pali"
#. name for plj
msgid "Polci"
msgstr ""
msgstr "Polci"
#. name for plk
msgid "Shina; Kohistani"
@ -20812,19 +20812,19 @@ msgstr "Palaung; Shwe"
#. name for pln
msgid "Palenquero"
msgstr ""
msgstr "Palenquero"
#. name for plo
msgid "Popoluca; Oluta"
msgstr ""
msgstr "Popoluca; Oluta"
#. name for plp
msgid "Palpa"
msgstr ""
msgstr "Palpa"
#. name for plq
msgid "Palaic"
msgstr ""
msgstr "Palaic"
#. name for plr
msgid "Senoufo; Palaka"
@ -20840,15 +20840,15 @@ msgstr "Malgaix; Plateau"
#. name for plu
msgid "Palikúr"
msgstr ""
msgstr "Palikur"
#. name for plv
msgid "Palawano; Southwest"
msgstr ""
msgstr "Palawà; Sudoccidental"
#. name for plw
msgid "Palawano; Brooke's Point"
msgstr ""
msgstr "Palawà; Brooke"
#. name for ply
msgid "Bolyu"
@ -20856,43 +20856,43 @@ msgstr ""
#. name for plz
msgid "Paluan"
msgstr ""
msgstr "Paluà"
#. name for pma
msgid "Paama"
msgstr ""
msgstr "Paama"
#. name for pmb
msgid "Pambia"
msgstr ""
msgstr "Pambia"
#. name for pmc
msgid "Palumata"
msgstr ""
msgstr "Palumata"
#. name for pme
msgid "Pwaamei"
msgstr ""
msgstr "Pwaamei"
#. name for pmf
msgid "Pamona"
msgstr ""
msgstr "Pamona"
#. name for pmh
msgid "Prākrit; Māhārāṣṭri"
msgstr ""
msgstr "Pràcrit; Maharastri"
#. name for pmi
msgid "Pumi; Northern"
msgstr ""
msgstr "Pumi; Septentrional"
#. name for pmj
msgid "Pumi; Southern"
msgstr ""
msgstr "Pumi; Meridional"
#. name for pmk
msgid "Pamlico"
msgstr ""
msgstr "Algonquí Carolina"
#. name for pml
msgid "Lingua Franca"
@ -20904,11 +20904,11 @@ msgstr "Pol"
#. name for pmn
msgid "Pam"
msgstr ""
msgstr "Pam"
#. name for pmo
msgid "Pom"
msgstr ""
msgstr "Pom"
#. name for pmq
msgid "Pame; Northern"
@ -20916,11 +20916,11 @@ msgstr "Pame; Septentrional"
#. name for pmr
msgid "Paynamar"
msgstr ""
msgstr "Paynamar"
#. name for pms
msgid "Piemontese"
msgstr ""
msgstr "Piemontès"
#. name for pmt
msgid "Tuamotuan"
@ -20956,7 +20956,7 @@ msgstr "Panjabi; Occidental"
#. name for pnc
msgid "Pannei"
msgstr ""
msgstr "Pannei"
#. name for pne
msgid "Penan; Western"
@ -20964,11 +20964,11 @@ msgstr "Penan; Occidental"
#. name for png
msgid "Pongu"
msgstr ""
msgstr "Pongu"
#. name for pnh
msgid "Penrhyn"
msgstr ""
msgstr "Penrhyn"
#. name for pni
msgid "Aoheng"
@ -20976,27 +20976,27 @@ msgstr ""
#. name for pnm
msgid "Punan Batu 1"
msgstr ""
msgstr "Punan Batu"
#. name for pnn
msgid "Pinai-Hagahai"
msgstr ""
msgstr "Pinai-Hagahai"
#. name for pno
msgid "Panobo"
msgstr ""
msgstr "Panobo"
#. name for pnp
msgid "Pancana"
msgstr ""
msgstr "Pancana"
#. name for pnq
msgid "Pana (Burkina Faso)"
msgstr ""
msgstr "Pana (Burkina Faso)"
#. name for pnr
msgid "Panim"
msgstr ""
msgstr "Panim"
#. name for pns
msgid "Ponosakan"
@ -21028,7 +21028,7 @@ msgstr ""
#. name for pnz
msgid "Pana (Central African Republic)"
msgstr ""
msgstr "Pana (República Centrafricana)"
#. name for poc
msgid "Poqomam"
@ -21056,7 +21056,7 @@ msgstr ""
#. name for poi
msgid "Popoluca; Highland"
msgstr ""
msgstr "Popoluca; Muntanya"
#. name for pok
msgid "Pokangá"
@ -21084,7 +21084,7 @@ msgstr ""
#. name for poq
msgid "Popoluca; Texistepec"
msgstr ""
msgstr "Popoluca; Texistepec"
#. name for por
msgid "Portuguese"
@ -21092,7 +21092,7 @@ msgstr "Portuguès"
#. name for pos
msgid "Popoluca; Sayula"
msgstr ""
msgstr "Popoluca; Sayula"
#. name for pot
msgid "Potawatomi"
@ -21336,7 +21336,7 @@ msgstr "Paixtú; Central"
#. name for psu
msgid "Prākrit; Sauraseni"
msgstr ""
msgstr "Pràcrit; Sauraseni"
#. name for psw
msgid "Port Sandwich"

View File

@ -10,19 +10,19 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2011-09-27 16:52+0000\n"
"Last-Translator: Kovid Goyal <Unknown>\n"
"PO-Revision-Date: 2013-02-18 02:41+0000\n"
"Last-Translator: pedro jorge oliveira <pedrojorgeoliveira93@gmail.com>\n"
"Language-Team: Portuguese <pt@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2011-11-26 05:34+0000\n"
"X-Generator: Launchpad (build 14381)\n"
"X-Launchpad-Export-Date: 2013-02-19 04:56+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: pt\n"
#. name for aaa
msgid "Ghotuo"
msgstr ""
msgstr "Ghotuo"
#. name for aab
msgid "Alumu-Tesu"
@ -498,7 +498,7 @@ msgstr ""
#. name for afr
msgid "Afrikaans"
msgstr "Africanos"
msgstr "Africano"
#. name for afs
msgid "Creole; Afro-Seminole"
@ -910,7 +910,7 @@ msgstr ""
#. name for ale
msgid "Aleut"
msgstr "aleúte"
msgstr "Aleúte"
#. name for alf
msgid "Alege"
@ -30818,7 +30818,7 @@ msgstr ""
#. name for zxx
msgid "No linguistic content"
msgstr ""
msgstr "Sem conteúdo linguistico"
#. name for zyb
msgid "Zhuang; Yongbei"

View File

@ -9,14 +9,14 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2012-12-21 03:31+0000\n"
"Last-Translator: Fábio Malcher Miranda <mirand863@hotmail.com>\n"
"PO-Revision-Date: 2013-02-17 21:57+0000\n"
"Last-Translator: Neliton Pereira Jr. <nelitonpjr@gmail.com>\n"
"Language-Team: Brazilian Portuguese\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2012-12-22 04:59+0000\n"
"X-Generator: Launchpad (build 16378)\n"
"X-Launchpad-Export-Date: 2013-02-18 04:49+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: \n"
#. name for aaa
@ -141,7 +141,7 @@ msgstr ""
#. name for abh
msgid "Arabic; Tajiki"
msgstr ""
msgstr "Arábico; Tajiki"
#. name for abi
msgid "Abidji"

View File

@ -9,43 +9,43 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2011-09-27 16:56+0000\n"
"Last-Translator: Clytie Siddall <clytie@riverland.net.au>\n"
"PO-Revision-Date: 2013-02-15 06:39+0000\n"
"Last-Translator: baduong <Unknown>\n"
"Language-Team: Vietnamese <gnomevi-list@lists.sourceforge.net>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2011-11-26 05:44+0000\n"
"X-Generator: Launchpad (build 14381)\n"
"X-Launchpad-Export-Date: 2013-02-16 04:56+0000\n"
"X-Generator: Launchpad (build 16491)\n"
"Language: vi\n"
#. name for aaa
msgid "Ghotuo"
msgstr ""
msgstr "Ghotuo"
#. name for aab
msgid "Alumu-Tesu"
msgstr ""
msgstr "Alumu-Tesu"
#. name for aac
msgid "Ari"
msgstr ""
msgstr "Ari"
#. name for aad
msgid "Amal"
msgstr ""
msgstr "Amal"
#. name for aae
msgid "Albanian; Arbëreshë"
msgstr ""
msgstr "An-ba-ni"
#. name for aaf
msgid "Aranadan"
msgstr ""
msgstr "Aranadan"
#. name for aag
msgid "Ambrak"
msgstr ""
msgstr "Ambrak"
#. name for aah
msgid "Arapesh; Abu'"
@ -30817,7 +30817,7 @@ msgstr ""
#. name for zxx
msgid "No linguistic content"
msgstr ""
msgstr "Không có nội dung kiểu ngôn ngữ"
#. name for zyb
msgid "Zhuang; Yongbei"
@ -30829,11 +30829,11 @@ msgstr ""
#. name for zyj
msgid "Zhuang; Youjiang"
msgstr ""
msgstr "Zhuang; Youjiang"
#. name for zyn
msgid "Zhuang; Yongnan"
msgstr ""
msgstr "Zhuang; Yongnan"
#. name for zyp
msgid "Zyphe"

View File

@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = u'calibre'
numeric_version = (0, 9, 19)
numeric_version = (0, 9, 20)
__version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"

View File

@ -16,15 +16,14 @@ import apsw
from calibre import isbytestring, force_unicode, prints
from calibre.constants import (iswindows, filesystem_encoding,
preferred_encoding)
from calibre.ptempfile import PersistentTemporaryFile, SpooledTemporaryFile
from calibre.db import SPOOL_SIZE
from calibre.ptempfile import PersistentTemporaryFile
from calibre.db.schema_upgrades import SchemaUpgrade
from calibre.library.field_metadata import FieldMetadata
from calibre.ebooks.metadata import title_sort, author_to_author_sort
from calibre.utils.icu import strcmp
from calibre.utils.icu import sort_key
from calibre.utils.config import to_json, from_json, prefs, tweaks
from calibre.utils.date import utcfromtimestamp, parse_date
from calibre.utils.filenames import is_case_sensitive
from calibre.utils.filenames import (is_case_sensitive, samefile, hardlink_file)
from calibre.db.tables import (OneToOneTable, ManyToOneTable, ManyToManyTable,
SizeTable, FormatsTable, AuthorsTable, IdentifiersTable,
CompositeTable, LanguagesTable)
@ -173,7 +172,9 @@ def _author_to_author_sort(x):
return author_to_author_sort(x.replace('|', ','))
def icu_collator(s1, s2):
return strcmp(force_unicode(s1, 'utf-8'), force_unicode(s2, 'utf-8'))
return cmp(sort_key(force_unicode(s1, 'utf-8')),
sort_key(force_unicode(s2, 'utf-8')))
# }}}
# Unused aggregators {{{
@ -855,38 +856,75 @@ class DB(object):
ans = {}
if path is not None:
stat = os.stat(path)
ans['path'] = path
ans['size'] = stat.st_size
ans['mtime'] = utcfromtimestamp(stat.st_mtime)
return ans
def cover(self, path, as_file=False, as_image=False,
as_path=False):
def has_format(self, book_id, fmt, fname, path):
return self.format_abspath(book_id, fmt, fname, path) is not None
def copy_cover_to(self, path, dest, windows_atomic_move=None, use_hardlink=False):
path = os.path.join(self.library_path, path, 'cover.jpg')
ret = None
if os.access(path, os.R_OK):
try:
if windows_atomic_move is not None:
if not isinstance(dest, basestring):
raise Exception("Error, you must pass the dest as a path when"
" using windows_atomic_move")
if os.access(path, os.R_OK) and dest and not samefile(dest, path):
windows_atomic_move.copy_path_to(path, dest)
return True
else:
if os.access(path, os.R_OK):
try:
f = lopen(path, 'rb')
except (IOError, OSError):
time.sleep(0.2)
f = lopen(path, 'rb')
except (IOError, OSError):
time.sleep(0.2)
f = lopen(path, 'rb')
with f:
if as_path:
pt = PersistentTemporaryFile('_dbcover.jpg')
with pt:
shutil.copyfileobj(f, pt)
return pt.name
if as_file:
ret = SpooledTemporaryFile(SPOOL_SIZE)
shutil.copyfileobj(f, ret)
ret.seek(0)
else:
ret = f.read()
if as_image:
from PyQt4.Qt import QImage
i = QImage()
i.loadFromData(ret)
ret = i
return ret
with f:
if hasattr(dest, 'write'):
shutil.copyfileobj(f, dest)
if hasattr(dest, 'flush'):
dest.flush()
return True
elif dest and not samefile(dest, path):
if use_hardlink:
try:
hardlink_file(path, dest)
return True
except:
pass
with lopen(dest, 'wb') as d:
shutil.copyfileobj(f, d)
return True
return False
def copy_format_to(self, book_id, fmt, fname, path, dest,
windows_atomic_move=None, use_hardlink=False):
path = self.format_abspath(book_id, fmt, fname, path)
if path is None:
return False
if windows_atomic_move is not None:
if not isinstance(dest, basestring):
raise Exception("Error, you must pass the dest as a path when"
" using windows_atomic_move")
if dest and not samefile(dest, path):
windows_atomic_move.copy_path_to(path, dest)
else:
if hasattr(dest, 'write'):
with lopen(path, 'rb') as f:
shutil.copyfileobj(f, dest)
if hasattr(dest, 'flush'):
dest.flush()
elif dest and not samefile(dest, path):
if use_hardlink:
try:
hardlink_file(path, dest)
return True
except:
pass
with lopen(path, 'rb') as f, lopen(dest, 'wb') as d:
shutil.copyfileobj(f, d)
return True
# }}}
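Both `copy_cover_to` and `copy_format_to` above use the same pattern: attempt a cheap hardlink first, and silently fall back to a byte-for-byte copy if linking fails. A self-contained sketch of that pattern (the function name is mine, not calibre's):

```python
import os
import shutil

def link_or_copy(src, dest, use_hardlink=False):
    # A hardlink is nearly free and shares storage with the source,
    # but it can fail for many reasons (source and destination on
    # different filesystems, permissions, FS without hardlink
    # support), so fall back to a plain copy on any OSError.
    if use_hardlink:
        try:
            os.link(src, dest)
            return
        except OSError:
            pass
    with open(src, 'rb') as f, open(dest, 'wb') as d:
        shutil.copyfileobj(f, d)
```

Either way the caller ends up with `dest` holding the same bytes as `src`, which is why the real methods can return `True` from both branches.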

View File

@ -8,16 +8,21 @@ __copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import os, traceback
from io import BytesIO
from collections import defaultdict
from functools import wraps, partial
from calibre.db import SPOOL_SIZE
from calibre.db.categories import get_categories
from calibre.db.locking import create_locks, RecordLock
from calibre.db.errors import NoSuchFormat
from calibre.db.fields import create_field
from calibre.db.search import Search
from calibre.db.tables import VirtualTable
from calibre.db.lazy import FormatMetadata, FormatsList
from calibre.ebooks.metadata.book.base import Metadata
from calibre.ptempfile import (base_dir, PersistentTemporaryFile,
SpooledTemporaryFile)
from calibre.utils.date import now
from calibre.utils.icu import sort_key
@ -103,27 +108,6 @@ class Cache(object):
def field_metadata(self):
return self.backend.field_metadata
def _format_abspath(self, book_id, fmt):
'''
Return absolute path to the ebook file of format `format`
WARNING: This method will return a dummy path for a network backend DB,
so do not rely on it, use format(..., as_path=True) instead.
Currently used only in calibredb list, the viewer and the catalogs (via
get_data_as_dict()).
Apart from the viewer, I don't believe any of the others do any file
I/O with the results of this call.
'''
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return None
if name and path:
return self.backend.format_abspath(book_id, fmt, name, path)
def _get_metadata(self, book_id, get_user_categories=True): # {{{
mi = Metadata(None, template_cache=self.formatter_template_cache)
author_ids = self._field_ids_for('authors', book_id)
@ -162,7 +146,7 @@ class Cache(object):
if not formats:
good_formats = None
else:
mi.format_metadata = FormatMetadata(self, id, formats)
mi.format_metadata = FormatMetadata(self, book_id, formats)
good_formats = FormatsList(formats, mi.format_metadata)
mi.formats = good_formats
mi.has_cover = _('Yes') if self._field_for('cover', book_id,
@ -227,6 +211,12 @@ class Cache(object):
self.fields['ondevice'] = create_field('ondevice',
VirtualTable('ondevice'))
for name, field in self.fields.iteritems():
if name[0] == '#' and name.endswith('_index'):
field.series_field = self.fields[name[:-len('_index')]]
elif name == 'series_index':
field.series_field = self.fields['series']
@read_api
def field_for(self, name, book_id, default_value=None):
'''
@ -397,15 +387,184 @@ class Cache(object):
:param as_path: If True return the image as a path pointing to a
temporary file
'''
if as_file:
ret = SpooledTemporaryFile(SPOOL_SIZE)
if not self.copy_cover_to(book_id, ret): return
ret.seek(0)
elif as_path:
pt = PersistentTemporaryFile('_dbcover.jpg')
with pt:
if not self.copy_cover_to(book_id, pt): return
ret = pt.name
else:
buf = BytesIO()
if not self.copy_cover_to(book_id, buf): return
ret = buf.getvalue()
if as_image:
from PyQt4.Qt import QImage
i = QImage()
i.loadFromData(ret)
ret = i
return ret
@api
def copy_cover_to(self, book_id, dest, use_hardlink=False):
'''
Copy the cover to the file like object ``dest``. Returns False
if no cover exists or dest is the same file as the current cover.
dest can also be a path in which case the cover is
copied to it iff the path is different from the current path (taking
case sensitivity into account).
'''
with self.read_lock:
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return None
return False
with self.record_lock.lock(book_id):
return self.backend.cover(path, as_file=as_file, as_image=as_image,
as_path=as_path)
return self.backend.copy_cover_to(path, dest,
use_hardlink=use_hardlink)
@api
def copy_format_to(self, book_id, fmt, dest, use_hardlink=False):
'''
Copy the format ``fmt`` to the file like object ``dest``. If the
specified format does not exist, raises :class:`NoSuchFormat` error.
dest can also be a path, in which case the format is copied to it, iff
the path is different from the current path (taking case sensitivity
into account).
'''
with self.read_lock:
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
raise NoSuchFormat('Record %d has no %s file'%(book_id, fmt))
with self.record_lock.lock(book_id):
return self.backend.copy_format_to(book_id, fmt, name, path, dest,
use_hardlink=use_hardlink)
@read_api
def format_abspath(self, book_id, fmt):
'''
Return absolute path to the ebook file of format `format`
Currently used only in calibredb list, the viewer and the catalogs (via
get_data_as_dict()).
Apart from the viewer, I don't believe any of the others do any file
I/O with the results of this call.
'''
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return None
if name and path:
return self.backend.format_abspath(book_id, fmt, name, path)
@read_api
def has_format(self, book_id, fmt):
'Return True iff the format exists on disk'
try:
name = self.fields['formats'].format_fname(book_id, fmt)
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return False
return self.backend.has_format(book_id, fmt, name, path)
@read_api
def formats(self, book_id, verify_formats=True):
'''
Return tuple of all formats for the specified book. If verify_formats
is True, verifies that the files exist on disk.
'''
ans = self.field_for('formats', book_id)
if verify_formats and ans:
try:
path = self._field_for('path', book_id).replace('/', os.sep)
except:
return ()
def verify(fmt):
try:
name = self.fields['formats'].format_fname(book_id, fmt)
except:
return False
return self.backend.has_format(book_id, fmt, name, path)
ans = tuple(x for x in ans if verify(x))
return ans
@api
def format(self, book_id, fmt, as_file=False, as_path=False, preserve_filename=False):
'''
Return the ebook format as a bytestring or `None` if the format doesn't exist,
or we don't have permission to write to the ebook file.
:param as_file: If True the ebook format is returned as a file object. Note
that the file object is a SpooledTemporaryFile, so if what you want to
do is copy the format to another file, use :method:`copy_format_to`
instead for performance.
:param as_path: Copies the format file to a temp file and returns the
path to the temp file
:param preserve_filename: If True and returning a path the filename is
the same as that used in the library. Note that using
this means that repeated calls yield the same
temp file (which is re-created each time)
'''
with self.read_lock:
ext = ('.'+fmt.lower()) if fmt else ''
try:
fname = self.fields['formats'].format_fname(book_id, fmt)
except:
return None
fname += ext
if as_path:
if preserve_filename:
bd = base_dir()
d = os.path.join(bd, 'format_abspath')
try:
os.makedirs(d)
except:
pass
ret = os.path.join(d, fname)
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, ret)
except NoSuchFormat:
return None
else:
with PersistentTemporaryFile(ext) as pt, self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, pt)
except NoSuchFormat:
return None
ret = pt.name
elif as_file:
ret = SpooledTemporaryFile(SPOOL_SIZE)
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, ret)
except NoSuchFormat:
return None
ret.seek(0)
# Various bits of code try to use the name as the default
# title when reading metadata, so set it
ret.name = fname
else:
buf = BytesIO()
with self.record_lock.lock(book_id):
try:
self.copy_format_to(book_id, fmt, buf)
except NoSuchFormat:
return None
ret = buf.getvalue()
return ret
@read_api
def multisort(self, fields, ids_to_sort=None):
@ -455,6 +614,14 @@ class Cache(object):
return get_categories(self, sort=sort, book_ids=book_ids,
icon_map=icon_map)
@write_api
def set_field(self, name, book_id_to_val_map, allow_case_change=True):
# TODO: Specialize title/authors to also update path
# TODO: Handle updating caches used by composite fields
dirtied = self.fields[name].writer.set_books(
book_id_to_val_map, self.backend, allow_case_change=allow_case_change)
return dirtied
# }}}
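The `as_file` paths in `cover()` and `format()` above hand back a `SpooledTemporaryFile`, which buffers in memory and only spills to disk once it grows past a threshold. A minimal sketch of the write/rewind/read pattern (the 30 MB figure is an assumption for illustration, not necessarily calibre's `SPOOL_SIZE`):

```python
from tempfile import SpooledTemporaryFile

SPOOL_SIZE = 30 * 1024 * 1024  # assumed threshold; small payloads stay in RAM

buf = SpooledTemporaryFile(max_size=SPOOL_SIZE)
buf.write(b'ebook format bytes')
buf.seek(0)  # rewind before handing the file object to the caller
assert buf.read() == b'ebook format bytes'
```

The explicit `seek(0)` mirrors the `ret.seek(0)` calls in the code above: without it, a caller reading the returned file object would see nothing.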
class SortKey(object):

View File

@ -12,6 +12,7 @@ from functools import partial
from operator import attrgetter
from future_builtins import map
from calibre.ebooks.metadata import author_to_author_sort
from calibre.library.field_metadata import TagsIcons
from calibre.utils.config_base import tweaks
from calibre.utils.icu import sort_key
@ -149,8 +150,16 @@ def get_categories(dbcache, sort='name', book_ids=None, icon_map=None):
elif category == 'news':
cats = dbcache.fields['tags'].get_news_category(tag_class, book_ids)
else:
cat = fm[category]
brm = book_rating_map
if cat['datatype'] == 'rating' and category != 'rating':
brm = dbcache.fields[category].book_value_map
cats = dbcache.fields[category].get_categories(
tag_class, book_rating_map, lang_map, book_ids)
tag_class, brm, lang_map, book_ids)
if (category != 'authors' and cat['datatype'] == 'text' and
cat['is_multiple'] and cat['display'].get('is_names', False)):
for item in cats:
item.sort = author_to_author_sort(item.sort)
sort_categories(cats, sort)
categories[category] = cats

View File

@ -12,6 +12,7 @@ from threading import Lock
from collections import defaultdict, Counter
from calibre.db.tables import ONE_ONE, MANY_ONE, MANY_MANY
from calibre.db.write import Writer
from calibre.ebooks.metadata import title_sort
from calibre.utils.config_base import tweaks
from calibre.utils.icu import sort_key
@ -21,6 +22,7 @@ from calibre.utils.localization import calibre_langcode_to_name
class Field(object):
is_many = False
is_many_many = False
def __init__(self, name, table):
self.name, self.table = name, table
@ -44,6 +46,8 @@ class Field(object):
self.category_formatter = lambda x:'\u2605'*int(x/2)
elif name == 'languages':
self.category_formatter = calibre_langcode_to_name
self.writer = Writer(self)
self.series_field = None
@property
def metadata(self):
@ -296,6 +300,7 @@ class ManyToOneField(Field):
class ManyToManyField(Field):
is_many = True
is_many_many = True
def __init__(self, *args, **kwargs):
Field.__init__(self, *args, **kwargs)

View File

@ -7,19 +7,36 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import unittest, os, shutil
import unittest, os, shutil, tempfile, atexit
from functools import partial
from io import BytesIO
from future_builtins import map
rmtree = partial(shutil.rmtree, ignore_errors=True)
class BaseTest(unittest.TestCase):
def setUp(self):
self.library_path = self.mkdtemp()
self.create_db(self.library_path)
def tearDown(self):
shutil.rmtree(self.library_path)
def create_db(self, library_path):
from calibre.library.database2 import LibraryDatabase2
if LibraryDatabase2.exists_at(library_path):
raise ValueError('A library already exists at %r'%library_path)
src = os.path.join(os.path.dirname(__file__), 'metadata.db')
db = os.path.join(library_path, 'metadata.db')
shutil.copyfile(src, db)
return db
dest = os.path.join(library_path, 'metadata.db')
shutil.copyfile(src, dest)
db = LibraryDatabase2(library_path)
db.set_cover(1, I('lt.png', data=True))
db.set_cover(2, I('polish.png', data=True))
db.add_format(1, 'FMT1', BytesIO(b'book1fmt1'), index_is_id=True)
db.add_format(1, 'FMT2', BytesIO(b'book1fmt2'), index_is_id=True)
db.add_format(2, 'FMT1', BytesIO(b'book2fmt1'), index_is_id=True)
return dest
def init_cache(self, library_path):
from calibre.db.backend import DB
@ -29,20 +46,38 @@ class BaseTest(unittest.TestCase):
cache.init()
return cache
def mkdtemp(self):
ans = tempfile.mkdtemp(prefix='db_test_')
atexit.register(rmtree, ans)
return ans
def init_old(self, library_path):
from calibre.library.database2 import LibraryDatabase2
return LibraryDatabase2(library_path)
def clone_library(self, library_path):
if not hasattr(self, 'clone_dir'):
self.clone_dir = tempfile.mkdtemp()
atexit.register(rmtree, self.clone_dir)
self.clone_count = 0
self.clone_count += 1
dest = os.path.join(self.clone_dir, str(self.clone_count))
shutil.copytree(library_path, dest)
return dest
def compare_metadata(self, mi1, mi2):
allfk1 = mi1.all_field_keys()
allfk2 = mi2.all_field_keys()
self.assertEqual(allfk1, allfk2)
all_keys = {'format_metadata', 'id', 'application_id',
'author_sort_map', 'author_link_map', 'book_size',
'ondevice_col', 'last_modified'}.union(allfk1)
'author_sort_map', 'author_link_map', 'book_size',
'ondevice_col', 'last_modified', 'has_cover',
'cover_data'}.union(allfk1)
for attr in all_keys:
if attr == 'user_metadata': continue
if attr == 'format_metadata': continue # TODO: Not implemented yet
attr1, attr2 = getattr(mi1, attr), getattr(mi2, attr)
if attr == 'formats':
continue # TODO: Not implemented yet
attr1, attr2 = map(lambda x:tuple(x) if x else (), (attr1, attr2))
self.assertEqual(attr1, attr2,
'%s not the same: %r != %r'%(attr, attr1, attr2))

View File

@ -7,21 +7,13 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import shutil, unittest, tempfile, datetime
from cStringIO import StringIO
import unittest, datetime
from calibre.utils.date import utc_tz
from calibre.db.tests.base import BaseTest
class ReadingTest(BaseTest):
def setUp(self):
self.library_path = tempfile.mkdtemp()
self.create_db(self.library_path)
def tearDown(self):
shutil.rmtree(self.library_path)
def test_read(self): # {{{
'Test the reading of data from the database'
cache = self.init_cache(self.library_path)
@ -55,7 +47,7 @@ class ReadingTest(BaseTest):
'#tags':(),
'#yesno':None,
'#comments': None,
'size':None,
},
2 : {
@ -66,7 +58,7 @@ class ReadingTest(BaseTest):
'series' : 'A Series One',
'series_index': 1.0,
'tags':('Tag One', 'Tag Two'),
'formats': (),
'formats': ('FMT1',),
'rating': 4.0,
'identifiers': {'test':'one'},
'timestamp': datetime.datetime(2011, 9, 5, 21, 6,
@ -86,6 +78,7 @@ class ReadingTest(BaseTest):
'#tags':('My Tag One', 'My Tag Two'),
'#yesno':True,
'#comments': '<div>My Comments One<p></p></div>',
'size':9,
},
1 : {
'title': 'Title Two',
@ -96,7 +89,7 @@ class ReadingTest(BaseTest):
'series_index': 2.0,
'rating': 6.0,
'tags': ('Tag One', 'News'),
'formats':(),
'formats':('FMT1', 'FMT2'),
'identifiers': {'test':'two'},
'timestamp': datetime.datetime(2011, 9, 6, 6, 0,
tzinfo=utc_tz),
@ -115,6 +108,7 @@ class ReadingTest(BaseTest):
'#tags':('My Tag Two',),
'#yesno':False,
'#comments': '<div>My Comments Two<p></p></div>',
'size':9,
},
}
@ -172,22 +166,41 @@ class ReadingTest(BaseTest):
'Test get_metadata() returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
for i in xrange(1, 3):
old.add_format(i, 'txt%d'%i, StringIO(b'random%d'%i),
index_is_id=True)
old.add_format(i, 'text%d'%i, StringIO(b'random%d'%i),
index_is_id=True)
old_metadata = {i:old.get_metadata(i, index_is_id=True) for i in
old_metadata = {i:old.get_metadata(
i, index_is_id=True, get_cover=True, cover_as_data=True) for i in
xrange(1, 4)}
for mi in old_metadata.itervalues():
mi.format_metadata = dict(mi.format_metadata)
if mi.formats:
mi.formats = tuple(mi.formats)
old = None
cache = self.init_cache(self.library_path)
new_metadata = {i:cache.get_metadata(i) for i in xrange(1, 4)}
new_metadata = {i:cache.get_metadata(
i, get_cover=True, cover_as_data=True) for i in xrange(1, 4)}
cache = None
for mi2, mi1 in zip(new_metadata.values(), old_metadata.values()):
self.compare_metadata(mi1, mi2)
# }}}
def test_get_cover(self): # {{{
'Test cover() returns the same data for both backends'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
covers = {i: old.cover(i, index_is_id=True) for i in old.all_ids()}
old = None
cache = self.init_cache(self.library_path)
for book_id, cdata in covers.iteritems():
self.assertEqual(cdata, cache.cover(book_id), 'Reading of cover failed')
f = cache.cover(book_id, as_file=True)
self.assertEqual(cdata, f.read() if f else f, 'Reading of cover as file failed')
if cdata:
with open(cache.cover(book_id, as_path=True), 'rb') as f:
self.assertEqual(cdata, f.read(), 'Reading of cover as path failed')
else:
self.assertEqual(cdata, cache.cover(book_id, as_path=True),
'Reading of null cover as path failed')
# }}}
@ -227,8 +240,12 @@ class ReadingTest(BaseTest):
# User categories
'@Good Authors:One', '@Good Series.good tags:two',
# TODO: Tests for searching the size and #formats columns and
# cover:true|false
# Cover/Formats
'cover:true', 'cover:false', 'formats:true', 'formats:false',
'formats:#>1', 'formats:#=1', 'formats:=fmt1', 'formats:=fmt2',
'formats:=fmt1 or formats:fmt2', '#formats:true', '#formats:false',
'#formats:fmt1', '#formats:fmt2', '#formats:fmt1 and #formats:fmt2',
)}
old = None
@ -247,9 +264,67 @@ class ReadingTest(BaseTest):
old = LibraryDatabase2(self.library_path)
old_categories = old.get_categories()
cache = self.init_cache(self.library_path)
import pprint
pprint.pprint(old_categories)
pprint.pprint(cache.get_categories())
new_categories = cache.get_categories()
self.assertEqual(set(old_categories), set(new_categories),
'The set of old categories is not the same as the set of new categories')
def compare_category(category, old, new):
for attr in ('name', 'original_name', 'id', 'count',
'is_hierarchical', 'is_editable', 'is_searchable',
'id_set', 'avg_rating', 'sort', 'use_sort_as_name',
'tooltip', 'icon', 'category'):
oval, nval = getattr(old, attr), getattr(new, attr)
if (
(category in {'rating', '#rating'} and attr in {'id_set', 'sort'}) or
(category == 'series' and attr == 'sort') or # Sorting is wrong in old
(category == 'identifiers' and attr == 'id_set') or
(category == '@Good Series') or # Sorting is wrong in old
(category == 'news' and attr in {'count', 'id_set'}) or
(category == 'formats' and attr == 'id_set')
):
continue
self.assertEqual(oval, nval,
'The attribute %s for %s in category %s does not match. Old is %r, New is %r'
%(attr, old.name, category, oval, nval))
for category in old_categories:
old, new = old_categories[category], new_categories[category]
self.assertEqual(len(old), len(new),
'The number of items in the category %s is not the same'%category)
for o, n in zip(old, new):
compare_category(category, o, n)
# }}}
def test_get_formats(self): # {{{
'Test reading ebook formats using the format() method'
from calibre.library.database2 import LibraryDatabase2
old = LibraryDatabase2(self.library_path)
ids = old.all_ids()
lf = {i:set(old.formats(i, index_is_id=True).split(',')) if old.formats(
i, index_is_id=True) else set() for i in ids}
formats = {i:{f:old.format(i, f, index_is_id=True) for f in fmts} for
i, fmts in lf.iteritems()}
old = None
cache = self.init_cache(self.library_path)
for book_id, fmts in lf.iteritems():
self.assertEqual(fmts, set(cache.formats(book_id)),
'Set of formats is not the same')
for fmt in fmts:
old = formats[book_id][fmt]
self.assertEqual(old, cache.format(book_id, fmt),
'Old and new format disagree')
f = cache.format(book_id, fmt, as_file=True)
self.assertEqual(old, f.read(),
'Failed to read format as file')
with open(cache.format(book_id, fmt, as_path=True,
preserve_filename=True), 'rb') as f:
self.assertEqual(old, f.read(),
'Failed to read format as path')
with open(cache.format(book_id, fmt, as_path=True), 'rb') as f:
self.assertEqual(old, f.read(),
'Failed to read format as path')
# }}}


@ -0,0 +1,127 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import unittest
from collections import namedtuple
from functools import partial
from calibre.utils.date import UNDEFINED_DATE
from calibre.db.tests.base import BaseTest
class WritingTest(BaseTest):
@property
def cloned_library(self):
return self.clone_library(self.library_path)
def create_getter(self, name, getter=None):
if getter is None:
if name.endswith('_index'):
ans = lambda db:partial(db.get_custom_extra, index_is_id=True,
label=name[1:].replace('_index', ''))
else:
ans = lambda db:partial(db.get_custom, label=name[1:],
index_is_id=True)
else:
ans = lambda db:partial(getattr(db, getter), index_is_id=True)
return ans
def create_setter(self, name, setter=None):
if setter is None:
ans = lambda db:partial(db.set_custom, label=name[1:], commit=True)
else:
ans = lambda db:partial(getattr(db, setter), commit=True)
return ans
def create_test(self, name, vals, getter=None, setter=None):
T = namedtuple('Test', 'name vals getter setter')
return T(name, vals, self.create_getter(name, getter),
self.create_setter(name, setter))
def run_tests(self, tests):
results = {}
for test in tests:
results[test] = []
for val in test.vals:
cl = self.cloned_library
cache = self.init_cache(cl)
cache.set_field(test.name, {1: val})
cached_res = cache.field_for(test.name, 1)
del cache
db = self.init_old(cl)
getter = test.getter(db)
sqlite_res = getter(1)
if test.name.endswith('_index'):
val = float(val) if val is not None else 1.0
self.assertEqual(sqlite_res, val,
'Failed setting for %s with value %r, sqlite value not the same. val: %r != sqlite_val: %r'%(
test.name, val, val, sqlite_res))
else:
test.setter(db)(1, val)
old_cached_res = getter(1)
self.assertEqual(old_cached_res, cached_res,
'Failed setting for %s with value %r, cached value not the same. Old: %r != New: %r'%(
test.name, val, old_cached_res, cached_res))
db.refresh()
old_sqlite_res = getter(1)
self.assertEqual(old_sqlite_res, sqlite_res,
'Failed setting for %s, sqlite value not the same: %r != %r'%(
test.name, old_sqlite_res, sqlite_res))
del db
def test_one_one(self):
'Test setting of values in one-one fields'
tests = [self.create_test('#yesno', (True, False, 'true', 'false', None))]
for name, getter, setter in (
('#series_index', None, None),
('series_index', 'series_index', 'set_series_index'),
('#float', None, None),
):
vals = ['1.5', None, 0, 1.0]
tests.append(self.create_test(name, tuple(vals), getter, setter))
for name, getter, setter in (
('pubdate', 'pubdate', 'set_pubdate'),
('timestamp', 'timestamp', 'set_timestamp'),
('#date', None, None),
):
tests.append(self.create_test(
name, ('2011-1-12', UNDEFINED_DATE, None), getter, setter))
for name, getter, setter in (
('title', 'title', 'set_title'),
('uuid', 'uuid', 'set_uuid'),
('author_sort', 'author_sort', 'set_author_sort'),
('sort', 'title_sort', 'set_title_sort'),
('#comments', None, None),
('comments', 'comments', 'set_comment'),
):
vals = ['something', None]
if name not in {'comments', '#comments'}:
# Setting text column to '' returns None in the new backend
# and '' in the old. I think None is more correct.
vals.append('')
if name == 'comments':
# Again new behavior of deleting comment rather than setting
# empty string is more correct.
vals.remove(None)
tests.append(self.create_test(name, tuple(vals), getter, setter))
self.run_tests(tests)
def tests():
return unittest.TestLoader().loadTestsFromTestCase(WritingTest)
def run():
unittest.TextTestRunner(verbosity=2).run(tests())
if __name__ == '__main__':
run()

src/calibre/db/write.py Normal file

@ -0,0 +1,275 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
from functools import partial
from datetime import datetime
from calibre.constants import preferred_encoding, ispy3
from calibre.utils.date import (parse_only_date, parse_date, UNDEFINED_DATE,
isoformat)
if ispy3:
unicode = str
# Convert data into values suitable for the db {{{
def sqlite_datetime(x):
return isoformat(x, sep=' ') if isinstance(x, datetime) else x
def single_text(x):
if x is None:
return x
if not isinstance(x, unicode):
x = x.decode(preferred_encoding, 'replace')
x = x.strip()
return x if x else None
def multiple_text(sep, x):
if x is None:
return ()
if isinstance(x, bytes):
x = x.decode(preferred_encoding, 'replace')
if isinstance(x, unicode):
x = x.split(sep)
x = (y.strip() for y in x if y.strip())
return (' '.join(y.split()) for y in x if y)
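The `multiple_text` helper above splits a separated string, strips each item, and collapses internal whitespace runs. A standalone sketch of the same logic (the function is copied out of the hunk, with the `preferred_encoding` bytes handling dropped and a tuple returned instead of a generator so the result is easy to inspect):

```python
def multiple_text(sep, x):
    # Split a separated string into a tuple of cleaned values
    if x is None:
        return ()
    if isinstance(x, str):
        x = x.split(sep)
    x = (y.strip() for y in x if y.strip())
    # Collapse internal runs of whitespace in each item
    return tuple(' '.join(y.split()) for y in x if y)

print(multiple_text(',', ' Tag  One , ,Tag Two '))  # → ('Tag One', 'Tag Two')
```

Empty items (here the lone `' '` between commas) are dropped rather than producing empty tags.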
def adapt_datetime(x):
if isinstance(x, (unicode, bytes)):
x = parse_date(x, assume_utc=False, as_utc=False)
return x
def adapt_date(x):
if isinstance(x, (unicode, bytes)):
x = parse_only_date(x)
if x is None:
x = UNDEFINED_DATE
return x
def adapt_number(typ, x):
if x is None:
return None
if isinstance(x, (unicode, bytes)):
if x.lower() == 'none':
return None
return typ(x)
def adapt_bool(x):
if isinstance(x, (unicode, bytes)):
x = x.lower()
if x == 'true':
x = True
elif x == 'false':
x = False
elif x == 'none':
x = None
else:
x = bool(int(x))
return x if x is None else bool(x)
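The string-to-bool coercion in `adapt_bool` accepts `'true'`/`'false'`/`'none'` (case-insensitively) as well as numeric strings; anything else that is not a string is simply passed through `bool()`. A Python 3 sketch of the same coercion (bytes handling omitted):

```python
def adapt_bool(x):
    # Accept 'true'/'false'/'none' (case-insensitive) or a numeric string
    if isinstance(x, str):
        x = x.lower()
        if x == 'true':
            x = True
        elif x == 'false':
            x = False
        elif x == 'none':
            x = None
        else:
            x = bool(int(x))
    return x if x is None else bool(x)

print(adapt_bool('True'), adapt_bool('0'), adapt_bool('none'))  # → True False None
```

Note that `None` survives the final coercion, which is what lets tri-state yes/no custom columns keep their "undefined" value.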
def get_adapter(name, metadata):
dt = metadata['datatype']
if dt == 'text':
if metadata['is_multiple']:
ans = partial(multiple_text, metadata['is_multiple']['ui_to_list'])
else:
ans = single_text
elif dt == 'series':
ans = single_text
elif dt == 'datetime':
ans = adapt_date if name == 'pubdate' else adapt_datetime
elif dt == 'int':
ans = partial(adapt_number, int)
elif dt == 'float':
ans = partial(adapt_number, float)
elif dt == 'bool':
ans = adapt_bool
elif dt == 'comments':
ans = single_text
elif dt == 'rating':
ans = lambda x: x if x is None else min(10., max(0., adapt_number(float, x)))
elif dt == 'enumeration':
ans = single_text
elif dt == 'composite':
ans = lambda x: x
if name == 'title':
return lambda x: ans(x) or _('Unknown')
if name == 'author_sort':
return lambda x: ans(x) or ''
if name == 'authors':
return lambda x: ans(x) or (_('Unknown'),)
if name in {'timestamp', 'last_modified'}:
return lambda x: ans(x) or UNDEFINED_DATE
if name == 'series_index':
return lambda x: 1.0 if ans(x) is None else ans(x)
return ans
# }}}
# One-One fields {{{
def one_one_in_books(book_id_val_map, db, field, *args):
'Set a one-one field in the books table'
if book_id_val_map:
sequence = tuple((sqlite_datetime(v), k) for k, v in book_id_val_map.iteritems())
db.conn.executemany(
'UPDATE books SET %s=? WHERE id=?'%field.metadata['column'], sequence)
field.table.book_col_map.update(book_id_val_map)
return set(book_id_val_map)
def one_one_in_other(book_id_val_map, db, field, *args):
'Set a one-one field in the non-books table, like comments'
deleted = tuple((k,) for k, v in book_id_val_map.iteritems() if v is None)
if deleted:
db.conn.executemany('DELETE FROM %s WHERE book=?'%field.metadata['table'],
deleted)
for book_id in deleted:
field.table.book_col_map.pop(book_id[0], None)
updated = {k:v for k, v in book_id_val_map.iteritems() if v is not None}
if updated:
db.conn.executemany('INSERT OR REPLACE INTO %s(book,%s) VALUES (?,?)'%(
field.metadata['table'], field.metadata['column']),
tuple((k, sqlite_datetime(v)) for k, v in updated.iteritems()))
field.table.book_col_map.update(updated)
return set(book_id_val_map)
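The DELETE/INSERT OR REPLACE pattern used by `one_one_in_other` works as an upsert only because the side table has a uniqueness constraint on its `book` column. A minimal sqlite3 sketch of the same pattern (table and column names invented for illustration; `None` means "delete the row for that book"):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE comments (book INTEGER UNIQUE, text TEXT)')
vals = {1: 'first', 2: None, 3: 'third'}
# A None value means: remove the stored row for that book
conn.executemany('DELETE FROM comments WHERE book=?',
                 [(k,) for k, v in vals.items() if v is None])
# INSERT OR REPLACE acts as an upsert thanks to the UNIQUE(book) constraint
conn.executemany('INSERT OR REPLACE INTO comments(book,text) VALUES (?,?)',
                 [(k, v) for k, v in vals.items() if v is not None])
print(sorted(conn.execute('SELECT book, text FROM comments')))
# → [(1, 'first'), (3, 'third')]
```

Running the same upsert again with a different value for book 1 would replace the existing row rather than adding a duplicate.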
def custom_series_index(book_id_val_map, db, field, *args):
series_field = field.series_field
sequence = []
for book_id, sidx in book_id_val_map.iteritems():
if sidx is None:
sidx = 1.0
ids = series_field.ids_for_book(book_id)
if ids:
sequence.append((sidx, book_id, ids[0]))
field.table.book_col_map[book_id] = sidx
if sequence:
db.conn.executemany('UPDATE %s SET %s=? WHERE book=? AND value=?'%(
field.metadata['table'], field.metadata['column']), sequence)
return {s[1] for s in sequence}
# }}}
# Many-One fields {{{
def many_one(book_id_val_map, db, field, allow_case_change, *args):
dirtied = set()
m = field.metadata
dt = m['datatype']
kmap = icu_lower if dt == 'text' else lambda x:x
rid_map = {kmap(v):k for k, v in field.table.id_map.iteritems()}
book_id_item_id_map = {k:rid_map.get(kmap(v), None) if v is not None else
None for k, v in book_id_val_map.iteritems()}
if allow_case_change:
for book_id, item_id in book_id_item_id_map.iteritems():
nval = book_id_val_map[book_id]
if (item_id is not None and nval != field.table.id_map[item_id]):
# Change of case
db.conn.execute('UPDATE %s SET %s=? WHERE id=?'%(
m['table'], m['column']), (nval, item_id))
field.table.id_map[item_id] = nval
dirtied |= field.table.col_book_map[item_id]
deleted = {k:v for k, v in book_id_val_map.iteritems() if v is None}
updated = {k:v for k, v in book_id_val_map.iteritems() if v is not None}
if deleted:
db.conn.executemany('DELETE FROM %s WHERE book=?'%m['link_table'],
tuple((book_id,) for book_id in deleted))
for book_id in deleted:
item_id = field.table.book_col_map.pop(book_id, None)
if item_id is not None:
field.table.col_book_map[item_id].discard(book_id)
dirtied |= set(deleted)
if updated:
new_items = {k:v for k, v in updated.iteritems() if
book_id_item_id_map[k] is None}
changed_items = {k:book_id_item_id_map[k] for k in updated if
book_id_item_id_map[k] is not None}
def sql_update(imap):
db.conn.executemany(
'DELETE FROM {0} WHERE book=?; INSERT INTO {0}(book,{1}) VALUES(?, ?)'
.format(m['link_table'], m['link_column']),
tuple((book_id, book_id, item_id) for book_id, item_id in
imap.iteritems()))
if new_items:
imap = {}
for book_id, val in new_items.iteritems():
db.conn.execute('INSERT INTO %s(%s) VALUES (?)'%(
m['table'], m['column']), (val,))
imap[book_id] = item_id = db.conn.last_insert_rowid()
field.table.id_map[item_id] = val
field.table.col_book_map[item_id] = {book_id}
field.table.book_col_map[book_id] = item_id
sql_update(imap)
dirtied |= set(imap)
if changed_items:
imap = {}
sql_update(changed_items)
for book_id, item_id in changed_items.iteritems():
old_item_id = field.table.book_col_map[book_id]
if old_item_id != item_id:
field.table.book_col_map[book_id] = item_id
field.table.col_book_map[item_id].add(book_id)
field.table.col_book_map[old_item_id].discard(book_id)
imap[book_id] = item_id
sql_update(imap)
dirtied |= set(imap)
# Remove no longer used items
remove = {item_id for item_id, book_ids in
field.table.col_book_map.iteritems() if not book_ids}
if remove:
db.conn.executemany('DELETE FROM %s WHERE id=?'%m['table'],
tuple((item_id,) for item_id in remove))
for item_id in remove:
del field.table.id_map[item_id]
del field.table.col_book_map[item_id]
return dirtied
# }}}
def dummy(book_id_val_map, *args):
return set()
class Writer(object):
def __init__(self, field):
self.adapter = get_adapter(field.name, field.metadata)
self.name = field.name
self.field = field
dt = field.metadata['datatype']
self.accept_vals = lambda x: True
if dt == 'composite' or field.name in {
'id', 'cover', 'size', 'path', 'formats', 'news'}:
self.set_books_func = dummy
elif self.name[0] == '#' and self.name.endswith('_index'):
self.set_books_func = custom_series_index
elif field.is_many_many:
# TODO: Implement this
pass
# TODO: Remember to change commas to | when writing authors to sqlite
elif field.is_many:
# TODO: Implement this
pass
else:
self.set_books_func = (one_one_in_books if field.metadata['table']
== 'books' else one_one_in_other)
if self.name in {'timestamp', 'uuid', 'sort'}:
self.accept_vals = bool
def set_books(self, book_id_val_map, db, allow_case_change=True):
book_id_val_map = {k:self.adapter(v) for k, v in
book_id_val_map.iteritems() if self.accept_vals(v)}
if not book_id_val_map:
return set()
dirtied = self.set_books_func(book_id_val_map, db, self.field,
allow_case_change)
return dirtied


@ -14,7 +14,7 @@ class ILIAD(USBMS):
name = 'IRex Iliad Device Interface'
description = _('Communicate with the IRex Iliad eBook reader.')
author = _('John Schember')
author = 'John Schember'
supported_platforms = ['windows', 'linux']
# Ordered list of supported formats


@ -15,7 +15,7 @@ class IREXDR1000(USBMS):
name = 'IRex Digital Reader 1000 Device Interface'
description = _('Communicate with the IRex Digital Reader 1000 eBook ' \
'reader.')
author = _('John Schember')
author = 'John Schember'
supported_platforms = ['windows', 'osx', 'linux']
# Ordered list of supported formats


@ -22,13 +22,14 @@ class IRIVER_STORY(USBMS):
FORMATS = ['epub', 'fb2', 'pdf', 'djvu', 'txt']
VENDOR_ID = [0x1006]
PRODUCT_ID = [0x4023, 0x4024, 0x4025, 0x4034]
BCD = [0x0323, 0x0326]
PRODUCT_ID = [0x4023, 0x4024, 0x4025, 0x4034, 0x4037]
BCD = [0x0323, 0x0326, 0x226]
VENDOR_NAME = 'IRIVER'
WINDOWS_MAIN_MEM = ['STORY', 'STORY_EB05', 'STORY_WI-FI', 'STORY_EB07']
WINDOWS_MAIN_MEM = ['STORY', 'STORY_EB05', 'STORY_WI-FI', 'STORY_EB07',
'STORY_EB12']
WINDOWS_MAIN_MEM = re.compile(r'(%s)&'%('|'.join(WINDOWS_MAIN_MEM)))
WINDOWS_CARD_A_MEM = ['STORY', 'STORY_SD']
WINDOWS_CARD_A_MEM = ['STORY', 'STORY_SD', 'STORY_EB12_SD']
WINDOWS_CARD_A_MEM = re.compile(r'(%s)&'%('|'.join(WINDOWS_CARD_A_MEM)))
#OSX_MAIN_MEM = 'Kindle Internal Storage Media'


@ -6,7 +6,7 @@ import os, time, sys
from calibre.constants import preferred_encoding, DEBUG
from calibre import isbytestring, force_unicode
from calibre.utils.icu import strcmp
from calibre.utils.icu import sort_key
from calibre.devices.usbms.books import Book as Book_
from calibre.devices.usbms.books import CollectionsBookList
@ -239,9 +239,8 @@ class KTCollectionsBookList(CollectionsBookList):
if y is None:
return -1
if isinstance(x, basestring) and isinstance(y, basestring):
c = strcmp(force_unicode(x), force_unicode(y))
else:
c = cmp(x, y)
x, y = sort_key(force_unicode(x)), sort_key(force_unicode(y))
c = cmp(x, y)
if c != 0:
return c
# same as above -- no sort_key needed here


@ -13,7 +13,7 @@ from calibre.devices.interface import BookList as _BookList
from calibre.constants import preferred_encoding
from calibre import isbytestring, force_unicode
from calibre.utils.config import device_prefs, tweaks
from calibre.utils.icu import strcmp
from calibre.utils.icu import sort_key
from calibre.utils.formatter import EvalFormatter
class Book(Metadata):
@ -281,9 +281,8 @@ class CollectionsBookList(BookList):
if y is None:
return -1
if isinstance(x, basestring) and isinstance(y, basestring):
c = strcmp(force_unicode(x), force_unicode(y))
else:
c = cmp(x, y)
x, y = sort_key(force_unicode(x)), sort_key(force_unicode(y))
c = cmp(x, y)
if c != 0:
return c
# same as above -- no sort_key needed here


@ -40,7 +40,7 @@ class USBMS(CLI, Device):
'''
description = _('Communicate with an eBook reader.')
author = _('John Schember')
author = 'John Schember'
supported_platforms = ['windows', 'osx', 'linux']
# Store type instances of BookList and Book. We must do this because


@ -1,8 +0,0 @@
usbobserver.so : usbobserver.o
gcc -arch i386 -arch ppc -bundle usbobserver.o -o usbobserver.so -framework Python -framework IOKit -framework CoreFoundation
usbobserver.o : usbobserver.c
gcc -arch i386 -arch ppc -dynamic -I/Library/Frameworks/Python.framework/Versions/Current/Headers -c usbobserver.c -o usbobserver.o
clean :
rm -f *.o *.so


@ -67,6 +67,8 @@ def check_command_line_options(parser, args, log):
('-h' in args or '--help' in args):
log.error('Cannot read from', input)
raise SystemExit(1)
if input.endswith('.recipe') and not os.access(input, os.R_OK):
input = args[1]
output = args[2]
if (output.startswith('.') and output[:2] not in {'..', '.'} and '/' not in
@ -98,6 +100,9 @@ def option_recommendation_to_cli_option(add_option, rec):
switches = ['--disable-'+opt.long_switch]
add_option(Option(*switches, **attrs))
def group_titles():
return _('INPUT OPTIONS'), _('OUTPUT OPTIONS')
def add_input_output_options(parser, plumber):
input_options, output_options = \
plumber.input_options, plumber.output_options
@ -107,14 +112,14 @@ def add_input_output_options(parser, plumber):
option_recommendation_to_cli_option(group, opt)
if input_options:
title = _('INPUT OPTIONS')
title = group_titles()[0]
io = OptionGroup(parser, title, _('Options to control the processing'
' of the input %s file')%plumber.input_fmt)
add_options(io.add_option, input_options)
parser.add_option_group(io)
if output_options:
title = _('OUTPUT OPTIONS')
title = group_titles()[1]
oo = OptionGroup(parser, title, _('Options to control the processing'
' of the output %s')%plumber.output_fmt)
add_options(oo.add_option, output_options)


@ -68,10 +68,15 @@ class RecipeInput(InputFormatPlugin):
recipe = compile_recipe(self.recipe_source)
log('Using custom recipe')
else:
from calibre.web.feeds.recipes.collection import \
get_builtin_recipe_by_title
from calibre.web.feeds.recipes.collection import (
get_builtin_recipe_by_title, get_builtin_recipe_titles)
title = getattr(opts, 'original_recipe_input_arg', recipe_or_file)
title = os.path.basename(title).rpartition('.')[0]
titles = frozenset(get_builtin_recipe_titles())
if title not in titles:
title = getattr(opts, 'original_recipe_input_arg', recipe_or_file)
title = title.rpartition('.')[0]
raw = get_builtin_recipe_by_title(title, log=log,
download_recipe=not opts.dont_download_recipe)
builtin = False


@ -62,6 +62,26 @@ def wrap_lines(match):
else:
return ital+' '
def smarten_punctuation(html, log):
from calibre.utils.smartypants import smartyPants
from calibre.ebooks.chardet import substitute_entites
from calibre.ebooks.conversion.utils import HeuristicProcessor
preprocessor = HeuristicProcessor(log=log)
from uuid import uuid4
start = 'calibre-smartypants-'+str(uuid4())
stop = 'calibre-smartypants-'+str(uuid4())
html = html.replace('<!--', start)
html = html.replace('-->', stop)
html = preprocessor.fix_nbsp_indents(html)
html = smartyPants(html)
html = html.replace(start, '<!--')
html = html.replace(stop, '-->')
# convert ellipsis to entities to prevent wrapping
html = re.sub(r'(?u)(?<=\w)\s?(\.\s?){2}\.', '&hellip;', html)
# convert double dashes to em-dash
html = re.sub(r'\s--\s', u'\u2014', html)
return substitute_entites(html)
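The two regex substitutions at the end of the new module-level `smarten_punctuation` can be exercised in isolation; the patterns below are copied verbatim from the hunk, and none of the surrounding smartypants machinery is needed:

```python
import re

text = 'Wait for it... and then -- silence.'
# convert ellipsis to an entity to prevent wrapping
text = re.sub(r'(?u)(?<=\w)\s?(\.\s?){2}\.', '&hellip;', text)
# convert double dashes to an em-dash
text = re.sub(r'\s--\s', '\u2014', text)
print(text)  # → Wait for it&hellip; and then—silence.
```

The `(?<=\w)` lookbehind ensures only ellipses attached to a word are converted, and the optional `\s?` tolerates spaced-out dots like "wait . . .".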
class DocAnalysis(object):
'''
Provides various text analysis functions to determine how the document is structured.
@ -638,7 +658,7 @@ class HTMLPreProcessor(object):
html = preprocessor(html)
if getattr(self.extra_opts, 'smarten_punctuation', False):
html = self.smarten_punctuation(html)
html = smarten_punctuation(html, self.log)
try:
unsupported_unicode_chars = self.extra_opts.output_profile.unsupported_unicode_chars
@ -653,23 +673,4 @@ class HTMLPreProcessor(object):
return html
def smarten_punctuation(self, html):
from calibre.utils.smartypants import smartyPants
from calibre.ebooks.chardet import substitute_entites
from calibre.ebooks.conversion.utils import HeuristicProcessor
preprocessor = HeuristicProcessor(self.extra_opts, self.log)
from uuid import uuid4
start = 'calibre-smartypants-'+str(uuid4())
stop = 'calibre-smartypants-'+str(uuid4())
html = html.replace('<!--', start)
html = html.replace('-->', stop)
html = preprocessor.fix_nbsp_indents(html)
html = smartyPants(html)
html = html.replace(start, '<!--')
html = html.replace(stop, '-->')
# convert ellipsis to entities to prevent wrapping
html = re.sub(r'(?u)(?<=\w)\s?(\.\s?){2}\.', '&hellip;', html)
# convert double dashes to em-dash
html = re.sub(r'\s--\s', u'\u2014', html)
return substitute_entites(html)


@ -60,7 +60,8 @@ class TOCAdder(object):
else:
oeb.guide.remove('toc')
if not self.has_toc or 'toc' in oeb.guide or opts.no_inline_toc:
if (not self.has_toc or 'toc' in oeb.guide or opts.no_inline_toc or
getattr(opts, 'mobi_passthrough', False)):
return
self.log('\tGenerating in-line ToC')


@ -81,6 +81,11 @@ class BookIndexing
if elem == null
pos = [body.scrollWidth+1000, body.scrollHeight+1000]
else
# Because of a bug in WebKit's getBoundingClientRect() in
# column mode, this position can be inaccurate,
# see https://bugs.launchpad.net/calibre/+bug/1132641 for a
# test case. The usual symptom of the inaccuracy is br.top is
# highly negative.
br = elem.getBoundingClientRect()
pos = viewport_to_document(br.left, br.top, elem.ownerDocument)


@ -75,6 +75,13 @@ class PagedDisplay
this.margin_side = margin_side
this.margin_bottom = margin_bottom
handle_rtl_body: (body_style) ->
if body_style.direction == "rtl"
for node in document.body.childNodes
if node.nodeType == node.ELEMENT_NODE and window.getComputedStyle(node).direction == "rtl"
node.style.setProperty("direction", "rtl")
document.body.style.direction = "ltr"
layout: (is_single_page=false) ->
# start_time = new Date().getTime()
body_style = window.getComputedStyle(document.body)
@ -84,6 +91,7 @@ class PagedDisplay
# Check if the current document is a full screen layout like
# cover, if so we treat it specially.
single_screen = (document.body.scrollHeight < window.innerHeight + 75)
this.handle_rtl_body(body_style)
first_layout = true
ww = window.innerWidth
@ -402,7 +410,22 @@ class PagedDisplay
elem.scrollIntoView()
if this.in_paged_mode
# Ensure we are scrolled to the column containing elem
this.scroll_to_xpos(calibre_utils.absleft(elem) + 5)
# Because of a bug in WebKit's getBoundingClientRect() in column
# mode, this position can be inaccurate, see
# https://bugs.launchpad.net/calibre/+bug/1132641 for a test case.
# The usual symptom of the inaccuracy is br.top is highly negative.
br = elem.getBoundingClientRect()
if br.top < -1000
# This only works because of the preceding call to
# elem.scrollIntoView(). However, in some cases it gives
# inaccurate results, so we prefer the bounding client rect,
# when possible.
left = elem.scrollLeft
else
left = br.left
this.scroll_to_xpos(calibre_utils.viewport_to_document(
left+this.margin_side, elem.scrollTop, elem.ownerDocument)[0])
snap_to_selection: () ->
# Ensure that the viewport is positioned at the start of the column


@ -86,7 +86,9 @@ class CalibreUtils
absleft: (elem) -> # {{{
# The left edge of elem in document co-ords. Works in all
# circumstances, including column layout. Note that this will cause
# a relayout if the render tree is dirty.
# a relayout if the render tree is dirty. Also, because of a bug in the
# version of WebKit bundled with Qt 4.8, this does not always work, see
# https://bugs.launchpad.net/bugs/1132641 for a test case.
r = elem.getBoundingClientRect()
return this.viewport_to_document(r.left, 0, elem.ownerDocument)[0]
# }}}


@ -31,7 +31,8 @@ def self_closing_sub(match):
return '<%s%s></%s>'%(match.group(1), match.group(2), match.group(1))
def load_html(path, view, codec='utf-8', mime_type=None,
pre_load_callback=lambda x:None, path_is_html=False):
pre_load_callback=lambda x:None, path_is_html=False,
force_as_html=False):
from PyQt4.Qt import QUrl, QByteArray
if mime_type is None:
mime_type = guess_type(path)[0]
@ -44,18 +45,20 @@ def load_html(path, view, codec='utf-8', mime_type=None,
html = f.read().decode(codec, 'replace')
html = EntityDeclarationProcessor(html).processed_html
has_svg = re.search(r'<[:a-zA-Z]*svg', html) is not None
self_closing_pat = re.compile(r'<\s*([A-Za-z1-6]+)([^>]*)/\s*>')
self_closing_pat = re.compile(r'<\s*([:A-Za-z0-9-]+)([^>]*)/\s*>')
html = self_closing_pat.sub(self_closing_sub, html)
loading_url = QUrl.fromLocalFile(path)
pre_load_callback(loading_url)
if has_svg:
if force_as_html or re.search(r'<[:a-zA-Z0-9-]*svg', html) is None:
view.setHtml(html, loading_url)
else:
view.setContent(QByteArray(html.encode(codec)), mime_type,
loading_url)
else:
view.setHtml(html, loading_url)
mf = view.page().mainFrame()
elem = mf.findFirstElement('parsererror')
if not elem.isNull():
return False
return True
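The widened `self_closing_pat` in this hunk now matches namespaced and digit-bearing tag names (the old character class `[A-Za-z1-6]` missed names like `svg:rect`). Combined with `self_closing_sub` from the top of the hunk, it expands self-closed elements into open/close pairs; a minimal standalone sketch:

```python
import re

def self_closing_sub(match):
    # Expand <tag attrs/> into <tag attrs></tag>
    return '<%s%s></%s>' % (match.group(1), match.group(2), match.group(1))

self_closing_pat = re.compile(r'<\s*([:A-Za-z0-9-]+)([^>]*)/\s*>')
html = '<br/><svg:rect width="1"/>'
print(self_closing_pat.sub(self_closing_sub, html))
# → <br></br><svg:rect width="1"></svg:rect>
```

Any attribute text captured by `([^>]*)` (including its leading space) is carried into the opening tag unchanged.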


@ -15,6 +15,7 @@ from calibre.ebooks.oeb.polish.container import get_container
from calibre.ebooks.oeb.polish.stats import StatsCollector
from calibre.ebooks.oeb.polish.subset import subset_all_fonts
from calibre.ebooks.oeb.polish.cover import set_cover
from calibre.ebooks.oeb.polish.replace import smarten_punctuation
from calibre.ebooks.oeb.polish.jacket import (
replace_jacket, add_or_replace_jacket, find_existing_jacket, remove_jacket)
from calibre.utils.logging import Log
@ -25,6 +26,7 @@ ALL_OPTS = {
'cover': None,
'jacket': False,
'remove_jacket':False,
'smarten_punctuation':False,
}
SUPPORTED = {'EPUB', 'AZW3'}
@ -72,6 +74,13 @@ etc.</p>'''),
'remove_jacket': _('''\
<p>Remove a previous inserted book jacket page.</p>
'''),
'smarten_punctuation': _('''\
<p>Convert plain text dashes, ellipsis, quotes, multiple hyphens, etc. into their
typographically correct equivalents.</p>
<p>Note that the algorithm can sometimes generate incorrect results, especially
when single quotes at the start of contractions are involved.</p>
'''),
}
def hfix(name, raw):
@ -121,11 +130,6 @@ def polish(file_map, opts, log, report):
report(_('Updated metadata jacket'))
report(_('Metadata updated\n'))
if opts.subset:
rt(_('Subsetting embedded fonts'))
subset_all_fonts(ebook, stats.font_stats, report)
report('')
if opts.cover:
rt(_('Setting cover'))
set_cover(ebook, opts.cover, report)
@ -150,6 +154,16 @@ def polish(file_map, opts, log, report):
report(_('No metadata jacket found'))
report('')
if opts.smarten_punctuation:
rt(_('Smartening punctuation'))
smarten_punctuation(ebook, report)
report('')
if opts.subset:
rt(_('Subsetting embedded fonts'))
subset_all_fonts(ebook, stats.font_stats, report)
report('')
ebook.commit(outbook)
report('-'*70)
report(_('Polishing took: %.1f seconds')%(time.time()-st))
@ -160,6 +174,7 @@ def gui_polish(data):
files = data.pop('files')
if not data.pop('metadata'):
data.pop('opf')
if not data.pop('do_cover'):
data.pop('cover')
file_map = {x:x for x in files}
opts = ALL_OPTS.copy()
@ -190,6 +205,7 @@ def option_parser():
'Path to an OPF file. The metadata in the book is updated from the OPF file.'))
o('--jacket', '-j', help=CLI_HELP['jacket'])
o('--remove-jacket', help=CLI_HELP['remove_jacket'])
o('--smarten-punctuation', '-p', help=CLI_HELP['smarten_punctuation'])
o('--verbose', help=_('Produce more verbose output, useful for debugging.'))


@ -7,10 +7,12 @@ __license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import codecs
from urlparse import urlparse
from cssutils import replaceUrls
from calibre.ebooks.chardet import strip_encoding_declarations
from calibre.ebooks.oeb.polish.container import guess_type
from calibre.ebooks.oeb.base import (OEB_DOCS, OEB_STYLES, rewrite_links)
@ -58,4 +60,26 @@ def replace_links(container, link_map, frag_map=lambda name, frag:frag):
if repl.replaced:
container.dirty(name)
def smarten_punctuation(container, report):
from calibre.ebooks.conversion.preprocess import smarten_punctuation
for path in container.spine_items:
name = container.abspath_to_name(path)
changed = False
with container.open(name, 'r+b') as f:
html = container.decode(f.read())
newhtml = smarten_punctuation(html, container.log)
if newhtml != html:
changed = True
report(_('Smartened punctuation in: %s')%name)
newhtml = strip_encoding_declarations(newhtml)
f.seek(0)
f.truncate()
f.write(codecs.BOM_UTF8 + newhtml.encode('utf-8'))
if changed:
# Add an encoding declaration (it will be added automatically when
# serialized)
root = container.parsed(name)
for m in root.xpath('descendant::*[local-name()="meta" and @http-equiv]'):
m.getparent().remove(m)
container.dirty(name)


@ -9,10 +9,11 @@ __docformat__ = 'restructuredtext en'
import os, sys
from calibre import prints
from calibre import prints, as_unicode
from calibre.ebooks.oeb.base import OEB_STYLES, OEB_DOCS, XPath
from calibre.ebooks.oeb.polish.container import OEB_FONTS
from calibre.utils.fonts.sfnt.subset import subset
from calibre.utils.fonts.sfnt.errors import UnsupportedFont
from calibre.utils.fonts.utils import get_font_names
def remove_font_face_rules(container, sheet, remove_names, base):
@@ -46,9 +47,16 @@ def subset_all_fonts(container, font_stats, report):
raw = f.read()
font_name = get_font_names(raw)[-1]
warnings = []
container.log('Subsetting font: %s'%font_name)
nraw, old_sizes, new_sizes = subset(raw, chars,
container.log('Subsetting font: %s'%(font_name or name))
try:
nraw, old_sizes, new_sizes = subset(raw, chars,
warnings=warnings)
except UnsupportedFont as e:
container.log.warning(
'Unsupported font: %s, ignoring. Error: %s'%(
name, as_unicode(e)))
continue
for w in warnings:
container.log.warn(w)
olen = sum(old_sizes.itervalues())

View File

@@ -363,7 +363,10 @@ class CSSFlattener(object):
cssdict['font-weight'] = 'normal' # ADE chokes on font-weight medium
fsize = font_size
if not self.context.disable_font_rescaling:
is_drop_cap = (cssdict.get('float', None) == 'left' and 'font-size' in
cssdict and len(node) == 0 and node.text and
len(node.text) == 1)
if not self.context.disable_font_rescaling and not is_drop_cap:
_sbase = self.sbase if self.sbase is not None else \
self.context.source.fbase
dyn_rescale = dynamic_rescale_factor(node)
@@ -382,7 +385,7 @@
try:
minlh = self.context.minimum_line_height / 100.
if style['line-height'] < minlh * fsize:
if not is_drop_cap and style['line-height'] < minlh * fsize:
cssdict['line-height'] = str(minlh)
except:
self.oeb.logger.exception('Failed to set minimum line-height')

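The CSS flattener change above exempts drop caps from both font rescaling and minimum line-height enforcement. Its heuristic — a left-floated element with an explicit font-size, no children, and a single character of text — can be sketched on its own (using a plain dict and a child count in place of calibre's lxml node):

```python
def is_drop_cap(cssdict, child_count, text):
    """Mirror the flattener's new heuristic: a left-floated element with
    an explicit font-size, no child elements, and exactly one character
    of text is treated as a drop cap and exempted from rescaling."""
    return (cssdict.get('float') == 'left'
            and 'font-size' in cssdict
            and child_count == 0
            and bool(text) and len(text) == 1)

drop = is_drop_cap({'float': 'left', 'font-size': '300%'}, 0, 'T')
plain = is_drop_cap({'float': 'left', 'font-size': '100%'}, 0, 'The')
```

Rescaling a deliberately oversized initial would destroy the effect, hence the exemption.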
View File

@@ -13,9 +13,10 @@ from operator import itemgetter
from collections import Counter, OrderedDict
from future_builtins import map
from calibre import as_unicode
from calibre.ebooks.pdf.render.common import (Array, String, Stream,
Dictionary, Name)
from calibre.utils.fonts.sfnt.subset import pdf_subset
from calibre.utils.fonts.sfnt.subset import pdf_subset, UnsupportedFont
STANDARD_FONTS = {
'Times-Roman', 'Helvetica', 'Courier', 'Symbol', 'Times-Bold',
@@ -150,12 +151,16 @@ class Font(object):
self.used_glyphs = set()
def embed(self, objects):
def embed(self, objects, debug):
self.font_descriptor['FontFile'+('3' if self.is_otf else '2')
] = objects.add(self.font_stream)
self.write_widths(objects)
self.write_to_unicode(objects)
pdf_subset(self.metrics.sfnt, self.used_glyphs)
try:
pdf_subset(self.metrics.sfnt, self.used_glyphs)
except UnsupportedFont as e:
debug('Subsetting of %s not supported, embedding full font. Error: %s'%(
self.metrics.names.get('full_name', 'Unknown'), as_unicode(e)))
if self.is_otf:
self.font_stream.write(self.metrics.sfnt['CFF '].raw)
else:
@@ -221,7 +226,7 @@ class FontManager(object):
}))
return self.std_map[name]
def embed_fonts(self):
def embed_fonts(self, debug):
for font in self.fonts:
font.embed(self.objects)
font.embed(self.objects, debug)

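Both the polish and PDF-output hunks above apply the same defensive pattern: attempt subsetting, and on UnsupportedFont fall back (skip the font, or embed it whole) instead of aborting the run. A hedged sketch with a stand-in exception and toy subsetter:

```python
class UnsupportedFont(ValueError):
    # Stand-in for calibre.utils.fonts.sfnt.errors.UnsupportedFont
    pass

def subset(raw, chars):
    # Toy subsetter: only accepts TrueType data (sfnt version 1.0).
    if not raw.startswith(b'\x00\x01\x00\x00'):
        raise UnsupportedFont('not a TrueType font')
    return raw[:16]

def subset_all(fonts, chars, log):
    """Try to subset every font; log and skip unsupported ones instead
    of aborting the whole run (the same shape as the hunks above)."""
    out = {}
    for name, raw in fonts.items():
        try:
            out[name] = subset(raw, chars)
        except UnsupportedFont as e:
            log.append('Unsupported font: %s, ignoring. Error: %s' % (name, e))
    return out

log = []
subsetted = subset_all({'good.ttf': b'\x00\x01\x00\x00' + b'g' * 64,
                        'odd.pfb': b'%!PS-AdobeFont'}, {'a'}, log)
```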
View File

@@ -488,7 +488,7 @@ class PDFStream(object):
def end(self):
if self.current_page.getvalue():
self.end_page()
self.font_manager.embed_fonts()
self.font_manager.embed_fonts(self.debug)
inforef = self.objects.add(self.info)
self.links.add_links()
self.objects.pdf_serialize(self.stream)

View File

@@ -15,7 +15,8 @@ from PyQt4.Qt import (QMenu, Qt, QInputDialog, QToolButton, QDialog,
from calibre import isbytestring, sanitize_file_name_unicode
from calibre.constants import (filesystem_encoding, iswindows,
get_portable_base)
from calibre.utils.config import prefs
from calibre.utils.config import prefs, tweaks
from calibre.utils.icu import sort_key
from calibre.gui2 import (gprefs, warning_dialog, Dispatcher, error_dialog,
question_dialog, info_dialog, open_local_file, choose_dir)
from calibre.library.database2 import LibraryDatabase2
@@ -46,7 +47,7 @@ class LibraryUsageStats(object): # {{{
locs = list(self.stats.keys())
locs.sort(cmp=lambda x, y: cmp(self.stats[x], self.stats[y]),
reverse=True)
for key in locs[25:]:
for key in locs[500:]:
self.stats.pop(key)
gprefs.set('library_usage_stats', self.stats)
@@ -72,8 +73,9 @@ class LibraryUsageStats(object): # {{{
locs = list(self.stats.keys())
if lpath in locs:
locs.remove(lpath)
locs.sort(cmp=lambda x, y: cmp(self.stats[x], self.stats[y]),
reverse=True)
limit = tweaks['many_libraries']
key = sort_key if len(locs) > limit else lambda x:self.stats[x]
locs.sort(key=key, reverse=len(locs)<=limit)
for loc in locs:
yield self.pretty(loc), loc

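The LibraryUsageStats changes above raise the retained-library cap from 25 to 500 and, past a many_libraries threshold, switch the switcher menu from most-used-first to alphabetical order. A sketch of that ordering rule (a hypothetical `limit` parameter replaces the tweak, and plain lower-casing stands in for ICU sort keys):

```python
def ordered_libraries(stats, limit=10):
    """Order library paths for the switcher menu: by descending use
    count while the list is small, alphabetically once it grows past
    `limit` (calibre reads the limit from the many_libraries tweak
    and uses calibre.utils.icu.sort_key for the alphabetical case)."""
    locs = list(stats)
    if len(locs) > limit:
        locs.sort(key=str.lower)                          # many: alphabetical
    else:
        locs.sort(key=lambda x: stats[x], reverse=True)   # few: most used first
    return locs

few = ordered_libraries({'b': 5, 'a': 1, 'c': 9})
many = ordered_libraries({'lib%02d' % i: i for i in range(12)})
```

With hundreds of libraries, usage order becomes unscannable, so an alphabetical menu is the better default.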
View File

@@ -40,14 +40,22 @@ class Polish(QDialog): # {{{
'subset':_('<h3>Subsetting fonts</h3>%s')%HELP['subset'],
'smarten_punctuation':
_('<h3>Smarten punctuation</h3>%s')%HELP['smarten_punctuation'],
'metadata':_('<h3>Updating metadata</h3>'
'<p>This will update all metadata and covers in the'
'<p>This will update all metadata <i>except</i> the cover in the'
' ebook files to match the current metadata in the'
' calibre library.</p><p>If the ebook file does not have'
' an identifiable cover, a new cover is inserted.</p>'
' calibre library.</p>'
' <p>Note that most ebook'
' formats are not capable of supporting all the'
' metadata in calibre.</p>'),
' metadata in calibre.</p><p>There is a separate option to'
' update the cover.</p>'),
'do_cover': _('<p>Update the covers in the ebook files to match the'
' current cover in the calibre library.</p>'
'<p>If the ebook file does not have'
' an identifiable cover, a new cover is inserted.</p>'
),
'jacket':_('<h3>Book Jacket</h3>%s')%HELP['jacket'],
'remove_jacket':_('<h3>Remove Book Jacket</h3>%s')%HELP['remove_jacket'],
}
@@ -60,10 +68,12 @@ class Polish(QDialog): # {{{
count = 0
self.all_actions = OrderedDict([
('subset', _('Subset all embedded fonts')),
('metadata', _('Update metadata in book files')),
('jacket', _('Add metadata as a "book jacket" page')),
('remove_jacket', _('Remove a previously inserted book jacket')),
('subset', _('&Subset all embedded fonts')),
('smarten_punctuation', _('Smarten &punctuation')),
('metadata', _('Update &metadata in the book files')),
('do_cover', _('Update the &cover in the book files')),
('jacket', _('Add metadata as a "book &jacket" page')),
('remove_jacket', _('&Remove a previously inserted book jacket')),
])
prefs = gprefs.get('polishing_settings', {})
for name, text in self.all_actions.iteritems():
@@ -143,11 +153,17 @@ class Polish(QDialog): # {{{
m = self.load_menu
m.clear()
self.__actions = []
a = self.__actions.append
for name in sorted(saved):
self.__actions.append(m.addAction(name, partial(self.load_settings,
name)))
a(m.addAction(name, partial(self.load_settings, name)))
m.addSeparator()
a(m.addAction(_('Remove saved settings'), self.clear_settings))
self.load_button.setEnabled(bool(saved))
def clear_settings(self):
gprefs.set('polish_settings', {})
self.setup_load_button()
def load_settings(self, name):
saved = gprefs.get('polish_settings', {}).get(name, {})
for action in self.all_actions:
@@ -233,8 +249,10 @@ class Polish(QDialog): # {{{
cover = os.path.join(base, 'cover.jpg')
if db.copy_cover_to(book_id, cover, index_is_id=True):
data['cover'] = cover
is_orig = {}
for fmt in formats:
ext = fmt.replace('ORIGINAL_', '').lower()
is_orig[ext.upper()] = 'ORIGINAL_' in fmt
with open(os.path.join(base, '%s.%s'%(book_id, ext)), 'wb') as f:
db.copy_format_to(book_id, fmt, f, index_is_id=True)
data['files'].append(f.name)
@@ -247,7 +265,7 @@ class Polish(QDialog): # {{{
self.pd.set_msg(_('Queueing book %(nums)s of %(tot)s (%(title)s)')%dict(
nums=num, tot=len(self.book_id_map), title=mi.title))
self.jobs.append((desc, data, book_id, base))
self.jobs.append((desc, data, book_id, base, is_orig))
# }}}
class Report(QDialog): # {{{
@@ -394,11 +412,11 @@ class PolishAction(InterfaceAction):
d = Polish(self.gui.library_view.model().db, book_id_map, parent=self.gui)
if d.exec_() == d.Accepted and d.jobs:
show_reports = bool(d.show_reports.isChecked())
for desc, data, book_id, base in reversed(d.jobs):
for desc, data, book_id, base, is_orig in reversed(d.jobs):
job = self.gui.job_manager.run_job(
Dispatcher(self.book_polished), 'gui_polish', args=(data,),
description=desc)
job.polish_args = (book_id, base, data['files'], show_reports)
job.polish_args = (book_id, base, data['files'], show_reports, is_orig)
if d.jobs:
self.gui.jobs_pointer.start()
self.gui.status_bar.show_message(
@@ -409,11 +427,11 @@ class PolishAction(InterfaceAction):
self.gui.job_exception(job)
return
db = self.gui.current_db
book_id, base, files, show_reports = job.polish_args
book_id, base, files, show_reports, is_orig = job.polish_args
fmts = set()
for path in files:
fmt = path.rpartition('.')[-1].upper()
if tweaks['save_original_format_when_polishing']:
if tweaks['save_original_format_when_polishing'] and not is_orig[fmt]:
fmts.add(fmt)
db.save_original_format(book_id, fmt, notify=False)
with open(path, 'rb') as f:

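The polishing changes above thread an is_orig map through the job so that a file queued from an ORIGINAL_FMT entry is never backed up over the real original. The bookkeeping can be sketched as:

```python
def classify_formats(formats):
    """Map each bare extension (upper-case) to whether the queued file
    came from an ORIGINAL_* entry, mirroring the is_orig bookkeeping."""
    is_orig = {}
    for fmt in formats:
        ext = fmt.replace('ORIGINAL_', '').lower()
        is_orig[ext.upper()] = 'ORIGINAL_' in fmt
    return is_orig

def should_backup(fmt, is_orig, save_backups=True):
    # Save an ORIGINAL_FMT backup only when the tweak is enabled and
    # the polished file is not itself a saved original.
    return save_backups and not is_orig[fmt]

is_orig = classify_formats(['EPUB', 'ORIGINAL_AZW3'])
```

Without the guard, polishing ORIGINAL_AZW3 would overwrite the very backup the tweak exists to preserve.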
View File

@@ -327,6 +327,13 @@ class EditorWidget(QWebView): # {{{
else:
return QWebView.keyReleaseEvent(self, ev)
def contextMenuEvent(self, ev):
menu = self.page().createStandardContextMenu()
paste = self.pageAction(QWebPage.Paste)
for action in menu.actions():
if action == paste:
menu.insertAction(action, self.pageAction(QWebPage.PasteAndMatchStyle))
menu.exec_(ev.globalPos())
# }}}

View File

@@ -622,8 +622,7 @@ class BulkBase(Base):
return
val = self.gui_val
val = self.normalize_ui_val(val)
if val != self.initial_val:
self.db.set_custom_bulk(book_ids, val, num=self.col_id, notify=notify)
self.db.set_custom_bulk(book_ids, val, num=self.col_id, notify=notify)
def make_widgets(self, parent, main_widget_class, extra_label_text=''):
w = QWidget(parent)
@@ -1030,8 +1029,7 @@ class BulkText(BulkBase):
else:
val = self.gui_val
val = self.normalize_ui_val(val)
if val != self.initial_val:
self.db.set_custom_bulk(book_ids, val, num=self.col_id, notify=notify)
self.db.set_custom_bulk(book_ids, val, num=self.col_id, notify=notify)
def getter(self):
if self.col_metadata['is_multiple']:

View File

@@ -512,7 +512,7 @@ class MetadataBulkDialog(ResizableDialog, Ui_MetadataBulkDialog):
self.test_text.editTextChanged[str].connect(self.s_r_paint_results)
self.comma_separated.stateChanged.connect(self.s_r_paint_results)
self.case_sensitive.stateChanged.connect(self.s_r_paint_results)
self.s_r_src_ident.currentIndexChanged[int].connect(self.s_r_paint_results)
self.s_r_src_ident.currentIndexChanged[int].connect(self.s_r_identifier_type_changed)
self.s_r_dst_ident.textChanged.connect(self.s_r_paint_results)
self.s_r_template.lost_focus.connect(self.s_r_template_changed)
self.central_widget.setCurrentIndex(0)
@@ -576,9 +576,9 @@ class MetadataBulkDialog(ResizableDialog, Ui_MetadataBulkDialog):
elif not fm['is_multiple']:
val = [val]
elif fm['datatype'] == 'composite':
val = [v.strip() for v in val.split(fm['is_multiple']['ui_to_list'])]
val = [v2.strip() for v2 in val.split(fm['is_multiple']['ui_to_list'])]
elif field == 'authors':
val = [v.replace('|', ',') for v in val]
val = [v2.replace('|', ',') for v2 in val]
else:
val = []
if not val:
@@ -591,6 +591,10 @@ class MetadataBulkDialog(ResizableDialog, Ui_MetadataBulkDialog):
def s_r_template_changed(self):
self.s_r_search_field_changed(self.search_field.currentIndex())
def s_r_identifier_type_changed(self, idx):
self.s_r_search_field_changed(self.search_field.currentIndex())
self.s_r_paint_results(idx)
def s_r_search_field_changed(self, idx):
self.s_r_template.setVisible(False)
self.template_label.setVisible(False)

View File

@@ -369,7 +369,7 @@ def build_pipe(print_error=True):
t.start()
t.join(3.0)
if t.is_alive():
if iswindows():
if iswindows:
cant_start()
else:
f = os.path.expanduser('~/.calibre_calibre GUI.lock')

View File

@@ -725,13 +725,15 @@ class EbookViewer(MainWindow, Ui_EbookViewer):
self.view.shrink_fonts()
def magnification_changed(self, val):
tt = _('%(which)s font size [%(sc)s]\nCurrent magnification: %(mag).1f')
tt = '%(action)s [%(sc)s]\n'+_('Current magnification: %(mag).1f')
sc = unicode(self.action_font_size_larger.shortcut().toString())
self.action_font_size_larger.setToolTip(
tt %dict(which=_('Increase'), mag=val, sc=sc))
tt %dict(action=unicode(self.action_font_size_larger.text()),
mag=val, sc=sc))
sc = unicode(self.action_font_size_smaller.shortcut().toString())
self.action_font_size_smaller.setToolTip(
tt %dict(which=_('Decrease'), mag=val, sc=sc))
tt %dict(action=unicode(self.action_font_size_smaller.text()),
mag=val, sc=sc))
self.action_font_size_larger.setEnabled(self.view.multiplier < 3)
self.action_font_size_smaller.setEnabled(self.view.multiplier > 0.2)

View File

@@ -955,8 +955,8 @@ class LayoutButton(QToolButton):
def set_state_to_hide(self, *args):
self.setChecked(True)
label = _('Hide')
self.setText(label + ' ' + self.label+ u' (%s)'%self.shortcut)
self.setText(_('Hide %(label)s %(shortcut)s')%dict(
label=self.label, shortcut=self.shortcut))
self.setToolTip(self.text())
self.setStatusTip(self.text())

View File

@@ -357,8 +357,9 @@ def do_add_empty(db, title, authors, isbn, tags, series, series_index, cover):
mi.series, mi.series_index = series, series_index
if cover:
mi.cover = cover
db.import_book(mi, [])
book_id = db.import_book(mi, [])
write_dirtied(db)
prints(_('Added book ids: %s')%book_id)
send_message()
def command_add(args, dbpath):

View File

@@ -34,7 +34,7 @@ from calibre import isbytestring
from calibre.utils.filenames import (ascii_filename, samefile,
WindowsAtomicFolderMove, hardlink_file)
from calibre.utils.date import (utcnow, now as nowf, utcfromtimestamp,
parse_only_date, UNDEFINED_DATE)
parse_only_date, UNDEFINED_DATE, parse_date)
from calibre.utils.config import prefs, tweaks, from_json, to_json
from calibre.utils.icu import sort_key, strcmp, lower
from calibre.utils.search_query_parser import saved_searches, set_saved_searches
@@ -1134,6 +1134,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
base_path = os.path.join(self.library_path, self.path(id,
index_is_id=True))
self.dirtied([id])
if not os.path.exists(base_path):
os.makedirs(base_path)
path = os.path.join(base_path, 'cover.jpg')
@@ -2565,6 +2567,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
def set_timestamp(self, id, dt, notify=True, commit=True):
if dt:
if isinstance(dt, (unicode, bytes)):
dt = parse_date(dt, as_utc=True, assume_utc=False)
self.conn.execute('UPDATE books SET timestamp=? WHERE id=?', (dt, id))
self.data.set(id, self.FIELD_MAP['timestamp'], dt, row_is_id=True)
self.dirtied([id], commit=False)

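The set_timestamp change above accepts a string date and parses it before writing to the database. A simplified sketch of that normalization (datetime.fromisoformat stands in for calibre's more forgiving parse_date, and naive values are assumed UTC here rather than local time):

```python
from datetime import datetime, timezone

def normalize_timestamp(dt):
    """Accept a datetime or an ISO-format string and return an aware
    UTC datetime, a rough stand-in for parse_date(dt, as_utc=True)."""
    if isinstance(dt, bytes):
        dt = dt.decode('utf-8')
    if isinstance(dt, str):
        dt = datetime.fromisoformat(dt)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

ts = normalize_timestamp('2013-02-22T10:00:00+01:00')
```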
View File

@@ -590,7 +590,7 @@ class BrowseServer(object):
entries = get_category_items(category, entries,
self.search_restriction_name, datatype,
self.opts.url_prefix)
return json.dumps(entries, ensure_ascii=False)
return json.dumps(entries, ensure_ascii=True)
@Endpoint()
@@ -772,6 +772,7 @@ class BrowseServer(object):
continue
args, fmt, fmts, fname = self.browse_get_book_args(mi, id_)
args['other_formats'] = ''
args['fmt'] = fmt
if fmts and fmt:
other_fmts = [x for x in fmts if x.lower() != fmt.lower()]
if other_fmts:
@@ -794,8 +795,9 @@ class BrowseServer(object):
args['get_button'] = \
'<a href="%s" class="read" title="%s">%s</a>' % \
(xml(href, True), rt, xml(_('Get')))
args['get_url'] = xml(href, True)
else:
args['get_button'] = ''
args['get_button'] = args['get_url'] = ''
args['comments'] = comments_to_html(mi.comments)
args['stars'] = ''
if mi.rating:
@@ -814,7 +816,7 @@ class BrowseServer(object):
summs.append(self.browse_summary_template.format(**args))
raw = json.dumps('\n'.join(summs), ensure_ascii=False)
raw = json.dumps('\n'.join(summs), ensure_ascii=True)
return raw
def browse_render_details(self, id_):
@@ -825,12 +827,17 @@
else:
args, fmt, fmts, fname = self.browse_get_book_args(mi, id_,
add_category_links=True)
args['fmt'] = fmt
if fmt:
args['get_url'] = xml(self.opts.url_prefix + '/get/%s/%s_%d.%s'%(
fmt, fname, id_, fmt), True)
else:
args['get_url'] = ''
args['formats'] = ''
if fmts:
ofmts = [u'<a href="{4}/get/{0}/{1}_{2}.{0}" title="{3}">{3}</a>'\
.format(fmt, fname, id_, fmt.upper(),
self.opts.url_prefix) for fmt in
fmts]
.format(xfmt, fname, id_, xfmt.upper(),
self.opts.url_prefix) for xfmt in fmts]
ofmts = ', '.join(ofmts)
args['formats'] = ofmts
fields, comments = [], []
@@ -880,9 +887,10 @@ class BrowseServer(object):
c[1]) for c in comments]
comments = u'<div class="comments">%s</div>'%('\n\n'.join(comments))
return self.browse_details_template.format(id=id_,
title=xml(mi.title, True), fields=fields,
formats=args['formats'], comments=comments)
return self.browse_details_template.format(
id=id_, title=xml(mi.title, True), fields=fields,
get_url=args['get_url'], fmt=args['fmt'],
formats=args['formats'], comments=comments)
@Endpoint(mimetype='application/json; charset=utf-8')
def browse_details(self, id=None):
@ -893,7 +901,7 @@ class BrowseServer(object):
ans = self.browse_render_details(id_)
return json.dumps(ans, ensure_ascii=False)
return json.dumps(ans, ensure_ascii=True)
@Endpoint()
def browse_random(self, *args, **kwargs):

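The server changes above switch json.dumps to ensure_ascii=True as a workaround for a Chrome regression with UTF-8 book lists: non-ASCII characters are emitted as \uXXXX escapes, so the wire format is pure ASCII yet decodes to the same text in every browser:

```python
import json

title = 'Caf\u00e9 M\u00fcller'          # non-ASCII book metadata
raw = json.dumps(title, ensure_ascii=True)  # ASCII-only wire format
```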
View File

@@ -20,7 +20,7 @@ from calibre.ebooks.metadata import title_sort, author_to_author_sort
from calibre.utils.date import parse_date, isoformat, local_tz, UNDEFINED_DATE
from calibre import isbytestring, force_unicode
from calibre.constants import iswindows, DEBUG, plugins
from calibre.utils.icu import strcmp
from calibre.utils.icu import sort_key
from calibre import prints
from dateutil.tz import tzoffset
@@ -189,7 +189,8 @@ def pynocase(one, two, encoding='utf-8'):
return cmp(one.lower(), two.lower())
def icu_collator(s1, s2):
return strcmp(force_unicode(s1, 'utf-8'), force_unicode(s2, 'utf-8'))
return cmp(sort_key(force_unicode(s1, 'utf-8')),
sort_key(force_unicode(s2, 'utf-8')))
def load_c_extensions(conn, debug=DEBUG):
try:

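The icu_collator change above compares pre-computed sort keys instead of calling strcmp, matching how the collator is registered with sqlite. A standalone sketch (casefold stands in for ICU sort keys, and the `(a > b) - (a < b)` idiom replaces Python 2's cmp):

```python
import sqlite3

def sort_key(s):
    # Stand-in for calibre.utils.icu.sort_key (real ICU collation keys).
    return s.casefold()

def icu_collator(s1, s2):
    # sqlite collations must return a negative, zero, or positive int.
    k1, k2 = sort_key(s1), sort_key(s2)
    return (k1 > k2) - (k1 < k2)

conn = sqlite3.connect(':memory:')
conn.create_collation('icucollate', icu_collator)
conn.execute('CREATE TABLE books (title TEXT)')
conn.executemany('INSERT INTO books VALUES (?)',
                 [('zebra',), ('Apple',), ('banana',)])
titles = [r[0] for r in conn.execute(
    'SELECT title FROM books ORDER BY title COLLATE icucollate')]
```

Comparing cached keys avoids recomputing a full collation on every comparison during a sort.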
View File

@@ -123,6 +123,274 @@ os.remove(os.path.abspath(__file__))
# }}}
class ZshCompleter(object): # {{{
def __init__(self, opts):
self.opts = opts
self.dest = None
base = os.path.dirname(self.opts.staging_sharedir)
self.detect_zsh(base)
if not self.dest and base == '/usr/share':
# Ubuntu puts site-functions in /usr/local/share
self.detect_zsh('/usr/local/share')
self.commands = {}
def detect_zsh(self, base):
for x in ('vendor-completions', 'vendor-functions', 'site-functions'):
c = os.path.join(base, 'zsh', x)
if os.path.isdir(c) and os.access(c, os.W_OK):
self.dest = os.path.join(c, '_calibre')
break
def get_options(self, parser, cover_opts=('--cover',), opf_opts=('--opf',),
file_map={}):
if hasattr(parser, 'option_list'):
options = parser.option_list
for group in parser.option_groups:
options += group.option_list
else:
options = parser
for opt in options:
lo, so = opt._long_opts, opt._short_opts
if opt.takes_value():
lo = [x+'=' for x in lo]
so = [x+'+' for x in so]
ostrings = lo + so
if len(ostrings) > 1:
ostrings = u'{%s}'%','.join(ostrings)
else:
ostrings = ostrings[0]
exclude = u''
if opt.dest is None:
exclude = u"'(- *)'"
h = opt.help or ''
h = h.replace('"', "'").replace('[', '(').replace(
']', ')').replace('\n', ' ').replace(':', '\\:')
h = h.replace('%default', type(u'')(opt.default))
arg = ''
if opt.takes_value():
arg = ':"%s":'%h
if opt.dest in {'debug_pipeline', 'to_dir', 'outbox', 'with_library', 'library_path'}:
arg += "'_path_files -/'"
elif opt.choices:
arg += "(%s)"%'|'.join(opt.choices)
elif set(file_map).intersection(set(opt._long_opts)):
k = set(file_map).intersection(set(opt._long_opts))
exts = file_map[tuple(k)[0]]
if exts:
arg += "'_files -g \"%s\"'"%(' '.join('*.%s'%x for x in
tuple(exts) + tuple(x.upper() for x in exts)))
else:
arg += "_files"
elif (opt.dest in {'pidfile', 'attachment'}):
arg += "_files"
elif set(opf_opts).intersection(set(opt._long_opts)):
arg += "'_files -g \"*.opf\"'"
elif set(cover_opts).intersection(set(opt._long_opts)):
arg += "'_files -g \"%s\"'"%(' '.join('*.%s'%x for x in
tuple(pics) + tuple(x.upper() for x in pics)))
help_txt = u'"[%s]"'%h
yield u'%s%s%s%s '%(exclude, ostrings, help_txt, arg)
def opts_and_exts(self, name, op, exts, cover_opts=('--cover',),
opf_opts=('--opf',), file_map={}):
if not self.dest: return
exts = set(exts).union(x.upper() for x in exts)
pats = ('*.%s'%x for x in exts)
extra = ("'*:filename:_files -g \"%s\"' "%' '.join(pats),)
opts = '\\\n '.join(tuple(self.get_options(
op(), cover_opts=cover_opts, opf_opts=opf_opts, file_map=file_map)) + extra)
txt = '_arguments -s \\\n ' + opts
self.commands[name] = txt
def opts_and_words(self, name, op, words, takes_files=False):
if not self.dest: return
extra = ("'*:filename:_files' ",) if takes_files else ()
opts = '\\\n '.join(tuple(self.get_options(op())) + extra)
txt = '_arguments -s \\\n ' + opts
self.commands[name] = txt
def do_ebook_convert(self, f):
from calibre.ebooks.conversion.plumber import supported_input_formats
from calibre.web.feeds.recipes.collection import get_builtin_recipe_titles
from calibre.customize.ui import available_output_formats
from calibre.ebooks.conversion.cli import create_option_parser, group_titles
from calibre.utils.logging import DevNull
input_fmts = set(supported_input_formats())
output_fmts = set(available_output_formats())
iexts = {x.upper() for x in input_fmts}.union(input_fmts)
oexts = {x.upper() for x in output_fmts}.union(output_fmts)
w = lambda x: f.write(x if isinstance(x, bytes) else x.encode('utf-8'))
# Arg 1
w('\n_ebc_input_args() {')
w('\n local extras; extras=(')
w('\n {-h,--help}":Show Help"')
w('\n "--version:Show program version"')
w('\n "--list-recipes:List builtin recipe names"')
for recipe in sorted(set(get_builtin_recipe_titles())):
recipe = recipe.replace(':', '\\:').replace('"', '\\"')
w(u'\n "%s.recipe"'%(recipe))
w('\n ); _describe -t recipes "ebook-convert builtin recipes" extras')
w('\n _files -g "%s"'%' '.join(('*.%s'%x for x in iexts)))
w('\n}\n')
# Arg 2
w('\n_ebc_output_args() {')
w('\n local extras; extras=(')
for x in output_fmts:
w('\n ".{0}:Convert to a .{0} file with the same name as the input file"'.format(x))
w('\n ); _describe -t output "ebook-convert output" extras')
w('\n _files -g "%s"'%' '.join(('*.%s'%x for x in oexts)))
w('\n _path_files -/')
w('\n}\n')
log = DevNull()
def get_parser(input_fmt='epub', output_fmt=None):
of = ('dummy2.'+output_fmt) if output_fmt else 'dummy'
return create_option_parser(('ec', 'dummy1.'+input_fmt, of, '-h'), log)[0]
# Common options
input_group, output_group = group_titles()
p = get_parser()
opts = p.option_list
for group in p.option_groups:
if group.title not in {input_group, output_group}:
opts += group.option_list
opts.append(p.get_option('--pretty-print'))
opts.append(p.get_option('--input-encoding'))
opts = '\\\n '.join(tuple(
self.get_options(opts, file_map={'--search-replace':()})))
w('\n_ebc_common_opts() {')
w('\n _arguments -s \\\n ' + opts)
w('\n}\n')
# Input/Output format options
for fmts, group_title, func in (
(input_fmts, input_group, '_ebc_input_opts_%s'),
(output_fmts, output_group, '_ebc_output_opts_%s'),
):
for fmt in fmts:
is_input = group_title == input_group
if is_input and fmt in {'rar', 'zip', 'oebzip'}: continue
p = (get_parser(input_fmt=fmt) if is_input
else get_parser(output_fmt=fmt))
opts = None
for group in p.option_groups:
if group.title == group_title:
opts = [o for o in group.option_list if
'--pretty-print' not in o._long_opts and
'--input-encoding' not in o._long_opts]
if not opts: continue
opts = '\\\n '.join(tuple(self.get_options(opts)))
w('\n%s() {'%(func%fmt))
w('\n _arguments -s \\\n ' + opts)
w('\n}\n')
w('\n_ebook_convert() {')
w('\n local iarg oarg context state_descr state line\n typeset -A opt_args\n local ret=1')
w("\n _arguments '1: :_ebc_input_args' '*::ebook-convert output:->args' && ret=0")
w("\n case $state in \n (args)")
w('\n iarg=${line[1]##*.}; ')
w("\n _arguments '1: :_ebc_output_args' '*::ebook-convert options:->args' && ret=0")
w("\n case $state in \n (args)")
w('\n oarg=${line[1]##*.}')
w('\n iarg="_ebc_input_opts_${(L)iarg}"; oarg="_ebc_output_opts_${(L)oarg}"')
w('\n _call_function - $iarg; _call_function - $oarg; _ebc_common_opts; ret=0')
w('\n ;;\n esac')
w("\n ;;\n esac\n return ret")
w('\n}\n')
def do_calibredb(self, f):
import calibre.library.cli as cli
from calibre.customize.ui import available_catalog_formats
parsers, descs = {}, {}
for command in cli.COMMANDS:
op = getattr(cli, '%s_option_parser'%command)
args = [['t.epub']] if command == 'catalog' else []
p = op(*args)
if isinstance(p, tuple):
p = p[0]
parsers[command] = p
lines = [x.strip().partition('.')[0] for x in p.usage.splitlines() if x.strip() and
not x.strip().startswith('%prog')]
descs[command] = lines[0]
f.write('\n_calibredb_cmds() {\n local commands; commands=(\n')
f.write(' {-h,--help}":Show help"\n')
f.write(' "--version:Show version"\n')
for command, desc in descs.iteritems():
f.write(' "%s:%s"\n'%(
command, desc.replace(':', '\\:').replace('"', '\'')))
f.write(' )\n _describe -t commands "calibredb command" commands \n}\n')
subcommands = []
for command, parser in parsers.iteritems():
exts = []
if command == 'catalog':
exts = [x.lower() for x in available_catalog_formats()]
elif command == 'set_metadata':
exts = ['opf']
exts = set(exts).union(x.upper() for x in exts)
pats = ('*.%s'%x for x in exts)
extra = ("'*:filename:_files -g \"%s\"' "%' '.join(pats),) if exts else ()
if command in {'add', 'add_format'}:
extra = ("'*:filename:_files' ",)
opts = '\\\n '.join(tuple(self.get_options(
parser)) + extra)
txt = ' _arguments -s \\\n ' + opts
subcommands.append('(%s)'%command)
subcommands.append(txt)
subcommands.append(';;')
f.write('\n_calibredb() {')
f.write(
r'''
local state line state_descr context
typeset -A opt_args
local ret=1
_arguments \
'1: :_calibredb_cmds' \
'*::calibredb subcommand options:->args' \
&& ret=0
case $state in
(args)
case $line[1] in
(-h|--help|--version)
_message 'no more arguments' && ret=0
;;
%s
esac
;;
esac
return ret
'''%'\n '.join(subcommands))
f.write('\n}\n\n')
def write(self):
if self.dest:
self.commands['calibredb'] = ' _calibredb "$@"'
self.commands['ebook-convert'] = ' _ebook_convert "$@"'
with open(self.dest, 'wb') as f:
f.write('#compdef ' + ' '.join(self.commands)+'\n')
self.do_ebook_convert(f)
self.do_calibredb(f)
f.write('case $service in\n')
for c, txt in self.commands.iteritems():
if isinstance(txt, type(u'')):
txt = txt.encode('utf-8')
if isinstance(c, type(u'')):
c = c.encode('utf-8')
f.write(b'%s)\n%s\n;;\n'%(c, txt))
f.write('esac\n')
# }}}
class PostInstall:
def task_failed(self, msg):
@@ -217,7 +485,7 @@ class PostInstall:
def setup_completion(self): # {{{
try:
self.info('Setting up bash completion...')
self.info('Setting up command-line completion...')
from calibre.ebooks.metadata.cli import option_parser as metaop, filetypes as meta_filetypes
from calibre.ebooks.lrf.lrfparser import option_parser as lrf2lrsop
from calibre.gui2.lrf_renderer.main import option_parser as lrfviewerop
@@ -229,6 +497,7 @@ class PostInstall:
from calibre.ebooks.oeb.polish.main import option_parser as polish_op, SUPPORTED
from calibre.ebooks import BOOK_EXTENSIONS
input_formats = sorted(all_input_formats())
zsh = ZshCompleter(self.opts)
bc = os.path.join(os.path.dirname(self.opts.staging_sharedir),
'bash-completion')
if os.path.exists(bc):
@@ -240,6 +509,9 @@ class PostInstall:
f = os.path.join(self.opts.staging_etc, 'bash_completion.d/calibre')
if not os.path.exists(os.path.dirname(f)):
os.makedirs(os.path.dirname(f))
if zsh.dest:
self.info('Installing zsh completion to:', zsh.dest)
self.manifest.append(zsh.dest)
self.manifest.append(f)
complete = 'calibre-complete'
if getattr(sys, 'frozen_path', None):
@@ -247,20 +519,27 @@
self.info('Installing bash completion to', f)
with open(f, 'wb') as f:
def o_and_e(*args, **kwargs):
f.write(opts_and_exts(*args, **kwargs))
zsh.opts_and_exts(*args, **kwargs)
def o_and_w(*args, **kwargs):
f.write(opts_and_words(*args, **kwargs))
zsh.opts_and_words(*args, **kwargs)
f.write('# calibre Bash Shell Completion\n')
f.write(opts_and_exts('calibre', guiop, BOOK_EXTENSIONS))
f.write(opts_and_exts('lrf2lrs', lrf2lrsop, ['lrf']))
f.write(opts_and_exts('ebook-meta', metaop,
list(meta_filetypes()), cover_opts=['--cover', '-c'],
opf_opts=['--to-opf', '--from-opf']))
f.write(opts_and_exts('ebook-polish', polish_op,
[x.lower() for x in SUPPORTED], cover_opts=['--cover', '-c'],
opf_opts=['--opf', '-o']))
f.write(opts_and_exts('lrfviewer', lrfviewerop, ['lrf']))
f.write(opts_and_exts('ebook-viewer', viewer_op, input_formats))
f.write(opts_and_words('fetch-ebook-metadata', fem_op, []))
f.write(opts_and_words('calibre-smtp', smtp_op, []))
f.write(opts_and_words('calibre-server', serv_op, []))
o_and_e('calibre', guiop, BOOK_EXTENSIONS)
o_and_e('lrf2lrs', lrf2lrsop, ['lrf'], file_map={'--output':['lrs']})
o_and_e('ebook-meta', metaop,
list(meta_filetypes()), cover_opts=['--cover', '-c'],
opf_opts=['--to-opf', '--from-opf'])
o_and_e('ebook-polish', polish_op,
[x.lower() for x in SUPPORTED], cover_opts=['--cover', '-c'],
opf_opts=['--opf', '-o'])
o_and_e('lrfviewer', lrfviewerop, ['lrf'])
o_and_e('ebook-viewer', viewer_op, input_formats)
o_and_w('fetch-ebook-metadata', fem_op, [])
o_and_w('calibre-smtp', smtp_op, [])
o_and_w('calibre-server', serv_op, [])
f.write(textwrap.dedent('''
_ebook_device_ls()
{
@@ -335,6 +614,7 @@ class PostInstall:
complete -o nospace -C %s ebook-convert
''')%complete)
zsh.write()
except TypeError as err:
if 'resolve_entities' in str(err):
print 'You need python-lxml >= 2.0.5 for calibre'
@@ -451,7 +731,7 @@ def options(option_parser):
opts.extend(opt._long_opts)
return opts
def opts_and_words(name, op, words):
def opts_and_words(name, op, words, takes_files=False):
opts = '|'.join(options(op))
words = '|'.join([w.replace("'", "\\'") for w in words])
fname = name.replace('-', '_')
@@ -481,12 +761,15 @@ def opts_and_words(name, op, words):
}
complete -F _'''%(opts, words) + fname + ' ' + name +"\n\n").encode('utf-8')
pics = {'jpg', 'jpeg', 'gif', 'png', 'bmp'}
def opts_and_exts(name, op, exts, cover_opts=('--cover',), opf_opts=()):
def opts_and_exts(name, op, exts, cover_opts=('--cover',), opf_opts=(),
file_map={}):
opts = ' '.join(options(op))
exts.extend([i.upper() for i in exts])
exts='|'.join(exts)
fname = name.replace('-', '_')
spics = '|'.join(tuple(pics) + tuple(x.upper() for x in pics))
special_exts_template = '''\
%s )
_filedir %s
@@ -507,7 +790,7 @@ def opts_and_exts(name, op, exts, cover_opts=('--cover',), opf_opts=()):
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
opts="%(opts)s"
pics="@(jpg|jpeg|png|gif|bmp|JPG|JPEG|PNG|GIF|BMP)"
pics="@(%(pics)s)"
case "${prev}" in
%(extras)s
@@ -526,7 +809,7 @@ def opts_and_exts(name, op, exts, cover_opts=('--cover',), opf_opts=()):
esac
}
complete -o filenames -F _'''%dict(
complete -o filenames -F _'''%dict(pics=spics,
opts=opts, extras=extras, exts=exts) + fname + ' ' + name +"\n\n"
@@ -630,3 +913,4 @@ def main():
if __name__ == '__main__':
sys.exit(main())
sys.exit(main())

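The new ZshCompleter above turns optparse options into zsh _arguments specs. A compact, simplified sketch of that mapping for a single option (the core of get_options(), without choices, file patterns, or exclusion handling):

```python
from optparse import OptionParser

def zsh_spec(opt):
    """Render one optparse option as a zsh _arguments spec string:
    brace-grouped option names, a bracketed description, and an
    argument message when the option takes a value."""
    lo, so = opt._long_opts, opt._short_opts
    if opt.takes_value():
        lo = [x + '=' for x in lo]
        so = [x + '+' for x in so]
    ostrings = lo + so
    ostrings = ('{%s}' % ','.join(ostrings) if len(ostrings) > 1
                else ostrings[0])
    h = (opt.help or '').replace('"', "'").replace(':', '\\:')
    arg = ':"%s":' % h if opt.takes_value() else ''
    return '%s"[%s]"%s' % (ostrings, h, arg)

p = OptionParser()
p.add_option('-o', '--output', help='Output file')
spec = zsh_spec(p.get_option('--output'))
```

The `=`/`+` suffixes tell zsh the option takes a value (after `=` for long options, appended for short ones), which is why get_options() adds them only for value-taking options.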
File diff suppressed because it is too large (25 more file diffs similarly suppressed)
Some files were not shown because too many files have changed in this diff