merge from trunk

Commit e05bd25e2d by Lee, 2011-07-06 12:15:42 +04:00
168 changed files with 69748 additions and 58386 deletions


@@ -19,12 +19,92 @@
# new recipes:
# - title:
- version: 0.8.8
  date: 2011-07-01

  new features:
    - title: "Make author names in the Book Details panel clickable. Clicking them takes you to the Wikipedia page for the author by default. You may have to tell calibre to display author names in the Book details panel first via Preferences->Look & Feel->Book details. You can change the link for individual authors by right clicking on the author's name in the Tag Browser and selecting Manage Authors."

    - title: "Get Books: Add 'Open Books' as an available book source"

    - title: "Get Books: When a free download is available for a search result, for example for public domain books, allow direct download of the book into your calibre library."

    - title: "Support for detecting and mounting reader devices on FreeBSD."
      tickets: [802708]

    - title: "When creating a composite custom column, allow the use of HTML to create links and other markup that is displayed in the Book details panel"

    - title: "Add the swap_around_comma function to the template language."

    - title: "Drivers for HTC G2, Advent Vega, iRiver Story HD, Lark FreeMe and Moovyman mp7"

    - title: "Quick View: Survives changing libraries. Also allow sorting by series index as well as name."

    - title: "Connect to iTunes: Add an option to control how the driver works depending on whether you have iTunes set up to copy files to its media directory or not. Set this option by customizing the Apple driver in Preferences->Plugins. Having iTunes copy media to its storage folder is no longer necessary. See http://www.mobileread.com/forums/showthread.php?t=118559 for details"

    - title: "Remove the delete library functionality from calibre. Instead you can now remove a library, so that calibre forgets about it, but you have to delete the files manually"

  bug fixes:
    - title: "Fix a regression introduced in 0.8.7 in the Tag Browser that could cause calibre to crash after performing various actions"

    - title: "Fix an unhandled error when deleting all saved searches"
      tickets: [804383]

    - title: "Fix row numbers in a previous selection being incorrect after a sort operation."

    - title: "Fix ISBN identifier type not recognized if it is in upper case"
      tickets: [802288]

    - title: "Fix a regression in 0.8.7 that broke reading metadata from MOBI files in the Edit metadata dialog."
      tickets: [801981]

    - title: "Fix handling of filenames that have an even number of periods before the file extension."
      tickets: [801939]

    - title: "Fix lack of thread safety in the template format system, which could lead to incorrect template evaluation in some cases."
      tickets: [801944]

    - title: "Fix conversion to PDB when the input document has no text"
      tickets: [801888]

    - title: "Fix clicking on the first letter of author names generating an incorrect search."

    - title: "Fix updating bulk metadata in custom columns causing unnecessary Tag Browser refreshes."

    - title: "Fix a regression in 0.8.7 that broke renaming items via the Tag Browser"

    - title: "Fix a regression in 0.8.7 that caused the regex builder wizard to fail with LIT files as the input"

  improved recipes:
    - Zaman Gazetesi
    - Infobae
    - El Cronista
    - Critica de la Argentina
    - Buenos Aires Economico
    - El Universal (Venezuela)
    - wprost
    - Financial Times UK

  new recipes:
    - title: "Today's Zaman by thomass"

    - title: "Athens News by Darko Miletic"

    - title: "Catholic News Agency"
      author: Jetkey

    - title: "Arizona Republic"
      author: Jim Olo

    - title: "Add Ming Pao Vancouver and Toronto"
      author: Eddie Lau
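The swap_around_comma template function added in this release can be pictured with a small Python sketch. This is an illustrative re-implementation only, not calibre's actual code; the real function is invoked from inside calibre templates:

```python
def swap_around_comma(val):
    # Turn "Last, First" into "First Last"; values without a comma
    # pass through unchanged (illustrative sketch, not calibre's code).
    parts = [p.strip() for p in val.split(',', 1)]
    if len(parts) == 2 and parts[0] and parts[1]:
        return parts[1] + ' ' + parts[0]
    return val

print(swap_around_comma('Austen, Jane'))  # Jane Austen
print(swap_around_comma('Homer'))         # Homer
```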
- version: 0.8.7
  date: 2011-06-24

  new features:
    - title: "Connect to iTunes: You now need to tell iTunes to keep its own copy of every ebook. Do this in iTunes by going to Preferences->Advanced and setting the 'Copy files to iTunes Media folder when adding to library' option. To learn about why this is necessary, see: http://www.mobileread.com/forums/showthread.php?t=140260"
      type: major

    - title: "Add a couple of date related functions to the calibre template language to get today's date and create text based on the value of a date type field"


@@ -0,0 +1,68 @@
__license__ = 'GPL v3'
__copyright__ = '2010, jolo'
'''
azrepublic.com
'''
from calibre.web.feeds.recipes import BasicNewsRecipe

class AdvancedUserRecipe1307301031(BasicNewsRecipe):
    title          = u'AZRepublic'
    __author__     = 'Jim Olo'
    language       = 'en'
    description    = "The Arizona Republic is Arizona's leading provider of news and information, and has published a daily newspaper in Phoenix for more than 110 years"
    publisher      = 'AZRepublic/AZCentral'
    masthead_url   = 'http://freedom2t.com/wp-content/uploads/press_az_republic_v2.gif'
    cover_url      = 'http://www.valleyleadership.org/Common/Img/2line4c_AZRepublic%20with%20azcentral%20logo.jpg'
    category       = 'news, politics, USA, AZ, Arizona'
    oldest_article = 7
    max_articles_per_feed = 100
    remove_empty_feeds    = True
    no_stylesheets        = True
    remove_javascript     = True

    # extra_css = '.headline {font-size: medium;} \n .fact { padding-top: 10pt }'
    extra_css = ' body{ font-family: Verdana,Helvetica,Arial,sans-serif } .headline {font-size: medium} .introduction{font-weight: bold} .story-feature{display: block; padding: 0; border: 1px solid; width: 40%; font-size: small} .story-feature h2{text-align: center; text-transform: uppercase} '

    remove_attributes = ['width','height','h2','subHeadline','style']
    remove_tags = [
        dict(name='div', attrs={'id':['slidingBillboard', 'top728x90', 'subindex-header', 'topSearch']}),
        dict(name='div', attrs={'id':['simplesearch', 'azcLoginBox', 'azcLoginBoxInner', 'topNav']}),
        dict(name='div', attrs={'id':['carsDrop', 'homesDrop', 'rentalsDrop', 'classifiedDrop']}),
        dict(name='div', attrs={'id':['nav', 'mp', 'subnav', 'jobsDrop']}),
        dict(name='h6', attrs={'class':['section-header']}),
        dict(name='a', attrs={'href':['#comments']}),
        dict(name='div', attrs={'class':['articletools clearfix', 'floatRight']}),
        dict(name='div', attrs={'id':['fbFrame', 'ob', 'storyComments', 'storyGoogleAdBox']}),
        dict(name='div', attrs={'id':['storyTopHomes', 'openRight', 'footerwrap', 'copyright']}),
        dict(name='div', attrs={'id':['blogsHed', 'blog_comments', 'blogByline','blogTopics']}),
        dict(name='div', attrs={'id':['membersRightMain', 'dealsfooter', 'azrTopHed', 'azrRightCol']}),
        dict(name='div', attrs={'id':['ttdHeader', 'ttdTimeWeather']}),
        dict(name='div', attrs={'id':['membersRightMain', 'deals-header-wrap']}),
        dict(name='div', attrs={'id':['todoTopSearchBar', 'byline clearfix', 'subdex-topnav']}),
        dict(name='h1', attrs={'id':['SEOtext']}),
        dict(name='table', attrs={'class':['ap-mediabox-table']}),
        dict(name='p', attrs={'class':['ap_para']}),
        dict(name='span', attrs={'class':['source-org vcard', 'org fn']}),
        dict(name='a', attrs={'href':['http://hosted2.ap.org/APDEFAULT/privacy']}),
        dict(name='a', attrs={'href':['http://hosted2.ap.org/APDEFAULT/terms']}),
        dict(name='div', attrs={'id':['onespot_nextclick']}),
    ]

    feeds = [
        (u'FrontPage', u'http://www.azcentral.com/rss/feeds/republicfront.xml'),
        (u'TopUS-News', u'http://hosted.ap.org/lineups/USHEADS.rss?SITE=AZPHG&SECTION=HOME'),
        (u'WorldNews', u'http://hosted.ap.org/lineups/WORLDHEADS.rss?SITE=AZPHG&SECTION=HOME'),
        (u'TopBusiness', u'http://hosted.ap.org/lineups/BUSINESSHEADS.rss?SITE=AZPHG&SECTION=HOME'),
        (u'Entertainment', u'http://hosted.ap.org/lineups/ENTERTAINMENT.rss?SITE=AZPHG&SECTION=HOME'),
        (u'ArizonaNews', u'http://www.azcentral.com/rss/feeds/news.xml'),
        (u'Gilbert', u'http://www.azcentral.com/rss/feeds/gilbert.xml'),
        (u'Chandler', u'http://www.azcentral.com/rss/feeds/chandler.xml'),
        (u'DiningReviews', u'http://www.azcentral.com/rss/feeds/diningreviews.xml'),
        (u'AZBusiness', u'http://www.azcentral.com/rss/feeds/business.xml'),
        (u'ArizonaDeals', u'http://www.azcentral.com/members/Blog%7E/RealDealsblog'),
        (u'GroceryDeals', u'http://www.azcentral.com/members/Blog%7E/RealDealsblog/tag/2646')
    ]
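The remove_tags entries in the recipe above are name/attribute matchers. Conceptually, each dict selects tags the way this simplified matcher does; this is a sketch of the matching rule only, not BasicNewsRecipe's actual implementation, and `tag_matches` with its arguments is a hypothetical name:

```python
def tag_matches(tag_name, tag_attrs, rule):
    # rule mirrors a remove_tags entry, e.g.
    # {'name': 'div', 'attrs': {'id': ['nav', 'mp', 'subnav', 'jobsDrop']}}
    name = rule.get('name')
    if name is not None and tag_name != name:
        return False
    # every attribute constraint must be satisfied by one of the allowed values
    for attr, allowed in rule.get('attrs', {}).items():
        if tag_attrs.get(attr) not in allowed:
            return False
    return True

rule = {'name': 'div', 'attrs': {'id': ['nav', 'mp', 'subnav', 'jobsDrop']}}
print(tag_matches('div', {'id': 'nav'}, rule))     # True
print(tag_matches('div', {'id': 'footer'}, rule))  # False
```

Any tag for which some rule returns True is dropped from the article markup before conversion.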


@@ -0,0 +1,70 @@
__license__ = 'GPL v3'
__copyright__ = '2011, Darko Miletic <darko.miletic at gmail.com>'
'''
www.athensnews.gr
'''
from calibre.web.feeds.news import BasicNewsRecipe

class AthensNews(BasicNewsRecipe):
    title            = 'Athens News'
    __author__       = 'Darko Miletic'
    description      = 'Greece in English since 1952'
    publisher        = 'NEP Publishing Company SA'
    category         = 'news, politics, Greece, Athens'
    oldest_article   = 1
    max_articles_per_feed = 200
    no_stylesheets   = True
    encoding         = 'utf8'
    use_embedded_content = False
    language         = 'en_GR'
    remove_empty_feeds = True
    publication_type = 'newspaper'
    masthead_url     = 'http://www.athensnews.gr/sites/athensnews/themes/athensnewsv3/images/logo.jpg'
    extra_css = """
        body{font-family: Arial,Helvetica,sans-serif }
        img{margin-bottom: 0.4em; display:block}
        .big{font-size: xx-large; font-family: Georgia,serif}
        .articlepubdate{font-size: small; color: gray; font-family: Georgia,serif}
        .lezanta{font-size: x-small; font-weight: bold; text-align: left; margin-bottom: 1em; display: block}
    """

    conversion_options = {
        'comment'            : description
        , 'tags'             : category
        , 'publisher'        : publisher
        , 'language'         : language
        , 'linearize_tables' : True
    }

    remove_tags = [
        dict(name=['meta','link'])
    ]
    keep_only_tags = [
        dict(name='span', attrs={'class':'big'})
        ,dict(name='td', attrs={'class':['articlepubdate','text']})
    ]
    remove_attributes = ['lang']

    feeds = [
        (u'News'            , u'http://www.athensnews.gr/category/1/feed' )
        ,(u'Politics'        , u'http://www.athensnews.gr/category/8/feed' )
        ,(u'Business'        , u'http://www.athensnews.gr/category/2/feed' )
        ,(u'Economy'         , u'http://www.athensnews.gr/category/11/feed')
        ,(u'Community'       , u'http://www.athensnews.gr/category/5/feed' )
        ,(u'Arts'            , u'http://www.athensnews.gr/category/3/feed' )
        ,(u'Living in Athens', u'http://www.athensnews.gr/category/7/feed' )
        ,(u'Sports'          , u'http://www.athensnews.gr/category/4/feed' )
        ,(u'Travel'          , u'http://www.athensnews.gr/category/6/feed' )
        ,(u'Letters'         , u'http://www.athensnews.gr/category/44/feed')
        ,(u'Media'           , u'http://www.athensnews.gr/multimedia/feed' )
    ]

    def print_version(self, url):
        return url + '?action=print'

    def preprocess_html(self, soup):
        for item in soup.findAll(style=True):
            del item['style']
        return soup


@@ -1,72 +1,59 @@
-#!/usr/bin/env python
 __license__ = 'GPL v3'
-__copyright__ = '2009, Darko Miletic <darko.miletic at gmail.com>'
+__copyright__ = '2009-2011, Darko Miletic <darko.miletic at gmail.com>'
 '''
-elargentino.com
+www.diariobae.com
 '''
+from calibre import strftime
 from calibre.web.feeds.news import BasicNewsRecipe
-from calibre.ebooks.BeautifulSoup import Tag

 class BsAsEconomico(BasicNewsRecipe):
     title = 'Buenos Aires Economico'
     __author__ = 'Darko Miletic'
-    description = 'Revista Argentina'
-    publisher = 'ElArgentino.com'
+    description = 'Diario BAE es el diario economico-politico con mas influencia en la Argentina. Fuente de empresarios y politicos del pais y el exterior. El pozo estaria aportando en periodos breves un volumen equivalente a 800m3 diarios. Pero todavia deben efectuarse otras perforaciones adicionales.'
+    publisher = 'Diario BAE'
     category = 'news, politics, economy, Argentina'
     oldest_article = 2
     max_articles_per_feed = 100
     no_stylesheets = True
     use_embedded_content = False
     encoding = 'utf-8'
     language = 'es_AR'
-    lang = 'es-AR'
-    direction = 'ltr'
-    INDEX = 'http://www.elargentino.com/medios/121/Buenos-Aires-Economico.html'
-    extra_css = ' .titulo{font-size: x-large; font-weight: bold} .volantaImp{font-size: small; font-weight: bold} '
+    cover_url = strftime('http://www.diariobae.com/imgs_portadas/%Y%m%d_portadasBAE.jpg')
+    masthead_url = 'http://www.diariobae.com/img/logo_bae.png'
+    remove_empty_feeds = True
+    publication_type = 'newspaper'
+    extra_css = """
+        body{font-family: Georgia,"Times New Roman",Times,serif}
+        #titulo{font-size: x-large}
+        #epi{font-size: small; font-style: italic; font-weight: bold}
+        img{display: block; margin-top: 1em}
+    """

-    html2lrf_options = [
-        '--comment'  , description
-        , '--category' , category
-        , '--publisher', publisher
-    ]
-    html2epub_options = 'publisher="' + publisher + '"\ncomments="' + description + '"\ntags="' + category + '"\noverride_css=" p {text-indent: 0cm; margin-top: 0em; margin-bottom: 0.5em} "'
+    conversion_options = {
+        'comment'     : description
+        , 'tags'      : category
+        , 'publisher' : publisher
+        , 'language'  : language
+    }

-    keep_only_tags = [dict(name='div', attrs={'class':'ContainerPop'})]
-    remove_tags = [dict(name='link')]
+    remove_tags_before = dict(attrs={'id':'titulo'})
+    remove_tags_after  = dict(attrs={'id':'autor' })
+    remove_tags = [
+        dict(name=['meta','base','iframe','link','lang'])
+        ,dict(attrs={'id':'barra_tw'})
+    ]
+    remove_attributes = ['data-count','data-via']

-    feeds = [(u'Articulos', u'http://www.elargentino.com/Highlights.aspx?ParentType=Section&ParentId=121&Content-Type=text/xml&ChannelDesc=Buenos%20Aires%20Econ%C3%B3mico')]
+    feeds = [
+        (u'Argentina'    , u'http://www.diariobae.com/rss/argentina.xml'   )
+        ,(u'Valores'      , u'http://www.diariobae.com/rss/valores.xml'     )
+        ,(u'Finanzas'     , u'http://www.diariobae.com/rss/finanzas.xml'    )
+        ,(u'Negocios'     , u'http://www.diariobae.com/rss/negocios.xml'    )
+        ,(u'Mundo'        , u'http://www.diariobae.com/rss/mundo.xml'       )
+        ,(u'5 dias'       , u'http://www.diariobae.com/rss/5dias.xml'       )
+        ,(u'Espectaculos' , u'http://www.diariobae.com/rss/espectaculos.xml')
+    ]

-    def print_version(self, url):
-        main, sep, article_part = url.partition('/nota-')
-        article_id, rsep, rrest = article_part.partition('-')
-        return u'http://www.elargentino.com/Impresion.aspx?Id=' + article_id

     def preprocess_html(self, soup):
         for item in soup.findAll(style=True):
             del item['style']
-        soup.html['lang'] = self.lang
-        soup.html['dir' ] = self.direction
-        mlang = Tag(soup,'meta',[("http-equiv","Content-Language"),("content",self.lang)])
-        mcharset = Tag(soup,'meta',[("http-equiv","Content-Type"),("content","text/html; charset=utf-8")])
-        soup.head.insert(0,mlang)
-        soup.head.insert(1,mcharset)
         return soup

-    def get_cover_url(self):
-        cover_url = None
-        soup = self.index_to_soup(self.INDEX)
-        cover_item = soup.find('div',attrs={'class':'colder'})
-        if cover_item:
-            clean_url = self.image_url_processor(None,cover_item.div.img['src'])
-            cover_url = 'http://www.elargentino.com' + clean_url + '&height=600'
-        return cover_url
-
-    def image_url_processor(self, baseurl, url):
-        base, sep, rest = url.rpartition('?Id=')
-        img, sep2, rrest = rest.partition('&')
-        return base + sep + img
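The removed print_version above derives the print URL by slicing the article id out of the path with str.partition. The same logic can be run standalone (the sample URL below is hypothetical):

```python
def print_version(url):
    # '.../nota-123456-some-title.html' -> the digits after '/nota-'
    main, sep, article_part = url.partition('/nota-')
    article_id, _, _ = article_part.partition('-')
    return u'http://www.elargentino.com/Impresion.aspx?Id=' + article_id

print(print_version('http://www.elargentino.com/nota-123456-un-titulo.html'))
# http://www.elargentino.com/Impresion.aspx?Id=123456
```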


@@ -0,0 +1,13 @@
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1301972345(BasicNewsRecipe):
    title          = u'Catholic News Agency'
    language       = 'en'
    __author__     = 'Jetkey'
    oldest_article = 5
    max_articles_per_feed = 20

    feeds = [(u'U.S. News', u'http://feeds.feedburner.com/catholicnewsagency/dailynews-us'),
             (u'Vatican', u'http://feeds.feedburner.com/catholicnewsagency/dailynews-vatican'),
             (u'Bishops Corner', u'http://feeds.feedburner.com/catholicnewsagency/columns/bishopscorner'),
             (u'Saint of the Day', u'http://feeds.feedburner.com/catholicnewsagency/saintoftheday')]


@@ -1,83 +1,63 @@
 from calibre.web.feeds.news import BasicNewsRecipe
-import re

 class Cracked(BasicNewsRecipe):
     title = u'Cracked.com'
-    __author__ = u'Nudgenudge'
+    __author__ = 'UnWeave'
     language = 'en'
-    description = 'America''s Only Humor and Video Site, since 1958'
+    description = "America's Only HumorSite since 1958"
     publisher = 'Cracked'
     category = 'comedy, lists'
-    oldest_article = 2
-    delay = 10
-    max_articles_per_feed = 2
+    oldest_article = 3 #days
+    max_articles_per_feed = 100
     no_stylesheets = True
-    encoding = 'cp1252'
+    encoding = 'ascii'
     remove_javascript = True
     use_embedded_content = False
-    INDEX = u'http://www.cracked.com'
-    extra_css = """
-        .pageheader_type{font-size: x-large; font-weight: bold; color: #828D74}
-        .pageheader_title{font-size: xx-large; color: #394128}
-        .pageheader_byline{font-size: small; font-weight: bold; color: #394128}
-        .score_bg {display: inline; width: 100%; margin-bottom: 2em}
-        .score_column_1{ padding-left: 10px; font-size: small; width: 50%}
-        .score_column_2{ padding-left: 10px; font-size: small; width: 50%}
-        .score_column_3{ padding-left: 10px; font-size: small; width: 50%}
-        .score_header{font-size: large; color: #50544A}
-        .bodytext{display: block}
-        body{font-family: Helvetica,Arial,sans-serif}
-    """
+    feeds = [ (u'Articles', u'http://feeds.feedburner.com/CrackedRSS/') ]

     conversion_options = {
         'comment'     : description
         , 'tags'      : category
         , 'publisher' : publisher
         , 'language'  : language
-        , 'linearize_tables' : True
     }

-    keep_only_tags = [
-        dict(name='div', attrs={'class':['Column1']})
-    ]
-    feeds = [(u'Articles', u'http://feeds.feedburner.com/CrackedRSS')]
+    remove_tags_before = dict(id='PrimaryContent')
+    remove_tags_after = dict(name='div', attrs={'class':'shareBar'})
+    remove_tags = [ dict(name='div', attrs={'class':['social', 'FacebookLike', 'shareBar']}),
+                    dict(name='div', attrs={'id':['inline-share-buttons']}),
+                    dict(name='span', attrs={'class':['views', 'KonaFilter']}),
+                    #dict(name='img'),
+                  ]

-    def get_article_url(self, article):
-        return article.get('guid', None)
-
-    def cleanup_page(self, soup):
-        for item in soup.findAll(style=True):
-            del item['style']
-        for alink in soup.findAll('a'):
-            if alink.string is not None:
-                tstr = alink.string
-                alink.replaceWith(tstr)
-        for div_to_remove in soup.findAll('div', attrs={'id':['googlead_1','fb-like-article','comments_section']}):
-            div_to_remove.extract()
-        for div_to_remove in soup.findAll('div', attrs={'class':['share_buttons_col_1','GenericModule1']}):
-            div_to_remove.extract()
-        for div_to_remove in soup.findAll('div', attrs={'class':re.compile("prev_next")}):
-            div_to_remove.extract()
-        for ul_to_remove in soup.findAll('ul', attrs={'class':['Nav6']}):
-            ul_to_remove.extract()
-        for image in soup.findAll('img', attrs={'alt': 'article image'}):
-            image.extract()
-
-    def append_page(self, soup, appendtag, position):
-        pager = soup.find('a',attrs={'class':'next_arrow_active'})
-        if pager:
-            nexturl = self.INDEX + pager['href']
-            soup2 = self.index_to_soup(nexturl)
-            texttag = soup2.find('div', attrs={'class':re.compile("userStyled")})
-            newpos = len(texttag.contents)
-            self.append_page(soup2,texttag,newpos)
-            texttag.extract()
-            self.cleanup_page(appendtag)
-            appendtag.insert(position,texttag)
-        else:
-            self.cleanup_page(appendtag)
+    def appendPage(self, soup, appendTag, position):
+        # Check if article has multiple pages
+        pageNav = soup.find('nav', attrs={'class':'PaginationContent'})
+        if pageNav:
+            # Check not at last page
+            nextPage = pageNav.find('a', attrs={'class':'next'})
+            if nextPage:
+                nextPageURL = nextPage['href']
+                nextPageSoup = self.index_to_soup(nextPageURL)
+                # 8th <section> tag contains article content
+                nextPageContent = nextPageSoup.findAll('section')[7]
+                newPosition = len(nextPageContent.contents)
+                self.appendPage(nextPageSoup,nextPageContent,newPosition)
+                nextPageContent.extract()
+                pageNav.extract()
+                appendTag.insert(position,nextPageContent)

     def preprocess_html(self, soup):
-        self.append_page(soup, soup.body, 3)
-        return self.adeify_images(soup)
+        self.appendPage(soup, soup.body, 3)
+        return soup
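The new appendPage method recurses to the last page of an article first and then splices each page's content back in at the insertion point, so pages end up in reading order. The traversal can be sketched abstractly; `pages` and `collect_pages` below are hypothetical stand-ins for the soup objects the recipe actually walks:

```python
def collect_pages(page_id, pages):
    # pages maps id -> (content, next_id or None); follow the chain
    # and append later pages after the current page's content.
    content, next_id = pages[page_id]
    parts = [content]
    if next_id is not None:
        parts.extend(collect_pages(next_id, pages))
    return parts

pages = {1: ('page-1', 2), 2: ('page-2', 3), 3: ('page-3', None)}
print(collect_pages(1, pages))  # ['page-1', 'page-2', 'page-3']
```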


@@ -1,69 +0,0 @@
#!/usr/bin/env python

__license__ = 'GPL v3'
__copyright__ = '2008, Darko Miletic <darko.miletic at gmail.com>'
'''
criticadigital.com
'''
from calibre.web.feeds.news import BasicNewsRecipe

class CriticaDigital(BasicNewsRecipe):
    title        = 'Critica de la Argentina'
    __author__   = 'Darko Miletic and Sujata Raman'
    description  = 'Noticias de Argentina'
    oldest_article = 2
    max_articles_per_feed = 100
    language     = 'es_AR'
    no_stylesheets = True
    use_embedded_content = False
    encoding     = 'cp1252'
    extra_css = '''
        h1{font-family:"Trebuchet MS";}
        h3{color:#9A0000; font-family:Tahoma; font-size:x-small;}
        h2{color:#504E53; font-family:Arial,Helvetica,sans-serif ;font-size:small;}
        #epigrafe{font-family:Arial,Helvetica,sans-serif ;color:#666666 ; font-size:x-small;}
        p {font-family:Arial,Helvetica,sans-serif;}
        #fecha{color:#858585; font-family:Tahoma; font-size:x-small;}
        #autor{color:#858585; font-family:Tahoma; font-size:x-small;}
        #hora{color:#F00000;font-family:Tahoma; font-size:x-small;}
    '''

    keep_only_tags = [
        dict(name='div', attrs={'class':['bloqueTitulosNoticia','cfotonota']})
        ,dict(name='div', attrs={'id':'boxautor'})
        ,dict(name='p', attrs={'id':'textoNota'})
    ]
    remove_tags = [
        dict(name='div', attrs={'class':'box300' })
        ,dict(name='div', style=True )
        ,dict(name='div', attrs={'class':'titcomentario'})
        ,dict(name='div', attrs={'class':'comentario' })
        ,dict(name='div', attrs={'class':'paginador' })
    ]

    feeds = [
        (u'Politica'     , u'http://www.criticadigital.com/herramientas/rss.php?ch=politica'    )
        ,(u'Economia'     , u'http://www.criticadigital.com/herramientas/rss.php?ch=economia'    )
        ,(u'Deportes'     , u'http://www.criticadigital.com/herramientas/rss.php?ch=deportes'    )
        ,(u'Espectaculos' , u'http://www.criticadigital.com/herramientas/rss.php?ch=espectaculos')
        ,(u'Mundo'        , u'http://www.criticadigital.com/herramientas/rss.php?ch=mundo'       )
        ,(u'Policiales'   , u'http://www.criticadigital.com/herramientas/rss.php?ch=policiales'  )
        ,(u'Sociedad'     , u'http://www.criticadigital.com/herramientas/rss.php?ch=sociedad'    )
        ,(u'Salud'        , u'http://www.criticadigital.com/herramientas/rss.php?ch=salud'       )
        ,(u'Tecnologia'   , u'http://www.criticadigital.com/herramientas/rss.php?ch=tecnologia'  )
        ,(u'Santa Fe'     , u'http://www.criticadigital.com/herramientas/rss.php?ch=santa_fe'    )
    ]

    def get_cover_url(self):
        cover_url = None
        index = 'http://www.criticadigital.com/impresa/'
        soup = self.index_to_soup(index)
        link_item = soup.find('div',attrs={'class':'tapa'})
        if link_item:
            cover_url = index + link_item.img['src']
        return cover_url


@@ -1,72 +1,59 @@
-#!/usr/bin/env python
 __license__ = 'GPL v3'
-__copyright__ = '2008, Darko Miletic <darko.miletic at gmail.com>'
+__copyright__ = '2008-2011, Darko Miletic <darko.miletic at gmail.com>'
 '''
-cronista.com
+www.cronista.com
 '''
 from calibre.web.feeds.news import BasicNewsRecipe

-class ElCronista(BasicNewsRecipe):
-    title = 'El Cronista'
+class Pagina12(BasicNewsRecipe):
+    title = 'El Cronista Comercial'
     __author__ = 'Darko Miletic'
-    description = 'Noticias de Argentina'
+    description = 'El Cronista Comercial es el Diario economico-politico mas valorado. Es la fuente mas confiable de informacion en temas de economia, finanzas y negocios enmarcados politicamente.'
+    publisher = 'Cronista.com'
+    category = 'news, politics, economy, finances, Argentina'
     oldest_article = 2
-    language = 'es_AR'
-    max_articles_per_feed = 100
+    max_articles_per_feed = 200
     no_stylesheets = True
     use_embedded_content = False
-    encoding = 'cp1252'
+    encoding = 'utf8'
+    language = 'es_AR'
+    remove_empty_feeds = True
+    publication_type = 'newspaper'
+    masthead_url = 'http://www.cronista.com/export/sites/diarioelcronista/arte/header-logo.gif'
+    extra_css = """
+        body{font-family: Arial,Helvetica,sans-serif }
+        h2{font-family: Georgia,"Times New Roman",Times,serif }
+        img{margin-bottom: 0.4em; display:block}
+        .nom{font-weight: bold; vertical-align: baseline}
+        .autor-cfoto{border-bottom: 1px solid #D2D2D2;
+                     border-top: 1px solid #D2D2D2;
+                     display: inline-block;
+                     margin: 0 10px 10px 0;
+                     padding: 10px;
+                     width: 210px}
+        .under{font-weight: bold}
+        .time{font-size: small}
+    """

-    html2lrf_options = [
-        '--comment'   , description
-        , '--category'  , 'news, Argentina'
-        , '--publisher' , title
-    ]
+    conversion_options = {
+        'comment'     : description
+        , 'tags'      : category
+        , 'publisher' : publisher
+        , 'language'  : language
+    }

-    keep_only_tags = [
-        dict(name='table', attrs={'width':'100%' })
-        ,dict(name='h1' , attrs={'class':'Arialgris16normal'})
-    ]
-    remove_tags = [dict(name='a', attrs={'class':'Arialazul12'})]
+    remove_tags = [
+        dict(name=['meta','link','base','iframe','object','embed'])
+        ,dict(attrs={'class':['user-tools','tabsmedia']})
+    ]
+    remove_attributes = ['lang']
+    remove_tags_before = dict(attrs={'class':'top'})
+    remove_tags_after = dict(attrs={'class':'content-nota'})

-    feeds = [(u'Ultimas noticias', u'http://www.cronista.com/rss.html')]
+    feeds = [
+        (u'Economia'               , u'http://www.cronista.com/adjuntos/8/rss/Economia_EI.xml'            )
+        ,(u'Negocios'               , u'http://www.cronista.com/adjuntos/8/rss/negocios_EI.xml'            )
+        ,(u'Ultimo momento'         , u'http://www.cronista.com/adjuntos/8/rss/ultimo_momento.xml'         )
+        ,(u'Finanzas y Mercados'    , u'http://www.cronista.com/adjuntos/8/rss/Finanzas_Mercados_EI.xml'   )
+        ,(u'Financial Times'        , u'http://www.cronista.com/adjuntos/8/rss/FT_EI.xml'                  )
+        ,(u'Opinion edicion impresa', u'http://www.cronista.com/adjuntos/8/rss/opinion_edicion_impresa.xml')
+        ,(u'Socialmente Responsables', u'http://www.cronista.com/adjuntos/8/rss/Socialmente_Responsables.xml')
+        ,(u'Asuntos Legales'        , u'http://www.cronista.com/adjuntos/8/rss/asuntoslegales.xml'         )
+        ,(u'IT Business'            , u'http://www.cronista.com/adjuntos/8/rss/itbusiness.xml'             )
+        ,(u'Management y RR.HH.'    , u'http://www.cronista.com/adjuntos/8/rss/management.xml'             )
+        ,(u'Inversiones Personales' , u'http://www.cronista.com/adjuntos/8/rss/inversionespersonales.xml'  )
+    ]

-    def print_version(self, url):
-        main, sep, rest = url.partition('.com/notas/')
-        article_id, lsep, rrest = rest.partition('-')
-        return 'http://www.cronista.com/interior/index.php?p=imprimir_nota&idNota=' + article_id

     def preprocess_html(self, soup):
-        mtag = '<meta http-equiv="Content-Type" content="text/html; charset=utf-8">'
-        soup.head.insert(0,mtag)
-        soup.head.base.extract()
-        htext = soup.find('h1',attrs={'class':'Arialgris16normal'})
-        htext.name = 'p'
-        soup.prettify()
+        for item in soup.findAll(style=True):
+            del item['style']
         return soup

-    def get_cover_url(self):
-        cover_url = None
-        index = 'http://www.cronista.com/contenidos/'
-        soup = self.index_to_soup(index + 'ee.html')
-        link_item = soup.find('a',attrs={'href':"javascript:Close()"})
-        if link_item:
-            cover_url = index + link_item.img['src']
-        return cover_url


@@ -1,5 +1,5 @@
 __license__ = 'GPL v3'
-__copyright__ = '2010, Darko Miletic <darko.miletic at gmail.com>'
+__copyright__ = '2010-2011, Darko Miletic <darko.miletic at gmail.com>'
 '''
 www.eluniversal.com
 '''
@@ -15,12 +15,20 @@ class ElUniversal(BasicNewsRecipe):
     max_articles_per_feed = 100
     no_stylesheets = True
     use_embedded_content = False
+    remove_empty_feeds = True
     encoding = 'cp1252'
     publisher = 'El Universal'
     category = 'news, Caracas, Venezuela, world'
     language = 'es_VE'
+    publication_type = 'newspaper'
     cover_url = strftime('http://static.eluniversal.com/%Y/%m/%d/portada.jpg')
+    extra_css = """
+        .txt60{font-family: Tahoma,Geneva,sans-serif; font-size: small}
+        .txt29{font-family: Tahoma,Geneva,sans-serif; font-size: small; color: gray}
+        .txt38{font-family: Georgia,"Times New Roman",Times,serif; font-size: xx-large}
+        .txt35{font-family: Georgia,"Times New Roman",Times,serif; font-size: large}
+        body{font-family: Verdana,Arial,Helvetica,sans-serif}
+    """
     conversion_options = {
         'comments'   : description
         ,'tags'      : category
@@ -28,10 +36,11 @@ class ElUniversal(BasicNewsRecipe):
         ,'publisher' : publisher
     }

-    keep_only_tags = [dict(name='div', attrs={'class':'Nota'})]
+    remove_tags_before = dict(attrs={'class':'header-print MB10'})
+    remove_tags_after = dict(attrs={'id':'SizeText'})
     remove_tags = [
-        dict(name=['object','link','script','iframe'])
-        ,dict(name='div',attrs={'class':'Herramientas'})
+        dict(name=['object','link','script','iframe','meta'])
+        ,dict(attrs={'class':'header-print MB10'})
     ]
     feeds = [


@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 __license__ = 'GPL v3'
-__copyright__ = '2008 - 2009, Darko Miletic <darko.miletic at gmail.com>'
+__copyright__ = 'Copyright 2011 Starson17'
 '''
 engadget.com
 '''
@@ -9,14 +9,29 @@ engadget.com
 from calibre.web.feeds.news import BasicNewsRecipe

 class Engadget(BasicNewsRecipe):
-    title = u'Engadget'
-    __author__ = 'Darko Miletic'
+    title = u'Engadget_Full'
+    __author__ = 'Starson17'
+    __version__ = 'v1.00'
+    __date__ = '02, July 2011'
     description = 'Tech news'
     language = 'en'
     oldest_article = 7
     max_articles_per_feed = 100
     no_stylesheets = True
-    use_embedded_content = True
+    use_embedded_content = False
+    remove_javascript = True
+    remove_empty_feeds = True

-    feeds = [ (u'Posts', u'http://www.engadget.com/rss.xml')]
+    keep_only_tags = [dict(name='div', attrs={'class':['post_content permalink ','post_content permalink alt-post-full']})]
+    remove_tags = [dict(name='div', attrs={'class':['filed_under','post_footer']})]
+    remove_tags_after = [dict(name='div', attrs={'class':['post_footer']})]
+
+    feeds = [(u'Posts', u'http://www.engadget.com/rss.xml')]
+
+    extra_css = '''
+        h1{font-family:Arial,Helvetica,sans-serif; font-weight:bold;font-size:large;}
+        h2{font-family:Arial,Helvetica,sans-serif; font-weight:normal;font-size:small;}
+        p{font-family:Arial,Helvetica,sans-serif;font-size:small;}
+        body{font-family:Helvetica,Arial,sans-serif;font-size:small;}
+    '''


@@ -1,5 +1,6 @@
 from calibre.web.feeds.news import BasicNewsRecipe
 import re
+from datetime import date, timedelta

 class HBR(BasicNewsRecipe):
@@ -12,13 +13,14 @@ class HBR(BasicNewsRecipe):
     no_stylesheets = True
     LOGIN_URL = 'http://hbr.org/login?request_url=/'
-    INDEX = 'http://hbr.org/current'
+    INDEX = 'http://hbr.org/archive-toc/BR'

     keep_only_tags = [dict(name='div', id='pageContainer')]
     remove_tags = [dict(id=['mastheadContainer', 'magazineHeadline',
         'articleToolbarTopRD', 'pageRightSubColumn', 'pageRightColumn',
         'todayOnHBRListWidget', 'mostWidget', 'keepUpWithHBR',
         'mailingListTout', 'partnerCenter', 'pageFooter',
+        'superNavHeadContainer', 'hbrDisqus',
         'articleToolbarTop', 'articleToolbarBottom', 'articleToolbarRD']),
         dict(name='iframe')]
     extra_css = '''
@@ -55,9 +57,14 @@ class HBR(BasicNewsRecipe):

     def hbr_get_toc(self):
-        soup = self.index_to_soup(self.INDEX)
-        url = soup.find('a', text=lambda t:'Full Table of Contents' in t).parent.get('href')
-        return self.index_to_soup('http://hbr.org'+url)
+        today = date.today()
+        future = today + timedelta(days=30)
+        for x in [x.strftime('%y%m') for x in (future, today)]:
+            url = self.INDEX + x
+            soup = self.index_to_soup(url)
+            if not soup.find(text='Issue Not Found'):
+                return soup
+        raise Exception('Could not find current issue')

     def hbr_parse_section(self, container, feeds):
         current_section = None


@@ -6,7 +6,7 @@ class TheIndependent(BasicNewsRecipe):
     language = 'en_GB'
     __author__ = 'Krittika Goyal'
     oldest_article = 1 #days
-    max_articles_per_feed = 25
+    max_articles_per_feed = 30
     encoding = 'latin1'
     no_stylesheets = True
@@ -25,24 +25,39 @@ class TheIndependent(BasicNewsRecipe):
             'http://www.independent.co.uk/news/uk/rss'),
         ('World',
             'http://www.independent.co.uk/news/world/rss'),
-        ('Sport',
-            'http://www.independent.co.uk/sport/rss'),
-        ('Arts and Entertainment',
-            'http://www.independent.co.uk/arts-entertainment/rss'),
         ('Business',
             'http://www.independent.co.uk/news/business/rss'),
-        ('Life and Style',
-            'http://www.independent.co.uk/life-style/gadgets-and-tech/news/rss'),
+        ('Science',
+            'http://www.independent.co.uk/news/science/rss'),
         ('People',
             'http://www.independent.co.uk/news/people/rss'),
-        ('Science',
-            'http://www.independent.co.uk/news/science/rss'),
         ('Media',
             'http://www.independent.co.uk/news/media/rss'),
-        ('Health and Families',
-            'http://www.independent.co.uk/life-style/health-and-families/rss'),
+        ('Education',
+            'http://www.independent.co.uk/news/education/rss'),
         ('Obituaries',
            'http://www.independent.co.uk/news/obituaries/rss'),
+        ('Opinion',
+            'http://www.independent.co.uk/opinion/rss'),
+        ('Environment',
+            'http://www.independent.co.uk/environment/rss'),
+        ('Sport',
+            'http://www.independent.co.uk/sport/rss'),
+        ('Life and Style',
+            'http://www.independent.co.uk/life-style/rss'),
+        ('Arts and Entertainment',
+            'http://www.independent.co.uk/arts-entertainment/rss'),
+        ('Travel',
+            'http://www.independent.co.uk/travel/rss'),
+        ('Money',
+            'http://www.independent.co.uk/money/rss'),
         ]

     def preprocess_html(self, soup):


@@ -1,5 +1,5 @@
 __license__ = 'GPL v3'
-__copyright__ = '2008-2010, Darko Miletic <darko.miletic at gmail.com>'
+__copyright__ = '2008-2011, Darko Miletic <darko.miletic at gmail.com>'
 '''
 infobae.com
 '''
@@ -9,7 +9,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
 class Infobae(BasicNewsRecipe):
     title = 'Infobae.com'
     __author__ = 'Darko Miletic and Sujata Raman'
-    description = 'Informacion Libre las 24 horas'
+    description = 'Infobae.com es el sitio de noticias con mayor actualizacion de Latinoamérica. Noticias actualizadas las 24 horas, los 365 días del año.'
     publisher = 'Infobae.com'
     category = 'news, politics, Argentina'
     oldest_article = 1
@@ -17,13 +17,13 @@ class Infobae(BasicNewsRecipe):
     no_stylesheets = True
     use_embedded_content = False
     language = 'es_AR'
-    encoding = 'cp1252'
-    masthead_url = 'http://www.infobae.com/imgs/header/header.gif'
-    remove_javascript = True
+    encoding = 'utf8'
+    masthead_url = 'http://www.infobae.com/media/img/static/logo-infobae.gif'
     remove_empty_feeds = True
     extra_css = '''
-        body{font-family:Arial,Helvetica,sans-serif;}
-        .popUpTitulo{color:#0D4261; font-size: xx-large}
+        body{font-family: Arial,Helvetica,sans-serif}
+        img{display: block}
+        .categoria{font-size: small; text-transform: uppercase}
         '''

     conversion_options = {
@@ -31,26 +31,44 @@ class Infobae(BasicNewsRecipe):
         , 'tags'      : category
         , 'publisher' : publisher
         , 'language'  : language
+        , 'linearize_tables' : True
         }

+    keep_only_tags = [dict(attrs={'class':['titularnota','nota','post-title','post-entry','entry-title','entry-info','entry-content']})]
+    remove_tags_after = dict(attrs={'class':['interior-noticia','nota-desc','tags']})
+    remove_tags = [
+        dict(name=['base','meta','link','iframe','object','embed','ins'])
+        ,dict(attrs={'class':['barranota','tags']})
+        ]
+
     feeds = [
-        (u'Noticias'   , u'http://www.infobae.com/adjuntos/html/RSS/hoy.xml' )
-        ,(u'Salud'      , u'http://www.infobae.com/adjuntos/html/RSS/salud.xml' )
-        ,(u'Tecnologia' , u'http://www.infobae.com/adjuntos/html/RSS/tecnologia.xml')
-        ,(u'Deportes'   , u'http://www.infobae.com/adjuntos/html/RSS/deportes.xml' )
+        (u'Saludable'  , u'http://www.infobae.com/rss/saludable.xml')
+        ,(u'Economia'   , u'http://www.infobae.com/rss/economia.xml' )
+        ,(u'En Numeros' , u'http://www.infobae.com/rss/rating.xml' )
+        ,(u'Finanzas'   , u'http://www.infobae.com/rss/finanzas.xml' )
+        ,(u'Mundo'      , u'http://www.infobae.com/rss/mundo.xml' )
+        ,(u'Sociedad'   , u'http://www.infobae.com/rss/sociedad.xml' )
+        ,(u'Politica'   , u'http://www.infobae.com/rss/politica.xml' )
+        ,(u'Deportes'   , u'http://www.infobae.com/rss/deportes.xml' )
         ]

-    def print_version(self, url):
-        article_part = url.rpartition('/')[2]
-        article_id = article_part.partition('-')[0]
-        return 'http://www.infobae.com/notas/nota_imprimir.php?Idx=' + article_id
-
-    def postprocess_html(self, soup, first):
-        for tag in soup.findAll(name='strong'):
-            tag.name = 'b'
+    def preprocess_html(self, soup):
+        for item in soup.findAll(style=True):
+            del item['style']
+        for item in soup.findAll('a'):
+            limg = item.find('img')
+            if item.string is not None:
+                str = item.string
+                item.replaceWith(str)
+            else:
+                if limg:
+                    item.name = 'div'
+                    item.attrs = []
+                else:
+                    str = self.tag_to_string(item)
+                    item.replaceWith(str)
+        for item in soup.findAll('img'):
+            if not item.has_key('alt'):
+                item['alt'] = 'image'
         return soup


@@ -99,7 +99,7 @@ class LeMonde(BasicNewsRecipe):
     keep_only_tags = [
         dict(name='div', attrs={'class':['contenu']})
         ]
-
+    remove_tags = [dict(name='div', attrs={'class':['LM_atome']})]
     remove_tags_after = [dict(id='appel_temoignage')]

     def get_article_url(self, article):


@@ -179,17 +179,17 @@ class MPRecipe(BasicNewsRecipe):
     def get_dtlocal(self):
         dt_utc = datetime.datetime.utcnow()
         if __Region__ == 'Hong Kong':
-            # convert UTC to local hk time - at HKT 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(4.5/24)
-            # dt_local = dt_utc.astimezone(pytz.timezone('Asia/Hong_Kong')) - datetime.timedelta(4.5/24)
+            # convert UTC to local hk time - at HKT 5.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(5.5/24)
+            # dt_local = dt_utc.astimezone(pytz.timezone('Asia/Hong_Kong')) - datetime.timedelta(5.5/24)
         elif __Region__ == 'Vancouver':
-            # convert UTC to local Vancouver time - at PST time 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(-8.0/24) - datetime.timedelta(4.5/24)
-            #dt_local = dt_utc.astimezone(pytz.timezone('America/Vancouver')) - datetime.timedelta(4.5/24)
+            # convert UTC to local Vancouver time - at PST time 5.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(-8.0/24) - datetime.timedelta(5.5/24)
+            #dt_local = dt_utc.astimezone(pytz.timezone('America/Vancouver')) - datetime.timedelta(5.5/24)
         elif __Region__ == 'Toronto':
-            # convert UTC to local Toronto time - at EST time 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(-5.0/24) - datetime.timedelta(4.5/24)
-            #dt_local = dt_utc.astimezone(pytz.timezone('America/Toronto')) - datetime.timedelta(4.5/24)
+            # convert UTC to local Toronto time - at EST time 8.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(-5.0/24) - datetime.timedelta(8.5/24)
+            #dt_local = dt_utc.astimezone(pytz.timezone('America/Toronto')) - datetime.timedelta(8.5/24)
         return dt_local

     def get_fetchdate(self):


@@ -179,17 +179,17 @@ class MPRecipe(BasicNewsRecipe):
     def get_dtlocal(self):
         dt_utc = datetime.datetime.utcnow()
        if __Region__ == 'Hong Kong':
-            # convert UTC to local hk time - at HKT 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(4.5/24)
-            # dt_local = dt_utc.astimezone(pytz.timezone('Asia/Hong_Kong')) - datetime.timedelta(4.5/24)
+            # convert UTC to local hk time - at HKT 5.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(5.5/24)
+            # dt_local = dt_utc.astimezone(pytz.timezone('Asia/Hong_Kong')) - datetime.timedelta(5.5/24)
         elif __Region__ == 'Vancouver':
-            # convert UTC to local Vancouver time - at PST time 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(-8.0/24) - datetime.timedelta(4.5/24)
-            #dt_local = dt_utc.astimezone(pytz.timezone('America/Vancouver')) - datetime.timedelta(4.5/24)
+            # convert UTC to local Vancouver time - at PST time 5.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(-8.0/24) - datetime.timedelta(5.5/24)
+            #dt_local = dt_utc.astimezone(pytz.timezone('America/Vancouver')) - datetime.timedelta(5.5/24)
         elif __Region__ == 'Toronto':
-            # convert UTC to local Toronto time - at EST time 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(-5.0/24) - datetime.timedelta(4.5/24)
-            #dt_local = dt_utc.astimezone(pytz.timezone('America/Toronto')) - datetime.timedelta(4.5/24)
+            # convert UTC to local Toronto time - at EST time 8.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(-5.0/24) - datetime.timedelta(8.5/24)
+            #dt_local = dt_utc.astimezone(pytz.timezone('America/Toronto')) - datetime.timedelta(8.5/24)
         return dt_local

     def get_fetchdate(self):


@@ -179,17 +179,17 @@ class MPRecipe(BasicNewsRecipe):
     def get_dtlocal(self):
         dt_utc = datetime.datetime.utcnow()
         if __Region__ == 'Hong Kong':
-            # convert UTC to local hk time - at HKT 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(4.5/24)
-            # dt_local = dt_utc.astimezone(pytz.timezone('Asia/Hong_Kong')) - datetime.timedelta(4.5/24)
+            # convert UTC to local hk time - at HKT 5.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(5.5/24)
+            # dt_local = dt_utc.astimezone(pytz.timezone('Asia/Hong_Kong')) - datetime.timedelta(5.5/24)
         elif __Region__ == 'Vancouver':
-            # convert UTC to local Vancouver time - at PST time 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(-8.0/24) - datetime.timedelta(4.5/24)
-            #dt_local = dt_utc.astimezone(pytz.timezone('America/Vancouver')) - datetime.timedelta(4.5/24)
+            # convert UTC to local Vancouver time - at PST time 5.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(-8.0/24) - datetime.timedelta(5.5/24)
+            #dt_local = dt_utc.astimezone(pytz.timezone('America/Vancouver')) - datetime.timedelta(5.5/24)
         elif __Region__ == 'Toronto':
-            # convert UTC to local Toronto time - at EST time 4.30am, all news are available
-            dt_local = dt_utc + datetime.timedelta(-5.0/24) - datetime.timedelta(4.5/24)
-            #dt_local = dt_utc.astimezone(pytz.timezone('America/Toronto')) - datetime.timedelta(4.5/24)
+            # convert UTC to local Toronto time - at EST time 8.30am, all news are available
+            dt_local = dt_utc + datetime.timedelta(-5.0/24) - datetime.timedelta(8.5/24)
+            #dt_local = dt_utc.astimezone(pytz.timezone('America/Toronto')) - datetime.timedelta(8.5/24)
         return dt_local

     def get_fetchdate(self):
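The date arithmetic in these hunks is plain offset math rather than real timezone handling: shift UTC by the region's fixed offset, then subtract the "news available" hour so the issue date does not roll over before the day's news is posted. A minimal sketch of the Hong Kong case (function name hypothetical):

```python
import datetime

def hk_issue_datetime(dt_utc):
    # UTC+8 for Hong Kong, then step back 5.5 hours so the date only
    # advances once the day's news is complete (HKT 5:30am).
    return dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(5.5/24)
```

The commented-out pytz lines in the recipe show the same computation with proper timezone objects, which would also handle DST for the North American editions.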

recipes/scmp.recipe Normal file

@@ -0,0 +1,80 @@
__license__   = 'GPL v3'
__copyright__ = '2010, Darko Miletic <darko.miletic at gmail.com>'
'''
scmp.com
'''

import re
from calibre.web.feeds.news import BasicNewsRecipe

class SCMP(BasicNewsRecipe):
    title                 = 'South China Morning Post'
    __author__            = 'llam'
    description           = "SCMP.com, Hong Kong's premier online English daily provides exclusive up-to-date news, audio video news, podcasts, RSS Feeds, Blogs, breaking news, top stories, award winning news and analysis on Hong Kong and China."
    publisher             = 'South China Morning Post Publishers Ltd.'
    category              = 'SCMP, Online news, Hong Kong News, China news, Business news, English newspaper, daily newspaper, Lifestyle news, Sport news, Audio Video news, Asia news, World news, economy news, investor relations news, RSS Feeds'
    oldest_article        = 2
    delay                 = 1
    max_articles_per_feed = 200
    no_stylesheets        = True
    encoding              = 'utf-8'
    use_embedded_content  = False
    language              = 'en_CN'
    remove_empty_feeds    = True
    needs_subscription    = True
    publication_type      = 'newspaper'
    masthead_url          = 'http://www.scmp.com/images/logo_scmp_home.gif'
    extra_css             = ' body{font-family: Arial,Helvetica,sans-serif } '

    conversion_options = {
        'comment'     : description
        , 'tags'      : category
        , 'publisher' : publisher
        , 'language'  : language
        }

    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        #br.set_debug_http(True)
        #br.set_debug_responses(True)
        #br.set_debug_redirects(True)
        if self.username is not None and self.password is not None:
            br.open('http://www.scmp.com/portal/site/SCMP/')
            br.select_form(name='loginForm')
            br['Login']    = self.username
            br['Password'] = self.password
            br.submit()
        return br

    remove_attributes = ['width','height','border']

    keep_only_tags = [
        dict(attrs={'id':['ART','photoBox']})
        ,dict(attrs={'class':['article_label','article_byline','article_body']})
        ]

    preprocess_regexps = [
        (re.compile(r'<P><table((?!<table).)*class="embscreen"((?!</table>).)*</table>', re.DOTALL|re.IGNORECASE),
            lambda match: ''),
        ]

    feeds = [
        (u'Business'      , u'http://www.scmp.com/rss/business.xml'       )
        ,(u'Hong Kong'     , u'http://www.scmp.com/rss/hong_kong.xml'     )
        ,(u'China'         , u'http://www.scmp.com/rss/china.xml'         )
        ,(u'Asia & World'  , u'http://www.scmp.com/rss/news_asia_world.xml')
        ,(u'Opinion'       , u'http://www.scmp.com/rss/opinion.xml'       )
        ,(u'LifeSTYLE'     , u'http://www.scmp.com/rss/lifestyle.xml'     )
        ,(u'Sport'         , u'http://www.scmp.com/rss/sport.xml'         )
        ]

    def print_version(self, url):
        rpart, sep, rest = url.rpartition('&')
        return rpart #+ sep + urllib.quote_plus(rest)

    def preprocess_html(self, soup):
        for item in soup.findAll(style=True):
            del item['style']
        items = soup.findAll(src="/images/label_icon.gif")
        [item.extract() for item in items]
        return self.adeify_images(soup)
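The print_version above keeps everything before the last '&' in the article URL. The same idea as a standalone function (name hypothetical):

```python
def strip_last_param(url):
    # str.rpartition('&') splits on the final '&'; keeping only the head
    # drops the trailing parameter, as the recipe's print_version does.
    rpart, sep, rest = url.rpartition('&')
    return rpart
```

Note that a URL containing no '&' would yield an empty string, since rpartition then returns the whole input in the third slot.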


@@ -0,0 +1,40 @@
# -*- coding: utf-8 -*-
from calibre.web.feeds.news import BasicNewsRecipe

class TodaysZaman_en(BasicNewsRecipe):
    title          = u'Sızıntı Dergisi'
    __author__     = u'thomass'
    description    = 'a Turkey based daily for national and international news in the fields of business, diplomacy, politics, culture, arts, sports and economics, in addition to commentaries, specials and features'
    oldest_article = 30
    max_articles_per_feed = 80
    no_stylesheets = True
    #delay = 1
    #use_embedded_content = False
    encoding       = 'utf-8'
    #publisher     = ' '
    category       = 'dergi, ilim, kültür, bilim,Türkçe'
    language       = 'tr'
    publication_type = 'magazine'
    #extra_css = ' body{ font-family: Verdana,Helvetica,Arial,sans-serif } .introduction{font-weight: bold} .story-feature{display: block; padding: 0; border: 1px solid; width: 40%; font-size: small} .story-feature h2{text-align: center; text-transform: uppercase} '
    #keep_only_tags = [dict(name='h1', attrs={'class':['georgia_30']})]
    #remove_attributes = ['aria-describedby']
    #remove_tags = [dict(name='div', attrs={'id':['renk10']}) ]
    cover_img_url = 'http://www.sizinti.com.tr/images/sizintiprint.jpg'
    masthead_url  = 'http://www.sizinti.com.tr/images/sizintiprint.jpg'
    remove_tags_before = dict(id='content-right')
    #remove_empty_feeds = True
    #remove_attributes = ['width','height']

    feeds = [
        ( u'Sızıntı', u'http://www.sizinti.com.tr/rss'),
        ]

    #def preprocess_html(self, soup):
    #    return self.adeify_images(soup)

    #def print_version(self, url): # there is a problem caused by table format
    #    return url.replace('http://www.todayszaman.com/newsDetail_getNewsById.action?load=detay&', 'http://www.todayszaman.com/newsDetail_openPrintPage.action?')

@@ -56,6 +56,7 @@ class TelegraphUK(BasicNewsRecipe):
         ,(u'Sport'          , u'http://www.telegraph.co.uk/sport/rss' )
         ,(u'Earth News'     , u'http://www.telegraph.co.uk/earth/earthnews/rss' )
         ,(u'Comment'        , u'http://www.telegraph.co.uk/comment/rss' )
+        ,(u'Travel'         , u'http://www.telegraph.co.uk/travel/rss' )
         ,(u'How about that?', u'http://www.telegraph.co.uk/news/newstopics/howaboutthat/rss' )
         ]


@@ -0,0 +1,53 @@
from calibre.web.feeds.news import BasicNewsRecipe

class TodaysZaman_en(BasicNewsRecipe):
    title          = u'Todays Zaman'
    __author__     = u'thomass'
    description    = 'a Turkey based daily for national and international news in the fields of business, diplomacy, politics, culture, arts, sports and economics, in addition to commentaries, specials and features'
    oldest_article = 2
    max_articles_per_feed = 100
    no_stylesheets = True
    #delay = 1
    #use_embedded_content = False
    encoding       = 'utf-8'
    #publisher     = ' '
    category       = 'news, haberler,TR,gazete'
    language       = 'en_TR'
    publication_type = 'newspaper'
    #extra_css = ' body{ font-family: Verdana,Helvetica,Arial,sans-serif } .introduction{font-weight: bold} .story-feature{display: block; padding: 0; border: 1px solid; width: 40%; font-size: small} .story-feature h2{text-align: center; text-transform: uppercase} '
    #keep_only_tags = [dict(name='font', attrs={'class':['newsDetail','agenda2NewsSpot']}),dict(name='span', attrs={'class':['agenda2Title']}),dict(name='div', attrs={'id':['gallery']})]
    keep_only_tags = [dict(name='h1', attrs={'class':['georgia_30']}),dict(name='span', attrs={'class':['left-date','detailDate','detailCName']}),dict(name='td', attrs={'id':['newsSpot','newsText']})] # resim ekleme: ,dict(name='div', attrs={'id':['gallery','detailDate',]})
    remove_attributes = ['aria-describedby']
    remove_tags = [dict(name='img', attrs={'src':['/images/icon_print.gif','http://gmodules.com/ig/images/plus_google.gif','/images/template/jazz/agenda/i1.jpg', 'http://medya.todayszaman.com/todayszaman/images/logo/logo.bmp']}),dict(name='hr', attrs={'class':[ 'interactive-hr']}),dict(name='div', attrs={'class':[ 'empty_height_18','empty_height_9']}) ,dict(name='td', attrs={'id':[ 'superTitle']}),dict(name='span', attrs={'class':[ 't-count enabled t-count-focus']}),dict(name='a', attrs={'id':[ 'count']}),dict(name='td', attrs={'class':[ 'left-date']}) ]
    cover_img_url = 'http://medya.todayszaman.com/todayszaman/images/logo/logo.bmp'
    masthead_url  = 'http://medya.todayszaman.com/todayszaman/images/logo/logo.bmp'
    remove_empty_feeds = True
    #remove_attributes = ['width','height']

    feeds = [
        ( u'Home', u'http://www.todayszaman.com/rss?sectionId=0'),
        ( u'News', u'http://www.todayszaman.com/rss?sectionId=100'),
        ( u'Business', u'http://www.todayszaman.com/rss?sectionId=105'),
        ( u'Interviews', u'http://www.todayszaman.com/rss?sectionId=8'),
        ( u'Columnists', u'http://www.todayszaman.com/rss?sectionId=6'),
        ( u'Op-Ed', u'http://www.todayszaman.com/rss?sectionId=109'),
        ( u'Arts & Culture', u'http://www.todayszaman.com/rss?sectionId=110'),
        ( u'Expat Zone', u'http://www.todayszaman.com/rss?sectionId=132'),
        ( u'Sports', u'http://www.todayszaman.com/rss?sectionId=5'),
        ( u'Features', u'http://www.todayszaman.com/rss?sectionId=116'),
        ( u'Travel', u'http://www.todayszaman.com/rss?sectionId=117'),
        ( u'Leisure', u'http://www.todayszaman.com/rss?sectionId=118'),
        ( u'Weird But True', u'http://www.todayszaman.com/rss?sectionId=134'),
        ( u'Life', u'http://www.todayszaman.com/rss?sectionId=133'),
        ( u'Health', u'http://www.todayszaman.com/rss?sectionId=126'),
        ( u'Press Review', u'http://www.todayszaman.com/rss?sectionId=130'),
        ( u'Todays think tanks', u'http://www.todayszaman.com/rss?sectionId=159'),
        ]

    #def preprocess_html(self, soup):
    #    return self.adeify_images(soup)

    #def print_version(self, url): # there is a problem caused by table format
    #    return url.replace('http://www.todayszaman.com/newsDetail_getNewsById.action?load=detay&', 'http://www.todayszaman.com/newsDetail_openPrintPage.action?')


@@ -1,20 +1,55 @@
+# -*- coding: utf-8 -*-
 from calibre.web.feeds.news import BasicNewsRecipe

-class ZamanRecipe(BasicNewsRecipe):
-    title = u'Zaman'
-    __author__ = u'Deniz Og\xfcz'
-    language = 'tr'
-    oldest_article = 1
-    max_articles_per_feed = 10
-    cover_url = 'http://medya.zaman.com.tr/zamantryeni/pics/zamanonline.gif'
-    feeds = [(u'Gundem', u'http://www.zaman.com.tr/gundem.rss'),
-        (u'Son Dakika', u'http://www.zaman.com.tr/sondakika.rss'),
-        (u'Spor', u'http://www.zaman.com.tr/spor.rss'),
-        (u'Ekonomi', u'http://www.zaman.com.tr/ekonomi.rss'),
-        (u'Politika', u'http://www.zaman.com.tr/politika.rss'),
-        (u'D\u0131\u015f Haberler', u'http://www.zaman.com.tr/dishaberler.rss'),
-        (u'Yazarlar', u'http://www.zaman.com.tr/yazarlar.rss'),]
-
-    def print_version(self, url):
-        return url.replace('www.zaman.com.tr/haber.do?', 'www.zaman.com.tr/yazdir.do?')
+class Zaman (BasicNewsRecipe):
+    title = u'ZAMAN Gazetesi'
+    __author__ = u'thomass'
+    oldest_article = 2
+    max_articles_per_feed = 100
+    # no_stylesheets = True
+    #delay = 1
+    #use_embedded_content = False
+    encoding = 'ISO 8859-9'
+    publisher = 'Zaman'
+    category = 'news, haberler,TR,gazete'
+    language = 'tr'
+    publication_type = 'newspaper '
+    extra_css = ' body{ font-family: Verdana,Helvetica,Arial,sans-serif } .introduction{font-weight: bold} .story-feature{display: block; padding: 0; border: 1px solid; width: 40%; font-size: small} .story-feature h2{text-align: center; text-transform: uppercase} '
+    conversion_options = {
+        'tags'             : category
+        ,'language'        : language
+        ,'publisher'       : publisher
+        ,'linearize_tables': False
+        }
+    cover_img_url = 'https://fbcdn-profile-a.akamaihd.net/hprofile-ak-snc4/188140_81722291869_2111820_n.jpg'
+    masthead_url = 'http://medya.zaman.com.tr/extentions/zaman.com.tr/img/section/logo-section.png'
+
+    keep_only_tags = [dict(name='div', attrs={'id':[ 'news-detail-content']}), dict(name='td', attrs={'class':['columnist-detail','columnist_head']}) ]
+    remove_tags = [ dict(name='div', attrs={'id':['news-detail-news-text-font-size','news-detail-gallery','news-detail-news-bottom-social']}),dict(name='div', attrs={'class':['radioEmbedBg','radyoProgramAdi']}),dict(name='a', attrs={'class':['webkit-html-attribute-value webkit-html-external-link']}),dict(name='table', attrs={'id':['yaziYorumTablosu']}),dict(name='img', attrs={'src':['http://medya.zaman.com.tr/pics/paylas.gif','http://medya.zaman.com.tr/extentions/zaman.com.tr/img/columnist/ma-16.png']})]
+    #remove_attributes = ['width','height']
+    remove_empty_feeds = True

+    feeds = [
+        ( u'Anasayfa', u'http://www.zaman.com.tr/anasayfa.rss'),
+        ( u'Son Dakika', u'http://www.zaman.com.tr/sondakika.rss'),
+        ( u'En çok Okunanlar', u'http://www.zaman.com.tr/max_all.rss'),
+        ( u'Gündem', u'http://www.zaman.com.tr/gundem.rss'),
+        ( u'Yazarlar', u'http://www.zaman.com.tr/yazarlar.rss'),
+        ( u'Politika', u'http://www.zaman.com.tr/politika.rss'),
+        ( u'Ekonomi', u'http://www.zaman.com.tr/ekonomi.rss'),
+        ( u'Dış Haberler', u'http://www.zaman.com.tr/dishaberler.rss'),
+        ( u'Yorumlar', u'http://www.zaman.com.tr/yorumlar.rss'),
+        ( u'Röportaj', u'http://www.zaman.com.tr/roportaj.rss'),
+        ( u'Spor', u'http://www.zaman.com.tr/spor.rss'),
+        ( u'Kürsü', u'http://www.zaman.com.tr/kursu.rss'),
+        ( u'Kültür Sanat', u'http://www.zaman.com.tr/kultursanat.rss'),
+        ( u'Televizyon', u'http://www.zaman.com.tr/televizyon.rss'),
+        ( u'Manşet', u'http://www.zaman.com.tr/manset.rss'),
+        ]


@@ -292,13 +292,17 @@
 generate_cover_title_font = None
 generate_cover_foot_font = None

-#: Control behavior of double clicks on the book list
-# Behavior of doubleclick on the books list. Choices: open_viewer, do_nothing,
+#: Control behavior of the book list
+# You can control the behavior of doubleclicks on the books list.
+# Choices: open_viewer, do_nothing,
 # edit_cell, edit_metadata. Selecting edit_metadata has the side effect of
 # disabling editing a field using a single click.
 # Default: open_viewer.
 # Example: doubleclick_on_library_view = 'do_nothing'
+# You can also control whether the book list scrolls horizontal per column or
+# per pixel. Default is per column.
 doubleclick_on_library_view = 'open_viewer'
+horizontal_scrolling_per_column = True

 #: Language to use when sorting.


@@ -1,6 +1,7 @@
 CREATE TABLE authors ( id INTEGER PRIMARY KEY,
                        name TEXT NOT NULL COLLATE NOCASE,
                        sort TEXT COLLATE NOCASE,
+                       link TEXT NOT NULL DEFAULT "",
                        UNIQUE(name)
                        );
 CREATE TABLE books ( id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -545,4 +546,4 @@ CREATE TRIGGER series_update_trg
         BEGIN
             UPDATE series SET sort=NEW.name WHERE id=NEW.id;
         END;
-pragma user_version=20;
+pragma user_version=21;
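Schema version 21 adds an authors.link column with a NOT NULL default. On an existing database the equivalent change is a single ALTER TABLE; here is an illustration against an in-memory SQLite database (this is a sketch, not calibre's actual upgrade code):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE authors ( id INTEGER PRIMARY KEY, '
            'name TEXT NOT NULL COLLATE NOCASE, '
            'sort TEXT COLLATE NOCASE, '
            'UNIQUE(name))')
con.execute("INSERT INTO authors (name) VALUES ('Some Author')")
# Existing rows pick up the declared default for the new column
con.execute('ALTER TABLE authors ADD COLUMN link TEXT NOT NULL DEFAULT ""')
con.execute('PRAGMA user_version=21')
link = con.execute('SELECT link FROM authors').fetchone()[0]
version = con.execute('PRAGMA user_version').fetchone()[0]
```

The NOT NULL DEFAULT "" pairing is what makes the ALTER TABLE legal in SQLite: a NOT NULL column can only be added if it carries a non-NULL default.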


@@ -53,6 +53,13 @@ SQLite
 Put sqlite3*.h from the sqlite windows amalgamation in ~/sw/include

+APSW
+-----
+
+Download source from http://code.google.com/p/apsw/downloads/list and run in visual studio prompt
+
+python setup.py fetch --all build --missing-checksum-ok --enable-all-extensions install test
+
 OpenSSL
 --------

@@ -106,10 +106,12 @@ def sanitize_file_name(name, substitute='_', as_unicode=False):
         name = name.encode(filesystem_encoding, 'ignore')
     one = _filename_sanitize.sub(substitute, name)
     one = re.sub(r'\s', ' ', one).strip()
-    one = re.sub(r'^\.+$', '_', one)
+    bname, ext = os.path.splitext(one)
+    one = re.sub(r'^\.+$', '_', bname)
     if as_unicode:
         one = one.decode(filesystem_encoding)
     one = one.replace('..', substitute)
+    one += ext
     # Windows doesn't like path components that end with a period
     if one and one[-1] in ('.', ' '):
         one = one[:-1]+'_'
@@ -132,8 +134,10 @@ def sanitize_file_name_unicode(name, substitute='_'):
             name]
     one = u''.join(chars)
     one = re.sub(r'\s', ' ', one).strip()
-    one = re.sub(r'^\.+$', '_', one)
+    bname, ext = os.path.splitext(one)
+    one = re.sub(r'^\.+$', '_', bname)
     one = one.replace('..', substitute)
+    one += ext
     # Windows doesn't like path components that end with a period or space
     if one and one[-1] in ('.', ' '):
         one = one[:-1]+'_'
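The fix in both functions is the same: split the extension off before collapsing '..', so the sanitizer can no longer eat the dot that separates the extension. A simplified standalone sketch of that order of operations (not the full calibre function, which also handles encodings and forbidden characters):

```python
import os
import re

def sanitize_component(name, substitute='_'):
    # Normalize whitespace, then split the extension off first so that
    # replacing '..' cannot consume the extension's leading dot.
    one = re.sub(r'\s', ' ', name).strip()
    bname, ext = os.path.splitext(one)
    one = re.sub(r'^\.+$', '_', bname)   # all-dots names become '_'
    one = one.replace('..', substitute)
    one += ext
    # Windows dislikes components ending with a period or space
    if one and one[-1] in ('.', ' '):
        one = one[:-1] + '_'
    return one
```

With the old order, a name like 'a..b.txt' could come out with its extension mangled; splitting first keeps '.txt' intact.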


@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
 __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
 __docformat__ = 'restructuredtext en'
 __appname__ = u'calibre'
-numeric_version = (0, 8, 7)
+numeric_version = (0, 8, 8)
 __version__ = u'.'.join(map(unicode, numeric_version))
 __author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"


@@ -611,7 +611,7 @@ from calibre.devices.teclast.driver import (TECLAST_K3, NEWSMY, IPAPYRUS,
 from calibre.devices.sne.driver import SNE
 from calibre.devices.misc import (PALMPRE, AVANT, SWEEX, PDNOVEL,
         GEMEI, VELOCITYMICRO, PDNOVEL_KOBO, LUMIREAD, ALURATEK_COLOR,
-        TREKSTOR, EEEREADER, NEXTBOOK, ADAM)
+        TREKSTOR, EEEREADER, NEXTBOOK, ADAM, MOOVYBOOK)
 from calibre.devices.folder_device.driver import FOLDER_DEVICE_FOR_CONFIG
 from calibre.devices.kobo.driver import KOBO
 from calibre.devices.bambook.driver import BAMBOOK
@@ -746,6 +746,7 @@ plugins += [
     EEEREADER,
     NEXTBOOK,
     ADAM,
+    MOOVYBOOK,
     ITUNES,
     BOEYE_BEX,
     BOEYE_BDX,
@@ -1382,18 +1383,9 @@ class StoreOpenBooksStore(StoreBase):
     name = 'Open Books'
     description = u'Comprehensive listing of DRM free ebooks from a variety of sources provided by users of calibre.'
     actual_plugin = 'calibre.gui2.store.stores.open_books_plugin:OpenBooksStore'

-    drm_free_only = True
-    headquarters = 'US'
-
-class StoreOpenLibraryStore(StoreBase):
-    name = 'Open Library'
-    description = u'One web page for every book ever published. The goal is to be a true online library. Over 20 million records from a variety of large catalogs as well as single contributions, with more on the way.'
-    actual_plugin = 'calibre.gui2.store.stores.open_library_plugin:OpenLibraryStore'
-
     drm_free_only = True
     headquarters = 'US'
-    formats = ['DAISY', 'DJVU', 'EPUB', 'MOBI', 'PDF', 'TXT']

 class StoreOReillyStore(StoreBase):
     name = 'OReilly'
@@ -1513,7 +1505,6 @@ plugins += [
     StoreMobileReadStore,
     StoreNextoStore,
     StoreOpenBooksStore,
-    StoreOpenLibraryStore,
     StoreOReillyStore,
     StorePragmaticBookshelfStore,
     StoreSmashwordsStore,


@@ -63,5 +63,4 @@
     columns/categories/searches info into
     self.field_metadata. Finally, implement metadata dirtied
     functionality.
 '''


@@ -17,12 +17,13 @@ from calibre import isbytestring, force_unicode, prints
from calibre.constants import (iswindows, filesystem_encoding,
        preferred_encoding)
from calibre.ptempfile import PersistentTemporaryFile
-from calibre.library.schema_upgrades import SchemaUpgrade
+from calibre.db.schema_upgrades import SchemaUpgrade
from calibre.library.field_metadata import FieldMetadata
from calibre.ebooks.metadata import title_sort, author_to_author_sort
from calibre.utils.icu import strcmp
from calibre.utils.config import to_json, from_json, prefs, tweaks
-from calibre.utils.date import utcfromtimestamp
+from calibre.utils.date import utcfromtimestamp, parse_date
+from calibre.utils.filenames import is_case_sensitive
from calibre.db.tables import (OneToOneTable, ManyToOneTable, ManyToManyTable,
        SizeTable, FormatsTable, AuthorsTable, IdentifiersTable)
# }}}

@@ -30,7 +31,9 @@ from calibre.db.tables import (OneToOneTable, ManyToOneTable, ManyToManyTable,
'''
Differences in semantics from pysqlite:

-    1. execute/executemany/executescript operate in autocommit mode
+    1. execute/executemany operate in autocommit mode
+    2. There is no fetchone() method on cursor objects, instead use next()
+    3. There is no executescript

'''
@@ -119,6 +122,66 @@ def icu_collator(s1, s2):
    return strcmp(force_unicode(s1, 'utf-8'), force_unicode(s2, 'utf-8'))
# }}}
# Unused aggregators {{{
def Concatenate(sep=','):
'''String concatenation aggregator for sqlite'''
def step(ctxt, value):
if value is not None:
ctxt.append(value)
def finalize(ctxt):
if not ctxt:
return None
return sep.join(ctxt)
return ([], step, finalize)
def SortedConcatenate(sep=','):
'''String concatenation aggregator for sqlite, sorted by supplied index'''
def step(ctxt, ndx, value):
if value is not None:
ctxt[ndx] = value
def finalize(ctxt):
if len(ctxt) == 0:
return None
return sep.join(map(ctxt.get, sorted(ctxt.iterkeys())))
return ({}, step, finalize)
def IdentifiersConcat():
'''String concatenation aggregator for the identifiers map'''
def step(ctxt, key, val):
ctxt.append(u'%s:%s'%(key, val))
def finalize(ctxt):
return ','.join(ctxt)
return ([], step, finalize)
def AumSortedConcatenate():
'''String concatenation aggregator for the author sort map'''
def step(ctxt, ndx, author, sort, link):
if author is not None:
ctxt[ndx] = ':::'.join((author, sort, link))
def finalize(ctxt):
keys = list(ctxt.iterkeys())
l = len(keys)
if l == 0:
return None
if l == 1:
return ctxt[keys[0]]
return ':#:'.join([ctxt[v] for v in sorted(keys)])
return ({}, step, finalize)
# }}}
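All four aggregators above follow the same `(context, step, finalize)` factory convention that apsw's `createaggregatefunction` consumes: the factory is called once per aggregation, `step` once per row, and `finalize` once for the result. As a minimal sketch (Python 3 restatement of `SortedConcatenate`, driven by a hypothetical `run_aggregate` harness rather than the real apsw API, which may not be installed):

```python
def SortedConcatenate(sep=','):
    '''String concatenation aggregator, sorted by supplied index'''
    def step(ctxt, ndx, value):
        if value is not None:
            ctxt[ndx] = value
    def finalize(ctxt):
        if len(ctxt) == 0:
            return None
        return sep.join(ctxt[k] for k in sorted(ctxt))
    return ({}, step, finalize)

def run_aggregate(factory, rows):
    # Drive the (context, step, finalize) triple the way apsw would:
    # one factory call, one step() per row, one finalize() at the end.
    ctxt, step, finalize = factory()
    for row in rows:
        step(ctxt, *row)
    return finalize(ctxt)

# Rows arrive in arbitrary order; the result is sorted by the index column.
result = run_aggregate(SortedConcatenate, [(2, 'b'), (1, 'a'), (3, 'c')])
print(result)  # -> a,b,c
```

The index column is what lets SQL like `sortconcat(link.id, name)` return names in insertion order regardless of how the join produces rows.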
class Connection(apsw.Connection): # {{{

    BUSY_TIMEOUT = 2000 # milliseconds

@@ -128,32 +191,46 @@ class Connection(apsw.Connection): # {{{
        self.setbusytimeout(self.BUSY_TIMEOUT)
        self.execute('pragma cache_size=5000')
-        self.conn.execute('pragma temp_store=2')
+        self.execute('pragma temp_store=2')

-        encoding = self.execute('pragma encoding').fetchone()[0]
+        encoding = self.execute('pragma encoding').next()[0]
-        self.conn.create_collation('PYNOCASE', partial(pynocase,
+        self.createcollation('PYNOCASE', partial(pynocase,
            encoding=encoding))

-        self.conn.create_function('title_sort', 1, title_sort)
-        self.conn.create_function('author_to_author_sort', 1,
-                _author_to_author_sort)
-        self.conn.create_function('uuid4', 0, lambda : str(uuid.uuid4()))
+        self.createscalarfunction('title_sort', title_sort, 1)
+        self.createscalarfunction('author_to_author_sort',
+                _author_to_author_sort, 1)
+        self.createscalarfunction('uuid4', lambda : str(uuid.uuid4()),
+                0)

        # Dummy functions for dynamically created filters
-        self.conn.create_function('books_list_filter', 1, lambda x: 1)
-        self.conn.create_collation('icucollate', icu_collator)
+        self.createscalarfunction('books_list_filter', lambda x: 1, 1)
+        self.createcollation('icucollate', icu_collator)

+        # Legacy aggregators (never used) but present for backwards compat
+        self.createaggregatefunction('sortconcat', SortedConcatenate, 2)
+        self.createaggregatefunction('sortconcat_bar',
+                partial(SortedConcatenate, sep='|'), 2)
+        self.createaggregatefunction('sortconcat_amper',
+                partial(SortedConcatenate, sep='&'), 2)
+        self.createaggregatefunction('identifiers_concat',
+                IdentifiersConcat, 2)
+        self.createaggregatefunction('concat', Concatenate, 1)
+        self.createaggregatefunction('aum_sortconcat',
+                AumSortedConcatenate, 4)

    def create_dynamic_filter(self, name):
        f = DynamicFilter(name)
-        self.conn.create_function(name, 1, f)
+        self.createscalarfunction(name, f, 1)

    def get(self, *args, **kw):
        ans = self.cursor().execute(*args)
        if kw.get('all', True):
            return ans.fetchall()
-        for row in ans:
-            return ans[0]
+        try:
+            return ans.next()[0]
+        except (StopIteration, IndexError):
+            return None
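The registrations above use apsw's spelling (`createscalarfunction`, `createcollation`, with the argument count last); the deleted code used pysqlite's (`create_function`, `create_collation`, count in the middle). A stdlib `sqlite3` sketch of the same idea, with a lowercasing lambda standing in for calibre's real `title_sort`:

```python
import sqlite3

# pysqlite-style registration: name, argument count, callable.
conn = sqlite3.connect(':memory:')
conn.create_function('title_sort', 1, lambda t: t.lower())
# A collation is a two-argument comparator returning -1/0/1.
conn.create_collation('icucollate', lambda a, b: (a > b) - (a < b))

row = conn.execute("SELECT title_sort('The Hobbit')").fetchone()
print(row[0])  # -> the hobbit
```

apsw cursors have no `fetchone()`, which is why the new `get()` calls `next()` on the cursor instead; pysqlite offers both.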
    def execute(self, sql, bindings=None):
        cursor = self.cursor()

@@ -162,14 +239,9 @@ class Connection(apsw.Connection): # {{{
    def executemany(self, sql, sequence_of_bindings):
        return self.cursor().executemany(sql, sequence_of_bindings)

-    def executescript(self, sql):
-        with self:
-            # Use an explicit savepoint so that even if this is called
-            # while a transaction is active, it is atomic
-            return self.cursor().execute(sql)

# }}}

-class DB(object, SchemaUpgrade):
+class DB(object):

    PATH_LIMIT = 40 if iswindows else 100
    WINDOWS_LIBRARY_PATH_LIMIT = 75
@@ -213,25 +285,24 @@ class DB(object, SchemaUpgrade):
            shutil.copyfile(self.dbpath, pt.name)
            self.dbpath = pt.name

-        self.is_case_sensitive = (not iswindows and
-                not os.path.exists(self.dbpath.replace('metadata.db',
-                    'MeTAdAtA.dB')))
+        if not os.path.exists(os.path.dirname(self.dbpath)):
+            os.makedirs(os.path.dirname(self.dbpath))

        self._conn = None
        if self.user_version == 0:
            self.initialize_database()

-        with self.conn:
-            SchemaUpgrade.__init__(self)
+        if not os.path.exists(self.library_path):
+            os.makedirs(self.library_path)
+        self.is_case_sensitive = is_case_sensitive(self.library_path)
+
+        SchemaUpgrade(self.conn, self.library_path, self.field_metadata)

        # Guarantee that the library_id is set
        self.library_id

-        self.initialize_prefs(default_prefs)

        # Fix legacy triggers and columns
-        self.conn.executescript('''
+        self.conn.execute('''
        DROP TRIGGER IF EXISTS author_insert_trg;
        CREATE TEMP TRIGGER author_insert_trg
            AFTER INSERT ON authors

@@ -248,7 +319,11 @@ class DB(object, SchemaUpgrade):
        UPDATE authors SET sort=author_to_author_sort(name) WHERE sort IS NULL;
        ''')

-    def initialize_prefs(self, default_prefs):
+        self.initialize_prefs(default_prefs)
+        self.initialize_custom_columns()
+        self.initialize_tables()

+    def initialize_prefs(self, default_prefs): # {{{
        self.prefs = DBPrefs(self)
        if default_prefs is not None and not self._exists:

@@ -339,15 +414,236 @@ class DB(object, SchemaUpgrade):
            cats_changed = True
        if cats_changed:
            self.prefs.set('user_categories', user_cats)
# }}}
def initialize_custom_columns(self): # {{{
with self.conn:
# Delete previously marked custom columns
for record in self.conn.get(
'SELECT id FROM custom_columns WHERE mark_for_delete=1'):
num = record[0]
table, lt = self.custom_table_names(num)
self.conn.execute('''\
DROP INDEX IF EXISTS {table}_idx;
DROP INDEX IF EXISTS {lt}_aidx;
DROP INDEX IF EXISTS {lt}_bidx;
DROP TRIGGER IF EXISTS fkc_update_{lt}_a;
DROP TRIGGER IF EXISTS fkc_update_{lt}_b;
DROP TRIGGER IF EXISTS fkc_insert_{lt};
DROP TRIGGER IF EXISTS fkc_delete_{lt};
DROP TRIGGER IF EXISTS fkc_insert_{table};
DROP TRIGGER IF EXISTS fkc_delete_{table};
DROP VIEW IF EXISTS tag_browser_{table};
DROP VIEW IF EXISTS tag_browser_filtered_{table};
DROP TABLE IF EXISTS {table};
DROP TABLE IF EXISTS {lt};
'''.format(table=table, lt=lt)
)
self.conn.execute('DELETE FROM custom_columns WHERE mark_for_delete=1')
# Load metadata for custom columns
self.custom_column_label_map, self.custom_column_num_map = {}, {}
triggers = []
remove = []
custom_tables = self.custom_tables
for record in self.conn.get(
'SELECT label,name,datatype,editable,display,normalized,id,is_multiple FROM custom_columns'):
data = {
'label':record[0],
'name':record[1],
'datatype':record[2],
'editable':bool(record[3]),
'display':json.loads(record[4]),
'normalized':bool(record[5]),
'num':record[6],
'is_multiple':bool(record[7]),
}
if data['display'] is None:
data['display'] = {}
# set up the is_multiple separator dict
if data['is_multiple']:
if data['display'].get('is_names', False):
seps = {'cache_to_list': '|', 'ui_to_list': '&', 'list_to_ui': ' & '}
elif data['datatype'] == 'composite':
seps = {'cache_to_list': ',', 'ui_to_list': ',', 'list_to_ui': ', '}
else:
seps = {'cache_to_list': '|', 'ui_to_list': ',', 'list_to_ui': ', '}
else:
seps = {}
data['multiple_seps'] = seps
table, lt = self.custom_table_names(data['num'])
if table not in custom_tables or (data['normalized'] and lt not in
custom_tables):
remove.append(data)
continue
self.custom_column_label_map[data['label']] = data['num']
self.custom_column_num_map[data['num']] = \
self.custom_column_label_map[data['label']] = data
# Create Foreign Key triggers
if data['normalized']:
trigger = 'DELETE FROM %s WHERE book=OLD.id;'%lt
else:
trigger = 'DELETE FROM %s WHERE book=OLD.id;'%table
triggers.append(trigger)
if remove:
with self.conn:
for data in remove:
prints('WARNING: Custom column %r not found, removing.' %
data['label'])
self.conn.execute('DELETE FROM custom_columns WHERE id=?',
(data['num'],))
if triggers:
with self.conn:
self.conn.execute('''\
CREATE TEMP TRIGGER custom_books_delete_trg
AFTER DELETE ON books
BEGIN
%s
END;
'''%(' \n'.join(triggers)))
# Setup data adapters
def adapt_text(x, d):
if d['is_multiple']:
if x is None:
return []
if isinstance(x, (str, unicode, bytes)):
x = x.split(d['multiple_seps']['ui_to_list'])
x = [y.strip() for y in x if y.strip()]
x = [y.decode(preferred_encoding, 'replace') if not isinstance(y,
unicode) else y for y in x]
return [u' '.join(y.split()) for y in x]
else:
return x if x is None or isinstance(x, unicode) else \
x.decode(preferred_encoding, 'replace')
def adapt_datetime(x, d):
if isinstance(x, (str, unicode, bytes)):
x = parse_date(x, assume_utc=False, as_utc=False)
return x
def adapt_bool(x, d):
if isinstance(x, (str, unicode, bytes)):
x = x.lower()
if x == 'true':
x = True
elif x == 'false':
x = False
elif x == 'none':
x = None
else:
x = bool(int(x))
return x
def adapt_enum(x, d):
v = adapt_text(x, d)
if not v:
v = None
return v
def adapt_number(x, d):
if x is None:
return None
if isinstance(x, (str, unicode, bytes)):
if x.lower() == 'none':
return None
if d['datatype'] == 'int':
return int(x)
return float(x)
self.custom_data_adapters = {
'float': adapt_number,
'int': adapt_number,
'rating':lambda x,d : x if x is None else min(10., max(0., float(x))),
'bool': adapt_bool,
'comments': lambda x,d: adapt_text(x, {'is_multiple':False}),
'datetime' : adapt_datetime,
'text':adapt_text,
'series':adapt_text,
'enumeration': adapt_enum
}
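The adapter table above normalizes arbitrary user input per column datatype before storage. A Python 3 restatement of two of them (the originals are Python 2 and reference `unicode`), shown with typical inputs:

```python
def adapt_bool(x):
    # Accept the strings a UI or CSV import might hand over.
    if isinstance(x, str):
        x = x.lower()
        if x == 'true':
            return True
        if x == 'false':
            return False
        if x == 'none':
            return None
        return bool(int(x))
    return x

def adapt_number(x, datatype='int'):
    # 'None' (the string) and None both mean "no value".
    if x is None:
        return None
    if isinstance(x, str) and x.lower() == 'none':
        return None
    return int(x) if datatype == 'int' else float(x)

print(adapt_bool('TRUE'), adapt_bool('0'), adapt_number('42'))
# -> True False 42
```

Note the rating adapter in the dict clamps to the 0–10 range with `min(10., max(0., float(x)))` rather than raising on out-of-range input.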
# Create Tag Browser categories for custom columns
for k in sorted(self.custom_column_label_map.iterkeys()):
v = self.custom_column_label_map[k]
if v['normalized']:
is_category = True
else:
is_category = False
is_m = v['multiple_seps']
tn = 'custom_column_{0}'.format(v['num'])
self.field_metadata.add_custom_field(label=v['label'],
table=tn, column='value', datatype=v['datatype'],
colnum=v['num'], name=v['name'], display=v['display'],
is_multiple=is_m, is_category=is_category,
is_editable=v['editable'], is_csp=False)
# }}}
def initialize_tables(self): # {{{
tables = self.tables = {}
for col in ('title', 'sort', 'author_sort', 'series_index', 'comments',
'timestamp', 'pubdate', 'uuid', 'path', 'cover',
'last_modified'):
metadata = self.field_metadata[col].copy()
if col == 'comments':
metadata['table'], metadata['column'] = 'comments', 'text'
if not metadata['table']:
metadata['table'], metadata['column'] = 'books', ('has_cover'
if col == 'cover' else col)
if not metadata['column']:
metadata['column'] = col
tables[col] = OneToOneTable(col, metadata)
for col in ('series', 'publisher', 'rating'):
tables[col] = ManyToOneTable(col, self.field_metadata[col].copy())
for col in ('authors', 'tags', 'formats', 'identifiers'):
cls = {
'authors':AuthorsTable,
'formats':FormatsTable,
'identifiers':IdentifiersTable,
}.get(col, ManyToManyTable)
tables[col] = cls(col, self.field_metadata[col].copy())
tables['size'] = SizeTable('size', self.field_metadata['size'].copy())
for label, data in self.custom_column_label_map.iteritems():
label = '#' + label
metadata = self.field_metadata[label].copy()
link_table = self.custom_table_names(data['num'])[1]
if data['normalized']:
if metadata['is_multiple']:
tables[label] = ManyToManyTable(label, metadata,
link_table=link_table)
else:
tables[label] = ManyToOneTable(label, metadata,
link_table=link_table)
if metadata['datatype'] == 'series':
# Create series index table
label += '_index'
metadata = self.field_metadata[label].copy()
metadata['column'] = 'extra'
metadata['table'] = link_table
tables[label] = OneToOneTable(label, metadata)
else:
tables[label] = OneToOneTable(label, metadata)
# }}}
    @property
    def conn(self):
        if self._conn is None:
-            self._conn = apsw.Connection(self.dbpath)
+            self._conn = Connection(self.dbpath)
            if self._exists and self.user_version == 0:
                self._conn.close()
                os.remove(self.dbpath)
-                self._conn = apsw.Connection(self.dbpath)
+                self._conn = Connection(self.dbpath)
        return self._conn
    @dynamic_property

@@ -365,13 +661,29 @@ class DB(object, SchemaUpgrade):
    def initialize_database(self):
        metadata_sqlite = P('metadata_sqlite.sql', data=True,
                allow_user_override=False).decode('utf-8')
-        self.conn.executescript(metadata_sqlite)
+        cur = self.conn.cursor()
+        cur.execute('BEGIN EXCLUSIVE TRANSACTION')
+        try:
+            cur.execute(metadata_sqlite)
+        except:
+            cur.execute('ROLLBACK')
+        else:
+            cur.execute('COMMIT')
        if self.user_version == 0:
            self.user_version = 1
    # }}}
    # Database layer API {{{

+    def custom_table_names(self, num):
+        return 'custom_column_%d'%num, 'books_custom_column_%d_link'%num
+
+    @property
+    def custom_tables(self):
+        return set([x[0] for x in self.conn.get(
+            'SELECT name FROM sqlite_master WHERE type="table" AND '
+            '(name GLOB "custom_column_*" OR name GLOB "books_custom_column_*")')])

    @classmethod
    def exists_at(cls, path):
        return path and os.path.exists(os.path.join(path, 'metadata.db'))

@@ -396,7 +708,7 @@ class DB(object, SchemaUpgrade):
            self.conn.execute('''
                    DELETE FROM library_id;
                    INSERT INTO library_id (uuid) VALUES (?);
-                    ''', self._library_id_)
+                    ''', (self._library_id_,))

        return property(doc=doc, fget=fget, fset=fset)
@@ -405,39 +717,20 @@ class DB(object, SchemaUpgrade):
        return utcfromtimestamp(os.stat(self.dbpath).st_mtime)

    def read_tables(self):
-        tables = {}
-        for col in ('title', 'sort', 'author_sort', 'series_index', 'comments',
-                'timestamp', 'published', 'uuid', 'path', 'cover',
-                'last_modified'):
-            metadata = self.field_metadata[col].copy()
-            if metadata['table'] is None:
-                metadata['table'], metadata['column'] == 'books', ('has_cover'
-                        if col == 'cover' else col)
-            tables[col] = OneToOneTable(col, metadata)
-        for col in ('series', 'publisher', 'rating'):
-            tables[col] = ManyToOneTable(col, self.field_metadata[col].copy())
-        for col in ('authors', 'tags', 'formats', 'identifiers'):
-            cls = {
-                    'authors':AuthorsTable,
-                    'formats':FormatsTable,
-                    'identifiers':IdentifiersTable,
-                    }.get(col, ManyToManyTable)
-            tables[col] = cls(col, self.field_metadata[col].copy())
-        tables['size'] = SizeTable('size', self.field_metadata['size'].copy())
+        '''
+        Read all data from the db into the python in-memory tables
+        '''

        with self.conn: # Use a single transaction, to ensure nothing modifies
                        # the db while we are reading
-            for table in tables.itervalues():
+            for table in self.tables.itervalues():
                try:
-                    table.read()
+                    table.read(self)
                except:
                    prints('Failed to read table:', table.name)
+                    import pprint
+                    pprint.pprint(table.metadata)
                    raise
-        return tables

    # }}}

src/calibre/db/cache.py (new file, 11 lines)

@@ -0,0 +1,11 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

src/calibre/db/locking.py (new file, 331 lines)

@@ -0,0 +1,331 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
from threading import Lock, Condition, current_thread
class LockingError(RuntimeError):
pass
def create_locks():
'''
Return a pair of locks: (read_lock, write_lock)
The read_lock can be acquired by multiple threads simultaneously, it can
also be acquired multiple times by the same thread.
Only one thread can hold write_lock at a time, and only if there are no
current read_locks. While the write_lock is held no
other threads can acquire read locks. The write_lock can also be acquired
multiple times by the same thread.
    Both read_lock and write_lock are meant to be used in with statements (they
    operate on a single underlying lock).
WARNING: Be very careful to not try to acquire a read lock while the same
thread holds a write lock and vice versa. That is, a given thread should
always release *all* locks of type A before trying to acquire a lock of type
B. Bad things will happen if you violate this rule, the most benign of
which is the raising of a LockingError (I haven't been able to eliminate
the possibility of deadlocking in this scenario).
'''
l = SHLock()
return RWLockWrapper(l), RWLockWrapper(l, is_shared=False)
class SHLock(object):
'''
Shareable lock class. Used to implement the Multiple readers-single writer
paradigm. As best as I can tell, neither writer nor reader starvation
should be possible.
Based on code from: https://github.com/rfk/threading2
'''
def __init__(self):
self._lock = Lock()
# When a shared lock is held, is_shared will give the cumulative
# number of locks and _shared_owners maps each owning thread to
        # the number of locks it holds.
self.is_shared = 0
self._shared_owners = {}
# When an exclusive lock is held, is_exclusive will give the number
# of locks held and _exclusive_owner will give the owning thread
self.is_exclusive = 0
self._exclusive_owner = None
# When someone is forced to wait for a lock, they add themselves
# to one of these queues along with a "waiter" condition that
# is used to wake them up.
self._shared_queue = []
self._exclusive_queue = []
# This is for recycling waiter objects.
self._free_waiters = []
def acquire(self, blocking=True, shared=False):
'''
Acquire the lock in shared or exclusive mode.
If blocking is False this method will return False if acquiring the
lock failed.
'''
with self._lock:
if shared:
return self._acquire_shared(blocking)
else:
return self._acquire_exclusive(blocking)
assert not (self.is_shared and self.is_exclusive)
def release(self):
''' Release the lock. '''
# This decrements the appropriate lock counters, and if the lock
# becomes free, it looks for a queued thread to hand it off to.
# By doing the handoff here we ensure fairness.
me = current_thread()
with self._lock:
if self.is_exclusive:
if self._exclusive_owner is not me:
raise LockingError("release() called on unheld lock")
self.is_exclusive -= 1
if not self.is_exclusive:
self._exclusive_owner = None
# If there are waiting shared locks, issue them
                # all and then wake everyone up.
if self._shared_queue:
for (thread, waiter) in self._shared_queue:
self.is_shared += 1
self._shared_owners[thread] = 1
waiter.notify()
del self._shared_queue[:]
# Otherwise, if there are waiting exclusive locks,
                # they get first dibs on the lock.
elif self._exclusive_queue:
(thread, waiter) = self._exclusive_queue.pop(0)
self._exclusive_owner = thread
self.is_exclusive += 1
waiter.notify()
elif self.is_shared:
try:
self._shared_owners[me] -= 1
if self._shared_owners[me] == 0:
del self._shared_owners[me]
except KeyError:
raise LockingError("release() called on unheld lock")
self.is_shared -= 1
if not self.is_shared:
# If there are waiting exclusive locks,
                # they get first dibs on the lock.
if self._exclusive_queue:
(thread, waiter) = self._exclusive_queue.pop(0)
self._exclusive_owner = thread
self.is_exclusive += 1
waiter.notify()
else:
assert not self._shared_queue
else:
raise LockingError("release() called on unheld lock")
def _acquire_shared(self, blocking=True):
me = current_thread()
# Each case: acquiring a lock we already hold.
if self.is_shared and me in self._shared_owners:
self.is_shared += 1
self._shared_owners[me] += 1
return True
# If the lock is already spoken for by an exclusive, add us
# to the shared queue and it will give us the lock eventually.
if self.is_exclusive or self._exclusive_queue:
if self._exclusive_owner is me:
raise LockingError("can't downgrade SHLock object")
if not blocking:
return False
waiter = self._take_waiter()
try:
self._shared_queue.append((me, waiter))
waiter.wait()
assert not self.is_exclusive
finally:
self._return_waiter(waiter)
else:
self.is_shared += 1
self._shared_owners[me] = 1
return True
def _acquire_exclusive(self, blocking=True):
me = current_thread()
# Each case: acquiring a lock we already hold.
if self._exclusive_owner is me:
assert self.is_exclusive
self.is_exclusive += 1
return True
# Do not allow upgrade of lock
if self.is_shared and me in self._shared_owners:
raise LockingError("can't upgrade SHLock object")
# If the lock is already spoken for, add us to the exclusive queue.
# This will eventually give us the lock when it's our turn.
if self.is_shared or self.is_exclusive:
if not blocking:
return False
waiter = self._take_waiter()
try:
self._exclusive_queue.append((me, waiter))
waiter.wait()
finally:
self._return_waiter(waiter)
else:
self._exclusive_owner = me
self.is_exclusive += 1
return True
def _take_waiter(self):
try:
return self._free_waiters.pop()
except IndexError:
return Condition(self._lock)#, verbose=True)
def _return_waiter(self, waiter):
self._free_waiters.append(waiter)
class RWLockWrapper(object):
def __init__(self, shlock, is_shared=True):
self._shlock = shlock
self._is_shared = is_shared
def __enter__(self):
self._shlock.acquire(shared=self._is_shared)
return self
def __exit__(self, *args):
self._shlock.release()
# Tests {{{
if __name__ == '__main__':
import time, random, unittest
from threading import Thread
class TestSHLock(unittest.TestCase):
"""Testcases for SHLock class."""
def test_upgrade(self):
lock = SHLock()
lock.acquire(shared=True)
self.assertRaises(LockingError, lock.acquire, shared=False)
lock.release()
def test_downgrade(self):
lock = SHLock()
lock.acquire(shared=False)
self.assertRaises(LockingError, lock.acquire, shared=True)
lock.release()
def test_recursive(self):
lock = SHLock()
lock.acquire(shared=True)
lock.acquire(shared=True)
self.assertEqual(lock.is_shared, 2)
lock.release()
lock.release()
self.assertFalse(lock.is_shared)
lock.acquire(shared=False)
lock.acquire(shared=False)
self.assertEqual(lock.is_exclusive, 2)
lock.release()
lock.release()
self.assertFalse(lock.is_exclusive)
def test_release(self):
lock = SHLock()
self.assertRaises(LockingError, lock.release)
def get_lock(shared):
lock.acquire(shared=shared)
time.sleep(1)
lock.release()
threads = [Thread(target=get_lock, args=(x,)) for x in (True,
False)]
for t in threads:
t.daemon = True
t.start()
self.assertRaises(LockingError, lock.release)
t.join(2)
self.assertFalse(t.is_alive())
self.assertFalse(lock.is_shared)
self.assertFalse(lock.is_exclusive)
def test_acquire(self):
lock = SHLock()
def get_lock(shared):
lock.acquire(shared=shared)
time.sleep(1)
lock.release()
shared = Thread(target=get_lock, args=(True,))
shared.daemon = True
shared.start()
time.sleep(0.1)
self.assertTrue(lock.acquire(shared=True, blocking=False))
lock.release()
self.assertFalse(lock.acquire(shared=False, blocking=False))
lock.acquire(shared=False)
self.assertFalse(shared.is_alive())
lock.release()
self.assertTrue(lock.acquire(shared=False, blocking=False))
lock.release()
exclusive = Thread(target=get_lock, args=(False,))
exclusive.daemon = True
exclusive.start()
time.sleep(0.1)
self.assertFalse(lock.acquire(shared=False, blocking=False))
self.assertFalse(lock.acquire(shared=True, blocking=False))
lock.acquire(shared=True)
self.assertFalse(exclusive.is_alive())
lock.release()
lock.acquire(shared=False)
lock.release()
lock.acquire(shared=True)
lock.release()
self.assertFalse(lock.is_shared)
self.assertFalse(lock.is_exclusive)
def test_contention(self):
lock = SHLock()
done = []
def lots_of_acquires():
for _ in xrange(1000):
shared = random.choice([True,False])
lock.acquire(shared=shared)
lock.acquire(shared=shared)
time.sleep(random.random() * 0.0001)
lock.release()
time.sleep(random.random() * 0.0001)
lock.acquire(shared=shared)
time.sleep(random.random() * 0.0001)
lock.release()
lock.release()
done.append(True)
threads = [Thread(target=lots_of_acquires) for _ in xrange(10)]
for t in threads:
t.daemon = True
t.start()
for t in threads:
t.join(20)
live = [t for t in threads if t.is_alive()]
self.assertListEqual(live, [], 'ShLock hung')
self.assertEqual(len(done), len(threads), 'SHLock locking failed')
self.assertFalse(lock.is_shared)
self.assertFalse(lock.is_exclusive)
suite = unittest.TestLoader().loadTestsFromTestCase(TestSHLock)
unittest.TextTestRunner(verbosity=2).run(suite)
# }}}
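The multiple-readers/single-writer protocol SHLock implements can be sketched in a few lines with a stdlib `Condition`. This toy version has none of SHLock's reentrancy, fairness queues, or upgrade/downgrade detection; it only shows the core invariant, namely that readers share while a writer excludes everyone:

```python
from threading import Lock, Condition

class TinyRWLock:
    '''Minimal non-reentrant readers/writer lock (illustration only).'''
    def __init__(self):
        self._cond = Condition(Lock())
        self._readers = 0
        self._writer = False

    def acquire(self, shared=True):
        with self._cond:
            if shared:
                # Readers wait only for an active writer
                while self._writer:
                    self._cond.wait()
                self._readers += 1
            else:
                # A writer waits for everyone to drain
                while self._writer or self._readers:
                    self._cond.wait()
                self._writer = True

    def release(self):
        with self._cond:
            if self._writer:
                self._writer = False
            else:
                self._readers -= 1
            self._cond.notify_all()

lock = TinyRWLock()
lock.acquire(shared=True)
lock.acquire(shared=True)   # two concurrent shared holders are fine
lock.release(); lock.release()
lock.acquire(shared=False)  # exclusive, once all readers are gone
lock.release()
print('ok')
```

SHLock adds what a real database cache needs on top of this: per-thread recursion counts, explicit waiter queues so writers are not starved, and `LockingError` on the upgrade/downgrade patterns that would deadlock.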


@@ -0,0 +1,618 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import os
from calibre import prints
from calibre.utils.date import isoformat, DEFAULT_DATE
class SchemaUpgrade(object):
def __init__(self, conn, library_path, field_metadata):
conn.execute('BEGIN EXCLUSIVE TRANSACTION')
self.conn = conn
self.library_path = library_path
self.field_metadata = field_metadata
# Upgrade database
try:
while True:
uv = self.conn.execute('pragma user_version').next()[0]
meth = getattr(self, 'upgrade_version_%d'%uv, None)
if meth is None:
break
else:
prints('Upgrading database to version %d...'%(uv+1))
meth()
self.conn.execute('pragma user_version=%d'%(uv+1))
except:
self.conn.execute('ROLLBACK')
raise
else:
self.conn.execute('COMMIT')
finally:
self.conn = self.field_metadata = None
def upgrade_version_1(self):
'''
Normalize indices.
'''
self.conn.execute('''\
DROP INDEX IF EXISTS authors_idx;
CREATE INDEX authors_idx ON books (author_sort COLLATE NOCASE, sort COLLATE NOCASE);
DROP INDEX IF EXISTS series_idx;
CREATE INDEX series_idx ON series (name COLLATE NOCASE);
DROP INDEX IF EXISTS series_sort_idx;
CREATE INDEX series_sort_idx ON books (series_index, id);
''')
def upgrade_version_2(self):
''' Fix Foreign key constraints for deleting from link tables. '''
script = '''\
DROP TRIGGER IF EXISTS fkc_delete_books_%(ltable)s_link;
CREATE TRIGGER fkc_delete_on_%(table)s
BEFORE DELETE ON %(table)s
BEGIN
SELECT CASE
WHEN (SELECT COUNT(id) FROM books_%(ltable)s_link WHERE %(ltable_col)s=OLD.id) > 0
THEN RAISE(ABORT, 'Foreign key violation: %(table)s is still referenced')
END;
END;
DELETE FROM %(table)s WHERE (SELECT COUNT(id) FROM books_%(ltable)s_link WHERE %(ltable_col)s=%(table)s.id) < 1;
'''
self.conn.execute(script%dict(ltable='authors', table='authors', ltable_col='author'))
self.conn.execute(script%dict(ltable='publishers', table='publishers', ltable_col='publisher'))
self.conn.execute(script%dict(ltable='tags', table='tags', ltable_col='tag'))
self.conn.execute(script%dict(ltable='series', table='series', ltable_col='series'))
def upgrade_version_3(self):
' Add path to result cache '
self.conn.execute('''
DROP VIEW IF EXISTS meta;
CREATE VIEW meta AS
SELECT id, title,
(SELECT concat(name) FROM authors WHERE authors.id IN (SELECT author from books_authors_link WHERE book=books.id)) authors,
(SELECT name FROM publishers WHERE publishers.id IN (SELECT publisher from books_publishers_link WHERE book=books.id)) publisher,
(SELECT rating FROM ratings WHERE ratings.id IN (SELECT rating from books_ratings_link WHERE book=books.id)) rating,
timestamp,
(SELECT MAX(uncompressed_size) FROM data WHERE book=books.id) size,
(SELECT concat(name) FROM tags WHERE tags.id IN (SELECT tag from books_tags_link WHERE book=books.id)) tags,
(SELECT text FROM comments WHERE book=books.id) comments,
(SELECT name FROM series WHERE series.id IN (SELECT series FROM books_series_link WHERE book=books.id)) series,
series_index,
sort,
author_sort,
(SELECT concat(format) FROM data WHERE data.book=books.id) formats,
isbn,
path
FROM books;
''')
def upgrade_version_4(self):
'Rationalize books table'
self.conn.execute('''
CREATE TEMPORARY TABLE
books_backup(id,title,sort,timestamp,series_index,author_sort,isbn,path);
INSERT INTO books_backup SELECT id,title,sort,timestamp,series_index,author_sort,isbn,path FROM books;
DROP TABLE books;
CREATE TABLE books ( id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL DEFAULT 'Unknown' COLLATE NOCASE,
sort TEXT COLLATE NOCASE,
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
pubdate TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
series_index REAL NOT NULL DEFAULT 1.0,
author_sort TEXT COLLATE NOCASE,
isbn TEXT DEFAULT "" COLLATE NOCASE,
lccn TEXT DEFAULT "" COLLATE NOCASE,
path TEXT NOT NULL DEFAULT "",
flags INTEGER NOT NULL DEFAULT 1
);
INSERT INTO
books (id,title,sort,timestamp,pubdate,series_index,author_sort,isbn,path)
SELECT id,title,sort,timestamp,timestamp,series_index,author_sort,isbn,path FROM books_backup;
DROP TABLE books_backup;
DROP VIEW IF EXISTS meta;
CREATE VIEW meta AS
SELECT id, title,
(SELECT concat(name) FROM authors WHERE authors.id IN (SELECT author from books_authors_link WHERE book=books.id)) authors,
(SELECT name FROM publishers WHERE publishers.id IN (SELECT publisher from books_publishers_link WHERE book=books.id)) publisher,
(SELECT rating FROM ratings WHERE ratings.id IN (SELECT rating from books_ratings_link WHERE book=books.id)) rating,
timestamp,
(SELECT MAX(uncompressed_size) FROM data WHERE book=books.id) size,
(SELECT concat(name) FROM tags WHERE tags.id IN (SELECT tag from books_tags_link WHERE book=books.id)) tags,
(SELECT text FROM comments WHERE book=books.id) comments,
(SELECT name FROM series WHERE series.id IN (SELECT series FROM books_series_link WHERE book=books.id)) series,
series_index,
sort,
author_sort,
(SELECT concat(format) FROM data WHERE data.book=books.id) formats,
isbn,
path,
lccn,
pubdate,
flags
FROM books;
''')
def upgrade_version_5(self):
'Update indexes/triggers for new books table'
self.conn.execute('''
CREATE INDEX authors_idx ON books (author_sort COLLATE NOCASE);
CREATE INDEX books_idx ON books (sort COLLATE NOCASE);
CREATE TRIGGER books_delete_trg
AFTER DELETE ON books
BEGIN
DELETE FROM books_authors_link WHERE book=OLD.id;
DELETE FROM books_publishers_link WHERE book=OLD.id;
DELETE FROM books_ratings_link WHERE book=OLD.id;
DELETE FROM books_series_link WHERE book=OLD.id;
DELETE FROM books_tags_link WHERE book=OLD.id;
DELETE FROM data WHERE book=OLD.id;
DELETE FROM comments WHERE book=OLD.id;
DELETE FROM conversion_options WHERE book=OLD.id;
END;
CREATE TRIGGER books_insert_trg
AFTER INSERT ON books
BEGIN
UPDATE books SET sort=title_sort(NEW.title) WHERE id=NEW.id;
END;
CREATE TRIGGER books_update_trg
AFTER UPDATE ON books
BEGIN
UPDATE books SET sort=title_sort(NEW.title) WHERE id=NEW.id;
END;
UPDATE books SET sort=title_sort(title) WHERE sort IS NULL;
'''
)
def upgrade_version_6(self):
'Show authors in order'
self.conn.execute('''
DROP VIEW IF EXISTS meta;
CREATE VIEW meta AS
SELECT id, title,
(SELECT sortconcat(bal.id, name) FROM books_authors_link AS bal JOIN authors ON(author = authors.id) WHERE book = books.id) authors,
(SELECT name FROM publishers WHERE publishers.id IN (SELECT publisher from books_publishers_link WHERE book=books.id)) publisher,
(SELECT rating FROM ratings WHERE ratings.id IN (SELECT rating from books_ratings_link WHERE book=books.id)) rating,
timestamp,
(SELECT MAX(uncompressed_size) FROM data WHERE book=books.id) size,
(SELECT concat(name) FROM tags WHERE tags.id IN (SELECT tag from books_tags_link WHERE book=books.id)) tags,
(SELECT text FROM comments WHERE book=books.id) comments,
(SELECT name FROM series WHERE series.id IN (SELECT series FROM books_series_link WHERE book=books.id)) series,
series_index,
sort,
author_sort,
(SELECT concat(format) FROM data WHERE data.book=books.id) formats,
isbn,
path,
lccn,
pubdate,
flags
FROM books;
''')
def upgrade_version_7(self):
'Add uuid column'
self.conn.execute('''
ALTER TABLE books ADD COLUMN uuid TEXT;
DROP TRIGGER IF EXISTS books_insert_trg;
DROP TRIGGER IF EXISTS books_update_trg;
UPDATE books SET uuid=uuid4();
CREATE TRIGGER books_insert_trg AFTER INSERT ON books
BEGIN
UPDATE books SET sort=title_sort(NEW.title),uuid=uuid4() WHERE id=NEW.id;
END;
CREATE TRIGGER books_update_trg AFTER UPDATE ON books
BEGIN
UPDATE books SET sort=title_sort(NEW.title) WHERE id=NEW.id;
END;
DROP VIEW IF EXISTS meta;
CREATE VIEW meta AS
SELECT id, title,
(SELECT sortconcat(bal.id, name) FROM books_authors_link AS bal JOIN authors ON(author = authors.id) WHERE book = books.id) authors,
(SELECT name FROM publishers WHERE publishers.id IN (SELECT publisher from books_publishers_link WHERE book=books.id)) publisher,
(SELECT rating FROM ratings WHERE ratings.id IN (SELECT rating from books_ratings_link WHERE book=books.id)) rating,
timestamp,
(SELECT MAX(uncompressed_size) FROM data WHERE book=books.id) size,
(SELECT concat(name) FROM tags WHERE tags.id IN (SELECT tag from books_tags_link WHERE book=books.id)) tags,
(SELECT text FROM comments WHERE book=books.id) comments,
(SELECT name FROM series WHERE series.id IN (SELECT series FROM books_series_link WHERE book=books.id)) series,
series_index,
sort,
author_sort,
(SELECT concat(format) FROM data WHERE data.book=books.id) formats,
isbn,
path,
lccn,
pubdate,
flags,
uuid
FROM books;
''')
def upgrade_version_8(self):
'Add Tag Browser views'
def create_tag_browser_view(table_name, column_name):
self.conn.execute('''
DROP VIEW IF EXISTS tag_browser_{tn};
CREATE VIEW tag_browser_{tn} AS SELECT
id,
name,
(SELECT COUNT(id) FROM books_{tn}_link WHERE {cn}={tn}.id) count
FROM {tn};
'''.format(tn=table_name, cn=column_name))
for tn in ('authors', 'tags', 'publishers', 'series'):
cn = tn[:-1]
if tn == 'series':
cn = tn
create_tag_browser_view(tn, cn)
def upgrade_version_9(self):
'Add custom columns'
self.conn.execute('''
CREATE TABLE custom_columns (
id INTEGER PRIMARY KEY AUTOINCREMENT,
label TEXT NOT NULL,
name TEXT NOT NULL,
datatype TEXT NOT NULL,
mark_for_delete BOOL DEFAULT 0 NOT NULL,
editable BOOL DEFAULT 1 NOT NULL,
display TEXT DEFAULT "{}" NOT NULL,
is_multiple BOOL DEFAULT 0 NOT NULL,
normalized BOOL NOT NULL,
UNIQUE(label)
);
CREATE INDEX IF NOT EXISTS custom_columns_idx ON custom_columns (label);
CREATE INDEX IF NOT EXISTS formats_idx ON data (format);
''')
def upgrade_version_10(self):
'Add restricted Tag Browser views'
def create_tag_browser_view(table_name, column_name, view_column_name):
script = ('''
DROP VIEW IF EXISTS tag_browser_{tn};
CREATE VIEW tag_browser_{tn} AS SELECT
id,
{vcn},
(SELECT COUNT(id) FROM books_{tn}_link WHERE {cn}={tn}.id) count
FROM {tn};
DROP VIEW IF EXISTS tag_browser_filtered_{tn};
CREATE VIEW tag_browser_filtered_{tn} AS SELECT
id,
{vcn},
(SELECT COUNT(books_{tn}_link.id) FROM books_{tn}_link WHERE
{cn}={tn}.id AND books_list_filter(book)) count
FROM {tn};
'''.format(tn=table_name, cn=column_name, vcn=view_column_name))
self.conn.execute(script)
for field in self.field_metadata.itervalues():
if field['is_category'] and not field['is_custom'] and 'link_column' in field:
table = self.conn.get(
'SELECT name FROM sqlite_master WHERE type="table" AND name=?',
('books_%s_link'%field['table'],), all=False)
if table is not None:
create_tag_browser_view(field['table'], field['link_column'], field['column'])
def upgrade_version_11(self):
'Add average rating to tag browser views'
def create_std_tag_browser_view(table_name, column_name,
view_column_name, sort_column_name):
script = ('''
DROP VIEW IF EXISTS tag_browser_{tn};
CREATE VIEW tag_browser_{tn} AS SELECT
id,
{vcn},
(SELECT COUNT(id) FROM books_{tn}_link WHERE {cn}={tn}.id) count,
(SELECT AVG(ratings.rating)
FROM books_{tn}_link AS tl, books_ratings_link AS bl, ratings
WHERE tl.{cn}={tn}.id AND bl.book=tl.book AND
ratings.id = bl.rating AND ratings.rating <> 0) avg_rating,
{scn} AS sort
FROM {tn};
DROP VIEW IF EXISTS tag_browser_filtered_{tn};
CREATE VIEW tag_browser_filtered_{tn} AS SELECT
id,
{vcn},
(SELECT COUNT(books_{tn}_link.id) FROM books_{tn}_link WHERE
{cn}={tn}.id AND books_list_filter(book)) count,
(SELECT AVG(ratings.rating)
FROM books_{tn}_link AS tl, books_ratings_link AS bl, ratings
WHERE tl.{cn}={tn}.id AND bl.book=tl.book AND
ratings.id = bl.rating AND ratings.rating <> 0 AND
books_list_filter(bl.book)) avg_rating,
{scn} AS sort
FROM {tn};
'''.format(tn=table_name, cn=column_name,
vcn=view_column_name, scn= sort_column_name))
self.conn.execute(script)
def create_cust_tag_browser_view(table_name, link_table_name):
script = '''
DROP VIEW IF EXISTS tag_browser_{table};
CREATE VIEW tag_browser_{table} AS SELECT
id,
value,
(SELECT COUNT(id) FROM {lt} WHERE value={table}.id) count,
(SELECT AVG(r.rating)
FROM {lt},
books_ratings_link AS bl,
ratings AS r
WHERE {lt}.value={table}.id AND bl.book={lt}.book AND
r.id = bl.rating AND r.rating <> 0) avg_rating,
value AS sort
FROM {table};
DROP VIEW IF EXISTS tag_browser_filtered_{table};
CREATE VIEW tag_browser_filtered_{table} AS SELECT
id,
value,
(SELECT COUNT({lt}.id) FROM {lt} WHERE value={table}.id AND
books_list_filter(book)) count,
(SELECT AVG(r.rating)
FROM {lt},
books_ratings_link AS bl,
ratings AS r
WHERE {lt}.value={table}.id AND bl.book={lt}.book AND
r.id = bl.rating AND r.rating <> 0 AND
books_list_filter(bl.book)) avg_rating,
value AS sort
FROM {table};
'''.format(lt=link_table_name, table=table_name)
self.conn.execute(script)
for field in self.field_metadata.itervalues():
if field['is_category'] and not field['is_custom'] and 'link_column' in field:
table = self.conn.get(
'SELECT name FROM sqlite_master WHERE type="table" AND name=?',
('books_%s_link'%field['table'],), all=False)
if table is not None:
create_std_tag_browser_view(field['table'], field['link_column'],
field['column'], field['category_sort'])
db_tables = self.conn.get('''SELECT name FROM sqlite_master
WHERE type='table'
ORDER BY name''')
tables = []
for (table,) in db_tables:
tables.append(table)
for table in tables:
link_table = 'books_%s_link'%table
if table.startswith('custom_column_') and link_table in tables:
create_cust_tag_browser_view(table, link_table)
self.conn.execute('UPDATE authors SET sort=author_to_author_sort(name)')
def upgrade_version_12(self):
'DB based preference store'
script = '''
DROP TABLE IF EXISTS preferences;
CREATE TABLE preferences(id INTEGER PRIMARY KEY,
key TEXT NON NULL,
val TEXT NON NULL,
UNIQUE(key));
'''
self.conn.execute(script)
def upgrade_version_13(self):
'Dirtied table for OPF metadata backups'
script = '''
DROP TABLE IF EXISTS metadata_dirtied;
CREATE TABLE metadata_dirtied(id INTEGER PRIMARY KEY,
book INTEGER NOT NULL,
UNIQUE(book));
INSERT INTO metadata_dirtied (book) SELECT id FROM books;
'''
self.conn.execute(script)
def upgrade_version_14(self):
'Cache has_cover'
self.conn.execute('ALTER TABLE books ADD COLUMN has_cover BOOL DEFAULT 0')
data = self.conn.get('SELECT id,path FROM books', all=True)
def has_cover(path):
if path:
path = os.path.join(self.library_path, path.replace('/', os.sep),
'cover.jpg')
return os.path.exists(path)
return False
ids = [(x[0],) for x in data if has_cover(x[1])]
self.conn.executemany('UPDATE books SET has_cover=1 WHERE id=?', ids)
def upgrade_version_15(self):
'Remove commas from tags'
self.conn.execute("UPDATE OR IGNORE tags SET name=REPLACE(name, ',', ';')")
self.conn.execute("UPDATE OR IGNORE tags SET name=REPLACE(name, ',', ';;')")
self.conn.execute("UPDATE OR IGNORE tags SET name=REPLACE(name, ',', '')")
def upgrade_version_16(self):
self.conn.execute('''
DROP TRIGGER IF EXISTS books_update_trg;
CREATE TRIGGER books_update_trg
AFTER UPDATE ON books
BEGIN
UPDATE books SET sort=title_sort(NEW.title)
WHERE id=NEW.id AND OLD.title <> NEW.title;
END;
''')
def upgrade_version_17(self):
'custom book data table (for plugins)'
script = '''
DROP TABLE IF EXISTS books_plugin_data;
CREATE TABLE books_plugin_data(id INTEGER PRIMARY KEY,
book INTEGER NON NULL,
name TEXT NON NULL,
val TEXT NON NULL,
UNIQUE(book,name));
DROP TRIGGER IF EXISTS books_delete_trg;
CREATE TRIGGER books_delete_trg
AFTER DELETE ON books
BEGIN
DELETE FROM books_authors_link WHERE book=OLD.id;
DELETE FROM books_publishers_link WHERE book=OLD.id;
DELETE FROM books_ratings_link WHERE book=OLD.id;
DELETE FROM books_series_link WHERE book=OLD.id;
DELETE FROM books_tags_link WHERE book=OLD.id;
DELETE FROM data WHERE book=OLD.id;
DELETE FROM comments WHERE book=OLD.id;
DELETE FROM conversion_options WHERE book=OLD.id;
DELETE FROM books_plugin_data WHERE book=OLD.id;
END;
'''
self.conn.execute(script)
def upgrade_version_18(self):
'''
Add a library UUID.
Add an identifiers table.
Add a languages table.
Add a last_modified column.
        NOTE: You cannot downgrade after this update; if you do,
        any changes you make to book ISBNs will be lost.
'''
script = '''
DROP TABLE IF EXISTS library_id;
CREATE TABLE library_id ( id INTEGER PRIMARY KEY,
uuid TEXT NOT NULL,
UNIQUE(uuid)
);
DROP TABLE IF EXISTS identifiers;
CREATE TABLE identifiers ( id INTEGER PRIMARY KEY,
book INTEGER NON NULL,
type TEXT NON NULL DEFAULT "isbn" COLLATE NOCASE,
val TEXT NON NULL COLLATE NOCASE,
UNIQUE(book, type)
);
DROP TABLE IF EXISTS languages;
CREATE TABLE languages ( id INTEGER PRIMARY KEY,
lang_code TEXT NON NULL COLLATE NOCASE,
UNIQUE(lang_code)
);
DROP TABLE IF EXISTS books_languages_link;
CREATE TABLE books_languages_link ( id INTEGER PRIMARY KEY,
book INTEGER NOT NULL,
lang_code INTEGER NOT NULL,
item_order INTEGER NOT NULL DEFAULT 0,
UNIQUE(book, lang_code)
);
DROP TRIGGER IF EXISTS fkc_delete_on_languages;
CREATE TRIGGER fkc_delete_on_languages
BEFORE DELETE ON languages
BEGIN
SELECT CASE
WHEN (SELECT COUNT(id) FROM books_languages_link WHERE lang_code=OLD.id) > 0
THEN RAISE(ABORT, 'Foreign key violation: language is still referenced')
END;
END;
DROP TRIGGER IF EXISTS fkc_delete_on_languages_link;
CREATE TRIGGER fkc_delete_on_languages_link
BEFORE INSERT ON books_languages_link
BEGIN
SELECT CASE
WHEN (SELECT id from books WHERE id=NEW.book) IS NULL
THEN RAISE(ABORT, 'Foreign key violation: book not in books')
WHEN (SELECT id from languages WHERE id=NEW.lang_code) IS NULL
THEN RAISE(ABORT, 'Foreign key violation: lang_code not in languages')
END;
END;
DROP TRIGGER IF EXISTS fkc_update_books_languages_link_a;
CREATE TRIGGER fkc_update_books_languages_link_a
BEFORE UPDATE OF book ON books_languages_link
BEGIN
SELECT CASE
WHEN (SELECT id from books WHERE id=NEW.book) IS NULL
THEN RAISE(ABORT, 'Foreign key violation: book not in books')
END;
END;
DROP TRIGGER IF EXISTS fkc_update_books_languages_link_b;
CREATE TRIGGER fkc_update_books_languages_link_b
BEFORE UPDATE OF lang_code ON books_languages_link
BEGIN
SELECT CASE
WHEN (SELECT id from languages WHERE id=NEW.lang_code) IS NULL
THEN RAISE(ABORT, 'Foreign key violation: lang_code not in languages')
END;
END;
DROP INDEX IF EXISTS books_languages_link_aidx;
CREATE INDEX books_languages_link_aidx ON books_languages_link (lang_code);
DROP INDEX IF EXISTS books_languages_link_bidx;
CREATE INDEX books_languages_link_bidx ON books_languages_link (book);
DROP INDEX IF EXISTS languages_idx;
CREATE INDEX languages_idx ON languages (lang_code COLLATE NOCASE);
DROP TRIGGER IF EXISTS books_delete_trg;
CREATE TRIGGER books_delete_trg
AFTER DELETE ON books
BEGIN
DELETE FROM books_authors_link WHERE book=OLD.id;
DELETE FROM books_publishers_link WHERE book=OLD.id;
DELETE FROM books_ratings_link WHERE book=OLD.id;
DELETE FROM books_series_link WHERE book=OLD.id;
DELETE FROM books_tags_link WHERE book=OLD.id;
DELETE FROM books_languages_link WHERE book=OLD.id;
DELETE FROM data WHERE book=OLD.id;
DELETE FROM comments WHERE book=OLD.id;
DELETE FROM conversion_options WHERE book=OLD.id;
DELETE FROM books_plugin_data WHERE book=OLD.id;
DELETE FROM identifiers WHERE book=OLD.id;
END;
INSERT INTO identifiers (book, val) SELECT id,isbn FROM books WHERE isbn;
ALTER TABLE books ADD COLUMN last_modified TIMESTAMP NOT NULL DEFAULT "%s";
'''%isoformat(DEFAULT_DATE, sep=' ')
        # SQLite does not support non-constant default values in
        # ALTER TABLE statements
self.conn.execute(script)
def upgrade_version_19(self):
recipes = self.conn.get('SELECT id,title,script FROM feeds')
if recipes:
from calibre.web.feeds.recipes import (custom_recipes,
custom_recipe_filename)
bdir = os.path.dirname(custom_recipes.file_path)
for id_, title, script in recipes:
existing = frozenset(map(int, custom_recipes.iterkeys()))
if id_ in existing:
id_ = max(existing) + 1000
id_ = str(id_)
fname = custom_recipe_filename(id_, title)
custom_recipes[id_] = (title, fname)
if isinstance(script, unicode):
script = script.encode('utf-8')
with open(os.path.join(bdir, fname), 'wb') as f:
f.write(script)
def upgrade_version_20(self):
'''
Add a link column to the authors table.
'''
script = '''
ALTER TABLE authors ADD COLUMN link TEXT NOT NULL DEFAULT "";
'''
self.conn.execute(script)
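The `upgrade_version_N` methods above are applied in sequence as the schema version advances. A minimal sketch of that dispatch pattern (an illustration, not calibre's exact code; tracking the version via SQLite's `PRAGMA user_version` is an assumption here):

```python
import sqlite3

class Upgrader(object):
    # Illustrative dispatcher: runs upgrade_version_N methods in order,
    # starting from the database's recorded schema version.
    def __init__(self, conn):
        self.conn = conn

    def upgrade(self):
        version = self.conn.execute('PRAGMA user_version').fetchone()[0]
        while True:
            meth = getattr(self, 'upgrade_version_%d' % (version + 1), None)
            if meth is None:
                break
            meth()
            version += 1
            self.conn.execute('PRAGMA user_version=%d' % version)

class MyUpgrader(Upgrader):
    def upgrade_version_1(self):
        self.conn.execute('CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)')

    def upgrade_version_2(self):
        self.conn.execute('ALTER TABLE books ADD COLUMN uuid TEXT')

conn = sqlite3.connect(':memory:')
MyUpgrader(conn).upgrade()
cols = [r[1] for r in conn.execute('PRAGMA table_info(books)')]
print(cols)  # ['id', 'title', 'uuid']
```

Each upgrade only ever moves forward, which is why the version 18 docstring above warns that downgrading loses data.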


@@ -32,11 +32,11 @@ def _c_convert_timestamp(val):
 class Table(object):
 
-    def __init__(self, name, metadata):
+    def __init__(self, name, metadata, link_table=None):
         self.name, self.metadata = name, metadata
-        # self.adapt() maps values from the db to python objects
-        self.adapt = \
+        # self.unserialize() maps values from the db to python objects
+        self.unserialize = \
                 {
                     'datetime': _c_convert_timestamp,
                     'bool': bool
@@ -44,16 +44,25 @@ class Table(object):
                         metadata['datatype'], lambda x: x)
         if name == 'authors':
             # Legacy
-            self.adapt = lambda x: x.replace('|', ',') if x else None
+            self.unserialize = lambda x: x.replace('|', ',') if x else None
+        self.link_table = (link_table if link_table else
+                'books_%s_link'%self.metadata['table'])
 
 class OneToOneTable(Table):
 
+    '''
+    Represents data that is unique per book (it may not actually be unique) but
+    each item is assigned to a book in a one-to-one mapping. For example: uuid,
+    timestamp, size, etc.
+    '''
+
     def read(self, db):
         self.book_col_map = {}
         idcol = 'id' if self.metadata['table'] == 'books' else 'book'
         for row in db.conn.execute('SELECT {0}, {1} FROM {2}'.format(idcol,
             self.metadata['column'], self.metadata['table'])):
-            self.book_col_map[row[0]] = self.adapt(row[1])
+            self.book_col_map[row[0]] = self.unserialize(row[1])
 
 class SizeTable(OneToOneTable):
@@ -62,10 +71,17 @@ class SizeTable(OneToOneTable):
         for row in db.conn.execute(
                 'SELECT books.id, (SELECT MAX(uncompressed_size) FROM data '
                 'WHERE data.book=books.id) FROM books'):
-            self.book_col_map[row[0]] = self.adapt(row[1])
+            self.book_col_map[row[0]] = self.unserialize(row[1])
 
 class ManyToOneTable(Table):
 
+    '''
+    Represents data where one data item can map to many books, for example:
+    series or publisher.
+    Each book however has only one value for data of this type.
+    '''
+
     def read(self, db):
         self.id_map = {}
         self.extra_map = {}
@@ -76,28 +92,34 @@ class ManyToOneTable(Table):
     def read_id_maps(self, db):
         for row in db.conn.execute('SELECT id, {0} FROM {1}'.format(
-                self.metadata['name'], self.metadata['table'])):
+                self.metadata['column'], self.metadata['table'])):
             if row[1]:
-                self.id_map[row[0]] = self.adapt(row[1])
+                self.id_map[row[0]] = self.unserialize(row[1])
 
     def read_maps(self, db):
         for row in db.conn.execute(
-                'SELECT book, {0} FROM books_{1}_link'.format(
-                    self.metadata['link_column'], self.metadata['table'])):
+                'SELECT book, {0} FROM {1}'.format(
+                    self.metadata['link_column'], self.link_table)):
             if row[1] not in self.col_book_map:
                 self.col_book_map[row[1]] = []
-            self.col_book_map.append(row[0])
+            self.col_book_map[row[1]].append(row[0])
             self.book_col_map[row[0]] = row[1]
 
 class ManyToManyTable(ManyToOneTable):
 
+    '''
+    Represents data that has a many-to-many mapping with books. i.e. each book
+    can have more than one value and each value can be mapped to more than one
+    book. For example: tags or authors.
+    '''
+
     def read_maps(self, db):
         for row in db.conn.execute(
-                'SELECT book, {0} FROM books_{1}_link'.format(
-                    self.metadata['link_column'], self.metadata['table'])):
+                'SELECT book, {0} FROM {1}'.format(
+                    self.metadata['link_column'], self.link_table)):
             if row[1] not in self.col_book_map:
                 self.col_book_map[row[1]] = []
-            self.col_book_map.append(row[0])
+            self.col_book_map[row[1]].append(row[0])
             if row[0] not in self.book_col_map:
                 self.book_col_map[row[0]] = []
             self.book_col_map[row[0]].append(row[1])
@@ -105,11 +127,13 @@ class ManyToManyTable(ManyToOneTable):
 class AuthorsTable(ManyToManyTable):
 
     def read_id_maps(self, db):
+        self.alink_map = {}
         for row in db.conn.execute(
-                'SELECT id, name, sort FROM authors'):
+                'SELECT id, name, sort, link FROM authors'):
             self.id_map[row[0]] = row[1]
             self.extra_map[row[0]] = (row[2] if row[2] else
                     author_to_author_sort(row[1]))
+            self.alink_map[row[0]] = row[3]
 
 class FormatsTable(ManyToManyTable):
@@ -121,7 +145,7 @@ class FormatsTable(ManyToManyTable):
             if row[1] is not None:
                 if row[1] not in self.col_book_map:
                     self.col_book_map[row[1]] = []
-                self.col_book_map.append(row[0])
+                self.col_book_map[row[1]].append(row[0])
                 if row[0] not in self.book_col_map:
                     self.book_col_map[row[0]] = []
                 self.book_col_map[row[0]].append((row[1], row[2]))
@@ -136,7 +160,7 @@ class IdentifiersTable(ManyToManyTable):
             if row[1] is not None and row[2] is not None:
                 if row[1] not in self.col_book_map:
                     self.col_book_map[row[1]] = []
-                self.col_book_map.append(row[0])
+                self.col_book_map[row[1]].append(row[0])
                 if row[0] not in self.book_col_map:
                     self.book_col_map[row[0]] = []
                 self.book_col_map[row[0]].append((row[1], row[2]))
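Several hunks above apply the same bug fix: `self.col_book_map.append(row[0])` called `append` on the dict itself (an `AttributeError` at runtime) rather than on the per-value list. A minimal sketch of the corrected two-way map construction (the row data below is invented for illustration):

```python
# rows of (book_id, value_id) pairs, as returned by a link-table query
rows = [(1, 10), (2, 10), (2, 11)]

col_book_map = {}   # value_id -> list of book_ids that carry it
book_col_map = {}   # book_id  -> list of value_ids on that book

for book, val in rows:
    if val not in col_book_map:
        col_book_map[val] = []
    col_book_map[val].append(book)   # the fix: append to the list, not the dict
    if book not in book_col_map:
        book_col_map[book] = []
    book_col_map[book].append(val)

print(col_book_map)  # {10: [1, 2], 11: [2]}
print(book_col_map)  # {1: [10], 2: [10, 11]}
```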


@@ -19,16 +19,17 @@ class ANDROID(USBMS):
     VENDOR_ID = {
             # HTC
-            0x0bb4 : { 0x0c02 : [0x100, 0x0227, 0x0226, 0x222],
-                        0x0c01 : [0x100, 0x0227, 0x0226],
-                        0x0ff9 : [0x0100, 0x0227, 0x0226],
-                        0x0c87 : [0x0100, 0x0227, 0x0226],
-                        0xc92  : [0x100],
-                        0xc97  : [0x226],
-                        0xc99  : [0x0100],
-                        0xca2  : [0x226],
-                        0xca3  : [0x100],
-                        0xca4  : [0x226],
+            0x0bb4 : { 0xc02  : [0x100, 0x0227, 0x0226, 0x222],
+                        0xc01  : [0x100, 0x0227, 0x0226],
+                        0xff9  : [0x0100, 0x0227, 0x0226],
+                        0xc87  : [0x0100, 0x0227, 0x0226],
+                        0xc91  : [0x0100, 0x0227, 0x0226],
+                        0xc92  : [0x100, 0x0227, 0x0226, 0x222],
+                        0xc97  : [0x100, 0x0227, 0x0226, 0x222],
+                        0xc99  : [0x100, 0x0227, 0x0226, 0x222],
+                        0xca2  : [0x100, 0x0227, 0x0226, 0x222],
+                        0xca3  : [0x100, 0x0227, 0x0226, 0x222],
+                        0xca4  : [0x100, 0x0227, 0x0226, 0x222],
                     },
 
             # Eken
@@ -100,6 +101,9 @@ class ANDROID(USBMS):
             # ZTE
             0x19d2 : { 0x1353 : [0x226] },
 
+            # Advent
+            0x0955 : { 0x7100 : [0x9999] }, # This is the same as the Notion Ink Adam
+
             }
 
     EBOOK_DIR_MAIN = ['eBooks/import', 'wordplayer/calibretransfer', 'Books']
     EXTRA_CUSTOMIZATION_MESSAGE = _('Comma separated list of directories to '
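The `VENDOR_ID` structure above maps USB vendor id to `{product id: [accepted BCD device revisions]}`. A simplified sketch of how a connected device could be matched against such a table (the real USBMS detection logic is more involved; this just illustrates the lookup):

```python
# Trimmed copy of the table above, for illustration
VENDOR_ID = {
    0x0bb4: {0xc02: [0x100, 0x0227, 0x0226, 0x222]},   # HTC
    0x0955: {0x7100: [0x9999]},  # Advent (same ids as the Notion Ink Adam)
}

def matches(vid, pid, bcd):
    # True when vendor id, product id and device revision all appear in the table
    return bcd in VENDOR_ID.get(vid, {}).get(pid, [])

print(matches(0x0955, 0x7100, 0x9999))  # True
print(matches(0x0955, 0x7100, 0x100))   # False
```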


@@ -5,7 +5,7 @@ __copyright__ = '2010, Gregory Riker'
 __docformat__ = 'restructuredtext en'
 
-import cStringIO, ctypes, datetime, os, re, sys, tempfile, time
+import cStringIO, ctypes, datetime, os, re, shutil, sys, tempfile, time
 
 from calibre.constants import __appname__, __version__, DEBUG
 from calibre import fit_image, confirm_config_name
 from calibre.constants import isosx, iswindows
@@ -119,11 +119,17 @@ class DriverBase(DeviceConfig, DevicePlugin):
                  'iBooks Category'),
             _('Cache covers from iTunes/iBooks') +
                 ':::' +
-                _('Enable to cache and display covers from iTunes/iBooks')
+                _('Enable to cache and display covers from iTunes/iBooks'),
+            _(u'"Copy files to iTunes Media folder %s" is enabled in iTunes Preferences|Advanced')%u'\u2026' +
+                ':::' +
+                _("<p>This setting should match your iTunes <i>Preferences</i>|<i>Advanced</i> setting.</p>"
+                  "<p>Disabling will store copies of books transferred to iTunes in your calibre configuration directory.</p>"
+                  "<p>Enabling indicates that iTunes is configured to store copies in your iTunes Media folder.</p>")
     ]
     EXTRA_CUSTOMIZATION_DEFAULT = [
                 True,
                 True,
+                False,
     ]
@@ -193,6 +199,7 @@ class ITUNES(DriverBase):
     # EXTRA_CUSTOMIZATION_MESSAGE indexes
     USE_SERIES_AS_CATEGORY = 0
     CACHE_COVERS = 1
+    USE_ITUNES_STORAGE = 2
 
     OPEN_FEEDBACK_MESSAGE = _(
         'Apple device detected, launching iTunes, please wait ...')
@@ -281,6 +288,7 @@ class ITUNES(DriverBase):
     description_prefix = "added by calibre"
     ejected = False
     iTunes= None
+    iTunes_local_storage = None
     library_orphans = None
    log = Log()
     manual_sync_mode = False
@@ -825,7 +833,7 @@ class ITUNES(DriverBase):
         # Confirm/create thumbs archive
         if not os.path.exists(self.cache_dir):
             if DEBUG:
-                self.log.info(" creating thumb cache '%s'" % self.cache_dir)
+                self.log.info(" creating thumb cache at '%s'" % self.cache_dir)
             os.makedirs(self.cache_dir)
 
         if not os.path.exists(self.archive_path):
@@ -837,6 +845,17 @@ class ITUNES(DriverBase):
             if DEBUG:
                 self.log.info(" existing thumb cache at '%s'" % self.archive_path)
 
+        # If enabled in config options, create/confirm an iTunes storage folder
+        if not self.settings().extra_customization[self.USE_ITUNES_STORAGE]:
+            self.iTunes_local_storage = os.path.join(config_dir,'iTunes storage')
+            if not os.path.exists(self.iTunes_local_storage):
+                if DEBUG:
+                    self.log(" creating iTunes_local_storage at '%s'" % self.iTunes_local_storage)
+                os.mkdir(self.iTunes_local_storage)
+            else:
+                if DEBUG:
+                    self.log(" existing iTunes_local_storage at '%s'" % self.iTunes_local_storage)
+
     def remove_books_from_metadata(self, paths, booklists):
         '''
         Remove books from the metadata list. This function must not communicate
@@ -1281,50 +1300,27 @@ class ITUNES(DriverBase):
         if DEBUG:
             self.log.info(" ITUNES._add_new_copy()")
 
-        def _save_last_known_iTunes_storage(lb_added):
-            if isosx:
-                fp = lb_added.location().path
-                index = fp.rfind('/Books') + len('/Books')
-                last_known_iTunes_storage = fp[:index]
-            elif iswindows:
-                fp = lb_added.Location
-                index = fp.rfind('\Books') + len('\Books')
-                last_known_iTunes_storage = fp[:index]
-            dynamic['last_known_iTunes_storage'] = last_known_iTunes_storage
-            self.log.warning("  last_known_iTunes_storage: %s" % last_known_iTunes_storage)
-
         db_added = None
         lb_added = None
 
+        # If using iTunes_local_storage, copy the file, redirect iTunes to use local copy
+        if not self.settings().extra_customization[self.USE_ITUNES_STORAGE]:
+            local_copy = os.path.join(self.iTunes_local_storage, str(metadata.uuid) + os.path.splitext(fpath)[1])
+            shutil.copyfile(fpath,local_copy)
+            fpath = local_copy
+
         if self.manual_sync_mode:
             '''
-            This is the unsupported direct-connect mode.
-            In an attempt to avoid resetting the iTunes library Media folder, don't try to
-            add the book to iTunes if the last_known_iTunes_storage path is inaccessible.
-            This means that the path has to be set at least once, probably by using
-            'Connect to iTunes' and doing a transfer.
+            Unsupported direct-connect mode.
             '''
             self.log.warning("  unsupported direct connect mode")
             db_added = self._add_device_book(fpath, metadata)
-            last_known_iTunes_storage = dynamic.get('last_known_iTunes_storage', None)
-            if last_known_iTunes_storage is not None:
-                if os.path.exists(last_known_iTunes_storage):
-                    if DEBUG:
-                        self.log.warning("  iTunes storage online, adding to library")
-                    lb_added = self._add_library_book(fpath, metadata)
-                else:
-                    if DEBUG:
-                        self.log.warning("  iTunes storage not online, can't add to library")
-            if lb_added:
-                _save_last_known_iTunes_storage(lb_added)
+            lb_added = self._add_library_book(fpath, metadata)
             if not lb_added and DEBUG:
                 self.log.warn("  failed to add '%s' to iTunes, iTunes Media folder inaccessible" % metadata.title)
         else:
             lb_added = self._add_library_book(fpath, metadata)
-            if lb_added:
-                _save_last_known_iTunes_storage(lb_added)
-            else:
+            if not lb_added:
                 raise UserFeedback("iTunes Media folder inaccessible",
                         details="Failed to add '%s' to iTunes" % metadata.title,
                         level=UserFeedback.WARN)
@@ -1520,7 +1516,7 @@ class ITUNES(DriverBase):
             else:
                 self.log.error("  book_playlist not found")
 
-        if len(dev_books):
+        if dev_books is not None and len(dev_books):
             first_book = dev_books[0]
             if False:
                 self.log.info("  determing manual mode by modifying '%s' by %s" % (first_book.name(), first_book.artist()))
@@ -1551,7 +1547,7 @@ class ITUNES(DriverBase):
                     dev_books = pl.Tracks
                     break
 
-        if dev_books.Count:
+        if dev_books is not None and dev_books.Count:
             first_book = dev_books.Item(1)
             #if DEBUG:
                 #self.log.info("  determing manual mode by modifying '%s' by %s" % (first_book.Name, first_book.Artist))
@@ -2526,7 +2522,15 @@ class ITUNES(DriverBase):
                     self.log.info("  processing %s" % fp)
                 if fp.startswith(prefs['library_path']):
                     self.log.info("   '%s' stored in calibre database, not removed" % cached_book['title'])
+                elif not self.settings().extra_customization[self.USE_ITUNES_STORAGE] and \
+                  fp.startswith(self.iTunes_local_storage) and \
+                  os.path.exists(fp):
+                    # Delete the copy in iTunes_local_storage
+                    os.remove(fp)
+                    if DEBUG:
+                        self.log("  removing from iTunes_local_storage")
                 else:
+                    # Delete from iTunes Media folder
                     if os.path.exists(fp):
                         os.remove(fp)
                         if DEBUG:
@ -2544,12 +2548,6 @@ class ITUNES(DriverBase):
os.rmdir(author_storage_path) os.rmdir(author_storage_path)
if DEBUG: if DEBUG:
self.log.info(" removing empty author directory") self.log.info(" removing empty author directory")
'''
else:
if DEBUG:
self.log.info(" author_storage_path not empty:")
self.log.info(" %s" % '\n'.join(author_files))
'''
else:
self.log.info(" '%s' does not exist at storage location" % cached_book['title'])
@@ -2586,7 +2584,15 @@ class ITUNES(DriverBase):
self.log.info(" processing %s" % fp)
if fp.startswith(prefs['library_path']):
self.log.info(" '%s' stored in calibre database, not removed" % cached_book['title'])
elif not self.settings().extra_customization[self.USE_ITUNES_STORAGE] and \
fp.startswith(self.iTunes_local_storage) and \
os.path.exists(fp):
# Delete the copy in iTunes_local_storage
os.remove(fp)
if DEBUG:
self.log(" removing from iTunes_local_storage")
else:
# Delete from iTunes Media folder
if os.path.exists(fp):
os.remove(fp)
if DEBUG:
@@ -3234,6 +3240,17 @@ class ITUNES_ASYNC(ITUNES):
if DEBUG:
self.log.info(" existing thumb cache at '%s'" % self.archive_path)
# If enabled in config options, create/confirm an iTunes storage folder
if not self.settings().extra_customization[self.USE_ITUNES_STORAGE]:
self.iTunes_local_storage = os.path.join(config_dir,'iTunes storage')
if not os.path.exists(self.iTunes_local_storage):
if DEBUG:
self.log(" creating iTunes_local_storage at '%s'" % self.iTunes_local_storage)
os.mkdir(self.iTunes_local_storage)
else:
if DEBUG:
self.log(" existing iTunes_local_storage at '%s'" % self.iTunes_local_storage)
def sync_booklists(self, booklists, end_session=True):
'''
Update metadata on device.
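The iTunes_ASYNC hunk above creates (or confirms) a driver-local storage folder under the config directory. The pattern is easy to isolate; the function name and the use of a temporary directory below are illustrative, not part of the driver:

```python
import os
import tempfile

def ensure_local_storage(config_dir, name='iTunes storage'):
    """Create (or confirm) a driver-local storage folder, as the driver does."""
    path = os.path.join(config_dir, name)
    if not os.path.exists(path):
        # first run: create the folder
        os.mkdir(path)
    return path

base = tempfile.mkdtemp()
p1 = ensure_local_storage(base)   # creates the folder
p2 = ensure_local_storage(base)   # second call finds it already there
```

Both calls return the same path, mirroring the create/confirm branches guarded by `DEBUG` logging in the driver.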

View File

@@ -137,7 +137,7 @@ class KOBO(USBMS):
bl_cache[lpath] = None
if ImageID is not None:
imagename = self.normalize_path(self._main_prefix + '.kobo/images/' + ImageID + ' - NickelBookCover.parsed')
if not os.path.exists(imagename):
# Try the Touch version if the image does not exist
imagename = self.normalize_path(self._main_prefix + '.kobo/images/' + ImageID + ' - N3_LIBRARY_FULL.parsed')
@@ -203,14 +203,25 @@ class KOBO(USBMS):
result = cursor.fetchone()
self.dbversion = result[0]
debug_print("Database Version: ", self.dbversion)
if self.dbversion >= 14:
query= 'select Title, Attribution, DateCreated, ContentID, MimeType, ContentType, ' \
'ImageID, ReadStatus, ___ExpirationStatus, FavouritesIndex from content where BookID is Null and ( ___ExpirationStatus <> "3" or ___ExpirationStatus is Null)'
elif self.dbversion < 14 and self.dbversion >= 8:
query= 'select Title, Attribution, DateCreated, ContentID, MimeType, ContentType, ' \
'ImageID, ReadStatus, ___ExpirationStatus, "-1" as FavouritesIndex from content where BookID is Null and ( ___ExpirationStatus <> "3" or ___ExpirationStatus is Null)'
else:
query= 'select Title, Attribution, DateCreated, ContentID, MimeType, ContentType, ' \
'ImageID, ReadStatus, "-1" as ___ExpirationStatus, "-1" as FavouritesIndex from content where BookID is Null'
try:
cursor.execute (query)
except Exception as e:
if '___ExpirationStatus' not in str(e):
raise
query= 'select Title, Attribution, DateCreated, ContentID, MimeType, ContentType, ' \
'ImageID, ReadStatus, "-1" as ___ExpirationStatus, "-1" as FavouritesIndex from content where BookID is Null'
cursor.execute(query)
changed = False
for i, row in enumerate(cursor):
@@ -577,7 +588,7 @@ class KOBO(USBMS):
for book in books:
# debug_print('Title:', book.title, 'lpath:', book.path)
if 'Im_Reading' not in book.device_collections:
book.device_collections.append('Im_Reading')
extension = os.path.splitext(book.path)[1]
ContentType = self.get_content_type_from_extension(extension) if extension != '' else self.get_content_type_from_path(book.path)
@@ -621,7 +632,7 @@ class KOBO(USBMS):
for book in books:
# debug_print('Title:', book.title, 'lpath:', book.path)
if 'Read' not in book.device_collections:
book.device_collections.append('Read')
extension = os.path.splitext(book.path)[1]
ContentType = self.get_content_type_from_extension(extension) if extension != '' else self.get_content_type_from_path(book.path)
@@ -658,7 +669,7 @@ class KOBO(USBMS):
for book in books:
# debug_print('Title:', book.title, 'lpath:', book.path)
if 'Closed' not in book.device_collections:
book.device_collections.append('Closed')
extension = os.path.splitext(book.path)[1]
ContentType = self.get_content_type_from_extension(extension) if extension != '' else self.get_content_type_from_path(book.path)
@@ -695,8 +706,8 @@ class KOBO(USBMS):
for book in books:
# debug_print('Title:', book.title, 'lpath:', book.path)
if 'Shortlist' not in book.device_collections:
book.device_collections.append('Shortlist')
# debug_print ("Shortlist found for: ", book.title)
extension = os.path.splitext(book.path)[1]
ContentType = self.get_content_type_from_extension(extension) if extension != '' else self.get_content_type_from_path(book.path)
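The Kobo hunk above retries a simpler SELECT when the `___ExpirationStatus` column is missing from an older device database. That fallback pattern can be sketched in isolation; the table and column names here are illustrative, not the real Kobo schema:

```python
import sqlite3

def query_with_fallback(conn):
    """Try the richer query first; retry without the optional column if absent."""
    try:
        return conn.execute(
            'select Title, ExpirationStatus from content').fetchall()
    except sqlite3.OperationalError as e:
        # Only swallow the "no such column" case; re-raise anything else
        if 'ExpirationStatus' not in str(e):
            raise
        return conn.execute(
            "select Title, '-1' as ExpirationStatus from content").fetchall()

conn = sqlite3.connect(':memory:')
conn.execute('create table content (Title text)')  # old schema, no ExpirationStatus
conn.execute("insert into content values ('A Book')")
rows = query_with_fallback(conn)
```

Inspecting the exception message before retrying keeps genuinely unexpected database errors visible instead of silently degrading the query.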

View File

@@ -64,14 +64,24 @@ int do_mount(const char *dev, const char *mp) {
snprintf(options, 1000, "rw,noexec,nosuid,sync,nodev");
snprintf(uids, 100, "%d", getuid());
snprintf(gids, 100, "%d", getgid());
#else
#ifdef __FreeBSD__
snprintf(options, 1000, "rw,noexec,nosuid,sync,-u=%d,-g=%d",getuid(),getgid());
#else
snprintf(options, 1000, "rw,noexec,nosuid,sync,nodev,quiet,shortname=mixed,uid=%d,gid=%d,umask=077,fmask=0177,dmask=0077,utf8,iocharset=iso8859-1", getuid(), getgid());
#endif
#endif
ensure_root();
#ifdef __NetBSD__
execlp("mount_msdos", "mount_msdos", "-u", uids, "-g", gids, "-o", options, dev, mp, NULL);
#else
#ifdef __FreeBSD__
execlp("mount", "mount", "-t", "msdosfs", "-o", options, dev, mp, NULL);
#else
execlp("mount", "mount", "-t", "auto", "-o", options, dev, mp, NULL);
#endif
#endif
errsv = errno;
fprintf(stderr, "Failed to mount with error: %s\n", strerror(errsv));
@@ -91,8 +101,12 @@ int call_eject(const char *dev, const char *mp) {
ensure_root();
#ifdef __NetBSD__
execlp("eject", "eject", dev, NULL);
#else
#ifdef __FreeBSD__
execlp("umount", "umount", dev, NULL);
#else
execlp("eject", "eject", "-s", dev, NULL);
#endif
#endif
/* execlp failed */
errsv = errno;
@@ -121,7 +135,11 @@ int call_umount(const char *dev, const char *mp) {
if (pid == 0) { /* Child process */
ensure_root();
#ifdef __FreeBSD__
execlp("umount", "umount", mp, NULL);
#else
execlp("umount", "umount", "-l", mp, NULL);
#endif
/* execlp failed */
errsv = errno;
fprintf(stderr, "Failed to umount with error: %s\n", strerror(errsv));

View File

@@ -329,3 +329,25 @@ class NEXTBOOK(USBMS):
f.write(metadata.thumbnail[-1])
'''
class MOOVYBOOK(USBMS):
name = 'Moovybook device interface'
gui_name = 'Moovybook'
description = _('Communicate with the Moovybook Reader')
author = 'Kovid Goyal'
supported_platforms = ['windows', 'osx', 'linux']
# Ordered list of supported formats
FORMATS = ['epub', 'txt', 'pdf']
VENDOR_ID = [0x1cae]
PRODUCT_ID = [0x9b08]
BCD = [0x02]
EBOOK_DIR_MAIN = ''
SUPPORTS_SUB_DIRS = True
def get_main_ebook_dir(self, for_upload=False):
return 'Books' if for_upload else self.EBOOK_DIR_MAIN

View File

@@ -17,7 +17,7 @@ from itertools import repeat
from calibre.devices.interface import DevicePlugin
from calibre.devices.errors import DeviceError, FreeSpaceError
from calibre.devices.usbms.deviceconfig import DeviceConfig
from calibre.constants import iswindows, islinux, isosx, isfreebsd, plugins
from calibre.utils.filenames import ascii_filename as sanitize, shorten_components_to
if isosx:
@@ -701,7 +701,152 @@ class Device(DeviceConfig, DevicePlugin):
self._card_a_prefix = self._card_b_prefix
self._card_b_prefix = None
# ------------------------------------------------------
#
# open for FreeBSD
# find the device node or nodes that match the S/N we already have from the scanner
# and attempt to mount each one
# 1. get list of disk devices from sysctl
# 2. compare that list with the one from camcontrol
# 3. and see if it has a matching s/n
# 4. find any partitions/slices associated with each node
# 5. attempt to mount, using calibre-mount-helper, each one
# 6. when finished, we have a list of mount points and associated device nodes
#
def open_freebsd(self):
# this gives us access to the S/N, etc. of the reader that the scanner has found
# and the match routines for some of that data, like s/n, vendor ID, etc.
d=self.detected_device
if not d.serial:
raise DeviceError("Device has no S/N. Can't continue")
devs={}
di=0
ndevs=4 # number of possible devices per reader (main, carda, cardb, launcher)
#get list of disk devices
p=subprocess.Popen(["sysctl", "kern.disks"], stdout=subprocess.PIPE)
kdsks=subprocess.Popen(["sed", "s/kern.disks: //"], stdin=p.stdout, stdout=subprocess.PIPE).communicate()[0]
p.stdout.close()
#print kdsks
for dvc in kdsks.split():
# for each one that's also in the list of cam devices ...
p=subprocess.Popen(["camcontrol", "devlist"], stdout=subprocess.PIPE)
devmatch=subprocess.Popen(["grep", dvc], stdin=p.stdout, stdout=subprocess.PIPE).communicate()[0]
p.stdout.close()
if devmatch:
#print "Checking ", devmatch
# ... see if we can get a S/N from the actual device node
sn=subprocess.Popen(["camcontrol", "inquiry", dvc, "-S"], stdout=subprocess.PIPE).communicate()[0]
sn=sn[0:-1] # drop the trailing newline
#print "S/N = ", sn
if sn and d.match_serial(sn):
# we have a matching s/n, record this device node
#print "match found: ", dvc
devs[di]=dvc
di += 1
# sort the list of devices
for i in range(1,ndevs+1):
for j in reversed(range(1,i)):
if devs[j-1] > devs[j]:
x=devs[j-1]
devs[j-1]=devs[j]
devs[j]=x
#print devs
# now we need to see if any of these have slices/partitions
mtd=0
label="READER" # could use something more unique, like S/N or productID...
cmd = '/usr/local/bin/calibre-mount-helper'
cmd = [cmd, 'mount']
for i in range(0,ndevs):
cmd2="ls /dev/"+devs[i]+"*"
p=subprocess.Popen(cmd2, shell=True, stdout=subprocess.PIPE)
devs[i]=subprocess.Popen(["cut", "-d", "/", "-f" "3"], stdin=p.stdout, stdout=subprocess.PIPE).communicate()[0]
p.stdout.close()
# try all the nodes to see what we can mount
for dev in devs[i].split():
mp='/media/'+label+'-'+dev
#print "trying ", dev, "on", mp
try:
p = subprocess.Popen(cmd + ["/dev/"+dev, mp])
except OSError:
raise DeviceError(_('Could not find mount helper: %s.')%cmd[0])
while p.poll() is None:
time.sleep(0.1)
if p.returncode == 0:
#print " mounted", dev
if i == 0:
self._main_prefix = mp
self._main_dev = "/dev/"+dev
#print "main = ", self._main_dev, self._main_prefix
if i == 1:
self._card_a_prefix = mp
self._card_a_dev = "/dev/"+dev
#print "card a = ", self._card_a_dev, self._card_a_prefix
if i == 2:
self._card_b_prefix = mp
self._card_b_dev = "/dev/"+dev
#print "card b = ", self._card_b_dev, self._card_b_prefix
mtd += 1
break
if mtd > 0:
return True
else:
return False
#
# ------------------------------------------------------
#
# this one is pretty simple:
# just umount each of the previously
# mounted filesystems, using the mount helper
#
def eject_freebsd(self):
cmd = '/usr/local/bin/calibre-mount-helper'
cmd = [cmd, 'eject']
if self._main_prefix:
#print "umount main:", cmd, self._main_dev, self._main_prefix
try:
p = subprocess.Popen(cmd + [self._main_dev, self._main_prefix])
except OSError:
raise DeviceError(
_('Could not find mount helper: %s.')%cmd[0])
while p.poll() is None:
time.sleep(0.1)
if self._card_a_prefix:
#print "umount card a:", cmd, self._card_a_dev, self._card_a_prefix
try:
p = subprocess.Popen(cmd + [self._card_a_dev, self._card_a_prefix])
except OSError:
raise DeviceError(
_('Could not find mount helper: %s.')%cmd[0])
while p.poll() is None:
time.sleep(0.1)
if self._card_b_prefix:
#print "umount card b:", cmd, self._card_b_dev, self._card_b_prefix
try:
p = subprocess.Popen(cmd + [self._card_b_dev, self._card_b_prefix])
except OSError:
raise DeviceError(
_('Could not find mount helper: %s.')%cmd[0])
while p.poll() is None:
time.sleep(0.1)
self._main_prefix = None
self._card_a_prefix = None
self._card_b_prefix = None
# ------------------------------------------------------
def open(self, library_uuid):
time.sleep(5)
@@ -712,6 +857,14 @@ class Device(DeviceConfig, DevicePlugin):
except DeviceError:
time.sleep(7)
self.open_linux()
if isfreebsd:
self._main_dev = self._card_a_dev = self._card_b_dev = None
try:
self.open_freebsd()
except DeviceError:
subprocess.Popen(["camcontrol", "rescan", "all"])
time.sleep(2)
self.open_freebsd()
if iswindows:
try:
self.open_windows()
@@ -800,6 +953,11 @@ class Device(DeviceConfig, DevicePlugin):
self.eject_linux()
except:
pass
if isfreebsd:
try:
self.eject_freebsd()
except:
pass
if iswindows:
try:
self.eject_windows()
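The FreeBSD open/eject code above repeats the same launch-then-poll pattern for each device node: spawn the mount helper with `Popen`, busy-wait on `poll()`, and turn a missing binary into a `DeviceError`. That pattern can be factored into one helper; the function name is illustrative, and `RuntimeError` stands in for the driver's `DeviceError`:

```python
import subprocess
import sys
import time

def run_helper(cmd, args):
    """Launch an external helper and wait for it, polling as the driver does."""
    try:
        p = subprocess.Popen([cmd] + args)
    except OSError:
        # The driver raises DeviceError here; RuntimeError is a stand-in
        raise RuntimeError('Could not find mount helper: %s' % cmd)
    while p.poll() is None:  # poll loop, matching the original 0.1s cadence
        time.sleep(0.1)
    return p.returncode

# Use the Python interpreter itself as a trivially available "helper"
rc = run_helper(sys.executable, ['-c', 'pass'])
```

A returncode of 0 corresponds to the driver's successful-mount branch; the original code checks `p.returncode == 0` before recording the mount point.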

View File

@@ -54,7 +54,7 @@ cpalmdoc_decompress(PyObject *self, PyObject *args) {
// Map chars to bytes
for (j = 0; j < input_len; j++)
input[j] = (_input[j] < 0) ? _input[j]+256 : _input[j];
output = (char *)PyMem_Malloc(sizeof(char)*(MAX(BUFFER, 8*input_len)));
if (output == NULL) return PyErr_NoMemory();
while (i < input_len) {

View File

@@ -86,6 +86,8 @@ CALIBRE_METADATA_FIELDS = frozenset([
# a dict of user category names, where the value is a list of item names
# from the book that are in that category
'user_categories',
# a dict of author to an associated hyperlink
'author_link_map',
]
)

View File

@@ -34,6 +34,7 @@ NULL_VALUES = {
'authors' : [_('Unknown')],
'title' : _('Unknown'),
'user_categories' : {},
'author_link_map' : {},
'language' : 'und'
}

View File

@@ -474,7 +474,7 @@ def serialize_user_metadata(metadata_elem, all_user_metadata, tail='\n'+(' '*8))
metadata_elem.append(meta)
def dump_dict(cats):
if not cats:
cats = {}
from calibre.ebooks.metadata.book.json_codec import object_to_unicode
@@ -537,8 +537,9 @@ class OPF(object): # {{{
formatter=parse_date, renderer=isoformat)
user_categories = MetadataField('user_categories', is_dc=False,
formatter=json.loads,
renderer=dump_dict)
author_link_map = MetadataField('author_link_map', is_dc=False,
formatter=json.loads, renderer=dump_dict)
def __init__(self, stream, basedir=os.getcwdu(), unquote_urls=True,
populate_spine=True):
@@ -1039,7 +1040,7 @@ class OPF(object): # {{{
for attr in ('title', 'authors', 'author_sort', 'title_sort',
'publisher', 'series', 'series_index', 'rating',
'isbn', 'tags', 'category', 'comments',
'pubdate', 'user_categories', 'author_link_map'):
val = getattr(mi, attr, None)
if val is not None and val != [] and val != (None, None):
setattr(self, attr, val)
@@ -1336,6 +1337,8 @@ def metadata_to_opf(mi, as_string=True):
for tag in mi.tags:
factory(DC('subject'), tag)
meta = lambda n, c: factory('meta', name='calibre:'+n, content=c)
if getattr(mi, 'author_link_map', None) is not None:
meta('author_link_map', dump_dict(mi.author_link_map))
if mi.series:
meta('series', mi.series)
if mi.series_index is not None:
@@ -1349,7 +1352,7 @@ def metadata_to_opf(mi, as_string=True):
if mi.title_sort:
meta('title_sort', mi.title_sort)
if mi.user_categories:
meta('user_categories', dump_dict(mi.user_categories))
serialize_user_metadata(metadata, mi.get_all_user_metadata(False))

View File

@@ -957,7 +957,10 @@ def get_metadata(stream):
return get_metadata(stream)
from calibre.utils.logging import Log
log = Log()
try:
mi = MetaInformation(os.path.basename(stream.name), [_('Unknown')])
except:
mi = MetaInformation(_('Unknown'), [_('Unknown')])
mh = MetadataHeader(stream, log)
if mh.title and mh.title != _('Unknown'):
mi.title = mh.title

View File

@@ -83,13 +83,14 @@ gprefs.defaults['tags_browser_partition_method'] = 'first letter'
gprefs.defaults['tags_browser_collapse_at'] = 100
gprefs.defaults['edit_metadata_single_layout'] = 'default'
gprefs.defaults['book_display_fields'] = [
('title', False), ('authors', True), ('formats', True),
('series', True), ('identifiers', True), ('tags', True),
('path', True), ('publisher', False), ('rating', False),
('author_sort', False), ('sort', False), ('timestamp', False),
('uuid', False), ('comments', True), ('id', False), ('pubdate', False),
('last_modified', False), ('size', False),
]
gprefs.defaults['default_author_link'] = 'http://en.wikipedia.org/w/index.php?search={author}'
# }}}

View File

@@ -260,7 +260,8 @@ class ChooseLibraryAction(InterfaceAction):
'The files remain on your computer, if you want '
'to delete them, you will have to do so manually.') % loc,
show=True)
if os.path.exists(loc):
open_local_file(loc)
def backup_status(self, location):
dirty_text = 'no'

View File

@@ -38,3 +38,6 @@ class ShowQuickviewAction(InterfaceAction):
Quickview(self.gui, self.gui.library_view, index)
self.current_instance.show()
def library_changed(self, db):
if self.current_instance and not self.current_instance.is_closed:
self.current_instance.set_database(db)

View File

@@ -5,7 +5,6 @@ __license__ = 'GPL v3'
__copyright__ = '2010, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
from PyQt4.Qt import (QPixmap, QSize, QWidget, Qt, pyqtSignal, QUrl,
QPropertyAnimation, QEasingCurve, QApplication, QFontInfo,
QSizePolicy, QPainter, QRect, pyqtProperty, QLayout, QPalette, QMenu)
@@ -23,6 +22,7 @@ from calibre.library.comments import comments_to_html
from calibre.gui2 import (config, open_local_file, open_url, pixmap_to_data,
gprefs)
from calibre.utils.icu import sort_key
from calibre.utils.formatter import EvalFormatter
def render_html(mi, css, vertical, widget, all_fields=False): # {{{
table = render_data(mi, all_fields=all_fields,
@@ -98,6 +98,14 @@ def render_data(mi, use_roman_numbers=True, all_fields=False):
val = force_unicode(val)
ans.append((field,
u'<td class="comments" colspan="2">%s</td>'%comments_to_html(val)))
elif metadata['datatype'] == 'composite' and \
metadata['display'].get('contains_html', False):
val = getattr(mi, field)
if val:
val = force_unicode(val)
ans.append((field,
u'<td class="title">%s</td><td>%s</td>'%
(name, comments_to_html(val))))
elif field == 'path':
if mi.path:
path = force_unicode(mi.path, filesystem_encoding)
@@ -121,6 +129,27 @@ def render_data(mi, use_roman_numbers=True, all_fields=False):
if links:
ans.append((field, u'<td class="title">%s</td><td>%s</td>'%(
_('Ids')+':', links)))
elif field == 'authors' and not isdevice:
authors = []
formatter = EvalFormatter()
for aut in mi.authors:
if mi.author_link_map[aut]:
link = mi.author_link_map[aut]
elif gprefs.get('default_author_link'):
vals = {'author': aut.replace(' ', '+')}
try:
vals['author_sort'] = mi.author_sort_map[aut].replace(' ', '+')
except:
vals['author_sort'] = aut.replace(' ', '+')
link = formatter.safe_format(
gprefs.get('default_author_link'), vals, '', vals)
if link:
link = prepare_string_for_xml(link)
authors.append(u'<a href="%s">%s</a>'%(link, aut))
else:
authors.append(aut)
ans.append((field, u'<td class="title">%s</td><td>%s</td>'%(name,
u' & '.join(authors))))
else:
val = mi.format_field(field)[-1]
if val is None:
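The authors branch above fills the `default_author_link` template (default `http://en.wikipedia.org/w/index.php?search={author}`) with the plus-escaped author name, preferring a per-author link when one is set. The substitution itself can be sketched with a plain string replace; the real code goes through `EvalFormatter`, so the helper name here is illustrative:

```python
def author_url(template, author, author_link_map=None):
    """Resolve an author link: per-author override first, then the template."""
    link = (author_link_map or {}).get(author)
    if link:
        return link
    # '{author}' is replaced with the plus-escaped author name
    return template.replace('{author}', author.replace(' ', '+'))

url = author_url('http://en.wikipedia.org/w/index.php?search={author}',
                 'Charles Stross')
```

The per-author override coming first matches the precedence in the hunk: `mi.author_link_map[aut]` wins over the global preference.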

View File

@@ -4,10 +4,11 @@ __docformat__ = 'restructuredtext en'
__license__ = 'GPL v3'
from PyQt4.Qt import (Qt, QDialog, QTableWidgetItem, QAbstractItemView, QIcon,
QDialogButtonBox, QFrame, QLabel, QTimer, QMenu, QApplication,
QByteArray)
from calibre.ebooks.metadata import author_to_author_sort
from calibre.gui2 import error_dialog, gprefs
from calibre.gui2.dialogs.edit_authors_dialog_ui import Ui_EditAuthorsDialog
from calibre.utils.icu import sort_key
@@ -20,7 +21,7 @@ class tableItem(QTableWidgetItem):
class EditAuthorsDialog(QDialog, Ui_EditAuthorsDialog):
def __init__(self, parent, db, id_to_select, select_sort, select_link):
QDialog.__init__(self, parent)
Ui_EditAuthorsDialog.__init__(self)
self.setupUi(self)
@@ -29,11 +30,19 @@ class EditAuthorsDialog(QDialog, Ui_EditAuthorsDialog):
self.setWindowFlags(self.windowFlags()&(~Qt.WindowContextHelpButtonHint))
self.setWindowIcon(icon)
try:
self.table_column_widths = \
gprefs.get('manage_authors_table_widths', None)
geom = gprefs.get('manage_authors_dialog_geometry', bytearray(''))
self.restoreGeometry(QByteArray(geom))
except:
pass
self.buttonBox.accepted.connect(self.accepted)
# Set up the column headings
self.table.setSelectionMode(QAbstractItemView.SingleSelection)
self.table.setColumnCount(3)
self.down_arrow_icon = QIcon(I('arrow-down.png'))
self.up_arrow_icon = QIcon(I('arrow-up.png'))
self.blank_icon = QIcon(I('blank.png'))
@@ -43,26 +52,35 @@ class EditAuthorsDialog(QDialog, Ui_EditAuthorsDialog):
self.aus_col = QTableWidgetItem(_('Author sort'))
self.table.setHorizontalHeaderItem(1, self.aus_col)
self.aus_col.setIcon(self.up_arrow_icon)
self.aul_col = QTableWidgetItem(_('Link'))
self.table.setHorizontalHeaderItem(2, self.aul_col)
self.aul_col.setIcon(self.blank_icon)
# Add the data
self.authors = {}
auts = db.get_authors_with_ids()
self.table.setRowCount(len(auts))
select_item = None
for row, (id, author, sort, link) in enumerate(auts):
author = author.replace('|', ',')
self.authors[id] = (author, sort, link)
aut = tableItem(author)
aut.setData(Qt.UserRole, id)
sort = tableItem(sort)
link = tableItem(link)
self.table.setItem(row, 0, aut)
self.table.setItem(row, 1, sort)
self.table.setItem(row, 2, link)
if id == id_to_select:
if select_sort:
select_item = sort
elif select_link:
select_item = link
else:
select_item = aut
self.table.resizeColumnsToContents()
if self.table.columnWidth(2) < 200:
self.table.setColumnWidth(2, 200)
# set up the cellChanged signal only after the table is filled
self.table.cellChanged.connect(self.cell_changed)
@@ -115,6 +133,28 @@ class EditAuthorsDialog(QDialog, Ui_EditAuthorsDialog):
self.table.setContextMenuPolicy(Qt.CustomContextMenu)
self.table.customContextMenuRequested.connect(self.show_context_menu)
def save_state(self):
self.table_column_widths = []
for c in range(0, self.table.columnCount()):
self.table_column_widths.append(self.table.columnWidth(c))
gprefs['manage_authors_table_widths'] = self.table_column_widths
gprefs['manage_authors_dialog_geometry'] = bytearray(self.saveGeometry())
def resizeEvent(self, *args):
QDialog.resizeEvent(self, *args)
if self.table_column_widths is not None:
for c,w in enumerate(self.table_column_widths):
self.table.setColumnWidth(c, w)
else:
# the vertical scroll bar might not be rendered, so might not yet
# have a width. Assume 25. Not a problem because user-changed column
# widths will be remembered
w = self.table.width() - 25 - self.table.verticalHeader().width()
w /= self.table.columnCount()
for c in range(0, self.table.columnCount()):
self.table.setColumnWidth(c, w)
self.save_state()
def show_context_menu(self, point):
self.context_item = self.table.itemAt(point)
case_menu = QMenu(_('Change Case'))
@@ -231,14 +271,16 @@ class EditAuthorsDialog(QDialog, Ui_EditAuthorsDialog):
self.auth_col.setIcon(self.blank_icon)
def accepted(self):
self.save_state()
self.result = [] self.result = []
for row in range(0,self.table.rowCount()): for row in range(0,self.table.rowCount()):
id = self.table.item(row, 0).data(Qt.UserRole).toInt()[0] id = self.table.item(row, 0).data(Qt.UserRole).toInt()[0]
aut = unicode(self.table.item(row, 0).text()).strip() aut = unicode(self.table.item(row, 0).text()).strip()
sort = unicode(self.table.item(row, 1).text()).strip() sort = unicode(self.table.item(row, 1).text()).strip()
orig_aut,orig_sort = self.authors[id] link = unicode(self.table.item(row, 2).text()).strip()
if orig_aut != aut or orig_sort != sort: orig_aut,orig_sort,orig_link = self.authors[id]
self.result.append((id, orig_aut, aut, sort)) if orig_aut != aut or orig_sort != sort or orig_link != link:
self.result.append((id, orig_aut, aut, sort, link))
def do_recalc_author_sort(self): def do_recalc_author_sort(self):
self.table.cellChanged.disconnect() self.table.cellChanged.disconnect()
@ -276,6 +318,6 @@ class EditAuthorsDialog(QDialog, Ui_EditAuthorsDialog):
c.setText(author_to_author_sort(aut)) c.setText(author_to_author_sort(aut))
item = c item = c
else: else:
item = self.table.item(row, 1) item = self.table.item(row, col)
self.table.setCurrentItem(item) self.table.setCurrentItem(item)
self.table.scrollToItem(item) self.table.scrollToItem(item)

View File

@@ -18,16 +18,29 @@ class TableItem(QTableWidgetItem):
     A QTableWidgetItem that sorts on a separate string and uses ICU rules
     '''

-    def __init__(self, val, sort):
+    def __init__(self, val, sort, idx=0):
         self.sort = sort
+        self.sort_idx = idx
         QTableWidgetItem.__init__(self, val)
         self.setFlags(Qt.ItemIsEnabled|Qt.ItemIsSelectable)

     def __ge__(self, other):
-        return sort_key(self.sort) >= sort_key(other.sort)
+        l = sort_key(self.sort)
+        r = sort_key(other.sort)
+        if l > r:
+            return True
+        if l == r:
+            return self.sort_idx >= other.sort_idx
+        return False

     def __lt__(self, other):
-        return sort_key(self.sort) < sort_key(other.sort)
+        l = sort_key(self.sort)
+        r = sort_key(other.sort)
+        if l < r:
+            return True
+        if l == r:
+            return self.sort_idx < other.sort_idx
+        return False
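The two-key comparison above (primary ICU collation string, integer index as tie-breaker) can be sketched independently of Qt. This is an editorial illustration, not calibre code: `sort_key` is replaced by a plain lower-casing stand-in, and `functools.total_ordering` fills in the remaining comparison operators.

```python
from functools import total_ordering

def sort_key(s):
    # Stand-in for calibre.utils.icu.sort_key: a simple case-folded key.
    return s.lower()

@total_ordering
class Item(object):
    '''Sorts on a separate string, with an integer index as tie-breaker.'''
    def __init__(self, val, sort, idx=0):
        self.val = val
        self.sort = sort
        self.sort_idx = idx

    def __eq__(self, other):
        return (sort_key(self.sort), self.sort_idx) == \
               (sort_key(other.sort), other.sort_idx)

    def __lt__(self, other):
        l, r = sort_key(self.sort), sort_key(other.sort)
        if l != r:
            return l < r
        return self.sort_idx < other.sort_idx

books = [Item('Belisarius [2]', 'Belisarius', 2),
         Item('Belisarius [1]', 'Belisarius', 1),
         Item('Alamut', 'Alamut', 1)]
# Books in the same series come out in series_index order.
print([b.val for b in sorted(books)])
```

This is why the Quickview books table can pass `mi.series_index` as `idx`: rows in the same series sort by their position in the series instead of arbitrarily.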
 class Quickview(QDialog, Ui_Quickview):

@@ -60,6 +73,7 @@ class Quickview(QDialog, Ui_Quickview):
         self.last_search = None
         self.current_column = None
         self.current_item = None
+        self.no_valid_items = False

         self.items.setSelectionMode(QAbstractItemView.SingleSelection)
         self.items.currentTextChanged.connect(self.item_selected)

@@ -95,8 +109,19 @@ class Quickview(QDialog, Ui_Quickview):
         self.search_button.clicked.connect(self.do_search)
         view.model().new_bookdisplay_data.connect(self.book_was_changed)

+    def set_database(self, db):
+        self.db = db
+        self.items.blockSignals(True)
+        self.books_table.blockSignals(True)
+        self.items.clear()
+        self.books_table.setRowCount(0)
+        self.books_table.blockSignals(False)
+        self.items.blockSignals(False)
+
     # search button
     def do_search(self):
+        if self.no_valid_items:
+            return
         if self.last_search is not None:
             self.gui.search.set_search_string(self.last_search)

@@ -110,6 +135,8 @@ class Quickview(QDialog, Ui_Quickview):
     # clicks on the items listWidget
     def item_selected(self, txt):
+        if self.no_valid_items:
+            return
         self.fill_in_books_box(unicode(txt))

     # Given a cell in the library view, display the information
@@ -122,6 +149,7 @@ class Quickview(QDialog, Ui_Quickview):
         # Only show items for categories
         if not self.db.field_metadata[key]['is_category']:
             if self.current_key is None:
+                self.indicate_no_items()
                 return
             key = self.current_key
         self.items_label.setText('{0} ({1})'.format(

@@ -135,6 +163,7 @@ class Quickview(QDialog, Ui_Quickview):
         vals = mi.get(key, None)
         if vals:
+            self.no_valid_items = False
             if not isinstance(vals, list):
                 vals = [vals]
             vals.sort(key=sort_key)

@@ -148,8 +177,19 @@ class Quickview(QDialog, Ui_Quickview):
             self.current_key = key
             self.fill_in_books_box(vals[0])
+        else:
+            self.indicate_no_items()
         self.items.blockSignals(False)

+    def indicate_no_items(self):
+        self.no_valid_items = True
+        self.items.clear()
+        self.items.addItem(QListWidgetItem(_('**No items found**')))
+        self.books_label.setText(_('Click in a column in the library view '
+                                   'to see the information for that book'))
+
     def fill_in_books_box(self, selected_item):
         self.current_item = selected_item
         # Do a bit of fix-up on the items so that the search works.

@@ -163,7 +203,8 @@ class Quickview(QDialog, Ui_Quickview):
                                         self.db.data.search_restriction)
         self.books_table.setRowCount(len(books))
-        self.books_label.setText(_('Books with selected item: {0}').format(len(books)))
+        self.books_label.setText(_('Books with selected item "{0}": {1}').
+                                 format(selected_item, len(books)))

         select_item = None
         self.books_table.setSortingEnabled(False)

@@ -185,7 +226,7 @@ class Quickview(QDialog, Ui_Quickview):
             series = mi.format_field('series')[1]
             if series is None:
                 series = ''
-            a = TableItem(series, series)
+            a = TableItem(series, mi.series, mi.series_index)
             a.setToolTip(tt)
             self.books_table.setItem(row, 2, a)
             self.books_table.setRowHeight(row, self.books_table_row_height)

@@ -213,6 +254,8 @@ class Quickview(QDialog, Ui_Quickview):
         self.save_state()

     def book_doubleclicked(self, row, column):
+        if self.no_valid_items:
+            return
         book_id = self.books_table.item(row, 0).data(Qt.UserRole).toInt()[0]
         self.view.select_rows([book_id])
         modifiers = int(QApplication.keyboardModifiers())

View File

@@ -57,19 +57,6 @@
         </property>
        </widget>
       </item>
-      <item row="2" column="1">
-       <spacer>
-        <property name="orientation">
-         <enum>Qt::Vertical</enum>
-        </property>
-        <property name="sizeHint" stdset="0">
-         <size>
-          <width>0</width>
-          <height>0</height>
-         </size>
-        </property>
-       </spacer>
-      </item>
       <item row="3" column="0" colspan="2">
        <layout class="QHBoxLayout">
         <item>

View File

@@ -54,7 +54,7 @@ class DBRestore(QDialog):
     def reject(self):
         self.rejected = True
         self.restorer.progress_callback = lambda x, y: x
-        QDialog.rejecet(self)
+        QDialog.reject(self)

     def update(self):
         if self.restorer.is_alive():

View File

@@ -51,6 +51,9 @@ class BooksView(QTableView): # {{{
     def __init__(self, parent, modelcls=BooksModel, use_edit_metadata_dialog=True):
         QTableView.__init__(self, parent)

+        if not tweaks['horizontal_scrolling_per_column']:
+            self.setHorizontalScrollMode(self.ScrollPerPixel)
+
         self.setEditTriggers(self.EditKeyPressed)
         if tweaks['doubleclick_on_library_view'] == 'edit_cell':
             self.setEditTriggers(self.DoubleClicked|self.editTriggers())

@@ -110,6 +113,7 @@ class BooksView(QTableView): # {{{
         self.column_header.sectionMoved.connect(self.save_state)
         self.column_header.setContextMenuPolicy(Qt.CustomContextMenu)
         self.column_header.customContextMenuRequested.connect(self.show_column_header_context_menu)
+        self.column_header.sectionResized.connect(self.column_resized, Qt.QueuedConnection)
         # }}}

         self._model.database_changed.connect(self.database_changed)

@@ -214,6 +218,9 @@ class BooksView(QTableView): # {{{

             self.column_header_context_menu.addSeparator()
+            self.column_header_context_menu.addAction(
+                _('Shrink column if it is too wide to fit'),
+                partial(self.resize_column_to_fit, column=self.column_map[idx]))
             self.column_header_context_menu.addAction(
                 _('Restore default layout'),
                 partial(self.column_header_context_handler,

@@ -235,13 +242,8 @@ class BooksView(QTableView): # {{{
             self.selected_ids = [idc(r) for r in selected_rows]

     def sorting_done(self, indexc):
-        if self.selected_ids:
-            indices = [self.model().index(indexc(i), 0) for i in
-                    self.selected_ids]
-            sm = self.selectionModel()
-            for idx in indices:
-                sm.select(idx, sm.Select|sm.Rows)
-            self.scroll_to_row(indices[0].row())
+        self.select_rows(self.selected_ids, using_ids=True, change_current=True,
+            scroll=True)
         self.selected_ids = []

     def sort_by_named_field(self, field, order, reset=True):

@@ -456,7 +458,9 @@ class BooksView(QTableView): # {{{
                 traceback.print_exc()
             old_state['sort_history'] = sh

+        self.column_header.blockSignals(True)
         self.apply_state(old_state)
+        self.column_header.blockSignals(False)

         # Resize all rows to have the correct height
         if self.model().rowCount(QModelIndex()) > 0:

@@ -465,6 +469,19 @@ class BooksView(QTableView): # {{{
         self.was_restored = True

+    def resize_column_to_fit(self, column):
+        col = self.column_map.index(column)
+        self.column_resized(col, self.columnWidth(col), self.columnWidth(col))
+
+    def column_resized(self, col, old_size, new_size):
+        # arbitrary: scroll bar + header + some
+        max_width = self.width() - (self.verticalScrollBar().width() +
+                                    self.verticalHeader().width() + 10)
+        if new_size > max_width:
+            self.column_header.blockSignals(True)
+            self.setColumnWidth(col, max_width)
+            self.column_header.blockSignals(False)
+
     # }}}

     # Initialization/Delegate Setup {{{
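The width clamp in `column_resized` above is simple arithmetic and can be checked in isolation. This is an editorial sketch with made-up numbers, not calibre code; the 10-pixel margin is the "some" the diff's comment refers to:

```python
def clamp_column_width(new_size, view_width, scrollbar_w, header_w):
    # Never let one column grow wider than the visible area minus the
    # vertical scroll bar, the row-number header and a small margin.
    max_width = view_width - (scrollbar_w + header_w + 10)
    return min(new_size, max_width)

# A 1200px column in a 1000px-wide view gets clamped; a 200px one is left alone.
print(clamp_column_width(1200, view_width=1000, scrollbar_w=16, header_w=40))
print(clamp_column_width(200, view_width=1000, scrollbar_w=16, header_w=40))
```

The real slot additionally blocks the header's `sectionResized` signal while it calls `setColumnWidth`, so the correction does not re-trigger itself.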

View File

@@ -1092,11 +1092,12 @@ class IdentifiersEdit(QLineEdit): # {{{
             for x in parts:
                 c = x.split(':')
                 if len(c) > 1:
-                    if c[0] == 'isbn':
+                    itype = c[0].lower()
+                    if itype == 'isbn':
                         v = check_isbn(c[1])
                         if v is not None:
                             c[1] = v
-                    ans[c[0]] = c[1]
+                    ans[itype] = c[1]
             return ans
         def fset(self, val):
             if not val:

@@ -1112,7 +1113,7 @@ class IdentifiersEdit(QLineEdit): # {{{
                 if v is not None:
                     val[k] = v
             ids = sorted(val.iteritems(), key=keygen)
-            txt = ', '.join(['%s:%s'%(k, v) for k, v in ids])
+            txt = ', '.join(['%s:%s'%(k.lower(), v) for k, v in ids])
             self.setText(txt.strip())
             self.setCursorPosition(0)
         return property(fget=fget, fset=fset)
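The parsing that this hunk makes case-insensitive can be sketched on its own. This is an editorial illustration, not the calibre property: `check_isbn` is omitted, and the `.strip()` calls are an addition for robustness:

```python
def parse_identifiers(text):
    # Split comma-separated "type:value" pairs, lower-casing the identifier
    # type so that "ISBN:..." and "isbn:..." end up under the same key.
    ans = {}
    for part in text.split(','):
        c = part.split(':')
        if len(c) > 1:
            itype = c[0].strip().lower()
            ans[itype] = c[1].strip()
    return ans

print(parse_identifiers('ISBN:9780316038379, Amazon:B003JTHWKU'))
```

As in the original, a value that itself contains a colon would be truncated at the first colon; the hunk only changes the casing behaviour, not that limitation.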

View File

@@ -127,6 +127,8 @@ class CreateCustomColumn(QDialog, Ui_QCreateCustomColumn):
                 self.composite_sort_by.setCurrentIndex(sb)
                 self.composite_make_category.setChecked(
                                     c['display'].get('make_category', False))
+                self.composite_contains_html.setChecked(
+                                    c['display'].get('contains_html', False))
             elif ct == 'enumeration':
                 self.enum_box.setText(','.join(c['display'].get('enum_values', [])))
                 self.enum_colors.setText(','.join(c['display'].get('enum_colors', [])))

@@ -141,6 +143,21 @@ class CreateCustomColumn(QDialog, Ui_QCreateCustomColumn):
         all_colors = [unicode(s) for s in list(QColor.colorNames())]
         self.enum_colors_label.setToolTip('<p>' + ', '.join(all_colors) + '</p>')

+        self.composite_contains_html.setToolTip('<p>' +
+            _('If checked, this column will be displayed as HTML in '
+              'book details and the content server. This can be used to '
+              'construct links with the template language. For example, '
+              'the template '
+              '<pre>&lt;big&gt;&lt;b&gt;{title}&lt;/b&gt;&lt;/big&gt;'
+              '{series:| [|}{series_index:| [|]]}</pre>'
+              'will create a field displaying the title in bold large '
+              'characters, along with the series, for example <br>"<big><b>'
+              'An Oblique Approach</b></big> [Belisarius [1]]". The template '
+              '<pre>&lt;a href="http://www.beam-ebooks.de/ebook/{identifiers'
+              ':select(beam)}"&gt;Beam book&lt;/a&gt;</pre> '
+              'will generate a link to the book on the Beam ebooks site.')
+            + '</p>')
+
         self.exec_()

     def shortcut_activated(self, url):

@@ -179,7 +196,7 @@ class CreateCustomColumn(QDialog, Ui_QCreateCustomColumn):
             getattr(self, 'date_format_'+x).setVisible(col_type == 'datetime')
             getattr(self, 'number_format_'+x).setVisible(col_type in ['int', 'float'])
         for x in ('box', 'default_label', 'label', 'sort_by', 'sort_by_label',
-                  'make_category'):
+                  'make_category', 'contains_html'):
             getattr(self, 'composite_'+x).setVisible(col_type in ['composite', '*composite'])
         for x in ('box', 'default_label', 'label', 'colors', 'colors_label'):
             getattr(self, 'enum_'+x).setVisible(col_type == 'enumeration')

@@ -257,6 +274,7 @@ class CreateCustomColumn(QDialog, Ui_QCreateCustomColumn):
                 'composite_sort': ['text', 'number', 'date', 'bool']
                             [self.composite_sort_by.currentIndex()],
                 'make_category': self.composite_make_category.isChecked(),
+                'contains_html': self.composite_contains_html.isChecked(),
             }
         elif col_type == 'enumeration':
             if not unicode(self.enum_box.text()).strip():

View File

@@ -294,6 +294,13 @@ and end with &lt;code&gt;}&lt;/code&gt; You can have text before and after the f
         </property>
        </widget>
       </item>
+      <item>
+       <widget class="QCheckBox" name="composite_contains_html">
+        <property name="text">
+         <string>Show as HTML in book details</string>
+        </property>
+       </widget>
+      </item>
       <item>
        <spacer name="horizontalSpacer_24">
         <property name="sizePolicy">

View File

@@ -138,6 +138,7 @@ class ConfigWidget(ConfigWidgetBase, Ui_Form):
                    (_('Partitioned'), 'partition')]
         r('tags_browser_partition_method', gprefs, choices=choices)
         r('tags_browser_collapse_at', gprefs)
+        r('default_author_link', gprefs)

         choices = set([k for k in db.field_metadata.all_field_keys()
                       if db.field_metadata[k]['is_category'] and

View File

@@ -192,7 +192,7 @@
       <string>Book Details</string>
      </attribute>
      <layout class="QGridLayout" name="gridLayout_12">
-      <item row="0" column="0" rowspan="2">
+      <item row="1" column="0" rowspan="2">
       <widget class="QGroupBox" name="groupBox">
        <property name="title">
         <string>Select displayed metadata</string>

@@ -243,6 +243,31 @@
        </layout>
       </widget>
      </item>
+     <item row="0" column="0">
+      <layout class="QHBoxLayout">
+       <item>
+        <widget class="QLabel" name="label">
+         <property name="text">
+          <string>Default author link template:</string>
+         </property>
+         <property name="buddy">
+          <cstring>opt_default_author_link</cstring>
+         </property>
+        </widget>
+       </item>
+       <item>
+        <widget class="QLineEdit" name="opt_default_author_link">
+         <property name="toolTip">
+          <string>&lt;p&gt;Enter a template to be used to create a link for
+an author in the books information dialog. This template will
+be used when no link has been provided for the author using
+Manage Authors. You can use the values {author} and
+{author_sort}, and any template function.</string>
+         </property>
+        </widget>
+       </item>
+      </layout>
+     </item>
     <item row="0" column="1">
      <widget class="QCheckBox" name="opt_use_roman_numerals_for_series_number">
       <property name="text">

View File

@@ -357,7 +357,6 @@ class Preferences(QMainWindow):
                 bytearray(self.saveGeometry()))
         if self.committed:
             self.gui.must_restart_before_config = self.must_restart
-            self.gui.tags_view.set_new_model() # in case columns changed
             self.gui.tags_view.recount()
             self.gui.create_device_menu()
             self.gui.set_device_menu_items_state(bool(self.gui.device_connected))

View File

@@ -31,7 +31,7 @@ class SaveTemplate(QWidget, Ui_Form):
                     (var, FORMAT_ARG_DESCS[var]))
         rows.append(u'<tr><td>%s&nbsp;</td><td>&nbsp;</td><td>%s</td></tr>'%(
             _('Any custom field'),
-            _('The lookup name of any custom field. These names begin with "#")')))
+            _('The lookup name of any custom field (these names begin with "#").')))
         table = u'<table>%s</table>'%(u'\n'.join(rows))
         self.template_variables.setText(table)

View File

@@ -173,7 +173,7 @@ class ConfigWidget(ConfigWidgetBase, Ui_Form):
     def refresh_gui(self, gui):
         gui.set_highlight_only_button_icon()
         if self.muc_changed:
-            gui.tags_view.set_new_model()
+            gui.tags_view.recount()
         gui.search.search_as_you_type(config['search_as_you_type'])
         gui.search.do_search()

View File

@@ -126,7 +126,7 @@ class Matches(QAbstractItemModel):
         elif role == Qt.ToolTipRole:
             if col == 0:
                 if is_disabled(result):
-                    return QVariant('<p>' + _('This store is currently diabled and cannot be used in other parts of calibre.') + '</p>')
+                    return QVariant('<p>' + _('This store is currently disabled and cannot be used in other parts of calibre.') + '</p>')
                 else:
                     return QVariant('<p>' + _('This store is currently enabled and can be used in other parts of calibre.') + '</p>')
             elif col == 1:

View File

@@ -7,7 +7,6 @@ __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

 import mimetypes
-import urllib
 from contextlib import closing

 from lxml import etree

@@ -22,7 +21,7 @@ from calibre.gui2.store.web_store_dialog import WebStoreDialog
 from calibre.utils.opensearch.description import Description
 from calibre.utils.opensearch.query import Query

-class OpenSearchStore(StorePlugin):
+class OpenSearchOPDSStore(StorePlugin):

     open_search_url = ''
     web_url = ''

@@ -50,7 +49,7 @@ class OpenSearchStore(StorePlugin):
         oquery = Query(url_template)

         # set up initial values
-        oquery.searchTerms = urllib.quote_plus(query)
+        oquery.searchTerms = query
         oquery.count = max_results
         url = oquery.url()

View File

@@ -22,6 +22,7 @@ from calibre.utils.icu import sort_key
 from calibre.utils.search_query_parser import SearchQueryParser

 def comparable_price(text):
+    text = re.sub(r'[^0-9.,]', '', text)
     if len(text) < 3 or text[-3] not in ('.', ','):
         text += '00'
     text = re.sub(r'\D', '', text)

@@ -293,6 +294,7 @@ class SearchFilter(SearchQueryParser):
         return self.srs

     def get_matches(self, location, query):
+        query = query.strip()
         location = location.lower().strip()
         if location == 'authors':
             location = 'author'
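The `comparable_price` normalization can be exercised on its own. This is an editorial sketch: the hunk above only shows the head of the function, so returning the digit string at the end is an assumption made here for illustration.

```python
import re

def comparable_price(text):
    # Strip currency symbols and other non-numeric characters, pad prices
    # that have no cent part, then keep digits only, so '$4.99', '4,99 EUR'
    # and 'Free' all reduce to comparable digit strings.
    text = re.sub(r'[^0-9.,]', '', text)
    if len(text) < 3 or text[-3] not in ('.', ','):
        text += '00'
    text = re.sub(r'\D', '', text)
    return text  # assumption: the hunk does not show the real return

print(comparable_price('$4.99'), comparable_price('4,99 EUR'),
      comparable_price('Free'))
```

The new first line is what this commit adds: without it, a price like '$4.99' would keep its '$' into the length check and compare inconsistently across stores.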

View File

@@ -22,6 +22,7 @@ from calibre.gui2.store.search.adv_search_builder import AdvSearchBuilderDialog
 from calibre.gui2.store.search.download_thread import SearchThreadPool, \
     CacheUpdateThreadPool
 from calibre.gui2.store.search.search_ui import Ui_Dialog
+from calibre.utils.filenames import ascii_filename

 class SearchDialog(QDialog, Ui_Dialog):

@@ -349,7 +350,9 @@ class SearchDialog(QDialog, Ui_Dialog):
             d = ChooseFormatDialog(self, _('Choose format to download to your library.'), result.downloads.keys())
             if d.exec_() == d.Accepted:
                 ext = d.format()
-                self.gui.download_ebook(result.downloads[ext])
+                fname = result.title + '.' + ext.lower()
+                fname = ascii_filename(fname)
+                self.gui.download_ebook(result.downloads[ext], filename=fname)

     def open_store(self, result):
         self.gui.istores[result.store_name].open(self, result.detail_item, self.open_external.isChecked())
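The sanitization step added above can be sketched as follows. This is a simplified editorial stand-in, not calibre's `ascii_filename`: the real helper transliterates accented characters where it can, whereas this toy version just substitutes them along with filesystem-hostile punctuation.

```python
def ascii_filename(name, substitute='_'):
    # Replace non-ASCII characters and characters that are unsafe in file
    # names before using a store result's title as a download file name.
    out = []
    for ch in name:
        if ord(ch) > 127 or ch in '\\|?*<>":/':
            ch = substitute
        out.append(ch)
    return ''.join(out)

print(ascii_filename('Anna Karénina: A Novel.epub'))
```

The point of the change is that free downloads now land in the library under a readable, filesystem-safe name derived from the book title rather than whatever the download URL happens to end in.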

View File

@@ -6,12 +6,11 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

 from calibre.gui2.store.basic_config import BasicStoreConfig
-from calibre.gui2.store.opensearch_store import OpenSearchStore
+from calibre.gui2.store.opensearch_store import OpenSearchOPDSStore
 from calibre.gui2.store.search_result import SearchResult

-class ArchiveOrgStore(BasicStoreConfig, OpenSearchStore):
+class ArchiveOrgStore(BasicStoreConfig, OpenSearchOPDSStore):

     open_search_url = 'http://bookserver.archive.org/catalog/opensearch.xml'
     web_url = 'http://www.archive.org/details/texts'

@@ -19,7 +18,7 @@ class ArchiveOrgStore(BasicStoreConfig, OpenSearchStore):
     # http://bookserver.archive.org/catalog/

     def search(self, query, max_results=10, timeout=60):
-        for s in OpenSearchStore.search(self, query, max_results, timeout):
+        for s in OpenSearchOPDSStore.search(self, query, max_results, timeout):
             s.detail_item = 'http://www.archive.org/details/' + s.detail_item.split(':')[-1]
             s.price = '$0.00'
             s.drm = SearchResult.DRM_UNLOCKED

@@ -30,6 +29,10 @@ class ArchiveOrgStore(BasicStoreConfig, OpenSearchStore):
         The opensearch feed only returns a subset of formats that are available.
         We want to get a list of all formats that the user can get.
         '''
+        from calibre import browser
+        from contextlib import closing
+        from lxml import html
+
         br = browser()
         with closing(br.open(search_result.detail_item, timeout=timeout)) as nf:
             idata = html.fromstring(nf.read())

View File

@@ -7,7 +7,6 @@ __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

 import random
-import re
 from contextlib import closing

 from lxml import html

@@ -22,20 +21,18 @@ from calibre.gui2.store.search_result import SearchResult
 from calibre.gui2.store.web_store_dialog import WebStoreDialog

 class BNStore(BasicStoreConfig, StorePlugin):

     def open(self, parent=None, detail_item=None, external=False):
-        pub_id = '21000000000352219'
+        pub_id = 'sHa5EXvYOwA'
         # Use Kovid's affiliate id 30% of the time.
         if random.randint(1, 10) in (1, 2, 3):
-            pub_id = '21000000000352583'
+            pub_id = '0dsO3kDu/AU'

-        url = 'http://gan.doubleclick.net/gan_click?lid=41000000028437369&pubid=' + pub_id
+        base_url = 'http://click.linksynergy.com/fs-bin/click?id=%s&subid=&offerid=229293.1&type=10&tmpid=8433&RD_PARM1=' % pub_id
+        url = base_url + 'http%253A%252F%252Fwww.barnesandnoble.com%252F'

         if detail_item:
-            mo = re.search(r'(?<=/)(?P<isbn>\d+)(?=/|$)', detail_item)
-            if mo:
-                isbn = mo.group('isbn')
-                detail_item = 'http://gan.doubleclick.net/gan_click?lid=41000000012871747&pid=' + isbn + '&adurl=' + detail_item + '&pubid=' + pub_id
+            detail_item = base_url + detail_item

         if external or self.config.get('open_external', False):
             open_url(QUrl(url_slash_cleaner(detail_item if detail_item else url)))

@@ -48,27 +45,27 @@ class BNStore(BasicStoreConfig, StorePlugin):
     def search(self, query, max_results=10, timeout=60):
         query = query.replace(' ', '-')
         url = 'http://www.barnesandnoble.com/s/%s?store=ebook&sze=%s' % (query, max_results)

         br = browser()

         counter = max_results
         with closing(br.open(url, timeout=timeout)) as f:
             doc = html.fromstring(f.read())
             for data in doc.xpath('//ul[contains(@class, "result-set")]/li[contains(@class, "result")]'):
                 if counter <= 0:
                     break

                 id = ''.join(data.xpath('.//div[contains(@class, "image")]/a/@href'))
                 if not id:
                     continue

                 cover_url = ''.join(data.xpath('.//div[contains(@class, "image")]//img/@src'))

                 title = ''.join(data.xpath('.//p[@class="title"]//span[@class="name"]/text()'))
                 author = ', '.join(data.xpath('.//ul[@class="contributors"]//li[position()>1]//a/text()'))
                 price = ''.join(data.xpath('.//table[@class="displayed-formats"]//a[@class="subtle"]/text()'))

                 counter -= 1

                 s = SearchResult()
                 s.cover_url = cover_url
                 s.title = title.strip()

View File

@@ -7,10 +7,10 @@ __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

 from calibre.gui2.store.basic_config import BasicStoreConfig
-from calibre.gui2.store.opensearch_store import OpenSearchStore
+from calibre.gui2.store.opensearch_store import OpenSearchOPDSStore
 from calibre.gui2.store.search_result import SearchResult

-class EpubBudStore(BasicStoreConfig, OpenSearchStore):
+class EpubBudStore(BasicStoreConfig, OpenSearchOPDSStore):

     open_search_url = 'http://www.epubbud.com/feeds/opensearch.xml'
     web_url = 'http://www.epubbud.com/'

@@ -18,7 +18,7 @@ class EpubBudStore(BasicStoreConfig, OpenSearchStore):
     # http://www.epubbud.com/feeds/catalog.atom

     def search(self, query, max_results=10, timeout=60):
-        for s in OpenSearchStore.search(self, query, max_results, timeout):
+        for s in OpenSearchOPDSStore.search(self, query, max_results, timeout):
             s.price = '$0.00'
             s.drm = SearchResult.DRM_UNLOCKED
             s.formats = 'EPUB'

View File

@@ -7,10 +7,10 @@ __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

 from calibre.gui2.store.basic_config import BasicStoreConfig
-from calibre.gui2.store.opensearch_store import OpenSearchStore
+from calibre.gui2.store.opensearch_store import OpenSearchOPDSStore
 from calibre.gui2.store.search_result import SearchResult

-class FeedbooksStore(BasicStoreConfig, OpenSearchStore):
+class FeedbooksStore(BasicStoreConfig, OpenSearchOPDSStore):

     open_search_url = 'http://assets0.feedbooks.net/opensearch.xml?t=1253087147'
     web_url = 'http://feedbooks.com/'
@@ -18,7 +18,7 @@ class FeedbooksStore(BasicStoreConfig, OpenSearchStore):
     # http://www.feedbooks.com/catalog

     def search(self, query, max_results=10, timeout=60):
-        for s in OpenSearchStore.search(self, query, max_results, timeout):
+        for s in OpenSearchOPDSStore.search(self, query, max_results, timeout):
             if s.downloads:
                 s.drm = SearchResult.DRM_UNLOCKED
                 s.price = '$0.00'

View File

@@ -6,6 +6,7 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

+import mimetypes
 import urllib
 from contextlib import closing
@@ -23,70 +24,67 @@ from calibre.gui2.store.web_store_dialog import WebStoreDialog
 class GutenbergStore(BasicStoreConfig, StorePlugin):

     def open(self, parent=None, detail_item=None, external=False):
-        url = 'http://m.gutenberg.org/'
-        ext_url = 'http://gutenberg.org/'
+        url = 'http://gutenberg.org/'
+
+        if detail_item:
+            detail_item = url_slash_cleaner(url + detail_item)
+
         if external or self.config.get('open_external', False):
-            if detail_item:
-                ext_url = ext_url + detail_item
-            open_url(QUrl(url_slash_cleaner(ext_url)))
+            open_url(QUrl(detail_item if detail_item else url))
         else:
-            detail_url = None
-            if detail_item:
-                detail_url = url + detail_item
-            d = WebStoreDialog(self.gui, url, parent, detail_url)
+            d = WebStoreDialog(self.gui, url, parent, detail_item)
             d.setWindowTitle(self.name)
             d.set_tags(self.config.get('tags', ''))
             d.exec_()

     def search(self, query, max_results=10, timeout=60):
-        # Gutenberg's website does not allow searching both author and title.
-        # Using a google search so we can search on both fields at once.
-        url = 'http://www.google.com/xhtml?q=site:gutenberg.org+' + urllib.quote_plus(query)
+        url = 'http://m.gutenberg.org/ebooks/search.mobile/?default_prefix=all&sort_order=title&query=' + urllib.quote_plus(query)

         br = browser()

         counter = max_results
         with closing(br.open(url, timeout=timeout)) as f:
             doc = html.fromstring(f.read())
-            for data in doc.xpath('//div[@class="edewpi"]//div[@class="r ld"]'):
+            for data in doc.xpath('//ol[@class="results"]//li[contains(@class, "icon_title")]'):
                 if counter <= 0:
                     break

-                url = ''
-                url_a = data.xpath('div[@class="jd"]/a')
-                if url_a:
-                    url_a = url_a[0]
-                    url = url_a.get('href', None)
-                if url:
-                    url = url.split('u=')[-1].split('&')[0]
-                if '/ebooks/' not in url:
-                    continue
-                id = url.split('/')[-1]
-
-                url_a = html.fromstring(html.tostring(url_a))
-                heading = ''.join(url_a.xpath('//text()'))
-                title, _, author = heading.rpartition('by ')
-                author = author.split('-')[0]
-                price = '$0.00'
+                id = ''.join(data.xpath('./a/@href'))
+                id = id.split('.mobile')[0]
+
+                title = ''.join(data.xpath('.//span[@class="title"]/text()'))
+                author = ''.join(data.xpath('.//span[@class="subtitle"]/text()'))

                 counter -= 1

                 s = SearchResult()
                 s.cover_url = ''
-                s.detail_item = id.strip()
                 s.title = title.strip()
                 s.author = author.strip()
-                s.price = price.strip()
+                s.price = '$0.00'
+                s.detail_item = '/ebooks/' + id.strip()
                 s.drm = SearchResult.DRM_UNLOCKED

                 yield s

     def get_details(self, search_result, timeout):
-        url = 'http://m.gutenberg.org/'
+        url = url_slash_cleaner('http://m.gutenberg.org/' + search_result.detail_item + '.mobile')

         br = browser()
-        with closing(br.open(url + search_result.detail_item, timeout=timeout)) as nf:
-            idata = html.fromstring(nf.read())
-            search_result.formats = ', '.join(idata.xpath('//a[@type!="application/atom+xml"]//span[@class="title"]/text()'))
-        return True
+        with closing(br.open(url, timeout=timeout)) as nf:
+            doc = html.fromstring(nf.read())
+
+            for save_item in doc.xpath('//li[contains(@class, "icon_save")]/a'):
+                type = save_item.get('type')
+                href = save_item.get('href')
+                if type:
+                    ext = mimetypes.guess_extension(type)
+                    if ext:
+                        ext = ext[1:].upper().strip()
+                        search_result.downloads[ext] = href
+
+        search_result.formats = ', '.join(search_result.downloads.keys())
+        return True

View File

@@ -6,89 +6,101 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

-import re
-import urllib
+import mimetypes
 from contextlib import closing

-from lxml import html
+from lxml import etree

-from PyQt4.Qt import QUrl
-
-from calibre import browser, url_slash_cleaner
-from calibre.gui2 import open_url
-from calibre.gui2.store import StorePlugin
+from calibre import browser
 from calibre.gui2.store.basic_config import BasicStoreConfig
+from calibre.gui2.store.opensearch_store import OpenSearchOPDSStore
 from calibre.gui2.store.search_result import SearchResult
-from calibre.gui2.store.web_store_dialog import WebStoreDialog
+from calibre.utils.opensearch.description import Description
+from calibre.utils.opensearch.query import Query

-class ManyBooksStore(BasicStoreConfig, StorePlugin):
+class ManyBooksStore(BasicStoreConfig, OpenSearchOPDSStore):

-    def open(self, parent=None, detail_item=None, external=False):
-        url = 'http://manybooks.net/'
-
-        detail_url = None
-        if detail_item:
-            detail_url = url + detail_item
-
-        if external or self.config.get('open_external', False):
-            open_url(QUrl(url_slash_cleaner(detail_url if detail_url else url)))
-        else:
-            d = WebStoreDialog(self.gui, url, parent, detail_url)
-            d.setWindowTitle(self.name)
-            d.set_tags(self.config.get('tags', ''))
-            d.exec_()
+    open_search_url = 'http://www.manybooks.net/opds/'
+    web_url = 'http://manybooks.net'

     def search(self, query, max_results=10, timeout=60):
-        # ManyBooks website separates results for title and author.
-        # It also doesn't do a clear job of references authors and
-        # secondary titles. Google is also faster.
-        # Using a google search so we can search on both fields at once.
-        url = 'http://www.google.com/xhtml?q=site:manybooks.net+' + urllib.quote_plus(query)
+        '''
+        Manybooks uses a very strange opds feed. The opds
+        main feed is structured like a stanza feed. The
+        search result entries give very little information
+        and requires you to go to a detail link. The detail
+        link has the wrong type specified (text/html instead
+        of application/atom+xml).
+        '''
+        if not hasattr(self, 'open_search_url'):
+            return

-        br = browser()
+        description = Description(self.open_search_url)
+        url_template = description.get_best_template()
+        if not url_template:
+            return
+        oquery = Query(url_template)
+
+        # set up initial values
+        oquery.searchTerms = query
+        oquery.count = max_results
+        url = oquery.url()

         counter = max_results
+        br = browser()
         with closing(br.open(url, timeout=timeout)) as f:
-            doc = html.fromstring(f.read())
-            for data in doc.xpath('//div[@class="edewpi"]//div[@class="r ld"]'):
+            doc = etree.fromstring(f.read())
+            for data in doc.xpath('//*[local-name() = "entry"]'):
                 if counter <= 0:
                     break

-                url = ''
-                url_a = data.xpath('div[@class="jd"]/a')
-                if url_a:
-                    url_a = url_a[0]
-                    url = url_a.get('href', None)
-                if url:
-                    url = url.split('u=')[-1][:-2]
-                if '/titles/' not in url:
-                    continue
-                id = url.split('/')[-1]
-                id = id.strip()
-
-                url_a = html.fromstring(html.tostring(url_a))
-                heading = ''.join(url_a.xpath('//text()'))
-                title, _, author = heading.rpartition('by ')
-                author = author.split('-')[0]
-                price = '$0.00'
-
-                cover_url = ''
-                mo = re.match('^\D+', id)
-                if mo:
-                    cover_name = mo.group()
-                    cover_name = cover_name.replace('etext', '')
-                    cover_id = id.split('.')[0]
-                    cover_url = 'http://www.manybooks.net/images/' + id[0] + '/' + cover_name + '/' + cover_id + '-thumb.jpg'
-
                 counter -= 1

                 s = SearchResult()
-                s.cover_url = cover_url
-                s.title = title.strip()
-                s.author = author.strip()
-                s.price = price.strip()
-                s.detail_item = '/titles/' + id
+
+                detail_links = data.xpath('./*[local-name() = "link" and @type = "text/html"]')
+                if not detail_links:
+                    continue
+                detail_link = detail_links[0]
+                detail_href = detail_link.get('href')
+                if not detail_href:
+                    continue
+
+                s.detail_item = 'http://manybooks.net/titles/' + detail_href.split('tid=')[-1] + '.html'
+
+                # These can have HTML inside of them. We are going to get them again later
+                # just in case.
+                s.title = ''.join(data.xpath('./*[local-name() = "title"]//text()')).strip()
+                s.author = ', '.join(data.xpath('./*[local-name() = "author"]//text()')).strip()
+
+                # Follow the detail link to get the rest of the info.
+                with closing(br.open(detail_href, timeout=timeout/4)) as df:
+                    ddoc = etree.fromstring(df.read())
+                    ddata = ddoc.xpath('//*[local-name() = "entry"][1]')
+                    if ddata:
+                        ddata = ddata[0]
+
+                        # This is the real title and author info we want. We got
+                        # it previously just in case it's not specified here for some reason.
+                        s.title = ''.join(ddata.xpath('./*[local-name() = "title"]//text()')).strip()
+                        s.author = ', '.join(ddata.xpath('./*[local-name() = "author"]//text()')).strip()
+                        if s.author.startswith(','):
+                            s.author = s.author[1:]
+                        if s.author.endswith(','):
+                            s.author = s.author[:-1]
+
+                        s.cover_url = ''.join(ddata.xpath('./*[local-name() = "link" and @rel = "http://opds-spec.org/thumbnail"][1]/@href')).strip()
+
+                        for link in ddata.xpath('./*[local-name() = "link" and @rel = "http://opds-spec.org/acquisition"]'):
+                            type = link.get('type')
+                            href = link.get('href')
+                            if type:
+                                ext = mimetypes.guess_extension(type)
+                                if ext:
+                                    ext = ext[1:].upper().strip()
+                                    s.downloads[ext] = href
+
+                s.price = '$0.00'
                 s.drm = SearchResult.DRM_UNLOCKED
-                s.formts = 'EPUB, PDB (eReader, PalmDoc, zTXT, Plucker, iSilo), FB2, ZIP, AZW, MOBI, PRC, LIT, PKG, PDF, TXT, RB, RTF, LRF, TCR, JAR'
+                s.formats = 'EPUB, PDB (eReader, PalmDoc, zTXT, Plucker, iSilo), FB2, ZIP, AZW, MOBI, PRC, LIT, PKG, PDF, TXT, RB, RTF, LRF, TCR, JAR'

                 yield s
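The new ManyBooks search matches OPDS elements by local name (`local-name() = "entry"`) so the Atom namespace never has to be declared in the XPath. The same namespace-agnostic matching can be sketched with the standard library's ElementTree wildcard; the feed snippet below is an illustrative stand-in, not real ManyBooks output:

```python
import xml.etree.ElementTree as ET

ATOM = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Flatland</title><author><name>Edwin A. Abbott</name></author></entry>
</feed>"""

def entry_titles(xml_text):
    # {*} matches any namespace (Python 3.8+), analogous to the
    # local-name() XPath predicates the lxml-based plugin uses.
    root = ET.fromstring(xml_text)
    return [e.findtext('{*}title') for e in root.findall('{*}entry')]
```

This sidesteps the usual failure mode where `findall('entry')` silently returns nothing because every element is really named `{http://www.w3.org/2005/Atom}entry`.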

View File

@@ -1,84 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import (unicode_literals, division, absolute_import, print_function)
__license__ = 'GPL 3'
__copyright__ = '2011, John Schember <john@nachtimwald.com>'
__docformat__ = 'restructuredtext en'
import urllib2
from contextlib import closing
from lxml import html
from PyQt4.Qt import QUrl
from calibre import browser, url_slash_cleaner
from calibre.gui2 import open_url
from calibre.gui2.store import StorePlugin
from calibre.gui2.store.basic_config import BasicStoreConfig
from calibre.gui2.store.search_result import SearchResult
from calibre.gui2.store.web_store_dialog import WebStoreDialog
class OpenLibraryStore(BasicStoreConfig, StorePlugin):
def open(self, parent=None, detail_item=None, external=False):
url = 'http://openlibrary.org/'
if external or self.config.get('open_external', False):
if detail_item:
url = url + detail_item
open_url(QUrl(url_slash_cleaner(url)))
else:
detail_url = None
if detail_item:
detail_url = url + detail_item
d = WebStoreDialog(self.gui, url, parent, detail_url)
d.setWindowTitle(self.name)
d.set_tags(self.config.get('tags', ''))
d.exec_()
def search(self, query, max_results=10, timeout=60):
url = 'http://openlibrary.org/search?q=' + urllib2.quote(query) + '&has_fulltext=true'
br = browser()
counter = max_results
with closing(br.open(url, timeout=timeout)) as f:
doc = html.fromstring(f.read())
for data in doc.xpath('//div[@id="searchResults"]/ul[@id="siteSearch"]/li'):
if counter <= 0:
break
# Don't include books that don't have downloadable files.
if not data.xpath('boolean(./span[@class="actions"]//span[@class="label" and contains(text(), "Read")])'):
continue
id = ''.join(data.xpath('./span[@class="bookcover"]/a/@href'))
if not id:
continue
cover_url = ''.join(data.xpath('./span[@class="bookcover"]/a/img/@src'))
title = ''.join(data.xpath('.//h3[@class="booktitle"]/a[@class="results"]/text()'))
author = ''.join(data.xpath('.//span[@class="bookauthor"]/a/text()'))
price = '$0.00'
counter -= 1
s = SearchResult()
s.cover_url = cover_url
s.title = title.strip()
s.author = author.strip()
s.price = price
s.detail_item = id.strip()
s.drm = SearchResult.DRM_UNLOCKED
yield s
def get_details(self, search_result, timeout):
url = 'http://openlibrary.org/'
br = browser()
with closing(br.open(url_slash_cleaner(url + search_result.detail_item), timeout=timeout)) as nf:
idata = html.fromstring(nf.read())
search_result.formats = ', '.join(list(set(idata.xpath('//a[contains(@title, "Download")]/text()'))))
return True

View File

@@ -7,10 +7,10 @@ __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

 from calibre.gui2.store.basic_config import BasicStoreConfig
-from calibre.gui2.store.opensearch_store import OpenSearchStore
+from calibre.gui2.store.opensearch_store import OpenSearchOPDSStore
 from calibre.gui2.store.search_result import SearchResult

-class PragmaticBookshelfStore(BasicStoreConfig, OpenSearchStore):
+class PragmaticBookshelfStore(BasicStoreConfig, OpenSearchOPDSStore):

     open_search_url = 'http://pragprog.com/catalog/search-description'
     web_url = 'http://pragprog.com/'
@@ -18,7 +18,7 @@ class PragmaticBookshelfStore(BasicStoreConfig, OpenSearchStore):
     # http://pragprog.com/catalog.opds

     def search(self, query, max_results=10, timeout=60):
-        for s in OpenSearchStore.search(self, query, max_results, timeout):
+        for s in OpenSearchOPDSStore.search(self, query, max_results, timeout):
             s.drm = SearchResult.DRM_UNLOCKED
             s.formats = 'EPUB, PDF, MOBI'
             yield s

View File

@@ -77,9 +77,12 @@ class SmashwordsStore(BasicStoreConfig, StorePlugin):
                 title = ''.join(data.xpath('//a[@class="bookTitle"]/text()'))
                 subnote = ''.join(data.xpath('//span[@class="subnote"]/text()'))
                 author = ''.join(data.xpath('//span[@class="subnote"]/a/text()'))
-                price = subnote.partition('$')[2]
-                price = price.split(u'\xa0')[0]
-                price = '$' + price
+                if '$' in subnote:
+                    price = subnote.partition('$')[2]
+                    price = price.split(u'\xa0')[0]
+                    price = '$' + price
+                else:
+                    price = '$0.00'

                 counter -= 1
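The hardened Smashwords parsing above only splits on '$' when one is actually present, treating everything else as free. A standalone sketch of the same logic (function name ours); note the `\xa0` is a non-breaking space that separates the price from the currency on the page:

```python
def extract_price(subnote):
    # Guarded version of the Smashwords price parsing: without the
    # '$' check, a free book's subnote would yield the bogus price '$'.
    if '$' not in subnote:
        return '$0.00'
    price = subnote.partition('$')[2]     # text after the first '$'
    price = price.split(u'\xa0')[0]       # cut at the non-breaking space
    return '$' + price
```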

View File

@@ -224,6 +224,7 @@ class TagsModel(QAbstractItemModel): # {{{
         self.row_map = []
         self.root_item = self.create_node(icon_map=self.icon_state_map)
         self.db = None
+        self._build_in_progress = False
         self.reread_collapse_model({}, rebuild=False)

     def reread_collapse_model(self, state_map, rebuild=True):
@@ -257,9 +258,17 @@ class TagsModel(QAbstractItemModel): # {{{
         self.endResetModel()

     def rebuild_node_tree(self, state_map={}):
+        if self._build_in_progress:
+            print ('Tag Browser build already in progress')
+            traceback.print_stack()
+            return
+        #traceback.print_stack()
+        #print ()
+        self._build_in_progress = True
         self.beginResetModel()
         self._run_rebuild(state_map=state_map)
         self.endResetModel()
+        self._build_in_progress = False

     def _run_rebuild(self, state_map={}):
         for node in self.node_map.itervalues():
@@ -505,7 +514,7 @@ class TagsModel(QAbstractItemModel): # {{{
         # }}}

         for category in self.category_nodes:
-            process_one_node(category, state_map.get(category.py_name, {}))
+            process_one_node(category, state_map.get(category.category_key, {}))

     # Drag'n Drop {{{
     def mimeTypes(self):
@@ -842,7 +851,7 @@ class TagsModel(QAbstractItemModel): # {{{
     def index_for_category(self, name):
         for row, category in enumerate(self.category_nodes):
-            if category.py_name == name:
+            if category.category_key == name:
                 return self.index(row, 0, QModelIndex())

     def columnCount(self, parent):
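The `_build_in_progress` flag added above is a plain reentrancy guard: a rebuild requested while one is already running is dropped instead of re-entering beginResetModel()/endResetModel(). A minimal sketch of the pattern (class and attribute names illustrative, not calibre's):

```python
class RebuildGuard(object):
    """Reentrancy guard in the style of the TagsModel change."""

    def __init__(self):
        self._build_in_progress = False
        self.rebuilds = 0

    def rebuild(self):
        if self._build_in_progress:
            # a re-entrant request is rejected, as in the calibre change
            return False
        self._build_in_progress = True
        try:
            self.rebuilds += 1  # stands in for the actual _run_rebuild() work
            return True
        finally:
            # cleared even if the rebuild raises; the original clears the
            # flag only on the success path
            self._build_in_progress = False
```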

View File

@@ -91,10 +91,10 @@ class TagBrowserMixin(object): # {{{
             # Add the new category
             user_cats[new_cat] = []
             db.prefs.set('user_categories', user_cats)
-            self.tags_view.set_new_model()
+            self.tags_view.recount()
             m = self.tags_view.model()
             idx = m.index_for_path(m.find_category_node('@' + new_cat))
-            m.show_item_at_index(idx)
+            self.tags_view.show_item_at_index(idx)
             # Open the editor on the new item to rename it
             if new_category_name is None:
                 self.tags_view.edit(idx)
@@ -111,7 +111,7 @@ class TagBrowserMixin(object): # {{{
             for k in d.categories:
                 db.field_metadata.add_user_category('@' + k, k)
             db.data.change_search_locations(db.field_metadata.get_search_terms())
-            self.tags_view.set_new_model()
+            self.tags_view.recount()

     def do_delete_user_category(self, category_name):
         '''
@@ -144,7 +144,7 @@ class TagBrowserMixin(object): # {{{
             elif k.startswith(category_name + '.'):
                 del user_cats[k]
         db.prefs.set('user_categories', user_cats)
-        self.tags_view.set_new_model()
+        self.tags_view.recount()

     def do_del_item_from_user_cat(self, user_cat, item_name, item_category):
         '''
@@ -262,20 +262,22 @@ class TagBrowserMixin(object): # {{{
             self.library_view.select_rows(ids)
         # refreshing the tags view happens at the emit()/call() site

-    def do_author_sort_edit(self, parent, id, select_sort=True):
+    def do_author_sort_edit(self, parent, id, select_sort=True, select_link=False):
         '''
         Open the manage authors dialog
         '''
         db = self.library_view.model().db
-        editor = EditAuthorsDialog(parent, db, id, select_sort)
+        editor = EditAuthorsDialog(parent, db, id, select_sort, select_link)
         d = editor.exec_()
         if d:
-            for (id, old_author, new_author, new_sort) in editor.result:
+            for (id, old_author, new_author, new_sort, new_link) in editor.result:
                 if old_author != new_author:
                     # The id might change if the new author already exists
                     id = db.rename_author(id, new_author)
                 db.set_sort_field_for_author(id, unicode(new_sort),
                                              commit=False, notify=False)
+                db.set_link_field_for_author(id, unicode(new_link),
+                                             commit=False, notify=False)
             db.commit()
             self.library_view.model().refresh()
             self.tags_view.recount()
@@ -413,13 +415,14 @@ class TagBrowserWidget(QWidget): # {{{
         txt = unicode(self.item_search.currentText()).strip()

         if txt.startswith('*'):
-            self.tags_view.set_new_model(filter_categories_by=txt[1:])
+            model.filter_categories_by = txt[1:]
+            self.tags_view.recount()
             self.current_find_position = None
             return
-        if model.get_filter_categories_by():
-            self.tags_view.set_new_model(filter_categories_by=None)
+        if model.filter_categories_by:
+            model.filter_categories_by = None
+            self.tags_view.recount()
             self.current_find_position = None
-            model = self.tags_view.model()

         if not txt:
             return
@@ -437,8 +440,9 @@ class TagBrowserWidget(QWidget): # {{{
         self.current_find_position = \
             model.find_item_node(key, txt, self.current_find_position)
         if self.current_find_position:
-            model.show_item_at_path(self.current_find_position, box=True)
+            self.tags_view.show_item_at_path(self.current_find_position, box=True)
         elif self.item_search.text():
             self.not_found_label.setVisible(True)
             if self.tags_view.verticalScrollBar().isVisible():

View File

@@ -12,7 +12,7 @@ from functools import partial
 from itertools import izip

 from PyQt4.Qt import (QItemDelegate, Qt, QTreeView, pyqtSignal, QSize, QIcon,
-        QApplication, QMenu, QPoint, QModelIndex)
+        QApplication, QMenu, QPoint, QModelIndex, QToolTip, QCursor)

 from calibre.gui2.tag_browser.model import (TagTreeItem, TAG_SEARCH_STATES,
         TagsModel)
@@ -66,12 +66,11 @@ class TagsView(QTreeView): # {{{
     tag_list_edit = pyqtSignal(object, object)
     saved_search_edit = pyqtSignal(object)
     rebuild_saved_searches = pyqtSignal()
-    author_sort_edit = pyqtSignal(object, object)
+    author_sort_edit = pyqtSignal(object, object, object, object)
     tag_item_renamed = pyqtSignal()
     search_item_renamed = pyqtSignal()
     drag_drop_finished = pyqtSignal(object)
     restriction_error = pyqtSignal()
-    show_at_path = pyqtSignal()

     def __init__(self, parent=None):
         QTreeView.__init__(self, parent=None)
@@ -96,8 +95,6 @@ class TagsView(QTreeView): # {{{
         self.user_category_icon = QIcon(I('tb_folder.png'))
         self.delete_icon = QIcon(I('list_remove.png'))
         self.rename_icon = QIcon(I('edit-undo.png'))
-        self.show_at_path.connect(self.show_item_at_path,
-                type=Qt.QueuedConnection)

         self._model = TagsModel(self)
         self._model.search_item_renamed.connect(self.search_item_renamed)
@@ -132,14 +129,14 @@ class TagsView(QTreeView): # {{{
         expanded_categories = []
         for row, category in enumerate(self._model.category_nodes):
             if self.isExpanded(self._model.index(row, 0, QModelIndex())):
-                expanded_categories.append(category.py_name)
+                expanded_categories.append(category.category_key)
             states = [c.tag.state for c in category.child_tags()]
             names = [(c.tag.name, c.tag.category) for c in category.child_tags()]
-            state_map[category.py_name] = dict(izip(names, states))
+            state_map[category.category_key] = dict(izip(names, states))
         return expanded_categories, state_map

     def reread_collapse_parameters(self):
-        self._model.reread_collapse_parameters(self.get_state()[1])
+        self._model.reread_collapse_model(self.get_state()[1])

     def set_database(self, db, tag_match, sort_by):
         self._model.set_database(db)
@@ -176,7 +173,8 @@ class TagsView(QTreeView): # {{{
         state_map = self.get_state()[1]
         self.db.prefs.set('user_categories', user_cats)
         self._model.rebuild_node_tree(state_map=state_map)
-        self.show_at_path.emit('@'+nkey)
+        p = self._model.find_category_node('@'+nkey)
+        self.show_item_at_path(p)

     @property
     def match_all(self):
@@ -279,7 +277,10 @@ class TagsView(QTreeView): # {{{
                 self.saved_search_edit.emit(category)
                 return
             if action == 'edit_author_sort':
-                self.author_sort_edit.emit(self, index)
+                self.author_sort_edit.emit(self, index, True, False)
+                return
+            if action == 'edit_author_link':
+                self.author_sort_edit.emit(self, index, False, True)
                 return

             reset_filter_categories = True
@@ -348,6 +349,9 @@ class TagsView(QTreeView): # {{{
                     self.context_menu.addAction(_('Edit sort for %s')%display_name(tag),
                             partial(self.context_menu_handler,
                                     action='edit_author_sort', index=tag.id))
+                    self.context_menu.addAction(_('Edit link for %s')%display_name(tag),
+                            partial(self.context_menu_handler,
+                                    action='edit_author_link', index=tag.id))

                 # is_editable is also overloaded to mean 'can be added
                 # to a user category'
@@ -489,10 +493,25 @@ class TagsView(QTreeView): # {{{
             pa.setCheckable(True)
             pa.setChecked(True)

+            if config['sort_tags_by'] != "name":
+                fla.setEnabled(False)
+                m.hovered.connect(self.collapse_menu_hovered)
+                fla.setToolTip(_('First letter is usable only when sorting by name'))
+                # Apparently one cannot set a tooltip to empty, so use a star and
+                # deal with it in the hover method
+                da.setToolTip('*')
+                pa.setToolTip('*')

         if not self.context_menu.isEmpty():
             self.context_menu.popup(self.mapToGlobal(point))
         return True

+    def collapse_menu_hovered(self, action):
+        tip = action.toolTip()
+        if tip == '*':
+            tip = ''
+        QToolTip.showText(QCursor.pos(), tip)

     def dragMoveEvent(self, event):
         QTreeView.dragMoveEvent(self, event)
         self.setDropIndicatorShown(False)
@@ -501,6 +520,8 @@ class TagsView(QTreeView): # {{{
             return
         src_is_tb = event.mimeData().hasFormat('application/calibre+from_tag_browser')
         item = index.data(Qt.UserRole).toPyObject()
+        if item.type == TagTreeItem.ROOT:
+            return
         flags = self._model.flags(index)
         if item.type == TagTreeItem.TAG and flags & Qt.ItemIsDropEnabled:
             self.setDropIndicatorShown(not src_is_tb)
@@ -554,7 +575,9 @@ class TagsView(QTreeView): # {{{
         expanded_categories, state_map = self.get_state()
         self._model.rebuild_node_tree(state_map=state_map)
         for category in expanded_categories:
-            self.expand(self._model.index_for_category(category))
+            idx = self._model.index_for_category(category)
+            if idx is not None and idx.isValid():
+                self.expand(idx)
         self.show_item_at_path(path)

     def show_item_at_path(self, path, box=False,
@@ -570,10 +593,12 @@ class TagsView(QTreeView): # {{{

     def show_item_at_index(self, idx, box=False,
                            position=QTreeView.PositionAtCenter):
-        if idx.isValid():
+        if idx.isValid() and idx.data(Qt.UserRole).toPyObject() is not self._model.root_item:
+            self.expand(self._model.parent(idx)) # Needed otherwise Qt sometimes segfaults if the
+                                                # node is buried in a collapsed, off
+                                                # screen hierarchy
             self.setCurrentIndex(idx)
             self.scrollTo(idx, position)
-            self.setCurrentIndex(idx)
             if box:
                 self._model.set_boxed(idx)

View File

@@ -1024,7 +1024,15 @@ class SortKeyGenerator(object):
                 dt = 'datetime'
             elif sb == 'number':
                 try:
-                    val = float(val)
+                    val = val.replace(',', '').strip()
+                    p = 1
+                    for i, candidate in enumerate(
+                            (' B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB')):
+                        if val.endswith(candidate):
+                            p = 1024**(i)
+                            val = val[:-len(candidate)].strip()
+                            break
+                    val = float(val) * p
                 except:
                     val = 0.0
                 dt = 'float'
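The new 'number' branch lets columns that display human-readable sizes (e.g. '1.5 MB') sort by their numeric value instead of lexically. A standalone sketch of that parsing (function name ours; the suffix tuple matches the one in the diff, where ' B' at index 0 gives a multiplier of 1024**0 = 1):

```python
def size_sort_key(val):
    # Parse strings like '1.5 MB' or '2 KB' into a float byte count,
    # falling back to 0.0 for unparseable values, as the diff does.
    val = val.replace(',', '').strip()
    p = 1
    for i, suffix in enumerate((' B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB')):
        if val.endswith(suffix):
            p = 1024 ** i
            val = val[:-len(suffix)].strip()
            break
    try:
        return float(val) * p
    except ValueError:
        return 0.0
```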

View File

@ -8,6 +8,7 @@ The database used to store ebook metadata
''' '''
import os, sys, shutil, cStringIO, glob, time, functools, traceback, re, \ import os, sys, shutil, cStringIO, glob, time, functools, traceback, re, \
json, uuid, tempfile, hashlib json, uuid, tempfile, hashlib
from collections import defaultdict
import threading, random import threading, random
from itertools import repeat from itertools import repeat
from math import ceil from math import ceil
@@ -367,7 +368,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
             'uuid',
             'has_cover',
             ('au_map', 'authors', 'author',
-                'aum_sortconcat(link.id, authors.name, authors.sort)'),
+                'aum_sortconcat(link.id, authors.name, authors.sort, authors.link)'),
             'last_modified',
             '(SELECT identifiers_concat(type, val) FROM identifiers WHERE identifiers.book=books.id) identifiers',
     ]
@@ -487,6 +488,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         self.refresh_ondevice = functools.partial(self.data.refresh_ondevice, self)
         self.refresh()
         self.last_update_check = self.last_modified()
+        self.format_metadata_cache = defaultdict(dict)

     def break_cycles(self):
         self.data.break_cycles()
@@ -894,13 +896,17 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
             aut_list = []
         aum = []
         aus = {}
-        for (author, author_sort) in aut_list:
-            aum.append(author.replace('|', ','))
-            aus[author] = author_sort.replace('|', ',')
+        aul = {}
+        for (author, author_sort, link) in aut_list:
+            aut = author.replace('|', ',')
+            aum.append(aut)
+            aus[aut] = author_sort.replace('|', ',')
+            aul[aut] = link
         mi.title = row[fm['title']]
         mi.authors = aum
         mi.author_sort = row[fm['author_sort']]
         mi.author_sort_map = aus
+        mi.author_link_map = aul
         mi.comments = row[fm['comments']]
         mi.publisher = row[fm['publisher']]
         mi.timestamp = row[fm['timestamp']]
@@ -910,11 +916,15 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         mi.book_size = row[fm['size']]
         mi.ondevice_col= row[fm['ondevice']]
         mi.last_modified = row[fm['last_modified']]
+        id = idx if index_is_id else self.id(idx)
         formats = row[fm['formats']]
+        mi.format_metadata = {}
         if not formats:
             formats = None
         else:
             formats = formats.split(',')
+            for f in formats:
+                mi.format_metadata[f] = self.format_metadata(id, f)
         mi.formats = formats
         tags = row[fm['tags']]
         if tags:
@@ -923,7 +933,6 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         if mi.series:
             mi.series_index = row[fm['series_index']]
         mi.rating = row[fm['rating']]
-        id = idx if index_is_id else self.id(idx)
         mi.set_identifiers(self.get_identifiers(id, index_is_id=True))
         mi.application_id = id
         mi.id = id
@@ -959,6 +968,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
                 mi.cover_data = ('jpeg', cdata)
             else:
                 mi.cover = self.cover(id, index_is_id=True, as_path=True)
+        mi.has_cover = _('Yes') if self.has_cover(id) else ''
         return mi

     def has_book(self, mi):
@@ -1122,13 +1132,21 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         if m:
             return m['mtime']

-    def format_metadata(self, id_, fmt):
+    def format_metadata(self, id_, fmt, allow_cache=True):
+        if not fmt:
+            return {}
+        fmt = fmt.upper()
+        if allow_cache:
+            x = self.format_metadata_cache[id_].get(fmt, None)
+            if x is not None:
+                return x
         path = self.format_abspath(id_, fmt, index_is_id=True)
         ans = {}
         if path is not None:
             stat = os.stat(path)
             ans['size'] = stat.st_size
             ans['mtime'] = utcfromtimestamp(stat.st_mtime)
+        self.format_metadata_cache[id_][fmt] = ans
         return ans

     def format_hash(self, id_, fmt):
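The caching pattern introduced above (a `defaultdict(dict)` keyed first by book id, then by upper-cased format) avoids repeated `os.stat` calls, and is invalidated by `add_format()`/`remove_format()`. A minimal standalone sketch of the same idea; the class name and the `stat_count` counter are mine, added only to make the cache hits observable:

```python
from collections import defaultdict

class FormatMetadataCache(object):
    """Toy stand-in for the per-book format-metadata cache above."""

    def __init__(self):
        self.cache = defaultdict(dict)  # book id -> {'EPUB': metadata, ...}
        self.stat_count = 0             # counts simulated os.stat calls

    def format_metadata(self, id_, fmt, allow_cache=True):
        if not fmt:
            return {}
        fmt = fmt.upper()               # formats are cached case-insensitively
        if allow_cache:
            x = self.cache[id_].get(fmt, None)
            if x is not None:
                return x
        self.stat_count += 1            # pretend to stat the file on disk
        ans = {'size': 1234, 'mtime': None}
        self.cache[id_][fmt] = ans
        return ans

    def invalidate(self, id_, fmt):
        # Mirrors the pop() calls added to add_format()/remove_format()
        self.cache[id_].pop(fmt.upper(), None)
```

Passing `allow_cache=False` forces a fresh stat, matching the new keyword argument on `format_metadata`.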
@@ -1245,6 +1263,9 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
             ret = tempfile.SpooledTemporaryFile(max_size=SPOOL_SIZE)
             shutil.copyfileobj(f, ret)
             ret.seek(0)
+            # Various bits of code try to use the name as the default
+            # title when reading metadata, so set it
+            ret.name = f.name
         else:
             ret = f.read()
         return ret
@ -1261,6 +1282,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
def add_format(self, index, format, stream, index_is_id=False, path=None, def add_format(self, index, format, stream, index_is_id=False, path=None,
notify=True, replace=True): notify=True, replace=True):
id = index if index_is_id else self.id(index) id = index if index_is_id else self.id(index)
if format:
self.format_metadata_cache[id].pop(format.upper(), None)
if path is None: if path is None:
path = os.path.join(self.library_path, self.path(id, index_is_id=True)) path = os.path.join(self.library_path, self.path(id, index_is_id=True))
name = self.conn.get('SELECT name FROM data WHERE book=? AND format=?', (id, format), all=False) name = self.conn.get('SELECT name FROM data WHERE book=? AND format=?', (id, format), all=False)
@ -1313,6 +1336,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
def remove_format(self, index, format, index_is_id=False, notify=True, def remove_format(self, index, format, index_is_id=False, notify=True,
commit=True, db_only=False): commit=True, db_only=False):
id = index if index_is_id else self.id(index) id = index if index_is_id else self.id(index)
if format:
self.format_metadata_cache[id].pop(format.upper(), None)
name = self.conn.get('SELECT name FROM data WHERE book=? AND format=?', (id, format), all=False) name = self.conn.get('SELECT name FROM data WHERE book=? AND format=?', (id, format), all=False)
if name: if name:
if not db_only: if not db_only:
@@ -1442,7 +1467,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
             raise ValueError('sort ' + sort + ' not a valid value')
         self.books_list_filter.change([] if not ids else ids)
-        id_filter = None if not ids else frozenset(ids)
+        id_filter = None if ids is None else frozenset(ids)

         tb_cats = self.field_metadata
         tcategories = {}
@@ -1520,7 +1545,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         rating_dex = self.FIELD_MAP['rating']
         tag_class = LibraryDatabase2.TCat_Tag
         for book in self.data.iterall():
-            if id_filter and book[id_dex] not in id_filter:
+            if id_filter is not None and book[id_dex] not in id_filter:
                 continue
             rating = book[rating_dex]
             # We kept track of all possible category field_map positions above
@@ -2038,13 +2063,13 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
     def authors_with_sort_strings(self, id, index_is_id=False):
         id = id if index_is_id else self.id(id)
         aut_strings = self.conn.get('''
-                SELECT authors.id, authors.name, authors.sort
+                SELECT authors.id, authors.name, authors.sort, authors.link
                 FROM authors, books_authors_link as bl
                 WHERE bl.book=? and authors.id=bl.author
                 ORDER BY bl.id''', (id,))
         result = []
-        for (id_, author, sort,) in aut_strings:
-            result.append((id_, author.replace('|', ','), sort))
+        for (id_, author, sort, link) in aut_strings:
+            result.append((id_, author.replace('|', ','), sort, link))
         return result

     # Given a book, return the author_sort string for authors of the book
@ -2084,7 +2109,8 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
aum = self.authors_with_sort_strings(id_, index_is_id=True) aum = self.authors_with_sort_strings(id_, index_is_id=True)
self.data.set(id_, self.FIELD_MAP['au_map'], self.data.set(id_, self.FIELD_MAP['au_map'],
':#:'.join([':::'.join((au.replace(',', '|'), aus)) for (_, au, aus) in aum]), ':#:'.join([':::'.join((au.replace(',', '|'), aus, aul))
for (_, au, aus, aul) in aum]),
row_is_id=True) row_is_id=True)
def _set_authors(self, id, authors, allow_case_change=False): def _set_authors(self, id, authors, allow_case_change=False):
@ -2435,7 +2461,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
self.conn.commit() self.conn.commit()
def get_authors_with_ids(self): def get_authors_with_ids(self):
result = self.conn.get('SELECT id,name,sort FROM authors') result = self.conn.get('SELECT id,name,sort,link FROM authors')
if not result: if not result:
return [] return []
return result return result
@@ -2446,6 +2472,13 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
             (author,), all=False)
         return result

+    def set_link_field_for_author(self, aid, link, commit=True, notify=False):
+        if not link:
+            link = ''
+        self.conn.execute('UPDATE authors SET link=? WHERE id=?', (link.strip(), aid))
+        if commit:
+            self.conn.commit()
+
     def set_sort_field_for_author(self, old_id, new_sort, commit=True, notify=False):
         self.conn.execute('UPDATE authors SET sort=? WHERE id=?', \
             (new_sort.strip(), old_id))


@@ -53,6 +53,7 @@ class Restore(Thread):
         self.mismatched_dirs = []
         self.successes = 0
         self.tb = None
+        self.authors_links = {}

     @property
     def errors_occurred(self):
@@ -160,6 +161,12 @@ class Restore(Thread):
             else:
                 self.mismatched_dirs.append(dirpath)

+        alm = mi.get('author_link_map', {})
+        for author, link in alm.iteritems():
+            existing_link, timestamp = self.authors_links.get(author, (None, None))
+            if existing_link is None or existing_link != link and timestamp < mi.timestamp:
+                self.authors_links[author] = (link, mi.timestamp)
+
     def create_cc_metadata(self):
         self.books.sort(key=itemgetter('timestamp'))
         self.custom_columns = {}
@@ -206,6 +213,11 @@ class Restore(Thread):
                 self.failed_restores.append((book, traceback.format_exc()))
             self.progress_callback(book['mi'].title, i+1)

+        for author in self.authors_links.iterkeys():
+            link, ign = self.authors_links[author]
+            db.conn.execute('UPDATE authors SET link=? WHERE name=?',
+                            (link, author.replace(',', '|')))
+        db.conn.commit()
         db.conn.close()

     def restore_book(self, book, db):

@@ -600,4 +600,15 @@ class SchemaUpgrade(object):
             with open(os.path.join(bdir, fname), 'wb') as f:
                 f.write(script)

+    def upgrade_version_20(self):
+        '''
+        Add a link column to the authors table.
+        '''
+        script = '''
+        BEGIN TRANSACTION;
+        ALTER TABLE authors ADD COLUMN link TEXT NOT NULL DEFAULT "";
+        '''
+        self.conn.executescript(script)
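The migration above can be exercised against a throwaway in-memory database; this is an illustration, not calibre's test code. Note that the diff's `DEFAULT ""` relies on SQLite's lenient treatment of double-quoted strings; the sketch uses the standard `''` form, which has the same effect of giving every existing and future row an empty-string link:

```python
import sqlite3

# Simulate the pre-upgrade schema with one existing author row.
conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT, sort TEXT);
    INSERT INTO authors (name, sort) VALUES ('Austen, Jane', 'Austen, Jane');
''')

# The migration: add the link column with a NOT NULL empty-string default,
# so existing rows stay valid without a backfill step.
conn.executescript("ALTER TABLE authors ADD COLUMN link TEXT NOT NULL DEFAULT '';")

row = conn.execute('SELECT name, link FROM authors').fetchone()
```

`executescript` issues its own transaction handling, which is why the real upgrade wraps the statement in a script rather than calling `execute`.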


@@ -795,7 +795,9 @@ class BrowseServer(object):
                 list(mi.get_all_user_metadata(False).items()):
             if m['is_custom'] and field not in displayed_custom_fields:
                 continue
-            if m['datatype'] == 'comments' or field == 'comments':
+            if m['datatype'] == 'comments' or field == 'comments' or (
+                    m['datatype'] == 'composite' and
+                    m['display'].get('contains_html', False)):
                 val = mi.get(field, '')
                 if val and val.strip():
                     comments.append((m['name'], comments_to_html(val)))


@@ -186,7 +186,8 @@ def ACQUISITION_ENTRY(item, version, db, updated, CFM, CKEYS, prefix):
                 CFM[key]['is_multiple']['ui_to_list'],
                 ignore_max=True, no_tag_count=True,
                 joinval=CFM[key]['is_multiple']['list_to_ui']))))
-    elif datatype == 'comments':
+    elif datatype == 'comments' or (CFM[key]['datatype'] == 'composite' and
+            CFM[key]['display'].get('contains_html', False)):
         extra.append('%s: %s<br />'%(xml(name), comments_to_html(unicode(val))))
     else:
         extra.append('%s: %s<br />'%(xml(name), xml(unicode(val))))


@@ -144,9 +144,9 @@ class AumSortedConcatenate(object):
     def __init__(self):
         self.ans = {}

-    def step(self, ndx, author, sort):
+    def step(self, ndx, author, sort, link):
         if author is not None:
-            self.ans[ndx] = author + ':::' + sort
+            self.ans[ndx] = ':::'.join((author, sort, link))

     def finalize(self):
         keys = self.ans.keys()
@@ -229,7 +229,7 @@ class DBThread(Thread):
             load_c_extensions(self.conn)
             self.conn.row_factory = sqlite.Row if self.row_factory else lambda cursor, row : list(row)
             self.conn.create_aggregate('concat', 1, Concatenate)
-            self.conn.create_aggregate('aum_sortconcat', 3, AumSortedConcatenate)
+            self.conn.create_aggregate('aum_sortconcat', 4, AumSortedConcatenate)
             self.conn.create_collation('PYNOCASE', partial(pynocase,
                 encoding=encoding))
             self.conn.create_function('title_sort', 1, title_sort)
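`aum_sortconcat` is a Python aggregate registered through `sqlite3.Connection.create_aggregate`; the hunks above widen it from 3 to 4 arguments so the new `authors.link` column travels along with name and sort. A self-contained sketch of the mechanism (the table name, sample data, and the sorted-key `finalize` are my simplifications, not calibre's exact code):

```python
import sqlite3

class AumSortedConcatenate(object):
    # Collect author rows keyed by link-table id, then join the fields
    # with ':::' and the entries with ':#:', as in the diff above.
    def __init__(self):
        self.ans = {}

    def step(self, ndx, author, sort, link):
        if author is not None:
            self.ans[ndx] = ':::'.join((author, sort, link))

    def finalize(self):
        keys = sorted(self.ans.keys())
        return ':#:'.join(self.ans[k] for k in keys) or None

conn = sqlite3.connect(':memory:')
# The second argument (4) must match step()'s arity, which is why the
# registration call in the diff changes from 3 to 4.
conn.create_aggregate('aum_sortconcat', 4, AumSortedConcatenate)
conn.execute('CREATE TABLE a (id INTEGER, name TEXT, sort TEXT, link TEXT)')
conn.executemany('INSERT INTO a VALUES (?,?,?,?)',
                 [(2, 'Gaiman, Neil', 'Gaiman, Neil', 'lnkG'),
                  (1, 'Pratchett, Terry', 'Pratchett, Terry', 'lnkP')])
res = conn.execute('SELECT aum_sortconcat(id, name, sort, link) FROM a').fetchone()[0]
```

Ordering by the first argument in `finalize` preserves the book's author order regardless of row order in the scan.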


@@ -56,7 +56,7 @@ You should not change the files in this resources folder, as your changes will g
 |app| will automatically use your custom file in preference to the builtin one the next time it is started.

 For example, if you wanted to change the icon for the :guilabel:`Remove books` action, you would first look in the builtin resources folder and see that the relevant file is
-:file:`resources/images/trash.svg`. Assuming you have an alternate icon in svg format called :file:`mytrash.svg` you would save it in the configuration directory as :file:`resources/images/trash.svg`. All the icons used by the calibre user interface are in :file:`resources/images` and its sub-folders.
+:file:`resources/images/trash.png`. Assuming you have an alternate icon in PNG format called :file:`mytrash.png` you would save it in the configuration directory as :file:`resources/images/trash.png`. All the icons used by the calibre user interface are in :file:`resources/images` and its sub-folders.

 Customizing |app| with plugins
 --------------------------------


@@ -187,6 +187,26 @@ in your favorite editor and add the line::

 near the top of the file. Now run the command :command:`calibredb`. The very first line of output should be ``Hello, world!``.

+Having separate "normal" and "development" |app| installs on the same computer
+-------------------------------------------------------------------------------
+
+The calibre source tree is very stable and rarely breaks, but if you feel the need to run from source on a separate
+test library and run the released calibre version with your everyday library, you can achieve this easily using
+.bat files or shell scripts to launch |app|. The example below shows how to do this on Windows using .bat files (the
+instructions for other platforms are the same, just use a shell script instead of a .bat file).
+
+To launch the release version of |app| with your everyday library:
+
+calibre-normal.bat::
+
+    calibre.exe "--with-library=C:\path\to\everyday\library folder"
+
+calibre-dev.bat::
+
+    set CALIBRE_DEVELOP_FROM=C:\path\to\calibre\checkout\src
+    calibre.exe "--with-library=C:\path\to\test\library folder"
+
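On Linux or OS X the same split can be done with a small shell script. The paths below are placeholders for your own checkout and test library; `CALIBRE_DEVELOP_FROM` is the real environment variable the docs describe, and the `command -v` guard just makes the script a no-op on machines where calibre is not installed:

```shell
#!/bin/sh
# calibre-dev.sh -- launch calibre from a source checkout against a test library.
# Both paths are examples; point them at your own checkout and test library.
CALIBRE_DEVELOP_FROM="$HOME/calibre/src"
export CALIBRE_DEVELOP_FROM

# Launch only if calibre is actually on PATH.
if command -v calibre >/dev/null 2>&1; then
    calibre "--with-library=$HOME/calibre-test-library"
fi
```

A matching `calibre-normal.sh` would simply omit the `CALIBRE_DEVELOP_FROM` export and point `--with-library` at the everyday library.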
 Debugging tips
 ----------------


@@ -340,6 +340,10 @@ When you first run |app|, it will ask you for a folder in which to store your bo
 Metadata about the books is stored in the file ``metadata.db`` at the top level of the library folder. This file is a sqlite database. When backing up your library make sure you copy the entire folder and all its sub-folders.

+The library folder and all its contents make up what is called a *|app| library*. You can have multiple such libraries. To manage the libraries, click the |app| icon on the toolbar. You can create new libraries, remove/rename existing ones and switch between libraries easily.
+
+You can copy or move books between different libraries (once you have more than one library set up) by right clicking on a book and selecting the :guilabel:`Copy to library` action.
+
 How does |app| manage author names and sorting?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -558,11 +562,16 @@ Most readers do not support this. You should complain to the manufacturer about
 Another alternative is to create a catalog in ebook form containing a listing of all the books in your calibre library, with their metadata. Click the arrow next to the convert button to access the catalog creation tool. And before you ask, no you cannot have the catalog "link directly to" books on your reader.

+How do I get |app| to use my HTTP proxy?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, |app| uses whatever proxy settings are set in your OS. Sometimes these are incorrect; for example, on Windows, if you don't use Internet Explorer then the proxy settings may not be up to date. You can tell |app| to use a particular proxy server by setting the ``http_proxy`` environment variable. The format of the variable is ``http://username:password@servername``; you should ask your network admin to give you the correct value for this variable. Note that |app| only supports HTTP proxies, not SOCKS proxies. You can see the current proxies used by |app| in Preferences->Miscellaneous.
+
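On a Unix-like system, setting the variable for a single session could look like the following sketch. The host, port and credentials are placeholders, and the `command -v` guard only exists so the snippet does nothing on machines without calibre:

```shell
#!/bin/sh
# Point calibre at an HTTP proxy for this session only.
# Replace username, password and proxy.example.com:8080 with real values.
http_proxy="http://username:password@proxy.example.com:8080"
export http_proxy

# Launch calibre with the proxy in effect, if it is installed.
if command -v calibre >/dev/null 2>&1; then
    calibre
fi
```

Setting the variable in your shell profile instead makes it apply to every calibre session.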
 I want some feature added to |app|. What can I do?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 You have two choices:

 1. Create a patch by hacking on |app| and send it to me for review and inclusion. See `Development <http://calibre-ebook.com/get-involved>`_.
-2. `Open a ticket <http://calibre-ebook.com/bugs>`_ (you have to register and login first). Remember that |app| development is done by volunteers, so if you get no response to your feature request, it means no one feels like implementing it.
+2. `Open a bug requesting the feature <http://calibre-ebook.com/bugs>`_. Remember that |app| development is done by volunteers, so if you get no response to your feature request, it means no one feels like implementing it.

 Why doesn't |app| have an automatic update?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@@ -164,13 +164,18 @@ Library

 .. |lii| image:: images/library.png
     :class: float-right-img

-|lii| The :guilabel: `Library` action allows you to create, switch between, rename or delete a Library. |app| allows you to create as many libraries as you wish. You could for instance create a fiction library, a non fiction library, a foreign language library a project library, basically any structure that suits your needs. Libraries are the highest organizational structure within |app|, each library has its own set of books, tags, categories and base storage location.
+|lii| The :guilabel:`Library` action allows you to create, switch between, rename or remove a Library. |app| allows you to create as many libraries as you wish. You could for instance create a fiction library, a non fiction library, a foreign language library, a project library, basically any structure that suits your needs. Libraries are the highest organizational structure within |app|; each library has its own set of books, tags, categories and base storage location.

-1. **Switch\Create library..**: This action allows you to; a) connect to a pre-existing |app| library at another location from your currently open library, b) Create and empty library at a nw location or, c) Move the current Library to a newly specified location.
-2. **Quick Switch>**: This action allows you to switch between libraries that have been registered or created within |app|.
-3. **Rename Library>**: This action allows you to rename a Library.
-4. **Delete Library>**: This action allows you to **permanenetly delete** a Library.
-5. **<calibre library>**: Actions 5, 6 etc .. give you immediate switch access between multiple Libraries that you have created or attached to.
+1. **Switch/Create library**: This action allows you to: a) connect to a pre-existing |app| library at another location from your currently open library, b) create an empty library at a new location or, c) move the current Library to a newly specified location.
+2. **Quick Switch**: This action allows you to switch between libraries that have been registered or created within |app|.
+3. **Rename Library**: This action allows you to rename a Library.
+4. **Remove Library**: This action allows you to unregister a library from |app|.
+5. **<library name>**: Actions 5, 6 etc. give you immediate switch access between multiple Libraries that you have created or attached to. This list contains only the 5 most frequently used libraries. For the complete list, use the Quick Switch menu.
+6. **Library Maintenance**: This action allows you to check the current library for data consistency issues and restore the current library's database from backups.
+
+.. note:: Metadata about your ebooks like title/author/tags/etc. is stored in a single file in your |app| library folder called metadata.db. If this file gets corrupted (a very rare event), you can lose the metadata. Fortunately, |app| automatically backs up the metadata for every individual book in the book's folder as an .opf file. By using the Restore Library action under Library Maintenance described above, you can have |app| rebuild the metadata.db file from the individual .opf files for you.
+
+You can copy or move books between different libraries (once you have more than one library set up) by right clicking on the book and selecting the action :guilabel:`Copy to library`.

 .. _device:
@@ -265,6 +270,7 @@ Preferences

 .. |cbi| image:: images/preferences.png

 The Preferences Action allows you to change the way various aspects of |app| work. To access it, click the |cbi|.
+You can also re-run the Welcome Wizard by clicking the arrow next to the preferences button.

 .. _catalogs:


@@ -116,7 +116,7 @@ If you have programming experience, please note that the syntax in this mode (si
 Many functions use regular expressions. In all cases, regular expression matching is case-insensitive.

-The functions available are:
+The functions available are listed below. Note that the definitive documentation for functions is available in the section :ref:`Function classification <template_functions_reference>`:

 * ``lowercase()`` -- return value of the field in lower case.
 * ``uppercase()`` -- return the value of the field in upper case.
@ -124,11 +124,14 @@ The functions available are:
* ``capitalize()`` -- return the value with the first letter upper case and the rest lower case. * ``capitalize()`` -- return the value with the first letter upper case and the rest lower case.
* ``contains(pattern, text if match, text if not match)`` -- checks if field contains matches for the regular expression `pattern`. Returns `text if match` if matches are found, otherwise it returns `text if no match`. * ``contains(pattern, text if match, text if not match)`` -- checks if field contains matches for the regular expression `pattern`. Returns `text if match` if matches are found, otherwise it returns `text if no match`.
* ``count(separator)`` -- interprets the value as a list of items separated by `separator`, returning the number of items in the list. Most lists use a comma as the separator, but authors uses an ampersand. Examples: `{tags:count(,)}`, `{authors:count(&)}` * ``count(separator)`` -- interprets the value as a list of items separated by `separator`, returning the number of items in the list. Most lists use a comma as the separator, but authors uses an ampersand. Examples: `{tags:count(,)}`, `{authors:count(&)}`
* ``format_number(template)`` -- interprets the value as a number and format that number using a python formatting template such as "{0:5.2f}" or "{0:,d}" or "${0:5,.2f}". The field_name part of the template must be a 0 (zero) (the "{0:" in the above examples). See the template language and python documentation for more examples. Returns the empty string if formatting fails.
* ``human_readable()`` -- expects the value to be a number and returns a string representing that number in KB, MB, GB, etc.
* ``ifempty(text)`` -- if the field is not empty, return the value of the field. Otherwise return `text`. * ``ifempty(text)`` -- if the field is not empty, return the value of the field. Otherwise return `text`.
* ``in_list(separator, pattern, found_val, not_found_val)`` -- interpret the field as a list of items separated by `separator`, comparing the `pattern` against each value in the list. If the pattern matches a value, return `found_val`, otherwise return `not_found_val`. * ``in_list(separator, pattern, found_val, not_found_val)`` -- interpret the field as a list of items separated by `separator`, comparing the `pattern` against each value in the list. If the pattern matches a value, return `found_val`, otherwise return `not_found_val`.
* ``list_item(index, separator)`` -- interpret the field as a list of items separated by `separator`, returning the `index`th item. The first item is number zero. The last item can be returned using `list_item(-1,separator)`. If the item is not in the list, then the empty value is returned. The separator has the same meaning as in the `count` function. * ``list_item(index, separator)`` -- interpret the field as a list of items separated by `separator`, returning the `index`th item. The first item is number zero. The last item can be returned using `list_item(-1,separator)`. If the item is not in the list, then the empty value is returned. The separator has the same meaning as in the `count` function.
* ``re(pattern, replacement)`` -- return the field after applying the regular expression. All instances of `pattern` are replaced with `replacement`. As in all of |app|, these are python-compatible regular expressions. * ``re(pattern, replacement)`` -- return the field after applying the regular expression. All instances of `pattern` are replaced with `replacement`. As in all of |app|, these are python-compatible regular expressions.
* ``shorten(left chars, middle text, right chars)`` -- Return a shortened version of the field, consisting of `left chars` characters from the beginning of the field, followed by `middle text`, followed by `right chars` characters from the end of the string. `Left chars` and `right chars` must be integers. For example, assume the title of the book is `Ancient English Laws in the Times of Ivanhoe`, and you want it to fit in a space of at most 15 characters. If you use ``{title:shorten(9,-,5)}``, the result will be `Ancient E-nhoe`. If the field's length is less than ``left chars`` + ``right chars`` + the length of ``middle text``, then the field will be used intact. For example, the title `The Dome` would not be changed. * ``shorten(left chars, middle text, right chars)`` -- Return a shortened version of the field, consisting of `left chars` characters from the beginning of the field, followed by `middle text`, followed by `right chars` characters from the end of the string. `Left chars` and `right chars` must be integers. For example, assume the title of the book is `Ancient English Laws in the Times of Ivanhoe`, and you want it to fit in a space of at most 15 characters. If you use ``{title:shorten(9,-,5)}``, the result will be `Ancient E-nhoe`. If the field's length is less than ``left chars`` + ``right chars`` + the length of ``middle text``, then the field will be used intact. For example, the title `The Dome` would not be changed.
* ``swap_around_comma(val) `` -- given a value of the form ``B, A``, return ``A B``. This is most useful for converting names in LN, FN format to FN LN. If there is no comma, the function returns val unchanged.
* ``switch(pattern, value, pattern, value, ..., else_value)`` -- for each ``pattern, value`` pair, checks if the field matches the regular expression ``pattern`` and if so, returns that ``value``. If no ``pattern`` matches, then ``else_value`` is returned. You can have as many ``pattern, value`` pairs as you want.
* ``lookup(pattern, field, pattern, field, ..., else_field)`` -- like switch, except the arguments are field (metadata) names, not text. The value of the appropriate field will be fetched and used. Note that because composite columns are fields, you can use this function in one composite field to use the value of some other composite field. This is extremely useful when constructing variable save paths (more later).
* ``select(key)`` -- interpret the field as a comma-separated list of items, with the items being of the form "id:value". Find the pair with the id equal to key, and return the corresponding value. This function is particularly useful for extracting a value such as an isbn from the set of identifiers for a book.
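To make the list-handling semantics above concrete, here is a minimal Python sketch of ``swap_around_comma`` and ``select``. It is an illustrative approximation of the documented behavior, not calibre's actual implementation:

```python
def swap_around_comma(val):
    # "Last, First" -> "First Last"; values without a comma pass through.
    parts = val.split(",", 1)
    if len(parts) < 2:
        return val
    return parts[1].strip() + " " + parts[0].strip()

def select(val, key):
    # Treat val as a comma-separated list of "id:value" items and return
    # the value paired with key, or the empty string if key is absent.
    for item in val.split(","):
        pair = item.strip().split(":", 1)
        if len(pair) == 2 and pair[0] == key:
            return pair[1]
    return ""

print(swap_around_comma("Tolkien, J. R. R."))  # J. R. R. Tolkien
print(select("isbn:9780261103573, google:itFvuAEACAAJ", "isbn"))  # 9780261103573
```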
**All the functions listed under single-function mode can be used in program mode**. To do so, you must supply the value that the function is to act upon as the first parameter, in addition to the parameters documented above. For example, in program mode the parameters of the `test` function are ``test(x, text_if_not_empty, text_if_empty)``. The `x` parameter, which is the value to be tested, will almost always be a variable or a function call, often `field()`.
The following functions are available in addition to those described in single-function mode. Remember from the example above that the single-function mode functions require an additional first parameter specifying the field to operate on. With the exception of the ``id`` parameter of assign, all parameters can be statements (sequences of expressions). Note that the definitive documentation for functions is available in the section :ref:`Function classification <template_functions_reference>`:
* ``and(value, value, ...)`` -- returns the string "1" if all values are not empty, otherwise returns the empty string. This function works well with test or first_non_empty. You can have as many values as you want.
* ``add(x, y)`` -- returns x + y. Throws an exception if either x or y are not numbers.
* ``assign(id, val)`` -- assigns val to id, then returns val. id must be an identifier, not an expression.
* ``booksize()`` -- returns the value of the |app| 'size' field. Returns '' if there are no formats.
* ``cmp(x, y, lt, eq, gt)`` -- compares x and y after converting both to numbers. Returns ``lt`` if x < y. Returns ``eq`` if x == y. Otherwise returns ``gt``.
* ``days_between(date1, date2)`` -- return the number of days between ``date1`` and ``date2``. The number is positive if ``date1`` is greater than ``date2``, otherwise negative. If either ``date1`` or ``date2`` is not a date, the function returns the empty string.
* ``divide(x, y)`` -- returns x / y. Throws an exception if either x or y are not numbers.
* ``field(name)`` -- returns the metadata field named by ``name``.
* ``first_non_empty(value, value, ...)`` -- returns the first value that is not empty. If all values are empty, then the empty value is returned. You can have as many values as you want.
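A minimal Python model of ``days_between`` and ``first_non_empty`` may help. It is only a sketch of the documented behavior, assuming ISO ``yyyy-mm-dd`` input and whole-day differences:

```python
from datetime import datetime

def days_between(date1, date2):
    # Positive when date1 is later than date2; empty string when either
    # argument does not parse as a date.
    try:
        d1 = datetime.strptime(date1, "%Y-%m-%d")
        d2 = datetime.strptime(date2, "%Y-%m-%d")
    except ValueError:
        return ""
    return (d1 - d2).days

def first_non_empty(*values):
    # Return the first non-empty value, or the empty string if all are empty.
    for v in values:
        if v:
            return v
    return ""

print(days_between("2011-07-06", "2011-07-01"))  # 5
print(first_non_empty("", "", "fallback"))       # fallback
```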
iso : the date with time and timezone. Must be the only format present.
* ``eval(string)`` -- evaluates the string as a program, passing the local variables (those previously set with ``assign``). This permits using the template processor to construct complex results from local variables.
* ``formats_modtimes(date_format)`` -- return a comma-separated list of colon-separated items representing modification times for the formats of a book. The ``date_format`` parameter specifies how the date is to be formatted. See the ``format_date`` function for details. You can use the ``select`` function to get the modification time for a specific format. Note that format names are always uppercase, as in EPUB.
* ``formats_sizes()`` -- return a comma-separated list of colon-separated items representing sizes in bytes of the formats of a book. You can use the ``select`` function to get the size for a specific format. Note that format names are always uppercase, as in EPUB.
* ``has_cover()`` -- return ``Yes`` if the book has a cover, otherwise return the empty string.
* ``not(value)`` -- returns the string "1" if the value is empty, otherwise returns the empty string. This function works well with test or first_non_empty.
* ``merge_lists(list1, list2, separator)`` -- return a list made by merging the items in list1 and list2, removing duplicate items using a case-insensitive compare. If items differ in case, the one in list1 is used. The items in list1 and list2 are separated by separator, as are the items in the returned list.
* ``multiply(x, y)`` -- returns x * y. Throws an exception if either x or y are not numbers.
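The merge rule for ``merge_lists`` can be sketched in a few lines of Python. This is an illustrative approximation, assuming the same separator is used to split and rejoin:

```python
def merge_lists(list1, list2, separator):
    # Case-insensitive union; when both lists contain an item that differs
    # only in case, the spelling from list1 wins.
    items = [s for s in (t.strip() for t in list1.split(separator)) if s]
    seen = {s.lower() for s in items}
    for s in (t.strip() for t in list2.split(separator)):
        if s and s.lower() not in seen:
            items.append(s)
            seen.add(s.lower())
    return separator.join(items)

print(merge_lists("Fantasy, Horror", "fantasy, Drama", ", "))
# Fantasy, Horror, Drama
```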
* ``strcmp(x, y, lt, eq, gt)`` -- does a case-insensitive comparison of x and y as strings. Returns ``lt`` if x < y. Returns ``eq`` if x == y. Otherwise returns ``gt``.
* ``substr(str, start, end)`` -- returns the ``start``'th through the ``end``'th characters of ``str``. The first character in ``str`` is the zero'th character. If end is negative, then it indicates that many characters counting from the right. If end is zero, then it indicates the last character. For example, ``substr('12345', 1, 0)`` returns ``'2345'``, and ``substr('12345', 1, -1)`` returns ``'234'``.
* ``subtract(x, y)`` -- returns x - y. Throws an exception if either x or y are not numbers.
* ``today()`` -- return a date string for today. This value is designed for use in format_date or days_between, but can be manipulated like any other string. The date is in ISO format.
* ``template(x)`` -- evaluates x as a template. The evaluation is done in its own context, meaning that variables are not shared between the caller and the template evaluation. Because the `{` and `}` characters are special, you must use `[[` for the `{` character and `]]` for the `}` character; they are converted automatically. For example, ``template('[[title_sort]]')`` will evaluate the template ``{title_sort}`` and return its value.
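The ``substr`` indexing rules above are easy to get wrong, so here is a small Python sketch that reproduces the documented examples (an approximation of the behavior, not calibre's implementation):

```python
def substr(s, start, end):
    # Zero-based start; end == 0 means "through the last character",
    # while a negative end counts characters from the right.
    return s[start:] if end == 0 else s[start:end]

print(substr("12345", 1, 0))   # 2345
print(substr("12345", 1, -1))  # 234
```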
.. _template_functions_reference:
Function classification
---------------------------