Merge from trunk

This commit is contained in:
Charles Haley 2010-06-20 11:27:51 +01:00
commit 8d316781f5
43 changed files with 49537 additions and 32769 deletions


@@ -4,6 +4,142 @@
# for important features/bug fixes.
# Also, each release can have new and improved recipes.
- version: 0.7.4
date: 2010-06-19
bug fixes:
- title: "Fix regression in 0.7.3 that broke creating custom columns of rating or text types"
- title: "Fix cover browser breaking if you click on a book in the book list while cover browser is animated"
- title: "Fix a bug that could be triggered with the new book details pane if a book has a zero size cover"
tickets: [5889]
- title: "SONY driver: Fix bug preventing the editing of collections in the device view"
new recipes:
- title: Auto Prove
author: Gabriele Marini
- title: Forbes India, Maximum PC, Today Online
author: rty
improved recipes:
- WSJ
- Psychology Today
- version: 0.7.3
date: 2010-06-18
new features:
- title: "The Tag Browser now displays an average rating for each item"
type: major
description: >
"
The icons of each individual item in the Tag Browser are now partially colored to indicate the average rating of
all books belonging to that category. For example, the icon next to each author is partially colored based on the
average rating of all books by that author in your calibre library. You can also hover your mouse over the item to
see the average rating in a tooltip. Can be turned off via Preferences->Interface
"
- title: "Editable author sort for each author"
type: major
description: >
"calibre has always allowed you to specify the author sort for each book in your collection. Now you
can also specify the way the name of each individual author should be sorted. This is used to display the list
of authors in the Tag Browser and OPDS feeds in the Content Server"
- title: "When downloading metadata, also get series information from librarything.com"
type: major
tickets: [5148]
- title: "Redesign of the Book Details pane"
type: major
description: >
"The Book details pane now displays covers with animation. Also, instead of showing the full path to the book, you now have
clickable links to open the containing folder or individual formats. The path information is still accessible via a tooltip"
- title: "New User Interface layouts"
type: major
description: >
"calibre now has two user interface layouts selectable from Preferences->Interface. The 'wide' layout has the book details pane on the side
and the 'narrow' layout has it on the bottom. The default layout is now wide."
- title: "You can now add books directly from the device to the calibre library by right clicking on the books in the device views"
- title: "iPad driver: Create category from series preferentially, also handle series sorting"
- title: "SONY driver: Add an option to use author_sort instead of author when sending to device"
- title: "Hitting Enter in the search box now causes the search to be re-run"
tickets: [5856]
- title: "Boox driver: Make destination directory for books customizable"
- title: "Add plugin to download metadata from douban.com. Disabled by default."
- title: "OS X/linux driver for PocketBook 301"
- title: "Support for the Samsung Galaxy and Sigmatek EBK52"
- title: "On startup do not focus the search bar. Instead you can access the search bar easily by pressing the / key or the standard search keyboard shortcut for your operating system"
bug fixes:
- title: "iPad driver: Various bug fixes"
- title: "Kobo Output profile: Adjust the screen dimensions when converting comics"
- title: "Fix bug that caused using Preferences while a device is connected to disable items in the device menu"
- title: "CHM Input: Skip files whose names are too long for Windows"
- title: "Brighten up calibre icon on dark backgrounds"
- title: "Ignore 'Unknown' in title/authors when downloading metadata"
tickets: [5633]
- title: "Fix regression that broke various entries in the menus - Preferences, Open containing folder and Edit metadata individually"
- title: "EPUB metadata: Handle comma separated entries in <dc:subject> tags correctly"
tickets: [5855]
- title: "MOBI Output: Fix underlines not being rendered"
tickets: [5830]
- title: "EPUB Output: Remove workaround for old versions of Adobe Digital Editions' faulty rendering of links in html. calibre no longer forces links to be blue and underlined"
- title: "Fix a bug that could cause the show pane buttons to not show hidden panes"
- title: "Fix Tag Editor not reflecting recently changed data in the Tag Category Text Box"
tickets: [5809]
- title: "Content server: Fix sorting of books by authors instead of author_sort in the main and mobile views"
- title: "Cover cache: Resize covers larger than 600x800 in the cover cache to reduce memory consumption in the GUI"
- title: "EPUB Output: The default cover is now generated as a JPEG instead of PNG32, reducing its size by an order of magnitude."
tickets: [5810]
- title: "Cover Browser: Scale text size with height of cover browser. Only show a reflection of half the cover. Also restore rendering quality after regression in 0.7.1"
tickets: [5808]
- title: "Book list: Do not let the default layout have any column wider than 350 pixels"
new recipes:
- title: Akter
author: Darko Miletic
- title: Thai Rath and The Nation (Thailand)
author: Anat Ruangrassamee
improved recipes:
- Wall Street Journal
- New York Times
- Slashdot
- Publico
- Danas
- version: 0.7.2
date: 2010-06-11

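The Tag Browser average-rating feature described in the 0.7.3 entry above boils down to averaging the ratings of all books under each category item. A simplified standalone sketch, not calibre's actual implementation; the book-record layout is an assumption for illustration:

```python
# Simplified sketch of per-author average ratings, as shown in the
# Tag Browser. Not calibre's real code; the dict-based book records
# are an assumed layout for this example.
from collections import defaultdict

def average_ratings(books):
    """Return {author: average rating} over books that have a rating."""
    totals = defaultdict(lambda: [0, 0])  # author -> [rating sum, count]
    for book in books:
        rating = book.get('rating')
        if rating is None:
            continue  # unrated books do not affect the average
        for author in book['authors']:
            totals[author][0] += rating
            totals[author][1] += 1
    return {a: s / c for a, (s, c) in totals.items()}

books = [
    {'authors': ['A. Author'], 'rating': 8},
    {'authors': ['A. Author'], 'rating': 6},
    {'authors': ['B. Writer'], 'rating': None},
]
print(average_ratings(books))  # {'A. Author': 7.0}
```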

@@ -0,0 +1,90 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__author__ = 'GabrieleMarini, based on Darko Miletic'
__copyright__ = '2009, Darko Miletic <darko.miletic at gmail.com>, Gabriele Marini'
__version__ = 'v1.02 Marini Gabriele '
__date__ = '10, January 2010'
__description__ = 'Italian daily newspaper'
'''
http://www.corrieredellosport.it/
'''
from calibre.web.feeds.news import BasicNewsRecipe
class AutoPR(BasicNewsRecipe):
__author__ = 'Gabriele Marini'
description = 'Auto and Formula 1'
cover_url = 'http://www.auto.it/res/imgs/logo_Auto.png'
title = u'Auto Prove'
publisher = 'CONTE Editore'
category = 'Sport'
language = 'it'
timefmt = '[%a, %d %b, %Y]'
oldest_article = 60
max_articles_per_feed = 20
use_embedded_content = False
recursion = 100
remove_javascript = True
no_stylesheets = True
#html2lrf_options = [
# '--comment', description
# , '--category', category
# , '--publisher', publisher
# , '--ignore-tables'
# ]
#html2epub_options = 'publisher="' + publisher + '"\ncomments="' + description + '"\ntags="' + category + '"\nlinearize_tables=True'
keep_only_tags = [
dict(name='h2', attrs={'class':['tit_Article y_Txt']}),
dict(name='h2', attrs={'class':['tit_Article']}),
dict(name='div', attrs={'class':['box_Img newsdet_new ']}),
dict(name='div', attrs={'class':['box_Img newsdet_as ']}),
dict(name='table', attrs={'class':['table_A']}),
dict(name='div', attrs={'class':['txt_Article txtBox_cms']}),
dict(name='testoscheda')]
def parse_index(self):
feeds = []
for title, url in [
("Prove su Strada" , "http://www.auto.it/rss/prove+6.xml")
]:
soup = self.index_to_soup(url)
soup = soup.find('channel')
# print soup
for article in soup.findAllNext('item'):
title = self.tag_to_string(article.title)
date = self.tag_to_string(article.pubDate)
description = self.tag_to_string(article.description)
link = self.tag_to_string(article.guid)
# print article
articles = self.create_links_append(link, date, description)
if articles:
feeds.append((title, articles))
return feeds
def create_links_append(self, link, date, description):
current_articles = []
current_articles.append({'title': 'Generale', 'url': link,'description':description, 'date':date}),
current_articles.append({'title': 'Design', 'url': link.replace('scheda','design'),'description':'scheda', 'date':''}),
current_articles.append({'title': 'Interni', 'url': link.replace('scheda','interni'),'description':'Interni', 'date':''}),
current_articles.append({'title': 'Tecnica', 'url': link.replace('scheda','tecnica'),'description':'Tecnica', 'date':''}),
current_articles.append({'title': 'Su Strada', 'url': link.replace('scheda','su_strada'),'description':'Su Strada', 'date':''}),
current_articles.append({'title': 'Pagella', 'url': link.replace('scheda','pagella'),'description':'Pagella', 'date':''}),
current_articles.append({'title': 'Rilevamenti', 'url': link.replace('scheda','telemetria'),'description':'Rilevamenti', 'date':''})
return current_articles
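The create_links_append helper above fans one RSS item out into several pseudo-articles by rewriting the 'scheda' segment of the base link. That URL rewriting in isolation (the example URL is hypothetical):

```python
# Standalone illustration of the URL fan-out performed by
# create_links_append above: each section page is derived from the
# base 'scheda' article link. The example URL is hypothetical.
SECTIONS = ['design', 'interni', 'tecnica', 'su_strada', 'pagella', 'telemetria']

def fan_out(link):
    """Return the base link plus one derived URL per section."""
    return [link] + [link.replace('scheda', s) for s in SECTIONS]

urls = fan_out('http://www.auto.it/prove/scheda/1234')
print(urls[1])  # http://www.auto.it/prove/design/1234
```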


@@ -0,0 +1,55 @@
from calibre.ptempfile import PersistentTemporaryFile
from calibre.web.feeds.news import BasicNewsRecipe
class AdvancedUserRecipe1276934715(BasicNewsRecipe):
title = u'Forbes India'
__author__ = 'rty'
description = 'India Edition Forbes'
publisher = 'Forbes India'
category = 'Business News, Economy, India'
oldest_article = 7
max_articles_per_feed = 100
remove_javascript = True
use_embedded_content = False
no_stylesheets = True
language = 'en_IN'
temp_files = []
articles_are_obfuscated = True
conversion_options = {'linearize_tables':True}
feeds = [
(u'Contents', u'http://business.in.com/rssfeed/rss_all.xml'),
]
extra_css = '''
.t-10-gy-l{font-style: italic; font-size: small}
.t-30-b-d{font-weight: bold; font-size: xx-large}
.t-16-gy-l{font-weight: bold; font-size: x-large; font-style: italic}
.storycontent{font-size: 4px;font-family: Times New Roman;}
'''
remove_tags_before = dict(name='div', attrs={'class':'pdl10 pdr15'})
def get_obfuscated_article(self, url):
br = self.get_browser()
br.open(url)
response = br.follow_link(url_regex = r'/printcontent/[0-9]+', nr = 0)
html = response.read()
self.temp_files.append(PersistentTemporaryFile('_fa.html'))
self.temp_files[-1].write(html)
self.temp_files[-1].close()
return self.temp_files[-1].name
def get_cover_url(self):
index = 'http://business.in.com/magazine/'
soup = self.index_to_soup(index)
for image in soup.findAll('a',{ "class" : "lbOn a-9-b-d" }):
return image['href']
#return image['href'] + '.jpg'
return None
def preprocess_html(self, soup):
for item in soup.findAll(style=True):
del item['style']
for item in soup.findAll(width=True):
del item['width']
return soup


@@ -0,0 +1,43 @@
from calibre.ptempfile import PersistentTemporaryFile
from calibre.web.feeds.news import BasicNewsRecipe
class AdvancedUserRecipe1276930924(BasicNewsRecipe):
title = u'Maximum PC'
__author__ = 'rty'
description = 'Maximum PC'
publisher = 'http://www.maximumpc.com'
category = 'news, computer, technology'
language = 'en'
oldest_article = 30
max_articles_per_feed = 100
remove_javascript = True
use_embedded_content = False
no_stylesheets = True
language = 'en'
temp_files = []
articles_are_obfuscated = True
feeds = [(u'News', u'http://www.maximumpc.com/articles/4/feed'),
(u'Reviews', u'http://www.maximumpc.com/articles/40/feed'),
(u'Editors Blog', u'http://www.maximumpc.com/articles/6/feed'),
(u'How-to', u'http://www.maximumpc.com/articles/32/feed'),
(u'Features', u'http://www.maximumpc.com/articles/31/feed'),
(u'From the Magazine', u'http://www.maximumpc.com/articles/72/feed')
]
keep_only_tags = [
dict(name='div', attrs={'class':['print-title','article_body']}),
]
remove_tags = [
dict(name='div', attrs={'class':'comments-tags-actions'}),
]
remove_tags_before = dict(name='div', attrs={'class':'print-title'})
remove_tags_after = dict(name='div', attrs={'class':'meta-content'})
def get_obfuscated_article(self, url):
br = self.get_browser()
br.open(url)
response = br.follow_link(url_regex = r'/print/[0-9]+', nr = 0)
html = response.read()
self.temp_files.append(PersistentTemporaryFile('_fa.html'))
self.temp_files[-1].write(html)
self.temp_files[-1].close()
return self.temp_files[-1].name


@@ -17,6 +17,7 @@ class NYTimes(BasicNewsRecipe):
title = 'New York Times Top Stories'
__author__ = 'GRiker'
language = 'en'
requires_version = (0, 7, 3)
description = 'Top Stories from the New York Times'
# List of sections typically included in Top Stories. Use a keyword from the
@@ -64,6 +65,7 @@ class NYTimes(BasicNewsRecipe):
timefmt = ''
needs_subscription = True
masthead_url = 'http://graphics8.nytimes.com/images/misc/nytlogo379x64.gif'
cover_margins = (18,18,'grey99')
remove_tags_before = dict(id='article')
remove_tags_after = dict(id='article')
@@ -183,6 +185,16 @@ class NYTimes(BasicNewsRecipe):
self.log("\nFailed to login")
return br
def skip_ad_pages(self, soup):
# Skip ad pages served before actual article
skip_tag = soup.find(True, {'name':'skip'})
if skip_tag is not None:
self.log.warn("Found forwarding link: %s" % skip_tag.parent['href'])
url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
url += '?pagewanted=all'
self.log.warn("Skipping ad to article at '%s'" % url)
return self.index_to_soup(url, raw=True)
def get_cover_url(self):
cover = None
st = time.localtime()
@@ -391,14 +403,6 @@ class NYTimes(BasicNewsRecipe):
return ans
def preprocess_html(self, soup):
# Skip ad pages served before actual article
skip_tag = soup.find(True, {'name':'skip'})
if skip_tag is not None:
self.log.error("Found forwarding link: %s" % skip_tag.parent['href'])
url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
url += '?pagewanted=all'
self.log.error("Skipping ad to article at '%s'" % url)
soup = self.index_to_soup(url)
return self.strip_anchors(soup)
def postprocess_html(self, soup, first_fetch):

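The skip_ad_pages hook added above strips any query string from the forwarding link and then requests the single-page view. That URL rewrite on its own (the href is hypothetical):

```python
# Isolated illustration of the URL rewrite in skip_ad_pages above:
# drop the query string, then ask for the single-page article view.
# The example href is hypothetical.
import re

def article_url(href):
    """Build the full single-page article URL from a forwarding href."""
    return 'http://www.nytimes.com' + re.sub(r'\?.*', '', href) + '?pagewanted=all'

print(article_url('/2010/06/20/story.html?ref=ad'))
# http://www.nytimes.com/2010/06/20/story.html?pagewanted=all
```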

@@ -20,6 +20,7 @@ class NYTimes(BasicNewsRecipe):
title = 'The New York Times'
__author__ = 'GRiker'
language = 'en'
requires_version = (0, 7, 3)
description = 'Daily news from the New York Times (subscription version)'
allSectionKeywords = ['The Front Page', 'International','National','Obituaries','Editorials',
@@ -103,6 +104,7 @@ class NYTimes(BasicNewsRecipe):
]),
dict(name=['script', 'noscript', 'style'])]
masthead_url = 'http://graphics8.nytimes.com/images/misc/nytlogo379x64.gif'
cover_margins = (18,18,'grey99')
no_stylesheets = True
extra_css = '.headline {text-align: left;}\n \
.byline {font-family: monospace; \
@@ -158,7 +160,7 @@ class NYTimes(BasicNewsRecipe):
return cover
def get_masthead_title(self):
return 'NYTimes GR Version'
return self.title
def dump_ans(self, ans):
total_article_count = 0
@@ -279,15 +281,17 @@ class NYTimes(BasicNewsRecipe):
self.dump_ans(ans)
return ans
def preprocess_html(self, soup):
def skip_ad_pages(self, soup):
# Skip ad pages served before actual article
skip_tag = soup.find(True, {'name':'skip'})
if skip_tag is not None:
self.log.error("Found forwarding link: %s" % skip_tag.parent['href'])
self.log.warn("Found forwarding link: %s" % skip_tag.parent['href'])
url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
url += '?pagewanted=all'
self.log.error("Skipping ad to article at '%s'" % url)
soup = self.index_to_soup(url)
self.log.warn("Skipping ad to article at '%s'" % url)
return self.index_to_soup(url, raw=True)
def preprocess_html(self, soup):
return self.strip_anchors(soup)
def postprocess_html(self, soup, first_fetch):


@@ -1,39 +1,44 @@
from calibre.ptempfile import PersistentTemporaryFile
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import BeautifulSoup
class PsychologyToday(BasicNewsRecipe):
class AdvancedUserRecipe1275708473(BasicNewsRecipe):
title = u'Psychology Today'
__author__ = 'rty'
publisher = u'www.psychologytoday.com'
category = u'Psychology'
max_articles_per_feed = 100
remove_javascript = True
use_embedded_content = False
no_stylesheets = True
language = 'en'
__author__ = 'Krittika Goyal'
oldest_article = 1 #days
max_articles_per_feed = 25
#encoding = 'latin1'
remove_stylesheets = True
#remove_tags_before = dict(name='h1', attrs={'class':'heading'})
#remove_tags_after = dict(name='td', attrs={'class':'newptool1'})
temp_files = []
articles_are_obfuscated = True
remove_tags = [
dict(name='iframe'),
dict(name='div', attrs={'class':['pt-box-title', 'pt-box-content', 'blog-entry-footer', 'item-list', 'article-sub-meta']}),
dict(name='div', attrs={'id':['block-td_search_160', 'block-cam_search_160']}),
#dict(name='ul', attrs={'class':'article-tools'}),
#dict(name='ul', attrs={'class':'articleTools'}),
dict(name='div', attrs={'class':['print-source_url','field-items','print-footer']}),
dict(name='span', attrs={'class':'print-footnote'}),
]
remove_tags_before = dict(name='h1', attrs={'class':'print-title'})
remove_tags_after = dict(name='div', attrs={'class':['field-items','print-footer']})
feeds = [
('PSY TODAY',
'http://www.psychologytoday.com/articles/index.rss'),
]
feeds = [(u'Contents', u'http://www.psychologytoday.com/articles/index.rss')]
def preprocess_html(self, soup):
story = soup.find(name='div', attrs={'id':'contentColumn'})
#td = heading.findParent(name='td')
#td.extract()
soup = BeautifulSoup('<html><head><title>t</title></head><body></body></html>')
body = soup.find(name='body')
body.insert(0, story)
for x in soup.findAll(name='p', text=lambda x:x and '--&gt;' in x):
p = x.findParent('p')
if p is not None:
p.extract()
return soup
def get_article_url(self, article):
return article.get('link', None)
def get_obfuscated_article(self, url):
br = self.get_browser()
br.open(url)
response = br.follow_link(url_regex = r'/print/[0-9]+', nr = 0)
html = response.read()
self.temp_files.append(PersistentTemporaryFile('_fa.html'))
self.temp_files[-1].write(html)
self.temp_files[-1].close()
return self.temp_files[-1].name
def get_cover_url(self):
index = 'http://www.psychologytoday.com/magazine/'
soup = self.index_to_soup(index)
for image in soup.findAll('img',{ "class" : "imagefield imagefield-field_magazine_cover" }):
return image['src'] + '.jpg'
return None


@@ -0,0 +1,59 @@
from calibre.ptempfile import PersistentTemporaryFile
from calibre.web.feeds.news import BasicNewsRecipe
class AdvancedUserRecipe1276486274(BasicNewsRecipe):
title = u'Today Online - Singapore'
publisher = 'MediaCorp Press Ltd - Singapore'
__author__ = 'rty'
category = 'news, Singapore'
oldest_article = 7
max_articles_per_feed = 100
remove_javascript = True
use_embedded_content = False
no_stylesheets = True
language = 'en_SG'
temp_files = []
articles_are_obfuscated = True
masthead_url = 'http://www.todayonline.com/App_Themes/Default/images/icons/TodayOnlineLogo.gif'
conversion_options = {'linearize_tables':True}
extra_css = '''
.author{font-style: italic; font-size: small}
.date{font-style: italic; font-size: small}
.Headline{font-weight: bold; font-size: xx-large}
.headerStrap{font-weight: bold; font-size: x-large; font-style: italic}
.bodyText{font-size: 4px;font-family: Times New Roman;}
'''
keep_only_tags = [
dict(name='div', attrs={'id':['fullPrintBodyHolder']})
]
remove_tags_after = [ dict(name='div', attrs={'class':'button'})]
remove_tags = [
dict(name='div', attrs={'class':['url','button']})
]
feeds = [
(u'Singapore', u'http://www.todayonline.com/RSS/Singapore'),
(u'Hot News', u'http://www.todayonline.com/RSS/Hotnews'),
(u'Today Online', u'http://www.todayonline.com/RSS/Todayonline'),
(u'Voices', u'http://www.todayonline.com/RSS/Voices'),
(u'Commentary', u'http://www.todayonline.com/RSS/Commentary'),
(u'World', u'http://www.todayonline.com/RSS/World'),
(u'Business', u'http://www.todayonline.com/RSS/Business'),
(u'Column', u'http://www.todayonline.com/RSS/Columns'),
]
def get_obfuscated_article(self, url):
br = self.get_browser()
br.open(url)
response = br.follow_link(url_regex = r'/Print/', nr = 0)
html = response.read()
self.temp_files.append(PersistentTemporaryFile('_fa.html'))
self.temp_files[-1].write(html)
self.temp_files[-1].close()
return self.temp_files[-1].name
def preprocess_html(self, soup):
for item in soup.findAll(style=True):
del item['style']
return soup
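The same get_obfuscated_article pattern recurs in several of the recipes above: fetch the page, follow the print-version link, and hand calibre the path of a temporary HTML file. A rough standalone sketch of the save-to-file half, using the standard library in place of calibre's PersistentTemporaryFile:

```python
# Sketch of the save-to-temp-file step shared by the recipes above.
# Uses the stdlib; the real recipes use calibre's
# PersistentTemporaryFile instead of NamedTemporaryFile.
import tempfile

def save_print_version(html, suffix='_fa.html'):
    """Write fetched HTML bytes to a named temp file and return its path."""
    f = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
    f.write(html)
    f.close()
    return f.name

path = save_print_version(b'<html><body>print version</body></html>')
print(path.endswith('_fa.html'))  # True
```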


@@ -4,13 +4,14 @@ __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
from calibre.web.feeds.news import BasicNewsRecipe
import copy
# http://online.wsj.com/page/us_in_todays_paper.html
class WallStreetJournal(BasicNewsRecipe):
title = 'The Wall Street Journal (US)'
__author__ = 'Kovid Goyal and Sujata Raman'
title = 'The Wall Street Journal'
__author__ = 'Kovid Goyal, Sujata Raman, and Joshua Oster-Morris'
description = 'News and current affairs'
needs_subscription = True
language = 'en'
@@ -67,6 +68,16 @@ class WallStreetJournal(BasicNewsRecipe):
def wsj_get_index(self):
return self.index_to_soup('http://online.wsj.com/itp')
def wsj_add_feed(self,feeds,title,url):
self.log('Found section:', title)
if url.endswith('whatsnews'):
articles = self.wsj_find_wn_articles(url)
else:
articles = self.wsj_find_articles(url)
if articles:
feeds.append((title, articles))
return feeds
def parse_index(self):
soup = self.wsj_get_index()
@@ -82,25 +93,62 @@ class WallStreetJournal(BasicNewsRecipe):
div = soup.find('div', attrs={'class':'itpHeader'})
div = div.find('ul', attrs={'class':'tab'})
for a in div.findAll('a', href=lambda x: x and '/itp/' in x):
pageone = a['href'].endswith('pageone')
if pageone:
title = 'Front Section'
url = 'http://online.wsj.com' + a['href']
feeds = self.wsj_add_feed(feeds,title,url)
title = "What's News"
url = url.replace('pageone','whatsnews')
feeds = self.wsj_add_feed(feeds,title,url)
else:
title = self.tag_to_string(a)
url = 'http://online.wsj.com' + a['href']
self.log('Found section:', title)
articles = self.wsj_find_articles(url)
if articles:
feeds.append((title, articles))
feeds = self.wsj_add_feed(feeds,title,url)
return feeds
def wsj_find_wn_articles(self, url):
soup = self.index_to_soup(url)
articles = []
whats_news = soup.find('div', attrs={'class':lambda x: x and 'whatsNews-simple' in x})
if whats_news is not None:
for a in whats_news.findAll('a', href=lambda x: x and '/article/' in x):
container = a.findParent(['p'])
meta = a.find(attrs={'class':'meta_sectionName'})
if meta is not None:
meta.extract()
title = self.tag_to_string(a).strip()
url = a['href']
desc = ''
if container is not None:
desc = self.tag_to_string(container)
articles.append({'title':title, 'url':url,
'description':desc, 'date':''})
self.log('\tFound WN article:', title)
return articles
def wsj_find_articles(self, url):
soup = self.index_to_soup(url)
whats_news = soup.find('div', attrs={'class':lambda x: x and
'whatsNews-simple' in x})
whats_news = soup.find('div', attrs={'class':lambda x: x and 'whatsNews-simple' in x})
if whats_news is not None:
whats_news.extract()
articles = []
flavorarea = soup.find('div', attrs={'class':lambda x: x and 'ahed' in x})
if flavorarea is not None:
flavorstory = flavorarea.find('a', href=lambda x: x and x.startswith('/article'))
if flavorstory is not None:
flavorstory['class'] = 'mjLinkItem'
metapage = soup.find('span', attrs={'class':lambda x: x and 'meta_sectionName' in x})
if metapage is not None:
flavorstory.append( copy.copy(metapage) ) #metapage should always be A1 because that should be first on the page
for a in soup.findAll('a', attrs={'class':'mjLinkItem'}, href=True):
container = a.findParent(['li', 'div'])
meta = a.find(attrs={'class':'meta_sectionName'})
@@ -118,26 +166,9 @@ class WallStreetJournal(BasicNewsRecipe):
self.log('\tFound article:', title)
'''
# Find related articles
a.extract()
for a in container.findAll('a', href=lambda x: x and '/article/'
in x and 'articleTabs' not in x):
url = a['href']
if not url.startswith('http:'):
url = 'http://online.wsj.com'+url
title = self.tag_to_string(a).strip()
if not title or title.startswith('['): continue
if title:
articles.append({'title':self.tag_to_string(a),
'url':url, 'description':'', 'date':''})
self.log('\t\tFound related:', title)
'''
return articles
def cleanup(self):
self.browser.open('http://online.wsj.com/logout?url=http://online.wsj.com')
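In parse_index above, the What's News feed URL is derived from the front-page section URL by a plain string replace. A tiny standalone illustration (the example URL path is hypothetical):

```python
# Standalone illustration of the URL derivation in parse_index above;
# the example URL path is hypothetical.
def whatsnews_url(pageone_url):
    """Derive the What's News feed URL from the pageone section URL."""
    return pageone_url.replace('pageone', 'whatsnews')

print(whatsnews_url('http://online.wsj.com/itp/today/us/pageone'))
# http://online.wsj.com/itp/today/us/whatsnews
```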


@@ -2,7 +2,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = 'calibre'
__version__ = '0.7.2'
__version__ = '0.7.4'
__author__ = "Kovid Goyal <kovid@kovidgoyal.net>"
import re


@@ -6,8 +6,7 @@ __docformat__ = 'restructuredtext en'
import cStringIO, ctypes, datetime, os, re, shutil, subprocess, sys, tempfile, time
from calibre.constants import DEBUG
from calibre.constants import __appname__, __version__, DEBUG
from calibre import fit_image
from calibre.constants import isosx, iswindows
from calibre.devices.errors import UserFeedback
@@ -79,7 +78,7 @@ class ITUNES(DevicePlugin):
supported_platforms = ['osx','windows']
author = 'GRiker'
#: The version of this plugin as a 3-tuple (major, minor, revision)
version = (0,6,0)
version = (0,7,0)
OPEN_FEEDBACK_MESSAGE = _(
'Apple device detected, launching iTunes, please wait ...')
@@ -160,6 +159,7 @@ class ITUNES(DevicePlugin):
sources = None
update_msg = None
update_needed = False
use_series_data = True
# Public methods
def add_books_to_metadata(self, locations, metadata, booklists):
@@ -293,7 +293,7 @@ class ITUNES(DevicePlugin):
'author':[book.artist()],
'lib_book':library_books[this_book.path] if this_book.path in library_books else None,
'dev_book':book,
'uuid': book.album()
'uuid': book.composer()
}
if self.report_progress is not None:
@@ -329,7 +329,7 @@
'title':book.Name,
'author':book.Artist,
'lib_book':library_books[this_book.path] if this_book.path in library_books else None,
'uuid': book.Album
'uuid': book.Composer
}
if self.report_progress is not None:
@@ -398,7 +398,7 @@
attempts -= 1
time.sleep(0.5)
if DEBUG:
self.log.warning(" waiting for identified iPad, attempt #%d" % (10 - attempts))
self.log.warning(" waiting for connected iPad, attempt #%d" % (10 - attempts))
else:
if DEBUG:
self.log.info(' found connected iPad')
@@ -474,7 +474,7 @@
attempts -= 1
time.sleep(0.5)
if DEBUG:
self.log.warning(" waiting for identified iPad, attempt #%d" % (10 - attempts))
self.log.warning(" waiting for connected iPad, attempt #%d" % (10 - attempts))
else:
if DEBUG:
self.log.info(' found connected iPad in iTunes')
@@ -693,6 +693,8 @@
# Purge the booklist, self.cached_books
for i,bl_book in enumerate(booklists[0]):
if False:
self.log.info(" evaluating '%s'" % bl_book.uuid)
if bl_book.uuid == self.cached_books[path]['uuid']:
# Remove from booklists[0]
booklists[0].pop(i)
@@ -703,6 +705,10 @@
break
break
if False:
self._dump_booklist(booklists[0], indent = 2)
self._dump_cached_books(indent=2)
def reset(self, key='-1', log_packets=False, report_progress=None,
detected_device=None) :
"""
@@ -1061,7 +1067,7 @@
except:
if DEBUG:
self.log.warning(" iTunes automation interface reported an error"
" when adding artwork to '%s'" % metadata.title)
" when adding artwork to '%s' on the iDevice" % metadata.title)
#import traceback
#traceback.print_exc()
#from calibre import ipython
@@ -1264,11 +1270,11 @@
def _dump_cached_book(self, cached_book, header=None,indent=0):
'''
'''
if isosx:
if header:
msg = '%s%s' % (' '*indent,header)
self.log.info(msg)
self.log.info( "%s%s" % (' '*indent, '-' * len(msg)))
if isosx:
self.log.info("%s%-40.40s %-30.30s %-10.10s %-10.10s %s" %
(' '*indent,
'title',
@@ -1284,14 +1290,17 @@
str(cached_book['dev_book'])[-9:],
cached_book['uuid']))
elif iswindows:
if header:
msg = '%s%s' % (' '*indent,header)
self.log.info(msg)
self.log.info( "%s%s" % (' '*indent, '-' * len(msg)))
self.log.info("%s%-40.40s %-30.30s %s" %
(' '*indent,
cached_book['title'],
cached_book['author'],
cached_book['uuid']))
self.log.info()
def _dump_cached_books(self, header=None, indent=0):
'''
'''
@@ -1415,18 +1424,20 @@
(search['uuid'], search['title'], search['author']))
attempts = 9
while attempts:
# Try by uuid
hits = dev_books.Search(search['uuid'],self.SearchField.index('Albums'))
# Try by uuid - only one hit
hits = dev_books.Search(search['uuid'],self.SearchField.index('All'))
if hits:
hit = hits[0]
self.log.info(" found '%s' by %s (%s)" % (hit.Name, hit.Artist, hit.Album))
self.log.info(" found '%s' by %s (%s)" % (hit.Name, hit.Artist, hit.Composer))
return hit
# Try by author
# Try by author - there could be multiple hits
hits = dev_books.Search(search['author'],self.SearchField.index('Artists'))
if hits:
hit = hits[0]
self.log.info(" found '%s' by %s" % (hit.Name, hit.Artist))
for hit in hits:
if hit.Name == search['title']:
if DEBUG:
self.log.info(" found '%s' by %s (%s)" % (hit.Name, hit.Artist, hit.Composer))
return hit
attempts -= 1
@@ -1438,19 +1449,19 @@
self.log.error(" no hits")
return None
def _find_library_book(self, cached_book):
def _find_library_book(self, search):
'''
Windows-only method to get a handle to a library book in the current pythoncom session
'''
if iswindows:
if DEBUG:
self.log.info(" ITUNES._find_library_book()")
if 'uuid' in cached_book:
if 'uuid' in search:
self.log.info(" looking for '%s' by %s (%s)" %
(cached_book['title'], cached_book['author'], cached_book['uuid']))
(search['title'], search['author'], search['uuid']))
else:
self.log.info(" looking for '%s' by %s" %
(cached_book['title'], cached_book['author']))
(search['title'], search['author']))
for source in self.iTunes.sources:
if source.Kind == self.Sources.index('Library'):
@@ -1477,21 +1488,25 @@
attempts = 9
while attempts:
# Find book whose Album field = cached_book['uuid']
if 'uuid' in cached_book:
hits = lib_books.Search(cached_book['uuid'],self.SearchField.index('Albums'))
# Find book whose Album field = search['uuid']
if 'uuid' in search:
if DEBUG:
self.log.info(" searching by uuid '%s' ..." % search['uuid'])
hits = lib_books.Search(search['uuid'],self.SearchField.index('All'))
if hits:
hit = hits[0]
if DEBUG:
self.log.info(" found '%s' by %s (%s)" % (hit.Name, hit.Artist, hit.Album))
self.log.info(" found '%s' by %s (%s)" % (hit.Name, hit.Artist, hit.Composer))
return hit
hits = lib_books.Search(cached_book['author'],self.SearchField.index('Artists'))
if hits:
hit = hits[0]
if hit.Name == cached_book['title']:
if DEBUG:
self.log.info(" found '%s' by %s (%s)" % (hit.Name, hit.Artist, hit.Album))
self.log.info(" searching by author '%s' ..." % search['author'])
hits = lib_books.Search(search['author'],self.SearchField.index('Artists'))
if hits:
for hit in hits:
if hit.Name == search['title']:
if DEBUG:
self.log.info(" found '%s' by %s (%s)" % (hit.Name, hit.Artist, hit.Composer))
return hit
attempts -= 1
@@ -1500,7 +1515,7 @@
self.log.warning(" attempt #%d" % (10 - attempts))
if DEBUG:
self.log.error(" search for '%s' yielded no hits" % cached_book['title'])
self.log.error(" search for '%s' yielded no hits" % search['title'])
return None
def _generate_thumbnail(self, book_path, book):
@@ -1542,6 +1557,10 @@
return thumb.getvalue()
except:
self.log.error(" error generating thumb for '%s'" % book.name())
try:
zfw.close()
except:
pass
return None
elif iswindows:
@@ -1571,6 +1590,10 @@
return thumb.getvalue()
except:
self.log.error(" error generating thumb for '%s'" % book.Name)
try:
zfw.close()
except:
pass
return None
def _get_device_book_size(self, file, compressed_size):
@ -1617,7 +1640,7 @@ class ITUNES(DevicePlugin):
self.log.info(" ignoring '%s' of type '%s'" % (book.name(), book.kind()))
else:
if DEBUG:
self.log.info(" %-30.30s %-30.30s %s [%s]" %
self.log.info(" %-30.30s %-30.30s %-40.40s [%s]" %
(book.name(), book.artist(), book.album(), book.kind()))
device_books.append(book)
if DEBUG:
@@ -1649,7 +1672,7 @@
self.log.info(" ignoring '%s' of type '%s'" % (book.Name, book.KindAsString))
else:
if DEBUG:
self.log.info(" %-30.30s %-30.30s %s [%s]" % (book.Name, book.Artist, book.Album, book.KindAsString))
self.log.info(" %-30.30s %-30.30s %-40.40s [%s]" % (book.Name, book.Artist, book.Album, book.KindAsString))
device_books.append(book)
if DEBUG:
self.log.info()
@@ -1663,8 +1686,6 @@
'''
assumes pythoncom wrapper
'''
# if DEBUG:
# self.log.info(" ITUNES._get_device_books_playlist()")
if iswindows:
if 'iPod' in self.sources:
pl = None
@@ -1707,11 +1728,6 @@
if update_md:
self._update_epub_metadata(fpath, metadata)
# if DEBUG:
# self.log.info(" metadata before rewrite: '{0[0]}' '{0[1]}' '{0[2]}'".format(self._dump_epub_metadata(fpath)))
# self._update_epub_metadata(fpath, metadata)
# if DEBUG:
# self.log.info(" metadata after rewrite: '{0[0]}' '{0[1]}' '{0[2]}'".format(self._dump_epub_metadata(fpath)))
return fpath
def _get_library_books(self):
@@ -1766,7 +1782,7 @@ class ITUNES(DevicePlugin):
library_books[path] = book
if DEBUG:
self.log.info(" %-30.30s %-30.30s %s [%s]" % (book.name(), book.artist(), book.album(), book.kind()))
self.log.info(" %-30.30s %-30.30s %-40.40s [%s]" % (book.name(), book.artist(), book.album(), book.kind()))
else:
if DEBUG:
self.log.info(' no Library playlists')
@@ -1819,7 +1835,7 @@ class ITUNES(DevicePlugin):
library_books[path] = book
if DEBUG:
self.log.info(" %-30.30s %-30.30s %s [%s]" % (book.Name, book.Artist, book.Album, book.KindAsString))
self.log.info(" %-30.30s %-30.30s %-40.40s [%s]" % (book.Name, book.Artist, book.Album, book.KindAsString))
except:
if DEBUG:
self.log.info(" no books in library")
@@ -1852,8 +1868,12 @@ class ITUNES(DevicePlugin):
Check for >1 iPod device connected to iTunes
'''
if isosx:
try:
names = [s.name() for s in self.iTunes.sources()]
kinds = [str(s.kind()).rpartition('.')[2] for s in self.iTunes.sources()]
except:
# User probably quit iTunes
return {}
elif iswindows:
# Assumes a pythoncom wrapper
it_sources = ['Unknown','Library','iPod','AudioCD','MP3CD','Device','RadioTuner','SharedLibrary']
@@ -1912,6 +1932,7 @@ class ITUNES(DevicePlugin):
self.log.error(" could not confirm valid iTunes.media_dir from %s" % 'com.apple.itunes')
self.log.error(" media_dir: %s" % media_dir)
if DEBUG:
self.log.info(" %s %s" % (__appname__, __version__))
self.log.info(" [OSX %s - %s (%s), driver version %d.%d.%d]" %
(self.iTunes.name(), self.iTunes.version(), self.initial_status,
self.version[0],self.version[1],self.version[2]))
@@ -1941,6 +1962,7 @@ class ITUNES(DevicePlugin):
self.log.error(" '%s' not found" % media_dir)
if DEBUG:
self.log.info(" %s %s" % (__appname__, __version__))
self.log.info(" [Windows %s - %s (%s), driver version %d.%d.%d]" %
(self.iTunes.Windows[0].name, self.iTunes.Version, self.initial_status,
self.version[0],self.version[1],self.version[2]))
@@ -2028,7 +2050,7 @@ class ITUNES(DevicePlugin):
elif iswindows:
dev_pl = self._get_device_books_playlist()
hits = dev_pl.Search(cached_book['uuid'],self.SearchField.index('Albums'))
hits = dev_pl.Search(cached_book['uuid'],self.SearchField.index('All'))
if hits:
hit = hits[0]
if False:
@@ -2082,7 +2104,7 @@ class ITUNES(DevicePlugin):
self.iTunes.delete(cached_book['lib_book'])
except:
if DEBUG:
self.log.info(" '%s' not found in iTunes" % cached_book['title'])
self.log.info(" unable to remove '%s' from iTunes" % cached_book['title'])
elif iswindows:
'''
@@ -2094,13 +2116,14 @@ class ITUNES(DevicePlugin):
path = book.Location
except:
book = self._find_library_book(cached_book)
path = book.Location
if book:
storage_path = os.path.split(book.Location)
if book.Location.startswith(self.iTunes_media):
storage_path = os.path.split(path)
if path.startswith(self.iTunes_media):
if DEBUG:
self.log.info(" removing '%s' at %s" %
(cached_book['title'], book.Location))
(cached_book['title'], path))
try:
os.remove(path)
except:
@@ -2121,7 +2144,7 @@ class ITUNES(DevicePlugin):
book.Delete()
except:
if DEBUG:
self.log.info(" '%s' not found in iTunes" % cached_book['title'])
self.log.info(" unable to remove '%s' from iTunes" % cached_book['title'])
def _update_epub_metadata(self, fpath, metadata):
'''
@@ -2130,21 +2153,6 @@ class ITUNES(DevicePlugin):
# Refresh epub metadata
with open(fpath,'r+b') as zfo:
'''
# Touch the timestamp to force a recache
if metadata.timestamp:
if DEBUG:
self.log.info(" old timestamp: %s" % metadata.timestamp)
old_ts = metadata.timestamp
metadata.timestamp = datetime.datetime(old_ts.year, old_ts.month, old_ts.day, old_ts.hour,
old_ts.minute, old_ts.second, old_ts.microsecond+1, old_ts.tzinfo)
if DEBUG:
self.log.info(" new timestamp: %s" % metadata.timestamp)
else:
metadata.timestamp = isoformat(now())
if DEBUG:
self.log.info(" add timestamp: %s" % metadata.timestamp)
'''
# Touch the OPF timestamp
zf_opf = ZipFile(fpath,'r')
fnames = zf_opf.namelist()
@@ -2243,14 +2251,16 @@ class ITUNES(DevicePlugin):
if isosx:
if lb_added:
lb_added.album.set(metadata.uuid)
lb_added.album.set(metadata.title)
lb_added.composer.set(metadata.uuid)
lb_added.description.set("%s %s" % (self.description_prefix,strftime('%Y-%m-%d %H:%M:%S')))
lb_added.enabled.set(True)
lb_added.sort_artist.set(metadata.author_sort.title())
lb_added.sort_name.set(this_book.title_sorter)
if db_added:
db_added.album.set(metadata.uuid)
db_added.album.set(metadata.title)
db_added.composer.set(metadata.uuid)
db_added.description.set("%s %s" % (self.description_prefix,strftime('%Y-%m-%d %H:%M:%S')))
db_added.enabled.set(True)
db_added.sort_artist.set(metadata.author_sort.title())
@@ -2273,16 +2283,20 @@ class ITUNES(DevicePlugin):
pass
# Set genre from series if available, else first alpha tag
# Otherwise iTunes grabs the first dc:subject from the opf metadata,
if metadata.series:
# Otherwise iTunes grabs the first dc:subject from the opf metadata
if self.use_series_data and metadata.series:
if lb_added:
lb_added.sort_name.set("%s %03d" % (metadata.series, metadata.series_index))
lb_added.genre.set(metadata.series)
lb_added.episode_ID.set(metadata.series)
lb_added.episode_number.set(metadata.series_index)
if db_added:
db_added.sort_name.set("%s %03d" % (metadata.series, metadata.series_index))
db_added.genre.set(metadata.series)
db_added.episode_ID.set(metadata.series)
db_added.episode_number.set(metadata.series_index)
elif metadata.tags:
for tag in metadata.tags:
if self._is_alpha(tag[0]):
@@ -2294,14 +2308,16 @@ class ITUNES(DevicePlugin):
elif iswindows:
if lb_added:
lb_added.Album = metadata.uuid
lb_added.Album = metadata.title
lb_added.Composer = metadata.uuid
lb_added.Description = ("%s %s" % (self.description_prefix,strftime('%Y-%m-%d %H:%M:%S')))
lb_added.Enabled = True
lb_added.SortArtist = (metadata.author_sort.title())
lb_added.SortName = (this_book.title_sorter)
if db_added:
db_added.Album = metadata.uuid
db_added.Album = metadata.title
db_added.Composer = metadata.uuid
db_added.Description = ("%s %s" % (self.description_prefix,strftime('%Y-%m-%d %H:%M:%S')))
db_added.Enabled = True
db_added.SortArtist = (metadata.author_sort.title())
@@ -2323,36 +2339,38 @@ class ITUNES(DevicePlugin):
except:
if DEBUG:
self.log.warning(" iTunes automation interface reported an error"
" setting AlbumRating")
" setting AlbumRating on iDevice")
# Set Category from first alpha tag, overwrite with series if available
# Set Genre from first alpha tag, overwrite with series if available
# Otherwise iBooks uses first <dc:subject> from opf
# iTunes balks on setting EpisodeNumber, but it sticks (9.1.1.12)
if metadata.series:
if self.use_series_data and metadata.series:
if lb_added:
lb_added.Category = metadata.series
lb_added.SortName = "%s %03d" % (metadata.series, metadata.series_index)
lb_added.Genre = metadata.series
lb_added.EpisodeID = metadata.series
try:
lb_added.EpisodeNumber = metadata.series_index
except:
pass
if db_added:
db_added.Category = metadata.series
db_added.SortName = "%s %03d" % (metadata.series, metadata.series_index)
db_added.Genre = metadata.series
db_added.EpisodeID = metadata.series
try:
db_added.EpisodeNumber = metadata.series_index
except:
if DEBUG:
self.log.warning(" iTunes automation interface reported an error"
" setting EpisodeNumber")
" setting EpisodeNumber on iDevice")
elif metadata.tags:
for tag in metadata.tags:
if self._is_alpha(tag[0]):
if lb_added:
lb_added.Category = tag
lb_added.Genre = tag
if db_added:
db_added.Category = tag
db_added.Genre = tag
break


@@ -93,10 +93,16 @@ class CoverView(QWidget): # {{{
self._current_pixmap_size = val
def do_layout(self):
if self.rect().width() == 0 or self.rect().height() == 0:
return
pixmap = self.pixmap
pwidth, pheight = pixmap.width(), pixmap.height()
try:
self.pwidth, self.pheight = fit_image(pwidth, pheight,
self.rect().width(), self.rect().height())[1:]
except:
self.pwidth, self.pheight = self.rect().width()-1, \
self.rect().height()-1
self.current_pixmap_size = QSize(self.pwidth, self.pheight)
self.animation.setEndValue(self.current_pixmap_size)
@@ -120,7 +126,8 @@ class CoverView(QWidget): # {{{
self.data = {'id':data.get('id', None)}
if data.has_key('cover'):
self.pixmap = QPixmap.fromImage(data.pop('cover'))
if self.pixmap.isNull():
if self.pixmap.isNull() or self.pixmap.width() < 5 or \
self.pixmap.height() < 5:
self.pixmap = self.default_pixmap
else:
self.pixmap = self.default_pixmap
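The `do_layout` change above guards against a zero-size widget rect and falls back to the rect size when `fit_image` fails. The calculation being attempted can be sketched as an aspect-preserving downscale; `fit_rect` below is a hypothetical helper (calibre's real `fit_image` has a different signature and return value):

```python
def fit_rect(width, height, max_width, max_height):
    """Scale (width, height) down to fit inside the bounding box
    while preserving aspect ratio; never scales up. A sketch of the
    fit computed for the cover pixmap, not calibre's fit_image API.
    """
    if width <= 0 or height <= 0:
        # Corresponds to the zero-size-cover case handled above
        raise ValueError('zero-size image')
    scale = min(max_width / float(width), max_height / float(height), 1.0)
    return int(width * scale), int(height * scale)
```

A zero-size cover would make the scale computation divide by zero, which is why the new code both rejects tiny pixmaps at load time and wraps the fit in a try/except.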


@@ -205,8 +205,8 @@ class CoverFlowMixin(object):
sm.select(index, sm.ClearAndSelect|sm.Rows)
self.library_view.setCurrentIndex(index)
except:
pass
import traceback
traceback.print_exc()
def sync_listview_to_cf(self, row):
self.cf_last_updated_at = time.time()


@@ -1355,10 +1355,12 @@ class DeviceMixin(object): # {{{
# library view. In this case, simply give up
if not hasattr(self, 'library_view') or self.library_view is None:
return
db = getattr(self.library_view.model(), 'db', None)
if db is None:
return
# Build a cache (map) of the library, so the search isn't O(n**2)
self.db_book_title_cache = {}
self.db_book_uuid_cache = {}
db = self.library_view.model().db
for id in db.data.iterallids():
mi = db.get_metadata(id, index_is_id=True)
title = re.sub('(?u)\W|[_]', '', mi.title.lower())
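The cache above keys books by a normalized title so device-to-library matching is a dict lookup instead of a quadratic scan. A self-contained sketch of that normalization (the dict values here are plain titles standing in for the real metadata objects):

```python
import re

def normalize_title(title):
    """Collapse a title to a matching key the same way the cache
    above does: lowercase, then strip whitespace, punctuation and
    underscores, so 'The Time-Machine!' and 'the time machine'
    produce the same key."""
    return re.sub(r'(?u)\W|[_]', '', title.lower())

def build_title_cache(titles):
    # Map normalized key -> list of original titles; the driver
    # stores richer per-book metadata, but the shape is the same.
    cache = {}
    for t in titles:
        cache.setdefault(normalize_title(t), []).append(t)
    return cache
```

With the cache in place, matching a device book against the library is O(1) per book rather than a search over every library row.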


@@ -371,7 +371,7 @@ class BooksView(QTableView): # {{{
# Context Menu {{{
def set_context_menu(self, edit_metadata, send_to_device, convert, view,
save, open_folder, book_details, delete,
add_to_library, similar_menu=None):
similar_menu=None, add_to_library=None):
self.setContextMenuPolicy(Qt.DefaultContextMenu)
self.context_menu = QMenu(self)
if edit_metadata is not None:


@@ -496,6 +496,7 @@ int PictureFlowPrivate::currentSlide() const
void PictureFlowPrivate::setCurrentSlide(int index)
{
animateTimer.stop();
step = 0;
centerIndex = qBound(index, 0, slideImages->count()-1);
target = centerIndex;


@@ -506,7 +506,7 @@ class CustomColumns(object):
ratings as r
WHERE {lt}.value={table}.id and bl.book={lt}.book and
r.id = bl.rating and r.rating <> 0) avg_rating,
value as sort
value AS sort
FROM {table};
CREATE VIEW tag_browser_filtered_{table} AS SELECT
@@ -521,7 +521,7 @@ class CustomColumns(object):
WHERE {lt}.value={table}.id AND bl.book={lt}.book AND
r.id = bl.rating AND r.rating <> 0 AND
books_list_filter(bl.book)) avg_rating,
value as sort
value AS sort
FROM {table};
'''.format(lt=lt, table=table),
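The `tag_browser_*` views above compute a per-item average rating while excluding zero (unrated) entries via `r.rating <> 0`. A small runnable illustration of that aggregation with `sqlite3`; the table and column names are simplified assumptions, not calibre's actual schema:

```python
import sqlite3

# Minimal stand-in schema: books linked to tags, plus ratings.
con = sqlite3.connect(':memory:')
con.executescript('''
    CREATE TABLE tags (id INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE books_tags_link (book INTEGER, tag INTEGER);
    CREATE TABLE ratings (book INTEGER, rating INTEGER);
    INSERT INTO tags VALUES (1, 'Fiction');
    INSERT INTO books_tags_link VALUES (10, 1), (11, 1), (12, 1);
    INSERT INTO ratings VALUES (10, 8), (11, 6), (12, 0);
''')
# Average only non-zero ratings, mirroring the views' r.rating <> 0
row = con.execute('''
    SELECT t.value, AVG(r.rating)
    FROM tags t
    JOIN books_tags_link bl ON bl.tag = t.id
    JOIN ratings r ON r.book = bl.book AND r.rating <> 0
    GROUP BY t.id
''').fetchone()
print(row)  # ('Fiction', 7.0) -- the zero rating is excluded
```

Without the `<> 0` filter, unrated books would drag every average toward zero, which is why the views exclude them before aggregating.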

File diff suppressed because it is too large