mirror of https://github.com/kovidgoyal/calibre.git
synced 2025-07-09 03:04:10 -04:00

0.8.38
This commit is contained in:
commit 68e0e3a81d

133 Changelog.yaml
@@ -19,6 +19,139 @@
# new recipes:
# - title:

- version: 0.8.38
  date: 2012-02-03

  new features:
    - title: "Implement the ability to automatically add books to calibre from a specified folder."
      type: major
      description: "calibre can now watch a folder on your computer and instantly add any files you put there to the calibre library as new books. You can tell calibre which folder to watch via Preferences->Adding Books->Automatic Adding."
      tickets: [920249]

    - title: "Conversion: When automatically inserting page breaks, do not put a page break before a <h1> or <h2> tag if it is immediately preceded by another <h1> or <h2> tag."

    - title: "Driver for EZReader T730 and Point-of-View PlayTab Pro"
      tickets: [923283, 922969]

  bug fixes:
    - title: "Fix device entry not visible in menubar even when it has been added via Preferences->Toolbars."
      tickets: [923175]

    - title: "Fix metadata plugboards not applied when auto sending news by email"

    - title: "Fix regression in 0.8.34 that broke recipes that used skip_ad_pages() but not get_browser()."
      tickets: [923724]

    - title: "Restore device support on FreeBSD, by using HAL"
      tickets: [924503]

    - title: "Get books: Show no more than 10 results from the Gandalf store"

    - title: "Content server: Fix metadata not being updated when sending for some MOBI files."
      tickets: [923130]

    - title: "Heuristic processing: Fix the italicize common patterns algorithm breaking on some HTML markup."
      tickets: [922317]

    - title: "When trying to find an ebook inside a zip file, do not fail if the zip file itself contains other zip files."
      tickets: [925670]

    - title: "EPUB Input: Handle EPUBs with duplicate entries in the manifest."
      tickets: [925831]

    - title: "MOBI Input: Handle files that have extra </html> tags sprinkled throughout their markup."
      tickets: [925833]

  improved recipes:
    - Metro Nieuws NL
    - FHM UK

  new recipes:
    - title: Strange Horizons
      author: Jim DeVona

    - title: Telegraph India and Live Mint
      author: Krittika Goyal

    - title: High Country News
      author: Armin Geller

    - title: Countryfile
      author: Dave Asbury

    - title: Liberation (subscription version)
      author: Remi Vanicat

    - title: Various Italian news sources
      author: faber1971

- version: 0.8.37
  date: 2012-01-27

  new features:
    - title: "Allow calibre to be run simultaneously in two different user accounts on Windows."
      tickets: [919856]

    - title: "Driver for Motorola Photon and Point of View PlayTab"
      tickets: [920582, 919080]

    - title: "Add a checkbox to Preferences->Plugins to show only user installed plugins"

    - title: "Add a restart calibre button to the warning dialog that pops up after changing a preference that requires a restart"

  bug fixes:
    - title: "Fix regression in 0.8.36 that caused the remove format from book function to only delete the entry from the database and not delete the actual file from disk"
      tickets: [921721]

    - title: "Fix regression in 0.8.36 that caused the calibredb command to not properly refresh the format information in the GUI"
      tickets: [919494]

    - title: "E-book viewer: Preserve the current position more accurately when changing font size/other preferences."
      tickets: [912406]

    - title: "Conversion pipeline: Fix items in the <guide> that refer to files with URL-unsafe filenames being ignored."
      tickets: [920804]

    - title: "Fix calibre not running on Linux systems that set LANG to an empty string"

    - title: "On first run of calibre, ensure the columns are sized appropriately"

    - title: "MOBI Output: Do not collapse whitespace when setting the comments metadata in newly created MOBI files"

    - title: "HTML Input: Fix handling of files with ä characters in their filenames."
      tickets: [919931]

    - title: "Fix the sort on startup tweak ignoring more than three levels"
      tickets: [919584]

    - title: "Edit metadata dialog: Fix a bug that broke adding a file to the book that calibre did not previously know about in the books directory while simultaneously changing the author or title of the book."
      tickets: [922003]

  improved recipes:
    - People's Daily
    - Plus Info
    - grantland.com
    - Elet es irodalom
    - Sueddeutsche.de

  new recipes:
    - title: Mumbai Mirror
      author: Krittika Goyal

    - title: Real Clear
      author: TMcN

    - title: Gazeta Wyborcza
      author: ravcio

    - title: The Daily News Egypt and al masry al youm
      author: Omm Mishmishah

    - title: Klip.me
      author: Ken Sun

- version: 0.8.36
  date: 2012-01-20
16 recipes/beppe_grillo.recipe Normal file
@@ -0,0 +1,16 @@
__license__ = 'GPL v3'

from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1327747616(BasicNewsRecipe):
    title = u'Beppe Grillo'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True

    feeds = [(u'Beppe Grillo', u'http://feeds.feedburner.com/beppegrillo/atom')]
    description = 'Blog of the famous comedian and politician Beppe Grillo - v1.00 (28, January 2012)'
    __author__ = 'faber1971'

    language = 'it'
@@ -77,8 +77,18 @@ class ChicagoTribune(BasicNewsRecipe):

     def get_article_url(self, article):
-        print article.get('feedburner_origlink', article.get('guid', article.get('link')))
-        return article.get('feedburner_origlink', article.get('guid', article.get('link')))
+        url = article.get('feedburner_origlink', article.get('guid', article.get('link')))
+        if url.endswith('?track=rss'):
+            url = url.partition('?')[0]
+        return url
+
+    def skip_ad_pages(self, soup):
+        text = soup.find(text='click here to continue to article')
+        if text:
+            a = text.parent
+            url = a.get('href')
+            if url:
+                return self.index_to_soup(url, raw=True)

     def postprocess_html(self, soup, first_fetch):
         # Remove the navigation bar. It was kept until now to be able to follow
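The get_article_url change in this hunk swaps a debug print for stripping the `?track=rss` tracking suffix via `str.partition`. A minimal standalone sketch of that stripping logic (the URLs are made-up examples, and the function name is hypothetical):

```python
def strip_tracking(url):
    # Only strip when the URL carries the known '?track=rss' suffix,
    # mirroring the recipe's conservative endswith() check.
    # str.partition('?') splits at the first '?'; keeping element [0]
    # drops the query string.
    if url.endswith('?track=rss'):
        url = url.partition('?')[0]
    return url

print(strip_tracking('http://example.com/story.html?track=rss'))  # query removed
print(strip_tracking('http://example.com/story.html'))            # unchanged
```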
25 recipes/countryfile.recipe Normal file
@@ -0,0 +1,25 @@
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1325006965(BasicNewsRecipe):
    title = u'Countryfile.com'
    cover_url = 'http://www.buysubscriptions.com/static_content/the-immediate/en/images/covers/CFIL_maxi.jpg'
    __author__ = 'Dave Asbury'
    description = 'The official website of Countryfile Magazine'
    # last updated 29/1/12
    language = 'en_GB'
    oldest_article = 30
    max_articles_per_feed = 25
    remove_empty_feeds = True
    no_stylesheets = True
    auto_cleanup = True
    #articles_are_obfuscated = True

    remove_tags = [
        # dict(attrs={'class' : ['player']}),
    ]
    feeds = [
        (u'Homepage', u'http://www.countryfile.com/rss/home'),
        (u'Country News', u'http://www.countryfile.com/rss/news'),
        (u'Countryside', u'http://www.countryfile.com/rss/countryside'),
    ]
@@ -1,16 +1,16 @@
 ################################################################################
 #Description: http://es.hu/ RSS channel
 #Author: Bigpapa (bigpapabig@hotmail.com)
-#Date: 2010.12.01. - V1.0
+#Date: 2012.01.20. - V1.2
 ################################################################################

 from calibre.web.feeds.recipes import BasicNewsRecipe

 class elet_es_irodalom(BasicNewsRecipe):
-    title = u'Elet es Irodalom'
+    title = u'\u00c9let \u00e9s Irodalom'
     __author__ = 'Bigpapa'
     oldest_article = 7
-    max_articles_per_feed = 20 # Maximum number of articles stored per feed in the e-book.
+    max_articles_per_feed = 30 # Maximum number of articles stored per feed in the e-book.
     no_stylesheets = True
     #delay = 1
     use_embedded_content = False
@@ -19,21 +19,32 @@ class elet_es_irodalom(BasicNewsRecipe):
     language = 'hu'
     publication_type = 'newsportal'
     extra_css = '.doc_title { font: bold 30px } .doc_author {font: bold 14px} '
+    needs_subscription = 'optional'
+
+    masthead_url = 'http://www.es.hu/images/logo.jpg'
+    timefmt = ' [%Y %b %d, %a]'
+
+    # Do not put your login credentials here in the code; you supply them when downloading.
+    def get_browser(self):
+        br = BasicNewsRecipe.get_browser()
+        if self.username is not None and self.password is not None:
+            br.open('http://www.es.hu/')
+            br.select_form(name='userfrmlogin')
+            br['cusername'] = self.username
+            br['cpassword'] = self.password
+            br.submit()
+        return br
+
     keep_only_tags = [
         dict(name='div', attrs={'class':['doc_author', 'doc_title', 'doc']})
     ]

     remove_tags = [
         dict(name='a', attrs={'target':['_TOP']}),
         dict(name='div', attrs={'style':['float: right; margin-left: 5px; margin-bottom: 5px;', 'float: right; margin-left: 5px; margin-bottom: 5px;']}),
     ]

     feeds = [
         (u'Publicisztika', 'http://www.feed43.com/4684235031168504.xml'),
         (u'Interj\xfa', 'http://www.feed43.com/4032465460040618.xml'),
@@ -44,5 +55,4 @@ class elet_es_irodalom(BasicNewsRecipe):
         (u'Vers', 'http://www.feed43.com/1737324675134275.xml'),
         (u'K\xf6nyvkritika', 'http://www.feed43.com/1281156550717082.xml'),
         (u'M\u0171b\xedr\xe1lat', 'http://www.feed43.com/1851854623681044.xml')
-
     ]
@@ -6,7 +6,7 @@ class AdvancedUserRecipe1325006965(BasicNewsRecipe):
     cover_url = 'http://profile.ak.fbcdn.net/hprofile-ak-snc4/373529_38324934806_64930243_n.jpg'
     masthead_url = 'http://www.fhm.com/App_Resources/Images/Site/re-design/logo.gif'
     __author__ = 'Dave Asbury'
-    # last updated 27/12/11
+    # last updated 27/1/12
     language = 'en_GB'
     oldest_article = 28
     max_articles_per_feed = 12
@@ -22,9 +22,13 @@ class AdvancedUserRecipe1325006965(BasicNewsRecipe):

     ]

+    #remove_tags = [
+    #    dict(attrs={'class' : ['player']}),
+    #]
     feeds = [
         (u'From the Homepage',u'http://feed43.com/8053226782885416.xml'),
+        (u'Funny - The Very Best Of The Internet',u'http://feed43.com/4538510106331565.xml'),
         (u'The Final Countdown', u'http://feed43.com/3576106158530118.xml'),
         (u'Gaming',u'http://feed43.com/0755006465351035.xml'),
     ]
@@ -7,40 +7,35 @@ class GrantLand(BasicNewsRecipe):
     language = 'en'
     __author__ = 'barty on mobileread.com forum'
     max_articles_per_feed = 100
-    no_stylesheets = False
+    no_stylesheets = True
     # auto_cleanup is too aggressive sometimes and we end up with blank articles
     auto_cleanup = False
     timefmt = ' [%a, %d %b %Y]'
-    oldest_article = 365
+    oldest_article = 90

     cover_url = 'http://cdn0.sbnation.com/imported_assets/740965/blog_grantland_grid_3.jpg'
     masthead_url = 'http://a1.espncdn.com/prod/assets/grantland/grantland-logo.jpg'

     INDEX = 'http://www.grantland.com'
     CATEGORIES = [
-        # comment out categories you don't want
+        # comment out second line if you don't want older articles
         # (user friendly name, url suffix, max number of articles to load)
         ('Today in Grantland','',20),
         ('In Case You Missed It','incaseyoumissedit',35),
     ]

     remove_tags = [
-        {'name':['head','style','script']},
-        {'id':['header']},
-        {'class':re.compile(r'\bside|\bad\b|floatright|tags')}
+        {'name':['style','aside','nav','footer','script']},
+        {'name':'h1','text':'Grantland'},
+        {'id':['header','col-right']},
+        {'class':['connect_widget']},
+        {'name':'section','class':re.compile(r'\b(ad|module)\b')},
     ]
-    remove_tags_before = {'class':'wrapper'}
-    remove_tags_after = [{'id':'content'}]

     preprocess_regexps = [
-        # <header> tags with an img inside are just blog banners, don't need them
-        # note: there are other useful <header> tags so we don't want to just strip all of them
-        (re.compile(r'<header class.+?<img .+?>.+?</header>', re.DOTALL|re.IGNORECASE),lambda m: ''),
-        # delete everything between the *last* <hr class="small" /> and </article>
-        (re.compile(r'<hr class="small"(?:(?!<hr class="small").)+</article>', re.DOTALL|re.IGNORECASE),lambda m: '<hr class="small" /></article>'),
+        # remove blog banners
+        (re.compile(r'<a href="/blog/(?:(?!</a>).)+</a>', re.DOTALL|re.IGNORECASE), lambda m: ''),
     ]
-    extra_css = """cite, time { font-size: 0.8em !important; margin-right: 1em !important; }
-        img + cite { display:block; text-align:right}"""

     def parse_index(self):
         feeds = []
@@ -54,45 +49,23 @@ class GrantLand(BasicNewsRecipe):

             page = "%s/%s" % (self.INDEX, tag)
             soup = self.index_to_soup(page)
-            headers = soup.findAll('h2' if tag=='' else 'h3')
-
-            for header in headers:
-                tag = header.find('a',href=True)
-                if tag is None:
-                    continue
+            main = soup.find('div',id='col-main')
+            if main is None:
+                main = soup
+
+            for tag in main.findAll('a', href=re.compile(r'(story|post)/_/id/\d+')):
                 url = tag['href']
                 if url in seen_urls:
                     continue
-                title = self.tag_to_string(tag)
-                if 'Podcast:' in title or 'In Case You Missed It' in title:
+                title = tag.string
+                # blank title probably means <a href=".."><img /></a>. skip
+                if not title:
                     continue
-                desc = dt = ''
-                # get at the div that contains description and other info
-                div = header.parent.find('div')
-                if div is not None:
-                    desc = self.tag_to_string(div)
-                    dt = div.find('time')
-                    if dt is not None:
-                        dt = self.tag_to_string(dt)
-
-                # if div contains the same url that is in h2/h3
-                # that means this is a series split into multiple articles
-                if div.find('a',href=url):
-                    self.log('\tFound series:', title)
-                    # grab all articles in series
-                    for tag in div.findAll('a',href=True):
-                        url = tag['href']
-                        if url in seen_urls:
-                            continue
-                        self.log('\t', url)
-                        seen_urls.add(url)
-                        articles.append({'title':title+' - '+self.tag_to_string(tag),
-                            'url':url,'description':desc,'date':dt})
-                else:
-                    self.log('\tFound article:', title)
-                    self.log('\t', url)
-                    seen_urls.add(url)
-                    articles.append({'title':title,'url':url,'description':desc,'date':dt})
+                self.log('\tFound article:', title)
+                self.log('\t', url)
+                articles.append({'title':title,'url':url})
+                seen_urls.add(url)

                 if len(articles) >= max_articles:
                     break
@@ -101,6 +74,3 @@ class GrantLand(BasicNewsRecipe):
         feeds.append((cat_name, articles))

         return feeds
-
-    def print_version(self, url):
-        return url+'?view=print'
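The new GrantLand preprocess_regexps entry relies on a "tempered dot" pattern: `(?:(?!</a>).)+` matches any run of characters that never crosses a closing `</a>`, so one whole blog-banner anchor is removed without the match overshooting into the next link. A minimal standalone sketch under made-up HTML:

```python
import re

# Same pattern as the recipe's preprocess_regexps entry: remove anchors
# whose href starts with /blog/, stopping at the first closing </a>.
banner_re = re.compile(r'<a href="/blog/(?:(?!</a>).)+</a>', re.DOTALL | re.IGNORECASE)

html = '<a href="/blog/x">banner</a><p>story</p><a href="/about">keep</a>'
cleaned = banner_re.sub('', html)
print(cleaned)  # the /blog/ anchor is gone, the /about link survives
```

A plain greedy `.+` with DOTALL would have swallowed everything up to the last `</a>` on the page; the negative lookahead keeps each match confined to a single banner.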
43 recipes/high_country_news.recipe Normal file
@@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
__license__ = 'GPL v3'
__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>, Armin Geller'

'''
Fetch High Country News
'''
from calibre.web.feeds.news import BasicNewsRecipe

class HighCountryNews(BasicNewsRecipe):

    title = u'High Country News'
    description = u'News from the American West'
    __author__ = 'Armin Geller' # 2012-01-31
    publisher = 'High Country News'
    timefmt = ' [%a, %d %b %Y]'
    language = 'en_US'
    encoding = 'UTF-8'
    publication_type = 'newspaper'
    oldest_article = 7
    max_articles_per_feed = 100
    no_stylesheets = True
    auto_cleanup = True
    remove_javascript = True
    use_embedded_content = False
    masthead_url = 'http://www.hcn.org/logo.jpg' # 2012-01-31 AGe add
    cover_source = 'http://www.hcn.org' # 2012-01-31 AGe add

    def get_cover_url(self): # 2012-01-31 AGe add
        cover_source_soup = self.index_to_soup(self.cover_source)
        preview_image_div = cover_source_soup.find(attrs={'class':' portaltype-Plone Site content--hcn template-homepage_view'})
        return preview_image_div.div.img['src']

    feeds = [
        (u'Most recent', u'http://feeds.feedburner.com/hcn/most-recent'),
        (u'Current Issue', u'http://feeds.feedburner.com/hcn/current-issue'),

        (u'Writers on the Range', u'http://feeds.feedburner.com/hcn/wotr'),
        (u'High Country Views', u'http://feeds.feedburner.com/hcn/HighCountryViews'),
    ]

    def print_version(self, url):
        return url + '/print_view'
15 recipes/la_voce.recipe Normal file
@@ -0,0 +1,15 @@
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1324114228(BasicNewsRecipe):
    title = u'La Voce'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True
    masthead_url = 'http://www.lavoce.info/binary/la_voce/testata/lavoce.1184661635.gif'
    feeds = [(u'La Voce', u'http://www.lavoce.info/feed_rss.php?id_feed=1')]
    __author__ = 'faber1971'
    description = 'Italian website on Economy - v1.01 (17, December 2011)'
    language = 'it'
103 recipes/liberation_sub.recipe Normal file
@@ -0,0 +1,103 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai

__license__ = 'GPL v3'
__copyright__ = '2012, Rémi Vanicat <vanicat at debian.org>'
'''
liberation.fr
'''
# The cleaning is from the Liberation recipe, by Darko Miletic

from calibre.web.feeds.news import BasicNewsRecipe

class Liberation(BasicNewsRecipe):

    title = u'Libération: Édition abonnés'
    __author__ = 'Rémi Vanicat'
    description = u'Actualités'
    category = 'Actualités, France, Monde'
    language = 'fr'
    needs_subscription = True

    use_embedded_content = False
    no_stylesheets = True
    remove_empty_feeds = True

    extra_css = '''
        h1, h2, h3 {font-size:xx-large; font-family:Arial,Helvetica,sans-serif;}
        p.subtitle {font-size:xx-small; font-family:Arial,Helvetica,sans-serif;}
        h4, h5, h2.rubrique {font-size:xx-small; color:#4D4D4D; font-family:Arial,Helvetica,sans-serif;}
        .ref, .date, .author, .legende {font-size:xx-small; color:#4D4D4D; font-family:Arial,Helvetica,sans-serif;}
        .mna-body, entry-body {font-size:medium; font-family:Arial,Helvetica,sans-serif;}
    '''

    keep_only_tags = [
        dict(name='div', attrs={'class':'article'})
        ,dict(name='div', attrs={'class':'text-article m-bot-s1'})
        ,dict(name='div', attrs={'class':'entry'})
        ,dict(name='div', attrs={'class':'col_contenu'})
    ]

    remove_tags_after = [
        dict(name='div',attrs={'class':['object-content text text-item', 'object-content', 'entry-content', 'col01', 'bloc_article_01']})
        ,dict(name='p',attrs={'class':['chapo']})
        ,dict(id='_twitter_facebook')
    ]

    remove_tags = [
        dict(name='iframe')
        ,dict(name='a', attrs={'class':'lnk-comments'})
        ,dict(name='div', attrs={'class':'toolbox'})
        ,dict(name='ul', attrs={'class':'share-box'})
        ,dict(name='ul', attrs={'class':'tool-box'})
        ,dict(name='ul', attrs={'class':'rub'})
        ,dict(name='p',attrs={'class':['chapo']})
        ,dict(name='p',attrs={'class':['tag']})
        ,dict(name='div',attrs={'class':['blokLies']})
        ,dict(name='div',attrs={'class':['alire']})
        ,dict(id='_twitter_facebook')
    ]

    index = 'http://www.liberation.fr/abonnes/'

    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        if self.username is not None and self.password is not None:
            br.open('http://www.liberation.fr/jogger/login/')
            br.select_form(nr=0)
            br['email'] = self.username
            br['password'] = self.password
            br.submit()
        return br

    def parse_index(self):
        soup = self.index_to_soup(self.index)

        content = soup.find('div', { 'class':'block-content' })

        articles = []
        cat_articles = []

        for tag in content.findAll(recursive=False):
            if tag['class'] == 'headrest headrest-basic-rounded':
                cat_articles = []
                articles.append((tag.find('h5').contents[0], cat_articles))
            else:
                title = tag.find('h3').contents[0]
                url = tag.find('a')['href']
                description = tag.find('p', {'class':'subtitle'}).contents[0]
                article = {
                    'title': title,
                    'url': url,
                    'description': description,
                    'content': ''
                }
                cat_articles.append(article)
        return articles

# Local Variables:
# mode: python
# End:
@@ -1,41 +1,26 @@
-#!/usr/bin/env python
-
-__license__ = 'GPL v3'
-__copyright__ = '2009, Darko Miletic <darko.miletic at gmail.com>'
-'''
-www.livemint.com
-'''
-
 from calibre.web.feeds.news import BasicNewsRecipe

 class LiveMint(BasicNewsRecipe):
-    title = u'Livemint'
-    __author__ = 'Darko Miletic'
-    description = 'The Wall Street Journal'
-    publisher = 'The Wall Street Journal'
-    category = 'news, games, adventure, technology'
-    language = 'en'
-
-    oldest_article = 15
-    max_articles_per_feed = 100
+    title = u'Live Mint'
+    language = 'en_IN'
+    __author__ = 'Krittika Goyal'
+    #encoding = 'cp1252'
+    oldest_article = 1 #days
+    max_articles_per_feed = 25
+    use_embedded_content = True
+
     no_stylesheets = True
-    encoding = 'utf-8'
-    use_embedded_content = False
-    extra_css = ' #dvArtheadline{font-size: x-large} #dvArtAbstract{font-size: large} '
-
-    keep_only_tags = [dict(name='div', attrs={'class':'innercontent'})]
-
-    remove_tags = [dict(name=['object','link','embed','form','iframe'])]
-
-    feeds = [(u'Articles', u'http://www.livemint.com/SectionRssfeed.aspx?Mid=1')]
-
-    def print_version(self, url):
-        link = url
-        msoup = self.index_to_soup(link)
-        mlink = msoup.find(attrs={'id':'ctl00_bodyplaceholdercontent_cntlArtTool_printUrl'})
-        if mlink:
-            link = 'http://www.livemint.com/Articles/' + mlink['href'].rpartition('/Articles/')[2]
-        return link
-
-    def preprocess_html(self, soup):
-        return self.adeify_images(soup)
+    auto_cleanup = True
+
+    feeds = [
+        ('Latest News',
+         'http://www.livemint.com/StoryRss.aspx?LN=Latestnews'),
+        ('Gallery',
+         'http://www.livemint.com/GalleryRssfeed.aspx'),
+        ('Top Stories',
+         'http://www.livemint.com/StoryRss.aspx?ts=Topstories'),
+        ('Banking',
+         'http://www.livemint.com/StoryRss.aspx?Id=104'),
+    ]
16 recipes/marketing_magazine.recipe Normal file
@@ -0,0 +1,16 @@
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1327062445(BasicNewsRecipe):
    title = u'Marketing Magazine'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True
    remove_javascript = True
    masthead_url = 'http://www.simrendeogun.com/wp-content/uploads/2011/06/New-Marketing-Magazine-Logo.jpg'
    feeds = [(u'My Marketing', u'http://feed43.com/0537744466058428.xml'), (u'My Marketing_', u'http://feed43.com/8126723074604845.xml'), (u'Venturini', u'http://robertoventurini.blogspot.com/feeds/posts/default?alt=rss'), (u'Ninja Marketing', u'http://feeds.feedburner.com/NinjaMarketing'), (u'Comunitàzione', u'http://www.comunitazione.it/feed/novita.asp'), (u'Brandforum news', u'http://www.brandforum.it/rss/news'), (u'Brandforum papers', u'http://www.brandforum.it/rss/papers'), (u'Disambiguando', u'http://giovannacosenza.wordpress.com/feed/')]
    __author__ = 'faber1971'
    description = 'Collection of Italian marketing websites - v1.00 (28, January 2012)'
    language = 'it'
@@ -38,18 +38,23 @@ except:
         removed keep_only tags
     Version 1.8 26-11-2011
         added remove tag: article-slideshow
+    Version 1.9 31-1-2012
+        removed some left-over debug settings
+        extended timeout from 2 to 10
+        changed oldest article from 10 to 1.2
+        changed max articles from 15 to 25
     '''

 class AdvancedUserRecipe1306097511(BasicNewsRecipe):
     title = u'Metro Nieuws NL'
-    oldest_article = 10
-    max_articles_per_feed = 15
+    oldest_article = 1.2
+    max_articles_per_feed = 25
     __author__ = u'DrMerry'
     description = u'Metro Nederland'
     language = u'nl'
-    simultaneous_downloads = 5
+    simultaneous_downloads = 3
     masthead_url = 'http://blog.metronieuws.nl/wp-content/themes/metro/images/header.gif'
-    timeout = 2
+    timeout = 10
     center_navbar = True
     timefmt = ' [%A, %d %b %Y]'
     no_stylesheets = True
|
59  recipes/mumbai_mirror.recipe  Normal file
@@ -0,0 +1,59 @@
from calibre.web.feeds.news import BasicNewsRecipe

class MumbaiMirror(BasicNewsRecipe):
    title = u'Mumbai Mirror'
    oldest_article = 2
    max_articles_per_feed = 100
    __author__ = 'Krittika Goyal'

    description = 'Mumbai Mirror, a Mumbai daily newspaper'
    language = 'en_IN'
    category = 'News, Mumbai, India'
    remove_javascript = True
    use_embedded_content = False
    auto_cleanup = True
    no_stylesheets = True
    conversion_options = {'linearize_tables':True}

    feeds = [
        ('Cover Story', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=latest'),
        ('City Diary', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=citydiary'),
        ('Columnists', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=mmcolumnists'),
        ('Mumbai, The City', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=city'),
        ('Nation', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=nation'),
        ('Top Stories', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=topstories'),
        ('Business', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=business'),
        ('World', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=world'),
        ('Chai Time', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=chaitime'),
        ('Technology', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=technology'),
        ('Entertainment', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=entertainment'),
        ('Style', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=style'),
        ('Ask the Sexpert', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=askthesexpert'),
        ('Television', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=television'),
        ('Lifestyle', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=lifestyle'),
        ('Sports', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=sports'),
        ('Travel: Travelers Diary', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=travellersdiaries'),
        ('Travel: Domestic', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=traveldomestic'),
        ('Travel: International', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=travelinternational')
    ]
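The feed list above is a plain sequence of (title, url) tuples, which is all `BasicNewsRecipe` needs. A minimal network-free sketch using a subset of the feeds above:

```python
# A subset of the Mumbai Mirror feed tuples, showing the (title, url) shape
# that BasicNewsRecipe consumes; every entry shares one endpoint and only
# varies the "feed" query parameter.
feeds = [
    ('Cover Story', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=latest'),
    ('City Diary', 'http://www.mumbaimirror.com/rssfeeds.aspx?feed=citydiary'),
]

feed_params = [url.split('feed=')[1] for _, url in feeds]
print(feed_params)  # → ['latest', 'citydiary']
```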
@@ -14,6 +14,7 @@ from calibre.ebooks.BeautifulSoup import BeautifulSoup
 class OReillyPremium(BasicNewsRecipe):
     title = u'OReilly Premium'
     __author__ = 'TMcN'
+    language = 'en'
     description = 'Retrieves Premium and News Letter content from BillOReilly.com. Requires a Bill OReilly Premium Membership.'
     cover_url = 'http://images.billoreilly.com/images/headers/billgray_header.png'
     auto_cleanup = True
@@ -1,10 +1,11 @@
 from calibre.web.feeds.news import BasicNewsRecipe
+import os, time
 
 class AdvancedUserRecipe1277129332(BasicNewsRecipe):
-    title = u'People Daily - China'
+    title = u'人民日报'
     oldest_article = 2
     max_articles_per_feed = 100
-    __author__ = 'rty'
+    __author__ = 'zzh'
 
     publisher = 'people.com.cn'
     description = 'People Daily Newspaper'
@@ -14,21 +15,65 @@ class AdvancedUserRecipe1277129332(BasicNewsRecipe):
     use_embedded_content = False
     no_stylesheets = True
     encoding = 'GB2312'
+    language = 'zh'
     conversion_options = {'linearize_tables':True}
+    masthead_url = 'http://www.people.com.cn/img/2010wb/images/logo.gif'
 
-    feeds = [(u'\u56fd\u5185\u65b0\u95fb', u'http://www.people.com.cn/rss/politics.xml'),
-             (u'\u56fd\u9645\u65b0\u95fb', u'http://www.people.com.cn/rss/world.xml'),
-             (u'\u7ecf\u6d4e\u65b0\u95fb', u'http://www.people.com.cn/rss/finance.xml'),
-             (u'\u4f53\u80b2\u65b0\u95fb', u'http://www.people.com.cn/rss/sports.xml'),
-             (u'\u53f0\u6e7e\u65b0\u95fb', u'http://www.people.com.cn/rss/haixia.xml')]
+    feeds = [
+        (u'时政', u'http://www.people.com.cn/rss/politics.xml'),
+        (u'国际', u'http://www.people.com.cn/rss/world.xml'),
+        (u'经济', u'http://www.people.com.cn/rss/finance.xml'),
+        (u'体育', u'http://www.people.com.cn/rss/sports.xml'),
+        (u'教育', u'http://www.people.com.cn/rss/edu.xml'),
+        (u'文化', u'http://www.people.com.cn/rss/culture.xml'),
+        (u'社会', u'http://www.people.com.cn/rss/society.xml'),
+        (u'传媒', u'http://www.people.com.cn/rss/media.xml'),
+        (u'娱乐', u'http://www.people.com.cn/rss/ent.xml'),
+        # (u'汽车', u'http://www.people.com.cn/rss/auto.xml'),
+        (u'海峡两岸', u'http://www.people.com.cn/rss/haixia.xml'),
+        # (u'IT频道', u'http://www.people.com.cn/rss/it.xml'),
+        # (u'环保', u'http://www.people.com.cn/rss/env.xml'),
+        # (u'科技', u'http://www.people.com.cn/rss/scitech.xml'),
+        # (u'新农村', u'http://www.people.com.cn/rss/nc.xml'),
+        # (u'天气频道', u'http://www.people.com.cn/rss/weather.xml'),
+        (u'生活提示', u'http://www.people.com.cn/rss/life.xml'),
+        (u'卫生', u'http://www.people.com.cn/rss/medicine.xml'),
+        # (u'人口', u'http://www.people.com.cn/rss/npmpc.xml'),
+        # (u'读书', u'http://www.people.com.cn/rss/booker.xml'),
+        # (u'食品', u'http://www.people.com.cn/rss/shipin.xml'),
+        # (u'女性新闻', u'http://www.people.com.cn/rss/women.xml'),
+        # (u'游戏', u'http://www.people.com.cn/rss/game.xml'),
+        # (u'家电频道', u'http://www.people.com.cn/rss/homea.xml'),
+        # (u'房产', u'http://www.people.com.cn/rss/house.xml'),
+        # (u'健康', u'http://www.people.com.cn/rss/health.xml'),
+        # (u'科学发展观', u'http://www.people.com.cn/rss/kxfz.xml'),
+        # (u'知识产权', u'http://www.people.com.cn/rss/ip.xml'),
+        # (u'高层动态', u'http://www.people.com.cn/rss/64094.xml'),
+        # (u'党的各项工作', u'http://www.people.com.cn/rss/64107.xml'),
+        # (u'党建聚焦', u'http://www.people.com.cn/rss/64101.xml'),
+        # (u'机关党建', u'http://www.people.com.cn/rss/117094.xml'),
+        # (u'事业党建', u'http://www.people.com.cn/rss/117095.xml'),
+        # (u'国企党建', u'http://www.people.com.cn/rss/117096.xml'),
+        # (u'非公党建', u'http://www.people.com.cn/rss/117097.xml'),
+        # (u'社区党建', u'http://www.people.com.cn/rss/117098.xml'),
+        # (u'高校党建', u'http://www.people.com.cn/rss/117099.xml'),
+        # (u'农村党建', u'http://www.people.com.cn/rss/117100.xml'),
+        # (u'军队党建', u'http://www.people.com.cn/rss/117101.xml'),
+        # (u'时代先锋', u'http://www.people.com.cn/rss/78693.xml'),
+        # (u'网友声音', u'http://www.people.com.cn/rss/64103.xml'),
+        # (u'反腐倡廉', u'http://www.people.com.cn/rss/64371.xml'),
+        # (u'综合报道', u'http://www.people.com.cn/rss/64387.xml'),
+        # (u'中国人大新闻', u'http://www.people.com.cn/rss/14576.xml'),
+        # (u'中国政协新闻', u'http://www.people.com.cn/rss/34948.xml'),
+    ]
     keep_only_tags = [
-        dict(name='div', attrs={'class':'left_content'}),
+        dict(name='div', attrs={'class':'text_c'}),
     ]
     remove_tags = [
-        dict(name='table', attrs={'class':'title'}),
+        dict(name='div', attrs={'class':'tools'}),
     ]
     remove_tags_after = [
-        dict(name='table', attrs={'class':'bianji'}),
+        dict(name='div', attrs={'id':'p_content'}),
     ]
 
     def append_page(self, soup, appendtag, position):
@@ -36,7 +81,7 @@ class AdvancedUserRecipe1277129332(BasicNewsRecipe):
         if pager:
             nexturl = self.INDEX + pager.a['href']
             soup2 = self.index_to_soup(nexturl)
-            texttag = soup2.find('div', attrs={'class':'left_content'})
+            texttag = soup2.find('div', attrs={'class':'text_c'})
             #for it in texttag.findAll(style=True):
             #    del it['style']
             newpos = len(texttag.contents)
@@ -44,9 +89,15 @@ class AdvancedUserRecipe1277129332(BasicNewsRecipe):
             texttag.extract()
             appendtag.insert(position,texttag)
 
+    def skip_ad_pages(self, soup):
+        if ('advertisement' in soup.find('title').string.lower()):
+            href = soup.find('a').get('href')
+            return self.browser.open(href).read().decode('GB2312', 'ignore')
+        else:
+            return None
+
     def preprocess_html(self, soup):
-        mtag = '<meta http-equiv="content-type" content="text/html;charset=GB2312" />\n<meta http-equiv="content-language" content="utf-8" />'
+        mtag = '<meta http-equiv="content-type" content="text/html;charset=GB2312" />\n<meta http-equiv="content-language" content="GB2312" />'
         soup.head.insert(0,mtag)
         for item in soup.findAll(style=True):
             del item['form']
@@ -55,3 +106,19 @@ class AdvancedUserRecipe1277129332(BasicNewsRecipe):
         #if pager:
         #    pager.extract()
         return soup
+
+    def get_cover_url(self):
+        cover = None
+        os.environ['TZ'] = 'Asia/Shanghai'
+        time.tzset()
+        year = time.strftime('%Y')
+        month = time.strftime('%m')
+        day = time.strftime('%d')
+        cover = 'http://paper.people.com.cn/rmrb/page/'+year+'-'+month+'/'+day+'/01/RMRB'+year+month+day+'B001_b.jpg'
+        br = BasicNewsRecipe.get_browser()
+        try:
+            br.open(cover)
+        except:
+            self.log("\nCover unavailable: " + cover)
+            cover = None
+        return cover
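The `get_cover_url()` addition above builds the People's Daily front-page image URL purely from the date. A standalone sketch of that string construction, using a fixed example date instead of Asia/Shanghai local time:

```python
import time

def people_daily_cover_url(t):
    # Front-page (page 01, plate B001) image URL for the given struct_time,
    # mirroring the concatenation in get_cover_url() above.
    year = time.strftime('%Y', t)
    month = time.strftime('%m', t)
    day = time.strftime('%d', t)
    return ('http://paper.people.com.cn/rmrb/page/' + year + '-' + month +
            '/' + day + '/01/RMRB' + year + month + day + 'B001_b.jpg')

# Example date only; the recipe itself uses the current date in Shanghai time.
url = people_daily_cover_url(time.strptime('2012-02-03', '%Y-%m-%d'))
print(url)  # → http://paper.people.com.cn/rmrb/page/2012-02/03/01/RMRB20120203B001_b.jpg
```

The recipe then opens this URL once to verify the image exists before using it as the cover.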
@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
 
 __author__ = 'Darko Spasovski'
 __license__ = 'GPL v3'
@@ -7,7 +8,6 @@ __copyright__ = '2011, Darko Spasovski <darko.spasovski at gmail.com>'
 '''
 www.plusinfo.mk
 '''
-
 from calibre.web.feeds.news import BasicNewsRecipe
 
 class PlusInfo(BasicNewsRecipe):
@@ -27,8 +27,11 @@ class PlusInfo(BasicNewsRecipe):
     oldest_article = 1
     max_articles_per_feed = 100
 
-    keep_only_tags = [dict(name='div', attrs={'class': 'vest'})]
+    remove_tags = []
-    remove_tags = [dict(name='div', attrs={'class':['komentari_holder', 'objava']})]
+    remove_tags.append(dict(name='div', attrs={'class':['komentari_holder', 'objava', 'koment']}))
+    remove_tags.append(dict(name='ul', attrs={'class':['vest_meni']}))
+    remove_tags.append(dict(name='a', attrs={'name': ['fb_share']}))
+    keep_only_tags = [dict(name='div', attrs={'class': 'vest1'})]
 
     feeds = [(u'Македонија', u'http://www.plusinfo.mk/rss/makedonija'),
              (u'Бизнис', u'http://www.plusinfo.mk/rss/biznis'),
170  recipes/real_clear.recipe  Normal file
@@ -0,0 +1,170 @@
# Test with "\Program Files\Calibre2\ebook-convert.exe" RealClear.recipe .epub --test -vv --debug-pipeline debug
import time
from calibre.web.feeds.recipes import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import NavigableString

class RealClear(BasicNewsRecipe):
    title = u'Real Clear'
    __author__ = 'TMcN'
    description = 'Real Clear Politics/Science/etc... aggregation of news\n'
    cover_url = 'http://www.realclearpolitics.com/dev/mt-static/images/logo.gif'
    custom_title = 'Real Clear - ' + time.strftime('%d %b %Y')
    auto_cleanup = True
    encoding = 'utf8'
    language = 'en'
    needs_subscription = False
    no_stylesheets = True
    oldest_article = 7
    remove_javascript = True
    remove_tags = [dict(name='img', attrs={})]
    # Don't go down
    recursions = 0
    max_articles_per_feed = 400
    debugMessages = False

    # Numeric parameter is type, controls whether we look for
    feedsets = [
        ["Politics", "http://www.realclearpolitics.com/index.xml", 0],
        ["Science", "http://www.realclearscience.com/index.xml", 0],
        ["Tech", "http://www.realcleartechnology.com/index.xml", 0],
        # The feedburner is essentially the same as the top feed, politics.
        # ["Politics Burner", "http://feeds.feedburner.com/realclearpolitics/qlMj", 1],
        # ["Commentary", "http://feeds.feedburner.com/Realclearpolitics-Articles", 1],
        ["Markets Home", "http://www.realclearmarkets.com/index.xml", 0],
        ["Markets", "http://www.realclearmarkets.com/articles/index.xml", 0],
        ["World", "http://www.realclearworld.com/index.xml", 0],
        ["World Blog", "http://www.realclearworld.com/blog/index.xml", 2]
    ]
    # Hints to extractPrintURL.
    # First column is the URL snippet. Then the string to search for as text,
    # and the attributes to look for above it. Start with attributes and drill down.
    printhints = [
        ["billoreilly.com", "Print this entry", 'a', ''],
        ["billoreilly.com", "Print This Article", 'a', ''],
        ["politico.com", "Print", 'a', 'share-print'],
        ["nationalreview.com", ">Print<", 'a', ''],
        ["reason.com", "", 'a', 'printer']
        # The following are not supported due to JavaScripting, and would require obfuscated_article to handle
        # forbes,
        # usatoday - just prints with all current crap anyhow
    ]

    # Returns the best-guess print url.
    # The second parameter (pageURL) is returned if nothing is found.
    def extractPrintURL(self, pageURL):
        tagURL = pageURL
        hintsCount = len(self.printhints)
        for x in range(0, hintsCount):
            if pageURL.find(self.printhints[x][0]) == -1:
                continue
            print("Trying " + self.printhints[x][0])
            # Only retrieve the soup if we have a match to check for the printed article with.
            soup = self.index_to_soup(pageURL)
            if soup is None:
                return pageURL
            if len(self.printhints[x][3]) > 0 and len(self.printhints[x][1]) == 0:
                if self.debugMessages == True:
                    print("search1")
                printFind = soup.find(self.printhints[x][2], attrs=self.printhints[x][3])
            elif len(self.printhints[x][3]) > 0:
                if self.debugMessages == True:
                    print("search2")
                printFind = soup.find(self.printhints[x][2], attrs=self.printhints[x][3], text=self.printhints[x][1])
            else:
                printFind = soup.find(self.printhints[x][2], text=self.printhints[x][1])
            if printFind is None:
                if self.debugMessages == True:
                    print("Not Found")
                continue
            print(printFind)
            if isinstance(printFind, NavigableString) == False:
                if printFind['href'] is not None:
                    return printFind['href']
            tag = printFind.parent
            print(tag)
            if tag['href'] is None:
                if self.debugMessages == True:
                    print("Not in parent, trying skip-up")
                if tag.parent['href'] is None:
                    if self.debugMessages == True:
                        print("Not in skip either, aborting")
                    continue
                return tag.parent['href']
            return tag['href']
        return tagURL

    def get_browser(self):
        if self.debugMessages == True:
            print("In get_browser")
        br = BasicNewsRecipe.get_browser()
        return br

    def parseRSS(self, index):
        if self.debugMessages == True:
            print("\n\nStarting " + self.feedsets[index][0])
        articleList = []
        soup = self.index_to_soup(self.feedsets[index][1])
        for div in soup.findAll("item"):
            title = div.find("title").contents[0]
            urlEl = div.find("originalLink")
            if urlEl is None or len(urlEl.contents) == 0:
                urlEl = div.find("originallink")
            if urlEl is None or len(urlEl.contents) == 0:
                urlEl = div.find("link")
            if urlEl is None or len(urlEl.contents) == 0:
                urlEl = div.find("guid")
            if urlEl is None or title is None or len(urlEl.contents) == 0:
                print("Error in feed " + self.feedsets[index][0])
                print(div)
                continue
            print(title)
            print(urlEl)
            url = urlEl.contents[0].encode("utf-8")
            description = div.find("description")
            if description is not None and description.contents is not None and len(description.contents) > 0:
                description = description.contents[0]
            else:
                description = "None"
            pubDateEl = div.find("pubDate")
            if pubDateEl is None:
                pubDateEl = div.find("pubdate")
            if pubDateEl is None:
                pubDate = time.strftime('%a, %d %b')
            else:
                pubDate = pubDateEl.contents[0]
            if self.debugMessages == True:
                print("Article")
                print(title)
                print(description)
                print(pubDate)
                print(url)
            url = self.extractPrintURL(url)
            print(url)
            #url += re.sub(r'\?.*', '', div['href'])
            pubdate = time.strftime('%a, %d %b')
            articleList.append(dict(title=title, url=url, date=pubdate, description=description, content=''))
        return articleList

    # calibre.web.feeds.news.BasicNewsRecipe.parse_index() fetches the list of articles.
    # returns a list of tuple ('feed title', list of articles)
    # {
    # 'title'       : article title,
    # 'url'         : URL of print version,
    # 'date'        : The publication date of the article as a string,
    # 'description' : A summary of the article
    # 'content'     : The full article (can be an empty string). This is used by FullContentProfile
    # }
    # this is used instead of BasicNewsRecipe.parse_feeds().
    def parse_index(self):
        # Parse the page into Python Soup
        ans = []
        feedsCount = len(self.feedsets)
        for x in range(0, feedsCount):  # should be ,4
            feedarticles = self.parseRSS(x)
            if feedarticles is not None:
                ans.append((self.feedsets[x][0], feedarticles))
        if self.debugMessages == True:
            print(ans)
        return ans
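The `printhints` table above is scanned by URL snippet before any page is fetched; only a matching row triggers a soup lookup. A network-free sketch of that first matching step (the article URL is hypothetical):

```python
# Rows mirror the printhints format above: [url snippet, link text, tag, class].
printhints = [
    ["billoreilly.com", "Print this entry", 'a', ''],
    ["politico.com", "Print", 'a', 'share-print'],
    ["reason.com", "", 'a', 'printer'],
]

def hints_for(page_url):
    # extractPrintURL() only fetches the page soup after one of these
    # snippet matches; everything else falls through to the original URL.
    return [h for h in printhints if page_url.find(h[0]) != -1]

matches = hints_for('http://www.politico.com/news/stories/0112/example.html')
print(matches)  # → [['politico.com', 'Print', 'a', 'share-print']]
```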
14  recipes/satira.recipe  Normal file
@@ -0,0 +1,14 @@
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1327351409(BasicNewsRecipe):
    title = u'Satira'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True
    feeds = [(u'spinoza', u'http://feeds.feedburner.com/Spinoza'), (u'umore maligno', u'http://www.umoremaligno.it/feed/rss/'), (u'fed-ex', u'http://exfed.tumblr.com/rss'), (u'metilparaben', u'http://feeds.feedburner.com/metil'), (u'freddy nietzsche', u'http://feeds.feedburner.com/FreddyNietzsche')]
    __author__ = 'faber1971'
    description = 'Collection of Italian satiric blogs - v1.00 (28, January 2012)'
    language = 'it'
133  recipes/strange_horizons.recipe  Normal file
@@ -0,0 +1,133 @@
#!/usr/bin/env python

import urlparse
from collections import OrderedDict

from calibre.web.feeds.news import BasicNewsRecipe

class StrangeHorizons(BasicNewsRecipe):

    # Recipe metadata
    # Any issue archive page is an acceptable index as well.
    # However, reviews will not be included in older issues.
    # (Using the reviews archive instead of the recent reviews page would fix.)
    INDEX = 'http://www.strangehorizons.com/'
    title = 'Strange Horizons'
    description = 'A magazine of speculative fiction and related nonfiction. Best downloaded on weekends'
    masthead_url = 'http://strangehorizons.com/images/sh_head.gif'
    publication_type = 'magazine'
    language = 'en'
    __author__ = 'Jim DeVona'
    __version__ = '1.0'

    # Cruft filters
    keep_only_tags = [dict(name='div', id='content')]
    remove_tags = [dict(name='p', attrs={'class': 'forum-links'}), dict(name='p', attrs={'class': 'top-link'})]
    remove_tags_after = [dict(name='p', attrs={'class': 'author-bio'})]

    # Styles
    no_stylesheets = True
    extra_css = '''div.image-left { margin: 0.5em auto 1em auto; } div.image-right { margin: 0.5em auto 1em auto; } div.illustration { margin: 0.5em auto 1em auto; text-align: center; } p.image-caption { margin-top: 0.25em; margin-bottom: 1em; font-size: 75%; text-align: center; } h1 { font-size: 160%; } h2 { font-size: 110%; } h3 { font-size: 85%; } h4 { font-size: 80%; } p { font-size: 90%; margin: 1em 1em 1em 15px; } p.author-bio { font-size: 75%; font-style: italic; margin: 1em 1em 1em 15px; } p.author-bio i, p.author-bio cite, p.author-bio .foreign { font-style: normal; } p.author-copyright { font-size: 75%; text-align: center; margin: 3em 1em 1em 15px; } p.content-date { font-weight: bold; } p.dedication { font-style: italic; } div.stanza { margin-bottom: 1em; } div.stanza p { margin: 0px 1em 0px 15px; font-size: 90%; } p.verse-line { margin-bottom: 0px; margin-top: 0px; } p.verse-line-indent-1 { margin-bottom: 0px; margin-top: 0px; text-indent: 2em; } p.verse-line-indent-2 { margin-bottom: 0px; margin-top: 0px; text-indent: 4em; } p.verse-stanza-break { margin-bottom: 0px; margin-top: 0px; } .foreign { font-style: italic; } .thought { font-style: italic; } .thought cite { font-style: normal; } .thought em { font-style: normal; } blockquote { font-size: 90%; font-style: italic; } blockquote cite { font-style: normal; } blockquote em { font-style: normal; } blockquote .foreign { font-style: normal; } blockquote .thought { font-style: normal; } .speaker { font-weight: bold; } pre { margin-left: 15px; } div.screenplay { font-family: monospace; } blockquote.screenplay-dialogue { font-style: normal; font-size: 100%; } .screenplay p.dialogue-first { margin-top: 0; } .screenplay p.speaker { margin-bottom: 0; text-align: center; font-weight: normal; } blockquote.typed-letter { font-style: normal; font-size: 100%; font-family: monospace; } .no-italics { font-style: normal; }'''

    def parse_index(self):

        sections = OrderedDict()
        strange_soup = self.index_to_soup(self.INDEX)

        # Find the heading that marks the start of this issue.
        issue_heading = strange_soup.find('h2')
        issue_date = self.tag_to_string(issue_heading)
        self.title = self.title + " - " + issue_date

        # Examine subsequent headings for information about this issue.
        heading_tag = issue_heading.findNextSibling(['h2','h3'])
        while heading_tag != None:

            # An h2 indicates the start of the next issue.
            if heading_tag.name == 'h2':
                break

            # The heading begins with a word indicating the article category.
            section = self.tag_to_string(heading_tag).split(':', 1)[0].title()

            # Reviews aren't linked from the index, so we need to look them up
            # separately. Currently using Recent Reviews page. The reviews
            # archive page lists all reviews, but is >500k.
            if section == 'Review':

                # Get the list of recent reviews.
                review_soup = self.index_to_soup('http://www.strangehorizons.com/reviews/')
                review_titles = review_soup.findAll('p', attrs={'class': 'contents-title'})

                # Get the list of reviews included in this issue. (Kludgey.)
                reviews_summary = heading_tag.findNextSibling('p', attrs={'class': 'contents-pullquote'})
                for br in reviews_summary.findAll('br'):
                    br.replaceWith('----')
                review_summary_text = self.tag_to_string(reviews_summary)
                review_lines = review_summary_text.split(' ----')

                # Look for each of the needed reviews (there are 3, right?)...
                for review_info in review_lines[0:3]:

                    # Get the review's release day (unused), title, and author.
                    day, tna = review_info.split(': ', 1)
                    article_title, article_author = tna.split(', reviewed by ')

                    # ... in the list of recent reviews.
                    for review_title_tag in review_titles:
                        review_title = self.tag_to_string(review_title_tag)
                        if review_title != article_title:
                            continue

                        # Extract review information from heading and surrounding text.
                        article_summary = self.tag_to_string(review_title_tag.findNextSibling('p', attrs={'class': 'contents-pullquote'}))
                        review_date = self.tag_to_string(review_title_tag.findNextSibling('p', attrs={'class': 'contents-date'}))
                        article_url = review_title_tag.find('a')['href']

                        # Add this review to the Review section.
                        if section not in sections:
                            sections[section] = []
                        sections[section].append({
                            'title': article_title,
                            'author': article_author,
                            'url': article_url,
                            'description': article_summary,
                            'date': review_date})

                        break

                    else:
                        # Try http://www.strangehorizons.com/reviews/archives.shtml
                        self.log("Review not found in Recent Reviews:", article_title)

            else:

                # Extract article information from the heading and surrounding text.
                link = heading_tag.find('a')
                article_title = self.tag_to_string(link)
                article_url = urlparse.urljoin(self.INDEX, link['href'])
                article_author = link.nextSibling.replace(', by ', '')
                article_summary = self.tag_to_string(heading_tag.findNextSibling('p', attrs={'class':'contents-pullquote'}))

                # Add article to the appropriate collection of sections.
                if section not in sections:
                    sections[section] = []
                sections[section].append({
                    'title': article_title,
                    'author': article_author,
                    'url': article_url,
                    'description': article_summary,
                    'date': issue_date})

            heading_tag = heading_tag.findNextSibling(['h2','h3'])

        # Manually insert standard info about the magazine.
        sections['About'] = [{
            'title': 'Strange Horizons',
            'author': 'Niall Harrison, Editor-in-Chief',
            'url': 'http://www.strangehorizons.com/AboutUs.shtml',
            'description': 'Strange Horizons is a magazine of and about speculative fiction and related nonfiction. Speculative fiction includes science fiction, fantasy, horror, slipstream, and all other flavors of fantastika. Work published in Strange Horizons has been shortlisted for or won Hugo, Nebula, Rhysling, Theodore Sturgeon, James Tiptree Jr., and World Fantasy Awards.',
            'date': ''}]

        return sections.items()
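The review-summary parsing above turns each `<br>` into `'----'`, splits the text into lines, and then splits each line into day, title, and reviewer. A standalone sketch of that string handling, using a hypothetical summary line:

```python
# Hypothetical line in the shape parse_index() expects after <br> tags
# are replaced with '----' and the summary text is split.
review_info = 'Monday: The Example Novel, reviewed by Jane Doe'

# Get the review's release day (unused), title, and author.
day, tna = review_info.split(': ', 1)
article_title, article_author = tna.split(', reviewed by ')
print(day, '|', article_title, '|', article_author)  # → Monday | The Example Novel | Jane Doe
```

The `maxsplit=1` on the first split keeps any colon inside the title from breaking the parse.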
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 __license__ = 'GPL v3'
-__copyright__ = '2008, Kovid Goyal <kovid at kovidgoyal.net>'
+__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>' # 2012-01-26 AGe change to actual Year
 
 '''
 Fetch sueddeutsche.de
@@ -8,19 +8,30 @@ Fetch sueddeutsche.de
 from calibre.web.feeds.news import BasicNewsRecipe
 
 class Sueddeutsche(BasicNewsRecipe):
 
-    title = u'sueddeutsche.de'
+    title = u'Süddeutsche.de' # 2012-01-26 AGe Correct Title
-    description = 'News from Germany'
+    description = 'News from Germany, Access to online content' # 2012-01-26 AGe
-    __author__ = 'Oliver Niesner and Armin Geller' # Update AGe 2011-12-16
+    __author__ = 'Oliver Niesner and Armin Geller' # Update AGe 2012-01-26
-    use_embedded_content = False
+    publisher = 'Süddeutsche Zeitung' # 2012-01-26 AGe add
-    timefmt = ' [%d %b %Y]'
+    category = 'news, politics, Germany' # 2012-01-26 AGe add
+    timefmt = ' [%a, %d %b %Y]' # 2012-01-26 AGe add %a
     oldest_article = 7
-    max_articles_per_feed = 50
+    max_articles_per_feed = 100
-    no_stylesheets = True
     language = 'de'
     encoding = 'utf-8'
+    publication_type = 'newspaper' # 2012-01-26 add
+    cover_source = 'http://www.sueddeutsche.de/verlag' # 2012-01-26 AGe add from Darko Miletic paid content source
+    masthead_url = 'http://www.sueddeutsche.de/static_assets/build/img/sdesiteheader/logo_homepage.441d531c.png' # 2012-01-26 AGe add
+
+    use_embedded_content = False
+    no_stylesheets = True
     remove_javascript = True
     auto_cleanup = True
-    cover_url = 'http://polpix.sueddeutsche.com/polopoly_fs/1.1237395.1324054345!/image/image.jpg_gen/derivatives/860x860/image.jpg' # 2011-12-16 AGe
+
+    def get_cover_url(self): # 2012-01-26 AGe add from Darko Miletic paid content source
+        cover_source_soup = self.index_to_soup(self.cover_source)
+        preview_image_div = cover_source_soup.find(attrs={'class':'preview-image'})
+        return preview_image_div.div.img['src']
+
     feeds = [
         (u'Politik', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EPolitik%24?output=rss'),
         (u'Wirtschaft', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EWirtschaft%24?output=rss'),
@@ -29,6 +40,9 @@ class Sueddeutsche(BasicNewsRecipe):
         (u'Sport', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5ESport%24?output=rss'),
         (u'Leben', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5ELeben%24?output=rss'),
         (u'Karriere', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EKarriere%24?output=rss'),
+        (u'Bildung', u'http://rss.sueddeutsche.de/rss/bildung'), # 2012-01-26 AGe New
+        (u'Gesundheit', u'http://rss.sueddeutsche.de/rss/gesundheit'), # 2012-01-26 AGe New
+        (u'Stil', u'http://rss.sueddeutsche.de/rss/stil'), # 2012-01-26 AGe New
         (u'München & Region', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EMünchen&Region%24?output=rss'),
|
(u'München & Region', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EMünchen&Region%24?output=rss'),
|
||||||
(u'Bayern', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EBayern%24?output=rss'),
|
(u'Bayern', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EBayern%24?output=rss'),
|
||||||
(u'Medien', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EMedien%24?output=rss'),
|
(u'Medien', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EMedien%24?output=rss'),
|
||||||
@ -42,6 +56,7 @@ class Sueddeutsche(BasicNewsRecipe):
|
|||||||
(u'Job', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EJob%24?output=rss'), # sometimes only
|
(u'Job', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EJob%24?output=rss'), # sometimes only
|
||||||
(u'Service', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EService%24?output=rss'), # sometimes only
|
(u'Service', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EService%24?output=rss'), # sometimes only
|
||||||
(u'Verlag', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EVerlag%24?output=rss'), # sometimes only
|
(u'Verlag', u'http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5EVerlag%24?output=rss'), # sometimes only
|
||||||
|
|
||||||
]
|
]
|
||||||
# AGe 2011-12-16 Problem of Handling redirections solved by a solution of Recipes-Re-usable code from kiklop74.
|
# AGe 2011-12-16 Problem of Handling redirections solved by a solution of Recipes-Re-usable code from kiklop74.
|
||||||
# Feed is: http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5ESport%24?output=rss
|
# Feed is: http://suche.sueddeutsche.de/query/%23/sort/-docdatetime/drilldown/%C2%A7ressort%3A%5ESport%24?output=rss
|
||||||
15  recipes/tech_economy.recipe  Normal file
@ -0,0 +1,15 @@
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1327051385(BasicNewsRecipe):
    title = u'Tech Economy'
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True
    masthead_url = 'http://www.techeconomy.it/wp-content/uploads/2012/01/Logo-TE9.png'
    feeds = [(u'Tech Economy', u'http://www.techeconomy.it/feed/')]
    remove_tags_after = [dict(name='div', attrs={'class':'cab-author-name'})]
    __author__ = 'faber1971'
    description = 'Italian website on technology - v1.00 (28, January 2012)'
    language = 'it'
37  recipes/telegraph_in.recipe  Normal file
@ -0,0 +1,37 @@
from calibre.web.feeds.news import BasicNewsRecipe

class Telegraph(BasicNewsRecipe):
    title = u'The Telegraph India'
    language = 'en_IN'
    __author__ = 'Krittika Goyal'
    oldest_article = 1  # days
    max_articles_per_feed = 25
    use_embedded_content = False

    no_stylesheets = True
    auto_cleanup = True

    feeds = [
        ('Front Page',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=3'),
        ('Nation',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=4'),
        ('Calcutta',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=5'),
        ('Bengal',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=8'),
        ('Bihar',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=22'),
        ('Sports',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=7'),
        ('International',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=13'),
        ('Business',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=9'),
        ('Entertainment',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=20'),
        ('Opinion',
         'http://www.telegraphindia.com/feeds/rss.jsp?id=6'),
    ]
24  recipes/tomshardware_it.recipe  Normal file
@ -0,0 +1,24 @@
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1327434170(BasicNewsRecipe):
    title = u"Tom's Hardware"
    oldest_article = 7
    max_articles_per_feed = 100
    auto_cleanup = True
    masthead_url = 'http://userlogos.org/files/logos/spaljeni/tomshardwre.png'

    def get_article_url(self, article):
        link = BasicNewsRecipe.get_article_url(self, article)
        if link.split('/')[-1] == "story01.htm":
            link = link.split('/')[-2]
            a = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'I', 'L', 'N', 'S']
            b = ['0', '.', '/', '?', '-', '=', '&', '_', 'http://', '.com', 'www.']
            for i in range(0, len(a)):
                link = link.replace('0' + a[-i], b[-i])
        return link

    feeds = [(u"Tom's Hardware", u'http://rss.feedsportal.com/c/32604/f/531080/index.rss')]
    __author__ = 'faber1971'
    description = 'Italian website on technology - v1.00 (28, January 2012)'
    language = 'it'
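The `get_article_url` method in the tomshardware_it recipe above undoes feedsportal's link obfuscation: each two-character `0X` escape stands for a URL fragment. A standalone sketch of the same substitution table follows; the encoded sample string is made up for illustration, not a real feedsportal link.

```python
# Expand feedsportal-style '0X' escapes back into URL fragments.
# Same table and replacement order as the recipe above.
def decode_feedsportal(link):
    a = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'I', 'L', 'N', 'S']
    b = ['0', '.', '/', '?', '-', '=', '&', '_', 'http://', '.com', 'www.']
    for i in range(len(a)):
        # i=0 pairs a[0] with b[0] ('0A' -> '0'); negative indices pair
        # the rest from the ends of the two lists, e.g. '0S' -> 'www.'
        link = link.replace('0' + a[-i], b[-i])
    return link

print(decode_feedsportal('0L0Sexample0N0Cnews'))  # http://www.example.com/news
```

A string with no `0X` escapes passes through unchanged, so the recipe can apply the decoder only to links that end in `story01.htm`.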
144  recipes/wyborcza_duzy_format.recipe  Normal file
@ -0,0 +1,144 @@
#!/usr/bin/env python

from calibre.web.feeds.recipes import BasicNewsRecipe

class GazetaWyborczaDuzyForma(BasicNewsRecipe):
    cover_url = 'http://bi.gazeta.pl/im/8/5415/m5415058.gif'
    title = u"Gazeta Wyborcza Duzy Format"
    __author__ = 'ravcio - rlelusz[at]gmail.com'
    description = u"Articles from Gazeta's website"
    language = 'pl'
    max_articles_per_feed = 50  # you can increase it even up to maybe 600, should still work
    recursions = 0
    encoding = 'iso-8859-2'
    no_stylesheets = True
    remove_javascript = True
    use_embedded_content = False

    keep_only_tags = [
        dict(name='div', attrs={'id':['k1']})
    ]

    remove_tags = [
        dict(name='div', attrs={'class':['zdjM', 'rel_video', 'zdjP', 'rel_box', 'index mod_zi_dolStrony']})
        ,dict(name='div', attrs={'id':['source', 'banP4', 'article_toolbar', 'rel', 'inContext_disabled']})
        ,dict(name='ul', attrs={'id':['articleToolbar']})
        ,dict(name='img', attrs={'class':['brand']})
        ,dict(name='h5', attrs={'class':['author']})
        ,dict(name='h6', attrs={'class':['date']})
        ,dict(name='p', attrs={'class':['txt_upl']})
    ]

    remove_tags_after = [
        dict(name='div', attrs={'id':['Str']})  # page-number navigator
    ]

    def load_article_links(self, url, count):
        print '--- load_article_links', url, count

        # page with links to articles
        soup = self.index_to_soup(url)

        # table with articles
        list = soup.find('div', attrs={'class':'GWdalt'})

        # single articles (link, title, ...)
        links = list.findAll('div', attrs={'class':['GWdaltE']})

        if len(links) < count:
            # load links to more articles...
            pages_nav = list.find('div', attrs={'class':'pages'})
            next = pages_nav.find('a', attrs={'class':'next'})
            if next:
                print 'next=', next['href']
                url = 'http://wyborcza.pl' + next['href']
                # e.g. url = 'http://wyborcza.pl/0,75480.html?str=2'
                older_links = self.load_article_links(url, count - len(links))
                links.extend(older_links)

        return links

    # produce list of articles to download
    def parse_index(self):
        print '--- parse_index'

        max_articles = 8000
        links = self.load_article_links('http://wyborcza.pl/0,75480.html', max_articles)

        ans = []
        key = None
        articles = {}

        key = 'Uncategorized'
        articles[key] = []

        for div_art in links:
            div_date = div_art.find('div', attrs={'class':'kL'})
            div = div_art.find('div', attrs={'class':'kR'})

            a = div.find('a', href=True)

            url = a['href']
            title = a.string
            description = ''
            pubdate = div_date.string.rstrip().lstrip()
            summary = div.find('span', attrs={'class':'lead'})

            desc = summary.find('a', href=True)
            if desc:
                desc.extract()

            description = self.tag_to_string(summary, use_alt=False)
            description = description.rstrip().lstrip()

            feed = key if key is not None else 'Duzy Format'

            if not articles.has_key(feed):
                articles[feed] = []

            if description != '':  # skip picture-only articles
                articles[feed].append(
                    dict(title=title, url=url, date=pubdate,
                         description=description,
                         content=''))

        ans = [(key, articles[key])]
        return ans

    def append_page(self, soup, appendtag, position):
        pager = soup.find('div', attrs={'id':'Str'})
        if pager:
            # look for an 'a' element whose text contains 'nast' (next); if not found, exit
            list = pager.findAll('a')

            for elem in list:
                if 'nast' in elem.string:
                    nexturl = elem['href']

                    soup2 = self.index_to_soup('http://warszawa.gazeta.pl' + nexturl)

                    texttag = soup2.find('div', attrs={'id':'artykul'})

                    newpos = len(texttag.contents)
                    self.append_page(soup2, texttag, newpos)
                    texttag.extract()
                    appendtag.insert(position, texttag)

    def preprocess_html(self, soup):
        self.append_page(soup, soup.body, 3)

        # finally remove some tags
        pager = soup.find('div', attrs={'id':'Str'})
        if pager:
            pager.extract()

        pager = soup.find('div', attrs={'class':'tylko_int'})
        if pager:
            pager.extract()

        return soup
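The `load_article_links` method above accumulates article links recursively, following the pager's 'next' link until either enough links are gathered or the pages run out. A pure-Python sketch of that control flow, with a made-up in-memory page table standing in for the scraped pages:

```python
# Each entry is (links on the page, next page number or None).
# The data is invented for illustration.
PAGES = {
    1: (['a1', 'a2'], 2),
    2: (['a3', 'a4'], 3),
    3: (['a5'], None),
}

def load_links(page, count):
    # mirror load_article_links: take this page's links, and if we are
    # still short of `count`, recurse into the next page for the rest
    links, nxt = PAGES[page]
    links = list(links)
    if len(links) < count and nxt is not None:
        links.extend(load_links(nxt, count - len(links)))
    return links

print(load_links(1, 4))   # ['a1', 'a2', 'a3', 'a4']
print(load_links(1, 10))  # ['a1', 'a2', 'a3', 'a4', 'a5']
```

Passing a very large `count` (the recipe uses 8000) simply walks every page, which is why the recursion terminates when the pager has no 'next' link.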
Binary file not shown.
@ -8,14 +8,14 @@ msgstr ""
 "Project-Id-Version: calibre\n"
 "Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
 "POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2012-01-19 00:12+0000\n"
+"PO-Revision-Date: 2012-01-28 05:12+0000\n"
 "Last-Translator: Vibhav Pant <vibhavp@gmail.com>\n"
 "Language-Team: English (United Kingdom) <en_GB@li.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2012-01-19 04:39+0000\n"
+"X-Launchpad-Export-Date: 2012-01-29 05:21+0000\n"
-"X-Generator: Launchpad (build 14692)\n"
+"X-Generator: Launchpad (build 14727)\n"

 #. name for aaa
 msgid "Ghotuo"
@ -7003,83 +7003,83 @@ msgstr "Ebrié"

 #. name for ebu
 msgid "Embu"
-msgstr ""
+msgstr "Embu"

 #. name for ecr
 msgid "Eteocretan"
-msgstr ""
+msgstr "Eteocretan"

 #. name for ecs
 msgid "Ecuadorian Sign Language"
-msgstr ""
+msgstr "Ecuadorian Sign Language"

 #. name for ecy
 msgid "Eteocypriot"
-msgstr ""
+msgstr "Eteocypriot"

 #. name for eee
 msgid "E"
-msgstr ""
+msgstr "E"

 #. name for efa
 msgid "Efai"
-msgstr ""
+msgstr "Efai"

 #. name for efe
 msgid "Efe"
-msgstr ""
+msgstr "Efe"

 #. name for efi
 msgid "Efik"
-msgstr ""
+msgstr "Efik"

 #. name for ega
 msgid "Ega"
-msgstr ""
+msgstr "Ega"

 #. name for egl
 msgid "Emilian"
-msgstr ""
+msgstr "Emilian"

 #. name for ego
 msgid "Eggon"
-msgstr ""
+msgstr "Eggon"

 #. name for egy
 msgid "Egyptian (Ancient)"
-msgstr ""
+msgstr "Egyptian (Ancient)"

 #. name for ehu
 msgid "Ehueun"
-msgstr ""
+msgstr "Ehueun"

 #. name for eip
 msgid "Eipomek"
-msgstr ""
+msgstr "Eipomek"

 #. name for eit
 msgid "Eitiep"
-msgstr ""
+msgstr "Eitiep"

 #. name for eiv
 msgid "Askopan"
-msgstr ""
+msgstr "Askopan"

 #. name for eja
 msgid "Ejamat"
-msgstr ""
+msgstr "Ejamat"

 #. name for eka
 msgid "Ekajuk"
-msgstr ""
+msgstr "Ekajuk"

 #. name for eke
 msgid "Ekit"
-msgstr ""
+msgstr "Ekit"

 #. name for ekg
 msgid "Ekari"
-msgstr ""
+msgstr "Ekari"

 #. name for eki
 msgid "Eki"
@ -12,14 +12,14 @@ msgstr ""
 "Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
 "devel@lists.alioth.debian.org>\n"
 "POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2011-11-03 23:08+0000\n"
+"PO-Revision-Date: 2012-02-01 20:12+0000\n"
 "Last-Translator: drMerry <Unknown>\n"
 "Language-Team: Dutch <vertaling@vrijschrift.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2011-11-26 05:12+0000\n"
+"X-Launchpad-Export-Date: 2012-02-02 05:57+0000\n"
-"X-Generator: Launchpad (build 14381)\n"
+"X-Generator: Launchpad (build 14738)\n"
 "Language: nl\n"

 #. name for aaa
@ -17956,7 +17956,7 @@ msgstr ""

 #. name for nds
 msgid "German; Low"
-msgstr ""
+msgstr "Duits; Laag"

 #. name for ndt
 msgid "Ndunga"
@ -30424,7 +30424,7 @@ msgstr ""

 #. name for zlm
 msgid "Malay (individual language)"
-msgstr ""
+msgstr "Maleis (aparte taal)"

 #. name for zln
 msgid "Zhuang; Lianshan"
@ -151,7 +151,7 @@ class Translations(POT): # {{{
             self.info('\tCopying ISO 639 translations')
             subprocess.check_call(['msgfmt', '-o', dest, iso639])
         elif locale not in ('en_GB', 'en_CA', 'en_AU', 'si', 'ur', 'sc',
-                'ltg', 'nds', 'te', 'yi', 'fo', 'sq', 'ast', 'ml'):
+                'ltg', 'nds', 'te', 'yi', 'fo', 'sq', 'ast', 'ml', 'ku'):
             self.warn('No ISO 639 translations for locale:', locale)

         self.write_stats()
@ -4,7 +4,7 @@ __license__ = 'GPL v3'
 __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
 __docformat__ = 'restructuredtext en'
 __appname__ = u'calibre'
-numeric_version = (0, 8, 36)
+numeric_version = (0, 8, 38)
 __version__ = u'.'.join(map(unicode, numeric_version))
 __author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"

@ -161,4 +161,32 @@ def get_version():
         v += '*'
     return v

+def get_unicode_windows_env_var(name):
+    import ctypes
+    name = unicode(name)
+    n = ctypes.windll.kernel32.GetEnvironmentVariableW(name, None, 0)
+    if n == 0:
+        return None
+    buf = ctypes.create_unicode_buffer(u'\0'*n)
+    ctypes.windll.kernel32.GetEnvironmentVariableW(name, buf, n)
+    return buf.value
+
+def get_windows_username():
+    '''
+    Return the user name of the currently logged-in user as a unicode string.
+    Note that usernames on Windows are case-insensitive; the case of the value
+    returned depends on what the user typed into the login box at login time.
+    '''
+    import ctypes
+    try:
+        advapi32 = ctypes.windll.advapi32
+        GetUserName = getattr(advapi32, u'GetUserNameW')
+    except AttributeError:
+        pass
+    else:
+        buf = ctypes.create_unicode_buffer(257)
+        n = ctypes.c_int(257)
+        if GetUserName(buf, ctypes.byref(n)):
+            return buf.value
+
+    return get_unicode_windows_env_var(u'USERNAME')
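`get_unicode_windows_env_var` above uses the classic Win32 two-call pattern: the first call asks the API for the required buffer size, the second call fills a buffer of that size. A portable sketch of the pattern follows; `fake_get_env` and its lookup table are invented stand-ins so the example runs anywhere, whereas the real code calls `GetEnvironmentVariableW` through ctypes.

```python
# Invented table standing in for the process environment.
VALUES = {u'USERNAME': u'kovid'}

def fake_get_env(name, buf, size):
    # Mimics the Win32 convention: with no buffer (or one that is too
    # small) return the required size including the NUL terminator;
    # otherwise copy the value and return the number of chars written.
    val = VALUES.get(name)
    if val is None:
        return 0
    if buf is None or size <= len(val):
        return len(val) + 1
    buf[:] = list(val)
    return len(val)

def get_env(name):
    # Call 1: learn the required buffer size (0 means "not set").
    n = fake_get_env(name, None, 0)
    if n == 0:
        return None
    # Call 2: allocate exactly n slots and fill them.
    buf = [u'\0'] * n
    fake_get_env(name, buf, n)
    return u''.join(buf).rstrip(u'\0')

print(get_env(u'USERNAME'))  # kovid
```

The same shape appears throughout the Win32 API; sizing the buffer from the first call avoids both truncation and guessing a "big enough" constant.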
@ -1508,6 +1508,7 @@ class StoreVirtualoStore(StoreBase):

     headquarters = 'PL'
     formats = ['EPUB', 'MOBI', 'PDF']
+    affiliate = True

 class StoreWaterstonesUKStore(StoreBase):
     name = 'Waterstones UK'
@ -38,6 +38,7 @@ class ANDROID(USBMS):
             0xca4 : [0x100, 0x0227, 0x0226, 0x222],
             0xca9 : [0x100, 0x0227, 0x0226, 0x222],
             0xcac : [0x100, 0x0227, 0x0226, 0x222],
+            0x2910 : [0x222],
             },

     # Eken
@ -175,13 +176,13 @@ class ANDROID(USBMS):
             'GT-S5830_CARD', 'GT-S5570_CARD', 'MB870', 'MID7015A',
             'ALPANDIGITAL', 'ANDROID_MID', 'VTAB1008', 'EMX51_BBG_ANDROI',
             'UMS', '.K080', 'P990', 'LTE', 'MB853', 'GT-S5660_CARD', 'A107',
-            'GT-I9003_CARD', 'XT912', 'FILE-CD_GADGET', 'RK29_SDK']
+            'GT-I9003_CARD', 'XT912', 'FILE-CD_GADGET', 'RK29_SDK', 'MB855']
     WINDOWS_CARD_A_MEM = ['ANDROID_PHONE', 'GT-I9000_CARD', 'SGH-I897',
             'FILE-STOR_GADGET', 'SGH-T959', 'SAMSUNG_ANDROID', 'GT-P1000_CARD',
             'A70S', 'A101IT', '7', 'INCREDIBLE', 'A7EB', 'SGH-T849_CARD',
             '__UMS_COMPOSITE', 'SGH-I997_CARD', 'MB870', 'ALPANDIGITAL',
             'ANDROID_MID', 'P990_SD_CARD', '.K080', 'LTE_CARD', 'MB853',
-            'A1-07___C0541A4F', 'XT912']
+            'A1-07___C0541A4F', 'XT912', 'MB855']

     OSX_MAIN_MEM = 'Android Device Main Memory'
@ -209,8 +209,8 @@ class ALURATEK_COLOR(USBMS):

     EBOOK_DIR_MAIN = EBOOK_DIR_CARD_A = 'books'

-    VENDOR_NAME = 'USB_2.0'
-    WINDOWS_MAIN_MEM = WINDOWS_CARD_A_MEM = 'USB_FLASH_DRIVER'
+    VENDOR_NAME = ['USB_2.0', 'EZREADER']
+    WINDOWS_MAIN_MEM = WINDOWS_CARD_A_MEM = ['USB_FLASH_DRIVER', '.']

 class TREKSTOR(USBMS):
@ -8,7 +8,7 @@ manner.
 import sys, os, re
 from threading import RLock

-from calibre.constants import iswindows, isosx, plugins, islinux
+from calibre.constants import iswindows, isosx, plugins, islinux, isfreebsd

 osx_scanner = win_scanner = linux_scanner = None

@ -155,17 +155,78 @@ class LinuxScanner(object):
             ans.add(tuple(dev))
         return ans

+class FreeBSDScanner(object):
+
+    def __call__(self):
+        ans = set([])
+        import dbus
+
+        try:
+            bus = dbus.SystemBus()
+            manager = dbus.Interface(bus.get_object('org.freedesktop.Hal',
+                '/org/freedesktop/Hal/Manager'), 'org.freedesktop.Hal.Manager')
+            paths = manager.FindDeviceStringMatch('freebsd.driver','da')
+            for path in paths:
+                obj = bus.get_object('org.freedesktop.Hal', path)
+                objif = dbus.Interface(obj, 'org.freedesktop.Hal.Device')
+                parentdriver = None
+                while parentdriver != 'umass':
+                    try:
+                        obj = bus.get_object('org.freedesktop.Hal',
+                            objif.GetProperty('info.parent'))
+                        objif = dbus.Interface(obj, 'org.freedesktop.Hal.Device')
+                        try:
+                            parentdriver = objif.GetProperty('freebsd.driver')
+                        except dbus.exceptions.DBusException, e:
+                            continue
+                    except dbus.exceptions.DBusException, e:
+                        break
+                if parentdriver != 'umass':
+                    continue
+                dev = []
+                try:
+                    dev.append(objif.GetProperty('usb.vendor_id'))
+                    dev.append(objif.GetProperty('usb.product_id'))
+                    dev.append(objif.GetProperty('usb.device_revision_bcd'))
+                except dbus.exceptions.DBusException, e:
+                    continue
+                try:
+                    dev.append(objif.GetProperty('info.vendor'))
+                except:
+                    dev.append('')
+                try:
+                    dev.append(objif.GetProperty('info.product'))
+                except:
+                    dev.append('')
+                try:
+                    dev.append(objif.GetProperty('usb.serial'))
+                except:
+                    dev.append('')
+                dev.append(path)
+                ans.add(tuple(dev))
+        except dbus.exceptions.DBusException, e:
+            print >>sys.stderr, "Execution failed:", e
+        return ans
+
 linux_scanner = None

 if islinux:
     linux_scanner = LinuxScanner()

+freebsd_scanner = None
+
+if isfreebsd:
+    freebsd_scanner = FreeBSDScanner()

 class DeviceScanner(object):

     def __init__(self, *args):
         if isosx and osx_scanner is None:
             raise RuntimeError('The Python extension usbobserver must be available on OS X.')
-        self.scanner = win_scanner if iswindows else osx_scanner if isosx else linux_scanner
+        self.scanner = win_scanner if iswindows else osx_scanner if isosx else freebsd_scanner if isfreebsd else linux_scanner
         self.devices = []

     def scan(self):
@ -591,26 +591,7 @@ class Device(DeviceConfig, DevicePlugin):
|
|||||||
mp = self.node_mountpoint(node)
|
mp = self.node_mountpoint(node)
|
||||||
if mp is not None:
|
if mp is not None:
|
||||||
return mp, 0
|
return mp, 0
|
||||||
if type == 'main':
|
def do_mount(node):
|
||||||
label = self.MAIN_MEMORY_VOLUME_LABEL
|
|
||||||
if type == 'carda':
|
|
||||||
label = self.STORAGE_CARD_VOLUME_LABEL
|
|
||||||
if type == 'cardb':
|
|
||||||
label = self.STORAGE_CARD2_VOLUME_LABEL
|
|
||||||
if not label:
|
|
||||||
label = self.STORAGE_CARD_VOLUME_LABEL + ' 2'
|
|
||||||
if not label:
|
|
||||||
label = 'E-book Reader (%s)'%type
|
|
||||||
extra = 0
|
|
||||||
while True:
|
|
||||||
q = ' (%d)'%extra if extra else ''
|
|
||||||
if not os.path.exists('/media/'+label+q):
|
|
||||||
break
|
|
||||||
extra += 1
|
|
||||||
if extra:
|
|
||||||
label += ' (%d)'%extra
|
|
||||||
|
|
||||||
def do_mount(node, label):
|
|
||||||
try:
|
try:
|
||||||
from calibre.devices.udisks import mount
|
from calibre.devices.udisks import mount
|
||||||
mount(node)
|
mount(node)
|
||||||
@ -621,8 +602,7 @@ class Device(DeviceConfig, DevicePlugin):
|
|||||||
traceback.print_exc()
|
traceback.print_exc()
|
||||||
return 1
|
return 1
|
||||||
|
|
||||||
|
ret = do_mount(node)
|
||||||
ret = do_mount(node, label)
|
|
||||||
if ret != 0:
|
if ret != 0:
|
||||||
return None, ret
|
return None, ret
|
||||||
return self.node_mountpoint(node)+'/', 0
|
return self.node_mountpoint(node)+'/', 0
|
||||||
@ -697,19 +677,21 @@ class Device(DeviceConfig, DevicePlugin):
|
|||||||
self._card_a_prefix = self._card_b_prefix
|
self._card_a_prefix = self._card_b_prefix
|
||||||
self._card_b_prefix = None
|
self._card_b_prefix = None
|
||||||
|
|
||||||
|
|
||||||
# ------------------------------------------------------
|
# ------------------------------------------------------
|
||||||
#
|
#
|
||||||
# open for FreeBSD
|
# open for FreeBSD
|
||||||
# find the device node or nodes that match the S/N we already have from the scanner
|
# find the device node or nodes that match the S/N we already have from the scanner
|
||||||
# and attempt to mount each one
|
# and attempt to mount each one
|
||||||
# 1. get list of disk devices from sysctl
|
# 1. get list of devices in /dev with matching s/n etc.
|
||||||
# 2. compare that list with the one from camcontrol
|
# 2. get list of volumes associated with each
|
||||||
# 3. and see if it has a matching s/n
|
# 3. attempt to mount each one using Hal
|
||||||
# 6. find any partitions/slices associated with each node
|
# 4. when finished, we have a list of mount points and associated dbus nodes
|
||||||
# 7. attempt to mount, using calibre-mount-helper, each one
|
|
||||||
# 8. when finished, we have a list of mount points and associated device nodes
|
|
||||||
#
|
#
|
||||||
def open_freebsd(self):
|
def open_freebsd(self):
|
||||||
|
import dbus
|
||||||
|
# There should be some way to access the -v arg...
|
||||||
|
verbose = False
|
||||||
|
|
||||||
# this gives us access to the S/N, etc. of the reader that the scanner has found
|
# this gives us access to the S/N, etc. of the reader that the scanner has found
|
||||||
# and the match routines for some of that data, like s/n, vendor ID, etc.
|
# and the match routines for some of that data, like s/n, vendor ID, etc.
|
||||||
@ -719,128 +701,146 @@ class Device(DeviceConfig, DevicePlugin):
|
|||||||
raise DeviceError("Device has no S/N. Can't continue")
|
raise DeviceError("Device has no S/N. Can't continue")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
devs={}
|
vols=[]
|
||||||
di=0
|
|
||||||
ndevs=4 # number of possible devices per reader (main, carda, cardb, launcher)
|
|
||||||
|
|
||||||
#get list of disk devices
|
bus = dbus.SystemBus()
|
||||||
p=subprocess.Popen(["sysctl", "kern.disks"], stdout=subprocess.PIPE)
|
manager = dbus.Interface(bus.get_object('org.freedesktop.Hal',
|
||||||
kdsks=subprocess.Popen(["sed", "s/kern.disks: //"], stdin=p.stdout, stdout=subprocess.PIPE).communicate()[0]
|
'/org/freedesktop/Hal/Manager'), 'org.freedesktop.Hal.Manager')
|
||||||
p.stdout.close()
|
paths = manager.FindDeviceStringMatch('usb.serial',d.serial)
|
||||||
#print kdsks
|
for path in paths:
|
||||||
for dvc in kdsks.split():
|
objif = dbus.Interface(bus.get_object('org.freedesktop.Hal', path), 'org.freedesktop.Hal.Device')
|
||||||
# for each one that's also in the list of cam devices ...
|
# Extra paranoia...
|
||||||
p=subprocess.Popen(["camcontrol", "devlist"], stdout=subprocess.PIPE)
|
|
||||||
devmatch=subprocess.Popen(["grep", dvc], stdin=p.stdout, stdout=subprocess.PIPE).communicate()[0]
|
|
||||||
p.stdout.close()
|
|
||||||
if devmatch:
|
|
||||||
#print "Checking ", devmatch
|
|
||||||
# ... see if we can get a S/N from the actual device node
|
|
||||||
sn=subprocess.Popen(["camcontrol", "inquiry", dvc, "-S"], stdout=subprocess.PIPE).communicate()[0]
|
|
||||||
sn=sn[0:-1] # drop the trailing newline
|
|
||||||
#print "S/N = ", sn
|
|
||||||
if sn and d.match_serial(sn):
|
|
||||||
# we have a matching s/n, record this device node
|
|
||||||
#print "match found: ", dvc
|
|
||||||
devs[di]=dvc
|
|
||||||
di += 1
|
|
||||||
|
|
||||||
# sort the list of devices
|
|
||||||
for i in range(1,ndevs+1):
|
|
||||||
for j in reversed(range(1,i)):
|
|
||||||
if devs[j-1] > devs[j]:
|
|
||||||
x=devs[j-1]
|
|
||||||
devs[j-1]=devs[j]
|
|
||||||
devs[j]=x
|
|
||||||
#print devs
|
|
||||||
|
|
||||||
# now we need to see if any of these have slices/partitions
|
|
||||||
mtd=0
|
|
||||||
label="READER" # could use something more unique, like S/N or productID...
|
|
||||||
cmd = '/usr/local/bin/calibre-mount-helper'
|
|
||||||
cmd = [cmd, 'mount']
|
|
||||||
for i in range(0,ndevs):
|
|
||||||
cmd2="ls /dev/"+devs[i]+"*"
|
|
||||||
p=subprocess.Popen(cmd2, shell=True, stdout=subprocess.PIPE)
|
|
||||||
devs[i]=subprocess.Popen(["cut", "-d", "/", "-f" "3"], stdin=p.stdout, stdout=subprocess.PIPE).communicate()[0]
|
|
||||||
p.stdout.close()
|
|
||||||
|
|
||||||
# try all the nodes to see what we can mount
|
|
||||||
for dev in devs[i].split():
|
|
||||||
mp='/media/'+label+'-'+dev
|
|
||||||
mmp = mp
|
|
||||||
if mmp.endswith('/'):
|
|
||||||
mmp = mmp[:-1]
|
|
||||||
#print "trying ", dev, "on", mp
|
|
||||||
try:
|
try:
|
||||||
p = subprocess.Popen(cmd + ["/dev/"+dev, mmp])
|
if d.idVendor == objif.GetProperty('usb.vendor_id') and \
|
||||||
except OSError:
|
d.idProduct == objif.GetProperty('usb.product_id') and \
|
||||||
raise DeviceError(_('Could not find mount helper: %s.')%cmd[0])
|
d.manufacturer == objif.GetProperty('usb.vendor') and \
|
||||||
while p.poll() is None:
|
d.product == objif.GetProperty('usb.product') and \
|
||||||
time.sleep(0.1)
|
d.serial == objif.GetProperty('usb.serial'):
|
||||||
|
dpaths = manager.FindDeviceStringMatch('storage.originating_device', path)
|
||||||
|
for dpath in dpaths:
|
||||||
|
#devif = dbus.Interface(bus.get_object('org.freedesktop.Hal', dpath), 'org.freedesktop.Hal.Device')
|
||||||
|
try:
|
||||||
|
vpaths = manager.FindDeviceStringMatch('block.storage_device', dpath)
|
||||||
|
for vpath in vpaths:
|
||||||
|
try:
|
||||||
|
vdevif = dbus.Interface(bus.get_object('org.freedesktop.Hal', vpath), 'org.freedesktop.Hal.Device')
|
||||||
|
if not vdevif.GetProperty('block.is_volume'):
|
||||||
|
continue
|
||||||
|
if vdevif.GetProperty('volume.fsusage') != 'filesystem':
|
||||||
|
continue
|
||||||
|
volif = dbus.Interface(bus.get_object('org.freedesktop.Hal', vpath), 'org.freedesktop.Hal.Device.Volume')
|
||||||
|
pdevif = dbus.Interface(bus.get_object('org.freedesktop.Hal', vdevif.GetProperty('info.parent')), 'org.freedesktop.Hal.Device')
|
||||||
|
vol = {'node': pdevif.GetProperty('block.device'),
|
||||||
|
'dev': vdevif,
|
||||||
|
'vol': volif,
|
||||||
|
'label': vdevif.GetProperty('volume.label')}
|
||||||
|
vols.append(vol)
|
||||||
|
except dbus.exceptions.DBusException, e:
|
||||||
|
print e
|
||||||
|
continue
|
||||||
|
except dbus.exceptions.DBusException, e:
|
||||||
|
print e
|
||||||
|
continue
|
||||||
|
except dbus.exceptions.DBusException, e:
|
||||||
|
continue
|
||||||
|
|
||||||
if p.returncode == 0:
|
def ocmp(x,y):
|
||||||
#print " mounted", dev
|
if x['node'] < y['node']:
|
||||||
if i == 0:
|
return -1
|
||||||
|
if x['node'] > y['node']:
|
||||||
|
return 1
|
||||||
|
return 0
|
||||||
|
|
||||||
|
vols.sort(cmp=ocmp)
|
||||||
|
|
||||||
|
if verbose:
|
||||||
|
print "FBSD: ", vols
|
||||||
|
|
||||||
|
mtd=0
|
||||||
|
|
||||||
|
for vol in vols:
|
||||||
|
mp = ''
|
||||||
|
if vol['dev'].GetProperty('volume.is_mounted'):
|
||||||
|
mp = vol['dev'].GetProperty('volume.mount_point')
|
||||||
|
else:
|
||||||
|
try:
|
||||||
|
vol['vol'].Mount('Calibre-'+vol['label'],
|
||||||
|
vol['dev'].GetProperty('volume.fstype'), [])
|
||||||
|
loops = 0
|
||||||
|
while not vol['dev'].GetProperty('volume.is_mounted'):
|
||||||
|
time.sleep(1)
|
||||||
|
loops += 1
|
||||||
|
if loops > 100:
|
||||||
|
print "ERROR: Timeout waiting for mount to complete"
|
||||||
|
continue
|
||||||
|
mp = vol['dev'].GetProperty('volume.mount_point')
|
||||||
|
except dbus.exceptions.DBusException, e:
|
||||||
|
print "Failed to mount ", e
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Mount Point becomes Mount Path
|
||||||
|
mp += '/'
|
||||||
|
|
||||||
|
if verbose:
|
||||||
|
print "FBSD: mounted", vol['label'], "on", mp
|
||||||
|
if mtd == 0:
|
||||||
self._main_prefix = mp
|
self._main_prefix = mp
|
||||||
self._main_dev = "/dev/"+dev
|
self._main_vol = vol['vol']
|
||||||
#print "main = ", self._main_dev, self._main_prefix
|
if verbose:
|
||||||
if i == 1:
|
print "FBSD: main = ", self._main_prefix
|
||||||
|
if mtd == 1:
|
||||||
self._card_a_prefix = mp
|
self._card_a_prefix = mp
|
||||||
self._card_a_dev = "/dev/"+dev
|
self._card_a_vol = vol['vol']
|
||||||
#print "card a = ", self._card_a_dev, self._card_a_prefix
|
if verbose:
|
||||||
if i == 2:
|
print "FBSD: card a = ", self._card_a_prefix
|
||||||
|
if mtd == 2:
|
||||||
self._card_b_prefix = mp
|
self._card_b_prefix = mp
|
||||||
self._card_b_dev = "/dev/"+dev
|
self._card_b_vol = vol['vol']
|
||||||
#print "card b = ", self._card_b_dev, self._card_b_prefix
|
if verbose:
|
||||||
|
print "FBSD: card b = ", self._card_b_prefix
|
||||||
mtd += 1
|
# Note that mtd is used as a bool... not incrementing is fine.
|
||||||
break
|
break
|
||||||
|
mtd += 1
|
||||||
|
|
||||||
if mtd > 0:
|
if mtd > 0:
|
||||||
return True
|
return True
|
||||||
else :
|
raise DeviceError(_('Unable to mount the device'))
|
||||||
return False
|
|
||||||
#
|
#
|
||||||
# ------------------------------------------------------
|
# ------------------------------------------------------
|
||||||
#
|
#
|
||||||
# this one is pretty simple:
|
# this one is pretty simple:
|
||||||
# just umount each of the previously
|
# just umount each of the previously
|
||||||
# mounted filesystems, using the mount helper
|
# mounted filesystems, using the stored volume object
|
||||||
#
|
#
|
||||||
def eject_freebsd(self):
|
def eject_freebsd(self):
|
||||||
cmd = '/usr/local/bin/calibre-mount-helper'
|
import dbus
|
||||||
cmd = [cmd, 'eject']
|
# There should be some way to access the -v arg...
|
||||||
|
verbose = False
|
||||||
|
|
||||||
if self._main_prefix:
|
if self._main_prefix:
|
||||||
#print "umount main:", cmd, self._main_dev, self._main_prefix
|
if verbose:
|
||||||
|
print "FBSD: umount main:", self._main_prefix
|
||||||
try:
|
try:
|
||||||
p = subprocess.Popen(cmd + [self._main_dev, self._main_prefix])
|
self._main_vol.Unmount([])
|
||||||
except OSError:
|
except dbus.exceptions.DBusException, e:
|
||||||
raise DeviceError(
|
print 'Unable to eject ', e
|
||||||
_('Could not find mount helper: %s.')%cmd[0])
|
|
||||||
while p.poll() is None:
|
|
||||||
time.sleep(0.1)
|
|
||||||
|
|
||||||
if self._card_a_prefix:
|
if self._card_a_prefix:
|
||||||
#print "umount card a:", cmd, self._card_a_dev, self._card_a_prefix
|
if verbose:
|
||||||
|
print "FBSD: umount card a:", self._card_a_prefix
|
||||||
try:
|
try:
|
||||||
p = subprocess.Popen(cmd + [self._card_a_dev, self._card_a_prefix])
|
self._card_a_vol.Unmount([])
|
||||||
except OSError:
|
except dbus.exceptions.DBusException, e:
|
||||||
raise DeviceError(
|
print 'Unable to eject ', e
|
||||||
_('Could not find mount helper: %s.')%cmd[0])
|
|
||||||
while p.poll() is None:
|
|
||||||
time.sleep(0.1)
|
|
||||||
|
|
||||||
if self._card_b_prefix:
|
if self._card_b_prefix:
|
||||||
#print "umount card b:", cmd, self._card_b_dev, self._card_b_prefix
|
if verbose:
|
||||||
|
print "FBSD: umount card b:", self._card_b_prefix
|
||||||
try:
|
try:
|
||||||
p = subprocess.Popen(cmd + [self._card_b_dev, self._card_b_prefix])
|
self._card_b_vol.Unmount([])
|
||||||
except OSError:
|
except dbus.exceptions.DBusException, e:
|
||||||
raise DeviceError(
|
print 'Unable to eject ', e
|
||||||
_('Could not find mount helper: %s.')%cmd[0])
|
|
||||||
while p.poll() is None:
|
|
||||||
time.sleep(0.1)
|
|
||||||
|
|
||||||
self._main_prefix = None
|
self._main_prefix = None
|
||||||
self._card_a_prefix = None
|
self._card_a_prefix = None
|
||||||
@ -859,11 +859,10 @@ class Device(DeviceConfig, DevicePlugin):
|
|||||||
time.sleep(7)
|
time.sleep(7)
|
||||||
self.open_linux()
|
self.open_linux()
|
||||||
if isfreebsd:
|
if isfreebsd:
|
||||||
self._main_dev = self._card_a_dev = self._card_b_dev = None
|
self._main_vol = self._card_a_vol = self._card_b_vol = None
|
||||||
try:
|
try:
|
||||||
self.open_freebsd()
|
self.open_freebsd()
|
||||||
except DeviceError:
|
except DeviceError:
|
||||||
subprocess.Popen(["camcontrol", "rescan", "all"])
|
|
||||||
time.sleep(2)
|
time.sleep(2)
|
||||||
self.open_freebsd()
|
self.open_freebsd()
|
||||||
if iswindows:
|
if iswindows:
|
||||||
|
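The `ocmp` comparator added above, together with `vols.sort(cmp=ocmp)`, is a Python 2 idiom; the same ordering of volume records by device node can be expressed with a `key` function, which also survives a Python 3 port. A minimal sketch (the volume dicts here are illustrative stand-ins for the `vol` records the patch builds from HAL properties):

```python
from operator import itemgetter

# Hypothetical volume records shaped like the 'vol' dicts built above.
vols = [
    {'node': '/dev/da0s1', 'label': 'READER'},
    {'node': '/dev/da0',   'label': 'MAIN'},
    {'node': '/dev/da1',   'label': 'CARD_A'},
]

# Sorting by 'node' replaces the cmp-style ocmp() comparator.
vols.sort(key=itemgetter('node'))

print([v['node'] for v in vols])  # -> ['/dev/da0', '/dev/da0s1', '/dev/da1']
```

Lexicographic ordering of the node paths is what keeps the main memory, card A and card B slots assigned in a stable order across reconnects.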
@@ -706,8 +706,9 @@ OptionRecommendation(name='sr3_replace',
         files = [f if isinstance(f, unicode) else f.decode(filesystem_encoding)
                 for f in files]
         from calibre.customize.ui import available_input_formats
-        fmts = available_input_formats()
-        for x in ('htm', 'html', 'xhtm', 'xhtml'): fmts.remove(x)
+        fmts = set(available_input_formats())
+        fmts -= {'htm', 'html', 'xhtm', 'xhtml'}
+        fmts -= set(ARCHIVE_FMTS)

         for ext in fmts:
             for f in files:
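The hunk above replaces repeated `list.remove()` calls with set arithmetic. One practical difference: set difference silently ignores members that are absent, whereas `list.remove()` raises `ValueError`. A self-contained sketch (the format list and `ARCHIVE_FMTS` values here are illustrative, not calibre's actual ones):

```python
# Stand-ins for available_input_formats() and ARCHIVE_FMTS from the patch.
available = ['epub', 'htm', 'html', 'mobi', 'xhtml', 'zip', 'rar']
ARCHIVE_FMTS = ('zip', 'rar')

fmts = set(available)
fmts -= {'htm', 'html', 'xhtm', 'xhtml'}   # no error even though 'xhtm' is absent
fmts -= set(ARCHIVE_FMTS)

print(sorted(fmts))  # -> ['epub', 'mobi']
```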
@@ -157,7 +157,7 @@ class HeuristicProcessor(object):

     ITALICIZE_STYLE_PATS = [
         ur'(?msu)(?<=[\s>"“\'‘])_(?P<words>[^_]+)_',
-        ur'(?msu)(?<=[\s>"“\'‘])/(?P<words>[^/\*>]+)/',
+        ur'(?msu)(?<=[\s>"“\'‘])/(?P<words>[^/\*><]+)/',
         ur'(?msu)(?<=[\s>"“\'‘])~~(?P<words>[^~]+)~~',
         ur'(?msu)(?<=[\s>"“\'‘])\*(?P<words>[^\*]+)\*',
         ur'(?msu)(?<=[\s>"“\'‘])~(?P<words>[^~]+)~',
@@ -172,8 +172,11 @@ class HeuristicProcessor(object):
         for word in ITALICIZE_WORDS:
             html = re.sub(r'(?<=\s|>)' + re.escape(word) + r'(?=\s|<)', '<i>%s</i>' % word, html)

+        def sub(mo):
+            return '<i>%s</i>'%mo.group('words')
+
         for pat in ITALICIZE_STYLE_PATS:
-            html = re.sub(pat, lambda mo: '<i>%s</i>' % mo.group('words'), html)
+            html = re.sub(pat, sub, html)

         return html
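The `ITALICIZE_STYLE_PATS` patterns above each capture the emphasized run in a named group `words`, which the replacement callback wraps in `<i>` tags. A Python 3 rendering of one such pattern (the `ur''` prefix is Python 2 only, and the curly-quote characters are dropped here to keep the sketch ASCII):

```python
import re

# Simplified Python 3 form of one ITALICIZE_STYLE_PATS entry: underscores
# mark emphasis, preceded by whitespace, '>' or a quote character.
pat = r'(?msu)(?<=[\s>"\'])_(?P<words>[^_]+)_'

def sub(mo):
    # Named-group access matches the patch's replacement callback.
    return '<i>%s</i>' % mo.group('words')

html = '<p>some _emphasized_ text</p>'
print(re.sub(pat, sub, html))  # -> <p>some <i>emphasized</i> text</p>
```

The patch's bug fix (excluding `<` from the `/…/` pattern's character class) prevents the slash pattern from swallowing HTML markup between two unrelated slashes.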
@@ -475,7 +475,9 @@ class HTMLInput(InputFormatPlugin):
             # bhref refers to an already existing file. The read() method of
             # DirContainer will call unquote on it before trying to read the
             # file, therefore we quote it here.
-            item.html_input_href = quote(bhref)
+            if isinstance(bhref, unicode):
+                bhref = bhref.encode('utf-8')
+            item.html_input_href = quote(bhref).decode('utf-8')
             if guessed in self.OEB_STYLES:
                 item.override_css_fetch = partial(
                         self.css_import_handler, os.path.dirname(link))
@@ -217,3 +217,18 @@ def opf_metadata(opfpath):
             import traceback
             traceback.print_exc()
             pass
+
+def forked_read_metadata(path, tdir):
+    from calibre.ebooks.metadata.opf2 import metadata_to_opf
+    with open(path, 'rb') as f:
+        fmt = os.path.splitext(path)[1][1:].lower()
+        mi = get_metadata(f, fmt)
+    if mi.cover_data and mi.cover_data[1]:
+        with open(os.path.join(tdir, 'cover.jpg'), 'wb') as f:
+            f.write(mi.cover_data[1])
+        mi.cover_data = (None, None)
+        mi.cover = 'cover.jpg'
+    opf = metadata_to_opf(mi)
+    with open(os.path.join(tdir, 'metadata.opf'), 'wb') as f:
+        f.write(opf)
@@ -767,15 +767,6 @@ if __name__ == '__main__': # tests {{{

             ),

-            ( # This isbn not on amazon
-                {'identifiers':{'isbn': '8324616489'}, 'title':'Learning Python',
-                    'authors':['Lutz']},
-                [title_test('Learning Python, 3rd Edition',
-                    exact=True), authors_test(['Mark Lutz'])
-                ]
-
-            ),
-
             ( # Sophisticated comment formatting
                 {'identifiers':{'isbn': '9781416580829'}},
                 [title_test('Angels & Demons - Movie Tie-In: A Novel',
|
|||||||
self.log.warning('MOBI markup appears to contain random bytes. Stripping.')
|
self.log.warning('MOBI markup appears to contain random bytes. Stripping.')
|
||||||
self.processed_html = self.remove_random_bytes(self.processed_html)
|
self.processed_html = self.remove_random_bytes(self.processed_html)
|
||||||
root = fromstring(self.processed_html)
|
root = fromstring(self.processed_html)
|
||||||
|
if len(root.xpath('body/descendant::*')) < 1:
|
||||||
|
# There are probably stray </html>s in the markup
|
||||||
|
self.processed_html = self.processed_html.replace('</html>',
|
||||||
|
'')
|
||||||
|
root = fromstring(self.processed_html)
|
||||||
|
|
||||||
if root.tag != 'html':
|
if root.tag != 'html':
|
||||||
self.log.warn('File does not have opening <html> tag')
|
self.log.warn('File does not have opening <html> tag')
|
||||||
|
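The recovery idea in the hunk above — if the parsed tree has an empty body, assume stray `</html>` closers and reparse — can be sketched with the standard library alone. This is an illustrative approximation, not calibre's code (the patch works on lxml trees and strips every `</html>`; here stray *trailing* closers are peeled off until the document parses):

```python
import xml.etree.ElementTree as ET

def parse_lenient(markup):
    # Strip stray trailing </html> closers, as found in some malformed
    # MOBI markup, until the document parses cleanly.
    while True:
        try:
            return ET.fromstring(markup)
        except ET.ParseError:
            stripped = markup.rstrip()
            if stripped.endswith('</html>'):
                markup = stripped[:-len('</html>')]
                continue
            raise

root = parse_lenient('<html><body><p>x</p></body></html></html>')
print(root.tag)  # -> html
```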
@@ -494,7 +494,9 @@ class MobiWriter(object):
                 creators = [normalize(unicode(c)) for c in items]
                 items = ['; '.join(creators)]
             for item in items:
-                data = self.COLLAPSE_RE.sub(' ', normalize(unicode(item)))
+                data = normalize(unicode(item))
+                if term != 'description':
+                    data = self.COLLAPSE_RE.sub(' ', data)
                 if term == 'identifier':
                     if data.lower().startswith('urn:isbn:'):
                         data = data[9:]
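The change above makes whitespace collapsing conditional: every metadata value except the description gets its whitespace runs squeezed, so the description's line breaks survive into the MOBI EXTH header. A sketch of that conditional (the `\s+` pattern here is an assumption standing in for calibre's actual `COLLAPSE_RE`):

```python
import re

# Approximation of COLLAPSE_RE: squeeze runs of whitespace to one space.
COLLAPSE_RE = re.compile(r'\s+')

def clean(term, data):
    # Leave 'description' untouched so its formatting survives.
    if term != 'description':
        data = COLLAPSE_RE.sub(' ', data)
    return data

print(clean('title', 'A  Title\n Here'))       # -> A Title Here
print(clean('description', 'Line 1\nLine 2'))  # unchanged
```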
@@ -29,14 +29,38 @@ class Extract(ODF2XHTML):
         root = etree.fromstring(html)
         self.epubify_markup(root, log)
         self.filter_css(root, log)
+        self.extract_css(root)
         html = etree.tostring(root, encoding='utf-8',
                 xml_declaration=True)
         return html

+    def extract_css(self, root):
+        ans = []
+        for s in root.xpath('//*[local-name() = "style" and @type="text/css"]'):
+            ans.append(s.text)
+            s.getparent().remove(s)
+
+        head = root.xpath('//*[local-name() = "head"]')
+        if head:
+            head = head[0]
+            ns = head.nsmap.get(None, '')
+            if ns:
+                ns = '{%s}'%ns
+            etree.SubElement(head, ns+'link', {'type':'text/css',
+                'rel':'stylesheet', 'href':'odfpy.css'})
+
+        with open('odfpy.css', 'wb') as f:
+            f.write((u'\n\n'.join(ans)).encode('utf-8'))
+
     def epubify_markup(self, root, log):
+        from calibre.ebooks.oeb.base import XPath, XHTML
+        # Fix empty title tags
+        for t in XPath('//h:title')(root):
+            if not t.text:
+                t.text = u' '
         # Fix <p><div> constructs as the asinine epubchecker complains
         # about them
-        from calibre.ebooks.oeb.base import XPath, XHTML
         pdiv = XPath('//h:p/h:div')
         for div in pdiv(root):
             div.getparent().tag = XHTML('div')
@@ -146,7 +170,8 @@ class Extract(ODF2XHTML):
             if not mi.authors:
                 mi.authors = [_('Unknown')]
             opf = OPFCreator(os.path.abspath(os.getcwdu()), mi)
-            opf.create_manifest([(os.path.abspath(f), None) for f in walk(os.getcwd())])
+            opf.create_manifest([(os.path.abspath(f), None) for f in
+                walk(os.getcwdu())])
             opf.create_spine([os.path.abspath('index.xhtml')])
             with open('metadata.opf', 'wb') as f:
                 opf.render(f)
@@ -425,15 +425,24 @@ class DirContainer(object):
                 self.opfname = path
                 return

+    def _unquote(self, path):
+        # urlunquote must run on a bytestring and will return a bytestring
+        # If it runs on a unicode object, it returns a double encoded unicode
+        # string: unquote(u'%C3%A4') != unquote(b'%C3%A4').decode('utf-8')
+        # and the latter is correct
+        if isinstance(path, unicode):
+            path = path.encode('utf-8')
+        return urlunquote(path).decode('utf-8')
+
     def read(self, path):
         if path is None:
             path = self.opfname
-        path = os.path.join(self.rootdir, path)
-        with open(urlunquote(path), 'rb') as f:
+        path = os.path.join(self.rootdir, self._unquote(path))
+        with open(path, 'rb') as f:
             return f.read()

     def write(self, path, data):
-        path = os.path.join(self.rootdir, urlunquote(path))
+        path = os.path.join(self.rootdir, self._unquote(path))
         dir = os.path.dirname(path)
         if not os.path.isdir(dir):
             os.makedirs(dir)
@@ -442,7 +451,7 @@ class DirContainer(object):

     def exists(self, path):
         try:
-            path = os.path.join(self.rootdir, urlunquote(path))
+            path = os.path.join(self.rootdir, self._unquote(path))
         except ValueError: #Happens if path contains quoted special chars
             return False
         return os.path.isfile(path)
@@ -1068,6 +1077,12 @@ class Manifest(object):
             if item in self.oeb.spine:
                 self.oeb.spine.remove(item)

+        def remove_duplicate_item(self, item):
+            if item in self.ids:
+                item = self.ids[item]
+            del self.ids[item.id]
+            self.items.remove(item)
+
         def generate(self, id=None, href=None):
             """Generate a new unique identifier and/or internal path for use in
             creating a new manifest item, using the provided :param:`id` and/or
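The `_unquote` comment above concerns Python 2's `unquote`: percent-escapes must be decoded to UTF-8 *bytes* first, and only then decoded to text once; feeding it a unicode string yields a double-encoded result. Python 3 makes the same round trip explicit, which is a convenient way to see the invariant the helper protects:

```python
from urllib.parse import quote, unquote_to_bytes

# Percent-encode the UTF-8 bytes of a non-ASCII name, then reverse it:
# unquote to *bytes* first, decode to text exactly once.
name = 'ä'
escaped = quote(name.encode('utf-8'))        # '%C3%A4'
restored = unquote_to_bytes(escaped).decode('utf-8')
print(escaped, restored)  # -> %C3%A4 ä
```

Unquoting the text form directly (Python 2's failure mode) would instead interpret `%C3` and `%A4` as two separate Latin-1 characters, producing mojibake.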
@@ -153,7 +153,7 @@ class CanonicalFragmentIdentifier

 ###
 This class is a namespace to expose CFI functions via the window.cfi
-object. The three most important functions are:
+object. The most important functions are:

 is_compatible(): Throws an error if the browser is not compatible with
                  this script
@@ -166,6 +166,8 @@ class CanonicalFragmentIdentifier
 ###

     constructor: () -> # {{{
+        if not this instanceof arguments.callee
+            throw new Error('CFI constructor called as function')
         this.CREATE_RANGE_ERR = "Your browser does not support the createRange function. Update it to a newer version."
         this.IE_ERR = "Your browser is too old. You need Internet Explorer version 9 or newer."
         div = document.createElement('div')
@@ -322,7 +324,7 @@ class CanonicalFragmentIdentifier
             point.time = r[1] - 0 # Coerce to number
             cfi = cfi.substr(r[0].length)

-        if (r = cfi.match(/^@(-?\d+(\.\d+)?),(-?\d+(\.\d+)?)/)) != null
+        if (r = cfi.match(/^@(-?\d+(\.\d+)?):(-?\d+(\.\d+)?)/)) != null
             # Spatial offset
             point.x = r[1] - 0 # Coerce to number
             point.y = r[3] - 0 # Coerce to number
@@ -416,7 +418,7 @@ class CanonicalFragmentIdentifier
                 rect = target.getBoundingClientRect()
                 px = ((x - rect.left)*100)/target.offsetWidth
                 py = ((y - rect.top)*100)/target.offsetHeight
-                tail = "#{ tail }@#{ fstr px },#{ fstr py }"
+                tail = "#{ tail }@#{ fstr px }:#{ fstr py }"
             else if name != 'audio'
                 # Get the text offset
                 # We use a custom function instead of caretRangeFromPoint as
@@ -579,11 +581,12 @@ class CanonicalFragmentIdentifier

     get_cfi = (ox, oy) ->
         try
-            cfi = this.at(ox, oy)
-            point = this.point(cfi)
+            cfi = window.cfi.at(ox, oy)
+            point = window.cfi.point(cfi)
         catch err
             cfi = null

+        if cfi
             if point.range != null
                 r = point.range
                 rect = r.getClientRects()[0]
@@ -625,8 +628,16 @@ class CanonicalFragmentIdentifier
                 return cfi
             cury += delta

-        # TODO: Return the CFI corresponding to the <body> tag
-        null
+        # Use a spatial offset on the html element, since we could not find a
+        # normal CFI
+        [x, y] = window_scroll_pos()
+        de = document.documentElement
+        rect = de.getBoundingClientRect()
+        px = (x*100)/rect.width
+        py = (y*100)/rect.height
+        cfi = "/2@#{ fstr px }:#{ fstr py }"
+
+        return cfi

     # }}}
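The cfi.coffee hunks above switch the spatial-offset separator from `,` to `:`, so a location reads `@x:y` (percent offsets into the target element). The same parse, transcribed to Python for illustration (the regex is lifted directly from the patched CoffeeScript):

```python
import re

# Spatial offset in the patched CFI grammar: '@x:y' with signed decimals.
SPATIAL = re.compile(r'^@(-?\d+(\.\d+)?):(-?\d+(\.\d+)?)')

m = SPATIAL.match('@12.5:77.2')
x, y = float(m.group(1)), float(m.group(3))
print(x, y)  # -> 12.5 77.2

# The old comma-separated form no longer matches.
print(SPATIAL.match('@12.5,77.2'))  # -> None
```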
@@ -30,18 +30,23 @@ window_ypos = (pos=null) ->
     window.scrollTo(0, pos)

 mark_and_reload = (evt) ->
-    # Remove image in case the click was on the image itself, we want the cfi to
-    # be on the underlying element
     x = evt.clientX
     y = evt.clientY
     if evt.button == 2
         return # Right mouse click, generated only in firefox
-    reset = document.getElementById('reset')
-    if document.elementFromPoint(x, y) == reset
+    if document.elementFromPoint(x, y)?.getAttribute('id') in ['reset', 'viewport_mode']
         return

+    # Remove image in case the click was on the image itself, we want the cfi to
+    # be on the underlying element
     ms = document.getElementById("marker")
     if ms
-        ms.style.display = 'none'
+        ms.parentNode?.removeChild(ms)
+
+    if document.getElementById('viewport_mode').checked
+        cfi = window.cfi.at_current()
+        window.cfi.scroll_to(cfi)
+        return

     fn = () ->
         try
@@ -8,6 +8,7 @@
         body {
             font-family: sans-serif;
             background-color: white;
+            padding-bottom: 500px;
         }

         h1, h2 { color: #005a9c }
@@ -48,7 +49,13 @@
     <div id="container">
         <h1 id="first-h1">Testing cfi.coffee</h1>
         <p>Click anywhere and the location will be marked with a marker, whose position is set via a CFI.</p>
-        <p><a id="reset" href="/">Reset CFI to None</a></p>
+        <p>
+            <a id="reset" href="/">Reset CFI to None</a>
+
+            Test viewport location calculation:
+            <input type="checkbox" id="viewport_mode" title=
+                "Checking this will cause the window to scroll to a position based on a CFI calculated for the windows current position."/>
+        </p>
         <h2>A div with scrollbars</h2>
         <p>Scroll down and click on some elements. Make sure to hit both
         bold and not bold text as well as different points on the image</p>
@@ -327,7 +327,7 @@ class OEBReader(object):
         manifest = self.oeb.manifest
         for elem in xpath(opf, '/o2:package/o2:guide/o2:reference'):
             href = elem.get('href')
-            path = urldefrag(href)[0]
+            path = urlnormalize(urldefrag(href)[0])
             if path not in manifest.hrefs:
                 self.logger.warn(u'Guide reference %r not found' % href)
                 continue
@@ -627,11 +627,27 @@ class OEBReader(object):
             return
         self.oeb.metadata.add('cover', cover.id)

+    def _manifest_remove_duplicates(self):
+        seen = set()
+        dups = set()
+        for item in self.oeb.manifest:
+            if item.href in seen:
+                dups.add(item.href)
+            seen.add(item.href)
+
+        for href in dups:
+            items = [x for x in self.oeb.manifest if x.href == href]
+            for x in items:
+                if x not in self.oeb.spine:
+                    self.oeb.log.warn('Removing duplicate manifest item with id:', x.id)
+                    self.oeb.manifest.remove_duplicate_item(x)
+
     def _all_from_opf(self, opf):
         self.oeb.version = opf.get('version', '1.2')
         self._metadata_from_opf(opf)
         self._manifest_from_opf(opf)
         self._spine_from_opf(opf)
+        self._manifest_remove_duplicates()
         self._guide_from_opf(opf)
         item = self._find_ncx(opf)
         self._toc_from_opf(opf, item)
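The duplicate scan in `_manifest_remove_duplicates` above is a two-pass set idiom: one pass records every href seen and collects those seen more than once, a second pass decides which copies to drop. The core of the first pass, with illustrative hrefs:

```python
# Hrefs as an OPF manifest might list them; duplicates are illustrative.
hrefs = ['toc.ncx', 'ch1.html', 'ch2.html', 'ch1.html', 'style.css', 'ch2.html']

seen = set()
dups = set()
for href in hrefs:
    if href in seen:
        dups.add(href)   # second and later sightings mark a duplicate
    seen.add(href)

print(sorted(dups))  # -> ['ch1.html', 'ch2.html']
```

Keeping the check to spine membership in the second pass (as the patch does) ensures that a duplicate which actually appears in the reading order is never the copy that gets removed.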
@@ -12,7 +12,7 @@ from lxml import etree
 from urlparse import urlparse
 from collections import OrderedDict

-from calibre.ebooks.oeb.base import XPNSMAP, TOC, XHTML, xml2text
+from calibre.ebooks.oeb.base import XPNSMAP, TOC, XHTML, xml2text, barename
 from calibre.ebooks import ConversionError

 def XPath(x):
@@ -59,6 +59,18 @@ class DetectStructure(object):
             pb_xpath = XPath(opts.page_breaks_before)
             for item in oeb.spine:
                 for elem in pb_xpath(item.data):
+                    try:
+                        prev = elem.itersiblings(tag=etree.Element,
+                                preceding=True).next()
+                        if (barename(elem.tag) in {'h1', 'h2'} and barename(
+                                prev.tag) in {'h1', 'h2'} and (not prev.tail or
+                                    not prev.tail.split())):
+                            # We have two adjacent headings, do not put a page
+                            # break on the second one
+                            continue
+                    except StopIteration:
+                        pass
+
                     style = elem.get('style', '')
                     if style:
                         style += '; '
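The `itersiblings(preceding=True)` check above is lxml-specific; the logic it implements — skip the page break on an `h1`/`h2` whose immediately preceding sibling is also an `h1`/`h2` with no intervening non-whitespace text — can be sketched in stdlib terms by walking a parent's children and tracking the previous element. This is an illustration of the rule, not calibre's code:

```python
import xml.etree.ElementTree as ET

body = ET.fromstring(
    '<body><h1>Part I</h1><h2>Chapter 1</h2><p>text</p><h2>Chapter 2</h2></body>')

breaks = []
prev = None
for elem in body:
    if elem.tag in {'h1', 'h2'}:
        # Adjacent means: previous sibling is also a heading and has no
        # non-whitespace tail text between the two.
        adjacent = (prev is not None and prev.tag in {'h1', 'h2'}
                    and (not prev.tail or not prev.tail.split()))
        if not adjacent:
            breaks.append(elem.text)
    prev = elem

print(breaks)  # -> ['Part I', 'Chapter 2']
```

`Chapter 1` directly follows `Part I`, so it gets no break of its own; `Chapter 2` follows a paragraph and does.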
@ -101,6 +101,7 @@ gprefs.defaults['preserve_date_on_ctl'] = True
|
|||||||
gprefs.defaults['cb_fullscreen'] = False
|
gprefs.defaults['cb_fullscreen'] = False
|
||||||
gprefs.defaults['worker_max_time'] = 0
|
gprefs.defaults['worker_max_time'] = 0
|
||||||
gprefs.defaults['show_files_after_save'] = True
|
gprefs.defaults['show_files_after_save'] = True
|
||||||
|
gprefs.defaults['auto_add_path'] = None
|
||||||
# }}}
|
# }}}
|
||||||
|
|
||||||
NONE = QVariant() #: Null value to return from the data function of item models
|
NONE = QVariant() #: Null value to return from the data function of item models
|
||||||
@@ -257,7 +258,8 @@ def extension(path):
 def warning_dialog(parent, title, msg, det_msg='', show=False,
                    show_copy_button=True):
     from calibre.gui2.dialogs.message_box import MessageBox
-    d = MessageBox(MessageBox.WARNING, 'WARNING: '+title, msg, det_msg, parent=parent,
+    d = MessageBox(MessageBox.WARNING, _('WARNING:')+ ' ' +
+            title, msg, det_msg, parent=parent,
                    show_copy_button=show_copy_button)
     if show:
         return d.exec_()
@@ -266,7 +268,8 @@ def warning_dialog(parent, title, msg, det_msg='', show=False,
 def error_dialog(parent, title, msg, det_msg='', show=False,
                  show_copy_button=True):
     from calibre.gui2.dialogs.message_box import MessageBox
-    d = MessageBox(MessageBox.ERROR, 'ERROR: '+title, msg, det_msg, parent=parent,
+    d = MessageBox(MessageBox.ERROR, _('ERROR:')+ ' ' +
+            title, msg, det_msg, parent=parent,
                    show_copy_button=show_copy_button)
     if show:
         return d.exec_()
@@ -37,6 +37,7 @@ def get_filters():
             (_('SNB Books'), ['snb']),
             (_('Comics'), ['cbz', 'cbr', 'cbc']),
             (_('Archives'), ['zip', 'rar']),
+            (_('Wordprocessor files'), ['odt', 'doc', 'docx']),
            ]
 
 
src/calibre/gui2/auto_add.py (new file, 156 lines)
@@ -0,0 +1,156 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2012, Kovid Goyal <kovid@kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+import os, tempfile, shutil
+from threading import Thread, Event
+
+from PyQt4.Qt import (QFileSystemWatcher, QObject, Qt, pyqtSignal, QTimer)
+
+from calibre import prints
+from calibre.ptempfile import PersistentTemporaryDirectory
+from calibre.ebooks import BOOK_EXTENSIONS
+
+class Worker(Thread):
+
+    def __init__(self, path, callback):
+        Thread.__init__(self)
+        self.daemon = True
+        self.keep_running = True
+        self.wake_up = Event()
+        self.path, self.callback = path, callback
+        self.staging = set()
+        self.be = frozenset(BOOK_EXTENSIONS)
+
+    def run(self):
+        self.tdir = PersistentTemporaryDirectory('_auto_adder')
+        while self.keep_running:
+            self.wake_up.wait()
+            self.wake_up.clear()
+            if not self.keep_running:
+                break
+            try:
+                self.auto_add()
+            except:
+                import traceback
+                traceback.print_exc()
+
+    def auto_add(self):
+        from calibre.utils.ipc.simple_worker import fork_job
+        from calibre.ebooks.metadata.opf2 import metadata_to_opf
+        from calibre.ebooks.metadata.meta import metadata_from_filename
+
+        files = [x for x in os.listdir(self.path) if x not in self.staging
+                and os.path.isfile(os.path.join(self.path, x)) and
+                os.access(os.path.join(self.path, x), os.R_OK|os.W_OK) and
+                os.path.splitext(x)[1][1:].lower() in self.be]
+        data = {}
+        for fname in files:
+            f = os.path.join(self.path, fname)
+            tdir = tempfile.mkdtemp(dir=self.tdir)
+            try:
+                fork_job('calibre.ebooks.metadata.meta',
+                        'forked_read_metadata', (f, tdir), no_output=True)
+            except:
+                import traceback
+                traceback.print_exc()
+
+            opfpath = os.path.join(tdir, 'metadata.opf')
+            try:
+                if os.stat(opfpath).st_size < 30:
+                    raise Exception('metadata reading failed')
+            except:
+                mi = metadata_from_filename(fname)
+                with open(opfpath, 'wb') as f:
+                    f.write(metadata_to_opf(mi))
+            self.staging.add(fname)
+            data[fname] = tdir
+        if data:
+            self.callback(data)
+
+
+class AutoAdder(QObject):
+
+    metadata_read = pyqtSignal(object)
+
+    def __init__(self, path, parent):
+        QObject.__init__(self, parent)
+        if path and os.path.isdir(path) and os.access(path, os.R_OK|os.W_OK):
+            self.watcher = QFileSystemWatcher(self)
+            self.worker = Worker(path, self.metadata_read.emit)
+            self.watcher.directoryChanged.connect(self.dir_changed,
+                    type=Qt.QueuedConnection)
+            self.metadata_read.connect(self.add_to_db,
+                    type=Qt.QueuedConnection)
+            QTimer.singleShot(2000, self.initialize)
+        elif path:
+            prints(path,
+                'is not a valid directory to watch for new ebooks, ignoring')
+
+    def initialize(self):
+        try:
+            if os.listdir(self.worker.path):
+                self.dir_changed()
+        except:
+            pass
+        self.watcher.addPath(self.worker.path)
+
+    def dir_changed(self, *args):
+        if os.path.isdir(self.worker.path) and os.access(self.worker.path,
+                os.R_OK|os.W_OK):
+            if not self.worker.is_alive():
+                self.worker.start()
+            self.worker.wake_up.set()
+
+    def stop(self):
+        if hasattr(self, 'worker'):
+            self.worker.keep_running = False
+            self.worker.wake_up.set()
+
+    def wait(self):
+        if hasattr(self, 'worker'):
+            self.worker.join()
+
+    def add_to_db(self, data):
+        from calibre.ebooks.metadata.opf2 import OPF
+
+        gui = self.parent()
+        if gui is None:
+            return
+        m = gui.library_view.model()
+        count = 0
+
+        for fname, tdir in data.iteritems():
+            paths = [os.path.join(self.worker.path, fname)]
+            mi = os.path.join(tdir, 'metadata.opf')
+            if not os.access(mi, os.R_OK):
+                continue
+            mi = [OPF(open(mi, 'rb'), tdir,
+                populate_spine=False).to_book_metadata()]
+            m.add_books(paths, [os.path.splitext(fname)[1][1:].upper()], mi,
+                    add_duplicates=True)
+            try:
+                os.remove(os.path.join(self.worker.path, fname))
+                try:
+                    self.worker.staging.remove(fname)
+                except KeyError:
+                    pass
+                shutil.rmtree(tdir)
+            except:
+                pass
+            count += 1
+
+        if count > 0:
+            m.books_added(count)
+            gui.status_bar.show_message(_(
+                'Added %d book(s) automatically from %s') %
+                (count, self.worker.path), 2000)
+            if hasattr(gui, 'db_images'):
+                gui.db_images.reset()
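The `Worker.auto_add` method above selects candidate files by filtering a directory listing against a set of known ebook extensions, read/write access, and a `staging` set of names already being processed. A minimal stdlib-only sketch of that selection step (the extension set and the `candidate_files` name are illustrative, not calibre API):

```python
import os

# Illustrative subset; calibre uses BOOK_EXTENSIONS from calibre.ebooks.
EBOOK_EXTENSIONS = {'epub', 'mobi', 'azw3', 'pdf', 'txt'}

def candidate_files(path, staging, extensions=EBOOK_EXTENSIONS):
    # A file qualifies if it is a regular file we can both read and later
    # delete, has a known ebook extension, and is not already in flight.
    ans = []
    for x in sorted(os.listdir(path)):
        full = os.path.join(path, x)
        if (x not in staging and os.path.isfile(full) and
                os.access(full, os.R_OK | os.W_OK) and
                os.path.splitext(x)[1][1:].lower() in extensions):
            ans.append(x)
    return ans
```

The `staging` set is what lets the watcher fire repeatedly without queuing the same file twice while metadata is still being read.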
@@ -199,8 +199,9 @@ class MenuBar(QMenuBar): # {{{
 
     def update_lm_actions(self):
         for ac in self.added_actions:
-            if ac in self.location_manager.all_actions:
-                ac.setVisible(ac in self.location_manager.available_actions)
+            clone = getattr(ac, 'clone', None)
+            if clone is not None and clone in self.location_manager.all_actions:
+                ac.setVisible(clone in self.location_manager.available_actions)
 
     def init_bar(self, actions):
         for ac in self.added_actions:
@@ -206,6 +206,12 @@ class DeviceManager(Thread): # {{{
                 self.scanner.is_device_connected(self.connected_device,
                         only_presence=True)
             if not connected:
+                if DEBUG:
+                    # Allow the device subsystem to output debugging info about
+                    # why it thinks the device is not connected. Used, for e.g.
+                    # in the can_handle() method of the T1 driver
+                    self.scanner.is_device_connected(self.connected_device,
+                            only_presence=True, debug=True)
                 self.connected_device_removed()
         else:
             possibly_connected_devices = []
@@ -285,7 +285,10 @@ class EmailMixin(object): # {{{
                 else []
             def get_fmts(fmts):
                 files, auto = self.library_view.model().\
-                        get_preferred_formats_from_ids([id_], fmts)
+                        get_preferred_formats_from_ids([id_], fmts,
+                                set_metadata=True,
+                                use_plugboard=plugboard_email_value,
+                                plugboard_formats=plugboard_email_formats)
                 return files
             sent_mails = email_news(mi, remove,
                     get_fmts, self.email_sent, self.job_manager)
@@ -363,14 +363,15 @@ class BooksView(QTableView): # {{{
             history.append([col, order])
         return history
 
-    def apply_sort_history(self, saved_history):
+    def apply_sort_history(self, saved_history, max_sort_levels=3):
         if not saved_history:
             return
-        for col, order in reversed(self.cleanup_sort_history(saved_history)[:3]):
+        for col, order in reversed(self.cleanup_sort_history(
+                saved_history)[:max_sort_levels]):
             self.sortByColumn(self.column_map.index(col),
                     Qt.AscendingOrder if order else Qt.DescendingOrder)
 
-    def apply_state(self, state):
+    def apply_state(self, state, max_sort_levels=3):
         h = self.column_header
         cmap = {}
         hidden = state.get('hidden_columns', [])
@@ -399,7 +400,8 @@ class BooksView(QTableView): # {{{
                     sz = h.sectionSizeHint(cmap[col])
                 h.resizeSection(cmap[col], sz)
 
-        self.apply_sort_history(state.get('sort_history', None))
+        self.apply_sort_history(state.get('sort_history', None),
+                max_sort_levels=max_sort_levels)
 
         for col, alignment in state.get('column_alignment', {}).items():
             self._model.change_alignment(col, alignment)
@@ -474,6 +476,7 @@ class BooksView(QTableView): # {{{
         old_state = self.get_old_state()
         if old_state is None:
             old_state = self.get_default_state()
+        max_levels = 3
 
         if tweaks['sort_columns_at_startup'] is not None:
             sh = []
@@ -488,9 +491,10 @@ class BooksView(QTableView): # {{{
                     import traceback
                     traceback.print_exc()
             old_state['sort_history'] = sh
+            max_levels = max(3, len(sh))
 
         self.column_header.blockSignals(True)
-        self.apply_state(old_state)
+        self.apply_state(old_state, max_sort_levels=max_levels)
         self.column_header.blockSignals(False)
 
         # Resize all rows to have the correct height
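The `apply_sort_history` change above replays a capped sort history, applying the most recent (primary) sort last and relying on sort stability so that older entries survive as tie-breakers. A standalone sketch of the same idea over plain dicts (names and data shapes are illustrative):

```python
def apply_sort_history(rows, history, max_sort_levels=3):
    # history is a list of (key, ascending) pairs, newest first. Replaying
    # it in reverse with a stable sort makes the newest entry the primary
    # key and the older entries tie-breakers. Only the first
    # max_sort_levels entries are honoured, mirroring the capped history.
    for key, ascending in reversed(history[:max_sort_levels]):
        rows.sort(key=lambda r: r[key], reverse=not ascending)
    return rows
```

This is why the view only needs one `sortByColumn` call per history entry rather than a composite key.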
@@ -10,7 +10,7 @@ from PyQt4.Qt import (QCoreApplication, QIcon, QObject, QTimer,
 from calibre import prints, plugins, force_unicode
 from calibre.constants import (iswindows, __appname__, isosx, DEBUG,
         filesystem_encoding)
-from calibre.utils.ipc import ADDRESS, RC
+from calibre.utils.ipc import gui_socket_address, RC
 from calibre.gui2 import (ORG_NAME, APP_UID, initialize_file_icon_provider,
     Application, choose_dir, error_dialog, question_dialog, gprefs)
 from calibre.gui2.main_window import option_parser as _option_parser
@@ -304,7 +304,7 @@ def cant_start(msg=_('If you are sure it is not running')+', ',
     if iswindows:
         what = _('try rebooting your computer.')
     else:
-        what = _('try deleting the file')+': '+ADDRESS
+        what = _('try deleting the file')+': '+ gui_socket_address()
 
     info = base%(where, msg, what)
     error_dialog(None, _('Cannot Start ')+__appname__,
@@ -345,14 +345,14 @@ def main(args=sys.argv):
         return 0
     if si:
         try:
-            listener = Listener(address=ADDRESS)
+            listener = Listener(address=gui_socket_address())
         except socket.error:
             if iswindows:
                 cant_start()
-            if os.path.exists(ADDRESS):
-                os.remove(ADDRESS)
+            if os.path.exists(gui_socket_address()):
+                os.remove(gui_socket_address())
             try:
-                listener = Listener(address=ADDRESS)
+                listener = Listener(address=gui_socket_address())
             except socket.error:
                 cant_start()
         else:
@@ -363,7 +363,7 @@ def main(args=sys.argv):
                 gui_debug=gui_debug)
         otherinstance = False
         try:
-            listener = Listener(address=ADDRESS)
+            listener = Listener(address=gui_socket_address())
         except socket.error: # Good si is correct (on UNIX)
             otherinstance = True
         else:
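The refactor above swaps the fixed `ADDRESS` constant for a `gui_socket_address()` call, but the single-instance mechanism is unchanged: whichever process binds the socket address first with `Listener` is the running instance, and a bind failure means another instance already holds it. A stdlib sketch of that pattern (the function name is illustrative; POSIX is assumed, where a string address maps to an AF_UNIX socket):

```python
import socket
from multiprocessing.connection import Listener

def become_single_instance(address):
    # Returns a Listener if this process won the race for the address,
    # else None, meaning another instance already holds the socket.
    try:
        return Listener(address=address)
    except (OSError, socket.error):
        return None
```

A stale socket file from a crashed process also makes the bind fail, which is why the real code removes the file and retries before giving up.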
@@ -169,7 +169,10 @@ class MetadataSingleDialogBase(ResizableDialog):
         self.basic_metadata_widgets.extend([self.series, self.series_index])
 
         self.formats_manager = FormatsManager(self, self.copy_fmt)
-        self.basic_metadata_widgets.append(self.formats_manager)
+        # We want formats changes to be committed before title/author, as
+        # otherwise we could have data loss if the title/author changed and the
+        # user was trying to add an extra file from the old books directory.
+        self.basic_metadata_widgets.insert(0, self.formats_manager)
         self.formats_manager.metadata_from_format_button.clicked.connect(
                 self.metadata_from_format)
         self.formats_manager.cover_from_format_button.clicked.connect(
@@ -5,14 +5,16 @@ __license__ = 'GPL v3'
 __copyright__ = '2010, Kovid Goyal <kovid@kovidgoyal.net>'
 __docformat__ = 'restructuredtext en'
 
+import os
+
 from calibre.gui2.preferences import ConfigWidgetBase, test_widget, \
-    CommaSeparatedList
+    CommaSeparatedList, AbortCommit
 from calibre.gui2.preferences.adding_ui import Ui_Form
 from calibre.utils.config import prefs
 from calibre.gui2.widgets import FilenamePattern
-from calibre.gui2 import gprefs
+from calibre.gui2 import gprefs, choose_dir, error_dialog, question_dialog
 
 class ConfigWidget(ConfigWidgetBase, Ui_Form):
 
@@ -31,10 +31,18 @@ class ConfigWidget(ConfigWidgetBase, Ui_Form):
                 (_('Create new record for each duplicate format'), 'new record')]
         r('automerge', gprefs, choices=choices)
         r('new_book_tags', prefs, setting=CommaSeparatedList)
+        r('auto_add_path', gprefs, restart_required=True)
 
         self.filename_pattern = FilenamePattern(self)
         self.metadata_box.layout().insertWidget(0, self.filename_pattern)
         self.filename_pattern.changed_signal.connect(self.changed_signal.emit)
+        self.auto_add_browse_button.clicked.connect(self.choose_aa_path)
+
+    def choose_aa_path(self):
+        path = choose_dir(self, 'auto add path choose',
+                _('Choose a folder'))
+        if path:
+            self.opt_auto_add_path.setText(path)
 
     def initialize(self):
         ConfigWidgetBase.initialize(self)
@@ -48,6 +56,27 @@ class ConfigWidget(ConfigWidgetBase, Ui_Form):
         self.filename_pattern.initialize(defaults=True)
 
     def commit(self):
+        path = unicode(self.opt_auto_add_path.text()).strip()
+        if path != gprefs['auto_add_path']:
+            if path:
+                path = os.path.abspath(path)
+                self.opt_auto_add_path.setText(path)
+                if not os.path.isdir(path):
+                    error_dialog(self, _('Invalid folder'),
+                            _('You must specify an existing folder as your '
+                                'auto-add folder. %s does not exist.')%path,
+                            show=True)
+                    raise AbortCommit('invalid auto-add folder')
+                if not os.access(path, os.R_OK|os.W_OK):
+                    error_dialog(self, _('Invalid folder'),
+                            _('You do not have read/write permissions for '
+                                'the folder: %s')%path, show=True)
+                    raise AbortCommit('invalid auto-add folder')
+                if not question_dialog(self, _('Are you sure'),
+                        _('<b>WARNING:</b> Any files you place in %s will be '
+                            'automatically deleted after being added to '
+                            'calibre. Are you sure?')%path):
+                    return
         pattern = self.filename_pattern.commit()
         prefs['filename_pattern'] = pattern
         return ConfigWidgetBase.commit(self)
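The new `commit` logic above validates the auto-add folder before the preference is saved, raising `AbortCommit` for a missing or unwritable path. A stdlib-only sketch of that validation step (the `AbortCommit` class here is a local stand-in for calibre's exception, and `validate_auto_add_path` is an illustrative name):

```python
import os

class AbortCommit(Exception):
    # Stand-in for calibre's AbortCommit, which vetoes saving preferences.
    pass

def validate_auto_add_path(path):
    # Normalise the path, then refuse folders that do not exist or that we
    # cannot both read and write (files are deleted after being added).
    path = os.path.abspath(path)
    if not os.path.isdir(path):
        raise AbortCommit('%s does not exist' % path)
    if not os.access(path, os.R_OK | os.W_OK):
        raise AbortCommit('no read/write permission for %s' % path)
    return path
```

Checking write access up front matters because the watcher deletes files after adding them; a read-only folder would otherwise fail repeatedly at runtime.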
@@ -7,14 +7,24 @@
     <x>0</x>
     <y>0</y>
     <width>753</width>
-    <height>339</height>
+    <height>547</height>
    </rect>
   </property>
   <property name="windowTitle">
    <string>Form</string>
   </property>
   <layout class="QGridLayout" name="gridLayout">
-   <item row="0" column="0" colspan="2">
+   <item row="0" column="0">
+    <widget class="QTabWidget" name="tabWidget">
+     <property name="currentIndex">
+      <number>0</number>
+     </property>
+     <widget class="QWidget" name="tab_3">
+      <attribute name="title">
+       <string>The Add &amp;Process</string>
+      </attribute>
+      <layout class="QGridLayout" name="gridLayout_2">
+       <item row="0" column="0" colspan="3">
     <widget class="QLabel" name="label_6">
      <property name="text">
       <string>Here you can control how calibre will read metadata from the files you add to it. calibre can either read metadata from the contents of the file, or from the filename.</string>
@@ -31,7 +41,7 @@
      </property>
     </widget>
    </item>
-   <item row="1" column="1">
+   <item row="1" column="1" colspan="2">
    <layout class="QHBoxLayout" name="horizontalLayout">
     <item>
      <spacer name="horizontalSpacer">
@@ -58,7 +68,14 @@
     </item>
    </layout>
   </item>
-  <item row="3" column="0">
+  <item row="2" column="0" colspan="3">
+   <widget class="QCheckBox" name="opt_preserve_date_on_ctl">
+    <property name="text">
+     <string>When using the "&amp;Copy to library" action to copy books between libraries, preserve the date</string>
+    </property>
+   </widget>
+  </item>
+  <item row="3" column="0" colspan="2">
    <widget class="QCheckBox" name="opt_add_formats_to_existing">
     <property name="toolTip">
      <string>Automerge: If books with similar titles and authors found, merge the incoming formats automatically into
@@ -72,7 +89,7 @@ Title match ignores leading indefinite articles ("the", "a",
     </property>
    </widget>
   </item>
-  <item row="3" column="1">
+  <item row="3" column="2">
    <widget class="QComboBox" name="opt_automerge">
     <property name="toolTip">
      <string>Automerge: If books with similar titles and authors found, merge the incoming formats automatically into
@@ -98,14 +115,14 @@ Author matching is exact.</string>
     </property>
    </widget>
   </item>
-  <item row="4" column="1">
+  <item row="4" column="2">
    <widget class="QLineEdit" name="opt_new_book_tags">
     <property name="toolTip">
      <string>A comma-separated list of tags that will be applied to books added to the library</string>
     </property>
    </widget>
   </item>
-  <item row="5" column="0" colspan="2">
+  <item row="5" column="0" colspan="3">
    <widget class="QGroupBox" name="metadata_box">
     <property name="title">
      <string>&amp;Configure metadata from file name</string>
@@ -127,16 +144,77 @@ Author matching is exact.</string>
     </layout>
    </widget>
   </item>
-  <item row="2" column="0" colspan="2">
-   <widget class="QCheckBox" name="opt_preserve_date_on_ctl">
+      </layout>
+     </widget>
+     <widget class="QWidget" name="tab_4">
+      <attribute name="title">
+       <string>&amp;Automatic Adding</string>
+      </attribute>
+      <layout class="QVBoxLayout" name="verticalLayout_2">
+       <item>
+        <widget class="QLabel" name="label">
    <property name="text">
-    <string>When using the "&amp;Copy to library" action to copy books between libraries, preserve the date</string>
+    <string>Specify a folder. Any files you put into this folder will be automatically added to calibre (restart required).</string>
    </property>
+         <property name="wordWrap">
+          <bool>true</bool>
+         </property>
+        </widget>
+       </item>
+       <item>
+        <layout class="QHBoxLayout" name="horizontalLayout_2">
+         <item>
+          <widget class="QLineEdit" name="opt_auto_add_path">
+           <property name="placeholderText">
+            <string>Folder to auto-add files from</string>
+           </property>
+          </widget>
+         </item>
+         <item>
+          <widget class="QToolButton" name="auto_add_browse_button">
+           <property name="toolTip">
+            <string>Browse for folder</string>
+           </property>
+           <property name="text">
+            <string>...</string>
+           </property>
+           <property name="icon">
+            <iconset resource="../../../work/calibre/resources/images.qrc">
+             <normaloff>:/images/document_open.png</normaloff>:/images/document_open.png</iconset>
    </property>
   </widget>
  </item>
 </layout>
+       </item>
+       <item>
+        <widget class="QLabel" name="label_2">
+         <property name="text">
+          <string>&lt;b&gt;WARNING:&lt;/b&gt; Files in the above folder will be deleted after being added to calibre.</string>
+         </property>
 </widget>
-<resources/>
+        </widget>
+       </item>
+       <item>
+        <spacer name="verticalSpacer_2">
+         <property name="orientation">
+          <enum>Qt::Vertical</enum>
+         </property>
+         <property name="sizeHint" stdset="0">
+          <size>
+           <width>20</width>
+           <height>40</height>
+          </size>
+         </property>
+        </spacer>
+       </item>
+      </layout>
+     </widget>
+    </widget>
+   </item>
+  </layout>
+ </widget>
+ <resources>
+  <include location="../../../work/calibre/resources/images.qrc"/>
+ </resources>
 <connections>
  <connection>
   <sender>opt_add_formats_to_existing</sender>
@@ -24,18 +24,27 @@ from calibre.constants import iswindows
 
 class PluginModel(QAbstractItemModel, SearchQueryParser): # {{{
 
-    def __init__(self, *args):
-        QAbstractItemModel.__init__(self, *args)
+    def __init__(self, show_only_user_plugins=False):
+        QAbstractItemModel.__init__(self)
         SearchQueryParser.__init__(self, ['all'])
+        self.show_only_user_plugins = show_only_user_plugins
         self.icon = QVariant(QIcon(I('plugins.png')))
         p = QIcon(self.icon).pixmap(32, 32, QIcon.Disabled, QIcon.On)
         self.disabled_icon = QVariant(QIcon(p))
         self._p = p
         self.populate()
 
+    def toggle_shown_plugins(self, show_only_user_plugins):
+        self.show_only_user_plugins = show_only_user_plugins
+        self.populate()
+        self.reset()
+
     def populate(self):
         self._data = {}
         for plugin in initialized_plugins():
+            if (getattr(plugin, 'plugin_path', None) is None
+                    and self.show_only_user_plugins):
+                continue
             if plugin.type not in self._data:
                 self._data[plugin.type] = [plugin]
             else:
@@ -64,6 +73,7 @@ class PluginModel(QAbstractItemModel, SearchQueryParser): # {{{
             if p < 0:
                 if query in lower(self.categories[c]):
                     ans.add((c, p))
+                continue
             else:
                 try:
                     plugin = self._data[self.categories[c]][p]
@@ -209,7 +219,7 @@ class ConfigWidget(ConfigWidgetBase, Ui_Form):
 
     def genesis(self, gui):
         self.gui = gui
-        self._plugin_model = PluginModel()
+        self._plugin_model = PluginModel(self.user_installed_plugins.isChecked())
         self.plugin_view.setModel(self._plugin_model)
         self.plugin_view.setStyleSheet(
                 "QTreeView::item { padding-bottom: 10px;}")
@@ -226,6 +236,10 @@ class ConfigWidget(ConfigWidgetBase, Ui_Form):
         self.next_button.clicked.connect(self.find_next)
         self.previous_button.clicked.connect(self.find_previous)
         self.changed_signal.connect(self.reload_store_plugins)
+        self.user_installed_plugins.stateChanged.connect(self.show_user_installed_plugins)
+
+    def show_user_installed_plugins(self, state):
+        self._plugin_model.toggle_shown_plugins(self.user_installed_plugins.isChecked())
 
     def find(self, query):
         idx = self._plugin_model.find(query)
@@ -65,6 +65,16 @@
    </item>
   </layout>
  </item>
+ <item>
+  <widget class="QCheckBox" name="user_installed_plugins">
+   <property name="toolTip">
+    <string>Show only those plugins that have been installed by you</string>
+   </property>
+   <property name="text">
+    <string>Show only &amp;user installed plugins</string>
+   </property>
+  </widget>
+ </item>
 <item>
  <widget class="QTreeView" name="plugin_view">
   <property name="alternatingRowColors">
@ -3,7 +3,7 @@
|
|||||||
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
from __future__ import (unicode_literals, division, absolute_import, print_function)
|
||||||
|
|
||||||
__license__ = 'GPL 3'
|
__license__ = 'GPL 3'
|
||||||
__copyright__ = '2011, Tomasz Długosz <tomek3d@gmail.com>'
|
__copyright__ = '2011-2012, Tomasz Długosz <tomek3d@gmail.com>'
|
||||||
__docformat__ = 'restructuredtext en'
|
__docformat__ = 'restructuredtext en'
|
||||||
|
|
||||||
import re
|
import re
|
||||||
@ -35,16 +35,14 @@ class GandalfStore(BasicStoreConfig, StorePlugin):
|
|||||||
d.exec_()
|
d.exec_()
|
||||||
|
|
||||||
def search(self, query, max_results=10, timeout=60):
|
def search(self, query, max_results=10, timeout=60):
|
||||||
url = 'http://www.gandalf.com.pl/s/'
|
counter = max_results
|
||||||
values={
|
page = 1
|
||||||
'search': query.decode('utf-8').encode('iso8859_2'),
|
url = 'http://www.gandalf.com.pl/we/' + urllib.quote_plus(query.decode('utf-8').encode('iso8859_2')) + '/bdb'
|
||||||
'dzialx':'11'
|
|
||||||
}
|
|
||||||
|
|
||||||
br = browser()
|
br = browser()
|
||||||
|
|
||||||
counter = max_results
|
while counter:
|
||||||
with closing(br.open(url, data=urllib.urlencode(values), timeout=timeout)) as f:
|
with closing(br.open((url + str(page-1) + '/#s') if (page-1) else (url + '/#s'), timeout=timeout)) as f:
|
||||||
doc = html.fromstring(f.read())
|
doc = html.fromstring(f.read())
|
||||||
for data in doc.xpath('//div[@class="box"]'):
|
for data in doc.xpath('//div[@class="box"]'):
|
||||||
if counter <= 0:
|
if counter <= 0:
|
||||||
@ -79,3 +77,6 @@ class GandalfStore(BasicStoreConfig, StorePlugin):
|
|||||||
s.formats = formats.upper().strip()
|
s.formats = formats.upper().strip()
|
||||||
|
|
||||||
yield s
|
yield s
|
||||||
|
if not doc.xpath('boolean(//div[@class="wyszukiwanie_podstawowe_header"]//div[@class="box"])'):
|
||||||
|
break
|
||||||
|
page+=1
|
||||||
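The rewritten Gandalf `search()` above switches from a single POST query to paged GET requests: it keeps a `counter` quota and a `page` index, and stops when either the quota is met or a page comes back empty. A minimal sketch of that loop shape (illustrative only; `fetch_page` and `paged_results` are stand-in names, not calibre code):

```python
def paged_results(fetch_page, max_results=10):
    # Walk result pages until the quota is met or a page yields nothing,
    # mirroring the counter/page loop in the rewritten search().
    counter = max_results
    page = 1
    while counter:
        items = fetch_page(page)
        if not items:
            break
        for item in items:
            if counter <= 0:
                break
            counter -= 1
            yield item
        page += 1

# Fake store with two pages of results:
pages = {1: ['book-a', 'book-b'], 2: ['book-c']}
print(list(paged_results(lambda p: pages.get(p, []), max_results=10)))
```

The empty-page check plays the same role as the store plugin's `doc.xpath('boolean(...)')` test: without it, a query with fewer matches than `max_results` would loop forever.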

@@ -24,10 +24,12 @@ from calibre.gui2.store.web_store_dialog import WebStoreDialog
 class VirtualoStore(BasicStoreConfig, StorePlugin):

     def open(self, parent=None, detail_item=None, external=False):
-        url = 'http://virtualo.pl/ebook/c2/'
+        pid = '12'
+        url = 'http://virtualo.pl/ebook/c2/?pr=' + pid
+        detail_url = detail_item + '&pr=' + pid if detail_item else url

         if external or self.config.get('open_external', False):
-            open_url(QUrl(url_slash_cleaner(detail_item if detail_item else url)))
+            open_url(QUrl(url_slash_cleaner(detail_url)))
         else:
             d = WebStoreDialog(self.gui, url, parent, detail_item)
             d.setWindowTitle(self.name)
@@ -54,11 +56,13 @@ class VirtualoStore(BasicStoreConfig, StorePlugin):
                 price = ''.join(data.xpath('.//span[@class="price"]/text() | .//span[@class="price abbr"]/text()'))
                 cover_url = ''.join(data.xpath('.//table/tr[1]/td[1]/a/img/@src'))
                 title = ''.join(data.xpath('.//div[@class="title"]/a/text()'))
+                title = re.sub(r'\ WM', '', title)
                 author = ', '.join(data.xpath('.//div[@class="authors"]/a/text()'))
                 formats = ', '.join(data.xpath('.//span[@class="format"]/a/text()'))
                 formats = re.sub(r'(, )?ONLINE(, )?', '', formats)
                 drm = drm_pattern.search(formats)
                 formats = re.sub(r'(, )?ADE(, )?', '', formats)
+                formats = re.sub(r'\ WM', '', formats)

                 counter -= 1

@@ -41,6 +41,7 @@ from calibre.gui2.search_box import SearchBoxMixin, SavedSearchBoxMixin
 from calibre.gui2.search_restriction_mixin import SearchRestrictionMixin
 from calibre.gui2.tag_browser.ui import TagBrowserMixin
 from calibre.gui2.keyboard import Manager
+from calibre.gui2.auto_add import AutoAdder
 from calibre.library.sqlite import sqlite, DatabaseException

 class Listener(Thread): # {{{
@@ -292,6 +293,8 @@ class Main(MainWindow, MainWindowMixin, DeviceMixin, EmailMixin, # {{{
         self.library_view.model().books_added(1)
         if hasattr(self, 'db_images'):
             self.db_images.reset()
+        if self.library_view.model().rowCount(None) < 3:
+            self.library_view.resizeColumnsToContents()

         self.library_view.model().count_changed()
         self.bars_manager.database_changed(self.library_view.model().db)
@@ -347,6 +350,7 @@ class Main(MainWindow, MainWindowMixin, DeviceMixin, EmailMixin, # {{{
         self.device_manager.set_current_library_uuid(db.library_id)

         self.keyboard.finalize()
+        self.auto_adder = AutoAdder(gprefs['auto_add_path'], self)

         # Collect cycles now
         gc.collect()
@@ -464,6 +468,7 @@ class Main(MainWindow, MainWindowMixin, DeviceMixin, EmailMixin, # {{{
             self.library_view.model().refresh()
             self.library_view.model().research()
             self.tags_view.recount()
+            self.library_view.model().db.refresh_format_cache()
         elif msg.startswith('shutdown:'):
             self.quit(confirm_quit=False)
         else:
@@ -694,6 +699,7 @@ class Main(MainWindow, MainWindowMixin, DeviceMixin, EmailMixin, # {{{
         while self.spare_servers:
             self.spare_servers.pop().close()
         self.device_manager.keep_going = False
+        self.auto_adder.stop()
         mb = self.library_view.model().metadata_backup
         if mb is not None:
             mb.stop()

@@ -24,6 +24,7 @@ from calibre.constants import iswindows
 from calibre import prints, guess_type
 from calibre.gui2.viewer.keys import SHORTCUTS
 from calibre.gui2.viewer.javascript import JavaScriptLoader
+from calibre.gui2.viewer.position import PagePosition

 # }}}

@@ -170,10 +171,12 @@ class Document(QWebPage): # {{{
         settings.setFontFamily(QWebSettings.SerifFont, opts.serif_family)
         settings.setFontFamily(QWebSettings.SansSerifFont, opts.sans_family)
         settings.setFontFamily(QWebSettings.FixedFont, opts.mono_family)
+        settings.setAttribute(QWebSettings.ZoomTextOnly, True)

     def do_config(self, parent=None):
         d = ConfigDialog(self.shortcuts, parent)
         if d.exec_() == QDialog.Accepted:
-            self.set_font_settings()
-            self.set_user_stylesheet()
-            self.misc_config()
+            with self.page_position:
+                self.set_font_settings()
+                self.set_user_stylesheet()
+                self.misc_config()
@@ -196,6 +199,7 @@ class Document(QWebPage): # {{{
         pal = self.palette()
         pal.setBrush(QPalette.Background, QColor(0xee, 0xee, 0xee))
         self.setPalette(pal)
+        self.page_position = PagePosition(self)

         settings = self.settings()

@@ -895,15 +899,16 @@ class DocumentView(QWebView): # {{{
     @dynamic_property
     def multiplier(self):
         def fget(self):
-            return self.document.mainFrame().textSizeMultiplier()
+            return self.zoomFactor()
         def fset(self, val):
-            self.document.mainFrame().setTextSizeMultiplier(val)
+            self.setZoomFactor(val)
             self.magnification_changed.emit(val)
         return property(fget=fget, fset=fset)

     def magnify_fonts(self, amount=None):
         if amount is None:
             amount = self.document.font_magnification_step
-        self.multiplier += amount
+        with self.document.page_position:
+            self.multiplier += amount
         return self.document.scroll_fraction

@@ -911,6 +916,7 @@ class DocumentView(QWebView): # {{{
         if amount is None:
             amount = self.document.font_magnification_step
         if self.multiplier >= amount:
-            self.multiplier -= amount
+            with self.document.page_position:
+                self.multiplier -= amount
         return self.document.scroll_fraction

@@ -481,16 +481,14 @@ class EbookViewer(MainWindow, Ui_EbookViewer):
             self.load_ebook(action.path)

     def font_size_larger(self):
-        frac = self.view.magnify_fonts()
+        self.view.magnify_fonts()
         self.action_font_size_larger.setEnabled(self.view.multiplier < 3)
         self.action_font_size_smaller.setEnabled(self.view.multiplier > 0.2)
-        self.set_page_number(frac)

     def font_size_smaller(self):
-        frac = self.view.shrink_fonts()
+        self.view.shrink_fonts()
         self.action_font_size_larger.setEnabled(self.view.multiplier < 3)
         self.action_font_size_smaller.setEnabled(self.view.multiplier > 0.2)
-        self.set_page_number(frac)

     def magnification_changed(self, val):
         tt = _('Make font size %(which)s\nCurrent magnification: %(mag).1f')

 src/calibre/gui2/viewer/position.py (new file, 68 lines)
@@ -0,0 +1,68 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2012, Kovid Goyal <kovid@kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+import json
+
+class PagePosition(object):
+
+    def __init__(self, document):
+        self.document = document
+
+    @property
+    def viewport_cfi(self):
+        ans = None
+        res = self.document.mainFrame().evaluateJavaScript('''
+            ans = 'undefined';
+            try {
+                ans = window.cfi.at_current();
+                if (!ans) ans = 'undefined';
+            } catch (err) {
+                window.console.log(err);
+            }
+            window.console.log("Viewport cfi: " + ans);
+            ans;
+            ''')
+        if res.isValid() and not res.isNull() and res.type() == res.String:
+            c = unicode(res.toString())
+            if c != 'undefined':
+                ans = c
+        return ans
+
+    def scroll_to_cfi(self, cfi):
+        if cfi:
+            cfi = json.dumps(cfi)
+            self.document.mainFrame().evaluateJavaScript('''
+                function fix_scroll() {
+                    /* cfi.scroll_to() uses scrollIntoView() which can result
+                       in scrolling along the x-axis. So we
+                       explicitly scroll to x=0.
+                    */
+                    scrollTo(0, window.pageYOffset)
+                }
+
+                window.cfi.scroll_to(%s, fix_scroll);
+                '''%cfi)
+
+    @property
+    def current_pos(self):
+        ans = self.viewport_cfi
+        if not ans:
+            ans = self.document.scroll_fraction
+        return ans
+
+    def __enter__(self):
+        self._cpos = self.current_pos
+
+    def __exit__(self, *args):
+        if isinstance(self._cpos, (int, float)):
+            self.document.scroll_fraction = self._cpos
+        else:
+            self.scroll_to_cfi(self._cpos)
+        self._cpos = None
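The new `PagePosition` class is used as a context manager: entering it records the current reading position (a CFI string when available, otherwise the scroll fraction) and exiting restores it, which is how the viewer keeps your place across font-size and configuration changes. A stripped-down sketch of that enter/restore protocol, with a stand-in document object instead of the Qt `QWebPage` (`PositionKeeper` and `FakeDocument` are illustrative names, not calibre code):

```python
class PositionKeeper:
    # Minimal stand-in for PagePosition: save a position on entry,
    # restore it on exit, regardless of what happened in between.
    def __init__(self, document):
        self.document = document
        self._cpos = None

    def __enter__(self):
        self._cpos = self.document.scroll_fraction

    def __exit__(self, *args):
        self.document.scroll_fraction = self._cpos
        self._cpos = None

class FakeDocument:
    def __init__(self):
        self.scroll_fraction = 0.4

doc = FakeDocument()
with PositionKeeper(doc):
    doc.scroll_fraction = 0.0  # e.g. a zoom change resets the viewport
print(doc.scroll_fraction)     # position restored on exit
```

Wrapping the mutation in `with self.document.page_position:` (as the `magnify_fonts`/`shrink_fonts` hunks above do) keeps the save/restore logic in one place instead of threading a `frac` return value through every caller.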
@@ -11,8 +11,8 @@ from Queue import Empty, Queue
 from contextlib import closing


-from PyQt4.Qt import QWizard, QWizardPage, QPixmap, Qt, QAbstractListModel, \
-        QVariant, QItemSelectionModel, SIGNAL, QObject, QTimer
+from PyQt4.Qt import (QWizard, QWizardPage, QPixmap, Qt, QAbstractListModel,
+        QVariant, QItemSelectionModel, SIGNAL, QObject, QTimer, pyqtSignal)
 from calibre import __appname__, patheq
 from calibre.library.database2 import LibraryDatabase2
 from calibre.library.move import MoveLibrary
@@ -613,6 +613,7 @@ def move_library(oldloc, newloc, parent, callback_on_complete):
 class LibraryPage(QWizardPage, LibraryUI):

     ID = 1
+    retranslate = pyqtSignal()

     def __init__(self):
         QWizardPage.__init__(self)
@@ -620,8 +621,7 @@ class LibraryPage(QWizardPage, LibraryUI):
         self.registerField('library_location', self.location)
         self.connect(self.button_change, SIGNAL('clicked()'), self.change)
         self.init_languages()
-        self.connect(self.language, SIGNAL('currentIndexChanged(int)'),
-                self.change_language)
+        self.language.currentIndexChanged[int].connect(self.change_language)
         self.connect(self.location, SIGNAL('textChanged(QString)'),
                 self.location_text_changed)

@@ -660,7 +660,7 @@ class LibraryPage(QWizardPage, LibraryUI):
         from calibre.gui2 import qt_app
         set_translators()
         qt_app.load_translations()
-        self.emit(SIGNAL('retranslate()'))
+        self.retranslate.emit()
         self.init_languages()
         try:
             lang = prefs['language'].lower()[:2]
@@ -780,6 +780,22 @@ class FinishPage(QWizardPage, FinishUI):

 class Wizard(QWizard):

+    BUTTON_TEXTS = {
+            'Next': '&Next >',
+            'Back': '< &Back',
+            'Cancel': 'Cancel',
+            'Finish': '&Finish',
+            'Commit': 'Commit'
+    }
+    # The latter is simply to mark the texts for translation
+    if False:
+        _('&Next >')
+        _('< &Back')
+        _('Cancel')
+        _('&Finish')
+        _('Commit')
+
+
     def __init__(self, parent):
         QWizard.__init__(self, parent)
         self.setWindowTitle(__appname__+' '+_('welcome wizard'))
@@ -792,8 +808,7 @@ class Wizard(QWizard):
         self.setPixmap(self.BackgroundPixmap, QPixmap(I('wizard.png')))
         self.device_page = DevicePage()
         self.library_page = LibraryPage()
-        self.connect(self.library_page, SIGNAL('retranslate()'),
-                self.retranslate)
+        self.library_page.retranslate.connect(self.retranslate)
         self.finish_page = FinishPage()
         self.set_finish_text()
         self.kindle_page = KindlePage()
@@ -813,12 +828,18 @@ class Wizard(QWizard):
         nh = min(400, nh)
         nw = min(580, nw)
         self.resize(nw, nh)
+        self.set_button_texts()
+
+    def set_button_texts(self):
+        for but, text in self.BUTTON_TEXTS.iteritems():
+            self.setButtonText(getattr(self, but+'Button'), _(text))

     def retranslate(self):
         for pid in self.pageIds():
             page = self.page(pid)
             page.retranslateUi(page)
         self.set_finish_text()
+        self.set_button_texts()

     def accept(self):
         pages = map(self.page, self.visitedPages())

@@ -312,10 +312,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         load_user_template_functions(self.prefs.get('user_template_functions', []))

         # Load the format filename cache
-        self.format_filename_cache = defaultdict(dict)
-        for book_id, fmt, name in self.conn.get(
-                'SELECT book,format,name FROM data'):
-            self.format_filename_cache[book_id][fmt.upper() if fmt else ''] = name
+        self.refresh_format_cache()

         self.conn.executescript('''
         DROP TRIGGER IF EXISTS author_insert_trg;
@@ -509,7 +506,6 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         self.refresh_ondevice = functools.partial(self.data.refresh_ondevice, self)
         self.refresh()
         self.last_update_check = self.last_modified()
-        self.format_metadata_cache = defaultdict(dict)

     def break_cycles(self):
         self.data.break_cycles()
@@ -528,6 +524,12 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         ''' Return last modified time as a UTC datetime object'''
         return utcfromtimestamp(os.stat(self.dbpath).st_mtime)

+    def refresh_format_cache(self):
+        self.format_filename_cache = defaultdict(dict)
+        for book_id, fmt, name in self.conn.get(
+                'SELECT book,format,name FROM data'):
+            self.format_filename_cache[book_id][fmt.upper() if fmt else ''] = name
+        self.format_metadata_cache = defaultdict(dict)
+
     def check_if_modified(self):
         if self.last_modified() > self.last_update_check:
@@ -1401,7 +1403,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
         id = index if index_is_id else self.id(index)
         if not format: format = ''
         self.format_metadata_cache[id].pop(format.upper(), None)
-        name = self.format_filename_cache[id].pop(format.upper(), None)
+        name = self.format_filename_cache[id].get(format.upper(), None)
         if name:
             if not db_only:
                 try:
@@ -1410,6 +1412,7 @@ class LibraryDatabase2(LibraryDatabase, SchemaUpgrade, CustomColumns):
                         delete_file(path)
                     except:
                         traceback.print_exc()
+            self.format_filename_cache[id].pop(format.upper(), None)
             self.conn.execute('DELETE FROM data WHERE book=? AND format=?', (id, format.upper()))
             if commit:
                 self.conn.commit()
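The `pop` → `get` change in the format-removal hunk above is subtle: `dict.pop()` both reads and evicts the cache entry, so the old code dropped the filename from the cache as a side effect of merely looking it up, before the deletion had actually happened; reading with `get()` and popping only afterwards keeps the cache consistent with the database. The difference in a nutshell (a standalone sketch, not the calibre cache itself):

```python
from collections import defaultdict

# Two-level cache keyed by book id then format, shaped like
# format_filename_cache in the hunk above.
cache = defaultdict(dict)
cache[1]['EPUB'] = 'some_book'

# get() reads without evicting -- safe to consult before a fallible operation:
name = cache[1].get('EPUB', None)
assert name == 'some_book' and 'EPUB' in cache[1]

# pop() reads and evicts -- only do this once the operation has succeeded:
cache[1].pop('EPUB', None)
print('EPUB' in cache[1])
```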

@@ -111,6 +111,10 @@ def main(args=sys.argv):
     from calibre.utils.config import prefs
     if opts.with_library is None:
         opts.with_library = prefs['library_path']
+    if not opts.with_library:
+        print('No saved library path. Use the --with-library option'
+                ' to specify the path to the library you want to use.')
+        return 1
     db = LibraryDatabase2(opts.with_library)
     server = LibraryServer(db, opts, show_tracebacks=opts.develop)
     server.start()
@@ -191,8 +191,14 @@ class SpooledTemporaryFile(tempfile.SpooledTemporaryFile):
             suffix = ''
         if dir is None:
             dir = base_dir()
-        tempfile.SpooledTemporaryFile.__init__(self, max_size=max_size, suffix=suffix,
-                prefix=prefix, dir=dir, mode=mode, bufsize=bufsize)
+        tempfile.SpooledTemporaryFile.__init__(self, max_size=max_size,
+                suffix=suffix, prefix=prefix, dir=dir, mode=mode,
+                bufsize=bufsize)
+
+    def truncate(self, *args):
+        # The stdlib SpooledTemporaryFile implementation of truncate() doesn't
+        # allow specifying a size.
+        self._file.truncate(*args)

 def better_mktemp(*args, **kwargs):
     fd, path = tempfile.mkstemp(*args, **kwargs)

File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff