Commit 266545a2c3 by GRiker, 2013-04-05 03:11:47 -07:00
208 changed files with 26476 additions and 18069 deletions


@ -40,6 +40,7 @@ recipes/.gitignore
recipes/README.md
recipes/icon_checker.py
recipes/readme_updater.py
recipes/garfield.recipe
recipes/katalog_egazeciarz.recipe
recipes/tv_axnscifi.recipe
recipes/tv_comedycentral.recipe
@ -63,6 +64,7 @@ recipes/tv_tvppolonia.recipe
recipes/tv_tvpuls.recipe
recipes/tv_viasathistory.recipe
recipes/icons/katalog_egazeciarz.png
recipes/icons/garfield.png
recipes/icons/tv_axnscifi.png
recipes/icons/tv_comedycentral.png
recipes/icons/tv_discoveryscience.png


@ -20,6 +20,58 @@
# new recipes:
# - title:
- version: 0.9.26
  date: 2013-04-05

  new features:
    - title: "PDF Output: Allow using templates to create arbitrary headers and footers. Look under PDF Output in the conversion dialog for this feature."

    - title: "ToC Editor: Allow generating the ToC directly from individual files inside the ebook. Useful for EPUBs that have individual chapters in single files."
      tickets: [1163520]

    - title: "ToC Editor: Add buttons to indent/unindent the current entry"

    - title: "ToC Editor: Right-click menu to perform various useful actions on entries in the ToC"

    - title: "Column icons: Allow use of wide images as column icons"

    - title: "Add USB ids for the Palm Pre2 and Samsung Galaxy phone to the device drivers"
      tickets: [1162293,1163115]

  bug fixes:
    - title: "PDF Output: Fix generating page numbers causing links to not work."
      tickets: [1162573]

    - title: "Wrong filename output in error message when 'Guide reference not found'"
      tickets: [1163659]

    - title: "Get Books: Update Amazon, Barnes & Noble, Waterstones and Gutenberg store plugins for website changes"

    - title: "PDF Output: Fix 1 pixel wide left and top margins on the cover page for some PDF conversions, caused by incorrect rounding."
      tickets: [1162054]

    - title: "ToC Editor: Fix drag and drop of multiple items sometimes resulting in the dropped items being in random order."
      tickets: [1161999]

  improved recipes:
    - Financial Times UK
    - Sing Tao Daily
    - Apple Daily
    - A List Apart
    - Business Week
    - Harpers printed edition
    - Harvard Business Review

  new recipes:
    - title: AM730
      author: Eddie Lau

    - title: Arret sur images
      author: Francois D

    - title: Diario de Noticias
      author: Jose Pinto

- version: 0.9.25
  date: 2013-03-29


@ -750,8 +750,61 @@ If this property is detected by |app|, the following custom properties are recog
opf.series
opf.seriesindex
In addition to this, you can specify the picture to use as the cover by naming
it ``opf.cover`` (right click, Picture->Options->Name) in the ODT. If no
picture with this name is found, the 'smart' method is used. As the cover
detection might result in double covers in certain output formats, the process
will remove the paragraph (only if the only content is the cover!) from the
document. But this works only with the named picture!
To disable cover detection you can set the custom property ``opf.nocover`` ('Yes or No' type) to Yes in advanced mode.
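
In short, the two cover-related controls described above are (a summary sketch; both are set inside the ODT document itself)::

    opf.cover     the name given to a picture (right click, Picture->Options->Name); that picture is used as the cover
    opf.nocover   a custom property ('Yes or No' type); set to Yes to disable cover detection entirely
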
Converting to PDF
~~~~~~~~~~~~~~~~~~~
The first and most important setting to decide on when converting to PDF is the
page size. By default, |app| uses a page size defined by the current
:guilabel:`Output profile`. So if your output profile is set to Kindle, |app|
will create a PDF with a page size suitable for viewing on the small Kindle
screen. However, if you view this PDF file on a computer screen, the fonts will
appear too large. To create "normal" sized PDFs, use the override page size
option under :guilabel:`PDF Output` in the conversion dialog.
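
The same override is available when converting from the command line. A sketch,
assuming the :guilabel:`PDF Output` options are exposed to ``ebook-convert`` as
``--paper-size``, ``--custom-size`` and ``--unit`` (run
``ebook-convert input.epub output.pdf -h`` to confirm the exact names in your
version)::

    ebook-convert mybook.epub mybook.pdf --paper-size a4
    ebook-convert mybook.epub mybook.pdf --custom-size 6x9 --unit inch
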
You can insert arbitrary headers and footers on each page of the PDF by
specifying header and footer templates. Templates are just snippets of HTML
code that get rendered in the header and footer locations. For example, to
display page numbers centered at the bottom of every page, in green, use the following
footer template::
<p style="text-align:center; color:green">Page _PAGENUM_</p>
|app| will automatically replace _PAGENUM_ with the current page number. You
can even put different content on even and odd pages, for example the following
header template will show the title on odd pages and the author on even pages::
<p style="text-align:right"><span class="even_page">_AUTHOR_</span><span class="odd_page"><i>_TITLE_</i></span></p>
|app| will automatically replace _TITLE_ and _AUTHOR_ with the title and author
of the document being converted. You can also display text at the left and
right edges and change the font size, as demonstrated with this header
template::
<div style="font-size:x-small"><p style="float:left">_TITLE_</p><p style="float:right;"><i>_AUTHOR_</i></p></div>
This will display the title at the left and the author at the right, in a font
size smaller than the main text.
Finally, you can also use the current section in templates, as shown below::
<p style="text-align:right">_SECTION_</p>
_SECTION_ is replaced by whatever the name of the current section is. These
names are taken from the metadata Table of Contents in the document (the PDF
Outline). If the document has no table of contents then it will be replaced by
empty text. If a single PDF page has multiple sections, the first section on
the page will be used.
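
The placeholders can also be combined. For example, the following sketch of a
footer template, using nothing beyond the ``_SECTION_`` and ``_PAGENUM_``
substitutions described above, shows the current section at the left and the
page number at the right of every page::

    <div style="font-size:x-small"><p style="float:left">_SECTION_</p><p style="float:right">Page _PAGENUM_</p></div>
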
.. note:: When adding headers and footers make sure you set the page top and
   bottom margins to large enough values, under the Page Setup section of the
   conversion dialog.


@ -66,4 +66,3 @@ class Adventure_zone(BasicNewsRecipe):
if a.has_key('href') and 'http://' not in a['href'] and 'https://' not in a['href']:
a['href']=self.index + a['href']
return soup

recipes/am730.recipe (new file, +290 lines)

@ -0,0 +1,290 @@
# vim:fileencoding=UTF-8
from __future__ import unicode_literals
__license__ = 'GPL v3'
__copyright__ = '2013, Eddie Lau'
__Date__ = ''
__HiResImg__ = True
'''
Change Log:
2013/03/30 -- first version
'''
from calibre import (__appname__, force_unicode, strftime)
from calibre.utils.date import now as nowf
import os, datetime, re
from calibre.web.feeds.recipes import BasicNewsRecipe
from contextlib import nested
from calibre.ebooks.BeautifulSoup import BeautifulSoup, Tag
from calibre.ebooks.metadata.opf2 import OPFCreator
from calibre.ebooks.metadata.toc import TOC
from calibre.ebooks.metadata import MetaInformation
from calibre.utils.localization import canonicalize_lang
class AppleDaily(BasicNewsRecipe):
title = u'AM730'
__author__ = 'Eddie Lau'
publisher = 'AM730'
oldest_article = 1
max_articles_per_feed = 100
auto_cleanup = False
language = 'zh'
encoding = 'utf-8'
remove_javascript = True
use_embedded_content = False
no_stylesheets = True
description = 'http://www.am730.com.hk'
category = 'Chinese, News, Hong Kong'
masthead_url = 'http://www.am730.com.hk/images/logo.jpg'
extra_css = 'img {display: block; margin-left: auto; margin-right: auto; margin-top: 10px; margin-bottom: 10px; max-height:90%;} div[id=articleHeader] {font-size:200%; text-align:left; font-weight:bold;} photocaption {font-size:50%; margin-left:auto; margin-right:auto;}'
keep_only_tags = [dict(name='div', attrs={'id':'articleHeader'}),
dict(name='div', attrs={'class':'thecontent wordsnap'}),
dict(name='a', attrs={'class':'lightboximg'})]
remove_tags = [dict(name='img', attrs={'src':'/images/am730_article_logo.jpg'}),
dict(name='img', attrs={'src':'/images/am_endmark.gif'})]
def get_dtlocal(self):
    dt_utc = datetime.datetime.utcnow()
    # convert UTC to Hong Kong time (UTC+8), then subtract 6 hours:
    # all of a day's news is available by 6am HKT, so before that the
    # previous day's date is used
    return dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(6.0/24)

def get_fetchdate(self):
    if __Date__ != '':
        return __Date__
    else:
        return self.get_dtlocal().strftime("%Y%m%d")

def get_fetchformatteddate(self):
    if __Date__ != '':
        return __Date__[0:4]+'-'+__Date__[4:6]+'-'+__Date__[6:8]
    else:
        return self.get_dtlocal().strftime("%Y-%m-%d")

def get_fetchyear(self):
    if __Date__ != '':
        return __Date__[0:4]
    else:
        return self.get_dtlocal().strftime("%Y")

def get_fetchmonth(self):
    if __Date__ != '':
        return __Date__[4:6]
    else:
        return self.get_dtlocal().strftime("%m")

def get_fetchday(self):
    if __Date__ != '':
        return __Date__[6:8]
    else:
        return self.get_dtlocal().strftime("%d")
# Note: does not work with custom date given by __Date__
def get_weekday(self):
return self.get_dtlocal().weekday()
def populate_article_metadata(self, article, soup, first):
if first and hasattr(self, 'add_toc_thumbnail'):
picdiv = soup.find('img')
if picdiv is not None:
self.add_toc_thumbnail(article,picdiv['src'])
def parse_index(self):
feeds = []
soup = self.index_to_soup('http://www.am730.com.hk/')
ul = soup.find(attrs={'class':'nav-section'})
sectionList = []
for li in ul.findAll('li'):
a = 'http://www.am730.com.hk/' + li.find('a', href=True).get('href', False)
title = li.find('a').get('title', False).strip()
sectionList.append((title, a))
for title, url in sectionList:
articles = self.parse_section(url)
if articles:
feeds.append((title, articles))
return feeds
def parse_section(self, url):
soup = self.index_to_soup(url)
items = soup.findAll(attrs={'style':'padding-bottom: 15px;'})
current_articles = []
for item in items:
a = item.find(attrs={'class':'t6 f14'}).find('a', href=True)
articlelink = 'http://www.am730.com.hk/' + a.get('href', True)
title = self.tag_to_string(a)
description = self.tag_to_string(item.find(attrs={'class':'t3 f14'}))
current_articles.append({'title': title, 'url': articlelink, 'description': description})
return current_articles
def preprocess_html(self, soup):
    # wrap each linked image in <photo>/<photocaption> tags; optionally
    # swap thumbnail URLs for the full-resolution originals
    for a in soup.findAll('a'):
        image = a.find('img')
        if image is None:
            continue
        if __HiResImg__:
            image['src'] = image.get('src').replace('/thumbs/', '/')
        caption = image.get('alt')
        tag = Tag(soup, "photo", [])
        tag2 = Tag(soup, "photocaption", [])
        tag.insert(0, image)
        if caption is not None:
            tag2.insert(0, caption)
        tag.insert(1, tag2)
        a.replaceWith(tag)
    return soup
def create_opf(self, feeds, dir=None):
if dir is None:
dir = self.output_dir
title = self.short_title()
if self.output_profile.periodical_date_in_title:
title += strftime(self.timefmt)
mi = MetaInformation(title, [__appname__])
mi.publisher = __appname__
mi.author_sort = __appname__
if self.publication_type:
mi.publication_type = 'periodical:'+self.publication_type+':'+self.short_title()
mi.timestamp = nowf()
article_titles, aseen = [], set()
for f in feeds:
for a in f:
if a.title and a.title not in aseen:
aseen.add(a.title)
article_titles.append(force_unicode(a.title, 'utf-8'))
mi.comments = self.description
if not isinstance(mi.comments, unicode):
mi.comments = mi.comments.decode('utf-8', 'replace')
mi.comments += ('\n\n' + _('Articles in this issue: ') + '\n' +
'\n\n'.join(article_titles))
language = canonicalize_lang(self.language)
if language is not None:
mi.language = language
# This one affects the pub date shown in kindle title
#mi.pubdate = nowf()
# now appears to need the time field to be > 12.00noon as well
mi.pubdate = datetime.datetime(int(self.get_fetchyear()), int(self.get_fetchmonth()), int(self.get_fetchday()), 12, 30, 0)
opf_path = os.path.join(dir, 'index.opf')
ncx_path = os.path.join(dir, 'index.ncx')
opf = OPFCreator(dir, mi)
# Add mastheadImage entry to <guide> section
mp = getattr(self, 'masthead_path', None)
if mp is not None and os.access(mp, os.R_OK):
from calibre.ebooks.metadata.opf2 import Guide
ref = Guide.Reference(os.path.basename(self.masthead_path), os.getcwdu())
ref.type = 'masthead'
ref.title = 'Masthead Image'
opf.guide.append(ref)
manifest = [os.path.join(dir, 'feed_%d'%i) for i in range(len(feeds))]
manifest.append(os.path.join(dir, 'index.html'))
manifest.append(os.path.join(dir, 'index.ncx'))
# Get cover
cpath = getattr(self, 'cover_path', None)
if cpath is None:
pf = open(os.path.join(dir, 'cover.jpg'), 'wb')
if self.default_cover(pf):
cpath = pf.name
if cpath is not None and os.access(cpath, os.R_OK):
opf.cover = cpath
manifest.append(cpath)
# Get masthead
mpath = getattr(self, 'masthead_path', None)
if mpath is not None and os.access(mpath, os.R_OK):
manifest.append(mpath)
opf.create_manifest_from_files_in(manifest)
for mani in opf.manifest:
if mani.path.endswith('.ncx'):
mani.id = 'ncx'
if mani.path.endswith('mastheadImage.jpg'):
mani.id = 'masthead-image'
entries = ['index.html']
toc = TOC(base_path=dir)
self.play_order_counter = 0
self.play_order_map = {}
def feed_index(num, parent):
f = feeds[num]
for j, a in enumerate(f):
if getattr(a, 'downloaded', False):
adir = 'feed_%d/article_%d/'%(num, j)
auth = a.author
if not auth:
auth = None
desc = a.text_summary
if not desc:
desc = None
else:
desc = self.description_limiter(desc)
tt = a.toc_thumbnail if a.toc_thumbnail else None
entries.append('%sindex.html'%adir)
po = self.play_order_map.get(entries[-1], None)
if po is None:
self.play_order_counter += 1
po = self.play_order_counter
parent.add_item('%sindex.html'%adir, None,
a.title if a.title else _('Untitled Article'),
play_order=po, author=auth,
description=desc, toc_thumbnail=tt)
last = os.path.join(self.output_dir, ('%sindex.html'%adir).replace('/', os.sep))
for sp in a.sub_pages:
prefix = os.path.commonprefix([opf_path, sp])
relp = sp[len(prefix):]
entries.append(relp.replace(os.sep, '/'))
last = sp
if os.path.exists(last):
with open(last, 'rb') as fi:
src = fi.read().decode('utf-8')
soup = BeautifulSoup(src)
body = soup.find('body')
if body is not None:
prefix = '/'.join('..'for i in range(2*len(re.findall(r'link\d+', last))))
templ = self.navbar.generate(True, num, j, len(f),
not self.has_single_feed,
a.orig_url, __appname__, prefix=prefix,
center=self.center_navbar)
elem = BeautifulSoup(templ.render(doctype='xhtml').decode('utf-8')).find('div')
body.insert(len(body.contents), elem)
with open(last, 'wb') as fi:
fi.write(unicode(soup).encode('utf-8'))
if len(feeds) == 0:
raise Exception('All feeds are empty, aborting.')
if len(feeds) > 1:
for i, f in enumerate(feeds):
entries.append('feed_%d/index.html'%i)
po = self.play_order_map.get(entries[-1], None)
if po is None:
self.play_order_counter += 1
po = self.play_order_counter
auth = getattr(f, 'author', None)
if not auth:
auth = None
desc = getattr(f, 'description', None)
if not desc:
desc = None
feed_index(i, toc.add_item('feed_%d/index.html'%i, None,
f.title, play_order=po, description=desc, author=auth))
else:
entries.append('feed_%d/index.html'%0)
feed_index(0, toc)
for i, p in enumerate(entries):
entries[i] = os.path.join(dir, p.replace('/', os.sep))
opf.create_spine(entries)
opf.set_toc(toc)
with nested(open(opf_path, 'wb'), open(ncx_path, 'wb')) as (opf_file, ncx_file):
opf.render(opf_file, ncx_file)


@ -1,161 +1,275 @@
# -*- coding: utf-8 -*-
import re
# vim:fileencoding=UTF-8
from __future__ import unicode_literals
__license__ = 'GPL v3'
__copyright__ = '2013, Eddie Lau'
__Date__ = ''
from calibre import (__appname__, force_unicode, strftime)
from calibre.utils.date import now as nowf
import os, datetime, re
from calibre.web.feeds.recipes import BasicNewsRecipe
from contextlib import nested
from calibre.ebooks.BeautifulSoup import BeautifulSoup
from calibre.ebooks.metadata.opf2 import OPFCreator
from calibre.ebooks.metadata.toc import TOC
from calibre.ebooks.metadata import MetaInformation
from calibre.utils.localization import canonicalize_lang
class AppleDaily(BasicNewsRecipe):
title = u'蘋果日報'
__author__ = u'蘋果日報'
__publisher__ = u'蘋果日報'
description = u'蘋果日報'
masthead_url = 'http://hk.apple.nextmedia.com/template/common/header/2009/images/atnextheader_logo_appledaily.gif'
language = 'zh_TW'
encoding = 'UTF-8'
timefmt = ' [%a, %d %b, %Y]'
needs_subscription = False
title = u'蘋果日報 (香港)'
__author__ = 'Eddie Lau'
publisher = '蘋果日報'
oldest_article = 1
max_articles_per_feed = 100
auto_cleanup = False
language = 'zh'
encoding = 'utf-8'
remove_javascript = True
remove_tags_before = dict(name=['ul', 'h1'])
remove_tags_after = dict(name='form')
remove_tags = [dict(attrs={'class':['articleTools', 'post-tools', 'side_tool', 'nextArticleLink clearfix']}),
dict(id=['footer', 'toolsRight', 'articleInline', 'navigation', 'archive', 'side_search', 'blog_sidebar', 'side_tool', 'side_index']),
dict(name=['script', 'noscript', 'style', 'form'])]
use_embedded_content = False
no_stylesheets = True
extra_css = '''
@font-face {font-family: "uming", serif, sans-serif; src: url(res:///usr/share/fonts/truetype/arphic/uming.ttc); }\n
body {margin-right: 8pt; font-family: 'uming', serif;}
h1 {font-family: 'uming', serif, sans-serif}
'''
#extra_css = 'h1 {font: sans-serif large;}\n.byline {font:monospace;}'
description = 'http://hkm.appledaily.com/'
category = 'Chinese, News, Hong Kong'
masthead_url = 'http://upload.wikimedia.org/wikipedia/zh/c/cf/AppleDailyLogo1.png'
preprocess_regexps = [
(re.compile(r'img.php?server=(?P<server>[^&]+)&path=(?P<path>[^&]+).*', re.DOTALL|re.IGNORECASE),
lambda match: 'http://' + match.group('server') + '/' + match.group('path')),
]
extra_css = 'img {display: block; margin-left: auto; margin-right: auto; margin-top: 10px; margin-bottom: 10px; max-height:90%;} h1 {font-size:200%; text-align:left; font-weight:bold;} p[class=video-caption] {font-size:50%; margin-left:auto; margin-right:auto;}'
keep_only_tags = [dict(name='div', attrs={'id':'content-article'})]
remove_tags = [dict(name='div', attrs={'class':'prev-next-btn'}),
dict(name='p', attrs={'class':'next'})]
def get_dtlocal(self):
    dt_utc = datetime.datetime.utcnow()
    # convert UTC to Hong Kong time (UTC+8), then subtract 6 hours:
    # all of a day's news is available by 6am HKT, so before that the
    # previous day's date is used
    return dt_utc + datetime.timedelta(8.0/24) - datetime.timedelta(6.0/24)

def get_fetchdate(self):
    if __Date__ != '':
        return __Date__
    else:
        return self.get_dtlocal().strftime("%Y%m%d")

def get_fetchformatteddate(self):
    if __Date__ != '':
        return __Date__[0:4]+'-'+__Date__[4:6]+'-'+__Date__[6:8]
    else:
        return self.get_dtlocal().strftime("%Y-%m-%d")

def get_fetchyear(self):
    if __Date__ != '':
        return __Date__[0:4]
    else:
        return self.get_dtlocal().strftime("%Y")

def get_fetchmonth(self):
    if __Date__ != '':
        return __Date__[4:6]
    else:
        return self.get_dtlocal().strftime("%m")

def get_fetchday(self):
    if __Date__ != '':
        return __Date__[6:8]
    else:
        return self.get_dtlocal().strftime("%d")
# Note: does not work with custom date given by __Date__
def get_weekday(self):
return self.get_dtlocal().weekday()
def get_cover_url(self):
return 'http://hk.apple.nextmedia.com/template/common/header/2009/images/atnextheader_logo_appledaily.gif'
#def get_browser(self):
#br = BasicNewsRecipe.get_browser(self)
#if self.username is not None and self.password is not None:
# br.open('http://www.nytimes.com/auth/login')
# br.select_form(name='login')
# br['USERID'] = self.username
# br['PASSWORD'] = self.password
# br.submit()
#return br
def preprocess_html(self, soup):
#process all the images
for tag in soup.findAll(lambda tag: tag.name.lower()=='img' and tag.has_key('src')):
iurl = tag['src']
#print 'checking image: ' + iurl
#img\.php?server\=(?P<server>[^&]+)&path=(?P<path>[^&]+)
p = re.compile(r'img\.php\?server=(?P<server>[^&]+)&path=(?P<path>[^&]+)', re.DOTALL|re.IGNORECASE)
m = p.search(iurl)
if m is not None:
iurl = 'http://' + m.group('server') + '/' + m.group('path')
#print 'working! new url: ' + iurl
tag['src'] = iurl
#else:
#print 'not good'
for tag in soup.findAll(lambda tag: tag.name.lower()=='a' and tag.has_key('href')):
iurl = tag['href']
#print 'checking image: ' + iurl
#img\.php?server\=(?P<server>[^&]+)&path=(?P<path>[^&]+)
p = re.compile(r'img\.php\?server=(?P<server>[^&]+)&path=(?P<path>[^&]+)', re.DOTALL|re.IGNORECASE)
m = p.search(iurl)
if m is not None:
iurl = 'http://' + m.group('server') + '/' + m.group('path')
#print 'working! new url: ' + iurl
tag['href'] = iurl
#else:
#print 'not good'
return soup
soup = self.index_to_soup('http://hkm.appledaily.com/')
cover = soup.find(attrs={'class':'top-news'}).get('src', False)
br = BasicNewsRecipe.get_browser(self)
try:
br.open(cover)
except:
cover = None
return cover
def populate_article_metadata(self, article, soup, first):
if first and hasattr(self, 'add_toc_thumbnail'):
picdiv = soup.find('img')
if picdiv is not None:
self.add_toc_thumbnail(article,picdiv['src'])
def parse_index(self):
base = 'http://news.hotpot.hk/fruit'
soup = self.index_to_soup('http://news.hotpot.hk/fruit/index.php')
feeds = []
soup = self.index_to_soup('http://hkm.appledaily.com/')
ul = soup.find(attrs={'class':'menu'})
sectionList = []
for li in ul.findAll('li'):
a = 'http://hkm.appledaily.com/' + li.find('a', href=True).get('href', False)
title = li.find('a', text=True).strip()
if not title == u'動新聞':
sectionList.append((title, a))
for title, url in sectionList:
articles = self.parse_section(url)
if articles:
feeds.append((title, articles))
return feeds
#def feed_title(div):
# return ''.join(div.findAll(text=True, recursive=False)).strip()
def parse_section(self, url):
soup = self.index_to_soup(url)
ul = soup.find(attrs={'class':'list'})
current_articles = []
for li in ul.findAll('li'):
a = li.find('a', href=True)
title = li.find('p', text=True).strip()
if a is not None:
current_articles.append({'title': title, 'url':'http://hkm.appledaily.com/' + a.get('href', False)})
pass
return current_articles
articles = {}
key = None
ans = []
for div in soup.findAll('li'):
key = div.find(text=True, recursive=True);
#if key == u'豪情':
# continue;
def create_opf(self, feeds, dir=None):
if dir is None:
dir = self.output_dir
title = self.short_title()
if self.output_profile.periodical_date_in_title:
title += strftime(self.timefmt)
mi = MetaInformation(title, [__appname__])
mi.publisher = __appname__
mi.author_sort = __appname__
if self.publication_type:
mi.publication_type = 'periodical:'+self.publication_type+':'+self.short_title()
mi.timestamp = nowf()
article_titles, aseen = [], set()
for f in feeds:
for a in f:
if a.title and a.title not in aseen:
aseen.add(a.title)
article_titles.append(force_unicode(a.title, 'utf-8'))
print 'section=' + key
mi.comments = self.description
if not isinstance(mi.comments, unicode):
mi.comments = mi.comments.decode('utf-8', 'replace')
mi.comments += ('\n\n' + _('Articles in this issue: ') + '\n' +
'\n\n'.join(article_titles))
articles[key] = []
language = canonicalize_lang(self.language)
if language is not None:
mi.language = language
# This one affects the pub date shown in kindle title
#mi.pubdate = nowf()
# now appears to need the time field to be > 12.00noon as well
mi.pubdate = datetime.datetime(int(self.get_fetchyear()), int(self.get_fetchmonth()), int(self.get_fetchday()), 12, 30, 0)
opf_path = os.path.join(dir, 'index.opf')
ncx_path = os.path.join(dir, 'index.ncx')
ans.append(key)
opf = OPFCreator(dir, mi)
# Add mastheadImage entry to <guide> section
mp = getattr(self, 'masthead_path', None)
if mp is not None and os.access(mp, os.R_OK):
from calibre.ebooks.metadata.opf2 import Guide
ref = Guide.Reference(os.path.basename(self.masthead_path), os.getcwdu())
ref.type = 'masthead'
ref.title = 'Masthead Image'
opf.guide.append(ref)
a = div.find('a', href=True)
manifest = [os.path.join(dir, 'feed_%d'%i) for i in range(len(feeds))]
manifest.append(os.path.join(dir, 'index.html'))
manifest.append(os.path.join(dir, 'index.ncx'))
if not a:
continue
# Get cover
cpath = getattr(self, 'cover_path', None)
if cpath is None:
pf = open(os.path.join(dir, 'cover.jpg'), 'wb')
if self.default_cover(pf):
cpath = pf.name
if cpath is not None and os.access(cpath, os.R_OK):
opf.cover = cpath
manifest.append(cpath)
url = base + '/' + a['href']
print 'url=' + url
# Get masthead
mpath = getattr(self, 'masthead_path', None)
if mpath is not None and os.access(mpath, os.R_OK):
manifest.append(mpath)
if not articles.has_key(key):
articles[key] = []
else:
# sub page
subSoup = self.index_to_soup(url)
opf.create_manifest_from_files_in(manifest)
for mani in opf.manifest:
if mani.path.endswith('.ncx'):
mani.id = 'ncx'
if mani.path.endswith('mastheadImage.jpg'):
mani.id = 'masthead-image'
for subDiv in subSoup.findAll('li'):
subA = subDiv.find('a', href=True)
subTitle = subDiv.find(text=True, recursive=True)
subUrl = base + '/' + subA['href']
print 'subUrl' + subUrl
articles[key].append(
dict(title=subTitle,
url=subUrl,
date='',
description='',
content=''))
entries = ['index.html']
toc = TOC(base_path=dir)
self.play_order_counter = 0
self.play_order_map = {}
# elif div['class'] in ['story', 'story headline']:
# a = div.find('a', href=True)
# if not a:
# continue
# url = re.sub(r'\?.*', '', a['href'])
# url += '?pagewanted=all'
# title = self.tag_to_string(a, use_alt=True).strip()
# description = ''
# pubdate = strftime('%a, %d %b')
# summary = div.find(True, attrs={'class':'summary'})
# if summary:
# description = self.tag_to_string(summary, use_alt=False)
#
# feed = key if key is not None else 'Uncategorized'
# if not articles.has_key(feed):
# articles[feed] = []
# if not 'podcasts' in url:
# articles[feed].append(
# dict(title=title, url=url, date=pubdate,
# description=description,
# content=''))
# ans = self.sort_index_by(ans, {'The Front Page':-1, 'Dining In, Dining Out':1, 'Obituaries':2})
ans = [(unicode(key), articles[key]) for key in ans if articles.has_key(key)]
return ans
def feed_index(num, parent):
f = feeds[num]
for j, a in enumerate(f):
if getattr(a, 'downloaded', False):
adir = 'feed_%d/article_%d/'%(num, j)
auth = a.author
if not auth:
auth = None
desc = a.text_summary
if not desc:
desc = None
else:
desc = self.description_limiter(desc)
tt = a.toc_thumbnail if a.toc_thumbnail else None
entries.append('%sindex.html'%adir)
po = self.play_order_map.get(entries[-1], None)
if po is None:
self.play_order_counter += 1
po = self.play_order_counter
parent.add_item('%sindex.html'%adir, None,
a.title if a.title else _('Untitled Article'),
play_order=po, author=auth,
description=desc, toc_thumbnail=tt)
last = os.path.join(self.output_dir, ('%sindex.html'%adir).replace('/', os.sep))
for sp in a.sub_pages:
prefix = os.path.commonprefix([opf_path, sp])
relp = sp[len(prefix):]
entries.append(relp.replace(os.sep, '/'))
last = sp
if os.path.exists(last):
with open(last, 'rb') as fi:
src = fi.read().decode('utf-8')
soup = BeautifulSoup(src)
body = soup.find('body')
if body is not None:
prefix = '/'.join('..'for i in range(2*len(re.findall(r'link\d+', last))))
templ = self.navbar.generate(True, num, j, len(f),
not self.has_single_feed,
a.orig_url, __appname__, prefix=prefix,
center=self.center_navbar)
elem = BeautifulSoup(templ.render(doctype='xhtml').decode('utf-8')).find('div')
body.insert(len(body.contents), elem)
with open(last, 'wb') as fi:
fi.write(unicode(soup).encode('utf-8'))
if len(feeds) == 0:
raise Exception('All feeds are empty, aborting.')
if len(feeds) > 1:
for i, f in enumerate(feeds):
entries.append('feed_%d/index.html'%i)
po = self.play_order_map.get(entries[-1], None)
if po is None:
self.play_order_counter += 1
po = self.play_order_counter
auth = getattr(f, 'author', None)
if not auth:
auth = None
desc = getattr(f, 'description', None)
if not desc:
desc = None
feed_index(i, toc.add_item('feed_%d/index.html'%i, None,
f.title, play_order=po, description=desc, author=auth))
else:
entries.append('feed_%d/index.html'%0)
feed_index(0, toc)
for i, p in enumerate(entries):
entries[i] = os.path.join(dir, p.replace('/', os.sep))
opf.create_spine(entries)
opf.set_toc(toc)
with nested(open(opf_path, 'wb'), open(ncx_path, 'wb')) as (opf_file, ncx_file):
opf.render(opf_file, ncx_file)


@ -0,0 +1,54 @@
from __future__ import unicode_literals
__license__ = 'WTFPL'
__author__ = '2013, François D. <franek at chicour.net>'
__description__ = 'Get some fresh news from Arrêt sur images'
from calibre.web.feeds.recipes import BasicNewsRecipe
class Asi(BasicNewsRecipe):
title = 'Arrêt sur images'
__author__ = 'François D. (aka franek)'
description = 'Global news in French from news site "Arrêt sur images"'
oldest_article = 7.0
language = 'fr'
needs_subscription = True
max_articles_per_feed = 100
simultaneous_downloads = 1
timefmt = '[%a, %d %b %Y %I:%M +0200]'
cover_url = 'http://www.arretsurimages.net/images/header/menu/menu_1.png'
use_embedded_content = False
no_stylesheets = True
remove_javascript = True
feeds = [
('vite dit et gratuit', 'http://www.arretsurimages.net/vite-dit.rss'),
('Toutes les chroniques', 'http://www.arretsurimages.net/chroniques.rss'),
('Contenus et dossiers', 'http://www.arretsurimages.net/dossiers.rss'),
]
conversion_options = { 'smarten_punctuation' : True }
remove_tags = [dict(id='vite-titre'), dict(id='header'), dict(id='wrap-connexion'), dict(id='col_right'), dict(name='div', attrs={'class':'bloc-chroniqueur-2'}), dict(id='footercontainer')]
def print_version(self, url):
return url.replace('contenu.php', 'contenu-imprimable.php')
def get_browser(self):
# Need to use robust HTML parser
br = BasicNewsRecipe.get_browser(self, use_robust_parser=True)
if self.username is not None and self.password is not None:
br.open('http://www.arretsurimages.net/index.php')
br.select_form(nr=0)
br.form.set_all_readonly(False)
br['redir'] = 'forum/login.php'
br['username'] = self.username
br['password'] = self.password
br.submit()
return br


@ -9,14 +9,14 @@ class AdvancedUserRecipe1306097511(BasicNewsRecipe):
__author__ = 'Dave Asbury'
cover_url = 'http://profile.ak.fbcdn.net/hprofile-ak-snc4/161987_9010212100_2035706408_n.jpg'
oldest_article = 2
max_articles_per_feed = 12
max_articles_per_feed = 20
linearize_tables = True
remove_empty_feeds = True
remove_javascript = True
no_stylesheets = True
auto_cleanup = True
language = 'en_GB'
compress_news_images = True
cover_url = 'http://profile.ak.fbcdn.net/hprofile-ak-snc4/161987_9010212100_2035706408_n.jpg'
masthead_url = 'http://www.trinitymirror.com/images/birminghampost-logo.gif'


@ -37,68 +37,15 @@ class BusinessWeek(BasicNewsRecipe):
, 'language' : language
}
#remove_tags = [
#dict(attrs={'class':'inStory'})
#,dict(name=['meta','link','iframe','base','embed','object','table','th','tr','td'])
#,dict(attrs={'id':['inset','videoDisplay']})
#]
#keep_only_tags = [dict(name='div', attrs={'id':['story-body','storyBody']})]
remove_attributes = ['lang']
match_regexps = [r'http://www.businessweek.com/.*_page_[1-9].*']
feeds = [
(u'Top Stories', u'http://www.businessweek.com/topStories/rss/topStories.rss'),
(u'Top News' , u'http://www.businessweek.com/rss/bwdaily.rss' ),
(u'Asia', u'http://www.businessweek.com/rss/asia.rss'),
(u'Autos', u'http://www.businessweek.com/rss/autos/index.rss'),
(u'Classic Cars', u'http://rss.businessweek.com/bw_rss/classiccars'),
(u'Hybrids', u'http://rss.businessweek.com/bw_rss/hybrids'),
(u'Europe', u'http://www.businessweek.com/rss/europe.rss'),
(u'Auto Reviews', u'http://rss.businessweek.com/bw_rss/autoreviews'),
(u'Innovation & Design', u'http://www.businessweek.com/rss/innovate.rss'),
(u'Architecture', u'http://www.businessweek.com/rss/architecture.rss'),
(u'Brand Equity', u'http://www.businessweek.com/rss/brandequity.rss'),
(u'Auto Design', u'http://www.businessweek.com/rss/carbuff.rss'),
(u'Game Room', u'http://rss.businessweek.com/bw_rss/gameroom'),
(u'Technology', u'http://www.businessweek.com/rss/technology.rss'),
(u'Investing', u'http://rss.businessweek.com/bw_rss/investor'),
(u'Small Business', u'http://www.businessweek.com/rss/smallbiz.rss'),
(u'Careers', u'http://rss.businessweek.com/bw_rss/careers'),
(u'B-Schools', u'http://www.businessweek.com/rss/bschools.rss'),
(u'Magazine Selections', u'http://www.businessweek.com/rss/magazine.rss'),
(u'CEO Guide to Tech', u'http://www.businessweek.com/rss/ceo_guide_tech.rss'),
(u'Top Stories', u'http://www.businessweek.com/feeds/most-popular.rss'),
]
def get_article_url(self, article):
url = article.get('guid', None)
if 'podcasts' in url:
return None
if 'surveys' in url:
return None
if 'images' in url:
return None
if 'feedroom' in url:
return None
if '/magazine/toc/' in url:
return None
rurl, sep, rest = url.rpartition('?')
if rurl:
return rurl
return rest
def print_version(self, url):
if '/news/' in url or '/blog/' in url:
return url
rurl = url.replace('http://www.businessweek.com/','http://www.businessweek.com/print/')
return rurl.replace('/investing/','/investor/')
soup = self.index_to_soup(url)
prntver = soup.find('li', attrs={'class':'print tracked'})
rurl = prntver.find('a', href=True)['href']
return rurl
def preprocess_html(self, soup):
for item in soup.findAll(style=True):
del item['style']
for alink in soup.findAll('a'):
if alink.string is not None:
tstr = alink.string
alink.replaceWith(tstr)
return soup


@ -7,13 +7,14 @@ class AdvancedUserRecipe1325006965(BasicNewsRecipe):
#cover_url = 'http://www.countryfile.com/sites/default/files/imagecache/160px_wide/cover/2_1.jpg'
__author__ = 'Dave Asbury'
description = 'The official website of Countryfile Magazine'
# last updated 8/12/12
# last updated 19/10/12
language = 'en_GB'
oldest_article = 30
max_articles_per_feed = 25
remove_empty_feeds = True
no_stylesheets = True
auto_cleanup = True
compress_news_images = True
ignore_duplicate_articles = {'title', 'url'}
#articles_are_obfuscated = True
#article_already_exists = False


@ -13,9 +13,9 @@ class AdvancedUserRecipe1306061239(BasicNewsRecipe):
masthead_url = 'http://www.nmauk.co.uk/nma/images/daily_mirror.gif'
compress_news_images = True
oldest_article = 1
max_articles_per_feed = 1
max_articles_per_feed = 12
remove_empty_feeds = True
remove_javascript = True
no_stylesheets = True


@ -0,0 +1,23 @@
# vim:fileencoding=UTF-8
from __future__ import unicode_literals
from calibre.web.feeds.news import BasicNewsRecipe
class AdvancedUserRecipe1365070687(BasicNewsRecipe):
title ='Diário de Notícias'
oldest_article = 7
language = 'pt'
__author__ = 'Jose Pinto'
max_articles_per_feed = 100
keep_only_tags = [dict(name='div', attrs={'id':'cln-esqmid'}) ]
remove_tags = [ dict(name='table', attrs={'class':'TabFerramentasInf'}) ]
feeds = [(u'Portugal', u'http://feeds.dn.pt/DN-Portugal'),
(u'Globo', u'http://feeds.dn.pt/DN-Globo'),
(u'Economia', u'http://feeds.dn.pt/DN-Economia'),
(u'Ci\xeancia', u'http://feeds.dn.pt/DN-Ciencia'),
(u'Artes', u'http://feeds.dn.pt/DN-Artes'),
(u'TV & Media', u'http://feeds.dn.pt/DN-Media'),
(u'Opini\xe3o', u'http://feeds.dn.pt/DN-Opiniao'),
(u'Pessoas', u'http://feeds.dn.pt/DN-Pessoas')
]


@ -0,0 +1,27 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__author__ = 'teepel <teepel44@gmail.com>'
'''
dzialzagraniczny.pl
'''
from calibre.web.feeds.news import BasicNewsRecipe
class dzial_zagraniczny(BasicNewsRecipe):
title = u'Dział Zagraniczny'
__author__ = 'teepel <teepel44@gmail.com>'
language = 'pl'
description = u'Polskiego czytelnika to nie interesuje'
INDEX = 'http://dzialzagraniczny.pl'
extra_css = 'img {display: block;}'
oldest_article = 7
cover_url = 'https://fbcdn-profile-a.akamaihd.net/hprofile-ak-prn1/c145.5.160.160/559442_415653975115959_2126205128_n.jpg'
max_articles_per_feed = 100
remove_empty_feeds = True
remove_javascript = True
no_stylesheets = True
use_embedded_content = True
feeds = [(u'Dział zagraniczny', u'http://feeds.feedburner.com/dyndns/UOfz')]


@ -26,7 +26,7 @@ class ElDiplo_Recipe(BasicNewsRecipe):
title = u'El Diplo'
__author__ = 'Tomas Di Domenico'
description = 'Publicacion mensual de Le Monde Diplomatique, edicion Argentina'
langauge = 'es_AR'
language = 'es_AR'
needs_subscription = True
auto_cleanup = True

recipes/equipped.recipe (new file, +29 lines)

@ -0,0 +1,29 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__author__ = 'teepel <teepel44@gmail.com>, Artur Stachecki <artur.stachecki@gmail.com>'
'''
equipped.pl
'''
from calibre.web.feeds.news import BasicNewsRecipe
class equipped(BasicNewsRecipe):
title = u'Equipped'
__author__ = 'teepel <teepel44@gmail.com>'
language = 'pl'
description = u'Wiadomości z equipped.pl'
INDEX = 'http://equipped.pl'
extra_css = '.alignleft {float:left; margin-right:5px;}'
oldest_article = 7
max_articles_per_feed = 100
remove_empty_feeds = True
simultaneous_downloads = 5
remove_javascript = True
no_stylesheets = True
use_embedded_content = False
#keep_only_tags = [dict(name='article')]
#remove_tags = [dict(id='disqus_thread')]
#remove_tags_after = [dict(id='disqus_thread')]
feeds = [(u'Equipped', u'http://feeds.feedburner.com/Equippedpl?format=xml')]


@ -12,12 +12,6 @@ class EsensjaRSS(BasicNewsRecipe):
language = 'pl'
encoding = 'utf-8'
INDEX = 'http://www.esensja.pl'
extra_css = '''.t-title {font-size: x-large; font-weight: bold; text-align: left}
.t-author {font-size: x-small; text-align: left}
.t-title2 {font-size: x-small; font-style: italic; text-align: left}
.text {font-size: small; text-align: left}
.annot-ref {font-style: italic; text-align: left}
'''
cover_url = ''
masthead_url = 'http://esensja.pl/img/wrss.gif'
use_embedded_content = False


@ -8,6 +8,7 @@ import datetime
from calibre.ptempfile import PersistentTemporaryFile
from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe
from collections import OrderedDict
class FinancialTimes(BasicNewsRecipe):
title = 'Financial Times (UK)'
@ -93,7 +94,7 @@ class FinancialTimes(BasicNewsRecipe):
try:
urlverified = self.browser.open_novisit(url).geturl() # resolve redirect.
except:
continue
continue
title = self.tag_to_string(item)
date = strftime(self.timefmt)
articles.append({
@ -105,29 +106,28 @@ class FinancialTimes(BasicNewsRecipe):
return articles
def parse_index(self):
feeds = []
feeds = OrderedDict()
soup = self.index_to_soup(self.INDEX)
dates= self.tag_to_string(soup.find('div', attrs={'class':'btm-links'}).find('div'))
self.timefmt = ' [%s]'%dates
wide = soup.find('div',attrs={'class':'wide'})
if not wide:
return feeds
allsections = wide.findAll(attrs={'class':lambda x: x and 'footwell' in x.split()})
if not allsections:
return feeds
count = 0
for item in allsections:
count = count + 1
if self.test and count > 2:
return feeds
fitem = item.h3
if not fitem:
fitem = item.h4
ftitle = self.tag_to_string(fitem)
self.report_progress(0, _('Fetching feed')+' %s...'%(ftitle))
feedarts = self.get_artlinks(item.ul)
feeds.append((ftitle,feedarts))
return feeds
#dates= self.tag_to_string(soup.find('div', attrs={'class':'btm-links'}).find('div'))
#self.timefmt = ' [%s]'%dates
for column in soup.findAll('div', attrs = {'class':'feedBoxes clearfix'}):
for section in column.findAll('div', attrs = {'class':'feedBox'}):
section_title=self.tag_to_string(section.find('h4'))
for article in section.ul.findAll('li'):
articles = []
title=self.tag_to_string(article.a)
url=article.a['href']
articles.append({'title':title, 'url':url, 'description':'', 'date':''})
if articles:
if section_title not in feeds:
feeds[section_title] = []
feeds[section_title] += articles
ans = [(key, val) for key, val in feeds.iteritems()]
return ans
def preprocess_html(self, soup):
items = ['promo-box','promo-title',
@ -174,9 +174,6 @@ class FinancialTimes(BasicNewsRecipe):
count += 1
tfile = PersistentTemporaryFile('_fa.html')
tfile.write(html)
tfile.close()
tfile.close()
self.temp_files.append(tfile)
return tfile.name
def cleanup(self):
self.browser.open('https://registration.ft.com/registration/login/logout?location=')


@ -1,12 +1,12 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
import re
from calibre.web.feeds.news import BasicNewsRecipe
class FocusRecipe(BasicNewsRecipe):
__license__ = 'GPL v3'
__author__ = u'intromatyk <intromatyk@gmail.com>'
__author__ = u'Artur Stachecki <artur.stachecki@gmail.com>'
language = 'pl'
version = 1

recipes/forbes_pl.recipe (new file, +53 lines)

@ -0,0 +1,53 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe
import datetime
import re
class forbes_pl(BasicNewsRecipe):
title = u'Forbes.pl'
__author__ = 'Artur Stachecki <artur.stachecki@gmail.com>'
language = 'pl'
description = u'Biznes, finanse, gospodarka, strategie, wiadomości gospodarcze, analizy finansowe i strategiczne.'
oldest_article = 1
index = 'http://www.forbes.pl'
cover_url = 'http://www.forbes.pl/resources/front/images/logo.png'
max_articles_per_feed = 100
extra_css = '.Block-Photo {float:left; max-width: 300px; margin-right: 5px;}'
preprocess_regexps = [(re.compile(ur'<p>(<strong>)?(Czytaj|Zobacz) (też|także):.*?</p>', re.DOTALL), lambda match: ''), (re.compile(ur'<strong>Zobacz:.*?</strong>', re.DOTALL), lambda match: '')]
remove_javascript = True
no_stylesheets = True
now = datetime.datetime.now()
yesterday = now - datetime.timedelta(hours=24)
yesterday = yesterday.strftime("%d.%m.%Y %H:%M:%S")
pages_count = 4
keep_only_tags = [dict(attrs={'class':['Block-Node Content-Article ', 'Block-Node Content-Article piano-closed']})]
remove_tags = [dict(attrs={'class':['Keywords Styled', 'twitter-share-button', 'Block-List-Related Block-List']})]
feeds = [(u'Wszystkie', 'http://www.forbes.pl/rss')]
'''def preprocess_html(self, soup):
self.append_page(soup, soup.body)
return soup
def append_page(self, soup, appendtag):
cleanup = False
nexturl = appendtag.find('a', attrs={'class':'next'})
if nexturl:
cleanup = True
while nexturl:
soup2 = self.index_to_soup(self.index + nexturl['href'])
nexturl = soup2.find('a', attrs={'class':'next'})
pagetext = soup2.findAll(id='article-body-wrapper')
if not pagetext:
pagetext = soup2.findAll(attrs={'class':'Article-Entry Styled'})
for comment in pagetext.findAll(text=lambda text:isinstance(text, Comment)):
comment.extract()
pos = len(appendtag.contents)
appendtag.insert(pos, pagetext)
if cleanup:
for r in appendtag.findAll(attrs={'class':'paginator'}):
r.extract()'''


@ -14,13 +14,14 @@ class gazetaprawna(BasicNewsRecipe):
title = u'Gazeta Prawna'
__author__ = u'Vroo'
publisher = u'Infor Biznes'
oldest_article = 7
oldest_article = 1
max_articles_per_feed = 20
no_stylesheets = True
remove_javascript = True
description = 'Polski dziennik gospodarczy'
language = 'pl'
encoding = 'utf-8'
ignore_duplicate_articles = {'title', 'url'}
remove_tags_after = [
dict(name='div', attrs={'class':['data-art']})
@ -30,7 +31,7 @@ class gazetaprawna(BasicNewsRecipe):
]
feeds = [
(u'Wiadomo\u015bci - najwa\u017cniejsze', u'http://www.gazetaprawna.pl/wiadomosci/najwazniejsze/rss.xml'),
(u'Z ostatniej chwili', u'http://rss.gazetaprawna.pl/GazetaPrawna'),
(u'Biznes i prawo gospodarcze', u'http://biznes.gazetaprawna.pl/rss.xml'),
(u'Prawo i wymiar sprawiedliwo\u015bci', u'http://prawo.gazetaprawna.pl/rss.xml'),
(u'Praca i ubezpieczenia', u'http://praca.gazetaprawna.pl/rss.xml'),
@ -51,3 +52,8 @@ class gazetaprawna(BasicNewsRecipe):
url = url.replace('prawo.gazetaprawna', 'www.gazetaprawna')
url = url.replace('praca.gazetaprawna', 'www.gazetaprawna')
return url
def get_cover_url(self):
soup = self.index_to_soup('http://www.egazety.pl/infor/e-wydanie-dziennik-gazeta-prawna.html')
self.cover_url = soup.find('p', attrs={'class':'covr'}).a['href']
return getattr(self, 'cover_url', self.cover_url)


@ -10,7 +10,7 @@ krakow.gazeta.pl
from calibre.web.feeds.news import BasicNewsRecipe
class gw_krakow(BasicNewsRecipe):
title = u'Gazeta.pl Kraków'
title = u'Gazeta Wyborcza Kraków'
__author__ = 'teepel <teepel44@gmail.com> based on GW from fenuks'
language = 'pl'
description =u'Wiadomości z Krakowa na portalu Gazeta.pl.'


@ -5,7 +5,7 @@ import string
from calibre.web.feeds.news import BasicNewsRecipe
class GazetaPlSzczecin(BasicNewsRecipe):
title = u'Gazeta.pl Szczecin'
title = u'Gazeta Wyborcza Szczecin'
description = u'Wiadomości ze Szczecina na portalu Gazeta.pl.'
__author__ = u'Michał Szkutnik'
__license__ = u'GPL v3'


@ -10,7 +10,7 @@ warszawa.gazeta.pl
from calibre.web.feeds.news import BasicNewsRecipe
class gw_wawa(BasicNewsRecipe):
title = u'Gazeta.pl Warszawa'
title = u'Gazeta Wyborcza Warszawa'
__author__ = 'teepel <teepel44@gmail.com> based on GW from fenuks'
language = 'pl'
description ='Wiadomości z Warszawy na portalu Gazeta.pl.'


@ -3,7 +3,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import Comment
class Gazeta_Wyborcza(BasicNewsRecipe):
title = u'Gazeta.pl'
title = u'Gazeta Wyborcza'
__author__ = 'fenuks, Artur Stachecki'
language = 'pl'
description = 'Wiadomości z Polski i ze świata. Serwisy tematyczne i lokalne w 20 miastach.'


@ -1,5 +1,5 @@
__license__ = 'GPL v3'
__copyright__ = '2008-2012, Darko Miletic <darko.miletic at gmail.com>'
__copyright__ = '2008-2013, Darko Miletic <darko.miletic at gmail.com>'
'''
harpers.org - paid subscription/ printed issue articles
This recipe only gets articles published in text format
@ -14,7 +14,7 @@ from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe
class Harpers_full(BasicNewsRecipe):
title = "Harper's Magazine - Printed Edition"
title = "Harper's Magazine - articles from printed edition"
__author__ = 'Darko Miletic'
description = "Harper's Magazine, the oldest general-interest monthly in America, explores the issues that drive our national conversation, through long-form narrative journalism and essays, and such celebrated features as the iconic Harper's Index."
publisher = "Harpers's"
@ -29,7 +29,6 @@ class Harpers_full(BasicNewsRecipe):
needs_subscription = 'optional'
masthead_url = 'http://harpers.org/wp-content/themes/harpers/images/pheader.gif'
publication_type = 'magazine'
INDEX = ''
LOGIN = 'http://harpers.org/wp-content/themes/harpers/ajax_login.php'
extra_css = """
body{font-family: adobe-caslon-pro,serif}
@ -66,37 +65,40 @@ class Harpers_full(BasicNewsRecipe):
def parse_index(self):
#find current issue
soup = self.index_to_soup('http://harpers.org/')
currentIssue=soup.find('div',attrs={'class':'mainNavi'}).find('li',attrs={'class':'curentIssue'})
currentIssue_url=self.tag_to_string(currentIssue.a['href'])
self.log(currentIssue_url)
#go to the current issue
soup1 = self.index_to_soup(currentIssue_url)
date = re.split('\s\|\s',self.tag_to_string(soup1.head.title.string))[0]
currentIssue_title = self.tag_to_string(soup1.head.title.string)
date = re.split('\s\|\s',currentIssue_title)[0]
self.timefmt = u' [%s]'%date
#get cover
self.cover_url = soup1.find('div', attrs = {'class':'picture_hp'}).find('img', src=True)['src']
self.cover_url = soup1.find('div', attrs = {'class':'picture_hp'}).find('img', src=True)['src']
self.log(self.cover_url)
articles = []
count = 0
for item in soup1.findAll('div', attrs={'class':'articleData'}):
text_links = item.findAll('h2')
for text_link in text_links:
if count == 0:
count = 1
else:
url = text_link.a['href']
title = text_link.a.contents[0]
date = strftime(' %B %Y')
articles.append({
'title' :title
,'date' :date
,'url' :url
,'description':''
})
return [(soup1.head.title.string, articles)]
if text_links:
for text_link in text_links:
if count == 0:
count = 1
else:
url = text_link.a['href']
title = self.tag_to_string(text_link.a)
date = strftime(' %B %Y')
articles.append({
'title' :title
,'date' :date
,'url' :url
,'description':''
})
return [(currentIssue_title, articles)]
def print_version(self, url):
return url + '?single=1'


@ -1,6 +1,4 @@
from calibre.web.feeds.news import BasicNewsRecipe
import re
from datetime import date, timedelta
class HBR(BasicNewsRecipe):
@ -11,23 +9,18 @@ class HBR(BasicNewsRecipe):
timefmt = ' [%B %Y]'
language = 'en'
no_stylesheets = True
# recipe_disabled = ('hbr.org has started requiring the use of javascript'
# ' to log into their website. This is unsupported in calibre, so'
# ' this recipe has been disabled. If you would like to see '
# ' HBR supported in calibre, contact hbr.org and ask them'
# ' to provide a javascript free login method.')
LOGIN_URL = 'https://hbr.org/login?request_url=/'
LOGOUT_URL = 'https://hbr.org/logout?request_url=/'
INDEX = 'http://hbr.org/archive-toc/BR'
INDEX = 'http://hbr.org'
keep_only_tags = [dict(name='div', id='pageContainer')]
remove_tags = [dict(id=['mastheadContainer', 'magazineHeadline',
'articleToolbarTopRD', 'pageRightSubColumn', 'pageRightColumn',
'todayOnHBRListWidget', 'mostWidget', 'keepUpWithHBR',
'mailingListTout', 'partnerCenter', 'pageFooter',
'superNavHeadContainer', 'hbrDisqus',
'superNavHeadContainer', 'hbrDisqus', 'article-toolbox',
'articleToolbarTop', 'articleToolbarBottom', 'articleToolbarRD']),
dict(name='iframe')]
extra_css = '''
@ -57,22 +50,6 @@ class HBR(BasicNewsRecipe):
if url.endswith('/ar/1'):
return url[:-1]+'pr'
def hbr_get_toc(self):
# return self.index_to_soup(open('/t/toc.html').read())
today = date.today()
future = today + timedelta(days=30)
past = today - timedelta(days=30)
for x in [x.strftime('%y%m') for x in (future, today, past)]:
url = self.INDEX + x
soup = self.index_to_soup(url)
if (not soup.find(text='Issue Not Found') and not soup.find(
text="We're Sorry. There was an error processing your request")
and 'Exception: java.io.FileNotFoundException' not in
unicode(soup)):
return soup
raise Exception('Could not find current issue')
def hbr_parse_toc(self, soup):
feeds = []
current_section = None
@ -105,23 +82,19 @@ class HBR(BasicNewsRecipe):
articles.append({'title':title, 'url':url, 'description':desc,
'date':''})
if current_section is not None and articles:
feeds.append((current_section, articles))
return feeds
def parse_index(self):
soup = self.hbr_get_toc()
# open('/t/hbr.html', 'wb').write(unicode(soup).encode('utf-8'))
soup0 = self.index_to_soup('http://hbr.org/magazine')
datencover = soup0.find('ul', attrs={'id':'magazineArchiveCarousel'}).findAll('li')[-1]
#find date & cover
self.cover_url=datencover.img['src']
dates=self.tag_to_string(datencover.img['alt'])
self.timefmt = u' [%s]'%dates
soup = self.index_to_soup(self.INDEX + soup0.find('div', attrs = {'class':'magazine_page'}).a['href'])
feeds = self.hbr_parse_toc(soup)
return feeds
def get_cover_url(self):
cover_url = None
index = 'http://hbr.org/current'
soup = self.index_to_soup(index)
link_item = soup.find('img', alt=re.compile("Current Issue"), src=True)
if link_item:
cover_url = 'http://hbr.org' + link_item['src']
return cover_url

Binary file not shown. (new; 491 B)
recipes/icons/equipped.png (new file; 929 B)
recipes/icons/forbes_pl.png (new file; 1.2 KiB)
Binary file not shown. (new; 612 B)
Binary file not shown. (changed; 802 B before, 294 B after)
Binary file not shown. (changed; 802 B before, 294 B after)
Binary file not shown. (changed; 802 B before, 294 B after)
Binary file not shown. (changed; 802 B before, 294 B after)
Binary file not shown. (new; 731 B)
Binary file not shown. (new; 982 B)
recipes/icons/media2.png (new file; 660 B)
recipes/icons/mobilna.png (new file; 885 B)
Binary file not shown. (new; 307 B)
Binary file not shown. (new; 616 B)
Binary file not shown. (new; 1.2 KiB)
recipes/icons/osw.png (new file; 489 B)
recipes/icons/ppe_pl.png (new file; 3.1 KiB)
Binary file not shown. (new; 207 B)
Binary file not shown. (new; 733 B)
recipes/icons/slashdot.png (new file; 250 B)
Binary file not shown. (new; 511 B)
Binary file not shown. (new; 497 B)
Binary file not shown. (new; 205 B)

recipes/ittechblog.recipe (new file, +26 lines)

@ -0,0 +1,26 @@
__license__ = 'GPL v3'
__copyright__ = 'MrStefan'
'''
www.ittechblog.pl
'''
from calibre.web.feeds.news import BasicNewsRecipe
class ittechblog(BasicNewsRecipe):
title = u'IT techblog'
__author__ = 'MrStefan <mrstefaan@gmail.com>'
language = 'pl'
description =u'Na naszym blogu technologicznym znajdziesz między innymi: testy sprzętu, najnowsze startupy, technologiczne nowinki, felietony tematyczne.'
extra_css = '.cover > img {display:block;}'
remove_empty_feeds = True
oldest_article = 7
max_articles_per_feed = 100
remove_javascript = True
no_stylesheets = True
use_embedded_content = False
keep_only_tags =[dict(attrs={'class':'box'})]
remove_tags =[dict(name='aside'), dict(attrs={'class':['tags', 'counter', 'twitter-share-button']})]
feeds = [(u'Artykuły', u'http://feeds.feedburner.com/ITTechBlog?format=xml')]


@ -2,8 +2,7 @@
from calibre.web.feeds.news import BasicNewsRecipe
class KrytykaPolitycznaRecipe(BasicNewsRecipe):
__license__ = 'GPL v3'
__author__ = u'intromatyk <intromatyk@gmail.com>'
__author__ = u'Artur Stachecki <artur.stachecki@gmail.com>'
language = 'pl'
version = 1


@ -1,33 +1,23 @@
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
# vim:fileencoding=UTF-8
from __future__ import unicode_literals
from calibre.web.feeds.news import BasicNewsRecipe
class AListApart (BasicNewsRecipe):
__author__ = u'Marc Busqué <marc@lamarciana.com>'
__author__ = 'Marc Busqué <marc@lamarciana.com>'
__url__ = 'http://www.lamarciana.com'
__version__ = '1.0'
__version__ = '2.0'
__license__ = 'GPL v3'
__copyright__ = u'2012, Marc Busqué <marc@lamarciana.com>'
__copyright__ = '2012, Marc Busqué <marc@lamarciana.com>'
title = u'A List Apart'
description = u'A List Apart Magazine (ISSN: 1534-0295) explores the design, development, and meaning of web content, with a special focus on web standards and best practices.'
description = u'A List Apart Magazine (ISSN: 1534-0295) explores the design, development, and meaning of web content, with a special focus on web standards and best practices. This recipe retrieve articles and columns.'
language = 'en'
tags = 'web development, software'
oldest_article = 120
remove_empty_feeds = True
no_stylesheets = True
encoding = 'utf8'
cover_url = u'http://alistapart.com/pix/alalogo.gif'
keep_only_tags = [
dict(name='div', attrs={'id': 'content'})
]
remove_tags = [
dict(name='ul', attrs={'id': 'metastuff'}),
dict(name='div', attrs={'class': 'discuss'}),
dict(name='div', attrs={'class': 'discuss'}),
dict(name='div', attrs={'id': 'learnmore'}),
]
remove_attributes = ['border', 'cellspacing', 'align', 'cellpadding', 'colspan', 'valign', 'vspace', 'hspace', 'alt', 'width', 'height']
extra_css = u'img {max-width: 100%; display: block; margin: auto;} #authorbio img {float: left; margin-right: 2%;}'
extra_css = u'img {max-width: 100%; display: block; margin: auto;}'
feeds = [
(u'A List Apart', u'http://www.alistapart.com/site/rss'),
(u'A List Apart', u'http://feeds.feedburner.com/alistapart/abridged'),
]


@ -0,0 +1,88 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
'''
magazynconsido.pl/
'''
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.magick import Image
class magazynconsido(BasicNewsRecipe):
title = u'Magazyn Consido'
__author__ = 'Artur Stachecki <artur.stachecki@gmail.com> ,teepel <teepel44@gmail.com>'
language = 'pl'
description =u'Portal dla architektów i projektantów'
masthead_url='http://qualitypixels.pl/wp-content/themes/airlock/advance/inc/timthumb.php?src=http://qualitypixels.pl/wp-content/uploads/2012/01/logotyp-magazynconsido-11.png&w=455&zc=1'
oldest_article = 7
max_articles_per_feed = 100
remove_javascript=True
no_stylesheets = True
use_embedded_content = False
keep_only_tags =[]
keep_only_tags.append(dict(name = 'h1'))
keep_only_tags.append(dict(name = 'p'))
keep_only_tags.append(dict(attrs = {'class' : 'navigation'}))
remove_tags =[dict(attrs = {'style' : 'font-size: x-small;' })]
remove_tags_after =[dict(attrs = {'class' : 'navigation' })]
extra_css=''' img {max-width:30%; max-height:30%; display: block; margin-left: auto; margin-right: auto;}
h1 {text-align: center;}'''
def parse_index(self): #(kk)
soup = self.index_to_soup('http://feeds.feedburner.com/magazynconsido?format=xml')
feeds = []
articles = {}
sections = []
section = ''
for item in soup.findAll('item') :
section = self.tag_to_string(item.category)
if not articles.has_key(section) :
sections.append(section)
articles[section] = []
article_url = self.tag_to_string(item.guid)
article_title = self.tag_to_string(item.title)
article_date = self.tag_to_string(item.pubDate)
article_description = self.tag_to_string(item.description)
articles[section].append( { 'title' : article_title, 'url' : article_url, 'date' : article_date, 'description' : article_description })
for section in sections :
if section == 'Video':
feeds.append((section, articles[section]))
feeds.pop()
else:
feeds.append((section, articles[section]))
return feeds
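For context, parse_index hands calibre a list of (section title, article list) pairs, where each article is a dict like the ones assembled above. A sketch of the returned structure, with placeholder values rather than real feed content:

    # shape of the value parse_index returns (placeholder data)
    [(u'Architektura', [
        {'title': u'Przykladowy artykul', 'url': 'http://example.com/artykul',
         'date': 'Fri, 05 Apr 2013', 'description': u'...'},
    ])]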
def append_page(self, soup, appendtag):
apage = soup.find('div', attrs={'class':'wp-pagenavi'})
if apage is not None:
nexturl = soup.find('a', attrs={'class':'nextpostslink'})
soup2 = self.index_to_soup(nexturl['href'])
pagetext = soup2.findAll('p')
for tag in pagetext:
pos = len(appendtag.contents)
appendtag.insert(pos, tag)
while appendtag.find('div', attrs={'class': ['height: 35px;', 'post-meta', 'addthis_toolbox addthis_default_style addthis_', 'post-meta-bottom', 'block_recently_post', 'fbcomments', 'pin-it-button', 'pages', 'navigation']}) is not None:
appendtag.find('div', attrs={'class': ['height: 35px;', 'post-meta', 'addthis_toolbox addthis_default_style addthis_', 'post-meta-bottom', 'block_recently_post', 'fbcomments', 'pin-it-button', 'pages', 'navigation']}).replaceWith('')
def preprocess_html(self, soup): #(kk)
self.append_page(soup, soup.body)
return self.adeify_images(soup)
def postprocess_html(self, soup, first):
#process all the images
for tag in soup.findAll(lambda tag: tag.name.lower()=='img' and tag.has_key('src')):
iurl = tag['src']
img = Image()
img.open(iurl)
if img.size[0] == 0: # nothing was loaded
raise RuntimeError('Failed to load image: %s' % iurl)
img.type = "GrayscaleType"
img.save(iurl)
return soup

35
recipes/media2.recipe Normal file
View File

@@ -0,0 +1,35 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__copyright__ = 'teepel'
'''
media2.pl
'''
from calibre.web.feeds.news import BasicNewsRecipe
class media2_pl(BasicNewsRecipe):
title = u'Media2'
__author__ = 'teepel <teepel44@gmail.com>'
language = 'pl'
description =u'Media2.pl to jeden z najczęściej odwiedzanych serwisów dla profesjonalistów z branży medialnej, telekomunikacyjnej, public relations oraz nowych technologii.'
masthead_url='http://media2.pl/res/logo/www.png'
remove_empty_feeds= True
oldest_article = 1
max_articles_per_feed = 100
remove_javascript=True
no_stylesheets=True
simultaneous_downloads = 5
extra_css = '''.news-lead{font-weight: bold; }'''
keep_only_tags =[]
keep_only_tags.append(dict(name = 'div', attrs = {'class' : 'news-item tpl-big'}))
remove_tags =[]
remove_tags.append(dict(name = 'span', attrs = {'class' : 'news-comments'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'item-sidebar'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'news-tags'}))
feeds = [(u'Media2', u'http://feeds.feedburner.com/media2')]

View File

@@ -6,10 +6,10 @@ import time
class AdvancedUserRecipe1306097511(BasicNewsRecipe):
title = u'Metro UK'
description = 'News as provided by The Metro -UK'
description = 'News from The Metro, UK'
#timefmt = ''
__author__ = 'fleclerc & Dave Asbury'
#last update 20/1/13
__author__ = 'Dave Asbury'
#last update 4/4/13
#cover_url = 'http://profile.ak.fbcdn.net/hprofile-ak-snc4/276636_117118184990145_2132092232_n.jpg'
cover_url = 'https://twimg0-a.akamaihd.net/profile_images/1638332595/METRO_LETTERS-01.jpg'
@@ -22,7 +22,7 @@ class AdvancedUserRecipe1306097511(BasicNewsRecipe):
language = 'en_GB'
masthead_url = 'http://e-edition.metro.co.uk/images/metro_logo.gif'
compress_news_images = True
def parse_index(self):
articles = {}
key = None

26
recipes/mobilna.recipe Normal file
View File

@@ -0,0 +1,26 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__copyright__ = 'MrStefan'
'''
www.mobilna.pl
'''
from calibre.web.feeds.news import BasicNewsRecipe
class mobilna(BasicNewsRecipe):
title = u'Mobilna.pl'
__author__ = 'MrStefan <mrstefaan@gmail.com>'
language = 'pl'
description =u'twoja mobilna strona'
#masthead_url=''
remove_empty_feeds= True
oldest_article = 7
max_articles_per_feed = 100
remove_javascript=True
no_stylesheets=True
use_embedded_content = True
#keep_only_tags =[dict(attrs={'class':'Post'})]
feeds = [(u'Artykuły', u'http://mobilna.pl/feed/')]

View File

@@ -0,0 +1,50 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__copyright__ = 'MrStefan, teepel'
'''
www.mojegotowanie.pl
'''
from calibre.web.feeds.news import BasicNewsRecipe
class mojegotowanie(BasicNewsRecipe):
title = u'Moje Gotowanie'
__author__ = 'MrStefan <mrstefaan@gmail.com>, teepel <teepel44@gmail.com>'
language = 'pl'
description =u'Gotowanie to Twoja pasja? Uwielbiasz sałatki? Lubisz grillować? Przepisy kulinarne doskonałe na wszystkie okazje znajdziesz na www.mojegotowanie.pl.'
masthead_url='http://www.mojegotowanie.pl/extension/selfstart/design/self/images/top_c2.gif'
cover_url = 'http://www.mojegotowanie.pl/extension/selfstart/design/self/images/mgpl/mojegotowanie.gif'
remove_empty_feeds= True
oldest_article = 7
max_articles_per_feed = 100
remove_javascript=True
no_stylesheets=True
keep_only_tags =[]
keep_only_tags.append(dict(name = 'div', attrs = {'class' : 'content'}))
feeds = [(u'Artykuły', u'http://mojegotowanie.pl/rss/feed/artykuly'),
(u'Przepisy', u'http://mojegotowanie.pl/rss/feed/przepisy')]
def parse_feeds(self):
feeds = BasicNewsRecipe.parse_feeds(self)
for feed in feeds:
for article in feed.articles[:]:
if 'film' in article.title:
feed.articles.remove(article)
return feeds
def get_article_url(self, article):
link = article.get('link')
if 'Clayout0Cset0Cprint0' in link:
return link
def print_version(self, url):
segment = url.split('/')
URLPart = segment[-2]
URLPart = URLPart.replace('0L0Smojegotowanie0Bpl0Clayout0Cset0Cprint0C', '/')
URLPart = URLPart.replace('0I', '_')
URLPart = URLPart.replace('0C', '/')
return 'http://www.mojegotowanie.pl/layout/set/print' + URLPart
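The chain of replace() calls above undoes FeedBurner's escaping, in which '0C' stands for '/', '0I' for '_' and the long '0L0S...' prefix for the site root. A worked example, with a made-up article slug:

    # hypothetical FeedBurner-escaped URL, invented only to show the decoding
    url = ('http://feedproxy.google.com/~r/x/~3/abc/'
           '0L0Smojegotowanie0Bpl0Clayout0Cset0Cprint0Cprzepisy0Ckurczak0I123/story01.htm')
    segment = url.split('/')[-2]
    segment = segment.replace('0L0Smojegotowanie0Bpl0Clayout0Cset0Cprint0C', '/')
    segment = segment.replace('0I', '_').replace('0C', '/')
    # 'http://www.mojegotowanie.pl/layout/set/print' + segment gives
    # http://www.mojegotowanie.pl/layout/set/print/przepisy/kurczak_123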

View File

@@ -0,0 +1,27 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__author__ = 'teepel <teepel44@gmail.com>'
'''
nczas.com
'''
from calibre.web.feeds.news import BasicNewsRecipe
class nczas(BasicNewsRecipe):
title = u'Najwy\u017cszy Czas'
__author__ = 'teepel <teepel44@gmail.com>'
language = 'pl'
description ='Wiadomości z nczas.com'
INDEX='http://nczas.com'
oldest_article = 7
max_articles_per_feed = 100
use_embedded_content = True
remove_empty_feeds= True
simultaneous_downloads = 5
remove_javascript=True
remove_attributes = ['style']
no_stylesheets=True
feeds = [(u'Najwyższy Czas', u'http://nczas.com/feed/')]

View File

@@ -12,6 +12,7 @@ class AdvancedUserRecipe1306061239(BasicNewsRecipe):
max_articles_per_feed = 20
#auto_cleanup = True
language = 'en_GB'
compress_news_images = True
def get_cover_url(self):
soup = self.index_to_soup('http://www.nme.com/component/subscribe')
@@ -27,7 +28,7 @@ class AdvancedUserRecipe1306061239(BasicNewsRecipe):
br.open_novisit(cov2)
cover_url = str(cov2)
except:
cover_url = 'http://tawanda3000.files.wordpress.com/2011/02/nme-logo.jpg'
cover_url = 'http://tawanda3000.files.wordpress.com/2011/02/nme-logo.jpg'
return cover_url
masthead_url = 'http://tawanda3000.files.wordpress.com/2011/02/nme-logo.jpg'

View File

@@ -0,0 +1,31 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe
class NowinyRybnik(BasicNewsRecipe):
title = u'Nowiny - Rybnik'
__author__ = 'Artur Stachecki <artur.stachecki@gmail.com>'
language = 'pl'
description = u'Tygodnik Regionalny NOWINY. Ogłoszenia drobne, wiadomości i wydarzenia z regionu Rybnika i okolic'
oldest_article = 7
masthead_url = 'http://www.nowiny.rybnik.pl/logo/logo.jpg'
max_articles_per_feed = 100
simultaneous_downloads = 5
remove_javascript = True
no_stylesheets = True
keep_only_tags = [(dict(name='div', attrs={'id': 'drukuj'}))]
remove_tags = []
remove_tags.append(dict(name='div', attrs={'id': 'footer'}))
feeds = [(u'Wszystkie artykuły', u'http://www.nowiny.rybnik.pl/rss,artykuly,dzial,0,miasto,0,ile,25.xml')]
def preprocess_html(self, soup):
for alink in soup.findAll('a'):
if alink.string is not None:
tstr = alink.string
alink.replaceWith(tstr)
return soup

41
recipes/osw.recipe Normal file
View File

@@ -0,0 +1,41 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__author__ = 'teepel <teepel44@gmail.com>'
'''
http://www.osw.waw.pl - Osrodek studiow wschodnich
'''
from calibre.web.feeds.news import BasicNewsRecipe
class OSW_Recipe(BasicNewsRecipe):
language = 'pl'
title = u'Ośrodek Studiów Wschodnich'
__author__ = 'teepel <teepel44@gmail.com>'
INDEX='http://www.osw.waw.pl'
description = u'Ośrodek Studiów Wschodnich im. Marka Karpia. Centre for Eastern Studies.'
category = u'News'
oldest_article = 7
max_articles_per_feed = 100
cover_url=''
remove_empty_feeds= True
no_stylesheets=True
remove_javascript = True
simultaneous_downloads = 5
keep_only_tags =[]
# this line should show the title of the article, but it doesn't work
keep_only_tags.append(dict(name = 'h1', attrs = {'class' : 'print-title'}))
keep_only_tags.append(dict(name = 'div', attrs = {'class' : 'print-submitted'}))
keep_only_tags.append(dict(name = 'div', attrs = {'class' : 'print-content'}))
remove_tags =[]
remove_tags.append(dict(name = 'table', attrs = {'id' : 'attachments'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'print-submitted'}))
feeds = [(u'OSW', u'http://www.osw.waw.pl/pl/rss.xml')]
def print_version(self, url):
return url.replace('http://www.osw.waw.pl/pl/', 'http://www.osw.waw.pl/pl/print/')

41
recipes/ppe_pl.recipe Normal file
View File

@@ -0,0 +1,41 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe
class ppeRecipe(BasicNewsRecipe):
__author__ = u'Artur Stachecki <artur.stachecki@gmail.com>'
language = 'pl'
title = u'ppe.pl'
category = u'News'
description = u'Portal o konsolach i grach wideo.'
cover_url=''
remove_empty_feeds= True
no_stylesheets=True
oldest_article = 1
max_articles_per_feed = 100000
recursions = 0
no_stylesheets = True
remove_javascript = True
simultaneous_downloads = 2
keep_only_tags =[]
keep_only_tags.append(dict(name = 'div', attrs = {'class' : 'news-heading'}))
keep_only_tags.append(dict(name = 'div', attrs = {'class' : 'tresc-poziom'}))
remove_tags =[]
remove_tags.append(dict(name = 'div', attrs = {'class' : 'bateria1'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'bateria2'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'bateria3'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'news-photo'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'fbl'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'info'}))
remove_tags.append(dict(name = 'div', attrs = {'class' : 'links'}))
remove_tags.append(dict(name = 'div', attrs = {'style' : 'padding: 4px'}))
feeds = [
('Newsy', 'feed://ppe.pl/rss/rss.xml'),
]

33
recipes/presseurop.recipe Normal file
View File

@@ -0,0 +1,33 @@
#!/usr/bin/env python
'''
www.presseurop.eu/pl
'''
__license__ = 'GPL v3'
__author__ = 'teepel <teepel44@gmail.com>'
from calibre.web.feeds.news import BasicNewsRecipe
import re
class presseurop(BasicNewsRecipe):
title = u'Presseurop'
description = u'Najlepsze artykuły z prasy europejskiej'
language = 'pl'
oldest_article = 7
max_articles_per_feed = 100
auto_cleanup = True
feeds = [
(u'Polityka', u'http://www.presseurop.eu/pl/taxonomy/term/1/%2A/feed'),
(u'Społeczeństwo', u'http://www.presseurop.eu/pl/taxonomy/term/2/%2A/feed'),
(u'Gospodarka', u'http://www.presseurop.eu/pl/taxonomy/term/3/%2A/feed'),
(u'Kultura i debaty', u'http://www.presseurop.eu/pl/taxonomy/term/4/%2A/feed'),
(u'UE i Świat', u'http://www.presseurop.eu/pl/taxonomy/term/5/%2A/feed')
]
preprocess_regexps = [
(re.compile(r'\|.*</title>', re.DOTALL|re.IGNORECASE),
lambda match: '</title>'),
]
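The preprocess_regexps entry above trims everything from the first '|' up to the closing title tag, so page titles of the assumed form 'Headline | Presseurop' collapse to just the headline:

    # illustration of the title-trimming regexp; the title string is hypothetical
    import re
    pat = re.compile(r'\|.*</title>', re.DOTALL | re.IGNORECASE)
    pat.sub('</title>', '<title>Headline | Presseurop</title>')
    # -> '<title>Headline </title>'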

View File

@@ -0,0 +1,35 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe
class ResPublicaNowaRecipe(BasicNewsRecipe):
__license__ = 'GPL v3'
__author__ = u'Artur Stachecki <artur.stachecki@gmail.com>'
language = 'pl'
version = 1
title = u'Res Publica Nowa'
category = u'News'
description = u'Portal kulturalno-społecznego kwartalnika o profilu liberalnym, wydawany przez Fundację Res Publica'
cover_url=''
remove_empty_feeds= True
no_stylesheets=True
oldest_article = 7
max_articles_per_feed = 100000
recursions = 0
no_stylesheets = True
remove_javascript = True
simultaneous_downloads = 5
feeds = [
('Artykuly', 'feed://publica.pl/feed'),
]
def preprocess_html(self, soup):
for alink in soup.findAll('a'):
if alink.string is not None:
tstr = alink.string
alink.replaceWith(tstr)
return soup

View File

@@ -1,30 +1,30 @@
# vim:fileencoding=UTF-8
from __future__ import unicode_literals
__license__ = 'GPL v3'
__copyright__ = '2011, Eddie Lau'
__copyright__ = '2011-2013, Eddie Lau'
# data source: normal, mobile
__Source__ = 'mobile'
# Set it to False if you do not want the ebook to be generated as a periodical. (Default: True)
__MakePeriodical__ = True
# Set it to True if your device supports display of CJK titles (Default: False)
__UseChineseTitle__ = False
__UseChineseTitle__ = True
# Set it to False if you want to skip images (Default: True)
__KeepImages__ = True
# Set it to True if you want to include a summary in Kindle's article view (Default: False)
__IncludeSummary__ = False
__IncludeSummary__ = True
# Set it to True if you want thumbnail images in Kindle's article view (Default: True)
__IncludeThumbnails__ = True
'''
Change Log:
2013/03/31 -- fix cover retrieval code and heading size, and remove &nbsp; in summary
2011/12/29 -- first version done
TODO:
* use alternative source at http://m.singtao.com/index.php
'''
from calibre.utils.date import now as nowf
import os, datetime, re
from datetime import date
from calibre.web.feeds.recipes import BasicNewsRecipe
from contextlib import nested
from calibre.ebooks.BeautifulSoup import BeautifulSoup
@@ -41,7 +41,7 @@ class STHKRecipe(BasicNewsRecipe):
title = 'Sing Tao Daily - Hong Kong'
description = 'Hong Kong Chinese Newspaper (http://singtao.com)'
category = 'Chinese, News, Hong Kong'
extra_css = 'img {display: block; margin-left: auto; margin-right: auto; margin-top: 10px; margin-bottom: 10px; max-height:90%;} td[class=caption] {font-size:50%;} td[class=bodyhead]{font-weight:bold; font-size:150%;} td[class=stmobheadline]{font-weight:bold; font-size:150%;}'
extra_css = 'img {display: block; margin-left: auto; margin-right: auto; margin-top: 10px; margin-bottom: 10px; max-height:90%;} td[class=caption] {font-size:50%;} td[class=bodyhead]{font-weight:bold; font-size:150%;} td[class=stmobheadline]{font-weight:bold; font-size:200%;}'
masthead_url = 'http://upload.wikimedia.org/wikipedia/en/d/dd/Singtao-usa.png'
if __Source__ == 'normal':
keep_only_tags = [dict(name='td', attrs={'class':['bodyhead','bodytext']})]
@@ -96,17 +96,13 @@ class STHKRecipe(BasicNewsRecipe):
return self.get_dtlocal().strftime("%d")
def get_cover_url(self):
#cover = 'http://singtao.com/media/a/a(2660).jpg' # for 2011/12/29
base = 2660
todaydate = date(int(self.get_fetchyear()), int(self.get_fetchmonth()), int(self.get_fetchday()))
diff = todaydate - date(2011, 12, 29)
base = base + int(diff.total_seconds()/(3600*24))
cover = 'http://singtao.com/media/a/a(' + str(base) +').jpg'
soup = self.index_to_soup('http://m.singtao.com/')
cover = soup.find(attrs={'class':'special'}).get('src', False)
br = BasicNewsRecipe.get_browser(self)
try:
br.open(cover)
except:
cover = 'http://singtao.com/images/stlogo.gif'
cover = None
return cover
def parse_index(self):
@@ -289,11 +285,11 @@ class STHKRecipe(BasicNewsRecipe):
# the text may or may not be enclosed in <p></p> tag
paras = articlebody.findAll('p')
if not paras:
paras = articlebody
paras = articlebody
textFound = False
for p in paras:
if not textFound:
summary_candidate = self.tag_to_string(p).strip()
summary_candidate = self.tag_to_string(p).strip().replace('&nbsp;', '')
if len(summary_candidate) > 0:
summary_candidate = summary_candidate.replace(u'(\u661f\u5cf6\u65e5\u5831\u5831\u9053)', '', 1)
article.summary = article.text_summary = summary_candidate
@@ -489,3 +485,4 @@ class STHKRecipe(BasicNewsRecipe):

View File

@@ -20,7 +20,7 @@ class sport_pl(BasicNewsRecipe):
remove_javascript=True
no_stylesheets=True
remove_empty_feeds = True
ignore_duplicate_articles = {'title', 'url'}
keep_only_tags =[]
keep_only_tags.append(dict(name = 'div', attrs = {'id' : 'article'}))

View File

@@ -0,0 +1,70 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
import re
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.magick import Image
class sportowefakty(BasicNewsRecipe):
title = u'SportoweFakty'
__author__ = 'Artur Stachecki <artur.stachecki@gmail.com>, Tomasz Długosz <tomek3d@gmail.com>'
language = 'pl'
description = u'Najważniejsze informacje sportowe z kraju i ze świata, relacje, komentarze, wywiady, zdjęcia!'
oldest_article = 1
masthead_url='http://www.sportowefakty.pl/images/logo.png'
max_articles_per_feed = 100
simultaneous_downloads = 5
use_embedded_content=False
remove_javascript=True
no_stylesheets=True
ignore_duplicate_articles = {'title', 'url'}
keep_only_tags = [dict(attrs = {'class' : 'box-article'})]
remove_tags =[]
remove_tags.append(dict(attrs = {'class' : re.compile(r'^newsStream')}))
remove_tags.append(dict(attrs = {'target' : '_blank'}))
feeds = [
(u'Piłka Nożna', u'http://www.sportowefakty.pl/pilka-nozna/index.rss'),
(u'Koszykówka', u'http://www.sportowefakty.pl/koszykowka/index.rss'),
(u'Żużel', u'http://www.sportowefakty.pl/zuzel/index.rss'),
(u'Siatkówka', u'http://www.sportowefakty.pl/siatkowka/index.rss'),
(u'Zimowe', u'http://www.sportowefakty.pl/zimowe/index.rss'),
(u'Hokej', u'http://www.sportowefakty.pl/hokej/index.rss'),
(u'Moto', u'http://www.sportowefakty.pl/moto/index.rss'),
(u'Tenis', u'http://www.sportowefakty.pl/tenis/index.rss')
]
def get_article_url(self, article):
link = article.get('link', None)
if link and 'utm_source' in link:
return link.split('?utm')[0]
else:
return link
def print_version(self, url):
print_url = url + '/drukuj'
return print_url
def preprocess_html(self, soup):
head = soup.find('h1')
if 'Fotorelacja' in self.tag_to_string(head):
return None
else:
for alink in soup.findAll('a'):
if alink.string is not None:
tstr = alink.string
alink.replaceWith(tstr)
return soup
def postprocess_html(self, soup, first):
for tag in soup.findAll(lambda tag: tag.name.lower()=='img' and tag.has_key('src')):
iurl = tag['src']
img = Image()
img.open(iurl)
if img.size[0] == 0: # nothing was loaded
raise RuntimeError('Failed to load image: %s' % iurl)
img.type = "GrayscaleType"
img.save(iurl)
return soup

View File

@@ -20,7 +20,7 @@ class AdvancedUserRecipe1325006965(BasicNewsRecipe):
no_stylesheets = True
ignore_duplicate_articles = {'title','url'}
compress_news_images = True
extra_css = '''
body{ text-align: justify; font-family:Arial,Helvetica,sans-serif; font-size:11px; font-size-adjust:none; font-stretch:normal; font-style:normal; font-variant:normal; font-weight:normal;}

View File

@@ -1,7 +1,7 @@
from calibre.web.feeds.news import BasicNewsRecipe
class WirtualneMedia(BasicNewsRecipe):
title = u'wirtualnemedia.pl'
title = u'Wirtualnemedia.pl'
oldest_article = 7
max_articles_per_feed = 100
no_stylesheets = True

View File

@@ -0,0 +1,26 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__author__ = 'teepel <teepel44@gmail.com>'
'''
wolnemedia.net
'''
from calibre.web.feeds.news import BasicNewsRecipe
class wolne_media(BasicNewsRecipe):
title = u'Wolne Media'
__author__ = 'teepel <teepel44@gmail.com>'
language = 'pl'
description ='Wiadomości z wolnemedia.net'
INDEX='http://wolnemedia.net'
oldest_article = 1
max_articles_per_feed = 100
remove_empty_feeds= True
simultaneous_downloads = 5
remove_javascript=True
no_stylesheets=True
auto_cleanup = True
feeds = [(u'Wiadomości z kraju', u'http://wolnemedia.net/category/wiadomosci-z-kraju/feed/'),(u'Wiadomości ze świata', u'http://wolnemedia.net/category/wiadomosci-ze-swiata/feed/'),(u'Edukacja', u'http://wolnemedia.net/category/edukacja/feed/'),(u'Ekologia', u'http://wolnemedia.net/category/ekologia/feed/'),(u'Gospodarka', u'http://wolnemedia.net/category/gospodarka/feed/'),(u'Historia', u'http://wolnemedia.net/category/historia/feed/'),(u'Kultura', u'http://wolnemedia.net/category/kultura/feed/'),(u'Kulturoznawstwo', u'http://wolnemedia.net/category/kulturoznawstwo/feed/'),(u'Media', u'http://wolnemedia.net/category/media/feed/'),(u'Nauka', u'http://wolnemedia.net/category/nauka/feed/'),(u'Opowiadania', u'http://wolnemedia.net/category/opowiadania/feed/'),(u'Paranauka i ezoteryka', u'http://wolnemedia.net/category/ezoteryka/feed/'),(u'Polityka', u'http://wolnemedia.net/category/polityka/feed/'),(u'Prawo', u'http://wolnemedia.net/category/prawo/feed/'),(u'Publicystyka', u'http://wolnemedia.net/category/publicystyka/feed/'),(u'Reportaż', u'http://wolnemedia.net/category/reportaz/feed/'),(u'Seks', u'http://wolnemedia.net/category/seks/feed/'),(u'Społeczeństwo', u'http://wolnemedia.net/category/spoleczenstwo/feed/'),(u'Świat komputerów', u'http://wolnemedia.net/category/swiat-komputerow/feed/'),(u'Wierzenia', u'http://wolnemedia.net/category/wierzenia/feed/'),(u'Zdrowie', u'http://wolnemedia.net/category/zdrowie/feed/')]

View File

@@ -1,10 +1,9 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__copyright__ = '2010, matek09, matek09@gmail.com'
__copyright__ = 'Modified 2011, Mariusz Wolek <mariusz_dot_wolek @ gmail dot com>'
__copyright__ = 'Modified 2012, Artur Stachecki <artur.stachecki@gmail.com>'
__copyright__ = '''2010, matek09, matek09@gmail.com
Modified 2011, Mariusz Wolek <mariusz_dot_wolek @ gmail dot com>
Modified 2012, Artur Stachecki <artur.stachecki@gmail.com>'''
from calibre.web.feeds.news import BasicNewsRecipe
import re
@@ -16,12 +15,12 @@ class Wprost(BasicNewsRecipe):
ICO_BLOCKED = 'http://www.wprost.pl/G/layout2/ico_blocked.png'
title = u'Wprost'
__author__ = 'matek09'
description = 'Weekly magazine'
description = u'Popularny tygodnik ogólnopolski - Wprost. Najlepszy wśród polskich tygodników - opiniotwórczy - społeczno-informacyjny - społeczno-kulturalny.'
encoding = 'ISO-8859-2'
no_stylesheets = True
language = 'pl'
remove_javascript = True
recursions = 0
recursions = 0
remove_tags_before = dict(dict(name = 'div', attrs = {'id' : 'print-layer'}))
remove_tags_after = dict(dict(name = 'div', attrs = {'id' : 'print-layer'}))
'''
@@ -94,5 +93,3 @@ class Wprost(BasicNewsRecipe):
'description' : ''
})
return articles

View File

@@ -1,10 +1,9 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__copyright__ = '2010, matek09, matek09@gmail.com'
__copyright__ = 'Modified 2011, Mariusz Wolek <mariusz_dot_wolek @ gmail dot com>'
__copyright__ = 'Modified 2012, Artur Stachecki <artur.stachecki@gmail.com>'
#!/usr/bin/env python
__license__ = 'GPL v3'
__copyright__ = '''2010, matek09, matek09@gmail.com
Modified 2011, Mariusz Wolek <mariusz_dot_wolek @ gmail dot com>
Modified 2012, Artur Stachecki <artur.stachecki@gmail.com>'''
from calibre.web.feeds.news import BasicNewsRecipe
import re
@@ -12,13 +11,14 @@ import re
class Wprost(BasicNewsRecipe):
title = u'Wprost (RSS)'
__author__ = 'matek09'
description = 'Weekly magazine'
description = u'Portal informacyjny. Najświeższe wiadomości, najciekawsze komentarze i opinie. Blogi najlepszych publicystów.'
encoding = 'ISO-8859-2'
no_stylesheets = True
language = 'pl'
remove_javascript = True
recursions = 0
use_embedded_content = False
ignore_duplicate_articles = {'title', 'url'}
remove_empty_feeds = True
remove_tags_before = dict(dict(name = 'div', attrs = {'id' : 'print-layer'}))
remove_tags_after = dict(dict(name = 'div', attrs = {'id' : 'print-layer'}))
@@ -48,20 +48,20 @@ class Wprost(BasicNewsRecipe):
#h2 {font-size: x-large; font-weight: bold}
feeds = [(u'Tylko u nas', u'http://www.wprost.pl/rss/rss_wprostextra.php'),
(u'Wydarzenia', u'http://www.wprost.pl/rss/rss.php'),
(u'Komentarze', u'http://www.wprost.pl/rss/rss_komentarze.php'),
(u'Wydarzenia: Kraj', u'http://www.wprost.pl/rss/rss_kraj.php'),
(u'Komentarze: Kraj', u'http://www.wprost.pl/rss/rss_komentarze_kraj.php'),
(u'Wydarzenia: Świat', u'http://www.wprost.pl/rss/rss_swiat.php'),
(u'Komentarze: Świat', u'http://www.wprost.pl/rss/rss_komentarze_swiat.php'),
(u'Wydarzenia: Gospodarka', u'http://www.wprost.pl/rss/rss_gospodarka.php'),
(u'Komentarze: Gospodarka', u'http://www.wprost.pl/rss/rss_komentarze_gospodarka.php'),
(u'Wydarzenia: Życie', u'http://www.wprost.pl/rss/rss_zycie.php'),
(u'Komentarze: Życie', u'http://www.wprost.pl/rss/rss_komentarze_zycie.php'),
(u'Wydarzenia: Sport', u'http://www.wprost.pl/rss/rss_sport.php'),
(u'Komentarze: Sport', u'http://www.wprost.pl/rss/rss_komentarze_sport.php'),
(u'Przegląd prasy', u'http://www.wprost.pl/rss/rss_prasa.php')
]
(u'Wydarzenia', u'http://www.wprost.pl/rss/rss.php'),
(u'Komentarze', u'http://www.wprost.pl/rss/rss_komentarze.php'),
(u'Wydarzenia: Kraj', u'http://www.wprost.pl/rss/rss_kraj.php'),
(u'Komentarze: Kraj', u'http://www.wprost.pl/rss/rss_komentarze_kraj.php'),
(u'Wydarzenia: Świat', u'http://www.wprost.pl/rss/rss_swiat.php'),
(u'Komentarze: Świat', u'http://www.wprost.pl/rss/rss_komentarze_swiat.php'),
(u'Wydarzenia: Gospodarka', u'http://www.wprost.pl/rss/rss_gospodarka.php'),
(u'Komentarze: Gospodarka', u'http://www.wprost.pl/rss/rss_komentarze_gospodarka.php'),
(u'Wydarzenia: Życie', u'http://www.wprost.pl/rss/rss_zycie.php'),
(u'Komentarze: Życie', u'http://www.wprost.pl/rss/rss_komentarze_zycie.php'),
(u'Wydarzenia: Sport', u'http://www.wprost.pl/rss/rss_sport.php'),
(u'Komentarze: Sport', u'http://www.wprost.pl/rss/rss_komentarze_sport.php'),
(u'Przegląd prasy', u'http://www.wprost.pl/rss/rss_prasa.php')
]
def get_cover_url(self):
soup = self.index_to_soup('http://www.wprost.pl/tygodnik')

View File

@@ -1,144 +0,0 @@
#!/usr/bin/env python
from calibre.web.feeds.recipes import BasicNewsRecipe
class GazetaWyborczaDuzyForma(BasicNewsRecipe):
cover_url = 'http://bi.gazeta.pl/im/8/5415/m5415058.gif'
title = u"Gazeta Wyborcza Duzy Format"
__author__ = 'ravcio - rlelusz[at]gmail.com'
description = u"Articles from Gazeta's website"
language = 'pl'
max_articles_per_feed = 50 # you can increase it even up to maybe 600, should still work
recursions = 0
encoding = 'iso-8859-2'
no_stylesheets = True
remove_javascript = True
use_embedded_content = False
keep_only_tags = [
dict(name='div', attrs={'id':['k1']})
]
remove_tags = [
dict(name='div', attrs={'class':['zdjM', 'rel_video', 'zdjP', 'rel_box', 'index mod_zi_dolStrony']})
,dict(name='div', attrs={'id':['source', 'banP4', 'article_toolbar', 'rel', 'inContext_disabled']})
,dict(name='ul', attrs={'id':['articleToolbar']})
,dict(name='img', attrs={'class':['brand']})
,dict(name='h5', attrs={'class':['author']})
,dict(name='h6', attrs={'class':['date']})
,dict(name='p', attrs={'class':['txt_upl']})
]
remove_tags_after = [
dict(name='div', attrs={'id':['Str']}) # page number navigator
]
def load_article_links(self, url, count):
print '--- load_article_links', url, count
#page with link to articles
soup = self.index_to_soup(url)
#table with articles
list = soup.find('div', attrs={'class':'GWdalt'})
#single articles (link, title, ...)
links = list.findAll('div', attrs={'class':['GWdaltE']})
if len(links) < count:
#load links to more articles...
#remove new link
pages_nav = list.find('div', attrs={'class':'pages'})
next = pages_nav.find('a', attrs={'class':'next'})
if next:
print 'next=', next['href']
url = 'http://wyborcza.pl' + next['href']
#e.g. url = 'http://wyborcza.pl/0,75480.html?str=2'
older_links = self.load_article_links(url, count - len(links))
links.extend(older_links)
return links
#produce list of articles to download
def parse_index(self):
print '--- parse_index'
max_articles = 8000
links = self.load_article_links('http://wyborcza.pl/0,75480.html', max_articles)
ans = []
key = None
articles = {}
key = 'Uncategorized'
articles[key] = []
for div_art in links:
div_date = div_art.find('div', attrs={'class':'kL'})
div = div_art.find('div', attrs={'class':'kR'})
a = div.find('a', href=True)
url = a['href']
title = a.string
description = ''
pubdate = div_date.string.rstrip().lstrip()
summary = div.find('span', attrs={'class':'lead'})
desc = summary.find('a', href=True)
if desc:
desc.extract()
description = self.tag_to_string(summary, use_alt=False)
description = description.rstrip().lstrip()
feed = key if key is not None else 'Duzy Format'
if not articles.has_key(feed):
articles[feed] = []
if description != '': # skip picture-only articles
articles[feed].append(
dict(title=title, url=url, date=pubdate,
description=description,
content=''))
ans = [(key, articles[key])]
return ans
def append_page(self, soup, appendtag, position):
pager = soup.find('div',attrs={'id':'Str'})
if pager:
# look for an 'a' element whose text contains 'nast' (next); if not found, exit
list = pager.findAll('a')
for elem in list:
if 'nast' in elem.string:
nexturl = elem['href']
soup2 = self.index_to_soup('http://warszawa.gazeta.pl' + nexturl)
texttag = soup2.find('div', attrs={'id':'artykul'})
newpos = len(texttag.contents)
self.append_page(soup2,texttag,newpos)
texttag.extract()
appendtag.insert(position,texttag)
def preprocess_html(self, soup):
self.append_page(soup, soup.body, 3)
# finally remove some tags
pager = soup.find('div',attrs={'id':'Str'})
if pager:
pager.extract()
pager = soup.find('div',attrs={'class':'tylko_int'})
if pager:
pager.extract()
return soup

View File

@@ -0,0 +1,57 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
from calibre.web.feeds.news import BasicNewsRecipe
class WysokieObcasyRecipe(BasicNewsRecipe):
__author__ = u'Artur Stachecki <artur.stachecki@gmail.com>'
language = 'pl'
version = 1
title = u'Wysokie Obcasy'
publisher = 'Agora SA'
description = u'Serwis sobotniego dodatku do Gazety Wyborczej'
category='magazine'
language = 'pl'
publication_type = 'magazine'
cover_url=''
remove_empty_feeds= True
no_stylesheets=True
oldest_article = 7
max_articles_per_feed = 100000
recursions = 0
no_stylesheets = True
remove_javascript = True
simultaneous_downloads = 5
keep_only_tags =[]
keep_only_tags.append(dict(name = 'div', attrs = {'id' : 'article'}))
remove_tags =[]
remove_tags.append(dict(name = 'img'))
remove_tags.append(dict(name = 'p', attrs = {'class' : 'info'}))
extra_css = '''
body {font-family: verdana, arial, helvetica, geneva, sans-serif ;}
h1{text-align: left;}
'''
feeds = [
('Wszystkie Artykuly', 'feed://www.wysokieobcasy.pl/pub/rss/wysokieobcasy.xml'),
]
def print_version(self,url):
baseURL='http://www.wysokieobcasy.pl/wysokie-obcasy'
segments = url.split(',')
subPath= '/2029020,'
articleURL1 = segments[1]
articleURL2 = segments[2]
printVerString=articleURL1 + ',' + articleURL2
s= baseURL + subPath + printVerString + '.html'
return s
def get_cover_url(self):
soup = self.index_to_soup('http://www.wysokieobcasy.pl/wysokie-obcasy/0,0.html')
self.cover_url = soup.find(attrs={'class':'holder_cr'}).find('img')['src']
return getattr(self, 'cover_url', self.cover_url)

Binary file not shown.

View File

@@ -79,7 +79,7 @@ author_name_copywords = ('Corporation', 'Company', 'Co.', 'Agency', 'Council',
# By default, calibre splits a string containing multiple author names on
# ampersands and the words "and" and "with". You can customize the splitting
# by changing the regular expression below. Strings are split on whatever the
# specified regular expression matches.
# specified regular expression matches, in addition to ampersands.
# Default: r'(?i),?\s+(and|with)\s+'
authors_split_regex = r'(?i),?\s+(and|with)\s+'
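For instance, with this default pattern a two-author string splits as follows. Note that re.split also returns the captured connector word, which has to be filtered out; this snippet only demonstrates the pattern itself, not calibre's internal splitting code:

    import re
    pat = r'(?i),?\s+(and|with)\s+'
    parts = [p for p in re.split(pat, 'John Doe and Jane Roe')
             if p and p.lower() not in ('and', 'with')]
    # parts == ['John Doe', 'Jane Roe']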

View File

@@ -357,7 +357,7 @@
<xsl:apply-templates/>
</xsl:template>
<xsl:template match="rtf:table">
<xsl:template match="rtf:table">
<xsl:element name="table">
<xsl:attribute name="id">
<xsl:value-of select="generate-id(.)"/>
@@ -390,7 +390,6 @@
<xsl:output method = "xml"/>
<xsl:key name="style-types" match="rtf:paragraph-definition" use="@style-number"/>
@@ -415,13 +414,11 @@
</xsl:template>
<xsl:template match="rtf:page-break">
<xsl:element name="br">
<xsl:attribute name="style">page-break-after:always</xsl:attribute>
</xsl:element>
<br style = "page-break-after:always"/>
</xsl:template>
<xsl:template match="rtf:hardline-break">
<xsl:element name="br"/>
<br/>
</xsl:template>
<xsl:template match="rtf:rtf-definition|rtf:font-table|rtf:color-table|rtf:style-table|rtf:page-definition|rtf:list-table|rtf:override-table|rtf:override-list|rtf:list-text"/>
@@ -445,7 +442,7 @@
</xsl:template>
<xsl:template match = "rtf:field-block">
<xsl:apply-templates/>
<xsl:apply-templates/>
</xsl:template>
<xsl:template match = "rtf:field[@type='hyperlink']">
@@ -472,9 +469,7 @@
</xsl:template>
<xsl:template match="rtf:pict">
<xsl:element name="img">
<xsl:attribute name="src"><xsl:value-of select="@num" /></xsl:attribute>
</xsl:element>
<img src = "{@num}"/>
</xsl:template>
<xsl:template match="*">

View File

@@ -47,6 +47,10 @@ binary_includes = [
'/usr/lib/libgthread-2.0.so.0',
'/usr/lib/libpng14.so.14',
'/usr/lib/libexslt.so.0',
# Ensure that libimobiledevice is compiled against openssl, not gnutls
'/usr/lib/libimobiledevice.so.3',
'/usr/lib/libusbmuxd.so.2',
'/usr/lib/libplist.so.1',
MAGICK_PREFIX+'/lib/libMagickWand.so.5',
MAGICK_PREFIX+'/lib/libMagickCore.so.5',
'/usr/lib/libgcrypt.so.11',

View File

@@ -399,7 +399,8 @@ class Py2App(object):
@flush
def add_fontconfig(self):
info('\nAdding fontconfig')
for x in ('fontconfig.1', 'freetype.6', 'expat.1'):
for x in ('fontconfig.1', 'freetype.6', 'expat.1',
'plist.1', 'usbmuxd.2', 'imobiledevice.3'):
src = os.path.join(SW, 'lib', 'lib'+x+'.dylib')
self.install_dylib(src)
dst = os.path.join(self.resources_dir, 'fonts')

View File

@@ -12,13 +12,13 @@ msgstr ""
"Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
"devel@lists.alioth.debian.org>\n"
"POT-Creation-Date: 2011-11-25 14:01+0000\n"
"PO-Revision-Date: 2013-03-27 13:07+0000\n"
"PO-Revision-Date: 2013-03-28 13:01+0000\n"
"Last-Translator: Ferran Rius <frius64@hotmail.com>\n"
"Language-Team: Catalan <linux@softcatala.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"X-Launchpad-Export-Date: 2013-03-28 04:41+0000\n"
"X-Launchpad-Export-Date: 2013-03-29 04:36+0000\n"
"X-Generator: Launchpad (build 16546)\n"
"Language: ca\n"
@@ -1884,7 +1884,7 @@ msgstr "Awera"
#. name for aws
msgid "Awyu; South"
msgstr "Awyu meridional"
msgstr "Awyu; meridional"
#. name for awt
msgid "Araweté"
@@ -1892,7 +1892,7 @@ msgstr "Araweté"
#. name for awu
msgid "Awyu; Central"
msgstr "Awyu central"
msgstr "Awyu; Central"
#. name for awv
msgid "Awyu; Jair"
@@ -4052,7 +4052,7 @@ msgstr "Buginès"
#. name for buh
msgid "Bunu; Younuo"
msgstr "Bunu; Younuo"
msgstr "Bunu; Younou"
#. name for bui
msgid "Bongili"
@@ -4308,7 +4308,7 @@ msgstr "Bwa"
#. name for bwx
msgid "Bunu; Bu-Nao"
msgstr "Bunu; Bu-Nao"
msgstr "Bunu; Bu Nao"
#. name for bwy
msgid "Bwamu; Cwi"
@@ -19804,7 +19804,7 @@ msgstr "Minoà"
#. name for omo
msgid "Utarmbung"
msgstr ""
msgstr "Utarmbung"
#. name for omp
msgid "Manipuri; Old"
@@ -20344,7 +20344,7 @@ msgstr "Pear"
#. name for pcc
msgid "Bouyei"
msgstr ""
msgstr "Buyí"
#. name for pcd
msgid "Picard"
@@ -20456,11 +20456,11 @@ msgstr "Pengo"
#. name for peh
msgid "Bonan"
msgstr ""
msgstr "Bonan"
#. name for pei
msgid "Chichimeca-Jonaz"
msgstr ""
msgstr "Chichimec"
#. name for pej
msgid "Pomo; Northern"
@@ -20484,7 +20484,7 @@ msgstr "Persa Antic"
#. name for pep
msgid "Kunja"
msgstr ""
msgstr "Kunja"
#. name for peq
msgid "Pomo; Southern"
@@ -20536,7 +20536,7 @@ msgstr "Pagi"
#. name for pgk
msgid "Rerep"
msgstr ""
msgstr "Rerep"
#. name for pgl
msgid "Irish; Primitive"
@@ -20624,7 +20624,7 @@ msgstr "Pima Baix"
#. name for pib
msgid "Yine"
msgstr ""
msgstr "Yine"
#. name for pic
msgid "Pinji"
@@ -20660,7 +20660,7 @@ msgstr "Pijao"
#. name for pil
msgid "Yom"
msgstr ""
msgstr "Yom"
#. name for pim
msgid "Powhatan"
@@ -20760,7 +20760,7 @@ msgstr "Llenguatge de signes pakistaní"
#. name for pkt
msgid "Maleng"
msgstr ""
msgstr "Maleng"
#. name for pku
msgid "Paku"
@@ -20768,7 +20768,7 @@ msgstr "Paku"
#. name for pla
msgid "Miani"
msgstr ""
msgstr "Miani"
#. name for plb
msgid "Polonombauk"
@@ -20804,7 +20804,7 @@ msgstr "Polci"
#. name for plk
msgid "Shina; Kohistani"
msgstr ""
msgstr "Shina; Kohistani"
#. name for pll
msgid "Palaung; Shwe"
@@ -20852,7 +20852,7 @@ msgstr "Palawà; Brooke"
#. name for ply
msgid "Bolyu"
msgstr ""
msgstr "Bolyu"
#. name for plz
msgid "Paluan"
@@ -20896,7 +20896,7 @@ msgstr "Algonquí Carolina"
#. name for pml
msgid "Lingua Franca"
msgstr ""
msgstr "Aljamia"
#. name for pmm
msgid "Pomo"
@@ -20924,7 +20924,7 @@ msgstr "Piemontès"
#. name for pmt
msgid "Tuamotuan"
msgstr ""
msgstr "Tuamotu"
#. name for pmu
msgid "Panjabi; Mirpur"
@@ -20972,7 +20972,7 @@ msgstr "Penrhyn"
#. name for pni
msgid "Aoheng"
msgstr ""
msgstr "Aoheng"
#. name for pnm
msgid "Punan Batu 1"
@@ -21008,7 +21008,7 @@ msgstr "Pontic"
#. name for pnu
msgid "Bunu; Jiongnai"
msgstr ""
msgstr "Bunu; Jiongnai"
#. name for pnv
msgid "Pinigura"
@@ -21100,7 +21100,7 @@ msgstr "Potavatomi"
#. name for pov
msgid "Crioulo; Upper Guinea"
msgstr ""
msgstr "Crioll guineà"
#. name for pow
msgid "Popoloca; San Felipe Otlaltepec"
@@ -21128,7 +21128,7 @@ msgstr "Paipai"
#. name for ppk
msgid "Uma"
msgstr ""
msgstr "Uma"
#. name for ppl
msgid "Pipil"
@@ -21144,7 +21144,7 @@ msgstr "Papapana"
#. name for ppo
msgid "Folopa"
msgstr ""
msgstr "Folopa"
#. name for ppp
msgid "Pelende"
@@ -21180,7 +21180,7 @@ msgstr "Malecite-Passamaquoddy"
#. name for prb
msgid "Lua'"
msgstr ""
msgstr "Lua"
#. name for prc
msgid "Parachi"
@@ -21220,7 +21220,7 @@ msgstr "Llenguatge de signes peruà"
#. name for prm
msgid "Kibiri"
msgstr ""
msgstr "Kibiri"
#. name for prn
msgid "Prasuni"
@@ -21272,7 +21272,7 @@ msgstr "Llenguatge de signes de Providencia"
#. name for psa
msgid "Awyu; Asue"
msgstr ""
msgstr "Awyu; Asue"
#. name for psc
msgid "Persian Sign Language"
@@ -21328,7 +21328,7 @@ msgstr "Llenguatge de signes portuguès"
#. name for pss
msgid "Kaulong"
msgstr ""
msgstr "Kaulong"
#. name for pst
msgid "Pashto; Central"
@@ -21376,11 +21376,11 @@ msgstr "Pìamatsina"
#. name for ptt
msgid "Enrekang"
msgstr ""
msgstr "Enrekang"
#. name for ptu
msgid "Bambam"
msgstr ""
msgstr "Bambam"
#. name for ptv
msgid "Port Vato"
@@ -29584,7 +29584,7 @@ msgstr ""
#. name for yir
msgid "Awyu; North"
msgstr ""
msgstr "Awyu; Septentrional"
#. name for yis
msgid "Yis"

View File

@@ -376,7 +376,7 @@ def random_user_agent(choose=None):
choose = random.randint(0, len(choices)-1)
return choices[choose]
def browser(honor_time=True, max_time=2, mobile_browser=False, user_agent=None):
def browser(honor_time=True, max_time=2, mobile_browser=False, user_agent=None, use_robust_parser=False):
'''
Create a mechanize browser for web scraping. The browser handles cookies,
refresh requests and ignores robots.txt. Also uses proxy if available.
@@ -385,7 +385,11 @@ def browser(honor_time=True, max_time=2, mobile_browser=False, user_agent=None):
:param max_time: Maximum time in seconds to wait during a refresh request
'''
from calibre.utils.browser import Browser
opener = Browser()
if use_robust_parser:
import mechanize
opener = Browser(factory=mechanize.RobustFactory())
else:
opener = Browser()
opener.set_handle_refresh(True, max_time=max_time, honor_time=honor_time)
opener.set_handle_robots(False)
if user_agent is None:
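A minimal usage sketch for the new use_robust_parser switch shown in this hunk (the URL is a placeholder):

    from calibre import browser
    br = browser(use_robust_parser=True)  # mechanize's RobustFactory copes with malformed HTML
    raw = br.open_novisit('http://example.com').read()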

View File

@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = u'calibre'
numeric_version = (0, 9, 25)
numeric_version = (0, 9, 26)
__version__ = u'.'.join(map(unicode, numeric_version))
__author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"

View File

@@ -757,6 +757,7 @@ from calibre.ebooks.metadata.sources.isbndb import ISBNDB
from calibre.ebooks.metadata.sources.overdrive import OverDrive
from calibre.ebooks.metadata.sources.douban import Douban
from calibre.ebooks.metadata.sources.ozon import Ozon
# from calibre.ebooks.metadata.sources.google_images import GoogleImages
plugins += [GoogleBooks, Amazon, Edelweiss, OpenLibrary, ISBNDB, OverDrive, Douban, Ozon]
@@ -1296,15 +1297,6 @@ class StoreBeamEBooksDEStore(StoreBase):
formats = ['EPUB', 'MOBI', 'PDF']
affiliate = True
class StoreBeWriteStore(StoreBase):
name = 'BeWrite Books'
description = u'Publishers of fine books. Highly selective and editorially driven. Does not offer: books for children or exclusively YA, erotica, swords-and-sorcery fantasy and space-opera-style science fiction. All other genres are represented.'
actual_plugin = 'calibre.gui2.store.stores.bewrite_plugin:BeWriteStore'
drm_free_only = True
headquarters = 'US'
formats = ['EPUB', 'MOBI', 'PDF']
class StoreBiblioStore(StoreBase):
name = u'Библио.бг'
author = 'Alex Stanev'
@@ -1677,7 +1669,6 @@ plugins += [
StoreBaenWebScriptionStore,
StoreBNStore,
StoreBeamEBooksDEStore,
StoreBeWriteStore,
StoreBiblioStore,
StoreBookotekaStore,
StoreChitankaStore,

View File

@@ -91,7 +91,7 @@ def restore_plugin_state_to_default(plugin_or_name):
config['enabled_plugins'] = ep
default_disabled_plugins = set([
'Overdrive', 'Douban Books', 'OZON.ru', 'Edelweiss',
'Overdrive', 'Douban Books', 'OZON.ru', 'Edelweiss', 'Google Images',
])
def is_disabled(plugin):

View File

@@ -7,7 +7,9 @@ __license__ = 'GPL v3'
__copyright__ = '2011, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import weakref
from functools import partial
from itertools import izip, imap
def sanitize_sort_field_name(field_metadata, field):
field = field_metadata.search_term_to_field_key(field.lower().strip())
@@ -15,11 +17,39 @@ def sanitize_sort_field_name(field_metadata, field):
field = {'title': 'sort', 'authors':'author_sort'}.get(field, field)
return field
class MarkedVirtualField(object):
def __init__(self, marked_ids):
self.marked_ids = marked_ids
def iter_searchable_values(self, get_metadata, candidates, default_value=None):
for book_id in candidates:
yield self.marked_ids.get(book_id, default_value), {book_id}
class TableRow(list):
def __init__(self, book_id, view):
self.book_id = book_id
self.view = weakref.ref(view)
def __getitem__(self, obj):
view = self.view()
if isinstance(obj, slice):
return [view._field_getters[c](self.book_id)
for c in xrange(*obj.indices(len(view._field_getters)))]
else:
return view._field_getters[obj](self.book_id)
class View(object):
''' A table view of the database, with rows and columns. Also supports
filtering and sorting. '''
def __init__(self, cache):
self.cache = cache
self.marked_ids = {}
self.search_restriction_book_count = 0
self.search_restriction = ''
self._field_getters = {}
for col, idx in cache.backend.FIELD_MAP.iteritems():
if isinstance(col, int):
@@ -38,16 +68,33 @@ class View(object):
except KeyError:
self._field_getters[idx] = partial(self.get, col)
self._map = list(self.cache.all_book_ids())
self._map_filtered = list(self._map)
self._map = tuple(self.cache.all_book_ids())
self._map_filtered = tuple(self._map)
@property
def field_metadata(self):
return self.cache.field_metadata
def _get_id(self, idx, index_is_id=True):
ans = idx if index_is_id else self.index_to_id(idx)
return ans
return idx if index_is_id else self.index_to_id(idx)
def __getitem__(self, row):
return TableRow(self._map_filtered[row], self.cache)
def __len__(self):
return len(self._map_filtered)
def __iter__(self):
for book_id in self._map_filtered:
yield self._data[book_id]
def iterall(self):
for book_id in self._map:
yield self[book_id]
def iterallids(self):
for book_id in self._map:
yield book_id
def get_field_map_field(self, row, col, index_is_id=True):
'''
@@ -66,7 +113,7 @@ class View(object):
def get_ondevice(self, idx, index_is_id=True, default_value=''):
id_ = idx if index_is_id else self.index_to_id(idx)
self.cache.field_for('ondevice', id_, default_value=default_value)
return self.cache.field_for('ondevice', id_, default_value=default_value)
def get_marked(self, idx, index_is_id=True, default_value=None):
id_ = idx if index_is_id else self.index_to_id(idx)
@@ -93,7 +140,7 @@ class View(object):
ans.append(self.cache._author_data(id_))
return tuple(ans)
def multisort(self, fields=[], subsort=False):
def multisort(self, fields=[], subsort=False, only_ids=None):
fields = [(sanitize_sort_field_name(self.field_metadata, x), bool(y)) for x, y in fields]
keys = self.field_metadata.sortable_field_keys()
fields = [x for x in fields if x[0] in keys]
@@ -102,8 +149,70 @@ class View(object):
if not fields:
fields = [('timestamp', False)]
sorted_book_ids = self.cache.multisort(fields)
sorted_book_ids
# TODO: change maps
sorted_book_ids = self.cache.multisort(fields, ids_to_sort=only_ids)
if only_ids is None:
self._map = tuple(sorted_book_ids)
if len(self._map_filtered) == len(self._map):
self._map_filtered = tuple(self._map)
else:
fids = frozenset(self._map_filtered)
self._map_filtered = tuple(i for i in self._map if i in fids)
else:
smap = {book_id:i for i, book_id in enumerate(sorted_book_ids)}
only_ids.sort(key=smap.get)
def search(self, query, return_matches=False):
ans = self.search_getting_ids(query, self.search_restriction,
set_restriction_count=True)
if return_matches:
return ans
self._map_filtered = tuple(ans)
def search_getting_ids(self, query, search_restriction,
set_restriction_count=False):
q = ''
if not query or not query.strip():
q = search_restriction
else:
q = query
if search_restriction:
q = u'(%s) and (%s)' % (search_restriction, query)
if not q:
if set_restriction_count:
self.search_restriction_book_count = len(self._map)
return list(self._map)
matches = self.cache.search(
query, search_restriction, virtual_fields={'marked':MarkedVirtualField(self.marked_ids)})
rv = [x for x in self._map if x in matches]
if set_restriction_count and q == search_restriction:
self.search_restriction_book_count = len(rv)
return rv
def set_search_restriction(self, s):
self.search_restriction = s
def search_restriction_applied(self):
return bool(self.search_restriction)
def get_search_restriction_book_count(self):
return self.search_restriction_book_count
def set_marked_ids(self, id_dict):
'''
ids in id_dict are "marked". They can be searched for by
using the search term ``marked:true``. Pass in an empty dictionary or
set to clear marked ids.
:param id_dict: Either a dictionary mapping ids to values or a set
of ids. In the latter case, the value is set to 'true' for all ids. If
a mapping is provided, then the search can be used to search for
particular values: ``marked:value``
'''
if not hasattr(id_dict, 'items'):
# Simple list. Make it a dict of string 'true'
self.marked_ids = dict.fromkeys(id_dict, u'true')
else:
# Ensure that all the items in the dict are text
self.marked_ids = dict(izip(id_dict.iterkeys(), imap(unicode,
id_dict.itervalues())))
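A short usage sketch of the marked-ids machinery added above, assuming view is an instance of this View class:

    view.set_marked_ids({1, 2, 3})                # all three books now match marked:true
    view.set_marked_ids({1: 'read', 2: 'later'})  # books match marked:read / marked:later
    view.search('marked:read')                    # filters the view down to book id 1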

View File

@@ -239,7 +239,7 @@ class ANDROID(USBMS):
'ADVANCED', 'SGH-I727', 'USB_FLASH_DRIVER', 'ANDROID',
'S5830I_CARD', 'MID7042', 'LINK-CREATE', '7035', 'VIEWPAD_7E',
'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS',
'ICS', 'E400', '__FILE-STOR_GADG', 'ST80208-1']
'ICS', 'E400', '__FILE-STOR_GADG', 'ST80208-1', 'GT-S5660M_CARD']
WINDOWS_CARD_A_MEM = ['ANDROID_PHONE', 'GT-I9000_CARD', 'SGH-I897',
'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID', 'GT-P1000_CARD',
'A70S', 'A101IT', '7', 'INCREDIBLE', 'A7EB', 'SGH-T849_CARD',

View File

@@ -24,11 +24,11 @@ class PALMPRE(USBMS):
FORMATS = ['epub', 'mobi', 'prc', 'pdb', 'txt']
VENDOR_ID = [0x0830]
PRODUCT_ID = [0x8004, 0x8002, 0x0101]
PRODUCT_ID = [0x8004, 0x8002, 0x0101, 0x8042]
BCD = [0x0316]
VENDOR_NAME = 'PALM'
WINDOWS_MAIN_MEM = 'PRE'
WINDOWS_MAIN_MEM = ['PRE', 'PALM_DEVICE']
EBOOK_DIR_MAIN = 'E-books'

View File

@@ -82,6 +82,7 @@ class NOOK(USBMS):
return [x.replace('#', '_') for x in components]
class NOOK_COLOR(NOOK):
name = 'Nook Color Device Interface'
description = _('Communicate with the Nook Color, TSR and Tablet eBook readers.')
PRODUCT_ID = [0x002, 0x003, 0x004]

View File

@@ -104,13 +104,11 @@ class PDFOutput(OutputFormatPlugin):
'specify a footer template, it will take precedence '
'over this option.')),
OptionRecommendation(name='pdf_footer_template', recommended_value=None,
help=_('An HTML template used to generate footers on every page.'
' The string _PAGENUM_ will be replaced by the current page'
' number.')),
help=_('An HTML template used to generate %s on every page.'
' The strings _PAGENUM_, _TITLE_, _AUTHOR_ and _SECTION_ will be replaced by their current values.')%_('footers')),
OptionRecommendation(name='pdf_header_template', recommended_value=None,
help=_('An HTML template used to generate headers on every page.'
' The string _PAGENUM_ will be replaced by the current page'
' number.')),
help=_('An HTML template used to generate %s on every page.'
' The strings _PAGENUM_, _TITLE_, _AUTHOR_ and _SECTION_ will be replaced by their current values.')%_('headers')),
])
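As an illustration, a footer template could be set to something like the following; the _TITLE_, _AUTHOR_ and _PAGENUM_ placeholders come from the help text above, while the surrounding markup is arbitrary:

    # hypothetical value for the pdf_footer_template option
    pdf_footer_template = ('<p style="text-align:center; font-size:smaller">'
                           '_TITLE_ by _AUTHOR_, page _PAGENUM_</p>')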
def convert(self, oeb_book, output_path, input_plugin, opts, log):

View File

@@ -858,7 +858,7 @@ class Amazon(Source):
# }}}
def download_cover(self, log, result_queue, abort, # {{{
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
cached_url = self.get_cached_cover_url(identifiers)
if cached_url is None:
log.info('No cached cover found, running identify')

View File

@@ -31,7 +31,7 @@ msprefs.defaults['find_first_edition_date'] = False
# Google covers are often poor quality (scans/errors) but they have high
# resolution, so they trump covers from better sources. So make sure they
# are only used if no other covers are found.
msprefs.defaults['cover_priorities'] = {'Google':2}
msprefs.defaults['cover_priorities'] = {'Google':2, 'Google Images':2}
def create_log(ostream=None):
from calibre.utils.logging import ThreadSafeLog, FileStream
@@ -222,6 +222,9 @@ class Source(Plugin):
#: plugin
config_help_message = None
#: If True this source can return multiple covers for a given query
can_get_multiple_covers = False
def __init__(self, *args, **kwargs):
Plugin.__init__(self, *args, **kwargs)
@@ -522,7 +525,7 @@ class Source(Plugin):
return None
def download_cover(self, log, result_queue, abort,
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
'''
Download a cover and put it into result_queue. The parameters all have
the same meaning as for :meth:`identify`. Put (self, cover_data) into
@@ -531,6 +534,9 @@ class Source(Plugin):
This method should use cached cover URLs for efficiency whenever
possible. When cached data is not present, most plugins simply call
identify and use its results.
If the parameter get_best_cover is True and this plugin can get
multiple covers, it should only get the "best" one.
'''
pass
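For plugin authors, a minimal sketch of a cover source honouring the new get_best_cover flag; the class name and URLs are invented for illustration, this is not a real plugin:

    from calibre.ebooks.metadata.sources.base import Source

    class ExampleCovers(Source):
        name = 'Example Covers'
        capabilities = frozenset(['cover'])
        can_get_multiple_covers = True

        def download_cover(self, log, result_queue, abort, title=None, authors=None,
                           identifiers={}, timeout=30, get_best_cover=False):
            urls = ['http://example.com/a.jpg', 'http://example.com/b.jpg']  # stand-ins
            if get_best_cover:
                urls = urls[:1]  # the caller wants only the single "best" cover
            for url in urls:
                if abort.is_set():
                    return
                result_queue.put((self, self.browser.open_novisit(url, timeout=timeout).read()))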

View File

@@ -35,9 +35,14 @@ class Worker(Thread):
start_time = time.time()
if not self.abort.is_set():
try:
self.plugin.download_cover(self.log, self.rq, self.abort,
title=self.title, authors=self.authors,
identifiers=self.identifiers, timeout=self.timeout)
if self.plugin.can_get_multiple_covers:
self.plugin.download_cover(self.log, self.rq, self.abort,
title=self.title, authors=self.authors, get_best_cover=True,
identifiers=self.identifiers, timeout=self.timeout)
else:
self.plugin.download_cover(self.log, self.rq, self.abort,
title=self.title, authors=self.authors,
identifiers=self.identifiers, timeout=self.timeout)
except:
self.log.exception('Failed to download cover from',
self.plugin.name)

View File

@@ -221,7 +221,7 @@ class Douban(Source):
# }}}
def download_cover(self, log, result_queue, abort, # {{{
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
cached_url = self.get_cached_cover_url(identifiers)
if cached_url is None:
log.info('No cached cover found, running identify')

View File

@@ -320,7 +320,7 @@ class Edelweiss(Source):
# }}}
def download_cover(self, log, result_queue, abort, # {{{
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
cached_url = self.get_cached_cover_url(identifiers)
if cached_url is None:
log.info('No cached cover found, running identify')

View File

@@ -209,7 +209,7 @@ class GoogleBooks(Source):
# }}}
def download_cover(self, log, result_queue, abort, # {{{
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
cached_url = self.get_cached_cover_url(identifiers)
if cached_url is None:
log.info('No cached cover found, running identify')

View File

@@ -0,0 +1,148 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8
from __future__ import (unicode_literals, division, absolute_import,
print_function)
__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
from collections import OrderedDict
from calibre import as_unicode
from calibre.ebooks.metadata.sources.base import Source, Option
class GoogleImages(Source):
name = 'Google Images'
description = _('Downloads covers from a Google Image search. Useful to find larger/alternate covers.')
capabilities = frozenset(['cover'])
config_help_message = _('Configure the Google Image Search plugin')
can_get_multiple_covers = True
options = (Option('max_covers', 'number', 5, _('Maximum number of covers to get'),
_('The maximum number of covers to process from the google search result')),
Option('size', 'choices', 'svga', _('Cover size'),
_('Search for covers larger than the specified size'),
choices=OrderedDict((
('any', _('Any size'),),
('l', _('Large'),),
('qsvga', _('Larger than %s')%'400x300',),
('vga', _('Larger than %s')%'640x480',),
('svga', _('Larger than %s')%'800x600',),
('xga', _('Larger than %s')%'1024x768',),
('2mp', _('Larger than %s')%'2 MP',),
('4mp', _('Larger than %s')%'4 MP',),
))),
)
def download_cover(self, log, result_queue, abort,
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
if not title:
return
from threading import Thread
import time
timeout = max(60, timeout) # Needs at least a minute
title = ' '.join(self.get_title_tokens(title))
author = ' '.join(self.get_author_tokens(authors))
urls = self.get_image_urls(title, author, log, abort, timeout)
if not urls:
log('No images found in Google for title: %r and authors: %r'%(title, author))
return
urls = urls[:self.prefs['max_covers']]
if get_best_cover:
urls = urls[:1]
workers = [Thread(target=self.download_image, args=(url, timeout, log, result_queue)) for url in urls]
for w in workers:
w.daemon = True
w.start()
alive = True
start_time = time.time()
while alive and not abort.is_set() and time.time() - start_time < timeout:
alive = False
for w in workers:
if w.is_alive():
alive = True
break
abort.wait(0.1)
def download_image(self, url, timeout, log, result_queue):
try:
ans = self.browser.open_novisit(url, timeout=timeout).read()
result_queue.put((self, ans))
log('Downloaded cover from: %s'%url)
except Exception:
log.exception('Failed to download cover from: %r'%url)
def get_image_urls(self, title, author, log, abort, timeout):
from calibre.utils.ipc.simple_worker import fork_job, WorkerError
try:
return fork_job('calibre.ebooks.metadata.sources.google_images',
'search', args=(title, author, self.prefs['size'], timeout), no_output=True, abort=abort, timeout=timeout)['result']
except WorkerError as e:
if e.orig_tb:
log.error(e.orig_tb)
log.exception('Searching google failed:' + as_unicode(e))
except Exception as e:
log.exception('Searching google failed:' + as_unicode(e))
return []
USER_AGENT = 'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101210 Firefox/3.6.13'
def find_image_urls(br, ans):
import urlparse
for w in br.page.mainFrame().documentElement().findAll('.images_table a[href]'):
try:
imgurl = urlparse.parse_qs(urlparse.urlparse(unicode(w.attribute('href'))).query)['imgurl'][0]
except:
continue
if imgurl not in ans:
ans.append(imgurl)
def search(title, author, size, timeout, debug=False):
import time
from calibre.web.jsbrowser.browser import Browser, LoadWatcher, Timeout
ans = []
start_time = time.time()
br = Browser(user_agent=USER_AGENT, enable_developer_tools=debug)
br.visit('https://www.google.com/advanced_image_search')
f = br.select_form('form[action="/search"]')
f['as_q'] = '%s %s'%(title, author)
if size != 'any':
f['imgsz'] = size
f['imgar'] = 't|xt'
f['as_filetype'] = 'jpg'
br.submit(wait_for_load=False)
# Loop until the page finishes loading or at least five image urls are
# found
lw = LoadWatcher(br.page, br)
while lw.is_loading and len(ans) < 5:
br.run_for_a_time(0.2)
find_image_urls(br, ans)
if time.time() - start_time > timeout:
raise Timeout('Timed out trying to load google image search page')
find_image_urls(br, ans)
if debug:
br.show_browser()
br.close()
del br # Needed to prevent PyQt from segfaulting
return ans
def test_google():
import pprint
pprint.pprint(search('heroes', 'abercrombie', 'svga', 60, debug=True))
def test():
from Queue import Queue
from threading import Event
from calibre.utils.logging import default_log
p = GoogleImages(None)
rq = Queue()
p.download_cover(default_log, rq, Event(), title='The Heroes',
authors=('Joe Abercrombie',))
print ('Downloaded', rq.qsize(), 'covers')
if __name__ == '__main__':
test()

View File

@@ -19,7 +19,7 @@ class OpenLibrary(Source):
OPENLIBRARY = 'http://covers.openlibrary.org/b/isbn/%s-L.jpg?default=false'
def download_cover(self, log, result_queue, abort,
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
if 'isbn' not in identifiers:
return
isbn = identifiers['isbn']

View File

@@ -75,7 +75,7 @@ class OverDrive(Source):
# }}}
def download_cover(self, log, result_queue, abort, # {{{
title=None, authors=None, identifiers={}, timeout=30):
title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False):
import mechanize
cached_url = self.get_cached_cover_url(identifiers)
if cached_url is None:

View File

@@ -55,7 +55,7 @@ class Ozon(Source):
# for ozon.ru search we have to format ISBN with '-'
isbn = _format_isbn(log, identifiers.get('isbn', None))
ozonid = identifiers.get('ozon', None)
unk = unicode(_('Unknown')).upper()
if (title and title != unk) or (authors and authors != [unk]) or isbn or not ozonid:
qItems = set([isbn, title])
@@ -64,19 +64,19 @@
qItems.discard(None)
qItems.discard('')
qItems = map(_quoteString, qItems)
q = u' '.join(qItems).strip()
log.info(u'search string: ' + q)
if isinstance(q, unicode):
q = q.encode('utf-8')
if not q:
return None
search_url += quote_plus(q)
else:
search_url = self.ozon_url + '/webservices/OzonWebSvc.asmx/ItemDetail?ID=%s' % ozonid
log.debug(u'search url: %r'%search_url)
return search_url
# }}}
@@ -250,7 +250,7 @@
return url
# }}}
def download_cover(self, log, result_queue, abort, title=None, authors=None, identifiers={}, timeout=30): # {{{
def download_cover(self, log, result_queue, abort, title=None, authors=None, identifiers={}, timeout=30, get_best_cover=False): # {{{
cached_url = self.get_cached_cover_url(identifiers)
if cached_url is None:
log.debug('No cached cover found, running identify')

View File

@@ -11,6 +11,7 @@ import os
from threading import Event, Thread
from Queue import Queue, Empty
from io import BytesIO
from collections import Counter
from calibre.utils.date import as_utc
from calibre.ebooks.metadata.sources.identify import identify, msprefs
@@ -113,13 +114,18 @@ def single_covers(title, authors, identifiers, caches, tdir):
kwargs=dict(title=title, authors=authors, identifiers=identifiers))
worker.daemon = True
worker.start()
c = Counter()
while worker.is_alive():
try:
plugin, width, height, fmt, data = results.get(True, 1)
except Empty:
continue
else:
name = '%s,,%s,,%s,,%s.cover'%(plugin.name, width, height, fmt)
name = plugin.name
if plugin.can_get_multiple_covers:
name += '{%d}'%c[plugin.name]
c[plugin.name] += 1
name = '%s,,%s,,%s,,%s.cover'%(name, width, height, fmt)
with open(name, 'wb') as f:
f.write(data)
os.mkdir(name+'.done')

Some files were not shown because too many files have changed in this diff