[Sync] Sync with trunk. r6766

Li Fanxi 2010-10-31 22:15:10 +08:00
commit fec3cb8375
69 changed files with 17258 additions and 12100 deletions

View File

@ -4,6 +4,164 @@
# for important features/bug fixes.
# Also, each release can have new and improved recipes.
- version: 0.7.26
date: 2010-10-30
new features:
- title: "Check library: Allow wildcards in ignore names field"
bug fixes:
- title: "Fix regression in 0.7.25 that broke reading metadata from filenames."
- title: "Fix regression in 0.7.25 that caused original files to be mistakenly removed when adding books recursively"
- title: "Fix long series/publisher causing edit metadata in bulk dialog to become very large"
tickets: [7332]
- title: "Only add SONY periodical code to downloaded news if output profile is set to one of the SONY reader profiles. This is needed because the ever delightful Stanza crashes and burns when an EPUB has the periodical code"
improved recipes:
- El Periodico
- New Zealand Herald
new recipes:
- title: "Taggeschau.de"
author: "Florian Andreas Pfaff"
- title: "Gamespot Reviews"
author: "Marc Tonsing"
- version: 0.7.25
date: 2010-10-29
new features:
- title: "Add support for the SONY periodical format."
description: "This means that news downloaded by calibre and sent to a newer SONY device (350/650/900) should appear in the Periodicals section and have the special periodicals navigation user interface"
type: major
- title: "Content server: Make the new browsing interface the default. The old interface can be accessed at /old"
- title: "Content server: Allow running of content server as a WSGI application within another server. Add tutorial for this to the User Manual."
- title: "Support for the Pico Life reader, Kobo Wifi and HTC Aria"
- title: "Content server: Add a new --url-prefix command line option to ease the use of the server with a reverse proxy"
- title: "New social metadata plugin for Amazon that does not rely on AWS. Since Amazon broke AWS, it is recommended you upgrade to this version if you use metadata from Amazon"
- title: "Add a tweak to specify the fonts used when geenrating the default cover"
- title: "Add an output profile for generic Tablet devices"
tickets: [7289]
- title: "SONY driver: Allow sorting of collections by arbitrary field via a new tweak."
- title: "Content server: Make /mobile a little prettier"
- title: "Add button to 'Library Check' to automatically delete spurious files and folders"
bug fixes:
- title: "FB2 Input: Lots of love. Handle stylesheets and style attributes. Make parsinf malformed FB2 files more robust."
tickets: [7219, 7230]
- title: "Fix auto send of news to device with multiple calibre libraries. The fix means that if you have any pending news to be sent, it will be ignored after the update. Future news downloads will once again be automatically sent to the device."
- title: "MOBI Output: Conversion of super/sub scripts now handles nested tags."
tickets: [7264]
- title: "Conversion pipeline: Fix parsing of XML encoding declarations."
tickets: [7328]
- title: "Pandigital (Kobo): Upload thumbnails to correct location"
tickets: [7165]
- title: "Fix auto emailed news with non asci characters in title not being deliverd to Kindle"
tickets: [7322]
- title: "Read metadata only after on import plugins have run when adding books to GUI"
tickets: [7245]
- title: "Various fixes for bugs caused by non ascii temporary paths on windows with non UTF-8 filesystem encodings"
tickets: [7288]
- title: "Various fixes/enhancements to SNB Output"
- title: "Allow Tag editor in edit metadata dialog to be used even if tags have been changed"
tickets: [7298]
- title: "Fix crash on some OS X machines when Preferences->Conversion->Output is clicked"
- title: "MOBI indexing: Fix last entry missing sometimes"
tickets: [6595]
- title: "Fix regression causing books to be deselected after sending to device"
tickets: [7271]
- title: "Conversion pipeline: Fix rescaling of GIF images not working"
tickets: [7306]
- title: "Update PDF metadata/conversion libraries in windows build"
- title: "Fix timezone bug when searching on date fields"
tickets: [7300]
- title: "Fix regression that caused the viewr to crash if the main application is closed"
tickets: [7276]
- title: "Fix bug causing a spurious metadata.opf file to be written at the root of the calibre library when adding books"
- title: "Use the same title casing algorithm in all places"
- title: "Fix bulk edit of dual state boolean custom columns"
- title: "Increase image size for comics in Kindle DX profile for better conversion of comics to PDF"
- title: "Fix restore db to not dies when conflicting custom columns are encountered and report conflicting columns errors. Fix exceptions when referencing invalid _index fields."
- title: "Fix auto merge books not respecting article sort tweak"
tickets: [7147]
- title: "Linux device drivers: Fix udisks based ejecting for devices with multiple nodes"
- title: "Linux device mounting: Mount the drive with the lowest kernel name as main memory"
- title: "Fix use of numeric fields in templates"
- title: "EPUB Input: Handle EPUB files with multiple OPF files."
tickets: [7229]
- title: "Setting EPUB metadata: Fix date format. Fix language being overwritten by und when unspecified. Fix empty ISBN identifier being created"
- title: "Fix cannot delete a Series listing from List view also dismiss fetch metadata dialog when no metadata found automatically"
tickets: [7221, 7220]
- title: "Content server: Handle switch library in GUI gracefully"
- title: "calibre-server: Use cherrypy implementation of --pidfile and --daemonize"
new recipes:
- title: "Ming Pao"
author: "Eddie Lau"
- title: "lenta.ru"
author: "Nikolai Kotchetkov"
- title: "frazpc.pl"
author: "Tomasz Dlugosz"
- title: "Perfil and The Economic Collapse Blog"
author: "Darko Miletic"
- title: "STNN"
author: "Larry Chan"
improved recipes:
- CubaDebate
- El Pais
- Fox News
- New Scientist
- The Economic Times of India
- version: 0.7.24
date: 2010-10-17
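
The 0.7.25 entry about running the content server as a WSGI application inside another server is the kind of change that is clearer with a concrete script. The sketch below is an illustration only: the create_wsgi_app entry point, its prefix argument and all paths are assumptions to be checked against the User Manual tutorial mentioned in that entry.

# calibre-wsgi.py -- hypothetical adapter for embedding the content server
# in another WSGI server (mod_wsgi, uWSGI, ...). The entry-point name and its
# arguments are assumptions; see the User Manual tutorial added in 0.7.25.
import os

# Writable directory for calibre configuration data (placeholder path)
os.environ['CALIBRE_CONFIG_DIRECTORY'] = '/var/www/calibre-config'

from calibre.library.server.main import create_wsgi_app

# Library path and the URL prefix the app is mounted under; the standalone
# server gets the same effect from the new --url-prefix option.
application = create_wsgi_app('/srv/books/calibre-library', prefix='/calibre')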

View File

@ -208,6 +208,8 @@ h2.library_name {
}
.toplevel li a { text-decoration: none; }
.toplevel li img {
vertical-align: middle;
margin-right: 1em;
@ -261,9 +263,16 @@ h2.library_name {
}
.category div.category-item span.href { display: none }
.category div.category-item a { text-decoration: none; color: inherit; }
#groups span.load_href { display: none }
#groups a.load_href {
text-decoration: none;
color: inherit;
font-size: medium;
font-weight: normal;
padding: 0;
padding-left: 0.5em;
}
#groups h3 {
font-weight: bold;

View File

@ -116,7 +116,7 @@ function toplevel() {
$(".sort_select").hide();
$(".toplevel li").click(function() {
var href = $(this).children("span.url").text();
var href = $(this).children("a").attr('href');
window.location = href;
});
@ -133,7 +133,7 @@ function render_error(msg) {
// Category feed {{{
function category_clicked() {
var href = $(this).find("span.href").html();
var href = $(this).find("a").attr('href');
window.location = href;
}
@ -151,7 +151,7 @@ function category() {
change: function(event, ui) {
if (ui.newContent) {
var href = ui.newContent.children("span.load_href").html();
var href = ui.newContent.prev().children("a.load_href").attr('href');
ui.newContent.children(".loading").show();
if (href) {
$.ajax({

File diff suppressed because it is too large

Binary file not shown (new image, 781 B)

View File

@ -2,7 +2,7 @@
# -*- coding: utf-8 -*-
__license__ = 'GPL v3'
__copyright__ = '2009, Darko Miletic <darko.miletic at gmail.com>'
__copyright__ = '30 October 2010, Jordi Balcells based on an earlier recipe by Darko Miletic <darko.miletic at gmail.com>'
'''
elperiodico.cat
'''
@ -12,8 +12,8 @@ from calibre.ebooks.BeautifulSoup import Tag
class ElPeriodico_cat(BasicNewsRecipe):
title = 'El Periodico de Catalunya'
__author__ = 'Darko Miletic'
description = 'Noticias desde Catalunya'
__author__ = 'Jordi Balcells/Darko Miletic'
description = 'Noticies des de Catalunya'
publisher = 'elperiodico.cat'
category = 'news, politics, Spain, Catalunya'
oldest_article = 2
@ -33,15 +33,25 @@ class ElPeriodico_cat(BasicNewsRecipe):
html2epub_options = 'publisher="' + publisher + '"\ncomments="' + description + '"\ntags="' + category + '"'
feeds = [(u"Tota l'edició", u'http://www.elperiodico.cat/rss.asp?id=46')]
feeds = [(u'Portada', u'http://www.elperiodico.cat/ca/rss/rss_portada.xml'),
(u'Internacional', u'http://www.elperiodico.cat/ca/rss/internacional/rss.xml'),
(u'Societat', u'http://www.elperiodico.cat/ca/rss/societat/rss.xml'),
(u'Ci\xe8ncia i tecnologia', u'http://www.elperiodico.cat/ca/rss/ciencia-i-tecnologia/rss.xml'),
(u'Esports', u'http://www.elperiodico.cat/ca/rss/esports/rss.xml'),
(u'Gent', u'http://www.elperiodico.cat/ca/rss/gent/rss.xml'),
(u'Opini\xf3', u'http://www.elperiodico.cat/ca/rss/opinio/rss.xml'),
(u'Pol\xedtica', u'http://www.elperiodico.cat/ca/rss/politica/rss.xml'),
(u'Barcelona', u'http://www.elperiodico.cat/ca/rss/barcelona/rss.xml'),
(u'Economia', u'http://www.elperiodico.cat/ca/rss/economia/rss.xml'),
(u'Cultura i espectacles', u'http://www.elperiodico.cat/ca/rss/cultura-i-espectacles/rss.xml'),
(u'Tele', u'http://www.elperiodico.cat/ca/rss/tele/rss.xml')]
keep_only_tags = [dict(name='div', attrs={'id':'noticia'})]
keep_only_tags = [dict(name='div', attrs={'class':'titularnoticia'}),
dict(name='div', attrs={'class':'noticia_completa'})]
remove_tags = [
dict(name=['object','link','script'])
,dict(name='ul',attrs={'class':'herramientasDeNoticia'})
,dict(name='div', attrs={'id':'inferiores'})
remove_tags = [dict(name='div', attrs={'class':['opcionb','opcionb last','columna_noticia']}),
dict(name='span', attrs={'class':'opcionesnoticia'})
]
def print_version(self, url):

View File

@ -2,17 +2,17 @@
# -*- coding: utf-8 -*-
__license__ = 'GPL v3'
__copyright__ = '2009, Darko Miletic <darko.miletic at gmail.com>'
__copyright__ = '30 October 2010, Jordi Balcells based on an earlier recipe by Darko Miletic <darko.miletic at gmail.com>'
'''
elperiodico.com
elperiodico.cat
'''
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import Tag
class ElPeriodico_esp(BasicNewsRecipe):
class ElPeriodico_cat(BasicNewsRecipe):
title = 'El Periodico de Catalunya'
__author__ = 'Darko Miletic'
__author__ = 'Jordi Balcells/Darko Miletic'
description = 'Noticias desde Catalunya'
publisher = 'elperiodico.com'
category = 'news, politics, Spain, Catalunya'
@ -33,15 +33,25 @@ class ElPeriodico_esp(BasicNewsRecipe):
html2epub_options = 'publisher="' + publisher + '"\ncomments="' + description + '"\ntags="' + category + '"'
feeds = [(u"Toda la edición", u'http://www.elperiodico.com/rss.asp?id=46')]
feeds = [(u'Portada', u'http://www.elperiodico.com/es/rss/rss_portada.xml'),
(u'Internacional', u'http://elperiodico.com/es/rss/internacional/rss.xml'),
(u'Sociedad', u'http://elperiodico.com/es/rss/sociedad/rss.xml'),
(u'Ciencia y Tecnolog\xeda', u'http://elperiodico.com/es/rss/ciencia-y-tecnologia/rss.xml'),
(u'Deportes', u'http://elperiodico.com/es/rss/deportes/rss.xml'),
(u'Gente', u'http://elperiodico.com/es/rss/gente/rss.xml'),
(u'Opini\xf3n', u'http://elperiodico.com/es/rss/opinion/rss.xml'),
(u'Pol\xedtica', u'http://elperiodico.com/es/rss/politica/rss.xml'),
(u'Barcelona', u'http://elperiodico.com/es/rss/barcelona/rss.xml'),
(u'Econom\xeda', u'http://elperiodico.com/es/rss/economia/rss.xml'),
(u'Cultura y espect\xe1culos', u'http://elperiodico.com/es/rss/cultura-y-espectaculos/rss.xml'),
(u'Tele', u'http://elperiodico.com/es/rss/cultura-y-espectaculos/rss.xml')]
keep_only_tags = [dict(name='div', attrs={'id':'noticia'})]
keep_only_tags = [dict(name='div', attrs={'class':'titularnoticia'}),
dict(name='div', attrs={'class':'noticia_completa'})]
remove_tags = [
dict(name=['object','link','script'])
,dict(name='ul',attrs={'class':'herramientasDeNoticia'})
,dict(name='div', attrs={'id':'inferiores'})
remove_tags = [dict(name='div', attrs={'class':['opcionb','opcionb last','columna_noticia']}),
dict(name='span', attrs={'class':'opcionesnoticia'})
]
def print_version(self, url):

View File

@ -0,0 +1,41 @@
__license__ = 'GPL v3'
__author__ = u'Marc T\xf6nsing'
from calibre.web.feeds.news import BasicNewsRecipe
class GamespotCom(BasicNewsRecipe):
title = u'Gamespot.com Reviews'
description = 'review articles from gamespot.com'
language = 'en'
__author__ = u'Marc T\xf6nsing'
oldest_article = 7
max_articles_per_feed = 40
remove_empty_feeds = True
no_stylesheets = True
no_javascript = True
feeds = [
('PC Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=5'),
('XBOX 360 Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=1029'),
('Wii Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=1031'),
('PlayStation 3 Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=1028'),
('PlayStation 2 Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=7'),
('PlayStation Portable Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=1024'),
('Nintendo DS Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=1026'),
('iPhone Reviews', 'http://www.gamespot.com/rss/game_updates.php?type=5&platform=1049'),
]
remove_tags = [
dict(name='div', attrs={'class':'top_bar'}),
dict(name='div', attrs={'class':'video_embed'})
]
def get_cover_url(self):
return 'http://image.gamespotcdn.net/gamespot/shared/gs5/gslogo_bw.gif'
def get_article_url(self, article):
return article.get('link') + '?print=1'

View File

@ -0,0 +1,177 @@
#!/usr/bin/env python
'''
Lenta.ru
'''
from calibre.web.feeds.feedparser import parse
from calibre.ebooks.BeautifulSoup import Tag
from calibre.web.feeds.news import BasicNewsRecipe
import re
class LentaRURecipe(BasicNewsRecipe):
title = u'Lenta.ru: \u041d\u043e\u0432\u043e\u0441\u0442\u0438'
__author__ = 'Nikolai Kotchetkov'
publisher = 'lenta.ru'
category = 'news, Russia'
description = u'''\u0415\u0436\u0435\u0434\u043d\u0435\u0432\u043d\u0430\u044f
\u0438\u043d\u0442\u0435\u0440\u043d\u0435\u0442-\u0433\u0430\u0437\u0435\u0442\u0430.
\u041d\u043e\u0432\u043e\u0441\u0442\u0438 \u0441\u043e
\u0432\u0441\u0435\u0433\u043e \u043c\u0438\u0440\u0430 \u043d\u0430
\u0440\u0443\u0441\u0441\u043a\u043e\u043c
\u044f\u0437\u044b\u043a\u0435'''
description = u'Ежедневная интернет-газета. Новости со всего мира на русском языке'
oldest_article = 3
max_articles_per_feed = 100
masthead_url = u'http://img.lenta.ru/i/logowrambler.gif'
cover_url = u'http://img.lenta.ru/i/logowrambler.gif'
#Add feed names if you want them to be sorted (feeds of this list appear first)
sortOrder = [u'_default', u'В России', u'б.СССР', u'В мире']
encoding = 'cp1251'
language = 'ru'
no_stylesheets = True
remove_javascript = True
recursions = 0
conversion_options = {
'comment' : description
, 'tags' : category
, 'publisher' : publisher
, 'language' : language
}
keep_only_tags = [dict(name='td', attrs={'class':['statya','content']})]
remove_tags_after = [dict(name='p', attrs={'class':'links'}), dict(name='div', attrs={'id':'readers-block'})]
remove_tags = [dict(name='table', attrs={'class':['vrezka','content']}), dict(name='div', attrs={'class':'b240'}), dict(name='div', attrs={'id':'readers-block'}), dict(name='p', attrs={'class':'links'})]
feeds = [u'http://lenta.ru/rss/']
extra_css = 'h1 {font-size: 1.2em; margin: 0em 0em 0em 0em;} h2 {font-size: 1.0em; margin: 0em 0em 0em 0em;} h3 {font-size: 0.8em; margin: 0em 0em 0em 0em;}'
def parse_index(self):
try:
feedData = parse(self.feeds[0])
if not feedData:
raise NotImplementedError
self.log("parse_index: Feed loaded successfully.")
if feedData.feed.has_key('title'):
self.title = feedData.feed.title
self.log("parse_index: Title updated to: ", self.title)
if feedData.feed.has_key('image'):
self.log("HAS IMAGE!!!!")
def get_virtual_feed_articles(feed):
if feeds.has_key(feed):
return feeds[feed][1]
self.log("Adding new feed: ", feed)
articles = []
feeds[feed] = (feed, articles)
return articles
feeds = {}
#Iterate feed items and distribute articles using tags
for item in feedData.entries:
link = item.get('link', '');
title = item.get('title', '');
if '' == link or '' == title:
continue
article = {'title':title, 'url':link, 'description':item.get('description', ''), 'date':item.get('date', ''), 'content':''};
if not item.has_key('tags'):
get_virtual_feed_articles('_default').append(article)
continue
for tag in item.tags:
addedToDefault = False
term = tag.get('term', '')
if '' == term:
if (not addedToDefault):
get_virtual_feed_articles('_default').append(article)
continue
get_virtual_feed_articles(term).append(article)
#Get feed list
#Select sorted feeds first of all
result = []
for feedName in self.sortOrder:
if (not feeds.has_key(feedName)): continue
result.append(feeds[feedName])
del feeds[feedName]
result = result + feeds.values()
return result
except Exception, err:
self.log(err)
raise NotImplementedError
def preprocess_html(self, soup):
return self.adeify_images(soup)
def postprocess_html(self, soup, first_fetch):
#self.log('Original: ', soup.prettify())
contents = Tag(soup, 'div')
#Extract tags with given attributes
extractElements = {'div' : [{'id' : 'readers-block'}]}
#Remove all elements that were not extracted before
for tag, attrs in extractElements.iteritems():
for attr in attrs:
garbage = soup.findAll(tag, attr)
if garbage:
for pieceOfGarbage in garbage:
pieceOfGarbage.extract()
#Find article text using header
#and add all elements to contents
element = soup.find({'h1' : True, 'h2' : True})
if (element):
element.name = 'h1'
while element:
nextElement = element.nextSibling
element.extract()
contents.insert(len(contents.contents), element)
element = nextElement
#Place article date after header
dates = soup.findAll(text=re.compile('\d{2}\.\d{2}\.\d{4}, \d{2}:\d{2}:\d{2}'))
if dates:
for date in dates:
for string in date:
parent = date.parent
if (parent and isinstance(parent, Tag) and 'div' == parent.name and 'dt' == parent['class']):
#Date div found
parent.extract()
parent['style'] = 'font-size: 0.5em; color: gray; font-family: monospace;'
contents.insert(1, parent)
break
#Place article picture after date
pic = soup.find('img')
if pic:
picDiv = Tag(soup, 'div')
picDiv['style'] = 'width: 100%; text-align: center;'
pic.extract()
picDiv.insert(0, pic)
title = pic.get('title', None)
if title:
titleDiv = Tag(soup, 'div')
titleDiv['style'] = 'font-size: 0.5em;'
titleDiv.insert(0, title)
picDiv.insert(1, titleDiv)
contents.insert(2, picDiv)
body = soup.find('td', {'class':['statya','content']})
if body:
body.replaceWith(contents)
#self.log('Result: ', soup.prettify())
return soup

View File

@ -1,53 +1,79 @@
#!/usr/bin/env python
__license__ = 'GPL v3'
__copyright__ = '2009, Mathieu Godlewski <mathieu at godlewski.fr>'
__copyright__ = '2009, Mathieu Godlewski <mathieu at godlewski.fr>; 2010, Louis Gesbert <meta at antislash dot info>'
'''
Mediapart
'''
import re, string
from calibre.ebooks.BeautifulSoup import BeautifulSoup
from calibre.ebooks.BeautifulSoup import Tag
from calibre.web.feeds.news import BasicNewsRecipe
class Mediapart(BasicNewsRecipe):
title = 'Mediapart'
__author__ = 'Mathieu Godlewski <mathieu at godlewski.fr>'
__author__ = 'Mathieu Godlewski'
description = 'Global news in french from online newspapers'
oldest_article = 7
language = 'fr'
needs_subscription = True
max_articles_per_feed = 50
no_stylesheets = True
html2lrf_options = ['--base-font-size', '10']
cover_url = 'http://www.mediapart.fr/sites/all/themes/mediapart/mediapart/images/annonce.jpg'
feeds = [
('Les articles', 'http://www.mediapart.fr/articles/feed'),
]
preprocess_regexps = [ (re.compile(i[0], re.IGNORECASE|re.DOTALL), i[1]) for i in
[
(r'<div class="print-title">([^>]+)</div>', lambda match : '<h2>'+match.group(1)+'</h2>'),
(r'<p>Mediapart\.fr</p>', lambda match : ''),
(r'<p[^>]*>[\s]*</p>', lambda match : ''),
(r'<p><a href="[^\.]+\.pdf">[^>]*</a></p>', lambda match : ''),
# -- print-version has poor quality on this website, better do the conversion ourselves
#
# preprocess_regexps = [ (re.compile(i[0], re.IGNORECASE|re.DOTALL), i[1]) for i in
# [
# (r'<div class="print-title">([^>]+)</div>', lambda match : '<h2>'+match.group(1)+'</h2>'),
# (r'<span class=\'auteur_staff\'>[^>]+<a title=\'[^\']*\'[^>]*>([^<]*)</a>[^<]*</span>',
# lambda match : '<i>'+match.group(1)+'</i>'),
# (r'\'', lambda match: '&rsquo;'),
# ]
# ]
#
# remove_tags = [ dict(name='div', attrs={'class':'print-source_url'}),
# dict(name='div', attrs={'class':'print-links'}),
# dict(name='img', attrs={'src':'entete_article.png'}),
# dict(name='br') ]
#
# def print_version(self, url):
# raw = self.browser.open(url).read()
# soup = BeautifulSoup(raw.decode('utf8', 'replace'))
# div = soup.find('div', {'id':re.compile('node-\d+')})
# if div is None:
# return None
# article_id = string.replace(div['id'], 'node-', '')
# if article_id is None:
# return None
# return 'http://www.mediapart.fr/print/'+article_id
# -- Non-print version [dict(name='div', attrs={'class':'advert'})]
keep_only_tags = [
dict(name='h1', attrs={'class':'title'}),
dict(name='div', attrs={'class':'page_papier_detail'}),
]
]
remove_tags = [ dict(name='div', attrs={'class':'print-source_url'}),
dict(name='div', attrs={'class':'print-links'}),
dict(name='img', attrs={'src':'entete_article.png'}),
]
def preprocess_html(self,soup):
for title in soup.findAll('div', {'class':'titre'}):
tag = Tag(soup, 'h3')
title.replaceWith(tag)
tag.insert(0,title)
return soup
# -- Handle login
def get_browser(self):
br = BasicNewsRecipe.get_browser()
if self.username is not None and self.password is not None:
br.open('http://www.mediapart.fr/')
br.select_form(nr=1)
br['name'] = self.username
br['pass'] = self.password
br.submit()
return br
def print_version(self, url):
raw = self.browser.open(url).read()
soup = BeautifulSoup(raw.decode('utf8', 'replace'))
div = soup.find('div', {'class':'node node-type-article'})
if div is None:
return None
article_id = string.replace(div['id'], 'node-', '')
if article_id is None:
return None
return 'http://www.mediapart.fr/print/'+article_id

View File

@ -1,74 +1,43 @@
from calibre.web.feeds.recipes import BasicNewsRecipe
import re
class NewZealandHerald(BasicNewsRecipe):
title = 'New Zealand Herald'
__author__ = 'Krittika Goyal'
__author__ = 'Kovid Goyal'
description = 'Daily news'
timefmt = ' [%d %b, %Y]'
language = 'en_NZ'
oldest_article = 2.5
no_stylesheets = True
remove_tags_before = dict(name='div', attrs={'class':'contentContainer left eight'})
remove_tags_after = dict(name='div', attrs={'class':'callToAction'})
remove_tags = [
dict(name='iframe'),
dict(name='div', attrs={'class':['sectionHeader', 'tools','callToAction', 'contentContainer right two nopad relatedColumn']}),
#dict(name='div', attrs={'id':['shareContainer']}),
#dict(name='form', attrs={'onsubmit':"return verifySearch(this.w,'Keyword, citation, or #author')"}),
#dict(name='table', attrs={'cellspacing':'0'}),
feeds = [
('Business',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000003.xml'),
('World',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000002.xml'),
('National',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000001.xml'),
('Entertainment',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_001501119.xml'),
('Travel',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000007.xml'),
('Opinion',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000466.xml'),
('Life & Style',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000006.xml'),
('Technology'
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000005.xml'),
('Sport',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000004.xml'),
('Motoring',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000009.xml'),
('Property',
'http://rss.nzherald.co.nz/rss/xml/nzhrsscid_000000008.xml'),
]
def preprocess_html(self, soup):
table = soup.find('table')
if table is not None:
table.extract()
return soup
#TO GET ARTICLES IN SECTION
def nz_parse_section(self, url):
soup = self.index_to_soup(url)
div = soup.find(attrs={'class':'col-300 categoryList'})
date = div.find(attrs={'class':'link-list-heading'})
current_articles = []
for x in date.findAllNext(attrs={'class':['linkList', 'link-list-heading']}):
if x.get('class') == 'link-list-heading': break
for li in x.findAll('li'):
a = li.find('a', href=True)
if a is None:
continue
title = self.tag_to_string(a)
url = a.get('href', False)
if not url or not title:
continue
if url.startswith('/'):
url = 'http://www.nzherald.co.nz'+url
self.log('\t\tFound article:', title)
self.log('\t\t\t', url)
current_articles.append({'title': title, 'url':url,
'description':'', 'date':''})
return current_articles
# To GET SECTIONS
def parse_index(self):
feeds = []
for title, url in [
('National',
'http://www.nzherald.co.nz/nz/news/headlines.cfm?c_id=1'),
('World',
'http://www.nzherald.co.nz/world/news/headlines.cfm?c_id=2'),
('Politics',
'http://www.nzherald.co.nz/politics/news/headlines.cfm?c_id=280'),
('Crime',
'http://www.nzherald.co.nz/crime/news/headlines.cfm?c_id=30'),
('Environment',
'http://www.nzherald.co.nz/environment/news/headlines.cfm?c_id=39'),
]:
articles = self.nz_parse_section(url)
if articles:
feeds.append((title, articles))
return feeds
def print_version(self, url):
m = re.search(r'objectid=(\d+)', url)
if m is None:
return url
return 'http://www.nzherald.co.nz/news/print.cfm?pnum=1&objectid=' + m.group(1)

View File

@ -0,0 +1,66 @@
__license__ = 'GPL v3'
__copyright__ = '2010, Darko Miletic <darko.miletic at gmail.com>'
'''
perfil.com
'''
from calibre.web.feeds.news import BasicNewsRecipe
class Perfil(BasicNewsRecipe):
title = 'Perfil'
__author__ = 'Darko Miletic'
description = 'Noticias de Argentina y el resto del mundo'
publisher = 'perfil.com'
category = 'news, politics, Argentina'
oldest_article = 2
max_articles_per_feed = 200
no_stylesheets = True
encoding = 'cp1252'
use_embedded_content = False
language = 'es'
remove_empty_feeds = True
masthead_url = 'http://www.perfil.com/export/sites/diarioperfil/arte/10/logo_perfilcom_mm.gif'
extra_css = """
body{font-family: Arial,Helvetica,sans-serif }
.seccion{border-bottom: 1px dotted #666666; text-transform: uppercase; font-size: x-large}
.foto1 h1{font-size: x-small}
h1{font-family: Georgia,"Times New Roman",serif}
img{margin-bottom: 0.4em}
"""
conversion_options = {
'comment' : description
, 'tags' : category
, 'publisher' : publisher
, 'language' : language
}
remove_tags = [
dict(name=['iframe','embed','object','base','meta','link'])
,dict(name='a', attrs={'href':'#comentarios'})
,dict(name='div', attrs={'class':'foto3'})
,dict(name='img', attrs={'alt':'ampliar'})
]
keep_only_tags=[dict(attrs={'class':['bd468a','cuerpoSuperior']})]
remove_attributes=['onload','lang','width','height','border']
feeds = [
(u'Ultimo momento' , u'http://www.perfil.com/rss/ultimomomento.xml')
,(u'Politica' , u'http://www.perfil.com/rss/politica.xml' )
,(u'Policia' , u'http://www.perfil.com/rss/policia.xml' )
,(u'Internacionales', u'http://www.perfil.com/rss/internacional.xml')
,(u'Economia' , u'http://www.perfil.com/rss/economia.xml' )
,(u'Deportes' , u'http://www.perfil.com/rss/deportes.xml' )
,(u'Opinion' , u'http://www.perfil.com/rss/columnistas.xml' )
,(u'Sociedad' , u'http://www.perfil.com/rss/sociedad.xml' )
,(u'Cultura' , u'http://www.perfil.com/rss/cultura.xml' )
,(u'Espectaculos' , u'http://www.perfil.com/rss/espectaculos.xml' )
,(u'Ciencia' , u'http://www.perfil.com/rss/ciencia.xml' )
,(u'Salud' , u'http://www.perfil.com/rss/salud.xml' )
,(u'Tecnologia' , u'http://www.perfil.com/rss/tecnologia.xml' )
]
def preprocess_html(self, soup):
for item in soup.findAll(style=True):
del item['style']
return soup

View File

@ -0,0 +1,53 @@
__license__ = 'GPL v3'
__copyright__ = '2010, Louis Gesbert <meta at antislash dot info>'
'''
Rue89
'''
__author__ = '2010, Louis Gesbert <meta at antislash dot info>'
import re
from calibre.ebooks.BeautifulSoup import Tag
from calibre.web.feeds.news import BasicNewsRecipe
class Rue89(BasicNewsRecipe):
title = 'Rue89'
__author__ = 'Louis Gesbert'
description = 'Popular free french news website'
title = u'Rue89'
language = 'fr'
oldest_article = 7
max_articles_per_feed = 50
feeds = [(u'La Une', u'http://www.rue89.com/homepage/feed')]
no_stylesheets = True
preprocess_regexps = [
(re.compile(r'<(/?)h2>', re.IGNORECASE|re.DOTALL),
lambda match : '<'+match.group(1)+'h3>'),
(re.compile(r'<div class="print-title">([^>]+)</div>', re.IGNORECASE|re.DOTALL),
lambda match : '<h2>'+match.group(1)+'</h2>'),
(re.compile(r'<img[^>]+src="[^"]*/numeros/(\d+)[^0-9.">]*.gif"[^>]*/>', re.IGNORECASE|re.DOTALL),
lambda match : '<span style="font-family: Sans-serif; color: red; font-size:24pt; padding=2pt;">'+match.group(1)+'</span>'),
(re.compile(r'\''), lambda match: '&rsquo;'),
]
def preprocess_html(self,soup):
body = Tag(soup, 'body')
title = soup.find('h1', {'class':'title'})
content = soup.find('div', {'class':'content'})
soup.body.replaceWith(body)
body.insert(0, title)
body.insert(1, content)
return soup
remove_tags = [ #dict(name='div', attrs={'class':'print-source_url'}),
#dict(name='div', attrs={'class':'print-links'}),
#dict(name='img', attrs={'class':'print-logo'}),
dict(name='div', attrs={'class':'content_top'}),
dict(name='div', attrs={'id':'sidebar-left'}), ]
# -- print-version has poor quality on this website, better do the conversion ourselves
# def print_version(self, url):
# return re.sub('^.*-([0-9]+)$', 'http://www.rue89.com/print/\\1',url)

View File

@ -0,0 +1,24 @@
from calibre.web.feeds.news import BasicNewsRecipe
class Tagesschau(BasicNewsRecipe):
title = 'Tagesschau'
description = 'Nachrichten der ARD'
publisher = 'ARD'
language = 'de_DE'
__author__ = 'Florian Andreas Pfaff'
oldest_article = 7
max_articles_per_feed = 100
no_stylesheets = True
feeds = [('Tagesschau', 'http://www.tagesschau.de/xml/rss2')]
remove_tags = [
dict(name='div', attrs={'class':['linksZumThema schmal','teaserBox','boxMoreLinks','directLinks','teaserBox boxtext','fPlayer','zitatBox breit flashaudio']}),
dict(name='div',
attrs={'id':['socialBookmarks','seitenanfang']}),
dict(name='ul',
attrs={'class':['directLinks','directLinks weltatlas']}),
dict(name='strong', attrs={'class':['boxTitle inv','inv']})
]
keep_only_tags = [dict(name='div', attrs={'id':'centerCol'})]

View File

@ -30,23 +30,40 @@
<title>
<xsl:value-of select="fb:description/fb:title-info/fb:book-title"/>
</title>
<style type="text/x-oeb1-css">
A { color : #0002CC }
A:HOVER { color : #BF0000 }
BODY { background-color : #FEFEFE; color : #000000; font-family : Verdana, Geneva, Arial, Helvetica, sans-serif; text-align : justify }
H1{ font-size : 160%; font-style : normal; font-weight : bold; text-align : left; border : 1px solid Black; background-color : #E7E7E7; margin-left : 0px; page-break-before : always; }
H2{ font-size : 130%; font-style : normal; font-weight : bold; text-align : left; background-color : #EEEEEE; border : 1px solid Gray; page-break-before : always; }
H3{ font-size : 110%; font-style : normal; font-weight : bold; text-align : left; background-color : #F1F1F1; border : 1px solid Silver;}
H4{ font-size : 100%; font-style : normal; font-weight : bold; text-align : left; border : 1px solid Gray; background-color : #F4F4F4;}
H5{ font-size : 100%; font-style : italic; font-weight : bold; text-align : left; border : 1px solid Gray; background-color : #F4F4F4;}
H6{ font-size : 100%; font-style : italic; font-weight : normal; text-align : left; border : 1px solid Gray; background-color : #F4F4F4;}
SMALL{ font-size : 80% }
BLOCKQUOTE{ margin-left :4em; margin-top:1em; margin-right:0.2em;}
HR{ color : Black }
DIV{font-family : "Times New Roman", Times, serif; text-align : justify}
UL{margin-left: 0}
.epigraph{width:50%; margin-left : 35%;}
<style type="text/css">
a { color : #0002CC }
a:hover { color : #BF0000 }
body { background-color : #FEFEFE; color : #000000; font-family : Verdana, Geneva, Arial, Helvetica, sans-serif; text-align : justify }
h1{ font-size : 160%; font-style : normal; font-weight : bold; text-align : left; border : 1px solid Black; background-color : #E7E7E7; margin-left : 0px; page-break-before : always; }
h2{ font-size : 130%; font-style : normal; font-weight : bold; text-align : left; background-color : #EEEEEE; border : 1px solid Gray; page-break-before : always; }
h3{ font-size : 110%; font-style : normal; font-weight : bold; text-align : left; background-color : #F1F1F1; border : 1px solid Silver;}
h4{ font-size : 100%; font-style : normal; font-weight : bold; text-align : left; border : 1px solid Gray; background-color : #F4F4F4;}
h5{ font-size : 100%; font-style : italic; font-weight : bold; text-align : left; border : 1px solid Gray; background-color : #F4F4F4;}
h6{ font-size : 100%; font-style : italic; font-weight : normal; text-align : left; border : 1px solid Gray; background-color : #F4F4F4;}
small { font-size : 80% }
blockquote { margin-left :4em; margin-top:1em; margin-right:0.2em;}
hr { color : Black }
div {font-family : "Times New Roman", Times, serif; text-align : justify}
ul {margin-left: 0}
.epigraph{width:50%; margin-left : 35%;}
div.paragraph { text-align: justify; text-indent: 2em; }
</style>
<link rel="stylesheet" type="text/css" href="inline-styles.css" />
</head>
<body>
<xsl:for-each select="fb:description/fb:title-info/fb:annotation">
@ -136,12 +153,13 @@
</xsl:choose>
</xsl:variable>
<xsl:if test="$section_has_title = 'None'">
<a name="TOC_{generate-id()}" />
<xsl:if test="@id">
<xsl:element name="a">
<xsl:attribute name="name"><xsl:value-of select="@id"/></xsl:attribute>
</xsl:element>
</xsl:if>
<div id="TOC_{generate-id()}">
<xsl:if test="@id">
<xsl:element name="a">
<xsl:attribute name="id"><xsl:value-of select="@id"/></xsl:attribute>
</xsl:element>
</xsl:if>
</div>
</xsl:if>
<xsl:apply-templates>
<xsl:with-param name="section_toc_id" select="$section_has_title" />
@ -160,13 +178,13 @@
</xsl:if>
<xsl:if test="$section_toc_id != 'None'">
<xsl:element name="a">
<xsl:attribute name="name">TOC_<xsl:value-of select="$section_toc_id"/></xsl:attribute>
<xsl:attribute name="id">TOC_<xsl:value-of select="$section_toc_id"/></xsl:attribute>
</xsl:element>
</xsl:if>
<a name="TOC_{generate-id()}"></a>
<xsl:if test="@id">
<xsl:element name="a">
<xsl:attribute name="name"><xsl:value-of select="@id"/></xsl:attribute>
<xsl:attribute name="id"><xsl:value-of select="@id"/></xsl:attribute>
</xsl:element>
</xsl:if>
<xsl:apply-templates/>
@ -176,7 +194,7 @@
<xsl:element name="h6">
<xsl:if test="@id">
<xsl:element name="a">
<xsl:attribute name="name"><xsl:value-of select="@id"/></xsl:attribute>
<xsl:attribute name="id"><xsl:value-of select="@id"/></xsl:attribute>
</xsl:element>
</xsl:if>
<xsl:apply-templates/>
@ -207,11 +225,18 @@
</xsl:template>
<!-- p -->
<xsl:template match="fb:p">
<div align="justify"><xsl:if test="@id">
<xsl:element name="div">
<xsl:attribute name="class">paragraph</xsl:attribute>
<xsl:if test="@id">
<xsl:element name="a">
<xsl:attribute name="name"><xsl:value-of select="@id"/></xsl:attribute>
</xsl:element>
</xsl:if> &#160;&#160;&#160;<xsl:apply-templates/></div>
</xsl:if>
<xsl:if test="@style">
<xsl:attribute name="style"><xsl:value-of select="@style"/></xsl:attribute>
</xsl:if>
<xsl:apply-templates/>
</xsl:element>
</xsl:template>
<!-- strong -->
<xsl:template match="fb:strong">

View File

@ -20,20 +20,4 @@ function setup_image_scaling_handlers() {
});
}
function extract_svged_images() {
$("svg").each(function() {
var children = $(this).children("img");
if (children.length == 1) {
var img = $(children[0]);
var href = img.attr('xlink:href');
if (href != undefined) {
$(this).replaceWith('<div style="text-align:center; margin: 0; padding: 0"><img style="height: 98%" alt="SVG Image" src="' + href +'"></img></div>');
}
}
});
}
$(document).ready(function() {
//extract_svged_images();
});

View File

@ -8,7 +8,7 @@ __docformat__ = 'restructuredtext en'
__all__ = [
'pot', 'translations', 'get_translations', 'iso639',
'build', 'build_pdf2xml',
'build', 'build_pdf2xml', 'server',
'gui',
'develop', 'install',
'resources',
@ -35,6 +35,9 @@ from setup.extensions import Build, BuildPDF2XML
build = Build()
build_pdf2xml = BuildPDF2XML()
from setup.server import Server
server = Server()
from setup.install import Develop, Install, Sdist
develop = Develop()
install = Install()

setup/server.py (new file, 102 lines added)
View File

@ -0,0 +1,102 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
__license__ = 'GPL v3'
__copyright__ = '2010, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import subprocess, tempfile, os, time, sys
from threading import RLock
from setup import Command
try:
from pyinotify import WatchManager, ThreadedNotifier, EventsCodes, ProcessEvent
except:
wm = None
else:
wm = WatchManager()
flags = EventsCodes.ALL_FLAGS
mask = flags['IN_MODIFY']
class ProcessEvents(ProcessEvent):
def __init__(self, command):
ProcessEvent.__init__(self)
self.command = command
def process_default(self, event):
name = getattr(event,
'name', None)
if name and os.path.splitext(name)[1] == '.py':
print
print name, 'changed'
self.command.kill_server()
self.command.launch_server()
print self.command.prompt,
sys.stdout.flush()
class Server(Command):
description = 'Run the calibre server in development mode conveniently'
MONOCLE_PATH = '../monocle'
def rebuild_monocole(self):
subprocess.check_call(['sprocketize', '-C', self.MONOCLE_PATH,
'-I', 'src', 'src/monocle.js'],
stdout=open('resources/content_server/read/monocle.js', 'wb'))
def launch_server(self):
print 'Starting server...\n'
with self.lock:
self.rebuild_monocole()
self.server_proc = p = subprocess.Popen(['calibre-server', '--develop'],
stderr=subprocess.STDOUT, stdout=self.server_log)
time.sleep(0.2)
if p.poll() is not None:
print 'Starting server failed'
raise SystemExit(1)
return p
def kill_server(self):
print 'Killing server...\n'
if self.server_proc is not None:
with self.lock:
if self.server_proc.poll() is None:
self.server_proc.terminate()
while self.server_proc.poll() is None:
time.sleep(0.1)
def watch(self):
if wm is not None:
self.notifier = ThreadedNotifier(wm, ProcessEvents(self))
self.notifier.start()
self.wdd = wm.add_watch(os.path.abspath('src'), mask, rec=True)
def run(self, opts):
self.lock = RLock()
tdir = tempfile.gettempdir()
logf = os.path.join(tdir, 'calibre-server.log')
self.server_log = open(logf, 'ab')
self.prompt = 'Press Enter to kill/restart server. Ctrl+C to quit: '
print 'Server log available at:', logf
print
self.watch()
while True:
self.launch_server()
try:
raw_input(self.prompt)
except:
print
self.kill_server()
break
else:
self.kill_server()
print
if hasattr(self, 'notifier'):
self.notifier.stop()

View File

@ -2,7 +2,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
__docformat__ = 'restructuredtext en'
__appname__ = 'calibre'
__version__ = '0.7.24'
__version__ = '0.7.26'
__author__ = "Kovid Goyal <kovid@kovidgoyal.net>"
import re

View File

@ -292,7 +292,7 @@ class RTFMetadataReader(MetadataReaderPlugin):
def get_metadata(self, stream, ftype):
from calibre.ebooks.metadata.rtf import get_metadata
return get_metadata(stream)
class SNBMetadataReader(MetadataReaderPlugin):
name = 'Read SNB metadata'
@ -471,7 +471,8 @@ from calibre.devices.iriver.driver import IRIVER_STORY
from calibre.devices.binatone.driver import README
from calibre.devices.hanvon.driver import N516, EB511, ALEX, AZBOOKA, THEBOOK
from calibre.devices.edge.driver import EDGE
from calibre.devices.teclast.driver import TECLAST_K3, NEWSMY, IPAPYRUS, SOVOS
from calibre.devices.teclast.driver import TECLAST_K3, NEWSMY, IPAPYRUS, \
SOVOS, PICO
from calibre.devices.sne.driver import SNE
from calibre.devices.misc import PALMPRE, AVANT, SWEEX, PDNOVEL, KOGAN, \
GEMEI, VELOCITYMICRO, PDNOVEL_KOBO
@ -572,6 +573,7 @@ plugins += [
ELONEX,
TECLAST_K3,
NEWSMY,
PICO,
IPAPYRUS,
SOVOS,
EDGE,

View File

@ -259,6 +259,9 @@ class OutputProfile(Plugin):
#: Number of ems that the left margin of a blockquote is rendered as
mobi_ems_per_blockquote = 1.0
#: Special periodical formatting needed in EPUB
epub_periodical_format = None
@classmethod
def tags_to_string(cls, tags):
return escape(', '.join(tags))
@ -439,6 +442,9 @@ class SonyReaderOutput(OutputProfile):
fsizes = [7.5, 9, 10, 12, 15.5, 20, 22, 24]
unsupported_unicode_chars = [u'\u201f', u'\u201b']
epub_periodical_format = 'sony'
#periodical_date_in_title = False
class KoboReaderOutput(OutputProfile):
@ -561,6 +567,8 @@ class CybookOpusOutput(SonyReaderOutput):
fbase = 16
fsizes = [12, 12, 14, 16, 18, 20, 22, 24]
epub_periodical_format = None
class KindleOutput(OutputProfile):
name = 'Kindle'

View File

@ -117,6 +117,12 @@ class PDNOVEL_KOBO(PDNOVEL):
EBOOK_DIR_MAIN = 'eBooks/Kobo'
def upload_cover(self, path, filename, metadata, filepath):
coverdata = getattr(metadata, 'thumbnail', None)
if coverdata and coverdata[2]:
with open(os.path.join(path, '.thumbnail', filename+'.jpg'), 'wb') as coverfile:
coverfile.write(coverdata[2])
class VELOCITYMICRO(USBMS):
name = 'VelocityMicro device interface'

View File

@ -41,6 +41,15 @@ class NEWSMY(TECLAST_K3):
WINDOWS_MAIN_MEM = 'NEWSMY'
WINDOWS_CARD_A_MEM = 'USBDISK____SD'
class PICO(NEWSMY):
name = 'Pico device interface'
gui_name = 'Pico'
description = _('Communicate with the Pico reader.')
WINDOWS_MAIN_MEM = 'USBDISK__USER'
EBOOK_DIR_MAIN = 'Books'
FORMATS = ['EPUB', 'FB2', 'TXT', 'LRC', 'PDB', 'PDF', 'HTML', 'WTXT']
class IPAPYRUS(TECLAST_K3):
name = 'iPapyrus device interface'

View File

@ -30,9 +30,9 @@ def detect(aBuf):
# Added by Kovid
ENCODING_PATS = [
re.compile(r'<\?[^<>]+encoding=[\'"](.*?)[\'"][^<>]*>',
re.compile(r'<\?[^<>]+encoding\s*=\s*[\'"](.*?)[\'"][^<>]*>',
re.IGNORECASE),
re.compile(r'''<meta\s+?[^<>]+?content=['"][^'"]*?charset=([-a-z0-9]+)[^'"]*?['"][^<>]*>''',
re.compile(r'''<meta\s+?[^<>]+?content\s*=\s*['"][^'"]*?charset=([-a-z0-9]+)[^'"]*?['"][^<>]*>''',
re.IGNORECASE)
]
ENTITY_PATTERN = re.compile(r'&(\S+?);')
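
A quick way to see what the relaxed patterns above buy: declarations that put whitespace around the '=' were previously missed. The sample declarations below are made up for illustration; the regular expressions are the ones from the patch.

# Minimal check of the relaxed encoding-declaration patterns (illustrative input)
import re

ENCODING_PAT = re.compile(r'<\?[^<>]+encoding\s*=\s*[\'"](.*?)[\'"][^<>]*>',
                          re.IGNORECASE)
META_PAT = re.compile(r'''<meta\s+?[^<>]+?content\s*=\s*['"][^'"]*?charset=([-a-z0-9]+)[^'"]*?['"][^<>]*>''',
                      re.IGNORECASE)

print ENCODING_PAT.search('<?xml version="1.0" encoding = "koi8-r"?>').group(1)
print META_PAT.search('<meta http-equiv="Content-Type" '
                      'content = "text/html; charset=windows-1251">').group(1)
# -> koi8-r
# -> windows-1251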

View File

@ -187,9 +187,10 @@ class EPUBOutput(OutputFormatPlugin):
metadata_xml = None
extra_entries = []
if self.is_periodical:
from calibre.ebooks.epub.periodical import sony_metadata
metadata_xml, atom_xml = sony_metadata(oeb)
extra_entries = [('atom.xml', 'application/atom+xml', atom_xml)]
if self.opts.output_profile.epub_periodical_format == 'sony':
from calibre.ebooks.epub.periodical import sony_metadata
metadata_xml, atom_xml = sony_metadata(oeb)
extra_entries = [('atom.xml', 'application/atom+xml', atom_xml)]
oeb_output = plugin_for_output_format('oeb')
oeb_output.convert(oeb, tdir, input_plugin, opts, log)
opf = [x for x in os.listdir(tdir) if x.endswith('.opf')][0]

View File

@ -40,14 +40,35 @@ class FB2Input(InputFormatPlugin):
accelerators):
from calibre.ebooks.metadata.opf2 import OPFCreator
from calibre.ebooks.metadata.meta import get_metadata
from calibre.ebooks.oeb.base import XLINK_NS
from calibre.ebooks.oeb.base import XLINK_NS, XHTML_NS, RECOVER_PARSER
NAMESPACES = {'f':FB2NS, 'l':XLINK_NS}
log.debug('Parsing XML...')
raw = stream.read()
raw = stream.read().replace('\0', '')
try:
doc = etree.fromstring(raw)
except etree.XMLSyntaxError:
doc = etree.fromstring(raw.replace('& ', '&amp;'))
try:
doc = etree.fromstring(raw, parser=RECOVER_PARSER)
except:
doc = etree.fromstring(raw.replace('& ', '&amp;'),
parser=RECOVER_PARSER)
stylesheets = doc.xpath('//*[local-name() = "stylesheet" and @type="text/css"]')
css = ''
for s in stylesheets:
css += etree.tostring(s, encoding=unicode, method='text',
with_tail=False) + '\n\n'
if css:
import cssutils, logging
parser = cssutils.CSSParser(fetcher=None,
log=logging.getLogger('calibre.css'))
XHTML_CSS_NAMESPACE = '@namespace "%s";\n' % XHTML_NS
text = XHTML_CSS_NAMESPACE + css
log.debug('Parsing stylesheet...')
stylesheet = parser.parseString(text)
stylesheet.namespaces['h'] = XHTML_NS
css = unicode(stylesheet.cssText).replace('h|style', 'h|span')
css = re.sub(r'name\s*=\s*', 'class=', css)
self.extract_embedded_content(doc)
log.debug('Converting XML to HTML...')
ss = open(P('templates/fb2.xsl'), 'rb').read()
@ -63,7 +84,9 @@ class FB2Input(InputFormatPlugin):
for img in result.xpath('//img[@src]'):
src = img.get('src')
img.set('src', self.binary_map.get(src, src))
open('index.xhtml', 'wb').write(transform.tostring(result))
index = transform.tostring(result)
open('index.xhtml', 'wb').write(index)
open('inline-styles.css', 'wb').write(css)
stream.seek(0)
mi = get_metadata(stream, 'fb2')
if not mi.title:
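
To make the stylesheet handling above easier to follow in isolation, here is a standalone sketch of the same cssutils namespace trick: prefix every selector with the XHTML namespace, then rewrite FB2 style selectors so they match the spans the converter emits. The sample CSS rule is invented; only the transformation mirrors the plugin code.

# Standalone sketch of the FB2 stylesheet rewrite (sample CSS is made up)
import re, logging
import cssutils

XHTML_NS = 'http://www.w3.org/1999/xhtml'
fb2_css = 'style[name="emphasis"] { font-style: italic }'

parser = cssutils.CSSParser(fetcher=None, log=logging.getLogger('example.css'))
sheet = parser.parseString('@namespace "%s";\n' % XHTML_NS + fb2_css)
sheet.namespaces['h'] = XHTML_NS              # expose the default namespace under a prefix
css = unicode(sheet.cssText).replace('h|style', 'h|span')
css = re.sub(r'name\s*=\s*', 'class=', css)
print css    # selectors now target h|span[class="emphasis"]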

View File

@ -282,15 +282,22 @@ class HTMLInput(InputFormatPlugin):
basedir = os.getcwd()
self.opts = opts
fname = None
if hasattr(stream, 'name'):
basedir = os.path.dirname(stream.name)
fname = os.path.basename(stream.name)
if file_ext != 'opf':
if opts.dont_package:
raise ValueError('The --dont-package option is not supported for an HTML input file')
from calibre.ebooks.metadata.html import get_metadata
oeb = self.create_oebbook(stream.name, basedir, opts, log,
get_metadata(stream))
mi = get_metadata(stream)
if fname:
from calibre.ebooks.metadata.meta import metadata_from_filename
fmi = metadata_from_filename(fname)
fmi.smart_update(mi)
mi = fmi
oeb = self.create_oebbook(stream.name, basedir, opts, log, mi)
return oeb
from calibre.ebooks.conversion.plumber import create_oebbook

View File

@ -92,10 +92,14 @@ def get_metadata(br, asin, mi):
' @class="emptyClear" or @href]'):
c.getparent().remove(c)
desc = html.tostring(desc, method='html', encoding=unicode).strip()
desc = re.sub(r' class=[^>]+>', '>', desc)
# remove all attributes from tags
desc = re.sub(r'<([a-zA-Z0-9]+)\s[^>]+>', r'<\1>', desc)
# Collapse whitespace
desc = re.sub('\n+', '\n', desc)
desc = re.sub(' +', ' ', desc)
# Remove the notice about text referring to out of print editions
desc = re.sub(r'(?s)<em>--This text ref.*?</em>', '', desc)
# Remove comments
desc = re.sub(r'(?s)<!--.*?-->', '', desc)
mi.comments = desc
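
The substitutions above are easier to see on a tiny sample. The HTML fragment below is invented; the regular expressions are the ones used by the plugin.

# Illustration of the description clean-up (sample HTML is made up)
import re

desc = '''<div class="content">  A   novel.
<!-- tracking --><em>--This text refers to an out of print edition.</em></div>'''

desc = re.sub(r'<([a-zA-Z0-9]+)\s[^>]+>', r'<\1>', desc)     # strip tag attributes
desc = re.sub('\n+', '\n', desc)                             # collapse newlines
desc = re.sub(' +', ' ', desc)                               # collapse spaces
desc = re.sub(r'(?s)<em>--This text ref.*?</em>', '', desc)  # out-of-print notice
desc = re.sub(r'(?s)<!--.*?-->', '', desc)                    # HTML comments
print desc    # -> '<div> A novel.\n</div>'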

View File

@ -22,7 +22,8 @@ def get_metadata(stream):
'xlink':XLINK_NS})
tostring = lambda x : etree.tostring(x, method='text',
encoding=unicode).strip()
root = etree.fromstring(stream.read())
parser = etree.XMLParser(recover=True, no_network=True)
root = etree.fromstring(stream.read(), parser=parser)
authors, author_sort = [], None
for au in XPath('//fb2:author')(root):
fname = lname = author = None
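
The recovering parser introduced above matters for FB2 files that are almost, but not quite, well formed. A small illustration (the broken fragment is made up):

# lxml's recovering parser returns a usable tree where strict parsing
# would raise XMLSyntaxError (illustrative input)
from lxml import etree

broken = '<FictionBook><description><book-title>War & Peace</book-title></description></FictionBook>'

parser = etree.XMLParser(recover=True, no_network=True)
root = etree.fromstring(broken, parser=parser)   # no exception despite the bare '&'
print etree.tostring(root)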

View File

@ -12,7 +12,7 @@ import os, time, sys, shutil
from calibre.utils.ipc.job import ParallelJob
from calibre.utils.ipc.server import Server
from calibre.ptempfile import PersistentTemporaryDirectory
from calibre.ptempfile import PersistentTemporaryDirectory, TemporaryDirectory
from calibre import prints
from calibre.constants import filesystem_encoding
@ -21,51 +21,76 @@ def debug(*args):
prints(*args)
sys.stdout.flush()
def read_metadata_(task, tdir, notification=lambda x,y:x):
def serialize_metadata_for(formats, tdir, id_):
from calibre.ebooks.metadata.meta import metadata_from_formats
from calibre.ebooks.metadata.opf2 import metadata_to_opf
mi = metadata_from_formats(formats)
mi.cover = None
cdata = None
if mi.cover_data:
cdata = mi.cover_data[-1]
mi.cover_data = None
if not mi.application_id:
mi.application_id = '__calibre_dummy__'
with open(os.path.join(tdir, '%s.opf'%id_), 'wb') as f:
f.write(metadata_to_opf(mi))
if cdata:
with open(os.path.join(tdir, str(id_)), 'wb') as f:
f.write(cdata)
def read_metadata_(task, tdir, notification=lambda x,y:x):
with TemporaryDirectory() as mdir:
do_read_metadata(task, tdir, mdir, notification)
def do_read_metadata(task, tdir, mdir, notification):
from calibre.customize.ui import run_plugins_on_import
for x in task:
try:
id, formats = x
id_, formats = x
except:
continue
try:
if isinstance(formats, basestring): formats = [formats]
mi = metadata_from_formats(formats)
mi.cover = None
cdata = None
if mi.cover_data:
cdata = mi.cover_data[-1]
mi.cover_data = None
if not mi.application_id:
mi.application_id = '__calibre_dummy__'
with open(os.path.join(tdir, '%s.opf'%id), 'wb') as f:
f.write(metadata_to_opf(mi))
if cdata:
with open(os.path.join(tdir, str(id)), 'wb') as f:
f.write(cdata)
import_map = {}
fmts, metadata_fmts = [], []
for format in formats:
mfmt = format
name, ext = os.path.splitext(os.path.basename(format))
nfp = run_plugins_on_import(format)
if nfp is None:
nfp = format
nfp = os.path.abspath(nfp)
if not nfp or nfp == format or not os.access(nfp, os.R_OK):
nfp = None
else:
# Ensure that the filename is preserved so that
# reading metadata from filename is not broken
nfp = os.path.abspath(nfp)
nfext = os.path.splitext(nfp)[1]
mfmt = os.path.join(mdir, name + nfext)
shutil.copyfile(nfp, mfmt)
metadata_fmts.append(mfmt)
fmts.append(nfp)
serialize_metadata_for(metadata_fmts, tdir, id_)
for format, nfp in zip(formats, fmts):
if not nfp:
continue
if isinstance(nfp, unicode):
nfp.encode(filesystem_encoding)
x = lambda j : os.path.abspath(os.path.normpath(os.path.normcase(j)))
if x(nfp) != x(format) and os.access(nfp, os.R_OK|os.W_OK):
fmt = os.path.splitext(format)[1].replace('.', '').lower()
nfmt = os.path.splitext(nfp)[1].replace('.', '').lower()
dest = os.path.join(tdir, '%s.%s'%(id, nfmt))
dest = os.path.join(tdir, '%s.%s'%(id_, nfmt))
shutil.copyfile(nfp, dest)
import_map[fmt] = dest
os.remove(nfp)
if import_map:
with open(os.path.join(tdir, str(id)+'.import'), 'wb') as f:
with open(os.path.join(tdir, str(id_)+'.import'), 'wb') as f:
for fmt, nfp in import_map.items():
f.write(fmt+':'+nfp+'\n')
notification(0.5, id)
notification(0.5, id_)
except:
import traceback
with open(os.path.join(tdir, '%s.error'%id), 'wb') as f:
with open(os.path.join(tdir, '%s.error'%id_), 'wb') as f:
f.write(traceback.format_exc())
class Progress(object):

View File

@ -27,6 +27,8 @@ TABLE_TAGS = set(['table', 'tr', 'td', 'th', 'caption'])
SPECIAL_TAGS = set(['hr', 'br'])
CONTENT_TAGS = set(['img', 'hr', 'br'])
NOT_VTAGS = HEADER_TAGS | NESTABLE_TAGS | TABLE_TAGS | SPECIAL_TAGS | \
CONTENT_TAGS
PAGE_BREAKS = set(['always', 'left', 'right'])
COLLAPSE = re.compile(r'[ \t\r\n\v]+')
@ -57,8 +59,6 @@ class FormatState(object):
self.indent = 0.
self.fsize = 3
self.ids = set()
self.valign = 'baseline'
self.nest = False
self.italic = False
self.bold = False
self.strikethrough = False
@ -76,7 +76,6 @@ class FormatState(object):
and self.italic == other.italic \
and self.bold == other.bold \
and self.href == other.href \
and self.valign == other.valign \
and self.preserve == other.preserve \
and self.family == other.family \
and self.bgcolor == other.bgcolor \
@ -224,7 +223,6 @@ class MobiMLizer(object):
return
if not pstate or istate != pstate:
inline = para
valign = istate.valign
fsize = istate.fsize
href = istate.href
if not href:
@ -234,19 +232,8 @@ class MobiMLizer(object):
else:
inline = etree.SubElement(inline, XHTML('a'), href=href)
bstate.anchor = inline
if valign == 'super':
parent = inline
if istate.nest and bstate.inline is not None:
parent = bstate.inline
istate.nest = False
inline = etree.SubElement(parent, XHTML('sup'))
elif valign == 'sub':
parent = inline
if istate.nest and bstate.inline is not None:
parent = bstate.inline
istate.nest = False
inline = etree.SubElement(parent, XHTML('sub'))
elif fsize != 3:
if fsize != 3:
inline = etree.SubElement(inline, XHTML('font'),
size=str(fsize))
if istate.family == 'monospace':
@ -279,7 +266,8 @@ class MobiMLizer(object):
else:
inline.append(item)
def mobimlize_elem(self, elem, stylizer, bstate, istates):
def mobimlize_elem(self, elem, stylizer, bstate, istates,
ignore_valign=False):
if not isinstance(elem.tag, basestring) \
or namespace(elem.tag) != XHTML_NS:
return
@ -351,15 +339,6 @@ class MobiMLizer(object):
istate.family = 'sans-serif'
else:
istate.family = 'serif'
valign = style['vertical-align']
if valign in ('super', 'text-top') or asfloat(valign) > 0:
istate.nest = istate.valign in ('sub', 'super')
istate.valign = 'super'
elif valign == 'sub' or asfloat(valign) < 0:
istate.nest = istate.valign in ('sub', 'super')
istate.valign = 'sub'
else:
istate.valign = 'baseline'
if 'id' in elem.attrib:
istate.ids.add(elem.attrib['id'])
if 'name' in elem.attrib:
@ -407,6 +386,30 @@ class MobiMLizer(object):
text = None
else:
text = COLLAPSE.sub(' ', elem.text)
valign = style['vertical-align']
not_baseline = valign in ('super', 'sub', 'text-top',
'text-bottom')
vtag = 'sup' if valign in ('super', 'text-top') else 'sub'
if not_baseline and not ignore_valign and tag not in NOT_VTAGS and not isblock:
nroot = etree.Element(XHTML('html'), nsmap=MOBI_NSMAP)
vbstate = BlockState(etree.SubElement(nroot, XHTML('body')))
vbstate.para = etree.SubElement(vbstate.body, XHTML('p'))
self.mobimlize_elem(elem, stylizer, vbstate, istates,
ignore_valign=True)
if len(istates) > 0:
istates.pop()
if len(istates) == 0:
istates.append(FormatState())
at_start = bstate.para is None
if at_start:
self.mobimlize_content('span', '', bstate, istates)
parent = bstate.para if bstate.inline is None else bstate.inline
if parent is not None:
vtag = etree.SubElement(parent, XHTML(vtag))
for child in vbstate.para:
vtag.append(child)
return
if text or tag in CONTENT_TAGS or tag in NESTABLE_TAGS:
self.mobimlize_content(tag, text, bstate, istates)
for child in elem:
@ -421,6 +424,8 @@ class MobiMLizer(object):
tail = COLLAPSE.sub(' ', child.tail)
if tail:
self.mobimlize_content(tag, tail, bstate, istates)
if bstate.content and style['page-break-after'] in PAGE_BREAKS:
bstate.pbreak = True
if isblock:

View File

@ -31,17 +31,17 @@ class SNBOutput(OutputFormatPlugin):
'the line will be broken at the space after and will exceed the '
'specified value. Also, there is a minimum of 25 characters. '
'Use 0 to disable line splitting.')),
OptionRecommendation(name='insert_empty_line',
OptionRecommendation(name='snb_insert_empty_line',
recommended_value=False, level=OptionRecommendation.LOW,
help=_('Speicfy whether or not to insert an empty line between '
help=_('Specify whether or not to insert an empty line between '
'two paragraphs.')),
OptionRecommendation(name='indent_first_line',
OptionRecommendation(name='snb_indent_first_line',
recommended_value=True, level=OptionRecommendation.LOW,
help=_('Speicfy whether or not to insert two space characters '
help=_('Specify whether or not to insert two space characters '
'to indent the first line of each paragraph.')),
OptionRecommendation(name='hide_chapter_name',
OptionRecommendation(name='snb_hide_chapter_name',
recommended_value=False, level=OptionRecommendation.LOW,
help=_('Speicfy whether or not to hide the chapter title for each '
help=_('Specify whether or not to hide the chapter title for each '
'chapter. Useful for image-only output (eg. comics).')),
])

View File

@ -90,7 +90,7 @@ class SNBMLizer(object):
snbcTree = etree.Element("snbc")
snbcHead = etree.SubElement(snbcTree, "head")
etree.SubElement(snbcHead, "title").text = subtitle
if self.opts and self.opts.hide_chapter_name:
if self.opts and self.opts.snb_hide_chapter_name:
etree.SubElement(snbcHead, "hidetitle").text = u"true"
etree.SubElement(snbcTree, "body")
trees[subitem] = snbcTree
@ -120,13 +120,13 @@ class SNBMLizer(object):
subitem = line[len(CALIBRE_SNB_BM_TAG):]
bodyTree = trees[subitem].find(".//body")
else:
if self.opts and self.opts.indent_first_line:
if self.opts and self.opts.snb_indent_first_line:
prefix = u'\u3000\u3000'
else:
prefix = u''
etree.SubElement(bodyTree, "text").text = \
etree.CDATA(unicode(prefix + line))
if self.opts and self.opts.insert_empty_line:
if self.opts and self.opts.snb_insert_empty_line:
etree.SubElement(bodyTree, "text").text = \
etree.CDATA(u'')

View File

@ -18,7 +18,8 @@ class PluginWidget(Widget, Ui_Form):
def __init__(self, parent, get_option, get_help, db=None, book_id=None):
Widget.__init__(self, parent,
['insert_empty_line', 'indent_first_line', 'hide_chapter_name',])
['snb_insert_empty_line', 'snb_indent_first_line',
'snb_hide_chapter_name',])
self.db, self.book_id = db, book_id
self.initialize_options(get_option, get_help, db, book_id)

View File

@ -28,21 +28,21 @@
</spacer>
</item>
<item row="3" column="0">
<widget class="QCheckBox" name="opt_hide_chapter_name">
<widget class="QCheckBox" name="opt_snb_hide_chapter_name">
<property name="text">
<string>Hide chapter name</string>
</property>
</widget>
</item>
<item row="2" column="0">
<widget class="QCheckBox" name="opt_indent_first_line">
<widget class="QCheckBox" name="opt_snb_indent_first_line">
<property name="text">
<string>Insert space before the first line for each paragraph</string>
</property>
</widget>
</item>
<item row="1" column="0">
<widget class="QCheckBox" name="opt_insert_empty_line">
<widget class="QCheckBox" name="opt_snb_insert_empty_line">
<property name="text">
<string>Insert empty line between paragraphs</string>
</property>

View File

@ -489,7 +489,7 @@ class DeviceMenu(QMenu): # {{{
for actions, desc in (
(basic_actions, ''),
(delete_actions, _('Send and delete from library')),
(specific_actions, _('Send specific format'))
(specific_actions, _('Send specific format to'))
):
mdest = menu
if actions is not basic_actions:
@ -1029,7 +1029,7 @@ class DeviceMixin(object): # {{{
to_s = [account]
subjects = [_('News:')+' '+mi.title]
texts = [_('Attached is the')+' '+mi.title]
attachment_names = [mi.title+os.path.splitext(attachment)[1]]
attachment_names = [ascii_filename(mi.title)+os.path.splitext(attachment)[1]]
attachments = [attachment]
jobnames = ['%s:%s'%(id, mi.title)]
remove = [id] if config['delete_news_from_library_on_upload']\

View File

@ -55,12 +55,16 @@ class CheckLibraryDialog(QDialog):
h.addWidget(ln)
self.name_ignores = QLineEdit()
self.name_ignores.setText(db.prefs.get('check_library_ignore_names', ''))
self.name_ignores.setToolTip(
_('Enter comma-separated standard file name wildcards, such as synctoy*.dat'))
ln.setBuddy(self.name_ignores)
h.addWidget(self.name_ignores)
le = QLabel(_('Extensions to ignore'))
h.addWidget(le)
self.ext_ignores = QLineEdit()
self.ext_ignores.setText(db.prefs.get('check_library_ignore_extensions', ''))
self.ext_ignores.setToolTip(
_('Enter comma-separated extensions without a leading dot. Used only in book folders'))
le.setBuddy(self.ext_ignores)
h.addWidget(self.ext_ignores)
self._layout.addLayout(h)

View File

@ -571,6 +571,10 @@ class MetadataBulkDialog(QDialog, Ui_MetadataBulkDialog):
self.initalize_authors()
self.initialize_series()
self.initialize_publisher()
for x in ('authors', 'publisher', 'series'):
x = getattr(self, x)
x.setSizeAdjustPolicy(x.AdjustToMinimumContentsLengthWithIcon)
x.setMinimumContentsLength(25)
def initalize_authors(self):
all_authors = self.db.all_authors()

View File

@ -678,6 +678,19 @@ nothing should be put between the original text and the inserted text</string>
<item row="8" column="2">
<widget class="QLineEdit" name="test_result"/>
</item>
<item row="25" column="0" colspan="2">
<spacer name="verticalSpacer_2">
<property name="orientation">
<enum>Qt::Vertical</enum>
</property>
<property name="sizeHint" stdset="0">
<size>
<width>20</width>
<height>5</height>
</size>
</property>
</spacer>
</item>
</layout>
</widget>
</widget>

View File

@ -2,7 +2,7 @@ __license__ = 'GPL v3'
__copyright__ = '2008, Kovid Goyal <kovid at kovidgoyal.net>'
''' Code to manage ebook library'''
def db():
def db(path=None):
from calibre.library.database2 import LibraryDatabase2
from calibre.utils.config import prefs
return LibraryDatabase2(prefs['library_path'])
return LibraryDatabase2(path if path else prefs['library_path'])

View File

@ -5,7 +5,7 @@ __license__ = 'GPL v3'
__copyright__ = '2010, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'
import re, os, traceback
import re, os, traceback, fnmatch
from calibre import isbytestring
from calibre.constants import filesystem_encoding
@ -66,13 +66,19 @@ class CheckLibrary(object):
return self.failed_folders or self.mismatched_dirs or \
self.conflicting_custom_cols or self.failed_restores
def ignore_name(self, filename):
for filespec in self.ignore_names:
if fnmatch.fnmatch(filename, filespec):
return True
return False;
def scan_library(self, name_ignores, extension_ignores):
self.ignore_names = frozenset(name_ignores)
self.ignore_ext = frozenset(['.'+ e for e in extension_ignores])
lib = self.src_library_path
for auth_dir in os.listdir(lib):
if auth_dir in self.ignore_names or auth_dir == 'metadata.db':
if self.ignore_name(auth_dir) or auth_dir == 'metadata.db':
continue
auth_path = os.path.join(lib, auth_dir)
# First check: author must be a directory
@ -85,7 +91,7 @@ class CheckLibrary(object):
# Look for titles in the author directories
found_titles = False
for title_dir in os.listdir(auth_path):
if title_dir in self.ignore_names:
if self.ignore_name(title_dir):
continue
title_path = os.path.join(auth_path, title_dir)
db_path = os.path.join(auth_dir, title_dir)

View File

@ -28,11 +28,13 @@ from calibre.library.server.browse import BrowseServer
class DispatchController(object): # {{{
def __init__(self, prefix):
def __init__(self, prefix, wsgi=False):
self.dispatcher = cherrypy.dispatch.RoutesDispatcher()
self.funcs = []
self.seen = set([])
self.prefix = prefix if prefix else ''
if wsgi:
self.prefix = ''
def __call__(self, name, route, func, **kwargs):
if name in self.seen:
@ -96,7 +98,9 @@ class LibraryServer(ContentServer, MobileServer, XMLServer, OPDSServer, Cache,
server_name = __appname__ + '/' + __version__
def __init__(self, db, opts, embedded=False, show_tracebacks=True):
def __init__(self, db, opts, embedded=False, show_tracebacks=True,
wsgi=False):
self.is_wsgi = bool(wsgi)
self.opts = opts
self.embedded = embedded
self.state_callback = None
@ -114,35 +118,47 @@ class LibraryServer(ContentServer, MobileServer, XMLServer, OPDSServer, Cache,
self.set_database(db)
cherrypy.config.update({
'log.screen' : opts.develop,
'engine.autoreload_on' : opts.develop,
'tools.log_headers.on' : opts.develop,
'checker.on' : opts.develop,
'request.show_tracebacks': show_tracebacks,
'server.socket_host' : listen_on,
'server.socket_port' : opts.port,
'server.socket_timeout' : opts.timeout, #seconds
'server.thread_pool' : opts.thread_pool, # number of threads
})
if embedded:
'log.screen' : opts.develop,
'engine.autoreload_on' : getattr(opts,
'auto_reload', False),
'tools.log_headers.on' : opts.develop,
'checker.on' : opts.develop,
'request.show_tracebacks': show_tracebacks,
'server.socket_host' : listen_on,
'server.socket_port' : opts.port,
'server.socket_timeout' : opts.timeout, #seconds
'server.thread_pool' : opts.thread_pool, # number of threads
})
if embedded or wsgi:
cherrypy.config.update({'engine.SIGHUP' : None,
'engine.SIGTERM' : None,})
self.config = {'global': {
'tools.gzip.on' : True,
'tools.gzip.mime_types': ['text/html', 'text/plain', 'text/xml', 'text/javascript', 'text/css'],
}}
if opts.password:
self.config['/'] = {
'tools.digest_auth.on' : True,
'tools.digest_auth.realm' : (_('Password to access your calibre library. Username is ') + opts.username.strip()).encode('ascii', 'replace'),
'tools.digest_auth.users' : {opts.username.strip():opts.password.strip()},
}
self.config = {}
self.is_running = False
self.exception = None
self.setup_loggers()
cherrypy.engine.bonjour.subscribe()
if not wsgi:
self.setup_loggers()
cherrypy.engine.bonjour.subscribe()
self.config['global'] = {
'tools.gzip.on' : True,
'tools.gzip.mime_types': ['text/html', 'text/plain',
'text/xml', 'text/javascript', 'text/css'],
}
if opts.password:
self.config['/'] = {
'tools.digest_auth.on' : True,
'tools.digest_auth.realm' : (
_('Password to access your calibre library. Username is ')
+ opts.username.strip()),
'tools.digest_auth.users' : {opts.username.strip():opts.password.strip()},
}
self.__dispatcher__ = DispatchController(self.opts.url_prefix, wsgi)
for x in self.__class__.__bases__:
if hasattr(x, 'add_routes'):
x.add_routes(self, self.__dispatcher__)
root_conf = self.config.get('/', {})
root_conf['request.dispatch'] = self.__dispatcher__.dispatcher
self.config['/'] = root_conf
def set_database(self, db):
self.db = db
@ -183,14 +199,6 @@ class LibraryServer(ContentServer, MobileServer, XMLServer, OPDSServer, Cache,
def start(self):
self.is_running = False
d = DispatchController(self.opts.url_prefix)
for x in self.__class__.__bases__:
if hasattr(x, 'add_routes'):
x.add_routes(self, d)
root_conf = self.config.get('/', {})
root_conf['request.dispatch'] = d.dispatcher
self.config['/'] = root_conf
cherrypy.tree.mount(root=None, config=self.config)
try:
try:

View File

@ -123,9 +123,10 @@ def get_category_items(category, items, restriction, datatype, prefix): # {{{
def item(i):
templ = (u'<div title="{4}" class="category-item">'
'<div class="category-name">{0}</div><div>{1}</div>'
'<div>{2}'
'<span class="href">{5}{3}</span></div></div>')
'<div class="category-name">'
'<a href="{5}{3}" title="{4}">{0}</a></div>'
'<div>{1}</div>'
'<div>{2}</div></div>')
rating, rstring = render_rating(i.avg_rating, prefix)
name = xml(i.name)
if datatype == 'rating':
@ -142,7 +143,7 @@ def get_category_items(category, items, restriction, datatype, prefix): # {{{
q = category
href = '/browse/matches/%s/%s'%(quote(q), quote(id_))
return templ.format(xml(name), rating,
xml(desc), xml(href), rstring, prefix)
xml(desc), xml(href, True), rstring, prefix)
items = list(map(item, items))
return '\n'.join(['<div class="category-container">'] + items + ['</div>'])
@ -335,9 +336,10 @@ class BrowseServer(object):
icon = 'blank.png'
cats.append((meta['name'], category, icon))
cats = [('<li title="{2} {0}"><img src="{3}{src}" alt="{0}" />'
cats = [('<li><a title="{2} {0}" href="/browse/category/{1}">&nbsp;</a>'
'<img src="{3}{src}" alt="{0}" />'
'<span class="label">{0}</span>'
'<span class="url">{3}/browse/category/{1}</span></li>')
'</li>')
.format(xml(x, True), xml(quote(y)), xml(_('Browse books by')),
self.opts.url_prefix, src='/browse/icon/'+z)
for x, y, z in cats]
@ -393,14 +395,15 @@ class BrowseServer(object):
for x in sorted(starts):
category_groups[x] = len([y for y in items if
getter(y).upper().startswith(x)])
items = [(u'<h3 title="{0}">{0} <span>[{2}]</span></h3><div>'
items = [(u'<h3 title="{0}"><a class="load_href" title="{0}"'
u' href="{4}{3}"><strong>{0}</strong> [{2}]</a></h3><div>'
u'<div class="loaded" style="display:none"></div>'
u'<div class="loading"><img alt="{1}" src="{4}/static/loading.gif" /><em>{1}</em></div>'
u'<span class="load_href">{4}{3}</span></div>').format(
u'</div>').format(
xml(s, True),
xml(_('Loading, please wait'))+'&hellip;',
unicode(c),
xml(u'/browse/category_group/%s/%s'%(category, s)),
xml(u'/browse/category_group/%s/%s'%(category, s), True),
self.opts.url_prefix)
for s, c in category_groups.items()]
items = '\n\n'.join(items)
@ -460,13 +463,14 @@ class BrowseServer(object):
@Endpoint()
def browse_catalog(self, category=None, category_sort=None):
'Entry point for top-level, categories and sub-categories'
prefix = '' if self.is_wsgi else self.opts.url_prefix
if category == None:
ans = self.browse_toplevel()
elif category == 'newest':
raise cherrypy.InternalRedirect(self.opts.url_prefix +
raise cherrypy.InternalRedirect(prefix +
'/browse/matches/newest/dummy')
elif category == 'allbooks':
raise cherrypy.InternalRedirect(self.opts.url_prefix +
raise cherrypy.InternalRedirect(prefix +
'/browse/matches/allbooks/dummy')
else:
ans = self.browse_category(category, category_sort)
@ -562,7 +566,8 @@ class BrowseServer(object):
if not val:
val = ''
args[key] = xml(val, True)
fname = ascii_filename(args['title']) + ' - ' + ascii_filename(args['authors'])
fname = quote(ascii_filename(args['title']) + ' - ' +
ascii_filename(args['authors']))
return args, fmt, fmts, fname
@Endpoint(mimetype='application/json; charset=utf-8')

View File

@ -70,10 +70,10 @@ class ContentServer(object):
id = id.rpartition('_')[-1].partition('.')[0]
match = re.search(r'\d+', id)
if not match:
raise cherrypy.HTTPError(400, 'id:%s not an integer'%id)
raise cherrypy.HTTPError(404, 'id:%s not an integer'%id)
id = int(match.group())
if not self.db.has_id(id):
raise cherrypy.HTTPError(400, 'id:%d does not exist in database'%id)
raise cherrypy.HTTPError(404, 'id:%d does not exist in database'%id)
if what == 'thumb' or what.startswith('thumb_'):
try:
width, height = map(int, what.split('_')[1:])

View File

@ -24,6 +24,17 @@ def stop_threaded_server(server):
server.exit()
server.thread = None
def create_wsgi_app(path_to_library=None, prefix=''):
'WSGI entry point'
from calibre.library import db
cherrypy.config.update({'environment': 'embedded'})
db = db(path_to_library)
parser = option_parser()
opts, args = parser.parse_args(['calibre-server'])
opts.url_prefix = prefix
server = LibraryServer(db, opts, wsgi=True, show_tracebacks=True)
return cherrypy.Application(server, script_name=None, config=server.config)
def option_parser():
parser = config().option_parser('%prog '+ _(
'''[options]
@ -47,6 +58,9 @@ The OPDS interface is advertised via BonJour automatically.
help=_('Specifies a restriction to be used for this invocation. '
'This option overrides any per-library settings specified'
' in the GUI'))
parser.add_option('--auto-reload', default=False, action='store_true',
help=_('Auto reload server when source code changes. May not'
' work in all environments.'))
return parser

View File

@ -7,6 +7,7 @@ __docformat__ = 'restructuredtext en'
import re, os
import __builtin__
from urllib import quote
import cherrypy
from lxml import html
@ -115,13 +116,13 @@ def build_index(books, num, search, sort, order, start, total, url_base, CKEYS,
data = TD()
for fmt in book['formats'].split(','):
a = ascii_filename(book['authors'])
t = ascii_filename(book['title'])
a = quote(ascii_filename(book['authors']))
t = quote(ascii_filename(book['title']))
s = SPAN(
A(
fmt.lower(),
href=prefix+'/get/%s/%s-%s_%d.%s' % (fmt, a, t,
book['id'], fmt)
book['id'], fmt.lower())
),
CLASS('button'))
s.tail = u''

View File

@ -36,6 +36,7 @@ FileTypePlugin
.. _pluginsMetadataPlugin:
Metadata plugins
-------------------
@ -50,7 +51,6 @@ Metadata plugins
:members:
:member-order: bysource
.. _pluginsMetadataSource:
Catalog plugins
----------------
@ -60,6 +60,7 @@ Catalog plugins
:members:
:member-order: bysource
.. _pluginsMetadataSource:
Metadata download plugins
--------------------------

View File

@ -0,0 +1,108 @@
.. include:: global.rst
.. _servertutorial:
Integrating the |app| content server into other servers
==========================================================
Here, we will show you how to integrate the |app| content server into another server. The most common reason for this is to make use of SSL or more sophisticated authentication. There are two main techniques: running the |app| content server as a standalone process and using a reverse proxy to connect it to your main server, or running the content server in process inside your main server with WSGI. The examples below are all for Apache 2.x on linux, but should be easily adaptable to other platforms.
.. contents:: Contents
:depth: 2
:local:
.. note:: This only applies to calibre releases >= 0.7.25
Using a reverse proxy
-----------------------
This is the simplest approach as it allows you to use the binary calibre install with no external dependencies/system integration requirements.
First start the |app| content server as shown below::
calibre-server --url-prefix /calibre --port 8080
Now suppose you are using Apache as your main server. First enable the proxy modules in Apache by adding the following to :file:`httpd.conf`::
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
The exact technique for enabling the proxy modules will vary depending on your Apache installation. Once you have the proxy modules enabled, add the following rules to httpd.conf (or, if you are using virtual hosts, to the conf file for the virtual host in question)::
RewriteEngine on
RewriteRule ^/calibre/(.*) http://localhost:8080/calibre/$1 [proxy]
RewriteRule ^/calibre http://localhost:8080 [proxy]
That's all; you will now be able to access the |app| Content Server under the /calibre URL in your Apache server.
.. note:: If you are willing to devote an entire VirtualHost to the content server, there is no need to use --url-prefix and RewriteRule; instead, just use the ProxyPass directive, as in the sketch below.
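For example, a dedicated virtual host could look something like this minimal sketch (the server name and port are placeholders; in this setup, start calibre-server without --url-prefix)::
<VirtualHost *:80>
    ServerName calibre.example.com
    # Hand every request straight to the standalone content server
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>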
In process
------------
The calibre content server can be run directly, in process, inside a host server like Apache using the WSGI framework.
.. note:: For this to work, all the dependencies needed by calibre must be installed on your system. On linux, this can be achieved fairly easily by installing the distribution provided calibre package (provided it is up to date).
First, we have to create a WSGI *adapter* for the calibre content server. Here is a template you can use for the purpose. Replace the paths as directed in the comments.
.. code-block:: python
# WSGI script file to run calibre content server as a WSGI app
import sys, os
# You can get the paths referenced here by running
# calibre-debug --paths
# on your server
# The first entry from CALIBRE_PYTHON_PATH
sys.path.insert(0, '/home/kovid/work/calibre/src')
# CALIBRE_RESOURCES_PATH
sys.resources_location = '/home/kovid/work/calibre/resources'
# CALIBRE_EXTENSIONS_PATH
sys.extensions_location = '/home/kovid/work/calibre/src/calibre/plugins'
# Path to directory containing calibre executables
sys.executables_location = '/usr/bin'
# Path to a directory for which the server has read/write permissions
# calibre config will be stored here
os.environ['CALIBRE_CONFIG_DIRECTORY'] = '/var/www/localhost/calibre-config'
del sys
del os
from calibre.library.server.main import create_wsgi_app
application = create_wsgi_app(
# The mount point of this WSGI application (i.e. the first argument to
# the WSGIScriptAlias directive). Set to the empty string if mounted at /
prefix='/calibre',
# Path to the calibre library to be served
# The server process must have write permission for all files/dirs
# in this directory or BAD things will happen
path_to_library='/home/kovid/documents/demo library'
)
del create_wsgi_app
Save this adapter as :file:`calibre-wsgi-adapter.py` somewhere your server will have access to it.
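Before touching the Apache configuration, you can optionally check that the adapter executes cleanly with the paths you filled in. Something along these lines (the file path is a placeholder; run it as a user that can read the library and write to the configuration directory) will fail with a traceback if any of the paths are wrong, and exit silently otherwise::
python -c "execfile('/var/www/localhost/cgi-bin/calibre-wsgi-adapter.py')"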
Let's suppose that we want to use WSGI in Apache. First enable WSGI in Apache by adding the following to :file:`httpd.conf`::
LoadModule wsgi_module modules/mod_wsgi.so
The exact technique for enabling the wsgi module will vary depending on your Apache installation. Once you have the wsgi module enabled, add the following rule to httpd.conf (or, if you are using virtual hosts, to the conf file for the virtual host in question)::
WSGIScriptAlias /calibre /var/www/localhost/cgi-bin/calibre-wsgi-adapter.py
Change the path to :file:`calibre-wsgi-adapter.py` to wherever you saved it previously (make sure Apache has access to it).
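The configuration above runs the adapter embedded in the Apache worker processes. mod_wsgi's daemon mode, which keeps the content server in its own process pool, should also work; a minimal sketch (the process and thread counts are arbitrary placeholders) is::
WSGIDaemonProcess calibre processes=1 threads=8
WSGIProcessGroup calibre
WSGIScriptAlias /calibre /var/www/localhost/cgi-bin/calibre-wsgi-adapter.py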
That's all; you will now be able to access the |app| Content Server under the /calibre URL in your Apache server.
.. note:: For more help with using mod_wsgi in Apache, see `mod_wsgi <http://code.google.com/p/modwsgi/wiki/WhereToGetHelp>`_.

View File

@ -16,4 +16,5 @@ Here you will find tutorials to get you started using |app|'s more advanced feat
template_lang
regexp
portable
server

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -583,7 +583,7 @@ class BasicNewsRecipe(Recipe):
self.title = unicode(self.title, 'utf-8', 'replace')
self.debug = options.verbose > 1
self.output_dir = os.getcwd()
self.output_dir = os.path.abspath(os.getcwdu())
self.verbose = options.verbose
self.test = options.test
self.username = options.username
@ -594,7 +594,6 @@ class BasicNewsRecipe(Recipe):
if self.touchscreen:
self.template_css += self.output_profile.touchscreen_news_css
self.output_dir = os.path.abspath(self.output_dir)
if options.test:
self.max_articles_per_feed = 2
self.simultaneous_downloads = min(4, self.simultaneous_downloads)
@ -958,6 +957,8 @@ class BasicNewsRecipe(Recipe):
self.log.error(_('Could not download cover: %s')%str(err))
self.log.debug(traceback.format_exc())
else:
if not cu:
return
cdata = None
if os.access(cu, os.R_OK):
cdata = open(cu, 'rb').read()
@ -988,6 +989,7 @@ class BasicNewsRecipe(Recipe):
self.cover_path = cpath
def download_cover(self):
self.cover_path = None
try:
self._download_cover()
except: