Mirror of https://github.com/kovidgoyal/calibre.git, synced 2025-07-09 03:04:10 -04:00

Commit ab20117b19: Merge from trunk
@ -5,7 +5,7 @@
 # Also, each release can have new and improved recipes.
 
 # - version: ?.?.?
-#   date: 2012-??-??
+#   date: 2013-??-??
 #
 #   new features:
 #   - title:
@ -19,6 +19,68 @@
 #   new recipes:
 #   - title:
 
+- version: 0.9.14
+  date: 2013-01-11
+
+  new features:
+    - title: "When adding multiple books and duplicates are found, allow the user to select which of the duplicate books will be added anyway."
+      tickets: [1095256]
+
+    - title: "Device drivers for Kobo Arc on linux, Polaroid Android tablet"
+      tickets: [1098049]
+
+    - title: "When sorting by series, use the language of the book to decide what leading articles to remove, just as is done for sorting by title"
+
+  bug fixes:
+    - title: "PDF Output: Do not error out when the input document contains links with anchors not present in the document."
+      tickets: [1096428]
+
+    - title: "Add support for upgraded db on newest Kobo firmware"
+      tickets: [1095617]
+
+    - title: "PDF Output: Fix typo that broke use of custom paper sizes."
+      tickets: [1097563]
+
+    - title: "PDF Output: Handle empty anchors present at the end of a page"
+
+    - title: "PDF Output: Fix side margins of last page in a flow being incorrect when large side margins are used."
+      tickets: [1096290]
+
+    - title: "Edit metadata dialog: Allow setting the series number for custom series type columns to zero"
+
+    - title: "When bulk editing custom series-type columns and not providing a series number, use 1 as the default, instead of None"
+
+    - title: "Catalogs: Fix issue with catalog generation using Hungarian UI and author_sort beginning with multiple letter groups."
+      tickets: [1091581]
+
+    - title: "PDF Output: Don't error out on files that have invalid font-family declarations."
+      tickets: [1096279]
+
+    - title: "Do not load QRawFont at global level, to allow calibre installation on systems with missing dependencies"
+      tickets: [1096170]
+
+    - title: "PDF Output: Fix cover not present in generated PDF files"
+      tickets: [1096098]
+
+  improved recipes:
+    - Sueddeutsche Zeitung mobil
+    - Boerse Online
+    - TidBits
+    - New York Review of Books
+    - Fleshbot
+    - Il Messaggero
+    - Libero
+
+  new recipes:
+    - title: Spectator Magazine, Oxford Mail and Outside Magazine
+      author: Krittika Goyal
+
+    - title: Libartes
+      author: Darko Miletic
+
+    - title: El Diplo
+      author: Tomas De Domenico
+
 - version: 0.9.13
   date: 2013-01-04
@ -437,10 +437,10 @@ that allows you to create collections on your Kindle from the |app| metadata. It
 
 .. note:: Amazon have removed the ability to manipulate collections completely in their newer models, like the Kindle Touch and Kindle Fire, making even the above plugin useless. If you really want the ability to manage collections on your Kindle via a USB connection, we encourage you to complain to Amazon about it, or get a reader where this is supported, like the SONY or Kobo Readers.
 
-I am getting an error when I try to use |app| with my Kobo Touch?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+I am getting an error when I try to use |app| with my Kobo Touch/Glo/etc.?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The Kobo Touch has very buggy firmware. Connecting to it has been known to fail at random. Certain combinations of motherboard, USB ports/cables/hubs can exacerbate this tendency to fail. If you are getting an error when connecting to your touch with |app| try the following, each of which has solved the problem for *some* |app| users.
+The Kobo has very buggy firmware. Connecting to it has been known to fail at random. Certain combinations of motherboard, USB ports/cables/hubs can exacerbate this tendency to fail. If you are getting an error when connecting to your touch with |app| try the following, each of which has solved the problem for *some* |app| users.
 
 * Connect the Kobo directly to your computer, not via USB Hub
 * Try a different USB cable and a different USB port on your computer
@ -11,16 +11,15 @@ class BusinessWeekMagazine(BasicNewsRecipe):
     category = 'news'
     encoding = 'UTF-8'
     keep_only_tags = [
         dict(name='div', attrs={'id':'article_body_container'}),
     ]
-    remove_tags = [dict(name='ui'),dict(name='li')]
+    remove_tags = [dict(name='ui'),dict(name='li'),dict(name='div', attrs={'id':['share-email']})]
     no_javascript = True
     no_stylesheets = True
 
     cover_url = 'http://images.businessweek.com/mz/covers/current_120x160.jpg'
 
     def parse_index(self):
 
         #Go to the issue
         soup = self.index_to_soup('http://www.businessweek.com/magazine/news/articles/business_news.htm')
@ -47,7 +46,6 @@ class BusinessWeekMagazine(BasicNewsRecipe):
             if section_title not in feeds:
                 feeds[section_title] = []
             feeds[section_title] += articles
 
         div1 = soup.find ('div', attrs={'class':'column center'})
         section_title = ''
         for div in div1.findAll('h5'):
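
For context on the remove_tags change above: entries in remove_tags are tag matchers that calibre strips from every downloaded article, after keep_only_tags has narrowed the page to the wanted container. A minimal sketch of the pattern; the recipe name and feed URL are hypothetical, only the two tag rules come from this file:

    from calibre.web.feeds.news import BasicNewsRecipe

    class ShareWidgetFreeExample(BasicNewsRecipe):  # hypothetical recipe name
        title = 'Share Widget Free Example'
        feeds = [('News', 'http://example.com/rss.xml')]  # placeholder feed

        # keep_only_tags first narrows each article page to this container...
        keep_only_tags = [dict(name='div', attrs={'id':'article_body_container'})]
        # ...then remove_tags deletes matches inside it, such as the
        # share-by-email widget targeted by the change above.
        remove_tags = [dict(name='div', attrs={'id':['share-email']})]
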
@ -15,7 +15,7 @@ class LiberoNews(BasicNewsRecipe):
     description = 'Italian daily newspaper'
 
     #cover_url = 'http://www.liberoquotidiano.it/images/Libero%20Quotidiano.jpg'
     cover_url = 'http://www.edicola.liberoquotidiano.it/vnlibero/fpcut.jsp?testata=milano'
     title = u'Libero '
     publisher = 'EDITORIALE LIBERO s.r.l 2006'
     category = 'News, politics, culture, economy, general interest'
@ -1,224 +0,0 @@
#!/usr/bin/env python
##
## Title: Microwave and RF
##
## License: GNU General Public License v3 - http://www.gnu.org/copyleft/gpl.html

# Feb 2012: Initial release

__license__ = 'GNU General Public License v3 - http://www.gnu.org/copyleft/gpl.html'
'''
mwrf.com
'''

import re
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.magick import Image

class Microwaves_and_RF(BasicNewsRecipe):

    Convert_Grayscale = False  # Convert images to gray scale or not

    # Add sections that want to be excluded from the magazine
    exclude_sections = []

    # Add sections that want to be included from the magazine
    include_sections = []

    title = u'Microwaves and RF'
    __author__ = u'kiavash'
    description = u'Microwaves and RF Monthly Magazine'
    publisher = 'Penton Media, Inc.'
    publication_type = 'magazine'
    site = 'http://mwrf.com'

    language = 'en'
    asciiize = True
    timeout = 120
    simultaneous_downloads = 1  # very peaky site!

    # Main article is inside this tag
    keep_only_tags = [dict(name='table', attrs={'id':'prtContent'})]

    no_stylesheets = True
    remove_javascript = True

    # Flattens all the tables to make it compatible with Nook
    conversion_options = {'linearize_tables' : True}

    remove_tags = [
        dict(name='span', attrs={'class':'body12'}),
    ]

    remove_attributes = ['border', 'cellspacing', 'align', 'cellpadding', 'colspan',
                         'valign', 'vspace', 'hspace', 'alt', 'width', 'height']

    # Specify extra CSS - overrides ALL other CSS (i.e. added last).
    extra_css = 'body { font-family: verdana, helvetica, sans-serif; } \
                 .introduction, .first { font-weight: bold; } \
                 .cross-head { font-weight: bold; font-size: 125%; } \
                 .cap, .caption { display: block; font-size: 80%; font-style: italic; } \
                 .cap, .caption, .caption img, .caption span { display: block; margin: 5px auto; } \
                 .byl, .byd, .byline img, .byline-name, .byline-title, .author-name, .author-position, \
                 .correspondent-portrait img, .byline-lead-in, .name, .bbc-role { display: block; \
                 font-size: 80%; font-style: italic; margin: 1px auto; } \
                 .story-date, .published { font-size: 80%; } \
                 table { width: 100%; } \
                 td img { display: block; margin: 5px auto; } \
                 ul { padding-top: 10px; } \
                 ol { padding-top: 10px; } \
                 li { padding-top: 5px; padding-bottom: 5px; } \
                 h1 { font-size: 175%; font-weight: bold; } \
                 h2 { font-size: 150%; font-weight: bold; } \
                 h3 { font-size: 125%; font-weight: bold; } \
                 h4, h5, h6 { font-size: 100%; font-weight: bold; }'

    # Remove the line breaks and float left/right and picture width/height.
    preprocess_regexps = [(re.compile(r'<br[ ]*/>', re.IGNORECASE), lambda m: ''),
                          (re.compile(r'<br[ ]*clear.*/>', re.IGNORECASE), lambda m: ''),
                          (re.compile(r'float:.*?'), lambda m: ''),
                          (re.compile(r'width:.*?px'), lambda m: ''),
                          (re.compile(r'height:.*?px'), lambda m: '')
                         ]

    def print_version(self, url):
        url = re.sub(r'.html', '', url)
        url = re.sub('/ArticleID/.*?/', '/Print.cfm?ArticleID=', url)
        return url

    # Need to change the user agent to avoid potential download errors
    def get_browser(self, *args, **kwargs):
        from calibre import browser
        kwargs['user_agent'] = 'Mozilla/5.0 (Windows NT 5.1; rv:10.0) Gecko/20100101 Firefox/10.0'
        return browser(*args, **kwargs)

    def parse_index(self):

        # Fetches the main page of Microwaves and RF
        soup = self.index_to_soup(self.site)

        # First page has the ad, let's find the redirect address.
        url = soup.find('span', attrs={'class':'commonCopy'}).find('a').get('href')
        if url.startswith('/'):
            url = self.site + url

        soup = self.index_to_soup(url)

        # Searches the site for the Issue ID link, then returns the href address
        # pointing to the latest issue
        latest_issue = soup.find('a', attrs={'href':lambda x: x and 'IssueID' in x}).get('href')

        # Fetches the index page of the latest issue
        soup = self.index_to_soup(latest_issue)

        # Finds the main section of the page containing cover, issue date and
        # TOC
        ts = soup.find('div', attrs={'id':'columnContainer'})

        # Finds the issue date
        ds = ' '.join(self.tag_to_string(ts.find('span', attrs={'class':'CurrentIssueSectionHead'})).strip().split()[-2:]).capitalize()
        self.log('Found Current Issue:', ds)
        self.timefmt = ' [%s]'%ds

        # Finds the cover image
        cover = ts.find('img', src=lambda x: x and 'Cover' in x)
        if cover is not None:
            self.cover_url = self.site + cover['src']
            self.log('Found Cover image:', self.cover_url)

        feeds = []
        article_info = []

        # Finds all the articles (titles and links)
        articles = ts.findAll('a', attrs={'class':'commonArticleTitle'})

        # Finds all the descriptions
        descriptions = ts.findAll('span', attrs={'class':'commonCopy'})

        # Finds all the sections
        sections = ts.findAll('span', attrs={'class':'kicker'})

        title_number = 0

        # Goes through all the articles one by one and sorts them out
        for section in sections:
            title_number = title_number + 1

            # Removes the unwanted sections
            if self.tag_to_string(section) in self.exclude_sections:
                continue

            # Only includes the wanted sections
            if self.include_sections:
                if self.tag_to_string(section) not in self.include_sections:
                    continue

            title = self.tag_to_string(articles[title_number])
            url = articles[title_number].get('href')
            if url.startswith('/'):
                url = self.site + url

            self.log('\tFound article:', title, 'at', url)
            desc = self.tag_to_string(descriptions[title_number])
            self.log('\t\t', desc)

            article_info.append({'title':title, 'url':url, 'description':desc,
                                 'date':self.timefmt})

        if article_info:
            feeds.append((self.title, article_info))

        #self.log(feeds)
        return feeds

    def postprocess_html(self, soup, first):
        if self.Convert_Grayscale:
            # process all the images
            for tag in soup.findAll(lambda tag: tag.name.lower()=='img' and tag.has_key('src')):
                iurl = tag['src']
                img = Image()
                img.open(iurl)
                if img < 0:
                    raise RuntimeError('Out of memory')
                img.type = "GrayscaleType"
                img.save(iurl)
        return soup

    def preprocess_html(self, soup):

        # Includes all the figures inside the final ebook
        # Finds all the jpg links
        for figure in soup.findAll('a', attrs={'href':lambda x: x and 'jpg' in x}):

            # makes sure that the link points to the absolute web address
            if figure['href'].startswith('/'):
                figure['href'] = self.site + figure['href']

            figure.name = 'img'  # converts the links to img
            figure['src'] = figure['href']  # with the same address as href
            figure['style'] = 'display:block'  # adds \n before and after the image
            del figure['href']
            del figure['target']

        # Makes the title stand out
        for title in soup.findAll('a', attrs={'class': 'commonSectionTitle'}):
            title.name = 'h1'
            del title['href']
            del title['target']

        # Makes the section name more visible
        for section_name in soup.findAll('a', attrs={'class': 'kicker2'}):
            section_name.name = 'h5'
            del section_name['href']
            del section_name['target']

        # Removes all unrelated links
        for link in soup.findAll('a', attrs={'href': True}):
            link.name = 'font'
            del link['href']
            del link['target']

        return soup
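
The deleted recipe above illustrates two techniques that remain common in recipes: rewriting article URLs to a printer-friendly version and overriding the browser user agent. A condensed sketch under stated assumptions; the recipe name, feed and URL pattern are hypothetical, only the user-agent approach mirrors the file above:

    import re
    from calibre.web.feeds.news import BasicNewsRecipe

    class PrintVersionExample(BasicNewsRecipe):  # illustrative only
        title = 'Print Version Example'
        feeds = [('News', 'http://example.com/rss.xml')]  # placeholder feed

        def print_version(self, url):
            # Rewrite each article URL to its printer-friendly variant;
            # the pattern here is hypothetical, not mwrf.com's.
            return re.sub(r'/article/', '/print/', url)

        def get_browser(self, *args, **kwargs):
            # Present a desktop browser user agent to sites that refuse
            # the default client string, as the recipe above did.
            from calibre import browser
            kwargs['user_agent'] = 'Mozilla/5.0 (Windows NT 5.1; rv:10.0) Gecko/20100101 Firefox/10.0'
            return browser(*args, **kwargs)
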
recipes/outside_magazine.recipe (new file, 65 lines)
@ -0,0 +1,65 @@
from calibre.web.feeds.recipes import BasicNewsRecipe

class NYTimes(BasicNewsRecipe):

    title = 'Outside Magazine'
    __author__ = 'Krittika Goyal'
    description = 'Outside Magazine - Free 1 Month Old Issue'
    timefmt = ' [%d %b, %Y]'
    needs_subscription = False
    language = 'en'

    no_stylesheets = True
    #auto_cleanup = True
    #auto_cleanup_keep = '//div[@class="thumbnail"]'

    keep_only_tags = dict(name='div', attrs={'class':'masonry-box width-four'})
    remove_tags = [
        dict(name='div', attrs={'id':['share-bar', 'outbrain_widget_0', 'outbrain_widget_1', 'livefyre']}),
        #dict(name='div', attrs={'id':['qrformdiv', 'inSection', 'alpha-inner']}),
        #dict(name='form', attrs={'onsubmit':''}),
        dict(name='section', attrs={'id':['article-quote', 'article-navigation']}),
    ]

    # To get the article TOC
    def out_get_index(self):
        super_url = 'http://www.outsideonline.com/magazine/'
        super_soup = self.index_to_soup(super_url)
        div = super_soup.find(attrs={'class':'masonry-box width-four'})
        issue = div.findAll(name='article')[1]
        super_a = issue.find('a', href=True)
        return super_a.get('href')

    # To parse the article TOC
    def parse_index(self):
        parse_soup = self.index_to_soup(self.out_get_index())

        feeds = []
        feed_title = 'Articles'

        articles = []
        self.log('Found section:', feed_title)
        div = parse_soup.find(attrs={'class':'print clearfix'})
        for art in div.findAll(name='p'):
            art_info = art.find(name='a')
            if art_info is None:
                continue
            art_title = self.tag_to_string(art_info)
            url = art_info.get('href') + '?page=all'
            self.log.info('\tFound article:', art_title, 'at', url)
            article = {'title':art_title, 'url':url, 'date':''}
            #au = art.find(attrs={'class':'articleAuthors'})
            #if au is not None:
            #    article['author'] = self.tag_to_string(au)
            #desc = art.find(attrs={'class':'hover_text'})
            #if desc is not None:
            #    desc = self.tag_to_string(desc)
            #    if 'author' in article:
            #        desc = ' by ' + article['author'] + ' ' + desc
            #    article['description'] = desc
            articles.append(article)
        if articles:
            feeds.append((feed_title, articles))

        return feeds
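
Both new recipes build the same structure in parse_index. For reference, a standalone sketch of the value calibre expects back, with illustrative field values:

    # Shape of the value parse_index must return.
    articles = [{
        'title': 'An article',                       # required
        'url': 'http://example.com/story?page=all',  # required (placeholder)
        'date': '', 'description': '',               # optional extras
    }]
    index = [('Articles', articles)]  # list of (feed title, article list) pairs
    print(index[0][0], len(index[0][1]))
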
@ -6,7 +6,6 @@ class PhilosophyNow(BasicNewsRecipe):
 
     title = 'Philosophy Now'
     __author__ = 'Rick Shang'
 
     description = '''Philosophy Now is a lively magazine for everyone
     interested in ideas. It isn't afraid to tackle all the major questions of
     life, the universe and everything. Published every two months, it tries to
@ -27,7 +26,7 @@ class PhilosophyNow(BasicNewsRecipe):
     def get_browser(self):
         br = BasicNewsRecipe.get_browser()
         br.open('https://philosophynow.org/auth/login')
-        br.select_form(nr = 1)
+        br.select_form(name="loginForm")
         br['username'] = self.username
         br['password'] = self.password
         br.submit()
@ -50,19 +49,20 @@ class PhilosophyNow(BasicNewsRecipe):
         #Go to the main body
         current_issue_url = 'http://philosophynow.org/issues/' + issuenum
         soup = self.index_to_soup(current_issue_url)
-        div = soup.find ('div', attrs={'class':'articlesColumn'})
+        div = soup.find ('div', attrs={'class':'contentsColumn'})
 
         feeds = OrderedDict()
 
-        for post in div.findAll('h3'):
+        for post in div.findAll('h1'):
             articles = []
             a=post.find('a',href=True)
             if a is not None:
                 url="http://philosophynow.org" + a['href']
                 title=self.tag_to_string(a).strip()
-                s=post.findPrevious('h4')
+                s=post.findPrevious('h3')
                 section_title = self.tag_to_string(s).strip()
-                d=post.findNext('p')
+                d=post.findNext('h2')
                 desc = self.tag_to_string(d).strip()
                 articles.append({'title':title, 'url':url, 'description':desc, 'date':''})
 
@ -73,3 +73,5 @@ class PhilosophyNow(BasicNewsRecipe):
         ans = [(key, val) for key, val in feeds.iteritems()]
         return ans
 
+    def cleanup(self):
+        self.browser.open('http://philosophynow.org/auth/logout')
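
A minimal sketch of the subscription login pattern the get_browser fix above uses; selecting the form by name is more robust than by position. The login URL and form name mirror the recipe, the rest is a generic sketch:

    from calibre.web.feeds.news import BasicNewsRecipe

    class LoginExample(BasicNewsRecipe):  # illustrative only
        needs_subscription = True

        def get_browser(self):
            br = BasicNewsRecipe.get_browser()
            br.open('https://philosophynow.org/auth/login')
            br.select_form(name="loginForm")  # by name, not nr=1
            br['username'] = self.username
            br['password'] = self.password
            br.submit()
            return br

        def cleanup(self):
            # Called after downloading finishes; log out so the session
            # is not left open, as the added cleanup() above does.
            self.browser.open('http://philosophynow.org/auth/logout')
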
recipes/schattenblick.recipe (new file, 13 lines)
@ -0,0 +1,13 @@
from calibre.web.feeds.news import BasicNewsRecipe

class AdvancedUserRecipe1345802300(BasicNewsRecipe):
    title = u'Online-Zeitung Schattenblick'
    language = 'de'
    __author__ = 'ThB'
    publisher = u'MA-Verlag'
    category = u'Nachrichten'
    oldest_article = 7
    max_articles_per_feed = 100
    cover_url = 'http://www.schattenblick.de/mobi/rss/cover.jpg'
    feeds = [(u'Schattenblick Tagesausgabe', u'http://www.schattenblick.de/mobi/rss/rss.xml')]
recipes/spectator_magazine.recipe (new file, 60 lines)
@ -0,0 +1,60 @@
from calibre.web.feeds.recipes import BasicNewsRecipe

class NYTimes(BasicNewsRecipe):

    title = 'Spectator Magazine'
    __author__ = 'Krittika Goyal'
    description = 'Magazine'
    timefmt = ' [%d %b, %Y]'
    needs_subscription = False
    language = 'en'

    no_stylesheets = True
    #auto_cleanup = True
    #auto_cleanup_keep = '//div[@class="thumbnail"]'

    keep_only_tags = dict(name='div', attrs={'id':'content'})
    remove_tags = [
        dict(name='div', attrs={'id':['disqus_thread']}),
        ##dict(name='div', attrs={'id':['qrformdiv', 'inSection', 'alpha-inner']}),
        ##dict(name='form', attrs={'onsubmit':''}),
        #dict(name='section', attrs={'id':['article-quote', 'article-navigation']}),
    ]

    # To get the article TOC
    def spec_get_index(self):
        return self.index_to_soup('http://www.spectator.co.uk/')

    # To parse the article TOC
    def parse_index(self):
        parse_soup = self.index_to_soup('http://www.spectator.co.uk/')

        feeds = []
        feed_title = 'Spectator Magazine Articles'

        articles = []
        self.log('Found section:', feed_title)
        div = parse_soup.find(attrs={'class':'one-col-tax-widget magazine-list columns-1 post-8 taxonomy-category full-width widget section-widget icit-taxonomical-listings'})
        for art in div.findAll(name='h2'):
            art_info = art.find(name='a')
            if art_info is None:
                continue
            art_title = self.tag_to_string(art_info)
            url = art_info.get('href')
            self.log.info('\tFound article:', art_title, 'at', url)
            article = {'title':art_title, 'url':url, 'date':''}
            #au = art.find(attrs={'class':'articleAuthors'})
            #if au is not None:
            #    article['author'] = self.tag_to_string(au)
            #desc = art.find(attrs={'class':'hover_text'})
            #if desc is not None:
            #    desc = self.tag_to_string(desc)
            #    if 'author' in article:
            #        desc = ' by ' + article['author'] + ' ' + desc
            #    article['description'] = desc
            articles.append(article)
        if articles:
            feeds.append((feed_title, articles))

        return feeds
@ -1,9 +1,12 @@
 # vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
 __license__ = 'GPL v3'
-__copyright__ = '2012, Andreas Zeiser <andreas.zeiser@web.de>'
+__copyright__ = '2012, 2013 Andreas Zeiser <andreas.zeiser@web.de>'
 '''
 szmobil.sueddeutsche.de/
 '''
+# History
+# 2013.01.09 Fixed bugs in article titles containing "strong" and
+#            other small changes
 # 2012.08.04 Initial release
 
 from calibre import strftime
 from calibre.web.feeds.recipes import BasicNewsRecipe
@ -26,6 +29,8 @@ class SZmobil(BasicNewsRecipe):
     delay = 1
     cover_source = 'http://www.sueddeutsche.de/verlag'
 
+    # if you want to get rid of the date on the title page use
+    # timefmt = ''
     timefmt = ' [%a, %d %b, %Y]'
 
     root_url = 'http://szmobil.sueddeutsche.de/'
@ -50,7 +55,7 @@ class SZmobil(BasicNewsRecipe):
 
         return browser
 
     def parse_index(self):
         # find all sections
         src = self.index_to_soup('http://szmobil.sueddeutsche.de')
         feeds = []
@ -76,10 +81,10 @@ class SZmobil(BasicNewsRecipe):
             # first check if link is a special article in section "Meinungsseite"
             if itt.find('strong')!= None:
                 article_name = itt.strong.string
-                article_shorttitle = itt.contents[1]
+                if len(itt.contents)>1:
+                    shorttitles[article_id] = itt.contents[1]
 
                 articles.append( (article_name, article_url, article_id) )
-                shorttitles[article_id] = article_shorttitle
                 continue
 
@ -89,7 +94,7 @@ class SZmobil(BasicNewsRecipe):
             else:
                 article_name = itt.string
 
-            if (article_name[0:10] == " mehr"):
+            if (article_name.find(" mehr") == 0):
                 # just another link ("mehr") to an article
                 continue
 
@ -102,7 +107,9 @@ class SZmobil(BasicNewsRecipe):
         for article_name, article_url, article_id in articles:
             url = self.root_url + article_url
             title = article_name
-            pubdate = strftime('%a, %d %b')
+            # if you want to get rid of date for each article use
+            # pubdate = strftime('')
+            pubdate = strftime('[%a, %d %b]')
             description = ''
             if shorttitles.has_key(article_id):
                 description = shorttitles[article_id]
@ -115,3 +122,4 @@ class SZmobil(BasicNewsRecipe):
 
         return all_articles
Binary file not shown.

@ -4,7 +4,7 @@ __license__ = 'GPL v3'
 __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
 __docformat__ = 'restructuredtext en'
 __appname__ = u'calibre'
-numeric_version = (0, 9, 13)
+numeric_version = (0, 9, 14)
 __version__ = u'.'.join(map(unicode, numeric_version))
 __author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"
@ -214,7 +214,7 @@ class ANDROID(USBMS):
             'POCKET', 'ONDA_MID', 'ZENITHIN', 'INGENIC', 'PMID701C', 'PD',
             'PMP5097C', 'MASS', 'NOVO7', 'ZEKI', 'COBY', 'SXZ', 'USB_2.0',
             'COBY_MID', 'VS', 'AINOL', 'TOPWISE', 'PAD703', 'NEXT8D12',
-            'MEDIATEK']
+            'MEDIATEK', 'KEENHI']
     WINDOWS_MAIN_MEM = ['ANDROID_PHONE', 'A855', 'A853', 'INC.NEXUS_ONE',
             '__UMS_COMPOSITE', '_MB200', 'MASS_STORAGE', '_-_CARD', 'SGH-I897',
             'GT-I9000', 'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID',
@ -234,7 +234,8 @@ class ANDROID(USBMS):
             'THINKPAD_TABLET', 'SGH-T989', 'YP-G70', 'STORAGE_DEVICE',
             'ADVANCED', 'SGH-I727', 'USB_FLASH_DRIVER', 'ANDROID',
             'S5830I_CARD', 'MID7042', 'LINK-CREATE', '7035', 'VIEWPAD_7E',
-            'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS']
+            'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS',
+            'ICS']
     WINDOWS_CARD_A_MEM = ['ANDROID_PHONE', 'GT-I9000_CARD', 'SGH-I897',
             'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID', 'GT-P1000_CARD',
             'A70S', 'A101IT', '7', 'INCREDIBLE', 'A7EB', 'SGH-T849_CARD',
@ -37,7 +37,7 @@ class KOBO(USBMS):
 
     dbversion = 0
     fwversion = 0
-    supported_dbversion = 65
+    supported_dbversion = 75
     has_kepubs = False
 
     supported_platforms = ['windows', 'osx', 'linux']
@ -20,6 +20,9 @@ const calibre_device_entry_t calibre_mtp_device_table[] = {
     , { "Google", 0x18d1, "Nexus 10", 0x4ee2, DEVICE_FLAGS_ANDROID_BUGS}
     , { "Google", 0x18d1, "Nexus 10", 0x4ee1, DEVICE_FLAGS_ANDROID_BUGS}
 
+    // Kobo Arc
+    , { "Kobo", 0x2237, "Arc", 0xd108, DEVICE_FLAGS_ANDROID_BUGS}
+
     , { NULL, 0xffff, NULL, 0xffff, DEVICE_FLAG_NONE }
 };
@ -13,7 +13,7 @@ from collections import namedtuple
 from functools import partial
 
 from calibre import prints, as_unicode
-from calibre.constants import plugins
+from calibre.constants import plugins, islinux
 from calibre.ptempfile import SpooledTemporaryFile
 from calibre.devices.errors import OpenFailed, DeviceError, BlacklistedDevice
 from calibre.devices.mtp.base import MTPDeviceBase, synchronous, debug
@ -44,6 +44,17 @@ class MTP_DEVICE(MTPDeviceBase):
         self.blacklisted_devices = set()
         self.ejected_devices = set()
         self.currently_connected_dev = None
+        self._is_device_mtp = None
+        if islinux:
+            from calibre.devices.mtp.unix.sysfs import MTPDetect
+            self._is_device_mtp = MTPDetect()
+
+    def is_device_mtp(self, d, debug=None):
+        ''' Returns True iff the _is_device_mtp check returns True and libmtp
+        is able to probe the device successfully. '''
+        if self._is_device_mtp is None: return False
+        return (self._is_device_mtp(d, debug=debug) and
+                self.libmtp.is_mtp_device(d.busnum, d.devnum))
 
     def set_debug_level(self, lvl):
         self.libmtp.set_debug_level(lvl)
@ -77,7 +88,9 @@ class MTP_DEVICE(MTPDeviceBase):
         for d in devs:
             ans = cache.get(d, None)
             if ans is None:
-                ans = (d.vendor_id, d.product_id) in self.known_devices
+                ans = (
+                    (d.vendor_id, d.product_id) in self.known_devices or
+                    self.is_device_mtp(d))
                 cache[d] = ans
             if ans:
                 return d
@ -95,12 +108,13 @@ class MTP_DEVICE(MTPDeviceBase):
             err = 'startup() not called on this device driver'
             p(err)
             return False
-        devs = [d for d in devices_on_system if (d.vendor_id, d.product_id)
-                in self.known_devices and d.vendor_id != APPLE]
+        devs = [d for d in devices_on_system if
+                ( (d.vendor_id, d.product_id) in self.known_devices or
+                 self.is_device_mtp(d, debug=p)) and d.vendor_id != APPLE]
         if not devs:
-            p('No known MTP devices connected to system')
+            p('No MTP devices connected to system')
            return False
-        p('Known MTP devices connected:')
+        p('MTP devices connected:')
         for d in devs: p(d)
 
         for d in devs:
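
A condensed sketch of the detection policy the hunks above implement: a connected device counts as MTP if it is either in the static vendor/product table, or sysfs reports an interface named MTP and a libmtp probe succeeds. The helper name and parameters here are illustrative, not calibre API:

    def looks_like_mtp(dev, known_devices, sysfs_detect, libmtp):
        # Static table first: cheap and authoritative.
        if (dev.vendor_id, dev.product_id) in known_devices:
            return True
        # Otherwise use the sysfs hint, confirmed by probing with libmtp.
        return (sysfs_detect is not None and sysfs_detect(dev) and
                libmtp.is_mtp_device(dev.busnum, dev.devnum))
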
@ -662,13 +662,6 @@ is_mtp_device(PyObject *self, PyObject *args) {
 
     if (!PyArg_ParseTuple(args, "ii", &busnum, &devnum)) return NULL;
 
-    /*
-     * LIBMTP_Check_Specific_Device does not seem to work at least on my linux
-     * system. Need to investigate why later. Most devices are in the device
-     * table so this is not terribly important.
-     */
-    /* LIBMTP_Set_Debug(LIBMTP_DEBUG_ALL); */
-    /* printf("Calling check: %d %d\n", busnum, devnum); */
     Py_BEGIN_ALLOW_THREADS;
     ans = LIBMTP_Check_Specific_Device(busnum, devnum);
     Py_END_ALLOW_THREADS;
src/calibre/devices/mtp/unix/sysfs.py (new file, 53 lines)
@ -0,0 +1,53 @@
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
from __future__ import (unicode_literals, division, absolute_import,
                        print_function)

__license__ = 'GPL v3'
__copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import os, glob

class MTPDetect(object):

    SYSFS_PATH = os.environ.get('SYSFS_PATH', '/sys')

    def __init__(self):
        self.base = os.path.join(self.SYSFS_PATH, 'subsystem', 'usb', 'devices')
        if not os.path.exists(self.base):
            self.base = os.path.join(self.SYSFS_PATH, 'bus', 'usb', 'devices')
        self.ok = os.path.exists(self.base)

    def __call__(self, dev, debug=None):
        '''
        Check if the device has an interface named "MTP" using sysfs, which
        avoids probing the device.
        '''
        if not self.ok: return False

        def read(x):
            try:
                with open(x, 'rb') as f:
                    return f.read()
            except EnvironmentError:
                pass

        ipath = os.path.join(self.base, '{0}-*/{0}-*/interface'.format(dev.busnum))
        for x in glob.glob(ipath):
            raw = read(x)
            if not raw or raw.strip() != b'MTP': continue
            raw = read(os.path.join(os.path.dirname(os.path.dirname(x)),
                'devnum'))
            try:
                if raw and int(raw) == dev.devnum:
                    if debug is not None:
                        debug('Unknown device {0} claims to be an MTP device'
                                .format(dev))
                    return True
            except (ValueError, TypeError):
                continue

        return False
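
A usage sketch for the new MTPDetect class, assuming a device object that exposes busnum and devnum as calibre's USB scanner provides; the namedtuple below is a stand-in for that object:

    from collections import namedtuple
    from calibre.devices.mtp.unix.sysfs import MTPDetect

    Dev = namedtuple('Dev', 'busnum devnum')  # stand-in for the scanner's device object
    detect = MTPDetect()
    if detect.ok and detect(Dev(busnum=1, devnum=7)):
        print('device exports a USB interface named MTP')
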
@ -36,7 +36,15 @@ class SubsetFonts(object):
             self.oeb.manifest.remove(font['item'])
             font['rule'].parentStyleSheet.deleteRule(font['rule'])
 
+        fonts = {}
         for font in self.embedded_fonts:
+            item, chars = font['item'], font['chars']
+            if item.href in fonts:
+                fonts[item.href]['chars'] |= chars
+            else:
+                fonts[item.href] = font
+
+        for font in fonts.itervalues():
             if not font['chars']:
                 self.log('The font %s is unused. Removing it.'%font['src'])
                 remove(font)
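
The hunk above merges per-reference character sets by font href before the unused check, so a font referenced from several stylesheets is only removed when no reference uses any characters. The pattern in isolation, with toy data:

    # Merge usage sets keyed by href, then act on the merged view.
    refs = [('f.ttf', {'a'}), ('f.ttf', {'b'}), ('g.ttf', set())]
    fonts = {}
    for href, chars in refs:
        if href in fonts:
            fonts[href] |= chars
        else:
            fonts[href] = set(chars)
    assert sorted(fonts['f.ttf']) == ['a', 'b']  # kept: used somewhere
    assert not fonts['g.ttf']                    # empty: safe to remove
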
@ -9,6 +9,7 @@ __docformat__ = 'restructuredtext en'
 
 import codecs, zlib
 from io import BytesIO
+from datetime import datetime
 
 from calibre.constants import plugins, ispy3
 
@ -65,14 +66,20 @@ def fmtnum(o):
 def serialize(o, stream):
     if isinstance(o, float):
         stream.write_raw(pdf_float(o).encode('ascii'))
+    elif isinstance(o, bool):
+        # Must check bool before int as bools are subclasses of int
+        stream.write_raw(b'true' if o else b'false')
     elif isinstance(o, (int, long)):
         stream.write_raw(icb(o))
     elif hasattr(o, 'pdf_serialize'):
         o.pdf_serialize(stream)
     elif o is None:
         stream.write_raw(b'null')
-    elif isinstance(o, bool):
-        stream.write_raw(b'true' if o else b'false')
+    elif isinstance(o, datetime):
+        val = o.strftime("D:%Y%m%d%H%M%%02d%z")%min(59, o.second)
+        if datetime.tzinfo is not None:
+            val = "(%s'%s')"%(val[:-2], val[-2:])
+        stream.write(val.encode('ascii'))
     else:
         raise ValueError('Unknown object: %r'%o)
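
The reordering in serialize() matters because bool is a subclass of int in Python, so the int branch would otherwise match first and write 1/0 where PDF expects the keywords true/false. A two-line check:

    # bool must be tested before int: True is an int...
    assert isinstance(True, int)
    # ...so without the early bool branch it would serialize as 1, not true.
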
@ -52,7 +52,6 @@ class PdfEngine(QPaintEngine):
     FEATURES = QPaintEngine.AllFeatures & ~(
             QPaintEngine.PorterDuff | QPaintEngine.PerspectiveTransform
             | QPaintEngine.ObjectBoundingModeGradients
-            | QPaintEngine.LinearGradientFill
             | QPaintEngine.RadialGradientFill
             | QPaintEngine.ConicalGradientFill
             )
@ -82,7 +81,7 @@ class PdfEngine(QPaintEngine):
                 self.bottom_margin) / self.pixel_height
 
         self.pdf_system = QTransform(sx, 0, 0, -sy, dx, dy)
-        self.graphics = Graphics()
+        self.graphics = Graphics(self.pixel_width, self.pixel_height)
         self.errors_occurred = False
         self.errors, self.debug = errors, debug
         self.fonts = {}
@ -345,8 +344,8 @@ class PdfDevice(QPaintDevice): # {{{
             return int(round(self.body_height * self.ydpi / 72.0))
         return 0
 
-    def end_page(self):
-        self.engine.end_page()
+    def end_page(self, *args, **kwargs):
+        self.engine.end_page(*args, **kwargs)
 
     def init_page(self):
         self.engine.init_page()
@ -47,7 +47,7 @@ def get_page_size(opts, for_comic=False): # {{{
     if opts.unit == 'devicepixel':
         factor = 72.0 / opts.output_profile.dpi
     else:
-        {'point':1.0, 'inch':inch, 'cicero':cicero,
+        factor = {'point':1.0, 'inch':inch, 'cicero':cicero,
                   'didot':didot, 'pica':pica, 'millimeter':mm,
                   'centimeter':cm}[opts.unit]
     page_size = (factor*width, factor*height)
@ -279,6 +279,7 @@ class PDFWriter(QObject):
             if self.doc.errors_occurred:
                 break
 
-            self.doc.add_links(self.current_item, start_page, amap['links'],
-                               amap['anchors'])
+            if not self.doc.errors_occurred:
+                self.doc.add_links(self.current_item, start_page, amap['links'],
+                                   amap['anchors'])
@ -7,31 +7,147 @@ __license__ = 'GPL v3'
 __copyright__ = '2013, Kovid Goyal <kovid at kovidgoyal.net>'
 __docformat__ = 'restructuredtext en'
 
+import sys, copy
 from future_builtins import map
+from collections import namedtuple
 
-from PyQt4.Qt import (QPointF)
+import sip
+from PyQt4.Qt import QLinearGradient, QPointF
 
-from calibre.ebooks.pdf.render.common import Stream
+from calibre.ebooks.pdf.render.common import Name, Array, Dictionary
 
-def generate_linear_gradient_shader(gradient, page_rect, is_transparent=False):
-    pass
+Stop = namedtuple('Stop', 't color')
 
-class LinearGradient(Stream):
+class LinearGradientPattern(Dictionary):
 
-    def __init__(self, brush, matrix, pixel_page_width, pixel_page_height):
-        is_opaque = brush.isOpaque()
-        gradient = brush.gradient()
-        inv = matrix.inverted()[0]
-
-        page_rect = tuple(map(inv.map, (
-            QPointF(0, 0), QPointF(pixel_page_width, 0), QPointF(0, pixel_page_height),
-            QPointF(pixel_page_width, pixel_page_height))))
-
-        shader = generate_linear_gradient_shader(gradient, page_rect)
-        alpha_shader = None
-        if not is_opaque:
-            alpha_shader = generate_linear_gradient_shader(gradient, page_rect, True)
-
-        shader, alpha_shader
+    def __init__(self, brush, matrix, pdf, pixel_page_width, pixel_page_height):
+        self.matrix = (matrix.m11(), matrix.m12(), matrix.m21(), matrix.m22(),
+                       matrix.dx(), matrix.dy())
+        gradient = sip.cast(brush.gradient(), QLinearGradient)
+
+        start, stop, stops = self.spread_gradient(gradient, pixel_page_width,
+                                                  pixel_page_height, matrix)
+
+        # TODO: Handle colors with different opacities
+        self.const_opacity = stops[0].color[-1]
+
+        funcs = Array()
+        bounds = Array()
+        encode = Array()
+
+        for i, current_stop in enumerate(stops):
+            if i < len(stops) - 1:
+                next_stop = stops[i+1]
+                func = Dictionary({
+                    'FunctionType': 2,
+                    'Domain': Array([0, 1]),
+                    'C0': Array(current_stop.color[:3]),
+                    'C1': Array(next_stop.color[:3]),
+                    'N': 1,
+                })
+                funcs.append(func)
+                encode.extend((0, 1))
+                if i+1 < len(stops) - 1:
+                    bounds.append(next_stop.t)
+
+        func = Dictionary({
+            'FunctionType': 3,
+            'Domain': Array([stops[0].t, stops[-1].t]),
+            'Functions': funcs,
+            'Bounds': bounds,
+            'Encode': encode,
+        })
+
+        shader = Dictionary({
+            'ShadingType': 2,
+            'ColorSpace': Name('DeviceRGB'),
+            'AntiAlias': True,
+            'Coords': Array([start.x(), start.y(), stop.x(), stop.y()]),
+            'Function': func,
+            'Extend': Array([True, True]),
+        })
+
+        Dictionary.__init__(self, {
+            'Type': Name('Pattern'),
+            'PatternType': 2,
+            'Shading': shader,
+            'Matrix': Array(self.matrix),
+        })
+
+        self.cache_key = (self.__class__.__name__, self.matrix,
+                          tuple(shader['Coords']), stops)
+
+    def spread_gradient(self, gradient, pixel_page_width, pixel_page_height,
+                        matrix):
+        start = gradient.start()
+        stop = gradient.finalStop()
+        stops = list(map(lambda x: [x[0], x[1].getRgbF()], gradient.stops()))
+        spread = gradient.spread()
+        if spread != gradient.PadSpread:
+            inv = matrix.inverted()[0]
+            page_rect = tuple(map(inv.map, (
+                QPointF(0, 0), QPointF(pixel_page_width, 0), QPointF(0, pixel_page_height),
+                QPointF(pixel_page_width, pixel_page_height))))
+            maxx = maxy = -sys.maxint-1
+            minx = miny = sys.maxint
+
+            for p in page_rect:
+                minx, maxx = min(minx, p.x()), max(maxx, p.x())
+                miny, maxy = min(miny, p.y()), max(maxy, p.y())
+
+            def in_page(point):
+                return (minx <= point.x() <= maxx and miny <= point.y() <= maxy)
+
+            offset = stop - start
+            llimit, rlimit = start, stop
+
+            reflect = False
+            base_stops = copy.deepcopy(stops)
+            reversed_stops = list(reversed(stops))
+            do_reflect = spread == gradient.ReflectSpread
+            totl = abs(stops[-1][0] - stops[0][0])
+            intervals = [abs(stops[i+1][0] - stops[i][0])/totl
+                         for i in xrange(len(stops)-1)]
+
+            while in_page(llimit):
+                reflect ^= True
+                llimit -= offset
+                estops = reversed_stops if (reflect and do_reflect) else base_stops
+                stops = copy.deepcopy(estops) + stops
+
+            first_is_reflected = reflect
+            reflect = False
+
+            while in_page(rlimit):
+                reflect ^= True
+                rlimit += offset
+                estops = reversed_stops if (reflect and do_reflect) else base_stops
+                stops = stops + copy.deepcopy(estops)
+
+            start, stop = llimit, rlimit
+
+            num = len(stops) // len(base_stops)
+            if num > 1:
+                # Adjust the stop parameter values
+                t = base_stops[0][0]
+                rlen = totl/num
+                reflect = first_is_reflected ^ True
+                intervals = [i*rlen for i in intervals]
+                rintervals = list(reversed(intervals))
+
+                for i in xrange(num):
+                    reflect ^= True
+                    pos = i * len(base_stops)
+                    tvals = [t]
+                    for ival in (rintervals if reflect and do_reflect else
+                                 intervals):
+                        tvals.append(tvals[-1] + ival)
+                    for j in xrange(len(base_stops)):
+                        stops[pos+j][0] = tvals[j]
+                    t = tvals[-1]
+
+                # In case there were rounding errors
+                stops[-1][0] = base_stops[-1][0]
+
+        return start, stop, tuple(Stop(s[0], s[1]) for s in stops)
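
For reference, the PDF structure LinearGradientPattern assembles: each pair of adjacent stops becomes an exponential-interpolation function (FunctionType 2), and a stitching function (FunctionType 3) joins them along the gradient axis. A plain-dict sketch of that shape for a two-stop red-to-blue gradient; the key names follow the code above, the values are illustrative:

    # Shape of the axial-shading dictionary assembled above.
    shader = {
        'ShadingType': 2,           # axial (linear) shading
        'ColorSpace': 'DeviceRGB',
        'Coords': [0, 0, 100, 0],   # gradient axis endpoints in pattern space
        'Extend': [True, True],     # pad colours beyond both endpoints
        'Function': {
            'FunctionType': 3,      # stitching function over the stop domain
            'Domain': [0.0, 1.0],
            'Functions': [{         # one FunctionType 2 ramp per stop interval
                'FunctionType': 2, 'Domain': [0, 1],
                'C0': [1.0, 0.0, 0.0], 'C1': [0.0, 0.0, 1.0], 'N': 1,
            }],
            'Bounds': [],           # interior stop positions (none for two stops)
            'Encode': [0, 1],       # sample each ramp over its full range
        },
    }
    print(shader['Function']['Functions'][0]['C0'])
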
@ -16,6 +16,7 @@ from PyQt4.Qt import (
 from calibre.ebooks.pdf.render.common import (
     Name, Array, fmtnum, Stream, Dictionary)
 from calibre.ebooks.pdf.render.serialize import Path
+from calibre.ebooks.pdf.render.gradients import LinearGradientPattern
 
 def convert_path(path): # {{{
     p = Path()
@ -280,10 +281,11 @@ class GraphicsState(object):
 
 class Graphics(object):
 
-    def __init__(self):
+    def __init__(self, page_width_px, page_height_px):
         self.base_state = GraphicsState()
         self.current_state = GraphicsState()
         self.pending_state = None
+        self.page_width_px, self.page_height_px = (page_width_px, page_height_px)
 
     def begin(self, pdf):
         self.pdf = pdf
@ -360,7 +362,7 @@ class Graphics(object):
         pdf = self.pdf
 
         pattern = color = pat = None
-        opacity = 1.0
+        opacity = global_opacity
         do_fill = True
 
         matrix = (QTransform.fromTranslate(brush_origin.x(), brush_origin.y())
@ -369,29 +371,30 @@ class Graphics(object):
         self.brushobj = None
 
         if style <= Qt.DiagCrossPattern:
-            opacity = global_opacity * vals[-1]
+            opacity *= vals[-1]
             color = vals[:3]
 
             if style > Qt.SolidPattern:
                 pat = QtPattern(style, matrix)
-                pattern = pdf.add_pattern(pat)
-
-            if opacity < 1e-4 or style == Qt.NoBrush:
-                do_fill = False
 
         elif style == Qt.TexturePattern:
             pat = TexturePattern(brush.texture(), matrix, pdf)
-            opacity = global_opacity
             if pat.paint_type == 2:
                 opacity *= vals[-1]
                 color = vals[:3]
-            pattern = pdf.add_pattern(pat)
 
-            if opacity < 1e-4 or style == Qt.NoBrush:
-                do_fill = False
+        elif style == Qt.LinearGradientPattern:
+            pat = LinearGradientPattern(brush, matrix, pdf, self.page_width_px,
+                                        self.page_height_px)
+            opacity *= pat.const_opacity
+        # TODO: Add support for radial/conical gradient fills
+
+        if opacity < 1e-4 or style == Qt.NoBrush:
+            do_fill = False
         self.brushobj = Brush(brush_origin, pat, color)
-        # TODO: Add support for gradient fills
+
+        if pat is not None:
+            pattern = pdf.add_pattern(pat)
 
         return color, opacity, pattern, do_fill
 
     def apply_stroke(self, state, pdf_system, painter):
@ -453,7 +456,10 @@ class Graphics(object):
         TexturePatterns and it also uses TexturePatterns to emulate gradients,
         leading to brokenness. So this method allows the paint engine to update
        the brush origin before painting an object. While not perfect, this is
-        better than nothing.
+        better than nothing. The problem is that if the rect being filled has a
+        border, then QtWebKit generates an image of the rect size - border but
+        fills the full rect, and there's no way for the paint engine to know
+        that and adjust the brush origin.
         '''
         if not hasattr(self, 'last_fill') or not self.current_state.do_fill:
             return
@ -17,10 +17,14 @@ from calibre.ebooks.pdf.render.common import Array, Name, Dictionary, String
 class Destination(Array):
 
     def __init__(self, start_page, pos, get_pageref):
-        super(Destination, self).__init__(
-            [get_pageref(start_page + pos['column']), Name('XYZ'), pos['left'],
-                pos['top'], None]
-        )
+        pnum = start_page + pos['column']
+        try:
+            pref = get_pageref(pnum)
+        except IndexError:
+            pref = get_pageref(pnum-1)
+        super(Destination, self).__init__([
+            pref, Name('XYZ'), pos['left'], pos['top'], None
+        ])
 
 class Links(object):
@ -18,6 +18,7 @@ from calibre.ebooks.pdf.render.common import (
     fmtnum)
 from calibre.ebooks.pdf.render.fonts import FontManager
 from calibre.ebooks.pdf.render.links import Links
+from calibre.utils.date import utcnow
 
 PDFVER = b'%PDF-1.3'
 
@ -259,12 +260,15 @@ class PDFStream(object):
         self.objects.add(PageTree(page_size))
         self.objects.add(Catalog(self.page_tree))
         self.current_page = Page(self.page_tree, compress=self.compress)
-        self.info = Dictionary({'Creator':String(creator),
-                                'Producer':String(creator)})
+        self.info = Dictionary({
+            'Creator':String(creator),
+            'Producer':String(creator),
+            'CreationDate': utcnow(),
+        })
         self.stroke_opacities, self.fill_opacities = {}, {}
         self.font_manager = FontManager(self.objects, self.compress)
         self.image_cache = {}
-        self.pattern_cache = {}
+        self.pattern_cache, self.shader_cache = {}, {}
         self.debug = debug
         self.links = Links(self, mark_links, page_size)
         i = QImage(1, 1, QImage.Format_ARGB32)
@ -447,6 +451,11 @@ class PDFStream(object):
             self.pattern_cache[pattern.cache_key] = self.objects.add(pattern)
         return self.current_page.add_pattern(self.pattern_cache[pattern.cache_key])
 
+    def add_shader(self, shader):
+        if shader.cache_key not in self.shader_cache:
+            self.shader_cache[shader.cache_key] = self.objects.add(shader)
+        return self.shader_cache[shader.cache_key]
+
     def draw_image(self, x, y, width, height, imgref):
         name = self.current_page.add_image(imgref)
         self.current_page.write('q %s 0 0 %s %s %s cm '%(fmtnum(width),
@ -83,13 +83,16 @@ def run(dev, func):
         raise SystemExit(1)
 
 def brush(p, xmax, ymax):
-    x = xmax/3
+    x = 0
     y = 0
     w = xmax/2
     pix = QPixmap(I('console.png'))
     p.fillRect(x, y, w, w, QBrush(pix))
 
     p.fillRect(0, y+xmax/1.9, w, w, QBrush(pix))
     g = QLinearGradient(QPointF(x, y+w/3), QPointF(x, y+(2*w/3)))
     g.setColorAt(0, QColor('#f00'))
     g.setColorAt(0.5, QColor('#fff'))
     g.setColorAt(1, QColor('#00f'))
     g.setSpread(g.ReflectSpread)
     p.fillRect(x, y, w, w, QBrush(g))
     p.drawRect(x, y, w, w)
 
 def pen(p, xmax, ymax):
     pix = QPixmap(I('console.png'))
@ -110,7 +113,7 @@ def main():
     app
     tdir = os.path.abspath('.')
     pdf = os.path.join(tdir, 'painter.pdf')
-    func = full
+    func = brush
     dpi = 100
     with open(pdf, 'wb') as f:
         dev = PdfDevice(f, xdpi=dpi, ydpi=dpi, compress=False)
@ -411,7 +411,7 @@
     <item row="6" column="3" colspan="2">
      <widget class="QCheckBox" name="opt_subset_embedded_fonts">
       <property name="text">
-       <string>&amp;Subset all embedded fonts (Experimental)</string>
+       <string>&amp;Subset all embedded fonts</string>
       </property>
      </widget>
     </item>
@ -201,6 +201,7 @@ class SearchBar(QWidget): # {{{
         x.setObjectName("search")
         x.setToolTip(_("<p>Search the list of books by title, author, publisher, "
                        "tags, comments, etc.<br><br>Words separated by spaces are ANDed"))
+        x.setMinimumContentsLength(10)
         l.addWidget(x)
 
         self.search_button = QToolButton()
@ -225,7 +226,7 @@ class SearchBar(QWidget): # {{{
 
         x = parent.saved_search = SavedSearchBox(self)
         x.setMaximumSize(QSize(150, 16777215))
-        x.setMinimumContentsLength(15)
+        x.setMinimumContentsLength(10)
         x.setObjectName("saved_search")
         l.addWidget(x)
@ -88,13 +88,16 @@ class DateDelegate(QStyledItemDelegate): # {{{
 
 class PubDateDelegate(QStyledItemDelegate): # {{{
 
+    def __init__(self, *args, **kwargs):
+        QStyledItemDelegate.__init__(self, *args, **kwargs)
+        self.format = tweaks['gui_pubdate_display_format']
+        if self.format is None:
+            self.format = 'MMM yyyy'
+
     def displayText(self, val, locale):
         d = val.toDateTime()
         if d <= UNDEFINED_QDATETIME:
             return ''
-        self.format = tweaks['gui_pubdate_display_format']
-        if self.format is None:
-            self.format = 'MMM yyyy'
         return format_date(qt_to_dt(d, as_utc=False), self.format)
 
     def createEditor(self, parent, option, index):
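
The delegate now reads the gui_pubdate_display_format tweak once at construction instead of on every repaint. The fallback pattern in isolation; the plain dict stands in for calibre's tweaks mapping:

    tweaks = {'gui_pubdate_display_format': None}  # stand-in for calibre.utils.config.tweaks
    fmt = tweaks['gui_pubdate_display_format']
    if fmt is None:
        fmt = 'MMM yyyy'  # default publication-date format
    print(fmt)  # -> MMM yyyy
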
@ -197,7 +197,7 @@ class NookColor(Nook):
 
 class NookTablet(NookColor):
     id = 'nook_tablet'
-    name = 'Nook Tablet'
+    name = 'Nook Tablet/HD'
 
 class CybookG3(Device):
@ -17,7 +17,7 @@ from calibre.ebooks import calibre_cover
 from calibre.library import current_library_name
 from calibre.library.catalogs import AuthorSortMismatchException, EmptyCatalogException
 from calibre.ptempfile import PersistentTemporaryFile
-from calibre.utils.localization import get_lang
+from calibre.utils.localization import calibre_langcode_to_name, canonicalize_lang, get_lang
 
 Option = namedtuple('Option', 'option, default, dest, action, help')
 
@ -223,7 +223,8 @@ class EPUB_MOBI(CatalogPlugin):
                 self.fmt,
                 'for %s ' % opts.output_profile if opts.output_profile else '',
                 'CLI' if opts.cli_environment else 'GUI',
-                get_lang()))
+                calibre_langcode_to_name(canonicalize_lang(get_lang()), localize=False))
+            )
 
         # If exclude_genre is blank, assume user wants all tags as genres
         if opts.exclude_genre.strip() == '':
@ -236,7 +236,7 @@ class ContentServer(object):
             newmi = mi.deepcopy_metadata()
             newmi.template_to_attribute(mi, cpb)
 
-            if format in ('MOBI', 'EPUB'):
+            if format in {'MOBI', 'EPUB', 'AZW3'}:
                 # Write the updated file
                 from calibre.ebooks.metadata.meta import set_metadata
                 set_metadata(fmt, newmi, format.lower())
File diff suppressed because it is too large (67 more files)
Some files were not shown because too many files have changed in this diff