mirror of https://github.com/kovidgoyal/calibre.git
0.9.13
commit 68e7838c70
@@ -19,6 +19,57 @@
 # new recipes:
 #   - title:
 
+- version: 0.9.13
+  date: 2013-01-04
+
+  new features:
+    - title: "Complete rewrite of the PDF Output engine, to support links and fix various bugs"
+      type: major
+      description: "calibre now has a new PDF output engine that supports links in the text. It also fixes various bugs, detailed below. In order to implement support for links and fix these bugs, the engine had to be completely rewritten, so there may be some regressions."
+
+    - title: "Show disabled device plugins in Preferences->Ignored Devices"
+
+    - title: "Get Books: Fix Smashwords, Google books and B&N stores. Add Nook UK store"
+
+    - title: "Allow series numbers lower than -100 for custom series columns."
+      tickets: [1094475]
+
+    - title: "Add mass storage driver for rockchip based android smart phones"
+      tickets: [1087809]
+
+    - title: "Add a clear ratings button to the edit metadata dialog"
+
+  bug fixes:
+    - title: "PDF Output: Fix custom page sizes not working on OS X"
+
+    - title: "PDF Output: Fix embedding of many fonts not supported (note that embedding of OpenType fonts with Postscript outlines is still not supported on windows, though it is supported on other operating systems)"
+
+    - title: "PDF Output: Fix crashes converting some books to PDF on OS X"
+      tickets: [1087688]
+
+    - title: "HTML Input: Handle entities inside href attributes when following the links in an HTML file."
+      tickets: [1094203]
+
+    - title: "Content server: Fix custom icons not used for sub categories"
+      tickets: [1095016]
+
+    - title: "Force use of non-unicode constants in compiled templates. Fixes a problem with regular expression character classes and probably other things."
+
+    - title: "Kobo driver: Do not error out if there are invalid dates in the device database"
+      tickets: [1094597]
+
+    - title: "Content server: Fix for non-unicode hostnames when using mDNS"
+      tickets: [1094063]
+
+  improved recipes:
+    - Today's Zaman
+    - The Economist
+    - Foreign Affairs
+    - New York Times
+    - Alternet
+    - Harper's Magazine
+    - La Stampa
+
 - version: 0.9.12
   date: 2012-12-28
README
@@ -1,7 +1,7 @@
-calibre is an e-book library manager. It can view, convert and catalog e-books \
-in most of the major e-book formats. It can also talk to e-book reader \
-devices. It can go out to the internet and fetch metadata for your books. \
-It can download newspapers and convert them into e-books for convenient \
+calibre is an e-book library manager. It can view, convert and catalog e-books
+in most of the major e-book formats. It can also talk to e-book reader
+devices. It can go out to the internet and fetch metadata for your books.
+It can download newspapers and convert them into e-books for convenient
 reading. It is cross platform, running on Linux, Windows and OS X.
 
 For screenshots: https://calibre-ebook.com/demo
@@ -15,5 +15,5 @@ bzr branch lp:calibre
 To update your copy of the source code:
 bzr merge
 
-Tarballs of the source code for each release are now available \
+Tarballs of the source code for each release are now available
 at http://code.google.com/p/calibre-ebook
@@ -162,7 +162,8 @@ Follow these steps to find the problem:
   * If you are connecting an Apple iDevice (iPad, iPod Touch, iPhone), use the 'Connect to iTunes' method in the 'Getting started' instructions in `Calibre + Apple iDevices: Start here <http://www.mobileread.com/forums/showthread.php?t=118559>`_.
   * Make sure you are running the latest version of |app|. The latest version can always be downloaded from `the calibre website <http://calibre-ebook.com/download>`_.
   * Ensure your operating system is seeing the device. That is, the device should show up in Windows Explorer (in Windows) or Finder (in OS X).
-  * In |app|, go to Preferences->Plugins->Device Interface plugin and make sure the plugin for your device is enabled, the plugin icon next to it should be green when it is enabled.
+  * In |app|, go to Preferences->Ignored Devices and check that your device
+    is not being ignored
   * If all the above steps fail, go to Preferences->Miscellaneous and click debug device detection with your device attached and post the output as a ticket on `the calibre bug tracker <http://bugs.calibre-ebook.com>`_.
 
 My device is non-standard or unusual. What can I do to connect to it?
@@ -668,6 +669,9 @@ There are three possible things I know of, that can cause this:
       the blacklist of programs inside RoboForm to fix this. Or uninstall
       RoboForm.
 
+    * The Logitech SetPoint Settings application causes random crashes in
+      |app| when it is open. Close it before starting |app|.
+
 |app| is not starting on OS X?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -10,14 +10,12 @@ class Alternet(BasicNewsRecipe):
     category = 'News, Magazine'
     description = 'News magazine and online community'
     feeds = [
-        (u'Front Page', u'http://feeds.feedblitz.com/alternet'),
-        (u'Breaking News', u'http://feeds.feedblitz.com/alternet_breaking_news'),
-        (u'Top Ten Campaigns', u'http://feeds.feedblitz.com/alternet_top_10_campaigns'),
-        (u'Special Coverage Areas', u'http://feeds.feedblitz.com/alternet_coverage')
+        (u'Front Page', u'http://feeds.feedblitz.com/alternet')
     ]
 
     remove_attributes = ['width', 'align', 'cellspacing']
     remove_javascript = True
-    use_embedded_content = False
+    use_embedded_content = True
     no_stylesheets = True
     language = 'en'
     encoding = 'UTF-8'
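A note on the use_embedded_content switch flipped above: when it is True, BasicNewsRecipe builds each article from the HTML embedded in the RSS entry itself instead of downloading the linked page. A minimal sketch of a feed-embedded recipe; the class name and title are made up for illustration, not part of this commit:

from calibre.web.feeds.news import BasicNewsRecipe

class FeedEmbeddedExample(BasicNewsRecipe):  # hypothetical recipe, for illustration only
    title = 'Feed-embedded example'
    language = 'en'
    feeds = [(u'Front Page', u'http://feeds.feedblitz.com/alternet')]
    use_embedded_content = True  # render article bodies from the feed entries, no per-article fetch
    no_stylesheets = True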
@@ -70,18 +70,6 @@ class Economist(BasicNewsRecipe):
         return br
     '''
 
-    def get_cover_url(self):
-        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
-        div = soup.find('div', attrs={'class':lambda x: x and
-            'print-cover-links' in x})
-        a = div.find('a', href=True)
-        url = a.get('href')
-        if url.startswith('/'):
-            url = 'http://www.economist.com' + url
-        soup = self.index_to_soup(url)
-        div = soup.find('div', attrs={'class':'cover-content'})
-        img = div.find('img', src=True)
-        return img.get('src')
-
     def parse_index(self):
         return self.economist_parse_index()
@@ -92,7 +80,7 @@ class Economist(BasicNewsRecipe):
         if div is not None:
             img = div.find('img', src=True)
             if img is not None:
-                self.cover_url = img['src']
+                self.cover_url = re.sub('thumbnail','full',img['src'])
         feeds = OrderedDict()
         for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
             x}):

@@ -9,7 +9,7 @@ from calibre.web.feeds.news import BasicNewsRecipe
 from calibre.ebooks.BeautifulSoup import Tag, NavigableString
 from collections import OrderedDict
 
-import time, re
+import re
 
 class Economist(BasicNewsRecipe):
 
@@ -37,7 +37,6 @@ class Economist(BasicNewsRecipe):
     padding: 7px 0px 9px;
     }
     '''
-
     oldest_article = 7.0
     remove_tags = [
         dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),
@@ -46,7 +45,6 @@ class Economist(BasicNewsRecipe):
         {'class': lambda x: x and 'share-links-header' in x},
     ]
     keep_only_tags = [dict(id='ec-article-body')]
-    needs_subscription = False
     no_stylesheets = True
     preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
         lambda x:'</html>')]
@@ -55,27 +53,25 @@ class Economist(BasicNewsRecipe):
     # downloaded with connection reset by peer (104) errors.
     delay = 1
 
-    def get_cover_url(self):
-        soup = self.index_to_soup('http://www.economist.com/printedition/covers')
-        div = soup.find('div', attrs={'class':lambda x: x and
-            'print-cover-links' in x})
-        a = div.find('a', href=True)
-        url = a.get('href')
-        if url.startswith('/'):
-            url = 'http://www.economist.com' + url
-        soup = self.index_to_soup(url)
-        div = soup.find('div', attrs={'class':'cover-content'})
-        img = div.find('img', src=True)
-        return img.get('src')
+    needs_subscription = False
+    '''
+    def get_browser(self):
+        br = BasicNewsRecipe.get_browser()
+        if self.username and self.password:
+            br.open('http://www.economist.com/user/login')
+            br.select_form(nr=1)
+            br['name'] = self.username
+            br['pass'] = self.password
+            res = br.submit()
+            raw = res.read()
+            if '>Log out<' not in raw:
+                raise ValueError('Failed to login to economist.com. '
+                    'Check your username and password.')
+        return br
+    '''
 
     def parse_index(self):
-        try:
-            return self.economist_parse_index()
-        except:
-            raise
-            self.log.warn(
-                'Initial attempt to parse index failed, retrying in 30 seconds')
-            time.sleep(30)
-            return self.economist_parse_index()
+        return self.economist_parse_index()
 
     def economist_parse_index(self):
@@ -84,7 +80,7 @@ class Economist(BasicNewsRecipe):
         if div is not None:
             img = div.find('img', src=True)
             if img is not None:
-                self.cover_url = img['src']
+                self.cover_url = re.sub('thumbnail','full',img['src'])
         feeds = OrderedDict()
         for section in soup.findAll(attrs={'class':lambda x: x and 'section' in
             x}):
@@ -151,154 +147,3 @@ class Economist(BasicNewsRecipe):
             div.insert(2, img)
             table.replaceWith(div)
         return soup
-
-'''
-from calibre.web.feeds.news import BasicNewsRecipe
-from calibre.utils.threadpool import ThreadPool, makeRequests
-from calibre.ebooks.BeautifulSoup import Tag, NavigableString
-import time, string, re
-from datetime import datetime
-from lxml import html
-
-class Economist(BasicNewsRecipe):
-
-    title = 'The Economist (RSS)'
-    language = 'en'
-
-    __author__ = "Kovid Goyal"
-    description = ('Global news and current affairs from a European'
-            ' perspective. Best downloaded on Friday mornings (GMT).'
-            ' Much slower than the print edition based version.')
-    extra_css = '.headline {font-size: x-large;} \n h2 { font-size: small; } \n h1 { font-size: medium; }'
-    oldest_article = 7.0
-    cover_url = 'http://media.economist.com/sites/default/files/imagecache/print-cover-thumbnail/print-covers/currentcoverus_large.jpg'
-    #cover_url = 'http://www.economist.com/images/covers/currentcoverus_large.jpg'
-    remove_tags = [
-        dict(name=['script', 'noscript', 'title', 'iframe', 'cf_floatingcontent']),
-        dict(attrs={'class':['dblClkTrk', 'ec-article-info',
-            'share_inline_header', 'related-items']}),
-        {'class': lambda x: x and 'share-links-header' in x},
-    ]
-    keep_only_tags = [dict(id='ec-article-body')]
-    no_stylesheets = True
-    preprocess_regexps = [(re.compile('</html>.*', re.DOTALL),
-        lambda x:'</html>')]
-
-    def parse_index(self):
-        from calibre.web.feeds.feedparser import parse
-        if self.test:
-            self.oldest_article = 14.0
-        raw = self.index_to_soup(
-                'http://feeds.feedburner.com/economist/full_print_edition',
-                raw=True)
-        entries = parse(raw).entries
-        pool = ThreadPool(10)
-        self.feed_dict = {}
-        requests = []
-        for i, item in enumerate(entries):
-            title = item.get('title', _('Untitled article'))
-            published = item.date_parsed
-            if not published:
-                published = time.gmtime()
-            utctime = datetime(*published[:6])
-            delta = datetime.utcnow() - utctime
-            if delta.days*24*3600 + delta.seconds > 24*3600*self.oldest_article:
-                self.log.debug('Skipping article %s as it is too old.'%title)
-                continue
-            link = item.get('link', None)
-            description = item.get('description', '')
-            author = item.get('author', '')
-
-            requests.append([i, link, title, description, author, published])
-        if self.test:
-            requests = requests[:4]
-        requests = makeRequests(self.process_eco_feed_article, requests, self.eco_article_found,
-                self.eco_article_failed)
-        for r in requests: pool.putRequest(r)
-        pool.wait()
-
-        return self.eco_sort_sections([(t, a) for t, a in
-            self.feed_dict.items()])
-
-    def eco_sort_sections(self, feeds):
-        if not feeds:
-            raise ValueError('No new articles found')
-        order = {
-            'The World This Week': 1,
-            'Leaders': 2,
-            'Letters': 3,
-            'Briefing': 4,
-            'Business': 5,
-            'Finance And Economics': 6,
-            'Science & Technology': 7,
-            'Books & Arts': 8,
-            'International': 9,
-            'United States': 10,
-            'Asia': 11,
-            'Europe': 12,
-            'The Americas': 13,
-            'Middle East & Africa': 14,
-            'Britain': 15,
-            'Obituary': 16,
-        }
-        return sorted(feeds, cmp=lambda x,y:cmp(order.get(x[0], 100),
-            order.get(y[0], 100)))
-
-    def process_eco_feed_article(self, args):
-        from calibre import browser
-        i, url, title, description, author, published = args
-        br = browser()
-        ret = br.open(url)
-        raw = ret.read()
-        url = br.geturl().split('?')[0]+'/print'
-        root = html.fromstring(raw)
-        matches = root.xpath('//*[@class = "ec-article-info"]')
-        feedtitle = 'Miscellaneous'
-        if matches:
-            feedtitle = string.capwords(html.tostring(matches[-1], method='text',
-                encoding=unicode).split('|')[-1].strip())
-        return (i, feedtitle, url, title, description, author, published)
-
-    def eco_article_found(self, req, result):
-        from calibre.web.feeds import Article
-        i, feedtitle, link, title, description, author, published = result
-        self.log('Found print version for article:', title, 'in', feedtitle,
-                'at', link)
-
-        a = Article(i, title, link, author, description, published, '')
-
-        article = dict(title=a.title, description=a.text_summary,
-            date=time.strftime(self.timefmt, a.date), author=a.author, url=a.url)
-        if feedtitle not in self.feed_dict:
-            self.feed_dict[feedtitle] = []
-        self.feed_dict[feedtitle].append(article)
-
-    def eco_article_failed(self, req, tb):
-        self.log.error('Failed to download %s with error:'%req.args[0][2])
-        self.log.debug(tb)
-
-    def eco_find_image_tables(self, soup):
-        for x in soup.findAll('table', align=['right', 'center']):
-            if len(x.findAll('font')) in (1,2) and len(x.findAll('img')) == 1:
-                yield x
-
-    def postprocess_html(self, soup, first):
-        body = soup.find('body')
-        for name, val in body.attrs:
-            del body[name]
-        for table in list(self.eco_find_image_tables(soup)):
-            caption = table.find('font')
-            img = table.find('img')
-            div = Tag(soup, 'div')
-            div['style'] = 'text-align:left;font-size:70%'
-            ns = NavigableString(self.tag_to_string(caption))
-            div.insert(0, ns)
-            div.insert(1, Tag(soup, 'br'))
-            img.extract()
-            del img['width']
-            del img['height']
-            div.insert(2, img)
-            table.replaceWith(div)
-        return soup
-'''
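The cover change in both Economist hunks above rewrites the image-cache segment of the cover URL the index page links to, swapping the thumbnail variant for the full-size one. A standalone sketch of that rewrite; the URL value is illustrative, taken from the commented-out cover_url in the removed code:

import re

thumb = ('http://media.economist.com/sites/default/files/imagecache/'
         'print-cover-thumbnail/print-covers/currentcoverus_large.jpg')
full = re.sub('thumbnail', 'full', thumb)
# full is now '.../imagecache/print-cover-full/print-covers/currentcoverus_large.jpg'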
@@ -11,21 +11,21 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
     by Chen Wei weichen302@gmx.com, 2012-02-05'''
 
     __license__ = 'GPL v3'
-    __author__ = 'kwetal'
+    __author__ = 'Rick Shang, kwetal'
     language = 'en'
     version = 1.01
 
-    title = u'Foreign Affairs (Subscription or (free) Registration)'
+    title = u'Foreign Affairs (Subscription)'
     publisher = u'Council on Foreign Relations'
     category = u'USA, Foreign Affairs'
     description = u'The leading forum for serious discussion of American foreign policy and international affairs.'
 
     no_stylesheets = True
     remove_javascript = True
+    needs_subscription = True
 
     INDEX = 'http://www.foreignaffairs.com'
     FRONTPAGE = 'http://www.foreignaffairs.com/magazine'
-    INCLUDE_PREMIUM = False
 
 
     remove_tags = []
@@ -68,42 +68,56 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
 
     def parse_index(self):
 
         answer = []
         soup = self.index_to_soup(self.FRONTPAGE)
-        sec_start = soup.findAll('div', attrs={'class':'panel-separator'})
+        #get dates
+        date = re.split('\s\|\s',self.tag_to_string(soup.head.title.string))[0]
+        self.timefmt = u' [%s]'%date
+
+        sec_start = soup.findAll('div', attrs= {'class':'panel-pane'})
         for sec in sec_start:
-            content = sec.nextSibling
-            if content:
-                section = self.tag_to_string(content.find('h2'))
-                articles = []
-
-                tags = []
-                for div in content.findAll('div', attrs = {'class': re.compile(r'view-row\s+views-row-[0-9]+\s+views-row-[odd|even].*')}):
-                    tags.append(div)
-                for li in content.findAll('li'):
-                    tags.append(li)
-
-                for div in tags:
-                    title = url = description = author = None
-
-                    if self.INCLUDE_PREMIUM:
-                        found_premium = False
-                    else:
-                        found_premium = div.findAll('span', attrs={'class':
-                            'premium-icon'})
-                    if not found_premium:
-                        tag = div.find('div', attrs={'class': 'views-field-title'})
-
-                        if tag:
-                            a = tag.find('a')
-                            if a:
-                                title = self.tag_to_string(a)
-                                url = self.INDEX + a['href']
-                                author = self.tag_to_string(div.find('div', attrs = {'class': 'views-field-field-article-display-authors-value'}))
-                                tag_summary = div.find('span', attrs = {'class': 'views-field-field-article-summary-value'})
-                                description = self.tag_to_string(tag_summary)
-                                articles.append({'title':title, 'date':None, 'url':url,
-                                    'description':description, 'author':author})
+            articles = []
+            section = self.tag_to_string(sec.find('h2'))
+            if 'Books' in section:
+                reviewsection=sec.find('div', attrs = {'class': 'item-list'})
+                for subsection in reviewsection.findAll('div'):
+                    subsectiontitle=self.tag_to_string(subsection.span.a)
+                    subsectionurl=self.INDEX + subsection.span.a['href']
+                    soup1 = self.index_to_soup(subsectionurl)
+                    for div in soup1.findAll('div', attrs = {'class': 'views-field-title'}):
+                        if div.find('a') is not None:
+                            originalauthor=self.tag_to_string(div.findNext('div', attrs = {'class':'views-field-field-article-book-nid'}).div.a)
+                            title=subsectiontitle+': '+self.tag_to_string(div.span.a)+' by '+originalauthor
+                            url=self.INDEX+div.span.a['href']
+                            atr=div.findNext('div', attrs = {'class': 'views-field-field-article-display-authors-value'})
+                            if atr is not None:
+                                author=self.tag_to_string(atr.span.a)
+                            else:
+                                author=''
+                            desc=div.findNext('span', attrs = {'class': 'views-field-field-article-summary-value'})
+                            if desc is not None:
+                                description=self.tag_to_string(desc.div.p)
+                            else:
+                                description=''
+                            articles.append({'title':title, 'date':None, 'url':url, 'description':description, 'author':author})
+                            subsectiontitle=''
+            else:
+                for div in sec.findAll('div', attrs = {'class': 'views-field-title'}):
+                    if div.find('a') is not None:
+                        title=self.tag_to_string(div.span.a)
+                        url=self.INDEX+div.span.a['href']
+                        atr=div.findNext('div', attrs = {'class': 'views-field-field-article-display-authors-value'})
+                        if atr is not None:
+                            author=self.tag_to_string(atr.span.a)
+                        else:
+                            author=''
+                        desc=div.findNext('span', attrs = {'class': 'views-field-field-article-summary-value'})
+                        if desc is not None:
+                            description=self.tag_to_string(desc.div.p)
+                        else:
+                            description=''
+                        articles.append({'title':title, 'date':None, 'url':url, 'description':description, 'author':author})
         if articles:
             answer.append((section, articles))
         return answer
@@ -115,15 +129,17 @@ class ForeignAffairsRecipe(BasicNewsRecipe):
 
         return soup
 
-    needs_subscription = True
 
     def get_browser(self):
         br = BasicNewsRecipe.get_browser()
         if self.username is not None and self.password is not None:
-            br.open('https://www.foreignaffairs.com/user?destination=home')
+            br.open('https://www.foreignaffairs.com/user?destination=user%3Fop%3Dlo')
            br.select_form(nr = 1)
             br['name'] = self.username
             br['pass'] = self.password
             br.submit()
         return br
+
+    def cleanup(self):
+        self.browser.open('http://www.foreignaffairs.com/logout?destination=user%3Fop=lo')
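The Foreign Affairs change above pairs needs_subscription with a cleanup() hook so the session is logged out once the download finishes. A reduced sketch of that pattern, assuming placeholder URLs and a site-specific form index, and following the same old-style BasicNewsRecipe.get_browser() calling convention the recipes in this commit use:

from calibre.web.feeds.news import BasicNewsRecipe

class SubscriptionSiteExample(BasicNewsRecipe):  # hypothetical recipe, for illustration only
    title = 'Login/logout skeleton'
    needs_subscription = True

    def get_browser(self):
        br = BasicNewsRecipe.get_browser()
        if self.username is not None and self.password is not None:
            br.open('http://example.com/user/login')  # placeholder login URL
            br.select_form(nr=1)  # which form holds the credentials is site-specific
            br['name'] = self.username
            br['pass'] = self.password
            br.submit()
        return br

    def cleanup(self):
        # called after the download finishes; closes the session server-side
        self.browser.open('http://example.com/logout')  # placeholder logout URL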
@@ -8,7 +8,7 @@ If you have institutional subscription based on access IP you do not need to enter
 anything in username/password fields
 '''
 
-import time
+import time, re
 import urllib
 from calibre import strftime
 from calibre.web.feeds.news import BasicNewsRecipe
@@ -29,7 +29,6 @@ class Harpers_full(BasicNewsRecipe):
     needs_subscription = 'optional'
     masthead_url = 'http://harpers.org/wp-content/themes/harpers/images/pheader.gif'
     publication_type = 'magazine'
-    INDEX = strftime('http://harpers.org/archive/%Y/%m')
     LOGIN = 'http://harpers.org/wp-content/themes/harpers/ajax_login.php'
     extra_css = """
         body{font-family: adobe-caslon-pro,serif}
@@ -65,17 +64,28 @@ class Harpers_full(BasicNewsRecipe):
         return br
 
     def parse_index(self):
+        #find current issue
+        soup = self.index_to_soup('http://harpers.org/')
+        currentIssue=soup.find('div',attrs={'class':'mainNavi'}).find('li',attrs={'class':'curentIssue'})
+        currentIssue_url=self.tag_to_string(currentIssue.a['href'])
+        self.log(currentIssue_url)
+
+        #go to the current issue
+        soup1 = self.index_to_soup(currentIssue_url)
+        date = re.split('\s\|\s',self.tag_to_string(soup1.head.title.string))[0]
+        self.timefmt = u' [%s]'%date
+
+        #get cover
+        coverurl='http://harpers.org/wp-content/themes/harpers/ajax_microfiche.php?img=harpers-'+re.split('harpers.org/',currentIssue_url)[1]+'gif/0001.gif'
+        soup2 = self.index_to_soup(coverurl)
+        self.cover_url = self.tag_to_string(soup2.find('img')['src'])
+        self.log(self.cover_url)
         articles = []
-        print 'Processing ' + self.INDEX
-        soup = self.index_to_soup(self.INDEX)
         count = 0
-        for item in soup.findAll('div', attrs={'class':'articleData'}):
+        for item in soup1.findAll('div', attrs={'class':'articleData'}):
             text_links = item.findAll('h2')
             for text_link in text_links:
                 if count == 0:
-                    lcover_url = item.find(attrs={'class':'dwpdf'})
-                    if lcover_url:
-                        self.cover_url = lcover_url.a['href']
                     count = 1
                 else:
                     url = text_link.a['href']
@@ -87,7 +97,14 @@ class Harpers_full(BasicNewsRecipe):
                         ,'url' :url
                         ,'description':''
                         })
-        return [(soup.head.title.string, articles)]
+        return [(soup1.head.title.string, articles)]
 
     def print_version(self, url):
         return url + '?single=1'
+
+    def cleanup(self):
+        soup = self.index_to_soup('http://harpers.org/')
+        signouturl=self.tag_to_string(soup.find('li', attrs={'class':'subLogOut'}).findNext('li').a['href'])
+        self.log(signouturl)
+        self.browser.open(signouturl)
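The Harper's cover is now fetched through the site's microfiche endpoint, with the issue path spliced out of the current-issue URL. Worked through with an illustrative issue URL (the value below is an assumption for demonstration, not taken from the commit):

import re

currentIssue_url = 'http://harpers.org/archive/2013/01/'  # illustrative value
coverurl = ('http://harpers.org/wp-content/themes/harpers/ajax_microfiche.php'
            '?img=harpers-' + re.split('harpers.org/', currentIssue_url)[1] +
            'gif/0001.gif')
# coverurl: ...ajax_microfiche.php?img=harpers-archive/2013/01/gif/0001.gif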
@@ -15,6 +15,7 @@ from calibre.ebooks.BeautifulSoup import BeautifulSoup, Tag, BeautifulStoneSoup
 class NYTimes(BasicNewsRecipe):
 
     recursions=1  # set this to zero to omit Related articles lists
+    match_regexps=[r'/[12][0-9][0-9][0-9]/[0-9]+/']  # speeds up processing by preventing index page links from being followed
 
     # set getTechBlogs to True to include the technology blogs
     # set tech_oldest_article to control article age
@@ -24,6 +25,14 @@ class NYTimes(BasicNewsRecipe):
     tech_oldest_article = 14
     tech_max_articles_per_feed = 25
 
+    # set getPopularArticles to False if you don't want the Most E-mailed and Most Viewed articles
+    # otherwise you will get up to 20 of the most popular e-mailed and viewed articles (in each category)
+    getPopularArticles = True
+    popularPeriod = '1'  # set this to the number of days to include in the measurement
+                         # e.g. 7 will get the most popular measured over the last 7 days
+                         # and 30 will get the most popular measured over 30 days.
+                         # you still only get up to 20 articles in each category
+
     # set headlinesOnly to True for the headlines-only version. If True, webEdition is ignored.
     headlinesOnly = True
@@ -153,7 +162,7 @@ class NYTimes(BasicNewsRecipe):
 
     timefmt = ''
 
-    simultaneous_downloads = 1
+    #simultaneous_downloads = 1  # no longer required to deal with ads
 
     cover_margins = (18,18,'grey99')
 
@@ -204,7 +213,8 @@ class NYTimes(BasicNewsRecipe):
                 re.compile('^subNavigation'),
                 re.compile('^leaderboard'),
                 re.compile('^module'),
-                re.compile('commentCount')
+                re.compile('commentCount'),
+                'credit'
                 ]}),
         dict(name='div', attrs={'class':re.compile('toolsList')}), # bits
         dict(name='div', attrs={'class':re.compile('postNavigation')}), # bits
@@ -291,11 +301,11 @@ class NYTimes(BasicNewsRecipe):
                 del ans[idx]
                 idx_max = idx_max-1
                 continue
-            if self.verbose:
+            if True: #self.verbose
                 self.log("Section %s: %d articles" % (ans[idx][0], len(ans[idx][1])) )
             for article in ans[idx][1]:
                 total_article_count += 1
-                if self.verbose:
+                if True: #self.verbose
                     self.log("\t%-40.40s... \t%-60.60s..." % (article['title'].encode('cp1252','replace'),
                               article['url'].encode('cp1252','replace')))
             idx = idx+1
@@ -351,23 +361,8 @@ class NYTimes(BasicNewsRecipe):
         br = BasicNewsRecipe.get_browser()
         return br
 
-    ## This doesn't work (and probably never did). It either gets another serve of the advertisement,
-    ## or if it gets the article then get_soup (from which it is invoked) traps trying to do xml decoding.
-    ##
-    ## def skip_ad_pages(self, soup):
-    ##     # Skip ad pages served before actual article
-    ##     skip_tag = soup.find(True, {'name':'skip'})
-    ##     if skip_tag is not None:
-    ##         self.log.warn("Found forwarding link: %s" % skip_tag.parent['href'])
-    ##         url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
-    ##         url += '?pagewanted=all'
-    ##         self.log.warn("Skipping ad to article at '%s'" % url)
-    ##         return self.index_to_soup(url, raw=True)
-
-
     cover_tag = 'NY_NYT'
     def get_cover_url(self):
-        from datetime import timedelta, date
         cover = 'http://webmedia.newseum.org/newseum-multimedia/dfp/jpg'+str(date.today().day)+'/lg/'+self.cover_tag+'.jpg'
         br = BasicNewsRecipe.get_browser()
         daysback=1
@@ -390,6 +385,7 @@ class NYTimes(BasicNewsRecipe):
 
     masthead_url = 'http://graphics8.nytimes.com/images/misc/nytlogo379x64.gif'
 
+
     def short_title(self):
         return self.title
 
@@ -398,6 +394,7 @@ class NYTimes(BasicNewsRecipe):
         from contextlib import closing
         import copy
         from calibre.ebooks.chardet import xml_to_unicode
+        print("ARTICLE_TO_SOUP "+url_or_raw)
         if re.match(r'\w+://', url_or_raw):
             br = self.clone_browser(self.browser)
             open_func = getattr(br, 'open_novisit', br.open)
@@ -489,6 +486,67 @@ class NYTimes(BasicNewsRecipe):
                             description=description, author=author,
                             content=''))
 
+    def get_popular_articles(self,ans):
+        if self.getPopularArticles:
+            popular_articles = {}
+            key_list = []
+
+            def handleh3(h3tag):
+                try:
+                    url = h3tag.a['href']
+                except:
+                    return ('','','','')
+                url = re.sub(r'\?.*', '', url)
+                if self.exclude_url(url):
+                    return ('','','','')
+                url += '?pagewanted=all'
+                title = self.tag_to_string(h3tag.a,False)
+                h6tag = h3tag.findNextSibling('h6')
+                if h6tag is not None:
+                    author = self.tag_to_string(h6tag,False)
+                else:
+                    author = ''
+                ptag = h3tag.findNextSibling('p')
+                if ptag is not None:
+                    desc = self.tag_to_string(ptag,False)
+                else:
+                    desc = ''
+                return(title,url,author,desc)
+
+            have_emailed = False
+            emailed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-emailed?period='+self.popularPeriod)
+            for h3tag in emailed_soup.findAll('h3'):
+                (title,url,author,desc) = handleh3(h3tag)
+                if url=='':
+                    continue
+                if not have_emailed:
+                    key_list.append('Most E-Mailed')
+                    popular_articles['Most E-Mailed'] = []
+                    have_emailed = True
+                popular_articles['Most E-Mailed'].append(
+                    dict(title=title, url=url, date=strftime('%a, %d %b'),
+                         description=desc, author=author,
+                         content=''))
+            have_viewed = False
+            viewed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-viewed?period='+self.popularPeriod)
+            for h3tag in viewed_soup.findAll('h3'):
+                (title,url,author,desc) = handleh3(h3tag)
+                if url=='':
+                    continue
+                if not have_viewed:
+                    key_list.append('Most Viewed')
+                    popular_articles['Most Viewed'] = []
+                    have_viewed = True
+                popular_articles['Most Viewed'].append(
+                    dict(title=title, url=url, date=strftime('%a, %d %b'),
+                         description=desc, author=author,
+                         content=''))
+            viewed_ans = [(k, popular_articles[k]) for k in key_list if popular_articles.has_key(k)]
+            for x in viewed_ans:
+                ans.append(x)
+        return ans
+
     def get_tech_feeds(self,ans):
         if self.getTechBlogs:
             tech_articles = {}
@@ -550,7 +608,7 @@ class NYTimes(BasicNewsRecipe):
                     self.handle_article(lidiv)
 
         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
 
 
     def parse_todays_index(self):
@@ -583,7 +641,7 @@ class NYTimes(BasicNewsRecipe):
                     self.handle_article(lidiv)
 
         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
 
     def parse_headline_index(self):
 
@@ -657,7 +715,7 @@ class NYTimes(BasicNewsRecipe):
             self.articles[section_name].append(dict(title=title, url=url, date=pubdate, description=description, author=author, content=''))
 
         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
 
     def parse_index(self):
         if self.headlinesOnly:
@@ -745,11 +803,12 @@ class NYTimes(BasicNewsRecipe):
 
     def preprocess_html(self, soup):
-        print("PREPROCESS TITLE="+self.tag_to_string(soup.title))
+        #print(strftime("%H:%M:%S")+" -- PREPROCESS TITLE="+self.tag_to_string(soup.title))
         skip_tag = soup.find(True, {'name':'skip'})
         if skip_tag is not None:
-            url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
-            url += '?pagewanted=all'
+            #url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
+            url = 'http://www.nytimes.com' + skip_tag.parent['href']
+            #url += '?pagewanted=all'
             self.log.warn("Skipping ad to article at '%s'" % url)
             sleep(5)
             soup = self.handle_tags(self.article_to_soup(url))
@@ -920,6 +979,7 @@ class NYTimes(BasicNewsRecipe):
         for aside in soup.findAll('div','aside'):
             aside.extract()
         soup = self.strip_anchors(soup,True)
+        #print("RECURSIVE: "+self.tag_to_string(soup.title))
 
         if soup.find('div',attrs={'id':'blogcontent'}) is None:
             if first_fetch:
@@ -1005,7 +1065,7 @@ class NYTimes(BasicNewsRecipe):
             if headline:
                 tag = Tag(soup, "h2")
                 tag['class'] = "headline"
-                tag.insert(0, self.fixChars(headline.renderContents()))
+                tag.insert(0, self.fixChars(self.tag_to_string(headline,False)))
                 soup.insert(0, tag)
             hrs = soup.findAll('hr')
             for hr in hrs:
@@ -1019,7 +1079,7 @@ class NYTimes(BasicNewsRecipe):
             if bylineauthor:
                 tag = Tag(soup, "h6")
                 tag['class'] = "byline"
-                tag.insert(0, self.fixChars(bylineauthor.renderContents()))
+                tag.insert(0, self.fixChars(self.tag_to_string(bylineauthor,False)))
                 bylineauthor.replaceWith(tag)
         except:
             self.log("ERROR: fixing byline author format")
@@ -1030,7 +1090,7 @@ class NYTimes(BasicNewsRecipe):
             if blogcredit:
                 tag = Tag(soup, "h6")
                 tag['class'] = "credit"
-                tag.insert(0, self.fixChars(blogcredit.renderContents()))
+                tag.insert(0, self.fixChars(self.tag_to_string(blogcredit,False)))
                 blogcredit.replaceWith(tag)
         except:
             self.log("ERROR: fixing credit format")
@@ -1084,7 +1144,7 @@ class NYTimes(BasicNewsRecipe):
                 divTag.replaceWith(tag)
         except:
             self.log("ERROR: Problem in Add class=authorId to <div> so we can format with CSS")
+        #print(strftime("%H:%M:%S")+" -- POSTPROCESS TITLE="+self.tag_to_string(soup.title))
         return soup
 
     def populate_article_metadata(self, article, soup, first):
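Every parse_*_index() in this recipe now returns through the same chain, filter_ans(get_tech_feeds(get_popular_articles(ans))): the popular-article sections are appended first, then the tech blogs, then the combined list is filtered. A toy model of that composition with simplified stand-ins, not the recipe's actual methods:

def get_popular_articles(ans):
    ans.append(('Most E-Mailed', []))  # appended only when the feature is enabled
    ans.append(('Most Viewed', []))
    return ans

def get_tech_feeds(ans):
    ans.append(('Tech Blogs', []))
    return ans

def filter_ans(ans):
    # the real method drops empty or excluded sections; pass-through here
    return [(section, articles) for section, articles in ans]

sections = filter_ans(get_tech_feeds(get_popular_articles([('Front Page', [])])))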
@@ -15,6 +15,7 @@ from calibre.ebooks.BeautifulSoup import BeautifulSoup, Tag, BeautifulStoneSoup
 class NYTimes(BasicNewsRecipe):
 
     recursions=1  # set this to zero to omit Related articles lists
+    match_regexps=[r'/[12][0-9][0-9][0-9]/[0-9]+/']  # speeds up processing by preventing index page links from being followed
 
     # set getTechBlogs to True to include the technology blogs
     # set tech_oldest_article to control article age
@@ -24,6 +25,14 @@ class NYTimes(BasicNewsRecipe):
     tech_oldest_article = 14
     tech_max_articles_per_feed = 25
 
+    # set getPopularArticles to False if you don't want the Most E-mailed and Most Viewed articles
+    # otherwise you will get up to 20 of the most popular e-mailed and viewed articles (in each category)
+    getPopularArticles = True
+    popularPeriod = '1'  # set this to the number of days to include in the measurement
+                         # e.g. 7 will get the most popular measured over the last 7 days
+                         # and 30 will get the most popular measured over 30 days.
+                         # you still only get up to 20 articles in each category
+
     # set headlinesOnly to True for the headlines-only version. If True, webEdition is ignored.
     headlinesOnly = False
@@ -32,7 +41,7 @@ class NYTimes(BasicNewsRecipe):
     # number of days old an article can be for inclusion. If oldest_web_article = None all articles
     # will be included. Note: oldest_web_article is ignored if webEdition = False
     webEdition = False
-    oldest_web_article = 7
+    oldest_web_article = None
 
     # download higher resolution images than the small thumbnails typically included in the article
     # the down side of having large beautiful images is the file size is much larger, on the order of 7MB per paper
@@ -153,7 +162,7 @@ class NYTimes(BasicNewsRecipe):
 
     timefmt = ''
 
-    simultaneous_downloads = 1
+    #simultaneous_downloads = 1  # no longer required to deal with ads
 
     cover_margins = (18,18,'grey99')
 
@@ -204,7 +213,8 @@ class NYTimes(BasicNewsRecipe):
                 re.compile('^subNavigation'),
                 re.compile('^leaderboard'),
                 re.compile('^module'),
-                re.compile('commentCount')
+                re.compile('commentCount'),
+                'credit'
                 ]}),
         dict(name='div', attrs={'class':re.compile('toolsList')}), # bits
        dict(name='div', attrs={'class':re.compile('postNavigation')}), # bits
@@ -291,11 +301,11 @@ class NYTimes(BasicNewsRecipe):
                 del ans[idx]
                 idx_max = idx_max-1
                 continue
-            if self.verbose:
+            if True: #self.verbose
                 self.log("Section %s: %d articles" % (ans[idx][0], len(ans[idx][1])) )
             for article in ans[idx][1]:
                 total_article_count += 1
-                if self.verbose:
+                if True: #self.verbose
                     self.log("\t%-40.40s... \t%-60.60s..." % (article['title'].encode('cp1252','replace'),
                               article['url'].encode('cp1252','replace')))
             idx = idx+1
@@ -351,23 +361,8 @@ class NYTimes(BasicNewsRecipe):
         br = BasicNewsRecipe.get_browser()
         return br
 
-    ## This doesn't work (and probably never did). It either gets another serve of the advertisement,
-    ## or if it gets the article then get_soup (from which it is invoked) traps trying to do xml decoding.
-    ##
-    ## def skip_ad_pages(self, soup):
-    ##     # Skip ad pages served before actual article
-    ##     skip_tag = soup.find(True, {'name':'skip'})
-    ##     if skip_tag is not None:
-    ##         self.log.warn("Found forwarding link: %s" % skip_tag.parent['href'])
-    ##         url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
-    ##         url += '?pagewanted=all'
-    ##         self.log.warn("Skipping ad to article at '%s'" % url)
-    ##         return self.index_to_soup(url, raw=True)
-
-
     cover_tag = 'NY_NYT'
     def get_cover_url(self):
-        from datetime import timedelta, date
         cover = 'http://webmedia.newseum.org/newseum-multimedia/dfp/jpg'+str(date.today().day)+'/lg/'+self.cover_tag+'.jpg'
         br = BasicNewsRecipe.get_browser()
         daysback=1
@@ -390,6 +385,7 @@ class NYTimes(BasicNewsRecipe):
 
     masthead_url = 'http://graphics8.nytimes.com/images/misc/nytlogo379x64.gif'
 
+
     def short_title(self):
         return self.title
 
@@ -398,6 +394,7 @@ class NYTimes(BasicNewsRecipe):
         from contextlib import closing
         import copy
         from calibre.ebooks.chardet import xml_to_unicode
+        print("ARTICLE_TO_SOUP "+url_or_raw)
         if re.match(r'\w+://', url_or_raw):
             br = self.clone_browser(self.browser)
             open_func = getattr(br, 'open_novisit', br.open)
@@ -489,6 +486,67 @@ class NYTimes(BasicNewsRecipe):
                             description=description, author=author,
                             content=''))
 
+    def get_popular_articles(self,ans):
+        if self.getPopularArticles:
+            popular_articles = {}
+            key_list = []
+
+            def handleh3(h3tag):
+                try:
+                    url = h3tag.a['href']
+                except:
+                    return ('','','','')
+                url = re.sub(r'\?.*', '', url)
+                if self.exclude_url(url):
+                    return ('','','','')
+                url += '?pagewanted=all'
+                title = self.tag_to_string(h3tag.a,False)
+                h6tag = h3tag.findNextSibling('h6')
+                if h6tag is not None:
+                    author = self.tag_to_string(h6tag,False)
+                else:
+                    author = ''
+                ptag = h3tag.findNextSibling('p')
+                if ptag is not None:
+                    desc = self.tag_to_string(ptag,False)
+                else:
+                    desc = ''
+                return(title,url,author,desc)
+
+            have_emailed = False
+            emailed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-emailed?period='+self.popularPeriod)
+            for h3tag in emailed_soup.findAll('h3'):
+                (title,url,author,desc) = handleh3(h3tag)
+                if url=='':
+                    continue
+                if not have_emailed:
+                    key_list.append('Most E-Mailed')
+                    popular_articles['Most E-Mailed'] = []
+                    have_emailed = True
+                popular_articles['Most E-Mailed'].append(
+                    dict(title=title, url=url, date=strftime('%a, %d %b'),
+                         description=desc, author=author,
+                         content=''))
+            have_viewed = False
+            viewed_soup = self.index_to_soup('http://www.nytimes.com/most-popular-viewed?period='+self.popularPeriod)
+            for h3tag in viewed_soup.findAll('h3'):
+                (title,url,author,desc) = handleh3(h3tag)
+                if url=='':
+                    continue
+                if not have_viewed:
+                    key_list.append('Most Viewed')
+                    popular_articles['Most Viewed'] = []
+                    have_viewed = True
+                popular_articles['Most Viewed'].append(
+                    dict(title=title, url=url, date=strftime('%a, %d %b'),
+                         description=desc, author=author,
+                         content=''))
+            viewed_ans = [(k, popular_articles[k]) for k in key_list if popular_articles.has_key(k)]
+            for x in viewed_ans:
+                ans.append(x)
+        return ans
+
     def get_tech_feeds(self,ans):
         if self.getTechBlogs:
             tech_articles = {}
@@ -550,7 +608,7 @@ class NYTimes(BasicNewsRecipe):
                     self.handle_article(lidiv)
 
         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
 
 
     def parse_todays_index(self):
@@ -583,7 +641,7 @@ class NYTimes(BasicNewsRecipe):
                     self.handle_article(lidiv)
 
         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
 
     def parse_headline_index(self):
 
@@ -657,7 +715,7 @@ class NYTimes(BasicNewsRecipe):
             self.articles[section_name].append(dict(title=title, url=url, date=pubdate, description=description, author=author, content=''))
 
         self.ans = [(k, self.articles[k]) for k in self.ans if self.articles.has_key(k)]
-        return self.filter_ans(self.get_tech_feeds(self.ans))
+        return self.filter_ans(self.get_tech_feeds(self.get_popular_articles(self.ans)))
 
     def parse_index(self):
         if self.headlinesOnly:
@@ -745,11 +803,12 @@ class NYTimes(BasicNewsRecipe):
 
     def preprocess_html(self, soup):
-        print("PREPROCESS TITLE="+self.tag_to_string(soup.title))
+        #print(strftime("%H:%M:%S")+" -- PREPROCESS TITLE="+self.tag_to_string(soup.title))
         skip_tag = soup.find(True, {'name':'skip'})
         if skip_tag is not None:
-            url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
-            url += '?pagewanted=all'
+            #url = 'http://www.nytimes.com' + re.sub(r'\?.*', '', skip_tag.parent['href'])
+            url = 'http://www.nytimes.com' + skip_tag.parent['href']
+            #url += '?pagewanted=all'
             self.log.warn("Skipping ad to article at '%s'" % url)
             sleep(5)
             soup = self.handle_tags(self.article_to_soup(url))
@@ -920,6 +979,7 @@ class NYTimes(BasicNewsRecipe):
         for aside in soup.findAll('div','aside'):
             aside.extract()
         soup = self.strip_anchors(soup,True)
+        #print("RECURSIVE: "+self.tag_to_string(soup.title))
 
         if soup.find('div',attrs={'id':'blogcontent'}) is None:
             if first_fetch:
@@ -1005,7 +1065,7 @@ class NYTimes(BasicNewsRecipe):
             if headline:
                 tag = Tag(soup, "h2")
                 tag['class'] = "headline"
-                tag.insert(0, self.fixChars(headline.renderContents()))
+                tag.insert(0, self.fixChars(self.tag_to_string(headline,False)))
                 soup.insert(0, tag)
             hrs = soup.findAll('hr')
             for hr in hrs:
@@ -1019,7 +1079,7 @@ class NYTimes(BasicNewsRecipe):
             if bylineauthor:
                 tag = Tag(soup, "h6")
                 tag['class'] = "byline"
-                tag.insert(0, self.fixChars(bylineauthor.renderContents()))
+                tag.insert(0, self.fixChars(self.tag_to_string(bylineauthor,False)))
                 bylineauthor.replaceWith(tag)
         except:
             self.log("ERROR: fixing byline author format")
@@ -1030,7 +1090,7 @@ class NYTimes(BasicNewsRecipe):
             if blogcredit:
                 tag = Tag(soup, "h6")
                 tag['class'] = "credit"
-                tag.insert(0, self.fixChars(blogcredit.renderContents()))
+                tag.insert(0, self.fixChars(self.tag_to_string(blogcredit,False)))
                 blogcredit.replaceWith(tag)
         except:
             self.log("ERROR: fixing credit format")
@@ -1084,7 +1144,7 @@ class NYTimes(BasicNewsRecipe):
                 divTag.replaceWith(tag)
         except:
             self.log("ERROR: Problem in Add class=authorId to <div> so we can format with CSS")
+        #print(strftime("%H:%M:%S")+" -- POSTPROCESS TITLE="+self.tag_to_string(soup.title))
         return soup
 
     def populate_article_metadata(self, article, soup, first):
@@ -26,24 +26,28 @@ class TodaysZaman_en(BasicNewsRecipe):
     # remove_attributes = ['width','height']
 
     feeds = [
-            ( u'Home', u'http://www.todayszaman.com/rss?sectionId=0'),
-            ( u'News', u'http://www.todayszaman.com/rss?sectionId=100'),
-            ( u'Business', u'http://www.todayszaman.com/rss?sectionId=105'),
-            ( u'Interviews', u'http://www.todayszaman.com/rss?sectionId=8'),
-            ( u'Columnists', u'http://www.todayszaman.com/rss?sectionId=6'),
-            ( u'Op-Ed', u'http://www.todayszaman.com/rss?sectionId=109'),
-            ( u'Arts & Culture', u'http://www.todayszaman.com/rss?sectionId=110'),
-            ( u'Expat Zone', u'http://www.todayszaman.com/rss?sectionId=132'),
-            ( u'Sports', u'http://www.todayszaman.com/rss?sectionId=5'),
-            ( u'Features', u'http://www.todayszaman.com/rss?sectionId=116'),
-            ( u'Travel', u'http://www.todayszaman.com/rss?sectionId=117'),
-            ( u'Leisure', u'http://www.todayszaman.com/rss?sectionId=118'),
-            ( u'Weird But True', u'http://www.todayszaman.com/rss?sectionId=134'),
-            ( u'Life', u'http://www.todayszaman.com/rss?sectionId=133'),
-            ( u'Health', u'http://www.todayszaman.com/rss?sectionId=126'),
-            ( u'Press Review', u'http://www.todayszaman.com/rss?sectionId=130'),
-            ( u'Todays think tanks', u'http://www.todayszaman.com/rss?sectionId=159'),
+            ( u'Home', u'http://www.todayszaman.com/0.rss'),
+            ( u'Sports', u'http://www.todayszaman.com/5.rss'),
+            ( u'Columnists', u'http://www.todayszaman.com/6.rss'),
+            ( u'Interviews', u'http://www.todayszaman.com/9.rss'),
+            ( u'News', u'http://www.todayszaman.com/100.rss'),
+            ( u'National', u'http://www.todayszaman.com/101.rss'),
+            ( u'Diplomacy', u'http://www.todayszaman.com/102.rss'),
+            ( u'World', u'http://www.todayszaman.com/104.rss'),
+            ( u'Business', u'http://www.todayszaman.com/105.rss'),
+            ( u'Op-Ed', u'http://www.todayszaman.com/109.rss'),
+            ( u'Arts & Culture', u'http://www.todayszaman.com/110.rss'),
+            ( u'Features', u'http://www.todayszaman.com/116.rss'),
+            ( u'Travel', u'http://www.todayszaman.com/117.rss'),
+            ( u'Food', u'http://www.todayszaman.com/124.rss'),
+            ( u'Press Review', u'http://www.todayszaman.com/130.rss'),
+            ( u'Expat Zone', u'http://www.todayszaman.com/132.rss'),
+            ( u'Life', u'http://www.todayszaman.com/133.rss'),
+            ( u'Think Tanks', u'http://www.todayszaman.com/159.rss'),
+            ( u'Almanac', u'http://www.todayszaman.com/161.rss'),
+            ( u'Health', u'http://www.todayszaman.com/162.rss'),
+            ( u'Fashion & Beauty', u'http://www.todayszaman.com/163.rss'),
+            ( u'Science & Technology', u'http://www.todayszaman.com/349.rss'),
             ]
 
     #def preprocess_html(self, soup):
@@ -51,3 +55,4 @@ class TodaysZaman_en(BasicNewsRecipe):
     #def print_version(self, url): #there is a probem caused by table format
         #return url.replace('http://www.todayszaman.com/newsDetail_getNewsById.action?load=detay&', 'http://www.todayszaman.com/newsDetail_openPrintPage.action?')
 
+
@@ -12,13 +12,13 @@ msgstr ""
 "Report-Msgid-Bugs-To: Debian iso-codes team <pkg-isocodes-"
 "devel@lists.alioth.debian.org>\n"
 "POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2012-12-22 17:18+0000\n"
+"PO-Revision-Date: 2012-12-31 12:50+0000\n"
 "Last-Translator: Ferran Rius <frius64@hotmail.com>\n"
 "Language-Team: Catalan <linux@softcatala.org>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2012-12-23 04:38+0000\n"
+"X-Launchpad-Export-Date: 2013-01-01 04:45+0000\n"
 "X-Generator: Launchpad (build 16378)\n"
 "Language: ca\n"
 
@@ -1744,7 +1744,7 @@ msgstr "Asu (Nigèria)"
 
 #. name for aun
 msgid "One; Molmo"
-msgstr "One; Molmo"
+msgstr "Oneià; Molmo"
 
 #. name for auo
 msgid "Auyokawa"
@@ -1964,7 +1964,7 @@ msgstr "Leyigha"
 
 #. name for ayk
 msgid "Akuku"
-msgstr "Akuku"
+msgstr "Okpe-Idesa-Akuku; Akuku"
 
 #. name for ayl
 msgid "Arabic; Libyan"
@@ -9984,7 +9984,7 @@ msgstr "Indri"
 
 #. name for ids
 msgid "Idesa"
-msgstr "Idesa"
+msgstr "Okpe-Idesa-Akuku; Idesa"
 
 #. name for idt
 msgid "Idaté"
@@ -19524,7 +19524,7 @@ msgstr ""
 
 #. name for obi
 msgid "Obispeño"
-msgstr ""
+msgstr "Obispeño"
 
 #. name for obk
 msgid "Bontok; Southern"
@@ -19532,7 +19532,7 @@ msgstr "Bontoc; meridional"
 
 #. name for obl
 msgid "Oblo"
-msgstr ""
+msgstr "Oblo"
 
 #. name for obm
 msgid "Moabite"
@@ -19552,11 +19552,11 @@ msgstr "Bretó; antic"
 
 #. name for obu
 msgid "Obulom"
-msgstr ""
+msgstr "Obulom"
 
 #. name for oca
 msgid "Ocaina"
-msgstr ""
+msgstr "Ocaina"
 
 #. name for och
 msgid "Chinese; Old"
@@ -19576,11 +19576,11 @@ msgstr "Matlazinca; Atzingo"
 
 #. name for oda
 msgid "Odut"
-msgstr ""
+msgstr "Odut"
 
 #. name for odk
 msgid "Od"
-msgstr ""
+msgstr "Od"
 
 #. name for odt
 msgid "Dutch; Old"
@@ -19588,11 +19588,11 @@ msgstr "Holandès; antic"
 
 #. name for odu
 msgid "Odual"
-msgstr ""
+msgstr "Odual"
 
 #. name for ofo
 msgid "Ofo"
-msgstr ""
+msgstr "Ofo"
 
 #. name for ofs
 msgid "Frisian; Old"
@@ -19604,11 +19604,11 @@ msgstr ""
 
 #. name for ogb
 msgid "Ogbia"
-msgstr ""
+msgstr "Ogbia"
 
 #. name for ogc
 msgid "Ogbah"
-msgstr ""
+msgstr "Ogbah"
 
 #. name for oge
 msgid "Georgian; Old"
@@ -19616,7 +19616,7 @@ msgstr ""
 
 #. name for ogg
 msgid "Ogbogolo"
-msgstr ""
+msgstr "Ogbogolo"
 
 #. name for ogo
 msgid "Khana"
@@ -19624,7 +19624,7 @@ msgstr ""
 
 #. name for ogu
 msgid "Ogbronuagum"
-msgstr ""
+msgstr "Ogbronuagum"
 
 #. name for oht
 msgid "Hittite; Old"
@@ -19636,27 +19636,27 @@ msgstr "Hongarès; antic"
 
 #. name for oia
 msgid "Oirata"
-msgstr ""
+msgstr "Oirata"
 
 #. name for oin
 msgid "One; Inebu"
-msgstr ""
+msgstr "Oneià; Inebu"
 
 #. name for ojb
 msgid "Ojibwa; Northwestern"
-msgstr ""
+msgstr "Ojibwa; Nordoccidental"
 
 #. name for ojc
 msgid "Ojibwa; Central"
-msgstr ""
+msgstr "Ojibwa; Central"
 
 #. name for ojg
 msgid "Ojibwa; Eastern"
-msgstr ""
+msgstr "Ojibwa; Oriental"
 
 #. name for oji
 msgid "Ojibwa"
-msgstr ""
+msgstr "Ojibwa; Occidental"
 
 #. name for ojp
 msgid "Japanese; Old"
@@ -19664,11 +19664,11 @@ msgstr "Japonès; antic"
 
 #. name for ojs
 msgid "Ojibwa; Severn"
-msgstr ""
+msgstr "Ojibwa; Severn"
 
 #. name for ojv
 msgid "Ontong Java"
-msgstr ""
+msgstr "Ontong Java"
 
 #. name for ojw
 msgid "Ojibwa; Western"
@@ -19676,19 +19676,19 @@ msgstr ""
 
 #. name for oka
 msgid "Okanagan"
-msgstr ""
+msgstr "Colville-Okanagà"
 
 #. name for okb
 msgid "Okobo"
-msgstr ""
+msgstr "Okobo"
 
 #. name for okd
 msgid "Okodia"
-msgstr ""
+msgstr "Okodia"
 
 #. name for oke
 msgid "Okpe (Southwestern Edo)"
-msgstr ""
+msgstr "Okpe"
 
 #. name for okh
 msgid "Koresh-e Rostam"
@@ -19696,15 +19696,15 @@ msgstr ""
 
 #. name for oki
 msgid "Okiek"
-msgstr ""
+msgstr "Okiek"
 
 #. name for okj
 msgid "Oko-Juwoi"
-msgstr ""
+msgstr "Oko-Juwoi"
 
 #. name for okk
 msgid "One; Kwamtim"
-msgstr ""
+msgstr "Oneià; Kwamtim"
 
 #. name for okl
 msgid "Kentish Sign Language; Old"
@@ -19716,7 +19716,7 @@ msgstr ""
 
 #. name for okn
 msgid "Oki-No-Erabu"
-msgstr ""
+msgstr "Oki-No-Erabu"
 
 #. name for oko
 msgid "Korean; Old (3rd-9th cent.)"
@@ -19728,19 +19728,19 @@ msgstr ""
 
 #. name for oks
 msgid "Oko-Eni-Osayen"
-msgstr ""
+msgstr "Oko-Eni-Osayen"
 
 #. name for oku
 msgid "Oku"
-msgstr ""
+msgstr "Oku"
 
 #. name for okv
 msgid "Orokaiva"
-msgstr ""
+msgstr "Orokaiwa"
 
 #. name for okx
 msgid "Okpe (Northwestern Edo)"
-msgstr ""
+msgstr "Okpe-Idesa-Akuku; Okpe"
 
 #. name for ola
 msgid "Walungge"
@@ -19752,11 +19752,11 @@ msgstr ""
 
 #. name for ole
 msgid "Olekha"
-msgstr ""
+msgstr "Olekha"
 
 #. name for olm
 msgid "Oloma"
-msgstr ""
+msgstr "Oloma"
 
 #. name for olo
 msgid "Livvi"
@@ -19768,7 +19768,7 @@ msgstr ""
 
 #. name for oma
 msgid "Omaha-Ponca"
-msgstr ""
+msgstr "Omaha-Ponca"
 
 #. name for omb
 msgid "Ambae; East"
@@ -19780,23 +19780,23 @@ msgstr ""
 
 #. name for ome
 msgid "Omejes"
-msgstr ""
+msgstr "Omejes"
 
 #. name for omg
 msgid "Omagua"
-msgstr ""
+msgstr "Omagua"
 
 #. name for omi
 msgid "Omi"
-msgstr ""
+msgstr "Omi"
 
 #. name for omk
 msgid "Omok"
-msgstr ""
+msgstr "Omok"
 
 #. name for oml
 msgid "Ombo"
-msgstr ""
+msgstr "Ombo"
 
 #. name for omn
 msgid "Minoan"
@@ -19816,11 +19816,11 @@ msgstr ""
 
 #. name for omt
 msgid "Omotik"
-msgstr ""
+msgstr "Omotik"
 
 #. name for omu
 msgid "Omurano"
-msgstr ""
+msgstr "Omurano"
 
 #. name for omw
 msgid "Tairora; South"
@@ -19832,7 +19832,7 @@ msgstr ""
 
 #. name for ona
 msgid "Ona"
-msgstr ""
+msgstr "Ona"
 
 #. name for onb
 msgid "Lingao"
@@ -19840,31 +19840,31 @@ msgstr ""
 
 #. name for one
 msgid "Oneida"
-msgstr ""
+msgstr "Oneida"
 
 #. name for ong
 msgid "Olo"
-msgstr ""
+msgstr "Olo"
 
 #. name for oni
 msgid "Onin"
-msgstr ""
+msgstr "Onin"
 
 #. name for onj
 msgid "Onjob"
-msgstr ""
+msgstr "Onjob"
 
 #. name for onk
 msgid "One; Kabore"
-msgstr ""
+msgstr "Oneià; Kabore"
 
 #. name for onn
 msgid "Onobasulu"
-msgstr ""
+msgstr "Onobasulu"
 
 #. name for ono
 msgid "Onondaga"
-msgstr ""
+msgstr "Onondaga"
 
 #. name for onp
 msgid "Sartang"
@@ -19872,15 +19872,15 @@ msgstr ""
 
 #. name for onr
 msgid "One; Northern"
-msgstr ""
+msgstr "Oneià; Septentrional"
 
 #. name for ons
 msgid "Ono"
-msgstr ""
+msgstr "Ono"
 
 #. name for ont
 msgid "Ontenu"
-msgstr ""
+msgstr "Ontenu"
 
 #. name for onu
 msgid "Unua"
@@ -19900,23 +19900,23 @@ msgstr ""
 
 #. name for oog
 msgid "Ong"
-msgstr ""
+msgstr "Ong"
 
 #. name for oon
 msgid "Önge"
-msgstr ""
+msgstr "Onge"
 
 #. name for oor
 msgid "Oorlams"
-msgstr ""
+msgstr "Oorlams"
 
 #. name for oos
 msgid "Ossetic; Old"
-msgstr ""
+msgstr "Osset"
 
 #. name for opa
 msgid "Okpamheri"
-msgstr ""
+msgstr "Okpamheri"
 
 #. name for opk
 msgid "Kopkaka"
@@ -19924,39 +19924,39 @@ msgstr ""
 
 #. name for opm
 msgid "Oksapmin"
-msgstr ""
+msgstr "Oksapmin"
 
 #. name for opo
 msgid "Opao"
-msgstr ""
+msgstr "Opao"
 
 #. name for opt
 msgid "Opata"
-msgstr ""
+msgstr "Opata"
 
 #. name for opy
 msgid "Ofayé"
-msgstr ""
+msgstr "Opaie"
 
 #. name for ora
 msgid "Oroha"
-msgstr ""
+msgstr "Oroha"
 
 #. name for orc
 msgid "Orma"
-msgstr ""
+msgstr "Orma"
 
 #. name for ore
 msgid "Orejón"
-msgstr ""
+msgstr "Orejon"
 
 #. name for org
 msgid "Oring"
-msgstr ""
+msgstr "Oring"
 
 #. name for orh
 msgid "Oroqen"
-msgstr ""
+msgstr "Orotxen"
 
 #. name for ori
 msgid "Oriya"
@@ -19968,19 +19968,19 @@ msgstr "Oromo"
 
 #. name for orn
 msgid "Orang Kanaq"
-msgstr ""
+msgstr "Orang; Kanaq"
 
 #. name for oro
 msgid "Orokolo"
-msgstr ""
+msgstr "Orocolo"
 
 #. name for orr
 msgid "Oruma"
-msgstr ""
+msgstr "Oruma"
 
 #. name for ors
 msgid "Orang Seletar"
-msgstr ""
+msgstr "Orang; Seletar"
 
 #. name for ort
 msgid "Oriya; Adivasi"
@@ -19988,7 +19988,7 @@ msgstr "Oriya; Adivasi"
 
 #. name for oru
 msgid "Ormuri"
-msgstr ""
+msgstr "Ormuri"
 
 #. name for orv
 msgid "Russian; Old"
@@ -19996,31 +19996,31 @@ msgstr "Rus; antic"
 
 #. name for orw
 msgid "Oro Win"
-msgstr ""
+msgstr "Oro Win"
 
 #. name for orx
 msgid "Oro"
-msgstr ""
+msgstr "Oro"
 
 #. name for orz
 msgid "Ormu"
-msgstr ""
+msgstr "Ormu"
 
 #. name for osa
 msgid "Osage"
-msgstr ""
+msgstr "Osage"
 
 #. name for osc
 msgid "Oscan"
-msgstr ""
+msgstr "Osc"
 
 #. name for osi
 msgid "Osing"
-msgstr ""
+msgstr "Osing"
 
 #. name for oso
 msgid "Ososo"
-msgstr ""
+msgstr "Ososo"
 
 #. name for osp
 msgid "Spanish; Old"
@@ -20028,15 +20028,15 @@ msgstr "Espanyol; antic"
 
 #. name for oss
 msgid "Ossetian"
-msgstr ""
+msgstr "Osset"
 
 #. name for ost
 msgid "Osatu"
-msgstr ""
+msgstr "Osatu"
 
 #. name for osu
 msgid "One; Southern"
-msgstr ""
+msgstr "One; Meridional"
 
 #. name for osx
 msgid "Saxon; Old"
@@ -20052,15 +20052,15 @@ msgstr ""
 
 #. name for otd
 msgid "Ot Danum"
-msgstr ""
+msgstr "Dohoi"
 
 #. name for ote
 msgid "Otomi; Mezquital"
-msgstr ""
+msgstr "Otomí; Mezquital"
 
 #. name for oti
 msgid "Oti"
-msgstr ""
+msgstr "Oti"
 
 #. name for otk
 msgid "Turkish; Old"
@@ -20068,43 +20068,43 @@ msgstr "Turc; antic"
 
 #. name for otl
 msgid "Otomi; Tilapa"
-msgstr ""
+msgstr "Otomí; Tilapa"
 
 #. name for otm
 msgid "Otomi; Eastern Highland"
-msgstr ""
+msgstr "Otomí; Oriental"
 
 #. name for otn
 msgid "Otomi; Tenango"
-msgstr ""
+msgstr "Otomí; Tenango"
 
 #. name for otq
 msgid "Otomi; Querétaro"
-msgstr ""
+msgstr "Otomí; Queretaro"
 
 #. name for otr
 msgid "Otoro"
-msgstr ""
+msgstr "Otoro"
 
 #. name for ots
 msgid "Otomi; Estado de México"
-msgstr ""
+msgstr "Otomí; Estat de Mèxic"
 
 #. name for ott
 msgid "Otomi; Temoaya"
-msgstr ""
+msgstr "Otomí; Temoaya"
 
 #. name for otu
 msgid "Otuke"
-msgstr ""
+msgstr "Otuke"
 
 #. name for otw
 msgid "Ottawa"
-msgstr ""
+msgstr "Ottawa"
 
 #. name for otx
 msgid "Otomi; Texcatepec"
-msgstr ""
+msgstr "Otomí; Texcatepec"
 
 #. name for oty
 msgid "Tamil; Old"
@@ -20112,7 +20112,7 @@ msgstr ""
 
 #. name for otz
 msgid "Otomi; Ixtenco"
-msgstr ""
+msgstr "Otomí; Ixtenc"
 
 #. name for oua
 msgid "Tagargrent"
@@ -20124,7 +20124,7 @@ msgstr ""
 
 #. name for oue
 msgid "Oune"
-msgstr ""
+msgstr "Oune"
 
 #. name for oui
 msgid "Uighur; Old"
@@ -20132,15 +20132,15 @@ msgstr ""
 
 #. name for oum
 msgid "Ouma"
-msgstr ""
+msgstr "Ouma"
 
 #. name for oun
 msgid "!O!ung"
-msgstr ""
+msgstr "Oung"
 
 #. name for owi
 msgid "Owiniga"
-msgstr ""
+msgstr "Owiniga"
 
 #. name for owl
 msgid "Welsh; Old"
@@ -20148,11 +20148,11 @@ msgstr "Gal·lès; antic"
 
 #. name for oyb
 msgid "Oy"
-msgstr ""
+msgstr "Oy"
 
 #. name for oyd
 msgid "Oyda"
-msgstr ""
+msgstr "Oyda"
 
 #. name for oym
 msgid "Wayampi"
@@ -20160,7 +20160,7 @@ msgstr ""
 
 #. name for oyy
 msgid "Oya'oya"
-msgstr ""
+msgstr "Oya'oya"
 
 #. name for ozm
 msgid "Koonzime"
@@ -20168,27 +20168,27 @@ msgstr ""
 
 #. name for pab
 msgid "Parecís"
-msgstr ""
+msgstr "Pareci"
 
 #. name for pac
 msgid "Pacoh"
-msgstr ""
+msgstr "Pacoh"
 
 #. name for pad
 msgid "Paumarí"
-msgstr ""
+msgstr "Paumarí"
 
 #. name for pae
 msgid "Pagibete"
-msgstr ""
+msgstr "Pagibete"
 
 #. name for paf
 msgid "Paranawát"
-msgstr ""
+msgstr "Paranawat"
 
 #. name for pag
 msgid "Pangasinan"
-msgstr ""
+msgstr "Pangasi"
 
 #. name for pah
 msgid "Tenharim"
@@ -20196,19 +20196,19 @@ msgstr ""
 
 #. name for pai
 msgid "Pe"
-msgstr ""
+msgstr "Pe"
 
 #. name for pak
 msgid "Parakanã"
-msgstr ""
+msgstr "Akwawa; Parakanà"
 
 #. name for pal
 msgid "Pahlavi"
-msgstr ""
+msgstr "Pahlavi"
 
 #. name for pam
 msgid "Pampanga"
-msgstr ""
+msgstr "Pampangà"
 
 #. name for pan
 msgid "Panjabi"
@@ -20220,63 +20220,63 @@ msgstr ""
 
 #. name for pap
 msgid "Papiamento"
-msgstr ""
+msgstr "Papiament"
 
 #. name for paq
 msgid "Parya"
-msgstr ""
+msgstr "Parya"
 
 #. name for par
 msgid "Panamint"
-msgstr ""
+msgstr "Panamint"
 
 #. name for pas
 msgid "Papasena"
-msgstr ""
+msgstr "Papasena"
 
 #. name for pat
 msgid "Papitalai"
-msgstr ""
+msgstr "Papitalai"
 
 #. name for pau
 msgid "Palauan"
-msgstr ""
+msgstr "Palavà"
 
 #. name for pav
 msgid "Pakaásnovos"
-msgstr ""
+msgstr "Pakaa Nova"
 
 #. name for paw
 msgid "Pawnee"
-msgstr ""
+msgstr "Pawnee"
 
 #. name for pax
 msgid "Pankararé"
-msgstr ""
+msgstr "Pankararé"
 
 #. name for pay
 msgid "Pech"
-msgstr ""
+msgstr "Pech"
 
 #. name for paz
 msgid "Pankararú"
-msgstr ""
+msgstr "Pankarurú"
 
 #. name for pbb
 msgid "Páez"
-msgstr ""
+msgstr "Páez"
 
 #. name for pbc
 msgid "Patamona"
-msgstr ""
+msgstr "Patamona"
 
 #. name for pbe
 msgid "Popoloca; Mezontla"
-msgstr ""
+msgstr "Popoloca; Mezontla"
 
 #. name for pbf
 msgid "Popoloca; Coyotepec"
-msgstr ""
+msgstr "Popoloca; Coyotepec"
 
 #. name for pbg
 msgid "Paraujano"
@@ -20288,7 +20288,7 @@ msgstr ""
 
 #. name for pbi
 msgid "Parkwa"
-msgstr ""
+msgstr "Parkwa"
 
 #. name for pbl
 msgid "Mak (Nigeria)"
@@ -20300,7 +20300,7 @@ msgstr ""
 
 #. name for pbo
 msgid "Papel"
-msgstr ""
+msgstr "Papel"
 
 #. name for pbp
 msgid "Badyara"
@@ -20336,7 +20336,7 @@ msgstr ""
 
 #. name for pca
 msgid "Popoloca; Santa Inés Ahuatempan"
-msgstr ""
+msgstr "Popoloca; Ahuatempan"
 
 #. name for pcb
 msgid "Pear"
@@ -20832,7 +20832,7 @@ msgstr "Senufo; Palaka"
 
 #. name for pls
 msgid "Popoloca; San Marcos Tlalcoyalco"
-msgstr ""
+msgstr "Popoloca; Tlalcoyalc"
 
 #. name for plt
 msgid "Malagasy; Plateau"
@@ -21040,7 +21040,7 @@ msgstr ""
 
 #. name for poe
 msgid "Popoloca; San Juan Atzingo"
-msgstr ""
+msgstr "Popoloca; Atzingo"
 
 #. name for pof
 msgid "Poke"
@@ -21104,7 +21104,7 @@ msgstr ""
 
 #. name for pow
 msgid "Popoloca; San Felipe Otlaltepec"
-msgstr ""
+msgstr "Popoloca; Otlaltepec"
 
 #. name for pox
 msgid "Polabian"
@@ -21160,7 +21160,7 @@ msgstr ""
 
 #. name for pps
 msgid "Popoloca; San Luís Temalacayuca"
-msgstr ""
+msgstr "Popoloca; Temalacayuca"
 
 #. name for ppt
 msgid "Pare"
@@ -9,13 +9,13 @@ msgstr ""
 "Project-Id-Version: calibre\n"
 "Report-Msgid-Bugs-To: FULL NAME <EMAIL@ADDRESS>\n"
 "POT-Creation-Date: 2011-11-25 14:01+0000\n"
-"PO-Revision-Date: 2012-12-24 08:05+0000\n"
-"Last-Translator: Adolfo Jayme Barrientos <fitoschido@gmail.com>\n"
+"PO-Revision-Date: 2012-12-28 09:13+0000\n"
+"Last-Translator: Jellby <Unknown>\n"
 "Language-Team: Español; Castellano <>\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
-"X-Launchpad-Export-Date: 2012-12-25 04:46+0000\n"
+"X-Launchpad-Export-Date: 2012-12-29 05:00+0000\n"
 "X-Generator: Launchpad (build 16378)\n"
 
 #. name for aaa
@@ -9584,7 +9584,7 @@ msgstr "Holikachuk"
 
 #. name for hoj
 msgid "Hadothi"
-msgstr "Hadothi"
+msgstr "Hadoti"
 
 #. name for hol
 msgid "Holu"
@@ -11796,7 +11796,7 @@ msgstr ""
 
 #. name for khq
 msgid "Songhay; Koyra Chiini"
-msgstr ""
+msgstr "Songhay koyra chiini"
 
 #. name for khr
 msgid "Kharia"
@@ -227,9 +227,22 @@ class GetTranslations(Translations):  # {{{
                 ans.append(line.split()[-1])
         return ans
 
+    def resolve_conflicts(self):
+        conflict = False
+        for line in subprocess.check_output(['bzr', 'status']).splitlines():
+            if line == 'conflicts:':
+                conflict = True
+                break
+        if not conflict:
+            raise Exception('bzr merge failed and no conflicts found')
+        subprocess.check_call(['bzr', 'resolve', '--take-other'])
+
     def run(self, opts):
         if not self.modified_translations:
+            try:
                 subprocess.check_call(['bzr', 'merge', self.BRANCH])
+            except subprocess.CalledProcessError:
+                self.resolve_conflicts()
         self.check_for_errors()
 
         if self.modified_translations:
@@ -4,7 +4,7 @@ __license__ = 'GPL v3'
 __copyright__ = '2008, Kovid Goyal kovid@kovidgoyal.net'
 __docformat__ = 'restructuredtext en'
 __appname__ = u'calibre'
-numeric_version = (0, 9, 12)
+numeric_version = (0, 9, 13)
 __version__ = u'.'.join(map(unicode, numeric_version))
 __author__ = u"Kovid Goyal <kovid@kovidgoyal.net>"
 
@@ -1529,6 +1529,15 @@ class StoreNextoStore(StoreBase):
     formats = ['EPUB', 'MOBI', 'PDF']
     affiliate = True
 
+class StoreNookUKStore(StoreBase):
+    name = 'Nook UK'
+    author = 'John Schember'
+    description = u'Barnes & Noble S.à r.l, a subsidiary of Barnes & Noble, Inc., a leading retailer of content, digital media and educational products, is proud to bring the award-winning NOOK® reading experience and a leading digital bookstore to the UK.'
+    actual_plugin = 'calibre.gui2.store.stores.nook_uk_plugin:NookUKStore'
+
+    headquarters = 'UK'
+    formats = ['NOOK']
+
 class StoreOpenBooksStore(StoreBase):
     name = 'Open Books'
     description = u'Comprehensive listing of DRM free ebooks from a variety of sources provided by users of calibre.'
@@ -1660,7 +1669,7 @@ plugins += [
     StoreAmazonITKindleStore,
     StoreAmazonUKKindleStore,
     StoreBaenWebScriptionStore,
-    StoreBNStore, StoreSonyStore,
+    StoreBNStore,
     StoreBeWriteStore,
     StoreBiblioStore,
     StoreBookotekaStore,
@@ -1686,12 +1695,14 @@ plugins += [
     StoreMillsBoonUKStore,
     StoreMobileReadStore,
     StoreNextoStore,
+    StoreNookUKStore,
     StoreOpenBooksStore,
     StoreOzonRUStore,
     StorePragmaticBookshelfStore,
     StorePublioStore,
     StoreRW2010Store,
     StoreSmashwordsStore,
+    StoreSonyStore,
     StoreVirtualoStore,
     StoreWaterstonesUKStore,
     StoreWeightlessBooksStore,
@@ -48,6 +48,7 @@ class ANDROID(USBMS):
             0x2910 : HTC_BCDS,
             0xe77  : HTC_BCDS,
             0xff9  : HTC_BCDS,
+            0x0001 : [0x255],
         },
 
         # Eken
@@ -190,7 +191,7 @@ class ANDROID(USBMS):
         0x10a9 : { 0x6050 : [0x227] },
 
         # Prestigio
-        0x2207 : { 0 : [0x222] },
+        0x2207 : { 0 : [0x222], 0x10 : [0x222] },
 
         }
     EBOOK_DIR_MAIN = ['eBooks/import', 'wordplayer/calibretransfer', 'Books',
@@ -212,7 +213,8 @@ class ANDROID(USBMS):
             'VIZIO', 'GOOGLE', 'FREESCAL', 'KOBO_INC', 'LENOVO', 'ROCKCHIP',
             'POCKET', 'ONDA_MID', 'ZENITHIN', 'INGENIC', 'PMID701C', 'PD',
             'PMP5097C', 'MASS', 'NOVO7', 'ZEKI', 'COBY', 'SXZ', 'USB_2.0',
-            'COBY_MID', 'VS', 'AINOL', 'TOPWISE', 'PAD703', 'NEXT8D12']
+            'COBY_MID', 'VS', 'AINOL', 'TOPWISE', 'PAD703', 'NEXT8D12',
+            'MEDIATEK']
     WINDOWS_MAIN_MEM = ['ANDROID_PHONE', 'A855', 'A853', 'INC.NEXUS_ONE',
             '__UMS_COMPOSITE', '_MB200', 'MASS_STORAGE', '_-_CARD', 'SGH-I897',
             'GT-I9000', 'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID',
@@ -232,7 +234,7 @@ class ANDROID(USBMS):
             'THINKPAD_TABLET', 'SGH-T989', 'YP-G70', 'STORAGE_DEVICE',
             'ADVANCED', 'SGH-I727', 'USB_FLASH_DRIVER', 'ANDROID',
             'S5830I_CARD', 'MID7042', 'LINK-CREATE', '7035', 'VIEWPAD_7E',
-            'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F']
+            'NOVO7', 'MB526', '_USB#WYK7MSF8KE', 'TABLET_PC', 'F', 'MT65XX_MS']
     WINDOWS_CARD_A_MEM = ['ANDROID_PHONE', 'GT-I9000_CARD', 'SGH-I897',
             'FILE-STOR_GADGET', 'SGH-T959_CARD', 'SGH-T959', 'SAMSUNG_ANDROID', 'GT-P1000_CARD',
             'A70S', 'A101IT', '7', 'INCREDIBLE', 'A7EB', 'SGH-T849_CARD',
@@ -734,6 +734,7 @@ initlibmtp(void) {
     // who designs a library without anyway to control/redirect the debugging
     // output, and hardcoded paths that cannot be changed?
     int bak, new;
+    fprintf(stdout, "\n"); // This is needed, without it, for some odd reason the code below causes stdout to buffer all output after it is restored, rather than using line buffering, and setlinebuf does not work.
     fflush(stdout);
     bak = dup(STDOUT_FILENO);
     new = open("/dev/null", O_WRONLY);
@@ -8,11 +8,13 @@ __docformat__ = 'restructuredtext en'
 Convert OEB ebook format to PDF.
 '''
 
-import glob
-import os
+import glob, os
 
-from calibre.customize.conversion import OutputFormatPlugin, \
-    OptionRecommendation
+from PyQt4.Qt import QRawFont, QFont
+
+from calibre.constants import iswindows
+from calibre.customize.conversion import (OutputFormatPlugin,
+    OptionRecommendation)
 from calibre.ptempfile import TemporaryDirectory
 
 UNITS = ['millimeter', 'centimeter', 'point', 'inch' , 'pica' , 'didot',
@@ -91,12 +93,14 @@ class PDFOutput(OutputFormatPlugin):
         OptionRecommendation(name='pdf_mono_font_size',
             recommended_value=16, help=_(
                 'The default font size for monospaced text')),
-        # OptionRecommendation(name='old_pdf_engine', recommended_value=False,
-        #     help=_('Use the old, less capable engine to generate the PDF')),
-        # OptionRecommendation(name='uncompressed_pdf',
-        #     recommended_value=False, help=_(
-        #         'Generate an uncompressed PDF, useful for debugging, '
-        #         'only works with the new PDF engine.')),
+        OptionRecommendation(name='pdf_mark_links', recommended_value=False,
+            help=_('Surround all links with a red box, useful for debugging.')),
+        OptionRecommendation(name='old_pdf_engine', recommended_value=False,
+            help=_('Use the old, less capable engine to generate the PDF')),
+        OptionRecommendation(name='uncompressed_pdf',
+            recommended_value=False, help=_(
+                'Generate an uncompressed PDF, useful for debugging, '
+                'only works with the new PDF engine.')),
         ])
 
     def convert(self, oeb_book, output_path, input_plugin, opts, log):
@@ -134,7 +138,7 @@ class PDFOutput(OutputFormatPlugin):
         '''
         from calibre.ebooks.oeb.base import urlnormalize
         from calibre.gui2 import must_use_qt
-        from calibre.utils.fonts.utils import get_font_names, remove_embed_restriction
+        from calibre.utils.fonts.utils import remove_embed_restriction
         from PyQt4.Qt import QFontDatabase, QByteArray
 
         # First find all @font-face rules and remove them, adding the embedded
@@ -164,11 +168,13 @@ class PDFOutput(OutputFormatPlugin):
             except:
                 continue
             must_use_qt()
-            QFontDatabase.addApplicationFontFromData(QByteArray(raw))
-            try:
-                family_name = get_font_names(raw)[0]
-            except:
+            fid = QFontDatabase.addApplicationFontFromData(QByteArray(raw))
             family_name = None
+            if fid > -1:
+                try:
+                    family_name = unicode(QFontDatabase.applicationFontFamilies(fid)[0])
+                except (IndexError, KeyError):
+                    pass
             if family_name:
                 family_map[icu_lower(font_family)] = family_name
 
@@ -177,6 +183,7 @@ class PDFOutput(OutputFormatPlugin):
 
         # Now map the font family name specified in the css to the actual
         # family name of the embedded font (they may be different in general).
+        font_warnings = set()
         for item in self.oeb.manifest:
             if not hasattr(item.data, 'cssRules'): continue
             for i, rule in enumerate(item.data.cssRules):
@@ -188,15 +195,30 @@ class PDFOutput(OutputFormatPlugin):
                     k = icu_lower(val[i].value)
                     if k in family_map:
                         val[i].value = family_map[k]
+                if iswindows:
+                    # On windows, Qt uses GDI which does not support OpenType
+                    # (CFF) fonts, so we need to nuke references to OpenType
+                    # fonts. Note that you could compile QT with configure
+                    # -directwrite, but that requires atleast Vista SP2
+                    for i in xrange(val.length):
+                        family = val[i].value
+                        if family:
+                            f = QRawFont.fromFont(QFont(family))
+                            if len(f.fontTable('head')) == 0:
+                                if family not in font_warnings:
+                                    self.log.warn('Ignoring unsupported font: %s'
+                                            %family)
+                                    font_warnings.add(family)
+                                # Either a bitmap or (more likely) a CFF font
+                                val[i].value = 'times'
 
     def convert_text(self, oeb_book):
-        from calibre.utils.config import tweaks
-        if tweaks.get('new_pdf_engine', False):
-            from calibre.ebooks.pdf.render.from_html import PDFWriter
+        from calibre.ebooks.metadata.opf2 import OPF
+        if self.opts.old_pdf_engine:
+            from calibre.ebooks.pdf.writer import PDFWriter
             PDFWriter
         else:
-            from calibre.ebooks.pdf.writer import PDFWriter
-        from calibre.ebooks.metadata.opf2 import OPF
+            from calibre.ebooks.pdf.render.from_html import PDFWriter
 
         self.log.debug('Serializing oeb input to disk for processing...')
         self.get_cover_data()
@@ -231,7 +253,15 @@ class PDFOutput(OutputFormatPlugin):
             out_stream.seek(0)
             out_stream.truncate()
             self.log.debug('Rendering pages to PDF...')
+            import time
+            st = time.time()
+            if False:
+                import cProfile
+                cProfile.runctx('writer.dump(items, out_stream, PDFMetadata(self.metadata))',
+                        globals(), locals(), '/tmp/profile')
+            else:
                 writer.dump(items, out_stream, PDFMetadata(self.metadata))
+            self.log('Rendered PDF in %g seconds:'%(time.time()-st))
 
         if close:
             out_stream.close()
@@ -17,7 +17,7 @@ from urllib import unquote
 
 from calibre.ebooks.chardet import detect_xml_encoding
 from calibre.constants import iswindows
-from calibre import unicode_path, as_unicode
+from calibre import unicode_path, as_unicode, replace_entities
 
 class Link(object):
     '''
@@ -147,6 +147,7 @@ class HTMLFile(object):
                 url = match.group(i)
                 if url:
                     break
+            url = replace_entities(url)
             try:
                 link = self.resolve(url)
             except ValueError:
@@ -41,7 +41,6 @@ def find_custom_fonts(options, logger):
     if options.serif_family:
         f = family(options.serif_family)
         fonts['serif'] = font_scanner.legacy_fonts_for_family(f)
-        print (111111, fonts['serif'])
         if not fonts['serif']:
             logger.warn('Unable to find serif family %s'%f)
     if options.sans_family:
@@ -9,7 +9,10 @@ __docformat__ = 'restructuredtext en'
 
 import codecs, zlib
 from io import BytesIO
-from struct import pack
+
+from calibre.constants import plugins, ispy3
+
+pdf_float = plugins['speedup'][0].pdf_float
 
 EOL = b'\n'
 
@@ -51,15 +54,25 @@ PAPER_SIZES = {k:globals()[k.upper()] for k in ('a0 a1 a2 a3 a4 a5 a6 b0 b1 b2'
 
 # Basic PDF datatypes {{{
 
+ic = str if ispy3 else unicode
+icb = (lambda x: str(x).encode('ascii')) if ispy3 else bytes
+
+def fmtnum(o):
+    if isinstance(o, float):
+        return pdf_float(o)
+    return ic(o)
+
 def serialize(o, stream):
-    if hasattr(o, 'pdf_serialize'):
+    if isinstance(o, float):
+        stream.write_raw(pdf_float(o).encode('ascii'))
+    elif isinstance(o, (int, long)):
+        stream.write_raw(icb(o))
+    elif hasattr(o, 'pdf_serialize'):
         o.pdf_serialize(stream)
-    elif isinstance(o, bool):
-        stream.write(b'true' if o else b'false')
-    elif isinstance(o, (int, long, float)):
-        stream.write(type(u'')(o).encode('ascii'))
     elif o is None:
-        stream.write(b'null')
+        stream.write_raw(b'null')
+    elif isinstance(o, bool):
+        stream.write_raw(b'true' if o else b'false')
     else:
         raise ValueError('Unknown object: %r'%o)
 
@@ -85,19 +98,13 @@ class String(unicode):
         raw = codecs.BOM_UTF16_BE + s.encode('utf-16-be')
         stream.write(b'('+raw+b')')
 
-class GlyphIndex(int):
-
-    def pdf_serialize(self, stream):
-        byts = bytearray(pack(b'>H', self))
-        stream.write('<%s>'%''.join(map(
-            lambda x: bytes(hex(x)[2:]).rjust(2, b'0'), byts)))
-
 class Dictionary(dict):
 
     def pdf_serialize(self, stream):
         stream.write(b'<<' + EOL)
         sorted_keys = sorted(self.iterkeys(),
-                key=lambda x:((' ' if x == 'Type' else '')+x))
+                key=lambda x:({'Type':'1', 'Subtype':'2'}.get(
+                    x, x)+x))
         for k in sorted_keys:
             serialize(Name(k), stream)
             stream.write(b' ')
@@ -161,6 +168,9 @@ class Stream(BytesIO):
         super(Stream, self).write(raw if isinstance(raw, bytes) else
                 raw.encode('ascii'))
 
+    def write_raw(self, raw):
+        BytesIO.write(self, raw)
+
 class Reference(object):
 
     def __init__(self, num, obj):
@@ -169,5 +179,11 @@ class Reference(object):
     def pdf_serialize(self, stream):
         raw = '%d 0 R'%self.num
         stream.write(raw.encode('ascii'))
+
+    def __repr__(self):
+        return '%d 0 R'%self.num
+
+    def __str__(self):
+        return repr(self)
 # }}}
 
@ -8,24 +8,27 @@ __copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
|
|||||||
__docformat__ = 'restructuredtext en'
|
__docformat__ = 'restructuredtext en'
|
||||||
|
|
||||||
import sys, traceback
|
import sys, traceback
|
||||||
from math import sqrt
|
|
||||||
from collections import namedtuple
|
from collections import namedtuple
|
||||||
from functools import wraps, partial
|
from functools import wraps, partial
|
||||||
|
from future_builtins import map
|
||||||
|
|
||||||
import sip
|
import sip
|
||||||
from PyQt4.Qt import (QPaintEngine, QPaintDevice, Qt, QApplication, QPainter,
|
from PyQt4.Qt import (QPaintEngine, QPaintDevice, Qt, QTransform, QBrush)
|
||||||
QTransform, QPainterPath, QImage, QByteArray, QBuffer,
|
|
||||||
qRgba)
|
|
||||||
|
|
||||||
from calibre.constants import plugins
|
from calibre.constants import plugins
|
||||||
from calibre.ebooks.pdf.render.serialize import (Color, PDFStream, Path)
|
from calibre.ebooks.pdf.render.serialize import (PDFStream, Path)
|
||||||
from calibre.ebooks.pdf.render.common import inch, A4
|
from calibre.ebooks.pdf.render.common import inch, A4, fmtnum
|
||||||
from calibre.utils.fonts.sfnt.container import Sfnt
|
from calibre.ebooks.pdf.render.graphics import convert_path, Graphics
|
||||||
|
from calibre.utils.fonts.sfnt.container import Sfnt, UnsupportedFont
|
||||||
from calibre.utils.fonts.sfnt.metrics import FontMetrics
|
from calibre.utils.fonts.sfnt.metrics import FontMetrics
|
||||||
|
|
||||||
Point = namedtuple('Point', 'x y')
|
Point = namedtuple('Point', 'x y')
|
||||||
ColorState = namedtuple('ColorState', 'color opacity do')
|
ColorState = namedtuple('ColorState', 'color opacity do')
|
||||||
|
|
||||||
|
def repr_transform(t):
|
||||||
|
vals = map(fmtnum, (t.m11(), t.m12(), t.m21(), t.m22(), t.dx(), t.dy()))
|
||||||
|
return '[%s]'%' '.join(vals)
|
||||||
|
|
||||||
def store_error(func):
|
def store_error(func):
|
||||||
|
|
||||||
@wraps(func)
|
@wraps(func)
|
||||||
@ -38,146 +41,6 @@ def store_error(func):
|
|||||||
|
|
||||||
return errh
|
return errh
|
||||||
|
|
||||||
-class GraphicsState(object): # {{{
-
-    def __init__(self):
-        self.ops = {}
-        self.initial_state = {
-            'fill': ColorState(Color(0., 0., 0., 1.), 1.0, False),
-            'transform': QTransform(),
-            'dash': [],
-            'line_width': 0,
-            'stroke': ColorState(Color(0., 0., 0., 1.), 1.0, True),
-            'line_cap': 'flat',
-            'line_join': 'miter',
-            'clip': (Qt.NoClip, QPainterPath()),
-        }
-        self.current_state = self.initial_state.copy()
-
-    def reset(self):
-        self.current_state = self.initial_state.copy()
-
-    def update_color_state(self, which, color=None, opacity=None,
-                           brush_style=None, pen_style=None):
-        current = self.ops.get(which, self.current_state[which])
-        n = ColorState(*current)
-        if color is not None:
-            n = n._replace(color=Color(*color.getRgbF()))
-        if opacity is not None:
-            n = n._replace(opacity=opacity)
-        if opacity is not None:
-            opacity *= n.color.opacity
-        if brush_style is not None:
-            if which == 'fill':
-                do = (False if opacity == 0.0 or brush_style == Qt.NoBrush else
-                      True)
-            else:
-                do = (False if opacity == 0.0 or brush_style == Qt.NoBrush or
-                      pen_style == Qt.NoPen else True)
-            n = n._replace(do=do)
-        self.ops[which] = n
-
-    def read(self, state):
-        flags = state.state()
-
-        if flags & QPaintEngine.DirtyTransform:
-            self.ops['transform'] = state.transform()
-
-        # TODO: Add support for brush patterns
-        if flags & QPaintEngine.DirtyBrush:
-            brush = state.brush()
-            color = brush.color()
-            self.update_color_state('fill', color=color,
-                                    brush_style=brush.style())
-
-        if flags & QPaintEngine.DirtyPen:
-            pen = state.pen()
-            brush = pen.brush()
-            color = pen.color()
-            self.update_color_state('stroke', color, brush_style=brush.style(),
-                                    pen_style=pen.style())
-            ps = {Qt.DashLine:[3], Qt.DotLine:[1,2], Qt.DashDotLine:[3,2,1,2],
-                  Qt.DashDotDotLine:[3, 2, 1, 2, 1, 2]}.get(pen.style(), [])
-            self.ops['dash'] = ps
-            self.ops['line_width'] = pen.widthF()
-            self.ops['line_cap'] = {Qt.FlatCap:'flat', Qt.RoundCap:'round',
-                                    Qt.SquareCap:'square'}.get(pen.capStyle(), 'flat')
-            self.ops['line_join'] = {Qt.MiterJoin:'miter', Qt.RoundJoin:'round',
-                                     Qt.BevelJoin:'bevel'}.get(pen.joinStyle(), 'miter')
-
-        if flags & QPaintEngine.DirtyOpacity:
-            self.update_color_state('fill', opacity=state.opacity())
-            self.update_color_state('stroke', opacity=state.opacity())
-
-        if flags & QPaintEngine.DirtyClipPath or flags & QPaintEngine.DirtyClipRegion:
-            self.ops['clip'] = True
-
-    def __call__(self, engine):
-        if not self.ops:
-            return
-        pdf = engine.pdf
-        ops = self.ops
-        current_transform = self.current_state['transform']
-        transform_changed = 'transform' in ops and ops['transform'] != current_transform
-        reset_stack = transform_changed or 'clip' in ops
-
-        if reset_stack:
-            pdf.restore_stack()
-            pdf.save_stack()
-            # Since we have reset the stack we need to re-apply all previous
-            # operations, that are different from the default value (clip is
-            # handled separately).
-            for op in set(self.initial_state) - {'clip'}:
-                if op in ops: # These will be applied below
-                    self.current_state[op] = self.initial_state[op]
-                elif self.current_state[op] != self.initial_state[op]:
-                    self.apply(op, self.current_state[op], engine, pdf)
-
-        # Now apply the new operations
-        for op, val in ops.iteritems():
-            if op != 'clip' and self.current_state[op] != val:
-                self.apply(op, val, engine, pdf)
-                self.current_state[op] = val
-
-        if 'clip' in ops:
-            # Get the current clip
-            path = engine.painter().clipPath()
-            if not path.isEmpty():
-                engine.add_clip(path)
-        self.ops = {}
-
-    def apply(self, op, val, engine, pdf):
-        getattr(self, 'apply_'+op)(val, engine, pdf)
-
-    def apply_transform(self, val, engine, pdf):
-        if not val.isIdentity():
-            pdf.transform(val)
-
-    def apply_stroke(self, val, engine, pdf):
-        self.apply_color_state('stroke', val, engine, pdf)
-
-    def apply_fill(self, val, engine, pdf):
-        self.apply_color_state('fill', val, engine, pdf)
-
-    def apply_color_state(self, which, val, engine, pdf):
-        color = val.color._replace(opacity=val.opacity*val.color.opacity)
-        getattr(pdf, 'set_%s_color'%which)(color)
-        setattr(engine, 'do_%s'%which, val.do)
-
-    def apply_dash(self, val, engine, pdf):
-        pdf.set_dash(val)
-
-    def apply_line_width(self, val, engine, pdf):
-        pdf.set_line_width(val)
-
-    def apply_line_cap(self, val, engine, pdf):
-        pdf.set_line_cap(val)
-
-    def apply_line_join(self, val, engine, pdf):
-        pdf.set_line_join(val)
-
-# }}}
-
 class Font(FontMetrics):

     def __init__(self, sfnt):
@@ -186,12 +49,21 @@ class Font(FontMetrics):

 class PdfEngine(QPaintEngine):

+    FEATURES = QPaintEngine.AllFeatures & ~(
+        QPaintEngine.PorterDuff | QPaintEngine.PerspectiveTransform
+        | QPaintEngine.ObjectBoundingModeGradients
+        | QPaintEngine.LinearGradientFill
+        | QPaintEngine.RadialGradientFill
+        | QPaintEngine.ConicalGradientFill
+    )
+
     def __init__(self, file_object, page_width, page_height, left_margin,
                  top_margin, right_margin, bottom_margin, width, height,
-                 errors=print, debug=print, compress=True):
-        QPaintEngine.__init__(self, self.features)
+                 errors=print, debug=print, compress=True,
+                 mark_links=False):
+        QPaintEngine.__init__(self, self.FEATURES)
         self.file_object = file_object
-        self.compress = compress
+        self.compress, self.mark_links = compress, mark_links
         self.page_height, self.page_width = page_height, page_width
         self.left_margin, self.top_margin = left_margin, top_margin
         self.right_margin, self.bottom_margin = right_margin, bottom_margin
@@ -210,49 +82,48 @@ class PdfEngine(QPaintEngine):
                 self.bottom_margin) / self.pixel_height

         self.pdf_system = QTransform(sx, 0, 0, -sy, dx, dy)
-        self.do_stroke = True
-        self.do_fill = False
-        self.scale = sqrt(sy**2 + sx**2)
-        self.xscale, self.yscale = sx, sy
-        self.graphics_state = GraphicsState()
+        self.graphics = Graphics()
        self.errors_occurred = False
         self.errors, self.debug = errors, debug
         self.fonts = {}
-        i = QImage(1, 1, QImage.Format_ARGB32)
-        i.fill(qRgba(0, 0, 0, 255))
-        self.alpha_bit = i.constBits().asstring(4).find(b'\xff')
         self.current_page_num = 1
         self.current_page_inited = False
         self.qt_hack, err = plugins['qt_hack']
         if err:
             raise RuntimeError('Failed to load qt_hack with err: %s'%err)

-    def init_page(self):
-        self.pdf.transform(self.pdf_system)
-        self.pdf.set_rgb_colorspace()
-        width = self.painter().pen().widthF() if self.isActive() else 0
-        self.pdf.set_line_width(width)
-        self.do_stroke = True
-        self.do_fill = False
-        self.graphics_state.reset()
-        self.pdf.save_stack()
-        self.current_page_inited = True
+    def apply_graphics_state(self):
+        self.graphics(self.pdf_system, self.painter())
+
+    def resolve_fill(self, rect):
+        self.graphics.resolve_fill(rect, self.pdf_system,
+                                   self.painter().transform())

     @property
-    def features(self):
-        return (QPaintEngine.Antialiasing | QPaintEngine.AlphaBlend |
-                QPaintEngine.ConstantOpacity | QPaintEngine.PainterPaths |
-                QPaintEngine.PaintOutsidePaintEvent |
-                QPaintEngine.PrimitiveTransform)
+    def do_fill(self):
+        return self.graphics.current_state.do_fill
+
+    @property
+    def do_stroke(self):
+        return self.graphics.current_state.do_stroke
+
+    def init_page(self):
+        self.pdf.transform(self.pdf_system)
+        self.graphics.reset()
+        self.pdf.save_stack()
+        self.current_page_inited = True

     def begin(self, device):
         if not hasattr(self, 'pdf'):
             try:
                 self.pdf = PDFStream(self.file_object, (self.page_width,
-                    self.page_height),
-                    compress=self.compress)
+                    self.page_height), compress=self.compress,
+                    mark_links=self.mark_links,
+                    debug=self.debug)
+                self.graphics.begin(self.pdf)
             except:
-                self.errors.append(traceback.format_exc())
+                self.errors(traceback.format_exc())
+                self.errors_occurred = True
                 return False
         return True

@@ -268,7 +139,8 @@ class PdfEngine(QPaintEngine):
             self.end_page()
             self.pdf.end()
         except:
-            self.errors.append(traceback.format_exc())
+            self.errors(traceback.format_exc())
+            self.errors_occurred = True
             return False
         finally:
             self.pdf = self.file_object = None
@@ -277,139 +149,63 @@ class PdfEngine(QPaintEngine):
     def type(self):
         return QPaintEngine.Pdf

+    def add_image(self, img, cache_key):
+        if img.isNull(): return
+        return self.pdf.add_image(img, cache_key)
+
+    @store_error
+    def drawTiledPixmap(self, rect, pixmap, point):
+        self.apply_graphics_state()
+        brush = QBrush(pixmap)
+        bl = rect.topLeft()
+        color, opacity, pattern, do_fill = self.graphics.convert_brush(
+            brush, bl-point, 1.0, self.pdf_system,
+            self.painter().transform())
+        self.pdf.save_stack()
+        self.pdf.apply_fill(color, pattern)
+        self.pdf.draw_rect(bl.x(), bl.y(), rect.width(), rect.height(),
+                           stroke=False, fill=True)
+        self.pdf.restore_stack()
+
     @store_error
     def drawPixmap(self, rect, pixmap, source_rect):
-        self.graphics_state(self)
+        self.apply_graphics_state()
         source_rect = source_rect.toRect()
         pixmap = (pixmap if source_rect == pixmap.rect() else
                   pixmap.copy(source_rect))
         image = pixmap.toImage()
         ref = self.add_image(image, pixmap.cacheKey())
         if ref is not None:
-            self.pdf.draw_image(rect.x(), rect.height()+rect.y(), rect.width(),
-                                -rect.height(), ref)
+            self.pdf.draw_image(rect.x(), rect.y(), rect.width(),
+                                rect.height(), ref)

     @store_error
     def drawImage(self, rect, image, source_rect, flags=Qt.AutoColor):
-        self.graphics_state(self)
+        self.apply_graphics_state()
         source_rect = source_rect.toRect()
         image = (image if source_rect == image.rect() else
                  image.copy(source_rect))
         ref = self.add_image(image, image.cacheKey())
         if ref is not None:
-            self.pdf.draw_image(rect.x(), rect.height()+rect.y(), rect.width(),
-                                -rect.height(), ref)
-
-    def add_image(self, img, cache_key):
-        if img.isNull(): return
-        ref = self.pdf.get_image(cache_key)
-        if ref is not None:
-            return ref
-
-        fmt = img.format()
-        image = QImage(img)
-        if (image.depth() == 1 and img.colorTable().size() == 2 and
-                img.colorTable().at(0) == QColor(Qt.black).rgba() and
-                img.colorTable().at(1) == QColor(Qt.white).rgba()):
-            if fmt == QImage.Format_MonoLSB:
-                image = image.convertToFormat(QImage.Format_Mono)
-            fmt = QImage.Format_Mono
-        else:
-            if (fmt != QImage.Format_RGB32 and fmt != QImage.Format_ARGB32):
-                image = image.convertToFormat(QImage.Format_ARGB32)
-                fmt = QImage.Format_ARGB32
-
-        w = image.width()
-        h = image.height()
-        d = image.depth()
-
-        if fmt == QImage.Format_Mono:
-            bytes_per_line = (w + 7) >> 3
-            data = image.constBits().asstring(bytes_per_line * h)
-            return self.pdf.write_image(data, w, h, d, cache_key=cache_key)
-
-        ba = QByteArray()
-        buf = QBuffer(ba)
-        image.save(buf, 'jpeg', 94)
-        data = bytes(ba.data())
-        has_alpha = has_mask = False
-        soft_mask = mask = None
-
-        if fmt == QImage.Format_ARGB32:
-            tmask = image.constBits().asstring(4*w*h)[self.alpha_bit::4]
-            sdata = bytearray(tmask)
-            vals = set(sdata)
-            vals.discard(255)
-            has_mask = bool(vals)
-            vals.discard(0)
-            has_alpha = bool(vals)
-
-        if has_alpha:
-            soft_mask = self.pdf.write_image(tmask, w, h, 8)
-        elif has_mask:
-            # dither the soft mask to 1bit and add it. This also helps PDF
-            # viewers without transparency support
-            bytes_per_line = (w + 7) >> 3
-            mdata = bytearray(0 for i in xrange(bytes_per_line * h))
-            spos = mpos = 0
-            for y in xrange(h):
-                for x in xrange(w):
-                    if sdata[spos]:
-                        mdata[mpos + x>>3] |= (0x80 >> (x&7))
-                    spos += 1
-                mpos += bytes_per_line
-            mdata = bytes(mdata)
-            mask = self.pdf.write_image(mdata, w, h, 1)
-
-        return self.pdf.write_image(data, w, h, 32, mask=mask, dct=True,
-                                    soft_mask=soft_mask, cache_key=cache_key)
+            self.pdf.draw_image(rect.x(), rect.y(), rect.width(),
+                                rect.height(), ref)

     @store_error
     def updateState(self, state):
-        self.graphics_state.read(state)
-
-    def convert_path(self, path):
-        p = Path()
-        i = 0
-        while i < path.elementCount():
-            elem = path.elementAt(i)
-            em = (elem.x, elem.y)
-            i += 1
-            if elem.isMoveTo():
-                p.move_to(*em)
-            elif elem.isLineTo():
-                p.line_to(*em)
-            elif elem.isCurveTo():
-                added = False
-                if path.elementCount() > i+1:
-                    c1, c2 = path.elementAt(i), path.elementAt(i+1)
-                    if (c1.type == path.CurveToDataElement and c2.type ==
-                        path.CurveToDataElement):
-                        i += 2
-                        p.curve_to(em[0], em[1], c1.x, c1.y, c2.x, c2.y)
-                        added = True
-                if not added:
-                    raise ValueError('Invalid curve to operation')
-        return p
+        self.graphics.update_state(state, self.painter())

     @store_error
     def drawPath(self, path):
-        self.graphics_state(self)
-        p = self.convert_path(path)
+        self.apply_graphics_state()
+        p = convert_path(path)
         fill_rule = {Qt.OddEvenFill:'evenodd',
                      Qt.WindingFill:'winding'}[path.fillRule()]
         self.pdf.draw_path(p, stroke=self.do_stroke,
                            fill=self.do_fill, fill_rule=fill_rule)

-    def add_clip(self, path):
-        p = self.convert_path(path)
-        fill_rule = {Qt.OddEvenFill:'evenodd',
-                     Qt.WindingFill:'winding'}[path.fillRule()]
-        self.pdf.add_clip(p, fill_rule=fill_rule)
-
     @store_error
     def drawPoints(self, points):
-        self.graphics_state(self)
+        self.apply_graphics_state()
         p = Path()
         for point in points:
             p.move_to(point.x(), point.y())
@@ -418,15 +214,21 @@ class PdfEngine(QPaintEngine):

     @store_error
     def drawRects(self, rects):
-        self.graphics_state(self)
-        for rect in rects:
-            bl = rect.topLeft()
-            self.pdf.draw_rect(bl.x(), bl.y(), rect.width(), rect.height(),
-                               stroke=self.do_stroke, fill=self.do_fill)
+        self.apply_graphics_state()
+        with self.graphics:
+            for rect in rects:
+                self.resolve_fill(rect)
+                bl = rect.topLeft()
+                self.pdf.draw_rect(bl.x(), bl.y(), rect.width(), rect.height(),
+                                   stroke=self.do_stroke, fill=self.do_fill)

     def create_sfnt(self, text_item):
         get_table = partial(self.qt_hack.get_sfnt_table, text_item)
-        ans = Font(Sfnt(get_table))
+        try:
+            ans = Font(Sfnt(get_table))
+        except UnsupportedFont as e:
+            raise UnsupportedFont('The font %s is not a valid sfnt. Error: %s'%(
+                text_item.font().family(), e))
         glyph_map = self.qt_hack.get_glyph_map(text_item)
         gm = {}
         for uc, glyph_id in enumerate(glyph_map):
@@ -438,7 +240,7 @@ class PdfEngine(QPaintEngine):
     @store_error
     def drawTextItem(self, point, text_item):
         # super(PdfEngine, self).drawTextItem(point, text_item)
-        self.graphics_state(self)
+        self.apply_graphics_state()
         gi = self.qt_hack.get_glyphs(point, text_item)
         if not gi.indices:
             sip.delete(gi)
@@ -453,23 +255,19 @@ class PdfEngine(QPaintEngine):
         except (KeyError, ValueError):
             pass
         glyphs = []
-        pdf_pos = point
-        first_baseline = None
+        last_x = last_y = 0
         for i, pos in enumerate(gi.positions):
-            if first_baseline is None:
-                first_baseline = pos.y()
-            glyph_pos = pos
-            delta = glyph_pos - pdf_pos
-            glyphs.append((delta.x(), pos.y()-first_baseline, gi.indices[i]))
-            pdf_pos = glyph_pos
+            x, y = pos.x(), pos.y()
+            glyphs.append((x-last_x, last_y - y, gi.indices[i]))
+            last_x, last_y = x, y

-        self.pdf.draw_glyph_run([1, 0, 0, -1, point.x(),
-            point.y()], gi.size, metrics, glyphs)
+        self.pdf.draw_glyph_run([gi.stretch, 0, 0, -1, 0, 0], gi.size, metrics,
+                                glyphs)
         sip.delete(gi)

     @store_error
     def drawPolygon(self, points, mode):
-        self.graphics_state(self)
+        self.apply_graphics_state()
         if not points: return
         p = Path()
         p.move_to(points[0].x(), points[0].y())
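The drawTextItem change above is worth spelling out: glyph positions are now emitted as deltas from the previous glyph rather than offsets from a first baseline, with the y component negated because PDF's y axis points up. A pure-Python sketch of the same loop, with made-up positions and glyph ids:

    positions = [(10.0, 100.0), (16.5, 100.0), (23.0, 98.0)]  # absolute positions, illustrative
    indices = [42, 43, 44]                                    # glyph ids, illustrative
    glyphs, last_x, last_y = [], 0, 0
    for (x, y), gid in zip(positions, indices):
        glyphs.append((x - last_x, last_y - y, gid))          # same delta encoding as the diff
        last_x, last_y = x, y
    print(glyphs)  # [(10.0, -100.0, 42), (6.5, 0.0, 43), (6.5, 2.0, 44)]

Because the positions from qt_hack already include the pen translation, the text matrix passed to draw_glyph_run no longer needs point.x()/point.y() and carries only the stretch and the y flip.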
@@ -484,20 +282,31 @@ class PdfEngine(QPaintEngine):
     def set_metadata(self, *args, **kwargs):
         self.pdf.set_metadata(*args, **kwargs)

-    def __enter__(self):
-        self.pdf.save_stack()
-        self.saved_ps = (self.do_stroke, self.do_fill)
+    def add_outline(self, toc):
+        self.pdf.links.add_outline(toc)

-    def __exit__(self, *args):
-        self.do_stroke, self.do_fill = self.saved_ps
-        self.pdf.restore_stack()
+    def add_links(self, current_item, start_page, links, anchors):
+        for pos in anchors.itervalues():
+            pos['left'], pos['top'] = self.pdf_system.map(pos['left'], pos['top'])
+        for link in links:
+            pos = link[1]
+            llx = pos['left']
+            lly = pos['top'] + pos['height']
+            urx = pos['left'] + pos['width']
+            ury = pos['top']
+            llx, lly = self.pdf_system.map(llx, lly)
+            urx, ury = self.pdf_system.map(urx, ury)
+            link[1] = pos['column'] + start_page
+            link.append((llx, lly, urx, ury))
+        self.pdf.links.add(current_item, start_page, links, anchors)

 class PdfDevice(QPaintDevice): # {{{

     def __init__(self, file_object, page_size=A4, left_margin=inch,
                  top_margin=inch, right_margin=inch, bottom_margin=inch,
-                 xdpi=1200, ydpi=1200, errors=print, debug=print, compress=True):
+                 xdpi=1200, ydpi=1200, errors=print, debug=print,
+                 compress=True, mark_links=False):
         QPaintDevice.__init__(self)
         self.xdpi, self.ydpi = xdpi, ydpi
         self.page_width, self.page_height = page_size
@@ -506,7 +315,10 @@ class PdfDevice(QPaintDevice): # {{{
         self.engine = PdfEngine(file_object, self.page_width, self.page_height,
                                 left_margin, top_margin, right_margin,
                                 bottom_margin, self.width(), self.height(),
-                                errors=errors, debug=debug, compress=compress)
+                                errors=errors, debug=debug, compress=compress,
+                                mark_links=mark_links)
+        self.add_outline = self.engine.add_outline
+        self.add_links = self.engine.add_links

     def paintEngine(self):
         return self.engine
@@ -553,59 +365,4 @@ class PdfDevice(QPaintDevice): # {{{

 # }}}

-if __name__ == '__main__':
-    from PyQt4.Qt import (QBrush, QColor, QPoint, QPixmap)
-    QBrush, QColor, QPoint, QPixmap
-    app = QApplication([])
-    p = QPainter()
-    with open('/tmp/painter.pdf', 'wb') as f:
-        dev = PdfDevice(f, compress=False)
-        p.begin(dev)
-        dev.init_page()
-        xmax, ymax = p.viewport().width(), p.viewport().height()
-        try:
-            p.drawRect(0, 0, xmax, ymax)
-            # p.drawPolyline(QPoint(0, 0), QPoint(xmax, 0), QPoint(xmax, ymax),
-            #                QPoint(0, ymax), QPoint(0, 0))
-            # pp = QPainterPath()
-            # pp.addRect(0, 0, xmax, ymax)
-            # p.drawPath(pp)
-            # p.save()
-            # for i in xrange(3):
-            #     col = [0, 0, 0, 200]
-            #     col[i] = 255
-            #     p.setOpacity(0.3)
-            #     p.setBrush(QBrush(QColor(*col)))
-            #     p.drawRect(0, 0, xmax/10, xmax/10)
-            #     p.translate(xmax/10, xmax/10)
-            #     p.scale(1, 1.5)
-            # p.restore()
-
-            # # p.scale(2, 2)
-            # # p.rotate(45)
-            # p.drawPixmap(0, 0, 2048, 2048, QPixmap(I('library.png')))
-            # p.drawRect(0, 0, 2048, 2048)
-
-            # p.save()
-            # p.drawLine(0, 0, 5000, 0)
-            # p.rotate(45)
-            # p.drawLine(0, 0, 5000, 0)
-            # p.restore()
-
-            f = p.font()
-            f.setPointSize(20)
-            # f.setLetterSpacing(f.PercentageSpacing, 200)
-            # f.setUnderline(True)
-            # f.setOverline(True)
-            # f.setStrikeOut(True)
-            f.setFamily('Calibri')
-            p.setFont(f)
-            # p.setPen(QColor(0, 0, 255))
-            # p.scale(2, 2)
-            # p.rotate(45)
-            p.drawText(QPoint(300, 300), 'Some—text not By’s ū --- Д AV ff ff')
-        finally:
-            p.end()
-        if dev.engine.errors_occurred:
-            raise SystemExit(1)
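The add_links method earlier in this file maps link rectangles from the y-down pixel space used by Qt/WebKit into PDF's y-up point space via pdf_system, which is why the lower-left corner of the annotation rectangle comes from the bottom edge of the CSS box. A rough standalone illustration; pdf_map is a stand-in for QTransform.map and the scale and page height values are made up:

    sx, sy, page_height_pts = 0.36, 0.36, 792.0
    def pdf_map(x, y):  # same shape as QTransform(sx, 0, 0, -sy, 0, dy)
        return x * sx, page_height_pts - y * sy
    pos = {'left': 100, 'top': 50, 'width': 200, 'height': 20}
    llx, lly = pdf_map(pos['left'], pos['top'] + pos['height'])  # bottom-left
    urx, ury = pdf_map(pos['left'] + pos['width'], pos['top'])   # top-right
    print((llx, lly, urx, ury))  # (36.0, 766.8, 108.0, 774.0)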
@@ -20,7 +20,6 @@ from calibre.ebooks.oeb.display.webview import load_html
 from calibre.ebooks.pdf.render.common import (inch, cm, mm, pica, cicero,
                                               didot, PAPER_SIZES)
 from calibre.ebooks.pdf.render.engine import PdfDevice
-from calibre.ebooks.pdf.render.links import Links

 def get_page_size(opts, for_comic=False): # {{{
     use_profile = not (opts.override_profile_size or
@@ -143,7 +142,6 @@ class PDFWriter(QObject):
         self.view.page().mainFrame().setScrollBarPolicy(x,
                 Qt.ScrollBarAlwaysOff)
         self.report_progress = lambda x, y: x
-        self.links = Links()

     def dump(self, items, out_stream, pdf_metadata):
         opts = self.opts
@@ -156,7 +154,8 @@ class PDFWriter(QObject):
                 top_margin=0, right_margin=mr, bottom_margin=0,
                 xdpi=xdpi, ydpi=ydpi, errors=self.log.error,
                 debug=self.log.debug, compress=not
-                opts.uncompressed_pdf)
+                opts.uncompressed_pdf,
+                mark_links=opts.pdf_mark_links)

         self.page.setViewportSize(QSize(self.doc.width(), self.doc.height()))
         self.render_queue = items
@@ -187,7 +186,9 @@ class PDFWriter(QObject):
         QTimer.singleShot(0, self.render_book)
         self.loop.exec_()

-        # TODO: Outline and links
+        if self.toc is not None and len(self.toc) > 0:
+            self.doc.add_outline(self.toc)

         self.painter.end()

         if self.doc.errors_occurred:
@@ -261,8 +262,7 @@ class PDFWriter(QObject):
         amap = self.bridge_value
         if not isinstance(amap, dict):
             amap = {'links':[], 'anchors':{}} # Some javascript error occurred
-        self.links.add(self.current_item, self.current_page_num, amap['links'],
-                amap['anchors'])
+        start_page = self.current_page_num

         mf = self.view.page().mainFrame()
         while True:
@@ -278,3 +278,6 @@ class PDFWriter(QObject):
             if self.doc.errors_occurred:
                 break
+
+        self.doc.add_links(self.current_item, start_page, amap['links'],
+                           amap['anchors'])
src/calibre/ebooks/pdf/render/graphics.py (new file)
@@ -0,0 +1,470 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+from math import sqrt
+from collections import namedtuple
+
+from PyQt4.Qt import (
+    QBrush, QPen, Qt, QPointF, QTransform, QPainterPath, QPaintEngine, QImage)
+
+from calibre.ebooks.pdf.render.common import (
+    Name, Array, fmtnum, Stream, Dictionary)
+from calibre.ebooks.pdf.render.serialize import Path
+
+def convert_path(path): # {{{
+    p = Path()
+    i = 0
+    while i < path.elementCount():
+        elem = path.elementAt(i)
+        em = (elem.x, elem.y)
+        i += 1
+        if elem.isMoveTo():
+            p.move_to(*em)
+        elif elem.isLineTo():
+            p.line_to(*em)
+        elif elem.isCurveTo():
+            added = False
+            if path.elementCount() > i+1:
+                c1, c2 = path.elementAt(i), path.elementAt(i+1)
+                if (c1.type == path.CurveToDataElement and c2.type ==
+                    path.CurveToDataElement):
+                    i += 2
+                    p.curve_to(em[0], em[1], c1.x, c1.y, c2.x, c2.y)
+                    added = True
+            if not added:
+                raise ValueError('Invalid curve to operation')
+    return p
+# }}}
+
+Brush = namedtuple('Brush', 'origin brush color')
+
+class TilingPattern(Stream):
+
+    def __init__(self, cache_key, matrix, w=8, h=8, paint_type=2, compress=False):
+        Stream.__init__(self, compress=compress)
+        self.paint_type = paint_type
+        self.w, self.h = w, h
+        self.matrix = (matrix.m11(), matrix.m12(), matrix.m21(), matrix.m22(),
+                       matrix.dx(), matrix.dy())
+        self.resources = Dictionary()
+        self.cache_key = (self.__class__.__name__, cache_key, self.matrix)
+
+    def add_extra_keys(self, d):
+        d['Type'] = Name('Pattern')
+        d['PatternType'] = 1
+        d['PaintType'] = self.paint_type
+        d['TilingType'] = 1
+        d['BBox'] = Array([0, 0, self.w, self.h])
+        d['XStep'] = self.w
+        d['YStep'] = self.h
+        d['Matrix'] = Array(self.matrix)
+        d['Resources'] = self.resources
+
+class QtPattern(TilingPattern):
+
+    qt_patterns = ( # {{{
+        "0 J\n"
+        "6 w\n"
+        "[] 0 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "0 4 m\n"
+        "8 4 l\n"
+        "S\n", # Dense1Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[6 2] 1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[] 0 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[6 2] -3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense2Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[6 2] 1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 2] -1 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[6 2] -3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense3Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 2] 1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 2] -1 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[2 2] 1 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense4Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 6] -1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 2] 1 d\n"
+        "2 0 m\n"
+        "2 8 l\n"
+        "6 0 m\n"
+        "6 8 l\n"
+        "S\n"
+        "[2 6] 3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense5Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 6] -1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n"
+        "[2 6] 3 d\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # Dense6Pattern
+
+        "0 J\n"
+        "2 w\n"
+        "[2 6] -1 d\n"
+        "0 0 m\n"
+        "0 8 l\n"
+        "8 0 m\n"
+        "8 8 l\n"
+        "S\n", # Dense7Pattern
+
+        "1 w\n"
+        "0 4 m\n"
+        "8 4 l\n"
+        "S\n", # HorPattern
+
+        "1 w\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "S\n", # VerPattern
+
+        "1 w\n"
+        "4 0 m\n"
+        "4 8 l\n"
+        "0 4 m\n"
+        "8 4 l\n"
+        "S\n", # CrossPattern
+
+        "1 w\n"
+        "-1 5 m\n"
+        "5 -1 l\n"
+        "3 9 m\n"
+        "9 3 l\n"
+        "S\n", # BDiagPattern
+
+        "1 w\n"
+        "-1 3 m\n"
+        "5 9 l\n"
+        "3 -1 m\n"
+        "9 5 l\n"
+        "S\n", # FDiagPattern
+
+        "1 w\n"
+        "-1 3 m\n"
+        "5 9 l\n"
+        "3 -1 m\n"
+        "9 5 l\n"
+        "-1 5 m\n"
+        "5 -1 l\n"
+        "3 9 m\n"
+        "9 3 l\n"
+        "S\n", # DiagCrossPattern
+    ) # }}}
+
+    def __init__(self, pattern_num, matrix):
+        super(QtPattern, self).__init__(pattern_num, matrix)
+        self.write(self.qt_patterns[pattern_num-2])
+
+class TexturePattern(TilingPattern):
+
+    def __init__(self, pixmap, matrix, pdf, clone=None):
+        if clone is None:
+            image = pixmap.toImage()
+            cache_key = pixmap.cacheKey()
+            imgref = pdf.add_image(image, cache_key)
+            paint_type = (2 if image.format() in {QImage.Format_MonoLSB,
+                                                  QImage.Format_Mono} else 1)
+            super(TexturePattern, self).__init__(
+                cache_key, matrix, w=image.width(), h=image.height(),
+                paint_type=paint_type)
+            m = (self.w, 0, 0, -self.h, 0, self.h)
+            self.resources['XObject'] = Dictionary({'Texture':imgref})
+            self.write_line('%s cm /Texture Do'%(' '.join(map(fmtnum, m))))
+        else:
+            super(TexturePattern, self).__init__(
+                clone.cache_key[1], matrix, w=clone.w, h=clone.h,
+                paint_type=clone.paint_type)
+            self.resources['XObject'] = Dictionary(clone.resources['XObject'])
+            self.write(clone.getvalue())
+
+class GraphicsState(object):
+
+    FIELDS = ('fill', 'stroke', 'opacity', 'transform', 'brush_origin',
+              'clip', 'do_fill', 'do_stroke')
+
+    def __init__(self):
+        self.fill = QBrush()
+        self.stroke = QPen()
+        self.opacity = 1.0
+        self.transform = QTransform()
+        self.brush_origin = QPointF()
+        self.clip = QPainterPath()
+        self.do_fill = False
+        self.do_stroke = True
+        self.qt_pattern_cache = {}
+
+    def __eq__(self, other):
+        for x in self.FIELDS:
+            if getattr(other, x) != getattr(self, x):
+                return False
+        return True
+
+    def copy(self):
+        ans = GraphicsState()
+        ans.fill = QBrush(self.fill)
+        ans.stroke = QPen(self.stroke)
+        ans.opacity = self.opacity
+        ans.transform = self.transform * QTransform()
+        ans.brush_origin = QPointF(self.brush_origin)
+        ans.clip = self.clip
+        ans.do_fill, ans.do_stroke = self.do_fill, self.do_stroke
+        return ans
+
+class Graphics(object):
+
+    def __init__(self):
+        self.base_state = GraphicsState()
+        self.current_state = GraphicsState()
+        self.pending_state = None
+
+    def begin(self, pdf):
+        self.pdf = pdf
+
+    def update_state(self, state, painter):
+        flags = state.state()
+        if self.pending_state is None:
+            self.pending_state = self.current_state.copy()
+
+        s = self.pending_state
+
+        if flags & QPaintEngine.DirtyTransform:
+            s.transform = state.transform()
+
+        if flags & QPaintEngine.DirtyBrushOrigin:
+            s.brush_origin = state.brushOrigin()
+
+        if flags & QPaintEngine.DirtyBrush:
+            s.fill = state.brush()
+
+        if flags & QPaintEngine.DirtyPen:
+            s.stroke = state.pen()
+
+        if flags & QPaintEngine.DirtyOpacity:
+            s.opacity = state.opacity()
+
+        if flags & QPaintEngine.DirtyClipPath or flags & QPaintEngine.DirtyClipRegion:
+            s.clip = painter.clipPath()
+
+    def reset(self):
+        self.current_state = GraphicsState()
+        self.pending_state = None
+
+    def __call__(self, pdf_system, painter):
+        # Apply the currently pending state to the PDF
+        if self.pending_state is None:
+            return
+
+        pdf_state = self.current_state
+        ps = self.pending_state
+        pdf = self.pdf
+
+        if (ps.transform != pdf_state.transform or ps.clip != pdf_state.clip):
+            pdf.restore_stack()
+            pdf.save_stack()
+            pdf_state = self.base_state
+
+        if (pdf_state.transform != ps.transform):
+            pdf.transform(ps.transform)
+
+        if (pdf_state.opacity != ps.opacity or pdf_state.stroke != ps.stroke):
+            self.apply_stroke(ps, pdf_system, painter)
+
+        if (pdf_state.opacity != ps.opacity or pdf_state.fill != ps.fill or
+            pdf_state.brush_origin != ps.brush_origin):
+            self.apply_fill(ps, pdf_system, painter)
+
+        if (pdf_state.clip != ps.clip):
+            p = convert_path(ps.clip)
+            fill_rule = {Qt.OddEvenFill:'evenodd',
+                         Qt.WindingFill:'winding'}[ps.clip.fillRule()]
+            pdf.add_clip(p, fill_rule=fill_rule)
+
+        self.current_state = self.pending_state
+        self.pending_state = None
+
+    def convert_brush(self, brush, brush_origin, global_opacity,
+                      pdf_system, qt_system):
+        # Convert a QBrush to PDF operators
+        style = brush.style()
+        pdf = self.pdf
+
+        pattern = color = pat = None
+        opacity = 1.0
+        do_fill = True
+
+        matrix = (QTransform.fromTranslate(brush_origin.x(), brush_origin.y())
+                  * pdf_system * qt_system.inverted()[0])
+        vals = list(brush.color().getRgbF())
+        self.brushobj = None
+
+        if style <= Qt.DiagCrossPattern:
+            opacity = global_opacity * vals[-1]
+            color = vals[:3]
+
+            if style > Qt.SolidPattern:
+                pat = QtPattern(style, matrix)
+                pattern = pdf.add_pattern(pat)
+
+            if opacity < 1e-4 or style == Qt.NoBrush:
+                do_fill = False
+
+        elif style == Qt.TexturePattern:
+            pat = TexturePattern(brush.texture(), matrix, pdf)
+            opacity = global_opacity
+            if pat.paint_type == 2:
+                opacity *= vals[-1]
+                color = vals[:3]
+            pattern = pdf.add_pattern(pat)
+
+            if opacity < 1e-4 or style == Qt.NoBrush:
+                do_fill = False
+
+        self.brushobj = Brush(brush_origin, pat, color)
+        # TODO: Add support for gradient fills
+        return color, opacity, pattern, do_fill
+
+    def apply_stroke(self, state, pdf_system, painter):
+        # TODO: Support miter limit by using QPainterPathStroker
+        pen = state.stroke
+        self.pending_state.do_stroke = True
+        pdf = self.pdf
+
+        # Width
+        w = pen.widthF()
+        if pen.isCosmetic():
+            t = painter.transform()
+            w /= sqrt(t.m11()**2 + t.m22()**2)
+        pdf.serialize(w)
+        pdf.current_page.write(' w ')
+
+        # Line cap
+        cap = {Qt.FlatCap:0, Qt.RoundCap:1, Qt.SquareCap:
+               2}.get(pen.capStyle(), 0)
+        pdf.current_page.write('%d J '%cap)
+
+        # Line join
+        join = {Qt.MiterJoin:0, Qt.RoundJoin:1,
+                Qt.BevelJoin:2}.get(pen.joinStyle(), 0)
+        pdf.current_page.write('%d j '%join)
+
+        # Dash pattern
+        ps = {Qt.DashLine:[3], Qt.DotLine:[1,2], Qt.DashDotLine:[3,2,1,2],
+              Qt.DashDotDotLine:[3, 2, 1, 2, 1, 2]}.get(pen.style(), [])
+        if ps:
+            pdf.serialize(Array(ps))
+            pdf.current_page.write(' 0 d ')
+
+        # Stroke fill
+        color, opacity, pattern, self.pending_state.do_stroke = self.convert_brush(
+            pen.brush(), state.brush_origin, state.opacity, pdf_system,
+            painter.transform())
+        self.pdf.apply_stroke(color, pattern, opacity)
+        if pen.style() == Qt.NoPen:
+            self.pending_state.do_stroke = False
+
+    def apply_fill(self, state, pdf_system, painter):
+        self.pending_state.do_fill = True
+        color, opacity, pattern, self.pending_state.do_fill = self.convert_brush(
+            state.fill, state.brush_origin, state.opacity, pdf_system,
+            painter.transform())
+        self.pdf.apply_fill(color, pattern, opacity)
+        self.last_fill = self.brushobj
+
+    def __enter__(self):
+        self.pdf.save_stack()
+
+    def __exit__(self, *args):
+        self.pdf.restore_stack()
+
+    def resolve_fill(self, rect, pdf_system, qt_system):
+        '''
+        Qt's paint system does not update brushOrigin when using
+        TexturePatterns and it also uses TexturePatterns to emulate gradients,
+        leading to brokenness. So this method allows the paint engine to update
+        the brush origin before painting an object. While not perfect, this is
+        better than nothing.
+        '''
+        if not hasattr(self, 'last_fill') or not self.current_state.do_fill:
+            return
+
+        if isinstance(self.last_fill.brush, TexturePattern):
+            tl = rect.topLeft()
+            if tl == self.last_fill.origin:
+                return
+
+            matrix = (QTransform.fromTranslate(tl.x(), tl.y())
+                      * pdf_system * qt_system.inverted()[0])
+
+            pat = TexturePattern(None, matrix, self.pdf, clone=self.last_fill.brush)
+            pattern = self.pdf.add_pattern(pat)
+            self.pdf.apply_fill(self.last_fill.color, pattern)
@@ -8,25 +8,115 @@ __copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
 __docformat__ = 'restructuredtext en'

 import os
+from future_builtins import map
+from urlparse import urlparse, urlunparse
+from urllib2 import quote, unquote

-from calibre.ebooks.pdf.render.common import Array, Name
+from calibre.ebooks.pdf.render.common import Array, Name, Dictionary, String

 class Destination(Array):

-    def __init__(self, start_page, pos):
+    def __init__(self, start_page, pos, get_pageref):
         super(Destination, self).__init__(
-            [start_page + pos['column'], Name('FitH'), pos['y']])
+            [get_pageref(start_page + pos['column']), Name('XYZ'), pos['left'],
+             pos['top'], None]
+        )

 class Links(object):

-    def __init__(self):
+    def __init__(self, pdf, mark_links, page_size):
         self.anchors = {}
+        self.links = []
+        self.start = {'top':page_size[1], 'column':0, 'left':0}
+        self.pdf = pdf
+        self.mark_links = mark_links

     def add(self, base_path, start_page, links, anchors):
         path = os.path.normcase(os.path.abspath(base_path))
         self.anchors[path] = a = {}
-        a[None] = Destination(start_page, {'y':0, 'column':0})
+        a[None] = Destination(start_page, self.start, self.pdf.get_pageref)
         for anchor, pos in anchors.iteritems():
-            a[anchor] = Destination(start_page, pos)
+            a[anchor] = Destination(start_page, pos, self.pdf.get_pageref)
+        for link in links:
+            href, page, rect = link
+            p, frag = href.partition('#')[0::2]
+            link = ((path, p, frag or None), self.pdf.get_pageref(page).obj, Array(rect))
+            self.links.append(link)
+
+    def add_links(self):
+        for link in self.links:
+            path, href, frag = link[0]
+            page, rect = link[1:]
+            combined_path = os.path.abspath(os.path.join(os.path.dirname(path), *href.split('/')))
+            is_local = not href or combined_path in self.anchors
+            annot = Dictionary({
+                'Type':Name('Annot'), 'Subtype':Name('Link'),
+                'Rect':rect, 'Border':Array([0,0,0]),
+            })
+            if self.mark_links:
+                annot.update({'Border':Array([16, 16, 1]), 'C':Array([1.0, 0,
+                                                                      0])})
+            if is_local:
+                path = combined_path if href else path
+                annot['Dest'] = self.anchors[path][frag]
+            else:
+                url = href + (('#'+frag) if frag else '')
+                purl = urlparse(url)
+                if purl.scheme and purl.scheme != 'file':
+                    action = Dictionary({
+                        'Type':Name('Action'), 'S':Name('URI'),
+                    })
+                    parts = (x.encode('utf-8') if isinstance(x, type(u'')) else
+                             x for x in purl)
+                    url = urlunparse(map(quote, map(unquote,
+                                                    parts))).decode('ascii')
+                    action['URI'] = String(url)
+                    annot['A'] = action
+            if 'A' in annot or 'Dest' in annot:
+                if 'Annots' not in page:
+                    page['Annots'] = Array()
+                page['Annots'].append(self.pdf.objects.add(annot))
+            else:
+                self.pdf.debug('Could not find destination for link: %s in file %s'%
+                               (href, path))
+
+    def add_outline(self, toc):
+        parent = Dictionary({'Type':Name('Outlines')})
+        parentref = self.pdf.objects.add(parent)
+        self.process_children(toc, parentref, parent_is_root=True)
+        self.pdf.catalog.obj['Outlines'] = parentref
+
+    def process_children(self, toc, parentref, parent_is_root=False):
+        childrefs = []
+        for child in toc:
+            childref = self.process_toc_item(child, parentref)
+            if childref is None:
+                continue
+            if childrefs:
+                childrefs[-1].obj['Next'] = childref
+                childref.obj['Prev'] = childrefs[-1]
+            childrefs.append(childref)
+
+            if len(child) > 0:
+                self.process_children(child, childref)
+        if childrefs:
+            parentref.obj['First'] = childrefs[0]
+            parentref.obj['Last'] = childrefs[-1]
+            if not parent_is_root:
+                parentref.obj['Count'] = -len(childrefs)
+
+    def process_toc_item(self, toc, parentref):
+        path = toc.abspath or None
+        frag = toc.fragment or None
+        if path is None:
+            return
+        path = os.path.normcase(os.path.abspath(path))
+        if path not in self.anchors:
+            return None
+        a = self.anchors[path]
+        dest = a.get(frag, a[None])
+        item = Dictionary({'Parent':parentref, 'Dest':dest,
+                           'Title':String(toc.text or _('Unknown'))})
+        return self.pdf.objects.add(item)
@@ -17,18 +17,25 @@ GlyphInfo* get_glyphs(QPointF &p, const QTextItem &text_item) {
     QFontEngine *fe = ti.fontEngine;
     qreal size = ti.fontEngine->fontDef.pixelSize;
 #ifdef Q_WS_WIN
-    if (ti.fontEngine->type() == QFontEngine::Win) {
+    if (false && ti.fontEngine->type() == QFontEngine::Win) {
+        // This is used in the Qt sourcecode, but it gives incorrect results,
+        // so I have disabled it. I dont understand how it works in qpdf.cpp
         QFontEngineWin *fe = static_cast<QFontEngineWin *>(ti.fontEngine);
+        // I think this should be tmHeight - tmInternalLeading, but pixelSize
+        // seems to work on windows as well, so leave it as pixelSize
         size = fe->tm.tmHeight;
     }
 #endif
+    int synthesized = ti.fontEngine->synthesized();
+    qreal stretch = synthesized & QFontEngine::SynthesizedStretch ? ti.fontEngine->fontDef.stretch/100. : 1.;
+
     QVarLengthArray<glyph_t> glyphs;
     QVarLengthArray<QFixedPoint> positions;
     QTransform m = QTransform::fromTranslate(p.x(), p.y());
     fe->getGlyphPositions(ti.glyphs, m, ti.flags, glyphs, positions);
     QVector<QPointF> points = QVector<QPointF>(positions.count());
     for (int i = 0; i < positions.count(); i++) {
-        points[i].setX(positions[i].x.toReal());
+        points[i].setX(positions[i].x.toReal()/stretch);
         points[i].setY(positions[i].y.toReal());
     }

@@ -38,10 +45,10 @@ GlyphInfo* get_glyphs(QPointF &p, const QTextItem &text_item) {

     const quint32 *tag = reinterpret_cast<const quint32 *>("name");

-    return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, points, indices);
+    return new GlyphInfo(fe->getSfntTable(qToBigEndian(*tag)), size, stretch, points, indices);
 }

-GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), indices(indices) {
+GlyphInfo::GlyphInfo(const QByteArray& name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices) :name(name), positions(positions), size(size), stretch(stretch), indices(indices) {
 }

 QByteArray get_sfnt_table(const QTextItem &text_item, const char* tag_name) {
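A short note on why the C++ change above divides glyph x-positions by the stretch factor: the PDF side now puts that factor into the text matrix ([gi.stretch 0 0 -1 0 0] in drawTextItem), so positions that Qt reports with synthetic stretch already applied must have it divided back out or it would be applied twice. A rough Python illustration of the arithmetic, with made-up values:

    stretch = 1.5
    qt_positions = [0.0, 9.0, 18.0]                 # stretched x advances from Qt
    pdf_positions = [x / stretch for x in qt_positions]
    print(pdf_positions)                            # [0.0, 6.0, 12.0]; the text matrix restores the 1.5x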
@@ -17,9 +17,10 @@ class GlyphInfo {
     QByteArray name;
     QVector<QPointF> positions;
     qreal size;
+    qreal stretch;
     QVector<unsigned int> indices;

-    GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
+    GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);

 private:
     GlyphInfo(const GlyphInfo&);
@@ -13,9 +13,10 @@ class GlyphInfo {
 public:
     QByteArray name;
     qreal size;
+    qreal stretch;
     QVector<QPointF> &positions;
     QVector<unsigned int> indices;
-    GlyphInfo(const QByteArray &name, qreal size, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
+    GlyphInfo(const QByteArray &name, qreal size, qreal stretch, const QVector<QPointF> &positions, const QVector<unsigned int> &indices);
 private:
     GlyphInfo(const GlyphInfo& g);
@@ -9,19 +9,18 @@ __docformat__ = 'restructuredtext en'

 import hashlib
 from future_builtins import map
-from itertools import izip
-from collections import namedtuple
+from PyQt4.Qt import QBuffer, QByteArray, QImage, Qt, QColor, qRgba

 from calibre.constants import (__appname__, __version__)
 from calibre.ebooks.pdf.render.common import (
     Reference, EOL, serialize, Stream, Dictionary, String, Name, Array,
-    GlyphIndex)
+    fmtnum)
 from calibre.ebooks.pdf.render.fonts import FontManager
+from calibre.ebooks.pdf.render.links import Links

 PDFVER = b'%PDF-1.3'

-Color = namedtuple('Color', 'red green blue opacity')
-
 class IndirectObjects(object):

     def __init__(self):
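Note: fmtnum, now imported from render/common.py in place of GlyphIndex, is used below wherever numbers are written into the content stream. Its implementation is not part of this extract; a minimal stand-in consistent with how it is used here (compact decimal strings for PDF operands) would be:

    def fmtnum(x):
        # '%g' prints integers without a decimal point and trims
        # trailing zeros from floats, keeping the PDF small.
        return u'%g' % x

    assert fmtnum(10.0) == u'10'
    assert fmtnum(0.25) == u'0.25'
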
@@ -89,6 +88,7 @@ class Page(Stream):
         self.opacities = {}
         self.fonts = {}
         self.xobjects = {}
+        self.patterns = {}

     def set_opacity(self, opref):
         if opref not in self.opacities:
@@ -107,6 +107,11 @@ class Page(Stream):
             self.xobjects[imgref] = 'Image%d'%len(self.xobjects)
         return self.xobjects[imgref]

+    def add_pattern(self, patternref):
+        if patternref not in self.patterns:
+            self.patterns[patternref] = 'Pat%d'%len(self.patterns)
+        return self.patterns[patternref]
+
     def add_resources(self):
         r = Dictionary()
         if self.opacities:
@@ -124,6 +129,13 @@ class Page(Stream):
             for ref, name in self.xobjects.iteritems():
                 xobjects[name] = ref
             r['XObject'] = xobjects
+        if self.patterns:
+            r['ColorSpace'] = Dictionary({'PCSp':Array(
+                [Name('Pattern'), Name('DeviceRGB')])})
+            patterns = Dictionary()
+            for ref, name in self.patterns.iteritems():
+                patterns[name] = ref
+            r['Pattern'] = patterns
         if r:
             self.page_dict['Resources'] = r
@@ -153,54 +165,6 @@ class Path(object):
     def close(self):
         self.ops.append(('h',))

-class Text(object):
-
-    def __init__(self):
-        self.transform = self.default_transform = [1, 0, 0, 1, 0, 0]
-        self.font_name = 'Times-Roman'
-        self.font_path = None
-        self.horizontal_scale = self.default_horizontal_scale = 100
-        self.word_spacing = self.default_word_spacing = 0
-        self.char_space = self.default_char_space = 0
-        self.glyph_adjust = self.default_glyph_adjust = None
-        self.size = 12
-        self.text = ''
-
-    def set_transform(self, *args):
-        if len(args) == 1:
-            m = args[0]
-            vals = [m.m11(), m.m12(), m.m21(), m.m22(), m.dx(), m.dy()]
-        else:
-            vals = args
-        self.transform = vals
-
-    def pdf_serialize(self, stream, font_name):
-        if not self.text: return
-        stream.write_line('BT ')
-        serialize(Name(font_name), stream)
-        stream.write(' %g Tf '%self.size)
-        stream.write(' '.join(map(type(u''), self.transform)) + ' Tm ')
-        if self.horizontal_scale != self.default_horizontal_scale:
-            stream.write('%g Tz '%self.horizontal_scale)
-        if self.word_spacing != self.default_word_spacing:
-            stream.write('%g Tw '%self.word_spacing)
-        if self.char_space != self.default_char_space:
-            stream.write('%g Tc '%self.char_space)
-        stream.write_line()
-        if self.glyph_adjust is self.default_glyph_adjust:
-            serialize(String(self.text), stream)
-            stream.write(' Tj ')
-        else:
-            chars = Array()
-            frac, widths = self.glyph_adjust
-            for c, width in izip(self.text, widths):
-                chars.append(String(c))
-                chars.append(int(width * frac))
-            serialize(chars, stream)
-            stream.write(' TJ ')
-        stream.write_line('ET')
-
-
 class Catalog(Dictionary):

     def __init__(self, pagetree):
@@ -219,6 +183,9 @@ class PageTree(Dictionary):
         self['Kids'].append(pageref)
         self['Count'] += 1

+    def get_ref(self, num):
+        return self['Kids'][num-1]
+
 class HashingStream(object):

     def __init__(self, f):
@@ -228,7 +195,9 @@ class HashingStream(object):
         self.last_char = b''

     def write(self, raw):
-        raw = raw if isinstance(raw, bytes) else raw.encode('ascii')
+        self.write_raw(raw if isinstance(raw, bytes) else raw.encode('ascii'))
+
+    def write_raw(self, raw):
         self.f.write(raw)
         self.hashobj.update(raw)
         if raw:
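Note: the write()/write_raw() split lets callers that already hold bytes (the new glyph-run code below) skip the ascii-encode step while still updating the running hash. The same pattern in isolation, assuming md5 as the hash (the attribute names mirror the diff; the demo harness is illustrative):

    import hashlib
    import io

    class HashingStream(object):
        # Wraps a file-like object and hashes everything written, so an
        # ID can be derived from the output without re-reading it.
        def __init__(self, f):
            self.f = f
            self.hashobj = hashlib.md5()
            self.last_char = b''

        def write(self, raw):
            self.write_raw(raw if isinstance(raw, bytes) else raw.encode('ascii'))

        def write_raw(self, raw):
            self.f.write(raw)
            self.hashobj.update(raw)
            if raw:
                self.last_char = raw[-1:]

    buf = io.BytesIO()
    s = HashingStream(buf)
    s.write(u'BT ')
    s.write_raw(b'<0042> Tj ')
    print(s.hashobj.hexdigest())
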
@@ -277,7 +246,8 @@ class PDFStream(object):
         ( True, True, 'evenodd') : 'B*',
     }

-    def __init__(self, stream, page_size, compress=False):
+    def __init__(self, stream, page_size, compress=False, mark_links=False,
+                 debug=print):
         self.stream = HashingStream(stream)
         self.compress = compress
         self.write_line(PDFVER)
@@ -294,6 +264,12 @@ class PDFStream(object):
         self.stroke_opacities, self.fill_opacities = {}, {}
         self.font_manager = FontManager(self.objects, self.compress)
         self.image_cache = {}
+        self.pattern_cache = {}
+        self.debug = debug
+        self.links = Links(self, mark_links, page_size)
+        i = QImage(1, 1, QImage.Format_ARGB32)
+        i.fill(qRgba(0, 0, 0, 255))
+        self.alpha_bit = i.constBits().asstring(4).find(b'\xff')

     @property
     def page_tree(self):
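Note: the three QImage lines in __init__ probe where the alpha byte lives: ARGB32 pixels are stored in native byte order, so the alpha channel sits at byte 3 on little-endian machines and byte 0 on big-endian ones. The same probe without Qt, using struct:

    import struct

    # qRgba(0, 0, 0, 255) is 0xFF000000: opaque black
    pixel = struct.pack('=I', 0xFF000000)  # '=': native byte order, as QImage stores pixels
    alpha_bit = pixel.find(b'\xff')
    print(alpha_bit)  # 3 on little-endian (BGRA in memory), 0 on big-endian
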
@@ -303,6 +279,9 @@ class PDFStream(object):
     def catalog(self):
         return self.objects[1]

+    def get_pageref(self, pagenum):
+        return self.page_tree.obj.get_ref(pagenum)
+
     def set_metadata(self, title=None, author=None, tags=None):
         if title:
             self.info['Title'] = String(title)
@@ -321,12 +300,9 @@ class PDFStream(object):
             vals = [m.m11(), m.m12(), m.m21(), m.m22(), m.dx(), m.dy()]
         else:
             vals = args
-        cm = ' '.join(map(type(u''), vals))
+        cm = ' '.join(map(fmtnum, vals))
         self.current_page.write_line(cm + ' cm')

-    def set_rgb_colorspace(self):
-        self.current_page.write_line('/DeviceRGB CS /DeviceRGB cs')
-
     def save_stack(self):
         self.current_page.write_line('q')
@@ -337,7 +313,7 @@ class PDFStream(object):
         self.current_page.write_line('Q q')

     def draw_rect(self, x, y, width, height, stroke=True, fill=False):
-        self.current_page.write('%g %g %g %g re '%(x, y, width, height))
+        self.current_page.write('%s re '%' '.join(map(fmtnum, (x, y, width, height))))
         self.current_page.write_line(self.PATH_OPS[(stroke, fill, 'winding')])

     def write_path(self, path):
@@ -345,7 +321,8 @@ class PDFStream(object):
             if i != 0:
                 self.current_page.write_line()
             for x in op:
-                self.current_page.write(type(u'')(x) + ' ')
+                self.current_page.write(
+                    (fmtnum(x) if isinstance(x, (int, long, float)) else x) + ' ')

     def draw_path(self, path, stroke=True, fill=False, fill_rule='winding'):
         if not path.ops: return
@@ -358,67 +335,38 @@ class PDFStream(object):
             op = 'W' if fill_rule == 'winding' else 'W*'
             self.current_page.write_line(op + ' ' + 'n')

-    def set_dash(self, array, phase=0):
-        array = Array(array)
-        serialize(array, self.current_page)
-        self.current_page.write(b' ')
-        serialize(phase, self.current_page)
-        self.current_page.write_line(' d')
+    def serialize(self, o):
+        serialize(o, self.current_page)

-    def set_line_width(self, width):
-        serialize(width, self.current_page)
-        self.current_page.write_line(' w')
-
-    def set_line_cap(self, style):
-        serialize({'flat':0, 'round':1, 'square':2}.get(style),
-                self.current_page)
-        self.current_page.write_line(' J')
-
-    def set_line_join(self, style):
-        serialize({'miter':0, 'round':1, 'bevel':2}[style], self.current_page)
-        self.current_page.write_line(' j')
-
-    def set_stroke_color(self, color):
-        opacity = color.opacity
+    def set_stroke_opacity(self, opacity):
         if opacity not in self.stroke_opacities:
             op = Dictionary({'Type':Name('ExtGState'), 'CA': opacity})
             self.stroke_opacities[opacity] = self.objects.add(op)
         self.current_page.set_opacity(self.stroke_opacities[opacity])
-        self.current_page.write_line(' '.join(map(type(u''), color[:3])) + ' SC')

-    def set_fill_color(self, color):
-        opacity = color.opacity
+    def set_fill_opacity(self, opacity):
+        opacity = float(opacity)
         if opacity not in self.fill_opacities:
             op = Dictionary({'Type':Name('ExtGState'), 'ca': opacity})
             self.fill_opacities[opacity] = self.objects.add(op)
         self.current_page.set_opacity(self.fill_opacities[opacity])
-        self.current_page.write_line(' '.join(map(type(u''), color[:3])) + ' sc')

     def end_page(self):
         pageref = self.current_page.end(self.objects, self.stream)
         self.page_tree.obj.add_page(pageref)
         self.current_page = Page(self.page_tree, compress=self.compress)

-    def draw_text(self, text_object):
-        if text_object.font_path is None:
-            fontref = self.font_manager.add_standard_font(text_object.font_name)
-        else:
-            raise NotImplementedError()
-        name = self.current_page.add_font(fontref)
-        text_object.pdf_serialize(self.current_page, name)
-
     def draw_glyph_run(self, transform, size, font_metrics, glyphs):
         glyph_ids = {x[-1] for x in glyphs}
         fontref = self.font_manager.add_font(font_metrics, glyph_ids)
         name = self.current_page.add_font(fontref)
         self.current_page.write(b'BT ')
         serialize(Name(name), self.current_page)
-        self.current_page.write(' %g Tf '%size)
-        self.current_page.write('%s Tm '%' '.join(map(type(u''), transform)))
+        self.current_page.write(' %s Tf '%fmtnum(size))
+        self.current_page.write('%s Tm '%' '.join(map(fmtnum, transform)))
         for x, y, glyph_id in glyphs:
-            self.current_page.write('%g %g Td '%(x, y))
-            serialize(GlyphIndex(glyph_id), self.current_page)
-            self.current_page.write(' Tj ')
+            self.current_page.write_raw(('%s %s Td <%04X> Tj '%(
+                fmtnum(x), fmtnum(y), glyph_id)).encode('ascii'))
         self.current_page.write_line(b' ET')

     def get_image(self, cache_key):
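Note: draw_glyph_run now addresses glyphs by id with four-digit hex strings (<%04X> Tj) instead of the removed GlyphIndex wrapper, positioning each one with Td. A standalone sketch of the operator text this produces ('/F1' and the coordinates are made-up values; the real ones come from the font manager and Qt's text layout):

    def glyph_run_ops(size, transform, glyphs):
        # glyphs: (x, y, glyph_id) triples; glyphs are addressed by
        # glyph id in the embedded font, not by character code.
        ops = ['BT', '/F1 %g Tf' % size,
               ' '.join('%g' % v for v in transform) + ' Tm']
        for x, y, glyph_id in glyphs:
            ops.append('%g %g Td <%04X> Tj' % (x, y, glyph_id))
        ops.append('ET')
        return ' '.join(ops)

    print(glyph_run_ops(12, [1, 0, 0, 1, 72, 720], [(0, 0, 0x41), (7.2, 0, 0x42)]))
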
@@ -431,17 +379,109 @@ class PDFStream(object):
             self.objects.commit(r, self.stream)
         return r

-    def draw_image(self, x, y, xscale, yscale, imgref):
+    def add_image(self, img, cache_key):
+        ref = self.get_image(cache_key)
+        if ref is not None:
+            return ref
+
+        fmt = img.format()
+        image = QImage(img)
+        if (image.depth() == 1 and img.colorTable().size() == 2 and
+            img.colorTable().at(0) == QColor(Qt.black).rgba() and
+            img.colorTable().at(1) == QColor(Qt.white).rgba()):
+            if fmt == QImage.Format_MonoLSB:
+                image = image.convertToFormat(QImage.Format_Mono)
+            fmt = QImage.Format_Mono
+        else:
+            if (fmt != QImage.Format_RGB32 and fmt != QImage.Format_ARGB32):
+                image = image.convertToFormat(QImage.Format_ARGB32)
+                fmt = QImage.Format_ARGB32
+
+        w = image.width()
+        h = image.height()
+        d = image.depth()
+
+        if fmt == QImage.Format_Mono:
+            bytes_per_line = (w + 7) >> 3
+            data = image.constBits().asstring(bytes_per_line * h)
+            return self.write_image(data, w, h, d, cache_key=cache_key)
+
+        ba = QByteArray()
+        buf = QBuffer(ba)
+        image.save(buf, 'jpeg', 94)
+        data = bytes(ba.data())
+        has_alpha = has_mask = False
+        soft_mask = mask = None
+
+        if fmt == QImage.Format_ARGB32:
+            tmask = image.constBits().asstring(4*w*h)[self.alpha_bit::4]
+            sdata = bytearray(tmask)
+            vals = set(sdata)
+            vals.discard(255)
+            has_mask = bool(vals)
+            vals.discard(0)
+            has_alpha = bool(vals)
+
+        if has_alpha:
+            soft_mask = self.write_image(tmask, w, h, 8)
+        elif has_mask:
+            # dither the soft mask to 1bit and add it. This also helps PDF
+            # viewers without transparency support
+            bytes_per_line = (w + 7) >> 3
+            mdata = bytearray(0 for i in xrange(bytes_per_line * h))
+            spos = mpos = 0
+            for y in xrange(h):
+                for x in xrange(w):
+                    if sdata[spos]:
+                        mdata[mpos + x>>3] |= (0x80 >> (x&7))
+                    spos += 1
+                mpos += bytes_per_line
+            mdata = bytes(mdata)
+            mask = self.write_image(mdata, w, h, 1)
+
+        return self.write_image(data, w, h, 32, mask=mask, dct=True,
+                                soft_mask=soft_mask, cache_key=cache_key)
+
+    def add_pattern(self, pattern):
+        if pattern.cache_key not in self.pattern_cache:
+            self.pattern_cache[pattern.cache_key] = self.objects.add(pattern)
+        return self.current_page.add_pattern(self.pattern_cache[pattern.cache_key])
+
+    def draw_image(self, x, y, width, height, imgref):
         name = self.current_page.add_image(imgref)
-        self.current_page.write('q %g 0 0 %g %g %g cm '%(xscale, yscale, x, y))
+        self.current_page.write('q %s 0 0 %s %s %s cm '%(fmtnum(width),
+            fmtnum(-height), fmtnum(x), fmtnum(y+height)))
         serialize(Name(name), self.current_page)
         self.current_page.write_line(' Do Q')

+    def apply_color_space(self, color, pattern, stroke=False):
+        wl = self.current_page.write_line
+        if color is not None and pattern is None:
+            wl(' '.join(map(fmtnum, color)) + (' RG' if stroke else ' rg'))
+        elif color is None and pattern is not None:
+            wl('/Pattern %s /%s %s'%('CS' if stroke else 'cs', pattern,
+                'SCN' if stroke else 'scn'))
+        elif color is not None and pattern is not None:
+            col = ' '.join(map(fmtnum, color))
+            wl('/PCSp %s %s /%s %s'%('CS' if stroke else 'cs', col, pattern,
+                'SCN' if stroke else 'scn'))
+
+    def apply_fill(self, color=None, pattern=None, opacity=None):
+        if opacity is not None:
+            self.set_fill_opacity(opacity)
+        self.apply_color_space(color, pattern)
+
+    def apply_stroke(self, color=None, pattern=None, opacity=None):
+        if opacity is not None:
+            self.set_stroke_opacity(opacity)
+        self.apply_color_space(color, pattern, stroke=True)
+
     def end(self):
         if self.current_page.getvalue():
             self.end_page()
         self.font_manager.embed_fonts()
         inforef = self.objects.add(self.info)
+        self.links.add_links()
         self.objects.pdf_serialize(self.stream)
         self.write_line()
         startxref = self.objects.write_xref(self.stream)
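Note: add_image distinguishes three alpha cases from the raw alpha plane: fully opaque (no mask), binary transparency (a 1-bit /Mask, dithered above, which also helps viewers without transparency support), and true soft alpha (an 8-bit /SMask). The decision logic in isolation, on plain bytes:

    def classify_alpha(alpha_plane):
        # alpha_plane: one byte per pixel, sliced out of the ARGB data
        vals = set(bytearray(alpha_plane))
        vals.discard(255)        # opaque pixels need no mask at all
        has_mask = bool(vals)    # any non-opaque pixel left?
        vals.discard(0)          # fully transparent pixels only need 1 bit
        has_alpha = bool(vals)   # intermediate values force a soft mask
        return has_mask, has_alpha

    assert classify_alpha(b'\xff\xff') == (False, False)  # opaque
    assert classify_alpha(b'\xff\x00') == (True, False)   # binary mask
    assert classify_alpha(b'\xff\x80') == (True, True)    # soft alpha
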
src/calibre/ebooks/pdf/render/test.py (new file, 132 lines)
@@ -0,0 +1,132 @@
+#!/usr/bin/env python
+# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:fdm=marker:ai
+from __future__ import (unicode_literals, division, absolute_import,
+                        print_function)
+
+__license__ = 'GPL v3'
+__copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
+__docformat__ = 'restructuredtext en'
+
+import os
+
+from PyQt4.Qt import (QBrush, QColor, QPoint, QPixmap, QPainterPath, QRectF,
+                      QApplication, QPainter, Qt, QImage, QLinearGradient,
+                      QPointF, QPen)
+QBrush, QColor, QPoint, QPixmap, QPainterPath, QRectF, Qt, QPointF
+
+from calibre.ebooks.pdf.render.engine import PdfDevice
+
+def full(p, xmax, ymax):
+    p.drawRect(0, 0, xmax, ymax)
+    p.drawPolyline(QPoint(0, 0), QPoint(xmax, 0), QPoint(xmax, ymax),
+                   QPoint(0, ymax), QPoint(0, 0))
+    pp = QPainterPath()
+    pp.addRect(0, 0, xmax, ymax)
+    p.drawPath(pp)
+    p.save()
+    for i in xrange(3):
+        col = [0, 0, 0, 200]
+        col[i] = 255
+        p.setOpacity(0.3)
+        p.fillRect(0, 0, xmax/10, xmax/10, QBrush(QColor(*col)))
+        p.setOpacity(1)
+        p.drawRect(0, 0, xmax/10, xmax/10)
+        p.translate(xmax/10, xmax/10)
+        p.scale(1, 1.5)
+    p.restore()
+
+    # p.scale(2, 2)
+    # p.rotate(45)
+    p.drawPixmap(0, 0, xmax/4, xmax/4, QPixmap(I('library.png')))
+    p.drawRect(0, 0, xmax/4, xmax/4)
+
+    f = p.font()
+    f.setPointSize(20)
+    # f.setLetterSpacing(f.PercentageSpacing, 200)
+    f.setUnderline(True)
+    # f.setOverline(True)
+    # f.setStrikeOut(True)
+    f.setFamily('Calibri')
+    p.setFont(f)
+    # p.setPen(QColor(0, 0, 255))
+    # p.scale(2, 2)
+    # p.rotate(45)
+    p.drawText(QPoint(xmax/3.9, 30), 'Some—text not By’s ū --- Д AV ff ff')
+
+    b = QBrush(Qt.HorPattern)
+    b.setColor(QColor(Qt.blue))
+    pix = QPixmap(I('console.png'))
+    w = xmax/4
+    p.fillRect(0, ymax/3, w, w, b)
+    p.fillRect(xmax/3, ymax/3, w, w, QBrush(pix))
+    x, y = 2*xmax/3, ymax/3
+    p.drawTiledPixmap(QRectF(x, y, w, w), pix, QPointF(10, 10))
+
+    x, y = 1, ymax/1.9
+    g = QLinearGradient(QPointF(x, y), QPointF(x+w, y+w))
+    g.setColorAt(0, QColor('#00f'))
+    g.setColorAt(1, QColor('#fff'))
+    p.fillRect(x, y, w, w, QBrush(g))
+
+def run(dev, func):
+    p = QPainter(dev)
+    if isinstance(dev, PdfDevice):
+        dev.init_page()
+    xmax, ymax = p.viewport().width(), p.viewport().height()
+    try:
+        func(p, xmax, ymax)
+    finally:
+        p.end()
+    if isinstance(dev, PdfDevice):
+        if dev.engine.errors_occurred:
+            raise SystemExit(1)
+
+def brush(p, xmax, ymax):
+    x = xmax/3
+    y = 0
+    w = xmax/2
+    pix = QPixmap(I('console.png'))
+    p.fillRect(x, y, w, w, QBrush(pix))
+
+    p.fillRect(0, y+xmax/1.9, w, w, QBrush(pix))
+
+def pen(p, xmax, ymax):
+    pix = QPixmap(I('console.png'))
+    pen = QPen(QBrush(pix), 60)
+    p.setPen(pen)
+    p.drawRect(0, xmax/3, xmax/3, xmax/2)
+
+def text(p, xmax, ymax):
+    f = p.font()
+    f.setPixelSize(24)
+    f.setFamily('Candara')
+    p.setFont(f)
+    p.drawText(QPoint(0, 100),
+               'Test intra glyph spacing ffagain imceo')
+
+def main():
+    app = QApplication([])
+    app
+    tdir = os.path.abspath('.')
+    pdf = os.path.join(tdir, 'painter.pdf')
+    func = full
+    dpi = 100
+    with open(pdf, 'wb') as f:
+        dev = PdfDevice(f, xdpi=dpi, ydpi=dpi, compress=False)
+        img = QImage(dev.width(), dev.height(),
+                     QImage.Format_ARGB32_Premultiplied)
+        img.setDotsPerMeterX(dpi*39.37)
+        img.setDotsPerMeterY(dpi*39.37)
+        img.fill(Qt.white)
+        run(dev, func)
+    run(img, func)
+    path = os.path.join(tdir, 'painter.png')
+    img.save(path)
+    print('PDF written to:', pdf)
+    print('Image written to:', path)
+
+if __name__ == '__main__':
+    main()

@@ -33,7 +33,10 @@ from calibre.utils.config import prefs
 from calibre.utils.logging import Log

 class NoSupportedInputFormats(Exception):
-    pass
+
+    def __init__(self, available_formats):
+        Exception.__init__(self)
+        self.available_formats = available_formats

 def sort_formats_by_preference(formats, prefs):
     uprefs = [x.upper() for x in prefs]
@@ -86,7 +89,7 @@ def get_supported_input_formats_for_book(db, book_id):
     input_formats = set([x.lower() for x in supported_input_formats()])
     input_formats = sorted(available_formats.intersection(input_formats))
     if not input_formats:
-        raise NoSupportedInputFormats
+        raise NoSupportedInputFormats(tuple(x for x in available_formats if x))
     return input_formats
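Note: with the exception now carrying the formats that were actually present, callers can tell "no files at all" apart from "files in unsupported formats only" (the conversion dialog later in this commit does exactly that). A self-contained mirror of the pattern (the supported list here is a stand-in, not calibre's real one):

    class NoSupportedInputFormats(Exception):
        def __init__(self, available_formats):
            Exception.__init__(self)
            self.available_formats = available_formats

    def convertible(available, supported=('epub', 'mobi', 'azw3')):
        # Mirrors get_supported_input_formats_for_book: intersect, else raise
        ok = sorted(set(x.lower() for x in available) & set(supported))
        if not ok:
            raise NoSupportedInputFormats(tuple(x for x in available if x))
        return ok

    try:
        convertible(['DJVU'])
    except NoSupportedInputFormats as e:
        print('Only unsupported formats: ' + ', '.join(e.available_formats))
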
@@ -372,7 +372,7 @@ class Series(Base):

         self.widgets.append(QLabel('&'+self.col_metadata['name']+_(' index:'), parent))
         w = QDoubleSpinBox(parent)
-        w.setRange(-100., float(100000000))
+        w.setRange(-10000., float(100000000))
         w.setDecimals(2)
         w.setSingleStep(1)
         self.idx_widget=w
@@ -5,7 +5,7 @@ __license__ = 'GPL v3'
 __copyright__ = '2010, Kovid Goyal <kovid@kovidgoyal.net>'
 __docformat__ = 'restructuredtext en'

-import functools, re, os, traceback, errno
+import functools, re, os, traceback, errno, time
 from collections import defaultdict

 from PyQt4.Qt import (QAbstractTableModel, Qt, pyqtSignal, QIcon, QImage,
@@ -1419,7 +1419,11 @@ class DeviceBooksModel(BooksModel):  # {{{
             return QVariant(human_readable(size))
         elif cname == 'timestamp':
             dt = self.db[self.map[row]].datetime
-            dt = dt_factory(dt, assume_utc=True, as_utc=False)
+            try:
+                dt = dt_factory(dt, assume_utc=True, as_utc=False)
+            except OverflowError:
+                dt = dt_factory(time.gmtime(), assume_utc=True,
+                                as_utc=False)
             return QVariant(strftime(TIME_FMT, dt.timetuple()))
         elif cname == 'collections':
             tags = self.db[self.map[row]].device_collections
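Note: the OverflowError guard matters because device databases (see the Kobo invalid-dates fix in this release) can report dates far outside the representable range; the model now falls back to the current time instead of crashing. The same idea without calibre's dt_factory:

    import time
    from datetime import datetime

    def device_timestamp(tt):
        # tt: a time-tuple-like sequence reported by the device
        try:
            return datetime(*tt[:6])
        except (OverflowError, ValueError):
            return datetime(*time.gmtime()[:6])  # fall back to now

    print(device_timestamp((2013, 1, 4, 12, 0, 0)))   # valid date
    print(device_timestamp((999999, 1, 1, 0, 0, 0)))  # bogus year -> now
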
@@ -1094,6 +1094,9 @@ class RatingEdit(QSpinBox):  # {{{
         db.set_rating(id_, 2*self.current_val, notify=False, commit=False)
         return True

+    def zero(self):
+        self.setValue(0)
+
 # }}}

 class TagsEdit(EditWithComplete):  # {{{
@@ -181,6 +181,11 @@ class MetadataSingleDialogBase(ResizableDialog):
         self.basic_metadata_widgets.append(self.comments)

         self.rating = RatingEdit(self)
+        self.clear_ratings_button = QToolButton(self)
+        self.clear_ratings_button.setToolTip(_('Clear rating'))
+        self.clear_ratings_button.setIcon(QIcon(I('trash.png')))
+        self.clear_ratings_button.clicked.connect(self.rating.zero)
+
         self.basic_metadata_widgets.append(self.rating)

         self.tags = TagsEdit(self)
@@ -659,8 +664,9 @@ class MetadataSingleDialog(MetadataSingleDialogBase):  # {{{
                 QSizePolicy.Expanding)
         l.addItem(self.tabs[0].spc_one, 1, 0, 1, 3)
         sto(self.cover.buttons[-1], self.rating)
-        create_row2(1, self.rating)
-        sto(self.rating, self.tags_editor_button)
+        create_row2(1, self.rating, self.clear_ratings_button)
+        sto(self.rating, self.clear_ratings_button)
+        sto(self.clear_ratings_button, self.tags_editor_button)
         sto(self.tags_editor_button, self.tags)
         create_row2(2, self.tags, self.clear_tags_button, front_button=self.tags_editor_button)
         sto(self.clear_tags_button, self.paste_isbn_button)
@@ -780,7 +786,7 @@ class MetadataSingleDialogAlt1(MetadataSingleDialogBase):  # {{{
                 button=self.clear_series_button, icon='trash.png')
         create_row(5, self.series_index, self.tags)
         create_row(6, self.tags, self.rating, button=self.clear_tags_button)
-        create_row(7, self.rating, self.pubdate)
+        create_row(7, self.rating, self.pubdate, button=self.clear_ratings_button)
         create_row(8, self.pubdate, self.publisher,
                 button=self.pubdate.clear_button, icon='trash.png')
         create_row(9, self.publisher, self.languages)
@@ -917,7 +923,7 @@ class MetadataSingleDialogAlt2(MetadataSingleDialogBase):  # {{{
                 button=self.clear_series_button, icon='trash.png')
         create_row(5, self.series_index, self.tags)
         create_row(6, self.tags, self.rating, button=self.clear_tags_button)
-        create_row(7, self.rating, self.pubdate)
+        create_row(7, self.rating, self.pubdate, button=self.clear_ratings_button)
         create_row(8, self.pubdate, self.publisher,
                 button=self.pubdate.clear_button, icon='trash.png')
         create_row(9, self.publisher, self.languages)
@@ -7,8 +7,10 @@ __license__ = 'GPL v3'
 __copyright__ = '2012, Kovid Goyal <kovid at kovidgoyal.net>'
 __docformat__ = 'restructuredtext en'

-from PyQt4.Qt import (QLabel, QVBoxLayout, QListWidget, QListWidgetItem, Qt)
+from PyQt4.Qt import (QLabel, QVBoxLayout, QListWidget, QListWidgetItem, Qt,
+    QIcon)

+from calibre.customize.ui import enable_plugin
 from calibre.gui2.preferences import ConfigWidgetBase, test_widget

 class ConfigWidget(ConfigWidgetBase):
@@ -31,6 +33,18 @@ class ConfigWidget(ConfigWidgetBase):
         f.itemChanged.connect(self.changed_signal)
         f.itemDoubleClicked.connect(self.toggle_item)

+        self.la2 = la = QLabel(_(
+            'The list of device plugins you have disabled. Uncheck an entry '
+            'to enable the plugin. calibre cannot detect devices that are '
+            'managed by disabled plugins.'))
+        la.setWordWrap(True)
+        l.addWidget(la)
+
+        self.device_plugins = f = QListWidget(f)
+        l.addWidget(f)
+        f.itemChanged.connect(self.changed_signal)
+        f.itemDoubleClicked.connect(self.toggle_item)
+
     def toggle_item(self, item):
         item.setCheckState(Qt.Checked if item.checkState() == Qt.Unchecked else
                 Qt.Unchecked)
@@ -46,6 +60,17 @@ class ConfigWidget(ConfigWidgetBase):
             item.setCheckState(Qt.Checked)
         self.devices.blockSignals(False)

+        self.device_plugins.blockSignals(True)
+        for dev in self.gui.device_manager.disabled_device_plugins:
+            n = dev.get_gui_name()
+            item = QListWidgetItem(n, self.device_plugins)
+            item.setData(Qt.UserRole, dev)
+            item.setFlags(Qt.ItemIsEnabled|Qt.ItemIsUserCheckable|Qt.ItemIsSelectable)
+            item.setCheckState(Qt.Checked)
+            item.setIcon(QIcon(I('plugins.png')))
+        self.device_plugins.sortItems()
+        self.device_plugins.blockSignals(False)
+
     def restore_defaults(self):
         if self.devices.count() > 0:
             self.devices.clear()
@@ -63,6 +88,12 @@ class ConfigWidget(ConfigWidgetBase):
         for dev, bl in devs.iteritems():
             dev.set_user_blacklisted_devices(bl)

+        for i in xrange(self.device_plugins.count()):
+            e = self.device_plugins.item(i)
+            dev = e.data(Qt.UserRole).toPyObject()
+            if e.checkState() == Qt.Unchecked:
+                enable_plugin(dev)
+
         return True  # Restart required

 if __name__ == '__main__':
@@ -273,7 +273,7 @@
      <widget class="QLabel" name="label_13">
       <property name="text">
        <string><p>Remember to leave calibre running as the server only runs as long as calibre is running.
-<p>To connect to the calibre server from your device you should use a URL of the form <b>http://myhostname:8080</b> as a new catalog in the Stanza reader on your iPhone. Here myhostname should be either the fully qualified hostname or the IP address of the computer calibre is running on.</string>
+<p>To connect to the calibre server from your device you should use a URL of the form <b>http://myhostname:8080</b>. Here myhostname should be either the fully qualified hostname or the IP address of the computer calibre is running on. If you want to access the server from anywhere in the world, you will have to setup port forwarding for it on your router.</string>
       </property>
       <property name="wordWrap">
        <bool>true</bool>
@@ -6,6 +6,7 @@ __license__ = 'GPL 3'
 __copyright__ = '2011, John Schember <john@nachtimwald.com>'
 __docformat__ = 'restructuredtext en'

+import re
 import urllib
 from contextlib import closing

@@ -50,12 +51,17 @@ class BNStore(BasicStoreConfig, StorePlugin):
                 if not id:
                     continue

-                cover_url = ''.join(data.xpath('.//img[contains(@class, "product-image")]/@src'))
+                cover_url = ''
+                cover_id = ''.join(data.xpath('.//img[contains(@class, "product-image")]/@id'))
+                m = re.search(r"%s'.*?srcUrl: '(?P<iurl>.*?)'.*?}" % cover_id, raw)
+                if m:
+                    cover_url = m.group('iurl')

                 title = ''.join(data.xpath('descendant::p[@class="title"]//span[@class="name"]//text()')).strip()
-                if not title: continue
+                if not title:
+                    continue

-                author = ', '.join(data.xpath('.//ul[@class="contributors"]//a[@class="subtle"]//text()')).strip()
+                author = ', '.join(data.xpath('.//ul[contains(@class, "contributors")]//a[contains(@class, "subtle")]//text()')).strip()
                 price = ''.join(data.xpath('.//a[contains(@class, "bn-price")]//text()'))

                 counter -= 1
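Note: B&N stopped putting the cover URL in the img src; it now sits in a javascript blob keyed by the image element's id, hence the regex. A runnable illustration against simplified stand-in markup (the real page source is far messier):

    import re

    raw = "imgCfg = { pImage42', srcUrl: 'http://img.example.com/cover42.jpg' }"
    cover_id = 'pImage42'
    m = re.search(r"%s'.*?srcUrl: '(?P<iurl>.*?)'.*?}" % cover_id, raw)
    cover_url = m.group('iurl') if m else ''
    print(cover_url)  # http://img.example.com/cover42.jpg
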
@@ -59,7 +59,7 @@ class GoogleBooksStore(BasicStoreConfig, StorePlugin):
         counter = max_results
         with closing(br.open(url, timeout=timeout)) as f:
             doc = html.fromstring(f.read())
-            for data in doc.xpath('//ol[@id="rso"]/li'):
+            for data in doc.xpath('//ol/li'):
                 if counter <= 0:
                     break

@@ -68,7 +68,7 @@ class GoogleBooksStore(BasicStoreConfig, StorePlugin):
                     continue

                 title = ''.join(data.xpath('.//h3/a//text()'))
-                authors = data.xpath('.//div[@class="f"]//a//text()')
+                authors = data.xpath('.//span[contains(@class, "f")]//a//text()')
                 while authors and authors[-1].strip().lower() in ('preview', 'read', 'more editions'):
                     authors = authors[:-1]
                 if not authors:
src/calibre/gui2/store/stores/nook_uk_plugin.py (new file, 75 lines)
@@ -0,0 +1,75 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import (unicode_literals, division, absolute_import, print_function)
+
+__license__ = 'GPL 3'
+__copyright__ = '2012, John Schember <john@nachtimwald.com>'
+__docformat__ = 'restructuredtext en'
+
+import re
+import urllib
+from contextlib import closing
+
+from lxml import html
+
+from PyQt4.Qt import QUrl
+
+from calibre import browser, url_slash_cleaner
+from calibre.gui2 import open_url
+from calibre.gui2.store import StorePlugin
+from calibre.gui2.store.basic_config import BasicStoreConfig
+from calibre.gui2.store.search_result import SearchResult
+from calibre.gui2.store.web_store_dialog import WebStoreDialog
+
+class NookUKStore(BasicStoreConfig, StorePlugin):
+
+    def open(self, parent=None, detail_item=None, external=False):
+        url = "http://uk.nook.com"
+
+        if external or self.config.get('open_external', False):
+            open_url(QUrl(url_slash_cleaner(detail_item if detail_item else url)))
+        else:
+            d = WebStoreDialog(self.gui, url, parent, detail_item)
+            d.setWindowTitle(self.name)
+            d.set_tags(self.config.get('tags', ''))
+            d.exec_()
+
+    def search(self, query, max_results=10, timeout=60):
+        url = u'http://uk.nook.com/s/%s?s%%5Bdref%%5D=1&s%%5Bkeyword%%5D=%s' % (query.replace(' ', '-'), urllib.quote(query))
+
+        br = browser()
+
+        counter = max_results
+        with closing(br.open(url, timeout=timeout)) as f:
+            raw = f.read()
+            doc = html.fromstring(raw)
+            for data in doc.xpath('//ul[contains(@class, "product_list")]/li'):
+                if counter <= 0:
+                    break
+
+                id = ''.join(data.xpath('.//span[contains(@class, "image")]/a/@href'))
+                if not id:
+                    continue
+
+                cover_url = ''.join(data.xpath('.//span[contains(@class, "image")]//img/@data-src'))
+
+                title = ''.join(data.xpath('.//div[contains(@class, "title")]//text()')).strip()
+                if not title:
+                    continue
+
+                author = ', '.join(data.xpath('.//div[contains(@class, "contributor")]//a/text()')).strip()
+                price = ''.join(data.xpath('.//div[contains(@class, "action")]//a//text()')).strip()
+                price = re.sub(r'[^\d.,£]', '', price)
+
+                counter -= 1
+
+                s = SearchResult()
+                s.cover_url = cover_url
+                s.title = title.strip()
+                s.author = author.strip()
+                s.price = price.strip()
+                s.detail_item = 'http://uk.nook.com/' + id.strip()
+                s.drm = SearchResult.DRM_UNKNOWN
+                s.formats = 'Nook'
+
+                yield s
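Note: the price scraped from the Nook UK button text contains labels and currency noise; the character-class substitution keeps only digits, separators and the pound sign. In isolation:

    # -*- coding: utf-8 -*-
    import re

    def clean_price(text):
        # 'Buy now at £4.99!' -> '£4.99'
        return re.sub(u'[^\\d.,£]', u'', text)

    print(clean_price(u'Buy now at £4.99!'))
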
@@ -76,7 +76,7 @@ class SmashwordsStore(BasicStoreConfig, StorePlugin):

                 title = ''.join(data.xpath('//a[@class="bookTitle"]/text()'))
                 subnote = ''.join(data.xpath('//span[@class="subnote"]/text()'))
-                author = ''.join(data.xpath('//span[@class="subnote"]/a/text()'))
+                author = ''.join(data.xpath('//span[@class="subnote"]//a[1]//text()'))
                 if '$' in subnote:
                     price = subnote.partition('$')[2]
                     price = price.split(u'\xa0')[0]
@@ -88,19 +88,34 @@ def convert_single_ebook(parent, db, book_ids, auto_conversion=False,  # {{{

             changed = True
             d.break_cycles()
-        except NoSupportedInputFormats:
-            bad.append(book_id)
+        except NoSupportedInputFormats as nsif:
+            bad.append((book_id, nsif.available_formats))

     if bad and show_no_format_warning:
+        if len(bad) == 1 and not bad[0][1]:
+            title = db.title(bad[0][0], True)
+            warning_dialog(parent, _('Could not convert'), '<p>'+
+                _('Could not convert <b>%s</b> as it has no ebook files. If you '
+                  'think it should have files, but calibre is not finding '
+                  'them, that is most likely because you moved the book\'s '
+                  'files around outside of calibre. You will need to find those files '
+                  'and re-add them to calibre.')%title, show=True)
+        else:
             res = []
-        for id in bad:
-            title = db.title(id, True)
-            res.append('%s'%title)
+            for id, available_formats in bad:
+                title = db.title(id, True)
+                if available_formats:
+                    msg = _('No supported formats (Available formats: %s)')%(
+                        ', '.join(available_formats))
+                else:
+                    msg = _('This book has no actual ebook files')
+                res.append('%s - %s'%(title, msg))

             msg = '%s' % '\n'.join(res)
             warning_dialog(parent, _('Could not convert some books'),
-                _('Could not convert %(num)d of %(tot)d books, because no suitable source'
-                  ' format was found.') % dict(num=len(res), tot=total),
+                _('Could not convert %(num)d of %(tot)d books, because no supported source'
+                  ' formats were found.') % dict(num=len(res), tot=total),
                 msg).exec_()

     return jobs, changed, bad
@@ -441,6 +441,10 @@ class BrowseServer(object):
             cat_len = len(category)
             if not (len(ucat) > cat_len and ucat.startswith(category+'.')):
                 continue
+            if ucat in self.icon_map:
+                icon = '_'+quote(self.icon_map[ucat])
+            else:
                 icon = category_icon_map['user:']
             # we have a subcategory. Find any further dots (further subcats)
             cat_len += 1

File diff suppressed because it is too large (this notice repeated for 56 files, mostly the recipe and translation changes in this release)
Some files were not shown because too many files have changed in this diff